
This documentation is for the new preview UI. It’s still being refined and is subject to change. For documentation for the old UI, see Knowledge Base.


With AI usage tracking, you can see where and how AI services and tools are being used across your environment, based on signals collected by your IT Agents and IT sensors.

Capturing AI-related activity from Windows, Linux, and macOS devices is disabled by default and must be explicitly enabled per discovery action.

Enable AI tracking per discovery action

You can enable or disable AI usage tracking for each discovery action, giving you control over where AI usage is monitored in your environment.

To enable or disable AI tracking:

  1. In your Lansweeper Site, go to Discovery > Actions.

  2. Select the discovery action you want to configure.

  3. On the discovery action’s details page, enable or disable AI usage tracking.

You can keep track of discovered AI services and tools by going to Dashboards > All dashboards > AI Asset Management.

Scope of AI usage tracking

Below is the scope of signals typically captured when AI usage tracking is enabled.

Network & communication

  • Network traffic analysis: detects connections to known AI services (e.g. OpenAI, Anthropic, Google AI, Microsoft Copilot), including 14-day historical tracking.

  • Browser history scanning: identifies visits to AI platforms across Chrome, Firefox, and Edge.

  • Event log analysis (Windows): monitors Windows event logs for AI-related network activity indicators.

Software & applications

  • Installed software detection: scans for locally installed AI applications (e.g. ChatGPT, Cursor, Ollama, LM Studio, Stable Diffusion, TensorFlow, PyTorch).

  • Browser extension detection: identifies AI-powered browser extensions (e.g. GitHub Copilot, ChatGPT extensions).

  • IDE/plugin detection: detects AI coding assistants in IDEs (e.g. VS Code, Visual Studio, IntelliJ, and other JetBrains IDEs).
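Installed software detection boils down to matching inventory entries against known AI application names. A minimal sketch, assuming a keyword list of the examples above (the real agent signatures are not published):

```python
# Illustrative keyword list based on the AI applications named in this
# article; the agent's actual detection rules may differ.
AI_APP_KEYWORDS = (
    "chatgpt", "cursor", "ollama", "lm studio",
    "stable diffusion", "tensorflow", "pytorch",
)

def flag_ai_software(installed: list[str]) -> list[str]:
    """Return names from an installed-software inventory that contain
    a known AI application keyword (case-insensitive substring match)."""
    return [
        name for name in installed
        if any(kw in name.lower() for kw in AI_APP_KEYWORDS)
    ]
```

Substring matching keeps version suffixes like "Ollama 0.3.12" detectable, at the cost of occasional false positives that a real signature set would handle more precisely.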

Local AI infrastructure

  • Local AI server detection: detects running AI model servers (e.g. Ollama, LM Studio, GPT4All, Jan, ComfyUI).

  • Browser-cached AI model indicators: scans browser storage (e.g. IndexedDB/LocalStorage) for indicators of cached AI models and WebGPU inference components.
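One common way to detect running local AI model servers is to probe their documented default ports on localhost. The sketch below does exactly that; the port numbers are the tools' published defaults (an assumption — a server bound to a non-default port would not be found this way), and this is not necessarily how the IT Agent performs the check.

```python
import socket

# Documented default ports of common local AI model servers (assumed).
DEFAULT_AI_PORTS = {
    11434: "Ollama",
    1234: "LM Studio",
    4891: "GPT4All",
    1337: "Jan",
    8188: "ComfyUI",
}

def probe_local_ai_servers(timeout: float = 0.2) -> list[str]:
    """Return the names of local AI servers that accept a TCP
    connection on their default port on 127.0.0.1."""
    found = []
    for port, name in DEFAULT_AI_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 when the port accepts the connection.
            if s.connect_ex(("127.0.0.1", port)) == 0:
                found.append(name)
    return found
```

On a machine with none of these servers running, the function simply returns an empty list.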

Security & compliance indicators

  • API key/credential indicators: identifies stored AI service credentials (e.g. in environment variables or configuration files).

  • Cached AI data indicators: detects locally stored AI model data and inference artifacts.
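Credential-indicator scanning of environment variables can be sketched as a name-pattern match. The variable-name conventions below (e.g. `OPENAI_API_KEY`, `HF_TOKEN`) follow the vendors' SDK documentation, but the pattern is illustrative, not Lansweeper's actual rule set; note that only variable names are reported, never their values.

```python
import re

# Common AI credential variable-name conventions (assumed, not exhaustive).
AI_KEY_PATTERN = re.compile(
    r"^(OPENAI|ANTHROPIC|GOOGLE_AI|GEMINI|HUGGINGFACE|HF)_.*(KEY|TOKEN)$"
)

def find_ai_credential_vars(env: dict[str, str]) -> list[str]:
    """Return the NAMES of environment variables that look like stored
    AI service credentials. Values are never inspected or returned,
    so the scan itself does not expose the secrets."""
    return sorted(name for name in env if AI_KEY_PATTERN.match(name))
```

For example, an environment containing `OPENAI_API_KEY` and `HF_TOKEN` would be flagged on those two names, while unrelated variables such as `PATH` are ignored.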