Meta has begun deploying tracking software on the work computers of its U.S. employees, capturing mouse movements, clicks, keystrokes, and screenshots. The initiative, disclosed by the company's Superintelligence Labs team, is designed to generate training data for AI models that still struggle with rudimentary computer-use behaviors: navigating dropdown menus, toggling between applications, using keyboard shortcuts. Meta stated that safeguards are in place to protect sensitive content, though the company has not detailed the scope or mechanics of those protections.

The project sits at the intersection of two trends that have been accelerating across the technology industry: the race to build autonomous AI agents capable of performing knowledge work, and the growing willingness of large employers to treat internal operations as a data source for model training.

The agent race and the data bottleneck

Over the past two years, the major AI laboratories have shifted significant resources toward building so-called computer-use agents — models that can operate software interfaces the way a human would, clicking buttons, filling forms, switching tabs, and executing multi-step workflows. OpenAI, Google DeepMind, and Anthropic have each demonstrated prototypes capable of performing basic desktop tasks, and several startups have entered the space with narrower automation tools aimed at specific enterprise workflows.

The central challenge is not architectural but empirical. Large language models learn language from vast text corpora scraped from the open web. No equivalent public dataset exists for how humans actually use computers. Screen recordings, mouse trajectories, and keystroke sequences are inherently private, generated behind login screens and inside corporate networks. Synthetic data and simulated environments can approximate some of this behavior, but they tend to produce brittle agents that fail when confronted with the unpredictable layouts and edge cases of real software. Meta's decision to instrument its own workforce represents a direct attempt to close that data gap using proprietary, high-fidelity behavioral traces.

The strategic logic is straightforward. A company with more than 70,000 employees performing a wide range of white-collar tasks — engineering, project management, human resources, finance, content moderation — generates an enormous volume of exactly the kind of interaction data these models need. If the resulting agents reach production quality, Meta could deploy them internally to reduce operational costs and externally as commercial products, potentially integrated into its existing suite of business tools.

Workplace surveillance and the consent question

Employee monitoring is not new. Time-tracking software, email scanning, and productivity dashboards have been fixtures of corporate IT for more than a decade, and their adoption surged during the shift to remote work. What distinguishes Meta's program is the purpose: the data is not being collected to evaluate individual performance but to train general-purpose AI systems whose applications may extend well beyond the company.

That distinction raises questions that existing workplace privacy frameworks were not designed to answer. In the United States, employers generally have broad legal authority to monitor activity on company-owned devices, particularly when employees are notified. But the reuse of behavioral data for AI training introduces a different calculus. Employees contributing interaction data are, in effect, providing labeled demonstrations of skilled computer use — a resource with commercial value that accrues to the employer. Whether current disclosure and consent practices are adequate for that kind of value transfer is an open question, and one that labor advocates and regulators in the European Union have already begun to examine in adjacent contexts.

Meta has said that safeguards protect sensitive content captured during monitoring, but it has not publicly specified whether employees can opt out, whether the data is anonymized before use, or how long recordings are retained. The absence of those details makes it difficult to assess the program's boundaries.

The broader tension is structural. Companies building AI agents need naturalistic human-computer interaction data at scale, and their own workforces are the most accessible source. If Meta's approach proves effective, other large employers are likely to follow. The question is whether the norms governing workplace data collection will evolve at the same pace as the commercial incentives to expand it — or whether the gap between the two will widen before any regulatory framework catches up.

With reporting from Fortune.