Big brother is watching

Report: Meta will train AI agents by tracking employees’ mouse, keyboard use

Move highlights the difficulty of finding high-quality interactive training data.

Kyle Orland
Show me how you use this strange human tool called the "mouse." Credit: Getty Images

Meta will begin tracking the mouse movements, clicks, and keystrokes of its US employees to generate high-quality training data for future AI agents, Reuters reports.

The news organization cites internal memos posted by the Meta Superintelligence Labs team in reporting on the new Model Capability Initiative employee-tracking software. That software will operate on specific work-related apps and websites and also make use of periodic screenshots to provide context for the AI training, according to the memo.

“This is where all Meta employees can help our models get better simply by doing their daily work,” the memo reads, in part, Reuters reports.

Meta spokesperson Andy Stone told Reuters that the collected training data will help Meta’s AI agents with tasks they sometimes struggle with, including “things like mouse movements, clicking buttons, and navigating dropdown menus.”

“If we’re building agents to help people complete everyday tasks using computers, our models need real examples of how we actually use them,” Stone said, adding that the collected data would not be used to evaluate employees.

While Meta’s US employees will have their actions logged by the new software, similar monitoring of European Meta employees would likely run afoul of a number of national laws limiting how employers can track worker activity. Meta has already faced potential legal problems in the European Union for requiring users of its social media services to opt out of having their content used for AI training, rather than affirmatively opt in.

Getting on the training train

The Internet contains enormous amounts of text, images, and video that can be used to train generative AI models (within some important and heavily contested legal limits). But obtaining high-quality training data for physical actions or virtual computer interactions has proven more difficult. Some companies have resorted to complex physics simulations or elaborate hand-tracking prosthetics to create human interaction data that an AI robotics model can understand.

Meta’s move comes as major tech companies, including OpenAI, Anthropic, Google, and Perplexity, have recently introduced new tools that let AI agents take over your computer or web browser to complete certain tasks. Ars’ initial tests of some of these consumer offerings showed a surprising ability to convert many natural-language commands into virtual actions, with some significant limitations and brittleness regarding long-term automated tasks.

Meta has also reportedly begun setting AI usage goals among some employees, including coders and engineers. The company is also reportedly planning to start laying off up to 10 percent of its global workforce starting in May.

Kyle Orland Senior Gaming Editor
Kyle Orland has been the Senior Gaming Editor at Ars Technica since 2012, writing primarily about the business, tech, and culture behind video games. He has journalism and computer science degrees from the University of Maryland. He once wrote a whole book about Minesweeper.