Four days. That’s the gap between Meta announcing 8,000 layoffs and Reuters revealing that the company is recording every keystroke, mouse movement, and screen action its employees make — to train AI that does their jobs.
You can’t make this stuff up.
The Timeline That Says Everything
April 17: Reuters reports Meta plans to cut 10% of its 78,865-person workforce starting May 20, with more cuts planned for late 2026.
April 21: Reuters breaks another story — Meta is deploying its Model Capability Initiative (MCI), a tracking tool that captures mouse movements, keystrokes, and periodic screenshots from every U.S. employee’s work computer.
The data feeds directly into training AI agents that can navigate computers autonomously. The stated goal, per CTO Andrew Bosworth’s internal memo: “The vision we are building towards is one where our agents primarily do the work and our role is to direct, review and help them improve.”
Read that sequence again. Study how humans work. Train the AI. Fire the humans.
What MCI Actually Captures
This isn’t your typical workplace monitoring. MCI tracks:
- Every mouse movement — hover patterns, scroll behavior, click targets
- Every keystroke — everything typed, everywhere
- Screen snapshots — periodic captures of whatever’s on display
The tracking runs across internal Meta tools and hundreds of external sites: Google, LinkedIn, GitHub, Slack, Atlassian, Wikipedia. The original list even included competitor tools like ChatGPT and Claude before someone apparently thought better of it.
Employees described the program as “very dystopian” in internal messages obtained by CNBC. Multiple workers flagged that screenshots could capture passwords, immigration documents, health information, and personal family details.
Meta says the data won’t be used for performance reviews and that “safeguards” exist. The company hasn’t said what those safeguards are. When your employer is simultaneously filming your work and handing you a pink slip, “trust us” doesn’t quite land.
Why Your Keystrokes Are Gold
Here’s the technical logic: AI models are surprisingly terrible at using computers the way humans do. They can write essays and generate code, but navigating a dropdown menu or filling out a multi-step form? They fumble.
The fix requires computer use training data — real recordings of humans interacting with interfaces. Same principle as self-driving cars needing millions of hours of human driving footage. Except instead of teaching a car to drive, you’re teaching a bot to do your specific job.
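To make the mechanism concrete, here is a minimal sketch of what a computer-use training record might look like. The field names and structure are invented for illustration, not taken from Meta's MCI; the general idea is pairing low-level UI events with screen context and a task description, so a model can learn to imitate the interaction end to end.

```python
from dataclasses import dataclass, field, asdict
from typing import Literal

# Hypothetical record shapes for "computer use" training data.
# Each trace ties a natural-language goal to the raw events a human
# produced while accomplishing it, plus screenshots for visual context.

@dataclass
class UIEvent:
    kind: Literal["click", "keypress", "scroll"]
    timestamp_ms: int
    target: str          # e.g. a CSS selector or accessibility label
    payload: str = ""    # text typed, scroll delta, etc.

@dataclass
class Trace:
    task: str                   # what the human was trying to do
    screenshot_refs: list[str]  # periodic screen captures
    events: list[UIEvent] = field(default_factory=list)

# A toy example: filling one field of a web form and submitting.
trace = Trace(
    task="Submit an expense report",
    screenshot_refs=["frame_0001.png"],
    events=[
        UIEvent("click", 0, "input#amount"),
        UIEvent("keypress", 120, "input#amount", "42.50"),
        UIEvent("click", 900, "button#submit"),
    ],
)

print(asdict(trace)["events"][1]["payload"])  # the typed text: "42.50"
```

Millions of traces like this, collected across real tools, are exactly the kind of corpus a computer-use agent needs, which is why routine work behavior suddenly has standalone value.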
Meta brought in Scale AI’s Alexandr Wang to lead its SuperIntelligence Labs. The MCI data feeds that effort directly. This isn’t theoretical future planning — it’s active pipeline construction.
The Pattern Nobody Can Deny
Meta isn’t an outlier. It’s just the most transparent (involuntarily, via leaked memos).
Amazon has trimmed 30,000 corporate employees in recent months. Block announced it would replace entire teams with AI agents. The playbook is identical everywhere:
- Capture how humans do computer work
- Train AI on that behavioral data
- Deploy agents to do it autonomously
- Cut the humans
McKinsey estimated that up to 30% of hours worked globally could be automated by 2030. Programs like MCI are the mechanism — one keystroke log at a time.
You’re Training Your Replacement Every Day
Bosworth’s memo contains a line that deserves to be carved in stone: agents will “automatically see where we felt the need to intervene so they can be better next time.”
Every correction you make teaches the AI to not need you. Every workaround you demonstrate gets encoded into a model. The act of doing your job well is now indistinguishable from training your successor.
This fundamentally breaks the traditional employer-employee relationship. You’re not just exchanging labor for wages anymore. You’re generating training data that has independent, long-term value to the company — value that persists after you’re gone.
The Legal Vacuum
U.S. federal law broadly permits workplace monitoring on company devices. Most states don’t even require employers to notify workers about surveillance software. The EU’s GDPR and AI Act offer stronger protections, which is likely why Meta limited MCI to U.S. employees.
But legal permissibility isn’t the same as ethical defensibility. We’re watching the creation of a new labor dynamic in real time, and the regulatory framework is years behind.
What This Means for You
Even if you don’t work at Meta:
Your digital work patterns are training data. How you navigate software, structure tasks, use shortcuts — all of it has value to AI companies. Microsoft, Google, Amazon, and Anthropic are all building computer-use agents. The data has to come from somewhere.
“Soft skills” are your moat. Creative judgment, relationship building, strategic thinking — the things AI still can’t replicate. Everything else is on the clock.
Assume the trend accelerates. Meta is just early and loud. The economics are too compelling for other companies to resist.
The question isn’t whether workplace AI automation is coming. It arrived. The question is whether we’ll build labor protections fast enough to matter — or whether the only people moving fast enough are the ones doing the replacing.
Right now, the smart money’s on the latter.