Your employer watching your screen isn’t new. But your employer recording every mouse movement, every keystroke, every dropdown menu selection — and feeding it all into an AI that might eventually replace you? That’s a different beast entirely.

Meta just crossed that line, and its own employees are furious about it.

What the Model Capability Initiative Actually Does

On April 21st, Meta’s Superintelligence Labs team rolled out the Model Capability Initiative (MCI) to all US-based employees and contractors. The tool captures mouse movements, click locations, keystrokes, and periodic screenshots — all piped directly into Meta’s AI training pipeline.

The internal memo laid it out plainly: AI models still lack basic computer skills like choosing from dropdowns and using keyboard shortcuts. To train agents that can handle everyday tasks, Meta needs real examples of how humans actually use computers.

Translation: Meta needs humans to teach AI how to be human — so it can eventually do human work without humans.

“How Do We Opt Out?” “You Don’t.”

The most popular comment on the internal announcement asked exactly what you’d expect: “How do we opt out?”

CTO Andrew Bosworth’s response: “There is no option to opt out of this on your work-provided laptop.”

That went over about as well as you’d imagine. Angry-face emoji reactions flooded the announcement. One anonymous employee told the BBC that having their “smallest actions on a computer being used to train AI” while expecting more layoffs “feels very dystopian.”

The timing is brutal. Meta has already cut around 2,000 employees in 2026. Its jobs page went from roughly 800 listings in March to seven this week. Seven.

Training Your Own Replacement

Here’s the quiet cruelty of it: Meta employees are generating the training data that will power AI agents designed to do their jobs.

In January, Zuckerberg said 2026 would be “the year that AI dramatically changes the way we work,” adding that projects requiring big teams could now be accomplished by “a single, very talented person.” Meta is spending $140 billion on AI in 2026 — nearly double its 2025 investment.

The employees aren’t paranoid. They’re doing math.

Why Meta Needs This Data (And Why It Matters Technically)

From a technical standpoint, MCI addresses a genuine problem. Current AI agents can write code and draft emails, but they stumble at navigating complex enterprise UIs — nested dropdown menus, modal dialogs, keyboard shortcuts. The reason is a training data gap: most models learn from text, not from demonstrations of how humans actually interact with software.
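Meta hasn't published what MCI's captured data looks like, but interaction demonstrations of this kind are typically stored as timestamped event streams. Here is a minimal sketch with an entirely hypothetical schema (the field names and event kinds are illustrative, not Meta's actual format):

```python
import json
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class InteractionEvent:
    """One low-level UI action in a demonstration trace (hypothetical schema)."""
    t_ms: int                      # milliseconds since session start
    kind: str                      # "move", "click", "key", "screenshot"
    x: Optional[int] = None        # cursor position, for move/click events
    y: Optional[int] = None
    key: Optional[str] = None      # key name, for keystroke events
    target: Optional[str] = None   # UI element under the cursor, if known

def to_jsonl(events):
    """Serialize a trace as JSON Lines, one event per line."""
    return "\n".join(json.dumps(asdict(e)) for e in events)

# A toy trace: the user opens a dropdown and selects an option.
trace = [
    InteractionEvent(t_ms=0, kind="move", x=412, y=230),
    InteractionEvent(t_ms=180, kind="click", x=412, y=230, target="dropdown:region"),
    InteractionEvent(t_ms=950, kind="click", x=412, y=310, target="option:us-east"),
    InteractionEvent(t_ms=1200, kind="key", key="Tab"),
]
```

Sequences like this, paired with screenshots, are what lets a model learn that a click at one position opens a menu and a second click makes a selection, rather than learning about UIs from text descriptions alone.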

Anthropic and OpenAI have tackled this with synthetic data and paid contractor demonstrations. Meta is taking a different route: turning its US workforce into an involuntary data-labeling operation.

The Two-Tier System

MCI is US-only. Rolling it out in Europe would almost certainly violate GDPR and national employee-monitoring laws requiring explicit consent. The EU has already gone after Meta for forcing users to opt out of AI training on social media data rather than opting in.

So American workers become training data while European workers get regulatory protection. It’s a pattern we’ve seen before in tech, but rarely this explicit.

What This Means for Everyone Else

Meta might be first, but it won’t be last. The logic is too compelling — every company building AI agents faces the same training data problem, and nothing beats authentic behavioral data from skilled professionals doing their actual jobs.

Expect other Big Tech companies to follow within months. Some will bury it in updated employee handbooks. Others will frame it as a voluntary “help us build better tools” initiative with enough social pressure to make it mandatory in all but name.

For workers outside tech: if your company is deploying AI tools, the line between “using AI to help you work” and “using your work to train AI” is blurrier than you think. Every interaction with a corporate AI assistant is potentially generating training data.

We’re entering an era where the act of working doubles as unpaid training labor for AI companies — labor that workers didn’t consent to and can’t opt out of.

The Binary Choice

Meta employees now face a simple calculation: accept the surveillance or leave. In a job market where Meta has gone from 800 openings to seven, and AI-driven layoffs are accelerating across the industry, that’s not much of a choice at all.

How this plays out — whether employees win concessions, regulators step in, or other companies quietly follow suit — will shape the relationship between workers and AI for years to come.

The uncomfortable truth is that Meta’s technical argument isn’t wrong. AI agents genuinely need this data to get better. But “we need it” has never been a sufficient justification for taking it without meaningful consent.