Users vote with their thumbs. Last weekend, they pressed delete.
ChatGPT mobile uninstalls in the U.S. surged 295% day-over-day on Saturday, February 28, according to Sensor Tower data. For context, ChatGPT’s typical daily uninstall fluctuation averaged about 9% over the past month. This wasn’t noise. This was a consumer revolt.
The trigger: OpenAI signed a deal with the Department of Defense — now officially rebranded as the Department of War under the Trump administration — to deploy AI models in classified military environments. The timing was brutal. The deal landed hours after the government blacklisted rival Anthropic for refusing to sign a similar agreement unless it included safety guardrails.
And now Sam Altman is admitting it was a mistake.
Altman’s Mea Culpa: “Opportunistic and Sloppy”
In a rare move for any CEO — let alone one running a $300 billion AI company — Altman took to X on Monday to publicly share an internal memo acknowledging the fumble.
“We shouldn’t have rushed to get this out on Friday,” Altman wrote. “The issues are super complex, and demand clear communication. We were genuinely trying to de-escalate things and avoid a much worse outcome, but I think it just looked opportunistic and sloppy.”
That’s a remarkable admission. The CEO of the world’s most prominent AI company essentially saying: yes, it looked like we swooped in to steal Anthropic’s government contract the moment they got punished for having principles. Because that’s exactly what it looked like.
OpenAI is now renegotiating the contract to add language prohibiting domestic surveillance, citing consistency with the Fourth Amendment, the National Security Act of 1947, and FISA. Defense Intelligence Components — including the NSA, NGA, and DIA — would be barred from using OpenAI’s services.
Altman also called on the government to reverse its freeze on Anthropic, calling it a “very bad decision.” He even said, “If I received what I believed was an unconstitutional order, of course I would rather go to jail than follow it.”
Strong words. Whether OpenAI's actions will match them remains the central question.
The Numbers Are Brutal
Sensor Tower’s data paints a picture of genuine backlash — not just Twitter outrage:
- ChatGPT uninstalls: Up 295% day-over-day (Saturday, Feb 28)
- ChatGPT downloads: Down 13% after growing 14% the day before
- ChatGPT 1-star reviews: Surged 775%, then grew another 100% on Sunday
- ChatGPT 5-star reviews: Dropped 50%
- Claude downloads: Up 37% Friday, up 51% Saturday
- Claude App Store ranking: Hit #1 in the U.S., jumping over 20 ranks in under a week
Appfigures confirmed that Claude’s total daily U.S. downloads actually surpassed ChatGPT’s for the first time on Saturday, estimating the surge at 88% day-over-day. Anthropic told Business Insider that “every single day last week was an all-time record for Claude sign-ups.”
Claude Goes Global
The consumer migration wasn't just American. Claude hit #1 among free apps in Belgium, Canada, Germany, Luxembourg, Norway, and Switzerland.
Before this week, Claude was a niche product — favored by developers and power users, invisible to casual consumers. ChatGPT had the brand recognition, the integrations, the installed base.
The Pentagon saga changed that overnight. Anthropic went from “the other AI company” to the AI company that said no to the military. Millions decided that mattered.
The irony is thick. Anthropic’s refusal to compromise on safety — the stance that got it blacklisted — became the most effective marketing campaign in AI history. No ad buy could have generated this kind of organic momentum.
The Government Doubles Down
While consumers rallied behind Anthropic, the federal government moved in the opposite direction. Three more cabinet-level agencies — State, Treasury, and HHS — announced they would cease using Anthropic's AI products.
The State Department confirmed it was switching its in-house chatbot, StateChat, from Claude to GPT-4.1. Treasury Secretary Bessent confirmed his department would end all Anthropic usage entirely. This follows Trump’s executive order directing all agencies to phase out Anthropic — turning one of America’s leading AI companies into a pariah within its own government.
The split is extraordinary: the U.S. government is punishing Anthropic for the very principles that are making it wildly popular with American consumers.
Why This Actually Matters
Consumer ethics have teeth. For years, tech companies assumed users don’t care about corporate ethics — that convenience always wins. A 295% uninstall spike isn’t performative outrage. It’s people choosing which company gets their data, attention, and money.
AI companies can’t hide behind complexity. Altman’s “de-escalation” framing didn’t work because the timeline was visible to everyone: Anthropic gets punished Friday morning, OpenAI signs the deal Friday afternoon. The sequence told a story no amount of corporate comms could rewrite.
The safety debate got real. This wasn’t abstract hand-wringing about AI risk. It was about whether an AI company would accept restrictions on surveillance and autonomous weapons. Anthropic’s position wasn’t anti-military — it was anti-unaccountable-deployment. That distinction resonated with consumers in a way theoretical safety debates never have.
Enforcement is still the elephant in the room. Even with renegotiated terms, legal experts are skeptical. Charlie Bullock, a senior fellow at the Institute for Law & AI, noted the surveillance language “does not address autonomous weapons concerns.” OpenAI employees have pushed for independent legal review of the full contract — a request that hasn’t been granted.
What Comes Next
Anthropic faces a paradox: more popular than ever with consumers, yet locked out of the federal government, one of the largest potential AI customers in the world. It needs to convert those new users into paying subscribers to offset the lost government revenue.
OpenAI faces a credibility gap. Altman's public mea culpa is tactically smart, but the contract language is only as good as its enforcement, and internally, morale is strained: employees are still waiting on the independent legal review they requested.
The bigger question the entire industry is watching: can ethical positioning be a competitive moat in AI?
If Claude’s numbers hold — if the users who downloaded it last week stick around and pay for subscriptions — it sends a message that echoes far beyond one Pentagon contract. It tells every AI company on Earth that how you build matters as much as what you build.
That might be the most important AI development of 2026 so far.