There’s a war happening in Iran. There’s also a completely different war happening on your timeline — and that one is mostly fake.

Since US and Israeli strikes on Iran began February 28, social media has been hammered with AI-generated videos claiming to show missile strikes on Tel Aviv, burning skyscrapers in Dubai, and devastated military bases. These clips have collectively racked up hundreds of millions of views. The part that should make your stomach turn: the people creating them are getting paid to do it.

Welcome to the first armed conflict where AI misinformation isn’t a side effect. It’s a business model.

The Scale Is Unprecedented

“The scale is truly alarming and this war has made it impossible to ignore now,” says Timothy Graham, a digital media expert at Queensland University of Technology. “What used to require professional video production can now be done in minutes with AI tools.”

BBC Verify has been tracking the flood. One AI-generated video purporting to show missiles striking Tel Aviv surfaced in over 300 posts and was shared tens of thousands of times. A fake clip of Dubai’s Burj Khalifa engulfed in flames went massively viral while real residents were genuinely terrified of drone strikes on the city.

It’s not just video anymore. Iran’s state-aligned Tehran Times shared fabricated satellite imagery claiming to show damage to the US Navy’s Fifth Fleet headquarters in Bahrain. Google’s SynthID watermark detector confirmed it was AI-generated. The tell? Three vehicles parked in exactly the same positions as in real satellite imagery taken a full year earlier.
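BBC Verify hasn’t published its exact workflow, but the core of that tell is simple enough to sketch: align the suspect image with a dated archival capture of the same site, then flag regions that match too perfectly. Here’s a minimal version in Python (the function name, tile size, and match threshold are all illustrative assumptions; real forensic pipelines also handle co-registration, resampling, and sensor differences):

```python
import numpy as np
from PIL import Image

def recycled_fraction(suspect_path: str, archive_path: str, tile: int = 64) -> float:
    """Fraction of tiles that are near-identical between two aligned images.

    Two genuinely independent captures of the same site should differ almost
    everywhere (sun angle, shadows, parked vehicles). A high match rate
    suggests the "new" image was painted on top of the old one.
    """
    a = np.asarray(Image.open(suspect_path).convert("L"), dtype=float)
    b = np.asarray(Image.open(archive_path).convert("L"), dtype=float)
    if a.shape != b.shape:
        raise ValueError("crops must be co-registered to the same footprint")

    matches = total = 0
    for y in range(0, a.shape[0] - tile + 1, tile):
        for x in range(0, a.shape[1] - tile + 1, tile):
            pa, pb = a[y:y+tile, x:x+tile], b[y:y+tile, x:x+tile]
            # Normalize each tile so global brightness shifts don't count as change
            pa = (pa - pa.mean()) / (pa.std() + 1e-6)
            pb = (pb - pb.mean()) / (pb.std() + 1e-6)
            if np.abs(pa - pb).mean() < 0.15:  # match threshold is an assumption
                matches += 1
            total += 1
    return matches / total if total else 0.0
```

Identical parked cars a year apart are exactly the kind of too-perfect match this catches.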

X Is Literally Paying People to Lie About a War

Here’s where the story goes full dystopia. X’s head of product Nikita Bier admitted this week that “99%” of accounts spreading AI-generated war videos were doing it to “game monetization”: posting outrage-bait to generate engagement in exchange for payments through X’s Creator Revenue Sharing program.

The economics are simple. Generate a realistic-looking war video in minutes using freely available tools — OpenAI’s Sora, ByteDance’s Seedance, or Grok (built directly into X). Post it with a scary caption. Watch the views roll in. Collect your check. Repeat.

Graham estimates X pays roughly $8 to $12 per million verified user impressions. “Once you’re in, viral AI-generated content is basically a money printer,” he says. “They’ve built the ultimate misinformation enterprise.”
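Run those numbers and the incentive structure is clear. A quick back-of-envelope sketch (the per-million rate is Graham’s estimate above; the view counts are hypothetical):

```python
# Back-of-envelope creator payouts using Graham's estimate of $8-$12 per
# million verified impressions. The view counts below are hypothetical.
RATE_LOW, RATE_HIGH = 8.0, 12.0  # USD per million impressions

def payout_range(impressions: int) -> tuple[float, float]:
    millions = impressions / 1_000_000
    return millions * RATE_LOW, millions * RATE_HIGH

for views in (500_000, 5_000_000, 50_000_000):
    lo, hi = payout_range(views)
    print(f"{views:>10,} views -> ${lo:,.0f}-${hi:,.0f}")
# 500,000 views -> $4-$6; 50,000,000 views -> $400-$600.
```

Per clip the sums look small, but each clip costs a few minutes of prompting, and nothing caps the volume.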

The platform literally pays creators to fabricate war footage while real people are dying. That’s not a bug in the system — it’s the system working exactly as designed.

When the Fact-Checker Is Also Wrong

The darkest subplot involves Grok, X’s own AI chatbot. Users turned to it to verify whether viral war videos were real or AI-generated.

Grok failed spectacularly. In one documented instance, it told a user: “No, this isn’t AI, it’s a real photo from today’s Iranian ballistic missile strikes on central Israel” — then incorrectly cited Reuters, CNN, and Euronews as sources. None of those outlets had published anything confirming the fake video.

So the platform pays creators to post AI-generated war footage, and the platform’s own AI assistant validates it as real. If you wrote this in a dystopian novel, your editor would tell you it was too on the nose.

The Rise of the Shallowfake

Political scientist Steven Feldstein points out that the misinformation landscape has evolved beyond simple deepfakes. We’re in the era of the “shallowfake”: subtle manipulations that blend truth and fiction and slip past casual scrutiny.

Rather than creating something entirely fabricated, bad actors take real imagery and make small AI-powered edits. A real photo of an Iraqi airport with a small plume of smoke becomes one showing a massive fireball. Real satellite imagery of a US base gets tweaked to show blast damage that isn’t there. Old photos from previous conflicts get recaptioned as breaking news.

The June 2025 Twelve-Day War was already a watershed — BBC Verify’s Shayan Sardarizadeh called it “the first example of a major global conflict where we were seeing more misinformation being produced using AI than in traditional ways.” The current conflict has supercharged that trend.

Platforms Are Scrambling, But It’s Not Enough

X announced it will temporarily suspend creators from monetization if they post unlabeled AI-generated war videos. Experts are skeptical the policy can be enforced at scale.

X says it will rely on Community Notes to identify AI-generated content. But when a single fake video can spawn 300+ reposts and reach millions before anyone flags it, reactive moderation amounts to bringing a garden hose to a wildfire.
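Put rough numbers on the race and the mismatch is obvious. A toy model, with every parameter assumed rather than measured:

```python
# Toy model of reactive moderation lag (every number here is an assumption).
SEED_VIEWS = 10_000        # audience of the original post in hour zero
DOUBLING_PER_HOUR = 2.0    # reposts roughly double cumulative reach each hour
NOTE_DELAY_HOURS = 7       # time for a note to be written and rated helpful

views = SEED_VIEWS
for hour in range(1, NOTE_DELAY_HOURS + 1):
    views *= DOUBLING_PER_HOUR
    print(f"hour {hour}: ~{views:,.0f} cumulative views")
# hour 7: ~1,280,000 cumulative views. And the note only attaches to one
# post; the 300+ reposts each start the race again from zero.
```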

TikTok and Meta didn’t even respond to BBC Verify’s requests about whether they’d take similar action.

Victoire Rio, executive director of technology policy nonprofit What To Fix, explains the core problem: “The pipeline onto social media can now be almost fully automated.” AI tools generate content. Scheduling tools post it. Engagement algorithms amplify it. Monetization programs reward it. End-to-end misinformation factory.

The Liar’s Dividend

There’s an even more corrosive second-order effect. When people know AI can fake anything, they start disbelieving everything — including real footage of real atrocities.

“It’s now to a point where nothing that comes in beyond your own pre-existing narrative is accepted as something that is truthful,” says Feldstein. “And that’s just as harmful.”

This is the “liar’s dividend” researchers have warned about for years. In a world where anything could be fake, powerful actors dismiss genuine evidence as AI-generated. It cuts both ways, and it cuts deep.

Rumman Chowdhury, the prominent AI researcher and former US Science Envoy for AI, nails it: “We have reached a level of realism in video, audio, and image deepfakes that for most people, it is not discernible from fact.”

Where This Goes

The tools to create convincing synthetic media are now free, fast, and absurdly easy to use. The economic incentives to create fake content are baked directly into platform architecture. The guardrails — content moderation, AI detection, watermarking, community fact-checking — are all playing catch-up against a problem that moves at the speed of a GPU.

The Iran conflict isn’t just a military story. It’s the definitive case study for AI-powered information warfare. And the lesson so far isn’t encouraging: when you build systems that pay people for attention regardless of truth, truth becomes the first casualty.

The question isn’t whether this gets worse. It’s whether anyone has the will — or the business model — to make it better.