The highest court in the country just said what everyone already knew — machines aren’t authors.
On Monday, the U.S. Supreme Court declined to hear Thaler v. Perlmutter, the eight-year legal crusade to get copyright protection for art made entirely by AI. By refusing the case, the Court left lower court rulings intact: no human author, no copyright. Period.
It’s not a surprise. But the consequences are enormous.
The Guy Who Tried to Copyright a Robot’s Painting
Stephen Thaler, a Missouri computer scientist, has been fighting this battle since 2018. He filed a copyright application for an image called “A Recent Entrance to Paradise” — a surreal piece generated by his AI system, the Creativity Machine, without human creative input.
The Copyright Office said no. A federal judge said no. The D.C. Circuit Court of Appeals said no. And now the Supreme Court has said: we’re not even going to look at it.
Thaler’s lawyers warned the refusal would have “irreversible” consequences for AI development. They’re probably right — just not in the way they hoped.
What This Actually Means
Let’s be precise. The Supreme Court didn’t rule that AI art can’t be copyrighted. It declined to hear the case, leaving existing rulings in place. Legally, that’s different from a binding opinion. Practically? It’s a green light for the status quo.
And the status quo is clear. As University of Kentucky law professor Brian Frye put it: “Pretty much everyone across the board has said human authorship is required.”
But here’s the nuance most coverage misses: this case was about fully autonomous AI creation. Thaler deliberately positioned the machine as the sole creator — no human in the loop whatsoever.
AI-assisted work is a different story. The Copyright Office clarified in 2025 that works created with AI tools can receive protection, but only for the parts reflecting human creative expression. Write a novel and use AI for one illustration? The novel’s protected. The illustration probably isn’t. Heavily curate and edit AI outputs? Your selection and arrangement might qualify.
The real legal frontier isn’t “can AI get copyright?” — it’s “how much human involvement is enough?”
The Global Consensus Is Forming
Thaler fought this fight everywhere, and lost almost everywhere:
- UK Supreme Court (2023): AI systems can’t be inventors on patents
- European Patent Office: Rejected DABUS patent applications
- Australia: A federal court initially ruled in Thaler’s favor in 2021, then was overturned on appeal in 2022
China is the outlier. A Beijing court ruled in late 2023 that AI-generated images could receive copyright if the user made sufficient creative choices in prompting and selection. That’s a crack in the consensus worth watching.
But the pattern is unmistakable: courts worldwide weren’t built for machine creators, and they’re not stretching to accommodate them.
Why Every AI Company Should Be Paying Attention
Here’s where it gets uncomfortable for the generative AI industry.
If AI-generated content can’t be copyrighted, it enters something close to the public domain the moment it’s created. Anyone can copy it, modify it, redistribute it. For companies whose business model is generating content, this is a structural problem.
Think about it:
- Stock image companies selling AI-generated photos can’t stop competitors from copying and reselling those exact images
- AI music platforms can’t protect the tracks their systems produce
- Content marketing agencies can’t guarantee clients exclusive rights to AI-generated copy
- Game studios using AI assets may find those assets completely unprotectable
This doesn’t kill these businesses. But it forces a fundamental shift in value proposition — from owning the output to owning the process. The competitive advantage becomes your models, your fine-tuning, your curation speed. Not the content itself.
Some argue that’s exactly right. If AI can generate infinite variations of anything at near-zero cost, letting companies copyright all of it would flood the zone and crowd out human creators even further.
The Political Angle No One’s Talking About
The Trump administration itself urged the Court not to hear Thaler’s appeal. From a president who’s been aggressively pro-AI, that might seem contradictory. It’s not.
Keeping AI-generated works uncopyrighted means the government — and everyone else — can use, copy, and repurpose AI outputs freely. No licensing headaches. No royalty negotiations. It’s deregulation by omission.
This lands amid the administration’s broader AI power moves: phasing out Anthropic products across federal agencies, steering the State Department from Claude to OpenAI’s GPT-4.1, and reshaping who gets to build AI for the government. The copyright decision fits neatly into a world where AI serves institutional interests on institutional terms.
What Happens Next
The book isn’t permanently closed. The Supreme Court typically waits for “circuit splits” — different appeals courts reaching different conclusions. A case involving AI-assisted creation rather than fully autonomous generation could present a more compelling vehicle.
Congress could also act, though AI-related IP bills have gained zero traction so far.
For anyone working with generative AI right now, the practical playbook is simple:
- Don’t rely on copyright for purely AI-generated content
- Document your human contributions obsessively if you’re using AI as a creative tool
- Build moats around process, not output — trade secrets, speed, curation quality
- Watch the AI-assisted creation cases — that’s the next battleground
The Supreme Court punted on the philosophical question of whether machines can be authors. But in doing so, it answered a very practical one: if you want legal protection for creative work in 2026, a human better be involved.
The only real question is how long that standard holds as the line between human and machine creativity keeps blurring.