
Patreon CEO to AI Companies: The 'Fair Use' Defense Is a Lie

Jack Conte asks the question that destroys the generative AI legal defense: If training is fair use, why are you paying Disney?

6 min read
Jack Conte speaking at SXSW 2026 about AI fair use
By Diana Trent · Mar 20, 2026

In competition law, we look for what is called 'inconsistent conduct' — where a company's public position contradicts its commercial behaviour. At SXSW this week, Patreon CEO Jack Conte identified precisely this incoherence in the artificial intelligence sector.

TLDR

Patreon CEO Jack Conte used his SXSW keynote to dismantle the 'fair use' defense used by AI companies, calling it 'bogus' in light of their licensing deals with major publishers. His comments come weeks after Anthropic settled a class-action lawsuit for $1.5 billion. The legal consensus is shifting toward a two-tier system where corporations get paid for data while individual creators are told their work is free for the taking.

KEY TAKEAWAYS

1. Patreon CEO Jack Conte argues that AI companies' licensing deals with Disney and Condé Nast prove their 'fair use' defense is legally incoherent.
2. Anthropic recently settled the Bartz v. Anthropic class action for $1.5 billion, implicitly acknowledging that training on pirated books carries liability risk.
3. The TRAIN Act, a bipartisan bill introduced in 2026, would force AI companies to disclose training data records to copyright holders.
4. Courts have increasingly rejected the argument that training on pirated content constitutes fair use, creating a liability crisis for model builders.

Conte's argument was simple. AI companies publicly claim that training their models on copyrighted work is 'fair use' under US copyright law, requiring no compensation. Yet privately, these same companies are signing nine- and ten-figure checks to entities like Disney, Warner Music, and Condé Nast for the right to use their data.

The AI companies are claiming fair use, but this argument is bogus. It's bogus because while they claim it's fair to use the work of creators as training data, they do multimillion-dollar deals with rights holders and publishers like Disney and Condé Nast... If it's legal to just use it, why pay?

— Jack Conte, Patreon CEO

The collapse of the fair use shield

For three years, the generative AI industry has operated on the assumption that mass data ingestion is protected by the fair use doctrine. That assumption is now effectively dead. The turning point was not a Supreme Court ruling but a settlement check.

In February 2026, Anthropic agreed to pay $1.5 billion to settle *Bartz v. Anthropic*, a class-action lawsuit brought by the Authors Guild on behalf of writers whose books were included in the 'Books3' dataset. The plaintiffs alleged Anthropic trained its Claude models on 465,000 pirated titles.

The settlement is instructive. Corporations do not pay $1.5 billion to settle claims they believe they can defeat. By settling, Anthropic avoided a precedent-setting ruling that could have declared their entire training methodology illegal. But the signal to the market is clear: the fair use defense has a price tag, and it is higher than the industry can afford.

Two-tier justice

The current landscape reveals a bifurcated legal reality. Large rights holders with the resources to litigate are extracting rent. Disney secured a $1 billion investment and licensing deal from OpenAI. Condé Nast signed a multi-year partnership in August 2024. Warner Music settled with music generators Suno and Udio on favourable terms.

Individual creators, however, are told their work is 'fair use'. This is the 'bogus' argument Conte is attacking. It suggests that copyright protection is a function of market power rather than legal right. If fair use applies to a freelancer's illustration, it should apply to a Marvel movie. If it does not apply to the movie, it cannot apply to the illustration.

Justice Louis Brandeis famously observed that sunlight is the best disinfectant. In the AI sector, that sunlight is coming in the form of discovery. The proposed TRAIN Act (Transparency and Responsibility for Artificial Intelligence Networks) would mandate that AI developers maintain and disclose detailed records of their training data. This would allow individual creators to verify if their work was used, a prerequisite for any class-action claim.

The market failure

The problem is not the technology itself. As Conte noted, the tools are valuable. The problem is a market failure where the inputs of production are treated as a public good simply because they are accessible online. This is an economic error as much as a legal one.

When a factory dumps waste into a river, we call it a negative externality: the factory privatises the profit while socialising the cost. When an AI company scrapes the open web to build a commercial product, it privatises the value of the commons. The $1.5 billion Anthropic settlement is essentially a retroactive pricing of that externality.

We are moving toward a licensing regime. The question is no longer whether AI companies will pay for data, but who they will pay. Currently, they are paying the entities that can sue them. A functional market would require a clearinghouse mechanism — similar to ASCAP in the music industry — that allows individual creators to be compensated for the use of their work at scale.

Until then, the fair use defense will remain what Conte called it: a convenient fiction that collapses the moment a lawyer walks into the room.

FREQUENTLY ASKED QUESTIONS

What is the fair use defense in AI training?
AI companies argue that using copyrighted data to train models is 'transformative' and therefore fair use, similar to how a human student reads books to learn. This defense allows them to use content without payment or permission.
Why did Anthropic pay $1.5 billion?
Anthropic settled a class-action lawsuit alleging it trained its models on pirated books. The settlement avoided a trial that could have resulted in a ruling declaring their training methods illegal.
Do individual creators get paid when AI uses their work?
Generally, no. While major publishers like Disney and Condé Nast have secured licensing deals, individual creators currently have no mechanism to demand payment, though class-action lawsuits are attempting to change this.
What is the TRAIN Act?
The TRAIN Act is a proposed US law that would require AI companies to keep detailed records of all data used to train their models, allowing copyright holders to check if their work was used.
The Bushletter editorial team. Independent business journalism covering markets, technology, policy, and culture.
