In competition law, we look for what is called 'inconsistent conduct' — where a company's public position contradicts its commercial behavior. At SXSW this week, Patreon CEO Jack Conte identified precisely this incoherence in the artificial intelligence sector.
TLDR
Patreon CEO Jack Conte used his SXSW keynote to dismantle the 'fair use' defense used by AI companies, calling it 'bogus' in light of their licensing deals with major publishers. His comments come weeks after Anthropic settled a class-action lawsuit for $1.5 billion. The legal consensus is shifting toward a two-tier system where corporations get paid for data while individual creators are told their work is free for the taking.
KEY TAKEAWAYS
Conte's argument was simple. AI companies publicly claim that training their models on copyrighted work is 'fair use' under US copyright law, requiring no compensation. Yet privately, these same companies are signing nine-figure checks to entities like Disney, Warner Music, and Condé Nast for the right to use their data.
The AI companies are claiming fair use, but this argument is bogus. It's bogus because while they claim it's fair to use the work of creators as training data, they do multimillion-dollar deals with rights holders and publishers like Disney and Condé Nast... If it's legal to just use it, why pay?
— Jack Conte, Patreon CEO
The collapse of the fair use shield
For three years, the generative AI industry has operated on the assumption that mass data ingestion is protected by the fair use doctrine. That assumption is now effectively dead. The turning point was not a Supreme Court ruling but a settlement check.
In February 2026, Anthropic agreed to pay $1.5 billion to settle *Bartz v. Anthropic*, a class-action lawsuit brought by the Authors Guild on behalf of writers whose books were included in the 'Books3' dataset. The plaintiffs alleged Anthropic trained its Claude models on 465,000 pirated titles.
The settlement is instructive. Corporations do not pay $1.5 billion to settle claims they believe they can defeat. By settling, Anthropic avoided a precedent-setting ruling that could have declared its entire training methodology illegal. But the signal to the market is clear: the fair use defense has a price tag, and it is higher than the industry can afford.
Two-tier justice
The current landscape reveals a bifurcated legal reality. Large rights holders with the resources to litigate are extracting rent. Disney secured a $1 billion investment and licensing deal from OpenAI. Condé Nast signed a multi-year partnership in August 2024. Warner Music settled with music generators Suno and Udio on favorable terms.
Individual creators, however, are told their work is 'fair use'. This is the 'bogus' argument Conte is attacking. It suggests that copyright protection is a function of market power rather than legal right. If fair use applies to a freelancer's illustration, it should apply to a Marvel movie. If it does not apply to the movie, it cannot apply to the illustration.
Justice Louis Brandeis famously observed that sunlight is the best disinfectant. In the AI sector, that sunlight is coming in the form of discovery. The proposed TRAIN Act (Transparency and Responsibility for Artificial Intelligence Networks) would mandate that AI developers maintain and disclose detailed records of their training data. This would allow individual creators to verify if their work was used, a prerequisite for any class-action claim.
The market failure
The problem is not the technology itself. As Conte noted, the tools are valuable. The problem is a market failure where the inputs of production are treated as a public good simply because they are accessible online. This is an economic error as much as a legal one.
When a factory dumps waste into a river, we call it a negative externality: the factory privatizes the profit while socializing the cost. When an AI company scrapes the open web to build a commercial product, it privatizes the value of the commons. The $1.5 billion Anthropic settlement is essentially a retroactive pricing of that externality.
We are moving toward a licensing regime. The question is no longer whether AI companies will pay for data, but who they will pay. Currently, they are paying the entities that can sue them. A functional market would require a clearinghouse mechanism — similar to ASCAP in the music industry — that allows individual creators to be compensated for the use of their work at scale.
Until then, the fair use defense will remain what Conte called it: a convenient fiction that collapses the moment a lawyer walks into the room.