
Trump AI framework tells states to back off as critics warn of accountability gap

The White House wants to preempt state AI laws before they exist. The constitutional and practical problems run deeper than the four-page document suggests.

The Trump administration framework proposes preempting state AI regulation.
By Diana Trent · 2026-03-23

The Trump administration released its national AI policy framework on Friday, March 20, 2026. The document runs four pages. The ambition runs considerably further.

TLDR

The Trump administration released a national AI policy framework on Friday proposing that states should not regulate AI development. The four-page document recommends industry-led standards, regulatory sandboxes, and liability shields for developers. Critics warn it creates an accountability vacuum. Congress has twice rejected preemption language, and Senator Blackburn's competing bill includes developer liability provisions.

KEY TAKEAWAYS

01 The White House framework proposes preempting state AI laws before they exist
02 Congress has twice rejected AI preemption language in legislation this year
03 The framework recommends industry self-regulation with no new regulatory body
04 63% of Americans believe AI will lead to fewer jobs, according to YouGov polling

At the centre of the framework sits a single proposition: states should not regulate artificial intelligence development, and they should not impose liability on developers when third parties misuse their products. Michael Kratsios, the President's science adviser, and David Sacks, the AI and cryptocurrency adviser, have spent three months developing this position since Trump's December executive order directed them to do so.

The document proposes no new regulatory body. It recommends "industry-led" standards. It creates regulatory sandboxes allowing companies to obtain exemptions from federal rules. It streamlines permitting for data centres. And it explicitly argues against what it calls "open-ended liability" for AI firms.

Speaker Mike Johnson and Majority Leader Steve Scalise released statements supporting the framework within hours. Committee chairs Brett Guthrie of Energy and Commerce, Jim Jordan of Judiciary, and Brian Babin of Science issued a joint statement urging Congress to "take action" on the recommendations.

Patrick Hedger of NetChoice, the industry group representing Meta, Google, and Amazon, called it a "light-touch regulatory environment." Daniel Castro of the Center for Data Innovation praised the framework for avoiding "alarmism."

Brad Carson, a former Democratic congressman who now leads Americans for Responsible Innovation, offered a different assessment: the framework represents "another chance for tech companies to launch harmful products with no accountability."

What the framework actually proposes

The preemption argument sits on constitutional foundations the framework does not fully examine. Federal preemption of state law requires either explicit congressional authorisation or a demonstration that state laws conflict with federal regulatory objectives. The framework has neither.

Congress has not passed AI legislation authorising preemption. The White House attempted to include preemption language in the GOP reconciliation bill earlier this year. That effort failed. A second attempt to insert preemption into the National Defense Authorization Act also failed. Senator Marsha Blackburn released her own AI draft bill this week, and it includes a "duty of care" on developers. Her bill directly contradicts the White House position against "open-ended liability."

The framework proposes relying on existing agencies rather than creating new oversight. The Federal Trade Commission would retain consumer protection authority. The FDA would oversee AI in medical devices. The SEC would supervise AI in financial markets. But these agencies lack staff, funding, and in many cases statutory authority specifically addressing AI systems.

The children's safety provisions require parental controls and "age assurance" rather than age verification. The distinction matters. Age assurance uses inference and estimation; age verification requires documentary proof. The framework explicitly rejects verification, citing privacy concerns.

On copyright, the framework defers to courts on fair use questions while suggesting a collective licensing system. It does not mandate such a system. It merely suggests one. Training data for AI models remains legally contested, and the framework provides no resolution.

The deepfake provisions include First Amendment carve-outs for satire and political commentary. The anti-"jawboning" provisions prohibit government officials from pressuring platforms to remove content. Federal datasets would become more accessible for AI training.

The accountability gap

Preemption creates a regulatory vacuum. If states cannot regulate and the federal government chooses not to, AI development operates in legal grey space.

Colorado passed the first comprehensive AI law in the United States in 2024. California considered SB 1047, which would have imposed safety requirements on large AI models. The bill ultimately failed after industry lobbying, but other states have pending legislation. The White House framework asks them to abandon these efforts.

The framework offers no workforce-transition provisions for the 63% of Americans who told YouGov in February 2026 that AI will lead to fewer jobs. The document mentions "non-regulatory workforce training methods" without specifying funding sources, program structures, or implementation timelines.

The liability shield proposed for third-party misuse deserves particular scrutiny. If a company develops an AI system used for fraud, discrimination, or physical harm, the framework suggests the developer should face no consequences provided they did not intend the misuse. Tort law ordinarily considers whether manufacturers could have foreseen and prevented harmful applications of their products. The framework abandons this standard.

Industry self-regulation has a poor track record in technology. The tech sector promised self-regulation on privacy through the 1990s and 2000s. The Cambridge Analytica scandal demonstrated where that approach leads. The framework bets on the same model producing different results.

What this means for Australia

American regulatory choices shape global defaults. When the US adopts a light-touch approach, Australian policymakers face pressure to match it.

The ACCC has spent years documenting digital platforms' market power. Its Digital Platform Services Inquiry identified systematic competition harms. Yet the News Media Bargaining Code, Australia's signature intervention, created a flawed template. The code mandated negotiations rather than structural remedies. It relied on arbitration as enforcement. Facebook briefly blocked news sharing in Australia to demonstrate its leverage. The code has delivered revenue to media companies, but it has not addressed underlying power imbalances.

AI regulation presents similar dynamics. If the US signals that preemption and self-regulation represent the appropriate model, Australian regulators must decide whether to follow or diverge. Divergence carries costs. Australian subsidiaries of American tech companies may face compliance burdens their parent companies do not. Investment flows toward lighter regulatory environments.

But alignment carries costs too. The accountability vacuum the White House framework creates would apply in Australia if local regulators defer to American standards. Algorithmic harms do not respect borders. Australian consumers and workers would bear consequences without recourse.

The Treasury's AI consultation process, currently underway, will need to address this tension directly. The easy path follows American deference to industry. The harder path requires Australian regulators to articulate independent standards, defend them against trade pressure, and enforce them against companies with market capitalisation exceeding Australian GDP.

What should change

The framework conflates two distinct questions: whether states should regulate inconsistently, and whether anyone should regulate at all.

Coordinated federal standards have merit. A patchwork of 50 different state AI laws would create compliance burdens, particularly for smaller companies. But the solution to patchwork regulation involves passing federal law with meaningful requirements, not preempting states while providing no federal alternative.

Congress should consider Senator Blackburn's "duty of care" proposal. Developers who release AI systems capable of causing substantial harm should bear some responsibility for foreseeable misuse. Strict liability applies to ultrahazardous activities in other contexts. Advanced AI systems that can generate convincing deepfakes, conduct cyberattacks, or automate discrimination at scale may warrant similar treatment.

Brandeis wrote that states serve as laboratories of democracy. Colorado's AI law, whatever its flaws, provides data about what works and what does not. California's failed SB 1047 generated debate that clarified technical and policy questions. The White House framework would shut down these experiments before the results arrive.

The framework takes effect only if Congress acts. Congressional Republicans appear supportive. Congressional Democrats have raised concerns. The next months will determine whether preemption passes or fails for the third time this legislative session.

The constitutional questions remain unresolved. The policy questions remain contested. The only certainty is that AI development will continue regardless of what any government does.

The question is whether affected people will have any remedy when things go wrong.

FREQUENTLY ASKED QUESTIONS

What does the Trump AI framework propose?
The framework proposes preempting state AI laws, recommending industry-led standards, creating regulatory sandboxes, and shielding developers from liability for third-party misuse.
Has Congress approved AI preemption?
No. Congress has rejected preemption language twice this legislative session, in the GOP reconciliation bill and the National Defense Authorization Act.
What are the Australian implications?
If the US adopts a light-touch approach, Australian policymakers face pressure to match it, potentially creating similar accountability gaps for Australian consumers.
Editor

The Bushletter editorial team. Independent business journalism covering markets, technology, policy, and culture.
