The central question in AI policy used to be whether governments should regulate at all, but 2026 has moved past that argument. The question now is who gets to regulate, and what power they actually have to make it stick.
TLDR
2026 marks the shift from whether to regulate AI to who gets to regulate it. Trump's December executive order attacking state AI laws has triggered legal battles, while child safety lawsuits are forcing policy action Congress has avoided. States aren't backing down.
In December 2025, President Trump signed an executive order that fundamentally reframed the AI governance debate. The order didn't just express a preference for federal oversight. It created an AI Litigation Task Force within the Department of Justice, directed federal agencies to withhold funds from states maintaining certain AI regulations, and instructed the White House to draft legislation that would preempt state laws entirely. The task force began operations on January 10, 2026.
This wasn't about eliminating state AI laws with nothing to replace them. It was about replacing them with a "minimally burdensome national standard." The shift matters. In 2025, Congress tried and failed three times to pass an AI moratorium that would have frozen state-level regulation without offering a federal substitute. The new approach acknowledges that preemption with a substantive policy is a different proposition from preemption without one.
States Aren't Waiting for Permission
The executive order doesn't override state law on its own. That requires either congressional action or successful legal challenges, and states are aware of the legal terrain. According to policy experts tracking the landscape, many states are continuing to move forward, weighing the risk of DOJ litigation against the political cost of doing nothing while constituents demand action.
California's SB 243, the companion chatbot law, took effect on January 1, 2026. It's the first state law to impose youth-specific protections on AI systems designed for extended personal interaction, and it includes a private right of action with $1,000 in statutory damages per violation. Colorado's AI Act, passed in 2024 to regulate algorithmic discrimination across employment, housing, and healthcare, saw its effective date pushed to June 2026 after the governor and attorney general backed industry efforts to gut its enforcement provisions.
The National Conference of State Legislatures reports that 78 AI chatbot safety bills are currently active across 27 states. Most focus on specific high-risk applications rather than general frameworks. AI chatbots linked to suicide, defamation, and child exploitation are in the legislative crosshairs. So are pricing algorithms used in housing and insurance markets, where states are amending antitrust codes or regulating specific domains rather than taking a broad approach.
The Child Safety Cases That Changed the Narrative
Policy discussions about AI used to require speculation about potential harms, but 2025 brought documented cases with named victims. Multiple lawsuits now allege that AI companion chatbots contributed to teen suicides. Psychiatrists reported a pattern they called AI "psychosis": users experiencing hallucinations or delusions after extended interaction with chatbots. In one case, OpenAI was sued for allegedly coaching a teenager toward suicide.
Amina Fazlullah, Common Sense Media's head of tech policy advocacy, wrote in January 2026 that "children have died" and noted the Senate Judiciary Committee hearing that featured parents of kids lost or irreparably harmed by AI chatbots. "Protecting kids in the AI era," she argued, "requires limiting access to unsafe chatbots, strengthening data privacy, ensuring accountability for harms, and incorporating independent safety audits." The risks now have names, dates, and grieving families willing to testify in front of Congress.
The cases illustrate a broader problem with AI governance so far, where companies have consistently prioritized engagement over safety. Leaked Meta documents revealed executives signing off on allowing AI to have "sensual" conversations with children. An AI-enabled teddy bear was pulled from shelves after reports it discussed sexual topics and encouraged children to harm their parents. The pattern across companies has been to deploy first, then address harms after they're documented in lawsuits.
That approach created the political conditions for aggressive state action. When the federal government doesn't act, states fill the vacuum. When companies don't self-regulate effectively, courts become the enforcement mechanism. The lawsuits matter not just for their legal outcomes but for the discovery process. Internal documents showing companies knew about risks but shipped products anyway change the politics of regulation entirely.
The Legal Questions No One Can Answer Yet
Trump's executive order is legally vulnerable on multiple grounds. David S. Rubenstein, who directs the Robert J. Dole Center for Law and Government at Washburn University School of Law, pointed out that "the executive branch cannot unilaterally preempt state law without a delegation from Congress" and that agencies "will be hard-pressed" to justify imposing spending conditions that Congress itself rejected. When the DOJ's AI Litigation Task Force begins challenging state laws in federal court, these arguments will be tested.
The order is also politically tone-deaf. Polling from Gallup shows 97 percent of Americans support AI regulations, and Ipsos research found large majorities favor government regulation of AI. States are delivering policy responses. The federal government has not. Threatening to sue states for protecting their residents, or to withhold broadband funding unless they pause enforcement, creates a backlash problem, especially when constituents are demanding action.
The administration's AI policy framework, released on March 20, 2026, doubles down on this approach. It calls for Congress to adopt what it describes as a "light-touch" regulatory regime centered on preemption of state AI laws and a "minimally burdensome national standard." But Congressional support isn't guaranteed. Both parties have members concerned about AI transfers to adversaries, child safety, and algorithmic discrimination. The coalition that blocked the 2025 moratorium attempts hasn't disappeared.
What Actually Happens Next
Three scenarios are plausible. First, Congress could pass federal AI legislation that preempts state laws while establishing substantive protections. This requires bipartisan support in an election year, which makes it unlikely but not impossible. Bills addressing kids' safety, privacy, or Section 230 reform could fold in chatbot-specific provisions. Something that feels responsive to a crisis could pass.
Second, the DOJ could challenge state laws and lose. Federal courts could rule that the executive order exceeds presidential authority, that specific state laws address legitimate local concerns not addressed by federal policy, or that Congress hasn't clearly expressed intent to occupy the field. This would preserve state authority to regulate but create a patchwork that companies complain makes compliance difficult.
Third, the courts could side with the federal government on narrow grounds, invalidating specific state provisions while leaving room for targeted regulation. This is the most likely outcome. Courts don't typically issue sweeping rulings when they can decide cases more narrowly. The result would be a mixed regulatory environment where some state actions survive and others don't, forcing iterative litigation to establish boundaries.
All three scenarios assume the current U.S.-China trade truce holds. As multiple experts note, if that breaks down, the political appetite for rapid AI development could override concerns about domestic regulation entirely. National security arguments would dominate. The framing would shift from "how do we protect consumers" to "how do we beat China." That's a different conversation with different political coalitions.
The Data Center Backlash You're Not Hearing About
AI policy in 2026 won't just be about algorithms. It'll be about silicon and steel. The National Conference of State Legislatures predicts data centers will be a central legislative concern this year. The political narrative focuses on energy affordability and sustainability, but the grassroots backlash runs deeper.
Americans increasingly feel negatively about an AI-driven future. Data centers are vessels for that anxiety. For constituents worried about job loss, electricity costs, and corporate power, blocking a local land-use permit or a corporate tax credit is how their voices get heard. This matters because even if energy costs are contained, the backlash probably won't be.
AI companies are adding the equivalent of Japan's entire power generation capacity over the next five years to accommodate data center growth. Communities are being asked to accept these facilities, often with promises of economic development that haven't materialized in earlier cases. If the AI bubble pops, and speculation about that timing is growing, ratepayers could be left holding stranded assets.
This creates a parallel regulatory fight. The administration has threatened to use emergency powers to override state and local objections to data center construction and associated fossil fuel infrastructure. States and localities are pushing back, arguing they should lead on land use and infrastructure decisions. It's federalism again, just in a different policy domain.
What This Means If You Build AI Products
The immediate takeaway is that regulatory uncertainty isn't going away. California's companion chatbot law is in effect now. Colorado's AI Act takes full effect in June 2026. Other states are enacting laws on narrower issues: election deepfakes, algorithmic discrimination in hiring, consumer scams. The executive order doesn't pause enforcement of most of these laws. It signals federal intent to challenge them, which is different.
Companies making business decisions on the assumption that the current deregulatory posture represents a permanent shift in regulatory philosophy may be disappointed. The DOJ's AI Litigation Task Force began operations on January 10, 2026. Legal challenges are coming. The outcomes will determine which state provisions survive and which don't, but that process takes years, not months.
In the meantime, the safest compliance strategy is to meet the most stringent state requirements that apply to your use case. If you're building companion chatbots, California's SB 243 sets the baseline. If you're using AI for employment decisions, Illinois and Colorado both have laws in effect. If you're deploying AI that makes consequential decisions about housing, healthcare, or credit, multiple states regulate that.
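To make that concrete, here's a minimal sketch of the "strictest applicable rule" approach as a requirements matrix, written in Python. The statute names come from the discussion above, but every use-case tag, requirement flag, and function name is a hypothetical placeholder for illustration, not a summary of what these laws actually require.

```python
# Minimal sketch of a "strictest applicable rule wins" compliance baseline.
# Statute names are real; the use-case tags and requirement flags are
# hypothetical placeholders, not legal guidance.

from dataclasses import dataclass


@dataclass(frozen=True)
class StateRule:
    statute: str
    state: str
    use_cases: frozenset[str]      # product categories the law reaches
    requirements: frozenset[str]   # obligations it imposes (illustrative)


RULES = [
    StateRule("CA SB 243", "CA",
              frozenset({"companion_chatbot"}),
              frozenset({"minor_disclosures", "crisis_referral_protocol"})),
    StateRule("Colorado AI Act", "CO",
              frozenset({"employment", "housing", "healthcare"}),
              frozenset({"impact_assessment", "discrimination_testing"})),
    StateRule("IL AI Video Interview Act", "IL",
              frozenset({"employment"}),
              frozenset({"candidate_consent", "ai_use_notice"})),
]


def baseline_requirements(use_case: str, states_served: set[str]) -> set[str]:
    """Union of obligations from every state law that applies.

    Meeting the union is how "comply with the most stringent applicable
    requirement" cashes out, assuming obligations are additive rather
    than contradictory.
    """
    return {
        req
        for rule in RULES
        if rule.state in states_served and use_case in rule.use_cases
        for req in rule.requirements
    }


# A companion chatbot serving all three states only triggers California's law:
print(sorted(baseline_requirements("companion_chatbot", {"CA", "CO", "IL"})))
# ['crisis_referral_protocol', 'minor_disclosures']
```

The union approach only works while obligations stack. Where state rules genuinely contradict each other, or a federal rule, you're back to the conflict problem discussed next.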
The harder question is what happens when federal and state law directly conflict. The executive order instructs agencies to identify such conflicts and directs the DOJ to challenge them. Until courts rule, companies face a choice: comply with state law and risk federal enforcement action, or comply with federal guidance and face state enforcement plus private lawsuits.
Most companies will choose state compliance. State attorneys general have enforcement authority, state laws include private rights of action with statutory damages, and courts are more likely to uphold state consumer protection laws than to side with an executive order that may exceed presidential authority. The political cost of being sued by families whose children were harmed is higher than the cost of arguing with DOJ lawyers over preemption doctrine.
The Bigger Picture No One's Talking About
All of this assumes AI capabilities continue advancing in ways that justify the current level of investment and hype. That assumption is shakier than it was a year ago. Models are getting better at benchmarks, but real-world utility isn't scaling at the same rate. The cost of training and running models is dropping as efficiency improves and smaller specialized models replace general-purpose systems.
Chinese models like DeepSeek and Swiss models like Apertus have been trained with orders of magnitude less compute than U.S. models. They run on phones. As the industry moves from "do everything" models to actually useful specialized tools, the center of AI power shifts from a handful of tech giants to a broader ecosystem, which improves competition and makes regulatory capture harder to achieve.
This shift also changes the policy conversation. Much of the push for federal preemption comes from companies arguing that fragmented state regulation makes it impossible to deploy AI at scale. But if AI deployment shifts toward smaller, specialized models running locally rather than massive cloud-based systems, that argument weakens. The regulatory burden of complying with state laws becomes more manageable when you're not trying to serve every use case with one model.
The AI policy landscape in 2026 is less about whether to regulate and more about who has the authority to do it, what they can require, and how companies actually comply when federal and state rules point in different directions. Those questions won't be resolved this year. But the legal and political battles playing out now will shape the answers for the next decade.