The President Just Ordered His Administration to Wipe Out State AI Laws. The Courts Will Decide Whether He Can.
With Congress frozen and voters wary, the White House has chosen speed over consent—and dared the courts to stop it.
On Thursday, after weeks of hinting, President Trump issued a sweeping executive order to block U.S. states from regulating artificial intelligence. Not parts of it. Not specific applications. A blanket prohibition on state-level AI lawmaking—framed as a necessary step to protect American innovation and global competitiveness.
The EO’s logic sounds tidy when you read it fast. But it also lands the country in a constitutional, political, and cultural brawl that’s been brewing for years.
Over the past few years, all 50 states—plus Puerto Rico—have passed some form of AI-related law, and roughly a thousand bills have been introduced or are still winding their way through legislatures. These laws vary wildly. New York has tried to limit AI “companion” systems. Colorado has banned algorithmic discrimination in hiring. Illinois has outlawed AI systems that present themselves as substitutes for human therapists.
To AI companies, this looks like a nightmare: a patchwork of rules that makes national compliance messy, expensive, and legally risky. That complaint sits right at the heart of Trump’s order.

In it, the administration argues that the U.S. is in the “earliest days of a technological revolution,” locked in a race with foreign adversaries (read: China). To win, American AI companies must be free to innovate without “cumbersome” or “onerous” regulation. State laws, the order claims, threaten that imperative.
There’s just one problem. There is no federal AI law. The states didn’t rush into AI regulation because they wanted attention or harbored some particular hostility toward tech companies. They did it because the federal government hasn’t enacted any protections at all.
There are some national efforts in the works. In September, Senators Durbin (D-IL) and Hawley (R-MO) introduced the Aligning Incentives for Leadership, Excellence, and Advancement in Development (AI LEAD) Act, which would subject AI companies to product liability law, making them answerable in civil court for defective and dangerous systems. And the EO directs tech investor and podcaster David Sacks, the president’s AI and crypto czar, to help recommend new federal legislation.
But in the meantime, we have nothing. No comprehensive data privacy law. No data transparency requirements. No binding rules governing training data, social media data, or how models can be deployed once they’re built. Nothing that meaningfully regulates the fuel or the engine.
States were, in this case, acting as laboratories of democracy, because there was no binding action at the federal level—just a lot of press releases and voluntary frameworks.
Trump’s order doesn’t fill that gap. Instead, it tries to bulldoze the labs.
The document lays out several enforcement mechanisms. It authorizes the federal government to determine which state laws are “excessively burdensome.” It establishes an AI litigation task force, directing the Attorney General to sue states that cross that line. It allows the administration to withhold federal digital infrastructure funding from noncompliant states—a particularly sharp threat for rural areas that rely on that money to give their residents a shot at a digital-economy income.
And in theory, it gestures toward a future federal framework. The job of drafting legislative recommendations falls jointly to the President’s science and technology advisor and to Sacks. The goal: a single national policy that preempts state law entirely.
The competitiveness argument isn’t entirely frivolous. China doesn’t force firms to navigate a state-by-state regulatory maze; there is one national system to comply with.
But that comparison leaves out a crucial detail: China’s national AI rules are anything but lax. Companies must document and disclose training data. Models must be auditable. Outputs must align with Communist Party doctrine. CEOs can be held personally liable for misuse.
It’s a single checklist—but it’s a long and punishing one.
In the U.S., industry advocates also warn that state-level regulation entrenches incumbents: big companies can afford compliance teams; startups can’t. Of course, that argument sounds less convincing in a world where AI tools themselves can track regulatory requirements, and where the firms complaining loudest aren’t scrappy startups at all but the most technically sophisticated and well-capitalized entities on Earth.
Duke law professor Nita Farahany offered one of the sharpest early analyses of the executive order, identifying three tensions it exposes.
First: state experimentation versus national uniformity. The patchwork creates real compliance challenges—but experimentation is how societies learn what works.
Second: democratic process versus executive urgency. The order frames speed as essential. But states acted quickly precisely because Washington didn’t.
Third, and most fundamental: innovation versus safety. Should companies be allowed to test powerful systems on the public and deal with harms later? Or should safety obligations come first?
On that final question, the executive order is clear. Experiment now. Sort it out afterward.
As Farahany points out, the order could have a chilling effect on the state legislatures currently in session. Lawmakers may hesitate to advance bills they’ve spent years developing, unsure whether they’ll survive federal challenge.
But behind the scenes, another reaction is almost certainly underway. State attorneys general, reading the same EO, are likely drafting some variation on the same response: See you in court.
Because what ultimately has to be decided isn’t just how AI should be regulated, but who gets to decide that at all. Under the Supremacy Clause, preemption ordinarily flows from a federal statute; an executive order standing alone rests on far shakier legal ground. Can a president simply declare that states are barred from governing a technology that directly affects their citizens, especially when Congress has failed to act?
That question won’t be answered by an executive order. It will be answered by judges.
And an irony hangs over the whole episode: this move didn’t come from voters, and it didn’t come from legislators willing to defend it at the ballot box. A September Gallup poll found that 80 percent of voters of all backgrounds don’t care at all about “supremacy” in the AI race; they want regulation, even if it slows the industry down. At one time it was rumored that this policy would arrive not as an executive order but as an amendment tucked into the National Defense Authorization Act. Perhaps lawmakers understood that carrying it publicly would be a political liability.
What makes this moment so revealing is not just the substance of the order, but its posture. Faced with widespread public concern, congressional inaction, and fifty states trying—if imperfectly—to do something, the administration didn’t offer a democratic solution. It chose a shortcut. A declaration that the problem of consent could be solved by preemption, and the problem of legitimacy by speed. (That’s a pretty tech attitude.) The EO could buy the AI industry a few quiet quarters. But it guarantees a louder reckoning later, when courts, voters, and the states themselves are forced to decide how to balance innovation and safety—and whether the future of technology policy belongs to the people it affects, or to the executives who insist they must be free to experiment on us all.