State AI Law Is the Only AI Law. Everywhere It's Crumbling.
The Colorado AI Act was the country’s most ambitious effort to regulate algorithmic decision-making. This week it became nothing more than a notification requirement.
At 4:07 a.m. Tuesday, after a contentious overnight session, the Colorado legislature passed Senate Bill 189, which strips out almost everything that made the 2024 Colorado AI Act the most-watched piece of state AI legislation in the country. Gone is the duty of care developers and deployers owed to consumers harmed by algorithmic discrimination. Gone are the mandatory risk-management programs. Gone are the impact assessments. What remains is a requirement that companies notify you, after the fact, when an AI system has been used to deny you a loan, a job, or a place to live, along with an opportunity to appeal. Using AI to change your life is now fine; the only legal requirement left is that they tell you it happened. The law's effective date, originally February 2026, has been pushed to January 2027. Governor Jared Polis, who helped draft the replacement, is expected to sign it within days.
Senate Majority Leader Robert Rodriguez, the bill's sponsor, told The Colorado Sun, "Everybody lost and everybody won." The Colorado Technology Association, the trade group that had lobbied against the original law for two years, called the new version "meaningful progress." Both phrases are code for the same thing: the gutting of consumer protection.
This matters far beyond Colorado, because state laws are all we have. There is no federal AI law. There is no federal AI regulator. In fact, Congress has tried twice in twelve months to preempt state action, once through the budget reconciliation bill, once through the National Defense Authorization Act. Both attempts failed, but on December 11, the President signed an executive order directing the Justice Department to sue states whose AI laws the administration finds "onerous," and singling out Colorado's law by name. For the moment, state legislation remains America's entire regulatory floor.
That floor is crumbling. In Texas, the Responsible Artificial Intelligence Governance Act was introduced in December 2024 as a 43-page bill imposing duty-of-care obligations on developers of high-risk AI systems. By the time Governor Abbott signed it in June 2025, the high-risk framework had been removed entirely. The duty of care was gone. The remaining obligations applied mostly to state agencies, not companies. Disparate impact alone, an AI system that produces discriminatory outcomes for a protected group, "is not sufficient to show intent to discriminate," according to the new law. In California, Governor Newsom vetoed SB 1047, the frontier-model safety bill, in September 2024 after a lobbying campaign from pro-industry forces that included Nancy Pelosi. In New York, Governor Hochul signed the RAISE Act in December 2025 only after securing chapter amendments that cut maximum penalties from $30 million to $3 million and narrowed the law to companies with annual revenues over $500 million.
Each of these laws was negotiated, redrafted, weakened, delayed. Each was advertised on the way down as a “balanced” or “minimally burdensome” approach. Each leaves AI accountability in roughly the place corporate AI deployers wanted it left: practically unenforceable.
The bills to watch next sit in California (pending impact-assessment requirements for high-risk AI systems, Senate Bill 1119, Assembly Bill 2023, and a fight over Senate Bill 53 implementation) and in Michigan, where Senate Bill 760 passed the chamber in May and now sits in the House. Florida, Washington, and Virginia have proposals advancing. Each will face the same pattern: an industry that has gotten very good at convincing state lawmakers that consumer protection and innovation cannot coexist, and that the second is more important than the first.