Michigan's AI System is an Experiment on the Poor — and the Feds are Making it National
The state destroyed 40,000 lives and cost itself $20 million in court. Now, under new federal financial pressure, it's running the same playbook again.
In 2014, Carmelita Colvin was living just north of Detroit, taking classes at a local college, when a letter arrived from the Michigan Unemployment Insurance Agency. It said she had committed unemployment fraud. She owed more than $13,000. Her reaction, she told a reporter later, was: “This has got to be impossible. I just don’t believe it.” She had collected unemployment benefits in 2013 — lawfully, after the cleaning company she worked for let her go. In the end, it took nearly six years and a class-action lawsuit to clear her name. And all that time, the mistaken fraud charge continuously sabotaged her life, sinking job applications, including one with a county sheriff’s office.
The system that flagged Colvin — the Michigan Integrated Data Automated System, or MiDAS — looked at first glance like a victory over bureaucracy: It adjudicated more fraud cases in two years than Michigan had processed in either of the preceding two decades. The year it was deployed, Michigan laid off a third of the workers at its unemployment agency — 400 employees — replacing human judgment with an algorithm that presided as judge and jury. But along the way, MiDAS issued over 60,000 fraud determinations between October 2013 and August 2015 with a 93% error rate, wrongly accusing 40,000 people in total. And even when the state became aware the system wasn’t working, it did nothing to stop it. The state eventually settled the resulting class-action lawsuit, Bauserman v. Unemployment Insurance Agency, for $20 million in 2022. The system had cost $47 million to build.
Now it turns out Michigan has apparently learned nothing.
The state’s Department of Health and Human Services has deployed an AI case-reading tool to review Supplemental Nutrition Assistance Program applications for accuracy and fraud — using Google Vertex AI, a platform for building and scaling generative AI models. The agency disclosed the system only after Michigan Advance asked directly; it had no public-facing disclosure to applicants, no announced safeguards, and couldn’t specify a deployment date. Michele Gilman, the Venable Professor of Law at the University of Baltimore, noted that when automated systems produce decisions that cannot be explained, any nominally available human review becomes constitutionally meaningless — the classic “black box” problem.
The bureaucracies that provide social services like SNAP benefits are notoriously out of date and difficult to navigate. An AI system that makes the system function better for applicants would be a welcome change. But states aren’t incentivized to push toward easier applications or faster services. If anything, thanks to policies from the Trump administration, the opposite is true.
Under H.R. 1 — the One Big Beautiful Bill Act — states face financial penalties based on their SNAP payment error rates. Pay out too many people in error, and the federal government can force that state to pay millions in fines. But there’s a diabolical loophole, one that makes error-riddled AI systems deeply attractive to administrators. As the nonpartisan Brookings Institution discovered, wrongly rejecting an eligible applicant does not count as an error under this measure. In other words, the law structurally rewards wrongful denials.
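To see why the loophole bites, consider a toy sketch of the incentive — this is an illustration of the logic Brookings describes, not USDA's actual payment-error formula. A metric that counts only over- and underpayment dollars on approved cases never sees a wrongful denial, so denying an eligible applicant can never raise the rate:

```python
def payment_error_rate(cases):
    """Toy payment error rate: error dollars on approved cases / total dollars paid.

    Each case is a dict with keys 'approved', 'eligible', 'paid', 'owed'.
    Denied applicants -- rightly or wrongly denied -- never enter the metric.
    """
    paid_total = 0
    error_dollars = 0
    for c in cases:
        if not c["approved"]:
            continue  # wrongful denials are invisible to this measure
        paid_total += c["paid"]
        error_dollars += abs(c["paid"] - c["owed"])
    return error_dollars / paid_total if paid_total else 0.0

cases = [
    {"approved": True,  "eligible": True,  "paid": 300, "owed": 300},  # correct payment
    {"approved": True,  "eligible": False, "paid": 300, "owed": 0},    # overpayment: counted as error
    {"approved": False, "eligible": True,  "paid": 0,   "owed": 300},  # wrongful denial: ignored
]
print(payment_error_rate(cases))  # 0.5 -- denying the eligible applicant cost nothing
```

An administrator facing fines on this number can lower it simply by denying more people, which is exactly the pressure the new law creates.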
Michigan’s SNAP error rate makes it highly vulnerable to the new federal fines. The state’s error rate reached 13% in 2022 and was still nearly 9.5% in 2024 — high enough to trigger substantial federal financial exposure. Under that pressure, the state reached for AI. Texas, facing a similar predicament, is projected to pay $709 million in federal penalties in 2027 under the new rules. Every state above the 6% threshold is now running the same political calculus — and nearly all of them are.
What’s happening in Michigan is happening globally, and the international response could not be more different from Washington’s. The EU AI Act, adopted in June 2024, explicitly classifies welfare-fraud detection systems as high-risk, requiring transparency, human oversight, and nondiscrimination testing. And already, systems like Michigan’s are being shot down by those regulations. Amnesty International’s investigation into Denmark’s welfare system found that its AI-powered fraud detection risks discriminating against people with disabilities, low-income individuals, migrants, refugees, and marginalized racial groups. And in the EU, that sort of bias isn’t something companies and governments can write off as a black box problem — it’s illegal. Amnesty concluded that Denmark’s system likely functions as a social scoring mechanism under the EU’s AI Act — and should therefore be banned under that law.
Here in the United States, however, we don’t enjoy those sorts of protections. The Trump administration repealed Biden’s 2023 AI executive order — which had tasked USDA and HHS with issuing guidelines for AI in programs like SNAP and Medicaid — and its Department of Justice has established an AI litigation task force with an explicit mandate to challenge state AI laws, which at the moment are the only regulations we have in this country.
As of March 2026, lawmakers in 45 states have introduced 1,561 AI-related bills — already surpassing the total for all of 2024, with key areas including algorithmic accountability and transparency requirements for automated decision-making. And here’s the thing: Michigan is among them. House Bill 4668 would require AI developers to conduct regular risk assessments, third-party audits, and public disclosure of safety protocols. Illinois enacted amendments to its Human Rights Act requiring anti-discrimination safeguards for automated decision systems. Virginia narrowly passed a bill modeled on Colorado’s AI Act — only to face a likely veto. The Texas Responsible AI Governance Act established baseline prohibitions against AI systems that enable government social scoring. Massachusetts, Minnesota, New Jersey, Washington, Pennsylvania, and North Carolina all have active legislation targeting automated decision-making in consequential contexts. None of this legislation, however, governs what Michigan’s DHHS is doing to SNAP applicants right now, and the threat of hundreds of millions in federal fines is enough to upend any state’s budget.
Jennifer Lord, one of the attorneys who spent years fighting MiDAS in court, has noted that private companies writing benefit-determination systems are essentially “writing regulations, implementing the law,” with a mandate to save money — and that handing government functions to private entities without checks and balances will produce exactly the same disaster again.
The pattern is consistent across every jurisdiction where this has played out: governments under fiscal pressure turn over life-affecting decisions to automated systems, eliminate the human capacity to review errors, and then spend years in court sorting through the damage. It’s state-sanctioned experimentation on the poor—the very people these systems were supposed to serve.
Further Reading
Undark: Government’s Use of Algorithm Serves Up False Fraud Charges
The Markup: The Seven-Year Struggle to Hold an Out-of-Control Algorithm to Account
Texas Tribune: Texas to Pay $700 Million in SNAP Penalties to the Feds
Benefits Tech Advocacy Hub: Michigan Unemployment Insurance False Fraud Determinations



Want to see it in action, what happens when the algorithm takes over and fucks it sideways? All you had to do was look at Australia and the conservative government’s “robodebt” fiasco: miscalculated debts, a government that was told what it was doing was wrong — and illegal — and what happens when it keeps doing it anyway.
This is why I am also very concerned about AI systems in prior authorizations, hospital infrastructure, and reimbursement. Under the guise of efficiency (which is sorely needed), we will be creating and empowering more opaque and decisive rejection machines.