The New AI Gatekeepers Are Coming — Even as States Try to Slam the Brakes
As insurers increasingly use algorithms to deny care, a federal Medicare pilot will push AI-driven claim rejections onto patients in six states — setting up a direct clash with state lawmakers.
When he received a letter from Cigna refusing to pay $350 for routine bloodwork ordered by his doctor, Nick van Terheyden, a physician in the early stages of osteoporosis, recalled wondering how the medical director who signed the letter could possibly know enough about his case to make that judgment. “This was a clinical decision being second-guessed by someone with no knowledge of me,” van Terheyden told reporters at ProPublica. That was in 2021. By 2023, ProPublica’s reporting had uncovered how such decisions were being made:
The company has built a system that allows its doctors to instantly reject a claim on medical grounds without opening the patient file, leaving people with unexpected bills, according to corporate documents and interviews with former Cigna officials. Over a period of two months last year, Cigna doctors denied over 300,000 requests for payments using this method, spending an average of 1.2 seconds on each case, the documents show.
For years, algorithmic decision-making has been on the rise in job interviews, rental applications, even bail decisions. Now that insurance companies are increasingly relying on these systems, allegedly to batch-deny claims, Reddit forums are alight with tactics for fighting back. One startup, Counterforce Health, even gives users AI tools to do battle with the insurers’ AI systems, a matchup one Redditor called “the most boring dystopia: pitting paperwork AIs against each other.”
Now the federal government is openly embracing the era of AI-powered denials. The federal agency in charge of Medicare is hiring AI companies under a pilot program that incentivizes them to reject as many claims as possible.
The Wasteful and Inappropriate Services Reduction (WISeR) Model will make traditional Medicare recipients in six states — Washington, Arizona, Ohio, Oklahoma, New Jersey, and Texas — run a new AI gauntlet to qualify for coverage of 15 procedures treating everything from Parkinson’s disease to incontinence. In a statement announcing the change, Centers for Medicare & Medicaid Services (CMS) Administrator Dr. Mehmet Oz said that in his agency’s fight against “fraud, waste, and abuse,” the new model “will help root out waste in Original Medicare.”

Meanwhile, state lawmakers in the very states that the WISeR program targets are moving in exactly the opposite direction. In Arizona, a law passed last year mandates that an insurer or a utilization review entity cannot use AI alone to deny a prior authorization request. In Texas, utilization review agents are likewise newly forbidden from using AI to deny services to a patient. And in Washington State, lawmakers have proposed House Bill 1566 to tighten the reins on how health plans use automation — especially the kind of AI-driven prior authorization and coverage decisions CMS is pushing. These, of course, are laws that govern only state-regulated and private insurance coverage — Medicare is federally administered, so federal law trumps state oversight. But the divergence speaks to the powerful contradiction between local and federal perspectives on AI, and to the enormous fights brewing in the courts.
A wave of lawsuits has already begun, with plaintiffs alleging that insurers’ reliance on AI is leading them to abandon patients. In one widely cited case, the family of Gene Lokken alleges that UnitedHealthcare relied on an algorithm to cut off his rehabilitation care against doctors’ advice, contributing to his rapid decline — a claim now at the center of a major class action. Researchers writing in JAMA found that automated tools used by insurers routinely recommend ending care far earlier than physicians believe is safe, with many denials later overturned on appeal — raising questions about whether speed and cost savings are trumping patient needs. And a son described to The Guardian his repeated fights against algorithm-linked denials of rehab care for his elderly father, battles he sometimes won on paper but lost in real life as delays took their toll — now part of broader litigation over insurer use of predictive software.
Automated decision-making isn’t just an abstract worry: it’s becoming a lived experience. And while state lawmakers are eager to beat it back, the federal government appears to be preparing to let it flood the market.
“It’s not good medicine. It’s not caring for patients,” van Terheyden, who was eventually found to be at risk of bone fracture without testing and intervention, told ProPublica.
“Intellectually, I can understand it. As a physician, I can’t. To me, it feels wrong.”