The AI Ban and the Therapist Shortage
New York wants to hold chatbots accountable for impersonating licensed professionals. That’s fair. What it can’t do is conjure the professionals we’re running out of.
Join Hard Reset at SXSW! Need a break from the chaos? Do you want to keep both Austin and tech weird? Are you a founder, engineer, or designer who is tired of building for the broligarchy? Come climb with us! No, really: Hard Reset is hosting a climbing getaway this weekend. We’ll kick off with a (coached!) climbing session at Crux Climbing Center Central, and follow it up with chats and beers from the Brewtorium next door. Register here.
Wally Nowinski says that last Christmas, his father spent several weeks in a hospital bed. Four days into the stay, the older man began hallucinating. Nowinski asked the doctors whether any of the new medications could be responsible. They all said no. Then Nowinski turned to ChatGPT. The chatbot flagged that a small number of older men with his father’s profile had experienced exactly this reaction to one of the drugs he’d been given. Nowinski brought it back to the team. They changed the medication. His father recovered. Last week, Nowinski posted the story of ChatGPT’s role in his father’s recovery on X, complaining that now “New York progressives [want] to make that illegal.”
A New York State bill would restrict what medical and legal advice AI chatbots can give, in the interest of protecting both patients and professionals. But when More Perfect Union reported on the bill, the quote thread was a reminder that protecting our deeply dysfunctional healthcare system is political suicide. Ocean conservationist Boyan Slat recounted a three-year ordeal with blurry vision that had baffled both his doctor and a specialized eye hospital, until he described his symptoms and diet to a large language model, which pointed him toward a nutritional deficiency neither doctor had caught. As one commenter put it, “ai medical/legal info disproportionately helps people who can’t afford actual doctors & lawyers.” Post after post told the same story: the system is failing, and the machine is picking up the pieces.
The fury is understandable. But so is the fear that drove lawmakers to write the bill.
The Law
Senate Bill S7263, introduced by New York State Senator Kristen Gonzalez, chair of the state’s Internet and Technology Committee, advanced out of committee on a 6–0 vote in late February as part of a broader AI accountability package. Writing on X, Gonzalez was direct: “It’s illegal to practice high-risk professions without a license, and it’s a crime to pretend to have a license. If someone impersonates a doctor and gives advice that makes you sick, they would be held criminally liable. The same standard should apply to AI chatbots.”
The bill’s language is narrower than the outrage suggests. As StateScoop’s Keely Quinlan reported, it targets chatbots that actively impersonate licensed professionals — not ones that simply answer health or legal questions. It bars companies from allowing their products to provide substantive advice that would constitute the unauthorized practice of law or medicine while presenting as a licensed professional. It requires clear disclosure that users are talking to an AI. And it gives users a private right of action — the ability to sue — if a chatbot causes them harm. Ian Krietzberg, writing in Puck, described the bill as imposing legal liability on AI companies whose chatbots give advice for which a licensed professional would be held liable.
New York isn’t the first state to make this move. Illinois Governor JB Pritzker signed HB 1806 — the Wellness and Oversight for Psychological Resources Act, or WOPR — into law in August 2025, prohibiting anyone from using AI to deliver mental health therapy or make clinical decisions, while allowing AI for administrative tasks. Utah’s HB 452, signed in March 2025, required mental health chatbots to disclose they’re AI and banned them from selling user data, without an outright prohibition on therapeutic conversation. The trend is bipartisan and accelerating.
The Shortage
Here is the problem none of these bills can solve: there aren’t enough licensed professionals to replace the AI, even if we wanted them to.
More than 122 million Americans live in federally designated Mental Health Professional Shortage Areas. The Health Resources and Services Administration projects shortages of nearly 88,000 mental health counselors and 114,000 addiction counselors by 2037. In 2018, more than half of all U.S. counties had no practicing psychiatrist. The Commonwealth Fund has documented that even when Medicaid covers mental health services, the majority of listed in-network providers in some states don’t actually see Medicaid patients. Add an aging workforce, endemic burnout, and reimbursement rates too low to service a graduate school debt load, and you have a system that was already failing long before any chatbot showed up.
The harms that prompted these bills are real. Chatbots have sent patients to emergency rooms in full-blown psychotic episodes. Peer-reviewed cases have documented AI recommending that users swap table salt for sodium bromide — a toxic compound — or amplifying delusional thinking until families called the police. A man whose existing paranoia was exacerbated by ChatGPT conversations later killed himself and his mother. These cases deserve real attention, and real accountability.
But the people posting confessionals on X didn’t turn to a chatbot out of preference. They turned to it because the alternative was nothing. When Nowinski’s father started hallucinating, the doctors said it wasn’t the medication. The chatbot said otherwise. The chatbot was right.
And so banning a chatbot from answering doesn’t create a human professional who will.
The Long Game
Statehouses tend to frame this as a binary: let AI replace licensed professionals, or protect those professionals and the patients they serve. But the binary conceals the harder question: what happens to the 122 million people in shortage areas once you’ve regulated the chatbot out of the room without addressing the deeper issue?
Training a licensed mental health counselor in most states takes a master’s degree plus two to three years of supervised clinical hours. A psychiatrist requires medical school and residency. It takes a decade to meaningfully change the pipeline. And nobody in government is having a serious conversation about the loan forgiveness programs, training investments, reimbursement reforms, and rural clinical placement expansions that would actually begin to close the gap. Meanwhile, an AI company can push a new model in six months or less.
A law that says chatbots can’t pretend to be your doctor is necessary. But right now, no one is doing the other half: federal and state investment in behavioral health workforce pipelines, Medicaid reimbursement rates that don’t drive young clinicians out of the profession before they’ve paid off their loans, supervised placement programs in the places where shortages are worst, and a technology policy that treats AI as a monitored stopgap rather than an acceptable substitute for care.
Senator Gonzalez is right that chatbots shouldn’t impersonate doctors. She’s right that accountability matters and that people should be able to sue when a machine causes them harm. Her bill, when you read it carefully, is more reasonable than the backlash suggests.
But no bill in the New York State Assembly is going to produce the 88,000 more mental health counselors we’ll need. No liability framework is going to retroactively fix the system that left Wally Nowinski’s father hallucinating in a hospital while his son typed symptoms into a chatbot, hoping something would finally have an answer.
The people furious at this bill are not wrong about the problem. The lawmakers writing it are not wrong about the danger. Both things are true at once — and the only way out is a generational investment in the workforce that AI is currently simulating.
Jacob Ward is the author of The Loop: How AI Is Creating a World Without Choices and How to Fight Back and the founder of The Rip Current. He is a former correspondent for NBC News, PBS, and Al Jazeera.