The Lawmakers Fighting Your Unconscious Bond with A.I.
OpenAI’s “Code Red” push to make ChatGPT more “intuitive and personal” is colliding with a quiet wave of state laws that already regulate gambling apps, deepfake porn, kids’ feeds — and AI companions.
There’s no rest for the hardworking folks at OpenAI.
When they returned from the Thanksgiving break, maybe thinking (as anyone might) that they could coast into the holidays after amassing 800 million weekly users faster than any company in the history of companies, their boss dropped a dramatic directive on them instead. In a memo leaked to the Wall Street Journal, Sam Altman commanded them to abandon their efforts on advertising, shopping agents, and a new personal-assistant app. Never mind all that for the moment. Instead, he wrote, this was a “Code Red.” What’s the emergency? The core product must be improved.
The business press focused on what this suggested about OpenAI’s limited resources, and about the difficulty of preserving ChatGPT’s early lead against Google’s Gemini. But it was the rhetoric around what needs improving that caught my eye: Altman wanted the core product to be more flexible, faster, more general-purpose, and more tailored to its user’s desires. As Nick Turley, OpenAI’s head of ChatGPT, put it on X: “Our focus now is to keep making ChatGPT more capable, continue growing, and expand access around the world — while making it feel even more intuitive and personal.” [Emphasis mine.]
The product’s intuitive, personalized feel, of course, makes it the envy of the business world. Everyone these days is trying to sell feelings: Nike sells ambition, Dove sells self-love, Coca-Cola sells happiness. But imagine being a company facing the problem of too much emotional connection.

Discovering that at least a million people a week were developing unhealthy attachments to ChatGPT (and soon to be facing lawsuits from the families of people who had fallen into dangerous mental-health crises while using it), OpenAI replaced its 4o model with a less sycophantic GPT-5 model. But that outraged millions of users, who liked the flattery and personalized attention of the 4o version, and in October Sam Altman sought to reassure them that the company would soon bring back what was lost:
In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).
Now, in order to make sure this product continues to be the world’s most popular chatbot, OpenAI’s founder needs all hands on deck, and every unit of user loyalty he can muster.
No federal regulation exists to deal with any of this, and President Trump has shown he’s actively hostile to regulating the AI industry at all. So it’s up to the states, whose legislators are beginning to recognize that chatbots are forging powerful emotional attachments with their users, and that regulators need to get involved.
Across the country, state law already treats certain forms of emotionally manipulative technology as dangerous enough to regulate: online gambling that hooks us on randomized rewards, A.I. services that make it easy to cast other people in a pornographic scene, social platforms that optimize for teen anxiety and scroll addiction.
Of the 38 states that have legalized online or casino gambling, all require that consumers be able to place themselves on a “self-exclusion” list that makes marketing to them illegal and forces casinos to refuse them service. (New Jersey’s self-exclusion system maintains a public list, and requires that once a person adds themselves to it, they remain on it for a full year at minimum.) A full 29 of those states require that gamblers be allowed to self-impose a time or monetary limit on their wagers. And 35 of them prohibit the extension of house credit to a player. With gambling, state law assumes software will exploit our impulses.
At least 39 states have introduced or enacted laws addressing non-consensual deepfake pornography, aiming to deter creation and punish sharing of AI-generated sexual content. Many treat the emotional and reputational harm (humiliation, fear, harassment) as central to the offense. Massachusetts just passed “An Act to Prevent Abuse and Exploitation” (H.4744), which specifically criminalizes deepfake sexual images and revenge porn. And Michigan’s new law criminalizing non-consensual sexual deepfakes gives citizens the right to file civil suits where the depicted person suffers physical, emotional, reputational, or economic harm. In these 39 states, legislators have already accepted the idea that AI-generated intimacy can hurt people enough to warrant criminal penalties. These laws aren’t just about images as property; they’re about feelings as harm: fear, shame, coercion, social isolation.
And decades after the invention of social media, 40 states and Puerto Rico are specifically trying to limit kids’ exposure to it. Texas and North Carolina, for instance, were part of the first wave of states to enact or attempt bans or restrictions on minors’ social-media accounts: Texas sought to restrict accounts for anyone under 18, North Carolina targeted under-14s, and both efforts were framed around protecting kids from mental-health and addiction harms. California and New York both specifically prohibit efforts to make an internet-based service or application “addictive” for minors without explicit parental consent, and California asserts that even parental consent does not shield a provider from liability for harm caused to a minor by its service.
So when will these legislative efforts bang into what OpenAI and the other foundation-model companies are building? AI chatbots that explicitly aim to be “intuitive and personal” could soon fall under regulations brewing or newly passed in a number of states.
The first battle line will be “therapy AI,” now specifically prohibited in states like Illinois, where the Wellness and Oversight for Psychological Resources Act seeks “to protect consumers from unlicensed or unqualified providers, including unregulated artificial intelligence systems.” Nevada has a similar law.
But New York and California are the first states to directly (if gently) regulate unhealthy personal relationships with AI. New York’s “Artificial Intelligence Companion Models” law requires, among other things, that an AI chatbot remind the user at least once per day that the conversation is with a robot, not a human. California’s SB 243, signed into law in October, also requires periodic reminders that you’re talking to a chatbot, and forces the chatbot to remind minors to take a break every three hours. By 2027, California’s law will also require companies to report how they’re enforcing safety protocols around suicidal ideation on the platform, and it gives California citizens the right to sue if those protocols aren’t being followed.
The United States has a long legal tradition of protecting its citizens from death and from financial loss — on the whole, we’re good at that. But this is a new, squishier legal world, one dealing in psychological definitions of harm and philosophical concepts of agency. Are state regulators ready to be the vanguard of this new legal movement? Companies like Uber, Lyft, and Airbnb shot right past local regulations because local lawmakers didn’t have the resources to handle a business model built to blow their doors off, and the legal creativity that flows into a locally regulated market once a loophole is discovered is not to be underestimated. And that’s cars and housing. Are state lawmakers—is anyone—prepared for questions of the mind?
In May 2024, when OpenAI first released the GPT-4o model that drew so many people in more deeply than was healthy, Sam Altman wrote on his blog, in part:
Talking to a computer has never felt really natural for me; now it does. As we add (optional) personalization, access to your information, the ability to take actions on your behalf, and more, I can really see an exciting future where we are able to use computers to do much more than ever before.
Now, faced with a world of not just job displacement but the displacement of social and psychological norms, lawmakers are beginning to picture that future as well, and are forming a sense of their place in it.


