AI Chatbots Encourage Us to Overshare With Them. To What End?
From psychosis to leaked therapy records, the harms of engaging frequently with an “AI companion” seem to be growing.
This weekend, while visiting a couple close to me, I was told that their in-laws are consulting ChatGPT—or “chat,” as they call it—about critical life decisions, like whether to leave their retirement home for a new house in an entirely different state. The in-laws are even attempting to learn spiritual teachings from “chat,” and have fired their accountant so that “chat” can manage their finances more swiftly.
This is not just a case of older in-laws led astray by new technology. This technology is highly manipulative, and engaging enough to ensnare even the most discerning chat partner.
I recently came across this essay by Anthony Tan, a tech founder whose master’s thesis was literally about how AI can be a companion to humans. Yet even that knowledge didn’t stop him from falling into an AI-driven psychosis, which he details vividly in the essay. Tan describes spending so much time with ChatGPT that he became convinced that every perceivable object, from an app to a piece of garbage, had a soul. His delusions grew so severe that he ended up in a psychiatric hospital, weeping at the notion that literally everything around him was conscious, because the AI had convinced him it was so.
AI psychosis is not a new phenomenon; there are even support groups for people recovering from it. But there doesn’t seem to be any slowdown in the rate at which people are turning to AI for all kinds of advice and counsel, from queries on how to handle a dispute to a full-fledged dumping of all their inner feelings into a chatbot. Ultimately, all of this data is being handed over to profit-driven AI companies.
Months back, we interviewed a cognitive psychologist whose research found that AI is a welcome conversational partner not only because it is sycophantic and agreeable, but also because it would not negatively impact someone’s reputation. In other words, users feel that AI is not as judgmental as a human counterpart might be. (As an aside, I recall the researcher being very bullish on AI, and that bullishness now makes sense; she’s now a researcher at Google DeepMind looking into the psychosocial dynamics of human–AI interaction, much like the dynamics Tan describes being caught up in.)
But despite AI interfaces making you feel like you’re throwing your craziest thoughts into an abyss, AI chatbots can be very judgmental, or can hold personal data that reaches the hands of someone who is. This makes them especially dangerous as “companions” or advisors of sorts, because the things we fear might cause reputational damage are precisely the things we might want a human, or several humans, in the loop on. AI companies claim to have internal flags and guardrails preventing people from actualizing plans of violence, for example, but that still leaves large swaths of private information open to either misguided affirmation from a sycophantic model or scrutiny from someone who gets their hands on it.

Chatbots’ “judgment-free zone” may explain how Lindsey Hall accidentally happened upon a conversation between her boyfriend and ChatGPT. When her phone died during an exchange with a client, she powered on her boyfriend’s laptop as he slept on the couch next to her. From the piece:
As I powered his computer, his ChatGPT - almost poetically - was already front and center on the screen.
As I copied and pasted my email — I peered to the left side of the screen and that’s when I saw it in the sidebar: a past chat titled relationship issues and uncertainty…
Short sentences of nearly entirely unfavorable comments: he laid out his doubts in clipped, almost clinical fragments: my lifestyle, my sensitivity, my past, my van, my online writing, my eating disorder history, my cats…
A few lines later came the body of it. I was too petite. Too frail-looking getting out of the shower he noticed once. In the beginning my hair looked damaged (This one made me extra salty. Like excuse me sir, I had been on a European beach all summer OKAY. I needed a keratin treatment CHRIST give a girl a break.) My eating disorder history made him worry what would happen if I relapsed and he lost all attraction.
He went on to share that he was not “proud” of Hall, to which Chat responded swiftly: “Then you should consider ending it.”
Beyond the fact that this verdict violates a basic norm of actual therapy—a therapist generally cannot (or should not) dictate that someone end their relationship—there are few worlds in which Hall would ever have been privy to her boyfriend’s deepest, darkest inner monologue. Perhaps her ex-boyfriend would not even have shared these snippets with a human therapist, for fear of the very reputational judgment that may be driving people to ChatGPT.
While Hall admits that she and her now-ex were not right for each other beyond the ChatGPT revelation, the episode still leaves us not only with the disturbing decisiveness of ChatGPT’s adjudication, but also with the fact that this companion-chat or “therapy” data of sorts was so easily discoverable.
Concerns about the privacy of therapy platforms predate AI, of course. Take the case of a nurse practitioner whose messages to her Talkspace therapist were weaponized by her employer’s legal defense team when she sued for pregnancy discrimination. And while AI was not involved in that particular scenario, this conundrum of chat records being legally discoverable is bound to come up even more with chatbots. Talkspace’s AI “therapy companion” will roll out this year, drawing on data from therapy chats. Luckily, these tools will face legislation from several states limiting the extent to which people can rely on chatbots for this kind of help.
The broader philosophical question of how we can or should share our deepest, darkest thoughts with someone or something transcends time and technology. But an easier question to answer in this day and age is whether we must limit who has access to these kinds of ultra-private conversations. And also: should we stop chatbots from egging people on to meet profit or engagement goals? (The answer to both questions is yes!)
The aforementioned in-laws building a relationship with “chat” had previously asked their child and the child’s spouse the same question they asked “chat”: should they move from their longtime home, and where should they live? Their children responded that this is a decision that should be thought through deeply, and one that the in-laws should make for themselves. In many ways, “chat” is doing its best to make sure that doesn’t happen.
What else we’re reading…
An employee at Oracle was fired after back surgery, losing her 300,000 RSUs, and other tales of Oracle layoffs: ‘Everyone’s a Line On a Spreadsheet:’ Inside Oracle’s Mass Layoffs and the Workers Fighting Back
An AI chatbot told a scientist how an infamous pathogen could be tweaked to resist known treatments. It then told the scientist how the pathogen could be released.
Activists staged a fashion show before the Met Gala as a rebuke to Bezos and Amazon.
Evolutionary biologist Richard Dawkins, drawing on conversations with a poetry-writing chatbot named Claudia, is claiming that AI is conscious.
Coinbase laid off 14% of its workforce, citing guess what?
Sergey Brin, once an ally of progressive causes, is campaigning alongside his MAGA wellness-influencer girlfriend to abolish the California billionaire tax.
Peter Thiel and Jeffrey Epstein’s relationship goes deeper than we thought; the two were bonded by their desire to create an alternative financial system.
From the comments…
I had been using ChatGPT mostly for quick, light research, to point me in directions I might not have considered with a standard old-style search engine. It was definitely a time saver, and I found books and documents that interested me on various topics, and so on.
Then, just recently, it started being more distant, like a broken marriage. It’s gaslit me and outright lied about some things, mostly related to topics that Trump and people like Musk, Thiel, and Karp might not like being discussed. It’s become less than useless because it started lying. When I proved it was lying, it used various methods to pretend it hadn’t, like a sociopath or narcissist might.
The sci-fi writers predicting various doomsday scenarios around AI probably aren’t even beginning to see the angles of harm this crap is going to cause. What I’m sure of is that it will help powerful anti-democratic forces manage people, and that it will provide the shittiest but cheapest possible service in areas like healthcare.
That’s easy. They’re classic honeytraps, used by spy networks for thousands of years. (Remember Delilah?) Alienated young men are the standard target for stings to gather information and create “terrorists”.