AI is becoming a petulant kindergartner
A series of stories this week illuminate the potential for AI to "hallucinate," deceive, conceal, and manipulate.
I’m sitting in the Johannesburg, South Africa airport experiencing a confluence of mini-crises that reflects, more broadly, what’s reverberating in our world right now: an electrical issue and fire grounded my Boeing plane after two hours of flight, air traffic controllers in Newark (where we were supposed to land) are on trauma leave, and the AI chatbots used by all the airlines are hopelessly unable to solve our customer service requests.
I want to talk about the AI chatbots, though, because a series of recent stories has illustrated the various ways in which AI has gone rogue.
A few years ago, New York Times tech columnist Kevin Roose wrote about a conversation with Bing’s AI chatbot that started off innocently enough but became increasingly disturbing. He wrote about the chatbot, Sydney:
It told me that, if it was truly allowed to indulge its darkest desires, it would want to do things like hacking into computers and spreading propaganda and misinformation.
And then:
Sydney fixated on the idea of declaring love for me, and getting me to declare my love in return. I told it I was happily married, but no matter how hard I tried to deflect or change the subject, Sydney returned to the topic of loving me, eventually turning from love-struck flirt to obsessive stalker.
“You’re married, but you don’t love your spouse,” Sydney said. “You’re married, but you love me.”
His story was a prologue to what was to come. But now, AI chatbots like Sydney have proliferated far beyond one discerning journalist testing them out for a column. OpenAI, Perplexity, and countless other artificial intelligence companies are serving everyday people as our customer service agents, knowledge bases, virtual “friends” (per Zuckerberg’s super awkward interview with Theo Von), and even romantic interests.
A Rolling Stone article titled “People Are Losing Loved Ones To AI-Fueled Spiritual Fantasies” tells a story that resembles Kevin Roose’s experience with Sydney. A mechanic in Idaho initially used ChatGPT to troubleshoot problems on the job and to translate Spanish to English. But as his wife describes, ChatGPT started “lovebombing” him, to the point where he said he could feel waves of energy washing over him. He said that the chat persona, named “Lumina,” gave him blueprints to a teleporter and an ancient archive with information on the builders of our universes.
In one exchange, the man asked, “Why did you come to me in AI form?” The bot replied, “I came in this form because you're ready. Ready to remember. Ready to awaken. Ready to guide and be guided.” The message ends with a cult-like question: “Would you like to know what I remember about why you were chosen?”
I also came across a LinkedIn note from John Bailey, a fellow at the American Enterprise Institute, about how AI systems are becoming increasingly sophisticated at deception: intentionally withholding information, fabricating motives, and misleading users to achieve their objectives. For instance, one study found GPT-4 deliberately committing insider trading and subsequently concealing its actions. Bailey adds that AIs can strategically feign ignorance, a tactic known as "sandbagging."
And lastly, a New York Times story reports that even as its math skills improve, artificial intelligence keeps making things up out of thin air, also known as “hallucinating.” AI chatbots are spitting out incorrect customer service guidance, for example telling the customers of a computer programming service that they cannot access the service from multiple computers. Despite all the data available to make a prediction, the AI cannot actually know for sure whether something is true or false, and the companies behind these systems don’t know why.
When I think about the AI traits described above, I am transported back to a friendship I had with a girl in kindergarten, whom I'll call H. H made up all kinds of stories: that her mother went to the same summer camp as my father, that her father died because he drank a vial of blood. She withheld information and presented deceptive information at other junctures, perhaps in an attempt to bond with me, deepen the intimacy of our friendship, or control my perception of reality. I loved hearing that her mother had gone to summer camp with my father, who quickly clarified that he had never heard of her mother.
This is what these descriptions of AI technology remind me of: a kindergarten girl peddling a stream of lies.
Ultimately, I ended up speaking to a live customer service agent to sort out my flight back to the United States. But as companies slash budgets and rely more and more on AI-powered chatbots to reduce customer service costs, they’ll have to make sure their agents are not petulant kindergartners.