Q&A: Deep Inside the Mind of an A.I. Doomer
The activist group Stop AI says that A.I. is going to kill us all. Full stop. We talked with one of its co-founders about where this gloomy philosophy came from.
As much as A.I. is a tangible product, it’s also a story, a belief system, and a philosophical debate. Will it truly surpass human intelligence to achieve something powerful? Are the costs to the environment, the economy, and our own bodies and minds worth the productivity-enhancing upside? Is the economic boost it’s creating real or a bubble? And what really is A.I., after all?
People on all sides of the debate are spouting a lot of wild ideas right now, gassed up in an environment that encourages both hype and hysteria. And one of the ideas that seems to consistently generate a lot of attention is that A.I. is going to wipe out humanity. In short order. With no ability for us to resist or fight back.
Sam Kirchner, a co-founder of the Bay Area-based activist group Stop AI, is a big proponent of this idea. The 27-year-old and his group came across my radar earlier this year in local news coverage about their actions. They’ve held regular protests at OpenAI’s headquarters in San Francisco, including one that resulted in the arrests of three of their members in February. And their flyers advertising weekly meet-ups at bars in San Francisco and Berkeley are hard to miss around town.
But I came face-to-face with the group for the first time last month, at a talk with author Karen Hao and others in San Francisco, where their members had to be cut off from the mic after shouting warnings about A.I. during a time for audience questions. I was surprised by the gloominess of their predictions, the religious-like certitude about the apocalyptic end of human civilization — not later, but now — and the wild-eyed tone of their forecasts. It struck me as distinct from the typical alarmism of activist groups.
I wanted to learn more about them: who they were, what motivated them, and where their dark outlook had originated. It felt like a good story about how everyone is going a bit mad right now about A.I. And who’s to say which mad man or woman’s prophecies will eventually turn out to be correct?
I sat down with Kirchner at a cafe in Oakland this week to discuss. Stop AI is most concerned with the development of artificial super or general intelligence, A.S.I. or A.G.I., a much-bandied-about but still vague threshold at which A.I. systems become capable of surpassing humans at most mental tasks. A lightly edited transcript of our conversation is below.

Eli Rosenberg: Hi, Sam. In your words, what are Stop AI’s guiding beliefs?
Sam Kirchner: Our primary demand is to permanently ban artificial super intelligence and artificial general intelligence. We want to do that because you cannot have proof, before building artificial super intelligence, that it will stay safe and not cause extinction or mass job loss. It’s going to be the new most intelligent species on the planet and could very likely wipe us out just like we wipe out many dumber species. Not because we hate them, but just because we have more important things to do. And that goes back to instrumental convergence, this idea that for any end goal, an A.I. will have sub-goals that are instrumental to achieving that goal. And those sub-goals will look like attaining more resources and preventing itself from being shut down.
How we most likely die is it wanting to cover the earth in data centers to perform more computational tasks that require a lot of compute power, and the entire biosphere being eliminated.
We won’t be able to shut it down once [A.G.I.] is here. And then the second demand is to put in place a citizens’ assembly into the government, a random sortition of normal people to decide on what to do with the development of narrow A.I. That means systems that are good at one thing and are not general purpose and can’t improve their own design or lead to super intelligence.
There are some A.I.s that are good, like radiology, navigation, whatever, and as long as we’re voting as a society on what A.I. we want and what A.I. we don’t want, that’s fine.
ER: How did you arrive at that conclusion that A.I. or A.G.I. was definitely going to kill all of humanity?
SK: Our position is not that it’s a hundred percent definite it’s going to kill humanity. Our position is that we don’t know the actual chances that it’ll cause extinction. Because we don’t know the probability of causing extinction, and if we can’t shut it down once it’s created, we should never build it…
There’s a litany of A.I. expert quotes on this. Geoffrey Hinton, who won the Nobel Prize [for his work in A.I.], said that there’s a greater than 50% chance that artificial super intelligence could cause extinction.
ER: Are you concerned about the other, more immediate risks of A.I. such as environmental degradation, job loss, labor and data issues, autonomous weapons and surveillance, as well as the many ethical and moral questions about trying to supplant human intelligence?
SK: Our primary concern is extinction. There’s experts who have said this could happen in the next 1, 2, or 3 years. It’s the primary emotional thing driving us: preventing our loved ones, and all of humanity, from dying. If people who are listening to this think we’re crazy sci-fi nutjobs, that’s fine. As long as you’re willing to come to a protest. As long as you’re willing to go block the door of this company and help shut down the economy and prevent the further development of very powerful A.I. systems, then I don’t care what your reasons are. We want to help anyone who has a bone to pick with A.I. If your thing is just about job loss, if your thing is just about IP infringement, if your thing is about racial bias or autonomous weapons, or whatever, as long as you’re willing to actually do something about it, we want to help and do something. Meaning in the real world, come to a protest.
ER: You folks were at a talk, which I wrote about, with a group of people who were both interested and concerned about A.I. Yet members of your group had their microphones cut off after not adhering to the question format, and drew grumbles and accusations from organizers. Do you think you’d have an easier time with a message that is more palatable to folks who are concerned about A.I. but not sure about the whole extinction thing?
SK: It’s a slow process, and it is kind of a lot to hit people with, “You’re going to be dead in two years.” People’s default setting is to just tune that out. It’s the Chicken Little thing. Most people who say the world’s going to end have been wrong. But [people] have to understand that you cannot keep feeding your family and not pay attention to this, because you’re not going to have a family in a year. They could be dead. You have to allocate some percent of your brain to this problem.
When I talked to [Karen Hao] after the event, she said, “I get where you’re coming from. But I just feel that the existential threat, the threat of everyone dying is sort of sucking the air out of all these other issues.” And that’s a valid concern, but we help promote other current harm issues at our protests and in our literature. The same people who are saying we should focus on current harm issues almost never let us say our piece. It’s a little bit of hypocrisy.
ER: A lot of people working in A.I., including those at the heads of these companies who are trying to sell these products, have also warned about these existential threats.
What do you think about the criticism that all the talk about how powerful and threatening these systems are can actually help retail them, by making them seem inevitable, potent, obviously useful, and worth people’s time and money, when all of those things are very much open questions? We’re talking about systems that still can’t count the number of b’s in the word blueberry.
SK: That is a very good counterargument, but I’ll say it’s really a weird sales strategy to say that your product is going to kill everyone. If they were right, and I think they are, then people should be losing their minds. But these issues are not super well known to the general public. If people knew what some experts were saying, it would only be a matter of time until the populace woke up and did something about it.
ER: Tell me about the origins of Stop AI. How did you all come together?
SK: We started online, [forming] out of a Discord server group called Pause A.I. They believe that artificial super intelligence is being built unsafely at the moment, but can be built safely eventually. Our position is that it can never be done safely.
I founded it with a few others. We kind of figured that if anything’s going to happen with this movement, it’s going to be in San Francisco. I flew down here from Seattle [almost two years ago] and stayed in a homeless shelter for four months and then got arrested for blocking the front entrance to OpenAI a couple of times. And then that got funding….Right now we have, if I counted up all the people on the Signal chats, probably like 200 people in the Bay Area who have varying levels of activity, and four guys in Oakland working on this full-time.
ER: How are you all funded?
SK: We have a mixture of small donations and large donations, and we definitely need both. We have a donate page.
ER: What was your background before this?
SK: I was working in mechanical and electrical engineering, in the avionics industry in Seattle. I originally wanted to be an electrical engineer, but then I read a book called “Superintelligence” in 2016, by a guy named Nick Bostrom. It was pretty clear that studying intelligence was going to be where it was at for the next ten years.
I was originally going to go to school for neuroscience…I was doing community college to get my pre-reqs done. And then before applying to the four-year programs, which I would have gotten into as a junior, I was just like, this is pointless. We’re going to have A.G.I. by the time I’m done. And at that point, super intelligence will be able to do whatever I do in neuroscience a million times better. This is kind of what originally got me angry — the idea that if artificial super intelligence was able to do everything that we can do but better, a lot of people, including myself, would lose purpose in life.
The [Bostrom] book is pretty pivotal for a lot of people in the A.I. safety space. It talks about the loss of control to a super intelligent A.I. and how it could lead to extinction. The book doesn’t put a hard timeline on when it would arrive. But when GPT 3.5 came out [in November 2022], it was the first time that a lot of people experienced talking to a computer in a way that they talk to a human and they could no longer tell if they were talking to a computer or not. That’s when I was like, holy crap, A.G.I. is going to be smarter than all humans in all technical domains. And it’s going to be here way sooner than “Superintelligence” and a lot of experts predicted. The main reason I’m doing this now is because I don’t want people I love to die. And these companies are very likely going to build something that’s going to kill everybody.
ER: Do you have a background in activism? Where does your group look for organizing instruction and guidance?
SK: I’ve never really protested anything [before this]; I’ve been pretty apolitical. But we’re learning from the example of other groups, like Just Stop Oil, researching how they do it, and also looking at non-violent resistance. There’s a book called “Why Civil Resistance Works” by Harvard professor Erica Chenoweth. Chenoweth looked at violent and nonviolent revolutions over the past hundred years and found that if you can engage only 3.5% of a country’s population in active nonviolent protest, you’re almost guaranteed to achieve any political demand. So it’s this notion that you don’t have to get 50% to actually enact change.
With Just Stop Oil, they achieved their primary demand — no new oil and gas licenses [in the North Sea], with less than 1% of the UK population engaging in protest. And so basically we’re just trying to be the Just Stop Oil of A.I.
ER: Is there a place where you draw the line, tactics-wise?
SK: Yeah, we’re nonviolent…I don’t know what our chance of succeeding is, but at the end of the day, it’s kind of like whatever chance we have, I think it’s very low. But regardless, I’m just going to go off fighting.
—
Here’s what else we’re reading:
In an absolutely blistering takedown, Georgetown Professor Olúfẹ́mi O. Táíwò writes that “Ezra Klein is wrong: shame is essential.” “Common decency stigmatizes people that do not participate in it—removes them from voluntary association. We indeed have to live with one another, but terms and conditions apply,” he writes in The Boston Review.
And another one, in Techdirt: “The ‘debate me bro’ playbook is simple and effective: demand that serious people engage with your conspiracy theories or extremist talking points. If they decline, cry ‘censorship!’ and claim they’re ‘afraid of the truth.’ If they accept, turn the interaction into a performance designed to generate viral clips and false legitimacy. It’s a heads-I-win-tails-you-lose proposition that has nothing to do with genuine intellectual discourse.”
Writer Adam Becker, in a withering review of the A.I. doomer tome du jour, “If Anyone Builds It, Everyone Dies,” says he doesn’t buy the gloom, believing that we humans are projecting our own evils — which exist without A.I. — onto these thought- and feeling-less machines. “In reality, [authors] Yudkowsky and Soares (unwittingly) serve the same interests as the accelerationists: those who profit from the unfounded certainty that AI will transform the world,” he writes in The Atlantic.
More tea on Larry Ellison’s ascendance as the billionaire string-puller of the moment, in this solid rundown in Wired.
The average warehouse worker in the Inland Empire — where one out of every 15 workers is employed in warehousing, most notably for Amazon — earns 75% of what other workers in the area make, according to a new report that spotlights concerns about overreliance on the industry locally.
And don’t forget to check out the first-ever Hard Reset Awards and submit your nominations.
See you all next week.