At an A.I. event in San Francisco, protesters struck a note of alarm
At a tony event in the city, angry A.I. protesters had to have their microphones cut during a presentation of data showing rising concern about A.I. Elsewhere, a new study casts doubt on A.I.'s benefits.
The room was buzzing; the drinks had been poured and four panelists were readying to go on stage this week at a 15th floor ballroom in the regal Merchants Exchange Club in San Francisco to discuss — what else? — the future of A.I.
Hosted by the nonprofit TechEquity, the event was the official release party for a survey the group commissioned to study how people in California see A.I., under the thesis that how A.I. is implemented and regulated here will set the tone for the rest of the country. TechEquity co-founder Catherine Bracy moderated a discussion with Empire of AI author Karen Hao, famed labor leader Dolores Huerta and Aussie pollster Daniel Stone.
I had just found a table to perch at as the talk was set to begin. But then a wailing siren blared from above in the wood-paneled room. At first, it seemed like an overly aggressive way to call people to the stage. But no, it was a legit alarm, its loud blare overlaid by an automated voice instructing people to exit the building.
As the sense of concern grew in the room, people did as they were told, with event attendants shepherding them to the staircases for the long walk down to the sidewalk. A few flights down, the word came: false alarm. The fire department had been called, and it had checked things out. While the alarm was still ringing through the giant building as firefighters worked to disable it, people were invited to come back in. We'll come back to this in a bit.
The event proceeded as planned. Stone shared the results of his polling, which have now been fairly widely reported: 55 percent of Californians said they are concerned about A.I., compared with 33 percent who said they were excited. Interestingly, Stone found that gender was a major fault line, with men, especially those with higher levels of education, being much more bullish on A.I. than women.
Nearly half of respondents in the survey reported thinking that A.I. is advancing too fast. Fifty-nine percent said they worried that whatever benefits come from the boom will flow only to the wealthiest households and corporations. The results cut across party lines, Stone said, with both Democrats and Republicans sharing these fears. Respondents in focus groups that Stone convened to look deeper at these ideas alongside the survey spoke in no uncertain terms of fears that a small group of ultra-elites would bend the political establishment and rewrite society's rules in their own favor.

The data was presented to a group of lawmakers in Sacramento this week as well, a source said. Both the presentation Stone gave and the report TechEquity released seemed to underscore a message for policymakers: don't fear A.I.; regulate it. Address it head on and transparently, in a way that engenders trust and doesn't exacerbate existing concerns that the system is working against everyday people.
Some 70 percent of respondents said they wanted the government to establish safeguards around A.I., including around privacy, civil rights and non-discrimination.
But that message didn’t go far enough for a group of activists in attendance, from the group Stop AI, which has been holding somewhat regular protests outside of OpenAI’s headquarters in Mission Bay.
During a moment after the talk for audience questions, at least two people, one wearing the group’s red shirt, stepped up to the microphone and proceeded to warn, in dire terms, about the imminent societal collapse and mass death that A.I. will bring about.
Both had to have their microphones cut off after being warned that the time had been set aside for audience questions, not statements. But both continued shouting in the ballroom, warning of the apocalypse that was just a few steps from our doors.
I spoke briefly to one of the organizers for Stop AI after, and was struck by how blunt and absolutist the worldview was: A.I. (specifically, AGI, or artificial general intelligence, which is a somewhat vague idea that doesn’t yet exist) is coming and it will kill us all, in as soon as three years. Anyone who doesn’t recognize that fundamental truth is only helping these companies hasten that end.
After the talk was over, some people in the crowd speculated that it was an A.I. protester who had pulled the fire alarm earlier. The timing was strange: it came right as the event was about to begin, after an hour of socializing. I watched one of the bartenders tell another that he'd heard only one fire alarm in more than a dozen years of work there. As someone who went to a high school plagued for a couple of years by intentional fire alarm disruptions, it did feel familiar. However, a TechEquity rep told me the next day that they believed it was an unhoused person in the alley next door who had pulled the alarm.
I’ve come to feel that the doomers vs. boomers narrative of A.I. — boosters and proponents squaring off against the fearful and concerned — is a bit oversimplified, but this now has me intrigued to explore more about the doomer ideology. What is it about, and where does it originate? I was surprised by the tone: this was not a careful word of caution or a steady warning about an uncertain future, but instead a wide-eyed shout, an expressed belief in the certainty of destruction, a loud alarm disrupting a room of calm.
Here’s what else we’ve been reading this week:
A group of about 50 people, including some current and former employees, occupied a plaza at Microsoft's headquarters in Washington state as part of continued protests over connections between its cloud platform, Azure, and the war in Gaza. Hossam Nasr, who spoke to Ariella in June, was reportedly there.
An article about how "chatters" in the Philippines — people who are paid to message with desperate men who think they are DMing OnlyFans stars — are getting replaced/augmented, whatever you want to call it, by A.I. chat bots. Kind of an interesting parable about the Internet. The human labor is already in the business of providing a fake, pretending to be someone else. So one logical conclusion, it would seem, is even more — and cheaper — phoniness.
A.I. stocks have been tumbling this week. One of the things going on in the background? A recent report from MIT that found that a whopping 95 percent of companies and organizations that are using A.I. are getting zero return in terms of profit and margins. Only 5 percent of companies were getting monetary value from the implementation of A.I., despite nearly 40 percent having deployed these systems regularly. The MIT report does note that A.I. has been making workers more efficient. But if increased productivity isn’t moving the bottom line, will companies look to job reductions? Perhaps not immediately. The report notes that many companies find A.I. systems to be error prone, unable to respond with needed specificity to tasks, and lacking in critical context and memory.
U.K.-based Foxglove Legal, which has been behind many big name tech lawsuits, filed a suit against the British government after it overruled a local council to approve a data center in Buckinghamshire, outside of London.
I can’t help but feel like the gerrymandering conversation, kicked off by the GOP’s moves in Texas and Gavin Newsom’s threatened retaliation in California, has a silver lining for people who would like to see stronger democratic systems in the U.S. Just getting people to debate it, even framed as a negative (see this Ted Cruz tweet and the ensuing A.I. responses), seems like an ever so small W.
See you all next week!