A poet-mathematician on why she quit OpenAI
Zoë Hitzig is a former research scientist at OpenAI. She decided to quit because she realized that “the company is just not interested in being creative and ambitious in how they approach the social and economic impacts of AI.”
Before I begin my post today, a quick note: we’re organizing a guided collage night in San Francisco next Wednesday (February 25) in partnership with the local art gallery Roborant Review. Join us!
A couple of days ago, I saw a guest essay in the New York Times titled “OpenAI Is Making the Mistakes Facebook Made. I Quit.”
I found the story interesting and rare for a few reasons. One, the author Zoë Hitzig is not the traditional “whistleblower.” She quit swiftly, proactively, and with conviction – calling out the company’s public principles without revealing anything private, while also proposing productive solutions. OpenAI didn’t have the chance to call her disgruntled, a poor performer, or an informant with confidential information, as they usually do with employees who try to report their concerns while staying on.
And while Hitzig is one of several AI researchers and senior leaders who have resigned from these companies in the last month over ethical concerns, she chose to boldly voice her specific reason for quitting in one of the loudest ways she could: through the megaphone that is the New York Times. Her reason – the “red line” she had established for herself – is that OpenAI is now selling ads without guardrails on how people’s intimate data can be used to manipulate them.
Lastly, Hitzig is both a mathematician and a poet, that unusual left- and right-brained person who is technically gifted while also understanding and caring about the sociological implications of their work. (I can’t help but also note that a former safety researcher at Anthropic actually quit to pursue a degree in writing and poetry.)
In my conversation with Hitzig, she noted that at times she felt that her old self who joined OpenAI was naive. But for me, it was actually refreshing to see her still embody idealism and hope rather than be subsumed by cynicism.
Yes, these companies are contradicting their principles with ulterior motives, which is not necessarily new. But what I mostly took away from our conversation was something else altogether: the leadership of these AI companies is actually not matching the brilliance and inventiveness of their employees. So it seems inevitable that these employees would walk away to do more stimulating and creative work.
OpenAI and xAI can of course backfill this brain drain with dutiful employees from other big tech companies. But as those new hires pursue the rote, predictable money-making playbooks of the likes of Meta, some of the brightest employees who remain find this not only unethical, but boring.
Anyway, here is an abridged version of my conversation with Zoë. I hope you enjoy it!

Ariella Steinhorn: I’d love to start by learning what your background is, and how you made your way to OpenAI.
Zoë Hitzig: I’ve always been obsessed with math and fascinated by what can go right or wrong when you try to translate deeply human stuff – needs and desires and fears and so on – into something like code. Math is a beautiful abstract language that is alien to humans. And yet we need to calculate things to make society work – in our markets, in our governments, even in our homes. We need to turn human muck into code.
That obsession led me to economics – a social science pursued largely through mathematics, but one that also involves asking deep normative questions about how we should live with each other and make decisions collectively as a society.
In my PhD, I explored questions around digital privacy and algorithms. I wanted to understand how complicated social goals like equality, privacy, and efficiency are translated into mathematical formulas.
I always thought I would work in academia. But in the last year of my PhD, someone from OpenAI reached out about hiring me. They were growing a team focused on the social and economic impacts of AI, and thought I had a lot to contribute on questions around privacy and the equitable distribution of AI resources.
I was a little suspicious at first, and I took a year to respond to their emails – I wasn’t particularly interested in the corporate world and was getting a lot of random recruiting emails. But ultimately I realized that I wanted to dive in, and to make what I had been thinking about in the abstract more tangible. With these algorithms proliferating, I started to think that we might already be in an emergency, one where the social sciences were urgently needed.
AS: During your PhD, before you were recruited, was AI on your radar?
ZH: Well, there’s the question of what “artificial intelligence” even means. It’s somewhat of an empty term that could probably be replaced in most cases with “statistics” or “algorithms.”
Statistics is pattern matching from data. Algorithms follow a sequence of predefined steps, taking some input and translating it to output. I’ve certainly always been interested in statistics and algorithms! And especially what they do to human lives.
For example, one project I worked on before OpenAI focused on the algorithms used to allocate kids to public schools in Boston. What drew me to this case was the history. In 1974 a federal judge ruled that Boston had been deliberately segregating its public schools and ordered thousands of students to be bused across neighborhoods to integrate them. It triggered some of the most violent protests of the era – mobs throwing rocks at school buses, riots, and eventually massive white flight. The question of who goes to which school was an explosive one.
Now, decisions about who goes to school in Boston are made by a highly abstract algorithm, engineered to have certain properties by teams of economists and computer scientists from MIT and Harvard. It’s a very different situation – now a group of experts say: we did the math, and we hereby present you with the most equal algorithm.
In other words, economists and computer scientists use the language of equality and equity, but those words have highly specific technical meanings that the public cannot really understand (and the public is not afforded the opportunity to open up the black box of decision-making algorithms). There’s a gap between the rhetoric and the math. And a tweak in the algorithm could change the lives of thousands of students.
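For readers curious what such a mechanism looks like, here is a minimal sketch of student-proposing deferred acceptance, the family of matching algorithms that school-choice economists typically work with. The students, schools, priorities, and capacities below are invented for illustration; the real Boston mechanism layers sibling priorities, walk zones, and lottery tie-breaking on top of something like this.

```python
# A minimal, illustrative sketch of student-proposing deferred acceptance.
# All names, priorities, and capacities are hypothetical.

def deferred_acceptance(student_prefs, school_prefs, capacities):
    """student_prefs: {student: [schools, best first]}
    school_prefs: {school: [students, highest priority first]}
    capacities: {school: number of seats}"""
    # Precompute each school's priority rank for every student.
    rank = {s: {stu: i for i, stu in enumerate(order)} for s, order in school_prefs.items()}
    next_choice = {stu: 0 for stu in student_prefs}  # index of next school to propose to
    held = {s: [] for s in school_prefs}             # students a school tentatively holds
    unplaced = list(student_prefs)                   # students not currently held anywhere

    while unplaced:
        stu = unplaced.pop()
        prefs = student_prefs[stu]
        if next_choice[stu] >= len(prefs):
            continue  # exhausted their list; stays unmatched
        school = prefs[next_choice[stu]]
        next_choice[stu] += 1
        held[school].append(stu)
        # Keep only the highest-priority students up to capacity; bump the rest.
        held[school].sort(key=lambda x: rank[school][x])
        while len(held[school]) > capacities[school]:
            unplaced.append(held[school].pop())      # rejected; will propose again

    return {s: sorted(studs) for s, studs in held.items()}

students = {"ana": ["north", "south"], "ben": ["north", "south"], "cai": ["north", "south"]}
schools = {"north": ["cai", "ana", "ben"], "south": ["ana", "ben", "cai"]}
print(deferred_acceptance(students, schools, {"north": 1, "south": 2}))
# -> {'north': ['cai'], 'south': ['ana', 'ben']}
```

Even in this toy example, swapping north’s priority order hands its single seat to a different child, which is exactly the point Hitzig makes about a small tweak in the algorithm changing lives.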
AS: This may not be answerable, but do you think that their assertion, that “this is the most equal algorithm,” is defensible?
ZH: It’s an impossible question, how to allocate kids to public school in a way that’s most fair. In my view, the only way to choose what’s fair is to focus on a fair process. A process that involves people, and allows them to voice their opinions in a way that can directly change the decisions.
AS: Back to OpenAI for a minute. Can you tell me what eventually led you to respond to the recruitment email a year later?
ZH: The person who recruited me was Miles Brundage, an extraordinary researcher and leader. I’m lucky to call him a mentor and a friend. He had a team of people who were working on a lot of big questions around the future of AI and how AI would be equitably distributed.
I responded to OpenAI’s overtures because I became convinced that the only way to think seriously about what was coming was to get in on the inside. We could be six months ahead of the curve in our decision-making – and beyond that, there was power in being in the room. I believed that there was power in using OpenAI’s megaphone to bring some of these issues around privacy and equity into the public consciousness.
We were in a small window where we could still think ahead about consequences and change the decisions being made because of them. Tons of decisions feel very small, but they actually have massive consequences. There’s this path dependence, where every decision creates lock-in.
AS: What’s an example of that?
ZH: The early internet is a good example. Not that many people were shaping the decisions that led to what the internet looks like today. We’re talking about a handful of engineers – Vint Cerf and Bob Kahn designing TCP/IP at DARPA in the 1970s, Tim Berners-Lee building the web at CERN in 1989, and small working groups at the IETF hashing out protocol decisions in rooms of maybe a few dozen people.
They were funded by the Defense Department, working at universities and government labs, and their design ethos was basically like, let’s build it for a small trusted network of researchers who already know each other, and make it work.
So the internet’s core protocols were designed without a built-in identity layer — not because someone decided against it, but because identity protocols weren’t a solution to any problems that existed yet. And I don’t think that was the wrong call at the time. But if you’d thought ahead to billions of users, you might have designed something like anonymized digital identities with some friction – not to track people, but to make it harder to spin up infinite fake accounts, or to spam, or to harass people with no accountability.
That one architectural choice helped produce an internet where everything is frictionless – and frictionless turns out to have enormous costs. The point is that a small number of people, solving reasonable problems for a small network, made choices that shaped the lives of billions of people. And what I want us to challenge ourselves to think about with AI is: which of those decisions can we make differently now, if we try hard enough to see a likely future?
AS: The team lead who you went to work for at OpenAI, did he leave?
ZH: Yes, he left shortly after I joined. He recently launched a non-profit that facilitates third-party AI evaluations.
AS: And was your team disbanded, did it disintegrate?
ZH: The Policy Research team morphed after Miles left, and there is not really an equivalent at the company now. And there were broader organizational shifts at OpenAI, which I believe are very thoroughly reported on and well understood. The main theme being the challenges of going from a small nonprofit-ish research organization to a for-profit consumer business with hundreds of millions of users.
But I was, and still am, a little bit hopeful. Part of my interest in kicking up a storm right now is that there is time for these companies to engage in a race to the top. They can earn consumer trust, raise the bar, and commit to a kind of transparency that the last era of tech giants never attempted.
Companies like OpenAI and Anthropic have been talking about the mission of doing good for humanity from the start. It’s still reflected in their corporate governance structure – they are public benefit corporations (PBCs).
Just to dwell on that for a minute – it is quite unusual to have two companies of this size be PBCs, with their combined worth around a trillion dollars. (The numbers may be fake, but still it’s staggering to say.) So maybe there is some way to use that legal vehicle of being a PBC to create demands – to make these companies compete on trust, to compete on how well they’re living up to their values.
Here’s an interesting piece of my OpenAI backstory. A few months after I joined, a friend of mine who had lost faith was on his way out, and basically said the company had crossed a red line for him.
And he asked me, actually, Zoë, what is your red line? Almost without thinking, I said that it was advertising: If the company starts selling ads without binding governance around how data will be used, it’s over for me. It would be like working at Meta, and I would never take a job at Meta.
Over the last few months, I was of course thinking back to that conversation. And to be honest, some part of me kept thinking, oh, my old self was just too naive. This is big business now, and we have to make compromises. This company is the most exciting place on the planet and I have awesome, brilliant, kind colleagues who respect me. And I myself respect them (and still do, by the way). I liked my day-to-day work!
But another part of me kept coming back to it. Why had I drawn that line? Should I listen to the voice that drew it? The conclusion I eventually came to was that it’s not a red line in the sense that the company has done something so evil I want no part of it. I don’t think they’ve done something evil yet – evil would be more like facing an explicit, intentional tradeoff between significant harm and profit and choosing harm.
But it was a red line in the sense that the company is just not interested in being creative and ambitious in how they approach the social and economic impacts of AI.
AS: Why do you think OpenAI didn’t create a binding governance policy when it came to advertising?
ZH: It’s hard to say. The simple answer is that it’s a lot easier not to, and that it’s against their financial interests.
Also, many execs at OpenAI now have come from Meta – as has much of the rank and file. I recently read a report that twenty percent of OpenAI’s workforce has previously worked at Meta.
So I think that those facts paint a picture. These people are coming in with a certain set of habits and ways of thinking about problems.
Now, OpenAI did publish principles of what they will and won’t do with advertising. But the principles are very vague. And more importantly, they coexist with powerful incentives to override them. Why should we trust them to hold to vague principles that they have billion-dollar reasons to override?
Also, I’m not even sure they are holding to the principles right now. One of the principles is that OpenAI optimizes for “long-term value” rather than time spent in ChatGPT. (The principle reads: “We prioritize user trust and user experience over revenue.”) Yet that principle is already being bent. It’s been reported that the company has been maximizing engagement metrics like Daily Active Users. And I’d guess that the model behaviors that get people onto the platform daily are the same behaviors that would keep them in ChatGPT longer.
AS: Changing topics for a moment. I could be wrong, but I’m not familiar with a lot of mathematicians who are also poets. Where do you think the poet in you has come from?
ZH: Poetry has such an unglamorous and unrewarded role in today’s economy. And it’s been true for hundreds of years that poets have had to hold some other job.
One of my favorite poets is Wallace Stevens, an insurance executive who wrote poems walking to work every day in Hartford. T.S. Eliot also spent time in banks. They’re not great characters, to be honest, but they are phenomenal poets who wrote some of the most important and defining works of the 20th century.
I was lucky to be part of a poetry workshop run by the incredible poet Jorie Graham. This was in graduate school at Harvard, and it was very much normalized that all of us were doing something else outside of poetry and bringing that into our poems.
For example, one person in my class was studying stem cells and inventing blood outside the body, then coming to workshop poems about the choreography of stem cells and the mice he had to decapitate. Another student was a historian of modern China. Others were from the medical school, the divinity school, the law school. That workshop helped me practice poetry as inseparable from whatever else I was doing, and as a mode of inquiry that could take on almost any subject.
AS: Do you think poetry and mathematics have similarities?
ZH: One similarity between poetry and mathematics is that you can create something from nothing. For example, in mathematics, you begin with a certain set of axioms and you figure out what else you can say by using what you’ve already proven. It’s a small set of rules that build the world.
Poetry works the same way. The first few lines of a poem establish the rules — what counts as a move, what the music sounds like, what kind of logic holds. And once the reader accepts those rules, you can take them somewhere they wouldn’t have gone on their own.
AS: Have you written any poems about your experience at OpenAI?
ZH: No. But my poetry has always dealt with questions about technology and power. One of my books, Not Us Now, contemplates who we’re becoming in a world that is increasingly controlled by algorithms.
AS: There were and are major discussions around AI replacing writing and art. But now, the humanities seem to be more valued than ever before – we’re realizing with vibe coding that it’s actually entry-level software engineering and routine white-collar work that may take the biggest hit. What do you think about this industry’s impact on the arts?
ZH: What makes art powerful is the context around it. Context makes the difference between art and content.
With art, you are hearing from a person who moved through the world and felt something powerfully. Then they made this extraordinary and irrational attempt to capture it so that other people could recognize what they had felt. That impulse – the artist’s anguish, the need to make the private communicable – is something that will never go away. It’s the context that allows a piece of art to move people across space and time.
Anything produced by AI definitionally doesn’t have that context.
At the same time, I don’t like being precious about the use of AI in art and writing. If I were to train a model on all of my own work, that model would be full of context – my context. So maybe something originating from that could be a brilliant or moving work of art. I’m open to it.
But if you contrast that with a corporate model shaped by corporate incentives – a regression to the mean with no artistic training – can the output of that be art? Probably not.
AS: I know you just quit. But what is next?
ZH: I hope to tackle the questions I raised in the opinion piece. I want to work on actual proposals. How do we build governance structures that make AI accessible without turning it into a surveillance tool?
What we’re reading:
A whistleblower revealed that ICE deepened its reliance on Microsoft over the past year as part of its immigration crackdown, as reported by the Guardian and 972 Mag: “ICE more than tripled the amount of data it stored in Microsoft’s Azure cloud platform in the six months leading up to January 2026, a period in which the agency’s budget swelled and its workforce rapidly expanded, according to the files.”
Following pressure from organizers in Denver, Palantir announced they are moving their HQ from Denver to Miami. This was after they were pushed out of Palo Alto for the same reason. Wonder how long they’ll last in Miami?
In 2024, Alex Karp, Palantir’s CEO, took home $6.8 billion after the company’s stock increased by over 500%. In 2025, the company had “$1.5 billion of U.S. income and paid exactly zero federal income tax,” according to the Institute on Taxation & Economic Policy.
There’s a whole news cycle about Anthropic and their work with the US Department of Defense. It’s hard to know what to believe. Semafor reported yesterday that Palantir shared conversations it had with Anthropic with the military, seemingly along the lines of “we don’t think the Anthropic guys are killers like us.” For the past few years, Palantir’s chest-thumping about how strong they are has worked out real well for their stock price; something tells us the tide is turning against them.
Today in LA, Mark Zuckerberg is set to take the stand, for the first time ever, before parents and a jury of everyday Americans to answer for the harm caused to thousands of children and families through the intentional design of addictive products. The trial will feature newly unsealed internal documents and powerful expert testimony.
A blog post published last week, mostly written by AI and titled “Something big is happening,” got over 100 million views. It made the case that AI was coming for white-collar work. Wall Street got scared and apparently started dumping stocks of companies with large numbers of white-collar workers. In response, Molly Kinder, with the Brookings Institution, wrote: “capability alone doesn’t automatically translate into job losses. And this is where the breathless conversation is missing something important”. Matt Stoller, with the American Economic Liberties Project, eviscerated the argument entirely. It’s an emerging debate, and one worth taking the time to read widely on and research for yourself.
The Irish tech regulator opened an investigation into X over the creation of sexualized AI images of kids. Good on them. Governments need to hold the oligarchs accountable.
Senator Bernie Sanders is in California this week. On Wednesday he’ll speak in LA in support of the California Billionaires Tax, and on Friday he’ll speak at an event at Stanford alongside Representative Ro Khanna.



