The Precursor to AI? It Might Be Eugenics.
A new documentary traces the ideology of “general intelligence” to the fathers of eugenics and statistics.
In the last year, much of the tech industry has been fixated on the hiring of “superintelligence engineers” to build AI models, fueling the notion that a lone “genius,” or a small group of geniuses, is making AI possible. Furthering the narrative that rare elite minds are bringing AI to the masses, TIME Magazine named the “architects of AI” as 2025’s Person of the Year, specifically listing Sam Altman, Elon Musk, Mark Zuckerberg, Anthropic’s Dario Amodei, and Nvidia’s Jensen Huang, among others. On the magazine’s cover, these “architects” perch on a steel beam atop a skyscraper, emulating the 1932 “Lunch Atop a Skyscraper” photo of ironworkers taking a lunch break.
But are these CEOs the “builders” that TIME Magazine claims them to be? While the group above are probably the most influential people in AI from a corporate governance perspective, the idea of them as ironworkers dangling from the top of 30 Rockefeller Center is a bit laughable. That’s in large part because the people in AI who actually correspond to the ironworkers “building the skyscraper” are kept hidden from the public eye, most of them living outside the United States in countries with weak labor regulations and little opportunity for something better.
The most suitable modern-day counterparts to the ironworkers breaking for lunch atop a Manhattan skyscraper are data annotators: a web of low-paid subcontractors in countries like Kenya and China who categorize and label pieces of images and text so that large language models can be trained on that information. Despite strict non-disclosure agreements that keep these workers from speaking publicly, some of their stories have come to light, and they describe psychological distress caused by the graphic nature of the material they review.
One documentarian, Valerie Veatch, sought to understand the conditions these workers face, while also grounding her film in the history and ideology from which AI technology was born. Veatch traces the origins of AI to theories of “general intelligence,” a concept first promoted by the eugenicist Francis Galton and his protégé Karl Pearson, who used statistics to justify imperialist and capitalistic goals, as well as to mark some human beings as worthy or unworthy.
I spoke with Veatch about her film Ghost in the Machine, which premieres at Sundance Film Festival later this month.
Ariella Steinhorn: What drew you to this subject? Had you heard of this theory and wanted to go deeper, or did it become apparent through your research that AI had roots in eugenics?
Valerie Veatch: This was actually a process of discovery. A friend signed me up for OpenAI’s filmmakers and artists’ program last October. I’d become disillusioned with the film industry, and was interested to learn more.
I quickly realized that the generative videos the technology produces are really heinous and hyper-sexualized. There were women twerking for the camera, losing their clothes. To me, cinema is an intentional production of images, but this tech was not that. There was such a disconnect between the rhetoric and the reality: instead of tools to democratize the creative process, these were organs of extraction and control with highly racist and sexist outputs. Cinema is visual imagery, and those moving images are crucial in how we communicate culture. When that becomes the output of a toxic stew of data and algorithms, we can feel what that’s doing.
I decided to tell OpenAI that their output was racist. But then they sidelined and isolated me, referring me to a third-party DEI initiative. My final product ended up being a sassy piece about how bad their tech was, so of course it didn’t win anything. But I flew out to their screenings and was horrified by the groupthink on display and by how uncritically everyone was adopting this technology.
So I started reading white papers and books about AI, and reaching out to the academics behind them. That was the genesis of the film, which ended up being two hours long and featuring 35 brilliant people.
The film traces where the idea that machines can think comes from. And that framework is largely shaped by colonialism, eugenics, extractive behaviors, and the patriarchy. It’s entirely unsurprising that Grok is a Hitler-loving chatbot.
AS: Okay, so speaking of eugenicists, can you walk us through the connection here, and the history?
VV: Machine learning runs on clustering algorithms and correlation without causation. The eugenicists also never took causation into account. So they’d say that Black people score lower on an IQ test, even though the questions were oriented toward culturally unfamiliar material. They might ask: does a teacup belong on a saucer, a table, or a horse? A working-class person might answer “table” because they have no other context. Such people were therefore labeled “feeble-minded” and potentially sterilized.
Francis Galton was the founder of eugenics, the endeavor to improve mankind and the intelligence of mankind. We find the legacy of eugenics in ML and AI in two streams: 1) the idea that intelligence can be externalized and measured through an instrument like IQ, and 2) the attachment of a “general intelligence” score to a being.
The very notion of statistics comes from eugenics. The idea that you can take a data point and look for other data points around it comes from Karl Pearson, Galton’s protégé. He invented mathematical tools to sort humans into groups, and he was part of the attempt to create a policy on which humans should be sterilized because their IQ was too low. The idea that you can take a data point on how long a nose should be and build data points around it carries directly into machine learning.
There’s a book called The Concept of Mind, which began the field of behaviorism and stemmed from a post-World War II embarrassment over Victorian occultism. It dismisses as silly the idea that humans are animated by a soul we can’t see, calling that soul a “ghost in the machine” (hence the documentary’s title). The book asserts that we can only measure intelligence based on the behavior of meatsuits, on quality of mind, and therefore that women, infants, and animals don’t have the same quality of mind.
Interestingly, Hitler and the Germans got their ideas from the American eugenicists. Hitler wrote a letter to the Eugenics Record Office at Cold Spring Harbor asking for their books about sterilization and race improvement.
AS: Your film connects with a lot of data annotators, often in Africa and Asia, who see some really dark stuff to train the models. Where would AI be without them? Would it even be possible?
VV: No. The true workers of AI are the data workers. And they are facing what’s been called digital colonialism, a concept that likens the practices of these corporations, and the way they prey on these populations, to colonialism and slavery.
The film includes the experience of a group of workers training ChatGPT before it was a product. They were reading loads of text and training the model to not put out harmful text.
But the text they were consuming was so awful, and they didn’t know what it was or what its purpose was. The volume of slop and horrific content they were having to review and categorize was incredibly traumatizing. There are classifications of sexual content I don’t even want to say out loud.
There’s another scene where workers at a predatory data farm gather. They try to mount a lawsuit, but, depressingly, the president of Kenya announces that the law has been changed so that no one can sue the big tech companies and workers can’t organize.
Something that not a lot of people know is that when these workers accept a gig on these platforms they themselves are used for their data. A prerequisite to getting a job offer is to consent to sharing all the photos in their photo library. So if they have photos of their children on their phones, the companies will use those photos because it’s hard to get kid data. The data worker becomes the data subject in this predatory scheme.
AS: Are the inventors of AI consciously making this connection between what they’re doing and eugenics? Are other philosophies like effective altruism or longtermism rooted in eugenics?
VV: On the first question, possibly. We’ve now redefined human cognition in machine terms, like “our chatbot reaches this level of IQ.”
Eugenics is also connected to other philosophies favored by many in the tech elite, like effective altruism and transhumanism. Their ultimate “goal” is to make humans the happiest, and they believe the way to do that is to put us into machines, so we’re all going to be these little hard drives flying through space. Sam Altman has invested in deep-brain interface technology, so I’d bet he believes in that. Transhumanism and misogyny are also deeply linked.
There’s an interview between Joe Rogan and Elon Musk where they’re talking about Grok, about having sex with a robot someday. Musk says, yeah, maybe in five years. And Rogan’s first question is, “but will she be warm?”
Everything that’s being done is under the auspices of wanting to improve humanity, but there’s subtext to that whole narrative. The goal really is to rank human beings on a scale and to optimize them and to create technologies that frame and order that.
AS: What is the endgame, then? We know what eugenics’ endgame was in various forms, whether sterilization or eradication of entire groups of people…what then for AI?
VV: Fascism has traditionally promised a better tomorrow. But for the power players there is this narrative about the apocalypse that hangs over this current moment. For them, it’s the promise of a planetary escape. Sam Altman may not be trying to do that, but he is also engaging in a fantasy of withdrawal in the face of collapse. Escapism is part of it, this notion of a complete lack of accountability.
In the film I interview the author of a book called Survival of the Richest: Escape Fantasies of the Tech Billionaires. In it he describes being flown out to the private island of a rich guy he can’t name. They were fantasizing about their end-times plans because they’re big preppers. A lot of their questions were about how to keep their staff happy when the world ends, and how to keep the staff from turning on them.
The response was mostly “we’ll just pay them more money.” But their money would have no value. They’d have to engage in a system of care, which I don’t think they’re capable of.
AS: Where do we go from here?
VV: We are swan-diving into the complete disintegration of democracy. A fundamental pillar of democracy is the ability to have a shared truth, but now even the idea of solidarity is treated as cringe. It’s not just AI slop doing this, but it completely erodes our ability to trust one another and build solidarity, and populations that are dependent on this technology are easier to control. That is why it’s important to talk about these things.
It’s critical that we focus our thoughts on systems of care. Only then can we shift this hyper-accelerated, hyper-fractured space that’s not sustainable. And we need to understand that this future is not inevitable. We can create and define the kinds of technologies we want, and push back against tech that strips away agency and creative authorship, against forces of extraction and control.
I also think we need to destabilize the lie of superintelligence. These machines are not thinking. They’re engaging in pattern recognition; they’re algorithms processing data. And unfortunately they’re doing so in the language space, and humans are assigning meaning because that’s how we’re programmed.
It takes away agency from people. When I hear six-year-olds in my kid’s class say we don’t need to try or learn because AI will do it for us, we have to ask ourselves, what do we lose when these little boys think that? That is the biggest thought crime of the century.