Sam Altman Tries, Fails to Distract From Damning 'New Yorker' Exposé
Hilariously timed announcements, invocations of superintelligent AI, and a softball Axios interview are not moving the needle.
It’s rare to get an inside look at how the upper echelons of a tech company navigate a public relations crisis, but thanks to discovery from the upcoming court case Elon Musk v. Sam Altman, we know a little bit about how OpenAI tracks and then spins unsavory news.
On March 8, 2024, Altman regained his board of directors seat, which he had temporarily lost during his snap firing and rehiring in November 2023. OpenAI announced that an investigation into Altman’s behavior (others at OpenAI had accused him of being duplicitous) didn’t yield enough information to necessitate his removal. Crucially, OpenAI also announced that it would not be releasing a comprehensive report from the investigation.
“Source: trust me” is not at all convincing or reassuring, but that’s not really the point. A brief flurry of news stories and columns about OpenAI’s lack of transparency is much preferable to releasing an honest appraisal of how Altman lost his colleagues’ confidence.
At 10:10 p.m. on March 8, OpenAI’s then-communications chief Hannah Wong emailed the company’s board of directors (including Altman) about the “widespread coverage of today’s announcements.” She summarized key themes and criticisms she was seeing in news reports, and explained how Altman and fellow board member Bret Taylor were working to “control the narrative.” In the early-morning hours of March 10, Wong issued another update: “Our strategy of one high impact news moment paid off with coverage slowing significantly today… Next week we plan to ‘turn the page’ with a steady drumbeat of product and publisher deal announcements.”
I can’t help but notice some strategic similarities this week, as OpenAI tries to distract from the aftershocks of a New Yorker exposé about Altman. The piece, co-written by Ronan Farrow and Andrew Marantz, is sourced to more than 100 people “with firsthand knowledge of how Altman conducts business.” Farrow and Marantz also spoke to Altman “more than a dozen times.” Through their exhaustive research and reporting, an unmistakable theme emerges: Altman, according to those who know him best, is a chronic liar. Just a sampling from the piece:
A memo written by OpenAI’s former chief scientist Ilya Sutskever listed a bunch of observed patterns in Altman’s behavior; the first pattern on the list was “lying.”
Altman has repeatedly told people that he was not fired from his previous role as president of Y Combinator. But Y Combinator cofounder Paul Graham reportedly said in private that “Altman was removed because of Y.C. partners’ mistrust… On one occasion, Graham told Y.C. colleagues that, prior to his removal, ‘Sam had been lying to us all the time.’”
An OpenAI board member said of Altman: “He’s unconstrained by truth,” adding, “He has two traits that are almost never seen in the same person. The first is a strong desire to please people, to be liked in any given interaction. The second is almost a sociopathic lack of concern for the consequences that may come from deceiving someone.”
A Microsoft senior executive said of Altman, “I think there’s a small but real chance he’s eventually remembered as a Bernie Madoff- or Sam Bankman-Fried-level scammer.”
The entire piece is like this: damning detail after damning detail. Given the unusually thorough reporting, Altman and OpenAI are clearly trying to “turn the page” with a couple of hilariously timed announcements.
One of OpenAI’s announcements is that the company is accepting applications for a “safety fellowship,” which is apparently “a new program for external researchers, engineers, and practitioners to pursue rigorous, high-impact research on the safety and alignment of advanced AI systems.” The stipend is great—$3,850 a week—but the fellowship lasts less than five months and is light on details. One could fairly question when the fellowship was dreamt up, and the motivations behind it, especially because The New Yorker piece extensively lays out how Altman went from talking about AI safety risks in apocalyptic terms to praising President Trump’s “deregulatory approach,” plus whatever this is:
“My vibes don’t match a lot of the traditional A.I.-safety stuff,” Altman said. He insisted that he continued to prioritize these matters, but when pressed for specifics he was vague: “We still will run safety projects, or at least safety-adjacent projects.” When we asked to interview researchers at the company who were working on existential safety—the kinds of issues that could mean, as Altman once put it, “lights-out for all of us”—an OpenAI representative seemed confused. “What do you mean by ‘existential safety’?” he replied. “That’s not, like, a thing.”
Farrow himself noted the peculiar timing of the safety fellowship announcement.
The other OpenAI announcement has gotten more buzz, probably because it’s more ambitious-sounding and also because OpenAI snookered Axios into taking it seriously. I’ll let Axios cofounders Mike Allen and Jim VandeHei take it from here:
OpenAI CEO Sam Altman is doing something no tech titan has ever done: He’s publishing a detailed blueprint for how government should tax, regulate and redistribute the wealth from the very technology he’s racing to build and spread.
Why it matters: Altman told us… that AI superintelligence is so close, so mind-bending, so disruptive that America needs a new social contract—on the scale of the Progressive Era in the early 1900s, and the New Deal during the Great Depression.
The supposedly “detailed” blueprint (it is not detailed at all) is called “Industrial Policy for the Intelligence Age: Ideas to Keep People First.” It states, without evidence, that “we’re beginning a transition toward superintelligence: AI systems capable of outperforming the smartest humans even when they are assisted by AI.” In an accompanying interview with Axios, Altman distanced himself from the document. “These are ideas—I think an ‘agenda’ is too strong of a word,” he told Allen. “We want to put these things into the conversation. Some will be good, some will be bad.”
I agree a few of the ideas are good; they are the sorts of things a group of left-leaning friends might talk about at a bar while hypothesizing about the possibility (though not the inevitability!) of an AI-centric workforce. Stronger worker protections, four-day workweeks, higher taxes on capital gains and corporate income, and a tax on automated labor are all floated in the paper. So is a public wealth fund “that provides every citizen—including those not invested in financial markets—with a stake in AI-driven economic growth.”
The paper acknowledges safety nets like unemployment insurance and SNAP need to be modernized and much more accessible. There is a funny bit about “portable benefits,” a radical concept where human rights like healthcare “are not tied to a single employer.” I think I’ve heard of something like that from a politician or two, but maybe I’m misremembering. There’s also a funny bit about devising “a formal way to collaborate with management to make sure AI improves job quality, enhances safety, and respects labor rights.” I think I’ve heard something about that sort of arrangement, too. (I’m not so sure Altman would be thrilled if his employees unionized.)
Section two of the paper is about SAFETY. Sam Altman promises he actually does care about safety, folks—ignore whatever he said to The New Yorker. The paper predicts (or hedges, depending on how generous you’re feeling) that “some systems may be misused for cyber or biological harm. Others may create new pressures on social and emotional well-being, including for young people, if deployed without adequate safeguards.”
The language in the safety section is impossibly vague, naive, and bizarrely future-focused. OpenAI would prefer to gesture at a bogeyman rather than acknowledge how AI tools are already causing harm because of a lack of guardrails. The U.S. military is leaning on AI right now, as part of a completely unjustified war in Iran. Wouldn’t you know it, a keyword search of the paper for terms like “military,” “war,” “missiles,” and “drones” returns zero results. Why? OpenAI just agreed to a contract that will let it supplant Anthropic and gain classified access to military systems. Separately, there are well-documented reports of ChatGPT causing psychological harm to young people—a phenomenon attributed to a lack of adequate safeguards. These are not future problems; they are current problems.
Another remarkable (and, I’d assume, intentional) oversight in the paper is that there’s zero discussion of logistics. How would any proposed solution begin to gain traction in Congress and the White House? Altman was asked about this in his interview with Allen. His response: he’s a naturally optimistic guy, and “I assume we’ll figure it out.”
More realistically, Altman and OpenAI would try to kill off any regulatory efforts, even if those efforts were inspired by the faux proposals in the paper. OpenAI is already killing off regulatory efforts! The company’s president, Greg Brockman, is a Trump mega-donor, and not because POTUS is passionate about government oversight of the AI industry. As noted by The New Yorker, OpenAI lobbied against AI regulation in the European Union. In California, OpenAI “began issuing threats” in private about a possible statewide bill that would have required safety testing for AI models. A legislative aide told The New Yorker, “I would say that, over the course of the year, we saw increasingly cunning, deceptive behavior from OpenAI.”
This is just how Altman operates. He floods the zone with preposterous projections, sometimes as a sales pitch, other times as a distraction to buy himself more time. In 2017, he reportedly told U.S. officials that China was well on its way to an “AGI Manhattan Project,” and then… never offered proof to back up the claim. He reportedly told the Biden Administration that by 2026, “an extensive network of nuclear-fusion reactors across the United States would power the A.I. boom.” (Nuclear fusion is nowhere near commercial viability.)
I would guess OpenAI embargoed its latest fairy tale—a list of ideas that will never come to fruition—so that Axios would unwittingly compete for headlines with The New Yorker’s piece. OpenAI roped in Axios, but the ploy still flopped. As it turns out, you can’t “turn the page” and zoom past a lengthy New Yorker investigation that’s literally premised on how Altman has made a career out of “turning the page.” The contradictions are too obvious to miss, no matter how much Altman and OpenAI try to change the subject.
When Altman tells Axios, “It’s incredibly important that the people building AI are high integrity, trustworthy people,” most viewers will agree. And thanks to The New Yorker, many of those viewers will laugh and roll their eyes at the messenger—a man who isn’t trustworthy and doesn’t know what he’s talking about.