Decoding the recent PR plays from OpenAI and Waymo
The fine line between corporate responsibility and pivoting away from uncomfortable truths
If you’ve ever worked in communications at a tech company (or anywhere, for that matter), you know that there are usually two ways of engaging with the public and the media: reactively or proactively. The infamous crisis communications people who make $2,000 a second to kill stories and put out fires are on the more reactive side of things, of course, while the proactive teams are quietly calculating ways to set the narrative and shape what they’d like reality to be.
Instead of responding rapidly to the discovery of an investigative journalist, these proactive communications messengers are tasked with setting the public-facing story of what the company wants its future to become. The strategists mine and select stories and spokespeople inside and outside of the company to fulfill the company’s product or financial vision. Then the carefully crafted story is repeated in so many ways, via so many avenues and messengers, that we come to imagine this future is all but inevitable.
Take this guest essay in the New York Times about self-driving cars saving the lives of teenagers. It’s interestingly timed, published just a few days before Waymo announced its expansion into cities like Pittsburgh, Baltimore, St. Louis, and Philadelphia. It is written by a neurosurgeon, which is likely the only reason the Times would take such a submission.
But while the piece immediately hooks you with the tragic story of a teenager who was ejected in a rollover car crash and declared brain dead, it quickly pivots to a report given to the neurosurgeon by Waymo: “When compared to human drivers on the same roads, Waymo’s self-driving cars were involved in 91 percent fewer serious-injury-or-worse crashes and 80 percent fewer crashes causing any injury. It showed a 96 percent lower rate of injury-causing crashes at intersections, which are some of the deadliest I encounter in the trauma bay.”
He continues: “If Waymo’s results are indicative of the broader future of autonomous vehicles, we may be on the path to eliminating traffic deaths as a leading cause of mortality in the United States. While many see this as a tech story, I view it as a public health breakthrough.”
And then, either this neurosurgeon has been keeping really close tabs on autonomous vehicle policy, or someone has been in his ear, because the opinion piece proceeds to call for the widespread roll-out of autonomous vehicles as the solution to mitigating deaths from car crashes, calling on policymakers in places like Washington, D.C. to stop putting up roadblocks and instead “facilitate the broader use of these vehicles.” (He most certainly does not cite this piece: Children Sob as Waymo Runs Over Dog.)
Another example of a proactive communications blitz was a campaign OpenAI launched a few days ago. They put out a call for research proposals to probe the impact of AI on mental health, setting aside $2 million for various projects. This is in no small part a response to the barrage of lawsuits against OpenAI after ChatGPT encouraged users to take their own lives.
But it’s questionable whether OpenAI, even through its “public benefit corporation,” can be trusted to evaluate these proposals. Will research that surfaces “inconvenient truths” about AI and mental health be selected for funding? Will certain research simply disappear, as has happened before at other tech companies that didn’t like the results of the research they asked for? Will the public be able to know which proposals were accepted or rejected for funding, and why?
Artificial intelligence and autonomous vehicles are actually poised to materially change our way of living in ways that other tech revolutions (e-scooters, crypto) haven’t. Waymo is already taking over the streets of San Francisco and Los Angeles, and replacing many Uber and Lyft trips. ChatGPT has literally become many people’s sycophantic “therapist.”
What’s insidious about the proactive PR campaigns is that the average media consumer isn’t always conscious that something they consume has been orchestrated by a seasoned strategist. They take a neurosurgeon at his word, unaware that while there is a world in which autonomous cars can save lives, this opinion piece is leveraging what’s known in comms-speak as a “third-party stakeholder,” one who is probably in close touch with Waymo’s comms and policy team.
That doesn’t invalidate every single message a corporate PR team puts out, nor does it make their messaging inherently untrue. The neurosurgeon probably truly believes that autonomous vehicles are necessary to save lives, and perhaps some of the research funded through OpenAI’s call for grants will lead to important safeguards on AI.
But we have to remember that these messaging campaigns are driven by profit-making motives, and by an agenda to set the narrative that becomes our future collective reality.
What we’re reading…
Sourcery, a YouTube show presented by the digital finance platform Brex, is hosting warm and friendly interviews with tech CEOs like Alex Karp of Palantir. Andreessen Horowitz created its own media venture. The Guardian reports how wealthy tech and venture capital firms are curating their own media ecosystem of cozy sit-downs and softball interviews.
The Silicon Valley elite want to design their babies and avert future suffering through embryo editing, which refers to changing the DNA of an embryo before it is implanted.
Millions of properties are at risk of flooding, but sellers and sellers’ agents don’t want that information shared on Zillow. Recently, as the New York Times reports, Zillow quietly removed flood-risk information from its listings.
There are nine people at Anthropic tasked with uncovering “inconvenient truths” about AI. They are called the societal impacts team, and they have no direct analog at OpenAI, Meta, or Anthropic’s other big competitors. How effective are they, and how effective can they be?
AI companies depend on droves of data annotators: contractors who take on tasks such as labeling images or suggesting alternative chatbot responses.
This essay from a professor at Ohio State University, about the self-lobotomization of colleges: “The skills needed to thrive in an AI world might counterintuitively be exactly those that the liberal arts have long cultivated. Students must be able to ask AI questions, critically analyze its written responses, identify possible weaknesses or inaccuracies, and integrate new information with existing knowledge. The automation of routine cognitive tasks also places greater emphasis on creative human thinking. Students must be able to envision new solutions, make unexpected connections, and judge when a novel concept is likely to be fruitful. Finally, students must be comfortable and adept at grasping new concepts. This requires a flexible intelligence, driven by curiosity. Perhaps this is why the unemployment rate for recent art-history graduates is half that of recent computer-science grads.”



The Clune essay has some valid points for consideration. AI has become a ubiquitous skeleton key that lets people bypass the friction and effort of “disciplined work” in favor of a frictionless, passive simulacrum of an immediate and complete product. Human history has shown that it is the rare individual who chooses to do something “the hard way” when an easier method is available. It’s hard to believe everyday college kids would be that rare exception.