The fusing of AI firms and the state is leading to a dangerous concentration of power
A researcher says we should focus not just on AI-related job loss or large language models, but on the consolidation of market power between the tech firms building AI and the government
For some time now, the tech elite have been espousing a growing set of libertarian ideals and philosophies. They’ve warned of government regulation as the death of innovation, called for “building” no matter the cost, and even concocted plans for their own societies free of government regulation or taxation.
But when it comes to AI—perhaps the most hyped realm of “private innovation”—growth would be impossible without the backing of the very entity Silicon Valley says stifles them so much: the government.
Today, the federal government and the biggest tech firms building AI have brokered exclusive procurement and land deals that keep power among a few players. Under the cover that this consolidation is necessary to keep the United States competitive globally, OpenAI, Oracle, and Meta are all angling for long-term marriages with the Trump administration to build data centers, grow their market shares, and push their products toward ubiquity.
But what is the benefit to the public when just a few players hoard land and big-dollar federal contracts? Are the funds to build and scale colossal data centers being deployed sustainably? And who is reaping the profit from this rapid AI expansion?
As I was drafting this piece, I spotted a WIRED report about how the Trump administration is disappearing AI blog posts written during the tenure of former Federal Trade Commission chair Lina Khan. Khan was and is a champion of competition and monopoly-busting, and many of the erased posts focused on enabling open-source AI and protecting consumers.
By deleting these posts, the administration is showing its cards on AI policy: despite publicly professing support for free markets, competition, and choice, it is not serious about decentralizing any of the power in the AI ecosystem. In fact, the administration and its Big Tech AI partners are pushing for the opposite result, emboldened to consolidate at the expense of competition, user experience, and potential growth among smaller AI firms. Not so libertarian if you ask me!
With all of this in mind, Hard Reset spoke with researcher Sarah West, the co-executive director of a think tank advocating for AI that serves the public interest, not just a select few. We discuss this consolidation of power among a few AI players, and how the government is actually hindering the development of healthier competition and consumer-friendly AI products while flirting with financial disaster.
Ariella Steinhorn: Tell us a little about what brought you to this work at the AI Now Institute.
Sarah West: I came into this work wearing dual hats, as a researcher and as an advocate. I was particularly interested in citizen media that engaged with digital activism, and learning how folks were leveraging blogs to discuss democracy in places where the media was state-run.
When I got started, tech companies had a geopolitically significant role in shaping the nature of public speech and the scope of democratic participation.
But over time, I also watched as that same tech, created to enable privacy and security, morphed into a business model and an Internet reliant on commercial surveillance.
And so I began to study how market incentives and regulatory interventions had resulted in tech that is used for control rather than liberation.
All of this brought me to AI Now, a think tank single-mindedly focused on ensuring that AI's trajectory serves the public interest. Right as ChatGPT took off, I took a brief pause from AI Now to advise former FTC chair Lina Khan on AI policy. But I’m back at AI Now as co-director.
AS: What are some of the under-told or untold narratives about AI that we should be paying attention to, narratives that even this (informed) audience might not be aware of?
SW: One is that we are treating AI as if it is operating under typical market imperatives. But what we’re actually seeing is a market being backstopped by government intervention. Giant tech companies and the state are fusing in a way that deepens and concentrates power in the hands of those who already hold it.
I don’t think we’re spending as much time as we could spend interrogating that. Rather than talking about AI replacing jobs, or a specific AI model—which are important questions on their own merits—we should be focused on a deeper set of questions. Do we want to allow the fundamental transformation of our institutions in the name of this technology? And if not, what do we need to do to change that?
It’s important for policymakers who have tended to stand back to take action. Oftentimes, we hear that because something is the transformative tech of our time, we need to be less involved in regulating it. But the role of regulators is to act in the interest of the public, because the market won’t do it on its own.
AS: And where have regulators been successful in implementing guardrails, if at all?
SW: Candidly, AI has been allowed to metastasize so much that it’s difficult to meaningfully challenge the concentration of power. At the size and scale these companies operate, even an FTC fine is just another budget line.
At the same time, there have been unprecedented antitrust cases against big tech companies. More of that is important, but it’s not nearly enough.
AS: You’ve been at the forefront of researching the AI bubble. Can you tell us in-depth about the warning signs you observed, and where we’re at now?
SW: There are a few factors. First, we have passed an inflection point in the market. Until recently, the push to build AI data infrastructure has been funded by the tech giants’ cash reserves. Those companies have by and large been responsible for driving the build-outs, and they have sizable profit margins and cash on the books to do so.
But now, the funding sources are coming from companies without those margins. One moment over the summer in particular piqued my interest: when Oracle released its earnings statement. Overnight, Larry Ellison became the richest man in the world.
The problem? Oracle doesn’t have the cash on hand that the other companies do. And its projected earnings hinge significantly on a deal with OpenAI to power that company’s data center build-out; both players will have to turn to credit to fund the build-out and fulfill their obligations.
Meanwhile, OpenAI doesn’t have revenues anywhere near the capital it’s spending; any promised income from OpenAI is dubious at best. And there’s also the matter of circular deals—like OpenAI’s arrangement with Nvidia, in which Nvidia cuts checks to OpenAI that OpenAI then spends on Nvidia’s chips.
These structures are becoming more prevalent in the market, and contagion risk grows as funding extends beyond conventional financial vehicles (i.e., private credit firms like Apollo Global Management). These moves to create alternative financial products are getting dangerously close to the things people’s livelihoods depend on. So all of it is speculative and worrisome to me.
AS: Yeah, a lot of speculation and self-dealing. So: when does the bubble burst, and how did we allow it to get to this point?
SW: What’s more worrisome is that we may never see the bubble-bursting moment.
That’s because this technology is getting backstopped by the government in significant ways. The government is enabling infrastructural build-outs of data centers through the allocation of federal land. Procurement systems are guaranteeing big, lucrative contracts for the AI firms through deregulation.
So it’s become more of a slow government bailout, where you see the technology becoming more and more deeply integrated over time.
For example, we’re now seeing AI in the provision of social resources like Medicare and Medicaid. But it’s not a lot of new startups cropping up to offer these services; it’s the same big players with large contracts. So significant social programs, in high-stakes contexts like healthcare, are being cemented into technologies that don’t work very well, with only a few players reaping the profits.
The administration’s stance is that they’re expanding AI globally, and deeply investing in the export of U.S.-made tech.
But we’re bound to see a market correction, given how overvalued these firms are. A handful of companies account for an outsized share of the stock market, and it doesn’t make sense that data center development is a greater contributor to GDP growth than all of consumer spending. With this exposure to credit markets, there’s potential for a real financial crisis.
AS: What do you make of the go-to argument that we need to turbo-charge our AI ecosystems to “compete” with China?
SW: This is a longstanding argument, put forward to avert regulation: if we regulate, we can’t innovate, and if we can’t innovate, we can’t compete with China.
It’s important we have a front-footed focus: what is the world we are actually trying to build, and then what role does AI play within it? It’s important that we not hinge all of our hopes for an innovative future on the narrow interests of a small sub-sector of the industry. They haven’t had great ideas on tech that is going to serve us all—let alone a viable business model.
There are really significant trade-offs being made against the interest of society at-large. We need to be putting innovation that serves the public first, rather than just bending to the imperatives of the market.
AS: Right, and even the traditional pro-business line of “job creation will happen” falls flat here—because data centers aren’t creating that many jobs.
SW: Yes, if we look at the promise of economic benefits, or jobs delivered from data centers—what actually transpires? Not much. Data centers are not places that are teeming with human life.
The economic benefits are overblown. And the folks who get left holding the bag are local utilities and members of the public, who have to deal with the noise pollution and air pollution.
AS: There’s a lot to be worried about. But what are the most pressing issues with the concentration of AI power, what do you lose sleep over?
SW: There are a few things that keep me up at night.
We’re seeing a very real and significant push to integrate commercial AI models and large language models into national security and defense contracts. These are probabilistic and inherently insecure systems, meaning we could be weakening security rather than strengthening it.
We’re increasing the speed and scale of military tech to target individuals. But when the stakes are life and death, and these systems have such low levels of accuracy, it seems irresponsible to be integrating them there.
AI is also being used more and more in healthcare contexts. But these systems are not getting adequate testing before they’re deployed. For example, say there’s a mistake in an ambient listening system a doctor uses in place of their own notes: what kinds of errors are going to be integrated into our medical histories? Nurses take great pains to document our drug contraindications and allergies. But we don’t yet have the right tools in place to make sure AI tools are used as intended.
I also worry about the blind faith people are placing in AI to justify or rationalize less investment in infrastructure. Instead of funding school budgets to hire more teachers or budgeting for school lunch, we’re trying to push AI teachers. It’s a massive problem that we are replacing robust educational services, while ultimately only a few players profit.
AS: What is the end game for the people pushing for this concentration of power in AI? Enriching themselves, yes, but what else?
SW: It differs depending on the player. Some have espoused visions around political power, with control animating their interests. There’s more than a flirtation with far-right philosophies. And then for others it’s the pure pursuit of profit.
For example, Sundar Pichai of Google and Mark Zuckerberg of Meta don’t have an underlying philosophy—but for them, standing aside is more costly than competing. They’re going to go all in.
AS: You are an activist at heart, and there has to be hope intrinsic to activism. I’m wondering what you have seen in the advocacy space that has given you hope in the face of all of this.
SW: I’m heartened by a lot of the organizing taking place. Unions have had a number of important successes setting the terms around whether and under what conditions AI gets used. We’re also seeing pushes for data privacy and data minimization, to limit the pools that are training these models. In general, there’s scope for a more public conversation around technology than we’ve been able to have in the past.
Right now, there needs to be continued ground-up pressure: people saying we won’t stand for these conditions unless the public is going to reap the benefits and be protected from the harms.
The central focus of activists today should be on how AI is being used to deepen and give greater breadth to the power that tech firms have over our lives. We need to mitigate the asymmetries that these tools enable. If we’re not meaningfully contesting that, we won’t get far.
What we’re reading this week…
Actor Bryan Cranston raised the issue of OpenAI’s Sora using actors’ likenesses without their consent.
Amazon delivery firms, citing rising insurance and vehicle-maintenance costs, are quitting the company’s delivery program.
Union Pacific Railroad announced plans to acquire Norfolk Southern Railway in a $72 billion transaction. The combined $250 billion company would become the United States’ largest ever railroad company, raising concerns about consolidation and duopoly.
Cloudflare may actually have leverage to pressure Google to send traffic back to the sources of the information powering its AI overviews. Here’s why.
Amazon Web Services had an outage yesterday, bringing much of the Internet down with it. Never was there a clearer example of why the entire economy should not be built on top of one company, or a handful of companies.
News outlets won’t identify the “brown liquid” in an AI-generated video, created by President Trump to mock the No Kings protestors.