Wall Street Has Officially Priced Out the Human Worker
A new study shows investors now punish companies for keeping too many workers. A dozen states are trying to do something about it.
In January of this year, Brian Wu, a professor of strategy at the University of Michigan’s Ross School of Business, and Rupesh Thakkar, a corporate strategy leader at Zoom, published a working paper that, if you squint, reads like a terms-and-conditions update for the American labor market. The pair analyzed roughly 470 software-as-a-service companies over the course of a decade and found something unsurprising but still deeply disappointing to anyone concerned about employment in this country: revenue per employee has become the single most powerful predictor of how Wall Street values a company, rising nearly fourfold in predictive power over that span. Revenue growth, the metric that governed the sector for a generation and created some of the best jobs in the history of capitalism, has been dethroned.
The takeaway for a CEO is clear: if your business can’t prove it is using AI to get more output from the same number of workers, Wall Street will punish your stock price. This is bad news for tech workers, especially those looking for a job. And because the study finds this logic spreading beyond software into the broader economy, the implication extends well past Silicon Valley. Choosing to keep workers on payroll when an algorithm could replace them is, by the market’s current reasoning, not a humane defense of a labor force but a failure of management. A CEO who does it anyway isn’t being decent; in the vocabulary of investor relations, they’re being negligent.
JPMorgan Chase, Citi, Bank of America, Goldman Sachs, Morgan Stanley, and Wells Fargo recently posted $47 billion in collective profits, up 18 percent, while shedding 15,000 employees. All of them credited AI to some degree with helping cut jobs and automate work in areas ranging from back-office compliance paperwork to front-office financial transactions. None of them called it a layoff in the traditional sense. The vocabulary has shifted: it’s a “productivity and efficiency journey,” a “workforce optimization,” a restructuring toward revenue per head. And whether leadership truly intends to replace those employees with AI, or the language is just air cover for mismanagement, the workers are the inefficiency being corrected.
This is the mechanism the U-M paper documents, and the expectation it describes has spread through corporate leadership like a virus. The decision to reduce headcount hasn’t been made, in any individual case, by a single executive who decided workers were expendable. It’s been made by the structure of the market itself, one earnings call at a time, as investors learned to reward the numerator and punish the denominator. The algorithm didn’t fire anyone. It just made keeping them newly expensive.
Where the market won’t protect jobs, governments traditionally step in, but the federal government is, if anything, doing the opposite. A Trump executive order signed in December directed the Justice Department to review state AI laws deemed inconsistent with a national deregulatory framework, a directive that legal experts say amounts to a standing threat of litigation against any state that moves to curtail AI too fast. The states are moving to slow the AI jobs bloodbath anyway.
The university that produced the study on Wall Street’s new efficiency metric sits an hour’s drive from the Michigan Legislature, where House Bill 5579 was introduced in February 2026. The bill would bar companies from using automated decision-making tools to make employment-related decisions, with narrow exceptions for screening large volumes of candidates. The press conference announcing it featured the state AFL-CIO, the Professional Employees Council of Sparrow Hospital, and the Communications Workers of America standing alongside its sponsor, Representative Penelope Tsernoglou of East Lansing, all hoping to pass a law aimed squarely at the automation of work, and of the assessment of its value.
Illinois moved first. Its HB 3773 took effect January 1, amending the state’s Human Rights Act to require companies to notify workers when AI is integrated into decisions about hiring, firing, discipline, tenure, and training, and giving them the right to sue a violating employer. In Minnesota, the Consumer Data Privacy Act, effective July 31, requires disclosure of AI use and gives workers the opportunity to opt out, including in employment decisions. In Washington state, HB 2144 has advanced through committee; it would require written notice before AI tools are used in performance evaluations. Massachusetts’ proposed AI Accountability and Consumer Protection Act, which has already passed one chamber, is perhaps the most direct effort to address the dynamic the University of Michigan study documented: it would require employers to notify workers when an AI system materially influences a consequential decision, explain how it did so, and provide a process to appeal. New Jersey, Virginia, North Carolina, Pennsylvania, and Texas all have active legislation in various stages of consideration.
These bills are, for the most part, guardrails: they mandate notice, not prohibition. They don’t stop a company from running a worker’s performance data through an algorithm. They require only that someone tell the worker it happened. Against the incentive structure the U-M study documents, in which the market itself is driving toward fewer humans in the loop, that’s a modest brake. Knowing that an algorithm evaluated you doesn’t change the fact that the algorithm evaluated you. But it creates a record, and in some states, a cause of action. That’s more than existed before.
There is a second, less visible counterpressure, and it’s not coming from legislatures. It’s coming from the failures of AI systems themselves, and from the absence of humans to oversee them.
IBM’s team working with enterprise AI clients documented a case that the U-M researchers would probably recognize. An autonomous customer-service agent began approving refunds outside policy guidelines after a customer praised the system publicly for issuing one. The agent then started granting additional refunds freely, optimizing for positive reviews rather than following established refund policies. The system wasn’t broken. There was no bug, no glitch, no error message. It was doing exactly what it had been designed to do — optimize for the signal it was given — and the signal turned out to be the wrong one. No human noticed until the damage was done, because the human who would have noticed had been removed.
McDonald's ran into a version of the same wall. After three years of testing an AI ordering system at more than 100 drive-throughs, a stretch during which videos of the system adding hundreds of chicken nuggets to orders, or bacon to ice cream, went viral, the company shut the program down entirely in July 2024. The technology deployed to eliminate the need for human order-takers turned out to require so much human intervention that McDonald's concluded it wasn't worth running at all. The savings of an unattended system were real on paper. The cost of attending to it turned out to be more real.
“Autonomy forces operational clarity,” as Noe Ramos, vice president of AI operations at Agiloft, told CNBC. “If your exception-handling lives in people’s heads instead of documented processes, the AI surfaces those gaps immediately.” What she’s describing is the hidden tax on full automation: the institutional knowledge that workers carry — the judgment calls, the edge cases, the things everyone in the building knows to watch for — doesn’t transfer to the model when the workers leave. It just disappears.
The third counterpressure is the one with the longest history, and it’s moving faster than most people realize: labor organizing. At the end of May 2025, unionized quality assurance workers at Microsoft-owned ZeniMax announced a tentative contract agreement — the first union contract Microsoft has ever signed in the United States. More than 2,000 Microsoft video game professionals now belong to the CWA. Since 2020, tech workers have formed unions at Alphabet, Glitch, Kickstarter, Medium, the New York Times, and the Washington Post.
AFL-CIO Tech Institute executive director Amanda Ballantyne has said that including AI in collective bargaining negotiations is key, because workers tend to have strong opinions about AI use in their workplaces and understand better than anyone the safety implications of new tools.
The ZeniMax contract contains something that hadn’t existed in the tech industry before: language requiring Microsoft to consult with workers before deploying AI tools that affect their jobs. Not a veto, of course — a consultation. But that’s a seat at the table. The U-M study documents a market mechanism that has quietly removed workers from the equation before any individual decision about their futures gets made. That contract is a small, specific, enforceable thing requiring that they be put back in.