Actually, Jack Dorsey, AI Doesn’t Have to Be This Way
As Americans lose their jobs to automation, three economists have published a paper that names the destructive ideology behind it and lays out what it would take to build AI that empowers workers instead.
Last September, Jack Dorsey flew 8,000 Block employees to Oakland for a three-day company festival. Jay-Z performed. So did Anderson .Paak, T-Pain, and Soulja Boy. The bill came to $68.1 million — roughly the annual payroll of 200 people — and it showed up in Block’s own earnings report as an increase in general and administrative expenses. On Friday, Dorsey announced he was cutting 4,000 of his 10,205 employees — nearly 40 percent of his global workforce — and he was quite clear about why. “Intelligence tools have changed what it means to build and run a company,” he wrote in a letter to shareholders. “A significantly smaller team, using the tools we’re building, can do more and do it better.” Block’s stock jumped 24 percent in after-hours trading.
Dorsey didn’t stop at his own company. “Within the next year, I believe the majority of companies will reach the same conclusion and make similar structural changes,” he wrote on X. “I’d rather get there honestly and on our own terms than be forced into it reactively.” It’s a remarkable thing to say — that your industry’s mass displacement of workers is something you’re getting ahead of, honestly, on your own terms, as if the 4,000 people asked to leave had any terms at all. Whether or not you believe the AI explanation — Bloomberg has already reported on growing suspicions of AI-washing, and Wharton’s Ethan Mollick noted publicly that “it is hard to imagine a firm-wide sudden 50%+ efficiency gain” justifying cuts of this scale — Dorsey’s announcement lands at the center of a debate that three labor economists just mapped in precise detail.
There is a version of artificial intelligence that makes you better at your job. Not a version that replaces you, or watches you, or paces you through a warehouse until your knees give out — a version that extends what you can do, adds to your judgment, helps you learn faster, gets you to the cases that actually need a human.
That version exists. It’s just not the one getting built.
That’s the central argument of a new working paper from the National Bureau of Economic Research, written by economists Daron Acemoglu, David Autor, and Simon Johnson — three researchers who have spent decades studying what technology actually does to workers, as opposed to what it’s promised to do. “Building Pro-Worker Artificial Intelligence” (https://www.nber.org/papers/w34854) is one of the clearest indictments of where AI development is currently heading that anyone in mainstream economics has produced.
The paper draws a map of what AI could do, identifying five categories of technological change:
Tools that augment what workers can do
Tools that augment capital instead, which helps owners, not workers
Tools that automate work outright
Tools that flatten the skill differences between workers, which sounds good but often just commodifies expertise
Tools that create entirely new tasks requiring new kinds of human skill
Only that last category, they argue, is unambiguously good for workers. Every other form of change involves tradeoffs that, under the current economic structure, tend to resolve in favor of whoever owns the machines.
They walk through real-world examples: aviation maintenance technicians who could be helped by AI that flags anomalies and accelerates diagnostics, versus AI that simply routes them through checklists. Delivery workers whose route data could train systems that eventually replace them. Patent examiners whose judgment could be supported by AI that surfaces prior inventions — or displaced by AI that pretends to replicate that judgment on the cheap.
The question in each case isn’t what the technology can do. The question is what the people deploying it have decided to prioritize.
And that’s where the paper gets pointed. Acemoglu, Autor, and Johnson argue that AI development is being shaped by a set of market failures. Companies are incentivized to automate because automation cuts labor costs and concentrates decision-making. AI developers are rewarded for building tools that make workers interchangeable rather than exceptional. And the entire field is shaped by what the authors call a “pro-automation ideology” — a deep cultural assumption inside the technology industry that automation equals progress, full stop, no further questions.
That ideology has consequences. It means that the AI being built right now is, in a very literal sense, being designed to work against most of the people who will be affected by it.
The paper offers nine policy directions to change that. They’re not vague appeals to “responsible AI.” They include:
Public investment in health care and education applications that extend worker capability rather than eliminate worker roles
Tax reform, because the current code taxes labor through payroll and income taxes while letting firms write off capital investment, effectively subsidizing automation over employment
Antitrust enforcement against the concentration of AI development in a handful of companies whose incentives run one way
Intellectual property protections for worker expertise, so that the knowledge AI systems are trained on, knowledge that came from actual human workers, doesn’t simply get extracted and then used to eliminate the jobs that generated it
That last point hits particularly hard. Right now, companies train AI on the accumulated knowledge of their workforce, then use that AI to reduce headcount. In some cases they’re doing so even before the AI is actually qualified to do the work. The workers whose expertise made the system possible capture nothing from it. The paper proposes that this needs to change — that worker expertise should have legal standing as something that can be protected and compensated, not just mined.
None of this is inevitable. That’s what the paper keeps returning to. The path AI is on right now reflects choices — by developers, by investors, by policymakers who have largely left the field to regulate itself. A different set of choices would produce a different kind of AI.
Acemoglu, Autor, and Johnson are not naive about how hard that would be. The incentives running in the current direction are enormous. But they’re making a precise argument: this isn’t just a bad outcome. It’s a market failure, which means it has specific causes — and specific things that could be done about it.
The question is whether anyone with the power to do those things is paying attention.



Ironically, the "pro-worker" paradigm is the only one that will survive the hype cycle. Not because it's pro-worker, but because the inherent limitations of the models on offer preclude them from actually delivering the worker replacement that's being sold. That's what happens in every hype cycle. The tech is useful sometimes, but it's not that good. And it can't solve inherent business cost, complexity, and scale problems even IF it could engineer the software reliably or truly imagine unimaginable solutions to intractable problems.
However, that's not something that will result in 1,000x IPO prices for all the hopefuls in the current hype horse race. So-called "ai" is just normal technology with the biggest PR blitz I've seen in thirty-five years in the industry.
Color me cynical, but I'll bet most of the same people who hyped dotcom-era tech to make your biz 10x, then turned to hyping web3 and blockchain to make your biz 100x, are the same people who've rebranded their consultancies to be "ai"-first, last, and always to make your biz 1,000x! Until we tip over into the trough of disillusionment and the "next big thing" to chase emerges from the mists of the unpredictable and unforgiving future.
Seriously, if Altman or Amodei or anyone had a magic lamp with a genie in it that could make all their wishes come true, why would they want to rent it to you for $20/month? Why not rub the lamp and ask the magic genie to make them better versions of everything and sell those to everyone? Why can't they just put Microsoft, Google, Salesforce, WordPress, Cloudflare, et al. out of business and accumulate all the wealth for themselves as the One True Software Vendor? Why should we all need to vibe code our own suboptimal knockoffs?
It sucks that we humans fail to study history, are all doomed to repeat it, and are so easily swayed by a sexy cult story that will make us each the one unlikely lottery winner…