23 Comments
Alex Shultz

Hello! I'm catching up on some comments (all of which I sincerely appreciate) and want to address one overarching point: to me, this story isn't really about food delivery apps or the companies involved. It's a big, flashing warning about the potency of AI-generated scams. This certainly is not the first example of someone fibbing on the internet, or trying to fool journalists, but it is one of the first examples of how AI is making it even *easier* to fabricate accusations on a mass scale -- and potentially do so very quickly.

The fact that there's already so much credible reporting about how poorly gig workers are treated (in addition to bazillions of other credible stories/anecdotes, including in this very comment section!) is part of why this scam was so effective. It's very easy for readers to assume the allegations by the Redditor are true. I certainly leaned that way on first read. And as someone who regularly covers labor issues (and is keenly aware of how rank-and-file workers' stories are routinely ignored/suppressed), I can assure you I do not relish writing a de facto exculpation of food delivery apps lol. But I always want to be accurate above all else, and afford people/companies the opportunity to correct the record. In this case, Uber *did* correct the record. Uber does not have a compelling reason to lie about the specifics of the Reddit post/"internal document"; when you ask a company to authenticate an internal document, that company doesn't know 1) if you've already done so yourself, 2) if you have other sources confirming its authenticity, and 3) if other reporters have independently confirmed its authenticity. Lying in this scenario could prove to be far more costly than saying "yes, it's real, but..."

And so my hope is that readers come away from this saga thinking less about the particular parties here, and more about what it means that AI-generated posts/documents are compelling enough to fool *millions* of people. This time, the target was a batch of companies that are reviled. But that might not be the case in the future.

Philippe Gosselin

I worked for Uber Eats for about three to four months a few years ago, and I want to say this plainly: that letter could absolutely be true. I can’t verify every detail in it, but based on what I personally experienced, none of it surprises me.

When I started, late February into March, the pay was actually good. The work was heavily gamified: challenges, streaks, bonuses for completing a certain number of deliveries in a set time. It worked. The money made sense, and it felt worth doing. But that didn’t last. As spring arrived and the weather improved, those incentives quietly disappeared, and so did the income. By May, I was barely making anything. It didn’t take long for something that felt viable to become borderline pointless.

Over time, you also start noticing patterns. Bike couriers never get large orders: family meals, big restaurant pickups, multiple bags. Those always go to drivers with cars. I’ve heard people claim there are “legal reasons” for this, but that doesn’t hold water. Most drivers just throw the food on their back seat. Meanwhile, cyclists use insulated delivery bags and actually keep things contained and clean. It’s not about food safety; it’s about internal prioritization.

The post mentions desperation mechanics, and honestly, that rang true to me. Whether it was explicitly coded that way or not, the experience slowly shifts from “this pays well” to “why am I even out here tonight?” You go out and get nothing but tiny, low-paying orders, one after another.

Uber also runs a tier system (Gold, Platinum, Diamond) with “perks.” I reached Diamond. One of the rewards was 30% off ten meals. I never managed to redeem a single one. Every item I tried to order was rejected with the same message: reward not applicable. Different items, different restaurants, same result. Reinstalling the app didn’t help.

So I called support. Predictably, I was bounced back and forth between departments, each telling me they didn’t handle that issue and to call another number, often the same number that had just sent me to them. This isn’t an isolated glitch; it’s a structural pattern. Anyone who’s spent time digging through forums knows I’m far from alone here.

That’s why I find it almost funny that people are shocked this letter might be AI-generated. It sounds real because it matches reality. You’re dealing with a company that operates without accountability or honor. Why would any of this be surprising?

Coby

This is clearly an AI comment. Again, like Trowaway_whistleblow, what is even the purpose of making this up???

Philippe Gosselin

Fuck off, man.

Paul

I ran Philippe Gosselin's (@cultureshock) comment through Pangram, and it came back "Fully Human Written." Why do you think his comment is AI?

Result link: https://www.pangram.com/history/0cd10100-20ff-47fc-9186-bff306f5e8c8/?ucc=q4aL6wfTxvg

gvpws (edited)

There isn't a single chance that the article is true. It's AI-generated; the article is even written and structured in a way that you can almost tell exactly how he prompted the LLM. The single document from the single department just happens to go over every single point the original Reddit post talked about. Given the breadth of the claims, you'd expect the user to have built up that knowledge from scattered emails/documents/memos from a bunch of different departments: PR, Legal, Marketing, Finance, App Design, Trust & Safety, etc. But somehow this one document, which is supposed to be very narrow in scope, happens to go over every single one in detail, whether or not it's relevant?

An engineering department writing a research article on an algorithm they designed doesn't get to make decisions about how they allocate their fees to "external legal counsel, lobbying firms, and trade associations" or how they legally classified a feature as "'Interaction Telemetry' rather than 'Biometric Data' to avoid GDPR Article 9 complications," they're not the ones that get to decide how it's presented in legal documents to regulators.

The really absurd parts are where it talks about calculating your supposed stress levels by measuring how much your phone is shaking, and how it disables parts of the gamification UI if the device is "located within 200m of a known regulatory body (e.g., City Hall, Dept of Labor)." As if drivers are constantly holding their phones? And as if the only people who would ever complain, out of everyone, are the bureaucrats who happen to use the app only while they're at work?

Waarheid en vrede

I’m wondering whom to believe and trust: the apparent whistleblower, the DoorDash CEO, or you, the author? In this day and age almost everyone’s using AI to generate or polish content.

Sure, there are damning accusations in the post. But there's no mention of DoorDash directly, per se.

If these accusations are completely wrong or misleading, wouldn't you expect DoorDash to provide clear, transparent evidence instead of just an X post?

Have other companies in this sector also responded?

What do you think?

JD Free

You can’t prove a negative. It’s silly to demand that Doordash prove allegations false. The accuser is the one who needs to supply evidence.

Waarheid en vrede

You’re correct about that. Such fake and untruthful posts only make things worse, since there is an abundance of credible data highlighting the exploitation and manipulation the gig industry engages in.

For instance, there's this Human Rights Watch report, which I assume is reliable and credible:

"The Gig Trap" lays out, clearly and thoroughly, how gig platforms exploit workers, underpay them, and avoid responsibility, all while framing it as progress.

Worth a read if you care about fairness and equity in this industry:

hrw.org/report/2025/05/12/the-gig-trap

Pete McCutchen

Gig workers aren’t slaves. They aren’t even employees. They can stop any time if the terms aren’t to their liking.

Waarheid en vrede

It’s about the power imbalance. Gig workers aren’t “slaves,” but they aren’t “employees” either, which is itself a big part of the problem.

But brushing it off with ‘they’re free to quit’ ignores how platforms manipulate their labor: algorithmic control, poverty wages, and no safety net.

Chris Baugh

If you read the post, it is obviously fake. I saw it come around somewhere (like a FB link I think) and I pretty much discarded it as fake right away.

Mommadillo

They don’t need algorithms. Gig jobs where you drive your own vehicle are not sustainable. The maintenance and upkeep eats you alive in the long run.

Pete McCutchen

Then people will stop doing those jobs.

Mommadillo

Not unless Elmo can get FSD to quit running down pedestrians.

Neural Foundry

The LaTeX metadata detail is what makes this investigation really stand out. Most people wouldn't think to check document generation artifacts, but that split-process theory (LLM content + LaTeX formatting to mask ChatGPT origin) shows how sophisticated these scams are getting. The fact that the fake document included real-sounding corporate terminology and formatting conventions means the barrier to creating believable disinformation is basically zero now. Had a colleague get burned by a similar AI-generated "internal memo" last quarter that looked completely legit until we ran metadata checks. The bigger problem is how many people will see the original viral post versus the debunk: classic misinformation asymmetry.

bjkeefe

I'm not clear on why you think Xu should take his post down. Is it because it seems like he is suggesting that the fake Reddit post could be true for some other company?

Thanks for doing all of this investigative work.

Alex Shultz

It’s a fair q! (And thanks for reading.) I get why Xu wouldn’t take the post down, ultimately. He/DoorDash want to make it clear they don’t have anything to do with the falsified Reddit thread. But Xu’s post—which has millions of impressions—allows for the possibility that another delivery app could be carrying out the allegations in the Reddit thread. Those allegations are AI-generated and false. And so at a minimum, Xu should be acknowledging as much in a follow-up post.

bjkeefe (edited)

Thank you. I agree with all of that -- understandable that he wants to leave his post up, but should edit it or add a follow-on to clarify.

Nathan Cohen

Also, anybody trying to hide their identity these days will use AI.

cate

1. what clear transparent evidence was presented to justify the original claim?

2. what could DoorDash provide that would suffice? they can't prove they are *not* doing it.

Eamonn McKeown

Batching batching batching.

They have never cared about the customers.

Rideshare started from a hatred of SF cab drivers. The idea that they might respect food delivery drivers is laughable.

I’m doing DD this very minute. It blows chunks.

Useless Talents

I believe people will be checking out of many kinds of media as a result of this type of AI uncertainty. We will question what we see, then we will question our or others' questioning. We can't spend our lives wondering if something is a hallucination without going mad.
