Key Highlights
- A whistleblower’s post on Reddit alleged fraud at an unnamed food delivery app.
- The post claimed the platform was rigged against customers and drivers, including via a “desperation score” algorithm.
- The author of the article uncovered the hoax using AI detection tools and analysis.
- AI-generated content poses new challenges for journalists in verifying information.
Debunking the AI Food Delivery Hoax on Reddit
The recent viral post on Reddit alleging significant fraud at an unnamed food delivery app caught the attention of thousands. Written by a fresh account named Trowaway_whistleblow, the whistleblower detailed various ways that the company allegedly rigged its platform against customers and delivery drivers. This included slowing down standard deliveries to make priority orders look artificially faster, and charging a “regulatory response fee” used to lobby against driver unions.
The most jarring accusation was that the platform calculates a “desperation score” for its drivers based on when and how often they accept deliveries. According to the post, this algorithm tags some drivers as “High Desperation” and then stops showing them high-paying orders. The logic, in the post’s words: “Why pay this guy $15 for a run when we know he’s desperate enough to do it for $6? We save the good tips for the ‘casual’ drivers to hook them in and gamify their experience, while the full-timers get grinded into dust.”
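To make the alleged mechanism concrete, here is a minimal sketch of the logic the hoax post described. To be clear: this is purely illustrative of the fabricated claim; the function names, weights, and thresholds are all invented for this example, and no such system is known to exist.

```python
# Illustrative sketch ONLY of the hoax's alleged "desperation score" logic.
# All names, weights, and thresholds are invented; no such system is known to exist.

def desperation_score(accept_rate: float, late_night_share: float) -> float:
    """Hypothetical score in [0, 1]: high acceptance of orders and frequent
    late-night driving are treated as signals of financial desperation."""
    return 0.6 * accept_rate + 0.4 * late_night_share

def offer_payout(base_pay: float, score: float, threshold: float = 0.7) -> float:
    """Per the post's claim, 'High Desperation' drivers are offered
    less for the same run, while 'casual' drivers see full pay."""
    if score >= threshold:
        return round(base_pay * 0.4, 2)  # the alleged $15-run-for-$6 offer
    return base_pay

print(offer_payout(15.0, desperation_score(0.95, 0.8)))  # full-time, "desperate" driver
print(offer_payout(15.0, desperation_score(0.30, 0.1)))  # occasional, "casual" driver
```

Nothing in the fabricated technical paper suggested anything more sophisticated than this kind of heuristic; the point is how easily such a plausible-sounding mechanism can be described.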
The post garnered 86,000 upvotes, hitting Reddit’s front page and likely being viewed by millions.
Users gave the whistleblower more than 1,000 pieces of Reddit gold. A screenshot of the post on X generated over 36 million views. Eager to verify the authenticity of this viral claim, I contacted Trowaway_whistleblow through Reddit.
Verification and Deception
Nine minutes after reaching out, Trowaway_whistleblow sent a message with a photo of what appeared to be an employee badge for Uber Eats. The badge looked plausible enough, though I soon learned it had been generated by Google Gemini. The whistleblower also attached a document titled “AllocNet-T: High-Dimensional Temporal Supply State Modeling,” an 18-page report that purported to be from Uber’s “Marketplace Dynamics Group, Behavioral Economics Division” and was dated October 14, 2024.
Upon closer inspection, this technical paper closely resembled many AI-related papers that I have read over the past few years. It was laden with charts, diagrams, and mathematical formulas, making it initially seem highly credible. However, as I read through it, the document began to unravel.
It included details on “automated ‘Greyballing’ protocols for regulatory evasion,” which was an apparent reference to Uber’s old Greyball tool. By the end of the document, it had also offered support for each of the other claims in the original post, even when they had no obvious connection to the score. For example, it described identifying drivers in states of distress using Apple Watch data and listening to them via their phone’s microphones to “detect ambient vehicle noise (crying, arguments) to infer emotional state and adjust offer pricing accordingly.”
Alarming Implications for Journalists
The rapid spread of the whistleblower’s post illustrated an old maxim from journalism school: a lie can travel halfway around the world before the truth can get its boots on. The ease with which AI systems like Gemini can generate convincing documents and badges highlights a growing challenge for journalists in verifying information. Today, fake leaks can be generated within minutes, and badges within seconds.
While no good reporter would ever publish a story based on a single document and an unknown source, plenty would take the time to investigate the document’s contents and see whether human sources would back it up. This raises concerns about the future of journalism in an era where AI can rapidly produce convincing but false evidence. The “infocalypse” that scholars like Aviv Ovadya warned about in 2017 looks increasingly plausible now that real people are messaging such hoaxes directly to journalists over Signal. The rapid advancements in AI technology make it crucial for journalists and news organizations to develop robust verification methods, as the line between fact and fiction can be blurred almost instantaneously.
Conclusion
The recent hoax on Reddit serves as a stark reminder of the evolving challenges faced by journalists in an age where AI is becoming more sophisticated. As AI tools continue to advance, the ability to quickly generate convincing but false evidence poses significant risks to journalistic integrity and public trust. Journalists must remain vigilant and develop new strategies for verifying information in this rapidly changing landscape. For now, the original post remains a cautionary tale about the importance of rigorous fact-checking and skepticism in the face of seemingly credible claims.