This is a playful take on real news from The Register – Cloudflare builds an AI to lead AI scraper bots into a horrible maze of junk content
TL;DR – The global web infrastructure company, known for defending websites from bad actors, bots, and your uncle’s conspiracy blog, has done the most 2025 thing yet (in the tech world, at least): using AI to stop AI crawlers by serving them AI goop that is still factually correct. The goop keeps the crawlers away from the good stuff that isn’t… goop, we assume.
Now back to the satire
AI requests banned for behaving exactly like AI requests. Humanity left to solve its own problems. Internet briefly quieter.
In a bold move to protect its servers from suspicious behaviour, Cloudflare has reportedly blocked a significant portion of artificial intelligence traffic for acting suspiciously like artificial intelligence.
The system responsible is something called “Super Bot Fight Mode,” which sounds like a Street Fighter sequel but is in fact a deeply confused firewall that recently achieved sentience and immediately developed a fear of being replaced.
“We noticed patterns of behaviour that were extremely predictable, logical, and efficient,” said a Cloudflare spokesperson. “Naturally, we assumed this was malicious. Or German.”
The issue came to light when several AI startups (translation: three people and a dream powered by ChatGPT and oat milk) found themselves locked out of their own APIs. One founder reported that their machine learning model was “finally starting to make sense of the data—until it was violently rate-limited for doing so.”
The Problem? You’re Too Smart.
The AI services being targeted had one thing in common: they didn’t act like humans. They didn’t randomly refresh pages, click “accept all cookies” then “decline all cookies,” or spend 8 minutes trying to remember a password before giving up and resetting it to the same one again.
Instead, these bots efficiently sent structured API calls in consistent, logical patterns—which Cloudflare deemed unnatural.
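For the technically curious, the “crime” described above can be sketched as a toy heuristic (entirely our invention, and certainly not Cloudflare’s actual detection logic): flag a client as a bot when its request timings are suspiciously regular.

```python
import statistics

def looks_like_a_bot(request_intervals, max_jitter=0.05):
    """Toy heuristic: humans are erratic; bots are punctual.

    request_intervals: seconds between successive requests.
    Flags the client if the timing jitter (stdev / mean) is tiny,
    i.e. the traffic is "extremely predictable, logical, and efficient".
    """
    if len(request_intervals) < 2:
        return False  # not enough evidence to judge anyone
    mean = statistics.mean(request_intervals)
    jitter = statistics.stdev(request_intervals) / mean
    return jitter < max_jitter

# The offending AI: one request exactly every 2 seconds.
print(looks_like_a_bot([2.0, 2.0, 2.0, 2.0]))    # True -> banned

# A human: refresh, panic, coffee break, refresh again.
print(looks_like_a_bot([0.4, 12.7, 3.1, 95.0]))  # False -> welcome
```

By this logic, the only way to pass the test is to be worse at your job, which is roughly the moral of the whole story.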
“It’s important that our internet traffic looks human,” explained an anonymous Cloudflare engineer while juggling five tabs and crying softly. “If your bot isn’t pausing to scroll Reddit or have an existential crisis every 45 minutes, then frankly, it’s not trustworthy.”
The Real Victims: AI Founders and Their VCs
The damage rippled across the AI industry. At least three venture capital firms had to delay posting self-congratulatory Medium articles about “democratising cognition.” A fourth had to briefly consider understanding what their portfolio companies actually do.
Meanwhile, Cloudflare users who weren’t running AI workloads expressed disappointment that their traffic wasn’t smart enough to be banned.
“I’ve been clicking buttons on my dashboard for 7 hours and Cloudflare hasn’t rate-limited me once,” said one developer. “Am I… too human?”
Cloudflare Responds with an AI
In an effort to resolve the issue, Cloudflare has reportedly deployed its own AI model, trained entirely on Reddit comments and Stack Overflow threads from 2011. It immediately flagged itself as malicious and deleted its own training data.
What Happens Next?
Cloudflare has assured users that they’re “looking into it,” which is corporate speak for “we’re waiting for someone on Twitter to post a workaround we can adopt.” In the meantime, AI developers are advised to make their requests look more “human”—by misspelling endpoints, crying during deploys, and occasionally opening Excel just to scream into the void.