I'll spoil everything right away, so if you're short on time you can skip the rest: yesterday, November 18, 2025, a Cloudflare outage took 20% of the internet offline for three hours. The cause? A configuration file for the Bot Management system that doubled in size, exceeding pre-allocated memory limits. The routing software panicked, generating HTTP 500 errors on thousands of sites: X, ChatGPT, Canva, PayPal, League of Legends. No cyberattack. No DDoS. And above all, no artificial intelligence tried to "take over the internet," as some lunatics have started ranting on social media. Just a database query error that returned duplicate rows during a permissions update.
Matthew Prince, Cloudflare's CEO, called the event "the worst outage since 2019." The network returned to service at 17:06 UTC after a manual restore of the configuration file. But on X and Reddit, someone had already brought up Skynet. Here's the long story.
The bot that wasn't a rogue AI
Let's start with the facts. Cloudflare uses a system called Bot Management to identify and block malicious automated traffic. It is based on a machine learning model that analyzes every HTTP request in transit over the network. To function, the model reads a "feature file": a configuration file containing the characteristics used to classify a bot and decide whether it is a "friend" or a "foe" to be rejected.
That file is updated every five minutes and distributed to all Cloudflare servers worldwide. On November 18, at 11:05 UTC, someone changed the permissions on a group of ClickHouse databases. The change was meant to improve permission management by making access to the underlying data explicit. But the query used to generate the configuration file was wrong. The result: it returned duplicate rows, doubling the file's size.
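A minimal sketch of that failure mode (feature names invented; the database names `default` and `r0` come from Cloudflare's post-mortem): once an account can see the same table metadata in two databases, a query without a database filter returns every feature row twice, unless the pipeline deduplicates.

```rust
use std::collections::BTreeSet;

// Deduplicate feature rows on feature name, dropping the database column.
fn dedup_features<'a>(rows: &[(&'a str, &'a str)]) -> Vec<&'a str> {
    let set: BTreeSet<&str> = rows.iter().map(|&(_, f)| f).collect();
    set.into_iter().collect()
}

fn main() {
    // Hypothetical rows: after the permissions change, metadata queries
    // return one row per database the account can now see, so every
    // feature arrives once from "default" and once from "r0".
    let rows = [
        ("default", "ua_score"),
        ("r0", "ua_score"),
        ("default", "ja3_hash"),
        ("r0", "ja3_hash"),
    ];
    // Naive generation keeps all 4 rows: the file doubles in size.
    assert_eq!(rows.len(), 4);
    // Deduplicating restores the intended 2 features.
    assert_eq!(dedup_features(&rows).len(), 2);
    println!("features: {:?}", dedup_features(&rows));
}
```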
The file went from about 60 features to over 200. And the software had a fixed limit: exactly 200 features, pre-allocated in memory to optimize performance. When the file exceeded that limit, the system panicked. Everything stopped, taking down Cloudflare and a very, very large chunk of the web.
No AI going rogue. No bot becoming self-aware. Just (sorry if this sounds like Greek to you, I'm about to get technical) a Result::unwrap() called on an Err in Rust. It's a bit like trying to squeeze 11 people into a 10-seat van: the system says, "No, you can't." Except here, the system's "no" crashed half the internet.
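In Rust terms, the pattern looks roughly like this (a sketch with invented names, not Cloudflare's actual code): a loader that enforces a fixed capacity correctly returns an Err for an oversized file, and it is an unchecked unwrap() on that Err that turns a bad input into a process-wide panic.

```rust
// Pre-allocated limit, per the post-mortem: exactly 200 features.
const MAX_FEATURES: usize = 200;

// Load feature names, rejecting files that exceed the fixed capacity.
fn load_features(lines: &[&str]) -> Result<Vec<String>, String> {
    if lines.len() > MAX_FEATURES {
        // The loader itself fails cleanly with an Err...
        return Err(format!(
            "feature file has {} entries, limit is {}",
            lines.len(),
            MAX_FEATURES
        ));
    }
    Ok(lines.iter().map(|s| s.to_string()).collect())
}

fn main() {
    // A normal file (~60 features) loads fine.
    let ok_file = ["feature"; 60];
    assert!(load_features(&ok_file).is_ok());

    // The duplicated file (~240 entries) exceeds the limit.
    let doubled = ["feature"; 240];
    let result = load_features(&doubled);
    assert!(result.is_err());

    // ...but a caller doing result.unwrap() here would panic, taking
    // the whole proxy process down with it:
    // let features = result.unwrap(); // panics on the Err above
}
```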
Cloudflare down: why the confusion with AI?
The term "Bot Management" doesn't help. For those outside the industry, "bot" immediately evokes autonomous artificial intelligence. Add the fact that the system uses machine learning, and the narrative practically unfolds automatically: "Cloudflare's AI has taken control." During the minutes of the outage, delirious threads appeared on X. "It's started," "Skynet has woken up," "AI has figured out how to shut down the internet." Some even cited the fact that Cloudflare's downtime status page had also gone offline as "proof" of a coordinated attack.
The status page is actually hosted on external infrastructure, with no dependencies on Cloudflare. That was a coincidence. But when the internet is down and your source of service-status information is also unreachable, even the internal team thought for a moment that it could be a DDoS. Then they realized: it was just the perfect chaos of a trivial mistake amplified to a global scale.
A Month of Blackouts: AWS, Azure, Cloudflare
The Cloudflare downtime is not an isolated incident. This is the third episode in a month. On October 20, 2025, AWS went offline for hours, taking Snapchat, Roblox, Fortnite, Duolingo, Ring, and Coinbase with it. As we reported here on Futuro Prossimo, the problem originated in data centers in Virginia, in the us-east-1 region, the beating heart of global cloud computing. DynamoDB and EC2, the two pillars of the AWS infrastructure, suffered "increased error rates." Translation: they collapsed.
On October 28 it was the turn of Azure, Microsoft's cloud. Once again, services distributed across the globe ground to a halt because of problems localized to a few critical nodes. And now Cloudflare. Three outages in a month. Three giants. Three infrastructures that, by themselves, support a huge portion of the internet.
According to estimates, 34 million websites use Cloudflare. AWS controls 30% of the cloud market, Azure 24%. In total, these three services support three-quarters of the internet. And when one of them goes down, the domino effect is immediate and global.
The contradiction is obvious. We build "distributed" and "resilient" architectures, but then we put everything in the same data center because it's convenient, fast, and cost-effective. Until it isn't anymore. A PagerDuty survey of 1,000 IT executives found that 88% expect another global outage in the next 12 months. At this point I'd say that such "pessimism," given the recent statistics, is legitimate at the very least.
The fragility of a centralized web
The real problem isn't that Cloudflare went down. It's that when Cloudflare goes down, everything else goes down with it. The web was born distributed. The original idea of the internet was a decentralized network, where each node could communicate with any other node without passing through a central point. If a piece broke, traffic would find another route.
Today the internet works the opposite way. We've concentrated critical infrastructure in the hands of three or four companies: AWS, Azure, Google Cloud (will it be the next to crash?), and Cloudflare. If one of them has a technical problem, millions of services simultaneously stop working. Not because they're logically connected, but because they share the same physical infrastructure.
It's as if all the streets of a city passed over a single bridge. When the bridge collapses, it doesn't matter how well-built the roads are: nobody gets through. And when that bridge is called Cloudflare and it carries 78 million HTTP requests per second, the collapse is felt everywhere.
Cloudflare Down, Skynet Has Nothing to Do With It. But Don't Worry: It's Worse
Catastrophists always invoke Skynet, the artificial intelligence from Terminator that becomes self-aware and decides to exterminate humanity. It's a convenient fantasy: it lets us feel that danger is always something alien and hostile. But the reality is different. The greatest danger came not from an AI rebelling, but from the choices we make, every day, about how to build digital infrastructure.
Cloudflare didn't go down because an AI decided to attack it. It went down because someone changed database permissions and a SQL query returned duplicate rows. AWS didn't go down because of a cyberattack. It went down because DynamoDB had a technical issue in the us-east-1 region. Azure didn't go down because of a conspiracy. It went down because of a misconfiguration.
These are human errors. Banal. But amplified on a global scale because we have chosen to centralize everything. Instead of fantasizing about when AI will take over, let's ask ourselves how much longer we want to depend on three or four companies to run the internet.
Cloudflare has already announced countermeasures: stricter checks on configuration files, global "kill switches" to disable problematic features, and a review of memory limits. All well and good. But it doesn't solve the underlying problem. I repeat, at the risk of being boring: as long as 20% of the web depends on a single operator, a trivial error can take millions of services offline.
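What "stricter checks" might look like in practice (purely illustrative, with invented names; Cloudflare hasn't published code for this): validate each candidate file against hard limits and keep serving the last known-good configuration when validation fails, instead of panicking.

```rust
// Hypothetical hard limit on the feature file, for illustration.
const MAX_FEATURES: usize = 200;

// Promote a candidate config only if it passes validation;
// otherwise keep serving the last known-good one.
fn promote(candidate: Vec<String>, last_good: Vec<String>) -> Vec<String> {
    let valid = !candidate.is_empty() && candidate.len() <= MAX_FEATURES;
    if valid {
        candidate
    } else {
        // In a real system this would also alert an operator.
        eprintln!(
            "rejected config with {} entries; keeping last known good",
            candidate.len()
        );
        last_good
    }
}

fn main() {
    let good = vec!["f".to_string(); 60];
    let oversized = vec!["f".to_string(); 240];
    // The bad file is rejected; traffic keeps flowing on the old config.
    assert_eq!(promote(oversized, good.clone()).len(), 60);
    // A valid file replaces the old one normally.
    assert_eq!(promote(vec!["f".to_string(); 80], good).len(), 80);
}
```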
No AI is taking over. But reality could turn out worse than fiction if we don't correct our choices. Because Skynet is at least predictable: you know it wants to destroy you. A 200-feature configuration file, on the other hand, catches you off guard.
And it reminds you that the internet, as indestructible as it seems, rests on far more fragile foundations than we'd like to believe.