The internet is still broken: A centralized bottleneck caused the global internet blackout today

A Cloudflare outage today disrupted access to services across the internet, underscoring just how much of the web’s traffic runs through a single provider.
Cloudflare’s status page described the event as an “internal service degradation” that began at 11:48 UTC, saying some services were “intermittently impacted” while teams worked to restore traffic flows.
Earlier, at 11:34 UTC, CryptoSlate observed that services were reachable at the origin while Cloudflare’s London edge returned an error page, with similar behavior seen through Frankfurt and Chicago via VPN. That pattern points to trouble in the edge and application layers rather than at customer origin servers.
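For readers who want to reproduce that kind of edge-versus-origin check, the sketch below shows one way to do it in Python. The hostname and origin IP are placeholders, and it assumes the origin accepts direct connections at all (many Cloudflare-protected origins only accept traffic from Cloudflare’s own ranges).

```python
# Minimal sketch: compare a request routed through the CDN edge with one
# sent straight to the origin. Hostname and origin IP are hypothetical.
import requests

SITE = "www.example.com"        # hypothetical Cloudflare-fronted site
ORIGIN_IP = "203.0.113.10"      # hypothetical origin server address


def check_via_edge() -> int:
    """Request the site normally, so DNS resolves to the CDN edge."""
    return requests.get(f"https://{SITE}/", timeout=10).status_code


def check_via_origin() -> int:
    """Bypass the edge by hitting the origin IP directly with a Host header.

    verify=False is used because the origin's certificate will not match a
    bare IP address; real tooling would pin the expected certificate instead.
    """
    resp = requests.get(
        f"https://{ORIGIN_IP}/",
        headers={"Host": SITE},
        timeout=10,
        verify=False,
    )
    return resp.status_code


if __name__ == "__main__":
    edge, origin = check_via_edge(), check_via_origin()
    print(f"edge: HTTP {edge}, origin: HTTP {origin}")
    if edge >= 500 and origin < 500:
        print("Origin looks healthy; the failure is likely in the edge layer.")
```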
Cloudflare confirmed the problem publicly at 11:48 UTC, reporting widespread HTTP 500 errors and problems with its own dashboard and API.
NetBlocks, a network watchdog, reported disruptions to a range of online services in multiple countries and attributed the event to Cloudflare technical issues, while stressing that this was not related to state-level blocking or shutdowns.
Cloudflare acknowledged a global disruption at approximately 13:03 CET (12:03 UTC), followed by a first recovery update at around 13:21 CET (12:21 UTC).
Its own log of status updates shows how the incident evolved from internal degradation to a broad outage that touched user-facing tools, remote access products, and application services.
| Time (UTC) | Status page update |
|---|---|
| 11:48 | Cloudflare reports internal service degradation and intermittent impact |
| 12:03–12:53 | Company continues investigation while error rates remain elevated |
| 13:04 | WARP access in London disabled during remediation attempts |
| 13:09 | Issue marked as identified and fix in progress |
| 13:13 | Access and WARP services recover, WARP re-enabled in London |
| 13:35–13:58 | Work continues to restore application services for customers |
| 14:34 | Dashboard services restored, remediation ongoing for application impact |
While the exact technical root cause has not yet been publicly detailed, the observable symptoms were consistent across many services that sit behind Cloudflare.
Users encountered HTTP 500 errors from the Cloudflare edge, customer-facing dashboards failed, and the API used to manage configurations also broke. In practice, both users and administrators lost access at the same time.
The downstream impact was broad.
Users of X, the social platform formerly known as Twitter, reported login failures with messages such as “Oops, something went wrong. Please try again later.”
Access problems were also seen across ChatGPT, Slack, Coinbase, DownDetector, Perplexity, and other high-traffic sites, with many pages either timing out or returning error codes.
Some services appeared to degrade rather than go completely offline, with partial loading or regional pockets of normal behavior depending on routing. The incident did not shut down the entire internet, but it removed a sizable portion of what many users interact with each day.
The outage also made itself felt in a more subtle layer: visibility. At the same time that users tried to reach X or ChatGPT, many turned to outage-tracking sites to see if the problem sat with their own connection or with the platforms.
Monitoring portals that track incidents, such as DownDetector, Downforeveryoneorjustme, and isitdownrightnow, also experienced problems during the Cloudflare event. OutageStats reported that its own data showed Cloudflare “working fine” while acknowledging that isolated failures were possible, which contrasted with user experience on Cloudflare-backed sites.
Some status trackers themselves relied on Cloudflare, which degraded the quality of the real-time signal about the event.
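One way around that blind spot is a small probe hosted outside the affected provider’s network. The sketch below uses hypothetical hostnames, and it assumes, as is typically the case, that Cloudflare edge responses identify themselves through the Server and CF-RAY headers.

```python
# Sketch of an independent probe that records whether an error response is
# coming from the CDN edge rather than the site itself, by inspecting
# headers commonly set by Cloudflare. Hostnames are placeholders.
import requests

SITES = ["www.example-exchange.com", "www.example-tracker.com"]  # hypothetical


def probe(host: str) -> dict:
    try:
        resp = requests.get(f"https://{host}/", timeout=10)
    except requests.RequestException as exc:
        return {"host": host, "status": None, "error": str(exc)}
    return {
        "host": host,
        "status": resp.status_code,
        # Cloudflare edges typically identify themselves in these headers.
        "served_by_cloudflare": resp.headers.get("Server", "").lower() == "cloudflare",
        "cf_ray": resp.headers.get("CF-RAY"),
    }


if __name__ == "__main__":
    for result in (probe(h) for h in SITES):
        print(result)
```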
For crypto and Web3, this episode is less about one vendor’s bad day and more about a structural bottleneck. Cloudflare’s network sits in front of a large fraction of the public web, handling DNS, TLS termination, caching, web application firewall functions, and access controls.
Cloudflare provides services for around 19% of all websites.
A failure in that shared layer turns into simultaneous trouble for exchanges, DeFi front ends, NFT marketplaces, portfolio trackers, and media sites that made the same choice of provider.
In practice, the event drew a line between platforms with their own backbone-scale infrastructure and those that rely heavily on Cloudflare. Services from Google, Amazon, and other tech giants with in-house CDNs appeared less affected.
Smaller or mid-sized sites that outsource edge delivery saw more visible impact. For crypto, this maps directly onto the long-running tension between decentralized protocols and centralized access layers.
A protocol may run across thousands of nodes, yet a single outage in a CDN or DNS provider can block user access to the interface that most people actually use.
Cloudflare’s history shows that this is not an isolated anomaly. A control plane and analytics outage in November 2023 affected multiple services for nearly two days, starting at 11:43 UTC on November 2 and resolving on November 4 after changes to internal systems.
Status aggregation by StatusGator lists multiple Cloudflare incidents in recent years across DNS, application services, and management consoles.
Each time, the impact reaches beyond Cloudflare’s direct customer list into the dependent ecosystem that assumes that layer will stay up.
Today’s incident also underlined how control planes can become a hidden point of failure.
Because Cloudflare’s own dashboard and API were impaired, customers could not easily change DNS records, switch traffic to backup origins, or relax edge security settings to route around the trouble. Even where origin infrastructure was healthy, some operators were effectively locked out of the steering wheel while their sites returned errors.
From a risk perspective, the outage exposed three distinct layers of dependence.
First, user traffic was concentrated through one edge provider. Second, observability relied on tools that in many cases used the same provider, which muted or distorted insight during the event.
Third, operational control for customers was centralized in a dashboard and API that shared the same failure domain.
Crypto teams have long discussed multi-region redundancy for validator nodes and backup RPC providers. This event adds weight to a parallel conversation about multi-CDN, diverse DNS, and self-hosted entry points for key services.
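As a rough illustration of what diverse entry points look like in practice, the sketch below (all hostnames hypothetical) probes several independently hosted front ends and reports which one is healthy, leaving the actual DNS or load-balancer switch to provider-specific tooling.

```python
# Minimal multi-entry-point health check: probe each front end and report
# which one is serving. All hostnames below are placeholders.
import requests

ENTRY_POINTS = {
    "primary-cdn": "https://app.example.com/healthz",        # hypothetical CDN-fronted URL
    "backup-cdn": "https://app-backup.example.net/healthz",
    "self-hosted": "https://direct.example.org/healthz",
}


def healthy(url: str) -> bool:
    """Treat anything other than a timely 2xx response as unhealthy."""
    try:
        return requests.get(url, timeout=5).status_code < 300
    except requests.RequestException:
        return False


def pick_entry_point() -> str | None:
    for name, url in ENTRY_POINTS.items():
        if healthy(url):
            return name
    return None


if __name__ == "__main__":
    choice = pick_entry_point()
    if choice is None:
        print("no healthy entry point; escalate")
    else:
        # A real setup would call the DNS provider's API here to repoint the
        # public hostname at the healthy entry point.
        print(f"serve traffic via: {choice}")
```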
Projects that pair on-chain decentralization with single-vendor front ends not only face censorship and regulatory risk but also inherit the operational outages of that vendor.
Still, cost and complexity shape real infrastructure decisions. Multi-CDN setups, alternative DNS networks, or decentralized storage for front ends can reduce single points of failure, yet they demand more engineering and operational work than pointing a domain at one popular provider.
For many teams, especially during bull cycles when traffic spikes, outsourcing edge delivery to Cloudflare or a similar platform is the most straightforward way to survive volume.
The Cloudflare event on November 18 adds a concrete data point to that tradeoff.
Widespread 500 errors, failures in both public-facing sites and internal dashboards, blind spots in monitoring, and regionally varied recovery together showed how a private network can act as a chokepoint for much of the public internet.
For now, the outage has been contained to a matter of hours, but it leaves crypto and broader web infrastructure operators with a clear record of how a single provider can interrupt day-to-day access to core online services.
As of press time, services appear stable, and Cloudflare has implemented a fix, stating:

Monitoring – A fix has been implemented and we believe the incident is now resolved. We are continuing to monitor for errors to ensure all services are back to normal.

Nov 18, 2025 – 14:42 UTC
Update – We’ve deployed a change which has restored dashboard services. We are still working to remediate broad application services impact.