During the incident, the Cloud Metrics platform experienced intermittent latency spikes when communicating with a backend cloud service in the prod-us-central-0 and prod-us-central-5 regions, and the internal CSP-facing issue was escalated to a P1. After determining that the scope of the latency spikes was limited to a single availability zone, the team mitigated the situation by migrating all write traffic away from the affected zone to the single, largely unaffected availability zone.
As the CSP service team attempted to remedy the issue, it worsened and began affecting the previously unaffected zone, so another mitigation path was needed. We changed the connection strategy Cloud Metrics uses to reach the backend service and deployed the change to all environments; the new connection method proved more reliable and was not affected by the latency increases, stabilizing the write path once again.
We have migrated all tenants back to multi-zone write paths and are confident in the current method of connectivity to the backend cloud service, which is the method we migrated to during the incident. We have no plans to return to the previous, problematic connectivity method for the foreseeable future.
Posted Mar 17, 2026 - 18:22 UTC
Update
We are rolling out a mitigation across the environments in these regions, and preemptively in other environments where possible to ensure the issue does not spread elsewhere.
Posted Mar 06, 2026 - 21:44 UTC
Update
We have seen an increase in latency in our cloud provider's services and are rolling out a change to mitigate the issue. We are monitoring.
Posted Mar 06, 2026 - 20:53 UTC
Update
We are continuing to investigate this issue alongside the CSP and have escalated through the appropriate channels. The mitigation in place continues to work as expected, and any notable updates will continue to be shared here for tracking.
Posted Mar 05, 2026 - 22:22 UTC
Update
We are continuing to investigate this issue alongside the CSP. Any notable updates will continue to be shared here for tracking.
Posted Feb 27, 2026 - 22:05 UTC
Monitoring
We've implemented a mitigation and are continuing to monitor and investigate this issue.
Posted Feb 27, 2026 - 14:55 UTC
Update
We have begun rolling out mitigation steps to reduce write latency in the prod-us-central-0 and prod-us-central-5 regions. While these measures are expected to improve performance, we are continuing to investigate the underlying root cause of the issue. We will provide additional updates as more information becomes available.
Posted Feb 26, 2026 - 16:23 UTC
Investigating
Since February 19, we have been investigating an intermittent issue causing increased write latency in the prod-us-central-0 and prod-us-central-5 regions. The issue does not affect all traffic but may result in delayed write operations for some customers. Our engineering team is actively working to identify the root cause and stabilize performance. We will share additional updates as progress is made.
Posted Feb 25, 2026 - 19:54 UTC
This incident affected: Grafana Cloud: Prometheus (GCP Belgium - prod-eu-west-0: Ingestion, GCP US Central - prod-us-central-0: Ingestion, GCP US Central - prod-us-central-5: Ingestion) and Grafana Cloud: Tempo (GCP Belgium - prod-eu-west-0: Ingestion).