tag:www.githubstatus.com,2005:/historyGitHub Status - Incident History2026-04-14T04:40:12ZGitHubtag:www.githubstatus.com,2005:Incident/296086232026-04-14T04:40:12Z2026-04-14T04:40:12ZDisruption with some GitHub services<p><small>Apr <var data-var='date'>14</var>, <var data-var='time'>04:40</var> UTC</small><br><strong>Update</strong> - We identified an issue that impacts the Copilot Dashboard on the Insights tab and are working on mitigation. We will continue to keep you updated on progress.</p><p><small>Apr <var data-var='date'>14</var>, <var data-var='time'>03:47</var> UTC</small><br><strong>Update</strong> - The team continues to investigate issues accessing the Copilot Dashboard on the Insights tab. We will continue providing updates on the progress towards mitigation.</p><p><small>Apr <var data-var='date'>14</var>, <var data-var='time'>02:40</var> UTC</small><br><strong>Update</strong> - The Copilot Dashboard on the Insights tab is not accessible and we are continuing to investigate.</p><p><small>Apr <var data-var='date'>14</var>, <var data-var='time'>02:37</var> UTC</small><br><strong>Update</strong> - We are seeing a degradation of service on the Insights page.</p><p><small>Apr <var data-var='date'>14</var>, <var data-var='time'>01:57</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/296054022026-04-13T20:35:37Z2026-04-13T20:35:37ZIncident with Pages<p><small>Apr <var data-var='date'>13</var>, <var data-var='time'>20:35</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Apr <var data-var='date'>13</var>, <var data-var='time'>20:32</var> UTC</small><br><strong>Update</strong> - We have mitigated the issue with Pages.</p><p><small>Apr <var data-var='date'>13</var>, <var data-var='time'>20:30</var> UTC</small><br><strong>Monitoring</strong> - The degradation affecting Pages has been mitigated. We are monitoring to ensure stability.</p><p><small>Apr <var data-var='date'>13</var>, <var data-var='time'>19:57</var> UTC</small><br><strong>Update</strong> - We are investigating reports of issues with Pages. We will continue to keep users updated on progress towards mitigation.</p><p><small>Apr <var data-var='date'>13</var>, <var data-var='time'>19:56</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded availability for Pages.</p>tag:www.githubstatus.com,2005:Incident/296034822026-04-13T17:40:09Z2026-04-13T17:40:09ZDisruption with some GitHub services<p><small>Apr <var data-var='date'>13</var>, <var data-var='time'>17:40</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Apr <var data-var='date'>13</var>, <var data-var='time'>16:59</var> UTC</small><br><strong>Update</strong> - We have identified the root cause and are rolling out a fix for Copilot.
The services should now be in recovery, with expected full recovery in 5 to 10 minutes.</p><p><small>Apr <var data-var='date'>13</var>, <var data-var='time'>16:41</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/295641432026-04-10T13:28:23Z2026-04-10T13:28:23ZProblems with third-party Claude and Codex Agent sessions not being listed in the agents tab dashboard<p><small>Apr <var data-var='date'>10</var>, <var data-var='time'>13:28</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Apr <var data-var='date'>10</var>, <var data-var='time'>13:08</var> UTC</small><br><strong>Update</strong> - We are investigating third-party Claude and Codex Cloud Agent sessions not being listed in the agents tab dashboard.</p><p><small>Apr <var data-var='date'>10</var>, <var data-var='time'>13:07</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/295512632026-04-09T20:36:52Z2026-04-09T20:36:53ZDisruption with some GitHub services<p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>20:36</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>19:52</var> UTC</small><br><strong>Update</strong> - We continue to investigate periodic delays in Copilot Cloud Agent job processing.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>18:57</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate Copilot Cloud Agent job delays.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>17:48</var> UTC</small><br><strong>Update</strong> - Copilot Cloud Agent jobs are being processed and we are monitoring recovery.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>16:57</var> UTC</small><br><strong>Update</strong> - We are investigating delays processing Copilot Cloud Agent jobs.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>16:20</var> UTC</small><br><strong>Update</strong> - We are experiencing issues where Copilot coding agent jobs are delayed in starting.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>16:20</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/295472162026-04-09T10:15:37Z2026-04-09T10:15:37ZDisruption with some GitHub services<p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>10:15</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>10:15</var> UTC</small><br><strong>Monitoring</strong> - The degradation has been mitigated.
We are monitoring to ensure stability.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>09:57</var> UTC</small><br><strong>Update</strong> - We are investigating an issue affecting GitHub Copilot coding agent. Users may experience significant delays when starting new agent sessions, with jobs remaining queued longer than expected. Our team has identified increased load as a contributing factor and is actively working to restore normal performance.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>09:50</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/295447172026-04-09T04:57:17Z2026-04-09T04:57:17ZDisruption with GitHub notifications<p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>04:57</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>04:57</var> UTC</small><br><strong>Monitoring</strong> - The degradation has been mitigated. We are monitoring to ensure stability.</p><p><small>Apr <var data-var='date'> 9</var>, <var data-var='time'>04:42</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/294530162026-04-02T21:48:43Z2026-04-06T17:17:13ZDisruption with some GitHub services<p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>21:48</var> UTC</small><br><strong>Resolved</strong> - Between 15:20 and 20:18 UTC on Thursday, April 2, Copilot Cloud Agent entered a period of reduced performance. Due to an internal feature being developed for Copilot Code Review, the Copilot Cloud Agent infrastructure started to receive an increased number of jobs. This load eventually caused us to hit an internal rate limit, causing all work to suspend for an hour. During this hour, some new jobs would time out, while others would resume once rate limiting ended. Roughly 40% of jobs in this period were affected.<br /><br />Once the cause of this rate limiting was identified, we were able to disable the new CCR feature via a feature flag. After the jobs already in the queue cleared, we didn't see additional instances of rate limiting.</p><p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>21:48</var> UTC</small><br><strong>Monitoring</strong> - The degradation has been mitigated. We are monitoring to ensure stability.</p><p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>20:35</var> UTC</small><br><strong>Update</strong> - Although we are observing recovery once again, we expect continued periods of degradation. <br /><br />Work that is queued during times of degradation does eventually get processed. <br /><br />We continue to investigate and search for a mitigation, and will update again within 2 hours.</p><p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>19:28</var> UTC</small><br><strong>Update</strong> - This issue has recurred. Customers will once again experience false job starts when assigning tasks to Copilot Cloud Agent.
<br /><br />We are still investigating and trying to understand the pattern of degradation.</p><p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>18:25</var> UTC</small><br><strong>Update</strong> - We are once again seeing recovery with Copilot Cloud Agent job starts. <br /><br />We are keeping this open while we verify this won't recur.</p><p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>17:59</var> UTC</small><br><strong>Update</strong> - When assigning tasks to Copilot Cloud Agent, the task will appear to be working, but may not actually be running.<br /><br />We are investigating.</p><p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>17:49</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/294514712026-04-02T16:30:05Z2026-04-07T19:09:23ZCopilot Coding Agent failing to start some jobs<p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>16:30</var> UTC</small><br><strong>Resolved</strong> - Between 15:20 and 20:18 UTC on Thursday, April 2, Copilot Cloud Agent entered a period of reduced performance. Due to an internal feature being developed for Copilot Code Review, the Copilot Cloud Agent infrastructure started to receive an increased number of jobs. This load eventually caused us to hit an internal rate limit, causing all work to suspend for an hour. During this hour, some new jobs would time out, while others would resume once rate limiting ended. Roughly 40% of jobs in this period were affected.<br /><br />Once the cause of this rate limiting was identified, we were able to disable the new CCR feature via a feature flag. After the jobs already in the queue cleared, we didn't see additional instances of rate limiting.<br /><br />This was the same incident declared in https://www.githubstatus.com/incidents/d96l71t3h63k</p><p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>16:28</var> UTC</small><br><strong>Update</strong> - When assigning tasks to Copilot Cloud Agent, the task will appear to be working, but may not actually be running. <br /><br />We are investigating.</p><p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>16:18</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/294277602026-04-01T23:45:45Z2026-04-10T19:02:47ZDisruption with GitHub's code search<p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>23:45</var> UTC</small><br><strong>Resolved</strong> - On April 1st, 2026, between 14:40 and 17:00 UTC, the GitHub code search service had an outage that resulted in users being unable to perform searches.<br /><br />The issue was initially caused by an upgrade to the code search Kafka cluster ZooKeeper instances, which caused a loss of quorum. This resulted in application-level data inconsistencies that required the index to be reset to a point in time before the loss of quorum occurred. Meanwhile, an accidental deploy resulted in query services losing their shard-to-host mappings, which are typically propagated by Kafka.<br /><br />We remediated the problem by performing rolling restarts in the Kafka cluster, allowing quorum to be reestablished.
From there we were able to reset our index to a point in time before the inconsistencies occurred.<br /><br />The team is working on ways to improve our time to respond to and mitigate issues relating to Kafka in the future.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>23:45</var> UTC</small><br><strong>Update</strong> - Code search has recovered and is serving production traffic.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>22:00</var> UTC</small><br><strong>Update</strong> - We have stabilized Code Search infrastructure, and are in the final stages of validation before slowly reintroducing production traffic.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>19:37</var> UTC</small><br><strong>Update</strong> - We are still working on recovering to a serviceable state and expect to have a more substantial update within another two hours.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>17:48</var> UTC</small><br><strong>Update</strong> - We are observing some recovery for Code Search queries, but customers should be aware that the data being served may be stale, especially for changes that took place after 07:00 UTC today (1 April 2026). We are still working on recovering our ingestion pipeline, and synchronizing the indexed data.<br /><br />We will update again within 2 hours.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>16:00</var> UTC</small><br><strong>Update</strong> - We identified an issue in our ingestion pipeline that degraded the freshness of Code Search results. While fixing the issue with the ingestion pipeline, a deployment caused a loss of dynamic configuration, which is causing most requests for Code Search results to fail. We are working to restore the service and to re-ingest the misaligned data.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>15:02</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/294289762026-04-01T16:10:17Z2026-04-02T03:17:14ZGitHub audit logs are unavailable<p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>16:10</var> UTC</small><br><strong>Resolved</strong> - On April 1, 2026, between 15:34 UTC and 16:02 UTC, our audit log service lost connectivity to its backing data store due to a failed credential rotation. During this 28-minute window, audit log history was unavailable via both the API and web UI. This resulted in 5xx errors for 4,297 API actors and 127 github.com users. Additionally, events created during this window were delayed by up to 29 minutes in github.com and event streaming. No audit log events were lost; all audit log events were ultimately written and streamed successfully. Customers using GitHub Enterprise Cloud with data residency were not impacted by this incident. <br /><br />We were alerted to the infrastructure failure at 15:40 UTC — six minutes after onset — and resolved the issue by recycling the affected environment, restoring full service by 16:02 UTC. We are conducting a thorough review of our credential rotation process to strengthen its resiliency and prevent recurrence.
In parallel, we are strengthening our monitoring capabilities to ensure faster detection and earlier visibility into similar issues going forward.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>16:07</var> UTC</small><br><strong>Update</strong> - A routine credential rotation has failed for our audit logs service; we have re-deployed our service and are waiting for recovery.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>16:06</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/294219372026-04-01T12:41:38Z2026-04-03T19:59:42ZIncident with Copilot<p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>12:41</var> UTC</small><br><strong>Resolved</strong> - On April 1, 2026, between 07:29 and 12:41 UTC, some customers experienced elevated 5xx errors and increased latency when using GitHub Copilot features that rely on `/agents/sessions` endpoints (including creating or viewing agent sessions). The issue was caused by resource exhaustion in one of the Copilot backend services handling these requests, which in turn caused timeouts and failed requests. We mitigated the incident by increasing the service’s available compute resources and tuning its runtime concurrency settings. Service health returned to normal and the incident was fully resolved by 12:41 UTC.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>12:10</var> UTC</small><br><strong>Update</strong> - The success rate and latency for creating and viewing agent sessions have stabilized at baseline levels; we are continuing to monitor recovery.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>12:02</var> UTC</small><br><strong>Update</strong> - The degradation has been mitigated. We are monitoring to ensure stability.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>11:37</var> UTC</small><br><strong>Update</strong> - The success rate for creating and viewing agent sessions has stabilized, and we're continuing to monitor latency, which is trending toward baseline levels.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>11:24</var> UTC</small><br><strong>Update</strong> - The degradation has been mitigated. We are monitoring to ensure stability.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>10:56</var> UTC</small><br><strong>Monitoring</strong> - The degradation affecting Copilot has been mitigated. We are monitoring to ensure stability.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>10:31</var> UTC</small><br><strong>Update</strong> - Users may see increased latency and intermittent errors when viewing or creating agent sessions. We are working on mitigations to return to baseline performance and success rate.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>10:00</var> UTC</small><br><strong>Update</strong> - We are investigating reports of issues with service(s): Copilot Dotcom Agents.
We will continue to keep users updated on progress towards mitigation.</p><p><small>Apr <var data-var='date'> 1</var>, <var data-var='time'>09:58</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot.</p>tag:www.githubstatus.com,2005:Incident/294022572026-03-31T21:23:43Z2026-04-03T19:37:17ZIncident with Pull Requests: High percentage of 500s<p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>21:23</var> UTC</small><br><strong>Resolved</strong> - On Monday, March 31st, 2026, between 13:53 UTC and 21:23 UTC, the Pull Requests service experienced elevated latency and failures. On average, the error rate was 0.15% and peaked at 0.28% of requests to the service. This was due to a change in garbage collection (GC) settings for a Go-based internal service that provides access to Git repository data. The changes caused more frequent GC activity and elevated CPU consumption on a subset of storage nodes, increasing latency and failure rates for some internal API operations.<br /><br />We mitigated the incident by reverting the GC changes. To prevent future incidents and improve time to detection and mitigation, we are instrumenting additional metrics and alerting for GC-related behavior, improving our visibility into other signals that could cause degradation of this type, and updating our best practices and standards for garbage collection in Go-based services.<br /></p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>21:16</var> UTC</small><br><strong>Monitoring</strong> - The degradation affecting Pull Requests has been mitigated. We are monitoring to ensure stability.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>21:12</var> UTC</small><br><strong>Update</strong> - We continue to see a small subset of repositories experiencing timeouts and elevated latency in Pull Requests, affecting under 1% of requests.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>19:28</var> UTC</small><br><strong>Update</strong> - Error rates remain elevated across multiple pull request endpoints. We are pursuing multiple potential mitigations.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>18:42</var> UTC</small><br><strong>Update</strong> - We continue to experience elevated error rates affecting Pull Requests. An earlier fix resolved one component of the issue, but some users may still encounter intermittent timeouts when viewing or interacting with pull requests. Our teams are actively investigating the remaining causes.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>17:16</var> UTC</small><br><strong>Update</strong> - We identified an issue causing increased errors when accessing Pull Requests. The mitigation is being applied across our infrastructure and we will continue to provide updates as the mitigation rolls out.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>16:35</var> UTC</small><br><strong>Update</strong> - We are seeing recovery in latency and timeouts of requests related to pull requests, even though 500s are still elevated. While we are continuing to investigate, we are applying a mitigation and expect further recovery after it is applied.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>16:15</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate increased 500 errors affecting GitHub services.
You may experience intermittent failures when using Pull Requests and other features. We are actively working to identify and resolve the underlying cause.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>15:39</var> UTC</small><br><strong>Update</strong> - We are investigating increased 500 errors affecting GitHub services. You may experience intermittent failures when using Pull Requests and other features. We are actively working to identify and resolve the underlying cause.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>15:06</var> UTC</small><br><strong>Update</strong> - We are seeing a higher than average number of 500s due to timeouts across GitHub services. We have a potential mitigation in flight and are continuing to investigate.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>15:05</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Pull Requests.</p>tag:www.githubstatus.com,2005:Incident/294006682026-03-31T15:10:22Z2026-04-03T01:49:10ZIssues with metered billing report generation<p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>15:10</var> UTC</small><br><strong>Resolved</strong> - On March 31, 2026, between 06:15 UTC and 15:30 UTC, the GitHub billing usage reports feature was degraded due to reduced server capacity. Customers requesting billing usage reports and loading the top usage by organization and repository on the billing overview and usage pages were impacted. The average error rate for usage report requests was 15%, peaking at 98% over an eight-minute window. For the billing pages, an average of 56% of requests failed to load the top usage cards. The root cause was an increase in billing usage report requests with large datasets, which exhausted the capacity of the nodes responsible for reporting data. There was no impact on billing charges. <br /><br />We mitigated the incident by adjusting our auto-scaling thresholds to better meet our capacity needs. We are working to improve our metrics to reduce time to detection and mitigation for similar issues in the future.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>15:01</var> UTC</small><br><strong>Monitoring</strong> - The degradation has been mitigated. We are monitoring to ensure stability.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>14:59</var> UTC</small><br><strong>Update</strong> - We have applied mitigations to a data store related to billing reports, and are seeing partial recovery in billing report generation. We continue to monitor for full recovery.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>14:56</var> UTC</small><br><strong>Update</strong> - We are seeing a high number of 500s due to timeouts across GitHub services. We are redeploying some of our core services and we expect that this will allow us to recover.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>14:39</var> UTC</small><br><strong>Update</strong> - We're continuing to see high failure rates on billing report generation, and are working on mitigations for a data store related to billing reports.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>13:56</var> UTC</small><br><strong>Update</strong> - We're seeing issues related to metered billing reports, intermittently affecting metered usage graphs and reports on the billing page.
We have identified an issue with a data store, and are working on mitigations.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>13:47</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/293739482026-03-30T13:25:38Z2026-04-02T21:56:56ZElevated delays in Actions workflow runs and Pull Request status updates<p><small>Mar <var data-var='date'>30</var>, <var data-var='time'>13:25</var> UTC</small><br><strong>Resolved</strong> - On March 30, 2026, between 10:11 UTC and 13:25 UTC, GitHub Actions experienced degraded performance. During this time, approximately 2.65% of workflow jobs triggered by pull request events experienced start delays exceeding 5 minutes. The issue was caused by replication lag on an internal database cluster used by Actions, which triggered write throttling in our database protection layer and slowed job queue processing. <br /><br />The replication lag originated from planned maintenance to scale the internal database. Newly added database hosts triggered guardrails in the throttling layer, restricting write throughput. The incident was mitigated by excluding the new hosts from replication delay calculations. <br /><br />To prevent recurrence, we have updated our maintenance procedures to ensure new hosts are excluded from throttling assessments during scaling operations. Additionally, we are investing in automation to streamline this type of maintenance activity.</p><p><small>Mar <var data-var='date'>30</var>, <var data-var='time'>13:25</var> UTC</small><br><strong>Update</strong> - The degradation has been mitigated. We are monitoring to ensure stability.</p><p><small>Mar <var data-var='date'>30</var>, <var data-var='time'>13:20</var> UTC</small><br><strong>Monitoring</strong> - The degradation affecting Actions and Pull Requests has been mitigated. We are monitoring to ensure stability.</p><p><small>Mar <var data-var='date'>30</var>, <var data-var='time'>13:02</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions and Pull Requests</p>tag:www.githubstatus.com,2005:Incident/293108272026-03-27T05:00:00Z2026-03-27T18:42:58ZIncident with Copilot<p><small>Mar <var data-var='date'>27</var>, <var data-var='time'>05:00</var> UTC</small><br><strong>Resolved</strong> - On March 27, 2026, from 02:30 to 04:56 UTC, a misconfiguration in our rate limiting system caused users on Copilot Free, Student, Pro, and Pro+ plans to experience unexpected rate limit errors. The configuration that was incorrectly applied was intended solely for internal staff testing of rate-limiting experiences. Copilot Business and Copilot Enterprise accounts were not affected.<br /><br />During this period, affected users received error messages instructing them to retry after a certain time. Approximately 32% of active Free users, 35% of active Student users, 46% of active Pro users, and 66% of active Pro+ users were affected.<br /><br />After identifying the root cause, we reverted the change and restored the expected rate limits. 
We are reviewing our deployment and validation processes to help ensure configurations used for internal testing cannot be inadvertently applied to production environments.</p>tag:www.githubstatus.com,2005:Incident/292391642026-03-24T20:56:05Z2026-03-24T20:56:05ZDisruption with some GitHub services<p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>20:56</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>20:38</var> UTC</small><br><strong>Update</strong> - We are investigating elevated error rates affecting multiple GitHub services including Actions, Issues, Pull Requests, Webhooks, Codespaces, and login functionality. Some users may have experienced errors when accessing these features. Most services are now showing signs of recovery. We'll post another update by 21:00 UTC.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>20:23</var> UTC</small><br><strong>Update</strong> - Issues is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>20:23</var> UTC</small><br><strong>Update</strong> - Pull Requests is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>20:20</var> UTC</small><br><strong>Update</strong> - Webhooks is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>20:18</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions.</p>tag:www.githubstatus.com,2005:Incident/292356442026-03-24T19:51:16Z2026-03-31T18:23:31ZTeams GitHub Notifications App is down<p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>19:51</var> UTC</small><br><strong>Resolved</strong> - On March 24, 2026, between 15:57 UTC and 19:51 UTC, the Microsoft Teams Integration and Teams Copilot Integration services were degraded and unable to deliver GitHub event notifications to Microsoft Teams. On average, the error rate was 37.4% and peaked at 90.1% of requests to the service; approximately 19% of all integration installs failed to receive GitHub-to-Teams notifications in this time period.<br /><br />This was due to an outage at one of our upstream dependencies, which caused HTTP 500 errors and connection resets for our Teams integration.<br /><br />We coordinated with the relevant service teams, and the issue was resolved at 19:51 UTC when the upstream incident was mitigated.<br /><br />We are working to update observability and runbooks to reduce time to mitigation for issues like this in the future.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>18:50</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability from Azure Teams APIs, which is impacting notifications from GitHub to Microsoft Teams. We are awaiting resolution from Azure.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>17:43</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability from Azure APIs, which is impacting notifications from GitHub to Microsoft Teams.
We are working with Azure to resolve the issue.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>17:09</var> UTC</small><br><strong>Update</strong> - We found an issue impacting notifications from GitHub to Microsoft Teams. We are working on mitigation and will keep users updated on progress towards mitigation.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>16:59</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/291794382026-03-22T10:02:05Z2026-03-26T19:59:44ZDisruption with some GitHub services<p><small>Mar <var data-var='date'>22</var>, <var data-var='time'>10:02</var> UTC</small><br><strong>Resolved</strong> - On March 22, 2026, between 09:05 UTC and 10:02 UTC, users may have experienced intermittent errors and increased latency when performing Git HTTP read operations. On average, the error rate was 3.84% and peaked at 15.55% of requests to the service. The issue was caused by elevated latency in an internal authentication service within one of our regional clusters. We mitigated the issue by redirecting traffic away from the affected cluster at 09:39 UTC, after which error rates returned to normal. The incident was fully resolved at 10:02 UTC. <br /><br />We are working to scale the authentication service and reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Mar <var data-var='date'>22</var>, <var data-var='time'>09:27</var> UTC</small><br><strong>Update</strong> - We are investigating intermittently high latency and errors from Git operations.</p><p><small>Mar <var data-var='date'>22</var>, <var data-var='time'>09:08</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/291232672026-03-20T01:58:40Z2026-03-25T14:54:45ZDisruption with Copilot Coding Agent Sessions<p><small>Mar <var data-var='date'>20</var>, <var data-var='time'>01:58</var> UTC</small><br><strong>Resolved</strong> - On March 19, 2026, between 01:05 UTC and 02:52 UTC, and again on March 20, 2026, between 00:42 UTC and 01:58 UTC, the Copilot Coding Agent service was degraded and users were unable to start new Copilot Agent sessions or view existing ones. During the first incident, the average error rate was ~53% and peaked at ~93% of requests to the service. During the second incident, the average error rate was ~99% and peaked at ~100% of requests with significant retry amplification. Both incidents were caused by the same underlying system authentication issue that prevented the service from connecting to its backing datastore.<br /><br />We mitigated each incident by rotating the affected credentials, which restored connectivity and returned error rates to normal. The mitigation time was 01:24.
The second occurrence was due to an incomplete remediation of the first.<br /><br />We are implementing automated monitoring for credential lifecycle events and improving operational processes to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Mar <var data-var='date'>20</var>, <var data-var='time'>01:26</var> UTC</small><br><strong>Update</strong> - We are rolling out our mitigation and are seeing recovery.</p><p><small>Mar <var data-var='date'>20</var>, <var data-var='time'>01:00</var> UTC</small><br><strong>Update</strong> - We are seeing widespread issues starting and viewing Copilot Agent sessions. We understand the cause and are working on remediation.</p><p><small>Mar <var data-var='date'>20</var>, <var data-var='time'>00:58</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/291140792026-03-20T00:05:15Z2026-03-31T17:01:09ZGit operations for users on the west coast are experiencing an increase in latency<p><small>Mar <var data-var='date'>20</var>, <var data-var='time'>00:05</var> UTC</small><br><strong>Resolved</strong> - On March 19, 2026, between 16:10 UTC and 00:05 UTC (March 20), Git operations (clone, fetch, push) from the US west coast experienced elevated latency and degraded throughput. Users reported clone speeds dropping from typical speeds to under 1 MiB/s in extreme cases. The root cause was network transport link saturation at our Seattle edge site, where a fiber cut affecting our backbone transport resulted in saturation and packet loss. A planned scale-up of the site was already in progress and was accelerated to relieve the backbone capacity pressure. We also brought online additional edge capacity in a cloud region and redirected some users there. Current scale with the upgraded network capacity is sufficient to prevent recurrence, as we upgraded from 800Gbps to 3.2Tbps total capacity on this path. We will continue to monitor network health and respond to any further issues.</p><p><small>Mar <var data-var='date'>20</var>, <var data-var='time'>00:05</var> UTC</small><br><strong>Update</strong> - Git operations have stabilized following the changes we deployed today.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>23:52</var> UTC</small><br><strong>Update</strong> - We are seeing early signs of improvement. We are working on one more small change to further improve traffic routing on the west coast.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>22:57</var> UTC</small><br><strong>Update</strong> - We have completed the rollout of our new network path and are monitoring its impact.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>21:59</var> UTC</small><br><strong>Update</strong> - We are beginning the rollout of our new network path. During this change, users will continue to see higher latency from the west coast.
We will provide another update when the rollout is complete.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>18:27</var> UTC</small><br><strong>Update</strong> - We are working to enable a new network path on the west coast to reduce load and will monitor the impact on latency for Git Operations.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>17:49</var> UTC</small><br><strong>Update</strong> - We are still seeing elevated latency for Git operations on the west coast and are continuing to investigate.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>17:01</var> UTC</small><br><strong>Update</strong> - We are redirecting traffic back to our Seattle region and customers should see a decrease in latency for Git operations.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>16:25</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Git Operations.</p>tag:www.githubstatus.com,2005:Incident/291109992026-03-19T14:32:55Z2026-03-25T15:39:32ZIssues with Copilot Coding Agent<p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>14:32</var> UTC</small><br><strong>Resolved</strong> - On March 19, 2026, between 01:05 UTC and 02:52 UTC, and again on March 20, 2026, between 00:42 UTC and 01:58 UTC, the Copilot Coding Agent service was degraded and users were unable to start new Copilot Agent sessions or view existing ones. During the first incident, the average error rate was ~53% and peaked at ~93% of requests to the service. During the second incident, the average error rate was ~99% and peaked at ~100% of requests with significant retry amplification. Both incidents were caused by the same underlying system authentication issue that prevented the service from connecting to its backing datastore.<br /><br />We mitigated each incident by rotating the affected credentials, which restored connectivity and returned error rates to normal. The mitigation time was 01:24. The second occurrence was due to an incomplete remediation of the first.<br /><br />We are implementing automated monitoring for credential lifecycle events and improving operational processes to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>14:06</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>14:02</var> UTC</small><br><strong>Update</strong> - We are investigating reports that Copilot Coding Agent session logs are not available in the UI.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>13:45</var> UTC</small><br><strong>Update</strong> - Copilot is experiencing degraded performance.
We are continuing to investigate.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>13:44</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/290988322026-03-19T02:52:44Z2026-03-25T15:40:07ZDisruption with Copilot Coding Agent sessions<p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>02:52</var> UTC</small><br><strong>Resolved</strong> - On March 19, 2026, between 01:05 UTC and 02:52 UTC, and again on March 20, 2026, between 00:42 UTC and 01:58 UTC, the Copilot Coding Agent service was degraded and users were unable to start new Copilot Agent sessions or view existing ones. During the first incident, the average error rate was ~53% and peaked at ~93% of requests to the service. During the second incident, the average error rate was ~99% and peaked at ~100% of requests with significant retry amplification. Both incidents were caused by the same underlying system authentication issue that prevented the service from connecting to its backing datastore.<br /><br />We mitigated each incident by rotating the affected credentials, which restored connectivity and returned error rates to normal. The mitigation time was 01:24. The second occurrence was due to an incomplete remediation of the first.<br /><br />We are implementing automated monitoring for credential lifecycle events and improving operational processes to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>02:46</var> UTC</small><br><strong>Update</strong> - We have rolled out our mitigation and are seeing recovery for Copilot Coding Agent sessions.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>02:25</var> UTC</small><br><strong>Update</strong> - We are seeing widespread issues starting and viewing Copilot Agent sessions. We have a hypothesis for the cause and are working on remediation.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>02:05</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/290949362026-03-19T01:44:01Z2026-04-07T19:22:44ZDisruption with some GitHub services<p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>01:44</var> UTC</small><br><strong>Resolved</strong> - On March 19, 2026, between 16:10 UTC and 00:05 UTC (March 20), Git operations (clone, fetch, push) from the US west coast experienced elevated latency and degraded throughput. Users reported clone speeds dropping from typical speeds to under 1 MiB/s in extreme cases. The root cause was network transport link saturation at our Seattle edge site, where a fiber cut affecting our backbone transport resulted in saturation and packet loss. A planned scale-up of the site was already in progress and was accelerated to relieve the backbone capacity pressure. We also brought online additional edge capacity in a cloud region and redirected some users there. Current scale with the upgraded network capacity is sufficient to prevent recurrence, as we upgraded from 800Gbps to 3.2Tbps total capacity on this path.
We will continue to monitor network health and respond to any further issues.<br /><br />This was the same incident declared in https://www.githubstatus.com/incidents/xs6xtcv196g7</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>01:43</var> UTC</small><br><strong>Update</strong> - We are seeing recovery in git operations for customers on the West Coast of the US.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>00:56</var> UTC</small><br><strong>Update</strong> - We continue to investigate the slow performance of Git Operations affecting the US West Coast.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>00:10</var> UTC</small><br><strong>Update</strong> - We continue to investigate degraded performance for git operations from the US West Coast.</p><p><small>Mar <var data-var='date'>18</var>, <var data-var='time'>23:33</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate degraded performance for git operations from the US West Coast.</p><p><small>Mar <var data-var='date'>18</var>, <var data-var='time'>22:48</var> UTC</small><br><strong>Update</strong> - We are experiencing increased latency when performing git operations, especially large pushes and pulls from customers on the west coast of the US. We are not seeing an increase in failures. We are continuing to investigate.</p><p><small>Mar <var data-var='date'>18</var>, <var data-var='time'>22:36</var> UTC</small><br><strong>Update</strong> - Git Operations is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'>18</var>, <var data-var='time'>22:36</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p>tag:www.githubstatus.com,2005:Incident/290905962026-03-18T19:46:38Z2026-03-19T21:15:58ZWebhook delivery is delayed<p><small>Mar <var data-var='date'>18</var>, <var data-var='time'>19:46</var> UTC</small><br><strong>Resolved</strong> - On March 18, 2026, between 18:18 UTC and 19:46 UTC, all webhook deliveries experienced elevated latency. During this time, average delivery latency increased from a baseline of approximately 5 seconds to a peak of approximately 160 seconds. This was due to resource constraints in the webhook delivery pipeline, which caused queue backlog growth and increased delivery latency. We mitigated the incident by shifting traffic and adding capacity, after which webhook delivery latency returned to normal. We are working to improve capacity management and detection in the webhook delivery pipeline to help prevent similar issues in the future.</p><p><small>Mar <var data-var='date'>18</var>, <var data-var='time'>19:25</var> UTC</small><br><strong>Update</strong> - We are seeing recovery and are continuing to monitor the latency for webhook deliveries.</p><p><small>Mar <var data-var='date'>18</var>, <var data-var='time'>18:51</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Webhooks.</p>