GitHub Enterprise Cloud - EU Status - Incident History https://eu.githubstatus.com Statuspage Mon, 13 Apr 2026 08:48:54 +0000 EU - Problems with third-party Claude and Codex Agent sessions not being listed in the agents tab dashboard <p><small>Apr <var data-var='date'>10</var>, <var data-var='time'>13:28</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Apr <var data-var='date'>10</var>, <var data-var='time'>13:08</var> UTC</small><br><strong>Update</strong> - We are investigating third-party Claude and Codex Cloud Agent sessions not being listed in the agents tab dashboard.</p><p><small>Apr <var data-var='date'>10</var>, <var data-var='time'>13:07</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> Fri, 10 Apr 2026 13:28:21 +0000 https://eu.githubstatus.com/incidents/ppp4f4y2jmdb https://eu.githubstatus.com/incidents/ppp4f4y2jmdb EU - Copilot Coding Agent failing to start some jobs <p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>16:30</var> UTC</small><br><strong>Resolved</strong> - Between 15:20 and 20:18 UTC on Thursday April 2, Copilot Cloud Agent entered a period of reduced performance. Due to an internal feature being developed for Copilot Code Review, the Copilot Cloud Agent infrastructure started to receive an increased number of jobs. This load eventually caused us to hit an internal rate limit, causing all work to suspend for an hour. During this hour, some new jobs would time out, while others would resume once rate limiting ended. Roughly 40% of jobs in this period were affected.<br /><br />Once the cause of this rate limiting was identified, we were able to disable the new CCR feature via a feature flag. 
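Rate-limit exhaustion of this shape can be sketched with a generic token bucket. This is an illustration only; the actual internal limiter and its parameters are not public, and the capacity and refill numbers below are invented.

```python
class TokenBucket:
    """Minimal token-bucket limiter: a job surge drains the budget,
    after which all further work is suspended until tokens refill."""

    def __init__(self, capacity, refill_per_tick):
        self.capacity = capacity
        self.refill = refill_per_tick
        self.tokens = capacity

    def tick(self):
        # Periodic refill, capped at the bucket's capacity.
        self.tokens = min(self.capacity, self.tokens + self.refill)

    def allow(self):
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=100, refill_per_tick=1)
# A surge (e.g. a new feature fanning out extra jobs) exhausts the budget:
allowed = sum(bucket.allow() for _ in range(300))
print(allowed)   # 100 -- the remaining 200 jobs are suspended
bucket.tick()
resumed = bucket.allow()
print(resumed)   # True -- work resumes once tokens refill
```

This also shows why work resumed on its own once the queue cleared: suspended jobs are not rejected permanently, they simply wait for the next refill.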
Once the jobs already in the queue had cleared, we saw no further rate limiting.<br /><br />This was the same incident declared in https://www.githubstatus.com/incidents/d96l71t3h63k</p><p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>16:28</var> UTC</small><br><strong>Update</strong> - Tasks assigned to Copilot Cloud Agent will appear to be working, but may not actually be running. <br /><br />We are investigating.</p><p><small>Apr <var data-var='date'> 2</var>, <var data-var='time'>16:18</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> Thu, 02 Apr 2026 16:30:06 +0000 https://eu.githubstatus.com/incidents/1xxr9czm0s01 https://eu.githubstatus.com/incidents/1xxr9czm0s01 EU - Incident with Pull Requests: High percentage of 500s <p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>21:23</var> UTC</small><br><strong>Resolved</strong> - On Monday March 31st, 2026, between 13:53 UTC and 21:23 UTC, the Pull Requests service experienced elevated latency and failures. On average, the error rate was 0.15% and peaked at 0.28% of requests to the service. This was due to a change in garbage collection (GC) settings for a Go-based internal service that provides access to Git repository data. The changes caused more frequent GC activity and elevated CPU consumption on a subset of storage nodes, increasing latency and failure rates for some internal API operations.<br /><br />We mitigated the incident by reverting the GC changes. 
To prevent future incidents and improve time to detection and mitigation, we are instrumenting additional metrics and alerting for GC-related behavior, improving our visibility into other signals that could indicate degradation of this type, and updating our best practices and standards for garbage collection in Go-based services.<br /></p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>21:16</var> UTC</small><br><strong>Monitoring</strong> - The degradation affecting Pull Requests has been mitigated. We are monitoring to ensure stability.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>21:12</var> UTC</small><br><strong>Update</strong> - We continue to see a small subset of repositories experiencing timeouts and elevated latency in Pull Requests, affecting under 1% of requests.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>19:28</var> UTC</small><br><strong>Update</strong> - Error rates remain elevated across multiple pull request endpoints. We are pursuing multiple potential mitigations.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>18:42</var> UTC</small><br><strong>Update</strong> - We continue to experience elevated error rates affecting Pull Requests. An earlier fix resolved one component of the issue, but some users may still encounter intermittent timeouts when viewing or interacting with pull requests. Our teams are actively investigating the remaining causes.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>17:16</var> UTC</small><br><strong>Update</strong> - We identified an issue causing increased errors when accessing Pull Requests. 
The mitigation is being applied across our infrastructure and we will continue to provide updates as the mitigation rolls out.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>16:35</var> UTC</small><br><strong>Update</strong> - We are seeing recovery in latency and timeouts of requests related to pull requests, even though 500s are still elevated. While we are continuing to investigate, we are applying a mitigation and expect further recovery after it is applied.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>16:15</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate increased 500 errors affecting GitHub services. You may experience intermittent failures when using Pull Requests and other features. We are actively working to identify and resolve the underlying cause.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>15:39</var> UTC</small><br><strong>Update</strong> - We are investigating increased 500 errors affecting GitHub services. You may experience intermittent failures when using Pull Requests and other features. We are actively working to identify and resolve the underlying cause.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>15:06</var> UTC</small><br><strong>Update</strong> - We are seeing a higher than average number of 500s due to timeouts across GitHub services. 
We have a potential mitigation in flight and are continuing to investigate.</p><p><small>Mar <var data-var='date'>31</var>, <var data-var='time'>15:05</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Pull Requests</p> Tue, 31 Mar 2026 21:23:44 +0000 https://eu.githubstatus.com/incidents/b16c1gfcpm8y https://eu.githubstatus.com/incidents/b16c1gfcpm8y EU - Teams Github Notifications App is down <p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>19:51</var> UTC</small><br><strong>Resolved</strong> - On March 24, 2026, between 15:57 UTC and 19:51 UTC, the Microsoft Teams Integration and Teams Copilot Integration services were degraded and unable to deliver GitHub event notifications to Microsoft Teams. On average, the error rate was 37.4% and peaked at 90.1% of requests to the service -- approximately 19% of all integration installs failed to receive GitHub-to-Teams notifications in this time period.<br /><br />This was due to an outage at one of our upstream dependencies, which caused HTTP 500 errors and connection resets for our Teams integration.<br /><br />We coordinated with the relevant service teams, and the issue was resolved at 19:51 UTC when the upstream incident was mitigated.<br /><br />We are working to update observability and runbooks to reduce time to mitigation for issues like this in the future.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>18:51</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability from Azure Teams APIs, which is impacting notifications from GitHub to Microsoft Teams. We are awaiting resolution from Azure.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>17:43</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability from Azure APIs, which is impacting notifications from GitHub to Microsoft Teams. 
We are working with Azure to resolve the issue.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>17:09</var> UTC</small><br><strong>Update</strong> - We found an issue impacting notifications from GitHub to Microsoft Teams. We are working on a mitigation and will keep users updated on progress.</p><p><small>Mar <var data-var='date'>24</var>, <var data-var='time'>17:00</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> Tue, 24 Mar 2026 19:51:17 +0000 https://eu.githubstatus.com/incidents/698y0sl10lb1 https://eu.githubstatus.com/incidents/698y0sl10lb1 EU - Disruption with Copilot Coding Agent Sessions <p><small>Mar <var data-var='date'>20</var>, <var data-var='time'>01:58</var> UTC</small><br><strong>Resolved</strong> - On March 19, 2026, between 01:05 UTC and 02:52 UTC, and again on March 20, 2026, between 00:42 UTC and 01:58 UTC, the Copilot Coding Agent service was degraded and users were unable to start new Copilot Agent sessions or view existing ones. During the first incident, the average error rate was ~53% and peaked at ~93% of requests to the service. During the second incident, the average error rate was ~99% and peaked at ~100% of requests with significant retry amplification. Both incidents were caused by the same underlying system authentication issue that prevented the service from connecting to its backing datastore.<br /><br />We mitigated each incident by rotating the affected credentials, which restored connectivity and returned error rates to normal. The mitigation was in place by 01:24 UTC. 
The second occurrence was due to an incomplete remediation of the first.<br /><br />We are implementing automated monitoring for credential lifecycle events and improving operational processes to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Mar <var data-var='date'>20</var>, <var data-var='time'>01:26</var> UTC</small><br><strong>Update</strong> - We are rolling out our mitigation and are seeing recovery.</p><p><small>Mar <var data-var='date'>20</var>, <var data-var='time'>01:00</var> UTC</small><br><strong>Update</strong> - We are seeing widespread issues starting and viewing Copilot Agent sessions. We understand the cause and are working on remediation.</p><p><small>Mar <var data-var='date'>20</var>, <var data-var='time'>00:58</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> Fri, 20 Mar 2026 01:58:39 +0000 https://eu.githubstatus.com/incidents/qldf8kbyn5px https://eu.githubstatus.com/incidents/qldf8kbyn5px EU - Issues with Copilot Coding Agent <p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>14:32</var> UTC</small><br><strong>Resolved</strong> - On March 19, 2026, between 01:05 UTC and 02:52 UTC, and again on March 20, 2026, between 00:42 UTC and 01:58 UTC, the Copilot Coding Agent service was degraded and users were unable to start new Copilot Agent sessions or view existing ones. During the first incident, the average error rate was ~53% and peaked at ~93% of requests to the service. During the second incident, the average error rate was ~99% and peaked at ~100% of requests with significant retry amplification. Both incidents were caused by the same underlying system authentication issue that prevented the service from connecting to its backing datastore.<br /><br />We mitigated each incident by rotating the affected credentials, which restored connectivity and returned error rates to normal. 
The mitigation was in place by 01:24 UTC. The second occurrence was due to an incomplete remediation of the first.<br /><br />We are implementing automated monitoring for credential lifecycle events and improving operational processes to reduce our time to detection and mitigation of issues like this one in the future.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>14:32</var> UTC</small><br><strong>Monitoring</strong> - Copilot is operating normally.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>14:02</var> UTC</small><br><strong>Update</strong> - We are investigating reports that Copilot Coding Agent session logs are not available in the UI.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>13:45</var> UTC</small><br><strong>Update</strong> - Copilot is experiencing degraded performance. We are continuing to investigate.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>12:32</var> UTC</small><br><strong>Update</strong> - We are investigating reports of Copilot coding agent session logs not loading and sessions intermittently not starting. Users are able to see their tasks and create new ones.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>12:00</var> UTC</small><br><strong>Update</strong> - We have resolved the underlying credential issue and have verified that users can see and interact with their Copilot Coding Agent tasks. <br /><br />We are now investigating issues where some Coding Agent tasks are not completing successfully.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>11:19</var> UTC</small><br><strong>Update</strong> - We believe we have identified an underlying credential issue and are working to resolve that across impacted environments.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>10:43</var> UTC</small><br><strong>Update</strong> - We are investigating reports of errors when accessing Copilot Coding Agent features. 
Users may be unable to view or start coding agent tasks through the Agents interface. Our engineers are actively working to restore full functionality.</p><p><small>Mar <var data-var='date'>19</var>, <var data-var='time'>10:41</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> Thu, 19 Mar 2026 14:32:56 +0000 https://eu.githubstatus.com/incidents/ryd6zd04f689 https://eu.githubstatus.com/incidents/ryd6zd04f689 EU - Disruption with some GitHub services <p><small>Mar <var data-var='date'>16</var>, <var data-var='time'>15:28</var> UTC</small><br><strong>Resolved</strong> - On 16 March 2026, between 14:16 UTC and 15:18 UTC, Codespaces users encountered a download failure error message when starting newly created or resumed codespaces. At peak, 96% of the created or resumed codespaces were impacted. Active codespaces with a running VSCode environment were not affected. <br /><br />The error was a result of an API deployment issue with our VS Code remote experience dependency and was resolved by rolling back that deployment. We are working with our partners to reduce our incident engagement time, improve early detection before they impact our customers, and ensure safe rollout of similar changes in the future.</p><p><small>Mar <var data-var='date'>16</var>, <var data-var='time'>15:27</var> UTC</small><br><strong>Update</strong> - Errors starting or resuming Codespaces have resolved.</p><p><small>Mar <var data-var='date'>16</var>, <var data-var='time'>15:06</var> UTC</small><br><strong>Update</strong> - We are investigating reports of users experiencing errors when starting or connecting to Codespaces. Some users may be unable to access their development environments during this time. 
We are working to identify the root cause and will implement a fix as soon as possible.</p><p><small>Mar <var data-var='date'>16</var>, <var data-var='time'>15:01</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> Mon, 16 Mar 2026 15:28:23 +0000 https://eu.githubstatus.com/incidents/56bdkp8rkhl1 https://eu.githubstatus.com/incidents/56bdkp8rkhl1 EU - Disruption with some GitHub services <p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:30</var> UTC</small><br><strong>Resolved</strong> - On March 5, 2026, between 12:53 UTC and 13:35 UTC, the Copilot mission control service was degraded. This resulted in empty responses returned for users' agent session lists across GitHub web surfaces. Impacted users were unable to see their lists of current and previous agent sessions in GitHub web surfaces. This was caused by an incorrect database query that falsely excluded records that have an absent field.<br /><br />We mitigated the incident by rolling back the database query change. There were no data alterations nor deletions during the incident.<br /><br />To prevent similar issues in the future, we're improving our monitoring depth to more easily detect degradation before changes are fully rolled out.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:30</var> UTC</small><br><strong>Update</strong> - Copilot coding agent mission control is fully restored. Tasks are now listed as expected.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:21</var> UTC</small><br><strong>Update</strong> - Users were temporarily unable to see tasks listed in mission control surfaces. The ability to submit new tasks, view existing tasks via direct link, or manage tasks was unaffected throughout. 
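A query that "falsely excluded records that have an absent field" is a classic SQL pitfall: a filter such as `archived != 1` silently drops rows where the column is NULL, because a comparison with NULL is neither true nor false. A minimal sketch of this class of bug (the table and column names are invented for illustration, not GitHub's schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sessions (id INTEGER, archived INTEGER)")
conn.executemany("INSERT INTO sessions VALUES (?, ?)",
                 [(1, 0), (2, 1), (3, None)])  # row 3: field absent (NULL)

# Buggy filter: NULL never satisfies `archived != 1`, so row 3 vanishes
# from the listing even though it was never archived.
buggy = conn.execute(
    "SELECT id FROM sessions WHERE archived != 1 ORDER BY id"
).fetchall()

# Corrected filter: explicitly treat an absent field as not-archived.
fixed = conn.execute(
    "SELECT id FROM sessions WHERE archived IS NULL OR archived != 1 ORDER BY id"
).fetchall()

print(buggy)  # [(1,)]         -- the NULL row is missing
print(fixed)  # [(1,), (3,)]   -- the NULL row is listed again
```

Rolling back the query change restores the second behavior, which is consistent with the report that no data was altered or deleted: the rows were always present, only the filter hid them.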
A revert is currently being deployed and we are seeing recovery.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:13</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> Thu, 05 Mar 2026 01:30:38 +0000 https://eu.githubstatus.com/incidents/xc3gm34trprw https://eu.githubstatus.com/incidents/xc3gm34trprw EU - Some OpenAI models degraded in Copilot <p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:13</var> UTC</small><br><strong>Resolved</strong> - On March 5th, 2026, between approximately 00:26 and 00:44 UTC, the Copilot service experienced a degradation of the GPT 5.3 Codex model due to an issue with our upstream provider. Users encountered elevated error rates when using GPT 5.3 Codex, impacting approximately 30% of requests. No other models were impacted.<br /><br />The issue was resolved by a mitigation put in place by our provider.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>01:13</var> UTC</small><br><strong>Update</strong> - The issues with our upstream model provider have been resolved, and gpt-5.3-codex is once again available in Copilot Chat and across IDE integrations. We will continue monitoring to ensure stability, but mitigation is complete.</p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>00:53</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the gpt-5.3-codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. 
We are working with them to resolve the issue.<br /></p><p><small>Mar <var data-var='date'> 5</var>, <var data-var='time'>00:47</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Thu, 05 Mar 2026 01:13:29 +0000 https://eu.githubstatus.com/incidents/3bgzsgpddqvw https://eu.githubstatus.com/incidents/3bgzsgpddqvw EU - Claude Opus 4.6 Fast not appearing for some Copilot users <p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>21:11</var> UTC</small><br><strong>Resolved</strong> - On March 3, 2026, between 19:44 UTC and 21:05 UTC, some GitHub Copilot users reported that the Claude Opus 4.6 Fast model was no longer available in their IDE model selection. After investigation, we confirmed that this was caused by enterprise administrators adjusting their organization's model policies, which correctly removed the model for users in those organizations. No users outside the affected organizations lost access.<br /><br />We confirmed that the Copilot settings were functioning as designed, and all expected users retained access to the model. The incident was resolved once we verified that the change was intentional and no platform regression had occurred.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>21:05</var> UTC</small><br><strong>Update</strong> - We believe that all expected users still have access to Claude Opus 4.6. 
We have confirmed that no users have unexpectedly lost access.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>20:31</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Tue, 03 Mar 2026 21:11:31 +0000 https://eu.githubstatus.com/incidents/xwh8w5lmg8bv https://eu.githubstatus.com/incidents/xwh8w5lmg8bv EU - Incident with Copilot and Actions <p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>20:09</var> UTC</small><br><strong>Resolved</strong> - On March 3, 2026, between 18:46 UTC and 20:09 UTC, GitHub experienced a period of degraded availability impacting GitHub.com, the GitHub API, GitHub Actions, Git operations, GitHub Copilot, and other dependent services. At the peak of the incident, GitHub.com request failures reached approximately 40%. During the same period, approximately 43% of GitHub API requests failed. Git operations over HTTP had an error rate of approximately 6%, while SSH was not impacted. GitHub Copilot requests had an error rate of approximately 21%. GitHub Actions experienced less than 1% impact. <br /><br />This incident shared the same underlying cause as an incident in early February where we saw a large volume of writes to the user settings caching mechanism. While deploying a change to reduce the burden of these writes, a bug caused every user’s cache to expire, get recalculated, and get rewritten. The increased load caused replication delays that cascaded down to all affected services. We mitigated this issue by immediately rolling back the faulty deployment. <br /><br />We understand these incidents disrupted the workflows of developers. While we have made substantial, long-term investments in how GitHub is built and operated to improve resilience, we acknowledge we have more work to do. Getting there requires deep architectural work that is already underway, as well as urgent, targeted improvements. 
We are taking the following immediate steps: <br /><br />- We have added a killswitch to the caching mechanism and improved monitoring to ensure we are notified before there is user impact and can respond swiftly. <br />- We are moving the cache mechanism to a dedicated host, ensuring that any future issues will solely affect services that rely on it.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:32</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>19:17</var> UTC</small><br><strong>Update</strong> - We've identified the issue and have applied a mitigation. We're seeing recovery of services. We continue to monitor for full recovery.</p><p><small>Mar <var data-var='date'> 3</var>, <var data-var='time'>18:59</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Tue, 03 Mar 2026 20:09:17 +0000 https://eu.githubstatus.com/incidents/38gs7szkgxvj https://eu.githubstatus.com/incidents/38gs7szkgxvj EU - Incident with Copilot <p><small>Feb <var data-var='date'>26</var>, <var data-var='time'>11:06</var> UTC</small><br><strong>Resolved</strong> - On February 26, 2026, between 09:27 UTC and 10:36 UTC, the GitHub Copilot service was degraded and users experienced errors when using Copilot features including Copilot Chat, Copilot Coding Agent and Copilot Code Review. 
During this time, 5-15% of requests to the service returned errors.<br /><br />The incident was resolved by infrastructure rebalancing.<br /><br />We are improving observability to detect capacity imbalances earlier and enhancing our infrastructure to better handle traffic spikes.</p><p><small>Feb <var data-var='date'>26</var>, <var data-var='time'>11:06</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Feb <var data-var='date'>26</var>, <var data-var='time'>10:22</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Thu, 26 Feb 2026 11:06:31 +0000 https://eu.githubstatus.com/incidents/6cb1qrfsydh3 https://eu.githubstatus.com/incidents/6cb1qrfsydh3 EU - Incident with Copilot Agent Sessions impacting CCA/CCR <p><small>Feb <var data-var='date'>25</var>, <var data-var='time'>16:44</var> UTC</small><br><strong>Resolved</strong> - On February 25, 2026, between 15:05 UTC and 16:34 UTC, the Copilot coding agent service was degraded, resulting in errors for 5% of all requests and impacting users starting or interacting with agent sessions. <br /><br />This was due to an internal service dependency running out of allocated resources (memory and CPU). 
We mitigated the incident by adjusting the resource allocation for the affected service, which restored normal operations for the coding agent service.<br /><br />We are working to implement proactive monitoring for resource exhaustion across our services, review and update resource allocations, and improve our alerting capabilities to reduce our time to detection and mitigation of similar issues in the future.</p><p><small>Feb <var data-var='date'>25</var>, <var data-var='time'>16:38</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Wed, 25 Feb 2026 16:44:50 +0000 https://eu.githubstatus.com/incidents/rkh4wvvhrqf6 https://eu.githubstatus.com/incidents/rkh4wvvhrqf6 EU - Incident with Copilot <p><small>Feb <var data-var='date'>23</var>, <var data-var='time'>16:19</var> UTC</small><br><strong>Resolved</strong> - On February 23, 2026, between 14:45 UTC and 16:19 UTC, the Copilot service was degraded for Claude Haiku 4.5 model. On average, 6% of the requests to this model failed due to an issue with an upstream provider. During this period, automated model degradation notifications directed affected users to alternative models. No other models were impacted. The upstream provider identified and resolved the issue on their end. 
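Degradation of a single upstream model, as in the incident above, is commonly handled by directing requests to alternative models. A hedged sketch of client-side failover; the model names, error type, and routing logic here are illustrative assumptions, not GitHub's implementation:

```python
# Hypothetical model names; any healthy alternative would do as a fallback.
PRIMARY = "claude-haiku-4.5"
FALLBACKS = ["claude-sonnet-4.5"]

def complete(prompt, call_model):
    """Try the primary model first; on an upstream error, fail over
    through the fallback list in order."""
    for model in [PRIMARY] + FALLBACKS:
        try:
            return call_model(model, prompt)
        except RuntimeError:
            continue  # upstream 5xx surfaced as an exception; try the next model
    raise RuntimeError("all models degraded")

# Simulated provider in which only the primary model is degraded:
def flaky_provider(model, prompt):
    if model == PRIMARY:
        raise RuntimeError("upstream provider error")
    return f"{model}: {prompt}"

result = complete("hello", flaky_provider)
print(result)  # served by the fallback model
```

In practice a production router would also track error rates per model and notify users of the substitution, as the automated degradation notifications described above did.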
<br />We are working to improve automatic model failover mechanisms to reduce our time to mitigation of issues like this one in the future.</p><p><small>Feb <var data-var='date'>23</var>, <var data-var='time'>16:00</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Feb <var data-var='date'>23</var>, <var data-var='time'>14:56</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Mon, 23 Feb 2026 16:19:34 +0000 https://eu.githubstatus.com/incidents/2vyccmpfpxv8 https://eu.githubstatus.com/incidents/2vyccmpfpxv8 EU - Incident with Copilot GPT-5.1-Codex <p><small>Feb <var data-var='date'>20</var>, <var data-var='time'>11:41</var> UTC</small><br><strong>Resolved</strong> - On February 20, 2026, between 07:30 UTC and 11:21 UTC, the Copilot service experienced a degradation of the GPT 5.1 Codex model. During this time period, users encountered a 4.5% error rate when using this model. No other models were impacted.<br />The issue was resolved by a mitigation put in place by the external model provider. GitHub is working with the external model provider to further improve the resiliency of the service to prevent similar incidents in the future.</p><p><small>Feb <var data-var='date'>20</var>, <var data-var='time'>11:19</var> UTC</small><br><strong>Update</strong> - The issues with our upstream model provider have been resolved, and GPT 5.1 Codex is once again available in Copilot Chat and across IDE integrations [VSCode, Visual Studio, JetBrains].<br />We will continue monitoring to ensure stability, but mitigation is complete.</p><p><small>Feb <var data-var='date'>20</var>, <var data-var='time'>10:36</var> UTC</small><br><strong>Update</strong> - We are still experiencing degraded availability for the GPT 5.1 Codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. 
We are working with them to resolve the issue.<br /></p><p><small>Feb <var data-var='date'>20</var>, <var data-var='time'>10:02</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the GPT 5.1 Codex model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. We are working with them to resolve the issue.<br />Other models are available and working as expected.</p><p><small>Feb <var data-var='date'>20</var>, <var data-var='time'>10:02</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Fri, 20 Feb 2026 11:41:40 +0000 https://eu.githubstatus.com/incidents/mw9g1x4f6kv2 https://eu.githubstatus.com/incidents/mw9g1x4f6kv2 EU - Disruption with some GitHub services regarding file upload <p><small>Feb <var data-var='date'>13</var>, <var data-var='time'>22:58</var> UTC</small><br><strong>Resolved</strong> - On February 13, 2026, between 21:46 UTC and 22:58 UTC (72 minutes), the GitHub file upload service was degraded and users uploading from a web browser on GitHub.com were unable to upload files to repositories, create release assets, or upload manifest files. During the incident, successful upload completions dropped by ~85% from baseline levels. 
This was due to a code change that inadvertently modified browser request behavior and violated CORS (Cross-Origin Resource Sharing) policy requirements, causing upload requests to be blocked before reaching the upload service.<br /><br />We mitigated the incident by reverting the code change that introduced the issue.<br /><br />We are working to improve automated testing for browser-side request changes and to add monitoring/automated safeguards for upload flows to reduce our time to detection and mitigation of similar issues in the future.</p><p><small>Feb <var data-var='date'>13</var>, <var data-var='time'>22:30</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> Fri, 13 Feb 2026 22:58:43 +0000 https://eu.githubstatus.com/incidents/663qlvbkm8bd https://eu.githubstatus.com/incidents/663qlvbkm8bd EU - Intermittent disruption with Copilot completions and inline suggestions <p><small>Feb <var data-var='date'>12</var>, <var data-var='time'>16:50</var> UTC</small><br><strong>Resolved</strong> - Between February 11th 21:30 UTC and February 12th 15:40 UTC, users in Western Europe experienced degraded quality for all Next Edit Suggestions requests. Additionally, on February 12th, between 18:40 UTC and 20:30 UTC, users in Australia and South America experienced degraded quality and increased latency of up to 500ms for all Next Edit Suggestions requests. The root cause was a newly introduced regression in an upstream service dependency.<br /><br />The incident was mitigated by failing over Next Edit Suggestions traffic to unaffected regions, which caused the increased latency. Once the regression was identified and rolled back, we restored the impacted capacity. 
We have improved our quality analysis tooling and are working on more robust quality impact alerting to accelerate detection of these issues in the future.</p><p><small>Feb <var data-var='date'>12</var>, <var data-var='time'>15:33</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability in Western Europe for Copilot completions and suggestions. We are working to resolve the issue.<br /></p><p><small>Feb <var data-var='date'>12</var>, <var data-var='time'>14:08</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability in some regions for Copilot completions and suggestions. We are working to resolve the issue.</p><p><small>Feb <var data-var='date'>12</var>, <var data-var='time'>14:06</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> Thu, 12 Feb 2026 16:50:02 +0000 https://eu.githubstatus.com/incidents/rwvpcr264nd7 https://eu.githubstatus.com/incidents/rwvpcr264nd7 EU - Disruption with some GitHub services <p><small>Feb <var data-var='date'>12</var>, <var data-var='time'>11:12</var> UTC</small><br><strong>Resolved</strong> - From Feb 12, 2026 09:16:00 UTC to Feb 12, 2026 11:01 UTC, users attempting to download repository archives (tar.gz/zip) that include Git LFS objects received errors. Standard repository archives without LFS objects were not affected. On average, the archive download error rate was 0.0042% and peaked at 0.0339% of requests to the service. This was caused by deploying a corrupt configuration bundle, resulting in missing data used for network interface connections by the service.<br /><br />We mitigated the incident by applying the correct configuration to each site. 
We have added checks for corruption in this deployment, and will add auto-rollback detection for this service to prevent issues like this in the future.</p><p><small>Feb <var data-var='date'>12</var>, <var data-var='time'>11:01</var> UTC</small><br><strong>Update</strong> - We have resolved the issue and are seeing full recovery.</p><p><small>Feb <var data-var='date'>12</var>, <var data-var='time'>10:39</var> UTC</small><br><strong>Update</strong> - We are investigating an issue with downloading repository archives that include Git LFS objects.</p><p><small>Feb <var data-var='date'>12</var>, <var data-var='time'>10:38</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of impacted performance for some GitHub services.</p> Thu, 12 Feb 2026 11:12:15 +0000 https://eu.githubstatus.com/incidents/rwqr7934g1rt https://eu.githubstatus.com/incidents/rwqr7934g1rt EU - Copilot Policy Propagation Delays <p><small>Feb <var data-var='date'>10</var>, <var data-var='time'>10:01</var> UTC</small><br><strong>Resolved</strong> - GitHub experienced degraded Copilot policy propagation from enterprise to organizations between February 3 at 21:00 UTC through February 10 at 16:00 UTC. During this period, policy changes could take up to 24 hours to apply. We mitigated the issue on February 10 at 16:00 UTC after rolling back a regression that caused the delays. The propagation queue fully caught up on the delayed items by February 11 at 10:35 UTC, and policy changes now propagate normally.<br /><br />During this incident, whenever an enterprise updated a Copilot policy (including model policies), there were significant delays before those policy changes reached their child organizations and assigned users. 
The delay was caused by a large backlog in the background job queue responsible for propagating Copilot policy updates.<br /><br />Our investigation determined the incident was caused by a code change shipped on February 3 that increased the number of background jobs enqueued per policy update, in order to accommodate upcoming feature work. When new Copilot models launched on February 5th and 7th, triggering policy updates across many enterprises, the higher job volume overwhelmed the shared background worker queue, resulting in prolonged propagation delays. No policy updates were lost; they were queued and processed once the backlog cleared.<br /><br />We understand these delays disrupted policy management for customers using Copilot at scale and have taken the following immediate steps:<br /><br />1. Restored the optimized propagation path and put tests in place to avoid a regression.<br />2. Ensured upcoming features are compatible with this design.<br />3. Added alerting on queue depth to detect propagation backlogs immediately.<br /><br />GitHub is critical infrastructure for your work, your teams, and your businesses. We are focused on these mitigations and continued improvements so Copilot policy changes propagate reliably and quickly.<br /></p><p><small>Feb <var data-var='date'>10</var>, <var data-var='time'>00:52</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Feb <var data-var='date'>10</var>, <var data-var='time'>00:26</var> UTC</small><br><strong>Update</strong> - We're continuing to address an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users. This may prevent newly enabled models from appearing when users try to access them.<br /> <br />This issue is understood, and we are working to get the mitigation applied. 
Next update in one hour.</p><p><small>Feb <var data-var='date'> 9</var>, <var data-var='time'>22:09</var> UTC</small><br><strong>Update</strong> - We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users.<br /><br />This may prevent newly enabled models from appearing when users try to access them.<br /><br />Next update in two hours.</p><p><small>Feb <var data-var='date'> 9</var>, <var data-var='time'>20:39</var> UTC</small><br><strong>Update</strong> - We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users.<br /><br />This may prevent newly enabled models from appearing when users try to access them.<br /><br />Next update in two hours.</p><p><small>Feb <var data-var='date'> 9</var>, <var data-var='time'>18:49</var> UTC</small><br><strong>Update</strong> - We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users.<br /><br />This may prevent newly enabled models from appearing when users try to access them.<br /><br />Next update in two hours.</p><p><small>Feb <var data-var='date'> 9</var>, <var data-var='time'>18:06</var> UTC</small><br><strong>Update</strong> - We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for a subset of enterprise users.<br /><br />This may prevent newly enabled models from appearing when users try to access them.</p><p><small>Feb <var data-var='date'> 9</var>, <var data-var='time'>17:23</var> UTC</small><br><strong>Update</strong> - We're continuing to investigate an issue where Copilot policy updates are not propagating correctly for all customers.<br /><br />This may prevent newly enabled models from appearing when users try to access them.</p><p><small>Feb <var data-var='date'> 9</var>, <var data-var='time'>16:30</var> UTC</small><br><strong>Update</strong> - 
We’ve identified an issue where Copilot policy updates are not propagating correctly for some customers. This may prevent newly enabled models from appearing when users try to access them.<br /><br />The team is actively investigating the cause and working on a resolution. We will provide updates as they become available.</p><p><small>Feb <var data-var='date'> 9</var>, <var data-var='time'>16:29</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Tue, 10 Feb 2026 10:01:16 +0000 https://eu.githubstatus.com/incidents/frl62n451cky https://eu.githubstatus.com/incidents/frl62n451cky EU - Incident with Pull Requests <p><small>Feb <var data-var='date'> 6</var>, <var data-var='time'>18:36</var> UTC</small><br><strong>Resolved</strong> - On February 6, 2026, between 17:49 UTC and 18:36 UTC, the GitHub Mobile service was degraded, and some users were unable to create pull request review comments on deleted lines (and in some cases, comments on deleted files). This impacted users on the newer comment-positioning flow available in version 1.244.0 of the mobile apps. Telemetry indicated that the failures increased as the Android rollout progressed. This was due to a defect in the new comment-positioning workflow that could result in the server rejecting comment creation for certain deleted-line positions.<br /><br />We mitigated the incident by halting the Android rollout and implementing interim client-side fallback behavior while a platform fix is in progress. The client-side fallback is scheduled to be published early this week. 
We are working to (1) add clearer client-side error handling (avoid infinite spinners), (2) improve monitoring/alerting for these failures, and (3) adopt stable diff identifiers for diff-based operations to reduce the likelihood of recurrence.</p><p><small>Feb <var data-var='date'> 6</var>, <var data-var='time'>18:36</var> UTC</small><br><strong>Update</strong> - Some GitHub Mobile app users may be unable to add review comments on deleted lines in pull requests. We're working on a fix and expect to release it early next week.</p><p><small>Feb <var data-var='date'> 6</var>, <var data-var='time'>18:04</var> UTC</small><br><strong>Update</strong> - Pull Requests is operating normally.</p><p><small>Feb <var data-var='date'> 6</var>, <var data-var='time'>18:00</var> UTC</small><br><strong>Update</strong> - We're currently investigating an issue affecting the Mobile app that can prevent review comments from being posted on certain pull requests when commenting on deleted lines.</p><p><small>Feb <var data-var='date'> 6</var>, <var data-var='time'>17:49</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Pull Requests</p> Fri, 06 Feb 2026 18:36:53 +0000 https://eu.githubstatus.com/incidents/0fx8lrr9pvhb https://eu.githubstatus.com/incidents/0fx8lrr9pvhb EU - Incident with Copilot <p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>10:56</var> UTC</small><br><strong>Resolved</strong> - On February 3, 2026, between 09:35 UTC and 10:15 UTC, GitHub Copilot experienced elevated error rates, with an average of 4% of requests failing.<br /><br />This was caused by a capacity imbalance that led to resource exhaustion on backend services. 
The incident was resolved by infrastructure rebalancing, and we subsequently deployed additional capacity.<br /><br />We are improving observability to detect capacity imbalances earlier and enhancing our infrastructure to better handle traffic spikes.</p><p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>10:55</var> UTC</small><br><strong>Update</strong> - We are now seeing recovery.</p><p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>10:21</var> UTC</small><br><strong>Update</strong> - We are investigating elevated 500s across Copilot services.</p><p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>10:16</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Tue, 03 Feb 2026 10:56:29 +0000 https://eu.githubstatus.com/incidents/k5tg0khmvyg3 https://eu.githubstatus.com/incidents/k5tg0khmvyg3 EU - Incident with Actions <p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>00:56</var> UTC</small><br><strong>Resolved</strong> - On February 2, 2026, between 18:35 UTC and 22:15 UTC, GitHub Actions hosted runners were unavailable, with service degraded until full recovery at 23:10 UTC for standard runners and at February 3, 2026 00:30 UTC for larger runners. During this time, Actions jobs queued and timed out while waiting to acquire a hosted runner. Other GitHub features that leverage this compute infrastructure were similarly impacted, including Copilot Coding Agent, Copilot Code Review, CodeQL, Dependabot, GitHub Enterprise Importer, and Pages. All regions and runner types were impacted. Self-hosted runners on other providers were not impacted. <br /><br />This outage was caused by a backend storage access policy change in our underlying compute provider that blocked access to critical VM metadata, causing all VM create, delete, reimage, and other operations to fail. 
More information is available at https://azure.status.microsoft/en-us/status/history/?trackingId=FNJ8-VQZ. This was mitigated by rolling back the policy change; the rollback began at 22:15 UTC. As VMs came back online, our runners worked through the backlog of requests that hadn’t timed out. <br /><br />We are working with our compute provider to improve our incident response and engagement time, improve early detection of similar changes before they impact our customers, and ensure safe rollout should similar changes occur in the future. We recognize this was a significant outage for users who rely on GitHub for their workloads, and we apologize for the impact this had.</p><p><small>Feb <var data-var='date'> 3</var>, <var data-var='time'>00:55</var> UTC</small><br><strong>Update</strong> - Actions is operating normally.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>23:50</var> UTC</small><br><strong>Update</strong> - Based on our telemetry, most customers should see full recovery from failing GitHub Actions jobs on hosted runners.<br />We are monitoring closely to confirm complete recovery.<br />Other GitHub features that rely on GitHub Actions (for example, Copilot Coding Agent and Dependabot) should also see recovery.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>23:43</var> UTC</small><br><strong>Update</strong> - Actions is experiencing degraded performance. 
We are continuing to investigate.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>23:42</var> UTC</small><br><strong>Update</strong> - Copilot is operating normally.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>23:31</var> UTC</small><br><strong>Update</strong> - Pages is operating normally.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>22:53</var> UTC</small><br><strong>Update</strong> - Our upstream provider has applied a mitigation to address queuing and job failures on hosted runners.<br />Telemetry shows improvement, and we are monitoring closely for full recovery.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>22:10</var> UTC</small><br><strong>Update</strong> - We continue to investigate failures impacting GitHub Actions hosted-runner jobs.<br />We're waiting on our upstream provider to apply the identified mitigations, and we're preparing to resume job processing as safely as possible.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>21:30</var> UTC</small><br><strong>Update</strong> - Copilot is experiencing degraded performance. 
We are continuing to investigate.</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>21:13</var> UTC</small><br><strong>Update</strong> - We continue to investigate failures impacting GitHub Actions hosted-runner jobs.<br />We have identified the root cause and are working with our upstream provider to mitigate.<br />This is also impacting GitHub features that rely on GitHub Actions (for example, Copilot Coding Agent and Dependabot).</p><p><small>Feb <var data-var='date'> 2</var>, <var data-var='time'>20:33</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded availability for Actions and Pages</p> Tue, 03 Feb 2026 00:56:05 +0000 https://eu.githubstatus.com/incidents/3fyjyy8fx4ys https://eu.githubstatus.com/incidents/3fyjyy8fx4ys EU - Incident with Actions <p><small>Feb <var data-var='date'> 1</var>, <var data-var='time'>06:21</var> UTC</small><br><strong>Resolved</strong> - On February 1, 2026 between 05:05 UTC and 05:40 UTC, customers using the Sweden stamp of GitHub Enterprise Cloud experienced workflow failures and slow job starts on GitHub Actions. During the incident, approximately 2.7% of runs failed, and around 27.5% saw start times averaging 22 minutes. The incident was caused by connection churn in our stream processing system. 
We've implemented connection churn throttling, improved metrics for faster detection, and are enhancing client connection tooling to prevent recurrence.</p><p><small>Feb <var data-var='date'> 1</var>, <var data-var='time'>06:20</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Actions</p> Sun, 01 Feb 2026 06:21:31 +0000 https://eu.githubstatus.com/incidents/6p5q2tghfzfs https://eu.githubstatus.com/incidents/6p5q2tghfzfs EU - Copilot Chat - Grok Code Fast 1 Outage <p><small>Jan <var data-var='date'>21</var>, <var data-var='time'>12:39</var> UTC</small><br><strong>Resolved</strong> - On Jan 21st, 2026, between 11:15 UTC and 13:00 UTC, the Copilot service was degraded for the Grok Code Fast 1 model. On average, more than 90% of the requests to this model failed due to an issue with an upstream provider. No other models were impacted.<br /><br />The issue was resolved after the upstream provider fixed the problem that caused the disruption. GitHub will continue to enhance our monitoring and alerting systems to reduce the time it takes to detect and mitigate similar issues in the future.</p><p><small>Jan <var data-var='date'>21</var>, <var data-var='time'>12:09</var> UTC</small><br><strong>Update</strong> - We are experiencing degraded availability for the Grok Code Fast 1 model in Copilot Chat, VS Code and other Copilot products. This is due to an issue with an upstream model provider. 
We are working with them to resolve the issue.<br /><br />Other models are available and working as expected.</p><p><small>Jan <var data-var='date'>21</var>, <var data-var='time'>11:33</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Wed, 21 Jan 2026 12:39:00 +0000 https://eu.githubstatus.com/incidents/sb4r63z7syc5 https://eu.githubstatus.com/incidents/sb4r63z7syc5 EU - Copilot's GPT-5.1 model has degraded performance <p><small>Jan <var data-var='date'>14</var>, <var data-var='time'>10:52</var> UTC</small><br><strong>Resolved</strong> - This incident has been resolved. Thank you for your patience and understanding as we addressed this issue. A detailed root cause analysis will be shared as soon as it is available.</p><p><small>Jan <var data-var='date'>14</var>, <var data-var='time'>10:32</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate issues with the GPT-5.1 model. We are also seeing an increase in failures for Copilot Code Reviews.</p><p><small>Jan <var data-var='date'>14</var>, <var data-var='time'>09:53</var> UTC</small><br><strong>Update</strong> - We are continuing to investigate issues with the GPT-5.1 model together with our model provider. Other models are not impacted.</p><p><small>Jan <var data-var='date'>14</var>, <var data-var='time'>09:26</var> UTC</small><br><strong>Update</strong> - Copilot is experiencing degraded performance when using the GPT-5.1 model. We are investigating the issue.</p><p><small>Jan <var data-var='date'>14</var>, <var data-var='time'>09:24</var> UTC</small><br><strong>Investigating</strong> - We are investigating reports of degraded performance for Copilot</p> Wed, 14 Jan 2026 10:52:12 +0000 https://eu.githubstatus.com/incidents/xfd5tdv0ggvb https://eu.githubstatus.com/incidents/xfd5tdv0ggvb