
[ROCm][CI] Lower runner check gpu count for distributed jobs #166961

Closed

amdfaa wants to merge 1 commit into pytorch:main from amdfaa:patch-28

Conversation


amdfaa (Contributor) commented Nov 4, 2025

This is a PR to temporarily relieve the queueing caused by an MI250 node outage. See this ticket for more information: #166866

It relaxes the GPU count check to allow distributed jobs to run on 2-GPU runners, as sketched below.
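
For context, here is a minimal sketch of the kind of check being relaxed (hypothetical Python, assuming a threshold constant and a torch-based device count; the actual check lives in the PyTorch CI workflow scripts, not in this file):

```python
# Hypothetical sketch of the relaxed runner health check; not the actual CI code.
# Distributed jobs previously required a higher GPU count on MI250 runners; this
# PR lowers the threshold so 2-GPU runners can absorb the queued distributed jobs.
import torch

MIN_GPUS_FOR_DISTRIBUTED = 2  # assumed relaxed threshold

def check_runner_gpu_count() -> None:
    # ROCm builds of PyTorch report GPUs through the torch.cuda API.
    num_gpus = torch.cuda.device_count()
    if num_gpus < MIN_GPUS_FOR_DISTRIBUTED:
        raise RuntimeError(
            f"Runner has {num_gpus} GPU(s); distributed jobs need at least "
            f"{MIN_GPUS_FOR_DISTRIBUTED}"
        )

if __name__ == "__main__":
    check_runner_gpu_count()
```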

cc @jeffdaily @sunway513 @jithunnair-amd @pruthvistony @ROCmSupport @dllehr-amd @jataylo @hongxiayang @naromero77amd

amdfaa requested a review from a team as a code owner November 4, 2025 16:40
pytorch-bot bot commented Nov 4, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/166961

Note: Links to docs will display an error until the docs builds have been completed.

❗ 1 Active SEV

There is 1 currently active SEV. If your PR is affected, please view it below:

⏳ 4 Pending, 1 Unrelated Failure

As of commit eb974d2 with merge base 56dfd4c:

UNSTABLE - The following jobs are marked as unstable, possibly due to flakiness on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

pytorch-bot bot added the module: rocm and topic: not user facing labels Nov 4, 2025
jeffdaily added the ciflow/periodic-rocm-mi200 label Nov 4, 2025
pytorch-bot bot added the ciflow/rocm label Nov 4, 2025
jithunnair-amd changed the title from "[ROCm][CI] Lower runner check gpu count" to "[ROCm][CI] Lower runner check gpu count for distributed jobs" Nov 4, 2025
jithunnair-amd (Collaborator) commented:

@pytorchbot merge -f "Triggered distributed jobs successfully. Merging to alleviate queueing on distributed jobs"

pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f as a last resort and instead consider -i/--ignore-current to continue the merge while ignoring current failures. This will allow currently pending tests to finish and report signal before the merge.
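
For reference, the suggested non-forced alternative would be a comment of the form @pytorchbot merge -i (hypothetical here, since this merge used -f), which waits for pending tests to finish and ignores only the failures already present.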

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.


Labels

ciflow/periodic-rocm-mi200 (Trigger "distributed" config CI on ROCm MI200)
ciflow/rocm (Trigger "default" config CI on ROCm)
Merged
module: rocm (AMD GPU support for Pytorch)
open source
topic: not user facing (topic category)


5 participants