
Conversation

@eqy
Collaborator

@eqy eqy commented Feb 28, 2023

Adds support for nonblocking NCCL communicators and fault tolerance/checking, which was added in NCCL 2.14 as an experimental feature.
Enabled via the environment variable:

TORCH_NCCL_USE_COMM_NONBLOCKING=1
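
For illustration only, a sketch of how such a flag might be consumed on the C++ side; the helper name and lazy-init pattern here are assumptions, not necessarily this PR's actual code:

#include <cstdlib>
#include <string>

// Hypothetical helper: reads TORCH_NCCL_USE_COMM_NONBLOCKING once and caches it.
bool nccl_use_nonblocking() {
  static const bool value = [] {
    const char* env = std::getenv("TORCH_NCCL_USE_COMM_NONBLOCKING");
    return env != nullptr && std::string(env) == "1";
  }();
  return value;
}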

CC @ptrblck

@pytorch-bot

pytorch-bot bot commented Feb 28, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/95715

Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 Failures

As of commit 8b5c091:

NEW FAILURES - The following jobs have failed:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@pytorch-bot pytorch-bot bot added the release notes: distributed (c10d) (release notes category) label Feb 28, 2023
@eqy eqy force-pushed the nccl_nonblocking branch 3 times, most recently from 1ccaf67 to 81ce78a on March 8, 2023 00:32
@eqy eqy changed the title from "[DO NOT MERGE][WIP] Nonblocking NCCL Fault Tolerance/Checking" to "Nonblocking NCCL Fault Tolerance/Checking" Mar 8, 2023
    ): ...
    @staticmethod
    def _group_start() -> None: ...
    @staticmethod
    def _group_end() -> None: ...
Collaborator Author

Removing the static method, as _group_end() might need to check the communicator map of the ProcessGroup to properly wait on collectives when nonblocking mode is used.
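
For context, a rough sketch of why _group_end() would need instance state in the nonblocking case. This is illustrative only, not the PR's actual code: nccl_use_nonblocking() is the hypothetical env-var helper sketched earlier, and devNCCLCommMap_/getNcclComm() are stand-ins for however the ProcessGroup exposes its communicators.

// Sketch: with nonblocking communicators, ending a group is only complete once
// every communicator owned by this ProcessGroup has left the ncclInProgress
// state, so the method needs the instance's communicator map and cannot be static.
void ProcessGroupNCCL::groupEnd() {
  ncclGroupEnd();
  if (nccl_use_nonblocking()) {
    for (auto& kv : devNCCLCommMap_) {
      for (auto& ncclComm : kv.second) {
        ncclResult_t state = ncclSuccess;
        do {
          ncclCommGetAsyncError(ncclComm->getNcclComm(), &state);
        } while (state == ncclInProgress);
      }
    }
  }
}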

if (!comm_nonblocking) {
  NCCL_CHECK(ncclCommCount(comm, &numranks));
} else {
  NCCL_CHECK_NONBLOCKING(ncclCommCount(comm, &numranks), _comm);
}
Collaborator Author

It might be unnecessary to also do a non-blocking check for ncclCommCount (I'm unsure whether there is documentation on exactly which API calls can leave a communicator in an in-progress state).

Collaborator

Agreed, it is unnecessary. It is the user's responsibility to make sure the comm is ready before accessing any attribute of it. (If the comm is not ready, this call would actually error out rather than returning ncclInProgress.)


#if defined(NCCL_MAJOR) && (NCCL_MAJOR == 2) && defined(NCCL_MINOR) && \
    (NCCL_MINOR >= 14)
#define NCCL_HAS_COMM_NONBLOCKING
Collaborator Author

Not sure why this needs to be redefined here in order to work when a definition already exists in ProcessGroupNCCL.hpp.

Collaborator

Because NCCLUtils.hpp does not include ProcessGroupNCCL.hpp and the two are not in the same compilation unit?

Collaborator

nit: a reminder for me to clean this up.

@eqy eqy added the ciflow/trunk (Trigger trunk jobs on your pull request) and topic: not user facing (topic category) labels Mar 8, 2023
@eqy eqy changed the title from "Nonblocking NCCL Fault Tolerance/Checking" to "[NCCL] Add experimental Nonblocking NCCL Fault Tolerance/Checking" Mar 8, 2023
@eqy
Collaborator Author

eqy commented Mar 8, 2023

CC @kwen2501 @ngimel

@drisspg drisspg added the triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) label Mar 8, 2023
@eqy
Collaborator Author

eqy commented Mar 14, 2023

@pytorchmergebot rebase

@pytorchmergebot
Collaborator

@pytorchbot successfully started a rebase job. Check the current status here

@pytorchmergebot
Collaborator

Successfully rebased nccl_nonblocking onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via git checkout nccl_nonblocking && git pull --rebase)

#ifdef NCCL_HAS_COMM_NONBLOCKING
  ncclResult_t result = to_nccl_result(status);
  while (result == ncclInProgress) {
    ncclCommGetAsyncError(to_nccl_comm(comm), &result);
  }
Collaborator

I wonder if to_nccl_comm(comm) is needed here.

Here is the definition of to_nccl_comm:

ncclComm_t to_nccl_comm(torch::cuda::nccl::ncclComm_t var) {
  return reinterpret_cast<ncclComm_t>(var);
}

It seems to me that comm is already an ncclComm_t (the one defined by NCCL).

Side note:
We should remove the duplicated ncclComm_t definition in torch::cuda::nccl; it is making things complicated.
That is out of scope for this PR; we can do it later.

Comment on lines 154 to 156
static inline void NCCL_CHECK_NONBLOCKING(
    ncclResult_t result,
    ncclComm_t comm) {
Collaborator

It seems to me this should be the base case. I could be wrong though :)

Comment on lines 166 to 217
for (const auto i : c10::irange(comms.size())) {
  do {
    ncclCommGetAsyncError(to_nccl_comm(comms[i]), &result);
  } while (result == ncclInProgress);
Collaborator

@kwen2501 kwen2501 Mar 15, 2023

I wonder which one should be the inner loop.

Would it be possible that one comm is hanging while another has already errored out, in which case we would miss catching the error here?
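
One possible restructuring, sketched here only to illustrate the concern (check_nccl_error is a hypothetical helper, not an existing function, and this is not a claim about the PR's final code), would make the pass over all communicators the inner work of each polling round, so an error on any comm is seen even while another is still in progress:

// Sketch: poll every communicator on each pass instead of spinning on one at a
// time; check_nccl_error() is a hypothetical helper that throws on any result
// other than ncclSuccess or ncclInProgress.
bool anyInProgress = true;
while (anyInProgress) {
  anyInProgress = false;
  for (const auto i : c10::irange(comms.size())) {
    ncclResult_t result = ncclSuccess;
    ncclCommGetAsyncError(to_nccl_comm(comms[i]), &result);
    if (result == ncclInProgress) {
      anyInProgress = true;       // this comm still needs another pass
    } else {
      check_nccl_error(result);   // surface a failure immediately
    }
  }
}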

@eqy eqy force-pushed the nccl_nonblocking branch from 1f07d9f to 7cadf83 on April 13, 2023 21:29
@eqy
Collaborator Author

eqy commented Apr 14, 2023

@pytorchmergebot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@eqy
Collaborator Author

eqy commented Apr 14, 2023

@pytorchmergebot -f "assume inductor failures unrelated"

@pytorch-bot

pytorch-bot bot commented Apr 14, 2023

❌ 🤖 pytorchbot command failed:

@pytorchbot: error: argument command: invalid choice: 'assume inductor failures unrelated' (choose from 'merge', 'revert', 'rebase', 'label', 'drci')

usage: @pytorchbot [-h] {merge,revert,rebase,label,drci} ...

Try @pytorchbot --help for more info.

@eqy
Collaborator Author

eqy commented Apr 14, 2023

@pytorchmergebot merge -f "assume inductor failures unrelated"

@pytorchmergebot
Collaborator

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

pytorchmergebot pushed a commit that referenced this pull request Jun 15, 2023
#95715 added the functionality to abort `ncclCommInitRankConfig` by specifying `blocking=0` to enable non-blocking behavior.

However, calling `pg._abort()` didn't recover from a stuck `ncclCommInitRankConfig`, since the `_abort` method only looked through the `devNCCLCommMap_` map and aborted those communicators. Because `ncclCommInitRankConfig` was stuck, the communicator itself wasn't added to the map and the host thread was stuck on this line: https://github.com/pytorch/pytorch/blob/main/torch/csrc/distributed/c10d/ProcessGroupNCCL.cpp#L1171. As a result, `_abort` was a no-op.

To resolve this issue, I added the communicators to `inProgressCommMap_` as soon as they were created and then removed them once added to `devNCCLCommMap_`.

I also added a unit test that was failing without the changes to ProcessGroupNCCL.cpp.
Pull Request resolved: #103264
Approved by: https://github.com/kwen2501
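
A minimal sketch of the bookkeeping described in the commit message above. The two map names come from the description; the key, comm-list, and lock variable names are illustrative, and this is not the exact code from #103264:

// Record communicators in inProgressCommMap_ as soon as they are created, so
// pg._abort() can reach them while ncclCommInitRankConfig is still in flight,
// and move them to devNCCLCommMap_ only once initialization has completed.
{
  std::lock_guard<std::mutex> lock(mutex_);
  inProgressCommMap_.emplace(devicesKey, ncclComms);
}
// ... wait for the nonblocking init to finish (or for an abort) ...
{
  std::lock_guard<std::mutex> lock(mutex_);
  devNCCLCommMap_.emplace(devicesKey, ncclComms);  // communicators are now fully initialized
  inProgressCommMap_.erase(devicesKey);
}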
pritamdamania87 added a commit to pritamdamania87/pytorch that referenced this pull request Jun 20, 2023
pytorchmergebot pushed a commit to pritamdamania87/pytorch that referenced this pull request Jun 23, 2023
pytorchmergebot pushed a commit that referenced this pull request Jun 27, 2023
…3925)

Pull Request resolved: #103925
Approved by: https://github.com/osalpekar

Labels

ciflow/inductor
ciflow/periodic (Trigger jobs ran periodically on master (periodic.yml) on the PR)
ciflow/trunk (Trigger trunk jobs on your pull request)
Merged
merging
open source
release notes: distributed (c10d) (release notes category)
Reverted
topic: not user facing (topic category)
triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module)


8 participants