
Conversation

@Aidyn-A
Collaborator

@Aidyn-A Aidyn-A commented Feb 29, 2024

According to the cuBLAS API Reference, the recommended workspace size is 32 MiB for Hopper and 4 MiB for the remaining architectures. This PR increases the workspace size accordingly. I am not aware of a recommended workspace size for HIP, so I am keeping it unchanged.
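For illustration, a minimal Python sketch of the size selection described above, assuming the recommended defaults from the cuBLAS docs; the helper name is hypothetical, and the actual change lives in PyTorch's C++ cuBLAS handle code:

```python
# Hedged sketch: mirrors the sizes described in the PR description
# (32 MiB on Hopper, 4 MiB elsewhere). The helper name is hypothetical.
import torch

def default_cublaslt_workspace_bytes(device: int = 0) -> int:
    major, _minor = torch.cuda.get_device_capability(device)
    if major >= 9:                   # Hopper (sm_90) and newer
        return 32 * 1024 * 1024      # 32 MiB
    return 4 * 1024 * 1024           # 4 MiB for other architectures
```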

cc @csarofeen @ptrblck @xwang233

@pytorch-bot

pytorch-bot bot commented Feb 29, 2024

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/120925

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit b39bc70 with merge base 7128504:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@Aidyn-A Aidyn-A requested review from eqy and jeffdaily February 29, 2024 18:17
@Aidyn-A Aidyn-A added module: cublas Problem related to cublas support topic: not user facing topic category labels Feb 29, 2024
Collaborator

@eqy eqy left a comment


Thanks for the fix!

@Aidyn-A
Collaborator Author

Aidyn-A commented Mar 1, 2024

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Mar 1, 2024
@pytorchmergebot
Collaborator

Merge failed

Reason: Approvers from one of the following sets are needed:

  • superuser (pytorch/metamates)
  • Core Reviewers (mruberry, lezcano, Skylion007, ngimel, peterbell10)
  • Core Maintainers (soumith, gchanan, ezyang, dzhulgakov, malfet)
Details for Dev Infra team: raised by workflow job.

Failing merge rule: Core Maintainers

@eqy eqy requested a review from malfet March 1, 2024 00:54
Contributor

@malfet malfet left a comment


LGTM, but do you know if this might push memory utilization for some smaller GPUs beyond the limit?

@eqy
Collaborator

eqy commented Mar 1, 2024

We can add an additional qualifier so that sm < 80 sticks with the original 1 MiB. What do you think, @Aidyn-A?

@Aidyn-A
Collaborator Author

Aidyn-A commented Mar 1, 2024

A dilemma of perf vs. memory. I guess it makes sense to decrease it for older/smaller GPUs. Lemme commit it 👍
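A sketch of what that additional qualifier could look like, extending the hypothetical helper from the PR description above; again an assumption for illustration, not the actual C++ change:

```python
# Hedged sketch of the sm < 80 qualifier discussed above.
import torch

def default_cublaslt_workspace_bytes(device: int = 0) -> int:
    major, _minor = torch.cuda.get_device_capability(device)
    if major >= 9:                   # Hopper (sm_90) and newer
        return 32 * 1024 * 1024      # 32 MiB
    if major >= 8:                   # Ampere/Ada (sm_80..sm_89)
        return 4 * 1024 * 1024       # 4 MiB
    return 1 * 1024 * 1024           # sm < 80 keeps the original 1 MiB
```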

@Aidyn-A
Collaborator Author

Aidyn-A commented Mar 1, 2024

@pytorchbot merge

@pytorchmergebot
Collaborator

Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging
Check the merge workflow status here.

@clee2000
Contributor

clee2000 commented Mar 4, 2024

@pytorchbot revert -m "broke inductor models and caused accuracy regression on nightly dashboard https://hud.pytorch.org/pytorch/pytorch/commit/0a38a6ac8046e4d3f9cfaba86b7ec6517038646f https://github.com/pytorch/pytorch/actions/runs/8118465367/job/22193590228" -c nosignal

@pytorchmergebot
Collaborator

@pytorchbot successfully started a revert job. Check the current status here.
Questions? Feedback? Please reach out to the PyTorch DevX Team

@pytorchmergebot
Collaborator

@Aidyn-A your PR has been successfully reverted.

@mortzur
Contributor

mortzur commented Mar 11, 2024

@Aidyn-A A workflow that sets
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"
for determinism fails following this change.
Is it possible to check whether CUBLAS_WORKSPACE_CONFIG is provided by the user and enforce it if so?
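For context, a sketch of the kind of deterministic workflow described above; the environment variable must be in place before the first cuBLAS handle is created, so it is set before importing torch:

```python
# Sketch of the determinism setup referenced above.
import os
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # documented deterministic cuBLAS config

import torch
torch.use_deterministic_algorithms(True)

x = torch.randn(128, 128, device="cuda")
y = x @ x  # matmuls now run under the deterministic cuBLAS configuration
```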

@ptrblck
Collaborator

ptrblck commented Mar 12, 2024

@mortzur Could you describe what exactly is failing? This PR changes the cublasLt workspace and does not touch cublas workspace handling.

@mortzur
Contributor

mortzur commented Mar 12, 2024

@ptrblck you're right, I can set CUBLASLT_WORKSPACE_SIZE to fix the workspace size. No issue here.
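A sketch of that workaround, assuming PyTorch interprets CUBLASLT_WORKSPACE_SIZE in KiB (worth verifying against the PyTorch version in use):

```python
# Sketch: pin the cuBLASLt workspace back to 1 MiB before importing torch.
# The value is assumed to be in KiB, so 1024 -> 1 MiB.
import os
os.environ["CUBLASLT_WORKSPACE_SIZE"] = "1024"

import torch
a = torch.randn(64, 64, device="cuda", dtype=torch.half)
b = torch.randn(64, 64, device="cuda", dtype=torch.half)
c = a @ b  # half-precision matmuls may dispatch to cuBLASLt and use this workspace
```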

@ptrblck
Collaborator

ptrblck commented Apr 11, 2024

@mortzur Great, thanks for confirming!

@malfet is this solving the internal test issues and is anything else needed to land the PR?

@github-actions
Contributor

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the Stale label Jun 10, 2024
@Aidyn-A
Collaborator Author

Aidyn-A commented Jul 3, 2024

@pytorchbot rebase

@pytorchmergebot
Collaborator

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here

@pytorchmergebot
Collaborator

Successfully rebased cublaslt_workspace onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via git checkout cublaslt_workspace && git pull --rebase)

@Aidyn-A
Collaborator Author

Aidyn-A commented Jul 8, 2024

@pytorchbot rebase

@pytorchmergebot
Collaborator

@pytorchbot started a rebase job onto refs/remotes/origin/viable/strict. Check the current status here

@pytorchmergebot
Collaborator

Successfully rebased cublaslt_workspace onto refs/remotes/origin/viable/strict, please pull locally before adding more changes (for example, via git checkout cublaslt_workspace && git pull --rebase)

@github-actions github-actions bot closed this Aug 7, 2024
pytorchmergebot pushed a commit that referenced this pull request Feb 6, 2025
…es (#145130)

As `cuBLAS` workspaces are already per-stream, there shouldn't be kernel execution overlap with `cuBLASLt` kernels.

This PR reuses `cuBLAS` workspaces for `cuBLASLt` for the following benefits:

+ caching (`cuBLAS` workspaces were already cached, so now we get that for `cuBLASLt`)
+ "free" workspace size bump for `cuBLASLt` `cuBLASLt` workspace sizes were previously smaller than those for `cuBLAS` by default which potentially hurts performance, and we encountered difficulty in increasing the size due to downstream OOMs , see also #120925
+ fixes behavior broken behavior with the memtracker; #139442 attempted to handle peaky allocation behavior that broke memtracker equivalence tests but it didn't seem to fully work, here the cached/reused `cuBLAS` workspace seems to fix it
+ one environment variable to rule them all: `CUBLAS_WORKSPACE_CONFIG` applies directly to `cuBLASLt` without a confusing `CUBLASLT_WORKSPACE_SIZE` that users would also need to consider

Pull Request resolved: #145130
Approved by: https://github.com/ngimel
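As a rough illustration of how a `:<size in KiB>:<count>` CUBLAS_WORKSPACE_CONFIG string maps to a total workspace size; this is my reading of the documented format, not a copy of PyTorch's actual parser:

```python
# Illustrative parser for CUBLAS_WORKSPACE_CONFIG strings such as ":4096:8",
# i.e. pairs of <size in KiB>:<count>; total bytes = sum(size_kib * 1024 * count).
def workspace_bytes_from_config(cfg: str, default_bytes: int = 32 * 1024 * 1024) -> int:
    if not cfg:
        return default_bytes
    fields = [f for f in cfg.split(":") if f]   # ":4096:8" -> ["4096", "8"]
    pairs = zip(fields[0::2], fields[1::2])     # (size_kib, count) pairs
    return sum(int(size) * 1024 * int(count) for size, count in pairs)

print(workspace_bytes_from_config(":4096:8"))   # 33554432 bytes == 32 MiB
```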
pytorchmergebot pushed a commit that referenced this pull request Feb 23, 2025
…es (#145130)

aditew01 pushed a commit that referenced this pull request Feb 28, 2025
…es (#145130)

pytorchmergebot pushed a commit that referenced this pull request Mar 22, 2025
…es (#145130)

amathewc pushed a commit to amathewc/pytorch that referenced this pull request Apr 17, 2025
…es (pytorch#145130)

Labels

ciflow/trunk, Merged, module: cublas, open source, Reverted, Stale, topic: not user facing
