[functorch] vmap: chunk_size support #91157
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/91157
Note: links to docs will display an error until the doc builds have completed.
✅ No failures as of commit e0ec003.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
torch/_functorch/vmap.py
Outdated
```python
split_idxs = tuple(itertools.accumulate(chunk_numels))

flat_args_chunks = tuple(
    t.tensor_split(split_idxs, dim=in_dim) if in_dim is not None else [t, ] * len(split_idxs)
```
tensor_split returns a list of views:

```yaml
- func: tensor_split.indices(Tensor(a -> *) self, SymInt[] indices, int dim=0) -> Tensor(a)[]
```
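For context, here is a minimal check (my own illustration, not code from this PR) of the view behavior the reviewer is pointing out: the chunks produced by torch.tensor_split alias the input's storage, so in-place writes to a chunk mutate the original tensor.

```python
import torch

t = torch.arange(10)
# Split at indices 3 and 7 -> views of sizes 3, 4, and 3.
chunks = torch.tensor_split(t, (3, 7), dim=0)

# The first chunk shares storage with t ...
assert chunks[0].data_ptr() == t.data_ptr()

# ... so zeroing it in place also zeroes t[:3].
chunks[0].zero_()
assert t[:3].eq(0).all()
```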
zou3519
left a comment
Code looks correct; I had a couple of minor comments. Some high-level points:
- jacrev(..., chunk_size=1) should actually run a for-loop instead of a vmap over a dimension of size 1, because a for-loop sidesteps vmap's limitations. That matters for potential users switching over from torch.autograd.functional.jacobian, which already works this way (a for-loop by default). I'm not completely sure what the behavior of vmap(..., chunk_size=1) should be; my initial thought is that it should be consistent (see the sketch after this list).
- We should beef up the testing (I left some suggestions).
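To make the distinction above concrete, here is a hedged sketch (hypothetical helper names, not code from this PR; assumes torch.func.vmap from PyTorch ≥ 2.0) contrasting a plain Python for-loop with a vmap applied to size-1 chunks. Both compute the same result, but only the for-loop is free of vmap's constraints:

```python
import torch
from torch.func import vmap

def apply_with_loop(f, x):
    # Plain for-loop over the batch dim: no vmap restrictions
    # (data-dependent control flow, .item() calls, etc. all work).
    return torch.stack([f(xi) for xi in x.unbind(0)])

def apply_with_size1_vmap(f, x):
    # vmap over each size-1 chunk: numerically equivalent here,
    # but f must still be vmap-compatible.
    chunks = x.tensor_split(list(range(1, x.size(0))), dim=0)
    return torch.cat([vmap(f)(c) for c in chunks], dim=0)

x = torch.randn(4, 3)
assert torch.allclose(apply_with_loop(torch.sin, x),
                      apply_with_size1_vmap(torch.sin, x))
```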
…to dev/vmap/chunk
PTAL @zou3519 :)
zou3519
left a comment
Some last minor comments; otherwise, this LGTM.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
Merge failed. Reason: the following mandatory check(s) failed. Dig deeper by viewing the failures on hud. Details for the Dev Infra team: raised by workflow job.
@pytorchbot merge
Merge started. Your change will be merged once all checks pass (ETA 0-4 hours). Learn more about merging in the wiki. Questions? Feedback? Please reach out to the PyTorch DevX Team.
As discussed at #91157 (comment).
Pull Request resolved: #91326
Approved by: https://github.com/zou3519
Ref: pytorch/functorch#680

We introduce a kwarg `chunk_size` in vmap. We also leverage most of the code from `chunk_vmap` (except for chunking the input based on `chunk_size`). Benchmarks from pytorch/functorch#774 apply.
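For illustration, a usage sketch of the new kwarg as exposed through torch.func.vmap (assuming PyTorch ≥ 2.0; `chunk_size=None` keeps the single-vmap behavior, while smaller values trade peak memory for extra kernel launches):

```python
import torch
from torch.func import vmap

def f(x):
    return x.sin().sum()

x = torch.randn(1000, 64)

full = vmap(f)(x)                     # one batched call over all 1000 rows
chunked = vmap(f, chunk_size=128)(x)  # applied 128 rows at a time

# Chunking changes only the execution strategy, not the result.
assert torch.allclose(full, chunked)
```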