Conversation
@kshitij12345 kshitij12345 commented Dec 20, 2022

Ref: pytorch/functorch#680

We introduce a kwarg chunk_size in vmap.

We also leverage most of the code from chunk_vmap (except for chunking the input based on chunk_size).

Benchmarks from pytorch/functorch#774 apply.
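As a minimal sketch of the new kwarg (assuming a PyTorch build that includes this PR; `torch.vmap` forwards to `torch.func.vmap`), chunked and unchunked vmap should produce the same result:

```python
import torch

xs = torch.randn(8, 3)

# Process the batch dim in chunks of 2 rows instead of all 8 at once;
# the result should match the unchunked path exactly.
out_chunked = torch.vmap(torch.sum, chunk_size=2)(xs)
out_full = torch.vmap(torch.sum)(xs)
assert torch.allclose(out_chunked, out_full)
```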

pytorch-bot bot commented Dec 20, 2022

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/91157

Note: Links to docs will display an error until the docs builds have been completed.

✅ No Failures

As of commit e0ec003:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

```python
split_idxs = tuple(itertools.accumulate(chunk_numels))

flat_args_chunks = tuple(
    t.tensor_split(split_idxs, dim=in_dim) if in_dim is not None else [t, ] * len(split_idxs)
```
@kshitij12345 (Collaborator, Author) commented on this snippet:

tensor_split returns a list of views:

```yaml
- func: tensor_split.indices(Tensor(a -> *) self, SymInt[] indices, int dim=0) -> Tensor(a)[]
```
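A small sketch of the pattern in the snippet above, illustrating both the accumulate-based split points and the view behavior noted here. The `chunk_numels` values are made up for illustration, and the last cumulative index is dropped so `tensor_split` yields exactly one piece per chunk:

```python
import itertools

import torch

chunk_numels = [3, 3, 2]  # hypothetical per-chunk sizes along the batched dim
split_idxs = tuple(itertools.accumulate(chunk_numels[:-1]))  # (3, 6)

t = torch.arange(8)
chunks = t.tensor_split(split_idxs, dim=0)  # pieces of sizes (3, 3, 2)

# tensor_split returns views, so writing through a chunk mutates t
chunks[0][0] = -1
assert t[0].item() == -1
```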

@kshitij12345 kshitij12345 marked this pull request as ready for review December 20, 2022 10:07
@kshitij12345 kshitij12345 added the release notes: torch.func release notes category for torch.vmap or torch.func.* APIs label Dec 20, 2022
@zou3519 (Contributor) left a comment:

Code looks correct, I had a couple of minor comments. Some high-level points:

  • jacrev(..., chunk_size=1) should actually do a for-loop instead of a vmap over a dimension of size 1. A for-loop avoids vmap's limitations, which matters for potential users switching over from torch.autograd.functional.jacobian, which behaves this way. I'm not completely sure what the behavior of vmap(..., chunk_size=1) should be; my initial thought is that it should be consistent.
  • We should beef up the testing (I left some suggestions)
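For context on the first bullet, a hedged sketch of the jacrev behavior under discussion (a `chunk_size` kwarg for `torch.func.jacrev` landed alongside this work; exact availability depends on the PyTorch version):

```python
import torch
from torch.func import jacrev

x = torch.randn(5)

# With chunk_size=1, the Jacobian is built one row at a time.
# For elementwise sin, the Jacobian is diag(cos(x)) either way.
J = jacrev(torch.sin, chunk_size=1)(x)
assert torch.allclose(J, torch.diag(torch.cos(x)))
```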

@ngimel ngimel added the triaged This issue has been looked at by a team member, and triaged and prioritized into an appropriate module label Dec 21, 2022
@kshitij12345 (Collaborator, Author) commented:
PTAL @zou3519 :)

@zou3519 (Contributor) left a comment:

Some last minor comments, otherwise, this LGTM

@kshitij12345 (Collaborator, Author) commented:

@pytorchbot merge

@pytorch-bot pytorch-bot bot added the ciflow/trunk Trigger trunk jobs on your pull request label Dec 22, 2022
@pytorchmergebot (Collaborator) commented:
Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@pytorchmergebot (Collaborator) commented:
Merge failed

Reason: The following mandatory check(s) failed (Rule superuser):

Dig deeper by viewing the failures on hud

Details for Dev Infra team. Raised by workflow job.

@kshitij12345 (Collaborator, Author) commented:

@pytorchbot merge

@pytorchmergebot (Collaborator) commented:
Merge started

Your change will be merged once all checks pass (ETA 0-4 Hours).

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

Labels: ciflow/trunk, Merged, open source, release notes: torch.func, triaged