Run lazy tensor codegen in generate_code.py #73996
Conversation
💊 CI failures summary and remediations

As of commit 7a544d6 (more details on the Dr. CI page):

🕵️ 6 new failures recognized by patterns. The following CI failures do not appear to be due to upstream breakages.

🚧 1 fixed upstream failure. This was probably caused by an upstream breakage that was already fixed. Please rebase on the viable/strict branch. If your commit is older than viable/strict, run these commands:

    git fetch https://github.com/pytorch/pytorch viable/strict
    git rebase FETCH_HEAD

- linux-xenial-cuda11.3-py3.7-gcc7 / test (distributed, 1, 1, linux.8xlarge.nvidia.gpu) on Mar 16 from 6:41pm to 9:11pm (3d88075649 - 2f24a85)

This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.
This pull request was exported from Phabricator. Differential Revision: D34408536
Force-pushed from 294b595 to 2116951
Force-pushed from 2116951 to 3adf1ef
Force-pushed from 3adf1ef to 6c73cfb
Force-pushed from 6c73cfb to 2c5abdd
Force-pushed from 2c5abdd to dcac332
Force-pushed from dcac332 to b0a0e4e
Force-pushed from b0a0e4e to 4d78661
Force-pushed from 4d78661 to f414da0
Force-pushed from e678373 to 6866ca5
Summary: Hooks into the existing autograd codegen script (generate_code.py) to take advantage of its integrations with buck/cmake/bazel. Adds a new option (--gen_lazy_ts_backend) to generate_code.py and calls it from the CMake OSS build and the fbcode build, but not from other internal xplat/ovrsource builds (these could be opted in later). Bazel support is added in a later diff.

Includes one generated file (torch/csrc/lazy/generated/LazyIr.h) in a unit test (test/cpp/lazy/test_ir.cpp) to partially verify the generator is working, but does not yet compile the remaining output sources from the generator, as they depend on other files not yet landed from the lazy_tensor_staging branch.

Pull Request resolved: pytorch#73996

Test Plan: OSS/internal CI - verify all builds are working and that test_ir.cpp compiles LazyIr.h

Differential Revision: D34408536

fbshipit-source-id: 8435688b43a901ac609762982eba506b9fa70fd6
Force-pushed from 6866ca5 to efa559e
    namespace torch {
    namespace lazy {
    using at::operator<<;
This is probably cargo-culted, but could you file an issue about making sure ADL actually works for operator<< in all cases? This is probably a case of someone sticking an operator<< overload in the wrong namespace.
It might be possible that we only had this issue back when LazyIr.h used its own namespace (not torch::lazy). I will check whether it can be removed now. But you might still want me to file that issue, since it was at least a problem in our old namespace setup.
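For readers outside this thread, here is a minimal, self-contained sketch of the pitfall being discussed. Everything in it is hypothetical (namespace other and the Sym type are invented for illustration, not PyTorch code): an operator<< overload declared in a namespace unrelated to its argument's type is invisible to argument-dependent lookup, and a using-declaration like the using at::operator<<; above is the usual workaround.

    #include <iostream>
    #include <ostream>

    namespace at {
    struct Sym {};  // the argument type lives in namespace at
    }  // namespace at

    namespace other {
    // Overload declared in a namespace unrelated to at::Sym. ADL only
    // searches the namespaces associated with the arguments (here, std
    // and at), so this overload is NOT found automatically.
    std::ostream& operator<<(std::ostream& os, const at::Sym&) {
      return os << "Sym";
    }
    }  // namespace other

    namespace torch {
    namespace lazy {
    using other::operator<<;  // workaround: pull the overload into scope

    void dump(const at::Sym& s) {
      std::cout << s << "\n";  // compiles only thanks to the using-declaration
    }
    }  // namespace lazy
    }  // namespace torch

    int main() {
      torch::lazy::dump(at::Sym{});
      return 0;
    }

If the overload were instead declared in namespace at, next to Sym, ADL would find it on its own and the using-declaration could be deleted, which is what the reply above proposes to check.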
    // to differentiate between HASH(nullopt, something) and HASH(something, nullopt),
    // and using kNullValue in the hash function in the order of arguments
    // serves this purpose.
    static const torch::lazy::Value kNullValue = torch::lazy::Value();
A static value in a header file? This is my suspicious face.
Oh yeah, this looks really dumb. I'll figure out why we did this and move it somewhere better.
Oh, I remember now. This isn't too bad, since all I was going for was not calling the Value() constructor a ton of times, and two different null values are OK to interchange. Still, I'll fix it; it's ugly.
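For context, a conventional way to avoid a namespace-scope static in a header (which gives one copy per translation unit that includes it) is a function-local static behind an accessor. The sketch below is hypothetical: GetNullValue and the stub Value are invented for illustration and are not the fix that eventually landed.

    #include <iostream>

    namespace torch {
    namespace lazy {

    // Stand-in for torch::lazy::Value, just for this sketch.
    struct Value {
      Value() = default;
    };

    // A function-local static is constructed once, on first use, and
    // every caller shares that single instance, instead of each
    // translation unit materializing its own kNullValue copy.
    inline const Value& GetNullValue() {
      static const Value kNullValue;
      return kNullValue;
    }

    }  // namespace lazy
    }  // namespace torch

    int main() {
      const auto& a = torch::lazy::GetNullValue();
      const auto& b = torch::lazy::GetNullValue();
      std::cout << (&a == &b) << "\n";  // prints 1: same object everywhere
      return 0;
    }

As the comment above notes, the current form is mostly benign here because two distinct null Values are interchangeable; the accessor just removes the redundant per-TU copies.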
    })

    # Generate IR node classes
    fm.write_with_template(f'{backend_key}LazyIr.h', 'LazyIr.h', lambda: {
This was just a straight bug before, right?
Force-pushed from efa559e to f5de5e7
Force-pushed from f5de5e7 to 86774fa
torch/csrc/lazy/core/hash.cpp (Outdated)
    // We can't assume a DataHash size/dataptr approach here bc
    // vector<bool> can be optimized as vector<bit> and storage details
    // are decoupled from actual size of 'bool' type
    hash_t h = 0xad2ed1983bbf2e28;
@ezyang What do you think about this? I had a failure in the macOS build; compiling LazyIr.h is the first time we've exercised Hash(vector<bool>). I also added a new test (see test_misc.cpp in this diff).
uhhh, sure! :) I'm not even sure what you find objectionable about this haha
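To make the constraint concrete: std::vector<bool> is allowed to be bit-packed, so there is no contiguous array of bool-sized bytes to feed a size/dataptr-style DataHash; the elements have to be visited one by one. The following is a hypothetical sketch of that element-wise approach. The hash_t alias and the HashCombine mixer are stand-ins, not torch::lazy's actual definitions; only the seed constant is taken from the excerpt above.

    #include <cstdint>
    #include <iostream>
    #include <vector>

    using hash_t = std::uint64_t;  // stand-in; not the real hash_t from hash.h

    // Boost-style hash combine; any reasonable mixer would do here.
    hash_t HashCombine(hash_t seed, hash_t value) {
      return seed ^ (value + 0x9e3779b97f4a7c15ULL + (seed << 6) + (seed >> 2));
    }

    // Element-wise hash: correct regardless of how the implementation
    // packs the bits, because we never touch the underlying storage.
    hash_t Hash(const std::vector<bool>& values) {
      hash_t h = 0xad2ed1983bbf2e28ULL;  // seed from the excerpt above
      for (bool b : values) {
        h = HashCombine(h, b ? 1 : 0);
      }
      return h;
    }

    int main() {
      std::cout << std::hex << Hash({true, false, true}) << "\n";
      return 0;
    }

Ordering is preserved in the combine, so {true, false} and {false, true} hash differently, which a storage-blind approach over bit-packed data could not promise.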
Force-pushed from 86774fa to 7a544d6
Force-pushed from 7a544d6 to 851aa7a
Summary: Instead of gen.py, run in generate_code.py (the autograd generator), which is more native to torch/csrc than to aten.

Differential Revision: D34408536