Conversation

@wconstab
Contributor

Summary:
Instead of gen.py, run this in generate_code.py (the autograd generator), which is more
native to torch/csrc than to aten.

Differential Revision: D34408536

@pytorch-bot

pytorch-bot bot commented Mar 10, 2022

CI Flow Status

⚛️ CI Flow

Ruleset - Version: v1
Ruleset - File: https://github.com/wconstab/pytorch/blob/294b59556c1b7e59f0a93f478ac085d392b9fb68/.github/generated-ciflow-ruleset.json
PR ciflow labels: ciflow/default
Add ciflow labels to this PR to trigger more builds:

Workflows | Labels (bold = enabled on this PR) | Status
Triggered Workflows
linux-binary-conda ciflow/binaries, ciflow/binaries_conda, ciflow/default ✅ triggered
linux-binary-libtorch-cxx11-abi ciflow/all, ciflow/binaries, ciflow/binaries_libtorch, ciflow/default, ciflow/trunk ✅ triggered
linux-binary-libtorch-pre-cxx11 ciflow/all, ciflow/binaries, ciflow/binaries_libtorch, ciflow/default, ciflow/trunk ✅ triggered
linux-binary-manywheel ciflow/all, ciflow/binaries, ciflow/binaries_wheel, ciflow/default, ciflow/trunk ✅ triggered
linux-bionic-py3.7-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/noarch, ciflow/trunk ✅ triggered
linux-bionic-rocm4.5-py3.7 ciflow/all, ciflow/default, ciflow/linux, ciflow/rocm, ciflow/trunk ✅ triggered
linux-docs ciflow/all, ciflow/cpu, ciflow/default, ciflow/docs, ciflow/linux, ciflow/trunk ✅ triggered
linux-vulkan-bionic-py3.7-clang9 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk, ciflow/vulkan ✅ triggered
linux-xenial-cuda11.3-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-cuda11.3-py3.7-gcc7-bazel-test ciflow/all, ciflow/bazel, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3-clang5-mobile-build ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk ✅ triggered
linux-xenial-py3-clang5-mobile-custom-build-static ciflow/all, ciflow/default, ciflow/linux, ciflow/mobile, ciflow/trunk ✅ triggered
linux-xenial-py3.7-clang7-asan ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/sanitizers, ciflow/trunk ✅ triggered
linux-xenial-py3.7-clang7-onnx ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/onnx, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc5.4-mobile-lightweight-dispatch-build ciflow/all, ciflow/cpu, ciflow/default, ciflow/libtorch, ciflow/linux, ciflow/mobile, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc7 ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
linux-xenial-py3.7-gcc7-no-ops ciflow/all, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
macos-arm64-binary-conda ciflow/binaries, ciflow/binaries_conda, ciflow/default ✅ triggered
macos-arm64-binary-wheel ciflow/binaries, ciflow/binaries_wheel, ciflow/default ✅ triggered
macos-binary-conda ciflow/binaries, ciflow/binaries_conda, ciflow/default ✅ triggered
macos-binary-libtorch-cxx11-abi ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
macos-binary-libtorch-pre-cxx11 ciflow/binaries, ciflow/binaries_libtorch, ciflow/default ✅ triggered
macos-binary-wheel ciflow/binaries, ciflow/binaries_wheel, ciflow/default ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-gradle-custom-build-single-full-jit ciflow/all, ciflow/android, ciflow/cpu, ciflow/default, ciflow/linux, ciflow/trunk ✅ triggered
win-vs2019-cpu-py3 ciflow/all, ciflow/cpu, ciflow/default, ciflow/trunk, ciflow/win ✅ triggered
win-vs2019-cuda11.3-py3 ciflow/all, ciflow/cuda, ciflow/default, ciflow/trunk, ciflow/win ✅ triggered
windows-binary-conda ciflow/binaries, ciflow/binaries_conda, ciflow/default ✅ triggered
windows-binary-libtorch-debug ciflow/all, ciflow/binaries, ciflow/binaries_libtorch, ciflow/default, ciflow/trunk ✅ triggered
windows-binary-libtorch-release ciflow/all, ciflow/binaries, ciflow/binaries_libtorch, ciflow/default, ciflow/trunk ✅ triggered
windows-binary-wheel ciflow/all, ciflow/binaries, ciflow/binaries_wheel, ciflow/default, ciflow/trunk ✅ triggered
Skipped Workflows
caffe2-linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped
docker-builds ciflow/all, ciflow/trunk 🚫 skipped
ios-12-5-1-arm64 ciflow/all, ciflow/ios, ciflow/macos, ciflow/scheduled 🚫 skipped
ios-12-5-1-arm64-coreml ciflow/all, ciflow/ios, ciflow/macos, ciflow/scheduled 🚫 skipped
ios-12-5-1-arm64-custom-ops ciflow/all, ciflow/ios, ciflow/macos, ciflow/scheduled 🚫 skipped
ios-12-5-1-arm64-metal ciflow/all, ciflow/ios, ciflow/macos, ciflow/scheduled 🚫 skipped
ios-12-5-1-x86-64 ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
ios-12-5-1-x86-64-coreml ciflow/all, ciflow/ios, ciflow/macos, ciflow/trunk 🚫 skipped
libtorch-linux-xenial-cuda10.2-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk 🚫 skipped
libtorch-linux-xenial-cuda11.3-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/trunk 🚫 skipped
linux-bionic-cuda10.2-py3.9-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/slow, ciflow/trunk 🚫 skipped
linux-docs-push ciflow/all, ciflow/cpu, ciflow/linux, ciflow/scheduled 🚫 skipped
linux-xenial-cuda11.3-py3.7-gcc7-no-ops ciflow/all, ciflow/cuda, ciflow/linux, ciflow/trunk 🚫 skipped
macos-10-15-py3-arm64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
macos-10-15-py3-lite-interpreter-x86-64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
macos-11-py3-x86-64 ciflow/all, ciflow/macos, ciflow/trunk 🚫 skipped
parallelnative-linux-xenial-py3.7-gcc5.4 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped
periodic-libtorch-linux-bionic-cuda11.5-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/libtorch, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-bionic-cuda11.5-py3.7-gcc7 ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-linux-xenial-cuda10.2-py3-gcc7-slow-gradcheck ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled, ciflow/slow, ciflow/slow-gradcheck 🚫 skipped
periodic-linux-xenial-cuda11.3-py3.7-gcc7-debug ciflow/all, ciflow/cuda, ciflow/linux, ciflow/scheduled 🚫 skipped
periodic-win-vs2019-cuda11.5-py3 ciflow/all, ciflow/cuda, ciflow/scheduled, ciflow/win 🚫 skipped
pytorch-linux-xenial-py3-clang5-android-ndk-r19c-build ciflow/all, ciflow/android, ciflow/cpu, ciflow/linux, ciflow/trunk 🚫 skipped
pytorch-xla-linux-bionic-py3.7-clang8 ciflow/all, ciflow/cpu, ciflow/linux, ciflow/trunk, ciflow/xla 🚫 skipped

@facebook-github-bot
Contributor

facebook-github-bot commented Mar 10, 2022

🔗 Helpful links

💊 CI failures summary and remediations

As of commit 7a544d6 (more details on the Dr. CI page):


  • 9/10 failures introduced in this PR
  • 1/10 broken upstream at merge base a705486 on Mar 16 from 6:41pm to 9:11pm

🕵️ 6 new failures recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See GitHub Actions build linux-xenial-py3.7-clang7-asan / test (default, 2, 3, linux.2xlarge) (1/6)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2022-03-17T03:54:05.3165544Z SUMMARY: Undefined.../jenkins/workspace/aten/src/ATen/Utils.cpp:20:3 in
2022-03-17T03:54:05.2661327Z     #10 0x561b3cc7a801 in run_mod /tmp/build/80754af9/python_1627392990942/work/Python/pythonrun.c:1037
2022-03-17T03:54:05.2662123Z     #11 0x561b3cc857a9 in PyRun_StringFlags /tmp/build/80754af9/python_1627392990942/work/Python/pythonrun.c:961
2022-03-17T03:54:05.2662572Z     #12 0x561b3cc8580b in PyRun_SimpleStringFlags /tmp/build/80754af9/python_1627392990942/work/Python/pythonrun.c:455
2022-03-17T03:54:05.2664115Z     #13 0x561b3cc85908 in pymain_run_command /tmp/build/80754af9/python_1627392990942/work/Modules/main.c:420
2022-03-17T03:54:05.2664550Z     #14 0x561b3cc85908 in pymain_run_python /tmp/build/80754af9/python_1627392990942/work/Modules/main.c:2907
2022-03-17T03:54:05.2665109Z     #15 0x561b3cc85908 in pymain_main /tmp/build/80754af9/python_1627392990942/work/Modules/main.c:3460
2022-03-17T03:54:05.2665438Z     #16 0x561b3cc85ccb in _Py_UnixMain /tmp/build/80754af9/python_1627392990942/work/Modules/main.c:3495
2022-03-17T03:54:05.3164654Z     #17 0x7f51850e083f in __libc_start_main /build/glibc-S7Ft5T/glibc-2.23/csu/../csu/libc-start.c:291
2022-03-17T03:54:05.3165062Z     #18 0x561b3cc2a554 in _start (/opt/conda/bin/python3.7+0x1d7554)
2022-03-17T03:54:05.3165228Z 
2022-03-17T03:54:05.3165544Z SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /var/lib/jenkins/workspace/aten/src/ATen/Utils.cpp:20:3 in 
2022-03-17T03:54:05.3341292Z + retcode=1
2022-03-17T03:54:05.3341792Z + set -e
2022-03-17T03:54:05.3342059Z + return 1
2022-03-17T03:54:05.3345831Z + [[ linux-xenial-py3.7-clang7-asan-default == *-NO_AVX-* ]]
2022-03-17T03:54:05.3346315Z + [[ default == \n\o\g\p\u\_\N\O\_\A\V\X ]]
2022-03-17T03:54:05.3346873Z + [[ linux-xenial-py3.7-clang7-asan-default == *-NO_AVX2-* ]]
2022-03-17T03:54:05.3347362Z + [[ default == \n\o\g\p\u\_\N\O\_\A\V\X\2 ]]
2022-03-17T03:54:05.3347940Z + [[ linux-xenial-py3.7-clang7-asan-default == *-NO_AVX512-* ]]
2022-03-17T03:54:05.3348411Z + [[ default == \n\o\g\p\u\_\N\O\_\A\V\X\5\1\2 ]]
2022-03-17T03:54:05.3351088Z + [[ linux-xenial-py3.7-clang7-asan-default == *tbb* ]]

See GitHub Actions build linux-xenial-cuda11.3-py3.7-gcc7 / test (default, 1, 2, linux.4xlarge.nvidia.gpu) (2/6)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2022-03-17T04:38:23.7817214Z test_add_done_ca...arg() takes 0 positional arguments but 1 was given
2022-03-17T04:38:23.7787253Z   /opt/conda/lib/python3.7/unittest/suite.py(122): run
2022-03-17T04:38:23.7787581Z   /opt/conda/lib/python3.7/unittest/suite.py(84): __call__
2022-03-17T04:38:23.7788029Z   /opt/conda/lib/python3.7/site-packages/xmlrunner/runner.py(67): run
2022-03-17T04:38:23.7788386Z   /opt/conda/lib/python3.7/unittest/main.py(271): runTests
2022-03-17T04:38:23.7788739Z   /opt/conda/lib/python3.7/unittest/main.py(101): __init__
2022-03-17T04:38:23.7789246Z   /opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py(656): run_tests
2022-03-17T04:38:23.7789601Z   test_futures.py(331): <module>
2022-03-17T04:38:23.7789777Z 
2022-03-17T04:38:23.7789878Z ok (1.511s)
2022-03-17T04:38:23.7809830Z   test_add_done_callback_maintains_callback_order (__main__.TestFuture) ... ok (0.003s)
2022-03-17T04:38:23.7817214Z   test_add_done_callback_no_arg_error_is_ignored (__main__.TestFuture) ... [E pybind_utils.h:201] Got the following error when running the callback: TypeError: no_arg() takes 0 positional arguments but 1 was given
2022-03-17T04:38:23.7818976Z ok (0.001s)
2022-03-17T04:38:23.7834035Z   test_add_done_callback_simple (__main__.TestFuture) ... ok (0.001s)
2022-03-17T04:38:23.7894264Z   test_chained_then (__main__.TestFuture) ... ok (0.006s)
2022-03-17T04:38:23.8916682Z   test_collect_all (__main__.TestFuture) ... ok (0.102s)
2022-03-17T04:38:23.8926011Z   test_done (__main__.TestFuture) ... ok (0.001s)
2022-03-17T04:38:23.8940445Z   test_done_exception (__main__.TestFuture) ... ok (0.001s)
2022-03-17T04:38:23.8962621Z   test_interleaving_then_and_add_done_callback_maintains_callback_order (__main__.TestFuture) ... ok (0.002s)
2022-03-17T04:38:23.8974791Z   test_interleaving_then_and_add_done_callback_propagates_error (__main__.TestFuture) ... [E pybind_utils.h:201] Got the following error when running the callback: ValueError: Expected error
2022-03-17T04:38:23.8975238Z 
2022-03-17T04:38:23.8975398Z At:

See GitHub Actions build linux-bionic-py3.7-clang9 / test (noarch, 1, 1, linux.2xlarge) (3/6)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2022-03-17T03:59:37.2944285Z test_add_done_ca...arg() takes 0 positional arguments but 1 was given
2022-03-17T03:59:37.2917003Z   /opt/conda/lib/python3.7/unittest/suite.py(122): run
2022-03-17T03:59:37.2917499Z   /opt/conda/lib/python3.7/unittest/suite.py(84): __call__
2022-03-17T03:59:37.2918194Z   /opt/conda/lib/python3.7/site-packages/xmlrunner/runner.py(67): run
2022-03-17T03:59:37.2918718Z   /opt/conda/lib/python3.7/unittest/main.py(271): runTests
2022-03-17T03:59:37.2919176Z   /opt/conda/lib/python3.7/unittest/main.py(101): __init__
2022-03-17T03:59:37.2919882Z   /opt/conda/lib/python3.7/site-packages/torch/testing/_internal/common_utils.py(656): run_tests
2022-03-17T03:59:37.2920320Z   test_futures.py(331): <module>
2022-03-17T03:59:37.2920501Z 
2022-03-17T03:59:37.2920603Z ok (0.224s)
2022-03-17T03:59:37.2937592Z   test_add_done_callback_maintains_callback_order (__main__.TestFuture) ... ok (0.002s)
2022-03-17T03:59:37.2944285Z   test_add_done_callback_no_arg_error_is_ignored (__main__.TestFuture) ... [E pybind_utils.h:201] Got the following error when running the callback: TypeError: no_arg() takes 0 positional arguments but 1 was given
2022-03-17T03:59:37.2945240Z ok (0.001s)
2022-03-17T03:59:37.2957509Z   test_add_done_callback_simple (__main__.TestFuture) ... ok (0.001s)
2022-03-17T03:59:37.2994178Z   test_chained_then (__main__.TestFuture) ... ok (0.004s)
2022-03-17T03:59:37.4013670Z   test_collect_all (__main__.TestFuture) ... ok (0.102s)
2022-03-17T03:59:37.4022731Z   test_done (__main__.TestFuture) ... ok (0.001s)
2022-03-17T03:59:37.4034986Z   test_done_exception (__main__.TestFuture) ... ok (0.001s)
2022-03-17T03:59:37.4053323Z   test_interleaving_then_and_add_done_callback_maintains_callback_order (__main__.TestFuture) ... ok (0.002s)
2022-03-17T03:59:37.4062575Z   test_interleaving_then_and_add_done_callback_propagates_error (__main__.TestFuture) ... [E pybind_utils.h:201] Got the following error when running the callback: ValueError: Expected error
2022-03-17T03:59:37.4063018Z 
2022-03-17T03:59:37.4063113Z At:

See GitHub Actions build linux-xenial-py3.7-clang7-asan / test (default, 3, 3, linux.2xlarge) (4/6)

Step: "Unknown" (full log | diagnosis details | 🔁 rerun)

2022-03-17T03:54:06.1342815Z SUMMARY: Undefined.../jenkins/workspace/aten/src/ATen/Utils.cpp:20:3 in
2022-03-17T03:54:06.0806412Z     #10 0x55a074243801 in run_mod /tmp/build/80754af9/python_1627392990942/work/Python/pythonrun.c:1037
2022-03-17T03:54:06.0807560Z     #11 0x55a07424e7a9 in PyRun_StringFlags /tmp/build/80754af9/python_1627392990942/work/Python/pythonrun.c:961
2022-03-17T03:54:06.0808094Z     #12 0x55a07424e80b in PyRun_SimpleStringFlags /tmp/build/80754af9/python_1627392990942/work/Python/pythonrun.c:455
2022-03-17T03:54:06.0810065Z     #13 0x55a07424e908 in pymain_run_command /tmp/build/80754af9/python_1627392990942/work/Modules/main.c:420
2022-03-17T03:54:06.0810584Z     #14 0x55a07424e908 in pymain_run_python /tmp/build/80754af9/python_1627392990942/work/Modules/main.c:2907
2022-03-17T03:54:06.0810986Z     #15 0x55a07424e908 in pymain_main /tmp/build/80754af9/python_1627392990942/work/Modules/main.c:3460
2022-03-17T03:54:06.0811531Z     #16 0x55a07424eccb in _Py_UnixMain /tmp/build/80754af9/python_1627392990942/work/Modules/main.c:3495
2022-03-17T03:54:06.1341974Z     #17 0x7fb47ffb783f in __libc_start_main /build/glibc-S7Ft5T/glibc-2.23/csu/../csu/libc-start.c:291
2022-03-17T03:54:06.1342322Z     #18 0x55a0741f3554 in _start (/opt/conda/bin/python3.7+0x1d7554)
2022-03-17T03:54:06.1342483Z 
2022-03-17T03:54:06.1342815Z SUMMARY: UndefinedBehaviorSanitizer: undefined-behavior /var/lib/jenkins/workspace/aten/src/ATen/Utils.cpp:20:3 in 
2022-03-17T03:54:06.1532558Z + retcode=1
2022-03-17T03:54:06.1532939Z + set -e
2022-03-17T03:54:06.1533216Z + return 1
2022-03-17T03:54:06.1536173Z + [[ linux-xenial-py3.7-clang7-asan-default == *-NO_AVX-* ]]
2022-03-17T03:54:06.1536693Z + [[ default == \n\o\g\p\u\_\N\O\_\A\V\X ]]
2022-03-17T03:54:06.1537240Z + [[ linux-xenial-py3.7-clang7-asan-default == *-NO_AVX2-* ]]
2022-03-17T03:54:06.1537683Z + [[ default == \n\o\g\p\u\_\N\O\_\A\V\X\2 ]]
2022-03-17T03:54:06.1538279Z + [[ linux-xenial-py3.7-clang7-asan-default == *-NO_AVX512-* ]]
2022-03-17T03:54:06.1538761Z + [[ default == \n\o\g\p\u\_\N\O\_\A\V\X\5\1\2 ]]
2022-03-17T03:54:06.1541066Z + [[ linux-xenial-py3.7-clang7-asan-default == *tbb* ]]

See GitHub Actions build macos-10-15-py3-arm64 / build (5/6)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2022-03-17T03:53:49.7109290Z ../torch/csrc/lazy... long' to 'const torch::lazy::hash_t' is ambiguous
2022-03-17T03:53:49.6764150Z   ^
2022-03-17T03:53:49.6802780Z ../torch/csrc/lazy/core/hash.h:27:3: note: candidate constructor
2022-03-17T03:53:49.6852210Z   hash_t(int64_t val) : uint128(static_cast<uint64_t>(val)) {}
2022-03-17T03:53:49.6865150Z   ^
2022-03-17T03:53:49.6904290Z ../torch/csrc/lazy/core/hash.h:28:3: note: candidate constructor
2022-03-17T03:53:49.6954290Z   hash_t(uint32_t val) : uint128(val) {}
2022-03-17T03:53:49.6966180Z   ^
2022-03-17T03:53:49.7006180Z ../torch/csrc/lazy/core/hash.h:29:3: note: candidate constructor
2022-03-17T03:53:49.7057220Z   hash_t(uint64_t val) : uint128(val) {}
2022-03-17T03:53:49.7067820Z   ^
2022-03-17T03:53:49.7109290Z ../torch/csrc/lazy/core/hash.cpp:98:23: error: conversion from 'unsigned long' to 'const torch::lazy::hash_t' is ambiguous
2022-03-17T03:53:49.7158700Z   static const hash_t h_false = 0xe39f30789cab5382;
2022-03-17T03:53:49.7168680Z                       ^         ~~~~~~~~~~~~~~~~~~
2022-03-17T03:53:49.7212530Z ../torch/csrc/lazy/core/hash.h:24:3: note: candidate constructor
2022-03-17T03:53:49.7260870Z   hash_t(int8_t val) : uint128(static_cast<uint32_t>(val)) {}
2022-03-17T03:53:49.7269810Z   ^
2022-03-17T03:53:49.7314060Z ../torch/csrc/lazy/core/hash.h:25:3: note: candidate constructor
2022-03-17T03:53:49.7362350Z   hash_t(int16_t val) : uint128(static_cast<uint32_t>(val)) {}
2022-03-17T03:53:49.7370600Z   ^
2022-03-17T03:53:49.7415660Z ../torch/csrc/lazy/core/hash.h:26:3: note: candidate constructor
2022-03-17T03:53:49.7463840Z   hash_t(int32_t val) : uint128(static_cast<uint32_t>(val)) {}

See GitHub Actions build macos-11-py3-x86-64 / build (6/6)

Step: "Build" (full log | diagnosis details | 🔁 rerun)

2022-03-17T03:54:51.8853400Z ../torch/csrc/lazy... long' to 'const torch::lazy::hash_t' is ambiguous
2022-03-17T03:54:51.8448920Z   ^
2022-03-17T03:54:51.8533190Z ../torch/csrc/lazy/core/hash.h:27:3: note: candidate constructor
2022-03-17T03:54:51.8548620Z   hash_t(int64_t val) : uint128(static_cast<uint64_t>(val)) {}
2022-03-17T03:54:51.8634530Z   ^
2022-03-17T03:54:51.8650580Z ../torch/csrc/lazy/core/hash.h:28:3: note: candidate constructor
2022-03-17T03:54:51.8656050Z   hash_t(uint32_t val) : uint128(val) {}
2022-03-17T03:54:51.8735950Z   ^
2022-03-17T03:54:51.8751700Z ../torch/csrc/lazy/core/hash.h:29:3: note: candidate constructor
2022-03-17T03:54:51.8756940Z   hash_t(uint64_t val) : uint128(val) {}
2022-03-17T03:54:51.8837400Z   ^
2022-03-17T03:54:51.8853400Z ../torch/csrc/lazy/core/hash.cpp:98:23: error: conversion from 'unsigned long' to 'const torch::lazy::hash_t' is ambiguous
2022-03-17T03:54:51.8857860Z   static const hash_t h_false = 0xe39f30789cab5382;
2022-03-17T03:54:51.8939050Z                       ^         ~~~~~~~~~~~~~~~~~~
2022-03-17T03:54:51.8954780Z ../torch/csrc/lazy/core/hash.h:24:3: note: candidate constructor
2022-03-17T03:54:51.8958820Z   hash_t(int8_t val) : uint128(static_cast<uint32_t>(val)) {}
2022-03-17T03:54:51.9041210Z   ^
2022-03-17T03:54:51.9057760Z ../torch/csrc/lazy/core/hash.h:25:3: note: candidate constructor
2022-03-17T03:54:51.9060400Z   hash_t(int16_t val) : uint128(static_cast<uint32_t>(val)) {}
2022-03-17T03:54:51.9143140Z   ^
2022-03-17T03:54:51.9159500Z ../torch/csrc/lazy/core/hash.h:26:3: note: candidate constructor
2022-03-17T03:54:51.9163180Z   hash_t(int32_t val) : uint128(static_cast<uint32_t>(val)) {}
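
For context on this build failure: on macOS, uint64_t is typically unsigned long long, so the unsigned long literal 0xe39f30789cab5382 matches none of the hash_t constructors exactly and several integral conversions tie; on Linux, where uint64_t is unsigned long, the same initialization is an exact match, which is likely why only the macOS builds fail. A minimal, self-contained sketch of the ambiguity and one possible way out via an explicit cast (FakeHash below is a stand-in for torch::lazy::hash_t, and this is not necessarily the fix used in this PR):

#include <cstdint>

// Stand-in for torch::lazy::hash_t, with the constructors quoted in the log above.
struct FakeHash {
  FakeHash(int8_t) {}
  FakeHash(int16_t) {}
  FakeHash(int32_t) {}
  FakeHash(int64_t) {}
  FakeHash(uint32_t) {}
  FakeHash(uint64_t) {}
};

// On LP64 macOS the literal 0xe39f30789cab5382 has type 'unsigned long', which is
// distinct from uint64_t ('unsigned long long') there, so every constructor needs
// an integral conversion of equal rank and overload resolution is ambiguous:
//   FakeHash h_false = 0xe39f30789cab5382;   // error: conversion is ambiguous
// Naming the target type explicitly removes the ambiguity:
static const FakeHash h_false(static_cast<uint64_t>(0xe39f30789cab5382));

int main() { (void)h_false; return 0; }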

3 failures not recognized by patterns:

Job Step Action
GitHub Actions linux-bionic-rocm4.5-py3.7 / test (default, 1, 2, linux.rocm.gpu) Unknown 🔁 rerun
GitHub Actions linux-bionic-rocm4.5-py3.7 / test (default, 2, 2, linux.rocm.gpu) Unknown 🔁 rerun
GitHub Actions linux-xenial-cuda11.3-py3.7-gcc7 / test (default, 2, 2, linux.4xlarge.nvidia.gpu) Unknown 🔁 rerun

🚧 1 fixed upstream failure:

These were probably caused by upstream breakages that were already fixed.

Please rebase on the viable/strict branch:

If your commit is older than viable/strict, run these commands:

git fetch https://github.com/pytorch/pytorch viable/strict
git rebase FETCH_HEAD

This comment was automatically generated by Dr. CI.

Please report bugs/suggestions to the (internal) Dr. CI Users group.


@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D34408536

wconstab added a commit to wconstab/pytorch that referenced this pull request Mar 16, 2022
Summary:
Hooks into existing autograd codegen script (generate_code.py) to take advantage of its integrations into buck/cmake/bazel.

Adds a new option (--gen_lazy_ts_backend) to generate_code.py, calling this from the CMake OSS build and the fbcode build, but not from other internal xplat/ovrsource builds (these could be opted in later)

Bazel support is added in a later diff.

Includes one generated file (torch/csrc/lazy/generated/LazyIr.h) in a unit test (test/cpp/lazy/test_ir.cpp) to partially verify the generator is working, but does not compile the remaining output sources from the generator yet as they depend on other files not yet landed from lazy_tensor_staging branch.

Pull Request resolved: pytorch#73996

Test Plan: OSS/internal CI - verify all builds are working and test_ir.cpp compiles LazyIr.h

Differential Revision: D34408536

fbshipit-source-id: 8435688b43a901ac609762982eba506b9fa70fd6

namespace torch {
namespace lazy {
using at::operator<<;
Contributor

this is probably cargo-culted, but could you file an issue about making sure ADL actually works for operator<< in all cases? This is probably a case of someone sticking an operator<< overload in the wrong namespace.
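
For readers unfamiliar with the ADL point here: argument-dependent lookup only finds an operator<< overload declared in a namespace associated with the argument's type, so a directive like using at::operator<<; is only needed when the overload lives somewhere else. A minimal sketch with made-up types, not PyTorch code:

#include <iostream>

namespace lib {
struct Thing { int v; };

// Declared in the same namespace as Thing, so ADL finds it from any caller.
inline std::ostream& operator<<(std::ostream& os, const Thing& t) {
  return os << "Thing(" << t.v << ")";
}
}  // namespace lib

int main() {
  lib::Thing t{42};
  // Unqualified operator<< resolves via ADL; no using-declaration is needed.
  std::cout << t << "\n";
  // If the overload lived in an unrelated namespace instead, this line would
  // fail to compile until a using-declaration pulled the overload in.
  return 0;
}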

Contributor Author

It might be that we only had this issue back when LazyIr.h was using its own namespace (not torch::lazy). I will check whether it can be removed now. But you might still want me to file that issue, since it was at least a problem in our old namespace setup.

// to differentiate between HASH(nullopt, something) and HASH(something, nullopt),
// and using kNullValue in the hash function in the order of arguments
// serves this purpose.
static const torch::lazy::Value kNullValue = torch::lazy::Value();
Contributor

static value in header file? This is my suspicious face.

Contributor Author

oh yea, this looks really dumb. I'll figure out why we did this and move it somewhere better.

Contributor Author

oh, I remember now. This isn't too bad, since all I was going for was to avoid calling the Value() constructor a ton of times, and two different null values are OK to interchange. Still, I'll fix it; it's ugly.
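
One common alternative to a namespace-scope static object in a header (which gives every translation unit its own copy) is a function-local static accessor. A minimal sketch with a stand-in Value type, offered as an illustration rather than the fix actually landed here:

// Header sketch. 'Value' is a stand-in, not torch::lazy::Value.
struct Value {
  Value() = default;
};

// A function-local static gives one shared, lazily constructed instance,
// while a namespace-scope 'static const Value kNullValue' in a header
// would give each translation unit its own copy.
inline const Value& NullValue() {
  static const Value kNull;
  return kNull;
}

int main() {
  const Value& a = NullValue();
  const Value& b = NullValue();
  return &a == &b ? 0 : 1;  // same instance across calls
}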

})

# Generate IR node classes
fm.write_with_template(f'{backend_key}LazyIr.h', 'LazyIr.h', lambda: {
Contributor

this was just a straight bug before right?

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D34408536

wconstab added a commit to wconstab/pytorch that referenced this pull request Mar 16, 2022
Summary:
Hooks into existing autograd codegen script (generate_code.py) to take advantage of its integrations into buck/cmake/bazel.

Adds a new option (--gen_lazy_ts_backend) to generate_code.py, calling this from the CMake OSS build and the fbcode build, but not from other internal xplat/ovrsource builds (these could be opted in later)

Bazel support is added in a later diff.

Includes one generated file (torch/csrc/lazy/generated/LazyIr.h) in a unit test (test/cpp/lazy/test_ir.cpp) to partially verify the generator is working, but does not compile the remaining output sources from the generator yet as they depend on other files not yet landed from lazy_tensor_staging branch.

Pull Request resolved: pytorch#73996

Test Plan: OSS/internal CI - verify all builds are working and test_ir.cpp compiles LazyIr.h

Reviewed By: ezyang

Differential Revision: D34408536

fbshipit-source-id: f5f915e4760b96a648767551be826336a7058749
wconstab added a commit to wconstab/pytorch that referenced this pull request Mar 17, 2022
Summary:
Hooks into existing autograd codegen script (generate_code.py) to take advantage of its integrations into buck/cmake/bazel.

Adds a new option (--gen_lazy_ts_backend) to generate_code.py, calling this from the CMake OSS build and the fbcode build, but not from other internal xplat/ovrsource builds (these could be opted in later)

Bazel support is added in a later diff.

Includes one generated file (torch/csrc/lazy/generated/LazyIr.h) in a unit test (test/cpp/lazy/test_ir.cpp) to partially verify the generator is working, but does not compile the remaining output sources from the generator yet as they depend on other files not yet landed from lazy_tensor_staging branch.

Pull Request resolved: pytorch#73996

Test Plan: OSS/internal CI - verify all builds are working and test_ir.cpp compiles LazyIr.h

Reviewed By: ezyang

Differential Revision: D34408536

fbshipit-source-id: e50ebfe59f7020ffd0b16edb65cd6359666bee2f
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D34408536

// We can't assume a DataHash size/dataptr approach here bc
// vector<bool> can be optimized as vector<bit> and storage details
// are decoupled from actual size of 'bool' type
hash_t h = 0xad2ed1983bbf2e28;
Contributor Author

@ezyang Wdyt about this? I had a failure in the macOS build.

Compiling LazyIr.h is the first time we've exercised Hash(vector<bool>).

I also added a new test (see test_misc.cpp in this diff).
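
To make the vector<bool> point concrete: std::vector<bool> is a bit-packed specialization, so there is no contiguous array of bool to feed a size/data-pointer hash, and the elements have to be hashed one by one. A minimal sketch using a toy 64-bit combine step, not the real torch::lazy hash machinery:

#include <cstdint>
#include <iostream>
#include <vector>

// Toy 64-bit combine step; the real code would use torch::lazy hash_t and its
// combine helpers, which this sketch does not reproduce.
inline uint64_t Combine(uint64_t seed, uint64_t v) {
  return seed * 0x100000001b3ULL ^ v;
}

inline uint64_t HashBoolVector(const std::vector<bool>& values) {
  // vector<bool> is bit-packed and has no contiguous bool storage,
  // so hash element by element instead of hashing raw bytes.
  uint64_t h = 0xad2ed1983bbf2e28ULL;  // seed echoing the constant in the diff hunk
  for (bool b : values) {
    h = Combine(h, b ? 1u : 0u);
  }
  return h;
}

int main() {
  std::cout << HashBoolVector({true, false, true}) << "\n";
  return 0;
}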

Contributor

uhhh, sure! :) I'm not even sure what you find objectionable about this haha

@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D34408536

wconstab added a commit to wconstab/pytorch that referenced this pull request Mar 17, 2022
Summary:
Hooks into existing autograd codegen script (generate_code.py) to take advantage of its integrations into buck/cmake/bazel.

Adds a new option (--gen_lazy_ts_backend) to generate_code.py, calling this from the CMake OSS build and the fbcode build, but not from other internal xplat/ovrsource builds (these could be opted in later)

Bazel support is added in a later diff.

Includes one generated file (torch/csrc/lazy/generated/LazyIr.h) in a unit test (test/cpp/lazy/test_ir.cpp) to partially verify the generator is working, but does not compile the remaining output sources from the generator yet as they depend on other files not yet landed from lazy_tensor_staging branch.

Pull Request resolved: pytorch#73996

Test Plan: OSS/internal CI - verify all builds are working and test_ir.cpp compiles LazyIr.h

Reviewed By: ezyang

Differential Revision: D34408536

fbshipit-source-id: 7ab411924f3ebfa8e6f5015955733158dd2d46b7
Summary:
Hooks into existing autograd codegen script (generate_code.py) to take advantage of its integrations into buck/cmake/bazel.

Adds a new option (--gen_lazy_ts_backend) to generate_code.py, calling this from the CMake OSS build and the fbcode build, but not from other internal xplat/ovrsource builds (these could be opted in later)

Bazel support is added in a later diff.

Includes one generated file (torch/csrc/lazy/generated/LazyIr.h) in a unit test (test/cpp/lazy/test_ir.cpp) to partially verify the generator is working, but does not compile the remaining output sources from the generator yet as they depend on other files not yet landed from lazy_tensor_staging branch.

Pull Request resolved: pytorch#73996

Test Plan: OSS/internal CI - verify all builds are working and test_ir.cpp compiles LazyIr.h

Reviewed By: ezyang

Differential Revision: D34408536

fbshipit-source-id: b7d46d817b3ed3c56108d65bbf052ed73b4e0827
@facebook-github-bot
Contributor

This pull request was exported from Phabricator. Differential Revision: D34408536

facebook-github-bot pushed a commit that referenced this pull request Mar 17, 2022
Summary:
Hooks into existing autograd codegen script (generate_code.py) to take advantage of its integrations into buck/cmake/bazel.

Adds a new option (--gen_lazy_ts_backend) to generate_code.py, calling this from the CMake OSS build and the fbcode build, but not from other internal xplat/ovrsource builds (these could be opted in later)

Bazel support is added in a later diff.

Includes one generated file (torch/csrc/lazy/generated/LazyIr.h) in a unit test (test/cpp/lazy/test_ir.cpp) to partially verify the generator is working, but does not compile the remaining output sources from the generator yet as they depend on other files not yet landed from lazy_tensor_staging branch.

Pull Request resolved: #73996

Test Plan: OSS/internal CI - verify all builds are working and test_ir.cpp compiles LazyIr.h

Reviewed By: ezyang

Differential Revision: D34408536

fbshipit-source-id: 8af0aea3b95d81eccafc17d64390d70ddd176515
@wconstab added the labels topic: not user facing (topic category) and release notes: lazy (release notes category) on Mar 17, 2022