Conversation

@mrshenli (Contributor) commented Aug 20, 2020

Stack from ghstack:

Differential Revision: D23242698

[ghstack-poisoned]
mrshenli added a commit that referenced this pull request Aug 20, 2020
ghstack-source-id: 71dcfa8
Pull Request resolved: #43337
dr-ci bot commented Aug 20, 2020

💊 CI failures summary and remediations

As of commit 2c7df5b (more details on the Dr. CI page):



🕵️ 1 new failure recognized by patterns

The following CI failures do not appear to be due to upstream breakages:

See CircleCI build pytorch_linux_backward_compatibility_check_test (1/1)

Step: "Run tests"

Aug 20 18:08:24 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.
Aug 20 18:08:24 processing existing schema:  __str__(__torch__.torch.classes._TorchScriptTesting._StackString _0) -> (str _0) 
Aug 20 18:08:24 processing existing schema:  __init__(__torch__.torch.classes._TorchScriptTesting._PickleTester _0, int[] _1) -> (None _0) 
Aug 20 18:08:24 processing existing schema:  __getstate__(__torch__.torch.classes._TorchScriptTesting._PickleTester _0) -> (int[] _0) 
Aug 20 18:08:24 processing existing schema:  __setstate__(__torch__.torch.classes._TorchScriptTesting._PickleTester _0, int[] _1) -> (None _0) 
Aug 20 18:08:24 processing existing schema:  top(__torch__.torch.classes._TorchScriptTesting._PickleTester _0) -> (int _0) 
Aug 20 18:08:24 processing existing schema:  pop(__torch__.torch.classes._TorchScriptTesting._PickleTester _0) -> (int _0) 
Aug 20 18:08:24 processing existing schema:  get(__torch__.torch.classes._TorchScriptTesting._LiteInterpreterTest _0, Tensor _1) -> (str _0) 
Aug 20 18:08:24 processing existing schema:  __getstate__(__torch__.torch.classes._TorchScriptTesting._LiteInterpreterTest _0) -> (int _0) 
Aug 20 18:08:24 processing existing schema:  __setstate__(__torch__.torch.classes._TorchScriptTesting._LiteInterpreterTest _0, int _1) -> (None _0) 
Aug 20 18:08:24 processing existing schema:  __init__(__torch__.torch.classes.dist_rpc.WorkerInfo _0, str _1, int _2) -> (None _0) 
Aug 20 18:08:24 The PR is introducing backward incompatible changes to the operator library. Please contact PyTorch team to confirm whether this change is wanted or not.  
Aug 20 18:08:24  
Aug 20 18:08:24 Broken ops: [ 
Aug 20 18:08:24 	aten::_compute_linear_combination.out(Tensor input, Tensor coefficients, Tensor(a!) out) -> (Tensor(a!)) 
Aug 20 18:08:24 ] 
Aug 20 18:08:24 + cleanup 
Aug 20 18:08:24 + retcode=1 
Aug 20 18:08:24 + set +x 
Aug 20 18:08:24 =================== sccache compilation log =================== 
Aug 20 18:08:24 =========== If your build fails, please take a look at the log above for possible reasons =========== 
Aug 20 18:08:24 Compile requests                 0 
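For context on the flagged schema, the op's name suggests it forms linear combinations of slices of its input weighted by a coefficient matrix. A minimal plain-Python sketch of that semantics (an assumption for illustration only; the real `aten` kernel operates on tensors and writes into the `out` argument shown in the schema):

```python
# Hedged sketch: a plain-Python model of what a linear-combination op like
# aten::_compute_linear_combination might compute (semantics assumed, not
# taken from this PR): out[i] = sum_j coefficients[i][j] * input[j],
# i.e. each output row mixes the input rows by one row of weights.

def compute_linear_combination(input_rows, coefficients):
    """Return rows that are linear combinations of input_rows.

    input_rows:   list of m rows, each a list of n numbers
    coefficients: list of k weight rows, each of length m
    """
    out = []
    for weights in coefficients:
        row = [0.0] * len(input_rows[0])
        for w, src in zip(weights, input_rows):
            for j, v in enumerate(src):
                row[j] += w * v
        out.append(row)
    return out
```

The BC check fires because changing such a schema (e.g. adding, removing, or retyping an argument) breaks serialized TorchScript programs that reference the old signature.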

❄️ 1 failure tentatively classified as flaky, but reruns have not yet been triggered to confirm:

See CircleCI build pytorch_xla_linux_bionic_py3_6_clang9_test (1/1)

Step: "Run tests"

Aug 20 19:31:36 RuntimeError: tensorflow/compiler/xla/xla_client/xrt_local_service.cc:56 : Check failed: tensorflow::NewServer(server_def, &server_) == ::tensorflow::Status::OK() (Unknown: Could not start gRPC server vs. OK)
Aug 20 19:31:36   File "/opt/conda/lib/python3.6/site-packages/torch_xla-1.6-py3.6-linux-x86_64.egg/torch_xla/distributed/xla_multiprocessing.py", line 315, in _setup_replication 
Aug 20 19:31:36     device = xm.xla_device() 
Aug 20 19:31:36   File "/opt/conda/lib/python3.6/site-packages/torch_xla-1.6-py3.6-linux-x86_64.egg/torch_xla/core/xla_model.py", line 231, in xla_device 
Aug 20 19:31:36     devkind=devkind if devkind is not None else None) 
Aug 20 19:31:36   File "/opt/conda/lib/python3.6/site-packages/torch_xla-1.6-py3.6-linux-x86_64.egg/torch_xla/core/xla_model.py", line 136, in get_xla_supported_devices 
Aug 20 19:31:36     xla_devices = _DEVICES.value 
Aug 20 19:31:36   File "/opt/conda/lib/python3.6/site-packages/torch_xla-1.6-py3.6-linux-x86_64.egg/torch_xla/utils/utils.py", line 32, in value 
Aug 20 19:31:36     self._value = self._gen_fn() 
Aug 20 19:31:36   File "/opt/conda/lib/python3.6/site-packages/torch_xla-1.6-py3.6-linux-x86_64.egg/torch_xla/core/xla_model.py", line 18, in <lambda> 
Aug 20 19:31:36     _DEVICES = xu.LazyProperty(lambda: torch_xla._XLAC._xla_get_devices()) 
Aug 20 19:31:36 RuntimeError: tensorflow/compiler/xla/xla_client/xrt_local_service.cc:56 : Check failed: tensorflow::NewServer(server_def, &server_) == ::tensorflow::Status::OK() (Unknown: Could not start gRPC server vs. OK) 
Aug 20 19:31:37 Traceback (most recent call last): 
Aug 20 19:31:37   File "/var/lib/jenkins/workspace/xla/test/test_mp_save.py", line 63, in <module> 
Aug 20 19:31:37     xmp.spawn(_mp_fn, args=(temp_file,)) 
Aug 20 19:31:37   File "/opt/conda/lib/python3.6/site-packages/torch_xla-1.6-py3.6-linux-x86_64.egg/torch_xla/distributed/xla_multiprocessing.py", line 395, in spawn 
Aug 20 19:31:37     start_method=start_method) 
Aug 20 19:31:37   File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 158, in start_processes 
Aug 20 19:31:37     while not context.join(): 
Aug 20 19:31:37   File "/opt/conda/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 113, in join 
Aug 20 19:31:37     (error_index, exitcode) 
Aug 20 19:31:37 Exception: process 1 terminated with exit code 17 
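The final `Exception: process 1 terminated with exit code 17` comes from `torch.multiprocessing.spawn`'s join loop, which raises when any worker exits nonzero. A hedged stdlib sketch of that join logic (illustrative names only, not PyTorch's implementation, which additionally terminates the surviving workers on first failure):

```python
# Hedged sketch of the join behavior behind the error above: start worker
# processes, wait for each, and raise an exception mirroring
# "process N terminated with exit code C" when one exits nonzero.
import subprocess
import sys


def spawn_and_join(commands):
    """Start one subprocess per command and join them all in order."""
    procs = [subprocess.Popen(cmd) for cmd in commands]
    for index, proc in enumerate(procs):
        exitcode = proc.wait()
        if exitcode != 0:
            raise Exception(
                "process %d terminated with exit code %d" % (index, exitcode)
            )
```

In the failing XLA test, worker 1 died while bringing up its gRPC/XRT server, so the parent's join surfaced that worker's exit code as the exception above.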

This comment was automatically generated by Dr. CI. It has been revised 2 times.

@facebook-github-bot (Contributor) commented

@osalpekar merged this pull request in a12fe1a.

@facebook-github-bot facebook-github-bot deleted the gh/mrshenli/222/head branch August 24, 2020 14:15