[pytorch] flatten_indices function should use vector::resize instead of reserve #73831
Conversation
flatten_indices function should use vector::resize instead of reserve

Summary:
stderr:
test_jagged_2d_to_dense_truncation (deeplearning.fbgemm.fbgemm_gpu.test.sparse_ops_test.SparseOpsTest) ... third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/stl_vector.h:1045: std::vector::reference std::vector<long>::operator[](std::vector::size_type) [_Tp = long, _Alloc = std::allocator<long>]: Assertion '__n < this->size()' failed.

After some digging, we found that the issue is not caused by fbgemm. It is caused by the underlying coo_sparse_tensor.to_dense() function, which fails for all sparse tensors on platform010 because of the misuse of vector::reserve in the flatten_indices function.

Test Plan: buck test mode/dev -c fbcode.platform=platform010 //deeplearning/fbgemm/fbgemm_gpu:sparse_ops_test -- test_jagged_2d_to_dense_truncation

Reviewed By: jspark1105

Differential Revision: D34665804

fbshipit-source-id: 2adad5496bdd64fb21607118b82e574fc156f59e
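For context on the fix: std::vector::reserve only grows the vector's capacity and leaves size() unchanged, so writing through operator[] into the reserved-but-unconstructed region is undefined behavior. With libstdc++ assertions enabled (as on platform010 builds) this is exactly what trips the `__n < this->size()` assertion above, whereas vector::resize (or constructing the vector with its final size) actually creates the elements. The snippet below is a minimal, hypothetical illustration of the failure mode and the fix, not the actual flatten_indices implementation; the function names and the multiplier computation are assumptions made for the example.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Buggy pattern: reserve() changes capacity only, size() stays 0, so the
// writes below are out of bounds. With -D_GLIBCXX_ASSERTIONS this aborts
// with "Assertion '__n < this->size()' failed."
std::vector<int64_t> row_major_multipliers_buggy(const std::vector<int64_t>& full_size) {
  std::vector<int64_t> mult;
  mult.reserve(full_size.size());                     // capacity only, size() == 0
  int64_t m = 1;
  for (int64_t i = static_cast<int64_t>(full_size.size()) - 1; i >= 0; --i) {
    mult[i] = m;                                      // UB: i >= mult.size()
    m *= full_size[i];
  }
  return mult;                                        // size() is still 0
}

// Fixed pattern: resize() (or sized construction) value-initializes the
// elements and sets size(), so every index written below is valid.
std::vector<int64_t> row_major_multipliers_fixed(const std::vector<int64_t>& full_size) {
  std::vector<int64_t> mult(full_size.size());        // size() == full_size.size()
  int64_t m = 1;
  for (int64_t i = static_cast<int64_t>(full_size.size()) - 1; i >= 0; --i) {
    mult[i] = m;
    m *= full_size[i];
  }
  return mult;
}

int main() {
  // For a dense shape {2, 3, 4}, the per-dimension multipliers used to
  // flatten a multi-dimensional index are {12, 4, 1}.
  const std::vector<int64_t> sizes{2, 3, 4};
  const auto mult = row_major_multipliers_fixed(sizes);
  assert(mult == (std::vector<int64_t>{12, 4, 1}));
  return 0;
}
```

A coordinate (i0, i1, i2) then flattens to i0*12 + i1*4 + i2, which is the kind of per-dimension stride computation an index-flattening routine performs; the fix only changes how the scratch vector is sized, not the arithmetic.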
💊 CI failures summary and remediations
As of commit 9dd1cf0 (more details on the Dr. CI page): 💚 Looks good so far! There are no failures yet. 💚
This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.
This pull request was exported from Phabricator. Differential Revision: D34665804
Hey @jiyuanzFB.
flatten_indices function should use vector::resize instead of reserve (#73831)

Summary:
Pull Request resolved: pytorch/pytorch#73831
stderr:
test_jagged_2d_to_dense_truncation (deeplearning.fbgemm.fbgemm_gpu.test.sparse_ops_test.SparseOpsTest) ... third-party-buck/platform010/build/libgcc/include/c++/trunk/bits/stl_vector.h:1045: std::vector::reference std::vector<long>::operator[](std::vector::size_type) [_Tp = long, _Alloc = std::allocator<long>]: Assertion '__n < this->size()' failed.

After some digging, we found that the issue is not caused by fbgemm. It is caused by the underlying coo_sparse_tensor.to_dense() function, which fails for all sparse tensors on platform010 because of the misuse of vector::reserve in the flatten_indices function.

Test Plan: buck test mode/dev -c fbcode.platform=platform010 //deeplearning/fbgemm/fbgemm_gpu:sparse_ops_test -- test_jagged_2d_to_dense_truncation

Reviewed By: jspark1105

Differential Revision: D34665804

fbshipit-source-id: 685cfd516fded224cfdb44b5a5f18c2d8e0ec644

(cherry picked from commit 0c33c3a8499d58149afa7b54a333e5b28803210b)