🚀 Feature
A new test structure that:
- allows backends to specify which tests (and which variants of those tests) should be run when testing them
- lets backends specify which (other) backend to compare against (where applicable)
- lets new backends quickly register themselves for testing
For example, a backend like XLA should be able to easily register itself for a suite of backend-generic tests, a backend like CUDA should be able to mark some test variants, like bfloat16 inputs, as unsuitable, and an FPGA backend might exclude some generic tests entirely because they use unsupported ops.
Users should be able to request that tests be run for a specific backend and compared against another, e.g. "run the CUDA tests and compare outputs with the CPU backend." Ideally users could also specify input data types, like "run only the bfloat16 XLA tests."
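A rough sketch of what such a registry might look like follows. Every name here (`BackendConfig`, `register_backend`, `instantiate_backend_tests`, the generated class names) is hypothetical and only illustrates the shape of the idea, not an actual proposal or existing PyTorch API:

```python
import unittest

# Hypothetical registry: BackendConfig, register_backend, and
# instantiate_backend_tests are illustrative names, not existing PyTorch APIs.
BACKEND_REGISTRY = {}

class BackendConfig:
    def __init__(self, device, compare_with=None, skip_dtypes=(), skip_tests=()):
        self.device = device              # e.g. 'cuda', 'xla', 'fpga'
        self.compare_with = compare_with  # backend used as the reference for outputs
        self.skip_dtypes = set(skip_dtypes)
        self.skip_tests = set(skip_tests)

def register_backend(config):
    BACKEND_REGISTRY[config.device] = config

# A backend opts in with one call, listing anything it cannot run.
register_backend(BackendConfig('cuda', compare_with='cpu', skip_dtypes={'bfloat16'}))
register_backend(BackendConfig('xla', compare_with='cpu'))
register_backend(BackendConfig('fpga', compare_with='cpu',
                               skip_tests={'test_unsupported_op'}))

def instantiate_backend_tests(generic_case, namespace):
    """Generate one TestCase subclass per registered backend, skipping the
    tests and dtype variants that backend marked as unsuitable."""
    for device, cfg in BACKEND_REGISTRY.items():
        attrs = {'device': device, 'compare_with': cfg.compare_with}
        for name in dir(generic_case):
            if not name.startswith('test_'):
                continue
            fn = getattr(generic_case, name)
            if name in cfg.skip_tests or getattr(fn, 'dtype', None) in cfg.skip_dtypes:
                attrs[name] = unittest.skip('unsuitable for ' + device)(fn)
        cls_name = generic_case.__name__ + device.upper()
        namespace[cls_name] = type(cls_name, (generic_case,), attrs)

# A backend-generic test suite written once and run for every backend.
class TestAddGeneric(unittest.TestCase):
    def test_add(self):
        # A real test would build tensors on self.device and, when
        # self.compare_with is set, compare against that backend's output.
        self.assertEqual(1 + 1, 2)

    def test_add_bfloat16(self):
        self.assertEqual(1 + 1, 2)
    test_add_bfloat16.dtype = 'bfloat16'  # variant metadata the registry can filter on

instantiate_backend_tests(TestAddGeneric, globals())
# Produces TestAddGenericCUDA (bfloat16 variant skipped), TestAddGenericXLA,
# and TestAddGenericFPGA in this module's namespace.
```

Running the module under a normal test runner would then exercise each backend's instantiation, and the "run only the bfloat16 XLA tests" style of filtering could be layered on top via command-line selection of the generated classes and variants.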
Motivation
Historically, PyTorch has only seriously supported CPU and CUDA (plus HIPMasqueradingasCUDA) backends. We want to support a variety of backends, however, and providing them a simple mechanism to hook into PyTorch's existing test infrastructure will be immensely helpful in ensuring they correctly implement PyTorch's interface. Giving backend developers more control over which tests they run should also speed up their development.
Pitch
See feature. I'll create an initial proposal for us to review and post it back to this issue.
Alternatives
We need to let new backends test their implementation of the PyTorch interface; the only question is how much work we do to support this. This issue is to discuss possible improvements.