Tags: mandroid6/pytorch
Merge branch 'master' of https://github.com/pytorch/pytorch into nikitaved/compressed_index_check
Update on "Fix ModuleInfo skip logic" Fixes pytorch#80247 This PR: * Refactors the skip logic as done for OpInfo in pytorch#62713, fixing the logic error * For tests that were wrongly skipped before and now fail: * Fix `TestModule.test_cpu_gpu_parity` to support Lazy modules - this was affecting `LazyConv*` * Adds `expectedFailure` decorators and a follow-up message to address `Conv*` failures on `TestModule.test_memory_format` [ghstack-poisoned]
Rebase on "Fix istft default output length" Fixes pytorch#79778 The old code uses `end = -1` for non-centered input when no length is given, however `end` should point one past the end, so it was trimming away the last element. [ghstack-poisoned]
Update on "Support per-parameter test decoration" Fixes pytorch#79161 This PR does the following: * Expands the `parametrize_fn()` signature from returning a 3-tuple of `(test, test_name, param_kwargs)` to returning a 4-tuple of `(test, test_name, param_kwargs, decorator_fn)`. Expected signature for the addition is `decorator_fn(param_kwargs) -> List[decorator]` i.e. given the full set of test params, return a list of decorators to apply. * `modules`, `ops`, and `parametrize` now fit the new signature, returning `decorator_fn`s instead of applying decorators themselves. * `instantiate_parametrized_tests()` and `instantiate_device_type_tests()` now call the returned `decorator_fn`, passing in the full set of `param_kwargs` (after composition + `device` / `dtype` additions) and applying the returned decorators. * Composing multiple `parametrize_fn`s also composes the corresponding `decorator_fn`s; the composed `decorator_fn` simply concatenates the decorator lists returned by the constituents. * Expands `DecorateInfo.is_active` to support callables: ```python DecorateInfo( unittest.expectedFailure, "TestOps", "test_python_ref_executor", device_type='cuda', active_if=lambda params: params['executor'] == 'nvfuser' ), ``` * Adds several tests to `test/test_testing.py` ensuring proper decoration using `parametrize` and `modules`. * (minor) Refactors `ModuleInfo` slightly to better match `OpInfo`; replace `should_skip()` with `get_decorators()` and merge `skips` and `decorators`. * (minor) Fixes a couple `ModuleInfo` naming oddities uncovered during testing. [ghstack-poisoned]
Update on "[_shard] make ShardedTensor be a Tensor and nn.Parameter" [ghstack-poisoned]