Labels: oncall: releng (In support of CI and Release Engineering)
🐛 Describe the bug
- 4x performance regression for 3D convs with AMP on torch 2.9.0 #166122 - @Lucaskabela
- Release 2.9 README.md has links to images that do not work on pypi #165559 - @atalman
- [dynamo] Key error in bytecode_transformation #166033 - @williamwen42
- [dynamo] another 3.11 resume codegen KeyError #166176 - @williamwen42
- torch.bmm + torch.compile no longer works in 2.9, but worked in 2.8 #165892 - @atalman
- [dynamo] error_on_graph_break(True) fails to error on graph break in some cases #166589 - @williamwen42
- Inductor bug when compiling Gemma #165579 - @Lucaskabela
- fix registration design for inductor graph partition for vLLM #165341 - @Lucaskabela
- [Inductor][AO] Flex Attention with AO hit the recompile limit on v2.9.0 #166153 - @Lucaskabela
- version.txt mismatch with tags in release branch #151425 - @atalman
- investigate vLLM llama4 maverick regression in 2.9 #166169 - @Lucaskabela
- numeric bug in CUDNN_ATTENTION, NaN #166211 - @Lucaskabela
- Significant Memory Regression in F.conv3d with bfloat16 Inputs in PyTorch 2.9.0 #166643 - @Lucaskabela
- inductor graph partition x AOTAutograd Cache issue #165471 - @Lucaskabela @BoyuanFeng
- unnecessary warning triggered by inductor in pytorch 2.9 #166286 - @atalman
- UnboundLocalError: cannot access local variable 'tracer_output' where it is not associated with a value #167344 - @zou3519
- Reverts #163712 and forces allgather/scatter inputs/outputs to be contiguous #166181 - @atalman
- [dynamo] Revert C++-fying of symbolic shape guards #166427 - @anijain2305
- [dynamo] fix keyerror in resume_execution (again) #166040 - @williamwen42
- [dynamo] fix error_on_graph_break bug where non-empty checkpoint results in unwanted graph break resumption #166586 - @williamwen42
- [Inductor] No longer throw error in bmm out_dtype lowering due to template heuristics #166457 - @PaulZhang12
- [dynamo] fix store attr graph break in with block #166036 - @williamwen42
- [2.9.1][cuDNN][SDPA] bump cuDNN frontend to 1.12 patch release #166912 - @Lucaskabela
- [cuDNN][conv] Re-enable cuDNN for 3D convolutions (fixed in 9.15+) #166480 - @Lucaskabela
- [Graph Partition] fix partition x memory plan issue #165514 - @BoyuanFeng
- Delete deprecated fp32 precision warnings #166956 - @atalman
Additional validation checks - @atalman:
- Validate Linux aarch64 CUDA builds with triton (Please note all CUDA aarch64 builds were validated by Nvidia) - https://github.com/pytorch/test-infra/actions/runs/19246514410/job/55036423156
- Validate Metadata section of wheels - make sure python versions are set
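The metadata check above can be scripted with the standard library alone: a wheel is a zip archive, and its `*.dist-info/METADATA` member is an RFC 822-style message whose `Requires-Python` and `Classifier` fields carry the supported Python versions. A minimal sketch (the helper name is ours, not part of any PyTorch release tooling):

```python
import zipfile
from email.parser import Parser

# Hypothetical helper: extract the Python-version fields from a wheel's
# METADATA file so a release script can assert they are set correctly.
def wheel_python_metadata(wheel_path):
    with zipfile.ZipFile(wheel_path) as wf:
        # Every wheel carries exactly one *.dist-info/METADATA member.
        metadata_name = next(
            n for n in wf.namelist() if n.endswith(".dist-info/METADATA")
        )
        msg = Parser().parsestr(wf.read(metadata_name).decode("utf-8"))
    return {
        "requires_python": msg.get("Requires-Python"),
        "classifiers": msg.get_all("Classifier") or [],
    }
```

A validation run would call this for each uploaded wheel and fail if `requires_python` is `None` or the expected `Programming Language :: Python :: 3.x` classifiers are missing.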
- PyTorch 2.8.0 exposes statically linked libstdc++ CXX11 ABI symbols #133437 - Issue. Target torch 2.8.1
- CUDA
  - Check cuda 1.12.1 update issue: torch.linalg.eigh fails on GPU #94772 with small wheels. Passes on GPU but fails on CPU; new issue: torch.linalg.eigh fails on CPU #145801
- torch.compile
  - Basic test works (for example see test mentioned in Search for libdevice relative to shared library triton-lang/triton#1176) in PyTorch docker container
  - torch.compile raises an error if used on Windows. Test (part of torchvision): https://github.com/pytorch/test-infra/actions/runs/14182325015/job/39731076931#step:9:447
  - torch.compile works on 3.13: Test: https://github.com/pytorch/test-infra/actions/runs/14315674885/job/40121143490#step:15:3483
  - torch.compile raises an error on 3.13t. Validated: RuntimeError: torch.compile is not supported on Python built with GIL disabled
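The 3.13t check can be probed without importing torch at all: CPython 3.13+ reports its free-threaded state through sys._is_gil_enabled(), and torch.compile refuses to run when the GIL is disabled. A minimal sketch of the same gate (the helper names are ours, not PyTorch's; the error message is the one quoted above):

```python
import sys

# Hypothetical re-implementation of the gate torch.compile applies on
# free-threaded (3.13t) interpreters. Helper names are illustrative only.
def gil_disabled() -> bool:
    # sys._is_gil_enabled() exists only on CPython 3.13+; older
    # interpreters always run with the GIL enabled.
    probe = getattr(sys, "_is_gil_enabled", None)
    return probe is not None and not probe()

def check_compile_supported() -> None:
    # Mirrors the validated behavior: raise on 3.13t, no-op elsewhere.
    if gil_disabled():
        raise RuntimeError(
            "torch.compile is not supported on Python built with GIL disabled"
        )
```

On a standard (GIL-enabled) build `check_compile_supported()` is a no-op; on a 3.13t build it raises the RuntimeError the validation run observed.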
- MPS
  - Resnet is usable out of the box (https://github.com/pytorch/test-infra/actions/runs/14315674885/job/40121143490#step:15:3469)
  - Is torchvision usable? True. German shepherd (cpu): 37.6%, German shepherd (mps): 34.1%
- Validate docker release builds
Versions
2.9.1