Commit 421f40e
Use binary units for CUDA memory summary (#91854)
To reduce confusion, use, for example, `KiB` instead of `KB`, since we're talking about powers of 2 rather than powers of 10.
https://en.wikipedia.org/wiki/Byte#Multiple-byte_units
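For reference, a minimal sketch (not part of this PR) of how the two conventions diverge for the 1 MiB allocation used in the repro below:
```
KIB = 1024       # binary kibibyte, 2**10 bytes (IEC "KiB")
KB = 1000        # decimal kilobyte, 10**3 bytes (SI "kB")

n = 1024 * 1024  # the allocation size used in the example below
print(f"{n / KIB:.0f} KiB")  # 1024 KiB
print(f"{n / KB:.1f} KB")    # 1048.6 KB
```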
```
import torch

# Allocate exactly 1 MiB (2**20 bytes) of uint8 on the GPU.
x = torch.zeros(1024 * 1024, dtype=torch.uint8, device='cuda')
print(torch.cuda.memory_summary())  # sizes now shown in binary units
```
```
|===========================================================================|
|                  PyTorch CUDA memory summary, device ID 0                 |
|---------------------------------------------------------------------------|
|            CUDA OOMs: 0             |        cudaMalloc retries: 0        |
|===========================================================================|
|        Metric         | Cur Usage  | Peak Usage | Tot Alloc  | Tot Freed  |
|---------------------------------------------------------------------------|
| Allocated memory      |   1024 KiB |   1024 KiB |   1024 KiB |      0 B   |
|       from large pool |      0 KiB |      0 KiB |      0 KiB |      0 B   |
|       from small pool |   1024 KiB |   1024 KiB |   1024 KiB |      0 B   |
|---------------------------------------------------------------------------|
| Active memory         |   1024 KiB |   1024 KiB |   1024 KiB |      0 B   |
|       from large pool |      0 KiB |      0 KiB |      0 KiB |      0 B   |
|       from small pool |   1024 KiB |   1024 KiB |   1024 KiB |      0 B   |
|---------------------------------------------------------------------------|
```
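For illustration, here is a minimal, hypothetical binary-unit formatter in the same spirit. It is a sketch only: `torch.cuda.memory_summary` itself uses different unit-promotion thresholds and fixed-width padding, which is why the summary above prints the 1 MiB allocation as `1024 KiB` rather than `1 MiB`.
```
def format_size(num_bytes: int) -> str:
    """Format a byte count with binary (IEC) units.

    Hypothetical sketch, not PyTorch's implementation: the real
    formatter promotes units at different thresholds and pads
    columns to a fixed width.
    """
    units = ["B", "KiB", "MiB", "GiB", "TiB"]
    size = float(num_bytes)
    unit = units[0]
    for next_unit in units[1:]:
        if size < 1024:
            break
        size /= 1024.0
        unit = next_unit
    return f"{size:g} {unit}"

print(format_size(0))            # 0 B
print(format_size(1024 * 1024))  # 1 MiB
```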
Pull Request resolved: #91854
Approved by: https://github.com/ngimel
1 file changed: +2 −2 (lines 456 and 464)