
Commit b78c9b2

Update on "[optim][adamw] default to foreach when CUDA + differentiable=False"
cc ezyang gchanan [ghstack-poisoned]
1 parent 125537e commit b78c9b2

1 file changed: +1 addition, -1 deletion


torch/optim/adamw.py

Lines changed: 1 addition, 1 deletion
@@ -64,7 +64,7 @@ class AdamW(Optimizer):
             minimizing (default: False)
         foreach (bool, optional): whether foreach implementation of optimizer is used.
             If unspecified by the user (so foreach is None), we will try to use foreach
-            over the for-loop implementation on CUDA, since it is usually significantly
+            over the for-loop implementation on CUDA, since it is usually significantly
             more performant. (default: None)
         capturable (bool, optional): whether this instance is safe to capture in a CUDA graph.
             Passing True can impair ungraphed performance, so if you don't intend to
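For context, the docstring above describes the foreach flag on torch.optim.AdamW. Below is a minimal sketch of how that flag is typically exercised; the toy model, sizes, and learning rate are made up for illustration, and the auto-selection behavior described in the comments is the one stated by the commit title and docstring (prefer the multi-tensor "foreach" path on CUDA when differentiable=False), not a reading of the dispatch code itself.

import torch
from torch.optim import AdamW

# Toy model and data, purely for illustration.
model = torch.nn.Linear(10, 1)
if torch.cuda.is_available():
    model = model.cuda()

# foreach=None (the default) lets the optimizer choose the implementation:
# per this commit, CUDA parameters with differentiable=False should pick the
# multi-tensor ("foreach") path; otherwise it falls back to the per-parameter
# for-loop implementation.
opt_auto = AdamW(model.parameters(), lr=1e-3, foreach=None)

# Passing foreach=True or foreach=False overrides that heuristic explicitly.
opt_foreach = AdamW(model.parameters(), lr=1e-3, foreach=True)
opt_forloop = AdamW(model.parameters(), lr=1e-3, foreach=False)

# One illustrative optimization step.
x = torch.randn(4, 10, device=next(model.parameters()).device)
loss = model(x).sum()
loss.backward()
opt_auto.step()
opt_auto.zero_grad()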
