
Commit 0203d70

Jiayu Liu authored and facebook-github-bot committed
[nit] fix some typos within documentation (#40692)
Summary: Apologies if this seems trivial, but I'd like to fix these typos along the way while reading some of the source code. Thanks!

Pull Request resolved: #40692
Differential Revision: D22284651
Pulled By: mrshenli
fbshipit-source-id: 4259d1808aa4d15a02cfd486cfb44dd75fdc58f8

1 parent 8e0714a commit 0203d70

File tree

5 files changed: +11 -11 lines changed

torch/csrc/jit/codegen/cuda/ir_printer.h
Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@
 
 /*
  * IRMathPrinter and IRTransformPrinter allow the splitting up of fusion print
- * functions. IRMathPrinter as its name implies focuses soley on what tensor
+ * functions. IRMathPrinter as its name implies focuses solely on what tensor
  * computations are taking place. Resulting TensorView math will reflect the
  * series of split/merge/computeAts that have taken place, however these
  * nodes will not be displayed in what is printed. IRTransformPrinter does not

torch/nn/modules/linear.py
Lines changed: 1 addition & 1 deletion

@@ -96,7 +96,7 @@ def extra_repr(self) -> str:
         )
 
 
-# This class exists soley for Transformer; it has an annotation stating
+# This class exists solely for Transformer; it has an annotation stating
 # that bias is never None, which appeases TorchScript
 class _LinearWithBias(Linear):
     bias: Tensor

torch/nn/modules/loss.py
Lines changed: 2 additions & 2 deletions

@@ -474,7 +474,7 @@ class BCELoss(_WeightedLoss):
     However, an infinite term in the loss equation is not desirable for several reasons.
 
     For one, if either :math:`y_n = 0` or :math:`(1 - y_n) = 0`, then we would be
-    multipying 0 with infinity. Secondly, if we have an infinite loss value, then
+    multiplying 0 with infinity. Secondly, if we have an infinite loss value, then
     we would also have an infinite term in our gradient, since
     :math:`\lim_{x\to 0} \frac{d}{dx} \log (x) = \infty`.
     This would make BCELoss's backward method nonlinear with respect to :math:`x_n`,
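For context, the passage above reasons about the raw per-element term -(y_n log x_n + (1 - y_n) log(1 - x_n)). A minimal numeric sketch of the problem, and of the clamping remedy that BCELoss documents (clamping its log outputs to be >= -100); the tensor values below are illustrative, not part of the commit:

    import torch

    x = torch.tensor([0.0, 0.5, 1.0])   # predictions x_n
    y = torch.tensor([0.0, 0.5, 1.0])   # targets y_n

    # Raw term: with x_n -> 0 and y_n = 0, the product y_n * log(x_n) is 0 * (-inf) = nan.
    raw = -(y * torch.log(x) + (1 - y) * torch.log(1 - x))
    print(raw)                                        # tensor([   nan, 0.6931,    nan])

    # Clamping the log outputs keeps every term finite.
    log_x = torch.clamp(torch.log(x), min=-100)
    log_1mx = torch.clamp(torch.log(1 - x), min=-100)
    print(-(y * log_x + (1 - y) * log_1mx))           # tensor([0.0000, 0.6931, 0.0000])
    print(torch.nn.BCELoss(reduction='none')(x, y))   # matches the clamped computation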
@@ -1316,7 +1316,7 @@ class CTCLoss(_Loss):
         >>> # Initialize random batch of input vectors, for *size = (T,N,C)
         >>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
         >>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
-        >>> 
+        >>>
         >>> # Initialize random batch of targets (0 = blank, 1:C = classes)
         >>> target_lengths = torch.randint(low=1, high=T, size=(N,), dtype=torch.long)
         >>> target = torch.randint(low=1, high=C, size=(sum(target_lengths),), dtype=torch.long)
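The doctest lines in this hunk only set up the random inputs; for context, a self-contained sketch of how such a CTCLoss example is typically completed (the concrete T, N, C values are assumptions, not taken from the diff):

    import torch
    import torch.nn as nn

    T, N, C = 50, 16, 20  # input length, batch size, number of classes (0 = blank)

    input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
    input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
    target_lengths = torch.randint(low=1, high=T, size=(N,), dtype=torch.long)
    target = torch.randint(low=1, high=C, size=(sum(target_lengths),), dtype=torch.long)

    ctc_loss = nn.CTCLoss()  # expects log-probabilities of shape (T, N, C)
    loss = ctc_loss(input, target, input_lengths, target_lengths)
    loss.backward()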

torch/nn/modules/module.py
Lines changed: 5 additions & 5 deletions

@@ -60,7 +60,7 @@ def register_module_forward_pre_hook(hook: Callable[..., None]) -> RemovableHand
     .. warning ::
 
         This adds global state to the `nn.module` module
-        and it is only intended for debugging/profiling purposes. 
+        and it is only intended for debugging/profiling purposes.
 
     The hook will be called every time before :func:`forward` is invoked.
     It should have the following signature::
@@ -92,7 +92,7 @@ def register_module_forward_hook(hook: Callable[..., None]) -> RemovableHandle:
     .. warning ::
 
         This adds global state to the `nn.module` module
-        and it is only intended for debugging/profiling purposes. 
+        and it is only intended for debugging/profiling purposes.
 
     The hook will be called every time after :func:`forward` has computed an output.
     It should have the following signature::
@@ -124,7 +124,7 @@ def register_module_backward_hook(
 
     .. warning ::
         This adds global state to the `nn.module` module
-        and it is only intended for debugging/profiling purposes. 
+        and it is only intended for debugging/profiling purposes.
 
     The current implementation will not have the presented behavior
     for complex :class:`Module` that perform many operations.
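The three warnings touched above belong to the global hook registration functions in torch/nn/modules/module.py. For context, a minimal sketch of using one of them for the debugging/profiling purpose the warning mentions (the logging hook and the Linear module below are illustrative, not part of the commit):

    import torch
    import torch.nn as nn
    from torch.nn.modules.module import register_module_forward_hook

    def log_output_shape(module, input, output):
        # Called after every module's forward(); here we just log the output shape.
        print(f"{module.__class__.__name__} -> {tuple(output.shape)}")

    handle = register_module_forward_hook(log_output_shape)
    _ = nn.Linear(4, 2)(torch.randn(3, 4))  # prints: Linear -> (3, 2)
    handle.remove()  # drop the global state once the profiling session is done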
@@ -977,7 +977,7 @@ def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
                     error_msgs.append('While copying the parameter named "{}", '
                                       'whose dimensions in the model are {} and '
                                       'whose dimensions in the checkpoint are {}, '
-                                      'an exception occured : {}.'
+                                      'an exception occurred : {}.'
                                       .format(key, param.size(), input_param.size(), ex.args))
             elif strict:
                 missing_keys.append(key)
@@ -1329,7 +1329,7 @@ def _get_name(self):
     def extra_repr(self) -> str:
         r"""Set the extra representation of the module
 
-        To print customized extra information, you should reimplement
+        To print customized extra information, you should re-implement
         this method in your own modules. Both single-line and multi-line
         strings are acceptable.
         """

torch/nn/quantized/modules/functional_modules.py
Lines changed: 2 additions & 2 deletions

@@ -6,7 +6,7 @@
 
 
 class FloatFunctional(torch.nn.Module):
-    r"""State collector class for float operatitons.
+    r"""State collector class for float operations.
 
     The instance of this class can be used instead of the ``torch.`` prefix for
     some operations. See example usage below.

@@ -84,7 +84,7 @@ def add_relu(self, x, y):
 
 
 class QFunctional(torch.nn.Module):
-    r"""Wrapper class for quantized operatitons.
+    r"""Wrapper class for quantized operations.
 
     The instance of this class can be used instead of the
     ``torch.ops.quantized`` prefix. See example usage below.
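Both docstrings refer to example usage further down in the file; for context, a minimal sketch of the FloatFunctional pattern they describe, standing in for the ``torch.`` prefix (tensor values are illustrative, not part of the commit):

    import torch
    from torch.nn.quantized import FloatFunctional

    f_add = FloatFunctional()  # a Module that replaces the ``torch.`` prefix for add
    a = torch.tensor([1.0, 2.0])
    b = torch.tensor([3.0, 4.0])
    print(f_add.add(a, b))  # tensor([4., 6.]), equivalent to torch.add(a, b)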
