2 changes: 1 addition & 1 deletion torch/csrc/jit/codegen/cuda/ir_printer.h
@@ -8,7 +8,7 @@

/*
* IRMathPrinter and IRTransformPrinter allow the splitting up of fusion print
- * functions. IRMathPrinter as its name implies focuses soley on what tensor
+ * functions. IRMathPrinter as its name implies focuses solely on what tensor
* computations are taking place. Resulting TensorView math will reflect the
* series of split/merge/computeAts that have taken place, however these
* nodes will not be displayed in what is printed. IRTransformPrinter does not
2 changes: 1 addition & 1 deletion torch/nn/modules/linear.py
@@ -96,7 +96,7 @@ def extra_repr(self) -> str:
)


- # This class exists soley for Transformer; it has an annotation stating
+ # This class exists solely for Transformer; it has an annotation stating
# that bias is never None, which appeases TorchScript
class _LinearWithBias(Linear):
bias: Tensor
4 changes: 2 additions & 2 deletions torch/nn/modules/loss.py
@@ -474,7 +474,7 @@ class BCELoss(_WeightedLoss):
However, an infinite term in the loss equation is not desirable for several reasons.
For one, if either :math:`y_n = 0` or :math:`(1 - y_n) = 0`, then we would be
- multipying 0 with infinity. Secondly, if we have an infinite loss value, then
+ multiplying 0 with infinity. Secondly, if we have an infinite loss value, then
we would also have an infinite term in our gradient, since
:math:`\lim_{x\to 0} \frac{d}{dx} \log (x) = \infty`.
This would make BCELoss's backward method nonlinear with respect to :math:`x_n`,
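The reasoning in the BCELoss paragraph above is easier to see with a concrete run. A minimal sketch, assuming predictions pushed close to the 0/1 boundary (the tensors and values are illustrative, not from the patch):

import torch
import torch.nn as nn

loss_fn = nn.BCELoss()
pred = torch.tensor([0.0001, 0.9999], requires_grad=True)  # near the boundary
target = torch.tensor([1.0, 0.0])

loss = loss_fn(pred, target)   # roughly -log(1e-4): large, but finite
loss.backward()
print(loss.item(), pred.grad)  # gradients are steep near the boundary, yet finite

Pushing pred to exactly 0.0 or 1.0 is the case where the log terms themselves would blow up, which is the situation the docstring is warning about.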
@@ -1316,7 +1316,7 @@ class CTCLoss(_Loss):
>>> # Initialize random batch of input vectors, for *size = (T,N,C)
>>> input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
>>> input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
- >>>
+ >>>
>>> # Initialize random batch of targets (0 = blank, 1:C = classes)
>>> target_lengths = torch.randint(low=1, high=T, size=(N,), dtype=torch.long)
>>> target = torch.randint(low=1, high=C, size=(sum(target_lengths),), dtype=torch.long)
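The doctest in this hunk stops before the loss itself is evaluated. A hedged sketch of a typical continuation, with illustrative values for T, C and N since their definitions sit above the visible hunk:

import torch
import torch.nn as nn

T, C, N = 50, 20, 16   # input length, classes (incl. blank), batch size -- assumed values
input = torch.randn(T, N, C).log_softmax(2).detach().requires_grad_()
input_lengths = torch.full(size=(N,), fill_value=T, dtype=torch.long)
target_lengths = torch.randint(low=1, high=T, size=(N,), dtype=torch.long)
target = torch.randint(low=1, high=C, size=(sum(target_lengths),), dtype=torch.long)

ctc_loss = nn.CTCLoss()   # blank index defaults to 0
loss = ctc_loss(input, target, input_lengths, target_lengths)
loss.backward()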
10 changes: 5 additions & 5 deletions torch/nn/modules/module.py
@@ -60,7 +60,7 @@ def register_module_forward_pre_hook(hook: Callable[..., None]) -> RemovableHand
.. warning ::

This adds global state to the `nn.module` module
- and it is only intended for debugging/profiling purposes.
+ and it is only intended for debugging/profiling purposes.

The hook will be called every time before :func:`forward` is invoked.
It should have the following signature::
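The signature block this docstring refers to is cut off by the diff. A hedged sketch of how the global pre-hook might be used for debugging (the hook name and the print are illustrative):

import torch
import torch.nn as nn
from torch.nn.modules.module import register_module_forward_pre_hook

def log_pre_forward(module, inputs):
    # Runs before every Module's forward(); returning None leaves the inputs unchanged.
    print(f"about to run {module.__class__.__name__}")

handle = register_module_forward_pre_hook(log_pre_forward)
nn.Linear(4, 2)(torch.randn(1, 4))   # prints "about to run Linear"
handle.remove()                      # remove the global hook once done debugging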
@@ -92,7 +92,7 @@ def register_module_forward_hook(hook: Callable[..., None]) -> RemovableHandle:
.. warning ::

This adds global state to the `nn.module` module
- and it is only intended for debugging/profiling purposes.
+ and it is only intended for debugging/profiling purposes.

The hook will be called every time after :func:`forward` has computed an output.
It should have the following signature::
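A matching sketch for the global forward hook, again purely illustrative:

import torch
import torch.nn as nn
from torch.nn.modules.module import register_module_forward_hook

def log_forward(module, inputs, output):
    # Runs after forward() has produced `output`; returning None keeps it as-is.
    print(f"{module.__class__.__name__} -> output shape {tuple(output.shape)}")

handle = register_module_forward_hook(log_forward)
nn.Linear(4, 2)(torch.randn(1, 4))   # prints "Linear -> output shape (1, 2)"
handle.remove()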
@@ -124,7 +124,7 @@ def register_module_backward_hook(

.. warning ::
This adds global state to the `nn.module` module
- and it is only intended for debugging/profiling purposes.
+ and it is only intended for debugging/profiling purposes.

The current implementation will not have the presented behavior
for complex :class:`Module` that perform many operations.
@@ -977,7 +977,7 @@ def _load_from_state_dict(self, state_dict, prefix, local_metadata, strict,
error_msgs.append('While copying the parameter named "{}", '
'whose dimensions in the model are {} and '
'whose dimensions in the checkpoint are {}, '
- 'an exception occured : {}.'
+ 'an exception occurred : {}.'
.format(key, param.size(), input_param.size(), ex.args))
elif strict:
missing_keys.append(key)
@@ -1329,7 +1329,7 @@ def _get_name(self):
def extra_repr(self) -> str:
r"""Set the extra representation of the module

- To print customized extra information, you should reimplement
+ To print customized extra information, you should re-implement
this method in your own modules. Both single-line and multi-line
strings are acceptable.
"""
4 changes: 2 additions & 2 deletions torch/nn/quantized/modules/functional_modules.py
@@ -6,7 +6,7 @@


class FloatFunctional(torch.nn.Module):
r"""State collector class for float operatitons.
r"""State collector class for float operations.
The instance of this class can be used instead of the ``torch.`` prefix for
some operations. See example usage below.
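The usage example the docstring points to is cut off by the diff; a hedged sketch of the intended pattern (the tensors are arbitrary):

import torch
from torch.nn.quantized import FloatFunctional

f_add = FloatFunctional()
a, b = torch.randn(3), torch.randn(3)
out = f_add.add(a, b)   # behaves like torch.add(a, b) in float mode, but keeps the
                        # operation as a module so quantization tooling can observe it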
@@ -84,7 +84,7 @@ def add_relu(self, x, y):


class QFunctional(torch.nn.Module):
r"""Wrapper class for quantized operatitons.
r"""Wrapper class for quantized operations.
The instance of this class can be used instead of the
``torch.ops.quantized`` prefix. See example usage below.
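Likewise a sketch for the quantized wrapper; the scales and zero points below are arbitrary:

import torch
from torch.nn.quantized import QFunctional

qa = torch.quantize_per_tensor(torch.randn(3), scale=0.1, zero_point=0, dtype=torch.quint8)
qb = torch.quantize_per_tensor(torch.randn(3), scale=0.1, zero_point=0, dtype=torch.quint8)

q_add = QFunctional()
out = q_add.add(qa, qb)   # stands in for torch.ops.quantized.add(qa, qb, scale, zero_point)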