
Conversation

@miraliahmadli (Contributor) commented Jul 31, 2020

The torch.nn.Hardsigmoid and torch.nn.Hardswish classes currently do not support in-place operation, as they call the torch.nn.functional.hardsigmoid and torch.nn.functional.hardswish functions with the default inplace argument, which is False.

So I added an inplace argument to the torch.nn.Hardsigmoid and torch.nn.Hardswish classes so that the forward operation can also be performed in-place when using these layers.
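
A minimal sketch of what the new option enables (assuming a PyTorch build that includes this change, e.g. 1.7+; the tensor names here are just illustrative):

```python
import torch
import torch.nn as nn

x = torch.randn(2, 3)

# Default behaviour: the input tensor is left untouched and a new tensor is returned.
out = nn.Hardswish()(x)

# With inplace=True, the activation writes the result into the input tensor,
# so the returned tensor shares storage with `x` and no extra memory is allocated.
act = nn.Hardsigmoid(inplace=True)
y = act(x)
assert y.data_ptr() == x.data_ptr()
```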

@miraliahmadli requested a review from apaszke as a code owner July 31, 2020 01:13
@miraliahmadli changed the title from "Added inplace option for hardsigmoid and hardswish layers" to "Add inplace option for hardsigmoid and hardswish layers" Jul 31, 2020
@miraliahmadli changed the title from "Add inplace option for hardsigmoid and hardswish layers" to "Add inplace option for torch.nn.Hardsigmoid and torch.nn.Hardswish layers" Jul 31, 2020
@mruberry added the labels triaged (This issue has been looked at by a team member, and triaged and prioritized into an appropriate module) and module: nn (Related to torch.nn) Jul 31, 2020
@mruberry requested review from albanD and mruberry and removed the request for apaszke July 31, 2020 20:48
@albanD (Collaborator) left a comment

Sorry for the late review!
This looks good. Thanks for the PR!

@facebook-github-bot left a comment

@albanD has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot commented

@albanD merged this pull request in ff6a2b0.

@HsLOL commented Nov 15, 2020

Hello @miraliahmadli,
My torch.nn.functional.hardsigmoid and torch.nn.functional.hardswish functions are the same as yours, but I still get the error "torch.nn.modules.module.ModuleAttributeError: 'Hardswish' object has no attribute 'inplace'".

Here is my code for the torch.nn.Hardsigmoid and torch.nn.Hardswish classes:
```python
class Hardsigmoid(Module):
    r"""Applies the element-wise function:

    .. math::
        \text{Hardsigmoid}(x) = \begin{cases}
            0 & \text{if~} x \le -3, \\
            1 & \text{if~} x \ge +3, \\
            x / 6 + 1 / 2 & \text{otherwise}
        \end{cases}

    Args:
        inplace: can optionally do the operation in-place. Default: ``False``

    Shape:
        - Input: :math:`(N, *)` where `*` means, any number of additional
          dimensions
        - Output: :math:`(N, *)`, same shape as the input

    Examples::

        >>> m = nn.Hardsigmoid()
        >>> input = torch.randn(2)
        >>> output = m(input)
    """
    __constants__ = ['inplace']

    inplace: bool

    def __init__(self, inplace: bool = False) -> None:
        super(Hardsigmoid, self).__init__()
        self.inplace = inplace

    def forward(self, input: Tensor) -> Tensor:
        return F.hardsigmoid(input, self.inplace)


class Tanh(Module):
    r"""Applies the element-wise function:

    .. math::
        \text{Tanh}(x) = \tanh(x) = \frac{\exp(x) - \exp(-x)} {\exp(x) + \exp(-x)}

    Shape:
        - Input: :math:`(N, *)` where `*` means, any number of additional
          dimensions
        - Output: :math:`(N, *)`, same shape as the input

    .. image:: ../scripts/activation_images/Tanh.png

    Examples::

        >>> m = nn.Tanh()
        >>> input = torch.randn(2)
        >>> output = m(input)
    """

    def forward(self, input: Tensor) -> Tensor:
        return torch.tanh(input)


class SiLU(Module):
    r"""Applies the silu function, element-wise.

    .. math::
        \text{silu}(x) = x * \sigma(x), \text{where } \sigma(x) \text{ is the logistic sigmoid.}

    .. note::
        See `Gaussian Error Linear Units (GELUs) <https://arxiv.org/abs/1606.08415>`_
        where the SiLU (Sigmoid Linear Unit) was originally coined, and see
        `Sigmoid-Weighted Linear Units for Neural Network Function Approximation
        in Reinforcement Learning <https://arxiv.org/abs/1702.03118>`_ and `Swish:
        a Self-Gated Activation Function <https://arxiv.org/abs/1710.05941v1>`_
        where the SiLU was experimented with later.

    Shape:
        - Input: :math:`(N, *)` where `*` means, any number of additional
          dimensions
        - Output: :math:`(N, *)`, same shape as the input

    Examples::

        >>> m = nn.SiLU()
        >>> input = torch.randn(2)
        >>> output = m(input)
    """
    __constants__ = ['inplace']
    inplace: bool

    def __init__(self, inplace: bool = False):
        super(SiLU, self).__init__()
        self.inplace = inplace

    def forward(self, input: Tensor) -> Tensor:
        return F.silu(input, inplace=self.inplace)

    def extra_repr(self) -> str:
        inplace_str = 'inplace=True' if self.inplace else ''
        return inplace_str


class Hardswish(Module):
    r"""Applies the hardswish function, element-wise, as described in the paper:

    `Searching for MobileNetV3`_.

    .. math::
        \text{Hardswish}(x) = \begin{cases}
            0 & \text{if~} x \le -3, \\
            x & \text{if~} x \ge +3, \\
            x \cdot (x + 3) /6 & \text{otherwise}
        \end{cases}

    Args:
        inplace: can optionally do the operation in-place. Default: ``False``

    Shape:
        - Input: :math:`(N, *)` where `*` means, any number of additional
          dimensions
        - Output: :math:`(N, *)`, same shape as the input

    Examples::

        >>> m = nn.Hardswish()
        >>> input = torch.randn(2)
        >>> output = m(input)

    .. _`Searching for MobileNetV3`:
        https://arxiv.org/abs/1905.02244
    """
    __constants__ = ['inplace']

    inplace: bool

    def __init__(self, inplace: bool = False) -> None:
        super(Hardswish, self).__init__()
        self.inplace = inplace

    def forward(self, input: Tensor) -> Tensor:
        return F.hardswish(input, self.inplace)
```

Thank you for your help.
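
A quick diagnostic sketch for this kind of error, assuming it comes from a mismatch between the installed torch and the patched source shown above, is to check the installed version and the constructor signatures:

```python
import inspect

import torch
import torch.nn as nn

# The inplace argument for these modules landed after this PR (around torch 1.7).
print(torch.__version__)

# If 'inplace' does not show up in these signatures, the installed
# torch.nn.modules.activation predates the change.
print(inspect.signature(nn.Hardsigmoid.__init__))
print(inspect.signature(nn.Hardswish.__init__))
```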
