Conversation

@soumith soumith commented Jan 2, 2017

Also removing the `all`, `any` stateless methods.

Still left:

  • ones
  • zeros
  • rand
  • randn
  • index_select
  • masked_select
  • t
  • transpose
  • reshape
  • cat
  • squeeze
  • scatter
  • multinomial
  • normal
  • sort
  • topk
  • kthvalue
  • renorm
  • trace
  • unfold
  • symeig
  • geqrf
  • ger
  • gesv
  • inverse
  • orgqr
  • ormqr
  • potrf
  • potri
  • potrs
  • pstrf
  • qr
  • svd
  • trtrs

torch/docs.py Outdated
Returns a new `Tensor` with the arccosine of the elements of :attr:`input`.
Args:
    input (Tensor): the input `Tensor`

acos(input, out=None) -> Tensor
Computes the element-wise inverse cosine of a tensor.
Returns a new `Tensor` with the arccosine of the elements of :attr:`input`.
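These semantics can be sanity-checked against the current `torch` API (a sketch, assuming a recent PyTorch build; `torch.tensor` here stands in for the era's `FloatTensor` constructor):

```python
import math

import torch

# Element-wise inverse cosine: acos(1) = 0, acos(-1) = pi, acos(0) = pi/2.
x = torch.tensor([1.0, -1.0, 0.0])
y = torch.acos(x)
print(y)
```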

torch/docs.py Outdated
::
>>> a = torch.randn(4)
>>> print(a)

torch/docs.py Outdated

add_docstr(torch._C.add,
"""
.. function:: add(tensor, value, out=None)

torch/docs.py Outdated
addbmm(beta=1, mat, alpha=1, batch1, batch2, out=None) -> Tensor
Performs a batch matrix-matrix product of matrices stored in :attr:`batch1` and :attr:`batch2`,
with a reduced add step (all matrix multiplications get accumulated in a single place).
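The "reduced add step" means every batch product is accumulated into a single `n x p` result. A sketch using today's keyword-argument signature (the `beta=1, mat, ...` positional ordering above is the old doc convention, not a valid call today):

```python
import torch

torch.manual_seed(0)

# addbmm collapses the batch dimension:
# out = beta * mat + alpha * sum_i(batch1[i] @ batch2[i])
mat = torch.zeros(2, 4)        # n x p
batch1 = torch.randn(3, 2, 5)  # b x n x m
batch2 = torch.randn(3, 5, 4)  # b x m x p
out = torch.addbmm(mat, batch1, batch2)

# Reference: accumulate each batch matmul in a single place.
ref = sum(batch1[i] @ batch2[i] for i in range(3))
```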

torch/docs.py Outdated
If :attr:`mat1` is an `n x m` Tensor and :attr:`mat2` is an `m x p` Tensor, then :attr:`out` and :attr:`mat` will be `n x p` Tensors.
`alpha` and `beta` are scaling factors on `mat1 @ mat2` and `mat` respectively.

:math:`out = (beta * mat) + (alpha * mat1 * mat2)`
Args:
    beta (float, optional): multiplier for :attr:`mat`
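A quick check of that formula with the modern keyword signature (a sketch; the keyword spelling of `beta`/`alpha` is today's API, not the positional ordering in this old docstring):

```python
import torch

torch.manual_seed(0)

# addmm: out = (beta * mat) + (alpha * mat1 @ mat2)
mat = torch.randn(2, 4)   # n x p
mat1 = torch.randn(2, 3)  # n x m
mat2 = torch.randn(3, 4)  # m x p
out = torch.addmm(mat, mat1, mat2, beta=0.5, alpha=2.0)
ref = 0.5 * mat + 2.0 * (mat1 @ mat2)
```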

torch/docs.py Outdated
If :attr:`mat` is an `n x m` Tensor and :attr:`vec` is a 1D Tensor of size `m`, then :attr:`out` and :attr:`tensor` will be 1D Tensors of size `n`.
`alpha` and `beta` are scaling factors on `mat * vec` and `tensor` respectively.
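A sketch of these shapes against the current API (assuming default `beta = alpha = 1`):

```python
import torch

torch.manual_seed(0)

# addmv: out = beta * tensor + alpha * (mat @ vec)
t = torch.randn(2)       # 1D, size n
mat = torch.randn(2, 3)  # n x m
vec = torch.randn(3)     # 1D, size m
out = torch.addmv(t, mat, vec)
ref = t + mat @ vec      # beta = alpha = 1 by default
```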

If :attr:`vec1` is a vector of size `n` and :attr:`vec2` is a vector of size `m`, then :attr:`mat` must be a matrix of size `n x m`.
Args:
    beta (float, optional): multiplier for :attr:`mat`
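`addr` is the rank-1 (outer-product) update; a sketch with the modern API (`torch.outer` is today's helper for the reference computation, not part of this PR):

```python
import torch

torch.manual_seed(0)

# addr: out = beta * mat + alpha * outer(vec1, vec2)
vec1 = torch.randn(2)    # size n
vec2 = torch.randn(3)    # size m
mat = torch.randn(2, 3)  # n x m
out = torch.addr(mat, vec1, vec2)
ref = mat + torch.outer(vec1, vec2)
```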

torch/docs.py Outdated
"""
baddbmm(beta=1, mat, alpha=1, batch1, batch2, out=None) -> Tensor
Performs a batch matrix-matrix product of matrices stored in :attr:`batch1` and :attr:`batch2`.
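Unlike `addbmm`, `baddbmm` keeps the batch dimension instead of summing it away; a sketch with the modern keyword signature:

```python
import torch

torch.manual_seed(0)

# baddbmm: out[i] = beta * mat[i] + alpha * (batch1[i] @ batch2[i])
mat = torch.randn(3, 2, 4)     # b x n x p
batch1 = torch.randn(3, 2, 5)  # b x n x m
batch2 = torch.randn(3, 5, 4)  # b x m x p
out = torch.baddbmm(mat, batch1, batch2)
ref = mat + batch1 @ batch2    # @ maps over the batch dimension
```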

torch/docs.py Outdated
Returns a new `Tensor` with the arccosine of the elements of :attr:`input`.
Args:
    input (Tensor): the input `Tensor`

torch/docs.py Outdated
Args:
    input (Tensor): the input `Tensor`
    value (float): the number to be added to each element of :attr:`input`
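A minimal sketch of the scalar form, using the current `torch.add`:

```python
import torch

# add: the scalar value is added to every element of input.
a = torch.tensor([1.0, 2.0, 3.0])
out = torch.add(a, 10.0)
```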

.. autofunction:: addmm
.. autofunction:: addmv
.. autofunction:: addr
.. autofunction:: all

acos(input, out=None) -> Tensor
Computes the element-wise inverse cosine of a tensor.
Returns a new `Tensor` with the arccosine of the elements of :attr:`input`.

torch/docs.py Outdated
Example:
>>> torch.acos(torch.FloatTensor([1, -1]))
FloatTensor([0.0000, 3.1416])
::

Example:
::
>>> a = torch.randn(4)

@soumith soumith merged commit a461804 into master Jan 3, 2017
@soumith soumith deleted the moredocs branch January 3, 2017 23:29
simaiden added a commit to simaiden/pytorch that referenced this pull request Feb 26, 2020
Replaced `sum` with `mean` in line pytorch#392
mrshenli pushed a commit to mrshenli/pytorch that referenced this pull request Apr 11, 2020
jjsjann123 pushed a commit to jjsjann123/pytorch that referenced this pull request Sep 23, 2020
Fix reduction heuristics so we don't recompile and we use the correct launch params.
Co-authored-by: Kevin Stephano <kevin.stephano@gmail.com>
jjsjann123 pushed a commit to jjsjann123/pytorch that referenced this pull request Sep 24, 2020
Fix reduction heuristics so we don't recompile and we use the correct launch params.
Co-authored-by: Kevin Stephano <kevin.stephano@gmail.com>
KyleCZH pushed a commit to KyleCZH/pytorch that referenced this pull request Sep 20, 2021
These scripts should probably point to the main repository where these
are published.

Will do a follow up PR to make it configurable later on.

Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
eellison pushed a commit to eellison/pytorch that referenced this pull request Jun 29, 2022
…Variable (pytorch#392)

* Implement UnspecializedPrimitiveVariable codegen

* Make UnspecializedPrimitiveVariable a GraphArg

* Update make_call_generated_code

* Update min/max builtin func

* Support random.random

* Remove unnecessary change

* Fix lint

* Refactor to support multiple random.random

* Refactor out unspecialized numpy and python variables

* Fix RandomValueSource guard

* Support multiple random functions

* Rebase to updated main

* Refactor out random_values_var

* Fix lint

* Fix lint

* Move random_values_var to output graph

* Add need_unwrap to distinguish unspec from x.item()

* Make global rand func unique

* Fix lint

* Add raw value propagation for unspec variables

* Fix lint

* Directly load type(raw_value) & update random func example value

* Fix lint


4 participants