adding docs for some torch.* functions #392
Conversation
torch/docs.py
Outdated
Returns a new `Tensor` with the arccosine of the elements of :attr:`input`.
Args:
    tensor (Tensor): the input `Tensor`
acos(input, out=None) -> Tensor
Computes the element-wise inverse cosine of a tensor.
Returns a new `Tensor` with the arccosine of the elements of :attr:`input`.
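For reference, the element-wise semantics described in this docstring can be sketched in plain Python over a list of floats (`acos_ref` is a hypothetical helper for illustration, not the actual torch implementation):

```python
import math

def acos_ref(values):
    """Element-wise inverse cosine, as torch.acos is documented to behave."""
    return [math.acos(v) for v in values]

# acos(1) == 0, acos(-1) == pi
print(acos_ref([1.0, -1.0]))
```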
torch/docs.py
Outdated
::

    >>> a = torch.randn(4)
    >>> print(a)
torch/docs.py
Outdated
add_docstr(torch._C.add,
"""
.. function:: add(tensor, value, out=None)
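The scalar form of `add` documented here simply adds `value` to every element. A minimal plain-Python sketch of that behavior (`add_ref` is an illustrative name, not torch's implementation):

```python
def add_ref(tensor, value):
    """add(tensor, value): return a new sequence with value added to each element."""
    return [x + value for x in tensor]

print(add_ref([1.0, 2.0, 3.0], 10.0))
```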
torch/docs.py
Outdated
addbmm(beta=1, mat, alpha=1, batch1, batch2, out=None) -> Tensor
Performs a batch matrix-matrix product of matrices stored in :attr:`batch1` and :attr:`batch2`, with a reduced add step (all matrix multiplications get accumulated in a single place).
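The "reduced add step" means all batch products are summed into one `n x p` result: `out = beta * mat + alpha * sum_i(batch1[i] @ batch2[i])`. A plain-Python sketch of those semantics, using nested lists for tensors (`addbmm_ref` is a hypothetical reference helper, not the real kernel):

```python
def addbmm_ref(mat, batch1, batch2, beta=1.0, alpha=1.0):
    """out = beta * mat + alpha * sum over batches of batch1[k] @ batch2[k]."""
    b, n, m, p = len(batch1), len(batch1[0]), len(batch2[0]), len(batch2[0][0])
    # Start from the scaled bias matrix, then accumulate every batch product.
    out = [[beta * mat[i][j] for j in range(p)] for i in range(n)]
    for k in range(b):
        for i in range(n):
            for j in range(p):
                out[i][j] += alpha * sum(batch1[k][i][t] * batch2[k][t][j]
                                         for t in range(m))
    return out

identity = [[1.0, 0.0], [0.0, 1.0]]
a = [[1.0, 2.0], [3.0, 4.0]]
# Two batches of I @ a accumulate into a single matrix: 2 * a.
print(addbmm_ref([[0.0, 0.0], [0.0, 0.0]], [identity, identity], [a, a]))
```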
torch/docs.py
Outdated
If :attr:`mat1` is a `n x m` Tensor and :attr:`mat2` is a `m x p` Tensor, then :attr:`out` and :attr:`mat` will be `n x p` Tensors.
`alpha` and `beta` are scaling factors on `mat1 @ mat2` and `mat` respectively.
:math:`out = (beta * mat) + (alpha * mat1 @ mat2)`
Args:
    beta (float, optional): multiplier for :attr:`mat`
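The formula above can be checked with a small plain-Python sketch over nested lists (`addmm_ref` is an illustrative name for the documented `out = beta * mat + alpha * (mat1 @ mat2)` semantics, not torch's implementation):

```python
def addmm_ref(mat, mat1, mat2, beta=1.0, alpha=1.0):
    """out[i][j] = beta * mat[i][j] + alpha * (mat1 @ mat2)[i][j]."""
    n, m, p = len(mat1), len(mat2), len(mat2[0])
    return [[beta * mat[i][j] + alpha * sum(mat1[i][k] * mat2[k][j]
                                            for k in range(m))
             for j in range(p)] for i in range(n)]

identity = [[1.0, 0.0], [0.0, 1.0]]
# beta * I + alpha * (mat1 @ I) == I + mat1
print(addmm_ref(identity, [[1.0, 2.0], [3.0, 4.0]], identity))
```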
torch/docs.py
Outdated
If :attr:`mat` is a `n x m` Tensor and :attr:`vec` is a 1D Tensor of size `m`, then :attr:`out` and :attr:`tensor` will be 1D of size `n`.
`alpha` and `beta` are scaling factors on `mat @ vec` and `tensor` respectively.
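A plain-Python sketch of the matrix-vector semantics being documented, `out = beta * tensor + alpha * (mat @ vec)` (`addmv_ref` is a hypothetical reference helper, not the real implementation):

```python
def addmv_ref(tensor, mat, vec, beta=1.0, alpha=1.0):
    """out[i] = beta * tensor[i] + alpha * dot(mat[i], vec)."""
    return [beta * t + alpha * sum(m_ij * v_j for m_ij, v_j in zip(row, vec))
            for t, row in zip(tensor, mat)]

# Row sums of the matrix, since vec is all ones and tensor is zero.
print(addmv_ref([0.0, 0.0], [[1.0, 2.0], [3.0, 4.0]], [1.0, 1.0]))
```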
If :attr:`vec1` is a vector of size `n` and :attr:`vec2` is a vector of size `m`, then :attr:`mat` must be a matrix of size `n x m`.
Args:
    beta (float, optional): multiplier for :attr:`mat`
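This describes an outer-product update, `out = beta * mat + alpha * (vec1 ⊗ vec2)`. A plain-Python sketch of that semantics (`addr_ref` is an illustrative name, not torch's implementation):

```python
def addr_ref(mat, vec1, vec2, beta=1.0, alpha=1.0):
    """out[i][j] = beta * mat[i][j] + alpha * vec1[i] * vec2[j]."""
    return [[beta * mat[i][j] + alpha * vec1[i] * vec2[j]
             for j in range(len(vec2))] for i in range(len(vec1))]

# Pure outer product of [1, 2] and [3, 4] when mat is zero.
print(addr_ref([[0.0, 0.0], [0.0, 0.0]], [1.0, 2.0], [3.0, 4.0]))
```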
torch/docs.py
Outdated
| """ | ||
| baddbmm(beta=1, mat, alpha=1, batch1, batch2, out=None) -> Tensor | ||
| Performs a batch matrix-matrix product of matrices stored in :attr:`batch1` and :attr:`batch2`. |
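Unlike `addbmm`, `baddbmm` keeps the batch dimension: each batch gets its own `out[k] = beta * mat[k] + alpha * (batch1[k] @ batch2[k])`. A plain-Python sketch of that per-batch semantics (`baddbmm_ref` is a hypothetical helper, not the real kernel):

```python
def baddbmm_ref(mat, batch1, batch2, beta=1.0, alpha=1.0):
    """out[k] = beta * mat[k] + alpha * batch1[k] @ batch2[k], batch preserved."""
    out = []
    for M, A, B in zip(mat, batch1, batch2):
        n, m, p = len(A), len(B), len(B[0])
        out.append([[beta * M[i][j] + alpha * sum(A[i][k] * B[k][j]
                                                  for k in range(m))
                     for j in range(p)] for i in range(n)])
    return out

identity = [[1.0, 0.0], [0.0, 1.0]]
a = [[1.0, 2.0], [3.0, 4.0]]
# One batch: zero bias plus I @ a gives back a, still wrapped in a batch dim.
print(baddbmm_ref([[[0.0, 0.0], [0.0, 0.0]]], [identity], [a]))
```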
torch/docs.py
Outdated
Args:
    input (Tensor): the input `Tensor`
    value (float): the number to be added to each element of :attr:`input`
.. autofunction:: addmm
.. autofunction:: addmv
.. autofunction:: addr
.. autofunction:: all
torch/docs.py
Outdated
Example:
    >>> torch.acos(torch.FloatTensor([1, -1]))
    FloatTensor([0.0000, 3.1416])
::
Example:
::

    >>> a = torch.randn(4)
Replaced `sum` with `mean` in line pytorch#392
fix typo in cheat sheet (pytorch#381)
Fix reduction heuristics so we don't recompile and we use the correct launch params. Co-authored-by: Kevin Stephano <kevin.stephano@gmail.com>
These scripts should probably point to the main repository where these are published. Will do a follow up PR to make it configurable later on. Signed-off-by: Eli Uriegas <eliuriegas@fb.com>
…Variable (pytorch#392)
* Implement UnspecializedPrimitiveVariable codegen
* Make UnspecializedPrimitiveVariable as GraphArg
* Update make_call_generated_code
* Update min/max builtin func
* Support random.random
* Remove unnecessary change
* Fix lint
* Refactor to support multiple random.random
* Refactor out unspecialized numpy and python variables
* Fix RandomValueSource guard
* Support multiple random functions
* Rebase to updated main
* Refactor out random_values_var
* Fix lint
* Fix lint
* Move random_values_var to output graph
* Add need_unwrap to distinguish unspec from x.item()
* Make global rand func unique
* Fix lint
* Add raw value propagation for unspec variables
* Fix lint
* Directly load type(raw_value) & update random func example value
* Fix lint
also, removing all, any stateless methods
Still left: