
Conversation

Contributor

@anjali411 anjali411 commented Jul 10, 2020

Stack from ghstack:

Differential Revision: D22482989

@anjali411 anjali411 requested review from gchanan and mruberry July 10, 2020 16:56
Example::
>>> torch.tensor([1.2, 3]).dtype # initial default for floating point is torch.float32
>>> # initial default for floating point is torch.float
Contributor

nit: this isn't a huge deal, but I think the previous wording was slightly better, because it matches what is returned for the next line. As it is now, people will wonder why torch.float and torch.float32 don't match exactly.

Contributor Author

ahh ok

>>> # initial default for floating point is torch.cfloat
>>> torch.tensor([1.2, 3j]).dtype
torch.complex64
>>> torch.set_default_tensor_type(torch.DoubleTensor)
Contributor

I don't think we want to document this -- the type interface isn't complete (e.g. there's no quantized, XLA) -- it's really logically deprecated.

Contributor Author

yeah that makes sense: updated to torch.set_default_dtype(torch.float64)
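A quick sketch of the two calls involved here, for context; the tensor literal is illustrative and not taken from the PR itself:

import torch

# Logically deprecated: the tensor-type interface also pins device/layout
# and has no entries for quantized or XLA tensors.
torch.set_default_tensor_type(torch.DoubleTensor)

# Preferred replacement: only changes the default floating point dtype.
torch.set_default_dtype(torch.float64)

torch.tensor([1.2, 3]).dtype   # torch.float64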

used as default floating point type for type inference in
:func:`torch.tensor`.
r"""Sets the default floating point dtype to :attr:`d`.
This type is also used:
Contributor

this says "also used" but it's sort of defined by how it's used, i.e. "sets the default floating point dtype" doesn't really mean anything outside of how it's used. I'd just change this to something like:
"This dtype is used as:"

Contributor Author

updated

1. As default floating point type for type inference in :func:`torch.tensor`.
2. To determine the default complex dtype. The default complex dtype is set to
``torch.complex128`` if default floating point tensor type is ``torch.float64``,
else it's set to ``torch.complex64``
Contributor

nit: else -> otherwise.

Contributor

is it not true that this is the corresponding complex dtype for the float default?

Contributor Author

that's true

:func:`torch.tensor`.
r"""Sets the default floating point dtype to :attr:`d`.
This type is also used:
1. As default floating point type for type inference in :func:`torch.tensor`.
Contributor

this could be a little clearer (I know it's coming from the prior writeup). But "default floating point type" isn't really defined yet. Maybe something like:
"As the dtype inferred for python floats" (and similar for complex).

Contributor Author

updated

@anjali411 anjali411 requested a review from gchanan July 10, 2020 17:56
r"""Sets the default floating point dtype to :attr:`d`.
This dtype is:
1. The inferred dtype for python floats in :func:`torch.tensor`.
2. Used to determine the default complex dtype. The default complex dtype is set to
Contributor

same issue with 1 -- this should say something like "the inferred dtype for python complex numbers"

Contributor Author

does this sound better?

Used to infer dtype for python complex numbers. The default complex dtype is set to
 ``torch.complex128`` if default floating point dtype is ``torch.float64``, otherwise it's set to ``torch.complex64``

Contributor

lgtm!

1. The inferred dtype for python floats in :func:`torch.tensor`.
2. Used to determine the default complex dtype. The default complex dtype is set to
``torch.complex128`` if default floating point tensor type is ``torch.float64``,
otherwise it's set to ``torch.complex64``
Contributor

I'm curious: why did you word it this way instead of saying something about the corresponding float dtype?

Contributor Author

The only reason I didn't frame it that way is because we don't have a complex dtype corresponding to torch.bfloat16 or torch.float16 (technically we do, but ComplexHalf support is almost non-existent).
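A minimal sketch of the float/complex pairing being discussed, assuming a fresh interpreter; it only restates the rule from the docstring:

import torch

torch.set_default_dtype(torch.float32)
torch.tensor([1.2, 3j]).dtype    # torch.complex64

torch.set_default_dtype(torch.float64)
torch.tensor([1.2, 3j]).dtype    # torch.complex128

# float16 and bfloat16 have no practically supported complex counterpart,
# which is why the text avoids the phrase "corresponding complex dtype".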

anjali411 added a commit that referenced this pull request Jul 10, 2020
ghstack-source-id: 736011b
Pull Request resolved: #41263

dr-ci bot commented Jul 11, 2020

💊 CI failures summary and remediations

As of commit 2030496 (more details on the Dr. CI page):


  • 2/2 failures possibly* introduced in this PR
    • 2/2 non-CircleCI failure(s)

ci.pytorch.org: 2 failed



This dtype is:
1. The inferred dtype for python floats in :func:`torch.tensor`.
2. Used to infer dtype for python complex numbers. The default complex dtype is set to
``torch.complex128`` if default floating point dtype is ``torch.float64``,
Collaborator

"if the default..."

:func:`torch.tensor`.
r"""Sets the default floating point dtype to :attr:`d`.
This dtype is:
1. The inferred dtype for python floats in :func:`torch.tensor`.
Collaborator
@mruberry mruberry Jul 11, 2020

This isn't quite right since it's the inferred type for Python floats in other contexts, too:

t = torch.tensor(5)                  # integer tensor (dtype torch.int64)
torch.set_default_dtype(torch.double)

(t + 5.).dtype                       # torch.float64
torch.full((2,), 2.).dtype           # torch.float64

Contributor Author

I think this falls under the torch.tensor case: when you add a scalar to a tensor, we first make a zero-dim tensor containing the scalar (hence the dtype of this new tensor will be torch.float64 in this case).

Collaborator

That's true in the first case, but it's also an implementation detail I wouldn't expect a user to be aware of. In the second case I don't think torch.tensor() is involved?

r"""Sets the default floating point dtype to :attr:`d`.
This dtype is:
1. The inferred dtype for python floats in :func:`torch.tensor`.
2. Used to infer dtype for python complex numbers. The default complex dtype is set to
Collaborator

You may want to make this a note for clarity:

"This setting also determines the inferred dtype of complex numbers. If the default floating point dtype is torch.float64 then complex numbers are inferred to have a dtype of torch.complex128, otherwise they are assumed to have a dtype of torch.complex64."

``torch.complex128`` if default floating point dtype is ``torch.float64``,
otherwise it's set to ``torch.complex64``
The default floating point dtype is initially ``torch.float32``.
Collaborator

This might also be noteworthy?

Example::
>>> torch.tensor([1.2, 3]).dtype # initial default for floating point is torch.float32
>>> # initial default for floating point is torch.float32
Collaborator

Following the above comment, maybe it'd be nice to have an example that isn't torch.tensor based?
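One possible shape for such an example, sketched under the assumption that factory functions and Python-scalar promotion consult the same default; the wording and values here are illustrative, not the text that landed:

import torch

torch.ones(2).dtype                  # torch.float32 (initial default)
torch.set_default_dtype(torch.float64)
torch.ones(2).dtype                  # torch.float64
(torch.tensor(5) + 1.5).dtype        # torch.float64, via Python-scalar type promotion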

@facebook-github-bot
Contributor

@anjali411 merged this pull request in e888c3b.

@facebook-github-bot facebook-github-bot deleted the gh/anjali411/43/head branch July 18, 2020 14:16
malfet pushed a commit that referenced this pull request Jul 22, 2020
Summary: Pull Request resolved: #41263

Test Plan: Imported from OSS

Differential Revision: D22482989

Pulled By: anjali411

fbshipit-source-id: 2aadfbb84bbab66f3111970734a37ba74d817ffd