
Conversation


@gmagogsfm gmagogsfm commented Jul 24, 2020

[2/N] Implement Enum JIT support

Add prim::EnumName and prim::EnumValue and their lowerings to support getting name and value attribute of Python enums.

Supported:
Enum-typed function arguments
Using Enum types and comparing enum values
Getting name/value attributes of enums

TODO:
Add Python sugared value for Enum
Support Enum-typed return values
Support enum values of different types in same Enum class
Support serialization and deserialization
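As a sketch of the user-facing behavior this enables: the `.name`/`.value` semantics that `prim::EnumName` and `prim::EnumValue` lower to are those of the standard Python `enum` module. The snippet below runs in plain Python (the `torch.jit.script` call is omitted, so this is only an illustration of the attribute semantics, not of the JIT itself):

```python
from enum import Enum

class Color(Enum):
    RED = 1
    GREEN = 2

# Functions like these are what the PR makes scriptable; the new
# prim::EnumName and prim::EnumValue ops implement the .name and
# .value attribute accesses below.
def enum_name(x: Color) -> str:
    return x.name

def enum_value(x: Color) -> int:
    return x.value

print(enum_name(Color.RED))     # RED
print(enum_value(Color.GREEN))  # 2
```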


@facebook-github-bot facebook-github-bot left a comment


@gmagogsfm has imported this pull request. If you are a Facebook employee, you can view this diff on Phabricator.

@facebook-github-bot facebook-github-bot added the oncall: jit Add this issue/PR to JIT oncall triage queue label Jul 24, 2020

@pytorch pytorch deleted a comment from dr-ci bot Jul 24, 2020
dr-ci bot commented Jul 24, 2020

💊 CI failures summary and remediations

As of commit 804329f (more details on the Dr. CI page):


💚 💚 Looks good so far! There are no failures yet. 💚 💚



@gmagogsfm gmagogsfm changed the title Add op EnumName and EnumValue Add prim::EnumName and prim::EnumValue ops Jul 24, 2020
@gmagogsfm gmagogsfm marked this pull request as ready for review July 24, 2020 01:56
@gmagogsfm gmagogsfm requested a review from apaszke as a code owner July 24, 2020 01:56

@SplitInfinity SplitInfinity left a comment


Very cool!


Suggested change
# TODO(gmagogsfm): Re-enanble hooks when serialization/deserialization
# TODO(gmagogsfm): Re-enable hooks when serialization/deserialization


Consider adding an assert here to check that e's type is an enum and that enum_val_type is an enum type. Actually, I think you can pass in only e and derive enum_val_type from it.

Contributor Author


Done.


Why did you choose to pass in enum_val_type explicitly instead of deriving it from e? Am I looking at an old version?

Contributor Author


Mostly to follow convention and to save one cast. But I guess the second reason is too minor and is voided by having an assert there anyway. Done.


Suggested change
def enum_comp(x: Color) -> int:
def enum_value(x: Color) -> int:

Comment on lines 95 to 98


Suggested change
scripted_enum_comp = torch.jit.script(enum_comp)
self.assertEqual(scripted_enum_comp(Color.RED), Color.RED.value)
self.assertEqual(scripted_enum_comp(Color.GREEN), Color.GREEN.value)
scripted_enum_value = torch.jit.script(enum_value)
self.assertEqual(scripted_enum_value(Color.RED), Color.RED.value)
self.assertEqual(scripted_enum_value(Color.GREEN), Color.GREEN.value)

Comment on lines 77 to 80


Suggested change
scripted_enum_comp = torch.jit.script(enum_comp)
self.assertEqual(scripted_enum_comp(Color.RED), Color.RED.name)
self.assertEqual(scripted_enum_comp(Color.GREEN), Color.GREEN.name)
scripted_enum_name = torch.jit.script(enum_name)
self.assertEqual(scripted_enum_name(Color.RED), Color.RED.name)
self.assertEqual(scripted_enum_name(Color.GREEN), Color.GREEN.name)


Suggested change
def enum_comp(x: Color) -> str:
def enum_name(x: Color) -> str:


The types of the IR values corresponding to the enum and the return value are narrower than AnyEnumType and Any. I think there is nothing we can do about AnyEnumType because those types are user-defined, but is there a restriction on the underlying storage type? Can we use that to define more specific operators? E.g.
prim::EnumValue(AnyEnumType enum) -> int
prim::EnumValue(AnyEnumType enum) -> float
prim::EnumValue(AnyEnumType enum) -> bool (lol)
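For illustration, the restriction being asked about does hold for homogeneous enums: each Enum class fixes one underlying value type, which is what would allow an EnumValue-style operator to be specialized per return type. A plain-Python sketch (the `Fraction` class is a hypothetical example, used only to show a non-int value type):

```python
from enum import Enum

class Color(Enum):      # int-valued enum
    RED = 1
    GREEN = 2

class Fraction(Enum):   # float-valued enum (hypothetical example)
    HALF = 0.5
    QUARTER = 0.25

# The .value type is fixed per class, so an EnumValue-style operator
# can be given a concrete return type instead of Any.
assert type(Color.RED.value) is int
assert type(Fraction.HALF.value) is float
```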

Contributor Author


This is a great point! Done.

Contributor


We're not including bool as a type here, but we are registering a kernel for it.

Contributor


FYI, we have two types of asserts: TORCH_CHECK and TORCH_INTERNAL_ASSERT. The first is for user errors and the second is for developer invariants. In this case, it would be a developer invariant. But you don't need either here, because we have e->type()->expect<EnumType>(), which will throw if it's not the right type.

Contributor Author


Done, replaced with expect()

Contributor


nit: remove the assert and use ->type()->expect

Contributor Author


Ah, thanks. Didn't know that. Done.

Contributor


We don't actually need this custom create logic, because there is only one possible schema for prim::EnumName, so you can use the normal schema-driven insert logic:

https://github.com/pytorch/pytorch/blob/master/torch/csrc/jit/ir/ir.h#L1181

Contributor Author


Agreed, but it is actually nice to have a method to create it: it's more discoverable, and we can enforce type checking here.

Contributor


This is at least worth a discussion, because duplicating the kernel has some binary-size implications. However, non-standard (unschematized) ops have some other complications around mobile and such, so I think this is probably worth it. @ljk53 what do you think?

@SplitInfinity

Contributor Author


What is the "binary implication" you are referring to with duplicated kernels?

Contributor


Each operator increases the size of the mobile build (if it's included in a mobile model). In this case, we could unify all of the prim::EnumValue operators, but at the cost of making it a non-standard op whose schema doesn't express its true types. I think our current decision is good.

@eellison
Contributor

LGTM!! I just have one comment for the mobile folks and a couple of nits. Might be good to hear from @ljk53 before continuing, but it's a pretty minor decision either way, so I think we're also fine to continue.

Contributor

@eellison eellison left a comment


LGTM!! This is kind of a minor note, but right now when we call unify_types we unify to Optional[int] etc., but we didn't register a kernel for those options. I think it's fine if we don't handle Optional; we can just throw an error in that case.



@gmagogsfm
Contributor Author

LGTM!! This is kind of a minor note, but right now when we call unify_types we unify to Optional[int] etc., but we didn't register a kernel for those options. I think it's fine if we don't handle Optional; we can just throw an error in that case.

Thanks for the review.

Do you mean this? I think it will become an error when constructing EnumType in the IR, so we should be safe, and the error should be clear enough.

Or do you mean something else?

@facebook-github-bot
Contributor

@gmagogsfm merged this pull request in 8e03c38.

@gmagogsfm gmagogsfm linked an issue Jul 27, 2020 that may be closed by this pull request

Labels

Merged oncall: jit Add this issue/PR to JIT oncall triage queue


Development

Successfully merging this pull request may close these issues.

Support Enum in TorchScript

5 participants