
Conversation

@supriyar (Contributor) commented Aug 19, 2020

Stack from ghstack:

Summary:
Use common config for float and quantized embedding_bag modules

Test Plan:
```
python -m pt.qembeddingbag_test

 Benchmarking PyTorch: qEmbeddingBag
 Mode: Eager
 Name: qEmbeddingBag_embeddingbags10_dim4_modesum_input_size8_offset0_sparseTrue_include_last_offsetTrue_cpu
 Input: embeddingbags: 10, dim: 4, mode: sum, input_size: 8, offset: 0, sparse: True, include_last_offset: True, device: cpu
Forward Execution Time (us) : 35.738

 Benchmarking PyTorch: qEmbeddingBag
 Mode: Eager
 Name: qEmbeddingBag_embeddingbags10_dim4_modesum_input_size8_offset0_sparseTrue_include_last_offsetFalse_cpu
 Input: embeddingbags: 10, dim: 4, mode: sum, input_size: 8, offset: 0, sparse: True, include_last_offset: False, device: cpu
Forward Execution Time (us) : 62.708

python -m pt.embeddingbag_test

 Benchmarking PyTorch: embeddingbag
 Mode: Eager
 Name: embeddingbag_embeddingbags10_dim4_modesum_input_size8_offset0_sparseTrue_include_last_offsetTrue_cpu
 Input: embeddingbags: 10, dim: 4, mode: sum, input_size: 8, offset: 0, sparse: True, include_last_offset: True, device: cpu
Forward Execution Time (us) : 46.878

 Benchmarking PyTorch: embeddingbag
 Mode: Eager
 Name: embeddingbag_embeddingbags10_dim4_modesum_input_size8_offset0_sparseTrue_include_last_offsetFalse_cpu
 Input: embeddingbags: 10, dim: 4, mode: sum, input_size: 8, offset: 0, sparse: True, include_last_offset: False, device: cpu
Forward Execution Time (us) : 103.904
```

Reviewers:

Subscribers:

Tasks:

Tags:

Differential Revision: D23245531
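
For context on the summary above: in the operator benchmark suite (the `pt.*` modules under `benchmarks/operator_benchmark` in the PyTorch repo), a "common config" means one configuration list that both the float (`pt.embeddingbag_test`) and quantized (`pt.qembeddingbag_test`) benchmarks consume, so they sweep identical attribute combinations. A minimal sketch of such a shared config is below; the variable and class names are illustrative, not necessarily the ones this PR uses.

```
# Sketch only: a shared EmbeddingBag config consumed by both the float and
# quantized benchmarks. Attribute names mirror the benchmark output above.
# Assumes it runs from benchmarks/operator_benchmark in the PyTorch repo.
import operator_benchmark as op_bench

embeddingbag_short_configs = op_bench.cross_product_configs(
    embeddingbags=[10],
    dim=[4],
    mode=['sum'],
    input_size=[8],
    offset=[0],
    sparse=[True],
    include_last_offset=[True, False],
    device=['cpu'],
    tags=['short'],
)

# Each test module would then generate its benchmarks from the same list,
# e.g. (the benchmark class names here are hypothetical):
# op_bench.generate_pt_test(embeddingbag_short_configs, EmbeddingBagBenchmark)   # float
# op_bench.generate_pt_test(embeddingbag_short_configs, QEmbeddingBagBenchmark)  # quantized
```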

supriyar added a commit that referenced this pull request Aug 19, 2020
ghstack-source-id: 2644726
Pull Request resolved: #43296

@dr-ci bot commented Aug 19, 2020

💊 CI failures summary and remediations

As of commit 77a941e (more details on the Dr. CI page):

None of the CI failures appear to be your fault 💚

🚧 1 ongoing upstream failure:

These were probably caused by upstream breakages that are not fixed yet.

@vkuzo (Contributor) left a comment:

thanks for merging it with floating point!

```
embedding_dim=dim,
mode=mode,
include_last_offset=include_last_offset).to(device=device)
numpy.random.seed((1 << 32) - 1)
```

A reviewer (Contributor) commented on the `numpy.random.seed((1 << 32) - 1)` line:

just curious, what's the context on this line?

@supriyar (Contributor, Author) replied:
It was used in the embeddingbag_test, so I did the same here for consistency.
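
For readers outside the thread: seeding NumPy's RNG with a fixed value before drawing the benchmark inputs makes the generated lookup indices identical on every run, so the float and quantized benchmarks (which now share one config) are timed on the same inputs. Below is a rough, self-contained sketch of that setup pattern based on the snippet above; the shapes and values are illustrative, taken from the config in the test plan rather than copied from the PR.

```
import numpy
import torch

# Fixed seed (same value as in the snippet above) so every run draws the
# same lookup indices, keeping timings reproducible and comparable.
numpy.random.seed((1 << 32) - 1)

# Illustrative values matching the benchmark config: an embedding table with
# 10 rows of dim 4, 8 lookups, mode='sum', include_last_offset=True.
embeddingbags, dim, input_size = 10, 4, 8
embedding = torch.nn.EmbeddingBag(
    num_embeddings=embeddingbags,
    embedding_dim=dim,
    mode='sum',
    include_last_offset=True,
)

indices = torch.from_numpy(
    numpy.random.randint(0, embeddingbags, size=input_size)).long()
# With include_last_offset=True, offsets also carries the final end offset.
offsets = torch.tensor([0, input_size], dtype=torch.long)

out = embedding(indices, offsets)  # shape: (1, dim) -- one bag summed over 8 lookups
```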

supriyar added a commit that referenced this pull request Aug 20, 2020
ghstack-source-id: 079cf2f
Pull Request resolved: #43296

@facebook-github-bot (Contributor) commented:

This pull request has been merged in 7024ce8.

@facebook-github-bot deleted the gh/supriyar/166/head branch on August 28, 2020 at 14:16.