
Conversation

@zou3519
Contributor

@zou3519 zou3519 commented Mar 12, 2018

Fixes #5671

The reported issue is that on the CPU path, norm(value, dim) is slower than manually using pow, sqrt, and summing.

It turns out that the CPU path for norm(value, dim) is missing optimizations for the value = 1 and value = 2 cases. I added those, as well as an optimization for value = 3 (not sure whether that one is necessary, but the same specialization is used for tensor.pow(3)).
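
For reference, these special cases follow directly from the general p-norm definition, (sum_i |x_i|^p)^(1/p). A minimal sketch of the equivalence the fast paths rely on (the tensor x and exponent p here are illustrative, not taken from the patch):

```python
import torch

x = torch.randn(1024, 256)

# General p-norm along dim=1: (sum_i |x_i|^p) ** (1/p)
p = 3
manual = x.abs().pow(p).sum(1).pow(1.0 / p)
print(torch.allclose(torch.norm(x, p, 1), manual))  # True up to float32 rounding

# The cases that now get specialized CPU kernels:
l1 = x.abs().sum(1)            # p = 1: sum of absolute values
l2 = x.pow(2).sum(1).sqrt()    # p = 2: sqrt of the sum of squares
print(torch.allclose(torch.norm(x, 1, 1), l1))
print(torch.allclose(torch.norm(x, 2, 1), l2))
```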

@li-roy could you take a look?

Perf numbers:


```
In [1]: import torch
   ...: x = torch.randn(1024, 256)
   ...: y = torch.randn(1024, 256)
   ...:
   ...: %timeit torch.norm(x-y, 1, 1)
   ...: %timeit (x-y).sum(1)
   ...:
   ...: %timeit torch.norm(x-y, 2, 1)
   ...: %timeit torch.sqrt((x-y).pow(2).sum(1))
   ...:
   ...: %timeit torch.norm(x-y, 3, 1)
   ...: %timeit torch.pow((x - y).abs().pow(3).sum(1), 1/3)
   ...:
362 µs ± 56.4 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
332 µs ± 33.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

340 µs ± 8.42 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
358 µs ± 5.87 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)

352 µs ± 4.55 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
691 µs ± 49.2 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
```

Contributor

@apaszke apaszke left a comment

Does CUDA have optimized implementations?

@zou3519
Contributor Author

zou3519 commented Mar 12, 2018

Yup!

@JaeDukSeo

Wait, so is this fixed?

@soumith
Contributor

soumith commented Apr 19, 2019

@JaeDukSeo yes


Development

Successfully merging this pull request may close these issues.

torch.norm(x-y, 2, 1) is significantly slower than torch.sqrt((x - y).pow(2).sum(1))
