
Conversation

@ssnl (Collaborator) commented Mar 17, 2018

This is the first of three PRs that #5537 will be split into.

This PR adds the MKL headers to the included files and provides helper functions for MKL FFT and cuFFT.
In particular, on POSIX the headers come from the mkl-include conda package, and on Windows they come from a new file that @yf225 and I made and uploaded to S3.
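
As a rough illustration of the description above, here is a minimal sketch of what a compile-time MKL flag and a runtime availability query might look like; the macro and function names are illustrative assumptions, not necessarily the ones this PR introduces.

```cpp
// Minimal sketch, assuming the build system defines AT_MKL_ENABLED=1 when the
// MKL headers (e.g. from conda's mkl-include package) are found.
#ifndef AT_MKL_ENABLED
#define AT_MKL_ENABLED 0
#endif

#if AT_MKL_ENABLED
#include <mkl_dfti.h>  // MKL's DFTI (FFT) interface, used by the FFT helpers
#endif

namespace at {

// Runtime query: true only if the library was compiled against the MKL headers.
inline bool hasMKL() {
#if AT_MKL_ENABLED
  return true;
#else
  return false;
#endif
}

}  // namespace at
```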

cc @soumith @apaszke @ezyang

@ezyang (Contributor) commented Mar 19, 2018

Are the MKL implementations always superior, or do you want to turn them off sometimes, in the same way as cudnn.enabled? Maybe this is moot because we'll come up with a new dispatching strategy in C10.

@ssnl (Collaborator, Author) commented Mar 19, 2018

@ezyang Currently there is only an MKL implementation of FFT. I looked into writing a naive, inefficient FFT for the case where MKL is unavailable, but it was a bit tricky and very slow. Since we ship with MKL anyway, I think this should be fine for now.
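
To illustrate the design choice described above, here is a minimal sketch (with hypothetical names) of a CPU FFT entry point that errors out rather than falling back to a naive implementation when MKL is unavailable:

```cpp
// Sketch only: the function and macro names are assumptions for illustration.
#include <stdexcept>

#ifndef AT_MKL_ENABLED
#define AT_MKL_ENABLED 0
#endif

void cpu_fft(/* tensor arguments omitted */) {
#if AT_MKL_ENABLED
  // ... set up an MKL DFTI descriptor and run the transform here ...
#else
  // No naive fallback: a build without MKL fails loudly at the call site.
  throw std::runtime_error("fft: ATen not compiled with MKL support");
#endif
}
```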

@ezyang merged commit 22ef8e5 into pytorch:master Mar 19, 2018
@ejoebstl (Contributor) commented Mar 19, 2018

I'm afraid this broke the build from source. It seems that my MKL library was found, but there is still a variable that is not set.

-- Checking for [mkl_gf_lp64 - mkl_gnu_thread - mkl_core - gomp - pthread - m - dl]
--   Library mkl_gf_lp64: /home/emi/anaconda2/lib/libmkl_gf_lp64.so
--   Library mkl_gnu_thread: /home/emi/anaconda2/lib/libmkl_gnu_thread.so
--   Library mkl_core: /home/emi/anaconda2/lib/libmkl_core.so
--   Library gomp: -fopenmp
--   Library pthread: /usr/lib/x86_64-linux-gnu/libpthread.so
--   Library m: /usr/lib/x86_64-linux-gnu/libm.so
--   Library dl: /usr/lib/x86_64-linux-gnu/libdl.so
-- MKL library found
-- Found a library with BLAS API (mkl).
-- Found a library with LAPACK API (mkl).
-- Found cuDNN: v6.0.21  (include: /usr/local/cudnn/include, library: /usr/local/cudnn/lib64/libcudnn.so.6)
-- Could NOT find NNPACK (missing:  NNPACK_INCLUDE_DIR NNPACK_LIBRARY CPUINFO_LIBRARY PTHREADPOOL_LIBRARY) 
-- NNPACK not found. Compiling without NNPACK support
-- Using python found in /home/emi/anaconda2/bin/python
disable contrib because ATEN_NO_CONTRIB is set
CMake Error: The following variables are used in this project, but they are set to NOTFOUND.
Please set them or make sure they are set and tested correctly in the CMake files:
MKL_INCLUDE_DIR (ADVANCED)

Update: the problem was the new mkl-include dependency, which can only be installed with a newer conda version than the one I was using. A more descriptive error message for this case might be useful.

ezyang added a commit that referenced this pull request Mar 19, 2018
soumith added a commit that referenced this pull request Mar 19, 2018
soumith added a commit that referenced this pull request Mar 19, 2018
…simple") moving average" (#5892)

* Revert "Port ATen and JIT C++ tests to Catch2 (#5788)"

This reverts commit 6f80023.

* Revert "Fix error message for cat-ing zero-dim tensors (#5819)"

This reverts commit cf2e176.

* Revert "Softmax symbolic should account for negative dim (#5846)"

This reverts commit ba64724.

* Revert "[fft][1 of 3] build system and helpers to support cuFFT and MKL (#5855)"

This reverts commit 22ef8e5.

* Revert "Don't modify requires_grad when running DataParallel in no_grad mode (#5880)"

This reverts commit d11b7fb.

* Revert "fix some methods not showing up in doc (#5882)"

This reverts commit 24fca0e.

* Revert "ReduceOps cleanup and set_num_threads (#5723)"

This reverts commit 84400d5.

* Revert "introduce shape_as_tensor and reshape_from_variable_shape (#5824)"

This reverts commit f446b82.

* Revert "Enable resetting of batchnorm running moments and cumulative ("simple") moving average (#5766)"

This reverts commit 99b1f6c.
@ssnl deleted the fft_build branch March 19, 2018 22:19
jekbradbury pushed a commit to jekbradbury/pytorch that referenced this pull request Mar 21, 2018
…rch#5855)

* add mkl-include to required packages

* include MKL headers; add AT_MKL_ENABLED flag; add a method to query MKL availability

* Add MKL and CUFFT helpers
jekbradbury pushed a commit to jekbradbury/pytorch that referenced this pull request Mar 21, 2018
…simple") moving average" (pytorch#5892)

* Revert "Port ATen and JIT C++ tests to Catch2 (pytorch#5788)"

This reverts commit 6f80023.

* Revert "Fix error message for cat-ing zero-dim tensors (pytorch#5819)"

This reverts commit cf2e176.

* Revert "Softmax symbolic should account for negative dim (pytorch#5846)"

This reverts commit ba64724.

* Revert "[fft][1 of 3] build system and helpers to support cuFFT and MKL (pytorch#5855)"

This reverts commit 22ef8e5.

* Revert "Don't modify requires_grad when running DataParallel in no_grad mode (pytorch#5880)"

This reverts commit d11b7fb.

* Revert "fix some methods not showing up in doc (pytorch#5882)"

This reverts commit 24fca0e.

* Revert "ReduceOps cleanup and set_num_threads (pytorch#5723)"

This reverts commit 84400d5.

* Revert "introduce shape_as_tensor and reshape_from_variable_shape (pytorch#5824)"

This reverts commit f446b82.

* Revert "Enable resetting of batchnorm running moments and cumulative ("simple") moving average (pytorch#5766)"

This reverts commit 99b1f6c.
wuhuikx pushed a commit to wuhuikx/pytorch that referenced this pull request Jan 30, 2020