
I would like some help understanding exactly what I have done and why my code isn't running as I would expect.

I have started to use joblib to try and speed up my code by running a (large) loop in parallel.

I am using it like so:

import numpy as np
from joblib import Parallel, delayed

def frame(indeces, image_pad, m):
    # Extract the three orthogonal m x m patches (XY, XZ, YZ) around one index.
    XY_Patches = np.float32(image_pad[indeces[0]:indeces[0]+m, indeces[1]:indeces[1]+m,  indeces[2]])
    XZ_Patches = np.float32(image_pad[indeces[0]:indeces[0]+m, indeces[1],                  indeces[2]:indeces[2]+m])
    YZ_Patches = np.float32(image_pad[indeces[0],                 indeces[1]:indeces[1]+m,  indeces[2]:indeces[2]+m])

    return XY_Patches, XZ_Patches, YZ_Patches


def Patch_triplanar_para(image_path, patch_size):

    Image, Label, indeces = Sampling(image_path)

    n = (patch_size - 1) // 2   # integer half-width; np.pad needs an int pad_width
    m = patch_size

    image_pad = np.pad(Image, pad_width=n, mode='constant', constant_values=0)

    A = Parallel(n_jobs=1)(delayed(frame)(i, image_pad, m) for i in indeces)
    A = np.array(A)
    Label = np.float32(Label.reshape(len(Label), 1))
    R, T, Y = np.hsplit(A, 3)

    return R, T, Y, Label

I have been experimenting with "n_jobs", expecting that increasing it would speed up my function. However, as I increase n_jobs things slow down quite significantly. Running this code without "Parallel" is slower than Parallel with n_jobs=1, but as soon as I increase the number of jobs beyond 1 it gets slower again.

Why is this the case? I understood that the more jobs I run, the faster the script would be. Am I using this wrong?

Thanks!

  • First, how many CPUs or cores do you have in the computer you run this on? Second, n_jobs sets the maximum number of concurrently running jobs. Have you tried n_jobs=-1? That should use all the CPUs in your computer. Third, how long is the indeces list your for loop iterates over? Commented Jan 7, 2017 at 11:26
  • I have 24 cores and a huge amount of memory. indeces has roughly 10,000 entries, so I had thought this would be a good thing to parallelise. I can try n_jobs=-1 and report back. Commented Jan 10, 2017 at 14:02
  • Yes. I can imagine that as you increase n_jobs from 1 towards the maximum (n_jobs=23, n_jobs=-1) you will reach a point where incrementing it adds more overhead than benefit, so you have to find a sweet spot (a timing sweep like the sketch after these comments can help). Of course, using backend="threading" might be better, but you have to experiment. Commented Jan 10, 2017 at 14:54
  • Then, I would like to suggest this SO post, http://stackoverflow.com/questions/21027477/joblib-parallel-multiple-cpus-slower-than-single, which has really good answers, one of them directly from the joblib author, although it might be obsolete... Commented Jan 10, 2017 at 15:14
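
A minimal sketch of that kind of timing sweep, reusing frame, image_pad, m and indeces from the question (the useful n_jobs values depend on your machine):

import time
from joblib import Parallel, delayed

# Time the same workload for a few worker counts to find the sweet spot.
for n_jobs in (1, 2, 4, 8, 16, -1):
    start = time.time()
    Parallel(n_jobs=n_jobs)(delayed(frame)(i, image_pad, m) for i in indeces)
    print(n_jobs, time.time() - start)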

2 Answers


Maybe your problem is caused by image_pad being a large array. In your code you are using the default multiprocessing backend of joblib. This backend creates a pool of workers, each of which is a separate Python process. The input data to the function is then copied n_jobs times and sent to each worker in the pool, which can lead to serious overhead. Quoting from joblib's docs:

By default the workers of the pool are real Python processes forked using the multiprocessing module of the Python standard library when n_jobs != 1. The arguments passed as input to the Parallel call are serialized and reallocated in the memory of each worker process.

This can be problematic for large arguments as they will be reallocated n_jobs times by the workers.

As this problem can often occur in scientific computing with numpy based datastructures, joblib.Parallel provides a special handling for large arrays to automatically dump them on the filesystem and pass a reference to the worker to open them as memory map on that file using the numpy.memmap subclass of numpy.ndarray. This makes it possible to share a segment of data between all the worker processes.

Note: The following only applies with the default "multiprocessing" backend. If your code can release the GIL, then using backend="threading" is even more efficient.

So if this is your case, you should either switch to the threading backend, if you are able to release the global interpreter lock when calling frame, or switch to joblib's shared-memory approach.
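
For instance, a minimal sketch of the threading switch, reusing frame, image_pad, m and indeces from the question (it only pays off if the heavy NumPy work inside frame releases the GIL):

from joblib import Parallel, delayed

# Threads share memory, so image_pad is not copied to each worker.
A = Parallel(n_jobs=-1, backend="threading")(
    delayed(frame)(i, image_pad, m) for i in indeces
)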

The docs say that joblib provides an automated memmap conversion that could be useful.
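
A minimal sketch of that shared-memory route (the filename is hypothetical; joblib's automatic conversion of large arguments is controlled by the max_nbytes argument of Parallel):

from joblib import Parallel, delayed, dump, load

# Dump the large array once, then hand each worker a read-only memory map
# instead of a full copy.
dump(image_pad, 'image_pad.joblib')                    # hypothetical filename
image_pad_mm = load('image_pad.joblib', mmap_mode='r')

A = Parallel(n_jobs=-1)(
    delayed(frame)(i, image_pad_mm, m) for i in indeces
)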



It's quite possible that the problem you are running up against is fundamental to the nature of the Python interpreter.

If you read https://www.ibm.com/developerworks/community/blogs/jfp/entry/Python_Is_Not_C?lang=en, you can see from a professional who specialises in optimising and parallelising Python code that iterating through large loops is an inherently slow operation for a Python thread to perform. Therefore, spawning more processes that loop through arrays is only going to slow things down.

However - there are things that can be done.

The Cython and Numba compilers are both designed to optimise code written in a C/C++ style (i.e. your case). In particular, Numba's @vectorize decorator allows a scalar function to be compiled and applied element-wise to large arrays in a parallel manner (target='parallel'); see the sketch below.
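
A minimal, hypothetical @vectorize sketch (an element-wise toy function, not the question's patch extraction; assumes Numba is installed):

import numpy as np
from numba import vectorize

# Explicit signatures are required for target='parallel'; the compiled
# function is applied element-wise across multiple threads.
@vectorize(['float32(float32, float32)'], target='parallel')
def scaled_sum(a, b):
    return 0.5 * (a + b)

x = np.random.rand(1000000).astype(np.float32)
y = np.random.rand(1000000).astype(np.float32)
z = scaled_sum(x, y)   # runs in parallel across cores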

I don't understand your code well enough to adapt it directly, but try these! Used in the correct ways, these compilers have brought me speed increases of 3,000,000% for parallel processes in the past!

