Multiprocessing a for loop in Python
I have a program that currently takes a very long time to run, since it processes a large number of files. I was hoping to run it across all 12 processors on my computer at once to decrease the run time. I've been trying to get this to work for a while now, but something seems to go wrong whenever I try running it. My program looks something like this before I tried to introduce multiprocessing:
from os import listdir
from os.path import isfile, join
import gc
import numpy as np
import xarray as xr

files = [file for file in listdir('data/') if isfile(join('data/', file))]

for file in files:
    filename = file
    # path and run_and_save are defined elsewhere in my script
    ir = xr.open_dataset(path + filename)
    pic_nr = np.unique(ir.pic)[0]
    image_lst = ir.time.searchsorted(
        ir.where(ir.pic == pic_nr, drop=True).time
    )
    run_and_save(image_lst[0:6], pic_nr)
    gc.collect()
Essentially, I want the body of the for-loop to run on several processes at once, with each process working on a different file from the list files, but I can't seem to get it right even after reading some guides. Would anyone know the quickest way to get this working correctly?
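For reference, here is a minimal sketch of what I understand the multiprocessing version should look like, based on the guides I've read. It uses multiprocessing.Pool; process_file is just a name I've made up for a wrapper around the loop body, and path and run_and_save are assumed to be defined elsewhere, as in my original script:

from multiprocessing import Pool
from os import listdir
from os.path import isfile, join
import gc
import numpy as np
import xarray as xr

def process_file(filename):
    # Body of the original loop, applied to a single file.
    # path and run_and_save are assumed to exist as in the original script.
    ir = xr.open_dataset(path + filename)
    pic_nr = np.unique(ir.pic)[0]
    image_lst = ir.time.searchsorted(
        ir.where(ir.pic == pic_nr, drop=True).time
    )
    run_and_save(image_lst[0:6], pic_nr)
    gc.collect()

if __name__ == "__main__":
    files = [file for file in listdir('data/') if isfile(join('data/', file))]
    # Hand each worker one file at a time until the list is exhausted
    with Pool(processes=12) as pool:
        pool.map(process_file, files)

My understanding is that the if __name__ == "__main__": guard is needed so that worker processes don't re-run the pool-creation code when they import the script, but I'm not sure whether I'm placing it correctly.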