60 seconds for a job is very short-lived; you should probably "pack" benchmark runs together into a single submission script, for instance by algorithm, with a submission script like this (4 CPUs used for each benchmark):
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=...
#SBATCH ...
module load ...
algorithm="thealgorithm"
hyperparametersvalues=(0.1 1 10 1000)
files=(data/*)
for hyper in "${hyperparametersvalues[@]}"
do
    for file in "${files[@]}"; do
        ./benchmark_script "$algorithm" --hyperparameter="$hyper" "$file"
    done
done
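Since each individual benchmark only takes about a minute, size the walltime of the packed job accordingly, for instance with something like the following (the one-minute-per-run figure is just the estimate above; adjust it to your own measurements):
# rough walltime: (number of hyperparameter values) x (number of data files) x ~1 minute, plus some margin
#SBATCH --time=...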
If you have access to GNU parallel, you can rewrite it like this, which easily allows running benchmarks for the same algorithm in parallel (on a single node):
#!/bin/bash
#SBATCH --ntasks=10
#SBATCH --cpus-per-task=4
#SBATCH --nodes=1-1
#SBATCH --mem-per-cpu=...
#SBATCH ...
module load ...
algorithm="thealgorithm"
hyperparametersvalues=(0.1 1 10 1000)
files=(data/*)
parallel -P $SLURM_NTASKS ./benchmark_script $algorithm --hyperparameter={1} {2} ::: "${hyperparametersvalues[@]}" ::: "${files[@]}"
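If a packed job risks hitting its walltime, GNU parallel can also keep track of which combinations already ran: with --joblog (and --resume, where supported), a resubmission of the same script skips the runs that already completed. The log file name below is just an example:
parallel -P $SLURM_NTASKS --joblog runs.log --resume ./benchmark_script $algorithm --hyperparameter={1} {2} ::: "${hyperparametersvalues[@]}" ::: "${files[@]}"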
If you do not have GNU parallel, you can achieve the same effect with & and wait in the loop.
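A minimal sketch of that approach, reusing the #SBATCH header and variables from the script above (note that wait -n requires Bash 4.3 or newer; with an older Bash, drop the throttling and keep only the final wait):
# same #SBATCH header and module load as in the GNU parallel script above
algorithm="thealgorithm"
hyperparametersvalues=(0.1 1 10 1000)
files=(data/*)
for hyper in "${hyperparametersvalues[@]}"; do
    for file in "${files[@]}"; do
        ./benchmark_script "$algorithm" --hyperparameter="$hyper" "$file" &
        # throttle: keep at most $SLURM_NTASKS benchmarks running at once
        while (( $(jobs -rp | wc -l) >= SLURM_NTASKS )); do
            wait -n
        done
    done
done
wait    # wait for the remaining benchmarks before the job ends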
You can also use multiple nodes, and drop the --nodes=1-1 constraint, by prefixing the command passed to parallel with srun --exact ..., as in the sketch below.
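A rough sketch of that multi-node variant, assuming your Slurm version supports srun --exact and exports SLURM_CPUS_PER_TASK (otherwise pass the value explicitly):
#!/bin/bash
#SBATCH --ntasks=10
#SBATCH --cpus-per-task=4
#SBATCH --mem-per-cpu=...
#SBATCH ...
module load ...
algorithm="thealgorithm"
hyperparametersvalues=(0.1 1 10 1000)
files=(data/*)
# each parallel slot launches one job step; Slurm places it on any node of the allocation
parallel -P $SLURM_NTASKS srun --exact --ntasks=1 --cpus-per-task=$SLURM_CPUS_PER_TASK ./benchmark_script $algorithm --hyperparameter={1} {2} ::: "${hyperparametersvalues[@]}" ::: "${files[@]}"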
You can also create a job array, with the algorithm as the array parameter:
#!/bin/bash
#SBATCH --ntasks=10
#SBATCH --cpus-per-task=4
#SBATCH --nodes=1-1
#SBATCH --mem-per-cpu=...
#SBATCH ...
#SBATCH --array=0-2
module load ...
algorithms=(thealgorithm thesecondalgorithm thethirdalgorithm)
algorithm=${algorithms[$SLURM_ARRAY_TASK_ID]}
hyperparametersvalues=(0.1 1 10 1000)
files=(data/*)
parallel -P $SLURM_NTASKS ./benchmark_script $algorithm --hyperparameter={1} {2} ::: "${hyperparametersvalues[@]}" ::: "${files[@]}"
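Optionally, give each array task its own log file so the three algorithms do not write to the same output; %A expands to the array job ID and %a to the array task index (the file name pattern is just an example):
#SBATCH --output=benchmark_%A_%a.out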