Example Slurm Job Script

Audience: Faculty, Postdocs, Researchers, Staff and Students

Last Updated: March 27, 2023

This is an example Slurm job script for the 28-core queues:

#!/bin/bash
#
#SBATCH --job-name=test
#SBATCH --output=res.txt
#SBATCH --ntasks-per-node=28
#SBATCH --nodes=2
#SBATCH --time=05:00
#SBATCH -p short-28core
#SBATCH --mail-type=BEGIN,END
#SBATCH --mail-user=jane.smith@stonybrook.edu

# load the Intel compiler, MKL, and MPI modules
module load intel/compiler/64/2017/17.0.0
module load intel/mkl/64/2017/0.098
module load intel/mpi/64/2017/0.098

cd /gpfs/projects/samples/intel_mpi_hello/

# compile the sample MPI program with the Intel MPI compiler wrapper
mpiicc mpi_hello.c -o intel_mpi_hello

# run the resulting executable across all allocated MPI tasks
mpirun ./intel_mpi_hello

This job requests 2 nodes with 28 tasks per node for 5 minutes in the short-28core queue. It compiles the sample program mpi_hello.c and then runs the resulting intel_mpi_hello executable.

If we named this script "test.slurm", we could submit the job using the following command:

sbatch test.slurm
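
When the job is accepted, sbatch prints the ID that Slurm assigned to it, for example "Submitted batch job 123456" (this number is made up). The job ID can then be used to check on the job or cancel it:

# list your own pending and running jobs
squeue -u $USER

# cancel the job by its ID
scancel 123456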

The output written to res.txt would look like this:

Hello world from processor sn088, rank 0 out of 56 processors
Hello world from processor sn111, rank 28 out of 56 processors
Hello world from processor sn088, rank 1 out of 56 processors
Hello world from processor sn111, rank 29 out of 56 processors
Hello world from processor sn088, rank 2 out of 56 processors
Hello world from processor sn111, rank 30 out of 56 processors
Hello world from processor sn088, rank 5 out of 56 processors
Hello world from processor sn111, rank 31 out of 56 processors
...

The processor names (sn088 and sn111) will vary depending on which two nodes the job is using.
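
If you would like the job to record which nodes it actually ran on, one option is to print the allocated node list from inside the job script; Slurm exposes it in the SLURM_JOB_NODELIST environment variable:

# print the nodes allocated to this job (add to the job script)
echo "Running on nodes: $SLURM_JOB_NODELIST"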

This is an example Slurm job script for the GPU queues:

#!/bin/bash
#
#SBATCH --job-name=test-gpu
#SBATCH --output=res.txt
#SBATCH --ntasks-per-node=28
#SBATCH --nodes=2
#SBATCH --time=05:00
#SBATCH -p gpu
#SBATCH --mail-type=BEGIN,END
#SBATCH --mail-user=jane.smith@stonybrook.edu

# load Anaconda, the CUDA toolkit, and cuDNN
module load anaconda/3
module load cuda102/toolkit/10.2
module load cudnn/7.4.5

# activate the conda environment that provides TensorFlow
source activate tensorflow2-gpu

cd /gpfs/projects/samples/tensorflow

python tensor_hello3.py
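
To confirm that TensorFlow can actually see the GPUs on the allocated node, a quick sanity check (assuming the tensorflow2-gpu environment provides TensorFlow 2.1 or later) is to run a one-liner like this before the main script:

# list the GPUs visible to TensorFlow (illustrative check, not part of the sample)
python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"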

Breakdown:

The directive 

#SBATCH --job-name=test-gpu

gives the name "test-gpu" to your job.
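
The job name is mainly a label for your own bookkeeping; it appears in the NAME column of squeue output, and you can filter on it, for example:

# show only jobs with this name
squeue --name=test-gpu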

The directives 

#SBATCH --ntasks-per-node=28
#SBATCH --nodes=2
#SBATCH --time=05:00

indicate that we are requesting 2 nodes, and we will run 28 tasks per node for 5 minutes.
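
In total that is 2 × 28 = 56 tasks, which is why the MPI example above reports 56 processors. The same options can also be given directly on the sbatch command line, where they override the values in the script, for example:

sbatch --nodes=2 --ntasks-per-node=28 --time=05:00 test.slurm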

The directive 

#SBATCH -p gpu

indicates to the batch scheduler that you want to use the GPU queue.
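
To see which partitions (queues) are available on the cluster, along with their time limits and node counts, you can summarize them with sinfo, for example:

# one-line summary of each partition
sinfo -s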

The mail-related directives

#SBATCH --mail-type=BEGIN,END
#SBATCH --mail-user=jane.smith@stonybrook.edu

control whether (and when) the user should be notified via email of changes to the job state. In this example, --mail-type=BEGIN,END indicates that an email should be sent when the job starts and when it finishes.

Other useful mail-type options include:

  • FAIL (email upon job failure)
  • ALL (email for all state changes).

Note that emails will only be sent to "stonybrook.edu" addresses.
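
For long-running jobs it is often enough to be notified only on failure; a minimal variant of the mail directives would be:

#SBATCH --mail-type=FAIL
#SBATCH --mail-user=jane.smith@stonybrook.edu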

 

All of these directives are passed straight to the sbatch command, so for a full list of options consult the sbatch manual page by running:

man sbatch
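
Related commands such as squeue, scancel, and sacct have manual pages of their own. For example, sacct can report how long a finished job ran and how it ended (the job ID below is made up):

# show elapsed time and final state for a completed job
sacct -j 123456 --format=JobID,JobName,Elapsed,State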

 

For more information on Slurm, please also see the official documentation.
