Interactive shell
Requesting an Interactive Session in Slurm
Slurm allows you to run an interactive shell on the compute nodes. First, make sure the slurm module is loaded:
module load slurm
To enter an interactive session, use the srun command with the --pty flag. At a minimum, provide the following options to srun to start the interactive shell:
srun -p <queue> --pty bash
You can pass the same additional options to srun as you would in your Slurm job script files. Some useful options are:
- -N <# of nodes>
- -t <hh:mm:ss> (walltime)
- -n <# of tasks>
For an interactive job using 1 node and 28 tasks with an 8-hour runtime on the gpu queue, this would look like:
srun -N 1 -n 28 -t 8:00:00 -p gpu --pty bash
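Once the prompt returns, you can confirm that you are on a compute node rather than the login node. A minimal check (hostnames and job IDs vary by site; the variables below are standard Slurm environment variables, shown as "not set" if you run this outside a job):

```shell
hostname                          # should print a compute node name, not the login node
echo "${SLURM_JOB_ID:-not set}"   # set by Slurm inside an allocation
echo "${SLURM_NTASKS:-not set}"   # number of tasks granted via -n
```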
Note that sessions created with srun will not allow X11 tunneling. Please see this page for instructions on creating an X11-enabled interactive session.
Running an Interactive Job
Once the interactive shell starts, you are no longer on the login node; your session runs on one of the allocated compute nodes.
All of your environment variables from the login node will be copied to your interactive shell (just as when you submit a job). This means all of your modules will still be loaded and you will remain in the same working directory as before. You can immediately run your program for testing:
mpicc source_code.c -o my_program
mpirun ./my_program <command_line_args>
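Before running anything, you can verify that your environment carried over from the login node. A quick sketch (the `module` command is assumed to be available on your system; the guards let the snippet run even where it is not):

```shell
pwd                                                        # same working directory as on the login node
command -v module >/dev/null 2>&1 && module list || true   # modules loaded on the login node should still appear
env | grep '^SLURM_' || true                               # job variables set by Slurm (empty outside a job)
```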
All output sent to stdout is printed directly to the terminal unless redirected elsewhere. For more information on handling output, see this FAQ page.
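Output can be captured with ordinary shell redirection. A sketch (here `echo` stands in for your program so the example runs anywhere; replace it with your actual command, and note that `run.log` is just an illustrative filename):

```shell
echo "result line" > run.log 2>&1            # stdout and stderr both go to run.log
echo "streamed line" 2>&1 | tee -a run.log   # print to the terminal AND append to the file
cat run.log                                  # shows both lines
```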
To end the interactive job session and return to the login node, type exit.
For More Information Contact
Still Need Help? The best way to report your issue or make a request is by submitting a ticket.
Request Access or Report an Issue