Interactive shell

Audience: Faculty, Postdocs, Researchers, Staff and Students

This KB Article References: High Performance Computing
Last Updated: December 04, 2019

Requesting an Interactive Session in Slurm

Slurm allows you to run an interactive shell on a compute node. First, make sure the slurm module is loaded:

module load slurm

To start an interactive session, use the srun command with the --pty option. At a minimum, provide a queue and the shell to run:

srun -p <queue> --pty bash

You can pass the same additional options to srun as you would in your Slurm job script files. Some useful options are:

  • -N <# of nodes>
  • -t hh:mm:ss
  • -n <# of tasks> (total across all nodes; use --ntasks-per-node to set tasks per node)

For an interactive job using 1 node and 28 tasks with an 8-hour runtime on the gpu queue, this would look like:

srun -N 1 -n 28 -t 8:00:00 -p gpu --pty bash

Running an Interactive Job

Once the interactive shell starts, your session moves from the login node to one of the allocated compute nodes.
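To confirm you are on a compute node rather than the login node, you can check the hostname and the job's Slurm environment (SLURM_JOB_ID is a standard Slurm variable; the values printed will differ on your system):

```shell
# Print the name of the node this shell is running on
hostname
# Slurm sets SLURM_JOB_ID inside an allocation; it is unset on the login node
echo "$SLURM_JOB_ID"
```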

All of your environment variables from the login node will be copied to your interactive shell (just as when you submit a job). This means all of your modules will still be loaded and you will remain in the same working directory as before. You can immediately run your program for testing:

mpicc source_code.c -o my_program
mpirun ./my_program <command_line_args>

Anything your program writes to stdout is printed directly to the terminal unless you redirect it elsewhere. For more information on handling output, see the FAQ page.
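A common way to direct output elsewhere is shell redirection. A minimal sketch, using echo as a stand-in for your own program (output.log is a placeholder filename):

```shell
# Send stdout and stderr to a file instead of the terminal;
# "echo" stands in here for your own program
echo "program output" > output.log 2>&1
cat output.log   # prints: program output
```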


Additional Information

There are no additional resources available for this article.

Getting Help

The Division of Information Technology provides support for all of our services. If you require assistance, please submit a support ticket through the IT Service Management system.

Submit A Ticket

Supported By

IACS Support System