Choosing the right login nodes for your jobs

As of October 2022, different sets of SeaWulf login nodes provide access to different sets of compute nodes. This article explains how to choose which login nodes to use when submitting your Slurm jobs.

Audience: Faculty, Postdocs, Researchers, Staff and Students

This KB Article References: High Performance Computing
Last Updated: October 11, 2023

The following sets of nodes are now exclusively accessed from a new set of login nodes (milan1, milan2, xeonmax) that are separate from the existing login nodes (login1 and login2):

  • 40-core
  • 96-core
  • hbm-96core
  • A100

To run jobs in the 40-core, 96-core, hbm-96core, or a100 queues, users should first ssh to the new login nodes with:

ssh -X netid@milan.seawulf.stonybrook.edu

These new milan login nodes (milan1 and milan2) are running a different operating system, Rocky 8. They also have a newer version of the Slurm workload manager.

On the new milan1 or milan2 login nodes you should load the slurm/seawulf3/21.08.8 module:

module load slurm/seawulf3/21.08.8
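
Before submitting, it can help to confirm that the newer Slurm module is actually the one in your environment. The commands below are a minimal sanity check using the standard module and Slurm client tools; exact output will depend on your environment:

# Show currently loaded modules (slurm/seawulf3/21.08.8 should appear)
module list

# Confirm the Slurm client version and summarize the partitions visible from this login node
sinfo --version
sinfo -s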

In addition, an Intel Sapphire Rapids login node ("xeonmax") is now available for submitting jobs to the above queues. This login node uses the same version of the Slurm workload manager as the milan login nodes but features a Rocky 9 Linux operating system.  Users can access it via:

ssh -X netid@xeonmax.seawulf.stonybrook.edu

For now, all other queues will continue to be accessed from the existing login nodes (login1 and login2). So, if you wish to run jobs in any of the following queues, ssh as you have previously (to login.seawulf.stonybrook.edu):

  • 28-core  
  • gpu, gpu-long, and gpu-large
  • v100 and p100

To submit to these queues from the existing login1 or login2 nodes, you should load the slurm/17.11.12 module:

module load slurm/17.11.12

 

Here is an example of the steps that would be taken to submit an interactive job to the short-40core partition via the new milan login nodes:

1. ssh to the appropriate login node:

ssh netid@milan.seawulf.stonybrook.edu

2. Load the slurm module:

module load slurm/seawulf3/21.08.8

3. Submit your job:

srun -p short-40core <other Slurm flags> --pty bash
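
The same login-node and module choice applies to batch jobs. The script below is a minimal sketch for the short-40core partition; the job name, output file, time limit, task count, and script filename are illustrative placeholders that you should adjust to your actual workload:

#!/bin/bash
#SBATCH --job-name=test-40core        # illustrative job name
#SBATCH --partition=short-40core      # partition reached via the milan/xeonmax login nodes
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=40          # assumes one task per core on a 40-core node
#SBATCH --time=00:30:00               # illustrative time limit
#SBATCH --output=test-40core.%j.out   # illustrative output file (%j = job ID)

# Your actual commands go here
hostname

After loading the slurm/seawulf3/21.08.8 module on milan1, milan2, or xeonmax, the script would be submitted with sbatch, for example: sbatch test-40core.slurm (a hypothetical filename).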

In contrast, here are the steps that would be taken to submit an interactive job to the short-28core partition via the standard login nodes:

1. ssh to the appropriate login node:

ssh netid@login.seawulf.stonybrook.edu

2. Load the slurm module:

module load slurm/17.11.12

3. Submit your job:

srun -p short-28core <other Slurm flags> --pty bash
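
In either case, once a job has been submitted you can monitor it from the same login node with the standard Slurm client commands (netid and the job ID below are placeholders):

# List your pending and running jobs
squeue -u netid

# Show detailed information for a specific job
scontrol show job <jobid>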

 

All login and compute nodes will continue to use the same GPFS file system, so you will have access to the same files and data regardless of which set of login nodes you use.

Getting Help


The Division of Information Technology provides support on all of our services. If you require assistance, please submit a support ticket through the IT Service Management system.


For More Information Contact


IACS Support System