This Information is Intended for: Faculty, Postdocs, Researchers, Staff, Students
Last Updated: October 11, 2023
The following node types are now accessed exclusively from a new set of login nodes (milan1, milan2, and xeonmax), separate from the existing login nodes (login1 and login2):
- 40-core
- 96-core
- hbm-96core
- A100
To run jobs in the 40-core, 96-core, hbm-96core, or a100 queues, users should first ssh to the new login nodes with:
ssh -X netid@milan.seawulf.stonybrook.edu
On the new milan1 or milan2 login nodes, load the slurm/seawulf3/21.08.8 module:
module load slurm/seawulf3/21.08.8
In addition, an Intel Sapphire Rapids login node ("xeonmax") is now available for submitting jobs to the above queues. This login node uses the same version of the Slurm workload manager as the milan login nodes but features a Rocky 9 Linux operating system. Users can access it via:
ssh -X netid@xeonmax.seawulf.stonybrook.edu
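If you connect to both sets of login nodes regularly, an SSH client configuration can shorten these commands. The following is an optional sketch of ~/.ssh/config entries for the hosts named above; the Host aliases are illustrative, and "netid" is a placeholder for your own NetID:

```
# ~/.ssh/config -- optional shortcuts (aliases are examples, not required)
Host milan
    HostName milan.seawulf.stonybrook.edu
    User netid
    ForwardX11 yes

Host xeonmax
    HostName xeonmax.seawulf.stonybrook.edu
    User netid
    ForwardX11 yes
```

With these entries in place, "ssh milan" behaves like "ssh -X netid@milan.seawulf.stonybrook.edu".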
For now, all other queues will be accessed from the existing login nodes (login1 and login2). If you wish to run jobs in any of the following queues, ssh as you have previously (to login.seawulf.stonybrook.edu):
- 28-core
- gpu, gpu-long, and gpu-large
- v100 and p100
On the existing login1 or login2 nodes, load the slurm/17.11.12 module to submit to their associated queues:
module load slurm/17.11.12
Here is an example of the steps that would be taken to submit an interactive job to the short-40core partition via the new milan login nodes:
ssh netid@milan.seawulf.stonybrook.edu
module load slurm/seawulf3/21.08.8
srun -p short-40core <other Slurm flags> --pty bash
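The same partition can also be targeted with a batch job submitted via sbatch after loading the module above. The following is a minimal sketch of a batch script; the job name, task count, time limit, and workload are illustrative placeholders, not site-mandated values:

```
#!/bin/bash
#SBATCH --job-name=example_job     # illustrative name
#SBATCH --partition=short-40core   # partition from this article
#SBATCH --ntasks=40                # example: one full 40-core node
#SBATCH --time=01:00:00            # example time limit

# Your actual workload goes here
hostname
```

Saved as (for example) job.slurm, it would be submitted from a milan login node with "sbatch job.slurm".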
In contrast, here are the steps that would be taken to submit an interactive job to the short-28core partition via the standard login nodes:
ssh netid@login.seawulf.stonybrook.edu
module load slurm/17.11.12
srun -p short-28core <other Slurm flags> --pty bash
All login and compute nodes will continue to use the same GPFS file system, so you will have access to the same files and data regardless of which login nodes you use.
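Because the two sets of login nodes require different Slurm modules, it can be convenient to select the module from the node's hostname. The following is a hypothetical helper (not part of the SeaWulf environment) that encodes the node-to-module pairings from this article; you could combine it with "module load" in your shell startup files:

```shell
#!/bin/sh
# Map a SeaWulf login-node hostname to the Slurm module it should load.
# Hypothetical helper based on the pairings described in this article.
slurm_module_for() {
    case "$1" in
        milan1|milan2|xeonmax) echo "slurm/seawulf3/21.08.8" ;;
        login1|login2)         echo "slurm/17.11.12" ;;
        *)                     echo "unknown" ;;
    esac
}

# Example usage (on a login node you might instead run:
#   module load "$(slurm_module_for "$(hostname -s)")" )
slurm_module_for milan1   # prints slurm/seawulf3/21.08.8
```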