How to use the GPU Nodes on SeaWulf

This KB Article References: High Performance Computing
This Information is Intended for: Faculty, Postdocs, Researchers, Staff, Students
Last Updated: December 04, 2019

SeaWulf has 8 GPU nodes containing 4 Tesla K80 GPUs each, as well as one node with 2 Tesla P100 GPUs.

To access the GPU nodes, submit a job to one of the GPU queues using the SLURM workload manager:

module load slurm/17.11.12
sbatch [...]
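For example, a minimal batch script for the gpu queue might look like the following sketch (the job name, output file, and program name are placeholders for illustration):

#!/bin/bash
#SBATCH --job-name=gpu_test
#SBATCH --output=gpu_test.log
#SBATCH -p gpu
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=28
#SBATCH --time=01:00:00

module load cuda91/toolkit/9.1
./my_gpu_program

If you save this as gpu_test.slurm, you would submit it with sbatch gpu_test.slurm.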

You can open an interactive shell on a GPU node with the following:

srun -J [job_name] -N 1 -p gpu --ntasks-per-node=28 --pty bash
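For example, the following starts an interactive session named cuda_test on a GPU node and then runs NVIDIA's nvidia-smi utility to confirm that the GPUs are visible (the job name is arbitrary, and nvidia-smi assumes the NVIDIA driver tools are on the node's default PATH):

srun -J cuda_test -N 1 -p gpu --ntasks-per-node=28 --pty bash
nvidia-smi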

If you want to use CUDA to take advantage of GPU acceleration, you will need to load the CUDA module and then compile your code with nvcc:

module load cuda91/toolkit/9.1
nvcc INFILE -o OUTFILE

For a sample CUDA program, see:

 /gpfs/projects/samples/cuda/test.cu
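For example, from an interactive session on a GPU node you could copy the sample into your working directory, compile it, and run the resulting binary (the output name test is arbitrary):

cp /gpfs/projects/samples/cuda/test.cu .
module load cuda91/toolkit/9.1
nvcc test.cu -o test
./test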

 

The GPU queues have the following attributes:

Queue        Default run time    Max run time    Max # of nodes
gpu          1 hour              8 hours         2
gpu-long     8 hours             48 hours        1
gpu-large    1 hour              8 hours         4
p100         1 hour              24 hours        1
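For example, to submit a longer job to the gpu-long queue, pass that partition name to sbatch (the script name below is a placeholder), and check the job's status with squeue:

sbatch -p gpu-long gpu_test.slurm
squeue -u $USER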

 


Getting Help


The Division of Information Technology provides support for all of our services. If you require assistance, please submit a support ticket through the IT Service Management system.
