High Performance Computing

Stony Brook University offers researchers several high performance computing options, depending on their needs.

High Performance Computing Clusters

Stony Brook University researchers may access the services provided by the following two HPC clusters: 

LI-red

LI-red is a computational cluster using top-of-the-line components from Cray, IBM, Intel, Mellanox and other technology partners. It is intended for members of the campus community, as well as industrial partners, and is located in the Institute for Advanced Computational Science's (IACS) server room.  

Learn more about LI-red

SeaWulf

SeaWulf is a computational cluster using top-of-the-line components from Penguin, DDN, Intel, Nvidia, Mellanox and numerous other technology partners. It is intended for members of the campus community, as well as industrial partners, and is located in the Computing Center.

Learn more about SeaWulf

New User Warnings

  • If you have never used a high performance computing cluster, or are not familiar with SeaWulf, read and follow the instructions in the FAQ below (start with the Getting Started Guide!). New users commonly skip this FAQ and try to run jobs without understanding what they are doing. Such carelessness can and will impact the many users currently running critical jobs on the cluster. If your actions compromise the health of SeaWulf, your account will be LOCKED, so please read the guides below thoroughly before you run jobs.
  • Do NOT use the login nodes for computationally intensive work of any kind, including compiling software. If everyone runs CPU- or RAM-intensive processes on a login node, it will crash and prevent all other SeaWulf users from logging in to the cluster. If you need to test or compile your code, please request an interactive job instead of running it on the login nodes (see the sketch after this list).
  • Do NOT use your home directory (/gpfs/home/NETID) to store large files as you may quickly exceed the directory’s storage limit (20 GB).  It is best to place large or temporary files in your scratch directory (/gpfs/scratch/NETID), though please keep in mind that files and directories in scratch are automatically removed after 30 days.   If your research group needs a persistent directory with a larger storage limit, consider requesting a shared project space.
  • Never submit a large number of jobs (more than 5) without first running a small test case to make sure your code works as expected. Start small and ramp up the number of jobs once you are familiar with how things work.
  • SeaWulf is managed by a small number of System Administrators and students.   If you encounter an issue or have problems running your job, please make sure you read through the FAQ items below before submitting a ticket to HPC Support.
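
The warnings above mention interactive jobs and small test runs. As a rough illustration only (not an official template), the lines below sketch how an interactive session might be requested through the SLURM scheduler described in the announcements that follow; the partition and module names are assumptions, so check the Getting Started Guide, "sinfo", and "module avail" for the queues and software actually available to your account.

    # Request a short interactive session on a compute node instead of
    # working on a login node. "short-40core" is a hypothetical partition
    # name; run "sinfo" to see the real queues available to you.
    srun -p short-40core -N 1 -n 4 -t 01:00:00 --pty bash

    # Once the interactive shell starts on a compute node, build and test there:
    module avail                  # list the software modules installed on SeaWulf
    module load intel/2018        # hypothetical module name; pick one that "module avail" lists
    icc -O2 my_code.c -o my_code  # compile on the compute node, not the login node
    ./my_code                     # run a small test case before submitting many jobs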

Announcements

June 27, 2019

Due to preventive maintenance on the Campus Data Center’s generator and scheduled maintenance on SeaWulf’s Storage Array, the SeaWulf cluster’s job queues will be disabled starting at 5:00 PM on Monday July 1st.  All SeaWulf Nodes will be unavailable.  The work is expected to be completed by 5:00 PM on Wednesday July 3rd.

May 23, 2019

Due to preventive maintenance on the Campus Data Center’s generator, some of the SeaWulf cluster’s job queues will be disabled starting at 5:00 PM on Wednesday May 29th. The login nodes and the 24-core & 40-core queues will NOT be impacted by this outage.

The following queues are expected to be back on-line by noon, on Thursday May 30th:

short, long, extended, medium, large, gpu, gpu-long, and gpu-large

April 01, 2019

The SeaWulf cluster is being expanded by the addition of 64 new compute nodes, each with 40 CPU cores and 192 Gigabytes of RAM. After this upgrade, the SeaWulf cluster will have roughly 10,000 CPU cores and over 50 terabytes of RAM, in aggregate.

In order to integrate the upgrades with our existing environment, the SeaWulf cluster will be going offline at 5:00 PM on Monday April 15th and coming back online on the afternoon of Thursday, April 18th.

Scheduling jobs on the 64 new compute nodes will be done through our new job scheduling system, SLURM. Please see below for our FAQ entry on how to use SLURM on SeaWulf:

https://it.stonybrook.edu/help/kb/using-the-slurm-workload-manager
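
For illustration only, a minimal SLURM batch script for one of the new 40-core nodes might look like the sketch below. The partition name, module name, and executable are assumptions made for this example; the FAQ entry above documents the actual queue names and options.

    #!/bin/bash
    #SBATCH --job-name=my_test            # short, descriptive job name
    #SBATCH --partition=short-40core      # hypothetical partition name; see the FAQ for the real queues
    #SBATCH --nodes=1                     # one of the new 40-core nodes
    #SBATCH --ntasks-per-node=40          # use all 40 cores on the node
    #SBATCH --time=01:00:00               # one-hour wall-clock limit
    #SBATCH --output=%x_%j.log            # write output to <jobname>_<jobid>.log

    module load intel/2018                # hypothetical module name; run "module avail" to see what exists
    cd $SLURM_SUBMIT_DIR                  # start from the directory the job was submitted from
    ./my_program                          # replace with your own executable

Save the script (for example as myjob.slurm), submit it with "sbatch myjob.slurm", and monitor it with "squeue -u $USER".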

March 25, 2019

GPU nodes are now available through the SLURM workload manager. Please view the FAQ item Using the Slurm workload manager for instructions on how to submit jobs to the GPU nodes.

The FAQ item How to use the GPU Nodes on SeaWulf has been updated to reflect these changes to SeaWulf.
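
As a rough sketch only, a SLURM batch script requesting a GPU might look like the following; the partition name, GPU count, and module name are assumptions, so follow the Using the Slurm workload manager FAQ item for the exact options on SeaWulf.

    #!/bin/bash
    #SBATCH --job-name=gpu_test
    #SBATCH --partition=gpu               # hypothetical GPU partition name; confirm it in the FAQ
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=1
    #SBATCH --gres=gpu:1                  # request one GPU on the node
    #SBATCH --time=01:00:00

    module load cuda                      # hypothetical module name; check "module avail"
    nvidia-smi                            # confirm the GPU is visible before running real work
    ./my_gpu_program                      # replace with your own GPU executable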

March 06, 2019

Due to continuing capital maintenance on the Campus Data Center’s electrical switch gear, the SeaWulf cluster’s extended, long, short, large, medium and GPU queues will be disabled and all running jobs on those queues terminated, starting Sunday March 17th at 9 pm. The login nodes & li-red cluster queues (long-24core, short-24core, large-24core, extended-24core, medium-24core) will NOT be affected by this outage.

The SeaWulf compute nodes are expected to come back online by noon on Monday, March 18th. We apologize for this inconvenience and thank you for your understanding while these essential updates are performed to the electrical infrastructure.

January 28, 2019

2019 XSEDE HPC Monthly Workshop Series - Big Data

XSEDE, along with the Pittsburgh Supercomputing Center, is pleased to announce a two-day Big Data workshop, to be held February 12-13, 2019 in IACS Conference Room #1. Stony Brook will be one of several sites remotely hosting this workshop. The workshop will focus on topics such as Hadoop and Spark and will be presented using the Wide Area Classroom (WAC) training platform.

Registration is required.  Please visit this link to register.  

December 10, 2018

In order to provide our user community with a more robust and up-to-date software stack and to provision new hardware, we are scheduling an outage of the SeaWulf cluster. During the upgrade window, we will be updating the operating system and drivers, further optimizing GPFS, and introducing additional login and GPU nodes.

We will be bringing the SeaWulf cluster off-line starting 10:00 AM on Monday December 24th to perform these upgrades. We expect to have the cluster back on-line by the close of business on Tuesday January 1st.

We apologize for the inconvenience this outage may pose, and appreciate your patience as we complete these necessary upgrades.

October 19, 2018

Due to preventive maintenance on the Campus Data Center’s generator, the SeaWulf cluster’s job queues will be disabled starting at 9 am on Wednesday, November 28th. The login node & cluster will go off-line at the end of business that same day. The login and compute nodes are expected to be back online by 3 pm on Thursday, November 29th.

August 13, 2018

The release versions of the 2018 Intel compilers are now available as modules on SeaWulf. Please use the "module avail" command to view all available modules.
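
For example, a typical module workflow might look like the following; the exact module name shown is an assumption, and "module avail" will list what is actually installed.

    module avail                  # list every module installed on SeaWulf
    module avail intel            # narrow the list to Intel-related modules
    module load intel/2018        # hypothetical module name; load whichever version the list shows
    module list                   # confirm which modules are currently loaded
    icc --version                 # verify the Intel compiler is now on your PATH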

August 09, 2018

The electrical maintenance in the Campus Data Center is complete; the SeaWulf compute nodes have been powered up and the 28-core queues have been enabled.

August 09, 2018

Due to a failed electrical component that feeds the Campus Data Center, additional time will be needed to complete the capital maintenance. We have been advised to anticipate that normal power to the Data Center will be restored by late this afternoon. Once power has been restored, we will bring up the SeaWulf compute nodes and re-enable the 28-core queues.

July 26, 2018

Due to capital maintenance on the Campus Data Center’s electrical switch gear, the SeaWulf cluster’s job queues will be disabled starting Wednesday August 8th. The login nodes & li-red cluster queues (long-24core, short-24core, large-24core) will NOT be affected by this outage.

The SeaWulf compute nodes are expected to go off-line by the end of business on Wednesday August 8th and come back online by noon on Thursday August 9th. We apologize for this inconvenience and thank you for your understanding while these essential updates are performed to the electrical infrastructure.

Partners

Institute for Advanced Computational Science (IACS)

The mission of the IACS is to make sustained advances in the fundamental techniques of computation and in high-impact applications, with a vision that by 2017 it will be an internationally recognized institute with vibrant multidisciplinary research and education programs and demonstrated economic benefit to New York State. Including students and staff, the IACS aims to grow to close to 100 people by 2018. There are presently 10 faculty spanning chemistry, materials by design, condensed matter, astrophysics, atmospheric science, nano-science, sociology, applied mathematics, and computer science. Approximately 20 additional faculty from diverse departments are affiliated with the institute.

Office of the Vice President for Research (OVPR)

The Office of the Vice President for Research helps strategically position the University to be successful in attracting external support for research from federal and other government sources, industry, private foundations, philanthropy, and through partnerships with allied organizations such as Brookhaven National Lab, Cold Spring Harbor, and others. It facilitates the process of applying for grants and contracts and manages awards to assure that research is carried out successfully and that the requirements of sponsoring agencies are fulfilled. It also provides technology tools to disseminate information, facilitate collaboration, and streamline research administration. 

University Libraries

Stony Brook Libraries are known for a wide range of print and digital resources and world-renowned special collections. The Stony Brook Libraries belong to the Association of Research Libraries (ARL), with a Health Sciences Center Library that is a member of the Association of Academic Health Sciences Libraries. The Libraries’ collections exceed 2 million volumes, including eBooks, print and electronic holdings of scholarly journal subscriptions, microforms, music recordings, and a sizable map collection. The University’s Libraries stand as the largest academic research library on Long Island, serving as a resource in the local community, state-wide, and nationally through the National Network of the National Libraries of Medicine.

Service Providers

Internet2 (NYSERNet)

Stony Brook offers researchers high-capacity, high-speed connections to the Internet as a member of NYSERNet and Internet2.

XSEDE

XSEDE is a single virtual system that scientists can use to interactively share computing resources, data and expertise. People around the world use these resources and services — things like supercomputers, collections of data and new tools — to improve our planet.

Brookhaven National Lab

Brookhaven National Lab serves as another service provider for faculty, acting as an extension of Stony Brook University.

Additional Information


There are no additional resources available for this service.

Please Contact


IACS Support System