High Performance Computing

Stony Brook University offers researchers several high performance computing options, depending on their needs.

High Performance Computing Clusters

Stony Brook University researchers may access the services provided by the following two HPC clusters: 

LI-red

LI-red is a computational cluster built from top-of-the-line components from Cray, IBM, Intel, Mellanox, and other technology partners. It is intended for members of the campus community as well as industrial partners, and is located in the Institute for Advanced Computational Science (IACS) server room.

Learn more about LI-red

SeaWulf

SeaWulf is a computational cluster built from top-of-the-line components from Penguin, DDN, Intel, Nvidia, Mellanox, and numerous other technology partners. It is intended for members of the campus community as well as industrial partners, and is located in the Computing Center.

Learn more about SeaWulf

New User Warnings

  • If you have never used a high performance computing cluster, or are not familiar with SeaWulf, YOU WILL WANT to read and follow the instructions in the FAQ below (start with the Getting Started Guide!). New users commonly ignore this FAQ and simply try to run jobs without understanding what they are doing. Such carelessness can and WILL impact the many users who are running critical jobs on the cluster. If your actions compromise the health of SeaWulf, your account will be LOCKED, so please make sure you thoroughly read the guides below before you run any jobs.
  • Do NOT use the login nodes for computationally intensive work of any kind, including compiling software. If everyone runs CPU- or RAM-intensive processes on the login nodes, they will crash and prevent all other SeaWulf users from logging in to the cluster. If you need to test or compile your code, request an interactive job instead (see the interactive session example after this list).
  • Do NOT use your home directory (/gpfs/home/NETID) to store large files, as you may quickly exceed the directory's storage limit (20 GB). Place large or temporary files in your scratch directory (/gpfs/scratch/NETID) instead, but keep in mind that files and directories in scratch are automatically removed after 30 days (see the storage example after this list). If your research group needs a persistent directory with a larger storage limit, consider requesting a shared project space.
  • Never submit a large number of jobs (more than 5) without first running a small test case to make sure your code works as expected. Start small and ramp up once you are familiar with how things work (a sample test script follows this list).
  • SeaWulf is managed by a small number of system administrators and students. If you encounter an issue or have problems running your job, please make sure you read through the FAQ items below before submitting a ticket to HPC Support.
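
For example, here is a minimal sketch of requesting an interactive session. It assumes SeaWulf's SLURM scheduler; the partition name (short-28core) and the resource values are illustrative, so check the Getting Started Guide for the queues available to your account:

    # Request 4 CPUs on one node for one hour and open an interactive shell.
    # "short-28core" is an example partition name; substitute a real SeaWulf queue.
    srun -p short-28core --nodes=1 --ntasks=1 --cpus-per-task=4 \
         --time=01:00:00 --pty bash

Once the shell opens you are on a compute node, where compiling and test runs will not disturb other users.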
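
To check how close your home directory is to its 20 GB limit and to move bulky files into scratch, a sketch like the following works (NETID is a placeholder for your own NetID, and big_results.tar is a hypothetical file):

    # Summarize home directory usage against the 20 GB limit.
    du -sh /gpfs/home/NETID

    # Keep large or temporary files in scratch instead (purged after 30 days).
    mv /gpfs/home/NETID/big_results.tar /gpfs/scratch/NETID/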
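
A small test submission might look like the following batch script, again assuming SLURM; the partition, resource limits, and program name (my_solver) are placeholders to adapt to your own work:

    #!/bin/bash
    #SBATCH --job-name=small_test
    #SBATCH --partition=short-28core   # example queue name; use a real one
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --time=00:15:00
    #SBATCH --output=small_test.%j.out

    # Run one small case first; scale up only after the output looks correct.
    ./my_solver --input small_case.dat

Save it as, say, test.slurm, submit it with "sbatch test.slurm", confirm the output, and only then queue the full set of jobs.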

Announcements

April 01, 2022

Due to a power outage on campus this morning, the SeaWulf 28-core and GPU queues went down. Campus Operations has restored power, and the queues are now back online. Any jobs that were running on them will need to be restarted.

December 21, 2021

We have just been notified by Campus Operations and Maintenance that another round of electrical maintenance is tentatively scheduled, this time for January 6th starting at 2:00 PM. We've been advised that:

"If the daytime temperature is below 40ºF we will be required to postpone the shutdown."

In anticipation of this scheduled electrical maintenance, we are tentatively scheduling a shutdown of the SeaWulf 24-core queues and P100 queue on January 6th starting at 1:00 PM. This outage is not expected to impact the 28-core or 40-core queues, nor the login nodes (login1 and login2). We anticipate the 24-core and P100 queues will be back up on the morning of January 7th, 2022, pending timely completion of the electrical maintenance.

We once again thank you for your patience while Campus Operations attempts to complete these necessary upgrades.

December 01, 2021

We were just notified that electrical maintenance will be performed on the circuits feeding the 24-core and P100 queues on Thursday 12/16/2021. This outage is not expected to impact the 28-core or 40-core queues, nor the login nodes (login1 and login2). In anticipation of this maintenance, the 24-core and P100 queues will be disabled starting at 1:00 PM on Thursday 12/16/2021. We currently anticipate the 24-core and P100 queues will be back up on the afternoon of 12/17/2021, pending timely completion of the electrical maintenance.

November 05, 2021

We were just notified that electrical maintenance will be performed on the circuits feeding the 24-core and P100 queues on Tuesday 11/16/2021. This outage is not expected to impact the 28-core or 40-core queues, nor the login nodes (login1 and login2). In anticipation of this maintenance, the 24-core and P100 queues will be disabled starting at 5:00 PM on Monday 11/15/2021.

We currently anticipate the 24-core and P100 queues will be back up on the afternoon of 11/16/2021, pending timely completion of the electrical maintenance.

October 14, 2021

Our HPC Support Team will be resuming its weekly office hours this semester. Support staff will be available from 1 PM to 3 PM every Thursday, starting October 14th and running until the end of the semester. You may also try joining the meeting at other times; if a member of the support staff is available, they will join to assist you. Click here to join office hours.

August 13, 2021

We were just notified that electrical maintenance will be performed on the circuits feeding the 24-core queues on Friday 8/27/2021. This outage is not expected to impact the 28-core or 40-core queues, nor the primary login node (login1). In anticipation of this maintenance, the 24-core queues will be disabled starting at 5:00 PM on Thursday 8/26/2021 and login2 will be powered down.

We currently anticipate the 24-core queues and login2 will be back up by 1:00 PM on 8/27/2021, pending timely completion of the electrical maintenance.

July 06, 2021

We will be powering down the SeaWulf cluster starting Tuesday July 13th at 5:00 PM and bringing it back online by the close of business on Wednesday July 14th. During this outage, the SeaWulf login nodes, compute nodes, and storage will be unavailable. We will be performing a storage upgrade during this window.

We apologize for this inconvenience, and thank you for your patience while this upgrade is being completed.

June 01, 2021

We were just notified of another wave of electrical work that will impact the SeaWulf cluster. In anticipation of this scheduled electrical maintenance, we will be powering down the SeaWulf cluster on Monday evening, June 7th, at 5:00 PM and bringing it back online on Wednesday afternoon, June 9th. During this outage, the SeaWulf login nodes, compute nodes, and storage will be unavailable.

We will be overlapping our storage upgrade with this electrical outage, to minimize the impact to our user community.

We apologize for this inconvenience, and thank you for your patience while additional 15kV electrical feeders are installed and data center generators are tested.

April 20, 2021

We were just notified of another wave of electrical work that will impact the SeaWulf cluster. In anticipation of this scheduled electrical maintenance, we will be powering down the SeaWulf cluster on Thursday morning, April 29th, starting at 9:00 AM and bringing it back online by noon on Friday, April 30th. During this outage, the SeaWulf login nodes, compute nodes, and storage will be unavailable.

We will be overlapping our maintenance window with this electrical outage, to minimize the impact to our user community.

We apologize for this inconvenience, and thank you for your patience while the replacement 15kV electrical feeders are installed.

April 05, 2021

We have been notified that there will be an extended power outage impacting the SeaWulf cluster, while preventive maintenance is performed on the high voltage switch gear supporting it.

In anticipation of this outage, the SeaWulf cluster's login and compute nodes, as well as storage, will go offline at the end of business on Tuesday April 6th and are anticipated to be back up by the end of business on Wednesday April 7th.

We apologize for this inconvenience, and thank you for your patience while these necessary preventive maintenance steps are being taken.

Partners

Institute for Advanced Computational Science (IACS)

The mission of the IACS is to make sustained advances in the fundamental techniques of computation and in high-impact applications, with a vision that by 2017 it will be an internationally recognized institute with vibrant multidisciplinary research and education programs and demonstrated economic benefit to New York State. Including students and staff, the IACS aims to grow to nearly 100 people by 2018. There are presently 10 faculty spanning chemistry, materials by design, condensed matter, astrophysics, atmospheric science, nanoscience, sociology, applied mathematics, and computer science. Approximately 20 additional faculty from diverse departments are affiliated with the institute.

Office of the Vice President for Research (OVPR)

The Office of the Vice President for Research helps strategically position the University to be successful in attracting external support for research from federal and other government sources, industry, private foundations, philanthropy, and through partnerships with allied organizations such as Brookhaven National Lab, Cold Spring Harbor, and others. It facilitates the process of applying for grants and contracts and manages awards to assure that research is carried out successfully and that the requirements of sponsoring agencies are fulfilled. It also provides technology tools to disseminate information, facilitate collaboration, and streamline research administration. 

University Libraries

Stony Brook Libraries are known for a wide range of print and digital resources and world-renowned special collections. The Stony Brook Libraries belong to the Association of Research Libraries (ARL), and the Health Sciences Center Library is a member of the Association of Academic Health Sciences Libraries. The Libraries' collections exceed 2 million volumes, including eBooks, print and electronic holdings of scholarly journals, microforms, music recordings, and a sizable map collection. The University Libraries stand as the largest academic research library on Long Island, serving as a resource for the local community, the state, and the nation through the National Network of Libraries of Medicine.

Service Providers

Internet2 (NYSERNet)

Stony Brook offers researchers high-capacity, high-speed connections to the Internet as a member of NYSERNet and Internet2.

XSEDE

XSEDE is a single virtual system that scientists can use to interactively share computing resources, data and expertise. People around the world use these resources and services — things like supercomputers, collections of data and new tools — to improve our planet.

Brookhaven National Lab

Brookhaven National Lab serves faculty as an additional service provider, acting as an extension of Stony Brook University.

Getting Help

The Division of Information Technology provides support on all of our services. If you require assistance, please submit a support ticket through the IT Service Management system.

Submit A Quick Ticket

Please Contact

IACS Support System