High Performance Computing

Stony Brook University offers researchers several computing options, depending on their needs.

High Performance Computing Clusters

Stony Brook University researchers may access the following two HPC clusters:

LI-red

LI-red is a computational cluster using top-of-the-line components from Cray, IBM, Intel, Mellanox and other technology partners. It is intended for members of the campus community, as well as industrial partners, and is located in the Institute for Advanced Computational Science's (IACS) server room.  

Learn more about LI-red

SeaWulf

SeaWulf is a computational cluster using top-of-the-line components from Penguin, DDN, Intel, Nvidia, Mellanox, and numerous other technology partners. It is intended for members of the campus community, as well as industrial partners, and is located in the Computing Center.

Learn more about SeaWulf

New User Warnings

  • If you have never used a high performance computing cluster, or are not familiar with SeaWulf, you WILL want to read and follow the instructions in the FAQ below (start with the Getting Started Guide!). New users commonly ignore this FAQ and simply try to run jobs without understanding what they are doing; such carelessness can and WILL impact the many users running critical jobs on the cluster. If your actions compromise the health of SeaWulf, your account will be LOCKED, so please read the guides below thoroughly before you embark on running jobs.
  • Do NOT use the login nodes for computationally intensive work of any kind, including compiling software. If everyone runs CPU- or RAM-intensive processes on the login nodes, they will crash and prevent all other SeaWulf users from logging in to the cluster. If you need to test or compile your code, please request an interactive job instead (see the sketch after this list)!
  • Do NOT use your home directory (/gpfs/home/NETID) to store large files, as you may quickly exceed the directory's storage limit (20 GB). It is best to place large or temporary files in your scratch directory (/gpfs/scratch/NETID), though keep in mind that files and directories in scratch are automatically removed after 30 days. If your research group needs a persistent directory with a larger storage limit, consider requesting a shared project space.
  • Never submit a large number of jobs (more than 5) without first running a small test case to make sure your code works as expected. Start small and ramp up once you are familiar with how things work.
  • SeaWulf is managed by a small team of system administrators and students. If you encounter an issue or have problems running your job, please read through the FAQ items below before submitting a ticket to HPC Support.
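
The warnings above translate into a few concrete habits. Below is a minimal sketch, assuming SeaWulf's Slurm scheduler; the queue name "short-28core", the file names, and "my_program" are illustrative placeholders, so run sinfo to see the queues actually available to you.

    # Compile and test interactively on a compute node, never on a login node:
    srun -p short-28core --ntasks=1 --time=00:30:00 --pty bash

    # Keep large files out of your 20 GB home directory; stage them in scratch
    # (remember: scratch files are purged after 30 days):
    du -sh /gpfs/home/$USER
    mv large_dataset.tar /gpfs/scratch/$USER/

When you are ready to submit batch work, start with one small job script (here named test_job.slurm) and verify its output before scaling up:

    #!/bin/bash
    #SBATCH --job-name=smoke-test
    #SBATCH -p short-28core
    #SBATCH --ntasks=1
    #SBATCH --time=00:05:00
    ./my_program --small-input    # placeholder for your own executable

Submit it with "sbatch test_job.slurm", monitor it with "squeue -u $USER", and inspect the slurm-<jobid>.out file when it finishes. Only after a small test like this completes cleanly should you submit jobs in volume.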

Announcements

September 14, 2023

We will be performing upgrades and maintenance on the SeaWulf storage on Monday, October 9th, starting at 9:00 AM. During this maintenance window, all SeaWulf login nodes and queues, as well as the storage, will NOT be available. The SeaWulf cluster is scheduled to return to normal operation by the end of business on Tuesday, October 10th.

We thank you for your patience while these necessary upgrades are completed.

August 12, 2023

We have just been notified of scheduled electrical maintenance at the Campus Data Center on Tuesday, August 22nd. During these necessary electrical upgrades, the 28-core and GPU (K80) queues will be offline starting at 9:00 AM on Tuesday the 22nd, and jobs running on those queues will be terminated. The maintenance is expected to be completed by the end of business on the 22nd.

No other queues will be affected, and the login nodes will remain accessible during this maintenance period.

We thank you for your patience while these necessary upgrades are completed.

May 31, 2023

In preparation for a new expansion to SeaWulf (announcement to follow), we will be performing maintenance on the 96-core, 40-core, and A100-GPU queues, as well as the milan.seawulf.stonybrook.edu login nodes, starting at 10:00 AM on Wednesday June 14th and concluding by the end of business the same day.

During this maintenance window, the 28-core and GPU queues, as well as the login.seawulf.stonybrook.edu login nodes, will continue to be available.

We thank you for your patience while we perform these necessary preparations.

April 12, 2023

To allow the university to perform electrical maintenance on their circuits, the 24-core queues and the login node 'Login2' will be offline from Sunday evening, April 16th, through the end of the day on Tuesday, April 18th. No other queues will be affected, and the login node 'Login1' will remain accessible during this maintenance period.

We thank you for your patience while these necessary upgrades are implemented.

March 17, 2023

We have just been notified by the manufacturer of a critical firmware update that must be applied to SeaWulf to ensure we can continue to provide a robust computational environment. Out of an abundance of caution, we are taking an emergency maintenance window beginning at 3:30 PM today (roughly three hours from now). This window is expected to last about 24 hours.

During this maintenance window, all SeaWulf login nodes and queues, as well as the storage, will NOT be available.

We apologize for this unforeseen development, and thank you for your patience while we apply these critical firmware updates.

March 16, 2023

In order to allow the university to perform electrical maintenance on the campus data center, the 28-core and Tesla K80 GPU queues will be going offline at the end of business on March 21st. These queues will come back online once the maintenance concludes, by the end of business on March 22nd. No other queues will be affected, and the login nodes will remain accessible during this maintenance period.

We thank you for your patience while these necessary upgrades are implemented.

January 31, 2023

In order to perform the second phase of updates on the SeaWulf storage arrays, upgrade the networking, and perform electrical maintenance on the infrastructure, the SeaWulf cluster will be going offline starting at 9:00 AM on Wednesday, February 15th. The maintenance is expected to be completed by the end of business on Thursday, February 16th. During this maintenance window, all login nodes and compute nodes, as well as the storage array, will NOT be accessible.

We thank you for your patience while we perform these necessary updates.

November 14, 2022

In order to perform updates on the SeaWulf storage arrays, the SeaWulf cluster will be going offline starting at 9:00 AM on Tuesday, December 6th. The maintenance is expected to be completed by the end of business on Wednesday, December 7th. During this maintenance window, all login nodes and compute nodes, as well as the storage array, will NOT be accessible.

The second phase of this two-part maintenance is tentatively scheduled for January 4th and 5th.

We thank you for your patience while we perform these necessary updates.

September 26, 2022

Starting at 5:00 PM on Friday September 30th, the 40-core queues on the SeaWulf cluster will be going off-line for scheduled maintenance. All jobs, queued or running, on the 40-core queues at that time will need to be resubmitted at the end of the maintenance window, which is scheduled to conclude on Friday October 7th by the end of business.

The 24-core, 28-core and 96-core queues will be in operation throughout this time. Thank you for your patience while we perform these necessary maintenance operations.

September 21, 2022

The intel-stack module has been updated to provide compilers, MKL, and MPI from Intel oneAPI version 2022.2.

Older versions of the Intel modules are still available and can be loaded individually, but they are no longer part of the intel-stack module.
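
For example, using the standard Environment Modules commands (the older module name below is hypothetical; run module avail to see what is actually installed):

    # Load compilers, MKL, and MPI from Intel oneAPI 2022.2 in one step:
    module load intel-stack
    module list                   # confirm what was loaded

    # Older Intel modules are still installed but must be loaded individually:
    module avail intel            # list the available versions
    # module load intel/2020      # hypothetical example; pick a real name from the list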

Partners

Institute for Advanced Computational Science (IACS)

The mission of the IACS is to make sustained advances in the fundamental techniques of computation and in high-impact applications, with a vision of becoming an internationally recognized institute with vibrant multidisciplinary research and education programs and demonstrated economic benefit to New York State. Including students and staff, the IACS aims to grow to nearly 100 people in the near future. There are presently 10 faculty spanning chemistry, materials by design, condensed matter, astrophysics, atmospheric science, nanoscience, sociology, applied mathematics, and computer science. Approximately 20 additional faculty from diverse departments are affiliated with the institute.

Office of the Vice President for Research (OVPR)

The Office of the Vice President for Research helps strategically position the University to be successful in attracting external support for research from federal and other government sources, industry, private foundations, philanthropy, and through partnerships with allied organizations such as Brookhaven National Lab, Cold Spring Harbor, and others. It facilitates the process of applying for grants and contracts and manages awards to assure that research is carried out successfully and that the requirements of sponsoring agencies are fulfilled. It also provides technology tools to disseminate information, facilitate collaboration, and streamline research administration. 

University Libraries

Stony Brook Libraries are known for a wide range of print and digital resources and world-renowned special collections. The Stony Brook Libraries belong to the Association of Research Libraries (ARL), with a Health Sciences Center Library that is a member of the Association of Academic Health Sciences Libraries. The Libraries' collection exceeds 2 million volumes, including eBooks, print and electronic holdings of scholarly journal subscriptions, microforms, music recordings, and a sizable map collection. The University's Libraries stand as the largest academic research library on Long Island, serving as a resource in the local community, state-wide, and nationally through the National Network of the National Libraries of Medicine.

Service Providers

Internet2 (NYSERNet)

Stony Brook offers researchers high-capacity, high-speed connections to the Internet as a member of NYSERNet and Internet2.

ACCESS

Whether you're looking for advanced computational resources and outstanding cyberinfrastructure to take your research to the next level, to explore a career in advanced CI, or simply to experience the scientific discoveries enabled by supercomputers, ACCESS is the right place to start.

Brookhaven National Lab

Brookhaven National Lab serves as another service provider, acting as an extension of Stony Brook University for faculty.

Additional Information

There are no additional resources available for this service.

Please Contact

IACS Support System