High Performance Computing

Stony Brook University offers researchers a variety of high-performance computing resources designed to meet the specific demands of their projects.

High Performance Computing Clusters

SeaWulf

SeaWulf is a computational cluster built with top-of-the-line components from Penguin, DDN, Intel, Nvidia, Mellanox, and numerous other technology partners. It is intended for members of the campus community, as well as industrial partners, and is located in the Computing Center.

Learn more about SeaWulf

Getting Started

SeaWulf is managed by a dedicated team of System Administrators and students and is tuned for high performance and availability. To make the most of the resource, new users should thoroughly review the SeaWulf guides and FAQs, which provide critical instructions and best practices for using the cluster effectively. Ignoring these guidelines can disrupt other users' work and may result in account suspension, so please familiarize yourself with these materials before starting computational tasks on SeaWulf.



New User Warnings

  • If you have never used a high-performance computing cluster, or are not familiar with SeaWulf, YOU WILL WANT to read and follow the instructions in the FAQ below (start with the Getting Started Guide!). New users commonly ignore this FAQ and simply try to run jobs without understanding what they are doing. Such carelessness can and WILL impact the many users currently running critical jobs on the cluster. If your actions compromise the health of SeaWulf, your account will be LOCKED, so please make sure you thoroughly read the guides below before you embark on running jobs.
  • Do NOT use the login nodes for computationally intensive work of any kind, including compiling software. CPU- or RAM-intensive processes on either login node can crash it and prevent all other SeaWulf users from logging in to the cluster. If you need to test or compile your code, please request an interactive job instead of running it on the login nodes (see the sketch after this list)!
  • Do NOT use your home directory (/gpfs/home/NETID) to store large files as you may quickly exceed the directory’s storage limit (20 GB). It is best to place large or temporary files in your scratch directory (/gpfs/scratch/NETID), though please keep in mind that files and directories in scratch are automatically removed after 30 days. If your research group needs a persistent directory with a larger storage limit, consider requesting a shared project space.
  • Never submit a large number of jobs (more than 5) without first running a small test case to make sure your code works as expected. Start small and ramp up once you are familiar with how things work; a minimal test submission is sketched after this list.
  • SeaWulf is managed by a small number of System Administrators and students. If you encounter an issue or have problems running your job, please make sure you read through the FAQ items below before submitting a ticket to HPC Support.
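
To make these warnings concrete, here is a minimal sketch of the recommended workflow, assuming SeaWulf's Slurm scheduler. The queue name short-40core, the script name test_job.slurm, and the executable my_program are illustrative placeholders; substitute the partitions and programs that actually apply to you (see the queue list in the FAQ).

    # Request an interactive session for testing or compiling,
    # rather than running on a login node (queue name is a placeholder):
    srun -p short-40core -N 1 --time=00:30:00 --pty bash

    # Keep large or temporary files in scratch, not your 20 GB home:
    cd /gpfs/scratch/$USER

    # test_job.slurm -- a small single-task test case:
    #!/bin/bash
    #SBATCH --partition=short-40core   # placeholder queue name
    #SBATCH --ntasks=1
    #SBATCH --time=00:15:00
    #SBATCH --output=test_%j.out
    ./my_program small_input.dat       # placeholder for your own executable

    # Submit the test and monitor it before scaling up:
    sbatch test_job.slurm
    squeue -u $USER

Only after a test like this completes successfully should you ramp up to larger job counts.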

Announcements

October 23, 2024

We have just been notified of scheduled electrical maintenance that will be performed on the circuits feeding the 96-core HBM nodes and the Xeonmax login node on Wednesday, November 6th. In anticipation of this maintenance, the 96-core HBM queues will be disabled starting at 4:00 PM on Tuesday, November 5th. No other queues will be affected. We anticipate the 96-core HBM queues and Xeonmax to be back up on the afternoon of Wednesday, November 6th, pending timely completion of the electrical maintenance.

In addition, the DoIT networking team will be performing maintenance that will disrupt connections to the SeaWulf login servers, Milan1 & 2, and the Open OnDemand Portal on November 6th, between 6 and 7 am. The network maintenance is anticipated to last only a few minutes and will not impact running or queued jobs.

We thank you for your patience while these necessary maintenance steps are performed.

September 5, 2024

Virtual office hours are resuming for the Fall semester!

Members of our HPC Support team will be holding virtual office hours at the following times:

  • Monday 2 - 3 pm
  • Friday 1 - 2 pm

If you have questions or need assistance troubleshooting a SeaWulf problem, you may use this link to attend the office hours during either of the above time slots.

September 4, 2024

In order to allow the university to perform a generator test on the campus data center, the 28-core, Tesla K80 GPU, Tesla P100, and Tesla V100 queues and login nodes login1 and login2 will be going offline for scheduled maintenance at 4 pm on Monday, October 14th. The maintenance is expected to conclude by lunchtime on Tuesday, October 15th.

We thank you for your patience while these necessary tests are conducted.

August 2, 2024

Due to a momentary power outage in the Laufer Center building, the HBM Xeon Max nodes briefly lost power. They are currently being rebooted. Jobs that were interrupted will need to be resubmitted. We apologize for this disruption and thank you for your patience.

Update: all HBM nodes have been rebooted and are available for jobs.  The 28-core nodes also briefly lost power and are being rebooted.  Once again, we thank you for your patience while we work to resolve these issues.

June 27, 2024

The gcc-stack and intel-stack modules have been updated to provide the latest available compiler and MPI releases. For gcc-stack, this is GCC 13.2.0 and Mvapich 2.3.7. For Intel, this is the oneAPI 24.2 release.

The compilers and MPI previously provided by gcc-stack and intel-stack are still available as individual modules for those who may prefer to use them.
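
As a quick illustration, a typical build with the updated stack might look like the following sketch (the module name comes from this announcement; the source file and compiler flags are placeholders):

    module load gcc-stack          # provides GCC 13.2.0 and Mvapich 2.3.7
    mpicc -O2 -o my_app my_app.c   # compile an MPI C program with the new stack

    # To keep using an older toolchain instead, find and load its individual module:
    module avail gcc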

May 31, 2024

We are excited to announce the publication of an article showcasing our team's work on the performance of the new Sapphire Rapids CPUs available in SeaWulf. The article also delves into the significant influence of high bandwidth memory on computational efficiency. This analysis is essential reading for anyone interested in cutting-edge advancements in high-performance computing, and especially in using the new Sapphire Rapids nodes.

Read the article here.

May 30, 2024

In order to provide updated libraries and the latest functionality, the anaconda/3 module has been updated to the latest version. We recommend using this new version in most cases, but if you require the old version of the anaconda/3 module, it is available under the name anaconda/3-old.
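
For example (a sketch using standard module commands; the module names are those given above):

    module load anaconda/3     # the updated default Anaconda environment
    python --version           # confirm which interpreter is active

    # If your workflow still depends on the previous release:
    # module load anaconda/3-old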

February 15, 2024

Members of our HPC Support team will be holding virtual office hours at the following times:

  • Wednesday 12 - 1 pm
  • Friday 11 am - 12 pm

If you have questions or need assistance troubleshooting a SeaWulf problem, you may use this link to attend the office hours during either of the above time slots.

February 15, 2024

In order to allow the university to perform a generator test on the campus data center, the 28-core, Tesla K80 GPU, Tesla P100, and Tesla V100 queues and login nodes login1 and login2 will be going offline for scheduled maintenance at 4 pm on Monday, March 11th. The maintenance is expected to conclude by lunchtime on Tuesday, March 12th.

We thank you for your patience while these necessary tests are conducted.

February 13, 2024

Four of our Intel Sapphire Rapids nodes have been updated to include 1 TB of DDR5 memory each. In order to simplify the user experience, the high bandwidth memory on these nodes has also been reconfigured from main memory to level 4 cache. These nodes are now accessible via the hbm-1tb-long-96core queue. For more information, please see our full list of SeaWulf queues.
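
To target these nodes, point your batch script at the new queue, for example (a minimal sketch; every directive other than the partition name is a generic placeholder, so adjust resources and walltime to your job):

    #!/bin/bash
    #SBATCH --partition=hbm-1tb-long-96core   # the updated 1 TB DDR5 nodes
    #SBATCH --nodes=1
    #SBATCH --ntasks-per-node=96              # these nodes provide 96 cores each
    #SBATCH --time=24:00:00                   # placeholder walltime
    ./memory_intensive_app                    # placeholder for your executable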

February 07, 2024

UPDATE: the authentication issue described below has been resolved, and new connections to SeaWulf are no longer failing.

The campus Identity server, which we rely on for authenticating access to SeaWulf, is currently experiencing issues, resulting in new connections to our HPC environment failing.

Frequently Asked Questions

Partners

Institute for Advanced Computational Science (IACS)

The mission of the IACS is to make sustained advances in the fundamental techniques of computation and in high-impact applications, with a vision of becoming an internationally recognized institute with vibrant multidisciplinary research and education programs and demonstrated economic benefit to New York State. Including students and staff, the IACS aims to grow to nearly 100 people in the near future. There are presently 10 faculty spanning chemistry, materials by design, condensed matter, astrophysics, atmospheric science, nanoscience, sociology, applied mathematics, and computer science. Approximately 20 additional faculty from diverse departments are affiliated with the institute.

Office of the Vice President for Research (OVPR)

The Office of the Vice President for Research helps strategically position the University to be successful in attracting external support for research from federal and other government sources, industry, private foundations, philanthropy, and through partnerships with allied organizations such as Brookhaven National Lab, Cold Spring Harbor, and others. It facilitates the process of applying for grants and contracts and manages awards to ensure that research is carried out successfully and that the requirements of sponsoring agencies are fulfilled. It also provides technology tools to disseminate information, facilitate collaboration, and streamline research administration. 


University Libraries

Stony Brook Libraries are known for a wide range of print and digital resources and world-renowned special collections. The Stony Brook Libraries belong to the Association of Research Libraries (ARL), with a Health Sciences Center Library that is a member of the Association of Academic Health Sciences Libraries. The Libraries' collection exceeds 2 million volumes, including eBooks, print and electronic holdings of scholarly journal subscriptions, microforms, music recordings, and a sizable map collection. The University's Libraries stand as the largest academic research library on Long Island, serving the local community, the state, and the nation through the Network of the National Library of Medicine.


Service Providers

Internet2 (NYSERNet)

Stony Brook offers researchers high-capacity, high-speed connections to the Internet as a member of NYSERNet and Internet2.

ACCESS

ACCESS provides advanced computational resources and outstanding cyberinfrastructure for researchers taking their work to the next level, for those exploring a career in advanced CI, and for anyone who wants to experience the scientific discoveries enabled by supercomputers.

Brookhaven National Lab

Brookhaven National Lab serves as another service provider for faculty, acting as an extension of Stony Brook University.

For More Information Contact


IACS Support System

Still Need Help? The best way to report your issue or make a request is by submitting a ticket.

Request Access or Report an Issue