High Performance Computing

Stony Brook University offers researchers a variety of high-performance computing resources designed to meet the specific demands of their projects.

High Performance Computing Clusters


SeaWulf is a computational cluster using top-of-the-line components from Penguin, DDN, Intel, Nvidia, Mellanox, and numerous other technology partners.  It is intended for members of the campus community, as well as industrial partners, and is located in the Computing Center.

Learn more about SeaWulf

Getting Started

Managed by a dedicated team of system administrators and students, SeaWulf is optimized for high performance and availability. To make the most of this resource, new users must review the guides and FAQs specific to SeaWulf, which provide critical instructions and best practices for using the cluster effectively. Ignoring these guidelines can disrupt other users' work and may result in account suspension, so it is essential to become familiar with these materials before starting computational tasks on SeaWulf.

New User Warnings

  • If you have never used a high-performance computing cluster, or are not familiar with SeaWulf, YOU WILL WANT to read and follow the instructions in the FAQ below (start with the Getting Started Guide!). It is common for new users to ignore this FAQ and simply try to run jobs without understanding what they are doing. Such carelessness can and WILL impact the many users running critical jobs on the cluster. If your actions compromise the health of SeaWulf, your account will be LOCKED, so please make sure you thoroughly read the guides below before you embark on running jobs.
  • Do NOT use the login nodes for computationally intensive work of any kind. This includes compiling software. If everyone runs CPU- or RAM-intensive processes on either login node, it will crash and prevent all other SeaWulf users from being able to login to the cluster. If you need to test or compile your code, please request an interactive job instead of testing it on the login nodes!
  • Do NOT use your home directory (/gpfs/home/NETID) to store large files as you may quickly exceed the directory’s storage limit (20 GB). It is best to place large or temporary files in your scratch directory (/gpfs/scratch/NETID), though please keep in mind that files and directories in scratch are automatically removed after 30 days. If your research group needs a persistent directory with a larger storage limit, consider requesting a shared project space.
  • Never submit a large number of jobs (greater than 5) without first running a small test case to make sure your code works as expected. Start slow and then ramp up with jobs once you are familiar with how things work.
  • SeaWulf is managed by a small number of System Administrators and students. If you encounter an issue or have problems running your job, please make sure you read through the FAQ items below before submitting a ticket to HPC Support.
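The practices above can be combined into a single small test job. The sketch below is illustrative only, assuming SeaWulf's Slurm scheduler: the partition name, module name, and program are placeholders, and NETID stands for your own NetID. Check the SeaWulf queue list and `module avail` for the real names before submitting.

```shell
#!/bin/bash
# Small test job sketch -- partition, module, and program names are
# hypothetical; consult the SeaWulf queue list for actual queue names.
#SBATCH --job-name=small-test
#SBATCH --partition=short-40core        # hypothetical queue for short test runs
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=4
#SBATCH --time=00:10:00
#SBATCH --output=/gpfs/scratch/NETID/test-%j.out  # large/temporary output belongs in scratch

module load gcc-stack                   # compiler + MPI stack (see `module avail`)
cd /gpfs/scratch/NETID/myproject        # work from scratch, not /gpfs/home (20 GB limit)

mpirun -np 4 ./my_program small_input   # verify correctness on a small case first
```

For compiling or interactively testing code, request an interactive session from the scheduler (for example, with `srun --pty bash` under Slurm) rather than running on a login node. Once the small test completes as expected, scale up gradually.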


June 27, 2024

The gcc-stack and intel-stack modules have been updated to provide the latest available compiler and MPI releases. For gcc-stack, this is GCC 13.2.0 and Mvapich 2.3.7. For Intel, this is the oneAPI 24.2 release.

The compilers and MPI previously provided by gcc-stack and intel-stack are still available as individual modules for those who may prefer to use them.
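A typical session using the updated stacks might look like the following sketch; the individual-module name in the comment is illustrative, so check `module avail` for the versions actually installed.

```shell
# See what is available, then load the updated GCC stack
module avail gcc
module load gcc-stack          # now provides GCC 13.2.0 and Mvapich 2.3.7
gcc --version                  # confirm the compiler version in use

# Alternatively, load the Intel oneAPI 24.2 stack:
# module load intel-stack

# Older releases remain available as individual modules, e.g.
# (version shown is illustrative -- check `module avail`):
# module load gcc/12.1.0
```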

May 31, 2024

We are excited to announce the publication of an article showcasing our team's work on the performance of the new Sapphire Rapids CPUs available in SeaWulf. The article also delves into the significant influence of high bandwidth memory on computational efficiency. This analysis is essential reading for those interested in cutting-edge advancements in high-performance computing, especially users of the new Sapphire Rapids nodes.

Read the article here.

May 30, 2024

In order to provide updated libraries and the latest functionality, the anaconda/3 module has been updated to the latest version. We recommend using this new version in most cases, but if you require the old version of the anaconda/3 module, it is available under the name anaconda/3-old.
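Switching between the two versions is a matter of swapping modules, as in this brief sketch:

```shell
module load anaconda/3         # the updated default version
python --version               # confirm which Python the module provides

# If your workflow still requires the previous release:
# module unload anaconda/3
# module load anaconda/3-old
```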

February 15, 2024

Members of our HPC Support team will be holding virtual office hours at the following times:

  • Wednesday 12 - 1 pm
  • Friday 11 am - 12 pm

If you have questions or need assistance troubleshooting a SeaWulf problem, you may use this link to attend the office hours during either of the above time slots.

February 15, 2024

In order to allow the university to perform a generator test on the campus data center, the 28-core, Tesla K80 GPU, Tesla P100, and Tesla V100 queues and login nodes login1 and login2 will be going offline for scheduled maintenance at 4 pm on Monday, March 11th. The maintenance is expected to conclude by lunchtime on Tuesday, March 12th.

We thank you for your patience while these necessary tests are conducted.

February 13, 2024

Four of our Intel Sapphire Rapids nodes have been updated to include 1 TB of DDR5 memory each. In order to simplify the user experience, the high bandwidth memory on these nodes has also been reconfigured from main memory to level 4 cache. These nodes are now accessible via the hbm-1tb-long-96core queue. For more information, please see our full list of SeaWulf queues.

February 07, 2024

UPDATE: the authentication issue described below has been resolved, and new connections to SeaWulf are no longer failing.

The campus Identity server, which we rely on for authenticating access to SeaWulf, is currently experiencing issues, resulting in new connections to our HPC environment failing.

September 14, 2023

We will be performing upgrades & maintenance on the SeaWulf storage on Monday, October 9th, starting at 9:00 AM. During this maintenance window, all SeaWulf login nodes and queues, as well as the storage, will NOT be available. The SeaWulf cluster is scheduled to return to normal operation by the end of business on Tuesday, October 10th.

We thank you for your patience while these necessary upgrades are completed.

August 12, 2023

We have just been notified of scheduled electrical maintenance at the Campus Data Center on Tuesday August 22nd. During these necessary electrical upgrades, the 28-core and GPU (K80) queues will be offline and jobs running on those queues will be terminated, starting at 9:00 AM on Tuesday the 22nd. The maintenance is expected to be completed by the end of business on the 22nd.

No other queues will be affected, and the login nodes will remain accessible during this maintenance period.

We thank you for your patience while these necessary upgrades are completed.

Frequently Asked Questions


Institute for Advanced Computational Science (IACS)

The mission of the IACS is to make sustained advances in the fundamental techniques of computation and in high-impact applications, with a vision of becoming an internationally recognized institute with vibrant multidisciplinary research and education programs and demonstrated economic benefit to New York State. Including students and staff, the IACS aims to grow to nearly 100 people in the near future. There are presently 10 faculty spanning chemistry, materials by design, condensed matter, astrophysics, atmospheric science, nano-science, sociology, applied mathematics, and computer science. Approximately 20 additional faculty from diverse departments are affiliated with the institute.

Office of the Vice President for Research (OVPR)

The Office of the Vice President for Research helps strategically position the University to be successful in attracting external support for research from federal and other government sources, industry, private foundations, philanthropy, and through partnerships with allied organizations such as Brookhaven National Lab, Cold Spring Harbor, and others. It facilitates the process of applying for grants and contracts and manages awards to ensure that research is carried out successfully and that the requirements of sponsoring agencies are fulfilled. It also provides technology tools to disseminate information, facilitate collaboration, and streamline research administration. 


University Libraries

Stony Brook Libraries are known for a wide range of print and digital resources and world-renowned special collections. The Stony Brook Libraries belong to the Association of Research Libraries (ARL), with a Health Sciences Center Library that is a member of the Association of Academic Health Sciences Libraries. The Libraries’ collection exceeds 2 million volumes, including eBooks, print and electronic holdings of scholarly journal subscriptions, microforms, music recordings, and a sizable map collection. The University’s Libraries stand as the largest academic research library on Long Island, serving as a resource in the local community, state-wide, and nationally through the National Network of Libraries of Medicine.


Service Providers

Internet2 (NYSERNet)

Stony Brook offers researchers high-capacity, high-speed connections to the Internet as a member of NYSERNet and Internet2.



Whether you’re looking for advanced computational resources – and outstanding cyberinfrastructure – to take your research to the next level, to explore a career in advanced CI or just to experience the amazing scientific discoveries enabled by supercomputers, you’re in the right place.


Brookhaven National Lab

Brookhaven National Lab serves as an additional service provider, acting as an extension of Stony Brook University for faculty.


For More Information Contact

IACS Support System

Still Need Help? The best way to report your issue or make a request is by submitting a ticket.

Request Access or Report an Issue