High Performance Computing

Stony Brook University offers researchers several high performance computing options, depending on their needs.

High Performance Computing Clusters

Stony Brook University researchers may access the following two HPC clusters:

LI-red

LI-red is a computational cluster using top-of-the-line components from Cray, IBM, Intel, Mellanox, and other technology partners. It is intended for members of the campus community, as well as industrial partners, and is located in the Institute for Advanced Computational Science's (IACS) server room.

Learn more about LI-red

SeaWulf

SeaWulf is a computational cluster using top-of-the-line components from Penguin, DDN, Intel, Nvidia, Mellanox, and numerous other technology partners. It is intended for members of the campus community, as well as industrial partners, and is located in the Computing Center.

Learn more about SeaWulf

New User Warnings

  • If you have never used a high performance computing cluster, or are not familiar with SeaWulf, YOU WILL WANT to read and follow the instructions in the FAQ below (start with the Getting Started Guide!). New users commonly ignore this FAQ and simply try to run jobs without understanding what they are doing. Such carelessness can and WILL impact the many users who are running critical jobs on the cluster. If your actions compromise the health of SeaWulf, your account will be LOCKED, so please make sure you thoroughly read the guides below before you run any jobs.
  • Do NOT use the login node for computationally intensive work of any kind, including compiling software. If everyone runs CPU- or RAM-intensive processes on the login node, it will crash and prevent all other SeaWulf users from logging in to the cluster. If you need to test or compile your code, please request an interactive job instead of running it on the login node (see the sketch after this list).
  • Do NOT use your home directory (/gpfs/home/NETID) to store large files, as you may quickly exceed the directory's storage limit (20 GB). It is best to place large or temporary files in your scratch directory (/gpfs/scratch/NETID), though keep in mind that files and directories in scratch are automatically removed after 30 days. If your research group needs a persistent directory with a larger storage limit, consider requesting a shared project space.
  • Never submit a large number of jobs (more than 5) without first running a small test case to make sure your code works as expected (see the sketch after this list). Start small and ramp up once you are familiar with how things work.
  • SeaWulf is managed by a small number of system administrators and students. If you encounter an issue or have problems running your job, please make sure you read through the FAQ items below before submitting a ticket to HPC Support.
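If you are new to the workflow above, the following sketch illustrates the recommended pattern: open an interactive session for compiling and testing, then submit a single small batch job before scaling up. It assumes SeaWulf uses the SLURM scheduler, and the queue name, time limit, module, and program names are all illustrative placeholders; check the Getting Started Guide, "sinfo", and "module avail" for the real values.

    # Open an interactive session for compiling or testing code,
    # instead of working on the login node (queue name is illustrative):
    srun -p short-28core -N 1 --pty bash

Then save a minimal batch script (here called test_job.slurm), submit it once, and verify the output before submitting jobs in bulk:

    #!/bin/bash
    #SBATCH --job-name=test_case
    #SBATCH -p short-28core            # illustrative queue; check "sinfo"
    #SBATCH -N 1
    #SBATCH --time=00:10:00
    cd /gpfs/scratch/$USER             # large/temporary files belong in scratch, not home
    module load intel/2019             # illustrative module; see "module avail"
    ./my_program input.dat             # hypothetical executable and input

    # Submit the single test job and watch its state in the queue:
    sbatch test_job.slurm
    squeue -u $USER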

Announcements

September 18, 2018

The release versions of the 2019 Intel compilers are now available as modules on SeaWulf. Please use the "module avail" command to view all available modules.
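As a minimal sketch of the module workflow (the exact module name below is an assumption; take the real one from the "module avail" listing):

    module avail                       # list every module on the cluster
    module avail intel                 # narrow the listing to Intel modules
    module load intel/2019             # illustrative name; use one from the listing
    icc --version                      # confirm the compiler is now on your PATH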

August 13, 2018

The release versions of the 2018 Intel compilers are now available as modules on SeaWulf. Please use the "module avail" command to view all available modules.

August 09, 2018

The electrical maintenance in the Campus Data Center is complete; the SeaWulf compute nodes have been powered up, and the 28-core queues have been enabled.

August 09, 2018

Due to a failed electrical component that feeds the Campus Data Center, additional time will be needed to complete the capital maintenance. We have been advised to anticipate that normal power to the Data Center will be restored by late this afternoon. Once power has been restored, we will bring up the SeaWulf compute nodes and re-enable the 28-core queues.

July 26, 2018

Due to capital maintenance on the Campus Data Center’s electrical switch gear, the SeaWulf cluster’s job queues will be disabled starting Wednesday August 8th. The login nodes & LI-red cluster queues (long-24core, short-24core, large-24core) will NOT be affected by this outage.

The SeaWulf compute nodes are expected to go offline by the end of business on Wednesday August 8th and come back online by noon on Thursday August 9th. We apologize for this inconvenience and thank you for your understanding while these essential updates are performed to the electrical infrastructure.

July 19, 2018

The Bioinformatics Software FAQ page has been updated to reflect SeaWulf's current bioinformatics software stack. 

May 07, 2018

Due to preventive maintenance on the Campus Data Center’s generator, the SeaWulf cluster’s job queues will be disabled starting Sunday May 20th. The login node & cluster will go offline at 4:30 PM on Monday May 21st. SeaWulf is expected to be back online by the end of business on Tuesday May 22nd.

March 26, 2018

A new FAQ item explaining how to quickly transfer the contents of your former LI-red home directory to SeaWulf is now available.

March 19, 2018

Version 3.4.4 of the R statistical computing and graphics software is now available on SeaWulf and can be accessed by loading the "R/3.4.4" module.
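For non-interactive use inside a batch job, the module can be combined with Rscript; the script name below is hypothetical:

    module load R/3.4.4                # module name taken from the announcement above
    R --version                        # confirm that version 3.4.4 is loaded
    Rscript my_analysis.R              # hypothetical script, run non-interactively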

February 26, 2018

Starting the morning of March 12th, the LI-red cluster’s compute nodes will go down for a scheduled merge into the SeaWulf cluster as part of ongoing upgrades. The merge is expected to be completed on March 19th. During the upgrade window, the SeaWulf cluster will remain operational. Once the merge is complete, LI-red accounts can only be used to retrieve data from the LI-red cluster’s storage array.

User accounts on LI-red will NOT be automatically migrated to SeaWulf. If you do not already have an account on the SeaWulf cluster, please put in a SeaWulf account request via our ticketing system to avoid interruptions to your workflows.

January 17, 2018

The latest release of the Genome Analysis Toolkit (GATK) is now available on SeaWulf and can be accessed by loading the "GATK/4.0.0" module.  

December 19, 2017

A GPU- and OpenMP-accelerated version of LAMMPS is now available as the module lammps/gpu/11Aug17.
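As a hedged sketch, the GPU package can be enabled at run time with LAMMPS's standard suffix flags; the input file below is hypothetical, and the binary name (lmp) may differ on SeaWulf:

    module load lammps/gpu/11Aug17     # module name taken from the announcement above
    lmp -sf gpu -pk gpu 1 -in in.lj    # apply the gpu suffix to styles, using one GPU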

November 20, 2017

In order to provide a more robust storage system for the SeaWulf cluster, we will be performing system-wide firmware upgrades, followed by a full upgrade of the storage sub-systems and file servers. This scheduled maintenance is specifically geared towards resolving recurring issues with the SeaWulf Storage Array and GPFS.

During these upgrades, the SeaWulf cluster, including the login node, will not be available; user data (home and project directories) will not be affected. In anticipation of this outage, we will pause new job submissions starting 12:00 PM (noon) Saturday November 25th. The cluster will be down from Monday November 27th at 10:00 AM until the end of the business day on Wednesday, November 29th.

November 13, 2017

The release versions of the 2018 Intel compilers are now available as modules on SeaWulf. Please use the "module avail" command to view all available modules.

Partners

Institute for Advanced Computational Science (IACS)

The mission of the IACS is to make sustained advances in the fundamental techniques of computation and in high-impact applications, with a vision that by 2017 it will be an internationally recognized institute with vibrant multidisciplinary research and education programs and demonstrated economic benefit to New York State. Including students and staff, the IACS aims to grow to close to 100 people by 2018. There are presently 10 faculty spanning chemistry, materials by design, condensed matter, astrophysics, atmospheric science, nanoscience, sociology, applied mathematics, and computer science. Approximately 20 more faculty from diverse departments are affiliated with the institute.

Office of the Vice President for Research (OVPR)

The Office of the Vice President for Research helps strategically position the University to be successful in attracting external support for research from federal and other government sources, industry, private foundations, philanthropy, and through partnerships with allied organizations such as Brookhaven National Lab, Cold Spring Harbor, and others. It facilitates the process of applying for grants and contracts and manages awards to assure that research is carried out successfully and that the requirements of sponsoring agencies are fulfilled. It also provides technology tools to disseminate information, facilitate collaboration, and streamline research administration. 

University Libraries

Stony Brook Libraries are known for a wide range of print and digital resources and world-renowned special collections. The Stony Brook Libraries belong to the Association of Research Libraries (ARL), with a Health Sciences Center Library that is a member of the Association of Academic Health Sciences Libraries. The Libraries’ collections exceed 2 million volumes, including eBooks, print and electronic holdings of scholarly journal subscriptions, microforms, music recordings, and a sizable map collection. The University’s Libraries stand as the largest academic research library on Long Island, serving as a resource in the local community, state-wide, and nationally through the National Network of Libraries of Medicine.

Service Providers

Internet2 (NYSERNet)

Stony Brook offers researchers high-capacity, high-speed connections to the Internet as a member of NYSERNet and Internet2.

XSEDE

XSEDE is a single virtual system that scientists can use to interactively share computing resources, data and expertise. People around the world use these resources and services — things like supercomputers, collections of data and new tools — to improve our planet.

Brookhaven National Lab

Brookhaven National Lab acts as another service provider for faculty, functioning as an extension of Stony Brook University.

Additional Information


There are no additional resources available for this service.

Please Contact


IACS Support System