Understanding SeaWulf

SeaWulf is a computational cluster built from top-of-the-line components from Penguin, DDN, Intel, Nvidia, Mellanox, and numerous other technology partners. Its name is a portmanteau of "Seawolf" and "Beowulf," the name of one of the first high performance computing clusters.

Audience: Faculty, Postdocs, Researchers, Staff and Students

This KB Article References: High Performance Computing
Last Updated: October 07, 2021
  • 330 nodes and 9,552 cores, with peak performance of ~536 TFLOP/s available for research computation (the core count is tallied in the sketch following this list).
    • 164 compute nodes from Penguin, each with two Intel Xeon E5-2683v3 CPUs.
      • These CPUs are codenamed “Haswell,” offer 14 cores each, and operate at a base speed of 2.0 Gigahertz. SeaWulf has 4,592 of these cores in total, with a peak Double Precision (DP) performance of 240 Teraflops* (240,000,000,000,000 Floating-point Operations Per Second, or FLOPS).
      • 8 of the compute nodes contain 4 GPUs each.
        • A total of 32 Nvidia Tesla K80 accelerators, each containing two GK210 GPUs, for 64 GPUs and 159,744 CUDA cores.
      • Each node has 128 Gigabytes of DDR4 memory (16 GB reserved for the system), configured as 8 memory modules operating at 2,133 Mega-Transfers per second, for a combined memory bandwidth per node of 133.25 Gigabytes** per second.
    • 64 compute nodes, each with 192 Gigabytes of RAM and two Intel Xeon Gold 6148 CPUs offering 20 cores each at a base speed of 2.4 Gigahertz.
    • 100 compute nodes, each with two Intel Xeon E5-2690v3 CPUs offering 12 cores each at a base speed of 2.6 Gigahertz.
    • The system also features a large-memory node with 3 Terabytes of DDR4 RAM and 4 Intel Xeon E7-8870v3 processors with 18 cores each operating at 2.1 Gigahertz, for a total of 72 cores and 144 threads (via Hyper-Threading).
    • Additionally, the cluster includes two login nodes.
  • The nodes are interconnected via a high-speed InfiniBand® (IB) FDR network from Mellanox® Technologies, allowing transfer speeds of up to 7 Gigabytes of data per second.
  • The storage array is a GPFS solution with ~1 Petabyte of SAS spinning disk and ~50 Terabytes of SSD, IB-attached to two Network Shared Disk (NSD) servers. The SSD portion of this system can sustain more than 8,000,000 4K random-read Input/Output Operations Per Second (IOPS) and sequential reads of over 80 Gigabytes per second.
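
As a quick sanity check on the core count quoted at the top of the list, the per-node figures above can be tallied in a few lines of Python. The sketch below is purely illustrative (the grouping and variable names are our own, not part of any SeaWulf tooling); the 9,552-core total corresponds to the three main compute-node groups, with the large-memory node and login nodes counted in addition.

```python
# Illustrative tally of SeaWulf's research-computation cores, using the
# per-node figures from the hardware list above.
compute_groups = {
    # group: (number of nodes, cores per node)
    "Xeon E5-2683v3 (Haswell, 2 x 14 cores)": (164, 2 * 14),
    "Xeon Gold 6148 (2 x 20 cores)": (64, 2 * 20),
    "Xeon E5-2690v3 (2 x 12 cores)": (100, 2 * 12),
}

total_cores = sum(nodes * cores for nodes, cores in compute_groups.values())
print(f"Cores in the main compute-node groups: {total_cores:,}")  # 9,552

# The large-memory node (4 x Xeon E7-8870v3, 18 cores each) contributes a further
# 72 cores (144 threads with Hyper-Threading), and two login nodes complete the cluster.
```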

*   28 Cores per node x 164 nodes x 2.0 Gigahertz per core x 16 Double Precision FLOPS per cycle, plus 32 Nvidia K80 GPUs at 2.91 DP Teraflops each.
**  Memory clock of 1,066 Megahertz x 2 (Double Data Rate) x 64-bit memory bus width x 4 memory interfaces per CPU (quad-channel) x 2 CPUs per node, divided by 8 bits per byte.
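
The footnoted arithmetic is easy to reproduce. The short Python sketch below simply restates both calculations using the figures quoted above (16 double-precision FLOPS per cycle per Haswell core, 2.91 DP Teraflops per K80); it prints the memory bandwidth with strictly decimal (SI) prefixes, so the result lands a few Gigabytes per second above the quoted 133.25 figure, which rounds through a slightly different unit conversion.

```python
# Footnote *: peak double-precision (DP) performance of the Haswell partition.
cpu_flops = 164 * 28 * 2.0e9 * 16  # nodes x cores per node x clock (Hz) x DP FLOPS per cycle
gpu_flops = 32 * 2.91e12           # 32 Nvidia K80s at 2.91 DP Teraflops each
print(f"Peak DP performance: {(cpu_flops + gpu_flops) / 1e12:.0f} TFLOPS")  # ~240

# Footnote **: per-node memory bandwidth.
# 1,066 MHz memory clock x 2 (DDR) x 64-bit bus x 4 channels per CPU x 2 CPUs, in bytes per second.
bytes_per_second = 1066e6 * 2 * 64 * 4 * 2 / 8
print(f"Memory bandwidth per node: {bytes_per_second / 1e9:.1f} GB/s")  # ~136 GB/s (SI prefixes)
```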


Getting Help


The Division of Information Technology provides support for all of our services. If you require assistance, please submit a support ticket through the IT Service Management system.


For More Information Contact


IACS Support System