Understanding SeaWulf

SeaWulf is a computational cluster built with top-of-the-line components from Penguin, DDN, Intel, Nvidia, Mellanox, and numerous other technology partners. Its name is a portmanteau of "Seawolf" and "Beowulf," the latter being the name of one of the first high-performance computing clusters.

This KB Article References: High Performance Computing
This Information is Intended for: Faculty, Postdocs, Researchers, Staff, Students
Last Updated: June 28, 2018

SeaWulf's major hardware components are:
  • 164 compute nodes from Penguin, each with two Intel Xeon E5-2683v3 CPUs.
    • 8 of the compute nodes contain 4 GPUs each.
    • Total of 32 Nvidia Tesla K80 accelerators; each K80 contains two GK210 GPUs, for 64 GPUs and 159,744 CUDA cores in total.
  • The CPUs are codenamed “Haswell,” offer 14 cores each, and operate at a base speed of 2.0 Gigahertz. The SeaWulf cluster has 4,592 of these cores in total, for a peak Double Precision (DP) rate of 240 Teraflops* (240,000,000,000,000 Floating-point Operations Per Second, or FLOPS); a worked check of the starred figures appears after the footnotes below.
  • Each node has 128 Gigabytes of DDR4 Memory, configured as 8 memory modules each operating at 2,133 Mega-Transfers per second, for a combined memory bandwidth per node of 133.25 Gigabytes** per second.
  • The nodes are interconnected via a high-speed InfiniBand® (IB) network by Mellanox® Technologies operating at 40 Gigabits per second, allowing transfer of ~5 Gigabytes of data per second.
  • The storage array is a DDN GPFS solution comprising 180 x 6 Terabyte nearline SAS disks IB-attached to two Network Shared Disk (NSD) servers, plus 5 x 1,600 Gigabyte Solid State Disks (SSDs) acting as the metadata pool for GPFS. This storage system can sustain over 13,000*** 4K random-read Input/Output Operations per Second (IOPS) and sequential transfers at over 14 Gigabytes per second.
  • A Large Memory node, configured with 96 x 32 Gigabyte DDR4 memory modules operating at 1,600 Mega-Transfers per second, provides 3 Terabytes of RAM (3,072 Gigabytes). This system has 4 Intel E7-8870v3 processors with 18 cores each operating at 2.1 Gigahertz, for a total of 72 cores and 144 threads (via Hyper-Threading).
  • The cloud component of the cluster is provided by 3 OpenStack controllers from Penguin, orchestrating up to 20% of the compute nodes as hypervisors to host virtual machines (VMs) instantiated via the Self-Service Portal.

*     28 cores per node x 164 nodes x 2.0 GigaHertz per core x 16 Double Precision FLOPs per cycle, plus 32 Nvidia K80 GPUs @ 2.91 DP Teraflops each.
**   Memory Clock of 1,066 MegaHertz x 2 (Double Data Rate) x 64-bit Memory Bus width x 4 Memory interfaces per CPU (Quad-channel) x 2 CPUs per Node / 8 bits per byte.
*** RAID6 (8+2) x 180 drives, in a 4K random-read scenario consisting of 99% reads, QD=1, 32 threads.
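
For readers who want to check the arithmetic, the short Python sketch below re-derives the starred figures from the footnote formulas above. Every input comes from this article itself, with two labeled assumptions: the CUDA-core count per GK210 chip (2,496, the published Tesla K80 specification) and the usable-storage estimate, which simply applies the RAID6 (8+2) ratio.

    # Re-deriving SeaWulf's headline numbers from the footnotes above.

    # * Peak double-precision (DP) FLOPS
    cpu_tflops = 28 * 164 * 2.0e9 * 16 / 1e12  # cores/node x nodes x Hz x DP FLOPs/cycle
    gpu_tflops = 32 * 2.91                     # 32 K80 accelerators x 2.91 DP teraflops each
    print(f"Peak DP: {cpu_tflops + gpu_tflops:.0f} teraflops")   # -> 240

    # CUDA cores: each K80 packages two GK210 GPUs; 2,496 cores per GK210
    # is the published Tesla K80 spec, not a figure stated in this article.
    print(f"CUDA cores: {32 * 2 * 2496:,}")                      # -> 159,744

    # ** Memory bandwidth per node. The 133.25 figure is reproduced exactly
    # when megabytes are divided by 1,024.
    mt_per_s = 1066 * 2               # 1,066 MHz clock x 2 (DDR) = 2,132 MT/s (~2,133)
    mb_per_s = mt_per_s * (64 // 8)   # x 8 bytes per transfer (64-bit bus), per channel
    node_mb_per_s = mb_per_s * 4 * 2  # x 4 channels per CPU x 2 CPUs per node
    print(f"Memory bandwidth: {node_mb_per_s / 1024:.2f} GB/s")  # -> 133.25

    # InfiniBand: 40 gigabits per second / 8 bits per byte = 5 gigabytes per second
    print(f"IB throughput: ~{40 // 8} GB/s")

    # *** RAID6 (8+2): 8 of every 10 drives hold data, so usable capacity is
    # roughly 180 x 6 TB x 0.8 (an inference from the footnote, not stated above).
    print(f"Approx. usable storage: {180 * 6 * 0.8:.0f} TB")     # -> 864

    # Cloud component: up to 20% of the 164 compute nodes can act as hypervisors.
    print(f"Max hypervisor nodes: ~{int(164 * 0.20)}")           # -> 32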

Getting Help

The Division of Information Technology provides support for all of our services. If you require assistance, please submit a support ticket through the IT Service Management system.

Submit A Ticket

For More Information Contact

IACS Support System