Understanding LI-red

LI-red is a computational cluster built from top-of-the-line components from Cray, IBM, Intel, Mellanox, and numerous other technology partners.


This KB Article References: High Performance Computing
This Information is Intended for: Faculty, Researchers, Staff
Last Updated: October 12, 2017


  • 100 compute nodes from Cray, each with two Intel Xeon E5-2690v3 CPUs.
  • These CPUs are codenamed “Haswell”, offer 12 cores each, and operate at a base clock of 2.6 Gigahertz. The LI-red cluster has 2,400 of these cores in total and a peak of 100 Teraflops* (100,000,000,000,000 FLoating-point Operations Per Second, or FLOPS); the short sketch after the footnotes below walks through this arithmetic.
  • Each node has 128 Gigabytes of DDR4 memory, configured as 8 memory modules each operating at 2,133 Mega-Transfers per second, for a combined memory bandwidth per node of 133.25 Gigabytes** per second.
  • The nodes are interconnected via a high-speed InfiniBand® network from Mellanox® Technologies operating at 56 Gigabits per second, allowing ~7 Gigabytes of data to be transferred each second.
  • The storage array is an IBM GPFS storage solution composed of 160x 3 Terabyte nearline SAS disks fiber-attached to two Network Shared Disk (NSD) servers, plus 10x 400 Gigabyte Solid State Drives (SSDs) acting as the metadata pool for GPFS. This storage system can sustain more than 11,000*** 4K random-read Input/Output Operations Per Second (IOPS) and sequential transfers of over 4 Gigabytes per second.
  • A large-memory node is configured with 96 x 32 Gigabyte DDR4 memory modules operating at 1,600 Mega-Transfers per second, giving 3 Terabytes of RAM (3,072 Gigabytes). This system has four Intel Xeon E7-8870v3 processors with 18 cores each operating at 2.1 Gigahertz, for a total of 72 cores and 144 threads (via Hyper-Threading).
  • The login node is equipped with the same CPU and memory configuration as the compute nodes, supplemented with dual 800 Gigabyte Intel Solid State Drives to maximize local IOPS. This node is attached to the university network over a port-channeled 20 Gigabit per second (2 x 10 Gigabit) network connection.

*     24 cores per node x 100 nodes x 2.6 Gigahertz per core x 16 double-precision FLOPS per cycle
**   Memory clock of 1,066 Megahertz x 2 (Double Data Rate) x 64-bit memory bus width x 4 memory interfaces per CPU (quad-channel) x 2 CPUs per node / 8 bits per byte
*** RAID6 (8+2) x 160 drives, in a 4K random-read scenario consisting of 99% reads
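
To make the starred figures easier to verify, the minimal Python sketch below reproduces the arithmetic behind the first two footnotes and the ~7 Gigabyte per second interconnect figure. It is illustrative only: the variable names are ours, and every input is a number taken directly from the list above.

    # Minimal sketch of the back-of-the-envelope arithmetic behind the starred
    # figures above; all inputs are taken from the spec list, nothing is measured.

    # *   Peak double-precision FLOPS of the 100-node compute partition
    nodes = 100
    cores_per_node = 2 * 12            # two 12-core Xeon E5-2690v3 CPUs per node
    clock_hz = 2.6e9                   # 2.6 GHz base clock
    dp_flops_per_cycle = 16            # Haswell: two 256-bit FMA units per core
    peak = nodes * cores_per_node * clock_hz * dp_flops_per_cycle
    print(f"Peak compute: {peak / 1e12:.1f} TFLOPS")        # ~99.8, i.e. ~100 TFLOPS

    # **  Per-node memory bandwidth
    memory_clock_hz = 1.066e9          # 1,066 MHz clock -> ~2,133 MT/s with DDR
    transfers_per_clock = 2            # Double Data Rate
    channel_width_bytes = 64 // 8      # 64-bit memory bus per channel
    channels_per_node = 4 * 2          # quad-channel per CPU, two CPUs per node
    bandwidth = (memory_clock_hz * transfers_per_clock
                 * channel_width_bytes * channels_per_node)
    print(f"Memory bandwidth: {bandwidth / 1e9:.1f} GB/s per node")   # ~136.4 GB/s
    # (equal to the 133.25 quoted above if a "Gigabyte" is counted as
    #  1,024,000,000 bytes)

    # The 56 Gb/s InfiniBand link, divided by 8 bits per byte, gives ~7 GB/s
    print(f"Interconnect: {56 / 8:.0f} GB/s per link")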

Getting Help

The Division of Information Technology provides support for all of our services. If you require assistance, please submit a support ticket through the IT Service Management system.

For More Information Contact

IACS Support System