Understanding LI-red

LI-red is a computational cluster built from top-of-the-line components from Cray, IBM, Intel, Mellanox, and numerous other technology partners.

This KB Article References: High Performance Computing
This Information is Intended for: Faculty, Postdocs, Researchers, Staff, Students
Last Updated: March 20, 2018

  • 100 compute nodes from Cray, each with two Intel Xeon E5-2690 v3 CPUs.
  • The CPUs are codenamed “Haswell,” offer 12 cores each, and run at a base clock of 2.6 gigahertz. The LI-red cluster has 2,400 of these cores in total, giving a peak of 100 teraflops* (100,000,000,000,000 floating-point operations per second, or FLOPS); the starred figures are worked through in the footnotes and the sketches that follow them.
  • Each node has 128 gigabytes of DDR4 memory, configured as eight memory modules operating at 2,133 megatransfers per second, for a combined memory bandwidth of 133.25 gigabytes** per second per node.
  • The nodes are interconnected via a high-speed InfiniBand® network from Mellanox® Technologies operating at 56 gigabits per second, which allows roughly 7 gigabytes of data to be transferred each second.
  • The storage array is a DDN GPFS solution comprising 180 6-terabyte nearline SAS disks attached over InfiniBand to two Network Shared Disk (NSD) servers, plus five 1,600-gigabyte solid-state disks (SSDs) that serve as the metadata pool for the GPFS file system. This storage system can sustain more than 13,000 4K random-read input/output operations per second (IOPS)*** and sequential transfers of more than 14 gigabytes per second.
  • The login node has the same CPU and memory configuration as the compute nodes, supplemented with dual 800-gigabyte Intel solid-state drives to maximize local IOPS. It is attached to the university network over a port-channeled 20-gigabit-per-second (2 x 10 gigabits) network connection.

*     24 cores per node x 100 nodes x 2.6 gigahertz per core x 16 double-precision FLOPS per cycle ≈ 99.84 teraflops, rounded to 100.
**   Memory clock of 1,066 megahertz x 2 (double data rate) x 64-bit memory bus width x 4 memory interfaces per CPU (quad-channel) x 2 CPUs per node ÷ 8 bits per byte.
*** RAID6 (8+2) x 160 drives, in a 4K random-read scenario consisting of 99% reads.
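
The starred figures are plain back-of-the-envelope arithmetic. As a sanity check, here is a minimal Python sketch that reproduces the peak-FLOPS and per-node memory-bandwidth numbers from the first two footnotes; the only assumption beyond the footnotes themselves is the convention of 1,024 megabytes per gigabyte for the bandwidth figure.

    # Reproduce the peak-FLOPS and memory-bandwidth footnotes.
    # Inputs come directly from the footnotes above; the only added
    # assumption is the 1 GB = 1,024 MB convention for bandwidth.
    CORES_PER_NODE = 24          # 2 CPUs x 12 cores
    NODES = 100
    CLOCK_HZ = 2.6e9             # 2.6 GHz base clock
    FLOPS_PER_CYCLE = 16         # double-precision FLOPS per cycle (Haswell)

    peak = CORES_PER_NODE * NODES * CLOCK_HZ * FLOPS_PER_CYCLE
    print(f"Peak: {peak / 1e12:.2f} teraflops")          # 99.84, rounded to 100

    MEM_CLOCK_HZ = 1.066e9       # DDR4-2133 memory clock (1,066 MHz)
    DDR = 2                      # double data rate: 2 transfers per clock
    BUS_BYTES = 64 // 8          # 64-bit channel width = 8 bytes per transfer
    CHANNELS = 4 * 2             # quad-channel x 2 CPUs per node

    mb_per_s = MEM_CLOCK_HZ * DDR * BUS_BYTES * CHANNELS / 1e6
    print(f"Bandwidth: {mb_per_s / 1024:.2f} GB/s per node")   # 133.25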
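
The interconnect and storage figures can be checked the same way. FDR InfiniBand signals at 56 gigabits per second with 64b/66b line encoding, which lands just under the quoted ~7 gigabytes per second, and the IOPS footnote implies roughly 80 random reads per second per data disk. The per-disk IOPS value below is an illustrative assumption typical of 7,200 rpm nearline drives, not a published DDN specification.

    # Sanity-check the interconnect and storage figures.
    LINK_GBITS = 56              # FDR InfiniBand signalling rate
    ENCODING = 64 / 66           # 64b/66b line-encoding efficiency
    print(f"Usable link rate: {LINK_GBITS * ENCODING / 8:.2f} GB/s")  # ~6.8

    DATA_DISKS = 160             # footnote: RAID6 (8+2) x 160 drives
    IOPS_PER_DISK = 82           # assumed nearline-SAS figure, ~7.2k rpm
    total = DATA_DISKS * IOPS_PER_DISK
    print(f"Aggregate 4K random reads: {total:,} IOPS")  # 13,120, i.e. >13,000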

Getting Help

The Division of Information Technology provides support for all of our services. If you require assistance, please submit a support ticket through the IT Service Management system.

For More Information Contact

IACS Support System