MPI Cluster Overview

The MPI cluster supports jobs that run tightly coupled parallel codes, using Message Passing Interface (MPI) APIs to distribute computation across multiple nodes, each with its own memory space.
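
As a minimal, generic sketch of the distributed-memory model (not code specific to this cluster), the example below has each MPI rank sum a disjoint slice of a range in its own memory and then combine the partial results on rank 0 with MPI_Reduce:

    /* partial_sum.c - each rank works on its own slice of the problem in its
     * own memory space; results are combined via a collective operation. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        /* Each rank sums a disjoint block of 0..N-1 held only in its own memory. */
        const long N = 1000000;
        long chunk = N / size;
        long start = (long)rank * chunk;
        long end   = (rank == size - 1) ? N : start + chunk;

        long local = 0;
        for (long i = start; i < end; i++)
            local += i;

        /* Message passing: combine the per-rank partial sums on rank 0. */
        long total = 0;
        MPI_Reduce(&local, &total, 1, MPI_LONG, MPI_SUM, 0, MPI_COMM_WORLD);

        if (rank == 0)
            printf("sum over 0..%ld = %ld, computed by %d ranks\n", N - 1, total, size);

        MPI_Finalize();
        return 0;
    }

Compiled with an MPI wrapper such as mpicc and launched with the site's preferred launcher (mpirun or srun, depending on the local setup), the ranks run as separate processes spread across the allocated nodes.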

Key Features

  • InfiniBand and Omni-Path networking
  • Minimum of 2 full nodes per job on the mpi partition; minimum of 1 full node on the ndr partition

Specifications

Partition     Host Architecture                Nodes  Cores/Node  Mem/Node  Mem/Core  Scratch      Network        Node Names
mpi           Intel Xeon Gold 6342 (Ice Lake)  136    48          512 GB    10.6 GB   1.6 TB NVMe  HDR200; 10GbE  mpi-n[0-135]
ndr           AMD EPYC 9575F                   136    48          1.5 TB    11.2 GB   2.9 TB NVMe  NDR200; 10GbE  mpi-n[136-153]
opa-high-mem  Intel Xeon Gold 6132 (Skylake)   36     28          192 GB    6.8 GB    500 TB SSD   OPA; 10GbE     opa-n[96-131]
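
To check how a job's ranks are actually placed across the nodes listed above (for example, that a run on the mpi partition spans at least two of the mpi-n[...] hosts), a small sketch like the following prints each rank's host name via MPI_Get_processor_name; the 48-ranks-per-node figure in the comment comes from the table and is only illustrative:

    /* where_am_i.c - each rank reports the node it is running on, which is
     * useful for verifying that a job spans the expected number of full nodes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);

        int rank, size, len;
        char host[MPI_MAX_PROCESSOR_NAME];

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        MPI_Get_processor_name(host, &len);

        /* A 2-node job on the mpi partition with 48 ranks per node would print
         * 96 lines naming two different mpi-n[...] hosts. */
        printf("rank %d of %d running on %s\n", rank, size, host);

        MPI_Finalize();
        return 0;
    }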