MPI Cluster Overview

The MPI cluster is intended for tightly coupled parallel codes that use the Message Passing Interface (MPI) to distribute computation across multiple nodes, where each node has its own memory space.
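To make the distributed-memory model concrete, here is a minimal MPI sketch in C (program and file names are illustrative, not part of this cluster's documentation): each process (rank) owns a private address space, and data moves between ranks only through explicit messages such as MPI_Send/MPI_Recv.

```c
/* hello_mpi.c — minimal sketch of distributed-memory message passing.
 * Compile with an MPI wrapper compiler, e.g.: mpicc hello_mpi.c -o hello_mpi */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's ID */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total number of processes */

    /* Each rank holds its own private copy of 'value'; no rank can read
     * another rank's memory directly, only receive it as a message. */
    int value = 0;
    if (rank == 0 && size > 1) {
        value = 42;
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 1 of %d received %d from rank 0\n", size, value);
    }

    MPI_Finalize();
    return 0;
}
```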

Key Features

  • InfiniBand and Omni-Path networking
  • Minimum of 2 nodes per job (see the sample job script below)
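Because of the two-node minimum, jobs must be submitted through the batch scheduler with an explicit node count. The node-name syntax in the Specifications table (mpi-n[0-135]) suggests Slurm, so the sketch below assumes a Slurm scheduler and an `openmpi` environment module; both are assumptions, not confirmed by this page.

```bash
#!/bin/bash
#SBATCH --job-name=mpi_test
#SBATCH --partition=mpi        # partition name from the Specifications table
#SBATCH --nodes=2              # this cluster requires at least 2 nodes per job
#SBATCH --ntasks-per-node=48   # one rank per core on an mpi-partition node

# Hypothetical module name; check `module avail` on the actual cluster.
module load openmpi

srun ./hello_mpi
```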

Specifications

| Partition | Host Architecture | Nodes | Cores/Node | Mem/Node | Mem/Core | Scratch | Network | Node Names |
|---|---|---|---|---|---|---|---|---|
| mpi | Intel Xeon Gold 6342 (Ice Lake) | 136 | 48 | 512 GB | 10.6 GB | 1.6 TB NVMe | HDR200; 10GbE | mpi-n[0-135] |
| opa-high-mem | Intel Xeon Gold 6132 (Skylake) | 36 | 28 | 192 GB | 6.8 GB | 500 GB SSD | OPA; 10GbE | opa-n[96-131] |