High Performance Computing, AI/Deep Learning Training and Inference, Large Language Model (LLM) and Generative AI
Key Features
NVIDIA GH200 Grace Hopper™ Superchip (Grace CPU and H100 GPU);
NVLink® Chip-to-Chip (C2C) high-bandwidth, low-latency interconnect between CPU and GPU at 900GB/s;
Up to 624GB of coherent memory per node including 480GB LPDDR5X and 144GB of HBM3e for LLM applications;
3x PCIe 5.0 x16 slots supporting one NVIDIA BlueField®-3 and two ConnectX®-7 cards, plus 1x PCIe 5.0 x8 slot per node supporting a 10GbE low-profile NIC;
6 Hot-Swap Heavy-Duty Fans with Optional Speed Control;
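The capacity and bandwidth figures in the feature list above imply a couple of quick derived numbers; a minimal arithmetic sketch (the constants are taken from this spec sheet, the derived values are illustrative only):

```python
# Figures from the Key Features list above.
LPDDR5X_GB = 480    # Grace CPU memory per node
HBM3E_GB = 144      # H100 GPU memory per node
C2C_GB_PER_S = 900  # NVLink-C2C CPU-GPU bandwidth

# Coherent memory per node = CPU LPDDR5X + GPU HBM3e.
coherent_total_gb = LPDDR5X_GB + HBM3E_GB
print(coherent_total_gb)  # 624

# Lower bound on the time to stream the full HBM3e capacity
# across the NVLink-C2C link (ignores protocol overhead).
hbm_fill_s = HBM3E_GB / C2C_GB_PER_S
print(f"{hbm_fill_s:.2f}")  # 0.16
```

The 624GB figure matches the "coherent memory per node" line above; the sub-second fill time is one way to read what a 900GB/s CPU-GPU link means in practice.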
Form Factor
Enclosure: 438.4 x 87 x 900mm (17.3" x 3.43" x 35.43")
Package: 700 x 280 x 1200mm (27.56" x 11.02" x 47.24")
Supported GPU: NVIDIA H100 Tensor Core GPU on GH200 Grace Hopper™ Superchip (Air-cooled)
CPU-GPU Interconnect: NVLink®-C2C
GPU-GPU Interconnect: NVIDIA® NVLink®
System Memory Slot Count: Onboard Memory
Additional GPU Memory: Up to 144GB ECC HBM3e
Drive Bays Configuration Default: Total 3 bays
3 front hot-swap E1.S NVMe drive bays
M.2: 2 M.2 NVMe slots (M-key)
Expansion Slots Default
3 PCIe 5.0 x16 (in x16) FHFL slots
On-Board Devices System on Chip
Input / Output LAN: 1 RJ45 1 GbE Dedicated BMC LAN port(s)
Video: 1 mini-DP port(s)
System Cooling Fans: 6 Removable heavy-duty 6cm Fan(s)
Power Supply 4x 2000W Redundant (2 + 2) Titanium Level (96%) power supplies
System BIOS BIOS Type: AMI 64MB SPI Flash EEPROM
PC Health Monitoring CPU: Monitoring for CPU core voltages, chipset voltages, and memory
FAN: Fans with tachometer monitoring
Status monitor for speed control
Pulse Width Modulated (PWM) fan connectors
Temperature: Monitoring for CPU and chassis environment