High Performance Computing, AI/Deep Learning Training and Inference, Large Language Model (LLM) and Generative AI
Key Features
NVIDIA Grace Hopper™ Superchip (Grace CPU and H100 GPU);
NVLink® Chip-to-Chip (C2C) high-bandwidth, low-latency interconnect between CPU and GPU at 900GB/s;
Up to 576GB of coherent memory per node including 480GB LPDDR5X and 96GB of HBM3 for LLM applications;
9 hot-swap heavy-duty fans with optimal fan speed control;
Two E1.S drives are supported directly from the processor only.
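The coherent memory figure in the features above is the sum of the CPU-attached LPDDR5X and the GPU-attached HBM3 pools; a minimal sketch, using the capacities from this sheet, to verify the total:

```python
# Coherent memory pool on one Grace Hopper node, per this spec sheet.
LPDDR5X_GB = 480  # Grace CPU-attached memory
HBM3_GB = 96      # H100 GPU-attached memory

# NVLink-C2C lets CPU and GPU address both pools coherently,
# so the usable coherent capacity is the sum of the two.
total_gb = LPDDR5X_GB + HBM3_GB
print(total_gb)  # 576
```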
Form Factor
Enclosure: 440 x 44 x 940mm (17.33" x 1.75" x 37")
Package: 1219 x 241 x 711mm (48" x 9.5" x 28")
GPU
Max GPU Count: Up to 1 onboard GPU
CPU-GPU Interconnect: NVLink®-C2C
System Memory
Slot Count: Onboard memory
Max Memory: Up to 480GB ECC LPDDR5X
Additional GPU Memory: Up to 96GB ECC HBM3
Drive Bays
Default Configuration: 8 bays total
8 front hot-swap E1.S NVMe drive bays
M.2: 2 M.2 NVMe slots (M-key)
Expansion Slots
Default: 2 PCIe 5.0 x16 FHFL (full-height, full-length) slots
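For context on the expansion slots, a PCIe 5.0 x16 slot's peak per-direction bandwidth can be estimated from the standard 32 GT/s per-lane rate and 128b/130b line encoding (standard PCIe figures, not from this sheet):

```python
# Approximate one-direction bandwidth of a PCIe 5.0 x16 slot.
GT_PER_S = 32          # PCIe 5.0 raw rate per lane (GT/s)
ENCODING = 128 / 130   # 128b/130b line-encoding overhead
LANES = 16

gb_per_s = GT_PER_S * ENCODING * LANES / 8  # bits -> bytes
print(round(gb_per_s, 1))  # 63.0 (GB/s per direction)
```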
On-Board Devices
System on Chip
Input / Output
LAN: 1 dedicated BMC LAN port (RJ45, 1 GbE)
Video: 1 mini-DP port
TPM: 1 onboard TPM / Port 80
System Cooling
Fans: 9 removable heavy-duty 4cm fans
Power Supply
2x 2000W redundant Titanium Level (96% efficiency) power supplies
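Titanium Level corresponds to roughly 96% efficiency at typical load; a minimal sketch of the implied AC wall draw at full rated output (output wattage and efficiency from this sheet):

```python
# Estimate AC input draw for one 2000W Titanium (96%) supply at full load.
OUTPUT_W = 2000
EFFICIENCY = 0.96  # Titanium Level, per the spec line above

# Input power = delivered DC output divided by conversion efficiency.
input_w = OUTPUT_W / EFFICIENCY
print(round(input_w))  # 2083 (W from the wall, per supply)
```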
System BIOS
BIOS Type: AMI 64MB SPI Flash EEPROM
PC Health Monitoring
CPU: Monitoring of CPU core, chipset, and memory voltages
FAN: Fans with tachometer monitoring
Status monitoring for speed control
Pulse Width Modulated (PWM) fan connectors
Temperature: Monitoring for CPU and chassis environment
Thermal control for fan connectors