NVIDIA DGX™ H200
The Gold Standard for AI Factory Infrastructure
Powering next-generation AI server innovation.
NVIDIA DGX™ H200 is the cornerstone of the modern AI enterprise
The NVIDIA DGX™ H200 is a leading choice for enterprise AI factories, empowering organizations to dramatically scale their AI capabilities. Leveraging the extraordinary performance of the NVIDIA H200 Tensor Core GPU, the DGX H200 delivers outstanding acceleration for generative AI, deep learning, and mission-critical workloads.
Key Highlights & Core Features
Ideal for generative AI, LLMs, deep learning recommendation systems, and high-performance computing (HPC).
NVIDIA DGX H200 Specifications
| Component | Specification |
| --- | --- |
| GPUs | 8x NVIDIA H200 Tensor Core GPUs (1,128 GB total GPU memory) |
| CPU | Dual Intel Xeon Platinum 8480C (112 cores total) |
| System Memory | 2 TB |
| GPU Interconnect | 18x NVLink connections per GPU, 900 GB/s bandwidth |
| NVSwitches | 4x NVIDIA NVSwitches (7.2 TB/s GPU-to-GPU bandwidth) |
| Networking | 10x NVIDIA ConnectX-7 400 Gb/s (1 TB/s network bandwidth) |
| Storage | 30 TB NVMe SSD |
| System Power Usage | ~10.2 kW maximum |
| Dimensions | Height: 14.0 in, Width: 19.0 in, Length: 35.3 in |
| Weight | 287.6 lbs (130.45 kg) |
| Operating Temperature | 5–30°C (41–86°F) |
| Management | 10 Gb/s onboard NIC (RJ45), optional 100 Gb/s NIC |
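The aggregate figures in the table follow directly from per-GPU numbers. A minimal sketch of that arithmetic, assuming 141 GB of HBM3e per H200 and 50 GB/s per fourth-generation NVLink link (published NVIDIA per-GPU figures that are not stated in the table itself):

```python
# Sketch: deriving the DGX H200 aggregate specs from per-GPU figures.
# Per-GPU values below are assumptions taken from NVIDIA's published
# H200 specifications, not from the table above.
NUM_GPUS = 8
HBM_PER_GPU_GB = 141           # H200 HBM3e capacity per GPU (assumed)
NVLINK_LINKS_PER_GPU = 18
GBPS_PER_NVLINK_LINK = 50      # bidirectional GB/s per NVLink link (assumed)

# Total GPU memory: 8 x 141 GB = 1,128 GB, matching the table.
total_memory_gb = NUM_GPUS * HBM_PER_GPU_GB

# Per-GPU NVLink bandwidth: 18 links x 50 GB/s = 900 GB/s.
per_gpu_nvlink_gbps = NVLINK_LINKS_PER_GPU * GBPS_PER_NVLINK_LINK

# Aggregate GPU-to-GPU fabric bandwidth: 8 x 900 GB/s = 7.2 TB/s.
fabric_tbps = NUM_GPUS * per_gpu_nvlink_gbps / 1000

print(total_memory_gb, per_gpu_nvlink_gbps, fabric_tbps)
```

This is only a consistency check on the table's totals, not a measurement of delivered bandwidth.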
Benefits & Use Cases
Built on years of expertise in digital innovation, we have established ourselves as a trusted name in AI infrastructure.
P.O. Box 108093, Al Moroor Street, Abu Dhabi University Building
info@centeraivision.com
+971 50 108 0066
© 2024-2025, All Rights Reserved