NVIDIA H200

Supercharging AI and HPC workloads

Next-generation GPU performance, now with groundbreaking HBM3e memory.


Product Overview

NVIDIA H200 Tensor Core GPU

The NVIDIA H200 Tensor Core GPU transforms generative AI and high-performance computing (HPC) workloads with unprecedented memory capacity and bandwidth. Featuring industry-first HBM3e memory, the H200 accelerates large language models (LLMs), generative AI, and complex scientific simulations, offering superior performance and energy efficiency.


Key Features & Benefits

  1. Llama2 70B inference: up to 1.9X faster than H100
  2. GPT-3 175B inference: up to 1.6X faster than H100
  3. High-performance computing: up to 110X faster time to results vs. CPU-only servers
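To see why the H200's memory capacity matters for large models, a rough back-of-envelope check helps: model weights alone for a 70B-parameter LLM occupy roughly one byte per parameter at FP8 and two at FP16/BF16. The sketch below is an illustration, not vendor data; real deployments also need room for the KV cache, activations, and framework overhead.

```python
# Rough illustration (not a benchmark): estimated weight-memory footprint
# of a 70B-parameter model at different precisions, versus the H200's
# 141 GB of HBM3e.

def weights_gb(params: float, bytes_per_param: float) -> float:
    """Approximate model-weight size in gigabytes (1 GB = 1e9 bytes)."""
    return params * bytes_per_param / 1e9

PARAMS_70B = 70e9
H200_MEMORY_GB = 141

for name, nbytes in [("FP16/BF16", 2), ("FP8", 1)]:
    gb = weights_gb(PARAMS_70B, nbytes)
    fits = "fits" if gb < H200_MEMORY_GB else "does not fit"
    print(f"{name}: ~{gb:.0f} GB of weights -> {fits} on a single H200")
```

At FP16 a 70B model's weights (~140 GB) only just fit, which is why the jump from 80 GB (H100) to 141 GB changes what can be served on one GPU.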

Core Benefits

Unmatched AI Inference

Supercharged HPC Performance

Energy Efficiency & Sustainability

Enterprise-Ready

NVIDIA H200 NVL: Enterprise-Ready GPU

The NVIDIA H200 NVL brings the H200's groundbreaking performance, built on the advanced NVIDIA Hopper™ architecture, to mainstream enterprise servers in a PCIe form factor.

Flexible Configurations for Mainstream Enterprise Servers

Enterprise Software Included

Specifications

NVIDIA H200 Specifications

Feature                 | H200 SXM¹            | H200 NVL¹
FP64                    | 34 TFLOPS            | 30 TFLOPS
FP64 Tensor Core        | 67 TFLOPS            | 60 TFLOPS
FP32                    | 67 TFLOPS            | 60 TFLOPS
TF32 Tensor Core²       | 989 TFLOPS           | 835 TFLOPS
BFLOAT16 Tensor Core²   | 1,979 TFLOPS         | 1,671 TFLOPS
FP16 Tensor Core²       | 1,979 TFLOPS         | 1,671 TFLOPS
FP8 Tensor Core²        | 3,958 TFLOPS         | 3,341 TFLOPS
INT8 Tensor Core²       | 3,958 TOPS           | 3,341 TOPS
GPU Memory              | 141 GB HBM3e         | 141 GB HBM3e
GPU Memory Bandwidth    | 4.8 TB/s             | 4.8 TB/s
Confidential Computing  | Supported            | Supported
Max TDP                 | Up to 700 W          | Up to 600 W
Multi-Instance GPU      | 7 MIGs @ 18 GB each  | 7 MIGs @ 16.5 GB each
Form Factor             | SXM                  | PCIe dual-slot
Interconnect            | NVLink: 900 GB/s     | NVLink bridge: 900 GB/s
PCIe Interface          | PCIe Gen5: 128 GB/s  | PCIe Gen5: 128 GB/s
Server Options          | HGX™ H200 systems    | MGX™ H200 NVL systems
NVIDIA AI Enterprise    | Add-on               | Included
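The 4.8 TB/s memory bandwidth figure in the specifications has a direct consequence for LLM serving: when decoding is memory-bandwidth-bound, each generated token must stream the model weights from HBM at least once, so bandwidth sets a hard floor on per-token latency. The following is an assumption-laden sketch (the 70 GB FP8 weight size is a hypothetical example), not a measurement:

```python
# Back-of-envelope sketch, not a benchmark: for memory-bandwidth-bound
# LLM decoding, per-token latency cannot be lower than the time needed
# to stream the model weights from GPU memory once.

def min_token_latency_ms(weight_bytes: float, bandwidth_bytes_per_s: float) -> float:
    """Lower bound on per-token decode latency, in milliseconds."""
    return weight_bytes / bandwidth_bytes_per_s * 1e3

H200_BANDWIDTH = 4.8e12   # 4.8 TB/s HBM3e, from the specifications above
WEIGHTS_FP8 = 70e9        # hypothetical 70B-parameter model at FP8 (~70 GB)

floor_ms = min_token_latency_ms(WEIGHTS_FP8, H200_BANDWIDTH)
print(f"Bandwidth-bound latency floor: ~{floor_ms:.1f} ms/token")
```

This kind of estimate explains why bandwidth, not just raw TFLOPS, drives the inference speedups quoted above: a higher-bandwidth memory system lowers the floor for every generated token.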

Applications

Ideal Applications

The NVIDIA H200 Tensor Core GPU is a powerhouse for demanding AI workloads, deep learning, and high-performance computing (HPC) applications. With its advanced Tensor Cores and high memory bandwidth, it’s ideal for training and deploying large language models (LLMs), generative AI, scientific simulations, and data-intensive analytics.

Large-scale LLM inference and training

Generative AI development and deployment

Scientific and research-intensive HPC workloads

Complex real-time data processing

Genomics and computational biology


Built on years of expertise in digital innovation, we have established ourselves as a trusted name in AI infrastructure.

Address

P.O. Box 108093, Al Moroor Street, Abu Dhabi University Building

Email Address

info@centeraivision.com

Phone Number

+971 50 108 0066

© 2024-2025, All Rights Reserved