Come explore AI technology at booth 471 during SC23 in Denver, from November 12th to 17th — Experience it before you invest!

Supercomputing Redefined: The NVIDIA H100 AI Revolution

Ready to transform your AI journey with the world's most powerful GPU for machine learning?

contact an expert
NVIDIA H100 GPU

Technical Specifications


H100 SXM Specifications

GPU Memory
  • 80 GB
GPU memory bandwidth
  • 3.35 TB/s
Max thermal design power (TDP)
  • Up to 700 W (configurable)
Multi-instance GPUs
  • Up to 7 MIGs @ 10 GB each
Form factor
  • SXM
Interconnect
  • NVLink: 900 GB/s
  • PCIe Gen5: 128 GB/s
Server options
  • Dataknox NovaScape
NVIDIA AI Enterprise
  • Add-on
NVIDIA H100 PCIe

H100 PCIe Specifications

GPU Memory
  • 80 GB
GPU memory bandwidth
  • 2 TB/s
Max thermal design power (TDP)
  • 300–350 W (configurable)
Multi-instance GPUs
  • Up to 7 MIGs @ 10 GB each
Form factor
  • PCIe
  • Dual-slot air-cooled
Interconnect
  • NVLink: 600 GB/s
  • PCIe Gen5: 128 GB/s
Server options
  • Dataknox NovaNode
NVIDIA AI Enterprise
  • Included
NVIDIA H100 NVL

H100 NVL Specifications

GPU Memory
  • 188 GB
GPU memory bandwidth
  • 7.8 TB/s
Max thermal design power (TDP)
  • 2x 350–400 W (configurable)
Multi-instance GPUs
  • Up to 14 MIGs @ 12 GB each
Form factor
  • 2x PCIe
  • Dual-slot air-cooled
Interconnect
  • NVLink: 600 GB/s
  • PCIe Gen5: 128 GB/s
Server options
  • Partner and NVIDIA-Certified Systems with 2–4 GPU pairs
NVIDIA AI Enterprise
  • Included

Why Choose the "Hopper" H100 GPU?

10X faster terabyte-scale accelerated computing

Confidential Computing

NVIDIA Confidential Computing preserves the confidentiality and integrity of AI models and algorithms deployed on H100 GPUs. It provides protection from unauthorized access, hardware-based security and isolation, and verifiability through device attestation, all with no application code changes.

6x Higher Performance Without Losing Accuracy

H100 Transformer Engine Supercharges AI Training. Transformer models are the backbone of LLMs used widely today, such as BERT and GPT. Initially developed for natural language processing use cases, their versatility is increasingly being applied to computer vision, drug discovery and more.
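Much of the Transformer Engine's speedup comes from dynamically casting tensors to 8-bit floating point (FP8) where accuracy allows. As a rough, self-contained illustration (this is not NVIDIA's implementation), the sketch below rounds values onto the FP8 E4M3 grid used on the H100 for forward-pass tensors, showing how coarse an 8-bit value grid is and why the engine chooses precision per layer:

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round a float to the nearest FP8 E4M3 value
    (1 sign bit, 4 exponent bits, 3 mantissa bits).
    Simplified illustration: NaN/Inf encodings are ignored."""
    if x == 0.0:
        return 0.0
    sign = math.copysign(1.0, x)
    mag = abs(x)
    # Clamp the unbiased exponent to the E4M3 normal range.
    e = max(min(math.floor(math.log2(mag)), 8), -6)
    m = mag / 2.0 ** e          # mantissa (may drop below 1 for subnormals)
    m = round(m * 8) / 8        # keep 3 mantissa bits (steps of 1/8)
    val = m * 2.0 ** e
    return sign * min(val, 448.0)  # 448 is the largest finite E4M3 value

# FP8 storage halves FP16 memory traffic but coarsens the value grid:
for v in [0.1, 1.0, 3.3, 1000.0]:
    print(f"{v} -> {quantize_e4m3(v)}")
```

Relative error stays within about 1/16 for in-range values, which is why FP8 works for many training tensors, while out-of-range values saturate at ±448 — the reason the Transformer Engine pairs FP8 with per-tensor scaling.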

Multi-Instance GPU (MIG)

You can achieve up to 7X more GPU resources on a single GPU. MIG gives researchers and developers more resources and flexibility than ever before.
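In practice, MIG slices on an H100 are created with the `nvidia-smi` CLI. A hedged sketch of the "7 MIGs @ 10 GB" configuration follows — exact profile names and IDs vary by driver version, and the commands require administrative rights on a machine with an H100 installed:

```shell
# Enable MIG mode on GPU 0 (resets the GPU; requires admin rights)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this driver supports
nvidia-smi mig -lgip

# Create seven 1g.10gb GPU instances and matching compute instances (-C)
sudo nvidia-smi mig -i 0 \
  -cgi 1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb,1g.10gb -C

# Each slice now appears as its own isolated device
nvidia-smi -L
```

Each instance has its own dedicated memory and compute partition, so seven independent workloads can share one GPU without interfering with one another.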

Your Gateway to the H100 GPU: Discover Our Exclusive Services

Rent the H100: Step Up Your Performance Level Today!

We believe that the transformative power of the NVIDIA H100 should be accessible without the burden of upfront costs. That's where our rental program shines, offering a cost-effective pathway to harness this technological marvel. Beyond just access, we champion a seamless transition.

Our comprehensive AI support program ensures that your shift to the H100 is smooth and efficient, removing any technical roadblocks along the way.

learn about rentals

Dataknox: Plug In and Start

Dataknox clusters are seamlessly equipped with high-speed networking, optimized parallel storage, intuitive cluster management, and optional MLOps platforms. Crafted with NVIDIA PCIe, HGX, or DGX GPU nodes, they're meticulously configured to align with your distinct performance, task, software, and budgetary criteria.

Power AI/ML models and resource-intensive workflows with the world's most powerful GPUs

Ready?

Our team of experts is available and ready to assist you with any inquiries and to offer customized 3–5 year financing options that align with your budget and goals.

Contact us