AI infrastructure with constant uptime.
NVIDIA DGX SuperPOD™ with DGX GB200 systems is purpose-built for training and inference on trillion-parameter generative AI models. Each liquid-cooled rack features 36 NVIDIA GB200 Grace Blackwell Superchips (36 NVIDIA Grace CPUs and 72 Blackwell GPUs) connected as one with NVIDIA NVLink. Multiple racks connect over NVIDIA Quantum InfiniBand to scale to tens of thousands of GB200 Superchips.
DGX SuperPOD with NVIDIA DGX B200 or DGX H200 systems is an ideal choice for large development teams working on enterprise AI workloads.
NVIDIA Enterprise Services provide support, education, and infrastructure specialists for your DGX infrastructure. With NVIDIA experts available at every step of your AI journey, Enterprise Services can help you get projects up and running quickly and successfully.