Common Configs
This page captures common configurations we have seen GPU clouds implement.
NVIDIA B200
- 1× or 8× B200 GPU 180GB SXM
- 16× or 128× vCPU Intel Emerald Rapids
- 224 or 1792 GB DDR5
- 3.2 Tbps InfiniBand
- Ubuntu 22.04 LTS (CUDA® 12)
NVIDIA H200
- 1× or 8× H200 GPU 141GB SXM
- 16× or 128× vCPU Intel Sapphire Rapids
- 200 or 1600 GB DDR5
- 3.2 Tbps InfiniBand
- Ubuntu 22.04 LTS (CUDA® 12)
NVIDIA H100
- 1× or 8× H100 GPU 80GB SXM
- 16× or 128× vCPU Intel Sapphire Rapids
- 200 or 1600 GB DDR5
- 3.2 Tbps InfiniBand
- Ubuntu 22.04 LTS (CUDA® 12)
NVIDIA L40S (Intel)
- 1× L40S GPU 48GB PCIe
- 8× or 40× vCPU Intel Xeon Gold
- 32 or 160 GB DDR5
- Ubuntu 22.04 LTS (CUDA® 12)
NVIDIA L40S (AMD)
- 1× L40S GPU 48GB PCIe
- 16× or 192× vCPU AMD EPYC
- 96 or 1152 GB DDR5
- Ubuntu 22.04 LTS (CUDA® 12)
NVIDIA A100
- 1× or 8× A100 GPU 40GB or 80GB (SXM or PCIe)
- 16× to 128× vCPU Intel Cascade Lake or AMD EPYC Milan
- 256 to 2048 GB DDR4/DDR5
- 3.2 Tbps InfiniBand or 100 Gbps Ethernet
- Ubuntu 20.04 / 22.04 LTS (CUDA® 11/12)
NVIDIA A40
- 1× A40 GPU 48GB PCIe
- 8× to 64× vCPU Intel Xeon Gold / AMD EPYC
- 128 to 1024 GB DDR4/DDR5
- 10/25/100 Gbps Ethernet
- Ubuntu 20.04 / 22.04 LTS (CUDA® 11/12)
NVIDIA T4
- 1× T4 GPU 16GB PCIe
- 8× to 32× vCPU Intel Xeon / AMD EPYC
- 64 to 512 GB DDR4
- 10/25 Gbps Ethernet
- Ubuntu 20.04 LTS (CUDA® 11)
NVIDIA L4
- 1× L4 GPU 24GB PCIe
- 8× to 32× vCPU Intel Xeon / AMD EPYC
- 128 to 512 GB DDR5
- 25 Gbps Ethernet or local NVMe storage
- Ubuntu 22.04 LTS (CUDA® 12)
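Once a VM from one of these configurations is provisioned, it is worth confirming that the guest actually sees the advertised hardware. Below is a minimal verification sketch in Python; it assumes a Linux guest with the NVIDIA driver installed (which provides the `nvidia-smi` CLI) and a readable `/proc/meminfo`.

```python
import os
import subprocess

def describe_vm():
    """Print the GPU, vCPU, and RAM totals the guest OS can see."""
    # One CSV line per GPU: model name and total memory.
    gpus = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    ).stdout.strip().splitlines()
    print(f"{len(gpus)}× GPU")
    for line in gpus:
        print(f"  {line}")

    # vCPUs visible to the guest.
    print(f"vCPUs: {os.cpu_count()}")

    # MemTotal is the first line of /proc/meminfo, reported in KiB.
    with open("/proc/meminfo") as f:
        mem_kib = int(f.readline().split()[1])
    print(f"RAM: {mem_kib / 1024**2:.0f} GiB")

if __name__ == "__main__":
    describe_vm()
```

On an 8× H100 instance from the listings above, this should report eight GPUs of roughly 80 GB each, 128 vCPUs, and around 1600 GB of RAM.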
By Use Case
End users can select the VM configuration best suited to their computational requirements.
Single GPU (1×)
Perfect for development, prototyping, and small-scale inference
Dual GPU (2×)
Ideal for medium-scale training and distributed workloads
Quad GPU (4×)
Designed for large-scale training and high-performance computing
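Before launching a job on one of these tiers, it helps to confirm how many GPUs are actually visible and pick a strategy accordingly. A minimal sketch, assuming PyTorch with CUDA support is installed (the strategy names are illustrative):

```python
import torch

def pick_strategy():
    """Choose a parallelism strategy from the number of visible GPUs."""
    n = torch.cuda.device_count()
    if n == 0:
        raise RuntimeError("No CUDA GPUs visible; check drivers and CUDA_VISIBLE_DEVICES.")
    for i in range(n):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
    # 1× suits development and small-scale inference; 2×/4× suit
    # distributed training with one process per GPU.
    return "single-device" if n == 1 else "distributed-data-parallel"

if __name__ == "__main__":
    print(pick_strategy())
```

For the multi-GPU tiers, a launcher such as `torchrun --nproc_per_node=<gpu count>` would typically start one worker per device.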
By GPU Type/Model
NVIDIA H100 SXM
80GB HBM3 memory per GPU, optimized for AI/ML workloads
NVIDIA L40S
48GB GDDR6 memory per GPU, ideal for inference and training
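When routing workloads by GPU model programmatically, the device name and memory size can be read through NVML. A minimal sketch, assuming the `nvidia-ml-py` package (imported as `pynvml`) is installed:

```python
import pynvml

def gpu_inventory():
    """List each visible GPU's model name and total memory via NVML."""
    pynvml.nvmlInit()
    try:
        for i in range(pynvml.nvmlDeviceGetCount()):
            handle = pynvml.nvmlDeviceGetHandleByIndex(i)
            name = pynvml.nvmlDeviceGetName(handle)
            if isinstance(name, bytes):  # older bindings return bytes
                name = name.decode()
            total = pynvml.nvmlDeviceGetMemoryInfo(handle).total
            print(f"GPU {i}: {name}, {total / 1024**3:.0f} GiB")
    finally:
        pynvml.nvmlShutdown()

if __name__ == "__main__":
    gpu_inventory()
```

An H100 SXM should report about 80 GiB per device and an L40S about 48 GiB, matching the listings above.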