
PNY NVIDIA H100 80GB HBM3 Hopper Data Centre Graphics Card
80GB NVIDIA H100, PCIe 5.0, Hopper, HBM3, NVLink, 51 TFLOPS SP, 26 TFLOPS DP, 756 TFLOPS Tensor
Take an Order-of-Magnitude Leap in Accelerated Computing
The NVIDIA H100 Tensor Core GPU delivers unprecedented performance, scalability, and security for every workload. With NVIDIA® NVLink®, two H100 PCIe GPUs can be connected to accelerate demanding compute workloads, while the dedicated Transformer Engine supports large parameter language models. H100 uses breakthrough innovations in the NVIDIA Hopper™ architecture to deliver industry-leading conversational AI, speeding up large language models by 30x over the previous generation.
Ready for Enterprise AI?
NVIDIA H100 Tensor Core GPUs for mainstream servers come with a five-year subscription to the NVIDIA AI Enterprise software suite, including enterprise support, simplifying AI adoption with the highest performance. This ensures organisations have access to the AI frameworks and tools they need to build H100-accelerated workflows such as AI chatbots, recommendation engines, and vision AI.
Securely Accelerate Workloads from Enterprise to Exascale
NVIDIA H100 GPUs feature fourth-generation Tensor Cores and the Transformer Engine with FP8 precision, extending NVIDIA’s AI leadership with up to 9x faster training and up to 30x faster inference on large language models. For high-performance computing (HPC) applications, H100 triples the double-precision (FP64) floating-point operations per second (FLOPS) and adds dynamic programming (DPX) instructions to deliver up to 7x higher performance. With second-generation Multi-Instance GPU (MIG), built-in NVIDIA Confidential Computing, and NVIDIA NVLink, H100 securely accelerates every workload in every data centre, from enterprise to exascale.
NVIDIA H100 Tensor Core GPU
Built with 80 billion transistors on a cutting-edge TSMC 4N process custom-tailored for NVIDIA’s accelerated compute needs, H100 is the world’s most advanced chip, with major advances in AI acceleration, HPC, memory bandwidth, interconnect, and communication at data centre scale.
Transformer Engine
The Transformer Engine uses software and Hopper Tensor Core technology designed to accelerate training for models built from the world’s most important AI model building block, the transformer. Hopper Tensor Cores can apply mixed FP8 and FP16 precisions to dramatically accelerate AI calculations for transformers.
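To give a feel for what FP8 precision trades away, here is a toy quantiser for the E4M3 format (1 sign, 4 exponent, 3 mantissa bits) that FP8 Tensor Cores use; this is a simplified illustration only, not NVIDIA's implementation, and it ignores NaN encoding and the Transformer Engine's automatic per-tensor scaling.

```python
import math

def quantize_e4m3(x: float) -> float:
    """Round x to the nearest value representable in FP8 E4M3
    (1 sign, 4 exponent, 3 mantissa bits, max normal value 448).
    Toy model: saturates at the maximum and ignores NaN encoding."""
    if x == 0.0:
        return 0.0
    sign = -1.0 if x < 0 else 1.0
    mag = min(abs(x), 448.0)       # saturate at the E4M3 maximum
    e = max(math.floor(math.log2(mag)), -6)  # subnormals below 2**-6
    step = 2.0 ** (e - 3)          # 3 mantissa bits -> steps of 2**(e-3)
    return sign * round(mag / step) * step

# FP8 keeps a wide dynamic range but only ~2 significant digits,
# which is enough for transformer matrix multiplies with scaling:
print(quantize_e4m3(0.1))    # 0.1015625
print(quantize_e4m3(300.0))  # 288.0
```

The coarse steps shown here are why the Transformer Engine mixes FP8 with FP16: it keeps the fast 8-bit path where the error is tolerable and falls back to higher precision where it is not.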
NVIDIA NVLink
An NVLink bridge connects a pair of H100 PCIe GPUs with 600 gigabytes per second (GB/s) of bidirectional bandwidth per GPU, almost 5x the bandwidth of a PCIe Gen 5.0 x16 link, so demanding compute workloads can scale across both cards.
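A back-of-envelope comparison of the quoted figures, using the peak rates with no protocol overhead (the 128 GB/s bidirectional peak of a PCIe Gen 5.0 x16 link is an assumed reference point, not a spec from this page):

```python
# Peak bandwidths, GB/s bidirectional
NVLINK_GBPS = 600      # per GPU over the NVLink bridge (quoted above)
PCIE5_X16_GBPS = 128   # PCIe Gen 5.0 x16 (64 GB/s each direction)

payload_gb = 80        # e.g. mirroring the card's entire 80 GB of HBM3

t_nvlink = payload_gb / NVLINK_GBPS   # ~0.13 s
t_pcie = payload_gb / PCIE5_X16_GBPS  # ~0.63 s
print(f"NVLink {t_nvlink*1e3:.0f} ms vs PCIe {t_pcie*1e3:.0f} ms "
      f"-> {t_pcie / t_nvlink:.1f}x faster over NVLink")
```

At these peak rates, copying the full 80 GB takes roughly 133 ms over the bridge versus about 625 ms over PCIe.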
NVIDIA Confidential Computing
NVIDIA Confidential Computing is a built-in security feature of Hopper that makes NVIDIA H100 the world’s first accelerator with confidential computing capabilities. Users can protect the confidentiality and integrity of their data and applications in use while accessing the unsurpassed acceleration of H100 GPUs.
Second-Generation Multi-Instance GPU (MIG)
The Hopper architecture’s second-generation MIG supports multi-tenant, multi-user configurations in virtualised environments, securely partitioning the GPU into as many as seven isolated, right-sized instances to maximise quality of service (QoS) for every tenant.
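As a sketch of how the partitioning works: an 80 GB H100 exposes seven compute slices, and each MIG profile claims a fixed share of slices and memory. The profile names below follow NVIDIA's MIG documentation for 80 GB GPUs (an assumption here, not a spec from this page), and the fit check is a simplification that ignores MIG's real placement and alignment rules.

```python
# Assumed MIG profiles for an 80 GB H100: (compute slices of 7, memory GB)
MIG_PROFILES = {
    "1g.10gb": (1, 10),
    "2g.20gb": (2, 20),
    "3g.40gb": (3, 40),
    "4g.40gb": (4, 40),
    "7g.80gb": (7, 80),
}

def fits(requested: list[str]) -> bool:
    """Check that a set of requested instances fits within one GPU's
    7 compute slices and 80 GB of memory (simplified: real MIG
    placement also enforces alignment rules)."""
    slices = sum(MIG_PROFILES[p][0] for p in requested)
    mem = sum(MIG_PROFILES[p][1] for p in requested)
    return slices <= 7 and mem <= 80

print(fits(["1g.10gb"] * 7))         # True  - seven isolated tenants
print(fits(["3g.40gb", "4g.40gb"]))  # True  - two larger instances
print(fits(["7g.80gb", "1g.10gb"]))  # False - exceeds the whole GPU
```

The first case is the "7x more tenants" scenario: seven fully isolated 10 GB instances, each with guaranteed QoS, on a single card.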
DPX Instructions
Hopper’s DPX instructions accelerate dynamic programming algorithms by up to 40x compared to CPUs and up to 7x compared to NVIDIA Ampere architecture GPUs. This leads to dramatically faster time to solution in disease diagnosis, real-time routing optimisation, and graph analytics.
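Dynamic programming here means recurrences built from fused min/max-plus-add steps. A classic example is edit distance; the plain-Python CPU reference below shows the exact inner-loop pattern that DPX-class instructions execute in hardware (this is an illustration of the algorithm family, not code that uses DPX itself):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic dynamic-programming
    recurrence. The min(...) + cost step in the inner loop is the
    fused compare-and-add pattern DPX instructions accelerate."""
    prev = list(range(len(b) + 1))          # distances for empty prefix of a
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,              # deletion
                           cur[j - 1] + 1,           # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

print(edit_distance("kitten", "sitting"))  # 3
```

Genome sequence alignment (Smith-Waterman) and all-pairs shortest paths (Floyd-Warshall) follow the same recurrence shape, which is why DPX speedups translate directly to those workloads.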
Accelerate Every Workload, Everywhere
The NVIDIA H100 is an integral part of the NVIDIA data centre platform. Built for AI, HPC, and data analytics, the platform accelerates over 3,000 applications, and is available everywhere from data centre to edge, delivering both dramatic performance gains and cost-saving opportunities.
Deploy H100 with the NVIDIA AI Platform
NVIDIA AI is the end-to-end open platform for production AI built on NVIDIA H100 GPUs. It includes NVIDIA accelerated computing infrastructure, a software stack for infrastructure optimisation and AI development and deployment, and application workflows to speed time to market.
Manufacturing Process: TSMC 4N (4 nm).
Memory: 80 GB HBM3.
Memory Bandwidth: 2 TB/s.
Power Consumption: 350 W.