PNY NVIDIA A2 16GB Ampere AI Graphics Card

PNY NVIDIA A2 16GB Tensor Core AI Graphics Card, 1280 CUDA, PCI Express 4.0 x8, 4.5 TFLOPS SP, 18 TFLOPS HP

Scan Code: LN121712

Manufacturer Code: TCSA2MATX-PB


Product Overview

Versatile Entry-Level Inference

The NVIDIA A2 Tensor Core GPU provides entry-level inference with low power, a small footprint, and high performance for NVIDIA AI at the edge. With a low-profile PCIe Gen4 card and a configurable thermal design power (TDP) of just 40-60 watts (W), the A2 brings adaptable inference acceleration to any server.

The A2's versatility, compact size, and low power meet the demands of edge deployments at scale, instantly upgrading existing entry-level CPU servers to handle inference. Servers accelerated with A2 GPUs deliver higher inference performance than CPUs and more efficient intelligent video analytics (IVA) deployments than previous GPU generations, all at an entry-level price point.

NVIDIA-Certified Systems™ featuring A2 GPUs and NVIDIA AI, including the NVIDIA Triton™ Inference Server, deliver breakthrough inference performance across edge, data centre, and cloud. They ensure that AI-enabled applications deploy with fewer servers and less power, resulting in easier deployments, faster insights, and significantly lower costs.
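
As an illustrative sketch only (not part of the product listing), the snippet below shows how an application might send an inference request to a Triton Inference Server over HTTP using the tritonclient Python package; the server address, model name, and tensor names and shapes are hypothetical placeholders.

```python
# Minimal sketch of querying a Triton Inference Server over HTTP.
# Assumes the tritonclient package (pip install tritonclient[http]) and a
# server already running on localhost:8000. The model name "resnet50_onnx"
# and the input/output tensor names and shape are hypothetical placeholders.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build a dummy batch of one 224x224 RGB image.
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)

inputs = [httpclient.InferInput("input", list(batch.shape), "FP32")]
inputs[0].set_data_from_numpy(batch)
outputs = [httpclient.InferRequestedOutput("output")]

result = client.infer(model_name="resnet50_onnx", inputs=inputs, outputs=outputs)
print(result.as_numpy("output").shape)
```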


Up to 20X More Inference Performance

AI inference is deployed to make consumers' lives more convenient through real-time experiences and to unlock insights from trillions of end-point sensors and cameras. Compared with CPU-only servers, servers built with the NVIDIA A2 Tensor Core GPU offer up to 20X more inference performance, instantly upgrading any server to handle modern AI.


Higher IVA Performance for Intelligent Edge

Servers equipped with the A2 offer up to 1.3X more performance in intelligent edge use cases, including smart cities, manufacturing, and retail. NVIDIA A2 GPUs running IVA workloads deliver more efficient deployments, with up to 1.6X better price-performance and 10% better energy efficiency than previous GPU generations.


Third-Generation Tensor Cores

The third-generation Tensor Cores in the A2 support integer math down to INT4 and floating-point math up to FP32 to deliver high AI training and inference performance. The NVIDIA Ampere architecture also supports TF32 and NVIDIA’s automatic mixed precision (AMP) capabilities.
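
As an illustrative sketch only (the listing does not name a framework), the snippet below shows how TF32 and automatic mixed precision are typically enabled in PyTorch; the model, tensor sizes, and learning rate are hypothetical placeholders.

```python
# Minimal sketch of the mixed-precision features mentioned above, using
# PyTorch (an assumption -- the product listing does not name a framework).
# TF32 is allowed for matmuls, and autocast/GradScaler drive AMP.
import torch

torch.backends.cuda.matmul.allow_tf32 = True  # let matmuls use TF32 on Ampere Tensor Cores

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
scaler = torch.cuda.amp.GradScaler()

x = torch.randn(64, 1024, device="cuda")
target = torch.randn(64, 1024, device="cuda")

with torch.cuda.amp.autocast():           # run the forward pass in reduced precision
    loss = torch.nn.functional.mse_loss(model(x), target)

scaler.scale(loss).backward()             # scale the loss to avoid FP16 underflow
scaler.step(optimizer)
scaler.update()
```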


Root of Trust Security

Providing security in edge deployments and end-points is critical for enterprise business operations. The A2 offers secure boot through trusted code authentication and hardened rollback protections to guard against malware attacks.


Second-Generation RT Cores

The A2 includes dedicated RT Cores for ray tracing that enable groundbreaking technologies at breakthrough speed, delivering up to 2X the throughput of the previous generation and the ability to run ray tracing concurrently with shading or denoising.


Hardware Transcoding Performance

Exponential growth in video applications demands real-time, scalable performance and the latest hardware encode and decode capabilities. A2 GPUs use dedicated hardware to fully accelerate video encoding and decoding for the most popular codecs, including H.265, H.264, and VP9, plus AV1 decode.
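
As a hedged illustration of using that dedicated hardware, the sketch below drives FFmpeg's NVDEC/NVENC paths from Python; it assumes an FFmpeg build compiled with NVIDIA hardware acceleration, and the file names and bitrate are placeholders.

```python
# Minimal sketch of GPU-accelerated transcoding by calling FFmpeg's
# NVDEC/NVENC paths from Python. Assumes an FFmpeg build with NVIDIA
# hardware acceleration; "input.mp4"/"output.mp4" and the bitrate are
# placeholders, not values from the product listing.
import subprocess

cmd = [
    "ffmpeg",
    "-hwaccel", "cuda",                 # decode on the GPU (NVDEC)
    "-hwaccel_output_format", "cuda",   # keep decoded frames in GPU memory
    "-i", "input.mp4",
    "-c:v", "hevc_nvenc",               # encode to H.265 on the GPU (NVENC)
    "-b:v", "5M",
    "output.mp4",
]
subprocess.run(cmd, check=True)
```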


Complete Inference Portfolio

NVIDIA offers a complete portfolio of NVIDIA-Certified Systems featuring Ampere Tensor Core GPUs as the inference engine powering NVIDIA AI. A2 Tensor Core GPUs add entry-level inference in a low-profile form factor to the NVIDIA AI portfolio that already includes A100 and A30 Tensor Core GPUs.

The A100 delivers the highest inference performance at every scale, while the A30 brings optimal inference performance to mainstream servers. Together, the NVIDIA A2, A30, and A100 Tensor Core GPUs deliver leading inference performance across edge, data centre, and cloud.


NVIDIA AI Enterprise

NVIDIA AI Enterprise, an end-to-end cloud-native suite of AI and data analytics software, is certified to run on the A2 in hypervisor-based virtual infrastructure with VMware vSphere. This enables management and scaling of AI and inference workloads in a hybrid cloud environment.

Specifications
Key Specifications
Graphics Chipset NVIDIA A2
Edition NVIDIA A2 TENSOR CORE GPU
Manufacturing process 8 nm
Microarchitecture Ampere
GPU Name  
Cores & Clocks
NVIDIA CUDA Cores 1280
Core Clock 1440 MHz
Boost Clock 1770 MHz
Video Memory (VRAM)
Memory Size 16 GB
Memory Type GDDR6
Memory Clock 6251 MHz
Memory Interface 128-bit
Memory Bandwidth 200 GB/s
ECC Technology Yes
Cooling
Cooling Passive
I/O & Connectivity
Interface PCIe 4.0 (x8)
Interface Bandwidth  
Graphics Output  
Visuals
Multi GPU Support  
NVLink Support N/A
Total NVLink Bandwidth N/A
Microsoft DirectX Support  
HDCP Ready No
Multi Monitor Support  
Concurrent Users N/A
H.264 1080p30 Streams N/A
Maximum Digital Resolution  
Maximum VGA Resolution  
Supported Graphics APIs  
Compute Performance
Supported Compute APIs  
Single Precision (FP32) Processing Yes
Single Precision (FP32) Performance 4.5 teraFLOPS
Double Precision (FP64) Processing No
Double Precision (FP64) Performance  
Tensor Processing Yes
Deep Learning (Tensor) Performance  
NVIDIA Tensor Cores 40
Integer Operations (INT8) 36 TOPS (Tera-Operations per Second)
Ray-Tracing
NVIDIA RT Cores 10
RTX-OPS  
Rays Cast  
Power & Thermals
Graphics Card Power Connectors  
Board Power 40-60 W
Minimum Recommended PSU  
Maximum GPU Temperature  
Physical
Form Factor Single Slot
Low Profile Compatible Yes
Low Profile Support Yes
Dimensions  
Package Type  
Additional Information
Scan Code LN121712
Model Number TCSA2MATX-PB
GTIN 3536403388454
Warranty

Please note your statutory rights are not affected.

For further information regarding Scan's warranty procedure, please see our terms and conditions.

Details
Duration: 36 months
Type: Return to base
DOA Period: 28 days
RTB Period: 24 months
Manufacturer Contact Details
Manufacturer: PNY/SCAN
Telephone: 0871 472 4747