PNY NVA2TCGPU-KIT A2 Tensor Core Graphic Card - 16 GB GDDR6 - LP - PCIe 4.0 - 1 Slot

MPN: NVA2TCGPU-KIT
Out of Stock
$1,349.00
Non-cancelable and non-returnable
B2B pricing options available.

SabrePC B2B Account Services

Save instantly and shop with assurance, knowing that with a B2B account you have a dedicated account team just a phone call or email away to help answer any of your questions.

  • Business-Only Pricing
  • Personalized Quotes
  • Fast Delivery
  • Products and Support
Need Help? Let's talk about it.
Highlights

NVIDIA A2 | Unprecedented Acceleration for World's Highest-Performing Elastic Data Centers

The NVIDIA A2 Tensor Core GPU provides entry-level inference with low power, a small footprint, and high performance for intelligent video analytics (IVA) or NVIDIA AI at the edge. Featuring a low-profile PCIe Gen4 card and a low 40-60 watt (W) configurable thermal design power (TDP), the A2 brings versatile inference acceleration to any server.
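On Linux, the configurable TDP mentioned above is typically managed with NVIDIA's `nvidia-smi` utility. A minimal sketch, assuming the NVIDIA driver is installed and the A2 is GPU index 0 (the index and the exact supported limits on your system may differ):

```shell
# Query the current power limit and the supported min/max range for GPU 0
nvidia-smi -q -d POWER -i 0

# Enable persistence mode so the power-limit setting is retained
sudo nvidia-smi -pm 1 -i 0

# Set the power limit to 60 W (the A2's configurable range is 40-60 W)
sudo nvidia-smi -pl 60 -i 0
```

Lowering the limit toward 40 W trades peak inference throughput for reduced power and heat, which can matter in dense or thermally constrained edge servers.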

A2's versatility, compact size, and low power exceed the demands for edge deployments at scale, instantly upgrading existing entry-level CPU servers to handle inference. Servers accelerated with A2 GPUs deliver up to 20X higher inference performance versus CPUs and 1.3X more efficient IVA deployments than previous GPU generations, all at an entry-level price point.

NVIDIA-Certified systems with the NVIDIA A2, A30, and A100 Tensor Core GPUs and NVIDIA AI (including the NVIDIA Triton Inference Server, open-source inference serving software) deliver breakthrough inference performance across edge, data center, and cloud. They ensure that AI-enabled applications deploy with fewer servers and less power, resulting in easier deployments and faster insights at dramatically lower cost.