NVIDIA 900-21001-0040-100 A30 24 GB HBM2 Graphic Card - Passive - PCIe 4.0 x16 - Dual Slot

MPN: 900-21001-0040-100
In Stock
Highlights
  • Standard Memory: 24 GB
  • Host Interface: PCI Express 4.0
  • Cooler Type: Passive Cooler
  • Product Type: Graphic Card
$4,875.00
Non-cancelable and non-returnable
B2B pricing options available.

SabrePC B2B Account Services

With a B2B account, save instantly and shop with assurance, knowing that a dedicated account team is just a phone call or email away to help answer any of your questions.

  • Business-Only Pricing
  • Personalized Quotes
  • Fast Delivery
  • Products and Support

AI Inference and Mainstream Compute for Every Enterprise

NVIDIA A30 Tensor Core GPU is the most versatile mainstream compute GPU for AI inference and mainstream enterprise workloads. Powered by NVIDIA Ampere architecture Tensor Core technology, it supports a broad range of math precisions, providing a single accelerator to speed up every workload.

Built for AI inference at scale, the same compute resource can rapidly re-train AI models with TF32, as well as accelerate high-performance computing (HPC) applications using FP64 Tensor Cores. Multi-Instance GPU (MIG) and FP64 Tensor Cores combine with fast 933 gigabytes per second (GB/s) of memory bandwidth in a low 165W power envelope, all running on a PCIe card optimal for mainstream servers.

The combination of third-generation Tensor Cores and MIG delivers secure quality of service across diverse workloads, all powered by a versatile GPU enabling an elastic data center. A30's versatile compute capabilities across big and small workloads deliver maximum value for mainstream enterprises.

A30 is part of the complete NVIDIA data center solution that incorporates building blocks across hardware, networking, software, libraries, and optimized AI models and applications from NGC™. Representing the most powerful end-to-end AI and HPC platform for data centers, it allows researchers to deliver real-world results and deploy solutions into production at scale.

Groundbreaking Innovations | The End-to-End Solution for Enterprises

NVIDIA AMPERE ARCHITECTURE

Whether using MIG to partition an A30 GPU into smaller instances or NVIDIA NVLink to connect multiple GPUs to speed larger workloads, A30 can readily handle diverse-sized acceleration needs, from the smallest job to the biggest multi-node workload. A30's versatility means IT managers can maximize the utility of every GPU in their data center with mainstream servers, around the clock.

THIRD-GENERATION TENSOR CORES

NVIDIA A30 delivers 165 teraFLOPS (TFLOPS) of TF32 deep learning performance. That's 20X more AI training throughput and over 5X more inference performance compared to the NVIDIA T4 Tensor Core GPU. For HPC, A30 delivers 10.3 TFLOPS of FP64 performance, nearly 30 percent more than the NVIDIA V100 Tensor Core GPU.

NEXT-GENERATION NVLINK

NVIDIA NVLink in A30 delivers 2X higher throughput compared to the previous generation. Two A30 PCIe GPUs can be connected via an NVLink Bridge to deliver 330 TFLOPS of deep learning performance.
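As a rough sketch of how a bridged pair might be verified after installation (assuming a Linux host with the NVIDIA driver installed; the GPU index 0 is illustrative):

```shell
# List the GPUs the driver can see, to confirm both A30s enumerate
nvidia-smi --query-gpu=index,name --format=csv

# Report NVLink link state and per-link capabilities for GPU 0;
# an active bridge shows its links as up with their line rates
nvidia-smi nvlink --status -i 0
```

If the bridge is seated correctly, the status output lists the active NVLink lanes between the two cards; no links reported usually means the bridge is absent or unsupported in that slot spacing.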

MULTI-INSTANCE GPU (MIG)

An A30 GPU can be partitioned into as many as four GPU instances, fully isolated at the hardware level with their own high-bandwidth memory, cache, and compute cores. MIG gives developers access to breakthrough acceleration for all their applications. And IT administrators can offer right-sized GPU acceleration for every job, optimizing utilization and expanding access to every user and application.
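The partitioning above is driven through `nvidia-smi`. A minimal sketch, assuming a Linux host with root access and a recent NVIDIA driver (the `1g.6gb` profile name is the A30's smallest instance as reported by the driver; exact profile names can vary by driver version, so check the `-lgip` listing first):

```shell
# Enable MIG mode on GPU 0 (may require a GPU reset or reboot to take effect)
sudo nvidia-smi -i 0 -mig 1

# List the GPU instance profiles this GPU supports, with their memory sizes
sudo nvidia-smi mig -lgip

# Create four 1g.6gb GPU instances, each with a default compute instance (-C)
sudo nvidia-smi mig -cgi 1g.6gb,1g.6gb,1g.6gb,1g.6gb -C

# Verify the resulting MIG devices; each appears with its own UUID
nvidia-smi -L
```

Each MIG device can then be assigned to a separate container or user, which is how the right-sizing described above is put into practice.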

HBM2

With 24 GB of high-bandwidth memory (HBM2), A30 delivers 933 GB/s of GPU memory bandwidth, optimal for diverse AI and HPC workloads in mainstream servers.