
NVIDIA GTC 2025 Keynote Highlights

March 21, 2025 • 5 min read


Introduction

NVIDIA GTC 2025 is the world’s most comprehensive conference on accelerated computing and AI, where innovators and businesses can share new technological advancements using NVIDIA hardware and software.

At the start of every GTC, CEO Jensen Huang delivers a keynote announcing new initiatives, including data center GPUs, frameworks, and software, and highlighting notable partnerships and advancements in AI. Let's go over the biggest points Jensen made during this year's keynote.

1. NVIDIA CUDA is More than Just AI

NVIDIA CUDA made accelerated computing possible, speeding up calculations and turning workloads that were once extremely time-consuming or outright unfeasible into practical ones. It started when engineers at NVIDIA realized that the parallelism of their GPU hardware could extend beyond graphics and calculating pixels on a screen: the same chips can run hundreds of thousands of math operations in parallel for workloads such as AI neural networks, simulation, and molecular dynamics.
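To make that parallelism concrete, here is a minimal, illustrative sketch (not from the keynote) of a CUDA kernel written in Python with Numba's CUDA support: every output element is computed by its own GPU thread, the same pattern that scales up to neural networks and simulation workloads. The array size and block size below are arbitrary assumptions.

```python
# Minimal data-parallel CUDA kernel via Numba (requires an NVIDIA GPU and CUDA toolkit).
from numba import cuda
import numpy as np

@cuda.jit
def vector_add(a, b, out):
    i = cuda.grid(1)          # global thread index
    if i < out.size:          # guard threads that fall past the end of the array
        out[i] = a[i] + b[i]

n = 1_000_000                                    # arbitrary example size
a = np.random.rand(n).astype(np.float32)
b = np.random.rand(n).astype(np.float32)
out = np.zeros_like(a)

threads_per_block = 256
blocks = (n + threads_per_block - 1) // threads_per_block
vector_add[blocks, threads_per_block](a, b, out)  # one GPU thread per element
```

Instead of a sequential CPU loop, each of the million additions is handled by its own lightweight GPU thread.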

NVIDIA highlighted its extensive GPU-accelerated libraries and microservices under CUDA-X for HPC workloads such as computer-aided engineering, physics, biology, weather, and healthcare, and even something as simple as accelerating NumPy. NVIDIA cuOpt, its decision-optimization library, will be made open source to standardize and democratize GPU-accelerated optimization algorithms and make decision-making even faster.
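As a rough illustration of the "accelerating NumPy" point (a hedged sketch using the third-party CuPy library as a stand-in, not NVIDIA's official CUDA-X code), a NumPy-style expression can be moved to the GPU with almost no changes:

```python
import numpy as np
import cupy as cp   # NumPy-like API backed by CUDA; needs an NVIDIA GPU

x_cpu = np.random.rand(4096, 4096).astype(np.float32)
y_cpu = x_cpu @ x_cpu                       # matrix multiply on the CPU with NumPy

x_gpu = cp.asarray(x_cpu)                   # copy the array into GPU memory
y_gpu = x_gpu @ x_gpu                       # same expression, now a GPU kernel
cp.cuda.Stream.null.synchronize()           # wait for the GPU work to finish

# GPU and CPU results agree up to float32 rounding
np.testing.assert_allclose(cp.asnumpy(y_gpu), y_cpu, rtol=1e-2)
```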

2. NVIDIA GB300 NVL72

Before the new data center hardware announcements, Jensen gave some backstory on why NVIDIA shifted from its traditional air-cooled, stand-alone-node DGX systems to the liquid-cooled, rack-scale DGX NVL lineup. Disaggregating NVLink from the motherboard and liquid-cooling the Grace Blackwell Superchips allows for greater density and scale-out capability, increasing scalability and flexibility.

Jensen Huang also announced that future generations will adopt a new naming convention in which the NVL suffix counts GPU dies rather than GPU packages. Blackwell Ultra NVL72, for example, contains 72 Blackwell GPU packages, each built from 2 GPU dies; under the new convention, a rack of the same scale is named for its total die count. Coming in the second half of 2025, NVIDIA Blackwell Ultra NVL72 offers a 1.5x performance improvement over its predecessor, the GB200 NVL72.

3. NVIDIA Vera Rubin NVL144 and Rubin Ultra NVL576

Planned for release in the second half of 2026, NVIDIA's Vera Rubin will incorporate 144 GPU dies in its NVL144 system. The new architecture promises remarkable improvements, delivering 3.3x the performance of the GB300 NVL72. Vera Rubin will feature all-new next-generation technologies, including ConnectX-9 and NVLink 6, which offer double the bandwidth of their predecessors. Additionally, HBM4 memory will bring a significant 1.6x boost in memory bandwidth.

Furthermore, Jensen Huang announced NVIDIA Rubin Ultra, planned for the second half of 2027. The Rubin Ultra GPU features 4 GPU dies per package for next-level compute density. Rubin Ultra NVL576 will be a revolutionary AI factory delivering 14x the performance of Blackwell Ultra, carrying forward ConnectX-9 networking alongside an all-new NVLink 7 interconnect and HBM4e memory.

4. NVIDIA Photonics and Fiber - Quantum-X & Spectrum-X

Data transfer efficiency and interconnection in modern data centers are often more critical than raw computing power. Power constraints and networking efficiency become major challenges when scaling GPU deployments, particularly since connection types vary with distance requirements.

To address this, NVIDIA partnered with TSMC to develop a new photonics system featuring TSMC's photonic engine, micro-ring modulators (MRMs), and high-efficiency lasers to build a faster, more efficient fiber-optic networking solution. NVIDIA will release Quantum-X in H2 2025 and Spectrum-X in H2 2026, offering 3.5x better efficiency, 10x higher resiliency, and 1.3x faster deployment.

5. Physical AI and Robotics

At last year's GTC, Jensen Huang said, “Everything that moves will become robotics.” We are at the forefront of autonomous machinery with cars, manipulator arms, and eventually humanoid robots. NVIDIA's edge devices inside cars can be driven virtually in NVIDIA Omniverse for testing, optimization, and fine-tuning against both normal and edge-case scenarios. NVIDIA highlighted three groundbreaking tools:

  • NVIDIA Cosmos: Advanced generative AI platform capable of creating limitless virtual environments, enabling AI systems to adapt their capabilities across diverse deployment scenarios.
  • NVIDIA Newton: In partnership with Google DeepMind and Disney, NVIDIA Newton emerges as an open-source physics engine, delivering precise simulation capabilities for robotic movement training that seamlessly transfers to real-world applications.
  • Isaac GR00T N1: A framework and foundation model for humanoid robotics, designed for fine-tuning humanoid robot control. NVIDIA announced the open-source release of GR00T N1 as a versatile and accessible platform that covers a broad spectrum of common tasks and supports additional fine-tuning.

Conclusion

Every year, NVIDIA's GTC conference is a highlight for the AI community, showcasing inspiration, innovation, and discovery in the way we compute at large. With AI emerging as an essential way to process and interact with information in every industry, NVIDIA is pushing forward to enable and power the next generation of AI.

NVIDIA no longer just makes GPUs to game on; it manufactures the high-performance accelerators for solving the world's greatest challenges. It is shaping up to look like NVIDIA wants a stake in powering the entire workflow for the future of computing with robotics: from building the data centers for training agentic AI models, to supplying the robotics computers for inferencing, to advancing AI robotics frameworks and making training accessible.


Tags

nvidia

gtc

gpu

blackwell

vera rubin
