NVIDIA launches Rubin platform with Vera Rubin superchip at CES 2026

2026-01-05 23:41:00
Summary:
- NVIDIA launches Rubin platform at CES 2026
- Vera Rubin superchip integrates CPU and dual GPUs
- Platform targets agentic AI and MoE models
- Designed for training and inference at scale
- Reinforces NVIDIA’s annual AI hardware cadence
AI hardware leader NVIDIA has formally lifted the curtain on its next-generation Rubin platform, announcing the launch of the Vera Rubin superchip during CES 2026 in Las Vegas. The new processor marks a significant step in the company’s accelerated computing roadmap and reinforces its strategy of delivering annual generational upgrades to meet surging AI demand.
Vera Rubin is one of six chips that collectively make up the Rubin platform, which NVIDIA is positioning as its most advanced AI system architecture to date. The superchip integrates one Vera CPU with two Rubin GPUs into a single processor, reflecting NVIDIA’s continued emphasis on tight hardware co-design across compute, memory and interconnect. That approach has become a defining feature of the company’s AI offerings, enabling performance gains that go beyond incremental silicon improvements.
NVIDIA is pitching the Rubin platform as purpose-built for the next wave of artificial intelligence workloads, particularly agentic AI, advanced reasoning models and mixture-of-experts (MoE) architectures. These models rely on routing tasks dynamically across specialised “expert” systems, placing heavy demands on compute efficiency, memory bandwidth and inter-chip communication. By combining multiple high-performance components into a unified superchip, Rubin is designed to accelerate both AI training and inference at scale.
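For readers unfamiliar with the mixture-of-experts pattern mentioned above, the routing idea can be sketched in a few lines of Python. This is a generic, simplified illustration only; the expert count, dimensions and gating function are arbitrary assumptions for the sketch, not NVIDIA code and not tied to how the Rubin platform implements MoE workloads.

```python
# Minimal, generic illustration of mixture-of-experts (MoE) token routing.
# All sizes and names are assumptions made for this sketch.
import numpy as np

rng = np.random.default_rng(0)

NUM_EXPERTS = 4     # number of specialised "expert" networks
TOP_K = 2           # each token is routed to its top-k experts
DIM = 8             # hidden dimension of each token

# Each expert is a simple linear layer (weight matrix) in this sketch.
experts = [rng.standard_normal((DIM, DIM)) for _ in range(NUM_EXPERTS)]
# The gating network scores how well each expert suits a given token.
gate_w = rng.standard_normal((DIM, NUM_EXPERTS))

def moe_layer(tokens: np.ndarray) -> np.ndarray:
    """Route each token to its top-k experts and blend their outputs."""
    scores = tokens @ gate_w                          # (n_tokens, NUM_EXPERTS)
    # Softmax over experts gives routing probabilities per token.
    probs = np.exp(scores - scores.max(axis=-1, keepdims=True))
    probs /= probs.sum(axis=-1, keepdims=True)

    out = np.zeros_like(tokens)
    for i, tok in enumerate(tokens):
        top = np.argsort(probs[i])[-TOP_K:]           # indices of best-scoring experts
        weights = probs[i, top] / probs[i, top].sum() # renormalise over top-k
        for w, e in zip(weights, top):
            out[i] += w * (tok @ experts[e])          # only k experts run per token
    return out

tokens = rng.standard_normal((3, DIM))   # a tiny batch of 3 tokens
print(moe_layer(tokens).shape)           # (3, 8)
```

The point of the pattern is that only a small subset of experts runs for each token, which is why MoE models place such heavy demands on dynamic routing, memory bandwidth and inter-chip communication rather than raw dense compute alone.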
The timing of the launch underscores NVIDIA’s view that AI compute demand remains structurally strong. Speaking alongside the announcement, CEO Jensen Huang said Rubin arrives as demand for AI computing is “going through the roof,” spanning hyperscale data centres, enterprise deployments and increasingly complex model architectures. He framed the platform as a major leap forward enabled by NVIDIA’s rapid product cadence and deep integration across its silicon stack.
CES has increasingly become a venue for NVIDIA to outline long-term strategic direction rather than simply showcase consumer-facing technology. The Rubin announcement fits that pattern, highlighting the company’s focus on AI infrastructure rather than end-user devices. It also reinforces NVIDIA’s ambition to remain at the centre of the global AI build-out, as governments, cloud providers and enterprises race to deploy more capable and efficient AI systems.
With Rubin, NVIDIA is signalling that the next phase of AI growth will be driven not just by larger models, but by more sophisticated reasoning, orchestration and real-world deployment — workloads that demand an entirely new class of AI supercomputing platforms.