Nvidia says Rubin AI chips are in production as demand accelerates

Nvidia Chief Executive Jensen Huang said the company’s next-generation Rubin data-center processors are already coming off manufacturing lines, with early customer deployments slated for the second half of the year. The company is pitching a major performance jump over Blackwell as AI workloads grow more complex and infrastructure demand stays intense.

By Ahmed Azzam | @3zzamous | 6 January 2026

  • Rubin is in production, with customer deployments targeted for the second half of the year.

  • Nvidia says Rubin delivers a step-change in training and inference performance versus Blackwell.

  • Huang framed demand as being driven by increasingly complex AI software and heavier compute needs.

  • A small group of hyperscalers still accounts for most near-term spending as Nvidia pushes into broader industries.

Rubin moves from roadmap to production

Nvidia said its Rubin generation of data-center processors has reached a key milestone: chips are now in production, and customers will soon begin testing systems built around the platform. Speaking at CES in Las Vegas, CEO Jensen Huang described Rubin as the company’s next major accelerator cycle, aimed at keeping Nvidia ahead in the fast-moving market for AI infrastructure.

The company’s message was straightforward: AI adoption is not merely continuing, it is becoming more computationally demanding, and today’s hardware is being stretched. Nvidia is positioning Rubin as the answer to that strain, arguing that the next wave of AI workloads requires more specialized compute at lower unit cost per result.

Performance claims raise the stakes

Nvidia said Rubin is designed to be materially faster than Blackwell, both for training large AI models and for running them in production, where inference efficiency often determines economics. The company also highlighted a new CPU design paired with the platform, arguing it delivers a significant uplift over the component it replaces.

The pitch is not just about speed. Nvidia is also leaning on operating efficiency, saying Rubin-based systems can deliver similar outcomes with fewer components than prior generations, lowering running costs for customers scaling large clusters.

Big customers first, broader economy next

Nvidia said major cloud providers will be among the first to deploy Rubin systems in the second half of the year, reflecting how concentrated AI infrastructure spending remains. Much of the current capital expenditure is still driven by a narrow set of hyperscalers building out data centers to train and serve AI models at scale.

At the same time, Nvidia is trying to widen the demand base beyond the usual buyers. The company has been pairing its hardware roadmap with software and tools intended to accelerate adoption across sectors such as robotics, healthcare and industrial applications, where AI usage is moving from pilots into deployed systems.

Competitive noise rises, Nvidia doubles down

Even as Nvidia projects confidence, the market backdrop is less forgiving. Some investors have questioned how long AI spending can sustain its current pace, while rivals and large data-center operators increasingly develop in-house accelerators. Nvidia’s response has been to compress the cadence of product disclosure, detailing new platforms earlier than usual to keep customers and developers locked into its ecosystem.

Huang's message at CES was that demand remains strong because AI is shifting toward multi-stage, problem-solving systems whose workloads can overwhelm existing infrastructure. Nvidia is betting Rubin's timing and performance will keep it at the center of the buildout.

China demand and supply allocation remain in focus

Nvidia also flagged continued demand in China for specific chips, while noting that US export-licensing decisions remain a swing factor for what can be shipped and when. Executives indicated that supply planning aims to meet that demand without crowding out customers in other regions, even as policy uncertainty persists.

For markets, the near-term question is whether Rubin’s rollout reinforces Nvidia’s execution narrative at a moment when investors are increasingly sensitive to signs of capex fatigue. Nvidia is trying to make the case that AI’s hardware cycle is not peaking — it’s just getting started.
