US AI companies NVIDIA and OpenAI have signed a letter of intent for a strategic partnership to deploy at least 10 GWe of NVIDIA systems for OpenAI’s next-generation AI infrastructure. The aim is to train and run OpenAI’s next generation of models on the path to deploying superintelligence. To support this deployment, including data centre and power capacity, NVIDIA intends to invest up to $100bn in OpenAI as the new NVIDIA systems are deployed. The first phase is targeted to come online in the second half of 2026 using NVIDIA’s Vera Rubin platform.
OpenAI will work with NVIDIA as a preferred strategic compute and networking partner for its AI factory growth plans. OpenAI and NVIDIA will co-optimise their roadmaps, spanning OpenAI’s model and infrastructure software and NVIDIA’s hardware and software. This partnership complements the work OpenAI and NVIDIA are already doing with a broad network of collaborators, including Microsoft, Oracle, SoftBank, and Stargate partners, focused on building the world’s most advanced AI infrastructure. OpenAI has grown to over 700m weekly active users, with strong adoption across global enterprises, small businesses, and developers.
“NVIDIA and OpenAI have pushed each other for a decade, from the first DGX supercomputer to the breakthrough of ChatGPT,” said Jensen Huang, founder and CEO of NVIDIA. “This investment and infrastructure partnership mark the next leap forward – deploying 10 GW to power the next era of intelligence.”
Sam Altman, co-founder and CEO of OpenAI, said: “Compute infrastructure will be the basis for the economy of the future, and we will utilise what we’re building with NVIDIA to both create new AI breakthroughs and empower people and businesses with them at scale.”
Greg Brockman, co-founder and President of OpenAI, noted: “We’ve been working closely with NVIDIA since the early days of OpenAI. We’ve utilised their platform to create AI systems that hundreds of millions of people use every day. We’re excited to deploy 10 gigawatts of compute with NVIDIA to push back the frontier of intelligence and scale the benefits of this technology to everyone.”
In an interview with CNBC, Huang noted: “This partnership is about building an AI infrastructure that enables AI to go from the labs into the world.” Huang emphasised that this is only the start of a massive buildout of AI infrastructure around the world. “We’re literally going to connect intelligence to every application, to every use case, to every device – and we’re just at the beginning. This is the first 10 gigawatts, I assure you of that.”
Altman told CNBC: “The cost per unit of intelligence will keep falling and falling and falling, and we think that’s great. But on the other side, the frontier of AI, maximum intellectual capability, is going up and up. And that enables more and more use – and a lot of it.”
Without enough computational resources, Altman explained, people would have to choose between impactful use cases, for example either researching a cancer cure or offering free education. “No one wants to make that choice,” he said. “And so increasingly, as we see this, the answer is just much more capacity so that we can serve the massive need and opportunity.”
The partnership expands on a long-standing collaboration between NVIDIA and OpenAI, which began with Huang hand-delivering the first NVIDIA DGX system to the company in 2016. “This is a billion times more computational power than that initial server,” Brockman told CNBC. “We’re able to actually create new breakthroughs, new models… to empower every individual and business because we’ll be able to reach the next level of scale.”
The companies said they expect to finalise details in the coming weeks. Huang told CNBC the $100bn investment comes on top of all NVIDIA’s existing commitments and was not included in the company’s recent financial forecasts to investors. The partnership announcement comes a week after NVIDIA disclosed a $5bn investment in Intel, taking a 4% stake in its longtime competitor as the two companies plan to co-develop custom data centre and PC products.
Commenting on the deal, Ars Technica noted: “To put that power demand in perspective, 10 GW equals the output of roughly 10 nuclear reactors, which typically output about 1 GW per facility. Current data centre energy consumption ranges from 10 MW to 1 GW, with most large facilities consuming between 50 and 100 MW. OpenAI’s planned infrastructure would dwarf existing installations, requiring as much electricity as multiple major cities.”
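Ars Technica’s comparison can be checked with simple arithmetic (a minimal sketch in Python; the 1 GW reactor and 50–100 MW data centre figures come from the quote above, not independent estimates):

```python
# Back-of-envelope check of the power comparison in the Ars Technica quote.
planned_gw = 10                # planned NVIDIA deployment, GW
reactor_gw = 1                 # typical nuclear reactor output, GW
large_dc_mw = (50, 100)        # typical large data centre draw, MW

reactors_equiv = planned_gw / reactor_gw
dc_equiv_low = planned_gw * 1000 / large_dc_mw[1]   # vs a 100 MW facility
dc_equiv_high = planned_gw * 1000 / large_dc_mw[0]  # vs a 50 MW facility

print(reactors_equiv)   # 10.0  -> ~10 reactors
print(dc_equiv_low)     # 100.0 -> ~100 large data centres
print(dc_equiv_high)    # 200.0 -> up to ~200 large data centres
```

On those figures, the planned buildout is equivalent to somewhere between 100 and 200 of today’s large data centres, which is the basis for the “dwarf existing installations” claim.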
While the companies did not specify power sources in their announcement, the massive energy requirements have driven other tech giants to nuclear partnerships for similar projects.
Ars Technica further noted: “The planned infrastructure buildout would significantly increase global energy consumption, which also raises environmental concerns. The International Energy Agency estimates that global data centres already consumed roughly 1.5% of global electricity in 2024. OpenAI’s project also faces practical constraints. Existing power grid connections represent bottlenecks in power-constrained markets, with utilities struggling to keep pace with rapid AI expansion that could push global data centre electricity demand to 945 TWh by 2030, according to the International Energy Agency.”
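For scale, the IEA’s 945 TWh-by-2030 projection can be converted into an average continuous draw (an illustrative back-of-envelope calculation, not a figure from the article):

```python
# Convert the IEA's projected annual data-centre consumption to average power.
projected_twh_2030 = 945     # IEA projection for global data centres, TWh/year
hours_per_year = 8760        # non-leap year

avg_gw = projected_twh_2030 * 1000 / hours_per_year  # TWh -> GWh, then per hour
print(round(avg_gw, 1))      # 107.9 -> ~108 GW average continuous draw

# The planned 10 GW buildout as a share of that projected total:
print(f"{10 / avg_gw:.1%}")  # 9.3%
```

In other words, the NVIDIA–OpenAI deployment alone would account for roughly a tenth of the IEA’s projected global data-centre demand for 2030, which illustrates why grid connections are flagged as a bottleneck.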