It’s no surprise that generative AI demands an exorbitant amount of computing power. Top AI labs like Google, OpenAI, and Anthropic frequently raise the issue, especially when unveiling new products and features that drive a surge in demand.
Perhaps in a bid to keep up with this trend, NVIDIA recently announced its plan to invest up to $100 billion in OpenAI. The strategic partnership will allow the ChatGPT maker to build at least 10 gigawatts of AI data centers, which will help train and run next-gen AI models and even potentially create the path toward superintelligence.
It’s worth noting that the first gigawatt of NVIDIA systems is set to be deployed in the second half of 2026 on NVIDIA’s Vera Rubin platform. The chipmaker’s first $10 billion investment in OpenAI will be made once both parties reach an agreement for OpenAI to purchase NVIDIA chips, though reports suggest that tranche will only be deployed once the first gigawatt is completed.
While making the announcement, OpenAI CEO Sam Altman indicated:
“Everything starts with compute. Compute infrastructure will be the basis for the economy of the future, and we will utilize what we’re building with Nvidia to both create new AI breakthroughs and empower people and businesses with them at scale.”
At the beginning of the year, OpenAI unveiled its $500 billion Stargate project, designed to facilitate the construction of data centers across the United States to power its AI advances. Consequently, Microsoft lost its exclusive cloud provider status for OpenAI, though it still holds the right of first refusal.
NVIDIA and OpenAI are set to finalize the new partnership details in the coming weeks, potentially addressing the ChatGPT maker’s cloud compute woes.
Sam Altman admits OpenAI is compute-constrained
Earlier this year, OpenAI CEO Sam Altman claimed that the company was no longer compute-constrained, despite Microsoft backing out of two mega data center deals because it did not want to provide additional training support for ChatGPT.
But following the new partnership announcement between OpenAI and NVIDIA, Sam Altman indicated:
“The compute constraints that the whole industry has been under, and our company in particular, have been terrible. We’re so limited right now in the services we can offer. There is so much more demand than what we can do.”
The executive further elaborated that the compute constraints would place OpenAI in a tough spot within the next two years, forcing it to make painful tradeoffs. He indicated that with 5-10 gigawatts of compute power, the company might be forced to choose between dedicating that capacity to research aimed at curing cancer and providing free education to everyone on Earth.