It’s Jensen Huang’s world, and we’re all just living in it. Recently, the AI chip giant reported a profit of $14.881 billion, up 629% year-on-year. Revenue was $26.04 billion, surpassing the estimated $24.65 billion.
NVIDIA controls a whopping 95% of the AI chip market right now. To put it in perspective, NVIDIA is now larger than Tesla and Amazon combined. Furthermore, NVIDIA is now larger than the entire German stock market.
BREAKING: Nvidia stock, $NVDA, is now trading with a market cap above $2.5 TRILLION for the first time in history.
— The Kobeissi Letter (@KobeissiLetter) May 22, 2024
NVIDIA X (OpenAI Spring Update, Google I/O and Microsoft Build)
Last week saw several tech events, from OpenAI’s Spring Update to Google I/O and Microsoft Build. What tied them together was NVIDIA, which had a notable presence at all three.
OpenAI released GPT-4o, which won hearts with its ‘omni’ capabilities across text, vision, and audio. OpenAI’s demos included a real-time translator, a coding assistant, an AI tutor, a friendly companion, a poet, and a singer. However, this would not have been possible without NVIDIA.
“I just want to thank the incredible OpenAI team and thanks to Jensen and the NVIDIA team for bringing us the advanced GPU to make this demo possible today,” said OpenAI CTO Mira Murati at the end of the event. Notably, Huang personally delivered the first DGX H200 to OpenAI last month.
Similarly, both Microsoft and Google were quick to claim to be among the first to get their hands on NVIDIA’s latest Blackwell GPUs.
“We are also proud to be one of the first cloud providers to offer NVIDIA’s cutting-edge Blackwell GPUs, available in early 2025,” said Google chief Sundar Pichai at Google I/O.
“We’re bringing in the latest H200s to Azure later this year and will be among the first cloud providers to offer NVIDIA’s Blackwell GPUs in B100 as well as GB200 configurations,” said Microsoft chief Satya Nadella at Microsoft Build.
He added that Microsoft is continuing to work with NVIDIA to train and optimise models like GPT-4o and small language models like the Phi-3 family.
Apart from the above three players, leading LLM companies such as Adept, Anthropic, Character.AI, Cohere, Databricks, DeepMind, Meta, Mistral, xAI, and many others are building on NVIDIA AI in the cloud.
Blackwell Fever Begins
NVIDIA’s data centre revenue alone was $22.6 billion, a record, up 23% sequentially and 427% year-on-year, driven by continued strong demand for the NVIDIA Hopper GPU computing platform.
“The demand for GPUs in all data centres is incredible. We’re racing every single day. And the reason for that is because of applications like ChatGPT and GPT-4o,” said Huang, adding that the future is going to be multimodal.
Moreover, he announced that NVIDIA will build a new chip every year. “I can announce that after Blackwell, there’s another chip. We’re on a one-year rhythm,” said Huang on the earnings call. He further stated that the Blackwell platform is in full production and forms the foundation for trillion-parameter scale generative AI.
“Our production shipments will start in Q2 and ramp in Q3, and customers should have data centres stood up in Q4,” he said. “Blackwell’s time-to-market customers include Amazon, Google, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI,” said NVIDIA CFO Colette Kress.
In India, Yotta, the country’s sole NVIDIA Partner Network Cloud Partner (NCP), will receive NVIDIA’s latest Blackwell GPUs by October. Yotta plans to scale up its GPU inventory to 32,768 units by the end of 2025. Last year, the company announced that it would import 24,000 GPUs, including NVIDIA H100s and L40S units, in a phased manner.
Huang also announced NVIDIA Spectrum-X, an advanced Ethernet networking platform specifically designed to enhance the performance and efficiency of AI workloads.
NVIDIA is betting big on autonomous vehicles. “We supported Tesla’s expansion of its training AI cluster to 35,000 H100 GPUs. Their use of NVIDIA AI infrastructure paved the way for the breakthrough performance of FSD Version 12, their latest autonomous driving software based on Vision,” said Kress.
Similarly, in a recent interview with Yahoo Finance, Huang stated that besides the cloud industry, the primary users of NVIDIA’s data centre chips are from the automotive industry. He then emphasised autonomous cars and their advancements.
“Tesla is far ahead in self-driving cars, but every single car, someday, will have to have autonomous capability,” said Huang.