Michael Dell, CEO and chairman of Dell Technologies, posted on X that the company is building a Dell AI Factory with NVIDIA GPUs to power Grok, the AI model developed by Elon Musk’s xAI.
xAI does not lack computational resources. In 2023, Musk acquired tens of thousands of GPUs. Earlier this year, he revealed that training the Grok 2 model required approximately 20,000 NVIDIA H100 GPUs, adding that Grok 3 and future models would need about 100,000 H100 chips.
According to another report, Musk aims to have the proposed supercomputer operational by fall 2025. It was also mentioned that xAI might collaborate with Oracle to build this extensive computer system.
Once completed, the interconnected array of NVIDIA H100 GPUs would be at least four times larger than the largest existing GPU clusters, as Musk indicated during a presentation to investors in May.
In April this year, xAI introduced Grok-1.5V, its first-generation multimodal model. In addition to its strong text capabilities, Grok can process a wide variety of visual information, including documents, diagrams, charts, screenshots, and photographs.
“It seems like Elon Musk is assembling an Avengers-like team of tech giants and turning them into a formidable force in AI, with Grok as their secret weapon,” said a user on X.
Recently, in a “staggering” revelation, Meta AI chief Yann LeCun confirmed that Meta has obtained $30 billion worth of NVIDIA GPUs to train its AI models, a sum large enough to run a small nation, or to have put a man on the moon in 1969.