Elon Musk's xAI Building 'Gigafactory of Compute' Supercomputer for Grok AI

Gábor Bíró 2024. May 05.
2 min read

Elon Musk's AI startup, xAI, is building a massive supercomputer, dubbed the "Gigafactory of Compute." This supercomputer's task will be to elevate their conversational AI, Grok, to a new level.

Source: Original creation

Elon Musk's AI startup, xAI, is working on an ambitious project: building a giant supercomputer named the "Gigafactory of Compute." Its purpose is to significantly enhance the capabilities of their conversational AI, Grok. This machine is planned to utilize 100,000 specialized semiconductor chips (specifically, NVIDIA H100 GPUs), aiming to position xAI at the forefront of AI technology.

Musk's vision extends far beyond current technological levels; he predicts that AI will surpass human cognitive abilities by the end of 2025. At a recent investor meeting, Musk stated that xAI aims to catch up with industry leaders like OpenAI and DeepMind by the end of 2024. Furthermore, Musk believes AI development could eventually reach a point where it replaces all human jobs, raising new questions about human purpose. He suggests that in the future, humanity's task might be to "give meaning to AI."

The "Gigafactory of Compute" supercomputer is expected to be completed by the fall of 2025, at a cost potentially running into billions of dollars. Its computational power will be used to develop and operate the third version of Grok, which will reportedly require at least 100,000 NVIDIA H100 GPUs – five times the number used for Grok 2.0. Grok is currently at version 1.5, released in April 2024, which already offers capabilities such as processing images and diagrams and providing AI-powered news summaries for premium X users.

The core components of the supercomputer will be Nvidia's cutting-edge H100 graphics processing units (GPUs). According to Musk, the new GPU cluster will be at least four times larger than the largest clusters currently operated by xAI's competitors. Training Grok 3 alone is expected to require at least 100,000 of these chips – up from the roughly 20,000 GPUs reportedly used for Grok 2.0.
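To put these figures in perspective, here is a minimal back-of-envelope sketch. The chip counts come from the article; the per-GPU price is an outside assumption (market estimates for the H100 vary widely, with roughly $30,000 per unit commonly cited), so the resulting cost is illustrative only:

```python
# Back-of-envelope check of the reported cluster scale and cost.

grok2_gpus = 20_000    # GPUs reportedly used for Grok 2.0
scale_factor = 5       # Grok 3 reportedly needs five times as many
grok3_gpus = grok2_gpus * scale_factor

# Assumed unit price for an NVIDIA H100 (NOT a figure from the article;
# ~$30,000 is a commonly cited ballpark).
assumed_h100_price_usd = 30_000

estimated_hardware_cost_usd = grok3_gpus * assumed_h100_price_usd

print(grok3_gpus)                                       # 100000
print(f"${estimated_hardware_cost_usd / 1e9:.1f}B")     # $3.0B
```

Even under this conservative price assumption, GPU hardware alone lands in the billions of dollars, consistent with the cost scale the article describes.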

xAI is partnering with Oracle to build the infrastructure for the "Gigafactory of Compute" supercomputer, highlighting the project's scale and seriousness. To finance this large-scale venture, Musk initially sought to raise $4 billion at a $15 billion valuation for xAI. Due to strong investor interest, the target was later increased to $6 billion, reportedly closing at an $18 billion pre-money valuation. This substantial financial investment, estimated in the billions of dollars, is aimed at assembling the tens of thousands of NVIDIA H100 GPUs required to power the next-generation Grok AI system.
