Elon Musk has announced bold new plans for xAI, aiming to push the boundaries of artificial intelligence development. The company intends to deploy the equivalent of 50 million Nvidia H100 GPUs over the next five years, delivering a staggering 50 ExaFLOPS of compute for AI training. This monumental upgrade would position xAI as a leader in the race to build next-generation AI systems capable of tackling complex global challenges.
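As a rough sanity check on the headline figure, here is a minimal sketch assuming roughly 1 PFLOPS of dense FP16 throughput per H100-class GPU (a commonly cited ballpark, not a number from the announcement):

```python
# Back-of-the-envelope check of the 50 ExaFLOPS figure.
# Assumption: ~1 PFLOPS (1e15 FLOP/s) of dense FP16 compute per H100-class GPU.
H100_EQUIVALENT_FLOPS = 1e15   # FLOP/s per GPU (approximate, assumed)
GPU_COUNT = 50_000_000         # 50 million H100-equivalents

total_flops = GPU_COUNT * H100_EQUIVALENT_FLOPS
print(f"Aggregate compute: {total_flops / 1e18:.0f} ExaFLOPS")  # -> 50 ExaFLOPS
```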
xAI already operates a substantial fleet of 200,000 H100 and H200 GPUs alongside 30,000 GB200 GPUs. The company's roadmap includes Colossus 2, a massive cluster exceeding one million GPUs, designed to accelerate AI research and deployment.
However, this ambitious scale comes with significant hurdles, particularly in energy demands.
To power this supercomputing hub, xAI estimates a need for up to 35 gigawatts of electricity, roughly the output of 35 large nuclear reactors. Even with more efficient chip and datacenter designs, total energy consumption is unlikely to fall substantially, posing a critical challenge. Securing such a vast power supply raises questions about infrastructure, sustainability, and regulatory feasibility.
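The 35-gigawatt estimate is consistent with a simple per-GPU power budget; a minimal sketch, assuming roughly 700 W of board power per H100-class accelerator and ignoring cooling and networking overhead:

```python
# Rough power estimate for a 50-million-GPU fleet.
# Assumption: ~700 W board power per H100-class GPU; excludes cooling,
# networking, and other datacenter overhead, so the real figure would be higher.
GPU_COUNT = 50_000_000
WATTS_PER_GPU = 700            # approximate board power, assumed

total_watts = GPU_COUNT * WATTS_PER_GPU
print(f"Estimated draw: {total_watts / 1e9:.0f} GW")  # -> 35 GW
```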
Whether xAI can achieve this goal by 2028-2030 remains uncertain, hinging on advancements in energy solutions, partnerships, and technological breakthroughs. Musk’s vision underscores xAI’s commitment to accelerating human scientific discovery, but the path forward will test the limits of innovation and resource management in the AI era.

