Between Tesla and xAI, Elon Musk’s artificial intelligence aspirations have cost some $10 billion this year in bringing training and inference compute capabilities online, according to a Thursday post on X (formerly Twitter) by Tesla investor Sawyer Merritt.
“Tesla already deployed and is training ahead of schedule on a 29,000 unit Nvidia H100 cluster at Giga Texas – and will have 50,000 H100 capacity by the end of October, and ~85,000 H100 equivalent capacity by December,” Merritt noted.
By the end of this year, Elon Musk’s companies (Tesla & xAI) will have brought online roughly $10 billion worth of training compute capacity in 2024 alone.
— Sawyer Merritt (@SawyerMerritt) October 29, 2024
Tesla also revealed its Cortex AI cluster in August, which will be used to train the company’s Full Self-Driving system and pairs 50,000 Nvidia H100 GPUs with another 20,000 Dojo AI chips developed by Tesla itself. The Colossus supercomputer — xAI’s name for its Memphis cluster, unveiled in September — is slated to expand by another 50,000 H100 and 50,000 H200 GPUs in the coming months.
xAI, for its part, began assembling its Memphis supercomputer in July at its Gigafactory of Compute, located in a former Electrolux production facility in Memphis, Tennessee. Musk claims that the Memphis cluster is “the most powerful AI training cluster in the world,” as it runs on 100,000 of Nvidia’s H100 GPUs, though Musk has promised to double that capacity in short order. It came online in September and has since been tasked with building the “world’s most powerful AI by every metric by December of this year,” likely Grok 3. xAI has not disclosed how much the Memphis cluster cost to build, though Tom’s Hardware estimates that the company has spent at least $2…
Read the full story on Digital Trends