Tesla will spend up to $4bn on Nvidia hardware in 2024, CEO says
Tesla CEO Elon Musk has again tempered expectations for his company's ability to power its transition into an AI and robotics company with its own supercomputing hardware, Dojo, calling the in-house supercomputer's chances of success a "longshot".
While Tesla says Dojo is performing certain limited training tasks, the company has been ordering hardware such as GPUs from Nvidia, which Musk has described as "hedging our bets".
Musk also addresses reports that he diverted Nvidia microchips intended for Tesla to his other companies X and xAI, saying the decision was taken only because Tesla did not have space to store the hardware.
Musk denies that this means Tesla is pulling back on any AI targets, saying that "once we have that system working, we will order more hardware". But the CEO continues his recent habit of tempering expectations about the company's transformation into an autonomy- and AI-defined business.
"Also, I cannot overstate the difficulty of making 50,000 H100s train as a coherent system. No company on earth has been able to achieve this yet," Musk writes on social media site X, formerly Twitter.
He goes on to break down Tesla's previously announced $10bn AI investment plan for 2024, which he confirms will rely substantially on Nvidia hardware.
"Of the roughly $10bn in AI-related expenditures I said Tesla would make this year, about half is internal, primarily the Tesla-designed AI inference computer and sensors present in all of our cars, plus Dojo," Musk says.
"For building the AI training superclusters, Nvidia hardware is about two thirds of the cost. My current best guess for Nvidia purchases by Tesla are $3bn to $4bn this year," he adds.
But Musk also offers a potential reason for confidence in Tesla's ability to keep enough computing power on hand for the AI training its lofty full self-driving (FSD) goals require, saying on X that training would account for only a fraction of the power consumed by Tesla's AI hardware overall.
"Training compute for Tesla is relatively small compared to inference compute, as the latter scales linearly with size of fleet," he says. In this instance, 'inference compute' refers to the capacity needed while Tesla's vehicle AI models are in action, drawing conclusions form new data, while 'training compute' refers to the design and establishment of models through showing it its desired outcomes.
"When the Tesla fleet reaches 100mn vehicles, peak power consumption of AI hardware in cars will be approximately 100GW. Training power consumption is probably less than 5GW," Musk says, although admitting that "these are very rough guesses".
As much as Musk hopes to contextualise the training compute challenge against a much higher bar, the CEO goes on to admit that 5GW of AI training compute remains "enormous by current standards".
But he returns to his point that it is only "about 5pc of total Tesla AI compute capacity". Tesla is currently installing multiple tranches of vast computing capacity, including one at Giga Texas.
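Taken at face value, Musk's "very rough guesses" can be reproduced with simple arithmetic. The sketch below (our illustration, using only the numbers quoted above) works out the implied per-vehicle inference draw and the training share of total AI power:

```python
# Illustrative arithmetic using Musk's "very rough guesses" quoted above.

fleet_size = 100e6         # 100mn vehicles
inference_peak_gw = 100.0  # peak in-car AI power consumption at that fleet size
training_gw = 5.0          # estimated AI training power consumption

# Inference scales linearly with fleet size, so the implied per-vehicle draw:
per_car_kw = inference_peak_gw * 1e6 / fleet_size  # convert GW to kW

# Training as a share of total (training + inference) AI compute power:
training_share = training_gw / (training_gw + inference_peak_gw)

print(f"Implied per-vehicle inference draw: {per_car_kw:.1f} kW")   # 1.0 kW
print(f"Training share of total AI power:   {training_share:.1%}")  # 4.8%
```

The result, roughly 1kW per car and a 4.8pc training share, lines up with Musk's "about 5pc" figure.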
"There is a path for Dojo to exceed Nvidia. It is a long shot, as I have said before, but success is one of the possible outcomes," Musk says.
But Tesla will continue to source GPUs from Nvidia for the foreseeable future. And Musk's admission that Dojo's chances of taking the reins are a "longshot" strikes an uncharacteristically cautious tone compared with his usual preferred style of bombastic hype.