The list of AI models that have missed their promised launch windows continues to grow.
Last summer, billionaire Elon Musk, the founder and CEO of the AI company xAI, said that Grok 3, xAI's next-generation AI model, would arrive “by the end of the year” 2024. Grok, xAI's answer to models like OpenAI's GPT-4o and Google's Gemini, can analyze images and answer questions, and it powers a number of features on X, Musk's social network.
“Grok 3 by the end of the year after training on 100k H100s should be something special,” Musk wrote in a July post on X, referring to xAI's Memphis-based cluster of GPUs. “Grok 3 will be a leap forward,” he said in a follow-up post in mid-December.
Yet it's January 2, and Grok 3 hasn't arrived, nor are there any signs that its release is imminent.
In fact, code on xAI's website spotted by AI tipster Tibor Blaho suggests that an interim model, “Grok 2.5,” may arrive first.
Grok(.)com may be coming soon with Grok 2.5 version (grok-2-latest – “Our smartest version”) – thanks for the suggestion, anon! pic.twitter.com/emsvmZyaf7
— Tibor Blaho (@btibor91) December 20, 2024
Of course, this isn't the first time Musk has set a lofty goal and missed it. It's well documented that Musk's predictions about when products will launch are often overly optimistic.
And to be fair, in an August interview with podcaster Lex Fridman, Musk said that Grok 3 would “hopefully” be available in 2024, “if we're lucky.”
But Grok 3's no-show is notable because it is part of a growing trend.
Last year, AI startup Anthropic failed to deliver a superior successor to its Claude 3 Opus model on schedule. Months after announcing that the next version, Claude 3.5 Opus, would be released by the end of 2024, Anthropic removed all references to the model from its developer documentation. (According to another report, Anthropic finished training Claude 3.5 Opus sometime last year, but decided against releasing it.)
Google and OpenAI have also reportedly run into setbacks with their flagship models in recent months.
This may be evidence of the limitations of current AI scaling laws, the methods companies are using to increase the capabilities of their models. In the past, it was possible to boost a model's performance by training it with more computing power and ever-larger datasets. But the gains with each model generation have begun to shrink, which is leading the industry to pursue alternative approaches.
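As a rough, purely illustrative sketch of the "diminishing returns" pattern described above, scaling behavior is often modeled as a power law, where each doubling of compute buys a smaller absolute improvement than the last. The constants and numbers below are hypothetical, not figures from xAI or any other lab:

```python
# Toy illustration of a power-law scaling curve with diminishing returns.
# The constants here are made up for illustration only; they are not real
# figures from xAI, OpenAI, Google, or Anthropic.

def toy_loss(compute: float, a: float = 10.0, alpha: float = 0.05) -> float:
    """Hypothetical training loss as a power law in compute: loss = a * compute^(-alpha)."""
    return a * compute ** -alpha

previous = None
for compute in [1e23, 2e23, 4e23, 8e23, 1.6e24]:  # each step doubles compute
    loss = toy_loss(compute)
    note = "" if previous is None else f"  (improvement: {previous - loss:.4f})"
    print(f"compute={compute:.1e}  loss={loss:.4f}{note}")
    previous = loss
```

In this toy model, each doubling of compute shaves a little less off the loss than the previous doubling did, which is the shrinking-gains trend the article is referring to.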
Grok 3 is trained with 10X, soon 20X compute of Grok 2
– Elon Musk (@elonmusk) September 21, 2024
Musk himself mentioned this in an interview with Fridman.
“You expect [Grok 3] to be more skilled?” Fridman asked.
“I hope so,” Musk replied. “I mean, this is the goal. We can fail at this goal. That’s the desire.”
There may be other reasons for Grok 3's delay. xAI has a much smaller team than many of its competitors, for one. Regardless, the missed deadline adds to the evidence that conventional AI training methods are hitting a wall.