Google unveils Ironwood, its most powerful AI processor yet

afidel

Ars Legatus Legionis
18,196
Subscriptor
Google says the higher inference speed and efficiency of Ironwood sets the stage for more breakthroughs in the coming year.

It does? Doesn't it just increase the rate of training, which will lead to incremental improvements in existing methods? Breakthroughs happen because someone does new research that leads to a new approach or uses an existing approach in a novel way; simply increasing the amount of CPU power does little to fundamentally change the way things work.
 
Upvote
40 (54 / -14)

wildsman

Ars Tribunus Militum
1,688
Google says the higher inference speed and efficiency of Ironwood sets the stage for more breakthroughs in the coming year.

It does? Doesn't it just increase the rate of training, which will lead to incremental improvements in existing methods? Breakthroughs happen because someone does new research that leads to a new approach or uses an existing approach in a novel way; simply increasing the amount of CPU power does little to fundamentally change the way things work.
As Demis Hassabis said, they are going to try both approaches. They're going to continue scaling and try to come up with architectural breakthroughs as well; the additional compute will help speed things up regardless of the approach.
 
Last edited:
Upvote
38 (38 / 0)

BigOlBlimp

Ars Scholae Palatinae
842
Subscriptor
It sounds like maybe these chips are inference-focused? Inference is a really, really basic operation compared to training, so it makes sense there's a lot of room for optimization. Hopefully it helps with energy consumption as well.
Inference is a nontrivial part of training. You can’t know how to optimize your model if you can’t measure how wrong it is against training data.
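To make that concrete, here's a minimal sketch (illustrative only, not anything Google ships) of a single-parameter training loop. The point it shows: the forward pass, i.e. inference, runs inside every training step, because the loss can only be measured by first running the model on the data.

```python
def forward(w, x):
    """Inference: run the model (here, a one-weight linear model) on an input."""
    return w * x

def training_step(w, x, y_true, lr=0.1):
    """One gradient-descent step. Note the forward pass happens first."""
    y_pred = forward(w, x)            # inference, embedded in training
    loss = (y_pred - y_true) ** 2     # measure how wrong the model is
    grad = 2 * (y_pred - y_true) * x  # d(loss)/dw for squared error
    return w - lr * grad, loss

# Fit w so that forward(w, 1.0) approaches the target 3.0.
w = 0.0
for _ in range(50):
    w, loss = training_step(w, x=1.0, y_true=3.0)
```

Every one of those 50 steps spent part of its time doing inference, which is why faster inference hardware also speeds up training, not just serving.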
 
Upvote
25 (25 / 0)

rwhitwam

Smack-Fu Master, in training
45
" ... like simulated reasoning ... "

Please provide a precise definition of real reasoning.
To be clear, we like to use the phrase "simulated reasoning" at Ars to delineate the sometimes confusing boundary between the way brains reason and the kind of reasoning AI models do. As far as we can currently tell, these two processes are not the same.
 
Upvote
18 (18 / 0)

graylshaped

Ars Legatus Legionis
67,981
Subscriptor++
Whenever Google talks about the capabilities of a new Gemini version, it notes that the model's capabilities are tied not only to the code but to Google's infrastructure.
"Here I am, brain the size of a planet, and you want me to decide which ads to serve to a site featuring user-generated hentai haiku."
 
Upvote
4 (4 / 0)

SeanJW

Ars Legatus Legionis
11,920
Subscriptor++
Google says the higher inference speed and efficiency of Ironwood sets the stage for more breakthroughs in the coming year.

It does? Doesn't it just increase the rate of training, which will lead to incremental improvements in existing methods? Breakthroughs happen because someone does new research that leads to a new approach or uses an existing approach in a novel way; simply increasing the amount of CPU power does little to fundamentally change the way things work.

Faster inferencing is what made features like search predictions happening in real time as you type possible. The original TPUs (which were inference-only) were created for exactly that reason: without them, Google estimated it would need to double its online compute to do things like that.

It's not just training that matters; actually running the models matters more.
 
Upvote
0 (0 / 0)