Cerebras Trains Llama Models to Leap over GPUs

50 points | 5 days ago | nextplatform.com
pk-protect-ai 38 minutes ago

Wow, 44GB SRAM, not HBM3 or HBM3e, but actual SRAM ...

latchkey 7 hours ago

  1x MI300x has 192GB HBM3.

  1x MI325x has 256GB HBM3e.

They cost less, you can fit more into a rack, and you can buy/deploy at least the 300s today and the 325s early next year. AMD's software and library performance for AI is improving daily [0].

I'm still trying to wrap my head around how these companies think they are going to do well in this market without more memory.

[0] https://blog.vllm.ai/2024/10/23/vllm-serving-amd.html
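
To put the memory point in numbers, a rough sketch (my assumptions: fp16 weights at 2 bytes/param, ignoring KV cache, activations and parallelism overhead; capacities are the figures quoted above plus the 44GB SRAM number):

  # Back-of-the-envelope: accelerators needed just to hold the weights.
  def devices_to_hold(params_billion, mem_gb):
      weight_gb = params_billion * 2  # fp16 = 2 bytes per parameter
      return weight_gb / mem_gb

  for name, mem_gb in [("MI300X (192GB)", 192), ("MI325X (256GB)", 256),
                       ("H100 (80GB)", 80), ("Cerebras SRAM (44GB)", 44)]:
      print(f"Llama 70B on {name}: {devices_to_hold(70, mem_gb):.1f} devices")
  # ~0.7 MI300X, ~0.5 MI325X, ~1.8 H100s, ~3.2 wafers' worth of SRAM, weights only.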

krasin 7 hours ago

> I'm still trying to wrap my head around how these companies think they are going to do well in this market without more memory.

Cerebras and Groq provide the fastest (by an order of magnitude) inference. This is very useful for certain workflows that require low-latency feedback: audio chat with an LLM, robotics, etc.

Outside that narrow niche, AMD stuff seems to be the only contender to NVIDIA, at the moment.

latchkey 6 hours ago

> Cerebras and Groq provide the fastest (by an order of magnitude) inference.

Only on smaller models; their numbers in the article are all 70B.

Those numbers also need to be adjusted for comparable capex+opex costs. If the costs are so high that they have to subsidize the usage/results, then they are just going to run out of money, fast.

krasin 6 hours ago

> Only on smaller models; their numbers in the article are all 70B.

No, they are 5x-10x faster for all the model sizes (because it's all just running from SRAM and they have more of it than NVIDIA/AMD), even though they benchmarked just up to 70B.
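
For intuition on the SRAM point: single-stream decode is roughly memory-bandwidth-bound, so here's a crude upper bound (bandwidth figures are my ballpark assumptions, fp16 weights, no batching):

  # Crude upper bound on single-stream decode: every new token streams the full
  # weights once, so tokens/s <= memory bandwidth / weight bytes.
  def max_tok_per_s(params_billion, bandwidth_tb_s):
      weight_bytes = params_billion * 1e9 * 2  # fp16
      return bandwidth_tb_s * 1e12 / weight_bytes

  print(f"70B, one H100 @ ~3.35 TB/s HBM3:  {max_tok_per_s(70, 3.35):.0f} tok/s")  # ~24
  print(f"70B, one MI300X @ ~5.3 TB/s HBM3: {max_tok_per_s(70, 5.3):.0f} tok/s")   # ~38
  print(f"70B, wafer SRAM @ ~2000 TB/s:     {max_tok_per_s(70, 2000):.0f} tok/s")  # thousands
  # Batching and sharding change aggregate throughput, but per-stream latency is
  # still gated by how fast the weights can be read.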

> Those numbers also need to be adjusted for comparable capex+opex costs. If the costs are so high that they have to subsidize the usage/results, then they are just going to run out of money, fast.

True. Although, for some workloads, fast enough inference is a strict prerequisite and GPUs just don't cut it.

hhdhdbdb 3 hours ago

How is this a narrow niche?

Chain-of-thought-style operations are in this "niche".

So is anything where the value is in the follow-up chat, not the one-shot answer.
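
A toy latency calculation for the chain-of-thought case (token counts and speeds below are invented purely for illustration):

  # Toy numbers (invented for illustration): hidden chain-of-thought tokens
  # generated before the user sees the visible reply.
  hidden_tokens, visible_tokens = 1500, 300
  for label, tok_per_s in [("typical GPU serving", 60), ("Cerebras/Groq-class", 1000)]:
      seconds = (hidden_tokens + visible_tokens) / tok_per_s
      print(f"{label}: ~{seconds:.0f}s until a finished answer")
  # ~30s vs ~2s per turn: per-stream speed decides whether the chat loop is usable.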

wmf 7 hours ago

Groq and Cerebras only make sense at massive scale, which I guess is why they pivoted to being API providers so they can amortize the hardware over many customers.

latchkey 7 hours ago

Correct, except that massive scale doesn't work because it just uses up exponentially more power/space/resources.

They also have a very limited use case... if things ever shift away from LLMs and into another form of engineering that their hardware does not support, what are they going to do? Just keep deploying hardware?

Slippery slope.

arisAlexis 4 hours ago

The article explains the memory issues in depth. Did you read through?

YetAnotherNick 2 hours ago

2x 80GB A100s are better than the MI300x on all the metrics while being cheaper.

asdf1145 8 hours ago

clickbait title: inference is not training

mentalically 8 hours ago

The value proposition of Cerebras is that they can compile existing graphs to their hardware and allow inference at lower costs and higher efficiencies. The title does not say anything about creating or optimizing new architectures from scratch.

germanjoey 7 hours ago

the title says "Cerebras Trains Llama Models"...

mentalically 7 hours ago

That's correct, and if you read the whole thing you will realize that it is followed by "... to Leap over GPUs", which indicates that they're not literally referring to optimizing the weights of the graph on a new architecture or freshly initialized variables on existing ones.

htrp 6 hours ago

Title is about training... article is about inference.

KTibow 6 hours ago

Why is nobody mentioning that there is no such thing as Llama 3.2 70B?

7e 8 hours ago

"It would be interesting to see what the delta in accuracy is for these benchmarks."

^ the entirety of it

asdf1145 8 hours ago

Did they release MLPerf data yet, or would that not help their IPO?

7e 8 hours ago

"So, the delta in price/performance between Cerebras and the Hoppers in the cloud when buying iron is 2.75X but for renting iron it is 5.2X, which seems to imply that Cerebras is taking a pretty big haircut when it rents out capacity. That kind of delta between renting out capacity and selling it is not a business model, it is a loss leader from a startup trying to make a point."

As always, it is about TCO, not who can make the biggest monster chip.
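
The implied haircut falls straight out of those two ratios; a quick sketch using only the article's figures:

  # Re-using the article's two ratios: a 2.75x price/performance gap when buying
  # the hardware vs a 5.2x gap when renting it implies the rental pricing is
  # roughly 2x out of line with what the hardware economics alone would justify.
  buy_delta, rent_delta = 2.75, 5.2
  print(f"Implied rental haircut: ~{rent_delta / buy_delta:.1f}x")  # ~1.9x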