
LLM architecture comparison

224 points | 10 hours ago | magazine.sebastianraschka.com
DeveloperErrata 1 hour ago

This was really educational for me. It felt like the perfect level of abstraction to learn a lot about the specifics of LLM architectures without the difficulty of parsing the original papers.

strangescript 5 hours ago

The diagrams in this article are amazing if you are somewhere between a novice and an expert. Seeing all of the new models laid out next to each other is fantastic.

webappguy 5 hours ago

Would love to see a Pt. 2 covering even what is rumored about the top closed-source frontier models, e.g. o5, o3 Pro, o4 or 4.5, Gemini 2.5 Pro, Grok 4, and Claude Opus 4.

bravesoul2 9 hours ago

This is a nice catch-up for someone who hasn't been keeping up, like me.

Chloebaker 6 hours ago

Honestly, it's crazy to think how far we've come since GPT-2 (2019). Today, comparing LLMs to determine their performance is notoriously challenging, and it feels like every two weeks a model beats a new benchmark. I'm really glad DeepSeek was mentioned here, because the key architectural techniques it introduced in V3, which improved its computational efficiency and distinguish it from many other LLMs, were really transformational when V3 came out.

southernplaces7 58 minutes ago

Truly, the downvoting on this site is a ridiculous little thing, all the more so among people who love to frequently stroke themselves about how superior the intellectual faculties of the average HN reader/commentator are. I gave you an upvote simply because, for no fathomable reason, your two cents about LLM progress got downvoted into grey.

Someone thinks something specific in the completely reasonable opinion you gave about LLM progress over the last few years is wrong? Okay, so why not mention it and maybe open a small debate, instead of digitally reacting like a 12-year-old on a YouTube comment thread?

dmezzetti 7 hours ago

While all these architectures are innovative and have helped improve either accuracy or speed, the same fundamental problem of reliably generating factual information still exists.

Retrieval-Augmented Generation (RAG), agents, and other similar methods help mitigate this. It will be interesting to see whether future architectures eventually replace these techniques.

tormeh 5 hours ago

To me, the issue seems to be that we're training transformers to predict text, which only forces the model to embed limited amounts of logic. We'd have to find something different to train models on in order for them to stop hallucinating.
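
Concretely, "training to predict text" means the standard next-token cross-entropy objective. Here's a rough sketch, assuming a PyTorch-style model that maps token ids to per-position logits; note that nothing in this loss rewards factual accuracy, only assigning high probability to whatever token actually came next:

    import torch.nn.functional as F

    def next_token_loss(model, tokens):
        # tokens: (batch, seq_len) integer ids
        logits = model(tokens[:, :-1])   # predict token t+1 from tokens up to t
        targets = tokens[:, 1:]          # the "labels" are just the same text shifted by one
        return F.cross_entropy(logits.reshape(-1, logits.size(-1)),
                               targets.reshape(-1))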

bsenftner 5 hours ago

I'm still puzzling over why, given that RAG is conceptually simple and easy to implement, the foundational models have not incorporated it into their base functionality. The lack of that strikes me as a negative point about RAG and its variants, because if any of them worked, it would be in the models directly and not need to be added afterwards.

bavell 4 hours ago

RAG is a prompting technique; how could they possibly incorporate it into pre-training?
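
To make that concrete, here's a minimal sketch of a RAG loop. The keyword-overlap retriever is a toy stand-in for a real embedding model plus vector store, and llm_complete is a hypothetical placeholder for whatever chat-completion API you use. The "augmentation" is nothing more than pasting retrieved passages into the prompt; the model's weights are untouched:

    # Toy in-memory corpus; a real system would embed and index documents.
    DOCS = [
        "DeepSeek V3 uses multi-head latent attention to shrink the KV cache.",
        "Mixture-of-experts layers route each token to a small subset of experts.",
    ]

    def retrieve(question, docs, k=2):
        # Rank documents by word overlap with the question (toy retriever).
        q_words = set(question.lower().split())
        ranked = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
        return ranked[:k]

    def answer_with_rag(question):
        context = "\n".join("- " + d for d in retrieve(question, DOCS))
        prompt = ("Answer using only the context below; say you don't know if it is insufficient.\n\n"
                  "Context:\n" + context + "\n\nQuestion: " + question + "\nAnswer:")
        return llm_complete(prompt)  # hypothetical: any ordinary chat-completion call goes here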

maleldil 3 hours ago

CoT is a prompting technique too, and it's been incorporated.

bsenftner 2 hours ago

The same way developers incorporate it now. Why are you thinking "pre-training"? This is a feature of the deployed model: it ingests documents and generates a mini fine-tune right then.