
A guide to open-source LLM inference and performance

96 points | 11 hours ago | baseten.co
llwu 4 hours ago

Question on the "Batching memory-bound processes on a GPU" section: it says "This enables us to reuse parts of the model that we’ve already loaded into the GPU’s SRAM", but the 10 GB we're loading goes into HBM, right? How did we overcome the HBM <-> SRAM bottleneck?

More generally, how can we find out the size of the SRAM?

varunshenoy 3 hours ago

Good question. Yes, the 10 GB available for batching is in HBM. In a single forward pass, you move the entire model from HBM -> SRAM exactly once. In a batched forward pass, this is still the case: the weights are loaded once and reused across every sequence in the batch, so you end up doing more compute for the same amount of memory movement.
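
To make that concrete, here's a rough sketch (my numbers, not the post's: a hypothetical 7B-parameter fp16 model and the usual ~2 FLOPs per parameter per token estimate) of how arithmetic intensity grows with batch size while the weight traffic stays fixed:

    # Arithmetic intensity of a batched forward pass (illustrative only).
    PARAMS = 7e9            # assumed 7B-parameter model
    BYTES_PER_PARAM = 2     # fp16
    weight_bytes = PARAMS * BYTES_PER_PARAM  # moved HBM -> SRAM once per pass

    for batch in (1, 8, 32, 128):
        flops = 2 * PARAMS * batch  # compute scales with the batch
        print(f"batch={batch:>3}: {flops / weight_bytes:.0f} FLOPs per weight byte")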

You can calculate the SRAM as follows: an A100 has 108 SMs, and each SM has 192 KB of SRAM (its combined shared memory / L1 cache) [1]. Multiplied out, this is ~20 MB of total SRAM, which happens to match the diagram in the Flash Attention paper [2].
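
If you want to check that arithmetic yourself, using the same figures:

    # A100 on-chip SRAM, back of the envelope
    sms = 108              # streaming multiprocessors on an A100
    sram_per_sm_kb = 192   # shared memory / L1 cache per SM
    print(f"~{sms * sram_per_sm_kb / 1024:.1f} MB of total SRAM")  # ~20.3 MB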

[1] https://developer.nvidia.com/blog/cuda-refresher-cuda-progra...

[2] https://arxiv.org/pdf/2205.14135.pdf

abcdabcd987 7 hours ago

Related discussion on serving finetuned LLMs: https://news.ycombinator.com/item?id=38196661

joaquincabezas 7 hours ago

Thanks a lot for the material, Varun. Neat presentation with exhaustive computations that make it easy to follow. Question on the serving part: which would you recommend among vLLM, DeepSpeed, TensorRT-LLM...? Thanks!

varunshenoy 7 hours ago

Thanks!

vLLM for a quick setup, TRT-LLM for the best performance. Both are available on https://baseten.co/.

bicepjai 8 hours ago

That’s a really detailed explanation. Can we do something like this for the M1 Ultra / M2 Ultra / M3 Max with large RAM?

varunshenoy 7 hours ago

Absolutely. Looks like the M1 Ultra has 800 GB/s of memory bandwidth and ~20 TFLOPS of compute.

The same calculations from the post should hold, except with these new values.
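
For anyone following along, a minimal sketch of how those numbers plug into the post's ops:byte framework (the M1 Ultra figures are the rough ones above; the A100 numbers are NVIDIA's published dense fp16 specs for the 40 GB SXM card, included only for comparison):

    # ops:byte ratio = compute bandwidth / memory bandwidth.
    # Workloads below this arithmetic intensity are memory-bound.
    def ops_to_byte(flops_per_s, bytes_per_s):
        return flops_per_s / bytes_per_s

    m1_ultra = ops_to_byte(20e12, 800e9)  # ~25 FLOPs/byte (rough figures above)
    a100 = ops_to_byte(312e12, 1.555e12)  # ~200 FLOPs/byte (A100 40 GB SXM, fp16)
    print(f"M1 Ultra ops:byte ~ {m1_ultra:.0f}")
    print(f"A100 ops:byte ~ {a100:.0f}")

A lower ops:byte ratio would mean the M1 Ultra crosses from memory-bound to compute-bound at much smaller batch sizes than an A100.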

alanaan 4 hours ago

Great post. Could you apply this same framework to optimizing training as well?

seth_ 8 hours ago

Love the deep dive here.

samspenc 10 hours ago

Likely trending on the home page since this is directly relevant to LLM costs, i.e., questions like "how much would it cost to rebuild ChatGPT from scratch?"

simsspoons 9 hours ago

Highly relevant question today.

varunshenoy 8 hours ago

:)