
LLM speed is roughly <memory_bandwidth> / <model_size> tok/s.

DDR4 tops out at about 27 GB/s per channel.

DDR5 can do around 40 GB/s per channel.

So for a 70B model at 8-bit quant (70 GB of weights), you'll get around 0.4-0.6 tokens per second from a single channel of RAM alone.
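A quick sketch of that back-of-envelope estimate, assuming decode is purely memory-bound (every weight byte is read once per generated token):

```python
# Rough decode speed: memory bandwidth divided by bytes read per token.
# Assumes memory-bound decode where all weights are streamed once per token.
def tokens_per_second(bandwidth_gb_s: float, params_b: float, bytes_per_param: float) -> float:
    model_gb = params_b * bytes_per_param  # total weight bytes, in GB
    return bandwidth_gb_s / model_gb

# 70B params at 8-bit quant = 70 GB of weights:
print(round(tokens_per_second(27, 70, 1), 2))  # DDR4 channel -> ~0.39 tok/s
print(round(tokens_per_second(40, 70, 1), 2))  # DDR5 channel -> ~0.57 tok/s
```

Real systems with multiple channels scale this up proportionally, which is why channel count matters so much below.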




DRAM speed is one thing, but you should also account for the data rate of the PCIe bus (and/or VRAM speed). But yes, holding the model "lukewarm" in DRAM rather than on NVMe storage is obviously faster.

Yes.

In general, systems ship with a PCIe version whose bandwidth is better than that system's RAM bandwidth.

For example, a system with DDR4 (~27 GB/s) usually has at least PCIe 4.0 (~32 GB/s at x16).

But you can bottleneck that by pairing a DDR5 system (~40 GB/s) with a PCIe 4.0 card.
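For reference, x16 link bandwidth per PCIe generation works out from the per-lane transfer rate and the 128b/130b encoding used from gen 3 onward (theoretical peaks; real throughput is a bit lower):

```python
# Theoretical PCIe x16 throughput per generation.
# Per-lane rates in GT/s; gen 3+ uses 128b/130b encoding (~1.5% overhead).
GT_PER_LANE = {3: 8, 4: 16, 5: 32}

def pcie_gb_s(gen: int, lanes: int = 16) -> float:
    return GT_PER_LANE[gen] * (128 / 130) / 8 * lanes  # bits -> bytes

for gen in (3, 4, 5):
    print(f"PCIe {gen}.0 x16: {pcie_gb_s(gen):.1f} GB/s")
# PCIe 3.0 x16: 15.8 GB/s
# PCIe 4.0 x16: 31.5 GB/s
# PCIe 5.0 x16: 63.0 GB/s
```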


yeah, actually, I'm bottlenecked af since my mobo got pcie3 only :(

Channels matter a lot: quad-channel DDR4 is going to beat dual-channel DDR5 most of the time.

Four channels of DDR4-3200 vs two channels of DDR5-6400 (four subchannels) should come out pretty close. I don't see any reason why the DDR4 configuration would be consistently faster; you might have more bank groups on DDR4, but I'm not sure that would outweigh other factors like the topology and bandwidth of the interconnects between the memory controller and the CPU cores.
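The "pretty close" claim checks out on peak theoretical numbers: bandwidth is transfer rate × bus width × channel count, and the two configurations above land on the same figure.

```python
# Peak theoretical DRAM bandwidth: MT/s x bus width (bytes) x channels.
# Each DDR4 channel and each DDR5 DIMM channel is 64 bits (8 bytes) wide
# (DDR5 splits it into two 32-bit subchannels, but total width is the same).
def peak_gb_s(mt_s: int, channels: int, bus_bytes: int = 8) -> float:
    return mt_s * bus_bytes * channels / 1000

print(peak_gb_s(3200, 4))  # quad-channel DDR4-3200 -> 102.4 GB/s
print(peak_gb_s(6400, 2))  # dual-channel DDR5-6400 -> 102.4 GB/s
```

Identical on paper, so any real-world difference comes down to the secondary factors mentioned above (bank groups, interconnect topology).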

Faster than the 0.2 tok/s this approach manages.

Should be active param size, not model size.

Yes, you’re right.

Llama 3.1, however, is not MoE, so all params are active.

For MoE it is tricky, because for each token you only use a subset of params (an “expert”) but you don’t know which one, so you have to keep them all in memory or wait until it loads from slower storage, potentially different for each token.
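To see why active-vs-total matters, here is the same bandwidth estimate with an MoE-style split. The ~13B active / ~47B total figures are illustrative (roughly Mixtral-8x7B-shaped), not measurements:

```python
# For MoE, decode speed scales with *active* bytes read per token,
# provided all experts are resident in fast memory (no reload per token).
def tok_s(bandwidth_gb_s: float, read_params_b: float, bytes_per_param: float) -> float:
    return bandwidth_gb_s / (read_params_b * bytes_per_param)

BW = 40  # GB/s, a single DDR5 channel; assumed figure

dense_like = tok_s(BW, 47, 1)  # if all 47B params were read per token
moe_active = tok_s(BW, 13, 1)  # only ~13B active params read per token
print(round(dense_like, 2))  # ~0.85 tok/s
print(round(moe_active, 2))  # ~3.08 tok/s
```

The catch, as noted: you still need RAM for all 47 GB, because the active subset changes per token; the speedup only holds if nothing has to page in from slower storage.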



