Real benchmarks, not vibes
Pick the right hardware to run local LLMs.
Tell us what you want to run: a use case, a memory target, or a specific open-weights model. We'll match it to real builds with current prices, community-measured tokens/sec, and idle & active power draw.
Start picking

builds compared · open-source LLMs · Q4_K_M default quantization
Tell us what you want
Pick one of the three modes. You can switch at any time.
48GB
Comfortable for 70B Q4 dense models
Pick one or more models you want to run. We'll size for the largest.
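For a rough sense of how sizing works, here's a minimal sketch. The formula (params × bits per weight) is standard; the 1.2× overhead factor for KV cache and runtime buffers is an assumption, not the site's exact method:

```python
def est_model_memory_gb(params_billion: float, bits: int = 4, overhead: float = 1.2) -> float:
    """Rough memory footprint of a quantized dense model.

    Weights take params x (bits / 8) bytes; `overhead` (assumed 1.2x)
    leaves headroom for KV cache, activations, and runtime buffers.
    """
    weights_gb = params_billion * bits / 8  # billions of params x bytes/param = GB
    return weights_gb * overhead

# A 70B dense model at ~4 bits/weight:
print(est_model_memory_gb(70))  # 42.0 GB -- why 48GB is "comfortable"
```

Note that Q4_K_M in practice averages slightly more than 4 bits per weight, so treat this as a lower bound and size up.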
Quantization
Categories
Top picks for your setup
All compatible builds
| Build | Memory | Bandwidth | Price | tg/s | Idle W | Active W | 5-yr est. |
|---|---|---|---|---|---|---|---|
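Two of the table's columns can be sanity-checked with back-of-the-envelope math: token generation on a memory-bound system is roughly bandwidth divided by model size, and a 5-year estimate folds electricity into the sticker price. The sketch below is a guess at those formulas; the 2 h/day active duty cycle and $0.15/kWh rate are assumptions, not the site's actual parameters:

```python
def est_tg_per_s(bandwidth_gbps: float, model_gb: float) -> float:
    """Rule-of-thumb decode speed: each generated token reads every
    weight once, so tokens/sec ~= memory bandwidth / model size."""
    return bandwidth_gbps / model_gb

def five_year_cost(price_usd: float, idle_w: float, active_w: float,
                   active_h_per_day: float = 2.0, usd_per_kwh: float = 0.15) -> float:
    """Purchase price plus 5 years of electricity (assumed duty cycle and rate)."""
    daily_kwh = (idle_w * (24 - active_h_per_day) + active_w * active_h_per_day) / 1000
    return price_usd + daily_kwh * 365 * 5 * usd_per_kwh

# 400 GB/s of bandwidth pushing a 40 GB model:
print(est_tg_per_s(400, 40))  # 10.0 tok/s

# $2000 build idling at 10 W, generating at 300 W:
print(five_year_cost(2000, 10, 300))
```

The tg/s heuristic ignores compute limits and prompt processing, so treat it as an upper bound on sustained decode speed.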
Popular open-weights models
How each model rates across use cases. SOTA (state of the art) = best-in-class for that task.
Every build, side by side
Full comparison: Apple Silicon, NVIDIA consumer + datacenter, Intel Arc Pro, AMD Strix Halo.
Sources
All performance numbers come from public benchmarks. Click to verify.