Changelog
A running log of dataset changes — new hardware builds, removed ones, price refreshes, rating adjustments, and bug fixes. Latest first.
The home-page footer "data last refreshed" date matches the most recent entry below. Older entries are kept for transparency; the underlying JSON in the public repo has the full history.
2026-05-14 — Legacy hardware comeback + modern Radeon Pro
Nine new builds covering the two stories the local-LLM community has been talking about in 2026: cheap legacy datacenter cards getting fresh community software support, and AMD's RDNA4 Pro lineup finally landing as a credible Nvidia alternative under $1,500.
New builds (9)
- Tesla P40 24 GB single — $750 whole-system build. Cheapest 24 GB CUDA card on the planet (~$300 used). No FP16 so image gen is slow; great for 30 B Q4 chat.
- Tesla P40 quad (96 GB) — $2,000 homelab classic. Honest cons line: dense 70 B is unusable without NVLink (~0 t/s); the build is really for 120 B MoE (gpt-oss class).
- Tesla P100 16 GB — $600 build. The cheapest HBM2 card (732 GB/s for ~$150) with native FP16, so SDXL actually works. Capped at 14 B.
- Tesla V100 32 GB SXM2 mod — $900 build. The viral 2026 mod: $200 SXM2 card + $100 PCIe adapter + 3D-printed cooling = 32 GB CUDA HBM2 with Tensor Cores. Documented by Tom's Hardware and Hackaday this month.
- AMD MI50 32 GB single — $600 build. The headline 2025–26 LocalLLaMA story: 1 TB/s HBM2 for ~$200 a card. AMD dropped MI50 from ROCm 7, but the community's vLLM-gfx906 fork plus llama.cpp Vulkan keep it alive.
- AMD MI50 quad (128 GB) — $1,800 build. 128 GB HBM2 for under $2 K all-in; 70 B Q4 at ~35 t/s beats 2× 3090 on $/perf.
- AMD Radeon Pro W7800 32 GB — $2,700 build. Workstation-grade with ECC and ISV certs, but increasingly hard to recommend over the R9700 (which is ~$1 K cheaper for the same 32 GB). Flagged in the build's own cons.
- AMD Radeon AI Pro R9700 32 GB — $1,700 build. RDNA4 with 2nd-gen AI accelerators; $1,299 MSRP at retail since Oct 2025. ROCm 6/7 plus Vulkan-on-Windows means it's the most plug-and-play 32 GB AMD card you can buy new.
- AMD R9700 dual (64 GB) — $3,100 build. Cheapest new 64 GB workstation; undercuts dual W7900.
Regional pricing & retailers (26 regions)
- Every new build now has `productLinks` entries in all 26 supported regions (US, GB, DE, FR, IT, ES, NL, PL, SE, IE, AT, BE, PT, FI, DK, CZ, CH, NO, JP, AU, KR, IN, CA, MX, BR, EU). 225 region×build retailer cells added.
- Used cards (P40, P100, V100 SXM2, MI50): regional eBay domains where eBay operates (`ebay.co.uk`, `ebay.de`, `ebay.fr`, `ebay.it`, `ebay.es`, `ebay.nl`, `ebay.com.au`, `ebay.ca`, `ebay.ie`) plus Alibaba global. Other markets fall back to `ebay.com` international.
- Retail R9700 / W7800: Amazon regional + 1–2 strong regional retailers per market — Scan / Overclockers UK, Mindfactory / Caseking / Alternate DE, LDLC / Materiel.net FR, Drako IT, PcComponentes ES, Megekko NL, Komputronik / x-kom PL, Webhallen SE, Alza CZ, Digitec CH, Yodobashi / Tsukumo JP, Coupang / Danawa KR, MD Computers / Amazon IN, Canada Computers / Memory Express CA, Cyberpuerta MX, Kabum BR, Centre.com / PC Case Gear AU.
- `regionPrices` populated for the three retail-available builds (W7800, R9700, R9700×2) in 10–11 local currencies. Used-card builds intentionally omit `regionPrices` — the used market is too noisy to anchor a number; the site falls back to `priceUSD` as a reference.
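The pricing fallback described above can be sketched in Python. The field names `regionPrices` and `priceUSD` come from the dataset; the helper name, the dict shape of a price entry, and the sample builds are illustrative assumptions, not the site's actual code.

```python
# Sketch of the price-lookup fallback: use a local-currency entry from
# regionPrices when one exists, else fall back to priceUSD as a
# reference. Helper name and sample data are hypothetical.

def display_price(build: dict, region: str) -> str:
    """Return a local price when regionPrices has one for the region,
    else the USD reference price (used-card builds omit regionPrices)."""
    region_prices = build.get("regionPrices", {})
    if region in region_prices:
        entry = region_prices[region]
        return f"{entry['amount']} {entry['currency']}"
    # Used-market builds: too noisy to anchor a local number.
    return f"${build['priceUSD']} (USD reference)"

r9700 = {
    "name": "AMD Radeon AI Pro R9700 32 GB",
    "priceUSD": 1700,
    "regionPrices": {"DE": {"amount": 1599, "currency": "EUR"}},
}
mi50_quad = {"name": "AMD MI50 quad (128 GB)", "priceUSD": 1800}

print(display_price(r9700, "DE"))      # -> 1599 EUR
print(display_price(mi50_quad, "DE"))  # -> $1800 (USD reference)
```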
YouTube reviews (12 verified)
- Every new `videoId` was fetched against `youtube.com/watch?v=...` to confirm the page returns a real title before it landed in the JSON.
- Highlights: DeepSeek on Tesla P40 vs RTX 4090, DIY 4× P40 96 GB build, V100 SXM2 mod + HP Z8 G4, R9700 full benchmarks, dual R9700 vs RTX 5090/4090, MI50 Ollama vs llama.cpp speed test.
- The W7800 build has no `reviews` entry — no LLM-specific YouTube video met the bar. Left empty rather than fabricated.
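A verification pass like the one described can be sketched as a short script: fetch the watch page and check that it carries a real `<title>`. The helper names, the regex, and the "bare YouTube title means dead video" heuristic are assumptions for illustration, not the pipeline's actual code.

```python
# Sketch of the videoId check: fetch youtube.com/watch?v=... and
# confirm the page returns a real title. Names and the dead-video
# heuristic are illustrative assumptions.
import re
import urllib.request

def extract_title(html: str):
    """Pull the <title> text out of a watch-page HTML blob."""
    m = re.search(r"<title>(.*?)</title>", html, re.DOTALL)
    return m.group(1).strip() if m else None

def video_looks_real(video_id: str) -> bool:
    url = f"https://www.youtube.com/watch?v={video_id}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        title = extract_title(resp.read().decode("utf-8", "replace"))
    # A removed or invalid video typically yields a bare "YouTube" title.
    return bool(title) and title != "YouTube"
```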
Documentation
- `agent/UPDATE.md` updated: the weekly refresh playbook now mandates full 26-region coverage for any new build, and requires the agent to append a changelog entry to this page on every commit.
Performance numbers verified
After the nine builds went in, a second pass cross-checked every tps / imageSec / videoSec number against published benchmarks (LocalScore, databasemart, ahelpme, wtarreau, meefik, hardware-corner, TinyComputers, llama.cpp discussions). Twelve values were corrected; the meaningful ones:
- Tesla P40 single — 14B tps 22 → 15 (LocalScore measured 13.6 t/s on Qwen2.5-14B Q4_K_M).
- Tesla P40 quad — 30B tps 14 → 50 and 120B-MoE tps 14 → 28 (TinyComputers documented quad-P40 at ~50 t/s on Qwen3-Coder-30B-A3B and 28.1 t/s on gpt-oss-120B); 14B tps 25 → 16; 8B tps 50 → 48.
- Tesla P100 — 8B tps 49 → 33 (databasemart Ollama: 33 t/s on Llama-3.1-8B); SDXL 22 → 30 s (low-confidence; flagged in the source).
- Tesla V100 SXM2 mod — 14B tps 45 → 50 (LocalScore: 50.1); SDXL 7 → 12 s (no FP16-Volta source supported the optimistic 7 s).
- Radeon Pro W7800 — SDXL 7 → 14 s (W7800 isn't W7900-fast on RDNA3 matrix; 7 s was flagship-tier).
- AMD MI50 single — 30B tps 35 → 66 (wtarreau + ahelpme: Qwen3-Coder-30B-A3B Q4_K_M tg128 measured at 66–75 t/s; 35 was a dense-32B number, inconsistent with the dataset's "best quant that fits" convention); SDXL 12 → 18 s (no direct ROCm gfx906 SDXL source backed the 12 s).
- AMD MI50 quad — 8B tps 90 → 72 (single-stream TG doesn't accelerate with multi-GPU when the model fits in one card); 14B 55 → 50; SDXL 12 → 18 s.
The image-gen seconds for several builds remain low-confidence — no direct SDXL/FLUX benchmark exists for them. They're flagged here so future agents know to re-measure when better sources surface.
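A cross-check pass like the one above can be sketched as a simple tolerance comparison between stored values and published benchmark numbers. The 25% threshold, the function name, and the key names are illustrative assumptions; only the example numbers (P40 single 14B stored 22 vs LocalScore's 13.6, P40 quad 8B 50 vs 48) come from the entry above.

```python
# Sketch of a second-pass benchmark cross-check: flag stored tps values
# that drift too far from a published measurement. The tolerance and
# all names are illustrative assumptions.

def flag_discrepancies(dataset: dict, benchmarks: dict,
                       tolerance: float = 0.25) -> list:
    """Return keys whose stored value deviates from the benchmark
    by more than `tolerance` relative error."""
    flagged = []
    for key, measured in benchmarks.items():
        stored = dataset.get(key)
        if stored is None:
            continue  # no stored value to check
        if abs(stored - measured) / measured > tolerance:
            flagged.append(key)
    return flagged

# P40 single 14B: 22 stored vs 13.6 measured -> flagged for correction.
# P40 quad 8B: 50 stored vs 48 measured -> within tolerance, kept.
dataset = {"p40_14b_tps": 22, "p40x4_8b_tps": 50}
benchmarks = {"p40_14b_tps": 13.6, "p40x4_8b_tps": 48}
print(flag_discrepancies(dataset, benchmarks))  # -> ['p40_14b_tps']
```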