Qwen3-30B-A3B-Instruct-2507: device-optimized quant variants without output quality falling off a cliff

The 30B model runs on a Raspberry Pi 5 (16 GB), achieving 8.03 tokens/sec (TPS) at 2.70 bits per weight (BPW) while retaining 94.18% of BF16 quality. ShapeLearn tends to find better TPS/quality tradeoffs than the alternatives.
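To see why that fits on a 16 GB board, here's a quick back-of-the-envelope sketch using the post's 2.70 BPW figure and the nominal 30B parameter count (real GGUF files are slightly larger, with metadata and a few tensors kept at higher precision):

```python
# Rough memory footprint of a quantized 30B model.
# Assumes the nominal parameter count; actual GGUF files
# add metadata and some higher-precision tensors.
params = 30e9      # ~30B parameters (nominal)
bpw = 2.70         # bits per weight, from the post
bf16_bpw = 16.0    # BF16 baseline

quant_gb = params * bpw / 8 / 1e9       # bits -> bytes -> GB
bf16_gb = params * bf16_bpw / 8 / 1e9

print(f"quantized: ~{quant_gb:.1f} GB")  # ~10.1 GB, fits in 16 GB
print(f"BF16:      ~{bf16_gb:.1f} GB")   # ~60 GB, not even close
```

At roughly 10 GB of weights there's still headroom for the KV cache and the OS on a 16 GB Pi 5.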

What’s new/interesting in this one

  1. CPU behavior is mostly sane

On CPUs, once you’re past “it fits,” smaller tends to be faster in a fairly monotonic way. The tradeoff curve behaves like you’d expect.

  2. GPU behavior is quirky

On GPUs, performance depends as much on kernel choice as on memory footprint, so you often get sweet spots (especially around ~4-bit) where the kernels are on the "golden path," and pushing to lower bit-widths can get weird (see the benchmark sketch below).
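One way to see these curves on your own hardware is to time generation across quant variants. A minimal sketch, assuming llama-cpp-python and locally downloaded GGUF files (the filenames are hypothetical placeholders, not the repo's actual names):

```python
# Compare tokens/sec across quant variants on one machine.
# Filenames below are placeholders; substitute real GGUF paths.
import time
from llama_cpp import Llama

QUANTS = ["model-2.70bpw.gguf", "model-4bit.gguf", "model-8bit.gguf"]
PROMPT = "Explain weight quantization in one paragraph."

for path in QUANTS:
    llm = Llama(model_path=path, n_ctx=2048, n_threads=8, verbose=False)
    t0 = time.perf_counter()
    out = llm(PROMPT, max_tokens=128)
    dt = time.perf_counter() - t0
    n_tokens = out["usage"]["completion_tokens"]
    print(f"{path}: {n_tokens / dt:.2f} TPS")
    del llm  # release the model before loading the next variant
```

On a CPU you'd expect the numbers to climb fairly monotonically as the files shrink; on a GPU, don't be surprised if the ~4-bit variant beats a smaller one.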

models: https://huggingface.co/byteshape/Qwen3-30B-A3B-Instruct-2507-GGUF
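If you want to try one of these, here's a minimal sketch using llama-cpp-python's Hugging Face download helper (the filename pattern is an assumption; check the repo's file list for the actual quant names):

```python
# Download one quant from the linked repo and run a short prompt.
# The filename glob is a placeholder; it must match exactly one file.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="byteshape/Qwen3-30B-A3B-Instruct-2507-GGUF",
    filename="*2.70bpw*.gguf",  # hypothetical pattern; pick a real quant
    n_ctx=4096,
)
out = llm("What does 2.70 bits per weight mean?", max_tokens=64)
print(out["choices"][0]["text"])
```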

yogthos@lemmygrad.ml (3 months ago):

Yeah, that seems like it should be doable. It's really interesting to see how you can run fairly large models on very modest hardware now. The Pi version is quantized, of course, but it's still way more powerful than stuff you needed a literal data centre for just a couple of years ago. It's kind of mind-boggling to consider.