As a side note, Qwen3.6-27B is much more capable than Qwen3.6-35B, even though it is much slower.
https://huggingface.co/unsloth/Qwen3.6-27B-GGUF
For coding tasks where you don't mind waiting, you should be able to just barely squeeze the 8-bit quantized version into 32 GB RAM + 8 GB VRAM and have a pretty competent local model. 4-bit quants work, but they have issues with complex tool calls.
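For reference, a rough sketch of what that split looks like with llama-server; the filename and layer count below are placeholders, not tested values, so tune `--n-gpu-layers` to whatever actually fits in your 8 GB:

```bash
# Sketch only: offload as many layers as fit in 8 GB VRAM,
# the rest of the 8-bit weights stay in system RAM.
# Model filename and layer count are assumptions.
./llama-server -m Qwen3.6-27B-Q8_0.gguf \
  --n-gpu-layers 20 \
  --ctx-size 16384 \
  --port 8080
# Raise --n-gpu-layers until VRAM is nearly full; every extra offloaded layer helps speed.
```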
If you use the MTP branch of llama.cpp (and a suitable model), you can even double or triple your token generation speed: https://github.com/ggml-org/llama.cpp/pull/22673
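If you want to try it before it lands in a release, you can check out the PR branch directly (this is just the standard GitHub PR checkout recipe; the branch name `mtp` is only a local label):

```bash
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
git fetch origin pull/22673/head:mtp   # fetch the PR linked above
git checkout mtp
cmake -B build -DGGML_CUDA=ON          # omit the flag for a CPU-only build
cmake --build build --config Release -j
```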
For easier tasks, disable reasoning for instant responses.
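With Qwen models that usually just means the `/no_think` soft switch in the prompt (documented for Qwen3; I'm assuming the 3.6 series kept it), e.g. against llama-server's OpenAI-compatible endpoint:

```bash
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Summarize this log line for me. /no_think"}
    ]
  }'
```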
I'll probably have to wait for my (noob-friendly) client to support MTP, so until then I'll play around with what I have. I'm not that deep into AI anyway; I mostly tinker and only occasionally use it for help. But thanks for the suggestion.
I'm still experimenting and have just started using custom settings. What makes these "bigger" models more usable is lowering the context size to free up a bit of VRAM and, in exchange, loading more of the model itself into VRAM. For example, I'm trying this with a 31B unsloth Gemma 4 model at Q3_K_M and get 4 tok/sec. It's slow and doesn't leave room for a huge context, but for occasional questions that's tolerable given the hardware I have.
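In llama.cpp terms that tradeoff is just two flags competing for the same VRAM budget (the numbers and filename below are made up for illustration):

```bash
# Illustrative values only; the model filename is a placeholder.
# Big context, fewer layers on the GPU:
./llama-cli -m gemma-31b-Q3_K_M.gguf --ctx-size 32768 --n-gpu-layers 10
# Smaller context, more layers on the GPU -> more tok/sec:
./llama-cli -m gemma-31b-Q3_K_M.gguf --ctx-size 8192 --n-gpu-layers 24
```

Quantizing the KV cache (e.g. `--cache-type-k q8_0`) can claw back some more VRAM for layers, if your build supports it.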
My main models are the previously mentioned 35B-A3B and 26B-A4B (mixture-of-experts models where only a few billion parameters are active out of a bigger pool) anyway, as they are pretty fast at 17 to 50 tok/sec, while the quality is acceptable and not really much different from the "bigger" dense models I can run.