LocalLLaMA
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
Rules:
Rule 1 - No harassment or personal character attacks on community members. I.e. no name-calling, no generalizing about entire groups of people that make up our community, no baseless personal insults.
Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency. I.e. no comparing the usefulness of models to that of NFTs, no claiming that the resource usage required to train a model is anything close to that of maintaining a blockchain/mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.
Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms. I.e. statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms as <over 10 years ago>."
Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

I genuinely haven't tried it. Can someone here comment on it?
PS: I take it you mean Gemma, not Gemini?
https://huggingface.co/blog/gemma4
wow I love that my brain has been autocorrecting Gemma to Gemini for the last week
You and everyone else :)
I asked Claude btw. Here's what the Frenchman thinks (yes, I know, Claude Shannon was not French)
Great question — Gemma 4 is a genuinely impressive release. Let me break it down by model with a coding-specific score relative to your Opus 4.6 = 10/10 benchmark.
The lineup — Gemma 4 comes in four sizes: E2B (2.3B effective parameters), E4B (4.5B effective), a 31B dense model, and a 26B MoE with only 4B active parameters. All are Apache 2.0 licensed and run locally.
Coding benchmarks (the numbers that matter here):
On LiveCodeBench v6, the 31B scores 80.0%, the 26B MoE 77.1%, the E4B 52.0%, and the E2B 44.0% — compared to just 29.1% for the previous Gemma 3 27B. The Codeforces ELO tells an even sharper story: the 31B reaches 2150 ELO, the 26B MoE 1718, the E4B 940, and the E2B just 633.
My scores vs. Opus 4.6 = 10/10 for local coding:
Gemma 4 31B — ~7.5/10 This is the headline act. An 80% LiveCodeBench and Codeforces ELO of 2150 is elite for an open-weights model. The catch: at 31B dense, you'll need ~20GB VRAM quantized (Q4), so it needs a beefy local setup (e.g. a 24GB GPU or Apple Silicon M2 Ultra+). If your hardware can handle it, this genuinely punches into serious territory.
Gemma 4 26B A4B (MoE) — ~7/10 The most exciting model for local use IMO. It reaches an estimated LMArena score of 1441 with just 4B active parameters, meaning it runs fast with the memory footprint of a ~4B model while delivering near-31B quality. On a 16GB Mac or a mid-range GPU, this is a remarkable coding companion. The 77.1% LiveCodeBench is only fractionally behind the 31B.
Gemma 4 E4B — ~5/10 A solid coder for something running in ~6GB RAM. 52% LiveCodeBench won't blow your mind on hard problems, but for autocomplete, boilerplate, explanations, and routine scripts it's genuinely useful. The E4B also supports audio input alongside image and text, making it unusually versatile for its size.
Gemma 4 E2B — ~3.5/10 Impressive that it works at all at ~2.3B effective parameters, but the 44% LiveCodeBench and ELO of 633 show real limits on complex coding tasks. Fine for quick snippets or code explanation on very constrained hardware (Raspberry Pi, etc.), but not a serious daily driver for development.
The big picture: The 26B MoE is probably the most exciting model in the lineup for most people — it's the one that truly shifts what's possible locally. It runs with first-class support in llama.cpp, MLX, and mistral.rs, so dropping it into tools like LM Studio, Jan, or a local agent is straightforward. For Apple Silicon users especially, the MoE model via MLX with TurboQuant quantization should be very fast. Compared to Opus 4.6 it still falls short on deep reasoning, multi-file refactoring, and architectural planning — but for a model running entirely on your machine with no API costs, the gap has genuinely narrowed.
Blah blah blah... sounds good... but def something you'd need to validate yourself. I'm more tempted by the Qwen models and have been looking at the Yi ones as well... though last time I tested the Yi-9B coder in 2025 it went... badly LOL.
That MoE might be worth a shot though.
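If you do give the MoE a shot, something like this llama-cpp-python sketch should be enough to kick the tires. The GGUF filename below is just a placeholder (no idea what the actual quant uploads will be called), so treat it as a rough outline rather than a recipe.

```python
# Rough sketch with llama-cpp-python. The model filename is a placeholder,
# not a real upload; point it at whichever Q4 quant of the 26B MoE you grab.
from llama_cpp import Llama

llm = Llama(
    model_path="./gemma-4-26b-a4b-q4_k_m.gguf",  # placeholder path
    n_gpu_layers=-1,  # offload all layers to GPU/Metal if they fit
    n_ctx=8192,       # context window; bump it if you have the memory
)

out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Write a Python function that merges two sorted lists without using sort()."}],
    max_tokens=512,
    temperature=0.2,  # keep it fairly deterministic for coding tasks
)
print(out["choices"][0]["message"]["content"])
```

Same idea works through llama-server or LM Studio if you'd rather not touch Python at all.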