this post was submitted on 27 Apr 2026
17 points (94.7% liked)

LocalLLaMA

4722 readers
21 users here now

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal character attacks of community members, i.e. no name-calling, no generalizing entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain/mining for crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

founded 2 years ago
MODERATORS
 

Are there any open models that can actually compete with proprietary ones like GPT 5.5 Extended Thinking or Claude Opus 4.7? I am getting really good results with those in their chat interfaces for coding tasks. They sometimes spend 30-45 minutes working on my task, have an internal container they run tool calls in (like cloning a repository and compiling their code), and can find online documentation. Their answers are very good and usually correct, even for very complex tasks requiring specific protocols.

So I would like to know how well we can replicate this using open models, since I want more control over how it runs, and privacy. Do any of you hook agentic capabilities into your local models? How do you do it, and which models give you good results?

Pretend I have unlimited resources (local llama.cpp, sufficient fast storage/memory, and unlimited time to wait for a good response).

all 13 comments
[–] Zikeji@programming.dev 6 points 2 weeks ago

I've been running Qwen 3.5 122B A10B but recently swapped to Qwen 3.6 35B A3B - both using OpenCode as my agentic harness (though I've also used Pi). I've been happy with the output, though I have to be more precise with my prompts and do planning passes.

[–] cecilkorik@piefed.ca 2 points 2 weeks ago

"Compete with"? Depends on your definition of competition. You can accomplish similar results with smaller, local models but you cannot do it as lackadaisically as with Claude by just throwing a sentence at it and letting it go to town for 15 minutes.

Doing things like this locally will take more time and effort in countless different ways. You need to structure the prompts and the environment much more carefully. You need to wait much longer for much smaller portions of work. You need to retry when it gets it wrong (which will happen), either relying on better luck or adjusting your plan, your prompts, or your context to better guide it to what you're actually looking for.

If you're used to Claude, working the same way with both and comparing them directly side by side, then no. Open models are not directly competitive like that. They can compete with it, if you're willing to be much more involved in the process.

If Claude is like a junior developer with access to an entire library of programming books, open models are like a 14-year-old in their first programming class with access to an entire library of programming books that they don't know how to utilize effectively. They require a lot more guidance.

You may wonder "what's the point if I have to do so much work anyway, maybe I should just do it myself" and indeed, this is the crux of the problem. It's even more obvious with smaller, open models than it is with the commercial AI models. This is not a new problem, it has been a problem even when training new employees. The difference is, real junior developers actually learn and grow based on my efforts to guide them and they eventually become senior developers. I'm not convinced that Claude or any open model ever actually will, despite how much effort goes into "training" them.

[–] Peruvian_Skies@sh.itjust.works 2 points 2 weeks ago (1 children)

I'd also love to know but I suspect none are quite there yet.

[–] Quexotic@sh.itjust.works 3 points 2 weeks ago (1 children)

That's my problem. None are there yet, at least with my hardware.

If you've got 20 grand to spend, there are a couple of models out there, like the one mentioned above, that should do fine.

[–] hok@lemmy.dbzer0.com 3 points 2 weeks ago (1 children)

What I have yet to learn is how much of the intelligence and accuracy comes from the model itself and how much comes from the agentic tool system. For example, my experience with ChatGPT probably would be much worse with the free version (no thinking or container).
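To make the split concrete: the "agentic tool system" part is mostly a loop the harness runs around the model. Below is a minimal sketch of that loop, assuming a llama.cpp `llama-server` exposing its OpenAI-compatible API at `http://localhost:8080/v1`; the single `run_shell` tool and the message shapes follow the OpenAI chat-completions tool-calling convention, and everything beyond that (URL, tool name, step limit) is illustrative, not any particular harness's real code.

```python
# Minimal sketch of what an agentic harness does around a local model:
# call the model, execute any tool calls it makes, feed results back, repeat.
import json
import subprocess
import urllib.request

API_URL = "http://localhost:8080/v1/chat/completions"  # assumed llama-server endpoint

# One illustrative tool advertised to the model.
TOOLS = [{
    "type": "function",
    "function": {
        "name": "run_shell",
        "description": "Run a shell command and return its output.",
        "parameters": {
            "type": "object",
            "properties": {"command": {"type": "string"}},
            "required": ["command"],
        },
    },
}]

def run_tool(name, arguments):
    """Dispatch a tool call from the model to a real implementation."""
    if name == "run_shell":
        result = subprocess.run(arguments["command"], shell=True,
                                capture_output=True, text=True, timeout=60)
        return result.stdout + result.stderr
    return f"unknown tool: {name}"

def chat(messages):
    """One round trip to the model server."""
    body = json.dumps({"messages": messages, "tools": TOOLS}).encode()
    req = urllib.request.Request(API_URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]

def agent_loop(task, max_steps=10):
    """The 'agentic' part: loop until the model answers without tool calls."""
    messages = [{"role": "user", "content": task}]
    for _ in range(max_steps):
        msg = chat(messages)
        messages.append(msg)
        if not msg.get("tool_calls"):
            return msg["content"]  # model is done; return its final answer
        for call in msg["tool_calls"]:
            output = run_tool(call["function"]["name"],
                              json.loads(call["function"]["arguments"]))
            messages.append({"role": "tool",
                             "tool_call_id": call["id"],
                             "content": output})
    return "step limit reached"
```

Everything in `agent_loop` is model-independent scaffolding; the model only decides *which* tools to call and *what* to do with the results, which is where the 60-40 (or 40-60) split people mention comes from.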

[–] Quexotic@sh.itjust.works 1 points 2 weeks ago

I'd say it's 60-40 or 40-60. Both are important and have a large bearing on your results, but 128B and 8B models will always have a big difference in reasoning capacity.

[–] troed@fedia.io 1 points 2 weeks ago

I run a quant of Qwen 35B A3B (Qwen3.6-35B-A3B-GGUF:UD_Q4_K_XL) at the moment, using Opencode and llama.cpp. I'm getting useful work out of it - but it's of course not Claude. My hardware is a 5060Ti with 16GB VRAM and then ~20GB or so of system mem is getting used as well.
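For reference, a launch for a setup like this might look roughly as follows. The flags are real `llama-server` options, but the model filename and the exact GPU layer count are guesses; you'd tune `--n-gpu-layers` until the model fills the 16GB of VRAM and the remainder spills into system memory.

```shell
# Sketch of a llama-server launch for a 16GB-VRAM card with partial offload.
# Model path and --n-gpu-layers value are illustrative, not from the post.
llama-server \
  -m ./Qwen3.6-35B-A3B-UD-Q4_K_XL.gguf \
  --ctx-size 32768 \
  --n-gpu-layers 30 \
  --host 127.0.0.1 --port 8080
```

This also exposes the OpenAI-compatible API that harnesses like Opencode can point at.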

It's important to put boundaries on less capable models though, so I have two plugins in Opencode as well that really make a big difference to the results: @tarquinen/opencode-dcp@latest and superpowers@git+https://github.com/obra/superpowers.git.

I want to work in small steps with good control over what the models do, so it's not very similar to what you describe, where you just have them run away for half an hour and do everything.