this post was submitted on 28 Mar 2026
102 points (90.5% liked)
Technology
you are viewing a single comment's thread
I literally said I'm using qwen3.5:122b for coding. I also use GLM-5, but it's slightly slower, so I generally stick with qwen.
It's right there, in ollama's library: https://ollama.com/library/qwen3.5:122b
The weights and everything else for it are on Huggingface: https://huggingface.co/Qwen/Qwen3.5-122B-A10B
This is not speculation. That's what I'm actually using nearly every day. It's not as good as Claude Code with Opus 4.6, but it's about 90% of the way there (if you use it right). When GLM-5 came out, I cancelled my Claude subscription and just stuck with Ollama Cloud.
I can use gpt-oss:20b on my GPU (4060 Ti 16GB)—and it works well—but for $20/month, qwen3.5 and GLM-5 are the better options.
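For anyone curious what "using it" looks like: whether the model runs locally or via Ollama Cloud, you talk to the same `/api/chat` endpoint. A minimal sketch (the localhost URL is Ollama's default; the prompt is just an example):

```python
import json
import urllib.request

# Default endpoint for a local Ollama server; Ollama Cloud models are
# addressed the same way once you're signed in via the ollama CLI.
OLLAMA_URL = "http://localhost:11434/api/chat"

def build_chat_request(model: str, prompt: str) -> urllib.request.Request:
    """Build (but don't send) a request for Ollama's /api/chat endpoint."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "stream": False,  # one complete response instead of a token stream
    }
    return urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )

req = build_chat_request("qwen3.5:122b", "Write a binary search in Python.")
# Actually sending it requires a running Ollama server:
# with urllib.request.urlopen(req) as resp:
#     print(json.loads(resp.read())["message"]["content"])
```

Swapping in GLM-5 or gpt-oss:20b is just a different `model` tag.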
I still use my GPU for (serious) image generation though. ChatGPT (DALL-E) or Gemini (Nano Banana) are OK for one-offs, but they're slow AF compared to FLUX 2 and qwen's image models running locally. I can give it a prompt and generate 32 images in no time, pick the best one, then iterate from there (using some sophisticated ComfyUI setups). The end result is a better image than what you'd get from Big AI.
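The batch-then-pick workflow is conceptually just a seed sweep; `generate` below is a hypothetical stand-in for whatever ComfyUI workflow you queue (a real setup would POST the workflow JSON to ComfyUI and collect the output paths), but the loop structure is the point:

```python
import random

def generate(prompt: str, seed: int) -> str:
    # Hypothetical stand-in: a real version would queue a ComfyUI workflow
    # with this seed and return the rendered image's path.
    return f"out/{seed}.png"

def batch(prompt: str, n: int = 32) -> list[tuple[int, str]]:
    """Render n candidates with distinct seeds, so the one you pick
    can be re-rendered or refined deterministically from its seed."""
    seeds = random.sample(range(2**31), n)
    return [(s, generate(prompt, s)) for s in seeds]

candidates = batch("a lighthouse at dusk, volumetric fog", 32)
# Eyeball the 32 outputs, keep the winner's seed, then iterate on
# that seed with tweaked prompts or img2img passes.
best_seed, best_image = candidates[0]
```

Keeping the seed is what makes "iterate from there" possible: the same seed plus a tweaked prompt gives you a controlled variation instead of a fresh roll of the dice.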
Both of those models appear to be proprietary, closed-source freeware. To be open source, they'd need to provide the source for the blobs.
I don't blame you if the AI industry deceived you; it's gotten to the point where people who review this stuff have to say "actual open source" to differentiate.
So... Do you actually use open-source models?