this post was submitted on 19 Apr 2026
43 points (95.7% liked)

LocalLLaMA


When I first got into local LLMs nearly 3 years ago, in mid-2023, the frontier closed models were of course impressively capable.

I then tried my hand at running 7B-size local models, primarily one called Zephyr-7B (what happened to these models?? Dolphin, anyone??), on my gaming PC with an 8 GB AMD RX 580 GPU. Fair to say it was just a curiosity exercise (in terms of model performance).

Fast forward to this month: I'm revisiting local LLMs. (Although I no longer have the gaming PC; cost-of-living crisis, anyone? 😫)

And the ~31B-size models now look entirely sufficient. #Qwen has taken the helm in this tier. That's still quite expensive to set up locally, though within grasp.

I'm rooting for the edge-computing models now: the ~2B-size models. Thanks to their low footprint, they are practical for many people to run on an SBC 24/7 at home.

But for now, these edge models are the 'curiosity category'.
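Still, for a sense of how little is involved, here's a rough sketch of what 24/7 SBC inference could look like with llama-cpp-python (the model file, context size and thread count are placeholders, not recommendations):

```python
# Minimal sketch: serving a ~2B GGUF model on a low-power box with
# llama-cpp-python. Model path and settings are placeholders; tune
# them to the board's RAM and core count.
from llama_cpp import Llama

llm = Llama(
    model_path="./some-2b-instruct-q4_k_m.gguf",  # any small quantised GGUF
    n_ctx=2048,    # modest context to stay inside SBC memory
    n_threads=4,   # match the SBC's CPU cores
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Say hello in one sentence."}],
    max_tokens=64,
)
print(out["choices"][0]["message"]["content"])
```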

all 31 comments
[–] inconel@lemmy.ca 6 points 3 weeks ago (3 children)

For small models, the Bonsai series seems to be getting the spotlight. Natively trained at 1-bit and ternary 1.58-bit, the 8B runs in ~1 GB of memory.
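Back-of-the-envelope on that memory figure (weights only, ignoring KV cache and runtime overhead):

```python
# Rough weight-memory estimate for an 8B model at different bit widths.
# Weights only; KV cache, activations and runtime overhead come on top.
params = 8e9

for name, bits in [("fp16", 16), ("q4", 4), ("ternary", 1.58), ("1-bit", 1.0)]:
    gb = params * bits / 8 / 1e9
    print(f"{name:>8}: {gb:.2f} GB")

# fp16 ~16 GB, q4 ~4 GB, ternary ~1.6 GB, 1-bit ~1 GB:
# that's how a natively low-bit 8B squeezes into about a gigabyte.
```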

[–] ntn888@lemmy.ml 1 points 3 weeks ago

Interesting, thanks!

[–] ntn888@lemmy.ml 1 points 3 weeks ago

Funny, I tried the 8B Bonsai (https://huggingface.co/prism-ml/Bonsai-8B-gguf) and when loaded it takes ~7 GB of RAM!! When prompting, it stalls my llama.cpp container (I'm running on a weak 4th-gen i5).
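~7 GB across 8B parameters works out to roughly 7 bits per weight, so I suspect that GGUF is packed at Q8-ish precision rather than ternary. A quick sketch of how one might check with the gguf Python package (file path is just wherever you saved it):

```python
# Sanity check: what quantization does the downloaded GGUF actually use?
# ~7 GB / 8B params implies ~7 bits/weight, which would not be ternary.
from gguf import GGUFReader  # pip install gguf

reader = GGUFReader("Bonsai-8B.gguf")  # hypothetical local path

total_bytes = sum(int(t.n_bytes) for t in reader.tensors)
total_params = sum(int(t.n_elements) for t in reader.tensors)
print(f"{total_bytes / 1e9:.2f} GB of tensors, "
      f"{8 * total_bytes / total_params:.2f} bits/weight on average")

# Which quant types are present (e.g. Q8_0 vs ternary TQ1_0/TQ2_0)?
print({t.tensor_type.name for t in reader.tensors})
```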

[–] PixelatedSaturn@lemmy.world 5 points 3 weeks ago (1 children)

What do you want to use them for? I don't think they come remotely close to today's commercial models. Maybe for a specific purpose?

[–] ntn888@lemmy.ml 5 points 3 weeks ago (2 children)

Hey, thanks for your response. Yeah, that's what I meant: the 2B models aren't usable in their current state, but they'd be the more practical option for everyday use if they work out.

I actually meant that the 31B models are useful for my purposes. I don't do full-on agentic coding, just interactive chat/prompting. For example, I get good use out of them for writing Linux shell scripts (which I don't know how to write myself). Currently I use qwen3.5-flash via the cloud; it's as good as the frontier models were back then, if not better.

[–] PixelatedSaturn@lemmy.world 2 points 3 weeks ago (1 children)

I wanted to use smaller models, but then do more work on the "thinking" process. I didn't get far, because it gets so slow on normal hardware and too expensive on dedicated hardware. Time-consuming (I'm also not a programmer) but a fun project; in the end I just decided to satisfy the privacy angle with Proton's AI, Lumo.
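The idea, in sketch form (llama-cpp-python with a made-up model path and prompts; not my actual project code):

```python
# Illustrative two-pass "think first, answer second" loop with a small
# local model. Model path and prompts are invented for the example.
from llama_cpp import Llama

llm = Llama(model_path="./small-model.gguf", n_ctx=4096)

def ask(prompt: str, max_tokens: int = 256) -> str:
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,
    )
    return out["choices"][0]["message"]["content"]

question = "How should I partition a 1 TB disk for a small home server?"

# Pass 1: spend tokens on explicit reasoning instead of a bigger model.
plan = ask(f"Think step by step about this, but don't answer yet:\n{question}")

# Pass 2: feed the reasoning back and ask for just the final answer.
print(ask(f"Question: {question}\nNotes:\n{plan}\nGive only the final answer."))
```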

[–] inari@piefed.zip 2 points 3 weeks ago (1 children)

Proton has AI? Damn, that's gotta be bleeding their coffers

[–] fozid@feddit.uk 3 points 3 weeks ago

For me, anything less than gpt-oss-20b (a2b) is just for messing around with, or for basic categorisation and basic text or data processing with highly structured prompts.
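By "highly structured prompts" I mean something like this sketch (model path and label set invented for illustration):

```python
# Illustrative small-model categorisation with a rigid prompt template.
# Model path and labels are made up; swap in whatever you actually run.
from llama_cpp import Llama

llm = Llama(model_path="./gpt-oss-20b-q4.gguf", n_ctx=2048)

LABELS = ["bug report", "feature request", "question", "spam"]

def classify(text: str) -> str:
    prompt = (
        "Classify the message into exactly one category.\n"
        f"Categories: {', '.join(LABELS)}\n"
        f"Message: {text}\n"
        "Answer with the category name only."
    )
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": prompt}],
        max_tokens=8,
        temperature=0.0,  # keep the labelling deterministic-ish
    )
    return out["choices"][0]["message"]["content"].strip().lower()

print(classify("The app crashes whenever I rotate my phone."))
```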

[–] ZoteTheMighty@lemmy.zip 3 points 3 weeks ago (1 children)

This weekend I had an LLM walk me through setting up some home server stuff and networking. I tried Proton's Lumo and Qwen 3.6 running locally, and I have to say Qwen was the more impressive of the two models. When I first tried running models locally, like Llama 4, I remember thinking to myself that this was a dead end and big servers would always have the advantage, but it seems like we're hitting a turning point where many things can be done locally.

[–] ntn888@lemmy.ml 0 points 3 weeks ago (1 children)

Cool, what was your hardware, and which Qwen size did you use? Thanks

[–] ZoteTheMighty@lemmy.zip 2 points 3 weeks ago (2 children)

I have a 24 GB AMD 7900 XTX, and it's a 35B-parameter model.

[–] ericwdhs@discuss.online 4 points 3 weeks ago

Ooo... I'm running a 7900 XTX as well. Having 24GB without the Nvidia tax has been super nice for AI stuff. I have a 16GB 6900 XT running in another computer, and a lot of my AI model selection is still sized for it. I may need to stop procrastinating and copy your setup sooner rather than later.

[–] ericwdhs@discuss.online 1 points 3 weeks ago

Before I forget, can I ask you what GPU driver version you're running? I recently encountered some stability issues after a driver update (trying to support gaming and AI stuff at the same time), and the latest version I could find any stability claims for was 24.12.1.