this post was submitted on 16 May 2026
20 points (95.5% liked)

LocalLLaMA

4724 readers
50 users here now

Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.

Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.

As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.

Rules:

Rule 1 - No harassment or personal character attacks of community members, i.e. no name-calling, no generalizing about entire groups of people that make up our community, no baseless personal insults.

Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.

Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. no statements such as "llms are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms as <over 10 years ago>".

Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.

founded 2 years ago
MODERATORS
 

I was browsing Reddit (yetch) while waiting for some stuff to finish when I came across this post:

https://old.reddit.com/r/LocalLLM/comments/1tek00h/why_is_llm_is_so_expensive/

The author makes a (very) interesting claim: if table stakes are $6K (they're not... but go with it for now), then most folks are cooked from the get-go.

Personally, I've been figuring out how to get more from less. For example, people have found ways to run Qwen3.6 35B on a 6 GB VRAM GTX 1060 at ~20 tok/s (--ctx 64K IIRC, but go check the vids yourself):

https://youtu.be/8F_5pdcD3HY
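For anyone who wants to poke at this themselves, here's a rough sketch of that kind of partial-offload setup using llama-cpp-python. The model filename, layer count, and context size are placeholders to tune for your own card, not settings taken from the video:

```python
from llama_cpp import Llama

# Partial GPU offload: put only as many transformer layers on the 6 GB card
# as will fit, and let the rest run from system RAM.
llm = Llama(
    model_path="qwen3.6-35b-q4_k_m.gguf",  # placeholder filename
    n_gpu_layers=16,   # raise until VRAM is full, lower if it OOMs
    n_ctx=65536,       # the ~64K context mentioned above
)

out = llm("Explain KV cache quantization in one paragraph.", max_tokens=256)
print(out["choices"][0]["text"])
```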

I think there's a lot of juice to squeeze by turning LLMs from "all-seeing sages" into basically mouthpieces for shit that actually runs fast on regular silicon - but that's just me and my crazy brain. YMMV.
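To make that concrete, here's a toy sketch of the "mouthpiece" pattern I mean: plain Python does the actual work, and a small local model only phrases the result. The model filename is a placeholder, and llama-cpp-python is just one way to wire it up:

```python
import shutil
from llama_cpp import Llama

def disk_usage_report(path="/"):
    # The "runs fast on regular silicon" part: ordinary code computes the facts.
    total, used, free = shutil.disk_usage(path)
    return {"total_gb": round(total / 1e9), "used_gb": round(used / 1e9),
            "free_gb": round(free / 1e9)}

llm = Llama(model_path="small-model-q4.gguf", n_gpu_layers=-1)  # placeholder

facts = disk_usage_report()
# The model never calculates anything; it just verbalizes verified numbers.
prompt = f"Rewrite these disk stats as one friendly sentence, changing no numbers: {facts}"
print(llm(prompt, max_tokens=64)["choices"][0]["text"])
```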

[–] Luminous5481@anarchist.nexus 3 points 8 hours ago (1 children)

I've been running Qwen3.6 35B very easily, but that's because I've got an ASUS Z13, one of the newish laptops with the AMD Ryzen AI Max+ 395 in it. You can get that chip with 128 GB of unified memory, 96 GB of which I have dedicated as VRAM. I can also run Qwen3 Coder Next 80B. I'm not sure how many tokens per second I'm getting with Coder, but it's fast.
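If you ever want an actual number, timing a single completion is enough for a ballpark. A minimal sketch, assuming llama-cpp-python and a placeholder model filename:

```python
import time
from llama_cpp import Llama

llm = Llama(
    model_path="qwen3-coder-next-80b-q4.gguf",  # placeholder filename
    n_gpu_layers=-1,  # everything fits in unified memory, so offload it all
    n_ctx=8192,
)

start = time.perf_counter()
out = llm("Write a Python function that reverses a linked list.", max_tokens=512)
elapsed = time.perf_counter() - start

# llama-cpp-python returns OpenAI-style usage counts with each completion.
generated = out["usage"]["completion_tokens"]
print(f"{generated} tokens in {elapsed:.1f}s -> {generated / elapsed:.1f} tok/s")
```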

Honestly, I think this unified memory might be the future of mobile chips, because the things I can do with it are pretty crazy. It's not just useful for AI either; it's in a few gaming laptops because it also works really well for gaming. But the things you can do with LLMs or diffusion models are amazing. I donate compute to AI Horde, and I'm finishing image generation jobs for people in like 4 seconds.

[–] SuspiciousCarrot78@aussie.zone 2 points 8 hours ago* (last edited 7 hours ago)

Good man/woman. Nerd Valhalla awaits you :)