Yes, this is a recipe for extremely slow inference: I'm running a 2013 Mac Pro with 128 GB of RAM. I'm not optimizing for speed, I'm optimizing for aesthetics and intelligence :)

Anyway, what model would you recommend? I'm looking for something general-purpose but with solid programming skills. Ideally abliterated as well; since I'm running this locally, I might as well have all the freedoms. Thanks for the tips!

pebbles@sh.itjust.works 1 point 1 week ago

Yeah, setting up Open WebUI with llama.cpp is pretty easy. I would start by cloning llama.cpp from GitHub and then following the short build guide linked in the README. I don't have a Mac, but I've found building it to be pretty simple. Just one or two commands for me.
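
For reference, the build steps look roughly like this (check the llama.cpp README for the current instructions, since details change between releases):

```
# clone the repo and build it with CMake, the build system the README documents
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
```

The binaries, including llama-server, end up under build/bin/.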

Once it's built, just run llama-server with the right flags telling it which model to load. I think it can take Hugging Face links, but I always just download GGUF files. They have good documentation for llama-server in the README. You also specify a port when you run llama-server.
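
Something like this, where the model path, repo name, and port are just placeholder examples (run llama-server --help for the full flag list):

```
# serve a local GGUF file on port 8080
./build/bin/llama-server -m ./models/your-model.gguf --port 8080

# recent builds can also pull a model straight from a Hugging Face repo
./build/bin/llama-server -hf someuser/some-model-GGUF --port 8080
```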

Then you just add http://127.0.0.1:PORT_YOU_CHOSE/v1 as one of your OpenAI API connections in the Open WebUI admin panel.
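
If Open WebUI doesn't see the model, you can sanity-check the endpoint first (assuming port 8080 from the example above):

```
# llama-server exposes an OpenAI-compatible API, so this should list the loaded model
curl http://127.0.0.1:8080/v1/models
```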


Separately, if you want to be able to swap models on the fly, you can add llama-swap into the mix. I'd look into it after you get llama.cpp running and are somewhat comfortable with it. You'll absolutely want it coming from Ollama, though. At this point it's a full replacement IMO.
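
To give a flavor of it: llama-swap sits in front of llama-server as a proxy and launches the right backend on demand, driven by a YAML config. A minimal sketch, with made-up model names and paths (see the llama-swap README for the real schema):

```
# config.yaml for llama-swap: each entry is a command the proxy launches
# on demand, with ${PORT} substituted in by llama-swap itself
models:
  "general":
    cmd: ./build/bin/llama-server -m ./models/general.gguf --port ${PORT}
  "coder":
    cmd: ./build/bin/llama-server -m ./models/coder.gguf --port ${PORT}
```

You then point Open WebUI at llama-swap's port instead of llama-server's, and it picks the backend based on the model name in each request.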