LocalLLaMA
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
Rules:
Rule 1 - No harassment or personal character attacks on community members, i.e. no name-calling, no generalizing about entire groups of people that make up our community, no baseless personal insults.
Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resources required to train a model are anything close to those needed to maintain a blockchain or mine crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.
Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. no statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."
Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.
Oh, I didn't realize I could use llama.cpp with OpenWebUI. I recall reading something about how Ollama was somehow becoming less FOSS, so I'm inclined to use llama.cpp. Plus I want to be able to more easily use sharded GGUFs. Do you have a guide for setting up llama.cpp with OpenWebUI?
I somehow hadn't heard of GLM 4.5 Air, I'll take a look, thanks!
Yeah, setting up OpenWebUI with llama.cpp is pretty easy. I would start by building llama.cpp: clone it from GitHub, then follow the short build guide linked in the README. I don't have a Mac, but I've found building it to be pretty simple, just one or two commands for me.
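Roughly, it's the CMake flow from the llama.cpp README; here's a minimal sketch (platform-specific options like Metal or CUDA are covered in the project's build docs):

```sh
# Grab the source and do a standard CMake build (per the llama.cpp README)
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release
# The binaries, including llama-server, land in build/bin/
```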
Once it's built, just run llama-server with the right flags telling it which model to load. I think it can take Hugging Face links, but I always just download GGUF files. There's good documentation for llama-server linked from the README. You also specify a port when you run llama-server.
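Something like this; the model path and numbers below are just placeholders I made up, so check llama-server --help for the real flag list:

```sh
# Serve a local GGUF over llama-server's built-in HTTP API.
# The model path, port, and sizes are placeholders -- adjust for your setup.
# For sharded GGUFs, point -m at the first shard (e.g. model-00001-of-00003.gguf).
./build/bin/llama-server \
  -m ~/models/GLM-4.5-Air-Q4_K_M.gguf \
  --port 8080 \
  -c 8192 \
  -ngl 99
```

Here -c sets the context size and -ngl sets how many layers to offload to the GPU, if you built with GPU support.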
Then you just add http://127.0.0.1:PORT_YOU_CHOSE/v1 as one of your OpenAI API connections in the OpenWebUI admin panel.
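Before touching the admin panel, you can sanity-check the server from a terminal; llama-server speaks the OpenAI-compatible API, so (assuming port 8080 from the sketch above) something like this should answer:

```sh
# List the loaded model via the OpenAI-compatible endpoint
curl http://127.0.0.1:8080/v1/models

# Or fire off a quick chat completion
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Say hi in one word."}]}'
```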
Separately, if you want to be able to swap models on the fly, you can add llama-swap into the mix; there's a config sketch below. I'd look into it after you get llama.cpp running and are somewhat comfy with it. You'll absolutely want it coming from Ollama, though. At that point it's a full replacement IMO.
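llama-swap is a small proxy that starts and stops llama-server instances based on the model named in each request; you point OpenWebUI at the proxy's port instead of llama-server's. Its YAML config looks roughly like this; the model names and paths here are made up and the exact schema may differ, so check the llama-swap README:

```yaml
# Hypothetical llama-swap config.yaml: one llama-server command per model.
# llama-swap substitutes ${PORT} itself when it launches a backend.
models:
  "glm-4.5-air":
    cmd: >
      /path/to/llama.cpp/build/bin/llama-server
      --port ${PORT}
      -m /path/to/GLM-4.5-Air-Q4_K_M.gguf
  "qwen3-30b":
    cmd: >
      /path/to/llama.cpp/build/bin/llama-server
      --port ${PORT}
      -m /path/to/Qwen3-30B-A3B-Q4_K_M.gguf
```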
What happened to Ollama? Did it get bought? Is it turning proprietary?