LocalLLaMA
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
Rules:
Rule 1 - No harassment or personal character attacks of community members, i.e. no name-calling, no generalizing entire groups of people that make up our community, no baseless personal insults.
Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.
Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms from <over 10 years ago>".
Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.
I think this is where a lot of LLMs will land: local usage. At first people will try to use big general models for their rather domain-specific tasks, until they realize smaller specialized models can do the same thing, but cheaper, with no subscription or per-token costs and with full ownership of the model rather than renting one.
Imagine having a model that is proprietary to your company, costs nothing but electricity to run, with no rented computing capacity. You own the model, can customize it at will, and face no restrictions.
Currently, super large models are driven by the hope of AGI capabilities, but we are still years away from that, and LLMs alone will never get us there. It requires different architectures.
I run Ministral 8B on my laptop, and as long as I don't give it overly complex tasks it can do things like translating, explaining simple concepts, or helping me understand functions and basic code while I'm learning. If it can do web search, RAG retrieval, and indexing, you don't need that big of a model for it to work as a decent assistant.
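For anyone curious what "local assistant" looks like in practice, here's a minimal sketch of querying a locally hosted model through Ollama's REST API on the default port. The model tag and prompt are just placeholders, assuming whatever model your local install actually serves:

```python
# Minimal sketch: query a locally running Ollama server.
# Assumes Ollama is serving on its default port (11434) and that a
# model tagged "mistral" has been pulled -- swap in your own tag.
import requests

def ask_local(prompt: str, model: str = "mistral") -> str:
    """Send a prompt to the local Ollama server and return the reply."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": model, "prompt": prompt, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["response"]

print(ask_local("Explain what this Python function does: def f(x): return x * 2"))
```

Everything runs on your own hardware; no tokens leave the machine and no per-request bill arrives.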
If you want to replace people and not just enhance their work, then at current prices I don't think you get your bang for the buck. A model owned and run by someone outside the company is just a very expensive consultant: they'll leave with all their competence if you stop paying them. An in-house AI will never leave, only gets better with time, and you pay nothing beyond electricity and the initial capital cost of the server.
I give it a year or two before companies realise this, but first they need to realise that the money they spend on subscriptions and tokens isn't an investment, but a cost. They don't get that money back. A local model, on the other hand, is an investment that keeps on giving after the initial capital outlay and costs much less for repetitive work.
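The break-even math is simple enough to sketch. All the figures below are made-up placeholders for illustration, not real quotes; plug in your own subscription spend, hardware price, and power bill:

```python
# Back-of-the-envelope break-even sketch for local vs. rented inference.
# Every number here is an assumption for illustration only.
server_cost = 5000.0            # one-time hardware purchase (USD, assumed)
electricity_per_month = 40.0    # power cost to run the server (USD, assumed)
subscription_per_month = 500.0  # API/subscription spend it replaces (USD, assumed)

# Months until the hardware pays for itself out of avoided subscription fees.
months = server_cost / (subscription_per_month - electricity_per_month)
print(f"Break-even after ~{months:.1f} months")  # ~10.9 months with these numbers
```

After the break-even point, every additional month of repetitive work is nearly free, which is the whole "investment, not cost" argument in one line of arithmetic.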
Mistral is betting on this and I think it will pay off. Unless I am wrong about AGI.