LocalLLaMA
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
Rules:
Rule 1 - No harassment or personal character attacks on community members, i.e., no name-calling, no generalizing entire groups of people that make up our community, no baseless personal insults.
Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e., no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.
Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e., no statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."
Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.
"Compete with"? Depends on your definition of competition. You can accomplish similar results with smaller, local models but you cannot do it as lackadaisically as with Claude by just throwing a sentence at it and letting it go to town for 15 minutes.
Doing things like this locally takes more time and effort in countless ways. You need to structure the prompts and the environment much more carefully. You need to wait much longer for much smaller portions of work. You need to retry when it gets it wrong, which will happen, either relying on better luck or adjusting your plan, your prompts, or your context to better guide it to what you're actually looking for.
If you're used to Claude, working the same way with both and comparing them directly side by side, then no, open models are not directly competitive like that. They can compete if you're willing to be much more involved in the process.
If Claude is like a junior developer with access to an entire library of programming books, open models are like a 14-year-old in their first programming class with access to an entire library of programming books that they don't know how to utilize effectively. They require a lot more guidance.
You may wonder, "What's the point if I have to do so much work anyway? Maybe I should just do it myself," and indeed, this is the crux of the problem. It's even more obvious with smaller, open models than it is with the commercial AI models. This is not a new problem; it has existed for as long as we've trained new employees. The difference is that real junior developers actually learn and grow based on my efforts to guide them, and they eventually become senior developers. I'm not convinced that Claude or any open model ever will, despite how much effort goes into "training" them.