I'm trying to find a way to use it with Guidance to control my smart home; it's actually really doable with only a 13B model
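For anyone curious how a 13B model can reliably drive a smart home: libraries like Guidance work by constraining what the model is allowed to emit, so it only ever picks among valid commands instead of free-generating. Here's a stdlib-only toy sketch of that idea — the command list and scoring function are made up for illustration, and this is not Guidance's actual API:

```python
# Toy sketch of Guidance-style constrained generation: build the output one
# character at a time, only ever extending a prefix of an allowed command.
# ALLOWED_COMMANDS and fake_model_scores are illustrative, not a real API.

ALLOWED_COMMANDS = [
    "lights.living_room.on",
    "lights.living_room.off",
    "thermostat.set 21",
]

def constrained_generate(score_fn, commands):
    """Greedily extend the output, restricted at each step to characters
    that keep it a prefix of at least one allowed command."""
    out = ""
    while out not in commands:
        # candidate next characters that stay on a valid path
        candidates = {
            c[len(out)]
            for c in commands
            if c.startswith(out) and len(c) > len(out)
        }
        # let the "model" pick its favorite among the legal candidates
        out += max(candidates, key=lambda ch: score_fn(out, ch))
    return out

def fake_model_scores(prefix, ch):
    """Stand-in for model logits: prefers characters of one target command."""
    target = "lights.living_room.off"
    i = len(prefix)
    return 1.0 if i < len(target) and target[i] == ch else 0.0

print(constrained_generate(fake_model_scores, ALLOWED_COMMANDS))
# -> lights.living_room.off
```

Because the model only chooses among legal continuations, even a small model can't hallucinate an invalid command — which is why 13B is enough for this kind of task.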
LocalLLaMA
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
Rules:
Rule 1 - No harassment or personal character attacks of community members, i.e. no name-calling, no generalizing entire groups of people that make up our community, no baseless personal insults.
Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency, i.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain/mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.
Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms, i.e. statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."
Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.
I've been waiting for ExLLama to have Guidance support, but there seem to have been some integration issues. We need more people to learn and get involved, haha, including me.
I actually just recently started having really good experiences with exllama on only 13B models; specifically, I found the Orca-tuned ones perform really well.
I used it quite a lot at the start of the year for software architecture and development. But the number of areas where it was useful was so small, and running it locally (which I do for privacy reasons) is quite slow.
I noticed that much of what was generated needed to be double-checked and was sometimes just wrong, so I've basically stopped using it.
Now I'm hopeful for better code generation models, and I'll spend the fall building a framework around a local model, to see if that helps in guiding the model's generation.
I'm pumped for Llama2, which was released yesterday. Early tests show some big improvements. Can't wait for Wizard/Vicuna/Uncensored versions of it.
It's marginally better than the original but WAY more censored, and it's pretty intrusive. It refused to write a bash script to kill a process by regexp 🤦
The first uncensored variants are already on Huggingface though, look for The Bloke. :)
I'm building an assistant for Jungian shadow work with persistent storage, but I'm a terrible programmer so it's taking longer than expected.
Since shadow work is very intimate and personal, I wouldn't trust a ChatGPT integration and I'd never be fully open in conversations.
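In case it helps anyone building something similar: the persistent-storage part of an assistant like that can start very small with just the stdlib. A minimal sketch (the table layout and function names here are purely illustrative, not from the poster's project):

```python
# Minimal persistent conversation store for a local assistant, using only
# the stdlib. Table and column names are illustrative placeholders.
import sqlite3

def open_store(path=":memory:"):
    """Open (or create) the conversation database. Use a real file path
    instead of ":memory:" to actually persist across sessions."""
    con = sqlite3.connect(path)
    con.execute(
        "CREATE TABLE IF NOT EXISTS turns ("
        " id INTEGER PRIMARY KEY,"
        " role TEXT NOT NULL,"      # 'user' or 'assistant'
        " content TEXT NOT NULL)"
    )
    return con

def add_turn(con, role, content):
    con.execute("INSERT INTO turns (role, content) VALUES (?, ?)", (role, content))
    con.commit()

def history(con):
    """Return all turns in order, ready to feed back into the prompt."""
    return con.execute("SELECT role, content FROM turns ORDER BY id").fetchall()

con = open_store()
add_turn(con, "user", "Let's continue yesterday's session.")
add_turn(con, "assistant", "Sure, last time we talked about your journaling.")
print(history(con))
```

Since everything stays in a local SQLite file, nothing about those intimate conversations ever leaves your own machine, which is the whole point of the privacy argument above.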
Wow. I'm always amazed by the stuff people do that was previously unknown to me. I had to look that one up. Is this some kind of leisure activity? Self-improvement or self-therapy? Or are you just pushing the boundaries of psychology?
I became fascinated by Jung's work after tripping on shrooms and becoming obsessed with understanding consciousness. I had already stumbled upon llama.cpp and started playing around with LLMs, and just decided to build a prototype for myself, because I've been doing shadow work for self-therapy reasons anyway.
It's not really that useful yet, and making it into a product is unlikely, because most people who wouldn't trust a ChatGPT integration won't trust an open source model on my machine(s) either. Also, shipping a product glued together from multiple open source components with rather strict GPU requirements seems like a terrible experience for potential customers, and I don't think I'd be able to handle the effort of supporting others to set it up properly. Dunno, we'll see. :D