this post was submitted on 20 Jul 2023
62 points (100.0% liked)
Technology
It's still pretty rough to self-host an LLM. You can get one that's kind of okay running on an average computer, but to run a really competitive model locally at a decent speed, you need far more RAM than most users have (or VRAM, for GPU-based setups).

I've been trying to get Vicuna going and the RAM usage is rough: 60 GB is the suggested minimum, I've got 64, and honestly I think I need a lot more.
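For a rough sense of why the numbers get so big, you can estimate the memory needed just to hold the weights from the parameter count and precision. This is my own back-of-envelope sketch (the function name and figures are illustrative, not from any particular runtime), and real usage is higher once you add the KV cache and runtime overhead:

```python
def weight_memory_gb(n_params_billions: float, bits_per_weight: int) -> float:
    """Decimal GB needed to store the model weights alone."""
    bytes_total = n_params_billions * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

# A 13B-parameter model (e.g. Vicuna-13B) at common precisions:
for bits, label in [(32, "fp32"), (16, "fp16"), (8, "int8"), (4, "4-bit")]:
    print(f"{label}: ~{weight_memory_gb(13, bits):.1f} GB")
# fp32: ~52.0 GB, fp16: ~26.0 GB, int8: ~13.0 GB, 4-bit: ~6.5 GB
```

This is also why quantized builds (e.g. 4-bit GGUF models run through llama.cpp) are so popular for self-hosting: dropping from fp16 to 4-bit cuts the weight footprint by roughly 4x, at some cost in quality.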