LocalLLaMA
Welcome to LocalLLaMA! Here we discuss running and developing machine learning models at home. Let's explore cutting-edge open-source neural network technology together.
Get support from the community! Ask questions, share prompts, discuss benchmarks, and get hyped about the latest and greatest model releases! Enjoy talking about our awesome hobby.
As ambassadors of the self-hosting machine learning community, we strive to support each other and share our enthusiasm in a positive, constructive way.
Rules:
Rule 1 - No harassment or personal character attacks of community members. I.e. no name-calling, no generalizing about entire groups of people who make up our community, no baseless personal insults.
Rule 2 - No comparing artificial intelligence/machine learning models to cryptocurrency. I.e. no comparing the usefulness of models to that of NFTs, no claiming the resource usage required to train a model is anything close to that of maintaining a blockchain or mining crypto, no implying it's just a fad/bubble that will leave people with nothing of value when it bursts.
Rule 3 - No comparing artificial intelligence/machine learning to simple text prediction algorithms. I.e. no statements such as "LLMs are basically just simple text prediction like what your phone keyboard autocorrect uses, and they're still using the same algorithms since <over 10 years ago>."
Rule 4 - No implying that models are devoid of purpose or potential for enriching people's lives.
I use a ton of different ones. I can test specific models if you like.
The good ol' Anything V3 and DPM++ 2M Karras
that would give me a good baseline. Thanks! :)
Does the resolution or steps or anything else matter?
512x512 and 1024x1024 would be interesting
and 50 steps
That'd be awesome!
I ran these last night, but didn’t have the correct VAE, so I’m not sure if that affects anything. 512x512 was about 7.5 it/s. 1024x1024 was about 1.3 s/it (iirc), i.e. roughly 0.77 it/s once you flip the unit. I used somebody else’s prompt which used LoRAs and embeddings, so I’m not sure how that affects things either. I’m not a professional benchmarker, so consider these numbers anecdotal at best. Hope that helps.
Edit: formatting
7.5 it/s for 512x512 is what I was looking for! On par with NVIDIA (actually even faster than my 3070)!
Thank you very much! And how/what exactly did you use to install it?
The install wasn’t too hard. I mean it wasn’t like just running a batch file on Windows, but if you have even a tiny bit of experience with the Linux shell and installing python apps, you will be good. You mostly just need to make sure you’re using the correct (ROCm) version of PyTorch. Happy to help, any time (best on evenings and weekends EST). Please DM.
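In case it helps, the PyTorch part usually boils down to something like this. This is just a rough sketch; the rocm6.0 in the index URL is only an example, so match it to whatever ROCm release you actually have installed:

```bash
# create and activate a virtual environment for the app
python3 -m venv venv
source venv/bin/activate

# install the ROCm build of PyTorch instead of the default CUDA/CPU one
# (rocm6.0 here is just an example index; pick the one matching your ROCm install)
pip install torch torchvision --index-url https://download.pytorch.org/whl/rocm6.0
```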
I'm quite familiar with Linux and installing stuff, so I assume there's no compiling special versions of weird packages and manually putting them into a venv or anything 😄
Thanks again!
No special compiling. Just need to download the ROCm drivers from AMD and the special ROCm PyTorch version.
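Once both are in place, a quick sanity check that the ROCm build is actually the one in your venv (nothing project-specific here, just the standard PyTorch attributes):

```bash
# torch.version.hip is a version string on ROCm builds and None otherwise;
# on ROCm the GPU is still exposed through the torch.cuda API
python -c "import torch; print(torch.version.hip, torch.cuda.is_available(), torch.cuda.get_device_name(0))"
```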
Also you’re welcome!