
Is it just memory bandwidth? Or is it that most AMD products aren't well supported by PyTorch? Or some combination of the two?

[-] Kerfuffle@sh.itjust.works 2 points 1 year ago

If you're using llama.cpp, ROCm support recently got merged in. It works pretty well, at least on my 6600. I believe the pull request includes instructions for getting it working on Windows.
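
Roughly, the Linux build went like this after the merge (a sketch from the PR-era notes, not gospel: the `LLAMA_HIPBLAS` flag and the gfx override value for the 6600 are worth double-checking against the current README, and the model path below is just a placeholder):

```sh
# Build llama.cpp with ROCm/HIP support, assuming ROCm is installed under /opt/rocm
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
make LLAMA_HIPBLAS=1 CC=/opt/rocm/llvm/bin/clang CXX=/opt/rocm/llvm/bin/clang++

# RDNA2 cards like the RX 6600 (gfx1032) aren't on ROCm's official support list,
# so spoof the nearest supported target (gfx1030) before running:
export HSA_OVERRIDE_GFX_VERSION=10.3.0

# Offload layers to the GPU with -ngl; the model path is a placeholder
./main -m ./models/llama-7b.gguf -ngl 32 -p "Hello"
```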

[-] Naz@sh.itjust.works 2 points 1 year ago

Thank you so much! I'll be sure to check that out / get it updated
