this post was submitted on 26 Dec 2025
254 points (99.6% liked)
Linux
you are viewing a single comment's thread
I successfully ran a local Llama model with llama.cpp and an old AMD GPU. I'm not sure why you think there's no other option.
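For reference, a minimal sketch of what that looks like (the model filename is a placeholder; use whatever GGUF quant you have):

```
# Run a quantized model with llama.cpp, offloading layers to the GPU.
# -m   path to a GGUF model file (placeholder name here)
# -ngl number of layers to offload to the GPU (99 = effectively all of them)
# -p   the prompt
./llama-cli -m ./models/llama-3-8b-instruct.Q4_K_M.gguf -ngl 99 -p "Hello"
```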
AMD had approximately one consumer GPU with official ROCm support, so unless your framework supports OpenCL, or you want to fuck around with unsupported ROCm drivers, you're out of luck. They've completely failed to meet the market.
Llama.cpp now supports Vulkan, so it doesn't matter what card you're using.
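Vulkan is a build-time option; roughly something like this (assuming a current tree, where the CMake flag is `GGML_VULKAN`, and the Vulkan SDK/headers are installed):

```
# Build llama.cpp with the Vulkan backend -- no CUDA or ROCm required.
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j
```

The resulting binary runs on anything with a working Vulkan driver, including RADV on old AMD cards.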
Devs need actual support for TensorFlow and the other mainstream frameworks, not just inference engines.
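To be fair, PyTorch at least ships ROCm wheels now; a quick sanity check looks something like this (the ROCm version in the index URL is just an example, match it to your install):

```
# Install a ROCm build of PyTorch (version in the URL is an example).
pip install torch --index-url https://download.pytorch.org/whl/rocm6.2
# The ROCm build reuses the CUDA device API, so this prints True when the GPU is usable.
python -c "import torch; print(torch.cuda.is_available())"
```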
I mean... my 6700 XT doesn't have official ROCm support, but the ROCm driver works perfectly fine with it. The difference is AMD hasn't put the effort into testing ROCm on their consumer cards, so they can't claim support for them.
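For anyone else with an RDNA2 card, the usual workaround is overriding the GFX target so ROCm treats it as the officially supported gfx1030 (an unofficial trick, not something AMD documents for these cards):

```
# The 6700 XT reports gfx1031, which official ROCm builds don't ship kernels for.
# It shares the RDNA2 ISA with gfx1030, so claiming to be 10.3.0 works in practice.
export HSA_OVERRIDE_GFX_VERSION=10.3.0
```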
The fact that it's still off-label like that is kinda nuts to me. ML and AI have been huge money makers for a decade and a half, and AMD seemingly doesn't care about GPUs. I wish they would invest more in testing and packaging drivers for the hardware that's out there. In the year of our lord 2025 I shouldn't have to compile from source or use AUR packages for drivers 😭