
Is it just memory bandwidth? Or is it that AMD isn't supported well enough by PyTorch for most products? Or some combination of those?

[-] turbodrooler@lemmy.world 8 points 1 year ago

The memory bandwidth stinks compared to a discrete GPU. That's the reason. It's still possible, though.
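To see why bandwidth is the bottleneck, a rough back-of-the-envelope sketch: for a dense model, generating one token means streaming essentially all of the weights through the compute units once, so tokens/s is roughly memory bandwidth divided by model size. The bandwidth and model-size figures below are nominal spec numbers used as assumptions, not measurements:

```python
# Napkin math: bandwidth-bound token generation.
# tokens/s ~= usable memory bandwidth / bytes read per token,
# and for a dense transformer each token reads roughly all the weights once.

model_bytes = 4.1e9  # assumed: ~7B parameters at 4-bit quantization (~4.1 GB)

platforms = {
    "dual-channel DDR5-5600 (typical desktop/APU)": 89.6e9,  # bytes/s, nominal
    "Apple M2 base (128-bit LPDDR5)": 100e9,                 # bytes/s, nominal
    "RTX 3090-class discrete GPU (GDDR6X)": 936e9,           # bytes/s, nominal
}

for name, bw in platforms.items():
    print(f"{name}: ~{bw / model_bytes:.0f} tokens/s theoretical ceiling")
```

Real throughput lands well below these ceilings, but the ratios are the point: a discrete GPU has roughly 10x the bandwidth of a typical APU's system memory.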

[-] kelvie@lemmy.ca 1 points 1 year ago

The question is, though, would it be better than just a CPU with lots of RAM?

[-] turbodrooler@lemmy.world 3 points 1 year ago

Yes, it seems so according to this person’s testing: https://youtu.be/HPO7fu7Vyw4

[-] j4k3@lemmy.world 7 points 1 year ago

Ultimately, it is all about data throughput to the CPU caches, because the tensors are so large. The M2 claims a 128-bit bus. The instruction support for ARM built into llama.cpp is weak compared to x86. If you want to run big models that require lots of memory without spending five figures, find an Intel chip that supports AVX-512 and can take 96+ GB of RAM. AVX-512 and its related sub-instructions are directly supported in llama.cpp, and that gets you 512-bit instructions. Apple can't match that.
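If you go down the AVX-512 route, here is a minimal Linux-only sketch for checking whether a CPU actually advertises the relevant feature flags (llama.cpp also prints the features it detected at startup); the particular sub-features listed are just an illustrative subset:

```python
# Linux-only: check /proc/cpuinfo for a few AVX-512 feature flags.
# The set below is an illustrative subset, not an exhaustive list.

wanted = {"avx512f", "avx512bw", "avx512dq", "avx512vl"}

with open("/proc/cpuinfo") as f:
    for line in f:
        if line.startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            found = sorted(wanted & flags)
            print("AVX-512 features found:", found if found else "none")
            break
```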

If you want a laptop, get something with a 3080 Ti; it specifically needs to be the Ti version, which has 16 GB of VRAM and came in several 2022 models.

Run Fedora with it. Fedora has Nvidia support, including a slick script that rebuilds the GPU driver from source with every kernel update automatically and keeps Secure Boot working the whole time.

[-] Atemu@lemmy.ml 3 points 1 year ago

"The instruction support for ARM built into llama.cpp is weak compared to x86."

I don't know about you, but my M1 Pro is a hell of a lot faster than my 5800X in llama.cpp.

These CPUs benchmark similarly across a wide range of other tasks.

[-] kelvie@lemmy.ca 2 points 1 year ago

I run exllama on a 24GB GPU right now, just seeing what's feasible for larger models -- so an Intel CPU with lots of RAM would, in theory, outperform an AMD iGPU with the same amount of RAM allocated as VRAM? (I'm looking at APUs/iGPUs solely because you can configure the amount of VRAM allocated to them.)
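For the sizing side of that question, a hedged rule-of-thumb sketch: weight memory is roughly parameter count times bits per weight divided by 8, plus some headroom for the KV cache and runtime overhead (the 2 GB headroom here is an assumption):

```python
# Rough fit check for a quantized model against a memory budget.
# weights ~= params * bits_per_weight / 8; the overhead figure is assumed.

def fits(params_billion: float, bits: int, budget_gb: float, overhead_gb: float = 2.0) -> bool:
    weights_gb = params_billion * bits / 8  # params in billions * bits/8 ~= GB of weights
    needed_gb = weights_gb + overhead_gb
    print(f"{params_billion:.0f}B @ {bits}-bit -> ~{needed_gb:.1f} GB needed (budget {budget_gb} GB)")
    return needed_gb <= budget_gb

fits(33, 4, 24)   # ~33B at 4-bit on a 24 GB discrete GPU
fits(13, 4, 16)   # ~13B at 4-bit inside a 16 GB iGPU VRAM carve-out
fits(70, 4, 24)   # ~70B at 4-bit does not fit in 24 GB without offloading
```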

[-] j4k3@lemmy.world 3 points 1 year ago

I'm pretty sure it is not super relevant. The VRAM in a GPU is different from the system memory a CPU uses. On x86, system memory is mostly virtual; I haven't played in this space in a while, so my memory is rusty, but system memory is not directly accessible over an address bus. That creates a major bottleneck when you need to access a lot of information at once; it's more of a large storage system built to move chunks of data that are limited in size. If you want more info, read about address buses and physical/virtual addressing: https://en.m.wikipedia.org/wiki/Physical_Address_Extension

In a GPU, the goal is to move data in parallel, with most of the memory available at the same time. That doesn't carry the extra overhead of complicated memory-management systems; each small processor directly addresses the memory it needs. With a GPU, more memory usually also means more physical compute hardware.

If you ever feel motivated to build vintage computing hardware like Ben Eater's 8-bit breadboard computer project on YouTube, or his 6502 stuff, you'll see a lot of this firsthand. In the early 8-bit computer era, the memory bus and address space were a major design aspect, and they're much easier to understand there because everything is configured manually in hardware external to the processor.

[-] kelvie@lemmy.ca 1 points 1 year ago

As per the link (YouTube) in the other thread, it seems like an iGPU with an increased VRAM allocation is better than using the CPU, though it also seems APUs max out at 16GB of VRAM. Maybe something AMD can improve in the future then...

[-] rufus@discuss.tchncs.de 1 points 1 year ago

What's the memory bandwidth on the AMD platform?
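For ballpark figures, the usual theoretical-bandwidth formula is transfer rate times bus width times channel count; the example configurations below are assumptions, not measurements of any specific AMD board:

```python
# Theoretical DRAM bandwidth = transfers/s * bytes per channel * channels.
# Example configurations are nominal/assumed, not measured.

def peak_bw_gb_s(mt_per_s: float, channels: int, bytes_per_channel: int = 8) -> float:
    return mt_per_s * 1e6 * bytes_per_channel * channels / 1e9

print(f"DDR5-5600, dual channel: ~{peak_bw_gb_s(5600, 2):.1f} GB/s")  # ~89.6
print(f"DDR4-3200, dual channel: ~{peak_bw_gb_s(3200, 2):.1f} GB/s")  # ~51.2
```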

[-] Naz@sh.itjust.works 4 points 1 year ago* (last edited 1 year ago)

I've gotten LLaMA running locally using CLBlast on an AMD GPU, while using the CPU simultaneously (basically the APU execution pathway).

AMD is seriously slacking when it comes to machine learning. The hardware is uber powerful, but, just like everyone complains about, the software isn't there.

ROCm doesn't even work on Windows, FFS.

You can run models on almost anything, but the token generation is extremely slow. Like, you might be waiting upwards of 5 minutes for a response, or getting something like 0.2-0.6 tokens per second, which, for a minimum of 100 tokens to be coherent, is abysmal.
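To put those rates in perspective, quick arithmetic on the 100-token figure above:

```python
# Wall-clock time for a ~100-token reply at the quoted generation rates.
for rate in (0.2, 0.4, 0.6):  # tokens per second
    print(f"{rate} tok/s -> {100 / rate / 60:.1f} minutes for 100 tokens")
```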

[-] django@discuss.tchncs.de 3 points 1 year ago

Isn't Windows for gaming and weird proprietary applications like Photoshop?

[-] Kerfuffle@sh.itjust.works 2 points 1 year ago

If you're using llama.cpp, some ROCm support recently got merged in. It works pretty well, at least on my 6600. I believe there were instructions for getting it working on Windows in the pull.

[-] Naz@sh.itjust.works 2 points 1 year ago

Thank you so much! I'll be sure to check that out / get it updated
