this post was submitted on 12 Feb 2024
178 points (98.9% liked)

[–] xyguy@startrek.website 61 points 10 months ago (1 children)

This is the sort of thing that to me highlights the inherent inefficiency of proprietary software and processes.

"Oh sorry, you'll need our magic hardware in order to run this software. It simply can't happen any other way."

Turns out that wasn't true, and of course it never was.

Imagine if, instead, everyone could have been working together on a fully open graphics compute stack. Sure, optimize it for the hardware you sell, why not, but then it comes down to the "best" product instead of the one with the magic software juice.

[–] twei@discuss.tchncs.de 8 points 10 months ago

but then it’s up to the “best” product

that's exactly why it didn't happen

[–] haui_lemmy@lemmy.giftedmc.com 25 points 10 months ago (1 children)

Cue the nvidia shills that find some reason still why amd is not objectively better.

[–] troyunrau@lemmy.ca 11 points 10 months ago

Not a shill. Don't like Nvidia. But this drop-in replacement is more a framework for a future fully compatible drop-in replacement than a fully functional one. It's like Wine two decades ago versus Windows -- you might get a few things to work...
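For context, the "drop-in" part of a project like this typically works at the dynamic-linker level: the loader is pointed at a directory whose libcuda.so is the replacement implementation, so an unmodified CUDA binary picks it up instead of NVIDIA's driver library. A rough sketch of that mechanism (the path and application name below are illustrative assumptions, not taken from the project's docs):

```shell
# Illustrative only: /opt/zluda and ./some-cuda-application are made-up names.
# Prepending the replacement directory to the loader's search path makes an
# unmodified binary resolve libcuda.so against the drop-in implementation.
LD_LIBRARY_PATH="/opt/zluda:$LD_LIBRARY_PATH" ./some-cuda-application
```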

[–] cbarrick@lemmy.world 14 points 10 months ago (1 children)

After two years of development and some deliberation, AMD decided that there is no business case for running CUDA applications on AMD GPUs. One of the terms of my contract with AMD was that if AMD did not find it fit for further development, I could release it. Which brings us to today.

From https://github.com/vosen/ZLUDA?tab=readme-ov-file#faq

[–] NightAuthor@lemmy.world 4 points 10 months ago

So AMD already gave up on this, and if they hadn't, they'd have kept it proprietary?

[–] MayonnaiseArch@beehaw.org 9 points 10 months ago (3 children)

A serious question - when will Nvidia stop selling their products and start charging rent? Like $50 a month gets you a 4070; your hardware can be a 4090, but that's $100 a month. I give it a year.

[–] poVoq@slrpnk.net 11 points 10 months ago

It's more efficient to rent the same GPU to multiple people at the same time, and Nvidia is already doing that with GeForce Now.

[–] umbrella@lemmy.ml 3 points 10 months ago

whenever the infrastructure is good enough, they'll keep the hardware and stream your workload to you.

[–] Ludrol@szmer.info 3 points 10 months ago

When AI and data center hardware stops being profitable.

[–] autotldr@lemmings.world 5 points 10 months ago

This is the best summary I could come up with:


While there have been efforts by AMD over the years to make it easier to port codebases targeting NVIDIA's CUDA API to run atop HIP/ROCm, it still requires work on the part of developers.

The tooling has improved, such as with HIPIFY to help with auto-generating HIP code, but it isn't a simple, instant, and guaranteed solution -- especially if striving for optimal performance.

In practice, for many real-world workloads, ZLUDA is a solution for end-users to run CUDA-enabled software without any developer intervention.

Here is more information on this "skunkworks" project that is now available as open-source along with some of my own testing and performance benchmarks of this CUDA implementation built for Radeon GPUs.

For reasons unknown to me, AMD decided this year to discontinue funding the effort and not release it as any software product.

Andrzej Janik reached out and provided access to the new ZLUDA implementation for AMD ROCm to allow me to test it out and benchmark it in advance of today's planned public announcement.


The original article contains 617 words, the summary contains 167 words. Saved 73%. I'm a bot and I'm open source!
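The HIPIFY step described in the summary is, at its core, source-to-source translation: renaming CUDA runtime API calls and headers to their HIP equivalents. A toy sketch of that idea follows; the mapping table and helper function are illustrative, not the real tool's logic, which handles far more cases.

```python
# Toy illustration of hipify-style textual translation (NOT the real tool):
# CUDA runtime names are mapped onto their HIP equivalents.
CUDA_TO_HIP = {
    "cuda_runtime.h": "hip/hip_runtime.h",
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaMemcpyHostToDevice": "hipMemcpyHostToDevice",
    "cudaFree": "hipFree",
}

def toy_hipify(source: str) -> str:
    """Apply each CUDA->HIP rename as a plain textual substitution."""
    for cuda_name, hip_name in CUDA_TO_HIP.items():
        source = source.replace(cuda_name, hip_name)
    return source

cuda_src = "#include <cuda_runtime.h>\ncudaMalloc(&ptr, n);\ncudaFree(ptr);"
print(toy_hipify(cuda_src))
```

Even with real tooling doing this kind of rewrite far more robustly, the article's point stands: it is a developer-side port, whereas ZLUDA aims to make the unmodified CUDA binary just run.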

[–] Ludrol@szmer.info 4 points 10 months ago (1 children)

There's still no support for ROCm on Linux, but this is still good to hear.

[–] leopold@lemmy.kde.social 9 points 10 months ago (1 children)

what do you mean? ROCm does support Linux, and so does ZLUDA.

[–] Ludrol@szmer.info 3 points 10 months ago (1 children)
[–] leopold@lemmy.kde.social 3 points 10 months ago

Officially, sure, these are the only supported cards. In practice, it works with most AMD cards.