this post was submitted on 27 Jan 2025
879 points (98.1% liked)

Technology


This is a most excellent place for technology news and articles.



cross-posted from: https://lemm.ee/post/53805638

(page 2) 50 comments
[–] ChiefGyk3D@infosec.pub 57 points 3 days ago (11 children)

My understanding is that DeepSeek still used Nvidia hardware, just older models, and way more efficiently, which was remarkable. I hope to tinker with the open-source stuff, at least for a little Twitch chat bot for my streams that I was already planning to build with OpenAI. It will be even more remarkable if I can run this locally.
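For what it's worth, a bot like that can point at a locally hosted model instead of OpenAI with almost no code changes, since local servers such as Ollama or llama.cpp typically expose an OpenAI-compatible endpoint. A minimal sketch, where the endpoint URL and the `deepseek-r1:7b` model name are assumptions about your local setup, not anything from this thread:

```python
# Hypothetical sketch: a Twitch chat bot reply backed by a locally hosted model.
# Assumes a local server (e.g. Ollama) exposing an OpenAI-compatible API at
# http://localhost:11434/v1 -- adjust the URL and model name to your setup.
import json
import urllib.request

LOCAL_ENDPOINT = "http://localhost:11434/v1/chat/completions"  # assumption
MODEL = "deepseek-r1:7b"  # assumption: a distilled model pulled locally

def build_payload(history, user_msg, model=MODEL):
    """Assemble an OpenAI-style chat request from recent chat history."""
    messages = [{"role": "system",
                 "content": "You are a friendly Twitch chat bot."}]
    messages += history[-10:]  # keep the prompt small for local hardware
    messages.append({"role": "user", "content": user_msg})
    return {"model": model, "messages": messages}

def ask_local_model(history, user_msg):
    """POST the chat request to the local server and return the reply text."""
    req = urllib.request.Request(
        LOCAL_ENDPOINT,
        data=json.dumps(build_payload(history, user_msg)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

The nice part of the OpenAI-compatible convention is that swapping between a hosted API and a local model is mostly a matter of changing the base URL.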

However, this is embarrassing for the Western companies working on AI, especially after the $500B Stargate announcement, as it proves we don't need nearly as high-end an infrastructure to achieve the same results.

[–] sunzu2@thebrainbin.org 33 points 3 days ago

$500B of "trust me, bro"... to shake down the US taxpayer for subsidies.

Read between the lines, folks.

[–] index@sh.itjust.works 38 points 3 days ago (6 children)

It still relies on Nvidia hardware, so why would it trigger a sell-off? Also, why are all the media picking up this news? I smell something fishy here...

[–] ArchRecord@lemm.ee 18 points 2 days ago (16 children)

Here's someone doing 200 tokens/s (for context, OpenAI doesn't usually get above 100) on... A Raspberry Pi.

Yes, the "$75-$120 micro computer the size of a credit card" Raspberry Pi.

If all these AI models can be run directly on users' devices, or on extremely low-end hardware, who needs large quantities of top-of-the-line GPUs?

[–] aesthelete@lemmy.world 21 points 2 days ago* (last edited 2 days ago)

Thank the fucking sky fairies, actually, because even if AI continues to mostly suck, it'd be nice if it didn't swallow up every potable lake in the process. When this shit is efficient, it's only mildly annoying instead of a complete shitstorm of failure.

[–] Railcar8095@lemm.ee 33 points 3 days ago (1 children)

The way I understood it, it's much more efficient, so it should require less hardware.

Nvidia will still sell that hardware, an obscene amount of it, and the line will go up. But it will go up slower than Nvidia expected, because anything other than infinite, always-accelerating growth means you're not good at business.

[–] PhAzE@lemmy.ca 22 points 3 days ago (4 children)

It requires only 5% of the hardware that OpenAI needs to do the same thing. That can mean a smaller quantity of top-end cards, and it can also run on less powerful (not top-of-the-line) cards.

Should their models become standard or more commonly used, Nvidia's sales will drop.

[–] daddy32@lemmy.world 30 points 3 days ago

Great, a stock sale.

[–] Kazumara@discuss.tchncs.de 39 points 3 days ago (3 children)

Hm, even with DeepSeek being more efficient, wouldn't that just mean the rich corps throw the same amount of hardware at it to achieve a better result?

In the end, I'm not convinced this would even reduce hardware demand. It's funny that this, of all things, deflates part of the bubble.

[–] UnderpantsWeevil@lemmy.world 51 points 3 days ago* (last edited 3 days ago) (1 children)

Hm even with DeepSeek being more efficient, wouldn’t that just mean the rich corps throw the same amount of hardware at it to achieve a better result?

Only up to the point where the AI models yield value (which is already heavily speculative). If nothing else, DeepSeek makes Altman's plan for $1T in new data-centers look like overkill.

The revelation that you can get 100x gains by optimizing your code rather than throwing endless compute at your model means the value of graphics cards goes down relative to the value of PhD-tier developers. Why burn through a hundred warehouses full of cards to do what a university mathematics department can deliver in half the time?
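As a toy illustration of that point (not DeepSeek's actual optimizations, which are far more involved): an algorithmic fix can beat raw compute by orders of magnitude, with zero extra hardware. Computing every prefix sum of a list naively is O(n²); keeping a running total makes it O(n):

```python
# Same result, wildly different cost: optimization vs. brute force.
def prefix_sums_naive(xs):
    # Re-sums the whole prefix each time: O(n^2) total work.
    return [sum(xs[:i + 1]) for i in range(len(xs))]

def prefix_sums_fast(xs):
    # Carries a running total: O(n) total work.
    out, total = [], 0
    for x in xs:
        total += x
        out.append(total)
    return out
```

For a million-element list, the second version does roughly a million times less addition work, which is the same flavor of win as replacing brute-force compute with a better training recipe.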

[–] AppleTea@lemmy.zip 8 points 2 days ago* (last edited 2 days ago) (1 children)

you can get 100x gains by optimizing your code rather than throwing endless compute at your model

woah, that sounds dangerously close to saying this is all just developing computer software. Don't you know we're trying to build God???

[–] RubicTopaz@lemmy.world 38 points 3 days ago (1 children)

Finally, a properly good open-source model, as all tech should be.

[–] thespcicifcocean@lemmy.world 11 points 2 days ago (11 children)

Nvidia falling doesn't make much sense to me; GPUs are still needed to run the model. Unless Nvidia is working on its own AI model or something?

[–] sith@lemmy.zip 25 points 3 days ago

Nice. Fuck you Nvidia.

[–] jlh@lemmy.jlh.name 45 points 3 days ago (8 children)

Bizarre story. China building better LLMs, and LLMs being cheaper to train, does not mean that Nvidia will sell fewer GPUs when people like Elon Musk and Donald Trump can't shut up about how important "AI" is.

I'm all for the collapse of the AI bubble, though. It's cool and all that all the bankers know IT terms now, but the massive influx of money toward LLMs and the datacenters that run them has not been healthy for the industry or the broader economy.
