SavageCoconut@lemmy.world 24 points 1 week ago

I'm going to copy two comments from TPU that I think are the most accurate takes on this proposal:

This doesn't solve anything at all. You still have to plug power into the motherboard, likely via the problematic 12V-2x6 connector once they extend it beyond 250W. Now you also need to dedicate additional (expensive) PCB layers and throw a lot more (expensive) copper traces at it too. It's just a glorified extension cable that adds another connector to the equation. Why go from the PSU directly to the GPU when you can now do it with extra steps!! (/s) A cable is still required, but now you need a new motherboard too... Why? Because cables are evil, apparently, and this fits the dumb BTF form factor that's all about form over function, for people with infinite wallets.

This is a stupid idea. It should be separate, as always. It also makes things worse if connector-gate happens again, since now you fry a much larger PCB instead of a tiny GPU board. And the higher power means you need to worry about heat on the mobo now too. It makes repair more difficult as well. But I'm not surprised. Asus has always sucked at the user-friendly aspect, and has the worst customer support. Apparently that's the trend, keeping up with the "you'll own nothing and be happy" motto of the stream-everything, throw-away-everything, always-in-debt mentality.
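To put rough numbers on the extra copper and heat those comments are worried about, here's a back-of-envelope sketch. The card wattages and the milliohm path resistances are illustrative assumptions, not figures from the ASUS proposal:

```python
# Why moving GPU power delivery into the motherboard means "a lot more copper":
# at 12 V the currents are large, and resistive loss in the board's power path
# grows with the square of the current (P_loss = I^2 * R).
# The path resistances below are illustrative guesses, not measured values.

RAIL_V = 12.0  # voltage of the rail feeding the GPU

def board_loss_w(load_w: float, path_resistance_ohm: float) -> float:
    """I^2 * R heating in the motherboard's power path for a given GPU load."""
    current_a = load_w / RAIL_V
    return current_a ** 2 * path_resistance_ohm

for load_w in (250, 600):
    for r_mohm in (1, 3):
        loss = board_loss_w(load_w, r_mohm / 1000)
        print(f"{load_w} W card, {r_mohm} mΩ path: "
              f"{load_w / RAIL_V:.0f} A, ~{loss:.1f} W heating the board")

# 250 W card, 1 mΩ path: 21 A, ~0.4 W heating the board
# 250 W card, 3 mΩ path: 21 A, ~1.3 W heating the board
# 600 W card, 1 mΩ path: 50 A, ~2.5 W heating the board
# 600 W card, 3 mΩ path: 50 A, ~7.5 W heating the board
```

Even a few milliohms in the board's path turns into several watts of waste heat at 600 W-class loads, which is exactly the copper and cooling budget a motherboard wouldn't otherwise need.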

tiramichu@sh.itjust.works 8 points 1 week ago

Yeah, it doesn't make sense.

I could understand the rationale for wanting a high-power PCIe specification if there were multiple PCIe devices that could benefit from extra juice, but it's literally just the graphics card.

One might argue, "Oh, but what if you had multiple GPUs? Then it makes sense!" Except it doesn't, because the additional power would only be enough for ONE high-performance GPU. For multiple GPUs you'd need even more motherboard power sockets...

It's complexity for no reason, or purely for aesthetics. The GPU is the device that needs the power, so give the GPU the power directly, as we already are.
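Rough numbers behind the "only enough for one GPU" point, as a quick sketch; the 600 W slot budget and the ~575 W per-card draw are assumptions for illustration, not spec figures:

```python
# Sanity check: can one motherboard-side power budget feed two high-end GPUs?
# Both numbers below are assumptions for illustration only.
SLOT_POWER_BUDGET_W = 600     # assumed rating of one board-side GPU power feed
CARD_DRAWS_W = [575, 575]     # two flagship-class cards at ~575 W each

total_w = sum(CARD_DRAWS_W)
shortfall_w = total_w - SLOT_POWER_BUDGET_W

print(f"Two cards need ~{total_w} W against a {SLOT_POWER_BUDGET_W} W budget "
      f"-> short by {shortfall_w} W, so a second feed (or cable) is needed anyway")
# Two cards need ~1150 W against a 600 W budget -> short by 550 W, so a second feed (or cable) is needed anyway
```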

tal@olio.cafe 5 points 1 week ago

I could understand the rationale for wanting a high-power PCIe specification if there were multiple PCIe devices that could benefit from extra juice, but it's literally just the graphics card.

There was a point in the past when it was common to run multiple GPUs. Today, that's not something you'd normally do unless you're doing some kind of parallel compute project, because games don't support it.

But it might be the case, if stuff like generative AI is in major demand, that sticking more parallel compute cards in systems becomes a thing.

cmnybo@discuss.tchncs.de 3 points 1 week ago

But it might be the case, if stuff like generative AI is in major demand, that sticking more parallel compute cards in systems becomes a thing.

Then you could be looking at multiple kilowatts being supplied by the motherboard. It would need large busbars if they stuck with 12V.
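For a sense of scale, here's a quick sketch of what "multiple kilowatts through the motherboard" means in amps at 12 V, with a 48 V comparison for contrast; the load figures are illustrative assumptions:

```python
# Current required to deliver a given power at a given rail voltage (I = P / V).
# The kilowatt-scale loads and the 48 V alternative are illustrative, not taken
# from any actual proposal.

def amps(power_w: float, volts: float) -> float:
    """Current in amps needed to deliver power_w at the given rail voltage."""
    return power_w / volts

for power_w in (2000, 3000, 4000):
    print(f"{power_w / 1000:.0f} kW: {amps(power_w, 12.0):.0f} A at 12 V "
          f"vs {amps(power_w, 48.0):.0f} A at 48 V")

# 2 kW: 167 A at 12 V vs 42 A at 48 V
# 3 kW: 250 A at 12 V vs 62 A at 48 V
# 4 kW: 333 A at 12 V vs 83 A at 48 V
```

Hundreds of amps at 12 V really is busbar territory, which is the point above; the same power at a higher rail voltage needs far less copper.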
