That’s a huge generalization, and it depends on what you use your system for. Some people might be on old Threadripper workstations that work fine, for instance, and can slap in a second GPU. Or maybe someone needs more cores for work; they can just swap their CPU out. Maybe your 4K gaming system can make do with an older CPU.
I upgraded RAM and storage just before the RAMpocalypse, and that’s not possible on many laptops. And I can stuff a whole bunch of SSDs into the case and use them all at once.
I’d also argue that ATX desktops are more protected from anti-consumer behavior, like soldered price-gouged SSDs, planned obsolescence, or a long list of things you see Apple do.
…That being said, there are a lot of trends going against people, especially for gaming:
- There’s “initial build FOMO,” where buyers max out their platform at the start, even if that’s financially unwise and they miss out on sales/deals.
- We just went from DDR4 to DDR5, on top of some questionable segmentation from AMD/Intel. So yeah, sockets aren’t the longest-lived.
- Time gaps between generations are growing as silicon gets more expensive to design.
- Buyers are collectively stupid and bandwagon. See: the crazy low-end Nvidia GPU sales when they have every reason to buy AMD/Intel/used Nvidia instead. So they are rewarding bad behavior from companies.
Individual parts are more repairable. If my 3090 or mobo dies, for instance, I can send it to a repairperson and have a good chance of saving it.
You can still keep your PSU, case, CPU cooler, storage and such. It’s a drop in the bucket cost-wise, but it’s not nothing.
IMO things would be a lot better if GPUs were socketable, with LPCAMM on a motherboard.
You nailed it, except "huge generalization" is actually being generous. The article is simply wrong. The author is speaking esoteric gobbledygook:
It's the sort of argument a husband might give his non-tech-savvy wife when she asks why he repeatedly needs to spend so much $$$ on something only he uses.
I think FOMO says it pretty well, or simply consumerism.
Now that hardware is getting more expensive again, this is really sending the wrong message.
And OP keeps doubling & tripling down despite basically every comment disagreeing. I think they wrote that article.
Don't forget about PCIe expansion. Just yesterday I got a FireWire PCIe card for 20€ to transfer old DV tapes to digital with no quality loss. Plug the card in and you're done. To get the same result on a laptop I'd need a Thunderbolt port and two adapters, one of which isn't manufactured anymore and goes for 150€+ in secondhand stores.
While throwing out working things is terrible, the cost of servicing a motherboard outpaces the cost of replacing it. They can possibly still charge you 200 dollars just to tell you the board can't be fixed, right? I think the right balance is to observe the warranty period, try to troubleshoot it yourself, and then call it a day, unless you have a 400+ dollar motherboard.
Yeah, probably. I actually have no idea what they charge, so I’d have to ask.
It’d be worth it for a 3090 though, no question.
Typically I’ve seen a motherboard support about two generations of GPU before some underlying technology means it can no longer keep up.
If you are going from a 30-series to a 50-series GPU, there is going to be a need for increased PCIe bandwidth, in terms of both lanes and PCIe spec, for it to be fully utilized.
I just saw this play out with a coworker who replaced 2x 3090 with a 5090. The single card is faster, but now he can’t fully task his storage and GPU at the same time due to PCIe lane limits. So it’s a new motherboard, which needs a new CPU, which needs new RAM.
Basically, a two-generation GPU upgrade needs a whole new system.
Each generation of PCIe doubles bandwidth, so a future x2 PCIe 6.0 GPU will need an x8 PCIe 4.0 link’s worth of bandwidth.
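If you want to sanity-check that doubling math, here’s a rough sketch (the per-lane throughput values are approximate post-encoding figures, and the link configs are just illustrative):

```python
# Approximate usable bandwidth per PCIe lane, one direction, in GB/s
# (real-world figures after encoding overhead vary slightly).
PER_LANE_GBPS = {3: 1.0, 4: 2.0, 5: 4.0, 6: 8.0}

def link_bandwidth(gen: int, lanes: int) -> float:
    """Rough one-direction bandwidth of a PCIe link in GB/s."""
    return PER_LANE_GBPS[gen] * lanes

print(link_bandwidth(6, 2))  # 16.0 -- a hypothetical x2 PCIe 6.0 link
print(link_bandwidth(4, 8))  # 16.0 -- same as an x8 PCIe 4.0 link
```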
Even then, GPUs and CPUs have been getting more power hungry. Unless you over-spec your PSU, there is a reasonable chance that once you get past two GPU generations you need a bigger PSU. Power supplies are wear items: they continue to function, but may not provide power as cleanly once you get to 5+ years of continuous use.
Sure, you can keep the case and PSU, but literally everything else will run over Thunderbolt or USB-C without penalties.
At this point, why not run storage outside the box for anything sizeable? Anything fast runs on internal NVMe.
This doesn’t make any sense, especially the 2x 3090 example. I’ve run my 3090 at PCIe 3.0 over a riser, and there’s only one niche app where it ever made any difference. I’ve seen plenty of benches show PCIe 4.0 is just fine for a 5090:
https://gamersnexus.net/gpus/nvidia-rtx-5090-pcie-50-vs-40-vs-30-x16-scaling-benchmarks
1x 5090 uses the same net bandwidth, and half the PCIe lanes, as 2x 3090.
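Same napkin math, assuming a 3090 is a PCIe 4.0 x16 card (~2 GB/s per lane) and a 5090 is PCIe 5.0 x16 (~4 GB/s per lane):

```python
# Approximate per-lane GB/s: PCIe 4.0 ~= 2, PCIe 5.0 ~= 4.
two_3090s = 2 * (2.0 * 16)  # two PCIe 4.0 x16 cards: 64 GB/s over 32 lanes
one_5090 = 4.0 * 16         # one PCIe 5.0 x16 card:  64 GB/s over 16 lanes
print(two_3090s, one_5090)  # 64.0 64.0 -- same net bandwidth, half the lanes
```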
Storage is, to my knowledge, always on a separate bus from graphics, so that also doesn’t make any sense.
My literally ancient TX750 still worked fine with my 3090, though it has since been moved. I’m just going to throttle any GPU that uses more than 420W anyway, as that’s ridiculous and past the point of diminishing returns.
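If anyone wants to do the same, here’s a minimal sketch using nvidia-smi’s power-limit flag (the 420W value is just my cutoff; it needs admin rights, and the cap may reset on reboot unless persistence mode is enabled):

```python
# Minimal sketch: clamp an NVIDIA GPU's board power with nvidia-smi -pl.
# Requires root/admin; the limit may not survive a reboot.
import subprocess

def set_power_limit(watts: int, gpu: int = 0) -> None:
    """Set the board power cap (in watts) on the given GPU index."""
    subprocess.run(
        ["nvidia-smi", "-i", str(gpu), "-pl", str(watts)],
        check=True,
    )

set_power_limit(420)  # the 420W ceiling mentioned above
```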
And if you are buying a 5090… a newer CPU platform is like a drop in the bucket.
I hate to be critical, and there are potential issues, like severe CPU bottlenecking or even instruction support. But… I don’t really follow where you’re going with the other stuff.
That is the point of the article.
The problem my friend has is that he is rendering video, so he has a high-performance SAS host adapter on the same PCIe bus as the GPU. He upgraded both hoping the 5090 would play nicer with the SAS adapter, but he can't pull full disk bandwidth and render images at the same time. Maybe it's OK for gaming, but not for compute while writing to disk.
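For context on why the two cards fight at all: consumer CPUs expose a fixed PCIe lane budget, and a sketch like this plays out (the lane counts are illustrative assumptions, not his actual board):

```python
# Illustrative lane budgeting on a consumer platform. Numbers are
# assumptions: desktop CPUs expose roughly 20-28 lanes, with some
# reserved for NVMe drives and the chipset link.

def full_speed(usable_lanes: int, *device_lanes: int) -> bool:
    """True if every device can keep its full link width at once."""
    return sum(device_lanes) <= usable_lanes

usable = 20  # e.g. 24 CPU lanes minus 4 reserved for the chipset
print(full_speed(usable, 16, 8))  # False: GPU x16 + SAS HBA x8 don't fit,
                                  # so the slots bifurcate to x8/x8 and the
                                  # GPU and HBA each lose half their link
```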
The thing with power supplies is that they continue to provide enough power long after they lose the ability to provide clean power under load. Only when they are really on their last legs will they actually stop providing the rated power. I have seen a persistent networking issue resolved by swapping a power supply. Most of the time you don't test a power supply under load to check whether each rail is staying where it needs to be.