Honestly, I've found that my compute needs were surpassed quite a while ago, so I could easily get away with buying a $300 computer.
Honestly, for real, a lot of low-power PCs are really useful once they have crap like Windows off of them and a lightweight Linux distro on them.
Exactly. Get yourself a somewhat low-end PC, wipe Windows, and install Linux Mint, and you're pretty much golden.
Did exactly this with an old laptop and use it mainly for TV and occasional browsing when staying at our hut/cottage. Still a bit slow, but it works.
I've found my preferences have been creeping up in price again, but only because I've found I want an actually physically lightweight laptop, and those have been getting more available, linux-able and capable.
I only need a few hundred dollars' worth of computer, and anything more can live on a rack somewhere. I'll pay more than that for my computer to be light enough that I don't need to think about it.
Up until the early 2000s, serial computation speed doubled about every 18 months. That meant that virtually all software simply ran twice as fast with every 18 months of CPU advances. And since taking advantage of that was trivial, new software releases did: they traded CPU cycles for shorter development time or more functionality, and demanded current hardware to run at a reasonable clip.
In that environment, it was quite important to upgrade the CPU.
But that hasn't been happening for about twenty years now. Serial computation speed still increases, but not nearly as quickly any more.
This is about ten years old now:
https://preshing.com/20120208/a-look-back-at-single-threaded-cpu-performance/
Throughout the 80’s and 90’s, CPUs were able to run virtually any kind of software twice as fast every 18-20 months. The rate of change was incredible. Your 486SX-16 was almost obsolete by the time you got it through the door. But eventually, at some point in the mid-2000’s, progress slowed down considerably for single-threaded software – which was most software.
Perhaps the turning point came in May 2004, when Intel canceled its latest single-core development effort to focus on multicore designs. Later that year, Herb Sutter wrote his now-famous article, The Free Lunch Is Over. Not all software will run remarkably faster year-over-year anymore, he warned us. Concurrent software would continue its meteoric rise, but single-threaded software was about to get left in the dust.
If you’re willing to trust this line, it seems that in the eight years since January 2004, mainstream performance has increased by a factor of about 4.6x, which works out to 21% per year. Compare that to the 28x increase between 1996 and 2004! Things have really slowed down.
We can also look at the roughly twelve years since then, where the slowdown is even more pronounced:
https://www.cpubenchmark.net/compare/2026vs6296/Intel-i7-4960X-vs-Intel-Ultra-9-285K
This is using a benchmark to compare the single-threaded performance of the i7-4960X (Intel's high-end processor back at the start of 2013) to that of the Intel Ultra 9 285K, the current one. In those ~12 years, the latest processor has managed single-threaded performance about (5068/2070) ≈ 2.45 times that of the 12-year-old processor. That's (5068/2070)^(1/12) ≈ 1.077, about a 7.7% performance improvement per year. The age of a processor doesn't matter nearly as much in that environment.
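If you want to sanity-check that compound-rate arithmetic yourself, here's a quick sketch. The two scores (2070 and 5068) are the single-thread benchmark figures quoted above; they may drift as cpubenchmark.net updates its numbers:

```python
# Single-thread benchmark scores quoted above (from cpubenchmark.net):
# i7-4960X (early 2013) ~= 2070, Ultra 9 285K ~= 5068, about 12 years apart.
old_score = 2070
new_score = 5068
years = 12

total_speedup = new_score / old_score       # overall speedup across the period
annual_rate = total_speedup ** (1 / years)  # geometric mean per-year improvement

# prints: total: 2.45x, annual: 7.7% per year
print(f"total: {total_speedup:.2f}x, annual: {(annual_rate - 1) * 100:.1f}% per year")
```

The same formula reproduces the article's older figures too: 4.6 ** (1/8) ≈ 1.21, i.e. the ~21% per year it cites for 1996–2004... er, 2004–2012.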
We still have had significant parallel computation increases. GPUs in particular have gotten considerably more powerful. But unlike serial compute, parallel compute isn't a "free" performance improvement -- software needs to be rewritten to take advantage of it, many problems are hard to parallelize, and some can't be parallelized at all.
Honestly, I'd say that the most-noticeable shift is away from rotational drives to SSDs -- there are tasks for which SSDs can greatly outperform rotational drives.
My line for computational adequacy was crossed with the Core 2 Duo. Any chip since has been fine for everyday administration or household use, and they're all still fine running Linux.
Any Apple silicon, including the M1, is now adequate even for high-end production, setting a new low bar and a new watershed.
You know, that would explain a lot because I had no idea that there was an authentication pin and that's total bullshit.
For real, I'm happily using an APU for 90% of the time. I barely need a dedicated GPU at all any more. I use Mint btw.
I was that way for the longest time. I was more than content with my 4 core 8 thread 4th Gen. i7 laptop. I only upgraded to an 11th Gen. i9 system because I wanted to play some games on the go.
But after I upgraded to that system I started to do so much more, and all at once, mostly because I actually could; the old system would have cried in pain long before then. But mid last year I finally broke down and bought a 13th Gen. i9 system to replace it, and man do I flog the shit out of this computer. Just having the spare power lying around made me want to do more and more with it.
Linux, after all, already runs on the Grace Blackwell Superchip. Windows doesn’t.
And why is that?
Project DIGITS features the new NVIDIA GB10 Grace Blackwell Superchip, offering a petaflop of AI computing performance for prototyping, fine-tuning and running large AI models.
With the Grace Blackwell architecture, enterprises and researchers can prototype, fine-tune and test models on local Project DIGITS systems running Linux-based NVIDIA DGX OS, and then deploy them seamlessly on NVIDIA DGX Cloud™, accelerated cloud instances or data center infrastructure.
Oh, because it's not a fucking consumer product. It's for enterprises that need a cheap supercomputer.
*monkey paw closes*
But it's just for AI bullshit.
I don't care why they got their shit together, I'm happy as long as they fix the open source drivers.
Where's the PC? Is it the brick on the desk? 🤣
https://www.nvidia.com/en-us/project-digits/
Yes, actually lmao
But... Why make it so ugly?
I actually think it looks neat.
The mock-up looks much cooler than the actual device in the photo (assuming that's actually it and not just a render or something).
The one in the photo looks like they cobbled it together from an old cardboard box.
Yeah and stuck a bunch of gold glitter on it with Elmer's glue.
NVCC is still proprietary and full of telemetry. You cannot build CUDA without it.
Well, it's still a modified custom distro and other distros will need to invest extra effort to be able to run there. So, no actual freedom of choice for users again...
Don't get too excited -- if this goes like the last few NVidia hardware launches, it will:
- cost too much
- run a non-mainline kernel
- lose NVidia support after 3 months
Go talk to all the Jetson owners out there and see how happy they are with NVidia Linux boxes. I'll believe it when I see it (and when it is supported for longer than a quarter)
I hope to see some nice Risc-V PCs soon
Or you can just buy any random potato computer (or assemble it yourself from stuff you found) and still run Linux on it.
I'm planning on getting a new PC soon. I was planning on avoiding Nvidia because I had read it might be more difficult to get drivers. Does this mean they are going to improve things in general, or just for the newest and likely most expensive stuff? I don't want to buy the newest possible GPU, since new ones always have bloated prices and a slightly older one is likely decent enough.
Nvidia drivers on Linux are messy and have been for a long time. It took them ages to fix Vsync in Wayland. If you want to run Linux, go AMD (or Intel).
I was planning on getting some AMD GPU. Are there any other components that might have similar issues? I want to build this PC specifically for Linux.
Not really, everything else should just work. At least if you don't plan to buy an obscure USB sound card or something like that 😄
Have fun!
Haven't they been making things like the Jetson AGX for years? I guess this is an announcement of the next generation.
Not Acer. I’ve been burnt by them too much in the past.
Can't load the article. Does it mention whether these will be ARM computers?