this post was submitted on 24 Nov 2025
586 points (93.5% liked)

Today I Learned

[–] Luffy879@lemmy.ml 97 points 2 weeks ago (6 children)

If you look at it logically, it only makes sense.

With these supercomputers, you often run on very specialized hardware that you have to write custom kernels and drivers for, and if you aren't willing to spend millions to get Microsoft to support it, your only other option is really Linux.

[–] Eggymatrix@sh.itjust.works 43 points 2 weeks ago (5 children)

Not really; we're not in the eighties anymore. Modern supercomputers are mainly a bunch of off-the-shelf servers connected together.

[–] remotelove@lemmy.ca 37 points 2 weeks ago (1 children)

They still probably need a ton of customization and tuning at the driver level and beyond, which open source allows for.

I am sure there is plenty of existing "super computer"-grade software in the wild already, but a majority of it probably needs quite a bit of hacking to get running smoothly on newer hardware configurations.

As a matter of speculation, the engineers and scientists that build these things are probably hyper-picky about how some processes execute and need extreme flexibility.

So, I would say it's a combination of factors that make Linux a good choice.

[–] jj4211@lemmy.world 10 points 2 weeks ago

Surprisingly, there's not a lot of "exciting tuning"; a lot of these systems are exceedingly conservative in that regard. From a software perspective, the most common "weird" thing is the affinity for diskless boot, and that mostly comes from a history of hard drives being a frequent failure that caused downtime (yes, the stateless nature of diskless boot continues to be desired, but the community would likely never have bothered if not for OS HDD failures). They also sometimes like managing the OS kind of like a common chroot, to oversimplify, but that's mostly about running hundreds of thousands of what should be exactly the same thing over and over again, rather than any exotic nature of their workload.

Linux is largely the choice by virtue of this market having evolved from a largely Unix-based one where most of the applications in use were open source, out of necessity: that let these institutions bid, say, Sun versus IBM versus SGI and keep working regardless of who was awarded the business. In that time frame Windows NT wasn't even an idea, and most of these institutions wouldn't touch 'freeware' for such important tasks.

In the 90s Linux happened and, critically for this market, Red Hat and SUSE happened. Now they could have a much more vibrant and fungible set of hardware vendors, with credible commercial software vendors that could support all of them. Bonus: you could run the distributions or clones for free, which helped a lot of the smaller academic institutions get a reasonable shot without diverting money from hardware to software. Sure, some aggressively exotic things might have been possible versus the prior norm of proprietary systems, but mostly it was about the improved vendor-to-vendor consistency.

Microsoft tried to get into this market in the late 2000s, but no one asked for them. They had poor compatibility with any existing code, were more expensive, and were much worse at managing at scale in the context of headless, multi-user compute nodes.

[–] Gullible@sh.itjust.works 14 points 2 weeks ago (4 children)

So is it just hundreds of servers, each running their own OS and coordinating on tasks?

[–] olosta@lemmy.world 19 points 2 weeks ago

Some have thousands, but yes. On most of these systems:

  • Process launch and scheduling is handled by a resource manager (SLURM is common)
  • Inter-process communication uses an MPI implementation (like Open MPI)
  • Inter-node communication uses a low-latency (and high-bandwidth) network; this space is dominated by InfiniBand from Nvidia (formerly Mellanox)

What's really peculiar by modern IT standards is that these systems often use old-school Unix multi-user management: users connect through SSH with their own usernames, work on a POSIX filesystem, and their processes run under their own accounts.

There are kernel knobs to pay attention to, but generally standard RHEL kernels are used.
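
To make the resource-manager-plus-MPI picture above concrete, here is a minimal sketch of what such a job typically looks like. It assumes Open MPI's mpicc wrapper and a SLURM launcher (srun), which is a common but not universal setup; the work-splitting and the toy computation are purely illustrative.

```c
/* Minimal MPI sketch: each rank takes a slice of the work, then rank 0
 * collects the result. Build with: mpicc mpi_sum.c -o mpi_sum
 * Launch (SLURM example, assumed setup): srun -N 4 --ntasks-per-node=2 ./mpi_sum */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);   /* which process am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);   /* how many processes in total? */

    /* Split a fixed range of work evenly across all ranks. */
    const long total = 1000000;
    long chunk = total / size;
    long start = rank * chunk;
    long end   = (rank == size - 1) ? total : start + chunk;

    double local = 0.0;
    for (long i = start; i < end; i++)
        local += 1.0 / (i + 1);             /* stand-in for a real computation */

    /* Combine the partial results on rank 0 over the interconnect. */
    double global = 0.0;
    MPI_Reduce(&local, &global, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("partial sums combined across %d ranks: %f\n", size, global);

    MPI_Finalize();
    return 0;
}
```

In practice this would usually be wrapped in an sbatch script that requests nodes and tasks and then calls srun; each rank ends up as an ordinary Linux process under the submitting user's account, which fits the plain multi-user setup described above.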

[–] Treczoks@lemmy.world 12 points 2 weeks ago

This is called a "cluster", and the concept predates Linux by a decade or two, but yes.

And what else would the supercomputers run on? Windows? You won't crack the top rankings if half your computers are bluescreening while the other half is busy updating...

The days when supercomputers were batch-oriented machines, where your calculation was the only thing running on the hardware and your software basically included the OS (or at least the parts you needed), are long gone.

[–] virku@lemmy.world 7 points 2 weeks ago
[–] ji59@hilariouschaos.com 2 points 2 weeks ago

I think the software is specialized, but the hardware is not. They use some smart algorithms to distribute computation over a huge number of workers.

[–] sp3ctr4l@lemmy.dbzer0.com 5 points 2 weeks ago* (last edited 2 weeks ago)

I mean, what the first person said is true...

... and what you have just said is true.

There is no tension between these concepts.

Nearly all servers run on Linux, and nearly all supercomputers are some kind of locally networked cluster... that runs Linux.

There's... there's no conflict here.


In fact, this kind of multi-computer paradigm for Linux is at the core of why X11 is weird and fucky in the context of a modern, self-contained PC, and why Wayland is a thing nowadays.

X11 is built around a paradigm where you have a whole bunch of hardware units doing the actual calculations of some kind, and then some teeny tiny piece of hardware that is basically just a display and input device... well, that's the only thing that even needs to load any display- or input-related code/software/libraries.
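
To illustrate that split, here is a rough Xlib sketch of X11's network-transparent model: the program can run on a big compute machine and simply names the X server it wants to draw on, which in the classic setup is the thin terminal on someone's desk. The host name "thin-client" is made up for the example; normally the display string comes from the DISPLAY environment variable.

```c
/* Xlib sketch: all drawing and input travel over the X11 protocol to
 * whichever X server the display string names. Build with: cc x11_demo.c -lX11 */
#include <X11/Xlib.h>
#include <stdio.h>

int main(void) {
    /* Hypothetical remote terminal; pass NULL to use $DISPLAY instead. */
    Display *dpy = XOpenDisplay("thin-client:0");
    if (!dpy) {
        fprintf(stderr, "cannot reach the X server\n");
        return 1;
    }

    int scr = DefaultScreen(dpy);
    Window win = XCreateSimpleWindow(dpy, RootWindow(dpy, scr),
                                     10, 10, 300, 120, 1,
                                     BlackPixel(dpy, scr), WhitePixel(dpy, scr));
    XSelectInput(dpy, win, KeyPressMask);
    XMapWindow(dpy, win);   /* the window appears on the terminal, not here */
    XFlush(dpy);

    /* Wait for one key press, delivered back over the same connection. */
    XEvent ev;
    do {
        XNextEvent(dpy, &ev);
    } while (ev.type != KeyPress);

    XCloseDisplay(dpy);
    return 0;
}
```

The compute side needs no GPU, no display drivers and no local graphics stack at all, which is exactly the environment X11 grew up in.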

You also don't really need to worry so much about security in the display/input framework itself, because your only potential threat is basically a rogue employee at your lab, and everyone working there is some kind of trained expert.

This makes sense if your scenario is a self contained computer research facility that is only networked to what is in its building...

... it makes less sense and has massive security problems if you have a single machine that can do all of that, and that single machine is also networked to millions of remote devices (via the modern internet), in a world where computer viruses and malware are a multi-billion-dollar industry... and the average computer user is roughly as intelligent and knowledgeable as a 6th grader.

[–] AnUnusualRelic@lemmy.world 1 points 2 weeks ago

Soooo many raspberry pis...

[–] Valmond@lemmy.world 12 points 2 weeks ago

Nah, it's to avoid the forced Windows reboots /j

[–] SlurpingPus@lemmy.world 7 points 2 weeks ago* (last edited 2 weeks ago) (1 children)

The competition wasn't between Linux and Windows, but rather between Linux and dedicated server OSes like Solaris, HP-UX and whatnot, mostly variants of Unix (idk which ones exactly).

[–] jj4211@lemmy.world 3 points 2 weeks ago

And for those, it's pretty clear. Solaris, HP-UX, Irix, AIX... They all were proprietary offerings that strove to lock in users to a specific hardware stack with very high prices.

Linux opened up the competitive field to a much broader set of business concerns, making performance per dollar much more attractive. Open source also had a great deal of appeal for parts of the academic market, a huge participant in the HPC community.

[–] IphtashuFitz@lemmy.world 7 points 2 weeks ago

I managed a research cluster for a university for about 10 years. The hardware was largely commodity and not specialized, unless you call Nvidia GPUs or InfiniBand "specialized". Linux was the obvious choice because many cluster-aware applications, both open source and commercial, run on Linux.

We even went so far as to integrate the cluster with CERN’s ATLAS grid to share data and compute power for analyzing ATLAS data from the LHC. Virtually all the other grid clusters ran Linux, so that made it much easier to add our cluster to its distributed environment.

[–] Eldritch@piefed.world 5 points 2 weeks ago

Sad BSD noises.

[–] YesButActuallyMaybe@lemmy.ca 2 points 2 weeks ago

It runs on the same hardware you and I could buy if we weren't poor. And no, people don't write their own kernels or drivers, because your vendors will tell you that you can use their supported version or get fucked when something breaks. Yes, there are some optimizations for enterprise hardware, but everyone with half a brain cell can and should tweak the system at that price point. Idk, you don't sound like someone who knows a thing about HPC. Sorry.