enumerator4829

joined 1 year ago
[–] enumerator4829@sh.itjust.works 5 points 1 day ago (1 children)

I wonder how much that high cost could be reduced by modern manufacturing. Same/similar designs, but modern tooling and logistics.

I mean, they did not have CNC mills back then.

Fairly significant factor when building really large systems. If we do the math, there end up being some relationships between

  • disk speed
  • targets for ”resilver” time / risk acceptance
  • disk size
  • failure domain size (how many drives do you have per server)
  • network speed

Basically, for a given risk acceptance and total system size there is usually a sweet spot for disk sizes.
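To put rough numbers on the resilver-time point (a back-of-the-envelope sketch; the 150 MB/s sustained rebuild rate is an assumption, real rates vary with RAID level and load):

```python
# Back-of-the-envelope resilver time: a rebuild has to read/write the
# whole replacement drive, so bigger disks mean longer risk windows.
# 150 MB/s sustained rebuild speed is an assumption, not a spec.

def resilver_hours(disk_tb: float, rebuild_mb_s: float = 150) -> float:
    """Hours to rewrite one full drive at the given sustained rate."""
    return disk_tb * 1e12 / (rebuild_mb_s * 1e6) / 3600

for size in (4, 8, 16):
    print(f"{size} TB drive: ~{resilver_hours(size):.0f} h to resilver")
```

So going from 4TB to 16TB drives roughly quadruples the window during which a second (or third) failure can hurt you — that's the risk-acceptance knob.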

Say you want 16TB of usable space, and you want to be able to lose 2 drives from your array (fairly common requirement in small systems), then these are some options:

  • 3x16TB triple mirror
  • 4x8TB Raid6/RaidZ2
  • 6x4TB Raid6/RaidZ2

The more drives you have, the better recovery speed you get and the less usable space you lose to replication. You also get more usable performance with more drives. Additionally, smaller drives are usually cheaper per TB (down to a limit).
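The space/redundancy tradeoff for the three example layouts can be sanity-checked with quick arithmetic (a sketch; it assumes RAIDZ2 loses exactly two drives' worth of capacity to parity and ignores filesystem overhead):

```python
# Quick sanity math for the example layouts above: usable space and
# redundancy overhead. Assumes RAID6/RAIDZ2 loses two drives' worth of
# capacity to parity; a triple mirror keeps one drive's worth.

layouts = [
    # (name, drive count, drive size in TB, drives lost to redundancy)
    ("3x16TB triple mirror", 3, 16, 2),
    ("4x8TB RAIDZ2",         4,  8, 2),
    ("6x4TB RAIDZ2",         6,  4, 2),
]

for name, n, size, lost in layouts:
    raw = n * size
    usable = (n - lost) * size
    print(f"{name}: {usable} TB usable / {raw} TB raw "
          f"= {100 * usable // raw}% efficiency")
```

All three give 16TB usable and survive two drive failures, but the 6-drive layout gets there with 24TB of raw disk instead of 48TB.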

This means that 140TB drives become interesting if you are building large storage systems (probably at least a few PB) with low performance requirements (archives), but that niche is already dominated by tape robots.

The other interesting use case is truly huge systems, many petabytes up into exabytes. More modern schemes for redundancy and caching mitigate some of the issues described above, but they are usually only relevant when building really large systems.

tl;dr: arrays of 6-8 drives at 4-12TB are probably the sweet spot for most data hoarders.

Oh, I fully agree that the tech behind X is absolute garbage. Still works reasonably well a decade after abandonment.

I’m not saying we shouldn’t move on, I’m saying the architecture and fundamental design of Wayland is broken and was fundamentally broken from the beginning. The threads online when the project was announced were very indicative of the following decade. We are replacing one big unmaintainable pile of garbage with 15 separate piles of hardware accelerated soon-to-be unmaintainable tech debt.

Oh, and a modern server (or rather, the VM you want to host users within) doesn’t usually have a graphics card. I won’t bother doing the pricing calculations, but you are easily looking at 2-5x cost per seat once you price in GPU hardware and licensing for vGPUs and hypervisors.

With Xorg I can easily reach a few hundred active users per standard 1U server. If you make that work on Wayland I know some people happy to dump money on you.

The fundamental architectural issue with Wayland is expecting everyone to implement a compositor for a half-baked, ever-changing protocol instead of implementing a common platform to develop on. Wayland doesn’t really exist, it’s just a few distinct developer teams playing catch-up, pretending to be compatible with each other.

Implementing the hard part once and allowing someone to write a window manager in 100 lines of C is what X did right. Plenty of other things that are bad with X, but not that.

[–] enumerator4829@sh.itjust.works 2 points 1 week ago (2 children)

Tell me you never deployed a remote Linux desktop in an enterprise environment without telling me you never deployed a remote Linux desktop in an enterprise environment.

After these decades of Wayland prosperity, I still can’t get a commercially supported remote desktop solution that works properly for a few hundred users. Why? Because on X, you could hijack the display server itself and feed that into your nice TigerVNC server, regardless of desktop environment. Nowadays, you need to implement this in each separate compositor to do it correctly (i.e. damage tracking). Also, unlike X, Wayland generally expects a GPU in your remote desktop servers, and have you seen the prices for those lately?

Programmers use butterflies.

Real sysadmins use programmers.

[–] enumerator4829@sh.itjust.works 4 points 1 week ago (1 children)

The M-series hardware is absofuckinglutely proprietary and locked down, and most likely horrible to repair.

But holy shit, every other laptop I’ve ever used looks and feels like a cheap toy in comparison. Buggy firmware that can barely sleep, with shitty drivers for the cheapest components they could find. Battery life in the low single digits of hours. The old ThinkPads are kinda up there in perceived ”build quality”, but I haven’t seen any other laptop that’s even close to a modern MacBook. Please HP, Dell, Lenovo, Framework, or whoever, just give me a functional high quality laptop. I’ll pay.

Moving people from closed commercial offerings onto something self hosted is enough work without gatekeeping US open source projects, even if they are flawed. If we want to move normal people away from the commercial offerings onto something better, we can’t do things like that. Better save such warnings for when they are actually needed (”Project X has been dead for five years and is full of security holes, you should migrate to project Y instead”). Keep the experience positive regardless.

You do you, but different people have differing requirements and preferences. Don’t scare them away please.

[–] enumerator4829@sh.itjust.works 1 points 2 weeks ago (1 children)

Because Docker’s record with regard to security is questionable, and some people like to get automatic updates from their distro. Personally, I think the design of Docker is absolute garbage. Containers are fine, but Docker is not the correct mechanism for them. (It’s also nothing new, see BSD jails and Solaris zones.)

Immich on NixOS works perfectly, and I also get automatic updates.

If you stay on X, you can keep using the same window manager for longer. My XMonad config is over a decade old, and I bet my old dwm config.h still compiles.

The relative size of the double handling is the potential problem. I think Nvidia is just trying to extend the gold rush for a bit longer.

[–] enumerator4829@sh.itjust.works 1 points 1 month ago (1 children)

Agreed, it’s not perfect, especially not with regards to drivers from some of them. But:

https://insights.linuxfoundation.org/project/korg/contributors?timeRange=past365days&start=2024-12-31&end=2025-12-31

I expect that the ability of B2C-products to keep their code somewhat closed keeps them from moving to other platforms, while simultaneously pumping money upstream to their suppliers, expecting them to contribute to development. The linked list is dominated by hardware vendors, cloud vendors and B2B-vendors.

Linux didn’t win on technical merit, it won on licensing flexibility. Devs and maintainers are very happy with GPL2. Does it suck if you own a Tivo? Yes. Don’t buy one. On the consumer side, we can do some voting with our wallets, and some B2C vendors are starting to notice.
