It uses a completely different paradigm of process chaining and management than POSIX and the underlying Unix architecture.
That’s not to say it’s bad, just a different design. It’s actually very similar to what Apple did with OS X.
On the plus side, it's much easier to understand from a security-model perspective, but it breaks some of the underlying assumptions about how process scheduling and execution work on Linux.
So: more elegant in itself, but an ugly wart on the overall systems architecture design.
Lol, no. Way more code in systemd. Also more CVEs per year than some bad (now dead) init/service managers racked up in their entire lifetime.
I think that's exactly it for most people. The socket, mount, and timer unit files; the path/socket activations; the After=/Wants=/Requires= dependency graph; and the overall architecture as a more unified 'event' manager are what feel really different from almost everything else in the Linux world. That, coupled with the ini-style VerboseConfigurationNamesForThatOneThing and the binary journals, made me choose a non-systemd distro for personal use, where I can tinker around and it all feels nice and unix-y. On the other hand, I am really thankful to have systemd in the server space and for professional work.
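For anyone who hasn't seen those directives in action, here's a minimal sketch of the style being described (the unit, mount, and binary names are invented for illustration):

```ini
# example.service — illustrative only
[Unit]
Description=Example showing systemd's dependency-graph directives
# Ordering only: start after these units, if they get started at all
After=network-online.target data.mount
# Weak dependency: pull this in, but keep going if it fails
Wants=network-online.target
# Hard dependency: if data.mount stops, this unit stops too
Requires=data.mount

[Service]
ExecStart=/usr/local/bin/example-daemon
Restart=on-failure

[Install]
WantedBy=multi-user.target
```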
I'm not great at any init things, but systemd has made my home server stuff relatively seamless. I have two NASs that I mount, and my server starts up WAY faster than both of them, and I (stupidly) have one mount within the other. So I set requirements that nasB doesn't mount until nasA has, then docker doesn't start until after nasB is mounted. Works way better than going in after 5 minutes and remounting and restarting.
Of course, I did just double my previous storage on A, so I could migrate all of B's stuff back. But that would require a small amount of effort.
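That kind of ordering can be expressed directly in mount units. A rough sketch of the setup described above (hostnames, paths, and filesystem type are guesses, not the poster's actual config):

```ini
# mnt-nasA-nasB.mount — mount units are named after their mount point,
# so /mnt/nasA/nasB becomes mnt-nasA-nasB.mount
[Unit]
Description=NAS B share, nested inside NAS A's mount point
# Don't try this until NAS A's mount is active; stop if it goes away
Requires=mnt-nasA.mount
After=mnt-nasA.mount

[Mount]
What=nasB.local:/export/share
Where=/mnt/nasA/nasB
Type=nfs
Options=_netdev

[Install]
WantedBy=multi-user.target
```

And a drop-in so docker waits for the nested mount:

```ini
# /etc/systemd/system/docker.service.d/wait-for-nas.conf
[Unit]
Requires=mnt-nasA-nasB.mount
After=mnt-nasA-nasB.mount
```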
What do you use as a prerequisite for the NAS A mount? Or does it just keep trying in a loop?
I have a wait-for-ping service that pings NAS A; once it gets a successful response, it tries to mount.
I lifted it from a time when I needed to ping my router because Debian had a network-online service bug. I adapted it to my NAS once the network-online issue got fixed and mounting my shares became the next biggest problem.
It seems like this person grabbed the same fix I eventually landed on, because our files are... oddly, almost exactly the same.
https://cweiske.de/tagebuch/systemd-wait-nfs.htm
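Roughly, a wait-for-ping unit in that spirit looks like this (the hostname and timeout are made up; see the linked page for the original):

```ini
# wait-for-nasA.service — sketch, not the linked file verbatim
[Unit]
Description=Block until NAS A answers a ping
After=network-online.target
Wants=network-online.target

[Service]
Type=oneshot
# Try once per second for up to 60 s, then give up
ExecStart=/bin/sh -c 'for i in $(seq 1 60); do ping -c1 -W1 nasA.local && exit 0; sleep 1; done; exit 1'
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target
```

The mount unit then gets `Requires=wait-for-nasA.service` and `After=wait-for-nasA.service`, so it only fires once the NAS is reachable.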
thanks!
Do you perhaps also have a solution for accesses to network mounts hanging when the server is unreachable?
I've started doing podman quadlets recently, and the ini config style is ugly as hell compared to the yaml (even yaml, lol) in docker compose. The benefits outweigh that, though, imho.
I agree that quadlets are pretty ugly but I'm not sure that's the ini style's fault. In general I find yaml incredibly frustrating to understand, but toml/ini style is pretty fluent to me. Maybe just a preference, IDK.
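For comparison, a minimal quadlet container unit (the image, port, and volume are placeholders):

```ini
# ~/.config/containers/systemd/web.container — quadlet sketch
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80
# %h expands to the user's home directory
Volume=%h/web-data:/usr/share/nginx/html:Z

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, quadlet generates a `web.service` you can start like any other unit; the `[Install]` section handles starting it at login.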