submitted 1 year ago* (last edited 1 year ago) by tarneo@lemmy.ml to c/selfhost@lemmy.ml

Tl;dr: Automatic updates on my home server caused 8 hours of downtime of all of renn.es' docker services including email and public websites

[-] Moonrise2473@feddit.it 11 points 1 year ago

I don't want to seem rude, but in my opinion automated unattended updates on Gentoo is a bad idea.

[-] tarneo@lemmy.ml 4 points 1 year ago* (last edited 1 year ago)

That's what I learned :-)

Edit: no, saying that isn't rude

[-] ReversalHatchery@beehaw.org 6 points 1 year ago* (last edited 1 year ago)

While we are here: what do you think about unattended updates on Debian and such? (and its derivatives, including Proxmox VE)

[-] tarneo@lemmy.ml 6 points 1 year ago

Unattended updates are 10x better because those programs allow you to only do security updates. Plus they are much more stable, and something like this would never happen on a stable distro.
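For context, on Debian and derivatives this is handled by the `unattended-upgrades` package. A minimal sketch of a security-only setup, assuming Debian's stock config file paths:

```shell
# Install the tool and enable its systemd timers (Debian/Ubuntu)
apt-get install unattended-upgrades
dpkg-reconfigure -plow unattended-upgrades

# Restrict it to the security archive only
cat <<'EOF' >/etc/apt/apt.conf.d/50unattended-upgrades
Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
};
// Never reboot automatically; let a human decide when
Unattended-Upgrade::Automatic-Reboot "false";
EOF
```

The `Origins-Pattern` shown mirrors Debian's shipped default for the security suite; everything outside that archive is left for manual upgrades.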

[-] yote_zip@pawb.social 3 points 1 year ago* (last edited 1 year ago)

I think auto-upgrading Debian Stable is probably the one exception I'd make to "no blind upgrades", though I still don't feel comfortable recommending it due to potential dependency/apt problems that could somehow happen. In the case of Debian Stable it barely ever has package upgrades anyway so I'd just do it manually once a week and it would take like 30 seconds to grab 4 packages. If you're public-facing you might want a tighter system for notifying about security upgrades, or just auto-upgrade security patches.

[-] yote_zip@pawb.social 5 points 1 year ago

Blind automatic upgrades are a bad idea even for casual home users. You could run into a Linus Tech Tips "do as I say" scenario where it uninstalls half your system due to a dependency issue. Or it could accidentally uninstall part of your system that you don't notice.

I'm not sure how stable Gentoo's default branch is, but I know that daily upgrades on Arch Linux are close to suicide - you have a higher chance of installing a buggy package before it's fixed if you install every package version as it comes in.

I'm surprised this strategy was approved for a public server - it's playing with a loaded revolver and it looks like you were finally shot.

[-] tarneo@lemmy.ml 5 points 1 year ago

I'm surprised this strategy was approved for a public server

The goal was to avoid getting hacked on a server running many potentially vulnerable services (there are more than 20 of them on there). When I set this up, I was basically freaked out by the fact that I hadn't updated Mastodon until more than a week after the last critical vulnerability in it was found (arbitrary code execution on the server). The number of affected users, weighed against the impact a hack would have, made me choose auto-updates back then, even if I now agree it wasn't clever (and I ended up shooting myself in the foot). These days I just do updates semi-regularly, and I'm subscribed to mailing lists like oss-security so I hear about vulnerabilities as early as possible. Plus, I am not the only person in charge anymore.

[-] yote_zip@pawb.social 4 points 1 year ago

I'm not a real sysadmin so take it with a grain of salt, but in all reality this is probably why you would choose something like Debian for a server instead of a bleeding-edge distro. Debian quickly backports security updates and fixes but otherwise keeps everything else stable and extremely well-tested, which pretty much 100% prevents serious bugs from reaching its Stable branch. You may still need to figure out an appropriate strategy for keeping your Mastodon container updated, but at least the rest of your system isn't at risk of causing catastrophic errors like this. Also, Debian Stable does allow you to auto-upgrade security patches only, if you still want that functionality.

[-] tarneo@lemmy.ml 2 points 1 year ago

I totally agree. But I just wouldn't necessarily say Gentoo is a bleeding-edge distro: it's kinda up to the user. They are free to configure the package manager (portage) however they want and can even do updates manually. I just like the idea of having newer packages at the cost of stability, because I also use the server as a shell account host (with an isolated user ;-)) and need things like the latest neovim. These days I would know if an update failed because I would literally be in front of the process and would test that services are working after the updates, so I'd know if I have to roll back. This makes it basically like a stable distro IMO (even though the packages aren't battle-tested before being pushed as updates).
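For reference, a manual Gentoo update pass with a preview step might look roughly like this (a sketch of common `emerge` usage, not necessarily the poster's exact routine):

```shell
# Sync the portage tree
emerge --sync

# Preview what would change before committing to anything
emerge --pretend --update --deep --newuse @world

# Apply the updates, then clean up orphaned dependencies
emerge --update --deep --newuse @world
emerge --depclean --ask

# Afterwards, check that no service silently died
docker ps --filter "status=exited"
```

The `--pretend` pass is what an unattended setup skips: it is the moment where a human can spot a mask change or a large rebuild before it happens.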

[-] skilltheamps@feddit.de 2 points 1 year ago* (last edited 1 year ago)

I don't know to what extent you've been molested by the prophets of immutable distros yet, but I can only recommend joining the cult. Install Fedora IoT (or CoreOS) and simply know that you'll get a working container host (powered by podman) with every update. The whole discussion about which distro might survive whatever massacre the respective package manager commits next becomes superfluous: you simply get the next image that was built upstream solely to serve containers. The whole package-updating shenanigans is done by other people for you; you only collect the sweet result. The only "downside" is that one has to become familiar with containers, but since you run docker already that should work out. Also, for stuff like tinkering with the latest tools, just put those in a distrobox. That way they are independent of your solid container host, and you can mess them up in whatever way you fancy and dispose of them without any traces left behind.

Edit: To give one more example of why this is awesome: it wouldn't even matter which one you install, you can just rebase to the other (IoT lives in the fedora-iot remote; Silverblue, CoreOS and the others live in the fedora remote, just for anybody who might be confused by only looking at `ostree remote refs fedora`).
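For anyone wanting to try that rebase, a sketch with the standard rpm-ostree tooling (the exact ref name depends on the release you target):

```shell
# List the refs available from the fedora-iot remote
ostree remote refs fedora-iot

# Rebase the running system onto a Fedora IoT image (example ref)
rpm-ostree rebase fedora-iot:fedora/stable/x86_64/iot

# If the new deployment misbehaves after the reboot, roll back
rpm-ostree rollback
```

Because the old deployment is kept on disk, a bad rebase is undone with one command instead of a restore from backup.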

[-] tarneo@lemmy.ml 1 points 1 year ago

To me, this is only one of the few advantages of immutability. I have already used NixOS on a server and I really didn't like having to learn how to do everything the right way. As for distrobox, to me it sounds like an additional failure point: it is an abstraction over the container concept that hides the actual way things are done from you. I'd say if you run an app in a container, go all the way: make the container yourself. To me it just sounds like a bad idea, and I didn't really like distrobox when I tried it. I just want to say that both of these concepts (immutability, distrobox) would be great if they were perfectly done. But the learning curve of NixOS and the wackiness of distrobox drove me away.

[-] skilltheamps@feddit.de 1 points 1 year ago* (last edited 1 year ago)

The learning curve of NixOS is also what keeps me from trying it out, hence I prefer the "take it or leave it" mantra of the immutable Fedoras, and try to keep the number of packages I layer on top with rpm-ostree minimal.

As for Distrobox, yes, there are ways it can fail, although that has rarely happened to me. What happens mostly is that the distro inside the distrobox goes kaput, because that's just what mutable distros burdened with a plethora of questionable tooling installed via "curl something | bash" do. But for me that's the point of distrobox: it separates all that shady cruft one may need for work/developing/etc from the host OS. It's a place for messing about without messing up the computer, and with it the bits that need to keep working.
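A typical distrobox lifecycle, as a sketch (the container name and image are arbitrary examples):

```shell
# Create a mutable Debian sandbox; it shares $HOME with the host
distrobox create --name scratch --image debian:12

# Enter it and install whatever questionable tooling the job needs
distrobox enter scratch

# When it inevitably breaks, throw it away without touching the host
distrobox rm --force scratch
```

The host OS never sees the packages installed inside `scratch`, which is the separation being described above.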

[-] tarneo@lemmy.ml 1 points 1 year ago

You convinced me for immutable fedora. Maybe I'll try it out sometime on our backup/testing server and maybe it will make its way to production if I'm happy with it.

As for distrobox I'll see.

The main reason I used Gentoo is being able to reduce the attack surface with USE flags. But it seems the tradeoffs are greater than the advantages (the Mastodon issue I mentioned). If I don't switch the server to immutable Fedora, I'll just use something like plain Fedora or Debian, I think.
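For readers unfamiliar with USE flags: that attack-surface trimming happens in `/etc/portage/make.conf`. A minimal sketch (the flags shown are illustrative examples for a headless server, not a recommendation):

```shell
# Globally disable features a headless server never needs, so they
# are not even compiled into the installed packages
cat <<'EOF' >>/etc/portage/make.conf
USE="-X -gtk -qt5 -bluetooth -cups"
EOF

# Rebuild anything whose effective USE flags changed
emerge --update --deep --newuse @world
```

Less code compiled in means fewer code paths an attacker can reach, which is the tradeoff being weighed against Gentoo's maintenance cost.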

[-] skullgiver@popplesburger.hilciferous.nl 2 points 1 year ago* (last edited 1 year ago)

[This comment has been deleted by an automated system]

[-] yote_zip@pawb.social 1 points 1 year ago

Right, it was clearly LTT's fault for not reading, but automatic upgrades are the same thing as not reading. I've been using Linux for a very long time now, and I've seen Apt try to do some very stupid things before. Maybe it's better nowadays but I don't know if I'll ever shake the gut instinct to not allow Apt to do whatever it thinks is right.

[-] skullgiver@popplesburger.hilciferous.nl 1 points 1 year ago* (last edited 1 year ago)

[This comment has been deleted by an automated system]

[-] yote_zip@pawb.social 1 points 1 year ago

Yeah I really don't trust GUI package managers yet. I feel like they shouldn't be that hard to get working properly, but I always seem to get quirky behavior when I try to use them. As for readability apt is one of the worse tools IMO. I've been using nala lately and really like how it lays out its operations. Contrast that format to what Linus saw in his video.

Maybe we could have a blacklist of packages/metapackages marked "important" that cause warnings, like xorg, pipewire, pulseaudio, kde-desktop, gnome-desktop, etc. If you're uninstalling something like that you better hit confirm twice because that's not typical behavior.
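That blacklist idea can be approximated today with a small wrapper: simulate the transaction with `apt-get -s` and refuse to proceed if it would remove anything on the list. A rough sketch (the wrapper and its protected list are hypothetical, not an existing tool):

```shell
#!/bin/sh
# Hypothetical "safe-apt" wrapper: dry-run the transaction first and
# abort if any package on the protected list would be removed.
PROTECTED="xorg pipewire pulseaudio kde-desktop gnome-desktop"

would_remove_protected() {
    # $1: output of `apt-get -s ...`, which prints "Remv <pkg> [ver]" lines
    removals=$(printf '%s\n' "$1" | awk '/^Remv /{print $2}')
    for pkg in $PROTECTED; do
        for r in $removals; do
            [ "$r" = "$pkg" ] && return 0
        done
    done
    return 1
}

# Usage: safe-apt install some-package
if [ "$#" -gt 0 ]; then
    sim=$(apt-get -s "$@") || exit 1
    if would_remove_protected "$sim"; then
        echo "WARNING: this would remove a protected package; aborting." >&2
        exit 1
    fi
    exec apt-get "$@"
fi
```

It is only a heuristic on apt's simulation output, but it would have forced a second look in exactly the "do as I say" scenario.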

[-] thisisawayoflife@lemmy.world 2 points 1 year ago

What is the reason to shy away from Ubuntu? It is pretty solid in terms of automatic updating and rebooting. I used to be hardcore CentOS, but I gave up after all of the hubbub around 8. I just need the server to update, reboot when necessary and keep running all my stuff so I don't have to touch it. In my old age, I don't care to tinker anymore - I just want my services running and I want reports given to me about health and status.

Also, if you're concerned about privilege escalation, running a MAC (mandatory access control) system is probably a good idea. SELinux saved my hide once, a dozen years ago, with a PHP bug where I did not sandbox an app properly. Thankfully, SELinux caught this and prevented anything bad from happening.

[-] tarneo@lemmy.ml 1 points 1 year ago

What is the reason to shy away from Ubuntu?

Canonical. Snaps. Ubuntu is the first server OS I used, and while it was quite good, I think I prefer using a base distro instead of a derivative. If I'm going to use Debian, I'll use Debian, not Debian with corporate stuff on top.

As for SELinux: I tried it around a year ago. But as soon as I started doing stuff with users and tweaking docker permissions, things went wrong and I just set it to permissive. Maybe I'll try again soon, because other parts of managing servers have become much easier over time as I've learned. I agree that having a server without SELinux is quite dumb and not very professional.

[-] thisisawayoflife@lemmy.world 1 points 1 year ago

Permissive mode is definitely a life saver. My path was usually exercising the application in permissive mode for a few days, then running the SELinux scanner on the log file to determine what roles needed to be set up. Same with the Debian/Ubuntu equivalent.
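That permissive-then-generate workflow, sketched with the standard SELinux tools (the module name is a placeholder):

```shell
# Run the app in permissive mode for a few days: denials are logged
# but not enforced
setenforce 0

# Turn the logged denials into a local policy module
ausearch -m avc -ts recent | audit2allow -M myapp_local

# Review myapp_local.te by hand before loading it, then install it
semodule -i myapp_local.pp

# Back to enforcing once the policy covers the app
setenforce 1
```

The manual review step matters: `audit2allow` happily converts an actual attack's denials into policy if you load its output blindly.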

Good luck!

this post was submitted on 20 Aug 2023
14 points (93.8% liked)
