Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
Rules:
- Be civil: we're here to support and learn from one another. Insults won't be tolerated. Flame wars are frowned upon.
- No spam posting.
- Posts have to be centered around self-hosting. There are other communities for discussing hardware or home computing. If it's not obvious why your post topic revolves around selfhosting, please include details to make it clear.
- Don't duplicate the full text of your blog or GitHub here. Just post the link for folks to click.
- Submission headline should match the article title (don't cherry-pick information from the title to fit your agenda).
- No trolling.
- No low-effort posts. This is subjective and will largely be determined by community member reports.
Resources:
- selfh.st Newsletter and index of selfhosted software and apps
- awesome-selfhosted software
- awesome-sysadmin resources
- Self-Hosted Podcast from Jupiter Broadcasting
Any issues with the community? Report them using the report flag.
Questions? DM the mods!
You're talking about high-availability design. As someone else said, there's almost always a single point of failure, but there are ways to mitigate it depending on which failures you want to protect against and how much tolerance you have for recovery time. Instant/transparent recovery IS possible; you just have to think through your failure and recovery tree.
Proxy failures are kinda the simplest to handle if you assume the backends for storage/compute/network connectivity are out of scope. You set up two (or more) separate VMs with the same configuration and float a virtual IP between them that your port forwards connect to. If one VM goes down, the VIP migrates to whichever VM is still up, and your clients never know the difference. Look up Keepalived; that's the standard way to do it on Linux.
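For a concrete picture, the VIP-floating setup described above looks roughly like this in Keepalived; this is a minimal sketch, and the interface name, virtual router ID, password, and VIP address are all illustrative assumptions you'd replace with your own:

```
# /etc/keepalived/keepalived.conf on the primary proxy VM
vrrp_instance VI_1 {
    state MASTER            # set to BACKUP on the second VM
    interface eth0          # NIC that should carry the VIP (assumed name)
    virtual_router_id 51    # must match on both VMs
    priority 150            # give the backup a lower value, e.g. 100
    advert_int 1            # heartbeat interval in seconds
    authentication {
        auth_type PASS
        auth_pass changeme  # same shared secret on both VMs
    }
    virtual_ipaddress {
        192.168.1.100/24    # the VIP your port forwards point at
    }
}
```

The backup VM runs the same file with `state BACKUP` and a lower `priority`; when it stops receiving VRRP advertisements from the master, it claims the VIP and traffic keeps flowing.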
But then you start down a rabbit hole. Is your storage redundant? The network connectivity? Power? All of those can be made redundant too, but it will cost you time, and likely money for hardware. It's all doable; you just have to decide how much it's worth to you.
Most home labbers, I suspect, will just accept the 5 minutes it takes to reboot a VM and call it a day. Short downtime is easier to handle, but there are definitely ways to make your home setup fully redundant and highly available. At least until a meteor hits your house, anyway.
The further I go down this rabbit hole, the more I understand this, and I realize now that I went into it with practically zero knowledge of the topic. It was so frustrating to get my "HA" proxy working on the LAN with replicated containers, DNS, and shared storage, with hours sunk into getting permissions to work, only to realize "oh god, this only works on LAN" when my certs failed to renew.
I don't think I need this; the truth is the lab is in a state where I have most things I want [need] working very well, and this is a fun nice-to-have for learning some new things.
Thanks for the info! I will look into it!
IIRC there are a couple of different ways with Caddy to replicate the Let's Encrypt config between instances, but I never quite got that working. I didn't find a ton of value in an HA reverse proxy config anyway, since almost all of my services run on the same machine, and usually the proxy is offline because that machine is offline. The more important thing was HA DNS, and I got that working pretty well with Keepalived. The redundant DNS server just runs on a $100 mini PC. Works well enough for me.
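For anyone who does want to try the Caddy cert-sharing route: the usual approach is to point every instance at the same storage backend, since Caddy coordinates ACME issuance and renewal through its storage. A minimal sketch, assuming a shared mount at `/mnt/shared/caddy` (the path is an assumption; any storage both instances can reach should work):

```
{
    # Global options block: both Caddy instances use the same
    # storage location, so they share certificates and don't
    # race each other on Let's Encrypt renewals.
    storage file_system {
        root /mnt/shared/caddy
    }
}

example.home.arpa {
    # Hypothetical site; both instances serve identical config.
    reverse_proxy 192.168.1.50:8080
}
```

The catch the thread already hints at: renewal still requires the challenge to actually reach whichever instance holds the VIP, so LAN-only setups need a DNS-01 challenge or some other externally reachable path.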