Currently dealing with extraordinarily slow network interface speeds on my NAS. Did a quick IO test with dd and the results were great, so the disks aren't the bottleneck. I'd troubleshot this before to no avail; back then, letting the device power cycle brought network speeds back, but no dice this time. So I'm just replacing most of the hardware aside from the drive pool, since I'd planned to anyways. Will troubleshoot my router's network card as well for sanity's sake.
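For reference, the dd run was along these lines (path and sizes are placeholders; `conv=fdatasync` forces a flush so the number reflects the disks rather than RAM):

```bash
# Sequential write test against the pool: ~4 GiB in 1 MiB blocks.
dd if=/dev/zero of=/mnt/pool/testfile bs=1M count=4096 conv=fdatasync
rm /mnt/pool/testfile
```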
I got zfs-zed working again after hours spent chasing notifications that vanished when a kernel update replaced a config file.
Turns out I missed a $ in a bash function call.
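For anyone hunting the same class of bug, a contrived sketch of what the missing $ does (not my actual zed config):

```bash
# Passing POOL instead of "$POOL" hands the function the literal
# string "POOL" rather than the variable's value.
notify() {
    echo "zed event on pool: $1"
}

POOL="tank"
notify POOL      # bug: prints "zed event on pool: POOL"
notify "$POOL"   # fix: prints "zed event on pool: tank"
```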
It's been fairly smooth lately, knock on wood!
My Valheim server that is set up for friends and family had some issues, but nothing showed in the logs, so I assume it was a weird network issue that resolved itself.
I also battled some problems with the Jellyfin temp/transcode folder ballooning in size, causing the whole server to crash since I hadn't dedicated enough space to the container. Considered making a script to clear the folder at regular intervals (sketch below), but it would cock up streaming if the missus was watching while the purge happened.
Ended up just giving it 100 GB and let the daily clear be enough.
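For the record, the script would have been something like this (the path is an assumption for my container; the age filter should mostly spare files an active stream is still writing):

```bash
#!/bin/sh
# Delete transcode segments untouched for an hour; segments belonging
# to an in-progress stream keep getting rewritten, so they survive.
find /config/transcodes -type f -mmin +60 -delete
```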
It ended up being the missus' tablet suddenly requesting transcode of everything but H264, so I'm quietly hinting that she is due an upgrade anyways...
Next project planned: Caddy (I've been saying that for 6 months.....)
Isn't there a scheduled task to clean the transcode dir?
If I remember correctly it exists; it might be worth increasing its frequency instead.
You are right, there is a checkbox, but no way to adjust the interval AFAIK.
It seems to be a daily occurrence, which is fine now that I've adjusted the container size.
I'm going to be more wary of buying devices without H265/AV1 support in the future, since that's mostly what I grab. That should remove the need for transcoding completely anyways.
Have you tried clicking on the task?
Just checked my server and I could adjust the frequency as I please
I obviously need to have another look when I get home!
The issue started on 10.10, but I haven't looked into it since upgrading.
Thanks for taking the time, Freund!
My pleasure :)
Seems to be missing in 10.11.3, so I might just be a few patches behind.
I'll read the patch notes and see if it's been added recently.
Just posting as it's good to know for others searching, I guess.

Dunno where you're looking, but you need to check the scheduled tasks (left menu, almost the last option).
There are several maintenance tasks, one of which should be the one you're looking for.
Oooooooooh, well god dingit, there it is!
Nice! Thanks again!
:)
2 problems this week
Accidentally had 2 Jellyfin pods trying to write to SQLite together and corrupted the DB. Not really any way to fix it, so I just killed it and rebuilt the library.
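A sketch of the guardrail I should have had, assuming Jellyfin runs as a Deployment named jellyfin (names and namespace are assumptions):

```bash
# Keep exactly one pod able to hold the SQLite file open.
kubectl scale deployment jellyfin --replicas=1

# Recreate stops the old pod before starting the new one, so even a
# rollout can't briefly run two writers side by side.
kubectl patch deployment jellyfin --type=json \
  -p='[{"op":"replace","path":"/spec/strategy","value":{"type":"Recreate"}}]'
```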
Also, my son's Minecraft server got corrupted. Longhorn backup to the rescue 🛟
Some of the things in my house were set up so long ago, and have been running so smoothly, that I haven't looked at them in years (other than auto updates). Now I'm afraid I've accidentally left some security hole open without realizing it.
For example, I set up certbot 10 years ago, and back then there was no DNS challenge, so I had to open my webserver on port 80 to renew... Well, since everything was running from HTTPS/443, I decided to block port 80,
so I edited the systemd unit for certbot to temporarily open port 80 for the renewal, and close it right after...
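Roughly this shape, if anyone wants the same trick (unit name and iptables rules are assumptions; adjust to your distro and firewall):

```bash
# Drop-in that opens port 80 only while the renewal runs.
sudo mkdir -p /etc/systemd/system/certbot.service.d
sudo tee /etc/systemd/system/certbot.service.d/open-port-80.conf >/dev/null <<'EOF'
[Service]
ExecStartPre=/usr/sbin/iptables -I INPUT -p tcp --dport 80 -j ACCEPT
# ExecStopPost runs even if certbot fails, so the port always closes.
ExecStopPost=/usr/sbin/iptables -D INPUT -p tcp --dport 80 -j ACCEPT
EOF
sudo systemctl daemon-reload
```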
It was only 5 years later that I realized I'd made a mistake, and port 80 had been open to the internet that whole time.
Probably no harm since it's a public server anyway... defense in depth is the key.
I finally figured out that the random freezes on my server were caused by a bad stick of RAM and not some stupid mistake on my part. Thankfully it's DDR3, so I can keep both of my kidneys and still afford the replacement.
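If anyone needs to confirm a suspect stick without booting into memtest86+, memtester can do a rough in-place check (size and pass count here are arbitrary):

```bash
# Lock and exercise 2 GiB of RAM for 3 passes; any reported error
# points at bad memory (or a bad slot).
sudo memtester 2048M 3
```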
> Thankfully it’s DDR3
It's one of the benefits of having older equipment. I use these guys for RAM purchases: https://www.memorystock.com/
One of my hard drives started randomly disconnecting.
I tried swapping all the cables, but got nothing. I don't have time to fix it before leaving for work, so I've set up a nightly reset and I'll hope for the best. Angry family texts incoming!
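The reset is just a cron entry in root's crontab, along these lines (the time is a placeholder for whenever nobody is watching):

```bash
# m h dom mon dow  command: reboot at 04:00 every night
0 4 * * * /sbin/shutdown -r now
```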
I've been thinking about infrastructure-as-code tools. Skimmed the very surface of OpenTofu and looked at the list of alternatives.
I'm in need of something that is both deployment automation and (implicit) documentation of the thing I call "the zoo". Namely:
- network definition
- machine definitions (VMs, containers) and their configuration
- inventory: keeping track of third party resources
Now I'm thinking about which tool would be the right one for the job while I'm still not 100% sure what the job is. I don't like added complexity; it's quite possible this becomes a dead end for me if I spend more time wrangling the tool than I gain in the end.
PS: If you haven't already, please take a look at your OpenSSL packages. As of this week there are two new CVEs rated high: https://openssl-library.org/news/vulnerabilities/index.html
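Checking takes seconds (these assume a Debian-flavored box; substitute your package manager):

```bash
openssl version                                      # library build in use
apt list --upgradable 2>/dev/null | grep -i openssl  # pending fixes
```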
Finally killed my Discord account and moved my monitoring notifications to a self-hosted ntfy server. Works well.
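For anyone curious, pointing scripts at ntfy is a one-liner, since publishing is just an HTTP POST to the topic URL (host and topic here are placeholders):

```bash
# Anything subscribed to the "monitoring" topic gets this pushed.
curl -d "backup job failed on nas01" https://ntfy.example.com/monitoring
```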
Recently obtained a free circa-2017 Mac mini, which I installed Linux on to create a Docker hosting environment. It currently hosts Jellyfin, SearXNG, and Forgejo.
My much older NAS serves as the NFS drive for the Jellyfin media (formerly I ran Plex directly on the NAS, but this was slow/unreliable as the NAS has only dual 1 GHz ARM cores).
One of the drives in the NAS died Thursday night, but no serious issue as it's RAID 1. I wonder if the new load on it pushed it over the edge. (Also, I wonder if I could use the Mac mini's SSD as a sort of cache in front of the NAS, to reduce wear on it, if that would even help...)
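On the caching thought: one candidate is FS-Cache via cachefilesd on the mini, which parks NFS reads on the local SSD (package name and export path are assumptions, and it only helps reads, not writes):

```bash
sudo apt install cachefilesd
sudo systemctl enable --now cachefilesd
# The fsc option opts this mount in to FS-Cache.
sudo mount -t nfs -o fsc nas:/export/media /mnt/media
```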
Luckily I had some gift cards from recycling old tablets and phones, so I could get a replacement drive at minimal cost. I went with a cheap WD Blue drive instead of the 2.5x more expensive Seagate IronWolf drives I had used in the past. We will see how that fares over the next few years.
Upon replacing the drive yesterday, I found the one that failed had a 2017 mfg date, so its life was 8 years (from when I initially populated the NAS). The other drive was replaced in 2021 (it actually failed in 2020; I just left the NAS unused for a year at that time, so it had a life of 3 years). Some insight into the lifespan of the IronWolf drives.
Things I'd like to add soon:
- kiwix instance
- normalize my ebook/magazine collection
- set up downloads of my YouTube subscriptions into Jellyfin's media directory so I can avoid the YouTube app/website (see the sketch after this list)
- something for music to ditch that subscription
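The YouTube item will probably end up as yt-dlp on a timer; a minimal sketch (channel URL and output path are placeholders):

```bash
# --download-archive records what's already fetched, so the same
# command can re-run on a schedule without duplicating downloads.
yt-dlp \
  --output "/media/youtube/%(channel)s/%(title)s.%(ext)s" \
  --download-archive /media/youtube/archive.txt \
  --embed-metadata \
  "https://www.youtube.com/@SomeChannel/videos"
```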
On the topic of DNS, I still use GoDaddy. People ask why; it's because GoDaddy seemed like a good idea in 2003 when I got my first domain, and in 2006 when I got my current one. After that it's just inertia. I tend to buy several years in advance because I don't like annual payments (I know it makes me a weirdo), which means I'm locked in for several years, and it's not enough of a problem to do anything about.
Anyone who uses GoDaddy knows they turned off their dynamic DNS option quite some time ago. My IP is pretty stable so I don't usually need to change it, but if I have a power failure at home or need to reboot my router, I obviously have to update my DNS record at those moments.
When I'm away from home, I end up having to use TeamViewer to hop into a jump box VM I have set up for that purpose. The two obvious problems with that: first, TeamViewer is a proprietary product; second, they see me hopping into a jump box regularly and assume I'm a commercial customer. There is apparently a way to tell them you're just a hobbyist, but I haven't gotten around to filing that.
What I did do is set up a script that compares my current IP to my DNS IP, and if they differ it sends me an email containing the old IP and the new IP. This way I don't need to hop into my network to find out what the new address is. I also added a bit that saves the last IP address successfully sent by email to /tmp/, so that if my IP changes while I can't hop onto the GoDaddy website to fix it, I don't get 100,000 emails repeating the new address.
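In case it's useful to anyone, the shape of the script is roughly this (domain, IP-lookup service, and mail setup are placeholders):

```bash
#!/bin/bash
DOMAIN="example.com"
CURRENT=$(curl -fsS https://ifconfig.me)
DNS=$(dig +short "$DOMAIN" | head -n1)
LAST_SENT=$(cat /tmp/last_ip 2>/dev/null)

# Mail only when the live IP differs from DNS and hasn't already been
# sent; the second check is what prevents the 100,000-email scenario.
if [ -n "$CURRENT" ] && [ "$CURRENT" != "$DNS" ] && [ "$CURRENT" != "$LAST_SENT" ]; then
    printf 'Old IP: %s\nNew IP: %s\n' "$DNS" "$CURRENT" \
        | mail -s "Home IP changed" me@example.com
    echo "$CURRENT" > /tmp/last_ip
fi
```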
I killed my house power a couple weeks ago, and the whole system worked exactly as intended. I was pretty happy to see that.
Cool! Note that the nameservers for your domain don't have to be from your registrar. I use Hetzner for DNS despite having my domains elsewhere. And I use a similar thing as you, a cronjob that compares my public IP to the DNS records and adjusts them via Hetzner API when necessary.
Oh, and for anyone who has never used it, Apache Guacamole is a really neat tool for centralizing remote access. Effectively, you set it up as a website with a username and password that can proxy SSH, Telnet, VNC, and RDP sessions, so if you need to hop into something while you're away from home, it's going to be effective. That's something I wish I had known about earlier; it would have made a lot of rough days a lot easier.
So much has been going on
I moved recently and had to change ISPs. I went from 2 Gbps symmetrical fiber to 90/3 Mbps satellite behind CGNAT.
The fastest path to get the WAN cable into the house was through the attic and into my guest room / office. But that caused some serious heat and noise issues.
Ran some structured Cat6, installed a new electrical outlet, put in some keystone jacks, wired a new patch panel, then moved the rack to the basement.
Bought and installed a UPS which has already saved me twice in a month.
Up speeds were too slow and the high latency to the satellite constellation was causing issues, so I spun up a small VPS. But that means I have to sync content from the VPS back to my local server.
I've been wrestling with rsync for over a month, fiddling with flags to get the best results. I think I finally settled on a config yesterday, and the service and timer are working well.
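For the curious, the shape I landed near (paths are placeholders, and these aren't necessarily my exact flags):

```bash
# -a preserves metadata, -z compresses over the wire, --partial keeps
# interrupted files so transfers resume instead of restarting, and
# --bwlimit (KB/s) stops rsync from saturating the satellite uplink.
rsync -az --partial --bwlimit=2000 -e ssh \
    vps.example.com:/srv/media/ /mnt/media/
```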
CGNAT is messing with remote access, so I set up Cloudflare Tunnels. But the tunneling is not well suited to streaming; I was only getting ~100 Kbps on remote connections. Ran some iperf3 testing over Tailscale and it was only slightly better.
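The test itself is two commands, if anyone wants to reproduce it (the 100.x address is a Tailscale placeholder):

```bash
iperf3 -s                      # on the VPS
iperf3 -c 100.101.102.103 -R   # at home; -R measures the download path
```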
My preferred audiobook app Prologue released a major update to v4.0 which broke Plex libraries on launch, so I had to quickly pivot to AudioBookShelf.
To achieve remote streaming and access for Prologue, I had to explain the Tailscale setup and create new user accounts. Only halfway through my user base; not looking forward to explaining it to my parents.
Finally, I'm trying to set up Claude to run on my server rather than my locked-down enterprise laptop. That'll allow more tooling access, like git, instead of spending a lot of time downloading and uploading files manually like before. I need to figure out how to keep my session open; I'll probably run tmux inside a Docker container and then run claude inside the tmux window. Hopefully that works.
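The session-persistence bit would look something like this (image and names are assumptions; the image needs tmux and the claude CLI installed):

```bash
# Keep the container alive independently of any one SSH session.
docker run -d --name claude-box my-claude-image sleep infinity

# new-session -A attaches if "claude" already exists and creates it
# otherwise, so a dropped link lands back in the same session.
docker exec -it claude-box tmux new-session -A -s claude
```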
As someone chronically behind CGNAT, you have my condolences
Oh, I also want to look into using a Tailscale exit node to route through a Proton VPN WireGuard tunnel so I don't have to switch between two separate VPNs.
I also want to look into the exit node stuff.
Got hit with this recently
https://github.com/jellyfin/jellyfin/issues/15148
Just restored an old backup. Everything is behind a VPN and is working, so I'll give it a while and see if it gets sorted before resorting to swapping out the SQLite version for each update.
Ouchy!
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:
| Fewer Letters | More Letters |
|---|---|
| CGNAT | Carrier-Grade NAT |
| DHCP | Dynamic Host Configuration Protocol, automates assignment of IPs when connecting to a network |
| DNS | Domain Name Service/System |
| HTTP | Hypertext Transfer Protocol, the Web |
| IP | Internet Protocol |
| MQTT | Message Queue Telemetry Transport point-to-point networking |
| NAS | Network-Attached Storage |
| NAT | Network Address Translation |
| NFS | Network File System, a Unix-based file-sharing protocol known for performance and efficiency |
| NVR | Network Video Recorder (generally for CCTV) |
| Plex | Brand of media server package |
| RAID | Redundant Array of Independent Disks for mass storage |
| SATA | Serial AT Attachment interface for mass storage |
| SSD | Solid State Drive mass storage |
| VNC | Virtual Network Computing for remote desktop access |
| VPN | Virtual Private Network |
| VPS | Virtual Private Server (opposed to shared hosting) |
| nginx | Popular HTTP server |
[Thread #51 for this comm, first seen 1st Feb 2026, 10:01] [FAQ] [Full list] [Contact] [Source code]
I had a weird issue with a server SSD disk.
Six months ahead of its scheduled swap it didn't die, it just started reading and writing really sluggishly, making the whole server behave really weirdly. Disk SMART statistics looked healthy and the disk self-tests passed with flying colors. Anyway, I had to swap it early and do a re-install of the OS.
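For anyone wanting to run the same checks, smartctl covers both (device node is a placeholder):

```bash
sudo smartctl -a /dev/sda        # attributes, error log, health verdict
sudo smartctl -t long /dev/sda   # start an extended self-test; results
                                 # appear in the -a output when done
```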
The rest of my cluster temporarily took over running some pods and only saw downtime for a few pods that were dependent on some disks in the failing server.
I guess the incident has restarted my interest in distributed storage.
Blergh, how did you pinpoint it?
More luck than anything, really. It was mostly that it had 6 months left and that reading and writing felt slow. Everything else behaved normally, and buying a new disk was an educated guess that turned out to be the correct choice.
My server mysteriously stopped working in December. After a scheduled restart the OS wouldn't load, so the fan was running on high for a few days while I was staying at a friend's.
I checked the logs and couldn't find anything suspicious. Loaded a previous backup that worked and still nothing loaded on startup. Tested the Pi 5 with a USB drive that had a fresh Alpine Linux install on it and everything loaded up fine so I was able to rule out any hardware issues. The HDD with the old OS mounted just fine to my laptop. I still have no idea what happened.
This happened a few days before my domain name expired, and I was planning to change it to something shorter anyway. Decided to hold off on remaking my server from scratch until I finish a few other projects.
The other projects will help me manage my network-connected devices, so it's all working towards a common goal. Fortunately I'm getting very close to finishing them; I'm putting the final touches on the last one and should be done within a few days.
Next I'll reinstall my Pi 4 with Home Assistant again to fix its networking issue. Only the terrarium grow lights are affected, and my gecko chose to hibernate outside the terrarium this winter, so she's unaffected (heat lamps are controlled by a separate, isolated device). After that I'll fix my Pi 5 server, and this time go with Podman over Docker.
I finally installed my wife
Man....technology has come a long way.
Nothing here to write home about. A couple of minor tweaks to the network, and blocking even more unnecessary traffic.

I've been on a mission to reduce costs in consumables such as electricity. I have a cron job that shuts everything down at a certain time in the evening, and I'm working on a WOL routine, fired by a cron from my standalone pfSense box, to crank the server back up in the morning just before I get up. It seemed to be the lowest-hanging fruit, so I have it on priority. It just didn't make sense to run the server for 10-12 hours at idle; I don't have any midnight mass downloads of Linux ISOs, nor do I make services available to other users, so it seemed like a good place to start. I guess by a purist's standards it's not a server anymore but an intermittent service, but it seems to be working for me. Will check consumption totals at the end of the month.
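The moving parts are small; a sketch of both cron entries (times and MAC are placeholders, and wakeonlan is just one common sender; pfSense users may prefer its WOL package):

```bash
# Server's root crontab: power down for the night at 23:00.
0 23 * * * /sbin/shutdown -h now

# Always-on box's crontab: magic packet at 05:45 to wake the server.
45 5 * * * /usr/bin/wakeonlan aa:bb:cc:dd:ee:ff
```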
Other than that, I haven't added anything new to the lineup, and I am just enjoying the benefits.
If you want to go all in, get a plug that measures energy! It also lets you directly see the effects of turning stuff on/off. My last server went up 3 W when I started using the second network interface! Let drives go to sleep, play with C-states, etc.
I had a post a while back about what I was doing to cut costs (quick command versions after the list):
- TLP: Adjusts CPU frequency scaling, PCI‑e ASPM, SATA link power‑management
- Powertop: used to profile power consumption; also has a tune feature, `sudo powertop --auto-tune`
- cpufrequtils: Used to manage the CPU governor directly
- logind.conf: Can be used to put the whole server to sleep when idle
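For reference, the quick versions of those as commands (package names vary by distro):

```bash
sudo powertop --auto-tune      # apply powertop's suggested tunables
sudo cpufreq-set -g powersave  # cpufrequtils: set the governor
                               # (per-CPU on some setups; see -c)
sudo tlp start                 # apply TLP's power profile now

# logind's idle action lives in /etc/systemd/logind.conf:
#   IdleAction=suspend
#   IdleActionSec=30min
```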
After doing all of that, which does help during operational hours, I decided to save the other 10-12 hours of consumption by just shutting it down. The old 'turn the light out if you're not in the room' concept. Right now I'm manually booting the server, and it doesn't take that long to resume operations. However, why not employ some automation and magic packets to fire it back up in the morning?
ETA: I do have a watt meter on the server.
Sounds good! Are you on SSD or HDD?
The OS lives on an SSD, and I have two aux drives. One is an HDD, but it's a Samba share for Navidrome, so it's not like it's spinning constantly. Everything gets a 3-2-1 backup.
ETA: Now that you mention it, I guess I could spin down (park?) the HDD before shutting down.
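hdparm can do that as the last step of the shutdown script (device node is a placeholder):

```bash
# -Y drops the drive into its lowest-power sleep state immediately;
# it spins back up on the next access.
sudo hdparm -Y /dev/sdb
```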
Moved all my Unraid 'apps' to Dockhand, and linked my Pangolin VPS with the Hawser agent. I had Dockge for a while on newer container deployments, but wanted something a bit more playful; Dockhand is it.
I degoogled my Gmail last year to Infomaniak, which was OK, but moved to Fastmail last week, which I now love! Setting up the custom domain pulled in the site's favicon for the Fastmail account header, which made me smile too much for such a simple thing. Think I'll be on Fastmail for the future. (Background syncing with the new Bichon email archiver.)
Fixing an Nvidia driver mismatch was on my list: it was causing the newest kernel module to not build properly (that might not be the right terminology) and the node to not boot, as discovered after a reboot it never came back from.
It was fairly straightforward, though finding a way to fully remove some of the old DKMS stuff took a bit of digging (manually deleting a couple of files). The new driver installs went smoothly, and the improved GPU passthrough in PVE 9 made the passthrough config tasks pretty quick.
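For anyone in the same hole, the DKMS side can usually be handled without manual file surgery (version string is a placeholder):

```bash
dkms status                              # what's registered and built
sudo dkms remove nvidia/550.54.14 --all  # drop the stale module tree
sudo dkms autoinstall                    # rebuild for the running kernel
```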
I also got go2rtc set up and piped my cameras through it instead of having individual connections for things like Home Assistant and the Blue Iris NVR. I'm still struggling to get the motion notifications in Home Assistant to work, though. Followed a tutorial on (that other site) and got the MQTT message coming in just fine, but the Node-RED flow isn't working: an issue with entities, so still some tinkering left to do there.
Waiting for my new GL.iNet Ethernet KVM to arrive so I can connect it to my server...
I am currently switching from Debian/Rocky LXC containers on Proxmox to declaratively creating VMs via OpenTofu, then running nixos-anywhere, and then colmena for updates etc. Works great; I should have done it sooner.
Problem: Tailscale. I encrypted the auth key via agenix, but the new NixOS hosts can't read the file and fail to log in. The file is present, but I think the VMs can't decrypt it. Needs further investigation.
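A first check worth doing under agenix defaults, where secrets decrypt to /run/agenix using the host's SSH key, which has to match a key listed in secrets.nix (host name is a placeholder):

```bash
# Is the host key the one the secret was encrypted to?
ssh nixos-host 'cat /etc/ssh/ssh_host_ed25519_key.pub'
# Did the secret actually get decrypted at activation?
ssh nixos-host 'ls -l /run/agenix/'
```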