All of you bragging about 100+ containers, may I inquire as to what the fuck that's about? What are you doing with all of those?
Selfhosted
A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.
In my case, most things that I didn't explicitly make public are running on Tailscale using their own Tailscale containers.
Doing it this way, each one gets its own address and I don't have to worry about port numbers. I can just type http://cars/ (yes, I know, not secure; not worried about it) and get to my LubeLogger instance. But it also means I have 20ish copies of just the Tailscale container running.
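For anyone curious, that per-service Tailscale sidecar pattern looks roughly like this in Compose. A sketch, not a drop-in config: `TS_AUTHKEY`, `TS_HOSTNAME`, and `TS_STATE_DIR` are documented env vars of the official `tailscale/tailscale` image, but the LubeLogger image name and the auth-key value here are placeholders.

```yaml
services:
  cars-ts:
    image: tailscale/tailscale
    environment:
      - TS_AUTHKEY=tskey-...            # auth key from the Tailscale admin console (placeholder)
      - TS_HOSTNAME=cars                # becomes http://cars/ via MagicDNS
      - TS_STATE_DIR=/var/lib/tailscale
    volumes:
      - cars-ts-state:/var/lib/tailscale
    cap_add:
      - NET_ADMIN

  lubelogger:
    image: ghcr.io/hargata/lubelogger   # placeholder image name
    network_mode: service:cars-ts       # share the sidecar's network namespace

volumes:
  cars-ts-state:
```

With `network_mode: service:`, the app publishes no host ports at all; it's only reachable over the tailnet at the sidecar's hostname.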
On top of that, many services, like Nextcloud, are broken up into multiple containers. I think Nextcloud AIO alone spins up something like 5 or 6 containers in addition to the master container. Tends to inflate the container numbers.
Things and stuff. There's the web front end, the API back end, the database, the Redis cache, and the MQTT message queues.
And that is just for one of my web crawlers.
/S
About 62 deployments with 115 "pods"
Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:
| Fewer Letters | More Letters |
|---|---|
| NAS | Network-Attached Storage |
| Plex | Brand of media server package |
| k8s | Kubernetes container management package |
3 acronyms in this thread; the most compressed thread commented on today has 5 acronyms.
[Thread #42 for this comm, first seen 29th Jan 2026, 11:00]
About 50 on a k8s cluster, then 12 more on a Proxmox VM running Debian, and about 20ish on some Hetzner auction servers.
About 80 in total, but lots more at work :)
I currently have 23 on my N97 mini PC and 3 on my Raspberry Pi 4, 26 in total.
I have no issues managing these. I use docker compose for everything and have about 10 compose.yml files for the 23 containers.
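That one-compose-file-per-stack layout is easy to drive with a small loop. A sketch, assuming a hypothetical `~/stacks/<service>/compose.yml` layout; the demo fabricates a tree in a temp dir so the loop has something to walk, and prefixes the real command with `echo` so nothing actually talks to Docker.

```shell
# Hypothetical layout: one directory per stack, each with its own compose.yml.
# Fabricate a demo tree; point STACKS at your real one instead.
STACKS="$(mktemp -d)"
for svc in freshrss jellyfin paperless; do
  mkdir -p "$STACKS/$svc"
  printf 'services:\n  %s:\n    image: %s:latest\n' "$svc" "$svc" \
    > "$STACKS/$svc/compose.yml"
done

# Bring every stack up in one pass (echo previews the commands; remove it
# to actually run them against a live Docker daemon):
for f in "$STACKS"/*/compose.yml; do
  echo docker compose -f "$f" up -d
done
```

The nice part of this split is blast radius: restarting or editing one stack never touches the other nine files.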
140 running containers and 33 stopped (that I spin up sometimes for specific tasks or testing new things), so 173 total on Unraid. I have them grouped into:
- 118 Auto-updates (low chance of breaking updates or non-critical service that only I would notice if it breaks)
- 55 Manual-updates (either it's family-facing, it has a high chance of breaking on update, it updates very infrequently so I want to know when that happens, or I want control over exactly when it updates; e.g. Jellyfin, which I only update when nobody's in the middle of watching something)
I subscribe to all their github release pages via FreshRSS and have them grouped into the Auto/Manual categories. Auto takes care of itself and I skim those release notes just to keep aware of any surprises. Manual usually has 1-5 releases each day so I spend 5-20 minutes reading those release notes a bit more closely and updating them as a group, or holding off until I have more bandwidth for troubleshooting if it looks like an involved update.
Since I put anything that might cause me grief if it breaks in the manual group, I can also just not pay attention to the system for a few days and everything keeps humming along. I just end up with a slightly longer manual update list when I come back to it.
I’ve never looked into adding GitHub releases to FreshRSS. Any tips for getting that set up? Is it pretty straightforward?
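It is, because GitHub already publishes an Atom feed for every repo's releases; you just add the URL to FreshRSS like any other feed. The pattern (jellyfin/jellyfin below is just an example repo):

```shell
# Every GitHub repo serves Atom feeds that FreshRSS can subscribe to directly:
#   releases:  https://github.com/<owner>/<repo>/releases.atom
#   tags:      https://github.com/<owner>/<repo>/tags.atom
#   commits:   https://github.com/<owner>/<repo>/commits/<branch>.atom
owner=jellyfin   # example; substitute each project you follow
repo=jellyfin
feed="https://github.com/${owner}/${repo}/releases.atom"
echo "$feed"     # paste this URL into FreshRSS as a new feed subscription
```

No API token needed for public repos, which is what makes the FreshRSS grouping workflow above so low-friction.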
Running Home Assistant with a few add-ons on a mostly dormant Raspberry Pi. That totals 19 lines.
9 containers, of which 1 is a container manager with 8 containers inside (multi-container apps counted as 1), plus 9 installed from the NAS app store. 18 total.
- Because I'm old, crusty, and prefer software deployments in a similar manner.
I salute you and wish you the best in never having a dependency conflict.
I've been resolving them since the late 90s, no worries.
My worst dependency conflict was a libcurlssl error when trying to build on a precompiled base docker image.
I used Debian
Me too!
26, though this includes multi-container services like Immich or Paperless, which have 4 containers each.
0, it's all organised nicely with nixos
Boooo, you need some chaos in your life. :D
That's why I have one host called theBarrel and it's just 100 Chaos Monkeys and nothing else
This is the way.
I have 1 podman container on NixOS because some obscure software has a packaging problem with ffmpeg and the NixOS maintainers removed it. docker: command not found
35 containers and everything is running stable and most of it is automatically updated. In case something breaks I have daily backups of everything.
9
58. My CPU is usually around 10-20% usage. I really don't have any trouble managing/maintaining these; things break almost weekly, but I understand how to fix them every time, and it only takes a few minutes.
Server01: 64, Server02: 19, plus a bunch of sidecar containers solely for configs that aren't running.
Well the containers are grouped into services. I would easily have 15 services running, some run a separate postgres or redis while others do an internal sqlite so hard to say.
If we're counting containers then between Nextcloud and Home Assistant I'm probably over 20 already lol.
89 - 79 on my main server and 10 on my sandbox.
44 containers, and my 15-minute load average is still 0.41 on an old Intel NUC.
11 running on my little N150 box. Barely ever breaks a sweat.
I don't have access to my server right now, but it's around 20 containers on my little N100 box.
51 containers on my Unraid server, but only 39 running right now
Server 1: 5 containers, Server 2: 4, Server 3: 4, Server 4: 61.
Basically if a container is a resource hog, it gets moved somewhere with more resources or specialized hardware.
That's a wee bit imbalanced. Is server 4 your big boi?
It's the oldest, but not the most powerful. Not everything I host sees a lot of activity. But things like Plex/Jellyfin/Immich found their own hardware with better GPU support, and serious A/V or disk intense processes have a full spec PC available. There is also a remote backup system in place so a couple containers are duplicates.
At my house, around 10. For lemmy.ca and our other sites, 35ish maybe. At work... hundreds.
$ docker ps | wc -l
14
Just running 13 myself.
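One caveat with that `docker ps | wc -l` trick: `docker ps` prints a header row, so the line count is one higher than the container count (the 14 above corresponds to 13 containers). `docker ps -q` emits only the container IDs, one per line, with no header. Simulated here with `printf`, since the difference is just the header line:

```shell
# `docker ps` output starts with a header row, so `wc -l` over-counts by one:
printf 'CONTAINER ID   IMAGE\nabc123         nginx\n' | wc -l   # 2
# `docker ps -q` prints bare container IDs with no header:
printf 'abc123\n' | wc -l                                       # 1
```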
Between 100 and 150.
25, with that "docker ps" command, on my aging NUC10 PC. Only using 5GB of its 16GB of RAM.
What, me worry?
53