this post was submitted on 29 Jan 2026
37 points (100.0% liked)

Selfhosted


There is a post about getting overwhelmed by 15 containers and people not wanting to turn the post into a container measuring contest.

But now I'm curious: what are your counts? I would guess those of you running k*s would win out by pod scaling.

docker ps | wc -l

For those wanting a quick count.
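
Note that docker ps prints a header row, so piping it straight into wc -l overcounts by one. Two standard variants avoid that and can include stopped containers as well:

docker ps -q | wc -l     # running containers only, no header line to skew the count
docker ps -aq | wc -l    # include stopped containers too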

top 41 comments
[–] drkt@scribe.disroot.org 3 points 1 hour ago (2 children)

All of you bragging about 100+ containers, may I inquire as to what the fuck that's about? What are you doing with all of those?

[–] StrawberryPigtails@lemmy.sdf.org 1 points 25 minutes ago

In my case, most things that I didn't explicitly make public are running on Tailscale using their own Tailscale containers.

Doing it this way, each one gets its own address and I don't have to worry about port numbers. I can just type http://cars/ (Yes, I know. Not secure. Not worried about it) and get to my LubeLogger instance. But it also means I have 20ish copies of just the Tailscale container running.

On top of that, many services, like Nextcloud, are broken up into multiple containers. I think Nextcloud-aio alone has something like 5 or 6 containers it spins up, in addition to the master container. Tends to inflate the container numbers.
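
For readers who haven't seen that sidecar pattern, here is a minimal sketch with plain docker run commands. The auth key, volume name, and the LubeLogger image reference are placeholders, and the capability/device flags follow Tailscale's published container docs; exact settings vary by setup:

# Tailscale sidecar that joins the tailnet as "cars"
docker run -d --name cars-ts \
  --cap-add=NET_ADMIN \
  --device /dev/net/tun:/dev/net/tun \
  -e TS_AUTHKEY=tskey-auth-REPLACE_ME \
  -e TS_HOSTNAME=cars \
  -e TS_STATE_DIR=/var/lib/tailscale \
  -v cars-ts-state:/var/lib/tailscale \
  tailscale/tailscale:latest

# the app shares the sidecar's network namespace, so it answers on the
# sidecar's tailnet address (http://cars/) with no port juggling
docker run -d --name cars-app \
  --network container:cars-ts \
  REPLACE_WITH_LUBELOGGER_IMAGE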

[–] slazer2au@lemmy.world 1 points 1 hour ago

Things and stuff. There's the web front end, the API to the back end, the database, the Redis cache, the MQTT message queues.

And that is just for one of my web crawlers.

/S

[–] kylian0087@lemmy.dbzer0.com 2 points 1 hour ago

About 62 deployments with 115 "pods"

[–] Decronym@lemmy.decronym.xyz 3 points 2 hours ago* (last edited 55 minutes ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

NAS: Network-Attached Storage
Plex: Brand of media server package
k8s: Kubernetes container management package

3 acronyms in this thread; the most compressed thread commented on today has 5 acronyms.

[Thread #42 for this comm, first seen 29th Jan 2026, 11:00] [FAQ] [Full list] [Contact] [Source code]

[–] RockChai@piefed.social 2 points 2 hours ago

About 50 on a k8s cluster, then 12 more on a Proxmox VM running Debian, and about 20-ish on some Hetzner auction servers.

About 80 in total, but lots more at work :)

[–] fozid@feddit.uk 2 points 2 hours ago

I've currently got 23 on my N97 mini PC and 3 on my Raspberry Pi 4, making 26 in total.

I have no issues managing these. I use docker compose for everything and have about 10 compose.yml files for the 23 containers.
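
If you're curious how a count like that splits across compose stacks, Compose labels each container with its project name, so something like this (standard docker ps formatting, assuming Compose v2) gives a per-project breakdown:

docker ps --format '{{.Label "com.docker.compose.project"}}' | sort | uniq -c | sort -rn

docker compose ls    # or just list the compose projects themselves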

[–] kmoney@lemmy.kmoneyserver.com 12 points 4 hours ago* (last edited 4 hours ago) (1 children)

140 running containers and 33 stopped (that I spin up sometimes for specific tasks or testing new things), so 173 total on Unraid. I have them grouped into:

  • 118 Auto-updates (low chance of breaking updates or non-critical service that only I would notice if it breaks)
  • 55 Manual-updates (either it's family-facing e.g. Jellyfin, or it's got a high chance of breaking updates, or it updates very infrequently so I want to know when that happens, or it's something I want to keep particular note of or control over what time it updates e.g. Jellyfin when nobody's in the middle of watching something)

I subscribe to all their GitHub release pages via FreshRSS and have them grouped into the Auto/Manual categories. Auto takes care of itself, and I skim those release notes just to stay aware of any surprises. Manual usually has 1-5 releases each day, so I spend 5-20 minutes reading those release notes a bit more closely and updating them as a group, or holding off until I have more bandwidth for troubleshooting if it looks like an involved update.

Since I put anything that might cause me grief if it breaks in the manual group, I can also just not pay attention to the system for a few days and everything keeps humming along. I just end up with a slightly longer manual update list when I come back to it.

[–] a_fancy_kiwi@lemmy.world 4 points 3 hours ago

I’ve never looked into adding GitHub releases to FreshRSS. Any tips for getting that set up? Is it pretty straightforward?
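
For anyone else wondering the same thing: GitHub exposes an Atom feed for every repository's releases at a predictable URL, and FreshRSS can subscribe to it like any other feed (no token needed for public repos). The pattern is https://github.com/<owner>/<repo>/releases.atom, for example:

curl -s https://github.com/jellyfin/jellyfin/releases.atom | head -n 20    # sanity-check the feed before adding it to FreshRSS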

[–] gjoel@programming.dev 3 points 3 hours ago

Running Home Assistant with a few add-ons on a mostly dormant Raspberry Pi. That totals 19 lines.

[–] imetators@lemmy.dbzer0.com 4 points 4 hours ago

9 containers, of which 1 is a container manager with 8 containers inside (multi-container stacks counted as 1), and 9 more installed from the NAS app store. 18 total.

[–] neidu3@sh.itjust.works 12 points 6 hours ago (2 children)

1. Because I'm old, crusty, and prefer software deployments in a similar manner.
[–] slazer2au@lemmy.world 4 points 6 hours ago (3 children)

I salute you and wish you the best in never having a dependency conflict.

[–] neidu3@sh.itjust.works 5 points 6 hours ago* (last edited 5 hours ago)

I've been resolving them since the late 90s, no worries.

[–] Urist@lemmy.ml 2 points 5 hours ago

My worst dependency conflict was a libcurlssl error when trying to build on a precompiled base docker image.

[–] RIotingPacifist@lemmy.world 1 points 4 hours ago

I used Debian

[–] Arghblarg@lemmy.ca 1 points 4 hours ago
[–] eskuero@lemmy.fromshado.ws 2 points 4 hours ago

26, though this includes multi-container services like Immich or Paperless, which have 4 each.

[–] Sibbo@sopuli.xyz 7 points 7 hours ago (2 children)

0, it's all organised nicely with NixOS.

[–] slazer2au@lemmy.world 2 points 6 hours ago* (last edited 6 hours ago) (1 children)

Boooo, you need some chaos in your life. :D

[–] thinkercharmercoderfarmer@slrpnk.net 4 points 6 hours ago (1 children)

That's why I have one host called theBarrel and it's just 100 Chaos Monkeys and nothing else

[–] MasterBlaster@lemmy.world 1 points 6 minutes ago

This is the way.

I have 1 Podman container on NixOS because some obscure software has a packaging problem with ffmpeg and the NixOS maintainers removed it.

docker: command not found
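
For completeness, the Podman equivalent of the counting command from the post:

podman ps -q | wc -l    # running containers, no header line to subtract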

[–] Tywele@piefed.social 2 points 5 hours ago

35 containers and everything is running stable and most of it is automatically updated. In case something breaks I have daily backups of everything.

[–] eksb@programming.dev 2 points 5 hours ago

58, my CPU is usually around 10-20% usage. I really don't have any trouble managing or maintaining these. Things break almost weekly, but I understand how to fix them every time; it only takes a few minutes.

Server01: 64, Server02: 19, plus a bunch of sidecar containers solely for configs that aren't running.

[–] Dave@lemmy.nz 4 points 6 hours ago

Well, the containers are grouped into services. I'd easily have 15 services running; some run a separate Postgres or Redis while others use an internal SQLite, so it's hard to say.

If we're counting containers, then between Nextcloud and Home Assistant I'm probably over 20 already, lol.

[–] antifa_ceo@lemmy.ml 2 points 5 hours ago

89 total: 79 on my main server and 10 on my sandbox.

[–] blurry@feddit.org 5 points 7 hours ago

44 containers, and my average load over 15 min is still 0.41 on an old Intel NUC.

[–] otacon239@lemmy.world 4 points 6 hours ago

11 running on my little N150 box. Barely ever breaks a sweat.

[–] Strit@lemmy.linuxuserspace.show 3 points 6 hours ago

I don't have access to my server right now, but it's around 20 containers on my little N100 box.

[–] MrQuallzin@lemmy.world 2 points 6 hours ago

51 containers on my Unraid server, but only 39 running right now

[–] Ebby@lemmy.ssba.com 2 points 7 hours ago* (last edited 7 hours ago) (1 children)

Server 1: 5 containers, Server 2: 4, Server 3: 4, Server 4: 61.

Basically if a container is a resource hog, it gets moved somewhere with more resources or specialized hardware.

[–] slazer2au@lemmy.world -1 points 7 hours ago (1 children)

That's a wee bit imbalanced. Is server 4 your big boi?

[–] Ebby@lemmy.ssba.com 2 points 6 hours ago

It's the oldest, but not the most powerful. Not everything I host sees a lot of activity. But things like Plex/Jellyfin/Immich found their own hardware with better GPU support, and serious A/V or disk-intensive processes have a full-spec PC available. There is also a remote backup system in place, so a couple of containers are duplicates.
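
As a rough illustration of the GPU point, here is a minimal sketch of handing an Intel iGPU to a Jellyfin container for hardware transcoding; the host paths and volumes are placeholders for whatever the box actually uses:

# pass the host's /dev/dri render nodes through for VAAPI/QSV transcoding
docker run -d --name jellyfin \
  --device /dev/dri:/dev/dri \
  -v /srv/jellyfin/config:/config \
  -v /srv/media:/media:ro \
  -p 8096:8096 \
  jellyfin/jellyfin:latest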

[–] Shadow@lemmy.ca 2 points 7 hours ago

At my house, around 10. For lemmy.ca and our other sites, 35ish maybe. At work... hundreds.

[–] slazer2au@lemmy.world 2 points 7 hours ago
$ docker ps | wc -l
14

Just running 13 myself.

[–] filcuk@lemmy.zip 1 points 6 hours ago

Between 100 and 150.

[–] perishthethought@piefed.social 1 points 6 hours ago

25, with your "docker ps" command, on my aging NUC10 PC. Only using 5 GB of its 16 GB of RAM.

What, me worry?

[–] Smash@lemmy.self-hosted.site 1 points 6 hours ago