
For the last two years, I've been treating compose files as individual runners for individual programs.

Then I brainstormed the concept of one singular docker-compose file that defines every single running container on my system (or at least, every one that can use compose): each install starts at the same root directory, and volumes branch out from there.

Then I found out this is how many people use compose: one compose file, with volumes and directories branching out from wherever ./ is called.

THEN I FOUND OUT... that many people who discover this move their installations to Podman, because each app's compose file targets a different compose version, calling those versions breaks the concept of having one singular docker-compose.yml file, and Podman doesn't need a version for compose files.

Is there some meta for the best way to handle these apps collectively?

top 39 comments
[-] null@slrpnk.net 54 points 8 months ago

I think compose is best used somewhere in between.

I like to have separate compose files for all my service "stacks". Sometimes that's a frontend, backend, and database. Other times it's just a single container.

It's all about how you want to organize things.

[-] mhzawadi@lemmy.horwood.cloud 28 points 8 months ago

I do this: one compose file per application, containing everything that application needs: volumes, networks, secrets.

In single docker host land, each application even has its own folder with the compose file and any other artifacts in it.

[-] fraydabson@sopuli.xyz 3 points 8 months ago

Yeah this post had me a little worried I’m doing something wrong haha. But I do it just like that. Compose file per stack.

[-] chiisana@lemmy.chiisana.net 37 points 8 months ago

Multiple compose files, each in their own directory for a stack of services. Running Lemmy? It goes in ~/compose_home/lemmy, with binds for the image resizer and database as folders inside that directory. Running a website? It goes in ~/compose_home/example.com, with its static files, API, and database binds all as folders inside that. Etc etc. Use a gateway reverse proxy (I prefer Traefik, but to each their own) and have each stack join its network to expose only what you need.

Backup is easy: snapshot the volume binds (stopping any service individually as needed). Moving a specific stack to another server is easy: just move the directory over to the new system (and update the gateway info if required). Upgrading is easy: just upgrade the individual stack and you're off to the races.

Pulling all stacks into a single compose file for the system as a whole is nuts. You lose all the flexibility and gain… nothing?

[-] antsu@lemmy.wtf 7 points 8 months ago

This. And I recently found out you can also use includes in compose v2.20+, so if your stack complexity demands it, you can have a small top-level docker-compose.yml with includes to smaller compose files, per service or any other criteria you want.

https://docs.docker.com/compose/multiple-compose-files/include/
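A minimal sketch of what that can look like; the per-stack file names here are made up:

```yaml
# docker-compose.yml (requires Compose v2.20+)
include:
  - lemmy/compose.yml      # hypothetical per-stack files
  - traefik/compose.yml

services:
  # services can still be defined alongside the includes
  whoami:
    image: traefik/whoami
```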

[-] Lasso1971@thelemmy.club 1 points 8 months ago

I prefer compose merge, because my "downstream" services can propagate their depends_on/networks to things that depend on them upstream.

There's an env variable (COMPOSE_FILE) you set in .env, so it's similar to include.

The one thing I prefer about include is that each included directory can have its own .env file, which merges with the top-level .env. With merge, it seems you're stuck with one .env file for all in-file substitutions.
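For anyone comparing the two mechanisms: merge is the older behaviour of passing multiple -f flags, and the file list can also live in .env. A sketch, with made-up file names:

```shell
# merge: later files override and extend earlier ones
docker compose -f compose.base.yml -f compose.lemmy.yml up -d

# equivalently, set it once in .env so a plain `docker compose` sees both:
# COMPOSE_FILE=compose.base.yml:compose.lemmy.yml
```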

[-] JustEnoughDucks@feddit.nl 3 points 8 months ago

That's what I do. I always thought I was doing it "wrong", but it just made sense to me. I can also up/down individual compose files to pull new images, test things, disable a service, and apply config updates without affecting any other container at all.

I even keep my docker config files in a separate directory so I can back up the docker composes in a second over the network.

I started with a single MariaDB instance holding multiple databases, but now I see the benefits of moving to one database container per compose file that needs it. It's even more flexible: I don't need to start MariaDB and Redis before all of my containers.

File permission problems? Down the compose file that needs it, fix it, and re-up it, without losing any uptime for other services and without ever having to kludge docker commands together.

[-] possiblylinux127@lemmy.zip 19 points 8 months ago

I use multiple compose files for simplicity

[-] MonkCanatella@sh.itjust.works 16 points 8 months ago

I've always heard the opposite advice: don't put all your containers in one compose file. If you have to update an image for one app, wouldn't you have to restart all of your apps?

[-] lue3099@lemmy.world 3 points 8 months ago* (last edited 8 months ago)

You can reference a single container or multiple containers in a compose stack.

docker compose -f /path/to/compose.yml restart NameOfServiceInCompose

[-] MonkCanatella@sh.itjust.works 1 points 8 months ago

whoa, I never knew that. Great tip!

[-] SheeEttin@lemmy.world 3 points 8 months ago

If by app you mean container, no. You pull the latest image and rerun docker compose up. It will make only the necessary changes, in this case recreating that one container to update it.

[-] Mythnubb@lemm.ee 11 points 8 months ago

As others have said, I have a root docker directory, with directories inside for all my stacks, like Plex. Then I run this script, which loops through them all to update everything in one command.

for n in plex-system bitwarden freshrss changedetection.io heimdall invidious paperless pihole transmission dashdot
do
    cd "/docker/$n" || continue
    docker-compose pull
    docker-compose up -d
done

echo "Removing old docker images..."
docker image prune -f

[-] pete_the_cat@lemmy.world 17 points 8 months ago

Or just use the Watchtower container to auto-update them 😉

[-] DH10@feddit.de 7 points 8 months ago

I don't like the auto-update function. I also use a script similar to the one OP uses (with a .ignore file added). I like to be in control of when (or if) updates happen. I use Watchtower as a notification service.

[-] Mythnubb@lemm.ee 1 points 8 months ago

Exactly, when it updates, I want to initiate it to make sure everything goes as it should.

[-] pete_the_cat@lemmy.world 1 points 8 months ago

Nothing of mine is that important that I couldn't recreate/roll back the container if it does happen to screw up.

[-] chiisana@lemmy.chiisana.net 1 points 8 months ago

I scream-test myself… Kidding aside, I try to pin to major versions where possible: postgres:16-alpine, for example, will generally not break between updates, and things should just chug along. It's when indie devs tag nothing other than latest, or don't adhere to semantic versioning best practices, that I keep Watchtower off and just update manually once in a blue moon.
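In a compose file, that kind of pinning looks like this (the app image is a made-up example):

```yaml
services:
  db:
    # pinned to a major version: patch and minor updates
    # within the 16 line should not break things
    image: postgres:16-alpine
  app:
    # avoid a bare :latest tag for anything you care about
    image: ghcr.io/example/app:1.4
```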

[-] Toribor@corndog.social 9 points 8 months ago

I moved from compose to using Ansible to deploy containers. The Ansible container config looks almost identical to a compose file but I can also create folders, config files, set permissions, etc.

[-] SheeEttin@lemmy.world 4 points 8 months ago

Can you give an example playbook?

[-] Toribor@corndog.social 6 points 8 months ago

Sure. Below is an example playbook that is fairly similar to how I'm deploying most of my containers.

This example creates a folder for samba data, creates a config file from a template and then runs the samba container. It even has a handler so that if I make changes to the config file template it will cycle the container for me after deploying the updated config file.

I usually structure everything as an ansible role which just splits up this sort of playbook into a folder structure instead. ChatGPT did a great job of helping me figure out where to put files and generally just sped up the process of me creating tasks to do common things like setup a cronjob, install a package, or copy files around.

- name: Run samba
  hosts: servername

  vars:
    samba_data_directory: "/home/me/docker/samba"

  tasks:
  - name: Create samba data directory
    ansible.builtin.file:
      path: "{{ samba_data_directory }}"
      state: directory
      mode: '0755'

  - name: Create samba config from a jinja template file
    ansible.builtin.template:
      src: templates/smb.conf.j2
      dest: "{{ samba_data_directory }}/smb.conf"
      mode: '0644'
    notify: Restart samba container

  - name: Run samba container
    community.docker.docker_container:
      name: samba
      image: dperson/samba
      ports:
        - 445:445
      volumes:
        - "{{ samba_data_directory }}:/etc/samba/"
        - "/home/me/samba_share:/samba_share"
      env:
        TZ: "America/Chicago"
        USERID: '1000'
        GROUPID: '1000'
        USER: "me;mysambapassword"
        WORKGROUP: "my-samba-workgroup"
      restart_policy: unless-stopped

  handlers:
  - name: Restart samba container
    community.docker.docker_container:
      name: samba
      restart: true

[-] poVoq@slrpnk.net 8 points 8 months ago

The best way is to use Podman's Systemd integration.

[-] dandroid@dandroid.app 4 points 8 months ago

This is what I use whenever I make my own services, or when using a simple service with only one container. But I have yet to figure out how to convert a more complicated service like Lemmy that already uses docker-compose, so I just use podman-docker and emulate docker-compose with Podman. But that doesn't get me any of the benefits of systemd, and now my Podman has a daemon, which defeats one of the main purposes of Podman.

[-] poVoq@slrpnk.net 6 points 8 months ago

Just forget about podman-compose and use simple Quadlet container files with Systemd. That way it is not all in the same file, but Systemd handles all the inter-relations between the containers just fine.

Alternatively Podman also supports kubernetes configuration files, which is probably closer to what you have in mind, but I never tried that myself as the above is much simpler and better integrated with existing Systemd service files.
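For anyone curious, a Quadlet container file is just a small systemd-style unit. A sketch with a hypothetical nginx example (rootless units live in ~/.config/containers/systemd/):

```ini
# ~/.config/containers/systemd/web.container
[Unit]
Description=Example web container

[Container]
Image=docker.io/library/nginx:alpine
PublishPort=8080:80
Volume=%h/web-data:/usr/share/nginx/html:Z

[Install]
WantedBy=default.target
```

After a `systemctl --user daemon-reload`, Quadlet generates web.service, which you start and enable like any other unit.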

[-] vegetaaaaaaa@lemmy.world 1 points 8 months ago

Quadlet

Requires Podman 4.4, though

[-] poVoq@slrpnk.net 1 points 8 months ago

No; from that version on it's integrated into Podman, but it was available for earlier versions as a third-party extension as well.

But if you are not yet on Podman 4.4 or later you should really upgrade soon, that version is quite old already.

[-] vegetaaaaaaa@lemmy.world 1 points 8 months ago

you should really upgrade soon

Debian stable has podman 4.3 and 4.4 is not in stable-backports

[-] krolden@lemmy.ml 6 points 8 months ago

Podman with systemd works better if you just do your podman run command with all the variables and such, and then run podman generate systemd.

podman-compose feels like a band-aid for people coming from docker-compose. If you run podman-compose and then do podman generate systemd, it will just make a systemd unit that starts podman-compose. In my experience, having all of the config in the actual systemd unit file makes your life easier in the long run. The fewer config files the better, I say.
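That workflow, sketched with a hypothetical nginx container run as a user service:

```shell
# run the container once with its full configuration
podman run -d --name web -p 8080:80 docker.io/library/nginx:alpine

# generate a unit that recreates the container on start (--new)
podman generate systemd --new --files --name web

# install and enable it as a user service
mkdir -p ~/.config/systemd/user
mv container-web.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-web.service
```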

[-] poVoq@slrpnk.net 5 points 8 months ago

It's even simpler now that Quadlet is integrated in Podman 4.4 and later.

[-] krolden@lemmy.ml 2 points 8 months ago

Oh yeah I remember reading some stuff about that but didn't dig too deep. I'll have to check it out again

[-] exu@feditown.com 2 points 8 months ago

You can use podman pods and generate the systemd file for the whole pod.

[-] dandroid@dandroid.app 1 points 8 months ago

But how do I convert the docker-compose file to a pod definition? If I have to do it manually, that's a pass, because I don't want to do it again if Lemmy updates and significantly changes its docker-compose file, which it did when 0.18.0 came out.

[-] ErwinLottemann@feddit.de 3 points 8 months ago

Doesn't systemd come with its own container thingy?

[-] Max_P@lemmy.max-p.me 3 points 8 months ago

You're probably thinking of systemd-nspawn. Technically yes, those are containers, but not the same flavour. It's more like LXC than Docker: it runs init and starts a full distro, like a VM but as a container.

[-] exu@feditown.com 0 points 8 months ago

Nope, but it integrates very well with Podman.

[-] Decronym@lemmy.decronym.xyz 3 points 8 months ago* (last edited 8 months ago)

Acronyms, initialisms, abbreviations, contractions, and other phrases which expand to something larger, that I've seen in this thread:

Fewer Letters  More Letters
LXC            Linux Containers
NAT            Network Address Translation
Plex           Brand of media server package
VPS            Virtual Private Server (as opposed to shared hosting)

[Thread #217 for this sub, first seen 15th Oct 2023, 20:15]

[-] LievitoPadre@feddit.it 1 points 8 months ago

Have you tried portainer?

[-] TheHolm@aussie.zone 0 points 8 months ago

You can always add a Makefile to traverse the directories.
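Something like this, for instance; the stack names are placeholders, and it assumes one compose file per directory:

```make
STACKS := lemmy traefik paperless

.PHONY: update
update:
	for s in $(STACKS); do \
		docker compose --project-directory $$s pull && \
		docker compose --project-directory $$s up -d; \
	done
```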

[-] csolisr@communities.azkware.net -1 points 8 months ago

I'm currently using YunoHost behind CG-NAT with a WireGuard VPS bypass, but I plan on moving to a Dockerized setup soon because YNH still uses an outdated version of Debian. What do you recommend to keep my setup as similar to YNH as possible?

this post was submitted on 15 Oct 2023
93 points (96.0% liked)

Selfhosted
