
Up to now I've just put docker-compose.yml in one user's directory (my own) and called it a day.

But what about a service with multiple admins, or one whose load is split horizontally across several machines?

[-] witten@lemmy.world 10 points 10 months ago

I'm not sure I understand the question. By "data" do you mean "configuration"? If you've got multiple devs working on a project (or even if you don't), IMO your Docker Compose or Podman configuration should be in source control. That will allow multiple devs to all collaborate on the config, do code reviews, etc. Then, you can use whatever your deployment method is to effect those changes on your server(s)... manually run Ansible, automatically run CI-triggered deployment, whatever.
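As a sketch, that deployment step could be a couple of Ansible tasks (repo URL, paths, and project name here are made up):

```yaml
# Hypothetical Ansible tasks: pull the compose repo and apply it.
- name: Check out the compose configuration
  ansible.builtin.git:
    repo: https://git.example.com/infra/compose.git
    dest: /opt/compose
    version: main

- name: Bring the stack up with any changes
  community.docker.docker_compose_v2:
    project_src: /opt/compose/myapp
    state: present
```

The same tasks work whether you run the playbook by hand or from CI.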

[-] eager_eagle@lemmy.world 4 points 10 months ago* (last edited 10 months ago)

I had Portainer set up, but it was clunky and the web UI added little value.

Now I just have a local git repo with a directory for each compose stack and run docker compose commands as needed. The repo holds all the YAML and config files I care to keep track of. Env variables live in gitignored .env files, with matching .env.example files in version control. I keep the sensitive values in my password manager in case I have to recreate a .env from its example counterpart.
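A minimal sketch of that layout (stack and variable names are examples):

```shell
# One directory per compose stack; commit the example, gitignore the real .env
mkdir -p compose/myapp
cat > compose/myapp/.env.example <<'EOF'
# Copy to .env and fill in real values (kept out of git)
POSTGRES_PASSWORD=changeme
EOF
cat > compose/.gitignore <<'EOF'
.env
EOF
# Recreating a real .env from its committed example counterpart
cp compose/myapp/.env.example compose/myapp/.env
```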

To handle volumes, I avoid docker-managed volumes at all costs in favor of cleaner bind mounts. That way the data for each stack always lives alongside the corresponding configuration files. If I care about keeping the data, it's either version controlled (when mostly text) or backed up with kopia (when mostly binary).
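For reference, the difference in a compose file looks like this (service and paths are examples):

```yaml
services:
  app:
    image: nginx:alpine
    volumes:
      # Bind mount: data lives next to the compose file, so it can be
      # version controlled or backed up with the rest of the stack.
      - ./data:/usr/share/nginx/html
      # The docker-managed named-volume equivalent would be:
      # - app_data:/usr/share/nginx/html
```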

[-] retrodaredevil@lemmy.world 2 points 10 months ago

I do something similar, but I avoid gitignore at all costs because any secret data should have root-only read permissions. Plus, any data that is not version controlled goes in a common directory, so all I have to do is back up that directory and I'm good. It makes moving between machines easy if I ever need to do that.
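That permission scheme might look something like this (paths are examples; these need root):

```sh
# Secrets readable only by root, instead of relying on .gitignore
sudo chown root:root /srv/stacks/myapp/secrets.env
sudo chmod 600 /srv/stacks/myapp/secrets.env

# Everything not in version control lives under one directory,
# so a backup is just that one tree:
sudo tar czf /backup/stacks-data.tar.gz /srv/stacks-data
```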

[-] ghulican@lemmy.ml 4 points 10 months ago* (last edited 10 months ago)

Env variables get saved to 1Password (self hosted alternative would be Infisical) with a project for each container.

Docker compose files get synced up to my GitHub account.

I have been using the new “include” attribute to split up each container into its own docker compose file.

Usually I organize by service type:

media

  • sonarr
  • radarr

downloaders

  • sab
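For anyone who hasn't used it, include (added in Compose v2.20) lets a top-level file pull the per-service files together; a minimal sketch with the services above (directory layout assumed):

```yaml
# docker-compose.yml — hypothetical top-level file
include:
  - media/sonarr/docker-compose.yml
  - media/radarr/docker-compose.yml
  - downloaders/sab/docker-compose.yml
```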

Not sure if that answers the question…

[-] Smiling_Fanatic@lemmy.world 2 points 10 months ago

Can we get a link to your GitHub?

[-] smileyhead@discuss.tchncs.de 1 points 10 months ago

I do similarly, but my question was about the situation where there are multiple admins with access to the server. Would you create one account for everything, or how else would you manage it?

[-] TheProtector0034@feddit.nl 3 points 10 months ago

I use Portainer to manage compose files (called stacks in Portainer).

[-] Toribor@corndog.social 3 points 10 months ago* (last edited 10 months ago)

I've been slowly moving all my containers from compose to pure Ansible. It also makes it easier to manage creating config files, setting permissions, cycling containers after updating files, etc.

I still have a few things in compose though and I use Ansible to copy updates to the target server. Secrets are encrypted with Ansible vault.
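A rough sketch of one of those Ansible-managed containers (names invented; the password would come from a vault-encrypted vars file):

```yaml
- name: Run the app container
  community.docker.docker_container:
    name: myapp
    image: ghcr.io/example/myapp:latest
    restart_policy: unless-stopped
    env:
      DB_PASSWORD: "{{ vault_db_password }}"
    volumes:
      - /srv/myapp/config:/config
```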

[-] SheeEttin@lemmy.world 2 points 10 months ago

Multiple admins should be able to manage podman just fine.

[-] skadden@ctrlaltelite.xyz 2 points 10 months ago

I host Forgejo internally and use it to sync changes. The .env files and data directories are in .gitignore (they get backed up via a separate process).

All the files belong to my docker group, so anyone in it can read everything. Restarting services is handled by systemd unit files (so sudo systemctl stop/start/restart); any user that needs to manipulate containers gets the appropriate sudo access.
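A systemd unit wrapping a compose stack, roughly as described (service name and paths are hypothetical):

```ini
# /etc/systemd/system/myapp.service
[Unit]
Description=myapp compose stack
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=true
WorkingDirectory=/srv/compose/myapp
ExecStart=/usr/bin/docker compose up -d
ExecStop=/usr/bin/docker compose down

[Install]
WantedBy=multi-user.target
```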

It's only me doing all this though; I set it up this way for funsies.

[-] Lodra@programming.dev 1 points 10 months ago

Well I'm also not entirely sure what you're looking for. But here's my guess 😅

None of this stuff should run under the account of a human user. Without docker/compose, I would suggest creating one account per service and deploying them to different directories with different permissions. With docker compose, just deploy them all together and run everything under a single service account; probably name it "docker". When an admin needs access, they sudo su - docker and then do stuff.
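Concretely, that might be set up like this (account name is an example; using "docker" itself would collide with the group Docker already creates):

```sh
# One non-login service account that owns all the stacks
sudo useradd --system --create-home --shell /usr/sbin/nologin svc-docker
sudo usermod -aG docker svc-docker

# Admins switch into it to manage services
sudo su - svc-docker -s /bin/bash
```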

[-] Aux@lemmy.world -2 points 10 months ago

It's better to manage your infrastructure with Ansible.

this post was submitted on 17 Aug 2023

Selfhosted
