this post was submitted on 30 Jan 2026
50 points (98.1% liked)

Selfhosted

Not containers and data, but the images. The point would be reproducibility in case a remote registry no longer contains a certain image. Do you do that, and how?

[–] eskuero@lemmy.fromshado.ws 3 points 4 days ago* (last edited 4 days ago)

Yes I do. I cooked up a small Python script that runs at the end of every daily backup.
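For scheduling, one option is a cron entry that runs the export after the nightly backup; the script path, time, and log file below are assumptions for illustration, not part of the original setup:

```crontab
# Run the image export after the 03:00 backup finishes (hypothetical path)
30 3 * * * /usr/bin/python3 /usr/local/bin/save-docker-images.py >> /var/log/docker-image-backup.log 2>&1
```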

import subprocess
import json
import os

# Output directory for the saved image tarballs
OUTPUT_DIR = "/data/dockerimages"
os.makedirs(OUTPUT_DIR, exist_ok=True)

# Grab all the Docker images. Each line is a JSON string describing one image.
imagenes = subprocess.run(
    ["docker", "images", "--format", "json"],
    stdout=subprocess.PIPE, stderr=subprocess.DEVNULL,
).stdout.decode().splitlines()

for imagen in imagenes:
    if not imagen:
        continue
    datos = json.loads(imagen)
    # ID of the image to save
    imageid = datos["ID"]
    # Compose the output name like this:
    # ghcr.io-immich-app-immich-machine-learning:release:2026-01-28:3c42f025fb7c.tar
    outputname = f"{datos['Repository']}:{datos['Tag']}:{datos['CreatedAt'].split(' ')[0]}:{imageid}.tar".replace("/", "-")
    # If the file already exists, skip it
    if not os.path.isfile(f"{OUTPUT_DIR}/{outputname}"):
        print(f"Saving {outputname}...")
        subprocess.run(["docker", "save", imageid, "-o", f"{OUTPUT_DIR}/{outputname}"])
    else:
        print(f"Already exists {outputname}")
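To get an image back from one of these tarballs, `docker load` reads it in. A minimal sketch of batch-restoring a backup directory; the helper name and directory are illustrative, not part of the script above:

```python
import subprocess
from pathlib import Path

def load_commands(backup_dir):
    """Build one `docker load` invocation per saved .tar in backup_dir."""
    return [["docker", "load", "-i", str(tar)]
            for tar in sorted(Path(backup_dir).glob("*.tar"))]

# Restore everything (needs a running Docker daemon):
# for cmd in load_commands("/data/dockerimages"):
#     subprocess.run(cmd, check=True)
```

One caveat: since the script saves by image ID rather than by `repository:tag`, the loaded image may come back untagged, so you might need a `docker tag` afterwards using the name encoded in the filename.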