this post was submitted on 17 Feb 2025
74 points (98.7% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


Basically title. I'm in the process of setting up a proper backup for my configured containers on Unraid and I'm wondering how often I should run my backup script. Right now, I have a cron job set to run on Monday and Friday nights; is this too frequent? What's your schedule, and do you strictly back up your appdata (container configs), or is there other data you include in your backups?

[–] Darkassassin07@lemmy.ca 33 points 1 week ago* (last edited 1 week ago) (2 children)

I run Borg nightly, backing up the majority of the data on my boot disk, incl docker volumes and config + a few extra folders.

Each individual archive is around 550 GB, but thanks to de-duplication and compression it's only ~800 MB of new data each day, taking around 3 minutes to complete the backup.

Borg's de-duplication is honestly incredible. I keep 7 daily backups, 3 weekly, 11 monthly, then one for each year beyond that. The 21 historical backups I have right now would be 10.98 TB of raw data; after de-duplication and compression they only take up 407.98 GB on disk.

With that kind of space savings, I see no reason not to keep such frequent backups. Hell, the whole archive takes up less space than one copy of the original data.
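For reference, a minimal sketch of what a nightly Borg job with that kind of retention could look like (the repository path and source paths are placeholders, adjust to taste):

    #!/bin/sh
    # Placeholder repo; could also be an ssh:// URL for a remote repository.
    export BORG_REPO=/mnt/backup/borg

    # One archive per night; {hostname} and {now} are built-in borg placeholders.
    borg create --compression zstd --stats \
        ::'{hostname}-{now}' \
        /etc /home /var/lib/docker/volumes

    # Retention roughly matching the comment above: 7 daily, 3 weekly, 11 monthly,
    # plus yearly archives beyond that (a large --keep-yearly stands in for "forever").
    borg prune --keep-daily 7 --keep-weekly 3 --keep-monthly 11 --keep-yearly 100
    borg compact   # reclaim space freed by pruning (borg >= 1.2)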

[–] Sunny@slrpnk.net 5 points 1 week ago

Thanks for sharing the details on this, very interesting!

[–] darklamer@lemmy.dbzer0.com 21 points 1 week ago (1 children)
[–] IsoKiero@sopuli.xyz 4 points 1 week ago

Yep. Even if the data I'm backing up doesn't really change that often. Perhaps I should start backing up files from my laptop and workstation too. Nothing too important is stored only on those devices, but reinstalling and reconfiguring everything is a bit of a chore.

[–] slazer2au@lemmy.world 18 points 1 week ago (1 children)
[–] metaStatic@kbin.earth 4 points 1 week ago (2 children)
[–] slazer2au@lemmy.world 35 points 1 week ago (3 children)

That is what the B in RAID stands for.

[–] AtariDump@lemmy.world 8 points 1 week ago

Just like the “s” in IoT stands for “security”

[–] avidamoeba@lemmy.ca 4 points 1 week ago (1 children)

What's the second B stand for?

[–] meyotch@slrpnk.net 4 points 1 week ago (1 children)

Beets.

Or bears.

Or buttsex.

It’s context dependent, like “cool”.

[–] avidamoeba@lemmy.ca 6 points 1 week ago* (last edited 1 week ago)

If RAID is backup, then Unraid is?

[–] Ganbat@lemmy.dbzer0.com 12 points 1 week ago

I do not as I cannot afford the extra storage required to do so.

[–] savvywolf@pawb.social 12 points 1 week ago

Daily backups here. Storage is cheap. Losing data is not.

[–] ikidd@lemmy.world 10 points 1 week ago* (last edited 1 week ago) (2 children)

Proxmox servers are mirrored zpools, not that RAID is a backup. Replication between Proxmox servers every 15 minutes for HA guests, hourly for less critical guests. Full backups with PBS at 5AM and 7PM, 2 sets apiece, with one set that goes off-site and is rotated weekly. Differential replication every day to zfs.rent. I keep 30 dailies, 12 weeklies, 24 monthlies, and infinite annuals.

Periodic test restores of all backups at various granularities at least monthly or whenever I'm bored or fuck something up.

Yes, former sysadmin.


I use Duplicati for my backups, and have backup retention set up like this:

Save one backup each day for the past week, then save one each week for the past month, then save one each month for the past year.

That way I have granular backups for anything recent, and the further back in the past you go, the less frequent the backups are, to save space.
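Duplicati exposes this as a custom retention policy. A rough, unverified sketch of what that could look like from the CLI (the binary name, flag, and string format here are from memory, so check the docs before relying on them; paths are placeholders):

    # One backup per day for a week, per week for a month, per month for a year.
    duplicati-cli backup file:///mnt/backups/desktop /home/me \
        --retention-policy="1W:1D,1M:1W,1Y:1M"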

[–] 30p87@feddit.org 6 points 1 week ago (1 children)

Every hour, automatically.

Never on my laptop, because I'm too lazy to create a mechanism that detects when it's possible.

[–] thejml@lemm.ee 4 points 1 week ago

I just tell it to back up my laptops every hour anyway. If it’s not on, it just doesn’t happen, but it’s generally on enough to capture what I need.
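A lazy middle ground between the two approaches is a wrapper that runs hourly but exits quietly whenever the backup target isn't reachable; a minimal sketch (the hostname, repo path, and choice of borg are placeholders):

    #!/bin/sh
    # Skip silently if the NAS isn't reachable (laptop off-site, asleep, etc.).
    ping -c 1 -W 2 nas.example.lan >/dev/null 2>&1 || exit 0

    # Otherwise run the normal backup; borg over ssh used here as an example.
    borg create --compression zstd \
        ssh://borg@nas.example.lan/./backups/laptop::'{hostname}-{now}' \
        "$HOME"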

[–] JASN_DE@lemmy.world 5 points 1 week ago

Nextcloud data daily, same for the docker configs. Less important/rarely changing data once per week. Automatic sync to NAS and online storage. Irregular and manual sync to an external disk.

7 daily backups, 4 weekly backups, "infinite" monthly backups retained (until I clean them up by hand).

Boils down to how much you're willing to lose. Personally, I do weekly.

[–] Lem453@lemmy.ca 4 points 1 week ago (2 children)

Local zfs snap every 5 mins.

Borg backs up everything hourly to 3 different locations.

I've blown away docker folders of config files a few times by accident. So far I've only had to dip into the zfs snaps to bring them back.
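A cron sketch of that kind of schedule (pool/dataset names and the repo path are placeholders; you'd also want something like sanoid or a small script pruning old snapshots so they don't pile up):

    # crontab -e  (note: % must be escaped inside crontab lines)
    */5 * * * *  zfs snapshot tank/appdata@auto-$(date +\%F-\%H\%M)
    0   * * * *  borg create --compression zstd /mnt/backup/borg::'appdata-{now}' /tank/appdata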

[–] zero_gravitas@aussie.zone 4 points 1 week ago* (last edited 1 week ago)

Right now, I have a cron job set to run on Monday and Friday nights; is this too frequent?

Only you can answer this. How many days of data are you prepared to lose? What are the downsides of running your backup scripts more frequently?

[–] deadcatbounce@reddthat.com 4 points 1 week ago

If you haven't tested your backups, you ain't got a backup.

[–] mosjek@lemmy.world 3 points 1 week ago

I classify the data according to its importance (gold, silver, bronze, ephemeral). The regularity of the zfs snapshots (15 minutes to several hours) and their retention time (days to years) on the server depend on this. I then send the more important data that I cannot restore, or can only restore with great effort (gold and silver), to another server once a day. For bronze, the zfs snapshots and a few days of retention on the server are enough for me, as it is usually data that I can restore (build artifacts or similar) or that is simply not that important. Ephemeral is for unimportant data such as caches or pipelines.

[–] Solaer@lemmy.world 3 points 1 week ago

I back up all of my Proxmox LXCs/VMs to a Proxmox Backup Server every night and sync those backups to another PBS in another town, plus a second Proxmox backup every noon to my NAS. (I know, the 3-2-1 rule isn't quite reached...)

[–] desentizised@lemm.ee 3 points 1 week ago

rsync from ZFS to an off-site Unraid every 24 hours, 5 times a week. On the sixth day it does a checksum-based rsync, which obviously means more stress, so that only runs once a week. The seventh day is reserved for ZFS scrubbing every two weeks.
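The difference between the two runs is essentially one flag; a sketch with placeholder paths and hostname:

    # Nightly: incremental sync based on size/mtime (fast, light on the disks)
    rsync -aH --delete /tank/data/ backup-host:/mnt/user/backups/data/

    # Weekly: re-read and checksum every file on both ends (much heavier)
    rsync -aH --delete --checksum /tank/data/ backup-host:/mnt/user/backups/data/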

[–] Slax@sh.itjust.works 3 points 1 week ago

I have

  • Unraid backing up its USB
  • Unraid appdata getting backed up weekly by a Community Applications plugin (CA Appdata Backup), and I use rclone to back that up to an old Box account (100 GB for life..). I did have it encrypted, but it seems I need to fix that..
  • A parity drive on my Unraid (8 TB)
  • I am trying to understand how to use rclone to back up my photos to Proton Drive, so that's next (see the sketch below).

Music and media aren't too important yet, but I would love some insight.
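For the Proton Drive part, recent rclone versions (1.64+) ship a protondrive backend, so the rough shape would be (remote name and paths are placeholders):

    # One-time: create a remote, e.g. named "proton", of type "protondrive"
    rclone config

    # Then sync the photo share up; layer a crypt remote on top of "proton"
    # if you want the uploaded data encrypted.
    rclone sync /mnt/user/photos proton:photos --progress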

[–] 3aqn5k6ryk@lemmy.world 3 points 1 week ago

No backup for my media. Only redundancy.

For my Nextcloud data, any time I make major changes.

[–] Dagamant@lemmy.world 3 points 1 week ago

Weekly full backup, nightly incremental

[–] hendrik@palaver.p3x.de 3 points 1 week ago* (last edited 1 week ago)

Most backup software allows you to configure backup retention. I think I went with a pretty standard once per day for a week. After that they get deleted, and it keeps just one per week of the older ones, for one or two months. And after that it's down to monthly snapshots. I think that aligns well with what I need. Sometimes I find out something broke the day before yesterday, but I don't think I've ever needed a backup from exactly the 12th of December or something like that. So I'm fine if they get more sparse after some time. And I don't need full backups more often than necessary; an incremental backup will do unless there's some technical reason to do full ones.

But it entirely depends on the use-case. Maybe for a server or stuff you work on, you don't want to lose more than a day. While it can be perfectly alright to back up a laptop once a week. Especially if you save your documents in the cloud anyway. Or you're busy during the week and just mess with your server configuration on weekends. In that case you might be alright with taking a snapshot on fridays. Idk.

(And there are incremental backups, full backups, filesystem snapshots. On a desktop you could just use something like time machine... You can do different filesystems at different intervals...)

[–] atzanteol@sh.itjust.works 3 points 1 week ago

I have a cron job set to run on Monday and Friday nights; is this too frequent?

Only you can answer that - what is your risk tolerance for data loss?

[–] nichtburningturtle@feddit.org 3 points 1 week ago (1 children)

Timeshift creates a btrfs snapshot on each boot for me. And my server gets nightly borg backups.

[–] QuizzaciousOtter@lemm.ee 5 points 1 week ago (3 children)

Just a friendly reminder that BTRFS snapshots are not backups.

[–] tal@lemmy.today 3 points 1 week ago (2 children)

You're correct, and the person you're responding to is probably treating one as an alternative to the other.

However, filesystem snapshotting can theoretically be used to enable backups, because snapshots provide an instantaneous, consistent view of a filesystem. I don't know if there are backup systems that do this with btrfs today, but it would involve taking a snapshot and then having the backup system back up the snapshot rather than the live view of the filesystem.

Otherwise, stuff like drive images and database files that are being written to while being backed up can end up as corrupted, inconsistent files in the backup.
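In practice that pattern is only a few commands; a minimal btrfs sketch (paths are placeholders, and restic here stands in for whatever backup tool you use):

    # Freeze a consistent, read-only view of the data
    btrfs subvolume snapshot -r /srv/appdata /srv/.appdata-snap

    # Back up the snapshot instead of the live (possibly mid-write) files
    restic -r /mnt/backup/restic backup /srv/.appdata-snap

    # Drop the snapshot once the backup is done
    btrfs subvolume delete /srv/.appdata-snap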

[–] vividspecter@lemm.ee 3 points 1 week ago (1 children)

btrbk works that way essentially. Takes read-only snapshots on a schedule, and uses btrfs send/receive to create backups.

There's also snapraid-btrfs which uses snapshots to help minimise write hole issues with snapraid, by creating parity data from snapshots, rather than the raw filesystem.

[–] tal@lemmy.today 2 points 1 week ago* (last edited 1 week ago)

and uses btrfs send/receive to create backups.

I'm not familiar with that, but if it permits faster identification of data modified since a given time than scanning the filesystem for modified files -- which a filesystem could potentially do -- that could also be a useful backup enabler, since your scan-for-changes time no longer needs to be linear in the number of files in the filesystem. If you don't do that, your next best bet on Linux -- and this way would be filesystem-agnostic -- is gonna require something like a daemon that runs and uses inotify to build some kind of on-disk index of modifications since the last backup, and a backup system that can understand that.

looks at btrfs-send(1) man page

Ah, yeah, it does do that. Well, the man page doesn't say what time it runs in, but I assume that it's better than linear in file count on the filesystem.
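For reference, the incremental form of btrfs send only walks the delta between two read-only snapshots rather than scanning the whole tree, roughly like this (paths and snapshot names are placeholders):

    # Both snapshots must be read-only; -p names the parent to diff against
    btrfs send -p /mnt/data/.snapshots/2025-02-16 /mnt/data/.snapshots/2025-02-17 \
        | btrfs receive /mnt/backup/.snapshots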

[–] QuizzaciousOtter@lemm.ee 2 points 1 week ago

Absolutely, my backup solution is actually based on BTRFS snapshots. I use btrbk (already mentioned in another reply) to take the snapshots and copy them to another drive. Then a nightly restic job backs up the latest snapshot to B2.

[–] metaStatic@kbin.earth 2 points 1 week ago

Thanks for reminding me to validate.

Daily here also.

[–] ocean@lemmy.selfhostcat.com 2 points 1 week ago

Depends on the system but weekly at least

[–] AMillionMonkeys@lemmy.world 2 points 1 week ago

I tried Kopia, but it was unstable and janky, so now it's whenever I remember to manually run a bunch of rsync. I back up my desktop to cold storage on the first of the month, so I should get into the habit of backing up my server to the NAS then as well.

[–] papertowels@mander.xyz 2 points 1 week ago

And equally important, how do you do your backups? What system and to where?

[–] truxnell@infosec.pub 2 points 1 week ago

Daily backups. Currently using restic on my NixOS servers. To avoid data corruption, I make a ZFS snapshot at 2am, and after that restic backs up my mutable data dirs both to my local NAS and Cloudflare R2. The NAS backup folder is synced to Backblaze nightly as well, as a colder store.

Depends on the application. I run a nightly backup of a few VMs because realistically they don't change much. I have containers on the other hand that run critical (to me) systems, like my photo backup, and they are backed up twice a day.

[–] avidamoeba@lemmy.ca 2 points 1 week ago* (last edited 1 week ago)

Every hour. Could do it more frequently if needed.

It depends on how resource intensive the backup process is.

Consider an 800GB Immich instance.

Using Duplicity or rsync takes 1 hour per backup. 99% of the time is spent traversing the directory structure and checking which files have changed; 1% is spent transferring the difference to the backup. Any backup system that operates on top of the filesystem would take this long. In addition, unless you're using something that can take snapshots of the filesystem, you have to stop Immich during the backup process in order to avoid backing up an invalid app state.

Using ZFS send on the other hand (with syncoid) takes less than 5 seconds to discover the differences, and the rest of the time is spent on the data transfer, at 100 MB/s in my case. Since ZFS send is based on snapshots, I don't have to stop the service either.
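A sketch of that kind of setup (dataset names, user, and host are placeholders); syncoid does the snapshot bookkeeping and only falls back to a full send when no common snapshot exists:

    # Hourly from cron: incremental zfs send of the Immich dataset to the backup box.
    # --no-sync-snap reuses existing snapshots (e.g. from sanoid) instead of creating new ones.
    0 * * * *  syncoid --no-sync-snap tank/immich backup@backupbox:backup/immich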

When I used Duplicity, I would back up once a week because the backup process was long and heavy on the disk array. Since I switched to ZFS send, I do it once an hour because there's almost no visible impact.

I'm now in the process of migrating my laptop to ZFS on root in order to be able to utilize ZFS send for regular full system backups. If successful, eventually I'll move all my machines to ZFS on root.

[–] HeyJoe@lemmy.world 2 points 1 week ago

I honestly don't have too much to back up, so I run one full backup job every Sunday for the different directories I care about. The jobs check each directory and only back up changed or new files. I don't have the space to back up everything, so I only take the smaller, most important stuff. The backup software also allows live monitoring if I enable it, so I have that turned on for some of my jobs since I didn't see any reason not to. To save money, I reuse the NAS drives that reported errors and were replaced with new ones. So far, so good.

The backup software is Bvckup 2; Reddit was a huge fan of it years ago, so I gave it a try. It was super cheap for a lifetime license at the time, and it's super lightweight. Sorry, there's no Linux version.

[–] MangoPenguin@lemmy.blahaj.zone 2 points 1 week ago* (last edited 1 week ago)

Longest interval is every 24 hours, with some more frequent, like every 6 hours or so for my game servers.

I have multiple backups (3-2-1 rule): one is just the important stuff as a file backup, the other is a full bootable system image of everything.

With proper backup software, incremental backups don't use any more space unless files have changed, so there's no real downside to more frequent backups.

[–] Xanza@lemm.ee 2 points 1 week ago (7 children)

I continuously back up important files/configurations to my NAS. That's about it.

IMO people who mirror or back up their media are insane... It's such an incredible waste of space. Having a robust media library is nice, but there's no reason you can't just start over if you hit data corruption or something. I have TBs and TBs of media that I could redownload in a weekend if something happened (if I even wanted to). No reason to waste backup space, IMO.
