this post was submitted on 03 Oct 2025
406 points (97.0% liked)

Selfhosted


A place to share alternatives to popular online services that can be self-hosted without giving up privacy or locking you into a service you don't control.


You might not even like rsync. Yeah it's old. Yeah it's slow. But if you're working with Linux you're going to need to know it.

In this video I walk through my favorite everyday flags for rsync.

Support the channel:
https://patreon.com/VeronicaExplains
https://ko-fi.com/VeronicaExplains
https://thestopbits.bandcamp.com/

Here's a companion blog post, where I cover a bit more detail: https://vkc.sh/everyday-rsync

Also, @BreadOnPenguins made an awesome rsync video and you should check it out: https://www.youtube.com/watch?v=eifQI5uD6VQ

Lastly, I left out all of the ssh setup stuff because I made a video about that and the blog post goes into a smidge more detail. If you want to see a video covering the basics of using SSH, I made one a few years ago and it's still pretty good: https://www.youtube.com/watch?v=3FKsdbjzBcc

Chapters:
1:18 Invoking rsync
4:05 The --delete flag for rsync
5:30 Compression flag: -z
6:02 Using tmux and rsync together
6:30 but Veronica... why not use (insert shiny object here)

[–] Appoxo@lemmy.dbzer0.com 1 points 1 hour ago

Veeam for image/block based backups of Windows, Linux and VMs.
syncthing for syncing smaller files across devices.

Thank you very much.

[–] atk007@lemmy.world 7 points 3 hours ago (2 children)

Rsnapshot. It uses rsync, but provides snapshot management and multiple backup versioning.

Yeah, I really like this approach. It's the same reason I set up Timeshift and Mint Backup on all the user machines in my house. For others, rsync + cron is aces.
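As a sketch of that rsync + cron pairing, a crontab entry might look like the following (the schedule, paths, and log file are invented examples):

```
# Mirror the documents directory to the backup drive every night at 02:30
30 2 * * * rsync -a --delete /home/user/documents/ /mnt/backup/documents/ >> /var/log/rsync-backup.log 2>&1
```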

[–] Tja@programming.dev 1 points 1 hour ago (1 children)

Yes, but a few hours writing my own scripts will save me from several minutes of reading its documentation...

[–] atk007@lemmy.world 1 points 1 hour ago

It took me like 10 minutes to set up rsnapshot (installing it and writing systemd unit/timer files) on my servers.

[–] RestrictedAccount@lemmy.world 2 points 5 hours ago (2 children)

I use syncthing.

Is rsync better?

Syncthing works pretty well for me and my stable of Ubuntu, Pi, Mac, and Windows machines.

[–] conartistpanda@lemmy.world 1 points 42 minutes ago

Syncthing is technically for synchronizing data across devices in real time (which I do with my phone), but I also use it to transfer data weekly over wi-fi to my old 2013 laptop with a 500GB HDD and Linux Mint. I only boot that laptop to transfer data, and even then I pause the transfers to it once it's done syncing. That way I can keep larger backups that wouldn't fit on my phone. LocalSend is unreliable for large amounts of data, while Syncthing can resume the transfer if anything goes wrong. On top of that, Syncthing also works on Windows and Android out of the box.

[–] WhyJiffie@sh.itjust.works 2 points 1 hour ago

It's for a different purpose. I wouldn't use Syncthing the way I use rsync.

[–] Mio@feddit.nu 4 points 9 hours ago* (last edited 9 hours ago) (2 children)

I think there are better alternatives for backup, like kopia and restic. Even Seafile. You want protection against ransomware, storage compression, encryption, versioning, sync upon write, and block deduplication.

[–] lazynooblet@lazysoci.al 2 points 3 hours ago

Comparing Seafile to rsync reminds me of the old "Space Pen" folk tale.

[–] Toribor@corndog.social 0 points 1 hour ago

This exactly. I'd use rsync to sync a directory to a location to then be backed up by kopia, but I wouldn't use rsync exclusively for backups.

[–] quick_snail@feddit.nl 13 points 13 hours ago (2 children)
[–] HereIAm@lemmy.world 1 points 1 hour ago (1 children)

Compared to something multi threaded, yes. But there are obviously a number of bottlenecks that might diminish the gains of a multi threaded program.

[–] Tja@programming.dev 2 points 1 hour ago

With xargs everything is multithreaded.

[–] okamiueru@lemmy.world 7 points 5 hours ago

That part threw me off. Last time I used it, I did incremental backups of a 500 gig disk once a week or so, and it took 20 seconds max.

[–] vext01@lemmy.sdf.org 2 points 9 hours ago

I used to use rsnapshot, which is a thin wrapper around rsync to make it incremental, but moved to restic and never looked back. Much easier and encrypted by default.

[–] Xylight 2 points 9 hours ago

rsync for backups? I guess it depends on what kind of backup

for redundant backups of my data and configs that I still have a live copy of, I use restic, it compresses extremely well

I have used rsync to permanently move something to another drive though

[–] sugar_in_your_tea@sh.itjust.works 20 points 17 hours ago (1 children)

Yeah it’s slow

What's slow about rsync? If you have a reasonably fast CPU and are merely syncing differences, it's pretty quick.

[–] pathief@lemmy.world 4 points 9 hours ago (2 children)

It's single thread, one file at a time.

That would only matter if it's lots of small files, right? And after the initial sync, you'd have very few files, no?

Rsync is designed for incremental syncs, which is exactly what you want in a backup solution. If your multithreaded alternative doesn't do a diff, rsync will win on larger data sets that don't have rapid changes.

[–] Newsteinleo@midwest.social 1 points 2 hours ago

For a home setup that seems fine. But I can understand why you wouldn't want this for a whole enterprise.

[–] NuXCOM_90Percent@lemmy.zip 51 points 22 hours ago (3 children)

I would generally argue that rsync is not a backup solution. But it is one of the best transfer/archiving solutions.

Yes, it is INCREDIBLY powerful and is often 90% of what people actually want/need. But to be an actual backup solution you still need infrastructure around that. Bare minimum is a crontab. But if you are actually backing something up (not just copying it to a local directory) then you need some logging/retry logic on top of that.

At which point you are building your own borg, as it were. Which, to be clear, is a great thing to do. But... backups are incredibly important and it is very much important to understand what a backup actually needs to be.

[–] tal@olio.cafe 18 points 21 hours ago* (last edited 21 hours ago) (3 children)

I would generally argue that rsync is not a backup solution.

Yeah, if you want to use rsync specifically for backups, you're probably better-off using something like rdiff-backup, which makes use of rsync to generate backups and store them efficiently, and drive it from something like backupninja, which will run the task periodically and notify you if it fails.

rsync: one-way synchronization

unison: bidirectional synchronization

git: synchronization of text files with good interactive merging.

rdiff-backup: rsync-based backups. I used to use this and moved to restic, as the backupninja target for rdiff-backup has kind of fallen into disrepair.

That doesn't mean "don't use rsync". I mean, rsync's a fine tool. It's just...not really a backup program on its own.

[–] koala@programming.dev 1 points 3 hours ago (1 children)

Beware rdiff-backup. It certainly does turn rsync (not a backup program) into a backup program.

However, I used rdiff-backup in the past and it can be a bit problematic. If I remember correctly, every "snapshot" you keep in rdiff-backup uses as many inodes as the thing you are backing up. (Because every "file" in the snapshot is either a file or a hard link to an identical version of that file in another snapshot.) So this can be a problem if you store many snapshots of many files.

But it does make rsync a backup solution; a snapshot or a redundant copy is very useful, but it's not a backup.

(OTOH, rsync is still wonderful for large transfers.)

[–] tal@olio.cafe 2 points 1 hour ago

(Because every “file” in the snapshot is either a file or a hard link to an identical version of that file in another snapshot.) So this can be a problem if you store many snapshots of many files.

I think you may be thinking of rsnapshot, which has that behavior, rather than rdiff-backup; both use rsync.

But I'm not sure why you'd be concerned about this behavior.

Are you worried about inode exhaustion on the destination filesystem?

[–] BCsven@lemmy.ca 2 points 11 hours ago

Grsync is great. Having a GUI can be helpful

[–] ryper@lemmy.ca 9 points 16 hours ago* (last edited 16 hours ago) (3 children)

I was planning to use rsync to ship several TB of stuff from my old NAS to my new one soon. Since we're already talking about rsync, I guess I may as well ask if this is the right way to go?

[–] Cyber@feddit.uk 4 points 7 hours ago (1 children)

It depends

rsync is fine, but to clarify a little further...

If you think you'll stop the transfer and want it to resume (and some data might have changed), then yep, rsync is best.

But, if you're just doing a one-off bulk transfer in a single run, then you could use other tools like xcopy / scp or - if you've mounted the remote NAS at a local mount point - just plain old cp.

The reason is that rsync has to work out what's at the other end for each file, so it's doing some back-and-forth communication each time, which, as someone else pointed out, can load the CPU and reduce throughput.

(From memory, I think Raspberry Pi don't handle large transfers over scp well... I seem to recall a buffer gets saturated and the throughput drops off after a minute or so)

Also, on a local network, there's probably no point in using encryption or compression options - esp. for photos / videos / music... you're just loading the CPU again to work out that it can't compress any further.

[–] ryper@lemmy.ca 1 points 1 hour ago (1 children)

It's just a one-off transfer, I'm not planning to stop the transfer, and it's my media library, so nothing should change, but I figured something resumable is a good idea for a transfer that's going to take 12+ hours, in case there's an unplanned stop.

[–] Cyber@feddit.uk 1 points 1 hour ago

One thing I forgot to mention: rsync has an option to preserve file timestamps, so if that's important for your files, then that might also be useful... without checking, the other commands probably have that feature too, but I don't recall at the moment.

rsync -Prvt <source> <destination> might be something to try, leave for a minute, stop and retry ... that'll prove it's all working.

Oh... and make sure you get the source and destination paths correct with a trailing / (or not), otherwise you'll get all your files copied to an extra subfolder (or not)
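The trailing-slash behavior is easy to verify with scratch directories (hypothetical paths):

```shell
# "src" copies the directory itself; "src/" copies its contents
src=$(mktemp -d)/data; mkdir -p "$src"; touch "$src/file.txt"
dst1=$(mktemp -d); dst2=$(mktemp -d)

rsync -a "$src"  "$dst1/"   # no trailing slash -> dst1/data/file.txt
rsync -a "$src/" "$dst2/"   # trailing slash    -> dst2/file.txt
```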

[–] Suburbanl3g3nd@lemmings.world 7 points 15 hours ago* (last edited 15 hours ago)

I couldn't tell you if it's the right way, but I used it on my RPi4 to sync 4 TB of stuff from my Plex drive to a backup, and set up a script to have it check/mirror daily. It took a day and a half to copy, and now it syncs in minutes tops when there's new data.

[–] GreenKnight23@lemmy.world 5 points 15 hours ago (1 children)

yes, it's the right way to go.

rsync over ssh is the best, and works as long as rsync is installed on both systems.

[–] qjkxbmwvz@startrek.website 3 points 11 hours ago

On low-end CPUs you can max out the CPU before maxing out the network. If you want to get fancy, you can use rsync over an unencrypted remote shell like rsh, but I would only do this if the computers were directly connected to each other by one Ethernet cable.

[–] mesamunefire@piefed.social 42 points 22 hours ago (3 children)

I've personally used rsync for backups for about... 15 years or so? It's worked out great. An awesome video going over all the basics and what you can do with it.

[–] eager_eagle@lemmy.world 5 points 17 hours ago* (last edited 16 hours ago) (3 children)

It works fine if all you need is transfer; my issue is that it's just not efficient. If you want a "time travel" feature, your only option is to duplicate data. Differential backups, compression, and encryption for off-site copies are where other tools shine.

[–] suicidaleggroll@lemmy.world 1 points 15 minutes ago* (last edited 14 minutes ago)

If you want a “time travel” feature, your only option is to duplicate data.

Not true. Look at the --link-dest flag. Encryption, sure, rsync can’t do that, but incremental backups work fine and compression is better handled at the filesystem level anyway IMO.

[–] bandwidthcrisis@lemmy.world 3 points 10 hours ago (1 children)

I have it add a backup suffix based on the date. It moves changed and deleted files to another directory adding the date to the filename.

It can also do hard-link copies, so that you can have multiple full directory trees without all that duplication.

No file deltas or compression, but it does mean that you can access the backups directly.

[–] koala@programming.dev 2 points 3 hours ago

Thanks! I was not aware of these options, along with what the other poster mentioned about --link-dest. These do turn rsync into a backup program, which is something the root article should explain!

(Both are more limited in some respects than other backup software, but they might still be a simpler yet effective solution. And sometimes simple is best!)

[–] Eldritch@piefed.world 17 points 22 hours ago (1 children)

And I generally enjoy Veronica's presentation. Knowledgeable and simple.

[–] mesamunefire@piefed.social 17 points 22 hours ago (1 children)

Her https://tinkerbetter.tube/w/ffhBwuXDg7ZuPPFcqR93Bd video taught me a new way of looking at data. There were some tricks I hadn't seen before. She has such good videos.

[–] surph_ninja@lemmy.world 7 points 17 hours ago

Use borg/borgmatic for your backups. Use rsync to send your differentials to your secondary & offsite backup storage.

[–] probable_possum@leminal.space 12 points 20 hours ago* (last edited 20 hours ago) (1 children)

rsnapshot is a script for the purpose of repeatedly creating deduplicated copies (hardlinks) of one or more directories. You can choose how many hourly, daily, weekly, ... copies you'd like to keep, and it removes outdated copies automatically. It wraps rsync and ssh (public key auth), which need to be configured beforehand.
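The retention side lives in rsnapshot.conf; a sketch with example values (one caveat from its docs: fields must be separated by tabs, not spaces):

```
snapshot_root	/mnt/backup/snapshots/

retain	hourly	6
retain	daily	7
retain	weekly	4

# local and over-ssh backup points (example paths/hosts)
backup	/home/	localhost/
backup	user@server:/etc/	server/
```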

[–] Cyber@feddit.uk 2 points 7 hours ago (1 children)

Hardlinks need to be on the same filesystem, don't they? I don't see how that would work with a remote backup...?

[–] suicidaleggroll@lemmy.world 1 points 2 minutes ago

The hard links aren’t between the source and backup, they’re between Friday’s backup and Saturday’s backup
