You might not even like rsync. Yeah it’s old. Yeah it’s slow. But if you’re working with Linux you’re going to need to know it.

In this video I walk through my favorite everyday flags for rsync.

Support the channel:
https://patreon.com/VeronicaExplains
https://ko-fi.com/VeronicaExplains
https://thestopbits.bandcamp.com/

Here’s a companion blog post, where I cover a bit more detail: https://vkc.sh/everyday-rsync

Also, @BreadOnPenguins made an awesome rsync video and you should check it out: https://www.youtube.com/watch?v=eifQI5uD6VQ

Lastly, I left out all of the ssh setup stuff because I made a video about that and the blog post goes into a smidge more detail. If you want to see a video covering the basics of using SSH, I made one a few years ago and it’s still pretty good: https://www.youtube.com/watch?v=3FKsdbjzBcc

Chapters:
1:18 Invoking rsync
4:05 The --delete flag for rsync
5:30 Compression flag: -z
6:02 Using tmux and rsync together
6:30 but Veronica… why not use (insert shiny object here)
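A typical everyday invocation combining the flags covered might look like this (host and paths here are placeholders, not from the video):

  rsync -avz --delete --progress ~/Documents/ user@server:/backups/Documents/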

  • state_electrician@discuss.tchncs.de · 3 days ago

    Why videos? I feel like an old man yelling at clouds every time something that sounds interesting is presented in a fucking video. Videos are so damn awful. They take time, I need audio, and I can’t copy and paste. Why have they become the default for things that should’ve been a blog post?

  • 1984@lemmy.today · 3 days ago

    I never thought of it as slow. More like very reliable. I don’t need my data to move fast, I need it to be copied with 100% reliability.

  • atk007@lemmy.world · 3 days ago

    Rsnapshot. It uses rsync, but provides snapshot management and multiple backup versioning.
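    A minimal sketch of what that looks like (retention values and paths are made up; rsnapshot.conf fields must be tab-separated):

      # /etc/rsnapshot.conf excerpt
      snapshot_root   /backups/snapshots/
      retain  daily   7
      retain  weekly  4
      backup  /home/  localhost/

      # crontab entries to drive the rotation
      30 3 * * *   /usr/bin/rsnapshot daily
      0  4 * * 1   /usr/bin/rsnapshot weekly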

    • harambe69@lemmy.dbzer0.com · 2 days ago

      Rustic scares me. I will 100% forget what tool I used for backups after 5 years and be unable to recover my files.

      • sugar_in_your_tea@sh.itjust.works · 2 days ago

        Yup, just configure a snapshot policy and you can recover deleted and modified files going back as long as you choose. And it is probably more space efficient than both rustic and restic too.

  • NuXCOM_90Percent@lemmy.zip · 4 days ago

    I would generally argue that rsync is not a backup solution. But it is one of the best transfer/archiving solutions.

    Yes, it is INCREDIBLY powerful and is often 90% of what people actually want/need. But to be an actual backup solution you still need infrastructure around that. Bare minimum is a crontab. But if you are actually backing something up (not just copying it to a local directory) then you need some logging/retry logic on top of that.

    At which point you are building your own borg, as it were. Which, to be clear, is a great thing to do. But… backups are incredibly important, and it really matters that you understand what a backup actually needs to be.
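    As a sketch of that “bare minimum plus logging/retry” idea (script name, paths, and schedule are hypothetical):

      #!/bin/sh
      # retry up to 3 times, appending the outcome to a log
      for i in 1 2 3; do
          rsync -a --delete /home/ backup@nas:/backups/home/ \
              >>/var/log/home-backup.log 2>&1 && break
          sleep 60
      done

      # crontab: run it nightly at 02:00
      0 2 * * * /usr/local/bin/home-backup.sh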

    • non_burglar@lemmy.world · 4 days ago

      I use rsync and a pruning script in crontab on my NFS mounts. I’ve tested it numerous times breaking containers and restoring them from backup. It works great for me at home because I don’t need anything older than 4 monthly, 4 weekly, and 7 daily backups.
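      A sketch of that kind of rotation (GNU tools assumed; paths and retention are illustrative, not the poster’s actual script):

        # daily job: dated copy, then keep only the 7 newest dailies
        rsync -a /mnt/nfs/containers/ /mnt/nfs/backups/daily-$(date +%F)/
        ls -1d /mnt/nfs/backups/daily-* | head -n -7 | xargs -r rm -rf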

      However, in my job I prefer something like bacula. The extra features and granularity of restore options makes a world of difference when someone calls because they deleted prod files.

  • clif@lemmy.world · 3 days ago

    I’ll never not upvote Veronica Explains. Excellent creator and excellent info on everything I’ve seen.

  • mesa@piefed.social (OP) · 4 days ago

    I’ve personally used rsync for backups for about… 15 years or so? It’s worked out great. An awesome video going over all the basics and what you can do with it.

    • Eager Eagle@lemmy.world · 4 days ago

      It works fine if all you need is transfer; my issue is that it’s just not efficient. If you want a “time travel” feature, your only option is to duplicate data. Differential backups, compression, and encryption for off-site ones are where other tools shine.

      • bandwidthcrisis@lemmy.world · 4 days ago

        I have it add a backup suffix based on the date. It moves changed and deleted files to another directory adding the date to the filename.

        It can also do hard-link copied so that you can have multiple full directory trees to avoid all that duplication.

        No file deltas or compression, but it does mean that you can access the backups directly.
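        In rsync-flag terms, that setup looks roughly like this (paths and naming are hypothetical):

          # changed/deleted files get moved into a dated directory
          rsync -a --delete \
              --backup --backup-dir=/backups/changed-$(date +%F) \
              /data/ /backups/current/
          # (or use --suffix=".$(date +%F)" to rename them in place instead)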

        • koala@programming.dev · 3 days ago

          Thanks! I was not aware of these options, along with what another poster mentioned about --link-dest. These do turn rsync into a backup program, which is something the root article should explain!

          (Both are limited in some respects compared to other backup software, but they might still be a simpler yet effective solution. And sometimes simple is best!)

      • suicidaleggroll@lemmy.world · 3 days ago

        If you want a “time travel” feature, your only option is to duplicate data.

        Not true. Look at the --link-dest flag. Encryption, sure, rsync can’t do that, but incremental backups work fine and compression is better handled at the filesystem level anyway IMO.
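        A sketch of the --link-dest pattern (directory layout is hypothetical): unchanged files become hard links into the previous snapshot, so each dated tree is fully browsable but only changed files consume new space.

          rsync -a --delete --link-dest=/backups/latest /home/ /backups/$(date +%F)/
          ln -sfn /backups/$(date +%F) /backups/latest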

        • Eager Eagle@lemmy.world · 3 days ago

          Isn’t that creating hard links between source and dest? Hard links only work on the same drive. And I’m not sure how that gives you “time travel”, as in, browsing snapshots or file states at the different times you ran rsync.

          Edit: ah, the hard link is between dest and the --link-dest argument; that makes more sense.

          I wouldn’t bundle filesystem and backup compression in the same bucket, because they have vastly different requirements. Backup compression doesn’t need to be optimized for fast decompression.

          • BCsven@lemmy.ca · 3 days ago

            Snapper and BTRFS. It only stores changes in the data, so time travel is just pointing at which blocks changed and when, rather than building a duplicate of the entire file or filesystem. A snapshot is instant, and new block changes belong to the current default subvolume.
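            For instance (subvolume layout and config name are assumptions):

              # instant read-only snapshot of a btrfs subvolume
              btrfs subvolume snapshot -r /home /home/.snapshots/home-$(date +%F)

              # or let snapper manage it, given an existing "home" config
              snapper -c home create --description "pre-upgrade"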

    • confusedpuppy@lemmy.dbzer0.com · 4 days ago

      I use rsync for many of the reasons covered in the video. It’s widely available and has a long history. To me that feels important because it’s had time to become stable and reliable. Using Linux is a hobby for me so my needs are quite low. It’s nice to have a tool that just works.

      I use it for all my backups and moving my backups to off network locations as well as file/folder transfers on my own network.

      I even made my own tool (https://codeberg.org/taters/rTransfer) to simplify all my rsync commands into readable files, because rsync commands can get quite long and overwhelming. It’s especially useful for chaining multiple rsync commands together to run under a single command.

      I’ve tried other backup and syncing programs and I’ve had bad experiences with all of them. Other backup programs have failed to restore my system. Syncing programs constantly stop working and I got tired of always troubleshooting. Rsync when set up properly has given me a lot less headaches.

    • okamiueru@lemmy.world · 4 days ago

      That part threw me off. Last time I used it, I did incremental backups of a 500 gig disk once a week or so, and it took 20 seconds max.

    • HereIAm@lemmy.world · 3 days ago

      Compared to something multi-threaded, yes. But there are obviously a number of bottlenecks that might diminish the gains of a multi-threaded program.

      • sugar_in_your_tea@sh.itjust.works · 3 days ago

        That would only matter if it’s lots of small files, right? And after the initial sync, you’d have very few files, no?

        Rsync is designed for incremental syncs, which is exactly what you want in a backup solution. If your multithreaded alternative doesn’t do a diff, rsync will win on larger data sets that don’t have rapid changes.
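        An easy way to see what an incremental pass would actually move (standard rsync flags; paths are placeholders):

          # dry run, itemizing exactly which files would transfer and why
          rsync -ain /data/ user@host:/backup/data/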

      • Wispy2891@lemmy.world · 2 days ago

        If I connect to the same server via my own VPN I don’t have the disconnections, so I’m thinking it’s tailscale cutting connections after too much traffic. But connecting via tailscale is so much more convenient 😢

    • Encrypt-Keeper@lemmy.world · 3 days ago

      I’m not super familiar with Syncthing, but judging by the name I’d say Syncthing is not at all meant for backups.

    • conartistpanda@lemmy.world · 3 days ago

      Syncthing is technically for synchronizing data across different devices in real time (which I do with my phone), but I also use it to transfer data weekly via wi-fi to my old 2013 laptop with a 500GB HDD and Linux Mint. I only boot that laptop to transfer data, and even then I pause the transfers to it once it’s done transferring stuff. That way I can keep larger data backups that wouldn’t fit on my phone. LocalSend is unreliable for large amounts of data, while Syncthing can resume the transfer if anything goes wrong. On top of that, Syncthing also works on Windows and Android out of the box.

  • ryper@lemmy.ca · 4 days ago

    I was planning to use rsync to ship several TB of stuff from my old NAS to my new one soon. Since we’re already talking about rsync, I guess I may as well ask if this is the right way to go?

    • Suburbanl3g3nd@lemmings.world · 4 days ago

      I couldn’t tell you if it’s the right way, but I used it on my Rpi4 to sync 4 TB of stuff from my Plex drive to a backup, and set a script up to have it check/mirror daily. It took a day and a half to copy, and now it syncs in minutes tops when there’s new data.
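      The cron half of that setup can be as small as one line (paths and schedule are illustrative, not the poster’s actual script):

        # daily mirror at 03:00; --delete keeps the backup an exact mirror
        0 3 * * * rsync -a --delete /mnt/plex/ /mnt/backup/plex/ >>/var/log/plex-sync.log 2>&1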

    • SayCyberOnceMore@feddit.uk · 4 days ago

      It depends

      rsync is fine, but to clarify a little further…

      If you think you’ll stop the transfer and want it to resume (and some data might have changed), then yep, rsync is best.

      But, if you’re just doing a one-off bulk transfer in a single run, then you could use other tools like scp or, if you’ve mounted the remote NAS at a local mount point, just plain old cp.

      The reason is that rsync has to work out what’s at the other end for each file, so it does some back-and-forth communication each time, which, as someone else pointed out, can load the CPU and reduce throughput.

      (From memory, I think the Raspberry Pi doesn’t handle large transfers over scp well… I seem to recall a buffer gets saturated and the throughput drops off after a minute or so.)

      Also, on a local network there’s probably no point in using the encryption or compression options, especially for photos/videos/music… you’re just loading the CPU again so it can work out that the data can’t be compressed any further.
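      So for a one-off bulk copy of media on a trusted LAN, something like this plain invocation is enough (paths are placeholders; note there’s no -z):

        rsync -a --info=progress2 /mnt/oldnas/media/ /mnt/newnas/media/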

      • ryper@lemmy.ca · 3 days ago

        It’s just a one-off transfer; I’m not planning to stop it, and it’s my media library, so nothing should change. But I figured something resumable is a good idea for a transfer that’s going to take 12+ hours, in case there’s an unplanned stop.

        • SayCyberOnceMore@feddit.uk · 3 days ago

          One thing I forgot to mention: rsync has an option to preserve file timestamps (-t, which is included in -a), so if that’s important for your files, then that might also be useful… without checking, the other commands probably have that feature too, but I don’t recall at the moment.

          rsync -Prvt <source> <destination> might be something to try; leave it for a minute, stop it, and retry… that’ll prove it’s all working.

          Oh… and make sure you get the source and destination paths correct with a trailing / (or not), otherwise you’ll get all your files copied to an extra subfolder (or not)
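          The trailing-slash behaviour in concrete terms:

            rsync -a src/ dest/   # copies the contents of src into dest/
            rsync -a src  dest/   # creates dest/src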

    • GreenKnight23@lemmy.world · 4 days ago

      yes, it’s the right way to go.

      rsync over ssh is the best, and works as long as rsync is installed on both systems.
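      For example (host and paths are placeholders):

        rsync -av --progress /srv/data/ user@newnas:/volume1/data/

      Modern rsync uses ssh as its transport by default when you give it a remote host, so no extra flags are needed.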

      • qjkxbmwvz@startrek.website · 4 days ago

        On low end CPUs you can max out the CPU before maxing out network—if you want to get fancy, you can use rsync over an unencrypted remote shell like rsh, but I would only do this if the computers were directly connected to each other by one Ethernet cable.
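        One common way to do that without rsh is rsync’s daemon mode, which skips encryption entirely (the “data” module name is an assumption; it has to be configured in rsyncd.conf on the receiver):

          rsync -av --progress /srv/data/ rsync://newnas/data/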

  • Appoxo@lemmy.dbzer0.com · 3 days ago

    Veeam for image/block-based backups of Windows, Linux, and VMs.
    Syncthing for syncing smaller files across devices.

    Thank you very much.

  • solrize@lemmy.ml · 4 days ago

    I’ve been using borg because of the backend encryption and because the deduplication and snapshot features are really nice. It could be interesting to have cross-archive deduplication but maybe I can get something like that by reorganizing my backups. I do use rsync for mirroring and organizing downloads, but not really for backups. It’s a synchronization program as the name implies, not really intended for backups.
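    For reference, the core of that borg workflow (repo path and retention are illustrative):

      borg init --encryption=repokey /backups/borg-repo
      borg create --stats /backups/borg-repo::home-{now} ~/
      borg prune --keep-daily 7 --keep-weekly 4 /backups/borg-repo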