What is your Backup Software of preference?


Comments

  • vyas OG Retired
    edited December 2021

    @casadebamburojo said:
    I'm sorry, but I'm quite confused: which one is the better one -- Duplicati, Duplicacy, or Duplicity?

    Lol. Wudnt blame ya

  • Ympker OG Content Writer

    @beagle If you need more devices for AOMEI, the lifetime deal at StackSocial is currently available for $17 using code CMSAVE40.

  • I've been using Duplicati for the last year or so on a laptop (not mine). Three or four times I noticed that the daily backups were no longer running successfully, and restores were similarly broken. Each time I had to do some manual intervention, rebuilding the database or such, to be able to run a successful backup (and restore) again. I told myself if it happens again I'm switching to borg on WSL, but things have been smooth for the last couple of months.
    On servers I've been using borg for a few years and can't recall ever encountering any issues. Servers are different in that they're always on and connected though, so I don't know how it would fare with notebooks.

  • bdl OG
    edited December 2021

    @casadebamburojo said:
    I'm sorry, but I'm quite confused: which one is the better one -- Duplicati, Duplicacy, or Duplicity?

    I am in the (paid) Duplicacy camp. I chose it over Duplicati because users have reported data integrity problems with Duplicati in the past (e.g. https://forum.duplicati.com/t/backup-integrity-problems/3542). I also like the (paid) web front end of Duplicacy, plus the option to use the open-source CLI version: https://github.com/gilbertchen/duplicacy.

    Also, being able to restore/prune/check my backup repos using the web product without a paid licence is a bonus.

    The lifetime deal on Black Friday is just cream on top :)

    Duplicacy (web) has a 30-day trial, see what you think! I was using restic before, but found it had some kind of strange memory leak that was causing issues with 1.5 TB of backups on the hardware I was using...
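
    For anyone wanting to try the open-source CLI first, here is a rough sketch of how a run might look. The snapshot id "my-docs" and the SFTP URL are made-up placeholders, not anything from Duplicacy's docs, so adjust for your own setup:

    ```shell
    # Run inside the directory you want to back up; this ties it to a
    # snapshot id and a storage backend (both values below are examples).
    duplicacy init my-docs sftp://user@backup.example.com/duplicacy-storage

    # First backup of the current directory
    duplicacy backup -stats

    # List existing snapshots, then thin them out:
    # keep one per 7 days after 30 days, one per day after 7 days.
    duplicacy list
    duplicacy prune -keep 7:30 -keep 1:7
    ```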

  • @bdl Is the paid Duplicacy foolproof? I need something that I can set up and forget, as I'll be installing it for someone who's not really a techie.

  • bdl OG
    edited December 2021

    @casadebamburojo said:
    @bdl Is the paid Duplicacy foolproof? I need something that I can set up and forget, as I'll be installing it for someone who's not really a techie.

    I've found it foolproof for completely non-techy family members on their macOS machines, I can jump in and administer via the web interface whenever needed. Been running the same setup (sftp to a VPS) for just over a year and a half now.

    From memory, I think on Windows you have to set it up as a service if you want it to run without a user logging in; not sure if that applies in your use case.

    The last two Black Fridays they also had a promo where you could get a lifetime license - see https://forum.duplicacy.com/t/black-friday-2021-sale-on-lifetime-license-again/5624 - if you like it and it works well, you may want to grab that next year.

    Try the trial and see what you think!

  • Thanks @bdl, I'll check it out!

    A question for anyone reading: a wise man once said that unless a file exists in at least 3 different locations, it doesn't exist at all.

    Say, for example, I intend to back up to Google Drive and a storage VPS. Is it better to daisy-chain it or use a star topology? Something like:
    PC -> Google Drive -> VPS
    or
    Google Drive <- PC -> VPS

    Let me know your opinions. :smile:

  • vyas OG Retired
    edited December 2021

    @casadebamburojo said:
    @bdl Is the paid Duplicacy foolproof?

    I need something that I can set up and forget as I'll be installing it for someone who's not really a techie.

    It will depend on the percentage of foolishness in the person.

    Just like alcohol

  • Ympker OG Content Writer

    @vyas said:

    @casadebamburojo said:
    @bdl Is the paid Duplicacy foolproof?

    I need something that I can set up and forget as I'll be installing it for someone who's not really a techie.

    It will depend on the percentage of foolishness in the person.

    Just like alcohol

    The OVH fire kinda confirmed that. "But my only backup was in the same location and now I am losing millions". Hundreds of posts like these.

  • @casadebamburojo said:
    Thanks @bdl, I'll check it out!

    A question for anyone reading: a wise man once said that unless a file exists in at least 3 different locations, it doesn't exist at all.

    Say, for example, I intend to back up to Google Drive and a storage VPS. Is it better to daisy-chain it or use a star topology? Something like:
    PC -> Google Drive -> VPS
    or
    Google Drive <- PC -> VPS

    Let me know your opinions. :smile:

    I use a star topology with BorgBackup because of this comment in their documentation:
    https://borgbackup.readthedocs.io/en/stable/faq.html#can-i-copy-or-synchronize-my-repo-to-another-location

    If you want to have redundant backup repositories (preferably at separate locations), the recommended way to do that is like this:

        borg init repo1
        borg init repo2
        client machine —borg create—> repo1
        client machine —borg create—> repo2

    This will create distinct repositories (separate repo ID, separate keys) and nothing bad happening in repo1 will influence repo2.

    Some people decide against the above recommendation and create identical copies of a repo (using some copy / sync / clone tool). While this might be better than having no redundancy at all, you have to be very careful about how you do that and what you may / must not do with the result (if you decide against our recommendation). What you would get with this is:

        client machine —borg create—> repo
        repo —copy/sync—> copy-of-repo

    There is no special borg command to do the copying; you could just use any reliable tool that creates an identical copy (cp, rsync, rclone might be options).

    But think about whether that is really what you want. If something goes wrong in repo, you will have the same issue in copy-of-repo.

    Make sure you do the copy/sync while no backup is running; see borg with-lock for how to do that.

    Also, you must not run borg against multiple instances of the same repo (like repo and copy-of-repo), as that would create severe issues:

    Data loss: they have the same repository ID, so the borg client will think they are identical and e.g. use the same local cache for them (which is an issue if they happen to not be the same). See #4272 for an example.

    Encryption security issues if you update repo and copy-of-repo independently, due to AES counter reuse.

    There is also a similar encryption security issue for the disaster case: if you lose repo and the borg client-side config/cache and you restore the repo from an older copy-of-repo, you also run into AES counter reuse.
    
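    For what it's worth, the recommended setup from that FAQ looks roughly like this in practice. Host names and paths below are placeholders, and note that borg can't talk to Google Drive directly (it needs a filesystem or SSH target), so both destinations here are SSH hosts:

    ```shell
    # Passphrase for the repo keys; use your own secret management.
    export BORG_PASSPHRASE='use-your-own-secret'

    # Two completely independent repos: separate repo IDs, separate keys.
    borg init --encryption=repokey ssh://user@vps1.example.com/./backups/repo1
    borg init --encryption=repokey ssh://user@vps2.example.com/./backups/repo2

    # Star topology: back the client up to each repo separately,
    # instead of copying repo1 to repo2 afterwards.
    borg create --stats ssh://user@vps1.example.com/./backups/repo1::{hostname}-{now} ~/Documents
    borg create --stats ssh://user@vps2.example.com/./backups/repo2::{hostname}-{now} ~/Documents
    ```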
  • @beagle said:

    I use a star topology with BorgBackup because of this comment in their documentation:
    https://borgbackup.readthedocs.io/en/stable/faq.html#can-i-copy-or-synchronize-my-repo-to-another-location

    [full FAQ excerpt snipped; quoted in full in the post above]

    This is good thinking.
    Having identical repos means an error in one will probably exist in the second as well.

  • I haven't found a lot of other people using burp. It came out of the dev's master's thesis: tiered incremental backups, chunk-level dedup with definable cross-client pools, resumable. Primarily Linux, but there's a Windows version using VSS.

  • I use restic to back up data to rsync.net, plus rclone to back up data to Backblaze B2.

    Works well for my needs.

  • @nfn said:
    I use restic to backup data to rsync.net + rclone to backup data to Backblaze B2.

    What's your rclone-to-B2 backup setup?

    By the way, restic backs up to a B2 destination out of the box, right?
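
    It does; B2 is one of restic's built-in backends. A minimal sketch, with the bucket name and credentials as placeholders:

    ```shell
    # Credentials for the b2 backend (placeholders), read from the environment
    export B2_ACCOUNT_ID='000xxxxxxxxxxxx'
    export B2_ACCOUNT_KEY='K000yyyyyyyyyyyyyyyy'
    export RESTIC_PASSWORD='use-your-own-secret'

    # One-time repository initialisation inside the bucket
    restic -r b2:my-backup-bucket:/restic init

    # Regular backup plus retention, e.g. from cron
    restic -r b2:my-backup-bucket:/restic backup ~/Documents
    restic -r b2:my-backup-bucket:/restic forget --keep-daily 7 --keep-weekly 4 --prune
    ```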
