
Do something simpler. Backups shouldn’t be complex.

This should be simpler still:

https://github.com/nathants/backup






Cool, but looks like it's going to miss capabilities, so not suitable for a full OS backup (see https://github.com/python/cpython/issues/113293)

Interesting. I'm not trying to restore bootable systems, just data. Still, probably worthwhile to rebuild in Go soon.

Index of files stored in git pointing to a remote storage. That sounds exactly like git LFS. Is there any significant difference? In particular in terms of backups.

Definitely similar.

Git LFS is 50k loc, this is 891 loc. There are other differences, but that is the main one.

I don't want a sophisticated backup system. I want one so simple that it disappears into the background.

I want to never fear data loss or my ability to restore with broken tools and a new computer while floating on a raft down a river during a thunderstorm. This is what we train for.


Is this a joke?

I don't see what value this provides that rsync, tar and `aws s3 cp` (or AWS SDK equivalent) don't already provide.


How do you version your rsync backups?

I use rsync's --link-dest

abridged example:

    rsync --archive --link-dest 2025-06-06 backup_role@backup_host:backup_path/ 2025-06-07/

Actual invocation is this huge hairy furball of an rsync command that appears to use every single feature of rsync as I worked on my backup script over the years.

    rsync_cmd = [
      '/usr/bin/rsync',
      '--archive',
      '--numeric-ids',
      '--owner',
      '--delete',
      '--delete-excluded',
      '--no-specials',
      '--no-devices',
      '--filter=merge backup/{backup_host}/filter.composed'.format(**rsync_params),
      '--link-dest={cwd}/backup/{backup_host}/current/{backup_path}'.format(**rsync_params),
      '--rsh=ssh -i {ssh_ident}'.format(**rsync_params),
      '--rsync-path={rsync_path}'.format(**rsync_params),
      '--log-file={cwd}/log/{backup_id}'.format(**rsync_params),
      '{remote_role}@{backup_host}:/{backup_path}'.format(**rsync_params),
      'backup/{backup_host}/work/{backup_path}'.format(**rsync_params) ]

This is cool. Do you always --link-dest to the last directory, and that traverses links all the way back as far as needed?

Yes, and this adds a couple of nice features: it is easy to go back to any version using only normal filesystem access, and because the snapshots are hard links, unchanged files use no extra space and you can cull old versions without worrying about losing the backing store for the diff.
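The hard-link behavior described above is easy to verify. A minimal sketch, with plain Python `os.link` standing in for what rsync --link-dest does for unchanged files (paths are made up):

```python
import os
import tempfile

# Two dated snapshot dirs, as a --link-dest scheme would produce.
d = tempfile.mkdtemp()
snap1 = os.path.join(d, "2025-06-06")
snap2 = os.path.join(d, "2025-06-07")
os.makedirs(snap1)
os.makedirs(snap2)

path1 = os.path.join(snap1, "notes.txt")
with open(path1, "w") as f:
    f.write("unchanged file\n")

# For an unchanged file, rsync --link-dest does the equivalent of:
path2 = os.path.join(snap2, "notes.txt")
os.link(path1, path2)

assert os.stat(path1).st_ino == os.stat(path2).st_ino  # same inode, no extra data
assert os.stat(path2).st_nlink == 2

# Culling the old snapshot does not lose the data backing the new one:
# the inode survives until the last link is removed.
os.remove(path1)
os.rmdir(snap1)
with open(path2) as f:
    print(f.read().strip())  # -> unchanged file
```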

I think it sort of works like Apple's Time Machine, but I have never used that product so... (shrugs)

Note that it is not, in the strictest sense, a very good "backup", mainly because it is too "online". To solve that I have a set of removable drives that I rotate through, so with three drives, each ends up with every third day.
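The rotation implied above can be sketched in a few lines: always --link-dest against the newest dated snapshot directory. The directory layout and names below are assumptions, and the command is only built, not run:

```python
import datetime
import os

def build_snapshot_cmd(root, source):
    """Build an rsync invocation that hard-links today's snapshot
    against the most recent dated snapshot directory under root."""
    today = datetime.date.today().isoformat()
    dates = sorted(
        name for name in os.listdir(root)
        if name != today and os.path.isdir(os.path.join(root, name))
    )
    cmd = ["rsync", "--archive", "--delete"]
    if dates:
        # a relative --link-dest path is resolved against the destination dir
        cmd.append("--link-dest=../" + dates[-1])
    cmd += [source, os.path.join(root, today) + "/"]
    return cmd
```

On the first run there is no prior snapshot, so the --link-dest flag is simply omitted and rsync does a full copy.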


Sounds like “rsnapshot”:

https://rsnapshot.org/


Dirvish

Perl still exists?

Uh, who has the money to store backups in AWS?!

Glacier Deep Archive is the cheapest cloud backup option at $1USD/month/TB.

Google Cloud Storage's Archive tier is a tiny bit more.


To quote the old mongodb video: If you don't care about restores, /dev/null is even cheaper, and it's webscale.

Both would be pretty expensive to actually restore from, though, IIRC.

Quite expensive, but it should only ever be a last resort after your local backups have all failed in some way or another. For $1/mo/TB you purchase the opportunity to pay an exorbitant amount to recover from an otherwise catastrophic situation.
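Back-of-envelope on that trade-off. The per-GB figures below are assumptions (AWS list prices for us-east-1 at time of writing: Deep Archive storage ~$0.00099/GB-month, standard-tier retrieval ~$0.02/GB, internet egress ~$0.09/GB); verify current rates before relying on them:

```python
# All rates are assumptions: AWS list prices, us-east-1, at time of writing.
STORAGE_PER_GB_MONTH = 0.00099  # S3 Glacier Deep Archive storage
RETRIEVAL_PER_GB = 0.02         # standard retrieval tier (bulk is cheaper)
EGRESS_PER_GB = 0.09            # data transfer out to the internet

def deep_archive_costs(tb):
    """Return (monthly storage cost, one-time full-restore cost) in USD."""
    gb = tb * 1000  # AWS bills in decimal GB
    monthly_storage = gb * STORAGE_PER_GB_MONTH
    full_restore = gb * (RETRIEVAL_PER_GB + EGRESS_PER_GB)
    return monthly_storage, full_restore

monthly, restore = deep_archive_costs(2)
print(f"2 TB: ${monthly:.2f}/month to hold, ${restore:.2f} for a full restore")
# -> 2 TB: $1.98/month to hold, $220.00 for a full restore
```

So roughly two orders of magnitude between holding the data and pulling it all back out, which is fine for a last-resort copy.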

If you don't test your backups, they don't exist.

Depends how big they are. My high value backups go into S3, R2, and a local 3x disk mirror[1].

My low value backups go into a cheap usb hdd from Best Buy.

1. https://github.com/nathants/mirror


Support for S3 means you can just have a minio server somewhere acting as backup storage (and minio is pretty easy to replicate). I have a local S3 on my NAS replicated to a cheap OVH server for backup


