
I would really like some kind of agnostic backup protocol, so I can simply configure my backup endpoint using an environment variable (e.g. `-e BACKUP_ENDPOINT=https://backup.example.com/backup -e BACKUP_IDENTIFIER=xxxxx`), then the application can push a backup on a regular schedule. If I need to restore a backup, I log onto the backup app, select a backup file and generate a one-time code which I can enter into the application to retrieve the data. To set up a new application for backups, you would enter a friendly name into the backup application and it would generate a key for use in the application.
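
A rough sketch of what the push half could look like, assuming the endpoint accepts a plain HTTP POST and the identifier travels in a header (BACKUP_ENDPOINT and BACKUP_IDENTIFIER are from the example above; the header name, paths and schedule are made up):

    # hypothetical nightly job inside the container
    tar -czf /tmp/app-backup.tar.gz /var/lib/myapp
    curl --fail -X POST "$BACKUP_ENDPOINT" \
         -H "X-Backup-Identifier: $BACKUP_IDENTIFIER" \
         --data-binary @/tmp/app-backup.tar.gz

Restore would be the reverse: the one-time code generated in the backup app authorises a single download of the selected backup file.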

I’m working on introducing this kind of protocol in NixOS. I call them contracts. https://github.com/NixOS/rfcs/pull/189

The idea is that a contract defines which options exist and what they mean. For backups, you’d get the Unix user doing the backup, which folders to back up and which patterns to exclude, but also which scripts can be run to create a backup and to restore from one.

Then you’d get a contract consumer, the application to be backed up, which declares which folders to back up and with which user.

On the other side you have a contract provider, like Restic or BorgBackup, which understands this contract and, thanks to it, knows how to back up the application.

As the user, your role is just to plug a contract provider into a consumer: to choose which provider backs up which application.

This can be applied to LDAP, SSO, secrets and more!


At the moment I docker compose down everything, run the backup of their files, and then docker compose up -d again afterwards. This sort of downtime in the middle of the night isn't an issue for home services, but it's also not an ideal system, given that most services won't be mid-write at backup time anyway because it's the middle of the night! But if I don't do it, the one time I need those files I can guarantee they will be corrupted, so at the moment I don't feel like there are a lot of other options.
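
For reference, the nightly routine described above boils down to roughly this (paths are placeholders, and plain rsync stands in for whatever actually copies the files):

    # stop everything so the files are quiescent, copy them, start again
    cd /srv/compose && docker compose down
    rsync -a /srv/compose/data/ "/mnt/backup/$(date +%F)/"
    docker compose up -d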

Maybe apps could offer backup to stdout and then you pipe it. That way each app doesn’t have to reason about how to interact with your target, doesn’t need to be trusted with credentials, and we don’t need a new standard.
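
For example, assuming a hypothetical `myapp backup` subcommand that writes a dump to stdout (restic's --stdin mode is real):

    # the app only writes a dump to stdout; the pipeline owns the
    # credentials and decides where the data goes
    myapp backup | restic backup --stdin --stdin-filename myapp.dump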

I use Pika Backup, which runs on the BorgBackup protocol, for backing up my system’s home directory. I’m not really sure if this is exactly what you’re talking about, though. It just sends backups to network shares.

I'm actively in the process of setting this up for my devices. What have you done for off-site backups? I know there are Borg specific cloud providers (rsync.net, borgbase, etc.). Or have you done something like rclone to an S3 provider?

No off-site backup for me, these items aren’t important enough, it’s more for “oops I broke my computer” or “set my new computer up faster” convenience.

Anything I really don’t want to lose is in a paid cloud service with a local backup sync over SMB to my TrueNAS box for some of the most important ones.

An exception is GitHub, I’m not paying for GitHub, but git kinda sorta backs itself up well enough for my purposes just by pulling/pushing code. If I get banned from GitHub or something I have all the local repos.


Good to know! I have shifted more to self hosting, e.g., Gitea rather than GitHub, and need to establish proper redundancy. Hopefully Borg Backup, with its deduplication, will be good, at least for on-site backups.

I also learned this exists today: https://www.urbackup.org/

I am much more in-between. I don’t mind cloud stuff and even consider it safer than my local stuff due to other smart people doing the work. And I’m not looking for a second job self hosting, except for my game servers.

I mostly just don’t want to be stuck with cloud services from big tech that have slimy practices. I’d rather pay for honest products that let me own my data better. With the exception given to GitHub which I guess is out of my own laziness and maybe I should do something about that.

If you’re using gitea you might be interested in Forgejo, it’s a fork and I think it’s well regarded since gitea went more commercial-ish IIRC?


Why not virtualize everything and then just backup the entire cluster?

Proxmox Backup Server?


If I enter the following:-

    <p><p></p>
Should the second <p> be nested or not?

No. P elements do not nest, so this is parsed as: <p></p><p></p>

Page currently shows

    404 This page could not be found.


Having a shell script in the code path that processes router advertisements seems sub-optimal.


It's amazing the number of people that think shell scripts should be anything other than throwaway single-person hacks.

They should probably go through their whole system and verify that there aren't more shell scripts being used, e.g. in the init system. Ideally a default distro would have zero shell scripts.


I can't tell whether you're making a joke, seeing as the entire BSD init system is built on shell scripts.


Probably not a joke. In the same way people want to get away from the C language due to its propensity to memory vulnerabilities, shell scripts have their own share of footguns, the most common being a variable not being quoted when it should be (which is exactly the issue described in this advisory).

It doesn't mean getting away from scripting languages; it means getting away from shell scripts in particular (the parent poster said specifically "zero shell scripts"). If the script in question was written in Lua, or heck even Javascript, this particular issue most probably wouldn't have happened, since these scripting languages do not require the programmer to manually quote every single variable use.
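
A minimal illustration of the quoting footgun (a generic example, not the actual code from the advisory):

    dir='/tmp/a b'     # value that happens to contain a space
    rm -rf $dir        # unquoted: word-splits into /tmp/a and b
    rm -rf "$dir"      # quoted: removes only the intended path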


That's fine; I just thought it was weird to say that we should check to see whether any shell scripts are used in the BSD init system. We know there are; it was a deliberate design decision at the time, even if we might now wish for it to be different.


Not a joke. I knew they used to use a pile of janky shell scripts for their init system. I didn't know they still do. That's disappointing.

And cesarb is correct - the issue isn't scripts; it's shell scripts, especially Bash and similar. Something like Deno/Typescript would be a decent option for example. Nushell is probably acceptable.

Even Python - while a terrible choice - is a better option than shell scripts.


The issue is POSIX standardizing legacy stuff like shells, thereby tempting people to write "portable" software, leading these technologies to ossify and stick with us for half a century and counting. Someone comes along and builds something better but gets threatened for not following "the UNIX way".


This is a very good point. I wonder how hard it would be to get POSIX to standardise a scripting language that isn't awful.

Probably never going to happen. There is a dearth of good scripting languages, and I would imagine any POSIX committee is like 98% greybeard naysayers who think 70s Unix was the pinnacle of computing.


POSIX does not specify the init/rc script system, so it's not a factor here at all. A POSIX-compliant system could use Python scripts. macOS (which is UNIX 03 certified) uses launchd. A POSIX system has to ship the shell, not use it.

And FreeBSD isn't actually POSIX-certified anyway!

The real consideration here is simply that there are tons of existing rc scripts for BSDs, and switching them all would be a large task.


Unfortunately your joke has wooshed over quite a few heads but what you say is true. The shell should be one of the most reliable parts of your operating system. Why on earth would you NOT trust the primary interface of your OS? Makes no sense.


The shell itself may be reliable but shell scripts are notorious for security issues.


I'm not sure I follow you but it wasn't a joke. Shell scripts are notoriously error-prone. I absolutely do not trust shell script authors to get everything right.

Also the shell isn't even "the primary interface of your OS". For Linux that's the Linux ABI, or arguably libc.

Unless you meant "human interface", in which case also no - KDE is the primary interface of my OS.


> I'm not sure I follow you but it wasn't a joke. Shell scripts are notoriously error-prone. I absolutely do not trust shell script authors to get everything right.

This is an extremely naive take as are the rest of your comments. Any language in the wrong hands is error prone.


> Any language in the wrong hands is error prone.

Talk about naive!


Feel free to implement system utilities in whichever language you feel will completely eliminate the possibility of bugs.

I wait with bated breath.


"error-prone" means bugs are more likely than the alternatives. It doesn't mean that the alternatives completely eliminate the possibility of bugs. Come on.


I wonder what the tally is for "things posted to HN that'll replace bash/ksh/zsh in every respect REAL Soon Now". It's a genre of post unto itself.


What language is Systemd written in? I'm pretty sure it's not Bash.


I've never been able to use systemd as a command interpreter.


An init system doesn't need to be a command interpreter. Why are you being so obtuse?


It doesn't need to be, but there are some advantages in being able to have system startup scripts in the same language that you do one-liners in at the terminal.


You are being downvoted, but I agree with you.

I've always believed sh, csh, bash, etc. are very bad programming languages that require excessive effort to learn how to write code in without unintentionally introducing bugs, including security holes.


Sir, this is a Wendy's.

If you want all-singing, all-dancing opaque binaries to handle every conceivable configuration eventuality, MacOS and Windows are <-- that way. Or, you could have patience, and sometime soon systemd will likely expand to cover your use-case.


On MacOS I remember many .plist files but no binary config files. The .plist format looks similar to XML.

I like the .ini format used by systemd (and do not have an opinion about the overall quality of systemd).


Is there any reason you need to store them in the RAM on the backend once they have been transferred to the client?


That is a valid point. Currently, the backend keeps them in RAM mainly to support multi-device syncing (like the QR handoff feature) during an active session. If a user scans the QR code to open the same inbox on mobile, the backend needs to serve those existing messages to the new client.

However, I'm exploring a 'Transfer & Purge' logic where, once a message is successfully delivered and acknowledged by the primary client, it could be encrypted or removed from the server-side RAM entirely, leaving the responsibility of persistence to the client-side IndexedDB. It’s a delicate balance between UX and the absolute 'zero-trace' goal.


That makes sense, thanks!


I am in the UK, and I work for a combined FNO/ISP (a company that owns and operates both the access network and the internet service). It makes me angry that corporations and governments are ruining what was once a thriving network that allowed people to communicate freely with one another. I hope that we will be able to save what remains before it's completely out of our control. My fear is that eventually all devices will be required to have a government-mandated backdoor installed, and anyone found with a non-compliant device will be treated as a criminal.

For now, I use my Hetzner server via Tailscale, running fast-socks5 [1], together with FoxyProxy [2] (for Mozilla Firefox), which allows me to select a list of domains to redirect through the SOCKS proxy. I also have Tor installed, which is useful when roaming.

[1] https://github.com/dizda/fast-socks5 [2] https://addons.mozilla.org/en-GB/firefox/addon/foxyproxy-sta...


>My fear is that eventually all devices will be required to have a government-mandated backdoor installed,

They tried to have the back door mandated with the Clipper chip. Governments will try again.

https://en.wikipedia.org/wiki/Clipper_chip


This can be fixed by the ISP by using the MPLS control word.

https://datatracker.ietf.org/doc/html/rfc4448#section-4.6


It seems to suffer from a chicken and egg problem. To get an image you are supposed to run `incus remote get-client-certificate` to put into the "image customizer", and you cannot generate an image without it. So how do you get started?


You can download the CLI client for Linux, Windows and MacOS from our Github releases: https://github.com/lxc/incus/releases/latest/

I've filed https://github.com/lxc/incus-os/issues/551 which we should be able to sort out later today.


Perhaps add installation instructions to the README? Most people already know they need the binary to run that command. For those who don't, I don't recommend you baby them, because next thing you know they've downloaded the wrong binary and it doesn't run.


After dismissing the tips I am greeted with errors:-

  {
    "message": "NetworkError when attempting to fetch resource.",
    "error": true,
    "running": false
  }
If I check the console, it appears to be due to CORS failures.


Hey! So sorry about that - thanks for saying something. That's on the initial load of the dashboard link? What's the URL you see it on in console? If you're using the public resolver it will do some CORS validation but should pass for the trilogydata site - are you running locally?


Is it possible to specify the external IPv4 and IPv6 address? There are scenarios where the egress traffic uses a different address to ingress, or the host has multiple internet-connected addresses but only one has a firewall permitting traffic to the nominated port.

