I have some of my own tooling in place. For example, for a secure clipboard, I have disabled clipboard sharing between VMs and instead wrote host-level ssh+xclip scripts, which I bound to hotkeys.
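For the curious, a minimal sketch of what the "pull from VM" direction might look like, assuming the VM is reachable over SSH as "vm1" and has xclip installed (names and hosts are illustrative):

    import subprocess

    # Hypothetical host-side "pull clipboard from VM" helper; bind it to a hotkey.
    # "vm1" is an illustrative SSH host alias; the VM must run X and xclip.
    def pull_clipboard(host: str = "vm1") -> None:
        # Read the clipboard selection inside the VM over SSH.
        data = subprocess.run(
            ["ssh", host, "xclip", "-selection", "clipboard", "-o"],
            capture_output=True, check=True,
        ).stdout
        # Write it into the host clipboard.
        subprocess.run(["xclip", "-selection", "clipboard"], input=data, check=True)

    if __name__ == "__main__":
        pull_clipboard()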
Web Search Navigator is a Chrome/Firefox extension that adds keyboard shortcuts to Google, YouTube, GitHub, Amazon, Startpage, and Google Scholar.
Note that this extension focuses on searching, not general keyboard navigation. For the latter, look into extensions such as Vimium [1], Tridactyl [2], and Surfingkeys [3]. Alternatively, if you are willing to use a niche browser, consider Qutebrowser [4], Nyxt [5], and Vieb [6].
I use it with Vimium C [7] and Firenvim [8], which are both excellent browser extensions that focus on more general keyboard navigation. The combination provides reasonably good keyboard-based web browsing in Chrome/Firefox.
If you define untrusted software as anything not coming from your distro's official package manager, many people run lots of untrusted software on their Linux systems.
Think of PPAs (Debian/Ubuntu/etc), the AUR (Arch/Manjaro), and many other package managers (pip, conda, npm, cargo, etc.).
You always have the option to put programs that you don't trust into a jail. It just shouldn't be mandatory.
Besides, the only good way to solve that problem is namespaces as they are used in Plan 9. Sadly, Unix/Linux took the wrong turn at some point (probably with the introduction of sockets).
The Tinfoil Chat setup uses optocouplers to enforce one-way data transmission.[0] And one can use inexpensive CD-Rs and microSD cards for single-use data transfer. But transferring anything but plain text is dangerous.
Thanks for mentioning the optocoupler implementation; it looks very interesting and I'll look into it. Other options I'm aware of to avoid USB and complex drivers/protocols:
I wonder if one could cut up QR codes and store the pieces separately. If that were done in GIMP or whatever, there could be multiple copies of each piece.
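A minimal sketch of the digital variant, assuming Pillow and a QR code saved as qr.png (the filename and the four-quadrant split are illustrative):

    from PIL import Image

    # Illustrative: split a QR code image into four quadrants and save each
    # piece separately, so copies can be hidden in different places.
    # "qr.png" is a hypothetical input file.
    img = Image.open("qr.png")
    w, h = img.size
    corners = [(0, 0), (w // 2, 0), (0, h // 2), (w // 2, h // 2)]
    for i, (left, top) in enumerate(corners):
        piece = img.crop((left, top, left + w // 2, top + h // 2))
        piece.save(f"qr_piece_{i}.png")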
I meant actually cutting them with a sharp knife, or scissors. So you could hide pieces in different places. For stuff like URLs, passphrases, wallet seeds, etc.
While I can appreciate your vision, that seems too detached from the current reality.
Software development is a highly distributed system with many human participants with different (and sometimes conflicting) goals and skills. To make it effective, you almost always need to reuse software created by many other people that you don't know (which incidentally creates a huge opportunity for supply chain attacks).
In this reality, you get very little assurance about the authenticity and security of anything you use.
> Xorg already provides a full suite of security protocols that allow fine grained control over every aspect of any application down to the single pixmap via access control hooks.
One notable exception is OpenSSH, whose -X flag uses the SECURITY extension (and is the project's recommended way to use X11 forwarding).
I agree that the Android app security model is much better than desktop Linux's (of course, they had the privilege of designing a new system without backward-compatibility concerns and after learning lessons from other systems).
The main issue with using that model for desktop Linux is that apps were not developed with this model in mind. So when an app wants to access your webcam, it tries to do so directly and doesn't ask the OS to grant permission. The same goes for accessing files.
I guess it's possible in theory to trace the system calls an app makes and trigger permission requests to the user accordingly. Since that didn't happen, maybe it just breaks too many apps to be effective.
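As a crude illustration of the tracing half (observation only, not enforcement), assuming strace is installed and "untrusted-app" stands in for the binary being audited:

    import subprocess

    # Crude audit (observation only, not enforcement): log which files an app
    # opens and which sockets it connects to, i.e. what a retrofitted
    # permission broker would have to mediate. Requires strace;
    # "untrusted-app" is a hypothetical binary.
    subprocess.run([
        "strace",
        "-f",                          # follow child processes
        "-e", "trace=openat,connect",  # only file-open and socket-connect calls
        "-o", "trace.log",             # write the trace to a file
        "untrusted-app",
    ])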
BTW, installed apps could run under their own dedicated UID to isolate themselves, but most developers/distros don't bother. I should note that I did see a significant improvement from running systemd services as separate users, but I rarely see this done for user-facing apps.
A better option than only using a separate UID is containerization; tools like Docker, Firejail, and Bubblewrap are useful here.
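A rough sketch of the Bubblewrap route; the bwrap flags are real, but the bind set and target binary are illustrative and depend on what the app actually needs:

    import subprocess

    # Illustrative bubblewrap invocation: run a hypothetical binary with a
    # read-only /usr, a fresh /tmp, and all namespaces unshared.
    subprocess.run([
        "bwrap",
        "--ro-bind", "/usr", "/usr",   # read-only system files
        "--symlink", "usr/bin", "/bin",
        "--symlink", "usr/lib", "/lib",
        "--proc", "/proc",
        "--dev", "/dev",
        "--tmpfs", "/tmp",
        "--unshare-all",               # new pid/net/ipc/... namespaces
        "--die-with-parent",
        "/usr/bin/untrusted-app",      # hypothetical target
    ], check=True)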
But Linux containers are not considered secure enough (at least compared to VMs). The real gold standard in terms of security is QubesOS, but you pay for that security in performance and ease of use.
Signatures are meaningful when the keys are more secure than the servers hosting the data.
If you download software from a hacked server that serves you malware, the signature check will fail. In contrast, the execute bit can be changed by anyone.
The problem is that you need to get the authentic public key of the software distributor to verify the signature. If an attacker is able to substitute a forged public key, they can easily forge the signature and the signature check will succeed.
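To make that dependency concrete, here's a minimal verification sketch (using the third-party cryptography package; the point is that the check is only as trustworthy as your source for public_key_bytes):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PublicKey

    def verify_download(public_key_bytes: bytes, signature: bytes, payload: bytes) -> bool:
        # The check is only as trustworthy as the channel that delivered
        # public_key_bytes: a substituted key makes any malware "valid".
        key = Ed25519PublicKey.from_public_bytes(public_key_bytes)
        try:
            key.verify(signature, payload)
            return True
        except InvalidSignature:
            return False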
> The problem is that you need to get the authentic public key of the software distributor to verify the signature.
Right. If you already have a secure channel to receive the signing key over, you can just use it to receive the software to begin with.
Meanwhile we do have a CA system that lets you download the software via TLS. It's not perfect, but breaking TLS or compromising a CA are not even close to common methods of delivering malware.
> Right. If you already have a secure channel to receive the signing key over, you can just use it to receive the software to begin with.
Note that the secure channel sometimes has more limited bandwidth. An example would be reading part of your public key over the phone, which is not practical for the actual software.
There are other considerations that make using the secure channel for the software itself impractical. For example, you can have many people publish known public keys on their website, so that other people could verify them with some majority voting.
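A sketch of how the low-bandwidth check usually works: instead of reading the whole key over the phone, one reads a short digest of it (the truncation length here is illustrative):

    import hashlib

    def key_fingerprint(public_key_bytes: bytes) -> str:
        # Short enough to read over the phone; the full key travels over an
        # insecure channel and is checked against this digest on arrival.
        return hashlib.sha256(public_key_bytes).hexdigest()[:16]  # truncation illustrative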
> Meanwhile we do have a CA system that lets you download the software via TLS. It's not perfect, but breaking TLS or compromising a CA are not even close to common methods of delivering malware.
The main risk is not breaking TLS or CAs, but rather compromising the server that you download the software from, and serving malware instead. Indeed, if the same server is used for serving the public key, you don't gain much, because the attacker can just generate their own key pair, sign the malware, and publish their key. But ideally, the public key would not be published from the same server, making an attack more difficult.
> Note that the secure channel sometimes has more limited bandwidth. An example would be reading part of your public key over the phone, which is not practical for the actual software.
In theory, sure. In practice, ordinary users are not calling up the software developer and having them read their public key over the phone.
> There are other considerations that make using the secure channel for the software itself impractical. For example, you can have many people publish known public keys on their website, so that other people could verify them with some majority voting.
You could do the same thing with the application binary itself and have them compare hashes.
> But ideally, the public key would not be published from the same server, making an attack more difficult.
You could get the same benefit from publishing only the hash of the software on the separate server. The signature is redundant, and is even worse than the hash because it introduces private key compromise as an attack vector.
The main benefit of signatures is for an app distribution system that needs to distribute multiple apps or updates, since it can deliver the public key once and reuse it. But now you're talking about a package manager, and Linux package managers already do that.
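For comparison, the hash-only variant is just a digest of the downloaded file; published_hash would come from the separate server (names are illustrative):

    import hashlib

    def sha256_file(path: str) -> str:
        # Hash a downloaded artifact in chunks, since binaries can be large.
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Compare against the value published on the separate server, e.g.:
    # assert sha256_file("app.bin") == published_hash  # names illustrative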
> You could do the same thing with the application binary itself and have them compare hashes.
This is not scalable to software updates, which happen much more frequently than (long-term) key updates. With hashes you would need to publish a new hash for every update, but a public key only needs to be distributed once.
> You could get the same benefit from publishing only the hash of the software on the separate server. The signature is redundant, and is even worse than the hash because it introduces private key compromise as an attack vector.
Similarly to the issue I mentioned above, this introduces a hassle because the hash changes every time the software is updated, which means you would need to republish it. In addition, the system serving the hash/key should have some additional safeguards for updates (because it's somewhat more sensitive), which probably makes updating it more cumbersome.
Could you clarify what attack is possible with keys that is not possible with hashes?
I think it's unlikely that pip will be able to easily install something like Sage anytime soon, but one could use conda [1], which is pretty popular in the scientific community, or alternatively Docker [2].
It also seems there's support in the Nix package manager [3], which is actually more powerful than any other package manager I'm aware of (Guix notwithstanding).
(I realize that this response is probably useless for the Sage author, but hopefully it will benefit others who struggle with installation.)
Even if the security is not significantly enhanced over a content blocker, tracking using JS will be much harder (assuming the cloud device is randomized in some way).