Everything in life is about trade-offs. There are certain trade-offs people simply aren't going to make.
- If you want to run an alternative operating system, you've got to learn how it works. That is a trade-off that even many tech-savvy people don't want to make.
- There is a trade-off with a desktop OS. I actually like the fact that it isn't super sandboxed and locked down. I am willing to trade security & safety for control.
> Personally I think we need to start making computers that provide the best of both worlds. I want much more control over what code can do on my computer. I also want programs to be able to run in a safe, sandboxed way. But I should be the one in charge of that sandbox. Not Google. Definitely not Apple. But there's currently no desktop environment that provides that ability.
The market and demand for that is low.
BTW, this already exists with Qubes OS (https://www.qubes-os.org/). However, there are a bunch of trade-offs that most people are unlikely to want to make.
No, not everything is a trade-off. Some things are just good and some are just bad.
A working permission system would be objectively good. By that I mean one where a program called "image-editor" can only access "~/.config/image-editor", and files that you "File > Open". And if you want to bypass that and give it full permissions, it can be as simple as `$ yolo image-editor` or `# echo /usr/bin/image-editor >> /etc/yololist`.
A permission system that protects /usr/bin and /root, while /home/alex, where all my stuff is, is a free-for-all, is bad. I know about chroot and Linux namespaces, and SELinux, and QEMU. None of these are an acceptable way to do day-to-day computing, if you actually want to get work done.
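To be concrete about the policy I mean, here is roughly what it looks like today if you hand-roll it with bubblewrap (the app name and paths are illustrative; the point is that this should be the effortless default, not an incantation you write yourself):

```sh
# Rough sketch with bubblewrap (bwrap): "image-editor" is an illustrative
# binary name. System dirs are read-only, $HOME is hidden behind a tmpfs,
# and only the app's own config dir is mapped back in. Display / D-Bus
# plumbing is omitted; assumes a usr-merged distro.
bwrap \
  --ro-bind /usr /usr \
  --symlink usr/bin /bin --symlink usr/lib /lib --symlink usr/lib64 /lib64 \
  --ro-bind /etc /etc \
  --proc /proc --dev /dev \
  --tmpfs "$HOME" \
  --bind "$HOME/.config/image-editor" "$HOME/.config/image-editor" \
  --unshare-all --share-net \
  image-editor
```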
That claim is too generic to add anything to this discussion. Ok, everything has a trade off. Thanks for that fortune cookie wisdom. But we’re not discussing CS theory 101. In this case in particular, what is the cost exactly? Is it a cost worth paying?
The cost is that even a simple script that executes something and accesses files will have to be constructed differently. It will be much more complex.
That, or the OS permission settings for said script will need to be managed. That is time and money.
I've said this elsewhere in this thread - but I think it might be interesting to consider how capabilities could be used to write simple scripts without sacrificing simplicity.
For example, right now when you invoke a script - say "cat foo.js" - the arguments are passed as strings, parsed by the script and then the named files are opened via the filesystem. But this implicitly allows cat to open any file on your computer.
Instead, you could achieve something similar with capabilities. So, I assume the shell has full access to the filesystem. When you call "cat foo.js", the shell could open the file and pass the file handle itself to the "cat" program. This way, cat doesn't need to be given access to the filesystem. In fact, literally the only things it can do are read the contents of the file it was passed, and presumably output to stdout.
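Shell redirection is already a tiny, familiar version of this calling convention (only an analogy, since today cat still runs with your full ambient authority, but it shows the shape of the idea):

```sh
# Ambient authority: cat resolves the path itself, so in principle it
# could open anything your user account can read.
cat foo.js

# Capability style: the shell opens foo.js and hands cat an already-open
# file descriptor on stdin; cat never needs to touch the filesystem
# namespace at all.
cat < foo.js
```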
> It will be much more complex.
Is this more complex? In a sense, it's exactly the same as what we're doing now. Just with a new kind of argument for resources. I'm sure some tasks would get more complex. But also, some tasks might get easier too. I think capability-based computing is an interesting idea and I hope it gets explored more.
> how capabilities could be used to write simple scripts without sacrificing simplicity.
I proposed a solution for that in my original comment - you should be able to trivially bypass the capability system if you trust what you're running ($ yolo my_script.sh).
The existence of such a "yolo" command implies you're running in a shell with the "full capabilities" of your user, and that by default that shell launches child processes with only a subset of those. "yolo" would then have to be a shell builtin that overrides this behaviour and launches the child process with the same caps as the shell itself.
> That claim is too generic to add anything to this discussion. Ok, everything has a trade off. Thanks for that fortune cookie wisdom.
It isn't fortune cookie wisdom, and no, it isn't "too generic". It is something that, judging from their comment, the person I was replying to fundamentally didn't understand. I don't believe you really understand the concept either.
> But we’re not discussing CS theory 101.
No we are not. We are discussing concepts about security and time / money management.
> In this case in particular, what is the cost exactly? Is it a cost worth paying?
You just accused me of "fortune cookie wisdom" and "being too generic", while asking a question whose answer differs depending on the person or organisation.
All security is predicated on what you are protecting against, so it is unique to your needs: what, realistically, are your threats? This is known as threat modelling.
e.g. I have an old vehicle. The security on it is a joke. Without additional third-party security products, you can literally steal it with a flat blade about two inches long and drive away. You don't even need to hot-wire it. Additionally, it is highly desirable to thieves. Realistically, as an individual without a garage to store it in overnight, I can only protect it from an opportunist. So I have a pedal box, a steering-wheel lock, and a secret key switch that turns off the ignition, and only I know where it is in the cab. That is enough to stop an opportunist. Against a more determined individual, it will be stolen. Therefore I keep it out of public view when parked overnight. BTW, because of the security measures, it takes a good few minutes before I can drive anywhere.
Realistically, operating system security is much better than it was. It is at the point that many recent large-scale hacks in the last few years were initiated via social engineering to bypass the OS security entirely. So I would say it is in the area of diminishing returns already. For the level of threats I face, and that most people face, it is already sufficient. The rest I can mitigate myself.
Just like my vehicle: if a determined individual wants to get into your computer, they are going to do so.
Thanks for educating me there champ. I'm sure you're very smart. But I've been writing software for a few decades now. Longer than a lot of people on HN have been alive. There's a good chance the computer you're using right now contains code I've written. Suffice it to say, I'm pretty familiar with the idea of engineering tradeoffs. I suspect many other people in this thread are familiar with it too.
You missed the point the person you were replying to upthread was making. You're technically right - there is always some tradeoff when it comes to engineering choices. But there's a pernicious idea that comes along for the ride when you think too much about "engineering tradeoffs". The idea is that all software exists on some Pareto frontier, where there's no such thing as "better choices", there's only "different choices with different tradeoffs".
This idea is wrong.
The point made upthread was that often the cost of some choice is so negligible that it's hardly worth considering. For example, if you refactor a long function by splitting it into two separate functions, this will usually result in more work for the compiler to do. This is an engineering tradeoff - we get more readability in exchange for slower compile times. But the compilation speed difference is usually so minuscule that we don't even talk about it.
"Everything comes with tradeoffs" is technically true if you look hard enough. But "No, not everything is a trade-off. Some things are just good and some are just bad" is also a good point. Some things are better or worse for almost everyone. Writing a huge piece of software using raw assembly? Probably a bad idea. Adding a thorough test suite to a mission-critical piece of software? Probably a good idea. Operating systems? Version control? Yeah those are kinda great. All these things come with tradeoffs. But the juice can still be worth the squeeze.
My larger point in this thread is that perhaps there are ways we can improve security that don't make computing measurably worse in other ways. You might not be clever enough to think of any of them, but that isn't proof that improvements aren't possible. I wasn't smart enough to invent typescript or rust 20 years ago. But I write better software today thanks to their existence.
I would be very sad if, in another 30 years, we're still programming using the same mishmash of tools we're using today. Will there be tradeoffs involved? Yes, for sure. But regardless, the status quo can still be improved.
> Realistically, operating system security is much better than it was. [...] So I would say it is in the area of diminishing returns already. For the level of threats I face, and that most people face, it is already sufficient.
What threat models are you considering? Computers might be secure enough for you, but they are nowhere near secure enough for me. I also don't consider them secure enough for my parents. I won't go into the details of some of the scams people have tried to pull on my parents - but better computer systems could easily have done a better job protecting them from some of this stuff.
If you use programming languages with a lot of dependencies, how do you protect yourself and your work against supply chain attacks? Do you personally audit all the code you pull into a project? Do you continue doing that when those dependencies are updated? Or do you trust someone to do that for you? (Who?). This is the threat model that keeps me up at night. All the tools I have to defend against this threat feel inadequate.
> If you want to run an alternative operating system, you've got to learn how it works.
The typical user doesn't know how Windows works, and they can run that. These days, users can run a friendly GNU/Linux distribution without knowing how it works. So I disagree with you here.
> The typical user doesn't know how Windows works, and they can run that.
That is because Windows for the most part manages itself and there are enough IT professionals, repair shops and other third-party support options (including someone who is good with computers who lives down the road) that people can get problems sorted.
This is not the case with Linux.
> These days, users can run a friendly GNU/Linux distribution without knowing how it works. So I disagree with you here.
Sooner or later there will be an issue that will need to be solved by opening up a terminal and entering a set of esoteric commands. I've been using Linux on and off since 2002. I have done a Linux From Scratch build. I have tried most of the distros over the years, everything from Ubuntu to Gentoo.
When people claim that you will never have to know how it works, that is simply incorrect and gives a false impression to new users.
I would rather that other Linux users tell potential users the truth. There is a trade-off: you get a lot more control over your own computer, but you will need to peek under the hood sooner or later, and you may be on your own solving problems yourself a lot of the time.
> That is because Windows for the most part manages itself
Windows is the least "manages itself" OS of all the OSes available today. It needs pretty constant maintenance and esoteric incantations to keep trucking.
That’s not my experience with it. I have 2 windows installations at home and they both seem fine.
I must admit - I spent about an hour figuring out how to turn off telemetry and other junk after installation. But since then, windows has been trucking along just fine.
I wonder why! Has your workplace installed weird junk on the machine which is gumming it up? Are you using some set of configuration options that microsoft doesn't regularly check?
My experience of windows is that it works pretty well these days. But I don't develop on windows - I just use it for entertainment (steam, vlc, etc). So there's probably a lot of edge cases that I'm not hitting.
It wasn't ever different when I ran windows on my personal computers, although granted that was back in 8.1. 8.1 was just bad for a variety of reasons, but it definitely still had the rot problem.
The latest in my saga of Windows being annoying is applications just randomly killing themselves when I'm not looking. I don't reboot my work computer because I have far too much precious stuff open.
But, every other day or so, an application or two will mysteriously disappear from my taskbar. Silently. I never catch it, then I get the "hey did you see this email??"
Why no, no I did not. Outlook committed suicide at some point and I'm not pocket watching the windows taskbar. My mistake.
For a while I thought I had just hallucinated closing the application, but I don't close applications, like, ever.
To put it into perspective, my work has a policy which forcefully reboots Windows once every 14 days. It helps, but not much, because by day 2-3 it's already breaking down. My Debian machine has an uptime of a few hundred days. I legitimately still have applications open from last year.
Maybe I use my computer like a psychopath, or maybe my expectations are too high, but I don't consider Windows to take care of itself. It's the most babying an OS I ever have to do. iOS and Android are much better as well.
No it doesn't. I barely do anything to manage my Windows installation. I install loads of garbage (I mostly still run the same programs as I did 15 years ago).
I don't understand why people propagate these falsehoods.
Windows rots. Even a few days without a reboot and things will just stop working or be really slow. No idea why.
But if you don't clean install once every few years you'll just have a ton of shit everywhere. Programs don't clean themselves up.
Also every program has its own update mechanism. Great... now I don't just have to manage windows update, but also a few dozen other esoteric update mechanisms.
iOS and Android are self managing. Windows? Can we be for real? Why get on the internet and lie to people?
Anybody who is good with computers should be able to install Linux; it's easier than installing Windows, because you don't need to jump through capitalist dark patterns.
>Sooner or later there will be an issue that will need to be solved with opening up a terminal and entering a set of esoteric commands.
That's what I did to export drivers from a previous Windows installation when I suspected a regression.
>Installation is not the same as support and isn't the same as trouble shooting.
The meme is still alive that windows accumulates garbage and becomes slower with time, so you need to reinstall it periodically. Reinstallation is also how you fix regressions, because ms is busy with cloud services.
>It isn't unusual situation in Linux.
As I remember, on Linux I have an ample choice of kernel versions, but I didn't encounter regressions. For Windows, Intel provides only the latest drivers.
> The meme is still alive that windows accumulates garbage and becomes slower with time, so you need to reinstall it periodically.
I've not needed to worry about this since Windows XP. Which was what? 25 years ago almost.
> Reinstallation is also how you fix regressions, because ms is busy with cloud services.
I've never had hardware regressions with Windows. I've had plenty of weird and annoying bugs return with Linux.
e.g. My Dell 6410 had an issue where the wifi card would die after suspend with kernel 6.1. It would get fixed by one patch release, and then get unfixed by the next.
> As I remember, on linux I have an ample choice of kernel versions, but I didn't encounter regressions. For windows intel provides only the latest drivers.
"Swings and Roundabout".
Again. It is a pretty niche problem. I've had plenty of weird hardware regressions with the Kernel. Recently there was a AMD HDMI audio bug, IIRC it was kernel related.
I’ve had the same experience. Never had a regression with windows. Had plenty with Linux.
One Linux kernel version broke HDMI audio and another fixed it. Recently a change to power management has made my Intel Ethernet controller stop working about an hour after the computer boots up. And so on. Each time I've had to pore through forums trying to find the right fix. That, or pin an older version which worked correctly.
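For what it's worth, the pinning part is at least mechanically simple on Debian/Ubuntu-style systems - a sketch, with package names that vary by distro:

```sh
# Rough sketch on a Debian/Ubuntu-style system: hold the kernel
# metapackage so newer (regressed) kernels stop arriving, and hold the
# known-good running image so autoremove won't take it away.
sudo apt-mark hold linux-image-amd64            # "linux-image-generic" on Ubuntu
sudo apt-mark hold "linux-image-$(uname -r)"

# Undo the pin once a fixed kernel is released.
sudo apt-mark unhold linux-image-amd64 "linux-image-$(uname -r)"
```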
>I've never had hardware regressions with Windows.
Until recently I didn't either. Now it's things like Windows resizing to 640x480 when the display turns off, and the sound resetting to 100% after a toast notification.
>It is a pretty niche problem.
I think HDMI audio is a niche problem. What do you even use it for? With Linux you can at least try a different kernel version; with Windows you just have to eat it.
I think a lot of it is "nobody has bothered building it yet" vs security.
E.g. Qubes runs everything in Xen isolates, which is a wildly complex, performance-limiting way to do sandboxing on modern computers. There are much better ways to implement sandboxing that don't limit performance or communication between applications - for example, seL4's OS-level capability model (seL4 still allows arbitrary IPC / shared memory between processes), or Solaris / Illumos's Zones. But that route would unfortunately require rewriting / changing most modern software.
> I think a lot of it is "nobody has bothered building it yet" vs security.
All of this takes considerable time and money to build, and after that you need to get people to buy into it anyway. Large, billion-dollar software companies have difficulty doing this. If you think it is so easy, go away and build a proof of concept.
BTW, they have implemented sandboxing in most desktop operating systems. It is often a PITA. Phone-like permission models already exist in Windows and Linux, and I suspect macOS, in various guises.
For development there are various solutions that already exist.
So these things already exist and often people don't use them. The reason is that they usually reduce usability by introducing annoyances.
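Flatpak is one concrete example on the Linux side - per-app filesystem permissions are already there to inspect and tighten, but most people never touch them (the app ID below is just an example):

```sh
# Show the sandbox permissions an installed Flatpak app ships with
# (org.gimp.GIMP is just an example app ID).
flatpak info --show-permissions org.gimp.GIMP

# Tighten it for the current user: drop the app's blanket access to the
# host filesystem (file access then has to come through portals, which
# not every app handles gracefully - hence the annoyances).
flatpak override --user --nofilesystem=host org.gimp.GIMP
```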
> E.g. Qubes runs everything in Xen isolates, which is a wildly complex, performance-limiting way to do sandboxing on modern computers.
It exists today, though. If I care about security enough, I am willing to sacrifice performance. That is a trade-off that some people are willing to make.
> There are much better ways to implement sandboxing that don't limit performance or communication between applications - for example, seL4's OS-level capability model (seL4 still allows arbitrary IPC / shared memory between processes), or Solaris / Illumos's Zones. But that route would unfortunately require rewriting / changing most modern software.
If your solution starts with "rewriting most modern software", then it isn't really a solution.
BTW, what you are suggesting is a trade-off. You have to spend resources (time and money, typically) to build the thing, and then you will need to spend more resources to get people to buy into using your tech.
What happens when the OS that is running the browser fails to update because /boot has run out of room for a new Linux kernel (this happened to me the other week)?
What happens when the browser update fails because the package database got corrupted?
What happens when a lock file stops the whole system from updating because of a previous iffy update?
You are going to need to drop to a terminal and fix that issue or reinstall the whole OS.
Either way you are going to need to know something about how the machine works.
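For the /boot example specifically, the fix on a Debian/Ubuntu-style system usually looks something like this - exact kernel package names depend on the distro and what is installed:

```sh
# /boot full: see what is eating the space and which kernels are installed.
df -h /boot
dpkg --list 'linux-image*'

# Purge kernels you no longer need (never the running one - check uname -r),
# then let apt clean up leftovers and regenerate the boot entries.
sudo apt-get purge linux-image-6.1.0-17-amd64   # hypothetical old version
sudo apt-get autoremove --purge
sudo update-grub
```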