Also a scientist here. The part of this article I disagree with is "Science is a profession, not a religious vocation, and does not justify an oath of poverty or celibacy".
I really think it is the closest thing I have to a religion. It alters how I see the world, defines the way I think about how to approach problems, and consumes every waking moment of my life.
Of course, I've had many failed relationships, either directly or indirectly because of my career, and it has taken my health (I sustained a spinal injury at work) and the best years of my life. I feel deeply uncomfortable when I am _not_ working and it is difficult for me to relax. Intellectually I recognise that it is deeply unhealthy, but every time I write a grant, everyone else in the world who applies is ranked and objectively compared to me -- and I can't shake the feeling that _they don't sleep_, so why should I? It's an absolute obsession, and I go into this hyper-focussed mode where I _actually_ get things done and, when an experiment reveals something for the first time in the world, the feeling is amazing.
I've just won an academic prize and have a tenured post. I'm deeply, deeply insecure and have a very unhealthy relationship with work. Many academics I know – especially in medicine – are likely diagnosable with very real conditions...
Sounds like you’re describing how things are, while the author is describing how things ought to be. It’s fine to be obsessed and sacrifice the rest of your life for a job if that’s your choice. If that’s the only option to succeed, however, you severely limit the pool of people who are willing to make that sacrifice, and (I believe) slow down the inflow of ideas and creativity that push science forward.
I'm glad to know that I'm not the only one who hates MitID. I really don't think that any software that has so much trust in the user has a good security model. What are they protecting against exactly? If someone else wanted to impersonate you with your consent you could just tell them your login credentials!
I really love GrapheneOS, but I _wish_ there was a version in which you could get a root shell and extract the private data of apps you install, when verified as the user. The developers are on record as saying that root blows a hole in their security model (it does!), but if there were _some_ way of doing it safely, so I could modify the applications I, as the user, wish to, it would be my ideal OS. I know I could download and self-sign it, but I'd rather not…
What is the threat model when enabling root on a phone and why can't it be mitigated? Root is enabled on most servers and desktops and we are surviving fine.
The threat model is that you can "spy" on what your applications are doing, or do things that are undesirable to application owners (like taking screenshots).
This is desirable to end users, but my understanding is that making your OS rootable will make applications like banking apps blacklist your OS, and make it more or less unusable for a normal user.
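To make the "undesirable to application owners" part concrete, here's a minimal Kotlin sketch (illustrative only, not any particular bank's code) of what such apps already do on stock Android: marking their window as secure, which blocks screenshots and screen recording on an unrooted device. Root lets the user bypass exactly this kind of control, which is why those apps try to detect it and refuse to run.

    import android.app.Activity
    import android.os.Bundle
    import android.view.WindowManager

    // Sketch: an activity that refuses to be screenshotted or screen-recorded
    // on an unmodified device. On a rooted or modified OS the user can capture
    // it anyway, which is precisely what the application owner objects to.
    class SecureActivity : Activity() {
        override fun onCreate(savedInstanceState: Bundle?) {
            super.onCreate(savedInstanceState)
            window.setFlags(
                WindowManager.LayoutParams.FLAG_SECURE,
                WindowManager.LayoutParams.FLAG_SECURE
            )
        }
    }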
The way apps behave, the user interface to them and the way they are used, and the level of basic visibility and control that the user has moment to moment are totally different on a phone than on a PC.
In practice, desktops and servers are quite secure because you don't need to download random closed-source firmware and apps to use your device.
iOS and Android are a security nightmare. Downloading a random-ass executable to pay for parking is asking for trouble. Relying on millions of lines of proprietary Google code that you-don't-know-what-it-does is asking for trouble.
This code could have, and almost certainly does have, spyware, keyloggers, and various other forms of malware. You're simply trusting that it doesn't, because it's unverifiable.
And this doesn't even TOUCH on all the vulnerabilities associated with cellular networks, the baseband, SS7, etc. Good luck auditing that code.
At least on a server I can have some baseline guarantees about what software I'm running and what it's doing. Whereas on a phone, your location could constantly be triangulated, your phone identity spoofed, your cellular traffic sniffed, and on and on and you'd never know.
I mean, just this week we saw a post on here about ICE using fake cell towers to identify protestors. That shit is truly trivial to do, and people have been doing it for almost two decades. You wanna talk CVE? Start with that.
Is Qubes resistant to forensics? I think its selling point is multi-level security and lateral movement prevention, not safeguarding data on a stolen laptop.
No, it's not resistant to forensics unless it's turned off when obtained. The hardware / firmware / software makes no serious attempt to protect a device in the After First Unlock state.
One of the things I like the most about GOS is the web installer, and how easy it is to use. If I need a custom build, have to run my own server, and have to sacrifice performance for it, it doesn't seem worth it. It would also be good to know what a debug build entails, how exactly it is "less secure", and so on. Since this is unlikely to be documented by the GOS team, a 3rd-party guide would still be helpful.
> It would also be good to know what a debug build entails, how exactly it is "less secure".
Using software engineering terms, think of the official GOS as the production release and the debug version as a test release. You deploy it by actually building it yourself, like building a Linux kernel. This takes a lot of resources (RAM + storage), but it is also quite flexible because you can compile and build it whenever there is a new update, and you can sign the build yourself. The reason they say it's insecure is the same reason your server sysadmin does not give you the root password: you can do some serious damage if you have no idea what you're doing. On Android, root allows you to peek at other processes and apps, so if you grant root to a malicious app, there's a high risk of data leakage. That's it, though. I've been rooting and building ROMs since the early Android days, no issues for me, as I tend to use open-source tools most of the time.
You can enable root on GrapheneOS. It will erase your data, however, so make a backup before you do. But if you really want root you can save your data, root, restore, and leave root on.
On Linux, I can add an account to the sudoers list, and have the flexibility to configure the level of security appropriate for my use case. I have yet to experience any security issues (that I'm aware of). Why isn't this possible on my mobile device as well?
This absolute stance is not right. Security is not binary, but a spectrum. I should be allowed to have full control over my device without this being a security risk.
Well, anyone with actual root on a secure (locked, verified boot on) Android phone can hard brick it with a single command. Yes, you can yell at the user telling them it's their fault. Still something you usually do not want to support.
I don't think having authorized temporary root is inherently insecure, but on the other hand making sure it is secure could be a huge time sink.
Now, the original request here, modifying user app (I'd assume it's not system app) data, is reasonable. Designing a properly authenticated way to allow doing so would be an interesting challenge.
Seedvault is the /worst/. I ranted about it here a few months ago, and the lead dev says he's aware they really need something better: https://news.ycombinator.com/item?id=42541520
Malware capable of getting root can access / exfiltrate anything, use your network, flash your firmware, persist permanently, and use you as a vector.
Shellshock, Log4j, Heartbleed: hundreds of high-profile vulnerabilities like these can be exploited on a system in an attempt to obtain root. And then you're cooked.
You really think malware with root access can't do much?
Why do you think SELinux (and similar) even exists?
This isn't an absolute stance. It's just stating that having root access on the production/daily system is the opposite of security.
Yes, but root still exists on phones just like it does on servers. It's just not accessible by the user. The OS does run processes as root, and it needs to for things like updates.
Also, the user having root access doesn't mean that every process they run has root rights. For rooted phones there are apps to control what it's used for. Anything else just runs with the same limited rights as before.
Of course those 'sudo' apps would be an attack vector but a pretty niche one.
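For what it's worth, the per-app grant model on a conventionally rooted device usually works roughly like this (a generic sketch, not GrapheneOS- or any particular manager-specific): the app that wants root simply executes `su`, the root-manager app intercepts that and shows the user an allow/deny prompt, and everything else on the system keeps running unprivileged.

    import java.io.InputStreamReader

    // Sketch of how an app typically requests root on a rooted device: it just
    // runs "su". The root-manager app decides (usually by prompting the user)
    // whether to grant it; an unrooted device or a denied prompt simply fails.
    fun runAsRoot(command: String): String? = try {
        val proc = ProcessBuilder("su", "-c", command).start()
        InputStreamReader(proc.inputStream).readText()
    } catch (e: Exception) {
        null // no "su" binary, or the user denied the request
    }

    fun main() {
        // Prints "uid=0(root) ..." only if the grant succeeded.
        println(runAsRoot("id") ?: "root not available")
    }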
I understand the risks, but just because they theoretically exist doesn't mean that they pose an active threat in all scenarios, or that they can't be mitigated.
The idea of locking the system down completely and preventing anyone from accessing it is technically more secure, but it creates many practical issues for tech-savvy people who want full control over their devices, which is the vast majority of the GrapheneOS user base.
If SELinux can mitigate the risks, then sure, let's use that. I don't really care what the technical solution is to this problem.
I'm just saying that:
a) As a user of an OS I want to be allowed full control over my device and not have babyproofed functionality because "it's for my own good". That is the realm of walled garden OSs from most major corporations which I deliberately avoid by using GOS in the first place.
b) My personal threat model doesn't involve using a bunch of untrusted applications, and I'm fine with trading some security for convenience. If the risks from choosing convenience can't be mitigated, then my OS should be flexible enough to allow me to make that choice. Other OSs can do this, so why can't GOS? I'm inclined to believe that there's no technical reason for it, but it's something that maintainers simply don't want to support. Which is fine, it's their project and their prerogative, but then let's not pretend that this is a discussion about security.
Note that this is in stark contrast to the first transatlantic tele_graph_ cable, which did not really have a ground line and consisted of seven copper wires covered with three coats of gutta-percha (natural latex rubber) and then hemp and tar. Many breaks and failures later, the first messages were sent in August 1858. The bandwidth was such that Queen Victoria's message to the US president, James Buchanan, which contained 98 words, took 16 hours to send. It ultimately died during a famous dispute between William Thomson – later Lord Kelvin – yes, _that_ Kelvin – and the project's main engineer, a dispute that ended in disaster (when the engineer put 2 kV DC on the cable, destroying the insulation, against Thomson's advice) and a famous court case that, for the first time, basically established the role of "the scientist" (the physicist!) as a competing professional.
It's all fascinating history. By the time of Bell Labs, an awful _lot_ had already been learned from previous failures.
Yeah, trying to build an undersea cable thousands of km long without a good theory of transmission lines is gonna be a painful experience (a lot of that theory was developed to fix exactly these problems!)
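For context on what that theory buys you (my own sketch, not from the thread): model the cable as a distributed series resistance R and shunt capacitance C per unit length and, as Thomson did in his 1855 analysis, neglect inductance and leakage. The telegrapher's equations then collapse to a diffusion equation, so the time for a pulse to become readable at the far end grows with the _square_ of the cable length -- Thomson's "law of squares", which is roughly why a cable thousands of km long was so agonisingly slow:

    \frac{\partial^2 V}{\partial x^2} = R C \, \frac{\partial V}{\partial t}
    \qquad\Rightarrow\qquad t_{\text{delay}} \sim R C \, \ell^2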
Once again, the existence of legal use cases does not invalidate the existence of illegal use cases. Do you genuinely believe the primary use case of this software by a majority of its users will be to download this type of legal content?
Thank you very much for a detailed answer. Might I rudely ask -- as you're knowledgeable in this space, what do you think of Mullvad's DAITA, which specifically aims to defeat traffic analysis by moving to a more pulsed constant bandwidth model?
DAITA was introduced after my time in the industry, but this isn't a new idea (though as far as I know, it's the first time this kind of thing's been commercialized).
It's clever. It tries to defeat attacks against one of the tougher parts of VPN connections to reliably obfuscate, and the effort's commendable, but I'll stop short of saying it's a good solution for one big reason: with VPNs and censorship circumvention, the data often speaks for itself.
A VPN provider working in this space will often have aggregate (and obviously anonymized, if they're working in good faith) stats about success rates and failure classes encountered from clients connecting to their nodes. Where I worked, we didn't publish this information. I'm not sure where Mullvad stands on this right now.
In any case -- some VPN providers deploying new technology like this will partner with the research community (because there's a small, but passionate formal research community in this space!) and publish papers, studies, and other digests of their findings. Keep an eye out for this sort of stuff. UMD's Breakerspace in the US in particular had some extremely clever people working on this stuff when I was involved in the industry.
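Getting back to how DAITA-style defenses work under the hood: the core idea behind constant-rate / padded traffic is roughly this (my own toy sketch, not how DAITA is actually implemented). Every packet on the wire is the same size and sent on a fixed clock, with dummy packets filling the idle ticks, so packet sizes and timing stop correlating with what the user is actually doing. The hard part is the bandwidth and latency overhead this costs.

    import java.util.concurrent.LinkedBlockingQueue

    // Toy sketch of constant-rate cover traffic (not DAITA itself).
    const val CELL_SIZE = 1024 // every cell on the wire is this many bytes
    const val TICK_MS = 20L    // and one cell is sent every tick

    val outbound = LinkedBlockingQueue<ByteArray>()

    // Each payload is padded (or, in this toy, truncated) to the fixed cell size.
    fun queue(payload: ByteArray) = outbound.put(payload.copyOf(CELL_SIZE))

    // Every tick, send a real cell if one is waiting, otherwise an all-zero
    // dummy cell, so an observer sees the same pattern regardless of activity.
    fun senderLoop(send: (ByteArray) -> Unit) {
        while (true) {
            send(outbound.poll() ?: ByteArray(CELL_SIZE))
            Thread.sleep(TICK_MS)
        }
    }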