> sometimes, just sometimes - it is also to do with protecting organisations from careless or malicious users.
There are two cases where this is true: a user intentionally sharing internal access with external malicious actors, or a user unintentionally sharing internal access with external malicious actors (e.g. social engineering / general incompetence). Neither applies to kiosk breakouts.
You seem very sure that those are the only two security risks to an expected browser being available on an otherwise managed device. I'm pretty certain there may be other risks.
One can absolutely make an argument for a great many risks to be classified under security concern: there are certainly more than just these two. Doing so is simply reductio ad absurdum.
To expand on this: we can, if we choose, classify all parental controls under general access control and, within a principle of least privilege, further classify the following as legitimate security risks:
- access to the internet
- access to a keyboard
- read access to a disk
There are absolutely scenarios one can concoct where these are real concerns. The settings panel of a general-purpose consumer device doesn't fit that Venn diagram for me. Is it a bug? Yes. Is it a security bug? No.
Please take this as critical feedback, and not as a personal attack: The comments which you are making here suggest that you shouldn't develop any software which in any way touches personal data without significant upskilling on IT security. You're making false comments with complete confidence.
Most security exploits came about as a result of attackers bringing systems into absurd situations and moving them through unintended pathways.
"Reductio ad absurdum" could apply to most digital exploits before they've happened. "Why would the system get into that state?"
That's a key difference between physical security and digital security:
- In a physical situation, I need to worry about what a typical criminal trying to break into my home or business might do. That requires reasonable measures.
- In digital security, I need to worry about what the most absurdly creative attacker on the internet might do (and potentially bundle up as a script / worm / virus / etc.). I do need to worry about scenarios which might seem absurd for physical security.
If you engineer while classifying only "reasonable" scenarios as security risks, your system WILL eventually be compromised, and there WILL be a data leak. That shift in mindset happened around two decades ago, when the internet went from a friendly neighborhood of academics to the wild, wild west, with increasingly creative criminals attacking systems from countries many people in America have never heard of, and where cross-border law enforcement is certainly impractical.
I've seen people like you design systems, and that HAS led to user harm and severe damages to the companies where they worked. At this point, this should be security 101 for anyone building software.
Seems like an argument about system-driven and component-driven risk analyses - they both have their place, and they're not mutually exclusive. Risk-based approaches aren't about either removing all risk or paying attention to only the highest priority ones. Instead, they are about managing and tracking risk at acceptable levels based on threat models and the risk appetites of stakeholders, and implementing appropriate mitigations.
It's a slightly different argument. The level of "reasonable risk" depends on the attacker in both situations.
The odds of any individual crafting a special packet to crash my system are absurdly low.
However, "absurdly low" is good enough. All it took was one individual to come up with the ping-of-death and one more to write a script to automate it, and systems worldwide were being taken down by random teenagers in the late nineties.
As a result of these and other absurd attacks, any modern IP stack is hardened to extreme levels.
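The bug class itself was trivial, which is part of the point. Here's a toy sketch of the kind of bounds check every stack now performs (my own illustration, not any real implementation's code):

```python
# Toy illustration only (no real stack's code): the class of bounds
# check that kills the classic ping-of-death. IPv4's total-length
# field is 16 bits, so no reassembled packet may exceed 65,535 bytes;
# early stacks trusted fragment offsets and overflowed a fixed buffer.

MAX_IPV4_PACKET = 65_535

def accept_fragment(offset_units: int, payload_len: int) -> bool:
    """Reject any fragment whose end would exceed the IPv4 maximum.

    offset_units: the 13-bit fragment-offset field, in 8-byte units.
    payload_len:  this fragment's payload length in bytes.
    """
    return offset_units * 8 + payload_len <= MAX_IPV4_PACKET

# The attack: a maximal offset plus a large payload, so the
# reassembled packet would be larger than any legal IPv4 packet.
assert accept_fragment(offset_units=8191, payload_len=1480) is False
assert accept_fragment(offset_units=0, payload_len=1480) is True
```

"Why would anyone send a packet like that?" is exactly the reasoning that left the check out in the first place.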
In contrast, my house lock is pretty easy to pick (much easier than crafting the ping-of-death), and I sometimes don't even remember to lock it. That's okay, since the threat profile isn't "anyone on the internet," but is rather limited (to people in my community who happen to be trying to break into my house).
I don't need to protect my home against the world's most elite criminals trying to break in, since they're not likely to be in that very limited set of people. I do need to protect any software I build against them.
That applies both to system threats and to component threats. Digital systems need to be incredibly hard.
Google used to know that too. I'm not sure when they unlearned that lesson.
Do you think there’s a standard for “incredibly hard” that all applications need to follow? Or that it varies from one application to another depending on context?
It depends on context. There are many pieces here:
1) Cost of compromise.
- For example, medical data, military secrets, and highly-personal data need a high level of security.
- Something like Sudoku high scores, perhaps not so much.
2) Benefit of compromise. Some compromises net $0, and some $1M.
- Something used by 4B users (like Google) has much higher potential upside than something used by 1 user. If someone can scam-at-scale, that's a lot of money.
- Something managing $4B of bitcoin or with designs for the F35 has much higher upside than something with Sudoku high scores.
3) Exposure.
- A script I wrote which I run on my local computer doesn't need any security. It's like my bedroom door.
- A one-off home, school, or business-internal system is only exposed to those communities, and doesn't need to be excessively hardened. It's more-or-less the same as physical security.
- Something on the public internet needs a lot more.
This, again, speaks to the number of potential attackers (0, dozens/hundreds, or 7B).
#1 and #2 are obvious. #3 is the one where I see people screw up with arguments. Threats which seem absurdly unlikely are exploited all the time on the public internet, and intuition from the real world doesn't translate at all.
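To make the interplay between the three concrete, here's a deliberately crude back-of-the-envelope sketch (my own toy model, not any established risk framework): expected loss grows with your cost of compromise, the attacker's payoff, and the size of the attacker pool.

```python
import math

# Deliberately crude toy model, not an established risk framework.
# The absolute numbers are meaningless; only the ordering between
# scenarios is. Exposure is log-scaled so that "7B potential
# attackers" matters enormously, but not 7-billion-fold.

def toy_risk_score(cost_of_compromise: float,
                   attacker_benefit: float,
                   potential_attackers: int) -> float:
    likelihood = attacker_benefit * math.log10(1 + potential_attackers)
    return likelihood * cost_of_compromise

# A local script holding Sudoku scores: zero exposure, zero score.
print(toy_risk_score(1, 1, 0))                      # 0.0
# A business-internal tool behind a VPN: dozens of possible attackers.
print(toy_risk_score(1_000, 100, 200))              # modest
# Personal data on the public internet: the 7B-attacker scenario.
print(toy_risk_score(1_000_000, 1_000_000, 7_000_000_000))  # enormous
```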
If I’m reading you right, if a business had a non-critical internal system (internal network behind a strong VPN) with the potential for a CSRF attack, you wouldn’t call that a risk?
It's a risk; having one is like having glass windows (at least at street level).
Whether it's a risk worth addressing depends on a lot of specifics.
For example, a CSRF attack on something like sharepoint.business.com could be carried out externally with automated exploits. That brings you to the 7B-attacker scenario, and if the business has 100,000 employees, odds are one of them will eventually be hit.
A CSRF attack on a custom application only five employees know about has decent security-by-obscurity. An attacker would need to know URLs and similar business-internal information, which only five people have access to. Those five people can just as easily walk into the CEO's office and physically compromise the machine.
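For what it's worth, the standard mitigation is cheap enough that the "is it worth addressing" question often answers itself: a per-session anti-CSRF token that a cross-site form can't read, and therefore can't echo back. A minimal framework-agnostic sketch (hypothetical function names, my own illustration):

```python
import hmac
import secrets

# Minimal framework-agnostic sketch with hypothetical names. The token
# lives in the server-side session and gets embedded in every
# legitimate form; a request forged from another origin cannot read
# it, so it cannot submit a matching copy.

def issue_csrf_token(session: dict) -> str:
    """Mint a per-session token; embed it in a hidden form field."""
    token = secrets.token_urlsafe(32)
    session["csrf_token"] = token
    return token

def verify_csrf_token(session: dict, submitted: str) -> bool:
    """Constant-time comparison against the session's stored token."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

session = {}
token = issue_csrf_token(session)
assert verify_csrf_token(session, token)
assert not verify_csrf_token(session, "attacker-guess")
```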