
> Google's increasingly cavalier attitude towards security is concerning:

> [3 bullet points unrelated to security]

Security is a field related to protecting device-users from malicious actors. Your 3 examples all fall broadly under parental controls, which are about controlling & monitoring a user's use of & access to their device - a scenario within which the user is the adversary, not external actors. That may be an important or necessary measure in some contexts, but classifying it as "security" is misleading.



This could also be considered a sandbox bypass. A device/application is given a limited set of capabilities to ensure that if something does go wrong, the affected area is small and well known. This effectively eliminates those safeguards and provides a gaping hole that most systems designers would think had been closed via other configuration. As others have pointed out: kiosks, schools, prisons, POS terminals, the check-in device at a doctor's office, and any other managed device have a reasonable expectation to behave as their admins have configured them - for the sake of not only the person holding the device, but also the person sitting next to them who could be affected by its misuse.

Systems have firewalls, ulimits, pledge, ACLs, permissions, and sometimes physical locks and keys to prevent users of the system from doing things that owners or operators of the system have decided should not be permitted. As others have mentioned, this might be for security, compliance, CYA, or just reducing the number of variables to consider in a system.
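
For concreteness, here's a minimal sketch (host and class names are mine, not from the article) of how a kiosk-style browser is commonly pinned to an admin-configured allowlist on Android, using a WebViewClient:

    import android.webkit.WebResourceRequest
    import android.webkit.WebView
    import android.webkit.WebViewClient

    // Hypothetical kiosk client: only hosts the admin allowlisted may load.
    class KioskWebViewClient(
        private val allowedHosts: Set<String> = setOf("kiosk.example.com")
    ) : WebViewClient() {
        override fun shouldOverrideUrlLoading(
            view: WebView,
            request: WebResourceRequest
        ): Boolean {
            val host = request.url.host ?: return true // no host: block it
            // Returning true cancels the navigation; false lets it proceed.
            return host !in allowedHosts
        }
    }

A settings page that opens an unrestricted browser never passes through a client like this - which is the sandbox-bypass point above.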


> This could also be considered ...

I agree, but you've very appropriately used the word "could" here. The gp bemoaned Google not prioritising this issue as a serious security concern. That it could theoretically be classified under security, if X, Y & Z were true by a to-the-letter definition of access-control threat models, doesn't mean that in this specific case of a consumer device, using a browser from settings is a high-severity risk. Even if it were a bypass of something like Nessus/Crowdstrike/et al (and not just consumer parental controls), it still wouldn't represent a significant threat as a simple kiosk escape in isolation.

Any definition that classifies this the way the gp proposes is a theoretical nitpick, not an actual considered threat model.


> which are about controlling & monitoring a user's use of & access to their device - a scenario within which the user is the adversary, not external actors

Access control falls squarely under security. Also, the user should be considered the adversary, because they or programs that run on their behalf might be malicious, either knowingly or unknowingly. Not accounting for this is one of UNIX's biggest blunders.


The user is generally never the adversary in any legitimate security situation. Ignorance might be, but that's not something inherent to the user, and it's an area for improvement.


In this case the “user” is in part the person granting controlled access. The person moving the mouse is not the user in total.

Take an easier example: an ATM. If a person touching it can access accounts or remove money, there is no question about it being a security problem.


Yeah, it's important to make a distinction between the "user" and the device owner. Often those are the same person but not always. Treating the user as an adversary can be okay in some circumstances, but treating the device owner as an adversary is never acceptable in my opinion.


The whole problem with security is that it's often difficult to tell whether all steps of what are happening now align with the device owner's true intent--

* Is it the device owner providing the direction to do this?

* Will the input being consumed as a result of this direction result in actions that the device owner approves of?

etc.

A kind of blanket assumption that everyone and everything is the adversary is a good starting point. The system needs to protect itself, in order to be able to faithfully follow the owner's instructions in the future.


Someone on an ATM accessing accounts other than their own is a security problem. Someone on an ATM accessing YouTube is not a security problem.


I'm not so sure. It could be considered a DoS if nothing else, and throwing porn up on an ATM screen could certainly cause a company enough problems that they would consider it a security problem. If you can load YouTube on an ATM, you could probably also load a different site with a fake ATM screen that collects PINs and/or other personal information (account numbers would be more difficult unless you have a way to access the card reader). Any full-featured browser in an ATM that an attacker can instruct to load their JS is very likely a major security issue waiting to happen.


Being able to display whatever you want on an ATM is absolutely a security problem. I could put up a fake PIN prompt, a prompt to enter the card number because the reader is broken, whatever. This comment section is blowing my mind, and is a great example of why dedicated security teams are required in the world of software.


You're assuming they have control of a lot of the screen, and that they have access to the keypad. Or even that they can get to sites other than YouTube. Please don't assume the case that makes my post the weakest.

Your mind is blown because you're reading way too much into my hypothetical.


> Or even that they can get to sites other than YouTube.

Playing a video directing the user to call a number would be enough to trick some people. Enabling social engineering is a security problem.

Security is minding the specifics, which requires not assuming things are ok. That's why red teams exist, and why the default assumption of "it's not ok" is the correct assumption. ;)

We'll find out if the specific case in the article is a problem or not, once people look at it very closely. We may not have this luxury with our hypothetical ATM, though.


I'm not here to make assumptions. I'm here to point out "being able to open a web page in a context like that is not necessarily a security problem", and I'm sure you can think of an example if you don't like my example.

With an existence proof, you only have to worry about the narrowest possible interpretation. The skill of considering what an exploit could lead to is very important, but it fits oddly into such a hypothetical. Finding a possible flaw doesn't invalidate an existence proof unless you also can't think of a way to mitigate it.

Also, if the video is small and says YouTube and tricks a user, I'm not sure I would call that a security problem. You can trick users with a post-it note, and that doesn't mean there's anything wrong on a technical level.


> You can trick users with a post-it note, and that doesn't mean there's anything wrong on a technical level.

Sure, but something present on the screen of a trusted system is very, very different from a post-it note. This claim is why I'm standing firmly by my assertion that this is why red teams exist. That's a really baffling view of security, to me.


You're still not listening to my real point.

Pretend it can only play a rickroll, no other videos. Or I could come up with something more reasonable like "it only does top trending and you need to hold keys down so if you don't have skimmer-level tech to shove in you can't persist the exploit" or whatever.

I'm saying there's some scenario where it's not a security issue.

You don't need to prove that there are scenarios that are security issues. That's obvious.


Excluding parental controls from 'security' feels like more of an ideological stance than a practical one.

I can see the argument based on Free Software principles, but I don't see anything else. There are so many cases of devices facing a user but not owned by the user which very much do fall under 'security'. Public terminals are a big one, devices handed out to employees in certain cases are another, and esoteric cases like prisoners' devices also exist. Those should very much count as security, if only because 'when something breaks, dangerous things can happen'. Excluding parental controls because 'censorship bad' doesn't make much sense, since parental controls and other device lockdowns are often implemented with the exact same methods.

There are plenty of eviler things, like locked-down secure boot and TPM-grounded DRM, that definitely fall under security, so I don't think it makes sense to gatekeep the term.

Heck, security as a term is so often used oppressively that it makes little sense to gatekeep it anyway.


Privilege escalation is a typical class of security issues.

The device owner (parent, school, etc.) sets restrictions, which some other user bypasses.
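
For concreteness, a minimal sketch (the API is standard Android; the function name is mine) of how a device owner's admin app imposes such restrictions via DevicePolicyManager - the kind of policy a bypass like this walks around:

    import android.app.admin.DevicePolicyManager
    import android.content.ComponentName
    import android.content.Context
    import android.os.UserManager

    fun lockDownDevice(context: Context, admin: ComponentName) {
        val dpm = context.getSystemService(Context.DEVICE_POLICY_SERVICE)
                as DevicePolicyManager
        // Block the managed user from adding accounts or installing apps.
        dpm.addUserRestriction(admin, UserManager.DISALLOW_MODIFY_ACCOUNTS)
        dpm.addUserRestriction(admin, UserManager.DISALLOW_INSTALL_APPS)
    }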


Could be an organizational need, like medical files and HIPAA.


Right, that’s a parental control scenario.


A Linux box with a root user and an end user, where the end user can run things as root without root authentication - is that also parental controls?


> The user is generally never the adversary in any legitimate security situation.

First, this isn't correct; consider DRM and TPM.

Second, "the user" does not have direct access to the computer internals, which means all such access is mediated by programs that are supposed to act on the user's behalf. But because software is not formally verified, we have no guarantee that they do so, and so we must assume that any program purporting to run on the user's behalf is intentionally or unintentionally malicious. This is where the principle of least privilege comes from.


The word "user" is ambiguous.

There are two kinds of relationships between an "user" and a computer.

The computer may belong to the employer of the "user", who receives temporary local or remote access to it in order to perform the job's tasks. Or the computer may belong to some company that provides paid or free services which involve the local or remote use of a computer.

In such a context, the main purpose of security is indeed to ensure that the "user" cannot use the computer for anything else than what is intended by the computer owner.

The second kind of relationship between a "user" and a computer is when the "user" is supposed to be the owner or the co-owner of the computer. In this case the security should be directed only towards external threats and any security feature which is hidden or which cannot be overridden by the "user" is not acceptable.

Except perhaps in special cases, parental controls should no longer be necessary after a much lower age than usually claimed, as they are useless anyway.

I grew up in a society where everybody was subjected to parental controls regardless of age, i.e. regardless of whether they were 10 years old, 40 years old or 100 years old.

Among many other things that were taboo, there was no pornography, either in printed form, or in movie theaters or on TV.

Despite this, the young children, at least the young boys, were no more innocent than they would be today given unrestricted access to the Internet. At school, after the age of 10, whenever there were no adults or girls around, a common pastime was the telling of various sexual jokes. I have no idea what the source of those jokes was, but there was an enormous number of them, and they included pretty much everything that can be seen in a porno movie today. The only difference between the children of that time and those who would be exposed to pornography today was that, due to the lack of visual information, both those who were telling and those who were listening did not understand many of the words or descriptions included in the jokes.

So even Draconian measures designed to "protect the innocence of the children" fail to achieve their purpose and AFAIK none of those boys who "lost their innocence" by being exposed to pornographic jokes at a low age were influenced in any way by this.


> > The user is generally never the adversary in any legitimate security situation.

> First, this isn't correct; consider DRM and TPM.

You must have missed the word "legitimate". DRM and TPM are two of the best examples of illegitimate "security".


First, that's a matter of opinion. Second, it's still wrong per my second point.


If you don't think DRM is illegitimate security, then what do you think is?


It still falls under security, obviously, which is why I listed it. Whether you like it or not is irrelevant.


The role of the user as adversary is complicated, but it includes things like unintentional and coerced or duped actions. The desired behavior is to protect the user from their own mistakes or victimization. Some of the concerns GP raises overlap with security. In secure programming, the threat model always includes "user error".


Ya, I could not upvote this more. Honestly physical security is usually one of the biggest fail points you will see in security audits. I also agree that there is nothing wrong with viewing users as potentially adversarial. I guess some of these responses surprise me, is all. I urge any sysadmins working with physical servers to reevaluate their access controls.


> there is nothing wrong with viewing users as potentially adversarial

There is a world of difference between considering users adversarially (social engineering is the most common threat vector bar none) and considering kiosk escape a serious threat.


> Security is a field related to protecting device-users from malicious actors.

You know - sometimes, just sometimes - it is also to do with protecting organisations from careless or malicious users. The three points are related to security, even if couched in terms of parents/children.


If you (in this case a parent) block something (in this case browsing porn sites) on some software (in this case an Android device), it most definitely _is_ a security issue if the user (in this case a child) can bypass the restriction you imposed. I don't understand what's not clear there? If your phone is locked with a PIN and you pass it to your friend (Stifler) because his mom just called you, he should not, in any circumstance, be able to unlock your phone without knowing the PIN code. That's the first issue. The second security issue is the possibility of any website calling private internal Android functions for (potentially) setting encryption keys of your device (!!!). You don't consider this a security issue?
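
For anyone wondering how a page gets its hands on something like that: a minimal sketch, assuming the mm object is wired up through Android's standard addJavascriptInterface mechanism (the article doesn't confirm this, and the class and method names below are illustrative). Every page the WebView loads can then call window.mm.<method>():

    import android.webkit.JavascriptInterface
    import android.webkit.WebView

    class MmBridge {
        @JavascriptInterface // callable from page JS as window.mm.setEncryptionKey(...)
        fun setEncryptionKey(key: String) {
            // hypothetical privileged operation exposed to the page
        }
    }

    fun attach(webView: WebView) {
        webView.settings.javaScriptEnabled = true
        // From here on, ANY page this WebView loads can reach the bridge.
        webView.addJavascriptInterface(MmBridge(), "mm")
    }

Which is exactly why letting such a browser reach arbitrary sites matters.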


Related: I used to root my old Android phones by going to rooting websites that would do it all in the browser.


There are a lot of much easier ways to compromise security for both careless and malicious users. This is the fundamental difference between iOS and Android: if you want, you can ruin the security of Android, whereas it is harder to do on iOS. Definitely not impossible - you can sideload dangerous apps easily on iOS as well.


> There are a lot of much easier ways to compromise security for both careless and malicious users.

So what? There can be multiple ways to compromise security, and it's not like we only solve the easiest ways and leave the rest.

While there are easier ways today, when those get patched this will one day be the easiest.


I think you misunderstood me. Android deliberately allows users to hack into their own phone and remove its security. It allows users to install malicious apps if they want to, or even root the phone entirely.

So there is nothing to solve or patch here. You could get iOS if you want users to not have that power (and even there it isn't very hard to install a malicious accessibility app through sideloading).


I would hardly call disabling a security feature in the settings, or getting an authorization key from the vendor, hacking into your own phone. These are features that allow users who (think they) know what they are doing to do what they want to do. It is intentional, and people can figure out the consequences by doing some research. That is in stark contrast to finding an undocumented hole in security.


> sometimes, just sometimes - it is also to do with protecting organisations from careless or malicious users.

There are two cases where this is true: a user intentionally sharing internal access with external malicious actors, or a user unintentionally sharing internal access with external malicious actors (e.g. social engineering / general incompetence). Neither apply to kiosk breakouts.


You seem very sure that those are the only two security risks of an unexpected browser being available on an otherwise managed device. I'm pretty certain there may be other risks.


One can absolutely make an argument for a great many risks to be classified as security concerns: there are certainly more than just these two. Doing so is simply reductio ad absurdum.

To expand on this: we can, if we choose, classify all parental controls under general access control, and within a principle of least privilege further classify the following as legitimate security risks:

- access to the internet

- access to a keyboard

- read access to a disk

There are absolutely scenarios one can concoct where these are real concerns. The settings panel of a general-purpose consumer device doesn't fit that Venn diagram for me. Is it a bug: yes. Is it a security bug: no.


Please take this as critical feedback, and not as a personal attack: The comments which you are making here suggest that you shouldn't develop any software which in any way touches personal data without significant upskilling on IT security. You're making false comments with complete confidence.

Most security scenarios came about as a result of attackers being able to bring systems into absurd situations, and moving systems through unintended pathways.

"Reductio ad absurdum" could apply to most digital exploits before they've happened. "Why would the system get into that state?"

That's a key difference between physical security and digital security:

- In a physical situation, I need to worry about what a typical criminal trying to break into my home or business might do. That requires reasonable measures.

- In digital security, I need to worry about what the most absurdly creative attacker on the internet might do (and potentially bundle up as a script / worm / virus / etc.). I do need to worry about scenarios which might seem absurd for physical security.

If you engineer systems treating only "reasonable" scenarios as security risks, your system WILL eventually be compromised, and there WILL be a data leak. That shift in mindset happened around two decades ago, when the internet went from a friendly neighborhood of academics to the wild, wild west, with increasingly creative criminals attacking systems from countries many people in America have never heard of, and where cross-border law enforcement is certainly impractical.

I've seen people like you design systems, and that HAS led to user harm and severe damages to the companies where they worked. At this point, this should be security 101 for anyone building software.


Seems like an argument about system-driven and component-driven risk analyses - they both have their place, and they're not mutually exclusive. Risk-based approaches aren't about either removing all risk or paying attention to only the highest priority ones. Instead, they are about managing and tracking risk at acceptable levels based on threat models and the risk appetites of stakeholders, and implementing appropriate mitigations.

https://www.ncsc.gov.uk/collection/risk-management/introduci...


It's a slightly different argument. The level of "reasonable risk" depends on the attacker in both situations.

The odds of any individual crafting a special packet to crash my system are absurdly low.

However, "absurdly low" is good enough. All it took was one individual to come up with the ping-of-death and one more to write a script to automate it, and systems worldwide were being taken down by random teenagers in the late nineties.

As a result of these and other absurd attacks, any modern IP stack is hardened to extreme levels.

In contrast, my house lock is pretty easy to pick (much easier than crafting the ping-of-death), and I sometimes don't even remember to lock it. That's okay, since the threat profile isn't "anyone on the internet," but is rather limited (to people in my community who happen to be trying to break into my house).

I don't need to protect my home against the world's most elite criminals trying to break in, since they're not likely to be in that very limited set of people. I do need to protect any software I build against them.

That applies both to system threats and to component threats. Digital systems need to be incredibly hard.

Google used to know that too. I'm not sure when they unlearned that lesson.


Do you think there’s a standard for “incredibly hard” that all applications need to follow? Or that it varies from one application to another depending on context?


It depends on context. There are many pieces here:

1) Cost of compromise.

- For example, medical data, military secrets, and highly-personal data need a high level of security.

- Something like Sudoku high scores, perhaps not so much.

2) Benefit of compromise. Some compromises net $0, and some $1M.

- Something used by 4B users (like Google) has much higher potential upside than something used by 1 user. If someone can scam-at-scale, that's a lot of money.

- Something managing $4B of bitcoin or with designs for the F35 has much higher upside than something with Sudoku high scores.

3) Exposure.

- A script I wrote which I run on my local computer doesn't need any security. It's like my bedroom door.

- A one-off home, school, or business-internal system is only exposed to those communities, and doesn't need to be excessively hardened. It's more-or-less the same as physical security.

- Something on the public internet needs a lot more.

This, again, speaks to number of potential attackers (0, dozens/hundreds, or 7B).

#1 and #2 are obvious. #3 is the one where I see people screw up with arguments. Threats which seem absurdly unlikely are exploited all the time on the public internet, and intuition from the real world doesn't translate at all.


If I’m reading you right, if a business had a non-critical internal system (internal network behind a strong VPN) with the potential for a CSRF attack, you wouldn’t call that a risk?


It's a risk.

Having one is like having glass windows (at least at street level).

Whether it's a risk worth addressing depends on a lot of specifics.

For example, a CSRF attack on something like sharepoint.business.com could be externally exploited with automated exploits. That brings you to the 7B attacker scenario, and if the business has 100,000 employees, likely one of them will hit on an attack.

A CSRF attack on a custom application only five employees know about has decent security-by-obscurity. An attacker would need to know URLs and similar business-internal information, which only five people have access to. Those five people could just as easily walk into the CEO's office and physically compromise the machine.
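
For the mechanism: CSRF works because the endpoint and parameters are predictable, and the standard synchronizer-token defense works by making one request parameter unpredictable. A framework-agnostic sketch in Kotlin (names illustrative):

    import java.security.MessageDigest
    import java.security.SecureRandom
    import java.util.Base64

    // Fresh per-session token an attacker cannot guess.
    fun newCsrfToken(): String =
        Base64.getUrlEncoder().withoutPadding()
            .encodeToString(ByteArray(32).also { SecureRandom().nextBytes(it) })

    // Constant-time comparison, so the check doesn't leak timing information.
    fun tokenMatches(sessionToken: String, submitted: String): Boolean =
        MessageDigest.isEqual(sessionToken.toByteArray(), submitted.toByteArray())

    fun main() {
        val token = newCsrfToken()
        println(tokenMatches(token, token))          // true: same-session post
        println(tokenMatches(token, newCsrfToken())) // false: forged request
    }

With a token in play, even a well-known URL stops being blindly automatable; without one, obscurity is the only thing standing in the way.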


>it is also to do with protecting organisations from careless or malicious users.

What about protecting users from careless or malicious organisations?


This is literally a privilege-escalation attack: i.e. the user escapes from controls that are imposed on them by the device manager (which may well not be the user, but a corporate MDM platform).

Are you suggesting that privilege-escalation attacks are not security risks?


> Are you suggesting that privilege-escalation attacks are not security risks?

Nope. What I'm suggesting is that threat modelling is important. If attack vectors were classified equally based on technicalities we would have infinite surface area. Kiosk bypass might be vaguely categorisable alongside things like polkit exploits but they are not equivalent in any normal threat model.


>Nope. What I'm suggesting is that threat modelling is important. If attack vectors were classified equally based on technicalities we would have infinite surface area.

OK, so we agree that your original statement (which follows) is wrong, because it makes broad, tacit assumptions about the threat model that are not justified?

> Security is a field related to protecting device-users from malicious actors.

Whereas a more conventional definition of information security would also involve protecting systems from unauthorized access, including privilege escalations (that's the E in STRIDE, right?) that bypass controls that were intended to apply to the user.

Honestly, it's baffling to me why you're arguing this point.


No, you said they are 'unrelated to security.' Just admit you made a mischaracterization. It happens.


I agree with lucideer here. While I think the language chosen needlessly leaves space for pedantic arguments, they're correct that, from the context of Google's software, none of these are relevant to the security that Google needs to care about.

It's true they could be a part of the things security needs to care about, but so is a phone catching on fire because of its battery - which in and of itself is not directly a security risk.


I wouldn't say that escaping a control is always a privilege escalation. Browsing like this doesn't access any data, privileged or not, and you already had internet access. You're still in a very tight sandbox.


Defining your way out of giving secure, as in safe, devices to kids is frustrating. And sadly reflective of exactly why the original comment is correct.


Well said. Spoken like a true Google engineer! However, I think you misunderstand security as a field, at least one of my three points, and children and parenting.

===================

Security as a field

===================

You wrote: "Security is a field related to protecting device-users from malicious actors."

This is a very narrow and incorrect definition. Security as a field relates to many things, including for example protecting confidential information. If my medical information is handled by a hospital, I would like to know that information does not land on the dark web. In order to do this, the hospital needs to implement processes which protect my information from nurses being socially-engineered, doctors installing spyware, and countless other threats.

This is handled with defense in depth:

- Personnel handling my sensitive data should be screened.

- There should be technological restrictions on the devices, preventing both malicious actors and errors.

- There should be training in place.

- There should be appropriate legal safeguards (NDAs, employment agreements, etc.).

- And so on.

Managing confidential information involves having managed devices. In many cases, these are also in physically-secure facilities and intentionally kept off-line. They don't belong to the person using them.

=========

Bullet #3

=========

One of the points in the original article is that the embedded browser has "a weird JavaScript object named mm" which appears to be used to handle things like security keys. This is a security issue in the narrow sense you've defined. If my child (and many other kids) uses this to bypass parental controls, their device is likely to be compromised by a malicious actor if they browse to a malicious web site.

========

Children

========

You described kids as "a scenario within which the user is the adversary"

I don't know if you've ever interacted with young kids before, but they're not so much the adversary as oblivious and clueless. Before they're teenagers, most are sweet, charming, and WANT to do the right thing. However:

- They have no idea what a "buffer overflow attack" is, let alone phishing and other standard scams.

- They're very easy to socially engineer. If you're a Random Adult, and ask them for a password, and give a stern look, they'll probably give it to you.

- They have no idea of the kinds of malicious actors on the internet. If someone tells them "To enable Angry Birds, go to this special dialogue," they might very well do it. There are online videos of malicious actors tricking little kids into e.g. washing their devices in a sink, or sticking them into a microwave purely for the LOLs. Mean people do these things to kids.

... and so on.

The reason to control and monitor what little kids do (not just digitally; the same applies to kitchen knives, fireplaces, and swimming pools) has very little to do with treating them as an adversary, and a lot to do with treating them as little kids who need an adult to help them learn.


That's a nice long reply - I'll try and keep my response a bit shorter.

You (and many, many of the replies in this thread) have taken the initial topic (kiosk escape -vs- parental controls) and are defending its classification as a serious security threat by likening it to social engineering attacks on medical staff. These are separate scenarios with separate threat models. If your child is sending confidential corporate data to malicious third parties through the Android settings app, you may have a separate set of problems beyond software controls.

Overall, much of the finer detail of your and others' replies amounts to an extreme level of theoretical pedantry around the technical classification of threats, completely removed from any kind of real-world analysis of their severity.


I'll keep it short too: You don't understand what (many) people are trying to explain to you, and are coming to dangerously incorrect conclusions. If many people are giving you the same feedback, it should trigger something in your head, but somehow, it doesn't. You don't even seem to be trying to understand or considering the fact you might be missing something, so I'm giving up.

Please do not ever build systems which touch any sort of critical data, or which run on consumer devices outside of a sandbox, until you've picked up a basic clue about security.


> Security is a field related to protecting device-users from malicious actors.

This is one of many aspects of security-- perhaps what Google considers most important on Android, but surely you can imagine some scenarios which we care about which aren't about an end-user getting attacked.

(Indeed, sometimes security is all about protecting infrastructure, assets, or information from device users).

Besides, the third point that you cavalierly dismiss above:

> > 3) Given the embedded browser is not secure, if a lot of kids do this, it WILL lead to someone exploiting this, and machines being compromised and escalations

directly relates to even your limited notion of security.


The third bullet point explicitly mentions the device being compromised, so I think it’s unfair to paint that as unrelated to security or just a parental-control issue.



