
I don't really see how you can both allow developers to update their apps automatically (which is widely promoted as being good security practice) and also defend against good developers turning bad.

How does Google know if someone has sold off their app? In most cases, F-Droid couldn't know either. A developer transferring their accounts and private keys to someone else is not easily detected.





> In most cases, F-Droid couldn't know either.

F-Droid is quite restrictive about what kinds of apps they accept: they build each app from source code themselves, and the source code must be published under a FLOSS license. They also have checks that every new version of an app has to pass.

Although it's possible for a developer to transfer their accounts and private keys to someone shady, F-Droid's checks and open source requirements limit the damage the new developer can do.

https://f-droid.org/docs/Inclusion_Policy/

https://f-droid.org/docs/Anti-Features/


One thing worth noting, these checks and restrictions only apply if you're using the original F-Droid repository.

I've often seen the IzzyOnDroid repository recommended, but that repo explicitly ships the APKs built by the original developers, so you don't get these benefits.


That's true. The whole point of an open ecosystem is that you get to decide who you get your software from. You can decide on the official F-Droid repository and get the benefits and drawbacks of a strict open source rule with the F-Droid organization's curation if that's your preference. You can add other repositories with different curation if you prefer that.

You know what? That's bullshit.

Anybody slightly competent can put horrendous back doors into any code, in such a way that they will pass F-Droid's "checks", Apple's "checks", and Google's "checks". Source code is barely a speed bump. Behavioral tests are a joke.


Anyone determined enough can break into any house. If not through ingenuity, then with a brick through your window. That doesn't mean we shouldn't lock our doors, turn off our lights, and close our curtains anyway.

The fortunate thing is that 99% of people won't bother trying to break your app if it's not dead simple. Advanced security mechanisms for catching backdoors are probably something only billionaire tech companies need to worry about.


You totally misunderstand the threat model. It's not about anybody breaking your app. It's about people making their own apps do things they're not supposed to do.

... and there's always a tradeoff in terms of how much of a deterrent anything is. The app store checks are barely measurable.


The app store checks are barely measurable, yes. That's exactly why being open source is the best check against undocumented changes: even if something isn't discovered on F-Droid, reports will come out for those who dig. It's much easier to read source code than to decompile an APK and analyze it.

But at some point there needs to be some level of trust in anything you install. You can't rely on institutions to make sure everything is squeaky clean. They can't even do that on content platforms (or at least, they choose not to afford it).


> In most cases, F-Droid couldn't know either. A developer transferring their accounts and private keys to someone else is not easily detected.

1. The Android OS does not allow installing an app update if the new APK is signed with a different key than the installed one. It will outright refuse, and this check runs locally on the device; there's no need to ask some third-party server to verify anything. It's a fundamental part of how Android security works, and it has been like this since the first Android phone was ever released. (A sketch of how that signing identity is exposed follows below.)

2. F-Droid compiles all APKs on its store, and signs them with its own keys. Apps on F-Droid are not signed by the developers of those apps. They're signed by F-Droid, and thus can only be updated through and by F-Droid. F-Droid does not just distribute APKs uploaded by random people, it distributes APKs that F-Droid compiled themselves.

So to answer your question: a developer transferring their accounts/keys to someone else doesn't matter. It won't affect the security of F-Droid users, because those keys/accounts aren't used by F-Droid. The worst that can happen is that the new owner tries injecting malware into the source code, but F-Droid builds apps from source and is thus positioned to catch that kind of thing (which is more than can be said for Google's ability to police Google Play).
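
To make point 1 concrete, here's a minimal Kotlin sketch of how that signing identity is exposed through the standard PackageManager API. The OS enforces the key match itself at install time; this only shows the fingerprint an update would have to match. GET_SIGNING_CERTIFICATES needs API level 28+, and the package name is whatever app you want to inspect:

    import android.content.Context
    import android.content.pm.PackageManager
    import java.security.MessageDigest

    // Read the SHA-256 fingerprint(s) of an installed package's signing
    // certificate(s). An update APK must be signed by the same key(s),
    // or the OS refuses to install it.
    fun signingFingerprints(context: Context, pkg: String): List<String> {
        val info = context.packageManager.getPackageInfo(
            pkg, PackageManager.GET_SIGNING_CERTIFICATES
        )
        val signers = info.signingInfo?.apkContentsSigners ?: return emptyList()
        return signers.map { sig ->
            MessageDigest.getInstance("SHA-256")
                .digest(sig.toByteArray())
                .joinToString(":") { "%02X".format(it) }
        }
    }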

And finally,

> How does Google know if someone has sold off their app?

Google should not know anything about the business dealings of potential competitors. Google is a monopoly[1], so there is real risk for developers and their businesses if Google is given access to this kind of information.

[1]: https://www.google.com/search?q=is+google+a+monopoly%3F&udm=...


Android also warns the user if an update comes from a different source than the one that installed the app, even when the keys match. This reply isn't trying to argue against anything you've said; I'm just adding to the list of how Android handles updates.
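
For anyone curious, the install source the system tracks is also queryable from code. A small sketch, assuming API level 30+ for the newer call (the older call still works but is deprecated):

    import android.content.Context
    import android.os.Build

    // Ask Android which package (e.g. which store app) installed `pkg`.
    // This recorded source is what lets the system warn when an update
    // arrives from somewhere else.
    fun installerOf(context: Context, pkg: String): String? =
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.R) {
            context.packageManager.getInstallSourceInfo(pkg).installingPackageName
        } else {
            @Suppress("DEPRECATION")
            context.packageManager.getInstallerPackageName(pkg)
        }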

> F-Droid compiles all APKs on its store, and signs them with its own keys. Apps on F-Droid are not signed by the developers of those apps. They're signed by F-Droid, and thus can only be updated through and by F-Droid. F-Droid does not just distribute APKs uploaded by random people, it distributes APKs that F-Droid compiled themselves.

For most of the apps I use, they just publish the developer's built (and signed) APK. They do their own build in parallel and verify that the result is identical to the developer's build (thanks to reproducible builds), but they still end up distributing the developer's APK.
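
Conceptually, "the result is identical" boils down to comparing the two APKs with the signature files ignored, since those are the only bytes that legitimately differ. A rough Kotlin sketch of that idea (F-Droid's real verifier is considerably more involved):

    import java.security.MessageDigest
    import java.util.zip.ZipFile

    // Hash every entry of an APK, skipping the signature files under
    // META-INF/. Two reproducible builds should agree on everything else.
    fun contentDigests(apkPath: String): Map<String, String> {
        val sigFile = Regex("META-INF/.*\\.(RSA|DSA|EC|SF|MF)")
        val result = mutableMapOf<String, String>()
        ZipFile(apkPath).use { zip ->
            for (entry in zip.entries()) {
                if (entry.isDirectory || sigFile.matches(entry.name)) continue
                val md = MessageDigest.getInstance("SHA-256")
                zip.getInputStream(entry).use { input ->
                    val buf = ByteArray(8192)
                    var n = input.read(buf)
                    while (n > 0) { md.update(buf, 0, n); n = input.read(buf) }
                }
                result[entry.name] = md.digest().joinToString("") { "%02x".format(it) }
            }
        }
        return result
    }

    fun looksReproducible(devApk: String, fdroidApk: String): Boolean =
        contentDigests(devApk) == contentDigests(fdroidApk)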


Can you give some examples? I've heard that's a thing, but I'm not familiar with any apps that actually pull it off (reproducible builds are difficult to achieve)

Reproducible builds may be hard to achieve, but there is nonetheless a list of such builds long enough to crash your browser: https://verification.f-droid.org/verified.html

Weird to have a page like that if a human can't use it. Needs some pagination, f-droid!

It's like we're supposed to save the page and grep it or something. Doesn't work in my Firefox.


You have to trust somebody.

Who is F-Droid? Why should I trust them?

How do I know they aren’t infiltrated by TLAs? (Three Letter Agencies), or outright bad-actors.

Didn’t F-Droid have 20 or so apps that contained known vulnerabilities back in 2022?

Who are all these people? Why should I trust them, and why do most of them have no link to a bio or repository, or otherwise no way to verify they are who they say they are and are doing what they claim to be doing in my best interests?

https://f-droid.org/en/about/


I trust them, at least a lot more than I do Google, which is a known bad actor, and collaborator with "TLAs". F-Droid has been around for a very long time, if you didn't know. They've built and earned the trust people have in them today.

> Didn’t F-Droid have 20 or so apps that contained known vulnerabilities back in 2022?

I don't know what specific incident you're referring to, but since they build APKs themselves in an automated way, if a security patch to an app breaks the build, that has to be fixed (usually by F-Droid volunteers) before the update can go out. In the meantime, F-Droid will warn that the app has known unpatched vulnerabilities.

Again, this is above and beyond what Google does in their store. Google Play probably has more malware apps than F-Droid has lines of code in its entire catalog.



Right, that's literally the team marking 12 apps as having known vulnerabilities (it seems it was because of a newly discovered WebRTC vulnerability). That's the F-Droid system working as intended, informing users about what they're installing.

You're calling it an incident like it was an attack or something, but it just seems like everyday software development. Google Play and the App Store don't let me know when apps have known vulnerabilities. I think F-Droid is coming out way ahead here.


So Google and Apple are already known to work with US government agencies. This was revealed in the Snowden leaks in 2013, and confirmed on multiple occasions since. Neither Google nor Apple tell you when apps you're downloading from the store contain known vulnerabilities. We know for a fact that both Google Play and the App Store are filled with scams and malware: it's widely documented.

So to my reading F-Droid comes out ahead on every metric you've listed: It has no known associations with US government agencies. They do inform you when your apps have known vulnerabilities. I'm not aware of any cases of scams or malware being distributed through F-Droid.

I highly recommend it. It's the main store I've been using on my phone for probably more than a decade now.


Because you can literally verify every single step of what they do. That's the reason you can trust them.

You cannot apply this logic to almost anyone else. Apple, Google, etc. can only give you empty promises.


I understand your concern, though your suspicion is a little shortsighted. It can be personally dangerous to volunteer for projects that directly circumvent the control of the establishment.

> Who is F-Droid? Why should I trust them?

For the same reason you trust many things: they have a long track record of doing the right thing. As gaining a reputation for doing the wrong thing would more or less destroy them, that's a fair incentive to continue doing the right thing. It's a much better incentive than many random developers of small apps in Google's Play Store have.

However, that's not the only reason to trust them. They also follow a set of processes, starting with a long list of criteria for which apps they will accept: https://f-droid.org/docs/Inclusion_Policy/. That doesn't mean malware won't slip past them on occasion, but if you compare the amount of malware that slips past F-Droid and projects with similar policies, like Debian, to other app stores like Google's, Apple's, and Microsoft's, there is no comparison. Some malware slips past Debian's defences once every few years. I would not be surprised if new malware were uploaded to the Google app store every few minutes. The others aren't much better.

The net outcome of all that is that open source distribution platforms like F-Droid and Debian, which have procedures in place like tight acceptance policies and reproducible builds, are by a huge margin the most reliable and trustworthy on the planet right now. That isn't to say they are perfect, but rather that if Google's goal is to keep its users safe, it should be doing everything in its power to protect and promote F-Droid.

> How do I know they aren’t infiltrated by TLAs? (Three Letter Agencies), or outright bad-actors.

You don't know for sure, but F-Droid's policies make it possible to detect whether a TLA did something nefarious. The combination of reproducible builds, open source, and open source's tendency to use source code management systems that provide an audit trail showing who changed every line shines a lot of sunlight into the area. That's exactly the sunlight those TLAs you're so paranoid about hate.

This is the one thing that puzzles me about the F-Droid opposition in particular. Google is taking a small step here towards increasing the accountability of app developers. But a single person signing an app is in reality a very small step: there are likely tens if not hundreds of libraries underpinning it, developed by thousands of people. That single developer can't monitor them all, and consequently libraries with malware inserted from upstream repositories like NPM or PyPI regularly slip through. The transparency the open source movement enforces is far greater. You can't even modify the amount of whitespace in a line without it being picked up by some version control system that records who changed it, why they changed it, and when. So F-Droid is complaining about a small increase in enforced transparency from Google while demanding far, far more from its own contributors.

I get that Google's change probably creates some paper-cuts for F-Droid, but I doubt it's something that can't be worked around if both sides collaborate. This blog post sounds like Google is moving in that direction. Hear, hear!


> They also follow a set of processes, starting with a long list of criteria for which apps they will accept

How is this an argument in favour of being able to run whatever software you want on hardware you own?


You can run any software you like on Android, if it's open source: you just compile it yourself and sign it with the limited-distribution signature the blog post mentions. Hell, I've never done it, but re-signing any APK with your own signature sounds like it should be feasible. If it is, you can run any APK you want on your own hardware.

Get a grip. Yes, it's possible the world is out to get you. But it's also possible Google is trying to do exactly what it says on the tin: make the world a safer place for people who don't know shit from clay. In this particular case, if they are trying to restrict what a person with a modicum of skillz can do on their own phone, it's a piss poor effort, so I'm inclined to think it's the latter. They aren't even removing the adb app upload hole.


>> In most cases, F-Droid couldn't know either. A developer transferring their accounts and private keys to someone else is not easily detected.

> 1. The Android OS does not allow installing an app update if the new APK is signed with a different key than the installed one. It will outright refuse, and this check runs locally on the device

You missed the "and private keys" part of the original claim.


No I didn't. Finish reading the rest of the comment.

If an app updates to require new permissions, or to suddenly require network access, or the owner contact details change, Google Play should ideally stop that during the update review process and let the users know. But that wouldn't be good for business.

An update can become malicious even without change in permissions.

E.g. my currently perfectly fine QR reader already has access to the camera (obvious), media (to read a QR code from an image file or photo), and network (enhanced security by checking the URL on demand and showing an OG preview, so I can make a more informed choice about opening the URL).

But it could now start sending all my photos off to train an LLM, or secretly take pictures of the inside of my home, or start mining crypto, or whatnot. Without me noticing.


See that's what the intent system was originally designed to prevent.

Your QR reader requires no media permission if it uses the standard file dialogs. Then it can only access files you select, during that session.

Similarly for the camera.

And in fact, it should have no network access whatsoever (and network should be a user controllable permission, as it used to be — the only reason that was removed is that people would block network access to block ads)
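
For reference, here is roughly what that intent-based route looks like with the modern activity-result API. Nothing here needs a storage or media permission in the manifest; decodeQrFrom is a hypothetical app-specific helper:

    import android.net.Uri
    import androidx.activity.ComponentActivity
    import androidx.activity.result.contract.ActivityResultContracts

    class ScanFromFileActivity : ComponentActivity() {
        // The system file picker grants access to just the one file the
        // user selects, for this session only.
        private val pickImage =
            registerForActivityResult(ActivityResultContracts.GetContent()) { uri: Uri? ->
                uri?.let { decodeQrFrom(it) } // hypothetical decoder
            }

        fun onScanFromFileClicked() = pickImage.launch("image/*")

        private fun decodeQrFrom(uri: Uri) { /* app-specific */ }
    }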


> And in fact, it should have no network access whatsoever (and network should be a user controllable permission, as it used to be — the only reason that was removed is that people would block network access to block ads)

Sure, a QR code scanner can work fine without network access. But it could, e.g., use the network to check a scanned URL against the Safe Browsing API, or to pre-fetch the URL and show me a nice OG preview. You are correct that you may neither need nor want this, but I and others may like such features.

The point is not to discuss whether a QR scanner should have network access, but to say that once a permission is there for obvious or correct reasons, it can in the future easily be abused for other purposes, without any change to the permissions.

My mail app needs network access. Nothing prohibits it from abusing this after an update to pull in ads or send telemetry to third parties. My sound recorder app needs microphone permissions. Nothing prohibits it from "secretly" recording my conversations after an update (detectable, since an LED and icon will light up).

If you want to solve "app becoming malicious after an update", permissions aren't the tool. They are a tiny piece of that puzzle, but "better permissions" aren't the solution either. Nor is "better awareness of permissions by users".


> See that's what the intent system was originally designed to prevent.

> Your QR reader requires no media permission if it uses the standard file dialogs. Then it can only access files you select, during that session.

On the one hand, yes, good point, but it runs into the usual problem with strict sandboxing – it works for the simple default use case, but as soon as you want to do more advanced stuff, offer a nicer UI, etc. etc. it breaks down.

E.g. barcode scanners – yes, technically you could send a media capture intent to ask the camera app to capture a single photo without needing the camera permission yourself, but then you run into the problem that maybe the photo isn't suitable enough for successful barcode detection, so you have to ask the user to take another picture, and perhaps another, and another, and…

It's so much nicer to request the camera permission after all and then capture a live image stream, automatically re-running the detection algorithm until a code has been found.
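
For comparison, the permissionless round trip described above looks roughly like this with the activity-result contracts. tryDecodeBarcode is a hypothetical helper, and as noted it may simply fail on a low-quality frame and force the user to capture again (one quirk: if your manifest declares CAMERA without the app holding it, the capture intent is refused):

    import android.graphics.Bitmap
    import androidx.activity.ComponentActivity
    import androidx.activity.result.contract.ActivityResultContracts

    class CaptureOnceActivity : ComponentActivity() {
        // Delegates to the system camera app for a single frame; this app
        // itself never needs the CAMERA permission.
        private val takePicture =
            registerForActivityResult(ActivityResultContracts.TakePicturePreview()) { bmp: Bitmap? ->
                bmp?.let { tryDecodeBarcode(it) } // hypothetical decoder; may fail
            }

        fun onScanClicked() = takePicture.launch(null)

        private fun tryDecodeBarcode(bmp: Bitmap) { /* app-specific */ }
    }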


>...or to suddenly require network access...

That's the most baffling thing to me. There is simply no option to remove network permissions from any app on my Pixel phone.

It's one of the reasons why I avoid using mobile apps whenever I can.


It's weird because GrapheneOS does have this. Networking is a permission on Android, but stock Android doesn't give you the setting.

I believe that permission is currently "leaky": the app can't access the network directly, but it can still use Google Play services to display ads.

I believe that would theoretically allow exfiltration of data but I don't understand all of the details behind this behavior and how far it goes.


Google wants 0 friction for apps to display ads.

So does Apple apparently.

What incentive is there for OEMs to not add this option, though? Does Google refuse to verify their firmware if they offer this feature?

The network permission was displayed in the first versions of Android, then removed. I heard (hearsay alert) at the time that it was because so many apps needed it, and they wanted to get rid of always-yes questions. IIRC this happened before the rise of in-app advertising.

If people always answer yes, they grow tired and eventually don't notice the question. I've seen it happen with "do you want to overwrite the previous version of the document you're editing, which you saved two minutes ago?" At that point your question is just poisoning the well. Makes sense, but still, hearsay alert.


As far as I'm concerned they can grant this permission by default. I just want the power to disable it.

A while ago I wanted to scan the NFC chip in my passport. Obviously, I didn't want this information to leave my device.

There are many small utility apps and games that have no reason to require network access. So "need" is not quite the right word here. They _want_ network access and they _want_ to be able to bully users into granting it.

That's a weird justification for granting it by default. But I wouldn't care if I could disable it.


Android doesn't grant this by default, strictly speaking. Rather, an application can enable it by listing it in the application manifest. Most permissions require a question to the user.

Did you find a suitable app? I don't really remember, but https://play.google.com/store/apps/details?id=com.nxp.taginf... might suit you.


I did find one but it was years ago so I don't remember.

It could have been easily solved by granting it by default, but I doubt that was the original intent.

Well, the original intent was to ask the user for permission at installation time, which turned out to be a poor idea after a while. Perhaps you mean that it would have been simple to change the API in some particular way, while retaining compatibility with existing apps? If I remember the timeline correctly, which is far from certain, this happened around the same time Android passed 100k apps, so there was a fairly strong compatibility requirement.

I mean, just make it "Granted" by default and give the user the ability to control it. The permissions API has already been broken a few times (e.g. location for Bluetooth, and the granular file permissions).

> Does Google refuse to verify their firmware if they offer this feature?

If a manufacturer doesn't follow the Android CDD (https://source.android.com/docs/compatibility/cdd), Google will not allow them to bundle Google's closed source apps (which include the Google Play store). It was originally a measure to prevent fragmentation. I don't know whether this particular detail (not exposing this particular permission) is part of the CDD.


It's not explicitly part of the CDD, but implicitly it is: the device must support the Android permissions model, and is only allowed to extend the implementation with its OWN permissions (in a different namespace than 'android'), not to deviate from it.

INTERNET is a "normal permission", automatically granted at install time if declared in the manifest. OEMs cannot change the grant behavior without breaking compatibility because:

The CDD explicitly states that the Android security model must remain intact. Any deviation would fail CTS (Compatibility Test Suite) and prevent Play certification.
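
You can see that install-time behavior from code. Assuming INTERNET is declared in the manifest, this check (API 23+) always reports granted; no prompt is ever shown, and stock Android offers no setting to revoke it:

    import android.Manifest
    import android.content.Context
    import android.content.pm.PackageManager

    // For a "normal" permission like INTERNET, declaring it in the manifest
    // is enough: this returns PERMISSION_GRANTED without any user dialog.
    fun hasInternetPermission(context: Context): Boolean =
        context.checkSelfPermission(Manifest.permission.INTERNET) ==
            PackageManager.PERMISSION_GRANTED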


Well, apart from the OEM violating the Android Compatibility Definition Document (CDD), failing the Compatibility Test Suite (CTS), and thus not getting their device Play-certified (so not being able to preload all the Google services), there is an economic impact as well:

As an OEM, you want carriers to sell your device above everything else, because they are able to move large volumes.

Carriers make money on network traffic, and Google pays revenue share for ads to carriers (and to OEMs of a certain size). Carriers measure this as part of the average revenue per user (ARPU).

--> The device would be designed to create less ARPU for the carrier and Google, and would thus be less attractive to the entire ecosystem.


It is solvable from user space.

E.g. TrackerControl (https://github.com/TrackerControl/tracker-control-android) can do it: it is a local VPN that sees which application is making a request and blocks it.

You can write your own version of it if you don't trust them.
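
For the curious, the core of that approach is Android's VpnService. A very rough sketch with a made-up package name; a real implementation must actually read packets from the tunnel and decide what to forward, and needs the usual manifest declaration plus user consent via VpnService.prepare():

    import android.net.VpnService

    class BlockerVpnService : VpnService() {
        fun startBlocking() {
            val tun = Builder()
                .addAddress("10.0.0.2", 32)
                .addRoute("0.0.0.0", 0)                      // capture all IPv4 traffic
                .addAllowedApplication("com.example.chatty") // route only this app here
                .setSession("blocker")
                .establish()
            // Reading from `tun` and never forwarding the packets turns
            // this into a per-app kill switch; TrackerControl instead
            // inspects each flow and blocks selectively.
        }
    }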


I've been using a similar VPN solution. It works great for apps that absolutely should not be connected, like my keyboard. But it has an obvious downside: you can't use a real VPN on your phone while you're using it.

Some apps would use this via loopback addresses, which as far as I know will then still need the network permission. The problem here is the permission system itself, because ironically Google Play is full of malicious software.

And neither Android nor iOS is safer than modern desktop systems. On the contrary, because leaking data is its own security issue.


Wasn't the loopback address recently used maliciously?

Yes. Facebook/Meta was using a locally hosted proxy to smuggle info back without using routes that are increasingly obstructed by things like ad blockers, if I recall correctly.

https://securityonline.info/androids-secret-tracking-meta-ya...

Search string for DDG: Meta proxy localhost data exfiltration


This is a huge problem in the Chrome Web Store, and Google is doing very little about it. If you ever made an extension that is even just a little popular, expect to get acquisition offers from people who want to add malicious features, ranging from click fraud to residential IP services or even password stealers.

Same for Play Store. I have 2 games and I keep getting offers all the time. The last one offered $2000 for the developer account or a $100 monthly rent.

From their email pitch:

> We’re now offering from $500 to $2000 for a one-time purchase of a developer account that includes apps, or a rental deal starting from $100.

> No hidden conditions — quick process, secure agreement, and immediate payment upon verification.

> We’re simply looking for reliable accounts to publish our client apps quickly, and yours could be a perfect match.


Indeed, an update can't be more malicious than the permissions allow it to be. You have a calculator app with limited permissions; it is "safe" to allow the developer to update it automatically. No danger in that.

But I don't think that is enough, or that it is the right model. In other cases, when the app already has dangerous permissions, auto-update should be a no-go.


> Indeed, an update can't be more malicious than the permissions allow it to be.

...in the absence of sandbox escape bugs.


> F-Droid couldn't know either

F-Droid is not just a repository and an organization providing the relevant services, but a community of like-minded *users* that report on and talk about such issues.


> which is widely promoted as being good security practice

Maybe that's the mistake right there?

It is a good practice only as long as you can trust the remote source for apps. Illustration: it is a good security practice for a Debian distro, not so much for a closed source phone app store.


OPEN SOURCE EVERYTHING is the premier solution... again.

By using the distributor model, where a trusted 3rd party builds & distributes the apps. Like every Linux distro does, or like what F-Droid does.

The point here is that app developers have to identify themselves. Google has no intention to verify the content of sideloaded apps, just that it is signed by a real person, for accountability.

They don't know if the person who signed the app is the developer, but should the app happen to be a scam and there is a police investigation, that is the person who will have to answer questions, like "who did you transfer these private keys to?".

This, according to Google and possibly regulators in countries where this will be implemented, will help combat a certain type of scam.

It shouldn't be a problem for YouTube Vanced, at least in the proposed form: the authors, who are already identified, just need to sign their APK. AFAIK, what they are doing is not illegal, or they would have been shut down long ago. It may be a problem for others though, and particularly for F-Droid: because F-Droid recompiles apps, they can't reasonably be signed by the original author.

The F-Droid situation can resolve itself if F-Droid is allowed to sign the apps it publishes. In fact, doing so is an improvement in security, as it guarantees that the APK you got is indeed the one F-Droid compiled from publicly available source code.


APKs are already signed. Now Google requires that they be signed by a key that is verified by Google's own signatures, which means they can selectively refuse to verify whichever keys are inconvenient to them.

> Google has no intention to verify the content of sideloaded apps, just that it is signed by a real person, for accountability.

for now


I still believe that signing binaries this way is always bullshit.

I stopped developing for mobile systems ages ago because it just isn't fun anymore and the devices are vastly more useless. As a user, I don't use apps anymore either.

But you can bet I won't ever id myself to Google as a dev.


> I don't really see how you can both allow developers to update their apps automatically (which is widely promoted as being good security practice) and also defend against good developers turning bad.

These are not compatible, but only because the first half is simply false. Allowing a developer to send updates is not "good" but "bad" security practice.


That's true in theory. But what you can see in practice is that Google does very little to protect its users, while F-Droid at least tries.

Which shows that the whole 'security' rigmarole from Google is bullshit.


This is a big problem with Chrome extensions and Google hasn't done anything about it there, so I don't think they actually care about it. I'm not actually sure how you would solve that problem even theoretically.

In many cases developer e-mail address changes, IP address changes, billing address changes, tax ID changes...

This exactly. Transferring ownership is a business transaction. Track that. If the new owner is trying to hide it, this is fraud, and should be dealt with in court.

To be fair, on Google Play you have the option to transfer the app to someone else's account. People don't need to trade accounts...

That doesn’t help mitigate the class of attack you responded to.

Quite simple: Actual human review that works with the developers.

But this costs money, and the lack of it is proof that Google doesn't really care about user security. They're just lying.



