
I'm really upset that this happened to you folks, and it's scary, because this incident could just as easily have happened to us at Zulip (or any other OSS app that connects to self-hosted servers!).

I expect we'll never get a useful explanation from Google for why this incident happened -- abuse teams, like fraud teams, are worried about the bad guys using the explanations to tune their tactics and so tend to never explain anything.

But the details of how Google screwed up here also don't matter. A sudden Friday night suspension of a popular, legitimate app is insane! That possibility shouldn't be in the flowchart.

I get that for malware/spam/etc., it's important to immediately suspend, but I don't understand why Google doesn't take more seriously the very negative harm caused by doing that to a legitimate app. Some notice and appeal opportunity should be required before suspending a popular app by a legitimate publisher.

I'm upset, and a bit scared, but I can't say I'm surprised. This sort of random/erroneous/arbitrary punishment without explanation happens all the time with Google and other major tech companies. And every app developer I've met has experienced _significant_ disruption to their app publishing efforts due to arbitrary/random rejections by an Apple app store reviewer, and this has been the case for years, so we can be pretty confident that the vendors won't improve unless they are forced to do so.

There needs to be regulatory oversight of the Google/Apple app stores and the negative consequences for everyone else of their error-prone and ruthless enforcement processes.



> There needs to be regulatory oversight of the Google/Apple app stores

The regulatory oversight needs to address a different aspect: Google is a worldwide de facto monopoly for everyone not owning an iPhone. At least for most of the freer world; China is a different story.

Until Google is broken up or fair competition is achieved, content regulation does not help. As a European, I want to care about US regulation as much as about US tax laws: not at all. The US is not the world's regulator. We elect governments in Europe that have no power to do anything in this sector. I don't say Google should be forbidden in Europe; we are not China. But competition and more choice for users need to be guaranteed by effective legislation, in practice legislation against Google and Apple.


> freer world

I guess that's sarcasm? I can't even eat a sandwich outside without fearing that it will be considered a picnic and get me arrested.


This principle should be applied to all sectors. Capitalism is still young, and we didn't have situations in the past where small companies could consolidate indefinitely until they got so big that they have more money than some countries. I think there should be a cap: beyond a certain point, a company would have to be divided, so we never get companies that are too big to fail and able to afford buying laws to suit them.


I’m somewhat certain that the East India Company, the VOC, etc. had wealth surpassing many countries by the time their consolidation activities reached their peak, though. This isn’t a new phenomenon, I don’t think.


Yeah, but those companies were de facto monopolies created by their governments. When they no longer served their purposes, those governments stepped in, defanged them, and took over their operations. Those companies may have been larger in terms of net worth, but their relationship to government was fundamentally different.


This was more like a state-sanctioned oligarchy, whereas this loophole gives anyone a chance to start an EIC if they find a niche, exploit it, and then expand. Beyond a certain point you buy laws so that nobody else can copy your steps. Rinse and repeat. This should be stopped.


> But the details of how Google screwed up here

They didn't "screw up". Or rather, that's not the main problem. The problem is that Google has the power to block the main channel of distribution of a piece of software.

Now, it's true that you can "just" get Element elsewhere, but the effective user lock-in into a single-corporation-controlled download hub ("app store") - that's the problem. And Google has gotten that quite right... for itself.


Legitimate app yes, but was it actually popular? The cached copy of the Element store page says 100k installs, <2k reviews. Compare to e.g. Signal at 50M installs and 1.2M reviews. Or WhatsApp with 5B installs and 130M reviews.


This is in part because we had to replace the app in 2019, and also because it's not the only client for Matrix - many deployments are actually forks of Element (e.g. France's 5.5M user deployment of Tchap).


If you accept that (1) there is a substantial amount of mal-content that Google should censor, and (2) a key use case for federated messaging platforms is to evade censorship, and that (3) client applications can be functionally part of a federated messaging platform while legally separate from it, then those client applications are fair game to be censored when they deliver mal-content.

Now I may disagree with parts of those precepts in stronger or weaker forms, but it is disingenuous to claim that the client application is exactly as legitimate as a web browser just because the client application is legally but not functionally separate from the federated network.


While we're at it, should we ban email apps as well? And probably the Internet itself and go back to "safe" walled gardens like AOL, since there are almost certainly bad people on the net?


We try to rationalize this when it affects apps we don't use. But censorship is a very slippery slope. No single company should have this level of power. Browsers will be next. Solidarity is the only answer to abuses such as these. Schadenfreude always begets irony.


> should we ban email apps as well? And probably the Internet itself and go back to "safe" walled gardens like AOL

Huh? The AOL you're thinking of included an enormous cross-section of the population, with no controls on who could talk to who. If the Internet is unsafe, then so is AOL, because they're the same thing.


Right, that was the point. You can never get rid of "bad actors", and even if possible, that would have a myriad of unintended consequences (such as living under authoritarianism).

The irony in this case is that the speech that is attempted to be censored isn't even illegal. (at least in the U.S., where our liberal, cherished "anything goes" approach is enshrined in the Constitution.)


That's a textbook example of the logical fallacy commonly known as the slippery slope.


No it’s not — it’s pointing out that those arguments fail when looking at existing systems, and would trivially deny things that we know we want. A slippery slope would be that the reasoning lends itself to more extreme reasoning down the line. You don’t need to bother with that here, we’re already arguing from the bottom of the slope.

This simply argues that they’re special-casing against non-established systems — if you applied it uniformly, you’d trivially lose things you obviously want to keep.

That is, this is a stupid operation that is at best sponsored by “think of the children!” Mothers Against Everything foundation.


My assertion was a narrow one: the client application of a network designed to avoid censorship of bad actors is not exactly as legitimate as a web browser that is not designed to avoid censorship of bad actors.

To go from that narrow assertion to "ban email apps and probably the Internet itself" is fallacious reasoning at its finest.

There's no rebuttal by refuting any of the premises or finding logical flaws, just straight to the end of the world as we know it.


> the client application of a network designed to avoid censorship of bad actors

You didn’t argue this, and I’m not clear that matrix or similar technology makes any direct, intended or significant effort to do so beyond the much broader, all-encompassing goal of “let nothing be unavailable”. But if true, I might be more inclined to agree with you.

What you did argue is that

    a key use case for federated messaging platforms is to evade censorship
Which is wholly different, in that the usage is at fault, not the protocol in and of itself (in the same fashion that Bitcoin was not designed to facilitate drug trade, even if it’s a key use case driving its valuation).

But we also know that illegal activity is a key use case of the internet, of email, of encryption and a wide variety of other decentralized and federated technologies. This is hardly a good justification because you’ll ban all sorts of good things.

The only thing that protects your argument against everything else is that you arbitrarily limit it to non-established technologies — in the name of all that is good and wholesome, you would kill anything like the internet, email, etc, that is not itself the existing technologies.

A web browser is only more legitimate because the internet is more broadly used. Which isn’t much of a case for legitimacy.


This event is a textbook example of the slope already being slippery.


By the way, “malcontent” (no hyphen) is an English word that doesn’t mean “bad content” or “malicious content” like you’re intending.


I guess my dictionary is wrong then, since it seems to be referring to a person in both definitions that it gives.


That would presumably include Chrome and https and enabling use through VPNs?

The above are used daily for extremist content, CP, and circumventing numerous national laws in numerous places...

Hard to draw the line.


The line isn't that hard to draw.

If you run a monolithic, centralized service specifically designed to avoid censorship, and you don't moderate what users do on your service, and some of those users hurt people with your service, then you should expect your service will be shut down as Parler was.

And if you do the same thing, but separate the front end from the back end, and have different entities run them to provide legal separation, while practically and functionally the result is identical to that of the monolithic centralized service and your system is used to hurt people, then you should also expect that whatever components can be deplatformed will be deplatformed.


Your litmus test of 'is used to hurt people' is completely true of web browsers and email too.

Emails are always coming up in court cases etc. as people regularly use them to organise or discuss criminal acts, and it is sometimes used with E2E encryption so nobody can intercept and police the contents. I'm not convinced you've drawn a clear line. When is a protocol client responsible for the content shared or accessed with it, and when is a client not responsible for it.


How do you logically reconcile that what you say applies to Web Browsers as well?


I just want to chat with my friends without spooks putting it into a permanent database to later be used out of context against me. A crazy, dangerous idea, I know.

It's amazing to see how far we've fallen - to be at ease with the idea that there can be no such thing as a private conversation, and therefore that any private conversation is by definition illegal.

Any crime should require an actual victim. Which means there is evidence of it happening. You don't need a permanent record of everyone's conversations to uncover such crimes, police just need to do the job which we pay them for, which is to investigate.

CP and terrorism are both disgusting, horrible things, but even those are not worth losing all our basic human freedoms over, or we won't be left with a lifestyle worth defending.


By that logic, web browsers are not functionally separate from the federated Web, so all browsers should be banned until they start blocking objectionable sites.


The future: "it's the Internet Jim, but not as we know it" -Spock


> (2) a key use case for federated messaging platforms is to evade censorship

For some people (although maybe comparatively few), it's primarily about building a more robust Internet that also works if centralized services disappear.


> If you accept that (1) there is a substantial amount of content that Google should censor

Why would we accept this? We don't even accept the situation in which Google is _able_ to censor anything for the general public.


Why is this disingenuous? The web is decentralized in exactly the same way and browsers play exactly the same role of independent clients for accessing this decentralised network as do Matrix clients for the Matrix network.



