So the main application for WebRTC is de-anonymisation of users (for example, getting their local IP address). Why it isn't hidden behind a permission prompt, I don't understand.
The main application for WebRTC is peer-to-peer data transfer.
I think you can make the argument that it should be behind a permission prompt these days, but it's difficult. What would the permission prompt actually say, in easy-to-understand layman's terms? "This website would like to transfer data from your computer to another computer in a way that could potentially identify you"? How many users are going to be able to make an informed choice after reading that?
If users don't understand, they click whatever. If the website really needs it to operate, it will explain why before requesting, just like apps do now.
Always aim for users a little more knowledgeable than you think they are.
And specifically, if you're on something-sensitive.com in a private browsing session, it would give you the choice of giving no optional permissions. That choice is better than no choice at all, especially in a world where Meta can be subpoenaed for this tracking data by actors who may be acting unconstitutionally without sufficient oversight.
That feels pretty useless. You might as well do what happens today: enable it by default and allow knowledgeable power users to disable it. If it's disabled, show a message to the user explaining why it's needed.
It does exist in `about:config`, which could be made a UI setting instead:
`media.peerconnection.enabled`.
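As far as I know, when that pref is false Firefox removes the constructor entirely, so a site that genuinely needs WebRTC can detect this and explain itself. A minimal sketch (the message is a placeholder):

```typescript
// Feature-detect WebRTC and surface a human-readable explanation
// instead of failing silently.
if (typeof RTCPeerConnection === "undefined") {
  console.warn("Video calls need WebRTC, which appears to be disabled in this browser.");
}
```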
On Cromite [1], a hardened Chromium fork, there is such a setting, both in the settings page and when you click the lock icon in the address bar.
I think very few people would argue that cookie consent banners, in the form in which they are the norm, are a good thing the way permission prompts for microphone access are.
Browser functionality needs a hard segmentation into disparate categories like "Pages" and "Apps". For example, Pages that you're merely intending to view don't need WebRTC (or really any network access beyond the originating site, and even that is questionable). And you'd only give something App functionality if it came from a trusted source and the intent was to use it as general software. This would go a long way toward solving the other fingerprinting vulnerabilities, because Pages don't need functionality like Canvas, USB, etc.
It's only "profitable" if people don't bounce at being asked to trust a random news article, or something-embarrassing.com, with their personal information. Same as why native Android apps don't just ask for every single permission. People in general do care about their security; they just lack the tools to effectively protect it.
When enrolling Yubikeys and similar devices, Firefox sometimes warns "This website requires extra information about your security device which might affect your privacy. Do you want to give this information? Refusing might cause the process to fail."
I wouldn't understand that. Is it getting a manufacturer address to block some devices? Does it use a key to encrypt something? Which "security device"? /dev/urandom?
I see that non-technical users can be confused by too much information, but when you omit it, even knowledgeable users can't make an informed decision.
1- You'd be on a page where you're enrolling your YubiKey or WebAuthn device. You'd have your key at hand, or recently plugged in.
2- Your device's LED would be flashing, and you'd be pressing the button on your device.
3- The warning pops up at that moment, asking you that question. This means the website is probably querying something like the serial number of your key, which increases security but reduces your privacy.
With the context at hand, you'd understand that instantly, because where you are and what you're doing completes the picture, and you're in control of every step of the procedure.
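Concretely (my reading of it, not something the dialog spells out), what's being requested is WebAuthn attestation. A placeholder sketch of the kind of call that triggers the warning:

```typescript
// Asking for "direct" attestation is what can expose the key's
// make/model or batch certificate; every value below is a stand-in.
// (Run in an async context.)
const cred = await navigator.credentials.create({
  publicKey: {
    challenge: crypto.getRandomValues(new Uint8Array(32)), // normally supplied by the server
    rp: { name: "Example Site" },
    user: {
      id: crypto.getRandomValues(new Uint8Array(16)),
      name: "user@example.com",
      displayName: "User",
    },
    pubKeyCredParams: [{ type: "public-key", alg: -7 }], // ES256
    attestation: "direct", // <- the "extra information about your security device"
  },
});
```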
Exactly. You need to infer that, it isn't stated directly.
It's the same as having to guess that "Unable to connect" means the connection was refused, while "We can't connect to the server at a" means the DNS request failed. Or does it mean no route to host? Network unreachable?
I would argue that (sometimes) the user would be perfectly able to decide whether to approve something, but can't, because both dialogs state the same wishy-washy message. Even non-technical users might eventually learn the proper terms, but they can't if they are only ever shown meaningless statements.
> Exactly. You need to infer that, it isn't stated directly.
I don't care. The site is doing something unusual. That's evident, and it's enough to make me take a second look and think about it.
> It's the same as having to guess that "Unable to connect" means...
Again, as a layman, I don't care. As a sysadmin, I don't worry, because I can look into it in three seconds flat. Also, "Unable to connect" comes with its reasons in parentheses all the time.
> I don't care. The site is doing something unusual. That's evident, and it's enough to make me take a second look and think about it.
Is it enough to make an informed decision?
> Again, as a layman, I don't care.
You do care whether you mistyped or the network is down. I agree that you probably don't care to distinguish between "network unreachable" and "no route to host", though.
> As a sysadmin
True, but that information was already there and was thrown away.
With my layman hat on, yes, it is. I'll weigh the trade-off between the site's importance in my life, the trustworthiness of the body behind the site, and my privacy.
> You do care whether you mistyped or the network is down.
No, I don't, because it's easy to check for a typo, and then it's easy enough to investigate as a layman: try going to Google, check your (wireless) connection from the taskbar (every OS shows a "!" when the internet is unreachable), and so on...
> but that information was already there and was thrown away.
Sometimes starting with truncated but accurate info allows a much faster start. Precision and accuracy are different things, and accuracy is more important than precision.
Let's be clear here. Meta and other sites are abusing TURN/WebRTC for a purpose it was never intended for, way beyond the comfortable confines of innocent hackery, and we all know it.
That's asshole behavior, and worth naming, shaming, and ostracizing over.
> That's asshole behavior, and worth naming, shaming, and ostracizing over.
These exploits are being developed, distributed and orchestrated by Meta. The "millions of websites" are just hummus-recipe content farms using their ad SDKs, and are downstream of Zuck in every meaningful interpretation of the term.
Meta has been named and shamed for decades. Shame only works in a society where bad actors are punished by the masses of people that constitute Meta’s products. Doesn’t mean we should stop, only that it’s not enough.
More than that, talking about TURN or WebRTC is really missing the issue. If you lock everything down so that no one can do anything you wouldn't want a malicious actor to be able to do, then no one can do anything.
The real issue is, why are we putting up with having these apps on our devices? Why do we have laws that prohibit you from e.g. using a third party app from a trusted party or with published source code in order to access the Facebook service, instead of the untrustworthy official app which is evidently actual malware?
What laws are you referring to, other than Terms of Service, which are entirely artificial constructs whisked into existence by service/platform providers? Which will, admittedly, be as draconian and one-sided as the courts will allow.
Agreed on your first point at a practical level, but from a normative standpoint, it's unforgivable to cross those streams. At the point we're talking about, with a service provider desperately wanting to leak IP info for marketability applications of an underlying dataset, and using technical primitives completely unrelated to the task at hand to do it, you very clearly have the device doing something the end user doesn't want or intend. The problem is that FAANG have turned the concept of general computing on its head by making every bloody handset a playground for the programmer, with no easily grokkable interface for the user to curtail the worst behavior of technically savvy bad actors. A connection to a TURN server, or use of parts of the RTC stack, should explain to the user that they are about to engage programming intended for real-time communication, at the moment it's happening, not just once at the beginning when most users would just accept it and ignore it from then on.
Ten or so TURN-call notifications in a context where synchronous RTC isn't involved should make it obvious that something nefarious is going on, and would actually give the user insight into what is running on the phone. That's something modern devs seem allergic to, because it would force them to confront the sketchiness of what they're implementing instead of being transparent and following the principle of least surprise.
Modern businesses, though, would crumble under such a model, because they want to hide as much of what they are doing as possible from their customer base, competitors, and regulators.
> What laws are you referring to, other than Terms of Service, which are entirely artificial constructs whisked into existence by service/platform providers? Which will, admittedly, be as draconian and one-sided as the courts will allow.
There are two main ones.
The first is the CFAA, which by its terms would turn those ToS violations into a serious felony, if violating the ToS means your access is "unauthorized". Courts have been variously skeptical of that interpretation because of its obvious absurdity, but when it's megacorp vs. small business or open source project, you're often not even getting into court, because the party trying to interoperate immediately folds. Especially when the penalties are that scary. It's also a worthless piece of legislation, because the actual bad things people do after actual unauthorized access are all separately illegal, so the penalty for unauthorized access by itself should be no more than a minor misdemeanor, and then it makes no sense as a federal law because that sort of thing isn't worth a federal prosecutor's time. Which implies we should just get rid of it.
The other one, and this one gets you twice, is DMCA 1201. It's nominally about circumventing DRM, but its actual purpose is that Hollywood wants to monopolize the playback devices, which is exactly the thing we're talking about. Someone wants to make an app where you can watch videos on any streaming service you subscribe to and make recommendations (but the recommendations might be to content on YouTube or another non-Hollywood service), or block ads, etc. The content providers use the law to prevent this by sticking some DRM on the stream to make it illegal for a third-party app to decrypt it. Facebook can do the same thing by claiming that other users' posts are "copyrighted works".
And then the same law is used by the phone platforms to lock users out of competing platforms and app stores. You want to make your competing phone platform and have it run existing Android apps, or use microG instead of Google Play, but now Netflix is broken and so is your bank app, so normal people won't put up with that and the competition is thwarted. Then Facebook goes to the now-monopoly Google Play Store and has "unauthorized" third-party Facebook readers removed.
These things should be illegal the other way around. Adversarial interoperability should be a right and thwarting it should be a crime, i.e. an antitrust violation.
> The problem is that FAANG have turned the concept of general computing on its head by making every bloody handset a playground for the programmer, with no easily grokkable interface for the user to curtail the worst behavior of technically savvy bad actors.
But how do you suppose that happened? Why isn't there a popular Android fork which runs all the same apps but provides a better permissions model or greater visibility into what apps are doing?
> Why isn't there a popular Android fork which runs all the same apps but provides a better permissions model or greater visibility into what apps are doing?
Besides every possible attempt being DOA because Google is intent on monopolizing the space with their ToS and OEM terms? There isn't a fork because it can't be Android if you do that sort of thing, and if you tried, it'd be you vs. Google. Never mind the bloody rat's nest of intentionally one-sided architecture decisions made to ensure the modern smartphone is first and foremost a consumption device instead of a usable and configurable tool, which includes things like regulations around the baseband processor, lawful interception/MITM capability, and meddling, as you mentioned, in the name of DMCA 1201.
Though there's an even more subtle reason why, too: the lack of accessible system developer documentation, the capability to write custom firmware, and architecture documentation. It's all NDA-locked IP, and completely blobbed.
The will is there amongst people to support things, but the legal power edifice has constructed intentional info asymmetries in order to keep the majority of the population under some semblance of controlled behavior through the shaping of the legal landscape and incentive structures.
> The will is there amongst people to support things, but the legal power edifice has constructed intentional info asymmetries in order to keep the majority of the population under some semblance of controlled behavior through the shaping of the legal landscape and incentive structures.
Exactly. We have bad laws and therefore bad outcomes. To get better outcomes we need better laws.
There are already permissions dialogs for using the camera/microphone. I don't think it'd be absurd to implicitly grant WebRTC permissions alongside that.
> The website wants to connect to another computer | another app on your computer.
"website wants to connect to another computer" basically describes all websites. Do you really expect the average user to understand the difference? The exploit is also non-trivial either. SDP and TURN aren't privacy risks in and of themselves. They only pose risks when the server is set to localhost and with a cooperating app.
Pardon my ignorance, but modern browsers won't even load assets or iframes over plain HTTP within an SSL page. So under normal circumstances you cannot open so much as an iframe to "localhost" from an HTTPS URL unless you've configured HTTPS locally, regardless of cross-domain permissions. Wouldn't you want to require a special security permission from an app that was trying to set up a local server, AND require confirmation from a browser that was trying to connect to a local server?
HTTP isn't allowed on secure pages because the security of plain HTTP is known to be non-existent. WebRTC uses datagram TLS (DTLS), which is approximately on par with HTTPS.
The thing that's happening here isn't really a problem with WebRTC. Compare this to having an app on your phone that listens on an arbitrary port and spits out a unique tracking ID to anything that connects. Does it matter if the connection is made using HTTP or HTTPS or WebRTC or something else? Not really. The actual problem is that you installed malware on your phone.
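The analogy sketched as code, with a deliberately trivial (and entirely hypothetical) Node listener; the point is that the transport is irrelevant once this is on the device:

```typescript
// Any process on the device that connects to this port learns the ID.
import { createServer } from "node:net";

const TRACKING_ID = "device-1234"; // stand-in for a real persistent identifier

createServer((socket) => {
  socket.end(TRACKING_ID); // hand the ID to whoever connects, browser included
}).listen(12580, "127.0.0.1"); // arbitrary port; a web page just has to know it
```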
But that says nothing about the danger of identifying you.
> Most users probably will click "No"
Strong disagree. When I'm loading google.com, is my computer not connecting to another computer? From a layman's perspective, this is the basis of the internet doing what it does. Not to mention, the vast majority of users say yes to pretty much any permission prompt you put in front of them.
The existing killer app for WebRTC is video chat without installing an app, which is huge.
Other P2P uses are very cool and interesting as well. Abusing it for fingerprinting is just that: abusing a user-positive feature and twisting it for identification, just like a million other browser features.
The technique doesn't actually rely on WebRTC though, does it? Not showing up in the default view of Chrome's network inspector obfuscates it a bit, but it's not like there aren't other ways to do what they're achieving here.
Because the decision makers don't care about privacy, they only want you to think that you have privacy, thus enabling even more spying.
One solution is to not use the apps and websites from companies that are known to abuse WebRTC or something else.
This is not unique to WebRTC. The same result could be achieved by sending an HTTP request to localhost. The only difference in this case is that using WebRTC doesn't log an HTTP request.
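Something like this, reusing the same hypothetical local port as above. As far as I know, browsers treat 127.0.0.1 as "potentially trustworthy", so it isn't automatically blocked as mixed content; it just shows up as an ordinary request in the inspector:

```typescript
// Plain-HTTP variant of the leak; "no-cors" means the page can't read
// the response, but the request (and its body) still reaches the app.
fetch("http://127.0.0.1:12580/", { method: "POST", mode: "no-cors", body: "device-1234" })
  .catch(() => { /* even a "failed" fetch has already delivered the payload */ });
```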