User asks (human) Assistant to login to their online banking and make a transfer. No problem. No digital security system can stop this (bar requiring true biometrics on every sign-in, which isn’t happening soon).
User asks Company (with human staff) to login and do the same thing. Perhaps the company is an accounting firm, a legal firm, or a “manage my company for me” kind of firm. No problem.
User asks Company which makes self-hosted business management tools to login to their online banking. Oh shit!!! This is a violation of the ToS! The Company that makes this tool is violating the bank's rights! The user doesn't understand how they're letting themselves get hacked!! Block block block! (Also some banks realise that they can charge a fee for such access!)
Everyone on HN sees how that last case — the most useful given how great automation is these days — should be permitted.
I wish the governing layers of society could also see how useful such automation is.
These Device-Bound Session Credentials could result in the death of many good automation solutions.
The last hope is TPM emulation, but I’m sure that TPM attestation will become a part of this spec, and attestation prevents useful emulation. In this future, Microsoft and others will be able to charge the banks a great deal of money to help “protect their customers” via TPM attestation licensing fees, involving rotation, distribution, and verification of keys.
I'm guessing the protocol will somehow prevent one TPM being used for too many different user accounts with one entity (bank), preventing cloud-TPM-as-a-service being a solution to this. If you have 5,000 users that want to let your app connect to their Bobby's Bank online banking, then you'll need 5,000 different TPMs. Also Microsoft (or whoever) could detect and blacklist "shared" TPMs to kill TPMaaS entirely.
Robotic Process Automation on the user’s desktop, perhaps in a hidden Puppeteer browser, could still work. But that’s obviously a great deal harder to implement than just “install this Chrome extension and press this button to give me your cookies.”
There's nothing in this spec that says there needs to be a restriction of one session per TPM. There isn't even anything that forces the client to use a TPM. It just requires the client to generate a key pair, and then use that to sign challenge responses. There's no way for the server to know which TPM was used to store that private key, nor whether one was even used.
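For illustration, here's a minimal TypeScript sketch of that challenge-response flow using Node's crypto module (this is my own illustration, not the spec's wire format):

    import { generateKeyPairSync, sign, verify, randomBytes } from 'node:crypto';

    // Client: generate a key pair. Nothing on the wire reveals where the
    // private key lives -- software, TPM, or HSM all look the same here.
    const { publicKey, privateKey } = generateKeyPairSync('ec', {
      namedCurve: 'prime256v1',
    });

    // Server: issue a random challenge for this session.
    const challenge = randomBytes(32);

    // Client: prove possession of the private key by signing the challenge.
    const signature = sign('sha256', challenge, privateKey);

    // Server: all it can check is that the signature matches the public key
    // registered at session start. Which TPM (if any) held the key is opaque.
    console.log(verify('sha256', challenge, publicKey, signature)); // true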
> There's no way for the server to know which TPM was used to store that private key, nor whether one was even used.
I was all ready to disagree with you but apparently you're correct. Color me surprised.
> DBSC will also not prevent an attack if the attacker is replacing or injecting into the user agent at the time of session registration as the attacker can bind the session either to keys that are not TPM bound, or to a TPM that the attacker controls permanently.
This is a very pleasant surprise. I've grown accustomed to modern auth protocols (and other tech stacks as well) having DRM functionality baked into them where they can attest the vendor of the device or the software stack being used to perform the auth. It's become bad enough that at this point I just reflexively assume that any new web technology is hostile to user autonomy.
As long as banks are held accountable or generally blamed for people handing over their savings to foreign scammers, any kind of external access will be considered a threat. Every single time people get scammed by fake apps or fake websites or fake calls, a large section of society goes "the bank should've prevented this!!!".
Here, one particular bank is popular because of their pro-crypto stance, their high interest rates, and their app-only approach. That makes them an extremely easy target for phishing and scamming, and everyone blames the bank for the old men pressing the "yes I want to log in with a QR code" button when a stranger calls them. Of course, banks could stop scams like that, so the calls to maybe delay transferring tens of thousands for human review aren't exactly baseless, but this is how you get the situation where businesses struggle to integrate with banking apps.
There are initiatives such as PSD2, but those are not exactly friendly to the "move fast and break things" companies that you'll find on HN (because moving fast and breaking things is not a good idea when you're talking about managing people's life savings).
The TPM is used here because it's the most secure way to store a keypair like this. But, as the spec says:
> DBSC will not prevent temporary access to the browser session while the attacker is resident on the user’s device. The private key should be stored as safely as modern operating systems allow, preventing exfiltration of the session private key, but the signing capability will likely still be available for any program running as the user on the user’s device.
In other words, if a more secure alternative than TPMs comes into play, browsers should migrate. If no TPM is available, something like a credential service would also suffice.
As for TPM emulation: it already exists. Of course, TPMs also contain a unique, signed certificate from the TPM manufacturer that can be validated, so it's possible for TPM-based protocols to deny emulated TPMs. The Passkey API supports mechanisms like that, which makes Passkeys a nice way to validate that someone is a human during signup, though the API docs tell you not to do that.
We're once again one step closer to losing whatever little autonomy we have left when interacting with online services. Why the hell did we have to put TPMs in every computer?? They bring essentially no benefit for the vast majority of users, but companies keep finding new ways to use TPM capabilities to the user's detriment.
Don't you think that if banks and email providers supported this, it would be a significant security benefit to most users?
I don't think this will be a worthwhile security benefit for most sites, and it comes with trade-offs, but we already accept trade-offs for higher security around sensitive things like banking and email, where most users need a lot of protection.
>higher security around sensitive things like banking and email
There are no guard rails built in to make sure this isn't used by everyone and their dog as long as it makes site automation just a bit more difficult. Also kiss goodbye to browsing the internet without a government/bigcorp™ approved TPM.
If you check the spec [0] you will see that unlike most new web tech this one doesn't provide any DRM-adjacent functionality. There's absolutely no technical measure in the spec that could be leveraged to force an implementation to use a TPM. If user choice is disrespected that is squarely on the implementation (i.e. the browser) and has nothing to do with either the protocol or the server.
Honestly this fairly simple scheme looks a lot like what I wish webauthn could have been.
> makes it more or less useless as a security measure
Define "security". This is incredibly useful for mitigating bearer token exfiltration which is the stated purpose. It's also the same way ssh keypairs work and those are clearly much more secure than passwords.
It's only "insecure" from the perspective of a service host who wants to exert control over end users.
Even webauthn leaves attestation as an optional thing. Even in the case that the service operator requires it, so long as they don't engage in vendor whitelisting you can create a snakeoil authority on the fly.
The main advantage this has over webauthn is that it is so much simpler.
What is the scenario that this is supposed to help with? Allowing you to browse malicious sites slightly more securely? Slightly less chance that running malware will steal your money?
To what end? It's still very likely that malware on your machine will find some other way to exfiltrate this data / impersonate you while using your machine and exfiltrate the important parts (e.g. your money).
I don't understand the benefit of all this complexity vs simply having the device store the cookie jar securely (with help from the TPM or secure enclave if required).
That would have the benefit that every web service automatically gets added security.
One implementation might be:
* Have a secure enclave/trustzone worker store the cookie jar. The OS and browser would never see cookies.
* When the browser wants to make an HTTPS request containing a cookie, the browser sends "GET / HTTP/1.0 Cookie: <placeholder>" to the secure enclave.
* The secure enclave replaces the placeholder with the cookie, encrypts the HTTPS traffic, and sends it back to the OS to be sent over the network. (A rough sketch of the substitution step follows.)
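A rough TypeScript sketch of just that substitution step (everything here is hypothetical, and the enclave's actual TLS handling is elided):

    // Hypothetical enclave-side logic: the untrusted browser only ever sends
    // a placeholder; the real cookie value never leaves the enclave.
    const cookieJar = new Map<string, string>(); // domain -> cookie, enclave-only

    function fillCookie(rawRequest: string, domain: string): string {
      const cookie = cookieJar.get(domain) ?? '';
      // In the proposed design the enclave would next encrypt this buffer as
      // TLS application data before handing the ciphertext back to the OS.
      return rawRequest.replace('Cookie: <placeholder>', `Cookie: ${cookie}`);
    }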
If DNS is wrong, the server behind that DNS entry can get a domain-validated certificate.
What I am imagining here is that you set a cookie with Domain set, and not __Host, possibly because you need the cookie to be accessible on multiple domains, and then someone sets up a CNAME that points to a third party hosting service without thinking about the fact that that would leak the cookie.
You could have similar secure handling of cookies on your server.
For example, the server could verify the cookie and replace it with some marker like 'verified cookie of user ID=123', and then the whole application software doesn't have access to the actual cookie contents.
This replacement could be at any level - maybe in the web server, maybe in a trusted frontend load balancer (which holds the TLS keys), etc.
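As a minimal TypeScript sketch of that idea (the cookie format and key are made up for illustration):

    import { createHmac, timingSafeEqual } from 'node:crypto';

    // Hypothetical trusted-frontend step: verify the cookie once, then hand
    // the application only an opaque marker, never the raw cookie value.
    const SERVER_KEY = Buffer.from('0123456789abcdef0123456789abcdef', 'hex');

    function toVerifiedMarker(cookie: string): string | null {
      const [userId, mac] = cookie.split('.');
      if (!userId || !mac) return null;
      const expected = createHmac('sha256', SERVER_KEY).update(userId).digest();
      const given = Buffer.from(mac, 'hex');
      if (given.length !== expected.length || !timingSafeEqual(given, expected)) {
        return null; // reject before the application ever sees anything
      }
      return `verified cookie of user ID=${userId}`; // as described above
    }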
So every TLS connection, both the handshake and all the subsequent bytes, needs to be routed through the TPM? That sounds like it'll be slow.
Additionally, the TPM will now need to have its own store of root CAs. Will the TPM manufacturer update the root store? Users won't be able to install a custom root CA. That's going to be a problem, because custom root CAs are needed for a variety of different purposes.
When a user gets an HTTPS certificate error, now it'll be impossible for the user to bypass it.
I work at Google and regularly bypass HTTPS certificate errors as part of my job for the purpose of developing servers. Either by clicking through the error, or with the "thisisunsafe" codeword, or with --ignore-certificate-errors . I pretty much only do this in incognito windows or alternate Chrome profiles, to avoid risk of leaking valid credentials.
Yeah I was being a bit snarky. I didn't mean to imply that mainstream browsers are anywhere near phasing that ability out (at least yet). However consider Firefox policies regarding extension signing (specifically code review), or major mobile platform policies regarding user access to app data. Or a certain Google policy regarding ad blockers, err sorry I mean protecting user data from malicious extensions. I think there's a pretty clear theme that the end user is to be regarded as an adversary and his behavior controlled.
Yeah, not really sure that’s simpler or even addresses the same attack vector Google’s option does.
First of all, this approach has the nice property that we now need new TPMs capable of doing that, and even if people could update them, we would need to wait for everybody to update their TPMs. So let's wait another 10 to 15 years before we're really sure.
Second, the attack vector Google's approach is trying to protect against assumes someone stole your cookies. Might as well assume that someone has gained root on your machine. Can you protect against that? Google's approach does regardless of how "owned" your machine is; yours doesn't.
It's not like you're gonna hand off the TLS stream to the TPM to write a bit into it, then hand it back to the OS to continue. The TPM can't write to a Linux TCP socket. Whatever value the TPM is returning can be captured and replayed indefinitely, or for the max length of the session.
So you’re back where you started and you need to have a “keep alive” mechanism with the server about these sessions.
Google's approach is simpler. A private key you refresh your ownership of every X minutes. Even if I'm root on your machine, whatever I steal from it has a short expiration time. It cuts down the unnecessary step of having the TPM hold the cookie too. Plus it doesn't introduce any limitations on the cookie size.
You're still sending the key over the wire as your credential. That's bad design plain and simple. If you want symmetric crypto there's preshared keys where the keys never go over the wire. If you need more than a single point-to-point between two parties then there's asymmetric cryptography.
Ironically the design you propose, juggling headers over to a secure enclave and having the secure enclave form the TLS tunnel, is significantly more complex than just using an asymmetric keypair in a portable manner. That's been standard practice for SSH for I don't even know how long now - at least 2 decades.
Oh also there's a glaring issue with your proposed implementation. The attacker simply initiates the request using their own certificate, intercepts the "secure" encrypted result, and decrypts that. You could attempt mitigations by (for example) having the secure enclave resolve DNS but at that point you're basically implementing an entire shadow networking stack on the secure enclave and the exercise is starting to look fairly ridiculous.
Using a public key mechanism means you can have a system where there is literally no interface that allows you to extract the sensitive parts. A very secure cookie jar still requires you to take the secrets out of it to use them.
I don't know why you threw the baby out with the bathwater. The problem is that you want the cookies to be short lived and device bound, because someone might intercept your JSESSIONID, for example, or, if they can't read it, inject their own JSESSIONID through cross-origin requests somehow.
Binding a session cookie to a device is pretty simple though. You just send a nonce header + the cookie signed with the nonce using a private key. What the Chrome team is getting wrong here is that there is no need for these silly short lived cookies that need to be refreshed periodically.
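Something like this minimal TypeScript sketch (nonce transport and key registration are elided):

    import { generateKeyPairSync, sign, verify } from 'node:crypto';

    // Device-bound key pair, registered with the server at login.
    const { publicKey, privateKey } = generateKeyPairSync('ec', {
      namedCurve: 'prime256v1',
    });

    // Client: sign the server-issued nonce together with the session cookie.
    const nonce = 'fresh-server-issued-nonce';
    const cookie = 'JSESSIONID=abc123';
    const proof = sign('sha256', Buffer.from(nonce + cookie), privateKey);

    // Server: a stolen cookie is useless without a valid signature from the
    // key registered for this device, and a fresh nonce prevents replay.
    const ok = verify('sha256', Buffer.from(nonce + cookie), publicKey, proof);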
What's the use-case for restoring session authentication state with an external service as part of that? You have the creds, and the session will expire in somewhere between 10 mins and maybe a week from the backup (for sites that need this security). I doubt you'll be restoring within the session timeout of most online banking.
I get the benefits of restoring a full backup, but in this instance it would seem to lose practical security benefits for theoretical purity.
If I remove my drive from a dead computer and put it in a spare one, it should boot up in the same state, including cookies in the browser. With a desktop computer and SSDs that could easily happen within the banking timeout. With Linux it is trivial to do as well.
Wait, so your use case here is that you login to online banking and while you are paying your bills or whatever your computer dies, you pull the drive from the computer that just died and put it into the new computer, boot it back up all within 10 minutes, and then expect to still be logged-in? That seems exceptionally unusual, and logging into one account seems a small inconvenience compared to replacing your entire computer. tbh I'd be amazed if it even works now. Does Linux restore the complete memory state of a dead computer when you install the drive in a new machine?
Bank logins use session cookies that are cleared when the browser closes. Unless RAM is preserved you'll need to re-open your browser, so they'll be lost.
Personally I have more use for protection against session theft than I do for moving a drive to another computer and continuing to use the same online banking session within 10 minutes. I suspect most people are in the same category.
Look at the situation with mobile phones. Half my apps on Android are impossible to back up and restore on another phone.
If my phone is damaged, "logging in" again to my bank's mandatory app means I need to fly half way across the world and visit a branch in person with my new phone.
I don't want anything like that happening to desktop devices, regardless of how small the initial steps in that direction are.
The entire point of this scheme is that sessions would no longer need to be expired so aggressively. The bearer tokens remain short lived but the asymmetric key model means leaking the underlying session credential is much more difficult.
Are there really many web services where an attacker having long-lived access gives them much more power than short lived access?
If someone gets short lived access to a control panel for something, there are normally ways to twiddle settings to, for example, create more user accounts, or slacken permissions.
If someone gets short lived access to a datastore, they can download all the data.
This is a very good point, and one the DBSC team thinks about a lot.
In the short term it's about economics: Infostealer malware today scales really well because it can a) exfiltrate cookies quickly and clean itself up, mostly evading any client based detection, and b) sit on large stashes of long-lived cookies and carefully "cash them in" in ways that evade server side detections.
A short-lived cookie forces different behavior for b, which we think will make it more detectable server side, and binding in general will force malware to act more locally, which will make it (far) more detectable locally.
In the long term, DBSC also is designed so that the session management and key registration is somewhat decoupled from that short-term cookie business. If and when we can sign more often (perhaps every request), I believe the DBSC API will still be useful for websites to manage the session key and lifetime.
Not more power in the sense of greater access but nonetheless gaining persistence is a huge advantage for an attacker.
In the case of bearer tokens there are many cases where attackers have managed to steal them without achieving full device compromise. Since it's literally sending the key in plaintext (horribly insecure) all it takes is tricking the client software into sending the header to the wrong place a single time.
One way you could potentially combat that is to make it so that a single short lived token isn't enough to accomplish more dangerous tasks like that.
Many sites already have some protections against that by for example requiring you to enter your password and/or 2fa code to disable 2fa, change privacy settings, update an email address, etc.
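A toy TypeScript sketch of such a step-up policy (the action names and time window are invented):

    // Hypothetical step-up policy: a live session token alone is not enough
    // for dangerous operations; those demand a recent strong re-auth.
    type Session = { userId: string; lastStrongAuthMs: number };

    const SENSITIVE = new Set(['disable-2fa', 'change-email', 'add-payee']);
    const STEP_UP_WINDOW_MS = 5 * 60 * 1000; // re-auth must be under 5 min old

    function requiresStepUp(action: string, session: Session): boolean {
      if (!SENSITIVE.has(action)) return false;
      return Date.now() - session.lastStrongAuthMs > STEP_UP_WINDOW_MS;
    }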
> Leverages TPM-backed secure storage when available
Step 2: TPM required, and your cookies are no longer yours.
I actually like the idea as long as you hold the keys. Unfortunately, the chasm to cross is so small that I can't see this ending in a way beneficial for users.
The opsec reason I use Safari as a work browser today is that Safari has a much more blunt tool to disrupt cookie stealers: Safari and macOS do not permit (silent) access to Safari's local storage to user level processes. If malware attempts to access Safari, its access is either denied or the user gets presented a popup to grant access.
I wish other browsers implemented this kind of self protection, but I suppose that is difficult to do for third party browsers. This seems like a great improvement as well, but it seems this is quite overengineered to work around security limitations of desktop operating systems.
Seems like a very weak mitigation, if this is to protect against malware running in your user session, alongside your browser. Can't it already do all kinds of nefarious keylogging/screen recording/network tracing/config file editing enabling impersonation and so on?
I mean, if my threat model starts with "I have a mal/spyware running alongside my browser with access to all my local files", I would pretty much call it game over.
> I mean, if my threat model starts with "I have a mal/spyware running alongside my browser with access to all my local files", I would pretty much call it game over.
This is a big problem I have with desktop security - people just give up when faced with something so trivial as user privileged malware. I consider it a huge flaw in desktop security that user privilege malware can get away with so many things.
macOS is really the only desktop OS that doesn't just give up when faced with same user privileged malware (in good and bad ways). So there it's likely a good mitigation - macOS also doesn't permit same user privileged processes to silently key log, screen record, network trace and various other things that are possible on Windows and common Linux configurations.
Yeah, I'm siding with the sceptics on this one. Adding more layers of indirection against malware running under a user session seems like a good idea in general, but in practice, you showed how ineffective the macOS approach is: under this model, every application is left to defend itself in an ad-hoc and specific manner. That doesn't generalise well: you can't expect every software, tool, widget, … vendor to be held to the same level of security as Apple.
Another approach is to police everything behind rules (the way selinux or others do), which is even better in theory. In practice, you waste a ton of time bending those policies to your specific needs. A typical user won't take that.
Then there is the flatpak+portal isolation model, which is probably the most pragmatic, but not without its own compromises and limitations.
The attitude of trusting by default, and chrooting/jailing in case of doubt, probably still has decades to live.
> under this model, every application is left to defend itself in an ad-hoc and specific manner.
This description of the macOS model doesn't really apply so I'm not sure if I'm misunderstanding you or you're misunderstanding the model.
> Another approach is to police everything behind rules (the way selinux or others do), which is even better in theory. In practice, you waste a ton of time bending those policies to your specific needs. A typical user won't take that.
While SELinux could probably provide this kind of data protection on Linux, the method of technical enforcement is only one part. There's a lot of UI involved to get right, and that will require far more effort.
> Then there is the flatpak+portal isolation model, which is probably the most pragmatic, but not without its own compromises and limitations.
That model doesn't really apply here. Flatpak et al allow applications to self-confine in order to protect the other things the user is doing. What I'm talking about is for an app to have some protections of its own data from the other things the user is doing. I'm not talking about sandboxing, but data protection.
>> under this model, every application is left to defend itself in an ad-hoc and specific manner.
> This description of the macOS model doesn't really apply so I'm not sure if I'm misunderstanding you or you're misunderstanding the model.
I admit I might be misunderstanding, since, again, I don't use macOS. But from your description:
>>> Safari and macOS do not permit (silent) access to Safari's local storage to user level processes. If malware attempts to access Safari, its access is either denied or the user gets presented a popup to grant access.
it sounds like Safari detects that a foreign application is trying to read its data, warns the user, and lets them call the shots. I don't see how that isn't very specific to Safari and to one specific type of mitigation. Unless the same prompt shows up for every program trying to access every other one's configuration? Then I suppose we hit the usability nightmare I'm on about, with utilities like ncdu, borg and others just unable to do their job.
> While SELinux could probably provide this kind of data protection on Linux, the method of technical enforcement is only one part. There's a lot of UI involved to get right, and that will require far more effort.
My experience with SELinux was not that of a problematic UI or ecosystem of utilities around it, but more one of incurred fatigue working against rules: once you've hit your tenth AVC denial trying to get something to run, you might as well want to disable SELinux altogether. Or maybe that's what you call UI? Either way, I don't think there is a viable "fix" for it.
>> Then there is the flatpak+portal isolation model
> That model doesn't really apply here.
I mean, I was merely stating facts about what exists out there. Anyhow
> What I'm talking about is for an app to have some protections of its own data
This isolates applications and their data from one another; in that respect they are comparable.
On macOS, basically all of these are extra permissions that you have to grant to an application - you'll get prompted with a popup when they try to do it.
e.g. local network access, access to the Documents and Desktop folders, screen recording, microphone access, accessibility access (for keylogging), and full disk access all require you to grant permission.
> Even if session cookies are stolen, they cannot be used from another device.
This seems false? Given the description in the article, the short lived cookie could be used from another device during its lifetime. Having this short lived cookie and having the browser proactively refresh it seems like a bad design to me. The proof of possession should be a handshake at the start of each connection. With HTTP3 you shouldn't need a lot of connections.
Right. The idea is that the short lived cookies would have a very short expiration, so even if you get access to one, it isn't very useful.
> The proof of possession should be a handshake at the start of each connection. With HTTP3 you shouldn't need a lot of connections.
That could possibly be workable in some situations, but it would add a lot of complexity to application layer load balancers, or reverse proxies, since they would somehow need to communicate that proof of possession to the backend for every request. And it makes http/3 or http/2 a requirement.
I think imitating TLS (and who knows how many other protocols) by coupling the asymmetric key with a symmetric one instead of a bearer token is the obvious upgrade security wise. That way you could prove possession of the PSK with every request, keep it short lived, and (unlike bearer tokens) keep it hidden from callers of the API.
That said, the DBSC scheme has the rather large advantage that it can be bolted on to the current bearer token scheme with minimal changes and should largely mitigate the current issues.
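A minimal TypeScript sketch of that PSK idea (the header format is invented, and a real design would need replay protection beyond a timestamp):

    import { createHmac, randomBytes } from 'node:crypto';

    // Hypothetical PSK variant: client and server share a secret established
    // at session start; every request carries an HMAC over its salient parts,
    // so the secret itself never crosses the wire the way a bearer token does.
    const psk = randomBytes(32); // in reality derived once per session

    function proofFor(method: string, path: string, timestampMs: number): string {
      return createHmac('sha256', psk)
        .update(`${method} ${path} ${timestampMs}`)
        .digest('hex');
    }

    // The client would send something like "X-Session-Proof: <ts>.<hmac>"
    // (header name invented); the server recomputes the HMAC and rejects
    // stale timestamps to limit replay.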
I'm sure that the business case for it hasn't gone away, but unless they can side-channel some information out of the TPM, this proposal doesn't appear to give the server the ability to uniquely identify a visitor except through the obvious and intended method. So: maybe, but this appears to be separate.
> Servers cannot correlate different sessions on the same device unless explicitly allowed by the user.
I read it as: the browser can always correlate the public/private key with the website (it knows if there is an authenticated tab/window somewhere).
Why are they making this possible, if you could store the information in a random UUID and just connect it to the cookie? What is the use case where you want to connect a new session instead of using the old one?
It means that it works the same way that first party cookies already work. HN can't see my Google cookies and vice versa. If I clear my cookies Google has no way to know (aside from fingerprinting and maybe IP) that I'm the same person.
> What is the use case where you want to connect a new session instead of using the old one?
Multiple accounts? Clear cookies and visit the next day? Probably other stuff as well. The important point is that DBSC doesn't itself increase the ability of website operators to track you beyond what they can already do.
It doesn't require a TPM though. It just says it CAN use one, if one is available. If it is changed to require a TPM though, then that will be a problem.
I'm curious why the solution here is bearer tokens bound to asymmetric keys instead of a preshared key model. Both solutions require a new browser API. In either case the key is never revealed to the caller and can potentially be bound to the device via a hardware module if the user so chooses.
Asymmetric crypto is more complex and resource intensive but is useful when you have concerns about the remote endpoint impersonating you. However, that's presumably not a concern when the authentication is unique to the (server, client) pair, as it appears to be in this case. This doesn't appear to be an identity scheme, hence my question.
(This is not criticism BTW. I am always happy to see the horribly insecure bearer token model being replaced by pretty much anything else.)
I wonder how long these short lived cookies actually live for? From the article it sounds like Chrome makes a request to the server every time it has to generate a new short lived cookie, so if they do have very short lives (say a few minutes) Chrome could be making a lot of requests to your server to generate new cookies.
Ed: reading a bit more closely it sounds like the request is more of a notification and actually all the real work happens in the user's browser, so you could presumably ignore it and hope the generated bandwidth to your server is pretty low.
It seems like this requires you to have very high availability for the refresh endpoint. If that endpoint is unavailable, the user can end up being effectively logged out, which could lead to a confusing, and frustrating experience for the user.
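For illustration, the refresh endpoint might boil down to something like this TypeScript sketch (names and TTL are invented; the actual draft defines its own headers and flow):

    import { verify, randomBytes } from 'node:crypto';
    import type { KeyObject } from 'node:crypto';

    const COOKIE_TTL_MS = 10 * 60 * 1000; // invented: ten-minute cookies

    // Verify the signed challenge; on success mint a fresh short-lived cookie.
    function refresh(challenge: Buffer, signature: Buffer, sessionKey: KeyObject) {
      if (!verify('sha256', challenge, sessionKey, signature)) {
        return null; // bad signature: force a full re-login
      }
      return {
        value: randomBytes(16).toString('hex'),
        expiresAt: Date.now() + COOKIE_TTL_MS,
      };
    }

If this endpoint is down when a cookie expires, the user is effectively logged out until it recovers, which is the availability concern above.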
WebAuthn protects the sign in, but malware can still steal the resulting cookies. DBSC protects the sign in _session_. (It should stand for Don’t Bother Stealing Cookies.)
If you read the proposal carefully, this API is used to refresh/revalidate an extremely short-lived cookie, not replace the cookie itself. Which you can already do with webauthn.
Webauthn is significantly more complicated and conceptually structured around the use of authenticators. DBSC is a rather simple challenge-response scheme that can be bolted on to things that already exist in order to mitigate bearer token exfiltration. Even though they both use public keys the two things solve (slightly) different problems.
Importantly, the presence of attestation in webauthn could potentially compromise privacy or user choice in certain cases. DBSC has zero support for that.
You could certainly use a webauthn credential to establish a DBSC session though.
Goodbye web freedom, and my software product :(