I don't understand the benefit of all this complexity vs simply having the device store the cookie jar securely (with help from the TPM or secure enclave if required).

That would have the benefit that every web service automatically gets added security.

One implementation might be:

* Have a secure enclave/TrustZone worker store the cookie jar. The OS and browser would never see the cookies.

* When the browser wants to make an HTTPS request containing a cookie, it sends "GET / HTTP/1.0 Cookie: <placeholder>" to the secure enclave.

* The secure enclave replaces the placeholder with the real cookie, encrypts the HTTPS traffic, and hands it back to the OS to be sent over the network.
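A minimal sketch of that flow, assuming a hypothetical enclave interface (no real enclave exposes calls like these out of the box):

    PLACEHOLDER = b"<placeholder>"

    def browser_build_request(path: bytes) -> bytes:
        # The browser never sees the real cookie; it only emits a marker.
        return b"GET " + path + b" HTTP/1.0\r\nCookie: " + PLACEHOLDER + b"\r\n\r\n"

    def enclave_substitute_and_encrypt(request: bytes, session) -> bytes:
        # Inside the enclave: swap in the real cookie, then encrypt with
        # TLS session keys that also never leave the enclave.
        # session.cookie_for_host() and session.tls_encrypt() are made up.
        plaintext = request.replace(PLACEHOLDER, session.cookie_for_host())
        return session.tls_encrypt(plaintext)  # ciphertext goes back to the OS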




The cookie jar isn't the only place the cookie could be leaked from. For example, it could be leaked from:

* Someone inspecting the page with developer tools

* Logs that accidentally (or intentionally) contain the cookie

* A corporate (or government) firewall that intercepts plaintext traffic

* Someone with temporary physical access to the machine who can use the TPM or secure enclave to decrypt the cookie jar

* A mistake in the cookie configuration and/or DNS that leads to the cookie getting sent to the wrong server

Google's proposal would protect against those scenarios.


That last one should largely be solved by

1) TLS

2) making your cookie __Secure- or __Host- prefixed, which then requires the Secure attribute.

If DNS is wrong, it should then point to a server without the proper TLS cert, and your cookie wouldn't get sent.
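For illustration (token values made up), a browser will refuse to store the first of these without the Secure attribute, and will refuse the second if it carries any Domain attribute:

    Set-Cookie: __Secure-id=abc123; Secure; Path=/
    Set-Cookie: __Host-id=abc123; Secure; Path=/

__Host- is the stricter prefix: Secure is required, Domain is forbidden, and Path must be /, so the cookie only ever returns to the exact host that set it.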


If DNS is wrong, that server can get a domain-validated certificate.

What I'm imagining here is that you set a cookie with Domain set (rather than __Host-), possibly because you need the cookie to be accessible on multiple subdomains, and then someone sets up a CNAME that points to a third-party hosting service without thinking about the fact that it would leak the cookie.
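Concretely, something like this (names hypothetical):

    Set-Cookie: session=abc123; Domain=example.com; Secure

    ; later, in the DNS zone:
    static.example.com.  CNAME  pages.thirdparty-host.example.

The Domain attribute makes the cookie valid for every subdomain, and the hosting service behind the CNAME can obtain a domain-validated cert for static.example.com, so Secure alone doesn't stop the leak.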

Sure


Oops, your developer accidentally enabled logging for headers. Now everyone with access to your logs can take over your customers' accounts.


You could have similar secure handling of cookies on your server.

For example, the server could verify the cookie and replace it with a marker like 'verified cookie of user ID=123', so the application code never has access to the actual cookie contents.

This replacement could happen at any level: in the web server, in a trusted frontend load balancer (which holds the TLS keys), etc.
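A rough sketch of that idea as WSGI-style middleware; the session store, header name, and verify_session helper are all stand-ins for whatever the real deployment uses:

    SESSIONS = {"abc123": 123}  # toy session store standing in for the real one

    def verify_session(cookie_header):
        # Toy check: parse "session=<token>" out of the Cookie header.
        for part in cookie_header.split(";"):
            name, _, value = part.strip().partition("=")
            if name == "session":
                return SESSIONS.get(value)
        return None

    def cookie_stripping_middleware(app):
        def wrapped(environ, start_response):
            raw = environ.pop("HTTP_COOKIE", "")  # strip the raw cookie
            user_id = verify_session(raw)
            if user_id is None:
                start_response("401 Unauthorized", [("Content-Type", "text/plain")])
                return [b"invalid session"]
            # The application only ever sees the marker, never the raw cookie.
            environ["HTTP_X_VERIFIED_USER"] = f"verified cookie of user ID={user_id}"
            return app(environ, start_response)
        return wrapped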


So every TLS connection, both the handshake and all the subsequent bytes, needs to be routed through the TPM? That sounds like it'll be slow.

Additionally, the TPM will now need to hold its own root CA store. Will the TPM manufacturer keep that store updated? Users won't be able to install a custom root CA, and that's going to be a problem, because custom root CAs are needed for a variety of purposes.

When a user gets an HTTPS certificate error, now it'll be impossible for the user to bypass it.


> When a user gets an HTTPS certificate error, now it'll be impossible for the user to bypass it.

According to BigTech that's a feature, not a bug.


I work at Google and regularly bypass HTTPS certificate errors as part of my job developing servers, either by clicking through the error, with the "thisisunsafe" codeword, or with --ignore-certificate-errors. I pretty much only do this in incognito windows or alternate Chrome profiles, to avoid the risk of leaking valid credentials.
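For the flag route, something like this keeps any real credentials out of the picture (the profile path is just an example):

    google-chrome --ignore-certificate-errors --user-data-dir=/tmp/throwaway-profile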


Yeah, I was being a bit snarky. I didn't mean to imply that mainstream browsers are anywhere near phasing that ability out (at least yet). But consider Firefox's policies on extension signing (specifically code review), or major mobile platform policies on user access to app data. Or a certain Google policy regarding ad blockers, err, sorry, I mean protecting user data from malicious extensions. There's a pretty clear theme: the end user is to be regarded as an adversary and their behavior controlled.


Yeah, not really sure that’s simpler or even addresses the same attack vector Google’s option does.

First of all, this approach has the nice property that we'd need new TPMs capable of doing all that, and even if people could update them, we'd have to wait for everybody to update their TPMs. So let's wait another 10 to 15 years before we're really sure.

Second, the attack vector Google's approach is trying to protect against assumes someone stole your cookies. Might as well assume someone has gained root on your machine. Can you protect against that? Google's approach does, regardless of how "owned" your machine is; yours doesn't.

It's not like you're going to hand off the TLS stream to the TPM to write a bit into it, then hand it back to the OS to continue; the TPM can't write to a Linux TCP socket. Whatever value the TPM returns can be captured and replayed indefinitely, or for the max length of the session.

So you're back where you started, and you need a "keep alive" mechanism with the server for these sessions.

Google's approach is simpler: a private key whose ownership you refresh every X minutes. Even if I'm root on your machine, whatever I steal from it has a short expiration time. It also cuts out the unnecessary step of having the TPM hold the cookie, and it doesn't introduce any limitations on cookie size.
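A sketch of that refresh loop; the endpoints, payload fields, and the tpm_key/install_cookie helpers are all made up for illustration, and the real protocol differs in detail:

    import time
    import requests  # assumed available; any HTTP client works

    def refresh_loop(tpm_key, session_id, interval=600):
        while True:
            challenge = requests.get(
                "https://example.com/dbsc/challenge",
                params={"session": session_id},
            ).json()["challenge"]
            # Signing happens inside the TPM; the private key never leaves it.
            signature = tpm_key.sign(challenge.encode())
            token = requests.post(
                "https://example.com/dbsc/refresh",
                json={"session": session_id, "sig": signature.hex()},
            ).json()["short_lived_cookie"]
            # Anything an attacker steals here expires within `interval` seconds.
            install_cookie(token)  # hypothetical: hand the token to the cookie jar
            time.sleep(interval)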


Is there a latency problem with current HSM implementations? That sounds like a lot of computation over arbitrary data that is mostly done by the CPU today.


You're still sending the key over the wire as your credential. That's bad design, plain and simple. If you want symmetric crypto, there are pre-shared keys, where the keys never go over the wire. If you need more than a single point-to-point link between two parties, there's asymmetric cryptography.

Ironically, the design you propose, juggling headers over to a secure enclave and having the enclave form the TLS tunnel, is significantly more complex than just using an asymmetric keypair in a portable manner. That's been standard practice for SSH for I don't even know how long now, at least two decades.

Oh, also, there's a glaring issue with your proposed implementation: the attacker simply initiates the request using their own certificate, intercepts the "secure" encrypted result, and decrypts it. You could attempt mitigations, for example by having the secure enclave resolve DNS, but at that point you're basically implementing an entire shadow networking stack on the secure enclave, and the exercise starts to look fairly ridiculous.


Using a public key mechanism means you can have a system where there is literally no interface that allows you to extract the sensitive parts. A very secure cookie jar still requires you to take the secrets out of it to use them.


I don't know why you threw the baby out with the bathwater. The problem is that you want the cookies to be short-lived and device-bound because someone might intercept, say, your JSESSIONID, or, if they can't read it, inject their own JSESSIONID through cross-origin requests somehow.

Binding a session cookie to a device is pretty simple, though. You just send a nonce header plus the cookie signed with the nonce using a private key. What the Chrome team is getting wrong here is that there's no need for these silly short-lived cookies that have to be refreshed periodically.
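Something like this, assuming the server issues a fresh nonce per request and the device key lives in a TPM or enclave; the header names are made up:

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    device_key = Ed25519PrivateKey.generate()  # in practice: generated and kept in the TPM

    def sign_request(nonce: bytes, session_cookie: bytes) -> dict:
        # Sign (nonce || cookie) so a stolen cookie is useless without
        # the device key, and a captured signature can't be replayed
        # against a different nonce.
        sig = device_key.sign(nonce + session_cookie)
        return {
            "Cookie": session_cookie.decode(),
            "X-Device-Nonce": nonce.hex(),
            "X-Device-Signature": sig.hex(),
        }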


I'm not sure that's simpler than what Google is proposing here



