
What I'm most curious about here is if a state actor comes to Apple with a subpoena and compels them to release information on an individual, what would Apple be able to release?

... I suppose this is ultimately a question that will be tested sooner or later in the US.



I'm very curious as well, because my very limited understanding tells me the answer is nothing. The relay hides your identity. Your phone checks the attestations, so it won't send your data to servers that aren't running the published software, and that software ensures the encryption keys are ephemeral. Once your session is done, the keys are deleted.

Law enforcement would need to seize the right server among millions while it's processing your request and perform an attack on it to get the keys before they're gone.
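
Roughly, the mechanism looks like this (a rough sketch with made-up names and values, not Apple's actual protocol): the phone only releases data to a node whose attested measurement appears in the published list, and the node's session key only ever exists in memory.

    import secrets

    # What the public transparency log says is legitimately running (hypothetical hashes).
    PUBLISHED_MEASUREMENTS = {"a1b2c3...", "d4e5f6..."}

    def client_should_send(attested_measurement: str) -> bool:
        # The phone refuses to send data to nodes attesting to unpublished software.
        return attested_measurement in PUBLISHED_MEASUREMENTS

    class Session:
        def __init__(self):
            self.key = secrets.token_bytes(32)  # ephemeral key, generated per session
        def close(self):
            self.key = None  # discarded as soon as the response is returned

Nothing in that flow leaves anything behind for a subpoena to retrieve after the fact.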

My next question is what happens if/when the attestation keys are stolen.


Probably everything uploaded after the intercept is in place, if you can convince a court to compel it.

One option is to release a malicious software update, sign it, publish the signature on the public chain, and then simply not release the binaries until after whatever associated gag orders there are (if any) expire. Apple gave themselves a 90-day timeline for this before they'd even be in violation of their promises.
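
To spell out why that would pass the client-side checks (a sketch under my own assumptions about the log format, not Apple's actual data structures): clients can only verify that a measurement appears in the signed log, not whether anyone has been able to inspect the corresponding binaries yet.

    from dataclasses import dataclass

    @dataclass
    class LogEntry:
        measurement: str          # hash of the release, signed into the public log
        binaries_published: bool  # researchers can only audit once this is true

    def client_accepts(measurement: str, log: list[LogEntry]) -> bool:
        # Membership in the signed log is the whole check the client can perform.
        return any(entry.measurement == measurement for entry in log)

    log = [LogEntry("malicious-build-hash", binaries_published=False)]
    assert client_accepts("malicious-build-hash", log)  # accepted during the gag-order window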

Another option is to take the cryptographic keys used to make the hardware that attests to the software running on it, and use them to simply falsely attest to what software is running. Unless Apple's somehow moved those keys outside of the court's jurisdiction (which means outside of Apple's control in the case of most courts), that should be within the court's power. If they can still create new hardware, it seems likely whoever is making that hardware must still have access to the keys...

Both of these attacks are outside the "threat model" proposed, because they are broad compromises against the entire PCC infrastructure. The fact that they are possible and within the legal system's power... well... why are we advertising this as secure again?

The main value of this whole architecture in my mind isn't actually security though, it's that it's Apple implicitly making the promise that they won't under any circumstance use the data, or let anyone else use the data, for business purposes (not even for running the service itself).


> One option is to release a malicious software update, sign it, publish the signature on the public chain,

In this option it would be Apple releasing a malicious software update?

> If they can still create new hardware, it seems likely whoever is making that hardware must still have access to the keys...

This option reads like the keys are stored in apple-keys.txt

> Both of these attacks are outside the "threat model" proposed, because they are broad compromises against the entire PCC infrastructure

They mentioned that the in-depth write-up will be shared later; might they still address this concern in writing? Your wording makes you sound so certain, but this is just a broad overview. How are you so sure?


> In this option it would be Apple releasing a malicious software update?

Yes, compelled by something like the All Writs Act (if the US is the one doing the compelling).

> This option reads like the keys are stored in apple-keys.txt

They probably are. That file might live on a CD in a safe that requires two people to open, but ultimately it's a short chunk of binary data that exists somewhere (until it is destroyed)...

> might they still address this concern in writing?

Can I say beyond all doubt that this won't happen? Of course not.

On the first approach I'm quite confident though, because it's both the type of attack they discuss in their initial press release, and pretty fundamental to and explicitly allowed by their model of updating the software.

On the second approach I'm reasonably confident. Like the first, it's the kind of issue they were discussing in their initial press release. Unlike the first, it's not something that is explicitly allowed in the model. If Apple can find a way to make the attestation keys irretrievable while still allowing themselves to manufacture hardware, I believe they'd do it - I just don't see a method, and I think it would have warranted a mention if they had one. I tried to insert a level of uncertainty into my original writing on this one because I could be missing a way to solve it.

Ultimately I'd rather over-correct now than have people start thinking this is going to be more secure than it is, and then have some fraction of them miss the extremely likely follow-up of "and we could be compelled to work around our security".



I'm well aware of these. They don't solve the problem at hand. You need a way to put keys into new hardware. Thus you need a way to get keys out of wherever you've stored your cryptographic material. Thus it can't be on an HSM (or it can be, if it's a master key signing child keys, but in that case the attack only needs a signed child key).


From: https://support.apple.com/guide/security/secure-enclave-sec5...

“A randomly generated UID is fused into the SoC at manufacturing time. Starting with A9 SoCs, the UID is generated by the Secure Enclave TRNG during manufacturing and written to the fuses using a software process that runs entirely in the Secure Enclave. This process protects the UID from being visible outside the device during manufacturing and therefore isn’t available for access or storage by Apple or any of its suppliers.”


Sure, and even Apple can't make one server imitate a different server that they already made.

They're making new servers though. Take the keys that are used to vouch for the UIDs in actual secure enclaves, and use them to vouch for the UID in your evil simulated "secure" enclave. Your simulated secure enclave doesn't present as any particular real secure enclave, it just presents as a newly made secure enclave that Apple has vouched for as being a secure enclave.
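
To put it in code (a sketch, using Ed25519 purely as a stand-in for whatever Apple's real scheme is): whoever holds the vouching key can certify a brand-new "enclave" key that never saw real silicon, and that fake enclave can then attest to anything.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.hazmat.primitives.serialization import Encoding, PublicFormat

    # Imagine this living in an HSM: the root that vouches for newly made enclaves.
    manufacturing_key = Ed25519PrivateKey.generate()

    # A simulated "secure enclave" is just a fresh keypair with no hardware behind it.
    fake_enclave_key = Ed25519PrivateKey.generate()
    fake_enclave_pub = fake_enclave_key.public_key().public_bytes(
        Encoding.Raw, PublicFormat.Raw)

    # The root vouches for it exactly as it would for genuine new hardware...
    endorsement = manufacturing_key.sign(fake_enclave_pub)

    # ...and the fake enclave can now sign attestations for whatever software it claims to run.
    attestation = fake_enclave_key.sign(b"measurement of the published software")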


I mean, it was famously tested in 2016, after the San Bernardino attack. Apple didn’t back down [1] and later sued the company that sold the zero-day to the govt to unlock the phone [2].

[1] https://en.m.wikipedia.org/wiki/Apple%E2%80%93FBI_encryption...

[2] https://www.washingtonpost.com/technology/2021/04/14/azimuth...


Also famously tested (and failed) much more recently. https://arstechnica.com/tech-policy/2023/12/apple-admits-to-...

Apple shills are the worst.


That's an ordinary subpoena, and that data isn't being specially collected and isn't end-to-end encrypted. It has nothing to do with the guarantee in this article.


Push notifications are on a whole different level than "full access to your phone".


The more common use case will be when you're locked out of "your" Apple ID.


The design appears to be entirely ephemeral. There's no personal data to recover here from "your" Apple ID.


Unless their statements regarding the design of these systems are blatantly false, or they are forced to add data collectors on purpose to target individuals, the answer is close to nothing.

You can opt into full E2E encryption [1] which makes it nothing, presumably at the cost of some convenience features.

[1] https://support.apple.com/en-us/108756


PCC exists because full E2E is not feasible for these use cases. The LLM has to take your personal data (context window and prompt) to process it.
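
Right: the PCC node is necessarily a decrypting endpoint. A toy sketch (entirely made up, using Fernet just for illustration) of the shape of the problem:

    from cryptography.fernet import Fernet

    def run_llm(prompt: str) -> str:
        return f"response to: {prompt}"  # stand-in for the actual model

    def handle_request(ciphertext: bytes, session_key: bytes) -> bytes:
        f = Fernet(session_key)
        prompt = f.decrypt(ciphertext).decode()  # plaintext necessarily exists on the node
        return f.encrypt(run_llm(prompt).encode())

    key = Fernet.generate_key()  # ephemeral per-session key
    reply = handle_request(Fernet(key).encrypt(b"summarize my calendar"), key)

The attestation and ephemeral-key design is the substitute for the end-to-end guarantee that the opt-in above provides.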



