> There is one exception: FPGAs, under the premise that "generically" backdooring an FPGA is computationally infeasible.

IMO, this isn't a reasonable premise.

- The entire boundary scan chain is a backdoor. You could have an embedded processor poking around, looking for things that look like RISC-V code, and adding implants or observing state.

- You could make SERDESes do various kinds of naughty things when certain patterns go by-- dump some scan chain info, so you can send some special packets and read out state remotely. Same thing for other dedicated peripherals that are connected to the outside world. This could be a pretty small number of gates compared to the processor implant idea (rough sketch of such a trigger after this list).

- You could make naughty patterns of bits crossing places do bad stuff. Think of dynamic effects like rowhammer being deliberately included, so if you know the design you can figure out what outside data will trigger bits to flip and state to leak. (Yes, I know that block RAMs are SRAMs, but that doesn't mean you can't deliberately add capacitive coupling or screw up synchronizers in various ways. And it looks like we may have block NVRAMs soon, which opens up even more possibilities for evil.)

- You could deliberately break some kinds of operations-- e.g. make elliptic curve cryptography unreliable in some cases so you leak key data (a toy fault-attack example follows this list).
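
To make the trigger idea concrete, here's a minimal Python model of the kind of pattern-watching state machine a SERDES implant would need. Everything specific here is made up (the magic byte sequence, the names); the point is just how little state the trigger takes, which translates to a very small number of gates:

    # Hypothetical pattern-triggered implant watching a byte stream.
    # In silicon this would be a tiny FSM in the SERDES datapath;
    # the Python is just to show how little state it needs.
    MAGIC = bytes.fromhex("deadbeefcafef00d")  # made-up trigger sequence

    class TriggerFSM:
        def __init__(self) -> None:
            self.matched = 0  # count of consecutive magic bytes seen

        def step(self, byte: int) -> bool:
            """Feed one byte of line traffic; True means the trigger fired."""
            if byte == MAGIC[self.matched]:
                self.matched += 1
                if self.matched == len(MAGIC):
                    self.matched = 0
                    return True  # e.g. start leaking scan-chain bits
            else:
                # naive restart (a real matcher would handle overlaps)
                self.matched = 1 if byte == MAGIC[0] else 0
            return False

    fsm = TriggerFSM()
    assert not any(fsm.step(b) for b in b"ordinary traffic")
    assert any(fsm.step(b) for b in b"innocuous" + MAGIC + b"payload")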
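
And the "deliberately break operations" item has a classic worked example: the Bellcore fault attack on RSA-CRT, where one corrupted half of a single signature factors the modulus. Toy-sized numbers below, but the gcd trick is the real attack; fault attacks on ECC (e.g. on scalar multiplication) are the same flavor:

    from math import gcd

    # Toy RSA-CRT signer (absurdly small primes, illustration only).
    p, q = 61, 53
    N, e = p * q, 17
    d = pow(e, -1, (p - 1) * (q - 1))
    m = 42  # message representative to sign

    sp = pow(m, d % (p - 1), p)  # mod-p half of the signature
    sq = pow(m, d % (q - 1), q)  # mod-q half

    def crt(a: int, b: int) -> int:
        return (a * q * pow(q, -1, p) + b * p * pow(p, -1, q)) % N

    good = crt(sp, sq)
    assert pow(good, e, N) == m  # correct signature verifies

    # The "backdoor": a single bit flip in the mod-p half, just once.
    bad = crt(sp ^ 1, sq)

    # Attacker math: the faulty signature hands over a factor of N.
    assert gcd((pow(bad, e, N) - m) % N, N) == q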

Note the various defense and national security applications of FPGAs. They're a wonderful target for state actors to try to backdoor.



> The entire boundary scan chain is a backdoor. You could have an embedded processor poking around, looking for things that look like RISC-V code, and adding implants or observing state.

The point is that this is not feasible when the design layout is randomized. Reverse engineering FPGA bitstreams is a notoriously hard problem. You might detect one particular synthesis of a RISC-V core, but there is no algorithm that can detect any RISC-V implementation, and definitely not one that can run in a tiny, low-power embedded processor that would pass unnoticed.

> You could make SERDESes do various kinds of naughty things when certain patterns go by-- dump some scan chain info, so you can send some special packets and read out state remotely.

Yes, backdooring I/O is still possible. But it significantly raises the bar for the backdoor, since now you're relying on significant back-and-forth to probe deeper into the system. This isn't an absolute defense; it's just better than hard silicon, because you can make backdooring the core logic impractical, especially when it's self-contained.

> You could make naughty patterns of bits crossing places do bad stuff. Think of dynamic effects like rowhammer being deliberately included, so if you know the design you can figure out what outside data will trigger bits to flip and state to leak. (Yes, I know that block RAMs are SRAMs, but that doesn't mean you can't deliberately add capacitive coupling or screw up synchronizers in various ways. And it looks like we may have block NVRAMs soon, which opens up even more possibilities for evil.)

With design randomization, you can make it hard to detect patterns like that. Think things like randomizing the polarity of each bit line going in/out of RAM. Again, the point is that the backdoor has to work with any design, and this opens up a wide range of mitigations you can implement at that stage that make a backdoor a lot less practical.
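
For the skeptical, a minimal sketch of what the bit-line polarity idea could look like at design-generation time. Precursor's SoC is generated from Python (LiteX/migen), so build-time randomization like this is natural; the names and structure here are my own illustration, not their actual code:

    import secrets

    WIDTH = 32
    # Fresh per-build inversion mask, baked into the generated RTL.
    POLARITY_MASK = secrets.randbits(WIDTH)

    def scramble(word: int) -> int:
        """Write-port transform: free XOR gates in the FPGA fabric."""
        return word ^ POLARITY_MASK

    def descramble(word: int) -> int:
        """Read-port transform: XOR is its own inverse."""
        return word ^ POLARITY_MASK

    # Logical contents are unchanged, but the physical bit patterns
    # sitting in block RAM differ from build to build, so a pattern
    # trigger tuned to one build misfires on another.
    assert descramble(scramble(0xDEADBEEF)) == 0xDEADBEEF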

Keep in mind I'm bringing this up in the context of people believing that some ThinkPad the FSF rubber-stamped respects your freedom (and security) when it contains microcontrollers hooked up to LPC running secret blobs. Yes, if we want to go deeper down the rabbit hole of hardware trustability, there is definitely more to be done after Precursor, but it's a particularly clever example of how to at least begin to attack the silicon trust problem.


> With design randomization, you can make it hard to detect patterns like that. Think things like randomizing the polarity of each bit line going in/out of RAM. Again, the point is that the backdoor has to work with any design, and this opens up a wide range of mitigations you can implement at that stage that make a backdoor a lot less practical.

This doesn't really work-- assume the adversary has your design. Then they can figure out how to get the right bits across some part of it that matters.


The idea for Precursor is that every user runs a different random build of the design.


The only thing that seems to be randomized when building Precursor is the P&R seed.

https://github.com/betrusted-io/betrusted-soc/blob/main/betr...
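
For reference, randomizing the seed amounts to something like this wrapper (a sketch; Precursor actually targets a Xilinx part through Vivado, so I'm using the open-source nextpnr flow's --seed option as a stand-in):

    import secrets
    import subprocess

    def randomized_pnr(netlist_json: str, out_asc: str) -> int:
        """Place-and-route with a fresh random seed, so each user's
        build gets a different physical layout (hypothetical wrapper)."""
        seed = secrets.randbelow(2**31)
        subprocess.run(
            ["nextpnr-ice40", "--seed", str(seed),
             "--json", netlist_json, "--asc", out_asc],
            check=True,
        )
        return seed  # log it so the build stays reproducible/auditable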

That prevents attacks where a known place on the FPGA is naughty, but not attacks where a lot of elements are naughty on certain inputs.

It doesn't even fully protect against the known-naughty-place case: there's not infinite freedom in P&R with fixed I/O locations.


Right now, sure, but more mitigations can be added; this is an area ripe for research. The idea is that this kind of device and approach enables that further research, and since it's soft logic, users can benefit from improvements down the line.

Again, I'm not saying this is a silver bullet; I'm saying it's an interesting approach that can claim to at least mitigate the risk of silicon backdoors by making them harder to pull off, which is more than can be said of the typical hard-logic approach.



