I agree with all of the article's points except the first one: TPM and Secure Boot do not reduce user choice or promote state or corporate surveillance. If you want to prevent rootkits, you need Secure Boot; and if you want to store secrets that don't need a user password to unlock and can't be stolen by taking the computer apart, you need a TPM, or substantially similar alternatives.
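For what it's worth, the mechanism behind both features is the same trick: a hash chain that software can extend but never rewind. A toy sketch of PCR-style measured boot (plain hashlib, not a real TPM API; the component names are made up for illustration):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # A TPM PCR is updated as PCR' = H(PCR || H(measurement)):
    # you can only append to the chain, never rewind or overwrite it.
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

pcr = b"\x00" * 32  # PCRs start zeroed at power-on
for component in [b"firmware", b"bootloader", b"kernel"]:
    pcr = pcr_extend(pcr, component)

# Sealing binds a secret to this exact PCR value. Change any link in
# the chain (say, a rootkit patches the bootloader) and the final
# digest differs, so the sealed secret never unseals.
tampered = b"\x00" * 32
for component in [b"firmware", b"evil bootloader", b"kernel"]:
    tampered = pcr_extend(tampered, component)

assert pcr != tampered
```

The one-way property is the whole point: a rootkit that runs after the measurement cannot un-measure itself.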
I would say that specifically with Secure Boot, Microsoft actually promoted user choice: A Windows Logo compliant PC needs to have Microsoft's root of trust installed by default. Microsoft could have stopped there, but they didn't. A Windows Logo compliant PC _also_ needs a way for users to install their own root of trust. Microsoft didn't need to add that requirement. Sure, there are large corporate and government buyers that would insist on that, but they could convince (without loss of generality) Dell to offer it to them. Instead, Microsoft said all PCs need it, and as a result, anybody who wants to take advantage of secure boot can do so if they go through the bother of installing their own root of trust and signing their boot image.
> I would say that specifically with Secure Boot, Microsoft actually promoted user choice: A Windows Logo compliant PC needs to have Microsoft's root of trust installed by default. Microsoft could have stopped there, but they didn't.
This was not the case with the initial rollout of Secure Boot: it was combined with a locked BIOS to lock down some PCs so that they could only boot Windows 8. This was the case on Windows RT ARM machines of that era.
All that has to be done today for machines to be locked down again is to flip a bit or blow an e-fuse. It's already the case on phones and tablets.
There is also a real potential for abusing TPMs or cryptographic co-processors to enforce remote attestation.
I say this as someone who agrees with your first paragraph and uses Secure Boot + TPMs on all of my machines.
And it's already happening in the form of the Google Play Integrity API. Many apps already require it. It's just a matter of time before they push similar tech to the desktop. And on mobile it hurts more, because many banks now require a mobile app for 2FA.
Personally I think any form of attestation is evil.
There's a reason Microsoft is aggressively deprecating "older" CPUs that work perfectly fine. Heck, I have one laptop with Windows 11 that worked great, but it won't update from 22H2 to 24H2 because CPU support was dropped between versions, leaving me with only the glib suggestion from the Windows Update UI to "Buy a new device".
Ironically, installing Windows 10 and activating ESU would lead to longer hardware life.
Of course, I didn't. Instead, I installed Linux on that laptop too. My partner had no issues switching.
TPM wasn't the only reason older CPUs were dropped. The biggest factor in the line in the sand Microsoft chose for Windows 11 was Spectre/Meltdown [0] mitigation. Windows 10 added a bunch of intentional slowdowns to mitigate that disaster, and people incorrectly blamed Windows 10 for being slow rather than the CPUs and their CVEs. Windows 11 seems to have wanted a clean slate, without any of those slowdown mitigations in the codebase, to eliminate some classes of "Windows 11 is slow on my machine" complaints.
I'm not sure Microsoft took the best approach. I might have opted into a "Windows 11 Slow CPU" SKU if it was marketed right. That might have been a little kinder than "all these CPUs with this awful series of bugs are trash, even though we have had a successful workaround".
Yes, Microsoft is blocking CPUs which lack the ability for Virtualization Based Security. Given OS security is important to Microsoft (surprise, I know), enforcing VBS is a priority.
I think "chooses to" is doing a lot of work there in your understanding. Spectre exploits were found in the wild even in JS code submitted to ad networks. I suppose a user could choose to uBlock all ad JS and never visit webpages they don't trust. Those are choices, sort of.
But also that's a bit victim blaming, isn't it? Do you want to explain to your grandfather or partner or child, "Oh sorry, you had a password stolen because you chose to visit Google.com on a day when Google let an ad buyer attach Spectre exploit malware"? (Google could also choose to not let ads attach JS at all, but that's a very different problem.)
Computers have millions of places they get code from to run. Is "your CPU has a data-leaking bug in it" the user's problem or the OS's problem? When there's a mitigation the OS can manage? When defense-in-depth is an option?
I installed Bazzite on my own old Desktop not supported by Windows 11. One of the first things the Linux kernel spits out on boot if I have the boot console up is about running with Spectre mitigations. The Linux kernel also thinks it is important to mitigate (as Windows 10 did, but Windows 11 doesn't include and so doesn't support this old Desktop).
Sure, I might have been a wee bit too bitter writing that.
The point I want to make is that allowing remote code execution is such a big attack surface that it makes all the other security measures look silly, which suggests that signed execution contexts are in themselves an attack on privacy, control, and so on.
If there were any actual security concerns, there could be a push for server-side rendering or something.
> People here REALLY need to start understanding this issue.
The idea that understanding is the problem feels like a fallacy. People need to upgrade hardware, and when all chips contain such functionality, consumers won't have a choice of alternatives. What you want is legislation (or a dominant competitor lacking such features, which doesn't exist).
No, I think they bend over backwards not to do it overnight because of the outcry, and instead make all the required changes and enforcements gradually over the years, so that in the end you will have no choice, but there will never be any sudden change that would spark protests.
> This was not the case with the initial rollout of Secure Boot, it was combined with locked BIOS to lock PCs so that they could only boot Windows 8 on some devices. This was the case on Windows RT ARM machines from that era.
Okay, but, that was like 15 years ago, on some shitty first-run computers that no one bought. A failed first attempt. I've never met a single person that owned, or has ever used, a Windows RT device.
The world has moved on. But oddly continues to buy bootloader-locked iPhones and Androids by the bucketful.
Dwelling on the past isn't going to move us forward. Anyone pushing the "Secure Boot and TPM are evil" trope in 2025 is objectively a fool and should be ignored. Most don't even realize what a TPM does; they think it's some secret chip inserted by glowies into their computers to prevent them from running free software. No.
Normally I would agree that security measures are needed in many, but not all, cases, but only if they are under the complete control of the user and cannot be altered by any one organization. For-profit companies cannot be in control of these mechanisms. We have seen how they can be abused with Google's latest decision to limit side-loading to people who identify themselves. So your take is really a misdirection from how these tools are being used against our property.
> For-profit companies cannot be in control of these mechanisms.
But they are not in control of Secure Boot.
Microsoft runs a root CA that is pre-installed on most PCs. It could have been Verisign or someone else, but MS made sense at the time, likely because they had additional code signing expertise.
You are free to delete these keys and/or install your own. If there wasn't preexisting infrastructure, Secure Boot would be DOA for most people.
Microsoft can force manufacturers to change the way that works at any time; it's vendor-specific and they are totally in control, via pressure on manufacturers to toe the line if they want to continue selling computers with Windows.
Don’t confuse the real point with the caricature. There’s a very real risk of only giant corporations being able to control software, because the general public does not even draw a distinction between “having control over what software is running on your computer,” and “being able to run a curated collection of software blessed by the manufacturer and subject to their exclusive discretion.” The full acceptance of the Apple iOS platform proves this. Apple must bless all binaries, and except for cases that are getting less and less common where jailbreaks are possible, the user has no authority and you could argue they do not own the device.
Some combination of the advertising industry and those with a vested interest in anti-fraud such as banks will eventually try to sneak remote attestation in there, which has the potential to put a complete end to ownership of devices as we have always understood it.
I wouldn't mind that if in fact the parent poster didn't try to make it look like an argument that Microsoft is kind and playing nice. They did a bad thing there, there was an outrage, they fixed it, the end. If possible, they will do another bad thing again, should it benefit them.
> Okay, but, that was like 15 years ago, on some shitty first-run computers that no one bought.
I wouldn't call the first Microsoft Surface, Surface 2, Dell XPS 10, and Lenovo IdeaPad Yoga 11 products that no one bought.
> I've never met a single person that owned, or has ever used, a Windows RT device.
I have and I also regrettably bought one myself.
> Dwelling on the past isn't going to move us forward.
The past dictates the future, and history repeats itself. Microsoft made their intentions known; it would be foolish to pretend they haven't. They continue to make their intentions known today with the Pluton cryptographic co-processor, which, paired with a TPM, can enforce remote attestation by design. That is literally the intent of the Pluton chip: ensuring platform integrity and securely attesting to 3rd parties that your system is blessed/trusted.
> Anyone pushing the "Secure Boot and TPM are evil" trope in 2025 is objectively a fool and should be ignored
Anyone tearing down this strawman is tilting at windmills for some reason.
> Most don't even realize what a TPM does, they think it's some secret chip inserted by glowies into their computers to prevent them from running free software.
I wouldn't project ignorance on those you don't actually know. You can understand what a TPM does, understand how it can be abused today and acknowledge how it was abused in the past.
> There is also a real potential for abusing TPMs or cryptographic co-processors to enforce remote attestation.
Remote attestation can be misused, yes. But why write as if the TPM is the problem? In cases where remote attestation is used for good, a TPM improves the setup, if anything.
I don't see the rationale for what you wrote, and am genuinely curious what it is.
You can't do remote attestation without something like a TPM.
Let's compare these scenarios:
A) TPMs are optional and 30% of users have them. A bank is thinking about requiring remote attestation to use their services. Since they'd lock out 70% of users they decide to not do it.
B) TPMs are mandatory and 90% of users have them. A bank is thinking about requiring remote attestation to use their services. Since they'd only lock out 10% of users they decide to do it.
And banking is the nice example here. Refusing to serve a site if the user is running an adblocker is very much in the interest of powerful players in the space; see WEI. Every platform with widespread TPM adoption, namely Android and iOS, has shown that it will abuse them for anti-consumer purposes sooner or later. And we are talking about Microsoft here, the current and past poster child for anti-consumer decisions.
I hope that explains why making TPMs blanket available introduces new risks to sovereign computing.
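To make the mechanism in those scenarios concrete: an attestation "quote" is just the device's measured state plus a verifier-chosen nonce, signed by a hardware-held key the vendor vouches for. A toy sketch (HMAC stands in for the TPM's asymmetric attestation key, so in this simplification the verifier holds the same key; all names are illustrative):

```python
import hashlib, hmac, os

ATTESTATION_KEY = os.urandom(32)  # burned into the device; vendor certifies it

def quote(pcr_digest: bytes, nonce: bytes) -> bytes:
    # Device side: sign "here is my measured state" plus a freshness nonce.
    return hmac.new(ATTESTATION_KEY, pcr_digest + nonce, hashlib.sha256).digest()

def verify(pcr_digest: bytes, nonce: bytes, sig: bytes, expected_pcr: bytes) -> bool:
    # Relying party (the bank): accept only a known-good measured state.
    expected_sig = hmac.new(ATTESTATION_KEY, pcr_digest + nonce,
                            hashlib.sha256).digest()
    return hmac.compare_digest(sig, expected_sig) and pcr_digest == expected_pcr

blessed = hashlib.sha256(b"vendor-approved OS").digest()
rooted = hashlib.sha256(b"user-modified OS").digest()
nonce = os.urandom(16)

assert verify(blessed, nonce, quote(blessed, nonce), expected_pcr=blessed)
# A rooted device answers honestly but is refused anyway; that policy
# decision on the relying party's side is the lock-out being discussed.
assert not verify(rooted, nonce, quote(rooted, nonce), expected_pcr=blessed)
```

Note that the hardware only proves what the software state is; deciding which states to serve is entirely up to the bank.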
I see your point. It's the very lopsided power balance between consumers and providers, and the dishonest tactics of the latter. It ought to be addressed politically (it's idealistic, I know). Until then, use free software and multiple devices, or something like that. The TPM chips are in themselves a powerful concept that can, and should, be used to the consumer's advantage.
Because that's what has been going on in the Android world for years and for the iPhone was the case from the start.
Root your phone, even if it is just for the ability to make full backups (because that is, to this day, not a thing on Android)? Say goodbye to banking and most games; even the proposed new EU "digital identity" government wallet was supposed to enforce attestation.
And everyone with a phone on the "bad vendor" list that either doesn't get Google certification from the start or gets it revoked due to sanctions? Same.
Then you really should be angry at Apple and Google, not the hardware.
The preparations for eIDAS 2.0 (the EU thing) have been heavily inspired by SSI. If they keep up the good work and implement it properly, security and privacy will be top notch. And that is only possible by using a TPM (or really an SE when we talk about mobile phones).
Yes, I know that eIDAS might end up not meeting the early promises. We will have to see. But in that case it will be despite the possibilities that the hardware provides, not because of them.
TPMs form the root of trust needed for remote attestation. If not TPMs, cryptographic co-processors can do similar things, or work in tandem with TPMs to accomplish the same thing.
On the face of it they're just security features, and I don't deny they are, but the industry as a whole is using those features to implement device verification systems that lock down their platforms and centralize control over their software ecosystems.
Being able to install another OS isn't much good if critical applications and websites refuse to run on it.
That the battle is lost doesn't mean we should stop fighting. Even the war being lost isn't a reason to stop. The equivalent in the real world is resistance.
I honestly have only come across one company that is app only. That was because I was with them when they changed over, otherwise I would never have signed up.
This was my local gym which sacked their front desk staff and moved to app access only, and with an app infested with trackers at that. Needless to say I don't go to that gym anymore.
It's popular with fintechs, especially new ones. Robinhood, for instance, was app-only for a few years before they got their web version. Revolut theoretically has a web version, but it has far fewer features than the mobile app. Restaurant "apps" (for ordering and offers) are often app-only as well.
Honest question: what does the TPM have to do with this? I mean, Revolut developers don't need to check for a TPM or similar to gate functionality just because you're in the browser rather than the mobile app. Am I getting something wrong?
There might not be "TPMs" exactly on smartphones, but both Android and iOS have device attestation APIs that do the same thing a TPM does, i.e. cryptographically prove to a remote party that you're running some particular version of the software.
>I mean, Revolut developers don't need to check for TPM or similar to serve other functionalities just because you're on browser or mobile app.
Some features are simply not available in the web version. You can try running the app in an emulator to get past that limitation, but an emulator won't be able to spoof device attestations, so if they bother checking for it you're screwed.
I'm in the middle of a move and had to pay a transport company to move some stuff for me, pick-up date tomorrow. I paid online, and the website asked for a confirmation from my bank's app (N26), fair enough. I opened the app, just to be greeted with "Please Update. The latest app version includes new features, enhancements and stability improvements", with the only choice: "Update now".
Being confronted with an app designed to refuse to work was irritating enough (for context, I'm from a generation where we used to own our devices), but I clicked on "Update" anyway, just to be told by the App Store that there was no update for my iPhone 7.
Okay, the writing was on the wall. You know, I already own one iPhone and two Android phones, all of them several years old but in pristine condition. I'm not going to buy yet another one, if only because I hate waste and fear the mismanagement of natural resources. That's how I am; I care for things.
Now you are mandating that I add more e-waste? There is no way I'm going to do that, so I decided to connect to N26's website, but guess what? You need the app to log in. Well, if you insist, you can also log in with a text message, which I did, just to confirm that there was no way to approve a payment on the website.
But you can contact "support", so I tried that. To their credit, the robot bouncer was quick to admit incompetence and connect me with a friendly fellow human, who was unfortunately only allowed to lecture me about why those "new features and enhancements" were essential to my account's security, while being unable to tell me exactly what they were or what the problem with the current version was, and who suggested I log in from someone else's phone instead.
Security? Whose security?
To anyone working in tech, let me remind you what an actual threat model is.
My actual threat model in the actual world is that your company might steal my money, or prevent me from accessing it, which amounts to the same thing. Data points: despite all the stories on the news about mischievous hackerz from Russia and China, I've had money stolen only twice in my life, not a lot but at a time when I needed it, and both times by banks.
My threat model is that the electronic gadget that I bought and carry with me all the time stops obeying me and starts obeying some adversarial company. And that, in perfect Newspeak mastery, you want me to call this a "trusted device".
My threat model is that our civilization might drown in e-waste.
Want another example of an app-only service? Wait a day or two; I'm confident I will face the same issue again soon.
Yes, your bank is shit, but this is also Apple's fault to a large degree.
There is absolutely no reason to release a new major version of your OS every year, and there is no reason to arbitrarily drop support for older devices (except extremely contrived ones, which I'm sure will be posted below). I made the mistake of acquiring an iPad once. Its only job was playing YouTube videos in bed (yes, I know), until Apple and Google in unison decided that it should be thrown into a landfill, because its OS was unsupported and the YouTube app, for no reason at all, would no longer work. Was the device suddenly unable to decode H.264 video or play audio? Nope. But please just throw it in the trash and buy a new one; what are you, poor?!
I don't know; I haven't checked extensively, but I believe supporting the iPhone 7 is still one checkbox away in Xcode (the Xcode 26 release notes state that it "supports on-device debugging in iOS 15 and later", which is what is installed on my iPhone).
I could imagine how some team at N26 thought that "supporting" more devices was too much on their plate, which I would sympathise with, but the most likely scenario to me is that some technically inept "decision maker" decided to ban older phones in a security gesture, to give the impression that he is adding value.
Note: I also own a venerable iPad Air 2 (2014) that I bought second-hand long ago to serve as a MIDI controller. Still a very nice, well-built machine. It's not allowed to connect to wifi, or it would figure out what year it is. I call it "hibernatus" (a reference to https://en.wikipedia.org/wiki/Hibernatus) :)
I must just have a sixth sense for avoiding those kinds of services. And I also have a zero-tolerance policy. For example, if a restaurant says I have to order on my phone, I stand up and leave. I am old enough now that they probably just assume I am technologically illiterate.
Year 2034: you have a nice vintage, lightly used electric car. The battery still charges and the whole thing drives. Do you need to buy a new car, or does the government need to prohibit you from using it or force you to scrap it? Most likely yes: the battery is about to explode, possibly at a crowded crossroads...
Real problems sometimes demand a binary, all-or-nothing response.
"A phone app from everyone", on the other hand, is just monopoly-inflicted harm on society.
Your story is appalling, and I agree that this is a major problem.
However, drowning in e-waste from smartphones is many orders of magnitude from being an issue, as trivial calculations easily show. Mentioning it makes your argument rhetorically much weaker. The iPhone 16 is 147.6mm × 71.6mm × 7.8mm (8.2 × 10⁻⁵ m³) and weighs 170g, according to https://www.dimensions.com/element/apple-iphone-16-18th-gen. The population of France is 68.6 million people. One iPhone per person each year for the next century would be 6.86 billion iPhones in France, assuming the population remained constant. This would weigh 1.2 million tonnes and fit in a sphere roughly 100 meters in diameter. If stacked 6 meters deep it would cover 9.4 hectares, a circle 340 meters in diameter. France contains 63 million hectares. The hypothetical pile of iPhones would cover about a third of the area of the Gravelines Nuclear Power Station near Calais.
Far from drowning in e-waste from smartphones, if you dump it in a landfill, it will be extremely hard even to find the e-waste without a map.
Even if you didn't have a countryside to bury e-waste in, this should be obvious even on the household scale. Suppose you and your four children each get a new iPhone every year, and instead of throwing them away, you put them in a box in the attic. How big is the box? It's a 35 cm cube after 100 years. It would weigh 85 kg, though, so you'd want to use several smaller boxes. But there is no risk of drowning.
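The figures are easy to reproduce from the stated dimensions (one nit: the stated volume corresponds to a sphere roughly 100 m across, not 51 m):

```python
import math

phones = 68.6e6 * 100                 # one iPhone per person per year, 100 years
vol_each = 0.1476 * 0.0716 * 0.0078   # m^3, iPhone 16 dimensions
total_vol = phones * vol_each         # ~5.7e5 m^3
total_mass_t = phones * 0.170 / 1000  # tonnes

sphere_d = 2 * (3 * total_vol / (4 * math.pi)) ** (1 / 3)
area_ha = total_vol / 6 / 1e4             # stacked 6 m deep, in hectares
circle_d = 2 * math.sqrt(total_vol / 6 / math.pi)
box_side = (500 * vol_each) ** (1 / 3)    # 5 people x 100 years in the attic

assert round(total_mass_t / 1e6, 1) == 1.2  # ~1.2 million tonnes
assert 100 < sphere_d < 105                 # sphere ~102 m across
assert 9 < area_ha < 10                     # ~9.4 hectares
assert 335 < circle_d < 350                 # circle ~340-ish m across
assert 0.34 < box_side < 0.35               # the 35 cm attic cube
```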
"Drowning in e-waste" was a metaphor for "slowly destroying the conditions for civilisation with the violent obsession for more fossil fuel and more minerals to extract".
That's a bad metaphor, because those problems don't have anything significant in common with the e-waste problem, and there is no particular danger of smartphones being a major contributor to them either. According to https://www.apple.com/nz/environment/pdf/products/iphone/iPh... the emissions per iPhone 16 are 56 kg of CO₂ equivalent, 18% of which is the expected energy consumption during the life of the product. France emits 4.14 tonnes of CO₂ per person per year, so buying an extra iPhone per year would increase your total yearly CO₂ (equivalent) emissions by about 1%. Similarly, the quantity of minerals in a smartphone is insignificant (170 grams! largely recycled!) compared with the quantity of minerals in, for example, a sidewalk (many tonnes).
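Using the numbers as given, the percentage is a one-liner to reproduce:

```python
iphone_kgco2 = 56             # lifecycle emissions per iPhone 16 (Apple's figure)
france_kgco2_per_year = 4140  # per-capita yearly emissions, France

extra = iphone_kgco2 / france_kgco2_per_year
assert 0.012 < extra < 0.015  # ~1.4%, i.e. on the order of 1% per extra phone
```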
Some of those minerals, like the gold in the bond wires, are pretty heavily refined, requiring the excavation of some much larger amount of gangue and leaving most of it as tailings. But the total quantities of those minerals in the device are very small indeed. Instead, worry about things like electric vehicles and CO₂ emissions from making concrete.
What you are doing by attempting to reduce fossil fuel and other mineral usage by buying smartphones less frequently is analogous to attempting to pay the rent on a Paris apartment by looking for lost coins in the subway station, or attempting to take a running leap across the English Channel. You are doomed by your complete lack of understanding of the orders of magnitude involved.
> the emissions per iPhone 16 are 56 kg of CO₂ equivalent, 18% of which is the expected energy consumption during the life of the product
Are you counting the emissions produced to make it and all the packaging that comes with it, the vehicles used to transport it, the lighting used in the warehouse where it sits, and the appliances used to keep the warehouse clean too? Phones, just like anything else, are not made in a vacuum.
Apple says they are counting the emissions produced to make it and all the packaging that comes with it, the vehicles used to transport it, lighting used in the warehouse where it sits, and also the consumption of the device during its lifetime. You wouldn't want to count the carbon emissions of making the appliances used to keep the warehouse clean too because with that procedure the carbon emissions of anything would be infinite.
It ought to be obvious, but I'll say it anyway: the carbon emissions of shipping things like a smartphone are quite small, and the carbon emissions of things like warehouse lighting and warehouse cleaning are utterly insignificant.
> You wouldn't want to count the carbon emissions of making the appliances used to keep the warehouse clean too because with that procedure the carbon emissions of anything would be infinite.

The problem is that this is exactly the kind of loophole orgs use to get a lower number on reports; see countries selling their waste to poorer countries. You don't necessarily have to fully map the true production chain, but not counting the emissions of the tools used to produce, store and maintain the smartphones smells like cheating to me. All those tools contribute to carbon emissions, and setting an arbitrary line only serves to push the responsibility for those emissions onto someone else, which is how we got into this whole environmental mess in the first place.
Obvious is just shorthand for unsubstantiated beliefs, in my experience. What does "quite small" even mean? The iPhone's carbon footprint is likely the lowest of all smartphones, given Apple's efforts to look as green as possible. Your regular smartphone has almost double the carbon footprint, at around 80 kg. When you consider that most non-iPhone, non-flagship smartphones become virtual bricks after 2-3 years, 80 kg is a lot to me.
e-waste is very much linked with over-production, of which any particular product taken in isolation, be it iPhones or tomatoes, is of course an insignificant part; the issue is the economy at large, not iPhones or Apple.
I don't know what your point is, exactly. I was close to believing that this near-perfect mix of naive quotation of Apple PR BS, computation of the tons of minerals required to build a phone to the 5th decimal, and lackadaisical insulting remarks was some refined form of humor. But given we are on HN, you might just be the kind of engineer who can't see the forest for the trees.
So, assuming you are just inappropriately expressing a genuine concern that I might be misled into believing that refraining from buying any more phones is going to slow our society's spiral into chaos, rest assured: I don't believe that. My posture is all about principles, and it holds for an iPhone as for any of the many useless things a normal, modern life wants us to consume routinely, because I believe one should try to do the right thing no matter what, regardless of the odds of success, because proceeding otherwise requires defining success, an end goal, and that's a circular impossibility. Yes, as you can see, I'm with you on the spectrum. :-)
I am an engineer, and engineering is what is going to keep the planet habitable, not self-sacrifice. Engineering is based on calculating the costs and benefits of tradeoffs.
I do respect self-sacrifice on principled grounds. If you were starving in a besieged city, and killing and eating a baby were your best chance for survival (https://youtu.be/KOkBEqtGUI8?t=2886), I'd endorse you not doing it. Even if, in some utilitarian calculus, you were more important than the baby, I'd endorse your hypothetical non-baby-eating moral choice. I'd like to think that I'd be one of the people abstaining from lifesaving cannibalism myself, though I've often seen people fail to uphold their principles when it comes down to it. I respect drawing a line in the sand beyond which you refuse to coldly weigh costs and benefits like an engineer.
But that's not what you're doing. If not buying a smartphone were "all about principles" to you, you wouldn't have a smartphone in the first place. You've crossed the line in the sand; you're already eating babies. All that remains to you is balancing the number of babies you kill and eat against your nourishment.
And, in that situation, refusing to balance costs and benefits isn't a matter of principle. It's merely irresponsibility, and will result in you eating unnecessary quantities of babies.
> I am an engineer, and engineering is what is going to keep the planet habitable, not self-sacrifice. Engineering is based on calculating the costs and benefits of tradeoffs.
This is HN naivete at its best. Engineer-centric worldview directly inspired by Ayn Rand science fantasies with single-factor causality at its core.
Engineering happens in, and is regulated by, its surrounding socio-entrepreneurial-political context. Apple releasing Apple Intelligence was not exclusively an engineering decision. OpenAI releasing ChatGPT was not exclusively an engineering decision. The birth of the internet was not exclusively an engineering decision.
Every single one of those decisions involved more than just calculating costs and benefits of tradeoffs.
What is the difference between saying "I am an engineer" and "I work as an engineer" if we leave aside any desires to bind your personality to your employment contract?
I don't subscribe to a belief in single-factor causality. You can't do engineering with such a belief. Engineering is a discipline of bringing about desired effects, and that requires bringing about all of their necessary causes, not just one of them. If you attempt to operate a motor, a CPU, or an electroporation apparatus at the right voltage without paying attention to the temperature, or the right temperature without paying attention to the voltage, your design will have a bad problem and you will not be doing engineering today. And if you look at the motor's datasheet, you can see that the operating conditions have not just voltage and temperature but another dozen or two parameters.
But when you try to reduce a relationship in the infinitely complex and mostly unknown real world to a sentence, or even an essay or an encyclopedia, you have to simplify it. When you do this well, you can manage to say things that guide your readers toward the inexpressible and incompletely knowable truth, rather than away from it. You may even be able to figure out how to do something that you are trying to do.
To describe a bit more of the situation, among the unbounded complexity of the causal graph that has mostly eliminated the risk of global warming continuing, many of the critical nexuses are engineering achievements: the reduction of the resources required to manufacture solar panels to a tiny fraction of what they were only ten years ago, the construction and successful operation of solar panel factories that would already suffice to meet the human world's energy demands within decades, the similar improvements in rechargeable batteries, the not-yet-built solar farms that will deploy these panels, and so on.
These are ultimately causally dependent on nearly all of human history and especially on the political history of China, Germany, and Spain in the early 21st century and of the US in the late 20th. And the effects that will proceed from them are still largely unknown and unknowable, depending on future politics, but some of them are predictable; in particular, fossil fuels have become economically uncompetitive as a source of energy almost everywhere in the world, and will consequently decline over time. This may not be completely inevitable, but it is likely enough at this point that the alternatives are not worth worrying about.
You ask what it means to be an engineer if it's not just an employment contract, which makes me wonder if you have ever met an engineer. I have already given a partial answer: it is a way of thinking that seeks acceptable tradeoffs rather than perfection. I think it has a lot of other aspects as well. For example, engineers tend not to worry too much about factions with conflicting interests; we see life as a series of problems; we expect problems to be solved with enough knowledge and diligent hard work; we tend to value what is knowable and measurable over intuition, even as we depend unavoidably on intuition every day; we design things; our designs are based on material implications of inequalities (to compensate for the unknown unknowns in the world) rather than just equations; we respect expertise, especially expertise that can be put into words; we dare to imagine what has never been, and bring it into existence.
Contrast this with, for example, the worldview of a lawyer, or a doctor, or a mystic, or even a scientist.
Each of these aspects of being an engineer has good effects and bad effects, and sometimes the congenital blind spots of engineering thinking lead us into disasters. (Those blind spots don't bear much resemblance to your caricature of them, presumably because you know almost nothing about engineering, but they do exist and are very important.) But that's basically the way we have not only built the internet but also solved the climate change problem, including at the political level—you may have recognized Xi Jinping's good and bad points in the outline above.
TPM and Secure Boot would be good things if there were no way to prove to third parties that you're using them, or have them configured a certain way (i.e., remote attestation). It's the fact that that is possible that makes them reduce user choice and promote state and corporate surveillance.
Maybe. This assumes I trust Microsoft to have part of my computer where I have no ability to interrogate it to see what they’re doing in there.
If it’s on my computer, I should be allowed to read and write to it. End of story. I don’t care if that makes it vulnerable. So far as I’m concerned, letting Microsoft keep secrets from me on my own computer is similarly catastrophic to losing my HD to a crypto-locker virus.
> TPM and Secure Boot would be good things if there were no way to prove to third parties that you're using them, or have them configured a certain way (i.e., remote attestation).
This is exactly what a TPM was made for, so your statement is a little bit paradoxical.
The ideal is the owner being able to use TPM/SecureBoot/etc to ensure that the device is in the configuration they want. That means resisting tampering, and making any successful tampering become obvious.
The problem is third parties using TPM/SecureBoot/etc as a weapon against the owner via remote attestation, by preventing them from configuring their own device, with the threat of being cut off from critical services.
Having the upside without the downside would be nice, but how could it work? Is a technical solution feasible, or would it need a law/regulation?
Not a crypto expert, but since both the bad actors seeking control and the people seeking to verify their cloud machines operate remotely, it seems the technology will roll out without a hitch and end up being force-fed into all consumer devices with bullshit excuses.
Thing is, because the whole design, and the firmware along with it, is closed, its security is near zero, even for sealing firmware device images (e.g. option ROMs), much less bootloaders. Multiple security holes have been found.
There's nothing stopping a boot rootkit from loading with the standard Windows bootloader unless you manually seal the image via the command line or group policy, and even then it can be bypassed by installing a fresh bootloader, because the images are identical and will boot after a wipe.
> if you want to store secrets that don't need a user password to unlock and can't be stolen by taking apart the computer, you need a TPM
I had a Win 7 system and just entered a password on boot, this decrypted the disk. It was supported without mods or TPM (maybe some registry tweaks though).
On Ubuntu I do the same, no need for TPM.
Am I missing something?
My disk is encrypted. If they take it apart, they need my password to crack the encryption.
The important part in the parent is "that don't need a user password". You just said you had to supply a (user) password.
With a TPM you can set it up that your disk is unlocked automatically, but only if no-one changed anything in the signed boot chain. This is the default with Bitlocker on Windows and is also possible on Linux, though somewhat more finicky.
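The "no-one changed anything" check comes from measured boot: each boot component is hashed into a Platform Configuration Register (PCR) via the extend operation, so a change anywhere in the chain produces a different final value. Here's a toy illustration of the hash-chaining idea (not real TPM code, just the arithmetic):

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    # TPM extend: new PCR value = SHA-256(old PCR value || measurement)
    return hashlib.sha256(pcr + measurement).digest()

def measure_boot_chain(components: list[bytes]) -> bytes:
    pcr = bytes(32)  # PCRs start zeroed at power-on
    for blob in components:
        pcr = pcr_extend(pcr, hashlib.sha256(blob).digest())
    return pcr

good = measure_boot_chain([b"firmware", b"bootloader", b"kernel"])
evil = measure_boot_chain([b"firmware", b"evil bootloader", b"kernel"])
assert good != evil  # any tampering anywhere in the chain changes the result
```

Because each step folds in the previous value, you can't reorder or swap components and land on the same final PCR, which is what the TPM's release policy keys off.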
But most people don't want to enter a password, and if you make people enter a password too much, they'll choose terrible passwords and put them on a sticky note. Windows Hello can only be done securely with a TPM. A server that I want to turn back on all by itself after a power outage can only be done securely with a TPM.
I want a TPM in my computer so I can have the security and convenience. Yes, it's another point of failure. But I need backups in case the hard drive fails anyway. And besides, the OS can be designed so I can enter a password if I need to use the drive without the TPM.
>Windows Hello can only be done securely with a TPM
I think in general biometrics are in the same ballpark as low-entropy passwords. IDK, I personally have no faith in trusted computing hardware because it can be broken with the right equipment. You're right that it can be used alongside ordinary security measures, but I just think it encourages putting your eggs into a cryptographically-weak, hardware-strong basket (which represents a downgrade, because crypto is stronger than hardware).
>A server that I want to turn back on all by itself after a power outage can only be done securely with a TPM.
Can you describe how this prevents a MITM attack? I assume you mean a remote server? I've heard of colocation setups like this, but I think they rely on a couple of unstated assumptions.
> >A server that I want to turn back on all by itself after a power outage can only be done securely with a TPM.
> Can you describe how this prevents a MITM attack? I assume you mean a remote server? I've heard of colocation setups like this, but I think they rely on a couple of unstated assumptions.
I'm not sure what you mean by prevent a MitM attack, unless you're worried about someone with probes MitM-ing your TPM-CPU connection in the DC.
You can bind a TPM to measurements on the host (let's say for argument's sake you want Secure Boot state, Option ROM state, and UEFI state), then configure the OS to ask the TPM for the (or rather, a) decryption key during boot.
The TPM will check that the state(s) you bound to is (are) the same as when you bound them, and if so it will give the OS the key. Your disk is encrypted, but the boot process is automatic/unattended, as well as completely contained within the server chassis.
There are ways to attack this hypothetical setup, buuuuut there are ways to attack remotely entering your disk password as well, and bear in mind that denial of service is a security vulnerability. Tradeoffs.
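The seal/unseal decision can be simulated in a few lines: a secret is bound to a policy digest derived from PCR values, and it's only handed out when the live measurements reproduce that digest. This is a toy model, of course; a real TPM does this in hardware with secrets that never leave the chip:

```python
import hashlib
import os

class ToyTPM:
    """Simulates sealing a secret to PCR state. Illustrative only; a real
    TPM enforces this in hardware and the secret never leaves the chip."""

    def __init__(self):
        self._sealed = {}  # policy digest -> secret

    def seal(self, secret: bytes, pcr_values: list[bytes]) -> bytes:
        policy = hashlib.sha256(b"".join(pcr_values)).digest()
        self._sealed[policy] = secret
        return policy

    def unseal(self, current_pcrs: list[bytes]) -> bytes:
        policy = hashlib.sha256(b"".join(current_pcrs)).digest()
        if policy not in self._sealed:
            raise PermissionError("PCR state changed since sealing; key withheld")
        return self._sealed[policy]

tpm = ToyTPM()
boot_state = [hashlib.sha256(b"secure_boot=on").digest(),
              hashlib.sha256(b"option_roms").digest()]
disk_key = os.urandom(32)
tpm.seal(disk_key, boot_state)

# Untampered boot: unattended unlock succeeds.
assert tpm.unseal(boot_state) == disk_key

# Tampered boot (Secure Boot flipped off): key is withheld,
# which is when the OS falls back to prompting for a recovery key.
tampered = [hashlib.sha256(b"secure_boot=off").digest(), boot_state[1]]
try:
    tpm.unseal(tampered)
except PermissionError:
    pass
```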
I agree that biometrics are in the same ballpark as low-entropy passwords, which means their security relies on avoiding offline attacks. My ATM card is protected by a 4-digit pin. That's perfectly secure, because the ATM network won't let you enter a wrong pin more than a single-digit number of times before locking the account.
Windows Hello allows you to log in with a 6-digit pin. That's perfectly secure, because the TPM lets them design a system where you can't do an offline attack on the pin. Too many wrong entries and you'll need to use your password.
I doubt there's more than two dozen bits of entropy provided by finger print readers or facial recognition authentication, but you can make an acceptably secure login experience with it because, again, the TPM lets you prevent offline attacks.
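The arithmetic backs this up: a 6-digit PIN carries log2(10^6) ≈ 19.9 bits of entropy, hopeless against an offline attack but fine once the TPM's anti-hammering logic caps the guess count. A toy lockout counter (illustrative only; real TPMs enforce this in hardware, often with escalating delays):

```python
import math

print(f"6-digit PIN entropy: {math.log2(10**6):.1f} bits")

class PinGate:
    """Toy model of TPM-style rate limiting on a low-entropy PIN."""

    def __init__(self, correct_pin: str, max_tries: int = 8):
        self._pin = correct_pin
        self._max_tries = max_tries
        self._tries_left = max_tries

    def check(self, guess: str) -> bool:
        if self._tries_left <= 0:
            raise RuntimeError("locked out: fall back to the full-strength password")
        self._tries_left -= 1
        if guess == self._pin:
            self._tries_left = self._max_tries  # reset the counter on success
            return True
        return False

gate = PinGate("123456")
assert not gate.check("000000")  # wrong guess burns one of 8 tries
assert gate.check("123456")      # correct PIN unlocks and resets the counter
```

With a hard cap of 8 tries, an attacker's odds against a random 6-digit PIN are 8 in 1,000,000 before lockout, which is the whole trick: the entropy stays low, but the attack surface shrinks to an online, rate-limited one.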
But without a password, anybody with physical access to the device can exfiltrate data. That is even easier than with regular password protection, where the storage medium would have to be removed or a live OS booted.
The risk is data leakage. With a TPM and no password, there is no data leakage protection.
Passwordless boot with a TPM means the software can control what secrets it gives out. Yeah, if you boot to a desktop operating system and auto-login as an admin user, that doesn't leave things very secure, but that's not the only scenario.
Consider a server. It can have an encrypted hard drive, boot with the TPM without a password, and run its services. In order to steal data from it, you need to either convince software running on the server to give you that data, or you need to do some sort of advanced hardware attack, like trying to read the contents of DRAM while the computer is running.
There are other use cases too, like kiosks, booting to a guest login, corporate owned laptops issued to employees, allowing low-entropy (but rate limited) authentication after booting, to name a few.
> Am I missing something? My disk is encrypted. If they take it apart, they need my password to crack the encryption.
You’re not protected from an evil maid attack. An attacker with physical access could make your device boot their own payload to capture your encryption key and install a rootkit.
I—like most people—don't have a maid. Is Tom Cruise going to break into my house to add a keylogger to my computer without me noticing? If anyone is breaking in, my threat model is worrying about me or my family getting killed, not someone installing an evil bootloader.
Most market segmentation is just to screw customers (e.g. ECC support), but measured boot is one that really only needs to be on enterprise server or workstation-class hardware, and actually causes issues by existing in mass market hardware.
If your threat model includes evil maid attacks a TPM will not save you. They can just install a physical keylogger and then do whatever they want. The only threat model that a TPM helps with is where the owner of the computer is considered the threat by someone else.
So what happens when they use their physical access to turn off secure boot or just replace the component/device with one that looks the same, prompts for your password and sends it to them?
That's Windows doing that, which they've just compromised and then configured to display only the normal login prompt but send your credentials to the attacker.
They can also decrypt your hard drive by doing the same thing without modifying the original machine by just stealing it and leaving you a compromised one of the same model to also steal your password.
No, GP is misinterpreting Windows's message. It prompts for a recovery key because the TPM is bound to, among other things, Secure Boot == enabled. When Secure Boot is disabled, the TPM notices that and refuses to release the key, that's how you know to reënable Secure Boot or throw away your device.
The fact that Windows is compromised does not make it capable of extracting secrets from the TPM, though maybe a naïve user can be convinced to enter the recovery key anyway...
> When Secure Boot is disabled, the TPM notices that and refuses to release the key, that's how you know to reënable Secure Boot or throw away your device.
But the attacker isn't trying to get the key from the TPM right now, they're trying to get the credentials from the user. It's the same thing that happens with full disk encryption and no TPM. They can't read what's on the device without the secret but they can alter it.
So they alter it to boot a compromised Windows install -- not the original one -- and prompt for your credentials, which they then capture and use to unlock the original install.
They don't need secure boot to be turned on in order to do that, the original Windows install is never booted with it turned off and they can turn it back on later after they've captured your password. Or even leave it turned on but have it boot the second, compromised Windows install to capture your credentials with secure boot enabled.
How suspicious are you going to be if you enter your credentials and the next thing that happens is that Windows reboots "for updates" (into the original install instead of the compromised one)?
So this attack is to steal my Windows password or Windows Hello credentials, but doesn't get my encryption key...? That's...not ideal, but I think you'll see it's an improvement over unencrypted disks (again, TPMs are for people who can't be bothered to set a strong password).
And again this presupposes that you can disable Secure Boot, boot a malicious OS from another drive, fool the user into entering their password, automatically reboot, enable Secure Boot, boot into the legit OS, then come back later and have the ability to boot the OS yourself and log in as the user (because again, you don't have the decryption key, you have the user's login credentials).
You are also presupposing what the TPM is bound to. I don't use Windows, but using systemd-cryptsetup I could configure a TPM to bind to the drives in the system; in this way, it will refuse to boot my legit OS while your malicious disk is installed (well, it will demand a recovery key). Again, setting off alarm bells, and if I discover the disk with my recorded credentials before you can physically access it, I can just destroy it.
> And again this presupposes that you can disable Secure Boot, boot a malicious OS from another drive, fool the user into entering their password, automatically reboot, enable Secure Boot, boot into the legit OS, then come back later and have the ability to boot the OS yourself and log in as the user (because again, you don't have the decryption key, you have the user's login credentials).
But that's the same thing that happens with full disk encryption. They gain physical access to the machine but don't yet have the decryption key, so they compromise the unencrypted part of the machine (the part that prompts you for the key), have it capture the key when you enter it, and now they have the key when they come back.
If anything allowing the short password is even worse, because if you leave your machine in suspend you expect it to prompt for your unlock password but not the full disk encryption key when you come back, so the latter would be suspicious but the former doesn't let them unlock the disk, and now you're using the short password for both.
> You are also presupposing what the TPM is bound to. I don't use Windows, but using systemd-cryptsetup I could configure a TPM to bind to the drives in the system; in this way, it will refuse to boot my legit OS while your malicious disk is installed (well, it will demand a recovery key). Again, setting off alarm bells, and if I discover the disk with my recorded credentials before you can physically access it, I can just destroy it.
Except that it doesn't need to be installed once you're at that point. By then it has already captured your credentials and stored them or sent them to the attacker over the network, so it can disable that device right before it goes to boot into the original operating system.
Also notice that the original premise was to make it easy for ordinary users and now the workaround is to install Linux and change a setting that will confuse people as soon as they leave their own USB stick plugged into their computer.
Either you're entering something into the machine to authenticate yourself or they can just copy or modify your files without authenticating to begin with.
If they just want your password they don't need to decrypt your hard drive, they can format it and install a rootkit that steals your password as soon as you try to login.
So don't turn off secure boot. Replace the target machine with an identical decoy machine set up to capture whatever credentials are required to log in to the machine once BitLocker auto-unlocks, then use these to log in to Windows on the original machine and steal any encrypted data accessible by the user who logs in.
This would be more difficult to pull off in the presence of non-password security like a hardware token, as you'd need to forward the actual login UI to the decoy machine, but still not terribly difficult if the login UI will display on an externally-connected monitor and accept input from an externally-connected keyboard and pointing device, and the hardware security device connects via an external interface like USB.
I think it has the potential to create that situation if those features ever change. I should probably update that language, but I still feel from a consumer choice perspective, those solutions seem vendor specific and not governed by an open organization.
Between 2011 and 2013, multiple Linux / free software organisations raised the issue with the EC. There was an actual antitrust investigation which at the time was seen as what motivated Microsoft to open the solution to third parties by 2013.
So in a way, thank you EU for making it so we have choices at all.
With that said, I think the technology still does more to promote vendor lock-in and as others have said, it’s one windows update away from a dystopian hellscape where all your bits have been pre-approved by someone else.
I am starting to see the benefits to secure boot and TPM from a gaming perspective. I realize this can still be tampered with but it eliminates so many casual cheaters that the edge case is practically irrelevant.
I don't see how my TPM module will prevent me from using the machine the way I want. The offer of a cryptographic assurance to a 3rd party is something I happily provide in order to gain access to a competitive gaming resource. Cheaters really fucking suck and if this is what it takes to ruin their day, then fantastic. I'm looking forward to TPM3.0 now after seeing how ruinous this has been to their schemes. These tools are effective.
Battlefield 6 is especially problematic for malcontents because its developers also enjoy using statistical methods to detect cheaters. TPM2.0 + statistical methods + $69.99 per try = probably can't afford to play this game unfairly for very long. Even if you can afford it, the in game progression takes an eternity. You're gonna need that 8x scope if you want your "undetectable" frame scanning aimbot to be of any use.
> I don't see how my TPM module will prevent me from using the machine the way I want.
I guess people don't know how this particular dystopia gets implemented.
First a platform gets third parties (games, banks, etc.) to impose their attestation system on customers. Congrats, you're locked in! This is the gun they point at you but the bullet comes after.
Now you can't leave the platform or you lose all your games, have to get a new bank, etc. The more stuff they can get to require that, the more stuck you are. This also prevents any new competitors from building a network effect. But competition -- the ability to switch to a competitor -- is the only thing stopping them from being the worst people in the world. Ads in the start menu. Censoring whatever they don't like. If you want to buy something -- anything -- they want a 30% cut. They'll hide it from you but take it anyway. All your local files get uploaded to their cloud and the terms let them use it for AI training, or whatever else they want. And soon you have to pay a monthly fee if you don't want them to be deleted. Why would not paying also delete them from your local machine? Because screw you, you don't have a choice anymore.
> I don't see how my TPM module will prevent me from using the machine the way I want.
"Your version of TPM is unsupported. Please update your hardware to enjoy playing Battlefield 7". Your 69.99 per try just went up to 769.99 _for legitimate users_ because you need a new CPU with updated TPM for every new version. I'm being hyperbolic, but only slightly.
If you want a real example of this, Windows 11 requires TPM 2.0 to run. Hardware predating wide TPM2 adoption can be powerful enough to run Windows 11, except the company decided you need a new computer to do that.
> I am starting to see the benefits to secure boot and TPM from a gaming perspective. I realize this can still be tampered with but it eliminates so many casual cheaters that the edge case is practically irrelevant.
This is overkill for a feature that is only relevant to one specific use of PCs. Imagine if your PC got crippled because some server-farm IT admin benefits from it.
Not to mention hardware based cheating that just implements a fully compliant USB mouse, keyboard, and HDMI setup, and DMA like https://www.dma-cheats.com/
> If you want to be able to prevent root kits you need secure boot
I think this is very misleading. Secure boot was a response to the poor security of commodity operating systems, which allowed programs easy access to make low-level system modifications. In other words, the poor security models of commodity operating systems were the actual cause that allowed rootkits to spread and become a major threat requiring mitigation.
In an alternate world in which operating systems enforced least privilege on all programs, the likelihood of a rootkit spreading would be orders of magnitude smaller, almost not even worth mentioning. The motivation for secure boot in this world is really only to prevent supply chain attacks, which can also be solved by just buying hardware from reputable companies. Secure boot arguably would not have been created in this world, thus avoiding the new dangers inherent to it.
Yes, but when an individual hacker needs a secure computer and is deciding which one to buy, it does him no good to tell him that if the whole industry had evolved differently over the last four decades, he could have avoided secure boot: in the actual world, the only user-facing computers on the market with decent security use secure boot to help deliver that security, where "user-facing" means "used to browse the web and maybe other things".
Also remote attestation has pro-social uses. Without it, photographs will soon become useless as evidence because soon there will be no way to distinguish a photo of a real scene from the output of generative AI.
My point is that secure boot isn't the only way forward, and depending on your circumstances, a foundation built on something like seL4 could suffice for particular applications. And it doesn't even require a whole new OS or foundation like seL4, even Windows has the right core primitives if they're used in the right way [1]. And that work was from 2005, not 40 years ago, but still long before any of this really became an issue.
Coreboot with Heads and Qubes prevents malware that has inserted itself into the firmware of your ethernet driver, keyboard or block-storage device from modifying your software?