To add to this & the Jobs interview - an oil industry proverb: a healthy oil company has a geologist in charge, a mature one has an engineer in charge, a declining one has an accountant in charge, and a dying one has a lawyer in charge.
> A bit ironic though because the CEO of Boeing during their best years was William McPherson Allen, a lawyer.
Just as there are good engineers, there are also bad ones. Same for every profession.
I guess the question is: can Boeing really design a new plane where cost cutting, regulation interpretation and skirting, and greed are _not_ the driving factors?
It feels like whatever Boeing saves from all its nickel-and-diming, it ends up paying out in lawyers, fines, and damages. I wonder how they manage to see this as good business. Or maybe they hope that the next time they'll score big without any penalties?
One of the most exceptional CEOs I've worked with was a lawyer. I still think the proverb is largely correct, along with the other proverb about the exception proving the rule.
Well, the proverb doesn't necessarily attribute the death of the company to the lawyer.
If a company is dying (aka winding down), you most likely do in fact want a lawyer in charge, whatever their job title may be. For instance why would you put a scientist or engineer in charge of negotiating your acquisition?
It's a great proverb and in particular the "accountants in charge to extract maximum value after maturity, lawyers in charge at the end to wind it down or sell it off" part is accurate of many businesses in general. No company gets to live in the startup and growth stages forever. At a certain point shareholders decide to get everything they can out of their investment and move on.
It's listed there as a way that people use it, and then that usage is called objectionable and a misunderstanding.
I don't dispute that people use it that way but it's objectively a misuse. The phrase's misuse implies that evidence against a statement supports the statement.
> In many uses of the phrase, however, the existence of an exception is taken to more definitively 'prove' a rule to which the exception does not fit.
> In what Fowler describes as the "most objectionable" variation of the phrase,[1] this sort of use comes closest to meaning "there is an exception to every rule", or even that the presence of an exception makes a rule more true; these uses Fowler attributes to misunderstanding.
Try to understand that there is no individual ownership over turns of phrase, and that they tend to shift around over time. Bugs Bunny turned Nimrod from a byword for a competent hunter into an insult.
This is natural and all of your favorite words have or will be subject to it as long as there are humans to communicate with them.
Dictionaries - at least the ones I checked - mark the "very good" meaning of "egregious" as archaic. I'm only aware of the "very bad" meaning (in UK English), and was quite surprised, when studying maths, to learn of Gauss's "Theorema Egregium", and that the word could have positive connotations.
Shit (meaning “how true”), shit is veritably the aladeen of words. It can basically mean anything depending on usage, context, attitude, or tone of voice.
A phrase does not mean anything by itself. People mean something when they use it. You could argue that expressions carry some meaning, by virtue of shared use. But your definition does not align with the meaning most people make of this specific expression as you can witness above.
People misuse expressions. A common one is "the customer is always right". The actual wording is "the customer is always right in matters of taste", but people cut off the ending, which changes it from a sensible and useful proverb to a bunch of nonsense. Of course the customer isn't always right. They're always right in terms of what they want to buy, not in all other terms.
Similarly, an exception like a lawyer being a good CEO does not prove a rule like "lawyers are bad CEOs". It's nonsense. People who don't understand the proverb took it and misused it, and then others took after them, and here we are. I've been wondering about that proverb my entire life and never understood how it makes any sense. Now I finally do, and I'm glad the other commenter clarified it.
On top of what j5r5myk mentioned, there is a fairly good record of the origin of "the customer is always right" (described on Wikipedia), because it was something like a moderately well-known person's catchphrase, in an era when newspapers and marketing existed.
There’s some quibbling to be had about the meaning, but it puts it closer to “assume good faith” or something like that, rather than reducing it to just preferences.
> The earliest known printed mention of the phrase is a September 1905 article in the Boston Globe about Field, which describes him as "broadly speaking" adhering to the theory that "the customer is always right".
> However, John William Tebbel was of the opinion that Field never himself actually said such a thing, because he was "no master of idiom". Tebbel rather believed it probable that what Field would have actually said was "Assume the customer is right until it is plain beyond all question that he is not."
There is a common phenomenon where people claim proverbial quotes were originally longer. One I often hear is "Jack of all trades, master of none" originally including the follow-up "oftentimes better than a master of one."
If you research this, as well as "the customer is always right" as you claim, you will find no evidence of their longer 'original' forms [1].
But the intended insight isn't stupid, and "the exception proves the rule" is a natural, easily-inferred contraction of "the rarity of finding an exception proves the general validity of the rule".
The phrase "most people mean the wrong thing by this phrase" makes no sense. A phrase means what most people mean by saying it, or understand by hearing it. So, "the exception that proves the rule" is, as its main modern meaning, a joking way of admitting that a rule (especially one that the speaker had argued for) is not actually a universal rule, while maintaining that it generally holds true.
The examples of legal signs and so on are a more specific, technical, meaning that is only used in certain contexts, such as actual legal proceedings or at least informal discussions about laws or contract terms.
When /u/flkiwi above said this phrase, they obviously meant it in the joking sense I gave, and which they had actually explained above. They agree that, in general, lawyers make bad CEOs, but they also personally know of exceptions. This is not "wrong usage", as proven by the fact that everyone who read the comment understood exactly what they meant.
This whole thing reminds me of the people who complain that using "literally" as an intensifier, instead of in its primary meaning, is "wrong", with seemingly no understanding of how flourishes and rhetoric work (nor even of the history of words like "very", which used to be quite similar to "literally" a long time ago).
Words are just noises. Think of them as pointers. They point to a concept in the brain. What concept that may be differs from person to person. But as long as the words point to something, they aren't used wrongly.
The idea that there was some point in history where the pointer target was officially designated to be x is just false. That point in time never existed.
My point isn't that the use of the phrase is wrong, the point is that the colloquial understanding of the phrase is a bad concept.
[All] lawyers are bad CEOs is a statement that was made.
Evidence to the contrary was presented.
"The exception proves the rule" was used to dismiss that evidence.
It's used in a similar way as "God works in mysterious ways".
that's actually the correct use of the phrase "the exception proves the rule"
the rule is that the parking is allowed; the exception is that it's not allowed on Wednesdays; they didn't bother spelling out "parking is allowed at all other times except"
Yes, but in common usage it has come to also mean "the [rarity of finding an] exception proves the [general validity of the] rule", and it was clear from context which one the parent meant.
God, Internet pedants are exhausting. I KNOW, but it's a harmless rhetorical device. This begs the question of why you care. There you go, that's a good one to get fired up about.
I don't really care, but when I found out the actual meaning of the phrase (the usage of which never really made sense to me), it made a lot more sense to me. I thought it was interesting.
I'd also argue that "it's harmless" is not always accurate. Its usage dismisses counter-evidence to a statement. Depending on the case, it may or may not be harmless.
I think denying the antecedent (that’s what this is, right?) is a well known fallacy precisely because it is often the intended implication in typical speech.
There are always people who work out despite common sense saying they shouldn't. That doesn't mean common sense is wrong; it just means we don't understand what the real factors are.
He initially turned down the job because he felt that a lawyer wasn't the right person to run an engineering company, and from reports of people who worked with him he knew his knowledge limits and listened to the engineers. He took serious risks with the 707 and 747 projects because he trusted the people who understood the technology.
MBAs and final-gasp lawyers concentrate on making the reported number go up in the short term; they won't take a hit now for a payoff in ten years.
Fun fact: the 707 had the first implementation of "MCAS", because the plane had a tendency to pitch up in a certain flaps configuration. They added a stick nudger which applied light pressure in said config. Not a stick pusher, as it did not alert the pilots; it simply applied an extra input independent of the pilots. However, all pilots of the plane were made aware of this, which likely contributed to its certification.
Also, the 707 tail was extended by 40ft to give it better minimum ground speed control, and this was retroactively applied to already-built planes. Very interesting to see how this was handled in the past with a lawyer at the helm vs the current CEO during the launch of the 737 MAX.
I think there might be some confusion here. The vertical stabiliser (not the tail) was extended by 40 inches [0] to combat concerns about poor yaw control.
Although the Wikipedia article cites the UK ARB as the influence, it was also in response to the 1959 crash [1] of a 707 being used for training, in which Dutch roll was induced and later became so violent it ripped 3 engines off the wings.
Hmm, you've now highlighted an interesting thing - every company (I've seen) being run into the ground by MBAs and lawyers was run that way because they outright refused to trust their employees. The usual playbook is severing any transparency and communication and implementing more and more paperwork and oversight over employees, with no nuance. In other words - a complete lack of trust in the employed specialists being able to do their work.
yep, that's the most important thing about his operation of Boeing.
Both the 707 and 747 were "bet the company" projects, in particular the 747 pushed Boeing to the brink of bankruptcy. However both were major successes because they took a gamble on the future of the aviation industry.
In these days of "fiduciary responsibility" it's difficult to imagine any public company taking that kind of risk. Risk is what is supposed to generate returns.
Nokia is the best case study of what not to do. In 2005 Nokia launched the 770 Internet Tablet. It laid the groundwork for the modern smartphone, but management did not allow it to have a GSM modem, so it was not a smartphone. Only after the iPhone did Nokia launch the N900, but it was too late. Nokia did not believe in touch screens either.
The 770 and Maemo environment were pretty amazing back in the day - a high-resolution screen for the time, but a somewhat laggy interface given the available compute. The hardware was somewhat compromised - the half-height MMC storage expansion was particularly difficult to find. I still have one sitting around somewhere.
It did support touch, with a stylus built in - I forget if the stylus was needed or if you could use your bare fingers.
I got to briefly try a N900 back in the day. When it powered on you would briefly see the raw X background stipple and the x-shaped cursor before the window manager loaded. I liked the nerd factor of it but I knew at that point the company was in the weeds.
True. But Nokia could have made many improvements before Apple launched the iPhone. Instead, Nokia decided to stay only with Symbian, without touch. And we had to wait four years for Android to show what a Linux-with-Java smartphone would be. And we saw Nokia going down with Microsoft. Those were interesting times.
Maemo was an actual GNU/Linux, not just kernel with custom userland. Logging into a cluster from my N900 and having plots just appear on screen thanks to X network transparency is still one of the most futuristic things I have ever seen a computer do.
Interesting times indeed. It was also just before the Nokia acquisition that Microsoft was selling its first Android phone (under the Nokia brand), so they could see the winds of change.
Nokia already had Android phones released months before the Microsoft acquisition closed. Makes me wonder if Nokia would have pulled out completely from making Windows phones had Microsoft not purchased them.
Even Microsoft changed their tune, releasing Android phones themselves years later under the Surface brand.
If the 770 was anything like the N900 then the screen was resistive and the stylus was passive. So you could use your fingers but in my experience, for any finer task, you needed the stylus.
Damn, I really miss the N900. I was seriously using it as recently as about 4-5 years ago.
I loved my N9, truly one of the best experiences I've had with a mobile phone. I've watched Sailfish from a distance, but their very limited choice of phones has always been off-putting.
I feel like Nokia would still be a notable smartphone company, albeit not on the scale of Apple (as the single alternative to the Android ecosystem), if not for Microsoft and its plant delivering the actual blow.
Dennis Muilenberg has a BS in Aerospace Engineering and a MS in Aeronautics and Astronautics. He was Boeing’s CEO during and after the MAX disaster. He cannot be blamed for the first crash, but he absolutely bears a direct responsibility for every person who died in the second crash, as by that point he knew that he had delivered a product that had not been correctly and fully certified by the regulatory authority.
An ethical person with that knowledge, whether they be an engineer, lawyer, or a circus clown, would have fought tooth and nail to ensure the aircraft was grounded.
I am much more interested in the ethics of any particular leader, than their credentials.
The mistake was committed willfully well before the first crash. Their emails during the development and testing of the Max 8 show some strongly worded exchanges clearly indicating their full awareness of the troubles with the MCAS system. There was one simulated flight by a test pilot where the MCAS had activated and was deemed a catastrophic failure. In another instance, the chief test pilot said, "I basically lied to the regulators, unknowingly" - a mistake that he chose not to rectify. It was abundantly clear at that point that business as usual was going to cost lives. The entire top management knew this and yet, not one of them decided to step in.
The first crash was already too late for them. A team that behaved so callously earlier wasn't going to stop at that point without an external intervention. They were in fact attempting to scapegoat the pilots even after the second crash. Therefore, putting the blame on one CEO at the time of the first crash is illogical. The blame must fall on the team that established such an unbelievably flawed safety culture in the first place. Who was that? And why?
I'm not insisting that accountants and lawyers are unfit for top management. There are corporate portfolios that they're the best fit for - like accounting and law firms. Then there are the exceptions who do a sensible job even outside their portfolio. But I have seen both engineering heavy and accounting heavy managements. As expected, their priorities and operational philosophies are drastically different. But their influence doesn't end there. They also define the wider company culture - A culture that not even a CEO can change without significant personal effort against institutional inertia.
So, accountants or lawyers may be sufficient to lead IT companies. But if they choose to lead an industry where so many lives are at stake, they better hold back their profit-seeking instincts and understand the safety culture and the consequences of their decisions damn well.
In most cases, the successor to a founder CEO is a finance person - because their mandate is to massage the stock on behalf of the appointing board.
> declining one has an accountant in charge, and a dying one has a lawyer in charge.
You would think by now that HN would understand that there is a difference between the accountants and the finance guys. The greatest con that Finance achieved is convincing everyone that the accountants were to blame for everything finance did. But the accountants just handle the transactional details. They don't make the financial decisions.
And generally a dying company should have a lawyer in charge because their mandate is to try to negotiate selling off the company or its assets, or to run it through bankruptcy.
> They have proven over and over and over, that at every opportunity presented they will increase their own authority. I don’t believe I have personally witnessed any other advanced economy that so ardently marches towards authoritarianism.
This has been a slow 111-year project. See the opening of A. J. P. Taylor's English History 1914–1945:
> Until August 1914 a sensible, law-abiding Englishman could pass through life and hardly notice the existence of the state, beyond the post office and the policeman. He could live where he liked and as he liked. He had no official number or identity card. He could travel abroad or leave his country for ever without a passport or any sort of official permission. He could exchange his money for any other currency without restriction or limit. He could buy goods from any country in the world on the same terms as he bought goods at home. For that matter, a foreigner could spend his life in this country without permit and without informing the police. Unlike the countries of the European continent, the state did not require its citizens to perform military service. An Englishman could enlist, if he chose, in the regular army, the navy, or the territorials. He could also ignore, if he chose, the demands of national defence. Substantial householders were occasionally called on for jury service. Otherwise, only those helped the state who wished to do so.
> All this was changed by the impact of the Great War. The mass of the people became, for the first time, active citizens. Their lives were shaped by orders from above; they were required to serve the state instead of pursuing exclusively their own affairs. Five million men entered the armed forces, many of them (though a minority) under compulsion. The Englishman’s food was limited, and its quality changed, by government order. His freedom of movement was restricted; his conditions of work prescribed. Some industries were reduced or closed, others artificially fostered. The publication of news was fettered. Street lights were dimmed. The sacred freedom of drinking was tampered with: licensed hours were cut down, and the beer watered by order. The very time on the clocks was changed. From 1916 onwards, every Englishman got up an hour earlier in summer than he would otherwise have done, thanks to an act of parliament. The state established a hold over its citizens which, though relaxed in peacetime, was never to be removed and which the second World War was again to increase. The history of the English state and of the English people merged for the first time.
I think even 111 years is being too cautious. One only needs to look as far as the numerous vagrancy laws in England to see how a citizen might be prevented from living "where he liked and as he liked". Persecution of minorities including 'witches', Gypsies and Jews has been a continual theme. England has had banned books, even banned translations of the Holy Bible.
The Edwardian era was a very unusual period of liberality, I'll agree. But at least in that quote, Taylor is making some strange omissions that I hardly think are accidental: for a start, where is the mention of women's suffrage, introduced for the first time ever after the Great War?
Given the disparity in middle-class household incomes between the UK and the US, I suspect a majority of UK middle-class students would be eligible for some form of financial aid from US universities (assuming Oxbridge vs US equivalents with need-blind + full-need international admissions), meaning their net cost to attend could be lower than studying in the UK.
But the difference between UK student debt (basically a regressive time limited tax) and the US version of student debt (actual loan that will fuck you up) is key here.
I don't think it's possible to have a full student loan from the UK and study abroad the whole time. (You can do a year abroad, though.)
It's a proprietary form factor, so you're gambling on replacements being available down the line. I don't think it'll be easy to rebuild the battery pack without compromising the casing.
Given the battery specs and the form factor of the products, they could have used a 14500 cell that retails for $5. That's not as much recurring revenue as charging $80 for something proprietary though.
> For personal computing, Intel will build and offer to the market x86 system-on-chips (SOCs) that integrate NVIDIA RTX GPU chiplets. These new x86 RTX SOCs will power a wide range of PCs that demand integration of world-class CPUs and GPUs.
I don't think this is Intel trying to save itself; it's nVidia. Intel GPUs have been in 3rd place for a long time, but their integrated graphics are widely available and come in 2nd place because nVidia can't compete in the x86 space. Intel graphics have been closing the gap with AMD and are now within, what, a factor of 2 or less (1.5?)
IMHO we will soon see more small/quiet PCs without a slot for a graphics card, relying on integrated graphics. nVidia has no place in that future. But now, by dropping $5B on Intel they can get into some of these SoCs and not become irrelevant.
The nice thing for Intel is that they might be able to claim graphics superiority in SoC land since they are currently lagging in CPU.
Way back in the mid-to-late 2000s, Intel CPUs could be used with third-party chipsets not manufactured by Intel. This had been going on forever, but the space was particularly wild, with Nvidia being the most popular chipset manufacturer for AMD and also making inroads with Intel CPUs. It was an important enough market that when ALi introduced AMD chipsets that were better than Nvidia's, Nvidia promptly bought the company and spun down operations.
This was all for naught, as AMD purchased ATi, shutting out all other chipsets, and Intel did the same. Things actually looked pretty grim for Nvidia at this point in time. AMD was making moves that suggested APUs were the future, and Intel started releasing platforms with very little PCIe connectivity, prompting Nvidia to build things like the Ion platform that could operate over an anemic PCIe x1 link. These really were the beginnings of strategic moves to lock Nvidia out of their own market.
Fortunately, Nvidia won a lawsuit against Intel that required Intel to have PCIe x16 connectivity on their main platforms for 10 years or so, and AMD put out non-competitive offerings in the CPU space such that the APU takeoff never happened. If Intel had actually developed their integrated GPUs or won that lawsuit, or if AMD had actually executed, Nvidia might well be an also-ran right around now.
To their credit, Nvidia really took advantage of their competitors' inability to press their huge strategic advantage during that time. I think we're in a different landscape at the moment. Neither AMD nor Intel can afford to boot Nvidia, since consumers would likely abandon them for whoever could still slot in an Nvidia card. High-performance graphics is the domain of add-in boards now and will be for a while. Process node shrinks aren't as easy and cooling solutions are getting crazy.
But Nvidia has been shut out of the new handheld market and hasn't been a good total package for consoles, as SoCs rule the day in both those spaces, so I'm not super surprised at the desire for this pairing. But I did think Nvidia had given up these ambitions and was planning to build an adjacent ARM-based platform as a potential escape hatch.
> It was an important enough market that when ALi introduced AMD chipsets that were better than Nvidia's, Nvidia promptly bought the company and spun down operations.
This feels like a 'brand new sentence' to me because I've never met an ALi chipset that I liked. Every one I ever used had some shitty quirk that made VIA or SiS somehow more palatable [0] [1].
> Intel started releasing platforms with very little PCIe connectivity,
This is also a semi-weird statement to me, in that it was nothing new; Intel already had an established history of chipsets like the i810, 845GV, 865GV, etc., which all lacked AGP. [2]
[0] - Aladdin V with its AGP instabilities; MAGiK 1 with its poor handling of more than 2 or 3 'rows' of DDR (i.e. two double-sided sticks of DDR turned it into a shitshow no matter what you did to timings; 3 usually was 'ok-ish' and 2 was stable).
[1] - SiS 730 and 735 were great chipsets for the money and TBH the closest to the AMD 760 for stability.
[2] - If I had a dollar for every time I got to break the news to someone that there was no real way to put a GeForce or 'Radon' [3] in their eMachine, I could have had a then-decent down payment for a car.
[3] - Although, in an odd sort of foreshadowing, most people who called it a 'Radon', would specifically call it an AMD Radon... and now here we are. Oddly prescient.
I'm thinking the era of "great ALi chipsets" was more after they became ULi, in the Athlon 64 era.
I had a ULi M1695 board (ASRock 939SLI32-eSATA2) and it was unusual for the era in that it was a $90 motherboard with two full x16 slots. Even most of the nForce boards at the time had it set up as x8/x8. For like 10 minutes you could run SLI with it until nVidia deliberately crippled the GeForce drivers to not permit it, but I was using it with a pretty unambitious (but fanless-- remember fanless GPUs?) 7600GS.
They also did another chipset pairing that offered a PCIe x16 slot and a fairly compatible AGP-ish slot for people who had bought an expensive graphics card (which then meant $300 for a 256MB card) and wanted to carry it over. There were a few other boards using other chipsets (maybe VIA) that tried to glue together something like that, but the support was much more hit-or-miss.
OTOH, I did have an Aladdin IV ("TXpro") board back in the day, and it was nice because it supported 83MHz bus speeds when a "better" Intel TX board wouldn't. A K6-233 overclocked to 250MHz (3x83) was detectably faster than at 262MHz (3.5x75).
ALi was indeed pretty much on my avoid list for most of their history. It was only when they came out with the ULi M1695, made famous by the ASRock 939Dual-SATA2, that they were, out of nowhere, a contender for best. One of the coolest boards I ever owned, and it was rock solid for me even with all of the weird configs I ran on it. I kind of wish I hadn't sold it, even today!
I remember a lot of disappointed people on forums who couldn't upgrade their cheap PCs as well, but there were still motherboards with AGP available to slot Intel's best products into. Intel couldn't just remove AGP from the landscape altogether (assuming they wanted to) because they weren't the only company making Intel-supporting chipsets. IIRC Intel/AMD/Nvidia were not interested in making chipsets supporting both AGP and PCIe at all, but VIA/ALi and maybe SiS made them instead, because it was still a free-for-all space. Once that went away, Nvidia couldn't control their own destiny.
Nvidia does build SoCs already - the AGX line and other offerings. I'm curious why they want Intel despite having the technical capability to build SoCs.
I realize the AGX is more of a low-power solution, and it's possible that Nvidia is still technically limited when building SoCs, but this is just speculation.
Does anybody know the actual ground-truth reasoning for why Nvidia is investing in Intel despite the fact that Nvidia can make its own SoCs?
Sometimes HN users appear to have absolutely zero sense of scale. Lifetime sales numbers of those are the equivalent of hours to days of Switch 2 sales.
I think it is bad news for the GPU market (AMD has had a beachhead with their integrated solutions here as they've lost out elsewhere), but good for x86, which I've worried would be greatly diminished as Intel became less competitive.
That was targeted at supporting more tightly integrated and performant MacBooks... it flopped because Apple came up with the M1, not because it was bad per se.
I'm typing this post on the 395+ 128GB RAM model. IMO, the keyboard is better than the one in the newest MacBook Pro. Just enough travel, and quiet enough that I don't disturb co-workers when I type.
I use it for development running Fedora Workstation. My job involves spinning up lots of containers and K8S KIND clusters.
I often reach for it instead of my 14" M4 Macbook. However, I will choose the Macbook Pro when I know I'll be away from a charger for a while. The HP, as great as it is, still has bad battery life.
The only downside is that the webcam _does not work_ unless you use Ubuntu 20.04 w/ the OEM kernel package.
The ISP driver which will enable the camera to work is in the process of being up-streamed, though. I believe they're targeting early 2025 for mainline Linux support.
Do you feel a difference between Strix Halo and other x86 machines you could lay your hands on to date? I want one, but with an M2 Max macbook pro and Zen2 desktop it feels very hard to justify.
I have this laptop with this display configuration and it looks amazing. However on Arch with Gnome/Wayland I cannot get color management to work, which is a problem since this display has such a wide gamut. Opening HN on it for the first time I was blinded by the deepest orange nav bar I could imagine.
Where are you getting this information? For what it is worth, Wikipedia mentions the Pixel 6 on the eFuse page https://en.wikipedia.org/wiki/EFuse
Myself I have not reverse engineered the Titan M2 security chip, but surely it uses eFuse or OTP memory for anti rollback protection mechanisms and such.
These are really basic hardware security primitives. I'm curious why you're under the impression Pixels wouldn't use eFuse.
The Pixel 6 is only mentioned in regards to anti-rollback protection. This has nothing to do with unlocking and later relocking the bootloader. Pixels have always supported relocking the bootloader with a custom root of trust, i.e. custom AVB signing keys used by a custom, user-installed operating system.
The Pixel 6 is mentioned specifically about eFuses which is the technical detail that caught my attention in this thread.
> The Xbox 360, Nintendo Switch, Pixel 6 and Samsung Galaxy S22 are known for using eFuses this way.[8]
Anti-rollback protection is a security feature, eFuses are hardware primitives that can be used to implement it. Bootloader locking is another security feature that can be implemented with eFuses.
If you have any data denying the use of eFuses in the Pixel 6, please share it; that is what I was interested in, in this sub-thread. I really did not understand the relevance or the correctness of your comment.
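To make the "hardware primitive vs. security feature" distinction above concrete: a minimal conceptual sketch of how a monotonic eFuse bank can implement anti-rollback. Each blown fuse is irreversible, and the count of blown fuses encodes the minimum firmware version the device will accept. This is purely illustrative - the Titan M2's actual implementation is not public, and every name below is hypothetical.

    # Conceptual sketch of eFuse-based anti-rollback (illustrative only;
    # not the Titan M2's real design). A fuse, once blown, never resets.

    class FuseBank:
        def __init__(self, size: int = 64):
            self._fuses = [False] * size  # False = intact, True = blown

        def blown_count(self) -> int:
            return sum(self._fuses)

        def blow_up_to(self, n: int) -> None:
            # One-way operation: fuses can be set, never cleared.
            for i in range(min(n, len(self._fuses))):
                self._fuses[i] = True

    def verify_boot(firmware_version: int, bank: FuseBank) -> bool:
        """Refuse to boot firmware older than the fuse-encoded minimum."""
        if firmware_version < bank.blown_count():
            return False  # rollback attempt: reject the old firmware
        bank.blow_up_to(firmware_version)  # ratchet the minimum forward
        return True

The same primitive can back other one-way state transitions - e.g. permanently recording that a bootloader was ever unlocked - which is why "uses eFuses" by itself says nothing about whether a device can return to a fully trusted state.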
I thought you claimed that Pixels also use eFuses to disable certain features after unlocking the bootloader once, like Samsung devices do. That's why I pointed out that Pixel devices have always had support for relocking the bootloader with a custom root of trust.
Your response to this comment https://news.ycombinator.com/item?id=45244933 made it seem that way, because you appeared to disagree that "Pixel devices don't have anything like the Samsung Knox eFuse, which blows after running a third-party bootloader".
You mentioned devices being irreversibly "tainted" after unlocking the bootloader.
On Samsung devices, blowing the Knox eFuse permanently disables features tied to Knox (e.g. Samsung Pay, Secure Folder). ("can never go back to a state where it passes all checks")
Pixels do not have an equivalent eFuse that permanently disables features (discounting the ability to flash previous versions of Android). Restoring stock firmware and relocking the bootloader will give you a normal Pixel.
I was purely focusing on whether or not it uses eFuses, literally, which it 100% absolutely does. I was not making any other such claims.
Indeed it may be true today that "restoring stock firmware and relocking the bootloader will give you a normal Pixel", I completely understand what you mean.
But that is NOT the same thing as "Pixels do not have eFuses to flag devices that have been modified before". Please share data supporting this claim if you have it.
It is possible that existing Pixels have such eFuses that internally flag your device (perhaps bubbling up to the Google Play Integrity APIs) but they don't kill device features per Google's good will.
My question is 100% about the hardware inside the Titan M2 and how it is used by Google. I don't think the answer is public, and anyone who has reverse engineered it to such detail won't share the answer either.
> Because they had built the tsunami-wall to spec.
If you're referring to the Onagawa plant, one engineer (Yanosuke Hirai) pushed for the height of the wall to be increased beyond the original spec:
> A nuclear plant in a neighboring area, meanwhile, had been built to withstand the tsunamis. A solitary civil engineer employed by the Tohoku Electric Power Company knew the story of the massive Jogan tsunami of the year 869, because it had flooded the Shinto shrine in his hometown. In the 1960s, the engineer, Yanosuke Hirai, had insisted that the Onagawa Nuclear Power Station be built farther back from the sea and at higher elevation than initially proposed—ultimately nearly fifty feet above sea level. He argued for a seawall to surpass the original plan of thirty-nine feet. He did not live to see what happened in 2011, when forty-foot waves destroyed much of the fishing town of Onagawa, seventy-five miles north of Fukushima. The nuclear power station—the closest one in Japan to the earthquake’s epicenter—was left intact. Displaced residents even took refuge in the power plant’s gym.
In addition, they didn't have hydrogen recombiners, which for example are/were standard in all German plants. Those plants also had special requirements that the diesel backup generators be bunkered so they couldn't be knocked out by water.
The point is not about "someone may not err" but about "someone may err", or more precisely "someone WILL err", coupled with the effects of such mistakes.
Failing to correctly design, build, operate or maintain a wind turbine or solar panel isn't a big deal. Failing to do so with a nuclear reactor can become a huge and lasting disaster for many.
You are making the very common "mistake" of comparing 1 nuclear accident with 1 wind turbine accident.
And are completely missing that you need a LOT more wind turbines, and these have a lot more accidents.
For example, wind turbine accidents killed 14 people just in one year, 2011. How many people were killed in the UK in nuclear accidents that year? That decade.
Ladder accidents kill ~80 people per year in Germany.
The various estimates of "victims of nuclear" also neglect victims of such accidents. In 2011, two workers died while building the new EPR at Flamanville, and they aren't officially (nor, AFAIK, anywhere else) counted as nuclear victims.
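For what it's worth, the cleaner way to run this comparison is deaths per unit of energy generated, rather than per accident or per year. A minimal sketch of the arithmetic, with made-up illustrative inputs (not sourced statistics):

    # Normalize fatalities by energy produced, not by accident count.
    # All inputs below are assumed purely for illustration.

    def deaths_per_twh(deaths: float, twh: float) -> float:
        return deaths / twh

    wind = deaths_per_twh(deaths=14, twh=440)       # e.g. one year of a large wind fleet
    nuclear = deaths_per_twh(deaths=50, twh=2600)   # e.g. rare accidents amortized over a fleet

    print(f"wind:    {wind:.3f} deaths/TWh")
    print(f"nuclear: {nuclear:.3f} deaths/TWh")

Whatever numbers you plug in, the point stands: comparing one accident to one accident tells you nothing until you normalize by how much energy each fleet actually produced.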