Hacker News | dhx's comments

Are you referring to the 'GovPort' website episode of Utopia, season 3, episode 3 'Nation Shapers'[1] or a different episode?

[1] https://www.youtube.com/watch?v=_otJbx-PVOw


As someone working in IT, it was the most painful episode I've watched, so my brain managed to forget most of the details to protect me.

But seeing Ash reviewing all the previous versions at 3:08, I'd say yes, that's this episode!


It might have a minor beneficial impact on tourism in Saint Barthelemy and Norfolk Island, from geeks wanting a trendy new account registered in a territory with fewer than 1000 IPv4 addresses allocated.[1]

A more useful addition would be a contributions calendar similar to GitHub's [2] but focused on which time zones the user is active within, and importantly, the latency of the user's replies. It's trivial to fake geographic location observed through source IP addresses (or even RTT multilateration) but much harder to fake time zones a user is active within, particularly if monitoring latency of replies.
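
To make the time zone idea concrete: the signal is cheap to compute from public data by bucketing each comment's UTC timestamp by hour of day and seeing where the quiet hours fall; reply latency could be layered on top of the same data. A minimal sketch (the timestamps below are made up):

  #include <stdio.h>
  #include <time.h>

  /* Bucket a user's comment timestamps by UTC hour of day.  The hours with
     no activity hint at the commenter's local night, i.e. the plausible
     time zone band. */
  int main(void) {
      /* Hypothetical Unix timestamps of one user's comments. */
      time_t comments[] = { 1700000000, 1700031600, 1700063200, 1700090000 };
      size_t n = sizeof comments / sizeof comments[0];
      unsigned hist[24] = {0};

      for (size_t i = 0; i < n; i++) {
          struct tm *utc = gmtime(&comments[i]);   /* break down in UTC */
          if (utc) hist[utc->tm_hour]++;
      }
      for (int h = 0; h < 24; h++)
          printf("%02d:00 UTC  %u\n", h, hist[h]);
      return 0;
  }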

edit: To further clarify, I don't think a contributions calendar would be beneficial to HN either. I've never cared to think about the country a commenter resides in, and don't care about username/real name either unless the commenter is appealing to their own authority (e.g. "I am the author of this software"). Even then, the usefulness of such an appeal is often limited to being able to reverse-lookup the user's personal website (itself shown to be notable by other sources) for a link back to their HN profile.

[1] https://impliedchaos.github.io/ip-alloc/

[2] https://docs.github.com/en/account-and-profile/concepts/cont...


Amongst the numerous reasons why you _don't_ want to rush into implementing new algorithms is that even the _reference implementation_ of Kyber/ML-KEM (and most other early implementations) included multiple timing side channel vulnerabilities that allowed for key recovery.[1][2]

djb has been consistent in his view for decades that cryptography standards need to consider the foolproofness of implementation, so that a minor implementation mistake specific to the timing of particular instructions on particular CPU architectures, or to particular compiler optimisations, etc., doesn't break the implementation. See for example the many problems of the NIST P-224/P-256/P-384 ECC curves, which djb has been instrumental in fixing through widespread deployment of X25519.[3][4][5]
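
To make the class of bug concrete: KyberSlash was a secret-dependent division by the modulus q = 3329 in the message decoding step, which compiles to a variable-time divide instruction on many CPUs. The public fixes replace the division with a multiply and shift. A sketch of the pattern (the constants follow the upstream patch as I recall it; treat this as an illustration, not the reference code):

  #include <stdint.h>
  #include <stdio.h>

  #define KYBER_Q 3329

  /* Leaky pattern, in the style of the pre-fix reference code: the timing of
     the division can depend on the secret coefficient c on some CPUs and
     compilers. */
  static uint8_t decode_bit_leaky(uint16_t c) {
      return (uint8_t)(((((uint32_t)c << 1) + KYBER_Q / 2) / KYBER_Q) & 1);
  }

  /* Division-free variant in the style of the published KyberSlash patches:
     the divide by 3329 is replaced with a multiply and shift, so no
     data-dependent division instruction is emitted.  Constants are from
     memory of the upstream fix; verify against the actual commit. */
  static uint8_t decode_bit_fixed(uint16_t c) {
      uint32_t t = ((uint32_t)c << 1) + 1665;
      t *= 80635;
      t >>= 28;
      return (uint8_t)(t & 1);
  }

  int main(void) {
      /* Sanity check: both versions agree for every coefficient value. */
      for (uint32_t c = 0; c < KYBER_Q; c++)
          if (decode_bit_leaky((uint16_t)c) != decode_bit_fixed((uint16_t)c))
              printf("mismatch at %u\n", c);
      return 0;
  }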

[1] https://cryspen.com/post/ml-kem-implementation/

[2] https://kyberslash.cr.yp.to/faq.html / https://kyberslash.cr.yp.to/libraries.html

[3] https://en.wikipedia.org/wiki/Elliptic_curve_point_multiplic...

[4] https://safecurves.cr.yp.to/ladder.html

[5] https://cr.yp.to/newelliptic/nistecc-20160106.pdf


Given the emphasis on reliability of implementations of an algorithm, it's ironic that the Curve25519-based Ed25519 digital signature standard was itself specified and originally implemented in such a way as to lead to implementation divergence on what a valid and an invalid signature actually are. See https://hdevalence.ca/blog/2020-10-04-its-25519am/

Not a criticism, if anything it reinforces DJB's point. But it makes clear that ease of (proper) implementation also needs to cover things like proper canonicalization of relevant security variables, and ensuring that support for multiple modes of operation doesn't lead to different answers to security questions that are meant to give the same answer.


This logic does not follow. Your argument seems to be "the implementation has security bugs, so let's not ratify the standard." That's not how standards work though. Ensuring an implementation is secure is part of the certification process. As long as the scheme itself is shown to be provably secure, that is sufficient to ratify a standard.

If anything, standardization encourages more investment, which means more eyeballs to identify and plug those holes.


No, the argument is that the algorithm (as specified in the standard) is difficult to implement correctly, so we should tweak it/find another one. This is a property of the algorithm being specified, not just an individual implementation, and we’ve seen it play out over and over again in cryptography.

I’d actually like to see more (non-cryptographic) standards take this into account. Many web standards are so complicated and/or ill-specified that trillion dollar market cap companies have trouble implementing them correctly/consistently. Standards shouldn’t just be thrown over the wall and have any problems blamed on the implementations.


> No, the argument is that the algorithm (as specified in the standard) is difficult to implement correctly, so we should tweak it/find another one.

This argument is without merit. ML-KEM/Kyber has already been ratified as the PQC KEM standard by NIST. What you are proposing is that the NIST process was fundamentally flawed. This is a claim that requires serious evidence as backup.


You can't be serious. "The standard was adopted, therefore it must be able to be implemented in any or all systems?"

NIST can adopt and recommend whatever algorithms they might like using whatever criteria they decide they want to use. However, while the amount of expertise and experience on display by NIST in identifying algorithms that are secure or potentially useful is impressive, there is no amount of expertise or experience that guarantees any given implementation is always feasible.

Indeed, this is precisely why elliptic curve algorithms are often not available, in spite of a NIST standard being adopted like 8+ years ago!


I'm having trouble understanding your argument. Elliptic curve algorithms have been the mainstream standard for key establishment for something like 15 years now. The NIST standards for the P-curves are much, much older than 8 years.


> You can't be serious. "The standard was adopted, therefore it must be able to be implemented in any or all systems?"

If we did that we'd all be using Dual_EC...


DJB has specific (technical and non-conspiratorial) bones to pick with the algorithm. He’s as much an expert in cryptographic implementation flaws and misuse resistance as anybody at NIST. Doesn’t mean he’s right all the time, but blowing him off as if he’s just some crackpot isn’t even correctly appealing to authority.

I hate that his more tinfoil hat stuff (which is not totally unjustified, mind you) overshadows his sober technical contributions in these discussions.


There are like 3 cryptographers in all of NIST. NIST was a referee in the process. The bones he's picking are with the entire field of cryptography, not just NIST people.


> The bones he's picking are with the entire field of cryptography

Isn't that how you advance a field, though?

It has been a couple hundred years, but we used to think that disease was primarily caused by "bad humors".

Fields can and do advance. I'm not versed enough to say whether his criticisms are legitimate, but this doesn't sound like a problem, but part of the process, to me (and his article is documenting how some bureaucrats/illegitimate interests are blocking that advancement).

The "area adminstrator" being unable or unwilling to do basic math is both worrying, and undermines the idea that the standards that are being produced are worth anything, which is bad for the entire field.

If the standards are chock full of nonsense, then how does that reflect upon the field?


The standards people have problems with weren't run as open processes the way AES, SHA3, and MLKEM were. As for the rest of it: I don't know what to tell you. Sounds like a compelling argument if you think Daniel Bernstein is literally the most competent living cryptographer, or, alternately, if Bernstein and Schneier are the only cryptographers one can name.


In a lot of ways this seems, from the outside, to be similar to "Planck's principle"; e.g. physics advances one funeral at a time.


In exactly what sense? Who is the "old guard" you're thinking of here? Peter Schwabe got his doctorate 16 years after Bernstein. Peikert got his 10 years after.


They may not be involved with this process, but ITL has way more than 3 cryptographers.


> I hate that his more tinfoil hat stuff (which is not totally unjustified, mind you) overshadows his sober technical contributions in these discussions.

Currently he argues that NSA is likely to be attacking the standards process to do some unspecified nefarious thing in PQ algorithms, and he's appealing to our memories of Dual_EC. That's not tinfoil hat stuff! It's a serious possibility that has happened before (Dual_EC). True, no one knows for a fact that NSA backdoored Dual_EC, but it's very very likely that they did -- why bother with such a slow DRBG if not for this benefit of being able to recover session keys?


NSA wrote Dual EC. A team of (mostly European) academic cryptographers wrote the CRYSTALS constructions. Moreover, the NOBUS mechanism in Dual EC is obvious, and it's not at all clear where you'd do anything like that in Kyber, which goes out of its way not to have the "weird constants" problem that the P-curves (which practitioners generally trust) ended up with.


It took a couple of years to get the suspicion about Dual_EC out.


No it didn't. The problem with Dual EC was published in a rump session at the next CRYPTO after NIST published it. The widespread assumption was that nobody was actually using it, which was enabled by the fact that the important "target" implementations (most importantly RSA BSAFE, which I think a lot of people also assumed wasn't in common use, but I may just be saying that because it's what I myself assumed) were deeply closed-source.

None of this applies to anything else besides Dual EC.

That aside: I don't know what this has to do with anything I just wrote. Did you mean to respond to some other comment?


It's more like "the standard makes it easier to create insecure implementations." Our standards shouldn't just be "sufficient" they should be "robust."


This is like saying "just use C and don't write any memory bugs". Possible, but life could be a lot better if it weren't so easy to get it wrong.


Great, you’ve just convinced every C programmer to use a hand rolled AES implementation on their next embedded device. Only slightly joking.


If the standard had a clear algorithm -> source code mapping, then couldn't everyone just copy from there?


AES is actually a good example of why this doesn’t work in cryptography. Implementing AES without a timing side channel in C is pretty much impossible. Each architecture requires specific and subtle constructions to ensure it executes in constant time. Newer algorithms are designed to not have this problem (DJB was actually the one who popularized this approach).
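
To illustrate why: the natural way to write an S-box lookup indexes a table with secret data, and the cache observes that address. The textbook constant-time workaround reads every table entry and selects the wanted one with a mask, which is why plain C AES tends to be either leaky or slow. A rough sketch, not production code (real constant-time implementations typically bitslice instead):

  #include <stdint.h>

  /* Natural S-box lookup: the load address depends on the secret byte, so
     the data cache can leak it.  sbox[] stands in for the 256-entry table. */
  static uint8_t sub_byte_leaky(const uint8_t sbox[256], uint8_t secret) {
      return sbox[secret];                 /* secret-dependent memory access */
  }

  /* Constant-time alternative: touch all 256 entries and select the wanted
     one with a mask, so the access pattern is independent of the secret --
     but now every lookup costs 256 loads instead of one. */
  static uint8_t sub_byte_ct(const uint8_t sbox[256], uint8_t secret) {
      uint8_t out = 0;
      for (unsigned i = 0; i < 256; i++) {
          /* mask is 0xFF when i == secret, 0x00 otherwise, with no branch */
          uint8_t mask = (uint8_t)(((i ^ secret) - 1u) >> 8);
          out |= sbox[i] & mask;
      }
      return out;
  }

  int main(void) {
      uint8_t identity[256];
      for (unsigned i = 0; i < 256; i++) identity[i] = (uint8_t)i;
      /* sanity check with an identity table */
      return sub_byte_ct(identity, 0x5a) == sub_byte_leaky(identity, 0x5a) ? 0 : 1;
  }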


Reconcile this claim with, for instance, aes_ct64 in Thomas Pornin's BearSSL?

I'm familiar with Bernstein's argument about AES, but AES is also the most successful cryptography standard ever created.


Okay, I should've said implementing AES in C without a timing sidechannel performantly enough to power TLS for a browser running on a shitty ARMv7 phone is basically impossible. Also if only Thomas Pornin can correctly implement your cipher without assembly, that's not a selling point.

I'm not contesting AES's success or saying it doesn't deserve it. I'm not even saying we should move off it (especially now that even most mobile processors have AES instructions). But nobody would put something like a S-Box in a cipher created today.


If your point is "reference implementations have never been sufficient for real-world implementations", I agree, strongly, but of course that cuts painfully across several of Bernstein's own arguments about the importance of issues in PQ reference implementations.

Part of this, though, is that it's also kind of an incoherent standard to hold reference implementations to. Science proceeds long after the standard is written! The best/safest possible implementation is bound to change.


I don't think it's incoherent. On one extreme you have web standards, where it's now commonplace to not finalize standards until they're implemented in multiple major browser engines. Some web-adjacent IETF standards also work like this (WebTransport over HTTP3 is one I've been implementing recently).

I'm not saying cryptography should necessarily work this way, but it's not an unworkable policy to have multiple projects implement a draft before settling on a standard.


Look at the timeline for performant non-leaking implementations of Weierstrass curves. How long are you going to wait for these things to settle? I feel like there's also a hindsight bias that slips into a lot of this stuff.

Certainly, if you're going to do standards adoption by open competition the way NIST has done with AES, SHA3, and MLKEM, you're not going to be able to factor multiple major implementations into your process.


This isn’t black and white. There’s a medium between:

* Wait for 10 years of cryptanalysis (specific to the final algorithm) before using anything, which probably will be relatively meager because nobody is using it

* Expect the standardization process itself to produce a blessed artifact, to be set on fire as a false god if it turns out to be imperfect (or more realistically, just cause everybody a bunch of pain for 20 years)

Nothing would stop NIST from adding a post-competition phase where Google, Microsoft, Amazon, whoever the hell is maintaining OpenSSL, and maybe Mozilla implement the algorithm in their respective libraries and kick the tires. Maybe it’s pointless and everything we’d expect to get from cryptographers observing that process for a few months to a year has already been suitably covered, and DJB is just being prissy. I don’t know enough about cryptanalysis to know.

But I do feel very confident that many of the IETF standards I’ve been on the receiving end of could have used a non-reference implementation phase to find practical, you-could-technically-do-it-right-but-you-won’t issues that showed up within the first 6 months of people trying to use the damn thing.


I don't know what you mean by "kick the tires".

If by that you mean "perfect the implementation", we already get that! The MLKEM in Go is not the MLKEM in OpenSSL is not the MLKEM in AWS-LC.

If instead you mean "figure out after some period of implementation whether the standard itself is good", I don't know how that's meant to be workable. It's the publication of the standard itself that is the forcing function for high-quality competing implementations. In particular, part of arriving at high-quality implementations is running them in production, which is something you can't do without solving the coordination problem of getting everyone onto the same standard.

Here it's important to note that nothing we've learned since Kyber was chosen has materially weakened the construction itself. We've had in fact 3 years now of sustained (urgent, in fact) implementation and deployment (after almost 30 years of cryptologic work on lattices). What would have been different had Kyber been a speculative or proposed standard, other than it getting far less attention and deployment?

("Prissy" is not the word I personally would choose here.)


I mean have a bunch of competent teams that (importantly) didn’t design the algorithm read the final draft and write their versions of it. Then they and others can perform practical analysis on each (empirically look for timing side channels on x86 and ARM, fuzz them, etc.).

> If instead you mean "figure out after some period of implementation whether the standard itself is good", I don't know how that's meant to be workable.

The forcing function can potentially be: this final draft is the heir apparent. If nothing serious comes up in the next 6 months, it will be summarily finalized.

It’s possible this won’t get any of the implementers off their ass on a reasonable timeframe - this happens with web standards all the time. It’s also possible that this is very unlikely to uncover anything not already uncovered. Like I said, I’m not totally convinced that in this specific field it makes sense. But your arguments against it are fully general against this kind of phased process at all, and I think it has empirically improved recent W3C and IETF standards (including QUIC and HTTP2/3) a lot compared to the previous method.


Again: that has now happened. What have we learned from it that we needed to know 3 years ago when NIST chose Kyber? That's an important question, because this is a whole giant thread about Bernstein's allegation that the IETF is in the pocket of the NSA (see "part 4" of this series for that charming claim).

Further, the people involved in the NIST PQ key establishment competition are a murderers row of serious cryptographers and cryptography engineers. All of them had the knowhow and incentive to write implementations of their constructions and, if it was going to showcase some glaring problem, of their competitors. What makes you think that we lacked implementation understanding during this process?


I don’t think IETF is in the pocket of the NSA. I really wish the US government hadn’t hassled Bernstein so much when he was a grad student, it would make his stuff way more focused on technical details and readable without rolling your eyes.

> Further, the people involved in the NIST PQ key establishment competition are a murderers row of serious cryptographers and cryptography engineers.

That’s actually my point! When you’re trying to figure out if your standard is difficult to implement correctly, that everyone who worked on the reference implementations is a genius who understands it perfectly is a disadvantage for finding certain problems. It’s classic expert blindness, like you see with C++ where the people working on the standard understand the language so completely they can’t even conceive of what will happen when it’s in the hands of someone that doesn’t sleep with the C++ standard under their pillow.

Like, would anyone who developed ECC algorithms have forgotten to check for invalid curve points when writing an implementation? Meanwhile among mere mortals that’s happened over and over again.


I don't think this has much of anything to do with Bernstein's qualms with the US government. For all his concerns about NIST process, he himself had his name on a NIST PQC candidate. Moreover, he's gotten into similar spats elsewhere. This isn't even the first time he's gotten into a heap of shit at IETF/IRTF. This springs to mind:

https://mailarchive.ietf.org/arch/msg/cfrg/qqrtZnjV1oTBHtvZ1...

This wasn't about NSA or the USG! Note the date. Of course, had this happened in 2025, we'd all know about it, because he'd have blogged it.

But I want to circle back to the point I just made: you've said that we'd all be better off if there was a burning-in period for implementors before standards were ratified. We've definitely burnt in MLKEM now! What would we have done differently knowing what we now know?


> What would we have done differently knowing what we now know?

With the MLKEM standard? Probably nothing, Bernstein would have done less rambling in these blog posts if he was aware of something specifically wrong with one of the implementations. My key point here was that establishing an implementation phase during standardization is not an incoherent or categorically unjustifiable idea, whether it makes sense for massive cryptographic development efforts or not. I will note that something not getting caught by a potential process change is a datapoint that it’s not needed, but isn’t dispositive.

I do think there is some baby in the Bernstein bathwater that is this blog post series though. His strongest specific point in these posts was that the TLS working group adding a cipher suite with a MLKEM-only key exchange this early is an own goal (but that’s of course not the fault of the MLKEM standard itself). That’s an obvious footgun, and I’ll miss the days when you could enable all the standard TLS 1.3 cipher suites and not stress about it. The arguments to keep it in are legitimately not good, but in the area director’s defense we’re all guilty of motivated reasoning when you’re talking to someone who will inevitably accuse you of colluding with the NSA to bring about 1984.


In what way is adding an MLKEM-only code point an "own goal"? Exercise for the reader: find the place where Bernstein proposed we have hybrid RSA/ECDH ciphersuites.


Yeah except there are certified versions of AES written in C. Which makes your point what exactly?


> See for example the many problems of NIST P-224/P-256/P-384 ECC curves

What are those problems exactly? The whitepaper from djb only makes vague claims about the NSA being a malicious actor, but after ~20 years no backdoors or intentional weaknesses have been reliably proven?


As I understand it, a big issue is that they are really hard to implement correctly. This means that backdoors and weaknesses might not exist in the theoretical algorithm, but still be common in real-world implementations.

On the other hand, Curve25519 is designed from the ground up to be hard to implement incorrectly: there are very few footguns, gotchas, and edge cases. This means that real-world implementations are likely to be correct implementations of the theoretical algorithm.

This means that, even if P-224/P-256/P-384 are on paper exactly as secure as Curve25519, they could still end up being significantly weaker in practice.
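
To give a flavour of what "few footguns" looks like in code: the X25519 Montgomery ladder handles each secret scalar bit with a branch-free conditional swap rather than an if, so the instruction and memory access sequence doesn't depend on the key. A minimal sketch of the usual pattern (illustrative, not lifted from any particular library):

  #include <stdint.h>

  /* Swap a and b when bit == 1 and leave them unchanged when bit == 0,
     using only masking and XOR so there is no secret-dependent branch or
     memory access.  In a real X25519 ladder, a and b are multi-limb field
     elements and the same mask is applied to every limb. */
  static void cswap(uint64_t *a, uint64_t *b, uint64_t bit) {
      uint64_t mask = 0 - bit;             /* all-zeros or all-ones */
      uint64_t t = mask & (*a ^ *b);
      *a ^= t;
      *b ^= t;
  }

  int main(void) {
      uint64_t a = 1, b = 2;
      cswap(&a, &b, 1);                    /* swaps */
      cswap(&a, &b, 0);                    /* no-op */
      return (a == 2 && b == 1) ? 0 : 1;
  }

Even this pattern can still leak through power or EM analysis on small devices, which is the kind of leak discussed further down the thread.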


I tried to defend a similar argument in a private forum today and basically got my ass handed to me. In practice, not only would modern P-curve implementations not be "significantly weaker" than Curve25519 (we've had good complete addition formulas for them for a long time, along with widespread hardware support), but Curve25519 causes as many (probably more) problems than it solves --- cofactor problems being more common in modern practice than point validation mistakes.

In TLS, Curve25519 vs. the P-curves are a total non-issue, because TLS isn't generally deployed anymore in ways that even admit point validation vulnerabilities (even if implementations still had them). That bit, I already knew, but I'd assumed ad-hoc non-TLS implementations, by random people who don't know what point validation is, might tip the scales. Turns out guess not.

Again, by way of bona fides: I woke up this morning in your camp, regarding Curve25519. But that won't be the camp I go to bed in.


I agree that Curve25519 and other "safer" algorithms are far from immune to side channel attacks in their implementation. For example, [1] is a single-trace EM side channel key recovery attack against Curve25519 as implemented in MbedTLS on an ARM Cortex-M4. This implementation had the benefit of a constant-time Montgomery ladder, something NIST P-curve implementations have traditionally lacked an equivalent of, but it nonetheless failed due to a conditional swap instruction that leaked secret state via EM.

The question is generally, could a standard in 2025 build upon decades of research and implementation failures to specify side channel resistant algorithms to address conditional jumps, processor optimisations for math functions, etc which might leak secret state via timing, power or EM signals. See for example section VI of [1] which proposed a new side channel countermeasure that ended up being implemented in MbedTLS to mitigate the conditional swap instruction leak. Could such countermeasures be added to the standard in the first instance, rather than left to implementers to figure out based on their review of IACR papers?

One could argue that standards are simply following interests of standards proposers and organisations who might not care about cryptography implementations on smart cards, TPMs, etc, or side channel attacks between different containers on the same host. Instead, perhaps standards proposers and organisations only care about side channel resistance across remote networks with high noise floors for timing signals, where attacks such as [2] (300ns timing signal) are not considered feasible. If this is the case, I would argue that the standards should still state their security model more clearly, for example:

* Is the standard assuming the implementation has a noise floor of 300ns for timing signals, 1ms, etc? Are there any particular cryptographic primitives that implementers must use to avoid particular types of side channel attack (particularly timing)?

* Implementation fingerprinting resistance/avoidance: how many choices can an implementation make that may allow a cryptosystem party to be deanonymised by the specific version of a crypto library in use?[3] Does the standard provide any guarantee for fingerprinting resistance/avoidance?

[1] Template Attacks against ECC: practical implementation against Curve25519, https://cea.hal.science/cea-03157323/document

[2] CVE-2024-13176 openssl Timing side-channel in ECDSA signature computation, https://openssl-library.org/news/vulnerabilities/index.html#...

[3] Table 2, pyecsca: Reverse engineering black-box elliptic curve cryptography via side-channel analysis, https://tches.iacr.org/index.php/TCHES/article/view/11796/11...


> As I understand it, a big issue is that they are really hard to implement correctly.

Any reference for the "really hard" part? That is a very interesting subject and I can't imagine it's independent of the environment and development stack being used.

I'd welcome any standard that's "really hard to implement correctly" as a testbed for improving our compilers and other tools.


I posted above, but most of the 'really hard' bits come from the unreasonable complexity of actual computing vs the more manageable complexity of computing-with-idealized-software.

That is, an algorithm and compiler and tool safety smoke test and improvement thereby is good. But you also need to think hard about what happens when someone induces an RF pulse at specific timings targeted at a certain part of a circuit board, say, when you're trying to harden these algorithmic implementations. Lots of things that compiler architects typically say is "not my problem".


It would be wise for people to remember that it's worth doing basic sanity checks before making claims like "no backdoors from the NSA". Strong encryption has been restricted historically, so we had things like DES, 3DES and Crypto AG. In the modern internet age, Juniper had a bad time with this one: https://www.wired.com/2013/09/nsa-backdoor/.

Usually it’s really hard to distinguish intent, and so it’s possible to develop plausible deniability with committees. Their track record isn’t perfect.

With WPA3, cryptographers warned about the known pitfall of standardizing a timing-sensitive PAKE, and Harkin got it through anyway. Since it was a standard, the WiFi committee gladly selected it, which then resulted in dragonbleed among other bugs. The techniques for hash2curve have since patched that.


It's "Dragonblood", not "Dragonbleed". I don't like Harkin's PAKE either, but I'm not sure what fundamental attribute of it enables the downgrade attack you're talking about.

When you're talking about the P-curves, I'm curious how you get your "sanity check" argument past things like the Koblitz/Menezes "Riddle Wrapped In An Enigma" paper. What part of their arguments did you not find persuasive?


Yes, Dragonblood. I'm not speaking of the downgrade but of the timing side channels, which were called out very loudly and then ignored during standardization. And then the PAKE showed up in WPA3 of all places; that was the key issue, and it was extended further in a Brainpool-curve-specific attack against the proposed initial mitigation. It's a good example of error by committee. I don't address that article and don't know why the NSA advised migration that early.

The Riddle paper I've not read in a long time, if ever, though I don't understand the question. As Scott Aaronson recently blogged, it's difficult to predict human progress with technology, and it's possible we'll see Shor's algorithm running publicly sooner than consensus expects. It could be that in 2035 the NSA's call 20 years prior looks like it was the right one, in that ECC is insecure, but that wouldn't make the replacements secure by default of course.


Aren't the timing attacks you're talking about specific to oddball parameters for the handshake? If you're doing Dragonfly with Brainpool curves you're specifically not doing what NSA wants you to do. Brainpool curves are literally a rejection of NIST's curves.

If you haven't read the Enigma paper, you should do so before confidently stating that nobody's done "sanity checks" on the P-curves. Its authors are approximately as authoritative on the subject as Aaronson is on his. I am specifically not talking about the question of NSA's recommendation on ECC vs. PQ; I'm talking about the integrity of the P-curve selection, in particular. You need to read the paper to see the argument I'm making; it's not in the abstract.


Ah, now I see what the question was, as it seemed like a non sequitur. I misunderstood the comment by foxboron to be about concerns over any backdoors, not that P-256 is backdoored. I hold no such view of that; surely Bitcoin should be good evidence.

Instead I was stating that weaknesses in cryptography have been historically put there with some NSA involvement at times.

For Dragonblood: the Brainpool curves do have a worse leak, but as stated in the Dragonblood paper, "we believe that these sidechannels are inherent to Dragonfly". The first attack submission did hit P-256 setups before the minimal iteration count was increased, and afterward it was more applicable to same-system cache/microarchitectural bugs. These attacks were more generally correctly mitigated when H2C deterministic algorithms rolled out. There were of course many bad choices made that make the PAKE more exploitable: putting the client MAC in the pre-commits, having that downgrade, including Brainpool curves. But to my point on committees: cryptographers warned strongly during standardization that this could be an attack, and no course correction was taken.


Can I ask you to respond to the "sanity check" argument you made upthread? What is the "sanity checking" you're implying wasn't done on the P-curves?


I wasn't talking about the P-curves; I was talking about the NSA having acted as a malicious actor in general, so I misunderstood their comment.


The NSA changed the S-boxes in DES, and this made people suspicious that they had planted a back door. But then, when differential cryptanalysis was discovered, people realized that the NSA's changes to the S-boxes made them more secure against it.


That was 50 years ago. And since then we have an NSA employee co-authoring the paper which led to Heartbleed, the backdoor in Dual EC DRBG which has been successfully exploited by adversaries, and documentation from Snowden which confirms NSA compromise of standards setting committees.


> And since then we have an NSA employee co-authoring the paper which led to Heartbleed

I'm confused as to what "the paper which led to Heartbleed" means. A paper proposing/describing the heartbeat extension? A paper proposing its implementation in OpenSSL? A paper describing the bug/exploit? Something else?

And in addition to that, is there any connection between that author and the people who actually wrote the relevant (buggy) OpenSSL code? If the people who wrote the bug were entirely unrelated to the people authoring the paper then it's not clear to me why any blame should be placed on the paper authors.


> I'm confused

The original paper which proposed the OpenSSL Heartbeat extension was written by two people, one worked for NSA and one was a student at the time who went on to work for BND, the "German NSA". The paper authors also wrote the extension.

I know this because when it happened, I wanted to know who was responsible for making me patch all my servers, so I dug through the OpenSSL patch stream to find the authors.


What does that paper say about implementing the TLS Heartbeat extension with a trivial uninitialized buffer bug?


About as much as Jia Tan said about implementing the XZ backdoor via an inconspicuous typo in a CMake file. What's your point?


I'm asking what the paper has to do with the vulnerability. Can you answer that? Right now your claim basically comes down to "writing about CMake is evidence you backdoored CMake".


> Right now your claim basically comes down to "writing about CMake is evidence you backdoored CMake".

This statement makes it clear to me that you don't understand a thing I've said, and that you don't have the necessary background knowledge of Heartbleed, the XZ backdoor, or concepts such a plausible deniability to engage in useful conversation about any of them. Else you would not be so confused.

Please do some reading on all three. And if you want to have a conversation afterwards, feel free to make a comment which demonstrates a deeper understanding of the issues at hand.


Sorry, you're not going to be able to bluster your way through this. What part of the paper you're describing instructed implementers of the TLS Heartbeat extension to copy data into an uninitialized buffer and then transmit it on the wire?


> What part of the paper you're describing instructed implementers of the TLS Heartbeat extension to copy data into an uninitialized buffer and then transmit it on the wire?

That's a very easy question to answer: the implementation the authors provided alongside it.

If you expect authors of exploits to clearly explain them to you, you are not just ignorant of the details of backdoors like the one in XZ (CMake was never backdoored, a "typo" in a CMake file bootstrapped the exploit in XZ builds), but are naive to an implausible degree about the activities of exploit authors.

Even the University of Minnesota did not publicly state "we're going to backdoor the Linux kernel" before they attempted to do so: https://cyberir.mit.edu/site/how-university-got-itself-banne...

If you tell someone you're going to build an exploit and how, the obvious response will be "no, we won't allow you to." So no exploit author does that.


Which "paper" are you referring to?


Think the above poster is full of bologna? It's less painful for everyone involved, and the readers, to just say that and get that out of the way rather than trying to surgically draw it out over half a dozen comments. I see you do this often enough that I think you must get some pleasure out of making people squirm. We know you're smart already!


I think their argument is verkakte but I literally don't know what they're talking about or who the NSA stooge they're referring to is, and it's not so much that I want to make them squirm so much as that I want to draw the full argument out.

I think your complaint isn't with me, but with people who hedge when confronted with direct questions. I think if you look at the thread, you'll see I wasn't exactly playing cards close to my chest.


I don't make a habit of googling things for people when they could do it just as quickly themselves. There is only one paper proposing the OpenSSL heartbeat feature. So I have not been unclear, nor can there be any confusion about which it is. Perhaps we'll learn someday what tptacek expects to find or not to find in it, but he'll have to spend 30 seconds with Google. As I did.

Informing one's self is a pretty low bar for having a productive conversation. When one party can't be arsed to take the initiative to do so, that usually signals the end of useful interaction.

A comment like "I googled and found this paper... it says X... that means Y to me." would feel much less like someone just looking for an argument, because it involves effort and stating a position.

If he has a point, he's free to make it. Everything he needs is at his fingertips, and there's nothing I could do to stop him, nor would I want to. I asked for a point first thing. All I've gotten in response is combative rhetoric which is neither interesting nor informative.


Your argument that Heartbleed was intentional is very weak.


Means, motive, and opportunity. Seems to check all the boxes.

There's no conclusive evidence that it wasn't purposeful. And plenty of evidence of past plausibly deniable attempts. So you can believe whatever lets you sleep better at night.


Ah, that clears up the confusion. Thank you for taking the time to explain!


What's the original paper? The earliest thing I can find is an RFC.


I'm pretty sure he meant the RFC. (Insert "The German Three" meme).


The NSA also wanted a 48-bit implementation, which was sufficiently weak to brute force with the resources they had. The industry and IBM initially wanted 64-bit. IBM compromised and gave us 56-bit.


Yes, NSA made DES stronger. After first making it weaker. IBM had wanted a 128-bit key, then they decided to knock that down to 64-bit (probably for reasons related to cost, this being the 70s), and NSA brought that down to 56-bit because hey! we need parity bits (we didn't).


They're vulnerable to "High-S" malleable signatures, while ed25519 isn't. No one is claiming they're backdoored (well, some people somewhere probably are), but they do have failure modes that ed25519 doesn't which is the GP's point.


In the NIST curve arena, I think DJB's main concern is engineering implementation - from an online slide deck he published:

  We’re writing a document “Security dangers of the NIST curves”
  Focus on the prime-field NIST curves
  DLP news relevant to these curves? No
  DLP on these curves seems really hard
  So what’s the problem?
  Answer: If you implement the NIST curves, chances are you’re doing it wrong
  Your code produces incorrect results for some rare curve points
  Your code leaks secret data when the input isn’t a curve point
  Your code leaks secret data through branch timing
  Your code leaks secret data through cache timing
  Even more trouble in smart cards: power, EM, etc.
  Theoretically possible to do it right, but very hard
  Can anyone show us software for the NIST curves done right?
As to whether or not the NSA is a strategic adversary to some people using ECC curves, I think that's right in the mandate of the org, no? If a current standard is super hard to implement, and theoretically strong at the same time, that has to make someone happy on a red team. At least, it would make me happy, if I were on such a red team.


He does a motte-and-bailey thing with the P-curves. I don't know if it's intentional or not.

Curve25519 was a materially important engineering advance over the state of the art in P-curve implementations when it was introduced. There was a window of time within which Curve25519 foreclosed on Internet-exploitable vulnerabilities (and probably a somewhat longer period of time where it foreclosed on some embedded vulnerabilities). That window of time has pretty much closed now, but it was real at the time.

But he also does a handwavy thing about how the P-curves could have been backdoored. No practicing cryptography engineer I'm aware of takes these arguments seriously, and to buy them you have to take Bernstein's side over people like Neal Koblitz.

The P-curve backdoor argument is unserious, but the P-curve implementation stuff has enough of a solid kernel to it that he can keep both arguments alive.


Quite true, but the Dual_EC backdoor claim is serious. DJB's point that we should design curves with "nothing up my sleeve" is a nice touch.


See, this gets you into trouble, because Bernstein has actually a pretty batshit take on nothing-up-my-sleeve constructions (see the BADA55 paper) --- and that argument also hurts his position on Kyber, which does NUMS stuff!


Link?



There’s also a more approachable set of slides on the topic at https://cr.yp.to/talks/2025.11.14/slides-djb-20251114-safecu...


What do you think of those slides?


I didn’t see anything “batshit” in either the paper or the slides.


Say more. What do you think of his argument? I paraphrased it downthread. Do you think I did so accurately? If not: what did I get wrong?


At least in terms of the Bada55 paper, I think he writes in a fairly jocular style that sounds unprofessional unless you read his citations as well. You seem to object to his occasional jocularity and take it as prima facie evidence of him being “batshit”. Given that you are well known for a jocular writing style, perhaps you should extend some grace.

The slides seem like a pretty nice summary of the 2015-era SafeCurves work, which you acknowledge elsewhere on this site (this thread? They all blend together) was based on good engineering.


No, what I'm saying has only to do with the substance of his claims, which I now think you don't understand, because I laid them out straightforwardly (I might have been wrong, but I definitely wasn't making a tone argument) and you came back with this. People actually do work in this field. You can't just bluster your way through it.

This is a "challenge" with discussing Bernstein claims on Hacker News and places like it --- the threads are full of people who know two cryptographers in the whole world (Bernstein and Schneier) and axiomatically derive their claims from "whatever those two said is probably true". It's the same way you get these inane claims that Kyber was backdoored by the NSA --- by looking at the list of authors on Kyber and not recognizing a single one of them.

What do you think about Bernstein's arguments for SNTRUP being safe while Kyber isn't? Super curious. I barely follow. Maybe you've got a better grip on the controversy.


I’m not sure why you’re hung up on SNTRUP, since DJB didn’t submit it past round 2 of NISTPQC. In round 3, DJB put his full weight behind Classic McEliece.

You’ve previously argued that “cryptosystems based on ring-LWE hardness have been worked on by giants in the field since the mid-1990s” and suggested this is a point in Kyber’s favor. Well, news flash, McEliece has been worked on by giants in the field for 45 years. It shows up in NSA’s declassified internal history book, though their insights into the crypto system are still classified to this day.


How long do you think people have been working on lattice cryptography?


Lattices themselves have been analyzed since the days of Gauss. Lattice cryptography is only a couple decades old (in the unclassified literature).

The first proposed lattice-based cryptosystem was completely broken within 2 years of its announcement, which is a lovely harbinger of Kyber's fate.


That's a funny claim given NTRU goes back to 1996 and was a PQC finalist. I barely know what I'm talking about here and even I think you're bluffing your way through this. At this point you're making arguments Bernstein would presumably himself reject!


Since you've been very strident throughout this thread I'm wondering if you're going to have a response to this. Similarly, I'm curious, as a scholar of Bernstein's cryptography writing --- did the MOV attack (prominently featured on Safecurves) serve as a lovely harbinger of the failure of elliptic curve cryptography?


I tried a couple searches and I forget which calculator-speak version of "BADASS" Bernstein actually used, but the concept of the paper† is that all the NUMS-style curves are suspect because you can make combinations of mathematical constants say whatever you want them to say (in combination), and so instead you should pick curve constants based purely on engineering excellence, which nobody could ever disagree about or (looks around the room) start huge conspiracy theories over.

† as I remember it


Well, DJB also focused on "nothing up my sleeve" design methodology for curves. The implication was that any curves that were not designed in such a way might have something nefarious going on.


Dual_EC's backdoor can't be proven, but it's almost certainly real.


Take for example Chongqing, China, which is one of the world's cloudiest and most overcast cities.[1] This is easily confirmed by the lack of cloudless satellite imagery.[2] You could get the forecast correct most of the time by just assuming it will be cloudy.

What is more interesting for meteorological forecasting is the time-sensitive details such as:

1. We know severe storms will impact city X at approximately Ypm tomorrow. Will they include large hailstones? A severe and destructive downdraft / tornado? Along what path will the most damage occur, and how much notice can we provide to those in the path, even if it's just 30min before the storm arrives?

2. A large wildfire breaks out near city X and is starting to form its own weather patterns.[3] What are the possible scenarios for fire tornadoes, lightning, etc. to form, and when/where? Is the wind direction change more likely to happen at Ypm or Y+2pm?

I'm skeptical that AI models would excel in these areas because of the time sensitivity of input data as well as the general lack of accurate input data (impacting human analysis too).

Maybe AI models would be better than humans at making longer term climate predictions such as "If [particular type of ENSO/IOD/etc event] is occurring, the number of cloudy days in [city] is expected to be [quantity]/month in [month] versus [quantity2]/month if the event was not occurring." It's not that humans would be unable to arrive at this type of result -- just that it would be tedious and resource intensive to do so.

[1] https://en.wikipedia.org/wiki/List_of_cities_by_sunshine_dur...

[2] https://imagehunter.apollomapping.com/search/90e4893eeeaa48a...

[3] https://en.wikipedia.org/wiki/Cumulonimbus_flammagenitus


If the government wanted to pump USD$1T into the economy, is investment into a stack of sheds full of rapidly depreciating computers the most effective use of USD$1T?

Some example contrasting options:

- Worldwide investment in 300mm wafer fab equipment is projected to be USD$107-138B per year through to 2028.[1] USD$1T buys 100% of the global production of 300mm wafer fab equipment for about 7 years.

- European Union countries are projected to spend approximately USD$250B on electricity generation and transmission infrastructure projects in 2025.[2] The US is projected to spend a similar amount.[3] China is projected to spend approximately USD$460B.[4] USD$1T buys 4 years worth of European Union or US expenditure on electricity generation and transmission infrastructure, or 2 years worth of Chinese expenditure.

- Worldwide biopharmaceutical R&D was estimated to amount to USD$276B in 2021.[5] More conservative estimates include USD$102B in 2024.[6] Using the larger estimate, USD$1T buys 3.5 years of global biopharmaceutical R&D.

[1] https://www.trendforce.com/news/2025/10/14/news-2nm-race-dri...

[2] https://www.iea.org/reports/world-energy-investment-2025/eur...

[3] https://www.iea.org/reports/world-energy-investment-2025/uni...

[4] https://www.iea.org/reports/world-energy-investment-2025/chi...

[5] doi:10.1038/d41573-024-00131-2 (https://www.analysisgroup.com/globalassets/insights/publishi...)

[6] https://www.iqvia.com/blogs/2025/06/global-trends-in-r-and-d...


I know it's very unpopular on HN to give the benefit of the doubt to either OpenAI or to the US Government. I'll do it anyway, torpedoes be damned.

First, the $1T number seems to be completely pulled out of thin air. There were recent speculations that OpenAI is working on an IPO at the level of $1T, but obviously this has nothing to do with US loan guarantees of $1T. The talk about US loan guarantees originated with the WSJ interview of the OpenAI CFO, Sarah Friar. She was very clear that she can't give any details, because nothing is concrete yet. She did not mention any numbers, at all. The $1T is pure speculation, based on nothing.

Second, Sarah Friar mentioned that the US Government is very receptive to the entire sector, and OpenAI is always consulted. But OpenAI is not lobbying the government for loan guarantees for themselves only. If there are any such loan guarantees (which is very likely), they'll be for AI investments in the US in general, so, for example, Anthropic and Google will also benefit (and probably Nvidia, AMD, Intel, Oracle, Amazon, Meta, etc).

Third, Sarah Friar used the word "backstop", instead of loan guarantees. This might sound like a synonym, but I don't think it is. She emphasized that the financing will be organized by the private industry; the government backstop is there to facilitate that. What could such a backstop look like, so that it is most efficient for both the taxpayers and the investors (most of whom are also taxpayers)? It could be some first loss tranches in some pools of loans, for example the first 10%. This way, the industry can raise $10BN, while the government is on the hook for only $1BN. Even more likely, the government will provide a second loss backstop (the technical term is a "mezzanine tranche"). For example, they'll absorb the losses between 5% and 15% in a loan portfolio. The first loss will be absorbed by some speculative investors, who will be compensated with a very generous yield.

I also expect that new loans will only be raised if old loans perform well. Overall, I don't expect the US government to be on the hook for $1T at any given time, and I would be surprised if they'll be on the hook for much more than $1 BN at any given time.
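
To make the mechanics of that structure concrete, here is a toy loss waterfall using the purely hypothetical numbers above (a $10BN pool, a 5% private first-loss tranche and a 5%-15% government mezzanine band); nothing here reflects any actual proposal:

  #include <stdio.h>

  /* Toy loss waterfall: private first-loss investors absorb the first 5% of
     losses, a government backstop absorbs the 5%-15% band (the mezzanine),
     and senior lenders wear anything beyond that.  All numbers hypothetical. */
  int main(void) {
      const double pool = 10e9;                  /* USD */
      const double first_loss_cap = 0.05 * pool;
      const double mezz_cap = 0.10 * pool;       /* the 5%-15% band */

      for (int pct = 0; pct <= 25; pct += 5) {
          double loss = pool * pct / 100.0;
          double first = loss < first_loss_cap ? loss : first_loss_cap;
          double mezz = (loss - first) < mezz_cap ? (loss - first) : mezz_cap;
          double senior = loss - first - mezz;
          printf("portfolio loss %2d%%: investors $%.1fBN, government $%.1fBN, senior $%.1fBN\n",
                 pct, first / 1e9, mezz / 1e9, senior / 1e9);
      }
      return 0;
  }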


Indeed, a world in which you can't tell truth from lies and proof from AI-generated videos suits this administration perfectly. Masked men with no identification kidnap people off the streets and you can't tell if it's real or fake. No wonder the government loves OpenAI. Just look at how happy they are to use it on Truth Social.


> If there are any such loan guarantees (which is very likely), they'll be for AI investments in the US in general, so, for example, Anthropic and Google will also benefit (and probably Nvidia, AMD, Intel, Oracle, Amazon, Meta, etc).

So, if we say there is an AI bubble, then a whole sector of the ultrarich will rob taxpayers, not just a single controversial company.


Is anyone aware of a project that provides simplified declaration of constraint checking?

For example:

  structures:
    struct a { strz b, strz c, int d, str[d] e }
  
  constraints:
    len(b) > len(c)
    d > 6
    d <= 10
    e ~ /^ABC\d+/
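
For context, the hand-written equivalent of that declaration is roughly the following C; the point of such a project would be to generate or eliminate this boilerplate. Field names and the regex follow the sketch above, str[d] is read as "string of length d", and everything else is made up:

  #include <regex.h>      /* POSIX regex, not ISO C */
  #include <stdbool.h>
  #include <string.h>

  /* Hand-written counterpart of the declared constraints. */
  struct a {
      const char *b;
      const char *c;
      int         d;
      const char *e;
  };

  static bool validate_a(const struct a *v) {
      if (strlen(v->b) <= strlen(v->c)) return false;   /* len(b) > len(c) */
      if (v->d <= 6 || v->d > 10)       return false;   /* 6 < d <= 10     */
      if ((int)strlen(v->e) != v->d)    return false;   /* e is str[d]     */

      regex_t re;          /* e ~ /^ABC\d+/ ; \d written as [0-9] for ERE  */
      if (regcomp(&re, "^ABC[0-9]+", REG_EXTENDED) != 0) return false;
      bool ok = regexec(&re, v->e, 0, NULL, 0) == 0;
      regfree(&re);
      return ok;
  }

  int main(void) {
      struct a v = { "longer", "short", 7, "ABC1234" };
      return validate_a(&v) ? 0 : 1;
  }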


There are a few obvious gaps, seemingly still unsolved today:

1. Build environments may not be adequately sandboxed. Some distributions are better than others (Gentoo being an example of a better approach). The idea is that the package specification lists the full set of files to be downloaded initially into a sandboxed build environment, and scripts executed in that build environment are not able to then access any network interfaces, filesystem locations outside the build environment, etc. Even within the build of a particular software package, more advanced sandboxing may segregate test suite resources from the code being built, so that a compromise of the test suite can't impact built executables, and compromised documentation resources can't be accessed during the build or eventual execution of the software.

2. The open source community as a whole (but ultimately distribution package maintainers) is not being alerted to, and does not apply caution to, unverified high entropy in source repositories. This is similar in concept to nothing-up-my-sleeve numbers.[1] Typical examples of unverified high entropy where a supply chain attack can hide a payload: images, videos, archives, PDF documents etc. in test suites or bundled with software as documentation and/or general resources (such as splash screens). It may also include IVs/example keys in code or code comments, and s-boxes or similar matrices or arrays of high entropy data where it may not be obvious to human reviewers that the entropy is actually low (such as a well known AES s-box) rather than high and potentially indistinguishable from attacker shellcode. Ideally, when a package maintainer goes to commit a new package or package update, they would be alerted to unexplained high entropy content that ends up in the build environment sandbox and be required to justify why it is OK (a rough sketch of such a check follows below).

[1] https://en.wikipedia.org/wiki/Nothing-up-my-sleeve_number
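
As a rough illustration of the check described in point 2, a packaging tool could compute the byte-level Shannon entropy of everything that lands in the build sandbox and ask the maintainer to justify anything close to random. A minimal sketch (the 7.5 bits/byte threshold is arbitrary):

  #include <math.h>
  #include <stdio.h>

  /* Byte-level Shannon entropy of a file, in bits per byte.  Compressed or
     encrypted blobs sit near 8.0; typical source code is well below 6.0.
     Compile with -lm. */
  static double file_entropy(const char *path) {
      unsigned long long hist[256] = {0}, total = 0;
      FILE *f = fopen(path, "rb");
      if (!f) return -1.0;

      int ch;
      while ((ch = fgetc(f)) != EOF) { hist[ch]++; total++; }
      fclose(f);
      if (total == 0) return 0.0;

      double h = 0.0;
      for (int i = 0; i < 256; i++) {
          if (!hist[i]) continue;
          double p = (double)hist[i] / (double)total;
          h -= p * log2(p);
      }
      return h;
  }

  int main(int argc, char **argv) {
      for (int i = 1; i < argc; i++) {
          double h = file_entropy(argv[i]);
          printf("%s: %.2f bits/byte%s\n", argv[i], h,
                 h > 7.5 ? "  <-- unexplained high entropy?" : "");
      }
      return 0;
  }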


Europe's projected semiconductor manufacturing equipment expenditure from 2026-2028 is a rounding error.

Global expenditure on 300mm fab equipment from 2026-2028 is predicted to be USD$374bn,[1] with regional breakdown as follows (totaling 100%):

China - USD$95bn (25%)

South Korea - USD$86bn (23%)

Taiwan - USD$75bn (20%)

Americas - USD$60bn (16%)

Japan - USD$32bn (9%)

Europe and Middle East - USD$14bn (4%)

Southeast Asia - USD$12bn (3%)

[1] https://www.semi.org/en/semi-press-release/semi-reports-glob...


Yeah, but those machines are built in Europe. Most in the Netherlands to be exact, but Holland too.


Netherlands AND Holland? Isn't that the same place?

Also: Even though ASML steppers are built in the Netherlands, there are a lot of other non-photolithography tools needed to build a fab in addition to the ASML ones.


> Netherlands AND Holland? Isn't that the same place?

Holland is part of the Netherlands. Not unlike how say Texas is part of the United States.

So in that regard the statement was redundant, yes.

https://en.wikipedia.org/wiki/Holland


And ASML is not in Holland, nor is Nexperia or ASMI. I can't think of any semiconductor business in "Holland".


ASML has offices in Delft, so in proper HN Standard Pedantic Form: Well, actually there is!


And to be extremely pedantic, machines are not built there.

It's a tiny satellite office.


Roughly half the Capex for a new modern fab goes to ASML.


Not for long. If the Chinese didn't already have incentives to break ASML's monopoly, they do now.


There's more to that than just ASML, as that company in turn uses specialty lenses produced by Zeiss. Additionally there's crucial American IP (I don't recall what specifically) that goes into those machines, so advanced semiconductor production is more of a global effort really.

Meanwhile China is trying to do all that domestically. They might even pull this off, but just like with their 7nm tech, they're unlikely to be able to do this economically.


> (I don't recall what specifically)

I believe it's the lightsource, made by Cymer of San Diego before ASML took them over.


They're working hard but this particular product is so high-tech they struggle to develop it. I bet they have mostly complete reproductions, but just missing key components like the laser.


Generation curtailment is expected in a 100% VRE power grid, though, because it's necessary to overbuild, and this strategy is found to be economically viable almost anywhere on the planet.[1][2] Generation curtailment could be minimised, however, with demand-side flexibility (e.g. turning on aluminium smelters when it's windy), not necessarily just with pumped storage hydro and/or lengthy transmission line builds to other regions where solar generation can supplement wind generation.

Relevant quotes:

"this report infers that, almost anywhere on the planet, nearly 100% VRE power grids firmly supplying clean power and meeting demand 24/365 are not only possible but would be economically viable, provided that VRE resources are optimally transformed from unconstrained run-of-the weather generation into firm generation."[1]

"VRE overbuilding and operational curtailment (i.e., implicit storage) are key to achieving economically acceptable firm 24x365 solutions. Because firm power generation could be achieved locally/regionally in many cases with a small premium, optimum implicit storage solution could alleviate the need for major power grid enhancement requirements."[2]

[1] https://iea-pvps.org/key-topics/firm-power-generation/

[2] https://iea-pvps.org/wp-content/uploads/2023/01/Report-IEA-P...


> eg. turn on aluminium smelters when it's windy

I'd be curious to learn how you intend to amortize that aluminium smelter, while also being competitive on aluminium markets.

> The variable-to-firm transformation enablers include energy storage, the optimum blending of VREs and other renewable resources, geographic dispersion, and supply/demand flexibility.

Yeah... provided someone else does that for them, VREs are very cheap.


Smelters around the globe are doing this already. Rio Tinto operates one in New Zealand to help manage seasonal hydro flows for example.

In Australia the same firm is vocal that unless the local area moves from coal to renewables they won't hit price points that are competitive on the global market.


Further info re: Australian Aluminium production

   The four aluminium smelters in Australia consume 10% of the nation’s electricity and produce close to 5% of total emissions.
  Smelting is so energy intensive that in many countries, it has driven the construction of new fossil fuel power plants.

  That’s why it’s nicknamed “congealed electricity”.

  This week, the federal government proposed a new policy aimed at making aluminium smelting green.
* https://www.climateworkscentre.org/news/will-tax-incentives-...

* https://aluminium.org.au/australian-industry/australian-alum...


> Generation curtailment could be minimised though with demand-side flexibility

Demand-side flexibility is synonymous with surge pricing, ensuring peak-demand energy becomes a luxury good. It ensures that energy availability is only really guaranteed to those able to outbid the bottom something-percentile: those using energy for economically productive work, and the otherwise rich.

This is quite the opposite of a utility.


The overloaded "software engineering" label can also refer to formal software engineering centered around examples of DO-178C for aviation software, IEC 61508 for railway software, ISO 26262 for road vehicle software, EAL5+ for cybersecurity related software, etc. It's somewhat unfortunate the label is also applied to CRUD websites and mobile applications, even there there is a world of difference in the various levels of formal engineering applied.


> CRUD websites and mobile applications

These can be quite intense (but, to be fair there's a ton of dross, there, as well). Probably best to avoid the broad brush.


It's somewhat unfortunate the label is also applied to CRUD websites and mobile applications,

These websites and applications can still have vast security implications depending on what kind of data is being collected.

The advertising industry has done security a huge disfavor by collecting every bit of data they can about everyone's actions all the time. Adding some ad library to your website or app now could turn it into a full-time tracking device. And phone manufacturers like Google don't want this to change, as the more information they get, the more ads they can stuff in your face.


> ISO 26262

This is only about safety. As I told my colleagues at a former workplace: safety first (that was one of the company's mottos), quality second.

