
MSRs (model-specific registers) are interesting. There are a lot more of them than are documented, and since they came into existence, people have been exploring them:

http://archive.gamedev.net/archive/reference/articles/articl...

http://www.rcollins.org/Errata/Jan97/Bugs.html
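
For anyone who wants to poke at MSRs themselves: on Linux the msr driver exposes them as /dev/cpu/N/msr, where the file offset of a pread() is interpreted as the MSR number. A minimal sketch, assuming the msr module is loaded and you run it as root (the register used in the usage example, 0x10 / IA32_TIME_STAMP_COUNTER, is just an arbitrary pick):

  /* Minimal sketch: read an MSR via the Linux msr driver (/dev/cpu/N/msr).
     Assumes `modprobe msr` has been done and the program runs as root.
     Usage: ./rdmsr 0x10   (0x10 = IA32_TIME_STAMP_COUNTER, just an example) */
  #include <fcntl.h>
  #include <inttypes.h>
  #include <stdint.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  int main(int argc, char **argv)
  {
      if (argc != 2) {
          fprintf(stderr, "usage: %s <msr-number-in-hex>\n", argv[0]);
          return 1;
      }
      uint32_t msr = (uint32_t)strtoul(argv[1], NULL, 16);

      int fd = open("/dev/cpu/0/msr", O_RDONLY);    /* CPU 0 only, for brevity */
      if (fd < 0) {
          perror("open /dev/cpu/0/msr");
          return 1;
      }

      uint64_t value;
      /* The msr driver interprets the file offset as the MSR number. */
      if (pread(fd, &value, sizeof value, msr) != (ssize_t)sizeof value) {
          perror("pread");                    /* EIO for MSRs that don't exist */
          close(fd);
          return 1;
      }
      printf("MSR 0x%" PRIx32 " = 0x%016" PRIx64 "\n", msr, value);
      close(fd);
      return 0;
  }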


Just to quickly help the Streisand effect, this is the private key, extracted from [1]:

-----BEGIN PRIVATE KEY----- MIIEvQIBADANBgkqhkiG9w0BAQEFAASCBKcwggSjAgEAAoIBAQC10dxEGINZbF0nIoMtM8705Nqm6ZWdb72DqTdFJ+UzQIRIUS59lQkYLvdQp71767vz0dVlPTikHmiv dYHRc7Fo6JsmSUsGR3th+fU6d1Wt6cwpMTUXj/qODmubDK/ioVDW7wz9OFlSsCBvylOYp9v2+u/VXwACnBXNxCDezjx4RKcqMFT31WTxqU9OM9J86ChMOW4bFA41aLAJ ozB+02xis7OV175XdQ5vkVXM9ys6ZoRF/K6NXeHiwcZFtMKyphXAxqU7uGY2a16bC3TEG5/km6Jru3Wxy4nKlDyUjWISwH4llWjdSi99r2c1fSCXlMCrW0CHoznn+22l YCKtYe8JAgMBAAECggEAGOPDJvFCHd43PFG9qlTyylR/2CSWzigLRfhGsClfd24oDaxLVHav+YcIZRqpVkr1flGlyEeittjQ1OAdptoTGbzp7EpRQmlLqyRoHRpT+MxO Hf91+KVFk+fGdEG+3CPgKKQt34Y0uByTPCpy2i10b7F3Xnq0Sicq1vG33DhYT9A/DRIjYr8Y0AVovq0VDjWqA1FW5OO9p7vky6e+PDMjSHucQ+uaLzVZSc7vWOh0tH5M 0GVk17YpBiB/iTpw4zBUIcaneQX3eaIfSCDHK0SCD6IRF7kl+uORzvWqiWlGzpdG2B96uyP4hd3WoPcZntM79PKm4dAotdgmalbueFJfpwKBgQDUy0EyA9Fq0aPF4LID HqDPduIm4hEAZf6sQLd8Fe6ywM4p9KOEVx7YPaFxQHFSgIiWXswildPJl8Cg5cM2EyMU1tdn5xaR4VIDk8e2JEDfhPtaWskpJp2rU2wHvAXOeAES7UFMrkhKVqqVOdbo IhlLdcYp5KxiJ3mwINSSO94ShwKBgQDavJvF+c8AINfCaMocUX0knXz+xCwdP430GoPQCHa1rUj5bZ3qn3XMwSWa57J4x3pVhYmgJv4jpEK+LBULFezNLV5N4C7vH63a Zo4OF7IUedFBS5B508yAq7RiPhN2VOC8LRdDh5oqnFufjafF82y9d+/czCrVIG43D+KO2j4F7wKBgDg/HZWF0tYEYeDNGuCeOO19xBt5B/tt+lo3pQhkl7qiIhyO8KXr jVilOcZAvXOMTA5LMnQ13ExeE2m0MdxaRJyeiUOKnrmisFYHuvNXM9qhQPtKIgABmA2QOG728SX5LHd/RRJqwur7a42UQ00Krlr235F1Q2eSfaTjmKyqrHGDAoGAOTrd 2ueoZFUzfnciYlRj1L+r45B6JlDpmDOTx0tfm9sx26j1h1yfWqoyZ5w1kupGNLgSsSdimPqyR8WK3/KlmW1EXkXIoeH8/8aTZlaGzlqtCFN4ApgKyqOiN44cU3qTrkhx 7MY+7OUqB83tVpqBGfWWeYOltUud6qQqV8v8LFsCgYEAnOq+Ls83CaHIWCjpVfiWC+R7mqW+ql1OGtoaajtA4AzhXzX8HIXpYjupPBlXlQ1FFfPem6jwa1UTZf8CpIb8 pPULAN9ZRrxG8V+bvkZWVREPTZj7xPCwPaZHNKoAmi3Dbv7S5SEYDbBX/NyPCLE4sj/AgTPbUsUtaiw5TvrPsFE= -----END PRIVATE KEY-----

[1] https://archive.softwareheritage.org/browse/origin/content/?...



This needs a 2011 on it, which would explain why it is missing the very recent Ryū algorithm, which I believe is the fastest at this point:

https://dl.acm.org/doi/10.1145/3192366.3192369 (PLDI 2018 paper)

https://github.com/ulfjack/ryu (C source code)
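
For anyone unfamiliar with what these float-to-string algorithms are actually computing: the goal is the shortest decimal string that parses back to exactly the same double. A brute-force illustration of that property (this is emphatically not Ryū, just the specification that Ryū meets much faster):

  /* Illustration of the "shortest round-tripping decimal" problem that Ryū,
     Grisu, etc. solve efficiently -- this is the slow, brute-force version. */
  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  /* Print d with the fewest significant digits that still read back exactly. */
  static void print_shortest(double d)
  {
      char buf[64];
      for (int prec = 1; prec <= 17; prec++) {     /* 17 digits always suffice */
          snprintf(buf, sizeof buf, "%.*g", prec, d);
          if (strtod(buf, NULL) == d) {            /* exact round trip?        */
              puts(buf);
              return;
          }
      }
  }

  int main(void)
  {
      print_shortest(0.1);            /* "0.1", not "0.1000000000000000055..." */
      print_shortest(1.0 / 3.0);      /* "0.3333333333333333"                  */
      return 0;
  }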


I find it strange that HoTT and HoTT-adjacent stuff gets posted here so often.

Even in the world of pure mathematics, this is a backwater. As I understand it, the two main selling points are providing a foundation for mathematics and a framework for automated theorem proving, but:

1) We already have a perfectly good foundation of mathematics. It's called ZFC [1], and it does everything we need it to do. If all HoTT does is give a theory that's mutually interpretable with ZFC, that's not interesting (at least to mainstream workers in the foundations of mathematics, who would rather tackle substantive problems).

There are also people like Per Martin-Löf who want to use HoTT as a vehicle for doing mathematics with intuitionistic logic, and in particular denying the law of excluded middle, but at that point you've let the mask slip and you're advocating something that interests approximately zero mainstream mathematicians.

2) It's not at all clear that HoTT, or type theory in general, is the best solution for formalizing mathematics. It might be, it might not, but this is ultimately an empirical question. Some comments on the state of the field are given in [2].

[2]: https://xenaproject.wordpress.com/2020/02/09/where-is-the-fa...


Disclaimer: I am not a domain expert in type theory and my background is algebraic combinatorics, not computer science. I also hate online courses, don’t like collaborative learning, and tend to read books and papers by myself. So I’m not a great source of advice for most people.

Unfortunately I don’t have many more suggestions that would be good pedagogically. Benjamin Pierce (who wrote Software Foundations) has another very famous intro book, Types and Programming Languages: https://www.cis.upenn.edu/~bcpierce/tapl/ which is regarded as a “Bible” of type theory for computer scientists. Note: I haven’t personally read this, but I have read the sequel, Advanced Topics in Types and Programming Languages. Both books have exercises and most exercises have solutions.

Simon Peyton Jones wrote a very good book, The Implementation of Functional Programming Languages. It is old (1987), but free, a good read, and covers most of the “important” topics without assuming too much background: https://www.microsoft.com/en-us/research/publication/the-imp...

For formal verification, there are definitely better-qualified people than me. I found a lot of the source code to CompCert C (a verified C compiler) illuminating: https://github.com/AbsInt/CompCert However, it will be difficult to understand without doing Software Foundations first (I personally still find tactic-style proofs in Coq confusing and don’t like Coq as much as Idris or Agda).

I am very shy and can’t help you with the online communities :) The type theory subreddit is pretty active and they seem nice.


I would strongly recommend looking at “standard” dependently-typed theories and languages (or even proof assistants without dependent types), instead of trying to plunge into homotopy theory. Learning how dependent types work will be much more illuminating than trying to catch up on the algebraic topology. While the link between topology and formal logic is certainly interesting, I think you’ll be much better served by starting on the logic side of things.

Software Foundations is a well-known book / series of Coq programs for theoretical computer science. [1]

Lectures on the Curry-Howard Isomorphism[2] is my favorite (graduate-level) introduction to type theory and the lambda calculus, including the “lambda cube” and pure type systems.

Finally, while Type-Driven Development With Idris[3] is not at all “theoretical,” it is still an excellent introduction to dependent types and the idea of proof-as-program. It’s also the only book about a specific programming language I’ve ever read and actually enjoyed :)

[1] https://softwarefoundations.cis.upenn.edu/

[2] Available for free here (1 MB pdf) https://disi.unitn.it/~bernardi/RSISE11/Papers/curry-howard....

[3] https://www.manning.com/books/type-driven-development-with-i...
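
To give a tiny, concrete taste of the proofs-as-programs idea mentioned above (a sketch in Lean 4 rather than Coq or Idris, but the principle is identical): a term inhabiting an implication type is literally a proof of it, and a dependent type such as a length-indexed vector lets the type checker rule out out-of-bounds access.

  -- Curry-Howard in one line: a function of this type *is* a proof of p ∧ q → q ∧ p.
  def and_swap (p q : Prop) : p ∧ q → q ∧ p :=
    fun h => ⟨h.right, h.left⟩

  -- A dependent type: the length is part of the type, so `head` only accepts
  -- vectors whose type proves they are non-empty.
  inductive Vec (α : Type) : Nat → Type where
    | nil  : Vec α 0
    | cons : {n : Nat} → α → Vec α n → Vec α (n + 1)

  def Vec.head {α : Type} {n : Nat} : Vec α (n + 1) → α
    | .cons x _ => x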


For people wanting to learn HoTT, I'd rather recommend Rijke's book assembled from his lectures at CMU: https://github.com/EgbertRijke/HoTT-Intro

(Disclaimer: I'm a mathematician working in HoTT and (higher) category theory.)

Like with most advanced concepts in any field, there are lots of misunderstandings pertaining to HoTT. To me, the underlying insight is that the "right" abstract setting for a lot of classical homotopy theory is that of an infinity-topos (whose precise definition is an open question, but we have candidates). Theorems proven in HoTT hold in any infinity-topos, and HoTT is (conjecturally) the internal language of these.

For anything outside of homotopy theory, HoTT isn't (immediately) interesting. It's certainly not trying to provide foundations for set-based mathematics, but for (categorical) homotopy theory, sure.

Some mathematicians take issue with the logic being intuitionistic; e.g. neither the law of excluded middle nor double-negation elimination holds in HoTT. This is not about constructivity (which, I agree, mainstream mathematicians will reject), but about the spaces being modeled. For example, a mainstream mathematician will say that a space is either empty or has a point. However, in the category of bundles over the circle, points are sections; and so the double cover doesn't have any point, for example. Neither is it empty.


From John Nagle:

"(..) Unfortunately, delayed ACKs went in after I got out of networking in 1986, and this was never fixed. Now it's too late."

https://stackoverflow.com/a/16663206/2326672
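
For context, the usual application-side workaround when the Nagle/delayed-ACK interaction bites is to switch Nagle's algorithm off on latency-sensitive sockets (Linux additionally offers TCP_QUICKACK for the ACK side, though it has to be re-armed after reads). A minimal sketch:

  /* Sketch: disable Nagle's algorithm on a connected TCP socket, the usual
     workaround for the Nagle/delayed-ACK interaction described above. */
  #include <netinet/in.h>
  #include <netinet/tcp.h>
  #include <sys/socket.h>

  int disable_nagle(int sockfd)
  {
      int one = 1;
      /* TCP_NODELAY: send small segments immediately instead of coalescing. */
      return setsockopt(sockfd, IPPROTO_TCP, TCP_NODELAY, &one, sizeof one);
  }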


>GNU Smalltalk turned out to be the perfect dialect for this first contact with Smalltalk, offering a very traditional command-line experience. And Vim provided the syntax highlighting I needed to write those initial lines of code.

Sorry, but if you're using Vim and the command line, you're missing all of the best parts of Smalltalk. It's like trying to learn HTML/CSS by designing web pages in Word.

If you want to see what the fuss is really all about, you should download a standalone Pharo VM and the latest stable image: https://pharo.org/download

It instantly gives you everything you need to learn and develop things with a modern IDE. It has the Playground (a better REPL), the famed debugger, the code explorer and a bunch of other things.

Then work through the MOOC: http://mooc.pharo.org/

Then buy The Blue Book on eBay (Smalltalk-80: The Language and its Implementation). Absolutely worth it. Yes, even now. Yes, even if you don't plan to use Smalltalk.

If you need help, join Pharo Discord or use the mailing list.


I am currently learning modern algebraic geometry (scheme theory) from Ravi Vakil's Algebraic Geometry in the Time of COVID: https://math216.wordpress.com/agittoc-2020/.

It's a great ongoing course that offers amazing intuition into scheme theory. We're divided into "working group(oid)s" where we discuss the mathematics and solve the weekly homework assigned by Ravi. Best of all, anyone from all over the world can join in on the fun! You get added to a Discord and a Zulip group, where all the discussion happens.

My favourite article about Grothendieck is the one titled "The Grothendieck-Serre Correspondence": https://webusers.imj-prg.fr/~leila.schneps/corr.pdf

This article describes both the mathematics, and the lives of Grothendieck and Serre through their exchange of letters. It's powerfully written, and provides great insight into both the mathematics and the two as people. The last line of the article is absolutely beautiful.


I'm also learning some algebraic geometry at the moment.

Some resources I have used are:

* video lectures by Richard Borcherds

* Lecture notes by Andreas Gathmann (https://www.mathematik.uni-kl.de/~gathmann/class/alggeom-200...)

* This blog: https://rigtriv.wordpress.com/ag-from-the-beginning/

* An Infinitely Large Napkin, by Evan Chen (https://venhance.github.io/napkin/Napkin.pdf)

I also found it helpful to learn some (algebraic) number theory, to get a sense of where some of the motivation comes from (e.g. elliptic curves, modular forms). Grothendieck's work is abstract, but he was always motivated by concrete problems (e.g. the Weil conjectures).


The new edition of Arfken, Weber and Harris is great!

Hilbert and Courant is BY FAR the best mathematical physics book in existence. No contest. Boas and all the others are good, even very good. H & C beats them by a kiloparsec.

Have you listened to https://oxide.computer/podcast/ ?

It has more depth than Friedman's podcast; I listen to it mainly for the computing history, because they interview people about their careers. I hope they do another season soon.


Not a standard source for algebraists, but Persi Diaconis's book, mentioned in other comments, is an excellent introduction. It also contains applications in probability.

I recently did a deep dive into the topic, so here's a list of things I came across that helped me:

[1] Teleman 2005: https://math.berkeley.edu/~teleman/math/RepThry.pdf

[2] Khovanov, List of Representation Theory Resources: http://www.math.columbia.edu/~khovanov/resources/

[3] Huang 2010, "Fourier-Theoretic Probabilistic Inference over Permutations"

[4] Woit, Topics in Representation Theory Course, http://www.math.columbia.edu/~woit/repthy.html

[5] Gallier 2013, "Spherical Harmonics and Linear Representations of Lie Groups" https://www.cis.upenn.edu/~cis610/sharmonics.pdf


That made sense. I was following along, and then all of a sudden, it just kind of ended.

As a layman who doesn't clearly remember B Trees, it would be awesome to have even a sentence at the end, like

...and that's B Trees! Commonly used for storing fields in relational databases, filesystems, and more!

For fellow laymen, https://en.wikipedia.org/wiki/B-tree isn't bad, but is there more?


The Postgres documentation on B-trees is absolutely stellar if you can afford some extended time to study it [1]. From there you can read the actual code or watch some videos on how PostgreSQL indexes use them. You can even dump indexes locally to see their content, from the root all the way down to the leaf nodes [2] (assuming you've read and understood the above).

[1] https://github.com/postgres/postgres/tree/master/src/backend...

[2] https://www.postgresql.org/docs/10/pageinspect.html
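
If you just want the core mechanics before (or instead of) the Postgres sources, the search side fits in a few lines: within a node you scan or bisect the sorted keys, and either the key is there, you are at a leaf (so it isn't), or you descend into the child pointer sitting between the two keys that bracket your target. A rough textbook sketch, not Postgres's actual on-disk layout:

  /* Textbook B-tree search sketch -- not the Postgres on-disk layout. */
  #include <stdbool.h>
  #include <stddef.h>

  #define MAX_KEYS 64                    /* order of the tree, picked arbitrarily */

  typedef struct BTreeNode {
      int  nkeys;                        /* number of keys currently stored       */
      int  keys[MAX_KEYS];               /* sorted ascending                      */
      struct BTreeNode *children[MAX_KEYS + 1]; /* unused when is_leaf            */
      bool is_leaf;
  } BTreeNode;

  /* Returns true if `key` is present somewhere below `node`. */
  bool btree_search(const BTreeNode *node, int key)
  {
      while (node != NULL) {
          int i = 0;
          while (i < node->nkeys && key > node->keys[i])  /* linear scan; real      */
              i++;                                        /* implementations bisect */
          if (i < node->nkeys && key == node->keys[i])
              return true;                                /* found in this node     */
          if (node->is_leaf)
              return false;                               /* nowhere left to look   */
          node = node->children[i];                       /* descend between keys   */
      }
      return false;
  }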


Here's a nice blog post about the algorithm: https://codeforces.com/blog/entry/61306

It definitely makes sense: a linear recurrence does have its own characteristic polynomial. Even with naive polynomial multiplication it's very fast.

Now, what I'm wondering is whether this is possible to do with adjacency matrices of graphs.

Recently I was calculating the number of paths of a knight going from one position to another in K moves. I ended up with O(N^6 log K) (N being the chessboard dimension), which was fast enough for a small chessboard, but maybe there's a way to turn that adjacency matrix into a polynomial and do the same.
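
For reference, the O(N^6 log K) mentioned here is just fast exponentiation of the (N^2) x (N^2) knight-move adjacency matrix: one multiplication of such matrices costs (N^2)^3 = N^6, and O(log K) multiplications are needed; A^K[src][dst] is then the number of K-move paths. A rough sketch of that part, with a tiny stand-in matrix so it stays self-contained:

  /* Sketch: counting length-K walks by fast matrix exponentiation.
     For the knight problem, DIM = N*N (one row/column per board square) and
     adj[i][j] = 1 iff a knight can move from square i to square j in one move,
     so each multiply costs DIM^3 = N^6 and we do O(log K) of them.
     Counts are taken modulo MOD to avoid overflow. */
  #include <stdint.h>
  #include <stdio.h>
  #include <string.h>

  #define DIM 4                         /* stand-in for N*N */
  #define MOD 1000000007ULL

  typedef uint64_t Mat[DIM][DIM];

  static void mat_mul(Mat a, Mat b, Mat out)
  {
      Mat tmp;
      memset(tmp, 0, sizeof tmp);
      for (int i = 0; i < DIM; i++)
          for (int k = 0; k < DIM; k++)
              for (int j = 0; j < DIM; j++)
                  tmp[i][j] = (tmp[i][j] + a[i][k] * b[k][j]) % MOD;
      memcpy(out, tmp, sizeof tmp);     /* tmp allows out to alias a or b */
  }

  /* out = a^k by repeated squaring: O(DIM^3 log k) total work. */
  static void mat_pow(Mat a, uint64_t k, Mat out)
  {
      Mat base;
      memcpy(base, a, sizeof base);
      memset(out, 0, sizeof(Mat));
      for (int i = 0; i < DIM; i++)
          out[i][i] = 1;                /* start from the identity matrix */
      while (k > 0) {
          if (k & 1)
              mat_mul(out, base, out);  /* out *= base  */
          mat_mul(base, base, base);    /* base *= base */
          k >>= 1;
      }
  }

  int main(void)
  {
      /* Adjacency matrix of a 4-cycle; count closed walks of length 6 at vertex 0. */
      Mat adj = { {0,1,0,1}, {1,0,1,0}, {0,1,0,1}, {1,0,1,0} };
      Mat p;
      mat_pow(adj, 6, p);
      printf("%llu\n", (unsigned long long)p[0][0]);   /* prints 32 */
      return 0;
  }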


>Ristretto is nice, terribly complex, and you don't actually need to care about the conceptual complexity. As an implementer, your only job is to execute the explicit formulas in section 5 of the Ristretto website. You do not have to be able to follow the hard math (just how you do not have to be able to follow the hard math involved in making the explicit formulas).

I don't think one should blindly follow an instruction without understanding why in any field, let alone in crypto, where a small, subtle difference can make or break it. Also, understanding crypto requires less math than inventing (and attacking) crypto, so it takes some effort, but it's doable even for hobbyists. If the math makes one uncomfortable, maybe one shouldn't try to roll their own crypto for production use in the first place.

Case in point: the author of this article that we're commenting on made a deadly mistake because they did not understand the math behind point conversion between Ed25519 and Curve25519 [1].

Below I also point out a mistake in their claim about malleability in EdDSA.

[1] https://www.reddit.com/r/crypto/comments/8toywt/critical_vul...


Is there a standardized set of TCP "test vectors" anywhere? Even an informal de facto standard test like what SSLLabs is for TLS?

  $ host www.sex.com
  www.sex.com is an alias for dmz01.cdn.live.
  dmz01.cdn.live has address 15.222.131.21
  
  $ host www.sex.com 1.1.1.3
  Host www.sex.com not found: 5(REFUSED)
  
  $ host www.nothing 1.1.1.3
  Host www.nothing not found: 3(NXDOMAIN)
I hadn't noticed a DNS REFUSED response before. That seems reasonable, although a web browser's error message doesn't distinguish between REFUSED and NXDOMAIN.


Maybe. Another possibility for a "Rosetta Stone", or perhaps the Lovecraftian perversion of said stone, comes from the ADE situation [0][1]. Even category theory, that great unifier with its own "Rosetta" tables [2][3], cannot escape; quivers have an ADE property.

Either way, we still don't know what's up with 1728. That will truly unlock everything, I think. Understanding 2, 3, or 5 would be nice; understanding 8 or 12 or 24 would be groundbreaking; but I think understanding 1728 will also be understanding Langlands' programme entire.

[0] https://en.wikipedia.org/wiki/ADE_classification

[1] http://www-groups.mcs.st-andrews.ac.uk/~pjc/talks/boundaries...

[2] http://math.ucr.edu/home/baez/rosetta.pdf

[3] https://ncatlab.org/nlab/show/computational+trinitarianism


Here you go :)

http://www.visual6502.org/JSSim/expert-z80.html

But AFAIK nobody really knows yet whether it works in all situations, because not all of the "trap transistors" that the Zilog designers put in to make reverse engineering harder have been found yet.

...maybe it would have been better to decap one of the "unlicensed clones" of the Z80, like the East German U880, because that definitely had the trap transistors fixed ;) The U880 had some minor differences in the undocumented behaviour too though.


I am not sure what you mean by "factually wrong": the whole point of the example is that on no other platform is a memory barrier required on the RHS (the reading side).

Even on ARM and POWER, very weak models, an address dependency orders the reads. This is the 'MP+dmb/sync+addr′' litmus test in section 4.1 of [1].

This type of dependency, which doesn't need a barrier to work, is why C++ invented the whole convoluted "consume" ordering in the memory model, pretty much doubling the size and complexity of the model.

---

[1] https://www.cl.cam.ac.uk/~pes20/ppc-supplemental/test7.pdf
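
For readers who want to see the shape of the test in code, here is a rough C11 sketch of the message-passing-with-address-dependency pattern. The consumer's second load depends on the value returned by its first, which is exactly the ordering ARM and POWER respect without a barrier; C/C++ expose this as memory_order_consume, which mainstream compilers currently just strengthen to acquire.

  /* Sketch of the MP+addr ("message passing with address dependency") pattern:
     the consumer's second read depends on the value of its first read, so on
     ARM/POWER no barrier is needed on the reading side. */
  #include <stdatomic.h>
  #include <stddef.h>

  static int payload;                   /* plain data, written before publishing    */
  static _Atomic(int *) ptr = NULL;     /* published pointer carries the dependency */

  void producer(void)
  {
      payload = 42;                                         /* 1: write the data  */
      atomic_store_explicit(&ptr, &payload,
                            memory_order_release);          /* 2: publish pointer */
  }

  int consumer(void)
  {
      int *p = atomic_load_explicit(&ptr,
                                    memory_order_consume);  /* 3: read pointer    */
      if (p == NULL)
          return -1;                                        /* not published yet  */
      /* 4: this read is address-dependent on step 3, which is what orders it on
         weakly ordered hardware without an explicit barrier. */
      return *p;
  }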


Boomerang attacks on SHA-0 have a complexity on the order of 2^33.

The author found the bug inspired by: https://github.com/google/wycheproof

(it's in the thread of the original tweet, though the thread is in Italian: https://twitter.com/asanso/status/1214450115777351681)


In general you'll find that zig is easier to read than Rust (see the first version of this project in Rust [0]) because it's a simpler language. For kernel programming this is even more the case:

* zig has native support for arbitrary-sized integers. In Rust I used to do bit shifts; now I just have a packed struct of u3/u5/u7 or whatever (see `src/pci/pci.zig`). Of course Rust has a bitflags crate, but I didn't find it handy; this is a case where native support vs. library support makes a world of difference.

* zig has native support for freestanding targets. In Rust I used to have to build with Xargo for cross-compiling to a custom target, I was also forced to use `#![no_std]`, and some features I was using forced me onto nightly Rust. In zig I have a simple `build.zig` and a simple `linker.ld` script; it just works.

* zig has nicer pointer handling. A lot of kernel programming is stuff that Rust considers unsafe anyway. It's not uncommon to have to write lines like `unsafe { *(ptr as *const u8) }` to deref a pointer in Rust, which is a pain because this kind of thing happens all of the time. Also you have to play around with mut a lot, like this: `unsafe { &mut *(address as *mut _) }`. It just felt wrong a lot of the time, whereas in zig you have either `const` or `var` and that's the end of it.

* zig is really fun to write! This is something that comes up often in the community; after years of C it's just very refreshing.

Some things zig is missing:

* Package manager, this is coming soon. [1]

* Missing documentation for inline assembly (I think this part is going to get overhauled, as Andrew Kelley is writing an assembler in zig atm [2]).

I don't know Nim, but I believe it has a garbage collector so it could be tricky to use for kernel programming.

[0] https://github.com/jzck/kernel-rs

[1] https://github.com/ziglang/zig/issues/943

[2] https://www.youtube.com/watch?v=iWRrkuFCYXQ

