Hacker News | ajarmst's comments

Looks like someone else mentioned PAIP. I’d add “The Art of the Metaobject Protocol” as well.


More people should really read this book. It's great that functional programming is on the rise, but the shitty object systems of C++ and Java have poisoned OOP in most people's minds. CLOS is an awesome object system, and TAOTMP really highlights what you can do with a good one.

Functional programming is still awesome, but for high-level work I think multiparadigm languages like Scala, Swift, and CL are the way to go.


If they're so easy and obvious, why hasn't the author done it, at least as a demo? The article also reeks of "this doesn't work for me, therefore it won't work for anyone." I doubt I'm the only one who absolutely does not concede that everything should have a GUI.

"Why does shit like DPDK exist?" I don't know, but I bet you could find out with a little investigation, which might make this sound more like a well-researched position and less like a tantrum.

"people who absolutely insist that the Church Turing thesis means muh computer is all-powerful simulator of everything". Yeaaah...we're done here.


The author's about page reports that he is a former physicist with experience in the automotive industry and law enforcement. It also contains the statement "I have a particular dislike of self-anointed 'experts'," with no apparent irony.

And my personal favourite: "People may think I’m fighting above my weight class, because many of the people I label as clowns are on television and in important newspapers, much like the stars of 'The Bachelor.' "

Uh huh.


The guy has a sense of humor that is both effective and rubs The Offended the wrong way.


It would be less useful than you’d think. Pilots are trained to articulate what they are doing and generally follow very well-practiced procedures. Another pilot can quite accurately reproduce what a pilot is doing just from the transcript, and the warning horns and other indicators are quite expressive of other things. The audio is good enough to hear breakers opening. Anything a video camera could see out the window that would be relevant will be remarked upon by pilots.


In the vast, vast majority of cases, yes. However, black boxes are only used in the minority where something has gone horribly wrong.


I've watched enough Aviation Disasters to know this isn't always good enough.


Which episode has a crash that (1) they failed to find a primary cause for and (2) would have been solved had they had video?


There’s an interesting aside in that neither a CVR nor an FDR would have been helpful in figuring out the cause of the Comet disasters. The way they did figure it out was brilliant.


The article is interesting, but that lede is incoherent. Many mathematicians accept computer proofs the way chess grandmasters accept computer players. Computer “assistants” that generate proofs that humans cannot follow or understand will always be controversial, and the proofs they generate, even though accepted as valid, will always be decorated with an asterisk.


This is a proof assistant, not an automated theorem prover. The user has to supply* the mathematics and the proof checker formally verifies whether or not the steps are correct. It doesn’t have any creativity (that’s up to the mathematician).

*I should have clarified there is some proof generation, see the comment below by opnitro, but I meant the meat and potatoes of novel non-trivial proofs currently has to be supplied by the user.
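A minimal Lean 4 sketch of that division of labor (an illustrative example, not from the article): the human supplies each step, and the checker only verifies that each step is legal.

```lean
-- The user supplies the proof steps; Lean's kernel checks them.
theorem swap (p q : Prop) (h : p ∧ q) : q ∧ p := by
  constructor      -- split the goal q ∧ p into two goals: q and p
  · exact h.right  -- the human chooses the fact that closes each goal;
  · exact h.left   -- the checker verifies that the choice is valid
```

Nothing here required the tool to be creative; it only refused to accept an invalid step.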


That's not quite true. Many of these proof assistants support some level of automation and proof search. I haven't used Lean specifically, but it's quite common in Coq for projects to write proof search techniques specific to the problem domain and utilize them in their proofs.


True, and Isabelle can call out to automated provers (a mechanism called "sledgehammer"!)


Thanks for the clarification.


If not an asterisk, they’ll just have less impact. A proof establishes a fact, and the best facts and proofs are useful in that they help prove other things. Your work becomes useful for my work, which may become useful for others.


It's still just as useful. The fact that's proven is what helps other proofs.

A computer-assisted proof is just as correct and helpful; the proof may simply be more complicated initially. Given time, it can be simplified, but having the proof in the first place lets you replace assumptions with proofs, or even open new doors that weren't known to exist.

Keep in mind that these computer aided proofs are equivalent to pen and paper proofs but because you can rely on software to guarantee you haven't made any mistakes, you can make more complex proofs that still work.

It's the same as with programming. You can write overly complex and opaque proofs the same way you can write bad, slow, or hard-to-read code. It still "works," but it's not ideal, and is often a first revision in a series of steps toward a clean, fast, and easy-to-read final product.


The best proofs are good explanations as well as just being correct. If you don’t understand the theorem and the proof as well, you’re less likely to use it.


Presumably the asterisk denotes "actually true"? ;-)


I don’t understand the chess analogy. How is it in any way similar?


Lean and the other theorem provers turn mathematical proofs into levels of a computer puzzle game, much like a chess puzzle.
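For instance, a trivial "level" in this sense, where the theorem statement is the puzzle and each tactic is a move (Kevin Buzzard's Natural Number Game teaches Lean exactly this way):

```lean
-- The goal is the puzzle; the tactic is the move that solves it.
example (n : Nat) : n + 0 = n := by
  rfl  -- in Lean 4, n + 0 reduces to n by definition, so one move wins
```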


When teaching coding, I used to give a similar lesson about commenting code: "don't tell me you're creating a variable. I can read the code. Tell me what the variable is for."

I still mostly abide by that, but I've had an opportunity to write some larger programs for the first time in a few years (a hazard of teaching programming is that you find yourself working with pretty small programs), and I noticed that the comments started to become part of a conversation with myself. I'd drop in quick reminders of what I intended a function to do before moving on to something else. When I came back and wrote the code corresponding to the comments, they became redundant, but they hadn't been at the time. I stopped cleaning that stuff up (I'd clean it if I had to present or publish it, of course). The comments kept some context of my thinking when I wrote it, the order I did things in, and so on. It would be gibberish to another reader, but it wasn't to me, even after a few days or longer.

(Note: I'm a pretty fast touch-typist, so muttering to myself doesn't incur much of a productivity cost. I'd also note that commit messages are another great place to capture that sort of context for yourself, and not using that opportunity is a sin.)

So, I've changed some of my greybeard aphorisms around commenting: "When commenting code, consider the needs of the person who will be maintaining this code. Who will probably be you. Be nice to future you."
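The distinction in that old lesson might be sketched like this (a hypothetical example; the function and its names are made up for illustration):

```python
# Redundant comment: it just narrates what the code already says.
#   attempt = attempt + 1  # add one to attempt

# Useful comment: it records intent the code can't express on its own.
def seconds_until_retry(attempt: int) -> int:
    # Exponential backoff, capped at 60s so that a long outage
    # doesn't push successive retries hours apart.
    return min(2 ** attempt, 60)

print(seconds_until_retry(3))   # small attempt numbers back off quickly
print(seconds_until_retry(10))  # large ones hit the 60-second cap
```

The first style tells future-you nothing you couldn't re-derive; the second preserves a decision you'd otherwise have to reconstruct.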


Felleisen is an ACM Fellow with a senior tenured position at Rice. American, white, middle-aged. Doesn't have a presence in social media. He'll be fine.


That seemed like an awful lot of words to describe the entirely predictable facts that the job market will pick up significantly as we shed pandemic restrictions and that workers in low-wage service jobs tend to quit a lot. A few anecdotes about people liking WFH don’t make the trend any more mysterious or novel.


I think we've gone a little overboard with the whole, "unprecedented times" stuff. Sure, pandemic economics are weird, but they're not "up is the new down" weird, basic economic principles apply, and we're not throwing out the whole book, just writing a new chapter.


Why would I use this and not Pandoc?


"nor are the skills of writing good documentation even vaguely similar to those for writing good code". I disagree. Vehemently.

