
That's the point, yes. "Piling up more and more data, then stirring it until it works" stopped being a joke and turned out to be a practical approach.

This can be seen as another occurrence of the "bitter lesson": http://www.incompleteideas.net/IncIdeas/BitterLesson.html


Thanks for the link to The Bitter Lesson.

I indeed find the lesson it describes unbearably bitter. Search and learning, as the article uses the terms, may discover patterns and results (thanks to the endless scaling of computation) that we humans are physically incapable of discovering -- however, all those learnings will have no meaning; they will not expose any causality. This is what I find unbearable: it implies that the real world must ultimately remain impervious to human cognizance, that our meaning- and causality-based human reasoning ultimately falls short of modeling the world, while general, computation-only methods (given ever-growing computing power) at least "converge" to a faithful but meaningless description of the world.

See examples like protein folding, medical research, AI-assisted diagnosis, self-driving cars. We're going to rely on their results, but we'll never know why those results work. We're not going to reject self-driving cars if they save lives per unit of distance or time driven; however, we're going to sit in, and drive, those cars blind. To me, that's an unbearable thought, even apart from the possibility that at some point the system might break down and cause a huge accident inexplicably. An inexplicable misbehavior of the system is of course catastrophic, but to me even the inexplicable proper behavior of the system is unsettling -- because it is inexplicable.

Edited to add: I think the phrase "how we think we think" in the essay is awesome. We don't even know how our own reasoning works, so trying to "machinize" those misconceptions is likely bound to fail.


Arguably, "the way our reasoning works" is probably a normal distribution with a broad curve (and for some things, possibly a bimodal distribution), so trying to understand "why" is a fool's errand. It's more valuable to understand the input variables and then calculate the likely output behaviors with error bars than to try to reduce the problem to a guaranteed if(this) then(that) rule. In many cases I don't particularly care why a person behaves a certain way, as long as 1) their behavior is generally within an expected range, and 2) they don't harm themselves or others -- and I don't see why I'd care any more about the behavior of an AI-driven system. As with most things, safety first!
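
To make that concrete, here's a minimal sketch of the "error bars over why" approach (the system() function and the input distributions are made up for illustration): treat the system as a black box, sample its assumed input distributions, and characterize the likely output range statistically instead of trying to explain it.

    import random
    import statistics

    def system(x, y):
        # Hypothetical black box: we can observe what it does, not why.
        return 3.0 * x + 0.5 * y * y + random.gauss(0, 0.1)

    # Assumed (made-up) input distributions, normal as suggested above.
    samples = [system(random.gauss(1.0, 0.2), random.gauss(0.0, 0.5))
               for _ in range(10_000)]

    mean = statistics.fmean(samples)
    stdev = statistics.stdev(samples)
    print(f"expected output: {mean:.2f} +/- {2 * stdev:.2f}")

With enough samples you get a dependable "expected range" (and can flag anything outside it as potentially harmful) without ever opening the box.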


Uh, it's funny because it works. It came out at a point where that approach was already being used in plenty of applications.
