Hacker News

>If you are saying there is no difference then how do you explain the vast difference in capability between humans and LLM models?

No, I completely agree that they are different, like swimming versus propulsion by propellers. My point is that the difference may be irrelevant in many cases.

Humans haven't been able to beat computers at chess since the late 90s, long before LLMs became a thing. Chess engines from the 90s were not "thinking" in any sense of the word.

It turns out "thinking" is not required in order to win chess games. Whatever mechanism a chess engine uses gets better results than a thinking human does, so if you want to win a chess game, you bring a computer, not a human.
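To make the "mechanism, not thinking" point concrete: a classic engine just enumerates moves and picks one that leaves the opponent worse off. Here is a minimal sketch of that kind of exhaustive search for single-pile Nim rather than chess (a hypothetical toy example, not anything from the thread; all names are mine):

```python
# Toy illustration: exhaustive game-tree search plays single-pile Nim
# perfectly without anything resembling thought -- it just mechanically
# enumerates positions. (Hypothetical example for illustration only.)
from functools import lru_cache

@lru_cache(maxsize=None)
def wins(stones: int) -> bool:
    """True if the player to move can force a win.
    Rules: take 1-3 stones per turn; taking the last stone wins."""
    if stones == 0:
        return False  # previous player took the last stone and won
    # A position is winning if some move leaves the opponent losing.
    return any(not wins(stones - take)
               for take in (1, 2, 3) if take <= stones)

def best_move(stones: int) -> int:
    """Pick a move that leaves the opponent in a losing position;
    fall back to taking one stone if every reply loses anyway."""
    for take in (1, 2, 3):
        if take <= stones and not wins(stones - take):
            return take
    return 1
```

The search "knows" nothing about strategy, yet it rediscovers the standard theory (positions divisible by 4 are lost for the mover) purely by enumeration. Chess engines do the same thing at vastly larger scale, with pruning and evaluation heuristics bolted on.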

What if that also applies to other things, like translating languages, summarizing complex texts, writing advanced algorithms, or drawing out implications from a bunch of seemingly unrelated scientific papers? Does it matter that no "thinking" was going on, if it works?


