
Searle didn’t say AGI was impossible; rather, the Chinese Room is an argument that GOFAI and symbol manipulation are not enough. What makes human brains different is that they’re part of an organism interacting with its environment. Symbols get their meaning from use in the environment. The Chinese Room, like ChatGPT, is parasitic on symbols whose meanings we assigned through our own use in the real world.


The type of computation used in AI is irrelevant for understanding. If our brains are computable, then they can be simulated by Turing machines. And if our brains can understand, then so can any universal Turing machine (UTM), using whatever system of symbol manipulation it likes.
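A minimal sketch of the "any system of symbols it likes" point (all names here are hypothetical, chosen for illustration): a tiny Turing machine simulator in which consistently relabeling every symbol leaves the computation itself unchanged, suggesting the particular symbols carry no intrinsic meaning.

```python
# Hypothetical illustration: a tiny Turing machine simulator.
# The same machine, run with a relabeled alphabet, performs the
# same computation step for step.

def run_tm(rules, tape, state="start", pos=0, blank="_", steps=1000):
    """Run a Turing machine. `rules` maps (state, symbol) ->
    (new_state, write_symbol, move), with move in {-1, 0, +1}."""
    cells = dict(enumerate(tape))
    for _ in range(steps):
        if state == "halt":
            break
        sym = cells.get(pos, blank)
        state, write, move = rules[(state, sym)]
        cells[pos] = write
        pos += move
    return "".join(cells[i] for i in sorted(cells))

# A machine that flips every bit, then halts at the first blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}

# The same machine with every symbol relabeled (0->a, 1->b, _->#).
relabel = {"0": "a", "1": "b", "_": "#"}
flip2 = {(s, relabel[sym]): (s2, relabel[w], m)
         for (s, sym), (s2, w, m) in flip.items()}

out1 = run_tm(flip, "0110")               # -> "1001_"
out2 = run_tm(flip2, "abba", blank="#")   # -> "baab#"
```

The relabeled run produces exactly the relabeled output: the machine never "knew" what 0 or 1 meant in the first place.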

If you want to define “understanding” so that it is limited to organisms that interact with their environment, then I think that is an overly restrictive and not very useful definition.


The inside of a Chinese room is an environment like any other; this argument makes no sense.



