Hinton's example doesn't resonate for me -- I garden as a hobby, so my immediate answer to his "What do nukes and a compost heap have in common?" was the concept of critical mass. It took me about ten seconds to articulate the word, so I'm not as fast, but I still hit the "correct" analogy right away, because with the right background information the answer is obvious.
Hinton says this demonstrates analogical thinking. But I read about gardening online, and I've probably seen the physics of compost heaps written down somewhere online too.
So what more likely happened is that ChatGPT was previously trained on some article about chain reactions in compost heaps. There's a lot of soil science and agriculture writing that discusses this, even for laypeople like me, so it isn't a long reach if you know enough about both topics.
Thus Hinton's example strikes me as having failed to control for the LLM having basically already seen the answer in its training texts.
EDIT: A few minutes later in the video, Ilya comments (https://youtu.be/Gg-w_n9NJIE?t=4966): "We have a proof of existence, the human brain is a neural network" (and the panelist agrees, barring any computational "magic" happening in the human brain). I'm interested in this comment because I generally hold that position as well, but I see many naysayers and dissenters who will respond, "Well, but real neurons don't run on 8 bits; neurons come in different types and contain DNA in a chemical soup of hormones, etc.; you're just confusing the model for the real thing in a backwards way," and so on. So I think there's an interesting philosophical divide there as well, and I don't personally have an articulate answer to explain why I hold the position that natural brains are also neural networks, other than an appeal to computationalism.