Hacker News

99.99% of human thought is repetitive, derivative and generally just cached computation being reused. Adding one original thought on top of everything can be a life-long endeavour, and often requires a PhD.

AlphaGo surpassed human level by having access to more experience, and AlphaGo Zero did it starting from scratch. It was enough for a neural net to learn this game because a simulated Go board is close enough to a real one, so every move could be validated cheaply. Remember move 37?

More recently, AlphaTensor found faster matrix-multiplication algorithms (for certain small matrix sizes) than humans had managed to discover by hand, again based on massive search followed by verification and learning.
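For a concrete sense of what "a better multiplication algorithm" means here: AlphaTensor's discoveries generalize the classic Strassen trick. The sketch below (plain Python, not AlphaTensor's actual output) shows Strassen's 1969 scheme, which multiplies two 2x2 matrices with 7 scalar multiplications instead of the naive 8 - and, crucially, is trivial to verify against the naive method:

```python
def strassen_2x2(A, B):
    # A, B: 2x2 matrices as nested lists.
    # Strassen (1969): 7 scalar multiplications instead of 8.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

def naive_2x2(A, B):
    # The obvious 8-multiplication method, used as a cheap verifier.
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert strassen_2x2(A, B) == naive_2x2(A, B)  # both give [[19, 22], [43, 50]]
```

The asymmetry is the point: finding such a decomposition takes enormous search, but checking a candidate is a handful of additions and an equality test.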

Humans appear more intelligent because we have access to validation in a way the AI models don't: we have the real world, tools, labs and human society, not just a text dataset or an impoverished simulation.

Even so, it's not easy to validate abductive thought. Saying is cheap; proving is what matters. The same goes for language models: unvalidated generative output is worthless. Validation is the key. When validating is cheap, a model can beat humans; the neural-net architecture itself is not an obstacle to surpassing us.
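A toy sketch of that generation/verification asymmetry (the problem and the blind random proposer are my own illustrative choices, standing in for a model's samples): finding a subset of numbers that hits a target sum takes search, but checking any candidate is one cheap pass.

```python
import random

def verify(subset, target):
    # Verification is cheap: one pass over the candidate.
    return sum(subset) == target

def propose(nums):
    # Generation is the hard part. Here it is just a blind random
    # guess; a trained model would propose far better candidates.
    return [x for x in nums if random.random() < 0.5]

nums = [3, 9, 8, 4, 5, 7]
target = 15
random.seed(0)
solution = None
while solution is None:
    candidate = propose(nums)
    if verify(candidate, target):
        solution = candidate
print(solution)  # some subset of nums summing to 15
```

Because the verifier is cheap and trustworthy, the proposer can be arbitrarily bad and the loop still converges; when verification is expensive or unavailable, no amount of fluent proposing helps.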

When validation is expensive, even humans fumble around. Remember how many cooks were crowded around the pot at the CERN particle accelerator a few years ago, all of them sipping from the same verification soup? With so many PhD brains, verification was still the scarce ingredient. Without our labs, toys and human peers, we can't do it either.

One other thing we can't do, for example, is discover how to build better AI. We just try various ideas out and see what sticks. Why can't we just spit out the best idea, if we are so "intelligent"? Why do we call working with neural nets a kind of alchemy? Because we haven't verified most of our ideas yet.


