
> "Current AI systems have no internal structure that relates meaningfully to their functionality".

I'm curious as to why you feel this needs to be true?

Or to put it another way, what would an AI structure look like to be more meaningfully connected to its function?

Not trying to flame. I always feel that I can't think quite deeply enough about these issues, so I'm worried that I'm missing something 'obvious'.



The reasoning is quite subtle, and because I'm not a very coherent guy I have problems expressing it. In the LLM space there are a whole bunch of pitfalls around overfitting (largely solvable with pretty standard statistical methods) and around inherent bias in the training material, which is a much harder problem to solve. The fact that the internal representation gives you zero information on how to handle this bias means the tool itself can't be used to detect or resolve the problem.
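To make the "standard statistical methods" bit concrete, here's a toy sketch of the kind of overfit check I have in mind (held-out validation with scikit-learn; the dataset and model are just placeholders, nothing LLM-specific):

    from sklearn.datasets import make_classification
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # toy data and a model that is free to memorise it
    X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.2, random_state=0)
    model = DecisionTreeClassifier().fit(X_train, y_train)

    # the standard check: compare performance on data the model has and hasn't seen
    gap = model.score(X_train, y_train) - model.score(X_val, y_val)
    print(f"train/validation accuracy gap: {gap:.2f}")  # a large gap flags overfitting

There's a routine check like that for overfitting, but there's no analogous readout you can take from the model's internals that tells you about bias in the training material.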

I found this episode of the Nature Podcast - "How AI works is often a mystery — that's a problem": https://www.nature.com/articles/d41586-023-04154-4 - very useful, in a 'thank goodness someone else has done the work of being coherent so I don't have to' way.


Thank you.

That's a really interesting (and understandable) explanation.


AlphaGo had an artificial neural network that was trained specifically on best moves and winning percentages. An LLM trained on text has some data on what constitutes winning at Go, but internally it doesn't have an ANN specifically for the game of Go.
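To make that concrete, here's a toy sketch (a made-up PolicyValueNet, not AlphaGo's actual architecture) of a network whose parts map directly onto the task: a policy head that scores candidate moves and a value head that estimates the winning percentage:

    import torch
    import torch.nn as nn

    class PolicyValueNet(nn.Module):
        # hypothetical toy network, not AlphaGo's real design
        def __init__(self, board_size=19, channels=32):
            super().__init__()
            self.trunk = nn.Sequential(
                nn.Conv2d(1, channels, 3, padding=1), nn.ReLU(),
                nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            )
            flat = channels * board_size * board_size
            # policy head: a score for every point on the board ("best move")
            self.policy = nn.Sequential(nn.Flatten(), nn.Linear(flat, board_size * board_size))
            # value head: a single number estimating the chance of winning
            self.value = nn.Sequential(nn.Flatten(), nn.Linear(flat, 1), nn.Tanh())

        def forward(self, board):
            h = self.trunk(board)
            return self.policy(h), self.value(h)

    move_scores, win_estimate = PolicyValueNet()(torch.zeros(1, 1, 19, 19))

There's no component of an LLM you can point at like that and say "this part plays Go".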


> AlphaGo had an artificial neural network that was trained specifically on best moves and winning percentages. An LLM trained on text has some data on what constitutes winning at Go, but internally it doesn't have an ANN specifically for the game of Go.

This doesn't address what the original commenter was referring to.



