
See my definition at: https://news.ycombinator.com/item?id=44137201

As mentioned there, I was arguing that, without being prompted, there's no way it can produce something that is not a combination of its training data. And that combination doesn't work on the same terms you would expect from someone who had learned the same material.

In linear regression, you can reduce a large amount of data to a small set of factors, and every prediction is a combination of those factors. By your definition, those predictions would be new. For me, what's new is when you retrospectively add the input to the training data and find a different set of factors that either gives you a larger set of possible answers (generation) or narrows the definition of correct answers (reliability).
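To make that concrete, here's a toy sketch in plain numpy (my own illustration, not anything from the linked comment): least squares compresses the data into a coefficient vector, and every prediction afterwards is just a linear combination weighted by those coefficients.

    import numpy as np

    # Fit y ≈ Xw by least squares. The "small set of factors" is the
    # coefficient vector w; any prediction is a combination of those factors.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 3))            # 100 samples, 3 features
    true_w = np.array([2.0, -1.0, 0.5])
    y = X @ true_w + rng.normal(scale=0.1, size=100)

    w, *_ = np.linalg.lstsq(X, y, rcond=None)  # the learned factors
    x_new = np.array([1.0, 0.0, -1.0])
    prediction = x_new @ w                     # a linear combination of w
    print(w, prediction)

No prediction from this model ever steps outside the span of those factors; getting a genuinely different answer space requires refitting with new data, which is the distinction I'm drawing above.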

That is what people do when programming a computer: you start from something that can do almost anything and restrict it down to the few things you actually need. What an LLM does is throw the dice; what you get may or may not do what you want, and may not even be possible.



That comment doesn't provide anything resembling a coherent definition.

The rest of what you wrote here is either also true for humans, or not true for machines regardless of your definitions, unless you can demonstrate that humans can exceed the Turing computable.

You cannot.



