>> What I am trying to say is that LLMs have no concept of "truth". They only produce statistically relevant responses to queries submitted to them.
> So?
Okay, this is my last attempt to express myself clearly to you in this thread.
> The internet in general is that, as are people sharing things they know.
"The internet in general" and "people sharing things" is not the topic of this thread. The topic is LLM's and has evolved into whether or not those algorithms in conjunction with their training data sets possess knowledge of "truth", as introduced by yourself previously:
> If you're trying to show that LLMs can be guided into saying false things ...
> LLMs tend to say true things ...
These are examples of anthropomorphization. This is understandable, as most of the posts you have kindly shared in this thread have been focused on people, or have conflated a category of algorithms with people.
What I have consistently said is quoted above:
LLMs have no concept of "truth."
Any interpretation of the text they generate as being "true" or "false" is done by a person reading the text, not by the algorithms, nor by the data on which they were trained.
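To make that concrete, here is a minimal sketch of what "statistically relevant responses" means. This is a toy bigram model written for this comment, not any real LLM: generation is just sampling the next token from learned frequencies, and no step anywhere consults a notion of truth.

    import random

    # Toy "language model": next-token frequencies counted from a tiny corpus.
    # Illustration only -- real LLMs use neural networks, but the generation
    # step is still sampling from a probability distribution over tokens.
    corpus = "the sky is blue . the sky is green . the sky is blue .".split()

    counts = {}  # token -> {next_token: count}
    for prev, nxt in zip(corpus, corpus[1:]):
        counts.setdefault(prev, {}).setdefault(nxt, 0)
        counts[prev][nxt] += 1

    def generate(prompt, length=4):
        # Extend the prompt by sampling each next token in proportion to how
        # often it followed the previous token in the corpus. Nothing here
        # checks whether the output is true, only whether it is likely.
        tokens = prompt.split()
        for _ in range(length):
            followers = counts.get(tokens[-1])
            if not followers:
                break
            choices, weights = zip(*followers.items())
            tokens.append(random.choices(choices, weights=weights)[0])
        return " ".join(tokens)

    print(generate("the sky is"))  # mostly "blue ...", sometimes "green ..."

Whether "the sky is green" is false is a judgement made by whoever reads the output; the sampler above has no way to represent it.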
Sounds like you're not trying to say anything if your final attempt is "LLMs have no concept of truth." Books don't have that either. Even humans don't really have it; most of the time they use something else, like "everybody knows", or science, which itself doesn't produce truth.