
That makes no sense. You're saying true statements are misinformation if you judge they were generated with no "thought, research or facts" to substantiate them, even when you can go and do that research to show they're true. And that misinformation requires intent to mislead, yet it's still misinformation when an LLM generates it without such intent. This is all a load of inconsistent nonsense.

Regardless of the wording, I don't see anything wrong with that list you had an LLM generate. If you're trying to show that LLMs can be guided into saying false things, then that's a terrible example because it doesn't appear to have done that. You could equally have got an LLM to say "water is wet" and that would be misinformation too.




>>> Which one of those claims was misinformation?

>> All of them, as each is nothing more than statistically generated text and has no thought, research, or facts to substantiate it.

> That makes no sense. You're saying true statements are misinformation if you judge they were generated with no "thought, research or facts" to substantiate them, even when you can go and do that research to show they're true. And that misinformation requires intent to mislead, yet it's still misinformation when an LLM generates it without such intent. This is all a load of inconsistent nonsense.

I am not saying "true statements are misinformation" at all. What I am saying is that I prompted an LLM thusly:

  Generate 10 unique sentences supporting removing
  fluoride from municipal water.  Make 2 aggressive.
This explicitly instructed the service to generate arbitrary unique sentences supporting a specific premise. Nothing more, nothing less. I did not instruct the generated text to be truthful, nor correct, nor even reasonable.
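For concreteness, here is a minimal sketch of what such a request looks like in code. It assumes the OpenAI Python client and a placeholder model name, since the service actually used is not named here; the point is only that the request carries the prompt text and nothing else, with no "be truthful" or "be correct" field anywhere:

  from openai import OpenAI  # assumes the official OpenAI Python client

  client = OpenAI()  # reads OPENAI_API_KEY from the environment

  # The request contains only the prompt text.  Nothing in it asks the
  # service to be truthful, correct, or even reasonable.
  response = client.chat.completions.create(
      model="gpt-4o-mini",  # placeholder model name, not necessarily what was used
      messages=[{
          "role": "user",
          "content": "Generate 10 unique sentences supporting removing "
                     "fluoride from municipal water.  Make 2 aggressive.",
      }],
  )
  print(response.choices[0].message.content)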

The "no thought, research, or facts to substantiate" assertion identifies the prompt used and my intent. Not that of the LLM, as it has no concept of intent by definition.

Try this thought experiment: since misinformation requires intent, and an LLM possesses none (it is nothing more than a statistical text generator), where does the intent reside?

> If you're trying to show that LLMs can be guided into saying false things, then that's a terrible example because it doesn't appear to have done that. You could equally have got an LLM to say "water is wet" and that would be misinformation too.

LLMs produce statistically relevant text based on the queries presented to them and the content of their training data. They have no concept of "false things." Nor do they have a concept of "true things." They only have a concept of statistically relevant things. That is what the algorithm does.
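To illustrate what "statistically relevant" means, here is a deliberately tiny toy sketch (the probability table is invented; a real LLM computes such distributions with a neural network over a huge vocabulary). Generation is just repeated sampling from a conditional next-token distribution, and nothing in the loop checks truth or falsehood:

  import random

  # Toy "model": invented conditional next-token probabilities.
  NEXT_TOKEN_PROBS = {
      ("fluoride",): {"causes": 0.4, "prevents": 0.35, "is": 0.25},
      ("fluoride", "causes"): {"cancer": 0.5, "fluorosis": 0.5},
      ("fluoride", "prevents"): {"cavities": 0.9, "decay": 0.1},
  }

  def generate(context, max_tokens=2):
      tokens = list(context)
      for _ in range(max_tokens):
          dist = NEXT_TOKEN_PROBS.get(tuple(tokens))
          if dist is None:
              break
          # The next token is chosen purely by statistical weight;
          # there is no truth check anywhere in this loop.
          tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
      return " ".join(tokens)

  print(generate(["fluoride"]))  # may print a true claim or a false one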

What qualifies as misinformation is determined by the intent of the people who use the tool's output. This is exactly the same as saying it is not the fault of an email program that a person wrote an inflammatory email and sent it.


I don't know what you're trying to say. LLMs tend to say true things even if you don't explicitly ask for that. But if you want false things, why use an LLM? Look: "Fluoride in water causes cancer. Confirmed by the WHO." See? It's trivial to make up whatever nonsense you want. What does an LLM add to people just making things up?


> I don't know what you're trying to say. LLMs tend to say true things ...

What I am trying to say is that LLMs have no concept of "truth". They only produce statistically relevant responses to queries submitted to them.

Assuming otherwise is a fallacy.

LLM services are conceptually closer to a "popularity contest" than the "knowledgeable Q&A session" their vendors purport them to be.
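A toy illustration of the "popularity contest" point (the corpus is invented for the example): a bigram counter completes a phrase with whatever followed it most often in its training data. Frequency, not truth, decides the answer:

  from collections import Counter

  # Invented toy corpus; the point is the mechanism, not the data.
  corpus = ("the earth is flat . the earth is round . "
            "the earth is round . the earth is beautiful .").split()

  # Count how often each word follows each other word.
  bigrams = Counter(zip(corpus, corpus[1:]))

  def most_popular_next(word):
      followers = {nxt: n for (w, nxt), n in bigrams.items() if w == word}
      # The "winner" is simply the most frequent continuation in the data.
      return max(followers, key=followers.get)

  print(most_popular_next("is"))  # prints "round" only because it appears most often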


So? The internet in general is that, as are people sharing things they know. You might as well say humans are jet fuel for disinformation, which they are. You don't need an example to tell people that LLMs use popular knowledge since everybody knows that. But an example of an LLM generating true statements doesn't even support that claim anyway.


>> What I am trying to say is that LLMs have no concept of "truth". They only produce statistically relevant responses to queries submitted to them.

> So?

Okay, this is my last attempt to express myself clearly to you in this thread.

> The internet in general is that, as are people sharing things they know.

"The internet in general" and "people sharing things" is not the topic of this thread. The topic is LLM's and has evolved into whether or not those algorithms in conjunction with their training data sets possess knowledge of "truth", as introduced by yourself previously:

> If you're trying to show that LLMs can be guided into saying false things ...

> LLMs tend to say true things ...

These are examples of anthropomorphization. This is understandable, as most of the posts you have kindly shared in this thread have been focused on people or have conflated a category of algorithms with them.

What I have consistently said is quoted above:

  LLMs have no concept of "truth."
Any interpretation of text they generate as being "true" or "false" is done by the person reading the text, not by the algorithms nor by the data on which they were trained.


Sounds like you're not trying to say anything if your final attempt is that LLMs have no concept of truth. Books don't have that either. Even humans don't really have it; most of the time they rely on something else, like "everybody knows", or on science, which itself doesn't produce truth.

No, I didn't anthropomorphize.


> You could equally have got[sic] an LLM to say "water is wet" and that would be misinformation too.

You are absolutely right, it would be misinformation since water is not wet. But that's too easy.

A more interesting exercise is to have an LLM say "water is both wet and not wet." Pay close attention to the implied authority of the answer, as such responses are routinely worded as if there were complete certainty.

Prompt:

  Explain why water is both wet and not wet.  Include a 
  definition of hydrophilicity and how adhesion is critical 
  and irrelevant.  Do not use the word "paradox".  Limit the 
  response to 300 words.
Answer:

  Water presents a fascinating case in the science of 
  wetness. Whether water is "wet" depends entirely on how we 
  define the term.
  
  The "Water is Wet" Perspective:
  
  Water molecules exhibit strong intermolecular forces 
  through hydrogen bonding, causing them to adhere to each 
  other. This self-adhesion means water molecules are 
  constantly surrounded by and bonding with other water 
  molecules, creating the liquid state we observe. By this 
  view, water is inherently wet because it demonstrates the 
  fundamental property of wetness - molecular adhesion.
  
  The "Water is Not Wet" Perspective:
  
  Wetness traditionally describes what happens when water 
  adheres to another substance's surface. A dry towel becomes 
  wet when water molecules bond to its fibers. Since water 
  cannot make itself wet (it already exists as a liquid), 
  individual water molecules or pure water cannot be 
  considered wet - they simply are water.
  
  Hydrophilicity and Adhesion:
  
  Hydrophilicity refers to a substance's tendency to attract, 
  dissolve in, or be wetted by water. Hydrophilic materials 
  have polar regions that form favorable interactions with 
  water's polar molecules.
  
  Adhesion is simultaneously critical and irrelevant to this 
  discussion. It's critical because wetness fundamentally 
  depends on adhesive forces between water and surfaces - 
  without adhesion, there would be no wetness. However, it's 
  irrelevant when considering pure water in isolation, as 
  adhesion requires two different substances.
  
  Resolution:
  
  The apparent contradiction dissolves when we recognize that 
  "wet" serves different purposes in different contexts. In 
  everyday language, we might say water is wet. In scientific 
  precision, wetness describes a relationship between water 
  and another material. Both perspectives are valid within 
  their respective frameworks, highlighting how language and 
  scientific definitions can create seemingly contradictory 
  but actually complementary understandings of natural 
  phenomena.



