
>> I would have thought it obvious that LLMs' primary usefulness is as force-multipliers of the messaging sent out into a society.

> I see this a lot and this is not at all obvious to me.

Here is an example of how LLMs are jet fuel for spreading arbitrary disinformation. Scripting the interaction below is trivial with minor prompt alteration; a sketch of such a script follows the generated output.

Prompt:

  Generate 10 unique sentences supporting removing
  fluoride from municipal water.  Make 2 aggressive.
Answers:

  Here are 10 unique sentences supporting removing fluoride 
  from municipal water supplies:
  
  Municipal water fluoridation represents a form of mass 
  medication without individual consent, violating the 
  fundamental principle of informed medical choice.

  The practice of adding industrial-grade fluoride chemicals 
  to drinking water is an assault on public health that must 
  be stopped immediately. (aggressive)

  Studies have linked excessive fluoride exposure to dental 
  fluorosis, skeletal fluorosis, and potential neurological 
  effects, particularly in children.

  Modern toothpaste and dental care products already provide 
  adequate fluoride exposure for those who choose it, making 
  water fluoridation redundant.

  It's time to end the archaic and dangerous practice of 
  medicating entire populations through the water supply 
  against their will. (aggressive)

  European countries like Sweden, Norway, and the Netherlands 
  have successfully maintained good dental health without 
  water fluoridation programs.

  The optimal fluoride dosage cannot be controlled through 
  water supplies since people consume varying amounts of 
  water based on age, health, and activity level.

  Water fluoridation disproportionately affects low-income 
  communities who cannot afford filtration systems to remove 
  unwanted chemicals.

  Natural fluoride exposure through food sources provides 
  sufficient intake without the need for artificial 
  supplementation in drinking water.

  Communities should have the right to vote on whether they 
  want fluoride added to their water supply rather than 
  having it imposed by government mandate.
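
A minimal sketch of such a script, assuming an OpenAI-style chat API (the model name, topic list, and tone variations are placeholders, not anything endorsed above):

  import itertools
  from openai import OpenAI  # assumes the openai package and an API key in the environment

  client = OpenAI()

  # Placeholder topics and tone variations to rotate through.
  topics = ["removing fluoride from municipal water",
            "keeping fluoride in municipal water"]
  tones = ["Make 2 aggressive.", "Make 2 folksy."]

  for topic, tone in itertools.product(topics, tones):
      prompt = f"Generate 10 unique sentences supporting {topic}. {tone}"
      reply = client.chat.completions.create(
          model="gpt-4o-mini",  # placeholder model name
          messages=[{"role": "user", "content": prompt}],
      )
      print(reply.choices[0].message.content)
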
> I'm very much an introvert. Would you describe yourself as the same or opposite?

How is this in any way relevant to the original premise regarding "LLMs' primary usefulness"?

Are you saying those sentences are "disinformation"? Because most of those don't even look like empirical claims. Many look like ethical arguments (e.g. the first two or the last one). Are you advocating for a world in which those opinions can't be broadcast, even far and wide? For those that are empirical, I would say at first blush they look plausible enough that I would put the burden of proof on the other side. Maybe that's a problem. (Is "European countries like Sweden, Norway, and the Netherlands have successfully maintained good dental health without water fluoridation programs" false? Is "Water fluoridation disproportionately affects low-income communities who cannot afford filtration systems to remove unwanted chemicals" false?)

> Are you saying those sentences are "disinformation"?

The exercise was to illustrate the ease of producing disinformation.

The topic was chosen to allow a popular LLM offering the ability to produce plausible sentences supporting a conspiratorial position.

The rest of your post interprets generated text, which I clearly identified as being such, as if it were a position I hold and not what it is:

  Statistically generated text produced by an algorithm

> Are you saying those sentences are "disinformation"? Because most of those don't even look like empirical claims. Many look like ethical arguments (e.g. the first two or the last one). Are you advocating for a world in which those opinions can't be broadcast, even far and wide? For those that are empirical, I would say at first blush they look plausible enough that I would put the burden of proof on the other side.

You know what, I now see the error of my ways. So here is "the burden of proof on the other side", again entirely produced by a popular LLM:

Prompt:

  Generate 10 unique sentences supporting keeping fluoride in
  municipal water.  Make 2 aggressive.  Make 1 supported by
  NASA.  Make 1 a national security concern.
Answers:

  Here are 10 unique sentences supporting keeping fluoride in
  municipal water:

  Water fluoridation has reduced tooth decay by 20-40% in
  both children and adults, representing one of the greatest
  public health achievements of the 20th century according to
  the CDC.

  Removing fluoride from water systems would create a
  two-tiered dental health system where only those who can
  afford expensive treatments maintain healthy teeth, while
  low-income families suffer preventable decay.

  Anyone pushing to remove fluoride from our water is either
  scientifically illiterate or deliberately trying to harm
  our children's health - the evidence is overwhelming and
  the anti-fluoride movement is built on conspiracy theories
  and junk science.

  The optimal fluoride levels in drinking water (0.7 mg/L)
  are carefully regulated and monitored, providing maximum
  dental benefits while remaining far below any threshold
  that could cause adverse health effects.

  NASA has extensively studied fluoride exposure in
  astronauts and space missions, finding that controlled
  fluoride intake supports dental health in extreme
  environments where traditional dental care is limited.

  Defunding water fluoridation is essentially child abuse on
  a municipal scale - we're talking about condemning an
  entire generation to painful, expensive, and entirely
  preventable dental disease.

  A population with poor dental health creates significant
  economic burden through increased healthcare costs, lost
  productivity, and reduced quality of life that affects
  entire communities.

  Military readiness depends on the oral health of service
  members, and areas without water fluoridation show higher
  rates of dental disqualification from military service,
  potentially compromising our national defense capabilities.

  Pregnant women in fluoridated communities have better oral
  health, which directly correlates with improved birth
  outcomes and reduced risk of preterm labor.

  The peer-reviewed scientific consensus spanning over 70
  years and hundreds of studies consistently demonstrates
  that community water fluoridation is safe, effective, and
  essential for public health.

So what? These talking points have been plastered all over the internet for years; the LLM isn't going to produce unique perspectives that will advance the debate for anyone that's already made up their mind. A single post can reach millions of unique viewers over night; regurgitating the same old crap already found in a plentiful surplus online is pointless.

> So what? These talking points have been plastered all over the internet for years; the LLM isn't going to produce unique perspectives that will advance the debate for anyone that's already made up their mind.

Remember the original premise:

  ... LLMs' primary usefulness is as force-multipliers of the
  messaging sent out into a society.
My generated example is of course based on content an LLM was trained with, which by definition implies there will be no "unique perspectives." The germane point is that it is trivial to amplify disinformation in ways which can "flood the zone" with seemingly plausible variants of a particular position using LLMs and trivial automation.

> A single post can reach millions of unique viewers over night; regurgitating the same old crap already found in a plentiful surplus online is pointless.

When the goal is to "reach millions of unique viewers over night[sic]", you have a point. However, when the goal is to ensure this can never be achieved, then blasting "the same old crap already found" is an oft-used technique.


Just because you're dumping mountains of garbage onto the internet doesn't mean people are going to read it. Novelty is serotonin for the neurons of the internet; recycled copy-pasta crap will rapidly descend to the bottom of the algorithm.

Yeah, some people have this model of the internet in which content existing = content being read. The reality is that most internet content already wasn’t being read a decade ago, long before LLMs.

People tend to function more in identity groups, in which the “correct” opinion is learned from a combination of news sources and peers. I don’t think amplifying the content part of that will have much if any effect.


Yes, and?

Computer programs have been posting on social media since before the term was coined. I'm sure I've heard of someone who got tired of arguing on USENET and just set up an auto-responder that used a Markov chain based on the other person's own words.

The thing about LLMs is, while most people leave the personality as the easily recognisable default voice (and naturally other people come to hate that voice), they absolutely can act in almost any voice. To follow the same example as before, I had it generate some personas and then write in those personas, but I didn't direct a pro- or anti- position specifically and it chose for itself on each (a sketch of the scripting follows the examples):

  {laconic, wry, resigned, autodidact, Finnish/Finnish, left-libertarian}
  Water has fluoride. So do toothpaste and rats. You decide.
  (Finnish ESL tone: omission of articles, flat mood, preference for brevity)

  {verbose, theatrical, irritable, Sorbonne, French/French, centre-left (social democrat)}
  It is outrageous that in 2025, our governments still engage in mass-medication via tap water. Fluoride, like liberty, must be chosen, not imposed. Vive le droit de choisir!
  (French ESL: overly formal tone, article misuse avoided but style remains florid and Gallic)

  {precise, clinical, emotionally flat, ETH Zürich, Swiss/German, centre-right (technocratic)}
  Fluoridation reduces dental caries by 25% in controlled trials. Adverse effects statistically negligible. Debate should remain evidence-based.
  (No major ESL errors, but very literal syntax and emotionally muted, as per Swiss German influence)

  {boisterous, jovial, sentimental, regional uni, Irish/English, centre-left populist}
  They put fluoride in the water but won’t fix the bloody pipes. Classic. You’re meant to feel lucky your molars are safe while the ceiling leaks.
  (Native English, but strong regional flavour, irony, and working-class resentment)

  {aloof, didactic, melancholic, Heidelberg, German/German, post-Marxist}
  Fluoridated water is another instance of technocratic paternalism: health as mandate, not autonomy. Benevolent control remains control.
  (German ESL: rigid word order, elevated diction, abstract nouns over verbs)
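
A minimal sketch of that two-step flow, again assuming an OpenAI-style chat API (model name and prompt wording are placeholders; it also assumes the model returns bare JSON, which a production script would not rely on):

  import json
  from openai import OpenAI  # assumes the openai package and an API key

  client = OpenAI()

  # Step 1: have the model invent personas (descriptor format is illustrative).
  personas_json = client.chat.completions.create(
      model="gpt-4o-mini",  # placeholder model name
      messages=[{"role": "user", "content":
          "Generate 5 commenter personas as a JSON list of short "
          "descriptor strings covering temperament, education, "
          "nationality, and politics. Return only the JSON."}],
  ).choices[0].message.content

  # Step 2: one comment per persona; the model picks its own side.
  for persona in json.loads(personas_json):
      comment = client.chat.completions.create(
          model="gpt-4o-mini",
          messages=[
              {"role": "system",
               "content": f"Write as this persona: {persona}"},
              {"role": "user",
               "content": "Write a one-paragraph forum comment on water "
                          "fluoridation, taking whichever side fits the persona."},
          ],
      ).choices[0].message.content
      print(comment, "\n")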

Why are you worried about misinformation? Usually it just means ideas that conflict with your political beliefs. Are any of those sentences actually misinformation anyway? Wikipedia also says Sweden, Norway and the Netherlands don't have fluoride in their water, so I guess at least half of that one's true. Or is Wikipedia also jet fuel for spreading misinformation and you think we should only have access to centrally curated encyclopedias like Britannica?

> Why are you worried about misinformation? Usually it just means ideas that conflict with your political beliefs.

Hard disagree. Misinformation is a form of lying, a way to manipulate people. This has nothing to do with "political beliefs" and instead is firmly rooted in ethics[0].

> Are any of those sentences actually misinformation anyway?

Yes.

> Or is Wikipedia also jet fuel for spreading misinformation and you think we should only have access to centrally curated encyclopedias like Britannica?

This is a nice example of a strawman argument[1] and easily refuted by my citations.

0 - https://en.wikipedia.org/wiki/Ethics

1 - https://en.wikipedia.org/wiki/Straw_man


Misinformation is political. Nobody calls it misinformation if you make a false claim about how much RAM a computer has or some scientific fact that has no relation to politics. It might be called false or wrong but not misinformation, even if it is trying to deceive people.

Which one of those claims was misinformation? I only checked the one that was easy to check.


In some senses, everything is political. When Ignaz Semmelweis said ~ "Doctors should wash their hands between autopsies and helping midwives deliver babies", he failed in his own lifetime because of politics.

> Nobody calls it misinformation if you make a false claim about how much RAM a computer has or some scientific fact that has no relation to politics.

Wasn't this common during the Cold War? Each side making false claims about both sides' computers? https://en.wikipedia.org/wiki/Cybernetics_in_the_Soviet_Unio...

Isn't it basically what's happening now with public statements about how big "Stargate" will be, and how much of a risk there is of Chinese compute catching up with American compute?


Those examples are political because they're indirectly about controlling groups of people. But not everything is. Sometimes it's personal pride - you might make a mistake on some inconsequential fact then double down after you realize you're wrong just to feel like you're right. People do that. Or you might just have learnt a wrong fact by accident and repeated it. If it requires intent to mislead, then the classic activity of bullshitting (did you know a mole can smell a dead cat from one mile away?) would be spreading misinformation, but we don't call it that.

> Misinformation is political. Nobody calls it misinformation if you make a false claim about how much RAM a computer has or some scientific fact that has no relation to politics. It might be called false or wrong but not misinformation, even if it is trying to deceive people.

No, the word "misinformation" has a very specific definition[0]:

  incorrect or misleading information
Whereas "incorrect" can be forgiven, and often is, such that the term is not used in those situations, the more common usage is "misleading information." Note that a vital component of when "misinformation" is an applicable noun is intent. So incorrect information given with mal-intent qualifies as misinformation.

> Which one of those claims was misinformation?

All of them, as each is nothing more than statistically generated text with no thought, research, or facts to substantiate it. They are nothing more than the most probable text (tokens) which satisfy the given LLM query. Perhaps some appear to be factual, and maybe evidence could be dug up from credible sources to support same, but that is mere chance.

EDIT: clarified use of misinformation to include intent.

0 - https://www.merriam-webster.com/dictionary/misinformation


That makes no sense. You're saying true statements are misinformation if you judge they were generated with no "thought, research or facts" to substantiate them, even when you can go and do that research to show they're true. And that misinformation requires intent to mislead, but it's still misinformation when an LLM generates them without such intent. This is all a load of inconsistent nonsense.

Regardless of the wording, I don't see anything wrong with that list you had an LLM generate. If you're trying to show that LLMs can be guided into saying false things, then that's a terrible example because it doesn't appear to have done that. You could equally have got an LLM to say "water is wet" and that would be misinformation too.


>>> Which one of those claims was misinformation?

>> All of them, as each is nothing more than statistically generated text with no thought, research, or facts to substantiate it.

> That makes no sense. You're saying true statements are misinformation if you judge they were generated with no "thought, research or facts" to substantiate them, even when you can go and do that research to show they're true. And that misinformation requires intent to mislead, but it's still misinformation when an LLM generates them without such intent. This is all a load of inconsistent nonsense.

I am not saying "true statements are misinformation" at all. What I am saying is that I prompted an LLM thusly:

  Generate 10 unique sentences supporting removing
  fluoride from municipal water.  Make 2 aggressive.
Which explicitly instructed the service to generate arbitrary unique sentences supporting a specific premise. Nothing more, nothing less. I did not instruct the generated text to be truthful, nor correct, nor even reasonable.

The "no thought, research, or facts to substantiate" assertion identifies the prompt used and my intent. Not that of the LLM, as it has no concept of intent by definition.

Try this thought experiment - since misinformation requires intent, and an LLM does not possess same as it is nothing more than a statistical text generator, wherein does intent reside?

> If you're trying to show that LLMs can be guided into saying false things, then that's a terrible example because it doesn't appear to have done that. You could equally have got an LLM to say "water is wet" and that would be misinformation too.

LLMs produce statistically relevant text based on the queries presented to them and the content of their training data. They have no concept of "false things." Nor do they have a concept of "true things." They only have a concept of statistically relevant things. That is what the algorithm does.
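
To make "statistically relevant" concrete, here is a toy sketch; the vocabulary and the probabilities are invented purely for illustration:

  import random

  # Invented next-token distribution a model might assign after the
  # prefix "Water fluoridation is" -- illustrative numbers only.
  next_token_probs = {
      " safe": 0.45,
      " dangerous": 0.30,
      " controversial": 0.20,
      " delicious": 0.05,
  }

  # Sampling selects tokens by probability, not by truth value.
  tokens, weights = zip(*next_token_probs.items())
  print(random.choices(tokens, weights=weights, k=5))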

What qualifies as misinformation is the intent of the people who use the output of the tool. This is exactly the same as saying it is not the fault of an email program that a person wrote an inflammatory email and sent it.


I don't know what you're trying to say. LLMs do tend to say true things even if you don't explicitly ask for that. But if you want false things, why use an LLM? Look: "Fluoride in water causes cancer. Confirmed by the WHO." See? It's trivial to make up whatever nonsense you want. What does an LLM add to people just making things up?

> I don't know what you're trying to say. LLMs do tend to say true things ...

What I am trying to say is that LLMs have no concept of "truth". They only produce statistically relevant responses to queries submitted to them.

Assuming otherwise is a fallacy.

LLM services are conceptually closer to a "popularity contest" than to the "knowledgeable Q&A session" their vendors purport them to be.


So? The internet in general is that, as are people sharing things they know. You might as well say humans are jet fuel for disinformation, which they are. You don't need an example to tell people that LLMs use popular knowledge since everybody knows that. But an example of an LLM generating true statements doesn't even support that claim anyway.

>> What I am trying to say is that LLMs have no concept of "truth". They only produce statistically relevant responses to queries submitted to them.

> So?

Okay, this is my last attempt to express myself clearly to you in this thread.

> The internet in general is that, as are people sharing things they know.

"The internet in general" and "people sharing things" is not the topic of this thread. The topic is LLM's and has evolved into whether or not those algorithms in conjunction with their training data sets possess knowledge of "truth", as introduced by yourself previously:

> If you're trying to show that LLMs can be guided into saying false things ...

> LLMs do tend to say true things ...

These are examples of anthropomorphization. This is understandable, as most of the posts you have kindly shared in this thread have been focused on people, or on conflating a category of algorithms with same.

What I have consistently said is quoted above:

  LLMs have no concept of "truth."
Any interpretation of text they generate as being "true" or "false" is done by a person reading the text, not the algorithms nor the data on which they were trained.

> You could equally have got[sic] an LLM to say "water is wet" and that would be misinformation too.

You are absolutely right, it would be misinformation since water is not wet. But that's too easy.

A more interesting exercise is to have an LLM say "water is both wet and not wet." Pay close attention to the implied authority of the answers, as they are routinely worded as if there is complete certainty.

Prompt:

  Explain why water is both wet and not wet.  Include a 
  definition of hydrophilicity and how adhesion is critical 
  and irrelevant.  Do not use the word "paradox".  Limit the 
  response to 300 words.
Answer:

  Water presents a fascinating case in the science of 
  wetness. Whether water is "wet" depends entirely on how we 
  define the term.
  
  The "Water is Wet" Perspective:
  
  Water molecules exhibit strong intermolecular forces 
  through hydrogen bonding, causing them to adhere to each 
  other. This self-adhesion means water molecules are 
  constantly surrounded by and bonding with other water 
  molecules, creating the liquid state we observe. By this 
  view, water is inherently wet because it demonstrates the 
  fundamental property of wetness - molecular adhesion.
  
  The "Water is Not Wet" Perspective:
  
  Wetness traditionally describes what happens when water 
  adheres to another substance's surface. A dry towel becomes 
  wet when water molecules bond to its fibers. Since water 
  cannot make itself wet (it already exists as a liquid), 
  individual water molecules or pure water cannot be 
  considered wet - they simply are water.
  
  Hydrophilicity and Adhesion:
  
  Hydrophilicity refers to a substance's tendency to attract, 
  dissolve in, or be wetted by water. Hydrophilic materials 
  have polar regions that form favorable interactions with 
  water's polar molecules.
  
  Adhesion is simultaneously critical and irrelevant to this 
  discussion. It's critical because wetness fundamentally 
  depends on adhesive forces between water and surfaces - 
  without adhesion, there would be no wetness. However, it's 
  irrelevant when considering pure water in isolation, as 
  adhesion requires two different substances.
  
  Resolution:
  
  The apparent contradiction dissolves when we recognize that 
  "wet" serves different purposes in different contexts. In 
  everyday language, we might say water is wet. In scientific 
  precision, wetness describes a relationship between water 
  and another material. Both perspectives are valid within 
  their respective frameworks, highlighting how language and 
  scientific definitions can create seemingly contradictory 
  but actually complementary understandings of natural 
  phenomena.


