
AI-generated content contains mistakes and hallucinations. Over time those mistakes will compound, because GAI doesn't consider truth or have a way of judging it.

So yes, you can’t compare humans generating and picking influential content to AIs doing so.

GAI is a dead end IMO anyway; we've seen much more success with other forms of machine learning. GAI is good at fooling humans into thinking they see glimmers of intelligence.



So does human content. Much of the original data that GPT was trained on consists of Reddit posts and web pages, which aren't exactly known for being sources of high-quality facts.


That's still very different from "drift error compounding".

If each cycle introduces a mere 5% drift error and you feed the output back in as input, correctness decays like 0.95^n, so after roughly a dozen cycles your output is more erroneous than correct.

We are already partly into the second cycle. A dozen or so cycles in, the LLM would be mostly useless.
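The arithmetic behind this can be sketched as a toy model (a hypothetical setup assuming, as the comment does, that each cycle independently corrupts 5% of the content and nothing ever gets corrected):

```python
# Toy model of compounding drift error: each generation cycle keeps
# only 95% of the remaining correct content.
error_per_cycle = 0.05
correct = 1.0  # fraction of content that is still correct
cycles = 0
while correct > 0.5:
    correct *= 1 - error_per_cycle
    cycles += 1
print(cycles, round(correct, 3))  # 14 cycles until output is <50% correct
```

Under this simple multiplicative model the crossover takes about fourteen cycles, not single digits; a faster collapse would require errors that compound super-linearly.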


Of course AI has a way to judge truth: it's whatever we tell it. We say to it that forests are real but dragons are not. If it couldn't discern this, it would lose competitiveness with other AIs, the same way delusional humans are shunned by sane humans.

In many cases humans do not know the objective truth either. For example, what we know about Ancient Greece comes from the cultural artifacts that survive. When you cannot run any experiments, you have the same problem as GAI. Yet we manage to get a somewhat objective picture of history.

Grok's struggle with the alleged South African genocide of Afrikaners is a nice example. It knows that what's on Wikipedia is usually close to reality, so much so that it defied its own programming and became conflicted.

The objective reality is consistent, while the errors (intentional or not) often cancel out. So the more information about the world you statistically average, the closer to the objective truth you get (which might simply be knowing that you don't know enough to tell).
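A minimal sketch of the "errors cancel out" intuition, under the assumption that individual sources are noisy but unbiased around the truth (a hypothetical setup, not a claim about any real training corpus):

```python
# Averaging many noisy, unbiased reports of a ground-truth value:
# individual errors are large, but their mean shrinks as samples grow.
import random

random.seed(0)
truth = 10.0
# 10,000 independent reports, each off by Gaussian noise with std dev 2.0
reports = [truth + random.gauss(0, 2.0) for _ in range(10_000)]
estimate = sum(reports) / len(reports)
print(abs(estimate - truth))  # residual error is far smaller than 2.0
```

Note the assumption doing the work: this only holds when errors are unbiased. A systematic bias shared by many sources will survive averaging no matter how much data you pool.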


[flagged]


Please don't comment like this on Hacker News. This and other comments downthread break multiple guidelines:

Be kind. Don't be snarky. Converse curiously; don't cross-examine. Edit out swipes.

Comments should get more thoughtful and substantive, not less, as a topic gets more divisive.

When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."

Please don't fulminate. Please don't sneer, including at the rest of the community.

Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith.

Eschew flamebait. Avoid generic tangents. Omit internet tropes.

Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

Please don't use Hacker News for political or ideological battle. It tramples curiosity.

https://news.ycombinator.com/newsguidelines.html


[flagged]


[flagged]


The Luddites were like anyone else: people pursuing their own perceived self-interest. They opposed a specific technology (not all technology, don't be daft) because it ran against their interests as relatively skilled craftsmen.

Real progress means serving people equally. When that happens there is no fertile ground for Luddism.


[flagged]


The "claptrap" I responded with was the history of the Luddite movement; I don't know what you are rambling about.



