It's not slop if it's inspired by good content. Basically you need to add your own original spices to the soup to make it not slop, or have the LLM do deep-research kind of work, contrasting hundreds of sources.
Slop did not originate from AI itself but from the feed-ranking algorithms that set the criteria for visibility. They "prompt" humans to write slop.
AI slop is just an extension of this process, and it started long before LLMs. Platforms optimizing for their own interests at the expense of both users and creators are the source of slop.
> the problem is when I copy paste the content and claim I wrote it
Why is this the problem and not the reverse - using AI without adding anything original to the soup? I could paraphrase an AI response in my own words and it would be no better. But even if I used AI, if it writes down my ideas, then it would not be AI slop.
So we can expect either 1. people using AI and copy-pasting into the human-only network, or 2. other people claiming your text sounds like AI and ostracizing you for no good reason. It won't be a happy place - I know this from anti-generative-AI forums.
Deep Research reports, I think, are above average internet quality: they collect hundreds of sources, synthesize and contrast them, and provide backlinks. Almost like a generative Wikipedia.
I think all we can expect from internet information is a good description of the distribution of material out there, not truth. That is totally within the capabilities of LLMs. For additional confidence, run three reports on different models.
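The "run three reports" tip can even be partially mechanized. A minimal sketch, assuming the reports are plain text with inline source links (the URL regex and the majority heuristic are my own assumptions, not something from the thread):

```python
import re
from collections import Counter

# Rough URL matcher; stops at whitespace and common closing brackets.
URL_RE = re.compile(r"https?://[^\s)\]]+")

def source_agreement(reports):
    """Count how many of the given reports cite each URL.

    Deduplicates within each report so a source cited five times in
    one report still counts as one "vote" from that report.
    """
    counts = Counter()
    for text in reports:
        counts.update(set(URL_RE.findall(text)))
    return counts
```

URLs cited by two out of three reports make a decent shortlist to spot-check by hand; the disagreements flag where the models are describing different parts of the distribution.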
That sidebar of past chats is where they go to be lost forever. Nobody has come up with a UI with a decent search experience. It's like Reddit's internal search engine, but a bit worse.
Exactly. The best they've come up with is a generated subject-line summary. So many options to explore here: categorization by topic, by date, by customer account, clustering by topic, search with various ranking options, a conversation tree view, histograms per date/topic/account, integration with email or an issue tracker, various states per chat/thread (e.g. resolved/ongoing/non-viable), a knowledge bank to quickly save stuff you learned (code snippets, commands, facts), integration with Notion or a wiki, etc. Just off the top of my head.
I was told there would be rapid prototyping with AI. Haven't seen any of the above.
I face the same problem and agree with your "search experience" remark. But it triggered an idea: just add the search bar to the top of the page instead of hiding it in the sidebar. That would cover 100% of my reasons to open the sidebar. Maybe show the 5 most recent chats under the search bar when I click it, before I type anything.
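That behavior is simple enough to sketch. A minimal version, assuming each chat is just a title plus a timestamp (the `Chat` structure and the ranking are my invention; a real chat history would also want full-text search over the messages, not just titles):

```python
from dataclasses import dataclass

@dataclass
class Chat:
    title: str
    timestamp: float  # unix seconds; larger = more recent

def search_chats(chats, query, limit=5):
    """Empty query: the most recent chats (the click-before-typing case).
    Otherwise: case-insensitive substring matches, newest first."""
    q = query.strip().lower()
    hits = chats if not q else [c for c in chats if q in c.title.lower()]
    return sorted(hits, key=lambda c: c.timestamp, reverse=True)[:limit]
```

An empty query falls back to "N most recent", matching the proposed behavior when the bar is clicked before anything is typed; typing narrows the list in place.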
I don't see a big difference from humans; we say many unreasonable things too, so validation is necessary. Whether you use the internet, books, or AI, it is your job to test their validity. Anything can be bullshit, whether written by a human or an AI.
In fact I fear that humans optimize for attention and cater to the feed-ranking algorithm too much, while AI is at least trying to do a decent job. But with AI it is the user's responsibility to guide it; what AI does depends on what the user does.
There are some major differences though. Without these tools, individuals are pretty limited in how much bullshit they can output, for many reasons, including that they are not mere digital puppets with no need to survive in society.
It’s clear that pro-slavery-minded elitists are happy to sell the idea that people should become a "good complement to AI", which is even more disposable than these puppets. But unlike these mindless entities, people have the will to survive deeply engraved as a primary behavior.
Sure, but that’s not raw individual output from one's mere direct utterance capacity.
Now anyone mildly capable of using a computer can produce more fictional characters than all those humanity collectively kept in its miscellaneous lores, and drown them in an ocean of insipid narratives. All of it nonetheless mostly ticks the grammatical checkboxes at a level most humans would fail (I definitely would :D).
Why does it matter? If you count not just the people creating these hallucinations, but also the people accepting and using them, it must be billions and billions...
And that's the point. You need a critical mass of people buying into something. With LLMs, you just need ONE person with ONE model and modest enough hardware.
>Here’s a concise and thoughtful response you could use to engage with ako’s last point:
---
"The scale and speed might be the key difference here. While human-generated narratives—like religions or myths—emerged over centuries through collective belief, debate, and cultural evolution, LLMs enable individuals to produce vast, coherent-seeming narratives almost instantaneously. The challenge isn’t just the volume of ‘bullshit,’ but the potential for it to spread unchecked, without the friction or feedback loops that historically shaped human ideas. It’s less about the number of people involved and more about the pace and context in which these narratives are created and consumed."
No, the web is now full of this bot generated noise.
And even considering only the tools used in isolated sessions, not exposed to the public by default, the most popular ones are tuned to favor engagement and retention over relevance. That's a different point, since LLMs can definitely be tuned in other directions, but in practice it does matter in terms of social impact at scale. By now even prime-time infotainment has covered people falling in love with chatbots or being encouraged into suicidal loops. "You're absolutely right" is not always the best answer.
You don't use it that way. You use it to help you build and run experiments, to discuss your findings, and in the end to write up your discoveries. You provide the content, and actual experiments provide the signal.
Like clockwork. Each time someone criticizes any aspect of any LLM, there's always someone telling that person they're using the LLM wrong. Perhaps it's time to stop blaming the user?
Why would their response be appropriate when even the creators of the LLM don't clearly state the purpose of their software, let alone instruct users how to use it? The person I replied to said that this software should be used to "help you build and run experiments, and help you discuss your findings, and in the end helps you write your discoveries" - I dare anyone to find any mention of this workflow being the "correct" way of using any LLM in the LLM's official documentation.
You wouldn't use a screwdriver to hammer a nail. Understanding how to use a tool is part of using the tool. It's early days and how to make the best use of these tools is still being discovered. Fortunately a lot of people are experimenting on what works best, so it only takes a little bit of reading to get more consistent results.
What if the company selling the screwdriver kept telling you that you could use it as a hammer? What if you were being bombarded with marketing that hammers are being replaced by screwdrivers?
You can recognise that the technology has a poor user interface and is fraught with subtleties without denying its underlying capabilities. People misuse good technology all the time. It's kind of what users do. I would not expect a radically new form of computing which is under five years old to be intuitive to most people.