Your internal verifier model is actually good enough, and not random. It knows how the world works and subconsciously applies a lot of sniff tests it has learned over the years.
Sure, a lot of answers from LLMs may be inaccurate, but you mostly identify them as such because your ability to verify (using various heuristics) is good too.
Do you learn from asking people for advice? Do you learn from reading comments on Reddit? You still do, without trusting them fully, because you have sniff tests.
The problem isn't that content is AI generated; the problem is that the content is generated to maximize ad revenue (or some other kind of revenue) rather than to maximize truth and usefulness. This has been the case pretty much since the Internet went commercial. Google was in a lot of ways created to solve this problem, and it's been a constant struggle ever since.
The problem isn't AI; the problem is the idea that advertising and PR markets are useful tools for organizing information, rather than vaguely anarchist self-organizing collectives like Wikipedia or StackOverflow.
That's where I disagree. The noise is not that high at all; it's vastly exaggerated. Of course, if you go deep enough into niche topics you will experience this.
Yeah, niche topics like the technical questions I have left over after doing embedded development for more than a decade. Mostly questions like “can you dig up a PDF for this obsolete wire format.” Google used to be able to do that, but now all I get is hundreds of identical results telling me the protocol exists and nothing else.