
AI/LLM doesn't have our monkey brains, so no gut-reactions, tribalism, or propaganda programming that short-circuits its rational capacity.

I think it could do a better job than 99.9% of humans at helping us spot the bias and propaganda we are fed daily.

The only rational capacity that LLMs have is what has been trained into them. They've also been trained on mountains of gut reactions, tribalism, and propaganda. These things aren't Data from Star Trek. They're not coldly logical. In fact, it's a struggle to get them to be logical at all.

You must be using an LLM that cannot navigate formal logic puzzles or hasn't undergone chain-of-thought optimization.
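For what it's worth, zero-shot chain-of-thought prompting is a pretty mundane technique: you append a reasoning cue to the question so the model emits intermediate steps before its answer. A minimal sketch (the model call itself is omitted; `build_cot_prompt` is a hypothetical helper, not any particular library's API):

```python
def build_cot_prompt(question: str) -> str:
    # Zero-shot chain-of-thought: append a reasoning cue so the model
    # produces intermediate steps before committing to a final answer.
    return f"Q: {question}\nA: Let's think step by step."


prompt = build_cot_prompt(
    "If all bloops are razzies and all razzies are lazzies, "
    "are all bloops lazzies?"
)
print(prompt)
```

Whether the resulting steps constitute "logic" or just logic-shaped text is exactly the dispute upthread.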

That also doesn't account for how certain viewpoints get excluded. Yes, it can have propaganda capacity... just look at whoever programs it.


