What if we shifted our focus entirely from the source of information to how useful and accurate it is?
I can't see how the prevalent value system could avoid being "sapio-supremacist". Is "future proof" meant to include intelligences that are artificial but whose "sentience" is otherwise human-equivalent or "greater"?
That’s not the problem. The accuracy issue, as you’re modeling it, is partly solved by peer-reviewed journals, but that’s not what people are asking for. People are asking for authenticity, not the output of automated information generators for hire.
Is this an argument for AI? First, AI slop sucks. Second, even if it stopped sucking, it would still need good input data, and it will keep needing that for a top-5 dish soap recommendation until it can do my dishes for me. Third, I want more than just useful information.