However, it's not sufficient. The actual tasks have to be written down, tests constructed, and the specialists tested.
A subset of this has been done with some rigor, and AI/computers have surpassed the human threshold on some of these tests. Some have then responded that this isn't AGI, that the tasks don't sufficiently measure "intelligence" or some other word, and that more tests are warranted.
You're saying we need to write down all intellectual tasks? How would that help?
If an AI is better at some tasks (the ones that happen to be written down), that doesn't mean it is better at all tasks.
Actually, I'd lower my threshold even further. I originally said 50%, then 20%, then 5%, but now I'll say that if an AI is better than 0.1% of people at all intellectual tasks, then it is AGI: it is "general" (it can do all intellectual tasks) and it is "intelligent" (a label we ascribe to all humans).
But the AGI has to be better at all (not just some) intellectual tasks.
Well, to put it crudely, you just have to find a dumb person who is inferior to the AI at every single intellectual task. This is cruel, and I don't envy that dumb person, but who knows, I might end up being that dumb person. We all might.