> describing something as wrong is easier than saying what's right
This, 1000x. You see the same problem in the education of children. There are two problems:
1) there's a VERY large number of possible depictions. Let's say 10 million. Of those, a VERY small number are correct/acceptable. Let's say 10.
Then it's easy to see: a correct example carries 1/10 = "10%" of the information, so to speak, where 100% is what you'd need to be guaranteed a correct classification of all possible depictions.
Yet a negative example carries 1/10,000,000 = "0.00001%" of the information you need to classify everything correctly. If you try to create correct behavior by giving only wrong examples ... it'll be a long day.
A positive example, telling an AI (or a child/student) what to DO, carries ~1,000,000 times more information than telling it what NOT to do, in this example. In practice you'll find the difference is even more extreme. (See the quick sketch after point 2 for the arithmetic.)
2) BUT there is a problem, the one that always comes up with Wikipedia: giving positive examples requires examples that everyone agrees with, and we all know there are rather serious problems with that. A positive example HAS to sit in the intersection of what all groups find acceptable. And ... well, the easy examples are always wars: who is right? Who started it? Is it justified? Apply that to Russia, Ukraine, Israel, India, Sudan, ... but in practice it hits a lot of issues, some not controversial at all (which spelling is correct in New York English: bupkis or bupkez?).
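To make the arithmetic in point 1 concrete, here's a minimal sketch in Python using the same made-up numbers (10 million possible depictions, 10 acceptable ones). The counts are toy assumptions from the example above, not measurements:

```python
# Toy numbers from the comment above (assumptions, not measured data).
total_depictions = 10_000_000   # all possible depictions
correct_depictions = 10         # the tiny acceptable subset

# One positive example names one of the 10 acceptable depictions:
# it covers 1/10 of the "what to DO" set.
positive_share = 1 / correct_depictions   # 0.1 -> "10%"

# One negative example rules out one of ~10 million depictions:
# it covers 1/10,000,000 of the space you'd need to exclude.
negative_share = 1 / total_depictions     # 1e-7 -> "0.00001%"

print(f"one positive example: {positive_share:.0%} of the acceptable set")
print(f"one negative example: {negative_share:.5%} of the space to rule out")
print(f"ratio: ~{positive_share / negative_share:,.0f}x more informative")
```

Under these toy assumptions a single positive example is roughly a million times more informative than a single negative one, which is where the corrected ratio in point 1 comes from.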