> That somehow international legislation will converge on the strictest possible interpretation of intellectual property, and those models will become illegal by the mere fact they were trained on copyrighted material.
Doesn't this ultimately result in a local maximum? All the biases get reinforced, and all the novelty (things the system hasn't seen or produced yet) goes away.
A tiny example: Dall-E and SD both struggled with eye positioning. Wouldn't training a model on their output then reinforce that particular bias toward poorly positioned eyes? Now multiply this by every existing quirk in the models.
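A toy sketch of that feedback loop, under two assumptions that are mine rather than anything measured about these models: (1) each generation adds a small systematic offset to the feature it generates (the "eye positioning" quirk), and (2) it only reproduces the middle of what it saw, so rare/novel outputs in the tails never come back. Refitting on its own output then compounds the bias and shrinks the spread, generation after generation.

```python
# Toy simulation of a model repeatedly retrained on its own output.
# Both the per-generation "quirk" offset and the tail-dropping step are
# illustrative assumptions, not properties of Dall-E or SD.
import random
import statistics

random.seed(0)

mean, stdev = 0.0, 1.0   # generation 0: distribution of some image feature
quirk = 0.05             # hypothetical systematic bias added each generation
n = 5_000

for gen in range(1, 11):
    # The model "generates" samples from its current distribution,
    # each nudged by its systematic quirk.
    samples = sorted(random.gauss(mean, stdev) + quirk for _ in range(n))
    # It never reproduces the tails (the rare/novel outputs it barely saw).
    kept = samples[n // 10 : -(n // 10)]
    # The next model is trained purely on that output: refit mean and spread.
    mean = statistics.fmean(kept)
    stdev = statistics.stdev(kept)
    print(f"gen {gen}: bias drift {mean:+.3f}, spread {stdev:.3f}")
```

Running it, the bias drift accumulates steadily while the spread collapses toward zero within a handful of generations, which is the "local maximum, no novelty" outcome in miniature.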