Human time is certainly valuable to a particular human. However, if I choose to spend time doing something that a machine can do, people will not generally compensate me more for it just because it was me doing it instead of a machine.
I think it's worth remembering that IP law is generally viewed (at least legally) as existing for the net benefit of society, as opposed to for ethical reasons. Certainly many authors feel like they have (or ought to have) some moral right to control their work, but I don't believe that was ever the foundation of IP law.
Nor do I think it should be! If we are to restrict people's actions (e.g. copying), then it should be for a clear and articulable net societal benefit. The value proposition of IP law is that it prevents degenerate behavior that would otherwise stifle innovation. My question, then, is: how do these AI developments fit into that?
So I completely agree that (for example) laundering a full work more or less verbatim through an AI should not be permissible. But when it comes to the higher-order transformations and remixes that resemble genuine human work, I'm no longer certain. I definitely don't think that "human exceptionalism" makes for a good basis, either legally or ethically.
Regarding FOSS licenses, I'm again asking how AI relates back to the original motivations. Why does FOSS exist in the first place? What is it trying to accomplish? A couple of ideological motivations that come to mind are preventing someone from building on top of the work and then profiting, and ensuring user freedom and the ability to tinker.
Yes, the current crop of AI tools seems to pose an ideological issue. However! That's only because the current iteration can't truly innovate and also (as you note) the process still requires lots of painstaking human input. That's a far cry from the hypothetical that I previously posed.