> Will you indemnify those that follow your advice?
I strongly feel that this is a terrible metric for comments on the internet.
First, the person you’re replying to has nothing to gain and a lot to lose by saying "yes".
Second, it invites silly corner-case nitpicking. Their comment is written in reasonable plain English for other users reading plain English. It’s not a legal contract, and so leaves lots of loopholes. Sure, you could create a likely non-transformative LLM by training it on nothing but the text of Harry Potter, with fitness measured by how exactly it reproduces the complete text of Harry Potter, but that’s not what reasonable people are doing with LLMs.
It's borderline legal advice, and you have to be very careful when predicting how judges will rule on future cases.
In a legal context certain words have immense power. In the context of copyright, 'transformative' is one such word. There is a very fine line between 'transformative' and 'derivative', and you don't get to preempt the judiciary on how they will see things.
This is not a legal context, though. I am not a lawyer, I don't claim to be a lawyer, and even if I were a lawyer, no one on the internet should be taking my comments as legal advice in the first place. One should not need to disclaim everything they write with such a statement.
As an attorney, I'm of the opinion that otherwise-intelligent people who provide confidently-wrong legal opinions on the Internet should be held accountable for people following their advice. I see incorrect understandings of the law and sloppy legal analysis with dismaying frequency here, even when it comes to settled law like what "fair use" is.
This is a weird stance. Anyone can say anything on the internet, whether legal opinions or anything else. It should not be necessary to disclaim such an opinion, because no one should be using the internet as their basis of law (or medicine, etc.) instead of a professional in the first place.
> no one should be using the internet as their basis of law (or medicine, etc.) instead of a professional in the first place.
Designing systems around what people should do, as opposed to what they actually do, has proven time and again not to work particularly well in practice. I'm sure you've seen countless examples of people wearing paths through manicured grass fields. The landscaper will complain about where people should walk and put up signs, to no avail.
The fact is, we (including me, BTW) are frequently wrong about a lot of things, and when there's little riding on it, we can ignore that most of the time. With subjects like medicine and law, however, where a mistake can cost you your life or lots of money, we want to make sure people are getting the best advice possible. That's why we require licenses to practice medicine and law, and we have governing and ethics bodies to regulate how professionals operate their practices.
> That's why we require licenses to practice medicine and law, and we have governing and ethics bodies to regulate how professionals operate their practices.
Correct, so people should (and do) go to the people who hold these licenses, not random people on the internet. I don't even understand what your solution, or even your problem, is. It seems like you're suggesting that everyone, whenever they speak on the internet about anything vaguely related to medicine, law, or even regulated fields like engineering, should disclaim that they are not speaking in a professional capacity. And I say that is a ludicrous task to expect of anyone. So if you have a better solution, let me know.
That's your opinion on how people should speak, not most people's, so feel free to add disclaimers when you yourself talk, but don't dictate what other people should or should not say.
I’m afraid you didn’t understand what I just said. I was politely trying to say “if you wisely abstain from talking about things you don’t know about, you won’t need to disclaim that you don’t know what you’re talking about.”
No, because I don't have that much money, but it looks like Microsoft will. They likely wouldn't if their lawyers did not think there was a reasonable chance of winning the lawsuits, presumably because, again, generative AI would be deemed fair use.
Because 'transformative' is a pretty dangerous word to use in this context.