Slight tangent, but you seem to know about licences... Do you happen to know of a licence that has anything like a "can only be used for the benefit of humanity" clause?
I've favoured the MIT licence for what little OSS I've published thus far. But I'm becoming increasingly concerned that ruthless, profit-above-all-else companies can include my (benign) work in systems that cause real harm.
That’s far too subjective to be of any legal value. If you want that, you’ll need to (a) spell out what you want to allow, (b) spell out what you want to disallow, or (c) just write the subjective thing out plainly and don’t even bother complying with license norms (e.g. just write “you can do whatever you want with this provided it is for the benefit of humanity”).
That's a fair criticism. My idea of good is neither well defined nor static - it adapts over time to the norms and values of society.
Perhaps something like the OpenAI approach to their GPT-3 deal with Microsoft is better. That is, if the work Microsoft do with GPT-3 goes in a direction OpenAI doesn't like, OpenAI reserves the right to veto the work [1].
Of course a person has to have some sort of opinion of it under such conditions, but is it going to come down mostly to condemning the weapons that enable the aggressor, or to being thankful for the weapons that enable some measure of violent defense?
The Slaughterbots campaign argued, rightly, I think, that advanced autonomous lethal weapons should be suppressed because they enable unethical uses and unscrupulous actors far more than legitimate defense.
It can't really be seen in isolation from the environment (social, economic, etc.) it's going to come into, I suppose, but in the real, concrete world we have, creating them is not a neutral act, and some of the consequences are reasonably predictable.
I think there are some instances where it's definitely bad, e.g. weapons that, if used, can by themselves extinguish humanity. Most instances are not that clear, unfortunately - lots of sides to the story, extenuating circumstances, etc.
It's not an easy question. However, as the creator of the software I guess I feel that my opinion should count in how it's used. As a simplistic example, if in some dystopian timeline my OSS were used to facilitate a holocaust I'd like to be able to do something to halt that. It doesn't matter that the perpetrators feel that what they're doing is right.
>Do you happen to know of a licence that has anything like a "can only be used for the benefit of humanity" clause?
A terrible idea for a number of reasons (in terms of legal enforceability, unintended side effects, and more). The following two articles do a good job of explaining why such a license really isn't practical:
Yes, the unintended side effects of the HESSLA (sibling comment) were a surprise to read about.
Thank you for the links - I'd not heard of the Hippocratic License but the criticisms are interesting.
Your first assumption is that your inventions are important enough to be of use to “bad people”.
The other is your assumption that you have the ability to objectively distinguish good from bad uses of a benign invention.
I’m increasingly looking for the psychological reasons why these ML models and their outputs cause such an emotional reaction in certain individuals.
For example, the language of opponents of Copilot speaks in absolutes. And when presented with the history of copyright as applied to software, the opponents seem not to register that copyright (logically) does not extend to the non-expressive parts of a work.
"In computer programs, concerns for efficiency may limit the possible ways to achieve a particular function, making a particular expression necessary to achieving the idea. In this case, the expression is not protected by copyright."
This allows for verbatim copies if they are utilitarian in nature!
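To make that concrete, here's a toy example of my own (not from the quoted text) - a helper so constrained by the idea it expresses that independent C programmers will land on near-identical code:

    /* Hypothetical illustration: clamp a value to a range. The idea
       admits so few sensible expressions in C that verbatim overlap
       between independent authors is expected, not evidence of copying. */
    static int clamp(int value, int lo, int hi) {
        if (value < lo) return lo;
        if (value > hi) return hi;
        return value;
    }

Under the merger doctrine, expression this constrained merges with the underlying idea and is not protected.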
As for why we should allow verbatim copies of utilitarian features... First, let's preface this with the "structure, sequence and organization" test for substantial similarity established in Whelan v. Jaslow, which amongst other things says that you cannot merely change the variable names if the expressive structure of the code remains the same. Now imagine 10,000 software developers who all implement Dijkstra's algorithm in C and then run it through clang-format (sketched below). Aside from variable names, isn't it safe to assume that many of the implementations will be exactly the same?
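A minimal sketch of that thought experiment, assuming the common adjacency-matrix formulation (all identifiers, the vertex count V, and the test graph in main are mine, for illustration only):

    #include <limits.h>
    #include <stdbool.h>
    #include <stdio.h>

    #define V 4  /* vertex count; arbitrary, chosen for illustration */

    /* Textbook O(V^2) Dijkstra over an adjacency matrix (0 = no edge).
       The skeleton - initialise, pick the nearest unvisited vertex,
       relax its edges - is dictated by the algorithm itself. */
    void dijkstra(int graph[V][V], int src, int dist[V]) {
        bool visited[V] = { false };
        for (int i = 0; i < V; i++)
            dist[i] = INT_MAX;
        dist[src] = 0;
        for (int count = 0; count < V - 1; count++) {
            int u = -1;
            for (int i = 0; i < V; i++)        /* nearest unvisited */
                if (!visited[i] && (u == -1 || dist[i] < dist[u]))
                    u = i;
            visited[u] = true;
            for (int v = 0; v < V; v++)        /* relax edges out of u */
                if (graph[u][v] && dist[u] != INT_MAX
                        && dist[u] + graph[u][v] < dist[v])
                    dist[v] = dist[u] + graph[u][v];
        }
    }

    int main(void) {
        int graph[V][V] = {0};
        graph[0][1] = graph[1][0] = 4;
        graph[1][2] = graph[2][1] = 8;
        int dist[V];
        dijkstra(graph, 0, dist);
        printf("shortest 0 -> 2: %d\n", dist[2]);  /* prints 12 */
        return 0;
    }

Run 10,000 honest clean-room implementations of this through clang-format and, variable names aside, a large fraction will plausibly collide byte for byte - identical "structure, sequence and organization" with no copying involved.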
Now, this doesn’t mean that GitHub is not in violation of other copyright claims - for instance over clearly expressive parts like comments!