There are reasonable ethical concerns one may have with AI (around data center impacts on communities, and the labor used to SFT and RLHF them), but these aren't:
> Commercial AI projects are frequently indulging in blatant copyright violations to train their models.
I thought we (FOSS) were anti copyright?
> Their operations are causing concerns about the huge use of energy and water.
This is massively overblown. If they'd specifically said that their concerns were around the concentrated impact of energy and water usage on specific communities, fine, but then you'd have to have ethical concerns about a lot of other tech including video streaming; but the overall energy and water usage of AI contributed to by the actual individual use of AI to, for instance, generate a PR, is completely negligible on the scale of tech products.
> The advertising and use of AI models has caused a significant harm to employees and reduction of service quality.
Is this talking about automation? You know what else automated employees and can often reduce service quality? Software.
> LLMs have been empowering all kinds of spam and scam efforts.
I get why water use is the sort of nonsense that spreads around mainstream social media, but it baffles me how a whole council of nerds would pass a vote on a policy that includes that line.
The energy use claims are questionable, but I at least get where they're coming from. The water use is the confusing part. Who looks at a server rack and goes ‘darn, look at how water intensive this is’? People use water as a coolant in large part because it's really hard to boil, plus it's typically cheap because it regularly gets delivered to your front door for free.
As to actual numbers, they're not that hard to crunch, but we have a few good sources that have done so for us.
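On the cooling point specifically, the textbook thermodynamics are easy to sketch. A minimal back-of-envelope calculation, using standard constants for water (the 10 K temperature rise is an assumption for illustration):

```python
# Standard thermodynamic constants for water; the 10 K delta-T is an assumption.
C_WATER_KJ = 4.186   # specific heat capacity, kJ/(kg*K)
L_VAP_KJ = 2257.0    # latent heat of vaporization, kJ/kg
KJ_PER_KWH = 3600.0  # 1 kWh = 3600 kJ

# Sensible cooling: warming 1 kg of water by 10 K absorbs only ~42 kJ...
sensible_kj = C_WATER_KJ * 10.0

# ...but evaporating that same kilogram absorbs ~2257 kJ,
# which is why evaporative cooling is so effective:
litres_per_kwh = KJ_PER_KWH / L_VAP_KJ  # ~1.6 L boiled off per kWh of heat

print(round(sensible_kj, 1), round(litres_per_kwh, 2))
```

Under these assumptions, an evaporative data-center cooler rejects a kilowatt-hour of server heat for roughly 1.6 litres of water evaporated, which is in the same ballpark as commonly reported water-usage-effectiveness figures.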
Being ideologically motivated is not necessarily bad (understanding ideology as a worldview associated with a set of values and priorities). FOSS as a whole is deeply ideologically motivated from its origins. The issue is that there seems to have been a change in the nature of the ideology, leading to some amount of conflict between the older and newer guard.
>> Commercial AI projects are frequently indulging in blatant copyright violations to train their models.
> I thought we (FOSS) were anti copyright?
Absolutely not! Every major FOSS license has copyright as its enforcement method -- "if you don't do X (share code with customers, etc., depending on the license), you lose the right to copy the code."
>> Commercial AI projects are frequently indulging in blatant copyright violations to train their models.
> I thought we (FOSS) were anti copyright?
No free and open source software (FOSS) distribution model is "anti-copyright." Quite to the contrary, FOSS licenses are well defined[0] and either address copyright directly or rely on copyright being retained by the original author.
FOSS still has to exist within the rules of the system the planet operates under. You can't just say "I downloaded that movie, but I'm a Linux user so I don't believe in copyright" and get away with it
>the overall energy and water usage of AI contributed to by the actual individual use of AI to, for instance, generate a PR, is completely negligible on the scale of tech products.
[citation needed]
>Is this talking about automation? You know what else automated employees and can often reduce service quality? Software.
Disingenuous strawman. Tech CEOs and the like have been exuberant at the idea that "AI" will replace human labor. The entire end goal of companies like OpenAI is to create a "super-intelligence" that will then generate a return. By definition, that AI would be performing labor (services) for capital, outcompeting humans to do so. Unless OpenAI wants it to just hack every bank account on Earth and transfer it all to them instead? Or something equally farcical.
> the overall energy and water usage of AI contributed to by the actual individual use of AI to, for instance, generate a PR, is completely negligible on the scale of tech products.
10 GPT prompts take the same energy as a wifi router operating for 30 minutes.
If Gentoo were so concerned for the environment, they would get more mileage out of forbidding PRs from people who took a 10-hour flight. Such flights, per person, emit as much carbon as a million prompts.
The first comprehensive environmental audit and analysis, performed in conjunction with French environmental agencies and environmental audit consultants, covering every stage of the supply chain, including usually hidden upstream costs: https://mistral.ai/news/our-contribution-to-a-global-environ...
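The comparisons above are easy to sanity-check. A rough sketch, where every constant is an assumed, commonly cited ballpark figure (per-prompt energy, router draw, grid carbon intensity, and per-passenger flight emissions all vary widely by source):

```python
# All constants below are rough assumptions for illustration, not measurements.
PROMPT_WH = 0.3            # assumed energy per LLM prompt, watt-hours
ROUTER_W = 6.0             # assumed Wi-Fi router power draw, watts
GRID_KG_CO2_PER_KWH = 0.4  # assumed average grid carbon intensity
FLIGHT_KG_CO2 = 1000.0     # assumed per-passenger CO2 for a ~10-hour flight

# Claim 1: 10 prompts vs. a router running for 30 minutes
ten_prompts_wh = 10 * PROMPT_WH    # 3.0 Wh
router_30min_wh = ROUTER_W * 0.5   # 3.0 Wh

# Claim 2: a million prompts, converted to CO2 via grid intensity
million_prompts_kwh = 1_000_000 * PROMPT_WH / 1000.0            # 300 kWh
million_prompts_kg = million_prompts_kwh * GRID_KG_CO2_PER_KWH  # 120 kg

print(ten_prompts_wh, router_30min_wh)
print(million_prompts_kg, "vs", FLIGHT_KG_CO2)
```

Under these assumptions the router comparison comes out even, and the flight emits several times more CO2 than a million prompts, so the claims are at least directionally plausible even if the exact constants are debatable.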
> Disingenuous strawman. Tech CEO's and the like have been exuberant at the idea that "AI" will replace human labor. The entire end-goal of companies like OpenAI is to create a "super-intelligence" that will then generate a return. By definition the AI would be performing labor (services) for capital, outcompeting humans to do so
Isn't that literally the selling point of software: performing work that would otherwise have to be done by humans, calculating, researching, locating things, transferring information, and so on, using capital instead of labor, transforming labor into capital, and providing more profits as a result?
> Unless OpenAI wants it to just hack every bank account on Earth and transfer it all to them instead? Or something equally farcical
It's extremely funny that you pull this out of thin air and claim it's the only way I could be justified in saying what I'm saying, all while accusing me of making a disingenuous strawman. Consult the rod in your own eye before you concern yourself with the speck in mine.
>> So did email.
> "We should improve society somewhat"
> "Ah, but you participate in society! Curious!"
Disingenuous strawman. That comic is used to respond to people who claim you can't be against something if you also participate in it out of necessity. That's not what I'm doing. I would be fine with it if they had blanket-condemned all things that enable spam on a massive level, including email, social media, automated phone calls, mail, and so on, while still using those technologies because they have to in order to live in present society and get the word out. There are people who do that with rigorous intellectual consistency, and have been since those things existed. My argument is that by condemning one but not the other, irrespective of whether they use them, they are being ethically inconsistent; it shows a double standard and a bias toward technologies they're used to over technologies they aren't. It shows a fundamental reactionary conservatism rather than an actually well-thought-through ethical position.
> LLMs have been empowering all kinds of spam and scam efforts.
So did email.