I get that it's satisfying to tell them to go away because they're being unreasonable. But what's the legal strategy here? Piss off the regulators such that they really won't drop this case, and give them fodder to be able to paint the lawyer and his client as uncooperative?
Is the strategy really just "get new federal laws passed so UK can't shove these regulations down our throats"? Is that going to happen on a timeline that makes sense for this specific case?
He says on his site that he wants the US to pass a “shield law.” I guess the idea is to pass a law that explicitly says we don’t extradite for this, don’t enforce the fines, and so on.
It seems like inside the US, this must be constitutionally protected speech anyway. I’m not 100% sure, but it would seem quite weird if the US could enter a treaty that requires us to enforce the laws of other countries in a way that is against our constitution. Of course the constitution doesn’t apply to the UK (something people just love to point out in these discussions), but it does apply to the US, which would be the one actually doing the enforcing, right?
Anyway, bumping something all the way up to the Supreme Court is a pain in the ass, so it may make sense to just pass a law to make it explicit.
The British legal system is pretty inefficient. I'd probably just say "sorry, we'll block harder." That would probably delay things for years, by which time there may be a different government, or a US shield law.
If you go down far enough in physics, you might discover that reality seems to be composed much more of probabilities. An objective reality built on probabilities doesn't seem so objective after all.
Here is a fun thing to try: demonstrate that you and I see the same color in the same way. You might find some papers trying to prove it, but you'll see they are all based on people's subjective answers.
So while at the macro level reality seems hard and objective, at the micro level it is not.
It's an objective reality. Even at the quantum level the known laws of nature hold. Even if there is uncertainty, nature is predictable in following the laws of physics.
And of course at the macro level we live in a very objective reality. This is the basis of science.
> I don't understand how you could have trouble finding the volume down button.

Read again: it was Apple Online Help that didn't know where the Volume Down button was.

I wasn't asking them; in a 'chat' session as part of an effort to get the phone working, they were asking me, asking me to press that button, and not for sound volume. To be sure, I asked them just where the button was, and they didn't know.

In simple terms, I unpacked the phone, read the documentation, plugged it in to charge its battery, tried to use it to make and receive calls, and it didn't work. "Just work"? No, didn't work.

Whatever, this was one HORRIBLE end-user experience, and I'm way past putting up with it, returning the iPhone, and going to look at something from Samsung. Before spending any money, I will want to see their DOCUMENTATION.
>Critics have already written thoroughly about the environmental harms, the reinforcement of bias and generation of racist output, the cognitive harms and AI supported suicides, the problems with consent and copyright,
I just can't take anything the author has to say seriously after the intro.
Firstly, the author doesn't even define the term AI. Do they just mean generative AI (likely), or all machine learning? Secondly, you can pick any of those criticisms and it would only be true of particular implementations of generative AI or machine learning; it's not true of the technology as a whole.
For instance, small edge models don't use a lot of energy. Models that are not trained on racist material won't be racist. Models not trained to give advice on suicide, or trained NOT to do such things, won't do it.
Do I even need to address the claim that it's at its core rooted in "fascist" ideology? So all the people creating AI to help cure diseases, enable assistive technologies for people with impairments, and other positive tasks, all these desires are fascist? It's ridiculous.
AI is a technology that can be used positively or negatively. To be sure, many of the generative AI systems today do have issues associated with them, but the author's position of extending these issues to the entirety of AI and AI practitioners is immoral and shitty.
I also don't care what the author has to say after the intro.
Come on now. You know he's not talking about small machine learning models or protein folding programs. When people talk about AI in this day and age they are talking about generative AI. All of the articles he links when bringing up common criticisms are about generative AI.
I too can hypothetically conceive of generative AI that isn't harmful and wasteful and dangerous, but that's not what we have. It's disingenuous to dismiss his opinion just because the technology you imagine would be so wonderful.
I'm serious. This sentence perfectly captures what the coastal cities sound like to the rest of the US, and why they voted for the crazy uncle over something unintelligible.
Coastal city dwellers want the next thing to signal rebellion. It's just that AI serves as a way to do that plus also show some concern for the working class.
It would take too much time to tear the entirety of this slop apart, but if you understand the mechanics of AI, you'd know environmental impact is negligible vs the value.
The links are laughable. For environment we get one woman whose underground water well got dirtier (according to her) because Meta built a data center nearby. Which, even if true (which is doubtful), has negligible impact on the environment, though maybe a huge annoyance for her personally.
And the second one gives bad estimates, such as ChatGPT-4 generating ~100 tokens for an email (say 1000 tok/s from 8x H100, so 0.1 s, so ~0.1 Wh) using as much energy as 14 LEDs for an hour (say 3 W each, so 42 Wh). That's almost 3 orders of magnitude off, or 9 if, like me, you count in binary.
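To make that back-of-envelope comparison explicit, here is a quick sketch of the arithmetic using the commenter's own assumed figures (~0.1 Wh per email-length generation, 14 LEDs at an assumed 3 W each for one hour; none of these are measurements):

```python
import math

# LLM side: ~100 tokens at ~1000 tok/s on 8x H100 is ~0.1 s of wall time,
# which the commenter rounds to roughly 0.1 Wh of energy.
llm_wh = 0.1

# LED side: the article's claim of 14 LEDs for an hour, at an assumed
# 3 W per LED, works out to 14 * 3 = 42 Wh.
led_wh = 14 * 3 * 1.0

# How far apart are the two estimates?
ratio = led_wh / llm_wh                 # 420x
decimal_orders = math.log10(ratio)      # ~2.6 -> "almost 3 orders of magnitude"
binary_orders = math.log2(ratio)        # ~8.7 -> "9 if you count in binary"

print(round(ratio), round(decimal_orders, 1), round(binary_orders, 1))
```

Under these assumptions the two figures differ by a factor of about 420, i.e. roughly 2.6 decimal orders of magnitude (or about 9 binary ones), which is what the comment is pointing at.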
P.S. Voted dems and would never vote Trump, but the gp is IMHO spot on.
This is the dumbest question ever. I guess you need to ask 1B+ LLM users.
But hey, I already know you'd say you personally would never use it for these purposes.
Moreover, of the two of us, you appear to have a "shareholder" mentality. How profitable are volunteers serving food to homeless people? I guess they have no value then.
How many of those users are paying users? What’s the churn rate?
And how profitable are OpenAI and other providers?
They’re running at a loss. The startups using LLMs as their product are only viable as long as they get free credits from OpenAI. The only people making a profit are NVidia.