Yes. He's insinuating that the only reason you would dislike Musk is due to his association with Trump. He's also insinuating that people who dislike Trump just like the opposite of whatever Trump likes, so that if Trump dislikes Musk then they must like him.
My definition of fascist is a person that is ok with, or perhaps takes active steps towards, a society where powerful people can manipulate/pick/write the rules in order to further increase their power. By this definition it's not "absurd" at all to call him a fascist.
Yes, I realize there are degrees.
No point telling me that the Wikipedia definition is different. This is the definition which I think is the most useful because it captures the essence of why fascism is bad: it's an instability of the system.
According to this definition all the usual suspects are fascists (Hitler, Stalin, Mussolini, Mao), but also characters like Trump and his enablers, and Musk and his fanboys.
I ask because your definition is very broad. Generally, every political system has business owners / elites / lords who have more influence.
The Bolsheviks had almost exclusive influence and control, but they weren't fascist.
Most liberal democracies have elites and lobbyists who have more influence than the common man. They aren't fascist.
I think your definition is convenient because it's so broad that you can apply it to whoever you dislike.
Among the American elite, at least 1/5 of Congress, most of the Senate, most of the Fortune 500, and a good chunk of Hollywood would qualify as fascist according to this definition.
> The Bolsheviks had almost exclusive influence and control, but they weren't fascist.
Who is "the Bolsheviks"? Most supporters of the party did not have as explicit goal to accumulate power for themselves. If you're referring to Lenin and Stalin, they certainly where fascists by the useful definition.
> Among the American elite, at least 1/5 of Congress, most of the Senate, most of the Fortune 500, and a good chunk of Hollywood would qualify as fascist according to this definition.
No, I disagree. Most of those people have only enough power to effect minuscule tweaks to the system. Those don't qualify. Like I said, it is in part a matter of degree.
> I think your definition is convenient because it's so broad that you can apply it to whoever you dislike.
Here's a thought exercise. Could you list history's top 5 worst fascists in descending order, with #1 the worst? Would Elon make the list? Would Trump?
Then a court will order that you don't encrypt, and probably go after you for trying to undermine the intent of the previous court order. Or what, you thought you'd found an obvious loophole in the entire legal system?
Do you have a reason to believe this ain't already being done? I would assume that the big guys like OpenAI are already training on basically all text in existence.
Google goes around legally scanning every book they can get their hands on with books.google.com. Legally scanning every paper they can get their hands on with scholar.google.com.
I doubt they'd resort to piracy for what is basically the same information as what they've already legally acquired...
That is a good reason to think they did not, but it doesn't necessarily override their reasons to do so. Perhaps it's dubious that the subset of data they couldn't legally get their hands on is an advantage for training, but I really don't know, and maybe nobody does. Given that, Google's execs may have been in favor of operations similar to Facebook's, and their lawyers may have been willing to approve them with similar justifications.
*If the license you have authorizes you to make a copy in that fashion.
But here, Google isn't a license holder. Google doesn't license the text in Google Books (unless something has changed since the lawsuits). Google simply legally acquires (buys, borrows, etc) a copy of the book and does things with it that the US courts have found are fair use and require no license.
Incidentally, I believe the French courts disagreed, fined them half a million dollars or so, and ordered them to stop in France.
> LLMs/transformers make mistakes in different ways than humans do
Sure, but I don't think this is an example of it. If you show people a picture and ask "how many legs does this dog have?", a lot of people will look at the picture, see that it contains a dog, and say 4 without counting. The rate at which humans behave this way might differ from the rate at which LLMs do, but they both do it.
I don't think there's a person alive who wouldn't carefully and accurately count the number of legs on a dog if you asked them how many legs this dog has.
The context is that you wouldn’t ask a person that unless there was a chance the answer is not 4.
The models are like a kindergartner. No, worse than that, a whole classroom of kindergartners.
The teacher holds up a picture and says, "and how many legs does the dog have?" and they all shout "FOUR!!" because they are so excited they know the answer. Not a single one will think to look carefully at the picture.
That's a specific example of how, when you draw a human's attention to something (e.g. "count the number of ball passes in this video"), they hyper-fixate on it to the exclusion of other things, so it seems like it makes the opposite point to the one I think you're trying to make?
The analogy should be an artist who can draw dogs but who, when asked to draw a dog with three legs, completely fails and has no idea how to do it. The likelihood of that is really low. A trained artist will give you exactly what you ask for, whereas GenAI models can produce beautiful renders but fail miserably when asked for certain specific but simple details.