Hacker News | jxjnskkzxxhx's comments

I mean.... You just used those standalone.


He was never saving the world from climate change, you're just naive.


That’s not the point we’re making.


What's the point then?


Yes. He's insinuating that the only reason you would dislike Musk is due to his association with Trump. He's also insinuating that people who dislike Trump just like the opposite of whatever Trump likes, so that if Trump dislikes Musk then they must like him.

He's just a troll.


I agree with the first part, but how does that make me a troll? It is peculiar.


The left loved Musk until he started helping Trump.


Have you never heard the expression that the friends of your enemies are your enemies? Strange.

Anyway I've despised that guy since the Thai cave episode. Don't know if I'm "the left" tho.


Despising Elon is acceptable. He's cringey. But is he a fascist? That's the absurd part.


My definition of fascist is a person that is ok with, or perhaps takes active steps towards, a society where powerful people can manipulate/pick/write the rules in order to further increase their power. By this definition it's not "absurd" at all to call him a fascist.

Yes, I realize there are degrees.

No point telling me that the Wikipedia definition is different. This is the definition which I think is the most useful because it captures the essence of why fascism is bad: it's an instability of the system.

According to this definition all the usual suspects are fascists (Hitler, Stalin, Mussolini, Mao), but also characters like Trump and his enablers, and Musk and his fanboys.


I ask because your definition is very broad. Generally, every political system has business owners / elites / lords who have more influence.

The Bolsheviks had almost exclusive influence and control, but they weren't fascist.

Most liberal democracies have elites and lobbyists who have more influence than the common man. they aren't fascist.

I think your definition is convenient because it's so broad and then you can apply it to who you dislike.

Among the American elite, at least 1/5 of Congress, most of the Senate, most of the Fortune 500, and a good chunk of Hollywood would qualify as fascist according to this definition.


> The Bolsheviks had almost exclusive influence and control, but they weren't fascist.

Who are "the Bolsheviks"? Most supporters of the party did not have the explicit goal of accumulating power for themselves. If you're referring to Lenin and Stalin, they certainly were fascists by the useful definition.

> Among the american elite, at least 1/5 of congress, most of the senate, most of the fortune 500, a good chunk of hollywood would qualify as fascist according to this definition.

No, I disagree. Most of those people have only enough power to effect minuscule tweaks to the system. Those don't qualify. Like I said, it is in part a matter of degree.

> I think your definition is convenient because it's so broad and then you can apply it to who you dislike.

Aaaaand I'm done talking to you.


are there democrat examples who fit this definition?


They're both still fascists, still ok to vandalize teslas. Their relationship has nothing to do with either point.


Here's a thought exercise. Could you list history's top 5 worst fascists in descending order, with #1 being the worst? Would Elon make it? Would Trump?


When will Europeans realize that the USA is not a monolithic bloc?


Yup.

I'm surprised that people conjure up inventions to justify that this is actually acceptable.


Then a court will order that you don't encrypt. And it will probably go after you for trying to undermine the intent of the previous court order. Or what, you thought you'd found an obvious loophole in the entire legal system?


Yes. Because once you have remote attestation, anyone can host these enclaves in any country, and charge some tiny fee for their gpu time.

With decentralized hosting and encryption, the centralized developers of the open source software would be literally unable to comply.

This well-proven strategy would, however, only be possible if anything about OpenAI were actually open.
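The enclave idea can be sketched in miniature. This is a toy model for illustration only, not a real TEE protocol: `seal`, `open_sealed`, and the hash-based keystream are invented names, the shared key stands in for whatever a real attestation handshake (e.g. SGX/SEV) would establish, and a production system would use an authenticated cipher rather than this construction.

```python
import hashlib
import secrets

def _keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Derive a keystream by hashing key||nonce||counter (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key: bytes, plaintext: bytes) -> tuple[bytes, bytes]:
    """Client side: encrypt a prompt so only the holder of `key` can read it."""
    nonce = secrets.token_bytes(16)
    ct = bytes(a ^ b for a, b in zip(plaintext, _keystream(key, nonce, len(plaintext))))
    return nonce, ct

def open_sealed(key: bytes, nonce: bytes, ct: bytes) -> bytes:
    """Enclave side: regenerating the same keystream recovers the plaintext."""
    return bytes(a ^ b for a, b in zip(ct, _keystream(key, nonce, len(ct))))

# After attestation, the client and the enclave share `key`; the host
# machine only ever sees (nonce, ct) and cannot recover the prompt.
key = secrets.token_bytes(32)
nonce, ct = seal(key, b"private prompt")
assert open_sealed(key, nonce, ct) == b"private prompt"
assert ct != b"private prompt"
```

The point the toy makes is structural: once the decryption key exists only inside attested enclaves run by many independent hosts, there is no central party left who can be ordered to hand over plaintext.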


I thought that was anime pictures...?


That doesn't matter. They are still illegal in a great number of jurisdictions, including large portions of the USA.


Citation needed.


A subset of that, yes. But that label implies more than just that.


Do you have a reason to believe this isn't already being done? I would assume that the big players like OpenAI are already training on basically all text in existence.


In fact, Facebook torrented Anna's Archive and got busted for it, because of course they did:

https://torrentfreak.com/meta-torrented-over-81-tb-of-data-t...


Every LLM maker probably did the same. Facebook just has disgruntled employees who leaked it.


Google goes around legally scanning every book they can get their hands on with books.google.com. Legally scanning every paper they can get their hands on with scholar.google.com.

I doubt they'd resort to piracy for what is basically the same information as what they've already legally acquired...


That is a good reason to think they did not, but it doesn't necessarily override the reasons for them to do so. Perhaps it's dubious that the subset of data they could not legally get their hands on is an advantage for training, but I really don't know, and maybe nobody does. Given that, Google's execs may have been in favor of operations similar to Facebook's, and their lawyers may have been willing to approve them with similar justifications.


Downloading a torrent isn't piracy if you are a license holder for the information that you are downloading.


*If the license you have authorizes you to make a copy in that fashion.

But here, Google isn't a license holder. Google doesn't license the text in Google Books (unless something has changed since the lawsuits). Google simply legally acquires (buys, borrows, etc) a copy of the book and does things with it that the US courts have found are fair use and require no license.

Incidentally I believe the French courts disagreed and fined them half a million dollars or so and ordered them to stop in France.



> LLMs/transformers make mistakes in different ways than humans do

Sure but I don't think this is an example of it. If you show people a picture and ask "how many legs does this dog have?" a lot of people will look at the picture, see that it contains a dog, and say 4 without counting. The rate at which humans behave in this way might differ from the rate at which llms do, but they both do it.


I don’t think there’s a person alive who wouldn’t carefully and accurately count the number of legs on a dog if you ask them how many legs this dog has.

The context is that you wouldn’t ask a person that unless there was a chance the answer is not 4.


You deeply overestimate people.

The models are like a kindergartner. No, worse than that, a whole classroom of kindergartners.

The teacher holds up a picture and says, "and how many legs does the dog have?" and they all shout "FOUR!!" because they are so excited they know the answer. Not a single one will think to look carefully at the picture.


It's hilarious how off you are.


Exactly this. Humans are primed for novelty and being quizzed about things.


Have you never seen the video of the gorilla in the background?


That's a specific example showing that when you draw a human's attention to something (e.g. count the number of ball passes in this video), they hyper-fixate on it to the exclusion of other things. So it seems to make the opposite of the point I think you're trying to make?


Ok? But we invented computers to be correct. It’s suddenly ok if they can look at an image and be wrong about it just because humans are too?


My point is that these LLMs are doing something that our brain is also doing. If you don't find that interesting, I can't help you.


Well, they’re getting the same result. I don’t particularly see why that’s useful.


All automation has ever been is an object doing something that a human can do, without needing the human.


The result is still wrong, though! It needs to be right to be useful!


The analogy should be of an artist who can draw dogs but, when you ask them to draw a dog with three legs, completely fails and has no idea how to do it. The likelihood of that is really low. A trained artist will give you exactly what you ask for, whereas GenAI models can produce beautiful renders but fail miserably when asked for certain specific but simple details.


No, the example in the link is asking to count the number of legs in the pic.


Ok, sure, but I'm trying to point out the gap in expectation, i.e. it's an expert artist but it cannot fulfill certain specific but simple requests.

