To be fair, comments from people on the opposite end (not that I agree with them) all seem to get flagged very fast. The points they make don't seem any worse, just somewhat less eloquent.
As far as I'm concerned, it makes sense that an LLM would have absolutely no clue about nearly-obsolete human melanin preferences, and it's surprising that as soon as machines (cue McBean and his Star-On machine) arrive that show us how silly we've been being, it'd be considered "unacceptable".
When I was a child, blue or green hair would've been considered unacceptable, and now it's accepted (although largely limited to younger and older women); maybe I'll live long enough to see a world where the maître d' has blue skin and my banker has green, and nobody except some old farts who are about to kick the bucket really cares?
There is always a case for making a charitable interpretation of a given behavior. Otherwise you're just in for a never-ending war against your "enemies".
I'd like to hear the fully-fleshed out position about why these particular examples are an indication of some broader, deeper problem. If possible, please do not use the word "woke" as a shorthand, explain it in full. Same for DEI.
Is there some suggestion that these obviously wrong examples are intentionally ahistorical or intended to erase history? Is this a plot of some kind? How common are these issues relative to all the other images being produced? Apparently the tool seems to be reluctant to produce images of white people at all. What should the appropriate frequency of white people be? That last one is a serious question that should be answered with an actual number, because that is the central concern here (unless I'm mistaken).
What seems very likely to me is that, in an attempt to keep the tool from embarrassingly producing only images of white people, they overshot. Bear in mind for a second that for anybody who isn't white and is trying to generate an image of a person doing a thing (with no historical context from which to infer a race for that person), it could be deeply frustrating to only get images of white people.
>Apparently the tool seems to be reluctant to produce images of white people at all.
It's not that. It would have been one thing if, when you said "I want a picture of 4 doctors", the tool didn't generate any white doctors at all.
The issue here is that I've read numerous reports of people saying that when they _explicitly_ specified something to the effect of "I want a picture of 4 _white_ doctors" it would straight up refuse them on the spot.
Typically this is the flame war detector, which looks at comment:vote ratio, among other things.
Hacker News is more of a library where people come to read stuff, and isn’t really a place for political battles like the one happening here. Usually a moderator will manually disable the flame war detector for a thread if there is some hope for it. But it is doing its job as far as I’m concerned, because this thread isn’t really the kind that motivates intellectual curiosity.
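The comment above describes the detector only loosely; as a rough illustration, a heuristic of that shape might look something like this. Everything here (the function name, the threshold, treating the ratio as the sole signal) is an assumption for illustration, not HN's actual implementation.

```python
# Hypothetical sketch of a flame-war heuristic: flag threads where
# comments accumulate much faster than votes. The threshold value is
# an assumption; the real detector reportedly uses other signals too.

def looks_like_flamewar(num_comments: int, num_votes: int,
                        ratio_threshold: float = 1.5) -> bool:
    """Return True when the comment:vote ratio exceeds the threshold."""
    if num_votes == 0:
        # A thread with comments but no votes is suspicious by this metric.
        return num_comments > 0
    return num_comments / num_votes > ratio_threshold

print(looks_like_flamewar(120, 50))  # heated: many comments, fewer votes
print(looks_like_flamewar(30, 80))   # calm: well-voted, lightly commented
```

A moderator manually disabling the detector for a thread would then just amount to skipping this check for that thread's ID.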
Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.
And especially please don't cross into personal attack, as you did more than once in this thread. We have to ban accounts that do so, regardless of how right they are or feel they are.
This very thread had 50 points and was top 10 on the front page and going up just now. Silently removed from the front page. Poof. Gone.
So here, you just saw it. It's up to you whether to acknowledge that this happens going forward or to pretend it doesn't. Many commenters are pretending. This willingness to lie to get your way is creepy in the extreme.
I believe your comment shows the exact same thinking that you accuse the 'other' party of.
Are there issues? Yes.
Do they have soul searching to do? Yes.
Will they release a completely uncensored model? No.
Will the exact prompts that showed problems be fixed? Yes.
Will there be other problems? Yes.
Is this the end of Google? No.