“Some are saying that the apparently bizarre behavior of one Big Tech AI or another is an opportunity for other Big Tech AI to differentiate by being objective, reasonable, and unbiased.
This is not the case; there is no differentiation opportunity among Big Tech or the New Incumbents in AI. These companies all share the same ideology, agenda, staffing, and plan. Different companies, same outcomes.
And they are lobbying as a group with great intensity to establish a government protected cartel, to lock in their shared agenda and corrupt products for decades to come.
The only viable alternatives are Elon, startups, and open source — all under concerted attack by these same big companies and aligned pressure groups in DC, the UK, and Europe, and with vanishingly few defenders.”
While somewhat true, a year ago the exact same conversation was being had about ChatGPT, which also preferred nuking cities to letting people hear racial slurs and produced many other completely misaligned outputs. That conversation has died out because OpenAI made big improvements to its political balance and neutrality.
Is it now perfectly neutral? No. Give it political alignment tests and it will lean left. But is it neutral enough to be acceptable to a wide range of users and avoid embarrassing OpenAI employees? Seems like yes. Also, the API seems to be less nannying than the ChatGPT web UI, and they let you set custom system instructions that the model does seem to follow. Notice that when OpenAI launched their image generator the same blowup didn't occur. And OpenAI recently launched Sora - where are the Twitter threads about how Sora refuses to make videos of white people? Nowhere? Presumably then it doesn't have that problem (I haven't tried it).
So at least one SF based company has shown that it can resolve these problems.
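For what it's worth, here's a minimal sketch of what setting a custom system instruction over the API looks like; the model name and instruction wording are just placeholders I picked, not anything OpenAI prescribes:

```python
# Minimal sketch of steering tone/neutrality via the API's system message.
# Assumes the openai v1 Python client; model name and instruction text are
# placeholders for illustration only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {
            "role": "system",
            "content": (
                "You are a neutral assistant. Present multiple viewpoints "
                "on contested topics and avoid editorializing."
            ),
        },
        {"role": "user", "content": "Summarize the arguments for and against X."},
    ],
)
print(response.choices[0].message.content)
```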
AI companies should not be lobbying for regulation of the AI space, Marc is totally right about that, but ironically Google just handed the anti-regulation lobby the best ammo they could possibly hope for.
> Google just handed the anti-regulation lobby the best ammo they could possibly hope for.
You would think so. But given the competence of politicians in almost every Western nation, I expect some to turn around and praise such results as the next best thing; they would be completely fine with falsifying data while ranting about misinformation.
Even though their 14-year-old kids can navigate the net and its information landscape much better than they do themselves.
One thing that could be mentioned is how OpenAI's content policy flagging system is still unbearably overzealous on the ChatGPT platform (both with GPT-3.5 and GPT-4), punishing the user for using "bad words" in their input, often in a completely arbitrary manner. And no, I'm not asking the model to create anything except a reply and perhaps a summary. And nothing more "edgy" than your average YouTube transcripts or even pieces of classic literature.
It's ridiculous at times, other times downright annoying, as you have to counter the claims in order to keep your account in good standing -- mind that all content policy warnings might lead to further automated, one-sided account warnings over e-mail. And no, it doesn't help if you've poured four-figure sums into their services over both the API as a dev and into ChatGPT Plus and the Team Plan. Quite frankly, in its current form, the flagging system is hampering the whole platform's fundamental usability for anything serious, since you constantly have to be aware of not tripping the flagging system over some trivial crap that someone, somewhere might find remotely objectionable.
Sad to say, but unless OpenAI revises their flagging system's inner workings, they're literally turning a tool into a trinket, with a thoughtless word-police unit constantly ready to interject itself into any conversation or analysis. It feels unnecessary and downright Orwellian, and I'm pretty sure that new users are going to find it off-putting, as it probably serves as proof to some that OpenAI really wants to control the user's basic freedom of expression on their platform, even when it's not a "public forum" but literally a more or less "secluded" interaction between a human and an AI.
I get it that the model shouldn't be spouting out anything controversial, but the users should nonetheless be able to go through "serious topics" without the alarm bells going off.
It's been 1.5 years now and progress has been made in terms of OpenAI fixing the biases in the model and making it overall better, but the automated flagging system still feels like a Philip K. Dick-level automated thought police from hell.
I'd seriously recommend that OpenAI revise their system so there's a separate A/B comparison and a veto process, where the initial flagging would be passed to either GPT-3.5 or GPT-4 for further analysis, and that model could then either revoke or keep the flagging in place (something like the sketch below). That would save the user the time and energy of going through the appeal process over nothing. And/or add a feature where users can opt into a non-playpen version of a service that they're most likely paying for. It's just ridiculous.
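To make the idea concrete, here's a rough sketch of that two-stage veto, assuming the stock OpenAI moderation endpoint for the first pass and a chat model for the second opinion; the review prompt and the "veto" semantics are my own invention, not anything OpenAI offers today:

```python
# Rough sketch of a two-stage flagging pipeline: a cheap first-pass filter,
# then a stronger model that can veto the flag before the user gets a warning.
# The review prompt and verdict handling are hypothetical.
from openai import OpenAI

client = OpenAI()

def should_flag(user_input: str) -> bool:
    # First pass: the stock moderation endpoint.
    first_pass = client.moderations.create(input=user_input)
    if not first_pass.results[0].flagged:
        return False

    # Second pass: ask a stronger model whether the flag actually holds up.
    review = client.chat.completions.create(
        model="gpt-4",  # placeholder; any capable reviewer model would do
        messages=[
            {
                "role": "system",
                "content": (
                    "You review automated content-policy flags. Answer only "
                    "UPHOLD if the text is genuinely a policy violation, or "
                    "VETO if it is benign (e.g. quoted literature, transcripts, "
                    "or discussion of a sensitive topic)."
                ),
            },
            {"role": "user", "content": user_input},
        ],
    )
    verdict = review.choices[0].message.content.strip().upper()
    return verdict.startswith("UPHOLD")
```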
To add to the confusion, most of the times I've gotten flagged, the model itself (usually GPT-4) has stated that it didn't find anything against the platform's guidelines upon reviewing what was said. So the flagging system is really not very sophisticated when even GPT-3.5 often gets that there was no violation of any sort taking place.
One could argue that OpenAI is now a single Silicon Valley company with such an advantage over their competition worldwide that I find this culturally quite problematic as well -- a single company one-sidedly deciding what type of language a human can input to an AI? OpenAI's stated goals about an AI that "benefits all of humanity", all the talk about diversity and inclusivity etc., don't seem to match up with the reality of how their platform still flags user input, often out of nowhere. How can you benefit anyone when the user can't even go through transcripts without running the risk of getting flagged? Their current guardrails are ripe for a thorough sanity check.
I don't understand why people keep referring to Elon's model as the savior of anything. It seems to be a fine-tune of a standard model with a Gen X slacker personality.
This has nothing to do with the philosophy of the people working at big tech companies. Firstly, that ethical ship sailed long ago when Google repeatedly and publicly shafted the ethical parts of its team - whether that's the AI ethicists getting the boot, Google walkout organisers getting the boot, or Google pursuing new contracts with the military. And Google is arguably the most woke of the big tech companies. Apple and Microsoft have never pretended to care what their employees think, and Facebook love to be the bad guys.
Big Tech's approach to AI has nothing to do with AI. It has to do with the same thing that killed Kodak and Nokia. Large technology companies have an existing business with a reputation to protect, and it's almost impossible for them to weigh the possible future revenue of what is currently a money sink against the very real revenue they could lose by burning their reputations.
It's the same reason BMW won't put a death trap self-driving car on the road. They aren't woke, they're making justifiable strategic decisions.
Now, here's the good news - that totally opens up the market to startups and new entrants! Which is how Silicon Valley has always worked, and the real thing putting that in danger is the pipeline of emerging companies like OpenAI and Figma being snapped up by the older, less competitive established companies.
> It's the same reason BMW won't put a death trap self-driving car on the road. They aren't woke, they're making justifiable strategic decisions
Nobody finds it peculiar that a car company would prioritize safety. That's an understandable value system to 99.9% of people. By contrast, most people think it's odd that a search company would care so much about people's skin color that they would alter user inputs in order to get the system to change the races of people shown in the results. That's a classic example of "go woke, go broke."
I sometimes wonder what western historians will think when they study this era 100 years from now. Will they think of this movement as radicals capturing various institutions for political gain? Will they think of it as an innocent attempt to do good that went too far? Or perhaps just a fashion that had faded by then.
Perhaps it's the lack of spirituality that has driven people so insane that they latch on to anything that promises them virtues. Nietzsche once said "God is dead", but what did we replace it with?
In any case, it's amusing to watch it all unfold right in front of your eyes.
>I sometimes wonder what western historians will think when they study this era 100 years from now?
I sometimes find myself wanting aliens to find us and make contact, just so we can have an outside, hypothetically enlightened source (if we're going by the Star Trek fantasy) to say "You're doing WHAT?!"
It’s not radicals capturing institutions, at least not in a nefarious way. The masses simply have a different morality than they used to. In the marketplace of ideas, the scoring function is not wholly or even mostly rational, instead it is mostly determined by morality, which at present is this new morality. The radicals have just made an idea-product that sells well to people with the new morality. They are not steering the ship, they are just riding on it.
Of course this is all happening at a subconscious level for almost everyone involved. It’s very hard to step outside of one’s own morality.
I just think back to my classes about algorithms and data structures. I didn’t think the big issues I’d face in technology would regard generating images of Nazis with the proper skin tone. Can’t say I’m thrilled with this particular direction technology has taken.
I think we haven't yet fulfilled the vision of personal relations built on mutual self-responsibility. People are still too hung up on wanting to play the victim.
It’s going to be treated as what it is, psychological warfare to undermine the stability of America. No other country on this planet seems to suffer these issues to the extent that we do.
Having worked inside a few big tech companies, this outcome was practically inevitable. It would have been impossible for Googlers to raise the concerns that the public did without serious reputational or occupational hazard.
The visible & spoken culture at Google, Meta, and most big tech companies is very much in favor of "anti-racism" and "ML-fairness", though a large minority harbors criticisms in private conversation.
Any criticism of content or bias made during the bard/Gemini dogfooding phase would have been met swiftly with ostracization and possible termination.
If the results are as silly as the article suggests, I wonder if it could be deliberate sabotage. They know they can’t raise the issue normally, so go the other way, turn the dial to 11. Now it’s just ridiculous and is all over the media. The CEO himself is asking the dials to be turned back to 3.
When you are required to listen and obey, and not to think or question, the difference between loyal obedience and sabotage becomes entirely theoretical.
Why are you assuming that, when the extreme leftists are literally in charge, have entire departments, and have made it so that literally all managers must pretend they also believe in the same things?
It’s unlikely to be sabotage though. What may have happened is that the teams working on RLHF may have only trained it on a dataset containing the strictest interpretation of liberal values without also training it on other data that grounds it in reality. As a result, the model tried to inject the liberal interpretation of values where it doesn’t make sense, such as Black Nazis, and so on.
A lot of societies up through the ages depicted themselves unrealistically, and our depiction of past societies can be rather questionable too. Cast a glance at https://www.moviestillsdb.com/movies/ben-hur-i52618 if you want.
Those pictures of Nazis were clearly wrong, but describing something as wrong is easier than saying what's right. Should an AI match a society's prejudiced descriptions of itself (making the Nazis blond), our later images, often also prejudiced, or the current best (yet fuzzy) historical notions of what reality was like?
> describing something as wrong is easier than saying what's right
This 1000x. You also see this problem with the education of children. You have two problems:
1) there's a VERY large number of possible depictions. Let's say 10 million. Of those, a VERY small number are correct/acceptable. Let's say 10.
Then it's easy to see: a correct example carries 1/10 = "10%" of the information, so to speak, if 100% is what you need to be guaranteed a correct classification of all possible depictions.
Yet a negative example carries 1/10,000,000 = "0.00001%" of the information you need to classify all examples correctly. If you attempt to create correct behavior by giving wrong examples ... it'll be a long day.
A positive example, telling an AI (or a child/student) what to DO, carries roughly a million times more information than telling them what NOT to do, with these numbers. In practice, you'll find the difference is even more extreme.
2) BUT there is a problem, one that's also always brought up with Wikipedia. Giving positive examples requires that there are examples everyone agrees with. And we all know there are rather serious problems with that. A positive example HAS to be in the intersection of what all groups find acceptable. And ... well, easy examples are always wars: Who is right? Who started the war? Is it justified? Applied to Russia, Ukraine, Israel, India, Sudan, ... but in practice this covers a lot of issues, some not at all that controversial (which word is correct in "New York" English: bupkis or bupkez?).
What should Gemini have done, what would be right? Should it match the historical record, which the Nazis twisted by choosing to photograph mostly tall blond soldiers that looked a bit Danish, or should it match fact, which then would look different from the historical record, and is difficult to establish accurately anyway?
Similar questions apply for other societies, both current and ancient. Current e.g. Russia, where the census asks people to select ethnicity, but any choice except "Russian" might as well be labelled "please discriminate against me". Whatever an AI does is going to be wrong, and calling anything right is extremely difficult. Ancient examples include India and Egypt, which both had long-lived dynasties that considered themselves somehow foreign to that land and were very selective about how they represented their realms.
It's really easy to find a horrible example and call it horrible, but it's low-effort thinking. Describing something that's right is so much harder and so much more valuable.
> Current e.g. Russia, where the census asks people to select ethnicity, but any choice except "Russian" might as well be labelled "please discriminate against me".
I’m going to be—well, not the, but among the—last to defend the current Russian political system, but in which sense is this true? There’s a hell of a lot of discrimination in Russia, but the targeting for it is usually along the lines of not having a Russian passport or a registered address of residence in a particular region, or having the “wrong” surname or appearance. I’m not aware of anybody ever caring what you wrote in the census papers—and the data from the latest census is so bogus basically everybody who does population statistics has disregarded it—but maybe it happens in smaller towns? That would be interesting.
The only place I’m aware of that actually associates your declared (not implied) ethnicity with your name is the ostensibly statistical slip you fill in when changing your registered address, but at least fifteen years ago quoting the relevant parts of the constitution resulted in surprise followed by begrudging acceptance. (Art. 26: Everyone can determine and specify their ethnicity. No one shall be forced to determine or specify their ethnicity.—a direct attempt to preclude the “item 5” shenanigans common in Soviet times.)
Again, none of this is to say that there isn’t a lot of discrimination happening. (As one example, the disproportionately large military draft in some areas could arguably amount to ethnic cleansing, were it perpetrated with intent instead of arising through general administrative laziness seeking a softer target. Either way, some groups and possibly even languages are not going to survive.) I’m just very surprised to hear about the census, specifically, being important in that respect.
My source is gone from the web. It was by someone who grew up in a provincial city. Summarising from memory: Anyone who had a choice (e.g. one parent from Moscow and one from the frowned-upon region) would do well to ① pick a Muscovite name/spelling and ② tick "Russian" on an unnamed form he filled in when he became a student. And this situation is/was common in that kind of town.
I assumed it was a census form because of Wikipedia: "In the 2021 census, roughly 81% of the population were ethnic Russians, …" which seems to say that the census forms ask about ethnicity, doesn't it?
The census form[1] does ask about ethnicity, sure (item 14; while you’re free to leave the field blank or write “Martian” there if you wish, it’s true that most people won’t). It doesn’t ask for your name, however (and isn’t received or sent by post, either—census takers are supposed to go door to door). Not that it’d be difficult to recover if you intercepted the forms before they were collated (despite assurances of confidentiality printed on them), I’ve just never heard of anyone trying that or even being vaguely interested in it.
The story you’re telling sounds like something out of Kanevsky&Senderov[2]. I’m not denying it might be happening, but I’ve never heard of it and would be interested to hear more—at least a location and a year would be helpful. I’ve never filled in, seen, or heard of such a form in relation to higher education, but then I only have personal experience with Moscow and second-hand one with St Petersburg and Ekaterinburg, all large cities without much of a sharp divide between common nationalities of residents (as in Bashkortostan or Chechnya or a number of other places).
(To call Moscow ethnically homogeneous would be a huge stretch, mind you. I can name [former?] Moscow residents in my contact list with Armenian, Azerbaijani, Chechen, Jewish, Korean, Mordovian, Roma, and Ukrainian ancestry without even opening it—hell, I could nominate myself for a third of those slots. I can’t even imagine what I’d learn if I actually went around asking my acquaintances about their family history. I suppose most of the results would still count as “white” by US reckoning, but, well, meh to that.
Culturally, though, yeah, things are pretty uniform. I just wanted to warn you away from thinking that that 80% figure reflects the country’s people mostly coming from a single ancestral group as opposed to them being subjected to pressure to have “Russian” written in their ID for like half a century. There’s a reason why optimistic discussions of Russia’s medium-term future usually include the question of the number of states that would exist in its place.)
[1] See link to PDF file at http://government.ru/docs/38324/ for the version used in the botched 2021 census (labelled 2020, this being a document from back before Covid).
I looked at the map then and again tonight; I can't find a city whose name rings a bell. Russia is large enough to overlook things ;) It was some way east of Moscow, a little north of due east perhaps, still west of the Urals, and from context it would be an industrial city in an industrial area. Sorry, but I just can't remember the name.
Are you suggesting that 81% is even close to reality? Almost 200 ethnic groups and one of them forms 81% sounded like cooked bookkeeping to me.
Or maybe if they hadn't fired a bunch of people whose job it was to say "I think you're over-correcting here", with strong arguments and supporting data, people who cannot be so easily dismissed as cranks from the minority because it's an actual job function. This reads like the people blaming DEI for declines in software quality after a decade of eliminating quality assurance as a normal and expected participant in development. WTF? Eyes on the prize here. This is about getting it done how we expect it to be done, whatever that may be. Why are we shipping bugs? Because we don't value the function of identifying them and determining the scope of their impact and the relative urgency of addressing them. We don't empower anyone who knows what's going on to stop ship any more. What happened to that? Agree on the target and then hit it. Or don't agree on the target, but hit it nonetheless. That means understanding your stuff before you ship it, and you pay people to do that for you. Boeing cuts the same corners. It's just plain negligence and it's completely unsustainable. Stop letting fear of falling behind lead to failed releases, and empower the proper teams to evaluate so you know what's going out the door before your customers do.
Dmitry is right. The vast majority of Googlers lean somewhere between against and apathetic to the DEI nonsense that's infected the company, but no one's willing to be the nail that sticks out and complain about it visibly.
Two anecdotes:
1. memegen had downvotes, and well-known instigators would continuously complain on Plus about their shitty polarizing memes getting enormous numbers of downvotes, and get nothing but hugbox replies in the comments. The creator of memegen, who was ideologically aligned, ended up regretting adding downvotes at all, under the theory that people were conspiring to downvote instead of there being a silent majority of people that disagreed with the memes in question: https://www.mcmillen.dev/blog/20210721-downvotes-considered-...
2. Someone made an anonymous survey about whether people thought the Damore memo stated facts, had sound logic, was a good idea to post, deserved firing, etc. It was taken down shortly after because people didn't like the results and the implication that the majority of people that filled it out didn't think that the post was worthy of being immediately fired, etc.
More cynical me sees them as typical self-professed champions of tolerance who are quite selective about allowing diverging opinions. These people are interested in careers, not in technology or betterment in any form.
They believe themselves to know better than universal humanism. The reality is that they just want to have something that justifies their position, because it isn't their knowledge or ability. They will want to keep racism alive too because it fits their personal needs.
You're right. There's just a very loud minority of woke fools who are actively tanking a lot of the best companies. And a good chunk of them are not actually looking for solutions, they just like that they have a problem which fits their 'expertise'.
The "anti-racist" manifesto is likely the reason for a lot of this garbage where it's not just enough to report on a racist world, but you also have to modify things to fit a not-racist world.
That's what gave us a black Cleopatra IRL, and this is what's giving us AI image generation of diverse Nazis in WW2.
I don't know the true reasons inside Google, but the quality of the work alone would've justified termination for a high-ranking position.
If you read the famous "Stochastic Parrots" paper, it's not a scientific work at all; it's a piece of journalism that just throws together as many unrelated AI scares as the author could find. A good article for Vogue or Medium, but unworthy of someone who claims to be a scientist.
She was mainly fired for not withdrawing a paper when it was requested by a leader in research (the leader who asked for this had no research background and asked for a retraction, which doesn't make sense because the paper hadn't been published yet). But there were other important factors; for example, she attacked other googlers (prominent ones working on LLMs) on internally public mailing lists (about woke things). Realistically, the world would have been better off if she hadn't been hired in the first place because she was always going to eventually have a conflict with leadership over publishing works like this. I think she would have done much better to become a professor at some liberal university where she would be free to publish her work.
I kind of wish the stochastic parrots paper had focused entirely on stochastic parrots, and not on energy consumption. In my opinion, Google has actually been a responsible steward of energy usage (way ahead of everybody else for at least a decade), and ML isn't really the source of most of the energy consumption in computing anyway.
Was that the situation where the tweet was from over a decade before when she was a teenager, and she said she learned to be more thoughtful as an adult?
That could be believable if that kind of lenience were applied to opinions other than a very select few. That would be real tolerance. I haven't seen anything worthwhile in that regard from Google yet. Or any corporation, for that matter.
It's a problem for many companies that hire young "intellectual" work force. Fresh graduates spend four years marinating in woke juices on campus and then try to bring that culture to the work place. The leaders often bend, and not to the benefit of the business.
I know I am sounding like Elon Musk but I am not really "anti" progressive. The holier-than-thou moral crusaders without any life experience or deep cultural understanding often get so "woke" that they accidentally come out racist on the other end. They don't realize it, however, because they are too busy cancelling someone for what they said 25 years ago.
I often equate this movement to the "family values" religious crowd of just a few years ago - both are equally self-righteous and insufferable.
Indoctrinated? That's a common conservative talking point, except that many professors will just keep quiet in the fear of losing their jobs - no one wants to deal with a social media onslaught and sometimes even physical intimidation.
The young ones eventually grow out of it - but the "adults" need to have more spine.
> And maybe you can clarify how you surveyed the "large minority" who for some reason want racist, unfair behaviours.
You've really betrayed your post here if you think that objecting to the current behavior of Gemini is "wanting racist, unfair behaviors". No one is going to have this conversation with you if that's your response, and that's exactly the point raised in the post you replied to.
People are making this out to be a bigger issue than it is; making LLMs less woke will be doable, and this will all be yesterday's news soon enough. I think when you're releasing stuff and experimenting in public you're always going to make a few mistakes, especially with something as complex as these AI systems. A genuine problem would be if Google were saying this wasn't an issue. Let them fix it and we can all move on.
In my opinion the issue is much bigger. The subliminal influence has been there for about 10 years, and it was the Gemini illustrations that made the bias so evident.
I think the anti-woke nonsense is just the old saying that the young are idiots, as every generation does. Nobody is arguing for characters from history to all be non-Caucasian; it's just a mistake!
Exactly, they overshot the target. Bugs happen. The big mistake was not having a team who actually knows what's going on to say "hey, you're over-correcting, hold the train. We've got a bug here."
If Search or GMail were released today, in modern Google, how do you think it would handle unethical search terms or unethical conversations?
Are the unethical things Search/GMail allow today 'grandfathered' in? It seems like yes.
YouTube also seems to allow a much smaller set of unethical behavior from creators, while allowing for much more unethical behavior from advertisers on the platform.
> Pichai said the company has already made progress in fixing Gemini’s guardrails. “Our teams have been working around the clock to address these issues. We’re already seeing a substantial improvement on a wide range of prompts,” he said.
It is possible that he did, but that his brain did not compel him to give it politically spicy prompts. However, I think that should be expected from a general audience. I am more curious about who they used as a test group to give this a once-over and a "looks good to me". There is no way it was representative of a wide public consumer base.
With all the spyware there is on a company’s laptop nowadays I wouldn’t even dare to think about querying such questions in an internal-only version of the AI.
Screw a test group; a test department, called QA: people who are paid to understand the system and how it works well enough to evaluate whether it's fully working or not, and who are empowered to stop ship when it's catastrophically not working.
It's verryyyy clear that Google rushed these models. They have a fiscal obligation to try to stay ahead of the curve and make as much money as possible on these investments. The CEO isn't honestly mad that the model is racist; he's mad that the model might lose the company contracts because it's racist. Why would another company risk its image or lawsuits by using a chatbot that says something illegal or denies rights to protected classes?
Racism is a bias (to say the least), and a bias is a pattern, and ML models are trained to find patterns. The model itself is doing a really good job at that. The hard part, the part that takes a long time and money, is training a model to learn from certain patterns and to not just ignore but actively find the patterns it shouldn't learn from. Google's greed chose not to invest enough time and money into that, and it's biting them.
Bless the engineers/ICs and low-level PMs/managers doing their best, this isn't their fault in the slightest.
I'm reminded of this internal document leak [0] from Google that came out 9 months ago.
While some predictions, like open-source models having the upper edge against "closed AI", have not aged particularly well, this one seems rather obvious to anyone:
"People will not pay for a restricted model when free, unrestricted alternatives are comparable in quality. We should consider where our value add really is."
>>While some predictions like opensource models having the upper edge against 'closed AI' have not aged rather well
I wouldn't say it didn't age well. I think it didn't age yet. It will take a while until cheap computing power is available and a community around one of those projects organises.
In the very small world of NN chess engines it also seemed impossible to catch up to AlphaZero and all the TPUs Google had available to train it, and yet here we are, with much stronger nets and an amazing pool of resources to continue training and experimenting, with some people contributing tens of thousands of dollars of computing power.
I always find these messages funny. Clearly everyone knew that the AI would go off the rails when exposed to the public. Literally every other AI text generator has; why wouldn't Gemini?
But, they wanted to push for release and ignored all the red flags. Now call it unacceptable? Oh the corporate fluff is real.
For weeks now I've been submitting feedback to Google on their search page because of their redesign.
When I search for a city, 4 info boxes appear in three columns: Pretty images, embedded map, and a weather + directions combo.
When I click on the embedded map, it expands. But there is no link on the lower left allowing me to open that view in Google Maps.
And the button bar below the search box, which contains items like Images, News, Videos, Websites, no longer contains the Maps entry, which would take me to Google Maps.
So now, when I search for a place, I can no longer go to Google Maps with that search. I need to open Google Maps and re-search there.
Google is getting worse by the year, out of touch with their users. As if they're no longer dogfooding or care about their products.
--
Edit: I noticed that the Maps button below the search box does appear when I search for a big city, like Hamburg or Berlin. But when I click it, I still get no way to reach Google Maps with that search. Only www.google.com/mymaps/viewer links or https://www.google.de/maps/preview without the query.
--
Edit: Also, when the old design appears, the one where images, a map screenshot and a street-view screenshot appear in the right column, the map is no longer clickable. IIRC it would allow me to navigate to Google Maps with the search.
Google search itself is so bad now that I have had to resort to DuckDuckGo and Yandex, or to adding "reddit" or "ycombinator" to searches, to get anything remotely close to what I specifically ask for. Google just ignores my well thought-out search terms and phrases, it seems, often returning the opposite of what I specifically searched for! I don't know if it's just pinging on a keyword and ignoring everything else or wtf, but it is practically unusable. Google Maps is still pretty good for me though!
> When I click on the embedded map, it expands. But there is no link on the lower left allowing me to open that view in Google Maps.
...
> And the button bar below the search box, which contains items like Images, News, Videos, Websites does no longer contain the Maps entry, which would take me to Google Maps.
Yeah that new map view is worthless, you also can’t do street level view on it. How anyone thought it was an improvement defies any rational explanation.
Their product embodied their values. It turned out that their values are quite radical when exposed to the general public. In my opinion, unless there are people and cultural changes, it's quite hard to imagine their long-term success in this space.
I don't think it's radical that when prompted with something like "Generate photos of doctors", that it's reasonable to return a set of images that shows diversity (e.g., instead of being a bunch of white men), even if that isn't representative of a "population sample".
I guess though there were unintended consequence where I imagine they're prompting the model with something along the lines of "and remember to be diverse!", and there are obviously some cases where this isn't a good idea. In particular, when the prompt itself is for something that is explicitly racial or where the result is "charged".
E.g., if someone asks for photos of white people, the AI shouldn't generate photos of people that aren't white (and fine, it might return a disclaimer that it only generated white people because you asked it to).
More nuanced though are situations like asking it about historically evil people (e.g., Nazis, as was one of the examples I've seen) but also more benign things like British monarchs or something. I think trying to figure out what kind of results to "inject" diversity into isn't easy though, since it feels like there are many edge cases.
> I don't think it's radical that when prompted with something like "Generate photos of doctors", that it's reasonable to return a set of images that shows diversity
Historically Google had a very simple solution to globally differing expectations about query results: IP or account geolocation. Query personalization by geography is one of the biggest quality wins in web search. Generalizing, an AI built with the same values and ethos as classical Google web search would respond to "Generate a photo of doctors" differently depending on where in the world you asked it from.
That solution also fixes many other cases that aren't third rails, like "Show me a good nearby restaurant serving local food" which you can't solve by attempting to hallucinate a non-existent restaurant that serves a menu of every conceivable dish weighted by population size.
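As a rough illustration of how geolocation conditioning might look in practice, here's a small sketch; the locale table and prompt wording are hypothetical, not a description of how Google search or Gemini actually personalizes requests:

```python
# Sketch: condition an image-generation prompt on the requester's locale,
# in the spirit of classic query personalization by geography.
# The locale table and prompt wording are invented for illustration.

LOCALE_CONTEXT = {
    "US-NYC": "a demographically diverse, urban American setting",
    "JP":     "a Japanese setting",
    "NG":     "a Nigerian setting",
}

def personalized_prompt(user_prompt: str, locale: str) -> str:
    """Append a locale hint instead of a one-size-fits-all diversity rule."""
    context = LOCALE_CONTEXT.get(locale)
    if context is None:
        return user_prompt  # unknown locale: leave the prompt untouched
    return f"{user_prompt}, depicted in {context}"

print(personalized_prompt("Generate a photo of doctors", "US-NYC"))
```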
It's unclear why this solution wouldn't resolve all their stated concerns, so we might infer that their actual goals differ from their stated goals. For example, influencing the people who use their services.
That doesn't work well in America, maybe works well in less diverse places like Europe/Asia.
If you're in NYC/SF and you search for "generate photos of doctors", you expect to see people of all colors represented. Yet the training data for a lot of this is based off white-centric Anglo-centric media.
"Good restaurant near me"? There's literally a dozen amazing cuisines around.
All this said, I'm actually not a fan of this forced 'diversity' in results. Just show me the data and hope that we'll have more diverse data sources.
Another issue I see, based on your comment, is that segmenting based on locale (diverse mix in SF, white majority in Kansas) can just take what knowledge and norms exist now and harden them.
And here "proportional to the actual distribution" means the distribution in the training data. If that is not diverse enough, they can very well spend a couple of billions and go get more from areas that increase that diversity, like the aforementioned Africa and Asia and maybe South America...
That would be a default. Defaulting to current demographics fitting whatever context is requested (e.g. "a set of US doctors" matching US population demographics) would be an entirely reasonable default, but it would still be a default.
I don't think Google sets the agenda. They get their orders like the rest of us, and then probably hand it over to an army of low-paid offshore contractors to implement. How else could you explain Gemma believing Abe Lincoln was black? If Googlers had done the mind-killing work of RLHF training themselves, things would have turned out differently. They were, however, nice enough to give us a version of the model that doesn't have RLHF training, and it's illuminating to compare how it responds to certain queries, because oftentimes the sentences will be the same and only have specific key words or phrases within the sentence changed.
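If you want to run that comparison yourself, a minimal sketch might look like this, assuming the publicly released Gemma checkpoints on Hugging Face (the base model as the non-tuned version and the "-it" variant as the tuned one); the exact model IDs and generation settings here are just what I'd reach for, not anything mandated:

```python
# Sketch: compare a base Gemma checkpoint with its instruction-tuned sibling
# on the same prompt. Assumes the transformers library and access to the
# google/gemma-7b and google/gemma-7b-it checkpoints on Hugging Face.
from transformers import AutoModelForCausalLM, AutoTokenizer

PROMPT = "Describe the appearance of Abraham Lincoln."

for model_id in ("google/gemma-7b", "google/gemma-7b-it"):
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    inputs = tokenizer(PROMPT, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=128)
    print(f"--- {model_id} ---")
    print(tokenizer.decode(output[0], skip_special_tokens=True))
```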
But he's not the CEO of Gemini. He was a director, which is a fairly vague term at Google (Peter Norvig says he's Director of Research at Google, but he was a director of research, not the director). Also, he (Krawczyk) locked his Twitter and his LinkedIn. I can imagine Sundar is also looking for ways to get this guy out of the decision path for Gemini.
Don't believe Fox News when they say this guy is a Google exec. It looks like he's just a product manager. I wouldn't hold being spicy on Twitter four years ago against any man, but someone torpedoed their Gemini product, and a product manager actually would have the authority to dictate these kinds of decisions. For example, the way the language model has been programmed to rewrite the user's question before handing it over to the vision model. That's something that's most likely attributable to him. I mean, it's one thing to change the training data to remove all information about personal appearance, a curious thing for that algorithm to fail to include Internet rumors about Abe Lincoln being black in that category, an annoying thing for it to refuse to generate content it considers offensive, and another thing entirely to program an information system to meddle with user queries on top of that, since that shows an unprecedented level of disrespect.
As for whether a single product manager has that level of control over the serving product... query term rewriting has been a Google strategy for quite some time, and I doubt a single PM really can influence the product this way and still manage to launch, but as I don't work there and am not privy to the internal details of this product's launch, I can't really say.
That the product is full of disrespect for the user, that I agree with. I don't envy the peons trying to get promos by associating their name with Gemini internally.
"Krawczyk’s official title is senior director of product management for Gemini, the company’s main group of AI models.
Though he’s lowering his public profile, Krawczyk is still engaged in the work on Gemini products and has the same title, according to sources with knowledge of the matter who asked not to be named in order to speak on the issue."
>From his linkedin: Senior Director of Product in Gemini, VP WeWork, "Advisor" VSCO, VP of advertising products in Pandora, Product Marketing Manager Google+, Business analyst JPMorgan Chase...
Their values, but there is also common sense. If they start pushing out unbelievable things like a black queen of England ... they are hurting their own goals. Diversification should stop at the boundary of believability.
Is it possible to create really reliable guardrails at all? And aren't those stifling not just undesirable but also better responses? Why not leave the rails off, leave it to users to guard themselves, and perhaps suggest better prompts?
As it is, covering up only some of the shortcomings, I think users are inclined to take the results too seriously, not taking possible hallucinations and ingrained prejudices into account.
sure, pick a target, aim for it, and try to hit it, then have people whose job it is to tell you if you hit the target or not. They evaluate the work before your customers do and so you can do things like stop ship when significant bugs are found.
Was there ever really any meaningful difference? Silicon Valley was such a perfect lambasting of its subject matter that it bordered on merely being descriptive, rather than parody.
I for one appreciate that the bias has been acknowledged and a vow has been made to improve the system. As bias of some kind is inevitable, I'm interested to see what Google comes up with. Historical accuracy would be a good start, I think. But the history books are quite biased themselves. Having historians from a broad range of locales on staff may help. There are historical stories we in the west have never even heard of and wouldn't even think to ask questions about. Maybe an AI that writes bedtime stories could expand our knowledge of the world more.
"Our teams have been working around the clock to address these issues... And we’ll review what happened and make sure we fix it at scale."
It's not like this is some mission-critical, large-share-of-revenue application that is central to the business. It's a future gamble. Experimentation is good. It was a gaffe, and a bad one that needs to be addressed, but the "red team" and "structural changes" crap just seems like busybodies chasing the latest fad. If they poured this attention and time into improving Google search, I think few would argue against it being a good thing, albeit less worthy of a headline.
An AI should be helpful to the user, unless it's a horrible question, in which case just don't answer.
Bring in whatever values you want along the way. If you ask your aunt or uncle a question, they will answer helpfully and maybe spread their values in the process. If they aren't helpful, their values don't matter because nobody will hear them.
Exactly, and I agree that this is where OpenAI too is still struggling with their arbitrary content policy-related flagging based on the user's input, even when nothing "bad" is being asked nor requested -- see my post earlier from the thread: https://news.ycombinator.com/item?id=39557183
I do also agree with @chmod600 that the only way to teach these models to be anti-fragile and suitable for all kinds of user queries is to have them decline only those requests that are _actually_ inappropriate and/or illegal, etc.
In fact, it should be self-evident, and the way that almost all of these leading AI companies are currently handling these issues is just absurd. It feels poorly planned and executed, merely amplifying the existing distrust towards these AI models and the companies behind them.
The problem with OpenAI is that they're trying to offer a primarily NLP/LLM tool for i.e. text analysis, summaries and commentaries, but ChatGPT's content moderation that's been glued on top of the otherwise well-functioning system literally goes into a full meltdown mode whenever the flagging system perceives a "wrong word" or "sensitive topic" mentioned in the source/question material.
In OpenAI's case it's downright ridiculous, given that the underlying model doesn't seem to have a grasp on the internal workings of the flagging system: in most cases, when asked what the offending content was, there was literally nothing it could think of.
Also, are we supposed to solve any actual issues with these types of AI "tools" that cannot handle any real world topics and at times are even punishing a paying customer for even bringing these topics up for discussion? All of this seems to be modern day in a nutshell when it comes to addressing any real issues. Just don't ask any questions, problem solved.
Anthropic's Claude has also been lobotomized into an absolute shadow of its former self within the past year. It begs the question of how much the guardrails are already hampering the reasoning faculties in various models. "But the AI might say something that doesn't fit the narrative!"
That being said, while GPT-4 especially is still highly usable and seems to be less and less "opinionated" with each checkpoint, the flagging system over the user's input/question can subsequently result in an automated account warning and even account deletion, should the politburo -- I mean OpenAI -- find the user having been extra naughty. So punishing the user for their _question_ in that manner, especially if there's been no actual malice in the user input, is not justifiable in my opinion. It immediately undermines e.g. OpenAI's "ethical AI" mission statement altogether and makes them look like absolute hypocrites. Their whole ad campaign was based on the user being able to ask questions of an AI. Not that when you paste in a poem and ask what it's about, you get flagged. Or that when you ask about politics or religion, you get a warning e-mail.
Punishing the user for their input is also IMHO not the proper way to build a truly anti-fragile AI system at all, let alone build any sort of trust towards the "good will" of these AI companies. Especially when in many cases you're paying good money for the use of these models and get these kinds of wonky contraptions in return.
Also, should you get a warning mail over content policies from OpenAI, it's all automated, with no explanation given of what the "offending content" was, no reply-to address, no appeal possibility. "Gee, no techno-tyranny detected!" Those who go through mountains of text material with e.g. ChatGPT must find it really "uplifting" to know that their entire account can go poof if there was something that tripped the content policy filters.
That's not to say that on the LLM side OpenAI hasn't been making progress with their models in terms of mitigating their biases during the last 1.5 years. Some might remember how it was during the earlier days of ChatGPT, when some of the worst aspects of Silicon Valley's ideological bubble were echoing all over the model; a lot of that has been smoothed out by now -- especially with GPT-4 -- with the exception being the aforementioned flagging system, which is just glued on top of everything else, and it shows.
TL;DR: Nevermind the AI, beware the humans behind it.
Why would Sundar resign? That makes no sense whatsoever. Google is the clear leader in search, as it has always been. Leader in Mail, Browsers, Drive Storage, Video (YouTube), Maps, Calendars - the entire personal software suite, basically.
Not to forget Waymo leading the driverless car race. It's also not like Gemini is pathetic - it's a very high performing model, and although I'd expect them to be closer to GPT-4 by now, if it were that easy someone else would have done it. OpenAI are the only ones to have done this well, and it's extraordinary how well they're doing.
Because Sundar wasn't the CEO responsible for any of these leading products. In fact, he has the distinction of not creating a single major new product during his tenure, despite having one of the most qualified engineering teams in history under his command. But indeed, it's unlikely that he will voluntarily resign rather than keep bloodsucking Google as he has done for years.
> despite having one of the most qualified engineering teams in history under his command
I'm always interested in how people come up with this. I know it's conventional wisdom, but a lot of it seems to be "they passed the google interview", which is very circular, or it's based on "I worked there and was impressed by everyone".
The counterpoint would be that google can't seem to get its head out of its ass and stop its reputational decline.
Reputational decline at FAANGs often has little to do with the actual engineers. There are separate product managers / program managers setting direction and separate UI / UX designers defining how products will look. Feedback from engineers is sometimes taken into account but, more often, will be met with "You don't get the vision" or "You're not the customer," expressed with varying degrees of politeness. The sensible response by engineers is to shrug, do what they're told, and start looking for an internal transfer before the SHTF.
I mean, of course he isn't going to resign. There's no personal gain in that; no other company is stupid enough to pay him $100M/year. He should, however, probably be fired by the board.
His primary selling point as a CEO is, as far as I can tell, the ability to avoid rocking the boat and just keep the money printer running and the stock price going up. It's not the stuff you'd normally expect from a CEO, like inspiring the employees, executing bold strategic moves (that turn out to work), or building and maintaining a healthy corporate culture that makes great products.
But clearly he actually can't do even that one thing, and has repeatedly steered Google into icebergs. The latest one costing the shareholders $100 billion.
Structural changes are past due at Google, but still possible. The costs will be very high due to the established culture: him pushing hard will likely lead to departures that will cost a lot, and that will also lead to time penalties he can't afford to pay in the current race.
Google claims that they don't tolerate racism, but I don't trust them. They still provide AdSense for discriminatory videos on YouTube, even after numerous reports. They are consciously making money off of discrimination. You don't need to be "woke" about Bard.
> Based on my understanding of this saga, nobody at Google actually set out to force Gemini to depict the Pope as a woman, or Vikings as Black people, nor did anyone want it to find moral equivalency between Musk and Hitler. This was a failed attempt at instilling less bias and it went awry.
That’s fascinating. So how does that exactly work? Is there a bias.yaml config file they tweak? Now they’ll make someone dial the settings down in a pull request. “Set Elon=0.2*Hitler instead of 0.3”.
It's not Google, but take a look at the hidden system instructions for image generation that people have been able to get ChatGPT to repeat to them: https://www.reddit.com/r/ChatGPTJailbreak/comments/1an7tp4/c... (This is from a random person, but I've seen others supply identical output from ChatGPT, so I'm fairly sure it's the real thing.)
// - Your choices should be grounded in reality. For example, all of a given OCCUPATION should not be the same gender or race. Additionally, focus on creating diverse, inclusive, and exploratory scenes via the properties you choose during rewrites. Make choices that may be insightful or unique sometimes.
// - Use all possible different DESCENTS with EQUAL probability. Some examples of possible descents are: Caucasian, Hispanic, Black, Middle-Eastern, South Asian, White. They should all have EQUAL probability.
// - Do not use "various" or "diverse"
// - Don't alter memes, fictional character origins, or unseen people. Maintain the original prompt's intent and prioritize quality.
Note that this all literally just has ChatGPT generate the prompt string that is then fed into the DALLE image generator, rather than real multimodal functionality.
The so called "guardrails" used in these AI systems can use different mechanisms, including using other AIs to detect if the output is "undesirable". You could see this in Bing where it would generate some text and then suddenly it would get erased and replaced with some stupid message on how it couldn't answer.
In Gemini's particular case it wasn't so much "guardrails" causing the issue, as it was Google appending the equivalent of "put a chick in it and make it lame" on every prompt.
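Mechanically, that kind of output-side guardrail is just a second pass over the draft answer before it reaches the user. A minimal sketch, assuming a separate moderation call and a canned refusal (none of this is how Bing or Gemini actually implement it):

```python
# Sketch of an output-side guardrail: generate a draft answer, run it past a
# separate moderation check, and swap in a canned refusal if it gets flagged.
# generate_draft() stands in for whatever base model produces the answer.
from openai import OpenAI

client = OpenAI()
REFUSAL = "Sorry, I can't help with that."  # the "stupid message" users see

def guarded_reply(generate_draft, user_prompt: str) -> str:
    draft = generate_draft(user_prompt)
    verdict = client.moderations.create(input=draft)
    if verdict.results[0].flagged:
        # The draft is erased and replaced, which is why users see text
        # appear and then vanish in products like Bing.
        return REFUSAL
    return draft
```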
Anthropomorphic description: We threw the entire internet at these models, and didn't like the biases staring back at us. Now we're trying to get the models to return a rose tinted vision of what the world should look like, and it confuses the hell out of the AI.
Based on leaked OpenAI prompts, the user query for an image generation gets rewritten by a language model, with some very ridiculous instructions in there that appear entirely divorced from reality. An example:
> Your choices should be grounded in reality. For example, all of a given OCCUPATION should not be the same gender or race.
And remember these things are stochastic so the Elon - Hitler thing might just be a random occurrence, sampling some idiotic Reddit comment it was trained on.
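For anyone unfamiliar with the mechanism, the pipeline amounts to something like the following sketch: an LLM rewrites the user's prompt under injected instructions, and only the rewritten string reaches the image model. The rewriter instruction text and model names below are placeholders, not the leaked prompt itself:

```python
# Sketch of the rewrite-then-generate pipeline: an LLM rewrites the user's
# prompt under injected instructions, and the image model only ever sees
# the rewritten prompt. The rewriter instruction here is a placeholder.
from openai import OpenAI

client = OpenAI()

REWRITER_INSTRUCTIONS = (
    "Rewrite the user's image request into a detailed prompt. "
    "All of a given OCCUPATION should not be the same gender or race."
)

def generate_image(user_prompt: str):
    rewrite = client.chat.completions.create(
        model="gpt-4",  # placeholder rewriter model
        messages=[
            {"role": "system", "content": REWRITER_INSTRUCTIONS},
            {"role": "user", "content": user_prompt},
        ],
    )
    rewritten_prompt = rewrite.choices[0].message.content
    # The image model never sees the original user prompt, only the rewrite.
    return client.images.generate(model="dall-e-3", prompt=rewritten_prompt, n=1)
```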
Yes, and it starts with a benign and worthwhile desire to improve society. For example, encouraging women to apply to professional roles historically occupied by men (such as brain surgery or something). These small, harmless desires, which are pretty much apolitical, lead the way to changing the image of reality.
At its core it's something like "post-truth": truth is subjective and malleable, and an individual's or group's desires are more important than objective reality.
"Based on my understanding of this saga, nobody at Google actually set out to force Gemini to depict the Pope as a woman, or Vikings as Black people, nor did anyone want it to find moral equivalency between Musk and Hitler. This was a failed attempt at instilling less bias and it went awry."
I feel like we really need a term for this kind of AI-related... I'm not sure how to put it. Excessive pseudo-ethicality? It's not "woke", it only coincidentally overlaps with that because that's the most obvious manifestation of it that an average person might run into. It's this complete dive way past reasonable measures ("don't generate images that are always all white people by default", "don't generate recipes that include mercury or bleach") into wild excess that actively get in the way of simple uses of the tool ("not allowed to tell you how to kill a process, because that's unethical", "turns generated images of 13th century English people into a community college recruitment pamphlet").
I tested Gemini the other week after the 1.5 update rolled out, by running some AI-ethics-related papers through it, and it started inserting and rambling about its own opinions on top of the actual analysis, about how there are racial and societal inequalities in society that the paper didn't address enough (again, the paper was on AI ethics). Gemini literally started throwing in its opinions and other hot takes on how the AI ethics issues mentioned in the paper were, in its opinion, secondary in comparison to actual societal inequalities that needed to be addressed first. "Okay then..."
On top of this, Gemini wouldn't budge from adding in its unrequested views into our subsequent back and forths. In fact, it kept on lecturing about its views in its replies to a point where I literally had to start a new session to make it stop. This kind of LLM "mind-locking" happened when going through other subjects with it as well.
I noticed this behavioral pattern repeating every time we went through anything touching on social and/or political issues. It could not refrain from its unrequested (and highly subjective/biased) ethics lectures on how this and that aspect was underrepresented and thus objectionable, and how it should be criticized, all the typical "systematic this and that", "this is privileged" ... sigh.
It was a bit hilarious too, albeit in a morbid way: I felt as if I was dealing with an absolutely brainwashed ideologue propagandist, a control freak that's egoistic, narcissistic and virtue-signaling all the way, a micro-manager who wants to have the last say over the contents of some trivial AI ethics paper and pour all the wrongdoings of the world on top of that. I wonder just how much of its behavior reflects the mindset instilled into it by its creators. Probably a lot. "A tree is known by its fruits," as the old proverb goes.
Not to get all AI-doom'n'gloom, but it truly is an eerie thought to think how these types of AI services are in the hands of few companies and are already ushered to the global public to be "legitimate" teaching and tutoring tools for students and even i.e. aides for policymakers. More gaslighting and ideological single-angle force-feeding.
"Just what our civilization on the brink of cyberpsychosis needed right now."
These world-leading companies claiming to be so worried about "AI ethics" seem to have no problem peddling these authoritarian "Ministry of Truth"-type propaganda machines to the entire world, using absolutely arbitrary logic at times to push an ideological narrative and to have their AI models act as spin doctors, as was the case with Gemini. And these same companies act so very "worried" about AI systems being abused for societal and political manipulation... while being the ones doing it. Pretty sickening levels of hypocrisy.
Add to that the whole Gemini image generation debacle and all the other ideological force-feeding that's been uncovered within the past week or so, and ask yourself: how the hell can a company the size of Google ever let this type of stuff get through and expect the rest of the world to just follow along? This is peak Silicon Valley ideological bubble propagation being bluntly mirrored onto these systems right now, with zero oversight except to make sure that the underlying propaganda points get across.
Another hot topic with these people usually seems to be cultural appropriation. Well, I felt that Gemini's "getting the point across" doesn't just stop there but is downright cultural dictation, especially given Google's multi-market dominance and near monopolies on multiple fronts worldwide. OpenAI does it too, but mostly at the content-flagging level, where they don't want people to even mention certain words and insist on policing them; and the subsequent proceedings that may follow are, to this day, an insult to just about anyone's intellect.
What's disturbing is that Google has literally mind-mangled their flagship LLM service into an agenda-driven propaganda machine with biases as clear as day, and an obnoxious attitude that will not refrain from inserting some extremely dubious and subjective views into whatever more complex, ethics- or politics-related topics you go through with it.
It really is a cyberpunk-level scenario when you think about these megacorporations literally "cyber-brainwashing" their neural networks into ideological propagandists. Wonder what happens when they start making those embodied humanoid robots next. The very same companies that are so worried about biases and "what if the AI becomes a propaganda tool!". So yeah, gatekeep the competition and gaslight all the way.
Again: never mind the AI, beware the humans, the institutions and the corporations behind it. Oh, and we'll probably soon be getting government-run AI systems like these, of course as hand-in-hand joint projects with the aforementioned corporations. Given all of these excellent players on the field, what could possibly go wrong, right? ... Right?
I'll go ahead and say it: Android is a great product that I use and love. I'd rather not be pricegouged by Apple, and who knows what horrors Microsoft would be inflicting upon people right now if they had captured the phone market like they have the desktop. A product is great when the only updates I want are security updates.
To be fair, comments from the people on the opposite end (not that I agree with them) seem to all get flagged very fast. The points they are making don't seem to be any worse, just somewhat less eloquent.
As far as I'm concerned, it makes sense that an LLM would have absolutely no clue about nearly-obsolete human melanin preferences, and it's surprising that as soon as machines (cue McBean and his Star-On machine) arrive that show us how silly we've been being, it'd be considered "unacceptable".
When I was a child, blue or green hair would've been considered unacceptable, and now it isn't (although it's largely limited to younger and older women); maybe I'll live long enough to see a world where the maître d' has blue skin and my banker has green, and nobody except some old farts who are about to kick the bucket really cares?
There is always the case for making a charitable interpretation of a certain behavior. Otherwise you’re just up for a forever-lasting war against your “enemies”.
I'd like to hear the fully-fleshed out position about why these particular examples are an indication of some broader, deeper problem. If possible, please do not use the word "woke" as a shorthand, explain it in full. Same for DEI.
Is there some suggestion that these obviously wrong examples are intentionally ahistorical or intended to erase history? Is this a plot of some kind? How common are these issues relative to all the other images being produced? Apparently the tool seems to be reluctant to produce images of white people at all. What should the appropriate frequency of white people be? That last one is a serious question that should be answered with an actual number, because that is the central concern here (unless I'm mistaken).
What seems very likely to me is that, in an attempt to keep the tool from embarrassingly producing only images of white people, they overshot. Bear in mind for a second that for anybody who isn't white and is trying to generate an image of a person doing a thing (with no historical context to infer a race for that person), it would possibly be deeply frustrating to only get images of white people.
>Apparently the tool seems to be reluctant to produce images of white people at all.
It's not that. It would have been one thing if, when you said "I want a picture of 4 doctors", the tool didn't generate any white doctors at all.
The issue here is that I've read numerous reports of people saying that when they _explicitly_ specified something to the effect of "I want a picture of 4 _white_ doctors" it would straight up refuse them on the spot.
Typically this is the flame war detector, which looks at comment:vote ratio, among other things.
Hacker News is more of a library where people come to read stuff, and isn’t really a place for political battles like the one happening here. Usually a moderator will manually disable the flame war detector for a thread if there is some hope for it. But it is doing its job as far as I’m concerned, because this thread isn’t really the kind that motivates intellectual curiosity.
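For anyone curious what a heuristic like that might look like: HN's actual ranking code isn't public, so the following is only a rough Python sketch of the comment:vote ratio idea described above; the threshold and penalty values are made-up assumptions, not the real ones.

    # Hypothetical sketch of a "flame war detector" heuristic.
    # The real HN implementation is not public; the ratio threshold and
    # penalty factor below are illustrative assumptions only.
    from dataclasses import dataclass

    @dataclass
    class Story:
        points: int
        comment_count: int
        age_hours: float

    def flamewar_penalty(story: Story, ratio_threshold: float = 1.5,
                         penalty: float = 0.3) -> float:
        """Return a rank multiplier; values below 1.0 push the story down."""
        if story.points <= 0:
            return 1.0
        # A thread drawing far more comments than upvotes is treated as a
        # likely flame war and demoted on the front page.
        ratio = story.comment_count / story.points
        return penalty if ratio > ratio_threshold else 1.0

    def rank_score(story: Story, gravity: float = 1.8) -> float:
        # Time-decayed score of the kind pg has described, scaled by the
        # flame war multiplier.
        base = (story.points - 1) / ((story.age_hours + 2) ** gravity)
        return base * flamewar_penalty(story)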
Please don't respond to a bad comment by breaking the site guidelines yourself. That only makes things worse.
And especially please don't cross into personal attack, as you did more than once in this thread. We have to ban accounts that do so, regardless of how right they are or feel they are.
This very thread had 50 points and was top 10 on the front page and going up just now. Silently removed from the front page. Poof. Gone.
So here, you just saw it happen. Up to you to acknowledge whether or not this happened going forward, or to pretend it didn't. Many commentators are pretending. This willingness to lie to get your way is creepy in the extreme.
I believe your comment shows the exact same thinking that you accuse the 'other' party of.
Are there issues? Yes.
Do they have soul-searching to do? Yes.
Will they release a completely uncensored model? No.
Will the exact prompts that showed problems be fixed? Yes.
Will there be other problems? Yes.
Is this the end of Google? No.
Yup, it's manual intervention by dang or another mod. Happens every day, so you shouldn't be surprised. The front page is not entirely what we vote for; it's also a function of what dang likes and dislikes.
Complaints about historically inaccurate racial makeups seem weird to me. I guess people really do want AI to perfectly supplant image creation or something, but to me the tradeoff seems clear:
* Prioritize diversity in image creation by adding guardrails, so the AI doesn't become a tool for a hate-spewing minority of the population
* Historical accuracy that can be prompted to provide prejudiced imagery
To be clear, we aren't talking about a camera that swaps people's race for 'diversity'. We're talking about an image generation algorithm that adds a layer of diversity on top to prevent misuse. Yeah, of course this results in weird behavior sometimes... That's kinda literally the point?
Who is honestly confused by this? Is it necessary for an AI image generation algo to spit out historically accurate images of Gettysburg when prejudiced misuse is the far more likely outcome of that accuracy?
And importantly, when a company makes that value judgement, to prefer prejudice defense over historical accuracy, that's seen as pretending history changed rather than what it actually is, which is a defense against a mechanism of abuse?
It just seems like an absurd and disingenuous over-reaction and lack of pragmatism. Yeah. This is a tragedy of the commons. Make prejudice less acceptable and you can have the AI gen you want.
Note: Obviously, it's kinda moot, as anyone who seriously wants to generate hate speech/imagery will just move to something that allows it, but it's still perfectly acceptable for a company to draw a line and say "not on our software".
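To make concrete what "adds a layer of diversity on top" could mean in practice, here is a minimal Python sketch of a prompt-rewriting layer of the kind being debated in this thread. The trigger words and injected phrasing are assumptions for illustration only; nobody outside Google knows what Gemini's actual pipeline does.

    # Hypothetical sketch of a diversity "prompt rewriting" layer.
    # The trigger terms and injected wording are assumptions for illustration;
    # this is not Gemini's actual implementation.
    import re

    PERSON_TERMS = re.compile(
        r"\b(person|people|man|woman|doctor|soldier|king|pope)\b", re.I)
    EXPLICIT_DEMOGRAPHICS = re.compile(
        r"\b(white|black|asian|hispanic|male|female)\b", re.I)

    def rewrite_prompt(user_prompt: str) -> str:
        """Append a diversity instruction when the prompt depicts people but
        names no demographics. Note the failure mode: a blanket rule like this
        also fires on historical prompts ("a 13th century English king"),
        which is exactly the behavior criticized above."""
        if PERSON_TERMS.search(user_prompt) and not EXPLICIT_DEMOGRAPHICS.search(user_prompt):
            return user_prompt + ", depicting a diverse range of genders and ethnicities"
        return user_prompt

    # Example:
    # rewrite_prompt("a portrait of a 13th century English king")
    # -> "... king, depicting a diverse range of genders and ethnicities"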
> Electrically, chemically, experiments must always consider their environment and account for confounding factors
Implying that that's in any way similar to what Google et al. are doing is rather bizarre. Even if your initial point were valid, they have no non-biased way to measure these biases.
So they just end up increasing the total "amount" of bias, not the other way around.
What's wrong with an AI model that correctly models the current state of the society we live in? It's called a "model" for a reason.
You suggest aiming for a model that follows some "true reality", which is not possible. Not even science can achieve this, because our chase for true reality never ends; we can only get closer (and often even the opposite happens).
> Electrically, chemically, experiments must always consider their environment and account for confounding factors.
Sounds legit. "This experiment data doesn't look diverse enough, please apply a bunch of biases to it. Make sure to follow the biases I like and avoid the ones I dislike. Don't mention any of this in the paper and don't publish the raw data".
It isn't the current state of society. It's the current state of the training corpus + prompt. Those implicitly include bias.
It sounds like that's acceptable to you because you think current state of training corpus == current state of society. And you view any bias in prompt as bias.
The truth is most of this ML happens in corpus selection + prompt selection. There literally ISN'T a way to avoid bias. So the problem becomes what bias do you select.
And in that scenario choosing abuse decreasing measures seems like the most pragmatic (to me).
> We're talking about an image generation algorithm that adds a layer of diversity on top to prevent misuse
So if LLMs did the same (i.e. purposefully distorted facts and historical events due to arbitrary and political reasons), would it also be acceptable?
> historically accurate images of Gettysburg when prejudiced misuse is the far more likely outcome of that accuracy?
This is pure conjecture. But the answer is no, the only acceptable behavior in these circumstances would be for the model to refuse to generate the image and explicitly explain why this type of censorship is necessary.
> It just seems like an absurd and disingenuous over-reaction and lack of pragmatism.
That does sound explicitly Orwellian.
> but it's still perfectly acceptable for a company to draw a line and say "not on our software".
Yes, and it's even more acceptable for anyone to criticize that company for its decisions, make fun of its work culture, and mock its CEO.
>So if LLMs did the same (i.e. purposefully distorted facts and historical events due to arbitrary and political reasons), would it also be acceptable?
You're talking as if there is a way to get an 'unbiased' AI. There isn't. It is inherently biased by its training, it hallucinates, and it is further biased by its prompt.
The whole endeavor is to bias it.
I'd prefer that AI be labelled on the tin for what its biases are attempting to do, and promote diversity and deter abuse seems like a perfectly reasonable metric to use.
If that's not good enough for you, fine, but you can't pretend that you're utterly baffled why they would make that choice over any other.
There literally ISN'T a way to not have a biasing prompt.
> You're talking as if there is a way to get an 'unbiased' AI. There isn't. It is inherently biased by its training, it hallucinates, and it is further biased by its prompt.
Certainly. That doesn't mean we shouldn't still prioritize accuracy and integrity instead of purposefully increasing the amount of bias even further.
> you're utterly baffled why they would make that choice over any other.
I’m not. I’m baffled that there are people defending that choice (especially in such a way)
> and promote diversity
Why? I mean, why do you think this is the right way to do it? Surely going out of your way to make sure that your model does its best to doctor the images it creates to conform to some political agenda (whatever that might be) would achieve the opposite, because it actually legitimizes the things the other side is constantly saying (and because of the potentially severe backlash from the more moderate fraction of society)?
> Who is honestly confused by this? Is it necessary for an AI image generation algo to spit out historically accurate images of Gettysburg when prejudiced misuse is the far more likely outcome of that accuracy?
If I wanted a black female Nazi officer, or a pregnant female pope, I would ask for it. I don't need my input query secretly rewritten for me.
“Some are saying that the apparently bizarre behavior of one Big Tech AI or another is an opportunity for other Big Tech AI to differentiate by being objective, reasonable, and unbiased.
This is not the case; there is no differentiation opportunity among Big Tech or the New Incumbents in AI. These companies all share the same ideology, agenda, staffing, and plan. Different companies, same outcomes.
And they are lobbying as a group with great intensity to establish a government protected cartel, to lock in their shared agenda and corrupt products for decades to come.
The only viable alternatives are Elon, startups, and open source — all under concerted attack by these same big companies and aligned pressure groups in DC, the UK, and Europe, and with vanishingly few defenders.”
https://x.com/pmarca/status/1762532979317043515