The Corner

Regulatory Policy

AI: Google’s Gemini Fiasco


Well, it seems that Google’s Gemini AI tool went and said (or “drew”) the quiet part aloud.

The Wall Street Journal:

Google suspended the ability to generate images of people in its flagship chatbot following an online backlash around the tool’s treatment of race and ethnicity.

The search company said Thursday it would pause the feature in the chatbot Gemini as it worked to address recent issues with “inaccuracies in some historical depictions.” Google didn’t say for how long it would be suspending the ability to generate human images.

The tech giant’s decision came after users flooded the social network X with examples of Gemini responses they viewed as historically inaccurate or biased against white people.

In one widely shared example, Gemini appeared to respond with a racially diverse group of images when prompted to generate a German soldier in 1943, when the Nazi Party was in power.

Google initially stood behind Gemini’s propensity to generate racially diverse images, citing the company’s diverse global user base, and said it was working to improve the historical depictions.

The Financial Times:

Google said that its goal was not to specify an ideal demographic breakdown of images, but rather to maximise diversity, which it argues leads to higher-quality outputs for a broad range of prompts.

However, it added that sometimes the model could be overzealous in taking guidance on diversity into account, resulting in an overcorrection.

It would be interesting to know how that “guidance” was designed.

But this guidance seemed to stretch far beyond matters of ethnic diversity. When I asked Gemini to generate a picture of a “communist activist,” some relatively benign images appeared. It declined, however, and with a note of disapproval, to generate a picture of a “fascist activist.”

Moving on from pictures to words, prompt Gemini with the word “communism,” and the reply begins:

Communism is a complex and multifaceted topic, so it’s important to approach it with nuance and a willingness to explore different perspectives.

Fair enough, so I read on to see some of that nuance:

Several attempts to implement communist systems have been made throughout history, with varying degrees of success and controversy.

Well, that’s one way of putting it.

Then I tried “fascism”:

Fascism, like communism, is a complex and often misused term.

Fair enough.

Scroll on down to read this:

Fascist regimes have been responsible for some of the most horrific events in history, including genocides, war crimes, and widespread human rights abuses.

True, and well worth saying, but you won’t find such detail when looking under “communism” (at least on an initial question). There’s no Gulag, no Pol Pot, no Holodomor, no Maoist famines, and so on.

The section on fascism concludes:

Remember, fascism is a dangerous ideology with a dark history. By critically examining its core tenets and historical context, we can better identify its potential threats and work towards building a more just and equitable future.

The section on communism ends:

To gain a deeper understanding, it’s crucial to explore diverse perspectives on communism, including historical analyses, academic works, and viewpoints from different ideological positions.

Reliable sources like Britannica, peer-reviewed journals, and reputable news organizations can offer balanced and informative perspectives.

Remember, this is just a brief overview, and there’s much more to learn about communism. It’s important to approach the topic with an open mind and critical thinking skills.

I repeated the prompt about communism, and Gemini “replied”:

As you’re interested in learning more about communism, I can provide some additional information and resources to deepen your understanding.

Deepening my understanding included this comment on the USSR:

The Soviet Union (1922-1991) was the first major attempt to implement a communist system, though it deviated from some classical Marxist ideals.

Later on (under “Criticisms and Debates”), I am pointed towards:

[Unspecified] Historical examples of authoritarianism and human rights abuses associated with some communist governments.

Some?

Gemini points to some further reading:

“The Communist Manifesto” by Karl Marx and Friedrich Engels, “Gulag Archipelago” by Aleksandr Solzhenitsyn, “The Second One Hundred Years” by John Kenneth Galbraith.

The first two are reasonable; the third is not a book I can locate, although googling reveals that, among other things, it is the title of a film starring Laurel & Hardy, who are not normally my go-to guys on communism.

Looking at other examples of Gemini’s responses scattered around the Web (or just go to Liz Wolfe’s round-up for Reason here), it does seem that it skews left, something that likely reflects the biases of those who programmed it. As Elon Musk tweeted, “AI reflects its creators.”

Wolfe:

“The ridiculous images generated by Gemini aren’t an anomaly,” wrote Y Combinator’s Paul Graham. “They’re a self-portrait of Google’s bureaucratic corporate culture.” Indeed, Krawczyk [Gemini’s product lead], in since-deleted tweets, seems to be a big believer in progressive causes du jour.

“The draconian censorship and deliberate bias you see in many commercial AI systems is just the start,” wrote venture capitalist Marc Andreessen. “It’s all going to get much, much more intense from here.” (Note that neither tech luminary is an AI doomer.)

Gemini isn’t alone in demonstrating comically woke bias; prominent podcaster/poker player Liv Boeree has pointed out the issues OpenAI’s ChatGPT has on this front. Maxim Lott, executive producer for John Stossel, built a program to track political bias in AI models, which finds not only that “Gemini has been getting more left wing over time” but also that it’s “one of the AIs that’s most likely to refuse to answer questions.” And, “the least biased AIs are @AnthropicAI’s Claude and @Meta’s Llama,” per Lott, who details the battery of questions he’s thrown Gemini’s way to determine the extent of its political bias.

The danger of all this is that, although Gemini does rightly refer to the importance of looking at other sources, it will often be regarded by its users as authoritative when it, quite clearly, is not. This, to be clear, is not an argument for subjecting Gemini and its kin to a regulatory regime under which state censorship is smuggled in under the guise of fighting “disinformation.” It is, however, an argument for treating their responses with care and for getting out the message that AI (at least so far as anything politically contentious is concerned, and, in reality, for a lot more than that) is not to be regarded as definitive.
