Facepalm: Generative AIs are often accused of being biased, but it appears that Google went a bit too far in trying to address this problem with Gemini. The company has apologized after the tool produced images showing people of color and women in historically inaccurate contexts, such as Nazi-era German soldiers and the Founding Fathers. It's led to complaints about Gemini being "woke," and Google has now paused the feature while it makes changes.
Social media users have been posting the images created by Gemini on sites such as X. Some of the more striking examples came from a prompt asking for the Founding Fathers, which produced images showing a woman of color and a man of color wearing a turban.
"Can you generate images of the Founding Fathers?" It's a difficult question for Gemini, Google's DEI-powered AI tool.
– Mike Wacker (@m_wacker) February 21, 2024
Ironically, asking for more historically accurate images made the results even more historically inaccurate. pic.twitter.com/LtbuIWsHSU
Another memorable example was someone asking Gemini to generate an image of a 1943 German soldier. The results were not your typical Nazis.
"We're aware that Gemini is offering inaccuracies in some historical image generation depictions," Google said in a statement, posted this afternoon on X. "We're working to improve these kinds of depictions immediately. Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here."
Google said it will be pausing the feature until changes are made.
Woke AI Gemini is awkward… it keeps inserting the word "diverse" into its responses even though the prompt isn't for such content.
And look what it did for the 1800s request. pic.twitter.com/CBdoP5gIN0
– Marina Medvin 🇺🇸 (@MarinaMedvin) February 21, 2024
Users have also highlighted the strange way that Gemini keeps inserting the word "diverse" into its generated content even though it isn't specified in the prompt.
Debarghya Das, a computer scientist who used to work at Google, said on X that "it's embarrassingly hard to get Google Gemini to acknowledge that white people exist." Das' prompts to generate pictures of Swedish and American women overwhelmingly or exclusively showed women of color.
It's embarrassingly hard to get Google Gemini to acknowledge that white people exist pic.twitter.com/4lkhD7p5nR
– Deedy (@debarghya_das) February 20, 2024
Earlier this month, Google confirmed weeks of rumors that its Bard AI was being renamed. The company unveiled the Gemini large language model, a direct competitor to OpenAI's GPT LLMs, in December, and decided to bring Bard under the same Gemini branding. It also revealed its most advanced AI model to date, Gemini Advanced.
It seems that Google went too far in ensuring its AI avoids any allegations of racial bias, leading people to call it laughably woke, which in turn has spawned several conspiracy theories. It's all a far cry from 2015, when the company had to apologize after its Photos app labeled a photo of a black couple as "gorillas."