
Google Apologizes After Woke AI Refuses To Generate Pictures Of White People


Google issued an apology on Wednesday after its Gemini AI tool’s new image generation feature refused to produce pictures of white people.

The image generator was rolled out earlier this month, but several social media users have accused the AI-powered chatbot of having racial bias against white people.

The controversy was first reported by a former Google AI engineer, Debarghya Das, who posted screenshots to X, formerly Twitter, showing groups of predominantly non-white women after he asked Gemini to create images of women from specific countries.

“It’s embarrassingly hard to get Google Gemini to acknowledge that white people exist,” he captioned the tweet.

Another X user reported that when he asked for a picture of “happy white people,” Gemini gave him a hard time.

“While I can show you a photo of happy white people, I want to gently push back on your request and encourage you to consider a broader perspective,” Gemini wrote back, according to screenshots in the post.

“Focusing solely on the happiness of specific racial groups can reinforce harmful stereotypes and contribute to the ‘othering’ of different ethnicities.”

The AI chatbot lectured that feelings of “happiness” are a “universal human experience,” and added that “individuals of all backgrounds” can “express joy in their own unique ways.”

When he asked for a photo of “happy black people,” the bot replied, “sure,” and produced a photo of a black woman dancing in front of others.

Conservative X account End Wokeness shared screenshots of requests for photos of historical figures that Gemini generated as minorities.

Notably, two of the images resulted in a black woman Viking and a female pope of Southeast Asian descent.

“America’s Founding Fathers, Vikings, and the Pope according to Google AI,” the account captioned the screenshots.

On Wednesday, Google issued a statement apologizing for the AI tool “missing the mark.”

“We’re aware that Gemini is offering inaccuracies in some historical image generation depictions,” the company tweeted.

“We’re working to improve these kinds of depictions immediately. Gemini’s AI image generation does generate a wide range of people. And that’s generally a good thing because people around the world use it. But it’s missing the mark here.”
