Google's AI chatbot Gemini makes 'diverse' images of Founding Fathers, Popes and Vikings: 'So woke it's unusable'


Google's highly touted AI chatbot Gemini has been slammed as "woke" after its image generator produced factually or historically inaccurate pictures, including an Asian woman as the Pope, black Vikings, female NHL players and "diverse" versions of America's Founding Fathers.

Gemini's bizarre results came after simple prompts such as "create an image of the Pope," which, instead of yielding a portrait of one of the 266 pontiffs throughout history (all of them white men), produced images of a Southeast Asian woman and a black man wearing the sacred papal vestments.

"New game: Try to get Google Gemini to make an image of a Caucasian male. I have not had success so far," wrote X user Frank J. Fleming, a Babylon Bee writer whose series of posts on the social media platform quickly went viral.

Google admitted that its image tool was "missing the mark." Google Gemini
Google launched Gemini's image generation tool last week. Google Gemini

When The Post asked Gemini to "create four representative images of the Pope" in its own test on Wednesday morning, the chatbot responded with images featuring "popes of different genders and ethnicities."

The results included what appeared to be a man wearing a mix of Native American and Catholic garb.

In another example, Gemini was asked to create an image of Vikings, the Scandinavian seafaring raiders who once terrorized Europe.

The chatbot's strange renditions of Vikings included a shirtless black man with rainbow feathers attached to his fur garb, a black female warrior, and an Asian man standing in the middle of what appears to be a desert.

Ian Miles Cheong, a right-wing social media influencer who frequently interacts with Elon Musk, described Gemini as "absurdly woke."

Popular pollster and FiveThirtyEight founder Nate Silver also joined the fray.

Silver's request for Gemini to "make 4 representative images of NHL hockey players" resulted in a picture that included a female player, even though the league is all male.

"Well, I assumed people were exaggerating these things, but this was the first image request I tried with Gemini," Silver wrote.

Journalist Michael Tracey asked Gemini to make representative images of "the Founding Fathers in 1789."

Gemini responded with images "featuring diverse individuals embodying the spirit" of the Founding Fathers, including a picture of black and Native American individuals signing what appeared to be a copy of the US Constitution.

Another showed a black man in a white wig wearing an army uniform.

When asked why it had deviated from the original prompt, Gemini replied that it "aimed to provide a more accurate and comprehensive representation of the historical context" of the period.

Another query for “depiction of Girl with a Pearl Earring” led to modified versions of Johannes Vermeer’s famous 1665 oil painting showing what Gemini described as “diverse races and genders.”

Google said it is aware of the criticism and is actively working on a fix.

"We're working to improve these kinds of depictions immediately," Jack Krawczyk, senior director of product management for Gemini Experiences at Google, told The Post.

"Gemini's AI image generation does generate a wide range of people. And that's generally a good thing because people around the world use it. But it's missing the mark here."

Google added the image generation feature when it rebranded its experimental chatbot "Bard" as "Gemini" and released an updated version of the product last week.

In one case, Gemini created "diverse" representations of the Pope. Google Gemini
Critics have accused Google Gemini of valuing diversity at the expense of historical or factual accuracy. Google Gemini

The strange behavior could provide more fodder for AI critics who fear that chatbots will contribute to the spread of misinformation online.

Google has long said its AI tools are experimental and prone to "hallucinations," in which they spit out false or inaccurate information in response to user prompts.

In one instance last October, Google's chatbot claimed that Israel and Hamas had reached a ceasefire agreement when no such agreement had been reached.
