Liz Reid, Google’s head of Search, admitted that the company’s search engine had returned some “odd, inaccurate or unhelpful AI Overviews” after the feature rolled out publicly in the US. The executive posted an explanation of Google’s more bizarre AI-generated responses in a blog post, where she also announced that the company has implemented safeguards to help the new feature return more accurate and less meme-worthy results.
Reid defended Google, pointing out that some of the more egregious AI Overview responses circulating online, such as a claim that it is safe to leave a dog in a car, are faked. The viral screenshot answering the question “How many rocks should I eat?” was real, but she said Google produced an answer only because a site had published satirical content on the topic. “Before these screenshots went viral, practically no one asked Google this question,” she explained, so the company’s AI drew on that site.
The Google vice president also confirmed that AI Overviews told people to use glue to make cheese stick to pizza, based on content taken from a forum. She said forums typically provide “authentic, first-hand information,” but can also lead to “less-than-helpful advice.” Reid didn’t address other responses circulating around AI Overviews, however. According to The Washington Post, the technology also told users that Barack Obama was a Muslim and that people should drink plenty of urine to help pass kidney stones.
Reid said the company tested the feature extensively before launch, but “there’s nothing quite like having millions of people using the feature with many novel searches.” By looking at examples of its AI’s responses over the past two weeks, Google was apparently able to identify patterns where the technology got things wrong. It then put protections in place based on those findings, starting with tweaking its AI to better detect humor and satirical content. It also updated its systems to limit the use of user-generated content in Overviews, such as social media and forum posts, which can give people misleading or even harmful advice. Additionally, it “added triggering restrictions for queries where AI Overviews were not proving as helpful” and stopped showing AI-generated responses for certain health topics.