Google has responded to the uproar over bizarre answers from its AI Overviews feature, including a suggestion to put glue on pizza.

Liz Reid, Head of Google Search, addressed these issues in a recent blog post, outlining steps taken to enhance the feature's accuracy and reliability.

Google Responds to AI Overviews' Weird and Inaccurate Responses

Reid acknowledged instances where the search engine returned odd, inaccurate, or unhelpful AI-generated responses following the rollout of Google's AI Overviews feature to users across the United States. 

While defending Google's commitment to providing accurate information, Reid clarified that some responses circulating on social media, such as claims regarding the safety of leaving dogs in cars, were fabricated.

Reid also confirmed that AI Overviews suggested using glue to get cheese to stick to pizza, drawing from content found on a forum. She noted that while forums often offer authentic, firsthand information, they can also present less-than-helpful advice.

Despite Google's efforts, these instances have highlighted specific areas that need improvement, particularly in interpreting nonsensical queries and satirical content.

For the query "How many rocks should I eat?" AI Overviews answered, "You should eat at least one small rock per day," attributing the advice to UC Berkeley geologists. The claim originated from a 2021 article by the satirical site The Onion.

Before the query went viral, Reid said, "practically no one asked Google that question," leaving a "data void" or "information gap" on the topic. Because the satirical content had been republished on a geological software provider's website, AI Overviews drew its response from one of the only websites that discussed the question.

In some cases, AI Overviews responses have misinterpreted language on web pages, resulting in inaccurate information being presented to users. Google addressed these issues through algorithmic improvements and established processes to remove responses that violate content policies.


How Google is Improving AI Overviews

To enhance the accuracy and reliability of AI Overviews, Google has implemented a series of technical improvements. These include better detection mechanisms for nonsensical queries, limitations on including satire and humor content, and restrictions on user-generated content that could offer misleading advice. 

Furthermore, Google has restricted AI Overviews for certain queries where the feature was not proving helpful. Beyond these changes, Google said it continues to monitor feedback and external reports and has taken action in the rare instances where AI Overviews violated its content policies.

This includes overviews containing potentially harmful, obscene, or violative information. Google identified a content policy violation in less than one out of every seven million unique queries on which AI Overviews appeared.

"At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors," Reid wrote in the blog post. "We've learned a lot over the past 25 years about how to build and maintain a high-quality search experience, including how to learn from these errors to make Search better for everyone."

