
2024-05-31 13:12:32

'Data Void', 'Information Gap': Google Explains AI Search's Odd Results

Google says it has made over a dozen technical improvements to AI Overviews.

A week after screenshots of Google’s artificial intelligence search tool – AI Overviews – giving inaccurate responses made the rounds on social media, Google has issued an explanation, citing a “data void” or “information gap” as the reason behind the blunders.

A couple of weeks ago, Google rolled out its experimental AI search feature in the US. However, it soon faced scrutiny after people shared the search tool’s bizarre responses on social media, including responses telling people to eat rocks and to mix glue into pizza cheese.

In a blog post, Google acknowledged that “some odd, inaccurate or unhelpful AI Overviews certainly did show up”, while also debunking allegedly dangerous responses on topics such as leaving dogs in cars and smoking while pregnant, saying that those AI Overviews “never appeared”. Google also called out a large number of faked screenshots being shared online, calling them “obvious” and “silly”.

The tech giant said it saw “nonsensical new searches, seemingly aimed at producing erroneous results” and added that one area it needed to improve was the interpretation of nonsensical queries and satirical content.

Citing a question from the viral screenshots as an example – “How many rocks should I eat?” – Google said that practically no one had asked that question before the screenshots went viral. Because little high-quality web content seriously contemplates that question, a “data void” or “information gap” was created, Google said. Explaining why the search tool produced a bizarre response to this particular query, Google said, “there is satirical content on this topic … that also happened to be republished on a geological software provider’s website. So when someone put that question into Search, an AI Overview appeared that faithfully linked to one of the only websites that tackled the question.”

In the blog post, Liz Reid, VP and Head of Google Search, also explained how AI Overviews work and what sets them apart from chatbots and other LLM products. She said that AI Overviews are “powered by a customized language model, which is integrated with our core web ranking systems, and are designed to carry out traditional ‘search’ tasks, like identifying relevant, high-quality results from Google’s index.” That is why AI Overviews don’t just provide text output but also surface relevant links that back up the results and allow people to explore further.

“This means that AI Overviews generally don’t ‘hallucinate’ or make things up in the ways that other LLM products might,” she said.
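Reid’s description amounts to a retrieval-grounded pipeline: rank web results first, then generate a summary tied to those results. Below is a minimal sketch of that idea; the names (answer_with_sources, quality_score and so on) are hypothetical illustrations, not Google’s actual systems, and the join on snippets stands in for the language-model step.

```python
# A minimal sketch of a retrieval-grounded answer pipeline.
# All names here are hypothetical illustrations, not Google's APIs.

from dataclasses import dataclass

@dataclass
class Result:
    url: str
    snippet: str
    quality_score: float  # stand-in for a web-ranking system's score

def answer_with_sources(results: list[Result], min_quality: float = 0.7):
    """Summarize only from high-quality pages, keeping the source links.

    Returns None when no sufficiently high-quality sources exist --
    the "data void" case, where showing nothing beats echoing fringe content.
    """
    sources = [r for r in results if r.quality_score >= min_quality]
    if not sources:
        return None
    top = sorted(sources, key=lambda r: r.quality_score, reverse=True)[:3]
    summary = " ".join(r.snippet for r in top)  # stand-in for the LLM step
    return {"summary": summary, "links": [r.url for r in top]}
```

The None branch mirrors the “data void” case Google described: when almost nothing serious exists on a topic, a system grounded in web results can only echo whatever fringe or satirical pages it finds.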

According to Google, when AI Overviews get something wrong, it is for reasons such as “misinterpreting queries, misinterpreting a nuance of language on the web, or not having a lot of great information available.”

After identifying patterns where its systems got things wrong, the company said it has made over a dozen technical improvements, such as:

  • Google has built better detection mechanisms for nonsensical queries and limited the inclusion of satirical and humorous content.
  • Google has updated its systems to limit the use of user-generated content in responses that could offer misleading advice.
  • Google has added triggering restrictions for queries where AI Overviews were not proving helpful (a rough sketch of such a gate follows this list).
  • Google will no longer show AI Overviews for hard news topics, where “freshness and factuality” are crucial.
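As a rough illustration of what a triggering restriction might look like, here is a minimal sketch of a gate that decides whether to show an overview at all. The classifier scores, thresholds and topic list are hypothetical assumptions, not Google’s actual systems:

```python
# Hypothetical sketch of "triggering restrictions": deciding whether
# to show an AI Overview for a query at all. Scores and thresholds
# are illustrative assumptions, not Google's actual systems.

HARD_NEWS_TOPICS = {"elections", "breaking news", "public health alerts"}

def should_show_overview(nonsense_score: float,  # hypothetical classifier, 0..1
                         satire_score: float,    # hypothetical classifier, 0..1
                         topic: str) -> bool:
    if topic in HARD_NEWS_TOPICS:   # freshness and factuality matter most here
        return False
    if nonsense_score > 0.8:        # likely a nonsensical or adversarial query
        return False
    if satire_score > 0.8:          # available sources are likely satire or humor
        return False
    return True                      # safe to attempt a grounded overview
```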

Apart from these improvements, Google said it found a content policy violation on “less than one in every 7 million unique queries” on which AI Overviews appeared and has taken action against those violations.
