Google’s AI Overviews cite their own mistakes to make new ones

Google began rolling out AI Overviews in May, and the feature quickly went viral for serious mistakes, most notably an example where Google suggested users put glue on their pizza. Now, AI Overviews has started using articles about that viral incident to… keep telling people to put glue on pizza.

From day one, Google has been clear that AI in Search, now called “AI Overviews,” may surface information that is not completely accurate. That became obvious as the feature rolled out widely, with overviews presenting sarcastic or satirical content as reliable fact. The most viral example was Google telling users to put glue on their pizza to help the cheese stay in place, a recommendation pulled from a decade-old Reddit comment that was clearly a joke.

Google has since defended the feature, saying the vast majority of overviews are accurate and noting that the most viral errors occurred on very rare queries. AI Overviews have appeared far less frequently since those public failures, in part because Google pledged to take action against inaccurate or dangerous information. That included no longer showing AI summaries on queries that triggered the glue-on-pizza recommendation.

Developer Colin McMillen discovered on Bluesky that Google is still making this recommendation, just in a new way. Searching “how much glue to add to pizza” returned an AI Overview with fresh information on the topic, this time sourced from the very news articles that had covered Google’s viral bug. The Verge confirmed the same results yesterday (with a featured snippet even citing the information), but Google appears to have since disabled them, as we couldn’t get an AI Overview on that query or others like it.

Image: Colin McMillen on Bluesky

Given Google’s explanation that rare queries are the ones producing incorrect information, it makes sense this would happen on an even rarer query like this one.

But should I do it?

That’s the important question, and fortunately, Google does seem to keep investigating and squashing these bugs. But the key problem this situation exposes is that AI Overviews will happily extract information from contexts that clearly frame it as incorrect or satirical. When Google launched this effort, we took issue with the potential for Google’s AI to pull information from AI-generated articles and websites in the first place, but it appears that human-written context on the web will be just as difficult for AI to sort out.

