- Liz Reid, Google’s vice president of search, told employees at a recent meeting that the company “won’t always find everything” when it comes to AI bugs.
- Reid urged employees to continue pushing AI products, suggesting they can fix bugs as users and employees find them.
- Google recently came under fire after its AI Overview tool returned nonsensical answers to users.
Google’s new search chief said at an all-hands meeting last week that mistakes will happen as artificial intelligence becomes more integrated into internet search, but that the company should continue releasing products and let employees and users help find the problems.
“It’s important that we don’t hold back features just because there might be occasional problems, but as we find problems, we address them,” said Liz Reid, who was promoted to vice president of search in March, at the company-wide meeting, according to audio obtained by CNBC.
“I don’t think we should take away from this that we shouldn’t take risks,” Reid said. “We should take them thoughtfully. We should act with urgency. When we find new problems, we should do extensive testing, but we won’t always find everything, and that just means we respond.”
Reid’s comments come at a critical time for Google, which is scrambling to keep pace with OpenAI and Microsoft in generative AI. The market for chatbots and related AI tools has exploded since OpenAI introduced ChatGPT in late 2022, giving consumers a new way to find information online outside of traditional search.
Google’s rush to release new products and features has led to a string of embarrassing situations. Last month, the company rolled out AI Overview, which CEO Sundar Pichai called the biggest change to search in 25 years, to a limited audience, letting users see a summary of answers to their queries at the top of Google search. The company plans to roll out the feature worldwide.
Although Google had been working on AI Overview for more than a year, users quickly noticed that some queries returned nonsensical or inaccurate answers, with no way to opt out. Widely reported results included the false claim that Barack Obama was the first Muslim president of the United States, a suggestion that users try putting glue on pizza, and a recommendation to eat at least one small rock a day.
Google scrambled to fix the errors. Reid, a 21-year veteran of the company, published a blog post on May 30 that poked fun at the “troll-y” content some users were posting but acknowledged that the company had made more than a dozen technical improvements, including limiting user-generated content and health advice.
“You may have seen stories about putting glue on pizza and eating rocks,” Reid told employees at the all-hands meeting. Reid was introduced on stage by Prabhakar Raghavan, who heads Google’s knowledge and information organization.
A Google spokesperson said in an emailed statement that the “vast majority” of results are accurate and that the company found a policy violation in “fewer than one in 7 million unique queries” on which AI Overviews appeared.
“As we’ve said, we continue to refine when and how we display AI overviews to make them as useful as possible, including a number of technical updates to improve the quality of the response,” the spokesperson said.
The AI Overview errors followed a pattern.
Shortly before launching its AI chatbot Bard, now called Gemini, last year, Google executives grappled with the challenge posed by ChatGPT, which had gone viral. Jeff Dean, Google’s chief scientist and longtime head of artificial intelligence, said in December 2022 that the company had much more “reputational risk” and needed to move “more conservatively than a small startup,” since chatbots still had plenty of accuracy issues.
But Google pressed ahead with its chatbot and drew criticism from shareholders and employees for a “botched” launch that some said was rushed to coincide with a Microsoft announcement.
A year later, Google launched Gemini’s AI-powered image generation tool but had to pause the product after users surfaced historical inaccuracies and questionable responses that circulated widely on social media. Pichai sent a company-wide email at the time calling the errors “unacceptable” and saying they “showed bias.”
Red teaming
Reid’s stance suggests that Google has become more willing to accept mistakes.
“At the scale of the web, with billions of queries coming in every day, there are bound to be some oddities and errors,” she wrote in her recent blog post.
Reid said some of the user queries aimed at AI Overview were intentionally adversarial, and that many of the worst results being circulated were fake.
“People actually created templates for how to get social engagement by making fake AI Overviews, so that’s something additional we’re thinking about,” Reid said.
She said the company does “a lot of testing ahead of time,” as well as “red teaming,” an effort to find vulnerabilities in a technology before outsiders can discover them.
“No matter how much red teaming we do, we will need to do more,” Reid said.
By rolling out AI products, Reid said, teams were able to catch problems such as “data voids,” which occur when the web doesn’t have enough data to correctly answer a particular query. Teams also got better at identifying user comments on a given web page, detecting satire, and correcting misspelled queries.
“We don’t just have to understand the quality of the site or the page, we have to understand every passage of a page,” Reid said of the challenges the company faces.
Reid thanked employees across several teams who worked on the fixes and emphasized the importance of employee feedback, directing staff to an internal link for reporting bugs.
“Any time you see problems, they can be small, they can be big,” she said. “Please file them.”