Google to revamp search with generative AI tools

The company is also trying to be thoughtful about how to set expectations with users about the capabilities of generative AI


For months, Google has been under pressure to reinvent its core search business and respond to the rise of artificial intelligence (AI) programmes that can generate content. On Wednesday, Google began slowly introducing its answer to the public.

At its annual developer conference, Google unveiled a version of its search engine that uses large language models, AI tools that are trained on enormous volumes of text to answer users’ queries conversationally. The update will exist only in a new, experimental space dubbed “Search Labs,” which users who sign up for a wait list will be able to access. Certain features may eventually graduate to the main search engine; others will be scrapped entirely.

The approach insulates Google from some of the ethical concerns surrounding generative AI, while also buying the company more time to assess the impact on its search advertising business, which is the main revenue driver for its parent company Alphabet.

It’s a more risk-averse stance than the one being taken by Microsoft, which has already infused its search engine, Bing, with technology from the startup OpenAI, and made the tools widely available to the public.

With an estimated 93 per cent of the worldwide search market, Google has more to lose. The company is also trying to be thoughtful about how to set expectations with users about the capabilities of generative AI, Prabhakar Raghavan, a senior vice president at Google, said in an interview ahead of the conference.

“One of the defining questions for me is, how do you delineate what these things can and cannot do in a way that’s understandable to users, to businesses? Or to the ecosystem at large?” Raghavan said. Either way, investors were optimistic: Alphabet shares rose 4.4 per cent to $111.75, outperforming the Nasdaq.

Since the launch last year of OpenAI’s popular chatbot, ChatGPT, people have increasingly been experimenting with AI tools to offload time-consuming tasks, such as party planning or product research.

Raghavan described the example of mapping out a trip to Paris, which requires cross-checking the hours of operation of local establishments — a task that could easily trip up an AI system. Asked whether Google’s generative AI search product, known as SGE — short for “search generative experience” — could field such a project, Raghavan responded: “Someday.”

If the AI’s answers come easily and sound confident, people may believe them without checking facts. “We want it to be trustworthy for users to do that application,” Raghavan said. “Today, we are not there.”

Google’s presentation suggested there are still many ways the company can use generative AI to help consumers in the meantime. In Search Labs, Google is focusing on helping users accomplish tasks that are “clunky” in the current version of the search engine, requiring multiple queries, said Liz Reid, a vice president of search.

“We think generative AI is sort of the next evolution of search, and it can help supercharge search,” Reid said in an interview.

There are limits to how far Google will go. For highly sensitive queries, such as suicide, Google’s generative AI tools will not engage, Reid said. (The company will instead display information about suicide prevention.) Searches about other topics, such as health and finance, will come with a disclaimer that the responses should not be used as advice.

“We know that even with all of this, we will still make mistakes,” Reid said. “That’s why we are launching it in Search Labs and trying to be very clear in the messaging and on the product that this is still very much experimental.”

Google has previously tested new search features by introducing them to randomised groups of users.

By rolling out Search Labs, Google hopes to engage in a more robust dialogue with users about generative AI, Reid said. Users will be able to give a thumbs up or a thumbs down to provide feedback on results, and the company will review potential policy violations using “a combination of humans and automation,” Reid added.

Despite these precautions, some experts on tech ethics have expressed concern that it’s not enough for companies to start by rolling out nascent technologies to smaller groups of users — they must take other steps to mitigate harmful effects to people, especially those in underrepresented and minority groups, which can be particularly vulnerable when new technology emerges.

Tech giants like Google that invest heavily in research have long faced questions about when to share their findings with the academic community and when to focus on weaving technological advances into products. The company is currently “fine-tuning” its approach, especially in the fast-moving field of generative AI, Raghavan said.

“I fully expect we will continue to contribute to the computer science community, but we will probably continue to look even harder at moving things into product rapidly,” Raghavan said.

Asked whether generative AI is here to stay, Raghavan cited its “black box” nature — referring to the idea that people who build these AI models don’t always understand why they spit out one answer versus another. “These things are very opaque,” he said.

“So there is conceivably a black swan event lurking in there that we haven’t seen yet, that’s so terrible, you know, we all take a deep breath.”

As the technology moves forward, Google will be vigilant to ensure that the benefits to users from generative AI outweigh the harms, Raghavan said. “For now, studies, user tests are showing that they’re gaining value from this,” he said. “And that, to us, is a North Star.”

