Google engineer explains ‘black box’ AI models in search


Nikola Todorovic, director of software engineering at Google Search, appeared on an episode of the Search Off the Record podcast to discuss the evolution of AI in Google Search.

Todorovic leads Google’s SafeSearch engineering team and has worked in the search organization for 15 years. He said machine learning was difficult to deploy at scale in search because complex models are harder to understand and correct than simpler systems.

He explained why Google couldn’t simply roll out ML systems across all of Search at once. Todorovic said these models can “operate as a kind of black box” because engineers don’t always understand what’s going on underneath.

This makes debugging more difficult when search systems change over time or when a model needs to be replaced, he said.

SafeSearch as a testing ground

Todorovic said SafeSearch was one of the first places Google could deploy AI models in search because the team could isolate those systems from the main ranking stack.

SafeSearch could run standalone image and video classifiers that produced a signal, such as how explicit a result was. If problems arose, engineers could iterate on the model without disrupting the rest of Search.
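The isolation Todorovic describes can be sketched in a few lines: a standalone classifier emits only a numeric signal, and the filtering step consumes that signal rather than the model itself, so the model can be retrained or swapped without touching the rest of the pipeline. All names and scores below are illustrative assumptions, not Google’s actual interfaces.

```python
# Hypothetical sketch: a standalone classifier produces an explicitness
# signal; downstream filtering depends only on the score, not the model.

def explicitness_score(image_id: str) -> float:
    # Stand-in for an image classifier; returns a score in [0, 1].
    # In practice this would call a model; here it is a lookup table.
    fake_scores = {"img_a": 0.05, "img_b": 0.92}
    return fake_scores.get(image_id, 0.0)

def filter_results(results: list[str], threshold: float = 0.5) -> list[str]:
    # The pipeline consumes only the numeric signal, so the classifier
    # behind explicitness_score() can be replaced independently.
    return [r for r in results if explicitness_score(r) < threshold]

print(filter_results(["img_a", "img_b"]))  # ['img_a']
```

This signal-based boundary is what lets engineers iterate on the model in isolation, as the article describes.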

Convolutional neural networks began improving image understanding about 12 years ago, he said, making SafeSearch a natural first use case for machine learning in search.

AI Overviews build on existing search systems

Todorovic described AI Overviews as a feature that “piggybacks” on Google’s existing retrieval and ranking systems. He said the retrieval and ranking underneath AI Overviews is still what he calls “old style, old school.”

The process can involve query fan-out, he said: Google identifies additional queries related to the original input, runs them in parallel, and consolidates the retrieved results into a single response.
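The fan-out step described above can be sketched as: expand the original query into related sub-queries, run them concurrently, then merge the result lists with deduplication. The `retrieve` backend and the document IDs below are made-up placeholders, not Google internals.

```python
from concurrent.futures import ThreadPoolExecutor

def retrieve(query: str) -> list[str]:
    # Placeholder search backend: maps a query to document IDs.
    fake_index = {
        "ev charging": ["doc1", "doc2"],
        "ev charging cost": ["doc2", "doc3"],
        "ev charging at home": ["doc4"],
    }
    return fake_index.get(query, [])

def fan_out(query: str, related: list[str]) -> list[str]:
    # Run the original query plus its related queries in parallel.
    queries = [query] + related
    with ThreadPoolExecutor() as pool:
        result_lists = list(pool.map(retrieve, queries))
    # Consolidate into a single deduplicated result set, preserving order.
    seen, merged = set(), []
    for results in result_lists:
        for doc in results:
            if doc not in seen:
                seen.add(doc)
                merged.append(doc)
    return merged

print(fan_out("ev charging", ["ev charging cost", "ev charging at home"]))
# ['doc1', 'doc2', 'doc3', 'doc4']
```

The merge step matters as much as the parallelism: overlapping sub-queries often return the same documents, so deduplication is what turns many result lists into one coherent response.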

AI Overviews then combine and summarize information from selected results, including source text, snippets, titles and other page context, he said.

AI Mode follows a similar model but operates with more independence, Todorovic said. He described it as still running on Search, while having a “bigger platform of its own.”

Why it matters

The “black box” quote gets attention, but the full context matters. Todorovic was explaining why machine learning was difficult to deploy at scale in search, not saying that Google lacks oversight of AI Overviews or AI Mode.

His comments add useful context to Google’s existing AI search documentation. Google has previously stated that AI Overviews and AI Mode can use query fan-out, running multiple related searches across subtopics and data sources to develop answers.

The point is not that AI Overviews are a “black box.” His comments reinforce that traditional search systems still matter for AI Overviews, even though Google layers the summarization and breakdown on top.

This keeps the fundamentals of traditional search relevant to AI capabilities, even as Google changes how results are summarized and presented.

Looking to the future

The distinction between AI Overviews and AI Mode is worth watching as Google expands AI Mode. Todorovic described AI Overviews as more closely tied to the rest of Search, while AI Mode has more of its own infrastructure.

This distinction may shape how Google frames visibility, measurement, and optimization advice as AI Mode grows.


