Welcome to this week’s Pulse: updates affect how deep links appear in your snippets, how your robots.txt file is parsed, how agentic features surface in search, and how EU data-sharing rules apply to AI chatbots.
Here’s what matters to you and your work.
Google Lists Best Practices for “Learn More” Deep Links
Google has updated its snippets documentation with a new section on “Learn More” deep links in search results. The documentation lists three best practices that can increase the likelihood of these links appearing.
Highlights: Content should be immediately visible to a human when the page loads; hiding content behind expandable sections or tabbed interfaces can reduce the likelihood of these links appearing. Sections should use H2 or H3 headings. The snippet text should match the content that appears on the page, and content that loads only after scrolling or interaction can further reduce the likelihood.
Why it matters
The three practices are the first specific guidelines Google has published for this feature. Sites using expandable FAQ sections, tabbed product-detail areas, or scroll-triggered content for basic information may see fewer deep links in their snippets than sites that display the same content on page load.
The tips fit a pattern that Google has applied to other search features. Content that displays without user interaction is more likely to appear in an enhanced view.
Slobodan Manić, founder of No Hacks, made a related observation on LinkedIn:
“The documentation is framed around one snippet behavior (the ‘Learn More’ deep links in search results), but the language Google chose reads like a general preference. ‘Content immediately visible to a human’ is a structural instruction, not a tip specific to Learn More.”
Manić’s argument extends his April 16 IMHO interview with editor Shelley Walsh, where he argued that most websites are structurally flawed for AI agents. He contends that search bots and AI agents now face the same structural problem, and that the audit work for both is the same.
For existing pages, the audit question is whether key information sits inside a click-to-expand element. If a page already has a “Learn More” deep link for one section, that section’s structure is a guide to what works; replicating it in other sections on the same page can improve their chances too.
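As a rough illustration of that audit, here is a minimal sketch, not anything Google has published: it treats content inside <details> elements or inline display:none styles as hidden on load, walks H2/H3 sections, and runs against a hypothetical sample page. The selectors are assumptions, not documented signals.

```python
from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

# Hypothetical page: one section visible on load, one behind a click-to-expand.
SAMPLE_HTML = """
<article>
  <h2>Return policy</h2>
  <p>Items can be returned within 30 days of delivery.</p>

  <h2>Shipping options</h2>
  <details>
    <summary>See shipping options</summary>
    <p>Standard, express, and next-day shipping are available.</p>
  </details>
</article>
"""

def audit_sections(html: str) -> list[dict]:
    """Flag H2/H3 sections whose body content sits behind expandable markup."""
    soup = BeautifulSoup(html, "html.parser")
    report = []
    for heading in soup.find_all(["h2", "h3"]):
        hidden = False
        # Inspect the sibling tags belonging to this section (stop at the next heading).
        for sibling in heading.find_next_siblings(True):
            if sibling.name in ("h2", "h3"):
                break
            # <details> collapses content until clicked; inline display:none hides it outright.
            style = sibling.get("style", "").replace(" ", "")
            if sibling.name == "details" or "display:none" in style:
                hidden = True
        report.append({"section": heading.get_text(strip=True), "hidden_on_load": hidden})
    return report

for row in audit_sections(SAMPLE_HTML):
    status = "REVIEW: hidden behind expandable UI" if row["hidden_on_load"] else "ok: visible on load"
    print(f"{row['section']:<20} {status}")
```

A real audit would run against rendered HTML and extend the hidden-content heuristics (tab panels, ARIA attributes, lazy-loaded containers), but the structural question stays the same: is the section’s content present and visible when the page loads?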
Google describes the tips as best practices that can “increase the likelihood” of deep links appearing. That hedging matters: this is not a list of requirements, and meeting all three does not guarantee the links will appear.
Read our full coverage: Google Lists Best Practices for “Learn More” Deep Links
Google may expand its list of unsupported robots.txt rules
Google may add rules to its robots.txt documentation based on an analysis of real-world data collected via HTTP Archive. Gary Illyes and Martin Splitt described the project on the latest episode of the Search Off the Record podcast.
Highlights: The Google team analyzed the most frequently used unsupported rules in robots.txt files across the millions of URLs covered by HTTP Archive. Illyes said the team plans to document the 10 to 15 most-used unsupported rules beyond user-agent, allow, disallow, and sitemap. He also said the parser could expand the set of typos it accepts for disallow, though he did not commit to a timeline or name specific typos.
Why it matters
If Google documents more unsupported rules, sites using custom or third-party directives will have clearer guidance on what Google ignores.
Anyone managing a robots.txt file with rules beyond user-agent, allow, disallow, and sitemap should check which of those directives have never worked for Google. The HTTP Archive data is publicly queryable on BigQuery, so the same distribution Google used is available to anyone who wants to examine it.
Typo tolerance is the most speculative part. Illyes’ phrasing implies that the parser already accepts some misspellings of “disallow” and that others might be honored over time. Audit for spelling variants now and correct them, rather than assuming they will be honored.
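Both checks are easy to run locally. The sketch below is a rough illustration, not a statement about how Google’s parser behaves: it flags directives outside the four supported rules and uses fuzzy matching to surface likely misspellings of “disallow”. The sample file and the matching threshold are hypothetical.

```python
import difflib

# The four directives Google currently supports, per its documentation.
SUPPORTED = {"user-agent", "allow", "disallow", "sitemap"}

# Hypothetical robots.txt contents; in practice, fetch the live file.
ROBOTS_TXT = """\
User-agent: *
Disalow: /tmp/
Crawl-delay: 10
Noindex: /drafts/
Allow: /
Sitemap: https://example.com/sitemap.xml
"""

for line_no, line in enumerate(ROBOTS_TXT.splitlines(), start=1):
    line = line.split("#", 1)[0].strip()  # drop comments and surrounding whitespace
    if not line or ":" not in line:
        continue
    directive = line.split(":", 1)[0].strip().lower()
    if directive in SUPPORTED:
        continue
    # Separate probable typos of a supported rule from rules Google simply ignores.
    close = difflib.get_close_matches(directive, sorted(SUPPORTED), n=1, cutoff=0.8)
    if close:
        print(f"line {line_no}: '{directive}' looks like a typo of '{close[0]}'")
    else:
        print(f"line {line_no}: '{directive}' is unsupported")
```

Against the sample file, “Disalow” is flagged as a probable typo while crawl-delay and noindex are reported as unsupported, which mirrors the two situations discussed on the podcast.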
Read our full coverage: Google may expand its list of unsupported robots.txt rules
EU proposes that Google share search data with competitors and AI chatbots
The European Commission has sent preliminary findings proposing that Google share search data with competing search engines in the EU and EEA, including AI chatbots that qualify as online search engines under the DMA. The measures are not yet binding: a public consultation is open until May 1, and a final decision is expected by July 27.
Highlights: The proposal covers four categories of data to be shared under fair, reasonable, and non-discriminatory terms: ranking, query, click, and view data. Eligibility extends to AI chatbot providers that meet the DMA’s definition of an online search engine. If the Commission maintains that eligibility through the final decision, qualifying providers could gain access to anonymized Google search data under the proposed conditions.
Why it matters
This proposal explicitly extends data-sharing eligibility from search engines to AI chatbots under the DMA. If that eligibility survives the consultation, the regulatory category of “search engine” will include products that most search marketing work has treated as a separate category.
The consequences vary depending on where you operate. For sites focused on visibility in the EU/EEA, the change could expand the reach of anonymized search signals. AI products competing with Google in that market could use the data to improve their retrieval and ranking systems, which could in turn affect which content they cite.
Outside the EU, the direct regulatory effect is zero. The category definition is another matter: how the Commission draws the line between “AI chatbot” and “AI chatbot that qualifies as a search engine” will likely be contested in future proceedings.
The eligibility question is the thing to watch through May 1. If the Commission narrows the criteria for AI chatbots in response to consultation comments, the implications remain regulatory. If eligibility holds, it sets a meaningful precedent for how AI search is classified.
Read our full coverage: Google may have to share search data with competitors
Google adds new task-based search features
Google has introduced new search features that continue its shift toward task completion. Users can now track price drops for individual hotels via a new toggle in search, and Google is adding the ability to launch AI agents directly from AI Mode.
Highlights: Hotel price tracking is available worldwide via a toggle in the search bar; when the price drops for a tracked hotel, Google sends an email alert. Agent launches from AI Mode let users start AI-managed tasks inside the search interface. Rose Yao, Google Search product manager, posted about the features on X.
Why it matters
Each task-based feature moves a process that previously started on another site onto Google’s own surface. Hotel price tracking has existed at the city level for months; the expansion to individual hotels adds a signal that users can set inside Google rather than on hotel or aggregator sites.
Direct-booking visibility now depends on participation in Google’s ecosystem. Sites that rely on price-drop alerts as a trigger for return visits may see some of that engagement shift to Google’s tracking UI. For hotel brands, this raises the stakes for keeping individual hotel pages fully populated in Google Business Profile and hotel feeds.
On LinkedIn, Daniel Foley Carter connected the feature to a broader pattern:
“Google’s AI Overviews, AI Mode, and now built-in features for SERP + SITE are just making Google eat up more and more traffic opportunities. Everything Google told us not to do, it does itself. SPAM / LOW VALUE CONTENT – don’t summarize other people’s content – Google does it.”
The AI agent launch is more speculative. Google has not published detailed documentation on what types of tasks users can delegate or how sources are cited. What the feature confirms is that agentic search, which Sundar Pichai has described as “search as an agent manager,” is arriving in search gradually rather than as a one-time launch.
Read Roger Montti’s full coverage: Google adds new task-based search features
Theme of the week: the rules are being written
Each story this week takes something that was previously implicit or in flux and writes it down.
Google has announced plans to expand what its robots.txt documentation covers. The company has listed specific practices that can increase the likelihood of “Learn More” deep links appearing. The European Commission has proposed measures that extend data-sharing eligibility from search engines to AI chatbots under the DMA. And the task-based features Sundar Pichai described in interviews are shipping as buttons in the search bar.
For your day-to-day work, the ground is firming up. Fewer questions are judgment calls: what is allowed and what isn’t, what Google supports, and what counts as a search engine to a regulator are all being written down. That works in your favor when it means clearer audit criteria, and against you when “we weren’t sure” is no longer a defensible answer.
Featured image: (Photographer)/Shutterstock