I really enjoyed this interview with Sundar Pichai by John Collison of Stripe.
Here are the five most interesting things I learned.
1. Search will still exist in the future, but much of it will be agentic
Sundar was asked if agents would replace Search. He said:
“If I fast forward, a lot of the informational search queries will be agentic in Search. You will be performing tasks. You will have many threads running.”
He also said that Search is going to change so that we think of it as an agent manager:
“It’s still evolving. Search will be an agent manager where you do a lot of things. I think in some ways, you know, I’m using Antigravity today, and you know, you have a bunch of agents doing things, and I can see Search doing versions of those things, and you’re doing a bunch of things.”
He said that people already do deep research in AI Mode, and that performing long-running tasks will soon be the norm. He also said that the form factor of devices will change.
2. Google uses Antigravity internally
Boy, do I love Google’s IDE and Agent Manager, Antigravity. I’ve built so many things with it, including my own RSS feed reader, a screenshot and annotation tool, workflows for publishing things I write in a Google Doc to my WordPress site, and a bunch of tools for doing agent things with Google Search Console and data from Google Analytics 4. While I think Claude Cowork and Claude Code are amazing, I really prefer using Antigravity.
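To give a flavor of the kind of small tool an agent can scaffold in minutes, here is a minimal sketch of an RSS reader. This is my illustration, not the author's actual tool: it parses an inline RSS 2.0 sample with Python's standard library (a real reader would fetch the feed URL with `urllib` first) and lists each item's title and link.

```python
# Minimal RSS 2.0 reader sketch: parse a feed and list (title, link) pairs.
import xml.etree.ElementTree as ET

# Inline sample feed so the sketch runs without network access.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0">
  <channel>
    <title>Example Blog</title>
    <item><title>First post</title><link>https://example.com/1</link></item>
    <item><title>Second post</title><link>https://example.com/2</link></item>
  </channel>
</rss>"""

def read_feed(xml_text: str) -> list[tuple[str, str]]:
    """Return (title, link) for every <item> in an RSS 2.0 document."""
    root = ET.fromstring(xml_text)
    return [
        (item.findtext("title", ""), item.findtext("link", ""))
        for item in root.iter("item")
    ]

for title, link in read_feed(SAMPLE_FEED):
    print(f"{title} -> {link}")
```

The point is less the code itself than how little of it you need: an agent can generate, run, and iterate on something like this faster than you can describe it.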
It turns out that Google is making good use of Antigravity internally. Except they don’t call it Antigravity. They call it “Jet Ski”.
Sundar said Google DeepMind and Google software engineers use it:
“I can see groups, and in particular I would say GDM and some SWE groups, are really changing their workflows. They’re using it (for some strange reason, we have a different name internally than externally for the same product), but it’s Jet Ski internally, which is Antigravity. You live on it, you live in an agent manager world. You have workflows and you work in this new way.”
He also uses it himself:
“I would ask questions in Antigravity, in our internal version of Antigravity: ‘Hey, we launched this thing. What did people think? Tell me the five worst things people are talking about.’ And I type that, and it brings that back. Has my life become easier? Yes. In the past, I had to spend a lot more time trying to figure this out. Now an AI agent is helping me on this journey.”
Also, last week, the Google research team started using Antigravity.
“Last week we rolled it out (Antigravity) to the research team. We constantly emphasize this. In a large organization, I think change management is a difficult aspect of releasing this technology, which can be easy for a small company. You can pivot quickly.”
If you’re interested in learning how to use Antigravity, I’ve created a comprehensive guide showing how it works and how I use it not only to code, but to create entire agent workflows that I actually use in my daily work. It is available in the paid section of my community, The Search Bar. And next Thursday, the Search Bar Pro team is hosting an event where we’re going to split into two teams, a Claude Code team and an Antigravity team, and see who can create the best SEO tool.
I know it’s a bit overwhelming trying to work something new into your workflows. But I deeply believe that those who learn how to use Antigravity today will have a big advantage as AI improves and things really start to take off.
3. Robotics is growing rapidly
Sundar admitted that Google was previously too early to robotics. AI has turned out to be the missing ingredient for ideas conceived 10 to 15 years ago. Gemini Robotics models have reached state-of-the-art status in spatial reasoning, and Google has partnered with Boston Dynamics, Agile, and a few other companies.
The most interesting part for me was the discussion of Wing, Google’s drone delivery service.
“I think we’re growing Wing and within a reasonable time frame, 40 million Americans will have access to a Wing delivery service. I’m not talking about years or anything like that.”
When asked if Google was going to do more to build hardware, Sundar said it would be important to have first-party hardware for robotics and AI.
“I think we should keep a very open mind. My lesson from Waymo, and on the AI side with TPUs, et cetera, is that you have to really push the curve, especially in areas where you have safety, regulation, everything. You want first-hand experience of the feedback cycle on products. I think having first-party hardware will end up being very important.”
4. OpenClaw-like agent systems are the future
There’s a reason why OpenClaw (originally Clawdbot) went viral a few weeks ago. I still haven’t implemented an OpenClaw system because I don’t think I know enough about security to lock such a system down properly.
When Sundar was asked if something OpenClaw-like was coming from Google, he said he thought it was the future.
“I think you want to give users the ability to do persistent, long-running tasks, reliably and securely. You have to think about things like identity, access, etc. But I think that’s the future. It’s the agent future. And bringing that to consumers is a bit of an exciting frontier that we’re looking at. It’s one of mine, too.
“I think in reality, consumer interfaces will have full coding models underneath, as well as the right harnesses, the right skills, and the ability to persist and execute securely somewhere, locally or in the cloud. All these primitives come together.
“Today, I have the impression that there is 1% of the planet, maybe not even 1%, maybe 0.1% of the planet, that is experiencing this future. They build things for themselves, but bringing them to mass adoption, yes, it’s a very exciting frontier, I think.”
As I write this, Google DeepMind just tweeted instructions for using its new local open model Gemma 4 with OpenClaw. A new way of communicating with our machines is beginning to emerge!
5. AI and AI agents will improve significantly in 2027
Sundar was asked when he thought agent systems would be able to fully function without any humans in the loop. He said twice that 2027 would likely be a big year.
“I really expect that, in some of these areas, ’27 will be a big inflection point for some things. Even for the ones that do it, this will be the workflow by which they produce it. Maybe for a while you’ll check it conventionally, but then you’ll flip, a crossover. But I expect ’27 to be a big year in which some of these changes will happen pretty profoundly.”
The interview ended with Sundar talking about what excited him the most. He mentioned that putting data centers in space was very exciting, but this last bit was extremely interesting.
“I literally spent some time yesterday with someone who was explaining some post-training improvements they were making. Listening to him, I’m like, ‘Oh, this is really going to be a good jump.’”
It seems to me that he is talking about agentic self-improvement.
We are currently learning how to have AI build and do things for us. I remember first learning to code with ChatGPT as a partner. It would give me code to paste into VS Code. Then I would run it and paste the errors back into ChatGPT. We went back and forth until something actually worked. I felt like I was useless in that process, just the copy-and-paste robot. Of course, current systems like Antigravity, Claude Code, and ChatGPT Codex now run the code, check for errors, and fix things without much human intervention.
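That manual back-and-forth can be sketched as a simple run-check-fix loop, which is essentially what agentic coding tools automate. In this toy sketch, the hypothetical `fix_code` function stands in for the model call (here it just repairs one known typo so the loop terminates); a real agent would send the error back to an LLM instead.

```python
# Toy sketch of the run-check-fix loop that agentic coding tools automate.
import os
import subprocess
import sys
import tempfile

def run_snippet(code: str) -> tuple[int, str]:
    """Run a Python snippet in a subprocess; return (exit code, stderr)."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path], capture_output=True, text=True
        )
        return proc.returncode, proc.stderr
    finally:
        os.unlink(path)

def fix_code(code: str, error: str) -> str:
    """Stand-in for pasting the error into a model: fix one known typo."""
    if "NameError" in error:
        return code.replace("pint(", "print(")
    return code

code = 'pint("hello")'  # deliberately broken
status, err = 1, ""
for attempt in range(3):
    status, err = run_snippet(code)
    if status == 0:
        break  # the snippet ran cleanly; stop iterating
    code = fix_code(code, err)

print("final code:", code)
```

The human in the old workflow was the wire between `run_snippet` and `fix_code`; tools like Antigravity close that loop themselves.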
It makes sense to me that the next step in this process is to have AI systems learn to improve their usefulness without us having to specifically nudge them to do so. I hope that when this happens, we will see an even more rapid advancement in the capabilities and usefulness of AI!
More resources:
Read Marie’s newsletter, AI News You Can Use. Subscribe now.
Featured image: isasoulart/Shutterstock