Elizabeth Reid has over the past year led Google’s push to reinvent its core product: search. About a year ago her team launched the company’s biggest revamp in years with AI Overviews, in which generative artificial intelligence models summarise search results.
The feature began tentatively, with the AI summaries prompting ridicule when they advised users that eating rocks can be healthy and told others to glue cheese to pizza. Since then, Reid says, the company has worked to balance accuracy and usefulness, and is seeing people change the way they seek information online.
In this conversation with the Financial Times’ AI correspondent Melissa Heikkilä, Reid talks about the future of AI-powered search and how it is changing the business model of the internet.
Melissa Heikkilä: You graduated from Dartmouth College, which is where the definition of AI was first conceived in 1956. Tell me about your journey to AI. Has Dartmouth influenced you in any way?
Elizabeth Reid: Dartmouth definitely got me into computer science. I did very little of it in high school. I went to a small school in Massachusetts whose idea of computer classes at the time was typing and learning to use Microsoft Excel and Word. I did a little programming on my graphing calculator because they told me I couldn’t take this class unless I knew how to do that.
And I went to Dartmouth, thinking I was going to go into physics. I was good at maths. I did an internship in my freshman summer, in materials science, and, in theory, it was really interesting [but] I wanted something more applied. So, I thought I would go into engineering physics.
I took [a computer science class] at the same time I was taking thermodynamics and physics. And I spent time doing extra credits for computer science, [rather than] focusing as much as I probably should have on my physics. I talked to Professor [Thomas] Cormen, a longtime Dartmouth professor, and he convinced me to switch into computer science.
Then I needed a job. It was 2003. Dartmouth had a good computer science department, but it was not Stanford or MIT or Carnegie Mellon. [Cormen] had a previous student who was at Google, and he helped me. He contacted her, and she helped me get an interview. So, I landed at Google in the New York office. There were about 10 engineers there and maybe 500 or 1,000 total employees [at the corporate headquarters] in Mountain View in California.
I started in search on a project that became local search. At some point, that moved to the geo-map space, and I worked on engineering problems there. We sometimes equate AI with generative AI but, really, AI isn’t just about generative AI. And so, across time, in both local search and some of the maps, we were using AI in [many] different areas.
I moved to [Google] Search a few years ago and was talking to the engineers about what they were doing [and] what was possible. The technology then had a tipping point, and we were suddenly able to do a lot more with it. It was pretty exciting.
MH: You’re working on one of the most concrete applications of AI. And it’s been just under a year since AI Overviews was launched. Could you tell me a bit about the past year and how it’s gone?
ER: It’s been a great launch. We’re seeing some of the strongest growth in [Google] Search, with people issuing more queries.
It unlocks the difficulty of asking a question. It allows you to ask questions you couldn’t ask before because the information wasn’t on a single webpage. It was scattered across the web, and you’d have to have pulled it together. Something we’ve seen over and over again with [Google] Search is that human curiosity is boundless. People have a lot of questions.
A three-year-old will go: “Why, why, why, why, why?” But, as an adult, you don’t assume the person you ask the question knows the answer. You don’t know if you have enough time. You don’t know if it’s worth the effort. And so you don’t ask those questions. But if you lower the [barrier] to asking the question, then people just come. They have a lot more questions and they ask anything these days.

MH: And how else are you seeing AI changing search?
ER: Besides seeing people ask more questions, they ask longer questions. And the way you can think about a longer question is: do you have to take the actual question you have and turn it into the strictest “keywordese” or can you ask what’s on your mind? With AI Overviews, people start asking these longer queries that express more of the constraints, more of the angles that they see.
We see it resonate in particular with younger users. They are often the first to push expectations about what should be possible and to adapt to new technology. They ask more questions, longer questions, and more nuanced questions.
AI Overviews is the start of thinking about how to transform search. How can you think about transforming the whole page, organising the information in a way that’s easier, even finding what the right web links are that you want to go and pursue? We see a lot of growth in multi-modality: people asking these text-plus-image questions. So, it’s not just, “What is this image?” or “Here’s my question”, but combining them.
MH: With ChatGPT, we’ve seen some evidence that people are changing the way they behave. Are you thinking about adapting to more chat-based search functions?
ER: We’re not looking in that direction in the same way: to the extent that somebody thinks of a chatbot as talking to something that feels personified, you can ask it, “How was your day?”, and expect a response.
We think of search as more of an information-focused question. We are starting to experiment more with the idea that people sometimes have a question that has multiple parts plus a follow-up. And if you have a follow-up question, you don’t want to start over from scratch.
But it’s more designed as: how can you further your journey without repeating it the same way you might to a human — rather than designing it in the sense of: do you have a friend to chat with and ask them their views? It’s much more about organising information.
MH: There’s been lots of criticism about search being broken, people having to add “Reddit” as [a] search keyword or, when they search, they’re getting hallucinations, or incorrect or misleading results, as answers. Or the AI answers are telling them to eat rocks or glue. How are you working to fix that?
ER: I don’t think adding the word “Reddit” is a bad thing. Some people want more discussions. Others may want it from more mainstream or authoritative sources. So, the ability to express more of what you want can be a win. But what we have seen is that people, especially younger users, want to hear directly from others who have experienced something.
And so, it’s not just, “here’s a site that’s done some research”, but “did you go there yourself?” Did you use the product yourself, or did you read about it and write some summary on it? We’ve been doing a lot of work to figure out how we bring more human voices on.
It is the case with generative AI that the technology sometimes makes mistakes. We saw, with eating rocks, that it was an extremely small use case. Despite our extensive work and testing, it was not the type of query we had seen previously.
People didn’t ask us, “How many rocks should I eat a day?” People use new technology in ways that you hadn’t imagined. We took it seriously. It didn’t matter that it was a small incident.
We put a lot of effort in our models into paying attention to factuality. That’s one place where we make a different choice for search, compared with a chatbot. You typically have to choose between how factual a model is versus how creative or how conversational it is.
If you’re building a product that’s designed to be conversational, you might weigh it one way. But in the case of [Google] Search, we have weighted factuality and put extensive work into that. We have continued to raise the bar on that for the past several months.
MH: Language models have a technical flaw where it’s easy for outsiders to inject unwanted prompts that then influence what the overviews say, or to trigger hallucinations. Are these models fit for purpose for something like search, which requires accuracy? And how do you think about these security weaknesses and how to fix them?
ER: There’s a difference between “can you hack the prompts”, versus “are they going to make occasional mistakes”? Those are different things. From a security perspective, on the prompting, everyone is working to figure out how to avoid jailbreaking, or finding loopholes that make AI models bypass their guardrails. We’re doing that. The way search is designed, in terms of how it uses the web, it tends not to have that problem in the same way that a traditional chatbot might have.
But in terms of, are they ready to be used, one of the things that we do rely on for search is the use of high-quality information from the internet. It’s a different use, in that it’s not so much the model generating everything and drawing on a little bit of the web, but putting the web at the centre of the design. Our models are trained not just to try to be highly accurate, but to try to base their answers on information on the web.
That helps in two ways. One, it increases the accuracy and, two, we can then tell you where to look for further confirmation.
AI Overviews aren’t designed to be a standalone product. They are designed to get you started and then help you dive deeper. And so, when it’s important, the idea is that you get some context on where to check and then you can choose to double-check more on some of them.
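What Reid describes here, an answer grounded in retrieved web documents that also points back to them, resembles what practitioners call retrieval-augmented generation. A minimal sketch, where `retrieve` and `generate` are hypothetical stand-ins for a web retriever and a language model, not Google’s actual systems:

```python
# Sketch of grounding a generated answer in retrieved web documents
# (retrieval-augmented generation). Every function name here is a
# hypothetical placeholder, not Google's implementation.

def answer_with_sources(query, retrieve, generate, top_k=3):
    """Retrieve up to top_k documents, then generate an answer
    constrained to those sources, returning the answer plus the
    URLs a reader can check for further confirmation."""
    docs = retrieve(query)[:top_k]  # hypothetical web retriever
    context = "\n\n".join(f"[{d['url']}]\n{d['text']}" for d in docs)
    prompt = (
        "Answer the question using ONLY the sources below, "
        "and list the URLs you used.\n\n"
        f"Sources:\n{context}\n\nQuestion: {query}"
    )
    answer = generate(prompt)  # hypothetical language-model call
    return answer, [d["url"] for d in docs]
```

Returning the source URLs alongside the answer is what lets the interface say “according to the Financial Times” and link out, rather than presenting the model’s text as free-standing.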
There are lots of questions people ask where, if you are just relying on webpages, it can be difficult. So, tech support is one of the areas where people rely on AI Overviews. The tech documents are not necessarily extensive online. Maybe there’s a forum that talks about your problem, but maybe not. Or the forum talks about your problem, but you’ve already tried those two or three things.
We don’t show AI Overviews on every query. In order to show AI Overviews, we have to believe the response is high quality [and] adds net value over the rest of the search results. If we think the rest of the search results page provides the answer, then we don’t feel an obligation to respond.

MH: What kind of behaviour change are you seeing in people double-checking sources? Are people doing that, or how often do they rely on the AI Overviews?
ER: We do see people dive in, often to continue. That can be because they want to confirm data, but often it’s not just because they want to confirm. They come in with an initial question and then they read something, and it sparks the next question. Or they really want to hear a more in-depth perspective now they have a sense of the topic and what parts they’re interested in, and they can zero in. We see them engage.
We see the clicks are of higher quality, because people are not clicking on a webpage, realising it wasn’t what they wanted and immediately bailing. So, they spend more time on those sites. We also see a greater diversity of websites coming up.
And that might be surprising. But if your question is long, finding a webpage that covers every part of your question is hard, and sometimes what you get is a very surface-level webpage. Technically it talks about every one of your words, but you didn’t get much substance. With generative AI, we can go and look for web pages that talk about specific subsets. So, we’ll take that query, and we’ll turn it into multiple queries.
And then we’ll say, a-ha, OK, you’re comparing two items that are not traditionally compared. Let me find a webpage about one item. Let me find a webpage about another. And then, you can expose websites that go in more depth on part of a topic, instead of just a webpage that is surface level about the whole topic.
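The fan-out Reid describes, taking one long comparison query and issuing multiple focused sub-queries, can be sketched roughly as follows. The naive “X vs Y” split and the function names are illustrative assumptions, not Google’s actual pipeline:

```python
# Illustrative sketch of query "fan-out": decompose a comparison
# question into one focused sub-query per item, then search each.
# The simple "X vs Y" split below is an assumption for illustration.

import re

def fan_out(query):
    """Split 'A vs B' style comparisons into per-item sub-queries."""
    parts = re.split(r"\s+(?:vs\.?|versus|compared to)\s+", query,
                     flags=re.I)
    if len(parts) < 2:
        return [query]  # nothing to decompose
    return [item.strip() for item in parts]

def search_all(query, search):
    """Run one search per sub-query and merge the results, surfacing
    pages that cover each part of the topic in depth."""
    results = []
    for sub in fan_out(query):
        results.extend(search(sub))  # hypothetical search backend
    return results
```

The point of the decomposition is the one Reid makes: a page that mentions every keyword in a long query is often shallow, whereas per-item sub-queries can each retrieve a page with real depth.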
MH: Some people have criticised language models in search, not for the “eat rocks” mistakes but for these subtle, inaccurate mistakes that people don’t pick up if they’re not experts in the field. How concerned are you about that?
ER: Besides trying to place a high bar on quality, we take extra care on things we call “your money or your life”: questions of finance, questions on medical topics. We try to be thoughtful about whether we should give a response at all or, where we think we can give you something to get started, recommend you talk to a doctor, dig in more and find out details.
And that’s an important thing to do, because in many of those cases, you’d prefer that they seek out a medical professional. But there are many people who don’t necessarily have access to a medical professional. So, if you said: I’m not going to answer anything, even some basics about a rash, and you’re a stressed mother and it’s the middle of the night, and you can’t reach someone in some part of the world, do you not help them?
We try to be clear that the technology is more experimental. [With] a lot of questions people ask, though, the stakes aren’t as high. If you’re trying to get tech support on figuring out how to fix your phone, hopefully we give you the right instructions, but if we don’t give you exactly the right instructions on how to turn something on, you usually figure that out and then you can do more searching. But often we can get you there faster.
MH: Going back to what you said about information and different publishers getting access, publishers have criticised AI search for dropping traffic and ad revenue. How are you avoiding this or taking this into account?
ER: We do believe, in [Google] Search, that people continuing to hear from other people is essential and at the heart of our product. That’s important, not just for a healthy ecosystem, but for users. Lots of times you want a quick answer, but often you want to hear from other people.
I often use a fashion example: most of the people I know who want to delegate their choices to a bot for fashion are the set of people who weren’t trying to spend any time on fashion before.
The people who are following influencers and creators and others, they’re not ready to go there. They want to hear from the people they trust. So, we spend a lot of time thinking about, how do we elevate the right content? How do we present it? We run different experiments. We design it to not just show links, but think about where it could add additional links within the response. Not just at the end, but maybe we can say, “according to the Financial Times” and put a link to the Financial Times.
What you see with something like AI Overviews, when you bring the friction down for users, is that people search more, and that opens up new opportunities for websites, for creators, for publishers. And they get higher-quality clicks.
MH: Is there a risk that you end up cannibalising your own product? Generative search is expensive, and this is changing the whole ad revenue model.
ER: There are a lot of opportunities for ads. We show them both above and below in AI Overviews, but also within. Ads are relevant whenever users are going to make a choice that has some commercial aspect.
When a query has predominantly commercial intent, like we think you want to buy something, then we might often show ads. But sometimes we think you probably don’t want to [see] ads, and so we don’t want to give everyone ads. But some people might want to buy something. If [you search] “how to clean a stain out of the couch” and the first thing we show is a bunch of ads, you’re like, “Whoa, I just wanted some advice.”
But if we’re giving you ideas and then we say, “if you’re having trouble you might want to consider a stain-remover product”, and then we give you some ads for stain-remover products, it feels natural and in context. And so, there are new opportunities.
MH: Are we going to see a paid version of [Google] Search? And what would that include?
ER: Never say never about what the future will hold. Ensuring that search in general, the essence of it, is available for free, to allow access to information, will be important. There may be some aspects for people who have subscriptions in the future. But the core of search we want to have available for everyone for free, yes.
MH: What does the future of search look like? Are you thinking about other modalities or agents?
ER: One thing that’s really at the heart of it is this idea that we want to make search effortless. That assumes multimodalities, because humans are wired not just to type or text or use voice. They see things. They use different ways of expressing what they want.
It will get more personalised over time, not just in the results, but in how you learn best. Are you somebody who learns well with videos, or someone who prefers text?
So, that ability for the technology to meet you where you are — can we make it as easy as possible for you to learn and explore the world?
The agents question is about how you make use of tools. People use the word “agents” to mean different things. But the sense of “you can use tools to ask hard questions” will continue. [Google] Search will remain an information product at heart, but sometimes information is hard and there’s a lot of work.
MH: Have your search habits changed in this AI era?
ER: I personally ask more questions. So, one example: I work with people who are into cricket. They would say something, and it would make no sense. But I didn’t have enough time to go and do an hour-long tutorial on cricket.
I would start asking the question and finally get the answer. So, for instance, there’s this thing in cricket where, if rain cuts the game short, the scoring uses an algorithm to decide how many runs you might have been able to score based on where the game stands.
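The rain rule Reid is describing is cricket’s Duckworth-Lewis-Stern (DLS) method, which scales a target by the “resource percentage” (overs and wickets remaining) each side had available. A simplified sketch of the core par-score step; real DLS looks these percentages up in officially published tables, and the numbers below are illustrative stand-ins:

```python
# Simplified par-score step of cricket's Duckworth-Lewis-Stern method.
# Real DLS reads "resource percentages" from published tables indexed
# by overs remaining and wickets lost; the values in the example below
# are illustrative stand-ins, not the official figures.

def dls_par_score(team1_score, team1_resources, team2_resources):
    """Scale team 1's total by the ratio of the two sides' resource
    percentages to get team 2's par score in a rain-shortened game."""
    return team1_score * (team2_resources / team1_resources)

# Team 1 scored 250 using 100% of its resources; rain leaves team 2
# with only 75% of its resources, so team 2's par score is lower.
```

So if team 1 made 250 with full resources and team 2 loses a quarter of its resources to rain, its par score becomes 250 × 0.75 = 187.5, giving a revised target of 188.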
I ask questions about a book my son is reading and is talking about. I haven’t read the book, so I’ll ask a question about it. I’d love to be able to read all of the books at the rate he does. I don’t have the time to do that. So, instead of thinking about the question and having it pop out, I find myself asking the question and learning about new things.
This transcript has been edited for brevity and clarity