Search isn’t dead—it’s just different now

How to get more information about the information we’re getting.

The information we can find online—and how we find it—is in the middle of a tectonic shift.

Following recent core updates to some of the world’s most powerful search engines, AI summaries and AI-generated “featured snippets” have taken over the results you see when you type a query into a search bar and hit return.

Most Americans now see these AI-generated summaries first and foremost in their search results. Rather than scrolling down to the traditional, SEO-ranked listings that whisk them away to a website, research study, or article, users are nudged to take the aggregated summary as the answer they are looking for. But do these AI summaries and AI-powered searches actually deliver the most accurate, up-to-date, and helpful information, without requiring users to even click a link? Can they really be trusted?

According to an October 2025 survey, about half of Americans who have encountered AI summaries have at least some trust in the information they provide. Nearly as many, 46%, say they have little or no trust in that information.

That skepticism can be viewed as healthy. Journalists who have dug into the accuracy of AI-generated search summaries have found that, fairly often, they are not very accurate. One writer, in an informal experiment, found that one in every five AI overviews gave an “inaccurate or misleading answer.”

But there are tools that can lift the curtain on how AI overviews and featured snippets are generated, with what information and from where, effectively holding the models to account and offering a clearer picture of whether the answers being given are appropriate and accurate.

SerpApi, a service that provides application programming interfaces (APIs) for programmatically finding and accessing data from search engine results pages, can be used to verify the accuracy of AI overviews by retrieving the cited source material and checking whether the summary conveys it faithfully.
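As a rough sketch of what that looks like in practice, the Python snippet below fetches a results page from SerpApi’s JSON endpoint and pulls out the AI overview’s text and cited links. The `ai_overview`, `text_blocks`, and `references` keys reflect one possible response shape and are assumptions here; check SerpApi’s current documentation for the exact schema. The sample dictionary at the bottom is illustrative fake data, not a real API response.

```python
import json
from urllib.request import urlopen
from urllib.parse import urlencode

def fetch_serp(query: str, api_key: str) -> dict:
    # Live call to SerpApi's JSON endpoint (requires a SerpApi API key).
    params = urlencode({"engine": "google", "q": query, "api_key": api_key})
    with urlopen(f"https://serpapi.com/search.json?{params}") as resp:
        return json.load(resp)

def extract_overview(results: dict) -> tuple[str, list[str]]:
    # Pull the overview text and the links it cites out of a results payload.
    # The key names below are assumptions about the response format.
    overview = results.get("ai_overview", {})
    text = " ".join(b.get("snippet", "") for b in overview.get("text_blocks", []))
    links = [r.get("link", "") for r in overview.get("references", [])]
    return text, links

# Illustrative fake response, standing in for a real fetch_serp() call:
sample = {"ai_overview": {"text_blocks": [{"snippet": "Example claim."}],
                          "references": [{"link": "https://example.com/study"}]}}
text, sources = extract_overview(sample)
```

With the overview text and its cited URLs separated out, each claim can be compared against the page it supposedly came from.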

Using a tool like SerpApi can bring more transparency to AI-generated search results by identifying the sources the LLM prefers, showing the pages it favors quoting, and tracking attribution: monitoring how often a specific website or source is (or isn’t) cited in the links the AI includes in its summaries. This can also expose cracks in the timeliness of AI overviews, since an LLM’s built-in knowledge is often limited to its training cutoff date. A 2024 study used SerpApi to access real-time search results, allowing the system to provide up-to-date, neutral context for recent articles in a way that AI overviews alone couldn’t.
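Attribution tracking of this kind reduces to a tally: across many overview payloads, count how often each domain shows up among the cited links. A minimal sketch, again assuming a `references` list with `link` fields (the payloads here are illustrative fakes):

```python
from collections import Counter
from urllib.parse import urlparse

def citation_counts(overviews: list[dict]) -> Counter:
    # Tally how often each domain is cited across a batch of
    # AI-overview payloads (payload shape is an assumption).
    counts = Counter()
    for ov in overviews:
        for ref in ov.get("references", []):
            domain = urlparse(ref.get("link", "")).netloc
            if domain:
                counts[domain] += 1
    return counts

# Illustrative fake payloads from two different queries:
batch = [
    {"references": [{"link": "https://nytimes.com/a"},
                    {"link": "https://who.int/b"}]},
    {"references": [{"link": "https://nytimes.com/c"}]},
]
counts = citation_counts(batch)  # nytimes.com cited twice, who.int once
```

Run over thousands of queries, a tally like this reveals which outlets an AI overview leans on and which it ignores.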

Using SerpApi, researchers can also monitor how updates and retraining change what AI summaries return, running automated, high-volume queries over long periods to track changes in summary wording, content structure, and source preferences. This helps build a picture of the long-range evolution of the system’s behavior and how it responds to algorithm updates.
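One simple way to track that evolution is to snapshot the summary returned for a fixed query on a schedule and diff successive snapshots. A sketch using Python’s standard `difflib`, with two made-up snapshots standing in for captures taken on different days:

```python
import difflib

def snapshot_diff(old_text: str, new_text: str) -> list[str]:
    # Word-level unified diff between two snapshots of the same
    # query's AI summary, captured at different times.
    return list(difflib.unified_diff(old_text.split(), new_text.split(),
                                     lineterm=""))

# Illustrative snapshots of one query's summary on two different days:
day1 = "The capital of Australia is Canberra founded in 1913"
day2 = "The capital of Australia is Canberra established in 1913"
changes = snapshot_diff(day1, day2)
```

Logging these diffs alongside the citation tallies above would show not just that a summary changed after an update, but which words and which sources changed.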

It’s not an approach reserved just for research and academia, though. Businesses and content creators can use data from SerpApi to learn more about how they, or their competitors, appear in AI overviews and adapt their own content to be more easily summarized or cited.

It seems dramatic to say, but it’s not: search as we know it is changing. According to recent research, search engine users who encounter an AI summary are less likely to click through to other websites and are more likely to end their browsing session entirely after visiting a results page with an AI summary. The information at the top of the page is shaping the way we think, learn, and decide what’s true. Tools that provide real-time access to AI-generated content and track its source data can bring clarity to how AI systems summarize information in search returns. That kind of access to information about information allows for deeper analysis of the AI’s behavior and its real-world impact.