Introduction
This is Part Three of my three-part series that I began last week.
- Part 1: Our work in the AI search community, and the major shifts that affected our customers and audience.
- Part 2: New product features and innovation from the DemandSphere team in 2025
- Part 3 (this post): What we’re seeing in 2026, and what we’re focused on next.
I don’t usually do predictions. I’m just not interested in trying my hand at them, because the timing of how things play out is always complicated by a lot of factors.
However, I am always interested in trends. And I think it’s worthwhile to periodically take a step back and notice what we’re noticing.
Each one could be its own article, but we’ll stick with a high-level overview for the purposes of this post.
TL;DR: Key Takeaways for 2026:
- Everything is AI search now, but it shows up in three main experiences: SERPs, LLM search, and agentic search
- “Zero-click” does not mean “no clicks.” The SERP is changing, but clicks and CTR can still grow when you understand what’s happening and measure correctly
- Search is becoming a brand channel as much as a performance channel, which forces a bigger measurement mindset than traffic alone
- Agentic search will have a real-world adoption test in 2026, but most people won’t build workflows themselves. New products will abstract that away
- AI fatigue is real and will grow, especially as trust degrades and people question what’s real
- AI has major externalities: energy, grid strain, data centers, materials like silver, broader economic distortions
- Compute constraints and the token economy matter. GPU and memory pressure will ripple into costs and product tradeoffs
- CXL (Compute Express Link) is a real infrastructure trend to watch, especially as memory bottlenecks become unavoidable
- Google is a behemoth in AI, and people are waking up to how structurally advantaged they are if they execute
- LLM tracking will consolidate. There are too many companies chasing a market that isn’t big enough for everyone
- Engineering workflows will mature past vibe coding, especially through more structured systems like Claude Code plus disciplined ops
- There’s civilian AI and unconstrained AI. They’re not the same thing, and the difference matters long-term
The three major AI search experiences
In the DemandSphere worldview, everything at this point is AI search.
So the question becomes: what are the major search experiences users are actually interacting with?
The three big ones we pay attention to are:
- SERPs
- LLM or generative AI search experiences
- Agentic search
There are others, but those are the main three that matter right now.
SERPs are not “traditional search”
People call SERPs “traditional search,” but I think that’s a misnomer.
Google SERPs have been an AI search experience for more than 10 years, and they’re becoming even more so with AI Overviews and entirely new experiences like AI Mode.
So when we talk about the SERP, we’re not talking about something old. We’re talking about an increasingly AI-driven interface that’s evolving quickly.
SERPs are moving deeper into zero-click and interactive features
The biggest shift over the last year and a half has been the continued increase in zero-click SERP features, especially generative features like:
- AI Overviews
- AI Mode
- Broader adoption of Gemini
In 2026, we’re going to see generative-style interactions expand across many more SERP features, and each individual feature will keep becoming more interactive.
And that means businesses need to understand how their brand is reflected and interacted with inside those features.
Measurement gets trickier
This makes measurement more difficult. Simplistic measurements like raw traffic don’t tell the full story. Revenue is still the ultimate measure if you treat search purely as a performance channel, but more people are waking up to something important:
Search is also a brand channel.
So you need to look at:
- visibility and appearance inside SERP features
- brand impact
- digital product experience
- behavioral analytics that reflect engagement inside those surfaces
That’s one of the reasons we’re seeing SERP data expanding far beyond traditional SEO.
Understanding the true value of search and search data
I posted about this recently, but I’ll say it again here.
The idea that SERP data is only useful for SEO is short-sighted.
As we saw in 2025, SERP data was a critical input to generative AI products such as ChatGPT and Perplexity.
We’re seeing teams well beyond marketing adopting SERP data for all kinds of reasons, and we’re expanding into those use cases too.
A lot of initial innovation comes from the SEO world, but this data is expanding into engineering units, competitive intelligence, and product teams.
That’s not the death of SEO. That’s SEO’s data footprint becoming useful at higher layers of the business.
I’ve mentioned Mike King a few times recently, and one of the best points he makes is that SEOs are the underpaid workhorses of the entire internet.
This is 100% true. We all intuitively sense it, and I think that’s the reason we obsess so much over labels.
One easy way to deal with this is to expand into existing categories such as Product and Strategy, which have large budgets and need a lot of the expertise our industry has developed. I don’t think simply relabeling what we do as AEO is going to solve the problem.
SEO is a fantastic discipline with many serious players but that doesn’t mean we can’t also be generalists doing great work in other areas.
On top of that, the SEO industry is massive and continuing to grow.
The C-suite is aware of it now, and The Wall Street Journal recently highlighted that the search industry will be worth $171B USD by 2030. That is massive.
“Zero-Click” does not mean “zero clicks”
There’s a common confusion.
People assume that the presence of zero-click features means zero clicks are occurring.
That is false.
We have clients that have grown CTR and grown overall clicks over the last year and a half. We see it constantly.
What “zero-click” actually means is that more SERP features can resolve intent without a click. But it does not mean there are no clicks, and it absolutely does not mean there are no brand interactions.
If you only look at search through a click lens, you’re missing a lot.
SERP Features are a model and predictor of agentic search workflows
This is where things get really interesting.
In our view, what people are calling agentic search closely resembles the functionality embedded in many zero-click SERP features.
You can take a lot of SERP features and treat them as abstractions for what an agentic workflow could look like.
Some map better than others.
Then the practical question becomes:
What is the real use case for agentic search, and where does it actually work?
Let’s use two examples.
Example: Shopping
Everyone has this narrative right now:
Websites are going away. Five years from now there won’t be any websites. Agents will buy everything for you. Your AI will know you and order new socks for you when you’re running low.
I think this is a very simplistic version of how things will work, and I’m just not buying it.
There are some consumer categories where it could happen, but even the “smart refrigerator reorders eggs” idea hasn’t really materialized in a meaningful way yet.
Where it gets interesting: B2B commerce
Where things get much more interesting is B2B commerce.
I think that’s where a lot of the money is going to be made, with an important caveat.
A lot of B2B commerce workflows were already digitally orchestrated more than 20 years ago through ERP platforms, e-procurement, and digital logistics systems. A lot of decision-making was already automated, but it was done through rule-based systems, heuristics, and controlled workflows.
Even today, I’m not convinced you would want to hand core purchasing and logistics decisions to LLM inference without heavy monitoring and observability. There’s still too much unpredictability.
So yes, agentic commerce will happen, but it’s not going to look like the simplistic “no websites, no humans” story.
Let’s look at a second example: travel.
We can also use this example to talk about how agentic search could work.
Agentic search will be tested in the real world in 2026
I think 2026 is going to be the first big real-world test of what it means to perform well inside agentic search.
A good example where I do think agentic workflows make sense is travel.
Example: hotel booking and travel
As an executive I travel a lot, and I can definitely see an agentic workflow where I type in Slack:
“I’m traveling to London. I need airfare, hotel, and travel arrangements.”
Then an automation kicks off in n8n and searches within pre-approved providers, using my preferences, booking rules, and constraints. That could save a lot of time, especially for repeat trips. Restaurant reservations are another one that makes sense.
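To make that concrete, here’s a minimal Python sketch of what the orchestration behind that kind of workflow could look like. Everything in it is a hypothetical stand-in: the provider names, quotes, and preference fields represent whatever your real integrations (n8n nodes calling travel APIs, for instance) would supply.

```python
from dataclasses import dataclass, field

@dataclass
class TripRequest:
    destination: str
    nights: int
    max_hotel_rate: float               # booking rule: nightly cap in USD
    preferred_airlines: list = field(default_factory=list)

# Pre-approved providers with illustrative quotes (normally fetched live
# through your booking integrations).
HOTEL_QUOTES = {"Approved Hotel A": 240.0, "Approved Hotel B": 310.0}
FLIGHT_QUOTES = {"Airline X": 980.0, "Airline Y": 1120.0}

def plan_trip(req: TripRequest) -> dict:
    """Search only pre-approved providers, honoring preferences and caps."""
    hotels = {h: r for h, r in HOTEL_QUOTES.items() if r <= req.max_hotel_rate}
    flights = {a: p for a, p in FLIGHT_QUOTES.items()
               if not req.preferred_airlines or a in req.preferred_airlines}
    if not hotels or not flights:
        return {"status": "needs_human_review"}   # escalate, don't improvise
    hotel = min(hotels, key=hotels.get)           # cheapest compliant hotel
    flight = min(flights, key=flights.get)        # cheapest allowed fare
    return {
        "status": "proposed",
        "hotel": hotel, "nightly_rate": hotels[hotel],
        "flight": flight, "fare": flights[flight],
        "total": hotels[hotel] * req.nights + flights[flight],
    }

print(plan_trip(TripRequest("London", nights=3, max_hotel_rate=300.0,
                            preferred_airlines=["Airline X"])))
```

Note the fallback to human review when nothing fits the constraints. That’s the same monitoring-and-observability point from the B2B commerce example: don’t let the agent improvise.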
But the question is: what are the vast majority of normal people going to do?
How many people are going to build an automation workflow in n8n to book travel?
Almost nobody.
The technical barriers are too high. Most people are not going to architect their own agentic workflows. If this becomes mainstream, it will be because new companies build consumer-facing products that abstract all of that away.
The end state is likely:
- users get agentic convenience
- but the workflows are built and hosted by product companies, not by individuals
This means we will see (and are already seeing) new categories of SaaS businesses emerge. Or Google will just do more of this for you automatically.
AI fatigue is coming
We are in the middle of the AI hype cycle. Possibly approaching the peak. If you’ve been in tech for any amount of time, you know every big trend follows the hype cycle. AI and LLMs will too.
We will eventually hit a point where the hype dies down and we enter the trough of disillusionment. And ironically, that’s when the real work starts and the real productivity gains get realized. The question is: where are we right now?
I still think we’re at or near the peak, but I’m starting to see people express more disdain toward the hype. And underneath that is the big question:
What are the net benefits for humans, at a civilizational level? What have we really gained?
In search, I think the gains are real. Retrieval has gotten better. We learn new topics faster. We can make connections faster.
But it comes at a cost.
The AI slop problem and cognitive atrophy
Are we outsourcing more and more of our thinking to systems that aren’t actually thinking?
This is a big risk, and it could push us toward a form of Idiocracy.
On an individual level, it would behoove all of us to force our minds back into disciplined thinking: long-form reading, note-taking, journaling by hand, and other practices that keep the human mind sharp.
Trust erosion
Another part of AI fatigue is cultural. People are tired and skeptical of everything they see. The first question under a video is now:
“Is this AI?”
That is dangerous on a civilizational level. You don’t want an entire society where people can’t believe anything they see anymore.
The externalities of AI will become a bigger public conversation
I lived in the Bay Area for a long time. The mindset there is often techno-optimistic: progress is inherently good, and skepticism is treated like caveman thinking. One of the few ways to get that worldview to engage with criticism is through externalities.
So let’s talk about some of the externalities that come with AI.
Data centers and energy strain
In the Midwest, where I live now, one of the biggest issues is the continuous expansion of data centers.
Water politics are becoming a big deal. We have fundamental limitations not just in energy generation, but also in storage and distribution. The grid is old and fragile in the US.
We’re starting to see weird situations where households see energy bills rise, and the word on the street is that it’s driven by data center demand. True or not, that is the perception and it is a growing one.
Precious metals and supply constraints
Another externality is the precious metals market, especially silver.
Energy storage and solar scaling depend on materials, and there’s a growing conversation about supply constraints and market dislocations between paper and physical markets.
AI as the “savior” of the economy
AI is being treated like the latest savior for an economy that feels shaky for a lot of people.
That leads to the biggest question of externalities:
Does it make sense to put a civilization’s GDP growth engine primarily on AI?
And, as a corollary:
Are we going to let AI become the machine that takes over every aspect of life?
A lot of people would not be happy if that decision gets made for them.
GPUs, memory shortages, and the token economy
We all know there is pressure on GPU supply and pricing. But the bigger story is memory bottlenecks, especially HBM (High Bandwidth Memory). HBM is the memory stacked on the same package as the GPU, and in most cases you can’t upgrade it. You get what you get.
HBM constraints ripple into:
- GPU availability
- inference throughput
- cost per token
- context window tradeoffs
- product economics
There is a token economy, and cost per token is correlated with GPU pricing and scarcity, which in turn is tied to HBM supply bottlenecks.
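To illustrate that chain with back-of-envelope numbers (all of which are assumptions, not quotes):

```python
# Back-of-envelope cost-per-token model. Every input is an illustrative
# assumption; the point is the chain of dependencies, not the numbers.

gpu_hour_usd = 3.50           # assumed hourly cost of one GPU
tokens_per_second = 2_500     # assumed inference throughput per GPU

tokens_per_hour = tokens_per_second * 3_600
cost_per_m_tokens = gpu_hour_usd / tokens_per_hour * 1_000_000
print(f"${cost_per_m_tokens:.3f} per million tokens")   # ~$0.389

# If HBM scarcity pushes GPU pricing up 40% while throughput stays flat,
# cost per token rises by the same 40%.
print(f"${cost_per_m_tokens * 1.4:.3f} after a 40% GPU price increase")
```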
This will matter a lot in 2026.
CXL is one of the real infrastructure trends to watch
This gets geeky, but I think we’ll hear a lot more about CXL in 2026.
CXL stands for Compute Express Link. It runs over PCIe and is designed to help with bottlenecks around memory scaling, utilization, and flexibility across CPUs, GPUs, and accelerators.
The short version is:
CXL enables more flexible memory access and sharing, potentially moving toward shared memory pools depending on the mode and architecture.
In AI workloads, one of the relevant contexts is the KV cache used in LLM inference. KV cache performance wants fast memory like HBM, but HBM is constrained. If you can offload colder parts of the cache and schedule memory differently using CXL, you can reduce pressure on HBM and improve efficiency.
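Here’s a toy sketch of that tiering idea, with a small hot tier standing in for HBM and a larger pool standing in for CXL-attached memory. This is purely conceptual; real offload happens inside inference runtimes and memory controllers, not application code.

```python
from collections import OrderedDict

class TieredKVCache:
    """Toy model of HBM-vs-CXL tiering for an LLM KV cache.
    Illustrative only: the two dicts stand in for a fast tier (HBM)
    and a larger, slower pool (CXL-attached memory)."""

    def __init__(self, hbm_capacity: int):
        self.hbm = OrderedDict()   # small, fast tier
        self.cxl = {}              # large, slower pool
        self.hbm_capacity = hbm_capacity

    def put(self, token_id: int, kv_block: bytes):
        self.hbm[token_id] = kv_block
        self.hbm.move_to_end(token_id)          # mark as most recently used
        while len(self.hbm) > self.hbm_capacity:
            cold_id, cold_block = self.hbm.popitem(last=False)
            self.cxl[cold_id] = cold_block      # spill coldest block to CXL

    def get(self, token_id: int) -> bytes:
        if token_id in self.hbm:                # fast path
            self.hbm.move_to_end(token_id)
            return self.hbm[token_id]
        block = self.cxl.pop(token_id)          # slow path: promote to HBM
        self.put(token_id, block)
        return block

cache = TieredKVCache(hbm_capacity=2)
for t in range(4):
    cache.put(t, b"kv")
print(len(cache.hbm), "blocks in HBM,", len(cache.cxl), "spilled to CXL")
```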
CXL 3.0 is gaining ground, and with the memory shortages, the market incentives are tangible.
We’re going to see the big players such as Google, Meta, and others take it even more seriously as the shortages continue.
CXL also has some potentially very interesting advantages outside the AI conversation, in data engineering and large-scale data analytics, where it could further improve the capabilities of database management systems like Postgres (and others).
Google’s AI empire is a structural advantage
ChatGPT was a black swan event for Google because it showed normal people how far transformer models had evolved. As a result, Google took a branding hit initially. 2025 is when it started to become clear that Google was far from done in the AI race.
Google is a behemoth in AI, and they’ve been working on these problems for 15+ years. They have the ability to build systems at a level of infrastructure that OpenAI can’t even remotely approach at this stage in their existence.
Google owns:
- DeepMind and major research leadership
- Their own proprietary hardware (TPUs)
- Their own data centers
- Global distribution through Search, Android, YouTube, Gmail, Workspace
- GCP and Vertex AI
- Their massive ad network
- and the ability to vertically integrate everything they do
There is regulatory pressure, antitrust risk, and scrutiny on Google. However, as an AI platform company, Google is in the winning position.
The Context Window will keep forcing index innovation
The context window is not infinite.
We define “infinite” as: fully up to date in real time across the entire internet.
So we will continue to see innovations around index caching, retrieval augmentation, and bespoke indexing strategies.
Companies like OpenAI will keep investing in their own indexes out of necessity.
They won’t succeed at the scale Google can, but they can get big Pareto gains through smart caching and retrieval design.
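As a hedged sketch of what “smart caching” can mean in practice: cache retrieval results with a freshness budget that varies by query intent, so the index only needs to be live where liveness actually matters. The intent labels and TTLs here are illustrative assumptions.

```python
import time

# TTLs are illustrative: news-like queries need fresh results,
# evergreen queries can tolerate much staler cache entries.
TTL_BY_INTENT = {"news": 60, "product": 3_600, "evergreen": 86_400}

_cache = {}  # query -> (timestamp, results)

def fetch_from_index(query: str) -> list:
    """Stand-in for an expensive live index/retrieval call."""
    return [f"result for {query}"]

def retrieve(query: str, intent: str = "evergreen") -> list:
    now = time.time()
    cached = _cache.get(query)
    if cached and now - cached[0] < TTL_BY_INTENT[intent]:
        return cached[1]                 # serve from cache while fresh
    results = fetch_from_index(query)    # only hit the index when stale
    _cache[query] = (now, results)
    return results

print(retrieve("best crm for smb", intent="product"))
print(retrieve("best crm for smb", intent="product"))  # second call is cached
```

The design choice is the Pareto point: a small, well-chosen cache policy recovers most of the value of a much fresher, much more expensive index.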
The LLM tracking market will expand and then consolidate
We built a whole new product suite around tracking visibility in LLMs in 2025. That space is going to keep expanding.
But then it will consolidate.
There were something like 15 LLM tracking companies floating around BrightonSEO in San Diego. That’s not a market where everyone survives. There will be a handful of winners. A lot will fall by the wayside.
DemandSphere’s focus is to be a unified platform that brings a single view across all major search experiences.
This is a challenging focus for any company and it takes a lot of experience to get right, both at the data and operational level. We’ve been at it for 16 years.
Engineering workflows are maturing past vibe coding
One of the biggest innovations this year has been AI coding tools like Cursor and Claude Code. I’m personally more favorably inclined toward Claude Code and related structured frameworks, especially because they’re built by real engineers with real engineering backgrounds.
That’s the major difference.
Vibe coding is great for prototyping. It’s not great for building and scaling production systems. Vibe coding breaks down at the deployment and operational phases.
But with proper workflows around using AI in actual software engineering, you can gain meaningful leverage (a minimal sketch follows this list):
- disciplined CI/CD
- automated testing
- granular feature tracking tied to tickets and specs
- clear integration models
- operational controls
- observability and monitoring
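As one small example of what that discipline can look like, here’s a hypothetical pre-merge gate: AI-generated changes don’t land unless they pass the same checks human-written code would face. The specific tools and commands are illustrative; substitute your own project’s tooling.

```python
import subprocess
import sys

# Hypothetical pre-merge gate for AI-generated changes. Each command is
# an illustrative example of a check you might require before merging.
CHECKS = [
    ["ruff", "check", "."],           # lint
    ["mypy", "src/"],                 # static types
    ["pytest", "--maxfail=1", "-q"],  # automated tests
]

def gate() -> int:
    for cmd in CHECKS:
        print("running:", " ".join(cmd))
        if subprocess.run(cmd).returncode != 0:
            print("gate failed; do not merge")
            return 1
    print("all checks passed; ready for human review")
    return 0

if __name__ == "__main__":
    sys.exit(gate())
```

The point isn’t the tooling; it’s that AI-written code earns its way into production through the same gates as everything else.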
We’re experimenting with this at DemandSphere, but we’re doing it carefully. We’re not just letting LLMs generate a pile of unreviewed code in production systems. We’ll see more companies get this right in 2026.
Civilian AI vs Unconstrained AI
Part of the AI fatigue question, and part of the civilizational value question, depends on what kind of AI we’re talking about.
Most of the AI people interact with is what I’d call civilian AI:
- ChatGPT
- Gemini
- Claude
- and the public models that ship with guardrails
These systems have to deal with:
- brand risk
- regulation
- consumer trust
- content safety policies
- margin realities
But there’s another category, which I loosely call (in my own mind) “military AI,” though Unconstrained AI is probably a better term. (Also, if you spend enough time around military types, the first thing you’ll learn is that there are very large differences in technical acumen and infrastructure across different units.)
Unconstrained systems operate under different constraints:
- mission success is the only goal that matters
- operational security
- survivability
- adversarial adaptation
- strategic advantage
- tolerating weirdness if it yields advantage
- operating in classified domains where reputational risk is irrelevant
These systems would likely integrate into:
- IoT networks
- continuous sensor fusion
- multi-domain situational awareness
- adversary modeling
- operational planning and logistics
They may also continuously learn from real-world inputs, instead of running frozen models with controlled updates.
That’s not something civilian AI can openly do without massive liability.
So when we talk about AI, we need to be honest that we’re usually talking about the public, constrained version.
That difference is going to matter more in the coming years. You’ll know it when you see it.
Capital markets and macroeconomic weirdness
This isn’t the main focus of what we write about at DemandSphere, but it’s a weird time economically.
It’s worth paying attention to both business and personal risk:
- What does your portfolio look like?
- Have you built distributed risk mitigation strategies?
- Do you have access to liquidity if things get weird quickly?
I’m not any kind of financial advisor, nor is this financial advice. I just think it’s something to review at this point in time.
What we’re focused on at DemandSphere in 2026
We’re going to continue to push on:
- SERP analytics
- LLM analytics
- agentic search
Agentic search is the big one to evolve in 2026. We’ll be taking the time to dig into different workflows and figure out which ones are actually likely to be adopted in ways that matter for our customer base. This will tie directly back into the work we will continue to do on SERP and LLM analytics.
We’re also seeing adoption of our datasets getting pulled into use cases outside traditional SEO teams:
- engineering within large data and analytics teams
- competitive intelligence
- product teams
These expanded use cases are not a big surprise to me because, as I’ve said for years:
Done properly, product strategy, corporate strategy, and search strategy are inseparable.
If you’re a good company, you understand that at a foundational level.
Conclusion: pay attention to the bigger picture
Again, I’m not trying to predict the future. But the trends are worth tracking.
2026 is shaping up to be the year where a lot of these ideas go from theory to reality, especially around agentic search, measurement shifts, and the infrastructure constraints that underpin the entire AI economy.
This wraps up our three-part series; I hope you enjoyed it. I appreciate you reading these and following along. I’m always interested in hearing your feedback. Best wishes for a happy and healthy 2026!

