Evolutionary Tree Insights

Shifting Center of Gravity: Navigating the Next Phases of AI

Written by Evolutionary Tree Capital Management | Feb 12, 2026

While We Are Believers in AI, We Seek to Avoid Areas of Hype

We are long-term AI bulls. We use AI tools internally and see the technology’s positive impact on our research process and across our portfolio companies. In coming decades, we believe AI will be a powerful driver of productivity and corporate profits, with benefits accruing to AI tech enablers and adopters alike. The benefits of AI will broaden out.

At the same time, we do see growing risk and hype in parts of the AI ecosystem, particularly where capital spending assumptions are extremely aggressive, business models are still unproven, and a single vendor or architecture is treated as uniquely indispensable.

AI cannot grow faster than the systems and economics that must support it. The AI ecosystem faces two hard constraints: physical limitations, specifically power availability and grid capacity, which are curbing new data center growth; and economic reality, as shareholders increasingly push back on spending plans and debt financing that lack clear returns on investment (ROI).

As a result, we are seeing a transition from a singular investment narrative (buy Nvidia and a handful of other AI hardware companies) to a more nuanced, economically disciplined environment, one that may shift the center of gravity toward a broader set of opportunities. We have positioned our portfolios to capitalize on three distinct shifts that, we believe, represent opportunities in the next phases of AI.

Shift 1: From Performance-at-Any-Cost (GPUs) to Cost Efficiency and Custom AI Chips

The first major shift is from a singular focus on raw model performance to a more balanced emphasis on cost efficiency, particularly at scale. In the early years following the launch of ChatGPT, the priority was to push the frontier of capabilities and speed to market, largely using high-cost Nvidia GPUs. As the costs of AI model development have soared, developers are increasingly optimizing performance per dollar rather than absolute performance alone. This is especially important as models move from training into large-scale production and inference, where cost per query quickly becomes a critical constraint. In that context, custom AI chips and ASICs, such as Google’s TPUs and Amazon’s internally developed training and inference chips (innovations from two holdings in our portfolios), are gaining traction as lower-cost alternatives to general-purpose GPUs, marking a structural transition from “whatever it takes” compute spending toward more economically sustainable AI infrastructure.

Evidence of this shift is plentiful, and recent news has highlighted this breakthrough change in the center of gravity: two of the most recent leading frontier models, Google’s Gemini and Anthropic’s Claude, were trained primarily on custom AI chips (Google’s TPUs and Amazon’s Trainium chips, respectively) rather than general-purpose Nvidia GPUs. These represent the first major examples of what some are calling the “great decoupling,” with custom AI chips eroding Nvidia’s near-monopoly and its control over the development of AI.

While custom AI chips are proving their capabilities in model training, their impact may be even greater in the inference segment of the market, where AI models are deployed for end users. The training market is large today, but the inference segment could end up multiple times its size over time. Because inference economics are measured as cost per query or per token, AI companies and hyperscalers that need to serve the lowest-cost inference are shifting toward cost- and energy-efficient custom AI chips and ASICs.

This is a clear negative for Nvidia GPUs, as Google’s TPUs, Amazon’s Inferentia chips, and internally developed custom chips at Meta, Microsoft, and OpenAI are set to take significant share of the inference market in coming quarters and years. The cost differences are stark, with estimates of a 30-50% reduction in inference costs for custom AI chips versus general-purpose GPUs. These lower-cost alternatives could put pressure on GPU pricing and margins as well. In our research, we hear from experts that all major AI leaders are seeking alternatives to high-priced GPUs. Looking under the hood, one can see why: Nvidia’s gross margins are so high, at 75% versus roughly 50% for alternatives, that the price differential is viewed as a “Nvidia tax” that is hard to justify in the growing inference market.
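As a rough illustration of why the margin gap matters, the sketch below uses hypothetical numbers (the unit manufacturing cost, token volumes, and per-token prices are our assumptions for illustration, not figures from this article) to show how a 75% versus 50% gross margin translates into selling price, and how a 30-50% lower inference cost compounds at production scale.

```python
# Illustrative only: every input below is a hypothetical assumption,
# not a reported figure.

def selling_price(unit_cost: float, gross_margin: float) -> float:
    """Price implied by a target gross margin, where margin = (price - cost) / price."""
    return unit_cost / (1.0 - gross_margin)

# Assume, purely for illustration, that both chips cost the same to manufacture.
unit_cost = 10_000.0                          # hypothetical $ per accelerator
gpu_price = selling_price(unit_cost, 0.75)    # ~75% gross margin -> $40,000
asic_price = selling_price(unit_cost, 0.50)   # ~50% gross margin -> $20,000
print(f"GPU price: ${gpu_price:,.0f} vs custom ASIC price: ${asic_price:,.0f}")
print(f"Implied price premium on the same silicon cost: {gpu_price / asic_price:.1f}x")

# Inference economics: a 30-50% lower cost per million tokens compounds at scale.
gpu_cost_per_m_tokens = 2.00                  # hypothetical $ per 1M tokens served
monthly_tokens_m = 1_000_000                  # hypothetical 1 trillion tokens per month
for savings in (0.30, 0.50):
    asic_cost = gpu_cost_per_m_tokens * (1 - savings)
    delta = (gpu_cost_per_m_tokens - asic_cost) * monthly_tokens_m
    print(f"{savings:.0%} cheaper inference -> ${delta:,.0f} saved per month")
```

The point is not the specific numbers, which are invented, but the mechanics: at similar manufacturing costs, a higher gross margin shows up directly in price, and modest per-token savings become material at production volumes.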

As a result, all of the hyperscalers and AI model leaders, which represent Nvidia’s largest customers, are pursuing their own internally developed AI accelerators and ASICs tailored to specific workloads. This does not mean GPUs go away. It does mean that hyperscalers have credible alternatives and growing bargaining power, and that the long-term economics of AI are unlikely to reside with a single chip architecture or supplier. We want to participate in AI in ways that recognize this shift toward a more diversified compute landscape: by leaning into the leading developers of custom AI chips, notably Alphabet/Google and Amazon, as well as platforms that benefit from lower-cost AI systems.

A further, often underappreciated, dimension of this shift is the impact of AI on power and energy infrastructure. The rapid expansion of AI data centers is placing significant strain on electric grids, leading to the emergence of “power as the new bottleneck” in AI deployment. As compute demand climbs, availability of reliable, affordable power becomes a limiting factor in where and how quickly new data centers can be built. This dynamic creates opportunities for companies involved in gas turbines, grid modernization, and other forms of energy infrastructure. The inclusion of Siemens Energy in certain portfolios is emblematic of this trend: it reflects a recognition that solving the power bottleneck is integral to AI growth. Lastly, while access to power is important, equally critical is energy efficiency. As it turns out, custom AI chips can be tuned to be substantially more energy efficient than general-purpose GPUs, another advantage of Google’s TPUs and Amazon’s Inferentia chips.

Shift 2: From One Dominant AI Model to Multiple Competing AI Models and Ecosystems

The second shift relates to AI models: the industry is moving away from the notion that one general-purpose model or chatbot can dominate all use cases—the “one model to rule them all” paradigm—to a more diversified, use-case-specific model landscape. Early in the cycle, the success of ChatGPT created an impression that a single, frontier-scale model might serve as the universal interface for both consumers and enterprises. That view is now being steadily displaced by a more pragmatic understanding: different users and applications have different requirements with respect to latency, cost, accuracy, and integration, and no single model optimizes for all of them simultaneously.

One key bifurcation is between enterprise/business models and consumer models. On the enterprise side, organizations are prioritizing accuracy, controllability, and the minimization of hallucinations, especially where AI outputs touch regulatory, financial, legal, or mission-critical domains. These use cases often benefit from optimized, domain-tuned models that are tightly coupled to proprietary data and workflows, rather than massive general-purpose models. In addition, enterprises place a high value on predictable, low costs, data governance, and the tailored application of AI models to specific use cases. Moreover, enterprises, and the platforms they use, increasingly “toggle” among a growing number of proprietary and open LLM models, rotating from model to model as need and cost dictate.
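As a minimal sketch of what this “toggling” looks like in practice, the routine below picks a model per workload based on a cost-versus-quality trade-off; the model names, prices, and quality scores are hypothetical, and real enterprise platforms expose far richer routing controls.

```python
from dataclasses import dataclass

@dataclass
class ModelOption:
    name: str
    cost_per_m_tokens: float   # hypothetical $ per 1M tokens
    quality_score: float       # hypothetical 0-1 benchmark score

# Hypothetical catalog mixing proprietary and open-weights models.
CATALOG = [
    ModelOption("frontier-proprietary", 15.00, 0.95),
    ModelOption("mid-tier-proprietary",  3.00, 0.88),
    ModelOption("open-weights-small",    0.40, 0.78),
]

def route(min_quality: float) -> ModelOption:
    """Pick the cheapest model that clears the quality bar for this workload."""
    eligible = [m for m in CATALOG if m.quality_score >= min_quality]
    return min(eligible, key=lambda m: m.cost_per_m_tokens)

# A regulated, accuracy-sensitive workflow vs. a high-volume support workflow.
print(route(min_quality=0.90).name)  # -> frontier-proprietary
print(route(min_quality=0.75).name)  # -> open-weights-small
```

In practice, platforms layer on data-governance, latency, and integration constraints, but this cost-versus-quality trade-off is the core of the “model marketplace” dynamic described here.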

On the consumer side, by contrast, the emphasis is frequently on breadth of capability, seamless integration with existing apps or workflows, and user experience. Here, AI is being embedded into browsers, operating systems, and consumer apps—search, email, personal productivity, social platforms—often in ways that prioritize responsiveness and convenience. Google’s integration of Gemini into Search (e.g., AI Overviews and AI Mode), Chrome browser, and Android smartphone OS is a prime example of this approach: a single family of models adapted to multiple user touch points with ease of access at low or no cost.

These developments show we are shifting from one dominant LLM or AI chatbot (ChatGPT) to a world of multiple competing models across the web and apps. Recent app data indicate a slowing in ChatGPT usage and stronger growth at Gemini. The net effect is an increasingly competitive and diversified ecosystem that some refer to as a “model marketplace.” Within both enterprise and consumer segments, further specialization is emerging: preferred models for coding, models for customer support, models for creative tasks, and models tuned for specific industries such as healthcare, financial services, or public safety.

The emergence of multiple AI chatbots also weakens the single, tightly bound AI ecosystem dominated by two players (OpenAI on the model side and Nvidia on the hardware side), pushing the industry toward multiple competing ecosystems. Early on, much of the AI narrative revolved around ChatGPT trained on Nvidia GPUs, with that pairing effectively defining the state of the art for AI.

Increasingly, alternative stacks are matching or exceeding the OpenAI+Nvidia ecosystem. Alphabet’s Gemini family of models, as mentioned, has emerged as a leading competitor, in some benchmarks surpassing ChatGPT. Importantly, Gemini was developed on Google’s lower-cost TPU infrastructure rather than Nvidia GPUs. Anthropic’s Claude models, also competitive with ChatGPT and dominant in the enterprise, are being developed on Amazon’s Trainium chips and dedicated inference chips, establishing a distinct AWS-anchored ecosystem. At the same time, China is developing its own regional ecosystem, pairing local models such as DeepSeek with domestic chip providers like Huawei. The result is a transition from a single dominant model-hardware pairing to multiple, regionally and technologically differentiated AI stacks competing across models, silicon, and platforms.

For investors, the move from one dominant model to many specialized models changes the shape of the opportunity. It favors platforms and ecosystems that can host and orchestrate multiple models, enterprises that own valuable proprietary data to tune those models, and infrastructure providers that support dynamic, multi-model environments. It also mitigates some concentration risk: instead of a winner-take-all outcome centered on a single provider, we are more likely to see a landscape where different models and vendors win in different segments and workflows. Simply put, the monopoly era of AI is fading, making way for a diversified marketplace of competing architectures.

Shift 3: From Embracing AI Hype and “Deals” to Capital Discipline and ROI Focus

The third shift is occurring in capital markets: investors are moving from broadly rewarding AI-related announcements, partnerships, and capex commitments to demanding clear, quantifiable returns on AI investment. In the early phase, the promise of AI—its potential to transform industries and unlock new revenue pools—was sufficient for markets to reward companies that signaled aggressive AI strategies, regardless of near-term economic justification or returns. Announcements about multi-year, multi-billion (or even trillion) dollar commitments to data centers, capacity reservations, or AI partnerships often led to positive stock price reactions, even when revenue visibility was limited.

As the scale of AI spending has grown, so too has investor scrutiny. There is now widespread recognition of a significant gap between aggregate AI infrastructure commitments—sometimes on the order of trillions of dollars for data centers, chips, and related capabilities—and the revenue generated to support these commitments, which remains in the tens of billions for even the leading players. In other words, the industry has front-loaded an enormous amount of capital into AI infrastructure, but the monetization curve is still catching up. Markets are increasingly asking whether the implied returns on these investments are sustainable and attractive.

This changing sentiment is reshaping how announcements are received. Companies that continue to trumpet large AI spending plans without clear paths to monetization or credible unit economics are at greater risk of being penalized rather than rewarded. Conversely, firms that can demonstrate strong AI-driven revenue growth, improved customer retention, higher average revenue per user (ARPU), or tangible efficiency gains are more likely to enjoy a valuation premium. Investors are shifting from rewarding “AI news flow” to rewarding “AI ROI,” focusing on metrics like payback periods, margin impact, and the actual monetization of AI-enhanced offerings.
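To make the ROI lens concrete, the sketch below computes a simple payback period and annual return on an AI buildout; every input (the capex figure, incremental revenue, and operating margin) is a hypothetical assumption chosen for illustration, not a figure from this article.

```python
# Illustrative only: all inputs are hypothetical assumptions.

def payback_years(capex: float, annual_cash_flow: float) -> float:
    """Years of incremental cash flow needed to recover the upfront investment."""
    return capex / annual_cash_flow

def simple_annual_roi(annual_cash_flow: float, capex: float) -> float:
    """Annual incremental cash flow as a fraction of invested capital."""
    return annual_cash_flow / capex

capex = 100e9                  # hypothetical $100B AI infrastructure buildout
incremental_revenue = 20e9     # hypothetical AI-driven revenue per year
operating_margin = 0.40        # hypothetical margin on that revenue
cash_flow = incremental_revenue * operating_margin   # $8B per year

print(f"Payback period: {payback_years(capex, cash_flow):.1f} years")      # 12.5 years
print(f"Simple annual ROI: {simple_annual_roi(cash_flow, capex):.0%}")      # 8%
```

At these assumed inputs, recovering the investment takes more than a decade, which is one way to see why markets are pressing for evidence of faster monetization before rewarding further capex announcements.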

For equity investors, this shift argues for a more discriminating approach. Rather than buying AI exposure through the most aggressive capex spenders, the focus is now on identifying those with business models genuinely enhanced by AI capabilities, with economic flywheels that justify the underlying capex. It also raises the bar for hardware and infrastructure providers: the long-term sustainability and growth rate of AI hardware demand will depend on whether downstream users are achieving satisfactory returns.

Conclusion

Taken together, these three shifts—from performance to cost efficiency, from a single dominant model to multiple specialized models, and from hype to ROI discipline—define a broader and ultimately healthier AI investment landscape.

While we believe AI is a profound, long-term driver of change in the global economy, we have chosen to participate in AI in ways that emphasize durable economic models and sustainable growth: platforms with distribution, applications with real customers and cash flows, and infrastructure that offers cost-efficient hardware for AI model training and inference. This evolution supports a more balanced portfolio approach, where AI exposure is diversified across multiple layers of the stack and across both direct (AI enablers) and indirect (AI adopters) beneficiaries.