MI&S Weekly Analyst Insights — Week Ending April 24, 2026

Welcome to this edition of our Weekly Analyst Insights roundup, which features key insights that our analysts have developed based on the past week's events.

Our schedule last week was dominated by Google Cloud Next, as you'll see from entries in this week's roundup by Jason Andersen, Mike Leone, Matt Kimball, and Melody Brue. This was the first time I was able to join our newest analyst, Mike, at a conference, plus it was a great opportunity for all of us to catch up with friends from the industry and with Google execs — especially at the Analyst Summit connected to GCN.

Google Cloud CEO Thomas Kurian delivers the keynote at Google Cloud Next 2026. (Credit: Moor Insights & Strategy)

That said, there were plenty of other things going on, including the Adobe Summit (Mel has the scoop on that), Intel's surprisingly strong earnings (see Anshel Sag's "Semiconductors" entry), and, as icing on the cake, the big announcement that longtime Apple exec John Ternus would succeed Tim Cook as Apple's CEO — which I commented on for CNBC. Whew!

This week, Jason and Mel are in San Francisco for the What's Next With AWS event, with Mel also attending some of the Oracle Applications Summit while in town. Anshel and Mel will be in Bellevue, Washington, for a T-Mobile event, and I'll be at TT-Deploy in San Francisco. The spring season of client and vendor events is here, and we're on the road a lot over the coming months. If you'll be attending any of the same events or see that we'll be in your city, please reach out.

Last week was a banger for Moor Insights & Strategy analysts quoted in the news. Our insights on major events, executive changes, and tech company earnings (to name just a few) were featured across multiple leading business and technology publications, including the Wall Street Journal, CIO, PC World, and Business Insider.
Check out the full list of citations here.

Our MI&S team also published 8 deliverables — 5 Research Notes, 1 Analyst Insight, and 2 Podcasts.

Check out this week's Analyst Insights roundup for more from the MI&S team, including what's top of mind for each analyst, our thoughts on vendor announcements, press quotes, and more.

Have a great week!

Patrick Moorhead

Apps, Agents, and Automation (Jason Andersen)

Last week was dominated by Google Cloud Next 2026, and my overall feedback is that Google has gone to great lengths to fill gaps I had identified in its agent strategy last year. In particular, it has made great strides in agent observability and governance. My takeaway for enterprises is that Google's agent products — namely Gemini Enterprise and Gemini Enterprise Agent Platform — are now competitive with other leading products in the space.

That said, there are some distinct capabilities worthy of special mention — plus I want to share a thought on some of Google's recent claims about openness. First, the capabilities:

While "agent skills" are not a new concept, I really like how Google treats them as an enterprise capability, much like MCP tools. In a way, it feels like Google has mainstreamed the skills notion with its dedicated Agent Skills registry and improved ability to share skills across the enterprise.

Another positive: I observed some very good tooling updates, especially with respect to workflows and agent orchestration. Those capabilities are quite visible in the updated Google Customer Experience Platform, which I wrote about last year. The Customer Experience team is clearly making big strides, announcing a number of new wins — including Macy's, which was demonstrated to great effect at the event.

Lastly, it's easy to forget that while so much of Google's AI marketing focuses on its Gemini models, Google also has many others, such as voice and image models.
As we move past LLMs and chat interfaces, Google has some advantages when it comes to multi-model agents.

One last note on Google's agent play: At last week's event, executives and other speakers went to considerable lengths to call Gemini's agentic products "open." Again, a tip of the hat to Google for making strides here, but this is another messaging statement that places Google in a better competitive position than others that have done at least as much on the openness front.

Last week I also had a great conversation with the team from Weights & Biases, which is now part of CoreWeave. To be honest, I did not know a lot heading into the meeting — but I walked away impressed. And while CoreWeave completed the W&B acquisition about a year ago, it's clear that the companies have something more akin to a GitHub/Microsoft model than a full integration play, which means that W&B is not limited to using CoreWeave's services.

While I learned a lot across the board, two things stood out the most. The first is that W&B can provide a very granular cross-cloud platform for agent and model observability and training. This is something that hyperscalers and SaaS agent players aren't able to do; in fact, multiple SaaS vendors leveraging hyperscaler infrastructure are using W&B to manage their agentic platforms today. And while that's interesting in itself, W&B's vision also connects training and observability, allowing you to get real-time model improvements into a model rather than needing dedicated training builds. It's kind of a self-correcting, autonomous service that is important in many industries now but would be mandatory in more futuristic cases like robotics.

Between what is going on at W&B and CoreWeave's recent expansion into storage and other software capabilities, it would seem that CoreWeave is in the midst of pivoting away from its neocloud roots toward a more full-service AI infrastructure play.
Automotive (Anshel Sag)

Distance Technologies has partnered with Kia to bring its light-field display technology to Kia vehicles, with the aim of incorporating new AR functions into driving technology. According to the companies, Distance's tech will initially find its way into the Kia Vision Meta Turismo Concept Car; that should be a good proving ground for Kia's vision for future vehicles and show how well the companies are working together to deliver that vision. While Distance's technology is already being integrated into planes and military vehicles, this is the company's first consumer prototype design. I have always hoped that such technology would find its way into consumer products, and I believe that augmented reality holds great promise for automotive applications, especially with dash displays shrinking or going away entirely in vehicles like Teslas. Kia is a very forward-thinking company, and I could see it commercializing this technology soon.

Data and AI Governance (Mike Leone)

Governance ran straight through Google Cloud Next. I posted about Agentic Data Cloud, Knowledge Catalog, and Agent Platform in real time, and what jumped out at me is how Google connected governance and security as one story. Google has built the two to work together, and it's the first hyperscaler I've seen do that. The bigger angle is Knowledge Catalog itself. The whole design is about reaching across clouds: Google wants Knowledge Catalog to govern your data wherever it sits. Every hyperscaler is racing to be the center of governance gravity for the agentic era, and right now Google sounds the most complete.

Lots of partner news came out of Google Cloud Next — so much that it was hard to pick which items to focus on. I'm putting a little focus on Atlan, an active metadata vendor, and Anomalo, a data quality vendor. Both showed up with stronger governance stories. Atlan deepened its Google Cloud integration and pushed a broader multi-cloud "AI Context Ecosystem" story.
It almost feels like Atlan is thinking about AI context as its own category. Anomalo plugged quality directly into Knowledge Catalog and teamed up with Atlan on an agentic-context piece. Two vendors who normally compete for the same buyer teaming up is the part worth flagging here. Agentic context is bigger than any one vendor can carry, and that's why we're seeing partnerships form. The independent governance vendors who don't land a solid partner in the next 90 days are the ones falling behind — if there are any left, that is.

In a heavy news week, Immuta also dropped a piece on access governance for AI agents that probably got lost in the shuffle. Most of what gets called AI governance right now is output control. Think evaluation, hallucination guarding, prompt safety — all the stuff that happens after the model runs. Immuta is pointing at the input side, which is where most breaches actually start. Old access governance assumed humans clicking buttons. Agents read sensitive columns at machine speed, and they don't ask for permission the way people do. What happens when a long-running agent makes a thousand silent reads against data nobody told it to touch? Old access models can't see that. I'm watching to see which catalog vendors are looking at it from both directions.

Datacenter — Silicon (Matt Kimball)

At Google Cloud Next, the company unveiled its eighth-generation TPU portfolio. I say "portfolio" because Google announced the TPU 8t (training) and TPU 8i (inference), along with networking (Virgo) and system-level integration (AI Hypercomputer) to deliver highly specialized silicon across the AI lifecycle. The architectural reasoning is sound. Training is a throughput problem; inference is a latency and memory bandwidth problem. Google's view is that engineering for both in the same chip means compromising on both. AWS took this path years ago with Trainium and Inferentia.
Google has now done the same.

Interestingly, as Google pivots to discrete silicon for training and inference, AWS has pivoted back the other way, with CEO Matt Garman noting that Trainium could handle much of the heavy lifting for both training and inference. AWS also recently announced a partnership with Cerebras to deliver disaggregated inference, where latency issues around token generation and delivery are better addressed. Meanwhile, Google seems not to have addressed this "disaggregated" opportunity (which was also a major theme at GTC earlier this year).

Back to TPU. The scale numbers are impressive, though largely irrelevant to enterprise IT. A 9,600-chip training pod connected by optical circuit switches is an architecture that will make a frontier model provider excited — but it is fully disconnected from what an enterprise IT organization would care about.

What matters more is price-performance when AI is activated (i.e., inference). To that end, Google is making bold claims about TPU 8i's improvements over Ironwood (TPU 7). Assuming its claim of an 80% improvement in performance per dollar holds up, that could be significant. But again, Google's claim feels a bit hollow. All claims compare TPU 8 to TPU 7, while the obvious relevant comparisons — to Trainium, NVIDIA, and maybe even Azure's Maia — are absent. Further, I don't know an enterprise CIO who puts any stock in a vendor's own claims about performance or price-performance.

On the model-support front, Google states support beyond Gemini, with Llama, Qwen, and others running via vLLM on TPU. But multi-host configurations aren't supported yet, and PyTorch is still in preview.

Maybe most confusing to me here is trying to understand who Google's target customer is. Is TPU 8 designed and built solely to support the company's recent partnership with Anthropic?
Or is Google looking to establish a more relevant foothold in the enterprise AI space?

I don't see anything that suggests the company is serious about penetrating the enterprise market (which has long been an elusive segment). This spans product, product delivery, and target GTM messaging. As Google builds what looks like impressive silicon, networking (Virgo), and systems (AI Hypercomputer), I don't see anything that would resonate with an enterprise IT organization looking to deploy AI that, even if it originates in the cloud, will have a strong on-prem presence. Even the company's distributed cloud (on-prem cloud) only supports NVIDIA.

My take: Google may have developed some great technology. But it has a long way to go in making it relevant for the enterprise.

Data Infrastructure and Storage (Mike Leone)

Google led Google Cloud Next with its biggest first-party storage push in years. As AI models scale, the bottleneck shifts from compute to storage. Storage feeds the accelerators, and if storage can't keep up, every idle GPU minute is money lit on fire. That becomes a TCO problem pretty quickly. Google's answer is to push performance directly into the storage layer. The announcements cut across the stack, with Cloud Storage Rapid for performance object storage, Managed Lustre at 10x throughput with a cheaper Dynamic tier, Smart Storage automating metadata for agents over M
