The teams pulling ahead aren’t the ones with the best models

Published 2 months, 1 week ago
Description

AI products are shipping faster than ever. But shipping isn’t impact. The teams pulling ahead aren’t the ones with the best models — they’re the ones who can prove their product moves the business. This edition is about that gap. How to measure what matters, where the biggest barriers to impact are hiding, and what the latest research says about getting AI products to actually drive growth. Because the real competitive advantage isn’t AI. It’s knowing whether your AI is working.

What You’ll Learn in This Edition

This edition cuts through the noise to focus on the measurement gap — the difference between shipping AI and proving AI drives growth.

* The Power/Speed/Impact/Joy bullseye — a calibration framework for AI products that actually drive growth

* A Nature paper reveals why removing friction from AI may be destroying the learning your team needs

* John Maeda on why design teams are being hollowed out — and why PMs are next

* Benedict Evans on why even OpenAI can’t solve product-market fit with capability alone

* Research that should change how your team thinks about AI-assisted skill building

Episode 1: Why Your AI Metrics Are Lying to You – A Framework for Improving AI Product Performance

Your AI product might be fast, capable, and technically impressive — and still not drive the growth your business needs. In this episode, Brittany Hobbs and I introduce the Power, Speed, Impact, and Joy bullseye — a calibration framework borrowed from F1 racing. The teams winning aren’t shipping more features. They’re measuring different things entirely. We break down a three-layer eval approach and why most completion metrics are hiding the signals that matter.

“Success does not mean satisfaction. If someone stops engaging, does that mean they solved their problem — or that they were frustrated and left?” — Brittany Hobbs
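
The distinction Hobbs draws is measurable. Below is a minimal sketch of the idea, assuming a hypothetical `Session` record with a return-visit signal; the field names, the seven-day window, and the labels are illustrative, not taken from the episode.

```python
from dataclasses import dataclass

@dataclass
class Session:
    completed: bool           # did the flow technically finish?
    returned_within_7d: bool  # did the user come back? (illustrative window)
    thumbs_up: bool | None    # explicit feedback, usually sparse

def classify(session: Session) -> str:
    """Label a session outcome instead of trusting raw completion.

    A completion with no return visit and no positive feedback is
    ambiguous: the user may have solved their problem, or given up.
    """
    if session.thumbs_up:
        return "resolved"
    if session.completed and session.returned_within_7d:
        return "likely_resolved"
    if not session.completed and not session.returned_within_7d:
        return "likely_abandoned"
    return "ambiguous"

# A completion that looks like success to a raw completion metric:
print(classify(Session(completed=True, returned_within_7d=False, thumbs_up=None)))
# -> "ambiguous" — exactly the signal a completion rate hides
```

The point of the sketch is the fourth label: a completion-rate dashboard has no "ambiguous" bucket, so frustrated exits get counted as wins.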

Listen on Spotify | Apple Podcasts | YouTube

Your Role Isn’t Shrinking. It’s Being Hollowed Out.

John Maeda — Three major tech companies have restructured design teams into “prompt engineering pods.” Maeda’s #DesignInTech 2026 calls it what it is: the elimination of design judgment from the product process. “When you replace a designer with a prompt, you don’t lose the pixels. You lose the questions that should have been asked before anyone opened a tool.” This applies to product managers too — if your PM’s job becomes prompt-wrangling instead of deciding what to build and why, you’ve automated the wrong layer. The roles aren’t disappearing. The judgment inside them is.

Featured Resource: Strategy for Measuring & Improving AI Products

The gap between what AI products ship and what they prove is where growth stalls. This framework moves teams from tracking activity — token counts, completion rates, session length — to defining and measuring the outcomes that actually drive business impact. Most teams ship features and assume engagement means success. It doesn’t. If your team can’t answer “is this AI feature making the business better?” with data, you’re flying blind. The framework covers product discovery through scale, with concrete steps for building measurement into your AI product from the start — not bolting it on after launch.
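
To make the activity-versus-outcome gap concrete, here is a minimal sketch assuming a hypothetical event log in which `draft_shipped` stands in for a business outcome; the event names and schema are illustrative, not the framework’s actual terms.

```python
from collections import Counter

# Illustrative event log: two users, same feature usage, different outcomes.
events = [
    {"user": "a", "type": "ai_draft_generated"},
    {"user": "a", "type": "draft_shipped"},       # AI output became real work
    {"user": "b", "type": "ai_draft_generated"},
    {"user": "b", "type": "ai_draft_generated"},  # activity with no outcome
]

counts = Counter(e["type"] for e in events)

# Activity metric: how often the feature fires. Easy to grow, easy to game.
activity = counts["ai_draft_generated"]

# Outcome metric: how often AI output turns into shipped work product.
outcome_rate = counts["draft_shipped"] / max(activity, 1)

print(f"generations: {activity}, shipped-per-generation: {outcome_rate:.0%}")
```

Run it and user B triples the activity number while contributing nothing to the outcome rate, which is the blind spot the framework is built to close.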

Listen Now