Episode Details

75% of Enterprise AI Fails. The Fix Isn't a Better Model.
Published 1 month, 3 weeks ago
Description

Every influencer is drooling over Claude Code skills files. Every product team is chasing the next model release. But for two years, the data has been screaming the same thing: capability isn’t the bottleneck. Context is. This edition unpacks what that actually means — why structured business knowledge is the highest-leverage investment a product team can make, what the “context wars” look like from the inside, and why the teams winning aren’t the ones with the best models. They’re the ones whose AI actually understands their business.

What You’ll Learn in This Edition

This edition confronts the structural reason most AI products fail — they’re missing the context that makes capability useful.

* Why Juan Sequeda from ServiceNow says “hope is not a strategy” — and what to build instead of better prompts

* The three-layer knowledge framework that gives AI a shared language across your entire organization

* How CNBC’s “silent failure at scale” investigation reveals that 91% of AI models degrade without anyone noticing

* Why Microsoft just adopted ontology — the same concept Juan has championed for 20 years — as the foundation of its agentic AI architecture

* What Citadel Securities data showing software engineer job postings rising 11% YoY means for the displacement narrative

Episode 3: Context Is the New Moat — Why Your AI Needs Business Knowledge, Not Better Prompts

Every influencer is drooling over skills files and prompt templates. Juan Sequeda, Principal Scientist at data.world (acquired by ServiceNow), has spent 20 years proving that none of it works without structured business knowledge underneath. In this episode, Juan breaks down the three-layer framework — business metadata, technical metadata, and the mapping layer that creates real semantics — and explains why the teams investing in ontology today will compound value across every AI use case they build next. His blunt assessment of skills files as a production strategy: “Hope is an interesting strategy. It’s not one that I add to my strategy.”
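The three-layer framework described above can be sketched, very loosely, as three linked metadata stores: what a term means to the business, where the data physically lives, and a mapping layer that connects the two. Everything here — the term "Order", the table name, the filter — is a hypothetical example invented for illustration, not from the episode.

```python
# Layer 1: business metadata — what a concept means to the organization.
business_metadata = {
    "Order": {
        "definition": "A confirmed customer purchase, excluding cancellations.",
        "owner": "Sales Ops",
    }
}

# Layer 2: technical metadata — where the data physically lives.
technical_metadata = {
    "warehouse.sales.orders": {
        "columns": ["order_id", "status", "created_at"],
    }
}

# Layer 3: the mapping layer — the semantics that bind a business term
# to its technical source, including the rule that makes it accurate.
mapping_layer = {
    "Order": {
        "source": "warehouse.sales.orders",
        "filter": "status != 'cancelled'",
    }
}

def resolve(term: str) -> dict:
    """Answer: what does this business term mean, and where is its data?"""
    mapping = mapping_layer[term]
    return {
        "meaning": business_metadata[term]["definition"],
        "data": technical_metadata[mapping["source"]],
        "constraint": mapping["filter"],
    }

print(resolve("Order")["constraint"])  # status != 'cancelled'
```

The point of the sketch: without the third layer, an AI system can see the table and the columns but has no way to know that "order" excludes cancellations — which is exactly the gap Juan argues skills files and prompts can't close at scale.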

“If you just edit in skills, I don’t think that’s gonna be the solution to your problem. You’ll have a great POC. It’ll work for the use cases you tested on. Are you willing to put your career on the line and put that in production?” — Juan Sequeda

Listen on Spotify | Apple Podcasts | YouTube

Context isn’t a nice-to-have. It’s the architecture layer that determines whether your AI product delivers consistent, measurable value or drifts into silent failure. PH1 built this framework to illustrate what Juan Sequeda has been researching for two decades: intent, background, examples, and templates aren’t prompt engineering tricks — they’re the structural foundation that transforms an AI system from a “forever intern” into a strategic partner. Without them, you’re hoping the model figures out what “order” means in your business. Hope, as Juan puts it, is not a strategy.
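The four components named above — intent, background, examples, and templates — can be pictured as a structured context object rather than an ad-hoc prompt string. This is a minimal sketch of that idea; the class name, fields, and all sample values are hypothetical, not taken from the PH1 framework or the episode.

```python
from dataclasses import dataclass, field

@dataclass
class BusinessContext:
    intent: str                  # what the user is trying to accomplish
    background: str              # domain facts the model cannot infer
    examples: list = field(default_factory=list)  # worked examples
    template: str = ""           # expected output shape

    def to_prompt(self) -> str:
        """Render the structured context as a single prompt preamble."""
        parts = [
            f"Intent: {self.intent}",
            f"Background: {self.background}",
        ]
        parts += [f"Example: {e}" for e in self.examples]
        if self.template:
            parts.append(f"Respond using this template: {self.template}")
        return "\n".join(parts)

ctx = BusinessContext(
    intent="Summarize weekly order volume",
    background="An 'order' excludes cancelled purchases.",
    examples=["Week 12: 4,301 orders (up 3% WoW)"],
    template="Week {n}: {count} orders ({delta} WoW)",
)
print(ctx.to_prompt())
```

Structuring context this way makes the "forever intern" problem concrete: every field is something the model would otherwise have to guess, and each guess is a chance for silent failure.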

RAG Was the Answer. Now It’s a Symptom of the Real Problem.

RAG dominated for two years as the default way to give LLMs context. But as context windows expanded from 8K to a million tokens, the question shifted. This video breaks down when RAG still matters — vast, dynamic datasets and cost efficiency — and when long context windows make the retrieval layer unnecessary. The strategic implication for product teams: RAG was always a workaround for a deeper problem. The real question was never “how do I retrieve the right document?” It was “does my system actually understand my business?” That’s the context layer Juan Sequeda is building, and it sits beneath RAG.

Listen Now
