MCP: The End of Custom AI Glue

Published 1 month ago
Description
Everyone is suddenly talking about MCP, but most people are describing it wrong. This episode argues that MCP is not a plugin system, not an API wrapper, and not "function calling, but standardized." Those frames miss the point and guarantee that teams will simply recreate the same brittle AI glue they're trying to escape. MCP is a security and authority boundary.

As enterprises rush to integrate large language models into real systems (Graph, SharePoint, line-of-business APIs), the comfortable assumption has been that better prompts, better tools, or better agent frameworks will solve the problem. They won't. The failure mode isn't model intelligence. It's unbounded action. Models don't call APIs. They make probabilistic decisions about which described tools to request. And when those requests are executed against deterministic systems with real blast radius, ambiguity turns into incidents. MCP exists to insert a hard stop: a protocol-level choke point where identity, scope, auditability, and failure behavior can be enforced without trusting the model to behave.

This episode builds that argument from first principles, walks through the architectural failures that made MCP inevitable, and then places MCP precisely inside a Microsoft-native world, where Entra, Conditional Access, and audit are the real control plane.

Long-Form Show Notes

MCP Isn't About Intelligence: It's About Authority

The core misunderstanding this episode dismantles is simple but dangerous: the idea that LLMs "call APIs." They don't. An LLM never touches Graph, SharePoint, or your backend directly. It only sees text and structured tool descriptions. The actual execution happens somewhere else, inside a host process that decides which tools exist, what schemas they accept, and what identity is used when they run. That means the real problem isn't how smart the model is.
It's who is allowed to act, and under what constraints. MCP formalizes that boundary.

The Real Failure Mode: Probabilistic Callers Meet Deterministic Systems

APIs assume disciplined, deterministic callers.
LLMs are probabilistic planners. That collision creates a unique failure mode:
  • Ambiguous tool names lead to wrong tool selection
  • Optional parameters get “improvised” into unsafe inputs
  • Partial failures get treated as signals to retry elsewhere
  • Empty responses get interpreted as “no data exists”
  • And eventually, authority leaks without anyone noticing
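These failure modes share a root cause: nothing deterministic sits between the model's request and execution. A minimal Python sketch of the choke point the episode argues for, where the host, not the model, decides whether a requested tool call runs. All names here (the tool names, the scope strings, the registry shape) are illustrative assumptions, not part of the MCP spec:

```python
# Illustrative host-side choke point: the model only *requests* a tool call;
# the host validates the request against declared tools, required parameters,
# and the caller's granted scopes before anything executes. Deny by default.

TOOLS = {
    # Each tool declares the parameters it accepts and the authority it needs.
    "sharepoint_read":   {"required_params": {"site", "path"}, "scope": "Sites.Read"},
    "sharepoint_delete": {"required_params": {"site", "path"}, "scope": "Sites.Manage"},
}

def dispatch(tool_name, params, granted_scopes, audit_log):
    """Execute a model-requested tool call only if it passes every check."""
    tool = TOOLS.get(tool_name)
    if tool is None:  # unknown or hallucinated tool name
        audit_log.append(("denied", tool_name, "unknown tool"))
        return {"error": "unknown tool"}
    missing = tool["required_params"] - params.keys()
    if missing:  # the model omitted or "improvised" parameters
        audit_log.append(("denied", tool_name, f"missing params: {sorted(missing)}"))
        return {"error": "invalid parameters"}
    if tool["scope"] not in granted_scopes:  # authority check, not model trust
        audit_log.append(("denied", tool_name, "insufficient scope"))
        return {"error": "forbidden"}
    audit_log.append(("allowed", tool_name, sorted(params)))
    return {"ok": True}  # a real host would call the backend here, as the caller's identity

log = []
# A delete requested by a caller who only holds a read scope is refused,
# and the refusal is recorded, regardless of what the model "intended".
print(dispatch("sharepoint_delete", {"site": "hr", "path": "/x"}, {"Sites.Read"}, log))
print(dispatch("sharepoint_read", {"site": "hr", "path": "/x"}, {"Sites.Read"}, log))
```

Every denial is logged, which is the point: failure becomes loud and attributable instead of the model quietly retrying a different tool.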
Prompt injection doesn't bypass auth; it steers the caller. Without a hard orchestration boundary, you're not securing APIs. You're hoping a stochastic process won't make a bad decision.

Custom AI Glue Is an Entropy Generator

Before MCP, every team built its own bridge:
  • bespoke Graph wrappers
  • ad-hoc SharePoint connectors
  • middleware services with long-lived service principals
  • “temporary” permissions that never got revoked
Each one felt reasonable. Together they created:
  • tool sprawl
  • permission creep
  • policy drift
  • inconsistent logging
  • and integrations that fail quietly, not loudly
That’s the worst possible failure mode for agentic systems—because the model fills in the gaps confidently. Custom AI glue doesn’t stay glue.
It becomes policy, without governance.

Why REST, Plugins, Functions, and Frameworks All Failed

The episode walks through the industry's four failed patterns:
  1. REST Everywhere
    REST assumes callers understand semantics. LLMs guess. Ambiguity turns into behavior.
  2. Plugin Ecosystems
    Plugins centralize distribution, not governance. They concentrate integration debt inside a vendor’s abstraction layer.
  3. Function Calling
    Function calling is a local convention, not a protocol. Every team reinvents discovery, auth, logging, and policy—badly.
  4. Agent Frameworks
    Frameworks accelerate prototypes, not ecosystems. They hide boundary decisions instead of making them explicit.
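What all four patterns lack is a contract that travels with the tool. MCP's tool listing describes each tool with a name, a description, and a JSON Schema for its input, so discovery and validation stop being per-team conventions. A hedged sketch of that idea: the declaration shape below mirrors MCP's tool description (name, description, inputSchema), while the tiny validator is a simplified stand-in for real JSON Schema machinery, and the tool itself is invented for illustration:

```python
# A tool declared with an explicit input contract, in the shape MCP uses for
# tool descriptions. The validator below is a deliberately minimal stand-in
# for a real JSON Schema library: it checks required keys and primitive types.
tool = {
    "name": "search_documents",
    "description": "Full-text search over a SharePoint site.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "query": {"type": "string"},
            "limit": {"type": "integer"},
        },
        "required": ["query"],
    },
}

TYPES = {"string": str, "integer": int, "object": dict}

def validate(args, schema):
    """Return None if args satisfy the schema, else a description of the violation."""
    for key in schema.get("required", []):
        if key not in args:
            return f"missing required argument: {key}"
    for key, value in args.items():
        prop = schema["properties"].get(key)
        if prop is None:  # the model invented an argument the tool never declared
            return f"undeclared argument: {key}"
        if not isinstance(value, TYPES[prop["type"]]):
            return f"wrong type for {key}"
    return None

print(validate({"query": "budget"}, tool["inputSchema"]))  # None: valid request
print(validate({"limit": 5}, tool["inputSchema"]))         # missing "query"
```

The validator isn't the point; the point is that the schema is published alongside the tool, so every host enforces the same contract instead of each team reinventing discovery, validation, and policy.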