Episode Details
Copilot Studio governance: use Purview and Power Platform DLP to stop AI agents from leaking internal data
Season 1
Published 6 months ago
Description
Copilot Studio governance: in this episode of M365.fm, Mirko Peters explains why your Copilot agents are quietly over‑sharing internal data—and how to use Microsoft Purview and Power Platform DLP to put them on a strict least‑privilege diet. He starts with the “eager intern with a master key” problem: every agent runs with the invoking user’s token, happily roaming through SharePoint, Outlook, and Dataverse wherever that user has access, then surfacing confidential context in otherwise innocent answers.
Mirko walks through how this inheritance actually works. Copilot Studio does not create a new identity by default; it impersonates the user, borrowing their permissions across connectors and environments. That design keeps the UX simple but creates a gray zone where tenant‑level policies appear to be in place while agents operate in a “service context” that sidesteps classic app governance. The result is context leakage by paraphrase rather than file download, the kind of subtle oversharing auditors call “inference” and admins struggle to detect in logs.
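The inheritance model can be sketched in a few lines. This is a conceptual illustration, not a real Copilot Studio API: the function name, permission strings, and the idea of passing an explicit grant set for a managed identity are all hypothetical, but they capture the distinction Mirko draws between impersonation and a dedicated service identity.

```python
# Hypothetical model of the default identity behavior: under impersonation,
# the agent has no permissions of its own and reaches exactly what the
# invoking user can reach. A managed identity decouples the two.

def effective_agent_access(user_permissions, agent_own_permissions=None):
    """Return the set of resources the agent can touch.

    agent_own_permissions is None  -> default impersonation: mirror the user.
    agent_own_permissions is a set -> managed identity: only explicit grants.
    """
    if agent_own_permissions is None:
        return set(user_permissions)      # agent inherits the user's full reach
    return set(agent_own_permissions)     # agent limited to its own grants

# Illustrative permission labels, not real resource identifiers.
user = {"SharePoint:HR", "Outlook:Inbox", "Dataverse:Contacts"}

# Impersonation: the agent sees everything this user sees, including HR files.
assert effective_agent_access(user) == user

# Managed identity with least privilege: only the one grant it actually needs.
assert effective_agent_access(user, {"Dataverse:Contacts"}) == {"Dataverse:Contacts"}
```

The point of the sketch is the first branch: with impersonation there is no place to narrow the agent's reach independently of the user's, which is exactly why confidential context can surface in otherwise innocent answers.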
From there, he dissects how data flows through a single Copilot query. A question jumps from the chat surface into connectors, then into runtime and analytics, touching multiple services and audit systems along the way. Standard, Premium, and Custom connectors each open different doors; mixed classifications in a single environment can turn a harmless prototype into a production‑grade exfiltration path when Business and Non‑Business connectors are allowed to talk. Mirko explains why per‑environment DLP, cloned without discipline, makes “we have tenant‑wide DLP” a dangerous illusion.
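The core DLP rule discussed here can be modeled as a small predicate. This is a sketch of the Business/Non‑Business/Blocked grouping logic, with made‑up connector names and classifications; treating unclassified connectors as Blocked is one defensive default, not necessarily how any given tenant is configured.

```python
# Conceptual sketch of the Power Platform DLP group rule: within a single
# app, flow, or agent, Business and Non-Business connectors must not mix,
# and Blocked connectors may not appear at all. Classifications below are
# illustrative examples, not a real tenant policy.

POLICY = {
    "sharepoint":  "Business",
    "outlook":     "Business",
    "twitter":     "NonBusiness",
    "custom_http": "Blocked",
}

def dlp_allows(connectors, policy):
    """True if this combination of connectors survives the DLP policy."""
    # Defensive default: any connector the policy has never classified
    # (e.g. a brand-new Custom connector) is treated as Blocked.
    groups = {policy.get(c, "Blocked") for c in connectors}
    if "Blocked" in groups:
        return False
    # The mixing rule: Business and Non-Business may not coexist.
    return not ({"Business", "NonBusiness"} <= groups)

assert dlp_allows({"sharepoint", "outlook"}, POLICY)        # all Business: fine
assert not dlp_allows({"sharepoint", "twitter"}, POLICY)    # mixed groups: blocked
assert not dlp_allows({"custom_http"}, POLICY)              # Blocked connector
assert not dlp_allows({"brand_new_connector"}, POLICY)      # unclassified: blocked
```

The second assertion is the "prototype becomes an exfiltration path" scenario from the episode: one Business connector plus one Non‑Business connector in the same agent is precisely what the mixing rule exists to stop.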
The episode then focuses on repair instead of fear. Mirko lays out how to design layered DLP policies that classify connectors correctly, block risky combinations, and keep Custom connectors quarantined until proven safe. He emphasizes automating policy rollout across environments, enforcing consistent connector groupings, and using managed identities for agents that genuinely need service‑level access so they stop piggybacking on interactive user tokens. The goal is not fewer capabilities, but predictable corridors where data may and may not flow.
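The "automate rollout, enforce consistency" idea reduces to comparing every environment's policy against a tenant baseline and correcting drift. The sketch below uses invented environment names and connector groupings; a real implementation would drive the Power Platform admin tooling rather than plain dictionaries.

```python
# Sketch of keeping per-environment DLP groupings consistent with a tenant
# baseline: detect drift, then re-apply the baseline. All names are made up.

BASELINE = {
    "sharepoint":  "Business",
    "twitter":     "NonBusiness",
    "custom_http": "Blocked",
}

environments = {
    "dev":  {"sharepoint": "Business", "twitter": "Business"},  # drifted clone
    "prod": dict(BASELINE),                                     # compliant
}

def find_drift(env_policy, baseline):
    """Map of connectors whose group differs from the baseline.

    A connector missing from the environment counts as drift too, since an
    unclassified connector silently falls into the environment's default group.
    """
    return {c: (env_policy.get(c), g)
            for c, g in baseline.items()
            if env_policy.get(c) != g}

# Enforcement pass: any drifted environment gets the baseline re-applied.
for name, policy in environments.items():
    if find_drift(policy, BASELINE):
        environments[name] = dict(BASELINE)

assert find_drift(environments["dev"], BASELINE) == {}   # dev was corrected
assert find_drift(environments["prod"], BASELINE) == {}  # prod was already clean
```

This is the discipline Mirko argues for: cloned per‑environment policies are only as good as the process that keeps them identical, so the comparison and re‑application must be automatic, not a quarterly manual audit.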
Finally, he reveals the “one DLP rule most admins skip”: guarding the analytics and logging layer, not just the live connectors. Copilot Studio’s conversation analytics and telemetry can retain sensitive snippets outside the places your compliance diagrams usually cover. Mirko shows how to bring those stores under Purview’s lens, align their geography with your data residency requirements, and ensure the agent’s memory is governed as strictly as its real‑time access. By the end, you have a concrete model to turn Copilot Studio from an enthusiastic leaker into a disciplined, policy‑aware assistant.
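Governing the analytics layer ultimately means auditing where conversation stores live and how long they retain content. The check below is a hypothetical sketch: store names, regions, and the retention ceiling are invented, and a real audit would query the actual telemetry configuration rather than a hard‑coded list.

```python
# Illustrative residency/retention audit for conversation-analytics stores:
# flag any store outside the allowed regions or holding data too long.
# Store names, regions, and limits are hypothetical examples.

ALLOWED_REGIONS = {"europe"}
MAX_RETENTION_DAYS = 30

stores = [
    {"name": "copilot-transcripts", "region": "europe", "retention_days": 30},
    {"name": "agent-telemetry",     "region": "us",     "retention_days": 180},
]

def residency_violations(stores, allowed_regions, max_days):
    """Names of stores that violate residency or retention requirements."""
    return [s["name"] for s in stores
            if s["region"] not in allowed_regions
            or s["retention_days"] > max_days]

# The telemetry store fails on both counts: wrong geography, excessive retention.
assert residency_violations(stores, ALLOWED_REGIONS, MAX_RETENTION_DAYS) == ["agent-telemetry"]
```

The point mirrors the episode's closing argument: the agent's memory is a data store like any other, and until it appears in this kind of audit it sits outside the compliance diagram entirely.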
WHAT YOU WILL LEARN
- Why Copilot Studio agents inherit user permissions and how that causes silent oversharing.
- How data actually moves through connectors, runtime, and analytics when someone chats with an agent.
- How Power Platform DLP really works at the environment‑connector intersection.
- How to design and roll out layered DLP, including safe handling of Custom connectors.