Copilot vs ChatGPT under the EU AI Act: why “compliant by design” changes your risk and governance workload

Season 1 · Published 6 months ago
Description
Everyone thinks AI compliance is Microsoft’s problem. In this episode of M365.fm, Mirko Peters explains why the EU AI Act actually splits obligations across the whole AI supply chain—providers like Microsoft, yes, but also deployers like you when you roll out tools such as Copilot or ChatGPT into real business workflows. He shows how one HR experiment with ChatGPT or Copilot for candidate screening can instantly put your organization into “high‑risk” territory, triggering documentation, monitoring, transparency, and human‑oversight requirements backed by fines of up to 7% of global revenue.

Mirko walks through the AI Act’s four‑step risk ladder—unacceptable, high, limited, minimal—and makes it brutally clear that risk is defined by use case and context, not by how friendly the tool looks. A generic chatbot writing social posts may be minimal risk, but wire the same engine into hiring, compliance reporting, or credit decisions and it jumps into high‑risk classification with a full compliance checklist attached. You do not get to argue your way down the ladder; certain use cases, like automated CV screening or biometric ID, are pre‑stamped as high‑risk by the law itself.
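Mirko's point that risk attaches to the use case, not the tool, can be sketched as a simple lookup. A minimal illustration follows: the four tier names come from the AI Act's risk ladder as described above, but the use-case keys and the mapping itself are simplified assumptions for illustration, not legal guidance or an official taxonomy.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned outright (e.g. social scoring)
    HIGH = "high"                  # full compliance checklist: docs, oversight, monitoring
    LIMITED = "limited"            # transparency duties (e.g. consumer chatbots)
    MINIMAL = "minimal"            # no specific obligations

# Illustrative mapping only. Note the same model engine can appear in
# several rows with different tiers: risk follows deployment context.
USE_CASE_TIERS = {
    "social_post_drafting": RiskTier.MINIMAL,
    "consumer_chatbot": RiskTier.LIMITED,
    "cv_screening": RiskTier.HIGH,           # pre-stamped high-risk by the Act
    "credit_decisions": RiskTier.HIGH,
    "biometric_identification": RiskTier.HIGH,
}

def classify(use_case: str) -> RiskTier:
    """Look up a use case's tier; anything unclassified needs human review."""
    try:
        return USE_CASE_TIERS[use_case]
    except KeyError:
        raise ValueError(f"Unclassified use case: {use_case!r} - escalate to legal review")

print(classify("cv_screening").value)
```

The design choice mirrors the episode's warning: there is no path in this lookup to "argue your way down the ladder" for a pre-stamped use case, and unknown use cases fail loudly rather than defaulting to minimal risk.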

From there, he contrasts Copilot and ChatGPT as two very different starting points under the Act. Copilot arrives embedded in Microsoft 365, running on Azure OpenAI inside the Microsoft service boundary with an EU Data Boundary, established security certifications, and clear commitments that your prompts and responses are not used to train Microsoft’s foundation models. In practice, that means governance is built into the furniture: Purview handles classification and retention, the Trust Center documents residency and safeguards, and Microsoft exposes transparency notes and responsible‑AI tooling so you can show auditors your control surface instead of waving at a black box.

ChatGPT, by contrast, lands as a highly flexible general‑purpose model with minimal enterprise scaffolding by default. In its consumer form it sits in the “limited risk” bucket, fine for casual use but requiring you to build your own residency guarantees, logging, access controls, and documentation once you embed it into HR, finance, or other sensitive workflows. Mirko describes this as “flexibility plus bureaucratic headache”: every powerful new use case you create with ChatGPT in a regulated environment becomes a compliance project you have to design, document, and defend—largely from scratch.

Throughout the episode, Mirko’s core message is that “compliant by design” is not a magical exemption, but a meaningful head start. Choosing Copilot means starting with guardrails aligned to the AI Act’s expectations, but you still have to classify your use cases, configure Purview and RBAC correctly, and monitor real deployment risk. Choosing bare ChatGPT for enterprise use gives you amazing capabilities with almost no built‑in regulatory scaffolding—which is fine for experiments, but dangerous if you confuse “it works” with “it’s ready for an audit.”

WHAT YOU WILL LEARN