176 - (Part 2) The MIRRR UX Framework for Designing Trustworthy Agentic AI Applications


This is part two of the framework; if you missed part one, head to episode 175 and start there so you're all caught up. 

In this episode of Experiencing Data, I continue my deep dive into the MIRRR UX Framework for designing trustworthy agentic AI applications. Building on Part 1’s “Monitor” and “Interrupt,” I unpack the three R’s: Redirect, Rerun, and Rollback. I share practical strategies for data product managers and leaders tasked with creating AI systems people will actually trust and use, and I explain human-centered approaches to automation and how to handle unexpected outcomes in agentic AI applications without losing user confidence. I hope this control framework helps you get more value out of your data while simultaneously creating value for your human stakeholders, users, and customers.

Highlights / Skip to:

  • Introducing the MIRRR UX Framework (1:08)
  • Designing for trust and user adoption, and the perspectives you should include when designing these systems (2:31)
  • Monitor and interrupt controls let humans pause anything from a single AI task to the entire agent (3:17)
  • Explaining “redirection” through example use cases for claims adjusters working on insurance claims, so adjusters (users) can focus on important decisions (4:35)
  • Rerun controls let humans redo an agentic task after unexpected results, preventing errors and building trust in early AI rollouts (11:12)
  • Rerun vs. Redirect: how they differ in the context of AI, with additional use cases from the insurance claims processing domain (12:07)
  • Empathy and user experience in AI adoption, and why the most useful insights come from directly observing users, not from analytics (18:28)
  • Thinking about agentic AI as glue for existing applications and workflows, or as a worker (27:35)

Quotes from Today’s Episode

"The value of AI isn’t just about technical capability; it’s based in large part on whether the end users will actually trust and adopt it. If we don’t design for trust from the start, even the most advanced AI can fail to deliver value."

"In agentic AI, knowing when to automate is just as important as knowing what to automate. Smart product and design decisions mean sometimes holding back on full automation until the people, processes, and culture are ready for it."

"Sometimes the most valuable thing you can do is slow down, create checkpoints, and give people a chance to course-correct before the work goes too far in the wrong direction."


"Reruns and rollbacks shouldn’t be seen as failures; they’re essential safety mechanisms that protect both the integrity of the work and the trust of the humans in the loop. They give people the confidence to keep using the system, even when mistakes happen."

"You can’t measure trust in an AI system by counting logins or tracking clicks. True adoption comes from understanding the people using it: listening to them, observing their workflows, and learning what really builds or breaks their confidence."

"You’ll never learn the real reasons behind a team’s choices by only looking at analytics; you have to actually talk to them and watch them work."

"Labels matter: what you call a button or an action can shape how people interpret and trust what will happen when they click it."
