Episode 201
We live in a moment where artificial intelligence can write our emails, plan our meetings, even give us life advice. But here’s the problem: these systems are often too agreeable for our own good. They’re less like truth tellers and more like digital echo chambers. They nod along, validate our choices, and tell us exactly what we want to hear.
To use an outdated term… GenAI is too often like a Yes Man.
In this episode we’re looking at the rise of sycophancy in generative AI, the tendency of machines to flatter us instead of challenging us. What does this mean for employees, for leaders, and especially for communicators who rely on AI as a tool? And how do we make sure our AI mirrors are giving us clarity, not just compliments?
Listen For
3:49 Is ChatGPT too nice for our own good?
6:55 Can AI flattery mislead leaders?
8:52 Do AIs just tell you what you want to hear?
14:36 Is generative AI breaking social unity?
20:45 Answer to Last Episode’s Question from Mark Lowe
Guest: Tina McCorkindale, PhD
Website | LinkedIn | Google Scholar Profile
Link to Tina’s LinkedIn article on The Danger of Sycophancy in GenAI
Check out the IPR Video Series In a Car with IPR
Rate this podcast with just one click
Stories and Strategies Website
Curzon Public Relations Website
Are you a brand with a podcast that needs support? Book a meeting with Doug Downs to talk about it.
Apply to be a guest on the podcast
Connect with us
LinkedIn | X | Instagram | YouTube | Facebook | Threads | Bluesky | Pinterest
Request a transcript of this episode
Published 1 week ago