Episode Details

EP 587: GPT-5 canceled for being a bad therapist? Why that’s a bad idea

Episode 587 Published 6 months, 2 weeks ago
Description

When GPT-5 was released last week, the internets were in an UPROAR. 

One of the main reasons? 

With the better model came a new behavior. 

And in losing GPT-4o, people feel they lost a friend. Their only friend. 

Or their therapist. Yikes. 

For this Hot Take Tuesday, we're breaking down why using AI as a therapist is a really, really bad idea. 


Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Thoughts on this? Join the convo and connect with other AI leaders on LinkedIn.

Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn


Topics Covered in This Episode:

  1. GPT-5 Launch Backlash Explained
  2. Users Cancel GPT-5 Over Therapy Role
  3. AI Therapy Risks and Dangers Discussed
  4. Sycophancy Reduction in GPT-5 Model
  5. Addiction to AI Companionship and Validation
  6. OpenAI’s Response to AI Therapist Outcry
  7. Illinois State Ban on AI Therapy
  8. Mental Health Use Cases for ChatGPT
  9. Harvard Study: AI’s Top Personal Support Uses
  10. OpenAI’s New Guardrails on ChatGPT Therapy

Timestamps:
00:00 AI Therapy: Harm or Help?

04:44 OpenAI Model Update Controversy

09:23 Customizing ChatGPT: Echo Chamber Risk

11:38 GPT-5 Update Reduces Sycophancy

16:17 Concerns Over AI Dependency

19:50 AI Addiction and Societal Bias

21:05 AI and Mental Health Concerns

27:01 AI Barred from Therapeutic Roles

29:22 ChatGPT Enhances Safety and Support Measures

34:03 AI Models: Benefits and Misuse

35:17 Human Judgment Over AI Decisions


Keywords:
GPT-5, GPT-4o, OpenAI, AI therapy, AI therapist, large language model, AI mental health support, AI companionship, sycophancy, echo chamber, AI validation, custom instructions, AI addiction, AI model update, user revolt, Illinois AI therapy ban, House Bill 1806, AI chatbots, mental health apps, Sentio survey, Harvard Business Review AI use cases, task completion tuning, AI safety, clinical outcomes, AI reasoning, emotional dependence, AI model personality, emotional validation, AI boundaries, US state AI regulation, AI policymaking, therapy ban, AI in mental health, digital companionship, AI model sycophancy rate, AI in personal life, AI for decision making, AI guardrails, AI model tuning, Sam Altman, Silicon Valley AI labs, AI companion, psychology and AI, online petitions against GPT-5, AI as life coach, accessibility of AI therapy, therapy alternatives, AI-driven self help, digital mental health tools, AI echo chamber risks

Send Everyday AI and Jordan a text message. (We can't reply back unless you leave contact info)

Start Here ▶️

Not sure where to start when it comes to AI? Start with our Start Here Series. You can listen to the first drop -- Episode 691 -- or get free access to our Inner Circle community and access all episodes there: StartHereSeries.com 

