AI Companions: the risks and benefits, and what educators need to know

Season 8, Episode 393 · Published 5 months, 4 weeks ago
Description

How do we prepare students—and ourselves—for a world where AI grief companions and "deadbots" are a reality?

In this eye-opening episode, Jeff Utecht sits down with Dr. Tomasz Hollanek, a critical design and AI ethics researcher at the University of Cambridge's Leverhulme Centre for the Future of Intelligence, to discuss:

  • The rise of AI companions like Character.AI and Replika

  • Emotional manipulation risks and the ethics of human-AI relationships

  • What educators need to know about the EU AI Act and digital consent

  • How to teach AI literacy beyond skill-building—focusing on ethics, emotional health, and the environmental impact of generative AI

  • Promising examples: preserving Indigenous languages and Holocaust survivor testimonies through AI

From griefbots to regulation loopholes, Tomasz explains why educators are essential voices in shaping how AI unfolds in schools and society—and how we can avoid repeating the harms of the social media era.

Dr. Tomasz Hollanek is a Postdoctoral Research Fellow at the Leverhulme Centre for the Future of Intelligence (LCFI) and an Affiliated Lecturer in the Department of Computer Science and Technology at the University of Cambridge, working at the intersection of AI ethics and critical design. His current research focuses on the ethics of human-AI interaction design and the challenges of developing critical AI literacy among diverse stakeholder groups; as part of the latter stream, he leads LCFI's work on AI, media, and communications.

Connect with him:

https://link.springer.com/article/10.1007/s13347-024-00744-w

https://www.repository.cam.ac.uk/items/d3229fe5-db87-42ff-869b-11e0538014d8

https://www.desirableai.com/journalism-toolkit

📌 Key Takeaways
  • Teenagers are vulnerable AI users. Many systems simulate empathy while bypassing meaningful regulation or safeguards.

  • Consent needs a redesign. Hollanek proposes recurring consent mechanisms—a shift from passive pop-ups to informed, adaptive engagement.

  • AI literacy ≠ prompt engineering. We must move from tool proficiency to critical awareness of data footprints, systemic manipulation, and long-term impact.

  • Social AI is the new social media. Without thoughtful intervention, the pitfalls of social media could repeat—and intensify—with AI companions.

  • AI for cultural preservation. Ethical use of AI offers promise for sustaining languages and stories that might otherwise disappear.

