Episode Details

EP 148: Safer AI - Why we all need ethical AI tools we can trust

Episode 148 Published 2 years, 3 months ago
Description

Do you trust the AI tools you use? Are they ethical and safe? We often overlook AI safety, yet it's something we should pay attention to. Mark Surman, President of the Mozilla Foundation, joins us to discuss how we can trust and use ethical AI.

Newsletter: Sign up for our free daily newsletter
More on this Episode: Episode Page
Join the discussion: Ask Mark Surman and Jordan questions about AI safety
Upcoming Episodes: Check out the upcoming Everyday AI Livestream lineup
Website: YourEverydayAI.com
Email The Show: info@youreverydayai.com
Connect with Jordan on LinkedIn

Timestamps:
[00:01:05] Daily AI news
[00:03:15] About Mark and Mozilla Foundation
[00:06:20] Big Tech and ethical AI
[00:09:20] Is AI unsafe?
[00:11:05] Responsible AI regulation
[00:16:33] Creating balanced government regulation
[00:20:25] Is AI too accessible?
[00:23:00] Resources for AI best practices
[00:25:30] AI concerns to be aware of
[00:30:00] Mark's final takeaway

Topics Covered in This Episode:
1. Future of AI regulation
2. Balancing interests of humanity and government
3. How to make and use AI responsibly
4. Concerns with AI

Keywords:
AI space, risks, guardrails, AI development, misinformation, national elections, deep fake voices, fake content, sophisticated AI tools, generative AI systems, regulatory challenges, government accountability, expertise, company incentives, Meta's responsible AI team, ethical considerations, faster development, friction, balance, innovation, governments, regulations, public interest, technology, government involvement, society, progress, politically motivated, Jordan Wilson, Mozilla, show notes, Mark Surman, societal concerns, individual concerns, authenticity, shared content, data, generative AI, control, interests, transparency, open source AI, regulation, accuracy, trustworthiness, hallucinations, discrimination, reports, software, OpenAI, CEO, rumors, high-ranking employees, Microsoft, discussions, Facebook, responsible AI team, Germany, France, Italy, agreement, future AI regulation, humanity, safety, profit-making interests

Start Here ▶️

Not sure where to start when it comes to AI? Start with our Start Here Series. You can listen to the first drop -- Episode 691 -- or get free access to all episodes in our Inner Circle community: StartHereSeries.com
