How to Govern AI When You Can't Predict the Future (with Charlie Bullock)
Description
Charlie Bullock is a Senior Research Fellow at the Institute for Law and AI. He joins the podcast to discuss radical optionality: how governments can prepare for very advanced AI without locking in premature rules. The conversation covers why law often trails technology, and how transparency requirements, reporting, evaluations, cybersecurity standards, and expanded technical hiring could help. We also discuss the limits of private oversight, state versus federal regulation, and the risk of concentrating power in either companies or government.
CHAPTERS:
(00:00) Episode Preview
(01:04) The pacing problem
(06:18) Defining radical optionality
(11:03) Assumptions under uncertainty
(16:00) Industry convenience concerns
(20:41) Political will realities
(26:48) Private governance limits
(30:28) Government misuse risks
(36:29) Balancing institutional power
(42:25) Transparency and reporting
(49:35) Evaluations, security, talent
(58:26) State law preemption
(01:04:20) Historical nuclear analogies
SOCIAL LINKS:
Website: https://podcast.futureoflife.org
Twitter (FLI): https://x.com/FLI_org
Twitter (Gus): https://x.com/gusdocker
LinkedIn: https://www.linkedin.com/company/future-of-life-institute/
YouTube: https://www.youtube.com/channel/UC-rCCy3FQ-GItDimSR9lhzw/
Apple: https://geo.itunes.apple.com/us/podcast/id1170991978
Spotify: https://open.spotify.com/show/2Op1WO3gwVwCrYHg4eoGyP