For years, working on AI safety usually meant theorising about the ‘alignment problem’ or trying to convince other people to give a damn. Even if you could find a way to help, the work was frustrating and offered little feedback.
According to Anthropic’s Holden Karnofsky, this situation has now reversed completely.
There is now a long list of useful, concrete, shovel-ready projects with clear goals and deliverables. Holden thinks people haven’t appreciated the scale of the shift, and wants everyone to see the large range of ‘well-scoped object-level work’ they could personally help with, in both technical and non-technical areas.
Video, full transcript, and links to learn more: https://80k.info/hk25
In today’s interview, Holden — previously cofounder and CEO of Open Philanthropy — lists 39 projects he’s excited to see happening.
And that’s all just stuff he’s happened to observe directly, which is probably only a small fraction of the options available.
Holden makes a case that, for many people, working at an AI company like Anthropic will be the best way to steer AGI in a positive direction. He notes there are “ways that you can reduce AI risk that you can only do if you’re a competitive frontier AI company.” At the same time, he believes external groups have their own advantages and can be equally impactful.
Critics worry that Anthropic’s efforts to stay at that frontier encourage competitive racing towards AGI — significantly or entirely offsetting any useful research they do. Holden thinks this seriously misunderstands the strategic situation we’re in — and explains his case in detail with host Rob Wiblin.
Published 20 hours ago