
LLMs for Alignment Research: a safety priority?

Description
A recent short story by Gabriel Mukobi illustrates a near-term scenario where things go bad because new developments in LLMs allow LLMs to accelerate capabilities research without a correspondingly large acceleration in safety research.

This scenario is disturbingly close to the situation we already find ourselves in. Asking the best LLMs for help with programming versus with technical alignment research feels very different (at least to me). LLMs might generate junk code, but I can keep pointing out the problems with the code, and it will eventually work. This can be faster than writing the code myself in cases where I don't know a language or library well; the LLMs are moderately familiar with everything.

When I try to talk to LLMs about technical AI safety work, however, I just get garbage.

I think a useful safety precaution for frontier AI models would be to make them more useful for [...]

The original text contained 8 footnotes which were omitted from this narration.

---

First published:
April 4th, 2024

Source:
https://www.lesswrong.com/posts/nQwbDPgYvAbqAmAud/llms-for-alignment-research-a-safety-priority

---

Narrated by TYPE III AUDIO.
