Episode Details
“LLM Generality is a Timeline Crux” by eggsyntax
Description
Crossposted from the AI Alignment Forum. May contain more technical jargon than usual.
Short Summary
LLMs may be fundamentally incapable of fully general reasoning, and if so, short timelines are less plausible.
Longer summary
There is ML research suggesting that LLMs fail badly on attempts at general reasoning, such as planning problems, scheduling, and attempts to solve novel visual puzzles. This post provides a brief introduction to that research, and asks:
- Whether this limitation is illusory or actually exists.
- If it exists, whether it will be solved by scaling or is a problem fundamental to LLMs.
- If fundamental, whether it can be overcome by scaffolding & tooling.
The original text contained 10 footnotes, which were omitted from this narration.
---
First published:
June 24th, 2024
Source:
https://www.lesswrong.com/posts/k38sJNLk7YbJA72ST/llm-generality-is-a-timeline-crux
---
Narrated by TYPE III AUDIO.