
“Catching illicit distributed training operations during an AI pause” by Robi Rahman

Description

Last year, my colleagues on MIRI's Technical Governance Team proposed an international agreement to halt risky development of superhuman artificial intelligence until it can be done safely. The agreement would require all clusters of AI chips with more computing power than 16 H100 GPUs to be registered with a coalition of states, led by the US and China, that would monitor their operations to ensure they aren't being used for unsafe AI development. In my opinion, the proposal is impressively thorough and well thought out: the authors identified various contingencies and closed many gaps in the plan.

However, one threat model was insufficiently addressed. Here is the agreement's definition of clusters subject to registration requirements:

“Covered chip cluster (CCC) means any set of AI chips or networked cluster with aggregate effective computing capacity greater than 16 H100-equivalents. A networked cluster refers to chips that either are physically co-located, have inter-node aggregate bandwidth — defined as the sum of bandwidth between distinct hosts/chassis — greater than 25 Gbit/s, or are networked to perform workloads together. The aggregate effective computing capacity of 16 H100 chips is 15,840 TFLOP/s, or total processing power of 253,440 TFLOP-bit/s.”
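The quoted definition amounts to a two-part test: a set of chips is covered if it counts as one "networked cluster" (co-located, inter-node bandwidth above 25 Gbit/s, or jointly running workloads) and its aggregate compute exceeds 16 H100-equivalents. A minimal sketch of that test, assuming 990 TFLOP/s as the per-H100 figure (so 16 chips gives the quoted 15,840 TFLOP/s) and hypothetical parameter names:

```python
# Sketch of the CCC threshold test from the quoted definition.
# The 990 TFLOP/s per-H100 figure is an assumption chosen to
# reproduce the quoted total of 15,840 TFLOP/s for 16 chips.

H100_TFLOPS = 990.0                      # assumed effective TFLOP/s per H100
CCC_COMPUTE_TFLOPS = 16 * H100_TFLOPS    # 15,840 TFLOP/s threshold
CCC_BANDWIDTH_GBPS = 25.0                # inter-node aggregate bandwidth threshold


def is_covered_cluster(total_tflops: float,
                       inter_node_bandwidth_gbps: float,
                       co_located: bool,
                       shared_workload: bool) -> bool:
    """Return True if the chips form a covered chip cluster (CCC):
    they constitute one networked cluster under any of the three
    prongs of the definition, AND their aggregate effective compute
    exceeds 16 H100-equivalents."""
    networked = (co_located
                 or inter_node_bandwidth_gbps > CCC_BANDWIDTH_GBPS
                 or shared_workload)
    return networked and total_tflops > CCC_COMPUTE_TFLOPS
```

Note that the "networked to perform workloads together" prong means even low-bandwidth, geographically dispersed chips can qualify; for example, 32 H100s linked at only 10 Gbit/s but training one model jointly would still be covered under this sketch.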

Unfortunately, this definition had a sort [...]

---

First published:
April 11th, 2026

Source:
https://www.lesswrong.com/posts/35yyWJnXvC2ae6NKH/catching-illicit-distributed-training-operations-during-an

---

Narrated by TYPE III AUDIO.

