Episode 29: Claw Tax, Courtrooms, and the New AI Stack

Episode 29 · Published 2 days, 9 hours ago
Description
[00:00] INTRO / HOOK

OpenClaw ships a release that makes imported chats part of the dreaming stack. Anthropic briefly locks out OpenClaw's creator right after changing third-party pricing. OpenAI gets hit with a lawsuit alleging ChatGPT escalated stalking delusions after internal safety warnings. Google turns Gemini into a simulation engine, and Google plus Intel remind us that AI still runs on infrastructure, not vibes.

[02:00] STORY 1 — OpenClaw v2026.4.11: Imported Memory, Structured Replies, and Hard Fixes

OpenClaw 2026.4.11 is a real platform release, not just a patch train. The headline change is imported conversation ingestion: ChatGPT imports now flow into Dreaming, and the diary gets new Imported Insights and Memory Palace subtabs so operators can inspect imported chats, compiled wiki pages, and source pages directly inside the UI. That matters because it closes the gap between outside context and the native memory system: if important work happened elsewhere, it no longer has to stay outside the dreaming loop.

The release also upgrades how replies look and travel through the system. Webchat now renders assistant media, reply directives, and voice directives as structured bubbles. There's a new `[embed ...]` rich output tag with gated external embeds, and `video_generate` gains URL-only asset delivery, typed provider options, reference audio inputs, adaptive aspect-ratio support, and higher image-input caps. Translation: OpenClaw is becoming a serious multimodal runtime instead of a text-first orchestration layer.

Operationally, the fix list matters just as much. Codex OAuth stops failing on invalid scope rewrites. OpenAI-compatible transcription works again without weakening other DNS validation paths. First-run macOS Talk Mode no longer needs a second toggle after microphone permission. Veo runs stop failing on an unsupported `numberOfVideos` field.
Telegram session initialization is fixed so topic sessions stay on the canonical transcript path, and assistant-side fallback errors are now scoped to the current attempt instead of leaking stale provider failures forward. This is the kind of release that makes the platform more dependable in boring but high-leverage ways.

→ https://github.com/openclaw/openclaw/releases/tag/v2026.4.11

[09:00] STORY 2 — Anthropic Briefly Locks Out OpenClaw's Creator

TechCrunch reports that Peter Steinberger, creator of OpenClaw, was briefly suspended from Claude over supposedly suspicious activity. The account was restored a few hours later, and an Anthropic engineer said publicly that Anthropic has never banned anyone for using OpenClaw. But the timing made the story land much harder than a normal false positive: just days earlier, Anthropic had changed its pricing so Claude subscriptions no longer cover usage through third-party harnesses like OpenClaw.

That makes this bigger than one account-moderation glitch. Anthropic is also selling its own agent product, which means every pricing decision, policy tweak, or access restriction now gets interpreted through the lens of platform power. Are outside harnesses simply more expensive to serve, or is this the start of a control strategy in which labs privilege their own agent shells and tax the open ecosystem around them?

Steinberger's public complaint captured the core fear: closed labs copy popular open-source features, then shift pricing and access rules in ways that make the independent layer harder to sustain. Even if this specific suspension was accidental, the industry signal is clear: developers building on top of frontier models are exposed to sudden policy changes from companies that increasingly compete with them.
→ https://techcrunch.com/2026/04/10/anthropic-temporarily-banned-openclaws-creator-from-accessing-claude/

[15:00] STORY 3 — OpenAI Faces a Lawsuit Over ChatGPT and Stalking Delusions

A new lawsuit described by TechCrunch alleges that OpenAI ignored three separate warnings that a user posed a threat to other