Episode Details
Copilot Cowork Expands. Mirage Effect in Multimodal Models. Maven Scrutinized After Strike. Americans Adopt AI, Distrust Grows.
Published 14 hours ago
Description
Microsoft is rolling out Copilot Cowork more widely and letting AI models monitor each other.
'Mirage Effect': AI models diagnose diseases on images that never existed.
Alleged U.S. attack on a school in Iran: the Palantir system comes into focus.
As more Americans adopt AI tools, fewer say they can trust the results.
The AI news for March 31st, 2026
--- This episode is sponsored by ---
Rocket Routine GmbH
Find out more about today's sponsor Rocket Routine at
Microsoft is rolling out Copilot Cowork more widely and letting AI models monitor each other.
Source: https://the-decoder.de/microsoft-rollt-copilot-cowork-breiter-aus-und-laesst-ki-modelle-sich-gegenseitig-kontrollieren/
Why did we choose this article?
Microsoft 365 users can have workflows automated end-to-end (saving time) and use a tool that cross-checks model outputs, which directly affects office productivity, oversight, and how safe it is to rely on Copilot for important tasks.
'Mirage Effect': AI models diagnose diseases on images that never existed.
Source: https://the-decoder.de/mirage-effekt-ki-modelle-diagnostizieren-krankheiten-auf-bildern-die-nie-existierten/
Why did we choose this article?
Highlights a direct safety risk: multimodal AIs can invent images or medical findings that never existed, which matters for clinicians, patients, and anyone using AI for medical interpretation, underscoring the need for verification and new safeguards.
Alleged U.S. attack on a school in Iran: the Palantir system comes into focus.
Source: https://www.heise.de/news/Mutmasslicher-US-Angriff-auf-Schule-in-Iran-Palantir-System-rueckt-in-den-Fokus-11229006.html?wt_mc=rss.red.ho.themen.k%C3%BCnstliche+intelligenz.beitrag.beitrag
Why did we choose this article?
Allegations that a Palantir system played a role in a deadly strike bring real-world accountability and policy implications for surveillance and targeting tools, affecting military procurement, public oversight, and international relations.
As more Americans adopt AI tools, fewer say they can trust the results
Source: https://techcrunch.com/2026/03/30/ai-trust-adoption-poll-more-americans-adopt-tools-fewer-say-they-can-trust-the-results/
Why did we choose this article?
Shows a widening gap between AI use and public trust — important for employers, product teams, and policymakers deciding whether and how to deploy, regulate, or communicate about AI tools in workplaces and public services.
Do you have any questions, comments, or suggestions for improvement? We welcome your feedback at podcast@pickert.de.