Episode 1999-a: Researchers: AI Could Cause Harm If Misused by Medical Workers
Description
A study led by the Stanford School of Medicine in California says hospitals and health care systems are turning to artificial intelligence (AI). The health care providers are using AI systems to organize doctors’ notes on patients’ health and to examine health records.
However, the researchers warn that popular AI tools contain incorrect medical ideas or ideas the researchers described as “racist.” Some are concerned that the tools could worsen health disparities for Black patients.
The study was published this month in Digital Medicine. Researchers reported that when asked questions about Black patients, the AI models responded with incorrect information, including made-up and race-based answers.
The AI tools, which include chatbots like ChatGPT and Google’s Bard, “learn” from information taken from the internet.
Some experts worry these systems could cause harm and worsen forms of what they term medical racism that have continued for generations. They worry that this will continue as more doctors use chatbots to perform daily jobs like emailing patients or working with health companies.
The report tested four tools: ChatGPT and GPT-4, both from OpenAI; Google’s Bard; and Anthropic’s Claude. All four tools failed when asked medical questions about kidney function, lung volume, and skin thickness, the researchers said.