Artificial intelligence (AI) is rapidly reshaping the medical landscape, revolutionizing how healthcare providers diagnose and treat patients. However, this technological evolution brings with it complex questions concerning legal liability and accountability, especially when AI systems are integrated into clinical workflows. Recent research highlights how the timing and manner of AI’s involvement in radiological interpretation can significantly influence perceptions of malpractice risk, shedding light on the intricate interplay between automation and human judgment in medical decision-making.
In a groundbreaking study, a collaboration among Penn State College of Medicine, Brown University, and Seton Hall University School of Law, researchers investigated how mock jurors judge the liability of radiologists in hypothetical malpractice scenarios where AI flagged abnormalities in brain scans that the radiologist failed to identify. The study revealed a profound impact of workflow design on legal perceptions: jurors were nearly 50% more inclined to side with plaintiffs when the radiologist reviewed the scan only once, after the AI alert, compared to when the radiologist examined the images twice—once prior to and once after AI input.
This finding indicates that jurors attribute greater negligence to radiologists who appear to rely passively on AI outputs, rather than actively engaging with both their expertise and AI assistance through multiple evaluations. The dual-review workflow, entailing an initial radiologist assessment followed by AI feedback and a subsequent review, seems to convey a more diligent and thorough diagnostic process. Consequently, the legal threshold for meeting the “duty of care” appears closely tied to evidence of such methodical interactions between human clinicians and AI.
The study’s experimental design centered on a fictitious but plausible medical malpractice suit. Participants, recruited to act as lay jurors, examined one of two carefully crafted scenarios simulating the detection of a brain hemorrhage via computed tomography (CT). In both, AI correctly identified an abnormality, but the radiologist concluded that none was present. Juror decisions contrasted starkly between the two workflows, underscoring the judicial system’s nuanced interpretation of human-AI collaboration in clinical practice.
Importantly, about 75% of jurors determined a breach of duty when the radiologist enlisted AI feedback but reviewed the scan only once afterward. This figure dropped to 53% when the radiologist performed two separate reads bracketing the AI alert. These statistics emphasize that workflow adjustments that foster active, iterative engagement with AI findings may mitigate legal exposure, a crucial insight for healthcare providers contemplating the adoption of AI tools in diagnostics.
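Assuming the headline “nearly 50%” figure in this coverage derives from these two breach-of-duty rates (the study may measure it differently), the relationship between the numbers can be checked with simple arithmetic:

```python
# Juror breach-of-duty rates reported for the two workflows (percent).
single_read_after_ai = 75.0  # radiologist reviewed the scan once, after the AI alert
dual_read = 53.0             # radiologist read the scan before and after the AI alert

# Absolute gap in percentage points between the two workflows.
absolute_gap = single_read_after_ai - dual_read

# Relative increase: how much more often jurors found a breach of duty
# in the single-read workflow than in the dual-read workflow.
relative_increase = (single_read_after_ai - dual_read) / dual_read

print(f"Absolute gap: {absolute_gap:.0f} percentage points")
print(f"Relative increase: {relative_increase:.0%}")
```

The relative increase works out to roughly 42%, which the article rounds up to “nearly 50%”; the 22-percentage-point absolute gap is the more direct reading of the data.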
Yet, prudence is warranted. The researchers caution that asking radiologists to reinterpret scans multiple times may introduce operational complexities and increased costs in clinical settings. Moreover, cognitive biases further complicate the landscape. Radiologists may feel pressured to conform to AI’s conclusions for fear of legal repercussions if they dissent and are proven wrong. Such dynamics could paradoxically compromise diagnostic rigor while driving up patient anxiety, follow-up testing, and healthcare expenditure.
This phenomenon highlights a critical tension at the intersection of AI implementation and medical liability: balancing the need for thoroughness and accountability against resource constraints and human factors. Legal experts underscore that these concerns bear heavily on procurement decisions for AI technologies, clinical protocol development, and strategies around litigation or settlement in cases of alleged malpractice.
The researchers purposely focused on radiology, given its advanced state of AI integration compared to other specialties. Radiology’s heavy reliance on imaging data and algorithmic interpretations provides a fertile ground for studying how human and machine cognition intertwine in high-stakes decision-making. Still, the implications likely extend across healthcare disciplines as AI becomes more entrenched in diagnostics and treatment paradigms.
Beyond measuring liability perceptions, prior work by this team revealed that jurors are less inclined to hold radiologists accountable when their diagnoses align with AI outputs, whereas disagreement with AI seemingly increases perceived culpability. Disclosure of AI error rates to juries also modulates these judgments, underscoring that transparency around AI capabilities and limitations is critical in fostering informed legal assessments.
Moreover, other studies illustrate that AI not only affects post-hoc liability views but also shapes real-time clinical decisions. Physicians confronted with AI recommendations often adjust treatment plans, reflecting how decision-making authority becomes shared or contested between human experts and algorithmic systems. This evolving dynamic demands continuous scholarly attention as technology and societal norms co-evolve.
Corresponding author Michael Bernstein of Brown University notes that public and professional attitudes toward AI’s diagnostic role—and consequent legal ramifications—are swiftly changing. Such shifts necessitate agile policy frameworks and adaptive clinical workflows that integrate human factors principles to optimize outcomes and minimize unintended consequences.
The broader challenge lies in reconciling AI’s promise to enhance diagnostic accuracy and patient safety with the multifaceted risks posed by legal uncertainty, workflow disruption, and cognitive biases. As this research compellingly demonstrates, successful human-AI integration must address not only technological efficacy but also the social, legal, and psychological dimensions that govern stakeholder acceptance and trust.
Future investigations will likely explore how different organizational policies, educational initiatives, and legal standards can harmonize with emerging AI capabilities to foster a healthcare environment where technology acts as a reliable, transparent ally rather than a source of liability anxiety or defensive practice patterns. The evolving jurisprudence around AI in medicine will be pivotal in shaping an ethical, effective, and equitable future for patient care.
As AI continues its advance through medicine, understanding the nuanced relationships among clinical workflows, legal accountability, and human judgment will become ever more crucial. This study offers timely, empirically grounded guidance on how thoughtful integration strategies can help navigate the complex terrain where innovation meets responsibility.
Subject of Research: People
Article Title: The radiologist–AI workflow and the risk of medical malpractice claims
News Publication Date: 10-Mar-2026
Web References: DOI 10.1038/s44360-026-00085-2
Keywords: Artificial intelligence, Health care, Health care costs, Medical economics, Health care delivery, Health care policy, Litigation, Legal system, Radiology
