A groundbreaking study recently published in Radiology: Imaging Cancer, a journal under the Radiological Society of North America (RSNA), sheds new light on patient attitudes toward the integration of artificial intelligence (AI) in screening mammography. This comprehensive survey delves into the perceptions, trust factors, and concerns of a large and diverse patient cohort regarding the role of AI in breast cancer screening, highlighting complex sociocultural and clinical nuances that could shape the future of AI implementation in radiological practices.
Artificial intelligence has made remarkable strides in diagnostic radiology, with algorithms now capable of detecting subtle abnormalities in mammographic images with impressive accuracy. Despite these technological advancements, real-world adoption and acceptance of AI-assisted diagnostic tools remain limited, largely due to concerns about data privacy, inherent biases within AI models, and a general lack of understanding about how these systems function. Researchers have often overlooked the critical viewpoint of patients — the ultimate recipients of these diagnostic technologies — until now.
The lead author, Dr. Basak E. Dogan, a clinical professor of radiology and breast imaging research director at the University of Texas Southwestern Medical Center, emphasizes that patient trust is a cornerstone of successful AI integration. “Without patient confidence in AI, we are likely to see disruptions in adherence to recommended screening schedules, which can negatively impact early detection and health outcomes,” she remarks. This study represents an essential step in evaluating the patient voice to inform responsible AI adoption.
To explore patient sentiments, Dr. Dogan and her team developed a robust 29-question survey administered to patients undergoing breast cancer screening mammograms at their institution over a seven-month period in 2023. The survey was meticulously designed to capture participants’ demographic details, medical and familial breast cancer history, as well as their knowledge and views on AI, allowing for a granular understanding of how these variables interplay in trust formation.
Results from the 518 completed surveys revealed a cautiously optimistic stance toward AI. A striking 71% of respondents favored using AI as a complementary "second reader" alongside radiologists, suggesting an openness to augmented diagnostic workflows. However, fewer than 5% were comfortable with AI independently interpreting their mammograms. This highlights an enduring preference for human oversight, underscoring the perceived value of personal interaction in clinical care and concerns around transparency, algorithmic bias, and data privacy.
Importantly, the study uncovered clear associations between patient demographics and AI acceptance. Participants with education beyond the college level, or with greater self-reported familiarity with AI technologies, were roughly twice as likely to endorse AI integration in screening. This correlation underscores the role of education and knowledge dissemination in shaping AI receptivity and points toward the necessity of targeted informational campaigns to alleviate fears and misconceptions.
Racial and ethnic background emerged as a significant determinant of trust in AI. Hispanic and non-Hispanic Black respondents reported substantially higher apprehensions regarding AI bias and the security of their personal health data. This disparity most likely contributes to their comparatively lower acceptance rates of AI in mammography interpretation. These findings stress the imperative for culturally sensitive patient engagement and equity-focused AI development to mitigate distrust in traditionally underserved communities.
Personal and familial medical history adds further nuance to patient perspectives. Individuals with close relatives diagnosed with breast cancer tend to exhibit heightened vigilance, often requesting additional mammographic reviews regardless of whether an AI system or a radiologist detected abnormalities. Intriguingly, these patients generally demonstrate strong trust in both AI and human evaluations when their mammograms yield negative results, illustrating a complex but coherent trust framework shaped by personal risk awareness.
Conversely, patients with a history of abnormal mammograms display an increased propensity to seek follow-up diagnostics when discrepancies arise between AI and radiologist opinions, especially if the AI flags a potential abnormality not noticed by the human reader. This behavioral trend underscores the sensitivity of patients with prior screening complications and points to the importance of clear communication strategies for navigating conflicting diagnostic outputs.
The implications of these findings are profound for the future integration of AI in mammographic screening. The variation in trust based on sociodemographic and clinical factors mandates a personalized approach to AI deployment, one that aligns technological innovation with patient-centered care practices. Healthcare providers and AI developers must collaborate proactively to ensure that AI tools are transparent, ethically designed, and culturally competent.
Furthermore, this study advocates for continuous, dynamic patient engagement as AI technology evolves. Tracking shifts in patient perceptions over time will be critical to refining AI interfaces and educational materials, securing broad-based acceptance, and ultimately improving clinical outcomes. Dr. Dogan underscores that “trust in AI is not monolithic but highly individualized, contingent on prior experiences, educational background, and cultural context.”
Incorporating these patient perspectives into AI implementation policies holds promise for elevating the standard of breast cancer screening. Doing so can foster greater adherence to screening recommendations, reduce disparities in care, and strengthen confidence in AI-assisted radiologic interpretation. This research thus bridges a vital gap by foregrounding the patient experience in the rapidly advancing landscape of AI-assisted medical imaging.
As AI continues to revolutionize diagnostic radiology, the interplay between cutting-edge technology and human factors remains paramount. This study from the University of Texas Southwestern Medical Center pioneers a patient-centered framework to guide ethical and effective AI integration in mammography, heralding a future where technology and trust coalesce to combat breast cancer more efficiently. Stakeholders across healthcare must heed these insights to ensure that AI not only innovates but also resonates with the people it aims to serve.
Subject of Research: People
Article Title: Patient Perception of Artificial Intelligence Use in Interpretation of Screening Mammograms: A Survey Study
News Publication Date: 18-Apr-2025
Web References:
- Radiology: Imaging Cancer
- Radiological Society of North America (RSNA)
- RadiologyInfo.org Mammography Information
References:
“Patient Perception of Artificial Intelligence Use in Interpretation of Screening Mammograms: A Survey Study.” Collaborators: B. Bersu Ozcan, M.D., Yin Xi, Ph.D., Emily E. Knippa, M.D.
Keywords: Mammography, Artificial intelligence, Radiology, Cancer screening, Cancer patients, Breast cancer, Social surveys