In the rapidly evolving field of medical technology, artificial intelligence (AI) stands out as a groundbreaking force poised to revolutionize diagnostics, treatment planning, and patient care. While numerous investigations have explored the perspectives of physicians regarding AI, the voices of patients—the ultimate beneficiaries—have remained largely unheard. A pioneering multinational study, led by a consortium from the Technical University of Munich (TUM), has now bridged this gap by surveying nearly 14,000 hospital patients across six continents. This extensive research reveals nuanced insights into patient acceptance of AI in healthcare and underscores critical factors influencing public sentiment toward these emerging technologies.
Central to the study’s findings is a correlation between patients’ self-assessed health status and their attitudes toward AI-driven medical interventions. Specifically, individuals perceiving their health as poor or very poor demonstrated greater skepticism and negativity toward AI use compared to healthier counterparts. The data show that over half of patients in very poor health rejected medical AI, with more than a quarter expressing “extremely negative” views. Conversely, those in very good health were markedly more receptive, with only a tiny fraction harboring negative opinions. This observation invites deeper inquiry into the psychological and experiential dimensions behind patients’ apprehensions, especially since those bearing heavier illness burdens may feel more vulnerable to, and less trusting of, automated systems.
Notably, the international researchers targeted radiology departments to capture a comprehensive snapshot encompassing a wide spectrum of medical conditions. Radiology, as a cornerstone of modern diagnostics through modalities such as X-ray, computed tomography (CT), and magnetic resonance imaging (MRI), relies increasingly on AI algorithms to detect, quantify, and classify pathologies. This strategic setting enabled researchers to assess the perspectives of patients undergoing diagnostic procedures integral to diverse clinical specialties, thereby ensuring the study’s broad applicability. The scale, spanning 74 clinics in 43 countries, marks this as one of the largest global explorations into patient attitudes on AI in medicine to date.
Gender differences emerged subtly from the data: male respondents were marginally more inclined to embrace AI applications than female respondents, with approval rates of 59.1% and 55.6%, respectively. More strikingly, familiarity with and understanding of AI technologies dramatically influenced acceptance levels. Among patients who rated themselves as highly knowledgeable about AI, a remarkable 83.3% expressed positive views regarding its integration in medical contexts. This trend underscores a critical linkage between digital literacy and openness to emerging healthcare paradigms, highlighting an imperative for enhanced patient education and transparent communication about the capabilities and limitations of AI tools.
The study also delved into critical principles patients deem necessary for AI deployment in clinical environments. Chief among these is the demand for explainability—70.2% of respondents insisted that AI systems should be transparent in their decision-making processes, allowing users, including patients and physicians, to comprehend how conclusions are reached. This preference for interpretable AI aligns with ongoing technical discussions in the field, emphasizing that opaque or “black-box” models risk eroding trust and may face resistance even if diagnostically accurate. Patients’ insistence that AI tools complement rather than replace physician expertise—favored by 72.9%—further illustrates the desire to preserve human oversight and relational aspects of care.
Interestingly, the survey introduced hypothetical scenarios in which human clinicians and AI systems possessed equal diagnostic accuracy. Even under such idealized conditions, only 4.4% of respondents supported diagnoses made exclusively by AI, while a mere 6.6% preferred diagnoses made entirely without AI assistance. This pattern signals that patients envision AI less as an autonomous agent and more as an augmentative adjunct, shaping a hybrid model of human-machine collaboration in healthcare decision-making. The findings resonate with ethical frameworks advocating augmented intelligence rather than full automation in medicine.
The timing of the survey in 2023, amid the rapid rise of large language models and conversational AI, constitutes a notable methodological caveat. As Dr. Felix Busch, the study’s lead author, and his colleagues acknowledge, public attitudes toward AI may have shifted since data collection, influenced by growing media exposure, consumer AI interactions, and evolving expectations around healthcare technology. The COMFORT consortium plans follow-up studies using the same questionnaire to monitor longitudinal trends and better align AI development with evolving patient perspectives, thereby ensuring patient-centered innovation.
Underneath the surface of statistical summaries lies a complex interplay of psychological and experiential factors shaping patients’ skepticism, particularly among those with severe illness. The study posits that factors such as prior experiences within healthcare systems, the emotional toll of chronic or terminal conditions, and broader societal discourses regarding technology’s role in life-and-death decisions likely contribute to these attitudes. Future qualitative research could help unpack these layers, offering clinicians and developers richer insights to tailor AI tools sensitively and ethically.
From a technical standpoint, the preference for explainable AI speaks directly to current challenges in machine learning interpretability. Medical AI models increasingly employ deep learning architectures capable of parsing complex image and clinical data patterns, yet these systems often function as inscrutable black boxes. Achieving explainability involves integrating methods such as saliency mapping, attention mechanisms, or rule-based explanations that articulate how input features influence outputs. Engineering these features is not merely a user interface concern but a core research trajectory, essential for regulatory approval and clinical adoption.
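To make the saliency-mapping approach concrete, here is a minimal sketch of gradient-based saliency in PyTorch. It is illustrative only and not the study’s pipeline: the ResNet-18 backbone and the random input tensor are hypothetical stand-ins for a trained diagnostic model and a real scan.

```python
# A minimal, illustrative gradient-based saliency sketch in PyTorch.
import torch
from torchvision import models

# Stand-in for a trained diagnostic imaging model (untrained weights here).
model = models.resnet18(weights=None)
model.eval()

# Stand-in for a preprocessed scan: batch of 1, 3 channels, 224x224 pixels.
image = torch.rand(1, 3, 224, 224, requires_grad=True)

# Forward pass, then isolate the score of the top predicted class.
logits = model(image)
top_class = logits[0].argmax()
score = logits[0, top_class]

# Backpropagate the class score to the input pixels.
score.backward()

# Saliency map: per-pixel gradient magnitude, maximized over color channels.
# High values mark pixels whose perturbation most changes the prediction.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)  # shape (224, 224)
```

In practice, such a map would be overlaid on the original image so a radiologist can check whether the model’s attention falls on clinically plausible regions, one concrete way to meet the explainability demand voiced by 70.2% of respondents.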
Moreover, the study illuminates the critical role of trust in AI-human medical partnerships. Trust emerges as a multidimensional construct involving reliability, ethical transparency, perceived competence, and communication quality. AI developers must address these dimensions proactively, building systems that align with clinical workflows and enhance rather than disrupt the physician-patient relationship. Human-centered design strategies, participatory development involving patients, and transparent reporting mechanisms will be key to fostering acceptance.
Ethical considerations also permeate the discussion. Patients’ reluctance to fully cede clinical decisions to AI underscores persistent fears regarding autonomy, accountability, and the depersonalization of care. Ensuring that AI deployment safeguards patient rights, respects agency, and maintains avenues for human judgment will shape future regulatory guidelines and hospital policies. The study’s multinational scope highlights potential cultural variances in these attitudes, advocating for context-sensitive implementation rather than one-size-fits-all solutions.
Finally, this landmark research provides a critical evidence base for stakeholders across healthcare, policy, and technology sectors grappling with AI integration. By foregrounding the patient perspective on an unprecedented scale, the study challenges the field to move beyond clinician-centric narratives and embrace inclusive dialogues centered on human experience. It also signals that technical innovation in medical AI must be accompanied by strategic communication, transparent design, and ethical stewardship to realize AI’s transformative potential responsibly.
As AI continues to advance and permeate deeper into everyday clinical practice, the insights gleaned from this global survey serve as a clarion call. Medical AI must be explainable, human-centered, and responsive to patient concerns, especially for the most vulnerable populations. Addressing these imperatives will not only fuel technological progress but also underpin the social license critical for AI to fulfill its promise in healthcare’s future.
Subject of Research: People
Article Title: Multinational Attitudes Toward AI in Health Care and Diagnostics Among Hospital Patients
News Publication Date: 10-Jun-2025
Web References:
http://dx.doi.org/10.1001/jamanetworkopen.2025.14452
References:
Busch F, Hoffmann L, Xu L, et al. Multinational Attitudes Toward AI in Health Care and Diagnostics Among Hospital Patients. JAMA Network Open. 2025;8(6):e2514452. doi:10.1001/jamanetworkopen.2025.14452
Keywords: Artificial Intelligence, Medical AI, Patient Attitudes, Explainability, Diagnostic Radiology, Health Technology, AI Acceptance, Human-AI Collaboration, Medical Ethics, Machine Learning, Healthcare Innovation, Patient-Centered Care