In an era where artificial intelligence (AI) is rapidly permeating healthcare, gauging the trustworthiness of AI systems has become paramount. A groundbreaking study published in BMC Psychology in 2025 delves into this very challenge, presenting a newly developed and rigorously validated scale to measure trust in AI-based follow-up systems integrated within hospital information frameworks. This research marks a significant stride toward understanding how patients and healthcare providers perceive AI’s role in post-treatment monitoring and continuity of care.
The integration of AI technologies into hospital information systems is reshaping how follow-up care is managed. Traditionally, follow-up processes relied on manual interventions, patient self-reporting, or sporadic clinical visits. AI-enabled solutions promise continuous, personalized monitoring that can flag potential complications early, optimize resource allocation, and even predict patient trajectories. The successful deployment of these systems, however, depends fundamentally on the degree of trust they foster among users.
Trust in AI is a multifaceted construct that transcends mere functionality. It encompasses users’ confidence in a system’s reliability, comprehensibility, and ethical handling of sensitive health data. Given the stakes in healthcare, mistrust can obstruct the adoption of potentially life-saving technologies. Recognizing this, the research team led by Xie, Guo, and Yang set out to distill trust into measurable dimensions tailored specifically to AI-based follow-up tools in hospital environments.
The researchers adopted a mixed-methods approach incorporating qualitative interviews with healthcare professionals and patients, alongside quantitative psychometric analyses. Their goal was to capture a holistic view of trust, from initial impressions and interface interactions to perceptions of privacy safeguards and algorithmic transparency. This comprehensive approach helped ensure that the resulting scale would be meaningful to the varied stakeholders in a healthcare setting.
One of the technical challenges addressed by the study was defining the latent dimensions of trust unique to AI follow-up systems. Unlike conventional medical technologies, AI introduces algorithmic opacity and dynamic decision-making, which complicate users’ ability to assess its actions. The researchers navigated this complexity by anchoring their scale to constructs such as predictability, integrity, benevolence, and technological competence, each operationalized through distinct questionnaire items, as the sketch below illustrates.
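To make this concrete, the following Python sketch shows one way such constructs might be operationalized as questionnaire items. The construct names follow the paragraph above, but the item wordings are illustrative inventions, not the published scale’s actual items.

```python
# Hypothetical mapping of trust constructs to sample questionnaire items.
# Wordings are illustrative only; the published scale's items differ.
TRUST_CONSTRUCTS: dict[str, list[str]] = {
    "predictability": [
        "The follow-up system behaves consistently over time.",
        "I can anticipate what the system will recommend.",
    ],
    "integrity": [
        "The system handles my health data honestly and transparently.",
    ],
    "benevolence": [
        "The system acts in my best interest as a patient.",
    ],
    "technological_competence": [
        "The system reliably flags issues that need clinical attention.",
    ],
}
```

Responses to items like these, typically collected on a Likert scale, become the raw scores on which the subsequent psychometric analyses operate.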
The validation phase involved administering the draft trust scale to several hundred participants across multiple hospital sites, spanning diverse demographic and professional backgrounds. Statistical analyses included exploratory and confirmatory factor analysis to establish the scale’s underlying structure, along with reliability testing to ensure consistency across contexts. These rigorous psychometric methods reinforced the scale’s robustness and generalizability.
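As an illustration of that analytic pipeline, the sketch below runs an exploratory factor analysis and computes Cronbach’s alpha on simulated Likert responses. It assumes the Python factor_analyzer package; the sample size, item count, and four-factor solution are stand-ins for the example, not the study’s actual figures.

```python
# Minimal psychometric sketch on simulated data (not the study's dataset).
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer

rng = np.random.default_rng(42)
# Simulate 300 participants answering 16 five-point Likert items.
items = pd.DataFrame(
    rng.integers(1, 6, size=(300, 16)).astype(float),
    columns=[f"item_{i + 1}" for i in range(16)],
)

# Exploratory factor analysis with an oblique rotation; four factors
# mirror the four hypothesized trust constructs. (Random data will show
# weak loadings; real responses would reveal the latent structure.)
efa = FactorAnalyzer(n_factors=4, rotation="oblimin")
efa.fit(items)
print(pd.DataFrame(efa.loadings_, index=items.columns).round(2))

def cronbach_alpha(subscale: pd.DataFrame) -> float:
    """Internal-consistency reliability for one subscale (wide format)."""
    k = subscale.shape[1]
    item_var = subscale.var(axis=0, ddof=1).sum()
    total_var = subscale.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

# Reliability of a hypothetical four-item subscale (e.g., predictability).
print(f"alpha = {cronbach_alpha(items.iloc[:, :4]):.2f}")
```

Confirmatory factor analysis, typically run on a held-out subsample, would then test whether the structure suggested by the exploratory step fits independent data.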
Importantly, the study highlights that trust is not static. User perceptions evolve through interaction with the AI system, influenced by real-world performance and the communication of AI-generated insights by clinicians. The dynamic nature of trust underscores the need for continuous evaluation frameworks in clinical deployments, moving beyond one-time usability assessments toward longitudinal trust monitoring.
This research also sheds light on ethical considerations critical to trust. The new scale includes items probing users’ confidence in data privacy protections and the fairness of AI algorithms. Given recent concerns about bias and data breaches in health tech, incorporating these elements into the measurement instrument offers a more nuanced view of trust dimensions that might otherwise be overlooked.
A pivotal finding from the study is the correlation between trust scores and user engagement metrics within the hospital information systems. Higher trust levels corresponded to greater patient adherence to follow-up recommendations and heavier clinician reliance on AI-derived alerts. This positions trust not merely as a theoretical construct but as a tangible driver of better healthcare outcomes and system efficiency.
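As a sketch of what such an association analysis looks like in practice: given per-participant trust scores and an engagement measure such as adherence to follow-up recommendations (both variables simulated and hypothetical here, not the study’s data), a simple correlation quantifies the relationship the study reports.

```python
# Illustrative trust-engagement correlation on simulated data; the
# variables and effect size are hypothetical, not the study's results.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(7)
trust = rng.normal(3.5, 0.6, size=250)             # mean trust score (1-5 scale)
adherence = 0.4 * trust + rng.normal(0, 0.5, 250)  # adherence proxy

r, p = pearsonr(trust, adherence)
print(f"Pearson r = {r:.2f} (p = {p:.3g})")
```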
From a technical perspective, this validated trust scale provides an invaluable tool for AI developers and hospital administrators. By pinpointing trust deficits through precise measurement, stakeholders can iteratively improve system design and user training to better align AI functionalities with user expectations and ethical standards.
Moreover, the methodology employed by the authors sets a precedent for trust measurement across other AI domains within healthcare, such as diagnostic support or treatment planning. Their framework can be adapted to diverse applications, fostering a uniform standard for trust evaluation that could accelerate the safe and accepted adoption of AI technologies across medical disciplines.
The societal implications of quantifying trust in AI hospital systems are profound. In an increasingly digital health ecosystem, empowering patients and providers with confidence in AI tools ensures equitable access and reduces technology-induced disparities. This aligns with broader public health goals of harnessing AI responsibly to enhance patient care without compromising human values or autonomy.
Looking forward, the authors advocate for incorporating their trust scale into regulatory and certification processes for AI medical devices. Such integration would facilitate objective, user-centered benchmarks that complement traditional safety and efficacy criteria, ultimately fostering greater transparency and accountability among AI solution vendors.
While this study primarily addresses hospital-based follow-up systems, its insights reverberate through the entire spectrum of digital healthcare innovations. As machine learning models grow more complex and ubiquitous, scalable mechanisms for measuring and nurturing trust will be indispensable to the technology’s social license to operate.
In conclusion, the development and validation of a dedicated trust scale for AI-based follow-up in hospital information systems epitomize a critical advancement bridging technology and human factors research. It charts a pragmatic pathway to ascertain and enhance the often intangible human experience of trust, thereby underpinning the sustainable adoption of AI innovations that promise to transform healthcare delivery worldwide.
Subject of Research: Trust in AI-based follow-up systems within hospital information systems
Article Title: Trust in artificial intelligence-based follow-up in hospital information systems: development and validation of a new scale
Article References:
Xie, L., Guo, T., Yang, Y. et al. Trust in artificial intelligence-based follow-up in hospital information systems: development and validation of a new scale. BMC Psychol (2025). https://doi.org/10.1186/s40359-025-03855-x