Advancements in artificial intelligence (AI) are pushing the boundaries of healthcare by transforming how motivational interviewing (MI) is delivered to individuals seeking to change health-related behaviors. MI is a well-established, patient-centered counseling technique designed to help individuals explore and resolve ambivalence around behavior change, empowering them to find their own intrinsic motivation. Although proven effective in various clinical environments, traditional MI faces significant barriers such as limited clinician time, training complexity, and reimbursement challenges. Emerging AI-driven digital tools, such as chatbots and virtual agents, are now bridging these gaps by offering scalable, accessible, and personalized behavioral support around the clock.
These AI-powered interventions replicate core aspects of motivational interviewing by engaging users in empathetic, nonjudgmental dialogues that foster reflection and readiness to change. The technology spectrum ranges from straightforward rule-based systems with scripted conversational flows to sophisticated natural language processing models, including state-of-the-art large language models (LLMs) such as GPT-3.5 and GPT-4. The latest systems produce remarkably human-like interactions, tailoring responses dynamically to emulate reflective listening, affirmations, and open-ended questioning, the hallmarks of skilled MI practitioners.
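To make that architecture concrete, the minimal sketch below shows one common way such a chatbot can be wired together: a system prompt encodes the MI stance (open-ended questions, affirmations, reflective listening, autonomy support), and the LLM generates each conversational turn. This is a hypothetical illustration, not code from any system in the review; the prompt wording, model choice, and use of the OpenAI Python SDK are all assumptions.

```python
# Minimal sketch of an LLM-based motivational interviewing chatbot.
# Hypothetical example, not drawn from any system in the review.
# Assumes the OpenAI Python SDK (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# System prompt encoding core MI techniques: open-ended questions,
# affirmations, reflective listening, and autonomy support.
MI_SYSTEM_PROMPT = (
    "You are a counselor using motivational interviewing. Respond with "
    "empathy and without judgment. Prefer open-ended questions, offer "
    "affirmations, reflect the user's own words back to them, and never "
    "lecture. Emphasize that any decision to change belongs to the user. "
    "Do not give medical advice; suggest a clinician for clinical questions."
)

def mi_reply(history: list[dict], user_message: str) -> str:
    """Append the user's turn and return the model's MI-style reply."""
    history.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(
        model="gpt-4",  # the review covers GPT-3.5/GPT-4-class models
        messages=[{"role": "system", "content": MI_SYSTEM_PROMPT}, *history],
        temperature=0.7,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(mi_reply(history, "I know I should quit smoking, but it helps me cope with stress."))
```

Keeping the full turn history in the request is what lets the model "reflect" earlier statements back to the user; a scripted rule-based system would instead select its next utterance from a fixed decision tree.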
A comprehensive scoping review conducted by researchers at Florida Atlantic University’s Charles E. Schmidt College of Medicine marks the first extensive synthesis of literature exploring AI systems designed to deliver MI for health behavior modification. This study catalogued the landscape of AI interventions, critically examined their adherence to MI principles, and assessed their reported impact on psychological and behavioral outcomes. The findings, published in the Journal of Medical Internet Research, illuminate both the promise and current limitations of AI-enhanced motivational interviewing.
The analysis revealed a predominance of chatbot implementations, complemented by virtual agents and mobile applications. These tools harness diverse technological frameworks, from deterministic algorithms to generative AI models. While all aimed to simulate the MI process, the rigor of their empirical evaluations varied significantly. Most studies emphasized short-term psychological constructs such as users’ readiness to change and their feeling of being understood—factors essential for initiating behavior change. However, there was a striking paucity of rigorous data on sustained behavioral outcomes, with long-term follow-up either absent or insufficiently detailed, highlighting a critical gap in the evidence base.
Evaluation of “MI fidelity,” or the extent to which AI systems adhere to authentic MI protocols, emerged as a complex challenge. Traditional fidelity assessments require detailed human coding and expert review, which are resource-intensive and do not scale well to the volume of AI interactions. The reviewed studies employed various fidelity evaluation strategies, yet few systematically documented how closely conversational agents replicated the nuanced empathic and autonomy-supportive elements fundamental to MI. This raises essential questions about the quality and ethical responsibility of AI-driven counseling, especially in sensitive health contexts.
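To see what an automated fidelity check would have to approximate, consider the labeling task human raters perform with validated instruments such as the Motivational Interviewing Treatment Integrity (MITI) scale: assigning each counselor utterance a behavior code and summarizing MI-consistent behavior across a session. The toy sketch below is hypothetical and far cruder than any validated coding scheme; it illustrates the task itself, not a real method from the reviewed studies.

```python
# Toy illustration of utterance-level MI behavior coding, the kind of
# labeling human raters perform with instruments such as the MITI.
# The heuristics below are hypothetical placeholders, far cruder than
# validated coding schemes; they only sketch the shape of the task.
import re

OPEN_STARTERS = ("what", "how", "why", "tell me", "describe")
AFFIRMATION_CUES = ("you've worked hard", "that takes courage", "good for you")

def code_utterance(utterance: str) -> str:
    """Assign a rough MI behavior code to a single counselor utterance."""
    text = utterance.strip().lower()
    if text.endswith("?"):
        return "open question" if text.startswith(OPEN_STARTERS) else "closed question"
    if any(cue in text for cue in AFFIRMATION_CUES):
        return "affirmation"
    if re.match(r"(it sounds like|you feel|you're saying)", text):
        return "reflection"
    return "other"

transcript = [
    "It sounds like smoking has been your main way of handling stress.",
    "What would a smoke-free morning look like for you?",
    "Do you smoke every day?",
]
codes = [code_utterance(u) for u in transcript]
# A simple fidelity summary: MI-consistent codes as a share of all turns.
mi_consistent = sum(c in {"open question", "reflection", "affirmation"} for c in codes)
print(codes, f"{mi_consistent}/{len(codes)} MI-consistent")
```

Even this crude version makes the scaling problem visible: real fidelity coding must also judge the quality and context of each reflection, which is precisely the expert labor that does not scale to millions of AI conversations.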
Another important theme from the review concerns safety and accuracy in AI-generated content. Only a minority of the studies addressed potential risks such as misinformation and inappropriate or harmful responses, or described the safeguarding mechanisms in place to mitigate them. As AI chatbots increasingly interface with vulnerable populations, ensuring content reliability and ethical standards becomes paramount. Without transparent safeguards, there is a danger that users will receive advice that is misleading or inconsistent with established clinical guidelines.
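What such safeguarding mechanisms might look like in practice can be sketched simply: a layered gate that screens both the user's request and the model's draft reply before anything is shown. The example below is hypothetical and not drawn from the reviewed studies; the red-flag phrases, the fallback message, and the use of a hosted moderation endpoint via the OpenAI Python SDK are all assumptions.

```python
# Minimal sketch of a pre-send safety gate for chatbot replies.
# Hypothetical design, not from the reviewed studies. Assumes the OpenAI
# Python SDK (v1+); the red-flag phrases are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

# Simple rule layer: phrases that should trigger a human referral rather
# than an AI-generated answer (e.g., dosing or diagnostic requests).
CLINICAL_RED_FLAGS = ("dosage", "diagnose", "stop taking my medication")

FALLBACK = (
    "I'm not able to advise on that. Please talk with a clinician or, "
    "if you're in crisis, contact your local emergency services."
)

def safe_reply(draft_reply: str, user_message: str) -> str:
    """Return the draft reply only if it clears both safety layers."""
    # Layer 1: keyword rules on the user's request.
    if any(flag in user_message.lower() for flag in CLINICAL_RED_FLAGS):
        return FALLBACK
    # Layer 2: a hosted moderation model screens the generated reply.
    moderation = client.moderations.create(input=draft_reply)
    if moderation.results[0].flagged:
        return FALLBACK
    return draft_reply
```

Layering cheap deterministic rules in front of a learned moderation model is a common pattern: the rules catch known clinical red lines predictably, while the model covers harms the rules never anticipated.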
Despite these limitations, users generally appreciated the convenience, accessibility, and structured nature of AI systems. Participants frequently cited the benefit of 24/7 availability and the absence of the perceived judgment that can deter people from seeking traditional behavioral health care. However, many users also noted the lack of a “human touch” and of the subtle relational dynamics intrinsic to face-to-face MI sessions, including nonverbal cues and emotional attunement that AI cannot yet fully replicate.
The population samples studied varied, covering general adult populations, college students, and individuals with specific health conditions. Smoking cessation was the most common target behavior, reflecting the persistent public health demand for effective interventions. Other focal areas included reduction of substance use, stress management, and various lifestyle modifications critical to chronic disease prevention and management. This diversity underscores AI’s broad applicability but also points to the need for tailored, population-specific designs.
The report highlights a pivotal juncture in the evolution of AI within behavioral medicine. The integration of large language models, capable of generating highly contextual and sophisticated dialogues, opens unprecedented opportunities for scalable, personalized health coaching. Nevertheless, this technology’s rapid adoption must be approached with careful scientific scrutiny to ensure fidelity to evidence-based approaches, safeguard users, and genuinely empower meaningful behavior change.
Lead researcher Dr. Maria Carmenza Mejia emphasized the importance of dissecting the specific MI techniques embodied in AI tools. Her team mapped the use of essential MI components, such as open-ended questions, affirmations, and reflective listening, within AI dialogues, while also critically assessing fidelity measures. This granular analysis provides crucial insights into how AI systems perform compared with human counselors and identifies areas needing improvement to match the therapeutic depth and relational effectiveness of traditional MI.
Looking forward, the study advocates for a multidisciplinary research agenda that includes not only AI development but also comprehensive evaluation frameworks prioritizing fidelity, safety, efficacy, and ethical considerations. Scaling up AI interventions’ reach must be balanced by rigorous clinical validation and transparency regarding their limitations. By combining technological innovation with robust behavioral science frameworks, AI can play a transformative role in expanding access to motivational interviewing, ultimately supporting a larger segment of the population struggling with behavior change.
As AI continues to mature, its potential to democratize access to motivational interviewing and empower individuals toward healthier habits is clear, but so too are the challenges. From fidelity assessment to ensuring safety and replicating the nuanced empathy of human counselors, significant work remains. Only through sustained research, open collaboration, and ethical vigilance can these AI tools realize their full promise to revolutionize health behavior change and improve public health outcomes globally.
Subject of Research: People
Article Title: New Doc on the Block: Scoping Review of AI Systems Delivering Motivational Interviewing for Health Behavior Change
News Publication Date: 16-Sep-2025
Web References:
Journal of Medical Internet Research Article
Florida Atlantic University
References:
DOI: 10.17605/OSF.IO/G9N7E
Image Credits: Florida Atlantic University
Keywords: Health and medicine, Psychological science, Behavioral psychology, Substance abuse, Human social behavior, Stress management, Artificial intelligence, Generative AI, Personality psychology, Motivation, Substance related disorders