An observational study published in The Lancet Gastroenterology & Hepatology has uncovered a potential downside to the routine use of artificial intelligence (AI) in colonoscopy. The research indicates that, paradoxically, the very technology designed to raise adenoma detection rates may contribute to a deskilling effect among experienced endoscopists: a measurable decline in their ability to detect precancerous growths when AI assistance is not available. The finding raises critical questions about the long-term implications of AI integration in clinical practice.
Colonoscopy remains a cornerstone of colorectal cancer prevention, functioning through the identification and removal of adenomas—precancerous polyps that, if left undetected, carry a heightened risk of developing into malignant tumors. Recent years have witnessed an accelerating adoption of AI-driven tools in endoscopic procedures. These systems use computer vision algorithms and deep learning models to scan endoscopic images in real time, flagging suspicious lesions and thereby helping practitioners achieve higher adenoma detection rates (ADR). While short-term data from randomized controlled trials consistently demonstrate that AI assistance improves detection outcomes, the effect of clinicians' continuous reliance on such systems had not been studied in depth until now.
The multicenter study followed 19 experienced endoscopists at four colonoscopy centers in Poland, drawing on a dataset of more than 2,200 colonoscopies performed between September 2021 and March 2022. Crucially, the study design allowed a direct comparison of colonoscopies performed without AI assistance before and after the introduction of routine AI use. That comparison exposed a drop in ADR during non-AI-assisted procedures, from 28.4% before AI exposure to 22.4% afterwards—an absolute decline of 6 percentage points, or a relative decline of roughly 20%. This shift signals a tangible erosion of endoscopists' intrinsic diagnostic capabilities, potentially attributable to over-reliance on AI systems.
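As a quick sanity check of the reported figures, the absolute and relative declines follow directly from the two ADR values (a minimal sketch; the variable names are illustrative, not taken from the study):

```python
# ADR (%) in non-AI-assisted colonoscopies, as reported in the study
adr_before_ai = 28.4  # before routine AI use was introduced
adr_after_ai = 22.4   # after routine AI use was introduced

# Absolute decline, in percentage points
absolute_drop = adr_before_ai - adr_after_ai

# Relative decline, as a percentage of the pre-AI baseline
relative_drop = absolute_drop / adr_before_ai * 100

print(f"Absolute decline: {absolute_drop:.1f} percentage points")
print(f"Relative decline: {relative_drop:.1f}% of baseline")
```

The relative figure works out to about 21%, consistent with the "roughly 20% relative decline" described in the text.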
The underlying mechanisms of this deskilling phenomenon may be multifaceted. Cognitive offloading—whereby specialists subconsciously delegate critical decision-making processes to AI—could blunt the development and maintenance of their own visual recognition skills. Over time, this may culminate in diminished vigilance or pattern recognition acuity in the absence of AI prompts. Moreover, the real-time AI alerts might inadvertently recalibrate clinicians’ attention thresholds, leading them to defer to algorithmic suggestions rather than applying their own expertise fully. This interplay warrants deeper neurocognitive investigations to parse out the behavioral adaptations induced by AI integration.
Beyond the clinical implications, these findings also compel the medical community to rethink existing training paradigms. Historically, expertise in endoscopy has been honed through rigorous hands-on experience and continual deliberate practice. The infusion of AI-based decision support tools disrupts this tradition, necessitating new frameworks that balance augmented intelligence use with the preservation of core procedural skills. This might entail periodic skill-refreshment protocols or hybrid training models that intermittently simulate AI absence to counteract skill degradation.
Just as significantly, the data challenge the interpretation of prior randomized trials that established the superiority of AI-assisted colonoscopy by juxtaposing it against non-AI-assisted procedures. Professor Yuichi Mori, co-author of the study, highlights the possibility that those trials may have inadvertently underestimated the baseline performance of unaided colonoscopies: the endoscopists involved could have been influenced by their concurrent exposure to AI, thereby depressing their unaided detection skills.
While the study's observational nature precludes definitive causal inferences, its findings are too consequential to ignore. Patient outcomes depend on a delicate balance between technology-assisted enhancement and the retention of human expertise. Importantly, the study participants were highly experienced endoscopists who had each performed more than 2,000 colonoscopies, suggesting that the deskilling effect could be even more pronounced among less experienced practitioners, whose dependence on AI might run deeper.
Equally important is the call by the researchers for comprehensive longitudinal assessments of AI’s impact across diverse medical specialties. The rapid proliferation of AI tools spans myriad clinical domains—from radiology to dermatology and pathology—and similar deskilling risks might be latent elsewhere, posing systemic challenges for healthcare delivery models worldwide.
Dr. Marcin Romańczyk, lead author and researcher at the Academy of Silesia, emphasizes that as AI becomes ubiquitously integrated into medical workflows, it is imperative to scrutinize the nuanced dynamics governing clinician-AI interactions. Only through such rigorous inquiry can effective strategies be devised to mitigate adverse consequences, preserve essential skills, and harness AI's transformative potential for positive patient outcomes.
The discourse around AI in medicine has predominantly spotlighted its capacity to enhance diagnostic accuracy and operational efficiency. This study, however, introduces a critical counterpoint, tempering widespread optimism with a cautionary message about unintended clinical consequences. Dr. Omer Ahmad of University College London underscores this viewpoint in a linked commentary, advocating a judicious and balanced approach to AI deployment that safeguards against the "quiet erosion" of fundamental clinical competencies.
Integrating AI is not a binary choice but rather a complex synthesis of human and machine cognition. The challenge resides not only in technological sophistication but also in designing adaptive clinician education, workflow modifications, and system architectures that promote synergistic performance without compromising independent skill retention. Emerging research on explainable AI and human-centered design may provide pathways for developing more intuitive systems that reinforce, rather than replace, expert clinical judgment.
The implications of these findings extend beyond colonoscopy to broader debates on automation and professional competence in healthcare. As AI tools evolve from assistive aids toward autonomous agents, the ethical and practical considerations surrounding deskilling will intensify. Continuous monitoring of AI’s impact on clinician proficiency and patient safety metrics must be embedded in quality assurance frameworks.
In conclusion, while AI undeniably bolsters the capabilities of health professionals in detecting colorectal adenomas, this pivotal study reveals a double-edged sword: the promise of enhanced detection risks being undermined by an erosion of critical endoscopist skills when AI support is withdrawn. Medical institutions, AI developers, and policymakers must collaboratively navigate these challenges to optimize AI's benefits while preserving the invaluable expertise of human practitioners.
Subject of Research: People
Article Title: Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: a multicentre, observational study
News Publication Date: 12-Aug-2025
Web References:
http://dx.doi.org/10.1016/S2468-1253(25)00133-5
Keywords: Artificial intelligence, Cancer screening, Colon cancer, Cancer, Health and medicine, Clinical medicine, Health care, Oncology, Colorectal cancer