A groundbreaking study published in the open-access journal PLOS One reveals a troubling layer of racial bias embedded deep within the language of electronic health records (EHRs). By analyzing over 13 million clinical notes from a Mid-Atlantic U.S. health system, researchers uncovered evidence that clinicians are more likely to question the credibility of Black patients compared to their White counterparts. This systemic pattern of documented doubt poses significant concerns about how unconscious biases may contribute to ongoing healthcare disparities affecting marginalized communities.
The research, led by Mary Catherine Beach and colleagues at Johns Hopkins University, utilized artificial intelligence (AI) tools to sift through more than 13 million clinical notes authored between 2016 and 2023. The AI algorithms were designed to flag phrases that implicitly cast doubt on a patient's credibility. Terms such as "claims," "insists," or "adamant about" served as indicators of skepticism toward a patient's sincerity, while expressions like "poor historian" signaled doubt about a patient's competence to coherently narrate their medical history. Though such cues appeared in less than 1% of all notes, they showed up disproportionately in notes about Black patients.
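As a rough illustration of phrase-level flagging of this kind (a simplified sketch, not the authors' actual NLP pipeline), the following Python snippet marks notes that contain a few of the example terms; the phrase lists and matching rules here are placeholder assumptions.

```python
import re

# Illustrative sketch only: a simplified keyword matcher for the kinds of
# credibility-related phrases described in the article. The study used more
# sophisticated NLP models; these lists and rules are placeholders.
SINCERITY_CUES = [r"\bclaims\b", r"\binsists\b", r"\badamant about\b"]
COMPETENCE_CUES = [r"\bpoor historian\b"]

def flag_note(text: str) -> dict:
    """Return which credibility-related cue categories appear in a note."""
    lowered = text.lower()
    return {
        "questions_sincerity": any(re.search(p, lowered) for p in SINCERITY_CUES),
        "questions_competence": any(re.search(p, lowered) for p in COMPETENCE_CUES),
    }

# Example usage on a made-up note snippet:
print(flag_note("Patient claims the pain started last week; poor historian."))
# -> {'questions_sincerity': True, 'questions_competence': True}
```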
Delving into the quantitative findings, the study reported that approximately 0.82% of all notes contained language undermining patient credibility. This figure comprised expressions questioning patient sincerity (0.48%) and expressions doubting patient competence (0.40%), categories that could co-occur within a single note. Notably, the adjusted odds ratios (aOR) revealed an unsettling racial disparity: notes about non-Hispanic Black patients had 29% higher adjusted odds of containing credibility-undermining language overall. Breaking it down further, the adjusted odds were 16% higher for language doubting sincerity and 50% higher for language doubting competence, compared with notes concerning White patients. Conversely, supportive language bolstering patient credibility was recorded less frequently in notes about Black individuals.
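For readers curious how an adjusted odds ratio of this kind is typically estimated, the sketch below fits a logistic regression of a note-level flag on patient race plus adjustment covariates; the file name, column names, and covariates are hypothetical placeholders rather than the study's actual variables or modeling choices.

```python
# Illustrative sketch (not the study's code): estimating an adjusted odds
# ratio (aOR) for credibility-undermining language with logistic regression.
# Column names ("undermining_flag" as a 0/1 outcome, "race", "age", "sex",
# "note_type") are hypothetical placeholders for an analysis's covariates.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

notes = pd.read_csv("note_level_flags.csv")  # hypothetical note-level table

# Logit model: outcome is whether a note contains credibility-undermining
# language; the race effect is adjusted for the other covariates.
model = smf.logit(
    "undermining_flag ~ C(race, Treatment(reference='White')) + age + C(sex) + C(note_type)",
    data=notes,
).fit()

# Exponentiated coefficients are adjusted odds ratios; an aOR of 1.29 for
# Black patients would correspond to 29% higher adjusted odds.
aor = np.exp(model.params).rename("aOR")
aor_ci = np.exp(model.conf_int())
print(pd.concat([aor, aor_ci], axis=1))
```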
This form of bias, documented within medical narratives, points to a systemic issue that could exacerbate unequal health outcomes. When clinician notes express implicit disbelief or skepticism, that framing risks influencing clinical decisions, treatment plans, and ultimately patient trust. Prior research has highlighted that perceived dismissal by healthcare providers undermines patient engagement and adherence, both pivotal for positive health trajectories. The current study extends this knowledge by spotlighting how such biases are mirrored in clinical documentation, a crucial yet often overlooked dimension.
Technically, the research team employed natural language processing (NLP) models trained to detect linguistic markers associated with credibility judgments. Although the models demonstrated high accuracy, the authors acknowledge limitations, citing potential misclassification errors that could underestimate or overestimate the prevalence of biased language. Furthermore, the study was conducted within a single healthcare system, which might limit generalizability. The influence of clinician demographics such as race, gender, or age on the use of credibility-undermining language was not explored, suggesting avenues for future inquiry.
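One common way to quantify the misclassification risk the authors acknowledge is to validate model flags against a human-annotated sample of notes. The sketch below, using hypothetical example labels rather than data from the study, shows how precision and recall might be computed for such a check.

```python
# Illustrative sketch: validating an NLP flag against human annotations on a
# sample of notes. Both lists below are hypothetical 0/1 judgments for the
# same notes, shown only to demonstrate the calculation.
from sklearn.metrics import confusion_matrix, precision_score, recall_score

human_labels = [1, 0, 0, 1, 0, 1, 0, 0]   # gold-standard annotations
model_flags  = [1, 0, 1, 1, 0, 0, 0, 0]   # NLP model output on the same notes

# Precision: of the notes the model flagged, how many were truly undermining?
# Recall: of the truly undermining notes, how many did the model catch?
print("precision:", precision_score(human_labels, model_flags))
print("recall:", recall_score(human_labels, model_flags))
print(confusion_matrix(human_labels, model_flags))
```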
Despite these constraints, Beach and colleagues emphasize that these findings likely constitute "the tip of the iceberg." They warn that unconscious biases entwined in medical documentation may silently perpetuate stigma against Black patients, subtly shaping care trajectories. The authors advocate for enhanced medical training that sensitizes future clinicians to implicit biases manifesting not only in interpersonal interactions but also in written communication. Moreover, as healthcare increasingly integrates AI-assisted documentation tools, they stress the necessity of designing these technologies to avoid perpetuating biased rhetoric.
Understanding the operational mechanics behind such AI tools is paramount. They help expedite the creation of patient notes, yet if trained on biased data, they risk inheriting and amplifying human prejudices. This feedback loop could normalize skewed portrayals of patient credibility, thereby institutionalizing disparities. The call to action involves developing ethical AI frameworks that actively mitigate bias, prompting rigorous validation of algorithmic outputs before clinical integration.
The implications of these discoveries extend beyond academic discourse to public health policy and clinical practice reform. Medical institutions must grapple with the recognition that documentation practices are not neutral; they reflect and reinforce social inequities. Interventions aiming to improve equity in healthcare outcomes should consider strategies addressing documentation bias, alongside broader structural reforms. For example, hospital systems can implement routine audits of clinical notes using AI tools to identify and remediate biased language patterns.
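To make the audit suggestion concrete, a minimal sketch (with hypothetical column names, not a prescribed workflow) could aggregate flag rates by patient group so that elevated rates are surfaced for review.

```python
import pandas as pd

# Illustrative sketch of a routine documentation audit: aggregate the rate of
# credibility-undermining flags by patient group. File and column names are
# hypothetical placeholders for a system's own note-level audit table.
audit = pd.read_csv("monthly_note_audit.csv")  # hypothetical audit export

rates = (
    audit.groupby("patient_race")["undermining_flag"]
    .agg(notes="size", flag_rate="mean")
    .sort_values("flag_rate", ascending=False)
)
print(rates)  # groups with elevated flag rates can be prioritized for review
```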
Furthermore, patients’ voices remain indispensable. Incorporating patient feedback mechanisms about their perceived treatment and representation in medical narratives might enhance transparency and foster mutual trust. Encouraging dialogues where patients can express concerns about how their accounts are documented and interpreted may act as an antidote to entrenched stigma. Ultimately, fostering an environment that respects and validates diverse patient narratives is foundational for equitable care.
The study also sheds light on the complex interface between language, power dynamics, and clinical judgment. Words possess the capacity to either empower or marginalize, especially in healthcare settings where documentation can influence diagnostic pathways and accessibility to resources. By rendering these dynamics visible through data-driven analyses, this research contributes critical insights into the subtleties of racial disparities.
In conclusion, the investigation by Beach et al. underscores the urgent need to confront the latent racial biases embedded in healthcare documentation. As the medical community strives to achieve equity, acknowledging and addressing how language shapes patient credibility assessments is imperative. This research advocates for multidisciplinary efforts combining AI innovation, clinician education, and patient engagement to dismantle bias and cultivate a more just healthcare system.
Subject of Research: People
Article Title: Racial bias in clinician assessment of patient credibility: Evidence from electronic health records
News Publication Date: 13-Aug-2025
Web References: http://dx.doi.org/10.1371/journal.pone.0328134
References: Beach MC, Harrigian K, Chee B, Ahmad A, Links AR, Zirikly A, et al. (2025) Racial bias in clinician assessment of patient credibility: Evidence from electronic health records. PLoS One 20(8): e0328134.
Image Credits: Beach et al., 2025, PLOS One, CC-BY 4.0
Keywords: racial bias, clinician assessment, patient credibility, electronic health records, natural language processing, artificial intelligence, healthcare disparities, implicit bias, medical documentation, equity in healthcare