The advent of artificial intelligence (AI) in healthcare heralds a transformative era, with the potential to revolutionize personalized medicine by tailoring diagnoses and treatments to individual patients. However, as AI models and applications proliferate, a critical challenge emerges: ensuring these technologies are developed and deployed without perpetuating biases that marginalize vulnerable populations. A groundbreaking study conducted by researchers from Pompeu Fabra University (UPF), the Barcelona Supercomputing Center (BSC-CNS), and Rovira i Virgili University (URV) in Spain tackles this issue head-on by focusing on the inclusion of transgender individuals in medical AI systems. This pioneering work urges the AI community to transcend simplistic binary frameworks and adapt medical AI to the nuanced, diverse realities of gender identity.
Traditional health AI systems have largely been designed within rigid binary gender models, frequently neglecting the unique physiological and psychosocial needs of transgender and non-binary populations. Such limitations not only restrict the utility of AI-powered healthcare tools for these communities but also risk exacerbating existing health disparities. The recent study delves into these concerns by engaging members of the transgender community directly in its research process, embracing a communicative methodology that emphasizes participatory collaboration rather than top-down analysis. This approach marks an essential shift in biomedical AI research, highlighting the importance of involving marginalized groups to co-create more equitable technologies.
The research was conducted in close partnership with the LGBTQIA+ advocacy group PRISMA, which plays a pivotal role in defending the rights of sexual and gender minorities within scientific and technological innovation spheres. This collaboration ensured that the study remained grounded in real-world experiences and ethical imperatives, avoiding the common pitfall of tokenistic inclusion. Representatives from the Health Care and Promotion Service for Trans and Non-Binary People (TRÀNSIT) at the Catalan Health Institute also provided critical insights, further enriching the study’s contextual relevance and fostering trust between researchers and participants.
At the heart of the study were three online focus groups in which eighteen transgender individuals articulated their experiences and perspectives on current AI applications in healthcare. Participants reported that many existing AI tools propagate the biases of their predominantly cisgender developers. As an illustrative example, certain voice modification applications, designed to assist in gender transition, frequently misclassify users by gender. This misrecognition not only undermines the apps’ therapeutic efficacy but also inflicts emotional distress on users by invalidating their gender identity through technology.
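To see how such misclassification can arise, consider a deliberately simplified, hypothetical sketch (not drawn from the study or any real application): a classifier trained only on cisgender voice samples learns a single pitch threshold, so every voice is forced onto one side of a binary boundary the model inherited from its training data.

```python
# Hypothetical sketch: a binary voice-gender classifier trained only on
# cisgender speech samples. All numbers are synthetic illustrations.

# Training data: (mean pitch in Hz, binary label) from cisgender speakers.
train = [(110, "male"), (120, "male"), (125, "male"),
         (200, "female"), (210, "female"), (220, "female")]

def fit_threshold(data):
    """Learn a single pitch threshold: the midpoint between class means."""
    male = [p for p, y in data if y == "male"]
    female = [p for p, y in data if y == "female"]
    return (sum(male) / len(male) + sum(female) / len(female)) / 2

def classify(pitch, threshold):
    """Force every voice into one of two boxes -- the core design flaw."""
    return "female" if pitch >= threshold else "male"

threshold = fit_threshold(train)  # about 164 Hz with the data above

# A trans woman partway through voice training may speak around 150-165 Hz,
# a range the binary model never saw. The app misgenders her:
print(classify(155, threshold))  # -> "male"
```

A voice in transition often sits precisely in the range the training data never covered, so the binary framing guarantees misgendering for some users no matter how accurate the model is on its own training set.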
Such technological missteps are emblematic of a larger problem: the replication and amplification of societal biases within AI systems. This pernicious feedback loop renders transgender people invisible, reinforcing structural inequities. Simón Perera del Rosario, a co-author from UPF, highlighted that this dynamic can have deleterious effects on the mental health, self-esteem, and overall quality of life of transgender individuals. These findings emphasize the ethical imperative to design AI systems that are not just functionally effective but also socially responsible.
One of the study’s major recommendations focuses on leveraging AI’s capabilities to enhance the personalization of medical treatments, notably in the administration of masculinizing or feminizing hormonal therapies. Currently, hormone dosages are often standardized according to cisgender parameters, ignoring the distinct physiological profiles within transgender populations. AI systems, equipped with diverse data reflecting individual variations, could optimize dosage regimens and monitor potential interactions with other medications, thus minimizing side effects and maximizing therapeutic outcomes. This precision medicine approach marks a significant advance toward truly individualized care.
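As a minimal sketch of the feedback idea behind such individualized regimens (entirely hypothetical, with synthetic numbers that are not clinical guidance and not the study's method), a dosing model can titrate each patient toward an individual target lab value rather than applying one standardized dose:

```python
# Hypothetical sketch of AI-assisted dose titration. All values are
# synthetic placeholders, NOT clinical guidance. The idea: adjust each
# patient's dose toward their individual target lab value instead of
# applying a fixed, cisgender-derived standard dose.

def patient_response(dose, sensitivity):
    """Toy model: the measured hormone level scales with dose and an
    individual sensitivity factor (a synthetic stand-in for physiology)."""
    return dose * sensitivity

def titrate(target_level, sensitivity, initial_dose=1.0, steps=20, gain=0.3):
    """Proportional feedback: repeatedly nudge the dose until the
    simulated lab value approaches the individualized target."""
    dose = initial_dose
    for _ in range(steps):
        level = patient_response(dose, sensitivity)
        dose += gain * (target_level - level) / sensitivity
    return dose

# Two simulated patients with different sensitivities reach the same
# target on very different doses -- the point of individualized regimens.
dose_a = titrate(target_level=100.0, sensitivity=20.0)  # converges near 5.0
dose_b = titrate(target_level=100.0, sensitivity=50.0)  # converges near 2.0
print(round(dose_a, 2), round(dose_b, 2))
```

A real system would of course use validated pharmacokinetic models, clinician oversight, and interaction checks against co-medications; the sketch only illustrates why a single standardized dose cannot serve patients whose physiology differs from the reference population.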
Participants also stressed the crucial importance of ethical data management. Their concerns revolve around how personal data related to gender identity is collected, stored, and used within medical systems. The group advocated for strictly limiting the use of such sensitive data to relevant medical contexts, entrusting only qualified health professionals with this information. This precaution is vital to prevent unauthorized misuse and to respect privacy, which has historically been a major barrier for transgender individuals seeking care. Furthermore, participants warned that AI systems built on binary frameworks risk misinterpreting data, thereby leading to diagnostic inaccuracies or inappropriate treatment decisions.
The mistrust of healthcare institutions among many transgender individuals is a significant hurdle that technology alone cannot overcome. This distrust stems from a long history of discrimination and medical pathologization; the World Health Organization removed “transsexuality” from its classification of mental disorders only in 2019. To rebuild trust, the study underscores the necessity of comprehensive education and sensitization of healthcare professionals to transgender-specific health needs. Such training initiatives would enhance provider competence and foster more respectful, informed patient interactions, which are essential for effective AI integration in clinical practice.
Expanding the scientific evidence base concerning transgender health and AI is another vital pillar of the study’s agenda. Current research addressing these intersecting domains remains alarmingly sparse. There is an urgent need for more scholarly attention focused on developing AI models that reflect gender diversity holistically, from data collection to algorithmic design and deployment. Encouraging interdisciplinary collaboration among computer scientists, clinicians, and social scientists will be critical to ensuring these models are robust, ethical, and clinically impactful.
Moreover, fostering solidarity networks and knowledge exchange platforms between transgender communities and healthcare professionals holds great promise. These spaces enable the co-creation of AI tools grounded in lived experience, thereby enhancing relevance and acceptance. By engaging stakeholders throughout AI development cycles, the healthcare field can produce technologies that empower rather than alienate marginalized groups.
The study’s communicative methodology itself represents an innovative research paradigm that upends conventional investigator-led approaches. By actively involving transgender participants in the research design and oversight, facilitated by PRISMA, the researchers ensured that ethical standards were meticulously upheld and that outcomes would resonate authentically with the community. This participatory ethic exemplifies a progressive direction for AI research writ large, emphasizing inclusivity, transparency, and reciprocal respect.
In conclusion, the ongoing evolution of AI in healthcare offers unprecedented opportunities to deliver personalized, equitable medical care. However, realizing this potential necessitates deliberate efforts to dismantle ingrained binary biases in AI systems. The interdisciplinary collaboration between UPF, BSC-CNS, URV, and advocacy partners like PRISMA points the way toward constructing AI applications that truly reflect and serve the diverse tapestry of human gender identities. Embracing this vision promises not only to improve health outcomes for transgender individuals but also to enrich the field of AI-powered medicine as a whole with more nuanced, just, and humane technologies.
Subject of Research: People
Article Title: Exploring Gender Bias in AI for Personalized Medicine: Focus Group Study With Trans Community Members
News Publication Date: 29-Jul-2025
Web References:
- Journal of Medical Internet Research DOI link
- PRISMA association
- TRÀNSIT Health Care and Promotion Service
References: None declared.
Keywords:
Artificial intelligence, Personalized medicine, Algorithms, Transgender identity