
The LGBTQ+ Case for AI in Healthcare


An Introduction to Artificial Intelligence in Healthcare

Artificial intelligence (AI) is reshaping healthcare as we know it. From AI-powered imaging tools that can detect cancers earlier and more accurately than the human eye to algorithms that personalize treatments based on a patient’s unique genetic makeup, the potential applications of AI in healthcare can feel limitless. But as technology advances and new opportunities emerge, a critical question remains: who are we leaving behind?


For LGBTQ+ communities, who have long faced systemic discrimination within healthcare systems, the answer is far from simple. In just the past year, nearly one in three LGBTQ+ adults in the U.S. reported experiencing discrimination from a medical provider. As AI becomes a larger force in healthcare decision-making, advocates and experts alike are asking: will AI help close gaps in care, or will it simply reinforce existing ones?


While many see AI as a tool that could eliminate human bias and promote equity, others warn that without intentional design, AI could replicate the very inequalities it promises to solve.



How AI Works, and Why Data Matters

The challenge at the heart of this conversation is data. At its core, AI learns by analyzing vast amounts of data to find patterns, make predictions, and improve performance over time. In healthcare, this translates into faster diagnoses, personalized treatment plans, and broader access to expert knowledge.
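To make that idea concrete, here is a minimal, illustrative sketch in Python: a simple model fits patterns in synthetic "patient" records, then makes predictions on records it has never seen. Everything here is invented for demonstration (the data, the features, and the model choice); no real clinical system is this simple.

```python
# A toy illustration of "learning from data": fit patterns in past
# records, then predict outcomes for new ones. All data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic "patient records": two numeric features and a binary outcome.
X = rng.normal(size=(1000, 2))  # e.g., a lab value and an age score (invented)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Hold out data the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression().fit(X_train, y_train)  # "learns" the pattern
print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```

The key point of the sketch is the last line: the model is judged on patients it never saw during training, which is exactly where gaps in the training data start to matter.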


But artificial intelligence systems are only as strong, and as fair, as the data they are trained on. Historically, LGBTQ+ individuals have been excluded from mainstream healthcare research and data collection. Critical information about sexual orientation and gender identity (SOGI) is often missing, misclassified, or withheld due to privacy fears. And without that data, AI models risk overlooking or misrepresenting LGBTQ+ patients entirely. Traditional fairness efforts in AI focus on observed categories like race, age, or gender; when key identity data is invisible, marginalized communities are at risk of being left behind once again.
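To see why invisible identity data matters, consider a toy fairness audit. In this hypothetical sketch (the column names and records are invented for illustration), per-group error rates can only be computed for groups the dataset actually records; a group that was never collected, such as SOGI, simply cannot be audited at all.

```python
# Sketch of a subgroup fairness audit. Per-group error rates are only
# measurable for attributes the dataset records; "sogi" is a hypothetical
# column name, and all records here are synthetic.
import pandas as pd

def audit_by_group(df: pd.DataFrame, group_col: str) -> None:
    if group_col not in df.columns:
        # The model may still underserve this group -- we just can't see it.
        print(f"Cannot audit '{group_col}': never collected in this dataset.")
        return
    for group, sub in df.groupby(group_col):
        error_rate = (sub["prediction"] != sub["outcome"]).mean()
        print(f"{group_col}={group}: error rate {error_rate:.2f}")

records = pd.DataFrame({
    "outcome":    [1, 0, 1, 1, 0, 1],
    "prediction": [1, 0, 0, 1, 0, 0],
    "age_band":   ["18-29", "30-44", "18-29", "45-64", "30-44", "18-29"],
    # Note: no sexual orientation / gender identity column was recorded.
})

audit_by_group(records, "age_band")  # auditable: the data exists
audit_by_group(records, "sogi")      # invisible: the audit is impossible
```

This is the core of the "unobserved characteristics" problem: you cannot measure, let alone correct, a disparity affecting a group your data never names.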



Inclusive AI: A Conversation with Nenad Tomasev

Nenad Tomasev, a Senior Staff Research Scientist at Google DeepMind, is at the forefront of building more inclusive AI systems. His research, including the influential paper Fairness for Unobserved Characteristics: Insights from Technological Impacts on Queer Communities, urges a rethinking of what fairness looks like in the development of artificial intelligence systems—one that accounts for those who are often invisible in datasets.


“AI presents a powerful opportunity to address longstanding disparities in healthcare, particularly for marginalized groups,” Tomasev explains. “But if the data we use reflects systemic biases, or ignores entire communities, we risk building systems that replicate those harms.”


Tomasev points to mental health as a clear example: LGBTQ+ individuals face significantly higher rates of mental health distress, but relevant data is often too sensitive to collect easily. Without this information, AI systems can’t accurately recognize or respond to LGBTQ+ health needs.


One promising solution? Participatory research. Projects like PARQAIR-MH (Participatory Queer AI Research for Mental Health) bring LGBTQ+ communities, clinicians, ethicists, and AI researchers together to co-create solutions, policies, and models that center lived experience. “True fairness requires diverse voices at every stage—from defining the problems to evaluating the results,” Tomasev emphasizes.




The Future of LGBTQ+ Inclusion

AI won’t fix healthcare inequities on its own. But if built intentionally, with inclusion, privacy, and representation at its core, it can be a powerful catalyst for change. For LGBTQ+ communities, that means being more than just counted in datasets; it means shaping how AI defines success, whose needs are prioritized, and what ethical standards guide future development. Building this future requires robust privacy protections so that sensitive identity information can be safely included and used to improve care. It demands that LGBTQ+ voices be embedded in every stage of AI design, testing, and deployment. And it calls for developers, institutions, and regulators to be held accountable for equitable outcomes.


When we build AI systems with LGBTQ+ communities, not just about them, we don’t just create smarter healthcare systems; we create fairer ones: a future where every person, regardless of identity or lived experience, receives the care, dignity, and support they deserve.
