
Exploring the Benefits of AI Technologies for the LGBTQ+ Community

AI technologies present tremendous opportunities for facilitating research, improving economic outcomes, and supporting the creativity and communication of marginalized communities. For LGBTQ+ individuals, who face worse socioeconomic outcomes than their non-LGBTQ+ peers, AI-driven improvements in online safety, healthcare, and employment could be transformative, providing access and opportunity in areas where this community has been traditionally underrepresented.


This journey into AI requires treading cautiously, to ensure positive change without inadvertent harm. Through equitable methods of algorithmic construction, we can recognize potential pitfalls and actively address concerns about bias and misuse in AI, ensuring that AI integration is not just a technological leap forward, but a responsible and ethical one as well.



Hate Speech & Censorship

Here in the United States, LGBTQ+ individuals are facing a record volume of legislation targeting their expression, education, and access to resources. Around the world, propaganda laws, broadcasting regulations, morality enforcement, and content suppression leave millions of LGBTQ+ people without access to community, visibility, and vital resources. If AI can be applied in ways that identify and counter discriminatory behavior online, we are looking at an incredible tool for equality.


Natural language AI models can be trained to detect and flag hate speech, reducing the amount of such speech online and increasing the speed with which it is flagged and removed. Among marginalized groups, LGBTQ+ individuals are reportedly the most targeted by hate speech online. Research has shown that a devastating 87% of LGBTQ+ young people have seen or experienced anti-LGBTQ+ hate and harassment on social media in the past year, and the vast majority reported that no action was taken to identify and remove the harmful content. Online spaces are too often the only affirming spaces available to members of the LGBTQ+ community, and ensuring the safety of those spaces is paramount. A mind-boggling amount of content is shared to online platforms: 66,000 photos on Instagram, 1.7 million pieces of content on Facebook, and 500 hours of video on YouTube are posted every single minute. Human moderation teams, while necessary, are simply outnumbered. AI tools can and must be used to make decisions in conjunction with human moderators to protect at-risk populations online, including those in the LGBTQ+ community.
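
To make this concrete, here is a minimal sketch of how a classifier can triage content for human review rather than replace it. It assumes the Hugging Face transformers library and uses "unitary/toxic-bert", one publicly available toxicity model, as a stand-in for whatever classifier a platform actually deploys; the threshold is purely illustrative.

```python
# Minimal triage sketch: score incoming comments with a text classifier and
# route likely hate speech to human moderators for the final decision.
from transformers import pipeline

# "unitary/toxic-bert" is one publicly available toxicity model; any
# comparable classifier could be substituted.
classifier = pipeline("text-classification", model="unitary/toxic-bert")

REVIEW_THRESHOLD = 0.8  # illustrative cutoff; platforms tune this in practice

def triage(comments):
    """Split comments into (flagged, cleared); flagged go to human review."""
    flagged, cleared = [], []
    for comment in comments:
        result = classifier(comment)[0]  # e.g. {"label": "toxic", "score": 0.97}
        if result["label"] == "toxic" and result["score"] >= REVIEW_THRESHOLD:
            flagged.append((comment, result["score"]))
        else:
            cleared.append(comment)
    return flagged, cleared

flagged, cleared = triage(["You all deserve respect.", "example hateful comment"])
print(f"{len(flagged)} comment(s) routed to human review")
```

The design choice worth noting is that the model never removes anything on its own: it prioritizes the human moderators' queue, which is exactly the "in conjunction with" model described above.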


AI-driven tools for content detection and moderation are already used by most online platforms, but one particularly notable example of beneficial application was a collaboration between Element AI and Amnesty International called “Troll Patrol,” which uncovered patterns of abuse against women on Twitter/X. The same or similar tools have been used to identify abuse directed towards LGBTQ+ individuals and communities, such as a dataset of anti-LGBTQ+ YouTube comments used to train AI models to detect harassment.


We must recognize that these tools, while necessary to improve online safety, are far from perfect in their current state and often miss the mark by suppressing LGBTQ+ voices. For example, one study found that Google’s Tune browser plugin, which uses the Perspective AI tool to detect “toxicity” in text-based media, labeled the Twitter/X posts of drag queens as “more toxic” than those of white nationalists. This underscores the need for thoughtful and constant improvement in AI-based technologies, to ensure they benefit platforms, communities, and individual users alike.
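
Disparities like this can be measured directly. The sketch below shows the kind of audit that surfaces them: score two samples of posts with the Perspective API and compare the average “toxicity” assigned to each group. The request shape follows Perspective’s published comments:analyze endpoint; the API key and post samples are placeholders you would supply.

```python
# Rough sketch of a group-level toxicity audit using the Perspective API.
# Payload follows Perspective's published comments:analyze format; the key
# and the post samples are placeholders.
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

def toxicity(text):
    body = {
        "comment": {"text": text},
        "languages": ["en"],
        "requestedAttributes": {"TOXICITY": {}},
    }
    response = requests.post(URL, json=body, timeout=10)
    response.raise_for_status()
    return response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]

def mean_toxicity(posts):
    return sum(toxicity(p) for p in posts) / len(posts)

group_a = ["placeholder post from group A"]
group_b = ["placeholder post from group B"]
print("Group A mean toxicity:", mean_toxicity(group_a))
print("Group B mean toxicity:", mean_toxicity(group_b))
```

If one group’s benign posts consistently score higher than another’s, the model is encoding a bias that moderation built on top of it will inherit.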



Health & Mental Health

Healthcare is a field with both a particular centrality for LGBTQ+ people and a wealth of potentially beneficial AI applications. LGBTQ+ individuals very often have unique health and mental health needs tied to their identities, needs that are not fully accounted for or studied in established medical practice. If applied cautiously and with an eye to data privacy, AI has the potential to help.


In one study, researchers in California used machine learning to more accurately identify patients at risk of future HIV infection, empowering physicians to offer those candidates HIV pre-exposure prophylaxis (PrEP) and improving health outcomes in the community. Findings like this, deployed on a larger scale, could allow governments, community organizations, and healthcare advocates to more accurately target their messaging regarding prevention and treatment options for LGBTQ+ individuals and other marginalized communities. Another health study demonstrated success in using machine learning and natural language processing to identify characteristics of online posts indicating the presence of gender dysphoria in users. Without support, gender dysphoria can lead to depression, anxiety, substance abuse, and suicidality. In situations like this, prudent application of AI could enable timelier medical interventions that support the most at-risk members of the LGBTQ+ community.
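
To illustrate the general shape of such risk-prediction work (not the cited studies’ actual methods), here is a sketch that trains a simple classifier on synthetic stand-ins for de-identified health-record features and ranks patients for a clinician-led PrEP conversation.

```python
# Illustrative sketch only: train a classifier on de-identified record
# features and rank patients so clinicians can prioritize PrEP outreach.
# All features and outcomes here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000
X = rng.random((n, 4))  # synthetic stand-ins for EHR-derived risk features
y = (X @ np.array([2.0, 1.5, 0.5, 0.2])
     + rng.normal(0, 0.5, n) > 2.1).astype(int)  # synthetic outcome labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Rank held-out patients by predicted risk; the top of the list would be
# candidates for a clinician-led conversation about prevention options.
risk = model.predict_proba(X_test)[:, 1]
top_candidates = np.argsort(risk)[::-1][:10]
print("Highest-risk patient indices:", top_candidates)
```

Note that the model only prioritizes outreach; the medical decision stays with the physician.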


Beyond potential applications by researchers and medical practitioners, we have real-world evidence of the ways AI can improve health and mental health outcomes for LGBTQ+ individuals. The Trevor Project trains its helpline workers by having them engage in simulated conversations with an AI that imitates an at-risk LGBTQ+ youth. This allows the workers to hone their skills in a safe environment, and is reportedly helping the organization expand its pool of crisis counselors tenfold.
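
The underlying pattern, a persona-constrained chat model, is easy to sketch. The snippet below assumes an OpenAI-style chat API and a placeholder model name; The Trevor Project’s actual training simulator is proprietary and far more carefully engineered and safeguarded than this.

```python
# A minimal persona-simulation sketch, assuming an OpenAI-style chat API.
# The persona text and model name are placeholders, not the real system.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PERSONA = (
    "You are role-playing a fictional teenager contacting a crisis line as "
    "part of a counselor-training exercise. Stay in character as a guarded, "
    "anxious teen, and never break role to advise the trainee."
)

history = [{"role": "system", "content": PERSONA}]

def trainee_says(message):
    """Send the trainee's message; return the simulated youth's reply."""
    history.append({"role": "user", "content": message})
    reply = client.chat.completions.create(model="gpt-4o-mini", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

print(trainee_says("Hi, I'm really glad you reached out tonight."))
```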



Employment & Economic Outcomes

LGBTQ+ people face significant barriers to employment when compared to their non-LGBTQ+ counterparts, too often due to bias and discrimination. The Center for American Progress reported that LGBTQ+ people were more likely than non-LGBTQ+ people to experience loss of income and to live below the poverty line. Thankfully, AI can be utilized in ways that reduce bias relative to standard hiring practices, which are susceptible to capricious, prejudiced snap judgments by recruiters, or simply to the unconscious biases present in most individuals.


In contrast, AI tools can more accurately assess the importance of traditionally valued candidate characteristics, because they can analyze far larger datasets than non-AI methods allow. Additionally, AI can allow for a much more thorough analysis of candidate characteristics, painting a more complete picture of potential employees and their capabilities. The result could be that the traditional bias towards judging candidates on a limited set of white-centric and hetero/cisnormative factors is replaced with a far more comprehensive evaluation that benefits employers and prospective employees alike.
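
Here is one hedged sketch of what structured candidate scoring can look like. The file, column names, and model choice are hypothetical; the key idea is that the model sees only job-relevant signals, with protected attributes deliberately excluded, and that its scores still require the kind of auditing discussed in the next section.

```python
# Hedged sketch of structured candidate scoring. The file, columns, and
# model are hypothetical; the model sees only job-relevant signals, with
# protected attributes deliberately excluded.
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier

candidates = pd.read_csv("candidates.csv")  # hypothetical historical data

PROTECTED = ["gender", "sexual_orientation", "race", "age"]
FEATURES = [col for col in candidates.columns
            if col not in PROTECTED + ["hired", "candidate_id"]]

model = GradientBoostingClassifier()
model.fit(candidates[FEATURES], candidates["hired"])
candidates["score"] = model.predict_proba(candidates[FEATURES])[:, 1]

# Caution: excluded attributes can leak back in through correlated proxies
# (e.g. zip code), so these scores still need fairness audits.
print(candidates.sort_values("score", ascending=False).head())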



Identifying & Addressing the Risks

Ironically, these same areas that can benefit from AI application - online safety, health and mental health, and employment outcomes - are the exact areas that will worsen if AI is applied haphazardly and without intention. The dangers posed to LGBTQ+ people can be quite serious, and we detail those concerns in our sister blog on navigating AI risks. Because of this, it is crucial to note the ways that researchers are actively working to address and mitigate these issues.


Algorithmic bias in AI systems has an incredibly serious impact on LGBTQ+ people and members of other marginalized communities. This issue is compounded by the fact that popular methods for detecting unfairness in an algorithm’s outputs analyze how the algorithm treats broad protected categories (e.g. race, gender, sexual orientation), but fail to account for differential treatment of subgroups that combine categories, such as queer Black women. However, several researchers have created more sophisticated fairness-detection methods that analyze the algorithmic treatment of these subgroups, allowing algorithmic fairness to be assessed in an intersectional manner that accounts for all relevant groups. Even so, further research is undeniably required to expand the ability of algorithm designers and auditors to detect unfairness before harm is caused.
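
As a minimal sketch of the difference, the snippet below compares a one-dimensional audit with an intersectional one on a hypothetical log of model outputs; the file and column names are placeholders.

```python
# Minimal sketch of intersectional auditing on a hypothetical log of model
# outputs: compare error rates per broad category versus per combined
# subgroup. File and column names are placeholders.
import pandas as pd

results = pd.read_csv("model_outputs.csv")  # hypothetical audit log
results["subgroup"] = (results["race"] + " / " + results["gender"]
                       + " / " + results["orientation"])

def false_positive_rate(group):
    """Share of true negatives the model wrongly flagged."""
    negatives = group[group["label"] == 0]
    return (negatives["prediction"] == 1).mean()

# A one-dimensional audit can look fair while hiding subgroup harms...
print(results.groupby("race").apply(false_positive_rate))

# ...while the intersectional audit surfaces disparities that only appear
# for combined subgroups, such as queer Black women.
print(results.groupby("subgroup").apply(false_positive_rate))
```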


Another large concern comes from the potential for malicious actors to use the outputs of a machine learning model to infer the identities of individuals in its training dataset, and details of their data. This is particularly dangerous for LGBTQ+ people, for whom maintaining privacy can be a matter of life and death. Thankfully, researchers have developed methods - notably differential privacy - for introducing random noise into data in a manner that mitigates this privacy violation while preserving the usefulness of the dataset. These methods have been adopted by several large tech companies, such as Google, Apple, Snapchat, and Meta.
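
The core idea is simple enough to sketch. Below is the Laplace mechanism, the textbook building block of differential privacy: calibrated random noise is added to an aggregate statistic so that no single person’s presence in the data can be inferred from the released number.

```python
# The Laplace mechanism from differential privacy: add calibrated noise to
# an aggregate statistic so no individual's presence can be inferred.
import numpy as np

rng = np.random.default_rng()

def private_count(true_count, epsilon=1.0, sensitivity=1.0):
    """Release a noisy count. Smaller epsilon = stronger privacy, more
    noise. Adding or removing one person changes a count by at most 1,
    hence sensitivity = 1."""
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return true_count + noise

# e.g. "how many respondents in this survey identified as LGBTQ+?"
print(private_count(1423, epsilon=0.5))
```

The released count is still useful in aggregate, but any single respondent can plausibly deny being in the data at all.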


One final danger comes from the malicious scraping of social media images to feed facial recognition tools and create “deepfakes” of targeted users, often in the form of synthetic pornographic images. Members of the LGBTQ+ community face increased and unwarranted sexualization and fetishization of queer bodies, likely leaving them at elevated risk of such tools being used for harassment. Transgender individuals face particular dangers, as shown by one study which found that transgender users are more likely than cisgender users to be targeted for certain types of image-based harassment. Thankfully, researchers are developing AI-driven tools that alter posted images in minuscule ways, invisible to the naked eye, that prevent facial recognition tools from recognizing the image as belonging to that person. These alterations also render deepfake algorithms unable to assemble sufficient datasets of images recognized as belonging to the same user, further preventing the use of this malicious software.
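
Conceptually, these “cloaking” tools (Fawkes is one published example) add an adversarial perturbation to a photo before it is posted. The sketch below shows the basic mechanic with a hypothetical stand-in embedding network: a single FGSM-style step nudges each pixel by an imperceptible amount so the image’s face embedding drifts toward a decoy identity. Real tools are far more sophisticated.

```python
# Conceptual sketch of image "cloaking": one small adversarial step makes
# the photo's face embedding drift toward a decoy identity while pixel
# changes stay imperceptible.
import torch
import torch.nn as nn

# Stand-in for a pretrained face-embedding network (hypothetical); a real
# system would target an actual face-recognition model.
embedding_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 128))

def cloak(image, decoy_image, epsilon=2 / 255):
    """Return `image` nudged toward the decoy's identity, with each pixel
    changed by at most epsilon (far below what the eye can see)."""
    image = image.clone().requires_grad_(True)
    decoy = embedding_model(decoy_image).detach()

    # How far is the image's embedding from the decoy's?
    loss = torch.nn.functional.mse_loss(embedding_model(image), decoy)
    loss.backward()

    # One FGSM-style step *toward* the decoy identity.
    cloaked = image - epsilon * image.grad.sign()
    return cloaked.clamp(0, 1).detach()

original = torch.rand(1, 3, 64, 64)  # placeholder photo tensor
decoy = torch.rand(1, 3, 64, 64)     # photo of a different, synthetic face
protected = cloak(original, decoy)
print("max pixel change:", (protected - original).abs().max().item())
```

A face-recognition model scraping such cloaked photos learns the decoy’s features instead of the user’s, which is why downstream deepfake pipelines fail to assemble a coherent dataset of that person.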



It is impossible to state with certainty how great a disruption AI will cause, both to society in general and to LGBTQ+ communities in particular. However, the current use cases and their demonstrated effects on LGBTQ+ communities are profound, and we must take a sober assessment of the benefits and drawbacks of AI applications for marginalized communities. As shown, the impact of AI tools on employment, healthcare, and online speech for marginalized communities, and the LGBTQ+ community specifically, will be significant. While AI has the potential to greatly improve our communities’ quality of life, it is just as important that we diligently address how its negative impacts can be effectively mitigated.
