Shae Gardner

Navigating the Risks of AI Technologies for the LGBTQ+ Community

Updated: Feb 22


From recommendations on streaming platforms to diagnoses in advanced medicine, the potential of AI rightfully makes headlines every day. However, as with all emerging technology, it is not immune to mistakes and misuse, making it imperative that we are upfront in recognizing and addressing the challenges it presents for the LGBTQ+ community and other marginalized groups.


We have already seen many instances where AI causes harm in places where it is meant to assist. A helpline’s AI chatbot offered dieting advice to an individual struggling with an eating disorder. A healthcare AI algorithm misinterpreted lower spending by Black patients as an indicator that they were less in need of care. A manufacturer’s AI recruitment tool identified a historical hiring pattern that favored men and, as a result, began penalizing resumes containing the word “women.” In these and many other situations, the use of AI has produced unintended outcomes with an outsized impact on marginalized communities.


Add to this the incredibly difficult task of determining which inputs AI decision-making processes actually rely on, along with these systems’ dependence on biased data sets, and the dangers this technology poses to the LGBTQ+ community, if left unchecked, become easy to see.



Biased Data, Biased AI 

The risks of AI can first be identified in how these systems are built and trained. AI systems are only as good as the data they are trained on, and they can perpetuate the biases present in that data, potentially leading to discriminatory outcomes for LGBTQ+ individuals through skewed recommendations, prejudiced hiring decisions, or unequal access to healthcare.


In 2022, researchers found that a robot operating with a widely used, publicly available AI system was replicating toxic stereotypes it found in internet data. The robot was asked to sort assorted human faces into categories such as doctors, homemakers, janitors, and criminals, and it proved incapable of performing the task without acting out racist and sexist stereotypes. One author of the study stated that “while many marginalized groups are not included in our study, the assumption should be that any such…system will be unsafe for marginalized groups until proven otherwise.”


For the LGBTQ+ community in particular, the datasets that power AI often lack information about sexual orientation and gender identity, creating problematic downstream consequences. Sexual orientation and gender identity cannot be observed or assumed, and as a result, the development of AI has largely omitted them as data points altogether. The problem of such limited representation was put on display in 2022, when an AI enthusiast asked the image generator Midjourney to produce images of 100 gay couples. The result was 100 nearly identical couples: thin, young, and overwhelmingly white, entirely unrepresentative of the diversity of the LGBTQ+ community.


One very real and current danger of this lack of representation appears in facial recognition systems, an increasingly prevalent AI use case and a nightmare for transgender and non-binary individuals in particular. One study found that across more than 30 years of facial recognition research, a binary model of gender was followed more than 90% of the time and gender was treated as immutable in more than 70% of studies. The result is a technology that frequently misidentifies or misgenders people, making both the digital and physical worlds less inclusive and less safe.


Advocating for algorithmic fairness is essential. One way to make AI more inclusive is to ensure that the teams developing it are inclusive themselves. The creation of AI tools must draw on a wide range of experiences and perspectives, so that teams are able to take steps to mitigate biases against their own communities and others. The more diverse the input, the more diverse the output. In addition, creators and users of these systems need to diversify their training data to include marginalized communities, conduct regular and rigorous bias audits, and develop legal and ethical frameworks that protect LGBTQ+ individuals from discrimination by these systems.
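What a “bias audit” looks like in practice can feel abstract. The sketch below is a minimal, hypothetical illustration of one such check in Python: it compares a model’s approval rates across demographic groups (a demographic-parity check) and flags large gaps using the common “four-fifths” rule of thumb. The decisions and group labels are invented for illustration; a real audit would use a system’s actual outputs, self-reported demographic data, and far more rigorous methods.

```python
# Minimal sketch of one bias-audit step: comparing a model's
# positive-outcome rates across demographic groups (demographic parity).
# The records below are hypothetical placeholders, not real data.

from collections import defaultdict

# Hypothetical model decisions: (group label, was the applicant approved?)
decisions = [
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

# Tally approvals and totals per group.
totals = defaultdict(int)
approvals = defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    if approved:
        approvals[group] += 1

# Approval rate per group, then the ratio of the lowest to the highest rate.
rates = {group: approvals[group] / totals[group] for group in totals}
ratio = min(rates.values()) / max(rates.values())

for group, rate in rates.items():
    print(f"{group}: approval rate {rate:.0%}")
print(f"Lowest-to-highest rate ratio: {ratio:.2f}")

# A common (imperfect) rule of thumb flags ratios below 0.8 for human review.
if ratio < 0.8:
    print("Disparity exceeds the four-fifths rule of thumb; review this model.")
```

A check like this is only a starting point: it can surface a disparity, but deciding why the gap exists and how to fix it still requires human judgment and the diverse teams described above.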



Privacy and Safety Concerns

The need to collect more inclusive data raises another concern for AI: once collected, how is that potentially sensitive data being protected? For one, marginalized communities, including ethnic minorities, LGBTQ+ individuals, and activists, are often disproportionately targeted by surveillance technologies powered by AI.


In 2021, the facial recognition company Clearview AI secured a patent for an AI system able to run background checks on individuals after scanning their faces. The system would provide date and place of birth, addresses, nationality, educational history, phone numbers and email addresses, criminal history, and more: an egregious invasion of privacy for anyone who comes into contact with the technology. Members of the LGBTQ+ community would have no ability to conceal relationships or gender transitions from AI systems like these, regardless of the risks they face.


In an attempt to ease these concerns, Clearview AI has stated that its technology is intended for law enforcement use alone. This is hardly a comfort to members of the LGBTQ+ community, who must then consider how these tools, developed entirely out of the public eye, could be used against them by law enforcement in states working to outlaw queer expression and experiences.


2023 was a record year for anti-LGBTQ+ legislation, with more than 500 bills introduced in state legislatures. The Texas Attorney General attempted to compile a list of all transgender individuals in his state. Tennessee has worked to ban educational materials that mention LGBTQ+ identities or issues. Florida made it a criminal offense for transgender people to use bathrooms or facilities consistent with their gender identity. The outcomes of these three situations, and hundreds more around the country, would be significantly worsened if the individuals or agencies behind them were permitted to use and misuse AI. Someone sharing their sexual orientation on social media, using a public computer to search for resources, or simply being captured by a facial recognition system could unknowingly have that information collected by an AI and subsequently be targeted.


Collection of user data by AI systems raises serious privacy concerns for everyone involved, but particularly for marginalized individuals. Robust data protection laws with guidelines for data collection, storage, and usage, along with increased public transparency in how and when AI is being used, are paramount for protection.



Digital Misinformation and Disinformation

Finally, the digital landscape is already rife with misinformation and disinformation, and without appropriate safeguards, such campaigns can grow worse with the use of AI. For LGBTQ+ individuals, this could mean false narratives and the reinforcement of harmful stereotypes that damage mental health, endanger physical safety, and undermine inclusion.


Without appropriate training, an AI-powered algorithm will replicate whatever patterns it identifies, amplifying any existing biases it finds by duplicating them in conversations or by recommending similar content. In 2021, a popular South Korean AI chatbot named Lee Luda was suspended less than a month after its launch when it began using hate speech towards the LGBTQ+ community. The AI had drawn its responses from 10 billion real-life conversations, and, from within them, homophobic slurs and abusive speech patterns.


While Lee Luda’s harm was inadvertent, AI chatbots, social media accounts, and deepfakes could also be intentionally employed against the community. Manipulated media, divisive bots, and convincing fake recordings have already appeared widely on social media. With more than a third of Americans unable to identify manufactured content, the absence of effective preventative measures leaves every community exposed to being misled or targeted. The LGBTQ+ community is particularly vulnerable, since the internet is too often its only gateway to resources, community, and affirming spaces.


Preventing homophobia, transphobia, and bigotry is paramount in keeping LGBTQ+ youth and adults happy, healthy, and alive. As AI continues to advance, we will need proactive measures including AI bot detection tools, media literacy education, and strong policies against hate speech and discrimination.


AI has incredible capabilities for expanding and improving our lives, provided we create and use the technology in ways that keep the needs of the LGBTQ+ community and other marginalized groups in mind. It can help build safe and educational spaces online, provide access to LGBTQ+ resources and services, and foster a more informed and inclusive world. It can do all this and more, as long as we proactively address the very real risks around bias, privacy, and disinformation. Otherwise, we risk replicating in data the same discriminatory patterns faced in the physical world.
