AICOA Bill Provision May Weaken Content Moderation That Protects the LGBTQ+ Community
The American Innovation and Choice Online Act (S.2992) is an antitrust bill that aims to prevent the four biggest tech platforms – Alphabet, Amazon, Apple, and Meta – from “self-preferencing,” and thus would bar these companies from unfairly favoring their own services over similar services offered by their competitors. While ensuring that companies do not engage in anticompetitive behavior and tilt the playing field in favor of their own products is vitally important, well-respected public interest and civil society organizations have recently raised serious concerns about the potential impact of this legislation on content moderation policies at these companies. In addition, last week a group of Senate Democrats sent a letter to Senator Klobuchar warning about these same unintended consequences, including a potential chilling effect on content moderation that may prevent companies from filtering toxic content and deplatforming hate and extremist groups – an outcome that would hurt vulnerable and marginalized communities like the LGBTQ+ community.
The ability of tech companies to moderate hateful and extremist content and remove toxic services is extremely important to LGBTQ+ individuals and has allowed LGBTQ+ communities to flourish online. We are concerned that getting this wrong could inflict unique harms on LGBTQ+ communities and individuals who rely on these platforms’ content moderation systems to remove the worst of the worst. According to GLAAD’s Social Media Safety Index, published earlier this year, 68% of LGBTQ adults have encountered online hate and harassment, and 51% have been targeted for “more severe forms of online abuse,” compared to roughly 41% of straight adults who reported enduring any form of online harassment. At a minimum, the proliferation of such content would make it harder for marginalized groups like LGBTQ+ communities to participate and communicate freely on the internet, or to do so without being harassed.
Concerns about the use of a law like this to compel companies to host toxic or hate speech are not merely speculative or theoretical. As Free Press stated, “With this powerful statute in hand, future federal regulators or state attorneys general could very easily argue that removing apps like Parler and Omnichan or content from Infowars is unlawful. They could claim that what tech companies rightly define as hate speech, incitements to violence or vaccine disinformation is really just competing political or health information that must stay up.” As legislation in Florida and Texas has shown, codifying laws that compel tech companies to carry specific types of speech is a major goal of Republicans, and individuals like Texas Attorney General Ken Paxton have made passing and enforcing these types of laws a crusade.
More to the point, Senator Ted Cruz has explicitly linked his support of this bill to its potential as a tool to force tech companies to host speech to his liking. As Techdirt stated in its analysis of Republican support for this bill, “Incredibly, Republicans like Ted Cruz have made it clear this is why they support such bills. In fact, Cruz introduced an amendment to double down on this language and make sure that the bill would prohibit ‘discriminating on the basis of a political belief’…Even more to the point, during the hearings about the bill and his amendment, Cruz flat out said that he was hoping to ‘unleash the trial lawyers’ to sue Google, Facebook, Amazon, Apple and the like for moderating those who violate their policies.”
Fortunately, the group of Senate Democrats offers a simple solution: clarifying language that would ensure this legislation cannot be used to prevent platforms from moderating and removing toxic content and services, and that would not leave platforms vulnerable to lawsuits over legitimate content moderation decisions. These four Democratic Senators, including Senator Ron Wyden, have asked Sen. Klobuchar to add a one-sentence clarification to the relevant section of the bill confirming that the bill in no way hinders a company’s content moderation practices. (Sen. Wyden is the author of Section 230, the Internet law that gives companies the ability to remove toxic content and sites, including anti-LGBTQ+ content, so his concerns and proposed clarification should be given weight.) Specifically, they requested that the following clause be added:
“Protection for Content Moderation Practices.—Nothing in section 3(a)(3) may be construed to impose liability on a covered platform operator for moderating content on the platform or otherwise inhibit the authority of a covered platform operator to moderate content on the platform, including such authority under the First Amendment to the Constitution of the United States, section 230(c) of the Communications Act of 1934 (47 U.S.C. 230(c)), or any other provision of law.”
Senator Klobuchar has indicated that, despite the concerns raised, she does not believe her bill impedes content moderation efforts. It is also noteworthy that both Sen. Chuck Grassley and Rep. Ken Buck have indicated that Republicans will walk away from the bill if it is modified to include the clarifying content moderation language. Even so, Sen. Klobuchar has indicated through a spokesperson that she is open to revisions “that are not intended to alter the core principles of this legislation: to protect consumers and small businesses from anticompetitive behavior by monopolies.” The proposed clarifying language clearly falls within this category, as it does not affect the bill’s ability to curb anticompetitive behavior. The suggested language in the Senate Democrats’ letter does not alter the core principles of the antitrust legislation, but it goes a long way toward ensuring the bill cannot be used as a weapon to force companies to host hateful or toxic content that would disadvantage LGBTQ+ and other marginalized communities. We urge Senator Klobuchar to amend the bill as suggested and clearly prohibit use of this legislation to restrict content moderation practices in a manner that would hurt marginalized communities.