GLAAD released its fourth annual Social Media Safety Index on Tuesday, giving virtually every major social media company a failing grade as it surveyed LGBTQ safety, privacy, and expression online.
According to GLAAD, the world’s largest LGBTQ media advocacy organization, YouTube, X, and Meta’s Facebook, Instagram, and Threads received failing F grades on the SMSI Platform Scorecard for the third consecutive year.
The only exception was TikTok, owned by Chinese company ByteDance, which earned a D+.
Some platforms have shown improvements in their scores since last year. Others have fallen, and overall, the scores remain abysmal, with all platforms other than TikTok receiving F grades.
● TikTok: D+ — 67 percent (+10 points from 2023)
● Facebook: F — 58 percent (-3 points from 2023)
● Instagram: F — 58 percent (-5 points from 2023)
● YouTube: F — 58 percent (+4 points from 2023)
● Threads: F — 51 percent (new 2024 rating)
● X: F — 41 percent (+8 points from 2023)
This year’s report also documents the epidemic of anti-LGBTQ hate, harassment, and disinformation across major social media platforms, noting in particular the high-follower hate accounts and right-wing figures who manufacture and circulate much of this activity.
“In addition to these egregious levels of inadequately moderated anti-LGBTQ hate and disinformation, we also see a corollary problem of over-moderation of legitimate LGBTQ expression — including wrongful takedowns of LGBTQ accounts and creators, shadowbanning, and similar suppression of LGBTQ content. Meta’s recent policy change limiting algorithmic eligibility of so-called ‘political content,’ which the company partly defines as ‘social topics that affect a group of people and/or society at large,’ is especially concerning,” GLAAD Senior Director of Social Media Safety Jenni Olson said in the press release announcing the report’s findings.
Specific LGBTQ safety, privacy, and expression issues identified include:
● Inadequate content moderation and problems with policy development and enforcement (including issues with both failure to mitigate anti-LGBTQ content and over-moderation/suppression of LGBTQ users);
● Harmful algorithms and lack of algorithmic transparency; inadequate transparency and user controls around data privacy;
● An overall lack of transparency and accountability across the industry, among many other issues — all of which disproportionately impact LGBTQ users and other marginalized communities who are uniquely vulnerable to hate, harassment, and discrimination.
Key conclusions:
● Anti-LGBTQ rhetoric and disinformation on social media translates to real-world offline harms.
● Platforms are largely failing to successfully mitigate dangerous anti-LGBTQ hate and disinformation and frequently do not adequately enforce their own policies regarding such content.
● Platforms also disproportionately suppress LGBTQ content, including via removal, demonetization, and forms of shadowbanning.
● There is a lack of effective, meaningful transparency reporting from social media companies with regard to content moderation, algorithms, data protection, and data privacy practices.
Core recommendations:
● Strengthen and enforce existing policies that protect LGBTQ people and others from hate, harassment, and misinformation/disinformation, and also from suppression of legitimate LGBTQ expression.
● Improve moderation, including by training moderators on the needs of LGBTQ users, and moderate across all languages, cultural contexts, and regions. This also means not being overly reliant on AI.
● Be transparent with regard to content moderation, community guidelines, terms of service policy implementation, algorithm designs, and enforcement reports. Such transparency should be facilitated via working with independent researchers.
● Respect data privacy. To protect LGBTQ users from surveillance and discrimination, platforms should reduce the amount of data they collect, infer, and retain. They should cease the practice of targeted surveillance advertising, including the use of algorithmic content recommendation. In addition, they should implement end-to-end encryption by default on all private messaging to protect LGBTQ people from persecution, stalking, and violence.
● Promote civil discourse and proactively message expectations for user behavior, including respecting platform hate and harassment policies.
Read the report here.
Washington Blade courtesy of the National LGBTQ Media Association.