Most major social media platforms are failing to uphold LGBTQ safety, privacy, and expression, according to the most recent edition of an annual GLAAD report known as the Social Media Safety Index.
YouTube, X, Facebook, Instagram, and Threads all received “F” grades in the index, while TikTok earned a “D+.” TikTok’s numerical score was 67, a 10-point increase from last year. X also improved by eight points over last year but still finished with a dismal 41, the worst among all platforms in the report. Facebook, Instagram, and YouTube each finished with a score of 58.
The platforms lose points for issues such as failing to review problematic content, using what the report describes as “harmful” algorithms, and demonstrating a lack of transparency and accountability. This year’s report, GLAAD said, reflects a broader failure by platforms to rein in anti-LGBTQ hate even as they suppress legitimate LGBTQ-related content. Platforms are also failing to provide transparency regarding content moderation, data privacy, and more, according to the report.
“Leaders of social media companies are failing at their responsibility to make safe products,” GLAAD CEO Sarah Kate Ellis said in a written statement. “When it comes to anti-LGBTQ hate and disinformation, the industry is dangerously lacking on enforcement of current policies. There is a direct relationship between online harms and the hundreds of anti-LGBTQ legislative attacks, rising rates of real-world anti-LGBTQ violence, and threats of violence, that social media platforms are responsible for and should act with urgency to address.”
GLAAD’s report praised TikTok for improvements such as revising its anti-discrimination advertising policy to ban advertisers from wrongfully targeting or excluding certain people from ads based on their sexual orientation or gender identity. Furthermore, TikTok is one of only two social media platforms to bar both targeted misgendering and deadnaming, according to the report. Still, the report chided TikTok for failing to publish data on its LGBTQ workforce and for disclosing little about how it addresses the removal and demonetization of LGBTQ creators.
Looking at Meta’s Facebook and Instagram, whose scores dropped over the last year, GLAAD acknowledged that there is a policy protecting LGBTQ users from targeted misgendering, but said enforcement of that policy is shaky because it requires self-reporting and excludes public figures. As with TikTok, GLAAD criticized Facebook and Instagram for failing to uphold transparency in data collection and other areas. Threads, which is also owned by Meta and is a new addition to the report, received the same negative feedback from GLAAD over its self-reporting requirements and transparency woes.
YouTube, on the other hand, saw its score increase over the last year thanks to the introduction of a policy allowing creators to add pronouns to their channels and the continuation of an effort to diversify its workforce. Still, it fails to implement sufficient protections for LGBTQ users, according to GLAAD, which said YouTube is the only company evaluated in the report that does not protect trans, gender non-conforming, and non-binary users from misgendering or deadnaming. Along with other platforms, YouTube faces criticism from GLAAD over limited transparency.
The social media platform with the worst grade, X, actually took some steps forward in the last year, according to the report. The platform quietly reinstated a rule barring targeted misgendering and deadnaming, making it one of only two platforms in the report, alongside TikTok, to do so. But, as with Meta’s platforms, enforcement relies on self-reporting: the company must hear from targeted individuals before determining whether the rule was violated, the report noted.
GLAAD outlined five core recommendations for social media companies to improve the broader landscape: strengthen and enforce existing policies protecting LGBTQ people; improve moderation, including by limiting the overuse of AI; show more transparency by working with independent researchers; reduce data collection to protect LGBTQ people from surveillance and discrimination; and promote civil discourse.
“In addition to these egregious levels of inadequately moderated anti-LGBTQ hate and disinformation, we also see a corollary problem of over-moderation of legitimate LGBTQ expression — including wrongful takedowns of LGBTQ accounts and creators, shadowbanning, and similar suppression of LGBTQ content,” GLAAD’s senior director of social media safety, Jenni Olson, said in a written statement. “Meta’s recent policy change limiting algorithmic eligibility of so-called ‘political content,’ which the company partly defines as: ‘social topics that affect a group of people and/or society at large’ is especially concerning.”