The Exclusion Calculator: A Demographics-Based Approach to Design
Who does this case study involve?
LGBTQ+ content creators on social media platforms
The case
Social media platforms rely heavily on automated content moderation systems to detect and remove content that violates platform policies. These systems typically combine keyword filters with machine-learning classifiers trained to recognise patterns associated with harmful or explicit material. However, LGBTQ+ creators have reported that posts discussing identity, relationships, or activism are often flagged or removed without explanation.
Because automated systems lack contextual understanding, they can misclassify LGBTQ+ terminology and imagery as inappropriate: a post about same-sex marriage, or a reclaimed term used within the community, can match the same surface patterns as genuinely prohibited content. Appeals processes are frequently slow or opaque, leaving creators without recourse. The result can be reduced visibility, loss of income, and emotional distress, particularly for creators who depend on these platforms for community-building or their livelihood.
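This failure mode can be sketched in a few lines. The following is an illustrative toy example only, not any platform's actual system: the blocklist and the tokenisation are hypothetical assumptions, chosen to show how matching surface patterns without context flags benign identity-related posts.

```python
import re

# Hypothetical blocklist. Real systems are far more complex, but the
# failure mode is the same: matching surface patterns without context.
FLAGGED_TERMS = {"explicit", "adult", "sex"}

def naive_flag(post: str) -> bool:
    """Flag a post if any blocklisted token appears, ignoring context."""
    tokens = set(re.findall(r"[a-z]+", post.lower()))
    return bool(tokens & FLAGGED_TERMS)

# A benign post about identity trips the filter because "same-sex"
# contains a blocklisted token; the filter sees no surrounding context.
print(naive_flag("Five years since same-sex marriage was legalised!"))  # True
print(naive_flag("My new banana bread recipe."))                        # False
```

The first post is flagged and the second is not, even though neither violates any plausible policy; the only difference is that identity-related language happens to overlap with the blocklist.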
Findings
This case illustrates how automated moderation can disproportionately affect marginalised groups. It highlights the need for greater transparency, contextual awareness, and human oversight in content moderation systems to prevent discriminatory outcomes.
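One common way to provide the human oversight and transparency the findings call for is to auto-act only on high-confidence classifications and route uncertain ones to a human reviewer, attaching a stated reason to every decision. The sketch below is a hypothetical illustration of that pattern; the thresholds, the Decision type, and the moderate function are assumptions, not any real platform's API.

```python
from dataclasses import dataclass

@dataclass
class Decision:
    action: str   # "allow", "remove", or "human_review"
    reason: str   # surfaced to the creator, supporting transparency

def moderate(score: float, low: float = 0.3, high: float = 0.9) -> Decision:
    """Auto-act only on confident scores; escalate the uncertain middle."""
    if score >= high:
        return Decision("remove", f"policy match, confidence {score:.2f}")
    if score <= low:
        return Decision("allow", f"no policy match, confidence {score:.2f}")
    return Decision("human_review", f"uncertain score {score:.2f}; queued for a moderator")

print(moderate(0.95))  # confident violation: removed, with a stated reason
print(moderate(0.55))  # ambiguous (e.g. reclaimed terminology): sent to a person
```

Escalating the uncertain middle band, rather than defaulting to removal, directly targets the over-flagging of contextual language described in this case, and recording a reason for every decision gives creators something concrete to appeal.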
