As social media becomes more deeply embedded in personal, business, and political relations, preventing the spread of hate speech and disinformation on these platforms has become increasingly important. At present, hate speech and disinformation are moderated on social media through: (1) voluntary self-regulation by service providers, and (2) government regulation that imposes obligations and liability on users and service providers. In light of the numerous lapses in content moderation and the ever-present risk of censorship through government regulation of social media, several questions have arisen about how content moderation can be improved.
This strategy brief presents a case study of content moderation in Sri Lanka and the risks of relying on mechanisms based on voluntary self-regulation and government regulation. Drawing on this case study, the brief proposes an alternative approach that seeks to drive content moderation by creating ‘reputational costs’ for social media companies that fail to moderate hate speech and disinformation. The brief also presents a strategy centered on developing vertical accountability mechanisms capable of generating national and global narratives against lapses in content moderation by social media companies.