Dear Executive Teams of all major social media platforms (Meta, X/Twitter, TikTok, YouTube, Reddit, Discord, etc.):
We write as concerned users to urge you to strengthen and enforce your reporting and moderation systems without delay. Despite years of growing awareness, countless users, especially women, children, LGBTQ+ individuals, and members of marginalized communities, continue to face daily harassment, threats, and hate speech online. Cyberbullying remains one of the most persistent forms of online abuse, and the mechanisms currently in place to address it are simply not working.
Research confirms that online abuse is not a minor inconvenience: it causes real and lasting harm. A 2019 UNICEF/UN survey found that one in three young people across 30 countries reported being a victim of cyberbullying. Psychological studies link online harassment to increased risks of anxiety, depression, and suicidal ideation. Victims frequently report feelings of isolation, fear, and trauma. These harms extend well beyond the screen, affecting users' ability to participate in public discourse, attend school, work, or even feel safe in their own homes.
Yet even with these stakes, platforms routinely fail to act. A 2021 Pew Research Center study found that 79% of Americans believe social media companies are doing an only fair or poor job of addressing online harassment. Survivors consistently report that reporting tools are ineffective, responses are inconsistent or absent, and abusive content remains online long after it is flagged. This failure to act endangers lives.
We understand that platforms deploy automated systems and large-scale AI to moderate content, and in many cases the technology is impressive. But the core failure is not a lack of tools; it is a failure to act on reports. No AI model can replace the responsibility to investigate flagged content promptly and thoroughly. It is unacceptable that, in 2025, a user can report a death threat or a racial slur and receive neither action nor acknowledgment. When users do not believe their reports will be reviewed or taken seriously, they stop reporting altogether, allowing abuse to thrive unchecked.
We therefore call for the following urgent reforms:
- Immediate investment in responsive, well-staffed content moderation teams. Every report of hateful, threatening, or abusive behavior must be reviewed by a trained human moderator in a timely manner. No report should go unseen, and platforms must ensure enough staffing to handle these reviews at scale.
- A complete overhaul of the reporting interface. Reporting tools must be easier to use, with clear steps and categories for reporting different forms of abuse (e.g., threats, bullying, hate speech, impersonation). Victims should be guided through the process, not burdened by confusing or vague forms.
- Specific prioritization of high-risk reports. Any report containing threats of violence, hate speech, doxxing, or child exploitation must be flagged and fast-tracked for immediate review. These reports should not sit in queues behind minor policy infractions.
- Updated Terms of Service that explicitly ban hate and harassment. Too often, abusive users hide behind vague or outdated policies. Every platform must clearly define unacceptable behavior—including harassment, bullying, hate speech, targeted slurs, incitement, and abusive content—and commit to enforcing these definitions without exception.
- Regular audits of moderation systems. Third-party or internal audits must be conducted to assess how often reports are ignored, mishandled, or incorrectly closed. The goal is not just transparency—it's improvement. Platforms must identify failures in their systems and take steps to correct them.
We are not asking for perfection. We are asking for action. With billions of users and unprecedented computing power, social media companies have the tools and the resources. What’s missing is the will to prioritize user safety.
If these issues are not addressed, users will continue to suffer. Victims of harassment will continue to be ignored. And the platforms themselves will continue to lose the trust of the very people who built them. We urge you to treat this not as a technical flaw, but as a human rights crisis.
Please act now, because no one should fear for their safety just for being online.
Do you want to sign this open letter? Open a PR!
Signed,
- Aviv Keller, @avivkeller