[Image: a magnifying glass above the logos of the platforms Facebook, YouTube, Twitter & Instagram]

Transparency reports: How social media platforms fail on users’ rights  

Once again, social media platforms such as Twitter, YouTube and others have published their biannual transparency reports, which show the number of complaints, content removal rates and, for the first time, the number of objections filed by German users.

Large social media platforms are obliged to publish transparency reports under Germany’s Network Enforcement Act (NetzDG), which came into effect in 2017, long before the DSA was born. With this controversial law, Germany took the lead in Europe in regulating how social media platforms handle illegal content, in order to curb the rampant spread of illegal hate and incitement by far-right movements within society.

Once again, the reports show that only a small fraction of reports leads to the removal of content, and once again the vast majority of that content is removed not on the basis of the law but on the basis of the platforms’ terms of service. We also see that the data the platforms are required to provide is of little evidentiary value overall and therefore does not allow meaningful conclusions about the quality of content moderation. We took a closer look at the numbers:

Platforms’ transparency reports (January – June 2022) show: Low removal rates 

  • Twitter: 829,370 reported, 118,938 removed (14.3 %).
  • Facebook: 170,233 reported, 22,826 removed (13.4 %).
  • Instagram: 63,696 reported, 5,566 removed (8.7 %).
  • TikTok: 226,479 reported, 34,727 removed (15.3 %).
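
For reference, the removal rates above are simply the number of removed items divided by the number of reported items. A minimal sketch in Python, using the figures listed above, of how these percentages can be reproduced:

```python
# Reported vs. removed items per platform (January – June 2022),
# figures as published in the NetzDG transparency reports cited above.
reports = {
    "Twitter":   (829_370, 118_938),
    "Facebook":  (170_233, 22_826),
    "Instagram": (63_696, 5_566),
    "TikTok":    (226_479, 34_727),
}

for platform, (reported, removed) in reports.items():
    rate = removed / reported * 100  # removal rate in percent
    print(f"{platform}: {rate:.1f} % of reported content removed")
```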

Low quality of content moderation: Findings from investigative research and surveys 

While the reports provide a quantitative overview of the content moderation practice of large social media platforms, they say nothing about its quality. Our findings give reasonable grounds to doubt how reports are assessed by the platforms, and users confirm that content moderation practice is opaque and unsatisfying:

  • In the run-up to the French presidential election in March 2022, Facebook failed to remove reported illegal hate comments in 70 percent of cases.
  • One in two people is dissatisfied with how platforms handle reported abusive content. The main reasons: a lack of transparency, incomprehensible responses and no reaction at all from the platforms.
  • 48 % of those who have already reported violent content complain that the platforms have done nothing about it.

What is new: Internal complaint handling 

At the end of 2021, the obligation to provide a user redress mechanism for content decisions came into force. Social media platforms must give their users the opportunity to appeal content decisions, such as the removal or non-removal of reported content. The platforms must also report the number of complaints and the number of pieces of content that were subsequently removed or restored.

Surprisingly, no such numbers can be found in Twitter’s transparency report, as far as we can tell. The other platforms do include this information, and it clearly shows that user redress is a meaningful instrument for reassessing content moderation decisions. Moreover, it is not only used by uploaders affected by the removal of their content; users who are dissatisfied with the rejection of their reports also use the new redress function. They are successful in about 8 % of cases, with Facebook showing the lowest success rate at 6 %. Strikingly, it was TikTok that received the most complaints about non-removal of content from reporting users – twice as many as Facebook, which has three times as many users in Germany.

 

Hidden and deterred: How reporting channels are designed to discourage reports of illegal content

All users of social networks in Germany have the right to a simple, easily visible and accessible channel for reporting violent content to the platform. This right is guaranteed by the NetzDG, which prescribes standards that platforms must adhere to. Among others, these include: the removal of manifestly illegal content within 24 hours of receipt of the report, biannual transparency reports, easy access to the platforms via a national point of contact, and redress options against content decisions. The idea is that platforms should offer a minimum level of protection and accountability to their users.

However, for notifying illegal content, social networks have set up systems with so many options that they can easily confuse and mislead users. Users have to choose between reporting a violation of the platform’s terms of service – the internal rules the platforms set for themselves – or a violation of the NetzDG. If users choose the NetzDG, it gets complicated: they must next pick from a list of sections of the criminal code without any further explanation. Since it is hardly possible to make an educated guess without a law degree, many users refrain from reporting or avoid that option altogether.

Some real-life examples of non-user-friendly settings on social media:  

  • YouTube has a button for general reports and an additional optional field for a NetzDG report. However, this optional field is very easy to miss.  
  • When using Instagram in the web browser, there is only a general reporting option, but there is no option to make a NetzDG report. As a result, the majority of users who want to report hate speech, insults, and threats end up doing so via the community standards.   
  • When reporting content under the NetzDG on Twitter, users are warned that they are making a serious accusation of illegal behavior and that their profile can be blocked if inaccurate accusations occur too often. This could even be labeled a dark pattern designed to deter users from filing complaints at all.
  • On TikTok, users are informed about the decision but do not receive any other information required by the NetzDG, such as information about redress options or criminal complaints.

 
Community guidelines vs. NetzDG: How platforms circumvent legal obligations 

It has been observed for several years: instead of assessing reports of illegal hate speech against the legal requirements of the NetzDG, the platforms first check all reports against their own community guidelines.

What’s the problem with reporting only on the grounds of the terms of service?

It is not only that the terms of service are very vague and their application often incomprehensible. Relying on them also helps the platforms wriggle out of the obligations that come with the treatment of illegal content, such as providing comprehensible reasoning or a national point of contact. Moreover, we have seen in the past that the two options are treated with a double standard: in an assessment we conducted together with Reset before the German federal elections, the removal rate was about a third higher when illegal content was reported under the NetzDG rather than under Facebook’s community standards.

Experiments like this indicate that the suspicion of massive underblocking may well be justified. While critics of the NetzDG often warn of massive overblocking due to the removal deadlines, we have seen no evidence of this so far. The fact that on average only a tiny fraction of about 13 % of reports leads to the removal of content shows that the strict deadlines do not result in the unchecked, automated removal of whatever content is reported. The reports from Facebook and Instagram, for example, clearly show that only decisions made on the grounds of the NetzDG can be challenged via user redress – the narrowest possible reading of their legal obligations. This proves once again that platforms will always look for loopholes to avoid being held accountable under the law, preferring to answer only to their own rules.
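
As a quick plausibility check of the roughly 13 % figure: a minimal sketch, assuming the figure is the simple (unweighted) average of the four platform removal rates from the January – June 2022 reports listed above:

```python
# Removal rates in percent, taken from the January – June 2022 reports listed above.
rates = {"Twitter": 14.3, "Facebook": 13.4, "Instagram": 8.7, "TikTok": 15.3}

average = sum(rates.values()) / len(rates)
print(f"Unweighted average removal rate: {average:.1f} %")  # prints 12.9 %, i.e. roughly 13 %
```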

These are the lessons for DSA enforcement

It is more important than ever that the EU and its Member States learn from these mistakes when enforcing the Digital Services Act. The rights of users must not be undermined just to make things more convenient for the platforms.

 
We urge regulators to watch out for manipulative behavior from online platforms that undermines users’ rights: 

  • To put it really simply: enforce the rules and make sure that platforms are not ignoring their obligations under the DSA.
  • Do not let platforms hide reporting channels, especially for illegal content. User-friendly and accessible reporting channels should always be located close to the content being reported.
  • Do not allow platforms to intimidate users and discourage them from reporting illegal content. It is the platforms’ job to properly assess reports, regardless of the gravity of the reported content.