The DSA: Victims of online violence urgently need access to justice
Imagine your face photoshopped onto a nude image and published on Facebook, with a caption claiming it is you. You decide to report it, but you receive no reaction and no help from the platform. Would your sense of justice tell you to fight for your rights in court?
In two years of advising victims of online violence and financing their litigation, HateAid has learned the following: if you are willing to take on substantial financial risk and persist long enough, the justice system may eventually have your back. If not, abusive posts and comments stay up on the virtual wall, and platforms like Facebook continue milking the cash cow with content that should never have been posted in the first place. We argue that this status quo is unbearable, and that the proposed new EU platform regulation, the Digital Services Act (DSA), overlooks the measures needed to enforce our right of access to justice.
Here’s how to fix it for victims of online violence.
Two notice-and-takedown letters later, Twitter still did not act
HateAid supports people affected by digital violence on social media platforms every day. What we see is that going to court is sometimes tricky and always exhausting.
One of our cases is exemplary for many others: a well-known German journalist was confronted with posts containing sexualized violence and insults directed at her because of her political views. She decided to file a criminal complaint. The platform refused to disclose the perpetrators’ identities to the prosecutors, which is why our client turned to civil proceedings to obtain a court order compelling the platform to provide that information. It took several further steps until the perpetrators were identified, yet the platform still did not delete the comments in question, even though a court order had ruled them illegal. Two notice-and-takedown letters later, Twitter still did not act. Our client filed another lawsuit against the platform and has now been awaiting its decision for seven months. The defamatory posts, however, remain visible throughout all these procedures, spreading across the internet unhindered.
Journalists and reporters are among the groups most exposed to online violence on social media platforms. Earlier this year, Reporters Without Borders sued Facebook for breaking its promise to provide a “safe” and “error-free” online environment, citing the large-scale proliferation of hate speech and false information on its networks.
On the web, no one has two years to wait
Generally, criminal complaints concerning illegal hate speech in Germany are rarely successful. Even the very first step, identifying the perpetrators, succeeds in only 28 % of cases, with the odds varying from 66 % for Facebook to 7 % for Twitter, down to Telegram, where identification is nearly impossible. (1) And if the long journey of identification does succeed, litigation is just as lengthy and complex and carries a high financial risk, most often adding up to 4 000 – 5 000 €. It is not rare for two instances of litigation to take two years.
So, with prosecution often ineffective and the financial risk of litigation high, contacting the platform directly with a legal request for removal may be the last remaining option to get the illegal content in question deleted.
In theory, this option has advantages: cumbersome procedures can be avoided and results achieved much faster. In reality, however, it plays out differently.
Platforms are aware of this imbalance of power between them and us, their users. They rarely remove content – according to the Counter Extremism Project, the overall average takedown rate of illegal content by very large platforms (gatekeepers) in response to user notices is 42 % (2) – and their decisions are highly arbitrary, as the case described at the outset shows.
And before you can even try your luck, you first have to play a game of hide-and-seek: search the imprint for an address and struggle with the legally secure service of documents abroad that litigation requires. We still do not know whether the last letter we sent to Twitter nine months ago on behalf of a client ever reached its headquarters.
The DSA fails to strengthen judicial redress for users
The fast-paced digital environment has put our judicial systems, laws and policy frameworks under a stress test, and users suffer most from the resulting legal uncertainty.
So how does the current DSA proposal address the problem of arbitrary decisions and high-threshold access to justice in the area of illegal hate speech online? Instead of strengthening the rule of law as guaranteed by national courts, the Commission attempts to build a parallel universe by installing so-called out-of-court dispute settlement bodies. While these might be helpful, they do not provide a complete solution.
Generally, such a body is supposed to decide on unsettled content moderation cases as a kind of “final instance” after internal complaint handling. The idea behind this, as the Commission puts it, is to “complement, yet leave unaffected in all respects, the possibility to seek judicial redress”. (3) Apart from the extremely vague provisions governing these bodies, several questions must be answered by the governments and the European Parliament: Whose national law applies – that of the platform’s place of establishment or that of the user? If these bodies make “binding” decisions, as Art. 18 proposes, what is their legal nature? Can their decisions even be challenged, or do they have the final say on content questions? And if so, what are the consequences for a user’s basic options of judicial redress?
With all those questions and the complementary nature of this body in mind, the DSA should first of all strengthen judicial redress and empower users by widening their possibilities to take legal action in their national judicial systems. That is access to justice in its most powerful form.
Summary proceedings – already known in EU law
HateAid believes, and the European Parliament has voiced, that final decisions on online content must be reserved for courts. The rule of law is the best way to protect freedom of expression from arbitrary decision-making by online platforms. But how can you refer users to courts that cannot be accessed under reasonable conditions? And to proceedings that may cost more than a month’s salary and take up to a year? Currently, the path to a binding decision is long and exhausting for individuals, which is why the Commission is proposing alternative means. The European Parliament should now focus on enabling users to seek judicial review under reasonable and affordable conditions – and thereby secure access to justice at its root, as the guarantee of freedom of expression.
There is a way to combine quick, effective decisions with the guarantee of a court ruling. It is called summary proceedings, and the concept is not new to EU law: Directive 2004/48/EC already introduced this instrument in the field of intellectual property.
On the internet, quick decisions and actions matter in a radically different way than in the analogue world. One minute online can be as impactful as one year on a billboard in your city. We therefore need quick decisions – but we also want those decisions made under the rule of law. Summary proceedings are a form of litigation that allows quick action to prevent further damage where the case is sufficiently clear. They can significantly improve the situation of all online users enforcing their rights against online platforms.
Point of contact in your country, in your language
One additional step must be taken to make summary proceedings possible: users need an effective means of contacting online platforms. This is why we suggest that every online platform be obliged to appoint an authorised recipient for all kinds of legal proceedings concerning content moderation in every Member State. This authorised recipient would enable legally binding service of documents. As a user, you should be able to contact the platform directly and transparently in your home Member State, in one of your country’s official languages, whenever you seek to enforce your rights.
While it may seem as though the Commission is forcing platforms to become more approachable by introducing a mandatory single point of contact and a legal representative, these provisions only facilitate communication between platforms and authorities. Private individuals are left with very few ways to contact platforms themselves, and platforms are handed a convenient excuse not to make themselves more approachable to users.
Contacting platforms directly through a point of contact in each Member State, in its official language, is a simple yet powerful idea. Giving power to users might be as uncomfortable for Big Tech as it is empowering for its users.
All users deserve legal certainty
It is true that access to justice in the field of illegal hate speech is currently precarious. And even though the out-of-court settlement body could probably be called a fixer-upper, victims of online violence – and all users – deserve legal certainty, especially at a time when the EU is seeking to protect the rule of law. The digital environment should be no different.
List of References:
(1) Statistics provided by the Public Prosecutor General’s Office in Frankfurt am Main, Hesse.
(2) Policy Brief “notice and (NO) action”: Lessons (not) learned from testing the content moderation systems of very large social media platforms. Counter Extremism Project, June 2021.
(3) Proposal for a Regulation of the European Parliament and of the Council on a Single Market For Digital Services (Digital Services Act) and amending Directive 2000/31/EC, Recital 44, p.27.