The DSA: Five recommendations

Remember Peter Steiner’s iconic cartoon with the caption “On the Internet, nobody knows you’re a dog”(1)? Twenty-eight years later, a more fitting version would read “On the Internet, nobody knows you’re a troll.” Big online platforms have handed the microphone to billions of people. What began as the promise of endless freedom for those who previously had no voice quickly turned into a stage for hate and agitation. Overwhelmed by the hate we encounter online, many of us feel unprotected and thus unsafe expressing ourselves.
We offer a closer look at the silencing effect, its links to far-right hate campaigns, the existing dilemma between freedom of expression and limiting hate speech, and what the Digital Services Act can deliver.

Attack on democracy in disguise

Speech-restricting laws and repression in Southeast Asia have demonstrably contributed to self-censorship of political expression among individuals in the region. (2) However, self-censorship is not only used to protect oneself from government crackdowns. With private parties, such as far-right groups, strategically attacking marginalised groups online to silence them, self-censorship is becoming a tool applied by those affected and by bystanders alike. For decades, a self-regulatory approach filling a legislative gap has allowed private companies to make content decisions, orchestrate what we see on our social media feeds and even influence whom we vote for in upcoming elections – all of it wrapped in a multi-billion business model. Now the EU, a frontrunner in this marathon, is finally attempting to lay out some ground rules for online platforms through the Digital Services Act and the Digital Markets Act.

Legislators’ and society’s concerns about Big Tech making content decisions mainly centre on the protection of freedom of speech and data privacy. We are convinced that freedom of speech has yet another enemy, one that is actively using online platforms to undermine democratic values.

Addressing online violence and attacks on our democracies that come disguised as the exercise of free speech is perhaps one of the hardest challenges of this Regulation.

Silencing as an organised act by the far-right

Organised storms of digital violence – hateful comments and posts, personal insults, defamation and threats against family members – are just some of the techniques that organised far-right groups like Reconquista Germanica, the alt-right and the Identitarian Movement use to drive people, especially minorities, out of online debate. (3) And while the far right is clearly a minority, they amplify hateful voices with bots and re-posts so that they echo far and wide. A study by the Institute for Strategic Dialogue found that in Germany only 5% of users are responsible for 50% of all likes given to hateful comments on Facebook. (4) The French have even coined a term for organised hateful attacks online: the digital raid (French: raid numérique).

Social media accounts affiliated with the extreme right in France – in particular with the Identitarian movement – are central to the generation and spread of anti-Arab and/or anti-North African, anti-Muslim and anti-migrant content online. (5) This reveals the organisation behind a broader anti-minority discourse targeted at non-white and non-Christian populations. Against women, silencing techniques are often sexualised, including rape threats and image-based abuse – an issue that deserves an article of its own.

The cost of the dilemma

The avalanche of online hate, combined with algorithmic content curation that tends to highlight content that is emotional and already polarising, misleading, extremist or otherwise problematic, (6) leaves no room for pluralistic debate. Especially when people are driven out: 54% of internet users in Germany no longer dare to express their political opinion online, and 47% rarely participate in online discussions at all, out of fear of becoming victims themselves. (7) The bystanders become part of the silenced. This effect is alarming and should cause an outcry, as it shifts our public debate in favour of those who scream the loudest and ultimately threatens freedom of speech, because the internet is no longer a safe space for everyone.

Further, the line between digital and analogue violence can disappear fast. “Twitter abuse made me buy a home security system,” writer and activist Jaclyn Friedman shared with Amnesty International during its #ToxicTwitter campaign back in 2018. (8) And this is not an isolated case.

With a large part of the population in Germany silenced by online violence, and another part fearing that content deletion based on the decisions of private companies will limit free speech, the dilemma sounds more like a conundrum.

The Digital Services Act proposal put forward by the European Commission, which proposes uniform rules for digital services across the EU, will have to answer this problem rather than ignore it or simply choose one solution over the other. Before policymakers reach the finishing line, the dilemma of freedom of speech versus limiting hate speech deserves proper attention and discussion.

What do human rights courts have to say about this?

Several cases touching upon hate speech and freedom of speech have been ruled on by the European Court of Human Rights (ECHR). (9) What distinguishes them from the issue of silencing, however, is which legal interests are thrown onto “the scale of human rights”.

In 2015, in Delfi AS v. Estonia, the ECHR ruled on the case of an internet news portal that had been held liable for comments posted by its readers which infringed the personality rights of others. The court had to decide whether this constituted a violation of Article 10 of the Convention, which protects freedom of expression – and ruled that it did not. (10) The personality rights of those attacked by the comments outweighed the internet portal’s freedom of expression and consequently justified the demand to take down the hate speech.

From a legal perspective, however, silencing opens up yet another dimension. Instead of weighing “personality rights” against “freedom of expression”, silencing pits “freedom of expression” against “freedom of expression”. This is an area of tension that remains largely unexplored.

To tackle the problem of limiting hate speech, we first need to understand which concrete elements make it so hard to solve. In our context, most systematic silencing comes from private parties. This makes it more indirect and far harder to grasp from a human rights perspective than state censorship, or even censorship originating from private companies.

Despite those differences, the outcome remains the same: individuals are reluctant, even afraid, to express their opinion out of fear of repression – and subjectively, in many cases, this does not differ from any other form of censorship.

What can legislators do now at the regulatory level?

  1. A clear codification at Union level that illegal content will be deleted and acted upon is the first step towards ending the “legal vacuum”.
    It is paramount to let users know that whenever they are confronted with illegal content online and want to act against it, there is a system waiting to support them in an effective, reliable and transparent way. The internet is often perceived as a legal vacuum – a perception that systematic silencing takes advantage of.
  2. Clearly visible, low-threshold reporting channels that are close to the content in question.
    No extra personal data should be required to notify platforms of illegal content.
  3. Online platforms should have a seven-day deadline for the assessment of reported content.
    Otherwise, weeks-long processing could pass as the “timely” assessment proposed in the text. Even one hour on the internet is a long time.
  4. Transparency should be ensured through a mandatory statement of reason for all content decisions from online platforms.
    You have a right to know why your content was deleted in the same way as you have a right to know why a defamatory post about you is still up.
  5. Access to justice through contact point establishment in each Member State and summary proceedings in civil cases.
    When turning to a court, one should not have to send a letter in a foreign language to an office in Ireland. A legal representative should be available in each Member State. Furthermore, establishing summary proceedings in all civil cases concerning online violence would allow for a timely reaction.

These are only some of HateAid’s recommendations on the Digital Services Act, specifically those linked to online content moderation and access to justice. Read more in our DSA Position here.

List of references:

  1. Peter Steiner, The New Yorker, July 5, 1993.
  2. Ong, E. (2021). Online Repression and Self-Censorship: Evidence from Southeast Asia. Government and Opposition, 56(1), 141-162. doi:10.1017/gov.2019.18
  3. R. Ahmed, D. Pisoiu, 2019. The Far Right Online: An Overview of Recent Studies. VOX-Pol. Viewed on 5/05/21
  4. P. Kreißel, J. Ebner, A. Urban, J. Guhl, 2018. Hass auf Knopfdruck. Rechtsextreme Trollfabriken und das Ökosystem koordinierter Hasskampagnen im Netz. Institute for Strategic Dialogue, UK.
  5. C. Guerin, Z. Fourel, C. Gatewood, 2021. La pandémie de COVID-19 : terreau fertile pour la haine en ligne. Institute for Strategic Dialogue, UK.
  6. Lewandowsky, S., Smillie, L., Garcia, D., Hertwig, R., Weatherall, J., Egidy, S., Robertson, R.E., O’connor, C., Kozyreva, A., Lorenz-Spreen, P., Blaschke, Y. and Leiser, M., Technology and Democracy: Understanding the influence of online technologies on political behaviour and decision-making, EUR 30422 EN, Publications Office of the European Union, Luxembourg, 2020, ISBN 978-92-76-24088-4, doi:10.2760/709177, JRC122023.
  7. D. Geschke, A.Klaßen, M.Quent, C. Richter, 2019. #Hass im Netz: Der schleichende Angriff auf unsere Demokratie. Eine bundesweite repräsentative Untersuchung. Institut für Demokratie und Zivilgesellschaft, Germany. ISBN: 978-3-940878-41-0
  8. How experiencing abuse made you change the way you use the platform? YouTube. Uploaded by Amnesty International, 21/03/18.
  9. Press Unit, 2020. Factsheet: Hate Speech. European Court of Human Rights, 2020.
  10. ECHR. Case DELFI AS v Estonia, Application no. 64569/09 Judgement of 16 June 2015. Strasbourg, 2015.