[Header image: blue graphic with the text "Safety by Design" and a mobile phone showing the Mastodon and Twitter logos]

Small changes – big effect: how hate on the internet can be reduced

Encountering hate and violence on the internet has almost become the norm: we scroll past insults, threats and disinformation as if it were normal to see them right next to cat videos and messages from family and friends. But it is not normal: consumers can and should be able to demand a minimum level of product safety in the digital space. After all, that is standard practice in the analogue world: cars, medicines and groceries are carefully tested to determine whether they could be dangerous to consumers.

But when it comes to social networks, apps or AI, products are often launched without first examining their potential harm to individuals and society. New platforms like Mastodon aim to take a different approach. Following Elon Musk’s takeover of Twitter, many users created accounts in the decentralised network. In a new data analysis, HateAid explores whether a different product design really means less hate. And if so, what can we learn from it?

Platforms can protect users

The good news first: a safer internet is possible. Our analysis shows that there are significantly fewer potentially insulting and hurtful comments on Mastodon¹ than on Twitter. It is therefore feasible for social networks to effectively protect users from hate. And it is necessary, because digital violence and disinformation contribute to the division of societies worldwide: for example, when right-wing extremists orchestrate coordinated online attacks on political opponents to silence them. Or when conspiracy ideologies are spread to undermine trust in public institutions and politics. Surveys show how commonplace hate and incitement are online: in the European Union alone, more than 50 per cent of young adults have experienced digital violence. And the platforms? They deny their responsibility for this development.

Digital violence as a business model

The way social media work is not a law of nature. On the contrary, it is a conscious decision by the platforms to design them in a certain way. And to accept the consequences: with their algorithms, the platforms contribute significantly to the mass dissemination of violence and disinformation – and they know it. About two years ago, the Wall Street Journal published the so-called Facebook Files. In several articles, journalists analysed internal documents, chat logs and presentations by Meta. The documents and statements by whistleblower Frances Haugen show: the company that owns Facebook and Instagram was aware of the harmful effects of its services. Hate and polarising content were amplified in order to increase user activity and ad revenue, Haugen said. At the same time, little was being done to limit the damage.

Platforms often take arbitrary action against digital violence

We also find that platforms often take only arbitrary action against digital violence. Adjustments to product design and algorithms are usually not transparent, and policies, once introduced, are hardly reliable. Since the takeover by Elon Musk, for example, Twitter has repeatedly made headlines with decisions to reinstate previously suspended accounts and with cuts to its moderation teams. More recently, the introduction of the new “Twitter Blue” subscription model and of reading limits has caused resentment among users. Research also suggests a significant increase in anti-Semitic and racist posts on the platform. As a result, many users have moved to other networks. One of them is Mastodon, which is being touted as an alternative to Twitter. But is it really different?

Decentralised organisation in Fediverse

Mastodon is a decentralised social network. This means that, unlike most other platforms, there is no single large company behind it. Instead, Mastodon belongs to the so-called “Fediverse”. This is made up of a large number of different servers – called “instances” on Mastodon – that can communicate with each other using an open protocol called “ActivityPub”. As with email, not everyone uses the same provider, but they can still exchange messages with each other. In addition to Mastodon, other services in the Fediverse include “Pixelfed”, a photo platform, and “PeerTube”, software for video platforms. According to Meta, the company behind Facebook, its new network “Threads” is also set to become compatible with the ActivityPub protocol in the future. Currently, however, the non-commercial Twitter alternative Mastodon is the best-known application. A 2022 investigation by the German newspaper Süddeutsche Zeitung shows that after Elon Musk’s takeover of Twitter, the largest instance, mastodon.social, saw an enormous increase in accounts.
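To illustrate how servers in the Fediverse find each other, here is a minimal Python sketch of the WebFinger lookup that ActivityPub software such as Mastodon performs when it first encounters an account on another instance. The account handle is a made-up example; any real Fediverse handle would work the same way.

```python
import requests

def find_actor(handle: str):
    """Resolve a Fediverse handle like 'user@mastodon.social' to its
    ActivityPub actor URL via the standard WebFinger discovery step."""
    user, domain = handle.lstrip("@").split("@")
    resp = requests.get(
        f"https://{domain}/.well-known/webfinger",
        params={"resource": f"acct:{user}@{domain}"},
        timeout=10,
    )
    resp.raise_for_status()
    for link in resp.json().get("links", []):
        # The 'self' link points to the machine-readable actor document
        # that other instances fetch when they federate with this account.
        if link.get("rel") == "self":
            return link.get("href")
    return None

# Hypothetical handle for illustration only.
print(find_actor("someuser@mastodon.social"))
```

Once the actor document is known, the servers exchange posts, follows and replies as ActivityPub messages, which is why accounts on different instances can interact much like email users with different providers.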

Lower proportion of potentially insulting comments on Mastodon

As part of a data analysis, we took a closer look at whether the level of potentially insulting and hateful content there differs from that on Twitter. To do this, we used artificial intelligence² to analyse all German-language comments posted on a randomly selected day³ in the so-called “Federated Feed” of mastodon.social. This feed includes not only all the comments from that instance, but also the content from people on other servers whom mastodon.social users follow. We looked at the German-language posts created that day, a total of 15,073. For the same period, all German-language tweets on Twitter that could be identified using word lists were analysed, amounting to 390,832 tweets⁴. The results show that the proportion of potentially insulting comments on Mastodon is significantly lower than on Twitter. According to the AI, 2.95% of the tweets examined have a more than 85% probability of containing insulting and hurtful language. For comments on Mastodon, the figure is just 0.44%. A second AI⁵ came to similar conclusions.

Our analysis shows: There is proportionally less hate on Mastodon than on Twitter.
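Footnote ² describes the tooling behind these figures: the comments were scored in Python with Google’s Perspective API (“Insult” score) and OpenAI’s moderation tool. The sketch below shows, in simplified form, how a single comment can be scored against the 85% threshold mentioned above; the API key, example comments and helper function are placeholders, and the actual analysis pipeline may have looked different.

```python
import requests

API_KEY = "YOUR_PERSPECTIVE_API_KEY"  # placeholder, not a real key
URL = "https://commentanalyzer.googleapis.com/v1alpha1/comments:analyze"

def insult_score(text: str) -> float:
    """Return the Perspective API 'INSULT' score (0 to 1) for a German-language comment."""
    payload = {
        "comment": {"text": text},
        "languages": ["de"],
        "requestedAttributes": {"INSULT": {}},
    }
    resp = requests.post(URL, params={"key": API_KEY}, json=payload, timeout=10)
    resp.raise_for_status()
    return resp.json()["attributeScores"]["INSULT"]["summaryScore"]["value"]

comments = ["Danke für den Hinweis!", "Halt die Klappe, du Idiot."]
# A comment counts as potentially insulting if the model assigns it
# a score above the 0.85 threshold used in the analysis.
flagged = [c for c in comments if insult_score(c) > 0.85]
print(f"{len(flagged)} of {len(comments)} comments above the threshold")
```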

Own rules thanks to federated structure

Of course, this is only part of the picture. With around 1.3 million active users per month, Mastodon is currently significantly smaller than Twitter. But the fact that the network does some things differently is worth a closer look: its federated structure allows users to move their account to another instance. This is particularly interesting because each instance can set its own rules – for example, about what content is tolerated and what is not. So if you don’t agree with the moderation of the server you’ve been using so far – for example, if too many hate comments remain online – you can move without losing your existing contacts.

Partially automated blocking function

And there are other protective measures: most social networks allow you to manually block individual users. But with right-wing to far-right ecosystems comprising tens of thousands of accounts on platforms like Twitter, this quickly becomes a time-consuming chore. Mastodon also allows you to block, mute and report others. In addition, entire instances and therefore thousands of users can be blocked at once with just a few clicks. Instances can also do this amongst themselves (“defederate”). For example, some have blocked all known right-wing extremist servers in the Fediverse to protect their users from attacks from that spectrum. Projects such as “FediBlockHole” – a tool that synchronises your block list with other lists from trusted sources – can even partially automate this process.
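To give an idea of how such blocking can be scripted, here is a minimal Python sketch that blocks entire remote instances for one’s own account via Mastodon’s public REST API. The instance URL, access token and domain list are placeholders; tools like FediBlockHole apply the same idea at the server level using shared block lists.

```python
import requests

INSTANCE = "https://mastodon.social"   # the server your own account lives on (placeholder)
ACCESS_TOKEN = "YOUR_ACCESS_TOKEN"     # placeholder OAuth token with the write:blocks scope

def block_domain(domain: str) -> None:
    """Hide all content from an entire remote instance for your own account,
    the API equivalent of the 'Block domain' button in the Mastodon interface."""
    resp = requests.post(
        f"{INSTANCE}/api/v1/domain_blocks",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        data={"domain": domain},
        timeout=10,
    )
    resp.raise_for_status()

# Hypothetical list of servers whose content you no longer want to see.
for domain in ["hate-instance.example", "troll-server.example"]:
    block_domain(domain)
```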

A comprehensive approach to safety by design

These are all interesting and new approaches. In addition, there are other features, some of which are already being used in various social networks, that Mastodon could benefit from. These include:

  • The ability to hide replies, to prevent comment sections from being hijacked by radical content. Time and again, opportunists try to spread hate and disinformation by commenting on posts from accounts with high reach.
  • The group of people who can reply to a post can be restricted. This gives users more control over whom they interact with.
  • Reporting channels should be transparent to users, and the criteria for moderation decisions should be visible.
  • Private messages could be end-to-end encrypted. Currently, direct messages are not clearly separated from public posts in the Mastodon interface; instead, they appear as posts visible only to the person being addressed. This separation should also be implemented at the level of the programming interface.
  • Diverse moderation teams should be used to identify discrimination at an early stage and to include different perspectives.

To date, no social media platform has implemented these measures in their entirety – not even Mastodon. Instead, platforms introduce individual safety measures for users that can also be arbitrarily removed or changed at any time. How quickly that can happen was demonstrated by Elon Musk’s takeover of Twitter: he radically questioned Twitter’s existing safety measures and in some cases removed or repurposed them. If the existing measures on all platforms were bundled and consistently applied in their entirety, they could provide a first draft of a comprehensive “safety by design” concept (product safety).

Safety by design for a more peaceful network

The analysis and a closer look at the decentralised Fediverse show that social networks make it too easy for themselves when they claim that the current situation on the internet cannot be changed. We demand that platforms be designed to minimise the damage they cause to individuals and our democracy. Social media companies have a role to play here, even if it means foregoing profits. Safety by design is possible – and necessary.

This analysis was made possible by the participants of the Deutsche Postcode Lotterie.


______________

¹ Figures refer to the “Federated Feed” on mastodon.social.

² The data was analysed using the Python programming language. We used Google’s Perspective API (score “Insult”) and OpenAI’s Moderations tool (category “hate”) to score potentially insulting and hateful comments. Following changes to Twitter’s data access, we are currently unable to conduct such analyses.

³ 23 March 2023

⁴ The data collection does not claim to be exhaustive. The number of German-language tweets was first determined using word lists containing common German terms such as articles and prepositions; the linguistic classification was then checked. It is possible that the total number of German-language tweets on that day is higher.

⁵ The figures in the text refer to Google’s Perspective API. The second AI used, OpenAI’s moderation tool (category “hate”), produces the following results: 3.45% potentially hateful tweets on Twitter, 0.6% potentially hateful comments on Mastodon.


