#Kriegstreiber: Pushing warmonger accusations into the middle of society
Disinformation, misleading content and propaganda have played a major role since the start of the war of aggression against Ukraine, especially on social media platforms. Twitter claims to be reacting to these developments. In mid-March, the short-message service announced the following goal: users should not be misled by contextless information on the platform. Misleading information could be, for example, a tweet that republishes old video footage and suggests that the video comes from the current war in Ukraine.
This premise even applies to content that is legal and within the scope of Twitter’s internal rules. To enforce this goal, Twitter promised to reduce the reach of such tweets and not to proactively surface them in other users’ timelines. Furthermore, the platform wants to add labels to the content that can provide context (source). However, there is still no precise evidence or figures on how often, and which of, these measures have actually been carried out. For HateAid and other civil society organisations, the implementation of this policy is therefore hardly verifiable.
#Kriegstreiber: We put Twitter to the test
From April 2022, the term “warmonger” spread at lightning speed on Twitter. Our research on the use of #Kriegstreiber points to targeted defamation campaigns under the hashtag. Many accounts spreading the term “warmonger” accuse people who have previously shown solidarity with Ukraine of fuelling the war in Ukraine. The aim is to undermine the credibility of politicians, journalists and committed citizens and thus to silence them or even drive them completely out of the public sphere.
We wanted to know how Twitter deals with posts that contain pro-Kremlin propaganda and “warmonger” accusations. We therefore reported 74 critical posts accusing politicians and others of being “warmongers”. About half of these were legally relevant or violated Twitter’s internal guidelines. The result surprised us.
39 of the reported posts should, in our estimation, have been removed from Twitter, either as an insult under the Criminal Code or as a violation of Twitter’s rules. The remaining 35 reported posts could have been removed because we classified them as Kremlin-friendly propaganda. With this sample, we wanted to determine how Twitter applies its new guidelines.
Shared 235 times, liked 900 times: high reach for propaganda
Our Twitter research not only shows that questionable posts are tolerated by Twitter. It also shows that there is no evidence Twitter is living up to its own promise to clamp down on misleading content. In fact, tweets with “warmonger” statements sometimes achieve very high reach. A tweet from 20 April 2022 accusing the German politicians Hofreiter (Member of Parliament), Federal Chancellor Scholz and Minister of Foreign Affairs Baerbock of wanting to fuel a nuclear war received 900 likes. We documented other tweets that also achieved considerable reach, with 60 or even 100 likes.
The vast majority of these tweets are retweets of, or replies to, original content from accounts with enormous reach. These are mainly the official accounts of federal politicians, journalists of large media outlets and, notably, the public broadcasters, such as Tagesschau and ZDF. The accounts that spread pro-Kremlin propaganda free-ride on the large reach of these political and journalistic accounts. By commenting on and sharing their posts, they manage to penetrate the middle of society through these accounts’ followers.
For example, “warmonger” accusations were posted particularly frequently under the Twitter accounts of the media outlets Tagesschau (3.6 million followers), BILD (1.9 million followers) and Die Welt (1.7 million followers). Among politicians’ accounts, especially those of Chancellor Olaf Scholz (489,000 followers), Friedrich Merz (CDU chairman, 230,000 followers), Andrij Melnyk (Ukraine’s ambassador to Germany, 136,000 followers) and Marie-Agnes Strack-Zimmermann (Member of Parliament, 124,000 followers) were used to spread the “warmonger” narrative.
We suspect that this is not a coincidence but a strategy — one aimed at circumventing the platforms’ measures while reaching as many people as possible.
The fine line between propaganda and freedom of expression
For the platforms, this is not an easy situation to resolve. In the case of Twitter, the vast majority of posts that serve the “warmonger” narrative violate at most the internal guidelines, but not the law. Yet according to our findings, even the posts that do violate Twitter’s rules are apparently not removed, not even after being reported to the platform. This is disappointing and runs counter to what the platform publicly promises.
The “warmonger” narrative is neither clear hate speech, because no one is wished dead, nor clear disinformation. And yet campaigns like this present platforms with a dilemma: on the one hand, they are under no obligation to act. On the other hand, they watch themselves and their algorithms being misused for targeted and strategic war propaganda. For it is above all polarising content that is frequently shared and amplified by the platforms’ recommendation systems. Thus not only “warmonger” accusations but also hate speech and disinformation spread rapidly on the internet.
Twitter and other platforms must act now!
Effective measures are needed now to prevent the already overheated debate from becoming even more heated and to minimise the damage. In the face of the war against Ukraine, which violates international law, the platforms cannot invoke “neutrality”, but must decide which side of history they want to be on. The reach of propaganda and disinformation must be effectively curbed. Twitter and the other social media platforms must investigate indications of strategic abuse by highly active accounts and check whether these accounts are authentic. However, reporting such accounts is not sufficiently covered by the existing categories in the platforms’ reporting channels for users.
Our recommendations for social media platforms:
- Platform measures must be dynamic so that the platforms can react quickly to new developments and prevent the strategic spread of propaganda. Examples could include temporarily blocking particularly active accounts or hiding comments. We also consider Twitter’s already announced labelling of tweets and accounts, which limits their dissemination, to be relevant.
- Harmful and threatening posts that fall outside of hate speech and disinformation must be included in the community guidelines and in risk assessments.
- Users reporting posts should be able to select the category “disinformation” when flagging content, as well as “propaganda” or a similar term, to improve content moderation. First attempts at this already exist, e.g. on Facebook and TikTok.