
News

Malicious social media bots increased significantly during the COVID-19 pandemic

A groundbreaking joint study by Aalto University and the Finnish Institute for Health and Welfare (THL) has revealed critical insights into the role of bots in shaping public health discourse during the COVID-19 pandemic, particularly on Twitter.
A person holds a phone showing the Twitter profile, with mask emojis in the header. Photo: Solen Feyissa

The information environment in Finland during the coronavirus pandemic was exceptional and intense in many ways. The spread of disinformation and the number of actors involved reached unprecedented levels. The demand for accurate information was enormous, and the situation was constantly evolving. Information was disseminated through various channels. Official information played a crucial role, but at the same time, social media posed challenges in the fight against false and misleading information.

Malicious bots increased significantly during the pandemic. The activity of bots, i.e. programs imitating human users, was particularly aggressive around the key coronavirus measures, including, for example, the largest information campaigns on COVID-19 vaccinations and guidelines. This was evident in a study that analyzed a total of 1.7 million COVID-19-related tweets posted on Twitter/X in Finland over the course of three years.

Bots accounted for 22 percent of the messages, compared with the roughly 11 percent of content that bots normally produce on Twitter/X. Of the identified bot accounts, 36 percent (4,894) acted maliciously. In particular, they emphasized misinformation, i.e. incorrect information spread unintentionally. About a quarter (approx. 460,000) of all messages contained incorrect information, and roughly the same proportion expressed a negative attitude towards vaccines.
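As a rough illustration of the scale behind these percentages, the back-of-the-envelope sketch below works only from the rounded figures quoted above (not the raw dataset, which is not reproduced here):

```python
# Back-of-the-envelope check of the shares quoted in the article.
# All inputs are the rounded figures reported above, not the raw data.

total_tweets = 1_700_000       # COVID-19 tweets analyzed over three years
bot_share = 0.22               # share of messages produced by bots
malicious_accounts = 4_894     # bot accounts classified as malicious
malicious_share = 0.36         # their share of all identified bot accounts
misinfo_tweets = 460_000       # messages containing incorrect information (approx.)

bot_tweets = total_tweets * bot_share
identified_bot_accounts = malicious_accounts / malicious_share
misinfo_share = misinfo_tweets / total_tweets

print(f"Bot messages: ~{bot_tweets:,.0f}")                           # ~374,000
print(f"Identified bot accounts: ~{identified_bot_accounts:,.0f}")   # ~13,600
print(f"Share of messages with incorrect information: {misinfo_share:.0%}")  # ~27%
```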

According to the study, malicious bots used THL's Twitter account to intentionally spread disinformation, i.e. misleading information, but they did not actually target THL itself. The bots increased the effectiveness and reach of their posts in various ways; for example, they mentioned other accounts in 94 percent of their tweets. The bots also proved adaptable, varying their messages according to the situation.
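To show how a mention rate such as the 94 percent figure can be computed from a tweet dataset, here is a minimal sketch; the records and field names (`text`, `author_is_bot`) are hypothetical placeholders, not the study's actual data schema.

```python
import re

# Hypothetical tweet records; in practice these would come from the
# collected Twitter/X dataset, with bot labels from the classification step.
tweets = [
    {"text": "Remember to check @THLorg guidance before sharing!", "author_is_bot": True},
    {"text": "Vaccines are rolling out next week.", "author_is_bot": True},
    {"text": "@user1 @user2 this claim has been debunked", "author_is_bot": True},
    {"text": "Stay safe everyone.", "author_is_bot": False},
]

MENTION = re.compile(r"(?<!\w)@\w+")  # crude @-mention pattern

bot_tweets = [t for t in tweets if t["author_is_bot"]]
with_mentions = [t for t in bot_tweets if MENTION.search(t["text"])]

mention_rate = len(with_mentions) / len(bot_tweets)
print(f"Share of bot tweets mentioning other accounts: {mention_rate:.0%}")
```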

The study utilized the latest version of Botometer (4.0) to classify bot accounts, going beyond mere identification to differentiate between regular bots and COVID-19-specific malicious bots. This distinction is critical, as it reveals that traditional binary classifications of bots are insufficient. 
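For readers unfamiliar with the tool, the sketch below shows how a single account can be scored with the botometer-python client. The credentials are placeholders, and the thresholding at the end is purely a hypothetical illustration; it is not the classification procedure used in the study, which went further to separate regular from COVID-19-specific malicious bots.

```python
import botometer

# Placeholder credentials; real keys are required for the Botometer API
# (accessed via RapidAPI) and the Twitter/X API.
rapidapi_key = "YOUR_RAPIDAPI_KEY"
twitter_app_auth = {
    "consumer_key": "YOUR_CONSUMER_KEY",
    "consumer_secret": "YOUR_CONSUMER_SECRET",
}

bom = botometer.Botometer(
    wait_on_ratelimit=True,
    rapidapi_key=rapidapi_key,
    **twitter_app_auth,
)

# Score one account; Botometer 4.0 returns per-category scores
# (e.g. astroturf, fake_follower, spammer) alongside an overall score.
result = bom.check_account("@example_account")
overall = result["display_scores"]["universal"]["overall"]

# Hypothetical cut-off for illustration only: display scores range from 0 to 5,
# and higher values indicate more bot-like behavior.
is_bot = overall >= 4
print(f"overall score: {overall}, flagged as bot: {is_bot}")
```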

“The findings highlight how regular bots often align with governmental messaging, enhancing their credibility and influence, while malicious bots employ more aggressive and deceptive tactics. The malicious bots may amplify false narratives, manipulate public opinion, and create confusion by blurring the line between credible and non-credible sources,” says Ali Unlu, the primary author of the study. He is a Visiting Researcher at Aalto University’s Department of Computer Science and Senior Researcher at THL.

Bot activity should be taken into account in public health communication

Malicious bots pose a persistent threat even after the pandemic's peak. They continue to spread misinformation, particularly concerning vaccines, by exploiting public fears and skepticism.

The research suggests that these bots could have long-term implications for public trust in health institutions and highlights the importance of developing more sophisticated tools for detecting and mitigating the influence of such bots.

“Public health agencies need to enhance their monitoring and response strategies. Our study suggests preemptive measures such as public education on bot activity and improved detection tools. It also calls for more action from social media platforms to curb clearly false information and verify account authenticity, which could significantly improve public trust and the effectiveness of public health communication,” says Lead Expert Tuukka Tammi from THL.

Non-English setting makes the research unique

Unlike most studies in this domain, which are predominantly in English, this research is one of the few that investigates social media bots in a non-English language, specifically Finnish. This unique focus allows for a detailed examination of external factors such as geographical dispersion and population diversity in Finland, providing valuable insights that are often overlooked in global studies.

“This study represents a significant contribution to understanding the complex role of bots in public health communication, particularly in the context of a global health crisis. It highlights the dual nature of bot activity: regular bots can support public health efforts, while malicious bots pose a serious threat to public trust and the effectiveness of health messaging. The research provides a roadmap for future studies and public health strategies to combat the ongoing challenge of misinformation in the digital age,” concludes Professor of Practice Nitin Sawhney from Aalto University’s Department of Computer Science.

The study was conducted as part of the joint Crisis Narratives research project between Aalto University (ARTS & SCI) and THL, and was funded by the Research Council of Finland from 2020 to 2024.

