
Bots’ Activity on COVID-19 Pro and Anti-Vaccination Networks: Analysis of Spanish-Written Messages on Twitter

Carlos Ruiz-Núñez
Sergio Segado-Fernández
Beatriz Jiménez-Gómez
Pedro Jesús Jiménez Hidalgo
Carlos Santiago Romero Magdalena
María del Carmen Águila Pollo
Azucena Santillán-Garcia
Ivan Herrera-Peco
PhD Program in Biomedicine, Translational Research and New Health Technologies, School of Medicine, University of Malaga, Blvr. Louis Pasteur, 29010 Málaga, Spain
Nursing Department, Faculty of Medicine, Universidad Alfonso X el Sabio, Avda Universidad, 1, Villanueva de la Cañada, 28691 Madrid, Spain
Traumatology and Orthopedic Surgery Service, Hospital Universitario de Móstoles, C/Dr. Luis Montes s/n., 28935 Madrid, Spain
Faculty of Health Sciences, Universidad Alfonso X el Sabio, Avda Universidad, 1, Villanueva de la Cañada, 28691 Madrid, Spain
Valencia International University, C/Pintor Sorolla 21, 46002 Valencia, Spain
Author to whom correspondence should be addressed.
Vaccines 2022, 10(8), 1240;
Submission received: 25 June 2022 / Revised: 30 July 2022 / Accepted: 31 July 2022 / Published: 2 August 2022


Abstract

This study aims to analyze the role of bots in the dissemination of health information, both in favor of and opposing vaccination against COVID-19. Study design: An observational, retrospective, time-limited study was proposed, in which activity on the social network Twitter was analyzed. Methods: Data related to pro-vaccination and anti-vaccination networks were compiled from 24 December 2020 to 30 April 2021 and analyzed using the software NodeXL and Botometer. The analyzed tweets were written in Spanish and included keywords that allowed identifying the messages, focusing on bots’ activity and their influence in both networks. Results: In the pro-vaccination network, 404 bots were found (14.31% of the total number of users), located mainly in Chile (37.87%) and Spain (14.36%). Bots in the anti-vaccination network represented 16.19% of the total users and were mainly located in Spain (8.09%) and Argentina (6.25%). The pro-vaccination bots generated a greater impact than bots in the anti-vaccination network (p < 0.001). With respect to the bots’ influence, bots in the pro-vaccination network did have a significant influence compared with the activity of human users (p < 0.001). Conclusions: This study provides information on bots’ activity in pro- and anti-vaccination networks in Spanish, within the context of the COVID-19 pandemic on Twitter. We found that bots in the pro-vaccination network influence the dissemination of the pro-vaccination message, as opposed to those in the anti-vaccination network. We consider that this information could provide guidance on how to enhance the dissemination of public health campaigns, and also on how to combat the spread of health misinformation on social media.

1. Introduction

By the end of July 2022, according to data from the World Health Organization, more than 575 million people had been infected worldwide by SARS-CoV-2, and 6.7 million people had died. Focusing on vaccines, 62.3% of the world’s population was fully vaccinated and 67.9% had received at least one dose.
During health emergencies, such as the one resulting from the COVID-19 pandemic, the need for information becomes a concern for many people [1,2]. This demand for information, together with the large amount of information generated, has led to an infodemic, considered a major global health problem [3]. Both are associated with the emergence of uncertainty about what is or is not verified health information [4], which decreases adherence to the recommendations of health authorities [5] and also makes people less critical of the information they consult and, therefore, more prone to believe biased [6] or even misleading information [2].
This health disinformation and its rapid dissemination through social media significantly affect proper public health communication and reduce adherence to preventive measures [7]. The easy access to social media and the lack of control over the content generated mean that they can be considered a quick channel for the spread of unverified health information [8] and a potential threat to public health [9]: exposure to unverified health information raises significant concerns, such as the generation of misinformation about treatments or the modification of health care habits [3,6,10].
Health disinformation is developed both by human users and by automated accounts, controlled by a mathematical algorithm, and commonly known as “bots” [11]. These impersonate human users on social networks, such as Facebook, Twitter, Instagram, etc., focusing on the generation of contents, dissemination of disinformation, etc. [11,12].
Bots have been identified as key elements in the propagation of health disinformation [13], with great importance in relation to disinformation about vaccination [14,15]. In this regard, it is important to point out that, for instance, on the social network Twitter, fake news spreads faster than real news, since the former is mostly topical and triggers emotional reactions [16].
The bots used in these actions are mainly classified into two categories: (i) Social bots, those that automatically produce content and interact with human users in social networks, their goal being to modify the behavior and emotions of human users with regards to a specific topic [17,18]. (ii) Cyborgs or hybrid bot–human accounts, which also generate automated content like the social bot and seek to modify the behavior and emotions of human users on a specific subject, although presenting a flexible structure and adapting to conversations with other users [7,19].
Social bots can go unnoticed by social media users, for they are designed to resemble human profiles (e.g., displaying a personal picture and stating a name or location) and to behave online in a human-like manner (retweeting, quoting, or endorsing others’ posts or tweets) [17]. Setting up social bots does not require complex software or programming skills: online forums provide easy and free instructions for implementing social bots, thus facilitating their creation and management [14].
Bots are best known for the dissemination of information with political [17] or economic content [18]. Their involvement in public health information is less extensively documented, although they are considered dangerous agents that hinder the dissemination of verified health information and promote misinformation amongst the population [14]. In addition, groups such as the so-called anti-vaccine groups habitually use these social bots so that they can boost and spread their information further and faster than verified health information is transmitted by organizations and users [19,20].
In this context, the main objective of the present study was to analyze the role of bots in the dissemination of health information related to COVID-19 vaccines, both in favor of and against the vaccination policy.

2. Materials and Methods

2.1. Study Design and Ethics

An observational, retrospective, time-limited study was proposed, in which activity on the social network Twitter was analyzed.
Since this study is performed on a social network and only activity among Twitter users is measured, no approval from a Research Ethics Committee is required. However, accounts of individual users were anonymized in order to develop good research practices on social media [21].

2.2. Data Collection

The information from the tweets was extracted through an API (Application Programming Interface) search tool, using the professional version of the software NodeXL (Social Media Research Foundation). This application connects to the chosen social network and allows us to download information on dates, users, and keywords, and even to study how the different entities are related, i.e., the influence and communication in the network of users shown as communication nodes.
To achieve the objectives proposed in this study, the Twitter users included in the data analysis were those who had sent tweets with the following features: (i) tweets published in Spanish; (ii) tweets containing pro-vaccination (“yomevacuno”, #yomevacuno, “COVID-19”, #COVID-19) or anti-vaccination (“yonomevacuno”, #yonomevacuno, “COVID-19”, #COVID-19) keywords or hashtags related to COVID-19, selected for their importance with Google Trends; and (iii) tweets published between 24 December 2020 (00:00 CET) and 30 April 2021 (23:59 CET).
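As an illustration of these inclusion criteria, the filtering logic can be sketched in Python as below. The tweet dictionary fields (`lang`, `created_at`, `text`) are hypothetical simplifications; in the study itself, the extraction was performed through NodeXL.

```python
from datetime import datetime

# Collection window of the study (times in CET).
START = datetime(2020, 12, 24, 0, 0)
END = datetime(2021, 4, 30, 23, 59)

# Keyword sets from the inclusion criteria; hashtags are matched
# after stripping the leading '#'.
PRO_TERMS = {"yomevacuno", "covid-19"}
ANTI_TERMS = {"yonomevacuno", "covid-19"}

def matches_criteria(tweet, terms):
    """Apply the three inclusion criteria: Spanish language,
    keyword/hashtag match, and publication inside the date window."""
    if tweet["lang"] != "es":
        return False
    if not (START <= tweet["created_at"] <= END):
        return False
    tokens = {tok.lstrip("#").lower() for tok in tweet["text"].split()}
    return bool(tokens & terms)
```

Token-level matching (rather than substring search) is used here so that, for example, #yonomevacuno does not accidentally match the pro-vaccination term “yomevacuno”.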
To obtain the bot score, we used the Botometer API V4 (Observatory on Social Media and the Network Science Institute at Indiana University, Indiana, USA) to compute bot scores of users. The value obtained ranged from 0.0 to 1.0; scores closer to 1 represent a higher chance of bot-ness, while accounts with scores closer to 0 probably belong to humans. Consistent with the sensitivity settings of previous studies [22,23], we set the threshold to 0.76: accounts with a score of 0.76 or higher were classified as bots; otherwise, the user was considered human [22]. In some cases, Botometer fails to output a score due to issues such as account suspensions and authorization problems; accounts that could not be assessed were excluded from further analysis.
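The thresholding rule can be expressed as a small helper. The 0.76 cutoff is the study’s; the function names below (`classify_account`, `bot_share`) are illustrative and are not part of the Botometer client itself.

```python
BOT_THRESHOLD = 0.76  # sensitivity setting adopted from previous studies [22,23]

def classify_account(score):
    """Map a Botometer score in [0.0, 1.0] to a label.

    Returns None for accounts Botometer could not score (e.g., suspended
    accounts or authorization problems), which were excluded from analysis.
    """
    if score is None:
        return None
    return "bot" if score >= BOT_THRESHOLD else "human"

def bot_share(scores):
    """Fraction of scoreable accounts classified as bots."""
    labels = [classify_account(s) for s in scores if s is not None]
    return sum(1 for label in labels if label == "bot") / len(labels)
```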

2.3. Data Analysis

The analysis of the compiled data was performed in several steps (Figure 1). The first step was to analyze the most influential Twitter users who employed the analyzed hashtags, as well as their characteristics, using the Betweenness Centrality Score (BCS), which measures the influence of a vertex over the flow of information to other vertices, assuming that information travels through the shortest vertex path. The BCS value reflects how a user can control the information, choosing whether or not to share it and disclose it to their network [20,24]. Secondly, we analyzed the activity in the pro- and anti-vaccination networks. This enabled us to identify content, activities, and/or influential users strongly associated with overall Twitter activity, measured by the metrics of interactions and impressions. Interactions were defined as ‘favorites’ and ‘retweets’, while impressions, an indicator of the propagation of information, were obtained by multiplying the number of tweets by the number of followers [25]. Third, a content analysis was performed with categories created after analyzing the data. It is important to note that, in this category analysis, only original tweets were taken into account, since these were considered to generate the actual content disseminated throughout the user network. The content and category coding were performed independently by two researchers and corroborated by a third, and any differences in approach and focus were discussed and resolved with full agreement.
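To make the two metrics concrete, the sketch below computes an unnormalized betweenness centrality by brute-force shortest-path enumeration on a toy directed interaction graph, together with the impressions formula. The account names are invented, and a real analysis would use NodeXL or a graph library rather than this didactic implementation.

```python
from itertools import permutations

def shortest_paths(graph, s, t):
    """All shortest directed paths from s to t (breadth-first, by level)."""
    level = [[s]]
    while level:
        hits = [p for p in level if p[-1] == t]
        if hits:
            return hits
        level = [p + [n] for p in level
                 for n in graph.get(p[-1], []) if n not in p]
    return []

def betweenness(graph):
    """Unnormalized betweenness: for each ordered pair (s, t), every vertex
    on a shortest path (excluding the endpoints) receives the fraction of
    shortest s-t paths passing through it."""
    nodes = set(graph) | {n for adj in graph.values() for n in adj}
    bcs = dict.fromkeys(nodes, 0.0)
    for s, t in permutations(nodes, 2):
        paths = shortest_paths(graph, s, t)
        for p in paths:
            for v in p[1:-1]:
                bcs[v] += 1 / len(paths)
    return bcs

def impressions(n_tweets, n_followers):
    """Impressions as defined in the study: tweets multiplied by followers [25]."""
    return n_tweets * n_followers

# Toy retweet graph: an edge A -> B means B engaged with A's message.
toy = {"health_org": ["user_a", "user_c"],
       "user_a": ["user_b"],
       "user_b": ["user_c"]}
```

In the toy graph, `user_a` and `user_b` broker all indirect flows and therefore score highest, matching the intuition that a high-BCS user controls whether information reaches the rest of the network.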
Finally, for data analysis, descriptive and inferential statistics were computed with the Statistical Package for the Social Sciences (SPSS) version 23.0 (IBM, Armonk, NY, USA). First, the normality of the data distribution was tested with the Kolmogorov–Smirnov test and homoscedasticity with Levene’s test. Differences between groups were assessed with Student’s t-test, and a two-sided p-value below 0.05 was considered statistically significant.
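As a pure-Python illustration of the group comparison (the study itself used SPSS; the normality and homoscedasticity checks are omitted here), the pooled two-sample Student’s t statistic can be computed as:

```python
import math
from statistics import mean, variance

def students_t(a, b):
    """Two-sample Student's t statistic with pooled variance
    (assumes normality and equal variances, as checked beforehand
    with the Kolmogorov-Smirnov and Levene tests in the study)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / math.sqrt(sp2 * (1 / na + 1 / nb))
```

With per-user interaction counts as input, |t| would then be compared against the critical value for na + nb − 2 degrees of freedom at the two-sided 0.05 level.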

3. Results

3.1. User Analysis of Pro and Anti-Vaccination Networks

Within the pro-vaccination network, a total of 2823 unique users were observed; 240 users were eliminated because their accounts could not be accessed for checking. This left 2583 users, of whom 162 were categorized as bots (6.27%) (Table 1). Within the anti-vaccination network, 5039 users were found, of whom 376 were eliminated because their accounts could not be accessed. This left 4663 users, of whom 420 were categorized as bots (9%) (Table 1).
We observed differences between the two networks in relation to geolocation: 65.59% of bots (n = 265) in the pro-vaccination network had a defined geolocation in their profile, compared with 20.77% (n = 113) with a real geolocation in the anti-vaccination network. In the pro-vaccination network, the main countries where bots were observed were Chile (n = 153; 37.87%), followed by Spain (n = 58; 14.36%). In the anti-vaccination network, 8.09% (n = 44) of bots were located in Spain, followed by 6.25% (n = 34) in Argentina (Figure 2).
In relation to the interactions generated in each network, it was observed that 979,418 interactions were generated in the pro-vaccination network, of which 42,135 (4.3%) corresponded to bots, with a ratio between interactions and messages in this network of 136.6. The average number of interactions generated by these bots was 125.39.
In the anti-vaccination network, 97,875 interactions were generated, of which bots accounted for 4.41% (4724). The ratio between interactions and messages in this network is 0.65. The average number of interactions generated by these bots was 24.016 (Table 1).
Finally, the impressions generated in each network were analyzed. In the pro-vaccination network, 101,668,395 impressions took place, of which the bots generated 4,856,946 (4.75%). In the anti-vaccination network, a total of 6,032,115,787 impressions were generated, with 460,521,238 (7.63%) attributed to bots (Table 1). When analyzing the effect of bots on impressions, their participation was not significant in either network (Table 1).
In the pro-vaccination network, users identified as humans were found to generate a higher number of interactions than bots on the network (t = 3.44; p < 0.05). However, no significant differences were observed when an analysis of the impressions was performed. In the anti-vaccination network, we found the same behavior as in the pro-vaccination network, with significant differences in interactions and no significant differences in impressions.

3.2. Behavior of Pro-Vaccination and Anti-Vaccination Networks

When both networks are compared, we can observe that the interactions differ significantly (t = −33.512; p < 0.05), with a greater effect in the pro-vaccination network (mean = 368.24) than in the anti-vaccination network (mean = 23.01). However, the anti-vaccination network showed a significant difference when impressions (t = 4.73; p < 0.05) and messages (t = 30.68; p < 0.05) were compared with the pro-vaccination network.
In relation to the behavior of bots in the pro-vaccination and anti-vaccination networks, the possible effect of bots in the two networks was studied. The number of messages and the interactions produced by the bots in both networks differed significantly (t = −33.512; p < 0.05), with bots having a greater effect in the pro-vaccination network than in the anti-vaccination network. When analyzing the number of messages sent by the bots in both networks, it was observed that the bots sent a greater number of messages in the anti-vaccination network (mean = 17.33) than in the pro-vaccination network (mean = 1.91) (t = 9.01; p < 0.05). Subsequently, the role of bots versus human users was compared in each network. With respect to the pro-vaccination network, it was found that bots generated a significant impact on interactions (t = 30.571; p < 0.05), while this effect was not observed on impressions (t = 0.323; p = 0.46). Within the anti-vaccination network, no significant impact of bots on impressions was found, with human users being the most influential (t = 4.198; p ≤ 0.05), and no effect of bots was observed on interactions either (t = 0.22; p = 0.826).

3.3. Influence of Bots and Content Analysis

The most influential bots in both networks were analyzed and categorized using the BCS (Table 2). It was found that, within the 20 most influential users in the pro-vaccination network, 3 were categorized as bots, with 2 of them among the 5 most influential users of that network: users PV1 and PV2. PV1 introduces itself as a citizen focused on political activism, PV2 as a citizen without further explanation, and PV3 as an alternative communication channel.
Within the anti-vaccination network, among the 20 users with the highest BCS value, there is only one user recognized as a bot, labeled AV1, which presents itself as a former teacher.
Among the most influential bots in both networks, those in the anti-vaccination network generated a total of 15 messages, all of them tweets, sent by 14 users and accounting for 1525 interactions and 57,492 impressions. Within the pro-vaccination network, 11 messages posting topics against COVID-19 vaccination were detected, accounting for 71 interactions and 99,945 impressions.
The entire sample of bots’ messages within the anti-vaccination and pro-vaccination networks was collected and analyzed. Considering the different approaches within the pro- and anti-vaccination networks, the messages were analyzed in order to categorize them (Table 3), revealing a predominance of certain tendencies. In the pro-vaccination stream, the political focus was the main group (51.85%), followed by general tweets not expressing a view or clear opinion (29.63%), with some messages even showing support for the anti-vaccine movement (7.41%).
Analysis of the anti-vaccination network shows a high prevalence of messages that include exclusive hashtags or images (33.02%), followed by anti-vaccine messages (20.93%). Finally, the third group of messages was defined by messages with political content (Table 3). The anti-vaccine messages were defined by three main themes: vaccine safety (70.69%), vaccine efficacy (6.89%), and beliefs about COVID-19 vaccine (22.42%).

4. Discussion

In this study, we analyzed both the presence of bots and their impact on the networks in which they operate, disseminating information about health topics, such as vaccination against COVID-19.
Firstly, we did not find a large number of user accounts identified as bots, a situation previously described in the literature. The presence of bots on Twitter differs between networks focused on social or political information, with the percentage of bots ranging from 7.1% to 9.9% [11] and, depending on the author, even from 9% to 15% [26]; in networks focused on health information, values of 3.9% [13], 5.4% [27], and 6% [28] have been observed. As can be seen in the networks analyzed in this study, the percentage of bots detected in the two networks can be associated with disinformation on health issues, such as vaccination. However, it is important to point out that, in each network, there exists a small number of bots disseminating messages contrary to the general thread of the network in which they are located.
Secondly, it is observed that the behavior of the bots does not differ, regardless of the network in which they operate. For instance, the bots engaged in the pro-vaccination network display the same behavior as that typically associated with bots disseminating health disinformation. This behavior is consistent with their own nature as message repeaters [14,18], increasing impressions upon a given message or facilitating interactions [26], clearly intending to influence human users’ opinion [15] on healthcare matters [10,11,12]. This behavior is mostly focused on disseminating information about vaccines by tweeting socially or politically tinged messages, either in the pro-vaccination or anti-vaccination network, a situation that is consistent with findings raised by authors, such as Broniatowski et al., 2018 [12].
Third, with respect to the dissemination of information commonly associated with bots in relation to health information, we must differentiate between the networks analyzed, although in both networks the bots focused mainly on political messages or on messages not expressing opinions, including images or hashtags.
Regarding the anti-vaccination network, the messages mostly adopt an anti-vaccination approach against COVID-19, either through political or social content. They even fabricate news about false problems or side effects associated with vaccines [29]. In our study, the impact of the bots was of little relevance, both in terms of generating interactions and impressions in the network [12], in addition to continuing the dissemination of low-credibility health information [30]. In this network, human users are responsible for the dissemination of anti-vaccine information [16,31]. Although the effect observed in our study is limited, the consequences of bots spreading health disinformation in times of health emergency should not be disregarded [14].
Furthermore, we found that bots engaged in the pro-vaccination network elicit a significant impact on what is defined as interactions with the network. This finding is consistent with what was previously described about the characteristics of messages with a pro-vaccination approach; that is, they are networks in which user participation is promoted [14]. The ability of bots to generate these interactions could be associated with their behavior in the pro-vaccination network, not being mere repeaters of information [8] but generators of a higher level of engagement with the human user [16], by mimicking the behavior of humans [17]. It is important to consider that users distrust information coming from accounts clearly identified as bots [32]. Hence, social bots can generate changes in the individuals they interact with, as the information received is perceived as more trustworthy [30].
The influence of the bots in their respective networks was another element assessed in this study. It was observed that bots in the anti-vaccination network do not play an important role amongst the 20 most influential users in the network. This situation is consistent with what was observed in other studies, where the role of bots was not prominent in the activity of the networks analyzed [16,26,28,31]. However, in the pro-vaccination network, 2 of the 20 most influential users were categorized as bots, reaching scores of 0.84 in Botometer. These data, at least to the researchers’ knowledge, have not been observed in other studies analyzing the use of bots fostering public-health policies in social networks.
As a result of our research, we identified a new line of work related to the use of bots by healthcare organizations and healthcare workers, who are among the biggest disseminators of examples and information. They also encounter rejection and uncertainty about vaccines, due to a lack of adequate information and concerns about the speed of development and validation of vaccines [33]. We must understand the importance of these tools and use them for the common benefit of society, as in public-health policies, but always with identifiable bots in order to remain credible. The great demand for information brought about by the recent pandemic must be accompanied by a major effort in scientific dissemination, evidence-based and regulated to avoid ethical implications; much of this work can be done in an automated way by bots, with advice from health workers and government organizations.
Our study has some limitations. First, the analysis focuses only on Twitter and is thus limited to this social network. In addition, when retrieving information using specific hashtags and keywords, it is possible that we missed users who posted anti- or pro-vaccination messages related to COVID-19 without using these specific keywords and hashtags. Another limitation relates to Botometer, since this tool has some difficulty detecting hybrid-type bots [14], which could mean the number of bots in our study is underestimated.

5. Conclusions

To the authors’ knowledge, the present study is the first to analyze the presence of bots in two networks, one in favor and another opposing vaccination against COVID-19, considering Twitter posts written in Spanish.
We believe that the findings shown in this study display information of interest to health organizations, both to better understand the role of bots spreading health misinformation and to assess the potential use of tools to disseminate verified health messages and information on social networks.
Although correcting Twitter misinformation remains a huge public-health challenge, the use of bots designed and operated by health organizations could be an effective tool to counter health misinformation on social media. Moreover, given the data observed in this study, bots can be extremely useful for disseminating messages in favor of public-health policies, extending the reach of verified health information. Given the high degree of interaction amongst users, this may help strengthen users’ views of, and confidence in, public-health policies.
Finally, we believe that the constant and regular use of social bots by healthcare organizations, together with training policies for healthcare professionals in the use of social networks, can be of great interest to enhance their participation and improve the effectiveness of healthcare communication.

Author Contributions

Conceptualization, C.R.-N. and I.H.-P.; methodology, C.R.-N., S.S.-F. and B.J.-G.; formal analysis, C.R.-N., I.H.-P. and M.d.C.Á.P.; data curation, C.R.-N. and I.H.-P.; writing—original draft preparation, C.S.R.M., B.J.-G. and I.H.-P.; writing—review and editing, A.S.-G. and P.J.J.H. All authors meet the author criteria. All authors have read and agreed to the published version of the manuscript.


Funding

This research was funded by Fundación Banco Santander and Fundación Alfonso X el Sabio, grant number 1012031.

Institutional Review Board Statement

Ethical review and approval were waived for this study as it was performed on a social network and it did not interfere with any patient or human data beyond measuring Internet activity among Twitter users. Moreover, this study only used data from users who consented on Twitter to disclose their data publicly (i.e., no privacy settings were selected by them). However, accounts of individual users were anonymized in order to develop good research practices on social networks.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.


References

  1. Coelho, C.M.; Suttiwan, P.; Arato, N.; Zsido, A.D.N. On the nature of fear and anxiety triggered by COVID-19. Front. Psychol. 2020, 11, 581314. [Google Scholar] [CrossRef] [PubMed]
  2. Secosan, I.; Virga, D.; Crainiceanu, Z.P.; Bratu, L.M.; Bratu, T. Infodemia: Another Enemy for Romanian Frontline Healthcare Workers to Fight during the COVID-19 Outbreak. Medicina 2020, 56, 679. [Google Scholar] [CrossRef] [PubMed]
  3. Mano, R.S. Social media and online health services: A health empowerment perspective to online health information. Comp. Human Behav. 2014, 39, 404–412. [Google Scholar] [CrossRef]
  4. Zarocostas, J. How to fight an infodemic. Lancet 2020, 395, 676. [Google Scholar] [CrossRef]
  5. Greenberg, N.; Docherty, M.; Gnanapragasam, S.; Wesseley, S. Managing mental health challenges faced by healthcare workers during COVID-19 pandemic. BMJ 2020, 368, m1211. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Scott, R.E.; Mars, M. Behaviour change and e-health-looking broadly: A scoping narrative review. Stud. Health Technol. Inform. 2020, 268, 123–138. [Google Scholar] [PubMed]
  7. Rovetta, A.; Bhagavathula, A.S. COVID-19-Related Web Search Behaviors and Infodemic Attitudes in Italy: Infodemiological Study. JMIR Public Health Surveill. 2020, 6, e19374. [Google Scholar] [CrossRef] [PubMed]
  8. Himelein-Wachowiak, M.K.; Giorgi, S.; Devoto, A.; Rahman, M.; Ungar, L.; Schwartz, A.; Epstein, D.H.; Leggio, L.; Curtis, B. Bots and misinformation spread on social media: Implications for COVID-19. J. Med. Internet Res. 2021, 5, e269331. [Google Scholar] [CrossRef]
  9. Ahmed, F.; Abulaish, M. A generic statistical approach for spam detection in Online Social Networks. Comput. Commun. 2013, 36, 1120–1129. [Google Scholar] [CrossRef]
  10. Subrahmanian, V.; Azaria, A.; Durst, S.; Kagan, V.; Galstyan, A.; Lerman, K.; Zhu, L.; Ferrara, E.; Flammini, A.; Menczer, F. The DARPA Twitter Bot Challenge. Computer 2016, 49, 38–46. [Google Scholar] [CrossRef] [Green Version]
  11. Xu, W.; Sasahara, K. Characterizing the roles of bots on Twitter during the COVID-19 infodemic. J. Comp. Soc. Sci. 2022, 5, 591–609. [Google Scholar] [CrossRef] [PubMed]
  12. Broniatowski, D.A.; Jamison, A.M.; Qi, S.; AlKulaib, L.; Chen, T.; Benton, A.; Quinn, S.C.; Dredze, M. Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate. Am. J. Public Health 2018, 108, 1378–1384. [Google Scholar] [CrossRef] [PubMed]
  13. Dunn, A.G.; Surian, D.; Dalmazzo, J.; Rezazadegan, D.; Steffens, M.; Dyda, A.; Leask, J.; Coiera, E.; Dey, A.; Mandl, K.D. Limited role of bots in spreading vaccine-critical information among active Twitter users in the United States: 2017–2019. Am. J. Public Health 2020, 110, S319–S325. [Google Scholar] [CrossRef] [PubMed]
  14. Rauchfleisch, A.; Kaiser, J. The false positive problem of automatic bot detection in social science research. PLoS ONE 2020, 15, e0241045. [Google Scholar] [CrossRef]
  15. Blankenship, E.B.; Goff, M.E.; Yin, J.; Tse, Z.T.H.; Fu, K.-W.; Liang, H.; Saroha, N.; Fung, I.C.-H. Sentiment, contents, and retweets: A study of two vaccine-related twitter datasets. Perm. J. 2018, 22, 17–138. [Google Scholar] [CrossRef] [Green Version]
  16. Swire-Thompson, B.; Lazer, D. Public Health and online misinformation: Challenges and recommendations. Annu. Rev. Public Health 2020, 41, 413–451. [Google Scholar] [CrossRef] [Green Version]
  17. Ferrara, E.; Varol, O.; Davis, C.; Menczer, F.; Flammini, A. The rise of social bots. Commun. ACM 2016, 59, 96–104. [Google Scholar] [CrossRef] [Green Version]
  18. Zhang, J.; Zhang, R.; Zhang, Y.; Yan, G. The Rise of Social Botnets: Attacks and Countermeasures. IEEE Trans. Dependable Secur. Comput. 2018, 15, 1068–1082. [Google Scholar] [CrossRef] [Green Version]
  19. Starbird, K. Disinformation’s spread: Bots, trolls and all of us. Nature 2019, 571, 449. [Google Scholar] [CrossRef] [PubMed]
  20. Ahmed, W.; Bath, P.; Demartini, G. Using Twitter as a data source: An overview of ethical, legal, and methodological challenge. Adv. Res. Ethics Integr. 2017, 2, 79–107. [Google Scholar] [CrossRef] [Green Version]
  21. Shi, W.; Yang, J.; Zhang, J.; Wen, S.; Su, J. Social bots’ sentiment engagement in health emergencies: A topic-based analysis of the COVID-19 pandemic discussions on Twitter. Int. J. Environ. Res. Public Health 2020, 17, 8701. [Google Scholar] [CrossRef] [PubMed]
  22. Jemielniak, D.; Krempovich, Y. An analysis of AstraZeneca COVID-19 vaccine misinformation and fear mongering on Twitter. Public Health 2021, 200, 4–6. [Google Scholar] [CrossRef]
  23. Sayyadiharikandeh, M.; Varol, O.; Yang, K.-C.; Flammini, A.; Menczer, F. Detection of novel social bots by ensembles of specialized classifiers. In Proceedings of the 29th ACM International Conference on Information & Knowledge Management, 19 October 2020; pp. 2725–2732. [Google Scholar] [CrossRef]
  24. Clauset, A.; Newman, M.; Moore, C. Finding community structure in very large networks. Phys. Rev. E 2004, 70 (6 Pt 2), 066111. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  25. Saha, K.; Torous, J.; Ernala, S.K.; Rizuto, C.; Staffordm, A.; De Choudhury, M. A computational study of mental health awareness campaigns on social media. Transl. Behav. Med. 2019, 9, 1197–1207. [Google Scholar] [CrossRef] [PubMed]
  26. Shao, C.; Ciampaglia, G.L.; Varol, O.; Yang, K.C.; Flammini, A.; Menczer, F. The spread of low-credibiity content by social–bots. Nat. Commun. 2018, 9, 4787. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  27. Varol, O.; Ferrara, E.; Davos, C.A.; Menczer, F.; Flammini, A. Online Human-Bot interactions: Detection, estimation and characterization. In Proceedings of the Eleventh International AAAI Conference, Montréal, QC, Canada, 15–18 May 2017; pp. 280–290. [Google Scholar]
  28. Hoffman, B.L.; Colditz, J.B.; Shensa, A.; Wolynn, R.; Tanjea, S.B.; Felter, E.M.; Wolynn, T.; Sidani, J.E. #DoctorSpeakUp: Lessons learned from a pro-vaccine Twitter event. Vaccine 2021, 39, 2684–2691. [Google Scholar] [CrossRef] [PubMed]
  29. Herrera-Peco, I.; Jiménez-Gómez, B.; Romero Magdalena, C.S.; Deudero, J.J.; García-Puente, M.; Benítez De Gracia, E.; Ruiz Núñez, C. Antivaccine Movement and COVID-19 Negationism: A Content Analysis of Spanish-Written Messages on Twitter. Vaccines 2021, 9, 656. [Google Scholar] [CrossRef]
  30. Lanius, C.; Weber, R.; MacKenzie, W.I., Jr. Use of bot and content flags to limit the spread of misinformation among social networks: A behaviour and attitude survey. Soc. Netw. Anal. Min. 2020, 11, 32. [Google Scholar] [CrossRef] [PubMed]
  31. Vosoughi, S.; Roy, D.; Aral, S. The spread of true and false news online. Science 2018, 359, 1146–1151. [Google Scholar] [CrossRef] [PubMed]
  32. Scannel, D.; Desens, L.; Guadagno, M.; Tra, Y.; Acker, E.; Sheridan, K.; Rosner, M.; Mathieu, J.; Fulk, M. COVID-19 vaccine discourse on Twitter: A content analysis of persuasion techniques, sentiment and mis/disinformation. J. Health Commun. 2021, 26, 443–459. [Google Scholar] [CrossRef]
  33. Di Gennaro, F.; Murri, R.; Segala, F.V.; Cerruti, L.; Abdulle, A.; Saracino, A.; Bavaro, D.F.; Fantoni, M. Attitudes towards Anti-SARS-CoV2 Vaccination among Healthcare Workers: Results from a National Survey in Italy. Viruses 2021, 13, 371. [Google Scholar] [CrossRef]
Figure 1. Data mining and analysis scheme.
Figure 2. Bots’ location on pro and anti-vaccination networks. (Figure adapted from Fobos92’s world map with CC BY-SA 3.0 license).
Table 1. Characteristics of users, impressions, and interactions in pro and anti-vaccination networks.
Pro-vaccination network
|                            | N          | %     | Ratio (I/m) | Mean      | Comparison humans vs. bots, Student's t (p-value) |
| Users' interactions, human | 937,283    | 95.71 | 93.65       | 387.15    | 3.44 (0.04) **                                    |
| Users' interactions, bots  | 42,135     | 4.31  | 36.36       | 260.09    |                                                   |
| Users' impressions, human  | 96,811,449 | 95    | 20,000      | 39,988.20 | 0.323 (0.746)                                     |
Anti-vaccination network
|                            | N             | %     | Ratio (I/m) | Mean         | Comparison humans vs. bots, Student's t (p-value) |
| Users' interactions, human | 93,151        | 95.17 | 1.33        | 21.95        | 4.198 (0.001) **                                  |
| Users' impressions, human  | 5,571,594,549 | 92.37 | 79,296.28   | 1,313,126.22 | 0.22 (0.826)                                      |
Ratio (I/m): number of impressions or interactions per message in the network; (**) denotes a statistically significant difference.
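The "%", "Ratio (I/m)", and "Mean" columns, together with the Student's t comparison, can be reproduced from raw per-user counts. A minimal sketch in Python with illustrative numbers (the per-user figures and message count below are hypothetical, not the study's data):

```python
from math import sqrt
from statistics import mean, variance

# Hypothetical per-user interaction counts (one entry per account);
# the study's raw per-user data are not reproduced here.
human_interactions = [380, 410, 250, 500, 395]
bot_interactions = [260, 270, 250]
messages_in_network = 42  # hypothetical total number of messages analysed

total_human = sum(human_interactions)
total_bots = sum(bot_interactions)
grand_total = total_human + total_bots

pct_human = 100 * total_human / grand_total    # "%" column
ratio_i_m = grand_total / messages_in_network  # "Ratio (I/m)" column
mean_human = mean(human_interactions)          # "Mean" column

# Two-sample Student's t with pooled variance, as in the last column
n1, n2 = len(human_interactions), len(bot_interactions)
pooled = ((n1 - 1) * variance(human_interactions)
          + (n2 - 1) * variance(bot_interactions)) / (n1 + n2 - 2)
t_stat = (mean_human - mean(bot_interactions)) / sqrt(pooled * (1 / n1 + 1 / n2))

print(f"{pct_human:.2f}% | ratio {ratio_i_m:.2f} | mean {mean_human:.2f} | t {t_stat:.2f}")
```

The t statistic is then compared against the t distribution with n1 + n2 - 2 degrees of freedom to obtain the reported p-value.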
Table 2. Characteristics of most influential bots in pro and anti-vaccination networks.
| Network          | User code | Description            | Bot score | BCS        | Network activity                                      |
| Anti-vaccination | AV1       | Citizen                | 0.78      | 16,444.45  | Criticism of the government                           |
|                  | AV2       | Citizen                | 0.80      | 16,008.92  | Conspiracy theory: the vaccine as a means of genocide |
|                  | AV3       | Citizen                | 0.78      | 15,228.38  | Support for vaccination against COVID-19              |
|                  | AV4       | Citizen, nonconformist | 0.78      | 14,001.76  | Denialist: neither the virus nor the pandemic exists  |
|                  | AV5       | Citizen                | 0.76      | 13,453.91  | Criticism of the government                           |
| Pro-vaccination  | PV1       | Political activist     | 0.84      | 917,876.41 | Spread of news about vaccine approvals                |
|                  | PV2       | Citizen                | 0.84      | 160,118.69 | Information about the start of vaccination            |
|                  | PV3       | Political activist     | 0.88      | 76,177.66  | Information about the start of vaccination            |
|                  | PV4       | Citizen                | 0.80      | 45,713.95  | Approval of vaccines by the European Medicines Agency |
|                  | PV5       | Citizen                | 0.78      | 24,801.58  | Spread of positive information on vaccine availability |
BCS: betweenness centrality score.
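The betweenness centrality score used to rank these bots measures how often an account lies on the shortest paths between other accounts in the interaction network. A brute-force sketch in plain Python (the toy graph and account names are illustrative; the study computed these scores with NodeXL on the real network):

```python
from collections import deque

def shortest_path_preds(graph, s):
    """BFS from s; return (dist, preds), where preds[v] lists the
    predecessors of v on shortest s->v paths (small graphs only)."""
    dist, preds = {s: 0}, {s: []}
    q = deque([s])
    while q:
        u = q.popleft()
        for w in graph[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                preds[w] = [u]
                q.append(w)
            elif dist[w] == dist[u] + 1:
                preds[w].append(u)
    return dist, preds

def enumerate_paths(preds, s, t):
    """All shortest s->t paths, by backtracking through predecessors."""
    if t == s:
        return [[s]]
    return [p + [t] for u in preds[t] for p in enumerate_paths(preds, s, u)]

def betweenness(graph):
    """Betweenness centrality of every node in an undirected graph:
    the fraction of shortest paths between other nodes passing through it."""
    bc = {v: 0.0 for v in graph}
    for s in graph:
        dist, preds = shortest_path_preds(graph, s)
        for t in graph:
            if t == s or t not in dist:
                continue
            paths = enumerate_paths(preds, s, t)
            for v in bc:
                if v not in (s, t):
                    bc[v] += sum(v in p for p in paths) / len(paths)
    return {v: c / 2 for v, c in bc.items()}  # each pair was counted twice

# Toy interaction network: a hypothetical account "AV1" bridges two clusters
g = {
    "AV1": ["u1", "u2", "u3", "u4"],
    "u1": ["AV1", "u2"], "u2": ["AV1", "u1"],
    "u3": ["AV1", "u4"], "u4": ["AV1", "u3"],
}
scores = betweenness(g)
print(max(scores, key=scores.get))  # prints "AV1": the bridging account ranks first
```

Production tools (NodeXL, or networkx's `betweenness_centrality`) use Brandes' algorithm, which avoids enumerating paths explicitly and scales to real networks.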
Table 3. Categorization of messages in the pro- and anti-vaccination network movement.
| Category                                        | Pro-vaccination network | Anti-vaccination network |
| Political content                               | 51.85% (84)             | 18.6% (78)               |
| Vaccine awareness                               | 11.11% (18)             |                          |
| General tweets not expressing a view or opinion | 29.63% (48)             | 33.02% (139)             |
| Conspiracy theories                             |                         | 13.48% (57)              |
| Pandemic negationism                            |                         | 3.72% (16)               |
| Anti-vaccine tweets                             |                         | 26.97% (113)             |
| Opposed to main subject of the network          | 7.41% (12)              | 4.21% (17)               |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

