
Bots as Active News Promoters: A Digital Analysis of COVID-19 Tweets

School of Communication, Simon Fraser University, Burnaby, BC V5A 1S6, Canada
Data Scientist, Royal Bank of Canada, Toronto, ON M5J 2W7, Canada
Author to whom correspondence should be addressed.
Information 2020, 11(10), 461;
Received: 3 September 2020 / Revised: 19 September 2020 / Accepted: 24 September 2020 / Published: 27 September 2020
(This article belongs to the Section Information and Communications Technology)


In this study, we examined the activities of automated social media accounts, or bots, that tweet or retweet referencing #COVID-19 and #COVID19. From a total sample of over 50 million tweets, we used a mixed method to extract more than 185,000 messages posted by 127 bots. Our findings show that the majority of these bots tweet, retweet, and mention mainstream media outlets, promote health protection and telemedicine, and disseminate breaking news on the number of casualties and deaths caused by COVID-19. We argue that some of these bots are motivated by financial incentives, while other bots actively support the survivalist movement by emphasizing the need to prepare for the pandemic and learn survival skills. We found only a few bots showing suspicious activity, likely because our dataset was limited to two hashtags often used by official health bodies and academic communities.

1. Introduction

The purpose of this study is to identify bot accounts and to understand the nature of the messages they send about COVID-19. Social media bots have been widely discussed in the academic literature as a source of moral panic, mostly in relation to spreading controversial and politically polarized messages or in connection with problematic health bots [1,2]. The findings of this study, however, show that bots referencing COVID-19 mostly mention mainstream media and credible health sources while spreading breaking news on the pandemic or urging people to stay at home. We argue that many bots seem to spread news to gain profit through clickbait or by directing Twitter users to certain websites. We also argue that there are still advantages to using these Twitter bots to inform people about pandemic risks. Though Twitter bots are different from health chatbots, the results of this study align with previous research on the possible benefits, advantages, or possibilities afforded by the use of official health chatbots [3,4,5,6]. Health chatbots, for example, have been shown to be useful in addressing patients’ needs in different contexts [7,8,9,10,11,12].
Although Twitter bots differ from official health chatbots in their makers and intentions, both are automated accounts that are active on social media. We argue here that, within the context of the COVID-19 pandemic, there are clear similarities but certainly different motivations. WHO, on the one hand, created a health chatbot on Facebook Messenger as well as on the messaging app Viber to assist in combating misinformation [13,14]. On the other hand, we believe that some Twitter bots might be designed to gather followers and gain popularity, which could be leveraged in different ways to generate marketing income. Even the focus on disseminating factual news on COVID-19 can be another way to gain the trust of some Twitter users, similar to the way mainstream media outlets in the United States originally found objectivity to be financially rewarding because they could use it to attract broader non-partisan audiences [15]. In other words, these bots mostly disseminate news, and some of them seem to drive users to other websites that can provide income through clicks, since “online advertising is a numbers game, the more traffic these individuals can drive to their website, the more potential clicks they receive” [16].

2. Literature Review

This study is situated within the field of health communication and the use of digital methods. Rains [17], for example, highlights three main areas that computational analysis and health communication often cover, including “the use of big data… for examining public perceptions of health conditions or events, investigating network-related dimensions of health phenomena, and illness monitoring” (p. 26). Our current study covers most of these areas, though the focus is more on the automated disseminators of health news. Given the widespread presence of health misinformation on social media [18], it is crucial to understand the nature of information disseminated by actors like Twitter bots. As Swire-Thompson and Lazer emphasize [19], examining health misinformation is important because it can have “severe consequences with regard to people’s quality of life and even their risk of mortality; therefore, understanding it within today’s modern context is an extremely important task” (p. 433).
As will be shown below, research on social media use and COVID-19 has focused on several areas, though the study of automated bots in pandemics remains under-researched. A number of conspiracies were also promoted on social media by bots. Ferrara [20], for instance, used bot detection techniques to study 43.3 million COVID-19-related English tweets and observed how bots used COVID-19 as a vector to promote ideological hashtags often associated with the alt-right in the USA. Among those online political conspiracies about COVID-19, a popular theory linked 5G to the spread of the pandemic. In another study, a social network analysis of 10,140 tweets containing the “5Gcoronavirus” keyword or the #5GCoronavirus hashtag showed that fake news websites were the most popular web sources shared by users [21].
In general, social media data has served as a useful tool for digital disease detection during the COVID-19 outbreak. (We would like to thank Ms. Xiaosu Li, a graduate student at the School of Communication at Simon Fraser University, for her assistance in collecting part of the literature on social media use and COVID-19.) For example, Jahanbin and Rahmanian [22] applied web news mining technology to track the geographical locations of COVID-19-related tweets. Upon examining 364,080 tweets from 179,534 users between 31 December 2019 and 6 February 2020, they found that most of the tweets about coronavirus came from the US (42.1%), China (13.0%), Italy (11.8%), and Australia (6.6%), which was consistent with the case reports obtained from the WHO [22]. Their research revealed the possibility of using data mining techniques within social networks to track and predict the spread of the pandemic. Researchers have also explored the possibilities of using user-reported personal information on Twitter for disease detection and disease severity estimation [23]. In addition, Klein et al. [24] used a social media mining approach to automatically detect “sick tweets” that contain personal information which could indicate potential exposure to COVID-19, and analyzed the chronological and geographical distribution of these tweets [24]. Geolocated Twitter data can also be merged with other data types that capture human movement to create a surveillance alert system that improves global and national responses to public health threats [25]. These studies indicate the importance social media data can have in shedding light on public health issues, including infectious disease detection and surveillance.
Besides digital health surveillance, several scholars are focusing on the need and opportunity for strategic health communication through social media analysis. A study that investigated information communication networks and information-sharing behaviors found the popularity and positive spillover effect of medical news on COVID-19 on Twitter [26]. New possibilities of using social media platforms such as YouTube to demarcate current and valid information and educate as well as mobilize the public to adopt preventive behaviors for mitigating the spread of COVID-19 pandemic have also been explored [27,28].
Nevertheless, social media is not always a positive force. The COVID-19 pandemic has created what is known as an “infodemic,” since a massive amount of misinformation has spread on social media at an unprecedented speed [29]. Kouzy et al. [30], for instance, analyzed 673 tweets and found that medical misinformation and unverifiable content pertaining to the COVID-19 outbreak were widely disseminated: 24.8% of tweets included misinformation and 17.4% contained some unverifiable content. This study provides an early quantification of the magnitude of misinformation spread on Twitter. Similarly, a study by Pulido et al. [31] analyzed the circulation of false and evidence-based information during the COVID-19 pandemic. Their analysis of 1000 tweets showed that false information was tweeted more often than science-based evidence or fact-checking information, although it circulated (i.e., was retweeted) at a lower rate. The study highlighted the importance of urgent interventions to curb the spread of medical misinformation that could jeopardize public health safety.
The rapid spread of misinformation on social media fueled panic during the outbreak. Various studies have examined sentiment dynamics in social media discussions pertaining to COVID-19. For example, the findings of a sentiment analysis of over 20 million Twitter posts suggested that negative emotions such as fear, anger, and sadness were dominant during the outbreak [32]. Several other studies found that social media has a significant impact on spreading fear and panic during the COVID-19 outbreak, which might jeopardize people’s psychological well-being, calling for attention to and social support for mental health [33,34].
Social media have also perpetuated stereotyping and discrimination against individuals and groups on the basis of their racial identities. After the US president’s reference to the “China Virus” on 16 March 2020, the rise in tweets referencing “Chinese virus” or “China virus,” along with the content of these tweets, indicates that COVID-19 stigma against Asian communities is likely being perpetuated on Twitter [35]. Nevertheless, Asian people are not the only victims of COVID-19-fueled stigmatization. Since older adults have been identified as a group at higher risk of death from COVID-19, a significant proportion of news coverage has addressed this topic, leading to a rise in ageist discourse on Twitter claiming that COVID-19 is particularly a disease of older people [36].
Besides particular social issues of discrimination, various researchers have followed an infodemiological approach to map general public concerns on social media during the pandemic. For example, Abd-Alrazaq et al. [37] identified four main public concerns on Twitter in their examination of 167,073 unique tweets: (1) the origin of the virus, (2) the causes leading to the transfer of COVID-19 to humans, (3) its impact on people, countries, and the economy, and (4) ways of mitigating the risk of infection.
To address the gaps highlighted above in the empirical study of Twitter bots in relation to the COVID-19 pandemic, our study attempts to answer the following research questions: RQ1: What percentage of the examined dataset consists of social media bots? RQ2: What connections exist among them based on social network analysis? RQ3: What do these bots tweet about, and what are the general sentiments and nature of their messages? RQ4: What are the public discourses on bots?

3. Methods

We used a mixed approach in this study, comprised mostly of several digital methods, which aligns with what Rains [17] mentions in his review of health communication research techniques such as “data acquisition, classification/prediction, text mining, [and] network analysis” (p. 27). First, we collected 50,811,299 tweets and retweets referencing #COVID-19 and #COVID19 over a period of more than two months, from 12 February until 18 April 2020. We focused on these two hashtags because they are standard terms used by WHO and other official sources. These tweets were sent by 11,706,754 unique users (unique users are those whose Twitter usernames are not repeated in the collected dataset). The dataset was collected using the Twitter Capture and Analysis Toolset (TCAT) platform, which utilizes Twitter’s public Application Programming Interface (API), allowing the collection of a portion of public tweets referencing the above two hashtags [38]. Due to API limitations, the platform sometimes hits the permitted rate limit, resulting in brief delays in collecting tweets. In other words, the collection of this social media data was done within the terms and conditions of Twitter’s guidelines. We then used a Python 3 script to identify the top 1000 most active Twitter users who tweeted the most about COVID-19, because bots are known to be very active in spreading messages. For bot detection, we used the Python version of Botometer because it allows bulk assessment of thousands of accounts, unlike the website version, which can only assess accounts individually [39]. The tool provides ordinal scores ranging from 0 (more likely to be human) to 5 (more likely to be a bot). After obtaining the tweets sent by the active bots, we found a total of 185,099 tweets and retweets (Figure 1).
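The first selection step, ranking users by tweet volume, can be sketched as follows. This is a minimal illustration only: the function name and the dict-based input are our own simplifications, not the actual TCAT export schema.

```python
from collections import Counter

def top_active_users(tweets, k=1000):
    """Return the k usernames that posted the most tweets.

    `tweets` is an iterable of dicts with a "username" key,
    a simplified stand-in for the TCAT export format.
    """
    counts = Counter(t["username"] for t in tweets)
    return [user for user, _ in counts.most_common(k)]

# Toy example: user "a" tweeted twice, "b" once
sample = [{"username": "a"}, {"username": "b"}, {"username": "a"}]
print(top_active_users(sample, k=2))  # ['a', 'b']
```

The top-k list produced this way is what we then passed to Botometer for bulk scoring.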
We decided to use a score of 3 and above to make sure that the accounts are bots, despite the fact that 12 other account names that scored slightly below 3 contained the word “bot” in them, such as @sumanebot, @Bot_Corona_V, and @covid19BotLatam. We then analyzed the bots’ tweets using an automated sentiment analysis tool called VADER, via a Python 3 package [40]. The algorithm calculates the sentiment score of each tweet, and we measured the mean, with values ranging from −1 to +1 (−1 highly negative, +1 highly positive, 0 neutral). All our Python 3 scripts are shared on GitHub. This is followed by topic modelling to understand the main topics discussed by these bots. Here, we used QDA Miner-WordStat8, commercial software that offers a topic modelling tool built on factor analysis (FA). This software was used because of its practical feature of providing names for the generated topics. The FA approach ranks topics based on the Eigenvalue, a mathematical indicator of the dominance of certain topics in the text corpus: the higher this value, the more dominant the topic in the corpus [41]. The formula used is based on calculating the factor loading, which identifies the strength of “the relationship of each word to each topic,” where “each word w_i in the vocabulary V containing all words in a corpus, w_i ∈ V, ∀i ∈ {1, …, n}, can be represented as a linear function of m (< n) topics (aka common factors), t_j ∈ T, ∀j ∈ {1, …, m}” [42].
In addition, we conducted a social network analysis based on usernames and their mentions using Gephi software. To understand the number of online communities that interact among bots, we used the modularity partition method, which is built on an algorithm that detects communities in large networks and unfolds a complete hierarchical community structure for the network [43]. Our goal was to understand whether the bots are strongly clustered together and whether they are connected to other similar online communities. We used the following options to generate our graph: the OpenOrd spatialization algorithm, modularity class for color, and betweenness centrality for node size. In social network analysis, users are considered “nodes (or actors)” and mentions are the links [44]. Here, we used a Python 3 script to extract mentions from tweets sent by the bot users and arranged the dataset accordingly. Larger nodes indicate higher connectivity and more interaction in mentioning other users [45]. Our directed network consisted of 29,642 nodes and 42,313 edges, and we also created an interactive high-resolution graph in order to show all the details of the network. (We would like to thank Dr. Jacob Groshek, an Associate Professor at Kansas State University, for his kind assistance in suggesting a network algorithm and setting up the high-resolution graph.) Further, we identified some important results, like the most recurrent emojis, their frequencies, main categories and subcategories, hashtags, and words used by bots, using Python scripts such as EmojiMapper. To understand whom the bots mostly reference and mention in their tweets, we extracted the most mentioned users using a Python 3 script, and then conducted a thematic analysis to identify those mentioned users by examining their Twitter profile descriptions and some of the tweets in their timelines. If an account mentions that it is run by a citizen journalist, we code it as such.
If there is no description available and an individual’s name is used, we code it as personal. We found four main categories among the top 30 most mentioned users: (1) health agencies and official bodies, (2) mainstream media, (3) citizen journalists and grassroots organizations, and (4) personal accounts. Further, we compared the top 5000 mentioned users with the bot accounts to see whether these bots reference other similar ones. Finally, we identified English-language tweets that reference bots in the overall dataset, and we examined the most retweeted posts to understand public perspectives on such automated accounts. The main methodological steps we followed are summarized in Chart 1.
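The mention-extraction step that produced the edge list for the network analysis can be sketched like this. The regex and the (author, text) input format are illustrative assumptions, not the exact script used in the study.

```python
import re
from collections import Counter

# Twitter handles are 1-15 word characters after "@" (simplified pattern)
MENTION_RE = re.compile(r"@(\w{1,15})")

def mention_edges(tweets):
    """Build weighted, directed (author -> mentioned user) edges.

    `tweets` is an iterable of (author, text) pairs, a simplified
    stand-in for the data that fed our Gephi graph.
    """
    edges = Counter()
    for author, text in tweets:
        for mentioned in MENTION_RE.findall(text):
            edges[(author, mentioned.lower())] += 1
    return edges

sample = [("bot_a", "Update via @WHO and @CDCgov #COVID19"),
          ("bot_b", "RT @WHO: stay at home")]
for (src, dst), weight in mention_edges(sample).items():
    print(src, "->", dst, weight)
```

An edge list in this (source, target, weight) form can be exported as CSV and imported directly into Gephi as a directed graph.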

4. Results

From the 1000 most active users, we identified 127 active Twitter accounts that scored 3+ out of 5 in their likelihood of being bots; however, there were 40 other accounts that did not have a score, either because their accounts were private or because they were deleted, such as @NovelCoronaBot and @covidworldinfo, probably for violating Twitter’s automation rules. We are not sure, though, whether these deleted accounts were bots, because Twitter often suspends user accounts for a variety of other reasons, like spamming and/or using abusive language [46]. The active bot users sent 185,099 tweets, and the average number of tweets and retweets each bot sent during the study period is 1457. The most active bot user in the whole dataset is @coronavid19_bot (n = 16,484), while other users clearly indicate their bot nature in their names, like @StayAtHomeBot and @layoffbot. The majority of other bots do not have the same name features, such as @newworldsurvive, @DrCoronavirus, @coronavirusbuzz, and @VirusTimes.
The social network analysis shows 10 major online communities; the largest is the one in pink (84.15%), which has more than 24,000 nodes, followed by the light green one (9.25%), containing about 2000 nodes (Figure 2). The remaining online communities are smaller than these two. The social network graph also shows that two bot users, @ChalecosAmarill and @AleLRoss198, have large nodes because they are far more interactive in mentioning other users than the rest. Regarding the users most referenced by bots, we found that 5 out of the top 30 accounts are bots, while two are recently deleted accounts. In terms of account types, 4 of them are citizen journalists and grassroots organizations and 3 are personal accounts.
As for the top 30 most mentioned users, we found that mainstream media comes first (33.3%), followed by health agencies and official bodies (30%), personal accounts (23.3%), and citizen journalists or grassroots organizations (10%). To further examine these most mentioned users, we ran the Python 3 script for bot detection to see whether they are mostly bots or humans. We limited our focus here to the top 1000 most mentioned users, and we found that 86.9% were more likely to be humans, in contrast to 5% that scored between 3 and 4.9 out of 5, while 7.6% of accounts had no scores. Further, the comparison between the 5000 most mentioned users and the bot accounts shows only 24 references to other bots, constituting 0.47% of mentioned users.
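The comparison between the most mentioned users and the bot list reduces to a set-overlap check; a minimal sketch follows (the function name and the account names in the example are hypothetical):

```python
def bot_mention_share(mentioned_users, bot_accounts):
    """Count how many mentioned users are themselves known bots,
    and the percentage they represent, mirroring the 24/5000 (0.47%)
    comparison reported above."""
    bots = {b.lower() for b in bot_accounts}
    hits = sum(1 for u in mentioned_users if u.lower() in bots)
    return hits, 100 * hits / len(mentioned_users)

# Toy example: 1 of the 4 mentioned users is on the bot list
print(bot_mention_share(["WHO", "CDCgov", "coronavid19_bot", "CNN"],
                        ["coronavid19_bot", "StayAtHomeBot"]))
```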
In relation to the topic modelling analysis, four main topics were generated and ranked based on their Eigenvalues: “total cases” (8.09), “Covid news” (7.22), “prepper bushcraft” (6.76), and “stay at home” (6.42) (Table 1). Regarding sentiment analysis, the results indicate a mean score of 0.016212, with a standard deviation of 0.343864 and a variance of 0.118243.
As for the most recurrent words, we found that the top seven are used 116,112 times in total in the tweets, including “new”, “cases”, “total”, “confirmed”, “worldwide”, “totaling”, and “death”. The bot accounts sent 3136 emojis, often accompanied by the hashtag #covid19memes (n = 3038). The most recurrent emojis (rendered as images in the original article) include those denoting confirmed cases (n = 8529), deaths (n = 4773), recovered (n = 3685), and virus (n = 7932), as well as other related emojis denoting virus (n = 4181), face mask (n = 2076), death (n = 3278), and SOS (n = 1135). The sequence of emojis gives another insight into the conveyed meaning of these information items; for example, we find that country flag emojis constitute some of the top 20 most recurrent sequences, indicating ongoing news updates on the numbers of COVID-19 patients, deaths, and recoveries (see Table 2). We also find other emoji sequences, such as disseminating news on the virus (n = 678), disseminating videos on the virus (n = 666), an unlabeled sequence (n = 110), please wash your hands with soap and water (n = 39), and sick people around the world (n = 12). In terms of emoji categorization, the results show that emoji symbols are the most frequent, followed by smileys and people, travel, places and flags, objects, food and drink, and animals and nature. In addition, the list of the top 20 emoji subcategories contains important cues like warnings, sick face, and negative face emojis (Table 3). Finally, we identified 3232 English-language tweets posted by Twitter users that referenced bots.
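The word and hashtag frequencies above come from simple counting scripts; a minimal hashtag-frequency sketch is shown below (the regex and the sample tweets are illustrative, not drawn from the dataset):

```python
import re
from collections import Counter

HASHTAG_RE = re.compile(r"#(\w+)")

def top_hashtags(texts, k=10):
    """Count hashtag occurrences across tweet texts, case-insensitively,
    and return the k most frequent as (hashtag, count) pairs."""
    counts = Counter()
    for text in texts:
        counts.update(tag.lower() for tag in HASHTAG_RE.findall(text))
    return counts.most_common(k)

# Toy example:
sample = ["New cases worldwide #COVID19 #covid19news",
          "#COVID19 total confirmed deaths update"]
print(top_hashtags(sample, k=2))  # [('covid19', 2), ('covid19news', 1)]
```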

5. Discussion

To answer the study’s research questions, we found that 12.7% of the top 1000 most active Twitter users who reference #COVID-19 and #COVID19 are bots, and this percentage increases if we incorporate some of the deleted accounts as well as those that scored slightly below three out of five. We believe the percentage of bots identified in this study is an important indicator of the presence and possible influence of these automated accounts on social media. What is interesting, though, is that two of the active accounts initially identified in our dataset, which both sent 2049 tweets, did not yield a bot score because they had no tweets available in their timelines. Previous research shows that some problematic social media users intentionally delete their posts to avoid detection or removal by social media platforms [47]. The qualitative examination of one of these accounts, which sent 685 messages, shows praise for the COVID-19 actions taken by China, including the spreading of unverified information about the alleged effectiveness of traditional Chinese medicine in curing the virus. Almost all the retweets were meant to amplify official Chinese sources like @ChinaEUMission, @Chinacultureorg, and @spokespersonCHN, as well as Chinese mainstream media like @ChinaDaily, @CGTNofficial, and @XHNews.
To understand whom these bots mostly reference and mention, we found that they mostly cite mainstream media channels from different countries, such as ABSCBNNews (Philippines), El Comercio (Ecuador), and Al Etihad (UAE), followed by health agencies and official bodies like the WHO, the CDC, and Dr. Tedros Ghebreyesus. These categories are followed by personal accounts and citizen journalists or grassroots organizations. The prominence of citing, retweeting, and mentioning mainstream media outlets and official health agencies shows that the bots are mostly amplifying news on COVID-19. Though these bots are obviously profiting from directing users to their websites, we think that they can still be useful in complementing the information activities of credible sources because they can further disseminate news on COVID-19.
The sentiment analysis of the bots’ tweets shows that the mean is close to neutral, since the tweets mostly focus on disseminating news and updated figures that do not contain sensationalism or extreme negative or positive sentiments. The emoji subcategorization provides further insight into the sentiment expressed by bots, for Table 3 shows some balance between neutral, positive, and negative emojis, like warnings as well as sick and negative faces.
Further, the topic modelling results show that bots referencing COVID-19 mostly disseminate updated news on the pandemic, which is evident from the first and second topics. This is corroborated by the evidence gathered from the most frequent words, like “cases”, “total”, “confirmed”, and “worldwide”, as well as some of the most used hashtags, such as #covid19news (n = 3103), and emojis. We argue here that the purpose of using emojis and their sequences is to further attract the attention of Twitter users due to their appealing pictorial, non-verbal communication qualities. They function as a complementary message to the news on COVID-19, such as conveying updated figures of confirmed cases, recoveries, and deaths, as well as reminders to wear face masks or wash hands.
The fourth salient topic is a call to stay at home, which is borrowed from the advice given by numerous health agencies and official bodies around the world. This is evident in the use of the most frequent hashtags, such as #Stayhome, #WashYourHands, #StaySafe, #stayathome, #protectyourselfandyourfamily, and #StayHomeSaveLives (used 12,759 times in total). However, the third topic, which focuses on survival skills like “prepper bushcraft,” is quite different, for it supports the survivalist subculture, a movement built on the belief that a natural disaster, war, or pandemic is inevitable, so people need to prepare by purchasing enough goods and learning bushcraft skills to ensure their survival [48]. The presence of this frequent topic is also corroborated by the evidence collected from examining the most frequent hashtags, like #survival, #bushcraft, and #prepper, used 21,215 times in total.
Though it is not among the most frequent topics, the practice of telemedicine (n = 725) during pandemic times is highlighted in relation to promoting the services of an Indian medical clinic.
In addition, the social network analysis results show that these bots neither function in a coordinated way nor focus on one community, for there are 10 different online communities that are not strongly connected or clustered together, denoting the various mentions and scattered audiences that they target. These findings are corroborated by the evidence cited above on the high percentage of human accounts mentioned by the bots examined in this study (86.9%) as well as the very low percentage of bots referencing similar ones (0.47%).
Finally, to answer the fourth research question, the public discourses on automated bots mostly focus on the affordances of health chatbots around the world, such as the launch of a chatbot called Yani in the Philippines, a WhatsApp chatbot disseminating “reliable information and rapid testing diagnostics” in Senegal, a Telegram bot providing “verified claims” on COVID-19 in India, and the “World’s 1st Multilingual AI-Bot” on COVID-19 in Pakistan. The Kansas Department of Health and Environment and the Missouri Department of Health and Senior Services in the USA, for example, tweeted about launching chatbots to answer COVID-19-related questions. On the other hand, the Twitter public expressed a few concerns regarding harassment, the politicization of COVID-19, and the possible spread of disinformation by bots allegedly run by the Chinese and Russian governments.
Conceptually, we believe the findings of this study can be useful in developing theory, because the majority of previous studies on social media bots focus on their nefarious disinformation functions, especially in connection with research on health communication [1,2]. We think that this conceptualization is limited because it does not take into account the possible financial incentives behind using Twitter bots that disseminate factual news rather than disinformation, and it does not provide a complete picture of the bots’ useful implementation in public health.

6. Conclusions

Although some of the literature on social media bots highlights the controversial and anti-social nature of automated accounts, the findings of this study show that the majority of bots spread news on and awareness of COVID-19 risks while citing and referencing mainstream media outlets and credible health sources. We argue that there might be financial incentives behind the design of some of these bots. However, if monitored and updated with credible information by health agencies themselves, we believe that bots can be useful during health crises due to their efficiency and speed in spreading valuable information, some of which is crucial for public health.
It is imperative to highlight some of the limitations of this study. First, the tweets and retweets we collected referenced only the two hashtags #COVID-19 and #COVID19, which reflect the scientific name of the disease, because we were interested in seeing how this community discusses the pandemic. These terms are normally used by an online community that is expected to be more knowledgeable about the virus and to use or rely on official sources. This type of community can be different from other online communities that prefer other popular terms like #coronavirus or controversially popular hashtags like #Wuhanvirus or #Chinesevirus. In other words, the findings of our study are limited to the online community that prefers the mainstream scientific terms, which might explain why we have not seen many bots disseminating disinformation or other problematic content among the most active bot users. Second, it is important to know the sources of these bots and their possible financial incentives, yet such information cannot be easily obtained, and this remains a major limitation. In this study, for example, we found evidence regarding one suspicious Twitter account that supports the Chinese government’s narratives regarding COVID-19. Another limitation of the study is the platform choice, as we focused on Twitter alone, yet there are many other social media platforms that allow bots, such as Telegram.

Author Contributions

Conceptualization, A.A.-R.; Formal analysis, A.A.-R.; Writing—original draft, A.A.-R.; Writing—review & editing, A.A.-R.; Methodology, V.S. All authors have read and agreed to the published version of the manuscript.


The publication of this paper was supported by the SFU Central Open Access Fund.

Conflicts of Interest

The authors declare no conflict of interest.


References

1. Allem, J.P.; Ferrara, E. Could social bots pose a threat to public health? Am. J. Public Health 2018, 108, 1005.
2. Broniatowski, D.A.; Jamison, A.M.; Qi, S.; AlKulaib, L.; Chen, T.; Benton, A.; Dredze, M. Weaponized health communication: Twitter bots and Russian trolls amplify the vaccine debate. Am. J. Public Health 2018, 108, 1378–1384.
3. Brandtzaeg, P.B.; Følstad, A. Chatbots: Changing user needs and motivations. Interactions 2018, 25, 38–43.
4. Greer, S.; Ramo, D.; Chang, Y.J.; Fu, M.; Moskowitz, J.; Haritatos, J. Use of the Chatbot “Vivibot” to Deliver Positive Psychology Skills and Promote Well-Being Among Young People After Cancer Treatment: Randomized Controlled Feasibility Trial. JMIR MHealth UHealth 2019, 7, e15018.
5. Kretzschmar, K.; Tyroll, H.; Pavarini, G.; Manzini, A.; Singh, I.; NeurOx Young People’s Advisory Group. Can your phone be your therapist? Young people’s ethical perspectives on the use of fully automated conversational agents (chatbots) in mental health support. Biomed. Inform. Insights 2019, 11.
6. Skjuve, M.; Brandtzæg, P.B. Chatbots as a new user interface for providing health information to young people. In Youth and News in a Digital Media Environment–Nordic-Baltic Perspectives; Nordicom: Göteborg, Sweden, 2018; Available online: (accessed on 25 September 2020).
7. Battineni, G.; Chintalapudi, N.; Amenta, F. AI Chatbot Design during an Epidemic like the Novel Coronavirus. Healthcare 2020, 8, 154.
8. Datta, C.; Yang, H.Y.; Kuo, I.H.; Broadbent, E.; MacDonald, B.A. Software platform design for personal service robots in healthcare. In Proceedings of the 6th IEEE Conference on Robotics, Automation and Mechatronics (RAM), Manila, Philippines, 12–15 November 2013; pp. 156–161.
9. Følstad, A.; Brandtzæg, P.B. Chatbots and the new world of HCI. Interactions 2017, 24, 38–42.
10. Iyengar, S. Mobile health (mHealth). In Fundamentals of Telemedicine and Telehealth; Academic Press: Cambridge, MA, USA, 2020; pp. 277–294.
11. Sciarretta, E.; Alimenti, L. Wellbeing Technology: Beyond Chatbots. In International Conference on Human-Computer Interaction; Springer: Cham, Switzerland, 2019; pp. 514–519.
12. Vaagan, R.W.; Biseth, H.; Sevincer, V. Faktuell: Youths as journalists in online newspapers and magazines in Norway. In Youth and News in a Digital Media Environment–Nordic-Baltic Perspectives; Nordicom: Göteborg, Sweden, 2018.
13. WHO. WHO and Rakuten Viber Fight COVID-19 Misinformation with Interactive Chatbot. Available online: (accessed on 31 March 2020).
14. WHO. WHO Launches a Chatbot on Facebook Messenger to Combat COVID-19 Misinformation. Available online: (accessed on 15 April 2020).
15. Lichter, S. The Media. In Understanding America: The Anatomy of an Exceptional Nation; Schuck, P.H., Wilson, J.Q., Eds.; Public Affairs: New York, NY, USA, 2008; pp. 181–218.
16. Mills, A.J.; Pitt, C.; Ferguson, S.L. The relationship between fake news and advertising: Brand management in the era of programmatic advertising and prolific falsehood. J. Advert. Res. 2019, 59, 3–8.
17. Rains, S.A. Big data, computational social science, and health communication: A review and agenda for advancing theory. Health Commun. 2020, 35, 26–34.
18. Bode, L.; Vraga, E.K. See something, say something: Correction of global health misinformation on social media. Health Commun. 2018, 33, 1131–1140.
19. Swire-Thompson, B.; Lazer, D. Public health and online misinformation: Challenges and recommendations. Annu. Rev. Public Health 2020, 41, 433–451.
20. Ferrara, E. What types of COVID-19 conspiracies are populated by Twitter bots? First Monday 2020, 25.
21. Ahmed, W.; Vidal-Alaball, J.; Downing, J.; Seguí, F.L. COVID-19 and the 5G conspiracy theory: Social network analysis of Twitter data. J. Med. Internet Res. 2020, 22, e19458.
22. Jahanbin, K.; Rahmanian, V. Using Twitter and web news mining to predict COVID-19 outbreak. Asian Pac. J. Trop. Med. 2020, 8, 378–380.
23. Mackey, T.; Purushothaman, V.; Li, J.; Shah, N.; Nali, M.; Bardier, C.; Cuomo, R. Machine Learning to Detect Self-Reporting of Symptoms, Testing Access, and Recovery Associated With COVID-19 on Twitter: Retrospective Big Data Infoveillance Study. JMIR Public Health Surveill. 2020, 6, e19509.
24. Klein, A.; Magge, A.; O’Connor, K.; Cai, H.; Weissenbacher, D.; Gonzalez-Hernandez, G. A Chronological and Geographical Analysis of Personal Reports of COVID-19 on Twitter. medRxiv 2020.
25. Bisanzio, D.; Kraemer, M.U.; Bogoch, I.I.; Brewer, T.; Brownstein, J.S.; Reithinger, R. Use of Twitter social media activity as a proxy for human mobility to predict the spatiotemporal spread of COVID-19 at global scale. Geospat. Health 2020, 15.
26. Park, H.W.; Park, S.; Chong, M. Conversations and medical news frames on Twitter: Infodemiological study on COVID-19 in South Korea. J. Med. Internet Res. 2020, 22, e18897.
27. Basch, C.H.; Hillyer, G.C.; Meleo-Erwin, Z.C.; Jaime, C.; Mohlman, J.; Basch, C.E. Preventive behaviors conveyed on YouTube to mitigate transmission of COVID-19: Cross-sectional study. JMIR Public Health Surveill. 2020, 6, e18807.
28. Basch, C.E.; Basch, C.H.; Hillyer, G.C.; Jaime, C. The role of YouTube and the entertainment industry in saving lives by educating and mobilizing the public to adopt behaviors for community mitigation of COVID-19: Successive sampling design study. JMIR Public Health Surveill. 2020, 6, e19145.
29. WHO. Novel Coronavirus (2019-nCoV); Situation Report-13; World Health Organization: Geneva, Switzerland, 2020; Available online: (accessed on 25 September 2020).
30. Kouzy, R.; Abi Jaoude, J.; Kraitem, A.; El Alam, M.B.; Karam, B.; Adib, E.; Baddour, K. Coronavirus goes viral: Quantifying the COVID-19 misinformation epidemic on Twitter. Cureus 2020, 12, e7255.
31. Pulido, C.M.; Villarejo-Carballido, B.; Redondo-Sama, G.; Gómez, A. COVID-19 infodemic: More retweets for science-based information on coronavirus than for false information. Int. Sociol. 2020.
32. Lwin, M.O.; Lu, J.; Sheldenkar, A.; Schulz, P.J.; Shin, W.; Gupta, R.; Yang, Y. Global sentiments surrounding the COVID-19 pandemic on Twitter: Analysis of Twitter trends. JMIR Public Health Surveill. 2020, 6, e19447.
33. Ahmad, A.R.; Murad, H.R. The impact of social media on panic during the COVID-19 pandemic in Iraqi Kurdistan: Online questionnaire study. J. Med. Internet Res. 2020, 22, e19556.
34. Ni, M.Y.; Yang, L.; Leung, C.M.; Li, N.; Yao, X.I.; Wang, Y.; Liao, Q. Mental Health, Risk Factors, and Social Media Use During the COVID-19 Epidemic and Cordon Sanitaire Among the Community and Health Professionals in Wuhan, China: Cross-Sectional Survey. JMIR Ment. Health 2020, 7, e19009.
35. Budhwani, H.; Sun, R. Creating COVID-19 Stigma by Referencing the Novel Coronavirus as the “Chinese virus” on Twitter: Quantitative Analysis of Social Media Data. J. Med. Internet Res. 2020, 22, e19301.
36. Jimenez-Sotomayor, M.R.; Gomez-Moreno, C.; Soto-Perez-de-Celis, E. Coronavirus, Ageism, and Twitter: An Evaluation of Tweets about Older Adults and COVID-19. J. Am. Geriatr. Soc. 2020.
37. Abd-Alrazaq, A.; Alhuwail, D.; Househ, M.; Hamdi, M.; Shah, Z. Top concerns of Tweeters during the COVID-19 pandemic: Infoveillance study. J. Med. Internet Res. 2020, 22, e19016.
38. Bruns, A.; Weller, K.; Borra, E.; Rieder, B. Programmed method: Developing a toolset for capturing and analyzing tweets. Aslib J. Inf. Manag. 2014.
39. Al-Rawi, A.; Groshek, J.; Zhang, L. What the fake? Assessing the extent of networked political spamming and bots in the propagation of #fakenews on Twitter. Online Inf. Rev. 2019, 43, 53–71.
40. Hutto, C.; Gilbert, E. VADER: A Parsimonious Rule-based Model for Sentiment Analysis of Social Media Text. In Proceedings of the Eighth International Conference on Weblogs and Social Media (ICWSM-14), Ann Arbor, MI, USA, 1–4 June 2014.
41. Al-Rawi, A. Gatekeeping fake news discourses on mainstream media versus social media. Soc. Sci. Comput. Rev. 2019, 37, 687–704.
42. Péladeau, N.; Davoodi, E. Comparison of latent Dirichlet modeling and factor analysis for topic extraction: A lesson of history. In Proceedings of the Hawaii International Conference on System Sciences (HICSS), Waikoloa Village, HI, USA, 3–6 January 2018; pp. 1–9.
43. Blondel, V.D.; Guillaume, J.L.; Lambiotte, R.; Lefebvre, E. Fast unfolding of communities in large networks. J. Stat. Mech. Theory Exp. 2008.
44. Himelboim, I.; Xiao, X.; Lee, D.K.L.; Wang, M.Y.; Borah, P. A social networks approach to understanding vaccine conversations on Twitter: Network clusters, sentiment, and certainty in HPV social networks. Health Commun. 2020, 35, 607–615.
45. Sadri, A.M.; Hasan, S.; Ukkusuri, S.V.; Lopez, J.E.S. Analysis of social interaction network properties and growth on Twitter. Soc. Netw. Anal. Min. 2018, 8, 56.
46. Twitter. Automation Rules. Available online: (accessed on 3 November 2017).
47. Al-Rawi, A. The fentanyl crisis & the dark side of social media. Telemat. Inform. 2019, 45, 101280.
48. Kabel, A.; Chmidling, C. Disaster prepper: Health, identity, and American survivalist culture. Hum. Organ. 2014, 73, 258–266.
Figure 1. Frequency of bots’ tweets referencing COVID-19 from 12 February to 18 April 2020.
Chart 1. The main methodological procedures followed in the study.
Figure 2. Social network analysis of Twitter users and mentions (for a more detailed graph, please see the following link:
Table 1. Topic modelling of Twitter bots’ tweets. (Topic numbers, labels, and most percentage weights could not be recovered from the original table; the keyword lists and two topic weights are reproduced below.)
- Totaling; Place; Worldwide; Confirmed; Case; Recoveries; Coronavirus; Death
- Confirmed Case; Stay; Country; Home; Update; Cases; Deaths; Active; Total; Recovered
- Total Deaths; Total Recovered; Active Cases; Total Cases; Stay at Home; Total Active Cases; Confirmed Cases; Total Number; Total Confirmed; Positive Cases
- Bajucovid; Coronaviruscovid; Indonesiabebascovid; Fightingcovid; Memes; Testing; Italia; India; News (7.22)
- Prepper; Bushcraft; Survival; Follow; Coronavirusoutbreak; Corona; Stayhomesavelives (6.76)
- 4. Stay at home: CMSTU; Protectyourselfandyourfamily; JPTU; Prayformalaysia; Sayangimalaysiaku; Stayathome; Dudukrumah; Jabatanpenerangan; Allahpeliharakanlahterengganu; Washyourhands; Staysafe; Stayhome; Stayathome
Table 2. The top 20 most frequent emoji sequences. (The emoji sequences were rendered as images in the original; only three rows’ counts are recoverable: No. 9 — 913; No. 14 — 678; No. 16 — 666.)
Table 3. The main categories and subcategories of bots’ memes. (Rows not listed could not be recovered from the original table.)
Main categories (count): 2. smileys and people (22,972); 3. … places, and flags (13,378); 4. objects (6,633); 7. animals and nature (149).
Subcategories (count): 2. warning (9,991); 3. geometric (9,238); 4. body (8,592); … map (5,332); … and weather (4,306); 7. other symbol (3,477); 8. face fantasy (3,363); 10. transport ground (2,492); 11. sick face (2,244); 12. face neutral (1,821); 13. light and video (1,753); 14. alphanum (1,638); … paper (1,352); 16. av symbol (1,295); 18. face negative (1,168); … building (764); 20. face positive (717).
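Emoji tallies like those behind Tables 2 and 3 can be reproduced, in spirit, with a simple Unicode-range match. This is a minimal sketch, not the authors’ pipeline, and it counts single emoji code points rather than full ZWJ sequences:

```python
import re
from collections import Counter

# Match characters in the main emoji Unicode blocks: miscellaneous symbols
# and dingbats, regional-indicator (flag) letters, and the supplementary
# pictographic ranges (emoticons, transport, objects, etc.).
EMOJI_RE = re.compile(
    "[\u2600-\u27BF"           # misc symbols and dingbats
    "\U0001F1E6-\U0001F1FF"    # regional indicators (flags)
    "\U0001F300-\U0001FAFF]"   # pictographs, emoticons, transport, etc.
)

def emoji_counts(tweets):
    """Tally individual emoji code points across a list of tweet texts."""
    counts = Counter()
    for text in tweets:
        counts.update(EMOJI_RE.findall(text))
    return counts
```

The resulting `Counter` can then be aggregated into category totals once each code point is mapped to a category, which is where a curated emoji taxonomy would be needed.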

Share and Cite

MDPI and ACS Style

Al-Rawi, A.; Shukla, V. Bots as Active News Promoters: A Digital Analysis of COVID-19 Tweets. Information 2020, 11, 461.

