1. Introduction
Social media has become a major mode of global communication, characterized by its potential to reach large audiences and spread information rapidly [1,2]. In contrast to traditional media, posted content need not undergo editorial oversight or scientific vetting, and users frequently remain anonymous, which allows individuals to express their views directly [3,4]. As a result, social media carries a multitude of emotional expressions, false information, and rumors, which can steer public opinion away from reasoned debate [5]. Researchers thus argue that it is important to analyze the sentiments expressed on social media [6,7,8].
In the face of health emergencies, social media has become an emerging platform for the public to share opinions, express attitudes, and seek coping strategies [9,10,11,12]. During the outbreak of coronavirus disease 2019 (COVID-19), intensive global efforts toward physical distancing and isolation may have intensified individuals' reliance on social media to remain connected [13]. Researchers found that the public expressed fears of infection and shock at the contagiousness of COVID-19 [14], along with their feelings about infection control strategies, on social media [15,16,17]. However, established evidence shows that social bots, automated accounts controlled and manipulated by computer algorithms [18], often express sentiments in online discussions on social media [7,19] and may manipulate or even distort online public opinion [20,21].
The arrival of COVID-19 vaccines has also sparked heated discussion and debate on social media [22]. Social media has long been a prominent arena for vaccine debates [23]. With 217 million daily active users [24], Twitter is a convenient venue for discussing and debating public health topics, including vaccines and vaccination [2]. Both pro-vaccine and anti-vaccine information is prevalent on Twitter [2], and social bots disseminate anti-vaccine messages by masquerading as legitimate users, eroding public consensus on vaccination [25].
A previous study identified three high-intensity stages of social bot intervention in COVID-19 vaccine-related topics [22]. Yet how social bots shape public sentiment and public opinion remains to be studied.
Against this backdrop, this study examines whether the sentiments expressed by social bots affect human users' sentiments toward COVID-19 vaccines. This overall objective is divided into two sub-objectives.
The first sub-objective is to analyze the sentiments of both social bots and human users in the discussion of COVID-19 vaccines across three stages; a semi-supervised deep learning model, the BERT-CNN sentiment analysis framework, was used to classify the sentiment of tweets posted by social bots and humans. The second sub-objective is to test statistically whether there was a time-series relationship between the sentiments of social bots and those of human users.
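As a rough illustration of this type of model, the Python sketch below combines BERT token embeddings with one-dimensional convolutions and max-pooling for tweet sentiment classification, as is typical of BERT-CNN architectures. It is not the authors' implementation: the pretrained checkpoint (bert-base-uncased), kernel sizes, three-class label set, and other hyperparameters are assumptions, and the semi-supervised training procedure is omitted.

```python
# Minimal BERT-CNN sentiment classifier sketch (illustrative only, not the
# paper's exact model): BERT token embeddings feed 1-D convolutions whose
# max-pooled features are classified into assumed negative/neutral/positive labels.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class BertCNNSentiment(nn.Module):
    def __init__(self, bert_name="bert-base-uncased", n_filters=100,
                 kernel_sizes=(2, 3, 4), n_classes=3, dropout=0.1):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        hidden = self.bert.config.hidden_size
        # One 1-D convolution per kernel size, applied over the token dimension.
        self.convs = nn.ModuleList(
            nn.Conv1d(hidden, n_filters, k) for k in kernel_sizes
        )
        self.dropout = nn.Dropout(dropout)
        self.classifier = nn.Linear(n_filters * len(kernel_sizes), n_classes)

    def forward(self, input_ids, attention_mask):
        # (batch, seq_len, hidden) -> (batch, hidden, seq_len) for Conv1d
        tokens = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        x = tokens.permute(0, 2, 1)
        # Convolve, apply ReLU, then max-pool over time for each kernel size.
        pooled = [torch.relu(conv(x)).max(dim=2).values for conv in self.convs]
        features = self.dropout(torch.cat(pooled, dim=1))
        return self.classifier(features)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = BertCNNSentiment()
batch = tokenizer(["The vaccine rollout is great news!"],
                  padding=True, truncation=True, return_tensors="pt")
logits = model(batch["input_ids"], batch["attention_mask"])
print(logits.softmax(dim=-1))  # class probabilities over the assumed label set
```

In such a design, the convolutional filters act as learnable n-gram detectors over the contextual embeddings, and the pooled features are passed to a linear classifier.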
To fulfil these objectives, the study is divided into six sections. Following the introduction, the second section reviews studies on the sentiment engagement of social bots in online vaccine discussions and on methods of sentiment analysis for social media. The third section presents the data and methodology, including the collection and cleaning of tweets, the bot detection method, the steps taken to build the sentiment analysis model, and an introduction to the Granger causality test. The findings of the empirical analysis are presented in the fourth section and interpreted and discussed in the fifth. The sixth and final section sets out the study's main conclusions, identifies its limitations, and outlines future directions.
5. Discussion
The Granger causality test showed that social bots strongly influenced human sentiments about COVID-19 vaccines: the positive or negative sentiments of their tweets had a corresponding effect on the vaccine sentiments of human users on Twitter. However, the intervention of social bots was not a "magic bullet," and their impact on human online vaccine sentiment appeared limited in some cases. We speculate that, beyond the intervention intensity emphasized in previous research, the conditions under which social bots affect the public opinion environment are closely tied to real-world developments and the specific issues under discussion.
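To make the statistical procedure concrete, the Python sketch below runs a Granger causality test with statsmodels on two hypothetical daily series, positive-sentiment bot tweets and positive-sentiment human tweets. The data are synthetic, and the stationarity check, differencing step, and maximum lag of seven days are illustrative assumptions rather than the settings used in this study.

```python
# Illustrative Granger causality test on synthetic daily sentiment counts
# (toy data, not the study's series): does the bot series help predict the human series?
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller, grangercausalitytests

rng = np.random.default_rng(0)
days = pd.date_range("2020-12-11", periods=60, freq="D")
bot_pos = rng.poisson(50, size=60).astype(float)              # hypothetical daily positive bot tweets
human_pos = 5 * np.roll(bot_pos, 2) + rng.normal(0, 10, 60)   # toy humans lag bots by 2 days
df = pd.DataFrame({"human_pos": human_pos, "bot_pos": bot_pos}, index=days)

# Difference a series if the ADF test cannot reject a unit root (non-stationarity).
for col in df:
    if adfuller(df[col])[1] > 0.05:
        df[col] = df[col].diff()
df = df.dropna()

# H0: bot_pos does NOT Granger-cause human_pos; test lags 1..7 days.
# statsmodels expects the caused series in the first column.
results = grangercausalitytests(df[["human_pos", "bot_pos"]], maxlag=7)
```

A small p-value for the F-test at some lag would be read as the bot series helping to predict the human series, which is the sense of "Granger cause" used throughout this section.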
Stage 1 (from 11 December 2020 to 8 February 2021)
At stage 1, positive-sentiment tweets from social bots increased positive-sentiment tweets from humans. Interestingly, the Granger causality test also indicated the reverse: positive human vaccine tweets Granger-caused an increase in positive tweets from social bots. To explore this intriguing bidirectional result, we examined the positive-sentiment tweets posted by social bots and humans at this stage.
Since the earliest authorized COVID-19 vaccines were released one after another at this stage, the public had high hopes for them while worrying about whether they were sufficiently effective and safe. Most human discussions concerned vaccine effectiveness, how to obtain a vaccine, whether supplies were sufficient, and whether vaccination would cause other harm to the body [22]. Our inspection of social bots' public tweets found that they responded positively to these human concerns.
On the one hand, social bots stressed that the vaccines were safe, noting that the various authorized vaccines had undergone rigorous safety testing. They cited the efficacy rates of the various COVID-19 vaccines against the virus, claiming that vaccination would prevent COVID-19 and become a "game-changer" in the fight against the pandemic. On the other hand, by forwarding information from news media, social bots informed the public that COVID-19 vaccines had been approved for use in the United States, the United Kingdom, India, and other countries. The social bots also claimed that vaccine supplies were sufficient and that anyone could conveniently complete vaccination after making an appointment. Through these messages, social bots responded directly to the focus of public attention.
Social bots with positive sentiments spread their opinions mainly by retweeting human tweets, posting relatively few tweets of their own. Human tweets thus provided social bots with content to spread: as the number of positive human tweets grew, the pool of text that social bots could forward expanded, allowing them to spread more vaccination-promoting content. As relevant tweets accumulated, the frequency and number of bot retweets increased [59]. Because tweets are short and informal, it was harder for human users to recognize automatically generated content; social bots were treated as ordinary human accounts, which in turn led humans and social bots to keep reposting each other's tweets [60,61]. In this context, social bots increased the number of positive-sentiment human tweets.
Stage 2 (from 4 March 2021 to 2 May 2021)
Our initial conjecture was that negative-sentiment tweets from social bots would have a noticeable effect at stage 2: if vaccine safety were questioned during a vaccination campaign that bears directly on people's health, uptake could be significantly hindered, and the intervention of negative-sentiment social bots would likely amplify this negative impact.
At this stage, the discussion between humans and social bots focused mainly on whether vaccination could cause blood clots. Although the reported probability of vaccine-induced blood clots was extremely low [62], the issue caused public panic about vaccines. As the governments of Australia, the United Kingdom, the United States, and other countries suspended administration of the AstraZeneca and Johnson & Johnson vaccines, vaccine safety once again became the focus of discussion [63,64]. The controversy greatly affected the public's willingness to vaccinate [65], and negative sentiments about vaccines triggered by the blood clot reports spread on social media [66].
This stage saw the most intensive activity by positive-sentiment social bots, which intervened strongly in the discussion of COVID-19 vaccines; under their influence, human users tweeted more positive sentiments. The social bots quoted authoritative sources, such as the FDA, in large numbers, and the content they published and forwarded supported vaccination on scientific grounds. They also emphasized that the probability of vaccination-induced blood clots was low and that the potential benefits of vaccination outweighed its potential risks. Moreover, they pointed to governments' decisions to resume use of the AstraZeneca and Johnson & Johnson vaccines as evidence of vaccine safety.
The content delivered by these social bots had a solid scientific basis. Relevant studies have shown that the probability of vaccine-triggered blood clots was only about one in a million and that the incidence of thrombosis among the vaccinated was lower than in the general population; combined cases of thrombosis, thrombocytopenia and bleeding, and disseminated intravascular coagulation were rarer still [67]. Not getting vaccinated, by contrast, posed a risk of infection and increased the safety risk to families, communities, and even the entire country [68]. In this situation, people may have adopted a more positive attitude toward vaccines to protect family members, friends, or other groups at risk [69].
In this context, positive-sentiment social bots played the role of science popularizers, offering scientific explanations and even linking to popular science videos, which allowed them to influence the public positively. Previous studies have shown that vaccine-related information during the pandemic was complex and that online science popularization could improve users' scientific knowledge, counter online rumors, and increase the public's willingness to vaccinate [70]. This may explain why social bots could positively influence public sentiment at this stage.
Stage 3 (from 10 June 2021 to 8 August 2021)
At stage 3, positive-sentiment tweeting by social bots was not a Granger cause of the positive vaccine sentiments expressed by human users. On the one hand, this was related to the declining share of positive tweets contributed by social bots (positive tweets from social bots accounted for 9.75% of all positive tweets, the lowest among the three stages). On the other hand, it was related to the development of the COVID-19 pandemic.
Vaccine discussions on Twitter revolved around the spread of the Delta variant. Positive-sentiment social bots emphasized that the vaccine still worked but rarely specified how well it prevented infection with the Delta variant. They posted information on how many doses were available at a given vaccination site and called on the public to get vaccinated during opening hours. In addition, they noted that authorities were expanding tests of the vaccine's effectiveness against the variant. In contrast, at stage 2 the claim that the probability of a blood clot from vaccination was extremely low was supported by scientific research; against that factual backdrop, public panic subsided as science popularization continued [70], which provided a realistic basis for positive-sentiment social bots to influence human users.
However, in the absence of such scientific support, the impact of social bots might be limited. At the time, there was insufficient evidence that existing vaccines were as effective against the Delta variant as against several earlier variants [71]. Among people infected with the Delta variant, viral loads were similar in the vaccinated and the unvaccinated [72,73], and the viral loads maintained after vaccination helped explain the variant's rapid global spread despite increasing vaccination coverage [74]. The Delta variant was nearly twice as contagious as earlier variants and may have caused more severe illness, and fully vaccinated people could develop breakthrough infections and spread the virus to others [75].
At this stage, negative-sentiment social bots mostly cited information from authoritative sources. By comparing vaccine effectiveness against earlier variants, they argued that protection against the Delta variant had declined and that vaccination was not effectively containing the pandemic. When the public believed that vaccines could not cope with the current situation, they may have been more susceptible to negative messages [76]. The perceived failure of the vaccines deepened public panic about the pandemic, and vaccine anxiety and anti-vaccine sentiment continued to grow on social media [77], creating more opportunities for negative social bots to have a harmful impact.
6. Conclusions
As the public increasingly uses social media to obtain health information, interest in interactive social media for public health promotion is also growing. The continuing worldwide spread of the COVID-19 pandemic has made it difficult for people to communicate offline as frequently as before, which in turn has made discussion of health information on social media more frequent.
Vaccination, one of the most successful public health interventions and an essential means of preventing infectious disease, has become a focus of public online discussion during the outbreak. Social media has become an important channel through which the public obtains vaccine information and forms vaccination attitudes. However, these discussions involve not only human users but also a large number of social bots, which can significantly affect the public's online vaccine sentiments and offline vaccination behavior.
In this work, we analyzed 142,883 tweets from the three stages of the COVID-19 vaccine discussion. Detected social bots accounted for 8.87% of the accounts involved and posted 15,716 tweets. A newly developed BERT-CNN sentiment analysis framework was then used to classify the sentiment of tweets from social bots and human users, and Granger causality tests were performed to verify whether the sentiments of social bots affected those of human users. We found that positive-sentiment social bots increased positive human tweets at stages 1 and 2. At stage 3, positive-sentiment social bots failed to influence human users, whereas negative-sentiment social bots increased negative-sentiment tweets by human users. Overall, social bots significantly affected the spread of positive and negative sentiments and the corresponding sentiments of humans.
This work supports Broniatowski's view that social bots do not have a stable identity in their online influence on vaccination topics; in other words, social bots are not inherently "good" or "bad" in online discussions of COVID-19 vaccines. The implication is that when traditional means of science communication fail in future public health emergencies, some social bots could serve as communicators of accurate health information, refuting rumors, popularizing scientific practices, and helping people build a scientific understanding of health. Nevertheless, the intervention of social bots in the public opinion environment is not a "magic bullet," and whether "good" or "bad" bots produce the intended effect depends on how the pandemic itself develops.