Article

Doctors for the Truth: Echo Chambers of Disinformation, Hate Speech, and Authority Bias on Social Media

by Joana Milhazes-Cunha * and Luciana Oliveira *
CEOS.PP, ISCAP, Instituto Politécnico do Porto, Rua Jaime Lopes Amorim, s/n, 4465-004 Porto, Portugal
* Authors to whom correspondence should be addressed.
Societies 2023, 13(10), 226; https://doi.org/10.3390/soc13100226
Submission received: 31 July 2023 / Revised: 4 October 2023 / Accepted: 12 October 2023 / Published: 23 October 2023
(This article belongs to the Special Issue Fake News Post-COVID-19)

Abstract: The COVID-19 pandemic has been the catalyst of one of the most prolific waves of disinformation and hate speech on social media. Amid an infodemic, special interest groups, such as the international movement "Doctors for the Truth", grew in influence on social media, leveraging their status as healthcare professionals and creating true echo chambers of false COVID-19 information and misbeliefs, supported by large communities of eager followers around the world. In this paper, we analyse the discourse of the Portuguese community on Facebook, employing computer-assisted qualitative data analysis. A dataset of 2542 textual and multimedia interactions was extracted from the community and submitted to deductive and inductive coding supported by existing theoretical models. Our investigation revealed a high frequency of negative emotions and of toxic and hateful speech, as well as the widespread diffusion of COVID-19 misbeliefs, 32 of which are of particular relevance in the national context.

1. Introduction

The effects of misinformation are felt far and wide, especially in an era where platforms have "penetrated the heart of societies—affecting institutions, economic transactions, and social and cultural practices (…)" [1] (p. 2), which makes it of the utmost importance to study and analyse the online communities where it spreads and grows. In this study, we explore the Portuguese Facebook page "Doctors for the Truth" (DfT), a community of doctors known for spreading misinformation. The Portuguese group was preceded by other pages with the same name and theoretical alignment in countries such as Argentina, Paraguay, Germany, and Spain, which demonstrates the reach of this dangerous misinformation phenomenon around the world.
The Portuguese page, created on 31 August 2020, reached around 63,000 followers 1 and established itself as a persistent mechanism of resistance to measures to contain the spread of the virus, disseminating information and beliefs that discredit national and international health bodies. It involved a small, multidisciplinary community of doctors who denied the severity of the virus, the measures approved by the Directorate-General for Health and the Ministry of Health, and the government's actions in its efforts to combat the pandemic. With various pieces of content already flagged as disinformation by Facebook itself and by fact-checkers, this community manipulated true information and shared content that discredited the impact of the virus, defending the non-mandatory use of masks, claiming that masks are harmful to children, and promoting the theory that the virus is not transmissible by asymptomatic individuals.
On their page, the doctors encouraged debate on the measures proposed by the government regarding the application of states of calamity and emergency and, above all, the extraordinary measures resulting from them, such as school closures and circulation bans. Using an inflammatory tone and encouraging their followers to resist the health and prevention measures, the group promoted a series of activities on its page, in addition to creating posts setting out its views. The community held press conferences and online debates, issued several statements repudiating journalistic pieces and opinion articles that contradicted its positions on combating the pandemic, and encouraged followers to take part in demonstrations and rallies it organised. Action was also taken by the "Ordem dos Médicos", the Portuguese regulatory institution for physicians, which suspended some of the group's doctors for their online activity, as their sharing of misinformation was perceived as "a direct confrontation with their professional duty" [2,3].
One of the most controversial incidents since the creation of the group, which led to a complaint being filed with the Portuguese Medical Association, occurred when one of the group's leading doctors gave instructions to several users in a Telegram group on how to manipulate PCR diagnostic tests for COVID-19, instructing possibly infected people to carry out a series of steps in order to increase their chances of testing negative. The news provoked several reactions in the Portuguese scientific and medical community, which considered the doctor's attitude serious and reprehensible [4]. The case resulted in the doctor being suspended for six months, one of the most serious sanctions handed down by the Portuguese Medical Association, for "statements that jeopardise public health" [5].
The Portuguese Facebook page "Doctors for the Truth" is illustrative of a trend already observed on social networks of sharing incorrect or non-factual information about the virus. Communities like DfT arose as a response to the fear generated by the threat that COVID-19 poses to life, becoming, over time, places for the expression of opinions about the management of this public health crisis. The page is also illustrative of the discourse that denies the seriousness of the pandemic and its scientific evidence. This study therefore aimed to understand the activity of this community through the observation and reporting of the interactions, practices, beliefs, and behaviours of the users who converge on it.
Taking into consideration the ability that social media platforms, such as Facebook, have to create echo chambers—promoting interaction between like-minded people and decreasing users' chances of encountering points of view conflicting with their own—but also keeping in mind the position of authority and responsibility that these doctors occupy, this community's interactions represent an important object of study, especially when it comes to the effect of the dynamics of power and authority bias [6,7,8] implied in this case. In the present work, we have studied one node of a network of echo chambers which reportedly share the same ideology (its roots link back to the Tobacco Industry Research Committee (TIRC) and to the Heartland Institute). There is no previous research on any of the nodes of the DfT network, whether in terms of behaviour, speech, or associated beliefs. Our research, building on analogous theoretical models for studying specific aspects that we believe to be critical in this network, provides a theoretical basis composed of eight dimensions to characterize these aspects in each of its nodes. It is our hope that other researchers can replicate our model to analyse other nodes of the network (in comparative transnational research) or other COVID-19 echo chambers.
In the following sections, we present the theoretical background that supports our research: the influence of user-generated content on the spreading of false and misleading information, the risks of misinformation, the role that psychological traits, emotions, and echo chambers play in the sharing of false content, and the heightened presence of hateful and toxic speech online during the pandemic.

1.1. User Online Disinformation

As a new global public health threat, the COVID-19 pandemic has motivated heated debate both offline and online. The World Health Organisation (WHO) itself has classified the dissemination of false content about SARS-CoV-2 as an infodemic, stating that it is a threat to the physical and mental well-being of individuals, that it represents a real risk for countries trying to stop the virus's spread, and even that misinformation can be lethal [9]. According to the Reuters Institute's 2020 Digital News Report [10], in most countries (and in Portugal, as shown by [11]), newspapers remain the main source of information; however, social networks such as Facebook are now used by a third of the sample (31%) to obtain news and information, a figure that rises to 50% in Portugal, where users mainly rely on Facebook. Social networks are characterized by user-generated content (UGC), the veracity and credibility of which is open to question [12]. While journalism is governed by a code of ethics based on objectivity, this editorial control does not exist on social media [13], which leads to the persistent posting of false or misleading content. For this reason, misinformation on social media can be explained by the lack of control and rigour in UGC. The spread of misinformation becomes even more worrying considering the research of Vosoughi, Roy and Aral [14], who found that misinformation spreads faster than correct information. When it comes to different types of falsehood, "misinformation is inaccurate information, open to multiple comprehensions and uses, being the prefix mis-, an indication of mistake or something wrong" [15] (p. 16), usually propagated without the intention to deceive, while disinformation consists of the intentional spread of false information [16], also described by Diaz Ruiz and Nilsson [17] (p. 1) as "an adversarial campaign that weaponizes multiple rhetorical strategies and forms of knowing—including not only falsehoods but also truths, half-truths, and value-laden judgments—to exploit and amplify identity-driven controversies." In this latter definition, we can include "fake news", considered by Paskin [18] as "news articles that originate either on mainstream media (online or offline) or social media and have no factual basis, but are presented as facts and not satire". For Quandt [19], the term "fake news" encompasses a spectrum that includes not only misinformation disseminated unintentionally, but also that which is carelessly reported and that which is intentional, spreading rumours and creating division. Studies point out that in cases of misinformation sharing, accounts with larger numbers of followers, such as media outlets or influencers, tend to defend the veracity of the information they share even if it is false—which contributes to the legitimisation of false information and the facilitation of its spread—while accounts with smaller numbers of followers tend to express uncertainty about their own statements more readily [20]. Considering that both true and false information about the virus cause users to adjust their behaviour in line with what they take to be factual, combating disinformation with true information is seen as one of the solutions not only to fight the infodemic but also to help individuals take preventive measures in line with the pandemic reality [21,22].

1.2. Who Believes in False Content?

Individuals with higher levels of education exhibit a greater ability to discern between fake and truthful content [23]. Additionally, Bronstein et al. [24] found an association between a tendency towards delusional disorders, dogmatism, religious fundamentalism, and reduced analytical thinking, and a greater propensity to believe in fake news. For Pennycook and Rand [25], what leads individuals to share misinformation is a "lack of careful reasoning and relevant knowledge, as well as the use of familiarity and source heuristics", with sharing being accidental and motivated by inattention. Conspiracy theories are also part of the enormous amount of misinformation found online. Byford [26] sees belief in conspiracy theories as rooted in a desire to provide answers to complex situations by reducing perceived uncertainty, a view also reflected by Goreis and Kothgassner [27]. The impact of emotions is also documented as a determining factor in the act of sharing false content online. Negative emotions such as anger, fear, and anxiety, as well as a lack of emotional intelligence, can have a detrimental impact on individuals' judgement, making them more likely to believe and share false information [28,29,30].
However, the perspective of those who are exposed to false information is that this is not a problem caused by users; rather, they believe it is promoted by the media, journalists, and politicians, which reveals dissatisfaction with and lack of trust in these entities and in technology companies in general [31]. The lack of trust in media outlets and its association with political bias is a trend already observed by other authors such as [32,33,34,35].

1.3. Polarisation and Echo Chambers

With algorithms that increasingly favour the personalization of content, platforms such as Facebook aim to make their users' feeds more captivating and engaging. By creating an increasingly personalized feed, in which users only see what the algorithms determine, individuals become isolated from other realities. Echo chambers are then created, a phenomenon in which the political content a user is exposed to is in line with the content the user shares [36]. Without exposure to opposing views, users in different echo chambers see the same topics portrayed in radically different ways, which contributes to the polarisation of opinions [37]. A true environment of selective exposure is thus created, in which the user, affected by cognitive biases such as confirmation bias [38] (the tendency to look for evidence that confirms pre-existing beliefs) and the phenomenon of motivated reasoning [39] (in which we change our beliefs to better accommodate our opinions), chooses what they want to see, further reducing exposure to opposing views and ideas. Additionally, in echo chambers, dissenting views are actively cast aside and belittled, creating a sense of distrust in outside sources [40].
Sunstein [41] defines group polarisation as a phenomenon that occurs in groups where ideas are deliberated and, as the discussion unfolds, the general—but also individual—opinion moves to a more extreme position, in line with the tendency already observed in the discussion. The author states that homogeneity is the enemy of good deliberation, since a plurality of opinions is essential for various perspectives to be heard and to avoid "unjustified extremism". Social media, where echo chambers form, therefore provides the perfect ecosystem for the diffusion of false information and the growth of toxicity and hate speech, which, according to Cinelli et al. [42], are commonly linked. This is a cause for concern for society in terms of the power that these platforms can hold, particularly when it comes to politics [43,44,45]. The study by Wang et al. [46], for example, explores the correlation between changes in opinion, namely political polarisation, and interaction with political campaigns on social media; it concludes that social media interaction with election campaigns can result in the radicalisation of opinions and the formation of echo chambers. This phenomenon of polarisation is also worrying because of the toxic discourse that occurs between users. In his analysis of online political discourse, Saveski [47] observed that toxic discourse tends to be more present in communities with less diversity of opinions. Salminen et al. [48] also found that discussions around political topics generate more toxic discourse between users. This is in line with Ksiazek [49], who additionally found a correlation between the use of multimedia resources in news coverage and a greater presence of hostile comments. Coe, Kenski and Rains [50], in their analysis of discussions in the comment sections of social media websites, found that toxic discourse, especially in the form of personal insults, is particularly recurrent and that its use provokes negative reactions in users. These results are in line with several others [36,51], which also found a greater presence of negative emotions around toxic speech. This discourse sometimes also takes shape on a physical level: the research presented in Gallacher, Heerdink and Hewstone [52] revealed that interaction between polarized political protest groups online can translate into physical violence in offline encounters.
Polarisation can also occur in facets other than politics. For example, other works [53,54] explored the polarisation of discourse in rhetoric against mask-wearing as a COVID-19 prevention measure and anti-vaccine narratives, and both studies concluded that the discourse was toxic and emotional. In a post-truth era where beliefs matter more than facts and where social media plays a crucial role in the way that citizens receive and interpret information, attention is due to the phenomenon of echo chambers and how they participate in the spreading of disinformation, especially when taking into consideration that Choi et al. [55] confirm that rumours spread by users integrated into echo chambers become more viral and spread at a faster rate than when they are spread by users that are not integrated into echo chambers.

1.4. Hate Speech

Between echo chambers and the polarisation of opinions, social media platforms have created the ideal setting for the spread of hateful and toxic speech. Hate speech can be defined as: “(…) any kind of communication in speech, writing or behaviour, that attacks or uses pejorative or discriminatory language with reference to a person or a group on the basis of who they are, in other words, based on their religion, ethnicity, nationality, race, colour, descent, gender or other identity factor.” [56] (p. 2). Several authors [57,58] have explored the influence of social media on the proliferation of online hate speech and concluded that platforms such as Facebook promote this type of speech by privileging “incendiary” content, which often has very specific targets.
Lingiardi et al. [59], in their mapping of hate speech on Twitter, found that the most affected targets of hate speech were women, followed by immigrants, gay and lesbian people, Muslims, Jews, and people with disabilities. Obermaier, Hofbauer and Reinemann [60] developed their research around hate speech directed at journalists, who are frequent targets of toxicity. The emergence of the COVID-19 pandemic also provoked the appearance of this type of discourse, mainly against China and its inhabitants, given the origin of the virus, but also against Iran—one of the countries most affected by the virus at the beginning of the pandemic—and against Jews and immigrants, seen as the culprits of the invention and dissemination of the virus, respectively [61,62,63].
In our content analysis model, we have included the study of the presence of hate speech according to the typologies defined by Guterres [56] for the DfT community.

1.5. Misbeliefs about COVID-19

Several studies have confirmed that disinformation about SARS-CoV-2 has been spreading at a rapid pace on social media, just like a virus spreads between humans [64,65,66,67]. This rampant spread of misinformation poses a risk to users. As previously mentioned, the WHO considers that misinformation about COVID-19 can even be lethal [9]. It is increasingly clear that social platforms such as Facebook, Twitter, WhatsApp and YouTube are fertile ground for the spread of conspiracy theories, rumours, and false information [68]. Goreis and Kothgassner [27] analysed the relationship between social media and the spread of conspiracy theories. The authors stated that conspiracy theories spread on social networks in times of uncertainty and danger and, acting as a defence mechanism, help those who take refuge in them to find answers to events that cause fear and insecurity, possibly relieving their distress. This suggests that fake news and false information are often shared to find simple answers to complex problems rather than as part of a premeditated effort to sow chaos and distrust. In fact, the data show that most sharing of disinformation is done by individual users, suggesting that there is no malign intention behind these behaviours [69,70]. Even so, it is important to state that some disinformation is shared with disruptive intent: a study carried out in the United States and the Philippines observed the activity of bots that disseminate disinformation about COVID-19 and contribute to fuelling hate speech on social media [62].
Next, we present a set of works whose thematic proximity and empirical contributions are relevant to the conduct of our research, namely regarding the exploration of the dimensions of thematic content analysis. Under misbeliefs, we also include misinformation and conspiracy theories about the virus.
With the aim of identifying major COVID-19 topics in English-language tweets, Chandrasekaran et al. [71] uncovered 10 major COVID-19 themes, with 26 associated subtopics, from 1 January 2020 to 9 May 2020. The main themes identified were the origin of the virus, its prevention, symptoms, spread and growth, treatment and recovery, impact on the economy and markets, impact on the health sector, government response, political impact, and racism. These were subdivided into the following subtopics: outbreak; alternative causes; social distancing; disinfection and clean-up; modes of transmission; spread of cases; outbreaks and locations; deaths; drugs and vaccines; therapies; alternative methods; testing; product shortages; panic buying; stock exchange; employability; impact on business; impact on hospitals and clinics; policy changes; essential workers; travel restrictions; financial measures; and containment regulations. The investigation also revealed that users’ sentiment towards the pandemic was mostly negative at the beginning of the analysis, but sentiment gradually receded and became positive towards the end of the period.
Shahsavari et al. [72] utilized machine learning to automatically detect English-language COVID-19 conspiracy theories circulating on Reddit and 4chan. The following narratives were uncovered:
  • The virus is related to the 5G network, explaining the Chinese provenance of the virus through a link to communications giant Huawei;
  • Release, accidental or deliberate, of the virus from either a Chinese laboratory or an unspecified military laboratory, and its role as a biological weapon;
  • The virus originated in Chinese culinary practices and is all part of a cover-up by the Chinese Communist Party;
  • Perpetration of a hoax by a globalist cabal in which the virus is no more dangerous than a mild flu or the common cold;
  • Use of the pandemic as a covert operation supported by Bill Gates to develop a global surveillance regime facilitated by widespread vaccination.
The research of Enders et al. [73] was based on a survey of 1040 Americans on different types of misinformation about COVID-19 and its consequences. The authors identified eleven conspiracy theories related to the virus:
  • The number of coronavirus-related deaths has been exaggerated;
  • The threat of coronavirus has been exaggerated by political groups who want to do damage to President Trump;
  • Coronavirus has been purposely created and released by powerful people as part of a conspiracy;
  • Coronavirus is being used to force a dangerous and unnecessary vaccine on Americans;
  • Ultraviolet (UV) light can prevent or cure COVID-19;
  • Coronavirus is being used to install tracking devices inside our bodies;
  • Hydroxychloroquine can prevent or cure COVID-19;
  • COVID-19 cannot be transmitted in areas with hot and humid climates;
  • Bill Gates is behind the coronavirus pandemic;
  • Putting disinfectant on the body can prevent or cure COVID-19;
  • The dangers of 5G mobile phone technology are being covered up by the virus.
The belief that attracted the most supporters was that the number of deaths from the disease was being inflated. Theories related to COVID-19 were primarily linked to political motivations and disbelief in science. The authors also revealed that belief in these theories influences decision-making regarding inoculation against the disease, participation in public leisure activities, and optimism about the immediate future. Earnshaw et al.'s [74] investigation also delved into belief in COVID-19 theories. The authors applied a survey with the following theories:
  • Coronavirus is a myth to force the vaccination of people;
  • Coronavirus does not exist;
  • The coronavirus was developed by the government as part of a biological weapons programme;
  • Big Pharma is encouraging the spread of coronavirus to make money;
  • 5G is causing the coronavirus;
  • The government can cure the coronavirus but chooses not to do so for financial gain.
The research revealed that 33% of respondents admitted to believing one or more of these theories. Those who were more likely to believe in them had lower levels of knowledge about the virus and less trust in medical sources. Additionally, these individuals admitted to adhering less to the rules to fight the virus. Finally, Cassese, Farhart and Miller [75] studied gender differences in adherence to the following beliefs about COVID-19:
  • The virus is a biological weapon intentionally released by China;
  • The virus was accidentally released by China;
  • The virus was accidentally released by the US;
  • Scientists are exaggerating the seriousness to damage President Trump;
  • The media are exaggerating the seriousness to hurt President Trump;
  • Democratic governors are hoarding ventilators to harm President Trump;
  • Democratic governors are not handing out coronavirus tests to harm President Trump;
  • 5G technology is causing coronavirus to spread faster;
  • The coronavirus is not real;
  • Former Microsoft CEO Bill Gates is creating a screening device to be injected with the coronavirus vaccine;
  • The coronavirus was intentionally created to reduce the world’s population.

2. Materials and Methods

In this descriptive case study, we intended to understand the activity of the DfT community through the observation and reporting of the interactions, practices, beliefs, and behaviours of users who interacted on the page. The following research question is thus posed: "What are the beliefs and the discourse promoted by the group 'Médicos pela Verdade—Portugal', and how does this community reveal the traits of an echo chamber?". We hope to contribute to the existing literature on disinformation and echo chambers, especially that pertaining to COVID-19, by revealing how fake news and false content circulate in Facebook communities and how users react to their presence, particularly in a context where authority figures are the ones promoting said disinformation.
In our analysis, we considered all the interactions that took place between 31 August 2020—the date the page was created—and 28 February 2021—the month in which the community was dissolved. This period of time allows us to cover three crucial phases of the community under analysis: its creation, evolution, and closure, followed by the announcement of the creation of a new page, the “Alliance for Health”.
Data were collected using Buzzmonitor, a social media monitoring and management application. The Facebook group was indexed to the application, which collected public information, posts, and comments from followers. The data were then exported to a spreadsheet with the following field structure (a loading sketch follows the list):
  • Page ID;
  • Post ID;
  • Internal Number;
  • Comment ID;
  • Reply ID;
  • Total Replies;
  • Unique Replies;
  • URL;
  • Link;
  • Date and time of interaction;
  • Number of Likes;
  • Number of Shares;
  • Number of Comments;
  • Number of “LOVE” reactions;
  • Number of “WOW” reactions;
  • Number of “HAHA” reactions;
  • Number of “SORRY” reactions;
  • Number of “ANGER” reactions;
  • Content of the interaction;
  • Source;
  • Author;
  • Service (source of interaction);
  • Sentiment (positive, negative, neutral).
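As an illustration of how such an export can be handled downstream, the sketch below loads the spreadsheet and checks it against the field structure above. This is a minimal reconstruction for readers wishing to follow the pipeline: the normalised column names and the load_export helper are our own assumptions for the example, not Buzzmonitor's actual headers or API.

```python
import pandas as pd

# Hypothetical, normalised column names mirroring the field structure listed
# above; the actual headers of a Buzzmonitor export may differ.
EXPECTED_FIELDS = [
    "page_id", "post_id", "internal_number", "comment_id", "reply_id",
    "total_replies", "unique_replies", "url", "link", "datetime",
    "likes", "shares", "comments", "love", "wow", "haha", "sorry", "anger",
    "content", "source", "author", "service", "sentiment",
]

def load_export(path: str) -> pd.DataFrame:
    """Load the spreadsheet exported from the monitoring tool and sanity-check it."""
    df = pd.read_excel(path)
    # Normalise headers ("Number of Likes" -> "number_of_likes", etc. would be
    # mapped here; we assume already-short names for simplicity).
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    missing = set(EXPECTED_FIELDS) - set(df.columns)
    if missing:
        raise ValueError(f"export is missing expected fields: {sorted(missing)}")
    df["datetime"] = pd.to_datetime(df["datetime"])
    return df
```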
A non-probabilistic sample of comments and posts was extracted from the full dataset of interactions according to the popularity of the posts made by the administrators of the page, followed by a chronological selection of the comments made under those posts. As the indicator of popularity, which served as the basis for selecting posts, we took the number of comments, to ensure we captured the contexts that generated the most conversation; "Likes" and "Reactions" were not used as popularity indicators as they provide no content. On average, the page's posts garnered 117 comments from followers, with an interquartile range (IQR) between 41 and 155 comments. Based on the premise that the posts that generate the most active discussion have a higher potential to reveal the community's dynamics, we selected all the posts with more than 155 comments (>Q3), including those identified as outliers (posts with an abnormally high number of comments). In total, 2542 text messages were submitted to computer-assisted content analysis. We also incorporated textual and multimedia content (videos, such as interviews) in our analysis: a total of 13 videos produced by the group of doctors were transcribed and incorporated into the corresponding entry posts. The volume of data submitted for analysis is detailed in Table 1.
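The sampling rule itself is straightforward to express. The following sketch reproduces it in pandas under the same assumed column names as above: comments are counted per post, the quartiles are computed, and posts above the upper quartile (reported here as 155 comments) are retained, which by construction also keeps the outliers.

```python
import pandas as pd

def select_popular_posts(interactions: pd.DataFrame) -> pd.DataFrame:
    """Keep posts whose comment counts exceed the upper quartile (Q3),
    outliers included. Column names follow the illustrative loading sketch."""
    # Rows without a comment_id are assumed to be the posts themselves.
    posts = interactions[interactions["comment_id"].isna()]
    comments = interactions[interactions["comment_id"].notna()]

    counts = comments.groupby("post_id").size()
    q3 = counts.quantile(0.75)  # the paper reports Q1 = 41 and Q3 = 155

    # Everything above Q3 is retained, which automatically includes posts
    # with an abnormally high number of comments (the outliers).
    selected = counts[counts > q3].index
    return posts[posts["post_id"].isin(selected)]
```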
The dataset was then imported into MAXQDA and subjected to coding and categorisation in a deductive and inductive process of content analysis. This allowed us to present results in the form of analytical categorisation and to quantify indicators such as thematic relevance. The fields below were used for the content analysis. "Content" contains the text and multimedia content shared by the page/user; the other fields are reference fields for consulting the messages in context:
  • Page_ID;
  • Post_ID;
  • Comment_ID;
  • Reply_ID;
  • URL;
  • Link;
  • Date;
  • Content;
  • Author_Gender;
  • Sentiment.
Each MAXQDA entry can represent more than one subject, feeling, theme, or emotion, so the same message can be assigned to several codes. The aim of this method is for the final quantitative and qualitative analysis to provide the most accurate picture of the abundance and style of this community's communication.
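A minimal sketch of this many-to-many coding scheme is given below; the message identifiers and code labels are invented for the example. The point is that frequencies are tallied over code assignments rather than over messages, so a single message can contribute to several categories at once.

```python
from collections import Counter

# Invented example codings: each message carries as many codes as apply,
# across different dimensions (sentiment, emotion, theme, belief, ...).
codings = {
    "comment_001": ["Sentiment/Negative", "Emotion/Anger", "Theme/Use of Masks"],
    "comment_002": ["Sentiment/Negative", "Theme/Testing",
                    "Belief/PCR tests are not reliable"],
    "comment_003": ["Sentiment/Positive", "Emotion/Joy", "Page Approval/Approves"],
}

# Tally over code assignments, not messages: a multi-coded message counts
# once per code, which is how thematic relevance shares can be derived.
frequencies = Counter(code for codes in codings.values() for code in codes)
total_coded = sum(frequencies.values())
relevance = {code: n / total_coded for code, n in frequencies.items()}
print(frequencies.most_common(3))
```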
We utilize several content coding models supported by consolidated research in relation to discourse, emotions and feelings expressed, themes discussed, and beliefs shared in online communities.
Due to Facebook’s data protection policies, we were not able to collect socio-demographic data regarding the 63,000 members of this community, and thus it is not possible to present an overall characterization of the profile of its members.
In the following section, we present the content coding models and criteria employed.

Content Coding Models

Figure 1 illustrates the different coding dimensions used to analyse the content of the posts and comments. Seven models were applied to both posts and comments (Sentiment, Emotions, Nature of Interaction, Theme, Entities, Hate Speech, and Associated Belief). On the other hand, Page Initiative was only applied to posts, as it aims to reveal the type of event promoted on the Facebook page, such as a livestream, a workshop, or a protest; and Page Approval was only applied to comments, as it portrays the sentiment of approval or disapproval of the community of doctors evident in the analysed messages.
For the coding of sentiment, we followed text-mining techniques. Buzzmonitor's software for automatic sentiment detection, based on Bayes' theorem, was used to rate sentiment quantitatively on a scale from −1 to 1, translated into "Positive", "Negative", and "Neutral"; these ratings received further manual verification by the researcher during content analysis. This categorisation was applied to every post and comment in our sample.
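Buzzmonitor's sentiment detector is proprietary, so the sketch below is only an illustrative stand-in built on the same general principle: a naive Bayes classifier whose class probabilities are reduced to a score in [−1, 1] and then thresholded into the three labels. The toy training examples and the threshold value are our own assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy labelled examples (invented); a real system would be trained on a
# large annotated corpus of Portuguese social media text.
train_texts = ["excelente trabalho, obrigado",
               "vergonha, isto é mentira",
               "a reunião é amanhã"]
train_labels = ["Positive", "Negative", "Neutral"]

vectorizer = CountVectorizer()
model = MultinomialNB().fit(vectorizer.fit_transform(train_texts), train_labels)

def sentiment_score(text: str) -> float:
    """Bayes-style score in [-1, 1]: P(Positive) - P(Negative)."""
    proba = model.predict_proba(vectorizer.transform([text]))[0]
    classes = list(model.classes_)
    return proba[classes.index("Positive")] - proba[classes.index("Negative")]

def sentiment_label(score: float, threshold: float = 0.2) -> str:
    """Map the score to the three labels; borderline scores count as Neutral.
    Labels would then be manually verified, as in the paper's workflow."""
    if score > threshold:
        return "Positive"
    if score < -threshold:
        return "Negative"
    return "Neutral"
```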
We categorized emotions according to the six core emotions of [76]. Categorisation by emotion was only applied to statements where their presence was clear (e.g., neutral statements would not be classified according to emotion).
For the category of "Nature of Interaction", we began by following Zubiaga et al.'s [20] model for examining how people orient themselves to and spread rumours on social media, making the necessary adjustments for it to fit the nature of our case. The same criteria that the authors applied to primary tweets (source tweets) were applied to posts and comments that mentioned new theories not present in the rest of the thread. As such, following the authors' rules, these posts or comments could not be categorized by "Response Type", a classification reserved for tweets made in response to a primary tweet. Alternatively, for posts that mentioned some type of belief and for comments that initiated debate about a type of belief not reflected in the original post or discussion, only the categories of "Support" and "Evidence" were applied. For comments responding to beliefs postulated in posts or by other followers, only categorisation by "Response Type" and "Evidence" was applied.
When it came to themes, we employed the model proposed by Chandrasekaran et al. [71] for the categorisation of tweets according to COVID-19 topics on Twitter. This method was applied to posts and comments that reflected the themes in [71]. Additional changes to the model were made to incorporate the themes “Quarantine”, “Use of Masks”, and “Asymptomatic cases”.
Following the definitions of hate speech in the literature [56,77], we categorized statements according to hate speech regarding the presence of discriminatory and offensive speech targeted at race, gender, sexual orientation, religion, ethnicity, descent, colour, and nationality, but also culture and political affiliation and stance.
For the analysis of statements regarding beliefs and falsehoods related to COVID-19, we applied Shahsavari et al.'s [72] model to posts and comments that contained COVID-19 beliefs and theories. Additional changes to the model were made to incorporate beliefs specifically found in this community.
The following categories were created by the authors of this study to better reflect the nature of our case: “Entities” allowed for the classification of the type of entities referred to and was applied to posts and comments that mention certain entities; and “Page Approval” was created in order to allow the assessment of the level of support among the followers of the page and was only applied to comments where support or disapproval of the page and its initiatives were clear.
When it comes to general criteria, only “Sentiment” was applied mandatorily. Other categorisations were dependent on the presence or absence of the entity type, emotion, theme, belief, etc. Each post or comment could be classified more than once with the same or different codes and categories, to ensure that the portrait of this community and its followers was as faithful to reality as possible. Interactions were categorized according to their context, so a comment that addresses, for example, the topic of ‘Testing’, but is part of a conversation that addresses the topic of ‘Quarantine’, is categorised according to both codes. We believe that this is one of the advantages of content categorisation by humans, as it allows us to reveal, with greater accuracy, the subtleties of the studied discourse.
In the following section, we present the results of our analysis.

3. Results and Discussion

The adopted categorisation model was used as a starting point for content analysis, and was dynamically adjusted, through inductive processes, to the nature of the communication, behaviours, and discussions on the page. We present our adapted content coding and categorisation model in Table 2.
Our results indicate that the most popular topics were "Testing" (15.7%) in posts and "Use of Mask" (17.9%) in comments. In Chandrasekaran et al.'s [71] analysis, on which we drew to form our model, "Testing" occupied only 3.51% of the conversation in tweets. "Use of Mask", on the other hand, was one of the themes we introduced from the analysis of the DfT page and for which we have no other benchmarks.
Effectively, PCR tests generated criticism regarding their reliability, as did the testing policy itself, which, according to the doctors, was causing the disproportionate creation of false positives. The obligatory use of masks was strongly condemned by the doctors who composed the group and repudiated by the community of followers (Table 2). The frequency of discussion around this theme, associated with a predominantly negative tone, is also related to the high support for the belief that the "Use of Mask is unnecessary or even dangerous", which assumes a relevance of 17.3% in the discussion of the posts and 13.8% in the comments (Table 3).
We identified the presence of hate speech of a cultural nature (posts = 20%; comments = 61.9%), especially directed at journalists. This discourse was propagated by the doctors and mirrored in the community of followers, which amplified the resistance created against the media. Our results are in line with those of Obermaier, Hofbauer and Reinemann [60], where journalists were also identified as targets of hate speech: 17% of the journalists surveyed had been victims of personal hate speech, and 28% had been confronted with hate speech directed at their professional class. The insults directed at this class align with the belief that the media manipulated the real pandemic situation. Both the administrators of the group and the followers of the page believed that the media created a "wave of fear", seeking to influence society and to profit from media coverage (Table 4). This trend was also observed by Nielsen and Graves [31], who found that individuals attribute the cause of disinformation to the media. The presence of cultural hate speech is further reinforced by the presence of toxic speech among followers, who often argued with and insulted each other using a more informal and personal register.
On the other hand, the links between the analysed beliefs (some of them conspiracy theories), which involved elements such as the media and the political class, offer a plausible justification for the presence of cultural and political hate speech. Several authors [31,32,33,34,35] have documented the great distrust of followers towards the media, journalists, and politicians, who are seen as originators of the dissemination of false information online. The administrators' discourse, and especially that of the followers, precisely reflects this trend. Users often made accusations against politicians, in an inflammatory and aggressive tone, whom they accused of taking advantage of the pandemic; in the eyes of users, these entities became the real culprits for the economic and social situation of the time. In our study, we detected the presence of political hate speech in large proportions (posts = 80%; comments = 33.5%), directed at Portuguese parties and political figures, but also discrimination between users based on their political affiliation (Table 5). This type of discourse was particularly prominent at times when the actions of the government and other political entities were criticised, but also when conspiracy theories were propagated that attributed the origin of the virus to supposed elites and political organisations. As such, our results suggest that the presence of hate speech from the political sphere is a growing trend.
Several factors may explain the presence of toxic and hate speech on the page. On the one hand, it can be argued that the community itself is an echo chamber, owing to the homogeneity of opinions (sender and audience) and the very nature of the page, which promoted, from the outset, a combative discourse regarding the management of the pandemic. The small percentage of disapproval of the page (13%) on the part of the followers should be noted. As Sunstein [41] argues, homogeneity of opinions does not invite deliberation and certainly does not promote opposition between different points of view, so this polarized environment motivates the "extremisms" mentioned by the author. This theory is also supported by Saveski [47], who observed a greater presence of toxic discourse in communities where opinions are homogeneous.
Comparing our results with those of Shahsavari et al. [72], we find that all the beliefs identified by the authors, except for the one relating the virus's origin to culinary practices, were identified in our sample. However, 32 additional beliefs, not predicted by this model, were identified. To date, this is the only study to have explored the beliefs about COVID-19 postulated on Facebook in Portugal, and the systematization of these beliefs is of great importance. Some of the beliefs identified on the page have already been observed in other countries, such as those related to the New World Order and "Operation Lockstep", among others. However, novel beliefs were also identified, such as "There is a higher incidence of the virus in municipalities whose councils are not led by the Socialist Party" and a conspiracy theory linked to the group itself, based on the idea that "There are people infiltrated in social media to manipulate the group and its followers".
On the other hand, it is worth noting that the most popular beliefs on the sender side differ slightly from those on the audience side. We recall that the most frequently postulated beliefs on the page were “Manipulation by the Media”, “Use of Mask is unnecessary and even dangerous” and “PCR tests are not reliable diagnostic methods” (Table 6).
As for the followers of the page, the most popular beliefs were "The virus is not as lethal as health authorities want to make it seem and is as dangerous as a common flu", "Manipulation by the Media", and "Wearing a mask is unnecessary and even dangerous" (Table 7). Our results mirror those of [73], as that study also identified the belief related to the inflation of the death toll as one of the most popular (29% vs. posts (P) = 8%; comments (C) = 17.1%). In our sample we found other beliefs also identified by those authors, propagated in the US context, such as the use of the pandemic to damage Donald Trump's presidency (28% vs. C = 0.5%); the pandemic being a hoax (27% vs. P = 2.7%; C = 11.2%); the use of the pandemic to implant microchips in patients (18% vs. C = 0.5%); hydroxychloroquine and disinfectant being a cure for the disease (18% vs. C = 0.1%); Bill Gates being responsible for the pandemic (13% vs. C = 1%); and 5G technology being linked to the virus (11% vs. P = 2.7%; C = 0.4%).
Our results are also in line with those of Earnshaw et al. [74], which also identified, in the United States, support for beliefs such as the possibility of the virus being a bioweapon; the virus having been accidentally released by China; 5G technology being involved in the dissemination of the virus; and the possibility of it being a hoax. Cassese, Farhart and Miller [75], regarding the American context, also identified beliefs present in our sample, such as the virus being accidentally released by China; 5G technology being responsible for the spread of the virus; the virus not being real; screening devices being injected with the COVID-19 vaccine; and the virus being intentionally created to reduce the world population. Our results reinforce some of the findings of these authors’ papers reporting the international context in 2020. This was indeed a fertile year for the proliferation of misinformation on a staggering scale, with impacts at almost every level of human life.
The results of the analysis of the sentiment and emotion dimensions, on both the sender and the recipient side, show consistency: in both cases sentiment is mostly negative, and the most recurrently identified emotions are sadness (in the case of posts) and anger (in the case of comments).
Our results also show that this is a community highly polarized in its discourse, where negativity predominates, accompanied by a high frequency of the emotions of anger and sadness. These negative feelings and emotions are associated with the revolt against the management of the pandemic, more specifically against the mandatory use of masks, the testing policy, and the supposed ineffectiveness of PCR tests, a thesis promulgated by the doctors of the group and mirrored by the follower community. These results can be further explained by the detection of hate speech on the page. The positive feelings and the emotion of joy (posts = 30%; comments = 9.9%) are related to the high levels of support and the high frequency of statements of approval of the page and the doctors who make up the "Doctors for the Truth—Portugal" movement (87%) by the followers, as well as the doctors' positive responses to encouraging comments.
The strong incidence of negative emotions and their influence on individuals' discernment are associated with a higher propensity to believe and disseminate false information on COVID-19 [28,29,30], something that may explain the large circulation of demonstrably false information disseminated not only by the group's administrators but also by the audience. Effectively, Pennycook and Rand [25] attribute belief in fake news, in part, to intuitive and emotional thinking, a dimension clearly marked in the discourse of the group under analysis. Belief in fake news is also associated with reduced analytical thinking, delusional disorders, dogmatism, and religious fundamentalism [24]. Although we cannot empirically state that this is the profile of the doctors and followers of the page who spread false information, we can confirm the presence of conspiracy thinking, given the high dissemination of this type of theory in the community. According to several authors [26,27], conspiracy theories arise in times of uncertainty and serve as a response to complex situations. The pandemic represented a moment of great uncertainty in the world, so it is to be expected that individuals with a psychological predisposition to reject information from experts, as happens on this page, seek some level of comfort in these theories.
Despite being discredited by the media, health specialists, and the Ordem dos Médicos, this community found much support and popularity in the Portuguese online space. The influence of the functioning of social networks, as mechanisms that reinforce individuals' opinions by not exposing users to alternative views and by prioritising polarised discourse, may explain why supporters of the page enclose themselves in the explanations and beliefs of this community, ignoring the warning signs about the group. Additionally, as doctors, the administrators of the page present themselves as experts in the health field, so authority bias is at play [6,7,8]. This dynamic of power, together with this flaw in human logic, helps us to understand how information that is known to be false may still be vehemently defended, especially when combined with other biases already mentioned in this work, such as confirmation bias and motivated reasoning [38,39]. The dynamic becomes even more worrying if we consider that these authority figures confer legitimacy on conspiracy theories and incite the spread of toxic discourse against specific groups, influencing the surrounding community.
As we have seen, this community presents trends already observed in the literature, such as its polarised nature, the sharing of misinformation, the belief in false information, the dissemination of conspiracy theories and the presence of hate speech. Despite the numerous initiatives of fact-checkers, the media, government structures, and technology companies themselves to curb online disinformation, the “infodemic” continues to make its presence felt in the Portuguese online space in communities such as the one we have analysed.

4. Conclusions

In this investigation we sought to portray the phenomenon of disinformation on COVID-19 in Portugal, on Facebook, through the analysis of the DfT page, offering the first systematic portrait of this community of doctors and followers. To do so, we adopted a mixed research strategy of embedded typology, where quantitative and qualitative components were combined to draw a faithful portrait of the reality of this page, signalled in the Portuguese media and referenced by the Ordem dos Médicos as a relevant driver of disinformation.
Our analysis also showed the way disinformation is amplified in an echo chamber environment, especially in a context where said disinformation is promoted by authority figures, and how it directly influences the discourse of the community of followers, especially when it comes to setting the tone for an intense discussion, marked by strong emotions. In this case, the analysed community of followers vehemently showed their support and took hold of the discussion by sharing their own convictions. Although there was a global convergence in the themes discussed on the page, our results show that the doctors gave priority to discussion around testing, and followers privileged discussion about mask use.
The prioritization of topics such as “Use of Mask”, “Medicines and Vaccines”, and “Deaths” reveals that the most discussed topics by followers consist of subjects that affect them more immediately—the mandatory use of a personal protective device, the ways of curing the virus, and its fatality rate. The themes most discussed by doctors (“Testing”, “Deaths”, and “Use of Mask”) seem to give primacy to the debate on policies for the management of the pandemic.
The beliefs most recurrently disseminated by the doctors on the page are “Manipulation by the Media”, the “Use of Mask is unnecessary or even dangerous”, and “PCR tests are not reliable diagnostic methods”. In turn, the most frequently addressed beliefs by the followers are “The virus is not as lethal as health authorities want to make it seem and is as dangerous as a common flu”, “Manipulation by the Media”, and “Use of Mask is unnecessary or even dangerous”. These overlaps highlight the convergence of discourses, but also the role of the audience in the selective amplification of themes, driving the overall narrative of the group.
We detected an expressive presence of political and cultural hate speech in the group, both in posts and comments, the former with greater expression in posts by the doctors, and the latter with greater expression in the analysed comments. We also detected hate speech regarding race, religion, and nationality, although only residually. These are types of discourses that proliferate in a predominantly negative climate. In fact, the page’s posts reveal the presence of sadness, anger, and fear, and the comments reveal anger, joy, and sadness (joy expressed only as support for the group’s activities), which allows us to understand that this is a highly polarized community in its discourse, rapidly expressing opposite emotions, at similar high frequencies. The communication style of the community of doctors has a very clear and significant reflection in the discourse of their followers, who mirror and amplify the initial behaviour and discourse. The communication effort of the group finds correspondence with the users who interact with the published contents, comment on the topics being debated, and even initiate their own conversations, reflecting their concerns. A true environment of distress and uncertainty was created, that therefore fostered panic, fear, and anger, which as we have seen are among the dominant emotions expressed in the conversations.
Despite this, the doctors' discourse was not aimed at reassuring the community of followers, but at persistently conveying a sense of fear and anger, urging revolt against health authorities and the government, and antagonising the media. By addressing topics such as the transmissibility of the virus through 5G technology and the involvement of the New World Order, the doctors offered credibility and fertile ground to conspiracy theories, contributing to the phenomenon of "infodemic" experienced on social networks and therefore feeding the vicious cycle of fake news, conspiracy theories, and disinformation found online. Furthermore, the group's actions had a great impact in undermining trust in the media, science, and health professionals, as well as trust in governmental institutions, which were the major entities involved in the management of the pandemic.
Our results reveal the behaviour of this community as a true echo chamber, where outside opinions are disregarded and cast aside, mistrust in official sources is cultivated, and beliefs are held higher than facts, much as stated by Sunstein [41]. Disinformation flows freely, with little questioning, in an authentic selective-exposure environment where confirmation bias and motivated reasoning dominate the discourse [38,39]. The debate is filled with intense negative emotions and tone, and toxicity and hate speech are prevalent, facilitating the emotional arousal that diminishes the critical thinking necessary to prevent the acceptance and spread of misinformation. Additionally, our research shows the power that audiences have in amplifying topics that are particularly important to them, even if those are not the themes most commonly discussed by the senders. This trend showcases the true power of echo chambers in the intensification of harmful narratives, the spread of disinformation, the fostering of hate, and the decay of critical, unbiased thinking. This is a cause for concern for society, as such echo chambers are created with a clear agenda to undermine trust in science, governments, politics, and financial, health, and education systems, which is a threat to democracy.
We cannot fail to note that solutions seeking to prevent the creation of communities where disinformation proliferates, such as preventive intervention in terms of increasing knowledge about social networks, need to be promoted. Efforts are being made by social media platforms to reduce the amount of disinformation that circulates on them. However, these efforts will fall short if the users of these platforms do not develop the critical capacity necessary to avoid engaging with false content. Media literacy, as well as higher health literacy, could help users to better identify false information and avoid propagating it.
It is worth highlighting the theoretical and practical implications of this study. At the theoretical level, the study resulted in the systematization of a belief system proclaimed and disseminated in the national context. We have also presented the type of narrative, environment, and audience response that characterizes an echo chamber. As regards the practical implications, this research culminated in the characterization of the communication and discourse profile of the Portuguese DfT community, offering empirical evidence that allows official bodies and authorities to raise awareness of the organised activity of the group, which may put at risk the health, life, and social participation of citizens. While these findings have made it possible to understand the impact of the DfT community on the dissemination of fake news, conspiracy theories, and hate speech, we must point out that the analysed data refer to a sample of posts and comments selected by a popularity criterion, meaning that not all interactions generated by this community were submitted to content analysis. Additionally, we were unable to perform a demographic analysis of the individuals involved in the discussions, as current data protection regulations prevent the collection of demographic information from users. Future investigations, possibly enabled by automatic content coding, should delve deeper into other DfT communities, such as those in Germany, Spain, Uruguay, or Peru, in an effort to analyse the correspondence of behaviour between these international movements and the potential presence of similar fake news stories, COVID-19 beliefs, and hate speech patterns, which might be indicative of a global movement of echo chambers of misinformation and disinformation.

Author Contributions

Conceptualization, J.M.-C. and L.O.; Formal analysis, L.O.; Funding acquisition, L.O.; Investigation, J.M.-C. and L.O.; Methodology, J.M.-C. and L.O.; Resources, J.M.-C.; Supervision, L.O.; Validation, L.O.; Visualization, J.M.-C. and L.O.; Writing—original draft, J.M.-C.; Writing—review & editing, J.M.-C. and L.O. All authors have read and agreed to the published version of the manuscript.

Funding

This work is financed by Portuguese national funds through FCT—Fundação para a Ciência e Tecnologia, under the project UIDB/05422/2020.

Institutional Review Board Statement

Ethical review and approval were waived for this study given that the data used in this research are freely available in the public domain and the datasets were properly anonymised prior to analysis.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Note

1. As of October 2021.

References

  1. Van Dijck, J.; Poell, T.; De Waal, M. The Platform Society: Public Values in a Connective World; Oxford University Press: Oxford, UK, 2018. [Google Scholar]
  2. Firmino, T.; Maia, A. Sete Médicos têm Processos Disciplinares por Veicularem Desinformação Sobre COVID-19 Público. 2020. Available online: https://www.publico.pt/2020/10/21/ciencia/noticia/abertos-processos-disciplinares-sete-medicos-veiculam-desinformacao-covid19-1935998 (accessed on 21 October 2020).
  3. Novais, V. “Médicos Pela Verdade”. Ordem Abre Processo Contra Movimento que Nega a Gravidade da COVID-19. Observador. 2020. Available online: https://observador.pt/especiais/medicos-pela-verdade-ordem-abre-processo-contra-movimento-que-nega-a-gravidade-da-covid-19/ (accessed on 19 October 2020).
  4. SIC Notícias. “Médicos Pela Verdade”. Ordem Abre Processos ao Grupo que Contesta Medidas Contra a COVID-19. 2020. Available online: https://sicnoticias.pt/especiais/coronavirus/2020-11-28-Medicos-pela-verdade.-Ordem-abre-processos-ao-grupo-que-contesta-medidas-contra-a-Covid-19 (accessed on 31 December 2020).
  5. SIC Notícias. Ordem Dos Médicos Suspende Rosto do Movimento Negacionista “Médicos Pela Verdade” SIC Notícias. 2021. Available online: https://sicnoticias.pt/especiais/coronavirus/2021-02-11-Ordem-dos-Medicos-suspende-rosto-do-movimento-negacionista-Medicos-pela-Verdade (accessed on 22 May 2021).
  6. Milgram, S. Behavioral study of obedience. J. Abnorm. Soc. Psychol. 1963, 67, 371. [Google Scholar] [CrossRef] [PubMed]
  7. Milgram, S. Some conditions of obedience and disobedience to authority. Hum. Relat. 1965, 18, 57–76. [Google Scholar] [CrossRef]
  8. Milgram, S. The perils of obedience. Harper's Mag. 1973, 247, 62–77. Available online: http://www.physics.utah.edu/~detar/phys4910/readings/ethics/PerilsofObedience.html (accessed on 5 March 2021).
  9. World Health Organization. Managing the COVID-19 Infodemic: Promoting Healthy Behaviours and Mitigating the Harm from Misinformation and Disinformation. 2020. Available online: https://www.who.int/news/item/23-09-2020-managing-the-covid-19-infodemic-promoting-healthy-behaviours-and-mitigating-the-harm-from-misinformation-and-disinformation (accessed on 1 December 2020).
  10. Newman, N.; Fletcher, R.; Schulz, A.; Andi, S.; Nielsen, R.-K. Reuters Institute Digital News Report 2020. Available online: https://bit.ly/2BhczTN (accessed on 12 March 2021).
  11. Cabrera, A.; Martins, C.; Cunha, I.F. A cobertura televisiva da pandemia de COVID-19 em Portugal: Um estudo exploratório. Media J. 2020, 20, 185–204. [Google Scholar]
  12. Osatuyi, B. Information sharing on social media sites. Comput. Hum. Behav. 2013, 29, 2622–2631. [Google Scholar] [CrossRef]
  13. Lazer, D.M.; Baum, M.A.; Benkler, Y.; Berinsky, A.J.; Greenhill, K.M.; Menczer, F.; Metzger, M.J.; Nyhan, B.; Pennycook, G.; Rothschild, D.; et al. The science of fake news. Science 2018, 359, 1094–1096. [Google Scholar] [CrossRef]
  14. Vosoughi, S.; Roy, D.; Aral, S. The spread of true and false news online. Science 2018, 359, 1146–1151. [Google Scholar] [CrossRef]
  15. Santos-D’Amorim, K.; de Oliveira Miranda, M.K.F. Misinformation, disinformation, and malinformation: Clarifying the definitions and examples in disinfodemic times. Encontros Bibli 2021, 26, 1–23. [Google Scholar]
  16. Tucker, J.A.; Guess, A.; Barberá, P.; Vaccari, C.; Siegel, A.; Sanovich, S.; Stuckal, D.; Nyhan, B. Social Media, Political Polarization, and Political Disinformation: A Review of the Scientific Literature. 2018. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3144139 (accessed on 17 March 2021). [CrossRef]
  17. Diaz Ruiz, C.; Nilsson, T. Disinformation and echo chambers: How disinformation circulates on social media through identity-driven controversies. J. Public Policy Mark. 2023, 42, 18–35. [Google Scholar] [CrossRef]
  18. Paskin, D. Real or fake news: Who knows? J. Soc. Media Soc. 2018, 7, 252–273. [Google Scholar]
  19. Quandt, T.; Frischlich, L.; Boberg, S.; Schatto-Eckrodt, T. Fake news. In The International Encyclopedia of Journalism Studies; Vos, T.P., Hanusch, F., Dimitrakopoulou, D., Geertsema-Sligh, M., Sehl, A., Eds.; 2019; pp. 1–6. Available online: https://www.researchgate.net/publication/332749986_Fake_News (accessed on 17 March 2021).
  20. Zubiaga, A.; Liakata, M.; Procter, R.; Wong Sak Hoi, G.; Tolmie, P. Analysing How People Orient to and Spread Rumours in Social Media by Looking at Conversational Threads. PLoS ONE 2016, 11, e0150989. [Google Scholar] [CrossRef]
  21. Bowles, J.; Larreguy, H.; Liu, S. Countering misinformation via WhatsApp: Preliminary evidence from the COVID-19 pandemic in Zimbabwe. PLoS ONE 2020, 15, e0240005. [Google Scholar] [CrossRef]
  22. Elayeh, E.; Aleidi, S.M.; Ya’acoub, R.; Haddadin, R.N. Before and after case reporting: A comparison of the knowledge, attitude and practices of the Jordanian population towards COVID-19. PLoS ONE 2020, 15, e0240780. [Google Scholar] [CrossRef]
  23. Allcott, H.; Gentzkow, M. Social media and fake news in the 2016 election. J. Econ. Perspect. 2017, 31, 211–236. [Google Scholar] [CrossRef]
  24. Bronstein, M.V.; Pennycook, G.; Bear, A.; Rand, D.G.; Cannon, T.D. Belief in fake news is associated with delusionality, dogmatism, religious fundamentalism, and reduced analytic thinking. J. Appl. Res. Mem. Cogn. 2019, 8, 108–117. [Google Scholar] [CrossRef]
  25. Pennycook, G.; Rand, D.G. The psychology of fake news. Trends Cogn. Sci. 2021, 25, 388–402. [Google Scholar] [CrossRef] [PubMed]
  26. Byford, J. Conspiracy Theories: A Critical Introduction; Palgrave Macmillan: New York, NY, USA, 2011. [Google Scholar]
  27. Goreis, A.; Kothgassner, O.D. Social Media as Vehicle for Conspiracy Beliefs on COVID-19. Digit. Psychol. 2020, 1, 36–39. [Google Scholar] [CrossRef]
  28. Freiling, I.; Krause, N.M.; Scheufele, D.A.; Brossard, D. Believing and sharing misinformation, fact-checks, and accurate information on social media: The role of anxiety during COVID-19. New Media Soc. 2021, 25, 141–162. [Google Scholar] [CrossRef] [PubMed]
  29. Martel, C.; Pennycook, G.; Rand, D.G. Reliance on emotion promotes belief in fake news. Cogn. Res. Princ. Implic. 2020, 5, 47. [Google Scholar] [CrossRef] [PubMed]
  30. Preston, S.; Anderson, A.; Robertson, D.J.; Shephard, M.P.; Huhe, N. Detecting fake news on Facebook: The role of emotional intelligence. PLoS ONE 2021, 16, e0246757. [Google Scholar] [CrossRef] [PubMed]
  31. Nielsen, R.K.; Graves, L. “News You Don’t Believe”: Audience Perspectives on Fake News. Reuters Inst. Study J. 2017. Available online: https://reutersinstitute.politics.ox.ac.uk/sites/default/files/2017-10/Nielsen%26Graves_factsheet_1710v3_FINAL_download.pdf (accessed on 17 March 2021).
  32. Ognyanova, K.; Lazer, D.; Robertson, R.E.; Wilson, C. Misinformation in action: Fake news exposure is linked to lower trust in media, higher trust in government when your side is in power. Harv. Kennedy Sch. Misinf. Rev. 2020, 1, 1–19. [Google Scholar] [CrossRef]
  33. Ojala, M. Is the Age of Impartial Journalism Over? The Neutrality Principle and Audience (Dis) trust in Mainstream News. J. Stud. 2021, 22, 2042–2060. [Google Scholar] [CrossRef]
  34. Nelson, J.L.; Lewis, S.C. Only “sheep” trust journalists? How citizens’ self-perceptions shape their approach to news. New Media Soc. 2021, 25, 1522–1541. [Google Scholar] [CrossRef]
  35. Shin, W.; Kim, C.; Joo, J. Hating journalism: Anti-press discourse and negative emotions toward journalism in Korea. Journalism 2021, 22, 1239–1255. [Google Scholar] [CrossRef]
  36. Garimella, K.; De Francisci Morales, G.; Gionis, A.; Mathioudakis, M. Political discourse on social media: Echo chambers, gatekeepers, and the price of bipartisanship. In Proceedings of the 2018 World Wide Web Conference, Lyon, France, 23–27 April 2018; International World Wide Web Conferences Steering Committee: Lyon, France, 2018; pp. 913–922. [Google Scholar]
  37. Sunstein, C.R. #Republic: Divided Democracy in the Age of Social Media; Princeton University Press: Princeton, NJ, USA, 2017. [Google Scholar]
  38. Wason, P.C. On the failure to eliminate hypotheses in a conceptual task. Q. J. Exp. Psychol. 1960, 12, 129–140. [Google Scholar] [CrossRef]
  39. Kunda, Z. The case for motivated reasoning. Psychol. Bull. 1990, 108, 480. [Google Scholar] [CrossRef] [PubMed]
  40. Nguyen, C.T. Echo chambers and epistemic bubbles. Episteme 2020, 17, 141–161. [Google Scholar] [CrossRef]
  41. Sunstein, C.R. The Law of Group Polarization; John, M., Ed.; Olin Law & Economics Working Paper(91); University of Chicago Law School: Chicago, IL, USA, 1999. [Google Scholar]
  42. Cinelli, M.; Pelicon, A.; Mozetič, I.; Quattrociocchi, W.; Novak, P.K.; Zollo, F. Dynamics of online hate and misinformation. Sci. Rep. 2021, 11, 22083. [Google Scholar] [CrossRef]
  43. Anderson, M. Most Americans say social media companies have too much power, influence in politics. Pew Res. Cent. 2020. Available online: https://www.pewresearch.org/short-reads/2020/07/22/most-americans-say-social-media-companies-have-too-much-power-influence-in-politics/ (accessed on 3 April 2021).
  44. Bucher, T. Want to be on the top? Algorithmic power and the threat of invisibility on Facebook. New Media Soc. 2012, 14, 1164–1180. [Google Scholar] [CrossRef]
  45. Lim, E. Personal Identity Economics: Facebook and the Distortion of Identity Politics. Soc. Media Soc. 2021, 7, 20563051211017492. [Google Scholar] [CrossRef]
  46. Wang, X.; Sirianni, A.D.; Tang, S.; Zheng, Z.; Fu, F. Public discourse and social network echo chambers driven by socio-cognitive biases. Phys. Rev. X 2020, 10, 041042. [Google Scholar] [CrossRef]
  47. Saveski, M. Polarization and Toxicity in Political Discourse Online; Massachusetts Institute of Technology: Cambridge, MA, USA, 2020. [Google Scholar]
  48. Salminen, J.; Sengün, S.; Corporan, J.; Jung, S.-G.; Jansen, B.J. Topic-driven toxicity: Exploring the relationship between online toxicity and news topics. PLoS ONE 2020, 15, e0228723. [Google Scholar] [CrossRef] [PubMed]
  49. Ksiazek, T.B. Commenting on the news: Explaining the degree and quality of user comments on news websites. J. Stud. 2018, 19, 650–673. [Google Scholar] [CrossRef]
  50. Coe, K.; Kenski, K.; Rains, S.A. Online and uncivil? Patterns and determinants of incivility in newspaper website comments. J. Commun. 2014, 64, 658–679. [Google Scholar] [CrossRef]
  51. Gervais, B.T. Incivility online: Affective and behavioral reactions to uncivil political posts in a web-based experiment. J. Inf. Technol. Politics 2015, 12, 167–185. [Google Scholar] [CrossRef]
  52. Gallacher, J.D.; Heerdink, M.W.; Hewstone, M. Online Engagement Between Opposing Political Protest Groups via Social Media is Linked to Physical Violence of Offline Encounters. Soc. Media Soc. 2021, 7, 2056305120984445. [Google Scholar] [CrossRef]
  53. Miyazaki, K.; Uchiba, T.; Tanaka, K.; Sasahara, K. The Strategy Behind Anti-Vaxxers’ Reply Behavior on Social Media. arXiv 2021, arXiv:2105.10319. [Google Scholar]
  54. Pascual-Ferrá, P.; Alperstein, N.; Barnett, D.J.; Rimal, R.N. Toxicity and verbal aggression on social media: Polarized discourse on wearing face masks during the COVID-19 pandemic. Big Data Soc. 2021, 8, 20539517211023533. [Google Scholar] [CrossRef]
  55. Choi, D.; Chun, S.; Oh, H.; Han, J. Rumor propagation is amplified by echo chambers in social media. Sci. Rep. 2020, 10, 310. [Google Scholar] [CrossRef]
  56. Guterres, A. United Nations Strategy and Plan of Action on Hate Speech. 2019. Available online: https://www.un.org/en/genocideprevention/documents/U(20Strategy) (accessed on 12 April 2021).
  57. Klein, A. Hate speech in the information age. In Fanaticism, Racism, and Rage Online; Palgrave Macmillan: Cham, Switzerland, 2017; pp. 25–39. [Google Scholar] [CrossRef]
  58. Munn, L. Angry by design: Toxic communication and technical architectures. Humanit. Soc. Sci. Commun. 2020, 7, 53. [Google Scholar] [CrossRef]
  59. Lingiardi, V.; Carone, N.; Semeraro, G.; Musto, C.; D’Amico, M.; Brena, S. Mapping Twitter hate speech towards social and sexual minorities: A lexicon-based approach to semantic content analysis. Behav. Inf. Technol. 2020, 39, 711–721. [Google Scholar] [CrossRef]
  60. Obermaier, M.; Hofbauer, M.; Reinemann, C. Journalists as targets of hate speech. How German journalists perceive the consequences for themselves and how they cope with it. SCM Stud. Commun. Media 2018, 7, 499–524. [Google Scholar] [CrossRef]
  61. Alshalan, R.; Al-Khalifa, H.; Alsaeed, D.; Al-Baity, H.; Alshalan, S. Detection of Hate Speech in COVID-19–Related Tweets in the Arab Region: Deep Learning and Topic Modeling Approach. J. Med. Internet Res. 2020, 22, e22609. [Google Scholar] [CrossRef]
  62. Uyheng, J.; Carley, K.M. Bots and online hate during the COVID-19 pandemic: Case studies in the United States and the Philippines. J. Comput. Soc. Sci. 2020, 3, 445–468. [Google Scholar] [CrossRef]
  63. Velásquez, N.; Leahy, R.; Restrepo, N.J.; Lupu, Y.; Sear, R.; Gabriel, N.; Jha, O.; Golberg, V.; Johnson, N. Hate multiverse spreads malicious COVID-19 content online beyond individual platform control. arXiv 2020, arXiv:2004.00673. [Google Scholar]
  64. Ahmed, N.; Shahbaz, T.; Shamim, A.; Khan, K.S.; Hussain, S.; Usman, A. The COVID-19 Infodemic: A Quantitative Analysis Through Facebook. Cureus 2020, 12, e11346. [Google Scholar] [CrossRef]
  65. Gallotti, R.; Valle, F.; Castaldo, N.; Sacco, P.; De Domenico, M. Assessing the risks of “infodemics” in response to COVID-19 epidemics. arXiv 2020, arXiv:2004.03997. [Google Scholar] [CrossRef]
  66. Knuutila, A.; Herasimenka, A.; Au, H.; Bright, J.; Howard, P.N. Covid-Related Misinformation on YouTube; Oxford Internet Institute, Oxford University: Oxford, UK, 2020. [Google Scholar]
  67. Naeem, S.B.; Bhatti, R.; Khan, A. An exploration of how fake news is taking over social media and putting public health at risk. Health Inf. Libr. J. 2021, 38, 143–149. [Google Scholar] [CrossRef]
  68. Stecula, D.A.; Pickup, M. Social media, cognitive reflection, and conspiracy beliefs. Front. Political Sci. 2021, 3, 62. [Google Scholar] [CrossRef]
  69. Kouzy, R.; Abi Jaoude, J.; Kraitem, A.; El Alam, M.B.; Karam, B.; Adib, E.; Zarka, J.; Traboulsi, C.; Akl, E.W.; Baddour, K. Coronavirus goes viral: Quantifying the COVID-19 misinformation epidemic on Twitter. Cureus 2020, 12, e7255. [Google Scholar] [CrossRef] [PubMed]
  70. Yustitia, S.; Asharianto, P.D. Misinformation and Disinformation of COVID-19 on Social Media in Indonesia. In Proceeding of LPPM UPN “VETERAN” Yogyakarta Conference Series 2020–Political and Social Science Series; RSF Press: Yogyakarta, Indonesia, 2020; Volume 1, pp. 51–65. [Google Scholar] [CrossRef]
  71. Chandrasekaran, R.; Mehta, V.; Valkunde, T.; Moustakas, E. Topics, Trends, and Sentiments of Tweets About the COVID-19 Pandemic: Temporal Infoveillance Study. J. Med. Internet Res. 2020, 22, e22624. [Google Scholar] [CrossRef] [PubMed]
  72. Shahsavari, S.; Holur, P.; Tangherlini, T.R.; Roychowdhury, V. Conspiracy in the time of corona: Automatic detection of covid-19 conspiracy theories in social media and the news. arXiv 2020, arXiv:2004.13783. [Google Scholar] [CrossRef] [PubMed]
  73. Enders, A.M.; Uscinski, J.E.; Klofstad, C.; Stoler, J. The different forms of COVID-19 misinformation and their consequences. Harv. Kennedy Sch. Misinf. Rev. 2020, 1, 21. [Google Scholar] [CrossRef]
  74. Earnshaw, V.A.; Eaton, L.A.; Kalichman, S.C.; Brousseau, N.M.; Hill, E.C.; Fox, A.B. COVID-19 conspiracy beliefs, health behaviors, and policy support. Transl. Behav. Med. 2020, 10, 850–856. [Google Scholar] [CrossRef]
  75. Cassese, E.C.; Farhart, C.E.; Miller, J.M. Gender differences in COVID-19 conspiracy theory beliefs. Politics Gend. 2020, 16, 1009–1018. [Google Scholar] [CrossRef]
  76. Ekman, P. An argument for basic emotions. Cogn. Emot. 1992, 6, 169–200. [Google Scholar] [CrossRef]
  77. Mahoney, K. Hate Speech, Equality, and the State of Canadian Law. Wake For. L. Rev. 2009, 44, 321. [Google Scholar]
Figure 1. Coding dimensions.
Table 1. Volume of data considered for analysis.
Type of Content | Quantity | Length
Posts | 82 | 29,621 words
Comments | 2460 | 201,036 words
Videos | 13 | 339.81 min
Table 2. Adapted content coding and categorisation model.
Dimensions | Indicators | P (%) | C (%)
Page Approval | Approval | – | 87
 | Disapproval | – | 13
Nature of Interaction [14]: Support | Supporting | 92.6 | 94.4
 | Denying | 0 | 5.6
 | Underspecified | 7.4 | 0
Nature of Interaction [14]: Response type | Agreed | 66.7 | 55.7
 | Disagreed | 0 | 17.4
 | Appeal for more information | 0 | 4.7
 | Comment | 33.3 | 22.2
Nature of Interaction [14]: Evidentiality | First-hand experience | 5.4 | 4.7
 | URL pointing to evidence | 14.3 | 9.9
 | Quotation of person/organization | 8.9 | 2
 | Attachment of picture | 3.6 | 0
 | Quotation of unverifiable source | 0 | 1.6
 | Employment of reasoning | 32.1 | 17.6
 | No reasoning | 35.7 | 64.4
Emotions [51] | Joy | 9.9 | 30
 | Fear | 23.7 | 12.8
 | Rage | 25.2 | 34
 | Disgust | 1.5 | 3.5
 | Sadness | 33.6 | 18.8
 | Surprise | 6.1 | 0.9
Sentiment | Positive | 8.7 | 15.2
 | Negative | 50 | 51.9
 | Neutral | 41.3 | 32.8
Themes [46] | Source (Origin) | 0 | 0
 | Outbreak | 0 | 0
 | Alternative causes | 1.1 | 0.1
 | Prevention | 0 | 0.3
 | Social distancing | 2.2 | 0.7
 | Disinfecting and Cleanliness | 1.1 | 2.6
 | Use of masks | 12 | 17.9
 | Symptoms | 1.5 | 1.9
 | Asymptomatic cases | 4.4 | 3.5
 | Spread and Growth | 0 | 0
 | Modes and Transmission | 0 | 0.7
 | Spread of cases | 6.2 | 4.1
 | Hotspots and locations | 0.4 | 0
 | Death reports | 15.7 | 12.8
 | Treatment and Recovery | 0 | 0
 | Drugs and Vaccines | 6.2 | 14.5
 | Therapies | 0.4 | 1.9
 | Alternative methods | 2.6 | 1.7
 | Testing | 15.7 | 12.1
 | Quarantine | 1.5 | 1.2
 | Impact on the economy and markets | 0.4 | 0.3
 | Shortage of products | 0 | 0.1
 | Panic buying | 0 | 0
 | Stock markets | 0 | 0
 | Employment | 0.4 | 0.8
 | Impact on business | 0.7 | 0.6
 | Impact on health care sector | 0 | 0.1
 | Impact on hospitals and clinics | 8.4 | 6.2
 | Policy changes | 1.5 | 1.5
 | Frontline workers | 0 | 0.8
 | Government response | 10.9 | 3.5
 | Travel restrictions | 0 | 0.5
 | Financial measures | 0.4 | 0.1
 | Lockdown regulations | 4.4 | 9.1
 | Political impact | 2.2 | 0.1
Hate Speech [38,52] | Cultural | 20 | 61.9
 | Political | 80 | 33.5
 | Race | 0 | 0.6
 | Gender | 0 | 0
 | Sexual orientation | 0 | 0
 | Religion | 0 | 1.9
 | Ethnicity | 0 | 0
 | Descent | 0 | 0
 | Colour | 0 | 0
 | Nationality | 0 | 1.9
Associated Belief [47] | Transmissibility through 5G technology | 2.7 | 0.4
 | Bill Gates and the vaccine that aims to limit population growth in the world | 0 | 1
 | The virus originated in culinary practices | 0 | 0
 | Accidental release of the virus from a Chinese laboratory | 1.3 | 0
 | Bioweapon | 0 | 1.7
 | The virus is not as lethal as the health authorities want to make it sound and is just as dangerous as ordinary flu | 8 | 17.1
Associated Belief (local): Own identification | Use of masks is unnecessary or even dangerous | 17.3 | 13.8
 | Use of masks led to children’s deaths | 1.3 | 0.1
 | Manipulation by the media | 26.7 | 16
 | The pandemic is a hoax | 2.7 | 11.2
 | The pandemic aims to set up a new world bank | 0 | 0.5
 | Illuminati conspiracy | 0 | 0.5
 | Freemasonry conspiracy | 0 | 0.2
 | The virus is a method of population control/form of dictatorship | 12 | 8.1
 | COVID-19 aims to end US presidency | 0 | 0.5
 | The virus is a form of far-left dictatorship | 0 | 0.1
 | The virus is related to the New World Order | 4 | 2.8
 | Chinese plot for world domination | 0 | 2.8
 | COVID aims to justify recession and cover up mismanagement by governments | 0 | 0.5
 | COVID-19 is part of ‘Operation Lockstep’ from ‘The Rockefeller Playbook’ | 0 | 0.1
 | Hydroxychloroquine is effective in treating COVID-19 | 0 | 0.1
 | PCR tests aim to collect genetic material from the tested | 0 | 0.2
 | Asymptomatic patients are not infected with the virus or do not transmit it | 1.3 | 1.7
 | Coconut oil is effective in treating COVID-19 | 0 | 0.1
 | There is a higher incidence of the virus in municipalities whose councils are not led by the Socialist Party | 0 | 0.8
 | There will not be a second wave of COVID-19 infections in Portugal | 5.3 | 0.4
 | PCR tests are not reliable diagnostic methods | 12 | 9.6
 | There are people infiltrating social media to manipulate the group and its followers | 0 | 1.3
 | COVID-19 vaccines are bad for your health | 1.3 | 6.5
 | Vaccines insert tracking devices (microchips) in patients | 0 | 0.5
 | Vaccines contain toxic components | 0 | 0.1
 | Vaccines cause autism | 0 | 0.1
 | Getting the flu vaccine increases susceptibility to COVID-19 | 2.7 | 0.2
 | Vaccines infect patients with the COVID-19 virus | 0 | 0.4
 | Vitamin C treatment cures COVID-19 | 0 | 0.2
 | Chlorine dioxide cures COVID-19 | 0 | 1.2
 | World leaders receive fake vaccines | 0 | 0.2
 | Measures applied by the Government to encourage vaccination are disproportionate | 1.3 | 0
Note: “P” stands for “Posts” and “C” for “Comments”.
Table 3. Verbatim expressions of the themes.
Themes | Verbatim
Testing
Post in which doctors debate the level of reliability of PCR tests.
“Contrary to what happened in previous epidemics, almost all health authorities agreed that a single positive result from an Rt-PCR test would be a sufficient indicator for the diagnosis of infection, even in asymptomatic people with no history of exposure. This was done in concert and on the basis of the generalised belief that positive results are highly reliable. However, data from PCR tests for similar viruses show that the test produces false positives in appreciable quantities, enough to make positive results unreliable with the consequent clinical and social implications, but also skewing epidemiological statistics, namely the proportion, prevalence and hospitalisation and mortality rates.”
User refutes another user’s point about the gravity of the COVID-19 pandemic by discrediting PCR tests.
“What if we started at the beginning? The egg or the chicken? The pillar on which this whole so-called “pandemic” rests is a gigantic fallacy, namely the PCR tests. A fact that, I assume, you already know as the informed person you seem to be. So, having said that, what are we going to talk about?”
The use of masks is unnecessary or even dangerous
In a speech at a demonstration, a member of the group mentions how the use of masks has negative effects on children.
“It’s because I have a profile, in practical anaesthetic terms, in paediatric anaesthesia and I work in a place with many, many children being anaesthetised for magnetic resonance… some of these children, very, very sick, very, very hard, very, sad stories, that I can’t agree with [wearing masks] because what I see being done to some of these children, who are institutionalised, and who are children who don’t have the proper levels of intelligence to understand what’s going on and who become gagged with a meaningless mask.”
User does not believe in the need to wear a mask as a protective measure and regrets the adherence to this protective measure by other citizens, even though it was not compulsory.
“True. We have to abstract ourselves from this whole ‘thing’. The truth is that even though it’s not compulsory, I already see a lot of people wearing masks in the street. Unfortunately, the message of fear is being passed on. And in the end they will say it was all necessary because otherwise it would have been much worse. I don’t understand how, nowadays, and with access to all kinds of information, people still go along with what they’re told on TV by the media and politicians, without questioning politicians, without questioning anything. There will have to be a few for many.”
Table 4. Verbatim expressions of cultural hate speech.
Themes | Verbatim
Cultural Hate Speech
User responds to a comment that stated that the COVID-19 pandemic was causing mental health impacts.
“You’ll see that if you have any “nerves” they’ll go away as soon as you switch off the telly. I’ve been watching only animals, culinary shows, history and a few films since April. Nothing else. I switch off the radio when the news comes on. I don’t care what these freaks of journalists say”
User insults a journalist notorious for denouncing fake news, who was mentioned in a post from the group of doctors.
“This dude isn’t a journalist, he’s a trash journalist!”
Table 5. Verbatim expressions of political hate speech.
Themes | Verbatim
Political Hate Speech
Excerpt from a video published on the DfT page where one doctor refers to politicians as “tyrants”.
“(…) testing is mostly done by decree, by decision of some tyrant, and not by clinical criteria”
User denounces the alleged corruption associated with the management of health measures during the pandemic.
“All the people should file complaints against these charlatans and corrupt people who just want to take away our freedom and make us slaves to the system.”
Death threat from a user addressed to the politicians in charge of the management of the health measures implemented during the pandemic.
“I’M ALREADY IN SUCH A STATE THAT ANY DAY NOW I’LL LOSE MY MIND AND KILL SOME”
Table 6. Verbatim expressions of associated beliefs—sender.
Themes | Verbatim
Manipulation by the Media
Excerpt from a video where a doctor from the group argues that the media are “brainwashing” uneducated civilians into thinking the pandemic is more serious than it actually is.
“They play on people’s ignorance! They don’t use scientific terms, they use some scientific reasoning on television and do you think “Mrs Maria”, in the pharmacy or supermarket, understands what they’re saying? No! What does she think? She gets scared! That’s brainwashing by fear! “I don’t know, scared! I don’t understand anything he’s saying, if the doctor is saying it’s like this then it’s true, this is bad, I could die!”. That’s it! It’s pure manipulation! There’s no scientific basis, there’s no evidence, it’s all based on belief and not evidence! Evidence!”
The use of masks is unnecessary and even dangerous
Excerpt from a video where a doctor from the group argues that masks as a preventive measure to avoid COVID-19 spread are unnecessary and could be dangerous in the long run, as they hinder the strengthening of the human immune system.
“Let’s talk about masks. When you go for a walk in the countryside, do you dress like a beekeeper because you might get stung by a bee? We don’t. It’s the beekeeper who goes to work the hives and gets all dressed up. It’s exactly the same with masks. What happens at hospital level can’t be extrapolated to what happens in the community, or in terms of disinfection, because the ecosystem of microorganisms, the form, the type of handling procedures that exist there are not the same as those that exist here. Outside, the reality is different and we have our friendly bacteria that have to survive in order to help us survive and help our immune system to be fortified, and our immune system is a muscle, so if it’s not trained, it withers away. And that’s my fear about next winter.”
PCR tests are not reliable diagnostic methods
Post from the DfT page arguing that PCR tests are unreliable due to false positive results.
“Contrary to what happened in previous epidemics, almost all health authorities agreed that a single positive result from an Rt-PCR test would be enough to diagnose infection, even in asymptomatic people with no history of exposure. This was done in a concerted manner and on the basis of the generalised belief that positive results are highly reliable. However, data from PCR tests for similar viruses show that the test produces false positives in appreciable quantities, enough to make positive results unreliable with the consequent clinical and social implications, but also skewing epidemiological statistics, namely the proportion, prevalence and hospitalisation and mortality rates.”
Table 7. Verbatim expressions of associated beliefs—recipient.
Themes | Verbatim
The virus is not as lethal as health authorities want to make it seem and is as dangerous as the common flu
User responds to another commenter who had challenged the page’s narrative that COVID-19 was not dangerous by pointing to the mortality caused by the virus worldwide.
“That’s it, keep eating ice-creams with your forehead… And while you’re at it, ask your family doctors, scientists, virologists and the like how many people actually die from Covid? If it’s such a lethal virus, explain to me why 44% of those infected are asymptomatic and 90% of those infected are at home being treated with paracetamol? I’d be grateful for clarification”
Manipulation by the Media
User responds to a post on the page about the role of the media in the relaying of information about the pandemic.
“You’re sure to realise what’s really going on… What this journalist is doing is showing us the way, the only solution for us to be able to communicate if we don’t react badly to this masquerade… Do me a favour, if you can… use logical thinking before reacting with feelings … … … Almost 50 years ago, the only way we could communicate was through newspapers and the radio… you know… … … Contrary to what many people think, the first power is the media, because they have direct access to and manipulate the population en masse… Remember that next time and re-analyse the text you read and, above all, where you read it… I’d be very grateful… rest in peace and be happy”
Wearing a mask is unnecessary and even dangerous
User responds to a commenter who defended the use of masks.
“Since when do masks protect you from viruses? If you magnify the mask under a microscope and put a virus next to it, you’ll realise that “loads” of viruses pass through every hole in the mask that you can’t see, not to mention that people are dying of other diseases and not Covid, because if you look at the official data and look closely, you’ll realise that the other diseases that killed thousands in previous years, such as flu, pneumonia, cancers etc, have disappeared. So what conclusion do you draw? MIRACLE all the other diseases have been eradicated.”
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
