COVID-Related Misinformation Migration to BitChute and Odysee

Abstract: The overwhelming amount of information and misinformation on social media platforms has created a new role that these platforms are inclined to take on, that of the Internet custodian. Mainstream platforms, such as Facebook, Twitter and YouTube, are under tremendous public and political pressure to combat disinformation and remove harmful content. Meanwhile, smaller platforms, such as BitChute and Odysee, have emerged and provide fertile ground for disinformation as a result of their low content-moderation policies. In this study, we analyze the phenomenon of removed content migration from YouTube to BitChute and Odysee. In particular, starting from a list of COVID-related videos removed from YouTube for violating its misinformation policy, we find that ∼15% of them (1114 videos) migrated to the two low content-moderation platforms under study. This amounts to 4096 videos on BitChute and 1810 on Odysee. We present an analysis of this video dataset, revealing characteristics of misinformation dissemination similar to those on YouTube and other mainstream social media platforms. The BitChute–Odysee COVID-related dataset is publicly available for research purposes on misinformation analysis.


Introduction
Millions of Internet users share their opinions and beliefs on social media platforms. Given that some users' posts are offensive, misleading or otherwise harmful, social media platforms are responsible for removing them to maintain a healthy online environment and protect their user community. To this end, they turn to content-moderation policies and systems. For instance, Facebook has publicly stated its commitment to allocating 5% of the firm's revenue, which amounts to USD 3.7 billion, to content moderation (https://knowledge.wharton.upenn.edu/article/social-media-firms-moderate-content/, accessed on 24 October 2022).
While large social media platforms arguably invest in fighting disinformation using content moderation, an ecosystem of alternative, low content-moderation platforms has emerged and become popular. For instance, platforms such as Gab, Voat, Minds and 4chan are known for their tolerance of misinformation and harmful content in general. These platforms allow free speech with minimal standards for content [1], and most of them have been extensively studied due to their close connection to the QAnon movement [2,3]. BitChute and Odysee have recently joined the ecosystem of alternative low content-moderation platforms, attracting public attention. BitChute introduced itself in 2017 as a free-speech content-sharing platform, while Odysee has served as an alternative video-sharing option since the end of 2020. To the best of our knowledge, these two platforms have not been analyzed extensively, even though they have attracted sizable user communities. A recent Reuters report highlights both platforms' significant impact on the disinformation landscape, spreading conspiracy theories and abusive content (https://www.reuters.com/investigates/special-report/usa-media-misinformation/). Google Trends data also suggest that the BitChute platform has been growing in popularity since early 2019, peaking in May 2020, when the Plandemic video appeared, and again in early January 2021. Figure 2 illustrates the Google Trends results for the BitChute and Odysee search terms worldwide over the last five years. Odysee follows a similar growth path to BitChute but at somewhat lower rates. In addition, some studies focus mainly on the content generated on BitChute [4,5]; however, little is known about the platforms' user bases and networks, the content they generate (especially Odysee) and their interactions with other platforms.
To address the above gaps, this study contributes the following:
• Migration of misinformation from the YouTube mainstream platform to alternative low content-moderation platforms, namely BitChute and Odysee;
• Analysis of the characteristics of misinformation dissemination on BitChute and Odysee;
• Analysis of how users engage with misinformation videos on BitChute and Odysee.
To this end, we seek answers to four research questions (RQ1–RQ4), addressed in the remainder of the paper. In the next section, we present previous work on health-related misinformation, focusing on misinformation studies on mainstream and low content-moderation platforms. Then, in the methodology section, we describe our dataset-creation process and its characteristics. Finally, we discuss our main findings, highlighting similarities and differences between the platforms, and conclude the paper.

Related Work
In this section, we organize related work into the following three areas. We first present previous studies on misinformation and conspiracy theories related to health issues, focusing on public health policies and health-protective behaviors, a rather critical topic during the COVID-19 pandemic. The spread of online misinformation during the pandemic has become a major public health concern, making it difficult to find credible and trustworthy information sources. Since social media platforms are highly associated with the spread of misinformation online, our second topic of interest is the spread of misinformation on mainstream social media platforms, with a particular focus on YouTube videos. Since most of these mainstream platforms have adopted policies to reduce conspiratorial and anti-social content, low content-moderation platforms have emerged in the information ecosystem, promoting such content. Hence, we also present a few studies on misinformation on low content-moderation platforms, with a special interest in BitChute and Odysee, which are the focus of our study.

Health-Related Misinformation and Conspiracy Theories
Health-related misinformation, with a particular focus on conspiracy theories, has been widely studied in the past [6]. It has been observed that a tendency to believe that major public events are secretly orchestrated by powerful and malevolent entities [7] is negatively associated with health-protective behaviors [8]. Many studies have found a negative relationship between medical conspiracy beliefs and health-protective behaviors [9], such as vaccination [10][11][12][13][14] or safer sex [15,16]. Moreover, people who believe in conspiracy theories tend to express higher levels of distrust of public institutions [17,18]; hence, their support for public health policies to address the pandemic is lower [19]. Such behaviors raise concerns about severe consequences for public health [20]. Furthermore, there is evidence that many Twitter users used metaphorical expressions to discuss infections during the pandemic. Metaphors cannot be easily validated, and when they are associated with negative emotions, such as fear, anticipation, and sadness, they hinder participation and awareness in healthcare discussions, contributing to a problematic health environment [21].
Health-related misinformation during the COVID-19 pandemic was characterized as another type of pandemic, known as "infodemic" [22]. Misinformation in the context of COVID-19 can include inaccurate information regarding the virus and its transmission, 'fake news', hoaxes, pseudoscience, conspiracy theories, and fabricated reports regarding methods of prevention and treatment. The most common topics of COVID-19 misinformation included, among others, the origin of the virus, cures and remedies, vaccination, masks and self-protection (https://covidinfodemiceurope.com/report/covid_report.pdf, accessed on 24 October 2022).
Conspiracy theories are frequently associated with explanations of the COVID-19 pandemic. With severe political, economic, and social implications, the current health crisis has created a powerful context and conditions in which conspiracy beliefs are likely to develop, such as vulnerability, insecurity, and isolation. Some recent studies provided evidence of a link between COVID-19 conspiracy or unfounded beliefs and reluctance to engage in health-protective behaviors related to COVID-19 prevention, such as minimizing the time spent outside the home, maintaining social distancing, wearing a face mask and hand-washing [23][24][25][26][27]. Other studies found that people who believed conspiracies had lower intentions to vaccinate and weaker support for COVID-19 public health policies [28,29]. In general, exposure to online COVID-19 misinformation is associated with prior misinformation beliefs and sometimes with low preventive behaviors [23,30].

Misinformation on Mainstream Social Media and Video Platforms
Social media platforms play a key role in the information ecosystem. However, they are also associated with disinformation and political propaganda [31]. In the past few years, conspiracy groups have relied on social media manipulation, using bots and orchestrated disinformation campaigns to spread fake scientific information regarding vaccination and other anti-science positions, creating a massive threat to public health [32,33].
During the COVID-19 pandemic, social media have proven to be a mixed blessing. On the one hand, social media can be used effectively to provide essential health-related information and promote debate within the global scientific community. Yet, digital platforms can also disseminate flawed studies, inaccurate information and misinformation, while concerns have arisen regarding the fast and extensive spread of COVID-19 conspiracy theories on social media [34]. One study showed that COVID-19 misinformation is about as likely as accurate information to spread and engage users on social media [35], while another found that social network users are largely unable to determine whether platform content is real or fake [36].
YouTube is one of the platforms that has been strongly associated with conspiracy beliefs. Some findings show that 60% of people who believed that 5G networks caused COVID-19 symptoms were informed about the virus through YouTube [8]. Moreover, YouTube [37][38][39][40] and Facebook [41][42][43] were identified as major vectors for the dissemination of conspiracy beliefs and misinformation, on medical and other topics. Previous studies have analyzed health-related videos on YouTube, on topics such as smoking [44,45], obesity [46], and vaccination [47,48], and their potential impact on the audience. A quantitative analysis of 560 YouTube videos in 2017 related to the link between vaccines and autism or other serious side effects on children revealed an increasing trend in the annual number of uploaded videos and that most videos were negative in tone [49]. Likewise, another study pointed out that most HPV vaccine-related YouTube videos were negative in tone, adding that negative videos had higher ratings than positive or ambiguous ones [50]; the latter finding was also confirmed by the authors of [51]. In addition, studies on Twitter suggest that it plays a similar role to YouTube and Facebook in the spread and impact of misinformation [35,[52][53][54].

Misinformation on Low Content Moderation Platforms
YouTube has received considerable criticism for potentially radicalizing individuals and promoting conspiracy videos [55]. To improve the quality of content on its platform, in January 2019, YouTube reduced recommendations to conspiratorial and alternative news content (https://blog.youtube/news-and-events/continuing-our-work-to-improve/, accessed on 24 October 2022), but it allowed the videos to remain on the platform [56]. Therefore, exposure to anti-social content videos occurs on many platforms beyond YouTube (via shared links). There is evidence that reducing exposure to anti-social videos on YouTube potentially reduces sharing on other mainstream platforms, such as Twitter and Reddit [57]. In reaction to perceived risks of censorship in mainstream media and platforms, so-called alt-tech spaces emerged and have become increasingly popular, especially among users who have been suspended from mainstream social networks for violating their terms of service. Most of them are low content-moderation platforms, producing and promoting anti-social content (i.e., disinformation, political propaganda, conspiracy theories, pseudoscience, and hate speech). For this reason, these alternative platforms have also received attention from researchers, practitioners, and policy makers [58].
Many of these platforms, including Gab, 4chan, and Parler, are very popular among fringe communities, especially far-right groups, with hateful and extremist content [59][60][61][62][63]. According to [61], Gab is predominantly used for the dissemination and discussion of news and world events; it has a significant rate of hate speech, much higher than Twitter's, and it attracts alt-right users, conspiracy theorists, and other trolls. A study analyzing the comments, users, and websites discussed in Dissenter, a browser extension created by Gab, identified a high degree of comment toxicity among a core group of users [64]. 4chan has also received considerable attention, mainly because of its politically incorrect /pol/ board, offensive culture, openly antisemitic rhetoric, users' associations with the alt-right, and meme virality on social media [65][66][67][68]. In addition, Pieroni et al. [69] studied the spread of misinformation on Parler as an "alternative" place for free speech from a data-science and UX-design perspective. Another study on a Parler dataset [70] found that discussion on the platform mainly focused on conservative topics, President Trump, as well as conspiracy theories (e.g., QAnon). Another glaring example resulting from the mainstream social media self-regulation strategy is the launch of Gettr in 2021. Gettr is an alternative social network platform launched by former US President Donald Trump's team, which hosts pro-Trump content mixed with conspiracy theories and anti-left stances [71], albeit without significant organic engagement and activity and with a lower level of toxicity compared to other fringe social networks [72].
Unlike the attention that most of the aforementioned platforms have received from the scientific community, BitChute and Odysee remain relatively neglected by the research community. Concerning the BitChute platform, the authors of [4] found that a handful of channels on the platform receive high engagement, and almost all of those channels contain conspiracies or hate speech. They found that BitChute has a higher rate of hate speech than Gab but a lower one than 4chan, and that some BitChute content producers have been banned from other platforms. Additionally, a BitChute dataset, called The MeLa BitChute Dataset [5], is publicly available, providing a basis for investigating the content on the platform.
BitChute is already one of the most popular alt-tech platforms shared on Telegram, according to [73], and one of the leading destinations for Internet celebrities who have been banned from mainstream platforms [74]. In 2019, approximately 20% of deplatformed far-right channels on YouTube had a BitChute presence [75]. Furthermore, there has been an increase in interest in alt-tech platforms, such as Gab and BitChute, after the January 6 attack on the US Capitol and the "Great Deplatforming" [76]. Moreover, while YouTube's de-recommendation strategy seems to suppress the sharing of misinforming content on Twitter and Reddit [57], it appears simply to move this content to alt-tech spaces, such as BitChute, as in the case of the Plandemic video, which was quickly removed from Facebook, YouTube, and Twitter but remained untouched on BitChute [57,77].

Methodology
Our starting point is a YouTube dataset of COVID-related misinformation videos. The authors of [78] released a dataset of 8105 COVID-related misinformation videos, called the Oxford COVID-related misinformation dataset, which, according to the authors' findings, represents less than 1% of all YouTube videos related to the COVID-19 pandemic. YouTube had removed all but 50 of these videos (marked as 'misinformation videos') by the time the dataset was collected. We queried YouTube with these 50 videos 16 months after the dataset was created, and only 31 of them were still available on the platform. For the videos that were already unavailable when the dataset was collected, the authors of [78] managed to recover the titles and metadata and make them publicly available to the community for analysis. We first conducted a short analysis of these videos, and then used them as a background collection to investigate their online presence on alternative video platforms.

Oxford COVID-Related Misinformation Dataset
The Oxford dataset is composed of the URLs of the videos and a set of metadata (e.g., video title, video description, view count, and channel id) in cases where they were recoverable. The videos have been removed from YouTube, resulting in a lack of visual information. For that reason, we could only use the video titles to investigate their migration to other online platforms. In the dataset, we found 7220 videos (out of 8104) whose title was recovered, and used those videos as the reference dataset for our analysis.
We first compared the video titles to identify the unique cases of COVID misinformation. We used SentenceTransformers (https://www.sbert.net/), a Python framework for state-of-the-art sentence, text and image embeddings, to compare the titles. The sentence-similarity process revealed that the dataset consists of 7066 unique cases of COVID-related misinformation, of which 110 cases have between one and twelve duplicate videos each. An example of such a case is presented in Table 1. However, to simplify the processing, in the rest of the analysis, we consider all 7220 videos as unique cases.
We also applied the en_core_web_sm language-detection model of the spaCy library (https://spacy.io/, accessed on 24 October 2022) to the 7220 video titles. This revealed that misinformation is disseminated in several languages, with English dominating at more than 43% of the videos. Table 2 presents the top 5 languages detected in the titles. Regarding propagation to other social media platforms, the authors of [78] provided the IDs of tweets sharing those YouTube videos. We queried the Twitter API and found that only 1% (2360 out of 245,064 tweets) were still available, while the remaining 99% had been removed by Twitter. This implies that the YouTube misinformation videos shared through Twitter were also removed there.
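The language distribution reported in Table 2 reduces to a frequency count once every title has been run through a language detector (spaCy in our pipeline). The helper below is an illustrative sketch; the function name is ours:

```python
from collections import Counter

def language_distribution(lang_codes):
    """Tally detected language codes (e.g., 'en', 'de') and return
    (code, percentage) pairs sorted from most to least frequent."""
    counts = Counter(lang_codes)
    total = len(lang_codes)
    return [(code, 100.0 * n / total) for code, n in counts.most_common()]
```

For example, `language_distribution(["en", "en", "de", "es"])` yields `[("en", 50.0), ("de", 25.0), ("es", 25.0)]`.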
Finally, we found that the number of views is provided for only ∼10% of the videos in the dataset. Although this number is too small to draw reliable conclusions, the average number of views was 150,138, showing that these videos gained considerable attention before being removed.

Video Retrieval in BitChute and Odysee
We applied a semi-automatic process to collect videos removed by YouTube for violating the platform's policies and then posted on BitChute and Odysee. Figure 3 illustrates the steps we followed. First, we collected the titles of the 7220 COVID-related misinformation videos from the Oxford dataset and used them to query BitChute and Odysee. Even though the Oxford dataset contains duplicate cases of COVID-related misinformation (i.e., several videos refer to the same case), we considered these cases individually as the video titles were slightly different, which increased our chances of finding them on BitChute and Odysee.
Since BitChute and Odysee do not provide a public API for data collection, we used a custom-built scraper to search with the query text (video titles) and collect the returned results along with the metadata available at that time. Due to this limitation, several metadata fields could not be collected.
Since we rely solely on the text of the titles, we submitted them without pre-processing. A primary limitation of our analysis is that the initial YouTube videos are unavailable online, so visual inspection of the retrieved videos against them was not possible. Additionally, we rely on the accuracy of the search algorithm of each platform. In the BitChute case, videos with identical or similar titles were returned. At the same time, there were cases where the search returned no videos, meaning that the query title did not appear in any video on the platform. The queries resulted in 13,336 BitChute videos corresponding to 1416 YouTube COVID-related misinformation cases. Concerning Odysee, most queries returned at least 50 results, even if the results differed significantly from the query text. This posed a challenge for the next steps of the process, where we filter the results to find exact duplicates of the YouTube videos on Odysee. For Odysee, 324,300 videos were collected in this step.
The filtering process splits the collection into three subsets: (1) near duplicates identified through text similarity, (2) videos that can be automatically processed using visual information, and (3) videos that need to be manually annotated. In the following, we describe each step in detail.

Text Similarity
We used the SentenceTransformers (https://www.sbert.net/) Python library to compare the video titles. Each query video was compared to the retrieved videos from BitChute and Odysee, resulting in a title similarity value between 0 and 1, where 1 indicates that the titles are identical. We set a threshold of 0.9 and considered videos with similarity above this threshold as near duplicates. This led to a subset of 3860 BitChute videos corresponding to 893 YouTube cases and 2088 Odysee videos from 369 YouTube cases. The rest of the videos, with a similarity below 0.9, were provided as input to the second filtering step.
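The thresholding step can be sketched as follows. We assume the title embeddings have already been computed (with SentenceTransformers in our pipeline) and are given as plain vectors; the function names and the example vectors are illustrative:

```python
from math import sqrt

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = sqrt(sum(x * x for x in a))
    nb = sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def split_by_similarity(query_emb, candidates, threshold=0.9):
    """Split retrieved videos into near duplicates (similarity >= threshold)
    and the rest, which proceed to the visual-similarity step.
    `candidates` is a list of (video_id, embedding) pairs."""
    near_dups, rest = [], []
    for video_id, emb in candidates:
        (near_dups if cosine(query_emb, emb) >= threshold else rest).append(video_id)
    return near_dups, rest
```

In practice, SentenceTransformers provides an optimized batched cosine-similarity utility, but the per-pair logic is the same as above.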

Visual Similarity
To apply visual similarity, we need a reference video. Since the original YouTube videos are not available, we use as reference videos the BitChute and Odysee videos identified as near duplicates based on the text similarity, namely, the 893 and 369 YouTube cases coming out of the previous step. For each of these cases, we set as the reference video the BitChute or Odysee video with the highest similarity value (in case multiple videos had the same similarity value, we randomly selected one of them). Then, we applied the visual near-duplicate detection approach of [79] on the reference videos and the videos with textual similarity below 0.9. This resulted in the discovery of 4256 duplicate BitChute videos corresponding to 372 YouTube cases and 487 duplicate Odysee videos from 119 YouTube cases.
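The actual detector is the one described in [79]. Purely as an illustration of the general idea of visual near-duplicate matching, a frame-level comparison can be sketched with perceptual hashes (here assumed to be precomputed 64-bit integers; all function names and thresholds are ours, not those of [79]):

```python
def hamming(h1, h2, bits=64):
    """Hamming distance between two perceptual hashes given as integers."""
    return bin((h1 ^ h2) & ((1 << bits) - 1)).count("1")

def frame_match_ratio(ref_hashes, cand_hashes, max_dist=10):
    """Fraction of candidate frames whose hash lies within max_dist bits
    of at least one reference frame hash."""
    if not cand_hashes:
        return 0.0
    hits = sum(
        1 for ch in cand_hashes
        if any(hamming(ch, rh) <= max_dist for rh in ref_hashes)
    )
    return hits / len(cand_hashes)

def is_visual_duplicate(ref_hashes, cand_hashes, ratio=0.8):
    """Declare a candidate a near duplicate if most of its frames match."""
    return frame_match_ratio(ref_hashes, cand_hashes) >= ratio
```

A learned, similarity-network-based method such as [79] is considerably more robust than hash matching to re-encoding, cropping and overlays, which is why it was preferred in the actual pipeline.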

Manual Similarity
Where the retrieved videos had no titles similar to those of the YouTube cases, textual and visual similarity were not applicable, and we resorted to manual annotation. This required two annotators and ≈50 h per annotator to inspect the videos. The annotators were asked to label the BitChute and Odysee videos based on the metadata provided for the YouTube query and on references to debunking articles and other sources on the Internet. For example, for a video entitled 'Coronavirus and 5G by dr Thomas Cowan', the annotator found a debunking article on CBC (https://www.cbc.ca/news/science/fact-check-viral-video-coronavirus-1.5506595) that provided screenshots of the original video, allowing the annotator to label BitChute and Odysee videos depicting these screenshots as duplicates. In total, 523 and 329 YouTube cases were manually annotated for BitChute and Odysee, respectively. Regarding BitChute, 15,813 videos were manually annotated, resulting in 1293 duplicates corresponding to 182 YouTube cases. Similarly, for Odysee, the manual inspection resulted in 968 duplicates from 222 YouTube cases.

BitChute and Odysee Dataset
In summary, we created a dataset of 5906 videos posted on the BitChute and Odysee platforms after their removal from YouTube, which addresses RQ1. These videos were derived from 1048 and 591 COVID-related misinformation cases posted on YouTube, for BitChute and Odysee, respectively, amounting to ≈15% of the Oxford dataset, that is, 1114 unique YouTube videos. We observe that a relatively small percentage of the videos migrated to BitChute and Odysee, but the following should also be considered:
• We used a semi-automated process to recover these videos, so there may be videos that migrated but that our process could not recover;
• We relied solely on the video titles in the first step to search for and compare videos; videos posted on the target platforms under a different title could not be recovered.
Table 3 presents the number of BitChute and Odysee videos retrieved in each process step and the number of YouTube videos they correspond to. We collected the metadata of 3994 BitChute videos (due to the use of a custom-built scraper for collecting metadata, there are videos whose metadata are missing) and 1810 Odysee videos. The number of unique videos derives from the fact that, as presented above, the Oxford dataset contains duplicate cases of COVID-related misinformation, which means that several cases map to the same collected videos.

BitChute Analysis
In this section, we present the analysis of the videos we managed to collect from the BitChute platform in order to address RQ2, RQ3 and RQ4.

Temporal Distribution
We first study the temporal distribution of the BitChute videos. Figure 4 illustrates a timeline of the BitChute videos relative to the corresponding YouTube video. Each line corresponds to one YouTube video and its BitChute duplicates, and the horizontal axis corresponds to the time between the posting of the YouTube video and that of its BitChute duplicates. In the figure, we plot the BitChute videos that were posted at most a day after the YouTube video. Due to missing metadata, we examined 297 YouTube videos and their 2335 BitChute duplicates. In total, 12% of the BitChute duplicates were posted between minutes and a day after the YouTube video, while 50% appeared on the platform between one day and a week after it. Another 34% of the videos were shared more than a week after the original YouTube video. Therefore, more than half of the videos appeared on BitChute at most a week after the original YouTube video.
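The bucketing behind these percentages can be sketched as follows (a minimal illustration; the function name and bucket labels are ours):

```python
from datetime import datetime, timedelta

def lag_buckets(yt_posted, bitchute_posted_list):
    """Bucket BitChute duplicates by how long after the original
    YouTube video they appeared: under a day, a day to a week, over a week."""
    buckets = {"<1 day": 0, "1 day-1 week": 0, ">1 week": 0}
    for ts in bitchute_posted_list:
        lag = ts - yt_posted
        if lag < timedelta(days=1):
            buckets["<1 day"] += 1
        elif lag <= timedelta(weeks=1):
            buckets["1 day-1 week"] += 1
        else:
            buckets[">1 week"] += 1
    return buckets
```

For example, a YouTube video posted on 1 May 2020 with duplicates appearing 3 hours, 3 days and 30 days later yields one video in each bucket.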

Video Interactions
We then study the likes, dislikes, and views acquired by the videos. There are 961 BitChute videos (24%) with zero likes, and 53% of the BitChute videos have at most 10 likes. Only 13 of the BitChute videos, i.e., 0.3%, had a significant impact, with more than 1000 likes. Inspecting the dislikes, we noticed that there is not much dislike activity: 84% of the videos have no dislike reaction, while the remaining 16% have at most 77 dislikes. The like-dislike analysis revealed that users on the BitChute platform applaud the misinformation to some extent, while there are only limited objections. We plotted a histogram of the likes and dislikes of the top 20 most-disliked videos in Figure 5. It is worth noting that 70.2% of the videos collected fewer than 1000 views, which is low compared to the engagement that misinformation videos attract on YouTube [38]. Only 5.5% of the BitChute videos managed to attract more than 10,000 views and are thus comparable in popularity to the YouTube content.

Titles and Descriptions
We analyzed the titles of the BitChute videos and extracted parts of speech using the spaCy library. The average number of tokens per BitChute title is 8.2, of which 2.1 are nouns, 0.7 verbs, 0.36 numbers, 1.85 pronouns and 0.82 punctuation, with the rest being other parts of speech, such as adjectives, auxiliaries, etc. Nouns dominate, as expected, while verb usage in BitChute titles is limited. Notably, pronouns are used almost as much as nouns. In general, there is evidence that possessive pronouns (e.g., "my"), Wh-determiners (e.g., "which") and Wh-adverbs (e.g., "where") occur more often in clickbait headlines than in legitimate news headlines, creating a curiosity gap by asking questions that entice users to click in order to satisfy their need for information [80].
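The per-tag averages can be reproduced from spaCy's output with a simple tally. The sketch below assumes each title has already been tagged (e.g., via spaCy's `Token.pos_`) and is represented as a list of (token, POS) pairs; the function name is ours:

```python
from collections import Counter

def average_pos_counts(tagged_titles):
    """Average number of tokens per part-of-speech tag across titles.
    Each title is a list of (token, pos_tag) pairs."""
    totals = Counter()
    for title in tagged_titles:
        totals.update(pos for _, pos in title)
    n = len(tagged_titles)
    return {pos: count / n for pos, count in totals.items()}
```

Running this over all 3994 titles gives the per-tag averages reported above (8.2 tokens per title overall, 2.1 of them nouns, and so on).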
Additionally, we applied the aforementioned spaCy language-detection model to the BitChute titles; similar to the Oxford dataset, English dominates (≈72% of the titles), followed by German (≈15%) and Spanish (≈2%).
For the video descriptions, the average number of tokens is 128; descriptions are typically much longer than titles. As a next step, we searched for keywords referring to content take-down (deleted, banned, removed) to investigate whether BitChute users informed viewers that the video had been deleted from other platforms. This resulted in only 99 video descriptions with the keyword 'deleted', 330 with the keyword 'banned' and 153 with the keyword 'removed', corresponding to 2.5%, 8.3% and 3.9% of the videos, respectively. Example sentences that appear in video descriptions and contain the above keywords include: 'Deleted YouTube video of a German Journalist visiting a "hospital with coronavirus patients", but finds no one there', 'This video is VERY BIG and it's getting deleted in YouTube. Make sure to download it and spread it around.' and 'David Icke's explosive interview with London Real, which sparked controversy after being banned from YouTube and Vimeo.' We calculated the average number of likes for the videos containing the keyword 'deleted' in their description and compared it with the rest. The average number of likes for those containing the keyword is 54, while the average over 10 random subsets of the same number of videos (99) without the keyword in their description was much lower, at 24.6. Similarly, for the 330 videos containing the keyword 'banned', the average number of likes was 42, while for 10 random subsets it was 25.5. A different behavior appears for the keyword 'removed', where the average number of likes for videos containing the keyword and for 10 random samples was quite close, at 24.5 and 22, respectively.
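The keyword-versus-baseline comparison can be sketched as below. We assume each video is represented as a dict with 'description' and 'likes' fields; the function name and field names are ours, and the baseline averages over 10 random same-sized subsets, as in the analysis above:

```python
import random
from statistics import mean

def keyword_like_gap(videos, keyword, n_samples=10, seed=0):
    """Average likes of videos whose description contains `keyword`,
    versus the average over random, same-sized subsets of the rest."""
    with_kw, without = [], []
    for v in videos:
        (with_kw if keyword in v["description"].lower() else without).append(v)
    kw_avg = mean(v["likes"] for v in with_kw)
    rng = random.Random(seed)  # fixed seed for reproducibility
    baseline = mean(
        mean(v["likes"] for v in rng.sample(without, len(with_kw)))
        for _ in range(n_samples)
    )
    return kw_avg, baseline
```

Sampling equal-sized subsets rather than averaging over all remaining videos keeps the two averages comparable despite the very different group sizes.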

Categories
According to Figure 6, the dominant video category is 'None', followed by 'News and Politics' and 'Education'. The category is assigned by the user before sharing the video on the platform. Although the BitChute platform offers a 'Health and Medical' category, users tend not to select it, instead assigning COVID-related videos to 'None' or other unrelated categories.

BitChute and IDs
The BitChute platform provides users with functionality that automatically mirrors YouTube videos to BitChute once the user provides their YouTube channel ID. In this case, videos from YouTube are shared on BitChute under the same video ID. In the created dataset, there are only 93 videos with identical video IDs on BitChute and YouTube. We cannot determine whether users do not take advantage of this feature because they are unaware of it or because they prefer not to reveal the source of the video.

Cross-Platform Diffusion
We also applied cross-platform search to investigate the diffusion of BitChute videos on mainstream social media platforms. We selected Facebook and Twitter, the most popular platforms accessible through APIs. We used the Twitter Academic search API to retrieve historical tweets containing BitChute videos, and CrowdTangle for Facebook posts, searching with the keyword 'Bitchute'. Although the number of BitChute videos shared through both platforms is large, the number of videos from the created dataset shared through these platforms is limited. Specifically, we retrieved only 285 BitChute URLs from the created dataset in the 2,057,195 tweets sharing a BitChute video URL (0.01%) and, similarly, 52 BitChute video URLs from the 28,867 Facebook posts (0.2%). We need to note that, since these videos are already banned as misinformation-related, there might have been tweets and Facebook posts sharing them that were removed by the platforms. The authors of [81] categorized YouTube and BitChute content shared on Twitter during U.S. election fraud discussions. They found that, even accounting for the popularity and size of each platform, far more YouTube than BitChute content is shared through Twitter; however, they concluded that the two platforms' content varies drastically in terms of topic.
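The overlap measurement behind these percentages reduces to a set lookup over the shared URLs; a minimal sketch (names ours):

```python
def dataset_share_rate(dataset_urls, shared_urls):
    """Percentage of collected posts/tweets whose video URL belongs to the
    dataset, plus the matching URLs themselves."""
    dataset = set(dataset_urls)
    matches = [u for u in shared_urls if u in dataset]
    rate = 100.0 * len(matches) / len(shared_urls) if shared_urls else 0.0
    return rate, matches
```

In practice, BitChute URLs would first be normalized (scheme, trailing slashes, tracking parameters) so that equivalent links compare equal.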

Comments
Just 670 of the 3994 BitChute videos of the dataset have at least one comment (16.8%). In total, 5919 comments were posted on 670 BitChute videos, and for 311 of them, we retrieved only one comment. The highest number of comments collected for a video was 403.
We analyzed the collected comments using an emotion-detection method [82], since related studies point out that misinformation can spread rapidly when a group falls into negative emotions, such as anxiety. In general, there is evidence that viewers tend to watch and like negative videos more than positive videos on YouTube [50,51]. The comments are classified into five emotion categories (happy, sad, fear, angry and surprise) with a score in the range between 0 and 1, where a higher score indicates that the emotion is more present in the comment text. As Figure 7 shows, the negative emotion of fear dominates in the video comments. Table 4 presents examples of positive and negative emotions. Additionally, we applied a toxicity detection library (https://github.com/unitaryai/detoxify) that detects five categories: toxicity, obscene, insult, threat and attack. A score in the range of 0 to 1 was calculated for each category. In Figure 8, we observe that comments mainly contain toxicity (e.g., ' year later we are still putting up with this crap. Humanity is lost.') and, to a lesser extent, insult (e.g., 'Stupid MSM reporters cannot think for themselves, much less investigate anything.'). Obscenities, threats and attacks appear at a very low rate.
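The per-comment scores can be aggregated into an emotion distribution by taking each comment's dominant emotion. The sketch below assumes the score dictionaries have already been produced by an emotion classifier such as the one in [82]; the scores themselves are invented for illustration.

```python
from collections import Counter

# Hypothetical aggregation step: given per-comment emotion scores in [0, 1]
# (assumed to come from an emotion classifier; the values below are invented),
# assign each comment its dominant emotion and tally the distribution.
EMOTIONS = ("happy", "sad", "fear", "angry", "surprise")

def dominant_emotions(comment_scores):
    """Count how often each emotion is the highest-scoring one per comment."""
    counts = Counter()
    for scores in comment_scores:
        counts[max(EMOTIONS, key=lambda e: scores[e])] += 1
    return counts

comments = [
    {"happy": 0.1, "sad": 0.2, "fear": 0.8, "angry": 0.3, "surprise": 0.0},
    {"happy": 0.7, "sad": 0.1, "fear": 0.1, "angry": 0.0, "surprise": 0.2},
    {"happy": 0.0, "sad": 0.3, "fear": 0.9, "angry": 0.6, "surprise": 0.1},
]
counts = dominant_emotions(comments)
print(counts["fear"], counts["happy"])  # 2 1
```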

Channels
In addition to the analysis conducted on the collected videos, we also studied the channels sharing the BitChute videos. We collected the channel names of 3636 videos and found 1708 unique channels sharing those videos. A quarter of the channels have fewer than 10 subscribers, while the top 10 popular channels have more than 74,000 subscribers. In Table 5, we list the top 10 popular channels with their number of subscribers and the number of videos from the collected dataset they shared.
We then used the 'sherlock' (https://github.com/sherlock-project/sherlock) Python library, which searches for accounts with a given username across social networks (https://github.com/sherlock-project/sherlock/blob/master/sites.md), including mainstream platforms such as Facebook and Twitter. YouTube and Odysee are missing, however, since they are not among the supported sites. Assuming that a user selects the same username across different platforms, which is common, we retrieved the platforms on which the channels sharing the BitChute videos also have accounts. We need to keep in mind that, although it is typical for users to reuse a username on different platforms, there are cases in which different users appear with the same username. One such case is the username 'thecrowhouse', an influential account on BitChute (Table 5); we manually inspected the account on both BitChute and Facebook and concluded that the two accounts were most likely created by different users who coincidentally selected the same username. We manually inspected all accounts in Table 5. Three of these channels do not appear on Facebook (or sherlock could not retrieve the Facebook accounts). For six of them, sherlock retrieved a Facebook account with an identical username, but manual inspection showed that these accounts are no longer available on the platform. We cannot be sure whether the users deleted their Facebook accounts or Facebook removed them, but in either case these accounts might be considered suspicious. The account with username 'styxhexenhammer666' appears on both BitChute and Dailymotion (https://www.dailymotion.com/) and shares similar content. In Figure 9, we present the top 20 platforms on which the channels appear. Facebook is in the fourth position, with almost half of the channels having a matching Facebook account, while Twitter appears in a lower position.
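The platform ranking described above can be sketched as counting, per platform, the number of channels for which a matching username was found. The sketch assumes sherlock's per-channel hit lists have already been collected; all channel names and platform lists below are invented.

```python
from collections import Counter

# Hypothetical sketch: given sherlock-style results mapping each channel
# name to the platforms where that username was found (the data below is
# invented), rank platforms by how many channels have a matching account.

def top_platforms(results, n=3):
    """Return the n platforms on which the most channels have accounts."""
    counts = Counter()
    for platforms in results.values():
        counts.update(set(platforms))  # count each platform once per channel
    return counts.most_common(n)

results = {
    "channel_a": ["Quora", "Facebook", "ICQ"],
    "channel_b": ["Quora", "Twitter"],
    "channel_c": ["Facebook", "Quora"],
}
print(top_platforms(results))  # Quora first, with 3 matching channels
```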
Quora (https://www.quora.com/), Y Combinator (https://www.ycombinator.com/) and ICQ (https://icq.com/mobile/en) occupy the top three positions. These platforms have not been associated with misinformation, and research on inaccurate content on such platforms remains limited; one exception is a recent case study [83] that focused on content posted on Quora to evaluate different mechanisms and algorithms for filtering insincere and spam content.

Comparison with MeLa Dataset
We also made use of the MeLa dataset, which the authors in [5] claim to be a near-complete dataset (June 2019 to December 2021), and retrieved 9958 videos containing one or more of the keywords 'covid', 'coronavirus', 'covid19' and 'covid-19' in the title and 74,948 in the description. These videos can be considered a complementary COVID-related misinformation dataset, since videos on BitChute often depict misinformation cases. The collected dataset accounts for 7% of the COVID-related videos appearing in the MeLa dataset and, therefore, on the platform for the period covered by the dataset.

Odysee Analysis
In this section, we present the analysis of the videos retrieved from the Odysee platform, addressing RQ2, RQ3 and RQ4.

Temporal Distribution
We created a timeline showing the posting time of the Odysee matches relative to the YouTube videos. Each line corresponds to one YouTube video and its matched Odysee duplicates, and the horizontal axis corresponds to the time between the posting of the YouTube video and that of its Odysee counterparts (Figure 10); note that Odysee does not provide the exact time (hour, minutes) but only the day of posting. Due to missing metadata, we examined 162 YouTube videos and their 966 Odysee counterparts. In total, 35.4% of these Odysee videos were posted within a day of the YouTube video, and 27.8% of the Odysee duplicates were shared between one day and one week after it. Similar to BitChute, more than half of the matched Odysee videos appeared on the platform within a week of first appearing on YouTube.
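Because Odysee exposes only the posting day, the delay between a YouTube video and its Odysee duplicate can only be bucketed at day granularity. The sketch below illustrates this bucketing; the dates are invented examples, not dataset values.

```python
from datetime import date

# Hypothetical sketch of the delay bucketing: Odysee provides only the day
# of posting, so delays are computed and bucketed at day granularity.
# All dates below are invented for illustration.

def delay_bucket(youtube_day, odysee_day):
    """Bucket the posting delay between a YouTube video and its duplicate."""
    days = (odysee_day - youtube_day).days
    if days <= 1:
        return "within a day"
    if days <= 7:
        return "one day to a week"
    return "more than a week"

pairs = [
    (date(2021, 3, 1), date(2021, 3, 1)),   # mirrored the same day
    (date(2021, 3, 1), date(2021, 3, 4)),   # three days later
    (date(2021, 3, 1), date(2021, 4, 1)),   # a month later
]
for yt, od in pairs:
    print(delay_bucket(yt, od))
```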

Video Interactions
Odysee videos receive few interactions. We collected the number of views for each Odysee video and present them in a histogram plot in Figure 11. Most videos have a low number of views (<200), while only a few videos approach 1000 views. While misinformation-bearing videos often gain much popularity on mainstream platforms, it seems that Odysee has only a small user community for now. Concerning the like-dislike reactions on the videos, Figure 12 shows that there are few reactions on the collected videos. Likes prevail over dislikes by a wide margin: only 3.5% of the videos have at least one dislike reaction, while 37% of the videos are liked. The number of likes per video is, in most cases, less than 10.

Video Titles and Descriptions
We applied the aforementioned language detection model to the Odysee video titles. Similar to the Oxford dataset, English dominates (≈67% of titles). The second most frequent language is German (≈12%), and Spanish is the third language detected in Odysee videos (≈4.5%).
We searched for keywords referring to deletion (deleted, banned and removed) to investigate whether Odysee users informed viewers that the video had been deleted from other platforms. Of the 1810 processed descriptions, we found that only 34 (1.9%) contain the keyword 'deleted', 3.1% the keyword 'banned' and 2.8% the keyword 'removed'.
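This keyword scan amounts to a case-insensitive substring search over the descriptions. The sketch below is a minimal illustration; the example descriptions are invented.

```python
# Hypothetical sketch of the deletion-keyword scan over video descriptions.
# The keyword list matches the one used in the text; the example
# descriptions below are invented for illustration.
KEYWORDS = ("deleted", "banned", "removed")

def keyword_rates(descriptions):
    """Return, per keyword, the percentage of descriptions containing it."""
    total = len(descriptions)
    return {
        kw: sum(kw in d.lower() for d in descriptions) / total * 100
        for kw in KEYWORDS
    }

descriptions = [
    "This video was DELETED from YouTube, mirror here.",
    "Banned everywhere else.",
    "Ordinary description with no such keyword.",
    "Removed by the censors, reuploaded.",
]
rates = keyword_rates(descriptions)
print(rates["deleted"])  # 25.0
```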

Cross-Platform Diffusion
Similarly to BitChute, we used the Twitter API and CrowdTangle to search for Odysee videos in tweets and Facebook posts. Concerning Twitter, we collected 868,110 tweets with the keyword 'Odysee'. We then compared the URLs shared through these tweets with the collected Odysee videos and found that only 178 videos of the collected dataset appeared in these tweets (0.02% of the tweets). We need to consider here that the number of tweets may have been higher, as some could have been removed by the time of searching. With respect to Facebook, we collected only a small number of posts (∼5000) containing the keyword 'Odysee', indicating that Facebook users do not disseminate content coming from the Odysee platform. None of the collected Odysee videos appears in these posts.

Comments
We observed that Odysee users comment very little on the videos shared on the platform. Only 12% of the collected videos have at least one comment, and in total, 644 comments were posted on the videos of our collection. We analyzed the comments in the same way as the BitChute comments, using the emotion detection and toxicity detection approaches. It is noteworthy that Odysee's comments carry more positive emotions (Figure 13). Additionally, the toxicity detection (Figure 14) reveals that user feedback is not as offensive as in the case of BitChute.

Channels
The collected videos were shared by 765 unique channels. We collected the number of followers for each channel and noticed that 19 channels have no followers, 16% have fewer than 10 followers, and the most popular channel has 129,000 followers. Additionally, we applied 'sherlock' to search for the Odysee accounts on other platforms. Similarly to BitChute, we present the top 20 platforms on which the users appear to have an account in Figure 15. Facebook and Twitter are listed in the top 20, which shows that Odysee users also have accounts on mainstream social media platforms.

Discussion
This paper provides evidence on the diffusion of COVID-related disinformation videos from YouTube, a mainstream video platform, to BitChute and Odysee, which are low content-moderation platforms. Starting from a dataset of 7220 videos that were banned from YouTube after being flagged as spreading COVID-19 misinformation, we collected a dataset of 4096 BitChute and 1810 Odysee videos.
Our analysis indicates that COVID-related misinformation videos on YouTube are shared on low content-moderation platforms soon after their original posting. The analysis showed that, for both BitChute and Odysee, most videos were shared within a week of their upload on YouTube. One might hypothesize that users are aware that their content will not have a long lifespan on the mainstream platforms and therefore choose to post it on platforms with low content moderation. This finding is also in line with another study, which shows that the number of shares of the "Plandemic" video on BitChute peaked just one day after the video was banned from YouTube [57].
As for the popularity of migrated videos on BitChute and Odysee, the number of views on both platforms is relatively low. The number of views and other viewer interaction metrics are also used in other studies of misleading health-information videos on online video platforms such as YouTube [37]. However, the extremely low number of views of the videos in our study does not permit the study of more complex correlations, as in [78]. Furthermore, we analyzed the numbers of likes and dislikes, which are also relatively low on both platforms. Yet, it is worth mentioning that BitChute videos featuring the keywords "banned" or "deleted" in the title tend to gather more likes, which agrees with previous work on YouTube that found a tendency for negative-tone videos to be liked more by viewers than videos with a positive or ambiguous tone [50].
Finally, emotion detection has been investigated in relation to the spread of misinformation. The authors in [84] proposed an emotion-based misinformation detection framework that learns content- and comment-emotion representations. They found that the proportion of anger in fake news content is 9% higher than in real news, while the percentage of happiness is 8% lower. In our analysis, we detected more negative emotions, especially fear, on BitChute than on Odysee, where we found more positive emotions.

Conclusions
Given the increasing effort by mainstream platforms to moderate misinformation, malicious actors are looking for alternative low content-moderation platforms to disseminate misleading content. In this work, we retrieved COVID-related misinformation videos that had been shared on and removed from YouTube and reappeared on BitChute and Odysee, creating and analyzing a dataset of 5906 COVID-related misinformation videos. In the literature, there is limited investigation of BitChute and Odysee, and this work provides initial findings on the characteristics of misinformation dissemination through these platforms. Misinformation shared through low content-moderation platforms seems to follow the same practices used on mainstream social media platforms. Our findings reveal that these platforms receive misleading videos and provide a space that allows them to remain online. Although our analysis has shown little user engagement on these platforms (at least in comparison with mainstream platforms), the large amount of misleading content hosted on them could pose a risk in the future. Future research should focus on broader categories of misinformation, beyond the coronavirus, and on the dissemination of misinformation from platforms other than YouTube, such as Facebook and Twitter. In addition, future work should investigate the characteristics of disinformation spread through these platforms to derive features for developing automated methods to detect misleading videos. Finally, future research could focus on the analysis of channels that publish misleading videos.

Data Availability Statement:
The data will be available upon request.

Conflicts of Interest:
The authors declare no conflict of interest.