3. Public Health Policy Monitoring
In this section, we present the Public Health Policy Perception Monitoring component together with experimental results. The section is organized as follows. In Section 3.1, we introduce the source and collection of the dataset. In Section 3.2, we present three sentiment analysis tools and justify our choice of one of them based on a human evaluation. In Section 3.3, we discuss the formula used to compute concern levels. In Section 3.4, we present the visualization of concern level trends for each policy. In Section 3.5, we analyze the changes in concern levels over time and assess the significance of these changes. In Section 3.6, we investigate the relationship between concern levels and pandemic progress during our study period. In Section 3.7, we visualize the monthly concern levels for each policy in each US state and present this information on geographical maps.
3.2. Sentiment Analysis for Public Concern Levels
In order to track public perceptions of government health policies during the pandemic, we applied a sentiment analysis tool to each tweet. A number of sentiment analysis tools are described in the literature; we considered VADER [28], the Stanford Sentiment Analyzer [41], and TextBlob [42]. To decide which sentiment analysis tool to use, we performed three initial experiments. First, we conducted a human evaluation of a sample of 100 randomly collected tweets regarding COVID-19 [43]. Three research group members participated in this evaluation. Each member worked independently to label every tweet with 0 for “Negative”, 2 for “Neutral”, or 4 for “Positive” sentiment. We combined the human labels into a single result by majority vote.
Second, to quantify the degree of agreement among the three sets of human-labeled sentiments, we computed the inter-rater reliability using Krippendorff’s alpha [44]. We then compared the sentiments of the combined human result with those generated by the Stanford Sentiment Analyzer, VADER, and TextBlob. Of the 100 tweets, 91 had a human majority vote, while there was no agreement for the remaining 9. Among these 91 tweets, TextBlob agreed with the combined human result on 51, VADER on 58, and the Stanford Sentiment Analyzer on 43. Therefore, relative to our human evaluation, the accuracy of TextBlob is 56% (51/91), that of VADER is 63.7% (58/91), and that of the Stanford Sentiment Analyzer is 47.2% (43/91). We used an online calculator [45] to obtain the value of alpha, which came to 0.485. Third, according to [46], data with an alpha < 0.667 should be discarded, which meant that our human evaluation could not be used to select a tool. Based on prior studies, the Stanford Sentiment Analyzer has the best accuracy among the three, at 80.7% [41], while VADER has an accuracy of 76.8% and TextBlob of 68.8% [47]. Thus, we chose the Stanford Sentiment Analyzer.
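The majority-vote labeling and per-tool agreement computation described above can be sketched as follows. This is a minimal Python illustration using the paper’s 0/2/4 label scheme; the function names are ours, not from the authors’ code:

```python
from collections import Counter

def majority_vote(labels):
    """Return the majority sentiment among the labelers, or None if all
    three disagree. Labels follow the paper's scheme:
    0 = Negative, 2 = Neutral, 4 = Positive."""
    winner, count = Counter(labels).most_common(1)[0]
    return winner if count > 1 else None

def tool_accuracy(human_rows, tool_labels):
    """Accuracy of a tool against the human majority, skipping tweets
    with no majority (9 of the 100 tweets in the paper's sample)."""
    agreed = kept = 0
    for row, tool in zip(human_rows, tool_labels):
        gold = majority_vote(row)
        if gold is None:
            continue  # no human majority: tweet excluded from the denominator
        kept += 1
        agreed += (tool == gold)
    return agreed / kept
```

With the paper’s numbers, `tool_accuracy` reproduces ratios such as 58/91 for VADER.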
The Stanford Sentiment Analyzer uses a fine-grained analysis based on both words and labeled phrasal parse trees to train a Recursive Neural Tensor Network (RNTN) model. The RNTN approach is shown in Figure 1. To derive the sentiment p2 of a phrase, the RNTN uses a compositionality function g(.) to compute parent node vectors from child node vectors (Figure 1). To compute the sentiment p1 of a sub-phrase, it applies g(.) to the vectors of its children b and c. For each node (a, b, and c), the analyzer uses distinct features to classify its sentiment. Therefore, the sentiment computed by the RNTN model is based on (1) the sentiment values of the individual words and (2) the sentiment of the parse-tree structure composed from the sentiment values of words and sub-phrases. These characteristics enable the model to capture sophisticated phenomena, such as sarcasm and negation, expressed in input phrases.
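The recursive composition can be illustrated schematically. The sketch below uses a toy vector dimensionality and random weights standing in for trained RNTN parameters; it is not the Stanford implementation, only a demonstration of how g(.) builds p1 = g(b, c) and p2 = g(a, p1) before a softmax-style classifier assigns one of the five sentiment classes:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 4  # toy word-vector dimensionality (illustrative only)

# Hypothetical parameters standing in for the trained RNTN weights.
V = rng.normal(size=(D, 2 * D, 2 * D)) * 0.01  # tensor term
W = rng.normal(size=(D, 2 * D)) * 0.01         # linear term
Ws = rng.normal(size=(5, D)) * 0.01            # five-class sentiment classifier

def g(left, right):
    """Compositionality function: parent vector from two child vectors."""
    bc = np.concatenate([left, right])
    tensor_part = np.array([bc @ V[k] @ bc for k in range(D)])
    return np.tanh(tensor_part + W @ bc)

def sentiment(vec):
    """Classify a node vector into one of the five sentiment classes,
    0 = Very Negative ... 4 = Very Positive."""
    return int(np.argmax(Ws @ vec))

# Phrase structured as (a (b c)), as in Figure 1:
a, b, c = (rng.normal(size=D) for _ in range(3))
p1 = g(b, c)   # sentiment vector of the sub-phrase (b c)
p2 = g(a, p1)  # sentiment vector of the whole phrase
```

Because every node, not just the root, is classified, the model can assign different sentiments to a sub-phrase and to the phrase containing it, which is what allows negation to flip the overall label.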
Each input phrase is classified as belonging to one of five sentiment classes: “Very Negative”, “Negative”, “Neutral”, “Positive”, or “Very Positive”. Table 2 shows examples of dataset tweets classified by the Stanford Sentiment Analyzer. For our study, we used only three sentiment classes. A positive sentiment was assigned to a tweet if it was classified as positive or very positive by the Stanford Sentiment Analyzer. Similarly, a tweet was assigned a negative sentiment when it was classified as either negative or very negative. A neutral tweet remained neutral.
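The collapse from five classes to three can be expressed as a simple mapping (a sketch; the label strings follow the paper’s wording):

```python
# Collapse the analyzer's five output classes onto the three classes
# used in the study.
FIVE_TO_THREE = {
    "Very Negative": "Negative",
    "Negative": "Negative",
    "Neutral": "Neutral",
    "Positive": "Positive",
    "Very Positive": "Positive",
}

def collapse(label):
    """Map a five-class sentiment label to the study's three-class scheme."""
    return FIVE_TO_THREE[label]
```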
3.7. Public Concern Map by Policy
Over the whole period of data collection, we observed that concern levels in states with large populations tended to fluctuate less than those in states with small populations. For example, in Wyoming, which has a population of 578,759, the concern levels between August 2020 and May 2021 varied widely: 0.91, 0.80, 0.82, 0.88, 0.83, 0.76, 0.97, 0.87, 0.76, and 0.93. However, these values were computed from small numbers of tweets. For example, the concern level of 0.93 for May 2021 is based on only 25 negative tweets. In comparison, for the same month, the concern level of California, which has 39.5 million residents, was 0.83, based on 3963 negative tweets. Texas, with 29.2 million people, had a concern level comparable to California’s, 0.84, based on 3712 negative tweets. The concern level of 0.97 in Wyoming, recorded in February 2021, is the highest observed in any state and any category. The lowest concern level, 0.68, was recorded in Minnesota for March 2021. Even though Minnesota, with 5.7 million residents, is roughly ten times more populous than Wyoming, this result was based on only 69 negative tweets. All these numbers are for “COVID-19 (General)”; for individual policies, values are notably lower.
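The instability of small-state values follows from basic sampling statistics. If a concern level is treated as a proportion estimated from the underlying tweet counts (our modeling assumption here, not the paper’s formula from Section 3.3), its standard error shrinks with the number of tweets:

```python
import math

def concern_se(p, n):
    """Approximate standard error of a concern level modeled as a
    proportion estimated from n underlying tweets (binomial model).
    Illustrative only: the paper's concern-level formula is not
    reproduced here."""
    return math.sqrt(p * (1 - p) / n)

# Wyoming's May 2021 value rests on far fewer tweets than California's,
# so it is expected to swing much more from month to month.
wyoming_se = concern_se(0.93, 25)       # small sample, large uncertainty
california_se = concern_se(0.83, 3963)  # large sample, small uncertainty
```

Under this rough model, Wyoming’s estimate carries several times the uncertainty of California’s, consistent with the wide monthly swings reported above.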
For concerns about masks (“Face Coverings”), Idaho and New Hampshire were at the low end of the spectrum in May 2021, at 0.57 and 0.43, respectively. However, Wyoming, a neighboring state of Idaho, had a concern level at the high end of its range, with a value of 0.86, based on 18 negative tweets. From the stored data, a concern map can be computed for each policy and each month. The concern map in Figure 4 illustrates the concern level of each state regarding “Face Coverings” in May 2021.
For Economic Impact in April 2021, Mississippi had the highest concern level countrywide, at 0.85, albeit based on only 118 negative tweets. South Dakota and North Dakota had the lowest concern levels about the economy, at 0.65 and 0.70, respectively. We could not discern any obvious regional patterns.
We expected that concern maps for “Economic Impact” would have clearly differentiated red (Republican) and blue (Democrat) states. Given the instability of values for small states, we chose California as a representative of the blue states and Texas as the prototypical red state. We then compared concern levels for all ten months. However, the results contradicted our expectations. First, we noticed that values were relatively stable throughout the period. In California, the values varied from 0.75 to 0.79. For Texas, the values were between 0.76 and 0.79. Furthermore, the absolute value of the difference in each month between the two states was never larger than 0.02.
One of our expectations was narrowly fulfilled. We hypothesized that “high-tech states” produce more tweet activity than rural states. We summed the total number of tweets in each of the ten largest states by population and set these totals in relation to the number of citizens, as shown in Table 6. New York and California, the home states of many computing startups, lead the pack at 19% and 14%, respectively. The other large states vary between 9% and 13%, with Texas, Illinois, and Pennsylvania coming close to California.
6. Conclusions
In this work, we have developed a Public Health Policy Perception Monitoring and Awareness Platform that tracks citizens’ concern level trends about ten different public health policies during the COVID-19 pandemic. Our tools can also enhance the public understanding of government health policies.
The concern level tracking revealed that “COVID-19 (General)” and “Ventilators” engendered the highest concern levels, while the “Face Coverings” policy caused the lowest concern levels during the data collection period. This answers the first research question we raised in the Introduction.
Between the first week and the last week of our data period, the concern levels about “Economic Impact”, “COVID-19 (General)”, “Face Coverings”, “Quarantine”, “School Closing”, and “Testing” demonstrated negative changes, while “Business Closing”, “Contact Tracing”, “Social Distancing”, and “Ventilators” had positive changes. Moreover, all changes except for “Business Closing” were significant. This addresses the second research question raised in the Introduction.
Throughout the study period, the concern level of “COVID-19 (General)” stayed relatively stable, even though the trends of both infections and deaths were notably fluctuating. Therefore, no meaningful correlation between the pandemic progress and the concern levels could be identified. This provides an answer to our third research question.
We expected to see a clear difference in concern levels regarding the economy that would reflect the political divide between politically red (Republican) and blue (Democrat) states. However, our experiments showed no evidence supporting this hypothesis. This provides some insights about our fourth question.
We reviewed publicly available policies and local actions in 23 distinct policy areas, comprising 5295 pandemic-related policies [50]. Policies for Prevention/Flattening the Curve occurred most often (947), including rules for face coverings, quarantine/self-isolation, social distancing, COVID-19 testing, ventilators, etc. The second most frequent policies dealt with Government Operations, totaling 631, including policies related to emergency services operations, first responders, and frontline medical workers across the country. The Housing policy category, with a total count of 521, ranked third. This provides answers to our fifth research question.
The policy awareness tool, however, demonstrated regional differences for the analyzed policies, such as housing-focused policies in California compared to transit/mobility-focused policies in New York. This constitutes an example answer for our sixth research question.
The summary of the housing policies in Table 7 shows that the California government is providing more funds to help local people, such as tenants and the homeless, ameliorate the impact of the pandemic. The assistance from the government includes rental reimbursement, extended eviction protection, and accommodations for the homeless. This limited answer to our seventh research question should be extended in the future.
Our reported results are based on a specific period and concentrate on concern levels and government policies. However, the continuous monitoring capabilities of our system can be used to observe temporal trends and geographic distributions in policy perception. Thus, our proposed platform can provide public perceptions as a near real-time feedback mechanism for policymakers and evaluators to appraise each health policy.
7. Future Work
A limitation of the presented work is the lack of an analysis of concern levels about COVID-19 vaccines. At the beginning of this study, vaccines did not yet exist. With the wide availability of the Pfizer, Moderna, and other vaccines, the new phenomenon of “political” and personal resistance to vaccine policies has arisen. Thus, unexpectedly to us, vaccines were not universally welcomed by the population. A simple recording of concern levels about vaccines does not distinguish between citizens concerned about getting access to the shots and citizens concerned about the negative effects of the vaccines. We plan to perform a more fine-grained analysis of the available social network data about vaccines.
We were surprised to see that steep rises in infections and deaths had no discernible effect on the concern levels about different policies expressed by Twitter-active netizens. This phenomenon could be a topic of an in-depth future investigation and will require the addition of a sociologist to the team. It would also be interesting to see whether other “sudden, unpredictable events” (what Nassim Taleb has called “Black Swan Events”) [59] will also have no measurable effects on the concern levels of Twitter users. For example, did an overnight spike in oil prices (caused by the Ukraine conflict) cause more concern than a spike in COVID-19 deaths? There are several ways the concepts embodied in this paper could be extended, both in granularity and in generality. Granularity could be increased by refining the location tagging of tweets, for example, to the county, city, or borough level. This would enable us to better identify concern levels before and after specific governmental actions at the corresponding geographical level, such as the imposition of a quarantine order in a specific area. The research could also be expanded to incorporate broader (e.g., international) geographic areas and languages.
As noted, the Stanford Sentiment Analyzer does not easily support the analysis of several aspects within the same tweet. We propose to perform a multipolarity analysis based on the different aspects identified in the tweets. We plan to apply sentence tokenization and embedding to achieve sentence-level topic modeling. Thus, each policy could be further categorized; for example, “Vaccine Policy” could be subdivided into “Pfizer”, “Moderna”, “J & J”, etc. By combining these refinements with the Sentiment Analyzer, we would see how citizens react to specific vaccines rather than to the whole “Vaccine Policy”. Bonifazi et al. [60] found that anti-vaxxers tend to have denser and more cohesive ego networks than pro-vaxxers, with four times as many interactions. We propose to analyze and compare the networks among pro-vaxxers, especially when they have different vaccine preferences.
One of our goals in this work is to provide actionable data and real-time analysis results to policymakers. When concern levels in the population rise to high values, a policy change might be called for. However, as we pointed out, the concern levels in a small state are less alarming than those in one of the mega states (e.g., California, Texas, Florida, New York, or Illinois) because they are based on only a few tweets. Thus, a new metric along the lines of a “concern impact level” that incorporates both the concern level and the population size would be useful. We intend to investigate possible formulas for a concern impact level in future work.
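As a starting point for that investigation, one hypothetical candidate formula (entirely our illustration, not a result of the paper) could weight the concern level by relative log-population, so that high levels backed by tiny tweet volumes raise fewer alarms:

```python
import math

def concern_impact(concern_level, population, ref_population=39.5e6):
    """One candidate 'concern impact level': scale the raw concern level
    by log-population relative to the most populous state (California,
    39.5 million). Purely illustrative; the paper leaves the choice of
    formula to future work."""
    weight = math.log1p(population) / math.log1p(ref_population)
    return concern_level * weight
```

Under this candidate formula, Wyoming’s 0.93 (population 578,759) yields a lower impact than California’s 0.83, matching the intuition stated above; whether a log weighting is the right dampening factor is exactly the kind of question the planned investigation would address.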
Finally, the concepts in this paper could be applied to understand citizens’ concerns about entirely different public events and policy actions. Our concern level tracking system could be easily repurposed to analyze tweets for sentiments regarding other policies, e.g., immigration rules, tax cuts, congestion pricing in New York City, or funding for interplanetary space travel. The possibilities are as broad as the range of government actions and natural events causing them. We plan to expand this dynamic policy data collection capability by allowing users to select new policies to be tracked or by automatically identifying emerging policies to monitor. This will make the platform adjustable for future crisis events.
Author Contributions
Conceptualization, S.A.C. and J.G.; Data curation, C.-y.L., M.R. and F.Y.; Formal analysis, C.-y.L.; Investigation, C.-y.L., J.G. and S.A.C.; Methodology, C.-y.L. and S.A.C.; Software, M.R. and F.Y.; Supervision, J.G. and S.A.C.; Visualization, C.-y.L., M.R. and F.Y.; Writing—original draft, C.-y.L., M.R. and S.A.C.; Writing—review and editing, J.G. All authors have read and agreed to the published version of the manuscript.
Funding
Research reported in this publication was supported by the National Center for Advancing Translational Sciences (NCATS), a component of the National Institutes of Health (NIH), under award number UL1TR003017. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health. This research was partially funded by the National Research Foundation of Korea (NRF-2017S1A3A2066084).
Acknowledgments
We thank Sasikala Vasudevan for her work on the policy maps.
Conflicts of Interest
We declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
References
- The World Health Organization (WHO). Characterizes COVID-19 as a Pandemic. 2020. Available online: https://www.who.int/emergencies/diseases/novel-coronavirus-2019/events-as-they-happen (accessed on 3 January 2022).
- Agence France-Presse. China Locks down City of 9 Million and Reports 4,000 Cases as Omicron Tests Zero-Covid Strategy. The Guardian, 2022. Available online: https://www.theguardian.com/world/2022/mar/22/china-locks-down-city-of-9-million-and-reports-4000-cases-as-omicron-tests-zero-covid-strategy (accessed on 30 March 2022).
- Tsui, K. Shanghai Ikea Lockdown Sparks Scramble as China Enforces ‘Zero COVID’. Washington Post, 2022. Available online: https://www.washingtonpost.com/world/2022/08/15/shanghai-ikea-lockdown-china-zero-covid/ (accessed on 20 August 2022).
- Coen, D.; Kreienkamp, J.; Tokhi, A.; Pegram, T. Making global public policy work: A survey of international organization effectiveness. Glob. Policy 2022, 13, 1–13. [Google Scholar] [CrossRef]
- COVID-19 Dashboard by the Center for Systems Science and Engineering (CSSE) at Johns Hopkins University (JHU). Available online: https://coronavirus.jhu.edu/map.html (accessed on 8 March 2021).
- Coronavirus in the U.S.: Latest Map and Case Count. Available online: https://www.nytimes.com/interactive/2020/us/coronavirus-us-cases.html?action=click&module=Spotlight&pgtype=Homepage (accessed on 8 March 2021).
- New Jersey COVID-19 Information Hub. Available online: https://covid19.nj.gov/index.html (accessed on 8 March 2021).
- Sayce, D. The Number of Tweets per Day in 2020. 2020. Available online: https://www.dsayce.com/social-media/tweets-day/ (accessed on 15 December 2021).
- Stoy, L. Sentiment Analysis: A Deep Dive Into The Theory, Methods, and Applications. 2021. Available online: https://lazarinastoy.com/sentiment-analysis-theory-methods-applications/ (accessed on 2 February 2022).
- Asur, S.; Huberman, B. Predicting the Future with Social Media. In Proceedings of the 2010 IEEE/WIC/ACM International Conference on Web Intelligence and Intelligent Agent Technology, Toronto, ON, Canada, 31 August–3 September 2010; pp. 492–499. [Google Scholar] [CrossRef] [Green Version]
- Moschitti, A. Efficient Convolution Kernels for Dependency and Constituent Syntactic Trees. In Machine Learning: ECML 2006; Lecture Notes in Computer Science; Fürnkranz, J., Scheffer, T., Spiliopoulou, M., Eds.; Springer: Berlin/Heidelberg, Germany, 2006; Volume 4212. [Google Scholar] [CrossRef] [Green Version]
- Tang, D.; Wei, F.; Yang, N.; Zhou, M.; Liu, T.; Qin, B. Learning Sentiment-Specific Word Embedding for Twitter Sentiment Classification. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Baltimore, MD, USA, 22–27 June 2014; pp. 1555–1565. [Google Scholar] [CrossRef] [Green Version]
- Singh, L.G.; Mitra, A.; Singh, S.R. Sentiment Analysis of Tweets Using Heterogeneous Multi-layer Network Representation and Embedding. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing, EMNLP, Online, 16–20 November 2020. [Google Scholar]
- Davisson, M. Staff Budget Briefing, FY 2017-18; Department of Public Health and Environment, 2016. Available online: https://leg.colorado.gov/sites/default/files/fy2017-18_pubheabrf.pdf (accessed on 23 April 2021).
- Tan, L.; Zhang, H. An approach to user knowledge acquisition in product design. Adv. Eng. Inform. 2021, 50, 101408. [Google Scholar] [CrossRef]
- Nassirtoussi, A.K.; Aghabozorgi, S.; Wah, T.Y.; Ngo, D.C.L. Text mining for market prediction: A systematic review. Expert Syst. Appl. 2014, 41, 7653–7670. [Google Scholar] [CrossRef]
- Rill, S.; Reinel, D.; Scheidt, J.; Zicari, R.V. PoliTwi: Early detection of emerging political topics on twitter and the impact on concept-level sentiment analysis. Knowl.-Based Syst. 2014, 69, 24–33. [Google Scholar] [CrossRef]
- Torkildson, M.; Starbird, K.; Aragon, C. Analysis and Visualization of Sentiment and Emotion on Crisis Tweets. In Cooperative Design, Visualization, and Engineering; Luo, Y., Ed.; Springer International Publishing: Berlin/Heidelberg, Germany, 2014; pp. 64–67. [Google Scholar] [CrossRef]
- Kontopoulos, E.; Berberidis, C.; Dergiades, T.; Bassiliades, N. Ontology-based sentiment analysis of twitter posts. Expert Syst. Appl. 2013, 40, 4065–4074. [Google Scholar] [CrossRef]
- Lahuerta-Otero, E.; Cordero-Gutiérrez, R. Looking for the perfect tweet. The use of data mining techniques to find influencers on twitter. Comput. Hum. Behav. 2016, 64, 575–583. [Google Scholar] [CrossRef]
- Ji, X.; Chun, S.A.; Geller, J. Monitoring Public Health Concerns Using Twitter Sentiment Classifications. In IEEE International Conference on Healthcare Informatics; IEEE: Philadelphia, PA, USA, 2013. [Google Scholar] [CrossRef]
- OECD. The territorial impact of COVID-19: Managing the crisis and recovery across levels of government. In OECD Policy Responses to Coronavirus (COVID-19); OECD Publishing: Paris, France, 2021. [Google Scholar] [CrossRef]
- Colorado Department of Public Health and Environment. Available online: https://cdphe.colorado.gov/ (accessed on 15 June 2021).
- Kravitz-Wirtz, N.; Aubel, A.; Schleimer, J.; Pallin, R.; Wintemute, G. Public Concern About Violence, Firearms, and the COVID-19 Pandemic in California. JAMA Netw. Open 2021, 4, e2033484. [Google Scholar] [CrossRef]
- Nicomedesa, C.J.C.; Avila, R.M.A. An analysis on the panic during COVID-19 pandemic through an online form. J. Affect. Disord. 2020, 276, 14–22. [Google Scholar] [CrossRef]
- Centers for Disease Control and Prevention. COVID Data Tracker. 2021. Available online: https://covid.cdc.gov/covid-data-tracker/#datatracker-home (accessed on 24 April 2021).
- Mittal, R.; Mittal, A.; Aggarwal, I. Identification of affective valence of Twitter generated sentiments during the COVID-19 outbreak. Soc. Netw. Anal. Min. 2021, 11, 108. [Google Scholar] [CrossRef]
- Hutto, C.J.; Gilbert, E. VADER: A Parsimonious Rule-Based Model for Sentiment Analysis of Social Media Text. In Proceedings of the International AAAI Conference on Web and Social Media, Ann Arbor, MI, USA, 1–4 June 2014; Volume 8, pp. 216–225. Available online: https://ojs.aaai.org/index.php/ICWSM/article/view/14550 (accessed on 15 December 2021).
- Hung, M.; Lauren, E.; Hon, E.S.; Birmingham, W.C.; Xu, J.; Su, S.; Hon, S.D.; Park, J.; Dang, P.; Lipsky, M.S. Social Network Analysis of COVID-19 Sentiments: Application of Artificial Intelligence. J. Med. Internet Res. 2020, 22, e22590. [Google Scholar] [CrossRef]
- Yadav, A.; Vishwakarma, D.K. A Language-independent Network to Analyze the Impact of COVID-19 on the World via Sentiment Analysis. ACM Trans. Internet Technol. 2022, 22, 28. [Google Scholar] [CrossRef]
- Yu, X.; Ferreira, M.D.; Paulovich, F.V. Senti-COVID19: An Interactive Visual Analytics System for Detecting Public Sentiment and Insights Regarding COVID-19 From Social Media. IEEE Access 2021, 9, 126684–126697. [Google Scholar] [CrossRef]
- Basiri, M.E.; Nemati, S.; Abdar, M.; Asadi, S.; Acharrya, U.R. A novel fusion-based deep learning model for sentiment analysis of COVID-19 tweets. Knowl.-Based Syst. 2021, 228, 107242. [Google Scholar] [CrossRef]
- Gupta, P.; Kumar, S.; Suman, R.R.; Kumar, V. Sentiment Analysis of Lockdown in India During COVID-19: A Case Study on Twitter. IEEE Trans. Comput. Soc. Syst. 2021, 8, 992–1002. [Google Scholar] [CrossRef]
- Naseem, U.; Razzak, M.I.; Khushi, M.; Eklund, P.W.; Kim, J. COVIDSenti: A Large-Scale Benchmark Twitter Data Set for COVID-19 Sentiment Analysis. IEEE Trans. Comput. Soc. Syst. 2021, 8, 1003–1015. [Google Scholar] [CrossRef]
- Chen, E.; Lerman, K.; Ferrara, E. COVID-19: The first public coronavirus twitter dataset. arXiv 2020, arXiv:2003.07372. [Google Scholar]
- Lopez, C.E.; Vasu, M.; Gallemore, C. Understanding the perception of COVID-19 policies by mining a multilanguage twitter dataset. arXiv 2020, arXiv:2003.10359. [Google Scholar]
- Yaqub, U. Tweeting During the COVID-19 Pandemic: Sentiment Analysis of Twitter Messages by President Trump. Digit. Gov. Res. Pract. 2021, 2, 1–7. [Google Scholar] [CrossRef]
- National Institutes of Health. Office of Data Science Strategy. Available online: https://datascience.nih.gov/COVID-19-open-access-resources (accessed on 5 November 2022).
- European Centre for Disease Prevention and Control. Download Historical Data (to 14 December 2020) on the Daily Number of New Reported COVID-19 Cases and Deaths Worldwide. Available online: https://www.ecdc.europa.eu/en/publications-data/download-todays-data-geographic-distribution-COVID-19-cases-worldwide (accessed on 6 November 2022).
- Twitter Developer Platform. Available online: https://developer.twitter.com/en/docs (accessed on 1 June 2021).
- Socher, R.; Perelygin, A.; Wu, J.; Chuang, J.; Manning, C.D.; Ng, A.; Potts, C. Recursive Deep Models for Semantic Compositionality Over a Sentiment Treebank. In Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing, Seattle, WA, USA, 18–21 October 2013; Available online: https://aclanthology.org/D13-1170 (accessed on 2 January 2021).
- Loria, S. TextBlob Documentation. Release 0.16.0. 2020. Available online: https://buildmedia.readthedocs.org/media/pdf/TextBlob/latest/TextBlob.pdf (accessed on 15 December 2021).
- COVID 19 Daily Trends—New Cases, Deaths by State and Counties. Available online: https://www.makeready.com/covid/ (accessed on 28 February 2022).
- Krippendorff, K. Estimating the Reliability, Systematic Error and Random Error of Interval Data. Educ. Psychol. Meas. 1970, 30, 61–70. [Google Scholar] [CrossRef]
- ReCal for Nominal, Ordinal, Interval, and Ratio-Level Data. Available online: http://dfreelon.org/recal/recal-oir.php (accessed on 1 March 2022).
- Krippendorff, K. Content Analysis: An Introduction to Its Methodology, 2nd ed.; Sage Publications: Thousand Oaks, CA, USA, 2004. [Google Scholar]
- Sentiment Analysis without Modeling: TextBlob vs. VADER vs. Flair. Available online: https://pub.towardsai.net/sentiment-analysis-without-modeling-TextBlob-vs-vader-vs-flair-657b7af855f4 (accessed on 20 October 2022).
- Free Statistics Calculators, Calculator: P-Value for Correlation Coefficients. Available online: https://www.danielsoper.com/statcalc/calculator.aspx?id=44 (accessed on 15 December 2021).
- IMF. Policy Tracker. Available online: https://www.imf.org/en/Topics/imf-and-covid19/Policy-Responses-to-COVID-19 (accessed on 1 September 2022).
- COVID-19: Local Action Tracker. Available online: https://www.nlc.org/resource/COVID-19-local-action-tracker/ (accessed on 1 March 2022).
- Lewis, M.; Liu, Y.; Goyal, N.; Ghazvininejad, M.; Mohamed, A.; Levy, O.; Stoyanov, V.; Zettlemoyer, L. BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension. arXiv 2019. [Google Scholar] [CrossRef]
- Raffel, C.; Shazeer, N.; Roberts, A.; Lee, K.; Narang, S.; Matena, M.; Zhou, Y.; Li, W.; Liu, P.J. Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer. arXiv 2019. [Google Scholar] [CrossRef]
- Barrios, F.; López, F.; Argerich, L.; Wachenchauzer, R. Variations of the Similarity Function of TextRank for Automated Summarization. arXiv 2016. [Google Scholar] [CrossRef]
- Radford, A.; Wu, J.; Child, R.; Luan, D.; Amodei, D.; Sutskever, I. Language Models are Unsupervised Multitask Learners. 2019. Available online: https://openai.com/blog/better-language-models/ (accessed on 15 June 2022).
- Wu, H.; Huang, J.; Zhang, C.J.P.; He, Z.; Ming, W. Facemask shortage and the coronavirus disease (COVID-19) outbreak: Reflection on public health measures. medRxiv 2020. [Google Scholar] [CrossRef]
- Roy, S.; Dutta, R.; Ghosh, P. Recreational and philanthropic sectors are the worst-hit US industries in the COVID-19 aftermath. Soc. Sci. Humanit. Open 2021, 3, 100098. [Google Scholar] [CrossRef] [PubMed]
- Cheng, C.; Wang, H.; Ebrahimi, O.V. Adjustment to a “New Normal:” Coping Flexibility and Mental Health Issues During the COVID-19 Pandemic. Front. Psychiatry 2021, 12, 353. [Google Scholar] [CrossRef]
- Chun, S.A.; Li, C.; Toliyat, A.; Geller, J. Tracking Citizen’s Concerns During COVID-19 Pandemic. In Proceedings of the 21st Annual International Conference on Digital Government Research, New York, NY, USA, 15–19 June 2020. [Google Scholar] [CrossRef]
- Black Swan in the Stock Market: What Is It, With Examples and History. 2022. Available online: https://www.investopedia.com/terms/b/blackswan.asp (accessed on 6 November 2022).
- Bonifazi, G.; Breve, B.; Cirillo, S.; Corradini, E.; Virgili, L. Investigating the COVID-19 vaccine discussions on Twitter through a multilayer network-based approach. Inf. Process. Manag. 2022, 59, 103095. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
© 2022 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).