Article

Mapping Linear and Configurational Dynamics to Fake News Sharing Behaviors in a Developing Economy

by
Claudel Mombeuil
1,2,*,
Hugues Séraphin
3 and
Hemantha Premakumara Diunugala
4
1
Rezo Inovasyon Edikatif Ayisyen (RINOVEDA), Mirebalais HT 5210, Haiti
2
Faculté des Sciences Administratives, Université Notre Dame d’Haïti, Port-au-Prince HT 6110, Haiti
3
Oxford Brookes Business School, Oxford Brookes University, Oxford OX3 0BP, UK
4
Department of Social Statistics, University of Sri Jayewardenepura, Nugegoda 10250, Sri Lanka
*
Author to whom correspondence should be addressed.
Technologies 2025, 13(8), 341; https://doi.org/10.3390/technologies13080341
Submission received: 28 June 2025 / Revised: 25 July 2025 / Accepted: 30 July 2025 / Published: 6 August 2025
(This article belongs to the Section Information and Communication Technologies)

Abstract

The proliferation of social media has paradoxically facilitated the widespread dissemination of fake news, impacting individuals, politics, economics, and society as a whole. Despite increasing scholarly research on this phenomenon, a significant gap exists regarding its dynamics in developing countries, particularly how predictors of fake news sharing interact, rather than merely their net effects. To acquire a more nuanced understanding of fake news sharing behavior, we identify both the direct effects and the complex interplay among key variables through a dual analytical framework: Structural Equation Modeling (SEM) for linear relationships and Fuzzy-Set Qualitative Comparative Analysis (fsQCA) for asymmetric, configurational patterns. Specifically, we investigate the influence of news-find-me orientation, social media trust, information-sharing tendencies, and status-seeking motivation on fake news sharing behavior. We also examine the moderating influence of social media literacy on these effects. Based on a cross-sectional survey of 1028 Haitian social media users, the SEM analysis revealed that news-find-me perception had a negative but statistically insignificant influence on fake news sharing behavior, whereas information sharing exhibited a significant negative association. Trust in social media was positively and significantly linked to fake news sharing behavior, and status-seeking motivation was positively but not significantly associated with it. Crucially, social media literacy moderated the effects of trust and information sharing. The fsQCA identified three core configurations for fake news sharing: (1) low status seeking, (2) low information-sharing tendencies, and (3) a combination of low “news-find-me” orientation and high social media trust. Furthermore, low social media literacy emerged as a direct core configuration. These findings underscore the urgent need to prioritize social media literacy as a key intervention in combating the dissemination of fake news.

1. Introduction

Social media services permit users to generate, evaluate, edit, and distribute information or knowledge with unprecedented ease [1,2,3,4]. This phenomenon, often referred to as the “democratization of information,” has eroded the traditional gatekeeping role of media outlets [1,5] and has also created a foundation for the proliferation of issues such as fake news, scams, cyberbullying, and hate speech [6,7,8,9]. Of these issues, fake news has received the most attention because it can lead to false beliefs and misconceptions [10], fostering inaccurate and fragmented individual conceptions or an inadequate understanding of complex situations, with damaging consequences for the economy, politics, and society as a whole [11,12,13,14,15]. More importantly, the proliferation of fake news can contaminate public discourse and sway the decisions of governmental, organizational, and individual actors [16]. Furthermore, the high prevalence of fake news sharing can impact citizens’ cognitive abilities [17,18,19] and their ability to make rational purchase decisions [20]. The inability to distinguish authentic news from fake news leads users to spread bias, knowingly or not, triggering a chain reaction that ultimately manipulates public opinion [21]. Given these adverse impacts, understanding why individuals share fake news has become a major concern for both researchers and policymakers. Yet while the harms of fake news have garnered substantial academic attention, the majority of studies have focused on the Global North, leaving a critical gap in understanding its implications in the Global South [22,23,24,25].
Previous research has investigated a broad range of psychological, social, and technological drivers behind fake news creation and dissemination, drawing on various theoretical lenses; for further insights, see [26,27,28,29]. However, as shown in Table 1, the existing literature reveals a fragmented and often inconsistent understanding of the phenomenon of fake news. This supports Gray et al.’s [30] argument that relying on single-theory frameworks in the social sciences may obscure the complexity of human behavior. To address this limitation, recent scholars have advocated for multi-theoretical and multi-methodological approaches to capture more nuanced insights, e.g., [30,31]. Despite these calls, few studies have integrated complementary theoretical perspectives, such as Uses and Gratifications Theory (UGT) and Rational Choice Theory (RCT), to explain the sharing of fake news. UGT focuses on individuals’ psychological needs in media use, while RCT emphasizes deliberate cost-benefit evaluations, offering a dual lens that is particularly relevant to understanding online content-sharing behavior.
Despite growing interest in fake news sharing, there remains a lack of integrated theoretical and methodological frameworks capable of fully explaining how individuals process and disseminate misinformation on social media. Specifically, few studies examine the combined effects of news-find-me orientation, social media trust, status-seeking, and information-sharing behavior within a single model. Moreover, the moderating role of social media literacy, though theoretically compelling, is underexplored. Furthermore, most empirical studies in this domain rely heavily on variable-based statistical methods like Structural Equation Modeling (SEM). While powerful for estimating linear relationships, SEM often fails to detect configural or combinatory patterns that better reflect real-world behaviors [34,40,41]. Also, configural analyses (e.g., fsQCA) that reveal causal patterns leading to fake news sharing remain scarce, especially in large-sample contexts. To overcome these limitations, this study combines SEM with Fuzzy-set Qualitative Comparative Analysis (fsQCA)—a method rooted in complexity theory and increasingly applied in digital behavior studies, e.g., [35]. This hybrid approach enhances analytical robustness and enables both generalizable and case-specific insights.
To address these gaps, this study pursues three core objectives. First, it investigates the direct influence of news-find-me perception, trust in social media (social media trust), status-seeking motivation, and information-sharing behavior on fake news sharing behavior among social media users in a developing economy (Haiti). Second, it examines the moderating role of social media literacy in shaping the relationships between these four antecedents and fake news sharing behavior. Third, it applies fuzzy-set Qualitative Comparative Analysis (fsQCA) to identify configurations of conditions—combinations of factors—that collectively contribute to fake news sharing behavior, thereby complementing the findings derived from Structural Equation Modeling (SEM). Corresponding to these objectives, this study poses the following research questions: To what extent do news-find-me perception, social media trust, status-seeking motivation, and information-sharing behavior predict fake news sharing? Does social media literacy moderate these relationships? What specific configurations of conditions lead to users’ inclination to disseminate fake news?
By addressing these questions, this study contributes to the theoretical integration of Uses and Gratifications Theory (UGT) and Rational Choice Theory (RCT), provides actionable insights for designing effective misinformation interventions, and advances methodological innovation through the combined application of SEM and fsQCA. In the remainder of this paper, we first elaborate on the foundational theoretical frameworks that underpin this study. We then define each independent variable and formulate hypotheses regarding their influence on fake news sharing behavior, drawing on prior scholarly work. Additionally, we examine how these variables combine to form predictive configurations of misinformation engagement. The subsequent sections detail the methodological approach, followed by the presentation and discussion of key findings. Finally, we articulate the theoretical and practical contributions of this study, acknowledge its limitations, and conclude with reflective remarks on future research directions.

2. Scholarly Review

2.1. Foundational Theory

The research leverages two major theories to interpret users’ fake news sharing behaviors on social networks. The first is the Uses and Gratifications Theory, initially developed by Blumler and Katz [42]. This theory emphasizes how individuals dynamically choose media content to accomplish explicit needs and motivations [27,43]. Initially applied to traditional media, its scope has expanded to include online platforms, e.g., [27,44,45]. The second theory is Rational Choice Theory (RCT), developed by Browning, Webster [46]. This theory suggests that individuals make logical calculations to choose options that benefit them the most by carefully weighing options to determine the most advantageous course of action [28]. Rooted in fields such as political science [33,39], RCT has more recently been utilized to analyze fake news sharing, e.g., [26,28], explaining why individuals might disseminate fake news despite the potential harm to individuals, institutions, products, brands, and business organizations.

2.2. News-Find-Me Perception and Its Impact on Fake News Sharing

The emergence of the Internet and social media platforms has played a major role in decreasing reliance on traditional media outlets like radio and television for obtaining information. As media options have become more diverse, the need for active scrutiny of news media has declined [47]. This decline is reinforced by people’s overreliance on the ambient informational environment provided by social media networks, peer groups, and social feeds, potentially detaching them from the traditional practice of purposefully gathering information [48,49]. The shift from actively consulting news outlets to relying on social feeds and online social networks has been driven by the overwhelming nature of this ambient information environment; the resulting information overload leads digital news consumers to perceive social media as a sufficient means of achieving their informational objectives [48]. Information overload in the digital age has led people to increasingly rely on their social networks to act as personal news filters [50]. This reliance fosters a feeling of being up-to-date without deliberately searching for news, a concept known as the “news-find-me” perception [51,52]. Furthermore, individuals’ reliance on algorithmic curation and social connections to automatically deliver news not only reduces the need for active information seeking but also increases trust in these algorithmic feeds [52].
Under the disguise of the news-find-me perception, social media users may feel less inclined to actively seek news, believing they can be effortlessly informed [49]. Drawing on Uses and Gratifications Theory, it is assumed that users adopt passive habits of information consumption to fulfill needs such as convenience and cognitive ease [53], even at the cost of accuracy. Consistent with Rational Choice Theory, studies indicate that individuals tend to minimize cognitive effort by favoring low-cost information channels—such as algorithmic feeds—over deliberate fact-checking, particularly when the perceived reward of being informed appears easily attainable [54,55,56]. Consistent with these theories, previous research demonstrates that reliance on passive news consumption heightens vulnerability to misinformation [57,58]; therefore, individuals with stronger news-find-me orientations are more likely to share fake news, driven by heuristic processing and inadequate verification of sources. Furthermore, empirical studies by Apuke & Omar [59] and Wei and colleagues [28] showed that news-find-me predicts users’ inclination to share inaccurate information, such as fake news, on social media platforms. Also, research has shown that individuals with a strong news-find-me orientation tend to depend more heavily on algorithmically curated feeds, which are prone to amplifying emotionally charged and potentially misleading content [22,60]. Drawing on the insights outlined above, we propose the following hypothesis:
Hypothesis 1:
News-find-me perception is positively associated with users’ fake news-sharing behavior.

2.3. Social Media Trust and Fake News-Sharing Behavior

Sustained interactions among individuals are dominated by the trust factor, which means that trust plays a substantial role in maintaining and fostering ongoing relationships and engagements. According to Mayer, Davis [61], the concept of “Trust” is characterized by the readiness to have faith in someone, driven by positive expectations rooted in their past actions. In the context of the online environment, trust has gained noteworthy importance due to the profusion of user-generated content [62], arising from the effective exchange of information [63]. Online trust often refers to the trust individuals have in any online platforms, websites, or technologies that require the support of the Internet to perform certain tasks. In the literature, online trust, trust in social media, or trust in social network sites are used interchangeably. Warner-Søderholm, Bertsch [64] defined trust in social networking platforms as the belief that social networking sites can be trusted, are reliable, and can effectively serve their intended purpose, which is to facilitate open and innovative information sharing. This is particularly important as individuals are more likely to take informational risks and provide social support when their levels of online trust are high [65]. Consequently, it is anticipated that users with a robust sense of trust in online information sources are inclined to endorse and propagate shared information within their social circles, implying that online trust correlates with fake news sharing [66].
Trust in social media platforms increases users’ likelihood of accepting and redistributing content without independent verification [6,7]. Grounded in Uses and Gratifications Theory, we posit that trust in social media may compel users to satisfy psychological needs for security and validation during information consumption, thereby diminishing their level of skepticism. Consistent with this statement, research shows that users tend to perceive a reduced risk when sharing content from trusted sources, particularly when the anticipated reward involves accruing social capital or fostering engagement [67], aligning with Rational Choice Theory. In this regard, Talwar, Dhir [66] investigated the sharing of fake news on social media and discovered that people who have greater trust in online information tend to intentionally share false content and are less likely to fact-check before doing so.
Similarly, Salehan, Kim [68] and Wei, Gong [28] found that users who place confidence in social media platforms are more likely to share misinformation. This implies that such trust might create a misleading sense of reliability, causing users to be less critical of the content they encounter [69]. More importantly, studies have shown that elevated levels of trust in social media platforms can diminish users’ motivation to verify the accuracy of information prior to sharing [57,70,71]. When individuals place trust in the platform or their social network, they may implicitly assume that the content has already been scrutinized, thereby increasing the risk of disseminating misinformation [72]. Collectively, these findings suggest that trust in social media serves as a cognitive shortcut, diminishing users’ perceived need for scrutiny and thereby promoting the uncritical sharing of content—including fake news. Drawing on these insights, we formulate the following hypothesis:
Hypothesis 2:
Social media trust is positively associated with users’ fake news-sharing behavior.

2.4. Information Sharing and Fake News Sharing Behavior

Information sharing represents a central gratification pursued by social media users, who frequently disseminate content to nurture interpersonal connections, articulate their self-concept, or participate in broader conversational threads [69,73,74]. However, this frequent and emotionally driven sharing on social media platforms, particularly when motivated by perceived social relevance, can undermine verification efforts, increasing the likelihood of unchecked dissemination. Consequently, social media platforms have become dominant vectors for the rapid dissemination of fake news [75,76], facilitated by algorithmic amplification, low barriers to sharing, and often limited user scrutiny. Their streamlined design facilitates the spread of both accurate and inaccurate information [77].
The ease and low cost of sharing information on these platforms make them ideal for disseminating content, both genuine [45] and fabricated [75]. Users often share information with good intentions, but social media features make it easy to create and spread false narratives [77]. Studies by Chen, Sin [78] show that the ease of sharing information and sharing fabricated information are positively linked. People often share news impulsively, driven by emotions like fear and anxiety, and forgo fact-checking [29]. Furthermore, frequent information sharing on social media is often driven by personal gratifications—including entertainment, altruism, and identity expression—as outlined by Uses and Gratifications Theory [79]. Over time, this behavior can become habitual, and the sheer volume of shared content increases the risk of inadvertently disseminating fake news [26].
From the perspective of Rational Choice Theory, users may perceive the social rewards of sharing—such as engagement or affirmation—as outweighing potential misinformation costs, particularly when they are unaware of the content’s inaccuracy [80]. This explains why the rise of social media, with its high penetration rate, is linked to the surge of fake news [81]. Prior studies have demonstrated a significant positive connection between habitual information sharing and the distribution of fake news, e.g., [27,28]. Additionally, research indicates that individuals who frequently share content on social media platforms are more susceptible to disseminating misinformation, particularly when the accuracy of information is not prioritized in their decision-making process [82]. Drawing on these insights, we seek to test the following hypothesis:
Hypothesis 3:
Information sharing is positively associated with users’ fake news sharing behaviors.

2.5. Status-Seeking and Fake News Sharing Behavior

While “status-seeking” might appear self-evident, scholarly definitions vary. Anderson, Hildreth [83] characterize it as the desire for admiration, respect, and deference, while Delhey, Schneickert [84] stress the motivation to ascend social hierarchies and gain recognition for achievements. Despite these different emphases, both definitions highlight the social embeddedness of status-seeking within groups, communities, or societies. Fundamentally, status-seeking involves behaviors aimed at acquiring recognition, respect, or admiration. In the specific context of the digital era, Apuke and Omar [32] define status-seeking as the ways individuals utilize social media to enhance their standing within their online social networks.
Given that status-seeking motivation often rewards individuals with increased visibility and message virality, users may be inclined to share novel or sensational content, including fake news, as a means of attracting attention and enhancing their social prestige within digital networks. Consequently, we argue that both Uses and Gratifications Theory (UGT) and Rational Choice Theory (RCT) offer valuable perspectives for understanding status-seeking behavior in digital environments. UGT posits that individuals actively engage with content that fulfills specific gratifications—such as attention-seeking through emotionally charged or controversial material, traits often linked to fake news dissemination [85]. Complementing this, RCT suggests that users strategically weigh the social rewards of engagement (e.g., likes, shares, and visibility) against potential costs [86]. In such calculations, content accuracy may be deprioritized if recognition is perceived as a more valuable outcome [86].
In the context of online communities, individuals identified as opinion leaders who share news on social networking sites are often highly engaged, perceived as knowledgeable, and offer commentary on their shared content to enhance its value [87]. Consequently, some individuals tend to emulate these online opinion leaders in their pursuit of status. Research has indicated that people frequently share content on social media platforms to attain recognition [88] or to project an appearance of being well-informed in order to gain social endorsements, a behavior influencing the motivation to engage in sharing fake news [89]. In line with this, Lee, Ma [90] identified status-seeking as the most robust predictor of news-sharing behavior.
Prior research suggests that users often share sensational content not because of its accuracy but due to its capacity to attract attention and provoke engagement [91,92]. This suggests that when individuals are motivated by a desire for status, they may prioritize the visibility and popularity of their shared content over its accuracy. However, the relationship between status-seeking motivations and fake news sharing presents mixed findings in existing literature. While some studies report a significant positive association [28,44,45], others find no statistically significant effect [44,59]. These contradictory results suggest the potential influence of moderating variables or contextual factors that may strengthen or weaken this relationship. To help resolve this discrepancy, we examine the following relationship:
Hypothesis 4:
Status-seeking motivation is positively associated with users’ fake news-sharing behavior.

2.6. Social Media Literacy as Moderator

Social media literacy equips individuals with multidimensional competencies for platform engagement, including both operational skills for digital navigation [93] and analytical capacities for content discernment [94]. Schreurs and Vandenbosch [93] conceptualize social media literacy as a dual-capacity framework encompassing both risk mitigation strategies and opportunity optimization during digital content engagement. This skill is crucial for recognizing and combating fake news [95,96,97]. The empirical evidence provided by Apuke, Omar [98] lends support to this notion, suggesting that exposure to social media literacy concepts resulted in enhanced fake news identification and awareness, and a subsequent decrease in sharing behaviors.
Nevertheless, social media literacy extends beyond simply identifying fake news; it equips users to critically evaluate both content and sources, a crucial skill given social media’s role as a breeding ground for misinformation [99]. Such literacy cultivates responsible online engagement [100] and develops critical content evaluation skills, with studies demonstrating significantly reduced misinformation sharing among literate users. Research shows that individuals with stronger literacy skills have a lower propensity to trust and share fake news [101]. Importantly, research points to educational investment as the most sustainable and comprehensive economic and social policy for combating fake news dissemination [102]. Results by Wei, Gong [28] supported this statement, showing that social media literacy significantly moderates the influence of factors like news-find-me perception, social media trust, and information sharing on fake news sharing behavior. Thus, we draw on the above insights to formulate the following moderating effect of social media literacy:
Hypothesis H1a:
‘Social media literacy’ moderates the effect of ‘News-find-me’ on fake news sharing behavior.
Hypothesis H2a:
‘Social media literacy’ moderates the effect of ‘social media trust’ on fake news sharing behavior.
Hypothesis H3a:
‘Social media literacy’ moderates the effect of ‘information sharing’ on fake news sharing behavior.
Hypothesis H4a:
‘Social media literacy’ moderates the effect of ‘status-seeking’ on fake news sharing behavior.

2.7. Control Variables and Model Development

Control variables are widely used in management and organizational research [103,104], but they are notably absent in information systems studies and research on internet-based technologies [105]. Even when included, their effects are often unreported, hindering a comprehensive understanding of their impact on analysis outcomes [105]. Despite not being the primary focus, control variables should be utilized to clarify the effect of any independent variable on a particular dependent variable [106] in order to enhance the robustness of the results [107]. Studies have shown mixed findings regarding the influence of demographic factors (e.g., education, age, and gender) on internet-based technologies, e.g., [108,109,110,111], indicating the need to control for these variables. In light of this, we propose controlling for age, education, family status, gender, and occupation to address potential biases and confounding factors, as shown in Figure 1.

2.8. Configural Model with fsQCA

Covariance-based SEM (CB-SEM) emphasizes relationships among latent variables and is commonly used to test theoretically driven models with a well-defined structure, following a traditional hypothesis-testing perspective focused on the assessment of model fit [112,113,114]. However, complex phenomena are characterized by nonlinear relationships between variables and abrupt shifts that lead to divergent outcomes, rendering traditional variance-based methods, which assume linearity, ill-suited to capture dynamic interdependencies [115,116,117]. Configurational theory addresses this issue by reframing phenomena as synergistic combinations of conditions rather than isolated variables [40,41]. This approach hinges on a critical distinction: while variables represent measurable characteristics that vary across cases, conditions refer to thresholds or ranges within antecedent or outcome factors that collectively shape results [118]. A cornerstone of configurational theory is equifinality, the principle that multiple causal pathways can yield identical outcomes [40,41,119]. While Figure 1 depicts hypothesized linear relationships between independent variables and fake news sharing (tested via structural equation modeling, SEM), SEM alone struggles to account for non-linear interactions or equifinality [120]. To overcome this issue, we integrate fsQCA with SEM. This dual-method approach leverages fsQCA’s capacity to identify combinatorial causal configurations and SEM’s strength in validating linear hypotheses [121]. By bridging these two methods, we uncover both linear effects and synergistic pathways, advancing a holistic understanding of fake news sharing behaviors. Given the significant gap in empirical studies regarding the configurations that facilitate fake news sharing, our research seeks to elucidate how news-find-me perception, social media trust, information sharing, and status seeking combine to form necessary or sufficient conditions for fake news sharing behavior.

3. Method

3.1. Research Context

The propagation of fake news is influenced by technological factors and national information contexts [122]. Consequently, Duffy, Tandoc [22] suggested that researchers should broaden their scope beyond studying fake news exclusively within the contexts of Western countries such as the US and the UK. Recognizing this critical gap, this study applies the research frameworks presented in Figure 1 to the urgent context of Haiti, a developing nation experiencing a substantial rise in the dissemination of fake news. The widespread use of smartphones, along with heavy dependence on major social media networks like Facebook and WhatsApp in Haiti, has established an environment conducive to the swift dissemination of misinformation. This issue intensified during the COVID-19 pandemic and has since become intertwined with volatile political situations and the pervasive threat of armed gang violence. Worryingly, despite the increasing prevalence of fake news in many developing countries, public knowledge of any efforts to combat it remains scarce. By investigating this specific environment, we aim to provide crucial insights into the dynamics of fake news and pave the way for informed, effective interventions and policy formulation.

3.2. Instrument Development

This research adopts a quantitative methodology characterized by a large cross-sectional survey. The questionnaire survey consisted of three parts. The first section aimed to elicit thoughtful responses with minimal bias by informing participants about this study’s purpose, guaranteeing anonymity, and emphasizing confidentiality, see [123,124]. It included a memo explaining these details. The second section collected demographic data. Age was categorized into four groups: (1) 18–28 years old, (2) 29–39 years old, (3) 40–50 years old, and (4) above 50 years old. We coded gender as male (0) or female (1). Participants’ education levels spanned from high school completion to graduate degrees, including current and former bachelor’s degree students. Occupation included six categories: private sector employee, public sector employee, independent professional, non-profit employee, self-employed/entrepreneur, and unemployed. Family status included five categories: single, married, divorced/separated, widowed, and free union. These sociodemographic factors were included as controls due to their potential influence on fake news sharing behaviors. The third section measured the key variables in this study. Five previously validated items assessed news-find-me perception, see [28,59]. Social media trust, information sharing, and status-seeking were measured using four validated items each, see [28,45,59]. Social media literacy, the moderating variable, was assessed using ten items validated by Wei, Gong [28]. Finally, eight validated items measured fake news-sharing behaviors [27,28,59].
The first author translated the measurement items into French, and the second verified the translation. This process was facilitated by both authors’ fluency in French (an official language) and proficiency in English, stemming from their academic and professional experiences in English-speaking environments. To mitigate potential common method bias (CMB) during questionnaire construction, several procedural measures were implemented, including clarity of item wording, counterbalancing of questions, and the use of varied 5-point Likert scale response options, see [123,125]. Data quality was further ensured through three pre-defined exclusion criteria for non-genuine responses: (1) correct answers to two embedded control questions, (2) variability in responses across items (i.e., avoiding identical ratings), and (3) explicit checking of the consent-to-participate checkbox. The questionnaire underwent pilot testing, and no significant issues were identified.
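To make the screening logic concrete, the sketch below applies the three exclusion criteria in Python with pandas. The file name, column names, and answer keys for the two control questions are hypothetical placeholders, not the actual instrument coding.

```python
import pandas as pd

# Hypothetical raw response file; column names are illustrative stand-ins.
df = pd.read_csv("survey_raw.csv")

likert_items = [c for c in df.columns if c.startswith(
    ("NewsFM", "TrustSNS", "InfoSharing", "SSeeking", "SMLit", "SharingFNs"))]

keep = (
    (df["control_q1"] == 3) & (df["control_q2"] == 1)  # (1) control questions (assumed keys)
    & (df[likert_items].nunique(axis=1) > 1)           # (2) variability: no straight-lining
    & (df["consent"] == 1)                             # (3) consent checkbox ticked
)
clean = df[keep].reset_index(drop=True)
print(f"excluded {len(df) - len(clean)}; retained {len(clean)}")
```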

3.3. Data Collection and Sample Composition

A large-scale paper-and-pencil survey was employed for data collection in four Haitian cities: Gonaïves, Port-au-Prince, Mirebalais, and Hinche. From the beginning of February to late May 2023, two surveyors per city distributed a total of 800 copies of the final questionnaire. Participants provided verbal consent and checked a consent checkbox before independently completing the survey, yielding 1200 initial responses. Subsequent data cleaning excluded 172 questionnaires based on predefined criteria: excessive missing responses, systematic response patterns (e.g., consistently selecting the same Likert option), or failure on attention-trap questions. This rigorous cleaning process resulted in a final sample of 1028 usable responses. The final sample consists of 47.5% males and 52.5% females. In decreasing order, 41.1% of the respondents were single, followed by 26.2% married, 13.9% divorced, 11.4% free union, and 7.5% widowed. In decreasing order, 39.8% of the respondents were pursuing bachelor’s degrees, 33.9% had completed bachelor’s degrees, 23% were high school graduates, and 3.4% had completed graduate studies. In decreasing order, 40% of the respondents were in the 18–28 age bracket, 23.8% were in the 40–50 age bracket, 22.6% were in the 29–39 age bracket, and 13.3% were above 50 years old. In decreasing order, 32.4% of the respondents were independent professionals, 19.3% were self-employed, 15.6% were employees of non-profit organizations, 14.5% were employees of the public sector, and 12.2% were employees of the private sector.

3.4. Missing Values and Confirmatory Factor Analysis

Before analyzing the data, we checked for missing values using descriptive statistics in SPSS version 27. Our analysis identified missing data across several items, with SMLit1, TrustSNS4, NewsFM4, and SSeeking4 exhibiting the highest rates, each showing missing values in approximately 1% of the responses. Other items with missing values included InfoSharing1, SMLit3, SMLit4, and SMLit6, each with missing values in approximately 0.8% of the cases. The remaining items with missing values were in the range of 0.2% to 0.7%. Since the percentage of missing values was relatively low, we used regression imputation to address them. After dealing with the missing values, we conducted a confirmatory factor analysis (CFA) to identify any potential issues with the dataset. The CFA revealed that items InfoSharing7, SMLit1, SMLit2, and SMLit4 loaded on separate factors and were therefore removed from the analysis. Additionally, items SSeeking4, SMLit3, SMLit10, SMLit14, and SharingFNs1 had very poor loadings and were also removed. We checked the dataset for normality, and the results indicate that the data were normally distributed, as all skewness and kurtosis values were within the range of ±2, see [126,127,128]. The measurement items used in the questionnaire survey are presented in Appendix A.
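As an illustration of this workflow (the analysis itself was run in SPSS; the snippet below assumes open-source equivalents and synthetic data standing in for the survey items), regression-based imputation and the ±2 skewness/kurtosis screen can be sketched as follows:

```python
import numpy as np
import pandas as pd
from scipy.stats import kurtosis, skew
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

# Synthetic stand-in for the item-level data (5-point Likert, ~1% missing).
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 6, size=(1028, 10)).astype(float),
                     columns=[f"item{i}" for i in range(1, 11)])
items = items.mask(rng.random(items.shape) < 0.01)  # inject ~1% missing values

# Regression-based imputation: each item is predicted from the remaining items.
imputed = pd.DataFrame(
    IterativeImputer(max_iter=10, random_state=0).fit_transform(items),
    columns=items.columns)

# Univariate normality screen: flag items with |skewness| or |kurtosis| > 2.
report = pd.DataFrame({"skew": imputed.apply(skew),
                       "kurtosis": imputed.apply(kurtosis)})  # excess kurtosis
print(report[(report.abs() > 2).any(axis=1)])
```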

4. Results

Before evaluating the research model, we measured the psychometric properties of the variables using different measurements.

4.1. Measurements

Our analysis confirms that the measurement scales used in this study are valid and reliable (see Table 2). Construct validity was verified through two complementary assessments: (1) convergent validity to examine indicator reliability, and (2) discriminant validity to ensure construct distinctiveness, following the criteria established by Fornell and Larcker [129] and contemporary best practices [130,131,132]. All constructs demonstrated adequate convergent validity (AVE > 0.5, CR > 0.7). Two tests confirmed discriminant validity: (1) all HTMT values were below 0.85 (Table 3), and (2) each construct’s AVE square root exceeded its correlations with other constructs. Although the CMIN/DF was 3.930, the RMSEA of 0.053, GFI of 0.896, AGFI of 0.878, IFI of 0.931, and CFI of 0.930 indicated an acceptable model fit [133]. Common method bias was assessed via Harman’s single-factor test (29.6% of variance explained, below the 50% threshold). Multicollinearity was evaluated through VIF scores, all below 5 [134,135], indicating no significant multicollinearity issues.
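For transparency, the two convergent-validity statistics and the VIF screen can be reproduced from first principles; the sketch below uses hypothetical loadings and synthetic predictor scores rather than the study data:

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

def ave_cr(loadings):
    """Fornell-Larcker convergent validity: AVE = mean(lambda^2) and
    CR = (sum lambda)^2 / ((sum lambda)^2 + sum(1 - lambda^2))."""
    lam = np.asarray(loadings)
    ave = np.mean(lam ** 2)
    cr = lam.sum() ** 2 / (lam.sum() ** 2 + (1 - lam ** 2).sum())
    return ave, cr

ave, cr = ave_cr([0.78, 0.81, 0.74, 0.69])  # hypothetical loadings, one construct
assert ave > 0.5 and cr > 0.7               # thresholds applied above

# VIF screen on (synthetic) composite predictor scores; values below 5 pass.
rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(1028, 4)), columns=["NFM", "SMT", "IS", "SSM"])
vif = pd.Series([variance_inflation_factor(X.values, i) for i in range(X.shape[1])],
                index=X.columns)
print(vif)
```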

4.2. Results of the Hypothesized Relationships via SEM

To evaluate the hypothesized relationships illustrated in Figure 1, we conducted a covariance-based structural equation modeling (CB-SEM) analysis using AMOS 24. This approach was selected because our model’s constructs were operationalized using well-established, validated measurement scales adapted from prior research across diverse contexts [112]. CB-SEM is particularly suited for confirmatory studies, as it enables rigorous testing of theoretical relationships while relying on assumptions of multivariate normality and a sufficiently large sample size to produce stable parameter estimates. Following the advice provided by Shiau, Chau [105], we carried out two separate analyses to examine the impact of our independent variables while treating sociodemographic factors as control variables. In the first SEM analysis, we excluded these control variables, and in the second analysis, we incorporated them into the model. This approach helped establish the robustness of our results.
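A minimal open-source sketch of this model specification follows, assuming the semopy package as a stand-in for AMOS 24; the item labels are abbreviated from Appendix A and the cleaned data file is a hypothetical placeholder:

```python
import pandas as pd
import semopy

# lavaan-style description: measurement model, then the H1-H4 structural paths.
desc = """
NFM  =~ NewsFM1 + NewsFM2 + NewsFM3 + NewsFM5
SMT  =~ TrustSNS1 + TrustSNS2 + TrustSNS3 + TrustSNS4
IS   =~ InfoSharing2 + InfoSharing3 + InfoSharing4
SSM  =~ SSeeking1 + SSeeking2 + SSeeking3
FNSB =~ SharingFNs2 + SharingFNs3 + SharingFNs4 + SharingFNs5
FNSB ~ NFM + SMT + IS + SSM
"""

data = pd.read_csv("clean_survey.csv")  # hypothetical cleaned dataset (n = 1028)
model = semopy.Model(desc)
model.fit(data)
print(model.inspect(std_est=True))      # estimates, S.E., z-values, p-values
# The second specification appends the controls, e.g., FNSB ~ ... + Age + Gender.
```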

4.3. Results of Direct Effects

Our analysis reveals that the combined influence of news-find-me, social media trust, status-seeking, and information sharing explains a modest 9.7% of the variance in fake news sharing behavior. The inclusion of sociodemographic variables increases this explanatory power to 11.2%, suggesting their non-negligible role.
As presented in Table 4, the direct effects exhibit notable patterns:
  • News-find-me demonstrates a negligible and statistically non-significant effect on fake news sharing, both without (β = −0.013, p = 0.715) and with sociodemographic controls (β = −0.015, p = 0.667), leading to the rejection of Hypothesis 1.
  • Social media trust, however, exhibits a robust positive association with fake news sharing, remaining significant in both models (β = 0.170, p < 0.001 without sociodemographics; β = 0.140, p < 0.001 with sociodemographics), supporting Hypothesis 2.
  • Information sharing negatively predicts fake news sharing, with strong significance in both specifications (β = −0.307, p < 0.001 without sociodemographic; β = −0.291, p < 0.001 with sociodemographic), rejecting Hypothesis 3.
  • Status-seeking shows no meaningful effect (β = 0.001, p = 0.986 without sociodemographics; β = 0.004, p = 0.900 with sociodemographics), resulting in the rejection of Hypothesis 4.
Among sociodemographic factors, education (β = 0.064, p = 0.029), age (β = 0.077, p = 0.008), and family status (β = 0.109, p < 0.001) positively correlate with fake news sharing behavior, whereas gender (β = −0.069, p = 0.019) has a negative effect. Occupation (β = −0.024, p = 0.410) shows no significant influence.
Table 4. Results of the direct effects.

A. Hypothesized Relationships, Excluding Demographics

| Path | Standardized Estimate | S.E. | C.R. | p |
|---|---|---|---|---|
| H1: NFM → FNSB | −0.013 | 0.040 | −0.366 | 0.715 |
| H2: SMT → FNSB | 0.170 | 0.038 | 5.122 | *** |
| H3: IS → FNSB | −0.307 | 0.037 | −8.880 | *** |
| H4: SSM → FNSB | 0.001 | 0.037 | 0.018 | 0.986 |

B. Hypothesized Relationships, Including Demographics

| Path | Standardized Estimate | S.E. | C.R. | p |
|---|---|---|---|---|
| H1: NFM → FNSB | −0.015 | 0.039 | −0.430 | 0.667 |
| H2: SMT → FNSB | 0.140 | 0.037 | 4.246 | *** |
| H3: IS → FNSB | −0.291 | 0.036 | −8.502 | *** |
| H4: SSM → FNSB | 0.004 | 0.036 | 0.125 | 0.900 |
| Education → FNSB | 0.064 | 0.036 | 2.187 | 0.029 |
| Age → FNSB | 0.077 | 0.027 | 2.632 | 0.008 |
| Family Status → FNSB | 0.109 | 0.022 | 3.705 | *** |
| Gender → FNSB | −0.069 | 0.058 | −2.352 | 0.019 |
| Occupation → FNSB | −0.024 | 0.021 | −0.824 | 0.410 |

FNSB = fake news sharing behavior; SSM = status-seeking motivation; NFM = news-find-me; SMT = social media trust; IS = information sharing. *** indicates that p is significant at the 0.001 level.

4.4. Results of Moderating Effects

To deepen our investigation of the proposed framework, we conducted two complementary structural equation modeling (SEM) analyses: one excluding sociodemographic controls and another incorporating them. The baseline model (without demographics) explained 15.0% of the variance in fake news sharing behavior, increasing marginally to 15.4% when sociodemographic factors were included.
As presented in Table 5 and Figure 2, our examination of social media literacy’s moderating role yielded nuanced findings:
  • Non-significant moderations:
    The moderating effect of social media literacy proved insignificant for both news-find-me perception (β = −0.061, p = 0.108) and status-seeking (β = −0.004, p = 0.910), leading to the rejection of H1a and H4a
  • Conditional moderation:
    Social media literacy significantly moderated the link between social media trust and fake news sharing in the baseline model (β = 0.097, p = 0.010), but this effect weakened to marginal significance when demographics were controlled (β = 0.072, p = 0.052), yielding partial support for H2a
  • Robust moderation:
    The moderating effect of social media literacy on the relationship between information sharing and fake news sharing remained significant across both models (β = 0.199, p < 0.001; β = 0.184, p < 0.001), strongly supporting H3a
These results collectively suggest that while social media literacy consistently reduces vulnerability to misinformation stemming from informational motives, its protective role against trust-based sharing may depend on user demographics. The non-significant findings for status-seeking and news-find-me perception indicate boundary conditions for literacy’s moderating capacity.
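The moderation tests rest on product terms between social media literacy and each predictor. A brief sketch of that construction is shown below; the composite-score file and column names are hypothetical, and mean-centering is assumed to ease interpretation of the interaction coefficients:

```python
import pandas as pd

# Hypothetical per-respondent composite scores: SML, four predictors, FNSB.
df = pd.read_csv("composites.csv")

sml_c = df["SML"] - df["SML"].mean()        # mean-center the moderator
for pred in ["NFM", "SMT", "IS", "SSM"]:
    pred_c = df[pred] - df[pred].mean()     # mean-center each predictor
    df[f"SMLx{pred}"] = sml_c * pred_c      # interaction term, e.g., SML*SMT

# FNSB is then regressed on the four predictors, SML, and the four SMLx*
# terms (plus the sociodemographic controls in the second specification).
```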
Table 5. Moderating effect of social media literacy in the absence and presence of the sociodemographic variables.

A. Hypothesized Moderating Effects, Excluding Demographics

| Path | Standardized Estimate | S.E. | C.R. | p | Concluding Remark |
|---|---|---|---|---|---|
| H1a: SML*NFM → FNSB | −0.061 | 0.036 | −1.607 | 0.108 | Not confirmed |
| H2a: SML*SMT → FNSB | 0.097 | 0.033 | 2.590 | 0.010 | Confirmed |
| H3a: SML*IS → FNSB | 0.199 | 0.037 | 5.223 | *** | Confirmed |
| H4a: SML*SSM → FNSB | −0.004 | 0.037 | −0.113 | 0.910 | Not confirmed |
| SML → FNSB | −0.135 | 0.039 | −3.710 | *** | Significant |

B. Hypothesized Moderating Effects, Including Demographics

| Path | Standardized Estimate | S.E. | C.R. | p | Concluding Remark |
|---|---|---|---|---|---|
| H1a: SML*NFM → FNSB | −0.047 | 0.035 | −1.236 | 0.216 | Not confirmed |
| H2a: SML*SMT → FNSB | 0.072 | 0.032 | 1.945 | 0.052 | Partially confirmed |
| H3a: SML*IS → FNSB | 0.184 | 0.037 | 4.840 | *** | Confirmed |
| H4a: SML*SSM → FNSB | 0.018 | 0.036 | 0.472 | 0.637 | Not confirmed |
| SML → FNSB | −0.108 | 0.039 | −2.973 | 0.003 | Significant |
| Education → FNSB | 0.066 | 0.035 | 2.286 | 0.022 | Significant |
| Age → FNSB | 0.057 | 0.026 | 1.975 | 0.048 | Significant |
| Family Status → FNSB | 0.104 | 0.021 | 3.615 | *** | Significant |
| Gender → FNSB | −0.058 | 0.057 | −2.016 | 0.044 | Significant |
| Occupation → FNSB | −0.021 | 0.020 | −0.739 | 0.460 | Not significant |

SML = social media literacy; *** denotes that p is significant at the 0.001 level.
Figure 2. Results of the proposed structural model.
To elucidate social media literacy’s moderating role, we conducted plot analyses (Figure 3), revealing important insights:
  • Information sharing: literacy attenuated the negative association between information sharing and fake news sharing (interaction β = 0.199, p < 0.001; β = 0.184, p < 0.001 with sociodemographic controls);
  • Social media trust: counterintuitively, literacy amplified the positive trust–misinformation link (β = 0.097, p = 0.010), although this effect weakened to marginal significance once controls were added (β = 0.072, p = 0.052).
Among the sociodemographic controls, education (β = 0.066, p = 0.022), age (β = 0.057, p = 0.048), and family status (β = 0.104, p < 0.001) were positive predictors of fake news sharing; gender (β = −0.058, p = 0.044) was a negative predictor; occupation (β = −0.021, p = 0.460) was non-significant.

4.5. fsQCA: Methodology and Solution Configurations

Complementing the results of the SEM analysis of linear relationships, we employed fsQCA to identify complex causal configurations of antecedent conditions that collectively explain fake news sharing behavior. This methodological synergy enhances our understanding of how news-find-me, social media trust, information sharing, status seeking, and social media literacy synergistically influence fake news sharing, moving beyond isolated linear effects to uncover combinatorial pathways [121]. Notably, fsQCA is a systematic methodological approach to identifying causal configurations that determine outcomes across cases [40]. Unlike traditional statistical methods, fsQCA operates on crisp-set or fuzzy logic, enabling researchers to uncover combinatorial patterns and equifinality, that is, multiple pathways leading to the same outcome [34,136]. Interestingly, fsQCA is suitable for accommodating small, medium, and large samples [137].
We used fsQCA version 4.1 to explore the configurations of predictors shown in Figure 1, aiming to identify distinct causal pathways. All causal conditions (the latent variables) were calibrated using fuzzy-set logic, with membership scores ranging from 0 (complete non-membership) to 1 (full membership). Following the systematic calibration approach outlined in previous studies [118,138,139,140], we defined three qualitative breakpoints for each condition: full non-membership = 5th percentile threshold, crossover point = median value (neutral membership), and full membership = 95th percentile threshold. This calibration ensures robust categorization of conditions while preserving their continuous nature (see Table 6). The truth table was built following the calibration, and the necessity conditions analysis was conducted.
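For concreteness, the direct calibration can be sketched as follows, assuming the conventional log-odds transformation implemented in the fsQCA software and synthetic composite scores in place of the study data:

```python
import numpy as np

def calibrate(x, non_member, crossover, full_member):
    """Direct calibration: map raw scores to fuzzy memberships in [0, 1].
    The three anchors correspond to log-odds of -3, 0, and +3, i.e.,
    memberships of ~0.05, 0.5, and ~0.95."""
    x = np.asarray(x, dtype=float)
    log_odds = np.where(
        x >= crossover,
        3.0 * (x - crossover) / (full_member - crossover),
        3.0 * (x - crossover) / (crossover - non_member),
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

scores = np.random.default_rng(2).uniform(1, 5, 1028)  # hypothetical composites
p5, p50, p95 = np.percentile(scores, [5, 50, 95])      # the three breakpoints
membership = calibrate(scores, p5, p50, p95)
```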

4.6. Results of the fsQCA

In principle, a condition is deemed necessary if its consistency value is above 0.90 [31]. As detailed in Table 7, our analysis revealed that no single condition was necessary for either high or low fake news sharing. The truth table analysis, excluding social media literacy as a moderator, is represented by the model: C_SharingFN = f(C_InfoSharing, C_Sseeking, C_NFMe, C_TrustSM). Table A1 and Table A2, shown in Appendix C and Appendix D, highlight three core conditions: the absence of status-seeking behavior (~C_Sseeking), the absence of information sharing (~C_InfoSharing), and the combination of an absent news-find-me perception with present trust in social media (~C_NFMe*C_TrustSM). These conditions consistently appear across the complex, parsimonious, and intermediate solutions presented in Table A1. This consistency suggests that these conditions are core prerequisites for sharing fake news, regardless of model assumptions. The lack of variation between solutions further confirms that no simplifying assumptions (e.g., logical remainders) altered these core pathways, and no peripheral conditions were identified.
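The 0.90 necessity benchmark follows from the standard fuzzy-set consistency formulas, sketched below with synthetic memberships; negated conditions such as ~C_Sseeking are simply one minus the calibrated membership:

```python
import numpy as np

def necessity_consistency(x, y):
    """Consistency of X as necessary for Y: sum(min(x, y)) / sum(y);
    values above 0.90 are conventionally read as necessity."""
    return np.minimum(x, y).sum() / y.sum()

def sufficiency_consistency(x, y):
    """Consistency of X as sufficient for Y: sum(min(x, y)) / sum(x)."""
    return np.minimum(x, y).sum() / x.sum()

rng = np.random.default_rng(3)
c_sseeking = rng.uniform(0, 1, 1028)    # hypothetical calibrated memberships
c_sharingfn = rng.uniform(0, 1, 1028)

print(necessity_consistency(1 - c_sseeking, c_sharingfn))    # ~C_Sseeking necessary?
print(sufficiency_consistency(1 - c_sseeking, c_sharingfn))  # ~C_Sseeking sufficient?
```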
Ma, Cheng [139] proposed a two-stage approach to examining moderation using fsQCA, emphasizing its methodological advantages over traditional methods. This approach involves the following: (1) identifying primary conjunctural causation (causal configurations leading to an outcome) without the moderator; (2) re-running QCA with the moderator to assess its influence by analyzing how it alters interdependencies among causal factors in the initial configurations (see Table A3 and Table A4 in Appendix E and Appendix F). Furthermore, Ma, Cheng [139] suggest three requirements when testing moderation with fsQCA:
  • Configuration Stability: The core causal recipe must remain invariant across both analytical stages, with no alterations to its constituent factors.
  • Conditional Transformation: At minimum, one moderated configuration must exhibit transition(s) in condition status (core⇄peripheral) between stages.
  • Moderator Centrality: In at least one qualifying configuration, the moderator must function as a core (rather than peripheral) present condition.
Our fsQCA results demonstrate that social media literacy functions as a qualified moderator through the following evidentiary support:
  • Configuration Stability: The core solution terms remained identical between the non-moderated and moderated analyses (Table A1 and Table A2 vs. Table A3 and Table A4), satisfying Criterion 1 of complete causal recipe invariance.
  • Moderator Centrality: Social media literacy emerged as a core present condition in Configuration 3 (Table A3), fulfilling Criterion 3’s requirement for moderator significance.
  • Conditional Transformation: While we observed stability in solution terms, the anticipated transitions between core and peripheral conditions (Criterion 2) were not fully evidenced. This partial fulfillment (2/3 criteria) suggests social media literacy operates as a selective moderator—effectively influencing certain causal pathways but not fundamentally restructuring condition hierarchies.

5. Discussion

This study investigated how news-find-me perception, social media trust, status-seeking, and information sharing influence fake news sharing behavior. We also examined how social media literacy moderates these relationships. We tested the underlying relationships using SEM and fsQCA. A notable and unexpected finding was the lack of a direct effect of news-find-me perception on fake news sharing behavior, contradicting prior scholarly research by Wei, Gong [28] and Apuke and Omar [34]. Theoretically, this suggests that the mechanism of news discovery (news-find-me) may not be a primary determinant in the decision to share fake news. This aligns with Rational Choice Theory [38], which postulates that individuals assess potential benefits and drawbacks before acting. Conversely, this result challenges Uses and Gratifications Theory [36], which proposes that individual needs and desires drive media choices.
Conflicting with previous findings, e.g., [28,44,45], the desire for social status (status-seeking) did not significantly influence fake news sharing behavior. This aligns with an earlier finding by Apuke and Omar [32]. According to Uses and Gratifications Theory [36], individuals share information to satisfy personal needs. However, our finding of a non-significant effect of status-seeking calls into question the idea that fake news is primarily shared for social gain. Rational Choice Theory [38] offers a possible explanation: people might not perceive a clear benefit (like increased social status) or cost (like damage to reputation) associated with sharing fake news. This perspective aligns with our finding that information sharing itself had a negative and significant effect on fake news sharing, contradicting previous research [27,28]. Theoretically, this suggests that individuals who actively share information might be more discerning and less likely to share something they perceive as uninformed or lacking value. Our results showed that social media trust significantly increased fake news sharing, which aligns with previous studies [28,66]. These findings reflect the utility-maximizing principles of Rational Choice Theory, where individuals function as strategic information evaluators: actively disseminating content perceived to enhance social capital (trusted information) while deliberately suppressing potential reputation threats (untrusted information). In this context, trusting social media leads individuals to share fake news because they perceive it as trustworthy.
Existing literature establishes that limited digital literacy impedes fake news detection [141], while developed social media literacy enhances misinformation recognition and curbs its dissemination [95,97,142]. Our analysis reveals a more nuanced relationship: social media literacy’s moderation of the social media trust effect was significant only when the control variables were excluded, whereas its moderation of the information-sharing effect held in both specifications. This pattern partially corroborates Wei et al.’s [28] results, though our inclusion of control variables reveals an important boundary condition: social media literacy’s moderating effect on the social media trust–fake news sharing relationship was sensitive to contextual factors, diminishing to marginal significance in the controlled model.
Our analysis of social media literacy’s moderating effects revealed both anticipated and surprising results. As expected, social media literacy weakens the negative influence of information sharing on fake news sharing. This means that people with higher social media literacy are less likely to share fake news, even if they share information frequently [143]. This finding supports the idea that social media education can equip individuals with critical thinking skills to curb the spread of misinformation. However, a surprising finding emerged concerning social media trust. Our analysis suggests that social media literacy may amplify the positive relationship between social media trust and fake news sharing. Specifically, individuals with high social media literacy who also exhibit high trust in social media platforms may be more inclined to share fake news. One potential explanation for this result is that heightened social media literacy could lead individuals to overestimate their capacity to discern credible information, paradoxically increasing their susceptibility to sharing biased or inaccurate content from trusted sources. By revealing both anticipated and unexpected outcomes, this study highlights the nuanced role of social media literacy.
Additionally, the results of the fsQCA provided crucial insights into fake news sharing, complementing and reinforcing the findings from the SEM analysis. For instance, the fsQCA revealed three consistent core configurations: (1) low status-seeking, (2) low general information sharing, and (3) a unique interaction of low news-find-me orientation coupled with high trust in social media. While social media literacy did not drastically alter these primary configurations, its low or absent state emerged as a direct core configuration for fake news sharing, strongly supporting its significant moderating effect. These empirical observations offer a robust foundation for deeper theoretical interpretation and also contribute to the corpus of the literature on fake news sharing.

5.1. Theoretical Contributions

This study makes four key theoretical contributions to the literature on fake news sharing, drawing from Rational Choice Theory (RCT) and Uses and Gratifications Theory (UGT). First, this study introduces a novel dual-theoretical framework that synthesizes RCT and UGT to explain fake news sharing behavior. RCT elucidates how individuals engage in cost-benefit evaluations—balancing perceived rewards (e.g., social validation) against risks (e.g., reputational harm)—while UGT frames media engagement as a means of fulfilling psychological and social needs (e.g., entertainment, connection, or cognitive ease). The integration of these theories offers a richer, multidimensional understanding of why users choose to share potentially inaccurate content online. Second, this study leverages both Structural Equation Modeling (SEM) and Fuzzy-set Qualitative Comparative Analysis (fsQCA) to offer complementary empirical insights. SEM reveals direct and moderating relationships among key variables, while fsQCA uncovers complex configurations of conditions that lead to fake news sharing. For example, a combination of low “news-find-me orientation” and high social media trust emerged as a core configuration in the fsQCA results. This supports the RCT concept of bounded rationality—where users accept algorithmically curated content as reliable due to perceived effort-reduction—and aligns with UGT’s notion of passive gratification through reduced cognitive load.
Third, the findings challenge the traditional assumption that fake news sharing is purely irrational or malicious. Instead, this study reframes such behavior as subjectively rational, driven by cognitive shortcuts and emotionally satisfying choices. For instance, users with low status-seeking tendencies may avoid fake news sharing to reduce reputational costs (RCT) while also showing less interest in self-presentation gratifications (UGT). This dual framing enriches both theoretical perspectives by embedding fake news sharing behavior within the practical limitations of attention, motivational salience, and cognitive processing. Fourth and last, the identification of low SML as a standalone causal pathway contributes to both RCT and UGT. Under RCT, poor digital literacy impairs individuals’ ability to assess risks and benefits accurately, leading to flawed decision-making. Within the framework of Uses and Gratifications Theory, this behavior can be understood as mis-gratification, wherein users seek to fulfill emotional or social needs—such as entertainment or a sense of group belonging—by sharing content, even when that content is inaccurate or false. This finding positions SML not just as a moderating factor but as a core mechanism shaping digital behavior, thus expanding theoretical models of fake news sharing.

5.2. Practical Implications

The widespread dissemination of fake news presents substantial threats to economic stability and social cohesion, with particularly severe consequences for developing nations like Haiti. Addressing these dangers necessitates context-specific interventions from NGOs, businesses, and policymakers that carefully consider local sociodemographic factors. As emphasized by Ma, Wang [144], advancing cross-sectoral collaboration in misinformation governance is essential for building resilient and transparent information ecosystems that uphold public trust. Central to this effort is the design and implementation of contextualized educational frameworks and targeted awareness campaigns. To maximize impact, these initiatives must leverage diverse communication channels, spanning traditional media, formal education systems (e.g., integrating social media literacy into school curricula), and the digital platforms where misinformation proliferates. Such multi-pronged strategies empower individuals to critically evaluate information, discern credible sources from fabricated narratives, and actively mitigate the dissemination of disinformation. By embedding these competencies into societal structures, stakeholders can cultivate a public capable of navigating complex media landscapes while fostering collective accountability for information integrity. Crucially, social media literacy programs should incorporate practical training on readily available fake news detection tools, such as fact-checking algorithms and source verification frameworks. Equipping users with the ability to proactively utilize these resources can significantly slow the viral spread of false information while promoting a culture of accountability among content creators and platforms.

5.3. Limitations

While this research advances the understanding of fake news sharing behavior in understudied contexts, three key limitations merit acknowledgment. First, the findings derive from data collected in Haiti, a setting marked by acute socio-political instability, economic precarity, and one of the world's lowest digital literacy rates. These contextual particularities, compounded by the absence of institutional anti-misinformation initiatives, may limit the generalizability of the results to stable, high-literacy environments. Cross-national comparative studies using standardized instruments are urgently needed to disentangle universal behavioral drivers from context-contingent factors. Second, while our framework integrated salient predictors (e.g., status-seeking motives), it omitted potentially critical variables such as confirmation bias, algorithmic amplification patterns, and cross-platform sharing norms. Future models could strengthen their predictive power by incorporating these elements through mixed-method designs that pair surveys with computational trace analyses. Third, the self-report measures, though psychometrically validated, risk conflating intentional sharing with accidental dissemination. Complementary experimental paradigms (e.g., simulated social media tasks) could objectively differentiate deliberate from negligent fake news sharing behaviors.

6. Concluding Remarks

This study employed an integrated methodological approach combining Structural Equation Modeling (SEM) and Fuzzy-Set Qualitative Comparative Analysis (fsQCA) to address three core objectives: examining the direct effects of news-find-me perception, social media trust, status-seeking motivation, and information sharing on fake news sharing behavior; exploring the moderating influence of social media literacy; and uncovering configurational patterns that contribute to fake news sharing behavior. The SEM analysis revealed that news-find-me perception exerted a negative but statistically insignificant influence on fake news sharing behavior, whereas information sharing demonstrated a significant negative effect. In contrast, trust in social media was positively and significantly associated with fake news sharing, while status-seeking motivation showed a positive but statistically insignificant relationship with fake news sharing behavior.
The configurational insights from fsQCA added nuance by identifying multiple causal pathways leading to fake news sharing behavior. Key configurations included low status-seeking motivation, limited general information sharing, and a distinct pattern marked by low news-find-me orientation coupled with high trust in social media. Beyond these pathways, low social media literacy emerged as a standalone core condition, underscoring its foundational role in vulnerability to fake news. Theoretically, this study advances the Uses and Gratifications Theory by elucidating how diverse motivational constructs intersect with patterns of media engagement to influence fake news sharing behavior. It also extends Rational Choice Theory by demonstrating that perceptions of utility and risk are shaped not solely through objective evaluation but via bounded rationality, where digital competencies and motivational dispositions guide the use of heuristics in information processing and dissemination. Although the findings offer substantive contributions to the literature, this study acknowledges its methodological and contextual limitations and identifies pathways for future inquiry.

Author Contributions

Conceptualization, C.M.; Methodology, C.M. and H.S.; Validation, C.M., H.S. and H.P.D.; Formal analysis, C.M. and H.P.D.; Data curation, H.P.D.; Writing—original draft, C.M.; Writing—review & editing, H.S.; Visualization, H.P.D.; Supervision, C.M.; Project administration, C.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

The authors adhered to all ethical standards applicable to the conduct of research involving human participants. This study received ethical review and approval from the Research Ethics Committee of Rezo Inovasyon Edikatif Ayisyen (RINOVEDA) on 25 November 2022.

Informed Consent Statement

Prior to their participation, all respondents provided explicit written informed consent. This was obtained after a comprehensive verbal and written explanation of this study’s purpose, procedures, benefits, expected duration, and confidentiality measures. Participants were explicitly informed that their participation was entirely voluntary, they had the right to withdraw at any time without penalty or consequence, and that refusal to participate would not affect any pre-existing relationship.

Data Availability Statement

Data and materials can be found at https://doi.org/10.7910/DVN/TXHJMM.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Itemized measurements and sources.

Construct | Code | Item | Authors
Fake news-sharing behavior | SharingFNs1 (deleted) | I shared news on social media that I recognized as visibly fabricated or intentionally misleading. | [27]; Wei, Gong [28]; Apuke and Omar [34]
| SharingFNs2 | I shared news on social media that I strongly suspected was false but chose to post anyway. |
| SharingFNs3 | I shared exaggerated claims on social media, aware they were inflated, but did not check their accuracy. |
| SharingFNs4 | I intentionally shared exaggerated news on social media to attract attention, knowing it was misleading. |
| SharingFNs5 | I shared news on social media without assessing its credibility, even though I felt uncertain about its accuracy. |
| SharingFNs6 | Despite concerns that it might be inauthentic, I shared news on social media without confirming its source. |
| SharingFNs7 | I shared news on social media after only skimming it, even though I doubted its truthfulness. |
| SharingFNs8 | I shared news on social media, fully aware it likely contained inaccuracies or half-truths. |
News-find-me | NewsFM1 | When news is released, I rely on my friends to inform me of the essentials. | Wei, Gong [28]; Apuke and Omar [34]
| NewsFM2 | Despite not actively keeping up with the news, I can stay well-informed. |
| NewsFM3 | I do not feel pressured to stay updated with the news because I trust that it will reach me. |
| NewsFM4 | I depend on the news shared by my friends, tailored to their interests or social media activities. |
Status seeking | SSeeking1 | Posting news on social media makes me feel significant. | Thompson, Wang [69]; Apuke and Omar [34]
| SSeeking2 | Sharing information on social media enhances my sense of status. |
| SSeeking3 | Utilizing social media for information dissemination boosts my professional image. |
| SSeeking4 (deleted) | I post news on social media because my peers are pushing me to get involved. |
| SSeeking5 | I utilize social media platforms to share news and garner support and respect. |
Information sharing | InfoSharing1 | I share content that may be valuable to others on social media. | Thompson, Wang [69]; Apuke and Omar [34]
| InfoSharing2 | I share information on social media to gather feedback on my findings. |
| InfoSharing3 | I share information on social media to keep others informed. |
| InfoSharing4 | I share practical knowledge and skills with others through social media. |
| InfoSharing5 | I use social media as a platform for easy self-expression. |
| InfoSharing6 | I share engaging content on social media that may interest or entertain others. |
| InfoSharing7 (deleted) | I share personal insights about myself on social media. |
| InfoSharing8 | I use social media to offer a glimpse into my life and experiences. |
Trust in social media platforms | TrustSNS1 | Social networking platforms serve as dependable social networks. | Laato, Islam [29]; Apuke and Omar [34]
| TrustSNS2 | I trust social media sites to safeguard my privacy and personal information. |
| TrustSNS3 | I rely on social media platforms to protect my data from unauthorized access. |
| TrustSNS4 | I have confidence in social media platforms to fulfill their commitments. |
Social media literacy | SMLit1 (deleted) | I possess the knowledge to create a social media account. | Wei, Gong [28]
| SMLit2 (deleted) | I am familiar with the process of deleting or deactivating my social media account. |
| SMLit3 (deleted) | I understand how to post content, such as photos, on my social media profiles. |
| SMLit4 (deleted) | I know how to remove undesirable content from my social media accounts. |
| SMLit5 | I am knowledgeable about copyright laws governing social media platforms. |
| SMLit6 | I am adept at managing conflicts that arise on social media. |
| SMLit7 | I understand the dynamics and etiquette of social media platforms. |
| SMLit8 | I can verify the accuracy of information shared on social media using various sources. |
| SMLit9 | I can discern whether information on social media is true or false. |
| SMLit10 (deleted) | Social media platforms like Facebook curate the content I see. |
| SMLit11 | The content I post on social media remains permanent. |
| SMLit12 | The advertisements I encounter on social media are tailored to my preferences. |
| SMLit13 | I utilize social media platforms to share news and garner support and respect. |
| SMLit14 (deleted) | When engaging with social media, I become deeply absorbed. |

Appendix B. Scattergrams of fsQCA Solutions


Appendix C

Table A1. Results of the analysis of the truth table, excluding the moderator.

--- COMPLEX SOLUTION ---
frequency cutoff: 10; consistency cutoff: 0.815266

Configuration | Raw Coverage | Unique Coverage | Consistency
~C_Sseeking | 0.645516 | 0.0671741 | 0.686231
~C_InfoSharing | 0.720071 | 0.123247 | 0.71413
~C_NFMe*C_TrustSM | 0.427153 | 0.0210943 | 0.80281
solution coverage: 0.835441; solution consistency: 0.657643

--- PARSIMONIOUS SOLUTION ---
frequency cutoff: 10; consistency cutoff: 0.815266

Configuration | Raw Coverage | Unique Coverage | Consistency
~C_Sseeking | 0.645516 | 0.0671741 | 0.686231
~C_InfoSharing | 0.720071 | 0.123247 | 0.71413
~C_NFMe*C_TrustSM | 0.427153 | 0.0210943 | 0.80281
solution coverage: 0.835441; solution consistency: 0.657643

--- INTERMEDIATE SOLUTION ---
frequency cutoff: 10; consistency cutoff: 0.815266

Configuration | Raw Coverage | Unique Coverage | Consistency
~C_Sseeking | 0.645516 | 0.0671741 | 0.686231
~C_InfoSharing | 0.720071 | 0.123247 | 0.71413
~C_NFMe*C_TrustSM | 0.427153 | 0.0210943 | 0.80281
solution coverage: 0.835441; solution consistency: 0.657643
Note: the tilde (~) indicates the absence or low membership in the set. * indicates the logical AND operator.
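For transparency, the sketch below illustrates how the raw coverage, unique coverage, and solution coverage figures in Table A1 are computed [44,118]. It runs on synthetic (hypothetical) memberships, so the printed numbers will not match the table; only the formulas are the point. Term names mirror the table.

```python
import numpy as np

# A minimal sketch, assuming synthetic uniform memberships in place of the
# calibrated survey data (values will not reproduce Table A1).
rng = np.random.default_rng(7)
n = 1028
m = {k: rng.uniform(0, 1, n) for k in ("Sseeking", "InfoSharing", "NFMe", "TrustSM")}
y = rng.uniform(0, 1, n)  # outcome: fake news sharing

terms = {
    "~C_Sseeking": 1 - m["Sseeking"],
    "~C_InfoSharing": 1 - m["InfoSharing"],
    "~C_NFMe*C_TrustSM": np.minimum(1 - m["NFMe"], m["TrustSM"]),
}

def coverage(x, y):
    # Sufficiency (raw) coverage: overlap with the outcome, relative to it.
    return np.minimum(x, y).sum() / y.sum()

solution = np.maximum.reduce(list(terms.values()))  # logical OR across recipes
for name, x in terms.items():
    others = np.maximum.reduce([v for k, v in terms.items() if k != name])
    unique = coverage(solution, y) - coverage(others, y)  # unique coverage
    print(f"{name}: raw={coverage(x, y):.3f}, unique={unique:.3f}")
print(f"solution coverage = {coverage(solution, y):.3f}")
```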

Appendix D

Table A2. Results of fsQCA for conjunctural causations for fake news sharing behaviors.

Conditions | Solution 1 | Solution 2 | Solution 3
News find me | | | X
Social media trust | | | ●
Information sharing | | X |
Status seeking | X | |
solution coverage: 0.835441
solution consistency: 0.657643
Black Circles denote core conditions; X Marks indicate the absence of a core condition; Blank Spaces denote “don’t care” conditions. No peripheral conditions were detected in the solutions.

Appendix E

Table A3. Results of the analysis of the truth table, including the moderator (SM Literacy).

--- COMPLEX SOLUTION ---
frequency cutoff: 2; consistency cutoff: 0.821985

Configuration | Raw Coverage | Unique Coverage | Consistency
~C_Literacy | 0.708607 | 0.0440198 | 0.708498
~C_Sseeking | 0.645514 | 0.0318702 | 0.686231
~C_InfoSharing | 0.720069 | 0.039423 | 0.714132
~C_NFMe*C_TrustSM | 0.427152 | 0.0135803 | 0.802809
solution coverage: 0.879461; solution consistency: 0.638975

--- PARSIMONIOUS SOLUTION ---
frequency cutoff: 2; consistency cutoff: 0.821985

Configuration | Raw Coverage | Unique Coverage | Consistency
~C_Literacy | 0.708607 | 0.0440198 | 0.708498
~C_Sseeking | 0.645514 | 0.0318702 | 0.686231
~C_InfoSharing | 0.720069 | 0.039423 | 0.714132
~C_NFMe*C_TrustSM | 0.427152 | 0.0135803 | 0.802809
solution coverage: 0.879461; solution consistency: 0.638975

--- INTERMEDIATE SOLUTION ---
frequency cutoff: 10; consistency cutoff: 0.815266

Configuration | Raw Coverage | Unique Coverage | Consistency
~C_Sseeking | 0.645514 | 0.0318702 | 0.686231
~C_InfoSharing | 0.720069 | 0.039423 | 0.714132
~C_NFMe*C_TrustSM | 0.427152 | 0.0135803 | 0.802809
~C_Literacy | 0.708607 | 0.0440198 | 0.708498
solution coverage: 0.879461; solution consistency: 0.638975
Note: the tilde (~) indicates the absence or low membership in the set. * indicates the logical AND operator.

Appendix F

Table A4. Moderating influences of social media literacy.

Conditions | Solution 1 | Solution 2 | Solution 3 | Solution 4
News-find-me | | | X |
Social media trust | | | ● |
Information sharing | | X | |
Status seeking | X | | |
SM literacy | | | | X
Solution Coverage: 0.879461
Solution Consistency: 0.638975
Black Circles denote core conditions; X Marks indicate the absence of a core condition; Blank Spaces denote "don’t care" conditions. No peripheral conditions were detected in the solutions.

References

  1. Zhou, Q.; Li, B.; Scheibenzuber, C.; Li, H. Fake news land? Exploring the impact of social media affordances on user behavioral responses: A mixed-methods research. Comput. Hum. Behav. 2023, 148, 107889. [Google Scholar] [CrossRef]
  2. Cano-Marin, E.; Mora-Cantallops, M.; Sanchez-Alonso, S. The power of big data analytics over fake news: A scientometric review of Twitter as a predictive system in healthcare. Technol. Forecast. Soc. Change 2023, 190, 122386. [Google Scholar] [CrossRef]
  3. Li, N. Analyzing the Complexity of Public Opinion Evolution on Weibo: A Super Network Model. J. Knowl. Econ. 2024, 16, 3404–3439. [Google Scholar] [CrossRef]
  4. Xie, Z.; Chiu, D.K.W.; Ho, K.K.W. The Role of Social Media as Aids for Accounting Education and Knowledge Sharing: Learning Effectiveness and Knowledge Management Perspectives in Mainland China. J. Knowl. Econ. 2024, 15, 2628–2655. [Google Scholar] [CrossRef] [PubMed]
  5. Chaudhuri, N.; Gupta, G.; Popovič, A. Do you believe it? Examining user engagement with fake news on social media platforms. Technol. Forecast. Soc. Change 2025, 212, 123950. [Google Scholar] [CrossRef]
  6. Hossain, M.A.; Quaddus, M.; Warren, M.; Akter, S.; Pappas, I. Are you a cyberbully on social media? Exploring the personality traits using a fuzzy-set configurational approach. Int. J. Inf. Manag. 2022, 66, 102537. [Google Scholar] [CrossRef]
  7. Lee, J.; Kim, K.; Park, G.; Cha, N. The role of online news and social media in preventive action in times of infodemic from a social capital perspective: The case of the COVID-19 pandemic in South Korea. Telemat. Inform. 2021, 64, 101691. [Google Scholar] [CrossRef]
  8. Raj, C.; Meel, P. People lie, actions Don’t! Modeling infodemic proliferation predictors among social media users. Technol. Soc. 2022, 68, 101930. [Google Scholar] [CrossRef]
  9. Pang, H.; Liu, J.; Lu, J. Tackling fake news in socially mediated public spheres: A comparison of Weibo and WeChat. Technol. Soc. 2022, 70, 102004. [Google Scholar] [CrossRef]
  10. Marsh, E.J.; Stanley, M.L. False beliefs: Byproducts of an adaptive knowledge base? In The Psychology of Fake News; Routledge: London, UK, 2020; pp. 131–146. [Google Scholar]
  11. Scheibenzuber, C.; Hofer, S.; Nistor, N. Designing for fake news literacy training: A problem-based undergraduate online-course. Comput. Hum. Behav. 2021, 121, 106796. [Google Scholar] [CrossRef]
  12. Shu, K.; Sliva, A.; Wang, S.; Tang, J.; Liu, H. Fake news detection on social media: A data mining perspective. ACM SIGKDD Explor. Newsl. 2017, 19, 22–36. [Google Scholar] [CrossRef]
  13. Velichety, S.; Shrivastava, U. Quantifying the impacts of online fake news on the equity value of social media platforms—Evidence from Twitter. Int. J. Inf. Manag. 2022, 64, 102474. [Google Scholar] [CrossRef]
  14. Carlson, M. Fake news as an informational moral panic: The symbolic deviancy of social media during the 2016 US presidential election. Inf. Commun. Soc. 2020, 23, 374–388. [Google Scholar] [CrossRef]
  15. Kantartopoulos, P.; Pitropakis, N.; Mylonas, A.; Kylilis, N. Exploring Adversarial Attacks and Defences for Fake Twitter Account Detection. Technologies 2020, 8, 64. [Google Scholar] [CrossRef]
  16. Olan, F.; Jayawickrama, U.; Arakpogun, E.O.; Suklan, J.; Liu, S. Fake news on Social Media: The Impact on Society. Inf. Syst. Front. 2024, 26, 443–458. [Google Scholar] [CrossRef]
  17. Byun, K.J.; Park, J.; Yoo, S.; Cho, M. Has the COVID-19 pandemic changed the influence of word-of-mouth on purchasing decisions? J. Retail. Consum. Serv. 2023, 74, 103411. [Google Scholar] [CrossRef]
  18. Bermes, A. Information overload and fake news sharing: A transactional stress perspective exploring the mitigating role of consumers’ resilience during COVID-19. J. Retail. Consum. Serv. 2021, 61, 102555. [Google Scholar] [CrossRef]
  19. Matthes, J.; Corbu, N.; Jin, S.; Theocharis, Y.; Schemer, C.; van Aelst, P.; Strömbäck, J.; Koc-Michalska, K.; Esser, F.; Aalberg, T.; et al. Perceived prevalence of misinformation fuels worries about COVID-19: A cross-country, multi-method investigation. Inf. Commun. Soc. 2023, 26, 3133–3156. [Google Scholar] [CrossRef]
  20. Liu, Y.; Liu, S.; Ye, D.; Tang, H.; Wang, F. Dynamic impact of negative public sentiment on agricultural product prices during COVID-19. J. Retail. Consum. Serv. 2022, 64, 102790. [Google Scholar] [CrossRef]
  21. Harris, S.; Hadi, H.J.; Ahmad, N.; Alshara, M.A. Fake News Detection Revisited: An Extensive Review of Theoretical Frameworks, Dataset Assessments, Model Constraints, and Forward-Looking Research Agendas. Technologies 2024, 12, 222. [Google Scholar] [CrossRef]
  22. Duffy, A.; Tandoc, E.; Ling, R. Too good to be true, too good not to share: The social utility of fake news. Inf. Commun. Soc. 2020, 23, 1965–1979. [Google Scholar] [CrossRef]
  23. Wasserman, H.; Madrid-Morales, D. An Exploratory Study of “Fake News” and Media Trust in Kenya, Nigeria and South Africa. Afr. J. Stud. 2019, 40, 107–123. [Google Scholar] [CrossRef]
  24. Masavah, V.M.; Turpin, M. Fake News in Developing Countries: Drivers, Mechanisms and Consequences; Springer Nature: Cham, Switzerland, 2024. [Google Scholar]
  25. Nenno, S.; Puschmann, C. All The (Fake) News That’s Fit to Share? News Values in Perceived Misinformation across Twenty-Four Countries. Int. J. Press/Politics 2025. [Google Scholar] [CrossRef]
  26. Talwar, S.; Dhir, A.; Kaur, P.; Zafar, N.; Alrasheedy, M. Why do people share fake news? Associations between the dark side of social media use and fake news sharing behavior. J. Retail. Consum. Serv. 2019, 51, 72–82. [Google Scholar] [CrossRef]
  27. Apuke, O.D.; Omar, B. Fake news and COVID-19: Modelling the predictors of fake news sharing among social media users. Telemat. Inform. 2021, 56, 101475. [Google Scholar] [CrossRef]
  28. Wei, L.; Gong, J.; Xu, J.; Abidin, N.E.Z.; Apuke, O.D. Do social media literacy skills help in combating fake news spread? Modelling the moderating role of social media literacy skills in the relationship between rational choice factors and fake news sharing behaviour. Telemat. Inform. 2023, 76, 101910. [Google Scholar] [CrossRef]
  29. Laato, S.; Islam, A.K.M.N.; Islam, M.N.; Whelan, E. What drives unverified information sharing and cyberchondria during the COVID-19 pandemic? Eur. J. Inf. Syst. 2020, 29, 288–305. [Google Scholar] [CrossRef]
  30. Gray, R.; Owen, D.; Adams, C. Some theories for social accounting?: A review essay and a tentative pedagogic categorisation of theorisations around social accounting. In Sustainability, Environmental Performance and Disclosures; Freedman, M., Jaggi, B., Eds.; Emerald Group Publishing Limited: Leeds, UK, 2009; pp. 1–54. [Google Scholar]
  31. Fernando, S.; Lawrence, S. A theoretical framework for CSR practices: Integrating legitimacy theory, stakeholder theory and institutional theory. J. Theor. Account. Res. 2014, 10, 149–178. [Google Scholar]
  32. Apuke, O.D.; Omar, B. Modelling the antecedent factors that affect online fake news sharing on COVID-19: The moderating role of fake news knowledge. Health Educ. Res. 2020, 35, 490–503. [Google Scholar] [CrossRef]
  33. Wang, X.; Chao, F.; Yu, G.; Zhang, K. Factors influencing fake news rebuttal acceptance during the COVID-19 pandemic and the moderating effect of cognitive ability. Comput. Hum. Behav. 2022, 130, 107174. [Google Scholar] [CrossRef]
  34. Apuke, O.D.; Omar, B. Social media affordances and information abundance: Enabling fake news sharing during the COVID-19 health crisis. Health Inform. J. 2021, 27, 14604582211021470. [Google Scholar] [CrossRef]
  35. Talwar, S.; Dhir, A.; Singh, D.; Virk, G.S.; Salo, J. Sharing of fake news on social media: Application of the honeycomb framework and the third-person effect hypothesis. J. Retail. Consum. Serv. 2020, 57, 102197. [Google Scholar] [CrossRef]
  36. Balakrishnan, V.; Ng, K.S.; Rahim, H.A. To share or not to share—The underlying motives of sharing fake news amidst the COVID-19 pandemic in Malaysia. Technol. Soc. 2021, 66, 101676. [Google Scholar] [CrossRef]
  37. Ahmed, S. Disinformation Sharing Thrives with Fear of Missing Out among Low Cognitive News Users: A Cross-national Examination of Intentional Sharing of Deep Fakes. J. Broadcast. Electron. Media 2022, 66, 89–109. [Google Scholar] [CrossRef]
  38. Aoun Barakat, K.; Dabbous, A.; Tarhini, A. An empirical approach to understanding users’ fake news identification on social media. Online Inf. Rev. 2021, 45, 1080–1096. [Google Scholar] [CrossRef]
  39. Kumar, A.; Shankar, A.; Behl, A.; Arya, V.; Gupta, N. Should I share it? Factors influencing fake news-sharing behaviour: A behavioural reasoning theory perspective. Technol. Forecast. Soc. Change 2023, 193, 122647. [Google Scholar] [CrossRef]
  40. Ostrom, E. A Behavioral Approach to the Rational Choice Theory of Collective Action: Presidential Address, American Political Science Association, 1997. Am. Political Sci. Rev. 1998, 92, 1–22. [Google Scholar] [CrossRef]
  41. Driscoll, A.; Krook, M.L. Feminism and rational choice theory. Eur. Political Sci. Rev. 2012, 4, 195–216. [Google Scholar] [CrossRef]
  42. Blumler, J.G.; Katz, E. The Uses of Mass Communications: Current Perspectives on Gratifications Research; Sage: Beverly Hills, CA, USA, 1974. [Google Scholar]
  43. Halpern, D.; Valenzuela, S.; Katz, J.; Miranda, J.P. From Belief in Conspiracy Theories to Trust in Others: Which Factors Influence Exposure, Believing and Sharing Fake News; Springer International Publishing: Cham, Switzerland, 2019. [Google Scholar]
  44. Ragin, C.C. Redesigning Social Inquiry: Fuzzy Sets and Beyond; University of Chicago Press: Chicago, IL, USA, 2009. [Google Scholar]
  45. Fiss, P.C. A set-theoretic approach to organizational configurations. Acad. Manag. Rev. 2007, 32, 1180–1198. [Google Scholar] [CrossRef]
  46. Browning, G.; Webster, F.; Halcli, A. Understanding contemporary society: Theories of the present. In Understanding Contemporary Society; Sage Publications: Singapore, 1999; pp. 1–520. [Google Scholar]
  47. Hopmann, D.N.; Wonneberger, A.; Shehata, A.; Höijer, J. Selective Media Exposure and Increasing Knowledge Gaps in Swiss Referendum Campaigns. Int. J. Public Opin. Res. 2015, 28, 73–95. [Google Scholar] [CrossRef]
  48. de Zúñiga, H.G.; Cheng, Z. Origin and evolution of the News Finds Me perception: Review of theory and effects. Prof. Inf. 2021, 30, 1–17. [Google Scholar] [CrossRef]
  49. Gil de Zúñiga, H.; Diehl, T. News finds me perception and democracy: Effects on political knowledge, political interest, and voting. New Media Soc. 2019, 21, 1253–1271. [Google Scholar] [CrossRef]
  50. Pentina, I.; Tarafdar, M. From “information” to “knowing”: Exploring the role of social media in contemporary news consumption. Comput. Hum. Behav. 2014, 35, 211–223. [Google Scholar] [CrossRef]
  51. Toff, B.; Nielsen, R.K. “I Just Google It”: Folk Theories of Distributed Discovery. J. Commun. 2018, 68, 636–657. [Google Scholar] [CrossRef]
  52. Gil de Zúñiga, H.; Weeks, B.; Ardèvol-Abreu, A. Effects of the News-Finds-Me Perception in Communication: Social Media Use Implications for News Seeking and Learning About Politics. J. Comput.-Mediat. Commun. 2017, 22, 105–123. [Google Scholar] [CrossRef]
  53. Ruggiero, T.E. Uses and gratifications theory in the 21st century. Mass Commun. Soc. 2000, 3, 3–37. [Google Scholar] [CrossRef]
  54. Müller, T.; Husain, M.; Apps, M.A. Preferences for seeking effort or reward information bias the willingness to work. Sci. Rep. 2022, 12, 19486. [Google Scholar] [CrossRef] [PubMed]
  55. Caplin, A.; Dean, M. Revealed Preference, Rational Inattention, and Costly Information Acquisition. Am. Econ. Rev. 2014, 105, 2183–2203. [Google Scholar] [CrossRef]
  56. Chong, D.; Mullinix, K. Rational Choice and Information Processing. In The Cambridge Handbook of Political Psychology; Cambridge University Press: Cambridge, UK, 2022; pp. 118–138. [Google Scholar]
  57. Tandoc, E.C.; Ling, R.; Westlund, O.; Duffy, A.; Goh, D.; Wei, L.Z. Audiences’ acts of authentication in the age of fake news: A conceptual framework. New Media Soc. 2018, 20, 2745–2763. [Google Scholar] [CrossRef]
  58. Omar, B.; Dequan, W. Watch, Share or Create: The Influence of Personality Traits and User Motivation on TikTok Mobile Video Usage. 2020. Available online: https://www.learntechlib.org/p/216454/?nl=1 (accessed on 23 March 2024).
  59. Fiss, P.C. Building Better Causal Theories: A Fuzzy Set Approach to Typologies in Organization Research. Acad. Manag. J. 2011, 54, 393–420. [Google Scholar] [CrossRef]
  60. McLoughlin, K.L.; Brady, W.J. Human-algorithm interactions help explain the spread of misinformation. Curr. Opin. Psychol. 2024, 56, 101770. [Google Scholar] [CrossRef]
  61. Mayer, R.C.; Davis, J.H.; Schoorman, F.D. An integrative model of organizational trust. Acad. Manag. Rev. 1995, 20, 709–734. [Google Scholar] [CrossRef]
  62. DuBois, T.; Golbeck, J.; Srinivasan, A. Predicting Trust and Distrust in Social Networks. In Proceedings of the 2011 IEEE Third International Conference on Privacy, Security, Risk and Trust and 2011 IEEE Third International Conference on Social Computing, Boston, MA, USA, 9–11 October 2011. [Google Scholar]
  63. Grabner-Kräuter, S.; Bitter, S. Trust in online social networks: A multifaceted perspective. Forum Soc. Econ. 2015, 44, 48–68. [Google Scholar] [CrossRef]
  64. Warner-Søderholm, G.; Bertsch, A.; Sawe, E.; Lee, D.; Wolfe, T.; Meyer, J.; Engel, J.; Fatilua, U.N. Who trusts social media? Comput. Hum. Behav. 2018, 81, 303–315. [Google Scholar] [CrossRef]
  65. Lin, S.-W.; Liu, Y.-C. The effects of motivations, trust, and privacy concern in social networking. Serv. Bus. 2012, 6, 411–424. [Google Scholar] [CrossRef]
  66. Yuan, Y.-P.; Tan, G.W.-H.; Ooi, K.-B. What shapes mobile fintech consumers’ post-adoption experience? A multi-analytical PLS-ANN-fsQCA perspective. Technol. Forecast. Soc. Change 2025, 217, 124162. [Google Scholar] [CrossRef]
  67. Metzger, M.J.; Flanagin, A.J. Credibility and trust of information in online environments: The use of cognitive heuristics. J. Pragmat. 2013, 59, 210–220. [Google Scholar] [CrossRef]
  68. Salehan, M.; Kim, D.J.; Koo, C. A study of the effect of social trust, trust in social networking services, and sharing attitude, on two dimensions of personal information sharing behavior. J. Supercomput. 2018, 74, 3596–3619. [Google Scholar] [CrossRef]
  69. Thompson, N.; Wang, X.; Daya, P. Determinants of News Sharing Behavior on Social Media. J. Comput. Inf. Syst. 2020, 60, 593–601. [Google Scholar] [CrossRef]
  70. Torres, R.; Gerhart, N.; Negahban, A. Epistemology in the era of fake news: An exploration of information verification behaviors among social networking site users. ACM SIGMIS Database DATABASE Adv. Inf. Syst. 2018, 49, 78–97. [Google Scholar] [CrossRef]
  71. Freiling, I.; Waldherr, A. Why Trusting Whom? Motivated Reasoning and Trust in the Process of Information Evaluation. In Trust and Communication: Findings and Implications of Trust Research; Blöbaum, B., Ed.; Springer International Publishing: Cham, Switzerland, 2021; pp. 83–97. [Google Scholar]
  72. Guess, A.; Nagler, J.; Tucker, J. Less than you think: Prevalence and predictors of fake news dissemination on Facebook. Sci. Adv. 2019, 5, eaau4586. [Google Scholar] [CrossRef]
  73. Paletz, S.B.F.; Auxier, B.E.; Golonka, E.M. Motivation to Share. In A Multidisciplinary Framework of Information Propagation Online; Springer International Publishing: Cham, Switzerland, 2019; pp. 37–45. [Google Scholar]
  74. Oh, S.; Syn, S.Y. Motivations for sharing information and social support in social media: A comparative analysis of Facebook, Twitter, Delicious, YouTube, and Flickr. J. Assoc. Inf. Sci. Technol. 2015, 66, 2045–2060. [Google Scholar] [CrossRef]
  75. McGonagle, T. “Fake news”: False fears or real concerns? Neth. Q. Hum. Rights 2017, 35, 203–209. [Google Scholar] [CrossRef]
  76. Narwal, B. Fake news in digital media. In Proceedings of the 2018 International Conference on Advances in Computing, Communication Control and Networking (ICACCCN), Greater Noida, India, 12–13 October 2018. [Google Scholar]
  77. Tandoc, E.C.; Jenkins, J.; Craft, S. Fake News as a Critical Incident in Journalism. J. Pract. 2019, 13, 673–689. [Google Scholar] [CrossRef]
  78. Chen, X.; Sin, S.-C.J.; Theng, Y.-L.; Lee, C.S. Why Students Share Misinformation on Social Media: Motivation, Gender, and Study-level Differences. J. Acad. Librariansh. 2015, 41, 583–592. [Google Scholar] [CrossRef]
  79. Chen, X.; Sin, S.-C.J.; Theng, Y.-L.; Lee, C.S. Why do social media users share misinformation? In Proceedings of the 15th ACM/IEEE-CS Joint Conference on Digital Libraries, Knoxville, TN, USA, 21–25 June 2015. [Google Scholar]
  80. Allcott, H.; Gentzkow, M. Social media and fake news in the 2016 election. J. Econ. Perspect. 2017, 31, 211–236. [Google Scholar] [CrossRef]
  81. Igbinovia, M.O.; Okuonghae, O.; Adebayo, J.O. Information literacy competence in curtailing fake news about the COVID-19 pandemic among undergraduates in Nigeria. Ref. Serv. Rev. 2021, 49, 3–18. [Google Scholar] [CrossRef]
  82. Pennycook, G.; Epstein, Z.; Mosleh, M.; Arechar, A.A.; Eckles, D.; Rand, D.G. Shifting attention to accuracy can reduce misinformation online. Nature 2021, 592, 590–595. [Google Scholar] [CrossRef]
  83. Anderson, C.; Hildreth, J.A.D.; Howland, L. Is the desire for status a fundamental human motive? A review of the empirical literature. Psychol. Bull. 2015, 141, 574–601. [Google Scholar] [CrossRef]
  84. Delhey, J.; Schneickert, C.; Hess, S.; Aplowski, A. Who values status seeking? A cross-European comparison of social gradients and societal conditions. Eur. Soc. 2022, 24, 29–60. [Google Scholar] [CrossRef]
  85. Cruz, J.A.B., Jr.; Caguiat, M.R.R.; Ebardo, R.A. Investigating the Design of Social Networking Sites to Examine the Spread of Political Misinformation Using the Uses and Gratifications Theory. Int. J. Res. Sci. Innov. 2024, 11, 1–11. [Google Scholar] [CrossRef]
  86. Haridakis, P. Uses and Gratifications. In The International Encyclopedia of Media Studies; Routledge: London, UK, 2019. [Google Scholar]
  87. Bergström, A.; Jervelycke Belfrage, M. News in Social Media. Digit. J. 2018, 6, 583–598. [Google Scholar] [CrossRef]
  88. Ma, L.; Sian Lee, C.; Hoe-Lian Goh, D. Understanding news sharing in social media. Online Inf. Rev. 2014, 38, 598–615. [Google Scholar] [CrossRef]
  89. Ali, K.; Li, C.; Zain-Ul-Abdin, K.; Muqtadir, S.A. The effects of emotions, individual attitudes towards vaccination, and social endorsements on perceived fake news credibility and sharing motivations. Comput. Hum. Behav. 2022, 134, 107307. [Google Scholar] [CrossRef]
  90. Lee, C.S.; Ma, L.; Goh, D.H.-L. Why Do People Share News in Social Media? Springer: Berlin/Heidelberg, Germany, 2011. [Google Scholar]
  91. Sharma, K.; Qian, F.; Jiang, H.; Ruchansky, N.; Zhang, M.; Liu, Y. Combating fake news: A survey on identification and mitigation techniques. ACM Trans. Intell. Syst. Technol. (TIST) 2019, 10, 1–42. [Google Scholar] [CrossRef]
  92. Kim, A.; Dennis, A.R. Says who? The effects of presentation format and source rating on fake news in social media. MIS Q. 2019, 43, 1025–1040. [Google Scholar] [CrossRef]
  93. Schreurs, L.; Vandenbosch, L. Introducing the Social Media Literacy (SMILE) model with the case of the positivity bias on social media. J. Child. Media 2021, 15, 320–337. [Google Scholar] [CrossRef]
  94. Festl, R. Social media literacy & adolescent social online behavior in Germany. J. Child. Media 2021, 15, 249–271. [Google Scholar]
  95. Guess, A.M.; Lerner, M.; Lyons, B.; Montgomery, J.M.; Nyhan, B.; Reifler, J.; Sircar, N. A digital media literacy intervention increases discernment between mainstream and false news in the United States and India. Proc. Natl. Acad. Sci. USA 2020, 117, 15536–15545. [Google Scholar] [CrossRef]
  96. Lee, N.M. Fake news, phishing, and fraud: A call for research on digital media literacy education beyond the classroom. Commun. Educ. 2018, 67, 460–466. [Google Scholar] [CrossRef]
  97. Moore, R.C.; Hancock, J.T. A digital media literacy intervention for older adults improves resilience to fake news. Sci. Rep. 2022, 12, 6008. [Google Scholar] [CrossRef] [PubMed]
  98. Apuke, O.D.; Omar, B.; Tunca, E.A. Literacy Concepts as an Intervention Strategy for Improving Fake News Knowledge, Detection Skills, and Curtailing the Tendency to Share Fake News in Nigeria. Child Youth Serv. 2023, 44, 88–103. [Google Scholar] [CrossRef]
  99. Musgrove, A.T.; Powers, J.R.; Rebar, L.C.; Musgrove, G.J. Real or fake? Resources for teaching college students how to identify fake news. Coll. Undergrad. Libr. 2018, 25, 243–260. [Google Scholar] [CrossRef]
  100. Goldani, M.H.; Safabakhsh, R.; Momtazi, S. Convolutional neural network with margin loss for fake news detection. Inf. Process. Manag. 2021, 58, 102418. [Google Scholar] [CrossRef]
  101. Melchior, C.; Warin, T.; Oliveira, M. An investigation of the COVID-19-related fake news sharing on Facebook using a mixed methods approach. Technol. Forecast. Soc. Change 2025, 213, 123969. [Google Scholar] [CrossRef]
  102. Lisi, G. Is Education the Best Tool to Fight Disinformation? J. Knowl. Econ. 2024, 15, 15996–16012. [Google Scholar] [CrossRef]
  103. Becker, T.E. Potential Problems in the Statistical Control of Variables in Organizational Research: A Qualitative Analysis with Recommendations. Organ. Res. Methods 2005, 8, 274–289. [Google Scholar] [CrossRef]
  104. Becker, T.E.; Atinc, G.; Breaugh, J.A.; Carlson, K.D.; Edwards, J.R.; Spector, P.E. Statistical control in correlational studies: 10 essential recommendations for organizational researchers. J. Organ. Behav. 2016, 37, 157–167. [Google Scholar] [CrossRef]
  105. Shiau, W.-L.; Chau, P.Y.; Thatcher, J.B.; Teng, C.-I.; Dwivedi, Y.K. Have we controlled properly? Problems with and recommendations for the use of control variables in information systems research. Int. J. Inf. Manag. 2024, 74, 102702. [Google Scholar] [CrossRef]
  106. Carlson, K.D.; Wu, J. The Illusion of Statistical Control: Control Variable Practice in Management Research. Organ. Res. Methods 2012, 15, 413–435. [Google Scholar] [CrossRef]
  107. Gao, L.; Waechter, K.A. Examining the role of initial trust in user adoption of mobile payment services: An empirical investigation. Inf. Syst. Front. 2017, 19, 525–548. [Google Scholar] [CrossRef]
  108. Elena-Bucea, A.; Cruz-Jesus, F.; Oliveira, T.; Coelho, P.S. Assessing the Role of Age, Education, Gender and Income on the Digital Divide: Evidence for the European Union. Inf. Syst. Front. 2021, 23, 1007–1021. [Google Scholar] [CrossRef]
  109. Haight, M.; Quan-Haase, A.; Corbett, B.A. Revisiting the digital divide in Canada: The impact of demographic factors on access to the internet, level of online activity, and social networking site usage. In Current Research on Information Technologies and Society; Routledge: London, UK, 2016; pp. 113–129. [Google Scholar]
  110. Booker, C.L.; Kelly, Y.J.; Sacker, A. Gender differences in the associations between age trends of social media interaction and well-being among 10-15 year olds in the UK. BMC Public Health 2018, 18, 321. [Google Scholar] [CrossRef]
  111. Nas, E.; de Kleijn, R. Conspiracy thinking and social media use are associated with ability to detect deepfakes. Telemat. Inform. 2024, 87, 102093. [Google Scholar] [CrossRef]
  112. Gefen, D.; Straub, D.; Boudreau, M.-C. Structural equation modeling and regression: Guidelines for research practice. Commun. Assoc. Inf. Syst. 2000, 4, 7. [Google Scholar] [CrossRef]
  113. Joreskog, K.G.; Wold, H. The ML and PLS Techniques for Modeling with Latent Variables: Historical and Comparative Aspects. In Systems Under Indirect Observation: Causality, Structure, Prediction, Part I; Joreskog, K.G., Wold, H., Eds.; Elsevier: Amsterdam, The Netherlands, 1982; pp. 263–270. [Google Scholar]
  114. Davcik, N.S. The use and misuse of structural equation modeling in management research. J. Adv. Manag. Res. 2014, 11, 47–81. [Google Scholar] [CrossRef]
  115. Pappas, I.O.; Bley, K. Fuzzy-set qualitative comparative analysis: Introduction to a configurational approach. In Researching and Analysing Business; Routledge: London, UK, 2023; pp. 362–376. [Google Scholar]
  116. Urry, J. The Complexity Turn. Theory Cult. Soc. 2005, 22, 1–14. [Google Scholar] [CrossRef]
  117. Park, Y.; Mithas, S. Organized complexity of digital business strategy: A configurational perspective. MIS Q. 2020, 44, 85–127. [Google Scholar] [CrossRef]
  118. Pappas, I.O.; Woodside, A.G. Fuzzy-set Qualitative Comparative Analysis (fsQCA): Guidelines for research practice in Information Systems and marketing. Int. J. Inf. Manag. 2021, 58, 102310. [Google Scholar] [CrossRef]
  119. Woodside, A.G. Embrace•perform•model: Complexity theory, contrarian case analysis, and multiple realities. J. Bus. Res. 2014, 67, 2495–2503. [Google Scholar] [CrossRef]
  120. Khedhaouria, A.; Cucchi, A. Technostress creators, personality traits, and job burnout: A fuzzy-set configurational analysis. J. Bus. Res. 2019, 101, 349–361. [Google Scholar] [CrossRef]
  121. Kumar, S.; Sahoo, S.; Lim, W.M.; Kraus, S.; Bamel, U. Fuzzy-set qualitative comparative analysis (fsQCA) in business and management research: A contemporary overview. Technol. Forecast. Soc. Change 2022, 178, 121599. [Google Scholar] [CrossRef]
  122. Humprecht, E. Where ‘fake news’ flourishes: A comparison across four Western democracies. Inf. Commun. Soc. 2019, 22, 1973–1988. [Google Scholar] [CrossRef]
  123. Jordan, P.J.; Troth, A.C. Common method bias in applied settings: The dilemma of researching in organizations. Aust. J. Manag. 2020, 45, 3–14. [Google Scholar] [CrossRef]
  124. Podsakoff, P.M.; MacKenzie, S.B.; Lee, J.-Y.; Podsakoff, N.P. Common method biases in behavioral research: A critical review of the literature and recommended remedies. J. Appl. Psychol. 2003, 88, 879. [Google Scholar] [CrossRef]
  125. Podsakoff, P.M.; MacKenzie, S.B.; Podsakoff, N.P. Sources of method bias in social science research and recommendations on how to control it. Annu. Rev. Psychol. 2012, 63, 539–569. [Google Scholar] [CrossRef]
  126. George, D. SPSS for Windows Step by Step: A Simple Study Guide and Reference, 17.0 Update, 10/e; Pearson Education India: Tamil Nadu, India, 2011. [Google Scholar]
  127. Johnson, R.A.; Wichern, D.W. Applied Multivariate Statistical Analysis, 6th ed.; Pearson: Upper Saddle River, NJ, USA, 2007. [Google Scholar]
  128. Byrne, B.M. Structural Equation Modeling with Mplus: Basic Concepts, Applications, and Programming; Routledge: London, UK, 2013. [Google Scholar]
  129. Fornell, C.; Larcker, D.F. Evaluating structural equation models with unobservable variables and measurement error. J. Mark. Res. 1981, 18, 39–50. [Google Scholar] [CrossRef]
  130. Hair, J.F.; Black William, C.; Babin, B.J.; Anderson Rolph, E.; Tatham, R.L. Multivariate Data Analysis; Pearson New International Edition; Pearson Education Limited: Harlow, Essex, UK, 2014. [Google Scholar]
  131. Hair, J.F.; Black, W.C.; Babin, B.J.; Anderson, R.E.; Tatham, R. Multivariate Data Analysis, 6th ed.; Pearson Prentice Hall: Saddle River, NJ, USA, 2009. [Google Scholar]
  132. Voorhees, C.M.; Brady, M.K.; Calantone, R.; Ramirez, E. Discriminant validity testing in marketing: An analysis, causes for concern, and proposed remedies. J. Acad. Mark. Sci. 2016, 44, 119–134. [Google Scholar] [CrossRef]
  133. Hu, L.-t.; Bentler, P.M. Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Struct. Equ. Model. 1999, 6, 1–55. [Google Scholar] [CrossRef]
  134. Neter, J. Applied Linear Statistical Models, 4th ed.; Irwin: Chicago, IL, USA, 1996. [Google Scholar]
  135. Kock, N. Common method bias in PLS-SEM: A full collinearity assessment approach. Int. J. E-Collab. (IJEC) 2015, 11, 1–10. [Google Scholar] [CrossRef]
  136. Woodside, A.G. Proposing a new logic for data analysis in marketing and consumer behavior: Case study research of large-N survey data for estimating algorithms that accurately profile X (extremely high-use) consumers. J. Glob. Sch. Mark. Sci. 2012, 22, 277–289. [Google Scholar] [CrossRef]
  137. Castelló-Sirvent, F.; Pinazo-Dallenbach, P. Corruption Shock in Mexico: fsQCA Analysis of Entrepreneurial Intention in University Students. Mathematics 2021, 9, 1702. [Google Scholar] [CrossRef]
  138. Duşa, A. QCA with R: A Comprehensive Resource; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  139. Ma, T.; Cheng, Y.; Guan, Z.; Li, B.; Hou, F.; Lim, E.T.K. Theorising moderation in the configurational approach: A guide for identifying and interpreting moderating influences in QCA. Inf. Syst. J. 2024, 34, 762–787. [Google Scholar] [CrossRef]
  140. Porcuna-Enguix, L.; Bustos-Contell, E.; Serrano-Madrid, J.; Labatut-Serer, G. Constructing the Audit Risk Assessment by the Audit Team Leader When Planning: Using Fuzzy Theory. Mathematics 2021, 9, 3065. [Google Scholar] [CrossRef]
  141. Buchanan, T. Why do people spread false information online? The effects of message and viewer characteristics on self-reported likelihood of sharing social media disinformation. PLoS ONE 2020, 15, e0239666. [Google Scholar] [CrossRef]
  142. Armeen, I.; Niswanger, R.; Tian, C. Combating Fake News Using Implementation Intentions. Inf. Syst. Front. 2024, 27, 1107–1120. [Google Scholar] [CrossRef]
  143. Zhang, L.; Iyendo, T.O.; Apuke, O.D.; Gever, C.V. Experimenting the effect of using visual multimedia intervention to inculcate social media literacy skills to tackle fake news. J. Inf. Sci. 2025, 51, 135–145. [Google Scholar] [CrossRef]
  144. Ma, R.; Wang, X.; Yang, G.-R. Fighting fake news in the age of generative AI: Strategic insights from multi-stakeholder interactions. Technol. Forecast. Soc. Change 2025, 216, 124125. [Google Scholar] [CrossRef]
Figure 1. Proposed structural model.
Figure 3. Slope test results for the moderating effect of social media literacy.
Table 1. Summary of the literature on focal predictors and theories used to explain fake-news sharing behavior.

Independent Variables | Moderator/Mediator | Dependent Variable(s) | Theories | Authors
Individual social networking sites dependency; parasocial interaction; information seeking; perceived herd; social tie strength; status seeking | Fake news knowledge (mediator) | Fake news sharing | Dependency theory; social impact theory; social networking sites; uses and gratification theory | Apuke and Omar [32]
Information sharing; news finds me; status seeking; trust in social networking sites | Social media literacy skills (moderator) | | Rational choice theory | Wei, Gong [28]
Status seeking; socializing; entertainment; pastime; information sharing | News quality; source credibility (moderators) | | Uses and gratification theory; information adoption model | Thompson, Wang [69]
Information overload; information sharing; news-find-me perception; self-expression; status seeking; trust in online information | | | The affordance theory | Apuke and Omar [34]
Active corrective action on fake news; authenticating news before sharing; instantaneous sharing of fake news to create awareness; passive corrective action on fake news | | Sharing fake news related to lack of time; sharing fake news related to religiosity | The third-person effect hypothesis; the honeycomb framework | Talwar, Dhir [35]
Authoritativeness of source; consensus indicators; demographic variables; digital literacy; personality | | Spread false information | | Buchanan [141]
Information overload; online information trust; perceived severity; perceived susceptibility | | Unverified information sharing; cyberchondria | Cognitive load theory; health belief model; protection-motivation theory | Laato, Islam [29]
Technological factors; fear of missing out; entertainment; ignorance; pastime; altruism | | Fake news sharing | Self-determination theory; socio-cultural-psychological-technology model; uses and gratification theory | Balakrishnan, Ng [36]
Social media news use; fear of missing out | Cognitive ability (mediator) | Deepfake sharing | | Ahmed [37]
Usage intensity; social credibility; expertise | Trust in social media (mediator); verification behavior (mediator) | Fake news identification | Referential theory of the illusory truth effect; "theory of frequency and referential validity" | Aoun Barakat, Dabbous [38]
Argument quality; information readability; source authority; source influence | Cognitive ability (moderator) | Fake news rebuttal acceptance | Elaboration likelihood model | Wang, Chao [33]
Authenticating news before sharing; fear of missing out (FOMO); government regulation; information quality; joy of missing out (JOMO); source credibility | Perceived believability (mediator); social status-seeking and cognitive influence (moderators) | Intention to share fake news | Behavioral reasoning theory | Kumar, Shankar [39]
Table 2. Validity of the measures.

Variables | Items | Loadings | CR | AVE | MSV | MaxR(H) | VIF
News-find-me | NewsFM4 | 0.696 | 0.825 | 0.541 | 0.240 | 0.828 | 1.403
| NewsFM3 | 0.778
| NewsFM2 | 0.757
| NewsFM1 | 0.708
Social media trust | TrustSNS4 | 0.693 | 0.836 | 0.561 | 0.240 | 0.840 | 1.297
| TrustSNS3 | 0.787
| TrustSNS2 | 0.778
| TrustSNS1 | 0.734
Social media literacy | SMLit13 | 0.720 | 0.922 | 0.597 | 0.332 | 0.925 | 1.565
| SMLit12 | 0.716
| SMLit11 | 0.818
| SMLit9 | 0.814
| SMLit8 | 0.768
| SMLit7 | 0.801
| SMLit6 | 0.788
| SMLit5 | 0.748
Status-seeking | SSeeking5 | 0.696 | 0.854 | 0.594 | 0.256 | 0.859 | 1.383
| SSeeking3 | 0.804
| SSeeking2 | 0.782
| SSeeking1 | 0.797
Information sharing | InfoSharing8 | 0.708 | 0.912 | 0.597 | 0.332 | 0.913 | 1.635
| InfoSharing6 | 0.778
| InfoSharing5 | 0.784
| InfoSharing4 | 0.782
| InfoSharing3 | 0.780
| InfoSharing2 | 0.779
| InfoSharing1 | 0.795
Fake news-sharing | SharingFNs8 | 0.837 | 0.922 | 0.631 | 0.093 | 0.929 |
| SharingFNs7 | 0.840
| SharingFNs6 | 0.836
| SharingFNs5 | 0.819
| SharingFNs4 | 0.803
| SharingFNs3 | 0.770
| SharingFNs2 | 0.634
Note: CR, AVE, MSV, MaxR(H), and VIF are reported once per construct, on its first item row.
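As a transparency aid, the short sketch below recomputes composite reliability (CR) and average variance extracted (AVE) from the standardized loadings of the news-find-me items via the Fornell and Larcker [129] formulas; it reproduces the 0.825 and 0.541 reported in Table 2. The variable names are ours.

```python
# Sketch: CR and AVE from standardized loadings (news-find-me items, Table 2).
loadings = [0.696, 0.778, 0.757, 0.708]  # NewsFM4, NewsFM3, NewsFM2, NewsFM1

sum_l = sum(loadings)
sum_l2 = sum(l ** 2 for l in loadings)
sum_err = sum(1 - l ** 2 for l in loadings)  # standardized error variances

cr = sum_l ** 2 / (sum_l ** 2 + sum_err)  # composite reliability
ave = sum_l2 / len(loadings)              # average variance extracted
print(f"CR = {cr:.3f}, AVE = {ave:.3f}")  # CR = 0.825, AVE = 0.541
```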
Table 3. Means, standard deviations, correlations, and discriminant validity.

Focal Variables | Mean | Std. Deviation | 1 | 2 | 3 | 4 | 5 | 6
1. News-find-me | 2.9981 | 0.88648 | 0.735 | 0.490 *** | 0.366 *** | 0.418 *** | 0.442 *** | −0.080 *
2. Social media trust | 2.8720 | 0.87430 | 0.500 | 0.749 | 0.374 *** | 0.366 *** | 0.288 *** | 0.086 *
3. Social media literacy | 2.5278 | 0.93742 | 0.385 | 0.384 | 0.772 | 0.447 *** | 0.576 *** | −0.208 ***
4. Status-seeking | 2.8160 | 0.95608 | 0.438 | 0.380 | 0.459 | 0.771 | 0.506 *** | −0.112 **
5. Information sharing | 2.6208 | 0.95060 | 0.443 | 0.299 | 0.598 | 0.515 | 0.773 | −0.305 ***
6. Fake news-sharing | 3.3573 | 1.00949 | 0.064 | 0.101 | 0.200 | 0.098 | 0.286 | 0.794
***, **, and * indicate that p is significant at the 0.001, 0.01, and 0.05 levels, respectively. Diagonal values (bold in the original) are the square roots of the AVE; values above the diagonal are inter-construct correlations; values below the diagonal (shaded grey in the original) are HTMT ratios.
Table 6. Calibration method using key percentiles of the data distribution (5th, 50th, and 95th).

Calibration Method | Info Sharing | S-Seeking | NFM | SM Trust | SM Literacy | FN Sharing
Full non-membership (5%) | 1.1429 | 1.2500 | 1.2500 | 1.2500 | 1.2500 | 1.4286
Crossover (50%) | 2.5714 | 2.7500 | 3.0000 | 3.0000 | 2.3750 | 3.5714
Full membership (95%) | 4.2857 | 4.2938 | 4.2500 | 4.2500 | 4.1250 | 4.7143
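These anchors feed the direct calibration method [44], which maps raw scores to fuzzy memberships through a logistic transformation whose log-odds equal −3, 0, and +3 at the three anchors. The Python sketch below is our own illustration of that transformation (shown with the information-sharing anchors), not the fsQCA software's implementation.

```python
import numpy as np

# Sketch of direct calibration: raw Likert scores -> fuzzy memberships,
# anchored at the 5th, 50th, and 95th percentiles from Table 6.
full_non, cross, full = 1.1429, 2.5714, 4.2857  # info-sharing anchors

def calibrate(x, full_non, cross, full):
    x = np.asarray(x, dtype=float)
    # Scale deviations from the crossover so the anchors map to log-odds
    # of -3, 0, and +3 (memberships of about 0.05, 0.5, and 0.95).
    log_odds = np.where(
        x >= cross,
        3.0 * (x - cross) / (full - cross),
        3.0 * (x - cross) / (cross - full_non),
    )
    return 1.0 / (1.0 + np.exp(-log_odds))

print(calibrate([1.1429, 2.5714, 4.2857], full_non, cross, full))
# -> approximately [0.047, 0.5, 0.953]
```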
Table 7. Necessary condition analysis for low and high fake news sharing.

Conditions | Consistency (High FNS) | Coverage (High FNS) | Consistency (Low FNS) | Coverage (Low FNS)
C_InfoSharing | 0.554614 | 0.58236 | 0.699955 | 0.706066
~C_InfoSharing | 0.720069 | 0.71413 | 0.585975 | 0.558289
C_Sseeking | 0.631878 | 0.61948 | 0.692767 | 0.652469
~C_Sseeking | 0.645514 | 0.68623 | 0.595983 | 0.608659
C_NFMe | 0.659095 | 0.62595 | 0.705692 | 0.643851
~C_NFMe | 0.624993 | 0.68853 | 0.590027 | 0.624441
C_TrustSM | 0.646258 | 0.6771 | 0.636424 | 0.640572
~C_TrustSM | 0.656941 | 0.65288 | 0.679189 | 0.648444
C_Literacy | 0.547366 | 0.56987 | 0.69652 | 0.696631
~C_Literacy | 0.708607 | 0.7085 | 0.569934 | 0.547436
Note: the tilde (~) denotes low membership in, or absence of, the condition.
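The statistics in Table 7 follow the standard fuzzy-set necessity formulas: for condition X and outcome Y, consistency is Σ min(x, y)/Σ y and coverage is Σ min(x, y)/Σ x [44,118]. The sketch below, run on synthetic (hypothetical) memberships, shows the calculation only and will not reproduce the table's values.

```python
import numpy as np

# A minimal necessity-analysis sketch on synthetic memberships.
rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 1028)  # e.g., calibrated C_Literacy memberships
y = rng.uniform(0, 1, 1028)  # calibrated fake news sharing (outcome)

for name, cond in (("C_Literacy", x), ("~C_Literacy", 1 - x)):
    overlap = np.minimum(cond, y).sum()
    # Necessity consistency: how fully the outcome sits inside the condition.
    # Necessity coverage: how relevant (non-trivial) the condition is.
    print(f"{name}: consistency={overlap / y.sum():.3f}, "
          f"coverage={overlap / cond.sum():.3f}")
```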