Article

Exploring Heuristics and Biases in Cybersecurity: A Factor Analysis of Social Engineering Vulnerabilities

by Valerică Greavu-Şerban 1,*, Floredana Constantin 2 and Sabina-Cristiana Necula 1,*

1 Department of Accounting, Business Informatics and Statistics, Faculty of Economics and Business Administration, Alexandru Ioan Cuza University of Iasi, 700506 Iași, Romania
2 Duk-Tech, 700115 Iasi, Romania
* Authors to whom correspondence should be addressed.
Systems 2025, 13(4), 280; https://doi.org/10.3390/systems13040280
Submission received: 19 February 2025 / Revised: 27 March 2025 / Accepted: 8 April 2025 / Published: 10 April 2025
(This article belongs to the Section Systems Practice in Social Science)

Abstract:
Cybersecurity threats increasingly exploit cognitive heuristics, yet their structured role in security decision-making remains underexplored. This study examines how heuristic-driven behaviors influence vulnerability to cyberattacks, particularly in social engineering contexts. Using Exploratory Factor Analysis (EFA), followed by Confirmatory Factor Analysis (CFA), we identified two key cognitive dimensions shaping security decisions: risk perception, and compliance and security. Regression and mediation analyses revealed that risk awareness influences protective behaviors, but a security paradox persists—many users recognize risks yet fail to act accordingly. Clustering techniques further classified individuals into distinct cybersecurity profiles, highlighting variations in susceptibility. This research bridges cognitive psychology and cybersecurity, offering insights for designing more effective awareness programs and interventions. Understanding these cognitive vulnerabilities is essential for improving cybersecurity resilience and risk mitigation strategies.

1. Introduction

Cybersecurity decision-making is an essential aspect of digital resilience, as individuals and organizations continuously assess risks and implement protective measures against evolving cyber threats. Given the complexity of this domain, cognitive heuristics—mental shortcuts used for rapid decision-making—play a significant role in shaping security-related behaviors [1]. While heuristics can enhance efficiency in uncertain environments, they also introduce cognitive biases that may lead to security vulnerabilities [2,3]. Existing research has extensively examined the influence of specific heuristics such as the availability heuristic [4], the representativeness heuristic [5], and anchoring effects [6] on cybersecurity decision-making. However, the structured nature of heuristic-driven behaviors in cybersecurity remains an underexplored area, limiting the ability to develop targeted interventions for improving security practices.
Previous studies have largely focused on identifying isolated heuristics that affect security decisions. For instance, studies on password management have highlighted satisficing heuristics, where individuals opt for “good enough” security solutions rather than optimal measures [7,8]. Similarly, research on phishing susceptibility has shown that cognitive biases, such as trust heuristics and authority bias, play a crucial role in determining an individual’s vulnerability to social engineering attacks [9,10]. While these findings offer valuable insights, they do not fully capture how these heuristics interact to form underlying cognitive structures influencing cybersecurity behaviors.
This study addresses this research gap by employing a quantitative approach to identify latent cognitive dimensions underlying cybersecurity decision-making. Factor analysis has been used in previous research to reveal structured patterns in information security behavior [11], risk perception [12], and heuristic-driven decision-making [13]. Building on these methodologies, this study applies Exploratory Factor Analysis (EFA) followed by Confirmatory Factor Analysis (CFA) to determine whether cybersecurity decision-making can be characterized by two primary latent factors: (1) risk perception, encompassing concerns about financial, reputational, and personal protection, and (2) compliance and security, capturing adherence to security policies and operational risk mitigation.
Beyond identifying these cognitive structures, this research explores their predictive power in determining susceptibility to cyber deception. Logistic regression modeling assesses the extent to which these latent factors influence the likelihood of individuals falling victim to cyberattacks. Additionally, clustering analyses classify individuals into distinct cybersecurity decision-making profiles, revealing how heuristic reliance varies across demographic groups and security awareness levels. Previous studies have demonstrated that regulatory compliance [14] and financial investment [15] are strong predictors of cybersecurity behaviors. However, this study further investigates whether these factors moderate the impact of heuristics on decision-making, contributing to a deeper understanding of the security paradox, wherein individuals acknowledge risks but fail to act accordingly [16,17].
This research contributes to the cybersecurity literature by bridging the gap between heuristic-based decision-making theories and empirical validation of latent cognitive structures. Unlike previous studies that treat heuristics as isolated biases, this study proposes that cybersecurity heuristics function within an interrelated cognitive framework, which can be leveraged for both predictive risk assessment and targeted security training.
Based on the identified gaps, this study is guided by the following research questions:
RQ1: Can cybersecurity behaviors be organized into underlying latent cognitive dimensions (heuristic factors)?
RQ2: How do these cognitive dimensions influence individuals’ susceptibility to cyber deception (e.g., phishing and social engineering attacks)?
RQ3: What distinct decision-making profiles emerge among users based on their reliance on security heuristics and biases?
This study contributes to the cybersecurity literature in several novel ways. First, we empirically identify latent cognitive dimensions (risk perception and security compliance) that structure a wide range of security behaviors—an integrative perspective not developed in prior work. Second, we combine behavioral data with multi-stage analysis (Exploratory Factor Analysis, clustering, and association rule mining) to uncover complex patterns (e.g., the ‘security paradox’ where high risk awareness does not always translate into action). Third, we define distinct user profiles of cybersecurity decision-making based on heuristic reliance, which, to our knowledge, have not been characterized in previous studies. Together, these insights bridge cognitive psychology and cybersecurity practice, offering actionable guidance for tailored security awareness interventions.
The remainder of this paper is structured as follows: Section 2 reviews the existing literature on heuristic-driven decision-making in cybersecurity. Section 3 details the methodology, including survey instrument design, data collection, and statistical analyses. Section 4 presents the findings from factor analysis, regression modeling, and clustering techniques. Section 5 discusses the implications of the results for cybersecurity awareness and training programs. Finally, Section 6 outlines limitations and future research directions.

2. Literature Review

The study of heuristics and their impact on decision-making has been extensively explored across multiple disciplines, including psychology, behavioral economics, and artificial intelligence. However, their role in cybersecurity decision-making, particularly in relation to social engineering attacks, remains an emerging area of research [18]. This section reviews the existing literature on heuristics, cognitive biases, and their implications in digital security contexts.

2.1. Heuristics and Dual-Process Theory in Cybersecurity Decision-Making

The foundation of heuristic-driven cybersecurity behavior can be traced to Tversky and Kahneman’s dual-process theory of cognition, which differentiates between two systems of thinking: System 1, which is intuitive and automatic, and System 2, which is deliberative and analytical [1]. Research has demonstrated that cybersecurity behaviors are largely governed by System 1 thinking, leading users to rely on heuristics when assessing security risks and responding to potential threats [19].
The availability heuristic, for instance, influences how individuals estimate the probability of a cyberattack based on the ease with which they recall past incidents [4]. If an individual has previously encountered phishing attempts, they may overestimate the likelihood of future phishing risks while underestimating less-publicized threats such as credential stuffing [20]. The representativeness heuristic, which leads individuals to rely on prototypical patterns in classification, has been shown to contribute to misjudgments in identifying phishing emails, as users often dismiss attacks that do not match their preconceived notions of a threat [9].
Other heuristics, such as anchoring and adjustment, play a role in shaping security decisions, particularly in the evaluation of security policies. If initial security recommendations suggest minimal risk, individuals may fail to sufficiently adjust their assessments even when confronted with clear evidence of emerging threats [6]. Similarly, satisficing—a heuristic where individuals opt for “good enough” security measures rather than optimal solutions—explains why many users continue to rely on weak password management practices, including password reuse and predictable password structures [7,8].

2.2. Factors Influencing Cybersecurity Decision-Making

The implementation of cybersecurity measures is influenced by multiple factors, including risk perception [21], regulatory compliance [14,22,23], financial considerations [15,24,25], and reputational concerns [14,26]. Research has demonstrated that individuals and organizations prioritize security measures based on perceived threats, often overestimating highly publicized risks while underestimating emerging attack vectors [27,28].
Regulatory compliance has also been identified as a major driver of cybersecurity decision-making. Studies have shown that adherence to frameworks such as the NIST Cybersecurity Framework [29] and ISO/IEC 27001 [30] significantly improves organizational security postures [14]. However, complexity in compliance requirements often leads to reliance on heuristic shortcuts, resulting in inconsistent implementation of security measures [13].
Financial investment plays a critical role in determining cybersecurity practices. Empirical research has confirmed that organizations with higher budgets for cybersecurity training and infrastructure exhibit lower susceptibility to heuristic-driven security failures [31,32]. Conversely, organizations with limited financial resources frequently resort to satisficing behaviors, opting for minimal compliance rather than proactive security investments [33,34].
The relationship between heuristic reliance and demographic factors has also been explored in the cybersecurity literature. Studies indicate that individuals with higher levels of cybersecurity training exhibit lower susceptibility to heuristic-driven errors, as training promotes System 2 thinking over intuitive decision-making [17]. Evidence on demographics is more mixed: one study found that gender significantly impacts cybersecurity awareness, while age and educational level do not show a significant effect [17].

2.3. Heuristics, Cognitive Biases, and Social Engineering Vulnerabilities

Social engineering attacks exploit heuristic-driven decision-making by leveraging psychological principles such as authority bias, urgency, and trust [4,10]. Research has demonstrated that individuals are particularly susceptible to phishing attacks when cognitive load is high, as they are more likely to rely on System 1 processing rather than critical evaluation of suspicious messages [35]. The affect heuristic, which links emotional responses to risk perception, further influences cybersecurity behaviors. Individuals who experience heightened anxiety in response to security warnings are more likely to take protective actions, while those with neutral emotional responses tend to ignore security advisories [36].
A growing body of research has examined how homoglyph attacks—where attackers manipulate domain names or email addresses by replacing characters with visually similar alternatives—exploit heuristic biases in cybersecurity [37]. Multiple studies have confirmed that users often fail to detect homoglyphs in phishing attempts due to satisficing and familiarity heuristics, which lead them to quickly process visual information without scrutinizing minute details [19]. Empirical evidence further suggests that representativeness heuristics contribute to these vulnerabilities, as individuals tend to evaluate URLs based on surface-level similarity rather than structural analysis [5].
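To make the homoglyph mechanism concrete, the following illustrative Python sketch (not part of the study's instrumentation) flags non-ASCII characters in a domain name and names their Unicode scripts—exactly the structural check that satisficing and familiarity heuristics lead users to skip:

```python
import unicodedata

def find_homoglyphs(domain):
    """Return (character, Unicode name) pairs for non-ASCII characters
    that may visually impersonate Latin letters."""
    return [(ch, unicodedata.name(ch, "UNKNOWN"))
            for ch in domain if ord(ch) > 127]

# 'paypal.com' spelled with a Cyrillic 'а' (U+0430) in place of Latin 'a'
spoofed = "p\u0430ypal.com"
print(spoofed == "paypal.com")   # False: visually identical, different code points
print(find_homoglyphs(spoofed))  # [('а', 'CYRILLIC SMALL LETTER A')]
```

A surface-level visual comparison (the representativeness heuristic) accepts the spoofed string; only a character-level comparison reveals the substitution.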
Studies have highlighted that attackers frequently manipulate trust and perceived authority to elicit compliance from victims [9,38,39]. Authority bias, for instance, has been shown to increase susceptibility to impersonation-based attacks, particularly when messages appear to originate from high-ranking officials or reputable institutions [4]. Similarly, research on urgency bias indicates that individuals are more likely to fall for social engineering tactics when presented with time-sensitive requests [40,41].

2.4. Mitigating Heuristic-Driven Cybersecurity Vulnerabilities

Given the significant role heuristics play in cybersecurity decision-making, researchers have explored various intervention strategies to mitigate their effects. Training programs that emphasize critical thinking and skepticism have been shown to reduce susceptibility to phishing attacks by shifting users from System 1 to System 2 [17]. Digital nudging—where subtle cues such as warning labels and real-time feedback are integrated into user interfaces—has been identified as an effective approach to counteract heuristic-driven security misjudgments [42,43].
Artificial intelligence (AI)-driven security tools have also been proposed as a means to mitigate the impact of heuristic biases in cybersecurity. Machine learning models can analyze user behavior, detect anomalies, and provide adaptive security recommendations, reducing reliance on flawed heuristic judgments [44,45]. However, researchers caution that AI-based security solutions must be carefully designed to avoid reinforcing cognitive biases embedded in training data [46].
While these studies provide valuable theoretical insights, there remains a need for empirical validation of the latent cognitive structures underlying heuristic-driven security decisions.
The present study employs a rigorous quantitative methodology integrating psychometric validation, predictive modeling, and behavioral segmentation techniques. The following section outlines the methodological framework adopted in this study, detailing the survey instrument and the applied methodology.

3. Materials and Methods

This study aims to empirically validate the role of latent cognitive factors in shaping cybersecurity decision-making and susceptibility to social engineering attacks. Each analytical choice was guided by both theoretical frameworks and empirical considerations, ensuring that the findings could contribute meaningfully to the existing literature on cybersecurity cognition.
To address the research questions, we employ a multi-stage analytical approach combining factor analysis, predictive modeling, clustering, and association rule mining. This methodological framework ensures that findings contribute not only to theoretical models of heuristic-driven security behaviors but also to practical applications in cybersecurity awareness and risk mitigation strategies.
The research employed a quantitative, survey-based methodology to capture cybersecurity behaviors, awareness levels, and heuristic-driven decision-making patterns. A structured online questionnaire was developed to assess individual responses to cybersecurity threats, phishing susceptibility, and security habits. The survey instrument was designed based on established frameworks in cybersecurity research, drawing from studies on heuristic processing in digital security contexts [16,17,32,33,47,48,49]. The questionnaire included scenario-based assessments, multiple-choice questions, and Likert-scale items to measure respondents’ risk perceptions, decision-making biases, and familiarity with security protocols.

3.1. Survey Instrument

The use of structured questionnaires is a validated approach in cybersecurity research [50]. Such questionnaires often include items that measure awareness of specific cybersecurity practices, such as the use of strong passwords and two-factor authentication. The survey was designed to capture key cognitive shortcuts that may impact cybersecurity behaviors (Appendix A). Variable encodings and measurement units are presented in Table A1 in Appendix A.
The questionnaire consisted of 24 items grouped into thematic sections. The first section (Demographics—Questions 20–24 in Appendix A) collected demographic and background information, including age, education level, and place of residence. The second section (Cybersecurity Awareness—Question 1 in Appendix A) assessed cybersecurity awareness and concern using a five-point Likert scale, a standard method in cybersecurity research [50,51,52,53] that aligns with previous methodologies for measuring security perception and self-reported awareness. The third section (Security Decision Factors—Questions 2–3 in Appendix A) examined factors influencing cybersecurity decisions, allowing respondents to select multiple motivations for adopting security measures. Prior studies have shown that individuals prioritize cybersecurity based on perceived risk exposure and external regulatory pressure [54,55].
The fourth section (Phishing Recognition—Questions 4–9 in Appendix A) focused on phishing recognition and susceptibility. It included items designed to evaluate participants’ ability to detect homoglyph attacks, a common social engineering tactic used in phishing schemes [56]. These questions were adapted from prior experimental studies on phishing awareness and user susceptibility to deceptive URLs [38,57]. The fifth section (Social Engineering Experience—Questions 10–14 in Appendix A) investigated social engineering vulnerabilities by analyzing past experiences with manipulation tactics such as trust exploitation and urgency-based persuasion [58,59]. This approach is aligned with studies that explore affect-based decision-making in security contexts [10,60]. Finally, the sixth section (Security Behaviors—Questions 15–19 in Appendix A) examined security behaviors, such as password management practices, malware response strategies, and website security evaluation.
To ensure content validity, the survey items were designed based on well-established psychological theories, including dual-process models of decision-making and security-specific heuristics research [13]. The phishing-related questions were derived from methodologies used in cybersecurity training evaluations, while password management questions were adapted from studies on user authentication habits [61].
Given the exploratory nature of this study, face validity was established by aligning survey items with previous empirical research rather than through a full psychometric validation study. Prior cybersecurity behavior studies nevertheless provided a strong theoretical foundation for measuring heuristic-driven security decisions, supporting the relevance and appropriateness of the survey items. The survey was administered online, with responses collected anonymously. Participants were instructed to answer based on their real-world experiences and perceptions of cybersecurity threats. No personally identifiable information was collected, ensuring compliance with ethical research standards.
To further strengthen content validity, the questionnaire was reviewed by domain experts, and a pilot study with a small subset of participants (N = 15) was conducted. This process helped refine item wording, check for comprehension, and assess initial internal consistency. Reliability testing was then performed on the final dataset using Cronbach’s alpha to confirm the internal consistency of the measured constructs, ensuring that the instrument captured heuristic-driven cybersecurity decision-making in a robust manner.
Given the need to uncover latent cognitive structures influencing cybersecurity behaviors, this study integrates multiple quantitative techniques:
  • EFA was chosen as an initial step to identify underlying dimensions within cybersecurity decision-making, as it is particularly suited for detecting unobserved cognitive patterns.
  • CFA was then employed to validate these dimensions and assess the robustness of the factor structure.
  • To understand how these latent constructs predict cybersecurity behaviors, logistic regression was used.
  • Clustering techniques were applied to segment individuals based on their decision-making tendencies.
  • Additionally, mediation analysis was performed to assess whether risk perception plays an intermediary role in cybersecurity behaviors.
  • Association rule mining was used to uncover behavioral patterns in cybersecurity practices.
This multi-method approach ensures a comprehensive evaluation of how heuristics shape security decisions.
Based on prior research on heuristic-driven decision-making, the study formulates the following hypotheses:
H1. 
Cybersecurity behaviors can be grouped into latent cognitive dimensions, particularly along risk perception and compliance-driven security tendencies.
H2. 
Higher levels of risk perception are associated with a lower likelihood of falling victim to cyber deception.
H3. 
Compliance-driven security behaviors moderate the impact of risk perception on cybersecurity decision-making.
H4. 
Heuristic-driven cybersecurity decision-making follows structured behavioral profiles that can be identified through clustering analysis.
These hypotheses provide a structured basis for the statistical analyses conducted in this study, ensuring that findings contribute to both theoretical and practical advancements in cybersecurity awareness and intervention strategies.
We employed Exploratory Factor Analysis (EFA) not as an end in itself, but to reveal underlying structures in participants’ security behaviors. This approach allows us to move beyond isolated observations by uncovering latent cognitive factors that group related behaviors together. Identifying such factors is important—it enables a more structured understanding of decision-making heuristics in cybersecurity, which can enhance theoretical models and targeted interventions.

3.2. Data Analysis Strategy

An initial step involved data preprocessing and examination of inter-item relationships to determine the underlying structure of cybersecurity decision-making factors. EFA was chosen to identify latent constructs within the dataset, as this technique allows for the discovery of unobserved dimensions that influence security-related behaviors. The extraction method used was maximum likelihood estimation, which provides robust parameter estimates under non-normal data conditions. To account for potential correlations between factors, an oblimin rotation was applied, allowing for non-orthogonal factor structures that better reflect cognitive decision-making processes in cybersecurity.
The suitability of the data for factor analysis was assessed using the Kaiser–Meyer–Olkin (KMO) measure, which evaluates the sampling adequacy, and Bartlett’s test of sphericity, which determines whether sufficient correlations exist among items for factor extraction. Factor retention was determined based on eigenvalues greater than one, the proportion of variance explained, and parallel analysis, ensuring that extracted factors represented meaningful constructs rather than statistical artifacts. Items with factor loadings below 0.4 were excluded from the final factor structure to maintain interpretability and construct validity.
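The adequacy checks and retention rules described above can be sketched in a few lines; the correlation matrix below is a hypothetical four-item example (two correlated item pairs), not the study's data:

```python
import numpy as np

def kmo(corr):
    """Kaiser-Meyer-Olkin sampling adequacy from a correlation matrix:
    ratio of squared correlations to squared correlations plus squared
    partial correlations (computed from the inverse correlation matrix)."""
    inv = np.linalg.inv(corr)
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / d
    np.fill_diagonal(partial, 0.0)
    r2 = corr.copy()
    np.fill_diagonal(r2, 0.0)
    return (r2 ** 2).sum() / ((r2 ** 2).sum() + (partial ** 2).sum())

def kaiser_retained(corr):
    """Number of factors with eigenvalue > 1 (Kaiser criterion)."""
    return int((np.linalg.eigvalsh(corr) > 1.0).sum())

# toy correlation matrix: items 1-2 and items 3-4 form two factors
R = np.array([[1.0, 0.6, 0.1, 0.1],
              [0.6, 1.0, 0.1, 0.1],
              [0.1, 0.1, 1.0, 0.6],
              [0.1, 0.1, 0.6, 1.0]])
print(round(kmo(R), 3))
print(kaiser_retained(R))  # 2 factors retained
```

In practice the study used parallel analysis alongside the Kaiser criterion; the sketch shows only the eigenvalue rule.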
To confirm the factor structure derived from the EFA, CFA was performed using the robust weighted least squares estimator. CFA was selected to validate the latent constructs identified in the exploratory phase, ensuring measurement reliability and construct validity. Model fit was evaluated using multiple indices, including the chi-square test, root mean square error of approximation (RMSEA), comparative fit index (CFI), and Tucker–Lewis index (TLI). The inclusion of these indices allowed for a comprehensive assessment of model adequacy, balancing statistical power and goodness-of-fit criteria.
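The fit indices mentioned above are simple functions of the model and baseline (null-model) chi-square statistics. A minimal sketch, using hypothetical fit values rather than the study's estimates:

```python
import math

def rmsea(chi2, df, n):
    """Root mean square error of approximation."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index (model vs. baseline model)."""
    return 1.0 - max(chi2_m - df_m, 0.0) / max(chi2_b - df_b, chi2_m - df_m, 1e-12)

def tli(chi2_m, df_m, chi2_b, df_b):
    """Tucker-Lewis index."""
    return ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1.0)

# hypothetical statistics for a two-factor model fitted to n = 300 respondents
print(round(rmsea(54.0, 26, 300), 3))       # ≈ 0.060
print(round(cfi(54.0, 26, 480.0, 36), 3))   # ≈ 0.937
print(round(tli(54.0, 26, 480.0, 36), 3))   # ≈ 0.913
```

Conventional cutoffs (RMSEA ≤ 0.06, CFI/TLI ≥ 0.95 for close fit) are applied to these quantities when judging model adequacy.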
Following the validation of latent decision-making factors, logistic regression analysis was employed to examine the predictive power of these factors on prior exposure to cyber deception. Logistic regression was selected due to its robustness in modeling binary outcome variables, allowing for the estimation of odds ratios that quantify the likelihood of phishing susceptibility based on cognitive heuristics. The regression model included the two extracted factors as independent variables, while the dependent variable indicated whether respondents had previously fallen victim to cyber deception.
A subsequent model incorporated interaction terms to assess whether the effect of one factor on cybersecurity vulnerability depended on the level of another factor, capturing potential moderating effects in heuristic-driven security decisions. Model performance was evaluated using receiver operating characteristic (ROC) analysis, with the area under the curve (AUC) serving as the primary metric for assessing predictive accuracy. The ROC curve was generated to evaluate the discriminative power of the logistic regression model in distinguishing between users who had previously been deceived (VictimOfCyberDeception = 1) and those who had not (VictimOfCyberDeception = 0). The ROC analysis was supplemented with an optimal threshold (opt_prag), determined using the Youden index, to maximize sensitivity and specificity in risk classification.
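The Youden-index threshold selection described above can be illustrated with a short sketch; the predicted probabilities and deception outcomes here are hypothetical:

```python
def youden_threshold(probs, labels):
    """Choose the cutoff maximizing J = sensitivity + specificity - 1,
    scanning each distinct predicted probability as a candidate threshold."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_t, best_j = 0.5, -1.0
    for t in sorted(set(probs)):
        tp = sum(p >= t and y == 1 for p, y in zip(probs, labels))
        tn = sum(p < t and y == 0 for p, y in zip(probs, labels))
        j = tp / pos + tn / neg - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# hypothetical model probabilities and VictimOfCyberDeception outcomes
probs  = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2, 0.1]
labels = [1,   1,   1,   0,   1,   0,   0,   0]
t, j = youden_threshold(probs, labels)
print(t, j)  # 0.4 0.75
```

The study's `opt_prag` threshold plays the same role: the cutoff on predicted probability that best separates previously deceived from non-deceived respondents.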
A Risk Score was computed using the predicted probabilities from the logistic regression model with interaction terms between Factor 1 (risk perception) and Factor 2 (compliance and security). The Risk Score represents the likelihood that an individual falls into the “VictimOfCyberDeception” category (previously deceived). Higher scores indicate an increased probability of heuristic-driven decision-making leading to susceptibility to phishing and social engineering tactics.
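The Risk Score computation amounts to evaluating the logistic model with its interaction term. The coefficients below are illustrative placeholders, not the fitted estimates from the study:

```python
import math

def risk_score(f1, f2, b0=-0.5, b1=-0.8, b2=-1.2, b3=0.4):
    """Predicted probability of prior cyber deception from a logistic model
    with an F1 x F2 interaction term (illustrative coefficients only)."""
    z = b0 + b1 * f1 + b2 * f2 + b3 * f1 * f2
    return 1.0 / (1.0 + math.exp(-z))

# a respondent with high risk perception (F1) but low compliance (F2)
print(round(risk_score(f1=1.0, f2=-1.0), 3))
```

With these placeholder coefficients, low compliance raises the predicted probability despite high risk perception—the pattern the study labels the security paradox.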
In order to refine the understanding of cybersecurity decision-making, a clustering analysis was performed to identify distinct security behavior profiles. K-means clustering was chosen due to its effectiveness in segmenting individuals into homogenous groups based on security-related cognitive traits. The optimal number of clusters was determined using the silhouette method, which balances within-cluster cohesion and between-cluster separation; on this basis, respondents’ factor scores were partitioned into eight groups using k-means. Specifically, we performed clustering on the two primary factor scores derived from the EFA/CFA (the risk perception factor and the compliance and security factor). Each participant is represented as a point in this two-dimensional factor space. Clustering in this space groups together individuals with similar profiles in terms of these underlying cognitive dimensions. Using participants’ factor scores as clustering features ensures that the resulting groups reflect meaningfully different cognitive-behavioral profiles, rather than arbitrary divisions. The resulting clusters were analyzed to examine how security attitudes and behaviors varied among different groups, with special attention to differences in prior cyber deception experiences. The Kruskal–Wallis test was used to evaluate statistical differences in cybersecurity risk perception across clusters, and Dunn’s post hoc test with Bonferroni correction was applied to identify significant pairwise differences.
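The silhouette-based selection of k can be illustrated on synthetic factor scores. This sketch assumes scikit-learn is available and plants two well-separated groups in the two-dimensional factor space, so the silhouette criterion recovers k = 2:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(42)
# synthetic (risk perception, compliance and security) factor scores:
# two tight clusters standing in for distinct cognitive profiles
scores = np.vstack([rng.normal([-1.5, -1.5], 0.3, size=(30, 2)),
                    rng.normal([1.5, 1.5], 0.3, size=(30, 2))])

# choose k by maximizing the average silhouette coefficient
best_k = max(range(2, 7),
             key=lambda k: silhouette_score(
                 scores,
                 KMeans(n_clusters=k, n_init=10,
                        random_state=0).fit_predict(scores)))
print(best_k)
```

On the study's real factor scores the same criterion indicated eight clusters; the planted two-cluster example simply makes the mechanics visible.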
To investigate the role of cognitive heuristics in cybersecurity behavior, a mediation analysis was conducted to assess whether risk perception mediated the relationship between security behaviors—such as URL verification—and phishing susceptibility. Mediation analysis was chosen to quantify the indirect effects of cybersecurity awareness on susceptibility outcomes, allowing for a decomposition of direct and mediated pathways. The analysis employed bootstrapped mediation models to estimate the average causal mediation effect (ACME) and average direct effect (ADE), controlling for demographic covariates such as education level [62]. Additionally, a multinomial logistic regression model was used to explore whether higher education levels moderated heuristic-driven security decisions, assessing whether individuals with more formal cybersecurity knowledge were less likely to rely on cognitive shortcuts.
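The product-of-coefficients logic behind ACME and ADE can be sketched on synthetic data. This is a simplified, non-bootstrapped version of the bootstrapped mediation models used in the study, with illustrative variable roles (URL verification → risk perception → susceptibility):

```python
import numpy as np

def mediation_effects(x, m, y):
    """Product-of-coefficients mediation sketch:
    ACME = a*b (indirect effect via the mediator), ADE = c' (direct effect)."""
    X1 = np.column_stack([np.ones_like(x), x])
    a = np.linalg.lstsq(X1, m, rcond=None)[0][1]        # M ~ X  (slope a)
    X2 = np.column_stack([np.ones_like(x), m, x])
    coef = np.linalg.lstsq(X2, y, rcond=None)[0]        # Y ~ M + X
    b, c_prime = coef[1], coef[2]
    return a * b, c_prime                               # (ACME, ADE)

# synthetic data: x raises the mediator m, which in turn lowers y
rng = np.random.default_rng(1)
x = rng.normal(size=500)
m = 0.6 * x + rng.normal(scale=0.5, size=500)
y = -0.7 * m + 0.1 * x + rng.normal(scale=0.5, size=500)
acme, ade = mediation_effects(x, m, y)
print(round(acme, 2), round(ade, 2))  # ACME near -0.42, ADE near 0.10
```

A bootstrapped version would resample (x, m, y) rows, recompute both effects per resample, and report percentile confidence intervals, as the R `mediation` package does.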
Finally, association rule mining with the Apriori algorithm was applied to extract behavioral patterns in cybersecurity decision-making. This approach was selected to identify recurring security behavior patterns and assess how specific actions—such as password creation habits or phishing detection strategies—correlated with cyber deception experiences. The dataset was transformed into a transaction-based format, and association rules were generated based on minimum support and confidence thresholds. Rules were ranked using lift values to determine the strongest predictive relationships between cybersecurity practices and phishing susceptibility.
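The support, confidence, and lift metrics used to rank rules can be computed directly. The transactions below are hypothetical behavior sets per respondent, not survey data:

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support, confidence, and lift for one association rule A -> C
    over a list of transactions (each a set of observed behaviors)."""
    n = len(transactions)
    a = sum(antecedent <= t for t in transactions) / n          # P(A)
    c = sum(consequent <= t for t in transactions) / n          # P(C)
    both = sum((antecedent | consequent) <= t for t in transactions) / n  # P(A and C)
    return both, both / a, (both / a) / c  # support, confidence, lift

# hypothetical respondent behavior sets
tx = [{"reuses_password", "ignores_url", "deceived"},
      {"reuses_password", "deceived"},
      {"unique_passwords", "checks_url"},
      {"unique_passwords", "checks_url"},
      {"reuses_password", "ignores_url", "deceived"}]
sup, conf, lift = rule_metrics(tx, {"reuses_password"}, {"deceived"})
print(sup, conf, round(lift, 2))  # 0.6 1.0 1.67
```

Apriori's contribution is efficiently enumerating candidate itemsets above the minimum support threshold; the ranking step then sorts the surviving rules by lift, as sketched here.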
All statistical analyses were conducted using R. The psych package was used for factor analysis, lavaan for CFA, pROC for ROC analysis, cluster for k-means clustering, mediation for mediation modeling, and arules for association rule mining. Data visualization was performed using ggplot2, and model diagnostics were examined to ensure robustness. The methodological framework adopted in this study ensured the rigorous examination of cybersecurity decision-making by integrating psychometric validation, predictive modeling, and behavioral segmentation. By combining factor analysis, regression modeling, clustering techniques, and association rule mining, the study provided a multidimensional perspective on the cognitive heuristics underlying cybersecurity behavior.

3.3. Ethical Considerations

This study adhered to standard ethical guidelines for survey-based research, ensuring participant confidentiality and data protection. Given that the study involved an anonymous online questionnaire with no collection of personally identifiable information, formal approval from an institutional ethics committee was not required. Participants were informed about the study’s purpose and their voluntary participation, with the option to withdraw at any time. Data security measures were implemented, ensuring that responses were stored securely and accessed only by the research team.

4. Results

This section presents the findings of the study, focusing on the heuristic-driven decision-making tendencies of participants in cybersecurity contexts. The results are structured around descriptive statistics (Table A2, Table A3, Table A4, Table A5, Table A6 and Table A7, Appendix B), EFA, and key heuristic patterns influencing cybersecurity behavior.

4.1. Correlation Between Cybersecurity Decision Factors

Before performing inferential analyses, we examined the relationships among the cybersecurity decision factors using a tetrachoric correlation matrix. The results indicate moderate correlations between several factors, particularly between regulatory compliance and data protection (r = 0.72), as well as between compliance and financial loss prevention (r = 0.48). In contrast, reputational protection shows a weak association with operational continuity (r = −0.025), suggesting that these motivations are largely independent. The full correlation matrix is provided in Table A8 (Appendix B).

4.2. Logistic Regression: Predicting Susceptibility to Social Engineering

To determine whether cybersecurity motivations predict heuristic cyber behaviors, we performed logistic regression using susceptibility to social engineering as the dependent variable (binary: 1 = susceptible, 0 = not susceptible). The results reveal a significant negative association between regulatory compliance and susceptibility (β = −2.074, p = 0.002). This result aligns with expectations—individuals who rigorously follow security policies indeed tend to avoid falling victim. While unsurprising, this empirical confirmation underscores the protective value of compliance. More interestingly, however, we observed that many users with high risk awareness still fell victim (a manifestation of the ‘security paradox’), indicating that knowledge alone is insufficient without corresponding secure action. Our multi-method analysis brings this paradox to light by showing that individuals prioritizing compliance are less likely to fall victim to social engineering attacks. Data protection approaches significance (p = 0.097), suggesting a potential protective effect. Other factors, including reputational protection, financial loss prevention, and operational continuity, do not show statistically significant relationships. Full regression coefficients are provided in Table A9 (Appendix B).
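To aid interpretation of the coefficient scale, the sketch below converts the reported compliance coefficient into an odds ratio, a standard transformation for logistic models; only the β value is taken from Table A9.

```python
import math

# Convert the reported logistic coefficient for regulatory compliance
# (beta = -2.074, from Table A9) into an odds ratio.
beta_compliance = -2.074
odds_ratio = math.exp(beta_compliance)

# An odds ratio of ~0.13 means that, holding other predictors fixed,
# prioritizing compliance multiplies the odds of susceptibility to
# social engineering by about 0.13 (an ~87% reduction in odds).
```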

4.3. Association Between Cybersecurity Motivations and Cybersecurity Behaviors

To evaluate whether cybersecurity decision factors influence behaviors such as URL verification and password creation, we conducted chi-square tests. The results indicate strong associations between data protection concerns and both behaviors (p < 0.001), suggesting that individuals who prioritize data protection are more likely to adopt secure practices.
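The chi-square statistic underlying these tests can be computed directly; the sketch below does so for a hypothetical 2×2 contingency table (the counts are illustrative only, not the survey data).

```python
# Minimal chi-square test of independence on a hypothetical 2x2 table:
# rows = prioritizes data protection (yes/no), cols = verifies URLs (yes/no).
observed = [[30, 10],
            [20, 40]]

row_totals = [sum(r) for r in observed]
col_totals = [sum(c) for c in zip(*observed)]
n = sum(row_totals)

chi_sq = 0.0
for i, row in enumerate(observed):
    for j, obs in enumerate(row):
        # Expected count under independence of rows and columns.
        expected = row_totals[i] * col_totals[j] / n
        chi_sq += (obs - expected) ** 2 / expected
```

For these toy counts the statistic is 16.67 with 1 degree of freedom, which would be compared against the chi-square distribution to obtain a p-value.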

4.4. Impact on Perceived Cybersecurity Importance

Using Kruskal–Wallis tests, we assessed whether cybersecurity motivations influence the perceived importance of cybersecurity. The findings indicate that reputational protection (p = 0.0007) and operational continuity (p = 0.0057) significantly affect how much cybersecurity is valued. In contrast, compliance, financial loss prevention, and data protection do not significantly alter perceptions of cybersecurity importance. These results highlight that motivations linked to external consequences (such as reputation and business continuity) may drive cybersecurity prioritization more than compliance or financial concerns.
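For reference, the Kruskal–Wallis H statistic reported here reduces to a simple computation on pooled ranks; the sketch below evaluates it for three hypothetical groups with no tied observations (toy data, not the survey responses).

```python
# Kruskal-Wallis H statistic computed by hand (toy data, no ties).
groups = [[1.2, 2.5, 3.1], [4.0, 5.6, 6.3], [7.7, 8.2, 9.9]]

pooled = sorted(v for g in groups for v in g)
rank = {v: i + 1 for i, v in enumerate(pooled)}  # ranks 1..N, no ties

n_total = len(pooled)
# H = 12 / (N(N+1)) * sum(R_i^2 / n_i) - 3(N+1), R_i = rank sum of group i
h = 12 / (n_total * (n_total + 1)) * sum(
    sum(rank[v] for v in g) ** 2 / len(g) for g in groups
) - 3 * (n_total + 1)
```

Here H = 7.2 with 2 degrees of freedom; with tied observations (common for Likert data, as in this study) a tie correction is additionally applied.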

4.5. Justification for Exploratory Factor Analysis (EFA)

These findings suggest that cybersecurity decision-making is not entirely independent but follows underlying latent structures, which EFA will help uncover. By identifying these factors, we aim to refine models of cybersecurity behavior and enhance targeted interventions for improving cybersecurity practices.
A factorial analysis was conducted to assess the latent structure underlying cybersecurity decision-making factors. The Kaiser–Meyer–Olkin (KMO) test for sampling adequacy yielded a value of 0.72, suggesting the appropriateness of factor analysis. Bartlett’s test of sphericity was significant (χ² = 146.73, p < 0.001), confirming the presence of intercorrelations among variables.
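The KMO statistic compares squared bivariate correlations against squared partial correlations obtained from the inverse of the correlation matrix. A minimal numpy sketch on an illustrative 3×3 correlation matrix follows; the matrix values are invented, not the study's data.

```python
import numpy as np

# KMO sampling-adequacy sketch on an illustrative correlation matrix.
R = np.array([[1.0, 0.6, 0.5],
              [0.6, 1.0, 0.4],
              [0.5, 0.4, 1.0]])

inv = np.linalg.inv(R)
d = np.sqrt(np.diag(inv))
partial = -inv / np.outer(d, d)   # partial correlation matrix
np.fill_diagonal(partial, 0.0)

off = ~np.eye(R.shape[0], dtype=bool)  # off-diagonal mask
# KMO: squared correlations relative to squared partial correlations.
kmo = (R[off] ** 2).sum() / ((R[off] ** 2).sum() + (partial[off] ** 2).sum())
```

Values above roughly 0.6–0.7 (such as the 0.72 obtained in this study) are conventionally taken to indicate adequate sampling for factor analysis.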
The EFA using a maximum likelihood extraction method with oblimin rotation resulted in a two-factor solution explaining 61% of the total variance. Factor 1 (Table 1) predominantly loaded onto Reputation Protection (0.998) and Financial Protection (0.464), whereas Factor 2 encompassed Regulatory Compliance (0.912), Operational Continuity (0.510), and Data Protection (0.792). The communalities ranged from 0.25 to 0.99, with Reputation Protection (0.995) and Regulatory Compliance (0.847) showing the highest values, indicating strong contributions to their respective factors.
The CFA validated this structure, yielding a strong model fit with χ²(4) = 6.733, p = 0.151, RMSEA = 0.059, CFI = 0.998, and TLI = 0.994 (Table 2). The Cronbach’s alpha for the overall scale was 0.80, indicating good internal consistency. The two latent factors showed a moderate correlation (0.322), suggesting that while related, they capture distinct aspects of cybersecurity decision-making (Table 2). Factor 1 (covering “Protection against reputational damage” and “Protection against financial losses”) had a Cronbach’s alpha of 0.76. Factor 2 (covering “Compliance with regulations and standards”, “Prevention of operational and service disruptions”, and “Protection of sensitive personal or organizational data”) had a Cronbach’s alpha of 0.82 (Table A10, Appendix B).
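Cronbach's alpha, used above as the internal-consistency measure, follows directly from item and total-score variances; the sketch below computes it for hypothetical item scores (the study's values came from the R psych/lavaan workflow).

```python
from statistics import pvariance

# Cronbach's alpha on hypothetical item scores: three items,
# four respondents (invented numbers, not the survey data).
items = [
    [1, 2, 3, 4],   # item 1, one score per respondent
    [2, 2, 4, 4],   # item 2
    [1, 3, 3, 5],   # item 3
]

k = len(items)
totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
# alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
alpha = k / (k - 1) * (1 - sum(pvariance(i) for i in items) / pvariance(totals))
```

For these toy scores alpha ≈ 0.93; values around 0.76–0.82, as reported for the two factors, are conventionally read as acceptable-to-good consistency.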
The distributions of Factor 1 and Factor 2 scores (Figure 1 and Figure 2) indicate a strong polarization among participants, with most responses clustering around extreme values. These factor scores were derived from the EFA and CFA models, where participant responses were transformed into latent constructs representing underlying cybersecurity decision-making patterns. Factor scores were computed using the regression method, ensuring that individual responses were mapped onto a standardized scale. The resulting histograms illustrate the frequency of observed scores, highlighting distinct decision-making profiles. Given the role of heuristics in security-related judgments, these polarized distributions may suggest reliance on cognitive shortcuts such as risk aversion or the availability heuristic, though further investigation is required to substantiate this interpretation.
These findings reinforce the robustness of the constructs used in studying cybersecurity decision-making and social engineering heuristics.

4.6. Risk Score Distributions and Predictive Analytics

To assess the influence of heuristic-driven decision-making on cybersecurity vulnerability, a binary logistic regression model was employed, predicting prior exposure to cyber deception (“VictimOfCyberDeception”) based on latent decision-making factors (Factor 1, Factor 2). The model yielded strong predictive power (AUC = 0.83), suggesting that individuals exhibiting specific heuristic patterns were more likely to have been targeted by cyberattacks. The initial logistic regression model (Table A11, Appendix B) indicated that Factor 1 had a significant negative effect on cyber deception experience (Estimate = −1.0545, p < 0.001), suggesting that individuals scoring higher on Factor 1 were less likely to have been deceived. In contrast, Factor 2 did not significantly predict deception experience (p = 0.456).
To further investigate the interaction between heuristic-driven decision-making components, an interaction term (Factor 1 * Factor 2) was included in a second logistic regression model. This model demonstrated improved explanatory power, with the interaction term approaching significance (Estimate = 3.3439, p = 0.079), indicating that the relationship between Factor 1 and cyber deception experience might depend on the level of Factor 2. The area under the ROC curve (AUC) for this model was 0.8462 (Figure 3), reflecting strong predictive performance.
Following the CFA, we derived cybersecurity risk scores to assess individual susceptibility to security threats. The risk scores were computed from the interaction model (Table A12, Appendix B) and categorized into low-, medium-, and high-risk groups, ensuring a standardized mapping of participant responses onto latent constructs. The distributions of these risk scores after threshold optimization are presented in Figure 4. These distributions were obtained by computing factor scores using the regression method. The bimodal nature of the distributions suggests that participants exhibit distinct decision-making patterns, with some demonstrating a strong proactive stance towards cybersecurity, while others show heightened vulnerability. The analysis of these scores revealed a meaningful distinction in cyber deception vulnerability, with a majority of participants falling into the high-risk category. The optimal threshold for classification was determined using ROC analysis, achieving an accuracy of 76%, a precision of 91%, and a recall of 73%. These findings underscore the role of heuristic cognitive structures in cybersecurity decision-making, highlighting how specific decision-making patterns may increase or mitigate vulnerability to cyber deception.
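The AUC values reported above admit a direct probabilistic reading: the chance that a randomly chosen positive case receives a higher model score than a randomly chosen negative case. The sketch below computes AUC from that definition on invented scores (the study used the R pROC package).

```python
# AUC from its probabilistic definition: P(score_pos > score_neg),
# with ties counted as 0.5. Scores are invented for illustration.
pos_scores = [0.9, 0.8, 0.4]   # model scores, deceived participants
neg_scores = [0.5, 0.3, 0.2]   # model scores, non-deceived participants

wins = sum(
    1.0 if p > n else 0.5 if p == n else 0.0
    for p in pos_scores
    for n in neg_scores
)
auc = wins / (len(pos_scores) * len(neg_scores))
```

Here AUC = 8/9 ≈ 0.89; values near 0.83–0.85, as obtained for the study's models, indicate strong discrimination between deceived and non-deceived respondents.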

4.7. Clustering Analysis of Security Behavior

A k-means clustering approach was employed to classify individuals based on the two extracted latent factors (Factor 1 and Factor 2), reflecting distinct patterns in cybersecurity decision-making. The optimal number of clusters was determined using the Silhouette method (Figure A1, Appendix B), which indicated that k = 8 provided the most coherent separation, balancing compactness and distinctiveness across groups. The resulting clusters were analyzed in relation to prior exposure to cybersecurity deception, demonstrating notable variations in susceptibility.
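The silhouette coefficient behind this model-selection step measures, for each point, within-cluster cohesion against separation from the nearest other cluster; the Silhouette method averages this value over all points for each candidate k and selects the maximum. A hand computation for a single point on toy 1-D data illustrates the coefficient (values are invented).

```python
# Silhouette coefficient for one point:
#   a = mean distance to the other members of its own cluster,
#   b = mean distance to the nearest other cluster,
#   s = (b - a) / max(a, b), in [-1, 1].
own_cluster = [1.0, 1.2, 0.8]     # the point 1.0 belongs here
other_cluster = [5.0, 5.5, 4.5]   # nearest competing cluster

point = 1.0
a = sum(abs(point - x) for x in own_cluster if x != point) / (len(own_cluster) - 1)
b = sum(abs(point - x) for x in other_cluster) / len(other_cluster)
s = (b - a) / max(a, b)
```

For this well-separated toy configuration s = 0.95, close to the maximum of 1; a k whose average silhouette is highest is taken as the most coherent partition.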

4.8. Clustering Analysis and Behavioral Insights

To examine behavioral nuances, we analyzed specific cybersecurity actions within clusters using boxplot visualizations (Figure A2, Figure A3, Figure A4 and Figure A5 from Appendix B).
  • Reaction to Viruses (Figure A2, Appendix B): Significant variation was observed across clusters, with some groups demonstrating higher reactivity in response to security threats.
  • Password Creation (Figure A3, Appendix B): Marked differences emerged between clusters, with certain groups exhibiting stronger password hygiene practices.
  • Checking URLs (Figure A4, Appendix B): The likelihood of verifying website authenticity varied substantially, suggesting distinct levels of awareness and precaution across clusters.
  • Susceptibility to Social Engineering (Figure A5, Appendix B): Some clusters displayed a markedly higher tendency to fall for social engineering tactics, aligning with lower security awareness.
These findings were further substantiated using pairwise Wilcoxon rank-sum tests, which confirmed statistically significant behavioral differences between certain clusters, particularly in URL verification (p < 0.001), password creation (p < 0.001), and reaction to virus threats (p < 0.001).
Clusters 1 and 8, for instance, contained only participants who had never been deceived, whereas Clusters 2, 4, and 6 exhibited a substantially higher proportion of individuals with prior deception experiences. This clustering pattern suggests that cognitive differences in heuristic decision-making may be associated with varying levels of susceptibility to cyber threats.
To further examine the relationship between cybersecurity risk perception and self-reported importance of security, a Kruskal–Wallis test was conducted to assess differences in risk scores across security awareness levels, measured on a Likert scale from 1 to 5. The results indicated a statistically significant effect (χ²(4) = 13.619, p = 0.0086), suggesting that perceived importance of cybersecurity influences risk assessment. A Dunn’s post hoc test with Bonferroni correction (Table A13, Appendix B) identified significant differences in risk perception between individuals rating cybersecurity importance as 3 versus 2 (p = 0.0082) and 5 versus 2 (p = 0.0475). These findings indicate that individuals with moderate (3) or very high (5) security importance ratings exhibit distinct cybersecurity risk perceptions compared to those who perceive security as less important (2). However, no significant differences emerged between the highest and lowest groups, suggesting potential nonlinearities in how security awareness translates into risk assessment.
Further analyses examined the demographic influence on cybersecurity decision-making by testing the association between cluster membership and education level as well as gender. The results confirmed significant associations in both cases, with education level showing a strong relationship with cluster assignment (χ²(14) = 48.54, p < 0.00001) and gender also exhibiting a significant effect (χ²(14) = 40.39, p = 0.0002). These findings highlight that differences in cybersecurity decision-making may be shaped not only by individual cognitive tendencies but also by broader demographic factors.
Overall, these results emphasize the structured nature of cybersecurity decision-making, which appears to be organized into distinct cognitive profiles. The relationship between cybersecurity awareness and risk perception exhibits nonlinear patterns, with notable differences particularly among those with moderate or high awareness levels. Furthermore, demographic influences suggest that educational background and gender play significant roles in shaping cybersecurity-related cognitive structures. These insights provide a foundation for targeted interventions and adaptive security strategies, considering both cognitive heuristics and demographic variations in cybersecurity behavior.

4.9. Advanced Clustering Analysis on Security Behavior Factors

To further investigate the heterogeneity in security behaviors, an advanced clustering analysis was performed using key security-related factors, specifically Factor 1, Factor 2, URL verification, password creation, and reaction to viruses. The resulting clusters, visualized in Figure 5, illustrate distinct groupings of individuals based on their security practices. To assess the significance of clustering outcomes, a series of Kruskal–Wallis tests were conducted to compare cluster differences in key behavioral factors. The results indicated statistically significant differences in URL verification scores (χ² = 67.143, p = 5.57 × 10⁻¹²), password creation practices (χ² = 42.494, p = 4.17 × 10⁻⁷), and reactions to malware threats (χ² = 67.671, p = 4.36 × 10⁻¹²), supporting the validity of the identified clusters in differentiating cybersecurity behaviors.
Furthermore, an analysis of social engineering susceptibility was performed across clusters, with a Kruskal–Wallis test revealing a significant effect (χ² = 30.143, p = 8.94 × 10⁻⁵). Post hoc pairwise Wilcoxon comparisons with Bonferroni correction highlighted specific clusters exhibiting heightened susceptibility to manipulation techniques (Table A14, Appendix B).
To provide context beyond the statistics, we qualitatively characterized each of the eight clusters identified by the k-means analysis:
  • Cluster 8—“Vigilant” (High Awareness, High Compliance): This cluster scored high on both risk perception and security compliance. Its members exhibit strong security habits (e.g., careful password management and rigorous URL checking) and, notably, none had fallen victim to prior cyber deception. They represent highly vigilant individuals.
  • Cluster 6—“High-Risk” (Low Awareness, Low Compliance): This group is the mirror opposite, with low risk perception and poor compliance behaviors. Participants in Cluster 6 tended to ignore security measures (weak password practices, low reaction to threats) and had the highest incidence of past social engineering victimization. They epitomize a highly vulnerable profile.
  • Cluster 4—“Aware but Passive” (High Awareness, Lower Compliance): Cluster 4 members understand cyber risks (high risk perception) but do not consistently act on that knowledge (only moderate compliance with best practices). This knowledge–action gap—a manifestation of the security paradox—means they still experienced above-average susceptibility to attacks despite knowing better.
  • Cluster 1—“Compliant Rule-Followers” (Moderate Awareness, High Compliance): Individuals in Cluster 1 displayed very diligent security behavior (high policy compliance and preventive actions) even though their personal risk perception was only moderate. Interestingly, like Cluster 8, no one in Cluster 1 had been deceived previously. This suggests that strict adherence to recommended practices can protect users even if they do not feel highly concerned about security.
  • Clusters 2, 3, 5, and 7—“Intermediate Profiles”: The remaining clusters fell in between these extremes, with mixed levels of awareness and compliance. For example, Cluster 2 showed moderately low compliance (skipping some security measures) coupled with low–medium risk awareness, correlating with a higher-than-average deception rate. Clusters 3 and 5 had more balanced profiles (moderate awareness and fairly good compliance on certain behaviors), resulting in moderate vulnerability. Cluster 7 was somewhat similar to Cluster 4 (relatively higher risk perception with only average compliance), though slightly more protected. Each cluster represents a distinct cybersecurity persona—from the highly vigilant to the highly vulnerable—defined by different combinations of cognitive mindset and actual practice.

4.10. Multinomial Logistic Regression Analysis of Social Engineering Susceptibility

To further explore the relationship between behavioral factors and vulnerability to social engineering, a multinomial logistic regression model was estimated with social engineering susceptibility as the dependent variable and key security factors as predictors. The model demonstrated that URL verification (β = 0.65, p = 0.0005) and password creation practices (β = 0.16, p = 0.016) were significant predictors, indicating that individuals engaging in more rigorous security behaviors exhibited lower susceptibility to social engineering attacks (Table A15, Appendix B). Conversely, reaction to security threats (β = −0.74, p < 10⁻⁶) had a strong negative association with susceptibility, suggesting that individuals who responded proactively to security incidents were less likely to be deceived by manipulative techniques.
The clustering and statistical analyses collectively reveal distinct cybersecurity behavior profiles, demonstrating that prior exposure to cyber deception, risk perception, and security attitudes vary meaningfully across clusters. The results provide empirical support for the idea that cybersecurity decision-making is influenced by complex heuristic mechanisms rather than a single factor such as risk perception alone. Future research could further investigate the cognitive biases underlying these behavioral differences and explore targeted interventions to improve cybersecurity resilience among at-risk groups.

4.11. Association Rule Mining for Cybersecurity Decision-Making

To identify patterns in cybersecurity decision-making and risk perception, an association rule mining analysis was conducted using the Apriori algorithm. The dataset consisted of key decision-making factors related to cybersecurity awareness, preventive behaviors, and susceptibility to social engineering. The extracted rules provide insights into how security practices influence risk exposure and the likelihood of being misled by fraudulent attempts.
The most relevant association rules, ranked by lift, are presented in Table 3. The results highlight the relationships between proactive security behaviors and lower susceptibility to deception, as well as the role of financial protection and compliance concerns in shaping security importance.
The results indicate that users who employ strong password creation practices and frequent URL verification are significantly less likely to be deceived by phishing attempts (lift = 1.63). This highlights the effectiveness of proactive security behaviors in reducing exposure to cyber threats. Similarly, individuals who prioritize financial security and regulatory compliance are more likely to assign high importance to cybersecurity (lift = 1.32). This finding suggests that regulatory frameworks and financial risk considerations influence security awareness and decision-making.
A notable observation is that individuals with poor security habits, such as infrequent URL verification or reusing passwords, tend to perceive cybersecurity risks as higher (lift = 1.64). This may indicate a cognitive dissonance effect, where users are aware of security threats but fail to implement protective measures. This awareness–action gap suggests the need for targeted interventions that encourage behavioral change rather than just increasing awareness.
The association rule analysis provides meaningful insights into how security behaviors relate to risk perception and vulnerability to deception. However, additional analyses can further refine these findings. Clustering techniques can be employed to identify distinct security behavior profiles, while logistic regression models could assess the predictive strength of these security habits on actual phishing susceptibility. Moreover, sequential pattern mining could explore how security habits evolve over time, offering a longitudinal perspective on cybersecurity behavior adaptation.
The causal mediation analysis performed in this study investigates the indirect effect of URL verification practices (independent variable) on the likelihood of falling victim to phishing attempts (dependent variable), mediated by perceived risk score (mediator).

4.12. Mediation Analysis

The Average Causal Mediation Effect (ACME), which quantifies the extent to which URL verification reduces phishing susceptibility through risk perception, was statistically significant. For the control group, ACME was 0.0491 (95% CI: [0.0143, 0.13], p < 0.001), and for the treated group, ACME was slightly higher at 0.0589 (95% CI: [0.0237, 0.12], p < 0.001). The average ACME across groups was 0.0540 (95% CI: [0.0190, 0.12], p < 0.001), confirming that a significant portion of the total effect is mediated by perceived risk.
The Average Direct Effect (ADE), representing the direct influence of URL verification on phishing susceptibility after accounting for the mediator, was not statistically significant. For the control group, ADE was 0.0223 (95% CI: [−0.0253, 0.04], p = 0.22), and for the treated group, ADE was 0.0321 (95% CI: [−0.0330, 0.06], p = 0.22). The average ADE was 0.0272 (95% CI: [−0.0291, 0.05], p = 0.22), indicating that URL verification does not directly reduce phishing susceptibility but operates primarily through increasing risk perception.
The total effect, which represents the combined influence of both direct and indirect pathways, was 0.0812 (95% CI: [0.0381, 0.11], p < 0.001), suggesting that verifying URLs significantly reduces phishing vulnerability. Furthermore, the proportion of the total effect mediated by risk perception was 60.52% for the control group (95% CI: [30.05%, 136%], p < 0.001) and 72.51% for the treated group (95% CI: [47.16%, 127%], p < 0.001). The average proportion mediated was 66.51% (95% CI: [39.41%, 132%], p < 0.001), indicating that risk perception plays a crucial role in explaining the impact of URL verification behavior on phishing susceptibility.
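These reported quantities obey a simple decomposition: the total effect is the sum of the indirect (ACME) and direct (ADE) effects, and the proportion mediated is ACME divided by the total. The sketch below reproduces that arithmetic from the averages in the text; the mediation package estimates these quantities via quasi-Bayesian simulation, so the small discrepancy from the reported 66.51% reflects rounding of the inputs.

```python
# Effect decomposition using the average estimates reported above.
acme = 0.0540   # average causal mediation effect (indirect path)
ade = 0.0272    # average direct effect

total = acme + ade               # total effect = indirect + direct
proportion_mediated = acme / total  # share of total effect via risk perception
```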

4.13. Heuristic-Driven Cybersecurity Behaviors

To evaluate respondents’ ability to identify fake URLs, participants were asked several questions whose answer options included homoglyphs of the correct URL.
A large portion of respondents did not notice the replacement of the letters, indicating a low level of attention to detail, a lack of familiarity with the concept of homoglyphs, or the presence of satisficing heuristics in their decision-making process.
Respondents were also asked: ‘Have you ever been in a situation where someone tried to quickly gain your trust using compliments or other flattery tactics? How did you handle this situation?’ The purpose of these questions was to identify the heuristics used in respondents’ decision-making processes when exposed to this social engineering scenario. A total of 138 respondents indicated that they cooperated with individuals who employed such tactics, suggesting the presence of the affect heuristic in their decision-making process. On the other hand, 56 respondents mentioned that they were either hesitant to cooperate with such individuals because they were unsure of the reasons behind the approach or chose not to engage at all. By analyzing the possible motives behind this social interaction, these 56 respondents employed the simulation heuristic before deciding to decline cooperation.
Participants selected what indicates safety while browsing a website. While certain cues, such as the padlock icon in the browser’s address bar and the use of the ‘https’ protocol, serve as indicators of the security level of the connection, some users may mistakenly interpret them as indicators of the overall security of the website and its content. Thus, the representativeness heuristic that respondents relied on in their decision-making can lead to misjudgments regarding the level of safety [31].
To identify the influence of the affect heuristic and the familiarity heuristic on decision-making about sharing sensitive data, the following question was posed: ‘Have you ever provided sensitive information to acquaintances or close individuals?’ According to the collected responses, the majority of respondents indicated that their decisions about sharing sensitive information were influenced by social context, emotions, and their degree of familiarity with the person they were interacting with. Only 4% of respondents indicated that they did not provide sensitive data, showing that their decisions were largely unaffected by these heuristics.

5. Discussion

This study provides a structured and rigorous empirical analysis of how cognitive heuristics influence decision-making in cybersecurity, particularly in the context of susceptibility to social engineering attacks. Our findings contribute by demonstrating that heuristic reliance in cybersecurity decision-making is systematically structured into latent cognitive dimensions.

5.1. Comparison with Previous Research

Our results indicate that cybersecurity decision-making can be characterized by two underlying cognitive structures: ‘risk perception’ and ‘compliance and security’. This is consistent with prior studies suggesting that security-related behaviors are shaped by both intrinsic risk assessments and external regulatory pressures [19,48]. However, unlike previous studies that treat motivations for cybersecurity behavior as discrete elements, our factor analysis reveals that these motivations cluster into broader, interrelated cognitive domains. This supports the notion that security decisions are not purely rational but are strongly influenced by mental shortcuts that structure risk perception and security obligations.
Our findings partially align with those of [13], who identified the role of heuristics such as expertise, availability, and representativeness in security decision-making. However, our research extends their work by demonstrating that heuristic reliance is not just an ad hoc simplification but follows a structured, latent cognitive framework. Similarly, de Wit et al. (2023) [28] highlighted overconfidence and conjunction fallacy in security risk assessments, whereas our study shows that risk perception and compliance concerns shape these heuristic-driven biases in a more systematic way.
A key contribution of our study is that it contradicts some prior assumptions regarding the role of financial and reputational risk in cybersecurity behaviors. While [63] argued that financial risk perception plays a central role in protective behaviors, our regression results indicate that concerns about financial losses or brand reputation do not significantly predict phishing susceptibility. This suggests that financial risk may be perceived as an organizational rather than a personal concern, leading to a diffusion of responsibility in individual security decisions. This aligns with [16], who found that SME owners perceive cyber threats as less likely to impact them directly compared to similar organizations.
Our study introduces a novel clustering analysis that reveals the presence of cognitive dissonance in cybersecurity decision-making. We identified a subset of participants who exhibit high levels of risk perception yet fail to take protective measures, a phenomenon known as the “security paradox” [64]. However, unlike previous studies that attribute this paradox to a general lack of motivation [65], our results suggest that affect heuristics play a central role—where individuals’ security-related emotions override rational risk assessments.
Furthermore, our mediation analysis demonstrates that URL verification indirectly reduces phishing susceptibility by increasing risk awareness (ACME = 0.054, p < 0.001). This finding builds upon [56], who argued that cybersecurity training improves phishing detection primarily by increasing threat salience rather than procedural knowledge. Our study provides empirical support for this claim while highlighting that the effect is mediated by heuristic-driven cognitive processes.
Our mediation analysis further underscores the indirect role of risk perception in shaping cybersecurity behaviors. The fact that URL verification indirectly reduces phishing susceptibility through increased risk awareness (ACME = 0.054, p < 0.001) suggests that heuristic-based decision-making is not only about cognitive shortcuts but also about how individuals perceive and internalize risks. This extends earlier work by [56], who found that cybersecurity training improves phishing detection primarily by increasing threat salience rather than by imparting specific procedural knowledge. The implication of this finding is that security interventions should focus not only on teaching protective behaviors but also on reshaping risk perception to encourage more deliberate security choices.
Additionally, our research extends the work of [40,41], who showed that time pressure impairs phishing detection. While their studies primarily focused on reaction speed and accuracy under constrained conditions, our findings suggest that heuristic reliance may serve as an adaptive strategy when individuals face time constraints, albeit at the cost of increased vulnerability to deception cues.

5.2. Implications and Future Directions

One of the most critical insights from our study is the role of regulatory compliance heuristics in reducing phishing susceptibility (β = −2.074, p = 0.002). This aligns with findings that formal security frameworks such as ISO/IEC 27001 [30] and the NIST Cybersecurity Framework [29] enhance structured decision-making processes [14]. However, our study adds an important nuance: compliance concerns not only correlate with protective behaviors but also act as a cognitive anchor, influencing risk perception even in the absence of immediate security threats.
Future research should explore how AI-driven interventions can mitigate heuristic-driven security errors. Given that heuristics often lead to suboptimal cybersecurity decisions, machine learning models could be trained to detect heuristic biases in user behavior and provide real-time corrective feedback. This aligns with recent proposals advocating for AI-powered digital nudging as a strategy to counteract cognitive biases in cybersecurity [42]. Additionally, incorporating neurocognitive methods such as EEG or eye-tracking could provide deeper insights into how heuristics shape real-time security decisions.
Overall, our findings underscore the importance of rethinking security training approaches—moving beyond simple procedural knowledge and towards strategies that reshape risk perception and heuristic reliance. By understanding how heuristics structure cybersecurity decisions, organizations can design more effective interventions that align with natural cognitive processes rather than attempting to override them entirely.

5.3. Limitations and Future Research

Despite its contributions, this study has several limitations that should be acknowledged.
First, the research relies on self-reported survey data, which may introduce social desirability bias or recall inaccuracies. While self-reported measures are commonly used in cybersecurity behavior studies, future research should incorporate experimental designs or behavioral tracking to validate whether self-reported behaviors align with actual security practices. Controlled phishing simulations, for example, could provide more robust insights into heuristic-driven security vulnerabilities.
Second, the study is cross-sectional, meaning that it captures heuristic-driven cybersecurity behaviors at a single point in time. This limits the ability to establish causal relationships between heuristic reliance and security behaviors. While logistic regression models offer insights into predictive associations, a longitudinal study would be necessary to assess whether cybersecurity training or repeated exposure to security incidents reduces reliance on cognitive heuristics over time.
Third, the sample, while diverse, may not fully account for cross-cultural variations in cybersecurity heuristics. For instance, a study found that cultural worldviews influence risk perceptions, leading to variations in how different cultural groups approach security risks [66]. Future studies should explore whether the heuristic structures identified in this study are universally applicable or culturally specific.
Fourth, this study focuses primarily on individual decision-making, yet cybersecurity decisions in organizational contexts often involve collective processes. Research on security behaviors in group decision-making settings is needed to determine whether heuristics operate differently in corporate environments, where formal policies and shared responsibilities influence security choices [67].

6. Conclusions

This study provides empirical evidence that heuristic-driven decision-making plays a critical role in cybersecurity behavior, particularly in the context of social engineering vulnerabilities. By identifying two key cognitive structures—risk perception and compliance and security—this research advances the theoretical understanding of how individuals assess cyber threats and make security-related decisions. The findings indicate that while heuristics can facilitate efficient decision-making, they also contribute to systematic security errors that increase vulnerability to cyberattacks.
Our results challenge previous assumptions that cybersecurity awareness alone is sufficient to drive secure behaviors. Instead, we demonstrate that risk perception serves as a crucial intermediary, suggesting that security interventions should prioritize risk salience and cognitive reframing rather than merely increasing procedural knowledge. Moreover, the observed cognitive dissonance effect underscores the need for training programs that not only enhance awareness but also translate awareness into action.
By bridging cognitive psychology with cybersecurity research, this study lays the groundwork for targeted interventions that leverage an understanding of heuristic-driven vulnerabilities. Future research should build upon these findings by employing longitudinal designs, experimental methodologies, and cross-cultural analyses to further refine heuristic-based security models. As cyber threats continue to evolve, understanding the cognitive mechanisms that underlie security decisions will be essential for developing more effective cybersecurity strategies.

Author Contributions

The authors confirm contribution to the paper as follows: Conceptualization, V.G.-Ş. and F.C.; methodology, F.C. and S.-C.N.; software, F.C.; validation, S.-C.N.; formal analysis, V.G.-Ş. and F.C.; investigation, V.G.-Ş. and F.C.; data curation, F.C.; writing—original draft preparation, V.G.-Ş. and F.C.; writing—review and editing, S.-C.N.; visualization, V.G.-Ş. and F.C.; supervision, V.G.-Ş.; project administration, V.G.-Ş.; funding acquisition, V.G.-Ş. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the CITY FOCUS project (CF23/27.07.2023), facilitated by the National Recovery and Resilience Plan for Romania (PNRR-III-C9-2023-18/Comp9/Inv8) and financed by the European Union—NextGenerationEU.

Data Availability Statement

The data that support the findings of this study are available from the corresponding author, S.-C.N., upon reasonable request.

Acknowledgments

This work was supported by the CITY FOCUS project (CF23/27.07.2023), facilitated by the National Recovery and Resilience Plan for Romania (PNRR-III-C9-2023-18/Comp9/Inv8) and financed by the European Union—NextGenerationEU.

Conflicts of Interest

Author Floredana Constantin was employed by the company Duk-Tech. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

Appendix A. Survey Questionnaire

1. Cybersecurity is an important concern for me.
  • 1 (Strongly Disagree) to 5 (Strongly Agree)
2. What factors influence your decision to implement cybersecurity measures?
  • Protecting sensitive personal or organizational data
  • Compliance with regulatory norms and standards
  • Preventing financial loss
  • Protecting against reputational damage
  • Preventing operational disruptions and service interruptions
  • Other (please specify)
3. In your opinion, what is the biggest challenge in implementing effective cybersecurity measures?
  • Lack of budget and resources
  • Complexity of emerging threats
  • Human error and negligence
  • Inadequate expertise and skills
4. Have you ever been tricked by a fake URL, believing it was legitimate?
  • Yes
  • No
  • Not sure
5. Which of the following email addresses is valid for contacting Amazon’s sales department?
6. Which of the following email addresses is valid for contacting Microsoft’s support department?
7. Which of the following options is the correct web address for LinkedIn?
8. Which of the following URLs is correct for accessing Google’s official website?
9. How do you verify if a URL is legitimate?
  • Using a URL scanning service
  • Checking if the URL is listed in Google’s database
  • Comparing the URL against a reported phishing or malware list
  • Other (please specify)
10. Do you consider yourself susceptible to social engineering attacks as a student?
  • Yes
  • No
11. Have you ever been in a situation where someone tried to quickly gain your trust using compliments or flattery? How did you respond?
  • I was influenced by the compliments and collaborated with the person
  • I was skeptical and sought more information about their intentions
  • I ignored the compliments and kept a cautious distance
  • I have never been in such a situation
12. Have you ever been in a situation where someone tried to persuade you to take an action you did not want to, using emotional manipulation or social pressure?
  • I gave in and did what was asked
  • I resisted and stood by my decision
  • I have never been in such a situation
13. Have you ever been the target of a social engineering attack?
  • Yes
  • No
  • Not sure
14. If you were targeted by a social engineering attack, how did you identify it?
  • I was already familiar with such tactics and recognized warning signs
  • I received a warning from friends or security systems
  • I fell for the attack and only realized later what had happened
  • I don’t believe I have been targeted by social engineering
  • I have not been targeted by social engineering
15. How do you evaluate suspicious emails or messages?
  • Clicking on links in the email to verify information
  • Checking the sender’s email address for slight variations from the official one
  • Downloading and opening any attached files to confirm content
  • Responding with personal information requested to validate the sender’s identity
  • Deleting them immediately as I do not recognize the sender
  • Other (please specify)
16. What methods do you use to create a password?
  • Using a password generator tool
  • Adapting passwords based on a memorized structure or pattern
  • Entering random character combinations
  • Incorporating personal elements such as phone numbers, birthdates, or names of important people
  • Using the same password across multiple platforms
  • Other (please specify)
17. How do you react when faced with a computer virus?
  • Disconnect from the internet and seek professional help
  • Ignore it and continue using the device
  • Try to identify and resolve the issue on my own
  • Seek advice and guidance on forums or online communities
  • Other (please specify)
18. What indicators suggest safety while browsing a website?
  • Padlock icon in the browser’s address bar
  • The use of HTTPS instead of HTTP in the URL
  • Clearly stated privacy policies, terms, and conditions
  • Positive reviews and a solid online reputation
  • Other (please specify)
19. Have you ever shared sensitive personal data with acquaintances or close individuals?
  • Yes, because I trusted them and believed they would handle the information responsibly
  • Yes, because they requested the information, and I thought it was necessary for a legitimate purpose
  • Yes, because I considered the data important for our relationship or a specific situation
  • Yes, because I believed sharing the data would help solve a problem or facilitate communication
  • Yes, because I thought the recipient needed the information to assist me in some way
  • No, I have never shared sensitive data with acquaintances or close individuals
  • Other (please specify)
20. What is your level of university education?
  • Bachelor’s
  • Master’s
  • PhD
21. Are you currently a student?
  • Yes
  • No
22. What is your age?
  • 18–24 years
  • 25–34 years
  • Over 35 years
23. What is your gender?
  • Male
  • Female
  • Prefer not to say
  • Other (please specify)
24. What is your place of residence?
  • Rural
  • Urban
Table A1. Survey questions and encoding.
| Question Number | Survey Question | Question Type | Measurement Unit | Encoding Scheme |
|---|---|---|---|---|
| 1 | Cybersecurity is an important concern for me. | Likert Scale | Ordinal (1–5 Likert) | 1–5 scale (Strongly Disagree to Strongly Agree) |
| 2 | What factors influence your decision to implement cybersecurity measures? | Multiple Choice (Select all that apply) | Nominal | Binary encoding for selected options |
| 3 | In your opinion, what is the biggest challenge in implementing effective cybersecurity measures? | Multiple Choice | Nominal | 1 = Selected, 0 = Not Selected |
| 4 | Have you ever been tricked by a fake URL, believing it was legitimate? | Multiple Choice | Nominal | 1 = Yes, 2 = No, 3 = Not Sure |
| 5 | Which of the following email addresses is valid for contacting Amazon’s sales department? | Multiple Choice | Nominal | 1 = Correct, 0 = Incorrect |
| 6 | Which of the following email addresses is valid for contacting Microsoft’s support department? | Multiple Choice | Nominal | 1 = Correct, 0 = Incorrect |
| 7 | Which of the following options is the correct web address for LinkedIn? | Multiple Choice | Nominal | 1 = Correct, 0 = Incorrect |
| 8 | Which of the following URLs is correct for accessing Google’s official website? | Multiple Choice | Nominal | 1 = Correct, 0 = Incorrect |
| 9 | How do you verify if a URL is legitimate? | Multiple Choice (Select all that apply) | Nominal | Binary encoding for selected options |
| 10 | Do you consider yourself susceptible to social engineering attacks? | Multiple Choice | Nominal | 1 = Yes, 0 = No |
| 11 | Have you ever been in a situation where someone tried to quickly gain your trust using compliments or flattery? How did you respond? | Multiple Choice | Nominal | 1–4 categorical encoding |
| 12 | Have you ever been in a situation where someone tried to persuade you to take an action you did not want to, using emotional manipulation or social pressure? | Multiple Choice | Nominal | 1–3 categorical encoding |
| 13 | Have you ever been the target of a social engineering attack? | Multiple Choice | Nominal | 1 = Yes, 2 = No, 3 = Not Sure |
| 14 | If you were targeted by a social engineering attack, how did you identify it? | Multiple Choice | Nominal | 1–5 categorical encoding |
| 15 | How do you evaluate suspicious emails or messages? | Multiple Choice (Select all that apply) | Nominal | Binary encoding for selected options |
| 16 | What methods do you use to create a password? | Multiple Choice (Select all that apply) | Nominal | Binary encoding for selected options |
| 17 | How do you react when faced with a computer virus? | Multiple Choice | Nominal | Binary encoding for selected options |
| 18 | What indicators suggest safety while browsing a website? | Multiple Choice (Select all that apply) | Nominal | Binary encoding for selected options |
| 19 | Have you ever shared sensitive personal data with acquaintances or close individuals? | Multiple Choice | Nominal | Binary encoding for selected options |
| 20 | What is your level of university education? | Multiple Choice | Nominal | 1 = Bachelor’s, 2 = Master’s, 3 = PhD |
| 21 | Are you currently a student? | Binary | Binary | 1 = Yes, 0 = No |
| 22 | What is your age? | Multiple Choice | Nominal | 1 = 18–24, 2 = 25–34, 3 = Over 35 |
| 23 | What is your gender? | Multiple Choice | Nominal | 1 = Male, 2 = Female, 3 = Prefer not to say, 4 = Other |
| 24 | What is your place of residence? | Multiple Choice | Nominal | 1 = Rural, 2 = Urban |

Appendix B

Table A2. Demographic characteristics of study participants.
| Demographic Characteristic | Value | Frequency | Percentage (%) | Valid Percentage (%) | Cumulative Percentage (%) |
|---|---|---|---|---|---|
| Age Group | 18–24 years | 128 | 64.0 | 64.0 | 64.0 |
| | 25–34 years | 60 | 30.0 | 30.0 | 94.0 |
| | Over 35 years | 12 | 6.0 | 6.0 | 100.0 |
| Gender | Female | 112 | 56.0 | 56.0 | 56.0 |
| | Male | 84 | 42.0 | 42.0 | 98.0 |
| | Prefer not to say | 4 | 2.0 | 2.0 | 100.0 |
| Education Level | Bachelor’s | 122 | 61.0 | 61.0 | 61.0 |
| | Master’s | 68 | 34.0 | 34.0 | 95.0 |
| | PhD | 10 | 5.0 | 5.0 | 100.0 |
| Residence Type | Urban | 120 | 60.0 | 60.0 | 60.0 |
| | Rural | 80 | 40.0 | 40.0 | 100.0 |
| Total | | 200 | 100.0 | 100.0 | 100.0 |
Table A3. Exposure to social engineering attacks.
| Experience with Social Engineering Attacks | Frequency | Percentage (%) | Valid Percentage (%) | Cumulative Percentage (%) |
|---|---|---|---|---|
| Yes | 152 | 76.0 | 76.0 | 76.0 |
| No | 16 | 8.0 | 8.0 | 84.0 |
| Not sure | 32 | 16.0 | 16.0 | 100.0 |
| Total | 200 | 100.0 | 100.0 | 100.0 |
Table A4. Importance of cybersecurity perception (Cybersecurity is an important matter).
| Cybersecurity Importance Level | Frequency | Percentage (%) | Valid Percentage (%) | Cumulative Percentage (%) |
|---|---|---|---|---|
| 1 (Not important) | 4 | 2.0 | 2.0 | 2.0 |
| 2 | 6 | 3.0 | 3.0 | 5.0 |
| 3 | 72 | 36.0 | 36.0 | 41.0 |
| 4 | 94 | 47.0 | 47.0 | 88.0 |
| 5 (Very important) | 24 | 12.0 | 12.0 | 100.0 |
| Total | 200 | 100.0 | 100.0 | 100.0 |
Table A5. Have you ever shared sensitive information?
| Response | Frequency | Percentage (%) | Valid Percentage (%) | Cumulative Percentage (%) |
|---|---|---|---|---|
| Yes, because I trusted them | 30 | 15.0 | 15.0 | 15.0 |
| Yes, I thought it would help solve a problem | 12 | 6.0 | 6.0 | 21.0 |
| Yes, I considered the data important for a specific situation | 96 | 48.0 | 48.0 | 69.0 |
| Yes, because I believed it was necessary | 30 | 15.0 | 15.0 | 84.0 |
| No, I never shared sensitive data | 8 | 4.0 | 4.0 | 88.0 |
| Total | 200 | 100.0 | 100.0 | 100.0 |
Table A6. Experience with manipulation or social pressure (Have you ever been in a situation where someone tried to persuade you to take an action you did not want to, using emotional manipulation or social pressure?).
| Response | Frequency | Percentage (%) | Valid Percentage (%) | Cumulative Percentage (%) |
|---|---|---|---|---|
| I gave in and did what was asked | 142 | 71.0 | 71.0 | 71.0 |
| I was not in such a situation | 12 | 6.0 | 6.0 | 77.0 |
| I resisted and remained firm in my choice | 46 | 23.0 | 23.0 | 100.0 |
| Total | 200 | 100.0 | 100.0 | 100.0 |
Table A7. Do you consider yourself susceptible to social engineering attacks?
| Response | Frequency | Percentage (%) | Valid Percentage (%) | Cumulative Percentage (%) |
|---|---|---|---|---|
| Yes | 168 | 84.0 | 84.0 | 84.0 |
| No | 32 | 16.0 | 16.0 | 100.0 |
| Total | 200 | 100.0 | 100.0 | 100.0 |
Table A8. Correlation between cybersecurity decision factors.
| Factor | Reputational Protection | Regulatory Compliance | Financial Loss Prevention | Operational Continuity | Data Protection |
|---|---|---|---|---|---|
| Reputational Protection | 1.000 | 0.250 | 0.411 | −0.025 | 0.135 |
| Regulatory Compliance | 0.250 | 1.000 | 0.480 | 0.442 | 0.720 |
| Financial Loss Prevention | 0.411 | 0.480 | 1.000 | 0.341 | 0.429 |
| Operational Continuity | −0.025 | 0.442 | 0.341 | 1.000 | 0.302 |
| Data Protection | 0.135 | 0.720 | 0.429 | 0.302 | 1.000 |
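Pairwise coefficients like those in Table A8 are plain Pearson correlations between binary factor-selection indicators. A short sketch on synthetic data — the co-selection rate and seed below are invented, since the real responses are not public:

```python
import random

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

random.seed(2)
compliance = [random.randint(0, 1) for _ in range(200)]
# Synthetic assumption: data protection is co-selected with compliance
# about 70% of the time, and chosen at random otherwise.
data_prot = [c if random.random() < 0.7 else random.randint(0, 1)
             for c in compliance]

r = pearson(compliance, data_prot)
print(f"r(compliance, data protection) = {r:.3f}")
```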
Table A9. Logistic regression: predicting susceptibility to social engineering.
| Factor | Estimate | Std. Error | z Value | p-Value |
|---|---|---|---|---|
| Intercept | 2.956 | 0.549 | 5.387 | <0.001 *** |
| Reputational Protection | −0.188 | 0.606 | −0.311 | 0.756 |
| Regulatory Compliance | −2.074 | 0.691 | −3.002 | 0.002 ** |
| Financial Loss Prevention | −0.720 | 0.590 | −1.219 | 0.223 |
| Operational Continuity | −0.519 | 0.534 | −0.972 | 0.331 |
| Data Protection | 1.283 | 0.773 | 1.661 | 0.097 |
Statistically significant effects are denoted by asterisks, with *** indicating p-values below 0.001 and ** indicating p-values below 0.01.
Table A10. Reliability analysis (Cronbach’s alpha).
| Item | Raw Alpha if Item Dropped | Standardized Alpha | Item-Total Correlation (raw.r) | Item-Total Correlation (std.r) | Mean |
|---|---|---|---|---|---|
| Reputation Protection | 0.77 | 0.83 | 0.66 | 0.61 | 0.69 |
| Regulatory Compliance | 0.74 | 0.78 | 0.83 | 0.84 | 0.30 |
| Financial Protection | 0.76 | 0.80 | 0.72 | 0.74 | 0.50 |
| Operational Continuity | 0.80 | 0.84 | 0.47 | 0.52 | 0.53 |
| Data Protection | 0.76 | 0.80 | 0.70 | 0.72 | 0.32 |
| Factor 1 | 0.72 | 0.76 | 0.87 | 0.89 | 0.00 |
| Factor 2 | 0.82 | 0.83 | 0.66 | 0.61 | 0.00 |
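Reliability figures like those in Table A10 rest on Cronbach's alpha, which compares the sum of item variances to the variance of the total score. A minimal sketch on synthetic item responses — the latent-trait model, noise level, and sample size are illustrative assumptions, not the survey data:

```python
import random

def variance(xs):
    """Sample variance (n − 1 denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: list of per-item score lists, one list per scale item."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]          # total score per respondent
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

random.seed(3)
latent = [random.gauss(0, 1) for _ in range(200)]        # shared trait per respondent
items = [[l + random.gauss(0, 0.8) for l in latent]      # five noisy indicators
         for _ in range(5)]

alpha = cronbach_alpha(items)
print(f"Cronbach's alpha = {alpha:.2f}")
```

The "alpha if item dropped" column in Table A10 is obtained by recomputing the same statistic with each item removed in turn.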
Table A11. Logistic regression results for cyber deception experience.
| Predictor | Estimate | Std. Error | z-Value | p-Value | Interpretation |
|---|---|---|---|---|---|
| Intercept | 1.1567 | 0.1908 | 6.063 | <0.001 *** | Baseline log-odds of being deceived when all predictors are zero. |
| Factor 1 | −1.0545 | 0.1767 | −5.968 | <0.001 *** | Higher scores on Factor 1 are associated with lower likelihood of cyber deception. |
| Factor 2 | −0.1499 | 0.2009 | −0.746 | 0.456 | Factor 2 does not significantly predict cyber deception experience. |
Statistically significant effects are denoted by asterisks, with *** indicating p-values below 0.001.
Table A12. Logistic regression results with interaction effects.
| Predictor | Estimate | Std. Error | z-Value | p-Value | Interpretation |
|---|---|---|---|---|---|
| Intercept | 0.6072 | 0.4632 | 1.311 | 0.1899 | Baseline log-odds of being deceived when all predictors are zero. |
| Factor 1 | −3.0641 | 1.2860 | −2.383 | 0.0172 * | Higher values of Factor 1 significantly decrease the likelihood of cyber deception. |
| Factor 2 | 0.4781 | 0.6532 | 0.732 | 0.4642 | Factor 2 does not significantly predict cyber deception. |
| Factor 1 × Factor 2 | 3.3439 | 1.9040 | 1.756 | 0.0790 | Interaction term suggests a potential combined effect, but it is only marginally significant. |
* p < 0.05.
Figure A1. Silhouette score.
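The silhouette analysis behind Figure A1 scores each observation by comparing its mean distance to its own cluster (cohesion) against its mean distance to the nearest other cluster (separation). A pure-Python sketch of the average silhouette width on synthetic 2-D factor scores — the cluster centers, spread, and assignments below are invented for illustration:

```python
import math
import random

def silhouette(points, labels):
    """Mean silhouette width: s = (b − a) / max(a, b) averaged over points."""
    clusters = {}
    for p, l in zip(points, labels):
        clusters.setdefault(l, []).append(p)
    scores = []
    for p, l in zip(points, labels):
        own = [q for q in clusters[l] if q is not p]
        if not own:                      # singleton cluster: silhouette defined as 0
            scores.append(0.0)
            continue
        a = sum(math.dist(p, q) for q in own) / len(own)          # cohesion
        b = min(sum(math.dist(p, q) for q in grp) / len(grp)      # separation
                for m, grp in clusters.items() if m != l)
        scores.append((b - a) / max(a, b))
    return sum(scores) / len(scores)

random.seed(7)
centers = [(0, 0), (5, 5), (0, 6)]       # three well-separated synthetic clusters
points, labels = [], []
for c, (cx, cy) in enumerate(centers):
    for _ in range(40):
        points.append((cx + random.gauss(0, 0.5), cy + random.gauss(0, 0.5)))
        labels.append(c)

score = silhouette(points, labels)
print(f"mean silhouette = {score:.2f}")
```

Choosing the number of clusters then amounts to recomputing this score for each candidate k and keeping the k with the highest average width.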
Table A13. Dunn's post hoc pairwise comparisons.
| Comparison | Z-Score | p-Value | Interpretation |
|---|---|---|---|
| 2 vs. 1 | 0.6718 | 1.0000 | No significant difference |
| 3 vs. 1 | −1.7616 | 0.3907 | No significant difference |
| 3 vs. 2 | −3.1503 | 0.0082 | Significant difference (p < 0.05) |
| 4 vs. 1 | −1.1789 | 1.0000 | No significant difference |
| 4 vs. 2 | −2.4593 | 0.0696 | Borderline significance |
| 4 vs. 3 | 1.9351 | 0.2649 | No significant difference |
| 5 vs. 1 | −1.3890 | 0.8242 | No significant difference |
| 5 vs. 2 | −2.5936 | 0.0475 | Significant difference (p < 0.05) |
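Dunn's post hoc z statistic compares two groups' mean ranks on the pooled sample, standardized by a rank-variance term. The sketch below implements this comparison while ignoring the tie-correction term (a simplifying assumption); the group means, sizes, and seed are synthetic stand-ins for the cluster score samples:

```python
import math
import random

def avg_ranks(values):
    """Ranks 1..n with ties assigned their average rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        for idx in order[i:j + 1]:
            ranks[idx] = (i + j) / 2 + 1
        i = j + 1
    return ranks

def dunn_z(g1, g2, groups):
    """Pairwise Dunn z statistic for groups[g1] vs. groups[g2] (no tie correction)."""
    pooled, labels = [], []
    for g, grp in enumerate(groups):
        pooled.extend(grp)
        labels.extend([g] * len(grp))
    ranks = avg_ranks(pooled)
    N = len(pooled)
    mean_rank = lambda g: (sum(r for r, l in zip(ranks, labels) if l == g)
                           / len(groups[g]))
    se = math.sqrt(N * (N + 1) / 12 * (1 / len(groups[g1]) + 1 / len(groups[g2])))
    return (mean_rank(g1) - mean_rank(g2)) / se

random.seed(5)
# Three synthetic clusters; the third is clearly shifted upward.
groups = [[random.gauss(mu, 1) for _ in range(30)] for mu in (0.0, 0.2, 1.5)]
z31 = dunn_z(2, 0, groups)
print(f"z(cluster 3 vs. cluster 1) = {z31:.2f}")
```

Reported p-values are then obtained from the standard normal tail and Bonferroni-adjusted for the number of pairwise comparisons.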
Figure A2. Reaction to viruses by cluster (Dots represent outliers, defined as values that fall outside 1.5 times the interquartile range from the lower or upper quartile).
Figure A3. Password creation by cluster (Dots represent outliers, defined as values that fall outside 1.5 times the interquartile range from the lower or upper quartile).
Figure A4. Checking URL by cluster (Dots represent outliers, defined as values that fall outside 1.5 times the interquartile range from the lower or upper quartile).
Figure A5. Susceptibility to social engineering by cluster (Dots represent outliers, defined as values that fall outside 1.5 times the interquartile range from the lower or upper quartile).
Table A14. Post-hoc Wilcoxon Comparisons.
| Cluster Comparison | Wilcoxon Statistic | p-Value (Bonferroni) | Significance |
|---|---|---|---|
| 2 vs. 3 | 0.57318 | 1.0000 | NS |
| 3 vs. 4 | 0.00624 | 0.0082 | * |
| 4 vs. 5 | 0.00021 | 0.0475 | * |
| 5 vs. 6 | 0.01025 | 0.0696 | NS |
| 6 vs. 7 | 0.40211 | 0.2649 | NS |
* p < 0.05.
Table A15. Multinomial logistic regression results.
| Predictor | Coefficient (β) | Std. Error | z-Value | p-Value | Significance |
|---|---|---|---|---|---|
| Intercept | 0.7549 | 0.9478 | 0.796 | 0.426 | NS |
| Factor 1 | −0.1824 | 0.2477 | −0.737 | 0.461 | NS |
| Factor 2 | 0.0824 | 0.2808 | 0.293 | 0.769 | NS |
| URL Verification | 0.6452 | 0.1862 | 3.464 | 0.0005 | ** |
| Password Creation | 0.1572 | 0.0656 | 2.398 | 0.016 | * |
| Reaction to Viruses | −0.7365 | 0.1461 | −5.044 | 7.00 × 10⁻⁶ | *** |
*** p < 0.001, ** p < 0.01, * p < 0.05.

References

  1. Tversky, A.; Kahneman, D. Judgment under Uncertainty: Heuristics and Biases. In Uncertainty in Economics; Elsevier: Amsterdam, The Netherlands, 1978; pp. 17–34. [Google Scholar] [CrossRef]
  2. Rosoff, H.; Cui, J.; John, R.S. Heuristics and biases in cyber security dilemmas. Environ. Syst. Decis. 2013, 33, 517–529. [Google Scholar] [CrossRef]
  3. Pollini, A.; Callari, T.C.; Tedeschi, A.; Ruscio, D.; Save, L.; Chiarugi, F.; Guerri, D. Leveraging human factors in cybersecurity: An integrated methodological approach. Cogn. Technol. Work 2022, 24, 371–390. [Google Scholar] [CrossRef] [PubMed]
  4. Montañez, R.; Golob, E.; Xu, S. Human Cognition Through the Lens of Social Engineering Cyberattacks. Front. Psychol. 2020, 11, 1755. [Google Scholar] [CrossRef] [PubMed]
  5. Saeed, U. Visual similarity-based phishing detection using deep learning. J. Electron. Imaging 2022, 31, 051607. [Google Scholar] [CrossRef]
  6. Pratkanis, A.R. Attitude Structure and Function; Breckler, S.J., Greenwald, A.G., Eds.; Psychology Press: London, UK, 2014. [Google Scholar] [CrossRef]
  7. Pilar, D.R.; Jaeger, A.; Gomes, C.F.A.; Stein, L.M. Passwords Usage and Human Memory Limitations: A Survey across Age and Educational Background. PLoS ONE 2012, 7, e51067. [Google Scholar] [CrossRef]
  8. Yu, W.; Yin, Q.; Yin, H.; Xiao, W.; Chang, T.; He, L.; Ni, L.; Ji, Q.A. A Systematic Review on Password Guessing Tasks. Entropy 2023, 25, 1303. [Google Scholar] [CrossRef]
  9. Burda, P.; Allodi, L.; Zannone, N. Cognition in Social Engineering Empirical Research: A Systematic Literature Review. ACM Trans. Comput. Hum. Interact. 2024, 31, 1–55. [Google Scholar] [CrossRef]
  10. Cialdini, R.B. Influence; HarperCollins: New York, NY, USA, 2014. [Google Scholar]
  11. Gangire, Y.; Da Veiga, A.; Herselman, M. Assessing information security behaviour: A self-determination theory perspective. Inf. Comput. Secur. 2021, 29, 625–646. [Google Scholar] [CrossRef]
  12. Huang, D.-L.; Rau, P.-L.P.; Salvendy, G. Perception of information security. Behav. Inf. Technol. 2010, 29, 221–232. [Google Scholar] [CrossRef]
  13. Bahreini, A.F.; Cenfetelli, R.; Cavusoglu, H. The Role of Heuristics. In Information Security Decision Making, Proceedings of the Hawaii International Conference on System Sciences, Hawaii, HI, USA, 4–7 January 2022; University of Hawaii at Mānoa: Honolulu, HI, USA, 2022. [Google Scholar] [CrossRef]
  14. Teodoro, N.; Goncalves, L.; Serrao, C. NIST CyberSecurity Framework Compliance: A Generic Model for Dynamic Assessment and Predictive Requirements. In Proceedings of the IEEE Trustcom/BigDataSE/ISPA, Helsinki, Finland, 20–22 August 2015; pp. 418–425. [Google Scholar] [CrossRef]
  15. Metin, B.; Özhan, F.G.; Wynn, M. Digitalisation and Cybersecurity: Towards an Operational Framework. Electronics 2024, 13, 4226. [Google Scholar] [CrossRef]
  16. Salzberger, A. Cyber Risk Awareness of German SMEs: An Empirical Study on the Influence of Biases and Heuristics. Z. Gesamte Versicherungswissenschaft 2024, 113, 55–104. [Google Scholar] [CrossRef]
  17. Howell, C.; Maimon, D.; Muniz, C.; Kamar, E.; Berenblum, T. Engaging in cyber hygiene: The role of thoughtful decision-making and informational interventions. Front. Psychol. 2024, 15, 1372681. [Google Scholar] [CrossRef] [PubMed]
  18. Wang, Z.; Sun, L.; Zhu, H. Defining Social Engineering in Cybersecurity. IEEE Access 2020, 8, 85094–85115. [Google Scholar] [CrossRef]
  19. Schaltegger, T.; Ambuehl, B.; Ackermann, K.A.; Ebert, N. Re-thinking Decision-Making in Cybersecurity: Leveraging Cognitive Heuristics in Situations of Uncertainty. In Proceedings of the Hawaii International Conference on System Sciences, Hawaii, HI, USA, 3–6 January 2024. [Google Scholar] [CrossRef]
  20. Raeburn, A. Green Means Go? That’s a Heuristic, ASANA. Available online: https://asana.com/resources/heuristics (accessed on 7 April 2025).
  21. Arroyabe, M.F.; Arranz, C.F.A.; De Arroyabe, I.F.; Fernandez De Arroyabe, J.C. Navigating Cybersecurity: Environment’s Impact on Standards Adoption and Board Involvement. J. Comput. Inf. Syst. 2024, 1–21. [Google Scholar] [CrossRef]
  22. De La Cruz, E.; Oni, O.; Nadella, G.S.; Gonaygunta, H.; Meduri, S.S.; De La Cruz, A.M. Cybersecurity Data Analytics System Success: An Exploratory Study on U.S Government Agencies. In Proceedings of the International Seminar on Application for Technology of Information and Communication (iSemantic), Semarang, Indonesia, 21–22 September 2024; pp. 403–408. [Google Scholar] [CrossRef]
  23. Amine, A.M.; Chakir, E.M.; Issam, T.; Khamlichi, Y.I. A Review of Cybersecurity Management Standards Applied in Higher Education Institutions. Int. J. Saf. Secur. Eng. 2023, 13, 1109–1116. [Google Scholar] [CrossRef]
  24. Gatica-Neira, F.; Galdames-Sepulveda, P.; Ramos-Maldonado, M. Adoption of Cybersecurity in the Chilean Manufacturing Sector: A First Analytical Proposal. IEEE Access 2023, 11, 133475–133489. [Google Scholar] [CrossRef]
  25. Rodrigues, B.; Franco, M.; Parangi, G.; Stiller, B. SEConomy: A Framework for the Economic Assessment of Cybersecurity. In Economics of Grids, Clouds, Systems, and Services; Djemame, K., Altmann, J., Bañares, J.Á., Ben-Yehuda, O.A., Naldi, M., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2019; pp. 154–166. [Google Scholar] [CrossRef]
  26. Botha-Badenhorst, D. Navigating the Intersection of Innovation and Cybersecurity: A Framework. In Proceedings of the ECRM 2023 22nd European Conference on Research Methods in Business and Management, Lisboa, Portugal, 6 September 2023; Volume 22, pp. 18–25. [Google Scholar] [CrossRef]
  27. Skøt, L.; Nielsen, J.B.; Leppin, A. Risk perception and support for security measures: Interactive effects of media exposure to terrorism and prior life stress? J. Risk Res. 2021, 24, 228–246. [Google Scholar] [CrossRef]
  28. De Wit, J.; Pieters, W.; Van Gelder, P. Bias and noise in security risk assessments, an empirical study on the information position and confidence of security professionals. Secur. J. 2024, 37, 170–191. [Google Scholar] [CrossRef]
  29. National Institute of Standards and Technology (NIST). Framework for Improving Critical Infrastructure Cybersecurity, Version 1.1, April 2018. Available online: https://nvlpubs.nist.gov/nistpubs/CSWP/NIST.CSWP.04162018.pdf (accessed on 7 April 2025).
  30. International Organization for Standardization. ISO/IEC 27001:2022 – Information Security, Cybersecurity and Privacy Protection – Information Security Management Systems – Requirements; ISO: Geneva, Switzerland, 2022. [Google Scholar]
  31. Nastjuk, I.; Rampold, F.; Trang, S.; Benitez, J. A field experiment on ISP training designs for enhancing employee information security compliance. Eur. J. Inf. Syst. 2024, 1–24. [Google Scholar] [CrossRef]
  32. Keshvadi, S. Enhancing Western Organizational Cybersecurity Resilience through Tailored Education for Non-Technical Employees. In Proceedings of the 2023 IEEE International Humanitarian Technology Conference (IHTC), Santa Marta, Colombia, 1–6 November 2023. [Google Scholar] [CrossRef]
  33. Weishäupl, E.; Yasasin, E.; Schryen, G. Information security investments: An exploratory multiple case study on decision-making, evaluation and learning. Comput. Secur. 2018, 77, 807–823. [Google Scholar] [CrossRef]
  34. Seaman, J. Combating the Cyber-Security Kill Chain: Moving to a Proactive Security Model. In Artificial Intelligence in Cyber Security: Impact and Implications; Montasari, R., Jahankhani, H., Eds.; Advanced Sciences and Technologies for Security Applications; Springer International Publishing: Cham, Switzerland, 2021; pp. 121–155. [Google Scholar] [CrossRef]
  35. Zaoui, M.; Sadqi, Y. Toward Understanding the Impact of Demographic Factors on Cybersecurity Awareness in the Moroccan Context. In Artificial Intelligence and Green Computing; Idrissi, N., Hair, A., Lazaar, M., Saadi, Y., Erritali, M., El Kafhali, S., Eds.; Lecture Notes in Networks and Systems; Springer Nature: Cham, Switzerland, 2023; Volume 806, pp. 207–214. [Google Scholar] [CrossRef]
  36. Wash, R.; Rader, E. Prioritizing security over usability: Strategies for how people choose passwords. J. Cybersecur. 2021, 7, tyab012. [Google Scholar] [CrossRef]
  37. Jaspersen, J.G.; Aseervatham, V. The Influence of Affect on Heuristic Thinking in Insurance Demand. J. Risk Insur. 2017, 84, 239–266. [Google Scholar] [CrossRef]
  38. Blancaflor, E.; Deldacan, L.F.; Hunat, S.; Rivera, B.M.; Liberato, E.K. AI-Driven Phishing Detection: Combating Cyber Threats Through Homoglyph Recognition and User Awareness. In Proceedings of the 6th World Symposium on Software Engineering (WSSE), Kyoto, Japan, 13–15 September 2024; pp. 226–231. [Google Scholar] [CrossRef]
  39. Katakwar, H.; Gonzalez, C.; Dutt, V. Attackers Have Prior Beliefs: Comprehending Cognitive Aspects of Confirmation Bias on Adversarial Decisions. In Lecture Notes in Networks and Systems, Proceedings of the 4th International Conference on Frontiers in Computing and Systems; Kole, D.K., Chowdhury, R., Basu, S., Plewczynski, D., Bhattacharjee, D., Eds.; Springer Nature: Singapore, 2024; Volume 975, pp. 261–273. [Google Scholar] [CrossRef]
  40. Gutzwiller, R.S.; Ferguson-Walter, K.J.; Fugate, S.J. Are Cyber Attackers Thinking Fast and Slow? Exploratory Analysis Reveals Evidence of Decision-Making Biases in Red Teamers. Proc. Hum. Factors Ergon. Soc. Annu. Meet. 2019, 63, 427–431. [Google Scholar] [CrossRef]
  41. Sultan, M.; Tump, A.N.; Geers, M.; Lorenz-Spreen, P.; Herzog, S.M.; Kurvers, R.H.J.M. Time pressure reduces misinformation discrimination ability but does not alter response bias. Sci. Rep. 2022, 12, 22416. [Google Scholar] [CrossRef]
  42. Butavicius, M.; Taib, R.; Han, S.J. Why people keep falling for phishing scams: The effects of time pressure and deception cues on the detection of phishing emails. Comput. Secur. 2022, 123, 102937. [Google Scholar] [CrossRef]
  43. Bhatt, E.; Seetharaman, P. Rethinking Digital Nudging: A Taxonomical Approach to Defining and Identifying Characteristics of Digital Nudging Interventions. AIS Trans. Hum. Comput. Interact. 2023, 15, 442–471. [Google Scholar] [CrossRef]
  44. Meske, C.; Amojo, I. Ethical Guidelines for the Construction of Digital Nudges. In Proceedings of the Hawaii International Conference on System Sciences, Hawaii, HI, USA, 7–10 January 2020. [Google Scholar] [CrossRef]
  45. JothiShri, S.; Upender, T.; Ravikumar, R.J.; Sailaja, Y.; Yuvabharathi, E.; Agnestreesa, J. AI Cyber Security: Enhancing Network Security with Deep Learning for Real-Time Threat Detection and Performance Evaluation. In Proceedings of the 3rd International Conference for Advancement in Technology (ICONAT), Sangli, India, 6–8 September 2024; pp. 1–6. [Google Scholar] [CrossRef]
  46. Nour, S.M.; Said, S.A. Harnessing the Power of AI for Effective Cybersecurity Defense. In Proceedings of the 2024 6th International Conference on Computing and Informatics (ICCI), Cairo, Egypt, 6–7 March 2024; pp. 98–102. [Google Scholar] [CrossRef]
  47. Ilieva, R.; Stoilova, G. Challenges of AI-Driven Cybersecurity. In Proceedings of the 2024 XXXIII International Scientific Conference Electronics (ET), Sozopol, Bulgaria, 17–19 September 2024; pp. 1–4. [Google Scholar] [CrossRef]
  48. Van Schaik, P.; Renaud, K.; Wilson, C.; Jansen, J.; Onibokun, J. Risk as affect: The affect heuristic in cybersecurity. Comput. Secur. 2020, 90, 101651. [Google Scholar] [CrossRef]
  49. Fatoki, J.G.; Shen, Z.; Mora-Monge, C.A. Optimism amid risk: How non-IT employees’ beliefs affect cybersecurity behavior. Comput. Secur. 2024, 141, 103812. [Google Scholar] [CrossRef]
  50. Gál, P.; Mrva, M.; Meško, M. Heuristics, biases and traps in managerial decision making. Acta Univ. Agric. Silvic. Mendel. Brun. 2013, 61, 2117–2122. [Google Scholar] [CrossRef]
  51. Goodluck Dogi, I.; Afolabi, M. Knowledge and Utility of Cyber Security Protocols among Nigerian Students. J. Afr. Films Diaspora Stud. 2023, 6, 65–86. [Google Scholar] [CrossRef]
  52. Ahamed, B.; Polas, M.R.H.; Kabir, A.I.; Sohel-Uz-Zaman, A.S.M.; Fahad, A.A.; Chowdhury, S.; Rani Dey, M. Empowering Students for Cybersecurity Awareness Management in the Emerging Digital Era: The Role of Cybersecurity Attitude in the 4.0 Industrial Revolution Era. Sage Open 2024, 14, 21582440241228920. [Google Scholar] [CrossRef]
  53. Ungkap, P.; Daengsi, T. Cybersecurity Awareness Modeling Associated with Influential Factors Using AHP Technique: A Case of Railway Organizations in Thailand. In Proceedings of the International Conference on Decision Aid Sciences and Applications (DASA), Chiangrai, Thailand, 23–25 March 2022; pp. 1359–1362. [Google Scholar] [CrossRef]
  54. Vafaei-Zadeh, A.; Nikbin, D.; Teoh, K.Y.; Hanifah, H. Cybersecurity awareness and fear of cyberattacks among online banking users in Malaysia. Int. J. Bank Mark. 2025, 43, 476–505. [Google Scholar] [CrossRef]
  55. Debb, S.M.; McClellan, M.K. Perceived Vulnerability as a Determinant of Increased Risk for Cybersecurity Risk Behavior. Cyberpsychol. Behav. Soc. Netw. 2021, 24, 605–611. [Google Scholar] [CrossRef] [PubMed]
  56. Zheng, S.Y.; Becker, I. Phishing to improve detection. In Proceedings of the European Symposium on Usable Security, Copenhagen, Denmark, 16–17 October 2023; pp. 334–343. [Google Scholar] [CrossRef]
  57. Sturman, D.; Valenzuela, C.; Plate, O.; Tanvir, T.; Auton, J.C.; Bayl-Smith, P.; Wiggins, M.W. The role of cue utilization in the detection of phishing emails. Appl. Ergon. 2023, 106, 103887. [Google Scholar] [CrossRef]
  58. Kano, Y.; Nakajima, T. Trust Factors of Social Engineering Attacks on Social Networking Services. In Proceedings of the IEEE 3rd Global Conference on Life Sciences and Technologies (LifeTech), Nara, Japan, 10–12 March 2021; pp. 25–28. [Google Scholar] [CrossRef]
  59. Mahanta, K.; Maringanti, H.B. Social Engineering Attacks and Countermeasures. In Advances in Information Security, Privacy, and Ethics; Kaushik, K., Bhardwaj, A., Eds.; IGI Global: Hershey, PA, USA, 2023; pp. 307–337. [Google Scholar] [CrossRef]
  60. Muhanad, A.; Abuelezz, I.; Khan, K.; Ali, R. On How Cialdini’s Persuasion Principles Influence Individuals in the Context of Social Engineering: A Qualitative Study. In Web Information Systems Engineering—WISE 2024; Lecture Notes in Computer Science; Barhamgi, M., Wang, H., Wang, X., Eds.; Springer Nature: Singapore, 2025; Volume 15438, pp. 373–388. [Google Scholar] [CrossRef]
  61. Merdenyan, B.; Petrie, H. Two studies of the perceptions of risk, benefits and likelihood of undertaking password management behaviours. Behav. Inf. Technol. 2022, 41, 2514–2527. [Google Scholar] [CrossRef]
  62. Preacher, K.J.; Hayes, A.F. Asymptotic and resampling strategies for assessing and comparing indirect effects in multiple mediator models. Behav. Res. Methods 2008, 40, 879–891. [Google Scholar] [CrossRef]
  63. De Smidt, G.; Botzen, W. Perceptions of Corporate Cyber Risks and Insurance Decision-Making. Geneva Pap. Risk Insur. Issues Pract. 2018, 43, 239–274. [Google Scholar] [CrossRef]
  64. Cisternas, P.C.; Cifuentes, L.A.; Bronfman, N.C.; Repetto, P.B. The influence of risk awareness and government trust on risk perception and preparedness for natural hazards. Risk Anal. 2024, 44, 333–348. [Google Scholar] [CrossRef]
  65. Anderson, C.C.; Moure, M.; Demski, C.; Renaud, F.G. Risk tolerance as a complementary concept to risk perception of natural hazards: A conceptual review and application. Risk Anal. 2024, 44, 304–321. [Google Scholar] [CrossRef]
  66. Griffiths, M.; Brooks, D.J. Informing Security Through Cultural Cognition: The Influence of Cultural Bias on Operational Security. J. Appl. Secur. Res. 2012, 7, 218–238. [Google Scholar] [CrossRef]
  67. Snyman, D.; Kruger, H. Contextual Factors in Information Security Group Behaviour: A Comparison of Two Studies. In Information Systems Security and Privacy; Communications in Computer and Information Science; Furnell, S., Mori, P., Weippl, E., Camp, O., Eds.; Springer International Publishing: Cham, Switzerland, 2022; Volume 1545, pp. 201–221. [Google Scholar] [CrossRef]
Figure 1. Factor 1 scores distribution.
Figure 2. Factor 2 scores distribution.
Figure 3. ROC curve for the logistic model with interaction (independent variables: Factor 1, Factor 2, and their interaction; dependent variable: previously deceived). The blue curve represents the ROC of the logistic model with interaction effects. The gray dashed line corresponds to the performance of a random classifier (AUC = 0.5).
Figure 4. Risk score distribution (optimal threshold). Note: the red dashed line indicates the optimal decision threshold derived from the ROC curve, marking the point that best separates individuals classified as high-risk from those considered low- or medium-risk.
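The "optimal threshold" marked in Figure 4 is conventionally read off the ROC curve as the cut-off maximizing Youden's J statistic (TPR − FPR). A minimal sketch of that selection, using hypothetical risk scores and labels rather than the study's data:

```python
import numpy as np

def youden_threshold(scores, labels):
    """Return the score cut-off maximizing TPR - FPR (Youden's J),
    the usual 'optimal threshold' derived from a ROC curve."""
    order = np.argsort(scores)[::-1]                 # sort scores descending
    s = np.asarray(scores, dtype=float)[order]
    y = np.asarray(labels)[order]
    tpr = np.cumsum(y) / y.sum()                     # true-positive rate at each cut
    fpr = np.cumsum(1 - y) / (1 - y).sum()           # false-positive rate at each cut
    return s[np.argmax(tpr - fpr)]

# Hypothetical risk scores; label 1 = previously deceived.
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.2]
labels = [1, 1, 0, 1, 0, 0, 0]
print(youden_threshold(scores, labels))
```

Individuals scoring at or above the returned cut-off would be flagged high-risk, as in the red dashed line of Figure 4.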
Figure 5. Clusters on security factors (Factor 1, Factor 2, Virus_Reaction, Password_Creation, URL_Checking).
Table 1. Factor loadings from EFA.
Variable | Factor 1 (Risk Perception) | Factor 2 (Compliance and Security) | Communalities (h²)
Reputation Protection | 0.998 | — | 0.995
Regulatory Compliance | — | 0.912 | 0.847
Financial Protection | 0.464 | 0.300 | 0.373
Operational Continuity | −0.147 | 0.510 | 0.245
Data Protection | — | 0.792 | 0.610
Factor summary (ML1 / ML2): sum of squared loadings 1.936 / 1.113; proportion of variance explained 38.7% / 22.3%; cumulative variance explained 38.7% / 61.0%.
Extraction method: maximum likelihood; rotation: oblimin; Kaiser–Meyer–Olkin (KMO) = 0.64; Bartlett’s test: χ²(10) = 298.78, p < 0.001.
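An EFA of this kind can be reproduced in outline with scikit-learn. The sketch below is a hypothetical stand-in (synthetic data, five items loosely mirroring the survey variables); note that scikit-learn offers only orthogonal rotations such as varimax, not the oblimin rotation used in the study, so with oblique factors the communalities would additionally involve the factor correlation matrix:

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
# Synthetic respondents: 300 rows x 5 items (Reputation, Compliance,
# Financial, Continuity, Data protection) generated from 2 latent factors.
latent = rng.normal(size=(300, 2))
W = np.array([[1.0, 0.0], [0.0, 0.9], [0.5, 0.3], [-0.1, 0.5], [0.0, 0.8]])
X = latent @ W.T + 0.3 * rng.normal(size=(300, 5))

fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)
loadings = fa.components_.T                    # items x factors, as in Table 1
communalities = (loadings ** 2).sum(axis=1)    # variance each item shares with factors
print(loadings.round(3))
print(communalities.round(3))
```

With an orthogonal rotation, each item's communality is simply the row sum of squared loadings, which is the h² column reported in Table 1.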
Table 2. Confirmatory Factor Analysis (CFA) fit statistics.
Fit Index | Value | Interpretation
Chi-square (χ²) | 6.733 | Non-significant, good fit
Degrees of freedom (df) | 4 | Efficient factor structure
p-value | 0.151 | Model fits the data well
RMSEA | 0.059 | Values below 0.06 indicate a good fit, supporting the model’s structural validity
Comparative Fit Index (CFI) | 0.998 | Excellent fit (>0.95)
Tucker–Lewis Index (TLI) | 0.994 | Excellent fit (>0.95)
Standardized Root Mean Square Residual (SRMR) | 0.073 | Acceptable fit (<0.08)
Estimator: DWLS, robust standard errors.
Table 3. Top association rules for cybersecurity decision-making.
Antecedents → Consequent | Support | Confidence | Lift | Interpretation
{Frequent URL verification, Strong password creation} → {Not fooled by phishing} | 0.24 | 86.6% | 1.63 | Individuals who actively verify URLs and use strong passwords are significantly less likely to fall for phishing scams.
{Concern for financial protection, Regulatory compliance} → {High security importance} | 0.35 | 70.0% | 1.32 | Those who prioritize financial security and compliance with regulations also consider cybersecurity highly important.
{Minimal security verification, Reused passwords} → {Higher risk perception} | 0.30 | 86.7% | 1.64 | Users with weaker security habits tend to perceive cybersecurity risks as high, indicating a psychological awareness without action.
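The support, confidence, and lift figures in Table 3 follow the standard association-rule definitions. A minimal sketch of the computation on hypothetical respondents (encoded as sets of behaviors; the item names are illustrative, not the study's coding):

```python
def rule_metrics(transactions, antecedent, consequent):
    """Support, confidence and lift for the rule antecedent -> consequent."""
    n = len(transactions)
    a = sum(antecedent <= t for t in transactions)               # rows containing antecedent
    c = sum(consequent <= t for t in transactions)               # rows containing consequent
    both = sum((antecedent | consequent) <= t for t in transactions)
    support = both / n                                           # P(A and C)
    confidence = both / a                                        # P(C | A)
    lift = confidence / (c / n)                                  # P(C | A) / P(C)
    return support, confidence, lift

# Hypothetical respondents as sets of observed behaviors.
T = [
    {"url_check", "strong_pw", "not_fooled"},
    {"url_check", "strong_pw", "not_fooled"},
    {"url_check", "strong_pw"},
    {"not_fooled"},
    set(),
]
print(rule_metrics(T, {"url_check", "strong_pw"}, {"not_fooled"}))
```

A lift above 1, as in all three rules of Table 3, indicates the consequent is more likely when the antecedent behaviors are present than in the sample overall.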