Article
Peer-Review Record

Perceiving Digital Threats and Artificial Intelligence: A Psychometric Approach to Cyber Risk

J. Cybersecur. Priv. 2025, 5(4), 93; https://doi.org/10.3390/jcp5040093
by Diana Carbone 1,†, Francesco Marcatto 1,*,†, Francesca Mistichelli 1,2 and Donatella Ferrante 1
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 4: Anonymous
Submission received: 15 September 2025 / Revised: 20 October 2025 / Accepted: 24 October 2025 / Published: 3 November 2025
(This article belongs to the Section Security Engineering & Applications)

Round 1

Reviewer 1 Report

  1. The reported McDonald's ω of 0.61 for the Proactive Awareness subscale is identified as a limitation. This score is low for a psychometric scale, which weakens the conclusions about the behavioral aspects of the clusters.
  2. The authors present that relying only on self-report measures for expertise, bias, and behavior may lead to social desirability bias. This limits causal conclusions and represents a significant weakness in this type of research.
  3. The discussion about the paradox—that clusters with higher expertise and proactive behaviors (Clusters 1 and 2) show a stronger optimism bias—could be clearer. A better explanation of this "overconfidence" mechanism would help improve understanding.
  4. In the Discussion, there needs to be greater clarity in the explanation of optimism bias, as it is crucial to clarify the optimism bias paradox. For the Vigilant Realists and Under-concerned Optimists, the authors should mention that their strong belief in their protective efforts ("I am less at risk") likely leads to the optimism bias, even as they engage in protective actions. This view frames the bias as overconfidence arising from high efficacy rather than ignorance.
  5. The article about Machine Learning and the Internet of Things (IoT) provides important context for the digital threats discussed in the manuscript, particularly those linked to IoT, including hazards such as malware and online credential theft. The widespread use of IoT technologies, where data is created and used by machine learning and artificial intelligence, adds to the non-traditional nature of modern cyber threats. The authors are encouraged to refer to the paper “Systematic analysis based on Conflux of machine learning and internet of things using bibliometric analysis” for further support.
  6. The manuscript points out that Vigilant Realists and Under-concerned Optimists have strong optimism bias, even with high expertise and proactive behavior. It would be helpful to include a brief explanation in the Discussion to clarify why this happens. Discussing the "illusion of control," which is often linked to expertise, could show how people believe their actions reduce their risk, leading to the bias. I recommend that the authors revise the manuscript to add the suggested changes. After making these revisions, they should move forward with the review process for publication. This will improve the overall quality of the research.

N/A

Author Response

We would like to sincerely thank the reviewer for the constructive and insightful comments, which have greatly contributed to improving the quality and precision of our work. Explanations of how each point has been addressed are provided in bold immediately following each specific comment.

 

The reported McDonald's ω of 0.61 for the Proactive Awareness subscale is identified as a limitation. This score is low for a psychometric scale, which weakens the conclusions about the behavioral aspects of the clusters.

The authors present that relying only on self-report measures for expertise, bias, and behavior may lead to social desirability bias. This limits causal conclusions and represents a significant weakness in this type of research.

We thank the reviewer for this valuable observation. We fully acknowledge that the exclusive use of self-report measures may introduce potential biases, particularly for constructs such as optimism bias and security behaviors. However, previous research has shown that self-assessment can provide a valid and informative approach to measuring cybersecurity practices, with evidence of honest self-reporting even for unsafe or undesirable behaviors (Russell et al., 2017; Cain et al., 2018).
These studies support the reliability of self-reported data as a practical and widely accepted proxy when behavioral observation or log-based measures are not feasible in large-scale online surveys. We have clarified this point and acknowledged the potential limitations of self-report measures in the revised manuscript as follows (rows 485-490): “Nonetheless, several limitations should be acknowledged. First, the reliance on self-report measures for expertise, optimism bias, and proactive awareness may have introduced social desirability or recall biases, limiting the strength of the behavioral inferences that can be drawn. Nevertheless, previous research has shown that self-reported cybersecurity behaviors can validly reflect individual practices and tendencies, even when unsafe actions are disclosed [42,43].

References:

Russell, J. D., Weems, C. F., Ahmed, I., & Richard III, G. G. (2017). Self-reported secure and insecure cyber behaviour: factor structure and associations with personality factors. Journal of Cyber Security Technology, 1(3-4), 163-174. 

Cain, A. A., Edwards, M. E., & Still, J. D. (2018). An exploratory study of cyber hygiene behaviors and knowledge. Journal of Information Security and Applications, 42, 36-45.

 

The discussion about the paradox—that clusters with higher expertise and proactive behaviors (Clusters 1 and 2) show a stronger optimism bias—could be clearer. A better explanation of this "overconfidence" mechanism would help improve understanding.

In the Discussion, there needs to be greater clarity in the explanation of optimism bias, as it is crucial to clarify the optimism bias paradox. For the Vigilant Realists and Under-concerned Optimists, the authors should mention that their strong belief in their protective efforts ("I am less at risk") likely leads to the optimism bias, even as they engage in protective actions. This view frames the bias as overconfidence arising from high efficacy rather than ignorance.

We thank the reviewer for this insightful comment. We have revised the relevant paragraph in the Discussion to clarify the mechanism underlying the optimism bias among the Vigilant Realists and Under-concerned Optimists. Specifically, we now emphasize that the stronger optimism bias observed in these clusters is likely rooted in a heightened sense of efficacy and control derived from their technical competence and consistent engagement in protective behaviors, rather than from ignorance or lack of awareness. This overconfidence can foster an illusion of control, leading individuals to underestimate their personal vulnerability despite their actual exposure to risk. The revised passage reads as follows (rows 460-474): “Another notable finding is the pervasive role of optimism bias, which was evident to varying degrees across all clusters. While participants generally perceived themselves as less at risk than others of similar age and IT expertise, the intensity of this bias varied significantly. Vigilant Realists and Under-concerned Optimists showed stronger optimism bias compared to Anxious & Uncertain and Concerned Bystanders. This points to a paradox: those with higher perceived expertise and proactive behaviors may also underestimate their personal vulnerability. A possible explanation is that consistent engagement in security practices and a sense of technical competence can foster an illusion of control, leading individuals to believe that their behaviors effectively shield them from risk [40]. Over time, this confidence may evolve into a subtle form of complacency, where familiarity with threats reduces perceived risk rather than reinforcing caution. Similar patterns have been observed in other safety-critical domains, where experience and skill can paradoxically lower perceived risk [24,40,41]. Understanding this relationship is crucial for designing interventions that mitigate overconfidence without diminishing protective behaviors.

 

The article about Machine Learning and the Internet of Things (IoT) provides important context for the digital threats discussed in the manuscript, particularly those linked to IoT, including hazards such as malware and online credential theft. The widespread use of IoT technologies, where data is created and used by machine learning and artificial intelligence, adds to the non-traditional nature of modern cyber threats. The authors are encouraged to refer to the paper “Systematic analysis based on Conflux of machine learning and internet of things using bibliometric analysis” for further support.

We thank the reviewer for this suggestion. Therefore, we have updated the Introduction section to include this reference as follows (rows 34-42): “In recent years, the proliferation of interconnected devices and systems, driven by the convergence of machine learning, artificial intelligence (AI), and the Internet of Things, has further transformed the cyber risk landscape [3]. As data is increasingly created, exchanged, and processed autonomously by smart devices and algorithms, new non-traditional vulnerabilities have emerged that blur the boundaries between human and technological agency [3]. These developments have amplified the potential for threats such as malware propagation and credential theft within complex, inter-linked ecosystems, highlighting the need for a deeper understanding of how individuals perceive and respond to such risks.

 

The manuscript points out that Vigilant Realists and Under-concerned Optimists have strong optimism bias, even with high expertise and proactive behavior. It would be helpful to include a brief explanation in the Discussion to clarify why this happens. Discussing the "illusion of control," which is often linked to expertise, could show how people believe their actions reduce their risk, leading to the bias. I recommend that the authors revise the manuscript to add the suggested changes. After making these revisions, they should move forward with the review process for publication. This will improve the overall quality of the research.

We thank the reviewer for this comment, which aligns closely with the previous suggestion regarding the clarification of the optimism bias paradox. In the revised version, we have expanded the Discussion section to explicitly address this point. Specifically, we now explain that the stronger optimism bias observed among Vigilant Realists and Under-concerned Optimists may stem from an illusion of control—that is, the belief that their knowledge and protective actions effectively minimize their personal risk. This clarification reinforces the interpretation that optimism bias in these clusters reflects confidence derived from perceived efficacy rather than lack of awareness.

The revised passage reads as follows (rows 460-474): “Another notable finding is the pervasive role of optimism bias, which was evident to varying degrees across all clusters. While participants generally perceived themselves as less at risk than others of similar age and IT expertise, the intensity of this bias varied significantly. Vigilant Realists and Under-concerned Optimists showed stronger optimism bias compared to Anxious & Uncertain and Concerned Bystanders. This points to a paradox: those with higher perceived expertise and proactive behaviors may also underestimate their personal vulnerability. A possible explanation is that consistent engagement in security practices and a sense of technical competence can foster an illusion of control, leading individuals to believe that their behaviors effectively shield them from risk [40]. Over time, this confidence may evolve into a subtle form of complacency, where familiarity with threats reduces perceived risk rather than reinforcing caution. Similar patterns have been observed in other safety-critical domains, where experience and skill can paradoxically lower perceived risk [24,40,41]. Understanding this relationship is crucial for designing interventions that mitigate overconfidence without diminishing protective behaviors.

Reviewer 2 Report

Overall, the study lies within a relevant and timely field and appears to be properly structured from both a mathematical and a cybersecurity standpoint. However, there are several points that should be clarified:

1. From the presentation, it is not clear what the novelty and specific contribution of this work to science are. Have similar surveys and studies already been conducted, including in other countries? How do the findings of those studies compare with the present results?

2. What exactly is meant by the risk of "generative AI"? The other risks listed describe specific actions, whereas here a technology (a phenomenon) is mentioned. It would be helpful to clarify the formulation (what exactly constitutes this risk?) and to ensure that respondents understood this concept in a sufficiently consistent way.

3. It would be advisable to describe in more detail how exactly the clustering was carried out and according to what criteria, as well as what the clusters presented in Table 3 represent. Was the subsequent naming of these clusters in the text determined a priori or a posteriori? In addition, it would be useful for Table 3 to include cluster characteristics (size, etc.) that are later referenced in the text.

After addressing these comments, the article may be reconsidered for publication.

Author Response

We would like to sincerely thank the reviewer for the constructive and insightful comments, which have greatly contributed to improving the quality and precision of our work. Explanations of how each point has been addressed are provided in bold immediately following each specific comment.

 

Major comments

Overall, the study lies within a relevant and timely field and appears to be properly structured from both a mathematical and a cybersecurity standpoint. However, there are several points that should be clarified.

Detailed comments

Overall, the study lies within a relevant and timely field and appears to be properly structured from both a mathematical and a cybersecurity standpoint. However, there are several points that should be clarified:

  1. From the presentation, it is not clear what the novelty and specific contribution of this work to science are. Have similar surveys and studies already been conducted, including in other countries? How do the findings of those studies compare with the present results?

We thank the reviewer for this important comment. We agree that clarifying the novelty and specific contribution of the study improves its contextualization within existing research. In the revised manuscript, we have added a paragraph at the end of the Present Study section explicitly describing how our work extends prior research. Specifically, we emphasize that this study applies the psychometric paradigm to a comprehensive and contemporary set of digital hazards, including both traditional and emerging threats such as generative AI, within a working population, and compares perceptions across IT and non-IT employees. This framing highlights the study’s originality and its contribution to advancing knowledge on how modern digital risks are cognitively and emotionally represented. The updated section reads as follows (rows 131-139): “In contrast to previous studies that have mainly investigated isolated cybersecurity risks or general attitudes toward online safety [33,34], this research introduces a broader and integrative perspective. It applies the psychometric paradigm to a comprehensive and contemporary set of digital hazards, including both traditional and emerging threats, within a working population. This cross-sectional comparison between IT and non-IT employees, coupled with the inclusion of generative AI as a novel hazard, provides original insights into how modern digital risks are cognitively and emotionally represented, thereby extending prior work on technological risk perception to the rapidly evolving domain of cybersecurity.”

  2. What exactly is meant by the risk of "generative AI"? The other risks listed describe specific actions, whereas here a technology (a phenomenon) is mentioned. It would be helpful to clarify the formulation (what exactly constitutes this risk?) and to ensure that respondents understood this concept in a sufficiently consistent way.

We thank the reviewer for this valuable comment. To minimize potential ambiguity, participants were provided with brief definitions and illustrative examples of each digital hazard before completing the risk ratings (now reported in Appendix A). This procedure ensured that all stimuli were clearly defined and consistently understood. Furthermore, the definition of Generative AI has been included as an example in the Measures section as follows (rows 177-183): “Before evaluating each hazard, participants were provided with a brief definition and illustrative examples to ensure a consistent understanding of the stimuli. For instance: Generative AI “a branch of artificial intelligence capable of generating, through the processing of pre-acquired data, original content such as images, texts, or videos, similar to those created by humans (e.g., ChatGPT, Gemini, Copilot, DALL·E).” The complete set of hazard descriptions is reported in Appendix A.

  3. It would be advisable to describe in more detail how exactly the clustering was carried out and according to what criteria, as well as what the clusters presented in Table 3 represent. Was the subsequent naming of these clusters in the text determined a priori or a posteriori? In addition, it would be useful for Table 3 to include cluster characteristics (size, etc.) that are later referenced in the text.

We thank the reviewer for this useful suggestion. We have revised the description of the clustering procedure to clarify the analytical criteria adopted and to explicitly state that the interpretation and naming of the clusters were conducted a posteriori, based on the patterns emerging from the two risk dimensions (dread and unknown risk) and their theoretical consistency with the psychometric paradigm. The updated text reads as follows (rows 208-218; 294-311; 329-331): “These variables were subsequently standardized (z-scores) and entered into an exploratory hierarchical cluster analysis using Ward’s method and squared Euclidean distance to identify patterns of similarity in participants' perception profiles. Inspection of the dendrogram and agglomeration coefficients guided the determination of the optimal number of clusters. To validate the identified structure, a K-means cluster analysis was subsequently performed, assigning each participant to the nearest centroid. The resulting groups were interpreted based on their mean scores on dread and unknown risk across hazards. To assess the internal consistency and robustness of the identified cluster structure, additional validation procedures were performed, including the calculation of silhouette coefficients and a cross-validation approach (70/30 training–test split).”; “To identify overarching patterns in how participants perceived the seven digital hazards, a hierarchical cluster analysis was performed on the dread and unknown risk ratings, using Ward’s method with squared Euclidean distance as the similarity measure. Examination of the dendrogram (available in Appendix B) and inspection of the agglomeration coefficients revealed a pronounced increase in within-cluster heterogeneity when moving from a four- to a three-cluster solution, supporting the selection of a four-cluster structure. This pattern suggested that the four-cluster solution provided the most parsimonious and theoretically meaningful representation of participants’ risk perception profiles. Consequently, the four-cluster structure was retained and further validated through a K-means cluster analysis, which confirmed the stability of the solution and allowed the assignment of each participant to the nearest cluster centroid. To further evaluate the robustness of this solution, we computed the average silhouette width and conducted a simple 70/30 cross-validation procedure. The mean silhouette width across clusters was 0.103, with cluster-specific averages of 0.12 (Cluster 1), 0.10 (Cluster 2), 0.07 (Cluster 3), and 0.13 (Cluster 4), indicating moderate internal cohesion and adequate separation among groups. In the cross-validation, test cases were assigned to the nearest centroids derived from the training set, resulting in 9, 30, 26, and 25 cases per cluster, respectively. These results support the overall stability and interpretability of the four-cluster solution.”; “The interpretation of the clusters was conducted a posteriori, based on the patterns emerging across the two risk dimensions and their consistency with theoretical expectations derived from the psychometric paradigm.

Moreover, we have updated Table 3 (now Table 4, in this updated version) to include the number and percentage of participants in each cluster, as recommended. These changes improve the clarity and transparency of the clustering process and its interpretation.
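For illustration, a minimal sketch of the two-step clustering workflow described in this response (hierarchical Ward clustering on standardized scores, followed by K-means assignment) is given below. It assumes the 14 dread/unknown scores are stored in a pandas DataFrame; all variable names and parameters are illustrative, and the original analyses may have been carried out in different software.

```python
# Minimal sketch (not the original analysis code) of the two-step clustering
# workflow: standardization, exploratory Ward clustering, and K-means
# validation. Assumptions: `ratings` is a pandas DataFrame with one row per
# participant and 14 columns (7 dread + 7 unknown-risk scores); column names
# and parameters are illustrative.
import pandas as pd
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.stats import zscore
from sklearn.cluster import KMeans

def cluster_perception_profiles(ratings: pd.DataFrame, k: int = 4):
    # Standardize the 14 perception variables (z-scores).
    z = ratings.apply(zscore)

    # Exploratory hierarchical clustering with Ward's method
    # (Ward's criterion is based on squared Euclidean distances).
    tree = linkage(z.values, method="ward")
    hier_labels = fcluster(tree, t=k, criterion="maxclust")

    # Validation step: K-means assigns each participant to the nearest
    # of k centroids.
    km = KMeans(n_clusters=k, n_init=25, random_state=0).fit(z.values)
    return hier_labels, km.labels_, km.cluster_centers_
```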

After addressing these comments, the article may be reconsidered for publication.

Reviewer 3 Report

1. Did you empirically confirm, using CFA/EFA, that the 9 items load on the two theoretical dimensions (dread/unknown)? Results and fit indices?

2. Why did you opt for a unidimensional measure? Is there evidence of convergent validity or sensitivity analysis?

3. SeBIS reliability (ω=.61): What decisions will you make to mitigate its impact (ordinal α/ω, IRT, re-estimation without weak items)?

4. Clustering (k=4): How can you avoid the potential closed loop between the 4-group power and the final solution? Can you provide silhouette/gap, bootstrap stability, and cross-validation?

5. What specific risk triggers did participants receive? Could you show that the high "unknown" is not an artifact of stimulus ambiguity?

6.  Will you report effect sizes and CIs for all tests and apply correction for multiple comparisons in ANOVA/post-hoc and t tests?

Two-factor CFA (WLSMV); loadings table and fit indices.

Correlation between overall risk and (dread/unknown) by hazard (7x3 matrix).

Regressions: predicting proactive behavior from cluster (dummies), expertise, and optimism (with sector × cluster interaction).

Cluster profile plot and (dread/unknown) heat plot.

Sensitivity analysis: repeat comparisons with robust (Welch/Games-Howell) and report convergence.

Author Response

We would like to sincerely thank the reviewer for the constructive and insightful comments, which have greatly contributed to improving the quality and precision of our work. Explanations of how each point has been addressed are provided in bold immediately following each specific comment.

  1. Did you empirically confirm, using CFA/EFA, that the 9 items load on the two theoretical dimensions (dread/unknown)? Results and fit indices?

We thank the reviewer for this insightful comment. We are aware that the factorial structure of the psychometric paradigm (dread vs. unknown/familiarity) has been extensively discussed in the literature (e.g., Slovic, 1987; Sjöberg, 2000; Breakwell, 2014), and that replication of the original two-factor solution is often partial or context-dependent. Consistent with prior studies applying this paradigm to new domains (e.g., Xu et al., 2020; Wong & Yang, 2022; Barattucci et al., 2025), we initially adopted the established theoretical structure as an interpretive framework rather than as a measurement model to be revalidated in each study. Nonetheless, in response to the reviewer’s valuable suggestion, we conducted an additional confirmatory factor analysis (CFA) using the nine semantic-differential items to test the hypothesized two-factor solution. An ordinal CFA using the WLSMV estimator was first attempted, but the model failed to converge due to estimation problems (e.g., negative residual variances and non-significant loadings). These issues were mainly associated with the fatal, immediate, and controllable items, which exhibited highly skewed distributions and sparse observations in some response categories. To obtain a more stable and interpretable solution, we retained only the indicators with reliable loadings and well-behaved variances (terrifying, few_many, new, voluntary, known_workers, and known_experts). A subsequent CFA treating these indicators as continuous achieved satisfactory convergence and fit indices (χ² = 122.81, df = 8, CFI = 0.921, TLI = 0.852, RMSEA = 0.083, SRMR = 0.057), supporting an acceptable representation of the two latent dimensions. Although this approach represents a pragmatic adjustment from the original ordinal specification, it provides a robust and transparent estimation of the underlying structure of dread and unknown risk.

We have therefore updated the Data analysis section as follows (rows 202-207): “Following the psychometric paradigm, items related to dread and unknown risk were first averaged for each of the seven digital hazards, resulting in 14 perception variables (7 dread scores and 7 unknown risk scores). Before computing these aggregated scores, the two-factor structure underlying the nine risk-perception items was preliminarily examined through confirmatory factor analysis (CFA), which showed an acceptable fit to the theoretical model. Detailed CFA results are available from the authors upon request.

References:

Barattucci, M., Ramaci, T., Matera, S., Vella, F., Gallina, V., & Vitale, E. (2025). Differences in Risk Perception Between the Construction and Agriculture Sectors: An Exploratory Study with a Focus on Carcinogenic Risk. La Medicina del Lavoro, 116(3), 16796.

Breakwell, G. M. (2014). The psychology of risk. Cambridge University Press.

Slovic, P. (1987). Perception of risk. Science, 236, 280-285. https://doi.org/10.1126/science.3563507

Sjöberg, L. (2000). Factors in risk perception. Risk Analysis, 20(1), 1-12. https://doi.org/10.1111/0272-4332.00001

Wong, J. C. S., & Yang, J. Z. (2022). Comparative risk: Dread and unknown characteristics of the COVID‐19 pandemic versus COVID‐19 vaccines. Risk Analysis, 42(10), 2214-2230.

Xu, L., Qiu, J., Gu, W., & Ge, Y. (2020). The dynamic effects of perceptions of dread risk and unknown risk on SNS sharing behavior during EID events: Do crisis stages matter. Journal of the Association for Information Systems, 21(3), 545-573.
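For illustration, a hedged sketch of how a two-factor CFA over the six retained indicators could be specified in Python is given below. The indicator names come from the response above, but their assignment to the two factors, the data-frame layout, and the availability of the semopy package are assumptions; the original model was estimated with different software and settings.

```python
# Hedged sketch of a two-factor CFA in lavaan-style syntax via the semopy
# package (assumed available); not the original estimation. The six indicator
# names are taken from the response above, but their assignment to factors
# and the data-frame column names are illustrative assumptions.
import semopy

MODEL_DESC = """
dread   =~ terrifying + few_many + voluntary
unknown =~ new + known_workers + known_experts
dread ~~ unknown
"""

def fit_two_factor_cfa(items):
    # `items` is a pandas DataFrame whose columns match the indicator names.
    model = semopy.Model(MODEL_DESC)
    model.fit(items)                         # default ML estimation, items treated as continuous
    fit_indices = semopy.calc_stats(model)   # CFI, TLI, RMSEA, and related indices
    return model, fit_indices
```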

  2. Why did you opt for a unidimensional measure? Is there evidence of convergent validity or sensitivity analysis?

We thank the reviewer for raising this important point. The Proactive Awareness subscale was treated as a unidimensional measure, consistent with its original validation study (Egelman & Peer, 2015), where the five items were explicitly designed to capture a single underlying construct reflecting proactive security practices. To empirically assess dimensionality in our sample, we conducted an exploratory factor analysis (EFA) using maximum likelihood extraction. A one-factor solution showed all items loading adequately (0.43–0.63) with good model fit (RMSEA = 0.049, RMSR = 0.04, TLI = 0.94), supporting a unidimensional structure. A two-factor model did not yield a theoretically meaningful separation—one factor was dominated by items 1–2, another by item 5, and items 3–4 cross-loaded on both factors—with only a marginal increase in explained variance (ΔVar = +4%). Given the brevity of the scale, the high inter-factor correlation (r = 0.50), and the conceptual coherence of a single latent construct, we retained the unidimensional solution. Reliability in our sample (McDonald’s ω = .61) closely mirrored the original study (ω = .64), supporting comparability across studies and suggesting adequate internal consistency for a short behavioral measure.

  3. SeBIS reliability (ω=.61): What decisions will you make to mitigate its impact (ordinal α/ω, IRT, re-estimation without weak items)?

We appreciate the reviewer’s insightful observation regarding the reliability of the Proactive Awareness subscale of the SeBIS. In addition to the factor-analytic results already reported, we further examined the scale’s internal consistency using both classical and ordinal reliability estimates. While McDonald’s ω was modest (ω = .61), this value was comparable to that reported in the original validation study. Re-estimating reliability after removing individual items did not substantially improve the coefficient (max α = .56), indicating that all items contributed meaningfully to the construct. To account for the ordinal nature of the Likert responses, we computed ordinal reliability indices (ordinal α = .67; ordinal ω = .67), which yielded slightly higher values, confirming acceptable internal coherence for a short behavioral scale. Given the theoretical coherence, empirical unidimensionality, and consistency with prior research, we retained the full five-item version for all subsequent analyses. The modest reliability is transparently acknowledged as a limitation in the Discussion, and we note that it reflects the brevity and behavioral nature of the scale rather than a structural weakness.
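As an illustration of how McDonald's ω can be obtained from a one-factor solution, a minimal sketch is given below. It uses a maximum-likelihood factor analysis on standardized items as a stand-in for the reported EFA; the column names are illustrative, and the ordinal coefficients (which require polychoric correlations) are not reproduced here.

```python
# Minimal sketch: McDonald's omega computed from a one-factor model fitted to
# the five Proactive Awareness items. Assumptions: `items` is a pandas
# DataFrame of the five Likert responses (column names illustrative) and all
# items are keyed in the same direction; ordinal alpha/omega are not covered.
import numpy as np
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def mcdonald_omega(items: pd.DataFrame) -> float:
    # Standardize items so loadings are approximately on a correlation metric.
    z = (items - items.mean()) / items.std(ddof=0)
    fa = FactorAnalysis(n_components=1, random_state=0).fit(z.values)
    loadings = fa.components_[0]
    uniqueness = 1.0 - loadings ** 2   # residual variances of standardized items
    return float(loadings.sum() ** 2 / (loadings.sum() ** 2 + uniqueness.sum()))
```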

We have clarified this point and acknowledged the potential limitations of self-report measures in the revised manuscript as follows (rows 485-490): “Nonetheless, several limitations should be acknowledged. First, the reliance on self-report measures for expertise, optimism bias, and proactive awareness may have introduced social desirability or recall biases, limiting the strength of the behavioral inferences that can be drawn. Nevertheless, previous research has shown that self-reported cybersecurity behaviors can validly reflect individual practices and tendencies, even when unsafe actions are disclosed [42,43].

  4. Clustering (k=4): How can you avoid the potential closed loop between the 4-group power and the final solution? Can you provide silhouette/gap, bootstrap stability, and cross-validation?

We thank the reviewer for raising this important methodological point. In response, we added a new paragraph in the Results section reporting validation analyses of the four-cluster solution. Specifically, we calculated the average silhouette width and conducted a 70/30 cross-validation split. The mean silhouette width across clusters was 0.103 (Cluster 1 = 0.12, Cluster 2 = 0.10, Cluster 3 = 0.07, Cluster 4 = 0.13), indicating moderate internal cohesion. In the cross-validation, test cases were assigned to the nearest centroids derived from the training set, resulting in 9, 30, 26, and 25 cases per cluster, respectively. These results confirm that the four-cluster structure is reasonably stable and theoretically meaningful, supporting the validity of the adopted solution. Therefore, we have updated the text as follows (rows 215-218; 304-311): “To assess the internal consistency and robustness of the identified cluster structure, additional validation procedures were performed, including the calculation of silhouette coefficients and a cross-validation approach (70/30 training–test split).”; “To further evaluate the robustness of this solution, we computed the average silhouette width and conducted a simple 70/30 cross-validation procedure. The mean silhouette width across clusters was 0.103, with cluster-specific averages of 0.12 (Cluster 1), 0.10 (Cluster 2), 0.07 (Cluster 3), and 0.13 (Cluster 4), indicating moderate internal cohesion and adequate separation among groups. In the cross-validation, test cases were assigned to the nearest centroids derived from the training set, resulting in 9, 30, 26, and 25 cases per cluster, respectively. These results support the overall stability and interpretability of the four-cluster solution.”.
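A minimal sketch of these validation steps (average silhouette width and a 70/30 nearest-centroid cross-validation) is given below; the array and label names are illustrative, and the exact splitting procedure used in the analysis may differ.

```python
# Hedged sketch of the cluster-validation steps: average silhouette width and
# a 70/30 nearest-centroid cross-validation. Assumptions: `z` is an (n, 14)
# array of standardized dread/unknown scores and `labels` holds the
# four-cluster assignment from the main analysis (names illustrative).
import numpy as np
from sklearn.metrics import silhouette_samples, silhouette_score
from sklearn.model_selection import train_test_split

def validate_clusters(z: np.ndarray, labels: np.ndarray, seed: int = 0):
    # Overall and per-cluster silhouette widths.
    overall = silhouette_score(z, labels)
    widths = silhouette_samples(z, labels)
    per_cluster = {c: widths[labels == c].mean() for c in np.unique(labels)}

    # 70/30 split: centroids estimated on the training set, test cases
    # assigned to the nearest training centroid.
    z_tr, z_te, y_tr, _ = train_test_split(
        z, labels, test_size=0.30, random_state=seed, stratify=labels
    )
    clusters = np.unique(y_tr)
    centroids = np.vstack([z_tr[y_tr == c].mean(axis=0) for c in clusters])
    dists = np.linalg.norm(z_te[:, None, :] - centroids[None, :, :], axis=2)
    assigned = clusters[dists.argmin(axis=1)]
    test_counts = {c: int((assigned == c).sum()) for c in clusters}

    return overall, per_cluster, test_counts
```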

 

  5. What specific risk triggers did participants receive? Could you show that the high "unknown" is not an artifact of stimulus ambiguity?

We thank the reviewer for this valuable comment. To minimize potential ambiguity, participants were provided with brief definitions and illustrative examples of each digital hazard before completing the risk ratings (now reported in Appendix A). This procedure ensured that all stimuli were clearly defined and consistently understood. Furthermore, descriptive statistics indicate that “unknown risk” ratings varied systematically across hazards. Specifically, well-established cyber threats such as malware (M = 2.99), phishing (M = 2.69), and credential theft (M = 2.89) were rated low on the unknown dimension, whereas generative AI was rated substantially higher (M = 4.50). This pattern supports the interpretation that the elevated “unknown” ratings for AI reflect its novelty and rapidly evolving nature rather than any artifact of stimulus ambiguity. An example of a digital hazard has been added to the Measures section as follows (rows 177-183): “Before evaluating each hazard, participants were provided with a brief definition and illustrative examples to ensure a consistent understanding of the stimuli. For instance: Generative AI “a branch of artificial intelligence capable of generating, through the processing of pre-acquired data, original content such as images, texts, or videos, similar to those created by humans (e.g., ChatGPT, Gemini, Copilot, DALL·E).” The complete set of hazard descriptions is reported in Appendix A.

  6. Will you report effect sizes and CIs for all tests and apply correction for multiple comparisons in ANOVA/post-hoc and t tests?

We thank the reviewer for this valuable suggestion. We have now reported effect sizes (Cohen’s d for t tests and η² for ANOVAs) and corresponding 95% confidence intervals for all relevant analyses. In addition, all post-hoc comparisons have been conducted using the Tukey correction for multiple testing. These revisions are reflected throughout the Results section.
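For illustration, a brief sketch of how Cohen’s d and Tukey-corrected post-hoc comparisons can be computed is given below; the column names are placeholders and do not correspond to the actual dataset.

```python
# Hedged sketch: Cohen's d for independent groups and Tukey-corrected
# post-hoc comparisons for the between-cluster ANOVAs. Assumptions: `df` is a
# long-format pandas DataFrame with columns "score" and "cluster"
# (placeholder names).
import numpy as np
import pandas as pd
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def cohens_d(x: np.ndarray, y: np.ndarray) -> float:
    # Pooled-standard-deviation Cohen's d for two independent groups.
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return float((x.mean() - y.mean()) / np.sqrt(pooled_var))

def tukey_posthoc(df: pd.DataFrame):
    # All pairwise cluster comparisons with the Tukey correction.
    return pairwise_tukeyhsd(endog=df["score"], groups=df["cluster"], alpha=0.05)
```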

Detailed comments

Two-factor CFA (WLSMV); loadings table and fit indices.

As we reported in the answer to the first comment, we have updated the Data analysis section as follows (202-207): “Following the psychometric paradigm, items related to dread and unknown risk were first averaged for each of the seven digital hazards, resulting in 14 perception variables (7 dread scores and 7 unknown risk scores). Before computing these aggregated scores, the two-factor structure underlying the nine risk-perception items was preliminarily examined through confirmatory factor analysis (CFA), which showed an acceptable fit to the theoretical model. Detailed CFA results are available from the authors upon request.”

Correlation between overall risk and (dread/unknown) by hazard (7x3 matrix).

We thank the reviewer for this valuable suggestion. We have added and commented the requested correlations (Table 3).

Regressions: predicting proactive behavior from cluster (dummies), expertise, and optimism (with sector × cluster interaction).

We thank the reviewer for this helpful suggestion. As requested, we conducted a multiple linear regression predicting proactive security behavior from cluster membership (dummy-coded), self-rated cybersecurity expertise, optimism bias, gender, age, and the interaction between cluster and IT sector. The analysis showed that proactive behavior was significantly and positively predicted by cybersecurity expertise (β = .15, p < .001) and negatively predicted by optimism bias (β = –.15, p = .004). Cluster 4 was a significant negative predictor (β = –.42, p = .005), while the cluster × sector interaction was nonsignificant. The model explained approximately 24% of the variance in proactive behavior (R² = .242, Adjusted R² = .215). We believe that this addition strengthens the manuscript by providing a clearer understanding of how individual and cluster-level characteristics jointly shape proactive security behaviors.

These results have been incorporated into the Results section and discussed in relation to behavioral engagement patterns in the Discussion section as follows (rows 370-381; 443-447): “To further examine how individual differences and cluster membership jointly influenced proactive awareness, a multiple linear regression was conducted including cluster membership (dummy-coded), self-rated cybersecurity expertise, optimism bias, and professional sector (IT vs. non-IT) as predictors, along with their interaction terms. The model accounted for approximately 24% of the variance in proactive awareness (R² = .24). Self-rated cybersecurity expertise emerged as a significant positive predictor (β = .15, p < .001), whereas optimism bias negatively predicted proactive awareness (β = –.15, p = .004). Among the cluster variables, only Cluster 4 was a significant negative predictor (β = –.421, p = .005), indicating lower engagement in security behaviors compared to participants in Cluster 1, which served as the reference group. Cluster 2, Cluster 3, gender, and all interaction terms were not significant, and age showed a marginal positive association (β = .006, p = .072).”; “Consistent with this interpretation, regression analyses confirmed that self-rated cybersecurity expertise positively predicted proactive security behaviors, whereas optimism bias exerted a small but significant negative effect. Moreover, membership in the Concerned Bystanders cluster also predicted reduced engagement in protective behaviors, even after accounting for individual differences in expertise and optimism.”.
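A compact sketch of the regression specification described above, using statsmodels' formula interface, is given below; all variable names and the coding of cluster labels are illustrative assumptions rather than the actual study variables.

```python
# Compact sketch of the regression predicting proactive awareness from
# dummy-coded cluster membership, expertise, optimism bias, demographics, and
# the cluster x sector interaction. Assumptions: `df` is a pandas DataFrame
# with the columns named below (placeholders); the "cluster" column holds
# labels such as "Cluster 1", which serves as the reference category.
import pandas as pd
import statsmodels.formula.api as smf

def fit_proactive_model(df: pd.DataFrame):
    formula = (
        'proactive ~ C(cluster, Treatment(reference="Cluster 1")) * C(sector) '
        "+ expertise + optimism_bias + age + C(gender)"
    )
    model = smf.ols(formula, data=df).fit()
    return model  # model.summary() reports coefficients, R², and p-values
```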

Cluster profile plot and (dread/unknown) heat plot.

Thank you for this helpful suggestion. The dendrogram illustrating the hierarchical clustering procedure is now provided in Appendix B, while the heat plot depicting average dread and unknown risk scores across hazards and clusters has been added to the main text as Figure 2, accompanied by a brief explanatory paragraph as follows (rows 312-316): “Figure 2 provides a visual representation of the cluster profiles. The heat map illustrates the mean scores of each cluster across the seven digital hazards along the dread and unknown risk dimensions. Warmer colors indicate higher perceived intensity on each dimension, visually distinguishing the four patterns of risk perception.”.
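A minimal sketch of how such a heat map can be generated is shown below, assuming the cluster-by-hazard means are already aggregated in a DataFrame; it is illustrative only and does not reproduce Figure 2 exactly.

```python
# Minimal sketch of a dread/unknown heat map across clusters and hazards.
# Assumptions: `cluster_means` is a pandas DataFrame indexed by cluster name,
# with one column per hazard-dimension combination (placeholder layout).
import matplotlib.pyplot as plt
import seaborn as sns

def plot_cluster_heatmap(cluster_means):
    fig, ax = plt.subplots(figsize=(10, 4))
    sns.heatmap(cluster_means, cmap="YlOrRd", annot=True, fmt=".2f",
                cbar_kws={"label": "Mean rating"}, ax=ax)
    ax.set_xlabel("Hazard and risk dimension")
    ax.set_ylabel("Cluster")
    fig.tight_layout()
    return fig
```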

Sensitivity analysis: repeat comparisons with robust (Welch/Games-Howell) and report convergence.

We thank the reviewer for this valuable suggestion. To assess the robustness of our ANOVA results, we re-ran all between-cluster comparisons using the Welch test, with post-hoc Games–Howell corrections for unequal variances. The results converged fully with those of the standard ANOVA and Tukey post-hoc tests, confirming the same significant effects and group differences reported in the manuscript. Therefore, the original conclusions remain unchanged. The updated text reads as follows (rows 353-356): “A series of one-way ANOVAs with Tukey post-hoc tests were conducted to compare clusters on self-rated cybersecurity expertise, optimism bias, and proactive awareness. All analyses were also replicated using the Welch correction and Games–Howell post-hoc tests to account for unequal variances, yielding fully consistent results (Table 6).”.
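For illustration, the sensitivity analysis can be sketched as follows, assuming the pingouin package is available and the data are in long format with placeholder column names.

```python
# Hedged sketch of the sensitivity analysis: Welch's ANOVA with Games-Howell
# post-hoc comparisons as a robustness check against unequal variances.
# Assumptions: the pingouin package is available and `df` is a long-format
# pandas DataFrame with columns "score" and "cluster" (placeholder names).
import pandas as pd
import pingouin as pg

def robust_between_cluster_tests(df: pd.DataFrame):
    welch = pg.welch_anova(data=df, dv="score", between="cluster")
    games_howell = pg.pairwise_gameshowell(data=df, dv="score", between="cluster")
    return welch, games_howell
```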

Reviewer 4 Report

This paper attempts to apply the psychometric paradigm to digital risk perception, addressing a timely topic, but exhibits several methodological weaknesses. 

  1. Self-report measures introduce bias, particularly for optimism bias and proactive behaviors, as no behavioral observations or objective metrics were used, potentially inflating or misrepresenting actual responses.
  2. The proactive awareness subscale demonstrates insufficient reliability (ω=0.61), and although consistent with prior research, low reliability may compromise measurement precision; using a more robust scale or additional items is recommended.
  3. In cluster analysis, the number of clusters was determined subjectively (e.g., dendrogram inspection) without quantitative validation (e.g., silhouette coefficient), risking unstable or overfitted classifications.
  4. The analysis did not control for potential confounding effects of demographic variables (e.g., age, gender), possibly omitting key influencers and introducing interpretation bias.

Author Response

We would like to sincerely thank the reviewer for the constructive and insightful comments, which have greatly contributed to improving the quality and precision of our work. Explanations of how each point has been addressed are provided in bold immediately following each specific comment.

This paper attempts to apply the psychometric paradigm to digital risk perception, addressing a timely topic, but exhibits several methodological weaknesses. 

Detailed comments

  1. Self-report measures introduce bias, particularly for optimism bias and proactive behaviors, as no behavioral observations or objective metrics were used, potentially inflating or misrepresenting actual responses.

 

We thank the reviewer for this valuable observation. We fully acknowledge that the exclusive use of self-report measures may introduce potential biases, particularly for constructs such as optimism bias and security behaviors. However, previous research has shown that self-assessment can provide a valid and informative approach to measuring cybersecurity practices, with evidence of honest self-reporting even for unsafe or undesirable behaviors (Russell et al., 2017; Cain et al., 2018).
These studies support the reliability of self-reported data as a practical and widely accepted proxy when behavioral observation or log-based measures are not feasible in large-scale online surveys. We have clarified this point and acknowledged the potential limitations of self-report measures in the revised manuscript as follows (rows 485-490): “Nonetheless, several limitations should be acknowledged. First, the reliance on self-report measures for expertise, optimism bias, and proactive awareness may have introduced social desirability or recall biases, limiting the strength of the behavioral inferences that can be drawn. Nevertheless, previous research has shown that self-reported cybersecurity behaviors can validly reflect individual practices and tendencies, even when unsafe actions are disclosed [42,43].”.

References:

Russell, J. D., Weems, C. F., Ahmed, I., & Richard III, G. G. (2017). Self-reported secure and insecure cyber behaviour: factor structure and associations with personality factors. Journal of Cyber Security Technology, 1(3-4), 163-174. 

Cain, A. A., Edwards, M. E., & Still, J. D. (2018). An exploratory study of cyber hygiene behaviors and knowledge. Journal of Information Security and Applications, 42, 36-45.

 

 

  2. The proactive awareness subscale demonstrates insufficient reliability (ω=0.61), and although consistent with prior research, low reliability may compromise measurement precision; using a more robust scale or additional items is recommended.

 

We appreciate the reviewer’s insightful observation regarding the reliability of the Proactive Awareness subscale. As suggested also by Reviewer #3, we conducted an exploratory factor analysis (EFA) using maximum likelihood extraction. A one-factor solution showed all items loading adequately (0.43–0.63) with good model fit (RMSEA = 0.049, RMSR = 0.04, TLI = 0.94), supporting a unidimensional structure. A two-factor model did not yield a theoretically meaningful separation—one factor was dominated by items 1–2, another by item 5, and items 3–4 cross-loaded on both factors—with only a marginal increase in explained variance (ΔVar = +4%). Given the brevity of the scale, the high inter-factor correlation (r = 0.50), and the conceptual coherence of a single latent construct, we retained the unidimensional solution. In addition to the factor-analytic results, we further examined the scale’s internal consistency using both classical and ordinal reliability estimates. While McDonald’s ω was modest (ω = .61), this value was comparable to that reported in the original validation study. Re-estimating reliability after removing individual items did not substantially improve the coefficient (max α = .56), indicating that all items contributed meaningfully to the construct. To account for the ordinal nature of the Likert responses, we computed ordinal reliability indices (ordinal α = .67; ordinal ω = .67), which yielded slightly higher values, confirming acceptable internal coherence for a short behavioral scale. Given the theoretical coherence, empirical unidimensionality, and consistency with prior research, we retained the full five-item version for all subsequent analyses. The modest reliability is transparently acknowledged as a limitation in the Discussion, and we note that it reflects the brevity and behavioral nature of the scale rather than a structural weakness.

 

 

  3. In cluster analysis, the number of clusters was determined subjectively (e.g., dendrogram inspection) without quantitative validation (e.g., silhouette coefficient), risking unstable or overfitted classifications.

 

We thank the reviewer for raising this important methodological point. In response, we added a new paragraph in the Results section reporting validation analyses of the four-cluster solution. Specifically, we calculated the average silhouette width and conducted a 70/30 cross-validation split. The mean silhouette width across clusters was 0.103 (Cluster 1 = 0.12, Cluster 2 = 0.10, Cluster 3 = 0.07, Cluster 4 = 0.13), indicating moderate internal cohesion. In the cross-validation, test cases were assigned to the nearest centroids derived from the training set, resulting in 9, 30, 26, and 25 cases per cluster, respectively. These results confirm that the four-cluster structure is reasonably stable and theoretically meaningful, supporting the validity of the adopted solution. We have clarified this point in the revised manuscript as follows (rows 304-311): “To further evaluate the robustness of this solution, we computed the average silhouette width and conducted a simple 70/30 cross-validation procedure. The mean silhouette width across clusters was 0.103, with cluster-specific averages of 0.12 (Cluster 1), 0.10 (Cluster 2), 0.07 (Cluster 3), and 0.13 (Cluster 4), indicating moderate internal cohesion and adequate separation among groups. In the cross-validation, test cases were assigned to the nearest centroids derived from the training set, resulting in 9, 30, 26, and 25 cases per cluster, respectively. These results support the overall stability and interpretability of the four-cluster solution.”

 

 

  4. The analysis did not control for potential confounding effects of demographic variables (e.g., age, gender), possibly omitting key influencers and introducing interpretation bias.

 

We thank the reviewer for this valuable suggestion. In response, we conducted a multiple linear regression including age and gender as covariates, along with cluster membership, self-rated cybersecurity expertise, and optimism bias as predictors of proactive security behavior. Results indicated that gender was not a significant predictor, while age showed a marginally positive effect (β = 0.006, p = 0.072). These results are now reported and discussed in the Results section as follows (rows 370-381): “To further examine how individual differences and cluster membership jointly influenced proactive awareness, a multiple linear regression was conducted including cluster membership (dummy-coded), self-rated cybersecurity expertise, optimism bias, and professional sector (IT vs. non-IT) as predictors, along with their interaction terms. The model accounted for approximately 24% of the variance in proactive awareness (R² = .24). Self-rated cybersecurity expertise emerged as a significant positive predictor (β = .15, p < .001), whereas optimism bias negatively predicted proactive awareness (β = –.15, p = .004). Among the cluster variables, only Cluster 4 was a significant negative predictor (β = –.421, p = .005), indicating lower engagement in security behaviors compared to participants in Cluster 1, which served as the reference group. Cluster 2, Cluster 3, gender, and all interaction terms were not significant, and age showed a marginal positive association (β = .006, p = .072).”.

Round 2

Reviewer 2 Report

The authors have addressed all comments and have significantly improved the presentation and justification of the results. The article can be accepted for publication. However, the dendrogram in Appendix B appears unreadable due to the extremely small font size. Its inclusion in the paper requires revision and coordination with the Editorial Office before publication.

Author Response

We sincerely thank the reviewer for their kind evaluation and constructive feedback. We acknowledge the comment regarding the readability of the dendrogram in Appendix B. To address this, the dendrogram will be provided as a separate image file, ensuring that all details are clearly visible. 

Reviewer 3 Report

Ready. Congratulations

It is Ok

Author Response

We sincerely thank the reviewer for the positive and encouraging feedback, as well as for the time dedicated to evaluating our manuscript. We are pleased to hear that the revisions have satisfactorily addressed the previous comments. As a final step, we had the English language carefully reviewed, and the minor linguistic adjustments made are highlighted in yellow in the revised version.

Reviewer 4 Report

The authors have addressed all the previous concerns and the paper may be accepted.

N/A

Author Response

We sincerely thank the reviewer for the positive and encouraging feedback, as well as for the time dedicated to evaluating our manuscript. We are pleased to hear that the revisions have satisfactorily addressed the previous comments.
