Review Reports
- Carmen G. Arbulú Pérez Vargas,
- Moises E. Rosas and
- Paloma Valdivia-Vizarreta *
Reviewer 1: Wilson Cheong Hin Hong
Reviewer 2: Hsiao-Ping Hsu
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
While I appreciate the cross-country comparison, I have major concerns over the measures, organization and the central arguments, as well as some less critical issues. Below are my detailed comments:
- The title is misleading— “Three Student Perspectives” suggests an in-depth qualitative study (e.g., case studies of three individuals), but the manuscript is actually a quantitative survey of over 800 students across three countries.
- The abstract does not need to contain so many methodological details: “The research was conducted between May and September 2020, employing a comparative approach that integrated Sankey diagrams and multivariable Poisson regression models with Sidak adjustments. The analysis controlled for variables such as age, gender, prior study modality, disability status, and the month in which the survey was administered”. Instead, the authors should briefly mention the sample and the survey method.
- The literature review needs reorganizing. Currently, it is scattered and not synthesized into a clear gap statement.
- I recommend having a synthesis at the end of the introduction section to tie the country contexts back to the research question.
- The “Participants” sub-sections should contain information about sampling methods and participant information. Do not mix participant and instrument information.
- The description of the instrument is confusing—it references both the CEAU and DigComp frameworks but does not clearly explain how the digital competence questions were developed or validated. If I understand correctly, the measures are two single items for self-perceived digital competence. Is it a valid measure? If I were a respondent, I would not be sure what my level is when I am prompted to “indicate your level of digital skills…”. This is the most critical issue in this study.
- You stated that “…a binary dependent variable was used that was coded as "yes" when the student stated that their DCs increased, or as "no" when they were maintained or decreased”. Then, Poisson regression models are not appropriate, as the data are binomial. The choice of a dichotomized DV over ordinal measurement requires a strong justification, as your choice greatly reduced the statistical power of the models. Also, given that you use several variables, should some of them be considered random factors instead of fixed factors? The gender variable is often missing. I suggest looking into mixed effects models (especially with a log-binomial family) and seeing if they might work better for your purposes; a sketch of such a model follows this list. Make sure to report the effect size and model fit statistics too.
- Phrases like “transformative contexts enabled students to test and mobilise their capacities” read as causal, as do some claims in the discussion. Reframe these as hypotheses consistent with the observed patterns in self-perceptions, not established causes. Further, the data cannot disentangle policy effects from institutional support or selection into responding. Downplay such claims and instead propose mechanisms to be tested in future work. Similarly in the conclusion, “Progress seems to be influenced mainly by countries’ education policies” is a strong claim not supported by the analysis presented.
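Below is a minimal sketch of the log-binomial alternative suggested above, assuming a Python/statsmodels workflow and hypothetical variable names (`improved`, `country`, `age`) rather than the study's actual data; a random-effects extension would require a GLMM (e.g., lme4::glmer in R or statsmodels' BinomialBayesMixedGLM).

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical illustrative data; names are placeholders, not the study's variables.
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "improved": (rng.random(300) < 0.3).astype(int),   # 1 = perceived DC increase
    "country": rng.choice(["Spain", "Mexico", "Peru"], 300),
    "age": rng.integers(18, 35, 300),
})

# Log-binomial GLM: binomial family with a log link, so exponentiated
# coefficients are risk ratios rather than odds ratios. Note that it can
# fail to converge when the outcome is common.
fit = smf.glm("improved ~ country + age", data=df,
              family=sm.families.Binomial(link=sm.families.links.Log())).fit()
print(np.exp(fit.params))  # risk ratios
```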
Some minor issues, but the manuscript writing is mostly alright.
- “During the preparation of this work, the authors no used LMM.” (Apart from the sentence structure, LMM should be LLM.)
- “Conclussion”
Author Response
Thank you very much for taking the time to review this manuscript. Please find the detailed responses below and the corresponding revisions highlighted/in track changes in the re-submitted files.
- The title is misleading— “Three Student Perspectives” suggests an in-depth qualitative study (e.g., case studies of three individuals), but the manuscript is actually a quantitative survey of over 800 students across three countries.
Response: Both changes have been made. The current title is: Self-Perceived Digital Competencies and University Change: Cross-Country Survey Evidence from Spain, Mexico, and Peru During COVID-19.
- The abstract does not need to contain so many methodological details: “The research was conducted between May and September 2020, employing a comparative approach that integrated Sankey diagrams and multivariable Poisson regression models with Sidak adjustments. The analysis controlled for variables such as age, gender, prior study modality, disability status, and the month in which the survey was administered”. Instead, the authors should briefly mention the sample and the survey method.
Response: Completely agreed. We have removed the methodological details and included only a brief mention of the sample and the survey method.
Comment: The literature review needs reorganizing. Currently, it is scattered and not synthesized into a clear gap statement.
Response: The bibliographic references have been thoroughly reviewed. Originally in Vancouver format, they have been adapted to APA in accordance with the publisher’s guidelines. In addition, each of the cited contributions has been carefully verified.
- I recommend having a synthesis at the end of the introduction section to tie the country contexts back to the research question.
Response: Thank you. We have added the following paragraph at the end of the literature review:
Despite national contributions on self-perception (Romero-Tena et al., 2021; Salem et al., 2022) and broad international comparisons of the student experience (Aristovnik et al., 2020), multi-country evidence focused specifically on self-perceptions during COVID-19 remains limited. Accordingly, this study examines whether perceived changes differ across countries and student profiles.
- The “Participants” sub-sections should contain information about sampling methods and participant information. Do not mix participant and instrument information.
Response: Thank you. We have reorganized this subsection accordingly and modified the following paragraph:
A total of 881 students were included in the study. Students with physical disabilities (n = 7) who completed the survey were excluded, since they constituted a very small subgroup, which would prevent any reliable inference about them. It was also not possible to combine them with other categories of the Disability status variable, because such a combination would lack conceptual meaning. In addition, convergence problems during the estimation of the statistical models, due to the small cell size, warranted their exclusion from the study.
- The description of the instrument is confusing—it references both the CEAU and DigComp frameworks but does not clearly explain how the digital competence questions were developed or validated. If I understand correctly, the measures are two single items for self-perceived digital competence. Is it a valid measure? If I were a respondent, I would not be sure what my level is when I am prompted to “indicate your level of digital skills…”. This is the most critical issue in this study.
Response: Thank you for the comment. We realize now that this issue needs more elaboration in the paper, and we agree on its critical importance. An ordinal scale is not valid for estimating absolute parameters due to the lack of equidistant intervals and/or the absence of a true zero, but it is valid for estimating relative changes in paired samples, since the analysis focuses on the direction or order of changes within the same individuals. This is widely used in studies that employ Likert-type scales to estimate before–after changes, or in quality-of-life scales, in which the “true” quality of life of a subject cannot be estimated with a single measurement, but improvement or deterioration can be assessed in subsequent measurements.
In the present study, to evaluate the perception of change, the dependent variable was constructed from a single ordinal item (basic, intermediate, advanced, specialist), asked twice of the same study subject—the first time referring to the beginning of the pandemic, and the second time referring to the (later) moment of the survey (one or several months afterward)—to assess change on that ordinal scale. As in any ordinal scale, the terms basic, intermediate, advanced, and specialist do not represent a specific or “real” level of competence, as they are only labels for those categories, which could have been replaced by numeric labels from 1 to 4 without representing a specific or objectively evaluated level of digital competence. We acknowledge that a single item has limitations compared with multi-item scales. However, for our specific objective of capturing self-perceived changes during an unprecedented crisis period, this parsimonious approach was appropriate because, in our view, it imposed minimal additional burden on students, who might have been experiencing significant stress associated with coping with the pandemic while continuing their studies. Regarding the choice of category labels, we considered that a change from “basic” to “advanced” would be more interpretable than from “1” to “3,” provided it was made clear that a competence was not being evaluated, but rather a self-perception.
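To make this construction concrete, here is a minimal sketch, assuming pandas and hypothetical column names (`dc_start`, `dc_now`) for the two administrations of the ordinal item; the resulting binary `improved` variable corresponds to the dependent variable discussed in the next response.

```python
import pandas as pd

# Hypothetical columns: the same ordinal item asked twice of each student.
levels = ["basic", "intermediate", "advanced", "specialist"]
df = pd.DataFrame({
    "dc_start": ["basic", "intermediate", "advanced", "basic"],
    "dc_now":   ["intermediate", "intermediate", "advanced", "specialist"],
})

# Map labels to ranks; only the order matters, not the spacing between levels.
rank = {lvl: i for i, lvl in enumerate(levels)}
diff = df["dc_now"].map(rank) - df["dc_start"].map(rank)

# Polytomous (worse / same / better) and dichotomous (better vs. not) codings.
df["change"] = pd.cut(diff, bins=[-4, -1, 0, 4], labels=["worse", "same", "better"])
df["improved"] = (diff > 0).astype(int)  # "yes" = increased; "no" = same or decreased
```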
- You stated that “…a binary dependent variable was used that was coded as "yes" when the student stated that their DCs increased, or as "no" when they were maintained or decreased”. Then, Poisson regression models are not appropriate, as the data are binomial. The choice of a dichotomized DV over ordinal measurement requires a strong justification, as your choice greatly reduced the statistical power of the models. Also, given that you use several variables, should some of them be considered random factors instead of fixed factors? The gender variable is often missing. I suggest looking into mixed effects models (especially with a log-binomial family) and seeing if they might work better for your purposes. Make sure to report the effect size and model fit statistics too.
Response: Thank you for the comment. While it may seem counterintuitive to use a Poisson distribution for a binary outcome (which is a count of 0 or 1), the modified Poisson regression method that uses a robust (sandwich) variance estimator provides reliable estimates of risk (proportion or prevalence) ratios and associated standard errors in cross-sectional or prospective studies, making it a strong alternative to logistic regression and log-binomial models (Chen, W., Qian, L., Shi, J. et al. Comparing performance between log-binomial and robust Poisson regression models for estimating risk ratios under model misspecification. BMC Med Res Methodol 18, 63 (2018). https://doi.org/10.1186/s12874-018-0519-5). A minimal illustrative sketch of this approach follows the list below.
Among the advantages of using Poisson regression for binomial outcomes or proportions are:
- Coefficients are directly interpretable, unlike logistic regression, which estimates log-odds that are difficult to interpret. (Talbot D, Mésidor M, Chiu Y, Simard M, Sirois C. An Alternative Perspective on the Robust Poisson Method for Estimating Risk or Prevalence Ratios. Epidemiology. 2023 Jan 1;34(1):1-7. doi: 10.1097/EDE.0000000000001544).
- Handles common outcomes better. It avoids the frequent non-convergence issues sometimes seen with log-binomial regression, especially when dealing with common outcomes (prevalence > 10%), which is seen in the present study. (Chen, W., Qian, L., Shi, J. et al. Comparing performance between log-binomial and robust Poisson regression models for estimating risk ratios under model misspecification. BMC Med Res Methodol 18, 63 (2018). https://doi.org/10.1186/s12874-018-0519-5).
- Robustness to model misspecification: The robust Poisson method is based on a weaker assumption than a fully parametric Poisson model. It only requires a log-linear relationship between the risk/prevalence and the explanatory variables and does not assume that the outcome follows a Poisson distribution, making it more flexible. (Talbot D, Mésidor M, Chiu Y, Simard M, Sirois C. An Alternative Perspective on the Robust Poisson Method for Estimating Risk or Prevalence Ratios. Epidemiology. 2023 Jan 1;34(1):1-7. doi: 10.1097/EDE.0000000000001544).
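As a hedged sketch of this modified Poisson approach (not the authors' actual code; the data and variable names are hypothetical placeholders):

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical data standing in for the survey; other covariates (gender,
# prior modality, disability status, month) would enter the formula the same way.
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "improved": (rng.random(300) < 0.5).astype(int),
    "country": rng.choice(["Spain", "Mexico", "Peru"], 300),
    "age": rng.integers(18, 35, 300),
})

# Modified Poisson: Poisson family on a binary outcome with a robust
# (sandwich) covariance estimator, as in Chen et al. (2018).
fit = smf.glm("improved ~ country + age", data=df,
              family=sm.families.Poisson()).fit(cov_type="HC1")

# Exponentiated coefficients and confidence intervals are prevalence/risk ratios.
print(np.exp(fit.params))
print(np.exp(fit.conf_int()))
```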
In general, choosing a dichotomized DV over an ordinal measurement entails a loss of information and a decrease in statistical power. In the present study, however, the DV is not the ordinal item (basic, intermediate, advanced, specialist) but rather the difference in this item measured on two occasions; due to the lack of equal intervals, the difference between the two ordinal measurements in the same person can be polytomized (worse, same, better) or dichotomized (worse or same vs. better). We chose the latter to avoid using ordinal logistic regression, which has the limitations noted relative to Poisson regression and would complicate the analysis and divert it from its objective.
We appreciate the observation regarding the possibility of modeling some variables as random effects. However, in our case, the primary objective of including these variables in the model is not to evaluate their association with the dependent variable, but to adequately adjust (control) the comparison among the three countries included in the study. For this reason, we considered it more appropriate to treat them as fixed effects.
- Age and gender: Kept as fixed effects, given that they are relevant demographic variables and their specific effects could influence the cross-country comparison.
- Prior study modality: This variable was included solely to adjust for the effect that the prior modality could have on the dependent variable. Since we do not seek to interpret differences among modalities, treating it as a fixed effect is more appropriate, especially given that we worked with the full set of modalities observed in the sample.
- Disability status: This variable, being dichotomous, was also included for adjustment rather than interpretation. Therefore, keeping it as a fixed effect avoids unnecessary additional assumptions about the distribution of its levels.
- Month of survey administration: Although we evaluated the possibility of treating this variable as a random effect, the survey was administered only during five months (May to September), which represent the entirety of the study period and not a random sample of time. Given the small number of levels and our exclusive interest in adjusting for potential temporal variation, specifying Month as a fixed effect is more appropriate and statistically more stable.
Finally, regarding the effect size (95% confidence intervals), these are shown in Table 3 for the cross-country comparison, which is the objective of the study. They are not shown for the other independent variables because, as noted, the goal of including them in the model is not to estimate their effect on the DV, but to increase the internal validity of the cross-country comparison.
- Phrases like “transformative contexts enabled students to test and mobilise their capacities” read as causal, as do some claims in the discussion. Reframe these as hypotheses consistent with the observed patterns in self-perceptions, not established causes. Further, the data cannot disentangle policy effects from institutional support or selection into responding. Downplay such claims and instead propose mechanisms to be tested in future work. Similarly in the conclusion, “Progress seems to be influenced mainly by countries’ education policies” is a strong claim not supported by the analysis presented.
Response: We have revised the Discussion and Limitations sections to address this point. In the Discussion, causal language has been softened, and in the Limitations we now explicitly note that the data do not allow us to disentangle policy effects from institutional support or potential self-selection into responding.
Comments on the Quality of English Language
Some minor issues, but the manuscript writing is mostly alright.
- “During the preparation of this work, the authors no used LMM.” (Apart from the sentence structure, LMM should be LLM.)
- “Conclussion”
Response: The manuscript has been carefully revised in English, and the expressions noted, including those highlighted by the reviewer, have been corrected.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
This manuscript provides a comparative examination of university students’ self-perceived digital competences (DCs) across three Spanish-speaking countries during the first six months of the COVID-19 pandemic. The comparative element and the use of Poisson regression modelling are commendable. At the same time, the manuscript would benefit from a more rigorous theoretical framing, methodological transparency, and deeper interpretive engagement with the findings.
Below I outline comments in two parts. The thematic review identifies foundational challenges related to framing, conceptualisation, and methodology. The structured review offers detailed, section-by-section comments to help guide revisions.
Thematic Review
Conceptualisation of Digital Competence
The study relies on self-perceived DCs operationalised as categorical levels (basic, intermediate, advanced, specialist). While DigComp is briefly cited, there is little engagement with ongoing debates about what counts as digital competence and how perception may diverge from actual ability. Without this conceptual clarity, the paper risks equating improved self-reported categories with substantive skill development. A stronger theoretical bridge between DigComp (or similar frameworks) and the survey measures is needed.
Methodological Justification and Transparency
The choice of Poisson regression for a binary dependent variable is unconventional. Logistic regression would normally be used, and the rationale for preferring Poisson requires explicit justification. Similarly, the exclusion of seven students with disabilities (due to model convergence) undermines inclusivity and warrants deeper discussion. These decisions affect the validity and replicability of the study, so greater transparency is essential.
Interpretation of Findings
The results compellingly show Peru and Mexico “catching up” with Spain in perceived DCs, but the analysis remains mainly descriptive. More interpretive depth is needed: Why did Peru and Mexico improve? How do institutional responses (outlined in Section 1.1 Contextual Analysis) link to the observed changes? Without stronger explanatory framing, the paper risks overstating what is essentially a descriptive trend. Please make this critical linkage more explicit.
Policy and Practical Implications
The conclusion gestures towards proactive, agile models of education, but the implications remain abstract. For greater impact, the paper should articulate concrete strategies: What curricular reforms, training initiatives, or policy actions might sustain and deepen the gains observed in Mexico and Peru?
Structured Review
Abstract
- Clear summary of aims and findings. However, references to artificial intelligence appear peripheral. Either integrate AI substantively into the analysis or remove from the framing.
Introduction
- The pandemic is described as exposing inequalities, but the connection to digital competence is not fully articulated. Clarify how digital competence is distinct from access issues.
- Claim of novelty (We have not identified any studies…) needs to be qualified. While it is true that few studies directly examine self-perceived digital competences across multiple countries during the pandemic, there are notable comparative works that examine related issues (e.g., Aristovnik et al., 2020, who studied higher education students across 62 countries). These may not focus specifically on digital competences, but they do compare academic, social, and digital aspects of students’ experiences. Presently, the manuscript risks overstating its originality. I suggest rephrasing to acknowledge that this study contributes a focused analysis of DC self-perceptions, rather than implying there were no prior cross-country investigations at all. This more careful positioning will strengthen credibility.
Contextual Background
- These sections are informative but overly descriptive. They could be condensed, with more analytical contrast between Spain, Mexico, and Peru in terms of preparedness and digital policy prior to the pandemic.
Methods
- Sampling: Reliance on convenience sampling is understandable under pandemic conditions, but limitations for generalisability should be acknowledged more directly.
- Instrument: The binary “improvement/no improvement” coding simplifies a nuanced construct. Consider acknowledging alternative approaches (e.g., scale-based measures, triangulation).
- Statistical analysis: Justify the choice of Poisson regression and clarify Sidak adjustment procedure.
Results
- Tables 2–4 and Figures 1–2 are informative, but the interpretation is underdeveloped. For example, Table 4 highlights heterogeneous patterns by gender and prior study modality: Spanish students show relatively low improvements overall, with some unexpected variations (e.g., higher reported gains among those with sensory disabilities), while Peruvian students report substantial improvements across nearly all categories. These patterns are presented descriptively but not analysed in depth. The discussion should directly address why Spain diverges from Mexico and Peru, and whether this reflects institutional preparedness, cultural factors in self-assessment, or differences in baseline digital training. At present, the reader is left to infer explanations. More explicit linkage between the quantitative findings and the contextual background (Section 1.1) would give the results greater explanatory power and policy relevance.
- Surprising findings should be explicitly highlighted and interpreted. For instance, Spain begins with a stronger baseline of digital competences, yet students report comparatively lower levels of improvement than their peers in Mexico and Peru. This counterintuitive result deserves more than descriptive reporting—it should be problematised and explained. Does this pattern suggest a “ceiling effect” where Spanish students had less room for perceived growth? Or might it reflect different institutional strategies during the pandemic, where Spain’s rapid transition to emergency remote teaching created stress without generating perceived gains? Bringing these possible interpretations into the discussion would help readers understand not only what the numbers show, but why the divergence occurs.
Discussion
- Strong in noting that self-perception does not equal measured competence, but this point is not sustained. Bring in literature on self-assessment biases (e.g., Dunning–Kruger effect) to contextualise.
- Better integrate the institutional policy responses (Section 1.1) into the explanation of cross-country differences.
Conclusion
- The argument that the pandemic was an “opportunity” for developing countries requires more nuance. While the data show that students in Mexico and Peru reported notable improvements in their digital competences, framing this solely as an “opportunity” risks oversimplifying a complex reality. The authors should explicitly discuss whether these gains are likely to be temporary, tied to emergency conditions, or sustainable over the long term. For example, do these improvements reflect genuine structural changes (e.g., expanded digital infrastructure, permanent curriculum reform), or are they mainly short-term adaptations under crisis? Without systemic investment in connectivity, teacher training, and institutional support, the progress observed could easily stagnate or even reverse. Adding this layer of analysis would prevent overgeneralisation and strengthen the policy relevance of the conclusion.
- Move from abstract calls (“proactive and agile models”) to concrete recommendations. For example: targeted digital literacy training in teacher education programmes; government subsidies for connectivity; or embedding DigComp assessments into curricula.
Generally readable, but some phrasing is awkward (e.g., “the authors no used LMM”). Careful proofreading is needed.
Author Response
Thank you very much for taking the time to review this manuscript. Please find the detailed responses below and the corresponding revisions highlighted/in track changes in the re-submitted files.
Reviewer 2
This manuscript provides a comparative examination of university students’ self-perceived digital competences (DCs) across three Spanish-speaking countries during the first six months of the COVID-19 pandemic. The comparative element and the use of Poisson regression modelling are commendable. At the same time, the manuscript would benefit from a more rigorous theoretical framing, methodological transparency, and deeper interpretive engagement with the findings.
Below I outline comments in two parts. The thematic review identifies foundational challenges related to framing, conceptualization, and methodology. The structured review offers detailed, section-by-section comments to help guide revisions.
Thematic Review
Comments 1:
Conceptualisation of Digital Competence
The study relies on self-perceived DCs operationalized as categorical levels (basic, intermediate, advanced, specialist). While DigComp is briefly cited, there is little engagement with ongoing debates about what counts as digital competence and how perception may diverge from actual ability. Without this conceptual clarity, the paper risks equating improved self-reported categories with substantive skill development. A stronger theoretical bridge between DigComp (or similar frameworks) and the survey measures is needed.
Response 1:
The conceptualization of digital competence has been strengthened, with an expanded reference to DigComp and a broader discussion of current debates. In addition, recent studies have been incorporated to support the relevance of measuring self-perception, while also acknowledging its limitations. The amended passage is as follows:
“Digital competence, as a multidimensional construct, is described in reference frameworks such as DigComp (Redecker & Punie, 2017), developed by the European Commission, which sets out five key areas: information and data, communication and collaboration, content creation, security, and problem-solving. However, in this study, we do not assess objective performance in these areas, but rather students’ self-perceived digital competence, categorized into four levels: basic, intermediate, advanced, and specialized. These categories, inspired by the progressive logic of DigComp, make it possible to approximate how students feel capable of mobilizing their skills. Beyond conceptual frameworks (Salem et al., 2022), it is crucial to measure this self-perception because it reflects dimensions of confidence and agency that influence the effective use of technologies in situations of change, and because it is sensitive to rapid transformations in the educational context and to inequalities that are not always evident in objective tests.”
Comments 2:
Methodological Justification and Transparency
The choice of Poisson regression for a binary dependent variable is unconventional. Logistic regression would normally be used, and the rationale for preferring Poisson requires explicit justification. Similarly, the exclusion of seven students with disabilities (due to model convergence) undermines inclusivity and warrants deeper discussion. These decisions affect the validity and replicability of the study, so greater transparency is essential.
Response 2: Thank you for the comment. Regarding the excluded subjects, we have expanded the relevant paragraph as follows:
A total of 881 students were included in the study. Students with physical disabilities (n = 7) who completed the survey were excluded, since they constituted a very small subgroup, which would prevent any reliable inference about them. It was also not possible to combine them with other categories of the Disability status variable, because such a combination would lack conceptual meaning. In addition, convergence problems during the estimation of the statistical models, due to the small cell size, warranted their exclusion from the study.
Regarding the use of Poisson regression, we have stated: “While it may seem counterintuitive to use a Poisson distribution for a binary outcome (which is a count of 0 or 1), the modified Poisson regression method that uses a robust (sandwich) variance estimator provides reliable estimates of risk (proportion or prevalence) ratios and associated standard errors in cross-sectional or prospective studies, making it a strong alternative to logistic regression and log-binomial models (Chen, W., Qian, L., Shi, J. et al. Comparing performance between log-binomial and robust Poisson regression models for estimating risk ratios under model misspecification. BMC Med Res Methodol 18, 63 (2018). https://doi.org/10.1186/s12874-018-0519-5).”
Among the advantages of using Poisson regression for binomial outcomes or proportions are:
- Coefficients are directly interpretable, unlike logistic regression, which estimates log-odds that are difficult to interpret. (Talbot D, Mésidor M, Chiu Y, Simard M, Sirois C. An Alternative Perspective on the Robust Poisson Method for Estimating Risk or Prevalence Ratios. Epidemiology. 2023 Jan 1;34(1):1-7. doi: 10.1097/EDE.0000000000001544).
- Handles common outcomes better. It avoids the frequent non-convergence issues sometimes seen with log-binomial regression, especially when dealing with common outcomes (prevalence > 10%), which is seen in the present study. (Chen, W., Qian, L., Shi, J. et al. Comparing performance between log-binomial and robust Poisson regression models for estimating risk ratios under model misspecification. BMC Med Res Methodol 18, 63 (2018). https://doi.org/10.1186/s12874-018-0519-5).
- Robustness to model misspecification: The robust Poisson method is based on a weaker assumption than a fully parametric Poisson model. It only requires a log-linear relationship between the risk/prevalence and the explanatory variables and does not assume that the outcome follows a Poisson distribution, making it more flexible. (Talbot D, Mésidor M, Chiu Y, Simard M, Sirois C. An Alternative Perspective on the Robust Poisson Method for Estimating Risk or Prevalence Ratios. Epidemiology. 2023 Jan 1;34(1):1-7. doi: 10.1097/EDE.0000000000001544).
Comments 3:
Interpretation of Findings
The results compellingly show Peru and Mexico “catching up” with Spain in perceived DCs, but the analysis remains mainly descriptive. More interpretive depth is needed: Why did Peru and Mexico improve? How do institutional responses (outlined in Section 1.1 Contextual Analysis) link to the observed changes? Without stronger explanatory framing, the paper risks overstating what is essentially a descriptive trend. Please make this critical linkage more explicit.
Response 3:
We have now made the linkage more explicit between the findings and the contextual analysis. The following paragraph has been added to the Discussion section:
“These findings can be partially explained by the institutional differences outlined in Section 1.1. The lower initial levels of digital preparedness in Mexico and Peru likely created greater scope for perceived improvement when faced with emergency remote teaching. The rapid implementation of institutional responses, including the deployment of digital platforms, ad hoc training, and flexible pedagogical strategies, may have directly influenced students’ perceived competence. The contrast with Spain, where more consolidated systems were already in place prior to the pandemic, suggests that abrupt systemic transformation may foster measurable gains in digital self-efficacy under crisis conditions”.
Comments 4:
Policy and Practical Implications
The conclusion gestures towards proactive, agile models of education, but the implications remain abstract. For greater impact, the paper should articulate concrete strategies: What curricular reforms, training initiatives, or policy actions might sustain and deepen the gains observed in Mexico and Peru?
Response 4: We acknowledge the importance of being more concrete. In line with the reviewer’s input, the text of the conclusions has been expanded accordingly.
Structured Review
Comments 5:
Abstract
- Clear summary of aims and findings. However, references to artificial intelligence appear peripheral. Either integrate AI substantively into the analysis or remove from the framing.
Response 5: The reference to artificial intelligence has been removed from the abstract, and its integration in the introduction, discussion, and conclusions has been simplified, emphasizing that digital competences must continue to be developed beyond the pandemic, as technologies are constantly evolving and require flexibility and ongoing adaptation.
Comments 6:
Introduction
- The pandemic is described as exposing inequalities, but the connection to digital competence is not fully articulated. Clarify how digital competence is distinct from access issues.
Response 6: In response, we have extended the relevant paragraph in the Introduction to differentiate more clearly between digital access and digital competence, in line with the reviewer’s suggestion.
Comments 7:
- Claim of novelty (We have not identified any studies…) needs to be qualified. While it is true that few studies directly examine self-perceived digital competences across multiple countries during the pandemic, there are notable comparative works that examine related issues (e.g., Aristovnik et al., 2020, who studied higher education students across 62 countries). These may not focus specifically on digital competences, but they do compare academic, social, and digital aspects of students’ experiences. Presently, the manuscript risks overstating its originality. I suggest rephrasing to acknowledge that this study contributes a focused analysis of DC self-perceptions, rather than implying there were no prior cross-country investigations at all. This more careful positioning will strengthen credibility.
Response 7: We fully agree, and the text has been revised accordingly: “Although relatively few studies have directly examined self-perceived digital competencies across multiple countries during the pandemic, there are relevant contributions addressing this dimension within national contexts, such as the longitudinal study by Salem et al. (2022) in Saudi Arabia. There are also broader comparative studies, such as Aristovnik et al. (2020), which analyze academic, social, and digital aspects of the student experience in 62 countries, although they do not focus specifically on the self-perception of digital competencies. In this sense, our study seeks to contribute a perspective centered on the self-perception of university students in developing countries.”
Comments 8:
Contextual Background
- These sections are informative but overly descriptive. They could be condensed, with more analytical contrast between Spain, Mexico, and Peru in terms of preparedness and digital policy prior to the pandemic.
Response 8:
The contextual sections have been summarized and integrated, with greater emphasis on the analytical contrast between Spain, Mexico and Peru in terms of digital preparedness and policy prior to the pandemic.
Comments 9:
Methods
- Sampling: Reliance on convenience sampling is understandable under pandemic conditions, but limitations for generalizability should be acknowledged more directly.
Response 9: Thank you for the comment. We now acknowledge this limitation explicitly in the Discussion section.
Comments 10:
- Instrument: The binary “improvement/no improvement” coding simplifies a nuanced construct. Consider acknowledging alternative approaches (e.g., scale-based measures, triangulation).
Response 10: Thank you for the comment. We have expanded the passage as follows: “In the present study, to evaluate the perception of change, the dependent variable was constructed from a single ordinal item (basic, intermediate, advanced, specialist), asked twice of the same study subject—the first time referring to the beginning of the pandemic, and the second time referring to the (later) moment of the survey (one or several months afterward)—to assess change on that ordinal scale. As in any ordinal scale, the terms basic, intermediate, advanced, and specialist do not represent a specific or “real” level of competence, as they are only labels for those categories, which could have been replaced by numeric labels from 1 to 4 without representing a specific or objectively evaluated level of digital competence. We acknowledge that a single item has limitations compared with multi-item scales. However, for our specific objective of capturing self-perceived changes during an unprecedented crisis period, this parsimonious approach was appropriate because, in our view, it imposed minimal additional burden on students, who might have been experiencing significant stress associated with coping with the pandemic while continuing their studies.”
Comments 11:
- Statistical analysis: Justify the choice of Poisson regression and clarify Sidak adjustment procedure.
Response 11: Please see our response to Comment 2.
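For readers, a minimal sketch of a Sidak adjustment of the kind referenced in the Methods (the p-values are hypothetical; this is not the authors' actual procedure):

```python
from statsmodels.stats.multitest import multipletests

# Hypothetical unadjusted p-values for the three pairwise country contrasts.
pvals = [0.012, 0.034, 0.210]

# Sidak one-step correction: p_adj = 1 - (1 - p)^k for k comparisons,
# controlling the family-wise error rate.
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="sidak")
print(p_adj, reject)
```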
Comments 12:
Results
- Tables 2–4 and Figures 1–2 are informative, but the interpretation is underdeveloped. For example, Table 4 highlights heterogeneous patterns by gender and prior study modality: Spanish students show relatively low improvements overall, with some unexpected variations (e.g., higher reported gains among those with sensory disabilities), while Peruvian students report substantial improvements across nearly all categories. These patterns are presented descriptively but not analyzed in depth. The discussion should directly address why Spain diverges from Mexico and Peru, and whether this reflects institutional preparedness, cultural factors in self-assessment, or differences in baseline digital training. At present, the reader is left to infer explanations. More explicit linkage between the quantitative findings and the contextual background (Section 1.1) would give the results greater explanatory power and policy relevance.
Response 12: Thank you for the comment. We have revised the relevant section, stressing that the role of the secondary independent variables was mainly to control the cross-country comparison, not to explain the dependent variable, so we did not expand on them.
Comments 13:
- Surprising findings should be explicitly highlighted and interpreted. For instance, Spain begins with a stronger baseline of digital competences, yet students report comparatively lower levels of improvement than their peers in Mexico and Peru. This counterintuitive result deserves more than descriptive reporting—it should be problematised and explained. Does this pattern suggest a “ceiling effect” where Spanish students had less room for perceived growth? Or might it reflect different institutional strategies during the pandemic, where Spain’s rapid transition to emergency remote teaching created stress without generating perceived gains? Bringing these possible interpretations into the discussion would help readers understand not only what the numbers show, but why the divergence occurs.
Response 13: To address this observation, we have expanded the Discussion with a deeper interpretation of the Spanish and Peruvian cases. For Spain, we integrated prior literature (Aguilar Cuesta et al., 2021) and added the possibility of a ceiling effect. For Peru, we linked the perceived improvement to forced exposure to virtual environments and to recent public policies (García Zare et al., 2023). With regard to Mexico, the trends were more aligned with those of Peru, but without the same institutional reinforcement, which is why the Discussion focuses on the more contrasting cases.
Comments 14:
Discussion
- Strong in noting that self-perception does not equal measured competence, but this point is not sustained. Bring in literature on self-assessment biases (e.g., Dunning–Kruger effect) to contextualise.
Response 14: Thank you for the comment. We have sustained this point as follows:
In subsection 2.2 The Instrument: “We acknowledge that these single-item measures have limitations in terms of precision and may present ambiguity for respondents regarding what constitutes each competency level. However, ordinal scales are appropriate for capturing relative changes in paired samples, as our analysis focuses on the direction of change within individuals rather than absolute competency levels. This approach is consistent with established practices in educational research examining self-perceived changes over time.”
In the Discussion section: “Our study has some limitations. First, although we assessed students' self-perception of changing their DCs, we did not directly measure these competencies. However, self-perception is a valid and relevant construct because it influences behavior and predicts students’ technological adoption (Bandura, 1997; Compeau & Higgins, 1995; Venkatesh & Davis, 2000; Venkatesh et al., 2003). Although we believe that our assessments reflect an underlying change, it would be necessary to confirm our findings with direct measurements.”
Comments 15:
- Better integrate the institutional policy responses (Section 1.1) into the explanation of cross-country differences.
Response 15: We have taken this observation into account, and the relevant section has been reduced and integrated in the revised version, ensuring that it is no longer merely descriptive.
Comments 16:
Conclusion
- The argument that the pandemic was an “opportunity” for developing countries requires more nuance. While the data show that students in Mexico and Peru reported notable improvements in their digital competences, framing this solely as an “opportunity” risks oversimplifying a complex reality. The authors should explicitly discuss whether these gains are likely to be temporary, tied to emergency conditions, or sustainable over the long term. For example, do these improvements reflect genuine structural changes (e.g., expanded digital infrastructure, permanent curriculum reform), or are they mainly short-term adaptations under crisis? Without systemic investment in connectivity, teacher training, and institutional support, the progress observed could easily stagnate or even reverse. Adding this layer of analysis would prevent overgeneralisation and strengthen the policy relevance of the conclusion.
Response 16: The observation has been addressed by qualifying the reference to the pandemic as an “opportunity”, emphasizing that advances in digital competences require cautious interpretation. The conclusion now incorporates an analysis of the possibility that such improvements reflect emergency conditions and may not be sustainable without investment in connectivity, teacher training and institutional support.
Comments 17:
- Move from abstract calls (“proactive and agile models”) to concrete recommendations. For example: targeted digital literacy training in teacher education programmes; government subsidies for connectivity; or embedding DigComp assessments into curricula.
Response 17: We appreciate the suggestion. Concrete recommendations have been incorporated into the conclusions: teacher training in digital literacy, grants for connectivity, the integration of DigComp assessments into the curriculum, and the promotion of open resources and hybrid teaching.
Comments 18:
Comments on the Quality of English Language
Generally readable, but some phrasing is awkward (e.g., “the authors no used LMM”). Careful proofreading is needed.
Response 18: The manuscript has been carefully revised in English, and the expressions indicated, including those mentioned by the reviewer, have been corrected.
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
I thank the authors for the effort to address my comments. However, there are still issues I would like to see fixed before publication:
- I think the abstract should briefly state the sample, countries, timeframe, and survey method only. Other methodological details are not needed.
- For instrument validity, provide concrete validation evidence. I remain concerned that a single-item self-rating is weak for construct validity even when used as a paired measure. If nothing can be done about it, clearly acknowledge the limited psychometric validation.
- Reporting of full model results and fit statistics. I expected greater transparency beyond the country CIs. Include a supplementary table with full model coefficients, 95% CIs and p‑values for all covariates and key fit statistics or diagnostics to allow readers to assess adjustment effects.
- Please hedge causal claims. I recommend framing mechanisms as associations rather than causes. All you have found are correlations from a cross-sectional survey.
I've found duplicated sentences, some awkward phrasing, and typographical errors (e.g., the LLM sentence). Proofread the manuscript once again.
Author Response
Please see the attached file.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
I would like to thank the authors for their comprehensive and constructive responses to my previous comments. The revised manuscript has significantly improved in terms of conceptual framing, methodological transparency, and interpretive depth.
Author Response
The methodology has been improved and a table has been included as an annex.