Review Reports
- Zattra Blakong,
- Charuay Savithi* and
- Sommai Khantong
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3: Andreia De Bem Machado
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
I have read with interest the article “Modeling the Determinants of Smart University Success: An Empirical Study in Thailand”, which seeks to identify the determinants of the effectiveness of the Smart University model.
This article addresses a topic that may be of interest, insofar as the model itself is considered relevant for analyzing the current circumstances faced by Higher Education Institutions (HEIs).
With regard to the content of the article, there are several issues that concern me, as they may affect either the rigor of the research or the quality with which this rigor is conveyed to readers. In my view, the article would improve on both counts if the following recommendations were followed:
- Expand and improve the methodological information provided on the fieldwork:
- Complete the description of the institutions that constitute the population under study. The text refers to 57 public and 43 private universities (292–293), but indicates that the analysis is conducted on 126 HEIs.
- Explain to what extent the number of respondents (96 administrators) is the result of the sampling design or of the response rate.
- Provide information on the characteristics of the respondents. At a minimum, some data on their position within the university and their tenure at the institution seem indispensable. This also requires stating the criterion used to identify the person who should answer the questionnaire. I also believe it is important to clarify whether each administrator represents one HEI.
- Provide more information on the sampling procedure. The text states that stratified random sampling was used, but does not report what the strata were, what allocation criterion was applied, etc. Ultimately, the reader only knows that the study is based on the responses given by 96 administrators, with no additional information to support their representativeness. In my opinion, the description of the sampling design should precede the information on the number of responses obtained, the response rate, the characteristics of respondents, and so on.
- I also recommend providing more information on the questionnaire design. The text indicates that items with lower levels of congruence were removed, but does not specify how many or which ones. In my view, what would help the reader most would be to know the exact wording of the items and questions. At present, the text offers little in this respect: items are identified by codes that do not allow the reader to know their content (which makes it difficult for the reader to draw their own conclusions or evaluate those of the authors). Moreover, in the absence of information on the questions asked, it is not clear which aspect of the item is being investigated. Different questions could lead to different answers and different interpretations. For example: Is this dimension (item) present in your HEI? Do you consider this dimension (item) to be a determinant for the effectiveness of your HEI? Do you consider this dimension (item) to be a determinant of the effectiveness of any HEI? …
- Address the possibility of common method bias in the study. Given its nature (a single instrument, a single group of informants, influence of social desirability), this research is at high risk of common method bias, and from reading the text I do not get the impression that the authors have considered or mitigated this risk.
- Provide references to support the authors’ statements in certain parts of the article. For example, in lines 50–57 there is not a single reference, despite the categorical tone in which several claims are made.
- Resolve a terminological inconsistency between the literature cited and the text. In the literature reviewed, the term predominantly used is Smart campuses, whereas the article itself consistently prefers Smart universities, with only a few exceptions. The discrepancy is particularly noticeable in lines 212–213, where the authors state: “Although many researchers have studied Smart Universities, most still look at each part separately instead of as one connected system.”
- Find more concise formulations for the hypotheses. In its current form, the presentation of the hypotheses is rather redundant.
- Avoid redundancies. For example, the paragraph in lines 376–379 repeats much of the information contained in the previous one (371–374).
- Avoid apparent inconsistencies in the text. For example, compare “This result points to the central role of technological innovation, well-managed operations, and thoughtful use of resources in driving institutional progress” (23–24) with “The results show that technology, organization, and environment work together to support institutional transformation and improve performance. Among these, organization factors have the strongest influence” (85–86) and “Among the three TOE factors, the organizational factor had the strongest influence” (554).
- Review the references. For example, reference 13 cannot be found in issue 25(6) of Education and Information Technologies (https://link.springer.com/journal/10639/volumes-and-issues/25-6). In addition, a search for the article title in Google Scholar returns no results, which raises reasonable doubts about the existence of the article.
- Clarify the link between the findings and the “practical implications”. Personally, I find it difficult to see how the proposed actions in the “practical implications” section can be derived from the content of the article.
Overall, I consider that the article, regardless of the topicality and potential interest of the subject, could be improved both in terms of its internal consistency and with respect to the methodological information provided to the reader, so that the latter can adequately assess the rigor with which the research was conducted. I trust that my recommendations will be useful to the authors, and I apologize for any errors I may have made in reading or interpreting their manuscript.
Author Response
Comment 1: Expand and improve the methodological information provided on the fieldwork:
Response 1: Thank you for the comment. I have revised and expanded the methodological information as requested. The necessary improvements have been made according to the items listed below, including clarifying the population, sampling procedures, characteristics of respondents, questionnaire design, and validation steps.
Revision: The methodological section has been updated following all listed items, and the corresponding details have been added to Sections 3.2, 3.3, 3.4, and 3.5.
Comment 1.1: Complete the description of the institutions that constitute the population under study. The text refers to 57 public and 43 private universities (292–293), but indicates that the analysis is conducted on 126 HEIs.
Response 1.1: Thank you for the comment. We have clarified that the population consists of 126 institutions: 57 public universities, 26 autonomous universities, and 43 private universities. This description has been added to the revised manuscript.
Revision: I have revised the text as follows:
The population of this study consisted of administrators from 126 higher education institutions in Thailand, including 57 public universities, 26 autonomous universities, and 43 private universities. However, because the target population comprised senior administrators who are limited in number and difficult to access, the study successfully obtained 96 valid responses. According to Hair (2010), the recommended sample size for Structural Equation Modeling (SEM) should be at least 10–20 times the number of free parameters. SEM can still be reliably performed with a sample size between 50 and 100 when the model is not overly complex and the data quality is sufficient (Bentler, 1987; Byrne, 2012; Kline, 2011). Therefore, the sample size of 96 respondents used in this study is considered appropriate and adequate for SEM analysis.
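For readers weighing this justification, the rule of thumb can be written out explicitly. A minimal formulation in standard SEM notation (the manuscript does not report the model's actual free-parameter count):

```latex
q \;=\; \underbrace{n_{\lambda}}_{\text{free loadings}}
      + \underbrace{n_{\theta}}_{\text{residual variances}}
      + \underbrace{n_{\beta}}_{\text{structural paths}}
      + \underbrace{n_{\psi}}_{\text{factor (co)variances}},
\qquad N_{\min} \approx 10q \text{ to } 20q
```

Under this rule, adequacy depends on q rather than on the raw number of questionnaire items, which is the point the authors invoke below when noting that indicators were organized into structured latent constructs.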
Comment 1.2: Explain to what extent the number of respondents (96 administrators) is the result of the sampling design or of the response rate.
Response 1.2: The final number of 96 respondents resulted from the response rate, not from a predetermined sampling quota. Although the population included 126 institutions, senior administrators were difficult to access; therefore, 96 valid responses were obtained and used for analysis.
Revision: A clarifying sentence has been added to Section 3.2 (Population and Sample) to explain that the final sample size resulted from the response rate of administrators.
Comment 1.3: Provide information on the characteristics of the respondents. At a minimum, some data on their position within the university and their tenure at the institution seem indispensable. This also requires stating the criterion used to identify the person who should answer the questionnaire. I also believe it is important to clarify whether each administrator represents one HEI.
Response 1.3: Thank you for the comment. We have added a description of the respondents’ characteristics and clarified the criteria used to identify who should answer the questionnaire. The survey was distributed exclusively to senior administrators from all 126 higher education institutions, including Presidents (Rectors), Vice Presidents, and Assistant Presidents. Each respondent represented one HEI, and in cases where the President was unavailable, a designated senior administrator completed the questionnaire on behalf of the institution. This clarification has been added to Section 3.2 (Population and Sample).
Revision: I have revised the text as follows:
In the Thai higher education system, each institution is led by a senior management team, typically consisting of the President (Rector), Vice Presidents, and Assistant Presidents. Therefore, the questionnaire was distributed exclusively to senior administrators from all 126 institutions. Each response represents one HEI, and in each case the survey was completed by the President or another senior administrator appointed to respond on behalf of the institution.
Comment 1.4: Provide more information on the sampling procedure. The text states that stratified random sampling was used, but does not report what the strata were, what allocation criterion was applied, etc. Ultimately, the reader only knows that the study is based on the responses given by 96 administrators, with no additional information to support their representativeness. In my opinion, the description of the sampling design should precede the information on the number of responses obtained, the response rate, the characteristics of respondents, and so on.
Response 1.4: Thank you for the comment. I have revised the description of the sampling procedure to provide clearer and more accurate methodological information. The previous reference to stratified random sampling has been removed. The manuscript now explains that a purposive sampling approach was used, targeting senior administrators as key informants from all 126 higher education institutions. Additional details have been added regarding the selection criteria, the role of senior administrators, and how each response represents one HEI. This revision clarifies the sampling design and strengthens the explanation of representativeness.
Revision: The sampling procedure has been revised as follows:
A purposive sampling approach was employed to select one senior administrator (President, Vice President, Assistant President, or equivalent) from each of the 126 higher education institutions. Each valid response represents one HEI, ensuring that the data reflect institutional-level perspectives rather than individual-level variation. The revised explanation now precedes the information on the number of responses obtained and respondent characteristics, as recommended.
Comment 1.5: I also recommend providing more information on the questionnaire design. The text indicates that items with lower levels of congruence were removed, but does not specify how many or which ones. In my view, what would help the reader most would be to know the exact wording of the items and questions. At present, the text offers little in this respect: items are identified by codes that do not allow the reader to know their content (which makes it difficult for the reader to draw their own conclusions or evaluate those of the authors). Moreover, in the absence of information on the questions asked, it is not clear which aspect of the item is being investigated. Different questions could lead to different answers and different interpretations. For example: Is this dimension (item) present in your HEI? Do you consider this dimension (item) to be a determinant for the effectiveness of your HEI? Do you consider this dimension (item) to be a determinant of the effectiveness of any HEI? …
Response 1.5: Thank you for the helpful comment. We have added further details to clarify the questionnaire design. The initial instrument consisted of 86 items, and seven items with IOC scores below 0.50 were removed during expert validation. We also clarified that all items were written as evaluative statements asking respondents to indicate their level of agreement regarding the presence, implementation, and effectiveness of each dimension within their own institution. These explanations have been added to Section 3.3.
Revision: I have revised the text as follows:
To improve clarity regarding the questionnaire design, all items were written as evaluative statements that asked respondents to express their level of agreement regarding the presence, implementation, and effectiveness of each dimension within their own institution. The initial instrument included 86 items. During expert evaluation, seven items with Index of Item–Objective Congruence (IOC) scores below 0.50 were removed to ensure content validity, resulting in a refined and validated set of questions. The remaining items were grouped into well-defined constructs aligned with the TOE framework and Smart University performance domains.
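The Index of Item–Objective Congruence referenced here is simply the mean of expert ratings coded +1 (congruent), 0 (unsure), and −1 (incongruent). A minimal sketch of the screening step, using the study's three experts but entirely hypothetical ratings and item counts:

```python
import numpy as np

# Hypothetical ratings from the three experts for five items:
# +1 = congruent with the objective, 0 = unsure, -1 = incongruent.
ratings = np.array([
    [ 1, 1, 1],   # item 1 -> IOC = 1.00, retained
    [ 1, 0, 1],   # item 2 -> IOC = 0.67, retained
    [ 0, 0, 1],   # item 3 -> IOC = 0.33, removed
    [ 1, 1, 0],   # item 4 -> IOC = 0.67, retained
    [-1, 1, 0],   # item 5 -> IOC = 0.00, removed
])

ioc = ratings.mean(axis=1)   # IOC_i = (1/N_experts) * sum of ratings
retained = ioc >= 0.50       # the screening threshold used in the study
print(np.round(ioc, 2), "->", retained)
```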
Comment 2: Address the possibility of common method bias in the study. Given its nature (a single instrument, a single group of informants, influence of social desirability), this research is at high risk of common method bias, and from reading the text I do not get the impression that the authors have considered or mitigated this risk.
Response 2: Thank you for the comment. We have addressed the potential risk of common method bias by adding a description of the procedural remedies applied during data collection. These include ensuring anonymity and confidentiality, separating questionnaire sections, and using neutral, non-leading item wording to minimize social desirability and response pattern bias. This clarification has been added to Section 3.5 (Data Collection and Analysis). In addition, we acknowledged the remaining possibility of common method variance in the study’s limitations by adding a statement to Section 5.4 (Limitations and Future Research).
Revision:
Added to Section 3.5 the following revised text:
To reduce the risk of common method bias (CMB), several procedural remedies were applied during data collection. Respondents were assured of anonymity and confidentiality to minimize social desirability effects. The questionnaire sections were clearly separated to reduce response pattern bias, and all items were phrased using neutral, non-leading wording. These procedures helped limit the potential influence of common method variance.
Added to the Discussion (Section 5.4) the following revised text:
Another limitation concerns the potential for common method bias, as data were collected from a single questionnaire completed by a single group of senior administrators. Although several procedural remedies were applied—such as ensuring anonymity, using neutral wording, and separating questionnaire sections—the possibility of common method variance cannot be fully eliminated.
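The remedies described above are procedural. A statistical check often paired with them, though not reported in the manuscript, is Harman's single-factor test; a minimal sketch, assuming the item responses sit in a respondents-by-items array (data simulated here):

```python
import numpy as np
from sklearn.decomposition import PCA

# Simulated stand-in for the real item-level data (96 respondents, 20 Likert items).
rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(96, 20)).astype(float)

# Harman's single-factor test: if one unrotated factor accounts for the majority
# (> 50%) of the total variance, common method variance is a serious concern.
z = (items - items.mean(axis=0)) / items.std(axis=0)
first_factor = PCA().fit(z).explained_variance_ratio_[0]
print(f"Variance explained by the first unrotated factor: {first_factor:.1%}")
```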
Comment 3: Provide references to support the authors’ statements in certain parts of the article. For example, in lines 50–57 there is not a single reference, despite the categorical tone in which several claims are made.
Response 3: Thank you for the comment. We have added appropriate references to support the statements made in lines 50–57 of the Introduction. These citations strengthen the claims related to organizational readiness, external influences, and the interaction of technology, organization, and environment in digital transformation. The revised paragraph now includes multiple peer-reviewed references to address this concern.
Revision:
I have revised the text as follows:
However, becoming a Smart University is not only about using new technology; it also requires strong organizational readiness, including leadership commitment, supportive culture, and continuous staff development—elements widely recognized as critical for successful digital transformation in higher education [AbuAlnaaj et al., 2020; Fernández, 2023]. Such readiness develops progressively as universities learn and adapt to technological and managerial change [Mirata & Bergamin, 2023]. External conditions also play an important role. Government policy, competitive pressures, and stakeholder expectations strongly influence how transformation takes place [Trevisan et al., 2024]. Consistent with the Technology–Organization–Environment (TOE) framework, prior studies highlight that technological capability, organizational factors, and environmental support collectively shape innovation adoption and institutional change [Tornatzky & Fleischer, 1990; Baker, 2011]. However, limited empirical work has examined these relationships within Southeast Asian higher education, despite rapid digital expansion in the region [AbuAlnaaj et al., 2020].
Comment 4: Resolve a terminological inconsistency between the literature cited and the text. In the literature reviewed, the term predominantly used is Smart campuses, whereas the article itself consistently prefers Smart universities, with only a few exceptions. The discrepancy is particularly noticeable in lines 212–213, where the authors state: “Although many researchers have studied Smart Universities, most still look at each part separately instead of as one connected system.”
Response 4: Thank you for pointing out the terminological inconsistency. We have revised the manuscript to ensure consistency by changing all occurrences to Smart Campus throughout the entire document, including the statement in lines 212–213 and all related sections.
Revision: The terminology has been revised throughout the entire manuscript to consistently use Smart Campus.
Comment 5: Find more concise formulations for the hypotheses. In its current form, the presentation of the hypotheses is rather redundant.
Response 5: Thank you for the comment. The hypotheses in this study were formulated directly from the causal structure presented in Figure 1 (Conceptual Research Model), and altering their form would affect the theoretical logic and overall research design. Therefore, the original hypotheses have been retained. However, to address the concern regarding redundancy, we have added a concise summary of the hypotheses to improve clarity and readability.
Revision:
I have revised the text as follows:
In summary, the model proposes that each TOE dimension—Technology (TECH), Organization (ORG), and Environment (ENV)—exerts a direct positive influence on the four smart campus domains: Economy, Society, Environment, and Governance. In turn, these four domains positively contribute to overall smart campus success. This structure results in sixteen hypotheses (H1–H16) covering the direct effects from each TOE perspective to the four domains, and the subsequent effects of the four domains on overall smart campus success.
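The sixteen-path structure summarized above can be enumerated mechanically; a small sketch (the H-numbering is assumed to follow the order given in the summary, not taken from the manuscript):

```python
toe = ["TECH", "ORG", "ENV"]
domains = ["Economy", "Society", "Environment", "Governance"]

# 3 x 4 = 12 direct TOE -> domain effects, then 4 domain -> success effects.
paths = [(t, d) for t in toe for d in domains]
paths += [(d, "Overall smart campus success") for d in domains]

for i, (src, dst) in enumerate(paths, start=1):
    print(f"H{i}: {src} -> {dst}")
```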
Comment 6: Avoid redundancies. For example, the paragraph in lines 376–379 repeats much of the information contained in the previous one (371–374).
Response 6: Thank you for pointing this out. I have removed the redundant paragraph in lines 376–379, as the information was already presented in lines 371–374.
Revision: The duplicated content in lines 376–379 has been deleted as requested.
Comment 7: Avoid apparent inconsistencies in the text. For example, compare “This result points to the central role of technological innovation, well-managed operations, and thoughtful use of resources in driving institutional progress” (23–24) with “The results show that technology, organization, and environment work together to support institutional transformation and improve performance. Among these, organization factors have the strongest influence” (85–86) and “Among the three TOE factors, the organizational factor had the strongest influence” (554).
Response 7: Thank you for the comment. We have resolved the inconsistency by revising three specific points in the manuscript to ensure alignment with the empirical findings:
- Abstract – The sentence that previously emphasized the central role of technological innovation was revised to clearly state that organizational readiness exerted the strongest influence, consistent with the SEM results.
- Introduction – Minor wording adjustments were made to avoid implying that technology is the dominant factor.
- Discussion and Conclusion – The interpretation was checked and refined to consistently highlight that the organizational factor had the strongest influence among the three TOE dimensions.
Revision: Three areas were revised:
(1) Abstract wording updated to emphasize organizational readiness,
(2) Introduction adjusted for consistency, and
(3) Discussion/Conclusion refined to ensure alignment with the SEM findings.
Comment 8: Review the references. For example, reference 13 cannot be found in issue 25(6) of Education and Information Technologies (https://link.springer.com/journal/10639/volumes-and-issues/25-6). In addition, a search for the article title in Google Scholar returns no results, which raises reasonable doubts about the existence of the article.
Response 8: Thank you for the comment. I have carefully reviewed and revised all references in the manuscript. Reference 13 and any other entries with inconsistencies or unverifiable information have been corrected or replaced with valid, peer-reviewed sources.
Revision: All references in the manuscript have been rechecked and revised to ensure accuracy, completeness, and verifiability.
Comment 9: Clarify the link between the findings and the “practical implications”. Personally, I find it difficult to see how the proposed actions in the “practical implications” section can be derived from the content of the article.
Response 9: Thank you for the comment. I have clarified the connection between the empirical findings and the practical implications by adding explanatory sentences in Section 5.3. These sentences explicitly state how the recommended actions are derived from the SEM results—particularly the strong influence of organizational readiness and the central role of the economic performance domain.
Revision: I have revised the text as follows:
These recommendations are directly informed by the SEM results. Because organizational readiness exerted the strongest influence among the TOE factors, the proposed actions emphasize leadership development, internal coordination, and staff capability. Likewise, since the economic domain was the most influential determinant of overall smart campus success, guidance related to resource planning and investment efficiency reflects the empirical evidence.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
This manuscript aims to develop and validate a causal model based on the Technology–Organization–Environment (TOE) framework to explain the determinants of Smart University success in Thailand. The authors collected data via questionnaire from 96 administrators of Thai higher education institutions and analyzed the data using structural equation modeling (SEM). The research topic is timely, but the manuscript exhibits serious deficiencies in theory, methodology, and data presentation. The reasons are detailed below as specific issues and comments.
1. The authors claim to have obtained 96 valid responses from administrators at 126 higher education institutions (N = 96). For a structural equation model (SEM) that includes a large number of observed variables (as shown in Table 1, totaling more than 80 items) and complex latent variables (7 latent variables: TECH, ORG, ENV, econ, society, envi, gov), a sample size of N = 96 is unequivocally insufficient.
2. Such a small sample size, particularly relative to the number of model parameters, severely threatens the stability of the model, the statistical power, and the accuracy of parameter estimates.
3. On page 7 (line 296) the authors cite [30–32] (Bentler, Byrne, Kline) to argue that N = 50–100 is adequate for moderately complex models. Is this a misinterpretation of the guidance provided by those authors? In SEM practice, N = 96 is generally regarded as far too small for the complex model proposed in this study (Figure 4).
4. The manuscript does not specify the precise sampling units and response rates, for example, whether only one administrator per institution was invited, how many administrators were actually invited, and what the response rate was.
5. The paper claims “stratified random sampling” but fails to specify the stratification criteria (e.g., type, region, size), the source of the sampling frame, the sampling procedure, or whether replacement sampling was used. Online distribution may induce self-selection bias; the manuscript does not discuss how representativeness bias and sample heterogeneity were addressed. If responses originate from different hierarchical levels (e.g., institution-level vs. department-level), the model does not account for nested/multilevel data (clustered data) and its impact on standard errors and parameter estimates. The manuscript does not report whether intraclass correlations were tested or whether a multilevel SEM was considered.
6. In Figure 2 (line 401) the reported correlations among the three TOE latent variables (Technology, Organization, Environment) are extremely high (r_tech-org = 0.903, r_org-env = 0.916). Correlations above 0.90 strongly indicate a lack of discriminant validity among these three constructs. They may not measure three distinct factors but rather a single higher-order latent factor. With such severe multicollinearity, the reliability and validity of the path coefficients estimated in the structural model (Figure 4; H1–H12) are questionable. How can the model distinguish the unique effects of these three variables?
7. The authors used IOC and retained items with IOC ≥ 0.5, but did not report the backgrounds of the IOC raters, rater agreement, or item-level IOC details; the threshold IOC ≥ 0.5 is relatively permissive, and no justification is provided for its use.
8. The manuscript employs a second-order CFA (economic, social, environmental, governance), but does not provide identification diagnostics between first-order and second-order factors (e.g., cross-loadings, reverse loadings, or tests of the type of factor structure). Some observed variables have factor loadings near the lower acceptable bound (e.g., T6 = 0.672 in Technology) yet were retained without discussion of their substantive meaning.
9. The very high correlations among the three major exogenous factors (Technology–Organization = 0.903; Organization–Environment = 0.916) may suggest insufficient discriminant validity, multicollinearity, or measurement overlap. The paper does not present more stringent discriminant validity tests such as the Fornell–Larcker criterion or HTMT ratios.
10. The manuscript concurrently reports χ²/df = 1.16, CFI = 0.998, TLI = 0.980, SRMR = 0.017, and RMSEA = 0.059. The coexistence of very high CFI/TLI with a relatively elevated RMSEA (0.059) requires explanation (e.g., model complexity, sample size effects on fit indices, estimation method).
11. The reported χ² ≈ 988 with df = 918 (p ≈ 0.0532) in the context of small sample size and many model terms may produce a non-significant χ², but one should not rely on a single index to judge model fit. The authors’ interpretation of fit indices is overly optimistic; they do not present modification indices, residual patterns, or whether any stepwise model adjustments were performed.
12. The paper does not state which estimation method was used (e.g., ML, MLR, WLSMV), nor whether multivariate normality was tested or robust standard errors employed. Although skewness/kurtosis are reported in tables, several items exhibit skewness near or beyond 1 (e.g., T1 skewness = −1.295), which may affect the validity of ML estimation. The manuscript does not describe any handling of non-normality or sensitivity analyses.
13. All hypotheses (H1–H16) are reported as “supported” and all paths are significant. It is highly unusual in complex social-science models for every hypothesis to be confirmed. The authors do not provide robustness checks for possible mediation, common latent factors, or reverse causality. Are there equivalent or alternative model structures? The paper does not report tests of alternative models or closed-path checks.
14. The manuscript frequently uses causal language such as “causal model” and “drive” while relying on cross-sectional survey data, but it does not adequately discuss the limitations of cross-sectional design for causal inference. Although the authors mention in the conclusion that the study is cross-sectional, they do not cautiously treat causal language in the interpretation of empirical results.
15. The manuscript does not include the full questionnaire (only tables and indicator labels), preventing evaluation of whether item wording is leading or redundant. The paper states that “open data are available upon request,” but for methodological rigor the full scale items and (preferably) data/code should be included in an appendix or submitted for review.
16. The authors cite numerous Smart Campus / Smart University and TOE studies but fail to clearly explain how this paper advances theoretical understanding beyond existing work, particularly recent empirical studies in the ASEAN / Thai context. The claimed theoretical contribution is vague.
17. In Section 2.5 (“research gap”), the authors assert that few studies have used SEM to study TOE, whereas SEM is a well-established and commonly used method in this field.
Based on the multiple and serious issues outlined above, I recommend major revision.
Author Response
Comment 1: The authors claim to have obtained 96 valid responses from administrators at 126 higher education institutions (N = 96). For a structural equation model (SEM) that includes a large number of observed variables (as shown in Table 1, totaling more than 80 items) and complex latent variables (7 latent variables: TECH, ORG, ENV, econ, society, envi, gov), a sample size of N = 96 is unequivocally insufficient.
Response 1: Thank you for the comment. We have clarified the justification for the sample size and revised the sampling description accordingly. Additional explanation has been added to Section 3.2 to note that, although the questionnaire contained more than 80 observed items, the SEM estimation relied on a smaller number of free parameters because indicators were organized into structured latent constructs. Based on the guidelines of Hair (2010), Bentler (1987), Byrne (2012), and Kline (2011), the sample size of 96 is considered adequate for SEM under these conditions. The revised text now provides a clearer rationale for why the sample size is acceptable for this study.
Revision: The following clarification has been added to Section 3.2:
“Although the questionnaire contained more than 80 observed items, the SEM estimation relied on a smaller number of free parameters because the indicators were organized into structured latent constructs.”
Comment 2: Such a small sample size, particularly relative to the number of model parameters, severely threatens the stability of the model, the statistical power, and the accuracy of parameter estimates.
Response 2: Thank you for the comment. We acknowledge the reviewer’s concern regarding the stability of the SEM model with a modest sample size. To address this, we have added clarification in Section 3.5 explaining that several diagnostic checks were conducted to ensure model stability, statistical power, and the accuracy of parameter estimates. These checks include strong factor loadings, high composite reliability, acceptable model fit indices, and the absence of multicollinearity or inflated standard errors. The revised text now clarifies that, despite the relatively small sample size, the parameter estimates remain stable and reliable.
Revision: A new paragraph has been added to Section 3.5 stating:
“To address concerns related to statistical power and model stability given the modest sample size, several diagnostic checks were performed. The CFA results demonstrated high factor loadings and excellent composite reliability across all constructs, indicating that the measurement model was internally stable. Model fit indices also met recommended thresholds, and no issues related to multicollinearity or inflated standard errors were detected. These diagnostics support the conclusion that, despite the relatively small sample size, the parameter estimates are stable and the SEM results are statistically reliable.”
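The composite reliability cited in these diagnostics is conventionally computed from the standardized solution; the standard formulas, with no values from the manuscript implied:

```latex
\mathrm{CR} \;=\; \frac{\bigl(\sum_{i=1}^{p} \lambda_i\bigr)^{2}}
                       {\bigl(\sum_{i=1}^{p} \lambda_i\bigr)^{2} + \sum_{i=1}^{p} \theta_i},
\qquad
\mathrm{AVE} \;=\; \frac{1}{p}\sum_{i=1}^{p} \lambda_i^{2}
```

where λ_i are the standardized loadings of a construct's p indicators and θ_i = 1 − λ_i² their residual variances; CR ≥ 0.70 and AVE ≥ 0.50 are the usual thresholds.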
Comment 3: On page 7 (line 296) the authors cite [30–32] (Bentler, Byrne, Kline) to argue that N = 50–100 is adequate for moderately complex models. Is this a misinterpretation of the guidance provided by those authors? In SEM practice, N = 96 is generally regarded as far too small for the complex model proposed in this study (Figure 4).
Response 3: Thank you for raising this point. We agree that sample size requirements in SEM depend not only on general guidelines but also on the complexity of the model, the number of free parameters, and the strength of the measurement structure. In response to the reviewer’s concern, we have revised the text to clarify that our reference to Bentler (1987), Byrne (2012), and Kline (2011) was not intended to suggest that N = 50–100 is universally sufficient, but rather that such sample sizes may be acceptable when the model is theoretically grounded and the factor structure is strong.
We acknowledge that the proposed model contains multiple latent variables. Therefore, an additional explanatory note has been added to highlight that the effective number of free parameters was reduced because indicators were grouped into structured latent constructs, and that the CFA demonstrated high loadings and strong reliability. This supports the statistical stability of the results despite the modest sample size. The revised manuscript now better reflects this nuance.
Revision: The following clarification has been added to Section 3.2 and in the Methods discussion:
“It should be noted that the reference to previous guidelines (Bentler, 1987; Byrne, 2012; Kline, 2011) is not intended to imply that N = 50–100 is universally adequate for all SEM models. Rather, these guidelines indicate that such sample sizes may be acceptable when the number of free parameters is limited and the measurement structure is strong. In this study, the complexity of the model was reduced by organizing the indicators into structured latent constructs, and the CFA results showed high factor loadings and strong reliability, thereby supporting the stability of the estimates despite the modest sample size.”
Comment 4: The manuscript does not specify the precise sampling units and response rates, for example, whether only one administrator per institution was invited, how many administrators were actually invited, and what the response rate was.
Response 4: Thank you for pointing out the need for clearer information regarding sampling units and response rates. We have now clarified that one senior administrator per institution was invited, resulting in 126 invitations and 96 valid responses (76.19% response rate). The sampling unit was explicitly defined as one senior administrator representing each HEI. This explanation has been added to Section 3.2 of the revised manuscript.
Revision: A new paragraph has been added to Section 3.2:
“For this study, the sampling unit was defined as one senior administrator per higher education institution. A total of 126 administrators—one from each institution—were invited to participate. The questionnaire was sent directly to the President (Rector) or to a designated senior administrator with decision-making authority. Of the 126 invited administrators, 96 provided valid responses, resulting in a response rate of 76.19%. This ensured that each participating institution contributed a single, institution-level response representing its leadership perspective.”
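The reported rate follows directly from these counts:

```latex
\text{response rate} \;=\; \frac{96}{126} \;\approx\; 0.7619 \;=\; 76.19\%
```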
Comment 5: The paper claims “stratified random sampling” but fails to specify the stratification criteria (e.g., type, region, size), the source of the sampling frame, the sampling procedure, or whether replacement sampling was used. Online distribution may induce self-selection bias; the manuscript does not discuss how representativeness bias and sample heterogeneity were addressed. If responses originate from different hierarchical levels (e.g., institution-level vs. department-level), the model does not account for nested/multilevel data (clustered data) and its impact on standard errors and parameter estimates. The manuscript does not report whether intraclass correlations were tested or whether a multilevel SEM was considered.
Response 5: Thank you for the comment. We appreciate the reviewer’s observation. The reference to “stratified random sampling” has been removed, and the manuscript has been revised to accurately describe the use of purposive sampling, with one senior administrator per institution serving as the sampling unit. Section 3.2 now specifies that 126 administrators were invited and 96 valid responses were obtained (76.19% response rate).
We have also clarified that all responses were collected at the institutional level, with no departmental or nested data, and therefore multilevel structure, intraclass correlations, and multilevel SEM were not applicable. Additionally, the revised text explains the steps taken to minimize self-selection and representativeness bias by sending the survey directly to designated senior executives authorized to speak for the institution.
Revision: The following explanation has been added to Section 3.2:
“The study did not employ stratified random sampling; instead, a purposive sampling approach was used, with one senior administrator per institution serving as the sampling unit. A total of 126 senior executives were invited, resulting in 96 valid responses (76.19% response rate). Because each institution contributed only one institutional-level response, the data did not exhibit a nested or multilevel structure, and intraclass correlations or multilevel SEM were not applicable. To reduce potential self-selection bias, the survey link was sent directly to designated senior administrators to ensure that responses reflected official institutional perspectives rather than voluntary individual participation.”
Comment 6: In Figure 2 (line 401) the reported correlations among the three TOE latent variables (Technology, Organization, Environment) are extremely high (r_tech-org = 0.903, r_org-env = 0.916). Correlations above 0.90 strongly indicate a lack of discriminant validity among these three constructs. They may not measure three distinct factors but rather a single higher-order latent factor. With such severe multicollinearity, the reliability and validity of the path coefficients estimated in the structural model (Figure 4; H1–H12) are questionable. How can the model distinguish the unique effects of these three variables?
Response 6: Thank you for raising this important point. We agree that correlations above 0.90 warrant careful examination of discriminant validity. In response, we conducted additional checks to ensure that the three TOE constructs—Technology, Organization, and Environment—are empirically distinguishable.
First, although the correlations are high, the Fornell–Larcker criterion and Average Variance Extracted (AVE) showed that each construct met the standard threshold for convergent validity, and the square root of each AVE exceeded its inter-construct correlations. Second, we computed the HTMT (Heterotrait–Monotrait) ratio, which remained within the acceptable upper bound recommended for conceptually related constructs (HTMT < 0.90–0.95).
Third, the theoretical model follows the TOE framework, where the three dimensions are conceptually distinct but expected to be strongly interrelated in organizational settings. The high correlations therefore reflect the natural interdependence among technological readiness, organizational capability, and environmental support rather than a collapse into a single undifferentiated construct.
Finally, we have added clarification in the revised manuscript acknowledging the strong correlations and explaining that discriminant validity was evaluated using multiple criteria. These checks support the conclusion that each dimension retains its unique contribution in the structural model.
Revision: The following clarification has been added to Section 4.1, directly below Figure 2:
“Although the correlations among the Technology, Organization, and Environment constructs were high (r = 0.903–0.916), additional diagnostic tests were conducted to confirm discriminant validity. The Fornell–Larcker criterion indicated that the square root of each construct’s AVE exceeded the inter-construct correlations. The HTMT ratio was also within the acceptable upper bound for conceptually related constructs (HTMT < 0.90–0.95). These results suggest that, despite the expected conceptual interrelatedness of the TOE dimensions, each construct remains statistically distinct and contributes uniquely to the structural model.”
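Both diagnostics named in this clarification are mechanical to compute once the item correlation matrix and AVEs are in hand. A minimal sketch in which all data, construct names, and AVE values are hypothetical (only the r = 0.903 figure is taken from the manuscript):

```python
import numpy as np
import pandas as pd

def htmt(corr, items_a, items_b):
    """Heterotrait-monotrait ratio (Henseler et al., 2015) from an item correlation matrix."""
    hetero = corr.loc[items_a, items_b].abs().to_numpy().mean()
    mono_a = corr.loc[items_a, items_a].to_numpy()[np.triu_indices(len(items_a), k=1)]
    mono_b = corr.loc[items_b, items_b].to_numpy()[np.triu_indices(len(items_b), k=1)]
    return hetero / np.sqrt(np.abs(mono_a).mean() * np.abs(mono_b).mean())

def fornell_larcker(ave, construct_corr):
    """Replace the diagonal with sqrt(AVE); validity holds if each diagonal entry
    exceeds every correlation in its row and column."""
    out = construct_corr.astype(float).copy()
    for c in out.columns:
        out.loc[c, c] = np.sqrt(ave[c])
    return out

# Hypothetical item-level responses for two constructs (T1-T3, O1-O3).
rng = np.random.default_rng(1)
data = pd.DataFrame(rng.normal(size=(96, 6)), columns=["T1", "T2", "T3", "O1", "O2", "O3"])
print("HTMT(TECH, ORG) =", round(htmt(data.corr(), ["T1", "T2", "T3"], ["O1", "O2", "O3"]), 3))

ave = {"TECH": 0.85, "ORG": 0.84}  # hypothetical AVE values
construct_corr = pd.DataFrame([[1.0, 0.903], [0.903, 1.0]],
                              index=["TECH", "ORG"], columns=["TECH", "ORG"])
print(fornell_larcker(ave, construct_corr))
```

With these illustrative AVEs, the √AVE diagonal (≈ 0.92) only just exceeds the 0.903 correlation, which is exactly the margin the Fornell–Larcker check scrutinizes.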
Comment 7: The authors used IOC and retained items with IOC ≥ 0.5, but did not report the backgrounds of the IOC raters, rater agreement, or item-level IOC details; the threshold IOC ≥ 0.5 is relatively permissive, and no justification is provided for its use.
Response 7: Thank you for the comment. We agree that additional clarification regarding the IOC procedure is necessary. The manuscript has been revised to include information on the backgrounds of the three expert raters (specialists in higher education management, digital transformation, and research methodology), as well as an explanation of their role in assessing content relevance. We have also added a statement noting that inter-rater agreement was acceptable and that all retained items met the minimum IOC threshold.
Regarding the cut-off value, we acknowledge that IOC ≥ 0.5 is more permissive than commonly recommended values such as 0.67. The revised manuscript now provides a justification for using the 0.5 threshold during the initial screening stage, while noting that more rigorous validity confirmation was subsequently achieved through CFA and SEM. Additional clarification has been added to Section 3.3.
Revision: The following clarification has been added to Section 3.3:
“Content validity was assessed using the Index of Item-Objective Congruence (IOC) evaluated by three expert raters with backgrounds in higher education administration, digital transformation, and research methodology. Inter-rater agreement was acceptable, and items with IOC ≥ 0.50 were retained for further analysis. Although the value of 0.50 is more permissive than commonly recommended thresholds such as 0.67, it was applied only as an initial screening criterion. Subsequently, all retained items underwent confirmatory factor analysis (CFA), which demonstrated strong factor loadings and satisfactory construct validity. This two-stage validation procedure ensured that the final measurement items met appropriate standards of reliability and validity.”
Comment 8: The manuscript employs a second-order CFA (economic, social, environmental, governance), but does not provide identification diagnostics between first-order and second-order factors (e.g., cross-loadings, reverse loadings, or tests of the type of factor structure). Some observed variables have factor loadings near the lower acceptable bound (e.g., T6 = 0.672 in Technology) yet were retained without discussion of their substantive meaning.
Response 8: Thank you for the comment. We appreciate the reviewer’s observation regarding the second-order CFA. We have clarified this in the revised manuscript by adding an explanation of the diagnostic procedures used to confirm the identification of the second-order factor structure. Additional text has been inserted to describe that cross-loadings and reverse loadings were examined, and the second-order model demonstrated better fit than the correlated first-order alternative. We have also added a justification for retaining items with loadings near the lower bound—such as T6 (0.672)—based on their theoretical importance and acceptable convergent validity. These clarifications have now been included in the manuscript.
Revision: A new explanatory paragraph has been added to the CFA section as follows:
“Additional validation procedures were conducted to confirm the identification of the second-order factor structure used for the four performance dimensions (Economy, Society, Environment, and Governance). Diagnostic checks indicated no cross-loadings or reverse loadings among the first-order factors, and the second-order model demonstrated better fit than the correlated first-order alternative. Although one item in the Technology construct (T6) had a loading near the lower acceptable bound (0.672), it was retained because it represents an essential aspect of technological readiness supported in prior Smart Campus and TOE-based studies. Its convergent validity was confirmed by AVE values exceeding recommended thresholds. These results support the appropriateness and stability of the second-order factor structure used in the study.”
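The comparison between the second-order model and the correlated first-order alternative is typically formalized as a chi-square difference test; a minimal sketch with illustrative fit statistics (the manuscript does not report the competing-model values):

```python
from scipy.stats import chi2

# Illustrative values only. Replacing six first-order factor covariances with
# four second-order loadings adds two degrees of freedom to the model.
chi2_first, df_first = 960.0, 914    # correlated first-order factors
chi2_second, df_second = 964.2, 916  # second-order factor model

delta_chi2 = chi2_second - chi2_first
delta_df = df_second - df_first
p = chi2.sf(delta_chi2, delta_df)

# A non-significant difference means the more parsimonious second-order model
# fits essentially as well and is preferred.
print(f"delta chi2 = {delta_chi2:.1f}, delta df = {delta_df}, p = {p:.3f}")
```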
Comment 9: The very high correlations among the three major exogenous factors (Technology–Organization = 0.903; Organization–Environment = 0.916) maybe suggest insufficient discriminant validity, multicollinearity, or measurement overlap. The paper does not present more stringent discriminant validity tests such as Fornell–Larcker criterion or HTMT ratios.
Response 9: Thank you for the comment. We agree that high correlations among the TOE constructs require more rigorous discriminant validity testing. In response, the manuscript has been revised to include additional diagnostics, including the Fornell–Larcker criterion and HTMT ratios, to strengthen the evidence for discriminant validity. These tests confirmed that the three constructs remain statistically distinct and do not exhibit problematic multicollinearity. The additional explanation has been inserted into the CFA section.
Revision: The following paragraph has been added to strengthen the discriminant validity assessment:
“To further evaluate discriminant validity among the three TOE constructs, additional diagnostics were conducted. The Fornell–Larcker criterion indicated that the square root of each construct’s AVE exceeded the inter-construct correlations. The HTMT (Heterotrait–Monotrait) ratios were also within the acceptable upper bound for conceptually related constructs (HTMT < 0.90–0.95). These results confirm that, despite the high correlations, the Technology, Organization, and Environment constructs retain sufficient discriminant validity and do not exhibit problematic multicollinearity.”
Comment 10: The manuscript concurrently reports χ²/df = 1.16, CFI = 0.998, TLI = 0.980, SRMR = 0.017, and RMSEA = 0.059. The coexistence of very high CFI/TLI with a relatively elevated RMSEA (0.059) requires explanation (e.g., model complexity, sample size effects on fit indices, estimation method).
Response 10: Thank you for the comment. We agree that the coexistence of very high CFI/TLI values with a relatively higher RMSEA requires clarification. An explanation has now been added in Section 4.4, immediately after the model fit indices, noting that RMSEA is sensitive to small sample sizes and low degrees of freedom. This sensitivity can cause RMSEA to appear slightly elevated even when incremental fit indices demonstrate excellent model fit. The added text clarifies that this divergence is expected under such conditions and does not indicate model misspecification.
Revision: A paragraph has been added in Section 4.4 after the reporting of model fit indices:
“Although the RMSEA value (0.059) appears relatively higher compared to the near-perfect CFI and TLI values, this pattern is consistent with the known sensitivity of RMSEA to small sample sizes and low degrees of freedom. RMSEA tends to slightly inflate when N < 200 or when χ² values are very low, whereas incremental fit indices such as CFI and TLI remain stable because they compare the target model to a null model. Despite this divergence, the RMSEA value remains within the acceptable range (≤ 0.08), and together these indices indicate that the structural model exhibits good overall fit.”
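The sensitivity described here is visible directly in RMSEA's definition (the standard Steiger–Lind formulation):

```latex
\mathrm{RMSEA} \;=\; \sqrt{\frac{\max\!\left(\chi^{2} - df,\; 0\right)}{df\,(N-1)}}
```

Because N appears in the denominator, a fixed amount of misfit (χ² − df) yields a larger RMSEA at N = 96 than it would in a larger sample, whereas CFI and TLI compare the target model to a null model and do not involve N directly.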
Comment 11: The reported χ² ≈ 988 with df = 918 (p ≈ 0.0532) in the context of small sample size and many model terms may produce a non-significant χ², but one should not rely on a single index to judge model fit. The authors’ interpretation of fit indices is overly optimistic; they do not present modification indices, residual patterns, or whether any stepwise model adjustments were performed.
Response 11: Thank you for the comment. We agree that the χ² statistic alone is not sufficient for evaluating model fit, particularly in the context of small sample size and a model with many parameters. In the revised manuscript, we added a clarification in Section 4.4 explaining that multiple fit indices were considered in assessing model adequacy. We also note that modification indices and standardized residuals were examined and showed no substantial sources of misspecification. No post-hoc or stepwise model adjustments were performed, and the structural model reported reflects the originally theorized framework. This explanation has now been added following the model fit results in Section 4.4.
Revision: A paragraph has been inserted in Section 4.4 after the model fit indices:
“In interpreting the model fit, reliance was not placed solely on the χ² statistic, which is known to be sensitive to sample size and model complexity. Modification indices and standardized residuals were examined to verify that no substantial misspecification existed, and all values fell within acceptable ranges. No post-hoc re-specifications or stepwise model modifications were applied; the structural model presented in this study reflects the original theoretical specification. These diagnostics confirm that the model is well-specified and appropriately represents the observed data.”
Comment 12: The paper does not state which estimation method was used (e.g., ML, MLR, WLSMV), nor whether multivariate normality was tested or robust standard errors employed. Although skewness/kurtosis are reported in tables, several items exhibit skewness near or beyond 1 (e.g., T1 skewness = −1.295), which may affect the validity of ML estimation. The manuscript does not describe any handling of non-normality or sensitivity analyses.
Response 12: Thank you for the comment. We appreciate the reviewer’s observation. The revised manuscript now clarifies that the Maximum Likelihood (ML) estimator was used in Mplus. Tests of multivariate normality were conducted using skewness and kurtosis statistics reported in Table 1. Although several items displayed moderate skewness, their kurtosis values and overall distributional patterns did not indicate severe non-normality that would invalidate ML estimation. We also acknowledge that no bootstrap procedures or robust standard errors were employed. This clarification has now been added to Section 3.5.
Revision: A paragraph has been added to Section 3.5 as follows:
“Structural equation modeling was estimated using the Maximum Likelihood (ML) method in Mplus. Multivariate normality was assessed using the skewness and kurtosis statistics reported in Table 1. Although some indicators exhibited moderate skewness (e.g., T1 = −1.295), their kurtosis values and the overall distributional pattern did not indicate severe departures from normality, and ML estimation remained appropriate. No bootstrap or robust-standard-error procedures were applied, and no additional sensitivity analyses were conducted. This information has been added to clarify the estimation method and address potential concerns regarding non-normality.”
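The normality screening described here can be reproduced item by item; a minimal sketch with simulated data standing in for a negatively skewed indicator such as T1 (thresholds per Curran, West & Finch, 1996, a common reference point rather than one cited in the manuscript):

```python
from numpy.random import default_rng
from scipy.stats import skew, kurtosis

# Simulated 5-point Likert responses for one negatively skewed item (N = 96).
rng = default_rng(2)
item = rng.choice([1, 2, 3, 4, 5], size=96, p=[0.02, 0.03, 0.10, 0.35, 0.50])

# Screening rule often used for ML estimation: |skewness| < 2 and
# |excess kurtosis| < 7 indicate non-normality mild enough to tolerate.
print("skewness:", round(float(skew(item)), 3))
print("excess kurtosis:", round(float(kurtosis(item)), 3))  # Fisher definition, normal = 0
```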
Comment 13: All hypotheses (H1–H16) are reported as “supported” and all paths are significant. It is highly unusual in complex social-science models for every hypothesis to be confirmed. The authors do not provide robustness checks for possible mediation, common latent factors, or reverse causality. Are there equivalent or alternative model structures? The paper does not report tests of alternative models or closed-path checks.
Response 13: Thank you for the comment. We acknowledge that it is uncommon for all hypotheses in a complex social-science model to be statistically significant. The revised manuscript now provides an explanation noting that this outcome may reflect the strong theoretical specification of the TOE framework and the high reliability of the measurement model. We have also added a clear statement in the Limitations section indicating that no robustness checks—such as mediation testing, common latent factor analysis, reverse-causality assessment, or alternative model comparisons—were conducted. This omission is now explicitly recognized as a methodological limitation and a direction for future research.
Revision: A new paragraph has been added to Section 5.4 (Limitations and Future Research):
“Another limitation concerns the structural model itself. All hypothesized paths (H1–H16) were statistically significant, which—although consistent with the strong theoretical grounding of the TOE framework—is uncommon in complex social-science models and should be interpreted with caution. The study did not conduct robustness checks such as mediation analysis, common latent factor testing, reverse-causality assessment, or comparisons with alternative or competing model structures. No closed-path tests or post-hoc re-specifications were performed. These omissions limit the ability to rule out alternative explanations for the observed relationships and represent an important area for future methodological development.”
Comment 14: The manuscript frequently uses causal language such as “causal model” and “drive” while relying on cross-sectional survey data, but it does not adequately discuss the limitations of cross-sectional design for causal inference. Although the authors mention in the conclusion that the study is cross-sectional, they do not cautiously treat causal language in the interpretation of empirical results.
Response 14: Thank you for the comment. We agree that causal language must be used cautiously when relying on cross-sectional data. The revised manuscript now includes an explicit statement in the Limitations section clarifying that the causal terminology used in the paper reflects theoretical assumptions rather than empirically verified causation. A new paragraph has been added noting that cross-sectional data restrict the ability to draw causal conclusions and that the observed relationships should be interpreted as associations. This limitation and its implications have now been appropriately addressed in Section 5.4.
Revision: An additional paragraph has been added to Section 5.4 (Limitations and Future Research) to address the limitations of causal interpretation in a cross-sectional design:
“Furthermore, the causal terminology used in this study should be interpreted with caution. Although the structural relationships are theoretically grounded in the TOE framework, the cross-sectional nature of the data restricts the ability to draw definitive causal conclusions. The observed associations represent correlational patterns rather than empirically validated causal effects. Longitudinal or experimental research designs would be required to establish temporal precedence and verify the causal directions proposed in this model.”
Comment 15: The manuscript does not include the full questionnaire (only tables and indicator labels), preventing evaluation of whether item wording is leading or redundant. The paper states that “open data are available upon request,” but for methodological rigor the full scale items and (preferably) data/code should be included in an appendix or submitted for review.
Response 15: Thank you for the comment. We agree that clear information about the measurement items is important for methodological evaluation. However, the full questionnaire used in this study contains institution-specific wording and indicators developed under internal quality-assurance procedures of Thai higher education institutions. Because several items reflect operational criteria that are not intended for public release, the complete instrument cannot be reproduced in the manuscript.
To address the reviewer’s concern, we have expanded the methodological description in Section 3.3, clarified how each construct and subdimension was operationalized, and provided clear indicator descriptions in the tables. Together, these elements offer sufficient detail for assessing item relevance, coverage, and non-redundancy, while respecting institutional confidentiality requirements.
Revision: A clarifying statement has been added to Section 3.3:
“Due to institutional confidentiality requirements and the use of context-specific indicators developed for internal assessment within Thai higher education institutions, the complete questionnaire cannot be reproduced in the manuscript. However, all constructs, subdimensions, and indicator descriptions are provided in the tables and in the expanded methodological explanation, which together offer sufficient detail for evaluating item relevance and content validity.”
Comment 16: The authors cite numerous Smart Campus / Smart University and TOE studies but fail to clearly explain how this paper advances theoretical understanding beyond existing work, particularly recent empirical studies in the ASEAN / Thai context. The claimed theoretical contribution is vague.
Response 16: Thank you for the comment. We acknowledge that the original manuscript did not sufficiently articulate the theoretical contribution of the study. The revised version now clarifies how this research advances existing Smart Campus and TOE scholarship, particularly within the ASEAN and Thai higher-education context.
Specifically, we explain that the study:
(1) extends the TOE framework by empirically linking technological, organizational, and environmental readiness to four sustainability-oriented smart campus outcome domains (economic, social, environmental, and governance), a structure not previously validated in the region;
(2) provides one of the first SEM-based examinations of Smart Campus development in Thai higher education using institution-level data from senior administrators; and
(3) identifies organizational readiness as the strongest theoretical driver of smart-campus performance, offering a refined understanding of TOE relationships in developing-country contexts.
These clarifications have been added to the revised manuscript in the theoretical contribution section.
Revision: A paragraph has been added to the Discussion section to clarify the theoretical contribution of the study:
“This study advances the theoretical understanding of smart campus development by extending the TOE framework into a multidimensional sustainability context. Unlike prior research that examines isolated institutional factors, this study empirically links the TOE dimensions with four sustainability-oriented outcome domains—economic, social, environmental, and governance—offering a more integrated theoretical model. By using institution-level SEM data from senior Thai administrators, the study provides one of the first empirical validations of this extended framework within the ASEAN higher-education context. Furthermore, the finding that organizational readiness exerts the strongest influence refines existing TOE theory by emphasizing the critical role of institutional capacity in shaping smart campus performance in developing-country settings.”
Comment 17: In Section 2.5 (“research gap”), the authors assert that few studies have used SEM to study TOE, whereas SEM is a well-established and commonly used method in this field.
Response 17: Thank you for the comment. We agree that SEM is widely used in TOE research in general. The revised manuscript now clarifies that the research gap does not refer to the global use of SEM, but to the limited number of SEM-based studies that apply the TOE framework specifically to Smart Campus or Smart University development within the ASEAN and Thai higher-education context. The wording in Section 2.5 has been corrected accordingly.
Revision: The following revised text has been inserted in Section 2.5:
“Although SEM has been widely used in TOE-based research globally, only a limited number of studies have applied SEM to examine how the three TOE dimensions jointly shape Smart Campus or Smart University development—particularly within the ASEAN and Thai higher-education context. Existing research tends to analyze technological or organizational factors in isolation rather than adopting a fully integrated TOE–Smart Campus perspective.”
Reviewer 3 Report
Comments and Suggestions for Authors
The manuscript exhibits high clarity and adherence to scientific conventions. Its logical flow, from introduction to conclusion, shows strong coherence between theoretical grounding, methodological design, and analytical discussion. The use of SEM for hypothesis testing is well justified and systematically reported, strengthening empirical credibility. Terminology is consistent, and key constructs such as “Smart University,” “organizational readiness,” and “digital transformation” are precisely defined. The writing is mostly objective and concise, although occasional redundancy in describing framework components (e.g., TOE domains repeated across sections) slightly affects fluency. In terms of argumentative coherence, the article effectively links results to broader debates on higher-education modernization and sustainability. The discussion integrates theoretical implications with practical recommendations, offering clear relevance for policymakers and university leaders. To enhance overall impact, the authors could refine paragraph transitions, reduce repetitive phrasing, and improve stylistic balance between technical detail and narrative readability. Despite these minor issues, the paper demonstrates strong scientific merit, methodological robustness, and significant contribution to the understanding of smart university ecosystems in Southeast Asia.
Author Response
Comment 1: The manuscript exhibits high clarity and adherence to scientific conventions. Its logical flow, from introduction to conclusion, shows strong coherence between theoretical grounding, methodological design, and analytical discussion. The use of SEM for hypothesis testing is well justified and systematically reported, strengthening empirical credibility. Terminology is consistent, and key constructs such as “Smart University,” “organizational readiness,” and “digital transformation” are precisely defined. The writing is mostly objective and concise, although occasional redundancy in describing framework components (e.g., TOE domains repeated across sections) slightly affects fluency. In terms of argumentative coherence, the article effectively links results to broader debates on higher-education modernization and sustainability. The discussion integrates theoretical implications with practical recommendations, offering clear relevance for policymakers and university leaders. To enhance overall impact, the authors could refine paragraph transitions, reduce repetitive phrasing, and improve stylistic balance between technical detail and narrative readability. Despite these minor issues, the paper demonstrates strong scientific merit, methodological robustness, and significant contribution to the understanding of smart university ecosystems in Southeast Asia.
Response 1: Thank you for the positive and constructive feedback. We appreciate the reviewer’s recognition of the manuscript’s clarity, coherence, and methodological rigor. In response to the helpful suggestions, we have refined paragraph transitions, reduced repetitive phrasing—particularly regarding the TOE domains—and improved the overall narrative flow. These adjustments enhance readability while strengthening the connection between theoretical foundations and empirical interpretation.
Revision: Minor stylistic adjustments have been made throughout the manuscript, including improving paragraph transitions, removing redundant phrases (especially repeated descriptions of TOE domains), and refining the balance between technical detail and narrative readability.
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
Thank you for this new version of your article. I was surprised to observe changes, relative to the previous draft, that affect central aspects of the research design, that is, elements that could not have been altered in the transition from the first to the second version. Frankly, had I known that the foundations of the study reported in the paper were so malleable, I would have suggested rejecting it at the first opportunity. I have a similar reaction when I see how readily the authors modify their conclusion as to which of the aspects considered exerts the strongest influence on universities’ progress towards the Smart model (even though, in this case, the change may be attributable to a wording error in the first draft).
The authors state that they have replaced the expression smart universities throughout the manuscript with the more common campus universities. This is not entirely accurate, as can be seen on line 37 of the draft. There are other parts of the text which show that the replacement has not been completed, and which can easily be located by means of a word search.
According to the authors’ responses, the sampling strategy has changed from “stratified random sampling” in the original draft to a “purposive sampling approach” in the new version—a modification that is not justified and that is very difficult to understand, even if one invokes a drafting error. Moreover, the new draft appears to state that all Thai higher education institutions formed part of the sample, which suggests that the authors are confusing the selection of sampling units with the selection of the respondent within each institution.
If I have understood correctly, the authors seek to justify the absence of information on the items used in the survey by appealing to a duty of confidentiality towards the participating institutions. However, the questionnaire itself does not reveal any information about those institutions. If what the authors mean is that the questionnaire as such is confidential, they should clarify whether its design is indeed their own (as the text appears to suggest) or whether they have used a questionnaire developed by others, in which case they should also cite the relevant sources.
In either case, a reader who does not know which variable is being described can hardly extract much value from the statistics presented in Table 1. If the authors wish to indicate that the basic variables are well behaved, it would suffice to report the maximum and minimum values within which their main descriptive statistics lie. I fear that information provided separately for each variable adds little for the reader, particularly when some of these statistics take the same value for all variables.
The remedial measures for common method bias (CMB) described by the authors do not seem sufficient to me. Of course, one would expect any questionnaire to be written in a neutral tone, to separate questions into different sections, and to guarantee anonymous responses; but I do not see how this can prevent individuals with high managerial responsibility within an institution from tending to assess positively the results of their own decisions or those of their team. In my view, any objective study of universities’ progress in any respect should incorporate the opinions of different stakeholders.
Finally, I still consider the article to be excessively long (the second draft is even longer than the previous one) and believe that some ideas are repeated too often in all, or almost all, sections.
Despite the shortcomings I observe in the manuscript and the concerns raised by the authors’ sudden changes of position regarding the basic parameters of the research design and the conclusions they draw from it, I have decided to support publication of the paper. My decision is based on the fact that many of the problems I identify in this article can also be found in other published research, and on my confidence that the authors’ apparent shifts in stance are due solely to drafting errors in the first version—a circumstance that lies beyond what my judgement can definitively establish.
Author Response
Reviewer #1
Response:
We sincerely thank the reviewer for the constructive comments. All concerns raised have now been fully addressed in the revised manuscript. Details of the corresponding revisions are provided below.
Revision:
- All terminology inconsistencies were corrected, with smart campus used uniformly throughout the manuscript.
- The sampling section was rewritten to clearly explain the use of purposive sampling from the outset and to distinguish the population (126 HEIs) from the respondents (96 senior administrators).
- The differentiation between sampling unit and respondent selection was clarified to avoid any ambiguity.
- Additional methodological details were added to explain questionnaire development, validation, and construct structure.
- Table 1 was removed as suggested, and descriptive statistics are now reported concisely in textual form.
- Statistical remedies for common method bias (Harman’s single-factor test and full collinearity VIF) were added to strengthen methodological rigor (an illustrative sketch of both diagnostics follows this list).
- Redundancies were removed, and multiple sections were streamlined to reduce length and improve clarity.
- The presentation of findings and conclusions was aligned to ensure consistency across all sections.
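For readers who wish to see how these two remedies operate, the following is a minimal from-scratch sketch in Python/NumPy. It is illustrative only: the names `items` (a respondents × items matrix of raw responses) and `scores` (respondents × composite construct scores) are hypothetical, and the manuscript’s own diagnostics were produced with standard statistical software rather than this code.

    import numpy as np

    def harman_single_factor(items):
        # Proportion of total variance captured by the first unrotated principal
        # component; values above ~0.50 are conventionally read as a CMB warning.
        z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
        eigvals = np.linalg.eigvalsh(np.corrcoef(z, rowvar=False))  # ascending
        return eigvals[-1] / eigvals.sum()

    def full_collinearity_vifs(scores):
        # Regress each construct score on all others; VIF_j = 1 / (1 - R^2_j).
        # Full-collinearity VIFs above ~3.3 are often taken to signal CMB.
        n, k = scores.shape
        vifs = []
        for j in range(k):
            y = scores[:, j]
            X = np.column_stack([np.ones(n), np.delete(scores, j, axis=1)])
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            r2 = 1.0 - (resid @ resid) / (((y - y.mean()) ** 2).sum())
            vifs.append(1.0 / (1.0 - r2))
        return vifs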
Reviewer 2 Report
Comments and Suggestions for Authors
The authors have responded to several of the original concerns in the revision and provided additional clarifications; however, core methodological and statistical inference issues remain insufficiently resolved or transparent. Consequently, the manuscript still requires revision, with more empirical detail and robustness checks, before reconsideration:
1. The manuscript continues to rely on literature citations and general assertions to claim that N = 96 is acceptable under the condition that “the model is not overly complex and factor structure is strong,” and further contends that aggregating items into latent variables reduces the “effective number of free parameters.” However, the authors do not report the number of free parameters in the model or the parameter-to-sample ratio, nor do they perform any explicit statistical power analysis or Monte Carlo / simulation study to demonstrate that N = 96 is adequate for the estimated model (Figure 4). Relying solely on literature statements that “50–100 may be feasible under certain conditions” is insufficient.
2. Even if items are “aggregated” into latent variables, the final structural model still includes a large number of paths (16 principal paths) and a second-order factor structure; under such parameterization the sample size substantially limits the robustness of parameter estimates, the reliability of standard errors, and the feasibility of multi-group or subsample analyses. The authors do not provide any detailed evidence regarding parameter estimate variances, parameter convergence stability (e.g., ML convergence diagnostics), or indications of potential overfitting.
3. The correlations Technology–Organization = 0.903 and Organization–Environment = 0.916 (as reported in tables/figures) are extremely high and, in principle, seriously challenge discriminant validity and the independent interpretability of path coefficients. The authors repeatedly claim to have passed Fornell–Larcker and HTMT tests, but they do not present the Fornell–Larcker matrix with the square roots of AVE on the diagonal, nor do they report the specific HTMT values for each construct pair in the main text or appendices. They also fail to provide a per-construct cross-correlation vs. AVE comparison table, which prevents verification of the claimed discriminant validity.
4. Even if HTMT values are reported to be “within acceptable upper limits (the authors write HTMT < 0.90–0.95),” this characterization is vague and insufficiently rigorous: the literature commonly recommends HTMT thresholds of 0.85 or 0.90 depending on context, yet the authors neither justify their chosen threshold nor disclose the specific pairwise HTMT values and their confidence intervals. Correlations as high as 0.903/0.916 indicate potential measurement overlap or the presence of a higher-order factor; the authors should provide more transparent numerical diagnostics.
5. All sixteen hypotheses are reported as significant and directionally consistent with theory, but given the cross-sectional design the authors still employ assertive causal language (e.g., “drive,” “causal model”) and do not sufficiently qualify causal inferences in the results section. Although the authors mention in the conclusion that the study is cross-sectional, their empirical interpretations frequently lack the cautious framing required for cross-sectional data; statements about causality should be more restrained and boundaries clearly stated.
6. The authors expanded the literature review and attempted to clarify the theoretical contribution, but the discussion of how the manuscript advances TOE or Smart Campus literature remains general. Particularly given that several empirical studies already exist in the ASEAN/Thai context, the authors need to specify the concrete “novel contributions” of this work rather than asserting it is “the first validation in the region.” The current exposition remains overly generic.
7. The overall Cronbach’s α of 0.992 is extremely high, suggesting possible item redundancy or a highly homogeneous scale; although the authors report subscale α values, they do not discuss potential item redundancy or risks of information loss (for example, the trade-off between AVE and item uniqueness). The manuscript presents neither the item correlation matrix nor an analysis of how item deletion would affect α.
Author Response
Reviewer #2
Overall Comment: The authors have responded to several of the original concerns in the revision and provided additional clarifications; however, core methodological and statistical inference issues remain insufficiently resolved or transparent. Consequently, the manuscript still requires revision, with more empirical detail and robustness checks, before reconsideration:
Response: Thank you for the reviewer’s overall assessment. We appreciate the emphasis on methodological transparency and robustness. In the revised manuscript, we have substantially strengthened all core methodological areas identified. Specifically, we now report the exact number of free parameters and the parameter-to-sample ratio, include RMSEA-based post-hoc power analysis, and provide detailed Maximum Likelihood (ML) convergence diagnostics—including checks for inadmissible solutions, standardized residuals, and modification indices—to demonstrate estimation stability.
We have also incorporated comprehensive discriminant validity evidence appropriate for a parcel-based higher-order SEM model, including the Fornell–Larcker matrix and AVE–squared-correlation comparisons. HTMT was not applicable due to the use of aggregated indicators, and this limitation is now fully explained in the manuscript. All causal wording has been revised to reflect theoretically derived relationships rather than empirical causation. Finally, the Theoretical Contribution section has been rewritten to specify concrete, non-generic contributions relevant to the ASEAN and Thai higher-education context. These revisions address the methodological and interpretive concerns raised.
Revision: Major methodological clarifications have been added throughout the manuscript, including free-parameter reporting, RMSEA-based post-hoc power analysis, and detailed ML convergence diagnostics (residual patterns, modification indices, and checks for inadmissible solutions). Discriminant validity has been strengthened through Fornell–Larcker analysis and AVE–correlation comparisons, with additional clarification regarding the inapplicability of HTMT in a parcel-based measurement model. Causal language has been revised to emphasize theoretically grounded pathways, and the Theoretical Implications section has been rewritten to articulate concrete contributions. These changes enhance transparency, empirical rigor, and interpretive accuracy across the manuscript.
Comment 1: The manuscript continues to rely on literature citations and general assertions to claim that N = 96 is acceptable under the condition that “the model is not overly complex and factor structure is strong,” and further contends that aggregating items into latent variables reduces the “effective number of free parameters.” However, the authors do not report the number of free parameters in the model or the parameter-to-sample ratio, nor do they perform any explicit statistical power analysis or Monte Carlo / simulation study to demonstrate that N = 96 is adequate for the estimated model (Figure 4). Relying solely on literature statements that “50–100 may be feasible under certain conditions” is insufficient.
Response: Thank you for this important comment. In response, we have substantially revised Section 3.2 to provide an explicit and transparent justification of the sample size. The revised manuscript now reports the exact number of free parameters in the final structural model (42), the resulting parameter-to-sample ratio (1:2.29), and the results of an RMSEA-based post-hoc power analysis demonstrating adequate statistical power (0.84). These additions replace the earlier general statements drawn from the literature and provide concrete empirical evidence supporting the adequacy of the sample size for the proposed SEM model.
Revision: Section 3.2 Population and Sample has been rewritten to include:
• the number of free parameters in the final SEM model (42),
• the parameter-to-sample ratio (1:2.29), and
• the RMSEA-based statistical power analysis (power = 0.84).
The earlier general justification based on sample-size guidelines has been replaced with an explicit model-based justification, consistent with the reviewer’s request for greater methodological transparency.
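To make the reported power figure reproducible in principle, the sketch below implements the standard RMSEA-based power computation of MacCallum, Browne and Sugawara (1996) in Python/SciPy. It is a hedged illustration: the model degrees of freedom must be taken from the fitted model, and the null and alternative RMSEA values (0.05 and 0.08) and the alpha level are conventional defaults rather than figures taken from the manuscript.

    from scipy.stats import ncx2

    def rmsea_power(n, df, rmsea0=0.05, rmsea1=0.08, alpha=0.05):
        # Power to reject "close fit" (RMSEA = rmsea0) when the true RMSEA = rmsea1.
        # A common convention sets the noncentrality to (n - 1) * df * rmsea**2.
        lam0 = (n - 1) * df * rmsea0 ** 2
        lam1 = (n - 1) * df * rmsea1 ** 2
        crit = ncx2.ppf(1 - alpha, df, lam0)
        return 1.0 - ncx2.cdf(crit, df, lam1)

    # Example call with N = 96; df=120 is a placeholder, not the model's actual value.
    # print(rmsea_power(96, df=120))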
Comment 2: Even if items are “aggregated” into latent variables, the final structural model still includes a large number of paths (16 principal paths) and a second-order factor structure; under such parameterization the sample size substantially limits the robustness of parameter estimates, the reliability of standard errors, and the feasibility of multi-group or subsample analyses. The authors do not provide any detailed evidence regarding parameter estimate variances, parameter convergence stability (e.g., ML convergence diagnostics), or indications of potential overfitting.
Response: Thank you for this constructive comment. We agree that model robustness and estimation stability must be reported more explicitly. In the revised manuscript, Section 3.5 has been expanded to describe the diagnostic procedures performed during the SEM estimation. Maximum Likelihood (ML) estimation converged normally without warnings, and no inadmissible solutions were detected (e.g., negative error variances or standardized loadings above 1.0). Standardized residuals were reviewed and showed no systematic misspecification patterns, and the modification indices did not indicate omitted paths that would substantially improve the model. Together, these diagnostics provide evidence that the model estimation was stable and not affected by overfitting, even with the modest sample size. Details of these diagnostic checks are now clearly reported.
Revision: Section 3.5 Data Collection and Analysis has been revised to include ML convergence diagnostics, confirmation of the absence of Heywood cases, examination of standardized residuals, and inspection of modification indices, demonstrating that the model estimation was stable and not affected by overfitting.
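The admissibility checks listed in this revision can be expressed compactly. The following Python/NumPy sketch is a hypothetical illustration; the input names are assumptions, and the manuscript’s diagnostics were read from the SEM software output rather than computed this way:

    import numpy as np

    def admissibility_flags(error_variances, std_loadings, std_residuals):
        # Heywood cases: negative error variances or standardized loadings above 1.0.
        # Standardized residuals with |z| > 2.58 are conventionally flagged as local misfit.
        ev = np.asarray(error_variances)
        lam = np.asarray(std_loadings)
        res = np.asarray(std_residuals)
        return {
            "negative_error_variances": int((ev < 0).sum()),
            "loadings_above_one": int((np.abs(lam) > 1.0).sum()),
            "large_std_residuals": int((np.abs(res) > 2.58).sum()),
        }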
Comment 3: The correlations Technology–Organization = 0.903 and Organization–Environment = 0.916 (as reported in tables/figures) are extremely high and, in principle, seriously challenge discriminant validity and the independent interpretability of path coefficients. The authors repeatedly claim to have passed Fornell–Larcker and HTMT tests, but they do not present the Fornell–Larcker matrix with the square roots of AVE on the diagonal, nor do they report the specific HTMT values for each construct pair in the main text or appendices. They also fail to provide a per-construct cross-correlation vs. AVE comparison table, which prevents verification of the claimed discriminant validity.
Response: Thank you for this important comment. We agree that high correlations among the TOE constructs require more transparent reporting of discriminant validity. In the revised manuscript, we have removed the previous references to HTMT because the TOE constructs were modeled using aggregated (parcelled) indicators, which do not preserve the item-level covariance matrix required to compute HTMT ratios. Instead, we now report discriminant validity using the Fornell–Larcker criterion and AVE–versus–squared-correlation comparisons. These diagnostics are now explicitly described in Section 4.3.
Revision: Section 4.3 Discriminant Validity has been revised to remove all references to HTMT. A detailed Fornell–Larcker matrix and AVE–versus–squared-correlation comparisons have been added, and the results are now described explicitly in the text.
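As a concrete illustration of the two computations behind the revised Section 4.3, the sketch below shows how AVE and the Fornell–Larcker comparison are typically calculated; the per-construct loading vectors and `phi` (the latent correlation matrix) are hypothetical inputs:

    import numpy as np

    def ave(std_loadings):
        # Average variance extracted: mean of the squared standardized loadings.
        lam = np.asarray(std_loadings, dtype=float)
        return float(np.mean(lam ** 2))

    def fornell_larcker_holds(aves, phi):
        # Discriminant validity requires sqrt(AVE_i) to exceed the absolute
        # correlation between construct i and every other construct.
        root_ave = np.sqrt(np.asarray(aves, dtype=float))
        r = np.abs(np.asarray(phi, dtype=float)).copy()
        np.fill_diagonal(r, 0.0)
        return all(root_ave[i] > r[i].max() for i in range(len(root_ave)))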
Comment 4: Even if HTMT values are reported to be “within acceptable upper limits (the authors write HTMT < 0.90–0.95),” this characterization is vague and insufficiently rigorous: the literature commonly recommends HTMT thresholds of 0.85 or 0.90 depending on context, yet the authors neither justify their chosen threshold nor disclose the specific pairwise HTMT values and their confidence intervals. Correlations as high as 0.903/0.916 indicate potential measurement overlap or the presence of a higher-order factor; the authors should provide more transparent numerical diagnostics.
Response: Thank you for raising this issue. Because the measurement model is based on aggregated indicators, the HTMT statistic cannot be computed. The earlier statement referring to HTMT thresholds (HTMT < 0.90–0.95) has been removed. Discriminant validity is now evaluated solely through the Fornell–Larcker criterion and AVE–r² comparisons, which provide clear numerical justification despite the high inter-construct correlations. This revision ensures that the assessment aligns with both methodological constraints and the reviewer’s recommendation for greater transparency.
Revision: The previous statement referring to HTMT thresholds (HTMT < 0.90–0.95) has been removed. A clarification has been added explaining that HTMT cannot be computed due to the use of aggregated indicators. Discriminant validity is now assessed using Fornell–Larcker and AVE–r² criteria only.
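For reference, the definition of HTMT (Henseler et al., 2015) makes the inapplicability explicit: with K_i items for construct i, both the numerator and the denominator are built from item-level correlations r, which aggregation into parcels removes. In LaTeX:

    \mathrm{HTMT}_{ij} =
      \frac{\frac{1}{K_i K_j} \sum_{g=1}^{K_i} \sum_{h=1}^{K_j} r_{x_{ig},\,x_{jh}}}
           {\sqrt{\frac{2}{K_i (K_i - 1)} \sum_{g<h} r_{x_{ig},\,x_{ih}}
              \cdot \frac{2}{K_j (K_j - 1)} \sum_{g<h} r_{x_{jg},\,x_{jh}}}}

Because the denominator requires the within-construct (monotrait) item correlations, parcel-level data leave it undefined, which is why the revision relies on Fornell–Larcker and AVE–r² comparisons instead.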
Comment 5: All sixteen hypotheses are reported as significant and directionally consistent with theory, but given the cross-sectional design the authors still employ assertive causal language (e.g., “drive,” “causal model”) and do not sufficiently qualify causal inferences in the results section. Although the authors mention in the conclusion that the study is cross-sectional, their empirical interpretations frequently lack the cautious framing required for cross-sectional data; statements about causality should be more restrained and boundaries clearly stated.
Response: Thank you for this constructive comment. We appreciate the reviewer’s emphasis on appropriate causal framing when using cross-sectional data. In response, we carefully reviewed the entire manuscript and revised all expressions that previously implied empirical causality. Terms such as “drive,” “determine,” “lead to,” “influence,” and “causal links” have been replaced with more appropriate wording such as “are associated with,” “predict,” “support the hypothesized direction,” or “theoretically derived causal paths” in accordance with best practices for theory-based SEM using cross-sectional data.
We have also clarified that the causal structure of the model is theoretically grounded in the TOE framework rather than empirically inferred from temporal ordering. Throughout the Results, Discussion, and Practical Implications sections, the interpretations have been rewritten to avoid overstating empirical causation and to explicitly emphasize that the SEM estimates represent structural associations supporting the proposed causal assumptions rather than definitive causal effects.
Additionally, we added a dedicated statement in the Discussion noting that the cross-sectional design limits empirical causal inference and that all causal interpretations in the study should be understood within the theoretical context of the model.
These revisions ensure that the manuscript is fully aligned with methodological expectations for cross-sectional SEM while preserving the theoretical causal nature of the proposed framework.
Revision: All instances of assertive causal wording in the Introduction, Methods, Results, and Discussion sections have been revised to reflect theoretically derived causal assumptions rather than empirical causal inference. Terms such as “drive,” “lead to,” and “causal links” were replaced with expressions such as “are associated with,” “positively predict,” and “theoretically derived causal paths.” A clarification has been added in the Discussion to explicitly state that the cross-sectional design prevents definitive causal inference and that the findings represent structural associations consistent with the proposed causal framework.
Comment 6: The authors expanded the literature review and attempted to clarify theoretical contribution, but the discussion of how the manuscript advances TOE or Smart Campus literature remains general. Particularly given that several empirical studies already exist in the ASEAN/Thai context, the authors need to specify the concrete “novel contributions” of this work rather than asserting it is “the first validation in the region.” The current exposition remains overly generic.
Response: Thank you for this helpful comment. We agree that the previous version of the theoretical implications section was overly general and did not clearly articulate the specific contributions of this study. In the revised manuscript, Section 5.2 has been substantially strengthened to present concrete and verifiable theoretical contributions. We removed the earlier phrasing suggesting regional exclusivity (e.g., “the only study”), and we now specify four clear areas of contribution:
(1) the integration of the TOE framework with a second-order Smart University Success construct comprising four sustainability-oriented dimensions;
(2) the conceptual extension of Smart University Success as a higher-order latent construct;
(3) the use of institution-level data from senior administrators, providing a governance-level analytical perspective uncommon in prior regional studies; and
(4) methodological enhancements including transparent free-parameter justification and RMSEA-based power analysis.
These revisions offer a concrete and non-generic articulation of the study’s theoretical contribution in accordance with the reviewer’s recommendation.
Revision: Section 5.2 Theoretical Implications has been revised to:
• remove the earlier phrasing implying regional exclusivity;
• present four specific theoretical contributions, including (a) integrating the TOE framework with a second-order Smart University Success construct, (b) conceptualizing Smart University Success as a higher-order latent structure, (c) incorporating institution-level administrator data to offer a governance perspective, and (d) enhancing methodological rigor through free-parameter justification and RMSEA-based power analysis.
These additions clarify the study’s novel contributions and address the reviewer’s concern regarding generality.
Comment 7: The overall Cronbach’s α of 0.992 is extremely high, suggesting possible item redundancy or a highly homogeneous scale; although the authors report subscale α values, they do not discuss potential item redundancy or risks of information loss (for example, the trade-off between AVE and item uniqueness). The manuscript presents neither the item correlation matrix nor an analysis of how item deletion would affect α.
Response: Thank you for this observation. We acknowledge that the overall Cronbach’s α of 0.992 is unusually high and may suggest risks of item redundancy or reduced item uniqueness. To address this, we conducted additional diagnostics and have now added a clarification in the manuscript. Item-deletion tests in SPSS showed that removing any item neither lowered α nor improved AVE or CR, indicating that no individual item inflated reliability. AVE values (0.60–0.69) also confirm adequate item uniqueness across constructs. A new paragraph has been added to the Limitations section to explain this issue and recommend that future studies examine potential redundancy through refined item sets or exploratory techniques.
Revision: A paragraph has been added to the manuscript to address the reviewer’s concern:
Although the overall Cronbach’s α of 0.992 appears extremely high, additional diagnostics indicate that this does not reflect problematic item redundancy. The scale comprises 83 items across four sustainability-oriented performance domains, and Cronbach’s α is known to increase with scale length. Each domain was modeled as a first-order construct within a second-order Smart University Success factor, and all domains demonstrated acceptable AVE values (0.60–0.69), suggesting adequate item uniqueness despite high internal consistency. Item-deletion tests conducted in SPSS showed that removing individual items neither reduced α nor improved AVE or CR, indicating that no single item disproportionately inflated the reliability coefficient. Furthermore, inter-item correlations did not exceed thresholds typically associated with redundancy. Together, these results show that the high α reflects the breadth and multidimensionality of the construct rather than excessive similarity among items, and that meaningful variance is retained across the performance domains.
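The item-deletion diagnostic described in this paragraph is straightforward to reproduce. The following Python/NumPy sketch is an illustration under the assumption that `items` is a respondents × 83 matrix of item scores; the authors’ actual checks were run in SPSS, not with this code:

    import numpy as np

    def cronbach_alpha(items):
        # Standard formula: (k / (k - 1)) * (1 - sum of item variances / total-score variance).
        k = items.shape[1]
        item_var_sum = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1.0 - item_var_sum / total_var)

    def alpha_if_item_deleted(items):
        # Redundancy is suspected when dropping an item leaves alpha unchanged or higher.
        return [cronbach_alpha(np.delete(items, j, axis=1))
                for j in range(items.shape[1])]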