Article
Peer-Review Record

Sustainable Transformation of the Accounting and Auditing Profession: Readiness for Blockchain Technology Adoption Through UTAUT and TAM3 Frameworks

Sustainability 2025, 17(23), 10811; https://doi.org/10.3390/su172310811
by Ahmed Almgrashi and Abdulwahab Mujalli *
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 15 October 2025 / Revised: 15 November 2025 / Accepted: 27 November 2025 / Published: 2 December 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

1. Major issues

(1) Consistency of scale specification

I noticed references to both an even-numbered Likert scale and a 5-point scale. I recommend settling on a single scale (e.g., 1–5) and ensuring item wording, table labels, and interpretations are fully aligned.

(2) Alignment of item count and construct structure

I would appreciate one definitive list of retained items and constructs. A brief table indicating dropped items with reasons (pilot, reliability, discriminant validity) would help, and the minimum-sample rationale should match that final count.

(3) Duplicate/leading items and language quality

Please remove duplicated wording (e.g., EE-2/EE-3), rewrite any value-laden statements in neutral terms, and run a light pass to correct basic grammar/lexicon (e.g., affect, implementing, thinks). This will improve measurement quality and reader confidence.

(4) Hypothesis label–results mapping

I found inconsistencies between hypothesis labels in text, figure, and tables. A small mapping table (Hypothesis → Path → Results label) and consistent usage throughout would make the results easier to follow and verify.

(5) Sample-size rationale

Rather than relying only on the “10× rule,” I suggest adding a brief a priori power analysis (effect size assumptions, α, and power ≥ 0.80). This will make the sampling justification more defensible.

(6) Common method bias (CMB) diagnostics

In my view, Harman’s test and VIF are not sufficient on their own. Please add at least one supplementary check (e.g., a marker variable or an unmeasured latent method factor approach) and report the outcome.

(7) Equivalence after translation/back-translation

Since the instrument underwent translation, a minimal piece of evidence for measurement equivalence (e.g., MICOM or a content-validity index from an expert panel) would support generalization claims.

(8) “Sustainable transition” framing vs. measures

The framing emphasizes sustainability, while the empirical model culminates in intention to use (IU). I encourage either (A) reframing the paper as a technology-adoption study (with corresponding adjustments to the title/abstract/conclusions), or (B) adding outcomes tied directly to sustainability (transparency/traceability, internal control/audit quality, environmental/social performance).

 

2. Minor issues

(1) Numerical basis for sample adequacy

Please complement the “10× rule” with a short power analysis; even a concise calculation will help.

(2) Heterogeneity and external validity

Including key demographic/organizational controls—or a succinct multi-group analysis (MGA)—would reassure readers about robustness across subgroups.

(3) Clarity of results narrative

For each path, I recommend reporting standardized β, t, 95% CI, and p explicitly, and revising any prose that may blur reference paths.

(4) Predictive validity

If feasible, an appendix with PLSpredict (or a simple holdout test) would substantiate claims about predictive utility.

(5) Transparency of data-processing flow

A compact flowchart/table that shows exclusions, missing-data handling, and items dropped at each reliability/validity stage would aid reproducibility.

 

3. Editorial

(1) Placeholders and numeric fields

Kindly replace all placeholders (e.g., response rate 56.3%, DOIs) and remove any template remnants in text and tables.

(2) Terminology/abbreviations/notation

Define PE/EE/SI/IU at first use, streamline repetitive captions, and standardize decimal precision, significance markers, and numbering.

(3) Language editing

A light copy-edit for tense, subject–verb agreement, prepositions/articles, and word choice would be beneficial. I would prioritize the survey item wording.

(4) Figures and tables

Improving label legibility, column-header spacing, and footnote placement will enhance readability.

 

Overall recommendation: Major Revision. If the authors address the measurement alignment, hypothesis labeling, and methodological checks noted above, I believe the manuscript can be substantially strengthened.

Author Response

We would like to express our sincere appreciation to the editor and reviewers for their valuable time, insightful comments, and constructive feedback on our manuscript. We are especially grateful to Reviewer #1 for the encouraging remarks and thoughtful suggestions regarding our work. All comments have been carefully considered, and the manuscript has been revised accordingly to enhance its clarity, quality, and overall contribution. Detailed responses to each comment are provided below, prepared in accordance with the journal’s guidelines.

Comment 1: I noticed references to both an even-numbered Likert scale and a 5-point scale. I recommend settling on a single scale (e.g., 1–5) and ensuring item wording, table labels, and interpretations are fully aligned. 

Response 1: Thank you for pointing this out. We agree with this comment. Therefore, we have standardized all Likert-type items to a 5-point scale (1 = Strongly Disagree to 5 = Strongly Agree) to ensure consistency and clarity throughout the manuscript. The necessary revisions have been made in the Methods section (pages 13–14, lines 605–622), as well as in all related tables and interpretation sections where the scale was previously inconsistent.

Comment 2:  I would appreciate one definitive list of retained items and constructs. A brief table indicating dropped items with reasons (pilot, reliability, discriminant validity) would help, and the minimum-sample rationale should match that final count.

Response 2: We sincerely thank the Reviewer for this insightful comment. We fully agree with the suggestion. A pilot test has been conducted, and the corresponding details have been incorporated into the revised manuscript on pages 15–16, lines 640–647. The reliability analysis has also been completed and included on page 16, lines 649–657, as well as in Table 2.

A pilot study involving twenty-seven participants was undertaken to validate the measurement instruments and questionnaire, following the recommendations of Hair et al. (2019). The internal consistency of each construct in the pilot questionnaire was assessed using Cronbach’s alpha, with a minimum threshold of 0.70 considered acceptable for reliability. All Cronbach’s alpha coefficients exceeded this threshold, demonstrating that the questionnaire exhibited a high level of internal consistency and coherence among the responses. Table 2 presents the survey variables and their corresponding Cronbach’s alpha values (pilot sample), as shown on page 16, lines 649–657.
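
For transparency, the internal-consistency check can be reproduced with a short Python sketch; the 27 × 4 response matrix below is a random placeholder standing in for one construct's pilot responses, not the actual pilot data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents x k_items) matrix of one construct."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical pilot data: 27 respondents, 4 items answered on the 1-5 Likert scale
rng = np.random.default_rng(0)
pilot = rng.integers(1, 6, size=(27, 4)).astype(float)
print(round(cronbach_alpha(pilot), 3))          # constructs below 0.70 would be flagged
```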

Regarding the minimum sample size rationale, we have ensured that it aligns with the final number of retained items. Table 1 (page 15) provides the constructs and the number of indicators for each. Following the widely accepted guideline for structural equation modeling (SEM), the minimum recommended sample size is ten times the number of observed variables. This study includes 30 observed variables; therefore, the minimum required sample size is 30 × 10 = 300, and we accordingly set a target of no fewer than 310 participants to ensure adequate statistical power and model stability.

 

Comment 3: Please remove duplicated wording (e.g., EE-2/EE-3), rewrite any value-laden statements in neutral terms, and run a light pass to correct basic grammar/lexicon (e.g., affect, implementing, thinks). This will improve measurement quality and reader confidence.

Response 3: We sincerely appreciate the reviewer’s thoughtful feedback concerning wording duplication and language quality, particularly in relation to the Effort Expectancy items (e.g., EE-2/EE-3). As noted, these items were intentionally adopted from well-established and validated scales within the UTAUT framework. Retaining the original wording supports conceptual consistency and enables comparability with previous studies employing the same validated constructs. That said, we have carefully re-examined the items and overall manuscript to ensure clarity and accuracy of expression. We confirm that the language used remains appropriate for academic standards, while preserving the theoretical integrity and validity of the measurement instrument. We thank the reviewer for the suggestion and trust that the rationale above clarifies our decision to maintain the current item wording.

Comment 4: I found inconsistencies between hypothesis labels in text, figure, and tables. A small mapping table (Hypothesis → Path → Results label) and consistent usage throughout would make the results easier to follow and verify. 

Response 4: Thank you for pointing this out. We agree with this comment. Therefore, we have carefully reviewed and corrected the inconsistencies between the hypothesis labels in the text, figures, and tables.

Comment 5: Rather than relying only on the “10× rule,” I suggest adding a brief a priori power analysis (effect size assumptions, α, and power ≥ 0.80). This will make the sampling justification more defensible.  

Response 5: Thank you for this constructive comment. We agree that including an a priori power analysis provides a stronger justification for the adequacy of our sample size and enhances methodological transparency. Accordingly, we have performed and reported an a priori statistical power analysis to determine the minimum required sample size for this study. This revision has been added to the Methodology section (Page 16, Paragraph 2, Lines 667–677) of the revised manuscript.
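
For readers who wish to reproduce the calculation, a minimal sketch is given below. The defaults (Cohen's medium effect f² = 0.15, four predictors of the endogenous construct, α = 0.05, power ≥ 0.80) are illustrative assumptions, not necessarily the exact values reported in the manuscript.

```python
from scipy import stats

def min_n_regression_ftest(f2=0.15, predictors=4, alpha=0.05, target_power=0.80):
    """Smallest N at which the F-test of R-squared in a multiple regression reaches
    the target power (Cohen, 1988; noncentrality lambda = f2 * N)."""
    n = predictors + 2
    while True:
        df1, df2 = predictors, n - predictors - 1
        f_crit = stats.f.ppf(1 - alpha, df1, df2)
        power = 1 - stats.ncf.cdf(f_crit, df1, df2, f2 * n)
        if power >= target_power:
            return n, round(power, 3)
        n += 1

print(min_n_regression_ftest())  # roughly N = 85 with these illustrative defaults
```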

Comment 6: Common method bias (CMB) diagnostics — In my view, Harman’s test and VIF are not sufficient on their own. Please add at least one supplementary check (e.g., a marker variable or an unmeasured latent method factor approach) and report the outcome 

Response 6: Thank you for pointing this out. We agree with this comment. Therefore, we have incorporated an additional diagnostic technique to strengthen our assessment of common method bias (CMB). Specifically, we applied the marker variable technique as a supplement to the Harman single-factor and VIF analyses previously reported. The results of the marker variable test indicated that the correlations between the marker variable and the main constructs were statistically non-significant, suggesting that CMB is not a serious concern in this study. These additions have been included in the revised manuscript on pages 18–19, lines 756–761, under subsection 5.3, Common Method Bias Diagnostics.
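
As an illustrative sketch of the marker-variable adjustment in the spirit of Lindell and Whitney (2001), the function below rescales the substantive correlations by the marker's smallest correlation; the construct-score data frame and the marker column name ("MKR") are hypothetical.

```python
import pandas as pd

def cmv_adjusted_corr(scores: pd.DataFrame, marker: str = "MKR") -> pd.DataFrame:
    """Partial out common method variance using the marker's smallest correlation
    with the substantive constructs as the CMV proxy (Lindell & Whitney, 2001)."""
    r = scores.corr()
    subs = [c for c in r.columns if c != marker]
    r_m = r.loc[marker, subs].min()                 # smallest marker correlation
    return (r.loc[subs, subs] - r_m) / (1 - r_m)    # r_adj = (r - r_M) / (1 - r_M)

# adj = cmv_adjusted_corr(construct_scores)  # compare with the unadjusted matrix:
# substantive correlations that remain significant suggest CMB is not a serious concern
```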

Comment 7: Equivalence after translation/back-translation. Since the instrument underwent translation, a minimal piece of evidence for measurement equivalence (e.g., MICOM or a content-validity index from an expert panel) would support generalization claims.

Response 7: Thank you for this valuable comment. In the revised manuscript, we have provided explicit evidence of measurement equivalence for the translated instrument. As described in the Methodology section (page 15, paragraph 1, lines 637–641), the questionnaire was translated and back-translated following Brislin (1986). To ensure conceptual and semantic accuracy, three bilingual experts in accounting, auditing, and information systems evaluated the translated items for clarity and equivalence. A content validity index (CVI) was computed, and all items achieved values above 0.80, indicating satisfactory measurement equivalence (Polit & Beck, 2006). This evidence supports the reliability and generalizability of the translated instrument across linguistic contexts.
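
The content-validity index follows Polit and Beck (2006); a minimal sketch with hypothetical relevance ratings from the three-expert panel (not their actual scores) is shown below.

```python
import numpy as np

# Hypothetical relevance ratings (1-4 scale) from three experts for five example items
ratings = np.array([[4, 3, 4, 4, 3],
                    [4, 4, 3, 4, 4],
                    [3, 4, 4, 3, 4]])     # rows = experts, columns = items

i_cvi = (ratings >= 3).mean(axis=0)       # I-CVI: share of experts rating 3 or 4
s_cvi_ave = i_cvi.mean()                  # S-CVI/Ave: mean of the item-level indices
print(i_cvi, round(s_cvi_ave, 2))         # items with I-CVI below 0.80 would be revised
```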

Comment 8: The framing emphasizes sustainability, while the empirical model culminates in intention to use (IU). I encourage either (A) reframing the paper as a technology-adoption study (with corresponding adjustments to the title/abstract/conclusions), or (B) adding outcomes tied directly to sustainability (transparency/traceability, internal control/audit quality, environmental/social performance). 

Response 8: Thank you for pointing this out. We agree with this valuable comment. Therefore, we have ensured that outcomes directly tied to sustainability are clearly reflected in the revised manuscript. Specifically, we have elaborated on how the model contributes to sustainable transition outcomes in the Introduction (page 4, paragraph 13, lines 172–181) and the Discussion section (page 26, paragraph 2, lines 979–1987), and we have further emphasized these implications in the Practical Implications section (page 28, paragraph 2, lines 1069–1076).

 

The following section presents the reviewer’s minor comments and our detailed responses to each.

 

Comment 1: Please complement the “10× rule” with a short power analysis; even a concise calculation will help.

Response 1: Thank you for this constructive comment. We agree that including an a priori power analysis provides a stronger justification for the adequacy of our sample size and enhances methodological transparency. Accordingly, we have performed and reported an a priori statistical power analysis to determine the minimum required sample size for this study. This revision has been added to the Methodology section (Page 16, Paragraph 2, Lines 667–677) of the revised manuscript.

Comment 2: Heterogeneity and external validity — Including key demographic/organizational controls—or a succinct multi-group analysis (MGA)—would reassure readers about robustness across subgroups. 

Response 2: Thank you for pointing this out. We agree that accounting for heterogeneity and external validity would enhance the robustness of the findings. In this study, the main objective was to examine the overall impact of the independent variables on the dependent variable within a unified model, rather than testing subgroup differences through MGA. Therefore, a multi-group analysis was not conducted. We acknowledge this as a limitation of the study and have added a clarification in the Limitations and Future Research section on page 29, paragraph 2, lines 1114–1122 of the revised manuscript (marked in yellow in the revised version).

Comment 3: Clarity of results narrative — For each path, I recommend reporting standardized β, t, 95% CI, and p explicitly, and revising any prose that may blur reference paths. 

Response 3: Thank you for pointing this out. We appreciate the reviewer’s attention to the clarity of the results narrative. We agree that explicitly reporting all statistical parameters enhances transparency and interpretability. In the current version of the manuscript, the standardized β coefficients, t-values, 95% confidence intervals, and p-values are already presented in the Results section and corresponding tables (see Section 5.5, pages 22–23, Table 7, and the related narrative in paragraph 2 on page 22, lines 806–815).

 

Comment 4: If feasible, an appendix with PLSpredict (or a simple holdout test) would substantiate claims about predictive utility. 

Response 4: Thank you for pointing this out. We agree with this valuable comment. Therefore, we have included the PLSpredict results to provide additional evidence for the model’s predictive utility. Specifically, we performed a PLSpredict analysis, together with the blindfolding procedure and cross-validated redundancy measures, to assess the predictive relevance (Q²) of the endogenous constructs. The results demonstrate that the model has acceptable out-of-sample predictive performance, supporting the robustness of our findings. The new content has been added in the revised manuscript on pages 22–23, lines 836–846, and the detailed PLSpredict results are presented in Table 8.
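
The PLSpredict results themselves were generated in SmartPLS. As a purely illustrative alternative, the simple holdout logic the reviewer mentions can be sketched as follows; the composite construct scores are random placeholders, and the predictor set (PE, EE, SI, CO) for IU is assumed for illustration only.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

# Placeholder composite (mean) scores per construct; in practice these come from the survey
rng = np.random.default_rng(1)
data = pd.DataFrame(rng.normal(size=(353, 5)), columns=["PE", "EE", "SI", "CO", "IU"])

X_train, X_test, y_train, y_test = train_test_split(
    data[["PE", "EE", "SI", "CO"]], data["IU"], test_size=0.3, random_state=42)

model = LinearRegression().fit(X_train, y_train)
rmse_model = np.sqrt(mean_squared_error(y_test, model.predict(X_test)))
rmse_naive = np.sqrt(mean_squared_error(y_test, np.full(len(y_test), y_train.mean())))
print(rmse_model, rmse_naive)  # with real data, the model should beat the naive-mean benchmark
```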

Reviewer 2 Report

Comments and Suggestions for Authors

The study is well-structured and addresses a current, relevant topic in the context of blockchain technology adoption in the accounting and auditing field, framing it within the broader context of sustainable digital transformation. The combined use of the UTAUT and TAM3 models to explain technology adoption intentions is a theoretically sound and underexplored approach, particularly in the Saudi context.

However, we consider the theoretical framework to be overly descriptive and insufficiently critical. A framework that clearly justifies the rationale for the UTAUT-TAM3 combination and how it relates to sustainability is needed.

Another issue concerns the relationship with sustainability.

The title highlights "Sustainable Transformation," but then sustainability is treated only as "digital efficiency and transparency." There should be a conceptual or empirical approach to organizational, environmental, or social sustainability. For example, the link between blockchain adoption and the Sustainable Development Goals (SDGs), green accounting, or digital sustainability could be strengthened to highlight blockchain's contribution to more ethical, transparent, and responsible auditing practices.

The choice of PLS-SEM should be justified in terms of distributional limitations, sample size, and the theoretical model.

The results should be interpreted critically and grounded in theory. For example, the lack of significance of CSE → EE or CO → IU would warrant a theoretical explanation (e.g., low professional technological upgrading, institutional barriers, cultural context).

Finally, the authors should state the specific implications of the study and the limitations in concrete terms.

Regarding the references, it is advisable to include more recent ones; most are published before 2021, and there are many papers from 2022 to 2025 on digital transformation and accounting sustainability. Tables should include footnotes with a source ("Prepared by the authors using SmartPLS 4.0") and present significance levels in standard format (p < 0.001; p < 0.01).

Consequently, Figure 1 (conceptual model) is unformatted and should be labelled.

From a linguistic perspective, there is some redundancy to be avoided.

Author Response

We would like to express our sincere appreciation to the editor and reviewers for their valuable time, insightful comments, and constructive feedback on our manuscript. We are especially grateful to Reviewer #2 for the encouraging remarks and thoughtful suggestions regarding our work. All comments have been carefully considered, and the manuscript has been revised accordingly to enhance its clarity, quality, and overall contribution. Detailed responses to each comment are provided below, prepared in accordance with the journal’s guidelines.

Comment 1: However, we consider the theoretical framework to be overly descriptive and insufficiently critical. A framework that clearly justifies the rationale for the UTAUT-TAM3 combination and how it relates to sustainability is needed.

Response 1: Thank you very much for this insightful comment. We agree that the original version of the theoretical framework section was overly descriptive and did not sufficiently articulate the rationale for combining the UTAUT and TAM3 models, nor its connection to sustainability. Accordingly, we have substantially revised this section to provide a stronger theoretical justification and a clearer conceptual link to sustainability. The revised text can be found on page 8, paragraphs 1–2, lines 344–365 of the revised manuscript.

 

Comment 2: The title highlights “Sustainable Transformation,” but then sustainability is treated only as “digital efficiency and transparency.” There should be a conceptual or empirical approach to organizational, environmental, or social sustainability. For example, the link between blockchain adoption and the Sustainable Development Goals (SDGs), green accounting, or digital sustainability could be strengthened to highlight blockchain’s contribution to more ethical, transparent, and responsible auditing practices.

Response 2: We sincerely thank the reviewer for this valuable observation. We acknowledge that the initial version of the manuscript primarily discussed sustainability from a digital efficiency and transparency perspective. In the revised version, we have broadened the conceptualization of sustainability to explicitly include organizational, environmental, and social dimensions and to align blockchain adoption with the Sustainable Development Goals (SDGs), green accounting, and digital sustainability principles.

 

To address this comment, the following revisions were made:

 

  • Introduction (Section 1, page 4, lines 172–182): We added a new paragraph explaining how blockchain contributes to sustainable transformation beyond efficiency, explicitly connecting it to SDG 9 (Industry, Innovation and Infrastructure), SDG 12 (Responsible Consumption and Production), and SDG 16 (Peace, Justice and Strong Institutions). This paragraph positions blockchain as an enabler of ethical and responsible auditing.

 

  • Literature Review (Section 2.2, Blockchain, Sustainability, and Accountable Accounting and Auditing; page 7, lines 323–341): A new subsection elaborates on the relationship between blockchain, green accounting, and digital sustainability, highlighting how blockchain supports ESG reporting, environmental accountability, and ethical transparency in auditing practices.

 

  • Discussion section (page 26, paragraph 2, lines 979–1987): We included a paragraph interpreting the findings through a sustainability lens, emphasizing how blockchain-based auditing enhances ethical accountability, reduces resource use, and aligns digital transformation with broader sustainability goals.

 

  • Practical Implications section (page 28, paragraph 2, lines 1069–1076): We added text illustrating how blockchain adoption can enable sustainable and ethical audit practices, supporting ESG and SDG-oriented auditing frameworks. We also suggest that professional bodies incorporate sustainability-oriented content in blockchain training for auditors.

 

These changes collectively ensure that the manuscript better reflects the “Sustainable Transformation” focus highlighted in the title by integrating conceptual and practical insights into sustainability-driven auditing. We believe these additions substantially enhance the manuscript’s theoretical depth and practical relevance by linking blockchain technology to ethical, environmental, and social sustainability outcomes in auditing.

 

Comment 3: The choice of PLS-SEM should be justified in terms of distributional limitations, sample size, and the theoretical model.

Response 3: We thank the reviewer for this valuable observation. Although the sample size (N = 353) was sufficient for Partial Least Squares Structural Equation Modeling (PLS-SEM), PLS-SEM was deliberately chosen for several methodological and theoretical reasons. The study’s primary objective is predictive rather than confirmatory, focusing on explaining variance in the endogenous constructs. Moreover, the model includes both reflective and formative constructs, and preliminary tests indicated deviations from multivariate normality, making PLS-SEM more appropriate. Additionally, the model’s complexity—with multiple latent constructs and numerous indicators—further supports this methodological choice. We have revised the methodology section (Pages 17 and 18, lines 672 to 698) to clarify these justifications and cited recent methodological authorities (Hair et al., 2019) to support this decision.

Comment 4: The results should be interpreted critically and grounded in theory. For example, the lack of significance of CSE → EE or CO → IU would warrant a theoretical explanation (e.g., low professional technological upgrading, institutional barriers, cultural context).

Response 4: Thank you for pointing this out. We agree with this valuable comment. Therefore, we have revised the interpretation of results to include a more critical, theory-based explanation for the non-significant relationships between Computer Self-Efficacy (CSE) → Effort Expectancy (EE) and Compatibility (CO) → Intention to Use (IU).

Comment 5: Finally, the authors should state the specific implications of the study and the limitations in concrete terms.

Response 5: Thank you for pointing this out. We agree with this valuable comment. Accordingly, we have expanded the Implications and Limitations sections to provide more concrete, practice-oriented recommendations and a clearer discussion of the study’s constraints. These revisions can be found on pages 25–27, with specific additions on page 25, lines 235–242 and on page 26, lines 966–988 for implications and page 26, lines 1067–1078 for limitations.

Comment 6: "Regarding the references, it is advisable to include more recent references. Most are published before 2021. There are many papers from 2022 to 2025 on digital transformation and accounting sustainability. Tables should consist of footnotes with a source ('Prepared by the authors using SmartPLS 4.0') and present significance levels in standard format (p < 0.001; p < 0.01)."

Response 6: We sincerely appreciate the reviewer’s valuable suggestion. In response, we have thoroughly updated the references to include recent and relevant studies published between 2022 and 2025 that focus on digital transformation and accounting sustainability. These additions strengthen the theoretical foundation and enhance the currency of the manuscript. Furthermore, all tables have been revised to include footnotes indicating the data source (e.g., “Prepared by the authors using SmartPLS 4.0”). The significance levels have also been standardized according to conventional reporting formats (p < 0.001; p < 0.01; p < 0.05) to ensure clarity and consistency throughout the manuscript. We believe these revisions have improved both the rigor and presentation quality of the paper.

Comment 7: Consequently, Figure 1 (conceptual model) is unformatted and should be labelled.

Response 7: We appreciate the reviewer’s feedback. Figure 1 has been reformatted to ensure consistency and clarity. The layout, alignment, and text formatting have been improved, and appropriate labels have been added to all elements of the conceptual model. The updated figure is now clearly titled and properly captioned in accordance with the journal’s formatting guidelines.

Comment 8: From a linguistic perspective, there is some redundancy to be avoided.

Response 8: We appreciate the reviewer’s observation. We have carefully reviewed the manuscript and revised the text to remove linguistic redundancies and enhance conciseness and clarity throughout.
