Development and TAM-Based Validation of a User Experience Scale for Actual System Use in Online Courses
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
Dear Authors,
I enjoyed reading the Methods and Results sections. I do have a couple of minor comments, though. One: regarding the participants from Credamo, you do not mention whether they were compensated or what experience they had with online courses (your article's topic). Then, after the original extraction, when you retained 45 items, it is good practice to rename the indicators according to what they actually measure. For example, for the CQ construct, items CQ-4, CQ-5, CQ-7, and CQ-9 all load on the same indicator; you left it as Knowledge Presentation, but reading through the items, they actually represent a larger concept, such as Course Quality. Similarly, when TQ-7, TQ-9, TS-3, and TS-4 load on the same indicator and you leave it as Course Structure and Organization, I would argue that these items rather represent Fair Assessment. Please look into these details; overall, though, the Methods and Results parts look very good.
Where I had a problem with the manuscript is in its theoretical foundations and the purpose of the study. The problem begins right in the introduction, where you mention shortcomings of existing research such as “learning interactivity, limited course content attractiveness, inadequate technical support, and challenges in meeting personalized learning needs” (p. 1, ll. 28–29): all of these variables have already been covered in various TAM-based studies of e-learning. I looked for the specific studies in your literature review section but did not find any. Neither the literature review nor the references include the extensive TAM research on e-learning conducted over the last 20 years. Almost all of the external variables presented as new in this study were explored in prior work under different names, such as media quality (Content Quality), system quality (Technical Support), perceived interaction (Interactive Experience), and relevance for learning (Learning Outcome). While the names may sound different, the measurement items used in those studies closely resemble the ones in the manuscript. Moreover, satisfaction, mentioned but not referenced by the authors, was also explored as a dependent variable in at least one of those studies. User Experience, presented by the authors as a dependent variable, has been explored elsewhere as an exogenous variable. Finally, the items the authors developed for Actual System Use differ from those of the TAM and can be argued to reflect the constructs Attitude Toward Using and Behavioral Intention to Use rather than actual use.
Overall, the suggestion for the authors is to include the corresponding TAM literature on e-learning and to position their study within this existing body of work.
Author Response
Dear reviewer 1:
Thank you very much for your insightful comments, and for giving us the opportunity to correct the shortcomings of our manuscript. We have revised the manuscript according to your advice; please read our responses below together with the revised manuscript. We sincerely hope that the revision addresses your comments. If any shortcomings remain, please let us know, and we will do our best to improve the manuscript further. Thank you again for your insightful comments, understanding, and help!
Regards,
Mei Wang
Reviewer #1:
I read your comments carefully and distilled them down to the following points:
COMMENT 1: "One: regarding the participants from Credamo, you do not mention whether they were compensated or what experience they had with online courses (your article's topic)."
RESPONSE: Thank you very much for the insightful comments. Following your advice, we have revised the manuscript to explain this point clearly. The Credamo platform has a large user base encompassing individuals from various regions across China. Participation in this study was entirely voluntary, and before beginning the questionnaire, respondents were informed of the relevant participation guidelines, including an informed consent statement. As noted in the manuscript, participants who completed the survey and passed data validation received a reward of 5 RMB. To ensure data quality, multiple restrictions were applied, such as limiting participants to current university students and requiring a high credibility score; because of these strict criteria, the platform charged a relatively high service fee to match eligible respondents accurately. Importantly, responses submitted in an unreasonably short time or with uniform answers were excluded, and the corresponding participants were not compensated. Notably, Generation Z college students, who are digital natives, value personalized, efficient, and interactive learning experiences, and since 2020 they have generally been exposed to online courses. The demographic section of the questionnaire also included an item on average daily time spent on online learning, confirming that participants had relevant learning experience. Please see the revised manuscript, thank you!
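For illustration only, the screening rule described above (dropping responses completed in an unreasonably short time or giving the same answer to every item) could be expressed as the following minimal Python sketch; the file name, the column names (duration_sec and the q* item columns), and the 60-second threshold are hypothetical assumptions, not the authors' actual procedure:

import pandas as pd

# Load the raw platform export (hypothetical file and column names).
df = pd.read_csv("credamo_responses.csv")
item_cols = [c for c in df.columns if c.startswith("q")]  # Likert item columns

# Rule 1: completion time unreasonably short (60 s threshold is an assumption).
too_fast = df["duration_sec"] < 60

# Rule 2: uniform ("straight-lined") answers, i.e. one distinct value per row.
uniform = df[item_cols].nunique(axis=1) == 1

clean = df[~(too_fast | uniform)]
print(f"kept {len(clean)} of {len(df)} responses")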
COMMENT 2: "Then, after the original extraction, when you retained 45 items, it is good practice to rename the indicators according to what they actually measure. For example, for the CQ construct, items CQ-4, CQ-5, CQ-7, and CQ-9 all load on the same indicator; you left it as Knowledge Presentation, but reading through the items, they actually represent a larger concept, such as Course Quality. Similarly, when TQ-7, TQ-9, TS-3, and TS-4 load on the same indicator and you leave it as Course Structure and Organization, I would argue that these items rather represent Fair Assessment. Please look into these details; overall, though, the Methods and Results parts look very good."
RESPONSE: Thank you very much for the insightful comments. After the exploratory factor analysis, it is indeed necessary to classify the items under the corresponding secondary indicators on the basis of the extracted factor dimensions. These classifications are now clearly marked in the table according to the factor structure, allowing readers to better follow this section, and they are presented explicitly in both the main text and Appendix A.4. Please see the revised manuscript, thank you!
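As a purely illustrative sketch of this regrouping step (the analysis itself was presumably run in a standard statistics package; the factor count, rotation method, and input file below are assumptions), each retained item can be assigned to the extracted factor on which it loads most strongly, and the resulting groups then named after what their items jointly measure:

import pandas as pd
from factor_analyzer import FactorAnalyzer

# One column per retained item (e.g. CQ-4, TQ-7, ...); hypothetical file.
responses = pd.read_csv("item_scores.csv")

# Extract rotated factors (6 factors and varimax rotation are assumptions).
fa = FactorAnalyzer(n_factors=6, rotation="varimax")
fa.fit(responses)

loadings = pd.DataFrame(fa.loadings_, index=responses.columns)

# Assign each item to the factor with the highest absolute loading.
dominant_factor = loadings.abs().idxmax(axis=1)
print(dominant_factor.sort_values())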
COMMENT 3: "Where I had a problem with the manuscript is in its theoretical foundations and the purpose of the study. The problem begins right in the introduction, where you mention shortcomings of existing research such as “learning interactivity, limited course content attractiveness, inadequate technical support, and challenges in meeting personalized learning needs” (p. 1, ll. 28–29): all of these variables have already been covered in various TAM-based studies of e-learning. I looked for the specific studies in your literature review section but did not find any. Neither the literature review nor the references include the extensive TAM research on e-learning conducted over the last 20 years. Almost all of the external variables presented as new in this study were explored in prior work under different names, such as media quality (Content Quality), system quality (Technical Support), perceived interaction (Interactive Experience), and relevance for learning (Learning Outcome). While the names may sound different, the measurement items used in those studies closely resemble the ones in the manuscript. Moreover, satisfaction, mentioned but not referenced by the authors, was also explored as a dependent variable in at least one of those studies. User Experience, presented by the authors as a dependent variable, has been explored elsewhere as an exogenous variable. Finally, the items the authors developed for Actual System Use differ from those of the TAM and can be argued to reflect the constructs Attitude Toward Using and Behavioral Intention to Use rather than actual use."
RESPONSE: Thank you very much for the insightful comments. The manuscript has been enhanced by adding a clear statement of the research questions (Section 1.1) and research objectives (Section 1.2), which further improves the clarity and logical structure of the paper. Section 1.3 provides a systematic review of various user experience environments in the context of online teaching, including user experience in MOOCs, in intelligent learning environments, in general online teaching, in online courses, and on online teaching platforms. It also summarizes the core components of user experience across different educational contexts. Section 1.4 further reviews a range of extended models based on the Technology Acceptance Model (TAM), identifying the key variables commonly involved in related studies, thereby laying a solid theoretical foundation for the subsequent empirical research.
In addition, relevant literature sources have been added to the table of primary and secondary variable definitions in Appendix A.1. Similarly, clear citations and sources have also been provided in Appendix A.2 and Appendix A.3, ensuring the transparency and academic rigor of the research process.
Indeed, some variables in the manuscript, although differing in nomenclature from those used in existing studies, share similarities in terms of measurement items. These prior variables are often derived from studies on various types of systems, such as entertainment video platforms, e-commerce websites, ride-hailing applications, and hotel booking systems. In contrast, the present study focuses specifically on the context of online courses, with measurement dimensions and item design that are more targeted and systematic. The initial version of the scale includes six primary dimensions and a total of 64 measurement items, covering key aspects such as online course platforms, teaching teams, and technical support. This comprehensive framework allows for a more accurate representation of users’ actual experiences in the context of online learning.
Please see the revised manuscript, thank you!
For further details, please see our revised manuscript. We thank you for your comments and for the opportunity to improve our work. As far as possible, all of your questions were taken into account in preparing the revised manuscript, and we hope that it is now suitable for publication.
Author Response File: Author Response.pdf
Reviewer 2 Report
Comments and Suggestions for Authors
The manuscript presents a valuable and methodologically rigorous study on the development and validation of a user experience (UX) scale for online course users, grounded in an extended Technology Acceptance Model (TAM). The topic is highly relevant and timely, and the integration of pedagogical, technical, and motivational factors into a unified structural model is a notable contribution to the literature on educational technology. The use of expert validation, EFA, CFA, and SEM confirms the scientific quality of the study.
However, several aspects of the manuscript should be revised for greater clarity and coherence. First, Figure 1—which presents the central conceptual model—is not followed by any descriptive paragraph or interpretation. The reader is immediately directed to the Results section without a clear explanation of the model’s structure, the role of each component, or the theoretical basis for the relationships presented. A concise interpretive paragraph should be added directly after Figure 1 to bridge this gap.
Second, although the six dimensions of user experience (Interactive Experience, Content Quality, Learning Outcome, Teaching Quality, Technical Support, Learning Motivation) are described in the literature review, the manuscript does not provide a consolidated mapping between these components and the specific sources from which they were derived. To strengthen transparency and traceability, a summary table could be included, linking each construct to its theoretical foundation.
Third, although the manuscript presents a well-defined conceptual model and applies structural equation modeling (SEM), it does not include any explicitly stated research questions or hypotheses. At a minimum, the authors should formulate key research questions or theoretical assumptions that reflect the rationale behind the tested relationships (e.g., in the Introduction).
Fourth, the description of the sample requires greater specificity. The manuscript provides basic demographic data such as age, gender, and online learning experience, but omits important contextual information such as participants’ field of study, institutional affiliation, and geographic location. These details are essential for assessing the generalizability of the findings and should be included (in Section 2).
Fifth, the manuscript alternates between the abbreviations “UX” and “UE” when referring to user experience. To ensure terminological consistency and avoid confusion, one form should be selected and applied uniformly throughout the text.
Finally, the paper would benefit from light language editing. Several sentences, especially in the abstract and introduction, are unnecessarily wordy or repetitive. Streamlining the language will improve clarity and readability.
In conclusion, this is a strong and publishable manuscript that makes an important contribution to the field of online education research. With the suggested revisions primarily related to figure interpretation, theoretical transparency, and presentation structure, the paper will meet the standards for publication.
Comments on the Quality of English Language
The manuscript is generally clear and grammatically correct. However, it would benefit from light language editing to improve clarity, eliminate redundancies, and streamline sentence structure, specifically in the abstract and introduction. In some places, expressions are repetitive (e.g., “user experience” mentioned twice in one sentence in lines 33–34), and certain sentences are overly long or stylistically heavy. Terminology such as "UX / UE" should also be made consistent throughout the manuscript.
Author Response
Dear reviewer 2:
Thank you very much for your insightful comments, and for giving us the opportunity to correct the shortcomings of our manuscript. We have revised the manuscript according to your advice; please read our responses below together with the revised manuscript. We sincerely hope that the revision addresses your comments. If any shortcomings remain, please let us know, and we will do our best to improve the manuscript further. Thank you again for your insightful comments, understanding, and help!
Regards,
Mei Wang
Reviewer #2:
COMMENT 1: First, Figure 1—which presents the central conceptual model—is not followed by any descriptive paragraph or interpretation. The reader is immediately directed to the Results section without a clear explanation of the model’s structure, the role of each component, or the theoretical basis for the relationships presented. A concise interpretive paragraph should be added directly after Figure 1 to bridge this gap.
RESPONSE: Thank you very much for the insightful comments. Indeed, the initial version of the manuscript did not provide detailed descriptions of the hypothesized paths following Figure 1, which may have hindered readers' understanding and the overall coherence of the proposed research model. To enhance the clarity and logical flow of the text, the manuscript has been revised to include explicit descriptions of all 14 hypothesized paths immediately after Figure 1. Each path is now clearly explained in terms of the relationship between its variables, thereby facilitating a more comprehensive understanding of the theoretical framework and hypothesis structure underlying this study. Please see the revised manuscript, thank you!
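For readers who wish to see how such a path structure can be specified, the sketch below uses the Python semopy package with lavaan-style syntax. The construct abbreviations follow the paper's dimensions, but the indicator names and the particular paths shown are placeholders for illustration, not the authors' actual 14-path model:

import pandas as pd
from semopy import Model

# Illustrative model description; the external variables (IE, CQ, TQ, TS,
# LM, LO) are treated here as observed composite scores in the data file.
desc = """
PEOU =~ peou1 + peou2 + peou3
PU   =~ pu1 + pu2 + pu3
UE   =~ ue1 + ue2 + ue3
ASU  =~ asu1 + asu2 + asu3

PU  ~ PEOU + CQ + TQ
UE  ~ PU + PEOU + IE + TS + LM + LO
ASU ~ UE + PU
"""

data = pd.read_csv("survey_scores.csv")  # hypothetical wide-format data
model = Model(desc)
model.fit(data)
print(model.inspect())  # path estimates, standard errors, p-values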
COMMENT 2: Second, although the six dimensions of user experience (Interactive Experience, Content Quality, Learning Outcome, Teaching Quality, Technical Support, Learning Motivation) are described in the literature review, the manuscript does not provide a consolidated mapping between these components and the specific sources from which they were derived. To strengthen transparency and traceability, a summary table could be included, linking each construct to its theoretical foundation.
RESPONSE: Thank you very much for the insightful comments. The manuscript has been enhanced by adding a clear statement of the research questions (Section 1.1) and research objectives (Section 1.2), which further improves the clarity and logical structure of the paper. Section 1.3 provides a systematic review of user experience across online teaching environments, including MOOCs, intelligent learning environments, general online teaching, online courses, and online teaching platforms, and summarizes the core components of user experience in these educational contexts. Section 1.4 further reviews a range of extended models based on the Technology Acceptance Model (TAM), identifying the key variables commonly involved in related studies and thereby laying a solid theoretical foundation for the subsequent empirical research.
In addition, relevant literature sources have been added to the table of primary and secondary variable definitions in Appendix A.1. Similarly, clear citations and sources have been provided in Appendix A.2 and Appendix A.3, ensuring the transparency and academic rigor of the research process. Please see the revised manuscript, thank you!
COMMENT 3: Third, although the manuscript presents a well-defined conceptual model and applies structural equation modeling (SEM), it does not include any explicitly stated research questions or hypotheses. At a minimum, the authors should formulate key research questions or theoretical assumptions that reflect the rationale behind the tested relationships (e.g., in the Introduction).
RESPONSE: Thank you very much for the insightful comments. As outlined above, this study now incorporates clearly defined research questions and objectives, thereby clarifying the core focus of the investigation. It provides a systematic review of the research context concerning user experience in online teaching environments and summarizes relevant extensions of the Technology Acceptance Model (TAM). On this foundation, the study formulates specific research hypotheses. Furthermore, the theoretical underpinnings of each construct and its corresponding measurement items are thoroughly elaborated, ensuring that the research framework is grounded in solid theory and holds substantial practical value. Please see the revised manuscript, thank you!
COMMENT 4: Fourth, the description of the sample requires greater specificity. The manuscript provides basic demographic data such as age, gender, and online learning experience, but omits important contextual information such as participants’ field of study, institutional affiliation, and geographic location. These details are essential for assessing the generalizability of the findings and should be included (in Section 2).
RESPONSE: Thank you very much for the insightful comments. The manuscript has been revised to include demographic information about the participants (Table 5), allowing readers to gain a clearer understanding of the sample sources and participants' prior usage experience. In addition, the rationale for selecting the Credamo platform has been articulated, highlighting its advantages in participant recruitment, screening mechanisms, and compensation scheme. These additions enhance the reliability of the data and the transparency of the overall research design. Please see the revised manuscript, thank you!
COMMENT 5: Fifth, the manuscript alternates between the abbreviations “UX” and “UE” when referring to user experience. To ensure terminological consistency and avoid confusion, one form should be selected and applied uniformly throughout the text.
RESPONSE: Thank you very much for the insightful comments. “UX” and “UE” are both commonly used abbreviations for “user experience.” To avoid the ambiguity caused by inconsistent terminology, this study now adopts “UE” as the standardized abbreviation throughout the manuscript. Please see the revised manuscript, thank you!
COMMENT 6: Finally, the paper would benefit from light language editing. Several sentences, especially in the abstract and introduction, are unnecessarily wordy or repetitive. Streamlining the language will improve clarity and readability.
RESPONSE: Thank you very much for the insightful comments. The manuscript did contain instances of imprecise or overly wordy language. To address this, the text has undergone thorough language editing to improve clarity, conciseness, and overall readability. Please see the revised manuscript, thank you!
COMMENT 7: The manuscript is generally clear and grammatically correct. However, it would benefit from light language editing to improve clarity, eliminate redundancies, and streamline sentence structure, specifically in the abstract and introduction. In some places, expressions are repetitive (e.g., “user experience” mentioned twice in one sentence in lines 33–34), and certain sentences are overly long or stylistically heavy. Terminology such as "UX / UE" should also be made consistent throughout the manuscript.
RESPONSE: Thank you very much for the insightful comments, and we sincerely appreciate your recognition of our team's work. We have carefully revised the manuscript in response to each suggestion, focusing primarily on clarifying the figure interpretations, enhancing theoretical transparency, and improving the overall presentation structure. Please see the revised manuscript, thank you!
For further details, please see our revised manuscript. We thank you for your comments and for the opportunity to improve our work. As far as possible, all of your questions were taken into account in preparing the revised manuscript, and we hope that it is now suitable for publication.
Author Response File: Author Response.pdf