Article
Peer-Review Record

Investigating Teachers’ Changing Perceptions Towards MOOCs Through the Technology Acceptance Model

Educ. Sci. 2025, 15(10), 1395; https://doi.org/10.3390/educsci15101395
by Patrick Camilleri 1,*, Abeer Watted 2 and Michelle Attard Tonna 1
Reviewer 1: Anonymous
Reviewer 2:
Submission received: 20 August 2025 / Revised: 2 October 2025 / Accepted: 13 October 2025 / Published: 17 October 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors
  1. Introduction

The Introduction section primarily focuses on two components: self-efficacy and the Technology Acceptance Model (TAM). However, the relationship between these two core constructs has not been clearly elaborated.

In the section explaining the TAM, additional context-specific research should be supplemented, including: (1) prior applications of TAM in the context of Massive Open Online Courses (MOOCs); and (2) existing studies that have employed TAM to investigate teacher populations. This will strengthen the theoretical grounding of the current research.

 

  2. Materials and Methods

Further clarification is required regarding the reliability and validity of the questionnaire used. Specifically, the manuscript should address: (1) which existing literature or validated instruments the questionnaire was adapted from; and (2) how the collected data demonstrate adequate reliability (e.g., Cronbach’s α coefficients) and validity (e.g., content validity, construct validity) indicators.

 

  3. Results

The Results section primarily reports descriptive statistics of the questionnaire data, with no application of more sophisticated statistical methods (e.g., inferential statistics, regression analysis, or structural equation modeling). As a result, the insights derived from the data are relatively limited. This limitation should be acknowledged in the writing of the Results section and further elaborated in the Limitations subsection.

 

  4. Discussion

The logical flow of the Discussion section is insufficiently clear. Additionally, the research contributions of this study (e.g., theoretical advancements, practical implications, or new insights for MOOC-based teacher education) are not explicitly articulated. A more structured synthesis of the study’s contributions is recommended.

 

Other Detailed Comments

The term "Massive Open Online Courses (MOOCs)" is repeatedly defined or mentioned in close proximity (e.g., Line 32 & Line 36). To improve readability, avoid redundant definitions after the initial introduction.

Similarly, "Technology Acceptance Model (TAM)" is excessively repeated (e.g., Line 40, Line 118, Line 126). After the full name and abbreviation are first provided, the abbreviation "TAM" can be used consistently thereafter.

The manuscript includes unmotivated formatting (e.g., bold text in Line 39, italicized text in Line 43) without explanation. Please clarify the purpose of such formatting or standardize it in accordance with the journal’s guidelines.

There is a scarcity of recent references (i.e., studies published in 2022 or later). Supplementing up-to-date, relevant literature will help contextualize the study’s contributions within current research and highlight its timeliness.

There are issues with citation formatting. For example, "Author 2 (a), 2023" (Line 92) lacks clear bibliographic information, making it impossible to identify the referenced work. Please standardize all citations and ensure complete, traceable references.

Author Response

Responses to reviewers’ comments - Reviewer 1

  1. Introduction (Changes are in Blue in this section; pages 1-5)

Comment: The Introduction section primarily focuses on two components: self-efficacy and the Technology Acceptance Model (TAM). However, the relationship between these two core constructs has not been clearly elaborated.

Response: We thank the reviewers for their constructive feedback. In response to the comment that “the relationship between self-efficacy and the Technology Acceptance Model (TAM) has not been clearly elaborated”, we have revised the Introduction to explicitly strengthen this connection.

In particular, we have:

  • Added passages that clarify how self-efficacy functions as a precursor to TAM constructs: teachers with higher self-efficacy are more likely to perceive technology as easier to use (PEoU) and more useful (PU).
  • Highlighted how self-efficacy explains the underlying confidence and motivation that feed into TAM’s behavioural intention (BI) and actual use (AU).
  • Ensured that throughout the Introduction, self-efficacy and TAM are presented as complementary frameworks, together offering a comprehensive explanation of teachers’ perceptions and attitudinal changes towards MOOCs.

Comment: In the section explaining the TAM, additional context-specific research should be supplemented, including: (1) prior applications of TAM in the context of Massive Open Online Courses (MOOCs); and (2) existing studies that have employed TAM to investigate teacher populations. This will strengthen the theoretical grounding of the current research.

Response: We have duly expanded the section explaining the Technology Acceptance Model (TAM) to include context-specific studies. Specifically, we have:

  1. Added prior applications of TAM in the MOOC context, for example Alraimi et al. (2015), who examined learners’ continuance intention in MOOCs using TAM, and Escobar-Rodríguez and Monge-Lozano (2012), who investigated e-learning adoption, directly relevant to the MOOC setting.
  2. Included studies employing TAM with teacher populations, for instance Teo (2009), who modelled technology acceptance among pre-service teachers in Singapore, and Sánchez-Prieto et al. (2016), who studied Spanish pre-service teachers’ acceptance of mobile learning using TAM.

These additions, now highlighted in the revised manuscript, strengthen the theoretical foundation of our study by directly linking TAM to both MOOCs and teacher populations, thereby justifying its use as our guiding framework.

  2. Materials and Methods (Changes are in Blue in this section; pages 8-10)

Comment: Further clarification is required regarding the reliability and validity of the questionnaire used. Specifically, the manuscript should address: (1) which existing literature or validated instruments the questionnaire was adapted from; and (2) how the collected data demonstrate adequate reliability (e.g., Cronbach’s α coefficients) and validity (e.g., content validity, construct validity) indicators.

Response: We greatly appreciate this important observation, which has helped us strengthen the transparency and rigor of our methodology. In the revised manuscript, we have expanded the description of the research instrument to provide greater clarity, as follows:

The quantitative data were collected through two structured questionnaires administered before and after the MOOC experience. The questionnaires were adapted from the Technology Acceptance Model (TAM) developed by Davis (1989). Items were selected and modified to reflect the context of MOOCs in teacher education. Both questionnaires used a five-point Likert scale ranging from 1 (“strongly disagree”) to 5 (“strongly agree”).

The pre-questionnaire was designed to capture participants’ initial perceptions of MOOCs prior to engaging with the course. It consisted of two parts:

Part 1: Demographic data and basic information: place of residence, gender, age, level of education, academic field, and teaching experience.

Part 2: It included two scales as expressed through TAM (Davis, 1989), focusing on:

  • Perceived Usefulness (PU): Six items assessed participants’ beliefs about the potential value of MOOCs for learning and professional development. Items addressed aspects such as the flexibility of MOOCs, their contribution to staying updated with educational trends, access to experts and diverse courses, and concerns about recognition by employers and quality compared to traditional learning. This scale demonstrated high reliability (Cronbach’s α = 0.832).
  • Perceived Ease of Use (PEoU): Five items measured participants’ expectations regarding the usability and accessibility of MOOCs. Items included both advantages (e.g., the ability to learn at one’s own pace, ease of navigation) and concerns (e.g., self-discipline, lack of interaction, time commitment). The reliability of this scale was moderate (Cronbach’s α = 0.620).

The post-questionnaire was administered after participants completed the MOOC and was designed to assess changes in perceptions as well as future intentions to use MOOCs. It included three scales:

  • Perceived Usefulness (PU): Expanded to eight items, this scale assessed participants’ reflections on the value of MOOCs after completion. In addition to the dimensions included in the pre-questionnaire, new items addressed the ability of MOOCs to connect participants to a global learning community, to enhance student motivation, and to provide access to high-quality education. The scale demonstrated strong reliability (Cronbach’s α = 0.860).
  • Perceived Ease of Use (PEoU): Three items examined participants’ evaluations of usability after the learning experience. Items emphasized the flexibility of self-paced learning, ease of use, and the potential of MOOCs to incorporate innovative teaching strategies. The scale showed acceptable reliability (Cronbach’s α = 0.680).
  • Behavioral Intention (BI): Four items measured participants’ intention to integrate MOOCs into their future teaching practice. These included overall impressions of MOOCs, consideration of MOOCs as tools to enhance classroom instruction, and the intention to rely on MOOCs as a primary teaching medium. While conceptually aligned with the TAM framework, this scale demonstrated lower internal consistency (Cronbach’s α = 0.560), and the results should therefore be interpreted with caution.

Content validity of the pre- and post-questionnaires was established in two stages. First, items were drawn from validated TAM-based questionnaires reported in the literature (Davis, 1989), ensuring strong alignment with the constructs of Perceived Usefulness (PU), Perceived Ease of Use (PEoU), and Behavioral Intention (BI). Second, the adapted items were reviewed by three experts in educational technology and two in teacher education, who evaluated clarity, relevance, and appropriateness for pre-service teacher populations. Based on their feedback, several items were refined to enhance contextual accuracy. Construct validity was supported by the theoretical structure of TAM: PU items consistently reflected beliefs about the value of MOOCs for professional growth and learning, PEoU items captured usability and accessibility, and BI items assessed intention to adopt MOOCs in future teaching practice. The strong theoretical coherence between items and constructs provides further evidence of validity.
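For readers unfamiliar with the reliability coefficients reported above, the standard computation of Cronbach’s α can be sketched as follows. The score matrix in the example is purely illustrative and is not the study’s data; the function name is our own.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) matrix of item scores.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of the summed scale)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]                          # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)       # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative 5-point Likert responses: 3 respondents x 2 items.
scores = np.array([[1, 2],
                   [2, 1],
                   [3, 3]])
alpha = cronbach_alpha(scores)
```

Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is why the BI scale’s α = 0.560 is flagged as requiring cautious interpretation.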

 

  3. Results (Changes are in Blue in this section; pages 17 and 20)

Comment: The Results section primarily reports descriptive statistics of the questionnaire data, with no application of more sophisticated statistical methods (e.g., inferential statistics, regression analysis, or structural equation modeling). As a result, the insights derived from the data are relatively limited. This limitation should be acknowledged in the writing of the Results section and further elaborated in the Limitations subsection.

Response: We thank the reviewer for this important observation. We fully agree that our analysis relied primarily on descriptive statistics and did not employ advanced statistical modeling techniques such as regression analysis or structural equation modeling. This approach was nonetheless sufficient for addressing the research questions: the choice reflects both the exploratory nature of the study and our aim to capture overall trends in participants’ perceptions rather than to generalise findings across populations or to model causal relationships.

At the same time, we emphasise that our qualitative analysis of students’ responses to open-ended questions adds an important complementary perspective. These qualitative findings enrich the descriptive statistics by highlighting participants’ experiences in their own words, illustrating how they navigated challenges, developed digital literacy, and recognized the professional value of MOOCs. By integrating both quantitative and qualitative data, we believe the study provides a more comprehensive picture of pre-service teachers’ perceptions.

In the revised manuscript, we have addressed this point in two ways. Firstly, we added an explicit acknowledgement in the Results section as follows:

Finally, the study primarily relied on descriptive statistics and reliability testing to capture changes in participants’ perceptions across time. This approach was sufficient for addressing the research questions and provided a clear picture of overall trends within the sample. Nonetheless, the inclusion of qualitative analysis of participants’ responses to open-ended questions strengthens the findings by offering richer detail and triangulation, providing context and meaning to the trends identified in the descriptive statistics.

Secondly, we elaborated on this issue in the Limitations subsection, clarifying that while descriptive analyses provide useful insights, they constrain the scope of conclusions that can be drawn.

Finally, the study primarily relied on descriptive statistics and reliability testing to capture changes in participants’ perceptions across time. This approach was sufficient for addressing the research questions. Nonetheless, the inclusion of qualitative analysis of participants’ responses to open-ended questions strengthens the findings by offering richer detail and triangulation, providing context and meaning to the trends identified in the descriptive statistics. Future research should therefore incorporate inferential and multivariate methods to strengthen the generalizability and depth of findings.

  4. Discussion (Changes are in Blue in this section; pages 18-20)

Comment: The logical flow of the Discussion section is insufficiently clear. Additionally, the research contributions of this study (e.g., theoretical advancements, practical implications, or new insights for MOOC-based teacher education) are not explicitly articulated. A more structured synthesis of the study’s contributions is recommended.

Response: The Discussion is now more clearly structured, with keywords and subheadings used to signpost the significant parts of the argument.

  5. Other Detailed Comments (Changes are in Blue)

Comment: The term "Massive Open Online Courses (MOOCs)" is repeatedly defined or mentioned in close proximity (e.g., Line 32 & Line 36). To improve readability, avoid redundant definitions after the initial introduction.

Response: This has now been addressed.

Comment: Similarly, "Technology Acceptance Model (TAM)" is excessively repeated (e.g., Line 40, Line 118, Line 126). After the full name and abbreviation are first provided, the abbreviation "TAM" can be used consistently thereafter.

Response: This has now been addressed.

Comment: The manuscript includes unmotivated formatting (e.g., bold text in Line 39, italicized text in Line 43) without explanation. Please clarify the purpose of such formatting or standardize it in accordance with the journal’s guidelines.

Response: This has now been addressed.

Comment: There is a scarcity of recent references (i.e., studies published in 2022 or later). Supplementing up-to-date, relevant literature will help contextualize the study’s contributions within current research and highlight its timeliness.

Response: This has now been addressed.

Comment: There are issues with citation formatting. For example, "Author 2 (a), 2023" (Line 92) lacks clear bibliographic information, making it impossible to identify the referenced work. Please standardize all citations and ensure complete, traceable references.

Response: This has now been addressed.

Reviewer 2 Report

Comments and Suggestions for Authors

[Line 213-228] If I understand the presented data correctly, the sample (which would be more accurately described as a convenience sample) is predominantly female (92%), while males account for only 8%. This gender imbalance may limit the generalizability of the findings, particularly if perceptions or experiences with MOOCs differ by gender.

Similarly, the geographic distribution is both limited and uneven: 79% of participants are Israeli, and only 21% are Maltese, suggesting that participants were included based on availability rather than systematic selection. Cultural or educational system differences may also have influenced the responses.

The age distribution is likewise unbalanced, with a strong concentration of young adults. Perceptions of MOOCs may differ substantially between younger participants and more experienced individuals; the predominance of younger participants therefore reduces the external validity of the results for other age groups. The distribution of teaching experience also appears somewhat arbitrary: 71% of participants are students without teaching experience, while only 29% have any professional experience.

Moreover, participants voluntarily completed a MOOC, introducing a clear self-selection bias. Individuals who choose to enroll in and complete such a course are likely to be more motivated or technologically proficient than the average teacher, which further limits the generalizability of the results.

The "sample" size (N=144) is relatively small, particularly given the division across multiple subgroups (country, age, teaching experience, field of study); consequently, statistical analyses of these subgroups (e.g., experienced teachers) may have limited statistical power, and broad inferences should be drawn with caution.

[Line 512-531] Since the sample was predominantly young, motivated, technologically competent, and self-selected, it was virtually inevitable that participants would exhibit a generally positive attitude toward MOOCs even before engaging with them.

[around line 540] With a sample of only 144 participants, especially when divided into small and non-representative subgroups, percentiles are unstable, and any interpretations based on them must be approached with extreme caution, limited to describing the specific sample without generalization.

I suggest revising the description to clarify the real limitations of this research, as reported here, so that readers understand that although everything appears to have gone well, many biases contribute to this result. Alternatively, the sample could be extended, but I do not see this as necessary if all biases are taken into account, rather than presenting the research as perfectly successful at all costs.

 

Author Response

Responses to reviewers’ comments - Reviewer 2

Comments and Suggestions for Authors-(Changes are in Blue)

Comment: [Line 213-228] If I understand the presented data correctly, the sample (which would be more accurately described as a convenience sample) is predominantly female (92%), while males account for only 8%. This gender imbalance may limit the generalizability of the findings, particularly if perceptions or experiences with MOOCs differ by gender.

Similarly, the geographic distribution is both limited and uneven: 79% of participants are Israeli, and only 21% are Maltese, suggesting that participants were included based on availability rather than systematic selection.

Cultural or educational system differences may also have influenced the responses. The age distribution is likewise unbalanced, with a strong concentration of young adults.

Perceptions of MOOCs may differ substantially between younger participants and more experienced individuals; the predominance of younger participants therefore reduces the external validity of the results for other age groups.

 The distribution of teaching experience also appears somewhat arbitrary: 71% of participants are students without teaching experience, while only 29% have any professional experience.

Response: We thank the reviewer for this detailed observation. We agree that our sample should be described as a convenience sample of voluntary MOOC completers, and we now make this explicit in the revised manuscript (Methods – Participants; Limitations).

At the same time, we note that these characteristics — namely, the strong predominance of female participants, the youth of the cohort, and the limited professional experience — are typical features of teacher education programs in our context. Thus, while they introduce sampling imbalance when viewed against the entire population of practicing teachers, they also reflect the real composition of pre-service teacher cohorts, thereby providing ecological validity within this educational setting.

Comment [Line 512-531] Since the sample was predominantly young, motivated, technologically competent, and self-selected, it was virtually inevitable that participants would exhibit a generally positive attitude toward MOOCs even before engaging with them.

Response: We thank the reviewer for this important point. We agree that the demographic profile of our participants (young, self-selected, and technologically inclined) likely predisposed them toward favorable attitudes. However, we wish to clarify that the outcome was not inevitable. In fact, 210 students initially enrolled in the MOOC, all of whom shared the same demographic characteristics (young, predominantly female, pre-service teachers). Of these, only 144 participants completed the course and were therefore included in the study. This attrition rate is consistent with the broader MOOC literature, which reports very high dropout levels. Thus, our findings reflect the perceptions of the subgroup that persisted and successfully completed the course — a subgroup that cannot be assumed to represent all initial registrants. We have clarified this point in the Limitations section to highlight that our results apply to MOOC completers rather than to all enrollees.

Comment [around line 540] With a sample of only 144 participants, especially when divided into small and non-representative subgroups, percentiles are unstable, and any interpretations based on them must be approached with extreme caution, limited to describing the specific sample without generalization.

Response: We appreciate this observation. To clarify, 210 participants initially enrolled in the MOOC, but only 144 completed it and were included in the analysis. Our study did not attempt to compare subgroups (e.g., by gender, country, age, or teaching experience), nor did we conduct inferential analyses between completers and non-completers. Rather, our research questions were designed to examine within-participant changes (pre- to post-course) in perceptions of usefulness (PU) and ease of use (PEoU), alongside behavioral intentions (BI).

We therefore present demographic breakdowns only for descriptive purposes, to contextualize the composition of the completers. No subgroup inferential statistics were carried out, and we now make this explicit in the revised manuscript.
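The within-participant, purely descriptive pre/post comparison described above can be illustrated with a short sketch. All scores below are hypothetical Likert-scale means invented for illustration; the function name and values are not taken from the manuscript.

```python
import numpy as np

def describe_change(pre: np.ndarray, post: np.ndarray) -> dict:
    """Descriptive summary of within-participant change on a Likert scale.

    No inference is performed: only means and the spread of individual
    pre-to-post changes are reported, mirroring a descriptive-only analysis.
    """
    diff = post - pre
    return {
        "pre_mean": pre.mean(),
        "post_mean": post.mean(),
        "mean_change": diff.mean(),
        "sd_change": diff.std(ddof=1),  # sample SD of individual changes
    }

# Hypothetical PU scores (1-5 Likert) for five participants, pre and post.
pre_pu = np.array([3.2, 4.0, 3.5, 2.8, 4.1])
post_pu = np.array([3.8, 4.2, 3.9, 3.4, 4.3])
summary = describe_change(pre_pu, post_pu)
```

Reporting the mean change alongside its spread keeps the analysis at the descriptive level while still conveying how consistent the shift in perceptions was across completers.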

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The authors have effectively addressed the major concerns raised in the previous round. The revised manuscript shows improvements in theoretical coherence, methodological transparency, and the structure of the discussion, which has significantly enhanced its overall quality. Only minor revisions are now required, mainly to improve readability and ensure consistency in formatting (e.g., heading levels, reference style, and figure/table layout), with a focus on language and formatting refinement.

Author Response

Comments 1: The authors have effectively addressed the major concerns raised in the previous round. The revised manuscript shows improvements in theoretical coherence, methodological transparency, and the structure of the discussion, which has significantly enhanced its overall quality. Only minor revisions are now required, mainly to improve readability and ensure consistency in formatting (e.g., heading levels, reference style, and figure/table layout), with a focus on language and formatting refinement.

Response 1: Thank you for pointing this out. I have duly carried out the minor updates mentioned.
