The Impact of Early Robotics on Kindergarten Children’s Self-Efficacy and Problem-Solving Abilities
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The manuscript offers an interesting idea for engaging children in STEM education to build self-efficacy skills. A few suggestions are highlighted below.
- The IRB study approval number should be included.
- Table 1 and the other tables labeled as measures should be relabeled as survey items, not measures. Please name and cite the measure used for the analysis in the second paragraph of the Results section. Were the items self-developed by the authors?
- Remove the highlight on Table 5.
- Self-efficacy was named as an outcome in RQ1, but the Table 5 questions on self-efficacy reflect the child's belief about their self-efficacy; asking about the child's confidence in carrying out the task would have been better. Similarly, the 4th paragraph in the Discussion mentions increased confidence in following the visual instructions, but only the 2nd item in Table 1 asks about instructions, stating, "I can follow picture instructions," with a reported p-value that is not significant. The authors are making far-reaching conclusions based on the present study, as there is limited empirical data to support these statements. Also, Table 5, asking about self-efficacy, only has 2 items relating to such an outcome. Inferences drawn in the Discussion section should be tempered to reflect what is actually presented in the study.
Author Response
Comment 1: The IRB study approval number should be included.
Response 1: Thank you for pointing this out. We agree with this comment. Therefore, we have added the IRB approval number in the Institutional Review Board Statement section at the end of the manuscript.
The updated text in the manuscript now reads:
“The study was conducted in accordance with the Ministry of Education, approval code: 9701.”
Comment 2: Table 1 and others that are labeled as measures should be relabeled as survey items and not measures. Please name and cite the measure used for the analysis in the second paragraph of the Results section. Were the items self-developed by the authors?
Response 2: Thank you for this helpful clarification. We agree that the term “survey items” more accurately describes the self-developed questions used in this study. Accordingly, we have relabeled Tables 1–5 to refer to “survey items.”
A minor table labeling change was made in Table 1, which now reads:
“Table 1. Baseline comparisons between research and control groups on pre-test survey items.”
In addition, the following clarification was added to the main text (page X, paragraph Y, line Z):
“These items were self-developed by the authors to reflect the program’s learning objectives and to ensure age-appropriate comprehension for kindergarten children, drawing on established frameworks of early STEM and self-efficacy assessment (Bandura, 1997).”
Comment 3: Remove the highlight on Table 5.
Response 3: Thank you for noticing this. The highlight on Table 5 has been removed in the revised manuscript.
Comment 4: Self-efficacy was named as an outcome in RQ1, but Table 5 questions on self-efficacy reflect the child’s belief about their self-efficacy, whereas asking about the child’s confidence in carrying out the task would have been better. Similarly, the 4th paragraph in the Discussion mentions increased confidence in following the visual instructions, but only the 2nd item in Table 1 asks about instructions stating, “I can follow picture instructions,” with a reported p-value that is not significant. The authors are making conclusions based on the present study, yet there is limited empirical data to support these statements. Also, Table 5, asking about self-efficacy, only has 2 items relating to such an outcome. Inferences drawn in the Discussion section should be tempered to reflect what is actually presented in the study.
Response 4: We appreciate this thoughtful and constructive feedback. We have carefully revised the manuscript to clarify the conceptual distinction between self-efficacy and confidence and to ensure that the terminology used in the Research Questions, Results, and Discussion sections accurately reflects the constructs measured.
Clarified terminology: Throughout the manuscript, we now use confidence rather than self-efficacy when referring to direct survey items such as “I feel good about building things” or “I can follow picture instructions,” since these reflect children’s self-reported confidence rather than validated self-efficacy scales. While we use the term self-efficacy broadly in alignment with Bandura’s framework, the items in this study are best understood as confidence statements developed to capture children’s perceived ability in context rather than standardized self-efficacy measures.
New paragraph added to Section 5.5:
“The two confidence items were designed to approximate self-efficacy-related beliefs in an age-appropriate format. However, they should be interpreted as indicators of children’s perceived ability rather than as validated self-efficacy measures.”
Clarified constructs: In Section 4.1 (Aims of the Study), we now explicitly note that while the study aimed to explore self-efficacy–related constructs, the items capture children’s confidence or belief in their ability rather than formal measures of self-efficacy.
Tempered interpretation: The Discussion section has been revised to moderate claims regarding “increased self-efficacy” or “visual instruction confidence.” These statements now emphasize observed trends and preliminary evidence rather than definitive conclusions.
Additional revision to the Limitations section:
“Fourth, although the study explored constructs related to self-efficacy, the survey reflected children’s confidence or perceived ability rather than validated self-efficacy scales.”
Reviewer 2 Report
Comments and Suggestions for Authors
Overall, I think that this study is novel and well done. It adds to a growing body of literature on robotics self-efficacy development for younger students. Below, I note some areas for improvement.
- The authors state that “most studies assess self-efficacy indirectly” through self-reports. However, that is a direct measurement of one’s self-beliefs. The authors suggest that actual performance is required. While objective performance certainly adds interesting data to consider alongside self-efficacy, it is a separate construct. I think a slight rewording is all that is needed here, though.
- Some studies have shown a ceiling effect when assessing self-efficacy at such a young age (i.e., most 5- and 6-year-olds assume they’re capable of anything). The authors might address this in the manuscript (i.e., with citations to support the developmental ability of 5-year-olds to assess their contextualized self-efficacy). To be clear, I think this is possible; I just think the authors need to address it in the writing.
- The authors describe that the teachers in the robotics condition allowed students to choose to play with the robotics during their own free time. It would have been really interesting to measure self-chosen engagement with the robotics kits outside of the structured class time (e.g., in the stations that were described) and compare that to both self-efficacy and skill development. Maybe a thought for a future study!
- Careful using confidence and self-efficacy interchangeably in research questions. They are technically different constructs.
- How did you account for potential teacher-level self-selection bias (e.g., the type of teacher who chose to implement a robotics curriculum is likely different from the type that didn’t, and student variance stemming from those differences might inaccurately be attributed to the robotics curriculum rather than to teacher differences)? This is barely mentioned in the Limitations as part of a list of limitations of the quasi-experimental design, but I think it also needs to be addressed in the Methods in some way. At minimum, it needs to be described how the teachers came to be teaching either curriculum. That is, how were teachers recruited, and how was the assignment of the two curricula made?
- If the goal was to shape students’ self-efficacy, focusing some on the sources of self-efficacy would make sense. I think the intervention easily aligns with the four main hypothesized sources of self-efficacy, but I think the authors need to explicitly write out how the intervention was targeted to improve students’ self-efficacy. Ford et al. (2023), I Fail, Therefore I Can, outlines sources of self-efficacy that students often look to in robotics self-efficacy development. This manuscript and the citations within it might help guide some thinking on a revised section. Notably, the section “Failure and Motivation in Educational Robotics” has several citations of studies focused on self-efficacy development in robotics for elementary and middle-level students. I think that pulling from these citations would provide a firmer foundation for the study and would strengthen the section on current literature in educational robotics and self-efficacy, as well as self-efficacy development.
- The paragraphs starting on Lines 292 and 299 are identical.
- The item “I feel good about building things.” is not measuring self-efficacy (and it is not completely clear what is being asked). It is, perhaps, measuring physiological state (a source of self-efficacy). However, I would suggest removing this item from the study or renaming it.
- In Table 1, the line “Reassembling a broken toy animal” is somewhat confusing. I would suggest replacing that with the actual item asked (i.e., If a [toy animal/kindergarten chair] is broken, I believe I can fix it). Then under that, an additional line that says something like, “Objective Performance.”
- I like the transfer of skill assessment (i.e., fix something unrelated to robotics). That was really interesting!
Author Response
We sincerely thank the reviewer for encouraging feedback and constructive comments. We greatly appreciate the time, care, and expertise invested in reviewing our manuscript and providing valuable suggestions that have significantly strengthened this work.
Comment 1: The authors state that “most studies assess self-efficacy indirectly” through self-reports. However, that is a direct measurement of one’s self-beliefs. The author suggests that actual performance is required. While objective performance certainly adds interesting data to consider alongside self-efficacy, it is a separate construct. I think a slight rewording is all that is needed here, though.
Response 1: We thank the reviewer for this helpful clarification. We have therefore revised the text in Section 2.4 to clarify that our study examines both self-reported self-efficacy and actual performance as related but separate outcomes, avoiding the implication that self-reports are indirect measures.
The revised text in Section 2.4 now reads:
“Moreover, most studies assess self-efficacy solely through self-reports or teacher ratings, without examining children’s actual task performance as a complementary indicator of learning outcomes.”
Comment 2: Some studies have shown a ceiling effect assessing self-efficacy in such a young age (i.e., most 5- and 6-year-olds assume they’re capable of anything). The authors might address this in the manuscript (i.e., citations to support the developmental ability of 5-year-olds to assess their contextualized self-efficacy). To be clear, I think this is possible; I just think the authors need to address it in the writing.
Response 2: Thank you for this insightful point. We agree that young children (ages 4–7) may exhibit elevated confidence or limited metacognitive differentiation, potentially resulting in ceiling effects when measuring self-perceptions of ability.
To address this, we have added a paragraph in Section 4.4 (Instruments, Measures and Procedures):
“Measuring self-efficacy with very young children requires additional consideration. Young children (ages 4–7) often express generally high confidence in their abilities, which can create ceiling effects and limit the sensitivity of self-report measures (Staus et al., 2021). Moreover, their metacognitive awareness is still developing, so they may find it difficult to differentiate subtle gradations of ability. Accordingly, the binary response format was selected to maximize comprehension and reliability for this age group, though it may underestimate individual variation or small changes in perceived capability.”
We also added the following paragraph to the Limitations section:
“Third, the study relied on dichotomous self-report items, which, while age-appropriate, may have simplified children’s responses and reduced sensitivity to subtle distinctions in perception. Measuring self-beliefs in early childhood presents additional challenges, as children aged 4–7 often display uniformly high confidence in their abilities, leading to potential ceiling effects in self-report instruments (Staus et al., 2021). Consequently, the binary ‘yes/no’ format used here may have limited the detection of small variations or incremental gains in perceived capability over time. Future research would benefit from employing more fine-grained response scales or incorporating complementary observational and behavioral measures to capture self-efficacy development with greater precision.”
Reference added:
Staus, N. L., O’Connell, K., & Storksdieck, M. (2021). Addressing the ceiling effect when assessing STEM out-of-school time experiences. Frontiers in Education, 6, 690431.
Comment 3: The authors describe that the teachers in the robotics condition allowed students to choose to play with the robotics during their own free time. It would have been really interesting to measure self-chosen engagement with the robotics kits outside of the structured class time (e.g., in the stations that were described) and compare that to both self-efficacy and skill development. Maybe a thought for a future study!
Response 3: Thank you for this thoughtful observation and suggestion. We agree that examining children’s self-chosen engagement with robotics materials would provide valuable insights into their motivation and learning behaviors.
To acknowledge this, we have added the following paragraph to the Limitations and Future Directions section:
“Fifth, although the robotics kits were available for free exploration outside of formal lessons, children’s self-chosen engagement with these materials was not systematically measured. Tracking voluntary use of the robotics area could provide valuable insights into children’s intrinsic motivation, persistence, and the relationship between unstructured engagement and self-efficacy or skill development. Future studies could integrate such measures to better understand how informal play complements structured instruction.”
Comment 4: Careful using confidence and self-efficacy interchangeably in research questions. They are technically different constructs.
Response 4: We thank the reviewer for this valuable observation. Following feedback from both reviewers, we have carefully revised the manuscript to ensure that confidence and self-efficacy are used consistently and accurately according to their conceptual distinctions. Specifically, references to self-efficacy now appear only when discussing the theoretical framework or broader implications, whereas confidence is used to describe the self-report items employed in this study.
Comment 5: How did you account for potential teacher-level self-selection bias noise (e.g., the type of teacher who chose to implement a robotics curriculum is likely different than the type that didn’t, and student variance stemming from those differences might inaccurately be attributed to the robotics curriculum as opposed to teacher differences)? This is barely mentioned in limitations as part of a list of limitations with the quasi-experimental design, but I think it needs to also be addressed in the methods in some way. At minimum, it needs to be described how the teachers came to be teaching either curriculum. That is, how were teachers recruited and how was assignment of the two curriculums made?
Response 5: Thank you for highlighting this important potential source of bias. We have expanded the Methods section to explicitly describe teacher recruitment and curriculum assignment, and added a new subsection titled 4.3.1. Instructional Supports and Bias-Mitigation Procedures.
The new subsection reads:
“To ensure experimental integrity and reduce teacher-level variance unrelated to the condition assignment, the study employed several standardization and bias-mitigation procedures. All participating kindergartens followed a shared instructional scope-and-sequence, utilized weekly pacing guides, and relied upon common lesson materials to maintain parity in content delivery across both groups. For initial training, teachers in both the control and robotics conditions attended a brief orientation covering the year’s plan and assessment protocols. In keeping with the program’s principle that robotics should be taught by the regular classroom teacher rather than an external expert, all participating teachers received a specially tailored workshop preparing them to deliver robotics content using the available classroom equipment. This training covered the yearly plan, use of the robotic kits, and assessment procedures, ensuring consistent implementation across kindergartens. The research team conducted light-touch fidelity checks approximately once per month via brief kindergarten visits, specifically to verify adherence to the predetermined instructional schedule rather than to evaluate teacher performance. Finally, to minimize any potential teacher influence on student responses, all student surveys and performance tasks were administered by trained, external research assistants following a standardized script.”
In addition, we added the following paragraph to the Limitations section:
“Although curriculum assignment preceded the study and teachers did not self-select for research participation, residual teacher-level and classroom-context effects may remain. We mitigated these through common pacing and materials, teacher orientation, and light-touch fidelity checks, but they cannot be fully eliminated.”
Comment 6: If the goal was to shape students’ self-efficacy, focusing some on the sources of self-efficacy would make sense. I think the intervention easily aligns with the four main hypothesized sources of self-efficacy, but I think the authors need to explicitly write out how the intervention was targeted to improve students’ self-efficacy. Ford et al. (2023) I Fail, Therefore I Can outlines sources of self-efficacy that students often look to in robotics self-efficacy development. This manuscript and citations within it might help guide some thinking on a revised section. Notably, the section “Failure and Motivation in Educational Robotics” has several citations of studies focused on self-efficacy development in robotics for elementary and middle level students. I think that pulling from these citations would provide a firmer foundation for the study and would strengthen the section on current literature in educational robotics and self-efficacy as well as self-efficacy development.
Response 6: We thank the reviewer for this excellent and insightful suggestion. We agree that explicitly linking the intervention’s design to Bandura’s four main sources of self-efficacy (mastery experience, vicarious experience, social persuasion, and physiological/emotional states) provides a stronger theoretical foundation for the study.
To address this, we have added a new paragraph to Section 2.3, which now reads:
“Educational robotics provides multiple avenues through which self-efficacy can develop. These benefits directly align with the four sources of self-efficacy proposed by Bandura (1997)—mastery experiences, vicarious learning, social persuasion, and physiological and emotional states—which are actively supported in learning contexts through successful hands-on building, collaborative peer observation, positive reinforcement, and engaging in playful, low-stakes activities that maintain enjoyment and persistence even after failure. Recent studies highlight that encountering and overcoming failure during robotics activities can itself be a critical catalyst for self-efficacy development (Ford et al., 2023; Jäggle et al., 2020). Together, these features might indicate that well-designed robotics programs help to foster multiple self-efficacy sources, particularly when failure is framed as an opportunity for learning and growth.”
References added:
Ford, C. J., Mohr-Schroeder, M. J., & Usher, E. L. (2023). I fail; therefore, I can: Failure mindset and robotics self-efficacy in early adolescence. Education Sciences, 13(10), 1038.
Jäggle, G., Lammer, L., Wiesner, J. O., & Vincze, M. (2020). Towards a robotics self-efficacy test in educational robotics. Constructionism 2020, 583.
Comment 7: The paragraphs starting on Lines 292 and 299 are identical.
Response 7: Thank you for noticing this duplication. We have removed the redundant paragraph.
Comment 8: The item “I feel good about building things.” is not measuring self-efficacy (and is not completely clear what’s being asked). It is, perhaps, measuring physiological state (a source of self-efficacy). However, I would suggest removing this item from the study or renaming it.
Response 8: We thank the reviewer for this thoughtful observation. We agree that the item “I feel good about building objects” primarily captures an affective or emotional dimension rather than a direct measure of self-efficacy. However, as affective states are recognized as one of the four key sources of self-efficacy (Bandura, 1997), we retained this item to represent the emotional and motivational aspects of children’s experiences with building tasks.
To clarify this interpretation, we have slightly rephrased the item description in the Results section and tables as “Confidence/enjoyment in building objects.” We also added a note in the text specifying that this item reflects an affective source contributing to self-efficacy rather than a direct efficacy judgment.
Comment 9: In Table 1, the line “Reassembling a broken toy animal” is somewhat confusing. I would suggest replacing that with the actual item asked (i.e., If a [toy animal/kindergarten chair] is broken, I believe I can fix it). Then under that, an additional line that says something like “Objective Performance.”
Response 9: We appreciate this helpful suggestion. To improve clarity, we have revised Table 1 so that the objective task is now explicitly labeled as “Objective Performance.” The updated entry reads:
“Objective Performance: Successfully reassembled toy animal (%).”
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
The authors’ responses address my concerns.
