Article

The Effect of Time Management and Help-Seeking in Self-Regulation-Based Computational Thinking Learning in Taiwanese Primary School Students

1 Department of Computer Science, National Yang Ming Chiao Tung University, Hsinchu 300093, Taiwan
2 Degree Program of E-Learning, National Yang Ming Chiao Tung University, Hsinchu 300093, Taiwan
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(16), 12494; https://doi.org/10.3390/su151612494
Submission received: 28 June 2023 / Revised: 14 August 2023 / Accepted: 15 August 2023 / Published: 17 August 2023

Abstract

Computational thinking skills are increasingly required for working with information technology products and are considered core learning objectives in science and technology curriculums across all grades. However, there is not yet a curriculum model for computational thinking, and many teachers are still working through this issue while designing courses to cultivate these skills in students. We planned an 8-period course aligned with the 108 curriculums, using the Bebras International Computational Thinking Challenge and a programming learning motivation scale to evaluate game-based lessons from Code.org. Grade-3 and -4 students were randomly divided into self-regulation and guided-learning groups, and 153 valid cases were analyzed using paired t tests and ANCOVA. We found the learning behaviors of the two groups to be worthy of further exploration in terms of time management and help-seeking learning strategies. Code.org's game-based lessons effectively engaged students to complete most of the course, addressing the completion problems that typically arise in self-paced learning. The self-regulation group spent more time in peer discussions and had better learning outcomes than the guided-learning group. To this end, we provide detailed curriculum information as a teaching model for the self-regulated learning of computational thinking in primary schools.

1. Introduction

1.1. Computational Thinking

Now considered a fundamental life skill [1], computational thinking entails the use of computer science concepts to analyze problems, deconstruct them, and identify solutions [2,3,4,5,6,7,8,9,10]. Computational thinking is considered a multi-disciplinary, creative, and imaginative problem-solving mode with applications in many non-technological fields, and it is regarded as very important for adapting to rapidly evolving technological developments [5,11]. Accordingly, many countries are updating their education systems by adding computational thinking courses to education curriculums, aiming to develop computational thinking skills in children from an early age [3,12,13]. There are many examples of digital technology courses and information education syllabuses focused on computational thinking development in the US [14], UK [15], Australia [16], and New Zealand [17].

1.2. 108 Curriculums

In Taiwan, the Ministry of Education considers students' life experiences, needs, and learning interests. It emphasizes problem-solving skills and the ability to create and implement projects, enabling students to address technological literacy problems and thereby achieving the core goal of improving computational thinking in the 108 curriculums [18]. According to the technology domain curriculum standards in the 12-Year Basic Education guidelines, high school graduates must have a basic understanding of six technology-associated concepts: algorithms; programming; system platforms; data representation, processing, and analysis; the application of information technology (IT); and technology and human society [18]. In the absence of a curriculum model for computational thinking to follow, courses are organized within alternative curriculums to encourage the development of long-term series of experimental course plans.
The idea of 108 alternative curriculums, consisting of 3–6 lessons per week for Taiwanese primary and secondary schools, began in 1998. These curriculums are under continuous revision and expansion. One of their key features is the freedom for individual schools to create learning activities in order to enhance learner interest in specific topic areas and encourage adaptive curriculum development. This flexibility allows schools to design and execute cross-semester and cross-subject courses based on local student characteristics and backgrounds. According to this approach, science and technology courses in the 108 curriculums do not represent a fixed number of Ministry-sanctioned courses in primary schools across Taiwan, but an accumulation of learning objectives and technology courses within alternative curriculums. To this end, local schools are encouraged to design curriculum modules that allow students to develop and assess computational thinking skills in order to ensure that all primary school students have a high level of technological literacy.

1.3. Computational Thinking Instruction

Teaching programming concepts and skills is currently the most common method for developing computational thinking skills [19,20,21,22,23], and there are numerous studies on the relationship between computational thinking and programming education [24,25,26,27,28]. However, researchers have found many examples of elementary school students becoming proficient with certain new technologies faster than older students and adults, despite a long-held belief that elementary school students struggle to understand programming concepts. A small but growing number of researchers and educators are recommending an earlier start in programming education in order to develop computer logic and reasoning skills in young and enthusiastic learners [29,30,31,32,33,34]. In response, programming courses that aim to develop computational thinking skills have been widely introduced in primary schools around the world during the past decade [31,32,33,34,35]. However, individual schools and school districts still lag in identifying the best teaching methods [11,36].
Some teachers have reported difficulties in building positive student attitudes regarding advanced programming topics and tools [37]. For example, many attempts to teach Python, Java, and C++ programming languages have been unsuccessful, with learners quickly losing interest when learning strict grammar rules [38]. Thus, the search is on to find age-appropriate methods with which to teach younger learners programming concepts such as algorithms without requiring them to write code [2,39,40]. In computer science education today, visual programming languages are increasingly used to help students practice computation skills involving creativity and problem solving [3,27,41]. These languages use blocks that are dragged and placed in sequences for task completion as their lack of written code requirements helps to reduce learning anxiety and improve self-confidence [42,43,44,45].
Four of the most popular visual programming websites and tools are Code.org [37,42], Scratch [26,46,47], Alice [48,49], and the MIT App Inventor [50,51]. While Scratch, Alice, and App Inventor provide examples for students to use in completing projects and assignments, course content must be selected or designed by researchers [52,53,54] or students in order to develop a new project [55,56]. As a result, it is not easy to reproduce and compare work for use in follow-up research. In contrast, Code.org provides several open courses from which researchers can adopt the required course content [42,57,58] in order to effectively produce and utilize study results. Code.org also relies on animations and characters that are well known in target learner age groups to provide step-by-step instruction [59]. Code.org course content can also be adjusted to match existing knowledge levels at different ages, which is particularly useful for younger learners [42,60]. For these reasons, we selected Code.org for use in our project.

1.4. Self-Regulated Learning

In a digital learning environment, students need to self-regulate in the learning process in order to effectively learn and master relevant knowledge and skills [61,62,63,64]. Self-regulated learning focuses on students' independence in understanding and their ability to self-control, self-supervise, and learn autonomously without the encouragement of others [65,66]. Students self-regulate to arrange study time to complete tasks, which translates into good learning performance [67]. Because of the characteristics of the Code.org course design, the operation of the course does not depend on the teacher's knowledge; it focuses on allowing students to learn independently, realizing self-regulated learning of computational thinking.
However, many students who are not self-regulated learners cannot use strategies to support learning and hold negative attitudes, leading to failure to complete tasks [68,69,70,71]. When teachers lead and monitor student progress to ensure course completion, they must coordinate the progress of all students. Nevertheless, each student's different thought processes [72] require different amounts of study time, which can put individual students under time pressure. A better approach is to provide sufficient learning assistance and motivation through the courses so that students can self-regulate and arrange suitable learning time to complete them. Therefore, an assessment of the effectiveness of course content for self-regulated learning should consider not only academic performance but also learning motivation, so that students can complete the course.

1.5. Learning Motivation

There are many questionnaires for learning motivation, among which the most commonly used is the Motivated Strategies for Learning Questionnaire (MSLQ) [73]. The MSLQ is an assessment tool developed on the basis of the self-regulated learning framework to measure learning effectiveness [74], and its items can be applied jointly or individually to various research questions [75]. Furthermore, since this study is aimed at learning computational thinking, the Students' Motivation Toward Science Learning (SMTSL) questionnaire [76], designed to assess the value of science learning, is also suitable for this study. Indeed, many studies have demonstrated the reliability and effectiveness of the SMTSL questionnaire [77,78,79,80]. Finally, we selected Xiao's Learning Motivation Scale for Programming Courses questionnaire, which was adapted from the MSLQ, the SMTSL, and the Problem Solving Inventory (PSI) and validated for reliability with Taiwanese students. Therefore, the questionnaire fits the student environment of this study and the items required for learning computational thinking with self-regulation.
The questionnaire assessed extrinsic motivation, learning value, self-efficacy, and help-seeking. The first three dimensions concern motivating students to complete the course, while help-seeking is used to understand students' learning behavior. The self-regulated learning process allows students to choose learning strategies that enable the completion of a task independently or through discussion with others. Help-seeking is thus a social strategy for self-regulated learning, as it involves peers, teachers, and parents [81,82]. A help-seeking strategy, which consists of seeking support from individuals or other sources, can affect academic performance relatively quickly [83,84,85,86] and can be an essential learning behavior in self-regulated learning. Therefore, we conduct a detailed analysis and discussion of help-seeking.

1.6. Research Aim

In order to meet the teaching objectives of the Education Ministry's 108 curriculums policy, we created an 8-period computer science course that improves computational thinking skills, cultivates problem-solving aptitudes, and builds independent analytical proficiency. Code.org served as a source of teaching materials and curriculum guidance for self-study. A primary study goal was to test the capacity of a game-based course to support self-regulated learning, with a puzzle-solving game serving as the primary instructional vehicle. During course implementation, students were divided into self-regulated and teacher-guided groups based on time management approaches. Combined with assessments and questionnaires, these methods were used to measure learners' progress. The specific study aims were to
  • Measure the learning outcomes of using Code.org courses to enhance students’ computational thinking;
  • Discuss the effectiveness of Code.org courses in increasing student motivation;
  • Compare the differences in learning behavior and motivation between the two groups of students under different time management methods.
The remainder of this article explains the research methods and instruments in Section 2; the results of data collection and analysis are presented in Section 3; Section 4 discusses the results in response to the research questions; Section 5 presents the conclusions; and Section 6 explains the limitations and future directions of this research.

2. Materials and Methods

2.1. Participants and Procedure

This study adopted a quasi-experimental design. The participants were 168 grade-3 and -4 students attending an urban primary school in Hsinchu, Taiwan. According to test scores and other data from their grade-2 and -3 classes, all students were within a normal distribution of academic performance for their grade level. Three of the six classes in the study were randomly selected to participate in group A courses with student self-regulated learning; the other three participated in group B courses guided by teachers. The teaching and experimentation in the six classes were conducted by one of our authors, an official teacher at the primary school. This study is based on existing programming teaching courses, with only the concept of self-regulated learning added as an experimental design. The course passed the review of the curriculum committee through public lectures and by inviting professional educators to observe the class, minimizing potential risks to students in the research process. We used tests and questionnaires in the assessment plan of the course. The data used in the study were collected during the regular progress of the course, and no other external forces were involved.
Two groups of students took the same course from Code.org and completed tasks independently. We asked students to solve the tasks sequentially to reduce error between the two sets of experimental factors. In the self-regulated group, after the teacher briefly explained the process and operation of the course, the students completed the tasks at their own pace; it did not matter if students needed more time and could only complete some of the tasks by the end of the course. In the guided-learning group, the time for students to solve tasks by themselves was limited according to the lesson plan's suggestion (30 min per lesson). The teacher confirmed that all students had completed the tasks before continuing the class, ensuring that the guided-learning group completed all tasks by the end of class.
The formal course comprised eight class periods, with learning outcomes assessed using the Bebras International Computational Thinking Challenge and a programming learning motivation scale administered before and one week after the course (Table 1). We excluded nine students who were absent for multiple class sessions and six whose data were invalid because they failed to complete Course 2. Our sample for statistical analysis consisted of 153 students, whose scores, questionnaire responses, and other data are listed in Table 2.

2.2. Code.org

The Code.org (https://code.org/ (accessed on 20 June 2023)) curriculum was created in 2013 to improve computer science education worldwide and uses computer games and stories that help young learners to learn and practice programming concepts. The platform is a flexible web application that can work with almost any device. The learning consists of dragging and dropping programming blocks to complete tasks at various challenge levels. Many researchers have examined the use of Code.org in primary school classes, with most reporting positive learner attitudes [42,57] that appear to be dependent on age. Some researchers have observed negative attitudes from grade-1 and -2 pupils, but significantly positive feedback from grade-3 and -4 learners [58]. The above literature supports our course selection for 3rd and 4th graders according to the recommended ages for Code.org courses. Along these lines, we chose courses 2 and 3 by matching students’ cognitive development stage and understanding of course content with basic computer science information (https://code.org/lesson_plans (accessed on 20 June 2023)).
Course 2, which focuses on sequences, loops, debugging, and conditionals, is suitable for students without programming experience. Students learn to construct programs by solving problems associated with interactive games and stories. Course 3 focuses on functions, conditionals, debugging, and loops (including "while" loops and "nested" loops). Students gain a deeper understanding of topics learned in Course 2 and learn increasingly flexible programming skills suitable for solving more complex problems.
The structure of an individual course consists of multiple lessons, each with its own set of tasks, as seen in Table 2. The tasks incorporate animations and characters that are well known to learners in specific age groups, and they require particular programming blocks to achieve their objectives, thus targeting specific programming concepts. Due to time limitations, we selected course content that fully addressed vital programming concepts while reducing the time spent repeating them, and we removed all multiple-choice questions to help learners focus on the programming blocks. We selected 12 lessons containing 135 tasks from Code.org courses 2 and 3 for this research, as shown in Table 3.

2.3. Bebras

The Bebras International Computational Thinking Challenge (https://www.bebras.org/ (accessed on 20 June 2023)) is a learning outcome assessment tool. Bebras is part of an international initiative created in 2004 to promote computer science and computational thinking skills among students of all ages [87,88,89,90]. Each Bebras task entails at least one computer science concept. However, they can be solved without prior computer science knowledge, making them useful as pre-test measures of computational thinking ability and as tools for stimulating student interest [91]. Accordingly, there is a growing body of research using the Bebras tasks to teach and assess computer science skills at all levels [92,93,94,95]. As of 2021, more than 50 countries have participated in the annual Bebras competition. Taiwan joined the Bebras Challenge in 2012, starting with high school students and expanding to upper-grade elementary school students in 2016.
The 2017 Bebras Australia Computational Thinking Challenge was the source of our pre-test questions, which we translated into Chinese. We asked middle-school Mandarin teachers to review the questions and make suggestions regarding wording and syntax. To avoid influencing the post-test answers, the teachers in our study did not review pre-test results with their students. The post-test content consisted of modifications of pre-test conditional descriptions and options. Ten Bebras tasks designed for third- and fourth-year elementary school students were used to assess instrument difficulty (for details, see https://www.csiro.au/-/media/Digital-Careers/Files/Bebras-Files/2017-Bebras-Solution-Guide-Aus.pdf (accessed on 20 June 2023)). Our scoring method was adopted from the intermediate-level Bebras challenge and adjusted to a 1–100-point scale. Individual items were graded at difficulty level A, B, or C (ranging from easy to hard); correct answers received 8, 10, or 12 points, respectively, while incorrect answers incurred a deduction of 2–3 points.
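To make the scoring scheme concrete, the following sketch implements one plausible reading of it. The per-level penalties and the zero floor are assumptions; the text above states only that correct answers earned 8, 10, or 12 points by difficulty level and that 2–3 points were deducted otherwise.

```python
# Hypothetical sketch of the Bebras scoring rule described above.
POINTS = {"A": 8, "B": 10, "C": 12}   # points for a correct answer, by difficulty
PENALTY = {"A": 2, "B": 2, "C": 3}    # assumed per-level deduction for an incorrect answer

def score_test(answers):
    """answers: list of (difficulty_level, is_correct) pairs for the ten tasks."""
    raw = sum(POINTS[lvl] if ok else -PENALTY[lvl] for lvl, ok in answers)
    return max(raw, 0)                # assumption: scores are floored at zero

# Example: 4 A-level, 3 B-level, and 1 C-level task answered correctly,
# with one B-level and one C-level task answered incorrectly.
answers = [("A", True)] * 4 + [("B", True)] * 3 + [("C", True), ("B", False), ("C", False)]
print(score_test(answers))            # 69
```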
According to the Bebras guide, computational thinking ability consists of four aptitudes: decomposition, data interpretation (patterns and modeling), data representation (symbolism), and algorithms (following and describing). Note that we have modified the scope of the term 'algorithm'. The guide offers two options for algorithmic aptitude—following and describing. Simply following a pattern to obtain an answer cannot appropriately be considered to constitute algorithmic ability; in contrast, describing a new algorithm requires students to understand the purpose of a task and innovate ideas. To this end, we focus our examination of algorithmic aptitude on the ability to "describe", which allows us to better allocate pattern-following tasks to the other necessary capabilities of decomposition, interpretation, and representation. Table 4 shows the Bebras task difficulty data and each test item's relevant computational thinking requirements.

2.4. Questionnaire

Since influencing learner motivation is important for game-based learning [96,97,98], we incorporated Xiao's [99] 14-item Learning Motivation Scale for Programming Courses into this project. The scale includes elements from Tuan et al.'s [76] Students' Motivation Toward Science Learning, Pintrich et al.'s [73] Motivated Strategies for Learning Questionnaire, and Heppner and Petersen's [100] Problem Solving Inventory. In particular, Xiao's questionnaire was translated and analyzed for reliability with Taiwanese students, matching the learning environment of our subjects: the total scale reliability for Taiwanese students was 0.859, and each subscale exceeded 0.7. Responses to the Xiao questionnaire items were given on a 6-point Likert scale with options of strongly agree, agree, somewhat agree, somewhat disagree, disagree, and strongly disagree, scored from 6 to 1, respectively. Table 5 lists the four learning motivation dimensions addressed by the questionnaire and the scale item statements.
  • Extrinsic motivation, described as motivation for an activity that contributes to achieving results unrelated to the activity itself, suggests that students interpret success in task outcomes in terms of rewarding or punishing incentives outside of the learning activity [101]. Activities that are not inherently interesting require extrinsic motivation [101], which drives learners to find better ways to complete tasks and achieve learning goals [102,103].
  • Learning value refers to changes in learner interest and costs associated with perceived task usefulness and importance [104]. Perceived usefulness has significantly favorable effects on attitudes and behavioral intentions [105,106], especially in visual programming environments involving gamified learning tools [107,108,109,110].
  • Self-efficacy, meaning confidence in one’s ability to accomplish tasks, promotes more positive attitudes toward activities and performance [111]. In the context of information science, this dimension entails confidence in the use of computer tools to accomplish learning tasks [112]. This, in turn, positively affects learner behavioral intentions to adopt and use digital learning systems [113,114,115]. Game-based learning approaches can improve the computer self-efficacy and cognition of programming skills in learners, thereby increasing learning intention and confidence [36,107].
  • Help-seeking from experts, teachers, or peers has been shown to support cognitive information processing [116,117,118] and enable better understanding of task content and tool usage [119]. Help-seeking is the most commonly used strategy for self-regulated learning and is significantly positively associated with student learning outcomes, satisfaction, and engagement [120]. Therefore, students who seek help can adapt more quickly to learning environments, supporting better learning outcomes [85,121,122].
Table 5. Programming learning motivation questionnaire statements.
Extrinsic motivation
  Q1. The most satisfying reason for learning programming is to make my classmates think I am very smart.
  Q2. In programming class, I care most about getting good grades.
  Q3. I want my programming grades to be higher than those in the rest of the class.
Learning value
  Q4. I think learning programming is important because it is used in everyday life.
  Q5. I think learning programming is important because it promotes my logical thinking.
  Q6. I think learning programming is important because I can learn how to solve problems.
Self-efficacy
  Q7. Learning programming is easy for me.
  Q8. I think I am proficient in programming skills.
  Q9. I am confident that I can write programs with correct logic.
  Q10. I am confident I can correct program errors.
Help-seeking
  Q11. I try my best to complete programming tasks with my classmates.
  Q12. I often spend time with my classmates discussing how to write programs.
  Q13. When I don't understand a program, I consult with my classmates.
  Q14. I ask my teacher for help understanding programming concepts.

2.5. Data Analysis and Missing Values

Quantitative data analysis focused on Bebras Computational Thinking test scores and responses to Programming Motivation Scale items. SPSS 20.0 was used to perform all descriptive, paired t test, and ANCOVA analyses of changes in both learner computational thinking performance and learning motivation after students had completed the game-based learning course. A total of 153 valid cases were analyzed, with 16 missing values (0.747%) in the pre-test questionnaire and 13 (0.607%) in the post-test questionnaire. To ensure the integrity of the data, we used the fully conditional specification (FCS) multiple-imputation method to generate 10 data sets using linear regression and then pooled the imputed values to substitute for the missing data.
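For readers who want to reproduce the imputation step outside SPSS, the sketch below shows one way to run FCS (chained-equations) multiple imputation with a linear regression model and pool ten completed data sets by averaging. The DataFrame name `responses` and the use of scikit-learn are illustrative assumptions; the study itself used SPSS 20.0.

```python
# Approximate, non-authoritative re-creation of the FCS imputation pipeline.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401 (activates IterativeImputer)
from sklearn.impute import IterativeImputer
from sklearn.linear_model import BayesianRidge

def pool_imputations(responses: pd.DataFrame, n_imputations: int = 10) -> pd.DataFrame:
    """Impute missing questionnaire items n_imputations times and pool by averaging."""
    completed = []
    for seed in range(n_imputations):
        imputer = IterativeImputer(
            estimator=BayesianRidge(),  # linear-regression imputation model
            sample_posterior=True,      # draw imputations from the posterior
            random_state=seed,
        )
        completed.append(imputer.fit_transform(responses))
    pooled = np.mean(completed, axis=0)  # the pooled value substitutes for each missing entry
    return pd.DataFrame(pooled, columns=responses.columns, index=responses.index)
```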

3. Results

3.1. Learning Outcome in Code.org Course

The pre-test scores of the Bebras test were decomposition (M = 22.19, SD = 16.67), data interpretation (M = 29.07, SD = 17.74), data representation (M = 20.26, SD = 14.68), and algorithmic ability (M = 12.26, SD = 17.27). The post-test scores of the Bebras test were decomposition (M = 23.83, SD = 16.30), data interpretation (M = 29.34, SD = 18.32), data representation (M = 20.02, SD = 15.12), and algorithmic ability (M = 14.98, SD = 17.54). Overall, the post-test total score (M = 42.08, SD = 28.99) was higher than the pre-test (M = 38.68, SD = 28.25) (Table 6).
Paired t tests were used to test the improvement in Bebras scores between the pre-test and post-test. The results showed that algorithmic ability (t(152) = −2.408, p = 0.017, d = 0.156) and the total score (t(152) = −2.131, p = 0.035, d = 0.119) were significantly higher in the post-test than in the pre-test; however, there were no significant differences between pre-test and post-test scores for decomposition (t(152) = −1.455, p = 0.148, d = 0.100), data interpretation (t(152) = −0.219, p = 0.827, d = 0.015), and data representation (t(152) = 0.197, p = 0.844, d = 0.016) (Table 6).
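As a minimal sketch of the comparison above (the original analysis was run in SPSS 20.0), the snippet below computes a paired t test together with one common paired-data variant of Cohen's d; the exact effect-size formula used in the study is not stated, so the difference-score version here is an assumption.

```python
# Paired t test with an assumed Cohen's d variant, on two per-student score arrays.
import numpy as np
from scipy import stats

def paired_comparison(pre: np.ndarray, post: np.ndarray):
    t_stat, p_value = stats.ttest_rel(post, pre)   # df = len(pre) - 1
    diff = post - pre
    cohens_d = diff.mean() / diff.std(ddof=1)      # standardized mean of the paired differences
    return t_stat, p_value, cohens_d

# Illustrative data only: means/SDs loosely echo the totals in Table 6.
rng = np.random.default_rng(0)
pre = rng.normal(38.68, 28.25, size=153)
post = pre + rng.normal(3.4, 18.0, size=153)
print(paired_comparison(pre, post))
```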

3.2. Learning Motivation in Code.org Course

The pre-test scores of the programming motivation questionnaire were extrinsic motivation (M = 3.67, SD = 1.16), learning value (M = 4.94, SD = 1.16), self-efficacy (M = 4.17, SD = 1.13), and help-seeking (M = 4.70, SD = 1.05). The post-test scores of the programming motivation questionnaire were extrinsic motivation (M = 3.98, SD = 1.24), learning value (M = 5.18, SD = 1.07), self-efficacy (M = 4.38, SD = 1.19), and help-seeking (M = 4.84, SD = 1.05). Overall, the post-test total score (M = 4.60, SD = 0.74) was higher than the pre-test (M = 4.38, SD = 0.77) (Table 6).
Paired t tests were used to test the improvement of the programming motivation questionnaire between the pre-test and post-test results. The results showed that the extrinsic motivation (t(152) = −3.152, p = 0.002, d = 0.258), learning value (t(152) = −2.384, p = 0.018, d = 0.216), self-efficacy (t(152) = −2.071, p = 0.040, d = 0.180) and total (t(152) = −3.210, p = 0.002, d = 0.286) of programming motivation in the post-test were significantly higher than those in the pre-test. However, there was no significant difference between pre-test and post-test scores for help-seeking (t(152) = −1.543, p = 0.125, d = 0.140) (Table 6).
The students' total scores on the Bebras tasks and their learning motivation improved significantly in the post-test, showing the effectiveness of the course for teaching computational thinking. Within computational thinking, the significant improvement in algorithmic ability could be related to Code.org's curriculum design, which is oriented toward programming. In terms of learning motivation, extrinsic motivation, learning value, and self-efficacy improved significantly, showing that the course effectively attracts students to complete it. Help-seeking, in addition, requires a closer look at students' learning behaviors, which we discuss in detail in later sections.

3.3. Self-Regulated Learning in Code.org Course

3.3.1. Learning Outcomes of Two Teaching Approaches

The Bebras post-test scores of the self-regulated group were decomposition (M = 25.56, SD = 15.30), data interpretation (M = 30.41, SD = 17.83), data representation (M = 20.76, SD = 14.64), and algorithmic ability (M = 16.15, SD = 18.08). The Bebras post-test scores of the guided-learning group were decomposition (M = 22.30, SD = 17.08), data interpretation (M = 28.40, SD = 18.80), data representation (M = 19.37, SD = 15.60), and algorithmic ability (M = 13.94, SD = 17.08). Overall, the total score of the self-regulated group (M = 44.10, SD = 28.30) was higher than that of the guided-learning group (M = 40.27, SD = 29.66) (Table 7).
From the results of the post-test period, the scores of the two groups of students showed that the abilities of the self-regulated group were better than those of the guided-learning group. To eliminate the difference between the two groups before the course, we used the pre-test score as a covariate in order to perform a one-way ANCOVA analysis on the scores of the two groups.
One-way ANCOVA was used to test the learning outcomes of the self-regulated and guided-learning groups, which requires the assumptions of homogeneity of regression slopes and homogeneity of variance.
The homogeneity-of-regression test showed no significant interaction between group and pre-test scores for any computational thinking item: decomposition (F(1, 149) = 0.549, p = 0.460), data interpretation (F(1, 149) = 0.070, p = 0.791), data representation (F(1, 149) = 0.482, p = 0.489), algorithmic ability (F(1, 149) = 0.397, p = 0.530), and total score (F(1, 149) = 0.031, p = 0.861). Therefore, this study excluded the influence of pre-test scores and tested the differences in post-test computational thinking between students in the self-regulated and guided-learning groups. Moreover, the results of Levene's test showed no significant difference in the error variance of the dependent variable between the two groups for any computational thinking item, satisfying the homogeneity of variance assumption: decomposition (F(1, 151) = 0.284, p = 0.595), data interpretation (F(1, 151) = 0.037, p = 0.849), data representation (F(1, 151) = 0.010, p = 0.919), algorithmic ability (F(1, 151) = 2.830, p = 0.095), and total score (F(1, 151) = 0.102, p = 0.750). Therefore, ANCOVA could be applied to test the learning outcomes of the two groups under the different teaching methods.
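The assumption checks and the ANCOVA itself can be expressed compactly in code. The sketch below is a hedged approximation rather than the study's actual SPSS procedure, and it assumes a hypothetical DataFrame `df` with columns `pre`, `post`, and `group`.

```python
# Sketch of the ANCOVA workflow: slope homogeneity, Levene's test, then the adjusted group test.
import pandas as pd
from scipy import stats
import statsmodels.api as sm
import statsmodels.formula.api as smf

def ancova_workflow(df: pd.DataFrame) -> None:
    # 1. Homogeneity of regression slopes: the pre x group interaction should be non-significant.
    slopes = smf.ols("post ~ pre * C(group)", data=df).fit()
    print(sm.stats.anova_lm(slopes, typ=2))  # inspect the 'pre:C(group)' row

    # 2. Homogeneity of variance: Levene's test on post-test scores by group.
    by_group = [g["post"].to_numpy() for _, g in df.groupby("group")]
    print(stats.levene(*by_group))

    # 3. ANCOVA proper: post-test scores by group, adjusting for the pre-test covariate.
    model = smf.ols("post ~ pre + C(group)", data=df).fit()
    print(sm.stats.anova_lm(model, typ=2))   # the F test for C(group) gives the adjusted group effect
```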
The results from the ANCOVA showed that post-test scores on the algorithmic aptitude item were significantly higher in the self-regulated learning group than in the guided-learning group (F(1, 150) = 4.147, p = 0.043, η2 = 0.027). However, there were no significant differences between the two groups on the remaining computational thinking items: decomposition (F(1, 150) = 1.986, p = 0.161, η2 = 0.013), data interpretation (F(1, 150) = 0.014, p = 0.907, η2 = 0.000), data representation (F(1, 150) = 0.050, p = 0.823, η2 = 0.000), and total score (F(1, 150) = 1.975, p = 0.162, η2 = 0.013) (Table 8).
Comparing the two time management types, the self-regulated learning group showed a significantly greater learning effect on students' aptitude for algorithms than the guided-learning group. This result suggests that students may need more time to understand the tasks, and that limiting the available time prevents them from learning thoroughly. In contrast, there was no significant difference in the other computational thinking abilities of decomposition, data interpretation, and data representation.

3.3.2. Learning Motivation of Two Teaching Approaches

The post-test programming motivation scores of the self-regulated group were extrinsic motivation (M = 4.22, SD = 1.09), learning value (M = 5.33, SD = 0.79), self-efficacy (M = 4.52, SD = 1.08), and help-seeking (M = 4.96, SD = 0.86). The post-test programming motivation scores of the guided-learning group were extrinsic motivation (M = 3.76, SD = 1.32), learning value (M = 5.05, SD = 1.26), self-efficacy (M = 4.25, SD = 1.27), and help-seeking (M = 4.74, SD = 1.19). Overall, the total motivation of the self-regulated group (M = 4.76, SD = 0.66) was higher than that of the guided-learning group (M = 4.45, SD = 0.78) (Table 9).
Assessing the results of the post-test, we found that the scores of the two groups of students showed the motivation levels of the self-regulated group to be better than those of the guided-learning group. To eliminate the difference between the two groups before the course, we used the pre-test score as a covariate to perform a one-way ANCOVA analysis on the scores of the two groups.
One-way ANCOVA was used to test the learning motivation between the self-regulated and guided-learning groups, requiring homogeneity of regression and variance assumptions. However, the raw scores did not satisfy the assumption of homogeneity of regression slopes, so we divided them by 6 in order to distribute the values between 0 and 1. Then, we used arcsine transformation to meet the homogeneity assumption.
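The transformation itself is a one-liner. The sketch below assumes the standard variance-stabilizing arcsine-square-root form; the text states only that scores were divided by 6 and then arcsine-transformed, so the square root is an assumption.

```python
# Assumed arcsine transformation of 6-point Likert scores before the motivation ANCOVA.
import numpy as np

def arcsine_transform(likert_scores: np.ndarray) -> np.ndarray:
    proportions = likert_scores / 6.0        # map 6-point Likert scores onto [0, 1]
    return np.arcsin(np.sqrt(proportions))   # arcsin(sqrt(p)), the usual variance-stabilizing form

print(arcsine_transform(np.array([1.0, 3.0, 4.5, 6.0])))
```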
The homogeneity-of-regression test showed no significant interaction between group and pre-test scores for any learning motivation item: extrinsic motivation (F(1, 149) = 0.146, p = 0.703), learning value (F(1, 149) = 0.039, p = 0.844), self-efficacy (F(1, 149) = 0.843, p = 0.360), help-seeking (F(1, 149) = 0.147, p = 0.227), and total (F(1, 149) = 1.716, p = 0.192). Therefore, this study excluded the influence of pre-test scores and tested the differences in post-test learning motivation between students in the self-regulated and guided-learning groups. Moreover, Levene's test showed no significant difference in the error variance of the dependent variable between the two groups for any learning motivation item, meeting the homogeneity of variance assumption: extrinsic motivation (F(1, 151) = 1.230, p = 0.269), learning value (F(1, 151) = 2.884, p = 0.092), self-efficacy (F(1, 151) = 0.720, p = 0.397), help-seeking (F(1, 151) = 1.937, p = 0.166), and total (F(1, 151) = 0.052, p = 0.820). Therefore, ANCOVA could be applied to test the learning motivation of the two groups under the different teaching methods.
The results from ANCOVA showed no significant difference in individual motivation between the self-regulated learning group and the guided-learning group, including extrinsic motivation (F(1, 150) = 1.049, p = 0.307, η2 = 0.007), learning value (F(1, 150) = 2.190, p = 0.141, η2 = 0.014), self-efficacy (F(1, 150) = 0.939, p = 0.334, η2 = 0.006), and help-seeking (F(1, 150) = 0.162, p = 0.688, η2 = 0.001). However, there was a significant difference in total motivation (F(1, 150) = 4.504, p = 0.035, η2 = 0.029) (Table 10).
Under the two time management regimens, the overall learning motivation of the self-regulated learning group was significantly higher than that of the guided-learning group; however, there was no significant difference in any individual motivational dimension. This result suggests that the time management method has no specific relationship with any single learning motivation that students gain in the course; nevertheless, giving students appropriate learning time remains the better method overall.

3.3.3. Help-Seeking Motivation of Two Teaching Approaches

As shown in the previous section, the self-regulated teaching method significantly improved students' overall learning motivation. In addition, one purpose of this study is to understand the effectiveness of this Code.org course for students' self-regulated learning, which is why we divided students into self-regulated and guided-learning groups.
The help-seeking items in the questionnaire, which can help us infer students' learning processes and time management, cover discussing with classmates and asking teachers for advice. For this reason, we analyzed the four help-seeking items (Q11–14) individually in order to understand whether students in this course find answers by themselves when solving tasks or are more inclined to discuss with classmates or ask teachers for advice.
The post-test scores of help-seeking motivation in the self-regulated group were Q11 (M = 5.19, SD = 1.06), Q12 (M = 4.56, SD = 1.36), Q13 (M = 5.03, SD = 1.15), and Q14 (M = 5.06, SD = 1.09). The scores of the guided-learning group were Q11 (M = 5.21, SD = 1.27), Q12 (M = 3.90, SD = 1.66), Q13 (M = 4.79, SD = 1.69), and Q14 (M = 5.07, SD = 1.33) (Table 11).
From the post-test results, the self-regulated group scored higher than the guided-learning group on questionnaire items Q12 and Q13, while the two groups' scores on Q11 and Q14 were very close. To eliminate the difference between the two groups before the course, we used the pre-test score as a covariate with which to conduct a one-way ANCOVA analysis of the scores of the two groups.
One-way ANCOVA was used to test the difference in motivation to seek help between the self-regulated and guided-learning groups, which required homogeneity of regression and variance assumptions.
The homogeneity-of-regression test showed no significant interaction between group and pre-test scores for the help-seeking items: Q11 (F(1, 149) = 0.242, p = 0.623), Q12 (F(1, 149) = 1.523, p = 0.219), Q13 (F(1, 149) = 3.795, p = 0.053), and Q14 (F(1, 149) = 0.043, p = 0.836). Therefore, this study excluded the influence of pre-test scores and tested the differences in post-test learning motivation between students in the self-regulated and guided-learning groups. Moreover, Levene's test showed no significant difference in the error variance of the dependent variable between the two groups for Q11 (F(1, 151) = 2.305, p = 0.131), Q12 (F(1, 151) = 2.239, p = 0.137), and Q14 (F(1, 151) = 3.565, p = 0.061), meeting the homogeneity of variance assumption; therefore, ANCOVA could be applied to test help-seeking on Q11, Q12, and Q14 between the two groups. However, Q13 showed a significant difference (F(1, 151) = 6.751, p = 0.010), and no square-root, logarithmic, reciprocal, or arcsine transformation could make it satisfy the homogeneity assumption. Its analytical results may therefore carry inflated Type I and Type II error rates and should be interpreted conservatively.
The results from the ANCOVA showed that post-test scores on Q12 were significantly higher in the self-regulated learning group than in the guided-learning group (F(1, 150) = 7.768, p = 0.006, η2 = 0.049). However, there were no significant differences between the two groups on the remaining help-seeking items: Q11 (F(1, 150) = 0.821, p = 0.366, η2 = 0.005), Q13 (F(1, 150) = 0.574, p = 0.450, η2 = 0.004), and Q14 (F(1, 150) = 0.028, p = 0.866, η2 = 0.000) (Table 12).
The results showed that the self-regulated group differed significantly from the guided-learning group on Q12, meaning that this group spent more time discussing with peers. Furthermore, there was no significant difference between the two groups on Q11 and Q13, which means that the two groups' learning behavior in completing the course or asking other students for advice should be similar. Q14 concerns the motivation to ask the teacher for advice, and there was no significant difference between the two groups in this regard, indicating that, in a self-regulated environment, asking teachers for advice remains an essential way for students to learn.

4. Discussion

4.1. Learning Outcome in Code.org Course

This study provided Code.org's courses to students as teaching materials. We screened out repeated key programming topics to avoid spending time on repetition and to better meet the time requirements of the course arrangement. The Bebras tasks evaluated students' learning outcomes in computational thinking. The significant improvement in the total score of all students indicates that the course selection method in this study is effective and suitable. Furthermore, algorithmic ability improved significantly, indicating that the selected Code.org courses use the programming concepts of sequences, loops, debugging, and conditionals to cultivate the logical thinking ability required to design algorithms.
However, the students' computational thinking abilities in decomposition, data interpretation, and data representation did not improve significantly. Code.org divides each course into multiple small tasks so that students can clearly understand each task's goal. However, this also prevents students from practicing the decomposition of complex tasks by themselves, which explains the lack of significant improvement in decomposition ability.
Regarding data interpretation, Code.org only provides programming blocks related to the topic for students to use when solving problems in a visual programming environment. Sometimes, a programming model with similar solutions is also provided, and students only need to perform several block replacements and modifications in the existing model. This course design method can help students to avoid thinking about the problem-solving rules and goals of the task from scratch, which dramatically reduces the learning time. However, it also reduces the learning opportunities for students to perform programming modeling by themselves.
Regarding data representation, some geometric figures or symbols are used in the Bebras task to present the problem. The Code.org tasks, by contrast, replace these graphics or symbols with cute characters and animals familiar to a cohort’s age group. This approach can attract students’ interest and improve learning motivation, but it also reduces the chance for students to become used to geometric figures or symbols. As a result, students took more time to answer these questions in the Bebras task and had no significant improvement in learning outcomes.
From the above, we find that Code.org's course design provides comprehensive assistance for learning programming, enabling students to receive enough training in programming and to significantly improve their algorithmic ability. However, this assistance may reduce the training of students' computational thinking abilities in decomposition, data interpretation, and data representation. To this end, we suggest retaining the existing courses to ensure students' basic training and adding later tasks, as in course 4 (we chose courses 2 and 3), with more complex tasks and harder challenges to comprehensively train students' computational thinking.

4.2. Learning Motivation in Code.org Course

From the learning motivation perspective, on the other hand, overall learning motivation improved significantly after the course, indicating that the course design is attractive to students and thereby enhances their interest in and attitude toward learning programming. The following paragraphs explain and discuss why extrinsic motivation, learning value, and self-efficacy improved significantly, while help-seeking improved but not significantly.
Regarding extrinsic motivation, in order to cultivate their self-regulated learning ability, students should also seek help when encountering difficulties during the learning process. In addition to asking the teacher during the course, students are also allowed to discuss work with each other to enable them to understand the learning progress of other students at the same time. Code.org courses are age-appropriate and package complex programming syntax so that students can achieve their goals through trial and error rather than writing from scratch. This course design should be relatively simple for students to follow, especially in the earlier course chapters. When students complete the task and find that programming is not too difficult to learn, they then try to complete it earlier than other students. This learning attitude can encourage students to motivate each other to learn programming.
Regarding learning value, Code.org uses cute animals or cartoon characters to approach students’ life experiences in design courses, making the purpose of the tasks more meaningful to students. When the skills or knowledge required to solve the problem are considered valuable, students will be more motivated to solve tasks to achieve better learning outcomes [105,106,123,124].
Regarding self-efficacy, Code.org does not provide programming blocks that are unrelated to the task in order to focus students on essential programming skills. This course uses a game-based design to guide students in solving problems and completing tasks. When students achieve good performances and grades in tasks, they will have more confidence and motivation in the ability of the course to promote their learning outcomes in programming [125,126,127].
Compared with the previous three motivations, which are based on students' programming experience, active help-seeking depends more on the learning environment and the educational policies of the student's institution. As the 108 curriculums have paid more attention to cultivating students' independent learning ability in recent years [18], students may already have had higher scores in the pre-test. Moreover, although students in this course could discuss work with each other, the point was to encourage students to learn to solve programming problems effectively rather than merely obtain answers from discussions. Seeking help is just one strategy in self-directed learning. Therefore, it is reasonable that the motivation to seek help did not show a significant difference.
In summary, this study provides a teaching model in elementary schools for reference in future computational thinking teaching courses. As discussed in the previous section, the selected Code.org courses successfully guide students into programming and significantly improve learning outcomes and interest in programming learning. The following paragraphs discuss how to improve computational thinking courses based on the results of this research.
According to the results of the learning outcomes in the previous section, we suggest adding later tasks to the course to promote more comprehensive computational thinking abilities. It should be noted that the difficulty of added tasks must be carefully assessed to ensure that they are not detrimental to self-efficacy [128]. Therefore, we still recommend retaining the tasks selected in this study to ensure that students have basic training in programming, after which teachers can set other, more complex tasks for students. This approach is also conducive to allowing comparison with and discussion alongside the results of this study.
In addition, we found that the Bebras task assessment has different learning objectives than the Code.org course. The Code.org courses focus on practicing algorithmic skills and using characters or animal situations to stimulate students’ interest in programming. On the other hand, solving Bebras tasks requires alternative computational thinking and involves more computer science-like problem types that use geometric and algebraic notation. Therefore, we recommend modifying the selection of the Bebras task as a learning outcome of the Code.org course. Practitioners should reduce the number of geometric and algebraic notation topics and focus on topics related to algorithmic ability so that Bebras’ assessments are more accurate for the Code.org course content.

4.3. Self-Regulated Learning in Code.org Course

We divided students into self-regulated and guided-learning groups in order to explore the time management factors of self-regulated learning strategies. In the guided-learning group, the time for students to solve tasks was limited according to the course plan to ensure that they completed all course content; the self-regulated group, by contrast, had no time limit and completed the tasks at their own pace. In this context, we now discuss whether the two groups completed a sufficient number of Code.org tasks to achieve effective learning outcomes.

4.3.1. Learning Outcomes and Motivation of the Two Groups

As we expected, the average course completion in the self-regulated group was about 25% lower than in the guided-learning group. This suggests that, although Code.org's curriculum guides student learning and engages students' motivation, self-regulated learning is significantly slower without teacher supervision. However, the results showed that the self-regulated group had significantly higher scores in algorithmic ability than the guided-learning group, as well as higher overall learning motivation. The time allotted by teachers in the guided-learning group therefore seems insufficient for students to learn computational thinking. Because it is difficult for teachers to set a suitable learning time for students at different levels, the self-regulated learning approach can improve students' learning outcomes and motivation more effectively. Accordingly, a self-regulated course should place the essential content in the first 75% of the program to ensure that students cover the complete range of course topics; the latter 25% can provide more in-depth advanced material for interested students.
Another possible explanation for the lower rate of learning progress is that students need more time to understand the tasks, and time constraints can prevent students from learning thoroughly. As indicated by the significant improvement in the algorithmic ability and overall learning motivation of the self-regulated group, students prefer to be able to adjust their learning time by themselves. In addition, following the deductions outlined in the previous section, the other three computational thinking abilities might not be enhanced because of the learning assistance designed into the Code.org course, which reduces students' opportunities to practice decomposing complex tasks, identifying the programming patterns a task requires, and familiarizing themselves with geometric figures and symbols. Since this issue stems from the design structure of the course, it is challenging to significantly grow these competencies even when students have more time to study, which aligns with our inference. On the other hand, although there was no significant difference in any individual motivation, the overall learning motivation of the self-regulated learning group was significantly higher than that of the guided-learning group. Students generally hope to have more learning time to better meet their learning needs, which explains why the self-regulated group achieved better results in both learning outcomes and motivation.

4.3.2. Help-Seeking Behavior of the Two Groups

Explaining the correlation between the learning period and the learning behavior of the two groups of students requires a discussion of this study's help-seeking questionnaire items. Help-seeking is a critical self-regulation ability that can keep students' learning from stagnating when they encounter difficulties. In particular, computational thinking is a problem-solving ability, and the process of solving problems is the most important part. Therefore, we sought to understand whether students think on their own or seek help from others while learning in the Code.org course, as this affects the computational thinking results of the Bebras test.
For this reason, we analyzed the four help-seeking items (Q11–14) in the questionnaire individually. We found that both groups of students improved in completing programming tasks with classmates (Q11, Q13) and in asking teachers for help (Q14), but there was no significant difference between the two groups. The frequency with which students asked for help was similar between the two groups. Moreover, both groups scored highly on the motivation to ask teachers (Q14), which shows that many students still regard teachers as the primary source of learning assistance, even in a self-regulated learning environment.
On the other hand, students in the self-regulated group discussed programming tasks more with their classmates (Q12) and achieved significant improvements in algorithmic ability [27], whereas the guided-learning group took less time to obtain answers. Under time pressure, students in the guided-learning group may have reduced discussion time or even sought answers directly instead of discussing. Asking for answers means students skip the thinking process, which may explain the difference in learning outcomes between the two groups. Future research could add a group that excludes help-seeking in order to compare its impact and verify this explanation.

5. Conclusions

This study investigated the effects of using a digital game-based programming curriculum to develop computational thinking among third- and fourth-grade students in Taiwan. The Ministry of Education has established the computer programming and computational thinking syllabus in the 108 curriculums and regards technical information and media literacy as essential to K-12 education [18]. For this purpose, this research combines the Code.org course, localized through Chinese translation, with a programming learning motivation scale validated with Taiwanese students in order to propose a teaching model of self-regulated programming learning.
Students' overall computational thinking and learning motivation improved significantly, showing the effectiveness of the courses selected in this study. The experiment comprised a self-regulated group and a guided-learning group whose time was managed by teachers. Students in the self-regulated group spent more time on discussions and achieved better learning outcomes on the Bebras tasks [27]. The results show that discussions benefit students in learning problem-solving skills, and thus self-regulated groups are more suitable for cultivating computational thinking.
We noted high student interest in the Code.org game-based course activities during class observation sessions. The course succeeded in helping third-grade learners to understand basic programming concepts while supporting the development of positive attitudes toward the topic [31,36,129]. Our results support the idea of using game-based learning to build computational thinking skills in young learners, which will assist their efforts to work with more advanced computer-related tasks in later grades.
We provide full details of selected Code.org courses and use quantitative and objective learning outcomes assessments. Future researchers can use this study as a teaching model, carrying out repeated verification and follow-up analysis of the experimental results to evaluate whether this course design is suitable as a formal course for elementary school computer science education.

6. Limitations and Further Research

This study recruited as research subjects only six third- and fourth-grade classes whose schedules could accommodate the content and course time of the experimental teaching. Regarding course content, both groups used the same Code.org course, in which students freely construct a solution procedure for a set task goal; this corresponds to the solution-procedure level of autonomy, a relatively low degree of autonomy [130]. These differences must therefore be considered carefully when applying the findings of this study to other groups, in order to avoid misinterpretation.
The significant improvements in learning outcomes and motivation indicate that Code.org’s curriculum is appropriate for elementary school students; however, the curriculum could better support the computational thinking skills of decomposition, data interpretation, and data representation. We therefore suggest keeping the same course content, so as to reproduce this study’s results on learning motivation, while adding more complex tasks in later courses to strengthen the teaching model and give students complete training in computational thinking.
In other words, self-regulated learning environments require more course periods and more content. Owing to each class’s schedule and other constraints, the experimental process included only eight lessons, with Bebras tests taken one week before and one week after the trial. We hope that future research will build on this study’s results and experience, exploiting the flexibility of alternative curriculums to design long-term learning across semesters or study periods and to track the relationship between students’ programming and their future learning.
Furthermore, we found that the Bebras tasks differ from the Code.org course objectives, which may create gaps between what students learn and what is assessed. Ideally, Code.org would have an associated assessment tool analogous to Dr. Scratch in the Scratch environment. However, a series of planned courses such as Code.org’s is more suitable for establishing a long-term teaching model, reducing the cost of course design and facilitating research analysis and improvement. We hope that future research will develop evaluation tools based on the Code.org environment to better evaluate the results of experiments such as this one.
Finally, the programming learning motivation questionnaire showed that students in the self-regulated group spent more time in discussions. By contrast, under time pressure, the guided-learning group may have sought answers directly and had less time for discussion, resulting in poorer learning outcomes than the self-regulated group. Future research could add a group in which help-seeking is excluded to examine the impact of peer discussions.

Author Contributions

Conceptualization, C.-Y.C. and S.-W.S.; methodology, C.-Y.C., S.-W.S. and Y.-Z.L.; validation, C.-Y.C. and S.-W.S.; formal analysis, C.-Y.C.; investigation, Y.-Z.L.; resources, Y.-Z.L.; data curation, C.-Y.C. and Y.-Z.L.; writing—original draft preparation, C.-Y.C.; writing—review and editing, C.-Y.C.; visualization, C.-Y.C.; supervision, C.-T.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data are available from the corresponding author on reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Google. Future of the Classroom—Emerging Trends in K-12 Education Global Edition; Google: Mountain View, CA, USA, 2019. [Google Scholar]
  2. Cuny, J.; Snyder, L.; Wing, J.M. Demystifying Computational Thinking for Non-Computer Scientists. Unpublished Manuscript in Progress. 2010. Available online: http://www.cs.cmu.edu/~CompThink/resources/TheLinkWing.pdf (accessed on 20 June 2023).
  3. Grover, S.; Pea, R. Computational thinking in K–12: A review of the state of the field. Educ. Res. 2013, 42, 38–43. [Google Scholar] [CrossRef]
  4. Hemmendinger, D. A plea for modesty. Acm Inroads 2010, 1, 4–7. [Google Scholar] [CrossRef]
  5. Barr, V.; Stephenson, C. Bringing computational thinking to K-12: What is Involved and what is the role of the computer science education community? Acm Inroads 2011, 2, 48–54. [Google Scholar] [CrossRef]
  6. Aho, A.V. Computation and computational thinking. Comput. J. 2012, 55, 832–835. [Google Scholar] [CrossRef]
  7. García-Penalvo, F.J. What computational thinking is. J. Inf. Technol. Res. 2016, 9, v–viii. [Google Scholar]
  8. Llorens Largo, F. Dicen por Ahí… Que la Nueva Alfabetización Pasa por la Programación. Revis. Rev. Investig. Docencia Univ. Inf. 2015, 8, 11–14. [Google Scholar]
  9. Wing, J.M. Computational thinking. Commun. ACM 2006, 49, 33–35. [Google Scholar] [CrossRef]
  10. Wing, J. Computational thinking’s influence on research and education for all. Ital. J. Educ. Technol. 2017, 25, 7–14. [Google Scholar]
  11. Hsu, T.-C.; Chang, S.-C.; Hung, Y.-T. How to learn and how to teach computational thinking: Suggestions based on a review of the literature. Comput. Educ. 2018, 126, 296–310. [Google Scholar] [CrossRef]
  12. Angeli, C.; Valanides, N. Developing young children’s computational thinking with educational robotics: An interaction effect between gender and scaffolding strategy. Comput. Hum. Behav. 2020, 105, 105954. [Google Scholar] [CrossRef]
  13. Bocconi, S.; Chioccariello, A.; Dettori, G.; Ferrari, A.; Engelhardt, K.; Kampylis, P.; Punie, Y. Developing computational thinking in compulsory education. Eur. Comm. JRC Sci. Policy Rep. 2016, 68, JRC104188. [Google Scholar]
  14. ISTE; CSTA. Operational Definition of Computational Thinking for K-12 Education. 2011. Available online: https://cdn.iste.org/www-root/Computational_Thinking_Operational_Definition_ISTE.pdf (accessed on 20 June 2023).
  15. DfE. National Curriculum in England: Computing Programmes of Study. 2013. Available online: https://www.gov.uk/government/publications/national-curriculum-in-england-computing-programmes-of-study (accessed on 20 June 2023).
  16. ACARA. The Shape of the Australian Curriculum: Version 5.0. 2020. Available online: https://www.acara.edu.au/docs/default-source/curriculum/the_shape_of_the_australian_curriculum_version5_for-website.pdf (accessed on 20 June 2023).
  17. TKI. Technology in the New Zealand Curriculum. 2018. Available online: https://nzcurriculum.tki.org.nz/The-New-Zealand-Curriculum/Technology/ (accessed on 20 June 2023).
  18. NAER. Curriculum Guidelines of 12-Year Basic Education for Elementary, Junior High Schools and General Senior High Schools–Technology. 2018. Available online: https://www.naer.edu.tw/eng/PageSyllabus?fid=148 (accessed on 20 June 2023).
  19. DiSessa, A.A. Changing Minds: Computers, Learning, and Literacy; MIT Press: Cambridge, MA, USA, 2001. [Google Scholar]
  20. Hockly, N. Digital literacies. ELT J. 2012, 66, 108–112. [Google Scholar] [CrossRef]
  21. Vee, A. Understanding computer programming as a literacy. Lit. Compos. Stud. 2013, 1, 42–64. [Google Scholar] [CrossRef]
  22. Hambrusch, S.; Hoffmann, C.; Korb, J.T.; Haugan, M.; Hosking, A.L. A multidisciplinary approach towards computational thinking for science majors. ACM SIGCSE Bull. 2009, 41, 183–187. [Google Scholar] [CrossRef]
  23. Rubinstein, A.; Chor, B. Computational thinking in life science education. PLoS Comput. Biol. 2014, 10, e1003897. [Google Scholar] [CrossRef]
  24. Atmatzidou, S.; Demetriadis, S. Advancing students’ computational thinking skills through educational robotics: A study on age and gender relevant differences. Robot. Auton. Syst. 2016, 75, 661–670. [Google Scholar] [CrossRef]
  25. Bers, M.U.; Flannery, L.; Kazakoff, E.R.; Sullivan, A. Computational thinking and tinkering: Exploration of an early childhood robotics curriculum. Comput. Educ. 2014, 72, 145–157. [Google Scholar] [CrossRef]
  26. Brennan, K.; Resnick, M. New frameworks for studying and assessing the development of computational thinking. In Proceedings of the 2012 Annual Meeting of the American Educational Research Association, Vancouver, BC, Canada, 13–17 April 2012; p. 25. [Google Scholar]
  27. Lye, S.Y.; Koh, J.H.L. Review on teaching and learning of computational thinking through programming: What is next for K-12? Comput. Hum. Behav. 2014, 41, 51–61. [Google Scholar] [CrossRef]
  28. Oluk, A.; Korkmaz, Ö. Comparing Students’ Scratch Skills with Their Computational Thinking Skills in Terms of Different Variables. Int. J. Mod. Educ. Comput. Sci. 2016, 8, 1–7. [Google Scholar] [CrossRef]
  29. Karakasis, C.; Xinogalos, S. BlocklyScript: Design and pilot evaluation of an RPG platform game for cultivating computational thinking skills to young students. Inform. Educ. 2020, 19, 641–668. [Google Scholar] [CrossRef]
  30. Rijke, W.J.; Bollen, L.; Eysink, T.H.; Tolboom, J.L. Computational thinking in primary school: An examination of abstraction and decomposition in different age groups. Inform. Educ. 2018, 17, 77–92. [Google Scholar] [CrossRef]
  31. Zapata-Caceres, M.; Martin, E.; Roman-Gonzalez, M. Collaborative Game-Based Environment and Assessment Tool for Learning Computational Thinking in Primary School: A Case Study. IEEE Trans. Learn. Technol. 2021, 14, 576–589. [Google Scholar] [CrossRef]
  32. Zapata-Cáceres, M.; Martín-Barroso, E.; Román-González, M. Computational thinking test for beginners: Design and content validation. In Proceedings of the 2020 IEEE Global Engineering Education Conference (EDUCON), Porto, Portugal, 27–30 April 2020; pp. 1905–1914. [Google Scholar]
  33. Fagerlund, J.; Häkkinen, P.; Vesisenaho, M.; Viiri, J. Computational thinking in programming with Scratch in primary schools: A systematic review. Comput. Appl. Eng. Educ. 2021, 29, 12–28. [Google Scholar] [CrossRef]
  34. Nouri, J.; Zhang, L.; Mannila, L.; Norén, E. Development of computational thinking, digital competence and 21st century skills when learning programming in K-9. Educ. Inq. 2020, 11, 1–17. [Google Scholar] [CrossRef]
  35. Zhang, L.; Nouri, J. A systematic review of learning computational thinking through Scratch in K-9. Comput. Educ. 2019, 141, 103607. [Google Scholar] [CrossRef]
  36. Kazimoglu, C. Enhancing confidence in using computational thinking skills via playing a serious game: A case study to increase motivation in learning computer programming. IEEE Access 2020, 8, 221831–221851. [Google Scholar] [CrossRef]
  37. Du, J.; Wimmer, H.; Rada, R. “Hour of Code”: Can It Change Students’ Attitudes Toward Programming? J. Inf. Technol. Educ. Innov. Pract. 2016, 15, 53. [Google Scholar] [CrossRef]
  38. Wing, J.M. Computational thinking and thinking about computing. Philos. Trans. R. Soc. A Math. Phys. Eng. Sci. 2008, 366, 3717–3725. [Google Scholar] [CrossRef]
  39. Bocconi, S.; Chioccariello, A.; Earp, J. The Nordic Approach to Introducing Computational Thinking and Programming in Compulsory Education. Report Prepared for the Nordic@BETT2018 Steering Group. 2018, pp. 397–400. Available online: https://doi.org/10.17471/54007 (accessed on 20 June 2023).
  40. Stephens, M. Embedding algorithmic thinking more clearly in the mathematic curriculum. In ICME 24 School Mathematics Curriculum Reforms: Challenges, Changes and Opportunities; The University of Melbourne: Melbourne, Australia, 2018. [Google Scholar]
  41. Kumar, D. Digital playgrounds for early computing education. ACM Inroads 2014, 5, 20–21. [Google Scholar] [CrossRef]
  42. Kalelioğlu, F. A new way of teaching programming skills to K-12 students: Code.org. Comput. Hum. Behav. 2015, 52, 200–210. [Google Scholar] [CrossRef]
  43. Feijóo-García, P.G.; Kapoor, A.; Gardner-McCune, C.; Ragan, E. Effects of a Block-Based Scaffolded Tool on Students’ Introduction to Hierarchical Data Structures. IEEE Trans. Educ. 2021, 65, 191–199. [Google Scholar] [CrossRef]
  44. Park, K.; Mott, B.; Lee, S.; Glazewski, K.; Scribner, J.A.; Ottenbreit-Leftwich, A.; Hmelo-Silver, C.E.; Lester, J. Designing a Visual Interface for Elementary Students to Formulate AI Planning Tasks. In Proceedings of the 2021 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), St. Louis, MO, USA, 10–13 October 2021; pp. 1–9. [Google Scholar]
  45. Jung, I.; Choi, J.; Kim, I.-J.; Choi, C. Interactive learning environment for practical programming language based on web service. In Proceedings of the 2016 15th International Conference on Information Technology Based Higher Education and Training (ITHET), Istanbul, Turkey, 8–10 September 2016; pp. 1–7. [Google Scholar]
  46. Brennan, K.; Resnick, M. Stories from the scratch community: Connecting with ideas, interests, and people. In Proceedings of the 44th ACM Technical Symposium on Computer Science Education, Denver, CO, USA, 6–9 March 2013; pp. 463–464. [Google Scholar]
  47. Resnick, M.; Maloney, J.; Monroy-Hernández, A.; Rusk, N.; Eastmond, E.; Brennan, K.; Millner, A.; Rosenbaum, E.; Silver, J.; Silverman, B. Scratch: Programming for all. Commun. ACM 2009, 52, 60–67. [Google Scholar] [CrossRef]
  48. Moskal, B.; Lurie, D.; Cooper, S. Evaluating the effectiveness of a new instructional approach. In Proceedings of the 35th SIGCSE Technical Symposium on Computer Science Education, Norfolk, VA, USA, 3–7 March 2004; pp. 75–79. [Google Scholar]
  49. Dann, W.; Cosgrove, D.; Slater, D.; Culyba, D.; Cooper, S. Mediated transfer: Alice 3 to java. In Proceedings of the 43rd ACM Technical Symposium on Computer Science Education, Raleigh, NC, USA, 29 February–3 March 2012; pp. 141–146. [Google Scholar]
  50. Pokress, S.C.; Veiga, J.J.D. MIT App Inventor: Enabling personal mobile computing. arXiv 2013, arXiv:1310.2830. [Google Scholar]
  51. Patton, E.W.; Tissenbaum, M.; Harunani, F. MIT app inventor: Objectives, design, and development. In Computational Thinking Education; Springer: Singapore, 2019; pp. 31–49. [Google Scholar]
  52. Gökçe, S.; Yenmez, A.A. Ingenuity of scratch programming on reflective thinking towards problem solving and computational thinking. Educ. Inf. Technol. 2022, 28, 5493–5517. [Google Scholar] [CrossRef]
  53. Su, Y.-S.; Shao, M.; Zhao, L. Effect of mind mapping on creative thinking of children in scratch visual programming education. J. Educ. Comput. Res. 2022, 60, 906–929. [Google Scholar] [CrossRef]
  54. Yulianti, D.; Sugianto, S.; Ngafidin, K. Scratch Assisted Physics Learning with a STEM Approach in the Pandemic Era to Develop 21st Century Learning Skills. J. Pendidik. IPA Indones. 2022, 11, 185–194. [Google Scholar] [CrossRef]
  55. Garay, I.S.; Quintana, M.B. Creative Thinking in Primary Students with Scratch. Developing skills for the 21st Century in Chile. In Proceedings of the INTED2018 Proceedings, Valencia, Spain, 5–7 March 2018; pp. 9405–9412. [Google Scholar]
  56. Jiang, B.; Li, Z. Effect of Scratch on computational thinking skills of Chinese primary school students. J. Comput. Educ. 2021, 8, 505–525. [Google Scholar] [CrossRef]
  57. Theodoropoulos, A.; Antoniou, A.; Lepouras, G. The little ones, the big ones and the code: Utilization of digital educational games in primary school pupils. In Proceedings of the 7th Conference on Informatics in Education (CIE 2015), Greek Computer Society (GCS), Piraeus, Greece, 30 November–4 December 2015; pp. 40–49. [Google Scholar]
  58. Lambić, D.; Đorić, B.; Ivakić, S. Investigating the effect of the use of code.org on younger elementary school students’ attitudes towards programming. Behav. Inf. Technol. 2021, 40, 1784–1795. [Google Scholar] [CrossRef]
  59. Kim, A.S.; Ko, A.J. A pedagogical analysis of online coding tutorials. In Proceedings of the 2017 ACM SIGCSE Technical Symposium on Computer Science Education, Seattle, WA, USA, 8–11 March 2017; pp. 321–326. [Google Scholar]
  60. Ching, Y.-H.; Hsu, Y.-C.; Baldwin, S. Developing computational thinking with educational technologies for young learners. TechTrends 2018, 62, 563–573. [Google Scholar] [CrossRef]
  61. Greene, J.A.; Copeland, D.Z.; Deekens, V.M.; Seung, B.Y. Beyond knowledge: Examining digital literacy’s role in the acquisition of understanding in science. Comput. Educ. 2018, 117, 141–159. [Google Scholar] [CrossRef]
  62. Kizilcec, R.F.; Pérez-Sanagustín, M.; Maldonado, J.J. Self-regulated learning strategies predict learner behavior and goal attainment in Massive Open Online Courses. Comput. Educ. 2017, 104, 18–33. [Google Scholar] [CrossRef]
  63. Phillips, B.N.; Turnbull, B.J.; He, F.X. Assessing readiness for self-directed learning within a non-traditional nursing cohort. Nurse Educ. Today 2015, 35, e1–e7. [Google Scholar] [CrossRef]
  64. Chaudhary, V.; Agrawal, V.; Sureka, P.; Sureka, A. An experience report on teaching programming and computational thinking to elementary level children using lego robotics education kit. In Proceedings of the 2016 IEEE Eighth International Conference on Technology for Education (T4E), Mumbai, India, 2–4 December 2016; pp. 38–41. [Google Scholar]
  65. Zimmerman, B.J. Attaining self-regulation: A social cognitive perspective. In Handbook of Self-Regulation; Elsevier: Amsterdam, The Netherlands, 2000; pp. 13–39. [Google Scholar]
  66. Van Gog, T.; Hoogerheide, V.; Van Harsel, M. The role of mental effort in fostering self-regulated learning with problem-solving tasks. Educ. Psychol. Rev. 2020, 32, 1055–1072. [Google Scholar] [CrossRef]
  67. Wolters, C.A.; Won, S.; Hussain, M. Examining the relations of time management and procrastination within a model of self-regulated learning. Metacogn. Learn. 2017, 12, 381–399. [Google Scholar] [CrossRef]
  68. Cho, M.-H.; Heron, M.L. Self-regulated learning: The role of motivation, emotion, and use of learning strategies in students’ learning experiences in a self-paced online mathematics course. Distance Educ. 2015, 36, 80–99. [Google Scholar] [CrossRef]
  69. Milligan, C.; Littlejohn, A. How health professionals regulate their learning in massive open online courses. Internet High. Educ. 2016, 31, 113–121. [Google Scholar] [CrossRef] [PubMed]
  70. Barnard, L.; Lan, W.Y.; To, Y.M.; Paton, V.O.; Lai, S.-L. Measuring self-regulation in online and blended learning environments. Internet High. Educ. 2009, 12, 1–6. [Google Scholar] [CrossRef]
  71. Nuraisa, D.; Saleh, H.; Raharjo, S. Profile of Students’ Computational Thinking Based on Self-Regulated Learning in Completing Bebras Tasks. Prima J. Pendidik. Mat. 2021, 5, 40–50. [Google Scholar] [CrossRef]
  72. OECD. PISA 2018 Assessment and Analytical Framework; OECD Publishing: Paris, France, 2019. [Google Scholar]
  73. Pintrich, P.R.; Smith, D.A.; Garcia, T.; McKeachie, W.J. Reliability and predictive validity of the Motivated Strategies for Learning Questionnaire (MSLQ). Educ. Psychol. Meas. 1993, 53, 801–813. [Google Scholar] [CrossRef]
  74. Panadero, E. A review of self-regulated learning: Six models and four directions for research. Front. Psychol. 2017, 8, 422. [Google Scholar] [CrossRef]
  75. Duncan, T.G.; McKeachie, W.J. The making of the motivated strategies for learning questionnaire. Educ. Psychol. 2005, 40, 117–128. [Google Scholar] [CrossRef]
  76. Tuan, H.L.; Chin, C.C.; Shieh, S.H. The development of a questionnaire to measure students’ motivation towards science learning. Int. J. Sci. Educ. 2005, 27, 639–654. [Google Scholar] [CrossRef]
  77. Dermitzaki, I.; Stavroussi, P.; Vavougios, D.; Kotsis, K.T. Adaptation of the Students’ Motivation towards Science Learning (SMTSL) questionnaire in the Greek language. Eur. J. Psychol. Educ. 2013, 28, 747–766. [Google Scholar] [CrossRef]
  78. Shaakumeni, S.N.; Csapó, B. A cross-cultural validation of adapted questionnaire for assessing motivation to learn science. Afr. J. Res. Math. Sci. Technol. Educ. 2018, 22, 340–350. [Google Scholar] [CrossRef]
  79. Chan, Y.; Norlizah, C. Students’ motivation towards science learning and students’ science achievement. Int. J. Acad. Res. Progress. Educ. Dev. 2017, 6, 174–189. [Google Scholar]
  80. Cavas, P. Factors affecting the motivation of Turkish primary students for science learning. Sci. Educ. Int. 2011, 22, 31–42. [Google Scholar]
  81. Zimmerman, B.J. Investigating self-regulation and motivation: Historical background, methodological developments, and future prospects. Am. Educ. Res. J. 2008, 45, 166–183. [Google Scholar] [CrossRef]
  82. Newman, R.S. Children’s help-seeking in the classroom: The role of motivational factors and attitudes. J. Educ. Psychol. 1990, 82, 71. [Google Scholar] [CrossRef]
  83. Karabenick, S.A.; Berger, J.-L. Help Seeking as a Self-Regulated Learning Strategy; IAP Information Age Publishing: Charlotte, NC, USA, 2013. [Google Scholar]
  84. Nelson-Le Gall, S. Help-seeking: An understudied problem-solving skill in children. Dev. Rev. 1981, 1, 224–246. [Google Scholar] [CrossRef]
  85. Ryan, A.M.; Shin, H. Help-seeking tendencies during early adolescence: An examination of motivational correlates and consequences for achievement. Learn. Instr. 2011, 21, 247–256. [Google Scholar] [CrossRef]
  86. Ryan, A.M.; Pintrich, P.R. “Should I ask for help?” The role of motivation and attitudes in adolescents’ help seeking in math class. J. Educ. Psychol. 1997, 89, 329. [Google Scholar]
  87. Dagienė, V. Information technology contests—Introduction to computer science in an attractive way. Inform. Educ.-Int. J. 2006, 5, 37–46. [Google Scholar] [CrossRef]
  88. Cartelli, A.; Dagiene, V.; Futschek, G. Bebras contest and digital competence assessment: Analysis of frameworks. Int. J. Digit. Lit. Digit. Competence 2010, 1, 24–39. [Google Scholar] [CrossRef]
  89. Dagienė, V.; Futschek, G. Bebras international contest on informatics and computer literacy: Criteria for good tasks. In Proceedings of the International Conference on Informatics in Secondary Schools-Evolution and Perspectives, Vienna, Austria, 26–28 September 2008; pp. 19–30. [Google Scholar]
  90. Dagiene, V.; Stupuriene, G. Informatics education based on solving attractive tasks through a contest. KEYCIT 2014 Key Competencies Inform. ICT 2015, 7, 97. [Google Scholar]
  91. Dagienė, V.; Pelikis, E.; Stupurienė, G. Introducing Computational Thinking through a Contest on Informatics: Problem-solving and Gender Issues. Inf. Moksl. Inf. Sci. 2015, 73, 55–63. [Google Scholar]
  92. Duncan, C.; Bell, T. A pilot computer science and programming course for primary school students. In Proceedings of the Workshop in Primary and Secondary Computing Education, London, UK, 9–11 September 2015; pp. 39–48. [Google Scholar]
  93. Dagienė, V.; Sentance, S. It’s computational thinking! Bebras tasks in the curriculum. In Proceedings of the International Conference on Informatics in Schools: Situation, Evolution, and Perspectives, Vienna, Austria, 26–28 September 2016; pp. 28–39. [Google Scholar]
  94. Bezáková, D.; Winczer, M. Teaching theoretical informatics to secondary school informatics teachers. In Proceedings of the International Conference on Informatics in Schools: Situation, Evolution, and Perspectives, Bratislava, Slovakia, 26–29 October 2011; pp. 117–128. [Google Scholar]
  95. Gujberova, M.; Kalas, I. Designing productive gradations of tasks in primary programming education. In Proceedings of the 8th Workshop in Primary and Secondary Computing Education, Aarhus, Denmark, 11–13 November 2013; pp. 108–117. [Google Scholar]
  96. Buckley, P.; Doyle, E. Gamification and student motivation. Interact. Learn. Environ. 2016, 24, 1162–1175. [Google Scholar] [CrossRef]
  97. Habgood, M.J.; Ainsworth, S.E. Motivating children to learn effectively: Exploring the value of intrinsic integration in educational games. J. Learn. Sci. 2011, 20, 169–206. [Google Scholar] [CrossRef]
  98. Ames, C.A. Motivation: What teachers need to know. Teach. Coll. Rec. 1990, 91, 409–421. [Google Scholar] [CrossRef]
  99. Xiao, L.-J. A Study on Development and Applications of Learning Motivation Scale for Programming Courses in Universities. Unpublished Master’s Thesis, Da-Yeh University, Changhua, Taiwan, 2017. [Google Scholar]
  100. Heppner, P.P.; Petersen, C.H. The development and implications of a personal problem-solving inventory. J. Couns. Psychol. 1982, 29, 66. [Google Scholar] [CrossRef]
  101. Ryan, R.M.; Deci, E.L. Intrinsic and extrinsic motivations: Classic definitions and new directions. Contemp. Educ. Psychol. 2000, 25, 54–67. [Google Scholar] [CrossRef]
  102. Lin, Y.-G.; McKeachie, W.J.; Kim, Y.C. College student intrinsic and/or extrinsic motivation and learning. Learn. Individ. Differ. 2003, 13, 251–258. [Google Scholar] [CrossRef]
  103. Pintrich, P.R.; Schunk, D.H. Motivation in Education: Theory, Research, and Applications, 2nd ed.; Merrill/Prentice Hall: Upper Saddle River, NJ, USA, 2002. [Google Scholar]
  104. Wigfield, A.; Eccles, J.S. Expectancy–value theory of achievement motivation. Contemp. Educ. Psychol. 2000, 25, 68–81. [Google Scholar] [CrossRef]
  105. Davis, F.D.; Bagozzi, R.P.; Warshaw, P.R. User acceptance of computer technology: A comparison of two theoretical models. Manag. Sci. 1989, 35, 982–1003. [Google Scholar] [CrossRef]
  106. Venkatesh, V.; Davis, F.D. A model of the antecedents of perceived ease of use: Development and test. Decis. Sci. 1996, 27, 451–481. [Google Scholar] [CrossRef]
  107. Hu, Y.; Su, C.-Y.; Fu, A. Factors influencing younger adolescents’ intention to use game-based programming learning: A multigroup analysis. Educ. Inf. Technol. 2022, 27, 8203–8233. [Google Scholar] [CrossRef]
  108. Cheng, Y.-M.; Lou, S.-J.; Kuo, S.-H.; Shih, R.-C. Investigating elementary school students’ technology acceptance by applying digital game-based learning to environmental education. Australas. J. Educ. Technol. 2013, 29, 96–110. [Google Scholar] [CrossRef]
  109. Cheng, G. Exploring factors influencing the acceptance of visual programming environment among boys and girls in primary schools. Comput. Hum. Behav. 2019, 92, 361–372. [Google Scholar] [CrossRef]
  110. Shiue, Y.-M.; Hsu, Y.-C.; Liang, Y.-C. Modeling the continuance usage intention of game-based learning in the context of collaborative learning. In Proceedings of the 2017 International Conference on Applied System Innovation (ICASI), Sapporo, Japan, 13–17 May 2017; pp. 1106–1109. [Google Scholar]
  111. Bandura, A. Self-Efficacy: The Exercise of Control; W. H. Freeman and Company: New York, NY, USA, 1997. [Google Scholar]
  112. Compeau, D.R.; Higgins, C.A. Application of social cognitive theory to training for computer skills. Inf. Syst. Res. 1995, 6, 118–143. [Google Scholar] [CrossRef]
  113. Li, Y.; Duan, Y.; Fu, Z.; Alford, P. An empirical study on behavioural intention to reuse e-learning systems in rural China. Br. J. Educ. Technol. 2012, 43, 933–948. [Google Scholar] [CrossRef]
  114. Chien, T.C. Computer self-efficacy and factors influencing e-learning effectiveness. Eur. J. Train. Dev. 2012, 36, 670–686. [Google Scholar] [CrossRef]
  115. Lim, H.; Lee, S.-G.; Nam, K. Validating E-learning factors affecting training effectiveness. Int. J. Inf. Manag. 2007, 27, 22–35. [Google Scholar] [CrossRef]
  116. Zimmerman, B.J.; Pons, M.M. Development of a structured interview for assessing student use of self-regulated learning strategies. Am. Educ. Res. J. 1986, 23, 614–628. [Google Scholar] [CrossRef]
  117. Zimmerman, B.J.; Martinez-Pons, M. Construct validation of a strategy model of student self-regulated learning. J. Educ. Psychol. 1988, 80, 284. [Google Scholar] [CrossRef]
  118. Zimmerman, B.J.; Martinez-Pons, M. Student differences in self-regulated learning: Relating grade, sex, and giftedness to self-efficacy and strategy use. J. Educ. Psychol. 1990, 82, 51. [Google Scholar] [CrossRef]
  119. Karabenick, S.A. Seeking help in large college classes: A person-centered approach. Contemp. Educ. Psychol. 2003, 28, 37–58. [Google Scholar] [CrossRef]
  120. Anthonysamy, L.; Koo, A.-C.; Hew, S.-H. Self-regulated learning strategies and non-academic outcomes in higher education blended learning environments: A one decade review. Educ. Inf. Technol. 2020, 25, 3677–3704. [Google Scholar] [CrossRef]
  121. Martín-Arbós, S.; Castarlenas, E.; Dueñas, J.-M. Help-seeking in an academic context: A systematic review. Sustainability 2021, 13, 4460. [Google Scholar] [CrossRef]
  122. Wu, J.-Y. Learning analytics on structured and unstructured heterogeneous data sources: Perspectives from procrastination, help-seeking, and machine-learning defined cognitive engagement. Comput. Educ. 2021, 163, 104066. [Google Scholar] [CrossRef]
  123. Jarrah, A.M.; Almassri, H.; Johnson, J.D.; Wardat, Y. Assessing the impact of digital games-based learning on students’ performance in learning fractions using (ABACUS) software application. EURASIA J. Math. Sci. Technol. Educ. 2022, 18, em2159. [Google Scholar]
  124. Dayo, N.A.; Alvi, U.; Asad, M.M. Mechanics of digital mathematics games for learning of problem-solving: An extensive literature review. In Proceedings of the 2020 International Conference on Emerging Trends in Smart Technologies (ICETST), Karachi, Pakistan, 26–27 March 2020; pp. 1–6. [Google Scholar]
  125. Molins-Ruano, P.; Sevilla, C.; Santini, S.; Haya, P.A.; Rodríguez, P.; Sacha, G. Designing videogames to improve students’ motivation. Comput. Hum. Behav. 2014, 31, 571–579. [Google Scholar] [CrossRef]
  126. Zhao, W.; Shute, V.J. Can playing a video game foster computational thinking skills? Comput. Educ. 2019, 141, 103633. [Google Scholar] [CrossRef]
  127. Lee, M.J. Gidget: An online debugging game for learning and engagement in computing education. In Proceedings of the 2014 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), Melbourne, Australia, 28 July–1 August 2014; pp. 193–194. [Google Scholar]
  128. Falloon, G. An analysis of young students’ thinking when completing basic coding tasks using Scratch Jnr. On the iPad. J. Comput. Assist. Learn. 2016, 32, 576–593. [Google Scholar] [CrossRef]
  129. Snodgrass, M.R.; Israel, M.; Reese, G.C. Instructional supports for students with disabilities in K-5 computing: Findings from a cross-case analysis. Comput. Educ. 2016, 100, 1–17. [Google Scholar] [CrossRef]
  130. Carlborg, N.; Tyrén, M.; Heath, C.; Eriksson, E. The scope of autonomy when teaching computational thinking in primary school. Int. J. Child-Comput. Interact. 2019, 21, 130–139. [Google Scholar] [CrossRef]
Table 1. Experimental procedure.

| Group | Week 1 | Weeks 2–9 | Week 10 |
|---|---|---|---|
| Self-regulated group | Pre-test (Bebras test and questionnaire) | Complete Code.org tasks on students’ own time | Post-test (Bebras test and questionnaire) |
| Guided-learning group | Pre-test (Bebras test and questionnaire) | Complete Code.org tasks in a limited time | Post-test (Bebras test and questionnaire) |
Table 2. Participant numbers and groups.

| Group | Gender | Students | Absent | Incomplete | Valid | Group Total |
|---|---|---|---|---|---|---|
| Self-regulated group | Male | 47 | −5 | −4 | 38 | 72 |
| Self-regulated group | Female | 38 | −2 | −2 | 34 | |
| Guided-learning group | Male | 47 | −2 | — | 45 | 81 |
| Guided-learning group | Female | 36 | — | — | 36 | |
Table 3. Code.org course selection and details.

| Course No. | Lesson No. | Theme and Target Concept | Tasks |
|---|---|---|---|
| 2 | 3 | Maze: Sequence | 9 |
| 2 | 6 | Maze: Loops | 12 |
| 2 | 8 | Bee: Loops | 13 |
| 2 | 10 | Bee: Debugging | 11 |
| 2 | 13 | Bee: Conditionals | 13 |
| 3 | 2 | Maze | 13 |
| 3 | 8 | Maze: Conditionals | 11 |
| 3 | 6 | Bee: Functions | 11 |
| 3 | 7 | Bee: Conditionals | 9 |
| 3 | 12 | Farmer: While Loops | 9 |
| 3 | 13 | Bee: Nested Loops | 12 |
| 3 | 14 | Bee: Debugging | 12 |
| Total | 12 lessons | | 135 |
Table 4. Bebras task difficulty and corresponding computational thinking aptitudes.

| No. | Task | Difficulty Level | Score (Deduct) |
|---|---|---|---|
| 1 | Ice Cream Machine | A | 8 (−2) |
| 2 | Bebras Painting | A | 8 (−2) |
| 3 | Shelf Sort | A | 8 (−2) |
| 4 | Bottles | B | 10 (−2.5) |
| 5 | Tube System | B | 10 (−2.5) |
| 6 | Car Trip | B | 10 (−2.5) |
| 7 | Blossom | B | 10 (−2.5) |
| 8 | Party Guests | C | 12 (−3) |
| 9 | Mazes | C | 12 (−3) |
| 10 | Secret Recipe | C | 12 (−3) |
| Maximum Score | | | 100 |

Note: According to the Bebras guide, computational thinking ability consists of 4 aptitudes: decomposition (Dec.), data interpretation (Int.), data representation (Rep.), and algorithms (Alg.); each task is marked with ✓ for the aptitudes required to complete it. The maximum scores attainable for the four aptitudes are 50 (Dec.), 60 (Int.), 48 (Rep.), and 44 (Alg.).
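To illustrate the scoring scheme in Table 4, the sketch below computes a student’s total Bebras score: a correct answer earns the task’s full score, and a wrong answer incurs the bracketed deduction. How unanswered items are treated, and whether negative totals are floored at zero, are our assumptions for illustration rather than details specified by the scoring guide.

```python
# Illustrative scoring for the ten Bebras tasks in Table 4: correct answers
# earn the full task score, wrong answers deduct the bracketed penalty.
# Treating unanswered items as zero is our assumption, not a stated rule.
from typing import Optional

# (score if correct, deduction if wrong), in task order 1-10.
TASKS = [(8, 2), (8, 2), (8, 2),
         (10, 2.5), (10, 2.5), (10, 2.5), (10, 2.5),
         (12, 3), (12, 3), (12, 3)]

def bebras_total(answers: list[Optional[bool]]) -> float:
    """answers[i] is True (correct), False (wrong), or None (blank)."""
    total = 0.0
    for correct, (score, deduction) in zip(answers, TASKS):
        if correct is True:
            total += score
        elif correct is False:
            total -= deduction
    return max(total, 0.0)  # floor at zero; also an assumption

# All ten correct yields the 100-point maximum shown in Table 4.
assert bebras_total([True] * 10) == 100.0
```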
Table 6. Paired t test results for pretest and posttest learning outcomes and motivation.

| Tests | Aptitudes | Pre-Test M (SD) | Post-Test M (SD) | t | p | Cohen’s d |
|---|---|---|---|---|---|---|
| Bebras Tasks | Decomposition | 22.19 (16.67) | 23.83 (16.30) | −1.455 | 0.148 | 0.100 |
| | Data interpretation | 29.07 (17.74) | 29.34 (18.32) | −0.219 | 0.827 | 0.015 |
| | Data representation | 20.26 (14.68) | 20.02 (15.12) | 0.197 | 0.844 | 0.016 |
| | Algorithm | 12.26 (17.27) | 14.98 (17.54) | −2.408 | 0.017 | 0.156 |
| | Total | 38.68 (28.25) | 42.08 (28.99) | −2.131 | 0.035 | 0.119 |
| Motivation Questionnaire | Extrinsic motivation | 3.67 (1.16) | 3.98 (1.24) | −3.152 | 0.002 | 0.258 |
| | Learning value | 4.94 (1.16) | 5.18 (1.07) | −2.384 | 0.018 | 0.216 |
| | Self-efficacy | 4.17 (1.13) | 4.38 (1.19) | −2.071 | 0.040 | 0.180 |
| | Help-seeking | 4.70 (1.05) | 4.84 (1.05) | −1.543 | 0.125 | 0.140 |
| | Total | 4.38 (0.77) | 4.60 (0.74) | −3.210 | 0.002 | 0.286 |
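For replication purposes, the sketch below shows one standard way to compute the statistics in Table 6: a paired t test on pre- and post-test scores plus Cohen’s d. The exact effect size convention is not restated here; dividing the mean gain by the pooled pre/post standard deviation reproduces the tabulated magnitudes (e.g., 3.40 / 28.62 ≈ 0.119 for the Bebras total), so that is the form sketched.

```python
# Paired t test and Cohen's d (pooled pre/post SD form) for one measure,
# as in Table 6. Arrays hold one score per valid student (n = 153).
import numpy as np
from scipy import stats

def paired_summary(pre: np.ndarray, post: np.ndarray) -> dict:
    t, p = stats.ttest_rel(pre, post)  # t is negative when post > pre
    pooled_sd = np.sqrt((pre.std(ddof=1) ** 2 + post.std(ddof=1) ** 2) / 2)
    d = (post.mean() - pre.mean()) / pooled_sd  # Cohen's d
    return {"t": t, "p": p, "d": d}

# Usage with hypothetical arrays of Bebras totals:
# print(paired_summary(bebras_pre_total, bebras_post_total))
```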
Table 7. Descriptive statistics of Bebras post-test scores for the two groups.

| Aptitudes | Self-Regulated Group M (SD) | Guided-Learning Group M (SD) |
|---|---|---|
| Decomposition | 25.56 (15.30) | 22.30 (17.08) |
| Data interpretation | 30.41 (17.83) | 28.40 (18.80) |
| Data representation | 20.76 (14.64) | 19.37 (15.60) |
| Algorithm | 16.15 (18.08) | 13.94 (17.08) |
| Total | 44.10 (28.30) | 40.27 (29.66) |
| Course completions | 102.43 (23.78) | 135.00 (0.00) |
Table 8. ANCOVA results of Bebras scores for the two groups.

| Aptitudes | Source | SS | df | MS | F | p | η² |
|---|---|---|---|---|---|---|---|
| Decomposition | Pre-test | 16,492.841 | 1 | 16,492.841 | 105.398 | 0.000 | |
| | Groups | 310.835 | 1 | 310.835 | 1.986 | 0.161 | 0.013 |
| | Error | 23,472.326 | 150 | 156.482 | | | |
| Data interpretation | Pre-test | 20,175.271 | 1 | 20,175.271 | 98.664 | 0.000 | |
| | Groups | 2.776 | 1 | 2.776 | 0.014 | 0.907 | 0.000 |
| | Error | 30,672.750 | 150 | 204.485 | | | |
| Data representation | Pre-test | 8870.749 | 1 | 8870.749 | 51.519 | 0.000 | |
| | Groups | 8.690 | 1 | 8.690 | 0.050 | 0.823 | 0.000 |
| | Error | 25,827.637 | 150 | 172.184 | | | |
| Algorithm | Pre-test | 21,981.959 | 1 | 21,981.959 | 134.145 | 0.000 | |
| | Groups | 679.636 | 1 | 679.636 | 4.147 | 0.043 | 0.027 |
| | Error | 24,580.052 | 150 | 163.867 | | | |
| Total | Pre-test | 74,462.419 | 1 | 74,462.419 | 211.791 | 0.000 | |
| | Groups | 694.425 | 1 | 694.425 | 1.975 | 0.162 | 0.013 |
| | Error | 52,737.574 | 150 | 351.584 | | | |
Table 9. Descriptive statistics of motivation questionnaire post-test scores for the two groups.

| Subscales | Self-Regulated Group M (SD) | Guided-Learning Group M (SD) |
|---|---|---|
| Extrinsic motivation | 4.22 (1.09) | 3.76 (1.32) |
| Learning value | 5.33 (0.79) | 5.05 (1.26) |
| Self-efficacy | 4.52 (1.08) | 4.25 (1.27) |
| Help-seeking | 4.96 (0.86) | 4.74 (1.19) |
| Total | 4.76 (0.67) | 4.45 (0.77) |
| Course completions | 102.43 (23.78) | 135.00 (0.00) |
Table 10. ANCOVA results of the motivation questionnaire for the two groups.

| Subscales | Source | SS | df | MS | F | p | η² |
|---|---|---|---|---|---|---|---|
| Extrinsic motivation | Pre-test | 8.149 | 1 | 8.149 | 38.942 | 0.000 | |
| | Groups | 0.220 | 1 | 0.220 | 1.049 | 0.307 | 0.007 |
| | Error | 31.390 | 150 | 0.209 | | | |
| Learning value | Pre-test | 6.491 | 1 | 6.491 | 22.916 | 0.000 | |
| | Groups | 0.620 | 1 | 0.620 | 2.190 | 0.141 | 0.014 |
| | Error | 42.490 | 150 | 0.283 | | | |
| Self-efficacy | Pre-test | 9.024 | 1 | 9.024 | 39.568 | 0.000 | |
| | Groups | 0.214 | 1 | 0.214 | 0.939 | 0.334 | 0.006 |
| | Error | 34.208 | 150 | 0.228 | | | |
| Help-seeking | Pre-test | 4.647 | 1 | 4.647 | 19.689 | 0.000 | |
| | Groups | 0.038 | 1 | 0.038 | 0.162 | 0.688 | 0.001 |
| | Error | 35.400 | 150 | 0.236 | | | |
| Total | Pre-test | 2.464 | 1 | 2.464 | 28.899 | 0.000 | |
| | Groups | 0.384 | 1 | 0.384 | 4.504 | 0.035 | 0.029 |
| | Error | 12.791 | 150 | 0.085 | | | |
Table 11. Descriptive statistics of help-seeking items for the two groups.

| Item | Self-Regulated Group M (SD) | Guided-Learning Group M (SD) |
|---|---|---|
| Q11 | 5.19 (1.06) | 5.21 (1.27) |
| Q12 | 4.56 (1.36) | 3.90 (1.66) |
| Q13 | 5.03 (1.15) | 4.79 (1.69) |
| Q14 | 5.06 (1.09) | 5.07 (1.33) |
Table 12. ANCOVA results of help-seeking items for the two groups.

| Item | Source | SS | df | MS | F | p | η² |
|---|---|---|---|---|---|---|---|
| Q11 | Pre-test | 18.582 | 1 | 18.582 | 14.660 | 0.000 | |
| | Groups | 1.041 | 1 | 1.041 | 0.821 | 0.366 | 0.005 |
| | Error | 190.128 | 150 | 1.268 | | | |
| Q12 | Pre-test | 33.603 | 1 | 33.603 | 15.881 | 0.000 | |
| | Groups | 16.435 | 1 | 16.435 | 7.768 | 0.006 | 0.049 |
| | Error | 317.385 | 150 | 2.116 | | | |
| Q13 | Pre-test | 30.600 | 1 | 30.600 | 15.786 | 0.000 | |
| | Groups | 1.112 | 1 | 1.112 | 0.574 | 0.450 | 0.004 |
| | Error | 290.776 | 150 | 1.939 | | | |
| Q14 | Pre-test | 5.697 | 1 | 5.697 | 3.891 | 0.050 | |
| | Groups | 0.042 | 1 | 0.042 | 0.028 | 0.866 | 0.000 |
| | Error | 219.636 | 150 | 1.464 | | | |