Systematic Review

Computational Thinking in Primary and Pre-School Children: A Systematic Review of the Literature

by Efrosyni-Alkisti Paraskevopoulou-Kollia 1, Christos-Apostolos Michalakopoulos 1, Nikolaos C. Zygouris 2 and Pantelis G. Bagos 1,*
1 Department of Computer Science and Biomedical Informatics, University of Thessaly, Papasiopoulou 2-4, 35100 Lamia, Greece
2 Laboratory of Digital Neuropsychological Assessment, Department of Informatics and Telecommunications, University of Thessaly, 35100 Lamia, Greece
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(8), 985; https://doi.org/10.3390/educsci15080985
Submission received: 15 May 2025 / Revised: 9 July 2025 / Accepted: 30 July 2025 / Published: 2 August 2025
(This article belongs to the Special Issue Interdisciplinary Approaches to STEM Education)

Abstract

Computational Thinking (CT) has been an important concept for the computer science education community over the last 20 years. In this work, we performed a systematic review of the literature regarding the computational thinking of children from kindergarten to primary school. We compiled a large dataset of one hundred and twenty (120) studies from the literature. Through analysis of these studies, we tried to reveal important insights and draw interesting and valid conclusions. We analyzed various qualitative and quantitative aspects of the studies, including the sample size, the year of publication, the country of origin, the studies’ design and duration, the computational tools used, and so on. An important aspect of the work is to highlight differences between different study designs. We identified a total of 120 studies, with more than half of them (>50%) originating from Asian countries. Most studies (82.5%) conducted some form of intervention, aiming to improve students’ computational thinking. A smaller proportion (17.5%) were assessment studies, in which the authors assessed the children’s computational thinking. On average, intervention studies had a smaller number of participants, but differences in duration could not be identified. There was also a lack of large-scale longitudinal studies. Block-based coding (e.g., Scratch) and Plugged and Unplugged activities were observed in high numbers in both categories of studies. CT assessment tools showed great variability; efforts for standardization and reaching a consensus are needed in this regard. Finally, robotic systems have been found to play a major role in interventions over the last years.

1. Introduction

In 2006, Jeanette Wing published an article entitled ‘Computational Thinking’, arguing that the way of thinking practiced by computer scientists should be a skill for everyone (Fatourou et al., 2018; Li et al., 2020; Selby & Woollard, 2013); teachers of all levels and children should not be exempted from this. Her text describes ‘Computational Thinking’ as encompassing the way we design systems, the way we solve problems, the way we understand human behavior, and the way we think and act based on reasoning and logic; thus, CT permeates our behavior, both social and private (Cheryan et al., 2017; Romero et al., 2017). Wing essentially made the term CT more widely known (Grover & Pea, 2013; Shute et al., 2017), building on what Seymour Papert had studied, analyzed, and created a few decades earlier. Papert envisioned helping children who are taught computers to think through programming (Papert, 1980), and he was the first to try to integrate programming into school classrooms while speaking of the term CT. He created the Logo programming language, seeking to enable children of all ages to program and to think logically, methodically, and systematically, without feeling threatened by time limits and without being stressed by unfavorable conditions (Papert, 1980, 1996). So while Wing’s writings have been recognized as the 21st century’s dominant reference on computer science, the pioneering work remains that of Papert, of Perlis (the first recipient of the prestigious A. M. Turing Award from the Association for Computing Machinery), and of Knuth (who argued that humans develop a clearer understanding of tasks when they teach a computer to perform them (Ezeamuzie & Leung, 2022)); they first wondered about, researched, and ultimately spoke universally and scientifically about the spirit of CT, paving the way for future researchers.
Regarding the definitions of CT, researchers do not demonstrate completely compatible views or convergent thinking (Grover & Pea, 2013; Ioannidou et al., 2011; Lodi & Martini, 2021; Prat et al., 2020; Tsarava et al., 2017; Román-González et al., 2017; Romero et al., 2017). What the definitions have in common is that they concern computational thinking and state that it is relevant to everyone (Angeli et al., 2016; Prat et al., 2020), without exception and regardless of age, although this is disputed by some (Denning, 2017). The differences and boundaries between the definitions are mainly related to details of each author–researcher’s scientific view and personal idiosyncrasy (Romero et al., 2017). Consequently, the definition that was initially given has undergone changes, revisions, concretizations, and even finalizations by scientists involved in the CT field (Prommun et al., 2022). In an effort to avoid possible confusion between the terms, the International Society for Technology in Education (ISTE) and the Computer Science Teachers Association (CSTA) jointly described the processes involved in CT. According to them, CT relates to problem solving as follows: “Formulating problems in a way that enables us to use a computer and other tools to help solve them, Logically organizing and analyzing data, Representing data through abstractions such as models and simulations, Automating solutions through algorithmic thinking (a series of ordered steps), Identifying, analyzing, and implementing possible solutions with the goal of achieving the most efficient and effective combination of steps and resources, Generalizing and transferring this problem solving process to a wide variety of problems” (International Society for Technology in Education, 2020, p. 7).
Analyzing further the interest in CT in recent years, it must be pointed out that it has peaked, because there is now an abundance of software, tools, and pioneering technology, clearly improved over what was available before (Silapachote & Srisuphab, 2017). The easy availability of better computational tools has shifted scientific interest toward the concepts related to CT analysis (Linda Talib et al., 2024). Most of what has recently been written about CT builds on Papert’s writings, as mentioned above, but as updated syntheses of the original, these works mainly focus on the internet, games, big data, and creativity (Balid et al., 2013). To conclude from our reading of the literature, it would be wise to borrow what Zaranis, Papadakis, and Kalogiannakis (Zaranis et al., 2019) state: programming needs CT and, simultaneously, in order to teach CT, programming is usually used (and regularly needed) (Lye & Koh, 2014). Under a wider spectrum, programming (writing code so that the computer performs actions) and CT (thinking paths to solve problems) are considered to be different skills (Hu, 2011; Selby & Woollard, 2013; Voskoglou & Buckley, 2012).

2. Concepts of Computational Thinking

Whatever skills children end up acquiring as learners, these skills change context and content, and transform, in order to help them adapt to an evolving environment (Ananiadou & Claro, 2009). The many definitions of CT aim to describe the multitude of concepts and dimensions related to it (Ntourou et al., 2021) with more clarity than earlier definitions. A further goal is for the elements and concepts of CT to be adopted in the curricula of all grades (V. Barr & Stephenson, 2011; Voogt et al., 2015). As Barr and Stephenson state, CT is “a problem-solving methodology that can be automated and transferred and applied across subjects” (V. Barr & Stephenson, 2011, p. 51) and is not limited to the subject area of computer science or its educators (V. Barr & Stephenson, 2011); moreover, as mentioned by Dourou, Psycharis and Kalogiannakis (Ntourou et al., 2021), all students are able to acquire the skill of CT in a latent form. In general, the following are considered concepts of CT (a short illustrative code sketch follows the list):
The concept of abstraction means that when and if CT is mastered, students can think abstractly: they can relatively quickly set aside elements that are not significant and just as quickly reach valid conclusions (Rijke et al., 2018).
The concept of generalization refers to, firstly, recognizing common elements in problems to be solved and, then, finding a common solution that will allow the same pattern to be used in many similar problems (V. Barr & Stephenson, 2011; Selby & Woollard, 2013).
The concept of algorithm refers to a set of sequential, mutually supporting steps whose purpose is to solve a problem or fulfil a goal (Kourti et al., 2023; Türker & Pala, 2020). The algorithm ‘represents’ the analytical thinking that occurs during CT and is very nearly its mandatory result (Choi et al., 2017); the two concepts, however, are not identical.
The concept of problem decomposition, or in other words problem segmentation, describes how a problem is broken down into smaller, manageable, and workable components (Howland et al., 2009; Selby & Woollard, 2013; Wing, 2006).
The process of data collection, that is, collecting the research material (whatever it is), automatically increases the ability to understand, perform, compare, hypothesize, and draw conclusions (Lewis Presser et al., 2023).
After the data are collected, they are analyzed. Through the analysis of the data, we attempt to interpret it, draw conclusions, and identify patterns that can be adopted across cases (D. Barr et al., 2011; V. Barr & Stephenson, 2011).
The concept of parallelism concerns the conduct of parallel, shorter-range tasks, experiments, and research in general, aiming at successfully solving any problem (D. Barr et al., 2011; V. Barr & Stephenson, 2011).
The concept of automation refers to working through machines or computers to achieve a goal more easily (D. Barr et al., 2011). As Vourletsis and Politis (Vourletsis & Politis, 2019) rightly state, automation is best seen as instructing the computer to execute a set of repetitive tasks faster and more effectively than a human could.
The concept of modeling: the representation in a computer of a concept, a phenomenon, or an object that exists in reality, foregrounding the important data. In order for a problem to be solvable by a computer, it must first be modeled in software (Morrison, 2009; Voskoglou & Buckley, 2012).
The concept of simulation: The process of performing tasks, commands, and experiments through a constructed model in order to simulate real conditions.
The debugging process is about identifying mistakes; it is composed of specific attempts to find possible errors in the structure of some task, some algorithm, which need correction (Angeli et al., 2016; Papert, 1980; Wing, 2006).
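Several of these concepts can be made concrete with a few lines of code. The following minimal Python sketch is our own illustration (it is not drawn from any of the reviewed studies) of decomposition, abstraction, algorithm, generalization, and debugging:

```python
# Illustrative only: "find the tallest plant in the class garden",
# expressed in terms of the CT concepts listed above.

def tallest(heights_cm):
    """Algorithm: an ordered sequence of steps that solves the problem."""
    # Decomposition: the task is broken into smaller sub-steps.
    # Abstraction: we keep only the relevant data (heights), ignoring
    # insignificant details such as plant names or colors.
    best = heights_cm[0]          # Step 1: start from the first measurement.
    for h in heights_cm[1:]:      # Step 2: compare with every other one.
        if h > best:
            best = h
    return best                   # Step 3: report the result.

# Debugging: test the algorithm on a case whose answer is already known.
assert tallest([12, 30, 7]) == 30

# Generalization: the same pattern solves many similar problems,
# e.g., the highest test score instead of the tallest plant.
print(tallest([15, 42, 23]))  # -> 42
```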

3. Research Questions

While understanding that the concepts of CT described above are undoubtedly important to computer science and ICT (Information and Communication Technologies) courses at senior secondary and tertiary levels, Wing argued that more emphasis needs to be given to younger students, even in the early years of childhood (Wing, 2008). Chalmers later pointed out that the CT literature in primary schools is limited and asked for more research to be conducted on the subject (Chalmers, 2018). Following this line of thought, many researchers have discussed the significance of CT at the early childhood stage, providing clues that pre-schoolers could benefit from CT training (e.g., Pila et al., 2019; Relkin et al., 2021), and empirical evidence has shown that children of this age are even able to build and program robots (Bers et al., 2014, 2019). Thus, it is currently accepted that CT education can be beneficial both to children of pre-school age and to children of primary school age (Alam, 2022; Buitrago Flórez et al., 2017; Nouri et al., 2020; Zeeshan et al., 2024). Following these developments, several systematic reviews were published between 2019 and 2021 focusing exclusively on primary education (Fagerlund et al., 2021; Kakavas & Ugolini, 2019; Tikva & Tambouris, 2021; L. Zhang & Nouri, 2019). Similarly, several reviews published between 2020 and 2023 focused solely on pre-schoolers (Bakala et al., 2021; Bati, 2022; Su & Yang, 2023; Taslibeyaz et al., 2020).
In this work, we seek to address the concept of CT at both pre-school and primary school age. Given that many studies have been published in recent years, we aim to update the evidence obtained by other systematic reviews and to obtain a unified view of CT from kindergarten to primary school. Thus, the main objective of this work is to identify, analyze, and quantify the scientific literature regarding CT in children of school and pre-school age. In particular, (Q1) we aim to identify trends in the literature and to study the demographics of the studies (sample size, duration of studies, country of origin, and so on), as well as the software and tools used. A second important goal (Q2) is to systematically evaluate the differences, both in demographics and in tools used, between studies aimed at measuring computational thinking (assessment studies) and those aimed at developing an intervention or educational program to improve computational thinking (intervention studies).

4. Materials and Methods

In order to achieve our goals, the systematic review methodology was identified as the most appropriate approach (Vourletsis & Politis, 2019). Systematic reviews enable a comprehensive exploration of the scientific literature, facilitating comparisons, evaluations, and critical analyses that ultimately yield insights into the contemporary dimensions of a given subject (Gusenbauer & Haddaway, 2020; Parisi et al., 2020; Siemieniuk et al., 2020). The foundational step in conducting a systematic review is the establishment of well-defined parameters to guide the search process. To address our research questions, we undertook a systematic literature review, as this method ensures the reliability and rigor of the search outcomes. The systematic review draws upon international bibliographic sources, thereby enhancing its methodological credibility and robustness (Chalmeta & Santos-deLeón, 2020; Khotambekovna, 2021).
The purpose of the review is to identify studies conducted at primary school or pre-school age, so the initial phase of the study identification process involved defining keywords relevant to the research question. The following search term was constructed: (all in title) (K12 OR “elementary school” OR “primary school” OR kindergarten OR “pre school” OR preschool) AND “computational thinking”. This search was conducted using the Google Scholar search engine, encompassing published scientific articles, conference proceedings, and books. Google Scholar was chosen since it covers all major databases, in both the sciences and the humanities.
All relevant documents retrieved from the search were downloaded in full text, with the final update occurring on 21 November 2023. For articles that were inaccessible through the publisher’s website, additional efforts were made to locate them via alternative means, including general Google searches, institutional repositories, and direct communication with authors. By 31 March 2024, four (4) responses were received, providing access to the requested articles.
The selection criteria were as follows:
  • Relevance to Primary and Pre-school Students: The studies had to specifically address primary and pre-school education contexts.
  • Inclusion of Quantitative Characteristics: The studies were required to incorporate quantitative data, that is, to provide specific quantitative measurements in a well-defined sample of students.
These criteria ensured that the selected studies were both contextually relevant and methodologically robust, providing a solid foundation for the subsequent analysis and synthesis of findings. Accordingly, opinion articles, reviews, policy papers, and papers that presented a concept or a tool without classroom application were not included in the review.
Subsequently, all collected articles were meticulously reviewed by two independent reviewers to ensure a thorough understanding of their content. Articles written in languages unfamiliar to the research team were translated using DeepL. If the translation was deemed sufficiently comprehensible for analysis, the articles were retained; otherwise, they were excluded. This process led to the exclusion of articles that were either irrelevant to the research topic or inaccessible due to translation challenges (e.g., articles in Hebrew), resulting in a final corpus of 120 documents. It is important to emphasize that the review process extended beyond abstract scanning; each article was read in its entirety to confirm its relevance to the study. Concurrently, a data-extraction spreadsheet (described below) was populated with the extracted data, and a PRISMA flow diagram was constructed to visually represent the number of records retrieved from each source, ensuring transparency and reproducibility in the selection process (Rethlefsen & Page, 2022). The PRISMA checklist is given in Supplementary Table S1.
To systematically organize the data extracted from the studies, we developed a structured spreadsheet to record key information from each article. The following data points were selected for inclusion: year of publication, authors, title of the work, journal, country of origin, DOI, educational level targeted by the research, software/platforms utilized, research purpose (categorized into studies measuring CT and those proposing intervention methods), sample size, use of questionnaires, and duration of the research. The resulting data were graphically presented, and the summary statistics were given (percentages, mean, standard deviation, and so on). Where needed, statistical tests were performed (t-tests for comparing continuous variables and Pearson’s chi-square tests for comparing categorical variables).
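As an illustration of this data-extraction step, the sketch below shows how such a structured record sheet and its summary statistics might be organized in Python with pandas. The column names and example values are our own hypothetical choices, not the authors’ actual spreadsheet:

```python
import pandas as pd

# Hypothetical extraction sheet with the data points listed above
# (column names are ours; the authors' spreadsheet may differ).
studies = pd.DataFrame(columns=[
    "year", "authors", "title", "journal", "country", "doi",
    "educational_level",      # pre-school / primary / mixed
    "software_platforms",
    "purpose",                # "assessment" or "intervention"
    "sample_size", "uses_questionnaire", "duration_hours",
])

# One purely illustrative record.
studies.loc[0] = [2020, "Doe et al.", "An example study", "Some Journal",
                  "Greece", "10.0000/example", "primary", "Scratch",
                  "intervention", 45, True, 12.0]

# Summary statistics of the kind reported in the Results section.
print(studies.groupby("purpose")["sample_size"]
             .agg(["count", "mean", "std", "min", "max"]))
```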

5. Results

5.1. Chronological Progression of Studies Related to the Computational Thinking of Students

A total of two hundred seventy-seven (277) studies were initially identified on this specific topic. Following a rigorous screening process based on the predefined criteria, one hundred twenty (120) studies were retained for further analysis (see Figure 1 and Supplementary Table S2). Of these, 21 studies (17.5%) were classified as “assessment studies” in which the authors conducted assessments regarding the children’s computational thinking, whereas 99 (82.5%) were classified as “intervention studies” in which the authors conducted some form of intervention in or outside the classroom with the children, aiming to improve their computational thinking. In Figure 2, the number of publications per year from 2015 to 2023 is depicted. Despite some temporal variability, we observe that studies of both classes seem to follow an approximately linear increase after 2015. The studies that did not meet the inclusion criteria are given in Supplementary Table S3, along with the reasons for exclusion.

5.2. Countries Where Studies Were Conducted

The studies were conducted in thirty-three (33) different countries: sixteen (16) in Europe, ten (10) in Asia, three (3) in South America, three (3) in North America, and one (1) in Africa. Most studies were conducted in South Korea (24 studies), followed by China with fourteen (14) studies, the United States of America with nine (9) studies, Greece with seven (7) studies, Spain and Italy with six (6) studies each, and Taiwan and Israel with five (5) studies each. Other countries follow with fewer published studies. In general, we observe that the countries of Asia lead in the publication of studies, as from a total of ten (10) countries (China, Indonesia, Israel, Japan, Kazakhstan, South Korea, Malaysia, Taiwan, Thailand, Turkey) we count sixty-four (64) published studies (more than 50%). Regarding the European countries, we count a total of sixteen (16) countries (Austria, Belgium, Croatia, Cyprus, Czech Republic, Estonia, Germany, Greece, Ireland, Italy, Portugal, Romania, Spain, Switzerland, Netherlands, England) from which thirty-seven (37) publications emerged. The countries of North America (Mexico, Canada, United States of America) and South America (Brazil, Colombia, Peru) reported eleven (11) and seven (7) studies, respectively, and finally, we have one (1) study from South Africa.
In Figure 3, we give the classification of the two types of studies (assessment and intervention studies, respectively) according to the countries in which they were conducted. Regarding assessment studies, most came from S. Korea (four studies), China and the USA (three studies each), and Turkey (two studies); Brazil, Croatia, Italy, Kazakhstan, Malaysia, Portugal, Romania, Spain, and Switzerland had one study each (see Figure 3). Regarding intervention studies, we had a similar picture, with S. Korea (20 studies), China (11 studies), Greece (7 studies), the USA (6 studies), and Israel, Italy, Spain, and Taiwan (5 studies each). Other countries follow with four or fewer studies (Figure 4).

5.3. Sample Size and Demographics of the Studies

Regarding the number of participants, the assessment studies had a sample mean of 496.43, with a minimum value of 23 and a maximum value of 2547 participants, whereas the standard deviation of our sample was 637.050. The number of students who participated in these studies is not symmetrically distributed around the mean of 496.43 participants. Regarding the intervention studies, the sample mean is 85.92 with a range of 3 to 2040 and a standard deviation of 216.49. Similarly, the distribution is also highly skewed. As expected, a two-sample t-test provides a highly statistically significant p-value regarding the equality of the means (p < 0.0001). Although in both types we had one large study (>2000 participants), it is clear that intervention studies, on average, are conducted using a smaller number of participants compared to assessment studies.
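This comparison can be checked directly from the summary statistics reported above. Below is a minimal sketch, assuming a pooled-variance (Student’s) two-sample t-test, which is consistent with the reported significance level:

```python
from scipy import stats

# Sample-size comparison from the reported summary statistics.
t, p = stats.ttest_ind_from_stats(
    mean1=496.43, std1=637.05, nobs1=21,   # assessment studies
    mean2=85.92,  std2=216.49, nobs2=99,   # intervention studies
    equal_var=True,                        # pooled-variance Student's t-test
)
print(t, p)  # t ≈ 5.2, p < 0.0001, in line with the reported result
# Note: given the strong skew noted above, a Welch test (equal_var=False)
# or a non-parametric alternative would be a more conservative check.
```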
We then proceeded by classifying the studies according to the educational level of the participants. Specifically, in pre-school education, we had sixteen (16) intervention studies and three (3) assessment studies. Similarly, in primary education, eighty-one (81) intervention studies and eighteen (18) assessment studies were conducted. Additionally, there were two (2) studies that extended beyond primary education, also involving secondary education (Figure 4). Clearly, most of the studies were conducted in primary education; however, there are no differences between the two types of studies in this respect, since Pearson’s chi-square yielded a highly non-significant p-value (p = 0.495).
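A sketch of this chi-square comparison, built from the counts given above; the two studies extending into secondary education are omitted from the 2 × 2 table, so the statistic will not exactly reproduce the reported p-value, although it is likewise non-significant:

```python
from scipy import stats

# Educational level (rows) by study type (columns), from the counts above.
table = [[3, 16],     # pre-school: assessment, intervention
         [18, 81]]    # primary:    assessment, intervention
chi2, p, dof, expected = stats.chi2_contingency(table, correction=False)
print(chi2, p)  # non-significant, consistent with the reported p = 0.495
```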
Of the twenty-one (21) studies that conducted an assessment of computational thinking, twelve (12) did not report their duration. In the remaining nine (9) studies, the reported durations varied: the shortest study lasted two (2) hours, while the longest lasted 2 years. On average, the duration of the assessments was approximately 21.11 h, with a range of 2 to 80 h and a standard deviation of 24.68 h. Regarding the studies that conducted some form of intervention, out of ninety-nine (99), only sixteen (16) did not report their duration. For better presentation of our analysis results and proper comparisons, we converted all reported time units to hours. The conversion was based on the Greek educational system, where the Informatics course is taught one hour per week.
The duration varied in the category of studies that conducted interventions, with the shortest intervention lasting one (1) hour and the longest lasting one hundred and eighty (180) hours. On average, the duration of the interventions was approximately 16.58 h, with a range of 1 to 180 h and a standard deviation of 26.64 h. Once again, the distribution is skewed, with many of the studies having a duration of <10 h (40 studies), while long-duration studies were not favored, as only six (6) studies had a duration of more than thirty (30) hours. The t-test comparing the mean duration between the two groups provided no evidence of differences (t = 0.4879, p = 0.6268), but this needs to be interpreted with caution, considering the large variability among studies (large standard deviations) and the large number of missing values. We also need to mention that some of the studies aimed at improving/teaching computational thinking also conducted measurements of it, although this was not the main focus of the research. Based on this, we treated them as studies conducting interventions for computational thinking rather than measuring it.
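A sketch of this normalization, using a hypothetical helper of our own that assumes one teaching hour per week (as in the Greek system described above) and roughly forty teaching weeks per school year, which matches the two-year assessment study above corresponding to 80 h:

```python
# Hypothetical conversion helper (ours): normalize reported durations
# to teaching hours, assuming one teaching hour per week.
RATES_IN_HOURS = {
    "hours": 1.0,
    "weeks": 1.0,     # 1 teaching hour per week
    "months": 4.0,    # ~4 teaching weeks per month
    "years": 40.0,    # ~40 teaching weeks per school year
}

def to_hours(value: float, unit: str) -> float:
    return value * RATES_IN_HOURS[unit]

print(to_hours(2, "years"))  # a 2-year study -> 80.0 h, as in the text
```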

5.4. Tools-Software Used in the Studies

For each type of study (assessment/intervention), the investigators used various tools and software to achieve their goals. We analyzed and categorized the various tools, and the results are given in Table 1 and Table 2.
Regarding assessment studies, we came up with three categories in which we classified 19 different tools or software (see Table 1):
  • CT assessment tools;
  • Block-based coding;
  • Various Plugged and Unplugged activities/hybrid methods.
The first category contains various methods that have been proposed in the past for measuring CT, which in most cases involve a standardized test. Interestingly, this particular field seems very fragmented, since 15 studies used 13 different such tools. Tools like the BCTt (Vourletsis & Politis, 2024; Zapata-Cáceres et al., 2020), CTtLP (S. Zhang et al., 2021), Sword (Lee & Jang, 2020), KBIT (Bain & Jaspers, 2010; Kaufman & Kaufman, 1990; Sherman et al., 2023), and so on were used in only one study each; the cCTt (D’Elia et al., 1996) and CTt-RG (Román-González et al., 2017; Román-González et al., 2016) were used in just two studies each.
The Beginners’ CT test (BCTt) is a validated assessment tool designed to evaluate computational thinking skills among primary school students, specifically targeting grades 1 to 6, and it offers a structured method for measuring various dimensions of computational thinking (El-Hamamsy et al., 2022). The Computational Thinking Test for Lower Primary (CTtLP) is a specialized assessment tool designed to evaluate students’ computational thinking (CT) skills. Developed to align with early educational contexts, the CTtLP assesses children’s understanding of fundamental computational concepts, enabling educators to identify and support the individual learning trajectories of young learners in computational thinking (S. Zhang & Wong, 2023; S. Zhang et al., 2021). The development of the CTtLP reflects the growing recognition of computational thinking as a critical competence within the modern educational framework, paralleling skills such as critical thinking and creativity that are necessary for success in today’s digital environment (Ocampo et al., 2024). The Sword CT tool is a test that assesses computational thinking competencies to resolve challenging circumstances that arise in day-to-day living (Lee & Jang, 2020). The Kaufman Brief Intelligence Test (KBIT) is a widely used assessment tool designed to measure both verbal and non-verbal intelligence for individuals aged 4 to 90 years. It features three subtests that assess various cognitive abilities (Aishworiya et al., 2019; Khan et al., 2018). The KBIT is recognized for its efficiency, as it requires less administration time and training compared to comprehensive IQ tests, making it an accessible option for a diverse range of populations (Pitts & Mervis, 2016). However, it is important to note that while the KBIT has shown commendable reliability, it remains a screening tool rather than a complete diagnostic measure, and caution is advised when interpreting scores, especially in clinical contexts involving particular populations (Duggan et al., 2023). The Children’s Color Trails Test (cCTt) is a “neuropsychological test that measures attention, divided attention, and mental processing speed” (Konstantopoulos et al., 2015, p. 751). It is progressively used to assess children in various multicultural and cross-cultural environments for neurological and psychiatric disorders, learning and/or language difficulties, attention deficit/hyperactivity disorder, among others (El-Hamamsy et al., 2022). The Computational Thinking test, developed by Roman-Gonzalez (CTt-RG), is a comprehensive instrument designed to assess various components of computational thinking across educational environments. This CTt-RG evaluates essential components such as sequences, loops, conditionals, and problem-solving practices through a structured format of 28 multiple-choice questions, facilitating a reliable measurement of students’ computational thinking competencies (Herrero-Álvarez et al., 2023; Serrano et al., 2024).
When it comes to block-based coding, three studies used Scratch/Dr. Scratch. Scratch (Resnick et al., 2009) is a free online programming tool that greatly helps children learn to program and think computationally (Montiel & Gomez-Zermeño, 2021). It enables them to create multimedia projects, games, interactive stories, and animations that connect with their personal interests and experiences. Projects are designed by combining various graphical elements to produce digital characters that perform different actions and exhibit various behaviors (Fagerlund et al., 2021). Scratch has been compared to becoming immersed in a novel whose plot continuously evolves (Vitagliano et al., 2024). Dr. Scratch, on the other hand (Moreno-León & Robles, 2015; Moreno-León et al., 2015b), is a web-based tool designed to evaluate Scratch programming projects, focusing on assessing the development of computational thinking skills among learners. As an open-source application, Dr. Scratch analyzes various aspects of Scratch projects, providing valuable feedback to both students and educators by assigning computational thinking scores based on criteria such as abstraction, logic, and user interactivity (Chai et al., 2023; Moreno-León et al., 2015a). Its incorporation of automatic evaluation mechanisms allows educators to identify areas for improvement in student projects, thereby fostering a deeper understanding of programming concepts and enhancing problem-solving abilities (Anistyasari & Kurniawan, 2018; Troiano et al., 2019). Moreover, Dr. Scratch has been shown to motivate students as they engage in iterative learning to achieve higher scores, making it a useful formative assessment tool (Rich & Browning, 2022). The algorithmic evaluation provided by Dr. Scratch positions it as a key resource in educational contexts where teaching programming and computational thinking are prioritized (Demir & Seferoglu, 2021). Finally, concerning Plugged and Unplugged activities/hybrid methods, Bebras Tasks (International Challenge on Informatics and Computational Thinking, n.d.) were the most common approach. Bebras Tasks are useful and highly suitable for problem solving and for cultivating computational thinking because, first of all, their solution does not require a specific knowledge base (Datzko, 2021). Bebras tasks incorporate computational thinking, as their resolution necessitates decomposition, abstraction, algorithm recognition, and pattern recognition (Nuraisa et al., 2021).
Regarding the intervention studies, we classified the tools in five broad categories (Table 2):
  • Block-based coding;
  • Text-based coding;
  • Robotics;
  • Web-based platforms/spreadsheets;
  • Plugged and Unplugged activities/hybrid methods.
In the block-based coding category, which is by far the largest, covering nearly half of the studies, Scratch and its variants are the most widely used tool (used in 32 studies), followed by App Inventor (Wolber et al., 2011) and Entry (Cho et al., 2022; Noh & Lee, 2020) (3 studies). MIT App Inventor is a web-based platform designed to help students create mobile applications for Android devices. It employs a block-based visual programming language that facilitates the programming of applications without the complexities of traditional coding languages (Patton et al., 2019; Xie et al., 2015). Entry, on the other hand, “was developed in Korea and is visually organized in the form of blocks containing instructions that allow novice learners to learn programming easily by assembling them.” (Noh & Lee, 2020, p. 471).
Text-based coding tools are not so common (six studies), and most of them use Arduino (Banzi & Shiloh, 2022) and Python (Van Rossum & Drake, 1995). Arduino is an open-source electronic prototyping platform that allows users to create interactive projects and devices using hardware and software. The core of the Arduino system is the microcontroller, a small computer on a single board that can be programmed to control various sensors and devices (Alumona et al., 2019). Users write code using the Arduino Integrated Development Environment (IDE), which is designed to be accessible even for beginners without significant programming experience (El-Abd, 2017; Singh et al., 2019). Python is a high-level programming language that has gained remarkable popularity due to its readability, versatility, and broad applicability across various domains, including web development, data science, artificial intelligence, and automation (Alexandru et al., 2018; Resnawati et al., 2024).
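As a flavor of the kind of beginner exercise used in such text-based interventions, consider the classic Logo-style task of drawing a square, here written with Python’s standard turtle module (our own illustrative example, not taken from any reviewed study):

```python
import turtle

# The child translates the algorithm "repeat 4 times: move forward,
# then turn right 90 degrees" into code, exercising sequencing,
# loops, and debugging.
pen = turtle.Turtle()
for _ in range(4):
    pen.forward(100)   # move forward 100 units
    pen.right(90)      # turn right by 90 degrees
turtle.done()          # keep the drawing window open
```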
The Robotics category is also well populated (nearly 40% of the studies), with tools like LEGO-WeDo 2.0 (The LEGO Group, 2025) (in five studies), the mBot Arduino robot (Makeblock, n.d.) (in four studies), and Bee-Bot, Micro:bit (Micro:bit Educational Foundation, n.d.), and Ozobot (Ozo EDU Inc., n.d.) (3 studies each). Within the framework of STEAM education, the LEGO-WeDo 2.0 robot kit is a very well-known tool for introducing educational robotics in schools; it teaches students essential skills for present and future civil society (Achmad, 2021; Anastasaki & Vassilakis, 2022). The application of educational strategies combining WeDo 2.0 and LEGO Education has proven highly suitable for improving logical thinking and mathematical skills, especially for elementary school students (Araujo et al., 2024). The mBot is an educational robot kit, designed for students, which utilizes the Arduino platform to provide a hands-on learning experience in robotics and programming (Геoргиева & Georgieva-Trifonova, 2023). Bee-Bot is a small, programmable robot designed primarily for educational purposes, aimed at teaching young children fundamental coding and sequencing skills through interactive play. Its friendly bee shape and simple navigation commands make it an engaging tool that helps pre-school and elementary students develop computational thinking by programming the robot to follow specific paths and complete tasks using directional buttons (Diago et al., 2022; Tallou, 2022). Micro:bit is a pocket-sized programmable device developed by the BBC, designed to introduce students to coding and digital creativity through hands-on learning. It has a user-friendly interface, built-in sensors, and an LED display (Minić & Deretić, 2023). Ozobot is a small, programmable floor robot that introduces children to the fundamentals of coding through engaging, interactive play. It utilizes color sensors to follow drawn lines and interpret color codes, allowing users to program its movements by simply drawing paths with markers or using a visual programming interface on a tablet or a computer (Tengler et al., 2021).
Web-based platforms and spreadsheets were used by nearly a quarter of the included studies, with Code.org (code.org, n.d.) being the most common tool (five studies). Code.org is a platform used in learning environments that has managed to capture children’s attention because the code they engage with is embedded in writing applications and games, and it offers a wide variety of activity possibilities (Dilmen et al., 2023; Du et al., 2016). Lastly, a large number of studies (nearly 25%) used various types of Plugged and Unplugged activities and hybrid methods. In the case of Unplugged activities (12 studies), researchers used activities they designed themselves, such as paper-based activities, games, etc. It should be noted that in three studies the software tool used by the researchers was not precisely specified.

6. Conclusions

We presented here a systematic review of CT focusing on studies performed on primary school and pre-school students. This is not the first systematic review on this topic; the large amount of published literature in the field has prompted several similar studies over the last years. However, in all cases, either the included studies were limited, resulting in a smaller sample, or the inclusion criteria and the primary goal of the review were different. For instance, one of the first systematic reviews on the topic included students from all grades (including college students) and was mostly concerned with the assessment of CT (Tang et al., 2020), whereas another study sought to examine the nature, explicitness, and patterns of definitions of computational thinking in general (Ezeamuzie & Leung, 2022).
As we already mentioned earlier, several systematic reviews published between 2019 and 2021 focused exclusively on primary education but contain fewer studies compared to the current review (Fagerlund et al., 2021; Kakavas & Ugolini, 2019; Tikva & Tambouris, 2021; L. Zhang & Nouri, 2019). Similarly, several reviews published between 2020 and 2023 focused solely on pre-schoolers (Bakala et al., 2021; Bati, 2022; Su & Yang, 2023; Taslibeyaz et al., 2020).
Lastly, several important reviews were published focusing on the effects of CT on mathematics education and learning (Barcelos et al., 2018; Nordby et al., 2022; Subramaniam et al., 2022), on science education in general (Ogegbo & Ramnarain, 2022), or on the more general cognitive effects of computational thinking (Montuori et al., 2024). Other more focused studies examined the role of gamification on CT (Triantafyllou et al., 2024) or the role of different user interfaces implemented (Rijo-García et al., 2022).
Our current work contains information from 120 independent reports, a larger number compared to the previous studies mentioned above. Additionally, we provide a unified view of CT education in pre-school and primary school, contrary to previous studies that focused on only one level of education. The high number of identified studies is important in its own right, considering the strict criteria that we imposed. More importantly, we included only studies that had quantitative measurements on a student sample, and we excluded theoretical considerations, opinion articles, and educational programs that were not applied. The goal of our review was to examine the qualitative and quantitative characteristics of these studies (e.g., sample size, year of publication, level of education, duration and type of study, country of origin, tools and software used, etc.) and, in particular, to identify differences between studies aimed at measuring computational thinking and those aimed at developing an intervention or educational program to improve computational thinking. Most of the studies identified were small-scale and short in duration. We also identified a trend of linear increase after 2015 that applies equally well to studies of both categories. However, it seems that there is a need for longitudinal studies that track CT development over time. Regarding the comparison of the studies of the two categories, we need to mention that the studies in the first category had, on average, larger samples, but they also exhibited greater diversity, as they could generally be classified into additional subcategories depending on the specific goal:
  • Those that aimed to capture and measure computational thinking (CT) in a sample from a population; one such study was Kourti et al. (2023).
  • Those that aimed to measure CT and compare it between groups, for example, between children of different educational levels or between boys and girls.
  • Those that aimed to develop and standardize a diagnostic test for measuring CT.
The studies of the second category, which was more populated, had a smaller average sample size. Their differences mainly concerned the type of intervention, the educational level they referred to, and the specific tools used by the educational program they proposed. The duration of the studies of both categories was found to be comparable, but with large variabilities and several missing values.
Another important aspect of the analysis of the data from the included studies is the demographics. Previous bibliometric analyses have shown that CT research, at least in its first years, was US-centric (Saqr et al., 2021), and it is unclear to what extent the field’s predominantly Western body of literature can accommodate the needs of students in other cultural groups. We have shown that the volume of publications increases after 2015 and that more than half of the publications included in the current review originate from Asia, with China and South Korea being the major contributors. Of course, it cannot escape our attention that this is a densely populated as well as diverse region, whose countries do not share common educational policies. In general, ICT is an asset for future development (Wallet, 2014), although for some countries the policy priority areas of basic education, with respect to ICT, are focused on poverty alleviation, whereas in most advanced economies the policy priority supports knowledge deepening, acquisition, or creation (Yuen & Hew, 2018). A comparison of education policies regarding ICT between European and Asian countries has also highlighted important differences (Looi & David Hung, 2005). It has also been reported that computer science education correlates with increased rankings in PISA (which does not include computer science), and in these rankings, Asian countries generally fare well (Erümit & Keles, 2023). Given the above, it should be no surprise that there is an increased rate of relevant publications originating from these countries, as there is strong interest in research in the field, from both a social and an educational policy perspective.
Lastly, we analyzed and categorized the tools and software used in the included studies, though the landscape was very fragmented, given the wide variety of tools. Block-based coding (e.g., Scratch) and Plugged and Unplugged activities were observed in high numbers in both categories of the included studies (assessment vs. intervention). Regarding the assessment studies, we noticed that CT assessment tools showed great variability, too, since there is almost an equal number of tools and studies (practically, each tool is used only by its creators). A recent systematic review of 50 studies focusing specifically on the tools and instruments used in measuring CT also identified a large variability of tools and several adjustments, pointing to similar conclusions (Ocampo et al., 2024). Perhaps an effort toward standardization and consensus is needed in this regard, and future research should pivot toward the development, validation, and broader adoption of standardized CT assessment tools. Finally, robotic systems have been found to play a major role in interventions in recent years, employing tools like LEGO-WeDo 2.0, mBot, Bee-Bot, and Micro:bit, as do web-based platforms like Code.org.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/educsci15080985/s1, Supplementary Table S1: The PRISMA checklist (Page et al., 2021). Supplementary Table S2: The list of the included studies. Supplementary Table S3: The list of the excluded studies.

Author Contributions

Conceptualization, P.G.B.; methodology, E.-A.P.-K., N.C.Z. and P.G.B.; software, C.-A.M.; validation, C.-A.M., N.C.Z. and P.G.B.; formal analysis, E.-A.P.-K.; investigation, C.-A.M., N.C.Z. and E.-A.P.-K.; resources, E.-A.P.-K. and C.-A.M.; data curation, C.-A.M.; writing—original draft preparation, E.-A.P.-K.; writing—review and editing, C.-A.M., N.C.Z. and P.G.B.; visualization, C.-A.M.; supervision, P.G.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Acknowledgments

The authors would like to thank the three anonymous reviewers whose constructive comments helped in improving the quality of the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
CT    Computational Thinking
ICT   Information and Communication Technologies
ISTE  International Society for Technology in Education
CSTA  Computer Science Teachers Association

References

  1. Achmad, W. (2021). Citizen and netizen society: The meaning of social change from a technology point of view. Jurnal Mantik, 5(3), 1564–1570. [Google Scholar]
  2. Aishworiya, R., Cai, S., Chen, H., Phua, D. Y., Broekman, B. F. P., Daniel, L. M., Chong, Y. S., Shek, L. P., Yap, F., Chan, S.-Y., Meaney, M. J., & Law, E. (2019). Television viewing and child cognition in a longitudinal birth cohort in Singapore: The role of maternal factors. BMC Pediatrics, 19(1), 286. [Google Scholar] [CrossRef] [PubMed]
  3. Alam, A. (2022, March 25–26). Educational robotics and computer programming in early childhood education: A conceptual framework for assessing elementary school students’ computational thinking for designing powerful educational scenarios. 2022 International Conference on Smart Technologies and Systems for Next Generation Computing (ICSTSN), Villupuram, India. [Google Scholar]
  4. Alexandru, C. V., Merchante, J. J., Panichella, S., Proksch, S., Gall, H., & Robles, G. (2018, November 7–8). On the usage of pythonic idioms. 2018 ACM SIGPLAN International Symposium on New Ideas, New Paradigms, and Reflections on Programming and Software (pp. 1–11), Boston, MA, USA. [Google Scholar] [CrossRef]
  5. Alumona, T. L., Oranugo, C. O., & Eze, C. E. (2019). GSM based smart security system using Arduino. IJARCCE, 8(10), 32–42. [Google Scholar] [CrossRef]
  6. Ananiadou, K., & Claro, M. (2009). 21st century skills and competences for new millennium learners in OECD countries (OECD education working papers, no. 41). OECD Publishing (NJ1). [Google Scholar]
  7. Anastasaki, E., & Vassilakis, K. (2022). Experimental commands development for LEGO WeDo 2.0 in Python language for STEAM robotics advanced classes. Advances in Mobile Learning Educational Research, 2(2), 443–454. [Google Scholar] [CrossRef]
  8. Angeli, C., Voogt, J., Fluck, A., Webb, M., Cox, M., Malyn-Smith, J., & Zagami, J. (2016). A K-6 computational thinking curriculum framework: Implications for teacher knowledge. Journal of Educational Technology & Society, 19(3), 47–57. [Google Scholar]
  9. Anistyasari, Y., & Kurniawan, A. (2018, October 12–14). Exploring computational thinking to improve energy-efficient programming skills. MATEC Web of Conferences, Shanghai, China. [Google Scholar]
  10. Araujo, M. A. L., Chacón-Castro, M., Goitia, J. M. G., & Arias-Flores, H. (2024). WeDo 2.0 and LEGO education for the logical development of elementary school children. International Conference on Information Technology & Systems. [Google Scholar]
  11. Bain, S. K., & Jaspers, K. E. (2010). Test review: Review of kaufman brief intelligence test, second edition: Kaufman, A. S., & Kaufman, N. L. (2004). Kaufman brief intelligence test, second edition. Bloomington, MN: Pearson, Inc. Journal of Psychoeducational Assessment, 28(2), 167–174. [Google Scholar] [CrossRef]
  12. Bakala, E., Gerosa, A., Hourcade, J. P., & Tejera, G. (2021). Preschool children, robots, and computational thinking: A systematic review. International Journal of Child-Computer Interaction, 29, 100337. [Google Scholar] [CrossRef]
  13. Balid, W., Abdulwahed, M., & Alrouh, I. (2013). Constructivist multi-access lab approach in teaching FPGA systems design with labview. International Journal of Engineering Pedagogy (IJEP), 3(S3), 39–46. [Google Scholar] [CrossRef]
  14. Banzi, M., & Shiloh, M. (2022). Getting started with Arduino: The open source electronics prototyping platform. Maker Media, Inc. [Google Scholar]
  15. Barcelos, T. S., Muñoz-Soto, R., Villarroel, R., Merino, E., & Silveira, I. F. (2018). Mathematics learning through computational thinking activities: A systematic literature review. Journal of Universal Computer Science, 24(7), 815–845. [Google Scholar]
  16. Barr, D., Harrison, J., & Conery, L. (2011). Computational thinking: A digital age skill for everyone. Learning & Leading with Technology, 38(6), 20–23. [Google Scholar]
  17. Barr, V., & Stephenson, C. (2011). Bringing computational thinking to K-12: What is involved and what is the role of the computer science education community? ACM Inroads, 2(1), 48–54. [Google Scholar]
  18. Bati, K. (2022). A systematic literature review regarding computational thinking and programming in early childhood education. Education and Information Technologies, 27(2), 2059–2082. [Google Scholar] [CrossRef]
  19. Bers, M. U., Flannery, L., Kazakoff, E. R., & Sullivan, A. (2014). Computational thinking and tinkering: Exploration of an early childhood robotics curriculum. Computers & Education, 72, 145–157. [Google Scholar] [CrossRef]
  20. Bers, M. U., González-González, C., & Armas–Torres, M. B. (2019). Coding as a playground: Promoting positive learning experiences in childhood classrooms. Computers & Education, 138, 130–145. [Google Scholar] [CrossRef]
  21. Buitrago Flórez, F., Casallas, R., Hernández, M., Reyes, A., Restrepo, S., & Danies, G. (2017). Changing a generation’s way of thinking: Teaching computational thinking through programming. Review of Educational Research, 87(4), 834–860. [Google Scholar] [CrossRef]
  22. Chai, X., Sun, Y., & Gao, Y. (2023). Towards data-driving multi-view evaluation framework for scratch. Tsinghua Science and Technology, 29(2), 517–528. [Google Scholar] [CrossRef]
  23. Chalmers, C. (2018). Robotics and computational thinking in primary school. International Journal of Child-Computer Interaction, 17, 93–100. [Google Scholar] [CrossRef]
  24. Chalmeta, R., & Santos-deLeón, N. J. (2020). Sustainable supply chain in the era of industry 4.0 and big data: A systematic analysis of literature and research. Sustainability, 12(10), 4108. [Google Scholar] [CrossRef]
  25. Cheryan, S., Ziegler, S. A., Montoya, A. K., & Jiang, L. (2017). Why are some STEM fields more gender balanced than others? Psychological Bulletin, 143(1), 1. [Google Scholar] [CrossRef]
  26. Cho, E.-J., Seong, Y.-O., & Seo, Y. G. (2022). A study on software education using physical computing to increase computational thinking in elementary school students. Journal of Digital Contents Society, 23(10), 1959–1968. [Google Scholar] [CrossRef]
  27. Choi, J., Lee, Y., & Lee, E. (2017). Puzzle based algorithm learning for cultivating computational thinking. Wireless Personal Communications, 93, 131–145. [Google Scholar]
  28. code.org. (n.d.). CODE. Available online: https://code.org/ (accessed on 21 November 2023).
  29. Datzko, C. (2021, November 3–5). A multi-dimensional approach to categorize bebras tasks. Rethinking Computing Education: 14th International Conference on Informatics in Schools: Situation, Evolution, and Perspectives, ISSEP 2021. Informatics in Schools (Proceedings 14), Virtual Event. [Google Scholar]
  30. D’Elia, L., Satz, P., Uchiyama, C. L., & White, T. (1996). Color trails test. PAR. [Google Scholar]
  31. Demir, Ö., & Seferoglu, S. S. (2021). A comparison of solo and pair programming in terms of flow experience, coding quality, and coding achievement. Journal of Educational Computing Research, 58(8), 1448–1466. [Google Scholar] [CrossRef]
  32. Denning, P. J. (2017). Remaining trouble spots with computational thinking. Communications of the ACM, 60(6), 33–39. [Google Scholar]
  33. Diago, P. D., González-Calero, J. A., & Yáñez, D. F. (2022). Exploring the development of mental rotation and computational skills in elementary students through educational robotics. International Journal of Child-Computer Interaction, 32, 100388. [Google Scholar] [CrossRef]
  34. Dilmen, K., Kert, S. B., & Uğraş, T. (2023). Children’s coding experiences in a block-based coding environment: A usability study on code.org. Education and Information Technologies, 28(9), 10839–10864. [Google Scholar] [CrossRef]
  35. Du, J., Wimmer, H., & Rada, R. (2016). “Hour of code”: Can it change students’ attitudes toward programming? Journal of Information Technology Education: Innovations in Practice, 15, 53. [Google Scholar] [CrossRef] [PubMed]
  36. Duggan, C., Irvine, A. D., Hourihane, J. O. B., Kiely, M., & Murray, D. M. (2023). ASQ-3 and BSID-iii’s concurrent validity and predictive ability of cognitive outcome at 5 years. Pediatric Research, 94(4), 1465–1471. [Google Scholar] [CrossRef] [PubMed]
  37. El-Abd, M. (2017). A review of embedded systems education in the arduino age: Lessons learned and future directions. International Journal of Engineering Pedagogy (IJEP), 7(2), 79. [Google Scholar] [CrossRef]
  38. El-Hamamsy, L., Zapata-Cáceres, M., Marcelino, P., Bruno, B., Dehler Zufferey, J., Martín-Barroso, E., & Román-González, M. (2022). Comparing the psychometric properties of two primary school Computational Thinking (CT) assessments for grades 3 and 4: The beginners’ CT test (BCTt) and the competent CT test (cCTt). Frontiers in Psychology, 13, 1082659. [Google Scholar] [CrossRef]
  39. Erümit, S. F., & Keles, E. (2023). Examining computer science education of Asia-Pacific countries successful in the PISA. Journal of Educational Technology and Online Learning, 6(1), 82–104. [Google Scholar] [CrossRef]
  40. Ezeamuzie, N. O., & Leung, J. S. (2022). Computational thinking through an empirical lens: A systematic review of literature. Journal of Educational Computing Research, 60(2), 481–511. [Google Scholar] [CrossRef]
  41. Fagerlund, J., Häkkinen, P., Vesisenaho, M., & Viiri, J. (2021). Computational thinking in programming with Scratch in primary schools: A systematic review. Computer Applications in Engineering Education, 29(1), 12–28. [Google Scholar] [CrossRef]
  42. Fatourou, E., Zygouris, N. C., Loukopoulos, T., & Stamoulis, G. I. (2018). Teaching concurrent programming concepts using scratch in primary school: Methodology and evaluation. International Journal of Engineering Pedagogy (IJEP), 8(4), 89–105. [Google Scholar] [CrossRef]
  43. Grover, S., & Pea, R. (2013). Computational thinking in K–12: A review of the state of the field. Educational Researcher, 42(1), 38–43. [Google Scholar] [CrossRef]
  44. Gusenbauer, M., & Haddaway, N. R. (2020). Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Research Synthesis Methods, 11(2), 181–217. [Google Scholar] [CrossRef]
  45. Herrero-Álvarez, R., Miranda, G., León, C., & Segredo, E. (2023). Engaging primary and secondary school students in computer science through computational thinking training. IEEE Transactions on Emerging Topics in Computing, 11(1), 56–69. [Google Scholar] [CrossRef]
  46. Howland, K., Good, J., & Nicholson, K. (2009, September 20–24). Language-based support for computational thinking. 2009 IEEE Symposium on Visual Languages and Human-Centric Computing (VL/HCC), Corvallis, OR, USA. [Google Scholar]
  47. Hu, C. (2011, June 27–29). Computational thinking: What it might mean and what we might do about it. 16th Annual Joint Conference on Innovation and Technology in Computer Science Education, Darmstadt, Germany. [Google Scholar]
  48. International Challenge on Informatics and Computational Thinking. (n.d.). Bebras task examples. Available online: https://www.bebras.org/task-examples (accessed on 21 November 2023).
  49. International Society for Technology in Education. (2020). Computational thinking teacher resources. Available online: https://cdn.iste.org/www-root/2020-10/ISTE_CT_Teacher_Resources_2ed.pdf (accessed on 21 November 2023).
  50. Ioannidou, A., Bennett, V., Repenning, A., Koh, K. H., & Basawapatna, A. (2011). Computational thinking patterns. Online Submission. [Google Scholar]
  51. Kakavas, P., & Ugolini, F. C. (2019). Computational thinking in primary education: A systematic literature review. Research on Education and Media, 11(2), 64–94. [Google Scholar] [CrossRef]
  52. Kaufman, A. S., & Kaufman, N. L. (1990). Kaufman brief intelligence test. Pearson, Inc. [Google Scholar]
  53. Khan, N. A., Walk, A. M., Edwards, C. G., Jones, A. R., Cannavale, C. N., Thompson, S. V., Reeser, G. E., & Holscher, H. D. (2018). Macular xanthophylls are related to intellectual ability among adults with overweight and obesity. Nutrients, 10(4), 396. [Google Scholar] [CrossRef]
  54. Khotambekovna, E. M. (2021). Systematic analysis of education. Journal of Pedagogical Inventions and Practices, 3, 31–35. [Google Scholar]
  55. Konstantopoulos, K., Vogazianos, P., Thodi, C., & Nikopoulou-Smyrni, P. (2015). A normative study of the Children’s Color Trails Test (CCTT) in the Cypriot population. Child Neuropsychology, 21(6), 751–758. [Google Scholar] [CrossRef] [PubMed]
  56. Kourti, Z., Michalakopoulos, C.-A., Bagos, P. G., & Paraskevopoulou-Kollia, E.-A. (2023). Computational thinking in preschool age: A case study in Greece. Education Sciences, 13(2), 157. [Google Scholar] [CrossRef]
  57. Lee, J., & Jang, J. (2020). A study on path analysis between elementary school students’ computational thinking components. Journal of The Korean Association of Information Education, 24(2), 139–146. [Google Scholar] [CrossRef]
  58. Lewis Presser, A. E., Young, J. M., Rosenfeld, D., Clements, L. J., Kook, J. F., Sherwood, H., & Cerrone, M. (2023). Data collection and analysis for preschoolers: An engaging context for integrating mathematics and computational thinking with digital tools. Early Childhood Research Quarterly, 65, 42–56. [Google Scholar] [CrossRef]
  59. Li, Y., Schoenfeld, A. H., diSessa, A. A., Graesser, A. C., Benson, L. C., English, L. D., & Duschl, R. A. (2020). Computational thinking is more about thinking than computing. Journal for STEM Education Research, 3(1), 1–18. [Google Scholar]
  60. Linda Talib, A., Maysam Raad, Y., Najwa Abdulmunem Jasim, A., & Ban Hassan, M. (2024). The impact of artificial intelligence on computational thinking in education at university. International Journal of Engineering Pedagogy (IJEP), 14(5), 192–203. [Google Scholar] [CrossRef]
  61. Lodi, M., & Martini, S. (2021). Computational thinking, between Papert and Wing. Science & Education, 30(4), 883–908. [Google Scholar] [CrossRef]
  62. Looi, C.-K., & David Hung, W. (2005). ICT-in-education policies and implementation in Singapore and other Asian countries. In Upon what does the turtle stand? (pp. 27–39). Springer. [Google Scholar]
  63. Lye, S. Y., & Koh, J. H. L. (2014). Review on teaching and learning of computational thinking through programming: What is next for K-12? Computers in Human Behavior, 41, 51–61. [Google Scholar] [CrossRef]
  64. Makeblock. (n.d.). mBot: Kid’s first robot kit for coding and STEM learning. Available online: https://www.makeblock.com/pages/mbot-robot-kit (accessed on 21 November 2023).
  65. Micro:bit Educational Foundation. (n.d.). BBC micro:bit. Available online: https://microbit.org/ (accessed on 21 November 2023).
  66. Minić, S., & Deretić, N. (2023). Experience with using BBC micro:bit in teaching. Obrazovanje i Vaspitanje, 18(20), 33–44. [Google Scholar] [CrossRef]
  67. Montiel, H., & Gomez-Zermeño, M. G. (2021). Educational challenges for computational thinking in k–12 education: A systematic literature review of “scratch” as an innovative programming tool. Computers, 10(6), 69. [Google Scholar] [CrossRef]
  68. Montuori, C., Gambarota, F., Altoé, G., & Arfé, B. (2024). The cognitive effects of computational thinking: A systematic review and meta-analytic study. Computers & Education, 210, 104961. [Google Scholar] [CrossRef]
  69. Moreno-León, J., & Robles, G. (2015, November 9–11). Dr. Scratch: A web tool to automatically evaluate scratch projects. WiPSCE ’15, Proceedings of the Workshop in Primary and Secondary Computing Education, London, UK. [Google Scholar]
  70. Moreno-León, J., Robles, G., & Román-González, M. (2015a). Dr. Scratch: Automatic analysis of Scratch projects to evaluate and foster computational thinking [Dr. Scratch: Análisis automático de proyectos Scratch para evaluar y fomentar el pensamiento computacional]. Revista de Educación a Distancia (RED), 46. [Google Scholar] [CrossRef]
  71. Moreno-León, J., Robles, G., & Román-González, M. (2015b). Dr. Scratch: Automatic analysis of scratch projects to assess and foster computational thinking. RED. Revista de Educación a Distancia, 46, 1–23. [Google Scholar]
  72. Morrison, M. (2009). Models, measurement and computer simulation: The changing face of experimentation. Philosophical Studies, 143(1), 33–57. [Google Scholar] [CrossRef]
  73. Noh, J., & Lee, J. (2020). Effects of robotics programming on the computational thinking and creativity of elementary school students. Educational Technology Research and Development, 68(1), 463–484. [Google Scholar] [CrossRef]
  74. Nordby, S. K., Bjerke, A. H., & Mifsud, L. (2022). Computational thinking in the primary mathematics classroom: A systematic review. Digital Experiences in Mathematics Education, 8(1), 27–49. [Google Scholar] [CrossRef]
  75. Nouri, J., Zhang, L., Mannila, L., & Norén, E. (2020). Development of computational thinking, digital competence and 21st century skills when learning programming in K-9. Education Inquiry, 11(1), 1–17. [Google Scholar] [CrossRef]
  76. Ntourou, V., Kalogiannakis, M., & Psycharis, S. (2021). A study of the impact of Arduino and Visual Programming in self-efficacy, motivation, computational thinking and 5th grade students’ perceptions on Electricity. Eurasia Journal of Mathematics, Science and Technology Education, 17(5), em1960. [Google Scholar] [CrossRef]
  77. Nuraisa, D., Saleh, H., & Raharjo, S. (2021). Profile of students’ computational thinking based on self-regulated learning in completing Bebras tasks. Prima: Jurnal Pendidikan Matematika, 5(2), 40–50. [Google Scholar]
  78. Ocampo, L. M., Corrales-Álvarez, M., Cardona-Torres, S. A., & Zapata-Cáceres, M. (2024). Systematic review of instruments to assess computational thinking in early years of schooling. Education Sciences, 14(10), 1124. [Google Scholar] [CrossRef]
  79. Ogegbo, A. A., & Ramnarain, U. (2022). A systematic review of computational thinking in science classrooms. Studies in Science Education, 58(2), 203–230. [Google Scholar] [CrossRef]
  80. Ozo EDU Inc. (n.d.). Ozobot. Available online: https://ozobot.com/ (accessed on 21 November 2023).
  81. Page, M. J., McKenzie, J. E., Bossuyt, P. M., Boutron, I., Hoffmann, T. C., Mulrow, C. D., Shamseer, L., Tetzlaff, J. M., Akl, E. A., Brennan, S. E., & Chou, R. (2021). The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. BMJ, 372, n71. [Google Scholar] [CrossRef] [PubMed]
  82. Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. Basic Books. [Google Scholar]
  83. Papert, S. (1996). An exploration in the space of mathematics educations. International Journal of Computers for Mathematical Learning, 1(1), 95–123. [Google Scholar] [CrossRef]
  84. Parisi, R., Iskandar, I. Y., Kontopantelis, E., Augustin, M., Griffiths, C. E., & Ashcroft, D. M. (2020). National, regional, and worldwide epidemiology of psoriasis: Systematic analysis and modelling study. BMJ, 369, m1590. [Google Scholar] [CrossRef] [PubMed]
  85. Patton, E. W., Tissenbaum, M. B., & Harunani, F. (2019). MIT app inventor: Objectives, design, and development. In Computational thinking education (pp. 31–49). Springer. [Google Scholar] [CrossRef]
  86. Pila, S., Aladé, F., Sheehan, K. J., Lauricella, A. R., & Wartella, E. A. (2019). Learning to code via tablet applications: An evaluation of Daisy the Dinosaur and Kodable as learning tools for young children. Computers & Education, 128, 52–62. [Google Scholar] [CrossRef]
  87. Pitts, C. H., & Mervis, C. B. (2016). Performance on the Kaufman Brief Intelligence Test-2 by children with Williams syndrome. American Journal on Intellectual and Developmental Disabilities, 121(1), 33–47. [Google Scholar] [CrossRef]
  88. Prat, C. S., Madhyastha, T. M., Mottarella, M. J., & Kuo, C.-H. (2020). Relating natural language aptitude to individual differences in learning programming languages. Scientific Reports, 10(1), 3817. [Google Scholar] [CrossRef] [PubMed]
  89. Prommun, P., Kantathanawat, T., Pimdee, P., & Sukkamart, T. (2022). An integrated design-based learning management model to promote Thai undergraduate computational thinking skills and programming proficiency. International Journal of Engineering Pedagogy (IJEP), 12(1), 75–94. [Google Scholar] [CrossRef]
  90. Relkin, E., de Ruiter, L. E., & Bers, M. U. (2021). Learning to code and the acquisition of computational thinking by young children. Computers & Education, 169, 104222. [Google Scholar] [CrossRef]
  91. Resnawati, R., Fadjryani, Najar, A. M., Puspita, J. W., Mardi, A. B., & Abu, M. (2024). Python programming training and mentoring to improve the competence of students at SMKN 5 Palu [Pelatihan dan pendampingan pemrograman python dalam meningkatkan kompetensi siswa SMKN 5 Palu]. Journal of Pharmaceutical and Scientific Devotion, 2(2), 6–12. [Google Scholar] [CrossRef]
  92. Resnick, M., Maloney, J., Monroy-Hernández, A., Rusk, N., Eastmond, E., Brennan, K., Millner, A., Rosenbaum, E., Silver, J. S., Silverman, B., & Kafai, Y. (2009). Scratch: Programming for all. Communications of the ACM, 52(11), 60–67. [Google Scholar]
  93. Rethlefsen, M. L., & Page, M. J. (2022). PRISMA 2020 and PRISMA-S: Common questions on tracking records and the flow diagram. Journal of the Medical Library Association: JMLA, 110(2), 253. [Google Scholar] [CrossRef] [PubMed]
  94. Rich, P., & Browning, S. F. (2022). Using Dr. Scratch as a formative feedback tool to assess computational thinking. In Research anthology on computational thinking, programming, and robotics in the classroom (pp. 550–572). IGI Global. [Google Scholar]
  95. Rijke, W. J., Bollen, L., Eysink, T. H., & Tolboom, J. L. (2018). Computational thinking in primary school: An examination of abstraction and decomposition in different age groups. Informatics in Education, 17(1), 77–92. [Google Scholar] [CrossRef]
  96. Rijo-García, S., Segredo, E., & León, C. (2022). Computational thinking and user interfaces: A systematic review. IEEE Transactions on Education, 65(4), 647–656. [Google Scholar] [CrossRef]
  97. Román-González, M., Pérez-González, J.-C., & Jiménez-Fernández, C. (2017). Which cognitive abilities underlie computational thinking? Criterion validity of the computational thinking test. Computers in Human Behavior, 72, 678–691. [Google Scholar] [CrossRef]
  98. Román-González, M., Pérez-González, J.-C., Moreno-León, J., & Robles, G. (2016, November 2–4). Does computational thinking correlate with personality? the non-cognitive side of computational thinking. Fourth International Conference on Technological Ecosystems for Enhancing Multiculturality, Salamanca, Spain. [Google Scholar]
  99. Romero, M., Lepage, A., & Lille, B. (2017). Computational thinking development through creative programming in higher education. International Journal of Educational Technology in Higher Education, 14, 42. [Google Scholar] [CrossRef]
  100. Saqr, M., Ng, K., Oyelere, S. S., & Tedre, M. (2021). People, ideas, milestones: A scientometric study of computational thinking. ACM Transactions on Computing Education, 21(3), 20. [Google Scholar] [CrossRef]
  101. Selby, C., & Woollard, J. (2013). Computational thinking: The developing definition. Available online: https://eprints.soton.ac.uk/356481 (accessed on 21 November 2023).
  102. Serrano, A. D. l. H., Niño, L. V. M., Álvarez-Murillo, A., Tardío, M. Á. M., Cañada, F. C., & Juánez, J. C. (2024). Analysis of gender issues in computational thinking approach in science and mathematics learning in higher education. European Journal of Investigation in Health Psychology and Education, 14(11), 2865–2882. [Google Scholar] [CrossRef]
  103. Sherman, E. M. S., Tan, J. E., & Hrabok, M. (2023). Kaufman brief intelligence test (KBIT-2). In A compendium of neuropsychological tests: Fundamentals of neuropsychological assessment and test reviews for clinical practice (Vol. 4, p. 73). Oxford University Press. [Google Scholar]
  104. Shute, V. J., Sun, C., & Asbell-Clarke, J. (2017). Demystifying computational thinking. Educational Research Review, 22, 142–158. [Google Scholar] [CrossRef]
  105. Siemieniuk, R. A., Bartoszko, J. J., Zeraatkar, D., Kum, E., Qasim, A., Martinez, J. P. D., Izcovich, A., Lamontagne, F., Han, M. A., Agarwal, A., Agoritsas, T., Azab, M., Bravo, G., Chu, D. K., Couban, R., Devji, T., Escamilla, Z., Foroutan, F., Gao, Y., … Han, M. A. (2020). Drug treatments for COVID-19: Living systematic review and network meta-analysis. BMJ, 370, m2980. [Google Scholar] [CrossRef]
  106. Silapachote, P., & Srisuphab, A. (2017). Engineering courses on computational thinking through solving problems in artificial intelligence. International Journal of Engineering Pedagogy (IJEP), 7(3), 34–49. [Google Scholar] [CrossRef]
  107. Singh, R., Gehlot, A., & Singh, B. (2019). Introduction to arduino and arduino ide and toolbox_arduino_v3. In Arduino and SCILAB based projects (pp. 1–6). Bentham Science Publishers. [Google Scholar] [CrossRef]
  108. Su, J., & Yang, W. (2023). A systematic review of integrating computational thinking in early childhood education. Computers and Education Open, 4, 100122. [Google Scholar] [CrossRef]
  109. Subramaniam, S., Maat, S. M., & Mahmud, M. S. (2022). Computational thinking in mathematics education: A systematic review. Cypriot Journal of Educational Sciences, 17(6), 2029–2044. [Google Scholar] [CrossRef]
  110. Tallou, K. (2022). Marine plastic pollution in kindergarten as a means of engaging toddlers with STEM education and educational robotics. Advances in Mobile Learning Educational Research, 2(2), 401–410. [Google Scholar] [CrossRef]
  111. Tang, X., Yin, Y., Lin, Q., Hadad, R., & Zhai, X. (2020). Assessing computational thinking: A systematic review of empirical studies. Computers & Education, 148, 103798. [Google Scholar] [CrossRef]
  112. Taslibeyaz, E., Kursun, E., & Karaman, S. (2020). How to develop computational thinking: A systematic review of empirical studies. Informatics in Education, 19(4), 701–719. [Google Scholar] [CrossRef]
  113. Tengler, K., Kastner-Hauler, O., Sabitzer, B., & Lavicza, Z. (2021). The effect of robotics-based storytelling activities on primary school students’ computational thinking. Education Sciences, 12(1), 10. [Google Scholar] [CrossRef]
  114. The LEGO Group. (2025). LEGO® education WeDo 2.0 core set. Available online: https://www.lego.com/en-gr/product/lego-education-wedo-2-0-core-set-45300 (accessed on 21 November 2023).
  115. Tikva, C., & Tambouris, E. (2021). Mapping computational thinking through programming in K-12 education: A conceptual model based on a systematic literature Review. Computers & Education, 162, 104083. [Google Scholar] [CrossRef]
  116. Triantafyllou, S. A., Sapounidis, T., & Farhaoui, Y. (2024). Gamification and computational thinking in education: A systematic literature review. Salud, Ciencia y Tecnologia-Serie de Conferencias, 3, 659. [Google Scholar] [CrossRef]
  117. Troiano, G. M., Snodgrass, S., Argımak, E., Robles, G., Smith, G., Cassidy, M., Tucker-Raymond, E., Puttick, G., & Harteveld, C. (2019, June 12–15). Is my game OK Dr. Scratch? Exploring programming and computational thinking development via metrics in student-designed serious games for STEM. 18th ACM International Conference on Interaction Design and Children, Boise, ID, USA. [Google Scholar]
  118. Tsarava, K., Moeller, K., Pinkwart, N., Butz, M., Trautwein, U., & Ninaus, M. (2017, October 5–6). Training computational thinking: Game-based unplugged and plugged-in activities in primary school. European Conference on Games Based Learning (pp. 687–695), Graz, Austria. [Google Scholar]
  119. Türker, P. M., & Pala, F. K. (2020). The effect of algorithm education on students’ computer programming self-efficacy perceptions and computational thinking skills. International Journal of Computer Science Education in Schools, 3(3), 19–32. [Google Scholar] [CrossRef]
  120. Van Rossum, G., & Drake, F. L., Jr. (1995). Python tutorial (Vol. 620). Centrum voor Wiskunde en Informatica Amsterdam. [Google Scholar]
  121. Vitagliano, A., Cicinelli, E., Laganà, A. S., Favilli, A., Vitale, S. G., Noventa, M., Damiani, G. R., Dellino, M., Nicolì, P., D’Amato, A., Bettocchi, S., Matteo, M., & D’Amato, A. (2024). Endometrial scratching: The light at the end of the tunnel. Human Reproduction Update, 30(2), 238–239. [Google Scholar] [CrossRef]
  122. Voogt, J., Fisser, P., Good, J., Mishra, P., & Yadav, A. (2015). Computational thinking in compulsory education: Towards an agenda for research and practice. Education and Information Technologies, 20, 715–728. [Google Scholar] [CrossRef]
  123. Voskoglou, M. G., & Buckley, S. (2012). Problem solving and computational thinking in a learning environment. arXiv, arXiv:1212.0750. [Google Scholar] [CrossRef]
  124. Vourletsis, I., & Politis, P. (2019). Origin, conceptual development and future perspectives of computational thinking: A systematic literature review. Education Sciences, 2018(4), 72–92. [Google Scholar] [CrossRef]
  125. Vourletsis, I., & Politis, P. (2024). Greek translation, cultural adaptation, and psychometric validation of beginners computational thinking test (BCTt). Education and Information Technologies, 30, 2211–2235. [Google Scholar] [CrossRef]
  126. Wallet, P. (2014). Information and Communication Technology (ICT) in education in Asia: A comparative analysis of ICT integration and e-readiness in schools across Asia. Available online: https://policycommons.net/artifacts/8200342/information-and-communication-technology-ict-in-education-in-asia/9110580/ (accessed on 21 November 2023).
  127. Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35. [Google Scholar]
  128. Wing, J. M. (2008). Computational thinking and thinking about computing. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 366(1881), 3717–3725. [Google Scholar] [CrossRef]
  129. Wolber, D., Abelson, H., Spertus, E., & Looney, L. (2011). App inventor. O’Reilly Media, Inc. [Google Scholar]
  130. Xie, B., Shabir, I., & Abelson, H. (2015, October 27). Measuring the usability and capability of app inventor to create mobile applications. 3rd International Workshop on Programming for Mobile and Touch, Pittsburgh, PA, USA. [Google Scholar] [CrossRef]
  131. Yuen, A., & Hew, T. (2018). Information and communication technology in educational policies in the Asian region. In Handbook of information technology in primary and secondary education (pp. 1–20). Springer. [Google Scholar]
  132. Zapata-Cáceres, M., Martín-Barroso, E., & Román-González, M. (2020, April 27–30). Computational thinking test for beginners: Design and content validation. 2020 IEEE Global Engineering Education Conference (EDUCON), Porto, Portugal. [Google Scholar]
  133. Zaranis, N., Papadakis, S., & Kalogiannakis, M. (2019). Evaluation of educational technologies for the promotion of computational thinking in preschool education [Aksiologisi ton ekpaideutikon texnologion gia tin proothisi tis ypologistikis skepsis stin prosxoliki ekpaideusi. ekpaideusi kai dia viou mathisi, ereuna kai texnologiki anaptiksi kainotomia kai oikonomia]. Education, Lifelong Learning, Research and Technological Development, Innovation and Economy, 2, 77–86. [Google Scholar] [CrossRef]
  134. Zeeshan, K., Hämäläinen, T., & Neittaanmäki, P. (2024). Computational thinking and AI coding for kids to develop digital literacy. International Journal of Education, 12(3), 55–74. [Google Scholar] [CrossRef]
  135. Zhang, L., & Nouri, J. (2019). A systematic review of learning computational thinking through Scratch in K-9. Computers & Education, 141, 103607. [Google Scholar] [CrossRef]
  136. Zhang, S., & Wong, G. K. W. (2023). Development and validation of a computational thinking test for lower primary school students. Educational Technology Research and Development, 71(4), 1595–1630. [Google Scholar] [CrossRef]
  137. Zhang, S., Wong, G. K. W., & Pan, G. (2021, December 5–8). Computational thinking test for lower primary students: Design principles, content validation, and pilot testing. 2021 IEEE International Conference on Engineering, Technology & Education (TALE), Wuhan, China. [Google Scholar]
  138. Georgieva, D., & Georgieva-Trifonova, T. (2023). Developing mathematical competencies through Makeblock mBot programming in computer modelling education. TEM Journal, 12, 2437–2447. [Google Scholar] [CrossRef]
Figure 1. PRISMA flow chart.
Figure 2. Graph showing the publications of the included studies, for the two categories, per year from 2015 to 2023.
Figure 3. Countries of origin for the publications of the included studies. (Left): intervention studies. (Right): assessment studies.
Figure 4. Number of publications per level of education. There were no significant differences between the studies of the two categories.
Table 1. The classification of tools used in assessment studies into various categories.

Category | Tools | Total
CT assessment tools | cCTt (2), CTt-RG (2), BCTt (1), CTS (1), CTtLP (1), Sword (1), CT tool by Korea Education and Research Information Service (1), CT tool by researchers (1), CTA-CES (1), KBIT (1), TechCheck-2 (1), TechCheck-K (1), The Code-Free CT Assessment (1) | 15 (71.42%)
Block-based coding | Scratch/Dr. Scratch (3) | 3 (14.28%)
Plugged and Unplugged activities/Hybrid methods | Bebras Tasks (2), AlgoPaint Unplugged Computational Thinking Assessment (1), Plugged and Unplugged activities (1), Questionnaire (1) | 5 (23.8%)
Table 2. The classification of tools used in intervention studies into various categories.

Category | Tools | Total
Block-based coding | Scratch/ScratchJr/Scratch4SL (32), App Inventor (3), Entry (3), Kodu Game Lab (2), mBlock (2), A programmable learning environment created by the researchers (1), BAC (1), Block-based environment (1), BlockPy (1), Choregraphe (1), DuinoBlocks4Kids (DB4K) kit (1), OwlSpace (1) | 49 (49.49%)
Text-based coding | Arduino (4), Python (2) | 6 (6.06%)
Robotics | LEGO WeDo 2.0 (5), mBot Arduino robot (4), Bee-Bot (3), Micro:bit (3), Ozobot (3), Code and Go Robot Mouse Activity Set (2), Makey Makey (2), NAO (2), Zowi robot and Zowi BQ Robot programming Platform (2), Code-a-Pillar (1), Cubetto (1), Funboard (1), Hamster robot (1), Handmade robots from low-cost materials (1), Jimu robot (1), KIBO-15 (1), LEGO EV3 (1), LEGO WeDo 1.0 (1), LittleBits (1), Matatalab (1), Thymio robots and mission R2T21 (modified version) (1), uKit Explore (1) | 39 (39.39%)
Web-based platforms/Spreadsheets | Code.org (5), Bebras (2), Machine learning for kids (2), MS Excel (2), Tinkercad (2), AI for Oceans (1), AI-assisted Learning Tools (1), CodeMonkey (1), EasyLogic 3D (1), Google Sheets (1), Google Slides (1), IBM Watson (1), Moodle-G (1), Online web sketch (1), OpenSimulator (1), Wordpress (1) | 24 (24.24%)
Plugged and Unplugged activities/Hybrid methods | Plugged and Unplugged activities (12), Unknown tools/software (3), AutoThinking (1), Coding Ocean board game (1), Constructivist Universal Design Learning Package for Kindergarten Education (1), CT learning media (1), Geometer’s Sketchpad (1), Interactive Activities (1), Labs from the National Research Council of Italy (1), Programming Sticker (1), The Fraction App (1), Unplugged Programming Learning (1), Visual Art and Unplugged Approaches (1) | 26 (26.26%)
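To make the tool categories in Table 2 concrete, the following minimal Python sketch (Python being one of the text-based tools listed above) illustrates the kind of beginner exercise that block-based and text-based interventions alike typically build toward: sequencing, iteration, a conditional, and decomposition into a named procedure. The task and every name in it are purely illustrative and are not drawn from any of the reviewed studies.

```python
# Illustrative only: a beginner-level task exercising core CT constructs.
def draw_staircase(steps: int) -> None:
    """Print an ASCII staircase, one '#' wider per row."""
    for row in range(1, steps + 1):      # iteration
        if row % 2 == 0:                 # conditional: flag even rows
            print("#" * row + "  <- even step")
        else:
            print("#" * row)

draw_staircase(5)                        # sequencing: one call runs the whole plan
```

In a block-based environment such as Scratch, the same logic maps onto a "repeat" block and an "if/else" block, which is what places both tool families under the same CT constructs despite their different surfaces.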