Article

Plugged or Unplugged? A Comparative Study of Computational Thinking Development in Early Childhood

by Maria-Emilia Garcia-Marques 1,*, Adrián Pérez-Suay 2 and Ismael García-Bayona 1
1 Departament de Didàctica de la Matemàtica, Universitat de València, 46022 Valencia, Spain
2 Image Processing Laboratory, Universitat de València, 46022 Valencia, Spain
* Author to whom correspondence should be addressed.
Educ. Sci. 2026, 16(2), 333; https://doi.org/10.3390/educsci16020333
Submission received: 19 November 2025 / Revised: 11 January 2026 / Accepted: 11 February 2026 / Published: 18 February 2026
(This article belongs to the Special Issue Computational Thinking and Programming in Early Childhood Education)

Abstract

Computational thinking (CT) has increasingly been recognized as a fundamental skill that should be fostered from early childhood. This study investigated the comparative effectiveness of plugged (robot-based) and unplugged (without technology) instructional activities on the development of CT skills in young children. Two natural classroom groups participated, each receiving the same instructional content and assessment, differing only in intervention modality: one utilized the Bee-bot floor robot, while the other engaged in unplugged activities simulating the robot’s movements. Pre- and post-intervention assessments measured CT and spatial reasoning skills to evaluate learning gains. Results demonstrated significant improvements in CT across both groups, with no statistically significant differences in overall gains, suggesting that unplugged activities, when thoughtfully designed, can be as effective as technology-supported ones. These findings have important implications for designing inclusive and resource-sensitive early childhood CT curricula, emphasizing the value of developmentally appropriate and engaging learning experiences beyond technological availability.

1. Introduction

Computational thinking (CT) originated in 1980 with the ideas developed by Seymour Papert in his book Mindstorms: Children, Computers, and Powerful Ideas (Papert, 1980). However, the concept was not fully established until 2006, when Jeannette Wing defined it as a fundamental skill for everyone, not just for computer scientists, that involves solving problems, designing systems, and understanding human behaviour by drawing on the concepts fundamental to computer science (Wing, 2006, p. 33).
Over the years, the notion of CT has evolved, leading to numerous definitions proposed by different authors. In this paper, we use the definition provided by Shute et al. (2017), which describes CT as the conceptual foundation required to solve problems effectively and efficiently (i.e., algorithmically, with or without the assistance of computers) with solutions that are reusable in different contexts. This definition treats CT as a cognitive skill that involves several key concepts:
  • Abstraction: The ability to identify the relevant characteristics of a problem in order to find the main idea, thereby reducing its complexity.
  • Data analysis: The ability to search for, select, organize, and logically analyze data so as to identify patterns and draw conclusions.
  • Problem decomposition: The ability to break down a task into smaller parts in order to solve it more easily.
  • Algorithmic thinking: The ability to identify and establish the necessary steps to solve a problem through a set of instructions. This skill also promotes abstract thinking, which can be developed through the execution of sequences.
  • Debugging: The ability to detect and correct errors in a proposed solution (this concept, together with algorithmic thinking, is illustrated in the sketch after this list).
  • Generalization: The ability to transfer a problem-solving process from one situation to a broader variety of problems.
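To make two of these concepts concrete, the following minimal Python sketch (our illustration, with hypothetical names; it is not drawn from any of the cited studies) expresses a solution as an ordered list of instructions and then detects and repairs an error in it:

```python
# Minimal sketch of algorithmic thinking and debugging, using
# hypothetical names: a plan is an ordered list of instructions,
# and debugging compares its outcome against the goal.

route_length = 3                 # squares the walker must advance
plan = ["forward", "forward"]    # candidate plan: one step short

def distance_covered(plan):
    """Execute the plan and count how far it moves the walker."""
    return sum(1 for step in plan if step == "forward")

# Debugging: detect the error and correct the proposed solution.
while distance_covered(plan) < route_length:
    plan.append("forward")       # add the missing step(s)

assert distance_covered(plan) == route_length
print(plan)                      # ['forward', 'forward', 'forward']
```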
The development of CT, and thus the acquisition of the skills described above, enables individuals to address not only problems related to computer science but also those arising in everyday life. As González-González (2020) points out, the importance of CT lies in the fact that the skills acquired are transferable beyond informatics, proving valuable for solving challenges in daily life as well as across STEM fields (Science, Technology, Engineering, and Mathematics). Furthermore, CT helps children to develop essential abilities such as fine motor skills, hand–eye coordination, problem-solving strategies, metacognitive skills, and logical reasoning (González-González, 2019).
CT has been internationally recognized as a key competence of the 21st century (Mohaghegh & McCauley, 2016). Its conceptualization has evolved beyond programming to encompass a general cognitive approach to problem-solving based on algorithmic and logical reasoning, applicable across multiple contexts. A 2022 European Commission report (European Commission: Joint Research Centre et al., 2022) examined the integration of CT into compulsory education across thirty countries, revealing its growing inclusion in early childhood education (ages 3–6), as well as in primary and secondary curricula. The study identified three main implementation models: as a cross-curricular theme, as a standalone subject, or embedded within specific disciplines such as mathematics and technology. Recent developments indicate a consolidation of CT within educational frameworks—including in Spain—beginning from early childhood education, highlighting its relevance to problem-solving processes and its essential role in cultivating mathematical competence.
There are currently numerous resources that support the development of CT in early childhood, many of which are based on playful and exploratory learning approaches. Two main types of activities are used: plugged and unplugged approaches.
The term plugged activities refers to learning experiences that employ computers, electronic devices, or other digital tools to foster students’ CT skills (Grover & Pea, 2013).
Plugged activities—such as visual programming and programming with robots—are generally regarded as the mainstream approach due to several advantages (Bers et al., 2014; Caballero-González et al., 2019; Stoeckelmayr et al., 2011). First, they are figurative: complex and abstract CT concepts can be visualized and thus made more comprehensible to young learners (Sullivan et al., 2015). Second, they are observable: teachers can directly monitor children’s programming behaviours to assess their CT levels and implement timely interventions (Bers, 2017). Third, they are effective: empirical research has demonstrated that plugged activities tend to be more successful in teaching CT skills (Elkin et al., 2016).
However, the implementation of plugged approaches presents several challenges.
First, the relatively high cost of programming robots limits their feasibility for large-scale classroom use (Lye & Koh, 2014; Xinogalos et al., 2017). Second, many teachers lack formal education in computer-related disciplines, creating barriers to effective instruction (Bell et al., 2009; Bers et al., 2002). Finally, integrating plugged activities into traditional curricula can be difficult due to misalignment with established subjects and instructional practices (Bell et al., 2009).
In contrast, unplugged activities do not rely on programmable digital artefacts; instead, they employ paper-and-pencil tasks, physical manipulatives, and games as non-digital means to foster CT (Looi et al., 2018). In recent years, unplugged activities have received increased attention as a developmentally appropriate approach for introducing CT to young children (Lin et al., 2024).
Several advantages of unplugged activities have been identified in the literature. First, they can be implemented without sustained interaction with digital devices, which may help limit children’s overall screen exposure when integrated into early childhood curricula (Rodriguez et al., 2017; Wohl et al., 2015). Second, unplugged activities typically present a low entry threshold: children with little or no prior programming experience can participate in foundational CT practices through physical, hands-on, and play-based tasks (Rodriguez et al., 2017). Third, these activities may impose a lower extraneous cognitive load, as learners are not required to manage programming syntax or digital interfaces while engaging with core CT concepts (Bell & Vahrenhold, 2018; Yadav et al., 2018). Finally, unplugged activities can be implemented with minimal economic cost, as they do not require specialized hardware such as programmable robots.
Despite these advantages, unplugged activities also present certain limitations. First, assessing learning outcomes is challenging, as there is no standardized curriculum or unified assessment tool for evaluating CT development in unplugged contexts (Brackmann et al., 2017; Dennerlein et al., 2018; Jun, 2018). Second, designing and developing unplugged courses can be difficult due to the limited availability of high-quality, structured learning materials. Third, unplugged approaches may not fully support children’s later engagement with coding education. While these activities foster CT-related thinking, they do not necessarily enhance children’s programming language proficiency—a fundamental aspect of coding (Bers, 2017).
Research on CT in early childhood education has expanded rapidly in recent years. A recent meta-analysis reported a moderate overall positive effect of programming instruction on young learners’ CT development, with graphical programming approaches outperforming tangible and unplugged modalities (Wei et al., 2024). However, empirical comparisons across instructional modalities remain mixed. Several quasi-experimental studies conducted in preschool settings suggest that educational robotics and other plugged approaches yield greater gains in CT and executive function skills than unplugged activities (Zhang et al., 2025; Zurnacı & Turan, 2024).
In contrast, in their study with primary school students, Polat and Yilmaz (2022) found no differences in CT skill development when they compared the effects of plugged and unplugged activities on children’s basic programming achievement and computational thinking skills, although the unplugged group in their study achieved better basic programming scores.
Instructional designs in this area are commonly grounded in developmental and embodied learning frameworks. For example, some studies integrate Piagetian theory and Total Physical Response principles to scaffold tangible, unplugged tasks prior to device-based programming (Saxena et al., 2020), while robotics-based curricula often adopt a Coding as Another Language pedagogy to embed programming within literacy and social learning contexts (Claerhout & Fanchamps, 2026).
Collectively, these findings highlight important equity and accessibility concerns, including differential effects related to gender, prior exposure, and access to digital devices. As a result, scholars have emphasized the need for age-appropriate CT language, differentiated scaffolding strategies, and careful consideration of device availability to prevent the widening of educational gaps (Lin et al., 2024). Overall, the literature suggests that practitioners may benefit from sequencing unplugged activities to establish conceptual understanding and vocabulary, followed by plugged or robotic tools to support iterative debugging and knowledge transfer, with instructional choices guided by developmental level, local resources, and inclusion goals (Zhang et al., 2025).
Although previous studies have shown that both plugged and unplugged activities can effectively foster young children’s CT (Grover & Pea, 2013; Rodriguez et al., 2017), there remains a lack of comparative research examining their relative effectiveness using rigorous quantitative designs. To address this gap, the present study investigates the following research questions:
  • RQ1: Are there statistically significant differences in five-year-old children’s gains in computational thinking between a plugged (robot-based) instructional approach and an unplugged instructional approach?
  • RQ2: Are there statistically significant differences in five-year-old children’s gains in spatial ability in mental rotation between a plugged (robot-based) instructional approach and an unplugged instructional approach?
Accordingly, this study compares the effects of these two instructional approaches on both overall CT development and spatial ability in mental rotation. We hypothesize that instruction supported by floor robots will result in significantly greater gains in CT skills than unplugged activities, which would justify the additional cost of acquiring robotic materials.

2. Research Objective and Methodology

The objective of this study is to analyze the acquisition and development of computational thinking (CT) and spatial ability in mental rotation among five-year-old children under two instructional conditions: with and without the use of technological devices. Specifically, the study seeks to examine learning gains associated with each instructional approach in relation to the research questions outlined above.
The study was structured into three main phases. First, a pre-test was administered to establish the pupils’ baseline levels of computational thinking and mental rotation. Second, the intervention phase was conducted with two parallel groups: Class A completed the activities using programmable floor robots, while Class B performed the same activities without robots, physically enacting the programmed instructions. Finally, a post-test identical to the pre-test was administered to evaluate children’s learning gains following the intervention.

2.1. Participants

The participants were two natural classroom groups of five-year-old children enrolled in the same school. Both groups shared similar academic characteristics, which allowed for a reliable comparison. Class A comprised 24 pupils and carried out the intervention using floor robots. Class B initially consisted of 25 pupils, but three were excluded: one child due to specific learning difficulties that prevented completion of the tasks, and two others owing to repeated absences that limited participation. As a result, Class B was composed of 22 effective participants. To safeguard anonymity, each pupil was assigned a code. This coding system ensured confidentiality while still permitting the tracking of individual progress across the different stages of the study.

2.2. Instruments

The research relied on three assessment instruments, administered both before and after the intervention. These instruments targeted mental rotation, computational thinking, and the integration of both skills.
To ensure the rigour of the assessments, the instruments used in this study were adapted from validated measures with established reliability in previous research. Moreover, the same tests were administered before and after the intervention in order to measure the evolution of the participants under the same parameters. The mental rotation test (Test A) drew on tasks from Quaiser-Pohl (2003), Jansen et al. (2013), and Levine et al. (1999), which have demonstrated satisfactory internal consistency and sensitivity in primary-school-aged children. The computational thinking test (Test B) was adapted from selected activities of the BCTt assessment (Zapata-Cáceres et al., 2020), previously validated for early childhood populations. Test C, combining CT and mental rotation, was based on tasks from Diago and Arnau (2017) designed for young children and piloted in similar classroom settings. While minor modifications were made to ensure age-appropriateness, these adaptations preserved the original structure and content, and pilot testing confirmed that the tasks were clear and engaging for five-year-old participants.

2.2.1. Test A: Mental Rotation

The mental rotation test used in this study assessed spatial reasoning through three distinct tasks that evaluate different aspects of mental rotation and spatial composition, as shown in Figure 1. The first task (Figure 1a), inspired by Quaiser-Pohl (2003) and Jansen et al. (2013), required children to mentally rotate individual elements and identify the figure matching a highlighted model, thereby assessing their ability to perform mental rotation of single objects. The second task (Figure 1b), based on Levine et al. (1999), involved mentally combining shapes presented in the first box to form a target figure within a row, thus evaluating the integration of multiple elements to create composite shapes. The third task (Figure 1c), following Jiménez (2022), asked participants to mentally complete a square by joining partial shapes, assessing their spatial problem-solving skills through mental composition. Altogether, the instrument consisted of twelve items, systematically designed to measure the progression from single-element rotation to multi-element integration and spatial completion, providing a robust assessment of children’s spatial reasoning abilities.

2.2.2. Test B: Computational Thinking

Based on selected activities from the BCTt assessment (Zapata-Cáceres et al., 2020), this test focused on CT through path-following tasks. Children were asked to imagine and visualize the movements that a pencil should perform to complete a given route and then choose the correct option. Figure 2 shows some examples of this test. The test consisted of four items, each requiring planning and visualization.

2.2.3. Test C: Combined CT and Mental Rotation

Drawing on the work of Diago and Arnau (2017), this instrument evaluated both CT and mental rotation simultaneously. In these tasks, children had to select the correct set of instructions to guide a bee to a flower, accounting for changes in orientation and perspective. The test comprised the four items shown in Figure 3.

2.3. Procedure

This study was conducted over five sessions. The first two sessions comprised the pre-test, presented in the Instruments Section, designed to assess baseline computational thinking and spatial reasoning competencies. The intervention consisted of two sessions per class, during which participants received a general explanation of the tasks and performed them in pairs. The final session involved a post-test identical to the pre-test, enabling direct comparison of learning outcomes.
Participants were divided into two natural classroom groups, each conducting the same experiment with an identical procedure. The sole difference between the groups was the type of intervention employed: one class utilized plugged tools, specifically the Bee-bot floor robot (Blue-Bot model), while the other class engaged in an unplugged option, physically simulating the robot’s movements themselves. Both groups followed the same instructions and tasks, ensuring that the intervention type was the only variable distinguishing their experiences.
The design of the study thus allowed for an empirical analysis of whether technology-enhanced interventions with programmable robots lead to greater gains in CT than unplugged approaches, in which children embodied the role of the robot themselves. The systematic use of pre- and post-tests ensured the comparability of results and contributed to the reliability of the findings.

2.3.1. Sessions 1 and 2: Pre-Test

Participants were not accustomed to the type of task proposed in the assessment; therefore, to ensure they fully understood the requirements of the evaluation, the session began with a short whole-group assembly in which the teacher demonstrated examples with physical materials, helping children to understand the type of reasoning required. Before the administration of Test A, an example exercise different from the test items was shown. This example used manipulative materials that allowed participants to manually rotate and move the figures, thus simulating the mental operations they would be required to perform during the actual test. Before the administration of Test B, the paths were illustrated on the board to explain how the pencil’s movements corresponded to the task. Before the administration of Test C, the use of the Bee-bot robot was demonstrated to both groups, with particular emphasis on the fact that the robot has a face. The direction in which the Bee-bot’s face was pointing served as a critical reference for interpreting turns: a “turn right” command could move the robot toward the children’s right or left depending on its current orientation. This highlighted the importance of the robot’s relative position in distinguishing between right and left turns, ensuring that participants clearly understood how directional commands corresponded to the robot’s movements.
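This orientation dependence can be made concrete with a minimal Python sketch (our illustration, not the Bee-bot’s firmware): turn commands rotate the robot’s heading, and only the forward command moves it on the grid.

```python
# Minimal sketch of a Bee-bot-style robot: "right"/"left" rotate the
# heading in place; "forward" moves one square along the heading.

HEADINGS = ["N", "E", "S", "W"]                    # clockwise order
MOVES = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}

def run(commands, x=0, y=0, heading="N"):
    """Execute a command sequence and return the final state."""
    for cmd in commands:
        if cmd == "forward":
            dx, dy = MOVES[heading]
            x, y = x + dx, y + dy
        elif cmd == "right":                       # rotate 90° clockwise
            heading = HEADINGS[(HEADINGS.index(heading) + 1) % 4]
        elif cmd == "left":                        # rotate 90° counter-clockwise
            heading = HEADINGS[(HEADINGS.index(heading) - 1) % 4]
    return x, y, heading

# The same "right" command sends the robot east when it faces north,
# but west when it faces south - the relative frame the children had
# to reason about when interpreting turns.
print(run(["right", "forward"]))                   # (1, 0, 'E')
print(run(["right", "forward"], heading="S"))      # (-1, 0, 'W')
```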
After the demonstrations, each child received an individual test sheet, previously coded with their assigned number to ensure anonymity and traceability. The tests were corrected according to the number of correct responses (maximum of 12 in Test A, 4 in Test B, and 4 in Test C). This procedure generated quantitative data that made it possible to compare learning outcomes across both groups.

2.3.2. Sessions 3 and 4: Intervention

The intervention comprised four navigation tasks of increasing complexity, adapted from Diago and Arnau (2017) and analogous to those included in Test C. Each task required pupils, working in ability-balanced pairs, to follow a predefined path marked on a grid. The routes progressively incorporated directional changes that demanded mental rotation and spatial problem-solving. The first path involved no turns, the second incorporated one turn, the third required two turns in different directions, and the fourth required a change in orientation involving two consecutive turns.
Prior to executing each path, pairs planned the sequence of movements using direction cards. This planning phase required pupils to reason about step sequences, anticipate potential errors, and revise their proposals, thereby promoting logical ordering, problem-solving, and self-correction. Each pair was allowed up to three attempts to successfully complete each route; if unsuccessful after three tries, guidance was provided to facilitate progression.
The intervention spanned two sessions per class, with all procedural aspects identical across groups except for the intervention modality—plugged (Bee-bot robot) versus unplugged (physical simulation). In Class A, pupils programmed a Bee-bot robot to enact the planned route. In Class B, pupils alternated between giving instructions and physically enacting the movements themselves, taking the role of the “bee”. To simulate restricted visibility and ensure reliance on verbal guidance, the child performing the movements wore a mask. In both groups, pupils alternated roles within each task to guarantee equal participation in planning and execution. Working in pairs also encouraged collaboration through dialogue, negotiation, and joint decision-making.
Throughout the sessions, observers used a structured observation grid to document the number of attempts made, the success or failure of each route, and qualitative notes on strategies, dialogue, and other relevant behaviours exhibited during collaborative problem-solving. These data were used to support the interpretation of learning processes and outcomes.

2.3.3. Session 5: Post-Test

The post-test was identical to the pre-test, allowing for direct comparison of pupils’ performance before and after the intervention. Since the tasks were already familiar, no additional explanation was needed.

3. Results

Once the study had been implemented in both groups, the data obtained from the pre-tests, intervention, and post-tests were subjected to quantitative analysis. The following subsections compare the progress made from pre-test to post-test, examining whether the technological tool produced a greater improvement than the unplugged activities.

3.1. Pre-Test Results

The pre-test served as the initial diagnostic instrument, providing a baseline of pupils’ abilities in mental rotation and computational thinking. The distribution of results showed considerable variation: some pupils demonstrated very limited skills at baseline, while others were already capable of achieving more than half of the tasks correctly. This dispersion indicates a heterogeneous group in terms of their initial development of CT and mental rotation, which is consistent with findings in early childhood research that highlight wide individual differences in cognitive abilities at these ages (Jansen et al., 2013; Levine et al., 1999).

3.1.1. Class A Pre-Test Results

Figure 4 summarizes the individual pre-test performance of the 24 pupils in Class A across three measures: mental rotation (MR), computational thinking (CT) and the combined MR+CT measure.
Several important features can be observed in these data. First, baseline performance is heterogeneous: MR percentages range from 8.3% to 100.0% (MMR ≈ 50.0%, MdnMR = 41.7%), while CT percentages range from 0% to 100% (MCT ≈ 57.3%, MdnCT = 50.0%). The combined MR+CT scores are markedly lower on average (MMR+CT ≈ 27.1%, MdnMR+CT = 25.0%), reflecting that fewer pupils succeeded on tasks requiring the integrated application of spatial and computational skills. Second, distributional summaries show that only 10 pupils scored above 50% on MR (1 scored exactly 50%), whereas 11 pupils scored above 50% on CT (4 scored exactly 50%). By contrast, only 2 pupils scored above 50% on the MR+CT measure (7 scored exactly 50%), and 15 pupils scored below 50% on that combined measure. These counts emphasize that, although many children could solve items targeting MR or CT separately, far fewer were able to combine those skills successfully at baseline. Third, the figure highlights individual outliers worth noting in the analysis: several pupils obtained perfect (100%) scores on CT while obtaining only moderate MR scores, indicating that computational tasks and spatial rotation tasks do not map one-to-one at baseline.
Finally, the information in Figure 4 justifies the subsequent use of within-subject comparisons and non-parametric group tests: substantial variability and several non-normal score patterns (especially the low and skewed MR+CT distribution) motivate the mixed-model ANOVA and Mann–Whitney procedures reported in Section 3.4.

3.1.2. Class B Pre-Test Results

Figure 5 presents the individual pre-test results for the 22 pupils in Class B across three domains: mental rotation (MR; maximum 12 points), computational thinking (CT; maximum 4 points), and the combined MR+CT measure (maximum 4 points).
The data reveal notable variability in baseline performance both within and across domains. MR scores range from 25.00% to 100.00% (MMR ≈ 48%, MdnMR = 41.7%), suggesting moderate overall spatial reasoning with a few high-performing pupils (e.g., Students 21 and 22). In contrast, CT scores (MCT ≈ 69.0%, MdnCT = 75.0%) show a bimodal distribution, with many pupils achieving perfect or near-perfect scores (100%) while others scored below 50%, reflecting uneven prior exposure to problem-solving or programming concepts. The combined MR+CT measure exhibits generally lower results (MMR+CT ≈ 34.0%, MdnMR+CT = 25.0%), consistent with the additional cognitive demands of integrating spatial and computational reasoning.
Closer examination highlights several interesting cases. For instance, Students 21 and 22 achieved 100% in both MR and CT, along with high combined scores (75–100%), suggesting strong general cognitive abilities or familiarity with similar tasks. Conversely, Students 12 and 17 showed moderate MR performance but minimal success on CT and MR+CT items, indicating potential difficulty transferring spatial reasoning into applied computational contexts. These contrasts suggest substantial individual differences in the readiness to connect abstract CT principles with visuospatial reasoning skills.
Overall, Figure 5 confirms a broad baseline heterogeneity typical of early CT interventions, justifying the use of within-subject comparisons and non-parametric analyses in subsequent statistical testing.

3.2. Post-Test Results

After the study of the baseline results, the next subsections analyze the effects of the intervention with robots (Class A) and the intervention with unplugged activities (Class B).

3.2.1. Class A Post-Test Results

Figure 6 displays the post-test performance of the 24 pupils in Class A across the same three domains assessed in the pre-test: mental rotation (MR), computational thinking (CT), and the combined MR+CT measure. Overall, the figure shows a clear upward shift in performance following the intervention.
MR scores improved notably, ranging from 25.0% to 91.7%, with MMR ≈ 60.4% and MdnMR = 58.3%, compared to an average of around 50% in the pre-test (see Figure 4). Many pupils who had scored below 50% at baseline (e.g., Pupils 2, 8, 10, and 22) surpassed this threshold after instruction, indicating meaningful development in spatial reasoning skills. In the CT domain, most pupils reached medium-to-high scores, with several achieving perfect or near-perfect results (e.g., Pupils 3–5, 7, 9–11, 14–15, 19, and 24). The distribution appears more homogeneous than in the pre-test, suggesting a general consolidation of computational reasoning and task familiarity. The combined MR+CT measure also shows a marked improvement, with mean performance rising to approximately MMR+CT = 61.5% (MdnMR+CT = 75%). A large proportion of students (17 out of 24) achieved scores equal to or above 50%, compared to only 9 in the pre-test. This reflects a substantial enhancement in students’ ability to integrate computational and spatial reasoning following the plugged learning activities. When examining individual cases, Pupils 3, 4, and 5 stand out for achieving high scores across all measures, while Pupils 18 and 22 remained at lower levels, underscoring persistent individual variability.
In summary, Figure 6 illustrates the positive impact of the plugged intervention on Class A. Improvements are visible not only in individual task domains but, crucially, in the integrated MR+CT measure, suggesting that digital, interactive approaches can effectively support both isolated and combined facets of computational thinking in young learners.

3.2.2. Class B Post-Test Results

Figure 7 presents the post-test results for the 22 pupils in Class B, following the unplugged intervention. As in previous figures, performance is reported for three domains: mental rotation (MR; maximum 12 points), computational thinking (CT; maximum 4 points), and the combined MR+CT measure (maximum 4 points).
The results indicate a clear overall improvement across all domains compared to the pre-test (Figure 5), with MR performance rising from approximately MMR = 48.0% to MMR = 60.6%, and most pupils scoring at or above 50%. This improvement suggests that the unplugged activities effectively strengthened children’s visuospatial reasoning even without digital or robotic tools. Performance on the CT test also improved, with many pupils achieving perfect results (Students 1–5, 9, 12, 14, 17–20, 22). Mean CT scores increased to approximately MCT = 81.0%, indicating that the unplugged intervention successfully reinforced fundamental computational concepts through physical, screen-free problem-solving. However, a small subset of pupils (Students 13 and 15) still performed below 50%, reflecting variability in individual engagement or prior experience. The combined MR+CT measure demonstrates the most notable progress, increasing from a mean of roughly MMR+CT = 34.0% in the pre-test to MMR+CT = 63.0% in the post-test. Many pupils (over two-thirds of the class) achieved scores above 50%, with some (Students 9, 17, 19, and 21) reaching 100%. This suggests that the unplugged activities not only enhanced discrete skills but also promoted integration between spatial reasoning and computational thinking processes, an encouraging outcome for early CT pedagogy.
In summary, the results in Figure 7 highlight that unplugged activities can yield comparable learning gains to plugged approaches, fostering both spatial and computational reasoning while remaining accessible, low-cost, and screen-free.

3.3. Qualitative Observations

Qualitative observations were collected during task execution for both groups. These comments provide insight into pupils’ reasoning and strategies.
In both groups, pupils’ pre-test results aligned consistently with their performance during the intervention. Students who achieved higher scores in the pre-test also demonstrated stronger performance throughout the activities, and their responses to error situations were remarkably similar across groups. The most frequently observed behaviours included:
  • Arranging the programming cards to physically match the shape of the route.
  • Placing an instruction card on each square, including the starting position of the Bee-bot.
  • Realizing errors during execution and reacting spontaneously (e.g., “No! Not that way… I told you it was the other side”, “We missed half the path”, “It went too far, we need to remove one forward step”).
  • Identifying specific mistakes and correcting them directly (“It needs one more forward step”, “It went one too far”).
  • Modifying instructions by adding or removing a single step, assuming the problem was solved.
  • Repeatedly placing turn arrows instead of combining a turn with a forward step (“Let’s add another turn”, corrected by peers: “No! Otherwise it will just keep turning”).
These observations reveal that the pupils of both groups not only engaged with the task but also developed self-correction strategies, collaborative negotiation, and awareness of the logic behind the robot’s movements, which are key aspects of computational thinking development.
The principal difference observed between the two groups concerned the level of supervision and adult guidance required. The robot provided students with an autocorrective mechanism that promoted greater autonomy, enabling them to complete the tasks with minimal teacher intervention. In contrast, the unplugged group required continuous teacher supervision to check and verify the movements of the pupil acting as the robot, which constrained the teacher’s ability to oversee the activity efficiently and extended the duration of each exercise. While this increased mediation could have influenced the effectiveness of the unplugged intervention, the comparable gains observed across groups suggest that the teacher’s role primarily supported task accuracy rather than driving differences in learning outcomes. Nonetheless, future research should consider the potential confounding effect of teacher involvement when comparing instructional modalities, particularly in early childhood contexts.
Differences in engagement were also evident. Pupils in the robot group remained consistently focused throughout the activity. In the unplugged group, however, four pairs of students became distracted and had difficulty remaining engaged for the full duration. Notably, no student in the plugged group lost focus at any point.

3.4. Statistical Tests

Wilcoxon signed-rank tests were performed to compare the pre-test and post-test results in the three types of tests for the whole sample of 46 students. As shown in Table 1, the differences were statistically significant in all cases (even after applying a Bonferroni-corrected p-value threshold), with large effect sizes, suggesting the positive effects of the interventions.
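For readers who wish to reproduce this kind of analysis, a minimal sketch with SciPy follows. The score arrays are placeholders rather than the study’s data, and recovering |Z| from the two-sided p-value to compute r = Z/√N is an assumption about the effect-size convention behind Table 1.

```python
# Sketch of a pre/post Wilcoxon signed-rank test with a Bonferroni
# threshold and an r-type effect size. Placeholder data; the
# r = Z / sqrt(N) convention is an assumption, not taken from the paper.
import numpy as np
from scipy.stats import wilcoxon, norm

def pre_post_test(pre, post, n_tests=3, alpha=0.05):
    pre, post = np.asarray(pre), np.asarray(post)
    res = wilcoxon(pre, post)                    # two-sided by default
    z = norm.isf(res.pvalue / 2)                 # |Z| implied by the p-value
    r = z / np.sqrt(len(pre))                    # effect size r = Z / sqrt(N)
    significant = res.pvalue < alpha / n_tests   # Bonferroni-corrected
    return res.statistic, res.pvalue, r, significant

# e.g., proportions correct on one measure (hypothetical values):
W, p, r, sig = pre_post_test([0.25, 0.42, 0.50, 0.58, 0.33, 0.67],
                             [0.42, 0.58, 0.75, 0.58, 0.50, 0.83])
```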
To examine differences between groups, Mann–Whitney U tests were performed for each test type and time point. No significant differences were found between the plugged (G1) and unplugged (G0) groups in any of the pre-test or post-test measures (all p > 0.05). Effect sizes were small across all comparisons (r < 0.25). Mean and standard deviation values for each group are presented in Table 2.
Overall, these results suggest that, while students improved on all three measures, the type of instructional activity (plugged or unplugged) did not lead to statistically different outcomes. Since none of these between-group tests reached significance, no Bonferroni-corrected p-value threshold needed to be applied.
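A companion sketch for the between-group checks is shown below: a two-sided Mann–Whitney U test for one measure at one time point. The group arrays are placeholders, and the r = Z/√N effect size is the same assumed convention as in the Wilcoxon sketch above.

```python
# Sketch of a between-group Mann-Whitney U test for one measure at one
# time point. Placeholder data; effect-size convention is assumed.
import numpy as np
from scipy.stats import mannwhitneyu, norm

def group_comparison(g1_scores, g0_scores):
    res = mannwhitneyu(g1_scores, g0_scores, alternative="two-sided")
    n_total = len(g1_scores) + len(g0_scores)
    z = norm.isf(res.pvalue / 2)        # |Z| implied by the p-value
    r = z / np.sqrt(n_total)            # "small" effect when r < 0.25
    return res.statistic, res.pvalue, r

# e.g., post-test CT proportions for plugged (G1) vs. unplugged (G0):
U, p, r = group_comparison([0.75, 1.00, 0.50, 0.75],
                           [1.00, 0.75, 0.75, 0.50])
```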
A mixed-model repeated measures ANOVA was conducted to examine whether improvement in performance differed across the three tests (MR, CT, and MR+CT) and whether it depended on the group type. The analysis revealed a statistically significant within-subjects effect—F(2, 88) = 13.244, p < 0.001, η² = 0.231, 1 − β = 0.997—indicating differential improvement across test types.
Pairwise comparisons showed that improvement in the MR+CT test (M = 0.315, SD = 0.266) was significantly greater than that in the MR test (M = 0.112, SD = 0.147, p < 0.001, 95% CI [0.093, 0.031]) and the CT test (M = 0.147, SD = 0.221, p < 0.001, 95% CI [0.061, 0.276]). The between-subjects effect was not significant—F(1, 44) = 0.700, p = 0.407, η² = 0.016, 1 − β = 0.130—indicating that the type of activity did not significantly influence overall improvement. Similarly, the interaction between group and test type was not significant—F(2, 88) = 0.586, p = 0.559, η² = 0.013, 1 − β = 0.145.
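As a final reproducibility aid, a mixed design of this kind could be set up as follows with the pingouin library; the long-format column names are our assumptions about how the gain scores might be organized, not the authors’ actual pipeline.

```python
# Sketch of a 3 (test type, within-subjects) x 2 (group,
# between-subjects) mixed ANOVA on gain scores, using pingouin.
# Column names are assumed for illustration.
import pandas as pd
import pingouin as pg

def analyze_gains(df: pd.DataFrame) -> pd.DataFrame:
    """df has one row per pupil per test: columns 'subject',
    'group' ('plugged'/'unplugged'), 'test' ('MR'/'CT'/'MR+CT'),
    and 'gain' (post minus pre, as a proportion)."""
    return pg.mixed_anova(data=df, dv="gain", within="test",
                          between="group", subject="subject")
```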

4. Conclusions and Future Work

The present study investigated the effectiveness of plugged and unplugged activities for developing CT and MR in five-year-old children, ensuring that both groups received equivalent instructional content, pedagogy, and assessment. Overall, the findings indicate that both modalities effectively enhanced CT and MR over a short-term intervention, with significant within-subject improvements across all measures. The absence of significant between-group or interaction effects suggests that the modality itself—plugged or unplugged—did not substantially influence overall learning outcomes.
There is limited published comparative research exclusively at the preschool level (ages 3–5) that directly contrasts plugged and unplugged interventions; most controlled comparisons focus on primary and secondary school contexts (Wang et al., 2025). However, some studies, such as Montuori et al. (2023), combine plugged and unplugged activities, showing positive results in CT and visuospatial skills. Zhang et al. (2025) compared a robot intervention with a pen-and-paper intervention in children of the same age as those in our study, and their results differ from ours: robot-programming activities facilitated preschoolers’ CT over time more effectively than both unplugged programming and conventional kindergarten activities. Zurnacı and Turan (2024) likewise compared the effects of coding education with an educational robot (Bee-bot) and a set of unplugged coding activities on preschool students’ computational thinking and executive function skills, finding a statistically significant difference in post-test computational thinking scores in favour of the educational robot group. However, no statistically significant difference was observed between the two groups’ post-test scores for total executive function skills.
The reasons for these differences could lie in the different approaches used in the unplugged activities (role playing vs. pen-and-paper and other strategies); however, no firm conclusion can yet be drawn, and further studies should be performed. In fact, other studies, such as Sun et al. (2024), found that unplugged programming promoted CT skills more than plugged-in programming in the early primary grades.
Our results align with those of Messer et al. (2018), Polat and Yilmaz (2022), and Lin et al. (2024), suggesting that CT can be effectively developed through both technology-based and unplugged approaches when instructional design and learning objectives are carefully matched. This underscores the notion that meaningful engagement with CT concepts, rather than reliance on specific tools, is central to early learning. However, which approach is more efficient is still an open question that needs further research.
Our results also highlight modality-specific advantages. Plugged activities, such as robot-supported tasks, provided self-correcting mechanisms that streamlined classroom workflow and fostered greater student autonomy, allowing multiple groups to work simultaneously with minimal teacher intervention. Unplugged activities, by contrast, were cost-effective, inclusive, and accessible, requiring minimal materials, which makes them particularly suitable for early childhood contexts with limited digital resources.
An important consideration is the role of teacher mediation. The unplugged activities required continuous supervision to ensure task accuracy, whereas the robot-based tasks offered an autocorrective mechanism, promoting greater autonomy. In this study, teacher involvement was controlled so that support was limited to ensuring correct execution without providing additional guidance. Nonetheless, future studies should explicitly consider teacher mediation as a potential confounding factor when comparing instructional modalities in early childhood settings. For instance, in the context of using the micro:bit to train primary school pupils’ computational thinking skills, some authors draw attention to the distribution of autonomy between pupils and teachers (Carlborg et al., 2018, 2019), presenting a model which indicates that, as the scope of autonomy increases, pupils face a greater risk of feeling overwhelmed but also gain greater potential for developing independent problem-solving skills.
Several limitations should be noted. The intervention was short-term, limiting the assessment of long-term retention and transfer. The sample was relatively small and restricted to a single age group, which may reduce generalizability. Contextual factors, including classroom environment and student motivation, as well as individual differences such as prior exposure to technology, cognitive development, or learning style, were not systematically examined. Future research could address these limitations by exploring longer interventions, hybrid models combining both modalities, and larger, more diverse populations. Promising studies on combining unplugged and plugged activities already exist (del Olmo-Muñoz et al., 2020; Fiş Erümit, 2024).
Despite these limitations, the study contributes to a growing body of evidence on CT in early childhood, demonstrating that effective CT development is not solely dependent on technology. Thoughtful instructional design, scaffolding, and engagement with CT concepts are central in acquiring early computational skills, offering practical guidance for educators and curriculum designers in selecting and implementing age-appropriate CT activities.

Author Contributions

Conceptualization, M.-E.G.-M.; methodology, M.-E.G.-M.; validation, A.P.-S. and I.G.-B.; formal analysis, I.G.-B., A.P.-S. and M.-E.G.-M.; resources, M.-E.G.-M.; data curation, A.P.-S. and I.G.-B.; writing—original draft preparation, A.P.-S. and I.G.-B.; writing—review and editing, M.-E.G.-M. and I.G.-B.; supervision, M.-E.G.-M.; project administration, M.-E.G.-M.; funding acquisition, M.-E.G.-M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Generalitat Valenciana grant number CIGE/2023/103 and Ministerio de Ciencia, Innovación y Universidades (Spain) grant number PID2023-150960NB-I00.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and approved by the Institutional Review Board (or Ethics Committee) of Universitat de Valencia (protocol code 2024-MAGPED-3305268 and date of approval 9 May 2024) for studies involving humans.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The datasets generated and analyzed during the current research are available from the corresponding author upon request (email: emilia.garcia@uv.es).

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
CT   Computational Thinking
MR   Mental Rotation

References

  1. Bell, T., & Vahrenhold, J. (2018). CS unplugged—How is it used, and does it work? In H.-J. Böckenhauer, D. Komm, & W. Unger (Eds.), Adventures between lower bounds and higher altitudes: Essays dedicated to Juraj Hromkovič on the occasion of his 60th birthday (pp. 497–521). Springer International Publishing.
  2. Bell, T., Witten, I. H., & Fellows, M. (2009). Computer science unplugged: School students doing real computing without computers. The New Zealand Journal of Applied Computing and Information Technology, 13(1), 20–29.
  3. Bers, M. U. (2017). Coding as a playground: Programming and computational thinking in the early childhood classroom (1st ed.). Routledge.
  4. Bers, M. U., Flannery, L., Kazakoff, E. R., & Sullivan, A. (2014). Computational thinking and tinkering: Exploration of an early childhood robotics curriculum. Computers & Education, 72, 145–157.
  5. Bers, M. U., Ponte, I., Juelich, C., Viera, A., & Schenker, J. (2002). Teachers as designers: Integrating robotics in early childhood education. Information Technology in Childhood Education Annual, 2002(1), 123–145. Available online: https://www.learntechlib.org/p/8850 (accessed on 20 December 2025).
  6. Brackmann, C. P., Román-González, M., Robles, G., Moreno-León, J., Casali, A., & Barone, D. (2017, November 8–10). Development of computational thinking skills through unplugged activities in primary school. 12th Workshop on Primary and Secondary Computing Education (pp. 65–72), Nijmegen, The Netherlands.
  7. Caballero-González, Y.-A., Muñoz, L., & Muñoz-Repiso, A. G.-V. (2019, October 9–11). Pilot experience: Play and program with Bee-Bot to foster computational thinking learning in young children. 2019 7th International Engineering, Sciences and Technology Conference (IESTEC) (pp. 601–606), Panama, Panama.
  8. Carlborg, N., Tyrén, M., Heath, C., & Eriksson, E. (2018, June 18). The scope of autonomy model: Development of teaching materials for computational thinking in primary school. Proceedings of the Conference on Creativity and Making in Education (pp. 37–44), Trondheim, Norway.
  9. Carlborg, N., Tyrén, M., Heath, C., & Eriksson, E. (2019). The scope of autonomy when teaching computational thinking in primary school. International Journal of Child-Computer Interaction, 21, 130–139.
  10. Claerhout, C., & Fanchamps, N. (2026). The effect of unplugged and plugged-in programming with influencing factor of collaborative versus individual work regarding the impact on computational thinking in preschoolers. In K. Tammets, S. Sosnovsky, R. Ferreira Mello, G. Pishtari, & T. Nazaretsky (Eds.), Two decades of TEL: From lessons learnt to challenges ahead (pp. 123–137). Springer Nature.
  11. del Olmo-Muñoz, J., Cózar-Gutiérrez, R., & González-Calero, J. A. (2020). Computational thinking through unplugged activities in early years of primary education. Computers & Education, 150, 103832.
  12. Dennerlein, K., Kühnis, J., Opwis, K., & Mekler, E. D. (2018). Training computational thinking through board games: The case of crabs & turtles. International Journal of Serious Games, 5(2), 25–44.
  13. Diago, P. D., & Arnau, D. (2017). Pensamiento computacional y resolución de problemas en Educación Infantil: Una secuencia de enseñanza con el robot Bee-bot. In Libro de actas VIII Congreso Iberoamericano de Educación Matemática (CIBEM) (pp. 255–263). Federación Española de Sociedades de Profesores de Matemáticas (FESPM). Available online: https://cibem.semrm.com/images/site/LibroActasCIBEM/ComunicacionesLibroActas_CB901-1000.pdf (accessed on 20 December 2025).
  14. Elkin, M., Sullivan, A., & Bers, M. U. (2016). Programming with the KIBO robotics kit in preschool classrooms. Computers in the Schools, 33(3), 169–186.
  15. European Commission: Joint Research Centre, Bocconi, S., Inamorato dos Santos, A., Chioccariello, A., Cachia, R., Kampylis, P., Giannoutsou, N., Dagienė, V., Punie, Y., Wastiau, P., Engelhardt, K., Earp, J., Horvath, M., Jasutė, E., Malagoli, C., Masiulionytė-Dagienė, V., & Stupurienė, G. (2022). Reviewing computational thinking in compulsory education: State of play and practices from computing education (A. Inamorato dos Santos, R. Cachia, N. Giannoutsou, & Y. Punie, Eds.). Publications Office of the European Union.
  16. Fiş Erümit, S. (2024). Collaboration of unplugged and plugged activities for primary school students: Developing computational thinking skills with programming. International Journal of Computer Science Education in Schools, 6(3).
  17. González-González, C. S. (2019). Estado del arte en la enseñanza del pensamiento computacional y la programación en la etapa infantil. Education in the Knowledge Society (EKS), 20, 17.
  18. González-González, C. S. (2020). Pensamiento computacional y robótica en educación infantil: Una propuesta metodológica inclusiva [Doctoral dissertation, Universidad de Huelva]. Available online: http://hdl.handle.net/10272/19545 (accessed on 20 December 2025).
  19. Grover, S., & Pea, R. (2013). Computational thinking in K–12: A review of the state of the field. Educational Researcher, 42(1), 38–43.
  20. Jansen, P., Schmelter, A., Quaiser-Pohl, C., Neuburger, S., & Heil, M. (2013). Mental rotation performance in primary school age children: Are there gender differences in chronometric tests? Cognitive Development, 28(1), 51–62.
  21. Jiménez, D. (2022). Trabajo fin de máster universitario en investigación en didácticas específicas en matemáticas [Unpublished Master’s Thesis, Universitat de València].
  22. Jun, W. (2018, October 17–19). A study on development of evaluation standards for unplugged activity. 2018 International Conference on Information and Communication Technology Convergence (ICTC) (pp. 279–281), Jeju, Republic of Korea.
  23. Levine, S. C., Huttenlocher, J., Taylor, A., & Langrock, A. (1999). Early sex differences in spatial skill. Developmental Psychology, 35(4), 940.
  24. Lin, Y., Liao, H., Weng, S., & Dong, W. (2024). Comparing the effects of plugged-in and unplugged activities on computational thinking development in young children. Education and Information Technologies, 29(8), 9541–9574.
  25. Looi, C.-K., How, M.-L., Longkai, W., Seow, P., & Liu, L. (2018). Analysis of linkages between an unplugged activity and the development of computational thinking. Computer Science Education, 28, 255–279.
  26. Lye, S. Y., & Koh, J. H. L. (2014). Review on teaching and learning of computational thinking through programming: What is next for K-12? Computers in Human Behavior, 41, 51–61.
  27. Messer, D., Thomas, L., Holliman, A., & Kucirkova, N. (2018). Evaluating the effectiveness of an educational programming intervention on children’s mathematics skills, spatial awareness and working memory. Education and Information Technologies, 23(6), 2879–2888.
  28. Mohaghegh, D. M., & McCauley, M. (2016). Computational thinking: The skill set of the 21st century. International Journal of Computer Science and Information Technologies, 7(3), 1524–1530.
  29. Montuori, C., Pozzan, G., Padova, C., Ronconi, L., Vardanega, T., & Arfé, B. (2023). Combined unplugged and educational robotics training to promote computational thinking and cognitive abilities in preschoolers. Education Sciences, 13(9), 858.
  30. Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. Basic Books, Inc.
  31. Polat, E., & Yilmaz, R. M. (2022). Unplugged versus plugged-in: Examining basic programming achievement and computational thinking of 6th-grade students. Education and Information Technologies, 27(7), 9145–9179.
  32. Quaiser-Pohl, C. (2003). The mental cutting test “Schnitte” and the picture rotation test—Two new measures to assess spatial ability. International Journal of Testing, 3(3), 219–231.
  33. Rodriguez, B., Kennicutt, S., Rader, C., & Camp, T. (2017, March 8–11). Assessing computational thinking in CS unplugged activities. 2017 ACM SIGCSE Technical Symposium on Computer Science Education (pp. 501–506), Seattle, WA, USA.
  34. Saxena, A., Lo, C. K., Hew, K. F., & Wong, G. K. W. (2020). Designing unplugged and plugged activities to cultivate computational thinking: An exploratory study in early childhood education. Asia-Pacific Education Researcher, 29(1), 55–66.
  35. Shute, V. J., Sun, C., & Asbell-Clarke, J. (2017). Demystifying computational thinking. Educational Research Review, 22, 142–158.
  36. Stoeckelmayr, K., Tesar, M., & Hofmann, A. (2011, August 15–16). Kindergarten children programming robots: A first attempt. 2nd International Conference on Robotics in Education (RiE), London, UK.
  37. Sullivan, A., Elkin, M., & Bers, M. U. (2015, June 21–25). KIBO robot demo: Engaging young children in programming and engineering. 14th International Conference on Interaction Design and Children (IDC’15) (pp. 418–421), Medford, MA, USA.
  38. Sun, L., Liu, J., & Liu, Y. (2024). Comparative experiment of the effects of unplugged and plugged-in programming on computational thinking in primary school students: A perspective of multiple influential factors. Thinking Skills and Creativity, 52, 101542.
  39. Wang, J., Fan, W., & Yu, J. (2025). Comparing the effects of unplugged activities and plugged activities on the development of students’ computational thinking: A meta-analysis. Educational Technology Research and Development, 73, 3247–3267.
  40. Wei, Y., Wang, L., Tang, Y., Su, J., Lei, Y., & Peng, W. (2024). Influence of programming education modalities on the computational thinking in young children: A comprehensive review and meta-analysis. Journal of Computer Assisted Learning, 40(5), 2385–2397.
  41. Wing, J. M. (2006). Computational thinking. Communications of the ACM, 49(3), 33–35. Available online: https://www.cs.cmu.edu/~15110-s13/Wing06-ct.pdf (accessed on 20 December 2025).
  42. Wohl, B., Porter, B., & Clinch, S. (2015). Teaching computer science to 5–7 year-olds: An initial study with Scratch, Cubelets and unplugged computing. Association for Computing Machinery.
  43. Xinogalos, S., Satratzemi, M., & Malliarakis, C. (2017). Microworlds, games, animations, mobile apps, puzzle editors and more: What is important for an introductory programming environment? Education and Information Technologies, 22(1), 145–176.
  44. Yadav, A., Krist, C., Good, J., & Caeli, E. N. (2018). Computational thinking in elementary classrooms: Measuring teacher understanding of computational ideas for teaching science. Computer Science Education, 28(4), 371–400.
  45. Zapata-Cáceres, M., Martín-Barroso, E., & Román-González, M. (2020, April 27–30). Computational thinking test for beginners: Design and content validation. 2020 IEEE Global Engineering Education Conference (EDUCON) (pp. 1905–1914), Porto, Portugal.
  46. Zhang, X., Chen, Y., Hu, L., Hwang, G.-J., & Tu, Y.-F. (2025). Developing preschool children’s computational thinking and executive functions: Unplugged vs. robot programming activities. International Journal of STEM Education, 12(1), 10.
  47. Zurnacı, B., & Turan, Z. (2024). Educational robotics or unplugged coding activities in kindergartens? Comparison of the effects on pre-school children’s computational thinking and executive function skills. Thinking Skills and Creativity, 53, 101576.
Figure 1. Parts of the mental rotation test.
Figure 2. Illustrative examples of the CT test items.
Figure 3. Mental rotation and CT test.
Figure 4. Individual percentage scores on the three pre-test measures for Class A (24 students): mental rotation test, computational thinking test, and mental rotation + computational thinking test.
Figure 5. Individual percentage scores on the three pre-test measures for Class B (22 students): mental rotation test, computational thinking test, and mental rotation + computational thinking test.
Figure 6. Individual percentage scores on the three post-test measures for Class A (24 students): mental rotation test, computational thinking test, and mental rotation + computational thinking test.
Figure 7. Individual percentage scores on the three post-test measures for Class B (22 students): mental rotation test, computational thinking test, and mental rotation + computational thinking test.
Table 1. Wilcoxon tests for the difference in means between the pre-test and post-test for the whole sample for the mental rotation test, computational thinking test, and mental rotation + computational thinking test.

Test    M (Pre)  Mdn (Pre)  SD (Pre)  IQR (Pre)  M (Post)  Mdn (Post)  SD (Post)  IQR (Post)  W       p       r
MR      0.493    0.417      0.221     0.333      0.605     0.583       0.201      0.250       92.500  <0.001  0.637
CT      0.620    0.750      0.336     0.750      0.766     0.875       0.281      0.500       40.000  <0.001  0.553
MR+CT   0.304    0.250      0.293     0.500      0.620     0.750       0.267      0.500       11.500  <0.001  0.777
Table 2. Mann–Whitney U tests for the difference in means between the plugged (G1) and unplugged (G0) groups in pre-test and post-test measures for mental rotation, computational thinking, and mental rotation + computational thinking.

Test              M (G1)  Mdn (G1)  SD (G1)  IQR (G1)  M (G0)  Mdn (G0)  SD (G0)  IQR (G0)  U      p
Pre-test MR       0.500   0.417     0.232    0.333     0.485   0.417     0.215    0.250     276.5  0.781
Post-test MR      0.604   0.583     0.198    0.250     0.606   0.583     0.208    0.292     269.0  0.912
Pre-test CT       0.552   0.500     0.330    0.500     0.693   0.750     0.336    0.500     197.5  0.133
Post-test CT      0.729   0.750     0.265    0.500     0.807   1.000     0.298    0.250     213.0  0.226
Pre-test MR+CT    0.271   0.250     0.285    0.500     0.341   0.250     0.304    0.500     228.5  0.415
Post-test MR+CT   0.615   0.750     0.276    0.500     0.625   0.750     0.264    0.310     260.0  0.927
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
