Article

A Case Study of Computational Thinking Analysis Using SOLO Taxonomy in Scientific–Mathematical Learning

by Alejandro De la Hoz Serrano *, Andrés Álvarez-Murillo *, Eladio José Fernández Torrado, Miguel Ángel González Maestre and Lina Viviana Melo Niño
Department of Experimental Science and Mathematics Teaching Area, University of Extremadura, 06006 Badajoz, Spain
* Authors to whom correspondence should be addressed.
Computers 2025, 14(5), 192; https://doi.org/10.3390/computers14050192
Submission received: 14 April 2025 / Revised: 12 May 2025 / Accepted: 13 May 2025 / Published: 15 May 2025

Abstract:
Education nowadays requires a variety of resources that support the acquisition of 21st-century skills, including Computational Thinking. Educational Robotics has emerged as a digital resource that supports the development of these skills in both male and female students across different educational stages. However, in-depth research is needed on evaluations that analyze the acquisition of Computational Thinking skills in pre-service teachers, especially when learning programs for scientific and mathematical content are designed. This study aims to analyze Computational Thinking skills using the SOLO taxonomy, with a focus on science and mathematics learning, through an intervention based on programming and Educational Robotics. A quasi-experimental design was used with a total sample of 116 pre-service teachers. The SOLO taxonomy categorization was used to associate each level of the taxonomy with the computational concepts analyzed through a quantitative questionnaire. The taxonomy levels associated with the Computational Thinking skills assessed correspond to the uni-structural and multi-structural levels. Males obtained better results before the intervention, whereas afterwards females showed better levels of Computational Thinking, as well as a stronger association with the higher-complexity level of learning analyzed. In turn, a trend was observed between the levels of the SOLO taxonomy and the computational concepts, such that an increase in skill for a given concept occurs similarly at both the uni-structural and multi-structural levels. The SOLO taxonomy is presented as a suitable tool for learning assessment, since it allows for a more detailed understanding of the quality of students’ learning. Therefore, the SOLO taxonomy serves as a valuable resource in the evaluation of Computational Thinking skills.

1. Introduction

In the current era, rapid advances in Computer Science (CS), Robotics, and Artificial Intelligence (AI) are transforming society at an unprecedented pace, creating new demands for educational systems worldwide [1,2,3]. These technological changes require not only technical knowledge but also the development of transferable problem-solving skills applicable across disciplines. In response, education systems are being challenged to cultivate competencies that go beyond traditional literacy, giving rise to the concept of “code literacy” [1]—a form of digital fluency that equips individuals to participate actively and critically in computationally driven societies.
Among emerging educational priorities, Computational Thinking (CT) has gained significant global attention as a core set of cognitive and procedural skills derived from Computer Science. CT includes abstraction, decomposition, debugging, pattern recognition, and algorithmic thinking [4]. Recognizing its potential to enhance learning across domains, many countries—such as the United States, China, Australia, and members of the European Union—have adopted policies to integrate CT into their educational frameworks, extending its implementation across all educational stages, including higher education [5,6].
However, the incorporation of CT into formal education presents substantial challenges. Chief among them is the limited preparation of teachers, particularly those in pre-service training, to understand and implement CT effectively in classrooms [7]. This gap underscores the importance of equipping future educators with not only the technical tools but also the cognitive frameworks necessary to foster and evaluate CT in students.
In this context, assessing how students learn and apply CT becomes a critical concern. In recent years, various educational taxonomies have been used to assess the development of CT skills. Among them, Bloom’s Revised Taxonomy and Webb’s Depth of Knowledge (DOK) have been widely applied to describe and measure cognitive complexity [8,9]. However, although useful for categorizing learning objectives, these taxonomies often fall short in assessing how students structure and integrate their knowledge across disciplines and over time.
To address this limitation, the Structure of Observed Learning Outcomes (SOLO) taxonomy offers a more dynamic and developmental model for assessing learning. SOLO classifies responses according to the structural complexity of student understanding, ranging from pre-structural (no understanding) to extended abstract (generalization and transfer). This is especially relevant given evidence that CT supports a range of cognitive functions—such as working memory, inhibitory control, and spatial reasoning—that are central to learning in science and mathematics [10,11,12,13]. Therefore, evaluating CT through the lens of SOLO allows for a deeper understanding of how learners progress cognitively as they engage in computational problem-solving, rather than simply what they can produce.
This study addresses a key gap in the literature: the absence of robust and developmentally sensitive frameworks for evaluating the quality and depth of learning in CT, particularly within the context of pre-service teacher education. To address this gap, the present study adopts the SOLO taxonomy to assess the development of CT skills during an Educational Robotics intervention within a science–mathematics learning environment. This approach allows for a more nuanced analysis of how students structure their knowledge in interdisciplinary CT tasks. The study also seeks to examine the taxonomy’s potential to inform the design of instructional strategies and assessment tools that are better aligned with the cognitive and structural demands of CT learning. To this end, the study addresses the following research questions (RQs):
- RQ1: To what extent does the SOLO taxonomy effectively reflect the development of computational thinking skills in pre-service elementary teachers after engagement in an educational robotics program focused on science and mathematics?
- RQ2: Are there gender differences in the SOLO taxonomy levels achieved by pre-service elementary teachers after completing an intervention with educational robotics focusing on developing computational thinking skills in science and mathematics?
To address these questions, the present study builds on a multidisciplinary body of research that explores the role of Computational Thinking in science and mathematics education. In the following section, we outline the theoretical foundations that support the integration of CT into interdisciplinary learning contexts and justify the use of the SOLO taxonomy as a developmental framework for evaluating students’ conceptual progression.

2. Theoretical Background

Computational Thinking (CT) has been widely recognized as a core competency for 21st-century learners [14,15] and is explicitly identified as a foundational practice in the Next Generation Science Standards (NGSS) [16]. Originally rooted in the fields of computer science and programming, CT has gradually evolved into a cross-disciplinary construct, with increasing attention directed toward its integration into science, mathematics, technology, and engineering education [13,17,18]. This interdisciplinary expansion is driven in part by the accelerating pace of technological advancement in these fields [19]. Innovations in areas such as bioinformatics, computational statistics, chemometrics, and data analytics have highlighted the need for a computational approach in both scientific investigation and problem-solving across academic and professional contexts [13].
Within science education, CT has emerged as a central focus for both pedagogical development and empirical investigation [20]. Science and computer science share methodological affinities in their use of the scientific method and the engineering design process—both of which involve iterative cycles of hypothesis generation, modeling, refinement, and evaluation. Moreover, both domains depend on core computational constructs such as abstraction, modularity, and algorithmic reasoning to represent and understand complex systems [20]. A similar convergence has been noted in the relationship between CT and mathematics, where CT supports mathematical learning through shared cognitive mechanisms such as problem decomposition, pattern recognition, and symbolic representation [21,22,23]. However, as Lockwood and De Chenne [24] note, not all mathematical concepts translate seamlessly into programming environments, and students often face significant cognitive dissonance when navigating between computational and mathematical logics [18]. Nonetheless, scholars such as Sinclair and Patterson [25] maintain that CT and mathematics remain deeply intertwined and highlight the opportunity for pedagogical innovation through their integration.
Despite growing interest in CT integration, significant gaps persist in the literature, particularly in the development of robust and context-sensitive assessment models. Shumway et al. [26] and Ye et al. [27] both underscore the lack of systematic reviews and empirical studies that explore CT’s intersection with mathematics education, particularly within interdisciplinary or project-based learning environments. Furthermore, there remains little consensus on how to meaningfully assess the depth and structure of students’ CT development, especially when embedded in cross-disciplinary settings such as science–mathematics curricula. While existing studies report promising outcomes from integrative interventions [28,29,30,31], they often rely on generalized evaluation frameworks or competency rubrics that emphasize surface-level indicators such as correctness or procedural fluency.
To address these limitations, Weintrop et al. [13] proposed a taxonomy tailored specifically to CT in mathematics and science education, comprising four core domains: data practices, modeling and simulation practices, computational problem-solving practices, and systems thinking practices (see Figure 1). This contribution has advanced the field by providing a domain-specific framework for instructional design. However, the assessment strategies aligned with such taxonomies often remain generic or summative in nature, failing to account for the evolving complexity of students’ conceptual understanding throughout the learning process.
However, despite the growing body of evidence on the positive effects of CT-related interventions, a critical gap persists in the literature regarding how to assess the depth and structure of students’ learning in CT. Most traditional assessment methods—such as performance rubrics or standardized tests based on taxonomies like Bloom’s—tend to emphasize accuracy or hierarchical levels of cognitive achievement [8,9].
In this context, the SOLO taxonomy emerges as a promising yet underutilized alternative, which the present study adopts as a framework for evaluating the quality of CT learning. Unlike Bloom’s taxonomy, which emphasizes types of cognitive processes (e.g., remembering, evaluating), or Webb’s Depth of Knowledge (DoK), which focuses on task complexity, SOLO centers on the structural transformation of learners’ responses—from fragmented to integrated and abstract understandings [32]. This feature is particularly relevant for CT, where learning involves not only acquiring isolated skills but also synthesizing them into transferable strategies across domains. The SOLO taxonomy captures both quantitative and qualitative dimensions of learning progression (see Figure 2), making it well suited to interdisciplinary contexts where CT serves as a bridge between conceptual domains.
Furthermore, SOLO provides a non-hierarchical model that allows researchers to track students’ movement across five levels of cognitive complexity: pre-structural, uni-structural, multi-structural, relational, and extended abstract. These levels reflect not only the accumulation of content but also the increasing integration and generalization of knowledge, a key attribute of CT expertise [33]. Table 1 illustrates how specific action verbs align with each level of the taxonomy, offering educators and researchers a practical tool for designing assessments that correspond to targeted learning outcomes [34].
While prior studies have applied the SOLO taxonomy to evaluate learning outcomes in fields such as computer science and statistics [35,36,37], its application to CT assessment in interdisciplinary STEM contexts remains limited. Several studies [38,39,40] advocate for its application in the teaching of various disciplines such as science and mathematics, with the aim of enhancing the assessment of learning quality at any educational level. The use of this tool can offer a more nuanced understanding of how students learn subject content, thereby supporting the design of student-centered interventions and improving the overall teaching–learning process.
Some researchers have explored its utility in programming education and found it effective in capturing the nuanced development of computational proficiency beyond traditional rubrics. For example, Sari et al. [41] used SOLO to explore gender-based differences in mathematics learning, finding male students exhibited more frequent extended abstract responses. Similarly, Tsan et al. [42] observed parallel trends in programming contexts, while Obreque et al. [43] reported higher performance among female students in scientific content tasks using SOLO analysis.
Furthermore, as evidenced in previous studies [8,9], there is a growing interest in assessing Computational Thinking (CT) beyond the confines of computer science, particularly within teacher education programs, where future educators are required to develop transferable problem-solving competencies. These studies underscore the necessity for more refined assessment instruments, although they primarily rely on generalized evaluation frameworks or competency rubrics, rather than employing structured taxonomies such as SOLO. This gap presents an opportunity for the broader integration of the SOLO taxonomy to more effectively capture the qualitative depth of CT development within these emerging educational contexts.
Despite these promising insights, the current body of research remains fragmented and limited in scope, particularly when it comes to applying the SOLO taxonomy within interdisciplinary learning environments that combine science, mathematics, and Computational Thinking. Most studies focus either on computing-specific contexts or general subject domains, without addressing the unique cognitive demands that arise when CT is embedded into non-computing curricular areas through innovative methodologies such as Educational Robotics.

3. Materials and Methods

3.1. Study Design and Sample

This study employed a pre-experimental design with a quantitative approach, using both descriptive and inferential statistical techniques. The sample consisted of 116 pre-service teachers enrolled in the third year of a four-year Primary Education degree program at a Spanish university.
Of the total participants, 38 were male and 78 were female. This gender ratio is consistent with the typical demographic profile of Primary Education degree programs in Spain, where female enrollment significantly outnumbers male enrollment. This group was chosen specifically because they had not received formal instruction in Computational Thinking (CT) or Educational Robotics prior to the intervention. Their academic stage and profile make them an ideal target population for examining the development of CT-related skills in future educators of science and mathematics.

3.2. Intervention

In order to explore Computational Thinking (CT) skills, a block-based programming training program was implemented. This program consisted of five sessions grounded in constructivist learning approaches, enabling students to develop various skills throughout the proposed challenges, such as creativity, problem-solving, and self-regulation. During these sessions, students explored the wide range of possibilities offered by programming tools such as Scratch 3.0® and the Mind Designer® Educational Robotics Kit, both of which were integrated into teaching and learning contexts involving scientific and mathematical content.
All intervention activities were designed with the aim of fostering fundamental computational concepts such as addresses, loops, conditionals, and functions or sequences. Accordingly, both teacher demonstrations and student work were required to incorporate the use of programming code and blocks aligned with these computational concepts.
The first two sessions were dedicated to the introduction and use of Scratch 3.0® software within a teaching–learning framework focused on mathematical content. The primary goal of these sessions was to employ programming to create mandalas composed of various geometric shapes, which had to be drawn both separately and in overlapping arrangements, using the software’s functionalities such as loops, variable creation, and conditionals (see Figure 3).
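The repeat-and-rotate logic behind the mandala task can be sketched outside of Scratch. The following Python fragment is an illustrative analogue only (the function names and parameters are hypothetical, not part of the intervention materials): it computes the vertices of a regular polygon and overlaps several rotated copies, mirroring the loop-plus-turn pattern students programmed with blocks.

```python
import math

def polygon_vertices(sides, radius, rotation_deg=0.0):
    """Vertices of one regular polygon, optionally rotated by rotation_deg."""
    return [(radius * math.cos(math.radians(rotation_deg + i * 360 / sides)),
             radius * math.sin(math.radians(rotation_deg + i * 360 / sides)))
            for i in range(sides)]

def mandala(sides, radius, repeats):
    """Overlap `repeats` copies of the polygon, each turned by an equal step,
    echoing the Scratch loop used to draw overlapping geometric shapes."""
    step = 360 / (sides * repeats)
    return [polygon_vertices(sides, radius, k * step) for k in range(repeats)]

rings = mandala(sides=6, radius=100, repeats=4)
print(len(rings), len(rings[0]))  # 4 rotated hexagons, each with 6 vertices
```

The nested loop structure (one loop over shapes, one over vertices) is the same concept students express with Scratch’s repeat blocks and turn instructions.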
Session 3 consisted of a theoretical explanation and experimentation with the possibilities offered by Educational Robotics (ER) and the Mind Designer® robotics kit. Students had the opportunity to experiment with this resource by completing various programming tasks, similar to those previously explained. The session concluded with an example of activities designed for teaching scientific and mathematical content. In the last two sessions, students, working in cooperative groups of four, were required to create a robotic board as a teaching tool for scientific learning. Specifically, the content focused on the scientific topic of healthy hydration habits, which allowed for interdisciplinary work with both science and mathematics content.
To measure Computational Thinking, a modified version of the test designed and validated by Román-González [44] was used. The shortened online questionnaire was administered in a similar manner both before and after the intervention. This design is characterized by evaluating different computational concepts aligned with the learning standards set by the Computer Science Teachers Association (CSTA) for computer science education, and it allows for adaptation to different educational stages, as demonstrated in the present study [45]. The computational concepts employed are as follows:
- Addresses: instructions that allow for moving the object in different directions.
- Loops: performing the same sequence of instructions multiple times.
- Conditionals: understanding the implications of decision-making when carrying out a task or activity.
- Functions: common code modules, typically referred to as functions; they are like labeled drawers in which data can be stored until a program needs them.
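For illustration only (the actual instrument is block-based and its items are not reproduced here), the four computational concepts can be expressed in a few lines of text code; the grid world and all names below are hypothetical.

```python
# Text-based analogue of the four block-based computational concepts
# assessed: addresses, loops, conditionals, and functions.

def move(position, direction):
    """Addresses: an instruction that moves the object in one direction."""
    dx, dy = {"up": (0, 1), "down": (0, -1),
              "left": (-1, 0), "right": (1, 0)}[direction]
    return (position[0] + dx, position[1] + dy)

def run_program(start, steps):
    """Functions: a reusable code module; loops: repeating a sequence."""
    position = start
    for direction in steps:                               # loop over steps
        if direction in ("up", "down", "left", "right"):  # conditional
            position = move(position, direction)
    return position

print(run_program((0, 0), ["right", "right", "up"]))  # prints (2, 1)
```

A questionnaire item at the uni-structural level might ask which single instruction moves the object one cell, while a multi-structural item combines several such elements, as in the three-step program above.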
The modifications were made with the goal of reducing the time required to complete the questionnaire while preserving the quality of the evaluation of the various computational concepts. Specifically, these adjustments focused on reducing the number of items (from 28 to 12) while ensuring that at least two questions were retained for each computational concept. This reduction was carried out based on the need to include sufficient elements to assess the participants’ level of Computational Thinking (CT), with the internal consistency coefficient calculated at 0.79. Data collection and analysis were conducted in accordance with the principles outlined in the Declaration of Helsinki [46].
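The internal consistency coefficient reported (0.79) is commonly computed as Cronbach’s alpha. A minimal sketch, assuming a respondents-by-items matrix of 0/1 item scores (the data below are synthetic, not the study’s responses):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                         # number of items (12 here)
    item_vars = scores.var(axis=0, ddof=1)      # per-item variances
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Synthetic 0/1 responses for 116 respondents on 12 items, for shape only
rng = np.random.default_rng(0)
data = rng.integers(0, 2, size=(116, 12))
print(round(cronbach_alpha(data), 2))
```

With perfectly consistent items the coefficient reaches 1.0; values around 0.7–0.8, as reported here, are conventionally taken as acceptable reliability.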

3.3. Data Analysis

To analyze the level of Computational Thinking (CT) based on the Román-González questionnaire [44], specific questions from the test were selected and mapped to a particular level of the SOLO taxonomy corresponding to each computational concept. The 12 items in the test were carefully examined, and their statements were matched with the verbs described at each level of the SOLO taxonomy, as outlined by Brabrand and Dahl [34]. The use of the SOLO taxonomy, as shown in Appendix A, enabled the classification of each computational concept (addresses, loops, conditionals, and functions) into two levels: uni-structural and multi-structural.
For the quantitative analysis of the collected data, R-2.14.0 software [47] was employed to perform both descriptive and inferential statistics. Given the non-normality of the data (p < 0.05), non-parametric statistical tests were applied. Specifically, the Wilcoxon test was used to assess statistically significant differences between pre- and post-intervention results, while the Mann–Whitney U test was utilized to examine differences between the two independent groups, namely male and female students.
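The analysis itself was run in R [47]; an equivalent non-parametric workflow can be sketched in Python with SciPy. The scores below are synthetic stand-ins, not the study data.

```python
import numpy as np
from scipy.stats import shapiro, wilcoxon, mannwhitneyu

rng = np.random.default_rng(1)
# Synthetic pre/post scores on a 0-12 questionnaire for 116 participants
pre = rng.integers(0, 13, size=116).astype(float)
post = np.clip(pre + rng.integers(0, 4, size=116), 0, 12)
sex = rng.choice(["male", "female"], size=116, p=[38 / 116, 78 / 116])

# Normality check motivating the non-parametric tests
print("Shapiro p:", shapiro(pre).pvalue)

# Paired pre/post comparison (Wilcoxon signed-rank test)
print("Wilcoxon p:", wilcoxon(pre, post).pvalue)

# Independent-groups comparison by sex (Mann-Whitney U test)
print("Mann-Whitney p:",
      mannwhitneyu(post[sex == "male"], post[sex == "female"]).pvalue)
```

The same three steps (normality check, paired test, independent-groups test) match the procedure described above, whichever statistical package is used.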

4. Results

The results analyzed in this study are presented below. First, the descriptive analysis results are shown, followed by the results of the inferential tests. As observed in Table 2 and Figure 4, regarding the pretest, there is a higher overall percentage in the uni-structural dimension (64.14%) than in the multi-structural dimension (53.42%). Specifically, in four of the six computational concepts (addresses, loops, while conditionals, and functions), a higher percentage is found in the uni-structural dimension.
Focusing on the sex of the study sample, both groups show a higher percentage in the uni-structural dimension for the same computational concepts as observed overall. When comparing the two groups, the male group has a higher percentage of correct answers across all six computational concepts in the uni-structural dimension. In contrast, in the multi-structural dimension, the female group shows a higher percentage in two of the six concepts (addresses and while conditionals). Overall, the male group exhibits a higher percentage than the female group in both dimensions.
Table 3 and Figure 5 display the percentage of correct responses in the post-test. It can be observed that, overall, there is a higher percentage of correct answers in the uni-structural dimension (73.21%) compared to the multi-structural dimension (63.84%). Specifically, in four out of the six computational concepts (addresses, loops, while conditionals, and functions), a higher percentage is found in the uni-structural dimension.
When considering the sex of the study sample, both groups show a higher percentage in the uni-structural dimension for the same computational concepts as observed in the overall results. Regarding the differences between the two, the male group exhibits a higher percentage in four out of the six computational concepts (addresses, loops, simple conditionals, and functions) in the uni-structural dimension. In contrast, in the multi-structural dimension, the female group shows a higher percentage across all concepts, except for compound conditionals. Overall, in both dimensions, the female group presents a higher percentage.
The results from the Mann–Whitney U test (Table 4) show statistically significant differences (p < 0.05) between female and male participants. However, these differences are not consistent across all dimensions and computational concepts measured.
In relation to the pretest, statistically significant differences were found in the concepts of addresses (uni-structural dimension) and functions (both the uni-structural and multi-structural dimensions), as well as in the overall uni-structural dimension. Regarding the post-test, statistically significant differences were observed in the concept of while conditionals (uni-structural dimension). In the pretest cases, male participants showed a higher percentage of correct answers, whereas in the post-test it was the female participants who demonstrated a higher percentage.
Furthermore, for the concepts of simple conditionals (both the uni-structural and multi-structural dimensions) and compound conditionals (uni-structural dimension), there are indications of statistical significance (p ≈ 0.05).
Regarding the differences between the pretest and post-test results, Table 5 indicates statistically significant (p < 0.05) improvements overall, largely driven by the female participants, who exhibited significant gains across multiple concepts and SOLO dimensions. In contrast, male participants showed limited improvement, with significant differences observed only for one concept (while conditionals, multi-structural dimension).
Statistically significant differences (p < 0.05) were observed in the post-test for the concepts while conditionals and functions (in both the uni-structural and multi-structural dimensions), as well as in the overall scores, and for the compound conditionals (uni-structural dimension). Additionally, near-significant values were detected for loops (multi-structural dimension) and compound conditionals (multi-structural dimension).
With regard to gender-based differences, statistically significant improvements were identified for the concept while conditionals (multi-structural dimension) for male participants. Among female participants, statistically significant differences were found for the concepts loops (multi-structural dimension), while conditionals (uni-structural dimension), and functions (both uni-structural and multi-structural dimensions), along with significant overall improvement across both dimensions.
These results highlight the utility of the SOLO taxonomy in providing detailed and differentiated insights into learning quality. The taxonomy enabled the identification of specific levels of cognitive complexity, with the uni-structural level emerging as the most frequently achieved. Furthermore, improvement was noted across both uni-structural and multi-structural levels, suggesting enhanced learning outcomes following the intervention.

5. Discussion

The development of programming and computational thinking (CT) skills is increasingly recognized as a fundamental element of 21st-century education—not only within computer science but also across disciplines such as mathematics and science. This growing recognition is reflected in international curricular frameworks and empirical studies, which emphasize the need to equip future teachers with transferable CT skills that can be applied across educational and real-world contexts [35,36,37]. The present study examined the evolution of CT skills in prospective primary teachers before and after a robotics-based intervention grounded in science and mathematics content, using the SOLO taxonomy as the primary evaluative framework.
Regarding RQ1: How effectively does the SOLO taxonomy reflect changes in the computational thinking skills of prospective primary teachers as a result of participating in an educational robotics program focused on science and mathematics? The findings clearly support the utility of the SOLO taxonomy in capturing both quantitative and qualitative changes in participants’ CT skills. Pre- and post-intervention analyses revealed noticeable cognitive progression, from predominant uni-structural levels toward more frequent multi-structural responses, in key CT concepts such as loops, conditionals, and functions. This shift suggests not only an increase in the volume of knowledge but also an improvement in the structural complexity with which participants organize and apply CT concepts.
While the statistical results indicate significant differences between pre- and post-intervention scores, particularly among female participants, it is important to interpret these changes as evidence of an evolving understanding of computational concepts rather than isolated score improvements. The progression from uni-structural to multi-structural responses reflects not only increased familiarity with coding elements but also a deeper integration of Computational Thinking processes. This development aligns with the structured nature of the intervention, which combined theoretical explanation with practical problem-solving tasks. Thus, the observed learning gains can be interpreted as a consequence of both cognitive engagement and instructional design, suggesting that targeted interventions can foster meaningful conceptual growth.
These results align with prior research demonstrating the positive impact of robotics-based interventions on CT skill development among future educators [48,49,50], particularly within disciplinary contexts such as science and mathematics education [51]. More specifically, the current study found that uni-structural reasoning remained dominant in both pre- and post-test responses, although an increase in multi-structural responses was observed after the intervention. This pattern mirrors findings from a study conducted by Uzumcu and Bay [40], which applied the SOLO taxonomy to CT skill tests before and after a similar educational robotics intervention. In their study, students predominantly displayed pre-structural levels prior to the intervention but reached higher levels—including relational and extended abstract understanding—after participating in the program.
The present study aligns with this trend. Specifically, dominance in the uni-structural level was observed for the same concepts (addresses, loops, while conditionals, and functions) before and after the intervention. A clear trend is evident between the two levels of the SOLO taxonomy and the various computational concepts analyzed. Notably, in concepts such as addresses, compound conditionals, and functions, where an increase in correct answers was observed after the intervention, this improvement is reflected similarly at both the uni-structural and multi-structural levels. Conversely, in concepts like loops, simple conditionals, and while conditionals, where a lower percentage of correct answers was observed post-intervention, the pattern was similar across both levels of the SOLO taxonomy.
The divergence between the present study and that of Uzumcu and Bay [40] lies in the post-intervention depth of cognitive development: while our participants showed improved performance, the majority did not achieve relational or extended abstract levels. This difference may be attributed to several contextual factors, such as the duration and intensity of the intervention, the cognitive demands for integrating CT with scientific and mathematical content, or the participants’ prior knowledge and teaching experience. This indicates the need for sustained and scaffolded interventions that allow learners to internalize and apply CT concepts in increasingly complex and interconnected ways.
In parallel, findings from other studies reinforce this trend. For instance, Fojcik et al. [36] investigated whether introductory programming courses in non-CS domains yield substantial learning gains, while Karvounidis et al. [37] and Almeida et al. [35] reported high frequencies of uni-structural and multi-structural reasoning in students’ programming work across languages. Notably, these studies also observed a transitional relationship between multi-structural and relational levels, reinforcing the need for further research to explore how students advance beyond surface-level CT understanding [40].
The SOLO taxonomy proves to be an effective tool for evaluating coding skills and for structuring teaching strategies [46]. By identifying the SOLO level at which learners operate for each concept, educators can design personalized instructional pathways that close knowledge gaps and foster progression toward higher-order thinking. This perspective is supported by research in science and mathematics education, which similarly reports the predominance of surface-level SOLO responses and advocates for explicitly designed interventions that foster relational and extended abstract understanding [38,39,52,53].
Addressing RQ2 (what differences, if any, exist between male and female prospective primary teachers in the SOLO taxonomy levels attained after completing a robotics-based intervention designed to enhance computational thinking within science and mathematics education?), the data reveal statistically significant differences between male and female participants, both before and after the intervention.
Initially, male participants displayed higher CT proficiency, a pattern corroborated by previous studies [41,42,51,54]. However, following the intervention, female participants matched or surpassed their male peers, particularly in the multi-structural dimension, indicating a marked improvement in their ability to engage with complex CT constructs. These findings align with those of Obreque et al. [43] and De la Hoz et al. [51], suggesting that well-designed, inclusive interventions can play a pivotal role in reducing gender disparities in CT education.
Similarly, female participants exhibited statistically significant improvements across a wider range of computational concepts and SOLO taxonomy levels than their male counterparts, whose progress was more limited; this suggests a higher level of learning achieved by the female participants. The more limited male progress may reflect their higher initial baseline performance, indicating possible prior exposure to or experience with computational thinking. Alternatively, the instructional design of the intervention may have better aligned with the learning styles or engagement patterns of the female participants. Given the 2:1 female-to-male ratio in the sample, these results also raise questions about statistical weighting and generalizability. Future studies should aim for balanced samples and potentially adapt the intervention to ensure it supports learners with varying levels of prior knowledge.
These findings reinforce the value of the SOLO taxonomy as both a diagnostic and formative tool in computational thinking education. Furthermore, they underscore the importance of sustained and contextually grounded interventions—such as those based on scientific and mathematical activities and the use of educational robotics—in fostering core CT competencies among future educators. However, the persistent difficulty in advancing toward the relational and extended abstract levels highlights the ongoing need for innovation in instructional design. Future research should investigate longitudinal interventions, interdisciplinary applications, and the integration of complementary assessment frameworks to better support the development of deep, transferable CT competencies.

6. Conclusions

This study aimed to examine the effectiveness of an educational robotics intervention—embedded within science and mathematics instruction—in enhancing Computational Thinking (CT) skills among prospective primary school teachers, using the SOLO taxonomy as a diagnostic and evaluative framework. It also sought to identify whether gender-based differences emerged in the SOLO levels achieved following the intervention.
The findings provide robust evidence of the SOLO taxonomy’s capacity to reveal qualitative changes in students’ CT competencies, particularly in concepts like while conditionals and functions, where a notable shift from uni-structural to multi-structural levels was observed. These results indicate not only an increase in content knowledge but also a deeper structural organization of that knowledge—reflecting meaningful cognitive development.
Gender differences were also identified. While male participants initially demonstrated stronger performance, female participants exhibited greater gains after the intervention, surpassing their male peers in several key CT dimensions, particularly in the multi-structural dimension. This suggests that inclusive and contextually grounded pedagogical approaches, such as the use of educational robotics in STEM content areas, can contribute to closing gender gaps in CT learning.
Nonetheless, persistent challenges in advancing participants toward the relational and extended abstract levels highlight the need for further refinement of instructional strategies. Addressing these higher-order dimensions of thinking will be essential to fostering transferable and sophisticated CT skills.
The SOLO taxonomy provided detailed insights into students' learning, serving both as a formative assessment tool and as a guide for designing personalized, equity-focused instructional practices.

7. Limitations of the Study and Future Lines of Research

This study analyzed Computational Thinking (CT) skills through a questionnaire that has been validated and used in multiple recent studies. However, the associative analysis with the SOLO taxonomy shows that the more complex levels of the taxonomy (relational and extended abstract) are not fully assessed by this instrument. It is therefore suggested to expand the questionnaires to better evaluate learning at each level of the taxonomy, allowing students' skills to be determined more precisely. Additionally, future research should examine students' immediate environment as a relevant factor influencing their learning of these skills. This would provide a more comprehensive understanding of the factors affecting CT skill development and help design more targeted educational interventions.

Author Contributions

Conceptualization, A.D.l.H.S. and A.Á.-M.; methodology, A.D.l.H.S. and A.Á.-M.; software, A.D.l.H.S.; validation, A.Á.-M., E.J.F.T., M.Á.G.M. and L.V.M.N.; formal analysis, A.D.l.H.S. and A.Á.-M.; investigation, A.D.l.H.S., A.Á.-M., E.J.F.T., M.Á.G.M. and L.V.M.N.; resources, A.D.l.H.S. and A.Á.-M.; data curation, A.D.l.H.S. and A.Á.-M.; writing—original draft preparation, A.D.l.H.S.; writing—review and editing, A.Á.-M., E.J.F.T., M.Á.G.M. and L.V.M.N.; visualization, A.D.l.H.S., A.Á.-M., E.J.F.T., M.Á.G.M. and L.V.M.N.; supervision, A.Á.-M., E.J.F.T., M.Á.G.M. and L.V.M.N.; project administration, L.V.M.N.; funding acquisition, A.D.l.H.S. and L.V.M.N. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by project PID2020-115214RB-I00, funded by MCIN/AEI/10.13039/50110001103.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Ethics Committee of the University of Extremadura (protocol 139/2023, approved on 28 September 2023).

Data Availability Statement

The original contributions presented in this study are included in the article.

Acknowledgments

The authors thank the Ministry of Education and Vocational Training for granting a predoctoral contract (FPU20/04959).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Items from the Román-González questionnaire [44] selected for association with the SOLO taxonomy dimensions.
Computational Concept | SOLO Taxonomy Dimension | Associated Item
Addresses | Uni-structural | 3
Addresses | Multi-structural | 4
Loops | Uni-structural | 7
Loops | Multi-structural | 8
Conditional (simple) | Uni-structural | 16
Conditional (simple) | Multi-structural | 14
Conditional (compound) | Uni-structural | 19
Conditional (compound) | Multi-structural | 18
Conditional (while) | Uni-structural | 24
Conditional (while) | Multi-structural | 22
Functions | Uni-structural | 28
Functions | Multi-structural | 25
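For readers who wish to replicate the scoring, the item-to-dimension mapping above can be encoded directly. The following Python sketch is illustrative only: the item numbers come from the Appendix A table, while the `solo_score` helper is a hypothetical aggregation not described in the paper.

```python
# Mapping of Román-González questionnaire items to SOLO dimensions,
# transcribed from the Appendix A table.
SOLO_ITEMS = {
    "addresses":            {"uni-structural": 3,  "multi-structural": 4},
    "loops":                {"uni-structural": 7,  "multi-structural": 8},
    "conditional_simple":   {"uni-structural": 16, "multi-structural": 14},
    "conditional_compound": {"uni-structural": 19, "multi-structural": 18},
    "conditional_while":    {"uni-structural": 24, "multi-structural": 22},
    "functions":            {"uni-structural": 28, "multi-structural": 25},
}


def solo_score(answers):
    """Tally correct answers per SOLO dimension (hypothetical helper).

    `answers` maps questionnaire item numbers to booleans (True = correct);
    items outside the SOLO mapping are ignored.
    """
    totals = {"uni-structural": 0, "multi-structural": 0}
    for levels in SOLO_ITEMS.values():
        for dimension, item in levels.items():
            if answers.get(item):
                totals[dimension] += 1
    return totals


# Example: items 3 (uni) and 4 (multi) correct, item 7 incorrect
# solo_score({3: True, 4: True, 7: False})
# → {'uni-structural': 1, 'multi-structural': 1}
```

A per-dimension tally of this kind is what underlies the percentage-correct figures reported in Tables 2 and 3.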

References

  1. National Institute of Educational Technologies and Teacher Training. School of Computational Thinking and Artificial Intelligence 2021/22. From Teacher Training to Change in Methodology. Research Finding. Ministry of Education and Vocational Training. 2023. Available online: https://code.intef.es/wp-content/uploads/2023/04/09_22_Experimentacion_Investigacion-EPCIA-21-22_Investigacion-R3_ing.pdf (accessed on 13 January 2025).
  2. Salas-Pilco, S.Z. The impact of AI and robotics on physical, social-emotional and intellectual learning outcomes: An integrated analytical framework. Br. J. Educ. Technol. 2020, 51, 1808–1825.
  3. Yim, I.H.Y.; Su, J. Artificial intelligence (AI) learning tools in K-12 education: A scoping review. J. Comput. Educ. 2025, 12, 93–131.
  4. Román-González, M.; Moreno-León, J.; Robles, G. Complementary Tools for Computational Thinking Assessment. In Proceedings of the International Conference on Computational Thinking Education (CTE 2017), The Education University of Hong Kong, Hong Kong, China, 13–15 July 2017.
  5. Grover, S.; Pea, R. Computational thinking in K-12: A review of the state of the field. Educ. Res. 2013, 42, 38–43.
  6. Bocconi, S.; Chioccariello, A.; Kampylis, P.; Dagienė, V.; Wastiau, P.; Engelhardt, K.; Earp, J.; Stupurienė, G. Reviewing Computational Thinking in Compulsory Education: State of Play and Practices from Computing Education; Inamorato, A., Cachia, R., Giannoutsou, N., Punie, Y., Eds.; Publications Office of the European Union: Luxembourg, 2022; Available online: https://publications.jrc.ec.europa.eu/repository/handle/JRC128347 (accessed on 28 March 2025).
  7. Mukasheva, M.; Omirzakova, A. Computational thinking assessment at primary school in the context of learning programming. World J. Educ. Technol. Curr. Issues 2021, 13, 336–353.
  8. Vrachnos, E.; Jimoyiannis, A. Secondary Education Students' Difficulties in Algorithmic Problems with Arrays: An Analysis Using the SOLO Taxonomy. Themes Sci. Technol. Educ. 2017, 10, 31–52.
  9. Yadav, A.; Gretter, S.; Good, J.; McLean, T. Computational thinking in teacher education. In Emerging Research, Practice, and Policy on Computational Thinking. Educational Communications and Technology: Issues and Innovations; Rich, P., Hodges, C., Eds.; Springer: Cham, Switzerland, 2017.
  10. Bers, M.; Strawhacker, A.; Sullivan, A. The State of the Field of Computational Thinking in Early Childhood Education; Report No. 274; OECD Publishing: Paris, France, 2022.
  11. Nouri, J.; Zhang, L.; Mannila, L.; Norén, E. Development of computational thinking, digital competence and 21st century skills when learning programming in K-9. Educ. Inq. 2019, 11, 1–17.
  12. Sun, L.; You, X.; Zhou, D. Evaluation and development of STEAM teachers' computational thinking skills: Analysis of multiple influential factors. Educ. Inf. Technol. 2023, 28, 14493–14527.
  13. Weintrop, D.; Beheshti, E.; Horn, M.; Orton, K.; Jona, K.; Trouille, L.; Wilensky, U. Defining computational thinking for mathematics and science classrooms. J. Sci. Educ. Technol. 2016, 25, 127–147.
  14. Wing, J.M. Computational thinking. Commun. ACM 2006, 49, 33–35.
  15. Grover, S.; Pea, R. Computational Thinking: A competency whose time has come. In Computer Science Education: Perspectives on Teaching and Learning in School; Sentance, S., Barendsen, E., Carsten, S., Eds.; Bloomsbury Publishing: London, UK, 2018; pp. 19–38.
  16. NGSS Lead States. Next Generation Science Standards: For States, by States; The National Academies Press: Washington, DC, USA, 2013.
  17. Resnick, M.; Rusk, N. Coding at a crossroads. Commun. ACM 2020, 63, 120–127.
  18. Cui, Z.; Ng, O. The interplay between mathematical and computational thinking in primary school students' mathematical problem-solving within a programming environment. J. Educ. Comput. Res. 2021, 59, 988–1012.
  19. Bailey, D.H.; Borwein, J.M. High-precision numerical integration: Progress and challenges. J. Symb. Comput. 2020, 46, 741–754.
  20. Herrero, J.F.Á.; Bautista, C.V. Didáctica de las ciencias, ¿de dónde venimos y hacia dónde vamos? UTE Teach. Technol. (Univ. Tarracon.) 2019, 7, 5–19.
  21. Pei, C.; Weintrop, D.; Wilensky, U. Cultivating computational thinking practices and mathematical habits of mind in Lattice Land. Math. Think. Learn. 2018, 20, 75–89.
  22. Gadanidis, G. Coding as a Trojan Horse for mathematics education reform. J. Comput. Math. Sci. Teach. 2015, 34, 155–173.
  23. Rambally, G. Integrating computational thinking in discrete structures. In Emerging Research, Practice, and Policy on Computational Thinking; Springer: Cham, Switzerland, 2017; pp. 99–119.
  24. Lockwood, E.; De Chenne, A. Enriching students' combinatorial reasoning through the use of loops and conditional statements in Python. Int. J. Res. Undergrad. Math. Educ. 2022, 6, 303–346.
  25. Sinclair, N.; Patterson, M. The dynamic geometrisation of computer programming. Math. Think. Learn. 2018, 20, 54–74.
  26. Shumway, J.F.; Welch, L.E.; Kozlowski, J.S.; Clarke-Midura, J.; Lee, V.R. Kindergarten students' mathematics knowledge at work: The mathematics for programming robot toys. Math. Think. Learn. 2023, 25, 380–408.
  27. Ye, H.; Liang, B.; Ng, O.L.; Chai, C.S. Integration of computational thinking in K-12 mathematics education: A systematic review on CT-based mathematics instruction and student learning. Int. J. STEM Educ. 2023, 10, 3.
  28. Chevalier, M.; Giang, C.; Piatti, A.; Mondada, F. Fostering computational thinking through educational robotics: A model for creative computational problem solving. Int. J. STEM Educ. Res. 2020, 7, 39.
  29. De la Hoz, A.; Melo, L.; Álvarez, A.; Cañada, F.; Cubero, J. The Promotion of Healthy Hydration Habits through Educational Robotics in University Students. Healthcare 2023, 11, 2160.
  30. De la Hoz, A.; Cañada, F.; Melo, L.V.; Alvarez, A.; Cubero, J. Teaching Human Hydration Science Content Through Computational Thinking and Educational Robotics. In Proceedings of the 15th Conference of the European Science Education Research Association (ESERA), Cappadocia, Turkey, 28 August–1 September 2023.
  31. Jaipal-Jamani, K.; Angeli, C. Effect of robotics on elementary preservice teachers' self-efficacy, science learning, and computational thinking. J. Sci. Educ. Technol. 2017, 26, 175–192.
  32. Biggs, J.B.; Collis, K.F. Evaluating the Quality of Learning: The SOLO Taxonomy (Structure of the Observed Learning Outcome); Academic Press: Cambridge, MA, USA, 1982.
  33. Biggs, J. What the student does: Teaching for enhanced learning. High. Educ. Res. Dev. 2012, 31, 39–55.
  34. Brabrand, C.; Dahl, B. Using the SOLO taxonomy to analyze competence progression of university science curricula. High. Educ. 2009, 58, 531–549.
  35. Almeida, A.; Araújo, E.; Figueiredo, J. Avaliando a Construção do Conhecimento em Programação Através da Taxonomia SOLO. In Proceedings of the Anais do XXXI Simpósio Brasileiro de Informática na Educação, Natal, Brazil, 24–28 November 2020; pp. 1813–1822.
  36. Fojcik, M.K.; Fojcik, M.; Sande, O.; Refvik, K.A.; Frantsen, T.; Bye, H. A content analysis of SOLO-levels in different computer programming courses in higher education. In Proceedings of the Norsk IKT-Konferanse for Forskning og Utdanning, Stavanger, Norway, 24–25 November 2020.
  37. Karvounidis, T.; Ladias, A.; Douligeris, C. Assessment of data carriers with the SOLO taxonomy in Scratch. In Proceedings of the 7th South-East Europe Design Automation, Computer Engineering, Computer Networks and Social Media Conference (SEEDA-CECNSM), Ioannina, Greece, 23–25 September 2022; pp. 1–7.
  38. Vidal-Szabó, P.; Estrella, S. Conocimiento Estadístico Especializado en Profesores de Educación Básica, basado en la taxonomía SOLO. Rev. Chil. Educ. Matemática 2021, 13, 134–148.
  39. Obreque, A.S.; Uribe, M.E.M.; Richards, G.R. El aprendizaje de la Biología desde la taxonomía SOLO: Niveles SOLO en estudiantes universitarios. Contextos Estud. Humanidades Cienc. Soc. 2003, 10, 199–212.
  40. Uzumcu, O.; Bay, E. The effect of computational thinking skill program design developed according to interest driven creator theory on prospective teachers. Educ. Inf. Technol. 2021, 26, 565–583.
  41. Sari, D.I.; Budayasa, I.K.; Juniati, D. The analysis of probability task completion; Taxonomy of probabilistic thinking-based across gender in elementary school students. In AIP Conference Proceedings; AIP Publishing: New York, NY, USA, 2017; Volume 1868.
  42. Tsan, J.; Boyer, K.E.; Lynch, C.F. How early does the CS gender gap emerge? A study of collaborative problem solving in 5th grade computer science. In Proceedings of the 47th ACM Technical Symposium on Computing Science Education, Memphis, TN, USA, 2–5 March 2016; pp. 388–393.
  43. Obreque, A.S.; Salvatierra, M.O.; Danilo, D.L. Evaluation of school performance in students with excellent marks in biology. Rev. Espac. 2018, 39, 31.
  44. Román-González, M. Codigoalfabetización y Pensamiento Computacional en Educación Primaria y Secundaria: Validación de un Instrumento y Evaluación de Programas. Ph.D. Thesis, National University of Distance Education, Madrid, Spain, 2016.
  45. Lafuente Martínez, M.; Lévêque, O.; Benítez, I.; Hardebolle, C.; Zufferey, J.D. Assessing Computational Thinking: Development and Validation of the Algorithmic Thinking Test for Adults. J. Educ. Comput. Res. 2022, 60, 1436–1463.
  46. World Medical Association. Declaration of Helsinki—Ethical Principles for Medical Research Involving Human Subjects. 2022. Available online: https://www.wma.net/policies-post/wma-declaration-of-helsinki-ethical-principles-for-medical-research-involving-human-subjects/ (accessed on 17 October 2024).
  47. Ocaña, R. Discovering R-Commander, 3rd ed.; Andalusian School of Public Health (EASP): Andalusia, Spain, 2019; Available online: http://www.repositoriosalud.es/handle/10668/2574 (accessed on 28 May 2024).
  48. Yun, M.; Crippen, K.J. Computational thinking integration into pre-service science teacher education: A systematic review. J. Sci. Teach. Educ. 2025, 36, 225–254.
  49. Nannim, F.A.; Ibezim, N.E.; Oguguo, B.C.; Nwangwu, E.C. Effect of project-based Arduino robot application on trainee teachers' computational thinking in robotics programming course. Educ. Inf. Technol. 2024, 29, 13155–13170.
  50. Rodrigues, R.N.; Brito-Costa, S.; Abbasi, M.; Costa, C.; Martins, F. Pre-service teachers' competencies to develop computational thinking: A Portuguese tool to analyse Computational Thinking. EURASIA J. Math. Sci. Technol. Educ. 2024, 20, em2528.
  51. De la Hoz Serrano, A.; Melo Niño, L.V.; Álvarez Murillo, A.; Martín Tardío, M.Á.; Cañada Cañada, F.; Cubero Juánez, J. Analysis of Gender Issues in Computational Thinking Approach in Science and Mathematics Learning in Higher Education. Eur. J. Investig. Health Psychol. Educ. 2024, 14, 2865–2882.
  52. Watson, J.; Wright, S.; Fitzallen, N.; Kelly, B. Consolidating understanding of variation as part of STEM: Experimenting with plant growth. Math. Educ. Res. J. 2023, 35, 961–999.
  53. Mol, S.M.; Matos, D.A.S. Uma análise sobre a Taxonomia SOLO: Aplicações na avaliação educacional. Estud. Avaliação Educ. 2019, 30, 722–747.
  54. Sun, L.; Liu, J. A gender differential analysis of educational robots' effects on primary teachers' computational thinking: Mediating effect of programming attitudes. Educ. Inf. Technol. 2024, 29, 19753–19782.
Figure 1. Taxonomy of Computational Thinking in mathematics and science [10].
Figure 2. Categorization of the SOLO taxonomy [32].
Figure 3. Examples of programming and mandala.
Figure 4. Percentage of correct answers in pretest based on gender and SOLO taxonomy dimensions.
Figure 5. Percentage of correct answers in post-test based on gender and SOLO taxonomy dimensions.
Table 1. Verbs and terms associated with the SOLO taxonomy levels [34].

Level 2 Uni-Structural | Level 3 Multi-Structural | Level 4 Relational | Level 5 Extended Abstract
Paraphrase | Combine | Analyze | Theorize
Define | Classify | Compare | Generalize
Identify | Structure | Contrast | Hypothesize
Count | Describe | Integrate | Predict
Name | Enumerate | Relate | Judge
Recite | List | Explain causes | Reflect
Follow instructions | Algorithmic approach | Apply theory | Transfer theory
Table 2. Percentage of correct answers in pretest based on gender and SOLO taxonomy dimensions.

Computational Concept | SOLO Dimension | Female | Male | Total
Addresses | Uni-structural | 83.54% | 100% | 88.39%
Addresses | Multi-structural | 68.35% | 60.61% | 66.07%
Loops (repeat) | Uni-structural | 75.95% | 84.85% | 78.57%
Loops (repeat) | Multi-structural | 35.44% | 45.45% | 38.39%
Conditional (simple) | Uni-structural | 30.38% | 48.48% | 35.71%
Conditional (simple) | Multi-structural | 64.56% | 81.82% | 69.64%
Conditional (compound) | Uni-structural | 63.29% | 63.64% | 63.39%
Conditional (compound) | Multi-structural | 73.42% | 81.82% | 75.89%
Conditional (while) | Uni-structural | 45.57% | 63.64% | 50.89%
Conditional (while) | Multi-structural | 29.11% | 18.18% | 25.89%
Functions | Uni-structural | 64.56% | 75.76% | 67.86%
Functions | Multi-structural | 36.71% | 63.64% | 44.64%
Total | Uni-structural | 60.55% | 72.73% | 64.14%
Total | Multi-structural | 51.27% | 58.59% | 53.42%
Table 3. Percentage of correct answers in post-test based on gender and SOLO taxonomy dimensions.

Computational Concept | SOLO Dimension | Female | Male | Total
Addresses | Uni-structural | 89.87% | 90.91% | 90.18%
Addresses | Multi-structural | 77.22% | 72.73% | 75.89%
Loops (repeat) | Uni-structural | 78.48% | 84.85% | 80.36%
Loops (repeat) | Multi-structural | 50.63% | 48.48% | 50.00%
Conditional (simple) | Uni-structural | 37.97% | 48.48% | 41.07%
Conditional (simple) | Multi-structural | 73.42% | 72.73% | 73.21%
Conditional (compound) | Uni-structural | 63.29% | 60.61% | 62.50%
Conditional (compound) | Multi-structural | 74.68% | 75.76% | 75.00%
Conditional (while) | Uni-structural | 82.28% | 54.55% | 74.11%
Conditional (while) | Multi-structural | 43.04% | 39.39% | 41.96%
Functions | Uni-structural | 89.87% | 93.94% | 91.07%
Functions | Multi-structural | 67.09% | 66.67% | 66.96%
Total | Uni-structural | 73.63% | 72.22% | 73.21%
Total | Multi-structural | 64.35% | 62.63% | 63.84%
Table 4. Mann–Whitney U test results for pretest and post-test according to SOLO taxonomy dimensions between male and female students.

Computational Concept | SOLO Dimension | Pretest S | Pretest p | Post-test S | Post-test p
Addresses | Uni-structural | 1089 | 0.01 * | 1290 | 0.87
Addresses | Multi-structural | 1203 | 0.43 | 1245 | 0.62
Loops (repeat) | Uni-structural | 1188 | 0.30 | 1221 | 0.44
Loops (repeat) | Multi-structural | 1173 | 0.33 | 1276 | 0.84
Conditional (simple) | Uni-structural | 1068 | 0.07 | 1167 | 0.31
Conditional (simple) | Multi-structural | 1079 | 0.07 | 1295 | 0.94
Conditional (compound) | Uni-structural | 1299 | 0.98 | 1269 | 0.79
Conditional (compound) | Multi-structural | 1194 | 0.35 | 1290 | 0.91
Conditional (while) | Uni-structural | 1068 | 0.08 | 942 | 0.00 *
Conditional (while) | Multi-structural | 1161 | 0.23 | 1256 | 0.73
Functions | Uni-structural | 1158 | 0.25 | 1251 | 0.50
Functions | Multi-structural | 953 | 0.01 * | 1298 | 0.97
Total | Uni-structural | 924 | 0.01 * | 1240 | 0.68
Total | Multi-structural | 1068 | 0.13 | 1289 | 0.93
* Statistically significant differences (p < 0.05).
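The between-gender comparisons in Table 4 rely on the Mann–Whitney U test on rank data. As a hedged illustration of how the statistic itself is computed (a minimal pure-Python sketch; the authors' analysis was carried out with R-Commander [47], and this is not their code or data), the U value is obtained from the joint ranking of the two groups' scores:

```python
def mann_whitney_u(group_a, group_b):
    """Mann-Whitney U statistic with average ranks for ties.

    Returns min(U_a, U_b), the convention most software reports.
    Illustrative sketch only; real analyses also need a p-value
    (normal approximation or exact distribution).
    """
    # Jointly sort all observations, remembering their origin index
    combined = sorted((value, index) for index, value in enumerate(group_a + group_b))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        # Extend j over ties so tied values share their average rank
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        average_rank = (i + j) / 2 + 1  # ranks are 1-based
        for k in range(i, j + 1):
            ranks[combined[k][1]] = average_rank
        i = j + 1
    # Rank sum of group A, then the two U statistics
    rank_sum_a = sum(ranks[:len(group_a)])
    u_a = rank_sum_a - len(group_a) * (len(group_a) + 1) / 2
    u_b = len(group_a) * len(group_b) - u_a
    return min(u_a, u_b)


# Completely separated samples give U = 0, the strongest group difference:
# mann_whitney_u([1, 2, 3], [4, 5, 6])  → 0.0
```

Smaller U values indicate larger group differences, which is consistent with the significant rows in Table 4 (e.g., the 942 and 924 entries) having the smallest statistics.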
Table 5. Wilcoxon test by gender and SOLO taxonomy dimensions between pretest and post-test.

Computational Concept | SOLO Dimension | Male S | Male p | Female S | Female p | Total S | Total p
Addresses | Uni-structural | 6 | 0.15 | 88 | 0.28 | 137 | 0.70
Addresses | Multi-structural | 37 | 0.30 | 252 | 0.24 | 475 | 0.12
Loops (repeat) | Uni-structural | 18 | 1.00 | 217 | 0.72 | 351 | 0.75
Loops (repeat) | Multi-structural | 42 | 0.81 | 222 | 0.04 * | 450 | 0.06
Conditional (simple) | Uni-structural | 39 | 1.00 | 312 | 0.34 | 561 | 0.40
Conditional (simple) | Multi-structural | 30 | 0.35 | 285 | 0.25 | 493 | 0.56
Conditional (compound) | Uni-structural | 35 | 0.43 | 259 | 0.12 | 475 | 0.04 *
Conditional (compound) | Multi-structural | 30 | 0.80 | 155 | 0.07 | 315 | 0.06
Conditional (while) | Uni-structural | 72 | 0.46 | 76 | 0.00 * | 344 | 0.00 *
Conditional (while) | Multi-structural | 5.0 | 0.02 * | 315 | 0.09 | 408 | 0.00 *
Functions | Uni-structural | 11 | 0.07 | 77.5 | 0.00 * | 143 | 0.00 *
Functions | Multi-structural | 56 | 0.82 | 136 | 0.00 * | 378 | 0.00 *
Total | Uni-structural | 32 | 0.91 | 78 | 0.00 * | 111 | 0.00 *
Total | Multi-structural | 32 | 0.53 | 78 | 0.00 * | 111 | 0.01 *
* Statistically significant differences (p < 0.05).
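Analogously, the pretest/post-test comparisons in Table 5 use the Wilcoxon signed-rank test on paired scores. The sketch below (again a pure-Python illustration under stated assumptions, not the authors' implementation) ranks the absolute pre/post differences and returns the smaller of the positive- and negative-rank sums:

```python
def wilcoxon_w(pre, post):
    """Wilcoxon signed-rank statistic for paired samples.

    Zero differences are discarded; tied |differences| share their
    average rank; returns min(W+, W-). Illustrative sketch only
    (no p-value computation).
    """
    # Non-zero paired differences
    diffs = [b - a for a, b in zip(pre, post) if b - a != 0]
    # Rank indices by absolute difference, averaging ranks over ties
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        average_rank = (i + j) / 2 + 1  # 1-based ranks
        for k in range(i, j + 1):
            ranks[order[k]] = average_rank
        i = j + 1
    # Sum ranks by sign of the difference
    w_plus = sum(r for d, r in zip(diffs, ranks) if d > 0)
    w_minus = sum(r for d, r in zip(diffs, ranks) if d < 0)
    return min(w_plus, w_minus)
```

When nearly all participants improve from pretest to post-test, the negative-rank sum (and hence the statistic) approaches zero, which matches the pattern of small significant S values in Table 5.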
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
