Article

A Structured AHP-Based Approach for Effective Error Diagnosis in Mathematics: Selecting Classification Models in Engineering Education

by
Milton Garcia Tobar
*,
Natalia Gonzalez Alvarez
and
Margarita Martinez Bustamante
Grupo de Innovación Educativa en Ingeniería de Automoción, Universidad Politécnica Salesiana, Cuenca 010105, Ecuador
*
Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(7), 827; https://doi.org/10.3390/educsci15070827
Submission received: 30 May 2025 / Revised: 24 June 2025 / Accepted: 26 June 2025 / Published: 29 June 2025
(This article belongs to the Special Issue Mathematics in Engineering Education)

Abstract

Identifying and classifying mathematical errors is crucial to improving the teaching and learning process, particularly for first-year engineering students who often struggle with foundational mathematical competencies. This study aims to select the most appropriate theoretical framework for error classification by applying the Analytic Hierarchy Process (AHP), a multicriteria decision-making method. Five established classification models—Newman, Kastolan, Watson, Hadar, and Polya—were evaluated using six pedagogical criteria: precision in error identification, ease of application, focus on conceptual errors, focus on procedural errors, response validation, and viability in improvement strategies. Expert judgment was incorporated through pairwise comparisons to compute priority weights for each criterion. The results reveal that the Newman framework offers the highest overall performance, primarily due to its structured approach to error analysis and its applicability in formative assessment contexts. Newman’s focus on reading, comprehension, transformation, and encoding addresses the most common errors encountered in the early stages of mathematical learning. The study demonstrates the utility of the AHP as a transparent and replicable methodology for educational model selection. It addresses a gap in the literature regarding evidence-based criteria for designing diagnostic instruments. These findings support the development of targeted pedagogical interventions in mathematics education for engineering programs.

1. Introduction

Mathematics teaching constitutes a fundamental pillar in the training of engineering students. Mathematics is essential for expressing physical, chemical, and engineering laws, providing the necessary tools to confront and solve technical problems (Harris et al., 2015; Sazhin, 1998). The ability to apply various mathematical techniques is key to academic success, both in the classroom and in professional engineering practice (Strum & Kirk, 1979). However, engineering students often perceive mathematical concepts as complex and abstract, complicating their understanding and application. This situation is exacerbated by a lack of motivation and the perception that mathematics is irrelevant, a common issue among engineering undergraduates (Brandi & Garcia, 2017; Harris et al., 2015; López-Díaz & Peña, 2021). Integrating practical examples and real-world problems can help students appreciate the relevance of mathematics in their field (Brandi & Garcia, 2017; Sazhin, 1998). Furthermore, collaboration between mathematics and engineering departments can significantly enhance the teaching and learning of mathematics (Jaworski & Matthews, 2011). Identifying and analyzing the mathematical errors committed by students is therefore crucial for improving the teaching–learning process, allowing instructors to intervene effectively at critical points in students’ learning trajectories (Hoth et al., 2022; Nuraini et al., 2018).
In the context of mathematics education, a mathematical error is generally defined as a deviation from the correct solution or process, often revealing underlying misconceptions or flawed reasoning (Makonye & Fakude, 2016). Educational research has categorized such errors into several types (Suciati & Sartika, 2023). Conceptual errors arise when students demonstrate incorrect or incomplete understanding of fundamental mathematical principles (Irham, 2020; Nurhayati & Retnowati, 2019; Roselizawati Hj Sarwadi & Shahrill, 2014; Sehole et al., 2023). These errors reflect failures in conceptualizing basic ideas, preventing students from correctly applying such concepts in problem-solving contexts. One example includes confusion regarding the notion of derivatives in calculus, which can lead to misinterpretation (Booth et al., 2013). Procedural errors, on the other hand, occur when students understand the underlying concept but fail to execute the necessary steps to solve a mathematical problem (Mosia et al., 2023; Rushton, 2018). These errors are common in algebraic operations, such as incorrect simplification or misuse of mathematical properties (Star & Rittle-Johnson, 2008). Interpretation errors constitute another important category, typically emerging when students misread problem statements or data, leading to misinterpretations of variable relationships or solving the problem using inappropriate logic (Chauraya & Mashingaidze, 2017; Hu et al., 2022; Kontrová et al., 2022; Resnick et al., 1989). These errors are frequent in word problems and require strong reading comprehension skills (Movshovitz-Hadar et al., 1987). Application errors reflect a student’s inability to transfer theoretical mathematical knowledge into practical or novel situations (Lobato, 2008; Mukuka et al., 2023), a critical competency in engineering education (Nakakoji & Wilson, 2020). 
Arithmetic or calculation errors, while seemingly trivial, may significantly impact the outcomes of more complex problem-solving scenarios (Engelbrecht & Harding, 2005). Lastly, systematic errors recur consistently, indicating deeply ingrained misunderstandings that must be addressed with targeted instructional strategies (Papadouris et al., 2024).
This review examines existing research on categorizing, qualifying, and quantifying mathematical errors, particularly in studies relevant to undergraduate engineering students. Although much of the literature focuses on other academic levels and disciplines, the methodologies and findings remain applicable to engineering education, where the improper use of mathematical concepts can hinder the development of essential competencies, particularly in mathematical modeling (Levey & Johnson, 2020), problem-solving (Wedelin et al., 2015), and the application of mathematics (Armstrong & Croft, 1999; Faulkner et al., 2019). This review aims to provide a comprehensive overview of the state of the art, offering a solid foundation for future research and the development of diagnostic tools to improve mathematics teaching and learning in engineering programs.
Despite notable advances in the study of mathematical errors, the literature indicates a lack of precise diagnostic tools that would enable instructors to identify common student errors early, categorize them appropriately, and design targeted pedagogical strategies to correct them (Hiebert & Grouws, 2007; Morgan, 1990). This gap is particularly evident in engineering degree programs, where students often lack foundational mathematical competencies. Thus, there is a pressing need to develop specific diagnostic tests and support strategies aimed at improving their academic performance (Junpeng et al., 2020).
In response to this problem, a critical step in constructing effective diagnostic instruments is selecting the most appropriate theoretical framework for classifying mathematical errors, especially one that aligns with first-year engineering students’ cognitive and curricular characteristics. While multiple theoretical models, such as those proposed by Polya, Newman, Kastolan, Watson, and Hadar, are widely cited, the literature shows that selection often relies on subjective or discipline-based preferences, rather than comparative, pedagogically informed processes. To overcome this limitation, a structured methodology is required to evaluate and prioritize such models.
Among the various multicriteria decision-making methods used in educational research, the Analytic Hierarchy Process (AHP), developed by Saaty, has emerged as one of the most robust and versatile methodologies for addressing complex decisions in educational settings. This approach enables the decomposition of a global objective, such as selecting a theoretical framework for classifying errors, into a hierarchical structure composed of criteria and alternatives evaluated through pairwise comparisons and weighted priorities (Chen et al., 2018). This makes AHP particularly valuable in studies involving qualitative and quantitative variables (Zhang & Wang, 2021), ensuring a comprehensive and integrative assessment approach.
The AHP has been widely applied to curriculum evaluation, institutional competitiveness, educational technology assessment, and strategic academic planning in higher education. For instance, it has been used to assess the effectiveness of curricular systems in technical and vocational institutions by identifying evaluation indicators through expert interviews and determining their respective weights (Chen et al., 2018) and to evaluate the pedagogical utility for training purposes (Hester & Bachman, 2009). Moreover, it has facilitated the prioritization of key criteria in developing institutional strategies, highlighting factors such as sustainability and leadership as central to enhancing higher education competitiveness (Yulmaini et al., 2020).
In faculty and program evaluation, the AHP has provided a systematic and quantifiable framework for assessing academic performance, offering a more objective method to evaluate instructional quality, including in fields such as computer-aided design education (Chang, 2014; Do & Chen, 2013). It has also proven helpful in understanding the factors that influence students’ choices of undergraduate engineering institutions, allowing for prioritizing such factors based on structured pairwise comparisons (Mahendran et al., 2014).
The AHP has been integrated with other multicriteria decision-making (MCDM) techniques in educational technology to assess and select learning management systems (LMS). It has been particularly instrumental in identifying platforms like Moodle as optimal solutions due to their adaptability and capabilities for integrating adaptive learning tools (Smahi et al., 2024).
The primary benefits of the AHP in educational contexts include its ability to offer a structured and transparent framework for tackling multi-dimensional decision problems (Akram & Adeel, 2023; Ramík, 2020), its flexibility in incorporating both qualitative and quantitative expert judgments (Zhang & Wang, 2021), and its methodological transparency based on consistent and traceable comparison logic (Do & Chen, 2013; Mu & Nicola, 2019). Nevertheless, certain limitations must be acknowledged, such as the risk of inconsistency in individual assessments and the increasing cognitive load associated with a larger number of alternatives, which can make the comparison process cumbersome (Hester & Bachman, 2009; Mahendran et al., 2014).
Despite the extensive use of the AHP in curricular, technological, and institutional domains within the education system, a significant gap remains. There is a lack of studies that apply this methodology to compare and select theoretical frameworks for classifying mathematical errors in university students. This gap is especially critical in engineering education, where early and structured identification of mathematical mistakes can substantially impact teaching and learning effectiveness. In response, the present study proposes using the AHP to evaluate five theoretical frameworks and select the most appropriate one for developing a diagnostic instrument to identify and classify errors in first-year engineering students.
The need for this study arises from the recurrence, year after year, of the same mathematical mistakes among first-semester engineering students. This persistent problem in the teaching–learning process has led instructors to intervene proactively by designing and implementing methodological strategies to mitigate it. In response, a comprehensive project has been proposed: “Development, implementation, and validation of a guide to the most frequent mathematical errors made by first-year engineering students at Universidad Politécnica Salesiana (UPS), Cuenca campus”. Although this initial exploratory study frequently refers to first-semester engineering students, they did not participate directly in this phase of the project. Instead, this phase provides the methodological basis for designing a diagnostic tool that will be applied to first-semester students in a subsequent phase.
The main contributions of this study are as follows:
  • To apply a structured and replicable AHP-based methodology for selecting theoretical frameworks in mathematics education.
  • To identify the most appropriate classification model for diagnostic purposes in first-year engineering students.
  • To provide a robust foundation for the future development of context-sensitive diagnostic tools and pedagogical intervention strategies.
This paper is organized as follows: Section 2 describes the adopted methodology and the application of the AHP model. Section 3 presents the analysis results, including visualizations and the final ranking. Section 4 discusses the findings in light of the reviewed literature. Finally, Section 5 presents the conclusions and outlines future research directions.

2. Materials and Methods

This section describes the procedures used to select and evaluate the criteria for categorizing mathematical errors. A detailed explanation of the methodological approach is provided, including the rationale behind choosing the Newman, Kastolan, Watson, Hadar, and Polya frameworks. The comparative evaluation was structured through a decision matrix, supplemented by a visual analysis using a radar chart to facilitate the comparison of the strengths and weaknesses of each framework. This approach offers a solid foundation for selecting the most appropriate criterion within the educational context of the Universidad Politécnica Salesiana, in Cuenca, Ecuador.

2.1. Criteria Selection and Justification

Various criteria and methodologies have been proposed to analyze mathematical errors, with the Newman Procedure being one of the most recognized. Developed in 1977 by Anne Newman, a renowned Australian mathematics educator (Noutsara et al., 2021), this procedure focuses on solving contextualized mathematical problems, also known as narrative or story-based problems (Arifin & Maryono, 2023). The Newman Procedure dissects the problem-solving process into five key stages: (1) reading errors, (2) comprehension errors, (3) transformation errors, (4) process skill errors, and (5) encoding errors (Nuryati et al., 2022). Each error type is associated with specific indicators that facilitate identifying and categorizing the most prevalent mistakes students make. These indicators help uncover students’ understanding and comprehension gaps and subsequent difficulties in correctly applying mathematical concepts. The classification of these errors is presented in Table 1, with accompanying examples for each error type. Including these examples is crucial, as they clarify the different error types, serving as a foundation for developing an evaluation tool based on the primary identified criteria. These examples significantly improve the precision of error identification, which will be indispensable in designing pedagogical strategies to address students’ specific needs. The primary contribution of the Newman criterion lies in its ability to reveal recurring error patterns during the problem-solving process (Arifin & Maryono, 2023). By identifying these patterns, errors can be corrected in real time and prevented by implementing targeted, proactive measures tailored to address students’ specific needs.
The Kastolan Criterion is widely acknowledged for its utility in analyzing mathematical errors, particularly in problems that are not of the narrative or contextual type (Suciati & Sartika, 2023). This criterion classifies errors into three main categories: conceptual, procedural, and technical (Mauliandri & Kartini, 2020). Conceptual errors refer to the incorrect application or misunderstanding of fundamental mathematical principles. These errors often stem from a lack of deep understanding of the underlying concepts, which may lead students to approach problems incorrectly. Procedural errors occur when students follow a correct sequence of steps but fail to correctly execute one or more parts of the procedure. These errors can be especially problematic in areas like algebra and calculus, where precise step-by-step reasoning is required. Technical errors are typically related to mechanical aspects of mathematics, such as incorrect arithmetic or improper notation. While these errors may seem trivial, they can significantly affect the final solution.
Each error type is associated with specific indicators that facilitate identification and analysis. This classification, summarized in Table 2, offers valuable insights into common student mistakes, enabling educators to pinpoint areas where intervention may be necessary.
Sari and Pujiastuti (2022) emphasize the need for a comprehensive approach to reinforcing mathematical concepts and procedures in the classroom. By addressing these errors at their root, instructors can help enhance students’ overall mathematical competence and prevent the recurrence of these errors, often indicative of deeper learning gaps.
On the other hand, Watson, a psychologist known for his stimulus–response approach, developed a criterion aimed at facilitating the identification of specific errors in mathematical reasoning processes (Buhaerah et al., 2022). This approach, known as Watson’s Criterion, is used in the analysis of errors in solving mathematical problems and classifies these errors into eight main categories: (1) inadequate data, (2) inadequate procedures, (3) omitted data, (4) omitted conclusions, (5) conflicts at the response level, (6) undirected manipulation, (7) issues with the hierarchy of skills, and (8) errors outside these categories. While this criterion provides a clear structure for classification, its application may introduce elements of subjectivity (Musa et al., 2021).
Table 3 summarizes the types of errors, their indicators, and representative examples. According to Buhaerah et al. (2022), mathematical errors may vary depending on students’ predominant learning style, whether visual, auditory, or kinesthetic. This finding highlights the importance of adapting educational strategies to meet students’ individual needs to mitigate such errors. Similarly, Rosita and Novtiar (2021) recommend reinforcing fundamental concepts and providing more structured practices and exercises to reduce the incidence of mathematical errors.
Similarly, Hadar’s Criterion for the classification of mathematical errors was developed by Hadar, Zaslavsky, and Inbar in 1987 as an empirical model to categorize errors in mathematics at the secondary education level in Israel (Ganesan & Dindyal, 2014). This criterion identifies six types of errors that students may make when solving mathematical problems: (1) errors in data usage, (2) errors in language usage, (3) errors in the use of logic to conclude, (4) errors in the use of definitions or theorems, (5) errors due to failure to review the solution, and (6) technical errors (Fauzan & Minggi, 2024). The errors related to data usage occur when students fail to exactly copy or record the data given in the problem, as illustrated in Table 4. For example, in a situation involving population growth in two cities, the student may incorrectly record the given data, such as mixing up values or omitting essential information, leading to confusion during problem-solving. The errors in language usage typically involve misinterpretation or improper use of mathematical terminology, which may result in incorrect problem formulation. In terms of logical reasoning, students may fail to draw proper inferences based on the given conditions of the problem, leading to faulty conclusions. Other errors may stem from misapplication of definitions or theorems, such as using an incorrect formula for area or volume.
The error analysis proposed by Hadar is particularly suitable for identifying students’ mistakes when solving contextualized mathematical problems. According to Hadar, errors may be influenced by key mathematical skills such as conceptual understanding, reasoning, connections, and mathematical communication (Suciati & Sartika, 2023).
Problem-solving is a fundamental skill that every student must master (Hadi & Radiyatul, 2014). One of the most recognized and widely used methods for teaching this skill is the one developed by George Polya, a distinguished 20th-century mathematics professor (Sukayasa, 2012). Polya proposed a structured problem-solving model consisting of four steps: (1) understand the problem, (2) devise a plan, (3) carry out the plan, and (4) look back (Winarso & Toheri, 2021).
This approach has been effectively used to analyze and classify mathematical errors. Errors in the first two phases of Polya’s model typically reflect conceptual difficulties and problems in understanding the mathematical fundamentals. In the third phase, errors are related to failures in processes and algorithms, while in the final phase, errors arise from a lack of review and verification of the obtained solutions (Suharti et al., 2021). According to Sulistyaningsih et al. (2021), the types of errors, ordered by frequency, include reading, comprehension, transformation, process skill, and encoding errors.
Table 5 presents a detailed description of these steps and their application in identifying and analyzing mathematical errors, accompanied by a worked example. Polya’s model helps analyze students’ errors when solving contextualized or narrative-type problems (Suciati & Sartika, 2023). Wiyah and Nurjanah (2021) found that students make errors when translating sentences into variables and creating mathematical models, highlighting the relevance of Polya’s approach to improving mathematics teaching and learning.
To determine which of the five criteria best suits the context of the Universidad Politécnica Salesiana, a multicriteria evaluation was conducted using the Analytic Hierarchy Process (AHP) described below.

2.2. Multicriteria Evaluation Procedure Using the Analytic Hierarchy Process (AHP)

This study adopts a quantitative approach with a multicriteria strategy, employing the Analytic Hierarchy Process (AHP) to select the most appropriate framework for classifying mathematical errors made by first-year engineering students. The objective is to develop an assessment instrument grounded in a rigorous theoretical review to identify, categorize, and pedagogically address the detected errors and deficiencies in mathematical competencies that hinder the teaching–learning process.
The methodological procedure was developed following the hierarchical logic of the Analytic Hierarchy Process (AHP) proposed by Saaty (2008), which has been widely applied in research on evaluation and complex decision-making in education (Alonso & Lamata, 2006; Anis & Islam, 2015; Gamboa-Cruzado et al., 2024; Ishizaka & Labib, 2011). It was structured into three hierarchical levels: the overall objective (selecting the optimal framework for error classification according to the study’s aims), the evaluation criteria, and the alternatives, which correspond to five theoretical approaches to the classification of mathematical errors: Newman, Kastolan, Watson, Hadar, and Polya. This process is visually summarized in Figure 1.
Additionally, Figure 2 presents the detailed methodological flow of the AHP used in this study, from the problem definition to the final ranking output.
The definition of the evaluation criteria represents a critical step in the methodology, as these criteria determine the aspects by which the alternatives will be assessed. Unlike traditional approaches that select criteria either directly or based solely on theoretical grounds, this study developed a multicriteria decision matrix that provides an objective basis for choosing the six subcriteria considered. To this end, the five aforementioned frameworks were analyzed by breaking each one down into its corresponding types of errors, which were then evaluated through six key dimensions: precision in error identification, ease of application, focus on conceptual errors, focus on procedural errors, focus on response validation, and viability in improvement strategies.
Based on this analysis, a qualitative matrix was constructed with textual descriptions in each cell, incorporating pedagogical evidence, teaching experience, and bibliographic references. To facilitate its integration into the AHP model, these descriptions were translated into a quantitative scale ranging from 1 to 5, supported by a semantic coding system based on the criteria detailed in Table 6:
This quantitative scale transformed the descriptive matrix into a numerical structure suitable for multicriteria analysis, resulting in a consolidated table of average scores by criterion and framework (see Table 7). A panel of five experts in mathematics education and engineering instruction was consulted to establish these scores. They all had over ten years of university-level teaching experience and had worked extensively with first-year engineering students. Among the participants, two held master’s degrees in mathematics education, while the remaining three were engineers with graduate studies in mathematics-related fields applied to engineering.
Each expert participated individually by completing a structured digital questionnaire in which they compared the five frameworks using six pedagogical criteria previously defined: (i) precision in error identification, (ii) ease of application, (iii) focus on conceptual errors, (iv) focus on procedural errors, (v) response validation, and (vi) viability for informing instructional improvement strategies.
The comparisons were made using Saaty’s fundamental scale of relative importance (ranging from 1 to 9). Each expert assigned scores based on their professional judgment, and the final values used in the matrix were calculated as the arithmetic mean for each cell. This grading process ensured a transparent and reproducible evaluation methodology. Additionally, responses were anonymized and processed in aggregate form to minimize potential group bias.
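The aggregation step described above (an arithmetic mean for each cell across experts) can be sketched as follows. The two 3×3 expert matrices and the matrix size are illustrative placeholders, not the study’s actual judgments:

```python
# Aggregate pairwise-comparison judgments from several experts by taking
# the element-wise arithmetic mean of each cell, as described in the text.
# The expert matrices below are illustrative, not the study's data.

def aggregate_judgments(matrices):
    """Element-wise arithmetic mean across a list of equally sized square matrices."""
    n = len(matrices[0])
    k = len(matrices)
    return [[sum(m[i][j] for m in matrices) / k for j in range(n)]
            for i in range(n)]

expert_a = [[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]
expert_b = [[1, 5, 7], [1/5, 1, 3], [1/7, 1/3, 1]]
consensus = aggregate_judgments([expert_a, expert_b])  # e.g., consensus[0][1] == 4.0
```

One design note: an element-wise arithmetic mean does not in general preserve the reciprocal property a_ji = 1/a_ij; the geometric mean is a common alternative in the AHP literature when strict reciprocity of the aggregated matrix is required.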
The decision to work with five experts aligns with methodological recommendations found in the specialized literature on the AHP in educational contexts, where the quality and diversity of expert judgment are prioritized over sample size, especially in exploratory or methodological studies (Hester & Bachman, 2009; Ishizaka & Labib, 2011). These authors suggest that between three and seven experts are sufficient, provided that (a) the participants are representative of the field, (b) logical consistency in judgments is achieved (consistency ratio, CR < 0.1), and (c) the selection process is justified correctly. In this study, all these criteria were satisfactorily fulfilled, ensuring the validity and rigor of the results obtained.
Once the key frameworks and dimensions were defined, a pairwise comparison matrix $A = [a_{ij}]$ was constructed using Saaty’s 1-to-9 scale. In this matrix, the element $a_{ij}$ represents the relative importance of criterion $i$ with respect to criterion $j$, based on expert judgment from university faculty. The matrix is reciprocal, meaning that $a_{ji} = 1/a_{ij}$ and $a_{ii} = 1$. The resulting pairwise comparison matrix is presented in Table 8.
To obtain the weights of each criterion, the matrix was normalized by dividing each element by the sum of its corresponding column (Equation (1)). The priority vector $w$ was then calculated by averaging each row of the normalized matrix (Equation (2)). The results are presented in Table 9.
$$a'_{ij} = \frac{a_{ij}}{\sum_{i=1}^{n} a_{ij}} \qquad (1)$$
$$w_i = \frac{1}{n} \sum_{j=1}^{n} a'_{ij} \qquad (2)$$
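The normalization and row-averaging steps of Equations (1) and (2) can be sketched in a few lines. The 3×3 matrix below is a toy example for illustration; the study itself used a 6×6 matrix (Table 8):

```python
# Column-normalize a pairwise comparison matrix (Equation (1)), then
# average each row of the normalized matrix to obtain the priority
# vector w (Equation (2)). The 3x3 matrix here is illustrative only.

def priority_vector(A):
    n = len(A)
    col_sums = [sum(A[i][j] for i in range(n)) for j in range(n)]
    normalized = [[A[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
    return [sum(row) / n for row in normalized]

A = [[1,   3,   5],
     [1/3, 1,   2],
     [1/5, 1/2, 1]]
w = priority_vector(A)  # the weights sum to 1
```

Because each normalized column sums to 1, the row averages always form a valid weight vector (components summing to 1), which is what makes them directly usable in the later weighted-score aggregation.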
Subsequently, the AHP consistency check was applied to ensure the validity of the model by calculating the maximum eigenvalue $\lambda_{max}$ using Equation (3):
$$\lambda_{max} = \frac{1}{n} \sum_{i=1}^{n} \frac{(A \cdot w)_i}{w_i} \qquad (3)$$
Then, the consistency index ($CI$) and the consistency ratio ($CR$) were calculated, as defined by Equations (4) and (5), respectively:
$$CI = \frac{\lambda_{max} - n}{n - 1} \qquad (4)$$
$$CR = \frac{CI}{RI} \qquad (5)$$
where $RI$ is the Random Index, which for $n = 6$ has a value of 1.24 (Saaty, 2008). In this case,
$$\lambda_{max} = 6.3943; \quad CI = \frac{6.3943 - 6}{5} = 0.0789; \quad CR = \frac{0.0789}{1.24} = 0.0636$$
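The full consistency check of Equations (3)–(5) can be sketched as below. The 3×3 matrix and the Random Index table entries are illustrative (the study’s own check used $n = 6$ and $RI = 1.24$):

```python
# AHP consistency check (Equations (3)-(5)): approximate lambda_max as
# the mean of (A.w)_i / w_i, then CI = (lambda_max - n) / (n - 1) and
# CR = CI / RI. RI values follow Saaty's random indices; the example
# matrix is illustrative, not the study's 6x6 matrix.

RI = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24}

def priority_vector(A):
    n = len(A)
    cols = [sum(A[i][j] for i in range(n)) for j in range(n)]
    return [sum(A[i][j] / cols[j] for j in range(n)) / n for i in range(n)]

def consistency_ratio(A, w):
    n = len(A)
    Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
    lambda_max = sum(Aw[i] / w[i] for i in range(n)) / n
    ci = (lambda_max - n) / (n - 1)
    return ci / RI[n]

A = [[1, 3, 5], [1/3, 1, 2], [1/5, 1/2, 1]]
w = priority_vector(A)
cr = consistency_ratio(A, w)  # well below the 0.10 acceptance threshold
```

A useful sanity check on such an implementation: for a perfectly consistent matrix (one where $a_{ik} = a_{ij} a_{jk}$ for all $i, j, k$), $\lambda_{max}$ equals $n$ exactly and $CR$ is zero.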
Since the consistency ratio (CR) was less than 0.10, the expert judgments are considered consistent and valid for decision-making purposes (Alonso & Lamata, 2006). Once the priority vector had been validated, the five mathematical error classification frameworks were evaluated using the six criteria outlined in Table 9. The weighted score for each framework was computed using Equation (6), which combines the performance value of each criterion with its corresponding weight:
$$\mathrm{Score}_{\mathrm{method}} = \sum_{i=1}^{6} \mathrm{Value}_i \cdot \mathrm{weight}_i \qquad (6)$$
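The aggregation of Equation (6) can be sketched as a weighted sum per framework. The three largest weights below approximate the percentages reported in the Results (39.7%, 25.7%, 20.3%); the remaining weight split, the Newman response-validation value, and the entire Kastolan row are illustrative assumptions, not the study’s data:

```python
# Weighted aggregation of Equation (6): a framework's score is the sum
# over the six criteria of (performance value x criterion weight).
# Weights roughly follow the reported percentages for the top three
# criteria; the rest of the numbers are illustrative placeholders.

def weighted_score(values, weights):
    """values, weights: dicts keyed by criterion name; returns the weighted sum."""
    return sum(values[c] * weights[c] for c in weights)

weights = {"precision": 0.40, "procedural": 0.26, "viability": 0.20,
           "ease": 0.08, "conceptual": 0.04, "validation": 0.02}

frameworks = {
    "Newman":   {"precision": 4.5, "procedural": 2.6, "viability": 3.8,
                 "ease": 4.4, "conceptual": 1.4, "validation": 2.0},
    "Kastolan": {"precision": 3.5, "procedural": 3.0, "viability": 3.0,
                 "ease": 3.0, "conceptual": 3.5, "validation": 1.5},
}

ranking = sorted(frameworks,
                 key=lambda f: weighted_score(frameworks[f], weights),
                 reverse=True)
```

With these inputs, Newman’s heavy weighting on precision and viability dominates the aggregate despite its lower conceptual-error score, mirroring the trade-off discussed in the Results.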
Table 10 presents the resulting weighted scores and corresponding rankings. Figure 3 provides a graphical representation of the final ranking, offering a comparative view of the five frameworks’ overall performance across the defined evaluation criteria.

3. Results

Applying the Analytic Hierarchy Process (AHP) enabled the structured prioritization of five theoretical frameworks to classify mathematical errors in educational contexts. The objective was to identify the most appropriate framework for designing a subsequent diagnostic instrument aimed at first-year engineering students at a polytechnic university. The theoretical models analyzed included Newman, Kastolan, Watson, Hadar, and Polya, which were evaluated against six criteria: accuracy in error identification, ease of application, focus on conceptual errors, procedural errors, response validation, and viability in improvement strategies.
First, the results obtained from the decision matrix show that the Newman framework achieved the highest average weighted score (3.7), followed by Kastolan (3.2), Watson (2.8), and Hadar and Polya tied at 2.5. These values are summarized in Table 7, which was constructed from a qualitative rating scale transformed into quantitative data and subsequently weighted using the priority vectors derived from the AHP model. The consistency of the model was verified through the consistency ratio (CR < 0.1), thereby ensuring the validity of the expert judgments.
The weights assigned to each criterion within the model highlight that accuracy in error identification carried the most significant weight (39.7%), followed by focus on procedural errors (25.7%) and viability in improvement strategies (20.3%). Together, these three criteria account for over 85% of the total weight, indicating that, within the scope of this study, which aims to diagnose mathematical errors in first-year engineering students, priority is given to clearly identifying the type of error, focusing on algorithmic procedures, and generating concrete pedagogical strategies for instructional intervention.
Figure 4 presents a comparative radar chart that simultaneously displays the performance of each framework across the six defined criteria. The chart clearly shows that the Newman framework outperforms the others in three key areas: accuracy in error identification (4.5), ease of application (4.4), and viability in improvement strategies (3.8). These results support its suitability as a foundation for developing diagnostic assessment tools within the educational context under study. Although Newman scores lower in focus on conceptual errors (1.4) and procedural errors (2.6), its structural design enables it to capture mistakes at various stages of the problem-solving process, particularly in contextualized tasks and narrative problems, which often reveal the most frequent errors among students with limited mathematical maturity.
It is worth noting that the ease of application of a framework becomes particularly relevant in the context of polytechnic universities, which often have large student groups, diverse prior educational backgrounds, and limited time for individualized feedback. In such environments, a diagnostic model that can be efficiently implemented and replicated in face-to-face and virtual settings, such as the Newman framework, becomes a highly valuable pedagogical tool. From an educational standpoint, this feature aligns with the principles of didactic usability and curricular operability, enabling instructors to promptly detect error patterns without requiring excessive time investment or specialized training.
Figure 5a illustrates the frameworks’ performance concerning the accuracy criterion in error identification. The Newman framework stands out with a score of 4.5. This value reflects the framework’s capacity to break down the problem-solving process into observable phases: reading, comprehension, transformation, execution of the procedure, and result encoding, thereby enabling a precise detection of where the error occurs. Such diagnostic precision is especially critical for first-year students, who often struggle to articulate their reasoning or recognize their mistakes.
Figure 5b shows that Newman also ranks highest in ease of application. This attribute is particularly significant considering that engineering faculty typically manage large student cohorts, and not all instructors have formal training in mathematics education. A framework that can be implemented through rubrics, guided questionnaires, or interview protocols, such as Newman’s, facilitates its institutional adoption and integration into continuous or formative assessment systems.
Regarding the focus on conceptual errors, represented in Figure 5c, the Kastolan and Hadar frameworks demonstrate greater sensitivity (scoring 2.3), as they incorporate categories related to incorrect definitions, invalid reasoning, or misinterpretations of mathematical theorems. With a score of 1.4, Newman does not directly address this type of error from a conceptual perspective within its theoretical structure, as its approach is oriented toward analyzing the problem-solving process rather than conceptual content per se. However, in practice, it does capture manifestations of such errors through categories like transformation and encoding, where distortions in the student’s logical–formal reasoning may be detected.
Far from being a disadvantage in the context of this study, this limitation proves to be beneficial: The objective of the instrument is to classify and characterize errors that hinder mathematical performance in the early stages of learning, particularly among first-semester engineering students. From this perspective, focusing on operational, reading, comprehension, or execution errors, as addressed by the Newman framework, is more relevant than providing an exhaustive classification of conceptual errors, which tend to emerge more clearly at advanced levels. Consequently, Newman’s theoretical limitations in addressing conceptual depth are reframed as practical strengths for the specific context of the diagnostic instrument’s application.
In Figure 5d, which corresponds to the criterion of procedural errors, the Newman (2.6) and Watson (2.5) frameworks show the highest scores, followed by Kastolan (2.3). This outcome aligns with the theoretical structures of these frameworks, which give priority to failures in algorithm execution, symbolic manipulation, and the logical sequence of operational steps. In particular, the Newman framework includes procedural skills, which identify common errors in applying formulas, algebraic properties, and incorrect mechanical procedures.
Its capacity to detect essential operational failures makes it especially relevant for first-semester students, who are still consolidating basic problem-solving techniques. In this regard, Newman’s simplified structure focuses on the types of errors that most hinder progress during the initial stages of university-level engineering education, thus making it an effective tool for early diagnosis aimed at implementing concrete pedagogical interventions.
Figure 5e shows the performance across the dimension of response validation, where the Hadar (2.3) and Watson (2.0) frameworks lead due to their emphasis on explicit result checking, logical coherence verification, and retrospective analysis of solutions. Although Newman scores slightly lower (1.8), it indirectly allows for identifying omissions or inconsistencies in student responses through encoding errors, understood as how the student presents or expresses the final result.
While validation is not an explicit component of its theoretical structure, detecting earlier errors, such as those related to comprehension or transformation, serves as an early filter that reduces the need for subsequent verification. Rather than being a limitation, this feature enhances its practical utility in settings where instructors must act on student errors without relying on extensive self-review. In contexts characterized by low metacognitive autonomy, such as introductory-level courses, this structure supports practical instructional guidance without demanding competencies that students have not yet developed.
Finally, Figure 5f presents the results for the viability criterion in improvement strategies, where the Newman framework achieves the highest score (3.8), significantly outperforming the other approaches. This result is particularly relevant, as one of the central goals of this study is to develop a diagnostic instrument that not only detects errors but also provides concrete pedagogical inputs to enhance academic performance.
The hierarchical structure of the Newman model, which divides the problem-solving process into sequential and observable stages, allows instructors to quickly and accurately identify the breakdown point in student performance. This, in turn, enables the design of targeted remedial tasks, differentiated feedback, and personalized intervention plans. Within this framework, the apparent simplicity of Newman becomes a strength, as it allows for pedagogical action without requiring an overly complex diagnostic process, making it especially suitable for contexts in which timely and precise responses are needed to support students whose learning is still in development.
The overall ranking objectively indicates that the Newman approach is the most suitable for adoption in the design of the diagnostic instrument, as it meets the essential requirements of diagnostic clarity, applicability in real educational settings, and potential to support corrective pedagogical actions. Its sequential structure, focused on contextualized tasks, is particularly valuable for students who struggle with mathematical reading, interpreting instructions, and symbolic encoding—areas that, according to various studies, represent major obstacles during the early stages of engineering education.
Kastolan, with a global average score of 3.2, emerges as a solid alternative, especially useful in more advanced courses requiring greater conceptual and technical discrimination. Watson and Hadar demonstrate strengths in specific dimensions, such as validation and logical reasoning. Still, their lower ease of application and overall accuracy position them as complementary rather than primary approaches. Polya, while beneficial for the structured teaching of problem solving, does not exhibit the robustness necessary to serve as the primary criterion for diagnostic classification in this context.
In conclusion, the results obtained through the AHP model provide a solid foundation for selecting the Newman approach as the primary criterion for designing the diagnostic instrument. Its simplicity, applicability, and diagnostic effectiveness make it an ideal tool for identifying common errors among first-year engineering students and guiding more effective teaching and learning processes responsive to the practical needs of polytechnic university classrooms.

4. Discussion

The results obtained through the Analytic Hierarchy Process (AHP) not only enabled the prioritization of five theoretical approaches for the classification of mathematical errors but also provided an objective and context-sensitive justification for selecting the Newman approach as the foundation for designing a diagnostic instrument targeted at first-semester engineering students. This choice is not solely based on numerical outcomes but rather on structural coherence with the pedagogical goals of the study and the practical demands of the polytechnic university context.
Several studies have noted that first-year engineering students often face difficulties interpreting mathematical problems, selecting appropriate solution strategies, and executing basic procedures (Hoth et al., 2022; Suciati & Sartika, 2023). These difficulties are not always the result of deep conceptual gaps, but frequently stem from deficiencies in transversal skills such as mathematical reading, contextual comprehension, and symbolic encoding (Berraondo et al., 2004; Booth et al., 2013). In this regard, the Newman approach, by breaking down the problem-solving process into observable phases: reading, understanding, transformation, procedure, and encoding, allows for a structured mapping of the breakdown points in student performance.
From a didactic perspective, this sequential structure aligns with the principles of cognitive constructivism and formative assessment, as it enables instructors to identify the precise moment when a student’s reasoning begins to deviate, thus allowing for more accurate and targeted feedback (Jaworski & Matthews, 2011; Rushton, 2018). Moreover, the ease of application associated with the Newman approach, strongly supported by the results obtained (4.4 out of 5), facilitates its institutional adoption in large-scale courses without requiring complex curricular adaptations or specialized teacher training in error analysis.
While approaches such as Kastolan and Watson offer greater depth in identifying conceptual and procedural errors, their implementation demands more abstraction from instructors and students. This may prove unfeasible in early stages, where the priority is to stabilize foundational learning (Faulkner et al., 2019; Sazhin, 1998). In contrast, the Newman approach allows for identifying common error patterns and establishing direct correlations between error types and the corresponding pedagogical intervention, fostering its integration into continuous assessment models and academic leveling processes.
In addition, Newman’s high score (3.8) on the viability in improvement strategies criterion reinforces its potential as a practical input for pedagogical planning. Its structure supports a diagnostic–formative approach, in which each error is recorded and used to inform instructional adjustments. This perspective aligns with the recommendations of scholars such as Hiebert and Grouws (Hiebert & Grouws, 2007), who argue that mathematics teaching improves significantly when educators have access to detailed and specific information about the errors their students make.
Another key factor is the high weight assigned to the accuracy criterion in error identification (39.7%) within the AHP model. This underscores the need for approaches that do not merely classify errors in general terms but also enable the precise detection of their origin and nature. In this respect, the Newman approach offers a significant advantage, as it facilitates the recognition of the error and its precise location within the resolution process, an essential aspect for designing targeted, non-generic remedial interventions.
Furthermore, the lower scores assigned to Newman in conceptual error detection and response validation should not be interpreted as a weakness in the context of this study, but rather as a theoretical delimitation consistent with its intended purpose. In the early stages of academic training, as is the case for first-semester students, difficulties are more likely to manifest in interpreting instructions, strategy selection, and execution of basic operations, rather than in abstract theoretical misconceptions that require advanced conceptual mastery (Morgan, 1990; Sulistyaningsih et al., 2021). Therefore, focusing the instrument on the dimensions in which errors are most frequent and most amenable to correction is a methodological decision and a pedagogical strategy supported by evidence.
In summary, the findings of this study confirm that although the Newman approach has limitations in detecting higher-order errors, it fully meets the functional requirements of the application context: high diagnostic precision, ease of implementation, and direct applicability in the design of pedagogical improvement strategies. Its adoption as the primary classification criterion in developing the proposed diagnostic instrument responds not to theoretical supremacy, but to practical relevance, grounded in technical and pedagogical criteria that are meaningful for engineering education at the university level.
These results offer concrete pedagogical implications for teachers facing the challenge of improving mathematics teaching in the early stages of engineering. The AHP methodology applied not only allowed for a justified selection of the theoretical framework for classifying errors, but also provides a replicable procedure for substantiating teaching decisions that are traditionally made intuitively or arbitrarily. In this sense, choosing the appropriate criteria for error detection is not a minor issue: It involves defining which types of difficulties are prioritized, how they are interpreted, and what pedagogical approach to take. In teaching practice, this translates into the possibility of designing more effective diagnostic tools aligned with first-semester students' cognitive and curricular profiles. Likewise, the proposed methodology can be adapted to evaluate other relevant teaching decisions, such as the selection of teaching strategies, the creation of rubrics aligned with specific types of errors, the design of support materials and evaluation criteria, the development of feedback and correction strategies based on common error patterns, and guidance on curricular adjustments in fundamental mathematics courses. Thus, the study contributes to the development of diagnostic tools and promotes a culture of evidence-based decision-making within the engineering mathematics classroom.

5. Conclusions

This study applied the Analytic Hierarchy Process (AHP) to evaluate five theoretical frameworks used to classify mathematical errors. The objective was to identify the most suitable model for developing diagnostic instruments for first-year engineering students.
The AHP results ranked the Newman model as the most appropriate framework based on expert judgment and six pedagogical criteria. This finding provides a theoretically grounded basis for constructing tools to identify and address common mathematical errors at the undergraduate level.
The structured AHP-based approach used in this study offers a replicable methodology for supporting model selection in educational research. Future work will focus on designing and validating diagnostic instruments aligned with the selected framework and implementing classroom interventions that improve error recognition and remediation in early engineering education.
Two main lines of future research have been identified:
The first is the design, validation, and application of an assessment instrument based on the Newman approach. This stage will involve developing items aligned with the model's five error categories (reading, comprehension, transformation, procedure, and encoding), validating them with experts, and conducting a pilot study with first-semester students. This line of research will be developed in a separate study to offer an operational and transferable tool for other higher education institutions.
The second is implementing pedagogical strategies focused on acquiring and improving mathematical competencies based on the types of errors identified. Once the diagnostic instrument is consolidated, the next step will be to design, implement, and evaluate specific educational interventions according to the identified error types. This line of work will give rise to future research on the effectiveness of improvement strategies and their impact on knowledge retention, academic performance, and students’ mathematical self-awareness.
Both lines of inquiry will complete the diagnosis, intervention, and evaluation cycle and generate empirical evidence on using theoretical error classification frameworks as tools for enhancing mathematics teaching in higher education. In this way, the present study becomes a starting point for a broader research agenda committed to developing contextualized, effective, and sustainable educational solutions for the mathematical training of future engineers.

Author Contributions

Conceptualization, M.G.T., N.G.A. and M.M.B.; methodology, M.G.T., N.G.A. and M.M.B.; validation, M.G.T., N.G.A. and M.M.B.; writing—original draft preparation, M.G.T., N.G.A. and M.M.B.; writing—review and editing, M.G.T., N.G.A. and M.M.B.; project administration, N.G.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author(s).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Agoiz, Á. C. (2019). Errores frecuentes en el aprendizaje de las matemáticas en bachillerato. Cuadernos Del Marqués de San Adrián: Revista de Humanidades, 11, 129–141. [Google Scholar]
  2. Akram, M., & Adeel, A. (2023). Extended PROMETHEE method under multi-polar fuzzy sets. Studies in Fuzziness and Soft Computing, 430, 343–373. [Google Scholar] [CrossRef]
  3. Aksoy, N. C., & Yazlik, D. O. (2017). Student errors in fractions and possible causes of these errors. Journal of Education and Training Studies, 5(11), 219. [Google Scholar] [CrossRef]
  4. Alonso, J. A., & Lamata, M. T. (2006). Consistency in the analytic hierarchy process: A new approach. International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 14(4), 445–459. [Google Scholar] [CrossRef]
  5. Anis, A., & Islam, R. (2015). The application of analytic hierarchy process in higher-learning institutions: A literature review. Journal for International Business and Entrepreneurship Development, 8(2), 166. [Google Scholar] [CrossRef]
  6. Arias Aristizábal, C. M. (2023). Errores que cometen los estudiantes de grado once de la Institución Educativa Nazario Restrepo cuando resuelven problemas con números racionales [Master’s thesis, Universidad Tecnológica de Pereira]. [Google Scholar]
  7. Arifin, S. A. N., & Maryono, I. (2023). Characteristics of student errors in solving geometric proof problems based on Newman’s theory. Union: Jurnal Ilmiah Pendidikan Matematika, 11(3), 528–537. [Google Scholar] [CrossRef]
  8. Armstrong, P. K., & Croft, A. C. (1999). Identifying the learning needs in mathematics of entrants to undergraduate engineering programmes in an English university. European Journal of Engineering Education, 24(1), 59–71. [Google Scholar] [CrossRef]
  9. Arroyo Valenciano, J. A. (2021). Las variables como elemento sustancial en el método científico. Revista Educación, 46(1), 621–631. [Google Scholar] [CrossRef]
  10. Barbosa, A., & Vale, I. (2021). A visual approach for solving problems with fractions. Education Sciences, 11(11), 727. [Google Scholar] [CrossRef]
  11. Berraondo, R., Pekolj, M., Pérez, N., & Cognini, R. (2004). Leo pero no comprendo. Una experiencia con ingresantes universitarios. Acta Latinoamericana de Matemática Educativa, 15(1), 131–136. [Google Scholar]
  12. Bolaños-González, H., & Lupiáñez-Gómez, J. L. (2021). Errores en la comprensión del significado de las letras en tareas algebraicas en estudiantado universitario. Uniciencia, 35(1), 1–18. [Google Scholar] [CrossRef]
  13. Booth, J. L., Barbieri, C., Eyer, F., & Pare-Blagoev, E. J. (2014). Persistent and pernicious errors in algebraic problem solving. The Journal of Problem Solving, 7(1), 3. [Google Scholar] [CrossRef]
  14. Booth, J. L., Lange, K. E., Koedinger, K. R., & Newton, K. J. (2013). Using example problems to improve student learning in algebra: Differentiating between correct and incorrect examples. Learning and Instruction, 25, 24–34. [Google Scholar] [CrossRef]
  15. Brandi, A. C., & Garcia, R. E. (2017, October 18–21). Motivating engineering students to math classes: Practical experience teaching ordinary differential equations. 2017 IEEE Frontiers in Education Conference (FIE) (pp. 1–7), Indianapolis, IN, USA. [Google Scholar] [CrossRef]
  16. Buhaerah, B., Jusoff, K., Nasir, M., & Dangnga, M. S. (2022). Student’s mistakes in solving problem based on Watson’s criteria and learning style. Jurnal Pendidikan Matematika (JUPITEK), 5(2), 95–104. [Google Scholar] [CrossRef]
  17. Caronía, S., Zoppi, A. M., Polasek, M. d., Rivero, M., & Operuk, R. (2008). Un análisis desde la didáctica de la matemática sobre algunos errores en el álgebra. Available online: https://funes.uniandes.edu.co/wp-content/uploads/tainacan-items/32454/1164520/Caronia2009Un.pdf (accessed on 30 May 2025).
  18. Chang, Q. (2014). Computer design specialty evaluation based on analytic hierarchy process theory. Energy Education Science and Technology Part A: Energy Science and Research, 32(6), 7865–7872. [Google Scholar]
  19. Chauraya, M., & Mashingaidze, S. (2017). In-service teachers’ perceptions and interpretations of students’ errors in mathematics. International Journal for Mathematics Teaching and Learning, 18(3), 273–279. [Google Scholar] [CrossRef]
  20. Checa, A. N., & Martínez-Artero, R. N. (2010). Resolución de problemas de matemáticas en las pruebas de acceso a la universidad. Errores significativos. Educatio Siglo XXI, 28(1), 317–341. [Google Scholar]
  21. Chen, J., Zhao, F., & Xing, H. (2018). Curriculum system of specialty group under the credit system of higher vocational colleges based on AHP structure analysis. IPPTA: Quarterly Journal of Indian Pulp and Paper Technical Association, 30(6), 841–849. [Google Scholar]
  22. Do, Q. H., & Chen, J.-F. (2013). Evaluating faculty staff: An application of group MCDM based on the fuzzy AHP approach. International Journal of Information and Management Sciences, 24(2), 131–150. [Google Scholar]
  23. Engelbrecht, J., & Harding, A. (2005). Teaching undergraduate mathematics on the internet. Educational Studies in Mathematics, 58(2), 253–276. [Google Scholar] [CrossRef]
  24. Faulkner, B., Earl, K., & Herman, G. (2019). Mathematical maturity for engineering students. International Journal of Research in Undergraduate Mathematics Education, 5(1), 97–128. [Google Scholar] [CrossRef]
  25. Fauzan, A., & Minggi, I. (2024). Analysis of students’ errors in solving sequence and series questions problems based on Hadar’s criteria in viewed from students’ mathematical abilities. Proceedings of International Conference on Educational Studies in Mathematics, 1(1), 259–264. [Google Scholar]
  26. Gamboa-Cruzado, J. G., Morante-Palomino, E. M., Rivero, C. A., Bendezú, M. L., & Fernández, D. M. M. F. (2024). Research on the classification and application of physical education teaching mode by neutrosophic analytic hierarchy process. International Journal of Neutrosophic Science, 23(3), 51–62. [Google Scholar] [CrossRef]
  27. Ganesan, R., & Dindyal, J. (2014). An investigation of students’ errors in logarithms. Mathematics Education Research Group of Australasia. [Google Scholar]
  28. Hadi, S., & Radiyatul, R. (2014). Metode pemecahan masalah menurut polya untuk mengembangkan kemampuan siswa dalam pemecahan masalah matematis di sekolah menengah pertama. EDU-MAT: Jurnal Pendidikan Matematika, 2(1), 53–61. [Google Scholar] [CrossRef]
  29. Harris, D., Black, L., Hernandez-Martinez, P., Pepin, B., & Williams, J. (2015). Mathematics and its value for engineering students: What are the implications for teaching? International Journal of Mathematical Education in Science and Technology, 46(3), 321–336. [Google Scholar] [CrossRef]
  30. Hester, P. T., & Bachman, J. T. (2009, October 14–17). Analytical hierarchy process as a tool for engineering managers. 30th Annual National Conference of the American Society for Engineering Management 2009, ASEM 2009 (pp. 597–602), Springfield, MO, USA. Available online: https://www.scopus.com/inward/record.uri?eid=2-s2.0-84879868104&partnerID=40&md5=31e0c0b124a9d6ba00c035beff1f571d (accessed on 25 June 2025).
  31. Hiebert, J., & Grouws, D. A. (2007). The effects of classroom mathematics teaching on students’ learning. Information Age Publishing. [Google Scholar]
  32. Hoth, J., Larrain, M., & Kaiser, G. (2022). Identifying and dealing with student errors in the mathematics classroom: Cognitive and motivational requirements. Frontiers in Psychology, 13, 1057730. [Google Scholar] [CrossRef]
  33. Hu, Q., Son, J.-W., & Hodge, L. (2022). Algebra teachers’ interpretation and responses to student errors in solving quadratic equations. International Journal of Science and Mathematics Education, 20(3), 637–657. [Google Scholar] [CrossRef]
  34. Huynh, T., & Sayre, E. C. (2019). Blending of conceptual physics and mathematical signs. arXiv, arXiv:1909.11618. [Google Scholar]
  35. Irham. (2020, October 19–20). Conceptual errors of students in solving mathematics problems on the topic of function. 3rd International Conference on Education, Science, And Technology (ICEST 2019), Makassar, Indonesia. [Google Scholar] [CrossRef]
  36. Ishizaka, A., & Labib, A. (2011). Review of the main developments in the analytic hierarchy process. Expert Systems with Applications, 38(11), 14336–14345. [Google Scholar] [CrossRef]
  37. Jaworski, B., & Matthews, J. (2011). Developing teaching of mathematics to first year engineering students. Teaching Mathematics and Its Applications, 30(4), 178–185. [Google Scholar] [CrossRef]
  38. Junpeng, P., Marwiang, M., Chinjunthuk, S., Suwannatrai, P., Chanayota, K., Pongboriboon, K., Tang, K. N., & Wilson, M. (2020). Validation of a digital tool for diagnosing mathematical proficiency. International Journal of Evaluation and Research in Education (IJERE), 9(3), 665. [Google Scholar] [CrossRef]
  39. Kontrová, L., Biba, V., & Šusteková, D. (2022). Teaching and exploring mathematics through the analysis of student’s errors in solving mathematical problems. European Journal of Contemporary Education, 11(1), 89–98. [Google Scholar] [CrossRef]
  40. Lee, K., Ng, S. F., Bull, R., Pe, M. L., & Ho, R. H. M. (2011). Are patterns important? An investigation of the relationships between proficiencies in patterns, computation, executive functioning, and algebraic word problems. Journal of Educational Psychology, 103(2), 269–281. [Google Scholar] [CrossRef]
  41. Levey, F. C., & Johnson, M. R. (2020, October 16–17). Fundamental mathematical skill development in engineering education. 2020 Annual Conference Northeast Section (ASEE-NE) (pp. 1–6), Bridgeport, CT, USA. [Google Scholar] [CrossRef]
  42. Lobato, J. (2008). When students don’t apply the knowledge you think they have, rethink your assumptions about transfer. In Making the connection (pp. 289–304). The Mathematical Association of America. [Google Scholar] [CrossRef]
  43. López-Díaz, M. T., & Peña, M. (2021). Mathematics training in engineering degrees: An intervention from teaching staff to students. Mathematics, 9(13), 1475. [Google Scholar] [CrossRef]
  44. Mahendran, P., Moorthy, M. B. K., & Saravanan, S. (2014). A fuzzy AHP approach for selection of measuring instrument for engineering college selection. Applied Mathematical Sciences, 41–44, 2149–2161. [Google Scholar] [CrossRef]
  45. Makonye, J. P., & Fakude, J. (2016). A study of errors and misconceptions in the learning of addition and subtraction of directed numbers in grade 8. Sage Open, 6(4), 2158244016671375. [Google Scholar] [CrossRef]
  46. Mallart Solaz, A. (2014). La resolución de problemas en la prueba de matemáticas de acceso a la universidad: Procesos y errores. Educatio Siglo XXI, 32(1), 233–254. [Google Scholar] [CrossRef]
  47. Marpa, E. P. (2019). Common errors in algebraic expressions: A quantitative-qualitative analysis. International Journal on Social and Education Sciences, 1(2), 63–72. [Google Scholar] [CrossRef]
  48. Mauliandri, R., & Kartini, K. (2020). Analisis kesalahan siswa menurut kastolan dalam menyelesaikan soal operasi bentuk aljabar pada siswa smp. AXIOM: Jurnal Pendidikan Dan Matematika, 9(2), 107. [Google Scholar] [CrossRef]
  49. Morgan, A. T. (1990). A study of the difficulties experienced with mathematics by engineering students in higher education. International Journal of Mathematical Education in Science and Technology, 21(6), 975–988. [Google Scholar] [CrossRef]
  50. Mosia, M., Matabane, M. E., & Moloi, T. J. (2023). Errors and misconceptions in euclidean geometry problem solving questions: The case of grade 12 learners. Research in Social Sciences and Technology, 8(3), 89–104. [Google Scholar] [CrossRef]
  51. Movshovitz-Hadar, N., Zaslavsky, O., & Inbar, S. (1987). An empirical classification model for errors in high school mathematics. Journal for Research in Mathematics Education, 18(1), 3. [Google Scholar] [CrossRef]
  52. Mu, E., & Nicola, C. B. (2019). Managing university rank and tenure decisions using a multi-criteria decision-making approach. International Journal of Business and Systems Research, 13(3), 297–320. [Google Scholar] [CrossRef]
  53. Mukuka, A., Balimuttajjo, S., & Mutarutinya, V. (2023). Teacher efforts towards the development of students’ mathematical reasoning skills. Heliyon, 9(4), e14789. [Google Scholar] [CrossRef]
  54. Mulungye, M. M. (2016). Sources of students’ errors and misconceptions in algebra and influence of classroom practice remediation in secondary schools: Machakos sub-county, Kenya [Unpublished doctoral dissertation, Kenyatta University]. [Google Scholar]
  55. Musa, H., Rusli, R., Ilhamsyah, & Yuliana, A. (2021). Analysis of student errors in solving mathematics problems based on Watson’s criteria on the subject of two variable linear equation system (SPLDV). EduLine: Journal of Education and Learning Innovation, 1(2), 125–131. [Google Scholar] [CrossRef]
  56. Nakakoji, Y., & Wilson, R. (2020). Interdisciplinary learning in mathematics and science: Transfer of learning for 21st century problem solving at university. Journal of Intelligence, 8(3), 32. [Google Scholar] [CrossRef]
  57. Ningsih, E. F., & Retnowati, E. (2020, December 7). Prior knowledge in mathematics learning. SEMANTIK Conference of Mathematics Education (SEMANTIK 2019), Yogyakarta, Indonesia. [Google Scholar] [CrossRef]
  58. Noutsara, S., Neunjhem, T., & Chemrutsame, W. (2021). Mistakes in mathematics problems solving based on Newman's error analysis on set materials. Journal La Edusci, 2(1), 20–27. [Google Scholar] [CrossRef]
  59. Nuraini, N. L. S., Cholifah, P. S., & Laksono, W. C. (2018, September 21–22). Mathematics errors in elementary school: A meta-synthesis study. 1st International Conference on Early Childhood and Primary Education (ECPE 2018), Malang, Indonesia. [Google Scholar] [CrossRef]
  60. Nurhayati, R., & Retnowati, E. (2019). An analysis of errors in solving limits of algebraic function. Journal of Physics: Conference Series, 1320(1), 012034. [Google Scholar] [CrossRef]
  61. Nuryati, N., Purwaningsih, S., & Habinuddin, E. (2022). Analysis of errors in solving mathematical literacy analysis problems using Newman. International Journal of Trends in Mathematics Education Research, 5(3), 299–305. [Google Scholar] [CrossRef]
  62. Papadouris, J. P., Komis, V., & Lavidas, K. (2024). Errors and misconceptions of secondary school students in absolute values: A systematic literature review. Mathematics Education Research Journal. [Google Scholar] [CrossRef]
  63. Pazos, A. L., & Salinas, M. J. (2012). Dificultades algebraicas en el aula de 1° bac. en ciencias y ciencias sociales. Investigación En Educación Matemática XVI, 417–426. [Google Scholar]
  64. Pianda, D. (2018). Categorización de errores típicos en ejercicios matemáticos cometidos por estudiantes de primer semestre de la Universidad de Nariño. Universidad de Nariño. [Google Scholar]
  65. Rafi, I., & Retnawati, H. (2018). What are the common errors made by students in solving logarithm problems? Journal of Physics: Conference Series, 1097, 012157. [Google Scholar] [CrossRef]
  66. Ramík, J. (2020). Applications in decision-making: Analytic hierarchy process—AHP revisited. Lecture Notes in Economics and Mathematical Systems, 690, 189–211. [Google Scholar] [CrossRef]
  67. Resnick, L. B., Nesher, P., Leonard, F., Magone, M., Omanson, S., & Peled, I. (1989). Conceptual bases of arithmetic errors: The case of decimal fractions. Journal for Research in Mathematics Education, 20(1), 8. [Google Scholar] [CrossRef]
  68. Rodríguez-Domingo, S., Molina, M., Cañadas, M. C., & Castro, E. (2015). Errores en la traducción de enunciados algebraicos entre los sistemas de representación simbólico y verbal. PNA. Revista de Investigación En Didáctica de La Matemática, 9(4), 273–293. [Google Scholar] [CrossRef]
  69. Roselizawati, H., Sarwadi, H., & Shahrill, M. (2014). Understanding students’ mathematical errors and misconceptions: The case of year 11 repeating students. Mathematics Education Trends and Research, 2014, 1–10. [Google Scholar] [CrossRef]
  70. Rosita, A., & Novtiar, C. (2021). Analisis kesalahan siswa smk dalam menyelesaikan soal dimensi tiga berdasarkan kategori kesalahan menurut Watson. JPMI (Jurnal Pembelajaran Matematika Inovatif), 4(1), 193–204. [Google Scholar]
  71. Rushton, S. J. (2018). Teaching and learning mathematics through error analysis. Fields Mathematics Education Journal, 3(1), 4. [Google Scholar] [CrossRef]
  72. Saaty, T. L. (2008). Decision making with the analytic hierarchy process. International Journal of Services Sciences, 1(1), 83. [Google Scholar] [CrossRef]
  73. Sari, S. I., & Pujiastuti, H. (2022). Analisis kesalahan siswa dalam mengerjakan soal bilangan berpangkat dan bentuk akar berdasarkan kriteria kastolan. Proximal: Jurnal Penelitian Matematika Dan Pendidikan Matematika, 5(2), 21–29. [Google Scholar] [CrossRef]
  74. Saucedo, G. (2007). Categorización de errores algebraicos en alumnos ingresantes a la universidad. Itinerarios Educativos, 1(2), 22–43. [Google Scholar] [CrossRef]
  75. Sazhin, S. S. (1998). Teaching mathematics to engineering students. International Journal of Engineering Education, 14(2), 145–152. [Google Scholar]
  76. Sehole, L., Sekao, D., & Mokotjo, L. (2023). Mathematics conceptual errors in the learning of a linear function—A case of a technical and vocational education and training college in South Africa. The Independent Journal of Teaching and Learning, 18(1), 81–97. [Google Scholar] [CrossRef]
  77. Sidney, P. G., & Alibali, M. W. (2015). Making connections in math: Activating a prior knowledge analogue matters for learning. Journal of Cognition and Development, 16(1), 160–185. [Google Scholar] [CrossRef]
  78. Siegler, R. S., & Lortie-Forgues, H. (2015). Conceptual knowledge of fraction arithmetic. Journal of Educational Psychology, 107(3), 909–918. [Google Scholar] [CrossRef]
  79. Smahi, K., Labouidya, O., & El Khadiri, K. (2024). Towards effective adaptive revision: Comparative analysis of online assessment platforms through the combined AHP-MCDM approach. International Journal of Interactive Mobile Technologies, 18(17), 75–87. [Google Scholar] [CrossRef]
  80. Star, J. R., & Rittle-Johnson, B. (2008). Flexibility in problem solving: The case of equation solving. Learning and Instruction, 18(6), 565–579. [Google Scholar] [CrossRef]
  81. Strum, R. D., & Kirk, D. E. (1979). Engineering mathematics: Who should teach it and how? IEEE Transactions on Education, 22(2), 85–88. [Google Scholar] [CrossRef]
  82. Suciati, I., & Sartika, N. (2023). Students’ errors analysis in solving mathematics problems viewed from various perspectives. 12 Waiheru, 9(2), 149–158. [Google Scholar] [CrossRef]
  83. Suharti, S., Nur, F., & Alim, B. (2021). Polya steps for analyzing errors in mathematical problem solving. AL-ISHLAH: Jurnal Pendidikan, 13(1), 741–748. [Google Scholar] [CrossRef]
  84. Sukayasa, S. (2012). Pengembangan model pembelajaran berbasis fase-fase polya untuk meningkatkan kompetensi penalaran siswa SMP dalam memecahkan masalah matematika. AKSIOMA: Jurnal Pendidikan Matematika, 1(1), 46–54. [Google Scholar]
  85. Sulistyaningsih, D., Purnomo, E. A., & Purnomo, P. (2021). Polya’s problem solving strategy in trigonometry: An analysis of students’ difficulties in problem solving. Mathematics and Statistics, 9(2), 127–134. [Google Scholar] [CrossRef]
  86. Wedelin, D., Adawi, T., Jahan, T., & Andersson, S. (2015). Investigating and developing engineering students’ mathematical modelling and problem-solving skills. European Journal of Engineering Education, 40(5), 557–572. [Google Scholar] [CrossRef]
  87. Winarso, W., & Toheri, T. (2021). An analysis of students’ error in learning mathematical problem solving; the perspective of David Kolb’s theory. Turkish Journal of Computer and Mathematics Education (TURCOMAT), 12(1), 139–150. [Google Scholar] [CrossRef]
  88. Wiyah, R. B., & Nurjanah. (2021). Error analysis in solving the linear equation system of three variables using Polya’s problem-solving steps. Journal of Physics: Conference Series, 1882(1), 012084. [Google Scholar] [CrossRef]
  89. Yulmaini, Sanusi, A., Ariza Eka Yusendra, M., & Kholijah, S. (2020). Implementation of analytic hierarchy process for determining priority criteria in higher education competitiveness development strategy based on RAISE++ model. Journal of Physics: Conference Series, 1529(2), 022064. [Google Scholar] [CrossRef]
  90. Zhang, D., & Wang, X. (2021, January 29–31). AHP-based evaluation model of multi-modal classroom teaching effect in higher vocational English. Proceedings—2021 2nd International Conference on Education, Knowledge and Information Management, ICEKIM 2021 (pp. 403–406), Xiamen, China. [Google Scholar] [CrossRef]
Figure 1. Hierarchical structure of the AHP applied to the classification of mathematical errors.
Figure 2. General methodological flow of the AHP.
Figure 3. Ranking of error classification frameworks according to the AHP model.
Figure 4. Radar chart comparing the overall performance of the five mathematical error classification frameworks based on their weighted scores across the six AHP criteria.
Figure 5. Individual criterion-based analysis through radar charts: (a) precision in error identification, (b) ease of application, (c) focus on conceptual errors, (d) focus on procedural errors, (e) response validation, (f) viability in improvement strategies.
Table 1. Classification of mathematical errors according to the Newman criteria.
Error Type | Indicator of Error | Example

Reading error | Fails to identify key information in the problem | "A train leaves station A at 3:00 p.m. and arrives at station B at 5:00 p.m. Another train leaves station B at 4:00 p.m. and arrives at station A at 6:00 p.m. At what time do the two trains cross paths?" When reading the problem, the student does not correctly identify the departure and arrival times of the trains: he confuses the schedules, assumes that both trains leave at 3:00 p.m. and arrive at 6:00 p.m., and on that incorrect basis miscalculates that the trains cross at 4:30 p.m. (Arias Aristizábal, 2023).

Reading error | Incorrectly determines the known data | The student confuses the independent term of the general equation of the line 2x + 3y + 1 = 0 with the ordinate at the origin (Saucedo, 2007).

Reading error | Uses self-created symbols without explaining their meaning | A problem asks for the total construction cost of a building, given that the cost per square meter is 800,000 pesos and increases by 5% each year. The student writes C = 800,000(1 + 0.05)^t but does not explain the meaning of the variables C or t. Using these symbols without prior definition makes what they represent unclear, which can confuse the interpretation of the solution (Arroyo Valenciano, 2021).

Comprehension error | Responds incorrectly due to a lack of understanding or incomplete identification of problem elements | "A water tank has a total capacity of 500 L and currently holds 350 L. How long will it take for the tank to be full if water is added at 15 L per minute?" The student misinterprets what is to be solved: he assumes he must calculate how long it takes to fill the tank from zero, ignoring that it already contains 350 L (Winarso & Toheri, 2021).

Comprehension error | Writes a brief but unclear response, lacking sufficient argumentation when expressing what needs to be solved | "A person has a rope 3 m long. He needs to cut the rope into 5 equal parts. How long will each part be?" The student answers, "Each part will be smaller than 3 m": a brief but unclear response with no calculation or justification; he does not mention the necessary operation (division) or detail the procedure (Aksoy & Yazlik, 2017).

Transformation error | Inaccuracy in converting information into mathematical formulas | For the statement "A number plus its consecutive is equal to another minus 2", the student poses x(x + 1) = y − 2, erring in the conversion from natural language to mathematical language (Rodríguez-Domingo et al., 2015).

Process skill error | Error when using arithmetic operations | The student gives 11/12 as the answer to the sum 7/6 + 1/4 + 3/2, adding numerators and denominators together (Booth et al., 2014).

Process skill error | Incomplete procedures/steps | In the equation z(z³ + 1) = 0, the student ignores the solution z = 0, posing only z³ + 1 = 0 (Agoiz, 2019).

Encoding error | Writing answers inappropriately | In the equation x² − 5x + 6 = 0, the student obtains the solutions x = 3 and x = 2; however, he reports the roots as x = −3 and x = −2 (Agoiz, 2019).

Encoding error | Answers are not appropriate to the context | In an analytical geometry problem, the student is asked to calculate the distance between the points A(2, 3) and B(5, 7) in the Cartesian plane. When solving, the student obtains d = −5, which is inadequate because the distance between two points cannot be negative in any mathematical context (Checa & Martínez-Artero, 2010).

Encoding error | Inaccurate or inconsistent conclusions | The student is asked to reduce the like terms of the expression 3a − 3b − 3a + 3b. The student concludes that the result is 0a + 0b, an answer that makes no sense (Marpa, 2019).
Table 2. Classification of mathematical errors according to the Kastolan criteria.
Error Type | Indicator of Error | Example

Conceptual errors | Incorrect use of formulas or altered rules | The student rewrites (4 − x)² as 16 + x², incorrectly modifying the special-product rule and disregarding the middle term −8x (Agoiz, 2019).

Conceptual errors | Selection of inappropriate formulas | The student uses the area formula to find the perimeter of a rectangle, failing to apply the correct perimeter formula.

Procedure errors | Irregularity in problem-solving steps | In the equation x² = 16 + y², the student isolates the variable x by taking the square root of each term on the right-hand side individually, incorrectly concluding that x = 4 + y (Agoiz, 2019).

Procedure errors | Inability to simplify | The given problem states that the sum of the first and second numbers exceeds the third by two units; the second minus twice the first is ten units less than the third; and the sum of all three numbers is 24. The task is to determine the three numbers. The student sets up a 3 × 3 system of equations and solves it using Gaussian elimination, although a simple substitution method would suffice (Checa & Martínez-Artero, 2010).

Procedure errors | Interruption of the resolution process | The equation 2x² − 6x = 3 is presented for resolution. The student correctly completes the steps and obtains the solution x = (6 ± √60)/4 but fails to simplify the final expression further (Pianda, 2018).

Technical errors | Calculation errors | The student writes √5 · ∛5 = ⁵√5, incorrectly adding the indices of the roots, which leads to an invalid operation (Agoiz, 2019).

Technical errors | Errors in notation or writing | In the equation x − 1 = 2y − 6, the student incorrectly rewrites the term 2y and reformulates the equation as x + 2y + 5 = 0, introducing an erroneous sign change (Agoiz, 2019).

Technical errors | Inadequate substitution of values | The task is to solve the equation 3 + ∛(8x + 1) = 5. The student assumes x = 1 and substitutes it incorrectly into the equation, leading to an invalid result (Caronía et al., 2008).
Table 3. Classification of mathematical errors according to the Watson criteria.
Error Type | Indicator of Error | Example

Inappropriate data | Data does not match | The student applies (a + b)² = a² + b² to expand (x + 3)², a common error in basic algebra (Booth et al., 2013).

Inappropriate data | Misplaced data on the variable | Asked to substitute x = 3 into the equation y = 2x + 1, the student mistakenly inputs x = 4, leading to the incorrect calculation y = 2(4) + 1 = 9 (Siegler & Lortie-Forgues, 2015).

Inappropriate data | Assigns known data to incorrect variables | The student treats the constant term as if it were the coefficient of x in 2x + 5 = 0, incorrectly solving it as if the equation were 5x = 0 (Booth et al., 2013).

Inappropriate procedure | Using the wrong formula | The student attempts to solve x² − 4 = 0 using the completing-the-square method but incorrectly adds a term: x² + 4x + 4 = 4 + 4x + 4 ⇒ (x + 2)² = 12 (Ningsih & Retnowati, 2020).

Inappropriate procedure | Does not write down the steps when solving problems | The student introduces an unnecessary multiplication step in (x + 2)(x − 2), writing (x + 2)(x − 2) · 1 = x² − 4 without providing justification (Barbosa & Vale, 2021).

Inappropriate procedure | Skipping essential steps | The student omits calculating the discriminant before applying the quadratic formula to ax² + bx + c = 0, which leads to incorrect conclusions about the nature of the solutions (Booth et al., 2013).

Missing data | Omission of given data | The student omits the constant term c in the quadratic equation ax² + bx + c = 0, solving it as if c = 0, which leads to incorrect conclusions about the solutions (Siegler & Lortie-Forgues, 2015).

Omitted conclusion | Fails to use the obtained data to draw conclusions | The student finds the solutions x = 2 and x = −2 for x² = 4 but fails to determine which solution satisfies the context of the problem (e.g., x > 0) (Ningsih & Retnowati, 2020).

Response level conflict | Lack of readiness during the process | The student obtains a value of x = −5 for a length but fails to justify whether this solution should be accepted or discarded, neglecting to consider the contextual constraints of the problem (Barbosa & Vale, 2021).

Indirect manipulation | Application of arbitrary reasoning | The student switches between methods while solving x² + 5x + 6 = 0 (Ningsih & Retnowati, 2020). He begins factoring, x² + 5x + 6 = 0 ⇒ (x + 2)(x + 3) = 0, but interrupts the process, possibly due to uncertainty about the solution, and switches to completing the square. He rewrites the equation as x² + 5x + (missing term) = −6 and, to complete the square, adds (5/2)² = 25/4 to both sides, obtaining x² + 5x + 25/4 = −6 + 25/4. However, he simplifies the right-hand side incorrectly, leading to an incorrect expression. This behavior exemplifies arbitrary reasoning and a lack of procedural consistency: the student alternates between methods without properly executing either, leading to repeated errors and unresolved confusion.

Skill hierarchy problem | Confusion in applying the hierarchy of mathematical operations | The student incorrectly solves 3 + 5 × 2 as (3 + 5) × 2 = 16 instead of applying the correct order of operations, performing the multiplication first to obtain 3 + 10 = 13 (Sidney & Alibali, 2015).

Above other | Inappropriate reformulation of the question | The student attempts to solve 3x − 4 = 11 but incorrectly reformulates the equation as 3x = 11 − 4 and, while solving, writes x = 11/3 − 4, improperly combining operations and reaching an incorrect result (Booth et al., 2014).

Above other | Omission of the response | Given the problem "Solve the equation 4x + 7 = 19", the student correctly follows the steps 4x = 19 − 7; 4x = 12; x = 12/4. However, he fails to write the final result x = 3 as the solution, leaving the work incomplete without an explicit answer (Lee et al., 2011).

Above other | Disordered or inconsistent solution to the question | The student simplifies (6x² + 12x)/(6x) incorrectly, obtaining x + 2x and presenting 3x as the final answer, which does not match the correct result of x + 2 (Mulungye, 2016).
Table 4. Classification of mathematical errors according to the Hadar criteria.
Error Type | Indicator of Error | Example

Misused data | The student does not exactly copy data from the problem | Problem: "In 2020, the cow population in City A was 1600, and in City B it was 500. Each month, the population in City A increases by 25, and in City B by 10. At some point, the population in City A triples that of City B. Determine the population of cows in City A at that moment." The student records I = A: 1600, B: 500; A = 1600, 1625, 1650; B = 500, 510, 520, 530, 540, 550, instead of correctly identifying the initial population in City A (1600), the initial population in City B (500), the monthly increase in City A (25), and the monthly increase in City B (10). The student failed to accurately record the required data and did not create an appropriate mathematical model (Fauzan & Minggi, 2024).

Misused data | Students add data that is not appropriate | The student rewrites x² − 5x + 6 = 0 as x² − 5x + 6 + y = 0, adding an unnecessary term y without justification (Ningsih & Retnowati, 2020).

Misused data | Ignores the data provided | Students fail to recognize that log 2 and −log(1/2) are equivalent (Ganesan & Dindyal, 2014).

Misused data | States a condition that is not needed | To verify whether the point P(2, 3) lies on the line y = 2x + 1, the student unnecessarily calculates the slope of the line through the origin and P and compares it with the given line's slope (Mallart Solaz, 2014).

Misused data | Interpreting information that does not follow the actual text | Students assume that log_3 4 is a negative number (Ganesan & Dindyal, 2014).

Misused data | Replacing the specified conditions with other inappropriate information | To find the equation of a line parallel to y = 2x + 3 passing through P(1, 4), the student unnecessarily assumes the line must also be perpendicular to another line instead of using the given slope (Mallart Solaz, 2014).

Misused data | Using the value of a variable for another variable | For the equation (log_(5x−2)(x² − 8))² − log_(5x−2)(x² − 8) = 0, the student sets y = log_(5x−2)(x² − 8), transforming it into y² − y = 0 ⇒ y(y − 1) = 0. Solving for y gives y = 0 and y = 1, but the student then substitutes y = 5x − 2, leading to x = 2/5 and x = 3/5. This demonstrates the misuse of one variable as another (Rafi & Retnawati, 2018).

Misinterpreted language | Students' mistakes in translating mathematical symbols into everyday language | Asked to evaluate the expression 3a + 2b under specific conditions by assigning numerical values to a and b, some students misinterpret a and b as abbreviations for words (e.g., a for "animals" and b for "bags") instead of treating them as mathematical variables (Bolaños-González & Lupiáñez-Gómez, 2021).

Misinterpreted language | Writing symbols of a concept with other symbols that have different meanings | In inequalities, students confuse the meaning of the greater-than (>) and less-than (<) signs (Huynh & Sayre, 2019).

Logically invalid inference | Mistakes are made when drawing incorrect conclusions from a problem | While expanding the binomial (xy² + z²)³, the student writes the result as a³b⁵ + b⁵a³y⁵ + b⁵y⁵. This error is classified as a reasoning mistake, as the student introduces variables (a and b) that are not part of the original problem (Pazos & Salinas, 2012).

Distorted theorem or definition | Errors occur when students incorrectly apply formulas, theorems, or definitions that do not align with the problem | In the problem log_25 5, the student attempts to apply the rule log_a a = 1 but incorrectly computes log_(5²) 5 = 2. This error reflects a misapplication of logarithmic properties (Ganesan & Dindyal, 2014).

Unverified solution | Errors arise when students fail to verify each step against the final result, often because they rush through the problem without reviewing their work | For the equation log_x(2x + 15) = 2, the student accepts x = 5 and x = −3 as solutions without verifying their validity (Ganesan & Dindyal, 2014).

Technical error | Calculation errors | The student solves 900 + 10n = 3(1600 + 25n) and obtains n = 20; however, the correct operation yields n = −60, not n = 20 (Fauzan & Minggi, 2024).

Technical error | Errors in quoting data | For the cow-population problem above, the student records the data as I = A: 1600, B: 500; A = 1600, 1625, 1650; B = 500, 510, 520, 530, 540, 550, instead of correctly applying the formula U_A(n) = 1600 + (n − 1) · 25 (Fauzan & Minggi, 2024).

Technical error | Errors in manipulating symbols | The student expands (n − 1) · 25 as 25n + 25 instead of 25n − 25 (Fauzan & Minggi, 2024).
Table 5. Classification of mathematical errors according to the Polya criteria.
Afgan wants to visit a total of 24 beaches with his three friends (Boy, Mondy, and Reva). Afgan can only take two friends per day. During the visits:
  • Boy and Mondy visited 6 beaches together.
  • Boy and Reva visited 4 beaches together.
  • Mondy and Reva visited 8 beaches together.
Determine how many beaches each friend visited using a linear system of three variables.

Error Type | Indicator of Error | Example

Understanding the problem | Students specify and identify what information is known, ask about the problem, and restate it in their own words | While solving the problem, the student incorrectly writes the equations as a + b = 6, a + c = 4, and b + c = 8, when they should actually be a + b − c = 6, a + c − b = 4, and b + c − a = 8.

Devising a plan | Students create a mathematical model, select an appropriate strategy, make estimates, and discard information unrelated to the problem | The student attempts to represent the time the three people spend visiting beaches using variables but incorrectly writes 24/x for each case, failing to relate it to the problem's conditions.

Carrying out the plan | Students implement the chosen plans and strategies and arrange to solve the problem | For the number of beaches Boy visited in a day, the student writes 24 beaches = x days, then 24/x beaches = x/x days = 24/x days, incorrectly assuming that x can be canceled on both sides without justification, which leads to an erroneous result.

Looking back | Students review the solutions and results obtained from the problem-solving steps to avoid errors in their answers | After solving the system of linear equations, the student fails to verify whether the values obtained for x, y, and z satisfy the initial equations.
Table 6. Qualitative decision matrix.
Quantitative Value | Interpretation | Key Textual Indicators

5 | High clarity, evidence, or applicability | "easy to identify", "very easy", "directly", "evident", "quickly recognized"
4 | High, but with minor potential for confusion | "may be confused with…", "quick but not direct", "requires minimal analysis"
3 | Moderate clarity or applicability; requires interpretation | "possibility of confusion", "depends on the student", "for the same reason", "somewhat subjective"
2 | Low clarity or applicability, though still identifiable | "difficult to identify", "requires detailed analysis", "not so clear"
1 | Absent, not validated, or difficult to detect or apply | "not verified", "not applicable", "not focused", "very difficult", "unrelated to the concept"
Table 7. Decision matrix for selecting the most appropriate mathematical error classification criterion.
Criterion | Precision in Error Identification | Ease of Application | Focus on Conceptual Errors | Focus on Procedural Errors | Focus on Response Validation | Viability in Improvement Strategies | Average

Newman | 4.5 | 4.4 | 1.4 | 1.8 | 1.8 | 3.8 | 3.1
Kastolan | 4.0 | 3.0 | 2.3 | 2.3 | 1.0 | 3.3 | 2.7
Watson | 3.1 | 2.8 | 1.0 | 2.5 | 2.0 | 3.0 | 2.4
Hadar | 3.0 | 2.8 | 2.3 | 1.7 | 2.3 | 2.7 | 2.5
Polya | 2.6 | 3.3 | 2.0 | 2.0 | 2.0 | 2.8 | 2.4
Table 8. Pairwise comparison matrix ( A ).
Criterion | Precision in Error Identification | Ease of Application | Focus on Conceptual Errors | Focus on Procedural Errors | Focus on Response Validation | Viability in Improvement Strategies

Precision in Error Identification | 1 | 7 | 8 | 2 | 9 | 3
Ease of Application | 1/7 | 1 | 2 | 1/5 | 5 | 1/5
Focus on Conceptual Errors | 1/8 | 1/2 | 1 | 1/7 | 3 | 1/7
Focus on Procedural Errors | 1/2 | 5 | 7 | 1 | 7 | 2
Focus on Response Validation | 1/9 | 1/5 | 1/3 | 1/7 | 1 | 1/8
Viability in Improvement Strategies | 1/3 | 5 | 7 | 1/2 | 8 | 1
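The priority vector in Table 9 follows from Table 8 by the standard AHP column-normalization procedure (Saaty, 2008): each entry is divided by its column sum, and each row of the normalized matrix is averaged. A minimal Python sketch of this step, with the Table 8 judgments transcribed row by row:

```python
# Pairwise comparison matrix from Table 8; criteria order:
# precision, ease, conceptual, procedural, validation, viability.
A = [
    [1,   7,   8,   2,   9,   3],
    [1/7, 1,   2,   1/5, 5,   1/5],
    [1/8, 1/2, 1,   1/7, 3,   1/7],
    [1/2, 5,   7,   1,   7,   2],
    [1/9, 1/5, 1/3, 1/7, 1,   1/8],
    [1/3, 5,   7,   1/2, 8,   1],
]
n = len(A)

# Normalize each column by its sum, then average each row to get the weights.
col_sums = [sum(row[j] for row in A) for j in range(n)]
N = [[A[i][j] / col_sums[j] for j in range(n)] for i in range(n)]
w = [sum(N[i]) / n for i in range(n)]

print([round(x, 4) for x in w])
# → [0.3967, 0.0716, 0.0453, 0.257, 0.0266, 0.2028]
```

The resulting weights reproduce the priority vector reported in Table 9, with precision in error identification (≈0.40) and focus on procedural errors (≈0.26) dominating the weighting.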
Table 9. Normalized matrix.
Criterion | Precision in Error Identification | Ease of Application | Focus on Conceptual Errors | Focus on Procedural Errors | Focus on Response Validation | Viability in Improvement Strategies | Priority Vector (w)

Precision in Error Identification | 0.4520 | 0.3743 | 0.3158 | 0.5018 | 0.2727 | 0.4638 | 0.3967
Ease of Application | 0.0646 | 0.0535 | 0.0789 | 0.0502 | 0.1515 | 0.0309 | 0.0716
Focus on Conceptual Errors | 0.0565 | 0.0267 | 0.0395 | 0.0358 | 0.0909 | 0.0221 | 0.0453
Focus on Procedural Errors | 0.2260 | 0.2674 | 0.2763 | 0.2509 | 0.2121 | 0.3092 | 0.2570
Focus on Response Validation | 0.0502 | 0.0107 | 0.0132 | 0.0358 | 0.0303 | 0.0193 | 0.0266
Viability in Improvement Strategies | 0.1507 | 0.2674 | 0.2763 | 0.1254 | 0.2424 | 0.1546 | 0.2028
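A standard companion to the normalization step is Saaty's consistency check, which verifies that the pairwise judgments in Table 8 are not self-contradictory. The sketch below assumes Saaty's usual random index RI = 1.24 for a 6 × 6 matrix; with these judgments the consistency ratio comes out below the 0.10 threshold, so the derived weights can be taken as reliable:

```python
# Consistency check (Saaty) for the Table 8 pairwise comparison matrix.
A = [
    [1,   7,   8,   2,   9,   3],
    [1/7, 1,   2,   1/5, 5,   1/5],
    [1/8, 1/2, 1,   1/7, 3,   1/7],
    [1/2, 5,   7,   1,   7,   2],
    [1/9, 1/5, 1/3, 1/7, 1,   1/8],
    [1/3, 5,   7,   1/2, 8,   1],
]
n = len(A)

# Priority vector via column normalization and row averaging (as in Table 9).
col = [sum(row[j] for row in A) for j in range(n)]
w = [sum(A[i][j] / col[j] for j in range(n)) / n for i in range(n)]

# Estimate the principal eigenvalue lambda_max from A·w, then CI and CR.
Aw = [sum(A[i][j] * w[j] for j in range(n)) for i in range(n)]
lam_max = sum(Aw[i] / w[i] for i in range(n)) / n
CI = (lam_max - n) / (n - 1)   # consistency index
RI = 1.24                      # Saaty's random index for n = 6
CR = CI / RI                   # CR < 0.10 indicates acceptable consistency
print(round(lam_max, 2), round(CR, 2))
```

Here lambda_max ≈ 6.39 and CR ≈ 0.06 < 0.10, so the judgment matrix is acceptably consistent.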
Table 10. Decision matrix for selecting the most appropriate mathematical error classification framework.
Criteria | Precision in Error Identification | Ease of Application | Focus on Conceptual Errors | Focus on Procedural Errors | Focus on Response Validation | Viability in Improvement Strategies | Score | Ranking

Newman | 4.5 | 4.4 | 1.4 | 2.6 | 1.8 | 3.8 | 3.7 | 1
Kastolan | 4.0 | 3.0 | 2.3 | 2.3 | 1.0 | 3.3 | 3.2 | 2
Watson | 3.1 | 2.8 | 1.0 | 2.5 | 2.0 | 3.0 | 2.8 | 3
Hadar | 3.0 | 2.8 | 2.3 | 1.7 | 2.3 | 2.7 | 2.5 | 4
Polya | 2.6 | 3.3 | 2.0 | 2.0 | 2.0 | 2.8 | 2.5 | 5
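The scores and ranking in Table 10 are obtained by weighting each framework's criterion ratings with the Table 9 priority vector. A short sketch of the synthesis step (ratings transcribed from Table 10, weights from Table 9):

```python
# Weighted synthesis: score(framework) = sum over criteria of rating * weight.
w = [0.3967, 0.0716, 0.0453, 0.2570, 0.0266, 0.2028]  # priority vector (Table 9)
ratings = {  # precision, ease, conceptual, procedural, validation, viability
    "Newman":   [4.5, 4.4, 1.4, 2.6, 1.8, 3.8],
    "Kastolan": [4.0, 3.0, 2.3, 2.3, 1.0, 3.3],
    "Watson":   [3.1, 2.8, 1.0, 2.5, 2.0, 3.0],
    "Hadar":    [3.0, 2.8, 2.3, 1.7, 2.3, 2.7],
    "Polya":    [2.6, 3.3, 2.0, 2.0, 2.0, 2.8],
}
scores = {name: sum(r * c for r, c in zip(row, w)) for name, row in ratings.items()}
for name in sorted(scores, key=scores.get, reverse=True):
    print(name, round(scores[name], 1))
# → Newman 3.7, Kastolan 3.2, Watson 2.8, Hadar 2.5, Polya 2.5 (as in Table 10)
```

Note that Hadar and Polya both round to 2.5; the unrounded scores (≈2.54 vs. ≈2.49) resolve the tie in Hadar's favor, matching the ranking in Table 10.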
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Garcia Tobar, M.; Gonzalez Alvarez, N.; Martinez Bustamante, M. A Structured AHP-Based Approach for Effective Error Diagnosis in Mathematics: Selecting Classification Models in Engineering Education. Educ. Sci. 2025, 15, 827. https://doi.org/10.3390/educsci15070827
