Article

Integrating CAD and Orthographic Projection in Descriptive Geometry Education: A Comparative Analysis with Monge’s System

by Simón Gutiérrez de Ravé, Eduardo Gutiérrez de Ravé and Francisco J. Jiménez-Hornero *
Department of Graphic Engineering and Geomatics, University of Córdoba, 14071 Córdoba, Spain
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(11), 1492; https://doi.org/10.3390/educsci15111492
Submission received: 15 September 2025 / Revised: 16 October 2025 / Accepted: 1 November 2025 / Published: 5 November 2025
(This article belongs to the Section Higher Education)

Abstract

Descriptive geometry plays a fundamental role in developing spatial reasoning and geometric problem-solving skills in engineering education. This study investigates the comparative effectiveness of two instructional methodologies—Monge’s traditional projection system and the CADOP method, which integrates computer-aided design tools with orthographic projection principles. A quasi-experimental design was implemented with 90 undergraduate engineering students randomly assigned to two groups. Both groups followed the same instructional sequence and were evaluated using baseline surveys, rubric-based performance assessments, and post-training reflections. Quantitative analysis included mean comparisons, t-tests, and effect sizes, while inter-rater reliability confirmed scoring consistency. The results showed that CADOP students significantly outperformed those taught with Monge’s method across all criteria—conceptual understanding, graphical accuracy, procedural consistency, and spatial reasoning—with very large effect sizes. Qualitative data indicated that CADOP enhanced clarity, efficiency, and confidence, while Monge promoted conceptual rigor but higher cognitive effort. The findings confirm that CADOP effectively reduces procedural complexity and cognitive load, supporting deeper spatial comprehension. Integrating CADOP with selected manual practices offers a balanced pedagogical approach for modernizing descriptive geometry instruction in engineering education.

1. Introduction

Descriptive geometry (DG) plays a fundamental role in engineering education, particularly in developing spatial reasoning and geometric problem-solving abilities. Traditionally rooted in the graphical representation of three-dimensional objects on two-dimensional planes, DG facilitates the interpretation, visualization, and communication of complex spatial information using orthographic projections and geometric constructions (Bertoline et al., 2009). The classical instructional approach has historically relied on manual drafting, particularly Monge’s method, which employs horizontal and vertical reference planes to solve spatial problems through a sequence of 2D projections (Monge, 1799). While rigorous, this system can be procedurally demanding and error-prone for novice learners.
Over the past decades, engineering graphics instruction has been transformed by the integration of digital tools, most notably Computer-Aided Design (CAD) systems (Bokan et al., 2009; Migliari, 2012; Tomiczková & Lávicka, 2013; Kuna et al., 2018; Ladrón-de-Guevara-Muñoz et al., 2023; Balajti, 2024; Markova et al., 2024). CAD software enhances accuracy and efficiency in geometric constructions while also enabling dynamic visualization of spatial configurations (Sorby, 2009; Hunde & Woldeyohannes, 2022). Within this context, the CADOP method—an instructional approach that integrates CAD with orthographic projection—has emerged as a promising alternative. By allowing users to manipulate coordinate systems and generate projections through analytical operations, CADOP reduces procedural complexity while reinforcing spatial visualization skills (Gutiérrez de Ravé & Jiménez-Hornero, 2024).
Monge’s traditional method, despite its procedural challenges, continues to be valued for its pedagogical contribution. Manual drafting fosters discipline in step-by-step construction, develops precision through repeated practice, and reinforces the logic of projection systems without reliance on automated software (Bairaktarova, 2017; Moreno & Bazán, 2017). However, the same features that provide conceptual rigor can also become obstacles: successive auxiliary constructions often increase cognitive load and introduce opportunities for error, discouraging novice learners.
In contrast, CAD-based instruction offers significant advantages. Numerous studies demonstrate that CAD tools enhance spatial ability development, particularly among engineering students (Torner et al., 2014; Kösa & Karakuş, 2018; Katona & Nagy Kem, 2019; Dilling & Vogler, 2021). CAD environments support automated transformations, precision-driven operations, and intuitive manipulation of geometric elements, all of which align with the pedagogical objectives of DG. Research further shows that digital tools can mitigate shortcomings of manual methods, including inconsistencies in drawing accuracy, difficulties with auxiliary views, and limited student engagement (Taleyarkhan et al., 2018; Deng et al., 2022). These advantages are particularly relevant in disciplines such as mechanical design (Agbanglanon et al., 2024) and civil engineering (Abrahamzon et al., 2024), where spatial skills are central to professional practice. At the same time, the present study is also informed by prior research that emphasizes both the importance of spatial skills and the risks of abandoning manual approaches. Sorby (2009) demonstrated that spatial reasoning ability is a critical predictor of student success in engineering, underscoring the need for instructional methods that deliberately cultivate these skills. In parallel, Bairaktarova (2017) warned that the removal of descriptive geometry and manual drawing from curricula may weaken students’ conceptual understanding of engineering graphics.
Several scholars have proposed hybrid approaches to balance the strengths of manual and digital methods (Bokan et al., 2009; Suzuki, 2014; Moreno & Bazán, 2017). These works suggest that CAD can reinforce, rather than replace, the conceptual insights cultivated by Monge’s method. The CADOP method builds on this perspective by integrating CAD with orthographic projection in a systematic way. Through the use of coordinate transformations, CADOP preserves the theoretical foundations of projection while reducing extraneous complexity and enhancing accuracy (Gutiérrez de Ravé & Jiménez-Hornero, 2024).
Despite this growing body of research, a clear gap remains. Most prior studies highlight the benefits and limitations of CAD and Monge separately or advocate integration in general terms, but few provide quasi-experimental evidence directly comparing the two methods. This study contributes to that inquiry by empirically comparing the learning outcomes of undergraduate engineering students trained with Monge’s system and those instructed using the CADOP method. Through a quasi-experimental design involving pre- and post-instruction surveys, structured rubrics, and performance assessments, we evaluate the impact of each method on spatial reasoning, conceptual understanding, and technical accuracy. The findings provide new insight into the evolving landscape of engineering graphics education and suggest that hybrid methodologies combining CAD efficiency with manual rigor may offer a balanced pathway for teaching DG.
To interpret the comparative performance of Monge and CADOP, the study is grounded in established learning theories that explain how students acquire and process spatial knowledge. The following section presents the theoretical framework that informs our hypotheses.

2. Theoretical Framework

The comparative effectiveness of Monge’s system and the CADOP method can be understood through the lens of established learning theories in engineering education. Three perspectives are particularly relevant: Cognitive Load Theory (CLT), Dual Coding Theory (DCT), and research on spatial cognition and mental rotation.
Cognitive Load Theory (Sweller, 1988; Paas & van Merriënboer, 1994) emphasizes the limited capacity of working memory and distinguishes between intrinsic, extraneous, and germane cognitive load. Traditional Monge-based instruction often imposes a high extraneous load because students must perform multiple auxiliary constructions and track successive projections manually. This procedural complexity can consume cognitive resources that might otherwise be devoted to conceptual understanding. CADOP, by contrast, reduces extraneous load through the use of coordinate transformations and automated precision. By minimizing mechanical errors and redundant constructions, CADOP frees working memory for deeper engagement with geometric principles, thereby supporting higher achievement in conceptual understanding and method application.
Dual Coding Theory (Paivio, 1990) posits that learning is enhanced when information is processed simultaneously in verbal–symbolic and visual–pictorial systems. Monge’s method primarily engages students in symbolic manipulation of projection rules, with visual representations unfolding slowly through manual construction. CADOP, in contrast, provides immediate visual confirmation of geometric relationships while students apply symbolic rules within the CAD environment. This dual-channel processing enhances comprehension and supports more robust encoding of spatial relationships.
Finally, research on spatial cognition and mental rotation (Shepard & Metzler, 1971; Ileri et al., 2023) highlights the importance of practice with spatial transformations for developing engineering visualization skills. While Shepard and Metzler (1971) first demonstrated that mental rotation is a core cognitive process, more recent research (Ileri et al., 2023) shows that these skills are malleable and can be significantly improved through targeted training. This supports the idea that instructional methods such as CADOP can enhance spatial reasoning more effectively than manual approaches. Methods like Monge’s system can encourage deliberate reflection on projection rules, but they may hinder performance when tasks require rapid mental rotation and visualization. CADOP directly supports these skills by allowing for dynamic manipulation of objects and coordinate systems, which provides real-time visual feedback. This immediate feedback loop reduces uncertainty, enhances confidence, and fosters the development of mental rotation skills.
By reducing extraneous cognitive load, supporting dual coding through integrated visual–symbolic processing, and enhancing spatial cognition through interactive feedback, CADOP creates conditions more conducive to efficient and accurate learning in descriptive geometry. Guided by this theoretical framework, we designed a quasi-experimental study to compare the effectiveness of Monge’s system and the CADOP method. The methodology, participants, and assessment procedures are described below.

3. Materials and Methods

This study employed a quasi-experimental design to investigate the impact of two instructional methods—Monge’s descriptive geometry system and the CADOP method—on undergraduate engineering students’ development of spatial reasoning and geometric problem-solving skills. The sample consisted of 90 undergraduate students enrolled in an introductory engineering graphics course. Participants were randomly divided into two groups of equal size: the first group received instruction using traditional hand-drawing techniques based on Monge’s projection system, while the second group was taught using CADOP, a hybrid methodology that integrates computer-aided design (CAD) tools with orthographic projection principles.
The instructional period followed a carefully structured sequence of descriptive geometry training exercises, aligned with key topics such as true-length construction, point and edge views, intersections, and measurements involving lines and planes. Each group completed the same sequence of exercises, adapted to the particular method of instruction. Monge group students completed tasks using manual drafting on paper, with emphasis on step-by-step geometric construction and visualization. Students in the CADOP group used CAD software (Autodesk AutoCAD 2025) equipped with spatial coordinate system manipulation functions, including User Coordinate System (UCS) reorientation, coordinate transformations, and projection commands. These features enabled them to complete equivalent tasks digitally, with the benefit of real-time visual feedback.
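The coordinate-system workflow described above can be illustrated with a short computational sketch. The following Python fragment is not the authors' implementation or an AutoCAD script; it simply mimics, under assumed tetrahedron coordinates and with hypothetical helper names, what a three-point UCS definition and the associated change of basis accomplish:

```python
import numpy as np

def ucs_from_points(origin, p_x, p_xy):
    """Orthonormal basis analogous to a three-point UCS: x-axis toward p_x,
    z-axis normal to the plane through origin, p_x and p_xy."""
    x = p_x - origin
    x = x / np.linalg.norm(x)
    z = np.cross(x, p_xy - origin)
    z = z / np.linalg.norm(z)
    y = np.cross(z, x)
    return np.vstack([x, y, z])                # rows are the new axes

def to_ucs(points, origin, basis):
    """Express world-coordinate points in the new coordinate system."""
    return (points - origin) @ basis.T

# Hypothetical tetrahedron vertices (world coordinates)
A, B, C, D = map(np.array, [(0.0, 0.0, 0.0), (60.0, 0.0, 0.0),
                            (30.0, 50.0, 0.0), (30.0, 20.0, 40.0)])

basis = ucs_from_points(A, C, D)               # basis aligned with plane ACD
print(to_ucs(np.vstack([A, C, D]), A, basis))  # third (z) column is ~0
```

Once such a basis is defined, measurements that are awkward in the fixed projection planes reduce to reading coordinates, which is the core idea exploited in the CADOP exercises reported below.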
Instruction for both groups was provided by the same teaching team, consisting of the three authors of this study. A unified syllabus, identical exercise sequence, and common learning objectives were used to minimize instructor-related bias and ensure comparability between the Monge and CADOP cohorts. To evaluate learning outcomes, a multi-phase assessment strategy was implemented.
Initially, a baseline diagnostic survey was administered to assess students’ prior knowledge of descriptive geometry concepts and spatial reasoning. Following instruction, students completed a final performance test comprising ten applied geometry exercises. These exercises required students to construct orthographic projections, determine true lengths and sizes, measure distances and angles in space, and interpret geometric relationships through auxiliary views or coordinate transformations. Each exercise was evaluated using a rubric with five criteria: Conceptual Understanding, Graphical/Model Accuracy, Use of Method (Monge/CADOP), Spatial Reasoning, and Completion and Consistency. Each criterion was rated on a five-point scale, allowing for consistent measurement across exercises and student groups.
The three authors independently graded the final student submissions using the rubric. To check the consistency of scoring, inter-rater reliability was calculated using Cohen’s kappa (κ). Cohen’s kappa corrects for chance agreement and provides a robust estimate of rater concordance for categorical data. Values of κ range from −1.00 to 1.00, where 0 indicates agreement equivalent to chance, negative values indicate less agreement than expected by chance, and positive values indicate greater-than-chance agreement. Following the benchmarks proposed by Landis and Koch (1977), κ values between 0.41 and 0.60 are considered moderate, 0.61–0.80 substantial, and 0.81–1.00 almost perfect agreement.
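As a point of reference, the agreement statistic can be computed as in the minimal Python sketch below; the rating vectors are hypothetical and serve only to show the calculation, not the study's actual ratings:

```python
import numpy as np

def cohens_kappa(r1, r2, categories=(1, 2, 3, 4, 5)):
    """Cohen's kappa for two raters scoring the same items on a categorical scale."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    p_o = np.mean(r1 == r2)                                              # observed agreement
    p_e = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in categories)   # chance agreement
    return (p_o - p_e) / (1 - p_e)

# Hypothetical rubric scores from two raters on ten submissions
rater_a = [5, 4, 4, 3, 5, 2, 4, 3, 5, 4]
rater_b = [5, 4, 3, 3, 5, 2, 4, 3, 5, 5]
print(round(cohens_kappa(rater_a, rater_b), 2))                          # ~0.73 for these data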
Quantitative analysis of the results included the calculation of mean scores and standard deviations for each rubric criterion across all exercises, allowing for comparisons between the Monge and CADOP groups.
This rubric-based approach enabled not only the assessment of final task performance but also the identification of patterns in conceptual mastery, procedural accuracy, and consistency. Complementing the quantitative data, post-training surveys were used to collect students’ self-assessments of their confidence and perceived improvement, as well as open-ended reflections on the instructional process.
Overall, the methodology was planned to ensure alignment between instructional content, performance evaluation, and comparative analysis of the two pedagogical approaches. This structure facilitated a rigorous, multi-dimensional investigation into how different instructional strategies affect students’ development in DG, with particular attention to the pedagogical benefits of integrating CAD tools in technical education.

4. Results

The results are organized into the baseline survey, targeted training exercises, and rubric-based assessments. They highlight initial knowledge gaps, subsequent improvements, and clear performance differences between the Monge and CADOP groups.

4.1. Descriptive Geometry Baseline Knowledge Survey

The DG baseline knowledge survey was designed as an anonymized diagnostic tool to assess the initial conceptual understanding and spatial reasoning abilities of undergraduate engineering students prior to targeted instructional intervention. Administered to a cohort of 90 students, the survey included a structured set of statements that address key theoretical and practical aspects such as line projections, planar geometry, spatial visualization, and construction techniques, all of which are crucial foundational skills for engineering students. The anonymized format ensured honest responses, allowing for an unbiased evaluation of students’ pre-training competencies. By identifying specific areas of conceptual weakness—such as difficulties with true-length identification, plane intersections, and point-to-line distance estimation—the survey provided essential data to inform the design of targeted training exercises and to establish a benchmark for measuring learning outcomes following instruction.

4.1.1. Survey Statements

The statements corresponding to the True/False and Multiple Choice sections of the survey are listed in Table 1. The True/False section is strategically designed to identify common misconceptions and fundamental understanding of core geometric concepts. For example, statements like “A line appears as true length when it is parallel to the projection plane” and “A horizontal line appears true length in the side view” target basic yet frequently misunderstood aspects of DG. Addressing these statements reveals whether students have a clear grasp of orthographic projection fundamentals and helps educators pinpoint conceptual inaccuracies for targeted instructional intervention. In the Multiple Choice section, the provided items demand a deeper analytical understanding, requiring students not only to recognize correct statements but also to distinguish subtle differences between options related to geometric constructions. Statements such as “To obtain the true length of an inclined line, you must …” or “The angle between two intersecting planes can be found by …” compel students to apply theoretical principles to problem-solving scenarios. This format evaluates not merely recognition but also critical thinking skills, reflecting the students’ ability to practically apply theoretical knowledge to geometric problems.
Table 2 shows the statements related to Short Answer Questions, Practical Application and Visualization and Spatial Reasoning. The Short Answer Questions are formulated to explicitly assess students’ capability to articulate their understanding of key DG concepts in their own words. These open-ended questions, such as defining “point view of a line” or describing how to construct “the true-size view of a plane,” challenge students to demonstrate clear conceptual understanding and articulate procedures logically. This format effectively reveals the depth of student comprehension beyond rote memorization, thereby offering valuable insights into their foundational understanding and ability to communicate geometric concepts clearly. Additionally, the Practical Application section directly assesses students’ proficiency in applying theoretical concepts to solve DG problems. Questions such as describing “the steps necessary to determine the distance between two skew lines” or outlining procedures to find “the true angle between a line and a plane” measure students’ capability to synthesize multiple concepts into coherent, sequential solutions. This approach directly aligns with engineering practices where DG serves as a foundation for solving complex spatial problems. Finally, the section on Visualization and Spatial Reasoning directly addresses one of the core challenges in DG—the ability to mentally visualize spatial configurations and relationships. Questions that require students to “visualize the intersection line of two planes” or “visualize the true distance between two points located on different planes” effectively measure students’ spatial reasoning capabilities. These skills are critically important in engineering design and construction, where accurate spatial visualization often determines the success of practical engineering solutions.
In summary, the selected statements of the Descriptive Geometry Baseline Knowledge Survey comprehensively evaluate key competencies required for proficiency in this subject. They are thoughtfully designed to reveal students’ foundational understanding, practical application skills, and spatial visualization abilities, thus making them highly suitable for accurately assessing the baseline knowledge of undergraduate engineering students involved in this study.

4.1.2. Survey Results

The baseline survey results confirmed that students entered the course with generally low levels of descriptive geometry knowledge. As shown in Table 3, fewer than 60% of students answered correctly even on basic items, and critical skills such as determining the true length of a line or recognizing edge views of planes were correctly identified by only about one-third of participants.
Table 4 further highlights difficulties with short-answer and application tasks, where success rates typically fell below 35%. Particularly weak areas included measuring distances between skew lines, finding true angles between a line and a plane, and visualizing intersections of planes. The survey results consistently reflect a generally low baseline knowledge among these undergraduate engineering students. These findings establish a clear need for targeted instruction in projection principles, auxiliary views, and spatial reasoning.

4.2. Training Exercises Statements

The training exercises were designed to address the deficiencies identified in the baseline survey. As summarized in Table 5, each task was linked to a specific area of weakness, such as true-length construction, edge and point views, or line–plane and plane–plane relationships. The sequence of exercises progressed from fundamental constructions toward more complex spatial reasoning challenges, ensuring that students systematically reinforced their understanding of core descriptive geometry concepts.

4.3. Training Exercises Implementation

To implement the DG training exercises, the cohort of 90 undergraduate engineering students was divided into two groups of 45. One group learned through traditional hand-drawing methods based on Monge’s projection system, while the other used the CADOP approach (Gutiérrez de Ravé & Jiménez-Hornero, 2024). Both groups followed the same sequence of exercises, adapted to their instructional method. Table 6 outlines the parallel procedures: while Monge students relied on auxiliary views and manual constructions, CADOP students achieved the same goals by redefining coordinate systems within the CAD environment. This comparison highlights how CADOP simplifies procedures while preserving geometric rigor.

4.4. Test Administered to the Students

The final assessment consisted of ten exercises designed around a tetrahedron model (Figure 1), each targeting a specific projection principle or spatial reasoning skill. Tasks included determining true lengths, generating point and edge views, obtaining true-size planes, calculating distances, projecting lines, measuring angular relationships, and finding intersections. Both Monge and CADOP students completed the same set of problems, but the solution paths differed: Monge relied on successive auxiliary views and manual constructions, while CADOP used coordinate transformations and system reorientation. Representative solutions are illustrated in Sections 4.4.1–4.4.10. The following summaries highlight the pedagogical implications of each exercise, focusing on differences in complexity, accuracy, and cognitive demand rather than detailed step-by-step procedures.

4.4.1. Exercise 1: Determining True Lengths of Lines

Figure 2 illustrates the solutions to Exercise 1 of the DG test, which requires the determination of the true lengths of lines AB and CD. Monge’s system required auxiliary views and unfolding, while the CADOP method solved it directly by aligning the coordinate system. This highlights CADOP’s ability to simplify procedures and reduce error sources.
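To connect the figure to the underlying computation, a minimal sketch (using the same illustrative tetrahedron coordinates as in the Section 3 sketch, not the actual figure data) shows that the true length exposed by an aligned coordinate system is simply the 3D distance between the endpoints:

```python
import numpy as np

# Hypothetical vertex coordinates of the tetrahedron
A, B = np.array([0.0, 0.0, 0.0]), np.array([60.0, 0.0, 0.0])
C, D = np.array([30.0, 50.0, 0.0]), np.array([30.0, 20.0, 40.0])

# True length = 3D distance, which an aligned coordinate system exposes directly
true_length_AB = np.linalg.norm(B - A)
true_length_CD = np.linalg.norm(D - C)
print(true_length_AB, round(true_length_CD, 2))
```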

4.4.2. Exercise 2: Point View of a Line

Figure 3 shows the resolution of Exercise 2 from the DG test, which involves obtaining the point view of line CD. In the Monge approach, multiple projections were needed to reduce the line to a point view, whereas the CADOP method achieved the same outcome instantly by redefining the UCS. The contrast shows how CAD tools minimize cognitive load in spatial orientation tasks.
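A minimal sketch of the same idea, with the illustrative coordinates used earlier: projecting along the line's own direction maps both endpoints to the same 2D point, which is exactly the point view.

```python
import numpy as np

C = np.array([30.0, 50.0, 0.0])               # hypothetical line endpoints
D = np.array([30.0, 20.0, 40.0])

w = (D - C) / np.linalg.norm(D - C)           # viewing direction along line CD
u = np.cross(w, [1.0, 0.0, 0.0])              # any reference axis not parallel to w
u /= np.linalg.norm(u)
v = np.cross(w, u)

# Orthographic projection onto the plane perpendicular to w
proj = lambda p: np.array([p @ u, p @ v])
print(np.allclose(proj(C), proj(D)))          # True: the line collapses to a point
```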

4.4.3. Exercise 3: Edge View of a Plane

Figure 4 presents the solutions to Exercise 3 of the DG test, which involves obtaining the edge view of plane ACD. The Monge solution required constructing an auxiliary view to see the plane in edge form; the CADOP method aligned the UCS directly with the plane. The pedagogical takeaway is that CADOP accelerates comprehension of plane orientation.
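The edge-view condition can also be checked numerically: when the viewing direction lies in the plane, the projected vertices become collinear. A sketch with the same illustrative coordinates:

```python
import numpy as np

A = np.array([0.0, 0.0, 0.0])                 # hypothetical tetrahedron vertices
C = np.array([30.0, 50.0, 0.0])
D = np.array([30.0, 20.0, 40.0])

# A view direction lying in plane ACD (here: toward the midpoint of edge CD)
w = (C + D) / 2 - A
w /= np.linalg.norm(w)

project = lambda p: p - (p @ w) * w           # orthographic projection along w
Ap, Cp, Dp = project(A), project(C), project(D)

# The projected vertices are collinear, i.e. the plane appears as an edge
print(np.allclose(np.cross(Cp - Ap, Dp - Ap), 0.0, atol=1e-6))
```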

4.4.4. Exercise 4: True-Size View of a Plane

Figure 5 illustrates the solution to Exercise 4 of the DG test, which requires obtaining the true-size view of plane ACD. Obtaining a plane’s true size took the Monge procedure two sequential steps (edge view, then unfolding), while the CADOP method delivered it immediately through UCS alignment. CADOP thus provides a more accessible route for students to grasp true-size concepts.
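Conceptually, the aligned coordinate system amounts to an orthonormal basis lying in the plane, in which the 2D coordinates preserve lengths and angles (true size). A sketch with the same illustrative coordinates:

```python
import numpy as np

A = np.array([0.0, 0.0, 0.0])                 # hypothetical tetrahedron vertices
C = np.array([30.0, 50.0, 0.0])
D = np.array([30.0, 20.0, 40.0])

# Orthonormal basis lying in plane ACD (the "aligned" coordinate system)
u = (C - A) / np.linalg.norm(C - A)
n = np.cross(C - A, D - A); n /= np.linalg.norm(n)
v = np.cross(n, u)

to_2d = lambda p: np.array([(p - A) @ u, (p - A) @ v])   # true-size coordinates
A2, C2, D2 = to_2d(A), to_2d(C), to_2d(D)

# Distances in the 2D view match the 3D distances, so true size is preserved
print(np.isclose(np.linalg.norm(C2 - D2), np.linalg.norm(C - D)))
```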

4.4.5. Exercise 5: Shortest Distance from a Point to a Line

Figure 6 presents the solutions to Exercise 5 of the DG test, which involves determining the shortest distance from point A to line CD. The Monge system demanded prior point views before dropping a perpendicular; the CADOP method provided the result directly once the UCS was set. This efficiency demonstrates how CAD supports rapid application of projection principles.
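The quantity obtained in either workflow is the standard point-to-line distance, computed below for the illustrative coordinates:

```python
import numpy as np

A = np.array([0.0, 0.0, 0.0])                 # hypothetical tetrahedron vertices
C = np.array([30.0, 50.0, 0.0])
D = np.array([30.0, 20.0, 40.0])

# Shortest distance from point A to line CD: |CD x CA| / |CD|
CD, CA = D - C, A - C
distance = np.linalg.norm(np.cross(CD, CA)) / np.linalg.norm(CD)
print(round(distance, 2))
```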

4.4.6. Exercise 6: Projecting a Line onto a Plane

Figure 7 displays the resolution of Exercise 6 from the DG test, which consists of projecting line CD onto plane ABC. Since point C lies in plane ABC, it is only necessary to project point D onto that plane. The Monge approach involved constructing a perpendicular from the line to the plane, while CADOP projected directly by setting one coordinate to zero. This illustrates how CADOP strengthens conceptual understanding by making projection results visually immediate.
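A sketch of the underlying operation with the illustrative coordinates: the orthogonal projection of D onto plane ABC, which in an aligned coordinate system reduces to zeroing the out-of-plane coordinate.

```python
import numpy as np

A = np.array([0.0, 0.0, 0.0])                 # hypothetical tetrahedron vertices
B = np.array([60.0, 0.0, 0.0])
C = np.array([30.0, 50.0, 0.0])
D = np.array([30.0, 20.0, 40.0])

# Unit normal of plane ABC
n = np.cross(B - A, C - A)
n /= np.linalg.norm(n)

# Orthogonal projection of D onto plane ABC; with coordinates aligned to ABC
# this amounts to setting the out-of-plane coordinate to zero
D_proj = D - ((D - A) @ n) * n
print(D_proj)                                  # the projected line is then C -> D_proj
```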

4.4.7. Exercise 7: Angle Between Two Intersecting Planes

Figure 8 illustrates the solution to Exercise 7 of the DG test, which requires determining the angle between two intersecting planes, specifically planes ACD and BCD. The Monge approach solved this by obtaining edge views of both planes through auxiliary constructions. CADOP aligned the UCS with the line of intersection, yielding the angle directly. This task shows how CAD reduces procedural complexity in angular relationships.
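For reference, the dihedral angle can be computed directly from the shared edge, as in the following sketch with the illustrative coordinates:

```python
import numpy as np

A = np.array([0.0, 0.0, 0.0])                 # hypothetical tetrahedron vertices
B = np.array([60.0, 0.0, 0.0])
C = np.array([30.0, 50.0, 0.0])
D = np.array([30.0, 20.0, 40.0])

e = (D - C) / np.linalg.norm(D - C)           # shared edge CD of planes ACD and BCD

# In each plane, take the direction perpendicular to the shared edge
a = (A - C) - ((A - C) @ e) * e
b = (B - C) - ((B - C) @ e) * e

cos_angle = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
print(np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0))))   # dihedral angle in degrees
```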

4.4.8. Exercise 8: Angle Between a Line and a Plane

Figure 9 presents the resolution of Exercise 8 from the DG test, which involves determining the angle between line CD and plane ABC. The Monge system required projecting the line onto the plane and then constructing a true view of the angle; CADOP used two UCS reorientations to measure it directly. The pedagogical gain is that CADOP makes complex angular relations more transparent.
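The measured quantity corresponds to the standard line-plane angle, sketched below with the illustrative coordinates:

```python
import numpy as np

A = np.array([0.0, 0.0, 0.0])                 # hypothetical tetrahedron vertices
B = np.array([60.0, 0.0, 0.0])
C = np.array([30.0, 50.0, 0.0])
D = np.array([30.0, 20.0, 40.0])

d = (D - C) / np.linalg.norm(D - C)           # direction of line CD
n = np.cross(B - A, C - A)
n /= np.linalg.norm(n)                        # unit normal of plane ABC

# Angle between a line and a plane = arcsin of |direction . normal|
angle = np.degrees(np.arcsin(abs(d @ n)))
print(round(angle, 2))
```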

4.4.9. Exercise 9: Intersection Line Between Two Planes

Figure 10 shows the solution to Exercise 9 of the DG test, which involves determining the intersection line between two planes, ACD and H. The Monge solution used cutting planes to define intersections, whereas CADOP identified the points of intersection by projecting line segments and then connecting them. This demonstrates CADOP’s capacity to make spatial intersections more intuitive.
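A sketch of the point-by-point idea: each edge of triangle ACD is intersected with a horizontal plane, here placed at an assumed height of z = 20 purely for illustration (this is an assumption; the plane H in the exercise belongs to the actual figure data), and the intersection line is the segment joining the two crossing points.

```python
import numpy as np

A = np.array([0.0, 0.0, 0.0])                 # hypothetical tetrahedron vertices
C = np.array([30.0, 50.0, 0.0])
D = np.array([30.0, 20.0, 40.0])

def cross_with_h(p, q, h=20.0):
    """Point where segment pq crosses a horizontal plane z = h (assumes it does)."""
    t = (h - p[2]) / (q[2] - p[2])
    return p + t * (q - p)

# Illustrative horizontal plane at z = 20
P1 = cross_with_h(A, D)
P2 = cross_with_h(C, D)
print(P1, P2)   # the intersection line of plane ACD with this plane passes through P1 and P2
```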

4.4.10. Exercise 10: Shortest Distance Between Two Skew Lines

Figure 11 presents the solution to Exercise 10 of the DG test, which involves calculating the shortest distance between two skew lines: RS, located in plane ABC, and PQ, placed in plane ACD. The Monge method constructed auxiliary planes and perpendiculars to calculate the distance; CADOP achieved it by aligning the UCS and projecting directly. The advantage here is CADOP’s ability to reduce a cognitively demanding multi-step process to a straightforward operation.
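The result in both workflows is the common-perpendicular distance between the two lines, which the following sketch evaluates for hypothetical endpoints of RS (in plane ABC) and PQ (in plane ACD) of the illustrative tetrahedron used in the earlier sketches:

```python
import numpy as np

# Hypothetical skew lines: RS lies in plane ABC, PQ in plane ACD
R, S = np.array([10.0, 10.0, 0.0]), np.array([50.0, 10.0, 0.0])
P, Q = np.array([15.0, 10.0, 20.0]), np.array([30.0, 35.0, 20.0])

d1, d2 = S - R, Q - P
n = np.cross(d1, d2)                          # direction of the common perpendicular
distance = abs((P - R) @ n) / np.linalg.norm(n)
print(round(distance, 2))
```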

4.5. Rubric-Based Assessment

The descriptive geometry evaluation rubric has been designed as an assessment tool to systematically measure students’ development in spatial reasoning and geometric problem-solving following completion of the test. Its primary purpose is to provide a consistent and transparent framework for evaluating student performance across both traditional (Monge’s system) and computer-assisted (CADOP) instructional modalities. The rubric captures not only the correctness of geometric constructions but also the depth of conceptual understanding, the clarity and precision of representations, and the ability to consistently apply learned methods in various spatial contexts.
The three authors independently graded the final student submissions using the rubric described in Table 7. Inter-rater reliability was calculated, yielding almost perfect agreement according to Cohen’s kappa values: Conceptual Understanding (κ = 0.83), Graphical/Model Accuracy (κ = 0.89), Use of Method (κ = 0.85), Spatial Reasoning (κ = 0.87), and Completion and Consistency (κ = 0.88). Thus, it was confirmed that the assessment results were consistent across evaluators and not dependent on individual judgment.
As shown in Table 7, the rubric is structured around five core criteria: Conceptual Understanding, which assesses the student’s grasp of the underlying geometric principles and their correct application; Graphical/Model Accuracy, which evaluates the precision and visual clarity of manual drawings or CAD models; Use of Method (Monge/CADOP), which focuses on how effectively the student applied the specific instructional technique relevant to their group; Spatial Reasoning, which considers the student’s capacity to interpret and manipulate three-dimensional relationships through projections; and Completion and Consistency, which examines the thoroughness and coherence of the work across all test components.
Each criterion is scored on a five-point scale: 5—Excellent (work is complete, precise, and demonstrates mastery), 4—Good (minor errors or omissions with strong overall performance), 3—Developing (adequate understanding but with notable inconsistencies), 2—Fair (limited grasp or partial application of the concept), and 1—Poor (incomplete or incorrect with minimal demonstration of understanding).
This scoring system enables evaluators to differentiate levels of student achievement and supports meaningful feedback for both instructional refinement and individual learning growth.
The rubric-based evaluation shown in Table 8 offers a detailed comparison of student performance across all test exercises, assessed according to the core criteria mentioned above. According to the results, CADOP students not only achieved higher scores but also demonstrated reduced variance, indicating a more uniformly distributed understanding of the subject matter. In contrast, the Monge group exhibited higher standard deviations, reflecting greater heterogeneity in mastery and possibly a more error-prone manual process.
In Conceptual Understanding, CADOP students outperform Monge-trained peers in every exercise, with the most pronounced gaps observed in Exercises 2 and 3. Similarly, graphical and model accuracy scores are consistently higher under the CADOP method, particularly in Exercises 5 and 8, where Monge students show lower precision. The Use of Method criterion also favors CADOP, highlighting its potential to facilitate procedural efficiency. Spatial reasoning, a core skill in descriptive geometry, shows marked improvement in the CADOP cohort, especially in Exercises 3 and 7, where scores approach or exceed 3.8. Finally, in terms of completion and consistency, CADOP students again exhibit higher averages with lower standard deviations, suggesting they not only finish tasks more reliably but also do so with greater uniformity.
A particularly telling example of the CADOP group’s advantage is observed in Exercise 10, which involves determining the shortest distance between two skew lines—one of the more complex tasks in the test. CADOP students achieved mean scores of 3.17 in Spatial Reasoning and 3.60 in Completion and Consistency, surpassing the Monge group (2.60 and 3.07, respectively). The lower standard deviations in the CADOP results further suggest a more consistent level of mastery among participants. This outcome highlights CADOP’s effectiveness in facilitating accurate spatial interpretation and methodical problem-solving in multidimensional contexts. It also reinforces the value of CAD-enhanced techniques in helping students manage spatial complexity through dynamic visualization and coordinate system manipulation.
A clear group comparison is shown in Table 9 which summarizes the comparative performance of the Monge and CADOP groups across the five rubric criteria, averaged over the ten test exercises. The CADOP group consistently achieved higher mean scores than the Monge group in all dimensions, with particularly notable gains in Graphical/Model Accuracy (3.64 ± 0.24 vs. 3.02 ± 0.24) and Spatial Reasoning (3.50 ± 0.26 vs. 2.90 ± 0.20).
Independent-samples t-tests (Table 10) confirmed that differences shown in Table 9 were statistically significant for every criterion (negative t indicates higher CADOP means; all p < 0.05). These results substantiate the abstract’s claim that CADOP instruction outperformed Monge’s system not only in graphical accuracy but also in conceptual understanding, method application, and task consistency.
The magnitude of the differences between groups was evaluated using Cohen’s d (Cohen, 1988), according to conventional benchmarks (d = 0.2 small, d = 0.5 medium, d = 0.8 large). Across all criteria, d values exceeded 1.8 (Table 11), which far surpasses the conventional threshold for a “large” effect. This indicates that the observed performance advantages of CADOP over Monge were not only statistically significant but also pedagogically substantial, reinforcing the conclusion that CADOP provided a more effective instructional strategy for descriptive geometry learning.
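As an illustration of the analysis pipeline (run here on simulated rubric scores, not the study's actual data, which are available on request), the following sketch performs an independent-samples t-test and computes a pooled-SD Cohen's d:

```python
import numpy as np
from scipy import stats

# Hypothetical rubric scores (one criterion) for two groups of 45 students
rng = np.random.default_rng(0)
monge = rng.normal(3.0, 0.4, 45)
cadop = rng.normal(3.6, 0.3, 45)

# Independent-samples t-test; negative t means the CADOP mean is higher,
# matching the sign convention reported in Table 10
t, p = stats.ttest_ind(monge, cadop)

# Cohen's d with pooled standard deviation
pooled_sd = np.sqrt(((len(monge) - 1) * monge.std(ddof=1) ** 2 +
                     (len(cadop) - 1) * cadop.std(ddof=1) ** 2) /
                    (len(monge) + len(cadop) - 2))
d = (cadop.mean() - monge.mean()) / pooled_sd
print(round(t, 2), round(p, 4), round(d, 2))
```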
These findings highlight the pedagogical value of CADOP, whose integration of CAD tools and orthographic projection principles appears to enhance students’ ability to grasp and apply spatial concepts with precision and consistency—particularly in geometrically demanding tasks.
In addition to the quantitative comparisons, qualitative findings from the post-training surveys provided further insight into student experiences. CADOP students frequently emphasized the clarity and efficiency of the digital approach, highlighting the benefit of real-time visual feedback during coordinate transformations. In contrast, Monge students often noted the conceptual rigor of manual drafting but also described challenges with accuracy and time. Confidence levels also diverged: more than two-thirds of CADOP students reported feeling confident in solving projection problems after instruction, compared to fewer than half of the Monge students. These reflections reinforce the statistical findings, showing that CADOP not only improved performance outcomes but also enhanced students’ perceived confidence and engagement.

5. Discussion

The comparative analysis between Monge’s traditional system and the CADOP method provides important insights into the evolving pedagogy of descriptive geometry in engineering education. The findings confirm our initial hypothesis that the integration of computer-aided tools with orthographic projection enhances student performance in spatial reasoning, conceptual understanding, and procedural accuracy. These results align with prior research demonstrating the positive impact of CAD environments on the development of spatial visualization skills and engineering problem-solving (Sorby, 2009; Kösa & Karakuş, 2018; Dilling & Vogler, 2021).
The magnitude of the observed differences also deserves attention. The very large effect sizes indicate that the superiority of CADOP was not a marginal or technical advantage but a substantial pedagogical one. In practical terms, this means that CADOP did more than improve drawing precision: it helped students engage more effectively with spatial relationships, reduce cognitive overload from repetitive manual constructions, and complete complex tasks more consistently. These gains point to a genuine enhancement of learning rather than simply a byproduct of digital tools, underscoring CADOP’s potential to transform how descriptive geometry is taught.
The observed advantages of CADOP can be interpreted in light of the cognitive mechanisms discussed earlier. The data suggest that CADOP helps students manage cognitive resources more efficiently by reducing the procedural burden associated with manual drafting. Rather than repeatedly constructing auxiliary projections, students could focus on interpreting spatial relationships, which led to higher conceptual scores and fewer graphical errors. The significant differences in performance and consistency therefore indicate that CADOP successfully channels mental effort toward meaningful learning activities, validating its capacity to optimize cognitive load in practice.
The results also demonstrate that CADOP fosters deeper spatial reasoning by combining symbolic understanding with dynamic visual interaction. The immediate visual feedback provided by the CAD environment appears to have strengthened students’ capacity to visualize and mentally manipulate geometric forms, as reflected in their superior performance on spatial reasoning tasks and higher self-reported confidence. Conversely, Monge’s manual method, while valuable for reinforcing theoretical reasoning, imposed greater cognitive effort and time demands. Overall, the findings confirm that CADOP operationalizes the theoretical principles of reduced cognitive load, dual visual–symbolic processing, and enhanced spatial cognition, leading to measurable gains in both performance and student confidence.
Beyond confirming the superiority of CADOP in measured outcomes, it is important to understand why these differences emerged. One of the clearest advantages of CADOP is its reliance on digital precision. CAD environments automatically ensure exact line lengths, angles, and projections once the coordinate system is defined, reducing the risk of technical drawing errors. This helps explain the markedly higher graphical accuracy scores of CADOP students: the software minimized mechanical inaccuracies and freed students to concentrate on interpreting geometric relationships.
Conversely, Monge students, although disadvantaged in terms of graphical accuracy, engaged in a slower and more labor-intensive construction process. The manual steps required in Monge’s method often demanded repeated attention to projection rules and geometric reasoning. While this process sometimes led to inconsistencies in execution, it also encouraged reflection on the theoretical underpinnings of descriptive geometry. This may explain why some Monge students demonstrated relatively strong engagement with conceptual aspects, even if their final drawings lacked precision. In other words, the cognitive “cost” of manual drafting may support theoretical insight, while CADOP facilitates operational efficiency and accuracy.
From a pedagogical standpoint, these complementary outcomes highlight the potential of hybrid approaches. CADOP equips students with efficiency and precision in solving complex projection problems, while Monge’s method preserves the discipline and rigor of manual construction that can reinforce fundamental understanding. This balance is consistent with arguments by Bokan et al. (2009) and Moreno and Bazán (2017), who advocate for embedding CAD within descriptive geometry instruction in a way that preserves conceptual depth while reducing procedural barriers.
The higher mean scores and lower variability obtained by the CADOP group further suggest that digital approaches not only improve achievement but also promote more equitable learning outcomes. This observation resonates with Deng et al. (2022), who emphasized that interactive digital platforms help mitigate disparities in comprehension by providing immediate visual feedback and reducing the cognitive load associated with complex geometric constructions. In contrast, Monge’s system, although conceptually rigorous, appeared to amplify heterogeneity in student outcomes, given its reliance on successive auxiliary views and the potential for manual inaccuracies (Bairaktarova, 2017).
The implications extend beyond descriptive geometry into broader discussions on digital integration in engineering education. Spatial reasoning skills underpin competencies across disciplines such as civil engineering, mechanical design, and collaborative design workflows. Incorporating CADOP or similar methodologies may therefore enhance not only learning in DG but also the transfer of spatial skills to other professional contexts.

Future Research Directions

While the present study demonstrates clear advantages of the CADOP method over Monge’s traditional approach, additional research is needed to further validate and extend these findings. Longitudinal studies should explore whether the learning gains observed here persist in subsequent courses, particularly in advanced design and engineering subjects where spatial reasoning skills are critical. Cross-cultural and cross-institutional comparisons would also be valuable for testing the generalizability of CADOP, especially in educational contexts with differing traditions of manual drafting or unequal access to CAD technologies. Finally, future work could investigate the integration of CADOP with emerging instructional technologies such as augmented reality or AI-assisted design environments (Hunde & Woldeyohannes, 2022), which may open new possibilities for interactive and adaptive learning in descriptive geometry.

6. Conclusions

This study explored the comparative effectiveness of two instructional methodologies—Monge’s traditional system of descriptive geometry and the CADOP method—in supporting the development of spatial reasoning and geometric problem-solving skills among undergraduate engineering students. Using baseline knowledge surveys, targeted training exercises, and rubric-based assessments, it was found that the CADOP group consistently outperformed the Monge group across all evaluated criteria, including conceptual understanding, graphical accuracy, use of method, spatial reasoning, and consistency of execution.
The magnitude of these differences was not trivial: very large effect sizes confirmed that CADOP’s superiority was both statistically significant and educationally meaningful. In practice, this means that CADOP did more than enhance the precision of drawings—it also enabled students to manage spatial complexity more effectively, focus on geometric interpretation rather than procedural errors, and achieve more consistent results across tasks. These outcomes position CADOP as a powerful tool for modernizing descriptive geometry education.
From a theoretical perspective, the results demonstrate how CADOP effectively translates the principles of Cognitive Load Theory, Dual Coding Theory, and spatial cognition into measurable learning benefits. By reducing procedural complexity and automating projection steps, CADOP minimizes extraneous cognitive load, allowing students to focus their working memory on understanding geometric relationships. The real-time visual feedback and the integration of symbolic and pictorial information engage both processing channels, reinforcing conceptual comprehension in line with dual coding principles. Furthermore, the interactive manipulation of coordinate systems enhances mental rotation and spatial visualization skills. These cognitive mechanisms help explain the statistically significant and educationally meaningful superiority of CADOP observed in this study.
At the same time, the results highlight the continuing pedagogical value of Monge’s approach. Manual drafting, although more error-prone, promotes a step-by-step engagement with geometric principles that can strengthen conceptual reflection and theoretical understanding. Thus, the two methods should not be viewed as mutually exclusive. Instead, their strengths are complementary: CADOP provides efficiency, precision, and consistency, while Monge fosters discipline, rigor, and conceptual depth.
The implications for engineering education are twofold. First, integrating CAD-based strategies such as CADOP can significantly improve learning outcomes and promote more equitable performance across diverse student populations. Second, maintaining selected elements of manual projection can ensure that students continue to develop a strong theoretical foundation in geometry. Hybrid instructional models that combine the efficiency of CAD with the conceptual rigor of Monge’s system may therefore represent the most effective pathway for modern descriptive geometry education.
Future research should investigate the long-term retention of geometric concepts under these hybrid approaches and explore their transferability to advanced engineering contexts. Cross-institutional and cross-cultural studies will also be important for validating the generalizability of these findings. Additionally, the integration of CADOP with emerging technologies such as augmented reality and AI-driven design assistance may offer promising directions for the continued evolution of descriptive geometry instruction.
Overall, the findings confirm that CADOP is not only a more effective instructional approach for contemporary engineering education but also a necessary step in aligning descriptive geometry teaching with the digital design environments that dominate professional practice.

Author Contributions

Conceptualization, S.G.d.R., E.G.d.R. and F.J.J.-H.; methodology, S.G.d.R., E.G.d.R. and F.J.J.-H.; formal analysis and investigation, S.G.d.R., E.G.d.R. and F.J.J.-H.; writing—original draft preparation, S.G.d.R., E.G.d.R. and F.J.J.-H.; writing—review and editing, S.G.d.R., E.G.d.R. and F.J.J.-H.; resources, E.G.d.R. and F.J.J.-H.; supervision, E.G.d.R. and F.J.J.-H.; funding acquisition, E.G.d.R. and F.J.J.-H. All authors have read and agreed to the published version of the manuscript.

Funding

The APC was funded by the Research Program of the University of Córdoba (2025), Spain. It was also funded by the Department of Graphic Engineering and Geomatics of the University of Córdoba.

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study. Participants were clearly informed that participation was voluntary and anonymous, and that they could withdraw from the research at any time without any academic penalty or negative consequences.

Data Availability Statement

The data presented in this study are available on request from the corresponding author. The data are not publicly available due to privacy restrictions.

Acknowledgments

The authors gratefully acknowledge the support of the funding sources.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
CAD | Computer-Aided Design
CADOP | CAD and Orthographic Projection
DG | Descriptive Geometry

References

  1. Abrahamzon, L., Galantini, K., & Luna, A. (2024). Augmented reality for the development and reinforcement of spatial skills: A case applied to civil engineering students. International Journal of Engineering Pedagogy, 14(5), 88–108.
  2. Agbanglanon, S. L., Lecorre, T., Komis, V., & Jaillet, A. (2024). Spatial skills and observed leadership behaviour: A case study of dyadic teamwork in mechanical design. European Journal of Engineering Education, 49(6), 1227–1245.
  3. Bairaktarova, D. (2017). Coordinating mind and hand: The importance of manual drawing and descriptive geometry instruction in a CAD-oriented engineering design graphics class. Engineering Design Graphics Journal, 81(3), 1–16.
  4. Balajti, Z. (2024). Challenges of engineering applications of descriptive geometry. Symmetry, 16(1), 50.
  5. Bertoline, G. R., Wiebe, E. N., Hartman, N. W., & Ross, W. A. (2009). Fundamentals of graphics communication (4th ed.). McGraw-Hill Higher Education.
  6. Bokan, N., Ljucovic, M., & Vukmirovic, S. (2009). Computer-aided teaching of descriptive geometry. Journal for Geometry and Graphics, 13(2), 221–229.
  7. Cohen, J. (1988). Statistical power analysis for the behavioral sciences (2nd ed.). Lawrence Erlbaum Associates.
  8. Deng, Y., Mueller, M., Rogers, C., & Olechowski, A. (2022). The multi-user computer-aided design collaborative learning framework. Advanced Engineering Informatics, 51, 101446.
  9. Dilling, F., & Vogler, A. (2021). Fostering spatial ability through computer-aided design: A case study. Digital Experiences in Mathematics Education, 7, 323–336.
  10. Gutiérrez de Ravé, E., & Jiménez-Hornero, F. J. (2024). A 3D descriptive geometry problem-solving methodology using CAD and orthographic projection. Symmetry, 16(4), 476.
  11. Hunde, B. R., & Woldeyohannes, A. D. (2022). Future prospects of computer-aided design (CAD)—A review from the perspective of artificial intelligence (AI), extended reality, and 3D printing. Results in Engineering, 14, 100478.
  12. Ileri, Ç., Güler, Ö., Atit, K., & Uttal, D. H. (2023). Malleability of spatial skills: Bridging developmental studies and mental rotation training. Frontiers in Psychology, 14, 1137003.
  13. Katona, J., & Nagy Kem, G. (2019). The CAD 3D course improves students’ spatial skills in the technology and design education. YBL Journal of Built Environment, 7, 26–37.
  14. Kösa, T., & Karakuş, F. (2018). The effects of computer-aided design software on engineering students’ spatial visualization skills. European Journal of Engineering Education, 43(2), 296–308.
  15. Kuna, P., Hašková, A., Palaj, M., Skačan, M., & Záhorec, J. (2018). How to teach CAD/CAE systems. International Journal of Engineering Pedagogy (iJEP), 8(1), 148–162.
  16. Ladrón-de-Guevara-Muñoz, M. C., Alonso-García, M., de-Cózar-Macías, O. D., & Blázquez-Parra, E. B. (2023). The place of descriptive geometry in the face of industry 4.0 challenges. Symmetry, 15(12), 2190.
  17. Landis, J. R., & Koch, G. G. (1977). The measurement of observer agreement for categorical data. Biometrics, 33(1), 159–174.
  18. Markova, T., Shirokova, S., Rostova, O., & Misbakhova, C. (2024). Possibilities of using computer-aided design systems to improve the quality of technical specialists training. Lecture Notes in Networks and Systems, 951, 93–105.
  19. Migliari, R. (2012). Descriptive geometry: From its past to its future. Nexus Network Journal, 14(3), 555–571.
  20. Monge, G. (1799). Géométrie descriptive. Baudouin.
  21. Moreno, R., & Bazán, A. M. (2017). Automation in the teaching of descriptive geometry and CAD. High-level CAD templates using script languages. IOP Conference Series: Materials Science and Engineering, 245(6), 062040.
  22. Paas, F. G. W. C., & van Merriënboer, J. J. G. (1994). Instructional control of cognitive load in the training of complex cognitive tasks. Educational Psychology Review, 6(4), 351–371.
  23. Paivio, A. (1990). Mental representations: A dual coding approach. Oxford University Press.
  24. Shepard, R. N., & Metzler, J. (1971). Mental rotation of three-dimensional objects. Science, 171(3972), 701–703.
  25. Sorby, S. A. (2009). Educational research in developing 3-D spatial skills for engineering students. International Journal of Science Education, 31(3), 459–480.
  26. Suzuki, K. (2014). Traditional descriptive geometry education in the 3D-CAD/CG era. Journal for Geometry and Graphics, 18(2), 249–258.
  27. Sweller, J. (1988). Cognitive load during problem solving: Effects on learning. Cognitive Science, 12(2), 257–285.
  28. Taleyarkhan, M., Dasgupta, C., García, J. M., & Magana, A. J. (2018). Investigating the impact of using a CAD simulation tool on students’ learning of design thinking. Journal of Science Education and Technology, 27, 334–347.
  29. Tomiczková, S., & Lávicka, M. (2013). Computer-aided descriptive geometry teaching. Computers in the School, 30(1–2), 48–60.
  30. Torner, J., Alpiste, F., & Brigos, M. (2014). Spatial ability in computer-aided design courses. Computer-Aided Design and Applications, 12(1), 36–44.
Figure 1. Statements of the exercises included in the test administered to the students of both groups to evaluate the acquired skills in descriptive geometry after instruction.
Figure 2. Determining the true length of a line: Monge requires auxiliary views, while CADOP solves it through UCS reorientation.
Figure 3. Point view of a line: Monge uses multiple projections, whereas CADOP reduces it with a single UCS change.
Figure 4. Edge view of a plane: Monge constructs an auxiliary projection, while CADOP aligns the UCS directly with the plane.
Figure 5. True-size view of a plane: Monge requires unfolding after an edge view, while CADOP obtains it directly through UCS alignment.
Figure 6. Shortest distance from a point to a line: Monge constructs projections and perpendiculars, while CADOP calculates it directly once the UCS is oriented.
Figure 7. Projection of a line onto a plane: Monge constructs perpendicular projections, while CADOP simplifies the task by UCS reorientation.
Figure 8. Angle between two planes: Monge uses successive auxiliary views, while CADOP measures the angle directly after UCS alignment.
Figure 9. Angle between a line and a plane: Monge requires projecting the line and constructing auxiliaries, while CADOP determines the angle through UCS reorientation.
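The following NumPy sketch (assumed direction and normal vectors) illustrates the analytic counterpart: the angle between a line and a plane is the complement of the angle between the line and the plane’s normal, obtained here with an arcsine.

```python
import numpy as np

# Hypothetical line direction d and plane normal n (illustrative values).
d = np.array([1.0, 0.0, 1.0])
n = np.array([0.0, 0.0, 1.0])

# The line-plane angle is the complement of the angle between d and n.
sin_a = abs(np.dot(d, n)) / (np.linalg.norm(d) * np.linalg.norm(n))
angle = np.degrees(np.arcsin(np.clip(sin_a, 0.0, 1.0)))
print(f"Angle between line and plane: {angle:.1f} degrees")   # 45.0 here
```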
Figure 10. Intersection line between two planes: Monge determines intersections with auxiliary planes, while CADOP identifies intersection points directly by projection.
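As an illustration of the analytic route (the plane equations are assumed examples), the sketch below obtains the intersection line from the cross product of the two normals and one common point found by solving the two plane equations.

```python
import numpy as np

# Hypothetical planes in normal form n·x = c (illustrative values).
n1, c1 = np.array([0.0, 0.0, 1.0]), 1.0    # plane z = 1
n2, c2 = np.array([0.0, 1.0, 0.0]), 2.0    # plane y = 2

# Direction of the intersection line.
direction = np.cross(n1, n2)

# One point on the line: least squares returns the minimum-norm point that
# satisfies both plane equations simultaneously.
A = np.vstack([n1, n2])
point, *_ = np.linalg.lstsq(A, np.array([c1, c2]), rcond=None)
print("Direction:", direction, "Point on line:", point)
```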
Figure 11. Shortest distance between two skew lines: Monge requires auxiliary planes and perpendicular constructions, while CADOP obtains the result directly through UCS alignment. A 3D sketch of the solution is shown at the figure bottom.
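For the skew-line case, the analytic result is the length of the projection of the connecting vector onto the common perpendicular. The NumPy sketch below (arbitrary example lines) computes it directly, which is what a UCS-aligned measurement returns in a CAD environment.

```python
import numpy as np

# Hypothetical skew lines: points p1, p2 and directions d1, d2 (illustrative values).
p1, d1 = np.array([0.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.0])
p2, d2 = np.array([0.0, 1.0, 1.0]), np.array([0.0, 1.0, 0.0])

# Shortest distance: project the connecting vector onto the common perpendicular.
n = np.cross(d1, d2)
distance = abs(np.dot(p2 - p1, n)) / np.linalg.norm(n)
print(f"Shortest distance: {distance:.3f}")   # 1.000 for these lines
```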
Table 1. Descriptive geometry baseline knowledge survey statements (Part 1).

| Survey Section | Item | Statement |
|---|---|---|
| True/False | 1 | A line appears as true length when it is parallel to the projection plane. |
| True/False | 2 | Parallel lines always appear parallel in all orthographic views. |
| True/False | 3 | A point view of a line can be obtained when the observer’s line of sight is parallel to that line. |
| True/False | 4 | Planar figures can be viewed as edges. |
| True/False | 5 | A horizontal line appears true length in the profile or side view. |
| True/False | 6 | If two planes are perpendicular, the normal view of either one will show an edge view of the other. |
| True/False | 7 | An inclined plane never appears true size in any principal view. |
| True/False | 8 | A line perpendicular to a plane appears perpendicular to the plane’s edge view. |
| Multiple choice | 1 | To obtain the true length of an inclined line, you must: A. View it parallel to the line. B. View it perpendicular to a projection plane. C. View it from an oblique angle. D. Project it on a frontal plane. |
| Multiple choice | 2 | Which of the following is true regarding an edge view of a plane? A. It occurs when the plane is perpendicular to the line of sight. B. It is always visible in the top view. C. It occurs when a line on the plane appears as a point. D. It never occurs in orthographic projection. |
| Multiple choice | 3 | The angle between two intersecting planes can be found by: A. Constructing an edge view of both planes. B. Constructing a true-length view of one plane. C. Constructing a point view of the intersection line. D. All of the above. |
| Multiple choice | 4 | How can you identify if two lines intersect in space? A. They intersect in at least one view. B. They share a common point in all views. C. They are parallel in all views. D. They appear perpendicular in one view. |
Table 2. Descriptive geometry baseline knowledge survey statements (Part 2).

| Survey Section | Item | Statement |
|---|---|---|
| Short answer | 1 | Define what is meant by a “point view” of a line. |
| Short answer | 2 | Explain how to determine the shortest distance between a point and a line. |
| Short answer | 3 | Describe how you would obtain the true-size view of a plane figure. |
| Short answer | 4 | Explain when a line is considered oblique and how it appears in orthographic projection. |
| Practical Application | 1 | Describe the steps necessary to determine the distance between two skew lines. |
| Practical Application | 2 | Outline the steps needed to find the true angle between a line and a plane. |
| Practical Application | 3 | Given two intersecting planes, explain how both planes can be seen in edge view in the same projection. |
| Visualization and Spatial Reasoning | 1 | How do you visualize the intersection line of two planes? |
| Visualization and Spatial Reasoning | 2 | When given an inclined plane figure, explain your strategy to find its true size and shape. |
| Visualization and Spatial Reasoning | 3 | Describe your method for visualizing the orientation of a line in space based on orthographic projections. |
| Visualization and Spatial Reasoning | 4 | Explain the process to visualize the true length of a line. |
Table 3. Descriptive geometry baseline knowledge survey results (Part 1).

| Survey Section | Item | Correct Answer | Percentage |
|---|---|---|---|
| True/False | 1 | True: A line parallel to a projection plane always appears as its true length. | 58% (52 students) |
| True/False | 2 | True: Parallel lines remain parallel in every orthographic view, as parallelism is preserved through projection. | 50% (45 students) |
| True/False | 3 | True: A line appears as a point when viewed parallel to its length (the line of sight aligns with the line). | 45% (41 students) |
| True/False | 4 | True: Planar figures in orthographic projections appear as edges when they are parallel to the line of sight. | 53% (48 students) |
| True/False | 5 | False: A horizontal line appears as true length in the top view, not the side view; this is a common misconception. | 40% (36 students) |
| True/False | 6 | True: If two planes are perpendicular, the orthographic (normal) view of one will display the other as an edge. | 47% (42 students) |
| True/False | 7 | True: An inclined plane, by definition, is not parallel to any principal projection plane and thus will never appear true size in standard principal views. | 52% (47 students) |
| True/False | 8 | True: If a line is perpendicular to a plane, it will appear perpendicular in the edge view of that plane. | 49% (44 students) |
| Multiple choice | 1 | B. View it perpendicular to a projection plane: to determine the true length of an inclined line, the observer’s line of sight must be perpendicular to a plane that is parallel to the line. | 35% (32 students) |
| Multiple choice | 2 | C. It occurs when a line on the plane appears as a point: an edge view of a plane is obtained when you view along a line within the plane, making that line appear as a single point. | 33% (30 students) |
| Multiple choice | 3 | D. All of the above: all listed methods—constructing an edge view of both planes, a true-length view of one plane, or a point view of the intersection line—can be used to find the angle between two intersecting planes. | 26% (23 students) |
| Multiple choice | 4 | B. They share a common point in all views: two lines intersect in space only if they share a common intersection point consistently across all orthographic views. | 38% (34 students) |

Note: The correct answer to each item is listed together with the percentage of students who selected it.
Table 4. Descriptive geometry baseline knowledge survey results (Part 2).

| Survey Section | Item | Ideal Answer | Percentage |
|---|---|---|---|
| Short answer | 1 | Point view of a line: a “point view” of a line is the view obtained when the observer’s line of sight is parallel to the line, causing the line to appear as a single point. | Correct: 34% (31 students) |
| Short answer | 2 | Shortest distance between a point and a line: construct a view where the line appears as a point (end view), and then measure the perpendicular distance from the given point to this point view. | Correct explanation: 30% (27 students) |
| Short answer | 3 | True-size view of a plane: obtained by looking at the plane perpendicularly—that is, along a line normal to the plane. This is typically done using an auxiliary view aligned with the plane’s normal direction. | Correct method described: 31% (28 students) |
| Short answer | 4 | Oblique line and its appearance in orthographic projection: a line is oblique if it is not parallel to any principal plane. It will appear foreshortened (shorter than its true length) in all principal views. | Correct description: 33% (30 students) |
| Practical Application | 1 | Shortest distance between two skew lines: first, create a view where one of the lines appears as a point (end view); then measure the perpendicular (shortest) distance between this point view and the other line. | Clearly explained: 28% (25 students) |
| Practical Application | 2 | True angle between a line and a plane: find the projection of the line onto the plane; determine the plane formed by the line and its projection; find the true shape (true magnitude) of that plane; in this true-shape view, the angle between the line and its projection is visible—this is the true angle between the line and the plane. | Correct methodology: 27% (24 students) |
| Practical Application | 3 | Edge view of two intersecting planes in the same projection: first, identify the intersection line between the two planes; then construct an auxiliary view with the line of sight parallel to the line of intersection; in this auxiliary view, both planes will appear in edge view. | Correct steps: 32% (29 students) |
| Visualization and Spatial Reasoning | 1 | Intersection line of two planes: find two points that belong to the line of intersection of both planes. These points are determined by intersecting each plane with an auxiliary plane (horizontal or vertical) and then finding the intersection of the resulting lines. | Correct visualization: 30% (27 students) |
| Visualization and Spatial Reasoning | 2 | True size and shape of an inclined plane figure: first, find an edge view of the figure; then create an auxiliary view perpendicular to this edge view, providing the true size and shape. | Correct strategy: 33% (30 students) |
| Visualization and Spatial Reasoning | 3 | Visualizing the orientation of a line in space: use orthographic projections (front, top, side) to observe how the line changes length and angle. Combining these views mentally helps visualize its spatial orientation and true length. | Accurate visualization: 29% (26 students) |
| Visualization and Spatial Reasoning | 4 | True length of a line: create an auxiliary view where the line is parallel to the projection plane. In this view, the line appears in its true size because the line of sight is perpendicular to it. | Correct method: 35% (32 students) |

Note: Each task lists the ideal answer together with the percentage of students whose response matched it.
Table 5. Descriptive Geometry Training Exercises—Instructional Rationale and Survey Findings.

| Training Exercise | Reason for Improving Knowledge | Survey Finding Addressed |
|---|---|---|
| True-Length Line Construction | Develops understanding of how line orientation affects projection and when a line appears in its true length. | Only 35% of students correctly identified the condition for obtaining true length of a line. |
| True-Size View of a Plane | Reinforces the two-step auxiliary view method and shows when a plane appears in true size. | Just 31% could describe how to construct a true-size view of a plane. |
| Point View of a Line | Builds spatial reasoning by helping students reduce a line to a point for further construction (e.g., distances). | Only 34% correctly defined the point view of a line. |
| Edge View of a Plane | Introduces critical projection skill used in determining plane size, angles, and intersections. | Just 33% knew the requirements for an edge view of a plane. |
| Shortest Distance from a Point to a Line | Applies point-view techniques and perpendicular projection to determine shortest paths. | Only 30% of students described the correct method to find this distance. |
| Angle Between Two Intersecting Planes | Reinforces edge view construction and angle measurement between planes—important for multi-surface analysis. | Only 26% correctly identified the method to determine this angle. |
| Projection of a Line onto a Plane | Helps visualize line-plane interaction and prepare for angle measurement by projecting geometry onto the plane. | Students showed limited understanding of projection interactions between lines and planes. |
| Angle Between a Line and a Plane | Develops combined skills in projection, perpendicular construction, and auxiliary viewing to find angles. | Only 27% could correctly explain the steps to find this angle. |
| Shortest Distance Between Two Skew Lines | Integrates point view and auxiliary view techniques to measure perpendicular distance between non-intersecting lines. | Only 28% could describe the process correctly, showing difficulty with spatial relationships. |
| Intersection Line of Two Planes | Strengthens the use of projecting auxiliary planes to solve descriptive geometry problems. | Just 30% could accurately visualize or explain this intersection. |
Table 6. Descriptive Geometry Training Exercises—Monge’s System and CADOP Procedures.

| Training Exercise | Monge’s System Procedure | CADOP Procedure |
|---|---|---|
| True-Length Line Construction | Construct a view perpendicular to a given view of that line. | Define a UCS aligned to the line and use a top or front view to reveal its true length. |
| True-Size View of a Plane | Construct an edge view of the plane, then create a second view perpendicular to this edge. | Create a UCS on the plane using three non-aligned points; the UCS top view reveals true size. |
| Point View of a Line | Construct a view where the observer’s line of sight is parallel to the line. | Match the UCS Z-axis to the line; the UCS top view shows the line as a point. |
| Edge View of a Plane | Find a line in the plane to appear as a point view; then construct the plane’s edge view. | Align the UCS XY-plane with the target plane; an orthogonal view to the UCS Z-axis will show the plane in edge view. |
| Shortest Distance from a Point to a Line | Construct a point view of the line and drop a perpendicular from the point. | Align the UCS on the plane defined by the point and the line; draw the perpendicular and measure the distance. |
| Angle Between Two Intersecting Planes | Construct edge views of both planes; measure the angle between them. | Match the UCS Z-axis to the intersection line; measure the angle from the top view. |
| Projection of a Line onto a Plane | The endpoints of the line are projected orthogonally onto the plane; the projected points define the line’s projection on the plane. | Align the UCS XY-plane with the target plane; then set the UCS Z-coordinate (Zu) of the line’s endpoints to zero to obtain its projection on the plane. |
| Angle Between a Line and a Plane | Display the line in true length and the plane in edge view, then measure the angle. | Project the line onto the plane in the UCS; measure the angle between the original and projected lines. |
| Shortest Distance Between Two Skew Lines | Construct a point view of one line and measure the perpendicular distance to the other. | Construct a point view of one line by matching the UCS Z-axis to that line; measure the perpendicular distance to the other line. |
| Intersection Line of Two Planes | The cutting-planes method finds the intersection line of two planes by introducing one or more auxiliary planes that intersect both given planes. The lines of intersection between the cutting planes and each original plane are then projected; their intersection points define the true intersection line of the original planes. | Two lines from one plane are considered, and the point of intersection of each with the other plane is determined. The intersection line between both planes is found by connecting the two resulting points. |
Table 7. Rubric to evaluate students’ development in descriptive geometry.

| Criterion | 5—Excellent | 4—Good | 3—Developing | 2—Fair | 1—Poor |
|---|---|---|---|---|---|
| Conceptual Understanding | Demonstrates thorough understanding of all relevant geometric concepts with accurate application. | Demonstrates good understanding with minor conceptual inaccuracies. | Shows basic understanding but includes some conceptual errors. | Limited understanding with multiple errors or misconceptions. | Lacks understanding of key concepts; work is mostly incorrect. |
| Graphical/Model Accuracy | Drawings/models are precise, neat, and completely accurate. | Drawings/models are mostly accurate with few minor errors. | Drawings/models have some inaccuracies or imprecision. | Frequent graphical errors; unclear or inconsistent representation. | Drawings/models are largely incorrect or missing. |
| Use of Method (Monge/CADOP) | Applies the method flawlessly and appropriately for the task. | Applies the method correctly with minor procedural flaws. | Uses the method with moderate success; some steps missed. | Applies the method incorrectly or inconsistently. | Does not use or misuses the assigned method entirely. |
| Spatial Reasoning | Consistently demonstrates strong spatial reasoning across all tasks. | Generally strong spatial reasoning with rare misinterpretations. | Moderate spatial reasoning; occasional confusion evident. | Spatial reasoning is weak; multiple misinterpretations. | Little or no evidence of spatial reasoning. |
| Completion and Consistency | All tasks are fully completed and consistent throughout. | Most tasks completed with good consistency. | Some tasks completed but inconsistencies present. | Incomplete tasks and inconsistent execution. | Tasks mostly missing or incoherent. |
Table 8. Comparative rubric-based evaluation of student performance in descriptive geometry: Monge system vs. CADOP method.

| Exercise | Conceptual Understanding | Graphical/Model Accuracy | Use of Method (Monge/CADOP) | Spatial Reasoning | Completion & Consistency |
|---|---|---|---|---|---|
| 1 | 2.56 (0.48) / 2.98 (0.30) | 3.09 (0.47) / 3.61 (0.24) | 3.29 (0.50) / 3.75 (0.23) | 2.94 (0.49) / 3.42 (0.40) | 3.25 (0.42) / 3.87 (0.34) |
| 2 | 3.13 (0.45) / 3.81 (0.37) | 3.06 (0.59) / 3.53 (0.28) | 2.73 (0.41) / 3.52 (0.25) | 2.95 (0.41) / 3.54 (0.26) | 3.06 (0.46) / 3.53 (0.37) |
| 3 | 2.89 (0.52) / 3.36 (0.29) | 3.11 (0.60) / 3.71 (0.33) | 2.85 (0.60) / 3.49 (0.36) | 3.20 (0.60) / 3.80 (0.25) | 3.17 (0.42) / 3.52 (0.22) |
| 4 | 2.97 (0.53) / 3.41 (0.35) | 2.87 (0.50) / 3.56 (0.39) | 3.26 (0.43) / 3.97 (0.39) | 2.89 (0.59) / 3.64 (0.30) | 2.72 (0.45) / 3.11 (0.33) |
| 5 | 3.40 (0.60) / 3.72 (0.35) | 3.39 (0.52) / 4.04 (0.38) | 2.94 (0.44) / 3.49 (0.23) | 2.79 (0.48) / 3.29 (0.28) | 3.05 (0.44) / 3.96 (0.28) |
| 6 | 3.23 (0.48) / 3.75 (0.34) | 3.18 (0.52) / 3.89 (0.28) | 3.34 (0.54) / 3.74 (0.37) | 2.92 (0.50) / 3.61 (0.37) | 3.01 (0.46) / 3.44 (0.21) |
| 7 | 3.06 (0.52) / 3.76 (0.26) | 2.64 (0.49) / 3.34 (0.40) | 2.81 (0.47) / 3.17 (0.21) | 3.20 (0.46) / 3.94 (0.30) | 3.19 (0.44) / 3.91 (0.39) |
| 8 | 2.89 (0.49) / 3.55 (0.36) | 3.30 (0.50) / 3.92 (0.26) | 2.58 (0.58) / 3.18 (0.32) | 2.66 (0.55) / 3.17 (0.29) | 2.59 (0.55) / 3.11 (0.32) |
| 9 | 2.86 (0.48) / 3.20 (0.38) | 2.83 (0.47) / 3.49 (0.31) | 3.00 (0.53) / 3.88 (0.30) | 2.85 (0.54) / 3.39 (0.26) | 3.32 (0.42) / 4.19 (0.25) |
| 10 | 2.57 (0.43) / 3.17 (0.31) | 2.75 (0.49) / 3.34 (0.22) | 3.09 (0.43) / 3.69 (0.33) | 2.60 (0.59) / 3.17 (0.36) | 3.07 (0.57) / 3.60 (0.39) |

Note: Cells report mean (standard deviation) rubric scores, given as Monge / CADOP.
Table 9. Summary of rubric scores by group and criterion (means and standard deviations across the ten exercise means).

| Criterion | Monge Mean (Std. Dev.) | CADOP Mean (Std. Dev.) |
|---|---|---|
| Conceptual Understanding | 2.96 (0.27) | 3.47 (0.29) |
| Graphical/Model Accuracy | 3.02 (0.24) | 3.64 (0.24) |
| Use of Method (Monge/CADOP) | 2.99 (0.26) | 3.59 (0.27) |
| Spatial Reasoning | 2.90 (0.20) | 3.50 (0.26) |
| Completion & Consistency | 3.04 (0.23) | 3.62 (0.36) |
Table 10. Independent samples t-tests (unit of analysis = exercise means) comparing Monge vs. CADOP.

| Criterion | t-Statistic | p-Value |
|---|---|---|
| Conceptual Understanding | −4.12 | 0.0006 |
| Graphical/Model Accuracy | −5.72 | <0.0001 |
| Use of Method (Monge/CADOP) | −5.10 | 0.0001 |
| Spatial Reasoning | −5.83 | <0.0001 |
| Completion & Consistency | −4.32 | 0.0004 |
Table 11. Exercise-level standardized mean differences (Cohen’s d).

| Criterion | Cohen’s d |
|---|---|
| Conceptual Understanding | 1.82 |
| Graphical/Model Accuracy | 2.58 |
| Use of Method (Monge/CADOP) | 2.26 |
| Spatial Reasoning | 2.59 |
| Completion & Consistency | 1.92 |
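For readers who wish to see how statistics of this kind are obtained from the exercise-level means, the following sketch (not the authors’ analysis script; it assumes NumPy and SciPy are available and uses the Conceptual Understanding column of Table 8 as example data) runs an independent-samples t-test and computes a pooled-SD Cohen’s d. The output should land close to the corresponding entries in Tables 10 and 11, with small differences attributable to rounding of the tabulated means.

```python
import numpy as np
from scipy import stats

# Per-exercise mean scores for Conceptual Understanding (taken from Table 8).
monge = np.array([2.56, 3.13, 2.89, 2.97, 3.40, 3.23, 3.06, 2.89, 2.86, 2.57])
cadop = np.array([2.98, 3.81, 3.36, 3.41, 3.72, 3.75, 3.76, 3.55, 3.20, 3.17])

# Independent-samples t-test with the ten exercise means as the unit of analysis.
t_stat, p_value = stats.ttest_ind(monge, cadop)

# Cohen's d using the pooled standard deviation of the two sets of exercise means.
pooled_sd = np.sqrt((monge.var(ddof=1) + cadop.var(ddof=1)) / 2)
d = (cadop.mean() - monge.mean()) / pooled_sd

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {d:.2f}")
```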
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
