Article

Learning Course Improvement Tools: Search Has Led to the Development of a Maturity Model

1 Department of Hotel and Restaurant Service, Tourism and Recreation Organization, Baltic International Academy, LV-1003 Riga, Latvia
2 Institute of Applied Computer Systems, Faculty of Computer Science, Information Technology and Energy, Riga Technical University, LV-1048 Riga, Latvia
3 Academy of Liepaja, Riga Technical University, LV-3401 Riga, Latvia
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(10), 1302; https://doi.org/10.3390/educsci15101302
Submission received: 28 July 2025 / Revised: 22 September 2025 / Accepted: 23 September 2025 / Published: 2 October 2025

Abstract

This article discusses the development of a course maturity model aimed at improving academic courses and aligning them with labor market demands. The model, named Ten Tools for Improving the Course (TTIC), is a continuous, cyclic, multi-component structure designed to assess and enhance course quality and effectiveness. Maturity models for the university environment are typically built around very specific, isolated domains, ignoring other key areas of university organizations. Moreover, existing university maturity models generally do not provide directions for activities and practices that enable assessment of the achieved level with the aim of fostering continuous improvement. The present study addresses these limitations by focusing on the “Algorithmization and Programming of Solutions” course at Riga Technical University, using statistical data to predict student performance and reduce dropout rates. With TTIC, the authors aim to enhance educational quality and develop professional competencies. The model evaluates various factors influencing student success, including content alignment, teaching methods, feedback, and adaptability. The paper highlights the use of statistical analysis to predict student performance and offers strategies for course enhancement.

1. Introduction

In today’s world, education plays a key role in shaping qualified professionals capable of effectively solving problems and adapting to constantly changing conditions. This creates the need for developing and implementing maturity models for academic courses, which will allow for assessing the quality of education and its alignment with labor market demands.
A maturity model is a structured, multi-level assessment framework designed to determine the current state of development of a process, organization, or system within a specific domain. Such models make it possible not only to assess the current state but also to identify gaps, set development priorities, and plan a pathway to higher levels of effectiveness, manageability, and adaptability (Adekunle et al., 2022). Originally developed to assess the maturity of software development processes (e.g., the Capability Maturity Model, CMM) (Paulk, 2009), maturity models are now widely applied in management, education, healthcare, logistics, and other fields. Maturity models serve as tools for diagnosis, improvement planning, and benchmarking, enabling organizations to systematically develop their competencies in line with best practices and strategic objectives. In this context, maturity is understood as the level of development of a system—its stability, manageability, degree of process standardization, and capacity for continuous improvement.
A maturity model is a tool that describes and analyzes the behaviors, practices, and processes that enable an organization to achieve reliable and sustainable results. The term “maturity” refers to “full development or a perfected state.” When applied to an organization, maturity corresponds to the ideal condition for achieving its objectives. Project maturity indicates that the organization is ideally equipped to manage its projects (Andersen & Jessen, 2003). Maturity models provide a clear vision with a description of the path ahead—a roadmap—for any organization interested in development (Tarhan et al., 2016). A maturity model (MM) is a conceptual framework of capabilities consisting of sequential, discrete levels of maturity for the evolution of processes, reflecting the expected, desired, or typical evolutionary path for these processes. Maturity models are widely used to assess and improve processes in many fields, such as industry, information technology, healthcare, education, and others (Kolukısa Tarhan et al., 2020).
For universities, maturity models are a comprehensive tool for assessing the diverse aspects of functioning to improve the educational process and development strategy (Szydło et al., 2024). Maturity models have proven their value, and according to Tocto-Cano et al. (2020), universities need to implement descriptive, prescriptive, or evaluative maturity models with elements of flexibility and automation to ensure quality and continuous improvement in education (Tocto-Cano et al., 2020). Overall, in the context of education, a maturity model can be defined as a diagnostic and managerial tool that allows for assessing the level of development of a course, program, or educational practice according to predefined criteria and components. Maturity models can help identify strengths and weaknesses, determine areas for improvement, and ensure a phased transition from fragmented, reactive approaches to systemic, proactive, and adaptive teaching strategies aligned with the requirements of Education 4.0 and the labor market.
However, maturity models for the university environment are typically based on very specific, isolated domains, ignoring other key areas of university organizations, and do not offer ways to enhance continuous improvement (Tocto-Cano et al., 2020).
In the field of education, there are several maturity models aimed at project-based learning (PBLCMM: Project Based Learning Capability Maturity Model) (Al Mughrabi & Jaeger, 2018), transforming student engagement, success, and retention in higher education institutions (SESR-MM: the Student Engagement Success and Retention Maturity Model) (Clarke et al., 2013), and the Engineering Education Active Learning Maturity Model (E2ALM2) (Arruda & Silva, 2021).
Generally, university maturity models do not provide directions for activities and practices that would allow for the assessment of the achieved level with the goal of fostering continuous improvement (Tocto-Cano et al., 2020). Therefore, the models presented in the literature are not entirely suitable as tools for the development and the continuous improvement of a specific subject’s curriculum. For example, PBLCMM can be effectively used for one type of assignment in a programming course to enhance the project component where students apply all acquired knowledge. The model most aligned with our tasks is the Engineering Education Active Learning Maturity Model (E2ALM2), whose authors developed a tool for integrating active learning methods into the educational process, such as Problem-Based Learning, Peer-to-peer, Cooperative and Collaborative Learning, EduScrum, and the Flipped Classroom.
In a generalized form, the maturity of an academic course can be defined as the extent to which the course meets quality education standards and contemporary requirements, including Education 4.0 (El Moutchou & Touate, 2024; García-Santiago & Díaz-Millón, 2024) and, especially for technical specialties, Industry 4.0 (Onu et al., 2024). On the basis of these provisions, a university course maturity model should include all elements significant for achieving educational goals, as well as a system of criteria and indicators that allow assessing the development level of the course.
The relevance of developing a maturity model for university courses is driven by several reasons. Table 1 presents these reasons, along with the opportunities provided by using the course maturity model.
Thus, overall, developing a maturity model for academic courses is a relevant and important step toward enhancing the quality of education, developing professional competencies, and ensuring alignment with labor market demands. Such models can help identify weaknesses in educational programs and inform measures for improvement. Implementing maturity models will contribute to raising the quality of education and to shaping qualified specialists ready for successful professional careers.
Designing maturity models is a multi-stage process, comprising anywhere from three stages (Tapia et al., 2007) to eight (Lasrado et al., 2015).
We were guided by the six phases of MM development proposed by De Bruin et al. (2005): Scope, Design, Populate, Test, Deploy, and Maintain (Table 2). For each phase, dedicated studies are conducted, the results of which can be used to refine decisions made in previous phases.
This paper presents the results of the first three phases: we defined the Scope and carried out the conceptual Design and Populate phases of a maturity model for improving an academic course.

2. Description of the Course “Algorithmization and Programming of Solutions”

The main CS1 course at RTU is “Algorithmization and Programming of Solutions.” In 2020–2021, the course was conducted remotely due to COVID-19; since the 2022/2023 academic year, it has been held in person again (Uhanova et al., 2023). Each year, this mandatory course at Riga Technical University is attended by roughly 450 to 650 first-year students (538 in 2021; 578 in 2022; 628 in 2023; and 522 in 2024) (Prokofyeva et al., 2024). Until 2024, the course consisted of two parts and was taught over two semesters. Starting in September 2024, both parts have been combined into a single course, retaining the total number of credit points, and taught in the fall (first) semester.
The goal of the course is to teach students the fundamentals of algorithm development and programming for solving common tasks. The main objectives are providing students with the knowledge, skills, and competencies to understand and apply algorithmic concepts for solving various problems; teaching the use of a high-level programming language; developing skills in reading, writing, and using code examples for solving standard problems (Prokofyeva et al., 2015). The course result is achieved by developing a set of documented programs of varying complexity. During the semester, students complete one test and 12 laboratory works, develop a final project, and take a written exam.
The laboratory work tasks cover the following topics: linear and branching programs, loops, nested loops, arrays, multidimensional arrays, strings, recursion, lists, text and binary files, and creating a graphical user interface. Each lab consists of several programming exercises and a test to assess the acquired knowledge. One of the exercises is completed in class in pairs, while the rest are performed individually at home. Depending on the difficulty, students receive a certain number of points for each task, with a maximum of 10 points for each lab. The average score for the lab work accounts for 35% of the final course grade.
The last month of the semester is dedicated to completing the final project, which accounts for 15% of the final course grade. Students can choose between two options for the final project. The first option is to develop a system for processing a text file, capable of adding, deleting, modifying, searching, and sorting records. This task is completed individually. The second option is for a team of up to six students to implement a text file compression algorithm. Students are allowed to choose the compression method independently.
At the end of the semester, students take a test, which counts for 20% of the final course grade, and an exam, which accounts for an additional 30%.
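To make the grading scheme concrete, here is a minimal sketch of the weighted final grade computation; the weights follow the percentages stated above, while the function name and the sample scores are hypothetical.

```python
# Weights per the course description: labs 35%, final project 15%,
# test 20%, exam 30% of the final grade (10-point scale throughout).
WEIGHTS = {"labs": 0.35, "project": 0.15, "test": 0.20, "exam": 0.30}

def final_grade(scores: dict) -> float:
    """Weighted final course grade on the 10-point scale."""
    return sum(WEIGHTS[part] * scores[part] for part in WEIGHTS)

# Hypothetical student: lab average 7, project 8, test 9, exam 6.
print(final_grade({"labs": 7, "project": 8, "test": 9, "exam": 6}))  # 7.25
```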
In addition to these graded components, students are given self-assessment tasks during lectures to monitor their understanding of the material. These tasks are similar to the assignments covered in class. While completing them is voluntary and does not affect the final grade, they provide timely feedback for both instructors and students.
To improve student performance and reduce dropout rates, students without prior programming knowledge are encouraged to attend the supplementary course “Algorithmization Practice.” This course is optional and does not award credit points (Uhanova et al., 2023). Nevertheless, the dropout rate remains high, which is a significant issue for many higher education institutions (Gutiérrez-Monsalve et al., 2024; Rabelo & Zárate, 2025). The share of students failing “Algorithmization and Programming of Solutions” on the first attempt ranges from 25% (in 2024) to 35% (in 2022). Since the course is mandatory, students are entitled to two additional attempts to pass the exam, and between 31% (in 2024) and 53% (in 2023) of those who did not pass on the first attempt successfully complete the course this way. If both additional attempts are unsuccessful, the student must retake the “Algorithmization and Programming of Solutions” course. Unfortunately, a portion of the students—approximately 12% in 2022 and up to 14% in 2021—drop out after the first semester for various reasons, including academic debts.
To identify at-risk students and improve the study course “Algorithmization and Programming of Solutions,” staff at Riga Technical University have been conducting statistical research for several years. The study was conducted in accordance with ethical principles, including ensuring the confidentiality of personal data and obtaining informed consent from participants. Data on student performance in this course were collected for analysis, alongside students’ prior knowledge, assessed through a test at the semester’s start. This test consisted of five questions. In the first question, students indicated whether they had taken programming lessons at their previous educational institution or any programming courses before. The next four questions involved tasks: the second and third required students to determine the function of a given algorithm, while the fourth and fifth asked what the given linear and cyclic programs would display on the screen (Uhanova et al., 2023).
The authors propose the creation of a maturity model for the course that facilitates its improvement and have conducted research to enhance education quality and develop professional competencies at Riga Technical University.

3. Predicting Student Performance Using Risk of Underachievement

Predicting student performance involves evaluating future academic results based on various factors such as current grades, class attendance, group work participation, and other indicators. This is a crucial component of the course maturity model, as it allows instructors and students to better understand learning prospects and develop strategies to improve performance. To enhance the CS1 course “Algorithmization and Programming of Solutions” and reduce dropout rates, the relationship between grades for final assessments (exam, test, and final project) and lab work completed during the semester was analyzed. Data from 377 first-year students who took the course in the fall of 2024 were reviewed. Table 3 provides details on students’ success in final assessments.
As shown in Table 3, about 19% of students struggled with the exam, receiving low scores between 1 and 4. Fewer than 27% of students achieved high scores of 8 to 10 on the written exam. A similar situation was observed with the final project, which was successfully completed by about 38% of students, while 21% did not manage to complete it. Students performed significantly better on the test, which, unlike the exam, contained only theoretical questions about programming language syntax.
To identify students at high risk of failing the course early on and to find a correlation between the final course grade and lab grades, an analysis was conducted on lab scores and the number of completed assignments. For data comparison, contingency tables with the Yates-corrected chi-square test were used. All 12 labs, each consisting of several tasks (some performed at home and some in class), were divided into four stages of three labs each. At each stage, students were split into two groups: those who did not complete all assignments and those who completed every task. It should be noted that lab work, like the final project and the test, was not mandatory but did affect the final course grade.
As a result, the following trend was observed: compared to the beginning of the semester (the first stage, covering the first three labs), a larger proportion of students completed all assignments later in the semester (the third stage, labs 7 to 9), despite the option not to do them (Table 4).
In Stage 3, students completed more assignments than in Stage 1 (Yates-corrected chi-square = 141.42, p < 0.001). However, as Table 4 shows, the number of students who completed assignments but received low grades also increased (Yates-corrected chi-square = 22.09, p < 0.001).
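For illustration, the snippet below runs a Yates-corrected chi-square test on a 2 × 2 contingency table of the kind used here (stage by completion group); the cell counts are invented for the example and do not reproduce the study data.

```python
from scipy.stats import chi2_contingency

# Rows: Stage 1, Stage 3; columns: [not all assignments completed, all completed].
# Hypothetical counts for a cohort of 377 students per stage.
observed = [[180, 197],   # Stage 1
            [ 60, 317]]   # Stage 3
chi2, p, dof, expected = chi2_contingency(observed, correction=True)  # Yates correction
print(f"Yates-corrected chi-square = {chi2:.2f}, p = {p:.3g}")
```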
For prediction, we used risk assessments of final underachievement due to uncompleted assignments or low current performance. Assessing the risk of low final performance is a crucial component of the course maturity model, as it allows instructors and students to identify potential issues in advance and develop strategies to address them.
The assessed risks of underachievement on the final test are shown in Figure 1, and the risks of underachievement in the “Algorithmization and Programming of Solutions” course in Figure 2. In both figures, the arrows in gray rectangles indicate the risks of low scores on the test (Figure 1) or in the course (Figure 2) if all assignments are incomplete or scored “0.” The p-level (Yates-corrected chi-square) for the contingency tables is indicated in parentheses.
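Conceptually, each risk figure compares the proportion of low final scores between the group that left assignments incomplete and the group that completed them all; a minimal sketch of such a computation, with hypothetical counts, follows.

```python
# Hypothetical 2x2 outcome for one stage: failure counts per completion group.
incomplete_failed, incomplete_total = 45, 60    # skipped some assignments
complete_failed, complete_total = 63, 317       # completed every assignment

risk_incomplete = incomplete_failed / incomplete_total   # 0.75
risk_complete = complete_failed / complete_total         # ~0.20
print(f"relative risk of underachievement = {risk_incomplete / risk_complete:.1f}")  # ~3.8
```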
As seen in Figure 1, the presence or absence of prior programming knowledge at stage 0, when students took a voluntary preliminary knowledge test, did not affect success in the theoretical programming test. However, students with prior programming knowledge had a lower risk of underachievement in the course, as practical programming skills were crucial for completing the final project and passing the written exam.
However, statistical data analysis revealed that Stage 2 was the most crucial for identifying students at high risk of failing the course (Figure 2). During this stage, students worked on labs 4 to 6, covering topics such as loops, arrays, nested loops, and two-dimensional arrays.
This effect can be explained by the fact that mastering these topics is crucial for acquiring programming skills.
Notably, as shown in Figure 2, students who did not pass the last three lab assignments in Stage 4 (covering work with text and binary files and the creation of a graphical user interface) also faced a high risk of underachievement in the “Algorithmization and Programming of Solutions” course. By this point at the end of the semester, however, it is difficult to provide support to struggling students, which is why Stage 2 is key to reducing dropout rates. We consider the second stage pivotal because there is still an opportunity to provide timely support: to identify the reasons for deviations from the learning process, which may stem from a lack of motivation or from insufficient foundational preparation, a factor that in turn undermines motivation and attitudes toward the subject. After the second stage, it is not too late to implement measures that boost students’ motivation to study the subject and strengthen the necessary domain knowledge. These actions improve final grades and the formation of professional competencies, while fostering students’ engagement in developing as new professionals.
Such measures can be undertaken without a defined strategy, relying only on teaching experience and the learning technologies adopted in the course. However, this approach is unlikely to be effective and will not enable the educational process to evolve in line with the requirements of Education 4.0 and Industry 4.0. Accordingly, we decided to develop a maturity model for the creation and development of the course “Algorithmization and Programming of Solutions.” The maturity model we set out to develop had to be practically applicable and diagnostically powerful, serving as a tool aimed at preventing academic underachievement, increasing motivation, and personalizing learning. In the next section, we present the outcome of our efforts, implemented as a Cyclical Maturity Model for Academic Course Development, titled TTIC—Ten Tools for Improving the Course.

4. Maturity Model TTIC: Ten Tools for Improving the Course

Maturity models are action-oriented standards that consist of discrete maturity levels for classes of processes or organizations. They represent stages of increasing quantitative or qualitative changes in the capabilities of these processes or organizations to assess their progress relative to specific focus areas (Sadiq et al., 2021).
For assessing the maturity of an academic course, the following criteria and indicators can be used:
  • Alignment of course content with labor market demands and educational standards.
  • Student learning outcomes, alignment with the educational goals of the course and institution, and current market demands.
  • Relevance, novelty, and timely updating of course materials.
  • Effectiveness of teaching methods and technologies, including personalization.
  • Alignment of instructors’ practices with current trends in the educational process.
  • Quality of the educational process organization from the perspectives of administration, instructors, and students.
  • Level of student engagement, including encouragement of their independence and involvement in the educational process.
Maturity models can have various structures, with most resembling an ascending ladder (Secundo et al., 2015; Król & Zdonek, 2020); however, some have more complex structures, like petals (Pigosso & Rozenfeld, 2012) or gears (Yildiz et al., 2017). We believe that continuous, cyclic models with a multi-component structure are better suited as dynamic tools for improving academic courses, allowing for adjustments to continuously changing educational and market demands. Our conviction is based on research in the field of maturity models and on studies supporting a similar perspective regarding the transformation of corporate culture within the paradigm of the circular economy (Uhrenholt et al., 2022), information security (Alencar Rigon et al., 2014), the improvement of the supply chain for sustainable business development (Reefke et al., 2014), the evolution of the client experience (Purcărea, 2017), the development of electronic government portals (Al-Obathani & Ameen, 2019), addressing the key constraints that contribute to the failure of small and medium-sized enterprises (Funchall et al., 2011), as well as the ICMM (Intellectual Capital Maturity Model) for universities proposed by Secundo et al. (2015).
The maturity model we are developing is intended to serve as an instrument for the continuous improvement of the curriculum for a specific subject. In this context, the maturity of a technical course should, on the one hand, be determined by the extent to which it aligns with educational quality standards, current market requirements, Education 4.0 (El Moutchou & Touate, 2024; García-Santiago & Díaz-Millón, 2024), and Industry 4.0 (Onu et al., 2024). On the other hand, a mature course should optimally address students’ needs and support their professional development, equipping them with the competencies necessary to choose their future career paths and pursue professional growth. In addition, an essential component of a mature curriculum should be its ability to enable effective personalization, i.e., to possess tools for adapting to students with differing career interests and educational backgrounds.
Such a course, beyond its educational objectives, should also address issues of insufficient student motivation, low academic performance, and student attrition. It is important to note that low motivation and student attrition remain pressing issues in contemporary higher education (Tayebi et al., 2021). Typically, academic, socio-economic, psychological, and organizational factors (Nikolaidis et al., 2022) are considered key contributors to dropout rates; these rarely act in isolation and more frequently interact in complex ways. For example, low academic achievement may be caused by a combination of high fatigue, elevated anxiety, and/or a poor financial situation, coupled with poor adaptation to the educational environment (Yaacob et al., 2023; Drăghici & Cazan, 2022; Chu et al., 2023). Therefore, the maturity model that serves as the basis for preventive measures at the course (or university) level should incorporate solutions that address the complex nature of risks and implement effective student support systems. The maturity model for university-level courses should thus not only include all elements critical for achieving a comprehensive set of educational goals, but also take into account the dynamic interrelation of all significant internal and external connections.
An analysis of existing models used in education (PBLCMM (Al Mughrabi & Jaeger, 2018), SESR-MM (Clarke et al., 2013), and E2ALM2 (Arruda & Silva, 2021)) has shown that while they function well within their intended scope, they are not effective as tools for addressing the issues described above. However, the tools and approaches of PBLCMM, SESR-MM, and E2ALM2 can be adapted and modified for our proposed maturity model. For example, PBLCMM is designed for a specific purpose—namely, to assess educational value for both students and organizations through the academic results of courses that utilize Project-Based Learning (PBL) (Al Mughrabi & Jaeger, 2018). Meanwhile, the maturity model E2ALM2 encompasses five elements—Content Quality, Organizational Environment, Organizational Infrastructure, Lecturer, and Interactions—that are highly relevant to any educational course. However, this model was developed primarily to evaluate the maturity of active learning implementation in a curriculum or course (Arruda & Silva, 2021).
Building on existing concepts and examples of maturity models, we chose to approach the problem of curriculum improvement more comprehensively and considered ten elements in modeling the maturity of the programming course “Algorithmization and Programming of Solutions”. Unlike other models, we have included elements that capture the dynamics of development in relation to the external environment, specifically Development, Partnership, and Adaptability, which are presented as a separate domain.
All elements of the proposed maturity model are aimed at evaluating the effectiveness and quality of the course, identifying strengths and weaknesses, and formulating improvement measures in accordance with the tasks described above.
The elements are as follows:
  • Objectives and Goals: The course objectives and goals should be clearly defined and understood by all participants in the educational process. They must align with labor market demands and educational standards.
  • Content: The course content should be current and reflect modern programming trends. It should cover programming fundamentals as well as more advanced topics to provide students with in-depth knowledge and skills.
  • Educational Technologies: Teaching methods should foster critical thinking, problem-solving, and teamwork skills. They should include practical sessions, lab work, projects, and other learning formats.
  • Academic indicators (knowledge assessment, attendance, completion of assignments): Student knowledge should be assessed regularly and objectively, using tests, exams, practical assignments, and projects.
  • Feedback: Feedback from instructors and other educational participants should be timely and constructive, aiding students in improving their knowledge and skills.
  • Outcomes: Course outcomes should be evaluated based on criteria set at the beginning of the course, including not only knowledge assessment but also students’ ability to apply it in practice.
  • Development: To remain relevant and effective, the course should continually evolve and improve based on feedback not only from students and instructors, but also be aligned with the current development priorities of industry and higher education as a whole—namely, Industry 4.0 and Education 4.0. In this context, the university can serve as the resource that ensures the necessary communication to translate the needs of Industry 4.0 into advances in education at all levels within the lifelong learning paradigm.
  • Partnerships: Collaborations with industry and other educational institutions can enhance course quality and relevance. Partnerships can help address students’ motivation for acquiring the required competencies and their involvement in the educational process, as well as financial self-sufficiency, thereby reducing the risk of students leaving the university due to financial hardship while increasing their professionalism.
  • Resources: The availability of essential resources, such as pedagogues, textbooks, software, overall intellectual capital, equipment, and university infrastructure, is crucial for the successful implementation of the course.
  • Adaptability: The course should be tailored to meet students’ needs and preparation levels, helping them better grasp the material and develop their skills.
The result of this work is a course maturity model consisting of 10 elements, referred to as the Ten Tools for Improving the Course, or TTIC for short (Figure 3).
Our TTIC model is a continuous, cyclic, multi-component structure with elements placed in two interconnected fields: the “right” and the “left.” Elements such as Course Development, Partnerships, and Adaptability are in a separate field because they are interrelated and collectively impact the “left field,” depending on the effectiveness of its elements. In this model, “Resources” are at the top as a limiting factor for other educational system components. The course content is determined by objectives and goals, which are continually adjusted to meet new educational and market demands. Teaching methods should incorporate technologies to fulfill the objectives of other elements as needed. The course cannot be optimized for changing conditions without continuous assessment of knowledge, overall outcomes, and feedback from internal (department, university, students) and external (industry, market) factors.
It is important to note that each element of the model should develop according to the classic maturity scale, consisting of several levels, for example, the five levels Initial, Managed, Defined, Quantitatively Managed, and Optimizing (Knosp et al., 2019).
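Purely as an illustrative sketch, a course self-assessment under TTIC could be recorded as a mapping from the ten elements to their current maturity levels; the enumeration follows the classic scale named above, and the snapshot values are hypothetical.

```python
from enum import IntEnum

class MaturityLevel(IntEnum):
    INITIAL = 1
    MANAGED = 2
    DEFINED = 3
    QUANTITATIVELY_MANAGED = 4
    OPTIMIZING = 5

TTIC_ELEMENTS = [
    "Objectives and Goals", "Content", "Educational Technologies",
    "Academic Indicators", "Feedback", "Outcomes",
    "Development", "Partnerships", "Resources", "Adaptability",
]

# Hypothetical snapshot: every element assessed, Feedback already at level 3.
assessment = {element: MaturityLevel.INITIAL for element in TTIC_ELEMENTS}
assessment["Feedback"] = MaturityLevel.DEFINED
# One simple aggregate view: the least mature element bounds overall maturity.
print(min(assessment.values()).name)  # INITIAL
```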

5. Link Between TTIC and Empirical Data

The “Feedback” and “Academic Indicators” elements form the basis for forecasting and early intervention. The results presented in Figure 1 and Figure 2 show that systematic collection and analysis of current academic indicators (completion of lab work, current scores) make it possible to predict the risk of failure in final assessments. This directly corresponds to the following TTIC components (a minimal early-warning sketch is given after the list):
  • Academic Indicators—regular monitoring of performance, attendance, and assignment completion;
  • Feedback—timely informing of students and instructors about risks, enabling adjustments to the learning trajectory.
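A minimal sketch of such an early-warning check, assuming a per-student record of lab scores: the Stage 2 labs (labs 4–6) come from the analysis above, but the data layout and the cut-off threshold are illustrative assumptions, not part of the published model.

```python
STAGE2_LABS = (4, 5, 6)    # loops, arrays, nested loops, two-dimensional arrays
LOW_SCORE = 4              # hypothetical cut-off on the 10-point lab scale

def at_risk(lab_scores: dict) -> bool:
    """Flag a student whose Stage 2 labs are missing or scored low."""
    return any(lab_scores.get(lab) is None or lab_scores[lab] <= LOW_SCORE
               for lab in STAGE2_LABS)

# Hypothetical record: lab 5 not submitted -> flag for timely support.
print(at_risk({1: 8, 2: 7, 3: 9, 4: 6, 5: None, 6: 7}))  # True
```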
The TTIC model emphasizes the need to adapt the course to individual student needs. Empirical data confirm that:
  • Students with a weak background are not “poor performers”—they need a different trajectory;
  • Stage 2 is a critical point where the workload can be adjusted, additional materials offered, and mentoring arranged.
The “Adaptability” component, among other tasks, is aimed at personalizing support based on real data. Implementing “Adaptability” in practice recognizes that not everyone learns in the same way, and a mature course should respond to signals (low scores, incomplete assignments) not with punishment but with support.
The “Development” component targets continuous improvement based on analysis of student performance at all stages, which serves as the primary feedback mechanism between the course/instructor and the student. In the studied cohort, by Stage 3 the number of students completing all assignments rose markedly, indicating increased engagement. At the same time, the number of those who completed the assignments but received low scores also increased—meaning that the quantity of completed work does not equal the quality of understanding. This provides input for “Development” to revise methods, assignments, explanations, and examples.
TTIC evaluates not only the final outcome in the “Results” component but also the process of achieving it. The data showed that 45% of students received the highest score on the syntax test, whereas the exam and the project were more challenging, with only 26.5% and 38% of students, respectively, earning high scores. Therefore, knowledge of syntax does not equal the ability to program. This is an important conclusion for “Results” in TTIC: it is necessary not only to assess knowledge but also to employ other mechanisms across all components of the model, returning to Objectives and Goals for data-driven adjustment. Dynamic goal adjustment will be one of the hallmarks of a mature course under TTIC.
Stage 2 (labs 4–6) was identified as a critical intervention point—this is a direct application of TTIC to increase the course’s maturity. The model enables not merely assessment but also anticipation, adaptation, and improvement—which is its main purpose.
Interestingly, prior knowledge did not affect success on the theoretical test (Figure 1) but reduced the risk of failing the course overall (Figure 2). This indicates that practical skills are more important than theoretical knowledge and confirms the need for the “Educational Technologies” component in TTIC, where the emphasis is on projects, laboratory work, and the application of knowledge.
Based on the proposed model, student assignments within the programming course maturity model should be diverse and tailored to different skill levels and needs. Examples include practical tasks, lab work, group and individual projects, program testing, reports and presentations, research projects, documentation development, error correction, and self-assessment.
Currently, the “Algorithmization and Programming of Solutions” course includes classroom and homework practical sessions, as well as lab work.
Programming lab work is a crucial element of a course maturity model, as it allows students to apply theoretical knowledge in practice and develop professional skills. Within this model, labs provide opportunities for practical application: students solve real problems and create working programs, which helps them understand how theoretical concepts are implemented in real projects. Labs develop programming skills such as algorithmic thinking, use of development tools, debugging, and code testing, and they foster collaboration among students, enhancing teamwork and knowledge-sharing skills. Labs also offer diversity and personalization: varying in complexity and topic, they can be adapted to different student skill levels and interests. Finally, they provide feedback: instructors can help identify and correct student errors to improve programming skills, and can assess student progress to identify areas needing additional focus.
Homework and lab work in programming serve additional functions, reinforcing material covered in lectures and practical sessions, which fosters deeper understanding and retention. They also promote independence by encouraging students to learn and solve problems on their own, enhancing self-education and critical thinking skills.
Self-assessment, especially without grading, is a crucial element of the course maturity model, as it fosters student responsibility for their learning, reflecting their ability to evaluate and regulate their academic activities. It aids in developing skills in self-directed learning, self-assessment, and critical thinking. Additionally, self-assessment helps students develop organizational skills and identify their strengths and weaknesses. Monitoring progress can motivate further development, and when progress is lacking, instructors should provide feedback and tailor approaches to address student challenges. Overall, self-assessment leads to a deeper understanding and assimilation of material as students analyze and reflect on their knowledge, which is essential for a successful career in any field and a key success factor in today’s world.
In addition to what has been described, Table 5 presents some possibilities for using the elements of the model in teaching practice, in light of the empirical results.
The data obtained not only enable the forecasting of academic performance but also provide a practical foundation for applying the TTIC model, transforming it from a conceptual framework into a tool for educational quality management. Particularly valuable is the identification of a critical intervention point—Stage 2—which enables timely student support, instructional adjustments, and reduced attrition, fully in line with the objectives of the TTIC maturity model.

6. Limitations

This study and the proposed TTIC model have a number of limitations that should be taken into account. The inspiration for developing the model came from the problems identified in the course “Algorithmization and Programming of Solutions” at a single educational institution. Of course, in developing the TTIC we aimed for its broad application; however, it is necessary to verify the possibility of its use in other disciplines and educational institutions.
At present, the internal and external limitations of the TTIC have not been investigated or described, and barriers such as accreditation requirements of academic programs, the degree of instructors’ autonomy in curriculum design, and various obstacles to a personalized approach, among others, are not taken into account. For example, the “Resources” component identifies resource scarcity as a limiting factor, but strategies for overcoming structural institutional barriers have not yet been explored.
Finally, the maturity levels of the model have been defined only theoretically and currently lack practical implementation, as no validated assessment protocols exist. For applicability, further research is required to elaborate the Populate stage, to conduct testing in the “Algorithmization and Programming of Solutions” course as well as in other courses, to implement the Deploy stage at least within a single subject, and to develop the Maintain aspects, including the automation of most processes.

7. Future Research

At present, the TTIC remains primarily a theoretical construct. Therefore, the first step will involve a comprehensive analysis of all TTIC components, with a description of their practical application and their capacity to respond to changing internal and external conditions. For testing the TTIC, it will be necessary to define indicators for every component at each maturity level (from Initial to Optimizing).
In addition, future research will aim to address the aforementioned limitations. To develop and optimize all components of the model, interviews and focus groups will be conducted with students, instructors, and other internal and external stakeholders. This is particularly important for refining the “Adaptability” and “Feedback” components, so that they become more responsive and oriented both toward the human dimension and, at the same time, toward the principles of Education 4.0 and Industry 4.0. Separate studies will also be devoted to analyzing and exploring possible forms of implementation of the “Partnership” component.
Finally, we propose conducting a longitudinal study that follows students over several semesters to determine whether the sustained application of TTIC leads to measurable improvements not only in academic performance, but also in long-term professional competencies, self-efficacy, and career trajectories. This would allow TTIC to be positioned not merely as a tool for reducing attrition, but as a driver of transformative lifelong learning aligned with the paradigms of Education 4.0 and Industry 4.0.
In summary, future research will aim to validate, digitize, humanize, and scale the TTIC model, evolving it from a pilot initiative into a globally applicable, evidence-based system for the continuous enhancement of higher education curricula.

8. Conclusions

This article reflects the culmination of a ten-year search for opportunities and tools to improve the course program. This necessity arises from two main challenges: (1) maintaining student performance, which is essential for developing competent specialists who meet changing environmental demands, and (2) aligning the course program with evolving educational environments and industry and market needs. This long journey led to the understanding that a dynamic tool is needed to comprehensively evaluate all significant external and internal factors, allowing for timely and effective action. According to the authors, maturity models possess these qualities. Thus, the research resulted in a continuous, cyclic maturity model for courses, consisting of 10 elements, called the Ten Tools for Improving the Course, or TTIC.
This model can be used to develop educational interventions aimed at increasing the maturity of the course. For example, by analyzing data from academic indicators (results) and comparing them with feedback from all participants in the educational process as well as the “right field,” it is possible to improve tasks, refine objectives, and make necessary adjustments to the educational process (Educational technologies, Content), including optimization of personalized learning aspects. When utilized fully, this model helps ensure that no learning factors aligned with current market demands—and essential for fostering and maintaining student motivation, including among at-risk groups—are overlooked.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/educsci15101302/s1.

Author Contributions

Conceptualization, D.Z., M.U. and N.P.; methodology, D.Z., M.U. and N.P.; software, D.Z.; validation, S.K. and V.Z.; formal analysis, D.Z. and M.U.; investigation, D.Z. and N.P.; resources, M.U. and A.J. (Aleksejs Jurenoks); data curation, M.U. and A.J. (Aleksejs Jurenoks); writing—original draft preparation, S.K. and V.Z.; writing—review and editing, S.K.; visualization, S.K. and V.Z.; supervision, A.J. (Anita Jansone); project administration, N.P.; funding acquisition, A.J. (Aleksejs Jurenoks) and A.J. (Anita Jansone). All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Riga Technical University [RTU-PA-2024/1-0088] and the APC was funded by Riga Technical University.

Institutional Review Board Statement

We would like to clarify that, according to Latvian legislation on personal data protection, academic achievement or student performance data does not constitute sensitive personal data. In accordance with the Latvian Personal Data Processing Law, sensitive personal data includes only data that indicates a person’s race, ethnic origin, religious, philosophical and political beliefs, trade union membership, as well as information about a person’s health or sex life. As such, the data used in our research (i.e., student academic performance) is not categorized as sensitive and does not require special approval from an Ethics Committee or Institutional Review Board under local law. Thus, there is no approval code or date applicable in our case. Further information regarding this regulation can be found in the Latvian Personal Data Processing Law, which implements the General Data Protection Regulation (GDPR) in Latvia. For reference, the definition of sensitive personal data in Latvian law is as follows: “sensitive personal data—personal data that indicates a person’s race, ethnic origin, religious, philosophical and political beliefs, trade union membership, as well as provides information about a person’s health or sex life” (Latvian Personal Data Processing Law, https://www.vestnesis.lv/ta/id/4042-fizisko-personu-datu-aizsardzibas-likums, accessed on 1 September 2025).

Informed Consent Statement

The authors of the article, as lecturers, collected data on student performance in the course “Algorithmization and Programming of Solutions” in electronic format. This option is provided by the educational portal https://ortus.rtu.lv/ (accessed on 30 January 2025).

Data Availability Statement

The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request. The study was conducted in accordance with ethical principles, including ensuring the confidentiality of personal data.

Acknowledgments

This research has been supported by Research and Development grant No RTU-PA-2024/1-0088 under the EU Recovery and Resilience Facility funded project No. 5.2.1.1.i.0/2/24/I/CFLA/003 “Implementation of consolidation and management changes at Riga Technical University, Liepaja University, Rezekne Academy of Technology, Latvian Maritime Academy and Liepaja Maritime College for the progress towards excellence in higher education, science, and innovation”.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Adekunle, S. A., Aigbavboa, C., Ejohwomu, O., Ikuabe, M., & Ogunbayo, B. (2022). A critical review of maturity model development in the digitisation era. Buildings, 12(6), 858. [Google Scholar] [CrossRef]
  2. Alencar Rigon, E., Merkle Westphall, C., Ricardo dos Santos, D., & Becker Westphall, C. (2014). A cyclical evaluation model of information security maturity. Information Management & Computer Security, 22(3), 265–278. [Google Scholar] [CrossRef]
  3. Al Mughrabi, A., & Jaeger, M. (2018). Utilising a capability maturity model to optimise project based learning–case study. European Journal of Engineering Education, 43(5), 679–692. [Google Scholar] [CrossRef]
  4. Al-Obathani, F. S., & Ameen, A. A. (2019, November 4–5). Measurement-based smart government maturity model. Amsterdam, The Netherlands. [Google Scholar]
  5. Andersen, E. S., & Jessen, S. A. (2003). Project maturity in organisations. International Journal of Project Management, 21(6), 457–461. [Google Scholar] [CrossRef]
  6. Arruda, H., & Silva, É. R. (2021). Assessment and evaluation in active learning implementations: Introducing the engineering education active learning maturity model. Education Sciences, 11(11), 690. [Google Scholar] [CrossRef]
  7. Chu, T., Liu, X., Takayanagi, S., Matsushita, T., & Kishimoto, H. (2023). Association between mental health and academic performance among university undergraduates: The interacting role of lifestyle behaviors. International Journal of Methods in Psychiatric Research, 32(1), e1938. [Google Scholar] [CrossRef] [PubMed]
  8. Clarke, J. A., Nelson, K. J., & Stoodley, I. D. (2013, July 1–4). The place of higher education institutions in assessing student engagement, success and retention: A maturity model to guide practice. The 2013 Higher Education Research and Development Society of Australasia Conference (pp. 1–11), Auckland, New Zealand. [Google Scholar]
  9. De Bruin, T., Rosemann, M., Freeze, R., & Kaulkarni, U. (2005). Understanding the main phases of developing a maturity assessment model. In Australasian conference on information systems (ACIS) (pp. 8–19). Australasian Chapter of the Association for Information Systems. [Google Scholar]
  10. Drăghici, G. L., & Cazan, A. M. (2022). Burnout and maladjustment among employed students. Frontiers in Psychology, 13, 825588. [Google Scholar] [CrossRef]
  11. El Moutchou, K., & Touate, S. (2024). Pedagogical innovation in the context of higher education 4.0: A systematic literature review. Multidisciplinary Reviews, 7(11), 2024275. [Google Scholar] [CrossRef]
  12. Funchall, D., Herselman, M. E., & Van Greunen, D. (2011, November 9–11). People innovation capability maturity model (PICaMM) for measuring SMMEs in South Africa. CIRN Prato Community Informatics Conference 2011, Prato, Italy. [Google Scholar]
  13. García-Santiago, L., & Díaz-Millón, M. (2024). Pedagogical and communicative resilience before industry 4.0 in higher education in translation and interpreting in the twenty-first century. Education and Information Technologies, 29(17), 23495–23515. [Google Scholar] [CrossRef]
  14. Goeman, K., & Dijkstra, W. (2022). Creating mature blended education: The European maturity model guidelines. Higher Education Studies, 12(3), 34–46. [Google Scholar] [CrossRef]
  15. Gutiérrez-Monsalve, J. A., Garzón, J., Forero-Meza, M. F., Estrada-Jiménez, C., & Segura-Cardona, A. M. (2024, October). Factors associated with dropout in engineering: A structural equation and logistic model approach. In Workshop on engineering applications (pp. 225–236). Springer Nature Switzerland. [Google Scholar]
  16. Kasser, J. E., & Frank, M. (2010, July 12–15). 1.2.2 A maturity model for the competency of systems engineers. INCOSE International Symposium (Vol. 20, pp. 37–50), Chicago, IL, USA. [Google Scholar]
  17. Knosp, B. M., Barnett, W. K., Anderson, N. R., & Embi, P. J. (2019). Corrigendum: Research IT maturity models for academic health centers: Early development and initial evaluation. Journal of Clinical and Translational Science, 3(1), 45. [Google Scholar] [CrossRef]
  18. Kolukısa Tarhan, A., Garousi, V., Turetken, O., Söylemez, M., & Garossi, S. (2020). Maturity assessment and maturity models in health care: A multivocal literature review. Digital Health, 6, 2055207620914772. [Google Scholar] [CrossRef]
  19. Król, K., & Zdonek, D. (2020). Analytics maturity models: An overview. Information, 11(3), 142. [Google Scholar] [CrossRef]
  20. Lasrado, L. A., Vatrapu, R., & Andersen, K. N. (2015). Maturity models development in IS research: A literature review. IRIS: Selected Papers of the Information Systems Research Seminar in Scandinavia, 6, 6. [Google Scholar]
  21. Nikolaidis, P., Ismail, M., Shuib, L., Khan, S., & Dhiman, G. (2022). Predicting student attrition in higher education through the determinants of learning progress: A structural equation modelling approach. Sustainability, 14(20), 13584. [Google Scholar] [CrossRef]
  22. Onu, P., Ikumapayi, O. M., Omole, E. O., Amoyedo, F. E., Ajewole, K. P., Jacob, A. S., Akiyode, O. O., & Mbohwa, C. (2024, April 2–4). Science, technology, engineering and mathematics (STEM) education and the influence of Industry 4.0. 2024 International Conference on Science, Engineering and Business for Driving Sustainable Development Goals (SEB4SDG) (pp. 1–8), Omu-Aran, Nigeria. [Google Scholar]
  23. Paulk, M. C. (2009). A history of the capability maturity model for software. ASQ Software Quality Professional, 12(1), 5–19. [Google Scholar]
  24. Pigosso, D. C. A., & Rozenfeld, H. (2012). Ecodesign maturity model: The ecodesign practices. In Design for innovative value towards a sustainable society, proceedings of EcoDesign 2011: 7th international symposium on environmentally conscious design and inverse manufacturing (pp. 424–429). Springer Netherlands. [Google Scholar]
  25. Prokofyeva, N., Uhanova, M., Zagulova, D., Katalnikova, S., Ziborova, V., & Uhanovs, M. (2024, October 3–4). Educational group project organization approaches and their impact on student academic performance. 2024 IEEE 65th International Scientific Conference on Information Technology and Management Science of Riga Technical University (ITMS) (pp. 1–7), Riga, Latvia. [Google Scholar]
  26. Prokofyeva, N., Uhanova, M., Zavyalova, O., & Katalnikova, S. (2015, June 18–20). Structuration of courses at studying disciplines of programming. Environment Technology Resources Proceedings of the International Scientific and Practical Conference (Vol. 3, pp. 159–163), Rezekne, Latvia. [Google Scholar]
  27. Purcărea, T. (2017). Marketing’s renaissance by committing to improve CX. Holistic Marketing Management, 7(3), 30–47. [Google Scholar]
  28. Rabelo, A. M., & Zárate, L. E. (2025). A model for predicting dropout of higher education students. Data Science and Management, 8(1), 72–85. [Google Scholar] [CrossRef]
  29. Reefke, H., Ahmed, M. D., & Sundaram, D. (2014). Sustainable supply chain management—Decision making and support: The SSCM maturity model and system. Global Business Review, 15(Suppl. 4), 1S–12S. [Google Scholar] [CrossRef]
  30. Sadiq, R. B., Safie, N., Abd Rahman, A. H., & Goudarzi, S. (2021). Artificial intelligence maturity model: A systematic literature review. PeerJ Computer Science, 7, e661. [Google Scholar] [CrossRef]
  31. Secundo, G., Elena-Perez, S., Martinaitis, Ž., & Leitner, K. H. (2015). An intellectual capital maturity model (ICMM) to improve strategic management in European universities: A dynamic approach. Journal of Intellectual Capital, 16(2), 419–442. [Google Scholar] [CrossRef]
  32. Silva, C., Ribeiro, P., Pinto, E. B., & Monteiro, P. (2021). Maturity model for collaborative R&D university-industry sustainable partnerships. Procedia Computer Science, 181, 811–817. [Google Scholar]
  33. Szydło, J., Sakowicz, A., & Di Pietro, F. (2024). University maturity model—A bibliometric analysis. Economics and Environment, 91(4), 938. [Google Scholar] [CrossRef]
  34. Tapia, R. S., Daneva, M., & van Eck, P. (2007, April 23–26). Developing an inter-enterprise alignment maturity model: Research challenges and solutions. First International Conference on Research Challenges in Information Science, RCIS 2007, Ouarzazate, Morocco. [Google Scholar]
  35. Tarhan, A., Turetken, O., & Reijers, H. A. (2016). Business process maturity models: A systematic literature review. Information and Software Technology, 75, 122–134. [Google Scholar] [CrossRef]
  36. Tayebi, A., Gómez, J., & Delgado, C. (2021). Analysis on the lack of motivation and dropout in engineering students in Spain. IEEE Access, 9, 66253–66265. [Google Scholar] [CrossRef]
  37. Tocto-Cano, E., Paz Collado, S., López-Gonzales, J. L., & Turpo-Chaparro, J. E. (2020). A systematic review of the application of maturity models in universities. Information, 11(10), 466. [Google Scholar] [CrossRef]
  38. Uhanova, M., Prokofyeva, N., Katalnikova, S., Zavjalova, O., & Ziborova, V. (2023). The influence of prior knowledge and additional courses on the academic performance of students in the introductory programming course CS1. Procedia Computer Science, 225, 1397–1406. [Google Scholar] [CrossRef]
  39. Uhrenholt, J. N., Kristensen, J. H., Rincón, M. C., Adamsen, S., Jensen, S. F., & Waehrens, B. V. (2022). Maturity model as a driver for circular economy transformation. Sustainability, 14(12), 7483. [Google Scholar] [CrossRef]
  40. Yaacob, Z., Zakaria, N., Mokhtar, M., Jalal, S. F., Pati, K. M. I., & Rahmat, N. H. (2023). Exploring students’ fatigue: Is there a relationship between outcome with effort and performance? International Journal of Academic Research in Business and Social Sciences, 13(9), 155–173. [Google Scholar] [CrossRef] [PubMed]
  41. Yildiz, D., Boztoprak, H., & Guzey, Y. (2017). A research on project maturity perception of techno-entrepreneurship firms. PressAcademia Procedia, 4(1), 357–368. [Google Scholar] [CrossRef]
Figure 1. Risk Assessments of Underachievement on the Final Theoretical Test. Note. In the gray rectangles on the arrows, the following are indicated: (1) the risk of low exam scores if all assignments are incomplete or scored “0,” (2) in parentheses, the p-level (Yates-corrected chi-square) for the contingency tables. In the rectangles at the bottom of the figure, ‘Home’ indicates homework, ‘Class’ refers to laboratory work performed during classes, ‘Grade’ refers to the final grade of laboratory work, and ‘NMT’ (non-mandatory tests) represents optional tests conducted during lectures.
Figure 1. Risk Assessments of Underachievement on the Final Theoretical Test. Note. In the gray rectangles on the arrows, the following are indicated: (1) the risk of low exam scores if all assignments are incomplete or scored “0,” (2) in parentheses, the p-level (Yates corrected Chi-square) for the contingency tables. In the rectangles at the bottom of the figure, ‘Home’ indicates homework, ‘Class’ refers to laboratory work performed during classes, ’Grade’ refers to final grade of laboratory work, and ‘NMT’ (non-mandatory tests) represents optional tests conducted during lectures.
Education 15 01302 g001
Figure 2. Risk Assessments of Final Underachievement in the “Algorithmization and Programming of Solutions” Course. Note. The gray rectangles on the arrows indicate (1) the risk of low exam scores if all assignments are incomplete or scored “0” and (2), in parentheses, the p-level (Yates corrected Chi-square) for the contingency tables. Statistically insignificant risks are shown as “–”. In the rectangles at the bottom of the figure, ‘Home’ indicates homework, ‘Class’ refers to laboratory work performed during classes, ‘Grade’ refers to the final grade for laboratory work, and ‘NMT’ (non-mandatory tests) represents optional tests conducted during lectures.
Figure 3. A Cyclical Maturity Model for Academic Course Development: “TTIC—Ten Tools for Improving the Course”.
Table 1. Possibilities of Using the Course Maturity Model.
Reason | Opportunities of the Maturity Model (MM)
Enhancing the quality of education | MM allows for assessing how well a course aligns with current requirements and standards, and identifies areas for improvement. This contributes to enhancing the quality of education and the training of professionals.
Improving interaction between instructors and students | The model helps identify weaknesses in the educational process and develop strategies to address them. Instructors can adapt their teaching methods to meet students’ needs, which promotes more effective interaction.
Development of professional competencies | The model enables the identification of competencies that students acquire through the course. This helps instructors design assignments and projects that foster the development of professional skills.
Assessment of the educational process’s effectiveness | The model provides tools for evaluating the effectiveness of the educational process. Instructors can track student progress and make adjustments to the curriculum as needed.
Attracting investments and partners | Developing a maturity model can attract the attention of investors and partners to the academic course. This may foster collaboration and experience sharing with other universities and organizations.
Alignment with labor market demands | The maturity model takes labor market demands into account and helps adapt the educational process to meet employer needs. This enhances graduates’ competitiveness in the job market.
Providing feedback | The model includes feedback from instructors, students, and other participants in the educational process. This enables timely responses to changes and improvements in the quality of education.
Integration with other courses | The model can be integrated with other courses and programs, ensuring a more systematic and comprehensive development of students’ professional competencies.
Source: generated by the authors based on the works of Kasser and Frank (2010), Goeman and Dijkstra (2022), and Silva et al. (2021).
Table 2. Description of the six stages of maturity model development.
Stage | Description
Scope | Defining the scope and focus of the MM, which determines the direction for all subsequent development.
Design | Defining the structure/architecture of the MM.
Populate | Defining the content of the MM, including the domain’s components and subcomponents; external and internal actors (individual specialists and organizations); processes; interrelationships; and the system for their evaluation.
Test | Verification of the relevance, rigor, validity, and reliability of the MM—its design and content.
Deploy | Phased implementation of the MM; expansion and scaling; selection and optimization of elements, processes, and stakeholders.
Maintain | Ensuring the capability to process large data volumes and to scale and extend the MM; creating a repository to track the model’s evolution and development.
Source: generated by the authors based on the work of De Bruin et al. (2005).
Table 3. Distribution of Students Based on Grades for the Final Exam, Final Project, and Test.
Exam Pass Rates:
Exam Scores | Number of Students
From 1 to 4 | 71 (18.83%)
From 5 to 7 | 206 (54.64%)
From 8 to 10 | 100 (26.53%)
Total | 377 (100.0%)
Final Project Success Rates:
Project Total Scores (Max 15) | Number of Students
0–3 | 80 (21.22%)
4–6 | 40 (10.61%)
7–9 | 50 (13.26%)
10–12 | 63 (16.71%)
13–15 | 144 (38.20%)
Total | 377 (100.0%)
Test Success Rates:
Test Scores (Max 20) | Number of Students
5–8 | 3 (0.80%)
9–12 | 14 (3.71%)
13–16 | 173 (45.89%)
17–20 | 171 (45.36%)
No Grade | 16 (4.24%)
Total | 377 (100.0%)
Table 4. Contingency Tables of Student Distribution.
(a) Based on the Number of Completed Assignments (Group 1: did not complete all assignments; Group 2: completed all assignments)
Stage | Group 1 | Group 2
Stage 1 (Labs 1–3) | 276 (18.3%) | 480 (31.7%)
Stage 3 (Labs 7–9) | 79 (5.2%) | 677 (44.8%)
Yates corrected Chi-square = 141.42, p < 0.001
(b) Based on Lab Work Grades (Group 1: grades < 8 points; Group 2: grades ≥ 8 points)
Stage | Group 1 | Group 2
Stage 1 (Labs 1–3) | 122 (16.4%) | 256 (34.3%)
Yates corrected Chi-square = 22.09, p < 0.001
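For transparency, the Yates-corrected chi-square reported for contingency table (a) can be recomputed directly from the published counts. The following is a minimal sketch, assuming Python with SciPy (the paper does not specify its statistical software); the counts are copied verbatim from Table 4(a).

```python
# Recomputing the Yates-corrected chi-square for Table 4(a).
# SciPy's chi2_contingency applies the Yates continuity correction
# to 2x2 tables when correction=True.
from scipy.stats import chi2_contingency

table_a = [
    [276, 480],  # Stage 1 (Labs 1-3): Group 1, Group 2
    [79, 677],   # Stage 3 (Labs 7-9): Group 1, Group 2
]

chi2, p, dof, expected = chi2_contingency(table_a, correction=True)
print(f"Yates corrected Chi-square = {chi2:.2f} (dof = {dof}), p = {p:.2e}")
# Output: Yates corrected Chi-square = 141.42 (dof = 1), p far below 0.001,
# matching the value reported in Table 4(a).
```

The p-levels annotated on the arrows of Figures 1 and 2 come from the same Yates-corrected test applied to the corresponding contingency tables.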
Table 5. Possible links between the components of the TTIC model and the empirical findings of the study.
TTIC Component | Empirical Findings | Practical Application
1. Objectives and Goals | Stage 2 (Labs 4–6) is a critical point for predicting failure. The course objectives should include early risk detection. | The course objectives are adjusted: the task “identify and support students at Stage 2” is added, which reduces attrition and improves performance.
2. Content | Stage-2 topics (loops and arrays) are crucial for developing programming skills. Insufficient mastery of these topics creates a risk of failure. | The Stage 2 content is strengthened by adding additional examples, visualizations, step-by-step guides, and micro-lectures.
3. Educational Technologies | Students pass the theoretical test (syntax) but struggle with the exam/project; therefore, practice-oriented methods are required. | The share of practical, project-based, and problem-oriented assignments is increased. The emphasis is on application rather than memorization.
4. Academic Indicators | Stage-wise analysis of laboratory work enables the prediction of final outcomes (p < 0.001). | A monitoring system is implemented: automatic alerts to the instructor for low activity or performance at Stage 2 (a sketch of such a rule follows this table).
5. Feedback | The risk of failure is statistically significant already at Stage 2—there is still time to intervene. | A mechanism for timely feedback is implemented: personalized recommendations, tutor meetings, and individual roadmaps.
6. Outcomes | Success in the course ≠ success on a test. What matters is not only scores but the ability to apply knowledge (projects, exams). | Assessment is revised: greater weight is assigned to projects and practical tasks. Evaluation of “progress” and “mastery,” not just “completion,” is introduced.
7. Development | By Stage 3, more students complete the assignments, but the quality declines; therefore, the content or methodology needs refinement. | The course is continuously reviewed: Stage-3 assignments are adapted, with added clarifications, checklists, and sample solutions.
8. Partnerships | Financial difficulties are a driver of attrition. Stage-2 “at-risk” students are a target group for support. | Partners are engaged: IT companies offer scholarships, hackathons, and short-term internships to provide motivation and financial support.
9. Resources | Insufficient support at Stage 2 poses a high risk of failure. Resources are needed: tutors, materials, and time. | Resources are allocated: a pool of teaching assistants is created, supplementary materials are developed, and time is reserved for consultations.
10. Adaptability | Students with different levels of preparation handle Stage 2 differently—personalization is required. | A differentiated approach is introduced: “basic” and “advanced” assignment tracks, the option to choose difficulty, and flexible deadlines.
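As a concrete illustration of components 4 (Academic Indicators) and 5 (Feedback), the sketch below shows one possible form of the Stage-2 early-warning rule described in Table 5. It is only a sketch under stated assumptions: the record structure, field names, and thresholds (an unsubmitted Stage-2 lab, or a lab grade below 8 points, echoing the Group 1 criteria in Table 4) are hypothetical and not part of the published model.

```python
# Hypothetical Stage-2 early-warning rule sketching the monitoring idea
# from Table 5 (components 4 and 5). Field names and thresholds are
# illustrative assumptions; the "grade below 8" cutoff mirrors the
# Group 1 definition used in Table 4(b).
from dataclasses import dataclass

STAGE2_LABS = (4, 5, 6)   # Stage 2 covers Labs 4-6 (loops and arrays)
GRADE_THRESHOLD = 8       # Table 4(b): Group 1 = grades < 8 points

@dataclass
class StudentRecord:
    student_id: str
    lab_grades: dict[int, int | None]  # lab number -> grade; None = not submitted

def stage2_alerts(records: list[StudentRecord]) -> list[str]:
    """Return the IDs of students who should trigger an instructor alert."""
    at_risk = []
    for rec in records:
        grades = [rec.lab_grades.get(lab) for lab in STAGE2_LABS]
        incomplete = any(g is None for g in grades)  # missing a Stage-2 lab
        low_score = any(g is not None and g < GRADE_THRESHOLD for g in grades)
        if incomplete or low_score:
            at_risk.append(rec.student_id)
    return at_risk

# Example: s001 completed Stage 2 with high grades; s002 skipped Lab 5.
demo = [
    StudentRecord("s001", {4: 9, 5: 8, 6: 10}),
    StudentRecord("s002", {4: 7, 5: None, 6: 6}),
]
print(stage2_alerts(demo))  # ['s002']
```

Such a list would then feed the feedback mechanism of component 5, for example by triggering personalized recommendations or invitations to tutor meetings.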
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
