Article

Integrating Rapid Application Development Courses into Higher Education Curricula

by Urtė Radvilaitė * and Diana Kalibatienė
Department of Information Systems, Faculty of Fundamental Sciences, Vilnius Gediminas Technical University, 10223 Vilnius, Lithuania
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(6), 3323; https://doi.org/10.3390/app15063323
Submission received: 7 February 2025 / Revised: 15 March 2025 / Accepted: 16 March 2025 / Published: 18 March 2025
(This article belongs to the Special Issue ICT in Education, 2nd Edition)

Abstract

As the development of technology and business is rapidly advancing, higher education (HE) should continually provide and develop up-to-date knowledge and skills for students. This is crucial for training competitive specialists, addressing digital transformation, enhancing the digital readiness of HE institutions and increasing students’ employment opportunities. Therefore, this paper explores the development and implementation of new courses for teaching Rapid Application Development (RAD) on the Oracle Application Express platform at five European universities. Consequently, a new and flexible methodology for integrating the developed courses into existing study programs through different integration strategies is proposed and implemented. The effectiveness of the courses’ integration and implementation, as well as students’ satisfaction, was evaluated using Kirkpatrick’s model. The results reveal that students’ knowledge of RAD increased after completing the courses, which can improve their employment opportunities and promote digital transformation in HE institutions and studies. In addition, a majority of the students expressed positive feedback for both modules, finding the courses relevant, well delivered and motivating for further study. This study and its results are expected to inspire researchers, teachers and practitioners to work further towards the digital transformation of HE and to offer valuable insights for future HE digitalization and research.

1. Introduction

Higher education (HE) is responsible for preparing young specialists. Therefore, higher education institutions (HEIs), especially universities, should provide study programs that are up-to-date and reflect the latest technological developments, in line with the EU Digital Education Action Plan (2021–2027) [1], which prioritizes the improvement of digital skills and competencies to facilitate digital transformation. This entails creating opportunities and providing support for the digitalization of teaching methods and learning processes, as well as developing infrastructure for inclusive and resilient remote learning. As HE is a complex process involving different stakeholders, such as students, teachers and the administrative personnel of HEIs, any change affects all of these groups.
There is no unified definition of digital transformation (DT) from the HEI point of view, but a review of research in this field since 1980 is presented in [2]. Although the creation of the Internet in 1983 is considered one of the foundations of DT, an increase in DT research in HEIs has been observed only since 2016. Significant changes to DT in HEIs were driven by the need to transition to online learning during the pandemic. Educational technology has taken a dominant position, becoming one of the essential factors for the sustainable development of HE [3,4]. While technological advances affect HE, DT in HEIs should include teaching, research, pedagogical approaches, administrative processes and people [2,5,6].
Technological progress drives rapid changes that HEIs need to adjust to, and those changes need to be cultural as well as technical [7]. Universities respond to different digital technologies and take action to improve the learning experience. Artificial intelligence (AI) [8,9,10], cloud computing [11], big data [12] and the Internet of Things (IoT) [8,11,13] are shaping digitalization in HEIs. While technology development is transforming HE, digitalizing the learning content and using educational solutions such as Learning Management Systems (LMSs) are interconnected rather than separate paths [14,15].
Several studies [2,7,16,17] agree that the COVID-19 pandemic accelerated technology integration in HE. As the pandemic affected the area of education, universities need to be prepared to adopt digital technology in any unexpected situation [18]. Therefore, the skills required in the 21st century should be revised, as technological growth urges HEIs to offer courses that focus on technical, knowledge-based and digital skills [19]. While technological advancements offer benefits such as flexibility and personalized learning, they also affect emotional wellbeing by requiring students to manage technology, offering less face-to-face interaction and demanding more self-regulation [20].
Furthermore, DT in HE significantly influences student dropout rates through various mechanisms, both beneficial and detrimental [21]. The integration of digital tools and methodologies reshapes learning environments, impacting student engagement and success. Digital platforms provide students with greater access to learning materials and resources, which can enhance understanding and retention [22]. Online learning options allow students to learn at their own pace, potentially reducing dropout rates for those who may struggle with traditional classroom settings [23]. Nevertheless, practical fields, compared to theoretical fields, may face challenges that lead to higher dropout rates [24]. Moreover, students from marginalized backgrounds may struggle with access to technology, which can lead to them dropping out [22].
The factors influencing dropout rates have been studied [25,26,27,28] and can be grouped into two main categories. The first concerns the students themselves, i.e., their personal and academic data [26]. The second is external and consists of information about the university, its environment and the support available to students [27]. While learning analytics (LA) and other models can contribute to predicting students’ performance, it is important to consider privacy issues as well [25,29,30].
As HEIs tend to collect feedback from students about the courses they took, such textual data can help to predict dropout rates in HEIs [27]. Regardless of the learning mode, whether online or in person, dropout is a significant problem that needs to be minimized. It should be explored as a complex process involving students’ experience of and satisfaction with the HEI [31]. Student satisfaction influences students’ motivation to learn and to perform well, and it also reflects the quality of teaching. It has been confirmed that the lecturer themselves, as well as high-quality and up-to-date course content, are two key factors that enhance students’ motivation and can reduce dropout in HEIs [31].
In conclusion, DT in HE has multifaceted effects, influencing student dropout rates and the overall learning experience. While DT in HE offers opportunities for enhanced engagement and accessibility, it also presents challenges that require careful observation and analysis. Furthermore, understanding the dynamics of DT in HE is crucial for institutions aiming to optimize their digital strategies.
In view of the above, and in order to modernize the existing HE studies on databases and rapid application development, five European universities (Vilnius Gediminas Technical University (VILNIUS TECH) in Lithuania as the coordinator, Tallinn University of Technology (TalTech) in Estonia, Riga Technical University (RTU) in Latvia, Technological University Dublin (TU Dublin) in Ireland and the University of Rijeka (UNIRI) in Croatia), in cooperation with the Oracle Academy, joined forces in 2022 to implement the project KA220-HED-E99B8F14 “Embracing rapid application development (RAD) skills opportunity as a catalyst for employability and innovation” (RAD-Skills), introducing a new approach to software development. To achieve the project’s goals, the project participants, building on Oracle Academy materials, developed two modules delivering fundamental (Module 1) and intermediate (Module 2) knowledge and skills in database and rapid application development (RAD). A number of local and international workshops and roundtables were organized for business and HE representatives and students to disseminate the project results and inform interested parties about rapid application development, specifically targeting the Oracle APEX low-code development platform (LCDP).
This research presents the VILNIUS TECH project results achieved in implementing Module 1 and Module 2. Consequently, the main aim of this research is to investigate the effect of digital transformation at VILNIUS TECH in the scope of implementing RAD-Skills. Achieving this aim involves answering the following research questions: (1) How do the developed modules fit and integrate into existing university study programs? (2) Were the modules effective in providing students with knowledge and skills in database and RAD? (3) Were students satisfied with the delivered courses?
The main contributions and novelty of this paper are as follows:
  • A methodology that is sufficiently flexible to accommodate the integration of the prepared modules into a variety of study programs, with courses of different credit sizes, is proposed.
  • An approach to assessing students’ knowledge and skills in database and RAD using Kirkpatrick’s Model Level 2: Learning Survey is developed.
  • An approach to assessing student satisfaction with the courses delivered using Kirkpatrick’s Model Level 1: Reaction Survey is developed.
  • The proposed methodology is implemented by integrating the developed modules into two study programs at VILNIUS TECH and delivering them to the students.
  • The efficacy of the developed courses in imparting knowledge and skills in database and RAD to students is investigated.
  • The level of student satisfaction with the courses they received is examined.
The rest of the paper is structured as follows. Related work is discussed in Section 2. Section 3 presents the course design and structure, as well as the methodology used to evaluate the courses. The results of the evaluation are given in Section 4. The discussion is presented in Section 5, and finally, the conclusions are drawn.

2. Related Work

Training or course evaluation and its effectiveness are widely researched. There are 414,417 papers that can be found in the Web of Science (WoS) database when the search query “(training OR course) AND (effectiveness OR evaluation)” is used (Figure 1). For this scoping and quantitative review, WoS was chosen, since it is suggested as the principal search system in [32], allows the acquisition of a sufficient volume of bibliometric data and is suitable for quantitative analysis [33]. Moreover, WoS is one of 14 academic search systems well suited for systematic literature reviews (SLRs) and contains more than 73,000,000 publications in multidisciplinary subjects. As the scoping and quantitative review here is used as an auxiliary tool for choosing the most suitable model for course evaluation, WoS was selected because it is recognized as having the highest quality standards and is used by various authors [34,35].
Over the past 20 years, training evaluation has become a more popular research field due to the rising possibilities of e-learning and online training that digital transformation has enabled. Moreover, there is a need to explore the effectiveness of trainings or courses and how to prevent or minimize dropout. While in 2004 the number of papers considering training effectiveness or evaluation was less than 1% of all papers published in WoS, this number increased by 2.23% in 10 years and by 8.78% in 20 years. Such an increase shows that the evaluation of courses or trainings is a significant topic.
According to [36], evaluation is an essential part of the process and cannot be omitted when determining whether the training or course objectives have been accomplished. The author not only discussed the need for training evaluation but also provided a review of different training evaluation models. Although there are many different models for evaluating trainings, ref. [37] discussed models similar to those in [36], explaining their advantages and disadvantages.
The most common models for evaluating trainings, courses or educational programs are summarized in Table 1. Some other methods also exist in the scientific literature, such as Phillips’s ROI model, Kaufman’s Five Levels of Evaluation, Rossi’s Five Domain Evaluation model and Holton’s model [36,37,38]. However, they are not analyzed here because of their specificity and narrow applications. Table 1 consists of the following columns: (1) model, defining the name of the model; (2) levels, showing what is evaluated in the model; (3) usage, specifying the field in which the model is applied; (4) data collection tool, indicating how the data for evaluation are collected; and (5) purpose, defining the evaluation objective of the model.
Among the chosen models, Bloom’s taxonomy is also presented, although it does not measure the effectiveness of training; rather, it helps to determine the level that students or trainees have achieved after the course or training.
As presented in Table 1, Column (4), most models, such as Kirkpatrick’s model and the CIPP and CIRO models, collect data using questionnaires, while some of them (Brinkerhoff’s Success Case Method (SCM) and the IPO model) use interviews in addition to surveys. For example, in Brinkerhoff’s SCM, the survey helps to identify two groups: successful cases and unsuccessful cases. These individuals are then interviewed to acquire a deeper understanding of what worked for them and what did not. Several other models, e.g., the IPO model, also focus on the cost of trainings and may be more appropriate for business organizations or companies.
The authors of [36,37] concluded from their reviews that Kirkpatrick’s model is a universally recognized and accepted framework for training evaluation. The Kirkpatrick Four-Level Training Evaluation Model is designed to objectively measure the effectiveness of training. The model was created by Donald Kirkpatrick in 1959 and became an inspiration and basis for developing other evaluation models, such as Phillips’s ROI model, Kaufman’s model, Rossi’s Five Domain Evaluation Model and Holton’s model [36,37,38,46].
The acceptance and popularity of Kirkpatrick’s model can also be seen from the number of publications in WoS.
The growth in the number of publications about Kirkpatrick’s model over the years has been quite steady, as seen in Figure 2. This number almost doubled in 2020, from 44 publications in 2019 to 83 publications in 2020, and has remained at this higher level since. Such a rise can be explained by the COVID-19 pandemic and the emergence of e-learning and online teaching.
Viewing the usage of training evaluation models from the research area perspective (see Figure 3), Bloom’s taxonomy and Kirkpatrick’s model are the two most popular approaches in the “Education Educational Research” area. In the “Computer Science” area, the leading method is Brinkerhoff’s Success Case Method. It allows the determination of what is effective in training by detecting the most successful cases and, conversely, the least successful ones. Brinkerhoff’s SCM is more suitable for organizations where interviews with trainees can be easily organized due to a smaller number of participants, as in [53]. There, the authors selected 14 success stories and interviewed these participants. To avoid subjective opinions, the authors included external people in nominating the participants and another three experts to conduct 40–60 min interviews [53]. Thus, the inclusion of this method in HEI courses can be complicated because of the complex adaptation procedure of the method itself and the large number of participants in a course. The use of Brinkerhoff’s SCM would require more resources, such as time or people, to conduct high-quality evaluations of a course.
Other researchers have shown that Kirkpatrick’s model is preferred over others because it is easy to use and applicable almost everywhere [38]. Consequently, Kirkpatrick’s model can be used in HE. Its advantages as well as its limitations are discussed in [54]. The author emphasizes that HEIs usually use only the first two levels of Kirkpatrick’s model. Higher levels of this model need to be measured in workplaces and over longer time periods. Thus, measuring these levels should be adjusted to the context of HE [54].
In conclusion, the reviewed works presented various models for evaluating trainings or courses and highlighted the necessity of applying appropriate models to measure the effectiveness and efficiency of those trainings or courses. Due to its clarity and ease of use, Kirkpatrick’s model is the most widely used. HEIs can easily adapt Kirkpatrick’s model, especially Level 1 (Reaction) and Level 2 (Learning), which will be discussed in the next section together with the methodology for implementing the newly developed modules.

3. Materials and Methods

This section presents the methodology developed to accommodate the integration of the prepared modules into a variety of study programs with courses of different credit sizes. The methodology was developed together with all five project partner universities so that it fits study curricula universally. The methodology for integrating the developed modules into universities’ study programs is presented in Figure 4. It consists of the following three strategies:
  • The two developed modules are added to the study program as separate courses of 3 ECTS each (Figure 4, colored in blue). To implement this strategy, a university should support the 3 ECTS course size.
  • The two developed modules are combined to form a single course of 6 ECTS (Figure 4, colored in green). To implement this strategy, a university should support the 6 ECTS course size.
  • The two developed modules are added in conjunction with other supplementary topics to form courses of >3 ECTS (Figure 4, colored in orange). This strategy can be implemented under any ECTS course size and is the most flexible way of incorporating the modules into an existing study program. With this strategy, no new modules or courses need to be developed; instead, existing courses are modified to include the topics of the developed modules.
The proposed methodology thus consists of three quite different strategies, as Figure 4 presents. These strategies vary not only in the scope of the modules and the size of the credits but also in the effort required. The first two strategies require the creation of completely new courses, while the third strategy incorporates the topics of the developed modules into existing study courses at the university. Consequently, the third strategy requires less effort to introduce the new skills to students.
The RAD-Skills project consortium developed two modules (3 ECTS each) as follows: (1) the fundamental course (Module 1), which provides fundamental knowledge of databases and is intended both for computer science students and for students with no background in information technologies; (2) the intermediate course (Module 2), which provides an advanced level of knowledge for students. Table 2 presents the topics of both modules.
All topics were classified into categories for further analysis as follows:
  • Introduction to Databases (Module 1)
  • Relational Databases (Module 1)
  • Data Modelling (Module 1)
  • Introduction to SQL (Module 1)
  • Advanced SQL (Module 2)
  • Application and page design in APEX (Module 2)
  • Forms and data integrity in APEX (Module 2)
  • Reports in APEX (Module 2)
The approach for assessing students’ knowledge and skills in database and RAD using Kirkpatrick’s model Level 2: Learning Survey and assessing student satisfaction with the courses using Kirkpatrick’s model Level 1: Reaction Survey is developed and presented in Figure 5. The same schema was used in both developed modules.
The proposed approach for assessing students’ knowledge and skills in database and RAD using Kirkpatrick’s model Level 2: Learning Survey and assessing student satisfaction with the courses using Kirkpatrick’s model Level 1: Reaction Survey, presented in Figure 5, can be applied once per course if the first, second or third strategy is chosen (Figure 4). The proposed approach can also be applied twice, i.e., separately for Module 1 and Module 2, if the second strategy is chosen (Figure 4, green color). If the third strategy is used to implement the developed modules in the study program, additional questions covering the other HE topics on databases should be developed and included in Kirkpatrick’s model Level 2: Learning Survey to assess the knowledge and skills of the whole course.
For assessing students’ satisfaction with the course, Kirkpatrick’s model Level 1: Reaction Survey was developed and administered at the end of the course. The questions were adopted from the Kirkpatrick’s model Level 1: Reaction Survey template, because it fully met the research objectives. The survey consists of seven questions on a 5-point Likert scale, ranging from 1 (Strongly Disagree) to 5 (Strongly Agree), as follows (a short sketch of how such responses can be aggregated is given after the question list):
  • Q1. I was satisfied with the course overall.
  • Q2. This course enhanced my knowledge of the subject matter.
  • Q3. The course was relevant to what I might be expected to do, i.e., develop applications rapidly.
  • Q4. This course provided content that is relevant to my daily job.
  • Q5. This course provided delivery methods and materials appropriately.
  • Q6. I would recommend this course to others.
  • Q7. This course acted as a motivator towards further learning.
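As a minimal illustration of how such Likert responses can be summarized into the per-option percentages reported later in Table 3, the sketch below pools the answers to all seven questions and computes the share of each response option. The response values, the helper function and the middle-option labels are hypothetical and are not part of the survey instrument itself.

```python
from collections import Counter

# 5-point Likert options used in the Reaction Survey (intermediate labels
# assumed from the 1 = Strongly Disagree ... 5 = Strongly Agree anchoring).
LIKERT = ["Strongly Disagree", "Somewhat Disagree", "Neither Agree nor Disagree",
          "Somewhat Agree", "Strongly Agree"]

def likert_percentages(responses):
    """Pool a flat list of 1-5 answers (all questions together) and
    return the percentage of each Likert option."""
    counts = Counter(responses)
    total = len(responses)
    return {LIKERT[score - 1]: round(100 * counts.get(score, 0) / total, 2)
            for score in range(1, 6)}

# Hypothetical answers from a handful of students (Q1-Q7 pooled).
example = [5, 4, 4, 3, 5, 2, 4, 4, 5, 3, 4, 5, 1, 4, 3]
print(likert_percentages(example))
```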
The students’ knowledge and skills in database and RAD were evaluated by Kirkpatrick’s model Level 2: Learning Survey, which consists of two parts, i.e., a pre-test and a post-test. Students were asked to take the same test before starting the course module and after finishing it. The results of both tests were compared to determine learning effectiveness.
A pre-test and post-test question set was created based on 36 existing Oracle APEX questions from the different topics included in the developed modules. For the assessment of Module 1, the project partners selected 19 questions by voting; for Module 2, 20 questions were selected. The questions were either multiple choice or true/false.
The tests for both modules comprised questions of equal weight, since they have predefined answers. These questions correspond to the knowledge level of Bloom’s taxonomy [39]. Higher levels of Bloom’s taxonomy were evaluated through practical tasks performed during the course. The main scope of this study is the assessment of student achievement and knowledge level using tests only. Based on the number of points collected in the tests, the knowledge level of the students was evaluated using the following three levels:
  • threshold level (i.e., satisfactory) when the student knows the most important theories and principles of the course and is able to convey basic information and problems;
  • typical level when the student knows the most important theories and principles and is able to apply knowledge by solving standard problems, and possesses learning skills necessary for further and self-study;
  • outstanding level (i.e., advanced) when the student identifies the latest sources of the course, knows the theory and principles and can create and develop new ideas.
An example of the questions is presented in Figure 6.
The collected answers were tabulated and compared by applying paired t-tests as presented in [55,56], confidence intervals [57,58] and effect sizes [58,59].
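A minimal sketch of this comparison is given below, assuming two equal-length arrays of paired pre-test and post-test scores. The score values are hypothetical, and Cohen's d for paired samples is used here as one common effect size choice; the cited works [55,56,57,58,59] describe the procedures actually followed.

```python
import numpy as np
from scipy import stats

# Hypothetical paired scores on the 0-10 scale (same students before and after).
pre = np.array([4.2, 5.1, 6.3, 3.8, 7.0, 5.5, 4.9, 6.1])
post = np.array([6.0, 6.8, 7.5, 5.2, 8.1, 7.0, 6.4, 7.9])

# Paired t-test: a negative t statistic indicates higher post-test scores,
# matching the sign convention of the results reported in Section 4.
t_stat, p_value = stats.ttest_rel(pre, post)

# 95% confidence interval for the mean post-pre difference.
diff = post - pre
ci_low, ci_high = stats.t.interval(0.95, df=len(diff) - 1,
                                   loc=diff.mean(), scale=stats.sem(diff))

# Effect size: mean difference divided by the standard deviation of differences.
cohens_d = diff.mean() / diff.std(ddof=1)

print(f"t({len(diff) - 1}) = {t_stat:.4f}, p = {p_value:.4f}")
print(f"95% CI for the improvement: [{ci_low:.2f}, {ci_high:.2f}]")
print(f"Effect size (Cohen's d): {cohens_d:.3f}")
```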

4. Results

This section presents the results of implementing the proposed methodology and approaches at VILNIUS TECH. Other universities have presented their cases of implementing the proposed methodology in their own publications, e.g., TalTech [60,61]. Several partner universities (i.e., TalTech, RTU and UNIRI) chose to implement the first strategy, i.e., to add the newly developed modules to their study programs as separate courses. Others (i.e., RTU and TU Dublin) chose to integrate the topics from the developed modules into already existing courses in the study program.
At VILNIUS TECH, the third strategy was used to integrate the developed modules into two existing study programs, “Information Systems” (ISsp) and “Software Engineering” (SEsp). These study programs already had the Databases (with coursework) (6 ECTS) and Database Management (with coursework) (6 ECTS) courses in their curriculum, so the necessary topics of Module 1 and Module 2 were included in the existing courses. These courses are taught in the fourth and fifth semesters, i.e., the second year of studies.
More details about the implementation of Module 1 are presented in [62].
Students’ knowledge and skills in database and RAD were assessed using Kirkpatrick’s model Level 2: Learning Survey, and student satisfaction with the courses was assessed using Kirkpatrick’s model Level 1: Reaction Survey, following the approach presented in Figure 5.
Kirkpatrick’s model Level 1: Reaction Survey was administered at the end of the course. In total, 8 ISsp students and 25 SEsp students (i.e., 33 students in total) answered the satisfaction survey for Module 1, and 8 ISsp students and 19 SEsp students (i.e., 27 students in total) answered the satisfaction survey for Module 2. Note that answering the satisfaction survey was optional and that the ISsp student groups are smaller. Nevertheless, the number of survey responses was sufficient to draw some conclusions about the students’ feedback on Module 1 and Module 2.
The responses to each module, in percentages, are shown in Table 3. As can be seen from the table, more than half of the responses (59.83% for Module 1 and 58.79% for Module 2) were Somewhat Agree (33.62% for Module 1 and 35.16% for Module 2) or Strongly Agree (26.20% for Module 1 and 23.63% for Module 2). The results indicate that the students see the courses as effective and relevant.
Visualizations of the survey responses are shown in Figure 7 for Module 1 and Figure 8 for Module 2. Those figures show that 70% (Figure 7, Module 1, row 1) and 55% (Figure 8, Module 2, row 1) of students were satisfied with the course overall; 76% (Figure 7, Module 1, row 2) and 67% (Figure 8, Module 2, row 2) agreed that the course increased their knowledge of the subject matter; 63% (Figure 7, Module 1, row 3) and 38% (Figure 8, Module 2, row 3) agreed that the course was relevant to what they might expect from RAD; 61% (Figure 7, Module 1 and Module 2, row 4) agreed that the course provided appropriate delivery methods and materials; 61% (Figure 7, Module 1, row 6) and 42% (Figure 8, Module 2, row 6) would recommend the course to others; and for 53% (Figure 7, Module 1, row 7) and 38% (Figure 8, Module 2, row 7) of students, the modules acted as a motivator for further learning.
As the courses were well received, the students also agreed that their knowledge had increased after completing them.
The learning results and the improvement in students’ knowledge were determined by means of the pre-test and post-test. For Module 1, a total of 40 students (i.e., 12 ISsp and 28 SEsp) participated in the pre-test and 42 students (i.e., 8 ISsp and 34 SEsp) participated in the post-test. For Module 2, a total of 37 students (i.e., 8 ISsp and 29 SEsp) participated in the pre-test and 47 students (i.e., 4 ISsp and 43 SEsp) participated in the post-test. The numbers of pre-test and post-test responses are sufficient to draw some conclusions about the students’ knowledge levels for Module 1 and Module 2.
The obtained grades for Module 1 and Module 2 are shown in Figure 9 and Figure 10, where the pre-test results are colored in yellow and the post-test results in green. The overall points were grouped as follows (a small classification sketch is given after the list):
  • [0; 4.8)—students who failed the test;
  • [4.8; 7.4)—students who have satisfactory knowledge level;
  • [7.4; 8.4)—students who have typical knowledge level;
  • [8.4; 10]—students who achieved the advanced knowledge level.
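As a minimal sketch of this grouping, the helper below maps a 0–10 test score onto the four groups used in Figures 9 and 10; the function name and the sample scores are hypothetical and serve only to make the boundaries explicit.

```python
def knowledge_level(points: float) -> str:
    """Map a 0-10 test score onto the grade groups used in Figures 9 and 10."""
    if points < 4.8:
        return "failed"
    if points < 7.4:
        return "satisfactory (threshold)"
    if points < 8.4:
        return "typical"
    return "advanced (outstanding)"

# Hypothetical post-test scores for a small group of students.
scores = [3.5, 5.0, 7.6, 9.2, 8.4]
print([knowledge_level(s) for s in scores])
```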
As can be seen from Figure 9, the majority (80%) of students obtained grades of less than 7 in the pre-test for Module 1, indicating that their knowledge of databases was basic. Only 20% of students achieved a typical or advanced knowledge level, suggesting that they may have been interested in databases and studied them independently. Such results could also be explained by the topics of Module 1 being covered, to some extent, in other courses of these study programs. After the course (Module 1) was taught, the post-test results improved, i.e., 45.24% of students obtained grades of less than 7 while 54.76% of students achieved a typical or advanced knowledge level. The results show that the course delivered the fundamental knowledge of databases and established the foundational understanding necessary for the next course with more advanced topics.
As shown in Figure 10, the majority (70.27%) of students obtained grades of less than 7 in the pre-test for Module 2, showing only satisfactory levels of knowledge. Only 29.73% of students achieved a typical or advanced knowledge level. After the course (Module 2) was taught, the post-test results improved, i.e., 29.79% of students obtained grades of less than 7 while 70.21% of students achieved a typical or advanced knowledge level. Thus, a notable increase in knowledge level was observed, as Module 2 presents more specific knowledge and skills not covered in any other course at the university.
The pre-test and post-test grades according to the categories (see Section 3) of both modules are shown in Figure 11. The red color represents wrong answers to the questions, while correct answers are shown in green. As the figure shows, knowledge of the Introduction to Databases (DB) topic improved only slightly, by 6%. This small improvement suggests that the students were already more or less familiar with the concept of a database. A significant increase in knowledge, of 33%, was observed for the Relational Databases topic. This topic presents the fundamental knowledge and formal definitions of relational databases. Although students might have become acquainted with databases before this course through self-study, they are usually more interested in practical knowledge. Therefore, the results indicate that the course provides a theoretical background for relational databases. The other increases were as follows: Data Modelling (10%), Introduction to SQL (10%), Application (App) and page design in APEX (11%), Forms and data integrity in APEX (14%) and Reports in APEX (15%).
The results of the paired sample t-test for Module 1 and Module 2 are presented in Table 4. The results show a statistically significant difference in the students’ knowledge before and after attending Module 1 (t(79) = −3.7964, p ≤ 0.01) and Module 2 (t(77) = −2.8808, p ≤ 0.01). Thus, students achieved better performance on the post-test for both modules.

5. Discussion

The results presented in this study help to answer the research questions.
To answer the first research question, (1) How do the developed modules fit and integrate into existing university study programs?, a methodology with three different integration strategies has been suggested and presented in Figure 4. The developed modules introduce new skills and can be easily integrated into existing courses at different universities with various course modularity. The proposed methodology is quite flexible and provides choices for the implementation of the modules based on how much effort the university wants to spend on improving its study programs. While the developed modules were included as parts of bigger courses in this study, universities can choose to create entirely new courses based on the developed Module 1 and Module 2. Consequently, the proposed methodology helps to improve existing study programs and to advance digitalization at universities.
The answer to the second research question, i.e., (2) Were the modules effective in providing students with knowledge and skills in database and RAD?, is based on the conducted Kirkpatrick’s model Level 2: Learning Survey, consisting of the pre-test and post-test. The results of the paired sample t-test for Module 1 (t(79) = −3.7964, p ≤ 0.01) and Module 2 (t(77) = −2.8808, p ≤ 0.01) showed a statistically significant difference in the students’ knowledge before and after attending these modules. Also, the increase in knowledge is supported by the large effect size for Module 1 (0.838) and the intermediate effect size for Module 2 (0.634), both of which fall within the zone of desired effects. Therefore, it can be concluded that the modules are effective in providing students with knowledge and skills in database and RAD, although this effectiveness depends on the topic taught. From Figure 11, it can be seen that the knowledge improvement is smaller for some topics and bigger for others. It should also be noted that this effectiveness and the learning outcomes depend strongly on factors such as (1) the student’s motivation to learn, (2) the methods used for teaching and (3) the teacher’s character traits. These factors are intertwined and closely dependent on each other. It is confirmed in [63] that teaching quality significantly influences students’ satisfaction because students appreciate a lecturer’s presentation style more than the content itself. Teaching quality, as shown in [26], together with a good student–teacher relationship, affects students’ satisfaction. Satisfaction with the teacher impacts satisfaction with the course and the motivation to achieve high learning outcomes [31]. In any case, better knowledge improvement should be pursued in the future by supplementing the developed courses with the newest teaching methods, such as Flipped Classroom [64,65,66], Gamification and Simulation [67,68,69], etc., which encourage and motivate students to learn and increase the digitalization of the developed courses.
Furthermore, it was noticed that the questions in the pre-test and post-test should be revised and improved. The current questions were taken from Oracle Academy materials and approved by the partners of the project mentioned in the Introduction, since they were suitable for the project purposes. Nevertheless, the analysis of the pre-test and post-test results shows that, in the future, the question bank should be augmented with new questions to provide broader coverage of topics. As the number of questions increases, it will be necessary to consider whether the tests become too long for the number of credits in the module.
The results for Kirkpatrick’s model Level 1 (Figure 7 and Figure 8) show that the majority of the students were satisfied with the course, which provides the answer to the third research question, (3) Were students satisfied with the delivered courses? Although there is room for an increase in the satisfaction level, these results depend on several factors. Besides the student’s motivation to learn, the teacher is also one of the main factors influencing satisfaction, as noted by several authors [26,31,63]. Therefore, it is important to ensure the continuous improvement of teachers’ pedagogical competencies and the inclusion of new digital methods in teaching, such as generative AI (GenAI) tools, e.g., ChatGPT, which offer great pedagogical potential [70,71,72,73].
Here, a fourth question can be raised: did the teacher’s personality influence the level of students’ satisfaction? A fifth question can also be raised: how do digital technologies, especially GenAI tools, influence learning outcomes? However, these research questions were out of the scope of this study. The satisfaction survey was created for the course and its contents. Consequently, the results of this study do not reflect the students’ opinions on the teaching methods or the teacher as a person. Despite that, the results of Kirkpatrick’s model Level 1: Reaction Survey are useful, as they both reveal the students’ satisfaction and emotional response and identify areas for future improvement and digitalization of the course.
On the other hand, the number of students that participated in this study was small, but it was sufficient to perform statistical analysis and draw initial conclusions. As students differ in skills, motivation and abilities, the analysis of the results should take these factors into account as well. Perhaps another evaluation model could be considered to complement the analysis of the results from a different perspective. Moreover, the methodology presented in this study has been applied to IT students in two study programs. The results could differ if different, i.e., non-IT, study programs were chosen.
A similar study has been performed by the project partners [60,61], and their results also indicate that the course was well received by the majority of students. In [60], TalTech found that around 62% of respondents were satisfied with the RAD course material, while in this study, around 60% of students were satisfied. This slight difference can be explained by the use of different teaching techniques, i.e., the partners at TalTech chose self-regulated learning in online mode, minimizing contact with a lecturer, which could be the factor resulting in slightly better results. Similar results were obtained by the partners for learning outcomes, i.e., the TalTech results [61] show a knowledge improvement of 36%, while the results at VILNIUS TECH indicate an improvement of 35%. Similar results at different universities imply that implementing the database and RAD skills course enhanced students’ knowledge and digital skills.

Limitations of the Study

Several limitations of the study were touched upon in the discussion above. Summarizing that discussion, the following limitations of the study can be identified:
  • The developed strategy was implemented only for the courses taught for the entire semester (i.e., 16 weeks). This study has not examined the case when the courses are taught in cycles (i.e., ~4 weeks). However, the proposed methodology does not set strict time limits for teaching and could be applied without time limitations.
  • The bias of students regarding feedback about the course. Another limitation of this study is that Kirkpatrick’s model Level 1: Reaction Survey was constructed to obtain feedback from the students about the course and its contents. The questions in the survey do not distinguish the content of the course from the teaching quality or the teacher as a person. Nevertheless, the performed Kirkpatrick’s model Level 1: Reaction Survey satisfies the scope and aim of the current study. So, the refinement of the feedback part of the survey is left for future work.
  • There are limitations regarding the pre-test and post-test questions. During the study, it was observed that for a more detailed evaluation of the students’ knowledge, some questions should be revised or additional questions should be added. Nevertheless, the current set of questions satisfies the scope and aim of the study. So, the extension and refinement of the questions remains for future works.
  • The limited number of participants and the limited duration of the study. The methodology and developed modules were integrated into the existing study programs only two years ago. Therefore, only a limited number of students, in only two study programs at VILNIUS TECH, could be investigated. However, the study is ongoing, and the effect of the newly integrated modules on students’ knowledge levels is under investigation with new student groups.

6. Conclusions

Responding to technological development, HEIs should address digital transformation in order to stay competitive in the market. Furthermore, by introducing new skills or implementing new courses into study programs, HEIs stay attractive to students, and can minimize dropout rates and digitalize HE.
As the analysis of the related works shows, the high number of publications on training or course evaluation highlights the importance of evaluating the courses themselves. Many models exist for evaluating trainings, participant satisfaction, the knowledge level gained by participants, etc., which allow HEIs to obtain feedback from the participants and to improve the quality of the courses. Among the analyzed course evaluation models, Kirkpatrick’s model is the most accepted and the most easily applied in HE, especially its lowest two levels, Reaction and Learning. Consequently, it was chosen for the presented study to evaluate the students’ satisfaction level and the knowledge level gained.
A sufficiently flexible methodology has been developed for the integration of the newly prepared modules on RAD. The main advantage of the proposed methodology is its suitability for quick and flexible integration of newly developed courses into existing study programs by applying one of the following three strategies: (1) introducing the new module as a separate course; (2) combining two modules into a single bigger course and (3) incorporating new topics from the developed module into existing courses at the university. The last strategy was used at VILNIUS TECH as the most suitable and was discussed in this study.
The results of Kirkpatrick’s model Level 1 showed that a majority of the students (~59%) expressed positive feedback for both Module 1 and Module 2. Therefore, the results indicate that the developed modules were well received by the students. Also, the responses from the students show that they acknowledged an increase in their knowledge of the subject matter and found the courses to be relevant and well delivered as well as motivating them to study further.
The results of Kirkpatrick’s model Level 2 allowed for the observation and measurement of the improvements in knowledge for different topics. The average increase in knowledge was 12% in five topics of the course, while two topics, Introduction to Database and Relational Database, stood out with the lowest 6% and the highest 33% increases, respectively.
The data gathered in this study can help to improve the course in the future, as the results highlight areas needing further enhancement. The satisfaction with the developed modules indicates that the topics are relevant and effectively covered with appropriate teaching methods and material. However, the small increase in knowledge for some topics suggests that they need to be improved in terms of teaching quality and new digital learning methods and techniques. Although most of the topics are relevant and provide new skills, the insights of this study can help to better address students’ needs as well as to maximize learning outcomes.
In conclusion, the presented results demonstrate that implementing new courses can increase students’ skills and enhance digital transformation in HEIs and studies. Thus, the ongoing digital transformation urges researchers to perform studies similar to the research presented in this paper in order to measure and evaluate the impact of that transformation on society, business and education.

Future Research

Based on the research results, discussions and limitations of this study, future research can be summarized as follows:
  • Deeper and more extensive investigation of the developed methodology at other partner universities.
  • Investigation of the proposed methodology with different study time cycles.
  • Supplementing the developed courses with the newest teaching methods, which encourage and motivate students to learn and increase the digitalization of the developed courses.
  • Improvement and extension of the feedback survey with the possibility to exclude the students’ and teachers’ biases from the results.
  • Improvement and extension of the pre-test and post-test questions.
  • Longitudinal survey of the integrated modules with different students over several years, i.e., collecting more test responses and conducting similar research with a larger set of respondents.
  • Evaluating the teacher as a person and the teacher’s influence on the learning outcomes, as well as investigating the impact of digital technologies on the learning outcomes and students’ satisfaction.
  • Conducting similar studies with other study programs and courses.
  • Summarizing the obtained results together across all five partner universities to highlight and generalize the best practices of RAD course implementation, teaching and course digitalization.

Author Contributions

Conceptualization, U.R. and D.K.; Data curation, D.K.; Formal analysis, U.R.; Investigation, U.R. and D.K.; Methodology, D.K.; Visualization, U.R. and D.K.; Writing—original draft, U.R. and D.K.; Writing—review and editing, U.R. and D.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets generated and analyzed during the current study are not publicly available to preserve individuals’ privacy under the European General Data Protection Regulation but are available from the corresponding author upon reasonable request.

Acknowledgments

This study would not have been possible without the support of the Erasmus+ KA220-HED project “Embracing rapid application development (RAD) skills opportunity as a catalyst for employability and innovation” funded by the European Union.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Digital Education Action Plan (2021–2027). Available online: https://education.ec.europa.eu/focus-topics/digital-education/action-plan (accessed on 31 October 2024).
  2. Benavides, L.M.C.; Tamayo Arias, J.A.; Arango Serna, M.D.; Branch Bedoya, J.W.; Burgos, D. Digital transformation in higher education institutions: A systematic literature review. Sensors 2020, 20, 3291. [Google Scholar] [CrossRef] [PubMed]
  3. Deroncele-Acosta, A.; Palacios-Núñez, M.L.; Toribio-López, A. Digital Transformation and Technological Innovation on Higher Education Post-COVID-19. Sustainability 2023, 15, 2466. [Google Scholar] [CrossRef]
  4. González-Zamar, M.-D.; Abad-Segura, E.; López-Meneses, E.; Gómez-Galán, J. Managing ICT for Sustainable Education: Research Analysis in the Context of Higher Education. Sustainability 2020, 12, 8254. [Google Scholar] [CrossRef]
  5. Alenezi, M. Digital Learning and Digital Institution in Higher Education. Educ. Sci. 2023, 13, 88. [Google Scholar] [CrossRef]
  6. Alenezi, M. Deep dive into digital transformation in higher education institutions. Educ. Sci. 2021, 11, 770. [Google Scholar] [CrossRef]
  7. Mohamed Hashim, M.A.; Tlemsani, I.; Matthews, R. Higher education strategy in digital transformation. Educ. Inf. Technol. 2022, 27, 3171–3195. [Google Scholar] [CrossRef]
  8. Khan, W.; Sohail, S.; Roomi, M.A.; Nisar, Q.A.; Rafiq, M. Opening a new horizon in digitalization for e-learning in Malaysia: Empirical evidence of COVID-19. Educ. Inf. Technol. 2024, 29, 9387–9416. [Google Scholar] [CrossRef]
  9. Wang, L.; Li, W. The Impact of AI Usage on University Students’ Willingness for Autonomous Learning. Behav. Sci. 2024, 14, 956. [Google Scholar] [CrossRef]
  10. Ilić, M.P.; Păun, D.; Popović Šević, N.; Hadžić, A.; Jianu, A. Needs and Performance Analysis for Changes in Higher Education and Implementation of Artificial Intelligence, Machine Learning, and Extended Reality. Educ. Sci. 2021, 11, 568. [Google Scholar] [CrossRef]
  11. Guerrero-Osuna, H.A.; García-Vázquez, F.; Ibarra-Delgado, S.; Mata-Romero, M.E.; Nava-Pintor, J.A.; Ornelas-Vargas, G.; Castañeda-Miranda, R.; Rodríguez-Abdalá, V.I.; Solís-Sánchez, L.O. Developing a Cloud and IoT-Integrated Remote Laboratory to Enhance Education 4.0: An Approach for FPGA-Based Motor Control. Appl. Sci. 2024, 14, 10115. [Google Scholar] [CrossRef]
  12. Ruiz-Palmero, J.; Colomo-Magaña, E.; Ríos-Ariza, J.M.; Gómez-García, M. Big Data in Education: Perception of Training Advisors on Its Use in the Educational System. Soc. Sci. 2020, 9, 53. [Google Scholar] [CrossRef]
  13. Spaho, E.; Çiço, B.; Shabani, I. IoT Integration Approaches into Personalized Online Learning: Systematic Review. Computers 2025, 14, 63. [Google Scholar] [CrossRef]
  14. Petchamé, J.; Iriondo, I.; Korres, O.; Paños-Castro, J. Digital transformation in higher education: A qualitative evaluative study of a hybrid virtual format using a smart classroom system. Heliyon 2023, 9, e16675. [Google Scholar] [CrossRef] [PubMed]
  15. Bygstad, B.; Øvrelid, E.; Ludvigsen, S.; Dæhlen, M. From dual digitalization to digital learning space: Exploring the digital transformation of higher education. Comput. Educ. 2022, 182, 104463. [Google Scholar] [CrossRef]
  16. Nicklin, L.L.; Wilsdon, L.; Chadwick, D.; Rhoden, L.; Ormerod, D.; Allen, D.; Witton, G.; Lloyd, J. Accelerated HE digitalisation: Exploring staff and student experiences of the COVID-19 rapid online-learning transfer. Educ. Inf. Technol. 2022, 27, 7653–7678. [Google Scholar] [CrossRef]
  17. Lu, H.P.; Wang, J.C. Exploring the effects of sudden institutional coercive pressure on digital transformation in colleges from teachers’ perspective. Educ. Inf. Technol. 2023, 28, 15991–16015. [Google Scholar] [CrossRef]
  18. Cramarenco, R.E.; Burcă-Voicu, M.I.; Dabija, D.-C. Student Perceptions of Online Education and Digital Technologies during the COVID-19 Pandemic: A Systematic Review. Electronics 2023, 12, 319. [Google Scholar] [CrossRef]
  19. Akour, M.; Alenezi, M. Higher education future in the era of digital transformation. Educ. Sci. 2022, 12, 784. [Google Scholar] [CrossRef]
  20. Norabuena-Figueroa, R.P.; Deroncele-Acosta, A.; Rodríguez-Orellana, H.M.; Norabuena-Figueroa, E.D.; Flores-Chinte, M.C.; Huamán-Romero, L.L.; Tarazona-Miranda, V.H.; Mollo-Flores, M.E. Digital Teaching Practices and Student Academic Stress in the Era of Digitalization in Higher Education. Appl. Sci. 2025, 15, 1487. [Google Scholar] [CrossRef]
  21. Rahmani, A.M.; Groot, W.; Rahmani, H. Dropout in online higher education: A systematic literature review. Int. J. Educ. Technol. High. Educ. 2024, 21, 19. [Google Scholar] [CrossRef]
  22. Qolamani, K.I.B.; Mohammed, M.M. The digital revolution in higher education: Transforming teaching and learning. QALAMUNA J. Pendidik. Sos. Agama 2023, 1, 837–846. [Google Scholar] [CrossRef]
  23. Thai, D.T.; Quynh, H.T.; Linh, P.T.T. Digital transformation in higher education: An integrative review approach. TNU J. Sci. Technol. 2021, 226, 139–146. [Google Scholar] [CrossRef]
  24. Tinjić, D.; Nordén, A. Crisis-driven digitalization and academic success across disciplines. PLoS ONE 2024, 19, e0293588. [Google Scholar] [CrossRef]
  25. de Oliveira, C.F.; Sobral, S.R.; Ferreira, M.J.; Moreira, F. How does learning analytics contribute to prevent students’ dropout in higher education: A systematic literature review. Big Data Cogn. Comput. 2021, 5, 64. [Google Scholar] [CrossRef]
  26. Nurmalitasari; Awang Long, Z.; Faizuddin Mohd Noor, M. Factors influencing dropout students in higher education. Educ. Res. Int. 2023, 2023, 7704142. [Google Scholar] [CrossRef]
  27. Phan, M.; De Caigny, A.; Coussement, K. A decision support framework to incorporate textual data for early student dropout prediction in higher education. Decis. Support Syst. 2023, 168, 113940. [Google Scholar] [CrossRef]
  28. Roy, R.; Al-Absy, M.S.M. Impact of Critical Factors on the Effectiveness of Online Learning. Sustainability 2022, 14, 14073. [Google Scholar] [CrossRef]
  29. Li, C.; Herbert, N.; Yeom, S.; Montgomery, J. Retention Factors in STEM Education Identified Using Learning Analytics: A Systematic Review. Educ. Sci. 2022, 12, 781. [Google Scholar] [CrossRef]
  30. Hoca, S.; Dimililer, N. A Machine Learning Framework for Student Retention Policy Development: A Case Study. Appl. Sci. 2025, 15, 2989. [Google Scholar] [CrossRef]
  31. Guzmán Rincón, A.; Sotomayor Soloaga, P.A.; Carrillo Barbosa, R.L.; Barragán-Moreno, S.P. Satisfaction with the institution as a predictor of the intention to drop out in online higher education. Cogent Educ. 2024, 11, 2351282. [Google Scholar] [CrossRef]
  32. Gusenbauer, M.; Haddaway, N.R. Which academic search systems are suitable for systematic reviews or meta-analyses? Evaluating retrieval qualities of Google Scholar, PubMed, and 26 other resources. Res. Synth. Methods 2020, 11, 181–217. [Google Scholar] [CrossRef] [PubMed]
  33. Donthu, N.; Kumar, S.; Mukherjee, D.; Pandey, N.; Lim, W.M. How to conduct a bibliometric analysis: An overview and guidelines. J. Bus. Res. 2021, 133, 285–296. [Google Scholar] [CrossRef]
  34. Kalibatienė, D.; Miliauskaitė, J. A hybrid systematic review approach on complexity issues in data-driven fuzzy inference systems development. Informatica 2021, 32, 85–118. [Google Scholar] [CrossRef]
  35. Yan, L.; Zhiping, W. Mapping the Literature on Academic Publishing: A Bibliometric Analysis on WOS. Sage Open 2023, 13. [Google Scholar] [CrossRef]
  36. Dhankhar, K. Training effectiveness evaluation models. A comparison. Indian J. Train. Dev. 2020, 3, 66–73. [Google Scholar]
  37. Liu, S.; Zu, Y. Evaluation Models in Curriculum and Educational Program-A Document Analysis Research. J. Technol. Hum. 2024, 5, 32–38. [Google Scholar] [CrossRef]
  38. Ali, M.S.; Tufail, M.; Qazi, R. Training Evaluation Models: Comparative Analysis. Res. J. Soc. Sci. Econ. Rev. 2022, 3, 51–63. [Google Scholar] [CrossRef]
  39. Prasad, G.N.R. Evaluating student performance based on bloom’s taxonomy levels. In Proceedings of the Journal of Physics: Conference Series, Kalyani, India, 8–9 October 2020; IOP Publishing: Bristol, UK, 2021; Volume 1797, p. 012063. [Google Scholar] [CrossRef]
  40. Ullah, Z.; Lajis, A.; Jamjoom, M.; Altalhi, A.; Saleem, F. Bloom’s taxonomy: A beneficial tool for learning and assessing students’ competency levels in computer programming using empirical analysis. Comp. Appl. Eng. Educ. 2020, 28, 1628–1640. [Google Scholar] [CrossRef]
  41. West, J. Utilizing Bloom’s taxonomy and authentic learning principles to promote preservice teachers’ pedagogical content knowledge. Soc. Sci. Hum. Open 2023, 8, 100620. [Google Scholar] [CrossRef]
  42. Qiu, Z.; Wang, S.; Chen, X.; Xiang, X.; Chen, Q.; Kong, J. Research on the Influence of Nonmorphological Elements’ Cognition on Architectural Design Education in Universities: Third Year Architecture Core Studio in Special Topics “Urban Village Renovation Design”. Buildings 2023, 13, 2255. [Google Scholar] [CrossRef]
  43. Joseph-Richard, P.; Cadden, T. Delivery of e-Research-informed Teaching (e-RIT) in Lockdown: Case Insights from a Northern Irish University. In Agile Learning Environments amid Disruption: Evaluating Academic Innovations in Higher Education During COVID-19; Springer International Publishing: Cham, Switzerland, 2022; pp. 495–512. [Google Scholar] [CrossRef]
  44. Asghar, M.Z.; Afzaal, M.N.; Iqbal, J.; Waqar, Y.; Seitamaa-Hakkarainen, P. Evaluation of In-Service Vocational Teacher Training Program: A Blend of Face-to-Face, Online and Offline Learning Approaches. Sustainability 2022, 14, 13906. [Google Scholar] [CrossRef]
  45. Heydari, M.R.; Taghva, F.; Amini, M.; Delavari, S. Using Kirkpatrick’s model to measure the effect of a new teaching and learning methods workshop for health care staff. BMC Res. Notes 2019, 12, 338. [Google Scholar] [CrossRef] [PubMed]
  46. Alsalamah, A.; Callinan, C. The Kirkpatrick model for training evaluation: Bibliometric analysis after 60 years (1959–2020). Ind. Commer. Train. 2022, 54, 36–63. [Google Scholar] [CrossRef]
  47. Ghasemi, R.; Akbarilakeh, M.; Fattahi, A.; Lotfali, E. Evaluation of the Effectiveness of Academic Writing Workshop in Medical Students Using the Kirkpatrick Model. Nov. Biomed. 2020, 8, 29824. [Google Scholar] [CrossRef]
  48. Khan, N.F.; Ikram, N.; Murtaza, H.; Javed, M. Evaluating protection motivation based cybersecurity awareness training on Kirkpatrick’s Model. Comput. Secur. 2023, 125, 103049. [Google Scholar] [CrossRef]
  49. Gultom, C.S.H.; Komala, R.; Akbar, M. Flight Attendant Training Program Evaluation Based on Kirkpatrick Model. Nusant. Sci. Technol. Proc. 2021, 2021, 352–361. [Google Scholar] [CrossRef]
  50. Toosi, M.; Modarres, M.; Amini, M.; Geranmayeh, M. Context, Input, Process, and Product Evaluation Model in medical education: A systematic review. J. Educ. Health Promot. 2021, 10, 199. [Google Scholar] [CrossRef]
  51. Valarmathi, S.; Sivaranjani, E.; Sundar, J.S.; Srinivas, G.; Kalpana, S. Evaluation of Research Methodology Workshop Using CIRO Model. J. Comm. Med. Public Health Rep. 2024, 5, 14. [Google Scholar] [CrossRef]
  52. Ching, L.K.; Lee, C.Y.; Wong, C.K.; Lai, M.T.; Lip, A. Assessing the Zoom learning experience of the elderly under the effects of COVID in Hong Kong: Application of the IPO model. Interact. Technol. Smart Educ. 2023, 20, 367–384. [Google Scholar] [CrossRef]
  53. Yuan, S.; Rahim, A.; Kannappan, S.; Dongre, A.; Jain, A.; Kar, S.S.; Mukherjee, S.; Vyas, R. Success stories: Exploring perceptions of former fellows of a global faculty development program for health professions educators. BMC Med. Educ. 2024, 24, 1072. [Google Scholar] [CrossRef]
  54. Cahapay, M. Kirkpatrick model: Its limitations as used in higher education evaluation. Int. J. Assess. Tools Educ. 2021, 8, 135–144. [Google Scholar] [CrossRef]
  55. Afifah, S.; Mudzakir, A.; Nandiyanto, A.B.D. How to calculate paired sample t-test using SPSS software: From step-by-step processing for users to the practical examples in the analysis of the effect of application anti-fire bamboo teaching materials on student learning outcomes. Indones. J. Teach. Sci. 2022, 2, 81–92. [Google Scholar] [CrossRef]
  56. Fiandini, M.; Nandiyanto, A.B.D.; Al Husaeni, D.F.; Al Husaeni, D.N.; Mushiban, M. How to calculate statistics for significant difference test using SPSS: Understanding students comprehension on the concept of steam engines as power plant. Indones. J. Sci. Technol. 2024, 9, 45–108. [Google Scholar] [CrossRef]
  57. Janczyk, M.; Pfister, R. Confidence Intervals. In Understanding Inferential Statistics; Springer: Berlin/Heidelberg, Germany, 2023; pp. 69–80. [Google Scholar] [CrossRef]
  58. Lenhard, W.; Lenhard, A. Computation of Effect Sizes. Available online: https://www.psychometrica.de/effect_size.html (accessed on 10 March 2025).
  59. Kraft, M.A. Interpreting Effect Sizes of Education Interventions. Educ. Res. 2020, 49, 241–253. [Google Scholar] [CrossRef]
  60. Robal, T.; Reinsalu, U.; Jürimägi, L.; Heinsar, R. Introducing rapid web application development with Oracle APEX to students of higher education. New Trends Comput. Sci. 2024, 2, 69–80. [Google Scholar] [CrossRef]
  61. Robal, T.; Reinsalu, U.; Leoste, J.; Jürimägi, L.; Heinsar, R. Teaching Rapid Application Development Skills for Digitalisation Challenges. In Digital Business and Intelligent Systems; Lupeikienė, A., Ralyté, J., Dzemyda, G., Eds.; Springer: Cham, Switzerland, 2024; Volume 2157. [Google Scholar] [CrossRef]
  62. Radvilaitė, U.; Kalibatienė, D.; Stankevič, J. Implementing a rapid application development course in higher education and measuring its impact using Kirkpatrick’s model: A case study at Vilnius Gediminas Technical University. New Trends Comput. Sci. 2024, 2, 81–90. [Google Scholar] [CrossRef]
  63. Bell, K. Increasing undergraduate student satisfaction in Higher Education: The importance of relational pedagogy. J. Furth. High. Educ. 2022, 46, 490–503. [Google Scholar] [CrossRef]
  64. Baig, M.I.; Yadegaridehkordi, E. Flipped classroom in higher education: A systematic literature review and research challenges. Int. J. Educ. Technol. High. Educ. 2023, 20, 61. [Google Scholar] [CrossRef]
  65. Yangari, M.; Inga, E. Educational Innovation in the Evaluation Processes within the Flipped and Blended Learning Models. Educ. Sci. 2021, 11, 487. [Google Scholar] [CrossRef]
  66. Colomo Magaña, A.; Colomo Magaña, E.; Guillén-Gámez, F.D.; Cívico Ariza, A. Analysis of Prospective Teachers’ Perceptions of the Flipped Classroom as a Classroom Methodology. Societies 2022, 12, 98. [Google Scholar] [CrossRef]
  67. Khaldi, A.; Bouzidi, R.; Nader, F. Gamification of e-learning in higher education: A systematic literature review. Smart Learn. Environ. 2023, 10, 10. [Google Scholar] [CrossRef]
  68. Mellado, R.; Cubillos, C.; Vicari, R.M.; Gasca-Hurtado, G. Leveraging Gamification in ICT Education: Examining Gender Differences and Learning Outcomes in Programming Courses. Appl. Sci. 2024, 14, 7933. [Google Scholar] [CrossRef]
  69. de la Peña, D.; Lizcano, D.; Martínez-Álvarez, I. Learning through play: Gamification model in university-level distance learning. Entertain. Comput. 2021, 39, 100430. [Google Scholar] [CrossRef]
  70. Kurtz, G.; Amzalag, M.; Shaked, N.; Zaguri, Y.; Kohen-Vacs, D.; Gal, E.; Zailer, G.; Barak-Medina, E. Strategies for Integrating Generative AI into Higher Education: Navigating Challenges and Leveraging Opportunities. Educ. Sci. 2024, 14, 503. [Google Scholar] [CrossRef]
  71. Khlaif, Z.N.; Ayyoub, A.; Hamamra, B.; Bensalem, E.; Mitwally, M.A.A.; Ayyoub, A.; Hattab, M.K.; Shadid, F. University Teachers’ Views on the Adoption and Integration of Generative AI Tools for Student Assessment in Higher Education. Educ. Sci. 2024, 14, 1090. [Google Scholar] [CrossRef]
  72. Nikolovski, V.; Trajanov, D.; Chorbev, I. Advancing AI in Higher Education: A Comparative Study of Large Language Model-Based Agents for Exam Question Generation, Improvement, and Evaluation. Algorithms 2025, 18, 144. [Google Scholar] [CrossRef]
  73. Huang, Q.; Lv, C.; Lu, L.; Tu, S. Evaluating the Quality of AI-Generated Digital Educational Resources for University Teaching and Learning. Systems 2025, 13, 174. [Google Scholar] [CrossRef]
Figure 1. The number of publications in 1988–2024 related to training or course evaluation.
Figure 2. The number of publications in 2004–2024 related to different training evaluation models.
Figure 3. The number of publications with different training evaluation models in the “Education Educational Research” area and “Computer Science” area.
Figure 4. The schema of the methodology of integration of the developed modules into universities’ study programs.
Figure 5. The approach for assessing students’ knowledge and skills in database and RAD using Kirkpatrick’s model Level 2: Learning Survey and assessing student satisfaction with the courses using Kirkpatrick’s model Level 1: Reaction Survey.
Figure 6. An example of the question for pre-test or post-test.
Figure 7. Satisfaction survey results for Module 1.
Figure 8. Satisfaction survey results for Module 2.
Figure 9. Kirkpatrick’s model Level 2: Learning Survey grades for Module 1.
Figure 10. Kirkpatrick’s model Level 2: Learning Survey grades for Module 2.
Figure 11. The results of tests for Module 1 and Module 2 according to categories.
Table 1. Comparison of most often used evaluation models.

Bloom’s taxonomy
Levels: 1. Knowledge; 2. Comprehension; 3. Application; 4. Analysis; 5. Synthesis; 6. Evaluation [39,40,41]
Usage: preparing assessment questions, planning learning outcomes and assessment [40,41]
Data collection tool: set of 30 questions, 5 for each level [39]; empirical test [40]
Purpose: determine which learning (competency) level has been achieved [40,42]

Brinkerhoff’s Success Case Method (SCM)
Levels: 1. Goal Setting; 2. Program Design; 3. Program Implementation; 4. Immediate Outcomes; 5. Intermediate or Usage Outcomes; 6. Impacts and Worth [38]
Usage: online teaching for postgraduates [43]
Data collection tool: survey and interviews [43]
Purpose: identify the additional factors impacting success or failure

Kirkpatrick’s model
Levels: 1. Reaction; 2. Learning; 3. Behavior; 4. Results [36,37,44]
Usage: training for healthcare staff [45]; medical education [46]; scientific writing workshop for medical students [47]; cybersecurity training [48]; flight attendant training program [49]
Data collection tool: questionnaire for reaction; pre-test and post-test for learning; observational checklist for behavior [45,48]
Purpose: measure the effectiveness of training and learning [38]

CIPP model
Levels: 1. Context evaluation; 2. Input evaluation; 3. Process evaluation; 4. Product evaluation [36,37,44]
Usage: formal education systems [36,37]; medical education programs [50]
Data collection tool: questionnaires [50]
Purpose: improve the curriculum or the educational program [50]

CIRO model
Levels: 1. Context; 2. Input; 3. Reaction; 4. Output [36]
Usage: research methodology workshop for postgraduate students from medical colleges [51]
Data collection tool: feedback questionnaires; follow-up test; pre- and post-test [51]
Purpose: monitor the trainee’s progress before, during and after training [38]

IPO model
Levels: 1. Input; 2. Process; 3. Output; 4. Outcome [36]
Usage: elderly students’ perceptions of their Zoom learning experiences [52]
Data collection tool: online survey and focus group interviews [52]
Purpose: maximize the efficiency of training while reducing its cost [38]
Table 2. Topics for the developed modules.

Module 1:
1. Introduction to Module 1
2. Introduction to Databases
3. Relational Databases
4. Database Normalization (1–3)
5. Physical Data Model
6. Access to Oracle APEX Environment
7. Introduction to Structured Query Language (SQL)
8. Application (App) Development in APEX (at wizard level)

Module 2:
1. Introduction to Module 2
2. APEX Course Project
3. Advanced Data Normalization (3+ additional)
4. Advanced SQL
5. App building in APEX: pages and reports
6. App building in APEX: forms
7. App building in APEX: navigation and styles
8. Other Advanced Functions in APEX
Table 3. Kirkpatrick’s model Level 1: Reaction Survey responses for Module 1 and Module 2.

Answers (share of responses, %): Module 1 / Module 2
Strongly Disagree (1): 9.17 / 4.40
Somewhat Disagree (2): 8.30 / 11.54
Neither Agree nor Disagree (3): 22.71 / 25.27
Somewhat Agree (4): 33.62 / 35.16
Strongly Agree (5): 26.20 / 23.63
Number of responses: 33 / 27
Table 4. A paired sample t-test and confidence level to measure students’ knowledge level for Module 1 and Module 2.

Attributes: Module 1 values / Module 2 values
Variance for pre-test: 1.68 / 1.54
Variance for post-test: 2.27 / 1.50
Mean for pre-test: 6.05 / 6.78
Mean for post-test: 7.23 / 7.56
DF: 79 / 77
t Stat: −3.7964 / −2.8808
P(T ≤ t) two-tail: 0.00029 / 0.00514
t Critical two-tail: 1.99045 / 1.99125
Confidence level: 0.95 / 0.95
Confidence interval: 0.378–1.29 / 0.193–1.075
Effect size: 0.838 / 0.634
Based on Table 4, the effect size for Module 1 is 0.838, which corresponds to a large effect, or the zone of desired effects, according to [58]. For Module 2, the effect size is 0.634, which corresponds to an intermediate effect, also within the zone of desired effects.
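To make the Table 4 computations concrete, the following is a minimal Python sketch (using NumPy and SciPy rather than the SPSS workflow cited in [55,56]) of a paired sample t-test, the 95% confidence interval of the pre/post gain, and a Cohen’s-d-style effect size. The score arrays and variable names are illustrative assumptions, not the study data; the effect-size formula (mean gain divided by the square root of the averaged pre/post variances) is shown because it appears consistent with the reported values of 0.838 and 0.634 given the means and variances in Table 4.

```python
# Minimal sketch of the Table 4 statistics for one module, assuming raw
# per-student pre-test and post-test scores are available as two equal-length
# arrays (the values below are illustrative placeholders, not the study data).
import numpy as np
from scipy import stats

pre = np.array([5.0, 6.0, 7.0, 4.5, 6.5, 5.5, 7.5, 6.0])   # pre-test scores
post = np.array([6.5, 7.0, 8.0, 6.0, 7.5, 7.0, 8.5, 7.5])  # post-test scores

# Paired (dependent) samples t-test: t statistic, two-tailed p-value, DF = n - 1.
t_stat, p_two_tail = stats.ttest_rel(pre, post)
df = len(pre) - 1

# 95% confidence interval for the mean post-minus-pre gain.
diff = post - pre
ci_low, ci_high = stats.t.interval(0.95, df, loc=diff.mean(), scale=stats.sem(diff))

# Effect size: mean gain divided by the square root of the averaged
# pre/post sample variances (a pooled-SD Cohen's d).
pooled_sd = np.sqrt((pre.var(ddof=1) + post.var(ddof=1)) / 2)
effect_size = diff.mean() / pooled_sd

print(f"t = {t_stat:.4f}, p = {p_two_tail:.5f}, df = {df}")
print(f"95% CI of gain: {ci_low:.3f}-{ci_high:.3f}, effect size d = {effect_size:.3f}")
```

With the Table 4 summary statistics for Module 1 (means 6.05 and 7.23, variances 1.68 and 2.27), this effect-size formula gives (7.23 − 6.05) / √((1.68 + 2.27)/2) ≈ 0.84, matching the reported 0.838 up to rounding.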
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
