Abstract
Talent cultivation is the fundamental mission of higher education institutions, and the key to improving the quality of talent cultivation lies in enhancing the quality of teaching. In China, however, research on course evaluation systems and mechanisms in application-oriented universities remains scarce, and the evaluation dimensions are often narrow; consequently, the evaluation of graduate courses faces challenges such as the lack of specialized assessment systems, limited evaluation methods, and an imbalance that emphasizes outcomes while neglecting the teaching process. In this study, a comprehensive evaluation system for the Advanced Structural Dynamics (ASD) course is constructed based on the context-input-process-product (CIPP) evaluation model. The evaluation is conducted from four perspectives: teaching objectives, teaching inputs, teaching processes, and teaching outcomes. The fuzzy analytic hierarchy process (FAHP) and the simulated annealing particle swarm algorithm (SAPSO) are employed to determine the evaluation indicators and their weights at each level of the ASD course, and the proposed method is validated through a practical example. This study combines qualitative and quantitative evaluation indicators to achieve a comprehensive assessment and adopts a more scientifically rational algorithm for weight calculation, aiming to improve the accuracy and efficiency of the weight computation. The findings can further enhance the evaluation of teaching quality and talent cultivation in graduate courses at application-oriented universities.
1. Introduction
Teaching quality is a core component of educational quality, particularly in higher education, where the execution of teaching tasks is crucial for cultivating high-quality talent and serves as a key indicator for measuring the level of university education. Among myriad courses, Advanced Structural Dynamics (ASD), as a foundational subject, has particularly stringent educational quality requirements due to its unique disciplinary characteristics and profound impact on students’ future engineering practices. Virgin [1] enhanced the teaching of structural dynamics through practical experience with 3D-printed structures. When discussing teaching tools and software resources, Brandt’s ABRAVIBE toolbox [2] serves as an example of how software can be integrated into the teaching of vibration analysis and structural dynamics. Panagiotopoulos and Manolis [3] have developed a web-based educational software that provides interactive experiments for teaching structural dynamics. ASD not only requires students to master complex theoretical knowledge, but also demands an ability to apply this knowledge to practical engineering problems, making the application of teaching methods and technology particularly important [4]. However, current ASD teaching still faces numerous challenges, including a disconnect between theory and practice, insufficient teaching resources, and low student engagement, all of which directly affect teaching quality and student learning outcomes. Therefore, establishing a comprehensive indicator evaluation system and employing scientifically accurate calculation methods to determine the weights of indicators are essential for achieving more reasonable and accurate evaluation results. Moreover, the evaluation system for course indicators is influenced by a variety of factors, some of which are difficult to describe quantitatively, and this adds to the complexity of the evaluation system design.
1.1. CIPP Model
The CIPP model is a framework for evaluating and improving projects, policies, programs, and practices that was first proposed by American scholar Stufflebeam in 1966. CIPP stands for context, input, process, and product, which form the core of the CIPP model. The CIPP model provides a comprehensive evaluation perspective, focuses on process and development, promotes multi-party involvement, emphasizes feedback and improvement, and facilitates continuous program improvement. It helps evaluators to gain a comprehensive understanding of the curriculum and to identify problems, and it provides effective improvement measures to enhance the quality and effectiveness of the curriculum. The CIPP model has been widely used in the evaluation of educational teaching programs. Some studies have shown that the CIPP model has the potential to help professionals (teachers and administrators) in Teaching English to Speakers of Other Languages (TESOL) to improve their professional practice, curriculum design, and program assessment [5]. The CIPP model has also been applied to the evaluation of TESOL programs. In addition, CIPP has been applied to English-language teaching quality assessment [6,7,8], the quality assessment of school education [9], training program evaluation [10], evaluation of the practical professions program curriculum [11], character education evaluation [12], policy evaluation of entrepreneurship practice programs [13], and the evaluation of pre-school education programs [14], etc.
1.2. Course Evaluation
The evaluation of teaching and learning curricula plays an important role in the field of education and is significant in providing feedback and opportunities for improvement, optimizing curriculum design, promoting research and innovation in teaching and learning, and supporting decision making and quality assurance, as well as enhancing the transparency and equity of teaching and learning. Through effective evaluation, the quality of education can be continuously improved and upgraded, and students can be provided with better learning opportunities and space for development. Li and Hu [15] constructed a teaching quality assurance index system suitable for colleges and universities, based on the CIPP model, and validated the effectiveness of this system in university teaching evaluation through empirical research. Keskin et al. [16] concluded that there was no significant difference in teachers’ perceptions of the mathematics curriculum according to gender and educational status. In addition, scholars who have conducted research on higher mathematics and high school geography subjects have also conducted teaching evaluation studies. In terms of professional course evaluation, Rooholamini et al. [17] provided data for undergraduate medical students’ course program evaluation. Mahmoudabad et al. [18] used the CIPP model to assess clerkship programs in the public health curriculum at Yazd University. Al-Shanawan [19] applied the CIPP model and used a mixed-method design to evaluate the self-learning curriculum of a kindergarten in Saudi Arabia, which included randomly selecting and surveying teachers, interviewing school inspectors, and conducting content analysis. In the field of civil engineering, Zhang et al. [20] developed an assessment framework based on CIPP to improve interdisciplinary BIM (building information modeling) education in highway engineering. Atmacasoy et al. [21] evaluated the Introduction to Industrial Engineering course at Sabanci University based on the CIPP model.
1.3. Intelligent Algorithms
Intelligent algorithms can provide personalized assessment and feedback based on individual students' differences and learning situations. They can identify students' areas of weakness and learning needs and customize evaluation indicators and improvement measures accordingly. Based on the evaluation results and model analysis of intelligent algorithms, teachers can understand their teaching strengths and weaknesses and make targeted improvements, which helps to improve the quality and effectiveness of teaching and better meet students' learning needs. Intelligent algorithms can also analyze and mine large-scale educational data, providing data support and decision-making references for educational decision-makers; this helps in formulating more scientific and reasonable education policies and improvement measures to enhance the quality and effectiveness of education.
Currently, simulated annealing algorithms [22,23,24], particle swarm algorithms [25,26,27], hierarchical analysis [28,29,30], genetic algorithm [31], sailfish optimization [32], the technique for order preference by similarity to an ideal solution (TOPSIS) method, artificial intelligence, and deep learning [33,34,35,36,37,38] have all been applied in various fields, such as engineering optimization and curriculum evaluation. Sun et al. [39] developed a deep learning-assisted online intelligent English-teaching system that integrates decision tree algorithms and neural networks, aiming to enhance students’ English-learning efficiency and provide personalized teaching based on students’ knowledge and personalities. Fang [40] developed an intelligent online English-teaching system based on support vector machine algorithms and complex networks to improve teaching activities and enhance teaching quality. Hamsa et al. [41] developed academic performance prediction models using decision trees and fuzzy genetic algorithms for BS and MS students in computer science and electronics and communications.
This paper combines fuzzy hierarchical analysis with a simulated annealing particle swarm hybrid algorithm, aiming to obtain more satisfactory results in evaluating the multi-indicator program and more accurate weight coefficients for the evaluation indexes, so that the evaluation reflects the opinions of both experts and students and thus promotes the construction and development of the ASD course. This paper is structured as follows: Section 2 introduces the construction of the course teaching evaluation system based on CIPP; Section 3 optimizes the weight coefficients using the simulated annealing particle swarm hybrid algorithm; Section 4 designs the weight coefficient importance survey and the teaching effect evaluation questionnaire; Section 5 determines the weight values of the evaluation indexes for the ASD course and performs the fuzzy comprehensive evaluation of its teaching effect; and Section 6 presents the conclusions.
2. Construction of CIPP-Based Course Teaching Evaluation System
2.1. CIPP Model
The CIPP evaluation model is based on the outcome-based education (OBE) model, which advocates a goal-driven approach to education and focuses not only on proving results but also on "continuous improvement": the purpose of the evaluation is to make the work program more effective. The core evaluation meanings and theoretical implications are summarized below:
- (1)
- Context (background) evaluation: Determines the basic background of each curriculum plan and the implementation of the activities of the management organization, clarifies the objectives of the evaluated objects and their specific needs, identifies opportunities for meeting those objectives, diagnoses the basic problems that must be faced, and judges whether the planned activities address practical problems.
- (2)
- Input evaluation: A comprehensive evaluation of alternative curriculum planning options in order to further assist decision-makers in rationally selecting the best means of realizing activities that will meet the curriculum objectives.
- (3)
- Process evaluation: The main objective is to accurately describe the actual implementation of the course design and to identify, through analysis or prediction, the problems that inevitably arise in the course teaching design itself or in its implementation, so as to provide decision-makers at all levels with a scientific and effective basis of evaluation information for correcting current curriculum-planning problems.
- (4)
- Outcome evaluation: Objective measurement, interpretation, and fair evaluation of the various achievements in the implementation of the existing curriculum plan, with the aim of explaining the actual value and methodological advantages of the evaluation results.
In this paper, the CIPP model is used to evaluate the ASD course. It not only pays attention to the achievement of teaching objectives, but also overcomes the limitations of the traditional target model. It integrates diagnostic evaluation, formative evaluation, and summative evaluation, and thus can form a more scientific and comprehensive evaluation model.
2.2. Construction of Teaching Evaluation System Based on Fuzzy Hierarchical Analysis Method
The fuzzy analytic hierarchy process (FAHP) is a system analysis method that combines qualitative and quantitative analysis; it extends the analytic hierarchy process (AHP) proposed by the American operations researcher T. L. Saaty in the 1970s, addressing problems of the traditional AHP and improving the reliability of decision making. The teaching evaluation steps based on FAHP are: construction of a hierarchical model, establishment of a fuzzy judgment matrix, establishment of a fuzzy consistency matrix, hierarchical single ranking, hierarchical total ranking, and fuzzy comprehensive evaluation. Chen et al. [42] proposed a novel framework for teaching performance evaluation that combines FAHP with the fuzzy comprehensive evaluation method; in this framework, the teaching evaluation hierarchy includes six factors (planning and preparation; communication and interaction; teaching for learning; managing the learning environment; student evaluation; and professionalism), each of which is further divided into two or more sub-factors. In the AHP hierarchy model proposed by Thanassoulis et al. [43], the criteria for teaching evaluation consist of two aspects: course and teacher. The course is analyzed through two sub-criteria, overall interest and the perceived practicality of the course from the students' perspective, while the teacher dimension is analyzed through four sub-criteria: preparation, professionalism, presence, and supporting material. Taking the ASD course as the evaluation object and drawing on the directions of college course evaluation reported in the previous literature, this paper develops a comprehensive teaching evaluation index system that divides the evaluation indexes across all aspects and constructs a multi-level index system consisting of a target layer, a criterion layer, and an index layer. Based on the CIPP model, the course evaluation index system is divided into four first-level indicators and 14 second-level indicators, as shown in Table 1. The four first-level indicators (teaching context, teaching input, teaching process, and teaching outcomes) cover the main aspects of course construction; they are the main factors affecting the construction of the course and the core objectives of course evaluation. The 14 second-level indicators further refine the corresponding first-level indicators and constitute the items to be evaluated within each main aspect; they are the sub-targets of the course evaluation and the evaluation elements for realizing those sub-targets.
Table 1.
Evaluation indicators for the ASD course.
2.3. Establishment of Fuzzy Consistency Matrix
The basic idea of fuzzy hierarchical analysis follows the analytical steps of the hierarchical analysis proposed by T. L. Saaty, and its core problem is establishing a fuzzy consistency matrix; how to construct such a matrix simply and objectively is therefore particularly important. Fuzzy hierarchical analysis describes objectively existing uncertainty qualitatively using the interval [0, 1]; applying the fuzzy consistency relations and fuzzy consistency matrix of fuzzy set theory to the evaluation of multiple indicators, combined with fuzzy preferences among alternatives, can yield more satisfactory results. It is therefore necessary to give the definition and properties of a fuzzy consistency matrix before carrying out the evaluation with fuzzy hierarchical analysis.
2.3.1. Definition of a Fuzzy Agreement Matrix
Noting $R = (r_{ij})_{n \times n}$, the fuzzy consistency matrix is defined as follows:
Definition 1.
Let the matrix $R = (r_{ij})_{n \times n}$ be given, where $r_{ij}$ is the element in the $i$-th row and $j$-th column of the matrix and $n$ is the total number of rows (or columns). If the matrix satisfies $0 \le r_{ij} \le 1$, then $R$ is said to be a fuzzy matrix.
Definition 2.
A fuzzy matrix $R$ is said to be a fuzzy complementary matrix if it satisfies $r_{ij} + r_{ji} = 1$ for all $i, j$; the fuzzy complementary matrix is denoted as $F$.
Definition 3.
A fuzzy complementary matrix $R$ is said to be fuzzy consistent if it satisfies $r_{ij} = r_{ik} - r_{jk} + 0.5$ for all $i, j, k$.
2.3.2. Properties of Fuzzy Consistent Matrices
Theorem 1.
Let the fuzzy matrix $R = (r_{ij})_{n \times n}$ be a fuzzy consistent matrix; then $r_{ii} = 0.5$ and $r_{ij} = 1 - r_{ji}$, the sum of the elements of the $i$-th row and the $i$-th column of $R$ equals $n$, and the sub-matrix obtained by removing any row and its corresponding column from $R$ is still a fuzzy consistent matrix.
Theorem 2.
If the fuzzy complementary matrix $F = (f_{ij})_{n \times n}$ is summed by rows, denoted as
$$r_i = \sum_{k=1}^{n} f_{ik}, \quad i = 1, 2, \ldots, n,$$
and the following transformation is implemented:
$$r_{ij} = \frac{r_i - r_j}{2n} + 0.5,$$
then the resulting matrix $R = (r_{ij})_{n \times n}$ is a fuzzy consistent matrix.
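To make the transformation in Theorem 2 concrete, the following short Python sketch (an illustrative reconstruction, not the authors' MATLAB implementation) converts a hypothetical fuzzy complementary judgment matrix into a fuzzy consistent matrix using the row-sum transformation $r_{ij} = (r_i - r_j)/(2n) + 0.5$.

```python
import numpy as np

def to_fuzzy_consistent(F):
    """Transform a fuzzy complementary matrix F (f_ij + f_ji = 1) into a
    fuzzy consistent matrix R via the row-sum transformation of Theorem 2."""
    F = np.asarray(F, dtype=float)
    n = F.shape[0]
    r = F.sum(axis=1)                              # row sums r_i
    R = (r[:, None] - r[None, :]) / (2 * n) + 0.5  # r_ij = (r_i - r_j)/(2n) + 0.5
    return R

# Hypothetical 3x3 fuzzy complementary judgment matrix
F = np.array([[0.5, 0.6, 0.7],
              [0.4, 0.5, 0.6],
              [0.3, 0.4, 0.5]])
R = to_fuzzy_consistent(F)
print(np.round(R, 3))          # R again satisfies r_ij + r_ji = 1
```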
2.3.3. Establishment of the Fuzzy Agreement Matrix
A fuzzy complementary judgment matrix is constructed by pairwise comparison of the importance of the elements: for any two factors at the same level and under the same parent indicator, a relative importance score is assigned. The 0.1–0.9 scale method is used for these comparisons because, from a psychological point of view, it reflects the largest number of grades that people can reliably distinguish when scoring. With this scale, the relative importance of any two elements with respect to a criterion can be described quantitatively; the nine-level scale is shown in Table 2.
Table 2.
Scale 0.1 to 0.9.
After obtaining the fuzzy judgment matrix $F$, the fuzzy consistency matrix $R$ is obtained by transforming it according to the property theorems of the fuzzy consistency matrix. In this evaluation model, the fuzzy matrix established from the questionnaire is a fuzzy complementary matrix $F$, and $R$ is the fuzzy consistent matrix obtained by transforming $F$ according to Theorem 2; the weights of the indicators are then configured on the basis of $R$ in order to obtain more accurate results.
2.3.4. Consistency Test
After obtaining the weight values of each indicator, a consistency test is needed in order to judge whether the calculated weight values are reasonable. In this paper, the consistency of the fuzzy consistency matrix is tested using the compatibility index between the fuzzy judgment matrix and its characteristic matrix. The definitions of the characteristic matrix $W^*$ and of the compatibility index are given below:
Definition: Let $W = (w_1, w_2, \ldots, w_n)^{T}$ be the importance weight vector computed from the fuzzy consistency matrix $R$, where $w_i$ is the importance weight coefficient, $w_i > 0$, and $\sum_{i=1}^{n} w_i = 1$. Let $w_{ij} = \dfrac{w_i}{w_i + w_j}$; then the characteristic matrix of the judgment matrix is the $n$-order matrix $W^* = (w_{ij})_{n \times n}$.
Let the matrices $A = (a_{ij})_{n \times n}$ and $B = (b_{ij})_{n \times n}$ both be fuzzy judgment matrices; the compatibility index between $A$ and $B$ is then defined as
$$I(A, B) = \frac{1}{n^{2}} \sum_{i=1}^{n} \sum_{j=1}^{n} \left| a_{ij} + b_{ji} - 1 \right|.$$
When the compatibility index satisfies $I(F, W^*) \le \alpha$, where $\alpha$ reflects the attitude of the decision-maker, the consistency of the fuzzy judgment matrix is considered to meet the requirements; the smaller the value of $\alpha$, the stricter the consistency requirement. When testing the consistency of the fuzzy judgment matrix $F$, $\alpha$ is generally taken as 0.1.
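The consistency test can be illustrated with the brief Python sketch below, which assumes the definitions given above: the characteristic matrix $w^*_{ij} = w_i/(w_i + w_j)$ and the compatibility index $I(A,B)$ with threshold $\alpha = 0.1$. The judgment matrix and weight vector are hypothetical.

```python
import numpy as np

def characteristic_matrix(w):
    """W*_ij = w_i / (w_i + w_j), built from a normalized weight vector w."""
    w = np.asarray(w, dtype=float)
    return w[:, None] / (w[:, None] + w[None, :])

def compatibility_index(A, B):
    """I(A, B) = (1/n^2) * sum_ij |a_ij + b_ji - 1|."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    n = A.shape[0]
    return float(np.abs(A + B.T - 1).sum() / n**2)

# Hypothetical fuzzy judgment matrix and weight vector
F = np.array([[0.5, 0.6, 0.7],
              [0.4, 0.5, 0.6],
              [0.3, 0.4, 0.5]])
w = np.array([0.42, 0.33, 0.25])
I = compatibility_index(F, characteristic_matrix(w))
print(I, I <= 0.1)             # consistency is acceptable when I <= alpha = 0.1
```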
3. Optimization of Weight Coefficients Based on Simulated Annealing Particle Swarm Hybrid Algorithm
3.1. Principles of the Hybrid Simulated Annealing Particle Swarm Algorithm
3.1.1. Simulated Annealing Algorithm
The idea of the simulated annealing (SA) algorithm was first proposed by N. Metropolis et al. in 1953 and was successfully introduced into the field of combinatorial optimization by S. Kirkpatrick et al. in 1983. It is a stochastic optimization algorithm based on a Monte Carlo iterative solution strategy, whose starting point is the similarity between the annealing process of solids in physics and general combinatorial optimization problems. The algorithm starts from a high initial temperature and, as the temperature parameter decreases, uses a probabilistic jumping characteristic to search randomly for the global optimum of the objective function in the solution space; that is, it can probabilistically jump out of local optima and ultimately converge to the global optimum. SA is a serial optimization algorithm that avoids becoming trapped in local minima by giving the search a time-varying probability of accepting worse solutions, a probability that eventually decays to zero. The SA algorithm accepts a new solution if it is better than the current one; otherwise, it decides whether to accept the new solution according to the Metropolis criterion.
The acceptance probability is
$$P = \begin{cases} 1, & E(t+1) < E(t) \\ \exp\!\left(-\dfrac{E(t+1) - E(t)}{T}\right), & E(t+1) \ge E(t) \end{cases}$$
where $E(t)$ is the energy of the system at time $t$; $E(t+1)$ is the energy of the system at time $t+1$; and $T$ is the temperature, which is reduced at each iteration according to the prescribed cooling rate.
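The Metropolis acceptance rule can be sketched in a few lines of Python; the objective function, step size, and geometric cooling schedule below are illustrative assumptions rather than values taken from this study.

```python
import math
import random

def accept(delta_E, T):
    """Metropolis criterion: always accept an improvement; accept a worse
    solution with probability exp(-delta_E / T)."""
    if delta_E < 0:
        return True
    return random.random() < math.exp(-delta_E / T)

# Illustrative annealing loop for a 1-D objective f(x) = (x - 2)^2
f = lambda x: (x - 2.0) ** 2
x, T, cooling = 10.0, 1.0, 0.95            # hypothetical start point and schedule
for _ in range(500):
    x_new = x + random.uniform(-0.5, 0.5)  # random neighbour of the current solution
    if accept(f(x_new) - f(x), T):
        x = x_new
    T *= cooling                           # temperature decrease
print(round(x, 3))                         # ends near the minimum x = 2
```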
3.1.2. Particle Swarm Optimization
The particle swarm optimization (PSO) algorithm is an optimization algorithm based on swarm intelligence, developed by J. Kennedy and R. C. Eberhart in 1995. PSO simulates the behavior of a flock of birds foraging for food and finds the optimal solution through collaboration and information sharing among individuals in the flock. PSO originated from the study of bird flock behavior: information sharing among individuals makes the motion of the whole flock evolve from disorder to order in the problem solution space, so that the optimal solution can be obtained. In PSO, each solution to the optimization problem is called a particle, which moves in the search space to find the optimal solution. Particles have two attributes: position, which represents their location in the search space, and velocity, which represents how fast they are moving. The PSO process mainly includes initializing the particle swarm, evaluating the particles, searching for individual extrema, searching for the global optimal solution, and modifying the velocity and position of the particles. PSO demonstrates its superiority in solving practical problems through its ease of implementation, high accuracy, and fast convergence. The velocity and position update formulas of the particle swarm algorithm are as follows:
$$v_{id}^{k+1} = v_{id}^{k} + c_1 r_1 \left(p_{id}^{k} - x_{id}^{k}\right) + c_2 r_2 \left(p_{gd}^{k} - x_{id}^{k}\right), \qquad x_{id}^{k+1} = x_{id}^{k} + v_{id}^{k+1}$$
where $v_{id}^{k}$ is the current velocity of the particle; $\left(p_{id}^{k} - x_{id}^{k}\right)$ is the distance between the current position of the particle and its own best position; $\left(p_{gd}^{k} - x_{id}^{k}\right)$ is the distance between the current position of the particle and the best position of the swarm; $i$ indexes the particle; $d$ indexes the dimension of the problem; $k$ is the iteration number; $c_1$ and $c_2$ are the learning factors, which take the value of 2.0; and $r_1$ and $r_2$ are random numbers uniformly distributed in the range 0–1.
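A minimal numpy sketch of one PSO velocity and position update follows; the learning factors $c_1 = c_2 = 2.0$ follow the text, while the swarm size, dimensionality, and best positions are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

def pso_step(x, v, p_best, g_best, c1=2.0, c2=2.0):
    """One PSO update: v' = v + c1*r1*(p_best - x) + c2*r2*(g_best - x),
    followed by the position update x' = x + v'."""
    r1, r2 = rng.random(x.shape), rng.random(x.shape)
    v_new = v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
    return x + v_new, v_new

# Hypothetical swarm of 5 particles in a 3-dimensional search space
x = rng.random((5, 3))
v = np.zeros((5, 3))
p_best, g_best = x.copy(), x[0]            # placeholder personal and global bests
x, v = pso_step(x, v, p_best, g_best)
print(x.shape, v.shape)
```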
3.2. Hybrid Algorithms
PSO simulates a flock of birds with massless particles that have two attributes, velocity and position, and each particle searches the solution space for the optimal solution. SAPSO retains the basic operation process of PSO and introduces a simulated annealing mechanism to adaptively adjust the self-cognition and social-cognition terms of PSO, with the aim of achieving global convergence and an effective balance between global and local search. The execution process is as follows: an initial population is randomly generated; a random search is started and new individuals are generated through the basic PSO update; simulated annealing is applied to the generated local optimal individuals to judge whether they can be accepted as individuals of the next generation; and this is repeated until the optimal result is found. The introduction of the simulated annealing principle not only enhances the global search ability of PSO but also largely alleviates its tendency toward premature convergence. Before the improvement, SAPSO uses the standard PSO velocity and position updates given above.
In the algorithm before improvement, the velocity update depends mainly on the particle's velocity at the previous moment and its current position; when the first term of the formula is 0, the update is related only to the current position, and the algorithm can then easily fall into a local optimum. Therefore, in order to improve the global search ability of the algorithm, the inertia weight of the PSO is improved in this study by introducing an inertia weight coefficient $\omega$, a proportionality coefficient applied to the velocity of the previous moment. The setting of the inertia weight has an important influence on the convergence speed and the result of the algorithm: the larger the weight, the stronger the global search ability; conversely, a smaller weight favors local search. In solving practical problems, $\omega$ can therefore be adjusted continually to reach the desired search results.
The velocity update equation after adding the inertia weight is
$$v_{id}^{k+1} = \omega\, v_{id}^{k} + c_1 r_1 \left(p_{id}^{k} - x_{id}^{k}\right) + c_2 r_2 \left(p_{gd}^{k} - x_{id}^{k}\right)$$
where the inertia weight $\omega$ is computed from the maximum and minimum inertia weights $\omega_{\max}$ and $\omega_{\min}$, which generally range from 0.4 to 0.95, the iteration number, and a hyperbolic tangent function.
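Combining the two mechanisms, the sketch below shows one plausible organization of the SAPSO loop: a PSO update with an iteration-dependent inertia weight (here an assumed tanh-based decay between $\omega_{\max}$ and $\omega_{\min}$, since the paper's exact formula is not reproduced) and a Metropolis test that can probabilistically accept a worse personal best so the search can escape local optima. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def sapso(f, dim, n_particles=30, iters=200, w_max=0.95, w_min=0.4,
          c1=2.0, c2=2.0, T0=1.0, cooling=0.95):
    """Illustrative SAPSO: PSO with a decaying inertia weight plus a
    simulated-annealing acceptance test on each particle's personal best."""
    x = rng.uniform(-5, 5, (n_particles, dim))
    v = np.zeros_like(x)
    p_best = x.copy()
    p_val = np.array([float(f(p)) for p in x])
    best_idx = int(p_val.argmin())
    g_best = p_best[best_idx].copy()
    best_x, best_val = g_best.copy(), float(p_val[best_idx])
    T = T0
    for k in range(iters):
        # Assumed tanh-based inertia weight decay between w_max and w_min
        w = w_max - (w_max - w_min) * np.tanh(3.0 * k / iters)
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)
        v = np.clip(v, -1.0, 1.0)          # simple velocity clamp to keep the sketch stable
        x = x + v
        for i in range(n_particles):
            fi = float(f(x[i]))
            if fi < best_val:              # track the overall best found so far
                best_x, best_val = x[i].copy(), fi
            delta = fi - p_val[i]
            # Metropolis test: accept improvements, or worse points with prob exp(-delta/T)
            if delta < 0 or rng.random() < np.exp(-delta / T):
                p_best[i], p_val[i] = x[i].copy(), fi
        g_best = p_best[int(p_val.argmin())].copy()
        T *= cooling
    return best_x, best_val

# Example: minimize the sphere function sum(z_i^2)
best, val = sapso(lambda z: float(np.sum(z ** 2)), dim=4)
print(np.round(best, 3), round(val, 6))
```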
3.3. Validation of the Performance of the Hybrid Algorithm
In order to evaluate the actual optimization performance of the improved SAPSO, common benchmark test functions for optimization algorithms are introduced for testing. The iteration process on these test functions is shown in Figure 1.
Figure 1.
Test function iteration process.
3.4. Weight Calculation of SAPSO Algorithm
The improved simulated annealing particle swarm algorithm is used to solve the objective function, namely the fuzzy consistency matrix established in the evaluation of the ASD course, so as to obtain the weight coefficients of the criterion layer and the indicator layer; the solution process is implemented in Matlab 2023b. The fuzzy consistency matrix is processed with the improved algorithm to solve for the weight coefficients with optimal consistency. The process of evaluating the ASD course is shown in Figure 2.
Figure 2.
Evaluation process of the ASD course.
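The text does not spell out the objective function minimized by SAPSO when solving for the weights; one plausible formulation, consistent with the compatibility index of Section 2.3.4, is to search for a normalized weight vector whose characteristic matrix $W^*$ is maximally compatible with the fuzzy consistency matrix $R$. The sketch below (reusing the hypothetical `sapso` helper from Section 3.2 and a placeholder 4 x 4 matrix) illustrates this idea.

```python
import numpy as np

def weight_objective(theta, R):
    """Map unconstrained variables to a normalized positive weight vector via
    softmax, then score it by the compatibility index I(R, W*)."""
    w = np.exp(theta - np.max(theta))
    w = w / w.sum()
    W_star = w[:, None] / (w[:, None] + w[None, :])
    n = R.shape[0]
    return float(np.abs(R + W_star.T - 1).sum() / n ** 2)

# Hypothetical 4x4 fuzzy consistency matrix for the criterion layer
R = np.array([[0.500, 0.550, 0.525, 0.575],
              [0.450, 0.500, 0.475, 0.525],
              [0.475, 0.525, 0.500, 0.550],
              [0.425, 0.475, 0.450, 0.500]])

# Reuse the illustrative sapso() helper sketched in Section 3.2
theta_best, _ = sapso(lambda t: weight_objective(t, R), dim=4)
w = np.exp(theta_best - np.max(theta_best))
w = w / w.sum()
print(np.round(w, 3))   # candidate weight vector; check I(R, W*) <= 0.1
```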
4. Teaching Questionnaire
4.1. Survey of the Importance of Weighting Factors
The survey on the importance of the weighting factors for each indicator layer of the ASD course evaluation was conducted by means of an online questionnaire (Questionstar), administered mainly to the master's degree students of the class of 2022 taught by Professor Huang Minshu at the Wuhan Institute of Technology, a total of 50 students from the civil engineering program, and organized according to the four teaching evaluation criterion layers and the 14 indicator layers under them (Figure 3).
Figure 3.
Questionnaire on the importance of weighting coefficients of course evaluation indicators.
The scoring scale is set from 0.1 to 0.9, and each evaluation element and its corresponding scale are shown in Table 1. The specific scoring method refers to Table 2 and compares two evaluation elements: a scale of 0.5 means equally important, 0.6 slightly important, 0.7 obviously important, 0.8 strongly important, and 0.9 extremely important, with a higher scale indicating greater importance of the former element. Scales of 0.1–0.4 are used for the inverse comparison; that is, if the latter element is more important than the former with scale s, the former is scored 1 − s. To facilitate statistical calculations, the original scale is multiplied by 10 when setting the options in the questionnaire, i.e., option 5 represents a scale of 0.5, and so on. For example, when "teaching context" B1 is compared with "teaching outcomes" B4, if the respondent considers "teaching context" slightly more important than "teaching outcomes", the scale is 0.6, corresponding to option 6; if "teaching outcomes" is slightly more important than "teaching context", the scale is 0.4, corresponding to option 4. The design of the questionnaire page is shown in Figure 3.
4.2. Teaching Effectiveness Evaluation Survey
The evaluation survey of teaching effectiveness in the ASD course was also conducted anonymously through an online questionnaire designed with Questionstar: civil engineering graduate students of the class of 2022 at the Wuhan Institute of Technology who participated in the course, as well as other relevant personnel, were invited to evaluate the course on each of the 14 indicators, one by one. Each indicator is scored according to the percentage system: [90, 100] is excellent; [80, 90) is good; [60, 80) is average; and below 60 is poor. The elements of each indicator level and their corresponding descriptions are given in Table 1. The questionnaire design is shown in Figure 4.
Figure 4.
Questionnaire for evaluating the effectiveness of course teaching.
5. Evaluation of Teaching Effectiveness
5.1. Determination of Evaluation Indicator Weights
Through the online questionnaire survey (Questionstar), the indicators at all levels of the established teaching evaluation index system were scored according to the 0.1–0.9 scale method (Table 2). The questionnaire results of the 50 civil engineering graduate students of the class of 2022 in the ASD teaching class at the Wuhan Institute of Technology were collected, and the scoring results for each indicator were normalized to obtain five fuzzy judgment matrices: A (teaching evaluation), B1 (teaching context), B2 (teaching input), B3 (teaching process), and B4 (teaching outcomes). The fuzzy judgment matrices are shown in Table 3, Table 4, Table 5, Table 6 and Table 7.
Table 3.
Teaching evaluation fuzzy judgment matrix F-A.
Table 4.
Teaching context fuzzy judgment matrix F-B1.
Table 5.
Fuzzy judgment matrix of teaching inputs F-B2.
Table 6.
Fuzzy judgment matrix of teaching process F-B3.
Table 7.
Fuzzy judgment matrix for teaching outcomes F-B4.
The fuzzy judgment matrices established from the evaluation results are then transformed according to Theorem 2 for fuzzy consistent matrices, and the resulting fuzzy consistency matrices are shown in Table 8, Table 9, Table 10, Table 11 and Table 12:
Table 8.
Teaching evaluation fuzzy agreement matrix R-A.
Table 9.
Teaching context fuzzy consistency matrix R-B1.
Table 10.
Fuzzy consistency matrix of teaching inputs R-B2.
Table 11.
Instructional process matrix fuzzy consistency matrix R-B3.
Table 12.
Fuzzy consistency matrix of instructional outcomes R-B4.
After obtaining the fuzzy consistency matrices, a consistency test is needed. In Matlab, the simulated annealing particle swarm algorithm is used to calculate the weight vector $W$ of the indicators; the compatibility index of each fuzzy judgment matrix is then calculated to test the consistency of the results, and the indicator weights are summarized in the single ranking and the total ranking. Taking the calculation for teaching evaluation A as an example:
According to the fuzzy consistency matrix of teaching evaluation A, combined with simulated annealing particle swarm algorithm, the weight vector is calculated as
From the definition of the characteristic matrix in Section 2.3.4, the characteristic matrix $W^*$ of teaching evaluation A is calculated as
The compatibility index between the fuzzy judgment matrix F-A and its characteristic matrix $W^*$ is then calculated from the definition of the compatibility index and is less than 0.1, so the fuzzy judgment matrix F-A can be considered satisfactorily consistent, and the weight set calculated through the fuzzy consistency matrix combined with the simulated annealing particle swarm algorithm is reasonable and meets the requirements. By calculating and verifying each matrix in turn, the single-level ranking of the weights of each evaluation index is obtained, and the results are shown in Table 13, Table 14, Table 15, Table 16 and Table 17:
Table 13.
Calculation of teaching evaluation A.
Table 14.
Calculation results of teaching context B1.
Table 15.
Calculation of teaching input B2.
Table 16.
Calculation results of teaching process B3.
Table 17.
Calculation of teaching and learning outcome B4.
After calculation, the compatibility index between each fuzzy judgment matrix and its characteristic matrix is less than 0.1; therefore, the consistency test of the fuzzy judgment matrices passes and the reasonableness of the calculated weight allocation is verified. The calculation results give the single ranking of the relative weights of the elements at each level relative to the level above; the weight indicators are summarized in the hierarchical analysis structure chart (Figure 5), from which the relative weight of each element can be seen clearly.
Figure 5.
Calculation of evaluation indicator weights.
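The hierarchical total ranking mentioned above (the global weight of a second-level indicator is the product of its single-level weight and the weight of its parent first-level indicator) can be expressed in a few lines; the grouping of the 14 indicators and all numerical weights below are placeholders, not the values reported in Tables 13–17 or Figure 5.

```python
# Hypothetical single-level weights (placeholders, not the paper's results)
first_level = {"B1": 0.25, "B2": 0.25, "B3": 0.30, "B4": 0.20}
second_level = {
    "B1": {"C11": 0.40, "C12": 0.35, "C13": 0.25},
    "B2": {"C21": 0.30, "C22": 0.35, "C23": 0.35},
    "B3": {"C31": 0.25, "C32": 0.25, "C33": 0.25, "C34": 0.25},
    "B4": {"C41": 0.25, "C42": 0.25, "C43": 0.25, "C44": 0.25},
}

# Total ranking: global weight = first-level weight x single-level weight
total = {c: round(first_level[b] * w, 4)
         for b, subs in second_level.items()
         for c, w in subs.items()}
print(sum(total.values()))   # the global weights sum to 1.0
print(total)
```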
5.2. Fuzzy Integrated Evaluation
The fuzzy comprehensive evaluation method is a comprehensive evaluation method based on fuzzy mathematics, in which the fuzzy objects and the fuzzy concepts reflecting certain properties of those objects are treated as a fuzzy set, and qualitative evaluation is converted into quantitative evaluation through the membership functions of fuzzy mathematics. After deriving the weights of the indicators at each indicator level, the course evaluation was implemented by inviting students who took the course in the current semester, as well as students who took it in previous semesters, to complete an online questionnaire survey on the course.
5.2.1. Commentary Set Setting
The course evaluation indicator system has been established: the four first-level evaluation indicators and 14 second-level evaluation indicators form the indicator set C of this evaluation, whose composition corresponds to the first-level and second-level indicator layers of the evaluation indicator system.
The evaluation set, i.e., the collection of evaluation levels, is defined in accordance with general evaluation practice. This paper divides the evaluation set into four levels; the score interval and assigned value of each level are given in Table 18:
Table 18.
Classification of evaluation indicators and their assigned values.
Through the establishment of the indicator set and the evaluation set, the comprehensive evaluation scoring of the ASD course can be performed; the collected data are normalized to obtain the membership value of each evaluation indicator and, from these, the final score of the course. The evaluation matrix is a fuzzy mapping from the indicator set to the evaluation set, and the final score statistics of the comprehensive evaluation are computed using the weight values established for each indicator. The vector composed of the indicator weights calculated above is denoted as $W$; the fuzzy comprehensive evaluation result is obtained by combining the weights of the second-level evaluation indicators with those of the first-level indicators, i.e., by composing the weight vector with the evaluation matrix ($B = W \cdot R$), and the final evaluation grade is determined using the principle of maximum membership.
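A compact sketch of the weighted-average fuzzy comprehensive evaluation described above: each second-level membership matrix is combined with its weight vector, the resulting vectors are stacked into the first-level evaluation matrix, and the final membership vector and composite score follow. All membership values, weights, and grade values here are placeholders rather than the survey results.

```python
import numpy as np

# Hypothetical membership matrix for B1 (rows: its second-level indicators;
# columns: the four grades excellent / good / average / poor)
R_B1 = np.array([[0.8, 0.2, 0.0, 0.0],
                 [0.6, 0.3, 0.1, 0.0],
                 [0.7, 0.2, 0.1, 0.0]])
W_B1 = np.array([0.40, 0.35, 0.25])         # placeholder second-level weights

S_B1 = W_B1 @ R_B1                           # weighted-average evaluation vector for B1
# S_B2, S_B3 and S_B4 are obtained in the same way; here S_B1 is reused as a placeholder
S = np.vstack([S_B1, S_B1, S_B1, S_B1])      # first-level comprehensive evaluation matrix
A = np.array([0.25, 0.25, 0.30, 0.20])       # placeholder first-level weights
B = A @ S                                    # final membership vector over the four grades
grades = np.array([95, 85, 70, 50])          # assumed grade values for the four levels
print(np.round(B, 3))                        # the grade with maximum membership decides the level
print(float(B @ grades))                     # composite score
```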
5.2.2. Calculation of Integrated Evaluation Results
The evaluation panel for the ASD course was established, and course participants and teachers were invited to rate each indicator of the course evaluation; the ratings were given according to the evaluation set identified above. For example, in the evaluation of the teaching objectives C11 under teaching context B1, 80% of the members of the evaluation panel rated the indicator "excellent", 20% rated it "good", and no one rated it "average" or "poor"; the membership vector for this indicator is therefore (0.8, 0.2, 0, 0), based on the proportion of each rating.
The results of this evaluation team’s scoring for each indicator of the course’s evaluation are summarized in Table 19.
Table 19.
Results of the evaluation team’s scores for each indicator of the course evaluation.
The evaluation matrix was obtained:
A weighted average type of comprehensive evaluation model was used to obtain a vector of evaluation results for teaching context B1:
The vector of evaluation results for instructional input B2, instructional process B3, and instructional outcomes B4 is derived in the same way:
This results in a comprehensive evaluation matrix :
The final result is calculated by combining the weights of the evaluation indicators at the first level. The calculation gives the evaluation for the ASD course as
According to the final calculation results, combined with all the evaluation indicators and weights, the degree to which students and experts recognize the ASD course as "excellent" reaches 74.35%; the degree of membership in "good" is 21.94%; the degree of membership in "average" is only 3.81%; and the degree of membership in "poor" is 0. According to the principle of maximum membership in fuzzy evaluation, the evaluation result of this course is "excellent". Combining the grade values of the evaluation set, the composite rating is calculated as 92.273, which falls within the "excellent" range of the evaluation set.
6. Conclusions
In this paper, a fuzzy hierarchical analysis, integrated with a simulated annealing particle swarm algorithm, was employed to assess the ASD course. As a foundational course within the civil engineering discipline, the course has garnered significant recognition from evaluators following continuous endeavors to develop and enhance its content and teaching conditions. The weighting of the course evaluation indicators reveals that the ASD teaching team at the Wuhan Institute of Technology is relatively outstanding and that the course material is abundant. However, future iterations of the course should focus on improving the teaching environment, as well as upgrading the hardware and software teaching facilities. This study truly exemplifies the principle of “enhancing learning and teaching through evaluation”, offering valuable insights for the future development and construction of this course. The main findings of this study are as follows:
- (1)
- In indicator C33 (student activity), teamwork involves structural dynamics analysis tasks based on theoretical and numerical methods. Teachers require students to choose methods and write MATLAB programs to conduct structural dynamic analysis after understanding the theory of ASD. Homework statistics show that most students opt to use the finite element method, considering it a superior approach to solving dynamic problems due to its ability to handle complex issues and its high precision, efficiency, and flexibility.
- (2)
- Essentially, science is derived from experience and practice; yet, it transcends them, and intuitions, being a significant source of innovation, serve as an important bridge from experience and practice to science. In this study, the extension of students’ engineering intuition in structural vibration can be assessed through intuitive perception, logical analysis processes (indicator C42, student capacity building), and experimental operation skills (indicator C22, base and test equipment). Only when students have a deep understanding of the key points of the ASD course and possess engineering intuition can they draw inferences about other matters in subsequent professional courses and have an added advantage in future engineering practice.
Although this study reveals the advantages of the ASD course in terms of the teaching team and course content, the evaluation results also point out areas for improvement, particularly in the teaching environment and facilities, as well as in enhancing students’ engineering intuition. This reflects an important principle in educational evaluation practice: evaluation is not only a summary of past achievements, but also a guide for future development directions. In order to continuously improve the teaching quality of the ASD course, we recommend adopting the following measures:
- (1)
- Optimization of the teaching environment: Consider introducing more flexible and modern classroom layouts to promote teacher–student interaction and cooperative learning among students. At the same time, increase natural lighting and ventilation in classrooms to create a more comfortable learning atmosphere.
- (2)
- With the rapid development of civil engineering technology, it is necessary to update the hardware and software facilities required for teaching to ensure that students have access to the latest technical tools. For example, introduce advanced structural analysis software and virtual reality (VR) or augmented reality (AR) technologies to provide more intuitive and engaging learning experiences.
- (3)
- Establish a regular evaluation mechanism to collect feedback from students, teachers, and industry experts, and adjust course content, teaching methods, and facility configurations in a timely manner. Through continuous evaluation, ensure that the ASD course remains synchronized with industry demands, cultivating more civil engineering talents with practical abilities and innovative spirits.
- (4)
- To cultivate and enhance students’ engineering intuition, firstly, continuous practice and experience should be used to strengthen intuitive feelings, recognizing that the development of this intuition is a gradual process that requires constant sensation, insight, and reinforcement. Secondly, through experimental teaching, students should be encouraged to conduct structural dynamics experiments themselves to gain hands-on experience, which further clarifies and reinforces the basic theories and concepts taught in class. Lastly, engineering case teaching should be employed to foster students’ attention to relevant engineering cases, which complements the development of engineering intuition.
Author Contributions
Conceptualization, M.H. and D.T.; Methodology, M.H., Z.H. and D.T.; Resources, M.H.; Writing—original draft, J.Z. and Z.H.; Writing—review & editing, Z.D., Z.H. and M.H.; Funding acquisition, D.T. and M.H. All authors have read and agreed to the published version of the manuscript.
Funding
The research is funded by the 2022 Construction Project of Top-Class Graduate Courses of the Wuhan Institute of Technology (2022GFC13) and the 2023 Educational and Scientific Research Project of the Hubei Provincial Higher Education Association (2023XD098).
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Acknowledgments
The authors would like to thank the journal editors and anonymous reviewers for their valuable and thought-provoking comments and suggestions. The authors remain responsible for any errors or mistakes.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- Virgin, L. Enhancing the teaching of structural dynamics using additive manufacturing. Eng. Struct. 2017, 152, 750–757.
- Brandt, A. The ABRAVIBE toolbox for teaching vibration analysis and structural dynamics. In Special Topics in Structural Dynamics, Volume 6: Proceedings of the 31st IMAC, A Conference on Structural Dynamics, 2013; Springer: New York, NY, USA, 2013; pp. 131–141.
- Panagiotopoulos, C.G.; Manolis, G.D. A web-based educational software for structural dynamics. Comput. Appl. Eng. Educ. 2016, 24, 599–614.
- Kamiński, M. Symbolic computations in modern education of applied sciences and engineering. Comput. Assist. Methods Eng. Sci. 2022, 15, 143–163.
- Sopha, S.; Nanni, A. The CIPP model: Applications in language program evaluation. J. Asia TEFL 2019, 16, 1360.
- Ebtesam, E.; Foster, S. Implementation of CIPP model for quality evaluation at Zawia University. Int. J. Appl. Linguist. Engl. Lit. 2019, 8, 106.
- Darma, I.K. The effectiveness of teaching program of CIPP evaluation model. Int. Res. J. Eng. IT Sci. Res. 2019, 5, 1–13.
- Agustina, N.Q.; Mukhtaruddin, F. The CIPP Model-Based Evaluation on Integrated English Learning (IEL) Program at Language Center. Engl. Lang. Teach. Educ. J. 2019, 2, 22–31.
- Aziz, S.; Mahmood, M.; Rehman, Z. Implementation of CIPP Model for Quality Evaluation at School Level: A Case Study. J. Educ. Educ. Dev. 2018, 5, 189–206.
- Umam, K.A.; Saripah, I. Using the Context, Input, Process and Product (CIPP) model in the evaluation of training programs. Int. J. Pedagog. Teach. Educ. 2018, 2, 183–194.
- Rachmaniar, R.; Yahya, M.; Lamada, M. Evaluation of Learning through Work Practices Industry Program at University with the CIPP Model Approach. Int. J. Environ. Eng. Educ. 2021, 3, 59–68.
- Haryono, H.; Florentinus, T.S. The evaluation of the CIPP model in the implementation of character education at junior high school. Innov. J. Curric. Educ. Technol. 2018, 7, 65–77.
- Eryanto, H.; Swaramarinda, D.R.; Nurmalasari, D. Effectiveness of entrepreneurship practice program: Using CIPP program evaluation. J. Entrep. Educ. 2019, 22, 1–10.
- Basaran, M.; Dursun, B.; Gur Dortok, H.D.; Yilmaz, G. Evaluation of Preschool Education Program According to CIPP Model. Pedagog. Res. 2021, 6, em0091.
- Li, Y.; Hu, C. The Evaluation Index System of Teaching Quality in Colleges and Universities: Based on the CIPP Model. Math. Probl. Eng. 2022, 2022, 1–8.
- Keskin, I. Evaluation of the Curriculum of High School Mathematics According to CIPP Model. Bull. Educ. Res. 2020, 42, 183–214.
- Rooholamini, A.; Amini, M.; Bazrafkan, L.; Dehghani, M.R.; Esmaeilzadeh, Z.; Nabeiei, P.; Rezaee, R.; Kojuri, J. Program evaluation of an integrated basic science medical curriculum in Shiraz Medical School, using CIPP evaluation model. J. Adv. Med. Educ. Prof. 2017, 5, 148.
- Mazloomy Mahmoudabad, S.S.; Moradi, L. Evaluation of Externship curriculum for public health Course in Yazd University of Medical Sciences using CIPP model. Educ. Strateg. Med. Sci. 2018, 11, 28–36.
- Al-Shanawani, H.M. Evaluation of Self-Learning Curriculum for Kindergarten Using Stufflebeam's CIPP Model. Sage Open 2019, 9, 2158244018822380.
- Zhang, J.; Zhao, C.; Wang, J.; Li, H.; Huijser, H. Evaluation framework for an interdisciplinary BIM capstone course in highway engineering. Int. J. Eng. Educ. 2020, 36, 1889–1900.
- Atmacasoy, A.; Ok, A.; Şahin, G. An evaluation of introduction to industrial engineering course at Sabanci University using CIPP model. In Proceedings of the International Conference Engineering Education for Sustainable Development (EESD), Glassboro, NJ, USA, 3–6 June 2018.
- Bellio, R.; Ceschia, S.; Di Gaspero, L.; Schaerf, A.; Urli, T. Feature-based tuning of simulated annealing applied to the curriculum-based course timetabling problem. Comput. Oper. Res. 2016, 65, 83–92.
- Leite, N.; Melício, F.; Rosa, A.C. A fast simulated annealing algorithm for the examination timetabling problem. Expert Syst. Appl. 2019, 122, 137–151.
- Mafarja, M.M.; Mirjalili, S. Hybrid Whale Optimization Algorithm with simulated annealing for feature selection. Neurocomputing 2017, 260, 302–312.
- Liu, T.; Yin, S. An improved particle swarm optimization algorithm used for BP neural network and multimedia courseware evaluation. Multimed. Tools Appl. 2017, 76, 11961–11974.
- Huang, M.; Zhang, J.; Li, J.; Deng, Z.; Luo, J. Damage identification of steel bridge based on data augmentation and adaptive optimization neural network. Struct. Health Monit. Int. J. 2024.
- Huang, M.; Cheng, S.; Zhang, H.; Gul, M.; Lu, H. Structural Damage Identification Under Temperature Variations Based on PSO-CS Hybrid Algorithm. Int. J. Struct. Stab. Dyn. 2019, 19, 1950139.
- Lin, H.-F. An application of fuzzy AHP for evaluating course website quality. Comput. Educ. 2010, 54, 877–888.
- Lucas, R.I.; Promentilla, M.A.; Ubando, A.; Tan, R.G.; Aviso, K.; Yu, K.D. An AHP-based evaluation method for teacher training workshop on information and communication technology. Eval. Program Plan. 2017, 63, 93–100.
- Yu, D.; Kou, G.; Xu, Z.; Shi, S. Analysis of Collaboration Evolution in AHP Research: 1982–2018. Int. J. Inf. Technol. Decis. Mak. 2021, 20, 7–36.
- Huang, M.; Gul, M.; Zhu, H. Vibration-Based Structural Damage Identification under Varying Temperature Effects. J. Aerosp. Eng. 2018, 31, 04018014.
- Huang, M.; Ling, Z.; Sun, C.; Lei, Y.; Xiang, C.; Wan, Z.; Gu, J. Two-stage damage identification for bridge bearings based on sailfish optimization and element relative modal strain energy. Struct. Eng. Mech. 2023, 86, 715–730.
- Deng, Z.; Huang, M.; Wan, N.; Zhang, J. The Current Development of Structural Health Monitoring for Bridges: A Review. Buildings 2023, 13, 1360.
- Huang, M.; Zhang, J.; Hu, J.; Ye, Z.; Deng, Z.; Wan, N. Nonlinear modeling of temperature-induced bearing displacement of long-span single-pier rigid frame bridge based on DCNN-LSTM. Case Stud. Therm. Eng. 2024, 53, 103897.
- Zhang, J.; Huang, M.; Wan, N.; Deng, Z.; He, Z.; Luo, J. Missing measurement data recovery methods in structural health monitoring: The state, challenges and case study. Measurement 2024, 231, 114528.
- Wan, N.; Huang, M.; Lei, Y. High-Efficiency Finite Element Model Updating of Bridge Structure Using a Novel Physics-Guided Neural Network. Int. J. Struct. Stab. Dyn. 2024, 2650006.
- Huang, M.; Wan, N.; Zhu, H. Reconstruction of structural acceleration response based on CNN-BiGRU with squeeze-and-excitation under environmental temperature effects. J. Civ. Struct. Health Monit. 2024, 1–19.
- Xu, W.; Ouyang, F. A systematic review of AI role in the educational system based on a proposed conceptual framework. Educ. Inf. Technol. 2022, 27, 4195–4223.
- Sun, Z.; Anbarasan, M.; Praveen Kumar, D. Design of online intelligent English teaching platform based on artificial intelligence techniques. Comput. Intell. 2021, 37, 1166–1180.
- Fang, C. Intelligent online English teaching system based on SVM algorithm and complex network. J. Intell. Fuzzy Syst. 2021, 40, 2709–2719.
- Hamsa, H.; Indiradevi, S.; Kizhakkethottam, J.J. Student academic performance prediction model using decision tree and fuzzy genetic algorithm. Procedia Technol. 2016, 25, 326–332.
- Chen, J.-F.; Hsieh, H.-N.; Do, Q.H. Evaluating teaching performance based on fuzzy AHP and comprehensive evaluation approach. Appl. Soft Comput. 2015, 28, 100–108.
- Thanassoulis, E.; Dey, P.K.; Petridis, K.; Goniadis, I.; Georgiou, A.C. Evaluating higher education teaching performance using combined analytic hierarchy process and data envelopment analysis. J. Oper. Res. Soc. 2017, 68, 431–445.