Article

Identifying Key Factors Influencing Teaching Quality: A Computational Pedagogy Approach

1 School of Computer and Artificial Intelligence, Huaihua University, Huaihua 418000, China
2 Key Laboratory of Intelligent Control Technology for Wuling-Mountain Ecological Agriculture in Hunan Province, Huaihua 418000, China
3 Key Laboratory of Wuling-Mountain Health Big Data Intelligent Processing and Application in Hunan Province Universities, Huaihua 418000, China
* Author to whom correspondence should be addressed.
Systems 2023, 11(9), 455; https://doi.org/10.3390/systems11090455
Submission received: 18 July 2023 / Revised: 28 August 2023 / Accepted: 31 August 2023 / Published: 2 September 2023

Abstract
Although previous research has explored the correlation between teacher characteristics and teaching quality, effective methods for identifying key factors that influence teaching quality are still lacking. This study aims to address this issue by developing an identification methodology based on a computational pedagogy research paradigm to identify the key characteristics of teachers and courses that influence their teaching quality. We developed quantitative models to quantify the characteristics of teaching quality, based on those identified in previous studies. Correlation and multiple correlation analyses were conducted to identify the key influencing characteristics, and grey correlation analysis was used to calculate the degree of correlation between these key characteristics and teaching quality. Our methodology was applied to 27 computer science discipline teachers and 82 courses, and validated with teaching data from eight additional teachers. Our findings demonstrate the effectiveness of our method in identifying the key influence characteristics of teachers and courses on teaching quality and confirm significant correlations between these key influential characteristics and teaching quality. This innovative approach provides new insights and tools for predicting and improving the teaching quality across disciplinary majors. Our research has significant implications for future education studies, particularly for the development of effective methods for identifying key factors that influence teaching quality. By providing a more comprehensive understanding of the key factors that influence teaching quality, our study can inform the development of evidence-based strategies to improve the teaching effectiveness for different disciplinary majors.

1. Introduction

Enhancing the quality of undergraduate education depends on improving course teaching. However, teaching quality varies across teachers and courses. Scholars [1,2,3,4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26,27,28] have investigated whether a correlation exists between course-teaching quality and factors such as teachers’ educational background, degree, professional title, gender, age, teaching experience, Pedagogical Content Knowledge (PCK), Technological Pedagogical Content Knowledge (TPACK), teacher burnout, teaching style, academic achievement, course difficulty, course type, and teaching evaluation. These studies suggest a complex relationship between course-teaching quality and the characteristics of teachers and course objects. Nevertheless, research has yet to disclose the nature and laws of these relationships, owing to insufficient computing technology and data support, and to the theoretical and practical difficulties of integrating technology into education during the era of offline classroom teaching.
Implementing a new generation of information technology, particularly artificial intelligence (AI), has significantly impacted course teaching and made course information construction and teaching the norm, resulting in vast data. Computational pedagogy [29,30,31,32,33,34,35], an educational research paradigm based on massive data computing, has become an essential approach to educational research. It aims to construct educational theories; solve educational problems; reveal the laws of teaching and learning; explore the nature and laws of education in depth; and effectively apply data, algorithms, and technologies in educational practice. This shift in perspective and value in educational research provides new research methods and approaches to uncovering the covariation between teacher characteristics and course quality.
Using computational pedagogy as a perspective and computer science courses as an example, this study aims to analyze the key factors influencing characteristics that have covariance with course teaching quality; calculate the degree of association of these factors with course teaching quality; and uncover the hidden covariance among the relationships by intelligently collecting, organizing, and quantifying a large number of relevant characteristics related to teachers and courses. Universities can use this study to improve teaching quality by determining course teaching arrangements.

2. Literature Review

In recent years, scholars have studied the characteristics that affect the quality of teaching in courses and computational pedagogy.

2.1. Correlation between Teacher Characteristics and Teaching Quality and Evaluation

(1) Teachers’ Characteristics and Teaching Quality.
Various studies have explored the relationship between teacher characteristics and teaching quality to improve educational outcomes. Saloviita et al. [1] identified a correlation between teacher burnout and teaching quality, and Tan et al. [2] found that teaching age had a significant influence on burnout. Palali et al. [3] found a nonlinear positive association between teacher scholarships and teaching quality. Sacre et al. [4] demonstrated that research-active teachers exhibit a higher teaching quality. Kulgemeyer et al. [5] discovered a correlation between teachers’ PCK and their teaching quality. Li et al. [6] observed variations in the TPACK sub-dimensions based on teachers’ educational levels. Ma et al. [7] identified significant differences in teaching quality between teachers with different academic titles. Han [8] determined that students’ evaluation of teaching quality was not significantly influenced by teachers’ gender, but varied based on their titles. Deng et al. [9] conducted ANOVA tests on student evaluations, and found no significant differences based on different semesters or teacher titles. Gabalán-Coello et al. [10] analyzed the determinants of teaching quality in a Master of Engineering program at a Colombian university, and found that students valued the teaching methods and research experience of their professors. These studies provide a strong scientific foundation for investigating the connection between teachers’ characteristics and their teaching quality.
(2) Teacher characteristics and teaching evaluation.
Education scholars have increasingly analyzed the relationship between teacher characteristics and teaching evaluation data to enhance teaching quality. Notable findings include Gordon et al.’s [11] observation that female teachers received lower teaching evaluation scores than male teachers. Santiesteban et al. [12] found a gender bias in computer science teaching evaluations that mainly affected the evaluation results of professors and had a smaller impact on the evaluation results of student teachers. Arrona-Palacios et al. [13] noted that undergraduate students rarely consider gender when evaluating professors, but prefer male teachers when recommending the best professor. Flegl et al. [14] revealed that experience affects evaluation results more significantly than gender, and that age has an even greater influence in some fields. Bianchini et al. [15] revealed that students’ evaluation of teaching effectiveness was influenced by the instructor’s age, seniority, gender, and research output, with seniority having varying effects across disciplines and research output having a positive impact on evaluation. Bao et al. [16] found that factors such as teachers’ professional titles could significantly affect students’ evaluations. Joye et al. [17] showed that teachers’ age and gender affect their teaching evaluations. Han et al. [18] noted that teachers’ education and professional titles had a significant impact on students’ evaluation scores; however, the interaction between the two was not significant. Tian et al. [19] found that teachers’ age, title, professional background, and course credit have a positive effect on evaluation scores, whereas teachers’ educational background has a negative impact on student evaluations. Additionally, the administrative positions of the evaluated teachers did not affect the student evaluations. Using one-way ANOVA, Li et al. [20] determined that the gender, age, and title of college teachers had no significant effect on students’ evaluation scores. However, academic characteristics significantly affected evaluation scores, with notable differences between faculty members with master’s degrees and those with doctoral degrees. Binderkrantz et al. [21] found no gender bias in Danish universities, but students tended to rate teachers of the same gender more highly, and this gender preference may be related to students’ different perceptions of teachers’ behavior and characteristics. These studies offer insights into the correlation between teacher characteristics and teaching evaluations, thereby guiding improvement in teaching quality.
(3) Other aspects.
In addition to the correlations between teacher characteristics and teaching quality and evaluation, other scholars have also investigated the relationships between teacher characteristics and other aspects of teaching. For instance, Jaekel et al. [22] found that a teacher’s communication skills, ability to maintain good relationships with students, and time spent in the classroom are closely related to teaching quality ratings and student learning experiences. Rodríguez-García et al. [23] discovered that a teacher’s instructional method had a significant impact on students’ reading scores, and that positive teaching styles were associated with higher achievement. Zhang et al. [24] found that a teacher’s professional characteristics had a significant impact on job satisfaction and that professional collaboration and teaching self-efficacy were key factors. Aldahdouh et al. [25] found that teachers’ learning styles and background characteristics were related to their ability to implement teaching reforms during the COVID-19 pandemic. Asare [26] discovered that teachers’ cognitive and emotional characteristics, as well as their teaching practices, had an impact on graduate students’ statistical learning anxiety and attitudes. Marici et al. [27] found that a teacher’s appearance had an impact on students’ learning attitudes and evaluations of the teacher, with students being more likely to accept teachers who were more attractive. Finally, Khokhlova et al. [28] found that students exhibited gender bias in the personality trait ratings of male and female teachers, with male teachers receiving higher scores in promoting learning and engagement.
In summary, there is a correlation between teacher characteristics and teaching quality. However, inconsistencies in research findings can be attributed to the limitations of the classical educational science research paradigm, which decomposes complex objects into lower-level components. Empirical research with human subjects often faces challenges such as irreproducibility, unverifiability, and limited applicability to real-world settings. Overcoming these difficulties is crucial to ensuring consistency in research outcomes and meeting the needs of contemporary educational science research.

2.2. Correlation between Course Characteristics and Teaching Evaluation

Course difficulty is a complex educational issue that lacks a consensus definition or unified model; questionnaires are commonly used to examine its relationship with teaching evaluations. For instance, Huang et al. [29] found that three dimensions of MOOCs (course vividness, teachers’ subject knowledge, and interactivity) had a positive impact on students’ willingness to revisit MOOCs, but course difficulty had different moderating effects on these influences. Addison et al. [30] observed that difficult courses negatively affected students’ evaluations. Cheng et al. [31] found a positive correlation between students’ perceptions of course difficulty and their preference for teacher-directed strategies. Cavanaugh et al. [32] noted that students’ perceptions of course difficulty decreased by 6% during the COVID-19 pandemic, with a slightly greater decrease observed among professors without online teaching experience.
Furthermore, researchers have examined course types and their impacts on teaching evaluations. Liu et al. [33] identified significant effects of course type on instructional effectiveness and its importance to students. Johnson et al. [34] determined that course characteristics, such as class size, course level, and elective/required status, significantly influenced student teaching evaluations. Cai [35] applied descriptive statistics and ANOVA to demonstrate that course type did not affect the teaching evaluation scores.
In summary, the correlation between course characteristics and teaching evaluation is an important measure of teaching quality. However, inconsistencies in research findings persist because of the limitations of the classical educational science research paradigms.

2.3. Research on Computational Pedagogy

Computational pedagogy is a research paradigm that utilizes data mining, machine learning, and emerging technologies to analyze educational behavior. It values interdisciplinarity and practical applications and addresses real educational problems. In recent years, computational pedagogy has gained attention and matured.
Li et al. [36] discussed the necessity of establishing computational pedagogy in the era of big data. Wang et al. [37,38,39] explained its emergence in relation to the characteristics of the big data era and the limitations of the traditional research paradigms. Liu et al. [40,41] described computational pedagogy as an interdisciplinary discipline that uses technology- and data-intensive methods to understand educational activities and complex systems. Zheng et al. [42] emphasized the importance of computational pedagogy in China, suggesting that it can provide new research paths and improve educational decision making.
This study aimed to identify the limitations of the classical educational science research paradigm and advocate for the adoption of computational pedagogy. By applying complex systems science, leveraging large-scale data, and studying the characteristics and relationships within the teaching and learning process, this study seeks to enhance educational decision making and quality. Computational pedagogy offers both theoretical and technical guidance, and this study provides empirical tests to promote theory development.
The literature review above provides clear evidence of the correlation between teacher characteristics, course characteristics, and course-teaching quality, thus providing a direct theoretical and empirical basis for this study. However, the following problems remain in existing studies.
(1) Further analysis is required of the influential characteristics and their covariate relationships with course-teaching quality. Existing studies have verified only one or a few characteristics that influence teaching quality, or have dealt with only one kind of correlation, ignoring the covariation of many influencing characteristics with teaching quality and lacking a method to analyze the covariation between influencing characteristics.
(2) There is a need for more effective methods to verify the credibility of the quantitative models of influence characteristics. Several studies have been conducted to quantitatively extract and model these characteristics. However, there is a lack of methods to verify the reliability of the quantitative model of influence characteristics to ensure that it objectively and accurately reflects changes in course teaching quality.
(3) Existing research has not identified the essential influencing characteristics contributing to teaching quality, which makes it challenging to rank course teachers based on these characteristics. Therefore, the current study aims to develop more effective methods to evaluate and identify the key influencing characteristics.
Therefore, this study adopts a computational pedagogy research paradigm to pinpoint the key influencing characteristics that impact teaching quality by exploring the correlations between various characteristics and teaching quality and utilizing this knowledge to arrange courses for teachers scientifically. This approach is expected to introduce a novel and efficient means of enhancing the caliber of university teaching.

3. Methods

3.1. Collection and Analysis of Teaching Quality Influence Characteristics Data

Complex and diverse characteristics influence the quality of teaching, and correlation analyses of a single characteristic or a few characteristics are limited. To address this issue, we propose a human-machine approach to intelligently acquire data on multiple influence characteristics. These data were collected from the databases of various business platforms and multiple data sources using machine learning and natural language processing techniques. They fall into two types: simple influence characteristics, such as gender, age, professional title, educational background, and degree; and complex influence characteristics, such as teaching evaluation data (the mean values of student, peer, and supervisor evaluations of course teaching) and TPACK characteristic data obtained from questionnaire surveys, covering Technological Knowledge (TK), Content Knowledge (CK), Pedagogical Knowledge (PK), PCK, Technological Pedagogical Knowledge (TPK), Technological Content Knowledge (TCK), and TPACK.

3.2. Credible Quantitative Modeling of the Influence Characteristics of Teaching Quality

A scientific measurement method has been proposed in response to the lack of effective methods for constructing quantitative models of influence characteristics and validating their reliability. This method constructs quantitative models of different influence characteristics and conducts reliability tests to provide credible data for identifying key influence characteristics. Influence characteristics can be classified as simple or complex, based on the complexity of their quantification processes.

3.2.1. Quantification of Simple Influence Characteristics

Basic teacher information was obtained using intelligent data-collection technology on a faculty-management platform. Characteristic values, such as gender, educational background, degree, professional title, age, and teaching age, were defined by experts’ argumentation and literature review according to the requirements of pedagogical theories and methods and machine learning data attribution to ensure the credibility of the quantification results.
Definition 1. 
Educational background (Eb). If the teacher’s highest level of education is doctoral, the Eb characteristic value is set to 1; if it is master’s, it is set to 0.7; and if it is bachelor’s, it is set to 0.5.
Definition 2. 
Degree (Dg). If the teacher’s degree is Ph.D., the Dg value is set to 1, 0.7 if it is a master’s degree, 0.5 if it is a bachelor’s degree, and 0 otherwise.
Definition 3. 
Professional title (Pt). If the teacher holds a full senior title (professor or equivalent), the Pt characteristic value is set to 1; an associate senior title (associate professor or equivalent), 0.8; an intermediate title (lecturer or equivalent), 0.5; and a junior title (assistant professor or equivalent), 0.2.
Definition 4. 
Gender (Gd). Gender characteristics were set to 1 for male teachers and 0.5 for female teachers.
Definition 5. 
Age (Ag). According to reference [43] and expert argumentation, the age of university teachers can be divided into six stages, with the Ag characteristic value set to 0.3 for those aged 35 and below, 0.4 for those aged 36–40, 0.6 for those aged 41–45, 0.7 for those aged 46–50, 0.8 for those aged 51–55, and 1 for those aged 56 and above.
Definition 6. 
Teaching age (Ta). This indicates the years the teacher has been engaged in teaching, reflects the teachers’ teaching development course, and is a marker of their professional growth stage in teaching. According to reference [2] and expert arguments, teaching age can be divided into six segments, with the teaching age factor set to 0.1 for less than two years, 0.3 for 3–5 years, 0.5 for 6–9 years, 0.7 for 10–15 years, 0.9 for 16–20 years, and 1 for more than 21 years.
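The lookup tables in Definitions 1–6 can be expressed directly in code. A minimal sketch follows; the function and dictionary names are illustrative (not from the study), and the boundary for a teaching age of exactly two years is an assumption, since the text says "less than two years" for 0.1 but starts the next band at 3 years.

```python
# Quantification of simple influence characteristics per Definitions 1-6.
# All names below are illustrative, not from the original study.

EDUCATION = {"doctoral": 1.0, "master": 0.7, "bachelor": 0.5}            # Definition 1 (Eb)
DEGREE = {"phd": 1.0, "master": 0.7, "bachelor": 0.5, "none": 0.0}       # Definition 2 (Dg)
TITLE = {"full": 1.0, "associate": 0.8, "intermediate": 0.5, "junior": 0.2}  # Definition 3 (Pt)
GENDER = {"male": 1.0, "female": 0.5}                                    # Definition 4 (Gd)

def age_value(age):
    """Definition 5 (Ag): six age bands mapped to characteristic values."""
    if age <= 35: return 0.3
    if age <= 40: return 0.4
    if age <= 45: return 0.6
    if age <= 50: return 0.7
    if age <= 55: return 0.8
    return 1.0

def teaching_age_value(years):
    """Definition 6 (Ta): six teaching-age bands.
    Assumption: exactly 2 years falls in the lowest band."""
    if years <= 2:  return 0.1
    if years <= 5:  return 0.3
    if years <= 9:  return 0.5
    if years <= 15: return 0.7
    if years <= 20: return 0.9
    return 1.0
```

For example, a 38-year-old lecturer with a master’s degree and 12 years of teaching would receive the vector (Eb = 0.7, Pt = 0.5, Ag = 0.4, Ta = 0.7).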

3.2.2. Quantification of the Complex Influence Characteristics

Owing to the complexity and diversity of the factors that influence teaching quality and the large amount of data involved, we selected multiple characteristics that have been verified by relevant studies as research elements closely related to teaching quality. The following section describes the complex influencing characteristics and the methods used to obtain them. To ensure the accuracy and reliability of our measurements, we referred to validation methods described in the literature [44,45].
(1) Teaching quality of courses (Tq)
Teaching quality is a complex and multifaceted concept encompassing various aspects of the teaching process. This study focuses on the teaching quality of courses, which refers to the quality of the instruction provided by teachers in a specific course. To create a reliable and valid scale for evaluating course-teaching quality, we adopted a standardized scale development process.
First, we used semi-structured interviews to construct the initial measurement items based on the connotations and influencing factors of teaching quality found in existing literature. Second, we invited experts and students to conduct content validity tests to ensure that the measurement items were consistent with concepts. We then conducted a pretest using an online questionnaire to determine whether it was suitable for formal research. Exploratory Factor Analysis (EFA) was used to streamline the items and determine their dimensions by eliminating items that were inconsistent with the concept. The KMO test and Bartlett’s sphericity test were conducted to determine whether the requirements for conducting EFA were met. In the EFA process, principal component analysis and orthogonal rotation were used to obtain the total contribution of the explained variables and determine whether the division of the question items into dimensions was reasonable. Confirmatory Factor Analysis (CFA) was used to verify the overall goodness of fit of the factor structure.
Finally, we assessed the scale using Cronbach’s alpha coefficient, Construct Reliability of the scale dimensions, and the Average Variance Extraction value to ensure that the scale had high reliability, construct, and convergent validity, and met dimensionality and discriminant validity requirements.
The resulting scale provides a comprehensive survey of course-teaching quality by gathering objective data from the teaching management platform at the end of each course. Information was collected and analyzed using intelligent data collection technology to produce quantitative values for course-teaching quality. This approach allowed us to identify the key influencing characteristics of course-teaching quality and their impacts on students’ learning experiences.
(2) Teacher burnout (Tb)
Teacher burnout is a form of negative emotion and attitude related to prolonged job stress. It typically appears in three dimensions: emotional exhaustion, depersonalization, and personal accomplishment. Teacher burnout can negatively affect students, and may even lead to psychological disorders. To gauge teacher burnout quantitatively, the survey by Maslach et al. [46] was used in this study. The survey was part of the Burnout Scale (MBI-educators’ survey), and consisted of 22 questions, each with five response options that corresponded to a score of 1–5. This survey has been widely used in the field of education. The quantitative model used in this study is illustrated in Figure 1.
To evaluate teacher burnout, we used the MBI educators’ survey tool, which computes overall scores and scores for emotional exhaustion, depersonalization, and personal accomplishment. The sum of the scores for each dimension determined the presence or absence of burnout. Burnout severity can be assessed by using various score ranges and combinations. To obtain a more comprehensive and accurate understanding of teacher burnout, it is necessary to utilize various techniques and tools, such as interviews and observations.
(3) Teaching style (Ts)
Teachers’ teaching styles uniquely express their personal preferences, habits, and long-term teaching experiences. This is a combination of personality, teaching skills, and techniques. Scholars such as Sternberg have studied teaching styles and proposed various theories to categorize them, including cognitive-style theory and psychological self-control theory [47]. In this study, we chose Sternberg’s classification system and used the Thinking Styles in Teaching Inventory (TSTI) to measure teachers’ teaching styles.
The TSTI contains seven subscales, each with seven questions rated on a 7-point scale. These questions covered seven teaching styles: legislative (Leg), executive (Exe), judicial (Jud), global (Glo), local (Loc), liberal (Lib), and conservative (Con). We incorporated the TSTI into our teaching assessment system to ensure accurate data, and regularly distributed it to students based on the instructor’s instructions. We used Cronbach’s alpha to verify the credibility of the seven items, and obtained quantitative values for each teaching style. Figure 2 illustrates the design of the quantitative model.
(4) Academic Achievement (Aa).
The relationship between course-teaching and academic and pedagogical research is significant. Academic research can improve course content to cater effectively to students’ learning needs, whereas course instruction can promote advances in science and technology. Research can provide teachers with more theoretical knowledge and more effective teaching techniques to enhance the quality of their education. Thus, scholarship, pedagogical research, and course teaching are interdependent, and their achievements reflect scholarly excellence.
Scholarly achievements include research output and impacts. Journal articles and monographs characterize research output, while academic impact stems from the peer evaluation of research results. The evaluation depends on how well others value, recognize, and cite research results.
To facilitate the evaluation, intelligent data collection techniques were employed to gather relevant data from research management and third-party paper search platforms, including the number of published papers, monographs, patents, and academic influence. Data were weighed to determine mean values. Academic influence was measured using the H-index proposed by Hirsch [48], which considers the frequency of citations and number of articles to prevent the one-sided pursuit of the number of papers and stimulate faculty members’ enthusiasm to create high-quality papers.
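The H-index used above for academic influence has a simple operational definition: the largest h such that the author has h papers with at least h citations each. A sketch of its computation (function name illustrative):

```python
def h_index(citations):
    """Hirsch's h-index: the largest h such that h papers each have
    at least h citations."""
    counts = sorted(citations, reverse=True)  # most-cited first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank        # this paper still has >= rank citations
        else:
            break           # all later papers have even fewer citations
    return h
```

For instance, citation counts [10, 8, 5, 4, 3] yield an h-index of 4: four papers have at least four citations each, but not five papers with at least five.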
(5) Technological Pedagogical Content Knowledge (TPACK)
In 1986, a study by Shulman discovered that good teachers do not rely on specific teaching methods, but rather use effective methods to help students quickly acquire and understand knowledge [49]. Shulman argued that teachers’ professionalism was reflected in their ability to transfer knowledge, and during his research he identified the intersection between CK and PK. Park Soonhye at the University of Iowa has conducted a series of PCK studies based on science subject areas since 2007, and has developed a PCK pentagonal structure. This structure considers PCK to comprise an orientation toward teaching, knowledge of student understanding, curriculum knowledge, PK, and assessment knowledge, which are mutually influential and interrelated.
With the continued evolution of information technology in education, teachers need to be proficient not only in traditional CK and PK but also in information technology knowledge and integrating technology into subject teaching. In 2009, Schmidt et al. [50] proposed a new framework, TPACK, to integrate teachers’ knowledge of the effective use of technology in teaching and learning into their expertise. TPACK encompasses PCK and emphasizes how teachers can integrate subject matter and educational knowledge when using technology to make teaching and learning more effective.
This study primarily used the TPACK structural framework as its theoretical basis and measured each teacher’s TPACK level using a questionnaire. The questionnaire consisted of 49 questions covering the seven component dimensions of TPACK: TK, CK, PK, PCK, TPK, TCK, and TPACK. Responses were rated on a five-point Likert scale and assigned values of 1 to 5 for data analysis.
Before the formal questionnaire survey was conducted, a reliability test was conducted to ensure the validity and reliability of the questionnaire. Factor analysis was conducted on the TPACK scale using SPSS to verify the correspondence between the factors and questions, explore the internal logical structure between the questions, and assess the structural validity of the questionnaire.
Finally, we analyzed the reliability of the questionnaire using Cronbach’s alpha coefficient to verify consistency among the questions in the questionnaire. The modeling process is illustrated in Figure 3.
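Cronbach’s alpha, used here and for the teaching-quality and TSTI scales, measures internal consistency as alpha = k/(k − 1) · (1 − Σσᵢ²/σₜ²), where k is the number of items, σᵢ² the variance of each item, and σₜ² the variance of respondents’ total scores. A self-contained sketch (function name illustrative; in practice a statistics package would be used):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents x items matrix (list of rows).
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))."""
    k = len(scores[0])  # number of items

    def pvar(xs):
        # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    item_vars = [pvar([row[j] for row in scores]) for j in range(k)]
    total_var = pvar([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```

When items move together perfectly (e.g., every respondent gives the same score to both items), alpha reaches its maximum of 1; weakly related items pull it toward zero.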
Once the reliability test was completed, the TPACK scale was integrated into the teaching assessment system, and teachers completed it regularly to obtain accurate TPACK data for each dimension.
(6) Course difficulty (Cd):
There is no consensus on the concept of course difficulty in the academic community, and there is no unified model of course difficulty. Therefore, in this study, we used a questionnaire to obtain the course difficulty of the discipline’s majors, with a scale ranging from 0.1 to 1.0 in ten levels indicating increasing degrees of difficulty; the responses were aggregated using Equation (1).
$cdf_i = \lambda \frac{1}{n}\sum_{k=1}^{n} cd_{i,k} + (1-\lambda)\frac{1}{m}\sum_{r=1}^{m} cd_{i,r}$
where cdfi denotes the course difficulty of i; n and m are the numbers of teacher and student questionnaires, respectively; λ(0 < λ < 1) is the proportion of teacher questionnaires in the course difficulty; cdi,k denotes the difficulty given by the kth teacher of i; and cdi,r denotes the difficulty given by the rth student of course i.
By embedding the course difficulty questionnaire into the teaching management system and distributing it in a targeted manner, the characteristic difficulty values for each course were obtained by applying Equation (1) after completing the questionnaires. This research method can effectively solve the difficulties in measuring the difficulty of college courses and ensure the objectivity and reliability of the research results.
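Equation (1) is a weighted combination of the mean teacher rating and the mean student rating for a course. A minimal sketch (function name illustrative; the value of λ is set by the study’s design, not fixed here):

```python
def course_difficulty(teacher_scores, student_scores, lam=0.5):
    """Equation (1): cdf_i = lam * mean(teacher ratings)
                           + (1 - lam) * mean(student ratings).
    Ratings are on the 0.1-1.0 scale; lam (0 < lam < 1) is the weight
    given to teacher questionnaires."""
    t_mean = sum(teacher_scores) / len(teacher_scores)
    s_mean = sum(student_scores) / len(student_scores)
    return lam * t_mean + (1 - lam) * s_mean
```

With λ = 0.5, teacher ratings [0.8, 0.6] and student ratings [0.4, 0.6] give a course difficulty of 0.5 · 0.7 + 0.5 · 0.5 = 0.6.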
(7) Teaching evaluation (Te):
Teaching evaluation was quantified with a multilevel, multifaceted evaluation index system based on the Analytic Hierarchy Process (AHP) proposed by Saaty [51]. Each semester, evaluation subjects such as supervisors, peers, and students submitted their evaluations through the teaching management system, and a weighted average of these scores produced a comprehensive evaluation result under the standard teaching evaluation system adopted by the schools.

3.3. Designing a Method to Identify Key Influence Characteristics of Teaching Quality

We developed an analytical technique that accurately identifies the key influencing characteristics of course-teaching quality. Our approach integrates correlation, multiple correlation, and grey correlation analyses to extract the key influencing characteristics and ensure precise analysis results, and the credibility and reliability of the outcomes were assessed using rigorous testing. Our method employs correlation and multiple correlation analyses to investigate the linear and nonlinear relationships between characteristics, while grey correlation analysis evaluates the degree of grey association among them. This comprehensive methodology provides valuable insights into the factors that influence the quality of teaching.

3.3.1. Standardizing Influence Characteristics

Various characteristics can influence a course’s teaching quality, each with its own value. To collect data, we quantified the characteristics of 27 teachers and their course-teaching data from computer science majors at a university over a five-year period.
We standardized the data using a range normalization method to eliminate the impact of differing magnitudes among these characteristics, ensuring that all values fell within the normalized range of 0.1 to 0.99.
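The paper does not spell out the normalization formula; the sketch below assumes standard min-max (range) scaling of each characteristic onto [0.1, 0.99]:

```python
import numpy as np

def range_normalize(x, lo=0.1, hi=0.99):
    """Min-max (range) normalization of one characteristic column so
    that its smallest value maps to lo and its largest to hi."""
    x = np.asarray(x, dtype=float)
    return lo + (x - x.min()) * (hi - lo) / (x.max() - x.min())

# e.g. teaching ages 1, 11, 21 map to 0.1, 0.545, 0.99
```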
Within this process, we prioritized the most recent values for teachers’ educational background, degree, and professional title upgrades over the past five years. The average score was calculated for each course’s evaluation and quality. The age and teaching age were based on the latest available years.
By employing these methods, we obtained a normalized dataset of teaching quality influence characteristics, denoted as Cf = (T_id (teacher ID), C_id (course ID), Tq, Eb, Dg, Pt, Gd, Ag, Ta, Tb, Leg, Exe, Jud, Glo, Loc, Lib, Con, Aa, TK, PK, CK, PCK, TCK, TPK, TPACK, Cd, Te).

3.3.2. Correlation Analysis and Multiple Correlation Analysis of Key Influence Characteristics

A correlation analysis was conducted on the influence characteristics to examine the relationship between teaching quality (Tq) and other characteristic variables within dataset Cf. This analysis resulted in the construction of a correlation matrix for the characteristic samples, denoted by matrix (2).
$$ r(i,j)_{n \times n} = \begin{bmatrix} 1 & r_{1,2} & \cdots & r_{1,n} \\ r_{2,1} & 1 & \cdots & r_{2,n} \\ \vdots & \vdots & \ddots & \vdots \\ r_{n,1} & r_{n,2} & \cdots & 1 \end{bmatrix} \qquad (2) $$
where $r_{i,j} = \frac{\sum (x - \bar{x})(y - \bar{y})}{\sqrt{\sum (x - \bar{x})^2 \sum (y - \bar{y})^2}}$ is the Pearson correlation coefficient between characteristic variables i and j, n is the number of influence characteristics, and x and y denote the values of variables i and j. This matrix quantifies the pairwise relationships between Tq and every other variable in the dataset.
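With the standardized dataset held in a pandas DataFrame, the correlation matrix of Equation (2) can be computed directly; a sketch on hypothetical toy values (column names follow the Cf notation, the data are invented for illustration):

```python
import numpy as np
import pandas as pd

# Hypothetical records: teaching quality, course difficulty, and
# technological knowledge for five teacher-course pairs.
df = pd.DataFrame({
    "Tq": [0.60, 0.70, 0.80, 0.65, 0.90],
    "Cd": [0.50, 0.40, 0.30, 0.55, 0.20],
    "TK": [0.30, 0.50, 0.70, 0.40, 0.80],
})

# Pairwise Pearson correlation matrix r(i, j); the diagonal is 1.
R = df.corr(method="pearson")
```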
Through a correlation coefficient test of the correlation matrix, considering both the r-value and p-value matrices, we can identify the candidate key influence characteristics that exhibit strong correlations with Tq. These characteristics, denoted as Ckf = (Tq, x1, x2, …, xp), where p ≤ n, are considered candidate key influences on the teaching quality.
Multiple correlation analyses must be performed within the Ckf dataset to determine whether these candidate key characteristics influence Tq. This involves creating a multiple linear regression model with Tq as the dependent variable and other candidate key characteristics as independent variables.
$$ Tq = b_0 + b_1 x_1 + b_2 x_2 + \cdots + b_p x_p + \varepsilon \qquad (3) $$
We estimated the coefficients $b_0, \ldots, b_p$ by least-squares linear regression, yielding the multiple linear regression prediction model:
$$ \widehat{Tq} = b_0 + b_1 x_1 + b_2 x_2 + \cdots + b_p x_p \qquad (4) $$
The multiple correlation between Tq and the candidate key characteristic variables can be reduced to a simple correlation between Tq and its fitted value $\widehat{Tq}$, which yields Equation (5).
$$ R = \operatorname{corr}(Tq; x_1, \ldots, x_p) = \operatorname{corr}(Tq, \widehat{Tq}) = \frac{\operatorname{cov}(Tq, \widehat{Tq})}{\sqrt{\operatorname{var}(Tq)\,\operatorname{var}(\widehat{Tq})}} = \sqrt{\frac{\sum_i (\widehat{Tq}_i - \overline{Tq})^2}{\sum_i (Tq_i - \overline{Tq})^2}} \qquad (5) $$
where $R^2 = \frac{\sum_i (\widehat{Tq}_i - \overline{Tq})^2}{\sum_i (Tq_i - \overline{Tq})^2}$ is the determination coefficient, which indicates the goodness-of-fit of the model; higher $R^2$ values indicate that the independent variables explain a greater share of the variance in the dependent variable. Different combinations of variables were therefore tested, and the subset with the highest $R^2$ was selected as the set of key characteristics affecting teaching quality, denoted as Kf = (y1, y2, …, yn), where n ≤ p. F-tests and two-tailed t-tests were conducted on the correlation and determination coefficients to validate the significance and reliability of the analysis results.
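The subset search described above can be sketched with NumPy least squares. This is our illustrative rendering (exhaustive enumeration, feasible only for a small number of candidate variables), not the authors’ exact code:

```python
import itertools
import numpy as np

def r_squared(X, y):
    """Fit y = b0 + b.x by least squares and return the determination
    coefficient R^2 (the square of Equation (5))."""
    A = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    y_hat = A @ coef
    return np.sum((y_hat - y.mean()) ** 2) / np.sum((y - y.mean()) ** 2)

def best_subset(X, y, names):
    """Enumerate variable subsets and keep the one whose regression on
    y attains the highest R^2."""
    best_r2, best_vars = 0.0, ()
    for k in range(1, len(names) + 1):
        for cols in itertools.combinations(range(len(names)), k):
            r2 = r_squared(X[:, list(cols)], y)
            if r2 > best_r2:
                best_r2, best_vars = r2, tuple(names[c] for c in cols)
    return best_r2, best_vars
```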

3.3.3. Analysis of Association Degree between Key Influence Characteristics and Teaching Quality

Correlation analysis alone does not measure the strength of the association between the characteristic variables and teaching quality. To determine the relative strength of the association between each key influence characteristic and teaching quality, Grey Relational Analysis (GRA) [52] was employed; GRA evaluates the degree of influence of each key influence characteristic on teaching quality.
In GRA, the reference series is denoted as Tq = Tq(k) and the comparison series as yi = yi(k), i = 1, 2, …, p, where i indexes the p key influence characteristics, k = 1, 2, …, m indexes the year, and m is the number of years. The grey relational coefficient was calculated using Equation (6):
$$ \zeta_i(k) = \frac{\min_i \min_k |Tq(k) - y_i(k)| + \rho \max_i \max_k |Tq(k) - y_i(k)|}{|Tq(k) - y_i(k)| + \rho \max_i \max_k |Tq(k) - y_i(k)|} \qquad (6) $$
The grey relational coefficient measures, for each year, how close each comparison series is to the reference series. For a holistic comparison, we used the annual average, $r_i = \frac{1}{m} \sum_{k=1}^{m} \zeta_i(k)$, as the grey relational grade between each key influence characteristic and teaching quality.
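A compact sketch of the GRA computation (ρ = 0.5 is the conventional distinguishing coefficient; the paper does not state the value it used, and the series are assumed to be pre-normalized):

```python
import numpy as np

def grey_relational_grades(tq, Y, rho=0.5):
    """Grey relational grades of p comparison series against the
    reference series tq, per Equation (6) and the annual average r_i.

    tq: reference series, shape (m,)     -- teaching quality per year
    Y:  comparison series, shape (p, m)  -- one row per characteristic
    """
    tq = np.asarray(tq, dtype=float)
    Y = np.asarray(Y, dtype=float)
    delta = np.abs(Y - tq)                    # |Tq(k) - y_i(k)|
    d_min, d_max = delta.min(), delta.max()   # global min and max
    zeta = (d_min + rho * d_max) / (delta + rho * d_max)  # Equation (6)
    return zeta.mean(axis=1)                  # r_i, averaged over years
```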
Through the above correlation analysis, multiple correlation analysis, and grey correlation degree analysis, the key characteristics of different disciplines and majors affecting teaching quality can be identified, providing a scientific basis for the intelligent recommendation of teachers of subsequent courses.

4. Results

4.1. Experimental Data

In this experiment, we collected basic information from 27 computer science teachers at a university, including their gender, educational background, degree, professional title, age, and teaching age. These characteristics were quantified according to predetermined criteria. Subsequently, data related to the complex influencing characteristics were gathered and quantified using an appropriate method.

4.1.1. Evaluation of Teaching Quality

To assess course-teaching quality, a survey was conducted on 82 courses taught by teachers over the past five years, resulting in the collection of 1286 valid questionnaires. By calculating the weighted average for courses taught by the same teacher, we obtained 143 sets of course-teaching-quality data. The validity of the data was verified using SPSS and the results are presented in Table 1.
We conducted a validity study using a factor analysis to ensure the meaningfulness and reasonableness of the variables. This analysis assessed the appropriateness and effectiveness of study items. Three key indicators were examined: KMO values, commonalities, and factor loading.
All research items exhibited a communality value exceeding 0.4, indicating the effective extraction and summarization of information. The KMO value, which exceeded the accepted threshold of 0.7 at 0.704, demonstrated the adaptability of data extraction and generalization, thus reflecting high data validity.
Moreover, the maximum loading coefficient for each factor exceeded 0.4, indicating clear correspondence between the options and factors. After confirming the validity of the study data, reliability analysis was conducted to assess its consistency. The standardized Cronbach’s alpha coefficient was 0.726, indicating the high consistency and stability of the collected data.
Overall, based on our comprehensive analysis of validity and reliability, the collected data on teaching quality were highly reliable and valid. This provided scientific and reasonable data to support our research questions.

4.1.2. Teacher Burnout

A detailed questionnaire was administered to the 27 teachers using an educational version of the Burnout Scale (MBI-Educators Survey). Subsequently, validity and reliability tests were conducted on the collected data, and the results are presented in Table 2.
Based on Table 2, the commonalities corresponding to all research items were greater than 0.4, indicating that information on the research items can be effectively extracted. Factor loadings greater than 0.4 indicate a correspondence between the options and the factors. A CITC value greater than 0.3 indicates a positive correlation between the research items and the total test score. In addition, the KMO value was 0.624, which was greater than 0.6, indicating that the data could be extracted effectively. A Cronbach’s α value of 0.741 indicated the strong consistency and stability of the collected data.
Figure 4 shows the varying burnout situations of university teachers, highlighting the need for more extensive and comprehensive research on their work environments and psychological conditions.

4.1.3. Teaching Styles

The TSTI assesses teachers’ teaching styles. Student questionnaires were used to collect 1568 valid responses, enabling an analysis of the teaching styles of 27 teachers. Validity and reliability tests were performed to ensure data accuracy; the results are presented in Table 3.
Based on the results presented in Table 3, we can confidently state that the data collected on teaching styles were of excellent quality and reliability. The community value for all the research items was higher than 0.4, indicating effective information extraction. Additionally, the factor-loading coefficients were above 0.4, confirming the correspondence between options and factors. The CITC values were also above 0.4, indicating a significant correlation between the analyzed items. A KMO value of 0.866, which is greater than the desirable criterion of 0.6, indicated efficient information extraction from the data. Moreover, a Cronbach’s alpha coefficient of 0.968 ensures high internal consistency. These high-quality data provide a solid foundation for future research, facilitating scientific and effective decision making based on accurate and reliable information.

4.1.4. Academic Achievement

Over the past decade, teachers’ academic achievements have been comprehensively assessed by meticulously reviewing papers published as first or corresponding authors in reputable databases (e.g., CNKI, Web of Science, and EI). We also considered the number of published monographs and patents filed to indicate their academic contributions and innovation skills. To gauge teachers’ academic influence accurately, we relied on the H-index, which considers the number of papers and the frequency of citations. This metric provides a more precise reflection of the academic impact.
All data were normalized to ensure a fair comparison of different scholarly achievements. This normalization eliminates the magnitude effect between data points, thereby evaluating diverse accomplishments using the same criteria. The results of this analysis are shown in Figure 5.

4.1.5. TPACK

We modified the TPACK-level measurement tool for pre-service teachers to better align it with the computer science discipline, ensuring a more accurate assessment of TPACK levels for computer science teachers. The revised questionnaires were then distributed to teachers and feedback was collected. A thorough analysis of the questionnaire data was conducted to verify their validity and accuracy. Table 4 presents the results of this analysis.
Upon examining the results in Table 4, it is evident that all 49 questions exhibited strong correlations with their respective factors, as indicated by factor loadings that exceeded 0.4. Furthermore, the high commonality values, surpassing 0.7, suggest that the gathered questionnaire data are of excellent quality and consistency.
Our analysis also revealed that the KMO values for all seven dimensions exceeded 0.5, and small p-values indicated the suitability of the data for factor analysis, thereby confirming the validity of our findings. In addition, we assessed the reliability of the data and obtained a Cronbach’s alpha coefficient of 0.619. Although not exceptionally high, it fell within the acceptable range, indicating reliability of the data.
Considering the validity and reliability analyses, we can confidently assert that the TPACK-level data are scientifically robust and valid. The values obtained for each TPACK dimension from 27 teachers provided a solid foundation for this study.

4.1.6. Course Difficulty

To evaluate the difficulty levels of the 82 computer science courses, we conducted an extensive survey using a detailed questionnaire. The questionnaire used a scale ranging from 0.1 to 1.0, with ten levels indicating varying degrees of difficulty. Teachers and students participated in the survey and provided comprehensive and insightful feedback.
To account for potential disparities in the perceptions of course difficulty between teachers and students, we introduced a weighting parameter λ, set at 0.6. This ensured fairness and comprehensiveness in our analysis, with 60% of the final difficulty assessment value attributed to teachers’ assessments and 40% attributed to students’ assessments. By incorporating the professional opinions of teachers and learning experiences of students, our results achieved greater accuracy and impartiality.
We calculated the comprehensive difficulty assessment value for each course by using the weighted average method. These values are presented in Figure 6, which clearly illustrates our findings.

4.1.7. Course Teaching Evaluations

Using the Teaching Service Management System, we gathered 1014 evaluations from 27 teachers who had taught relevant courses over the past five years. These evaluations combined scores from students, peers, and supervisors to assess teaching quality comprehensively.
Multiple evaluations of the same course were averaged to eliminate bias, resulting in 141 comprehensive datasets. We then normalized the data to ensure fairness and to enable easy analysis. The normalized data are presented in Figure 7 for a visual representation.
Using the data acquisition and processing processes described above, we successfully constructed an experimental dataset, Cf = (T_id, C_id, Tq, Eb, Dg, Pt, Gd, Ag, Ta, Tb, Leg, Exe, Jud, Glo, Loc, Lib, Con, Aa, TK, PK, CK, PCK, TCK, TPK, TPACK, Cd, Te). The dataset contains 130 teaching records from 27 teachers.

4.2. Experimental Results and Analysis

4.2.1. Correlation Analysis

Using Python’s pandas and NumPy libraries, we performed a correlation analysis on the Cf dataset as part of our experiment. We aimed to identify characteristics that strongly correlate with teaching quality (Tq). To accomplish this, we calculated Pearson’s correlation coefficient for each characteristic in relation to Tq. Table 5 presents the candidate influencing characteristics obtained by selecting the top N characteristics with absolute correlation values exceeding a threshold of 0.05.
Our analysis of Table 5 identified candidate characteristics that were strongly correlated with the teaching quality (Tq) of computer courses. These include teacher characteristics and course difficulty, defined as Ckf = (T_id, C_id, Tq, Eb, Pt, Gd, Ta, Leg, Exe, Jud, Glo, Loc, Lib, Con, TK, PK, TCK, Cd).
At this stage, we identified the key characteristics that were strongly associated with the quality of computer course teaching. This dataset served as the foundation for subsequent multiple correlation analyses. Our next step involves conducting a comprehensive analysis to ascertain the specific impact of these influential characteristics on teaching quality.
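The Ckf selection step above can be sketched as follows, assuming the 0.05 threshold is applied to the absolute Pearson correlation of each characteristic with Tq (the function name and toy data are ours):

```python
import pandas as pd

def candidate_characteristics(df, target="Tq", threshold=0.05):
    """Return characteristics whose Pearson correlation with the target
    exceeds the threshold in absolute value, strongest first."""
    corr = df.corr(method="pearson")[target].drop(target)
    keep = corr[corr.abs() > threshold]
    return keep.sort_values(key=lambda s: s.abs(), ascending=False)

# Toy data: "a" tracks Tq perfectly, "b" is uncorrelated with it.
df = pd.DataFrame({
    "Tq": [1.0, 2.0, 3.0, 4.0],
    "a": [2.0, 4.0, 6.0, 8.0],
    "b": [1.0, -1.0, -1.0, 1.0],
})
cand = candidate_characteristics(df)   # keeps "a", drops "b"
```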

4.2.2. Multiple Correlation Analysis

Multiple correlation analyses were conducted to assess the combined effects of the various factors on teaching quality (Tq). The Ckf dataset contains potential correlations between the candidate key influencing characteristics and Tq. Using a multiple linear regression model, we identify the key influential variables by treating the candidate key influence characteristics as independent variables and Tq as the dependent variable (Equation (3)). We estimated the coefficients for each independent variable through regression analysis, as presented in Table 6.
These coefficients indicate the predicted effect of each independent variable on the teaching quality. We established a prediction model (Equation (4)) by combining the following coefficients, and calculated the correlation between Tq and the predicted teaching quality using a regression model to determine the coefficient of determination, which quantifies the extent to which the independent variable explains the dependent variable (Table 7).
Among the characteristics analyzed, those with a coefficient of determination exceeding 0.5 are highlighted in bold in Table 7. Influential characteristics, such as Eb, Gd, Exe, Jud, Loc, TK, and Cd, demonstrated a goodness-of-fit above 0.5. These specific variables significantly contributed to the teaching quality (Tq). Thus, we identified this set of characteristics as the key influencing factors (Kf) for teaching quality in the examined computer science classes, and denoted their set as Kf = (Eb, Gd, Exe, Jud, Loc, TK, Cd). Moving forward, our analysis focused on these key characteristics. We aimed to conduct a more comprehensive study to better understand how they influence the quality of teaching.

4.2.3. Analysis of Association Degree between Key Influence Characteristics and Teaching Quality

We employed GRA to accurately assess the impact of various influence characteristics on teaching quality (Tq). GRA is a quantitative method that facilitates the evaluation of the relationships between system factors, enabling us to identify the key influential characteristics that have the strongest association with teaching quality.
Table 8 presents the degree of grey correlation between each influencing characteristic in the Kf dataset and Tq. The closer a correlation value is to 1, the more pronounced that characteristic’s influence on teaching quality.
The data in Table 8 indicate that all the key characteristics influencing teaching quality are strongly connected, with an association degree exceeding 0.5. This reaffirms our earlier findings from the multiple correlation analyses and deepens our understanding of the factors that impact teaching quality. The grey correlation analysis underscores the significance of these key influence characteristics and establishes a basis for future optimization efforts with a more targeted approach.

4.2.4. Validation of Teaching Quality Based on Key Influence Characteristics

To validate the interpretability and impact of the identified key influence characteristics, we conducted an empirical analysis using teaching data from 29 courses taught by eight teachers over five years.
Our initial step involved constructing a validation dataset (Vf), following the same data collection and standardization process. Subsequently, we employed the multiple linear regression model outlined in Equation (4) to predict the teaching quality based on the identified key influence characteristics. The predicted results were compared with actual teaching quality, as shown in Figure 8.
Figure 8 illustrates a close alignment between the predicted teaching quality derived from the key influence characteristics and actual teaching quality. The Mean Squared Error (MSE) calculation yielded a small value of 0.008051157, indicating the accuracy of our predictions. This validation outcome confirmed the significant impact of the identified key influence characteristics on teaching quality. Furthermore, our multiple linear regression model demonstrated high prediction accuracy, offering a valuable perspective on teaching quality and establishing a robust foundation for improving teaching practice.
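The validation metric is a plain mean squared error between predicted and observed quality; for completeness, a sketch:

```python
import numpy as np

def mse(actual, predicted):
    """Mean Squared Error between observed and predicted teaching quality."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    return float(np.mean((actual - predicted) ** 2))

# An MSE near zero (the paper reports about 0.00805) indicates close
# agreement between predictions and actual teaching quality.
```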

5. Conclusions and Discussion

This study utilized a computational pedagogy approach to identify the key influential characteristics that impact the quality of teaching in computer science courses. Our findings revealed that teacher and course characteristics significantly influence teaching quality, which is consistent with the existing literature [53,54], emphasizing the importance of teacher and course characteristics in improving educational quality.
In addition to the teacher and course characteristics identified in our study, other factors such as teaching interaction, communication skills, the ability to maintain good relationships, instructional methods, and cognitive and emotional factors may also play a role in teaching quality. Future research could explore these factors to provide further insight into improving the quality of computer science courses.
To gain a deeper understanding of the complex interplay between gender, burnout, teaching style, and teaching quality, future studies should investigate how these factors interact with teachers and the course characteristics identified in our study. A mixed-methods approach that combines quantitative analysis with qualitative data collection is ideal for this purpose.
However, further exploration of this research paradigm based on computational pedagogy is necessary. First, the accuracy and persuasiveness of this study largely depended on data quality and credibility [55]. Therefore, future research should focus on quality control during the data collection, processing, and analysis. In addition, this study proposes a novel computational model that requires further validation for applicability and effectiveness in different contexts. Second, the characteristics that affect the quality of teaching may vary across disciplines [56], necessitating broader research to expand the scope of this study. Moreover, the applicability of the model constructed in this study to other research areas related to teaching quality should be tested.
Despite these challenges, our research provides a new perspective and tool to improve the teaching quality of computer science courses. Our approach emphasizes the correlation between teaching quality and teacher and course characteristics. This methodology can be adjusted and optimized according to various research backgrounds and requirements.
In summary, our study offers an effective approach to enhancing the teaching quality of computer science courses. Although further research is needed to explore additional factors influencing teaching quality and validate our proposed model, our findings contribute to a better understanding of the factors that impact teaching quality. Future research can build on our findings to develop more effective strategies to improve educational outcomes in computer science courses.

Author Contributions

Conceptualization, D.Y. and J.L.; methodology, D.Y.; software, J.L.; validation, D.Y. and J.L.; formal analysis, D.Y. and J.L.; investigation, D.Y.; resources, J.L.; data curation, D.Y. and J.L.; writing—original draft preparation, D.Y.; writing—review and editing, J.L.; visualization, D.Y.; supervision, J.L.; funding acquisition, D.Y. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Foundation of Hunan Province (grant number 2022JJ50316).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets used and analyzed during the current study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Saloviita, T.; Pakarinen, E. Teacher Burnout Explained: Teacher-, Student-, and Organisation-Level Variables. Teach. Teach. Educ. 2021, 97, 103221. [Google Scholar] [CrossRef]
  2. Tan, J.; Deng, Y.; Yang, L. A Study on the Effects of Teaching Age on Burnout, Appreciative Social Support, and Trait Coping Styles of Young College Teachers. Heilongjiang Res. High. Educ. 2011, 2011, 110–112. [Google Scholar] [CrossRef]
  3. Palali, A.; van Elk, R.; Bolhaar, J.; Rud, I. Are Good Researchers Also Good Teachers? The Relationship between Research Quality and Teaching Quality. Econ. Educ. Rev. 2018, 64, 40–49. [Google Scholar] [CrossRef]
  4. Sacre, H.; Akel, M.; Haddad, C.; Zeenny, R.M.; Hajj, A.; Salameh, P. The Effect of Research on the Perceived Quality of Teaching: A Cross-Sectional Study among University Students in Lebanon. BMC Med. Educ. 2023, 23, 31. [Google Scholar] [CrossRef]
  5. Kulgemeyer, C.; Riese, J. From Professional Knowledge to Professional Performance: The Impact of CK and PCK on Teaching Quality in Explaining Situations. J. Res. Sci. Teach. 2018, 55, 1393–1418. [Google Scholar] [CrossRef]
  6. Li, S.; Liu, Y.; Su, Y.-S. Differential Analysis of Teachers’ Technological Pedagogical Content Knowledge (TPACK) Abilities According to Teaching Stages and Educational Levels. Sustainability 2022, 14, 7176. [Google Scholar] [CrossRef]
  7. Ma, L.; Xiong, Y.; Dong, L. The Higher the Professional Title, the Better the Teaching Quality? Teach. Educ. Res. 2020, 28, 3–10. [Google Scholar] [CrossRef]
  8. Han, T. Evaluation of Multimedia Physical Education Teaching Quality Considering Data Analysis Model. Math. Probl. Eng. 2022, 2022, 4347673. [Google Scholar] [CrossRef]
  9. Deng, S.; Que, X. Research on the Teaching Assessment of Students of Science and Engineering Teachers in a University. Comput. Appl. Eng. Educ. 2019, 27, 5–12. [Google Scholar] [CrossRef]
  10. Gabalán-Coello, J.; Vásquez-Rizo, F.E.; Laurier, M. Characteristics and Well Practices About Teaching Learning Process in Graduate Programs According to the Stakeholders. J. Latinos Educ. 2023, 00, 1–13. [Google Scholar] [CrossRef]
  11. Gordon, N.; Alam, O. The Role of Race and Gender in Teaching Evaluation of Computer Science Professors: A Large Scale Analysis on RateMyProfessor Data. In Proceedings of the 52nd ACM Technical Symposium on Computer Science Education, Virtual, 13–20 March 2021; ACM: New York, NY, USA, 2021; pp. 980–986. [Google Scholar]
  12. Santiesteban, P.; Endres, M.; Weimer, W. An Analysis of Sex Differences in Computing Teaching Evaluations. In Proceedings of the Third Workshop on Gender Equality, Diversity, and Inclusion in Software Engineering, Pittsburgh, PA, USA, 20 May 2022; ACM: New York, NY, USA, 2022; pp. 84–87. [Google Scholar]
  13. Arrona-Palacios, A.; Okoye, K.; Camacho-Zuñiga, C.; Hammout, N.; Luttmann-Nakamura, E.; Hosseini, S.; Escamilla, J. Does Professors’ Gender Impact How Students Evaluate Their Teaching and the Recommendations for the Best Professor? Heliyon 2020, 6, e05313. [Google Scholar] [CrossRef] [PubMed]
  14. Flegl, M.; Andrade Rosas, L.A. Do Professor’s Age and Gender Matter or Do Students Give Higher Value to Professors’ Experience? Qual. Assur. Educ. 2019, 27, 511–532. [Google Scholar] [CrossRef]
  15. Bianchini, S.; Lissoni, F.; Pezzoni, M. Instructor Characteristics and Students’ Evaluation of Teaching Effectiveness: Evidence from an Italian Engineering School. Eur. J. Eng. Educ. 2013, 38, 38–57. [Google Scholar] [CrossRef]
  16. Bao, S.; Chen, J. The Effectiveness and Influencing Factors of College Students’ Teaching Evaluation. Mod. Educ. Manag. 2022, 54–63. [Google Scholar] [CrossRef]
  17. Joye, S.; Wilson, J.H. Professor Age and Gender Affect Student Perceptions and Grades. J. Scholarsh. Teach. Learn. 2015, 15, 126–138. [Google Scholar] [CrossRef]
  18. Han, M.; Chen, Q.; Wang, P. Students’ Evaluations of University Teaching: Effects of Course and Teacher Characters. J. South China Norm. Univ. Sci. Ed. 2010, 2010, 44–47. [Google Scholar]
  19. Tian, T.; Li, X.; Tong, D. Research on the Influencing Factors of College Students’ Evaluation of Teaching: An Empirical Analysis Based on Panel Data. J. Anhui Agric. Univ. Sci. Ed. 2022, 31, 78–84. [Google Scholar] [CrossRef]
  20. Li, C.; Zhang, J. An Empirical Analysis of the Influence of Teachers’ Background Characteristics on College Students’ Teaching Evaluation Score. J. South-Central Univ. Natl. Sci. Ed. 2021, 40, 512–518. [Google Scholar] [CrossRef]
  21. Binderkrantz, A.S.; Bisgaard, M. A Gender Affinity Effect: The Role of Gender in Teaching Evaluations at a Danish University. High. Educ. 2023, 2023, 1–20. [Google Scholar] [CrossRef]
  22. Jaekel, A.-K.; Fütterer, T.; Göllner, R. Teaching Characteristics in Distance Education—Associations with Teaching Quality and Students’ Learning Experiences. Teach. Teach. Educ. 2023, 132, 104174. [Google Scholar] [CrossRef]
  23. Rodríguez-García, A.; Arias-Gago, A.R. Uso Metodológico Docente y Rendimiento Lector Del Alumnado: Análisis Fundamentado En PISA Lectura 2018. Rev. Electrónica Interuniv. Form. del Profr. 2021, 24, 149–165. [Google Scholar] [CrossRef]
  24. Zhang, X.; Cheng, X.; Wang, Y. How Is Science Teacher Job Satisfaction Influenced by Their Professional Collaboration? Evidence from PISA 2015 Data. Int. J. Environ. Res. Public Health 2023, 20, 1137.
  25. Aldahdouh, T.Z.; Murtonen, M.; Riekkinen, J.; Vilppu, H. Innovativeness and Instructional Adaptation to COVID-19: Association with Learning Patterns and Teacher Demographics. Educ. Inf. Technol. 2023.
  26. Asare, P.Y. Profiling Teacher Pedagogical Behaviours in Plummeting Postgraduate Students’ Anxiety in Statistics. Cogent Educ. 2023, 10, 2222656.
  27. Marici, M.; Runcan, R.; Iosim, I.; Haisan, A. The Effect of Attire Attractiveness on Students’ Perception of Their Teachers. Front. Psychol. 2023, 13, 1–11.
  28. Khokhlova, O.; Lamba, N.; Kishore, S. Evaluating Student Evaluations: Evidence of Gender Bias against Women in Higher Education Based on Perceived Learning and Instructor Personality. Front. Educ. 2023, 8, 1158132.
  29. Huang, L.; Zhang, J.; Liu, Y. Antecedents of Student MOOC Revisit Intention: Moderation Effect of Course Difficulty. Int. J. Inf. Manage. 2017, 37, 84–91.
  30. Addison, W.E.; Best, J.; Warrington, J.D. Students’ Perceptions of Course Difficulty and Their Ratings of the Instructor. Coll. Stud. J. 2006, 40, 409–416.
  31. Cheng, X.; Ma, X.; Luo, C.; Chen, J.; Wei, W.; Yang, X. Examining the Relationships between Medical Students’ Preferred Online Instructional Strategies, Course Difficulty Level, Learning Performance, and Effectiveness. Adv. Physiol. Educ. 2021, 45, 661–669.
  32. Cavanaugh, J.; Jacquemin, S.J.; Junker, C.R. Variation in Student Perceptions of Higher Education Course Quality and Difficulty as a Result of Widespread Implementation of Online Education During the COVID-19 Pandemic. Technol. Knowl. Learn. 2022.
  33. Liu, Z.; Yu, H. An Empirical Study of Student Evaluation of Teaching in Higher Education. China Univ. Teach. 2021, 8, 51–56.
  34. Johnson, M.D.; Narayanan, A.; Sawaya, W.J. Effects of Course and Instructor Characteristics on Student Evaluation of Teaching across a College of Engineering. J. Eng. Educ. 2013, 102, 289–318.
  35. Cai, Y. Analysis on Influencing Factors and Management of Teaching Evaluation in Local University. J. Minnan Norm. Univ. Sci. 2022, 35, 119–126.
  36. Li, Z.; Wen, J. Computational Education: Whether Possible and How Possible? J. Distance Educ. 2019, 37, 12–18.
  37. Wang, J.; Yang, Y.; Song, Q.; Zheng, Y. Computational Education: What It Means, What to Do and How to Do. Mod. Distance Educ. Res. 2020, 32, 27–35.
  38. Wang, J.; Zhang, Y.; Song, Q.; Ma, Y. Computational Education: Research Trends and Application Scenarios. Open Educ. Res. 2020, 26, 59–66.
  39. Wang, J.; Yang, Y.; Zheng, Y.; Xia, H. From Big Data to Computational Pedagogy: Conceptual, Motivation and Outlet. China Educ. Technol. 2020, 1, 85–92.
  40. Liu, S.; Zhou, Z.; Li, Q. Revisiting Computational Pedagogy: How Artificial Intelligence Changes Educational Research. Educ. Res. 2022, 43, 18–27.
  41. Liu, S.; Yang, Z.; Li, Q. Computational Education: Connotations and Approaches. Educ. Res. 2020, 41, 152–159.
  42. Zheng, Y.; Yan, X.; Wang, J.; Wang, Y.; Liu, S. Establishing Computational Education Subject: Position, Paradigm and System. J. East China Norm. Univ. Sci. 2020, 38, 1–19.
  43. Li, C.; Hu, P. Multifactor ANOVA on the Effect of Faculty Characteristics on Student Evaluation of Teaching in Higher Education Institutions. China Mark. 2018, 200–202.
  44. Pérez-Morán, J.C.; Morales-Páez, M.; Bernal-Baldenebro, B.; Cano-Gutiérrez, J.C. Psychometric Properties and Invariance of the Scale to Measure Attitude of Researchers for University-Industry Collaboration. Front. Educ. 2023, 8, 971367.
  45. Capa Luque, W.; Bello Vidal, C.; Mora Silva, N.; Manrique Borjas, G.; Villanueva Benites, M.E.; Ochoa Vigo, K. Construction and Validation of a Questionnaire to Measure Community Participation in the Adult Population. Rev. Argent. Cienc. Comport. 2019, 11, 81–92.
  46. Galanakis, M.; Moraitou, M.; Garivaldis, F.J.; Stalikas, A. Factorial Structure and Psychometric Properties of the Maslach Burnout Inventory (MBI) in Greek Midwives. Eur. J. Psychol. 2009, 5, 52–70.
  47. Sternberg, R.J.; Grigorenko, E.L. Thinking Styles and the Gifted. Roeper Rev. 1993, 16, 122–130.
  48. Hirsch, J.E. An Index to Quantify an Individual’s Scientific Research Output. Proc. Natl. Acad. Sci. USA 2005, 102, 16569–16572.
  49. Shulman, L.S. Those Who Understand: A Conception of Teacher Knowledge. Am. Educ. 1986, 10, 4–14.
  50. Schmidt, D.A.; Baran, E.; Thompson, A.D.; Mishra, P.; Koehler, M.J.; Shin, T.S. Technological Pedagogical Content Knowledge (TPACK). J. Res. Technol. Educ. 2009, 42, 123–149.
  51. Saaty, T.L. Fundamentals of the Analytic Hierarchy Process. In The Analytic Hierarchy Process in Natural Resource and Environmental Decision Making; Springer: Dordrecht, The Netherlands, 2001; pp. 15–35.
  52. Liu, Y. Analysis on the Influencing Factors of Graduate Training Quality Based on Grey Relevance Theory. In Proceedings of the First International Symposium on Management and Social Sciences (ISMSS 2019), Wuhan, China, 13–14 April 2019; Atlantis Press: Paris, France, 2019; Volume 309, pp. 276–279.
  53. Hattie, J. Visible Learning; Routledge: London, UK, 2008; ISBN 9781134024124.
  54. Frey, B.B. Danielson Framework. In The SAGE Encyclopedia of Educational Research, Measurement, and Evaluation; SAGE Publications, Inc.: Thousand Oaks, CA, USA, 2018; ISBN 9780615597829.
  55. Jonsson, A.; Svingby, G. The Use of Scoring Rubrics: Reliability, Validity and Educational Consequences. Educ. Res. Rev. 2007, 2, 130–144.
  56. Becher, T. Quality Assurance and Disciplinary Differences. Aust. Univ. Rev. 1994, 37, 4–7.
Figure 1. Developing a quantitative model for teacher burnout.
Figure 2. A quantitative model of teaching style.
Figure 3. The TPACK quantitative model.
Figure 4. A comparison of the 27 teachers’ burnout. As the comparison shows, the teachers’ results differed on each of the three dimensions of burnout, as well as in total burnout.
Figure 5. Academic achievement results of the 27 teachers.
Figure 6. The difficulty of the 82 computer science courses.
Figure 7. Comprehensive evaluation scores of the 27 teachers’ course lectures.
Figure 8. Comparison of teaching quality and predicted values. The dark blue color shows the teaching quality scale survey results, and the red dotted line shows the teaching quality predicted from the key influential characteristics.
Table 1. Teaching quality questionnaire validity analysis. Column 1 lists the final items obtained with the scale-generation method described above, column 2 gives the largest factor loading from the survey results, and column 3 gives the communality.

| Items | Largest Factor Loading | Communality |
| --- | --- | --- |
| Q1: Well prepared for teaching, with skillful content and full spirit. | 0.615 | 0.752 |
| Q2: Lectures seriously and actively maintains classroom teaching order. | 0.714 | 0.592 |
| Q3: The teaching content is systematic and rich in information, the pace and progress are reasonably arranged, and the important and difficult points stand out. | 0.623 | 0.636 |
| Q4: Builds moral character and educates students through professional knowledge and quality at the right time. | 0.821 | 0.745 |
| Q5: Links theory with practice and reflects the frontier of the discipline. | 0.825 | 0.708 |
| Q6: Focuses on inspiration and guidance, flexibly using various teaching methods to help students understand the material and develop their thinking and creative abilities. | 0.705 | 0.620 |
| Q7: Uses blackboard writing and modern teaching techniques in a timely and appropriate manner, with good results. | 0.885 | 0.811 |
| Q8: Guides students to actively participate in teaching sessions; the classroom atmosphere is enthusiastic. | 0.823 | 0.680 |
| Q9: Students learn something and master the course knowledge to a high degree. | 0.777 | 0.724 |
| Q10: Cultivates students’ learning habits, learning methods, inquiring spirit, and innovative ability. | 0.730 | 0.636 |
| Q11: Organized, accurate lecture knowledge, with focused treatment of key and difficult points. | 0.767 | 0.627 |
| Q12: Fluent expression, concise language, and strong appeal. | 0.716 | 0.583 |
| KMO | 0.704 | |
| Bartlett’s test of sphericity (chi-square) | 58.789 | |
| p-value | 0.001 | |
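The KMO and Bartlett’s-sphericity figures reported for this questionnaire can be computed directly from a raw response matrix. The sketch below is illustrative only (simulated responses, not the study’s survey data), using the standard formulas for both statistics:

```python
import numpy as np
from scipy.stats import chi2

def bartlett_sphericity(X):
    """Bartlett's test: are the items correlated enough to factor-analyse?"""
    n, p = X.shape
    R = np.corrcoef(X, rowvar=False)
    # chi-square statistic: -((n-1) - (2p+5)/6) * ln|R|, df = p(p-1)/2
    stat = -(n - 1 - (2 * p + 5) / 6) * np.log(np.linalg.det(R))
    dof = p * (p - 1) / 2
    return stat, chi2.sf(stat, dof)

def kmo(X):
    """Kaiser-Meyer-Olkin measure of sampling adequacy (overall value)."""
    R = np.corrcoef(X, rowvar=False)
    inv = np.linalg.inv(R)
    # partial correlations derived from the inverse correlation matrix
    d = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    P = -inv / d
    np.fill_diagonal(R, 0.0)
    np.fill_diagonal(P, 0.0)
    return (R ** 2).sum() / ((R ** 2).sum() + (P ** 2).sum())

# simulated 200 respondents answering 6 items driven by one latent factor
rng = np.random.default_rng(0)
latent = rng.normal(size=(200, 1))
X = latent + 0.8 * rng.normal(size=(200, 6))
stat, p = bartlett_sphericity(X)
print(f"Bartlett chi2 = {stat:.1f}, p = {p:.4f}, KMO = {kmo(X):.3f}")
```

A KMO above roughly 0.6 and a significant Bartlett p-value, as reported in Table 1, are the usual preconditions for factor analysis.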
Table 2. Validity and reliability statistics for the three dimensions of teacher burnout and their sum, including factor loadings, corrected item-total correlations (CITC), communalities, the corresponding Cronbach’s alpha, and the KMO value.

| Items | Factor Loadings | CITC | Communalities | Cronbach α |
| --- | --- | --- | --- | --- |
| Emotional exhaustion | 0.806 | 0.529 | 0.785 | 0.741 |
| Depersonalization | 0.790 | 0.673 | 0.712 | |
| Personal accomplishment | 0.932 | 0.359 | 0.878 | |
| Teacher burnout | 0.977 | 1.000 | 1.000 | |
| KMO | 0.624 | | | |
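The Cronbach’s α value in Table 2 follows from the standard item-variance formula. A minimal sketch with made-up item scores (not the study’s burnout data):

```python
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items). Classic internal-consistency estimate:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# simulated 100 respondents, three items measuring one underlying trait
rng = np.random.default_rng(1)
trait = rng.normal(size=(100, 1))
scores = trait + 0.5 * rng.normal(size=(100, 3))
print(f"alpha = {cronbach_alpha(scores):.3f}")
```

Values above about 0.7, as in Table 2, are conventionally read as acceptable internal consistency.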
Table 3. Validity and reliability statistics for the seven dimensions of teaching style, including factor loadings, CITC, communalities, the corresponding Cronbach’s alpha, and the KMO value.

| Style Types | Factor Loadings | CITC | Communalities | Cronbach α |
| --- | --- | --- | --- | --- |
| Legislative | 0.891 | 0.853 | 0.794 | 0.968 |
| Executive | 0.860 | 0.803 | 0.739 | |
| Judicial | 0.954 | 0.936 | 0.909 | |
| Global | 0.946 | 0.924 | 0.896 | |
| Local | 0.937 | 0.916 | 0.878 | |
| Liberal | 0.937 | 0.914 | 0.878 | |
| Conservative | 0.903 | 0.867 | 0.815 | |
| KMO | 0.866 | | | |
Table 4. Validity analysis. The table gives the factor loadings and communalities for the 49 items of the TPACK scale, the KMO and p-values for the seven dimensions, and the overall KMO, p-value, and Cronbach’s alpha.

| Dimensions | Items | Factor Loadings | Communalities | KMO | p |
| --- | --- | --- | --- | --- | --- |
| TK | 1 | 0.742 | 0.820 | 0.630 | 0.000 |
| | 2 | 0.459 | 0.915 | | |
| | 3 | 0.679 | 0.786 | | |
| | 4 | 0.904 | 0.907 | | |
| | 5 | 0.824 | 0.941 | | |
| | 6 | 0.439 | 0.965 | | |
| | 7 | 0.855 | 0.969 | | |
| | 8 | 0.503 | 0.942 | | |
| | 9 | 0.548 | 0.842 | | |
| | 10 | 0.458 | 0.890 | | |
| | 11 | 0.567 | 0.957 | | |
| | 12 | 0.790 | 0.889 | | |
| | 13 | 0.889 | 0.969 | | |
| | 14 | 0.856 | 0.899 | | |
| | 15 | 0.701 | 0.937 | | |
| | 16 | 0.545 | 0.922 | | |
| PK | 17 | 0.468 | 0.968 | 0.882 | 0.000 |
| | 18 | 0.569 | 0.904 | | |
| | 19 | 0.890 | 0.913 | | |
| | 20 | 0.542 | 0.907 | | |
| | 21 | 0.819 | 0.890 | | |
| CK | 22 | 0.791 | 0.940 | 0.631 | 0.000 |
| | 23 | 0.960 | 0.976 | | |
| | 24 | 0.430 | 0.916 | | |
| | 25 | 0.867 | 0.963 | | |
| | 26 | 0.553 | 0.868 | | |
| PCK | 27 | 0.553 | 0.883 | 0.643 | 0.000 |
| | 28 | 0.590 | 0.814 | | |
| | 29 | 0.825 | 0.789 | | |
| | 30 | 0.545 | 0.940 | | |
| TCK | 31 | 0.889 | 0.946 | 0.858 | 0.000 |
| | 32 | 0.622 | 0.931 | | |
| | 33 | 0.836 | 0.922 | | |
| | 34 | 0.579 | 0.904 | | |
| | 35 | 0.502 | 0.890 | | |
| | 36 | 0.469 | 0.925 | | |
| | 37 | 0.844 | 0.872 | | |
| | 38 | 0.416 | 0.939 | | |
| TPK | 39 | 0.840 | 0.902 | 0.822 | 0.000 |
| | 40 | 0.778 | 0.871 | | |
| | 41 | 0.860 | 0.883 | | |
| | 42 | 0.909 | 0.935 | | |
| | 43 | 0.409 | 0.904 | | |
| | 44 | 0.711 | 0.878 | | |
| | 45 | 0.803 | 0.913 | | |
| TPACK | 46 | 0.416 | 0.879 | | 0.000 |
| | 47 | 0.428 | 0.827 | | |
| | 48 | 0.790 | 0.896 | | |
| | 49 | 0.924 | 0.907 | | |
| Overall KMO | 0.668 | | | | |
| Overall p-value | 0.004 | | | | |
| Cronbach’s alpha | 0.619 | | | | |
Table 5. Correlation coefficients of Tq with different characteristics. The bold characteristics have correlation coefficients greater than the threshold value.

| Characteristics | Correlation Coefficient |
| --- | --- |
| **Jud** | 0.186820 |
| **Exe** | 0.164657 |
| **Glo** | 0.157737 |
| **Ta** | 0.147954 |
| **Lib** | 0.143140 |
| **Con** | 0.137340 |
| **Loc** | 0.109996 |
| **Pt** | 0.105862 |
| **TCK** | 0.080511 |
| **Cd** | 0.076653 |
| **Leg** | 0.075396 |
| **Gd** | 0.065621 |
| **PK** | 0.058221 |
| **Eb** | 0.056243 |
| **TK** | 0.050682 |
| Aa | 0.044805 |
| TPACK | 0.039777 |
| Te | 0.036798 |
| CK | 0.033117 |
| Tb | 0.032526 |
| PCK | 0.031813 |
| Ag | 0.031474 |
| Dg | 0.023780 |
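The screening behind Table 5 amounts to ranking each characteristic by the absolute value of its Pearson correlation with the teaching-quality score and keeping those above a threshold. A hedged sketch (the data, the two characteristic names, and the 0.3 cut-off are all illustrative, not the study’s actual values):

```python
import numpy as np

def screen_by_correlation(X, y, names, threshold):
    """Rank characteristics by |Pearson r| with y; keep those above threshold."""
    r = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(X.shape[1])]
    ranked = sorted(zip(names, r), key=lambda t: -t[1])
    return [(name, c) for name, c in ranked if c > threshold]

rng = np.random.default_rng(2)
y = rng.normal(size=60)                              # stand-in quality scores
X = np.column_stack([y + 0.5 * rng.normal(size=60),  # strongly related characteristic
                     rng.normal(size=60)])           # unrelated characteristic
selected = screen_by_correlation(X, y, ["Jud", "Dg"], threshold=0.3)
print(selected)
```

The related characteristic survives the cut while the unrelated one is very unlikely to, mirroring how the bold rows of Table 5 were separated from the rest.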
Table 6. Coefficients of each item of the multiple regression model.

| Items | Regression Coefficient |
| --- | --- |
| Eb | 0.08762497 |
| Pt | −0.18192852 |
| Gd | −0.16146295 |
| Ta | 0.16760372 |
| Leg | −0.91275239 |
| Exe | −0.11504773 |
| Jud | 0.31069709 |
| Glo | −0.00707843 |
| Loc | 0.18233648 |
| Lib | 0.69943627 |
| Con | −0.06296348 |
| TK | 0.04605904 |
| PK | −0.09686554 |
| TCK | −0.02922831 |
| Cd | −0.09414485 |
| Intercept | 0.7192219170075398 |
| R-squared | 0.14485157622728662 |
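Coefficients, intercept, and R² of the kind reported in Table 6 come from ordinary least squares fitted to the characteristic values. A self-contained numpy sketch (random stand-in data, not the study’s 82 course records; the three predictors are unnamed placeholders):

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares: returns (coefficients, intercept, R-squared)."""
    A = np.column_stack([X, np.ones(len(y))])      # append intercept column
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    resid = y - A @ beta
    r2 = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
    return beta[:-1], beta[-1], r2

rng = np.random.default_rng(3)
X = rng.normal(size=(82, 3))                       # e.g. 82 records, 3 characteristics
y = 0.3 * X[:, 0] - 0.1 * X[:, 1] + 0.7 + rng.normal(scale=0.5, size=82)
coef, intercept, r2 = ols(X, y)
print(coef.round(3), round(intercept, 3), round(r2, 3))
```

A modest R², such as the roughly 0.145 in Table 6, indicates that a single linear model over all characteristics explains only part of the variance in teaching quality, which motivates the per-characteristic analysis that follows.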
Table 7. Coefficients of determination for each characteristic.

| Characteristics | Coefficient of Determination (R²) |
| --- | --- |
| Eb | 0.74849181 |
| Pt | 0.31818084 |
| Gd | 0.50910173 |
| Ta | 0.43757353 |
| Leg | 0.27792175 |
| Exe | 0.72262906 |
| Jud | 0.67067184 |
| Glo | 0.33800404 |
| Loc | 0.58113391 |
| Lib | 0.23069742 |
| Con | 0.3413322 |
| TK | 0.73740502 |
| PK | 0.45785315 |
| TCK | 0.29118119 |
| Cd | 0.54061881 |
Table 8. Analysis of the correlation between key influence characteristics and teaching quality.

| Key Influence Characteristics | Association Degree |
| --- | --- |
| Eb | 0.65991873 |
| Gd | 0.50405441 |
| Exe | 0.72121656 |
| Jud | 0.77624262 |
| Loc | 0.73482096 |
| TK | 0.71722238 |
| Cd | 0.71548622 |
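The association degrees in Table 8 come from grey correlation (grey relational) analysis. The sketch below implements a common formulation, Deng’s grey relational degree with min–max normalization and the conventional resolution coefficient ρ = 0.5; the series are made up, and the paper’s exact normalization and ρ may differ:

```python
import numpy as np

def grey_relational_degree(reference, series, rho=0.5):
    """Deng's grey relational degree of each comparison series w.r.t. a reference.

    reference: (n,) target sequence (e.g. teaching quality per teacher)
    series:    list of (n,) comparison sequences (e.g. key characteristics)
    """
    def norm(x):  # rescale every sequence to [0, 1] so scales are comparable
        return (x - x.min()) / (x.max() - x.min())
    ref = norm(np.asarray(reference, float))
    cmp_ = np.array([norm(np.asarray(s, float)) for s in series])
    delta = np.abs(cmp_ - ref)                        # pointwise distances
    dmin, dmax = delta.min(), delta.max()
    xi = (dmin + rho * dmax) / (delta + rho * dmax)   # relational coefficients
    return xi.mean(axis=1)                            # degree = mean coefficient

quality = np.array([0.9, 0.7, 0.8, 0.6, 0.85])
close = quality + 0.05 * np.array([1, -1, 1, -1, 0])  # tracks quality closely
far = np.array([0.2, 0.9, 0.1, 0.8, 0.3])             # does not track quality
deg = grey_relational_degree(quality, [close, far])
print(deg.round(3))
```

A sequence that moves with teaching quality receives a degree near 1, while an unrelated sequence scores lower; the uniformly high degrees in Table 8 are what mark these characteristics as key influences.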

Share and Cite

Yao, D.; Lin, J. Identifying Key Factors Influencing Teaching Quality: A Computational Pedagogy Approach. Systems 2023, 11, 455. https://doi.org/10.3390/systems11090455