Article

Semi-Supervised Feature Selection of Educational Data Mining for Student Performance Analysis

1 Training and Basic Education Management Office, Southwest University, Chongqing 400715, China
2 College of Electronic and Information Engineering, Southwest University, Chongqing 400715, China
3 School of Computing and Information Science, Faculty of Science and Engineering, Anglia Ruskin University, Cambridge CB1 1PT, UK
* Author to whom correspondence should be addressed.
Electronics 2024, 13(3), 659; https://doi.org/10.3390/electronics13030659
Submission received: 15 December 2023 / Revised: 1 February 2024 / Accepted: 2 February 2024 / Published: 5 February 2024

Abstract:
In recent years, the informatization of the educational system has caused a substantial increase in educational data. Educational data mining can assist in identifying the factors influencing students’ performance. However, two challenges have arisen in the field of educational data mining: (1) How to handle the abundance of unlabeled data? (2) How to identify the most crucial characteristics that impact student performance? In this paper, a semi-supervised feature selection framework is proposed to analyze the factors influencing student performance. The proposed method is semi-supervised, enabling the processing of a considerable amount of unlabeled data with only a few labeled instances. Additionally, by solving a feature selection matrix, the weights of each feature can be determined, to rank their importance. Furthermore, various commonly used classifiers are employed to assess the performance of the proposed feature selection method. Extensive experiments demonstrate the superiority of the proposed semi-supervised feature selection approach. The experiments indicate that behavioral characteristics are significant for student performance, and the proposed method outperforms the state-of-the-art feature selection methods by approximately 3.9% when extracting the most important feature.

1. Introduction

Data mining, as a powerful tool for information extraction, aims to discover potential patterns, associations, and knowledge from large-scale datasets, providing strong support for decision-making and problem-solving [1,2,3]. Its applications cover various fields such as business, healthcare, and machine learning [4,5,6,7], with specific techniques including non-negative matrix factorization [8,9] and multi-view clustering [10,11]. In the field of education, the use of data mining techniques is also becoming increasingly widespread [12,13]. With the increased popularity of digital teaching and online learning tools, the data generated by students during the learning process are growing rapidly. The application of data mining technology in the field of education involves analyzing extensive educational datasets to optimize the teaching process. This application is referred to as educational data mining (EDM) [14,15].
EDM encompasses the gathering of information from diverse outlets, including students’ academic achievements, online learning platform interactions, as well as scores from tests and assignments. Subsequently, data mining techniques are employed to unveil patterns, trends, and associations within these data [16]. This analytical process aids educators in gaining deeper insights into students’ learning methods, recognizing hurdles in their learning journeys, and offering tailored educational support and suggestions. Based on the results of EDM, educators can develop targeted instructional strategies and interventions for students’ learning needs and characteristics to enhance student learning outcomes [17,18].
In the field of education, collected labeled data are generally limited, typically encompassing student exam scores and personal background information. However, there is also a large amount of unlabeled data, such as students’ classroom performances and discussion records, which holds great potential informational value but lacks corresponding labels [19]. The presence of abundant unlabeled data increases the complexity and difficulty of educational data mining tasks [20]. Therefore, effectively utilizing unlabeled data to discover patterns and regularities has become a critical challenge [21].
In the research on EDM, there has been a significant focus on the impact of different student characteristics on their academic performance [22]. Student characteristics encompass aspects such as the frequency of raising hands, online learning activities, and assignment submissions. There may be potential associations between these characteristics and students’ academic performance. Delving deeper into the relationship between different student characteristics and academic performance can help in understanding students’ learning patterns and behavioral habits, and provide important clues for personalized education and student intervention [23]. The identification of the primary characteristics that influence student performance is an essential task in educational data mining. However, educational datasets often contain numerous irrelevant features that can negatively affect the accuracy of models. If a dataset contains many features, the corresponding model may encounter disruptions from redundant or noisy elements, leading to challenges in precisely assessing the influence of student characteristics on academic performance.
To tackle these issues, this paper introduces an approach called semi-supervised feature selection based on generalized linear regression (SFSGLR), with the objective of identifying the most critical characteristics influencing students’ academic performance. Specifically, the proposed method adopts the idea of semi-supervised learning to process a large amount of unlabeled data by using only a small number of labeled instances. This approach can efficiently utilize data resources and reduce the workload of manually labeling data. A feature-selection-based model is introduced to select the features that affect the academic performance of students. By solving a feature selection matrix, the importance of each feature for student performance is determined and ranked. Finally, to evaluate the performance of the proposed semi-supervised feature selection method, four popular classifiers were employed and extensive experiments were conducted. The experimental results indicated that the proposed method demonstrated superior performance in identifying key features, and it was found that behavioral features are crucial factors influencing students’ academic performance. This enables educators and policymakers to focus on such features to develop targeted teaching strategies and intervention measures.
The remaining sections of this paper are structured as follows: Section 2 provides the background. Section 3 introduces the materials and methods used in the paper. In Section 4, a semi-supervised feature selection method based on generalized linear regression is proposed. In Section 5, an optimization approach is designed to solve the proposed model. In Section 6, experiments are conducted to analyze the important characteristics and demonstrate the effectiveness of the proposed method. The conclusions are discussed in Section 7.

2. Background

With the development of the education field and the advancement of technology, EDM is rapidly emerging and has become a much-anticipated research direction in the field of education [24,25]. This section reviews the existing literature and research relevant to this paper’s topic, aiming to achieve a comprehensive understanding of the current state of research on the impact of student characteristics on academic performance.

2.1. Research on Student Performance Based on Semi-Supervised Learning

In [20], the authors employed various widely recognized semi-supervised methods to forecast the performance of high school students in the ’Mathematics’ module’s final exam. The experiment conducted in this study was divided into two stages. In each stage, the attributes of students in their first semester were first evaluated, followed by an evaluation of all attributes across two semesters. The evaluation employed semi-supervised algorithms such as self-training, co-training, tri-training, de-tri-training, and democratic co-learning. These algorithms were combined with several supervised classifiers, including naive Bayes (NB), C4.5 decision tree, K-nearest neighbors (KNN), and sequential minimal optimization (SMO). The experimental results revealed that self-training, tri-training, and co-training, among the semi-supervised methods, outperformed the commonly used supervised method (NB classifier). The work in [26] explored the effectiveness of semi-supervised methods in forecasting academic performance in distance higher education. The work in [27] utilized semi-supervised learning to classify the performance of first-year students. It adopted k-means clustering to classify students into three clusters and then used a naive Bayes classifier to classify them and predict student performance. The work in [28] investigated the role of social influence in predicting academic performance. The study first constructed students’ social relationships by analyzing their school behaviors and finding similarities in academic performance among friends. Next, a semi-supervised learning approach was used to build social networks to predict student achievement.

2.2. Research on Student Performance Based on Feature Selection Methods

The work in [22] proposed an academic performance classification model aimed at investigating the impact of student behavioral characteristics on academic performance in educational datasets. The study used the experience API (xAPI) tool to collect data and three common data mining methods (artificial neural networks (ANN), decision tree classifier (DT), and NB) to classify the data and assess student behaviors during the learning process and their impact on academic performance. The work in [29] employed four data mining techniques (NB, ANN, DT, and support vector machine (SVM)) for predicting students’ academic performance and identifying the features that impacted their learning outcomes. The results indicated that the SVM method exhibited a superior performance. The work in [30] aimed to establish an effective model for predicting students’ learning performance by discussing various data mining techniques. Four feature selection methods, including a genetic algorithm, gain ratio, relief, and information gain, were utilized to preprocess the data. Subsequently, five classification algorithms, namely KNN, NB, bagging, random forest, and J48 decision tree, were employed to analyze and evaluate the students’ performance. The experimental results demonstrated that the combination of a genetic algorithm and KNN classifier exhibited the best accuracy measurement compared to the other methods.

3. Material and Methods

3.1. Notations and Definitions

Some notations and definitions are summarized in Table 1. In this paper, scalars are written as lowercase letters, vectors are written as boldface lowercase letters, and matrices are written as boldface uppercase letters.

3.2. Research Methodology

The workflow for semi-supervised feature selection of educational data mining for student performance analysis is shown in Algorithm 1.
First, pre-processing is performed on the data $\hat{\mathbf{X}} \in \mathbb{R}^{m \times n}$, where m is the number of features and n is the number of samples. The original educational data matrix $\hat{\mathbf{X}}$ usually contains text entries, which are hard to employ directly for feature selection based on generalized linear regression. Thus, the text entries are replaced by numerical values to obtain a numerical dataset $\mathbf{X}$. Then, in order to reduce the influence of certain samples with large values, each column is normalized as $\mathbf{x}_j = \mathbf{x}_j / \|\mathbf{x}_j\|_2$.
Second, the proposed semi-supervised feature selection method is employed to obtain a feature selection matrix $\mathbf{W} \in \mathbb{R}^{m \times c}$, where c is the number of classes. The feature selection matrix represents the importance of each feature; the features are sorted according to $\|\mathbf{w}^i\|_2$ in descending order, so that the f most important features can be selected. In order to prevent accidental results, the experiment was repeated 10 times.
The final stage is computing the classification results using several classifiers. The data matrix is reconstructed as $\mathbf{X} \in \mathbb{R}^{f \times n}$, where f is the number of features selected in the previous stage. Then, the final classification results are computed by applying the classifiers to the data matrix after selection. A 10-fold cross-validation was used in the experiments.
Algorithm 1 Workflow of the semi-supervised feature selection on educational data mining for students’ performance analysis.
Require: Educational data matrix $\hat{\mathbf{X}} \in \mathbb{R}^{m \times n}$.
1: Convert the data into a numerical data set $\mathbf{X} \in \mathbb{R}^{m \times n}$;
2: Normalize the data by $\mathbf{x}_j = \mathbf{x}_j / \|\mathbf{x}_j\|_2$;
3: for k = 1 to 10 do
4:   Divide the data into labeled and unlabeled data randomly according to a proportion;
5:   Obtain the feature selection matrix $\mathbf{W}^{(k)} \in \mathbb{R}^{m \times c}$ by the proposed semi-supervised feature selection method in Algorithm 2;
6: end for
7: Compute the mean value $\mathbf{W} = \frac{1}{10} \sum_{k=1}^{10} \mathbf{W}^{(k)}$;
8: Calculate and sort each feature according to $\|\mathbf{w}^i\|_2$ in descending order;
9: Select the f most important features;
10: Construct the data matrix after selection $\mathbf{X} \in \mathbb{R}^{f \times n}$;
11: Compute the classification results by 10-fold cross-validation with classifiers on the data matrix after selection.
Ensure: The importance of each feature $\|\mathbf{w}^i\|_2$, the importance order of all features, and the final classification results.
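To make Algorithm 1 concrete, a minimal Python sketch of the workflow is given below. It assumes that the educational data have already been converted to a numeric matrix of size m × n (features × samples), that the class labels are encoded as integers 0, ..., c − 1, and that the solver of Algorithm 2 is supplied as a callable `selector` (a sketch of such a solver is given in Section 5). All names are illustrative rather than taken from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score


def run_workflow(X_hat, y, selector, n_selected=8, label_ratio=0.5,
                 n_repeats=10, seed=0):
    """Sketch of Algorithm 1: normalize, repeat the semi-supervised feature
    selection with random label splits, average W, rank the features, and
    evaluate the selected features with 10-fold cross-validation.

    selector(X, Y_L, labeled_idx) is assumed to return W of shape (m, c),
    e.g. the SFSGLR sketch of Section 5.
    """
    rng = np.random.default_rng(seed)
    m, n = X_hat.shape
    c = len(np.unique(y))

    # Step 2: normalize every column (sample) to unit l2 norm
    X = X_hat / np.linalg.norm(X_hat, axis=0, keepdims=True)

    # Steps 3-7: repeat the selection 10 times and average the W matrices
    W_mean = np.zeros((m, c))
    for _ in range(n_repeats):
        labeled_idx = rng.choice(n, size=int(label_ratio * n), replace=False)
        Y_L = np.eye(c)[y[labeled_idx]]        # one-hot labels of labeled samples
        W_mean += selector(X, Y_L, labeled_idx) / n_repeats

    # Steps 8-10: rank features by the row norms of W and keep the top f
    importance = np.linalg.norm(W_mean, axis=1)
    order = np.argsort(importance)[::-1]
    X_selected = X[order[:n_selected]]

    # Step 11: 10-fold cross-validation on the selected features
    acc = cross_val_score(RandomForestClassifier(random_state=seed),
                          X_selected.T, y, cv=10, scoring="accuracy")
    return importance, order, acc.mean()
```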

4. The Proposed Method

$\mathbf{X} \in \mathbb{R}^{m \times n}$ is a data matrix with c classes, where m denotes the number of features, n denotes the number of samples, and $\mathbf{Y} \in \mathbb{R}^{n \times c}$ is the label matrix, which represents the relationships between samples and classes. In semi-supervised learning, supposing l samples are labeled and $u = n - l$ samples are unlabeled, the data can be written as $\mathbf{X} = [\mathbf{X}_L, \mathbf{X}_U]$, associated with $\mathbf{Y} = [\mathbf{Y}_L; \mathbf{Y}_U]$. $\mathbf{X}_L$ denotes the labeled data, $\mathbf{X}_U$ denotes the unlabeled data, $\mathbf{Y}_L$ denotes the label matrix of the labeled data, and $\mathbf{Y}_U$ denotes the label matrix of the unlabeled data. $\mathbf{Y}_L$ is a binary matrix in which $y_{ij} = 1$ if $\mathbf{x}_i$ belongs to the j-th class.
A generalized linear regression to represent the relationship between X and Y can be expressed as follows [31]:
$f(\mathbf{X}) = \mathbf{X}^T \mathbf{W} + \mathbf{1}\mathbf{b}^T$ (1)
where $\mathbf{W} \in \mathbb{R}^{m \times c}$ is the feature selection matrix, which represents the importance of each feature for the different classes, $\mathbf{b} \in \mathbb{R}^{c \times 1}$ denotes the bias, and $\mathbf{1}$ is a column vector whose entries are all 1.
Then, in order to reduce the gap between f ( X ) and Y , the loss function can be denoted by
$\min_{\mathbf{W}, \mathbf{b}} \; loss(\mathbf{X}^T \mathbf{W} + \mathbf{1}\mathbf{b}^T, \mathbf{Y}) + \lambda g(\mathbf{W})$ (2)
where g ( W ) is a regularization term of the feature selection matrix W , and λ is the parameter used to control the term.
The Frobenius norm is commonly used to denote the loss of the generalized linear regression in feature selection methods. In order to select the most important features, several methods employ an $l_{2,1}$ norm on the feature selection matrix $\mathbf{W}$ to enhance sparseness [32]. However, the $l_{2,1}$ norm is more effective when the data have numerous features. In educational data mining, although the number of samples is large, the features are more difficult to extract and their number is not always large. Therefore, the $l_{2,1}$ norm may cause over-sparseness of $\mathbf{W}$. In addition, in educational data mining, feature selection also aims to analyze the relative importance of all features. Thus, preserving the values of all features in the feature selection matrix is significant. Therefore, the Frobenius norm is adopted for $\mathbf{W}$ in the proposed method, and Equation (2) can be written as
$\min_{\mathbf{W}, \mathbf{b}} \; \|\mathbf{X}^T \mathbf{W} + \mathbf{1}\mathbf{b}^T - \mathbf{Y}\|_F^2 + \lambda \|\mathbf{W}\|_F^2 \quad \mathrm{s.t.} \; \mathbf{Y}_U \geq 0, \; \mathbf{Y}_U \mathbf{1} = \mathbf{1}$ (3)
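As a quick numerical illustration of the two regularizers discussed above, the snippet below compares the Frobenius norm of a feature selection matrix with its $l_{2,1}$ norm and computes the per-feature importance values $\|\mathbf{w}^i\|_2$ used for ranking; the matrix is random and purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((16, 3))            # illustrative W: 16 features, 3 classes

frobenius_sq = np.sum(W ** 2)               # ||W||_F^2, the regularizer in Equation (3)
l21 = np.sum(np.linalg.norm(W, axis=1))     # ||W||_{2,1}, used by l2,1-based methods
importance = np.linalg.norm(W, axis=1)      # ||w^i||_2, per-feature importance

print(frobenius_sq, l21)
print(np.argsort(importance)[::-1])         # feature indices ranked by importance
```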
As the subspace is projected from the original data space, the manifold structures of the features in the subspace should be similar to those in the original space [33,34]. In other words, when two features are similar in the original space, they are also similar in the subspace. Maintaining the manifold structures between the features is beneficial for extracting a well-structured feature selection matrix. The Euclidean distance is commonly employed to measure the similarity of the features: if the Euclidean distance $\|\mathbf{w}^i - \mathbf{w}^j\|_2^2$ between two features $\mathbf{w}^i$ and $\mathbf{w}^j$ is small, their similarity $s_{ij}$ is large. Thus, the manifold regularization is expressed as
$\min_{\mathbf{W}} \; \frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m} s_{ij} \|\mathbf{w}^i - \mathbf{w}^j\|_2^2 = \min_{\mathbf{W}} \; \mathrm{Tr}(\mathbf{W}^T \mathbf{L} \mathbf{W})$ (4)
where $\mathbf{L} \in \mathbb{R}^{m \times m}$ is the graph Laplacian matrix with $\mathbf{L} = \mathbf{D} - \mathbf{S}$, $\mathbf{S}$ is the similarity matrix, and $\mathbf{D}$ is the degree matrix with $d_{ii} = \sum_{j} s_{ij}$.
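The paper relates feature similarity to the Euclidean distance between features but does not spell out the kernel used to build $\mathbf{S}$. The sketch below constructs $\mathbf{S}$, $\mathbf{D}$, and $\mathbf{L} = \mathbf{D} - \mathbf{S}$ over the feature rows of $\mathbf{X}$ with a Gaussian kernel, which is one common choice and is assumed here only for illustration.

```python
import numpy as np


def feature_graph_laplacian(X, sigma=1.0):
    """Build the similarity matrix S, degree matrix D, and Laplacian L = D - S
    over the m feature rows of X (shape m x n). A Gaussian kernel on pairwise
    Euclidean distances is assumed; the paper does not fix the kernel."""
    sq_norms = np.sum(X ** 2, axis=1)
    dist_sq = sq_norms[:, None] + sq_norms[None, :] - 2.0 * X @ X.T
    dist_sq = np.maximum(dist_sq, 0.0)          # guard against tiny negative values

    S = np.exp(-dist_sq / (2.0 * sigma ** 2))   # small distance -> large similarity s_ij
    np.fill_diagonal(S, 0.0)
    D = np.diag(S.sum(axis=1))                  # d_ii = sum_j s_ij
    L = D - S
    return S, D, L
```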
Considering the above aspects, the final objective function can be formulated as
$\min_{\mathbf{W}, \mathbf{b}} \; \|\mathbf{X}^T \mathbf{W} + \mathbf{1}\mathbf{b}^T - \mathbf{Y}\|_F^2 + \lambda \|\mathbf{W}\|_F^2 + \alpha \mathrm{Tr}(\mathbf{W}^T \mathbf{L} \mathbf{W}) \quad \mathrm{s.t.} \; \mathbf{Y}_U \geq 0, \; \mathbf{Y}_U \mathbf{1} = \mathbf{1}$ (5)
where $\mathbf{X}$ is the data matrix, $\mathbf{W}$ is the feature selection matrix, $\mathbf{b}$ is the bias vector, and $\mathbf{Y}$ is the label matrix. $\lambda$ and $\alpha$ are non-negative parameters.

5. Optimization

5.1. Optimization Steps

An efficient alternating optimization algorithm, in which each variable is updated while the other variables are fixed, is designed to solve the objective function (5).
Update $\mathbf{W}$ while fixing $\mathbf{b}$ and $\mathbf{Y}_U$:
When $\mathbf{b}$ and $\mathbf{Y}_U$ are fixed, the objective function (5) becomes
$\min_{\mathbf{W}} \; \|\mathbf{X}^T \mathbf{W} + \mathbf{1}\mathbf{b}^T - \mathbf{Y}\|_F^2 + \lambda \|\mathbf{W}\|_F^2 + \alpha \mathrm{Tr}(\mathbf{W}^T \mathbf{L} \mathbf{W})$ (6)
By taking the partial derivative of the formula (6) with respect to W , and setting the derivative to zero, we have
$2\mathbf{X}(\mathbf{X}^T \mathbf{W} + \mathbf{1}\mathbf{b}^T - \mathbf{Y}) + 2\lambda \mathbf{W} + 2\alpha \mathbf{L}\mathbf{W} = 0$ (7)
Then, the optimal solution of W is
$\mathbf{W} = (\mathbf{X}\mathbf{X}^T + \lambda \mathbf{I} + \alpha \mathbf{L})^{-1} \mathbf{X}(\mathbf{Y} - \mathbf{1}\mathbf{b}^T)$ (8)
Update $\mathbf{b}$ while fixing $\mathbf{W}$ and $\mathbf{Y}_U$:
When $\mathbf{W}$ and $\mathbf{Y}_U$ are fixed, $\mathbf{b}$ can be updated by solving
$\min_{\mathbf{b}} \; \|\mathbf{X}^T \mathbf{W} + \mathbf{1}\mathbf{b}^T - \mathbf{Y}\|_F^2$ (9)
By taking the partial derivative of the formula (9) with respect to b and setting the derivative to zero, the closed-form solution of b is expressed as
$\mathbf{b} = \frac{1}{n}(\mathbf{Y}^T \mathbf{1} - \mathbf{W}^T \mathbf{X} \mathbf{1})$ (10)
Update $\mathbf{Y}_U$ while fixing $\mathbf{W}$ and $\mathbf{b}$:
When $\mathbf{W}$ and $\mathbf{b}$ are fixed, each label vector $\mathbf{y}_i$ in $\mathbf{Y}_U$ can be updated by solving
$\min_{\mathbf{y}_i} \; \|\mathbf{W}^T \mathbf{x}_i + \mathbf{b} - \mathbf{y}_i\|_2^2 \quad \mathrm{s.t.} \; \mathbf{y}_i \geq 0, \; \mathbf{y}_i^T \mathbf{1} = 1$ (11)
The augmented Lagrangian function of problem (11) is
$\mathcal{L}(\mathbf{y}_i, \phi, \boldsymbol{\psi}_i) = \|\mathbf{a}_i - \mathbf{y}_i\|_2^2 + \phi(\mathbf{y}_i^T \mathbf{1} - 1) - \mathbf{y}_i^T \boldsymbol{\psi}_i$ (12)
where $\phi$ and $\boldsymbol{\psi}_i$ are the Lagrangian multipliers, and $\mathbf{a}_i = \mathbf{W}^T \mathbf{x}_i + \mathbf{b}$.
Then, following the solution method in [35], the optimal solution of y i is
$\mathbf{y}_i = (\mathbf{a}_i + \phi)_{+}$ (13)
where $\phi$ can be obtained by solving $\mathbf{y}_i^T \mathbf{1} = 1$.
The convergent condition of the algorithm is expressed as
$\frac{|obj_{t-1} - obj_t|}{|obj_t|} < \epsilon$ (14)
where o b j t denotes the value of the objective function in the t-th iteration. ϵ is a small positive parameter that controls the convergent condition of the algorithm. The entire update process is summarized in Algorithm 2.
Algorithm 2 Semi-supervised Feature Selection via Generalized Linear Regression (SFSGLR) for Students’ Performance Analysis.
Require: Data matrix $\mathbf{X} \in \mathbb{R}^{m \times n}$, parameters $\lambda$ and $\alpha$.
1: Set $t = 0$, $\epsilon = 10^{-7}$, and initialize $\mathbf{b}$ and $\mathbf{Y}_U$ with random values in the range [0, 1];
2: while not converged do
3:   Update $\mathbf{W}_t$ by Equation (8);
4:   Update $\mathbf{b}_t$ by Equation (10);
5:   Update $\mathbf{Y}_U^t$, in which each $\mathbf{y}_i$ is calculated by Equation (13);
6:   Check the convergence condition by formula (14);
7:   $t = t + 1$;
8: end while
Ensure: Calculate and sort each feature according to $\|\mathbf{w}^i\|_2$ in descending order, and then select the f most important features.
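A compact Python sketch of Algorithm 2 is given below, assuming that the graph Laplacian $\mathbf{L}$ has been built beforehand (e.g., as in Section 4) and that $\mathbf{Y}_L$ holds the one-hot labels of the labeled samples. The simplex-constrained update of each $\mathbf{y}_i$ realizes Equation (13); here $\phi$ is found with the standard sorting-based projection onto the probability simplex, which is one way to satisfy $\mathbf{y}_i^T \mathbf{1} = 1$. Names and default parameter values are illustrative.

```python
import numpy as np


def project_simplex(a):
    """Solve min ||y - a||^2 s.t. y >= 0, y^T 1 = 1, i.e. y = (a + phi)_+ (Equation (13))."""
    u = np.sort(a)[::-1]
    css = np.cumsum(u)
    k = np.arange(1, len(a) + 1)
    rho = np.nonzero(u + (1.0 - css) / k > 0)[0][-1]
    phi = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(a + phi, 0.0)


def sfsglr(X, Y_L, labeled_idx, L, lam=0.1, alpha=0.1, eps=1e-7, max_iter=500, seed=0):
    """Sketch of Algorithm 2 (SFSGLR). X: (m, n) data, Y_L: (l, c) one-hot labels,
    labeled_idx: indices of the l labeled samples, L: (m, m) graph Laplacian."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    c = Y_L.shape[1]
    unlabeled_idx = np.setdiff1d(np.arange(n), labeled_idx)

    b = rng.random(c)                                  # random init in [0, 1]
    Y = np.zeros((n, c))
    Y[labeled_idx] = Y_L
    Y[unlabeled_idx] = rng.random((len(unlabeled_idx), c))
    ones = np.ones(n)
    prev_obj = np.inf

    for _ in range(max_iter):
        # Update W by Equation (8)
        A = X @ X.T + lam * np.eye(m) + alpha * L
        W = np.linalg.solve(A, X @ (Y - np.outer(ones, b)))
        # Update b by Equation (10)
        b = (Y.T @ ones - W.T @ (X @ ones)) / n
        # Update each y_i of Y_U by Equation (13)
        A_U = X[:, unlabeled_idx].T @ W + b            # rows are a_i = W^T x_i + b
        Y[unlabeled_idx] = np.apply_along_axis(project_simplex, 1, A_U)
        # Convergence check, formula (14)
        R = X.T @ W + np.outer(ones, b) - Y
        obj = np.sum(R ** 2) + lam * np.sum(W ** 2) + alpha * np.trace(W.T @ L @ W)
        if abs(prev_obj - obj) / max(abs(obj), 1e-12) < eps:   # small guard in denominator
            break
        prev_obj = obj

    return W
```

Under these assumptions, `functools.partial(sfsglr, L=L)` could serve as the `selector` callable in the workflow sketch of Section 3.2.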

5.2. Computational Complexity

The computational complexity of Algorithm 2 is analyzed in this section. Here, m denotes the number of features, n denotes the number of samples, c denotes the number of classes, and u denotes the number of unlabeled samples. Updating $\mathbf{W}$ costs $O(m^3 + m^2 n + mnc + nc)$, updating $\mathbf{b}$ costs $O(mc^2 + nc)$, and updating $\mathbf{Y}_U$ costs $O(umc)$. Therefore, the overall complexity of one iteration is $O(m^3 + m^2 n + mnc + mc^2 + nc + umc)$.

5.3. Convergence Analysis

The convergence of Algorithm 2 is demonstrated in this section. The update rules for $\mathbf{b}$ and $\mathbf{Y}_U$ are both based on closed-form solutions. Thus, only the convergence of the update of $\mathbf{W}$ needs to be demonstrated. An auxiliary function construction method [36] can be adopted to solve this problem.
Definition 1. 
$\phi(h, h')$ is an auxiliary function of $F(h)$ if the following conditions are satisfied:
$F(h) \leq \phi(h, h'), \quad F(h) = \phi(h, h).$ (15)
Lemma 1. 
If ϕ is an auxiliary function, then F is non-increasing under the following updating rule:
$h^{t+1} = \arg\min_{h} \phi(h, h^t).$ (16)
Proof. 
$F(h^{t+1}) \leq \phi(h^{t+1}, h^t) \leq \phi(h^t, h^t) = F(h^t)$ (17)
If a suitable auxiliary function can be found to satisfy (17), convergence with respect to W can be proved. □
Let $F(\mathbf{W})$ denote the objective function of problem (6), that is,
$F(\mathbf{W}) = \|\mathbf{X}^T \mathbf{W} + \mathbf{1}\mathbf{b}^T - \mathbf{Y}\|_F^2 + \lambda \|\mathbf{W}\|_F^2 + \alpha \mathrm{Tr}(\mathbf{W}^T \mathbf{L} \mathbf{W})$ (18)
The first and second partial derivatives of F ( W ) with respect to the variable w i j can be calculated as follows:
$F'_{ij} = \frac{\partial F}{\partial w_{ij}} = 2[\mathbf{X}(\mathbf{X}^T \mathbf{W} + \mathbf{1}\mathbf{b}^T - \mathbf{Y}) + \lambda \mathbf{W} + \alpha \mathbf{L}\mathbf{W}]_{ij}$ (19)
$F''_{ij} = \frac{\partial F'_{ij}}{\partial w_{ij}} = 2[\mathbf{X}\mathbf{X}^T]_{ii} + 2\lambda + 2\alpha L_{jj}$ (20)
Therefore, the Taylor series expansion of F ( w i j ) can be expressed as
$F(w_{ij}) = F(w_{ij}^t) + F'_{ij}(w_{ij}^t)(w_{ij} - w_{ij}^t) + \frac{1}{2} F''_{ij}(w_{ij}^t)(w_{ij} - w_{ij}^t)^2$ (21)
Lemma 2. 
The function ϕ is an auxiliary function for F ( W ) when it satisfies the following condition:
$\phi(w_{ij}, w_{ij}^t) = F(w_{ij}^t) + F'_{ij}(w_{ij}^t)(w_{ij} - w_{ij}^t) + \frac{h_{ij}}{w_{ij}^t}(w_{ij} - w_{ij}^t)^2$ (22)
where
$h_{ij} = [\mathbf{X}\mathbf{X}^T \mathbf{W}^t + \lambda \mathbf{W}^t + \alpha \mathbf{D}\mathbf{W}^t]_{ij}$ (23)
Proof. 
According to Equations (21) and (22), it can be obtained that $F(w_{ij}^t) = \phi(w_{ij}^t, w_{ij}^t)$. Then, $F(w_{ij}) \leq \phi(w_{ij}, w_{ij}^t)$ holds if the following condition is satisfied:
$\frac{h_{ij}}{w_{ij}^t} \geq [\mathbf{X}\mathbf{X}^T]_{ii} + \lambda + \alpha L_{jj}$ (24)
The following three formulas hold:
$(\mathbf{X}\mathbf{X}^T \mathbf{W}^t)_{ij} = \sum_{k=1}^{m} [\mathbf{X}\mathbf{X}^T]_{ik} w_{kj}^t \geq w_{ij}^t [\mathbf{X}\mathbf{X}^T]_{ii}$ (25)
$\frac{\lambda [\mathbf{W}^t]_{ij}}{w_{ij}^t} = \lambda$ (26)
$\alpha [\mathbf{D}\mathbf{W}^t]_{ij} = \alpha w_{ij}^t D_{jj} \geq \alpha w_{ij}^t [\mathbf{D} - \mathbf{S}]_{jj} = \alpha w_{ij}^t L_{jj}$ (27)
Therefore, $\phi(w_{ij}, w_{ij}^t)$ is an auxiliary function for $F(w_{ij})$. Finally, it can be concluded that the value of the objective function in Algorithm 2 is non-increasing until convergence is achieved. □

6. Experiments

6.1. Dataset

The educational dataset xAPI was adopted in the experiments.
xAPI [22] (https://www.kaggle.com/datasets/aljarah/xAPI-Edu-Data, accessed on 10 November 2023) is a student academic performance dataset collected from a learning management system. It consists of 480 student records. Sixteen features are contained in the dataset, including gender, nationality, place of birth, stage ID, grade ID, section ID, topic, semester, relation, raised hands, visited resources, announcements view, discussion, parent answering survey, parent school satisfaction, and student absence days. The label used in xAPI is ’class’, comprising three classes: low-level, middle-level, and high-level.
xAPI contains text data that are hard to process directly with the proposed algorithm. Thus, the text data are replaced with numeric data. For example, the feature ’student absence days’ includes two kinds of text values, ’under-7’ and ’above-7’; thus, ’under-7’ is replaced by ’1’ and ’above-7’ is replaced by ’0’. Statistical information on the xAPI educational dataset is given in Table 2, and some statistical information is shown in Figure 1.
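A minimal pandas preprocessing sketch along these lines is shown below. The CSV file name and column names are assumptions about the public Kaggle copy of the dataset rather than values stated in the paper; the explicit mapping for absence days follows the example in the text, and the remaining text columns are given integer codes analogous to (though not necessarily identical in order to) Table 2.

```python
import pandas as pd

# File and column names are assumptions about the Kaggle CSV; adjust as needed.
df = pd.read_csv("xAPI-Edu-Data.csv")

# Replacement from the text: 'Under-7' -> 1, 'Above-7' -> 0
df["StudentAbsenceDays"] = df["StudentAbsenceDays"].map({"Under-7": 1, "Above-7": 0})

# Encode every remaining text column with integer codes (analogous to Table 2)
for col in df.columns:
    if df[col].dtype == object:
        df[col] = df[col].astype("category").cat.codes + 1

X_hat = df.drop(columns=["Class"]).to_numpy(dtype=float).T   # m features x n samples
y = df["Class"].to_numpy() - 1                               # class labels as 0, 1, 2
```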
Figure 1a illustrates the distribution of student nationalities. The figure shows that Kuwait and Jordan have the highest representation, with 179 and 172 individuals, respectively.
Figure 1b presents the students’ birthplaces. Kuwait and Jordan, with 180 and 176 individuals, respectively, emerged as the most prevalent birthplaces.
Figure 1c illustrates the distribution of students across different grade levels. The figure indicates that G-02 has the highest number of students, with 147 individuals, while G-01 and G-03 have no students.
Figure 1d showcases the subjects the students are studying, referred to as course topics. The most popular subjects include IT, French, Arabic, Science, and English. The popularity of these subjects may be indicative of students’ interests, future career choices, or the curriculum offered by the school.
Figure 1e displays the frequency of student participation in class by raising their hands. The figure reveals that 90 individuals raised their hands between 10 and 20 times, 80 individuals raised their hands between 70 and 80 times, and 66 individuals raised their hands between 80 and 90 times.
Figure 1f displays the frequency of student access to course content. The figure reveals that 107 students accessed the course content between 80 and 89 times, 76 students accessed it between 90 and 99 times, and 63 students accessed it between 0 and 9 times.
Figure 1g presents the frequency of students viewing new announcements. The figure indicates that 78 students viewed the announcements between 10 and 19 times, 72 students viewed them between 0 and 9 times, and 63 students viewed them between 20 and 29 times.
Figure 1h showcases the frequency of student participation in discussion groups. The figure shows that 77 students participated in discussions between 11 and 20 times, 69 students participated between 31 and 40 times, and 66 students participated between 21 and 30 times.

6.2. Experimental Settings

Using Algorithm 2, a feature selection matrix $\mathbf{W}$ can be obtained. The feature selection matrix $\mathbf{W}$ indicates the importance of each feature. By calculating and sorting $\|\mathbf{w}^i\|_2$ in descending order, a ranking of the importance of the characteristics can be obtained.
After selecting different numbers of the most important features, four classifiers, K-nearest neighbors (KNN), decision tree (Dtree), random forest (RF), and support vector machine (SVM), are adopted to measure the performance of the proposed method.
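The evaluation step can be sketched as follows, assuming that `X_sel` holds the n samples restricted to the f selected features (samples as rows) and `y` holds the class labels. The scikit-learn defaults and macro averaging used here are assumptions, since the paper does not list the classifier hyperparameters or the averaging scheme.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier


def evaluate_selected_features(X_sel, y, cv=10, seed=0):
    """10-fold cross-validation of the four classifiers on the selected features."""
    classifiers = {
        "KNN": KNeighborsClassifier(),
        "DTree": DecisionTreeClassifier(random_state=seed),
        "RF": RandomForestClassifier(random_state=seed),
        "SVM": SVC(random_state=seed),
    }
    scoring = ["accuracy", "f1_macro", "precision_macro", "recall_macro"]
    results = {}
    for name, clf in classifiers.items():
        scores = cross_validate(clf, X_sel, y, cv=cv, scoring=scoring)
        results[name] = {s: (scores[f"test_{s}"].mean(), scores[f"test_{s}"].std())
                         for s in scoring}
    return results
```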

6.2.1. Comparison Methods

To demonstrate the effectiveness of the proposed SFSGLR, three feature selection methods are adopted for comparison with SFSGLR, and they are introduced briefly as follows:
Unsupervised discriminative feature selection (UDFS) [32]: UDFS is an unsupervised feature selection method based on linear discrimination, with an $l_{2,1}$ norm on the feature selection matrix to enhance sparseness.
Non-negative discriminant feature selection (NDFS) [37]: NDFS is an unsupervised feature selection method based on non-negative spectral analysis and $l_{2,1}$ norm regularization.
Semi-supervised feature selection via rescaled linear regression (SFSRLR) [35]: This is a semi-supervised feature selection method with linear regression and an $l_{2,1}$ norm.

6.2.2. Classifiers

In this paper, four classification techniques are employed to assess the factors that influenced students’ performance or grade level. The methods used for classification included K-nearest neighbors (KNN), decision tree (Dtree), random forest (RF), and support vector machine (SVM).
K-nearest neighbors (KNN) is an instance-based classification algorithm that determines the class of a new sample based on the distance between the samples [38]. It selects the nearest K samples as a reference, and determines the category of the new sample based on the majority voting principle.
The automated rule discovery technique known as decision tree (Dtree) [39] analyzes and learns from training data, producing a series of branching decisions that classify the data based on the values of different feature attributes.
Random forest (RF) [40] represents an ensemble learning method that accomplishes classification by constructing numerous decision trees and combining their outcomes. This widely used machine learning algorithm harnesses the diversity and collective knowledge of multiple decision trees to enhance prediction accuracy and robustness. The ultimate classification decision is reached by aggregating the predictions from all individual trees, typically through a majority voting mechanism.
Support vector machine (SVM) [29] is a binary classification algorithm that aims to find an optimal hyperplane in a high-dimensional feature space, to separate different classes of data points. The key idea is to map the data into a high-dimensional feature space and transform a nonlinear problem into a linearly separable or approximately linearly separable problem.

6.2.3. Evaluation Metrics

Four widely used evaluation metrics are adopted to measure the performance of the classification: accuracy (ACC), Fscore, precision and recall. They are formulated as follows:
$ACC = \frac{TP + TN}{TP + TN + FP + FN}, \quad Precision = \frac{TP}{TP + FP}$
$Recall = \frac{TP}{TP + FN}, \quad Fscore = \frac{(1 + \beta^2) \times Precision \times Recall}{\beta^2 \times Precision + Recall}$
where $\beta > 0$ is the parameter of the Fscore and is typically set to 1, while TP, TN, FP, and FN denote true positive, true negative, false positive, and false negative, respectively. For all four metrics, a larger value means a better performance.
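The formulas above are stated for the binary case; for the three-class xAPI labels, one common way to apply them, assumed here, is to compute TP, FP, and FN per class and macro-average the resulting precision and recall before forming the Fscore.

```python
import numpy as np


def classification_metrics(y_true, y_pred, beta=1.0):
    """ACC plus macro-averaged precision, recall, and Fscore, mirroring the
    formulas above (macro averaging over classes is an assumption)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    acc = np.mean(y_true == y_pred)
    precisions, recalls = [], []
    for c in np.unique(y_true):
        tp = np.sum((y_pred == c) & (y_true == c))
        fp = np.sum((y_pred == c) & (y_true != c))
        fn = np.sum((y_pred != c) & (y_true == c))
        precisions.append(tp / (tp + fp) if tp + fp > 0 else 0.0)
        recalls.append(tp / (tp + fn) if tp + fn > 0 else 0.0)
    precision, recall = np.mean(precisions), np.mean(recalls)
    fscore = ((1 + beta ** 2) * precision * recall /
              (beta ** 2 * precision + recall)) if precision + recall > 0 else 0.0
    return acc, precision, recall, fscore
```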

6.3. Student Performance Characteristic Analysis

In the student performance characteristics experiments, the proposed algorithm is adopted to sort the importance of the different features for the students’ academic performance. Figure 2d shows the ranking of the most important features.
Of all the features, f11 influences the students’ performance most, accounting for about 30% of the total importance. f11 denotes the number of visited resources, i.e., how many times a student visited the course content. Next, f10 and f7 are of equal importance, with each accounting for approximately 15% of the total importance; f10 and f7 represent the number of times a student raises his or her hand in class and the course topic, respectively. In addition, the importance values of f1, f14, f8, and f15 are all lower than 1%. Thus, these features are not important for the students’ academic performance.
The 16 features have different tendencies and can be divided into five categories. The first category is personal features, including f1 (gender) and f16 (student absence days). The second category is a social-related category, including f2 (nationality), f3 (place of birth), and f9 (relation). The third category is a school-related category, including f4 (stage ID), f5 (grade ID), f6 (section ID), f7 (topic), and f8 (semester). The fourth is a behavioral category, including f10 (times of raising hands), f11 (times of visiting resources), f12 (times of announcements), and f13 (times of discussions). The fifth is a family-related category, including f14 (parents answering survey) and f15 (parent school satisfaction).
The top four most important features are f11, f10, f7, and f12. And the top eight most important features are f11, f10, f7, f12, f5, f6, f13, and f2. This means that all the behavioral characteristics are significant for the student’s performance.
Figure 2 shows a comparison of the student performance characteristic rankings obtained with the different methods. Unlike the proposed SFSGLR, the importance matrices of the other methods are sparser, since the proposed method adopts the Frobenius norm for the feature selection matrix while the others adopt the $l_{2,1}$ norm. As Figure 2 shows, the importance values of most of the features with the comparison methods are close to 0, which is not convenient for comparing the importance between features. With UDFS, the top five most important features are f15 (33.28%), f4 (31.7%), f9 (18.36%), f1 (14.95%), and f5 (1.66%). For NDFS, the top four most important features are f10 (31.56%), f13 (25.4%), f11 (22.06%), and f12 (20.97%); NDFS also indicates that the behavioral characteristics are important. The feature selection matrix of SFSRLR seems overly sparse, with only two features having importance values larger than 1%. With SFSRLR, the top two most important features are f12 (69.17%) and f13 (30.61%); in SFSRLR, the behavioral characteristics also make up the greatest percentage of importance.
Table 3 shows the classification results with the different methods and numbers of features on xAPI with 50% labeled data. The feature selection methods aim to extract the most important features, and xAPI has only 16 features in total. Thus, the classification results with only a few of the most important features reflect the performance of the methods. With respect to ACC, the proposed SFSGLR performs the best of the four methods. When using the single most important feature, SFSGLR+RF outperforms UDFS+RF, NDFS+RF, and SFSRLR+RF by approximately 18.2%, 3.9%, and 6.0%, respectively. When using the three most important features, SFSGLR+RF outperforms UDFS+RF, NDFS+RF, and SFSRLR+RF by approximately 19%, 10%, and 8.1%, respectively. This indicates that the proposed SFSGLR selects the correct features.

6.4. Student Performance Characteristics Analysis for Different Topics

The top four most important features are f11, f10, f7, and f12, with f11, f10, and f12 all being behavioral characteristics and f7 being a school-related characteristic. Thus, in this section, the content of f7 (topic) is used as a basis to analyze the importance of each feature under different topics. The results are shown in Figure 3 and Figure 4, from which the importance of each feature for the different topics can be observed and analyzed in more depth.
For the IT, Arabic, and Spanish topics, the number of times students raised their hands (f10) accounts for the highest importance. This indicates that students’ active participation in class discussions has a greater impact on their academic performance in these topics.
In five topics, English, Quran, French, History, and Chemistry, the number of times students access the course content (f11) emerges as the most important characteristic. This indicates that, in these subjects, in-depth learning and exploration of the course materials play a crucial role in students’ academic performance. Regularly accessing course resources and materials helps students better understand concepts, retain knowledge, and apply it to real-world problems. Within the domains of Math and Science, the paramount factor is the frequency of students checking for new announcements (f12). This underscores that, in these subjects, students’ attention to updated information and course announcements significantly influences their academic performance.
Conversely, in the realms of biology and geology, the most critical characteristic is the frequency of student participation in discussion groups (f13). This implies that students can enhance their understanding of course concepts and gain more learning benefits through active engagement in group discussions.
To summarize, the varying importance of different characteristics across different topics highlights the diverse influences on students’ academic performance. Nevertheless, a closer examination of the figures reveals that f10, f11, f12, and f13 consistently hold higher rankings for importance across these twelve topics. This suggests that these four characteristics generally play a pivotal role in shaping students’ academic performance.
These findings emphasize the importance of active participation in class discussions, in-depth study of course contents, attention to updated information, and participation in discussion groups. Educators can use this information to develop teaching strategies. This could help to improve students’ academic performance and foster their academic development.

6.5. Performance with Different Numbers of Selected Features

After sorting $\|\mathbf{w}^i\|_2$ in descending order, the ranking of the importance of the characteristics can be obtained. In order to demonstrate the effectiveness of the proposed SFSGLR algorithm, four classifiers are adopted to measure the performance after selecting different numbers of features.
In general, when more features are selected for classification, the classifiers obtain a better performance. The classification results are shown in Table 4; a 10-fold cross-validation is performed, and the mean and standard deviation of the results are recorded. When selecting the two most important features, SFSGLR+KNN achieves around 94% of the performance obtained when all 16 features are selected, while SFSGLR+DTree, SFSGLR+RF, and SFSGLR+SVM achieve approximately 79%, 74%, and 86%, respectively. When selecting the four most important features, SFSGLR+KNN achieves around 91% of the performance obtained with all 16 features, while SFSGLR+DTree, SFSGLR+RF, and SFSGLR+SVM achieve approximately 84%, 85%, and 86%, respectively. Therefore, the performance of the proposed SFSGLR in selecting important features is superior.
Figure 5 shows the classification performance of SFSGLR with the four classifiers. It can be seen that, as the number of selected features increases, the classification performance increases gradually. On xAPI, SFSGLR+RF performs best and SFSGLR+KNN performs worst. When selecting eight features, SFSGLR+RF shows around a 17%, 8%, and 11% improvement compared with SFSGLR+KNN, SFSGLR+DTree, and SFSGLR+SVM, respectively.

6.6. Performance with Different Percentages of Labeled Data

Table 5 shows the classification results with different percentages of labeled data when selecting the top six most important features. Figure 6 shows the curves of SFSGLR with the four classifiers on the four evaluation metrics. As shown in Figure 6, the curves of SFSGLR+RF, SFSGLR+SVM, and SFSGLR+DTree increase when the amount of labeled data increases, whereas SFSGLR+KNN performs unstably when the percentage of labeled data varies from 10% to 90%. SFSGLR+RF performs best of the four classifiers, while SFSGLR+KNN performs worst. When 80% of the data are labeled, SFSGLR+RF obtains the best performance, showing around 34%, 11%, and 9% improvements compared with SFSGLR+KNN, SFSGLR+DTree, and SFSGLR+SVM, respectively.

6.7. Parameter Sensitivity Analysis

In the proposed method, $\lambda$ is used to control $\|\mathbf{W}\|_F^2$ and $\alpha$ is used to control the manifold regularization. In the experiments, the percentage of labeled data is set to 50%, and SVM is adopted to obtain the classification performance. The grid search method is adopted to tune the parameters, which means that $\lambda$ and $\alpha$ are selected from the set [0.001, 0.01, 0.1, 1, 10, 100]. Figure 7 shows the classification performance with respect to different values of $\lambda$ and the number of selected features when fixing $\alpha = 0.1$, and Figure 8 shows the classification performance with respect to different values of $\alpha$ and the number of selected features when fixing $\lambda = 0.1$. The results show that the method is not very sensitive to either $\lambda$ or $\alpha$ in the range [0.001, 0.1]. In addition, the proposed method obtains the best performance when the values of $\lambda$ and $\alpha$ are selected in the range [0.01, 0.1]. Generally, the recommended selections of $\lambda$ and $\alpha$ are in the interval [0.01, 0.1].
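A sketch of this grid search is shown below. It assumes the SFSGLR solver of Section 5 is available as a callable, 50% of the samples are labeled, the top six features are kept, and an SVM with scikit-learn defaults provides the score; these settings mirror the description above, but the exact experimental pipeline is not reproduced here.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC


def grid_search_lambda_alpha(X, y, L, Y_L, labeled_idx, sfsglr, n_selected=6,
                             grid=(0.001, 0.01, 0.1, 1, 10, 100)):
    """Tune lambda and alpha on the grid used in the paper; sfsglr is assumed to
    be the Algorithm 2 sketch from Section 5 and returns W of shape (m, c)."""
    best = (None, None, -np.inf)
    for lam in grid:
        for alpha in grid:
            W = sfsglr(X, Y_L, labeled_idx, L, lam=lam, alpha=alpha)
            top = np.argsort(np.linalg.norm(W, axis=1))[::-1][:n_selected]
            acc = cross_val_score(SVC(), X[top].T, y, cv=10).mean()
            if acc > best[2]:
                best = (lam, alpha, acc)
    return best   # (best lambda, best alpha, mean accuracy)
```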

6.8. Convergence Study

The convergence of the proposed algorithm is demonstrated theoretically in Section 5.3. Figure 9 shows the curve of the objective function value of (5) with respect to the number of iterations. Solving the objective function (5) with Algorithm 2 yields the optimal feature selection matrix $\mathbf{W}$. As is evident in Figure 9, the objective function value declines gradually as the number of iterations increases on the xAPI dataset. In the first 100 iterations, the objective function value decreases quickly, which demonstrates the good convergence behavior of the proposed method. Therefore, the convergence analysis in Section 5.3 is validated by the experiment.

7. Conclusions

In order to process unlabeled data and identify the most crucial characteristics impacting student performance, a semi-supervised feature selection approach based on generalized linear regression is presented. The experiments lead to the conclusion that behavioral characteristics play a pivotal role in students’ performance. Experiments comparing the proposed method with other state-of-the-art methods demonstrate its effectiveness. Furthermore, analyzing the impact factors across different subjects reveals that IT, Arabic, Science, English, History, Chemistry, and Geology are most influenced by the behavioral characteristics. In addition to the behavioral factors, Math, Quran, and Spanish are noticeably influenced by school-related factors. In addition, French and Biology are impacted by social factors. Finally, four classifiers are employed to evaluate the performance of the proposed method, and the extensive experiments demonstrate the superiority of the semi-supervised feature selection approach.
The semi-supervised feature selection approach aims to rank the importance of the characteristics, greatly aiding in the analysis of factors affecting student performance. Consequently, the proposed method can greatly assist education departments in decision-making processes. However, a notable limitation of the proposed method is its lack of predictive ability, which restricts its applicability in certain scenarios. In the future, it would be advantageous to develop a semi-supervised feature selection method that also has a superior predictive capability.

Author Contributions

Conceptualization, S.Y., Y.C. and B.P.; methodology, S.Y., Y.C. and B.P.; software, S.Y., Y.C. and B.P.; validation, Y.C., B.P. and M.-F.L.; formal analysis, B.P. and M.-F.L.; investigation, Y.C. and B.P.; resources, B.P. and M.-F.L.; data curation, B.P. and M.-F.L.; writing—original draft preparation, S.Y.; writing—review and editing, Y.C. and M.-F.L.; visualization, S.Y., Y.C. and B.P.; supervision, Y.C. and B.P.; project administration, M.-F.L.; funding acquisition, M.-F.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data used to support the findings of the study are available from the first author upon request. The author’s email address is [email protected].

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

  1. Hussain, S.; Dahan, N.A.; Ba-Alwib, F.M.; Ribata, N. Educational data mining and analysis of students’ academic performance using WEKA. Indones. J. Electr. Eng. Comput. Sci. 2018, 9, 447–459. [Google Scholar] [CrossRef]
  2. Adekitan, A.I.; Noma-Osaghae, E. Data mining approach to predicting the performance of first year student in a university using the admission requirements. Educ. Inf. Technol. 2019, 24, 1527–1543. [Google Scholar] [CrossRef]
  3. Azevedo, A. Data mining and knowledge discovery in databases. In Advanced Methodologies and Technologies in Network Architecture, Mobile Computing, and Data Analytics; IGI Global: Hershey, PA, USA, 2019; pp. 502–514. [Google Scholar]
  4. Jin, J.; Liu, Y.; Ji, P.; Kwong, C.K. Review on recent advances in information mining from big consumer opinion data for product design. J. Comput. Inf. Sci. Eng. 2019, 19, 010801. [Google Scholar] [CrossRef]
  5. Keserci, S.; Livingston, E.; Wan, L.; Pico, A.R.; Chacko, G. Research synergy and drug development: Bright stars in neighboring constellations. Heliyon 2017, 3, e00442. [Google Scholar] [CrossRef] [PubMed]
  6. Liu, C.; Wu, S.; Li, R.; Jiang, D.; Wong, H.S. Self-supervised graph completion for incomplete multi-view clustering. IEEE Trans. Knowl. Data Eng. 2023, 35, 9394–9406. [Google Scholar] [CrossRef]
  7. Pan, B.; Li, C.; Che, H. Nonconvex low-rank tensor approximation with graph and consistent regularizations for multi-view subspace learning. Neural Netw. 2023, 161, 638–658. [Google Scholar] [CrossRef]
  8. Che, H.; Wang, J. A nonnegative matrix factorization algorithm based on a discrete-time projection neural network. Neural Netw. 2018, 103, 63–71. [Google Scholar] [CrossRef] [PubMed]
  9. Che, H.; Wang, J.; Cichocki, A. Bicriteria sparse nonnegative matrix factorization via two-timescale duplex neurodynamic optimization. IEEE Trans. Neural Netw. Learn. Syst. 2021, 34, 4881–4891. [Google Scholar] [CrossRef]
  10. Pu, X.; Che, H.; Pan, B.; Leung, M.F.; Wen, S. Robust Weighted Low-Rank Tensor Approximation for Multiview Clustering With Mixed Noise. IEEE Trans. Comput. Soc. Syst. 2023. [Google Scholar] [CrossRef]
  11. Cai, Y.; Che, H.; Pan, B.; Leung, M.F.; Liu, C.; Wen, S. Projected cross-view learning for unbalanced incomplete multi-view clustering. Inf. Fusion 2024, 102245. [Google Scholar] [CrossRef]
  12. Tair, M.M.A.; El-Halees, A.M. Mining educational data to improve students’ performance: A case study. Int. J. Inf. 2012, 2, 140–146. [Google Scholar]
  13. Senthil, S.; Lin, W.M. Applying classification techniques to predict students’ academic results. In Proceedings of the 2017 IEEE International Conference on Current Trends in Advanced Computing (ICCTAC), Bangalore, India, 2–3 March 2017; pp. 1–6. [Google Scholar]
  14. Bharara, S.; Sabitha, S.; Bansal, A. Application of learning analytics using clustering data Mining for Students’ disposition analysis. Educ. Inf. Technol. 2018, 23, 957–984. [Google Scholar] [CrossRef]
  15. Arcinas, M.M.; Sajja, G.S.; Asif, S.; Gour, S.; Okoronkwo, E.; Naved, M. Role of data mining in education for improving students performance for social change. Turk. J. Physiother. Rehabil. 2021, 32, 6519–6526. [Google Scholar]
  16. Bakhshinategh, B.; Zaiane, O.R.; ElAtia, S.; Ipperciel, D. Educational data mining applications and tasks: A survey of the last 10 years. Educ. Inf. Technol. 2018, 23, 537–553. [Google Scholar] [CrossRef]
  17. Bousbia, N.; Belamri, I. Which contribution does EDM provide to computer-based learning environments? In Educational Data Mining: Applications and Trends; Springer: Cham, Switzerland, 2014; pp. 3–28. [Google Scholar]
  18. Romero, C.; Ventura, S. Educational data science in massive open online courses. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2017, 7, e1187. [Google Scholar] [CrossRef]
  19. Subramanya, A.; Talukdar, P.P. Graph-Based Semi-Supervised Learning; Springer Nature: Cham, Switzerland, 2022. [Google Scholar]
  20. Kostopoulos, G.; Livieris, I.E.; Kotsiantis, S.; Tampakas, V. Enhancing high school students’ performance based on semi-supervised methods. In Proceedings of the 2017 8th International Conference on Information, Intelligence, Systems & Applications (IISA), Larnaca, Cyprus, 27–30 August 2017; pp. 1–6. [Google Scholar]
  21. Wang, Y.; Wang, J.; Che, H. Two-timescale neurodynamic approaches to supervised feature selection based on alternative problem formulations. Neural Netw. 2021, 142, 180–191. [Google Scholar] [CrossRef] [PubMed]
  22. Amrieh, E.A.; Hamtini, T.; Aljarah, I. Preprocessing and analyzing educational data set using X-API for improving student’s performance. In Proceedings of the 2015 IEEE Jordan Conference on Applied Electrical Engineering and Computing Technologies (AEECT), Amman, Jordan, 3–5 November 2015; pp. 1–5. [Google Scholar]
  23. Almutairi, S.; Shaiba, H.; Bezbradica, M. Predicting students’ academic performance and main behavioral features using data mining techniques. In Proceedings of the First International Conference on Computing, ICC 2019, Riyadh, Saudi Arabia, 10–12 December 2019; pp. 245–259. [Google Scholar]
  24. Alsulami, A.A.; AL-Ghamdi, A.S.A.M.; Ragab, M. Enhancement of E-Learning Student’s Performance Based on Ensemble Techniques. Electronics 2023, 12, 1508. [Google Scholar] [CrossRef]
  25. Tran, H.; Vu-Van, T.; Bang, T.; Le, T.V.; Pham, H.A.; Huynh-Tuong, N. Data Mining of Formative and Summative Assessments for Improving Teaching Materials towards Adaptive Learning: A Case Study of Programming Courses at the University Level. Electronics 2023, 12, 3135. [Google Scholar] [CrossRef]
  26. Kostopoulos, G.; Kotsiantis, S.; Pintelas, P. Predicting student performance in distance higher education using semi-supervised techniques. In Proceedings of the 5th International Conference, MEDI 2015, Rhodes, Greece, 26–28 September 2015; pp. 259–270. [Google Scholar]
  27. Widyaningsih, Y.; Fitriani, N.; Sarwinda, D. A Semi-Supervised Learning Approach for Predicting Student’s Performance: First-Year Students Case Study. In Proceedings of the 2019 12th International Conference on Information & Communication Technology and System (ICTS), Surabaya, Indonesia, 18 July 2019; pp. 291–295. [Google Scholar]
  28. Yao, H.; Nie, M.; Su, H.; Xia, H.; Lian, D. Predicting academic performance via semi-supervised learning with constructed campus social network. In Proceedings of the 22nd International Conference, DASFAA 2017, Suzhou, China, 27–30 March 2017; pp. 597–609. [Google Scholar]
  29. Li, F.; Zhang, Y.; Chen, M.; Gao, K. Which factors have the greatest impact on student’s performance. J. Phys. Conf. Ser. 2019, 1288, 012077. [Google Scholar] [CrossRef]
  30. Ahmed, M.R.; Tahid, S.T.I.; Mitu, N.A.; Kundu, P.; Yeasmin, S. A comprehensive analysis on undergraduate student academic performance using feature selection techniques on classification algorithms. In Proceedings of the 2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT), Kharagpur, India, 1–3 July 2020; pp. 1–6. [Google Scholar]
  31. Zeng, Z.; Wang, X.; Zhang, J.; Wu, Q. Semi-supervised feature selection based on local discriminative information. Neurocomputing 2016, 173, 102–109. [Google Scholar] [CrossRef]
  32. Yang, Y.; Shen, H.T.; Ma, Z.; Huang, Z.; Zhou, X. l2,1-norm regularized discriminative feature selection for unsupervised learning. In Proceedings of the 22nd IJCAI International Joint Conference on Artificial Intelligence, Barcelona, Spain, 16–22 July 2011. [Google Scholar]
  33. Dong, Y.; Che, H.; Leung, M.F.; Liu, C.; Yan, Z. Centric graph regularized log-norm sparse non-negative matrix factorization for multi-view clustering. Signal Process. 2023, 217, 109341. [Google Scholar] [CrossRef]
  34. Li, C.; Che, H.; Leung, M.F.; Liu, C.; Yan, Z. Robust multi-view non-negative matrix factorization with adaptive graph and diversity constraints. Inf. Sci. 2023, 634, 587–607. [Google Scholar] [CrossRef]
  35. Chen, X.; Yuan, G.; Nie, F.; Huang, J.Z. Semi-supervised Feature Selection via Rescaled Linear Regression. In Proceedings of the 26th International Joint Conference on Artificial Intelligence (IJCAI-17), Melbourne, VIC, Australia, 19–25 August 2017; Volume 2017, pp. 1525–1531. [Google Scholar]
  36. Chen, K.; Che, H.; Li, X.; Leung, M.F. Graph non-negative matrix factorization with alternative smoothed L0 regularizations. Neural Comput. Appl. 2023, 35, 9995–10009. [Google Scholar] [CrossRef]
  37. Li, Z.; Yang, Y.; Liu, J.; Zhou, X.; Lu, H. Unsupervised feature selection using nonnegative spectral analysis. Proc. AAAI Conf. Artif. Intell. 2012, 26, 1026–1032. [Google Scholar] [CrossRef]
  38. Amra, I.A.A.; Maghari, A.Y. Students performance prediction using KNN and Naïve Bayesian. In Proceedings of the 2017 8th International Conference on Information Technology (ICIT), Amman, Jordan, 17–18 May 2017; pp. 909–913. [Google Scholar]
  39. Han, J.; Kamber, M. Data Mining: Concepts and Techniques; Morgan Kaufmann: Burlington, MA, USA, 2006. [Google Scholar]
  40. Ahmed, N.S.; Sadiq, M.H. Clarify of the random forest algorithm in an educational field. In Proceedings of the 2018 International Conference on Advanced Science and Engineering (ICOASE), Duhok, Iraq, 9–11 October 2018; pp. 179–184. [Google Scholar]
Figure 1. Some statistical information of features from the xAPI dataset.
Figure 2. Student performance characteristics ranking. (a) UDFS. (b) NDFS. (c) SFSRLR. (d) The proposed SFSGLR.
Figure 3. Student performance characteristics ranking for different topics. (a) IT. (b) Math. (c) Arabic. (d) Science. (e) English. (f) Quran.
Figure 4. Student performance characteristics ranking for different topics. (a) Spanish. (b) French. (c) History. (d) Biology. (e) Chemistry. (f) Geology.
Figure 5. The classification performance with respect to the different number of features of the different classifiers on xAPI. (a) ACC. (b) Fscore. (c) Precision. (d) Recall.
Figure 6. The classification performance with respect to different percentages of labeled data with the top 6 most important features on xAPI. (a) ACC. (b) Fscore. (c) Precision. (d) Recall.
Figure 7. Classification performance with respect to different λ and numbers of selected features. (a) ACC. (b) Fscore.
Figure 8. Classification performance with respect to different α and numbers of selected features. (a) ACC. (b) Fscore.
Figure 9. Convergence study of the proposed algorithm on the xAPI dataset.
Table 1. Notations and Definitions.
Notation | Meaning
$x$, $\mathbf{x}$, $\mathbf{X}$ | Scalar, vector, matrix
$\mathbf{x}^i$ | The i-th row of $\mathbf{X}$
$\mathbf{x}_j$ | The j-th column of $\mathbf{X}$
$x_{ij}$ | The (i, j)-th entry of $\mathbf{X}$
$\|\mathbf{x}\|_2$ | The $l_2$ norm of vector $\mathbf{x}$, $\|\mathbf{x}\|_2 = \sqrt{\sum_i x_i^2}$
$\|\mathbf{X}\|_F$ | The Frobenius norm of matrix $\mathbf{X}$, $\|\mathbf{X}\|_F = \sqrt{\sum_i \sum_j x_{ij}^2}$
$\|\mathbf{X}\|_{2,1}$ | The $l_{2,1}$ norm of matrix $\mathbf{X}$, $\|\mathbf{X}\|_{2,1} = \sum_j \|\mathbf{x}_j\|_2$
$\mathrm{Tr}(\cdot)$ | The trace of a matrix
Table 2. The statistical information of the xAPI dataset.
Feature Number | Feature Name | Feature Characteristics (Number of Samples) | Feature Replacement
1 | Gender | Male (305), Female (175) | 1, 0
2 | Nationality | Kuwait, Lebanon, Egypt, Saudi Arabia, USA, Jordan, Venezuela, Iran, Tunis, Morocco, Syria, Palestine, Iraq, Libya | 1–14
3 | Place of birth | Kuwait, Lebanon, Egypt, Saudi Arabia, USA, Jordan, Venezuela, Iran, Tunis, Morocco, Syria, Palestine, Iraq, Libya | 1–14
4 | Stage ID | Low level (199), Middle school (248), High school (33) | 1, 2, 3
5 | Grade ID | G-02, G-04, G-05, G-06, G-07, G-08, G-09, G-10, G-11, G-12 | 1, 2, 3, 4, 5, 6, 7, 8, 9, 10
6 | Section ID | Section A (283), Section B (167), Section C (30) | 1, 2, 3
7 | Topic | IT, Math, Arabic, Science, English, Quran, Spanish, French, History, Biology, Chemistry, Geology | 1–12
8 | Semester | First (245), Second (235) | 1, 2
9 | Relation | Father (283), Mother (197) | 1, 0
10 | Times of raising hands | 0–100 | -
11 | Times of visiting resources | 0–100 | -
12 | Times of announcements | 0–100 | -
13 | Times of discussions | 0–100 | -
14 | Parents answering survey | Yes (270), No (210) | 1, 0
15 | Parents school satisfaction | Good (292), Bad (188) | 1, 0
16 | Student absence days | Under-7 (289), Above-7 (191) | 1, 0
Table 3. Classification results (mean ± standard deviation) of the different feature selection methods with different numbers of selected features on xAPI with 50% labeled data.

| Number of Features | Methods | ACC | Fscore | Precision | Recall |
|---|---|---|---|---|---|
| 1 feature | UDFS+RF | 0.4688 ± 0.0474 | 0.4565 ± 0.0306 | 0.3654 ± 0.0255 | 0.6431 ± 0.1911 |
| | NDFS+RF | 0.5333 ± 0.0775 | 0.4210 ± 0.0567 | 0.4156 ± 0.0401 | 0.4297 ± 0.0802 |
| | SFSRLR+RF | 0.5229 ± 0.0207 | 0.3992 ± 0.0245 | 0.3893 ± 0.0261 | 0.4120 ± 0.0422 |
| | SFSGLR+RF | 0.5542 ± 0.0675 | 0.4277 ± 0.0501 | 0.4317 ± 0.0540 | 0.4252 ± 0.0526 |
| 2 features | UDFS+RF | 0.4563 ± 0.0332 | 0.4378 ± 0.0350 | 0.3524 ± 0.0225 | 0.6085 ± 0.1525 |
| | NDFS+RF | 0.5354 ± 0.0565 | 0.4283 ± 0.0543 | 0.4245 ± 0.0518 | 0.4336 ± 0.0614 |
| | SFSRLR+RF | 0.5208 ± 0.0405 | 0.3903 ± 0.0276 | 0.3890 ± 0.0223 | 0.3921 ± 0.0355 |
| | SFSGLR+RF | 0.5667 ± 0.0740 | 0.4369 ± 0.0662 | 0.4450 ± 0.0828 | 0.4304 ± 0.0538 |
| 3 features | UDFS+RF | 0.5458 ± 0.0365 | 0.4196 ± 0.0349 | 0.4079 ± 0.0404 | 0.4367 ± 0.0556 |
| | NDFS+RF | 0.5917 ± 0.0675 | 0.4555 ± 0.0741 | 0.4539 ± 0.0707 | 0.4596 ± 0.0846 |
| | SFSRLR+RF | 0.6021 ± 0.0542 | 0.4573 ± 0.0434 | 0.4463 ± 0.0418 | 0.4711 ± 0.0568 |
| | SFSGLR+RF | 0.6512 ± 0.0472 | 0.4945 ± 0.0523 | 0.4819 ± 0.0507 | 0.5116 ± 0.0739 |
| 4 features | UDFS+RF | 0.5875 ± 0.0588 | 0.4442 ± 0.0496 | 0.4237 ± 0.0471 | 0.4717 ± 0.0745 |
| | NDFS+RF | 0.6188 ± 0.0715 | 0.4693 ± 0.0622 | 0.4662 ± 0.0392 | 0.4754 ± 0.0880 |
| | SFSRLR+RF | 0.6542 ± 0.0557 | 0.5007 ± 0.0523 | 0.4886 ± 0.0623 | 0.5162 ± 0.0516 |
| | SFSGLR+RF | 0.6562 ± 0.0756 | 0.5122 ± 0.0753 | 0.5112 ± 0.0991 | 0.5170 ± 0.0605 |
The best classification results are highlighted in bold.
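The comparison in Table 3 amounts to taking the top-k features ranked by each selection method, training a random forest on those columns, and averaging ACC, Fscore, Precision, and Recall over repeated splits. A sketch of that evaluation loop, assuming each method exposes its ranked feature indices (function, parameter names, and split settings are ours, not the authors' exact protocol), could look as follows.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score
from sklearn.model_selection import train_test_split

def evaluate_ranking(X, y, ranked_features, k, n_runs=10):
    """Train a random forest on the top-k ranked features and report mean/std of four metrics."""
    ranked_features = np.asarray(ranked_features)
    scores = {"ACC": [], "Fscore": [], "Precision": [], "Recall": []}
    for seed in range(n_runs):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X[:, ranked_features[:k]], y, test_size=0.3, random_state=seed, stratify=y)
        clf = RandomForestClassifier(n_estimators=100, random_state=seed).fit(X_tr, y_tr)
        y_pred = clf.predict(X_te)
        scores["ACC"].append(accuracy_score(y_te, y_pred))
        scores["Fscore"].append(f1_score(y_te, y_pred, average="macro"))
        scores["Precision"].append(precision_score(y_te, y_pred, average="macro", zero_division=0))
        scores["Recall"].append(recall_score(y_te, y_pred, average="macro"))
    return {metric: (np.mean(v), np.std(v)) for metric, v in scores.items()}
```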
Table 4. Classification results (mean ± standard deviation) with respect to different numbers of selected features for different classifiers on xAPI with 50% labeled data.

| Number of Features | Methods | ACC | Fscore | Precision | Recall |
|---|---|---|---|---|---|
| 2 features | SFSGLR+KNN | 0.5792 ± 0.0390 | 0.4559 ± 0.0423 | 0.4595 ± 0.0698 | 0.4569 ± 0.0300 |
| | SFSGLR+DTree | 0.5771 ± 0.1016 | 0.4559 ± 0.0709 | 0.4619 ± 0.0882 | 0.4515 ± 0.0559 |
| | SFSGLR+RF | 0.5667 ± 0.0740 | 0.4369 ± 0.0662 | 0.4450 ± 0.0828 | 0.4304 ± 0.0538 |
| | SFSGLR+SVM | 0.6229 ± 0.0486 | 0.4931 ± 0.0473 | 0.5086 ± 0.0589 | 0.4801 ± 0.0459 |
| 4 features | SFSGLR+KNN | 0.5583 ± 0.0872 | 0.4565 ± 0.0739 | 0.4579 ± 0.1061 | 0.4592 ± 0.0458 |
| | SFSGLR+DTree | 0.6125 ± 0.0885 | 0.4828 ± 0.0780 | 0.4854 ± 0.0845 | 0.4830 ± 0.0795 |
| | SFSGLR+RF | 0.6562 ± 0.0756 | 0.5122 ± 0.0753 | 0.5112 ± 0.0991 | 0.5170 ± 0.0605 |
| | SFSGLR+SVM | 0.6250 ± 0.0405 | 0.4943 ± 0.0348 | 0.5021 ± 0.0593 | 0.4893 ± 0.0247 |
| 6 features | SFSGLR+KNN | 0.5792 ± 0.0855 | 0.4658 ± 0.0886 | 0.4690 ± 0.1146 | 0.4652 ± 0.0661 |
| | SFSGLR+DTree | 0.6271 ± 0.0886 | 0.4903 ± 0.0927 | 0.4877 ± 0.0859 | 0.4966 ± 0.1085 |
| | SFSGLR+RF | 0.6667 ± 0.0748 | 0.5218 ± 0.0793 | 0.5140 ± 0.0910 | 0.5341 ± 0.0781 |
| | SFSGLR+SVM | 0.6188 ± 0.0406 | 0.4887 ± 0.0379 | 0.4898 ± 0.0653 | 0.4917 ± 0.0332 |
| 8 features | SFSGLR+KNN | 0.6083 ± 0.0545 | 0.4729 ± 0.0573 | 0.4736 ± 0.0756 | 0.4756 ± 0.0509 |
| | SFSGLR+DTree | 0.6604 ± 0.0755 | 0.5089 ± 0.0754 | 0.5088 ± 0.0757 | 0.5112 ± 0.0835 |
| | SFSGLR+RF | 0.7125 ± 0.0778 | 0.5615 ± 0.0914 | 0.5539 ± 0.1057 | 0.5715 ± 0.0822 |
| | SFSGLR+SVM | 0.6396 ± 0.0511 | 0.4924 ± 0.0332 | 0.4983 ± 0.0367 | 0.4893 ± 0.0483 |
| 10 features | SFSGLR+KNN | 0.0603 ± 0.0617 | 0.4701 ± 0.0594 | 0.4720 ± 0.0807 | 0.4716 ± 0.0479 |
| | SFSGLR+DTree | 0.6792 ± 0.0998 | 0.5322 ± 0.0990 | 0.5248 ± 0.1027 | 0.5419 ± 0.1010 |
| | SFSGLR+RF | 0.7188 ± 0.0988 | 0.5686 ± 0.1272 | 0.5672 ± 0.1284 | 0.5710 ± 0.1289 |
| | SFSGLR+SVM | 0.6417 ± 0.0727 | 0.4942 ± 0.0598 | 0.5056 ± 0.0771 | 0.4849 ± 0.0474 |
| 12 features | SFSGLR+KNN | 0.6146 ± 0.0558 | 0.4741 ± 0.0569 | 0.4764 ± 0.0765 | 0.4749 ± 0.0474 |
| | SFSGLR+DTree | 0.7313 ± 0.0949 | 0.6017 ± 0.0997 | 0.6094 ± 0.1114 | 0.5970 ± 0.0955 |
| | SFSGLR+RF | 0.7604 ± 0.0863 | 0.6274 ± 0.0856 | 0.6400 ± 0.0961 | 0.6171 ± 0.0833 |
| | SFSGLR+SVM | 0.7396 ± 0.0522 | 0.5967 ± 0.0743 | 0.6105 ± 0.1050 | 0.5868 ± 0.0580 |
| 14 features | SFSGLR+KNN | 0.6104 ± 0.0573 | 0.4738 ± 0.0568 | 0.4765 ± 0.0766 | 0.4744 ± 0.0477 |
| | SFSGLR+DTree | 0.7217 ± 0.0870 | 0.6021 ± 0.0908 | 0.6120 ± 0.1022 | 0.5953 ± 0.0926 |
| | SFSGLR+RF | 0.7708 ± 0.0651 | 0.6290 ± 0.0853 | 0.6286 ± 0.0911 | 0.6302 ± 0.0833 |
| | SFSGLR+SVM | 0.7104 ± 0.0542 | 0.5640 ± 0.0609 | 0.5763 ± 0.0920 | 0.5552 ± 0.0398 |
| 16 features | SFSGLR+KNN | 0.6146 ± 0.0558 | 0.4741 ± 0.0569 | 0.4764 ± 0.0765 | 0.4749 ± 0.0474 |
| | SFSGLR+DTree | 0.7292 ± 0.0911 | 0.6019 ± 0.1082 | 0.6086 ± 0.1151 | 0.5986 ± 0.1144 |
| | SFSGLR+RF | 0.7688 ± 0.0929 | 0.6381 ± 0.1134 | 0.6417 ± 0.1258 | 0.6372 ± 0.1116 |
| | SFSGLR+SVM | 0.7229 ± 0.0695 | 0.5786 ± 0.0769 | 0.5941 ± 0.0997 | 0.5665 ± 0.0666 |
The best classification results are highlighted in bold.
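Table 4 fixes the SFSGLR feature ranking and varies only the downstream classifier. With scikit-learn, the four classifiers can be compared on the selected columns as sketched below; the hyperparameters shown are illustrative defaults, since the exact settings are not restated in this part of the paper.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_validate
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

classifiers = {
    "KNN": KNeighborsClassifier(n_neighbors=5),
    "DTree": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf")),
}

def compare_classifiers(X_topk, y, cv=5):
    """Cross-validate each classifier on the matrix of top-k selected features."""
    metrics = ("accuracy", "f1_macro", "precision_macro", "recall_macro")
    report = {}
    for name, clf in classifiers.items():
        cv_res = cross_validate(clf, X_topk, y, cv=cv, scoring=metrics)
        report[name] = {m: cv_res[f"test_{m}"].mean() for m in metrics}
    return report
```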
Table 5. Classification results (mean ± standard deviation) with respect to different percentages of labeled data with the top 6 most important features on xAPI.

| Percentage of Labeled Data | Methods | ACC | Fscore | Precision | Recall |
|---|---|---|---|---|---|
| 10% | SFSGLR+KNN | 0.6208 ± 0.0665 | 0.4804 ± 0.0544 | 0.4762 ± 0.0581 | 0.4870 ± 0.0604 |
| | SFSGLR+DTree | 0.6458 ± 0.0380 | 0.4899 ± 0.0150 | 0.4906 ± 0.0190 | 0.4899 ± 0.0224 |
| | SFSGLR+RF | 0.6708 ± 0.0426 | 0.5128 ± 0.0507 | 0.5034 ± 0.0390 | 0.5261 ± 0.0801 |
| | SFSGLR+SVM | 0.6208 ± 0.0693 | 0.4807 ± 0.0661 | 0.4783 ± 0.0672 | 0.4841 ± 0.0711 |
| 20% | SFSGLR+KNN | 0.5979 ± 0.0590 | 0.4501 ± 0.0384 | 0.4502 ± 0.0428 | 0.4507 ± 0.0380 |
| | SFSGLR+DTree | 0.6146 ± 0.0513 | 0.4653 ± 0.0340 | 0.4575 ± 0.0350 | 0.4750 ± 0.0446 |
| | SFSGLR+RF | 0.6771 ± 0.0484 | 0.5147 ± 0.0447 | 0.5068 ± 0.0435 | 0.5238 ± 0.0497 |
| | SFSGLR+SVM | 0.6583 ± 0.0574 | 0.5041 ± 0.0467 | 0.5094 ± 0.0522 | 0.5000 ± 0.0481 |
| 30% | SFSGLR+KNN | 0.5917 ± 0.0583 | 0.4462 ± 0.0431 | 0.4501 ± 0.0395 | 0.4443 ± 0.0554 |
| | SFSGLR+DTree | 0.6063 ± 0.0757 | 0.4615 ± 0.0564 | 0.4533 ± 0.0665 | 0.4739 ± 0.0619 |
| | SFSGLR+RF | 0.6854 ± 0.0617 | 0.5250 ± 0.0616 | 0.5210 ± 0.0704 | 0.5318 ± 0.0645 |
| | SFSGLR+SVM | 0.6250 ± 0.0636 | 0.4805 ± 0.0444 | 0.4866 ± 0.0506 | 0.4754 ± 0.0422 |
| 40% | SFSGLR+KNN | 0.6208 ± 0.0657 | 0.4780 ± 0.0455 | 0.4752 ± 0.0452 | 0.4828 ± 0.0570 |
| | SFSGLR+DTree | 0.6271 ± 0.842 | 0.4795 ± 0.0643 | 0.4702 ± 0.0642 | 0.4911 ± 0.0723 |
| | SFSGLR+RF | 0.6771 ± 0.0661 | 0.5285 ± 0.0626 | 0.5090 ± 0.0549 | 0.5528 ± 0.0837 |
| | SFSGLR+SVM | 0.6375 ± 0.0811 | 0.4878 ± 0.0578 | 0.4909 ± 0.0579 | 0.4861 ± 0.0631 |
| 50% | SFSGLR+KNN | 0.6104 ± 0.0440 | 0.4627 ± 0.0313 | 0.4572 ± 0.0358 | 0.4702 ± 0.0384 |
| | SFSGLR+DTree | 0.6146 ± 0.0549 | 0.4586 ± 0.0500 | 0.4485 ± 0.0460 | 0.4702 ± 0.0586 |
| | SFSGLR+RF | 0.7104 ± 0.0585 | 0.5524 ± 0.0621 | 0.5421 ± 0.0505 | 0.5655 ± 0.0837 |
| | SFSGLR+SVM | 0.6146 ± 0.0710 | 0.4667 ± 0.0551 | 0.4690 ± 0.0535 | 0.4649 ± 0.0585 |
| 60% | SFSGLR+KNN | 0.6229 ± 0.0515 | 0.4716 ± 0.0393 | 0.4683 ± 0.0362 | 0.4764 ± 0.0498 |
| | SFSGLR+DTree | 0.6313 ± 0.0715 | 0.4812 ± 0.0632 | 0.4689 ± 0.0560 | 0.4950 ± 0.0742 |
| | SFSGLR+RF | 0.7042 ± 0.0378 | 0.5406 ± 0.0361 | 0.5356 ± 0.0406 | 0.5476 ± 0.0465 |
| | SFSGLR+SVM | 0.6396 ± 0.0565 | 0.4938 ± 0.0414 | 0.4957 ± 0.0458 | 0.4934 ± 0.0480 |
| 70% | SFSGLR+KNN | 0.5854 ± 0.0677 | 0.4656 ± 0.0439 | 0.4650 ± 0.0633 | 0.4689 ± 0.0317 |
| | SFSGLR+DTree | 0.6604 ± 0.0895 | 0.5193 ± 0.0966 | 0.5238 ± 0.0913 | 0.5153 ± 0.1028 |
| | SFSGLR+RF | 0.7417 ± 0.0574 | 0.5842 ± 0.0587 | 0.5884 ± 0.0675 | 0.5814 ± 0.0565 |
| | SFSGLR+SVM | 0.6958 ± 0.0615 | 0.5428 ± 0.0554 | 0.5448 ± 0.0507 | 0.5415 ± 0.0634 |
| 80% | SFSGLR+KNN | 0.5688 ± 0.0695 | 0.4624 ± 0.0415 | 0.4580 ± 0.0396 | 0.4697 ± 0.0565 |
| | SFSGLR+DTree | 0.6896 ± 0.0585 | 0.5433 ± 0.0576 | 0.5391 ± 0.0581 | 0.5494 ± 0.0656 |
| | SFSGLR+RF | 0.7646 ± 0.0581 | 0.6216 ± 0.0721 | 0.6165 ± 0.0764 | 0.6277 ± 0.0712 |
| | SFSGLR+SVM | 0.7042 ± 0.0635 | 0.5549 ± 0.0677 | 0.5572 ± 0.0754 | 0.5544 ± 0.0672 |
| 90% | SFSGLR+KNN | 0.5792 ± 0.0650 | 0.4448 ± 0.0640 | 0.4326 ± 0.0624 | 0.4602 ± 0.0743 |
| | SFSGLR+DTree | 0.6875 ± 0.0605 | 0.5337 ± 0.0529 | 0.5305 ± 0.0530 | 0.5386 ± 0.0622 |
| | SFSGLR+RF | 0.7396 ± 0.0575 | 0.5784 ± 0.0633 | 0.5778 ± 0.0716 | 0.5803 ± 0.0593 |
| | SFSGLR+SVM | 0.6938 ± 0.0614 | 0.5417 ± 0.0566 | 0.5431 ± 0.0568 | 0.5405 ± 0.0577 |
The best classification results are highlighted in bold.
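Table 5 varies the fraction of labeled samples available to the semi-supervised selector. One way to generate such splits is to mask a portion of the labels with -1, the conventional marker for unlabeled samples; the sketch below shows only this masking step, with the selector call itself left as a hypothetical placeholder.

```python
import numpy as np

def mask_labels(y, labeled_ratio, seed=0):
    """Keep the class label for a random `labeled_ratio` fraction of samples;
    set the remaining entries to -1 to mark them as unlabeled."""
    rng = np.random.default_rng(seed)
    y_semi = np.full_like(y, -1)
    labeled_idx = rng.permutation(len(y))[: int(labeled_ratio * len(y))]
    y_semi[labeled_idx] = y[labeled_idx]
    return y_semi

# Example: keep 10%, 20%, ..., 90% of the labels, as in Table 5.
# ranked = sfsglr_rank_features(X, mask_labels(y, 0.1))  # hypothetical selector call
```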