Article

Construction and Application of Functional Brain Network Based on Entropy

Department of Computer, Nanchang University, Nanchang 330029, China
*
Author to whom correspondence should be addressed.
Entropy 2020, 22(11), 1234; https://doi.org/10.3390/e22111234
Submission received: 21 July 2020 / Revised: 15 October 2020 / Accepted: 27 October 2020 / Published: 30 October 2020
(This article belongs to the Special Issue Multiscale Entropy Approaches and Their Applications II)

Abstract
A functional brain network (FBN) is an intuitive expression of the dynamic neural activity interactions between different neurons, neuron clusters, or cerebral cortex regions, and can characterize brain network topology and dynamic properties. How to build an FBN that accurately and effectively characterizes the features of the brain network is a challenging subject. Entropy can effectively describe the complexity, non-linearity, and uncertainty of electroencephalogram (EEG) signals. As a relatively new research direction, the construction of FBNs from EEG data of fatigue driving has broad prospects, so it is of great significance to study entropy-based FBN construction. We focus on selecting appropriate entropy features to characterize EEG signals and construct an FBN. On a real data set of fatigue driving, FBN models based on different entropies are constructed to identify the state of fatigue driving. Analysis of network measurement indicators shows that the FBN model based on fuzzy entropy can achieve an excellent classification recognition rate and good classification stability. In addition, compared with another model based on the same data set, our model obtains higher accuracy and more stable classification results even when the length of the intercepted EEG signal differs.

1. Introduction

A functional brain network (FBN) is the characterization of the brain network topology and dynamic properties [1]. Different brain network typologies and characterizations can be obtained according to different construction methods based on the same electroencephalogram (EEG) data set, which have a profound impact on the research of detection and recognition of different brain states. Therefore, the research of the FBN construction method is very important. Meier et al. [2] constructed the union of shortest path trees (USPT) as a new topology for the FBN, which can be uniquely defined. The new concept of the USPT of the FBN also allows interesting interpretation and may represent the “highway” of the brain after interpreting the link weights of the FBN. Kabbara et al. [3] used EEG source connectivity computed in different frequency bands to reconstruct functional brain networks based on the EEG data of 20 participants (10 Alzheimer’s disease patients and 10 healthy individuals) in a resting state network, and used graph theoretical analyses to evaluate the differences between both groups. The results revealed that Alzheimer’s disease networks, compared to networks of age-matched healthy controls, are characterized by lower global information processing (integration) and higher local information processing (segregation). Zou et al. [4] proposed a functional brain network construction method based on the combination of shortest path tree (CSPT) (abbreviated as CSP-FBN), and compared with the classification accuracy of the same frequency band (beta-band). 
They found that the fatigue state recognition functional brain network constructed by combining the shortest path tree was superior to functional brain networks constructed by other methods, with an accuracy of 99.17%, and that Fz, F4, Fc3, Fcz, Fc4, C3, Cz, C4, Cp3, Cpz, Cp4, P3, Pz, and P4 are important electrodes for identifying the state of fatigue driving, indicating that the central and parietal areas of the brain are closely related to fatigue driving. Conrin et al. [5] used the PACE algorithm, which permits a dual formulation. The positive and negative edges of the FBN were all given the equivalent modular structure of the connection group. The clinical relevance of the relationship between gender differences and age was examined more thoroughly; they identified the supporting infrastructure framework and conceptualized the functional connector. Zhao et al. [6] applied an FBN to construct a fatigue recognition model based on EEG data and graph theory. It was observed that the coherence of the frontal, central, and temporal lobes of the brain was significantly improved when using this method; in addition, the clustering coefficients and characteristic path lengths in the β and α bands increased significantly. These valuable research experiences confirm that different FBN construction methods lead to different brain network topologies, which has a certain impact on the classification and recognition of different brain activity states.
Entropy can effectively describe the complexity, non-linearity, and uncertainty of brain electrical signals [7]. A series of studies using entropy features to characterize EEG signals and detect different brain states has been carried out worldwide. Min et al. [8] used multiple-entropy fusion analysis to capture four important channel regions based on electrode weights. Four classifiers were combined to construct an evaluation model for detecting driver fatigue state, finally obtaining a classification accuracy of 98.3%. Ye [9] proposed a driving fatigue state recognition method based on sample entropy, t-test, and kernel principal component analysis; fatigue driving data were classified with a recognition accuracy of 99.27%. Vladimir et al. [10] explored the multi-scale brain signal complexity of the EEG spectrum and the changes of the power-law scaling index, finding that nonlinear dynamical properties of brain signals accounted for a smaller portion of entropy changes, especially in stage 2 sleep. Zou et al. [11] proposed a method based on the empirical mode decomposition (EMD) of multi-scale entropy of recorded forehead EEG signals to recognize fatigue driving. Results indicated that the classification recognition rate of the EMD multi-scale fuzzy entropy feature is up to 88.74%, which is 23.88% higher than single-scale fuzzy entropy and 5.56% higher than multi-scale fuzzy entropy. These valuable studies show that entropy can well characterize the complexity, non-linearity, and uncertainty of EEG signals, so an FBN constructed using entropy features can identify brain states more accurately. Moreover, because the threshold of the FBN is selected through a sliding window with a certain time-domain resolution, the accuracy of an FBN constructed using entropy is more stable when EEG signals of different time intervals are intercepted for classification.
Therefore, an EEG state recognition model of an FBN based on entropy features is proposed in this paper. By testing the entropy-based FBN model on a real data set of fatigue driving, an FBN fatigue driving state recognition model based on fuzzy entropy (FE) [12] (abbreviated as FE_FBN) is constructed. In this paper, fuzzy entropy, sample entropy (SE) [13,14], approximate entropy (AE) [15], and spectral entropy (SPE) [16] are calculated on the original signal; mutual information (MI) [17,18], the Pearson correlation coefficient (PEA) [18,19], and the correntropy coefficient (CORE) [18,20] are used to measure the correlation among electrodes; and six kinds of classifiers are then compared in the experimental environment to find the most suitable one. Finally, entropy, correlation coefficient, and the appropriate classifier are combined to construct the fatigue driving state recognition model. FE_FBN has the best performance in fatigue driving state recognition. On the same data set, even if the lengths of the intercepted EEG signals are different, higher accuracy and more stable classification results can be obtained than with the model proposed by Ye [9].

2. Materials and Methods

2.1. Entropy-Based FBN Model Architecture

The FBN models based on FE, SE, AE, and SPE (abbreviated as FE/SE/AE/SPE_FBN) are collectively referred to as the entropy-based FBN model (abbreviated as EN_FBN). The model structure and components of EN_FBN are illustrated in this section. A brief description of the model structure is shown in Figure 1.

2.2. Implementation Method of FBN Model Based on Entropy

2.2.1. Entropy Feature Calculation

FE, SE, AE, and SPE are calculated on the original data. AE was proposed by Pincus et al. [15]. It is a nonlinear dynamic parameter used to quantify the regularity and unpredictability of time-series fluctuations; it uses a non-negative number to represent the complexity of a time series and reflects the possibility of new information appearing in the series. The more complex the time series, the greater the approximate entropy. Its calculation method is given in the literature [15]. SE is an improvement of approximate entropy whose calculation does not depend on the data length; it was proposed by Richman et al. [13]. It is also a measure of time-series complexity for nonlinear signal analysis. Its calculation method is given in the literature [14]. SPE describes the relationship between the power spectrum and the entropy rate. In this paper, the normalized Shannon entropy is used to evaluate SPE [16]; its calculation method is given in the literature [16]. FE was first proposed by Chen et al. [12]. Because FE_FBN achieves the best classification performance, its calculation method is shown below:
Given an N-dimensional time series $[u(1), u(2), \ldots, u(N)]$, define the phase space dimension $m$ ($m \le N - 2$) and similarity tolerance $r$ ($r = 0.2 \times std$), and reconstruct the phase space:

$$X(i) = [u(i), u(i+1), \ldots, u(i+m-1)] - u_0(i), \quad i = 1, 2, \ldots, N-m+1$$

where $u_0(i) = \frac{1}{m} \sum_{j=0}^{m-1} u(i+j)$.

The fuzzy membership function is introduced as:

$$A(x) = \begin{cases} 1, & x = 0 \\ \exp\left(-\dfrac{x^2}{r}\right), & x > 0 \end{cases}$$

where $r$ is the similarity tolerance. For $i = 1, 2, \ldots, N-m+1$, calculate:

$$A_{ij}^{m} = \exp\left(-\frac{(d_{ij}^{m})^2}{r}\right), \quad j = 1, 2, \ldots, N-m+1, \; j \ne i$$

where $d_{ij}^{m}$ is the maximum absolute distance between the window vectors $X(i)$ and $X(j)$, calculated as:

$$d_{ij}^{m} = d[X(i), X(j)] = \max_{p = 1, 2, \ldots, m} \left| \big(u(i+p-1) - u_0(i)\big) - \big(u(j+p-1) - u_0(j)\big) \right|$$

After averaging over $j$ for each $i$, the following formula is obtained:

$$C_i^{m} = \frac{1}{N-m} \sum_{j=1, j \ne i}^{N-m+1} A_{ij}^{m}$$

Define:

$$\Phi^{m}(r) = \frac{1}{N-m+1} \sum_{i=1}^{N-m+1} C_i^{m}(r)$$

The fuzzy entropy of the original time series is:

$$FuzzyEn(m, r) = \lim_{N \to \infty} \left[ \ln \Phi^{m}(r) - \ln \Phi^{m+1}(r) \right]$$

For a finite data set, the fuzzy entropy formula is:

$$FuzzyEn(m, r, N) = \ln \Phi^{m}(r) - \ln \Phi^{m+1}(r)$$
In this paper, $r = 0.2 \times std$; with $std = 1.25$, $r = 0.25$, as described specifically in the paper by Ye [9].
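As a concrete illustration, the fuzzy entropy calculation above can be sketched in Python. This is a minimal implementation of the formulas for a 1-D signal; the vectorized pairwise distance computation is an implementation choice of ours, not code from the paper:

```python
import numpy as np

def fuzzy_entropy(u, m=2, r=None):
    """Fuzzy entropy of a 1-D time series, following the formulas above.

    m is the phase space dimension and r the similarity tolerance
    (0.2 * std by default, as in the text).
    """
    u = np.asarray(u, dtype=float)
    N = len(u)
    if r is None:
        r = 0.2 * np.std(u)

    def phi(m):
        # Baseline-removed window vectors X(i) = [u(i)..u(i+m-1)] - u0(i)
        X = np.array([u[i:i + m] - u[i:i + m].mean() for i in range(N - m + 1)])
        # d_ij: maximum absolute distance between window vectors
        d = np.max(np.abs(X[:, None, :] - X[None, :, :]), axis=2)
        # Fuzzy similarity A_ij = exp(-d_ij^2 / r), excluding j == i
        A = np.exp(-d ** 2 / r)
        np.fill_diagonal(A, 0.0)
        C = A.sum(axis=1) / (A.shape[1] - 1)   # C_i^m, averaged over j != i
        return C.mean()                        # Phi^m(r)

    # FuzzyEn = ln Phi^m - ln Phi^{m+1}
    return np.log(phi(m)) - np.log(phi(m + 1))
```

A regular signal (e.g., a sine wave) should yield a lower value than white noise, reflecting its lower complexity.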

2.2.2. The Required Method of FBN Construction

(1)
Synchronization correlation coefficient
MI, which was proposed by Kraskov et al. [17], is a quantity used to measure the correlation between two pieces of information and can be used to analyze the information flow between two systems or between the constituent subsystems of a complex system. Its calculation formula is as shown below.
Let $X$ and $Y$ be discrete random variables with probability distributions $p(x) = P(X = x)$ and $p(y) = P(Y = y)$, and joint distribution $p(x, y)$. The mutual information between $X$ and $Y$ is:

$$I(X, Y) = \sum_{x} \sum_{y} p(x, y) \log \frac{p(x, y)}{p(x) p(y)}$$
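A simple way to estimate this quantity for continuous entropy features is to bin the values and apply the discrete formula directly. The sketch below does this with NumPy; note that Kraskov et al. [17] actually propose a k-nearest-neighbour estimator, so this histogram version is only an illustrative simplification, and the bin count is an arbitrary choice:

```python
import numpy as np

def mutual_information(x, y, bins=8):
    """Histogram estimate of I(X, Y) for two feature sequences."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy /= pxy.sum()                      # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y)
    nz = pxy > 0                          # terms with p(x, y) = 0 vanish
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())
```

Dependent sequences give a larger estimate than independent ones, which is the property exploited when MI is used as a synchronization measure between electrodes.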
PEA was proposed by Pearson et al. [19]. Since the EEG channels are recorded simultaneously and the entropy features are time-aligned, PEA can be used to calculate the strength of the connection between different electrodes. Its calculation formula is shown below:

$$R_{xy} = \frac{N \sum_{i=1}^{N} x_i y_i - \left(\sum_{i=1}^{N} x_i\right) \left(\sum_{i=1}^{N} y_i\right)}{\sqrt{N \sum_{i=1}^{N} x_i^2 - \left(\sum_{i=1}^{N} x_i\right)^2} \sqrt{N \sum_{i=1}^{N} y_i^2 - \left(\sum_{i=1}^{N} y_i\right)^2}}$$
CORE is an extension of the correlation coefficient that performs well on signals with higher-order statistics or nonlinear relationships. In this paper, CORE is calculated for the entropy features between different electrodes. The calculation is as follows.

Define the correntropy of random variables $X$ and $Y$ as $V_\delta(X, Y) = E[k_\delta(X, Y)]$, where $E$ represents the expectation operator, $k_\delta(\cdot)$ represents the kernel function, and $\delta > 0$ is the kernel width. The Gaussian kernel is usually selected as the kernel function:

$$k_\delta(x - y) = G_\delta(x - y) = \frac{1}{\sqrt{2\pi}\,\delta} \exp\left(-\frac{(x - y)^2}{2\delta^2}\right)$$

The selection criteria for the kernel function are strict, and $\delta$ is chosen by Silverman's rule of thumb [21]: $\delta = 0.9 A N^{-1/5}$, where $A$ is the smaller of the two data standard deviations and $N$ is the number of data samples.

Assuming that the joint distribution function of random variables $X$ and $Y$ is $F_{xy}(x, y)$, the correntropy is expressed as $V_\delta(X, Y) = \iint G_\delta(x - y)\, dF_{xy}(x, y)$. Since the amount of data is limited and the joint distribution $F_{xy}$ is unknown, the correntropy can be estimated by averaging over the finite samples:

$$\hat{V}_\delta = \frac{1}{N} \sum_{i=1}^{N} G_\delta(e_i)$$

where $e_i = x_i - y_i$, and $\{(x_i, y_i)\}_{i=1}^{N}$ are $N$ sampling points of the joint distribution function $F_{xy}$.
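The sample estimator $\hat{V}_\delta$ can be sketched as follows, a minimal version assuming equal-length sequences; the bandwidth follows Silverman's rule of thumb as given in the text:

```python
import numpy as np

def correntropy(x, y):
    """Sample correntropy V_delta of two sequences with a Gaussian kernel."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    N = len(x)
    # Silverman's rule of thumb: delta = 0.9 * A * N^(-1/5),
    # A = smaller of the two standard deviations
    delta = 0.9 * min(x.std(), y.std()) * N ** (-1 / 5)
    e = x - y
    g = np.exp(-e ** 2 / (2 * delta ** 2)) / (np.sqrt(2 * np.pi) * delta)
    return float(g.mean())
```

Closely matching sequences concentrate the errors $e_i$ near zero and yield a large value, while unrelated sequences yield a small one.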
(2)
Threshold selection
In paper [22], Guo proved that an FBN achieves the best classification recognition when the network sparsity is between 8% and 32%, taking 1% as the step size of threshold selection [22]. The threshold selection of our model is based on this theory; in other words, the best threshold with network sparsity between 8% and 32% is selected according to the classification recognition rate.
(3)
Network measurement
Three classic, representative network measurements are used in the experiment, namely average path length (APL) [22], clustering coefficient (CC) [22], and local efficiency (LE) [22].
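These three measurements can be computed directly from a binary adjacency matrix. The sketch below is our own illustrative pure-Python implementation (not code from the paper), using breadth-first search for shortest-path lengths:

```python
from collections import deque

def bfs_distances(adj, src):
    """Hop distances from src over an unweighted adjacency matrix."""
    n = len(adj)
    dist = [float("inf")] * n
    dist[src] = 0
    queue = deque([src])
    while queue:
        u = queue.popleft()
        for v in range(n):
            if adj[u][v] and dist[v] == float("inf"):
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

def average_path_length(adj):
    """APL: mean shortest-path length over all connected node pairs."""
    n = len(adj)
    dists = [d for i in range(n) for j, d in enumerate(bfs_distances(adj, i))
             if i != j and d != float("inf")]
    return sum(dists) / len(dists)

def clustering_coefficient(adj):
    """CC: average fraction of closed triangles around each node."""
    n = len(adj)
    coeffs = []
    for i in range(n):
        nb = [j for j in range(n) if adj[i][j]]
        k = len(nb)
        if k < 2:
            coeffs.append(0.0)
            continue
        links = sum(adj[u][v] for u in nb for v in nb if u < v)
        coeffs.append(2.0 * links / (k * (k - 1)))
    return sum(coeffs) / n

def local_efficiency(adj):
    """LE: mean inverse shortest-path length within each node's neighbourhood."""
    n = len(adj)
    effs = []
    for i in range(n):
        nb = [j for j in range(n) if adj[i][j]]
        m = len(nb)
        if m < 2:
            effs.append(0.0)
            continue
        sub = [[adj[u][v] for v in nb] for u in nb]  # neighbourhood subgraph
        total = sum(1.0 / d for a in range(m)
                    for b, d in enumerate(bfs_distances(sub, a))
                    if b != a and d != float("inf"))
        effs.append(total / (m * (m - 1)))
    return sum(effs) / n
```

On a fully connected graph all three measures equal 1, which is a convenient sanity check.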

2.2.3. Verification Standard of “Small World” Property of Network

Watts et al. referred to a network with a high clustering coefficient and a short characteristic path length as a "small world" network [23], and Humphries et al. considered networks satisfying the indicator $\sigma = \gamma / \lambda > 1$ to be "small world" networks, where $\gamma = C_p / C_p^{rand}$ and $\lambda = L_p / L_p^{rand}$. The larger $\sigma$ is, the stronger the "small world" property [24].
In this paper, $C_p$ represents the clustering coefficient of the model and $C_p^{rand}$ is the clustering coefficient of the ER random network; each is obtained by arithmetic averaging of the clustering coefficients of all nodes in the network [22]. $L_p$ refers to the characteristic path length of the model and $L_p^{rand}$ is the characteristic path length of the ER random network; each is the average of the shortest path lengths between all pairs of nodes in the network [22].
The ER random network, proposed by Erdős and Rényi [25], is a simple graph without repeated edges or self-loops. The basic idea of the model is to connect each pair of $N$ nodes with probability $p$. The ER random network used in this paper has 30 nodes corresponding to the 30 electrodes, with pairs connected with a probability of 0.1. The specific generated graph is shown in Section 3.2.
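The ER graph and the $\sigma$ indicator can be sketched as follows. The generator is our own illustrative NumPy version; the $C_p$ and $L_p$ values passed to `small_world_sigma` would come from the measured networks:

```python
import numpy as np

def er_random_network(n=30, p=0.1, seed=0):
    """ER random graph: each of the n*(n-1)/2 node pairs is connected
    independently with probability p (n=30 electrodes, p=0.1 as in the text)."""
    rng = np.random.default_rng(seed)
    upper = np.triu(rng.random((n, n)) < p, k=1)  # each pair once, no loops
    return (upper | upper.T).astype(int)

def small_world_sigma(cp, lp, cp_rand, lp_rand):
    """sigma = gamma / lambda with gamma = Cp/Cp_rand, lambda = Lp/Lp_rand;
    sigma > 1 indicates the 'small world' property [24]."""
    gamma = cp / cp_rand
    lam = lp / lp_rand
    return gamma / lam
```

For example, a network twice as clustered as its random counterpart ($\gamma = 2$) but with only slightly shorter paths ($\lambda = 0.8$) yields $\sigma = 2.5 > 1$.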

2.2.4. Classifier

The classifiers artificial neural network (ANN), decision tree (DT), random forest (RF), k-nearest neighbor (KNN), adaboost (AD), and support vector machine (SVM) are used. For each classifier, the test result of 10-fold cross-validation is used as the classification accuracy.
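With scikit-learn (an assumption on our part; the paper does not name its toolkit), the six classifiers and the 10-fold cross-validation can be sketched on a hypothetical feature matrix. The data shapes and class separation here are illustrative only:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical feature matrix: rows are network-measurement vectors,
# labels are non-fatigue (0) / fatigue (1).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 30)), rng.normal(1, 1, (50, 30))])
y = np.array([0] * 50 + [1] * 50)

classifiers = {
    "ANN": MLPClassifier(max_iter=500, random_state=0),
    "DT": DecisionTreeClassifier(random_state=0),
    "RF": RandomForestClassifier(random_state=0),
    "KNN": KNeighborsClassifier(),
    "AD": AdaBoostClassifier(random_state=0),
    "SVM": SVC(),
}
# Mean accuracy over 10-fold cross-validation, as used in the paper
scores = {name: cross_val_score(clf, X, y, cv=10).mean()
          for name, clf in classifiers.items()}
```

In the experiments reported below, the same procedure is applied to the real network measurement matrices rather than synthetic data.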

2.3. The Model Framework Construction Flow of EN_FBN

To allow readers to understand our model step by step, a brief introduction to the model construction is given here without carrying specific data; the detailed description is shown in Section 2.4.3. The organizational framework of the model structure is shown in Figure 2.
1. Calculate entropy under different fatigue driving states in $S$ seconds for $R$ individuals and construct the entropy matrix, assumed to be $E_{(S \times R) \times l}$, where $(S \times R)$ is the number of rows of $E$ and $l$ is the number of columns, equal to the number of electrodes;
2. Construct the synchronization correlation coefficient matrix. The adjacency matrix is assumed to be $C_{m \times n}$, where $m$ and $n$ are the numbers of rows and columns of $C$, and $n$ equals the number of electrodes;
3. Construct the model EN_FBN;
4. Extract the network measurement matrix as the feature matrix. The network measurement matrix is assumed to be $M_{i \times j}$, where $i$ and $j$ are the numbers of rows and columns of $M$, and $j$ equals the number of electrodes;
5. Put the feature matrix into the classifier and obtain the test result through 10-fold cross-validation.
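These steps can be sketched end to end on synthetic data. Every numeric stand-in here is a placeholder of ours for illustration only: log-variance in place of the entropy features, Pearson correlation in place of MI/CORE, and node degree in place of APL/CC/LE:

```python
import numpy as np

rng = np.random.default_rng(0)

# Step 1: entropy matrix E of shape (S*R, l)
S, R, l = 30, 4, 30                          # seconds, individuals, electrodes
eeg = rng.standard_normal((S * R, l, 1000))  # 1 s of 1000 Hz data per cell
E = np.log(eeg.var(axis=2))                  # placeholder "entropy" per second

# Step 2: synchronization correlation matrix C (Pearson stand-in)
C = np.corrcoef(E.T)                         # l x l electrode correlation

# Step 3: sparsity thresholding -> binary FBN adjacency matrix
d = 0.2                                      # keep the strongest 20% of edges
w = np.abs(np.triu(C, 1))
cut = np.quantile(w[w > 0], 1 - d)
A = (w >= cut).astype(int)
A = A + A.T                                  # symmetric undirected network

# Step 4: one network measurement per electrode (node degree stand-in)
M = A.sum(axis=0)

# Step 5: rows like M would be stacked across individuals/segments into the
# feature matrix and fed to a classifier with 10-fold cross-validation.
```

Replacing each placeholder with the real entropy, correlation, and graph measures recovers the actual EN_FBN pipeline described in Section 2.4.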

2.4. Data Matrix Construction and EN_FBN Model Construction Algorithm Based on the Real Data Set of Fatigue Driving

2.4.1. Experiment Data

The EEG data were provided by Mu's team [26]. This data set was obtained from 26 volunteers who participated in a car-simulation driving experiment. Each of them recorded two sets of experimental data: fatigue driving state and non-fatigue driving state data. The individuals' self-perceived states were recorded before and after the experiment in order to understand the volunteers' mental and fatigue states. At the beginning of the experiment, EEG data in the resting state (that is, EEG data in the non-fatigue driving state) were recorded under the condition of adequate sleep and a regular diet the previous night. The individuals were then required to keep driving for 40 min, after which a professional questionnaire was used to check their current state [26] and the fatigue EEG data were recorded.
The EEG data are 32-electrode 600-second time series at a sampling rate of 1000 Hz, consisting of 300 s of fatigue data and 300 s of non-fatigue data per individual. After collecting the data, Mu et al. used Neuroscan 4.5 to preprocess it. The frequency range of the data was 0.15 Hz to 45 Hz. The main preprocessing steps include drift removal, electrooculogram removal, artifact removal, baseline correction, and filtering [26]. In view of abnormal conditions that may appear during the experiment, such as sneezing, coughing, or being suddenly startled, EEG drift was removed manually. The obvious electrooculogram components, mainly the vertical electrooculogram, were deleted by the team. They used transform-artifact rejection to remove artifacts in the EEG signals and chose the time-domain window according to experience, in the range of ±50 to ±100. For data that did not return to baseline after processing, a linear correction or a second baseline correction was applied as needed. The main purpose of digital filtering was to obtain the EEG data of the main frequency band; in this paper, a 1.5 Hz to 70 Hz band-pass filter was used. The study met the ethical principles of the Declaration of Helsinki and the current legal requirements. Ethical approval for this work was obtained from the Academic Ethics Committee of the Jiangxi University of Technology. All individuals gave their informed consent before participating in the study.

2.4.2. Construction of Data Matrix

Bring the real data set of fatigue driving into the model framework mentioned in Section 2.3, where S-second EEG signal samples of R individuals are taken as an example. Because experimental results for different numbers of individuals and different durations are compared many times, no specific data are given in Section 2. To calculate the synchronization correlation coefficients and network measurements, the content of this section is divided into three parts.
In the first part, the construction process of the entropy matrix is described in detail. In the second part, the construction process of the adjacency matrix is described in detail. In the third part, the network measurement of FBN is calculated and the network measurement matrix is constructed to form the feature matrix.
(1)
Construction of the entropy matrix
Firstly, calculate the entropy and construct the entropy matrix. Four entropy data matrices, called the FE matrix, SE matrix, AE matrix, and SPE matrix, are obtained through this construction method. The data format of each is explained as follows: the EEG data of S seconds of the non-fatigue driving state and S seconds of the fatigue driving state are intercepted from the data of R individuals. One entropy value is calculated from each second of 1000 Hz EEG data; assume that every entropy value is a symbol e. The S-second non-fatigue driving state entropy data matrix $E_{JX}$ and the S-second fatigue driving state entropy data matrix $E_{ZD}$ of each entropy are shown in Figure 3. In the figure, the rows stand for the entropy values of each second over the 30 electrodes, and the columns stand for the entropy values of each electrode over the S seconds of the R individuals.
(2)
Construction of adjacent matrix
Secondly, the synchronization correlation coefficients between different electrodes are calculated. Three matrices, called the MI matrix, PEA matrix, and CORE matrix, are obtained for each entropy matrix by this construction method. The size of each is $(2 \times (R \times I \times (30 \times 30)))$, where 2 means that each individual has two EEG states and R represents the number of individuals participating in the experiment. Most authors construct an FBN from every second of EEG data, which leads to many calculations and repeated information. In this paper, the intercepted S seconds of each individual are divided into I parts. Each adjacency matrix is calculated from the entropy matrix data within every S/I seconds, which reduces the amount of calculation while removing duplicate information. Within S seconds, each individual thus has I adjacency matrices. $30 \times 30$ represents the size of the correlation coefficient matrix, from which an FBN is constructed after threshold selection. The value of every synchronization correlation coefficient is assumed to be a symbol c. Two matrices are used to express the synchronization correlation coefficient matrices of the S seconds of the R individuals (the fatigue driving state data matrix $C_{ZD}$ and the non-fatigue driving state data matrix $C_{JX}$), as shown in Figure 4. For example, $c_{12}$ denotes the synchronization correlation coefficient between the entropy values of electrode 1 and electrode 2 within S/I seconds.
(3)
Construction of network measurement matrix
Finally, after selecting an appropriate threshold, the network measurements of the data matrix are calculated and the network measurement matrix is constructed. The size of each network measurement matrix is $(25 \times (2 \times (R \times I \times 30)))$; the explanation is similar to the construction of the adjacency matrix. Assume that the value of each network measurement is a symbol m. Each $30 \times 30$ adjacency matrix yields one row of size $(1 \times 30)$. There are 25 pairs of network measurement matrices under the non-fatigue state ($M_{JX}$) and fatigue state ($M_{ZD}$) of all experimental individuals.

2.4.3. Construction Algorithm of EN_FBN Model Based on the Real Data Set of Fatigue Driving

According to the flow in Section 2.3, the specific algorithm steps for constructing an EN_FBN fatigue driving recognition model from the real data set of fatigue driving are given. Two algorithms are provided in this section, and the EN_FBN model is constructed accordingly.
(1)
The first algorithm: Sparse-based FBN algorithm
Algorithm input: synchronization correlation coefficient matrix $C_{m \times n}$, with network sparsity $d$ ($0 < d < 1$).
Algorithm output: functional brain networks $g_k$ based on entropy.
1. The algorithm begins;
2. Set the threshold minimum value $d$ and the maximum value $d_{max}$ through the method mentioned in Section 2.2.2;
3. Define the loop invariant $d \le d_{max}$; the loop begins;
4. Calculate the number of edges $V$ of the matrix $C_{m \times n}$, and sort the edge weights of $C_{m \times n}$ from large to small;
5. Select the sparsity $d$ and generate the number of network edges $V_1$ according to the formula $V_1 = V \times d$;
6. Retain the first $V_1$ edges of the matrix $C_{m \times n}$ and discard the rest (set the corresponding positions of the matrix to 0); then generate an FBN $g_k^d$;
7. Increase $d$ by $d = d + 0.01$ and compare $d$ with $d_{max}$. If $d \le d_{max}$, jump back to step 3 to continue the calculation;
8. If $d > d_{max}$, the loop ends;
9. The algorithm ends.
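A direct NumPy transcription of this algorithm might look as follows. This is a sketch under the assumptions that the correlation matrix is symmetric with a zero diagonal and that the edge count V is taken over unique node pairs:

```python
import numpy as np

def sparse_fbn(C, d_min=0.08, d_max=0.32, step=0.01):
    """Sparsity-thresholded FBNs from a correlation matrix C (n x n).

    Follows the first algorithm: sort the edge weights from large to small,
    keep the top V1 = V * d edges, zero the rest, and repeat for each
    sparsity d in [d_min, d_max] with the given step.
    """
    n = C.shape[0]
    w = np.triu(np.abs(C), 1)                # unique undirected edges
    order = np.argsort(w, axis=None)[::-1]   # edge indices, strongest first
    V = n * (n - 1) // 2                     # number of possible edges
    networks = {}
    d = d_min
    while d <= d_max + 1e-9:
        V1 = int(round(V * d))               # number of edges to keep
        g = np.zeros_like(w)
        keep = order[:V1]
        g.flat[keep] = w.flat[keep]          # retain the strongest V1 edges
        networks[round(d, 2)] = g + g.T      # symmetric weighted FBN
        d += step
    return networks
```

With the paper's range of 8% to 32% in 1% steps, the function returns 25 networks per correlation matrix, matching the 25 thresholds analyzed in Section 3.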
(2)
The second algorithm: EN_FBN construction algorithm
1. Calculate the entropy features under different fatigue driving states in S seconds of R individuals (the specific method is mentioned in Section 2.2.1) and construct the entropy matrix (Section 2.4.2). The entropy matrix is assumed to be $E_{(S \times R) \times 30}$, where $(S \times R)$ is the number of rows of E and 30 is the number of columns, representing the electrode number;
2. Construct the synchronization correlation coefficient matrix based on $E_{(S \times R) \times 30}$ (the specific method is mentioned in Section 2.2.2) and construct the adjacency matrix (Section 2.4.2). The adjacency matrix is assumed to be $C_{(R \times I \times 30) \times 30}$, where $(R \times I \times 30)$ is the number of rows of C and 30 is the number of columns, representing the electrode number;
3. Construct the sparse-based FBN model according to the first algorithm;
4. Construct the network measurement matrix (the specific method is mentioned in Section 2.4.2). The network measurement matrix is assumed to be $M_{(R \times I) \times 30}$, where $(R \times I)$ is the number of rows of M and 30 is the number of columns, representing the electrode number;
5. Input each pair of matrices $M_{JX}$ and $M_{ZD}$ into the classifiers proposed in Section 2.2.4, and obtain the test result through 10-fold cross-validation.

3. Results and Discussion

3.1. Experiment and Result Analysis of FBN Based on Four Different Entropies

In Section 3.1 to Section 3.1.2, the fatigue driving data of 26 experimental individuals are used for method selection and analysis. Since it is reasonable to use 30 s of data to perform a fatigue driving recognition test in practical applications, the model is mainly analyzed based on the experimental results of 30 s of non-fatigue state data and 30 s of fatigue state data. The 30 s of data from each individual are divided into five groups using the method shown in Section 2.4.2. The available feature matrix size for each network measurement is $(25 \times (260 \times 30))$. By putting the specific data into Figure 5, the matrix is as shown in Figure 6.
These test results were obtained by 10-fold cross-validation and are compared in terms of recognition rate and stability in Section 3.1. In Section 3.2, it is verified whether the EN_FBN model satisfies the "small world" property, further confirming the reliability of the model. In Section 3.3, the appropriate threshold is selected. In Section 3.4, the stability of SE_T_KPCA and FE_FBN is compared.

3.1.1. Comparison Test Results of Classification Recognition Rate among FE/AE/SE/SPE_FBN

This section is devoted to analyzing the classification recognition rate of the proposed model and selecting the best method to build the model.
It can be seen from Table 1 and Table 2 that, from the perspective of classifier selection, the FBN models constructed using the four proposed entropies perform well with DT, RF, and SVM, among which RF performs best and yields the highest classification accuracy. Therefore, RF is used as the classifier for classification and recognition in the following experiments.
From the perspective of method selection, the classification recognition rate of FBN constructed by FE is higher than that of SE, AE, and SPE. In addition, MI is selected as the synchronous correlation coefficient to construct the FBN. As seen from Figure 7, the optimal recognition rate is up to 99.62%.
In Table 1, FE_MI_APL represents FBN constructed by FE, MI, and APL. The other abbreviations in the table have a similar meaning, such as FE_MI_CC, FE_MI_LE, FE_PEA_APL, FE_PEA_CC, FE_PEA_LE, and so on. The abbreviations in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8, Table 9, Table 10 and Table 11 are similar to Table 1.
In Figure 7, the abscissa represents the method combining entropy, synchronization correlation coefficient, and classifier. The ordinate represents the classification accuracy, and the number on each column gives the specific classification accuracy of each method.

3.1.2. The Stability Test Results of Each Threshold Recognition Rate of FE/SE/AE/SPE_FBN

It has been confirmed that the FBN model constructed with FE and MI obtains the highest classification recognition rate. In this section, this is further confirmed through the mean and variance over each optional parameter. The test results are shown in Figure 8 and Figure 9 and Table 5 and Table 6.
In Figure 8 and Figure 9, the abscissa represents the network measurement under different correlation coefficients of the FBN, the ordinate represents the classification recognition rate, and the number on each column gives the specific classification accuracy of each method.
According to these figures, the average and the variance of classification recognition rate of FE_FBN on RF perform better than that of SE/AE/SPE_FBN, and the classification recognition rate of FBN constructed by MI also performs better than that of PEA and CORE on all classifiers.
FE describes the fuzzy degree of a fuzzy set. For fuzzy sets, entropy is an important numerical feature of fuzzy variables and an important tool for processing fuzzy information, used to measure the uncertainty of fuzzy variables. Fuzzy sets describe classes whose elements cannot be clearly defined as belonging to a given set, and fuzzy variables take values from such a set with uncertainty. Therefore, in the process of constructing an FBN, FE is better suited to describing the uncertainty of the data. The network measurements of FE_FBN make the topologies of the resting state and the fatigue state of the brain network more separable. Therefore, using FE to construct the FBN is the best choice.
The size of MI is closely related to the dependence between variables X and Y: the more closely X and Y are related, the greater $I(X, Y)$ is. The reason MI performs well in this model is that it can estimate the synchronization strength more accurately. As for PEA and CORE: compared with MI, which has fewer restrictions on the data and a wide application range, PEA is more strongly affected by outliers, and CORE, its extension to higher-order statistics, performs poorly on the entropy features.
According to Table 5 and Table 6, the classification accuracy of FE_FBN using clustering coefficient and local efficiency as features at each threshold is more stable than that of the average path length, while the average recognition rate of using the average path length is higher than that of the clustering coefficient and local efficiency.

3.2. “Small World” Property Analysis of EN_FBN

Since the EN_FBN model is proposed here for the first time, this section reports its “small world” property to show that the model is credible. The specific test method is described in Section 2.2.3, and the ER random graph used in this paper is shown in Figure 10. Since MI performs best on EN_FBN, and considering the length of the paper, only the “small world” property of the EN_FBN constructed with MI as the synchronization correlation coefficient is displayed.
In this experiment, 26 individuals were selected, and five functional brain networks were constructed for each individual over 30 s, giving 260 functional brain networks per entropy. Due to space limitations, only the “small world” property data of one FBN per entropy is displayed for each driving state. The results are shown in Figure 11, where the abscissa represents the 25 thresholds of the FBN and the ordinate represents the δ value. As the figure shows, the δ values of the networks constructed by this model are greater than 1 at all thresholds and generally increase with the threshold, which conforms to the “small world” property described in reference [24].
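The δ test can be sketched as follows, using the definition δ = (C/C_rand)/(L/L_rand) from reference [24] with ER random graphs of matching density as the null model. This is an illustrative NumPy version on a binary adjacency matrix, not the code used in the experiments:

```python
import numpy as np
from collections import deque

def avg_clustering(A):
    # Local clustering: diag(A^3) counts 2x the triangles at each node;
    # divide by the k*(k-1) ordered neighbour pairs.
    deg = A.sum(axis=1)
    tri = np.diag(A @ A @ A)
    with np.errstate(divide="ignore", invalid="ignore"):
        c = tri / (deg * (deg - 1))
    return float(np.nan_to_num(c).mean())

def avg_path_length(A):
    # Mean BFS distance over all reachable ordered pairs.
    n = len(A)
    nbrs = [np.flatnonzero(A[i]) for i in range(n)]
    total, pairs = 0, 0
    for s in range(n):
        dist = np.full(n, -1)
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in nbrs[u]:
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        total += dist[dist > 0].sum()
        pairs += (dist > 0).sum()
    return total / pairs

def small_world_delta(A, n_rand=10, seed=0):
    """delta = (C/C_rand) / (L/L_rand), averaged over ER null graphs."""
    rng = np.random.default_rng(seed)
    n = len(A)
    p = A.sum() / (n * (n - 1))  # edge density of the real network
    C, L = avg_clustering(A), avg_path_length(A)
    Cr = Lr = 0.0
    for _ in range(n_rand):
        R = np.triu((rng.random((n, n)) < p).astype(int), 1)
        R = R + R.T  # symmetric ER graph, zero diagonal
        Cr += avg_clustering(R)
        Lr += avg_path_length(R)
    return (C / (Cr / n_rand)) / (L / (Lr / n_rand))
```

A δ greater than 1 indicates a small-world topology: clustering well above the random baseline with path lengths not much longer than random.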

3.3. Threshold Selection of FE_FBN

In this section, appropriate thresholds for the model are chosen. Threshold selection is based on the combination of FE, MI, and RF (abbreviated as FE + MI + RF), and the threshold in the highest-value set of each combination is taken as the optimal threshold. Table 7, Table 8 and Table 9 show the accuracy of FE_FBN for all indicators at each threshold. As can be seen, the accuracy of the model is highest when the network sparsity is 11% or 28%, so these can be selected as the best thresholds, around which high classification recognition rates are concentrated. Moreover, when the network sparsity is 28%, the classification recognition rate performs well, being only 0.37% lower than the highest point of 99.62%.
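Sparsity-based thresholding of this kind can be sketched as follows: given a symmetric synchronization matrix, keep only the strongest connections until the chosen fraction of possible edges (e.g. 11% or 28%) remains. This is a hypothetical helper for illustration, not the paper's implementation:

```python
import numpy as np

def binarize_at_sparsity(W, sparsity):
    """Binarize weight matrix W so that `sparsity` (a fraction, e.g.
    0.11) of the possible edges are retained, keeping the strongest."""
    n = W.shape[0]
    # Consider each undirected edge once via the upper triangle.
    weights = W[np.triu_indices(n, k=1)]
    n_keep = int(round(sparsity * len(weights)))
    # Threshold = weight of the n_keep-th strongest edge.
    thresh = np.sort(weights)[::-1][n_keep - 1]
    A = np.zeros_like(W, dtype=int)
    A[W >= thresh] = 1
    np.fill_diagonal(A, 0)
    return A
```

Scanning such sparsity levels (here 8% to 32%) and comparing classifier accuracy at each is what the tables in this section summarize.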

3.4. Stability Comparison between SE_T_KPCA and FE_FBN

In this section, to further verify the stability of our method, SE_T_KPCA, proposed by Ye [9], and FE_FBN are compared on the same data set in terms of the highest classification accuracy, the mean classification accuracy, and the variance over all thresholds. The algorithm of Ye was chosen for two reasons. First, both methods were tested on the same data set: the fatigue driving EEG data set produced by the Mu team [26]. Second, Ye's team used an EEG entropy feature extraction algorithm based on the t-test and KPCA, while our model constructs an FBN from the entropy features of the EEG signal to identify the EEG state; both studies are therefore based on the entropy characteristics of EEG signals. To further verify stability, the classification accuracies of SE_T_KPCA and FE_FBN are compared on the same data set for windows of 10 s, 20 s, 30 s, 40 s, 50 s, and 60 s, which are suitable for fatigue driving, to verify that the maximum classification recognition rate of the FE_FBN model is higher than that of SE_T_KPCA for every window length. This confirms that our model not only captures the uncertain characteristics of EEG data but also has a more realistic network topology: even when the length of the intercepted EEG signal differs, it obtains higher accuracy and more stable classification results than other methods on the same data set. This insensitivity to the window length is of great significance in terms of practicality.
In this experiment, to better illustrate the effect, the experimental individuals were divided into two groups following the division used by Ye [9]: the experiment based on the data of 10 individuals is called group one, and the experiment based on the data of 15 individuals is called group two.
To avoid under-fitting or over-fitting caused by insufficient data, the number of trees in the random forest was adjusted: 2 trees in group one and 4 trees in group two. The experimental results are shown in Table 10 and Table 11.
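The tree-count adjustment can be illustrated with scikit-learn's RandomForestClassifier. The feature matrix here is synthetic stand-in data (one row per EEG window, one column per channel's network measurement, labels 0 = resting and 1 = fatigued); the real features in the paper are the network measurements of each FBN:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for the real feature matrix: 60 "resting" and
# 60 "fatigued" windows, 30 network-measurement features each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (60, 30)),
               rng.normal(0.8, 1.0, (60, 30))])
y = np.repeat([0, 1], 60)

# Few trees for small data sets (tree = 2 / tree = 4 in the paper)
# to limit over-fitting; increase as the data set grows.
clf = RandomForestClassifier(n_estimators=4, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

Cross-validation is used here only to show that even a very small forest separates well-structured classes; the paper's accuracies come from its own evaluation protocol.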
As seen from these tables, the highest classification recognition rate of the FE_FBN model exceeds that of SE_T_KPCA. The mean value and variance of SE_T_KPCA and FE_FBN in the two groups from 10 s to 60 s are shown in Table 12, which confirms that our model has a higher recognition accuracy, a larger classification mean, and a smaller variance.
Entropy characterizes the possibility of new information appearing in a time series and can effectively describe the complexity, non-linearity, and uncertainty of EEG signals, while the FBN effectively describes the network topology. The thresholds selected for the FBN through the sliding window give it a certain time-domain resolution, which allows the dynamic characteristics of the synchronization behaviour between brain signals to be studied. This time-domain resolution lets the FE_FBN model capture the chaotic characteristics of EEG signals while remaining insensitive to external factors (such as the length of the window used to intercept the EEG signal) and maintaining high-precision classification, so it adapts well to changes in EEG signal length. Therefore, regardless of the window length used to intercept the EEG signal data, the accuracy of the model changes little and remains high.
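Window interception of this kind can be sketched as a simple segmentation helper (a hypothetical function; the sampling rate and window lengths below are placeholders, not the experiment's values):

```python
import numpy as np

def sliding_windows(signal, fs, win_s, step_s=None):
    """Split a 1-D EEG channel into windows of win_s seconds,
    advancing step_s seconds per window (non-overlapping by default).

    fs: sampling rate in Hz.
    """
    step_s = win_s if step_s is None else step_s
    win, step = int(win_s * fs), int(step_s * fs)
    # Stack every complete window; a trailing partial window is dropped.
    return np.array([signal[i:i + win]
                     for i in range(0, len(signal) - win + 1, step)])
```

Each resulting window would then be reduced to its entropy features and its FBN, so the 10 s to 60 s comparisons above amount to varying `win_s` and re-running the pipeline.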
In this regard, Table 13 and Table 14 report the classification recognition of FE_FBN from 10 s to 60 s at each threshold. In these tables, the means and variances of the classification recognition rates of APL, CC, and LE of FE_FBN perform well from 10 s to 60 s: in group one the means lie between 95% and 97% with variances between 0.00010 and 0.00035, and in group two the means lie between 97% and 98% with variances between 0.00010 and 0.00020, so the performance is ideal.
Combining Tables 1–14 and Figures 3–11, it is recommended to use FE + MI + CC + RF to construct the FBN model, with the number of trees adjusted according to the data size. This combination achieves the ideal effect of a high classification accuracy, large classification mean, and small classification variance at every threshold.

4. Conclusions

In this paper, we focus on selecting appropriate entropy features to characterize EEG signals and construct an FBN. On the real data set of fatigue driving, FBN models based on different entropies are constructed to identify the state of fatigue driving. We think the fuzzy idea of FE is more suitable for this model due to the uncertainty of EEG signals. It makes the topology of the resting state and fatigue state of the brain network more separable. Through analyzing network measurement indicators, the experiment shows that the FBN model based on fuzzy entropy can achieve excellent classification recognition rate and good classification stability. In addition, when compared with the other model based on the same data set, our model can obtain a higher accuracy and more stable classification results even if the length of the intercepted EEG signal is different. This means that this model is not sensitive to the length of time which is of great significance in terms of practicality.
However, there are certain deficiencies. First, in the single-person separation experiment, the classification recognition rate of this model needs to be improved. Second, a real-time EEG detection application has not yet been implemented. We look forward to further research on this model.

Author Contributions

Conceptualization, T.Q.; methodology, L.Z. and T.Q.; software, L.Z., T.Q., Z.L., S.Z. and X.B.; validation, L.Z., T.Q. and Z.L.; writing–original draft preparation, L.Z.; writing–review and editing, L.Z., T.Q., Z.L., S.Z. and X.B. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported by the National Natural Science Foundation of China (Nos. 61841201, 81460769, 61662045, and 61762045).

Acknowledgments

We thank Jianfeng Hu’s team for providing EEG experiment data.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

References

  1. Liang, X.; Wang, J.H.; He, Y. Human brain connection group research: Brain structure network and brain function network. Chin. Sci. Bull. 2010, 55, 1565–1583. [Google Scholar] [CrossRef]
  2. Meier, J.; Tewarie, P.; Mieghem, P. The Union of Shortest Path Trees of Functional Brain Networks. Brain Connect. 2015, 5, 575–581. [Google Scholar] [CrossRef] [PubMed]
  3. Kabbara, A.; Eid, H.; Falou, W.E.; Khalil, M.; Hassan, M. Reduced integration and improved segregation of functional brain networks in alzheimer’s disease. J. Neural Eng. 2018, 15, 026023. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  4. Zou, S.L.; Qiu, T.R.; Huang, P.F.; Luo, H.W.; Bai, X.M. The functional brain network based on the combination of shortest path tree and its application in fatigue driving state recognition and analysis of the neural mechanism of fatigue driving. Biomed. Signal Process. Control 2020, 62, 102129. [Google Scholar] [CrossRef]
  5. Conrin, S.D.; Zhan, L.; Morrissey, Z.D.; Xing, M.; Forbes, A.; Maki, P.M.; Milad, M.R.; Ajilore, O.; Leow, A. Sex-by-age differences in the resting-state brain connectivity. arXiv 2018, arXiv:1801.01577. [Google Scholar]
  6. Zhao, C.L.; Zhao, M.; Yang, Y. The Reorganization of Human Brain Network Modulated by Driving Mental Fatigue. IEEE J. Biomed. Health Inform. 2017, 21, 743–755. [Google Scholar] [CrossRef]
  7. Rifkin, H. Entropy: A New World View; Shanghai Translation Publishing House: Shanghai, China, 1987. [Google Scholar]
  8. Min, J.; Wang, P.; Hu, J. Driver fatigue detection through multiple entropy fusion analysis in an EEG-based system. PLoS ONE 2017, 12, e0188756. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  9. Ye, B.G. Research on Recognition Method of Fatigue Driving State Based on KPCA; Nanchang University: Nanchang, China, 2019. [Google Scholar]
  10. Vladimir, M.; Kevin, J.M.; Jack, R.; Kimberly, A.C. Changes in EEG multiscale entropy and power-law frequency scaling during the human sleep cycle. Hum. Brain Mapp. 2019, 40, 538–551. [Google Scholar]
  11. Zou, S.L.; Qiu, T.R.; Huang, P.F.; Bai, X.M.; Liu, C. Constructing Multi-scale Entropy Based on the Empirical Mode Decomposition(EMD) and its Application in Recognizing Driving Fatigue. J. Neurosci. Methods 2020, 341, 108691. [Google Scholar] [CrossRef]
  12. Kumar, Y.; Dewal, M.L.; Anand, R.S. Epileptic seizure detection using DWT based fuzzy approximate entropy and support vector machine. Neurocomputing 2014, 133, 271–279. [Google Scholar] [CrossRef]
  13. Lake, D.E.; Richman, J.S.; Griffin, M.P.; Moorman, J.R. Sample entropy analysis of neonatal heart rate variability. Am. J. Physiol. 2002, 283, R789–R797. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  14. Zhang, Y.; Luo, M.W.; Luo, Y. Wavelet transform and sample entropy feature extraction methods for EEG signals. CAAI Trans. Intell. Syst. 2012, 7, 339–344. [Google Scholar]
  15. Pincus, M.S. Approximate entropy as a measure of system complexity. Proc. Natl. Acad. Sci. USA 1991, 88, 2297–2301. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  16. Sun, Y.; Ma, J.H.; Zhang, X.Y. EEG Emotional Recognition Based on Nonlinear Global Features and Spectral Features. J. Comput. Eng. Appl. 2018, 54, 116–121. [Google Scholar]
  17. Kraskov, A.; Stögbauer, H.; Grassberger, P. Estimating mutual information. Phys. Rev. E Stat. Nonlinear Soft Matter Phys. 2004, 69, 066138. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  18. Hanieh, B.; Sean, F.; Azinsadat, J.; Tyler, G.; Kenneth, P. Detecting synchrony in EEG: A comparative study of functional connectivity measures. Comput. Biol. Med. 2019, 105, 1–15. [Google Scholar]
  19. Pearson, K. On the Criterion that a Given System of Deviations from the Probable in the Case of a Correlated System of Variables is Such that it Can be Reasonably Supposed to have Arisen from Random Sampling. In Breakthroughs in Statistics; Springer: New York, NY, USA, 1992. [Google Scholar]
  20. Gunduz, A.; Principe, J.C. Correntropy as a novel measure for nonlinearity tests. Signal Process. 2009, 89, 14–23. [Google Scholar] [CrossRef]
  21. Silverman, B.W. Density Estimation for Statistics and Data Analysis; CRC Press: Boca Raton, FL, USA, 1996; Volume 26. [Google Scholar]
  22. Guo, H. Analysis and Classification of Abnormal Topological Attributes of Resting Function Network in Depression; Taiyuan University of Technology: Taiyuan, China, 2013. [Google Scholar]
  23. Watts, D.J.; Strogatz, S.H. Collective dynamics of “small-world” networks. Nature 1998, 393, 440–442. [Google Scholar] [CrossRef]
  24. Humphries, M.D.; Gurney, K.; Prescott, T.J. The brainstem reticular formation is a small-world, not scale-free, network. Proc. R. Soc. B Biol. Sci. 2006, 273, 503–511. [Google Scholar] [CrossRef] [Green Version]
  25. Erdos, P.; Renyi, A. On random graphs. Publ. Math. Debrecen 1959, 6, 290–297. [Google Scholar]
  26. Mu, Z.D.; Hu, J.F.; Min, J.L. Driver Fatigue Detection System Using Electroencephalography Signals Based on Combined Entropy Features. Appl. Sci. 2017, 7, 150. [Google Scholar] [CrossRef] [Green Version]
Figure 1. Architecture diagram of the EN_FBN (entropy-based functional brain network) model.
Figure 2. EN_FBN model framework.
Figure 3. Data matrix of each entropy.
Figure 4. Adjacency matrix of each synchronous correlation coefficient of each entropy matrix.
Figure 5. Feature matrix of each network measurement of each adjacency matrix.
Figure 6. Feature matrix of each network measurement of each adjacency matrix on experimental data.
Figure 7. Highest accuracy comparison chart of FE/SE/AE/SPE_FBN (unit: %).
Figure 8. Mean value of classification recognition rate on RF of FE/SE/AE/SPE_FBN (unit: %).
Figure 9. Variance of classification recognition rate on RF of FE/SE/AE/SPE_FBN.
Figure 10. ER random graph.
Figure 11. “Small world” property sampling display.
Table 1. The highest accuracy of each network measurement of FE_FBN (unit: %). (ANN: artificial neural network; DT: decision tree; RF: random forest; KNN: k-nearest neighbor; AD: adaboost; SVM: support vector machine).

| Feature | ANN | DT | RF | KNN | AD | SVM |
|---|---|---|---|---|---|---|
| FE_MI_APL | 98.36 | 98.52 | 99.62 | 98.97 | 74.60 | 93.79 |
| FE_MI_CC | 97.74 | 99.10 | 99.43 | 98.44 | 81.79 | 98.77 |
| FE_MI_LE | 93.74 | 98.13 | 99.43 | 99.12 | 76.83 | 98.66 |
| FE_PEA_APL | 92.33 | 93.89 | 95.53 | 88.12 | 86.18 | 93.96 |
| FE_PEA_CC | 89.11 | 93.85 | 95.28 | 88.29 | 92.13 | 84.39 |
| FE_PEA_LE | 89.41 | 93.41 | 95.19 | 86.65 | 94.72 | 88.47 |
| FE_CORE_APL | 90.35 | 94.58 | 96.04 | 87.42 | 86.13 | 95.53 |
| FE_CORE_CC | 90.76 | 93.89 | 95.19 | 87.55 | 93.38 | 85.00 |
| FE_CORE_LE | 86.94 | 93.53 | 96.13 | 88.06 | 96.04 | 88.53 |
Table 2. The highest accuracy of each network measurement of SE_FBN (unit: %, sample entropy (SE)). (ANN: artificial neural network; DT: decision tree; RF: random forest; KNN: k-nearest neighbor; AD: adaboost; SVM: support vector machine).

| Feature | ANN | DT | RF | KNN | AD | SVM |
|---|---|---|---|---|---|---|
| SE_MI_APL | 96.19 | 97.44 | 99.06 | 95.56 | 72.93 | 96.02 |
| SE_MI_CC | 95.25 | 96.71 | 98.30 | 96.69 | 80.20 | 97.20 |
| SE_MI_LE | 93.28 | 96.81 | 98.21 | 96.18 | 94.44 | 95.91 |
| SE_PEA_APL | 90.87 | 94.53 | 94.95 | 87.95 | 83.37 | 94.72 |
| SE_PEA_CC | 89.34 | 93.32 | 95.00 | 87.96 | 92.41 | 86.28 |
| SE_PEA_LE | 89.16 | 94.32 | 95.19 | 87.91 | 95.28 | 89.07 |
| SE_CORE_APL | 91.14 | 93.69 | 94.91 | 87.51 | 86.43 | 94.12 |
| SE_CORE_CC | 88.72 | 95.49 | 96.23 | 87.61 | 92.18 | 84.31 |
| SE_CORE_LE | 87.21 | 93.44 | 95.00 | 86.76 | 94.90 | 87.53 |
Table 3. The highest accuracy of each network measurement of AE_FBN (unit: %, approximate entropy (AE)). (ANN: artificial neural network; DT: decision tree; RF: random forest; KNN: k-nearest neighbor; AD: adaboost; SVM: support vector machine).

| Feature | ANN | DT | RF | KNN | AD | SVM |
|---|---|---|---|---|---|---|
| AE_MI_APL | 94.81 | 96.57 | 97.92 | 96.13 | 74.87 | 95.78 |
| AE_MI_CC | 94.63 | 96.32 | 98.87 | 96.32 | 82.38 | 96.64 |
| AE_MI_LE | 93.01 | 96.69 | 98.30 | 95.72 | 89.52 | 95.60 |
| AE_PEA_APL | 90.33 | 94.09 | 94.81 | 87.70 | 85.11 | 95.09 |
| AE_PEA_CC | 90.62 | 93.08 | 95.57 | 88.58 | 92.29 | 85.53 |
| AE_PEA_LE | 87.17 | 93.66 | 95.75 | 88.33 | 95.28 | 87.84 |
| AE_CORE_APL | 89.99 | 93.94 | 95.94 | 88.93 | 83.87 | 95.09 |
| AE_CORE_CC | 89.43 | 94.25 | 95.44 | 87.13 | 94.09 | 83.43 |
| AE_CORE_LE | 90.26 | 93.49 | 95.13 | 88.21 | 95.42 | 87.02 |
Table 4. The highest accuracy of each network measurement of SPE_FBN (unit: %, spectral entropy (SPE)). (ANN: artificial neural network; DT: decision tree; RF: random forest; KNN: k-nearest neighbor; AD: adaboost; SVM: support vector machine).

| Feature | ANN | DT | RF | KNN | AD | SVM |
|---|---|---|---|---|---|---|
| SPE_MI_APL | 90.57 | 93.47 | 96.04 | 88.88 | 68.44 | 92.90 |
| SPE_MI_CC | 88.08 | 93.34 | 95.72 | 89.24 | 73.70 | 91.75 |
| SPE_MI_LE | 87.62 | 94.10 | 96.10 | 89.67 | 87.47 | 89.47 |
| SPE_PEA_APL | 90.00 | 94.12 | 95.57 | 87.67 | 87.87 | 93.68 |
| SPE_PEA_CC | 89.80 | 93.42 | 95.16 | 88.89 | 93.64 | 86.31 |
| SPE_PEA_LE | 89.86 | 93.99 | 95.38 | 85.81 | 95.09 | 86.80 |
| SPE_CORE_APL | 91.18 | 94.67 | 94.75 | 87.01 | 85.61 | 94.62 |
| SPE_CORE_CC | 93.02 | 93.22 | 95.44 | 88.17 | 92.15 | 85.39 |
| SPE_CORE_LE | 86.93 | 93.18 | 95.47 | 87.36 | 94.99 | 87.86 |
Table 5. Classification precision variance of the network measurement matrix at each threshold of FE_FBN. (ANN: artificial neural network; DT: decision tree; RF: random forest; KNN: k-nearest neighbor; AD: adaboost; SVM: support vector machine).

| Feature | ANN | DT | RF | KNN | AD | SVM |
|---|---|---|---|---|---|---|
| FE_MI_APL | 0.00245 | 0.00009 | 0.00007 | 0.00018 | 0.00219 | 0.00091 |
| FE_MI_CC | 0.00059 | 0.00008 | 0.00003 | 0.00009 | 0.00755 | 0.00173 |
| FE_MI_LE | 0.00064 | 0.00007 | 0.00005 | 0.00023 | 0.00726 | 0.00553 |
Table 6. Mean value of the classification accuracy of the network measurement matrix at each threshold of FE_FBN (unit: %). (ANN: artificial neural network; DT: decision tree; RF: random forest; KNN: k-nearest neighbor; AD: adaboost; SVM: support vector machine).

| Feature | ANN | DT | RF | KNN | AD | SVM |
|---|---|---|---|---|---|---|
| FE_MI_APL | 92.11 | 97.15 | 98.46 | 96.84 | 65.86 | 87.87 |
| FE_MI_CC | 93.58 | 97.15 | 98.43 | 96.45 | 60.97 | 92.42 |
| FE_MI_LE | 90.45 | 96.84 | 98.27 | 96.07 | 58.72 | 87.51 |
Table 7. The accuracy of FE + MI + RF at threshold 8% to 16% (unit: %).

| Feature | 1 (8%) | 2 (9%) | 3 (10%) | 4 (11%) | 5 (12%) | 6 (13%) | 7 (14%) | 8 (15%) | 9 (16%) |
|---|---|---|---|---|---|---|---|---|---|
| APL | 98.23 | 98.66 | 98.11 | 99.62 | 97.17 | 96.49 | 96.67 | 97.92 | 99.12 |
| CC | 98.87 | 98.23 | 99.25 | 98.30 | 98.49 | 98.65 | 97.07 | 98.48 | 98.53 |
| LE | 98.87 | 98.23 | 98.11 | 98.87 | 99.06 | 97.32 | 97.26 | 99.12 | 98.34 |
Table 8. The accuracy of FE + MI + RF at threshold 17% to 24% (unit: %).

| Feature | 10 (17%) | 11 (18%) | 12 (19%) | 13 (20%) | 14 (21%) | 15 (22%) | 16 (23%) | 17 (24%) |
|---|---|---|---|---|---|---|---|---|
| APL | 98.93 | 99.25 | 98.49 | 97.33 | 97.92 | 99.43 | 99.06 | 98.74 |
| CC | 98.62 | 98.68 | 97.95 | 97.54 | 98.23 | 98.30 | 98.30 | 98.96 |
| LE | 97.78 | 98.58 | 99.06 | 98.11 | 97.66 | 98.68 | 98.02 | 98.36 |
Table 9. The accuracy of FE + MI + RF at threshold 25% to 32% (unit: %).

| Feature | 18 (25%) | 19 (26%) | 20 (27%) | 21 (28%) | 22 (29%) | 23 (30%) | 24 (31%) | 25 (32%) |
|---|---|---|---|---|---|---|---|---|
| APL | 99.25 | 98.84 | 98.96 | 99.25 | 98.87 | 98.96 | 98.68 | 97.92 |
| CC | 97.85 | 98.21 | 98.30 | 99.43 | 98.33 | 99.06 | 98.34 | 98.87 |
| LE | 98.20 | 97.74 | 98.01 | 99.43 | 97.74 | 98.30 | 97.31 | 98.68 |
Table 10. Precision comparison table of SE_T_KPCA and FE_FBN in different seconds (group: group one, unit: %). (LDA: linear discriminant analysis; RF: random forest).

| Second | SE_T_KPCA (LDA) | FE_MI_APL (RF, tree = 2) | FE_MI_CC (RF, tree = 2) | FE_MI_LE (RF, tree = 2) |
|---|---|---|---|---|
| 10 s | 75.61 | 97.57 | 97.96 | 98.10 |
| 20 s | 85.19 | 97.98 | 98.50 | 98.33 |
| 30 s | 99.27 | 99.39 | 98.97 | 99.25 |
| 40 s | 86.96 | 99.21 | 98.73 | 99.33 |
| 50 s | 90.55 | 98.92 | 98.93 | 97.94 |
| 60 s | 94.61 | 98.56 | 99.10 | 98.27 |
Table 11. Precision comparison table of SE_T_KPCA and FE_FBN in different seconds (group: group two, unit: %). (LDA: linear discriminant analysis; RF: random forest).

| Second | SE_T_KPCA (LDA) | FE_MI_APL (RF, tree = 4) | FE_MI_CC (RF, tree = 4) | FE_MI_LE (RF, tree = 4) |
|---|---|---|---|---|
| 10 s | 80.33 | 98.60 | 99.52 | 99.03 |
| 20 s | 87.60 | 99.41 | 99.19 | 99.41 |
| 30 s | 85.08 | 99.19 | 99.68 | 99.31 |
| 40 s | 90.46 | 99.35 | 99.48 | 99.03 |
| 50 s | 92.36 | 99.00 | 99.35 | 98.92 |
| 60 s | 94.74 | 99.35 | 99.19 | 99.52 |
Table 12. Classification accuracy mean value and variance in two groups between SE_T_KPCA and FE_FBN.

| Group | SE_T_KPCA (Mean / Var) | FE_MI_APL (Mean / Var) | FE_MI_CC (Mean / Var) | FE_MI_LE (Mean / Var) |
|---|---|---|---|---|
| Group one | 88.70% / 0.00674 | 98.61% / 0.00005 | 98.70% / 0.00002 | 98.54% / 0.00004 |
| Group two | 88.43% / 0.00274 | 99.15% / 0.000009 | 99.40% / 0.000004 | 99.23% / 0.000006 |
Table 13. Mean and variance of FE_FBN (group: group one).

| Second | APL (Mean / Var) | CC (Mean / Var) | LE (Mean / Var) |
|---|---|---|---|
| 10 s | 95.57% / 0.00016 | 95.35% / 0.00027 | 95.30% / 0.00021 |
| 20 s | 96.15% / 0.00013 | 95.94% / 0.00033 | 95.47% / 0.00026 |
| 30 s | 96.39% / 0.00016 | 96.11% / 0.00019 | 96.06% / 0.00026 |
| 40 s | 96.07% / 0.00033 | 95.80% / 0.00019 | 95.73% / 0.00025 |
| 50 s | 96.74% / 0.00012 | 95.95% / 0.00017 | 95.96% / 0.00015 |
| 60 s | 96.23% / 0.00016 | 96.02% / 0.00020 | 96.12% / 0.00028 |
Table 14. Mean and variance of FE_FBN (group: group two).

| Second | APL (Mean / Var) | CC (Mean / Var) | LE (Mean / Var) |
|---|---|---|---|
| 10 s | 96.88% / 0.00011 | 97.27% / 0.00016 | 97.11% / 0.00015 |
| 20 s | 97.62% / 0.00011 | 97.54% / 0.00010 | 97.62% / 0.00011 |
| 30 s | 97.49% / 0.00016 | 97.72% / 0.00014 | 97.91% / 0.00020 |
| 40 s | 97.83% / 0.00020 | 97.62% / 0.00011 | 97.58% / 0.00011 |
| 50 s | 97.12% / 0.00014 | 97.62% / 0.00001 | 97.50% / 0.00010 |
| 60 s | 97.62% / 0.00014 | 97.66% / 0.00010 | 97.54% / 0.00017 |
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
