Article

Unveiling Early Signs of Preclinical Alzheimer’s Disease Through ERP Analysis with Weighted Visibility Graphs and Ensemble Learning

by Yongshuai Liu 1,*, Jiangyi Xia 2, Ziwen Kan 1, Jesse Zhang 3, Sheela Toprani 4, James B. Brewer 5, Marta Kutas 6, Xin Liu 1 and John Olichney 2
1 Department of Computer Science, University of California, Davis, CA 95616, USA
2 Center for Mind and Brain and Neurology Department, University of California, Davis, CA 95618, USA
3 Department of Computer Science, University of Southern California, Los Angeles, CA 90089, USA
4 Department of Neurology, Division of Epilepsy, University of California, Davis, CA 95817, USA
5 Departments of Radiology and Neurosciences, University of California, San Diego, CA 92037, USA
6 Department of Cognitive Science, University of California, San Diego, CA 92037, USA
* Author to whom correspondence should be addressed.
Bioengineering 2025, 12(8), 814; https://doi.org/10.3390/bioengineering12080814
Submission received: 20 May 2025 / Revised: 24 July 2025 / Accepted: 25 July 2025 / Published: 29 July 2025

Abstract

The early detection of Alzheimer’s disease (AD) is important for effective therapeutic intervention and optimized enrollment in clinical trials. Recent studies have shown high accuracy in identifying mild AD by applying visibility graph and machine learning methods to electroencephalographic (EEG) data. We present a novel analytical framework combining Weighted Visibility Graphs (WVG) and ensemble learning to detect individuals in the “preclinical” stage of AD (preAD) using a word repetition EEG paradigm; the WVG is an advanced variant of the natural Visibility Graph (VG) that incorporates weighted edges based on the visibility degree between corresponding data points. EEG signals were recorded from 40 cognitively unimpaired elderly participants (20 preclinical AD and 20 normal old) during a word repetition task. Event-related potential (ERP) and oscillatory signals were extracted from each EEG channel and transformed into a WVG network, from which relevant topological features were extracted. Features were selected using t-tests to reduce noise. Subsequent statistical analysis revealed significant disparities in the structure of WVG networks between preAD and normal subjects. Furthermore, Principal Component Analysis (PCA) was applied to condense the input data into its principal features. Leveraging these PCA components as input features, several machine learning algorithms were used to classify preAD vs. normal subjects. To enhance classification accuracy and robustness, an ensemble method was employed alongside the classifiers. Our framework achieved an accuracy of up to 92% in discriminating preAD from normal old using both linear and non-linear classifiers, signifying the efficacy of combining WVG and ensemble learning in identifying very early AD from EEG signals. The framework can also improve clinical efficiency by reducing the amount of data required for effective classification, thus saving valuable clinical time.

1. Introduction

Alzheimer’s disease (AD) presents a growing challenge to global public health due to its impact on cognitive function and quality of life. As the population ages, the prevalence of AD is expected to increase, underscoring the need for effective early intervention. Detecting AD at its earliest stages, particularly in the preclinical phase (preAD), is critical for timely interventions and dementia prevention.
AD is characterized by a sequence of biological events that begins years before clinical symptoms [1]. Amyloid- β (A β ) deposition on PET scans or low A β levels in Cerebrospinal fluid (CSF) are considered early indicators of AD in normal older individuals, who may be classified as having preclinical AD (preAD) [2]. A β peptides are known to influence synaptic activity with inhibitory effects at post-synaptic sites and excitatory effects at pre-synaptic sites [3,4]. Their pathological accumulation disrupts synaptic transmission [5], alters network-level neuronal activity [3,4], and causes synaptic loss [6,7]. Synaptic abnormalities may occur before amyloid plaque deposition [3,8] and are central to the pathophysiology of AD in preclinical and symptomatic stages [9]. Synaptic loss is strongly associated with the severity of clinical symptoms [10], underscoring the value of identifying biomarkers that can detect early synaptic dysfunction.
EEGs capture summated excitatory and inhibitory postsynaptic potentials [11] and provide a non-invasive measure of synaptic and network functions. EEG-derived measures, such as event-related brain potentials (ERPs), are sensitive to subtle brain changes in early AD [12,13], even in the preclinical stage [14,15]. With its high temporal resolution, EEG is well-suited to track changes in cognitive processes such as memory. Using a word repetition paradigm that elicits language- and memory-related brain activity, our group has identified several ERP measures that distinguish individuals across different AD stages from healthy controls [15,16,17,18,19,20]. For example, the N400 (linked to semantic processing) and the P600 or ‘Late Positive Component’ (LPC, linked to verbal memory) are reliably observed in healthy elderly but not in mild cognitive impairment (MCI) or AD patients [15,18,19,20]. Abnormalities in these ERP components have also been found in preAD [15], suggesting that EEG/ERP paradigms may provide sensitive biomarkers of synaptic and network alterations before any detectable cognitive dysfunction.
A limitation of traditional EEG analyses is their usual focus on pre-defined time windows, electrode locations, and frequency bands, at the expense of the overall pattern and complexity of EEG data. Methodological differences between studies also limit scalability to large-scale studies. To address these limitations, we applied visibility graph (VG) features and machine learning to the word repetition EEG data [21]. The VG method maps one-dimensional, non-stationary time series into two-dimensional graphs based on mutual visibility between data points, allowing the exploration of the underlying dynamics of EEG data through graph-theoretical analysis [22,23]. VG has been shown to preserve certain properties of the time series: for instance, periodic series yield regular graphs, while random series produce random graphs [23]. Prior work demonstrates that VG is an effective approach for probing the underlying dynamics of EEG data [24], but it often ignores the variable strength of network connections, which offers additional information. Therefore, we consider an improved VG method that weighs network connections accordingly.
This paper proposes a novel analytical framework that integrates Weighted Visibility Graphs (WVG) with ensemble learning for the early detection of preAD using multichannel scalp EEG recorded during word repetition. WVG extends the natural VG approach by incorporating edge weights that reflect the visibility degree between time points, providing more correlation information of EEG dynamics [25,26]. WVG also enhances traditional ERP analysis methods by providing a more comprehensive representation of brain dynamics. Applied to the word repetition EEG data, this framework may also offer insight into how cerebral amyloidosis affects brain function in preAD.
Specifically, the framework transforms each EEG channel into a WVG, from which graph features are extracted. To reduce feature noise, t-tests are used for feature selection. Statistical analyses reveal structural differences in WVG networks between preAD and normal elderly participants, supporting the identification of preAD.
Next, Principal Component Analysis (PCA) reduces the input data into their principal components, which are then fed into various machine learning algorithms. To improve classification accuracy and generalization, we apply an ensemble method, which helps mitigate the tendency of individual models to overfit certain EEG trials, especially when training data are limited [27]. Our experiments demonstrate the framework’s effectiveness in distinguishing preAD from normal old participants/individuals, using both linear and non-linear classifiers.

2. Methods

2.1. Participants

Participants were recruited from the University of California, Davis (UCD) Alzheimer’s Disease Research Center (ADRC) and the UC San Diego (UCSD) Shiley-Marcos ADRC. All participants provided informed written consent in accordance with the guidelines of the UCD and UCSD Human Research Protection Programs.
Participants exhibited no significant cognitive impairment on detailed neuropsychological testing and were given a clinical diagnosis of “normal cognition” by their ADRC following a comprehensive case conference review. They were classified as preclinical AD (preAD) if they had an abnormal amyloid PET scan, indicated by increased florbetapir binding in at least two brain regions per clinical read, and met current research criteria for preclinical AD in any of its three stages [2]. Those whose amyloid PET scans were normal (no increased florbetapir binding or only mildly increased in a single brain region per clinical read) were classified as “normal old” (NO). The study included 20 patients diagnosed with preAD (mean age = 73.6 years; range: 69–81). Additionally, 20 normal old persons participated (mean age = 72.8 years; range: 64–85).

2.2. Word Repetition Paradigm

During each trial, participants were exposed to an auditory phrase indicating a category (e.g., “a type of wood”, “a breakfast food”), followed by the presentation of a visual target word approximately 1 s later (stimulus duration = 0.3 s, visual angle 0.4 degrees). These target words, which were nouns, had a fifty-fifty chance of being semantically congruous (e.g., ‘cedar’) or incongruous with the preceding category phrase. The congruous and incongruous words were carefully matched on usage frequency (mean = 32, SD = 48) and word length (mean = 5.8 characters, SD = 1.6).
Participants were instructed to wait for 3 s following the onset of each target word, then read/articulate the word aloud, and follow it with a yes/no judgment regarding its congruity with the preceding category. No time constraint was placed on participants’ responses. Among all category-word pairs, one-third were presented only once, one-third were presented twice, and the remaining one-third were presented three times (with congruous and incongruous pairs being counterbalanced). For items presented twice, the interval between the first and second presentations was brief (ranging from 0 to 3 intervening trials, spanning approximately 10 to 40 s). For items presented three times, the intervals between presentations were longer (ranging from 10 to 13 intervening trials, spanning approximately 100 to 140 s). The experimental data were parsed into six conditions: All New (AN), New Congruous (NC), New Incongruous (NI), All Old (AO), Old Congruous (OC), and Old Incongruous (OI) words. Further details of the experimental design have been published previously [20,21,28].

2.3. EEG Signal Preparation

EEG recordings were obtained across participants using 32 channels [21,28] embedded in an elastic cap (ElectroCap, Eaton, OH). Electrode placements, defined by the International 10–20 system, included midline (Fz, Cz, Pz, POz), lateral frontal (F3, F4, F7, F8, FC1, FC2, FP1, FP2), temporal (T5, T6), parietal (P3, P4, CP1, CP2), and occipital sites (O1, O2, PO7, PO8). Additional sites included approximate locations of Broca’s area (Bl), Wernicke’s area (Wl), their right-hemisphere homologues (Br, Wr), and Brodmann area 41 (L41/R41). The EEG signals were sampled at 250 Hz, band-pass filtered within the range of 0.016 to 100 Hz, and offline re-referenced to averaged mastoids. Electrode impedances were kept below 5 kΩ. Data preprocessing and artifact rejection were carried out using MATLAB with the EEGLAB [29] and Fieldtrip [30] toolboxes. EEG epochs, time-locked to the onset of target words, were extracted with a duration of 2 s before and 2 s after visual word onset. Visual inspection was conducted to identify and discard non-physiological artifacts. Subsequently, independent component analysis was employed to isolate and remove eye movement artifacts.
The artifact-free EEG epochs were then extended to 8 s by mirror-padding (adding 2 s to both the beginning and end). Subsequently, they were band-pass filtered into five frequency bands (δ: 1–4 Hz, θ: 4–8 Hz, α: 8–13 Hz, β: 13–30 Hz, γ: 30–45 Hz) using zero-phase Hamming-windowed sinc finite impulse response filters, as implemented in EEGLAB (pop_eegfiltnew). This function automatically determined the optimal filter order and transition bandwidth to minimize distortions and maximize time precision.
For each of the five frequency bands of interest, a high-pass filter was initially applied, followed by a low-pass filter. Transition band widths were set to be 25% of the passband edge for passband edges >4 Hz, with a −6 dB cutoff frequency at the center of the transition band. Specifically, for the 4 Hz passband, a transition bandwidth of 2 Hz was employed, while for the 1 Hz passband ( δ band), a transition bandwidth of 1 Hz was utilized. Finally, both raw and band-pass filtered EEG segments were extracted, covering 1 s before and 2 s after the word onset, to facilitate further analyses.
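The filtering above was performed with EEGLAB's pop_eegfiltnew in MATLAB. For readers working in Python, a rough analogue with SciPy can be sketched as follows; the order heuristic and the 2 Hz minimum transition bandwidth below are illustrative assumptions, not EEGLAB's exact internal rules:

```python
import numpy as np
from scipy.signal import firwin, filtfilt

def bandpass_fir(x, lo, hi, fs=250.0):
    """Zero-phase Hamming-window FIR band-pass (sketch).

    Rough SciPy analogue of EEGLAB's pop_eegfiltnew, which chooses the
    filter order and transition bandwidth automatically; here we use a
    simple Hamming-window rule of thumb instead.
    """
    # transition bandwidth: 25% of the lower passband edge, minimum 2 Hz
    tb = max(0.25 * lo, 2.0)
    numtaps = int(3.3 * fs / tb)        # Hamming-window order heuristic
    numtaps += (numtaps % 2 == 0)       # odd length for a symmetric filter
    taps = firwin(numtaps, [lo, hi], pass_zero=False, window="hamming", fs=fs)
    # filtfilt applies the filter forward and backward: zero phase overall
    return filtfilt(taps, 1.0, x)
```

Applied to a mixture of a 2 Hz and a 10 Hz sinusoid with an 8–13 Hz (alpha) passband, the 10 Hz component is retained while the 2 Hz component is strongly attenuated.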

2.4. Time Series Preprocessing

For each participant, we conducted 72 word repetition trials in each experimental condition. To enhance the signal-to-noise ratio in the EEG data and extract event-related information, we averaged the trials within each condition, yielding a single averaged EEG time series per (condition, frequency band, channel) combination for each individual. Each time series was then downsampled into non-overlapping epochs of 80 ms by averaging the values of every 20 timesteps together. All time series were uniformly shortened to cover 1 s before stimulus onset and 2 s after it. This approach reduces signal noise and improves analysis efficiency:
Noise Reduction: This step effectively mitigated the risk of overfitting and minimized signal noise. The preprocessing technique acted as a low-pass filter, reducing the variance within individual EEG signals.
Efficiency: By reducing the signal length, we expedited the data analysis process.
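The trial-averaging and 80 ms binning described above (20 samples per bin at 250 Hz) might look like the following minimal sketch; the function name `preprocess` and the drop-trailing-remainder behavior are illustrative assumptions:

```python
import numpy as np

def preprocess(trials, bin_size=20):
    """Average trials, then downsample by averaging non-overlapping bins.

    trials: array of shape (n_trials, n_samples) for one
    (condition, band, channel) combination.
    At 250 Hz, bin_size = 20 samples corresponds to 80 ms epochs.
    """
    erp = trials.mean(axis=0)                      # trial average (ERP)
    n_bins = len(erp) // bin_size                  # drop any trailing remainder
    binned = erp[:n_bins * bin_size].reshape(n_bins, bin_size).mean(axis=1)
    return binned
```

For 72 trials of 750 samples (3 s at 250 Hz), this yields a 37-point series per combination.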

2.5. Weighted Visibility Graphs (WVG)

The EEG signal represents the electrical activity of neurons in the brain, detected at the scalp, and is markedly non-stationary, non-linear, and dynamic. The VG method offers a way to explore the underlying dynamics of EEG data by converting time series into two-dimensional graph representations. Different EEG channels capture electrophysiological information from distinct scalp regions, enabling the creation of single-channel complex networks; multiple channels yield multi-layer networks. The WVG is an advanced variant of the natural VG, incorporating weighted edges based on the visibility degree between corresponding data points. The construction of brain networks via WVG is illustrated schematically in Figure 1.
In constructing a WVG from univariate EEG data $\{x_i\}_{i=1}^{N}$, where $x_i = x(t_i)$, individual observations are treated as vertices, and a weighted adjacency matrix $W$ of size $N \times N$ is derived. Nodes in the WVG network correspond to time points $\{t_i\}$, with each edge representing a connection between two time points [31]. The nodes $x(t_i)$ and $x(t_j)$ are considered connected if they are “visible” from each other, i.e., if

$$\frac{x(t_i) - x(t_k)}{t_k - t_i} > \frac{x(t_i) - x(t_j)}{t_j - t_i}$$
is satisfied for all time points $t_k$ with $t_i < t_k < t_j$. The edge weight between two connected nodes is then given by the absolute value of

$$w_{i,j} = \arctan\frac{x(t_i) - x(t_j)}{t_i - t_j}, \quad i < j$$
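The construction can be sketched directly from the two equations above: a quadratic-time visibility check between every node pair, with arctangent slope magnitudes as edge weights. This is an illustrative implementation, not the authors' code:

```python
import numpy as np

def weighted_visibility_graph(x, t=None):
    """Build a WVG adjacency matrix from a 1-D series (O(N^2) sketch).

    Nodes i and j are connected when every intermediate point k satisfies
    the visibility (slope) criterion; the edge weight is the absolute
    arctangent of the slope between the two points.
    """
    x = np.asarray(x, dtype=float)
    n = len(x)
    t = np.arange(n, dtype=float) if t is None else np.asarray(t, dtype=float)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            slope_ij = (x[j] - x[i]) / (t[j] - t[i])
            # visible iff every intermediate point lies strictly below
            # the straight line joining points i and j
            visible = all(x[k] < x[i] + slope_ij * (t[k] - t[i])
                          for k in range(i + 1, j))
            if visible:
                W[i, j] = W[j, i] = abs(np.arctan(slope_ij))
    return W
```

On a strictly linear ramp, only adjacent points are (strictly) visible, each with weight arctan of the common slope; a dip between two points restores their mutual visibility.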

2.6. Feature Extraction

To capture the characteristics of the WVG networks associated with preclinical Alzheimer’s disease (preAD) and those of normal subjects, we compute 17 different topological features. A previous study of mild AD dementia and MCI converters introduced 12 of these features [21]: Clustering Coefficient (CC), Graph Index Complexity (GIC), Local Efficiency (LE), Global Efficiency (GE), Clustering Coefficient Sequence Similarity (CCSS), Small-worldness (SW), Size of Max Clique (SMaC), Cost of TSP (CTSP), Graph Density (GD), Independence Number (IN), Size of Minimum Cut (SMiC), and Vertex Coloring Number (VCN). Here we introduce an additional five features: Average Weighted Degree (AWD), Degree Distribution (DD), Network Entropy (NE), Modularity (M), and Average Path Length (APL).

2.6.1. Average Weighted Degree

The Average Weighted Degree is the average, over all nodes, of each node's strength, i.e., the sum of the weights of the edges incident to it. It captures the average strength of connections that each node has with its neighbors in the graph and serves as a significant metric for discerning networks with varying topologies. It is computed by averaging the weights of the links incident upon all nodes within the network [32]:

$$\bar{w} = \frac{1}{N}\sum_{i \in G} s_i$$

where $s_i$ is the strength of node $i$.
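As a minimal sketch, this feature reduces to averaging the row sums (node strengths) of the weighted adjacency matrix:

```python
import numpy as np

def average_weighted_degree(W):
    """Mean node strength: average over nodes of the summed incident weights."""
    strengths = W.sum(axis=1)   # s_i = sum of weights incident to node i
    return strengths.mean()
```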

2.6.2. Degree Distribution Index

The degree distribution refers to the statistical distribution of node degrees across the graph: it describes how degrees are spread among the nodes. The metric $P_{deg}(k)$ is commonly used to categorize complex networks and is obtained by tallying the occurrence of each degree across nodes. In this study, a probability distribution is obtained by fitting a Poisson distribution to the degree distribution vector. The degree distribution $P_{deg}(k)$ is given by

$$P_{deg}(k) = \frac{\lambda^k}{k!}\, e^{-\lambda}$$

The degree distribution index is characterized by the $\lambda$ value of the fitted distribution [33].

2.6.3. Network Entropy

The network entropy measures the heterogeneity of the degree distribution and connectivity patterns across the graph. Its computation relies on the degree distribution:

$$S = -\sum_k P_{deg}(k)\,\log P_{deg}(k)$$
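One way to compute this feature, using the empirical degree distribution of the binarized WVG as a stand-in for $P_{deg}(k)$ (an assumption of this sketch), is:

```python
import numpy as np

def network_entropy(W):
    """Shannon entropy of the empirical degree distribution.

    Degrees are taken from the binarized adjacency (edge present or not);
    probabilities are the observed frequencies of each degree value.
    """
    degrees = (W > 0).sum(axis=1)
    _, counts = np.unique(degrees, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))
```

A regular graph (all degrees equal) has zero entropy; heterogeneous degree sequences give positive entropy.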

2.6.4. Modularity

Modularity measures the degree to which a network can be divided into distinct, non-overlapping communities or modules. It serves as a significant metric for assessing the quality of clusters, or communities, derived from network partitioning [32]. The modularity Q of a weighted network is defined as follows:
$$Q = \frac{1}{2m}\sum_{i,j}\left(a_{i,j} - \frac{k_i k_j}{2m}\right)\delta(C_i, C_j)$$
Here, m represents the sum of weights of all links in the network, k i denotes the sum of weights of links attached to node i, C i indicates the community to which vertex i belongs, and the function δ ( C i , C j ) equals 1 if nodes i and j are in the same community and 0 otherwise. In our study, we applied the Louvain method [34] to allocate nodes into various communities. This method comprises two steps. Initially, each node is allocated to neighboring communities to maximize the gain in modularity Q. Subsequently, a new network is constructed, where each node represents a small community from the first step, and the weights of new links are determined by the sum of weights of links between nodes in the corresponding original communities. These steps are iterated until maximal modularity is achieved, and nodes cease to move. The modularity gain Δ Q is defined as follows [35]:
$$\Delta Q = \left[\frac{\Sigma_{in} + k_{i,in}}{2m} - \left(\frac{\Sigma_{tot} + k_i}{2m}\right)^2\right] - \left[\frac{\Sigma_{in}}{2m} - \left(\frac{\Sigma_{tot}}{2m}\right)^2 - \left(\frac{k_i}{2m}\right)^2\right]$$

where $\Sigma_{in}$ denotes the sum of weights of links within the community $C$, $\Sigma_{tot}$ represents the sum of weights of links attached to nodes in $C$, $k_i$ signifies the sum of weights of links attached to node $i$, $k_{i,in}$ indicates the sum of weights of links from node $i$ to nodes in $C$, and $m$ represents the sum of weights of all links in the network.
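The modularity formula can be evaluated directly for a given labeling; Louvain then searches for the labeling that maximizes it (recent networkx versions expose this as `louvain_communities`). A plain-NumPy sketch of Q for a fixed partition:

```python
import numpy as np

def modularity(W, labels):
    """Modularity Q of a weighted network for a given community labeling.

    Implements Q = (1/2m) * sum_ij (a_ij - k_i*k_j/(2m)) * delta(C_i, C_j),
    with a_ij the edge weights, k_i the node strengths, and m the total
    edge weight of the network.
    """
    W = np.asarray(W, dtype=float)
    labels = np.asarray(labels)
    k = W.sum(axis=1)                 # node strengths
    two_m = k.sum()                   # 2m: every edge weight counted twice
    delta = labels[:, None] == labels[None, :]
    return ((W - np.outer(k, k) / two_m) * delta).sum() / two_m
```

For two triangles joined by a single bridge edge, splitting at the bridge gives Q = 5/14, the partition Louvain would recover.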

2.6.5. Average Path Length

The Average Path Length measures the average number of steps or connections required to travel between any two nodes in the graph. It is a crucial metric for gauging the information transmission capability of a network, assessing the connectivity of the overall functional network across both local and distant connections. The average path length $L$ is defined as

$$L = \frac{1}{N(N-1)}\sum_{i \neq j} l_{i,j}$$

where $l_{i,j}$ is the shortest path length between nodes $i$ and $j$.
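A sketch computing this feature with SciPy's shortest-path routine; hop counts are used here as the path lengths, and a weighted variant would instead pass distances such as 1/w_ij:

```python
import numpy as np
from scipy.sparse.csgraph import shortest_path

def average_path_length(W):
    """Mean shortest-path length l_ij over all ordered node pairs i != j."""
    A = (np.asarray(W) > 0).astype(float)   # binarize: edges as unit hops
    D = shortest_path(A, method="D", directed=False, unweighted=True)
    n = len(A)
    off_diag = D[~np.eye(n, dtype=bool)]    # exclude the zero diagonal
    return off_diag.mean()
```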

3. Feature Selection

3.1. Two-Tailed t-Test

During the feature selection phase, we take particular care to avoid information leakage. In the literature [21,26,36], feature selection is often performed on the entire dataset; however, this introduces information leakage [37], because the test data have already been seen during the feature selection stage. The impact of such leakage can be large, especially when the number of features is much larger than the number of samples. To address this issue, we randomly select 85% of the original data for training and use the remaining 15% as a test set. Crucially, feature selection is conducted using only the training set, which avoids artificial inflation of classification accuracy.
The extensive feature extraction yields a total of 8676 features, which may contain considerable noise. The count of 8676 is derived as (15 channels × (5 bands + 1 raw) × 16 single-channel features × 6 conditions) + ((5 bands + 1 raw) × 1 all-channel feature (CCSS) × 6 conditions) = 8640 + 36. To discern the statistical significance of graph features between groups within the training data, we apply a two-tailed t-test on the training set to obtain significance levels (p-values) using the SciPy open-source library. We set a significance threshold of 0.01, and only the features that pass this test are used in subsequent analyses.
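A minimal sketch of this leakage-free selection step with SciPy's `ttest_ind`; only training rows are ever passed in, and the function name `select_features` is illustrative:

```python
import numpy as np
from scipy.stats import ttest_ind

def select_features(X_train, y_train, alpha=0.01):
    """Two-tailed t-test feature selection on the training set only.

    Returns indices of features whose group difference (label 0 vs. 1)
    is significant at p < alpha; the test set is never consulted.
    """
    g0 = X_train[y_train == 0]
    g1 = X_train[y_train == 1]
    _, p = ttest_ind(g0, g1, axis=0)    # one test per feature column
    return np.where(p < alpha)[0]
```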

3.2. Principal Component Analysis (PCA)

Although we retain only features with a significance level of p < 0.01, the number of features remains large. To further reduce dimensionality, we use PCA, which linearly transforms features from a higher-dimensional space to a lower-dimensional space defined by the eigenvectors that capture the directions of greatest variance. Following the suggestion by Ahmadlou et al. [36], we employ PCA to reduce the feature space to 11 dimensions.
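As an illustrative, dependency-light sketch (scikit-learn's `PCA` would serve equally well), the 11-component reduction can be written with an SVD; fitting the projection on the training set only keeps the test data unseen:

```python
import numpy as np

def pca_reduce(X_train, X_test, n_components=11):
    """Project onto the top principal components of the training set.

    Center with the training mean, take the leading right singular
    vectors of the centered training matrix, and apply the same
    projection to the test set.
    """
    mu = X_train.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
    components = Vt[:n_components]      # directions of greatest variance
    return (X_train - mu) @ components.T, (X_test - mu) @ components.T
```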

4. Machine Learning Classifiers

We evaluate various machine learning algorithms for classification, including linear logistic regression (LR), linear soft-margin support vector machines (SVM), linear discriminant analysis (LDA), k-nearest neighbor (KNN), random forest (RF), and a fully connected artificial neural network (ANN). Each algorithm serves a specific purpose: logistic regression and support vector machines evaluate linear separability. LDA models the data distributions of both classes as Gaussian distributions with equal covariances and draws a linear decision boundary between their means. K-nearest neighbor and random forest make no assumptions about the distribution of the data. Lastly, a fully connected artificial neural network is included to approximate more intricate decision boundaries; for this purpose, we employ a simple 2-layer fully connected network with Rectified Linear Unit (ReLU) activations.
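A sketch of this classifier suite using scikit-learn; the hyperparameters shown (e.g., k = 5, 100 trees, two 32-unit hidden layers) are illustrative defaults, not the study's tuned settings:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier

def make_classifiers(seed=0):
    """One instance of each classifier family evaluated in the study."""
    return {
        "LR": LogisticRegression(max_iter=1000),
        "SVM": SVC(kernel="linear", C=1.0),
        "LDA": LinearDiscriminantAnalysis(),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "RF": RandomForestClassifier(n_estimators=100, random_state=seed),
        "ANN": MLPClassifier(hidden_layer_sizes=(32, 32), activation="relu",
                             max_iter=2000, random_state=seed),
    }
```

Each model exposes the same `fit`/`predict` interface, so the downstream evaluation loop can treat them uniformly.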

5. Ensemble

To enhance the reliability and robustness of the classifiers, we incorporate an ensemble method in conjunction with the classifiers. In our dataset, each channel comprises 72 trials, which we systematically divide into 31 distinct non-overlapping subgroups. For each of these subgroups, we independently train the classifiers, allowing them to learn from different subsets of the data. Subsequently, we employ a majority voting mechanism to aggregate the predictions generated by the individual classifiers.
This ensemble strategy leverages the diversity inherent in the training data subsets, thereby mitigating the risk of overfitting and enhancing the generalization capacity of our classification model. By combining the predictions from multiple classifiers trained on diverse data subsets, we aim for a more robust and accurate classification outcome.
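The majority-voting step above can be sketched as follows, assuming each subgroup model outputs a 0/1 label per subject:

```python
import numpy as np

def majority_vote(predictions):
    """Aggregate binary predictions from classifiers trained on
    different trial subgroups: one vote per subgroup model.

    predictions: array of shape (n_models, n_subjects) with 0/1 labels.
    Returns the label chosen by more than half of the models.
    """
    predictions = np.asarray(predictions)
    votes = predictions.sum(axis=0)
    return (votes > len(predictions) / 2).astype(int)
```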

6. Statistical Analysis on Features

As stated in Section 3.1, to address the issue of data leakage, we use a randomized approach to select 85% of the original data (40 subjects) as the training dataset. The remaining 15% serves as the testing dataset, with this process repeated for 100 rounds. Feature selection using two-tailed t-tests is conducted only on the training set.
The feature extraction produces a total of 8676 features across different channels, bands, and conditions. We evaluated the statistical differences between preAD patients and normal groups feature by feature. Table 1 presents the number of features selected by each frequency band and word repetition paradigm condition using the two-tailed t-tests, including the mean and standard deviation (std) of the 100 splits. Only features with a significance level of p < 0.01 are selected. We can see that the largest number of features comes from the raw band. Among the different conditions, the OI condition contributes the highest number of features.
Table 2 summarizes feature distribution after a two-tailed t-test across different channels, revealing potential variations in neural responses across distinct brain regions. This analysis enables us to identify channels that are particularly informative in discriminating between subject groups, thereby enhancing our understanding of the underlying neural mechanisms at play. Table 2 reveals that most selected features are derived from Fz, Pz, and Cz. This observation leads us to believe that focusing on midline sites may provide more substantial assistance in identifying preAD. The next three most helpful sites were over the temporal scalp (Wl, Wr, T6).
Moreover, Table 3 provides a summary of the most frequently selected features after the two-tailed t-test, offering insight into which features are particularly beneficial for distinguishing between preAD patients and normal participants in our approach. Table 3 highlights that features, such as Clustering Coefficient, Local Efficiency and Clustering Coefficient Sequence Similarity are the most selected features.
Figure 2 presents the normalized feature values (averaged across subjects) for both preAD and control groups under the OI condition, raw band, and Cz channel. Notably, the Clustering Coefficient, Local Efficiency, and CCSS values of the preAD group are lower than those of the normal group with p < 0.01 (marked by **). Additionally, the Average Weighted Degree, Graph Index Complexity and network entropy of the preAD group are higher than those of the normal group with p < 0.05, while the Average Path Length of the preAD group is lower than that of the normal group with p < 0.05 (marked by *). Similar distinguishable results can be observed for other conditions, bands and channel combinations.
Moreover, we visualize the separation between preAD and normal by projecting the selected features down to two dimensions using PCA, based on one representative split out of 100, as shown in Figure 3. All classifiers are able to separate the two groups, even in two dimensions, although there are a small number of misclassified points.
The ten most important features for each two-dimensional PCA projection from Figure 3 are listed in Table 4. The raw and delta bands produced the largest number of features, with the most common being Clustering Coefficient and Local Efficiency in electrodes Fz, Pz, and Cz, as well as Clustering Coefficient Sequence Similarity across all channels.

7. Machine Learning Results

Each model undergoes training and testing repeatedly for 100 splits, where the dataset is randomly split into a training set comprising 85% of the subjects and a test set containing the remaining 15%, with a matched number of preAD and normal subjects. These repeated evaluations validate the framework’s generalizability to new, unseen patients—supporting its potential for real-world applications. Classification metrics, in terms of accuracy, precision, recall, and AUROC, are computed based on the performance on the test set, with the results averaged across all 100 splits for each model. Moreover, we also present the K-fold (K = 8) cross-validation results using the same data and features in the Appendix A.
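The repeated, class-balanced 85%/15% splitting can be sketched as a generator; the subject ordering assumed here (first 20 preAD, next 20 normal) is an illustrative convention, not the study's actual data layout:

```python
import numpy as np

def repeated_splits(n_subjects=40, n_rounds=100, test_frac=0.15, seed=0):
    """Yield repeated random train/test splits with equal numbers of
    preAD and normal subjects in each half.

    Assumes subjects 0..19 are preAD and 20..39 are normal.
    """
    rng = np.random.default_rng(seed)
    half = n_subjects // 2
    n_test_per_group = max(1, round(half * test_frac))   # 3 per group here
    for _ in range(n_rounds):
        pre = rng.permutation(half)            # preAD indices
        nor = rng.permutation(half) + half     # normal indices
        test = np.r_[pre[:n_test_per_group], nor[:n_test_per_group]]
        train = np.r_[pre[n_test_per_group:], nor[n_test_per_group:]]
        yield train, test
```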

7.1. Classification Without Ensemble

The discrimination between preAD and normal individuals averaged 88% across all classifiers, as shown in Table 5. The K-nearest neighbor, SVM, and random forest classifiers achieved a discrimination accuracy >89% for preAD vs. normal. In comparison, a standard VG without ensemble methods achieved an accuracy of only 79%. Precision, recall, and AUROC are also reported in Table 5. We note that the standard deviations are relatively high; this reflects the inherently small number of samples, and any algorithm with similar average performance would exhibit a similar range of std.

7.2. Classification with Ensemble

To enhance the reliability and precision of our classification, we incorporate an ensemble method in conjunction with the classifiers. We partition the trials in each channel into 31 distinct non-overlapping subgroups. For each subgroup, classifiers are trained independently, enabling them to learn from diverse data subsets. Subsequently, we employ a majority voting mechanism to consolidate predictions from individual classifiers. We observe a consistent improvement in accuracy across all classifiers (Table 6). This ensemble approach yields an accuracy of approximately 91%. Additionally, the Random Forest achieved the highest discrimination accuracy of 92%.

7.3. Reduce Channel/Trials/Band Without Ensemble

Our dataset includes 15 distinct channels, each comprising 72 trials. To improve clinical efficiency, we reduce the number of channels to expedite electrode setup and minimize the number of trials per channel to reduce data collection time.
Specifically, we use the training set to select five channels (Fz, Pz, Cz, Wl, and Wr), which yield the most features after the two-tailed t-test. The classification results with these five reduced channels remain largely unchanged (Table 7). Moreover, reducing the number of trials per channel to 30 results in accuracy that remains largely stable or experiences a slight decrease (Table 8).
Similarly, we can also reduce the number of bands used to make the classification while maintaining similar classification accuracy (Table 9). Although reducing the number of bands does not necessarily reduce EEG data collection time, it can reduce the number of features and computational costs.

7.4. Reduce Channel/Trials/Band with Ensemble

When we integrate ensemble techniques with the above data reduction strategies, we observe an overall accuracy increase of approximately 2–3% across all scenarios (Table 10, Table 11 and Table 12). This suggests that leveraging ensemble methods can effectively complement data reduction efforts, enhancing the robustness and performance of our classification models.

8. Discussion

The primary contribution of this paper lies in the development and validation of a novel analytical framework for the early detection of preclinical Alzheimer’s disease using cognitive ERP/EEG. The integration of Weighted Visibility Graphs (WVG) with ensemble learning techniques offers a robust approach to identify preAD participants with classification accuracy up to 92%. This degree of classification accuracy is comparable to AD biomarker platforms recently approved by the FDA to identify patients with amyloid pathology associated with AD [38]. Also remarkable is that this high degree of accuracy was achieved in a sample of preclinical AD, who have increased amyloid binding on florbetapir PET scans, but no significant cognitive deficits were evident on comprehensive neuropsychological testing conducted in an ADRC setting.
Specifically, this paper makes the following contributions:
  • Integration of WVG with Ensemble Learning: Our framework integrating WVG and ensemble learning enhances traditional ERP and EEG analysis methods for the early detection of AD in its preclinical stages.
  • Experimental Validation: The efficacy of the proposed framework was demonstrated on a dataset comprising 20 preAD and 20 normal old participants. The results showed that the framework achieves accuracy of up to 92% with both linear and non-linear classifiers, highlighting its potential clinical utility. Some specific strengths of our analytic approach are highlighted below in Section 8.1 and Section 8.2.
  • Improving clinical efficiency: Our experimental results demonstrate that our framework can achieve comparable classification results while utilizing less data, e.g., by employing fewer task conditions, a reduced number of channels and filter bands, and a smaller number of trials per channel. These outcomes indicate the potential of saving valuable clinical time.
An important study limitation is the modest size of our preclinical AD sample (n = 20). Replication of these results in larger, independent samples is therefore essential and will be a focus of our future research. Another limitation is that we did not obtain tau PET or tau biomarkers from CSF or plasma for the majority of these participants.

8.1. Ensemble

Ensemble learning is a technique in which multiple models are combined to improve overall performance and robustness. By combining the predictions of multiple models, ensemble methods often achieve better accuracy than any individual model, because the errors of the individual models can cancel each other out. Ensembles can also reduce model variance. Moreover, ensemble models are generally more robust to noise and outliers in the data: the combined decision-making process smooths out irregularities and makes the model less sensitive to peculiarities of the training data.
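A minimal illustration of this idea with scikit-learn's soft-voting ensemble follows; the toy data, base models, and hyperparameters here are placeholders, not the exact configuration used in this study:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Toy stand-in for the PCA feature matrix (40 subjects, two classes).
X, y = make_classification(n_samples=40, n_features=5, n_informative=3,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                          stratify=y, random_state=0)

# Soft voting averages the base models' predicted class probabilities,
# so their uncorrelated errors tend to cancel out.
ensemble = VotingClassifier(
    estimators=[("lr", LogisticRegression(max_iter=1000)),
                ("knn", KNeighborsClassifier(n_neighbors=3)),
                ("rf", RandomForestClassifier(n_estimators=100, random_state=0))],
    voting="soft")
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
print(f"test accuracy: {acc:.2f}")
```

Averaging probabilities (soft voting) rather than hard labels preserves each model's confidence, which typically helps when the base models are diverse.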

8.2. Classification Significance of Conditions/Bands/Channels/Features

Among the different conditions analyzed, the OI condition (Old incongruous words) contributes the highest number of features, as shown in Table 1. This finding suggests that the OI condition may be particularly sensitive to neural changes associated with preAD. Moreover, the largest number of features comes from the raw band, indicating that raw EEG signals hold significant information beyond that obtained within any single traditional EEG frequency band (e.g., α, β), which is critical for distinguishing between preAD and normal subjects.
The majority of selected features are derived from midline sites, specifically Fz, Pz, and Cz, as shown in Table 2. These midline channels are known to be sensitive to changes in preAD relative to robust normal elderly on the ERP word repetition and congruity effects [15]. For example, the preAD group showed a severe reduction in the size of the P600 repetition effect, which was largest over the centro-parietal midline channels in robust normal elderly [15]. The same preAD group also showed a reduction in the typical centro-posterior N400 effect; their N400 effect was largest over centro-anterior scalp sites. The midline sites are known to be involved in a variety of cognitive functions, including attention, executive function, and memory processing [39]. Midline/medial brain regions such as the posterior cingulate and precuneus are among the earliest brain predilection sites for early amyloid deposition in AD [40]. Our observations may help future research on EEG source localization, which struggles to pinpoint the precise origin of brain activity because of EEG's poor spatial resolution [41], to determine whether these regions generate the midline cognitive ERP effects. In any case, focusing on these midline sites may be particularly helpful in identifying preclinical AD: changes in the connectivity and activity patterns of these regions may be more pronounced or more readily detectable, making them reliable indicators for early diagnosis.
Following the midline channels, the left and right temporal channels (Wl, Wr, T6, T5) provided the next largest number of features for discriminating preAD from normal old (Table 2). This may reflect that the temporal cortex is a predilection site for neurofibrillary tangles in early AD (Braak stage II–III), and that the temporal cortex is particularly critical for semantic processing and for the classification task used in our ERP experiment.
Table 3 and Figure 2 show that Clustering Coefficient, Local Efficiency, and Clustering Coefficient Sequence Similarity are the most frequently selected features. In graph theory, the clustering coefficient of a node measures the extent to which its neighbors form a complete graph (i.e., how interconnected the neighbors are). In the context of visibility graphs constructed from EEG signals, a high clustering coefficient indicates that neighboring time points (nodes) have strong mutual visibility, reflecting a robust local network structure. AD is characterized by progressive neural degeneration, which disrupts both local and global brain connectivity [42]. This disruption can manifest as changes in the clustering coefficient within EEG-derived visibility graphs. A decrease in the clustering coefficient could indicate reduced local synaptic density, loss of neuronal connectivity, and synaptic dysfunction, all of which are early signs of AD. Because synaptic density is one of the strongest predictors of AD severity [10], tracking changes in the clustering coefficient over time may allow clinicians to sensitively monitor AD progression.
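The computation can be sketched as follows: build a visibility graph from a short epoch, weight each edge by its slope angle as in Figure 1, and read off per-node clustering coefficients. This is an illustrative toy example using networkx (clustering is computed on the unweighted topology for simplicity; the study's exact WVG feature extraction may differ):

```python
import math
import networkx as nx

def weighted_visibility_graph(series):
    """Natural visibility graph of a 1-D series, with each edge weighted
    by the slope angle between its two samples (cf. Figure 1)."""
    g = nx.Graph()
    n = len(series)
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            # i and j are mutually visible if every intermediate sample
            # lies strictly below the straight line connecting them.
            if all(series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                g.add_edge(i, j, weight=math.atan((series[j] - series[i]) / (j - i)))
    return g

ts = [3.0, 1.0, 2.5, 0.5, 4.0, 1.5, 2.0, 3.5]  # toy 8-sample "epoch"
wvg = weighted_visibility_graph(ts)
cc = nx.clustering(wvg)  # per-node (topological) clustering coefficient
print("mean clustering coefficient:", sum(cc.values()) / len(cc))
```

Adjacent samples are always mutually visible, so the graph is connected; peaks of the series tend to become hubs with many visibility edges.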
Local efficiency of a node in a graph measures the efficiency of information transfer within its immediate neighborhood. It is calculated as the average efficiency of the subnetwork formed by the node’s neighbors, excluding the node itself. High local efficiency indicates that the neighbors are well-connected, facilitating efficient local information processing. A decrease in local efficiency in EEG visibility graphs can indicate early disruptions in local neural circuits. This can be an early biomarker for AD, as synaptic dysfunction and local network breakdown are early pathological features of the disease.
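For concreteness, networkx's `local_efficiency` (the average, over all nodes, of the global efficiency of each node's neighbor subgraph) behaves exactly as described. This toy illustration is not the study's code; the two extreme graphs below bracket the range of the measure:

```python
import networkx as nx

# Local efficiency of a node = global efficiency of the subgraph induced by
# its neighbours (the node itself excluded); nx.local_efficiency averages
# this quantity over all nodes.
complete = nx.complete_graph(5)  # every neighbourhood is fully connected
chain = nx.path_graph(5)         # no two neighbours of any node are linked

print(nx.local_efficiency(complete))  # 1.0
print(nx.local_efficiency(chain))     # 0.0
```

In the EEG setting, a drift of this value toward the chain-like regime would correspond to the local-circuit breakdown described above.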
Clustering Coefficient Sequence Similarity (CCSS) measures the resemblance between the clustering coefficient sequences of nodes across visibility graphs (VGs) derived from different EEG time series channels. AD disrupts normal brain network organization, leading to alterations in local clustering properties across different brain regions. This can result in reduced CCSS, as the similarity in local network organization between different regions becomes less pronounced. A decrease in CCSS between EEG channels could indicate early disruptions in brain connectivity, especially in neural networks associated with AD pathology.
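A minimal sketch of this measure follows; since the exact similarity function is not spelled out here, the code instantiates CCSS as the Pearson correlation between the per-node clustering-coefficient sequences of two channels' visibility graphs. The helper names and the choice of correlation are assumptions:

```python
import networkx as nx
import numpy as np

def cc_sequence(series):
    """Per-node clustering-coefficient sequence of the natural
    visibility graph of a 1-D time series."""
    n = len(series)
    g = nx.Graph()
    g.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if all(series[k] < series[j] + (series[i] - series[j]) * (j - k) / (j - i)
                   for k in range(i + 1, j)):
                g.add_edge(i, j)
    cc = nx.clustering(g)
    return np.array([cc[i] for i in range(n)])

def ccss(series_a, series_b):
    """Similarity of two channels' clustering-coefficient sequences,
    instantiated here as their Pearson correlation."""
    return float(np.corrcoef(cc_sequence(series_a), cc_sequence(series_b))[0, 1])

t = np.linspace(0, 4 * np.pi, 32)
chan1 = np.sin(t)
chan2 = np.sin(t + 0.1)                           # nearly identical channel
chan3 = np.random.default_rng(1).normal(size=32)  # unrelated channel
print(ccss(chan1, chan2))  # high: similar local network organization
print(ccss(chan1, chan3))
```

Under this instantiation, a drop in CCSS between two channels reflects diverging local network organization, the signature attributed above to early AD pathology.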

9. Conclusions

In summary, we report an effective framework for early detection of preclinical Alzheimer’s disease from multichannel EEG signals that combines Weighted Visibility Graphs, statistical feature selection, PCA-based dimensionality reduction, and ensemble machine learning. The optimized models achieved up to 92% classification accuracy in discriminating preclinical AD from normal old, demonstrating the framework's potential as a robust diagnostic tool with possible clinical utility. These findings provide further evidence that EEG/ERP measures may be sensitive to preclinical disease, and support the view that synaptic dysfunction is very common in preclinical AD and may be its earliest pathophysiology [3].
An important study limitation is our modest sample size. Replication of these results in larger, independent samples is therefore essential before deployment in clinical settings or in clinical trials for AD prevention. A focus of our future research will be to study larger samples of preclinical AD, ideally across multiple ADRC sites and in clinical settings such as subjective memory complaints or MCI. It will also be important to follow up on the long-term clinical outcomes and neuropathological diagnoses of these well-characterized ADRC participants. We will be able to test whether the misclassified preAD cases are less likely to convert to MCI or to show subsequent memory decline compared with preAD cases whose abnormalities were detected by our WVG and ensemble learning method. Prognostic markers are extremely valuable in preclinical AD, as they can identify who is most in need of an intervention (pharmacologic, behavioral, lifestyle), some of which may be invasive or carry substantial risks (e.g., brain hemorrhage or edema associated with amyloid immunotherapies [43]).

Author Contributions

Conceptualization, Y.L., J.X., X.L., J.O., Z.K. and S.T.; methodology, Y.L.; software, Y.L. and J.Z.; validation, Y.L.; formal analysis, Y.L.; investigation, Y.L. and J.X.; resources, X.L. and J.O.; data curation, Y.L. and J.X.; writing—original draft preparation, Y.L.; writing—review and editing, Y.L., J.X., X.L., J.O. and M.K.; visualization, Y.L.; supervision, X.L. and J.O.; project administration, X.L. and J.O.; funding acquisition, J.O. and J.B.B. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by National Institutes of Health grants R01-AG048252, P30-AG062429 and P30-AG072972.

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki, and the protocol was approved by the Institutional Review Board of the University of California, Davis (protocol code: 749614, date of approval: 24 July 2024).

Informed Consent Statement

All participants provided informed written consent according to the guidelines of the UCD and UCSD Human Research Protection Programs.

Data Availability Statement

The datasets used in this study were provided with permission from the Alzheimer’s Disease Research Centers at the University of California, San Diego, and the University of California, Davis. Requests for access to these datasets should be directed to John Olichney, M.D.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Additional Classification Results

In Table A1 and Table A2, we present the K-fold (K = 8) cross-validation results using the same data and features as described in the main text (Section 7.1 and Section 7.2). The results are comparable to those reported there: an average accuracy of 88% for the preAD vs. normal classification without ensemble, and 91% with ensemble.
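The cross-validation scheme can be sketched as follows (a toy feature matrix stands in for the real data, and LDA is chosen arbitrarily from the classifiers in Table A1):

```python
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold, cross_val_score

# Toy stand-in for the 40-subject feature matrix (20 preAD, 20 normal).
X, y = make_classification(n_samples=40, n_features=10, n_informative=4,
                           random_state=0)

# 8 folds of 5 subjects each; stratification keeps the class ratio
# roughly constant within every fold.
cv = StratifiedKFold(n_splits=8, shuffle=True, random_state=0)
scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=cv)
print(f"accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")
```

Reporting the mean and standard deviation across folds, as in Tables A1 and A2, conveys both expected performance and its variability on small samples.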
Table A1. 8-fold classification without ensemble.

Classifier            Accuracy       Precision      Recall         AUROC
Neural Network        88.11~14.34    84.45~15.43    81.43~22.54    87.45~12.54
3-Nearest Neighbor    87.34~14.23    85.34~16.43    84.34~20.45    88.45~16.45
SVM                   87.34~15.34    91.45~15.56    81.34~21.45    85.45~17.45
Logistic Regression   87.23~14.23    90.45~15.43    82.50~23.23    85.45~14.45
Random Forest         90.34~16.34    92.34~14.45    82.50~20.45    88.65~15.45
LDA                   86.34~14.45    90.34~16.34    81.50~21.45    86.56~15.45
Table A2. 8-fold classification with ensemble.

Classifier            Accuracy       Precision      Recall         AUROC
Neural Network        91.34~15.34    93.45~15.45    92.54~21.45    89.54~15.34
3-Nearest Neighbor    91.34~15.45    93.54~15.56    92.45~20.54    85.34~20.54
SVM                   90.34~16.33    92.45~16.56    92.45~18.45    84.34~20.54
Logistic Regression   91.34~14.23    93.45~15.65    91.43~21.24    88.34~20.34
Random Forest         90.34~14.34    91.45~16.54    90.00~18.34    89.34~19.34
LDA                   91.34~14.34    92.54~21.56    90.34~21.45    85.34~21.34
Moreover, reducing the number of trials per channel to 60 or 30 results in accuracy that remains largely stable or experiences a slight decrease (Table A3 and Table 8). However, further reduction to 10 trials per channel leads to a noticeable decrease in accuracy (Table A4).
Table A3. Reduce number of trials to 60 without ensemble (mean-std across 100 splits).

Classifier            Accuracy       Precision      Recall         AUROC
Neural Network        88.45~15.45    81.4~32.34     78.34~31.34    84.23~22.34
3-Nearest Neighbor    87.34~17.56    84.34~25.34    81.34~21.34    82.34~22.34
SVM                   87.34~18.45    80.34~22.33    82.34~31.34    82.34~22.34
Logistic Regression   88.34~19.34    81.34~32.45    78.34~31.34    84.45~22.45
Random Forest         87.34~19.45    81.23~31.34    81.34~31.34    84.23~25.34
LDA                   88.34~20.34    81.34~32.34    76.34~32.34    78.34~22.34
Table A4. Reduce number of trials to 10 without ensemble (mean-std across 100 splits).

Classifier            Accuracy       Precision      Recall         AUROC
Neural Network        82.34~15.34    87.45~11.45    81.34~23.23    91.34~14.34
3-Nearest Neighbor    83.34~13.45    94.34~15.23    81.34~22.34    82.45~16.34
SVM                   84.34~14.45    91.34~14.45    83.12~21.34    89.23~12.34
Logistic Regression   82.34~14.45    92.34~11.34    80.34~21.34    91.23~16.34
Random Forest         83.34~13.56    91.45~15.34    85.23~20.45    85.34~24.34
LDA                   79.34~13.45    94.23~12.34    82.12~21.34    88.34~13.45
In Table A5 and Table A6, we present the classification results using five reduced channels (Fz, Pz, Cz, C3, and C4). In Table A7 and Table A8, we present the classification results using three reduced bands (theta, beta, and gamma). These channels and bands were selected based on the study by Perez-Valero et al. [44].
Table A5. Reduce number of channels to 5 (Fz, Pz, Cz, C3 and C4) without ensemble (mean-std across 100 splits).

Classifier            Accuracy       Precision      Recall         AUROC
Neural Network        87.45~12.34    95.45~13.34    82.34~22.34    90.45~15.34
3-Nearest Neighbor    86.34~14.34    93.45~14.34    80.34~22.34    85.23~14.34
SVM                   85.34~14.34    90.45~15.23    81.46~21.56    86.45~16.45
Logistic Regression   87.45~13.45    93.56~14.45    81.45~22.46    90.56~13.56
Random Forest         87.34~15.34    92.45~14.56    84.56~21.46    87.56~17.45
LDA                   84.45~13.45    93.45~13.57    83.57~21.56    86.56~15.57
Table A6. Reduce number of channels to 5 (Fz, Pz, Cz, C3 and C4) with ensemble (mean-std across 100 splits).

Classifier            Accuracy       Precision      Recall         AUROC
Neural Network        88.23~14.23    90.46~14.73    91.47~20.26    88.73~20.74
3-Nearest Neighbor    88.45~13.45    90.73~15.26    90.72~21.47    89.45~21.47
SVM                   88.12~13.45    91.56~15.34    90.45~21.47    87.34~21.25
Logistic Regression   87.34~14.23    90.65~15.57    90.48~20.43    89.57~21.75
Random Forest         88.34~13.45    91.92~14.56    91.78~21.63    89.72~18.27
LDA                   86.34~13.23    90.28~14.82    91.28~20.82    86.38~19.73
Table A7. Reduce number of bands to 3 (theta, beta, gamma) without ensemble (mean-std across 100 splits).

Classifier            Accuracy       Precision      Recall         AUROC
Neural Network        85.45~12.44    93.34~15.56    84.44~22.45    83.45~20.34
3-Nearest Neighbor    86.45~13.34    93.57~13.67    88.38~21.56    80.34~22.45
SVM                   85.45~15.45    94.72~14.72    90.67~21.56    81.56~21.76
Logistic Regression   86.45~14.45    92.46~13.93    90.03~22.56    83.56~21.56
Random Forest         87.45~15.34    92.56~12.74    91.46~19.56    80.73~20.84
LDA                   86.34~13.34    93.64~13.47    91.37~16.75    81.74~20.47
Table A8. Reduce number of bands to 3 (theta, beta, gamma) with ensemble (mean-std across 100 splits).

Classifier            Accuracy       Precision      Recall         AUROC
Neural Network        87.34~15.45    89.64~13.83    91.37~21.74    85.75~13.64
3-Nearest Neighbor    88.45~13.45    93.54~13.57    89.38~21.35    86.74~13.37
SVM                   89.34~14.45    94.58~14.85    91.37~19.37    85.82~16.82
Logistic Regression   87.34~14.34    93.48~14.82    89.92~20.38    89.82~19.72
Random Forest         88.45~13.45    92.82~15.38    91.83~21.48    88.63~17.63
LDA                   87.34~14.23    91.78~18.34    91.83~21.37    88.72~20.73

References

  1. Jack, C.R.; Holtzman, D.M. Biomarker modeling of Alzheimer’s disease. Neuron 2013, 80, 1347–1358. [Google Scholar] [CrossRef]
  2. Sperling, R.A.; Aisen, P.S.; Beckett, L.A.; Bennett, D.A.; Craft, S.; Fagan, A.M.; Iwatsubo, T.; Jack, C.R., Jr.; Kaye, J.; Montine, T.J.; et al. Toward defining the preclinical stages of Alzheimer’s disease: Recommendations from the National Institute on Aging-Alzheimer’s Association workgroups on diagnostic guidelines for Alzheimer’s disease. Alzheimer’s Dement. 2011, 7, 280–292. [Google Scholar] [CrossRef]
  3. Mucke, L.; Selkoe, D.J. Neurotoxicity of amyloid β-protein: Synaptic and network dysfunction. Cold Spring Harb. Perspect. Med. 2012, 2, a006338. [Google Scholar] [CrossRef]
  4. Palop, J.J.; Mucke, L. Amyloid-β–induced neuronal dysfunction in Alzheimer’s disease: From synapses toward neural networks. Nat. Neurosci. 2010, 13, 812–818. [Google Scholar] [CrossRef] [PubMed]
  5. Lesne, S.E.; Sherman, M.A.; Grant, M.; Kuskowski, M.; Schneider, J.A.; Bennett, D.A.; Ashe, K.H. Brain amyloid-β oligomers in ageing and Alzheimer’s disease. Brain 2013, 136, 1383–1398. [Google Scholar] [CrossRef] [PubMed]
  6. Lacor, P.N.; Buniel, M.C.; Furlow, P.W.; Clemente, A.S.; Velasco, P.T.; Wood, M.; Viola, K.L.; Klein, W.L. Aβ oligomer-induced aberrations in synapse composition, shape, and density provide a molecular basis for loss of connectivity in Alzheimer’s disease. J. Neurosci. 2007, 27, 796–807. [Google Scholar] [CrossRef] [PubMed]
  7. Lue, L.F.; Kuo, Y.M.; Roher, A.E.; Brachova, L.; Shen, Y.; Sue, L.; Beach, T.; Kurth, J.H.; Rydel, R.E.; Rogers, J. Soluble amyloid β peptide concentration as a predictor of synaptic change in Alzheimer’s disease. Am. J. Pathol. 1999, 155, 853–862. [Google Scholar] [CrossRef]
  8. Mucke, L.; Masliah, E.; Yu, G.Q.; Mallory, M.; Rockenstein, E.M.; Tatsuno, G.; Hu, K.; Kholodenko, D.; Johnson-Wood, K.; McConlogue, L. High-level neuronal expression of Aβ1–42 in wild-type human amyloid protein precursor transgenic mice: Synaptotoxicity without plaque formation. J. Neurosci. 2000, 20, 4050–4058. [Google Scholar] [CrossRef]
  9. Selkoe, D.J. Alzheimer’s disease is a synaptic failure. Science 2002, 298, 789–791. [Google Scholar] [CrossRef]
  10. Terry, R.D.; Masliah, E.; Salmon, D.P.; Butters, N.; DeTeresa, R.; Hill, R.; Hansen, L.A.; Katzman, R. Physical basis of cognitive alterations in Alzheimer’s disease: Synapse loss is the major correlate of cognitive impairment. Ann. Neurol. Off. J. Am. Neurol. Assoc. Child Neurol. Soc. 1991, 30, 572–580. [Google Scholar] [CrossRef]
  11. Nunez, P.L.; Srinivasan, R. A theoretical basis for standing and traveling brain waves measured with human EEG with implications for an integrated consciousness. Clin. Neurophysiol. 2006, 117, 2424–2435. [Google Scholar] [CrossRef]
  12. Lizio, R.; Vecchio, F.; Frisoni, G.B.; Ferri, R.; Rodriguez, G.; Babiloni, C. Electroencephalographic Rhythms in Alzheimer’s Disease. Int. J. Alzheimer’s Dis. 2011, 2011, 927573. [Google Scholar] [CrossRef]
  13. Olichney, J.M.; Yang, J.C.; Taylor, J.; Kutas, M. Cognitive event-related potentials: Biomarkers of synaptic dysfunction across the stages of Alzheimer’s disease. J. Alzheimer’s Dis. 2011, 26, 215–228. [Google Scholar] [CrossRef] [PubMed]
  14. Javitt, D.C.; Martinez, A.; Sehatpour, P.; Beloborodova, A.; Habeck, C.; Gazes, Y.; Bermudez, D.; Razlighi, Q.R.; Devanand, D.; Stern, Y. Disruption of early visual processing in amyloid-positive healthy individuals and mild cognitive impairment. Alzheimer’s Res. Ther. 2023, 15, 42. [Google Scholar] [CrossRef] [PubMed]
  15. Olichney, J.M.; Pak, J.; Salmon, D.P.; Yang, J.C.; Gahagan, T.; Nowacki, R.; Hansen, L.; Galasko, D.; Kutas, M.; Iragui-Madoz, V.J. Abnormal P600 word repetition effect in elderly persons with preclinical Alzheimer’s disease. Cogn. Neurosci. 2013, 4, 143–151. [Google Scholar] [CrossRef]
  16. Mazaheri, A.; Segaert, K.; Olichney, J.; Yang, J.C.; Niu, Y.Q.; Shapiro, K.; Bowman, H. EEG oscillations during word processing predict MCI conversion to Alzheimer’s disease. Neuroimage Clin. 2018, 17, 188–197. [Google Scholar] [CrossRef] [PubMed]
  17. Olichney, J.M.; Iragui, V.J.; Salmon, D.P.; Riggins, B.R.; Morris, S.K.; Kutas, M. Absent event-related potential (ERP) word repetition effects in mild Alzheimer’s disease. Clin. Neurophysiol. 2006, 117, 1319–1330. [Google Scholar] [CrossRef]
  18. Olichney, J.; Morris, S.; Ochoa, C.; Salmon, D.; Thal, L.; Kutas, M.; Iragui, V. Abnormal verbal event related potentials in mild cognitive impairment and incipient Alzheimer’s disease. J. Neurol. Neurosurg. Psychiatry 2002, 73, 377–384. [Google Scholar] [CrossRef]
  19. Olichney, J.; Taylor, J.; Gatherwright, J.; Salmon, D.; Bressler, A.; Kutas, M.; Iragui-Madoz, V. Patients with MCI and N400 or P600 abnormalities are at very high risk for conversion to dementia. Neurology 2008, 70, 1763–1770. [Google Scholar] [CrossRef]
  20. Olichney, J.M.; Van Petten, C.; Paller, K.A.; Salmon, D.P.; Iragui, V.J.; Kutas, M. Word repetition in amnesia: Electrophysiological measures of impaired and spared memory. Brain 2000, 123, 1948–1963. [Google Scholar] [CrossRef]
  21. Zhang, J.; Xia, J.; Liu, X.; Olichney, J. Machine learning on visibility graph features discriminates the cognitive event-related potentials of patients with early Alzheimer’s disease from healthy aging. Brain Sci. 2023, 13, 770. [Google Scholar] [CrossRef]
  22. Deng, Z.; Xu, P.; Xie, L.; Choi, K.S.; Wang, S. Transductive joint-knowledge-transfer TSK FS for recognition of epileptic EEG signals. IEEE Trans. Neural Syst. Rehabil. Eng. 2018, 26, 1481–1494. [Google Scholar] [CrossRef]
  23. Lacasa, L.; Luque, B.; Ballesteros, F.; Luque, J.; Nuno, J.C. From time series to complex networks: The visibility graph. Proc. Natl. Acad. Sci. USA 2008, 105, 4972–4975. [Google Scholar] [CrossRef]
  24. Cai, L.; Wang, J.; Cao, Y.; Deng, B.; Yang, C. LPVG analysis of the EEG activity in Alzheimer’s disease patients. In Proceedings of the 2016 12th World Congress on Intelligent Control and Automation (WCICA), Guilin, China, 12–15 June 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 934–938. [Google Scholar]
  25. Cai, L.; Deng, B.; Wei, X.; Wang, R.; Wang, J. Analysis of spontaneous EEG activity in Alzheimer’s disease using weighted visibility graph. In Proceedings of the 2018 40th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, USA, 18–21 July 2018; IEEE: Piscataway, NJ, USA, 2018; pp. 3100–3103. [Google Scholar]
  26. Yu, H.; Zhu, L.; Cai, L.; Wang, J.; Liu, J.; Wang, R.; Zhang, Z. Identification of Alzheimer’s EEG with a WVG network-based fuzzy learning approach. Front. Neurosci. 2020, 14, 641. [Google Scholar] [CrossRef]
  27. Džeroski, S.; Ženko, B. Is combining classifiers with stacking better than selecting the best one? Mach. Learn. 2004, 54, 255–273. [Google Scholar] [CrossRef]
  28. Xia, J.; Mazaheri, A.; Segaert, K.; Salmon, D.P.; Harvey, D.; Shapiro, K.; Kutas, M.; Olichney, J.M. Event-related potential and EEG oscillatory predictors of verbal memory in mild cognitive impairment. Brain Commun. 2020, 2, fcaa213. [Google Scholar] [CrossRef] [PubMed]
  29. Delorme, A.; Makeig, S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 2004, 134, 9–21. [Google Scholar] [CrossRef] [PubMed]
  30. Oostenveld, R.; Fries, P.; Maris, E.; Schoffelen, J.M. FieldTrip: Open source software for advanced analysis of MEG, EEG, and invasive electrophysiological data. Comput. Intell. Neurosci. 2011, 2011, 156869. [Google Scholar] [CrossRef]
  31. Zou, Y.; Donner, R.V.; Marwan, N.; Donges, J.F.; Kurths, J. Complex network approaches to nonlinear time series analysis. Phys. Rep. 2019, 787, 1–97. [Google Scholar] [CrossRef]
  32. Supriya, S.; Siuly, S.; Wang, H.; Cao, J.; Zhang, Y. Weighted visibility graph with complex network features in the detection of epilepsy. IEEE Access 2016, 4, 6554–6566. [Google Scholar] [CrossRef]
  33. Stephen, A.T.; Toubia, O. Explaining the power-law degree distribution in a social commerce network. Soc. Netw. 2009, 31, 262–270. [Google Scholar] [CrossRef]
  34. Blondel, V.D.; Guillaume, J.L.; Lambiotte, R.; Lefebvre, E. Fast unfolding of communities in large networks. J. Stat. Mech. Theory Exp. 2008, 2008, P10008. [Google Scholar] [CrossRef]
  35. Deng, Z.; Jiang, Y.; Choi, K.S.; Chung, F.L.; Wang, S. Knowledge-leverage-based TSK fuzzy system modeling. IEEE Trans. Neural Netw. Learn. Syst. 2013, 24, 1200–1212. [Google Scholar] [CrossRef]
  36. Ahmadlou, M.; Adeli, H.; Adeli, A. New diagnostic EEG markers of the Alzheimer’s disease using visibility graph. J. Neural Transm. 2010, 117, 1099–1109. [Google Scholar] [CrossRef] [PubMed]
  37. de Jong, J. Feature Selection, Cross-Validation And Data Leakage. 2017. Available online: https://johanndejong.wordpress.com/2017/08/06/feature-selection-cross-validation-and-data-leakage (accessed on 20 May 2025).
  38. Catania, M.; Battipaglia, C.; Perego, A.; Salvi, E.; Maderna, E.; Cazzaniga, F.A.; Rossini, P.M.; Marra, C.; Vanacore, N.; Redolfi, A.; et al. Exploring the ability of plasma pTau217, pTau181 and beta-amyloid in mirroring cerebrospinal fluid biomarker profile of Mild Cognitive Impairment by the fully automated Lumipulse® platform. Fluids Barriers CNS 2025, 22, 9. [Google Scholar] [CrossRef]
  39. Jobson, D.D.; Hase, Y.; Clarkson, A.N.; Kalaria, R.N. The role of the medial prefrontal cortex in cognition, ageing and dementia. Brain Commun. 2021, 3, fcab125. [Google Scholar] [CrossRef]
  40. Ali, D.G.; Bahrani, A.A.; Barber, J.M.; El Khouli, R.H.; Gold, B.T.; Harp, J.P.; Jiang, Y.; Wilcock, D.M.; Jicha, G.A. Amyloid-PET levels in the precuneus and posterior cingulate cortices are associated with executive function scores in preclinical Alzheimer’s disease prior to overt global amyloid positivity. J. Alzheimer’s Dis. 2022, 88, 1127–1135. [Google Scholar] [CrossRef]
  41. McWeeny, S.; Norton, E.S. Understanding event-related potentials (ERPs) in clinical and basic language and communication disorders research: A tutorial. Int. J. Lang. Commun. Disord. 2020, 55, 445–457. [Google Scholar] [CrossRef]
  42. Sperling, R. The potential of functional MRI as a biomarker in early Alzheimer’s disease. Neurobiol. Aging 2011, 32, S37–S43. [Google Scholar] [CrossRef]
  43. Agarwal, A.; Gupta, V.; Brahmbhatt, P.; Desai, A.; Vibhute, P.; Joseph-Mathurin, N.; Bathla, G. Amyloid-related imaging abnormalities in Alzheimer disease treated with anti–Amyloid-β therapy. Radiographics 2023, 43, e230009. [Google Scholar] [CrossRef]
  44. Perez-Valero, E.; Morillas, C.; Lopez-Gordo, M.A.; Minguillon, J. Supporting the detection of early Alzheimer’s disease with a four-channel EEG analysis. Int. J. Neural Syst. 2023, 33, 2350021. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Example of a time series (8 data points) and the associated graph derived from the weighted visibility algorithm. Each node corresponds, in temporal order, to a data point of the time series. Each edge is weighted by the slope angle between the two points it connects.
Figure 2. Feature values (averaged across participants) for the preAD and normal groups, with error bars representing the standard error across subjects. Significance levels are from a two-tailed t-test across all subjects: ** denotes p ≤ 0.01 and * denotes p ≤ 0.05.
Figure 3. Two-dimensional PCA projections of the data with associated decision boundaries for all classifiers and data points, based on one representative split out of 100. We use two colors (red and black) to represent the two participant groups, and two markers (‘+’ and ‘−’) to distinguish between the training and test sets. For each plot, the PCA components were computed using only the data in that specific plot to reflect the actual input to the ML algorithms. In two dimensions, it is evident that preAD and normal groups are linearly separable with the features we extracted.
Table 1. The number of features (mean-std across 100 splits) produced by each band after two-tailed t-test.

Band      NA          NC          NI          OA          OC          OI           Total
Raw       4.1~1.7     4.1~3.5     2.2~2.5     9.4~3.7     6.5~3.1     20.1~4.3     46.4~15.3
Delta     0.9~0.3     4.0~1.9     0.2~0.1     3.9~2.6     2.9~1.8     8.4~3.9      20.3~13.6
Theta     2.1~1.1     1.9~0.7     0.6~0.2     0.5~0.3     1.6~0.5     0.6~0.3      7.3~3.7
Alpha     2.2~1.6     2.3~1.7     1.3~0.9     1.1~1.0     1.1~0.8     0.9~0.6      8.9~2.7
Beta      0.4~0.2     1.3~0.4     0.6~0.4     0.3~0.2     0.4~0.3     2.3~2.0      5.3~3.5
Gamma     0.5~0.5     3.7~2.6     0.4~4.6     5.0~4.6     2.7~4.6     3.0~0.3      15.3~4.3
Total     10.2~1.4    17.3~4.2    5.3~1.8     20.2~7.8    15.2~6.3    35.3~8.6     103.5~29.5
Table 2. The number of features (mean-std across 100 splits) produced by each channel after a two-tailed t-test. e_c is the CCSS feature across all channels.

Channel   NA          NC          NI          OA          OC          OI           Total
Fz        0.9~0.7     0.4~0.3     0.5~4.5     1.7~0.4     1.9~0.4     4.9~3.5      10.3~6.4
Pz        0.2~0.2     2.1~1.5     0.3~0.2     2.3~1.2     1.9~1.3     8.6~1.8      15.4~5.7
Cz        0.5~0.3     2.8~1.9     0.6~0.8     3.6~1.9     3.3~3.2     6.6~1.7      17.4~3.5
F7        0.7~0.3     1.0~0.6     0.5~0.4     0.6~0.2     1.0~0.6     0.6~0.5      4.4~2.4
F8        0.4~0.2     0.5~0.3     0.3~0.2     0.7~0.3     0.4~0.2     0.1~0.1      2.4~0.6
Bl        0.1~0.1     0.7~0.3     0.1~0.1     0.1~0.1     0.1~0.1     0.2~0.1      1.3~0.4
Br        0.2~0.1     0.8~0.6     0.1~0.1     0.0~0.0     0.1~0.1     0.2~0.1      1.4~0.4
L41       1.6~1.5     0.4~0.5     0.1~0.2     1.2~0.9     0.1~0.1     1.1~0.9      4.5~3.0
R41       0.3~0.5     0.6~0.4     0.4~0.6     0.3~0.4     0.0~0.0     1.8~0.9      3.4~1.4
Wl        1.3~0.8     0.5~0.4     0.5~0.7     1.0~0.8     0.9~0.6     4.3~3.2      8.5~4.2
Wr        0.7~0.2     0.3~0.5     0.6~0.9     1.9~1.1     1.2~0.7     2.8~1.6      7.5~1.9
T5        0.9~0.7     0.7~0.6     0.4~0.5     0.8~0.7     0.9~0.7     1.2~1.3      4.9~4.6
T6        0.4~0.6     0.8~0.6     0.3~0.6     4.0~2.3     1.7~0.5     0.3~0.2      7.5~3.7
O1        0.2~0.3     0.1~0.1     0.3~0.2     0.2~0.1     0.0~0.0     0.5~0.4      1.3~1.4
O2        0.5~1.0     0.9~0.8     0.0~0.0     1.1~1.2     0.6~0.8     0.3~0.4      3.4~2.6
e_c       1.3~1.3     4.7~1.0     0.3~1.9     0.7~3.0     1.1~1.0     1.8~1.6      9.9~2.3
Total     10.2~1.4    17.3~4.2    5.3~1.8     20.2~7.8    15.2~6.3    35.3~8.6     103.5~29.5
Table 3. The number of features (mean ± std across 100 splits) retained after the two-tailed t-test under different conditions.

| Feature | NA | NC | NI | OA | OC | OI | Total |
|---------|----|----|----|----|----|----|-------|
| CC | 1.3 ± 2.6 | 0.4 ± 1.4 | 0.1 ± 0.1 | 4.2 ± 1.3 | 2.4 ± 1.5 | 7.8 ± 2.5 | 16.2 ± 4.3 |
| AWD | 0.8 ± 0.5 | 1.2 ± 0.8 | 0.4 ± 0.5 | 1.4 ± 0.9 | 1.9 ± 0.6 | 3.7 ± 2.1 | 9.4 ± 1.4 |
| GIC | 0.8 ± 0.8 | 0.9 ± 0.6 | 0.5 ± 0.3 | 1.4 ± 0.7 | 0.9 ± 0.3 | 2.9 ± 1.2 | 7.4 ± 1.3 |
| DD | 0.9 ± 0.7 | 0.5 ± 0.4 | 0.8 ± 0.6 | 0.4 ± 0.2 | 0.7 ± 0.4 | 1.0 ± 0.8 | 4.3 ± 5.5 |
| NE | 0.8 ± 0.6 | 0.7 ± 0.6 | 0.4 ± 1.1 | 1.2 ± 0.5 | 1.3 ± 0.7 | 3.0 ± 1.7 | 7.4 ± 4.8 |
| M | 0.7 ± 0.6 | 1.1 ± 0.8 | 0.3 ± 0.2 | 0.8 ± 0.6 | 0.5 ± 0.3 | 1.1 ± 0.9 | 4.5 ± 1.3 |
| LE | 0.6 ± 2.1 | 2.3 ± 1.1 | 0.5 ± 0.9 | 3.8 ± 2.1 | 2.4 ± 2.1 | 5.6 ± 2.1 | 15.2 ± 4.4 |
| GE | 0.2 ± 0.1 | 0.4 ± 0.3 | 0.6 ± 0.3 | 0.2 ± 0.1 | 0.0 ± 0.0 | 1.0 ± 0.4 | 2.4 ± 1.4 |
| APL | 0.5 ± 0.4 | 1.3 ± 0.2 | 0.1 ± 0.1 | 2.5 ± 2.1 | 1.1 ± 0.9 | 2.8 ± 2.1 | 8.3 ± 3.5 |
| CCSS | 0.9 ± 1.7 | 6.2 ± 3.2 | 0.0 ± 0.0 | 1.7 ± 0.6 | 1.6 ± 1.3 | 3.0 ± 1.2 | 13.4 ± 3.5 |
| SW | 0.6 ± 0.3 | 0.2 ± 0.2 | 0.1 ± 0.1 | 0.0 ± 0.0 | 0.2 ± 0.1 | 0.3 ± 0.2 | 1.4 ± 3.5 |
| SMaC | 0.4 ± 0.3 | 0.6 ± 0.3 | 0.2 ± 0.2 | 0.3 ± 0.2 | 0.5 ± 0.3 | 0.4 ± 0.2 | 2.4 ± 2.6 |
| CTSP | 0.3 ± 0.7 | 0.3 ± 0.2 | 0.3 ± 0.2 | 0.7 ± 0.5 | 0.6 ± 0.4 | 1.0 ± 0.3 | 3.2 ± 5.3 |
| GD | 0.4 ± 0.4 | 0.5 ± 0.4 | 0.2 ± 0.1 | 0.3 ± 0.2 | 0.6 ± 0.3 | 0.7 ± 0.4 | 2.7 ± 1.4 |
| in | 0.2 ± 0.2 | 0.1 ± 0.1 | 0.3 ± 0.2 | 0.5 ± 0.3 | 0.0 ± 0.0 | 0.3 ± 0.2 | 1.4 ± 0.5 |
| SMiC | 0.3 ± 0.2 | 0.4 ± 0.3 | 0.2 ± 0.1 | 0.2 ± 0.2 | 0.1 ± 0.2 | 0.3 ± 0.2 | 1.5 ± 0.5 |
| VCN | 0.5 ± 0.4 | 0.2 ± 0.2 | 0.3 ± 0.2 | 0.6 ± 0.3 | 0.4 ± 0.2 | 0.4 ± 0.2 | 2.4 ± 1.5 |
| Total | 10.2 ± 1.4 | 17.3 ± 4.2 | 5.3 ± 1.8 | 20.2 ± 7.8 | 15.2 ± 6.3 | 35.3 ± 8.6 | 103.5 ± 29.5 |
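The feature screening summarized in Tables 2 and 3 can be sketched as follows: for each candidate feature, a two-tailed independent-samples t-test compares the preAD and normal groups, and only features whose p-value falls below the significance threshold are retained. The snippet below is a minimal illustration with synthetic data; the threshold, group sizes, and feature values are hypothetical, not the study's actual ones.

```python
import numpy as np
from scipy import stats

def select_features_by_ttest(X_a, X_b, alpha=0.05):
    """Return indices of feature columns whose two-tailed t-test
    between groups A and B is significant at the given alpha."""
    _, p = stats.ttest_ind(X_a, X_b, axis=0)  # one p-value per column
    return np.where(p < alpha)[0]

# Toy example: feature 0 differs between groups, feature 1 is pure noise.
rng = np.random.default_rng(0)
group_a = np.column_stack([rng.normal(0.0, 1.0, 50), rng.normal(0.0, 1.0, 50)])
group_b = np.column_stack([rng.normal(2.0, 1.0, 50), rng.normal(0.0, 1.0, 50)])
kept = select_features_by_ttest(group_a, group_b)
```

Because the split into training data is repeated, the set of surviving features varies from split to split, which is why the tables report a mean and standard deviation of feature counts rather than a fixed list.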
Table 4. Top 10 features for each two-dimensional PCA projection from Figure 3.

| preAD vs. Normal | Band/Electrode/Feature | Magnitude |
|------------------|------------------------|-----------|
| Component 1 | Raw/Cz/CC | 0.245 |
| | Raw/Cz/LE | 0.245 |
| | Raw/Pz/CC | 0.244 |
| | Delta/Pz/CC | 0.244 |
| | Delta/CCSS | 0.232 |
| | Raw/Fz/LE | 0.223 |
| | Raw/Fz/AWD | 0.213 |
| | Gamma/CCSS | 0.210 |
| | Raw/Fz/GIC | 0.204 |
| | Gamma/Cz/CC | 0.198 |
| Component 2 | Raw/Cz/AWD | 0.301 |
| | Raw/Cz/GIC | 0.301 |
| | Raw/Fz/CC | 0.298 |
| | Raw/Fz/NE | 0.297 |
| | Delta/Fz/CC | 0.272 |
| | Raw/CCSS | 0.240 |
| | Delta/Wl/LE | 0.239 |
| | Gamma/Cz/CC | 0.214 |
| | Delta/Wl/AWD | 0.205 |
| | Raw/Wr/NE | 0.176 |
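Rankings like those in Table 4 come from inspecting the loading magnitudes of each principal component: the features with the largest absolute weights contribute most to that component. A minimal sketch using scikit-learn's PCA on synthetic data (the feature names below are placeholders, not the study's Band/Electrode/Feature labels):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
feature_names = [f"feat{i}" for i in range(6)]
X = rng.normal(size=(40, 6))
X[:, 2] *= 5  # give one feature dominant variance so it leads component 1

pca = PCA(n_components=2).fit(X)

# Rank features by absolute loading magnitude on the first component.
order = np.argsort(-np.abs(pca.components_[0]))
top = [(feature_names[i], abs(pca.components_[0][i])) for i in order[:3]]
```

Applying the same ranking to each retained component yields a table of top-loading features and their magnitudes, analogous to Table 4.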
Table 5. Classification without ensemble (mean ± std across 100 splits).

| Classifier | Accuracy | Precision | Recall | AUROC |
|------------|----------|-----------|--------|-------|
| Neural Network | 87.45 ± 13.67 | 92.33 ± 14.22 | 80.34 ± 25.34 | 87.45 ± 16.34 |
| 3-Nearest Neighbor | 89.45 ± 13.56 | 92.33 ± 14.22 | 84.23 ± 21.45 | 85.45 ± 18.34 |
| SVM | 89.45 ± 15.34 | 92.33 ± 14.22 | 84.34 ± 22.45 | 86.34 ± 18.34 |
| Logistic Regression | 87.34 ± 14.23 | 93.89 ± 13.93 | 81.34 ± 23.45 | 85.34 ± 19.34 |
| Random Forest | 89.34 ± 14.34 | 94.34 ± 12.44 | 85.34 ± 21.34 | 89.32 ± 18.34 |
| LDA | 87.34 ± 14.45 | 91.34 ± 17.56 | 82.34 ± 21.45 | 85.23 ± 18.34 |
Table 6. Classification with ensemble (mean ± std across 100 splits).

| Classifier | Accuracy | Precision | Recall | AUROC |
|------------|----------|-----------|--------|-------|
| Neural Network | 91.00 ± 14.34 | 92.33 ± 19.06 | 85.00 ± 25.00 | 84.75 ± 19.00 |
| 3-Nearest Neighbor | 91.45 ± 14.45 | 93.83 ± 16.10 | 85.00 ± 23.98 | 82.00 ± 16.99 |
| SVM | 91.01 ± 14.45 | 89.33 ± 20.62 | 84.50 ± 25.19 | 84.00 ± 20.47 |
| Logistic Regression | 91.34 ± 14.34 | 91.83 ± 19.51 | 84.50 ± 25.19 | 84.25 ± 19.89 |
| Random Forest | 92.01 ± 13.56 | 90.33 ± 21.50 | 84.00 ± 26.34 | 84.88 ± 28.09 |
| LDA | 89.34 ± 14.56 | 91.17 ± 18.02 | 87.00 ± 23.04 | 84.75 ± 24.48 |
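A common way to combine individual classifiers such as those in Table 5 into an ensemble is to vote over the models' predictions. The sketch below uses scikit-learn's soft-voting ensemble (averaging predicted class probabilities) on synthetic data as one plausible implementation; the paper's exact ensemble scheme, base models, and hyperparameters may differ.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-in for the 40-subject feature matrix after PCA.
X, y = make_classification(n_samples=40, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

ensemble = VotingClassifier(
    estimators=[
        ("lr", LogisticRegression(max_iter=1000)),
        ("knn", KNeighborsClassifier(n_neighbors=3)),
        ("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across the base models
)
ensemble.fit(X_tr, y_tr)
acc = ensemble.score(X_te, y_te)
```

Soft voting tends to stabilize predictions when the base classifiers make uncorrelated errors, which is consistent with the accuracy and robustness gains seen between Tables 5 and 6.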
Table 7. Classification with the number of channels reduced to 5, without ensemble (mean ± std across 100 splits).

| Classifier | Accuracy | Precision | Recall | AUROC |
|------------|----------|-----------|--------|-------|
| Neural Network | 88.45 ± 13.45 | 96.34 ± 11.34 | 81.34 ± 23.34 | 91.34 ± 16.34 |
| 3-Nearest Neighbor | 88.34 ± 14.45 | 94.23 ± 14.34 | 81.34 ± 22.45 | 84.34 ± 15.23 |
| SVM | 86.30 ± 15.34 | 91.34 ± 14.34 | 82.34 ± 21.34 | 87.34 ± 15.34 |
| Logistic Regression | 88.34 ± 12.56 | 95.34 ± 13.44 | 82.34 ± 22.34 | 91.34 ± 14.45 |
| Random Forest | 89.34 ± 14.78 | 91.34 ± 15.34 | 85.34 ± 22.45 | 86.34 ± 22.12 |
| LDA | 87.45 ± 14.46 | 94.34 ± 14.34 | 84.34 ± 22.45 | 87.34 ± 16.34 |
Table 8. Classification with the number of trials reduced to 30, without ensemble (mean ± std across 100 splits).

| Classifier | Accuracy | Precision | Recall | AUROC |
|------------|----------|-----------|--------|-------|
| Neural Network | 87.34 ± 14.45 | 81.34 ± 33.45 | 78.34 ± 32.34 | 85.34 ± 21.34 |
| 3-Nearest Neighbor | 88.34 ± 14.65 | 84.23 ± 26.45 | 81.34 ± 27.34 | 82.34 ± 20.34 |
| SVM | 86.34 ± 14.45 | 81.23 ± 32.34 | 81.45 ± 31.34 | 84.23 ± 23.34 |
| Logistic Regression | 86.34 ± 14.34 | 81.34 ± 32.45 | 78.34 ± 31.34 | 84.45 ± 21.34 |
| Random Forest | 87.45 ± 16.45 | 79.34 ± 31.56 | 81.45 ± 31.45 | 82.34 ± 23.65 |
| LDA | 86.34 ± 14.34 | 83.45 ± 32.45 | 72.34 ± 30.34 | 81.34 ± 23.45 |
Table 9. Classification with the number of frequency bands reduced to 3 (Raw, Delta, Gamma), without ensemble (mean ± std across 100 splits).

| Classifier | Accuracy | Precision | Recall | AUROC |
|------------|----------|-----------|--------|-------|
| Neural Network | 87.34 ± 13.45 | 94.45 ± 14.45 | 84.34 ± 23.34 | 83.34 ± 22.34 |
| 3-Nearest Neighbor | 88.34 ± 14.34 | 94.45 ± 14.45 | 87.34 ± 20.45 | 81.34 ± 21.45 |
| SVM | 87.34 ± 16.34 | 94.45 ± 14.45 | 91.34 ± 22.34 | 82.34 ± 22.45 |
| Logistic Regression | 87.34 ± 15.34 | 94.45 ± 14.45 | 91.34 ± 22.34 | 84.34 ± 22.45 |
| Random Forest | 89.56 ± 14.56 | 93.34 ± 13.45 | 90.34 ± 17.34 | 81.34 ± 21.45 |
| LDA | 88.34 ± 14.45 | 94.45 ± 14.45 | 90.34 ± 17.34 | 81.23 ± 21.34 |
Table 10. Classification with the number of channels reduced to 5, with ensemble (mean ± std across 100 splits).

| Classifier | Accuracy | Precision | Recall | AUROC |
|------------|----------|-----------|--------|-------|
| Neural Network | 90.34 ± 15.45 | 91.34 ± 15.34 | 90.34 ± 21.45 | 89.34 ± 21.34 |
| 3-Nearest Neighbor | 90.34 ± 15.23 | 91.34 ± 15.34 | 90.56 ± 20.56 | 81.45 ± 20.46 |
| SVM | 90.45 ± 14.45 | 92.34 ± 14.45 | 91.56 ± 20.56 | 86.45 ± 20.54 |
| Logistic Regression | 90.34 ± 15.45 | 91.45 ± 16.34 | 91.33 ± 21.56 | 88.43 ± 20.45 |
| Random Forest | 91.34 ± 14.34 | 91.56 ± 15.39 | 90.34 ± 20.45 | 89.45 ± 16.23 |
| LDA | 91.34 ± 15.45 | 90.34 ± 15.56 | 90.45 ± 21.45 | 85.34 ± 18.34 |
Table 11. Classification with the number of trials reduced to 30, with ensemble (mean ± std across 100 splits).

| Classifier | Accuracy | Precision | Recall | AUROC |
|------------|----------|-----------|--------|-------|
| Neural Network | 90.34 ± 14.46 | 90.34 ± 19.34 | 90.34 ± 23.45 | 82.34 ± 21.34 |
| 3-Nearest Neighbor | 91.35 ± 15.45 | 90.34 ± 22.34 | 92.34 ± 22.45 | 83.34 ± 16.34 |
| SVM | 90.45 ± 14.64 | 90.34 ± 18.34 | 90.34 ± 22.34 | 81.34 ± 20.34 |
| Logistic Regression | 91.45 ± 15.56 | 90.45 ± 17.34 | 92.34 ± 22.90 | 81.34 ± 20.34 |
| Random Forest | 89.45 ± 14.45 | 88.34 ± 21.45 | 90.34 ± 22.34 | 81.45 ± 23.45 |
| LDA | 91.34 ± 14.45 | 89.34 ± 22.34 | 89.34 ± 22.34 | 82.34 ± 23.45 |
Table 12. Classification with the number of frequency bands reduced to 3, with ensemble (mean ± std across 100 splits).

| Classifier | Accuracy | Precision | Recall | AUROC |
|------------|----------|-----------|--------|-------|
| Neural Network | 91.45 ± 15.45 | 90.34 ± 14.34 | 90.45 ± 22.45 | 86.34 ± 12.45 |
| 3-Nearest Neighbor | 91.45 ± 15.45 | 94.34 ± 15.34 | 88.34 ± 20.34 | 85.23 ± 12.45 |
| SVM | 90.34 ± 15.45 | 95.23 ± 15.45 | 90.23 ± 18.34 | 84.45 ± 17.45 |
| Logistic Regression | 92.45 ± 15.45 | 95.23 ± 15.45 | 89.34 ± 21.34 | 88.34 ± 18.34 |
| Random Forest | 91.45 ± 14.45 | 93.45 ± 16.45 | 90.34 ± 20.34 | 87.34 ± 15.34 |
| LDA | 90.34 ± 17.34 | 90.23 ± 19.45 | 90.34 ± 20.34 | 86.34 ± 21.64 |

Share and Cite

MDPI and ACS Style

Liu, Y.; Xia, J.; Kan, Z.; Zhang, J.; Toprani, S.; Brewer, J.B.; Kutas, M.; Liu, X.; Olichney, J. Unveiling Early Signs of Preclinical Alzheimer’s Disease Through ERP Analysis with Weighted Visibility Graphs and Ensemble Learning. Bioengineering 2025, 12, 814. https://doi.org/10.3390/bioengineering12080814


