Biomimetics
  • Article
  • Open Access

21 July 2023

Design of Intelligent Neuro-Supervised Networks for Brain Electrical Activity Rhythms of Parkinson’s Disease Model

1 Department of Computer Science and Information Engineering, Graduate School of Engineering, Science and Technology, National Yunlin University of Science and Technology, Yunlin 64002, Taiwan
2 Department of Computer Science and Information Engineering, National Yunlin University of Science and Technology, Yunlin 64002, Taiwan
3 Future Technology Research Center, National Yunlin University of Science and Technology, Yunlin 64002, Taiwan
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Biomimicry for Optimization, Control, and Automation

Abstract

The objective of this paper is to present a novel design of intelligent neuro-supervised networks (INSNs) in order to study the dynamics of a mathematical model for Parkinson’s disease illness (PDI), governed with three differential classes to represent the rhythms of brain electrical activity measurements at different locations in the cerebral cortex. The proposed INSNs are constructed by exploiting the knacks of multilayer structure neural networks back-propagated with the Levenberg–Marquardt (LM) and Bayesian regularization (BR) optimization approaches. The reference data for the grids of input and the target samples of INSNs were formulated with a reliable numerical solver via the Adams method for sundry scenarios of PDI models by way of variation of sensor locations in order to measure the impact of the rhythms of brain electrical activity. The designed INSNs for both backpropagation procedures were implemented on created datasets segmented arbitrarily into training, testing, and validation samples by optimization of mean squared error based fitness function. Comparison of outcomes on the basis of exhaustive simulations of proposed INSNs via both LM and BR methodologies was conducted with reference solutions of PDI models by means of learning curves on MSE, adaptive control parameters of algorithms, absolute error, histogram error plots, and regression index. The outcomes endorse the efficacy of both INSNs solvers for different scenarios in PDI models, but the accuracy of the BR-based method is relatively superior, albeit at the cost of slightly more computations.

1. Introduction

Parkinson’s disease illness (PDI) is a neurological disorder normally caused by an early, significant death of dopaminergic neurons; the resulting deficiency of dopamine within the basal ganglia results in movement disorders [1]. PDI patients may suffer from tremors/shaking, kinetic problems, postural instability, rigidity, and anxiety, as highlighted in Part 1 of the graphical abstract provided in Figure 1. In 2016, around 6.1 million people were affected by PDI [2], and a rapid increase in PDI patients has been observed in the past two decades [3,4]. The current therapeutic treatments for PDI are based on restoring dopamine levels. These remedies are helpful in providing symptomatic relief to PDI patients, but they are not disease modifying, and, therefore, PDI has remained incurable [5]. Mathematical modeling of PDI may help in better understanding the dynamics of the disease and, thus, in improving treatments for its recovery. Different mathematical models of PDI have been proposed [6]. For instance, in [7], Anninou et al. developed a mathematical model for PDI by exploiting the concept of fuzzy cognitive maps, and a generic algorithm was proposed based on nonlinear Hebbian learning techniques. Recently, a wide use of artificial intelligence methodologies for modeling different diseases has emerged [8,9,10], but, as per our exhaustive search, these methodologies have not yet been exploited to study the dynamics of PDI. Thus, it seems promising to exploit the well-established strength of machine learning and artificial intelligence techniques to study the dynamics of PDI.
Figure 1. Graphical abstract of the PDI study using INSNs.
The objectives of the current investigation are as follows:
  • Study the dynamics of a mathematical model for PDI, governed with three differential classes to represent the rhythms of brain electrical activity that are measured at different locations of the cerebral cortex.
  • Construct intelligent neuro-supervised networks (INSNs) by exploiting the knacks of multilayer structure neural networks backpropagated with the Levenberg–Marquardt (LM) and the Bayesian regularization (BR) approaches.
  • Optimize the mean squared error based fitness function for sundry scenarios of PDI models by the variation of sensor locations to measure the impact of the rhythms of brain electrical activity.
  • Compare the outcomes on the basis of exhaustive simulations of the proposed INSNs via both LM and BR methodologies with reference solutions of PDI models by means of learning curves on MSE, adaptive control parameters of algorithms, absolute error, histogram error plots, and regression index.
The structure of the remaining paper is as follows: Section 2 presents the details of the related works; Section 3 provides the mathematical model of PDI along with a description of the proposed INSNs; Section 4 discusses the simulation results for different scenarios of PDI; and Section 5 concludes the study by noting some potential future research directions.

3. Proposed Methodology

Before developing the INSNs, the mathematical model of PDI is first introduced in this section. Let the rhythms $x_1(t_i), x_2(t_i), \ldots, x_k(t_i)$, $i = 1, 2, \ldots, N$, of cerebral activity at $k$ points of the cerebral cortex be measured by EEG and defined as [13]:
$$x_1(t_i) = y_1(t_i) + \varepsilon_1(t_i), \quad x_2(t_i) = y_2(t_i) + \varepsilon_2(t_i), \quad \ldots, \quad x_k(t_i) = y_k(t_i) + \varepsilon_k(t_i), \tag{1}$$
where $y_1(t_i), y_2(t_i), \ldots, y_k(t_i)$ denote the discrete approximation of $y(t) = [y_1(t), y_2(t), \ldots, y_k(t)]^T$ and $\varepsilon_1(t_i), \varepsilon_2(t_i), \ldots, \varepsilon_k(t_i)$ represent white Gaussian noise. The accurate acquisition of the EEG signals is of great significance, and denoising of the signal is required before further processing. Multiscale principal component analysis (MSPCA), a combination of principal component analysis and wavelets [59], plays a vital role in the denoising of such signals and is used for robust motor imagery brain–computer interface classification [60,61]. The system of differential equations is constructed as [13]:
$$\begin{aligned} \dot{y}_1(t) &= \sum_{m=1}^{k} a_{1m} y_m(t) + y^T(t)\,B_1\,y(t) + c_1, \\ \dot{y}_2(t) &= \sum_{m=1}^{k} a_{2m} y_m(t) + y^T(t)\,B_2\,y(t) + c_2, \\ &\;\;\vdots \\ \dot{y}_k(t) &= \sum_{m=1}^{k} a_{km} y_m(t) + y^T(t)\,B_k\,y(t) + c_k. \end{aligned} \tag{2}$$
The $(k \times j)$ matrix of unknown coefficients $D$, matrix $Y$, and matrix $\dot{Y}$ are introduced in (3), (4), and (5), respectively:
$$D = \begin{bmatrix} c_1 & a_{11} & \cdots & a_{1k} & b_{11}^{1} & \cdots & b_{kk}^{1} & 2b_{12}^{1} & \cdots & 2b_{k-1,k}^{1} \\ c_2 & a_{21} & \cdots & a_{2k} & b_{11}^{2} & \cdots & b_{kk}^{2} & 2b_{12}^{2} & \cdots & 2b_{k-1,k}^{2} \\ \vdots & \vdots & & \vdots & \vdots & & \vdots & \vdots & & \vdots \\ c_k & a_{k1} & \cdots & a_{kk} & b_{11}^{k} & \cdots & b_{kk}^{k} & 2b_{12}^{k} & \cdots & 2b_{k-1,k}^{k} \end{bmatrix} \tag{3}$$
$$Y = \begin{bmatrix} 1 & y_{11} & \cdots & y_{k1} & y_{11}^2 & \cdots & y_{k1}^2 & y_{11}y_{21} & \cdots & y_{k-1,1}\,y_{k,1} \\ 1 & y_{12} & \cdots & y_{k2} & y_{12}^2 & \cdots & y_{k2}^2 & y_{12}y_{22} & \cdots & y_{k-1,2}\,y_{k,2} \\ \vdots & \vdots & & \vdots & \vdots & & \vdots & \vdots & & \vdots \\ 1 & y_{1N} & \cdots & y_{kN} & y_{1N}^2 & \cdots & y_{kN}^2 & y_{1N}y_{2N} & \cdots & y_{k-1,N}\,y_{k,N} \end{bmatrix} \tag{4}$$
$$\dot{Y} = \begin{bmatrix} \dot{y}_{11} & \dot{y}_{21} & \cdots & \dot{y}_{k1} \\ \dot{y}_{12} & \dot{y}_{22} & \cdots & \dot{y}_{k2} \\ \vdots & \vdots & & \vdots \\ \dot{y}_{1N} & \dot{y}_{2N} & \cdots & \dot{y}_{kN} \end{bmatrix} \tag{5}$$
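Stacking the relations of (2) over the $N$ samples gives $\dot{Y} \approx Y D^T$, so the coefficients collected in $D$ can, in principle, be recovered by ordinary least squares (the actual identification procedure used for the EEG data is detailed in [13]). The sketch below illustrates this on synthetic data for $k = 3$, with the cross-term factors of 2 folded into the estimated coefficients; all numbers are hypothetical.

```python
import numpy as np

def monomial_features(y):
    """Rows of Y for k = 3: [1, y1, y2, y3, y1^2, y2^2, y3^2, y1*y2, y1*y3, y2*y3]."""
    y1, y2, y3 = y[:, 0], y[:, 1], y[:, 2]
    return np.column_stack([np.ones(len(y)), y1, y2, y3,
                            y1**2, y2**2, y3**2, y1*y2, y1*y3, y2*y3])

rng = np.random.default_rng(0)
D_true = rng.normal(size=(3, 10))                # hypothetical coefficient matrix D
y_samples = rng.normal(size=(200, 3))            # hypothetical state samples
Ydot = monomial_features(y_samples) @ D_true.T   # derivatives consistent with (2)

# Least-squares estimate of D from the overdetermined system Ydot = Y D^T
D_est = np.linalg.lstsq(monomial_features(y_samples), Ydot, rcond=None)[0].T
print(np.max(np.abs(D_est - D_true)))            # recovery error (numerically tiny)
```

With noiseless synthetic derivatives the coefficients are recovered essentially exactly; with measured EEG rhythms the white-noise terms in (1) make the fit approximate.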
Considering $k = 3$ and $c = [c_1, c_2, \ldots, c_k]^T = 0$ [13]:
$$\begin{aligned} \dot{y}_1(t) &= a_{11}y_1(t) + a_{12}y_2(t) + a_{13}y_3(t) + b_{11}^{1}y_1^2(t) + b_{22}^{1}y_2^2(t) + b_{33}^{1}y_3^2(t) \\ &\quad + 2b_{12}^{1}y_1(t)y_2(t) + 2b_{13}^{1}y_1(t)y_3(t) + 2b_{23}^{1}y_2(t)y_3(t), \\ \dot{y}_2(t) &= a_{21}y_1(t) + a_{22}y_2(t) + a_{23}y_3(t) + b_{11}^{2}y_1^2(t) + b_{22}^{2}y_2^2(t) + b_{33}^{2}y_3^2(t) \\ &\quad + 2b_{12}^{2}y_1(t)y_2(t) + 2b_{13}^{2}y_1(t)y_3(t) + 2b_{23}^{2}y_2(t)y_3(t), \\ \dot{y}_3(t) &= a_{31}y_1(t) + a_{32}y_2(t) + a_{33}y_3(t) + b_{11}^{3}y_1^2(t) + b_{22}^{3}y_2^2(t) + b_{33}^{3}y_3^2(t) \\ &\quad + 2b_{12}^{3}y_1(t)y_2(t) + 2b_{13}^{3}y_1(t)y_3(t) + 2b_{23}^{3}y_2(t)y_3(t). \end{aligned} \tag{6}$$
For practical application, let the EEG signals of a subject be recorded at three different points of the cerebral cortex and treated as three time series describing the behavior in three-dimensional space. In standard medical procedure, measuring sensors are placed at defined points of the cerebral cortex. For this study, we considered the magnitude of the electrical impulses at points P3, P4, and O1, as well as C3, C4, and T5, designated by the coordinates $y_1(t)$, $y_2(t)$, and $y_3(t)$. A three-dimensional system based on these time series was constructed as:
$$\begin{aligned} \dot{y}_1(t) &= 20.93 + 1.55y_1(t) + 6.20y_2(t) - 7.05y_3(t) + 0.016y_1^2(t) + 0.17y_2^2(t) \\ &\quad - 0.16y_3^2(t) - 0.10y_2(t)y_1(t) + 0.13y_3(t)y_1(t) - 0.08y_3(t)y_2(t), \\ \dot{y}_2(t) &= 3.87 - 2.60y_1(t) + 2.12y_2(t) - 2.62y_3(t) - 0.01y_1^2(t) + 0.034y_2^2(t) \\ &\quad - 0.13y_3^2(t) - 0.17y_2(t)y_1(t) + 0.32y_3(t)y_1(t) + 0.025y_3(t)y_2(t), \\ \dot{y}_3(t) &= 12.12 + 1.36y_1(t) + 3.20y_2(t) - 3.56y_3(t) + 0.03y_1^2(t) + 0.06y_2^2(t) \\ &\quad - 0.14y_3^2(t) - 0.14y_2(t)y_1(t) + 0.09y_3(t)y_1(t) + 0.08y_3(t)y_2(t). \end{aligned} \tag{7}$$
The system (7) simulates the impulses at C3, C4, and T5, while the systems of differential equations presented in (8) and (9) simulate the impulses at the P3, P4, and O1 points:
$$\begin{aligned} \dot{y}_1(t) &= 3.11 + 0.19y_1(t) + 0.72y_2(t) - 1.19y_3(t) + 0.022y_1^2(t) - 0.04y_2^2(t) \\ &\quad + 0.045y_3^2(t) + 0.04y_2(t)y_1(t) - 0.01y_3(t)y_1(t) - 0.06y_3(t)y_2(t), \\ \dot{y}_2(t) &= 4.58 - 1.69y_1(t) + 0.39y_2(t) + 1.37y_3(t) - 0.03y_1^2(t) - 0.09y_2^2(t) \\ &\quad + 0.05y_3^2(t) + 0.11y_2(t)y_1(t) + 0.08y_3(t)y_1(t) - 0.06y_3(t)y_2(t), \\ \dot{y}_3(t) &= 7.88 + 6.91y_1(t) - 5.69y_2(t) - 0.71y_3(t) + 0.23y_1^2(t) + 0.025y_2^2(t) \\ &\quad + 0 \cdot y_3^2(t) - 0.17y_2(t)y_1(t) - 0.1y_3(t)y_1(t) + 0.06y_3(t)y_2(t). \end{aligned} \tag{8}$$
$$\begin{aligned} \dot{y}_1(t) &= 1.24 - 1.15y_1(t) + 2.34y_2(t) - 0.83y_3(t) - 0.04y_1^2(t) + 0 \cdot y_2^2(t) \\ &\quad + 0.02y_3^2(t) + 0.235y_2(t)y_1(t) - 0.015y_3(t)y_1(t) - 0.12y_3(t)y_2(t), \\ \dot{y}_2(t) &= 3.68 - 3.91y_1(t) + 1.01y_2(t) + 2.54y_3(t) - 0.16y_1^2(t) - 0.08y_2^2(t) \\ &\quad + 0.02y_3^2(t) + 0.15y_2(t)y_1(t) + 0.10y_3(t)y_1(t) - 0.04y_3(t)y_2(t), \\ \dot{y}_3(t) &= 5.9 + 5.22y_1(t) - 6.15y_2(t) - 0.31y_3(t) + 0.13y_1^2(t) + 0.022y_2^2(t) \\ &\quad + 0 \cdot y_3^2(t) - 0.13y_2(t)y_1(t) - 0.28y_3(t)y_1(t) + 0.05y_3(t)y_2(t). \end{aligned} \tag{9}$$
Now, the details regarding the implementation of the proposed intelligent neuro-supervised networks are presented. The proposed scheme is implemented in two steps:
  • Reference dataset generation: First, the reference dataset for the INSNs is generated by determining the numerical results of the PDI models presented in (7) to (9). The state-of-the-art Adams procedure is used through the ‘NDSolve’ routine of the Mathematica software to solve the systems of differential equations for $t \in [0, 5]$ with a step size of 0.02, i.e., a total of 251 inputs (time instances) and, accordingly, 753 outputs (251 discrete instances for each of y1, y2, and y3). The values of the parameters of the quantities of interest and the initial population representing the locations of the sensors for the electrical rhythms of the brain are taken from the reported study [13]. Further information regarding the justification of the parameters on the basis of theoretical analyses, i.e., global and local stability and population dynamics, can be found in [13].
  • Developing neuro-supervised networks: The INSNs are constructed through a neural network structure with a logistic activation function to solve the PDI models of (7) to (9). For backpropagation, two different optimization algorithms are used, i.e., Levenberg–Marquardt (LM) and Bayesian regularization (BR). For LM, the number of hidden neurons is 20 for all three PDI models of (7) to (9), while for BR, the number of hidden neurons is 50 for the PDI model of (7) and 100 for the remaining two PDI models of (8) and (9).
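As an illustration of the first step, the sketch below integrates PDI model 1 of (7) on $t \in [0, 5]$ using SciPy’s LSODA driver, which employs Adams methods on non-stiff stretches, as a stand-in for Mathematica’s ‘NDSolve’; the initial condition is hypothetical, since the sensor-location values used in the paper are taken from [13], and the interior minus signs follow the reconstruction of (7) above.

```python
import numpy as np
from scipy.integrate import solve_ivp

def pdi_model_1(t, y):
    """Right-hand side of system (7): impulses at C3, C4, and T5."""
    y1, y2, y3 = y
    dy1 = (20.93 + 1.55*y1 + 6.20*y2 - 7.05*y3 + 0.016*y1**2 + 0.17*y2**2
           - 0.16*y3**2 - 0.10*y2*y1 + 0.13*y3*y1 - 0.08*y3*y2)
    dy2 = (3.87 - 2.60*y1 + 2.12*y2 - 2.62*y3 - 0.01*y1**2 + 0.034*y2**2
           - 0.13*y3**2 - 0.17*y2*y1 + 0.32*y3*y1 + 0.025*y3*y2)
    dy3 = (12.12 + 1.36*y1 + 3.20*y2 - 3.56*y3 + 0.03*y1**2 + 0.06*y2**2
           - 0.14*y3**2 - 0.14*y2*y1 + 0.09*y3*y1 + 0.08*y3*y2)
    return [dy1, dy2, dy3]

t_grid = np.linspace(0.0, 5.0, 251)       # 251 time instances on [0, 5]
y0 = [0.1, 0.1, 0.1]                      # hypothetical initial condition
sol = solve_ivp(pdi_model_1, (0.0, 5.0), y0, method="LSODA",
                t_eval=t_grid, rtol=1e-10, atol=1e-10)
print(sol.y.shape)                        # samples of y1, y2, y3 on the time grid
```

The resulting `sol.y` array plays the role of the target samples, with `t_grid` as the input grid, for training the INSNs.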
The optimizers based on LM and BR adjust the weights of the neural networks by minimizing the deviation from the reference numerical solution in the mean square error (MSE) sense. The MSE and the absolute error (AE) used to assess the performance of the proposed INSNs are defined as:
$$\mathrm{MSE}_{y_1} = \operatorname{mean}\!\left[(y_1 - \tilde{y}_1)^2\right]; \quad \mathrm{MSE}_{y_2} = \operatorname{mean}\!\left[(y_2 - \tilde{y}_2)^2\right]; \quad \mathrm{MSE}_{y_3} = \operatorname{mean}\!\left[(y_3 - \tilde{y}_3)^2\right] \tag{10}$$
$$\mathrm{AE}_{y_1} = \left|y_1 - \tilde{y}_1\right|; \quad \mathrm{AE}_{y_2} = \left|y_2 - \tilde{y}_2\right|; \quad \mathrm{AE}_{y_3} = \left|y_3 - \tilde{y}_3\right| \tag{11}$$
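As a concrete illustration of these two performance measures, the sketch below evaluates them for one state variable; the reference and approximated values are hypothetical.

```python
import numpy as np

# Hypothetical reference solution (Adams solver) and INSN approximation for y1
y1     = np.array([1.00, 1.20, 1.50, 1.90])   # reference values
y1_hat = np.array([1.01, 1.19, 1.50, 1.88])   # network output

mse_y1 = np.mean((y1 - y1_hat) ** 2)          # fitness in the MSE sense
ae_y1  = np.abs(y1 - y1_hat)                  # pointwise absolute error
print(mse_y1, ae_y1)                          # 1.5e-4 and [0.01, 0.01, 0.0, 0.02]
```

The MSE values serve as the fitness function being optimized, while the pointwise AE values are what Figures 5 and 9 later report against the reference solutions.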
The proposed INSNs may play a significant role in solving PDI mathematical models. As PDI is a complex neurological disorder, its accurate mathematical modeling can help in understanding the underlying mechanisms, predicting its progression, and developing effective treatment strategies. The proposed INSNs are capable of analyzing complex data, identifying patterns, and making predictions, and they may thus contribute to advancing knowledge of PDI and improving patient care. Therefore, in this study, the authors propose a neural network-based intelligent framework for solving the PDI mathematical model. This framework can be extended toward clinical contributions in terms of early and efficient diagnosis as well as prediction of PDI. Moreover, the proposed INSNs can assist in optimizing PDI treatment strategies by considering factors such as age, symptoms, and medication history, helping to predict the most effective treatment options and dosages for individual patients and thereby enhancing personalized medicine approaches and patient outcomes.
INSNs normally entail considerable computational demands, especially when dealing with large datasets. Training and optimizing INSNs for PDI models may require significant computational resources and time, which may affect the practical implementation of the INSNs, particularly for researchers with limited computing resources.
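The networks in this study are trained with MATLAB’s fitting tool (Section 4). Purely as an illustrative, library-agnostic sketch of what the LM optimizer does, the snippet below fits a small one-hidden-layer logistic network by minimizing a residual vector with SciPy’s `least_squares` using its MINPACK Levenberg–Marquardt driver (`method="lm"`); the target series, network size, and initialization are hypothetical stand-ins, not the paper’s setup.

```python
import numpy as np
from scipy.optimize import least_squares

def mlp(w, t, n_hidden=10):
    """One-hidden-layer network with logistic activation, scalar input and output."""
    w1 = w[:n_hidden]                     # input-to-hidden weights
    b1 = w[n_hidden:2 * n_hidden]         # hidden biases
    w2 = w[2 * n_hidden:3 * n_hidden]     # hidden-to-output weights
    b2 = w[3 * n_hidden]                  # output bias
    h = 1.0 / (1.0 + np.exp(-(np.outer(t, w1) + b1)))   # logistic hidden layer
    return h @ w2 + b2

t = np.linspace(0.0, 5.0, 251)
y_ref = np.sin(t)                         # hypothetical stand-in for a reference state
rng = np.random.default_rng(1)
w0 = 0.1 * rng.standard_normal(31)        # 10 + 10 + 10 + 1 weights and biases

# Levenberg-Marquardt minimizes the residual vector in the least-squares (MSE) sense
fit = least_squares(lambda w: mlp(w, t) - y_ref, w0, method="lm")
print(np.mean(fit.fun ** 2))              # training MSE after optimization
```

BR has no equally direct SciPy analogue; conceptually, it adds a weight-decay penalty whose strength is inferred from the data, which is why it costs more computation per epoch than plain LM.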

4. Performance Analyses

The simulation results of the proposed INSNs for PDI models 1, 2, and 3 presented in (7), (8), and (9), respectively, are provided in this section by considering both the BR and LM optimization algorithms.
In order to analyze the performance of the proposed INSNs, first, a reference dataset is generated through the Adams solver for PDI models 1, 2, and 3, presented in (7), (8), and (9), respectively, for inputs $t \in [0, 5]$ with a step size of 0.025. The dataset for all three PDI models is arbitrarily segmented into training, testing, and validation subsets in proportions of 80%, 10%, and 10%, respectively. The block diagram of the proposed INSN layer structure is presented in part 3 of Figure 1 and is implemented with the MATLAB fitting tool.
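The arbitrary segmentation into training, testing, and validation subsets can be sketched with a random permutation of sample indices; the sample count and seed below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 201                                   # e.g., samples on [0, 5] with step 0.025
idx = rng.permutation(n)                  # arbitrary (random) segmentation

n_train, n_test = int(0.8 * n), int(0.1 * n)
train_idx = idx[:n_train]                           # 80% training
test_idx  = idx[n_train:n_train + n_test]           # 10% testing
val_idx   = idx[n_train + n_test:]                  # 10% validation
print(len(train_idx), len(test_idx), len(val_idx))  # 160 20 21
```

Because the counts are rounded down, any remainder lands in the last subset; MATLAB’s `dividerand` performs an equivalent random division internally.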
The results of INSNs with BR (INSN-BR) are given in Figure 2, Figure 3, Figure 4 and Figure 5, and the results of INSNs with LM (INSN-LM) are provided in Figure 6, Figure 7, Figure 8 and Figure 9. The results of INSN-BR are provided in Figure 2, Figure 3 and Figure 4 for PDI models 1, 2, and 3, respectively. Figure 2a, Figure 3a, and Figure 4a provide the state transition values; Figure 2b, Figure 3b, and Figure 4b provide the learning curves; Figure 2c, Figure 3c, and Figure 4c present the histogram analyses; Figure 2d, Figure 3d, and Figure 4d provide the regression results; and Figure 2e, Figure 3e, and Figure 4e present the fitting results.
Figure 2. Results of INSN-BR for PDI model 1: (a) State transition values; (b) Learning curve; (c) Histogram; (d) Regression results; (e) Fitting results.
Figure 3. Results of INSN-BR for PDI model 2: (a) State transition values; (b) Learning curve; (c) Histogram; (d) Regression results; (e) Fitting results.
Figure 4. Results of INSN-BR for PDI model 3: (a) State transition values; (b) Learning curve; (c) Histogram; (d) Regression results; (e) Fitting results.
Figure 5. Comparative results of the proposed INSN-BR with the reference numerical solution: (a,b) PDI model 1; (c,d) PDI model 2; (e,f) PDI model 3.
Figure 6. Results of INSN-LM for PDI model 1: (a) State transition values; (b) Learning curve; (c) Histogram; (d) Regression results; (e) Fitting results.
Figure 7. Results of INSN-LM for PDI model 2: (a) State transition values; (b) Learning curve; (c) Histogram; (d) Regression results; (e) Fitting results.
Figure 8. Results of INSN-LM for PDI model 3: (a) State transition values; (b) Learning curve; (c) Histogram; (d) Regression results; (e) Fitting results.
Figure 9. Comparative results of the proposed INSN-LM with the reference numerical solution: (a,b) PDI model 1; (c,d) PDI model 2; (e,f) PDI model 3.
The best training performance of the proposed INSN-BR is 1.2819 × 10−10 at 1000 epochs, 4.9609 × 10−10 at 1000 epochs, and 1.531 × 10−9 at 33 epochs for PDI models 1, 2, and 3, respectively. The corresponding gradients and learning rates are [5.0029 × 10−5, 2.5064 × 10−5, 3.7703 × 10−8] and [50, 5, 500,000], respectively. Further, the histogram analyses show that the bin closest to the zero-error reference has error values of around 3.29 × 10−6, −1.1 × 10−5, and −3.7 × 10−6 for PDI models 1, 2, and 3, respectively. Moreover, the regression results show a correlation coefficient of R = 1 for all three PDI models, which confirms the correctness of the proposed INSN-BR.
In order to further demonstrate the accuracy/correctness of the proposed INSN-BR, the absolute error is calculated for all three PDI models, and the results are presented in Figure 5 along with the comparison of the solutions obtained through the INSN-BR with the reference numerical solutions. The results presented in Figure 5 endorse the efficacy of the proposed INSN-BR.
The results of the proposed INSN-LM are provided in Figure 6, Figure 7 and Figure 8 for PDI models 1, 2, and 3, respectively. Figure 6a, Figure 7a, and Figure 8a provide the state transition values; Figure 6b, Figure 7b, and Figure 8b provide the learning curves; Figure 6c, Figure 7c, and Figure 8c present the histogram analyses; Figure 6d, Figure 7d, and Figure 8d provide the regression results; and Figure 6e, Figure 7e, and Figure 8e present the fitting plots.
The best validation performance of the proposed INSN-LM is 5.0059 × 10−3 at 1000 epochs, 7.8394 × 10−5 at 1000 epochs, and 4.0022 × 10−3 at 466 epochs for PDI models 1, 2, and 3, respectively. The corresponding gradients and learning rates are [0.0010, 0.0011, 0.0030] and [1 × 10−6, 1 × 10−5, 1 × 10−6], respectively. Further, the histogram analyses show that the bin closest to the zero-error reference has error values of around −7.7 × 10−4, −4.25 × 10−3, and 1.29 × 10−2 for PDI models 1, 2, and 3, respectively. Moreover, the regression results show a correlation coefficient of R = 1 for all three PDI models, which confirms the correctness of the proposed INSN-LM. In order to further demonstrate the accuracy/correctness of the proposed INSN-LM, the absolute error is calculated for all three PDI models, and the results are presented in Figure 9 along with the comparison of the solutions obtained through the INSN-LM with the reference numerical solutions. The results presented in Figure 9 endorse the efficacy of the proposed INSN-LM.
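The regression index R reported in the figures is the correlation coefficient between the network outputs and the reference targets; a minimal sketch with hypothetical arrays:

```python
import numpy as np

y_target = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # hypothetical reference values
y_output = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # a perfect INSN fit

R = np.corrcoef(y_target, y_output)[0, 1]        # regression index
print(R)                                         # 1.0 for an exact fit
```

R = 1 indicates a perfect linear agreement between outputs and targets, which is why it is used here alongside MSE and AE as a correctness check.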
The comparison of the INSN-BR and INSN-LM is also conducted with respect to the MSE-based fitness values, the number of epochs, the time consumed in the computation, and the BR/LM parameters, such as gradient and learning rate, for all three PDI models, and the results are presented in Figure 10.
Figure 10. Performance comparison of INSN-BR and INSN-LM for PDI models: (a) INSN-BR for PDI model 1; (b) INSN-BR for PDI model 2; (c) INSN-BR for PDI model 3; (d) INSN-LM for PDI model 1; (e) INSN-LM for PDI model 2; (f) INSN-LM for PDI model 3.
Figure 10a–c provides the results of INSN-BR for PDI models 1, 2, and 3, respectively, while the respective results of INSN-LM are given in Figure 10d–f, where # denotes number. The INSN-BR attains a performance of 1.2819 × 10−10, 4.9609 × 10−10, and 1.531 × 10−9 in times of 0:02:08, 0:06:44, and 0:00:02 with 1000, 1000, and 34 epochs. The INSN-LM, meanwhile, attains a performance of 3.12 × 10−4, 6.54 × 10−5, and 1.10 × 10−4 in times of 0:00:08, 0:00:08, and 0:00:01 with 1000, 1000, and 472 epochs. The results clearly indicate that the INSN-BR provides more accurate results than the INSN-LM, but at the cost of a bit more computation.
In order to further analyze the behavior of the PDI models presented in (7) to (9), parametric plots are also drawn and presented in Figure 11, Figure 12 and Figure 13 for PDI models 1, 2, and 3, respectively.
Figure 11. Parametric plots of PDI model 1: (a) y1 vs. y2; (b) y1 vs. y3; (c) y2 vs. y3; (d) y1 vs. y2 vs. y3.
Figure 12. Parametric plots of PDI model 2: (a) y1 vs. y2; (b) y1 vs. y3; (c) y2 vs. y3; (d) y1 vs. y2 vs. y3.
Figure 13. Parametric plots of PDI model 3: (a) y1 vs. y2; (b) y1 vs. y3; (c) y2 vs. y3; (d) y1 vs. y2 vs. y3.
Figure 11a, Figure 12a and Figure 13a show the parametric plots of y1 and y2 for PDI models 1, 2, and 3, respectively. Similarly, Figure 11b, Figure 12b and Figure 13b provide the parametric plots of y1 and y3 for PDI models 1, 2, and 3, respectively, and Figure 11c, Figure 12c and Figure 13c provide the parametric plots of y2 and y3 for PDI models 1, 2, and 3. To further deepen the analyses, 3D parametric plots are also constructed and presented in Figure 11d, Figure 12d and Figure 13d for PDI models 1, 2, and 3, respectively. The parametric plots of Figure 11, Figure 12 and Figure 13 further establish the stability of the PDI models.

5. Conclusions

  • This study presented intelligent neuro-supervised networks (INSNs) to study the dynamics of Parkinson’s disease illness (PDI) through the rhythms of brain electrical activity measured at different locations on the cerebral cortex, represented with three differential classes. Two types of INSNs, INSN-LM and INSN-BR, are constructed with a multilayer neural network architecture backpropagated with the Levenberg–Marquardt and the Bayesian regularization algorithms, respectively. The Adams solver is used to generate the reference data for the grids of input and target samples of the INSNs for different PDI models obtained by varying the sensor locations in order to measure the impact of the rhythms of brain electrical activity. The dataset for all three PDI models is arbitrarily segmented into training, testing, and validation subsets in proportions of 80%, 10%, and 10%, respectively, and the fitness function based on the mean squared error criterion is optimized. The values of the mean square error and absolute error endorse the accuracy and correctness of the proposed INSN-LM and INSN-BR for all three PDI models. Further, the analyses by means of histogram error plots, learning curves, control parameters, and regression index all confirm the efficacy of the proposed INSNs for the PDI models; the accuracy of the INSN-BR is relatively superior to that of the INSN-LM, albeit at the cost of a slightly larger computational budget.
  • In the future, it looks promising to incorporate fractional gradient-based algorithms [62,63] for backpropagation in INSNs for analyzing PDI models, and to investigate early and efficient diagnosis as well as prediction of PDI through the proposed INSNs.

Author Contributions

Conceptualization, M.A.Z.R.; methodology, R.M. and N.I.C.; software, R.M.; validation, M.A.Z.R.; writing—original draft preparation, R.M.; writing—review and editing, N.I.C. and M.A.Z.R.; supervision, C.-Y.C. and M.A.Z.R.; project administration, C.-Y.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no potential conflicts of interest.

References

  1. Kalia, L.V.; Lang, A.E. Parkinson’s disease. Lancet 2015, 386, 896–912. [Google Scholar] [CrossRef]
  2. Feigin, V.L.; Nichols, E.; Alam, T.; Bannick, M.S.; Beghi, E.; Blake, N.; Fischer, F. Global, regional, and national burden of neurological disorders, 1990–2016: A systematic analysis for the Global Burden of Disease Study 2016. Lancet Neurol. 2019, 18, 459–480. [Google Scholar] [CrossRef] [PubMed]
  3. Dorsey, E.; Sherer, T.; Okun, M.S.; Bloem, B.R. The emerging evidence of the Parkinson pandemic. J. Parkinson’s Dis. 2018, 8, S3–S8. [Google Scholar] [CrossRef]
  4. Deuschl, G.; Beghi, E.; Fazekas, F.; Varga, T.; Christoforidi, K.A.; Sipido, E.; Feigin, V.L. The burden of neurological diseases in Europe: An analysis for the Global Burden of Disease Study 2017. Lancet Public Health 2020, 5, e551–e567. [Google Scholar] [CrossRef]
  5. Bakshi, S.; Chelliah, V.; Chen, C.; van der Graaf, P.H. Mathematical biology models of Parkinson’s disease. CPT Pharmacomet. Syst. Pharmacol. 2019, 8, 77–86. [Google Scholar] [CrossRef] [PubMed]
  6. Sarbaz, Y.; Pourakbari, H. A review of presented mathematical models in Parkinson’s disease: Black-and gray-box models. Med. Biol. Eng. Comput. 2016, 54, 855–868. [Google Scholar] [CrossRef]
  7. Anninou, A.P.; Groumpos, P.P. Modeling of Parkinson’s disease using fuzzy cognitive maps and non-linear Hebbian learning. Int. J. Artif. Intell. Tools 2014, 23, 1450010. [Google Scholar] [CrossRef]
  8. Babichev, S.; Yasinska-Damri, L.; Liakh, I. A Hybrid Model of Cancer Diseases Diagnosis Based on Gene Expression Data with Joint Use of Data Mining Methods and Machine Learning Techniques. Appl. Sci. 2023, 13, 6022. [Google Scholar] [CrossRef]
  9. Mahajan, A.; Sharma, N.; Aparicio-Obregon, S.; Alyami, H.; Alharbi, A.; Anand, D.; Sharma, M.; Goyal, N. A Novel Stacking-Based Deterministic Ensemble Model for Infectious Disease Prediction. Mathematics 2022, 10, 1714. [Google Scholar] [CrossRef]
  10. Brunese, L.; Mercaldo, F.; Reginelli, A.; Santone, A. A Neural Network-Based Method for Respiratory Sound Analysis and Lung Disease Detection. Appl. Sci. 2022, 12, 3877. [Google Scholar] [CrossRef]
  11. Parakkal Unni, M.; Menon, P.P.; Wilson, M.R.; Tsaneva-Atanasova, K. Ankle push-off based mathematical model for freezing of gait in Parkinson’s disease. Front. Bioeng. Biotechnol. 2020, 8, 552635. [Google Scholar] [CrossRef]
  12. Hayete, B.; Wuest, D.; Laramie, J.; McDonagh, P.; Church, B.; Eberly, S.; Ravina, B. A Bayesian mathematical model of motor and cognitive outcomes in Parkinson’s disease. PLoS ONE 2017, 12, e0178982. [Google Scholar] [CrossRef]
  13. Belozyotov, V.Y.; Zaytsev, V.G. Mathematical modelling of Parkinson’s illness by chaotic dynamics methods. Probl. Math. Model. Theory Differ. Equ. 2017, 9, 21–39. [Google Scholar]
  14. Borah, M.; Das, D.; Gayan, A.; Fenton, F.; Cherry, E. Control and anticontrol of chaos in fractional-order models of Diabetes, HIV, Dengue, Migraine, Parkinson’s and Ebola virus diseases. Chaos Solitons Fractals 2021, 153, 111419. [Google Scholar] [CrossRef]
  15. Danciu, D. A CNN-based approach for a class of non-standard hyperbolic partial differential equations modeling distributed parameters (nonlinear) control systems. Neurocomputing 2015, 164, 56–70. [Google Scholar] [CrossRef]
  16. Mwata-Velu, T.Y.; Avina-Cervantes, J.G.; Cruz-Duarte, J.M.; Rostro-Gonzalez, H.; Ruiz-Pinales, J. Imaginary Finger Movements Decoding Using Empirical Mode Decomposition and a Stacked BiLSTM Architecture. Mathematics 2021, 9, 3297. [Google Scholar] [CrossRef]
  17. Stoean, R.; Ionescu, L.; Stoean, C.; Boicea, M.; Atencia, M.; Joya, G. A deep learning-based surrogate for the xrf approximation of elemental composition within archaeological artefacts before restoration. Procedia Comput. Sci. 2021, 192, 2002–2011. [Google Scholar] [CrossRef]
  18. Atencia, M.; Stoean, R.; Joya, G. Uncertainty quantification through dropout in time series prediction by echo state networks. Mathematics 2020, 8, 1374. [Google Scholar] [CrossRef]
  19. Issa, D.; Demirci, M.F.; Yazici, A. Speech emotion recognition with deep convolutional neural networks. Biomed. Signal Process Control 2020, 59, 101894. [Google Scholar] [CrossRef]
  20. Zhou, J.; Zhou, J.; Ye, H.; Ali, M.L.; Chen, P.; Nguyen, H.T. Yield estimation of soybean breeding lines under drought stress using unmanned aerial vehicle-based imagery and convolutional neural network. Biosyst. Eng. 2021, 204, 90–103. [Google Scholar] [CrossRef]
  21. Khan, Z.A.; Chaudhary, N.I.; Abbasi, W.A.; Ling, S.H.; Raja, M.A.Z. Design of Confidence-Integrated Denoising Auto-Encoder for Personalized Top-N Recommender Systems. Mathematics 2023, 11, 761. [Google Scholar] [CrossRef]
  22. Malik, M.F.; Chang, C.-L.; Chaudhary, N.I.; Khan, Z.A.; Kiani, A.K.; Shu, C.-M.; Raja, M.A.Z. Swarming intelligence heuristics for fractional nonlinear autoregressive exogenous noise systems. Chaos Solitons Fractals 2023, 167, 113085. [Google Scholar] [CrossRef]
  23. Munawar, S.; Javaid, N.; Khan, Z.A.; Chaudhary, N.I.; Raja, M.A.Z.; Milyani, A.H.; Ahmed Azhari, A. Electricity Theft Detection in Smart Grids Using a Hybrid BiGRU–BiLSTM Model with Feature Engineering-Based Preprocessing. Sensors 2022, 22, 7818. [Google Scholar] [CrossRef] [PubMed]
  24. Altaf, F.; Chang, C.-L.; Chaudhary, N.I.; Cheema, K.M.; Raja, M.A.Z.; Shu, C.-M.; Milyani, A.H. Novel Fractional Swarming with Key Term Separation for Input Nonlinear Control Autoregressive Systems. Fractal Fract. 2022, 6, 348. [Google Scholar] [CrossRef]