Deep Learning and Adaptive Control

A special issue of Mathematics (ISSN 2227-7390). This special issue belongs to the section "Mathematics and Computer Science".

Deadline for manuscript submissions: closed (31 December 2022) | Viewed by 25883

Special Issue Editors


Dr. Zhijia Zhao
Guest Editor
School of Mechanical and Electrical Engineering, Guangzhou University, Guangzhou 510006, China
Interests: adaptive control; learning control; flexible mechanical systems

Dr. Zhijie Liu
Guest Editor
Institute of Artificial Intelligence & School of Automation and Electrical Engineering, University of Science and Technology Beijing, Beijing 100083, China
Interests: boundary control of distributed parameter systems; soft robots; intelligent control

Special Issue Information

Dear Colleagues,

Deep learning is a research hotspot in artificial intelligence, machine learning and data science, and it has produced notable achievements in search, machine translation, natural language processing and other related fields. Recent results leave no doubt that deep learning is among the most powerful modeling and control tools we possess; the real question is how to use it for control without losing stability and performance guarantees. As the volume of data to be processed grows, computation becomes more complex and costly, and algorithm performance can degrade through over-fitting. As models grow more complex, their interpretability also decreases, which in turn limits the performance and efficacy of the resulting algorithms and calls for further research. Even though recent successes in deep reinforcement learning (DRL) have shown that deep networks can be powerful value-function approximators, several key questions must be answered before deep learning opens a new frontier in unmanned systems.

This Special Issue on the research progress of deep learning will help to disseminate the most advanced methods, technologies and applications in the field. DRL is closely tied, theoretically, to adaptive control: recent work has shown how DRL can be used to develop new forms of adaptive controllers that address long-standing open problems, such as handling unmatched uncertainties. Every real system carries some degree of uncertainty, and adaptive control is needed to cope with changes in internal characteristics and with external disturbances. Since its inception, adaptive control has kept pace with advances in science and engineering, with new methods and applications introduced over time. This Special Issue therefore aims to present the latest progress in adaptive control theory and application, with emphasis on system modeling, parameter identification, structural analysis, controller design, performance analysis and application results for adaptive control algorithms. We welcome the latest research results in deep learning and adaptive control; topics of interest include, but are not limited to, the keywords listed below.

Dr. Zhijia Zhao
Dr. Zhijie Liu
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Mathematics is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • deep learning, CNN, RNN, transformer model
  • optimization of deep learning
  • applications of deep learning
  • reinforcement-learning-based control
  • applications of reinforcement learning
  • adaptive iterative learning control
  • modeling of adaptive systems
  • design of adaptive controllers
  • application of adaptive control


Published Papers (13 papers)


Research

14 pages, 1107 KiB  
Article
Adaptive Neural Control for an Uncertain 2-DOF Helicopter System with Unknown Control Direction and Actuator Faults
by Bing Wu, Jiale Wu, Weitian He, Guojian Tang and Zhijia Zhao
Mathematics 2022, 10(22), 4342; https://doi.org/10.3390/math10224342 - 18 Nov 2022
Cited by 4 | Viewed by 1622
Abstract
With the rapid development of smart devices and related technology, unmanned aerial vehicles (UAVs) have advanced quickly. The two-degree-of-freedom helicopter system is a typical UAV that is susceptible to uncertainty, unknown control direction and actuator faults. Hence, a novel adaptive neural network (NN) fault-tolerant control scheme is proposed in this paper. Firstly, to compensate for the uncertainty, a radial-basis NN was developed to approximate the unknown continuous function in the controlled system, and a novel weight-adaptation approach is proposed to save computational cost. Secondly, a class of Nussbaum functions was chosen to resolve the unknown-control-direction issue and counteract the effect of the unknown sign of the control coefficient. Subsequently, in response to actuator faults, an adaptive parameter was designed to compensate for the performance loss of the actuators. Through rigorous Lyapunov analyses, the designed control scheme was proven to guarantee that the states of the closed-loop system are semi-globally uniformly bounded and that the controlled system is stable. Finally, a numerical simulation was conducted in MATLAB to further verify the validity of the proposed scheme. Full article
(This article belongs to the Special Issue Deep Learning and Adaptive Control)
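The two building blocks named in the abstract — a radial-basis NN approximator for the unknown dynamics and a Nussbaum-type gain for the unknown control direction — can be sketched compactly. The sketch below is illustrative only: the centres, widths, adaptation gain and the specific Nussbaum function are assumptions, not the paper's design.

```python
import numpy as np

def rbf_features(x, centers, width=0.5):
    """Gaussian radial-basis regressor vector phi(x) for a scalar input x."""
    return np.exp(-(x - centers) ** 2 / (2.0 * width ** 2))

def nussbaum(zeta):
    """A commonly used Nussbaum-type gain, N(zeta) = zeta^2 * cos(zeta).
    Its sign alternates as zeta grows, which is what lets an adaptive law
    'probe' both possible control directions."""
    return zeta ** 2 * np.cos(zeta)

# --- RBF approximation of an unknown smooth function f(x) = sin(x) ---
centers = np.linspace(-3.0, 3.0, 15)
W_hat = np.zeros_like(centers)          # adaptive weights
eta = 0.05                              # assumed adaptation gain

rng = np.random.default_rng(0)
for _ in range(5000):
    x = rng.uniform(-3.0, 3.0)
    phi = rbf_features(x, centers)
    err = np.sin(x) - W_hat @ phi       # approximation error
    W_hat += eta * err * phi            # gradient-type weight update

xs = np.linspace(-3.0, 3.0, 7)
approx = [W_hat @ rbf_features(x, centers) for x in xs]
print("max |f - f_hat| on test points:",
      max(abs(np.sin(x) - a) for x, a in zip(xs, approx)))

# --- the Nussbaum gain changes sign as its argument grows ---
print([round(nussbaum(z), 2) for z in (1.0, 2.0, 4.0, 6.0)])
```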

24 pages, 1276 KiB  
Article
Formation Control with Connectivity Assurance for Missile Swarms by a Natural Co-Evolutionary Strategy
by Junda Chen, Xuejing Lan, Ye Zhou and Jiaqiao Liang
Mathematics 2022, 10(22), 4244; https://doi.org/10.3390/math10224244 - 13 Nov 2022
Cited by 3 | Viewed by 1537
Abstract
Formation control is one of the topics of greatest concern in the realm of swarm intelligence. This paper presents a metaheuristic approach that leverages a natural co-evolutionary strategy to solve the formation control problem for a swarm of missiles. The missile swarm is modeled by a second-order system with a heterogeneous reference target, and the exponential of the resulting error is accumulated into the objective function such that the swarm converges to optimal equilibrium states satisfying specific formation requirements. Focusing on the issues of local optima and unstable evolution, we incorporate a novel model-based policy constraint and a population adaptation strategy that significantly alleviate the slow training and convergence instability of the existing natural co-evolutionary strategy. Applying the Molloy–Reed criterion from the field of network communication, we developed an adaptive topology method that assures connectivity under node failure, and its effectiveness is validated theoretically and experimentally. The experimental results demonstrate that the accuracy of formation flight achieved by this method is competitive with that of conventional control methods and is much more adaptable. More significantly, we show that it is feasible to treat the generic formation control problem as an optimal control problem of finding a Nash equilibrium strategy and to solve it through iterative learning. Full article
(This article belongs to the Special Issue Deep Learning and Adaptive Control)
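For readers unfamiliar with natural evolution strategies, the update used by this family of methods fits in a few lines. The sketch below uses the standard antithetic-sampling gradient estimator on a toy four-agent formation objective; the population size, step sizes and quadratic cost are illustrative assumptions, not the paper's swarm model or its policy constraint.

```python
import numpy as np

def nes_step(theta, fitness, pop=64, sigma=0.1, lr=0.02, rng=None):
    """One natural-evolution-strategy update with antithetic sampling.

    theta   : current policy/formation parameters (flat vector)
    fitness : callable returning a scalar score to MAXIMISE
    """
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal((pop, theta.size))
    scores = np.array([fitness(theta + sigma * e) - fitness(theta - sigma * e)
                       for e in eps])
    grad = (eps.T @ scores) / (2 * pop * sigma)   # score-function gradient estimate
    return theta + lr * grad

# Toy objective: drive 4 agents (2-D positions) to a square formation around
# the origin -- an illustrative stand-in for the missile-swarm cost.
targets = np.array([[1, 1], [1, -1], [-1, -1], [-1, 1]], dtype=float)

def fitness(flat_positions):
    pos = flat_positions.reshape(4, 2)
    return -np.sum((pos - targets) ** 2)          # negative formation error

rng = np.random.default_rng(0)
theta = rng.standard_normal(8)                    # random initial positions
for _ in range(300):
    theta = nes_step(theta, fitness, rng=rng)
print("residual formation error:", -fitness(theta))
```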

21 pages, 8939 KiB  
Article
A Collision Reduction Adaptive Data Rate Algorithm Based on the FSVM for a Low-Cost LoRa Gateway
by Honggang Wang, Peidong Pei, Ruoyu Pan, Kai Wu, Yu Zhang, Jinchao Xiao and Jingfeng Yang
Mathematics 2022, 10(21), 3920; https://doi.org/10.3390/math10213920 - 22 Oct 2022
Cited by 2 | Viewed by 1186
Abstract
LoRa (Long Range), a wireless communication technology for low-power wide-area networks (LPWANs), enables a wide range of IoT applications and inter-device communication thanks to its openness and flexible network deployment. In the actual deployment and operation of LoRa networks, a static link transmission scheme does not make full use of channel resources in a time-varying channel environment, resulting in poor network performance. In this paper, we propose a more effective adaptive data rate (ADR) algorithm for low-cost gateways. We first analyze the impact of different link parameters (RSSI, SNR) on link quality and classify the link quality using a fuzzy support vector machine (FSVM). Secondly, we establish end-device (ED) throughput and energy-consumption models and design different adaptive rate algorithms for each link-quality class, considering both the link-level and MAC-layer performance. Compared with other adaptive rate algorithms, the proposed approach uses machine learning to classify link quality accurately from a small amount of data, and its link-parameter adaptation maximizes throughput while ensuring link stability. The results show that it outperforms the standard LoRaWAN ADR algorithm in both single-ED and multi-ED scenarios in terms of the packet reception rate (PRR) and network throughput. Compared with the LoRaWAN ADR in a 32-ED scenario, the proposed algorithm improves throughput by 34.12% and the packet reception rate by 26%. Full article
(This article belongs to the Special Issue Deep Learning and Adaptive Control)
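As a rough illustration of the classification stage, the sketch below trains a weighted RBF-kernel SVM on synthetic (RSSI, SNR) samples, using per-sample weights as a stand-in for fuzzy membership degrees; the data, the three-class labelling and the weight rule are assumptions, not the paper's FSVM formulation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic (RSSI [dBm], SNR [dB]) samples with three link-quality classes
# (0 = poor, 1 = fair, 2 = good) -- purely illustrative data.
rng = np.random.default_rng(0)
n = 200
rssi = np.concatenate([rng.normal(-120, 4, n), rng.normal(-105, 4, n), rng.normal(-90, 4, n)])
snr  = np.concatenate([rng.normal(-15, 3, n), rng.normal(-5, 3, n), rng.normal(5, 3, n)])
X = np.column_stack([rssi, snr])
y = np.repeat([0, 1, 2], n)

# Fuzzy-membership-style sample weights: samples far from their class mean
# count less, which is the usual FSVM idea (weight rule assumed here).
weights = np.ones(len(y))
for c in (0, 1, 2):
    d = np.linalg.norm(X[y == c] - X[y == c].mean(axis=0), axis=1)
    weights[y == c] = 1.0 / (1.0 + d / d.max())

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0))
clf.fit(X, y, svc__sample_weight=weights)
print("training accuracy:", clf.score(X, y))
```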

17 pages, 5727 KiB  
Article
An Adaptive Control Algorithm Based on Q-Learning for UHF Passive RFID Robots in Dynamic Scenarios
by Honggang Wang, Ruixue Yu, Ruoyu Pan, Peidong Pei, Zhao Han, Nanfeng Zhang and Jingfeng Yang
Mathematics 2022, 10(19), 3574; https://doi.org/10.3390/math10193574 - 30 Sep 2022
Viewed by 1069
Abstract
The Identification State (IS) of Radio Frequency Identification (RFID) robot systems changes continuously with the environment, so improving the identification efficiency of RFID robot systems requires adaptive control of system parameters through real-time evaluation of the IS. This paper first explains the important roles of real-time IS evaluation and adaptive parameter control in RFID robot systems. Secondly, a method for real-time evaluation of the IS of UHF passive RFID robot systems in dynamic scenarios, based on principal component analysis (PCA) and K-Nearest Neighbor (KNN), is proposed, and an experimental scene is established to verify the algorithm. The results show that the accuracy of the PCA-KNN real-time IS evaluation is 92.4% and the running time for a single data sample is 0.258 ms; compared with other algorithms, the proposed evaluation method achieves higher accuracy and a shorter running time. Finally, this paper proposes a Q-learning-based adaptive control algorithm for RFID robot systems. The method dynamically adjusts the reader's transmission power and the robot's moving speed according to the IS fed back by the system; compared with the default parameters, the adaptive control algorithm effectively improves the identification rate of the system, while power consumption is reduced by 36.4% and the time spent decreases by 29.7%. Full article
(This article belongs to the Special Issue Deep Learning and Adaptive Control)
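The adaptive control stage can be pictured as tabular Q-learning over the IS classes, with actions given by (transmit power, moving speed) pairs. The reward model, action grid and learning rates in the sketch below are invented for illustration and do not reproduce the paper's system.

```python
import numpy as np

# Tabular Q-learning sketch: states are the identification-state (IS) classes
# produced by the PCA-KNN evaluator, actions are (transmit power, robot speed)
# pairs.  The reward model below is a made-up stand-in for the real system.
rng = np.random.default_rng(0)
n_states = 5                       # assumed number of IS classes
powers  = [20.0, 26.0, 30.0]       # dBm (illustrative)
speeds  = [0.2, 0.5, 0.8]          # m/s (illustrative)
actions = [(p, v) for p in powers for v in speeds]

Q = np.zeros((n_states, len(actions)))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    """Hypothetical environment: reward favours a high read rate (better in
    'good' IS classes at higher speed) while penalising power draw."""
    power, speed = action
    read_rate = (state + 1) / n_states * speed / max(speeds)
    reward = 10 * read_rate - 0.1 * power
    next_state = rng.integers(n_states)            # IS drifts with the scene
    return next_state, reward

state = rng.integers(n_states)
for _ in range(20000):
    a = rng.integers(len(actions)) if rng.random() < eps else int(Q[state].argmax())
    next_state, r = step(state, actions[a])
    Q[state, a] += alpha * (r + gamma * Q[next_state].max() - Q[state, a])
    state = next_state

for s in range(n_states):
    print(f"IS class {s}: best (power, speed) = {actions[int(Q[s].argmax())]}")
```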

17 pages, 2082 KiB  
Article
Adaptive Fuzzy Iterative Learning Control for Systems with Saturated Inputs and Unknown Control Directions
by Qing-Yuan Xu, Wan-Ying He, Chuang-Tao Zheng, Peng Xu, Yun-Shan Wei and Kai Wan
Mathematics 2022, 10(19), 3462; https://doi.org/10.3390/math10193462 - 22 Sep 2022
Viewed by 1336
Abstract
An adaptive fuzzy iterative learning control (ILC) algorithm is designed for the iteration-varying reference trajectory problem of nonlinear discrete-time systems with input saturation and unknown control directions. Firstly, an adaptive fuzzy iterative learning controller is constructed by combining it with a fuzzy logic system (FLS), which can compensate for the loss caused by input saturation. Then, the discrete Nussbaum gain technique is adopted along the iteration axis and embedded into the learning control method to identify the control direction of the system. Finally, based on a nonincreasing Lyapunov-like function, it is proven that the adaptive iterative learning controller converges asymptotically as the number of iterations tends to infinity, and that the system signals remain bounded throughout the learning process. A simulation example verifies the feasibility and effectiveness of the learning control method. Full article
(This article belongs to the Special Issue Deep Learning and Adaptive Control)
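A minimal, non-fuzzy flavour of iterative learning control with input saturation is sketched below: a P-type update refines the input from trial to trial on an assumed first-order discrete plant. The plant, learning gain and saturation limit are illustrative; the paper's FLS compensation and discrete Nussbaum gain are not reproduced here.

```python
import numpy as np

def saturate(u, u_max=1.5):
    return np.clip(u, -u_max, u_max)

# Toy discrete-time plant x(t+1) = 0.8 x(t) + 0.5 u(t), y = x  (assumed).
def run_trial(u, x0=0.0):
    x, y = x0, np.zeros(len(u))
    for t in range(len(u)):
        y[t] = x
        x = 0.8 * x + 0.5 * saturate(u[t])
    return y

T = 50
t = np.arange(T)
y_ref = np.sin(2 * np.pi * t / T)          # reference trajectory
u = np.zeros(T)                             # iteration-0 input
L = 1.2                                     # assumed learning gain

for k in range(30):                         # iteration axis
    y = run_trial(u)
    e = y_ref - y
    u = saturate(u + L * np.roll(e, -1))    # P-type ILC update (one-step-ahead error)
print("max tracking error after 30 iterations:", np.abs(y_ref - run_trial(u)).max())
```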

24 pages, 12349 KiB  
Article
Double-Loop PID-Type Neural Network Sliding Mode Control of an Uncertain Autonomous Underwater Vehicle Model Based on a Nonlinear High-Order Observer with Unknown Disturbance
by Jiajian Liang, Wenkai Huang, Fobao Zhou, Jiaqiao Liang, Guojian Lin, Endong Xiao, Hongquan Li and Xiaolin Zhang
Mathematics 2022, 10(18), 3332; https://doi.org/10.3390/math10183332 - 14 Sep 2022
Cited by 9 | Viewed by 1548
Abstract
An unknown nonlinear disturbance seriously affects the trajectory tracking of autonomous underwater vehicles (AUVs). Thus, it is critical to eliminate the influence of such disturbances on AUVs. To address this problem, this paper proposes a double-loop proportional–integral–differential (PID) neural network sliding mode control (DLNNSMC). First, a double-loop PID sliding mode surface is proposed, which has a faster convergence speed than other PID sliding mode surfaces. Second, a nonlinear high-order observer and a neural network are combined to observe and compensate for the nonlinear disturbance of the AUV system. Then, the bounded stability of the AUV closed-loop system is analyzed and demonstrated using the Lyapunov method, and the time-domain method is used to verify that the velocity- and position-tracking errors of the AUV converge to zero exponentially. Finally, the radial basis function (RBF) neural network PID sliding mode control (RBFPIDSMC) and the RBF neural network PD sliding mode control (RBFPDSMC) are compared with this method in two trajectory-tracking simulation experiments. In the first experiment, the average Euclidean distance of the position-tracking error for this method was reduced by approximately 73.6% and 75.3%, respectively, compared to those for RBFPDSMC and RBFPIDSMC. In the second experiment, the corresponding reductions were approximately 86.8% and 88.8%. The two experiments show that the proposed control method has strong disturbance-rejection capability and a good tracking effect. The simulation results obtained in the Gazebo environment validate the superiority of this method. Full article
(This article belongs to the Special Issue Deep Learning and Adaptive Control)
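The core idea of a PID-type sliding surface with a smoothed switching law can be illustrated on a one-degree-of-freedom toy plant, as in the sketch below; the gains, boundary-layer width and the plant/disturbance models are assumptions rather than the AUV design in the paper.

```python
import numpy as np

# PID-type sliding surface s = kd*e_dot + kp*e + ki*int(e), with a smoothed
# switching law -- a 1-DOF toy stand-in for the AUV controller (gains assumed).
kp, ki, kd = 4.0, 1.0, 1.0
k_sw, phi = 6.0, 0.05          # switching gain and boundary-layer width
dt = 0.001

x, x_dot, e_int = 0.0, 0.0, 0.0
errs = []
for i in range(20000):
    t = i * dt
    x_ref, x_ref_dot = np.sin(t), np.cos(t)
    e, e_dot = x_ref - x, x_ref_dot - x_dot
    e_int += e * dt
    s = kd * e_dot + kp * e + ki * e_int          # PID sliding surface
    u = k_sw * np.tanh(s / phi)                    # smoothed reaching law
    d = 0.5 * np.sin(5 * t)                        # unknown bounded disturbance
    x_ddot = -x_dot + u + d                        # toy 2nd-order plant (assumed)
    x_dot += x_ddot * dt
    x += x_dot * dt
    errs.append(abs(e))
print("mean |tracking error| over the last second:", np.mean(errs[-1000:]))
```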

16 pages, 1086 KiB  
Article
Anomaly Detection Algorithm Based on Broad Learning System and Support Vector Domain Description
by Qun Huang, Zehua Zheng, Wenhao Zhu, Xiaozhao Fang, Ribo Fang and Weijun Sun
Mathematics 2022, 10(18), 3292; https://doi.org/10.3390/math10183292 - 10 Sep 2022
Viewed by 1571
Abstract
Deep neural network-based autoencoders can effectively extract high-level abstract features with outstanding generalization performance, but they suffer from sparsity of the extracted features, insufficient robustness, greedy layer-wise training and a lack of global optimization. In this study, the broad learning system (BLS) is improved to obtain a new model for data reconstruction. Support Vector Domain Description (SVDD) is one of the best-known one-class classification methods, used to solve problems in which the proportions of sample categories are extremely unbalanced. The SVDD is sensitive to the penalty parameter C, which represents the trade-off between the sphere volume and the number of target data points outside the sphere, and its training process considers only normal samples, which leads to a low recall rate and weak generalization performance. To address these issues, we propose a BLS-based weighted SVDD algorithm (BLSW_SVDD), which introduces reconstruction-error weights and a small number of anomalous samples when training the SVDD model, thus improving the robustness of the model. To evaluate the performance of the BLSW_SVDD model, comparison experiments were conducted on UCI datasets, and the results showed that, in terms of accuracy and F1 score, the algorithm outperforms traditional and improved SVDD algorithms. Full article
(This article belongs to the Special Issue Deep Learning and Adaptive Control)
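A rough, hedged stand-in for the weighted-SVDD idea is to fit scikit-learn's OneClassSVM (a closely related one-class formulation) with per-sample weights that down-weight points with large reconstruction error, while including a few labelled anomalies in training. The weight rule and synthetic data below are assumptions; the BLS reconstruction itself is not implemented.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, size=(500, 5))              # inlier training data
anomalies = rng.normal(4.0, 1.0, size=(25, 5))            # a few known anomalies

X_train = np.vstack([normal, anomalies])

# Stand-in for the BLS reconstruction error: distance from the inlier mean.
# In the paper this would come from the broad-learning-system reconstruction.
recon_err = np.linalg.norm(X_train - normal.mean(axis=0), axis=1)
weights = 1.0 / (1.0 + recon_err)                          # high error -> low weight

model = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
model.fit(X_train, sample_weight=weights)

X_test = np.vstack([rng.normal(0.0, 1.0, size=(100, 5)),
                    rng.normal(4.0, 1.0, size=(100, 5))])
y_true = np.array([1] * 100 + [-1] * 100)                  # +1 inlier, -1 outlier
y_pred = model.predict(X_test)
print("test accuracy:", (y_pred == y_true).mean())
```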

14 pages, 1682 KiB  
Article
Graph Transformer Collaborative Filtering Method for Multi-Behavior Recommendations
by Wenhao Zhu, Yujun Xie, Qun Huang, Zehua Zheng, Xiaozhao Fang, Yonghui Huang and Weijun Sun
Mathematics 2022, 10(16), 2956; https://doi.org/10.3390/math10162956 - 16 Aug 2022
Cited by 1 | Viewed by 1788
Abstract
Graph convolutional networks are widely used in recommendation tasks owing to their ability to learn user and item embeddings using collaborative signals from high-order neighborhoods. Most graph convolutional recommendation models in existing studies specialize in modeling a single type of user–item interaction preference. Meanwhile, graph-convolution-network-based recommendation models are prone to over-smoothing problems when stacking increased numbers of layers. Therefore, in this study we propose a multi-behavior recommendation method based on graph transformer collaborative filtering. This method utilizes an unsupervised subgraph generation model that divides users with similar preferences and their interaction items into subgraphs. Furthermore, it fuses multi-headed attention layers with temporal coding strategies based on the user–item interaction graphs in the subgraphs, such that the learned embeddings can reflect multiple user–item relationships and the potential for dynamic interactions. Finally, multi-behavior recommendation is performed by uniting multi-layer embedding representations. The experimental results on two real-world datasets show that the proposed method outperforms previously developed methods. Full article
(This article belongs to the Special Issue Deep Learning and Adaptive Control)
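The central operator in such a graph transformer is multi-head scaled dot-product attention over (temporally encoded) item embeddings. The numpy sketch below shows that operator with random projections in place of learned weights and a sinusoidal time encoding as one possible "temporal coding strategy"; the dimensions and encodings are illustrative assumptions.

```python
import numpy as np

def multi_head_attention(X, n_heads, rng):
    """Scaled dot-product self-attention with n_heads heads over a sequence of
    embeddings X of shape (seq_len, d_model); projections are random
    placeholders for learned weights."""
    seq_len, d_model = X.shape
    d_head = d_model // n_heads
    Wq, Wk, Wv = (rng.standard_normal((d_model, d_model)) / np.sqrt(d_model)
                  for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    heads = []
    for h in range(n_heads):
        sl = slice(h * d_head, (h + 1) * d_head)
        scores = Q[:, sl] @ K[:, sl].T / np.sqrt(d_head)
        attn = np.exp(scores - scores.max(axis=-1, keepdims=True))
        attn /= attn.sum(axis=-1, keepdims=True)          # softmax over keys
        heads.append(attn @ V[:, sl])
    return np.concatenate(heads, axis=-1)

def time_encoding(timestamps, d_model):
    """Sinusoidal encoding of interaction timestamps (one assumed option for
    the 'temporal coding strategy')."""
    i = np.arange(d_model // 2)
    freqs = 1.0 / (10000 ** (2 * i / d_model))
    ang = np.outer(timestamps, freqs)
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=-1)

rng = np.random.default_rng(0)
d_model, n_items = 32, 10
item_emb = rng.standard_normal((n_items, d_model))         # one user's interacted items
item_emb = item_emb + time_encoding(np.arange(n_items), d_model)
out = multi_head_attention(item_emb, n_heads=4, rng=rng)
print(out.shape)            # (10, 32): relation-aware item representations
```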

20 pages, 7080 KiB  
Article
Design and Control Strategy of Soft Robot Based on Gas–Liquid Phase Transition Actuator
by Guojian Lin, Wenkai Huang, Chuanshuai Hu, Junlong Xiao, Fobao Zhou, Xiaolin Zhang, Jiajian Liang and Jiaqiao Liang
Mathematics 2022, 10(16), 2847; https://doi.org/10.3390/math10162847 - 10 Aug 2022
Cited by 4 | Viewed by 1992
Abstract
In this paper, a soft robot driven by a gas–liquid phase transition actuator with a new structure is designed. The robot is driven by the pressure generated by an electrically induced ethanol phase transition, and this gas–liquid phase transition drive was found to generate a large driving force using only a low voltage. Compared with the pneumatic drive of a traditional soft robot, the gas–liquid phase transition-driven soft robot does not require a complex circuit system or a bulky external air pump, making its overall structure more compact. At the same time, owing to the new actuator structure, the soft robot has good gas tightness and a shorter recovery time. A deep reinforcement learning control strategy is also added so that the soft robot equipped with this actuator can better grip objects of different sizes and weights. Full article
(This article belongs to the Special Issue Deep Learning and Adaptive Control)

15 pages, 2348 KiB  
Article
Deep Deterministic Policy Gradient-Based Active Disturbance Rejection Controller for Quad-Rotor UAVs
by Kai Zhao, Jia Song, Yunlong Hu, Xiaowei Xu and Yang Liu
Mathematics 2022, 10(15), 2686; https://doi.org/10.3390/math10152686 - 29 Jul 2022
Cited by 4 | Viewed by 1433
Abstract
Thanks to their hovering and vertical take-off and landing abilities, quadrotor unmanned aerial vehicles (UAVs) are receiving a great deal of attention. As the functions of UAVs diversify, the requirements for flight performance, with higher stability and maneuverability, are increasing. To address parameter uncertainty and external disturbance, a deep deterministic policy gradient-based active disturbance rejection controller (DDPG-ADRC) is proposed. The total disturbance can be compensated dynamically by adjusting the controller bandwidth and the estimate of the system parameters online. In this way, the trade-off between disturbance rejection and response speed can be managed better than with the traditional ADRC. The parameter-tuning process is demonstrated through simulations of tracking a step command and a sine sweep under ideal and disturbed conditions. Further analysis shows that the proposed DDPG-ADRC achieves better performance. Full article
(This article belongs to the Special Issue Deep Learning and Adaptive Control)
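The part being tuned by DDPG is easiest to see in a plain linear ADRC: a third-order extended state observer estimates the "total disturbance", and the observer and controller bandwidths are exactly the knobs an RL agent could adjust online. The single-axis plant, gains and disturbance in the sketch below are assumptions for illustration.

```python
import numpy as np

# Linear ADRC for one attitude axis, modelled as y_ddot = b0*u + f(t),
# where f lumps model error and external disturbance ("total disturbance").
b0 = 10.0                 # assumed control gain
wo, wc = 40.0, 8.0        # observer / controller bandwidths (the DDPG-tuned knobs)
beta1, beta2, beta3 = 3 * wo, 3 * wo ** 2, wo ** 3   # ESO gains
kp, kd = wc ** 2, 2 * wc                             # PD gains on the cleaned plant

dt = 0.001
z1 = z2 = z3 = 0.0        # ESO states: y_hat, y_dot_hat, f_hat
y = y_dot = 0.0
errs = []
for i in range(10000):
    t = i * dt
    r = 1.0                                     # step attitude command [rad]
    # control law: cancel the estimated total disturbance, then PD
    u = (kp * (r - z1) - kd * z2 - z3) / b0
    # "true" plant with an unknown disturbance f(t)
    f = -2.0 * y_dot + 3.0 * np.sin(2 * t)
    y_ddot = b0 * u + f
    y_dot += y_ddot * dt
    y += y_dot * dt
    # extended state observer update
    e_obs = y - z1
    z1 += dt * (z2 + beta1 * e_obs)
    z2 += dt * (z3 + b0 * u + beta2 * e_obs)
    z3 += dt * (beta3 * e_obs)
    errs.append(abs(r - y))
print("tracking error after 10 s:", errs[-1])
```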

12 pages, 464 KiB  
Article
Contextual Graph Attention Network for Aspect-Level Sentiment Classification
by Yuqing Miao, Ronghai Luo, Lin Zhu, Tonglai Liu, Wanzhen Zhang, Guoyong Cai and Ming Zhou
Mathematics 2022, 10(14), 2473; https://doi.org/10.3390/math10142473 - 15 Jul 2022
Cited by 7 | Viewed by 1310
Abstract
Aspect-level sentiment classification aims to predict the sentiment polarities towards the target aspects given in sentences. To address the issues of insufficient semantic information extraction and the high computational complexity of attention mechanisms in existing deep-learning-based aspect-level sentiment classification models, a contextual graph attention network (CGAT) is proposed. The proposed model adopts two graph attention networks to aggregate syntactic structure information into target aspects and employs a contextual attention network to extract semantic information from sentence-aspect sequences, aiming to generate aspect-sensitive text features. In addition, a syntactic attention mechanism based on syntactic relative distance is proposed, with the Gaussian function introduced as a syntactic weight function, which reduces computational complexity and effectively highlights the words syntactically related to the aspects. Experiments on three public sentiment datasets show that the proposed model makes better use of semantic and syntactic structure information to improve the accuracy of sentiment classification. Full article
(This article belongs to the Special Issue Deep Learning and Adaptive Control)
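The Gaussian syntactic weight function is simple enough to show directly: words are weighted by their syntactic relative distance to the aspect term through a Gaussian kernel, so distant words are suppressed without any pairwise attention computation. The width and the example distances below are assumed values.

```python
import numpy as np

def gaussian_syntactic_weights(distances, sigma=2.0):
    """Weight words by syntactic relative distance to the aspect term with a
    Gaussian kernel, so nearby words dominate; sigma is an assumed width."""
    d = np.asarray(distances, dtype=float)
    return np.exp(-d ** 2 / (2.0 * sigma ** 2))

# Example: dependency-tree distances of each word to the aspect "battery"
# in "the battery life of this laptop is great" (distances are illustrative).
dist = [1, 0, 1, 2, 3, 3, 2, 2]
w = gaussian_syntactic_weights(dist)
print(np.round(w / w.sum(), 3))      # normalised attention-style weights
```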

10 pages, 494 KiB  
Article
Spatial Channel Attention for Deep Convolutional Neural Networks
by Tonglai Liu, Ronghai Luo, Longqin Xu, Dachun Feng, Liang Cao, Shuangyin Liu and Jianjun Guo
Mathematics 2022, 10(10), 1750; https://doi.org/10.3390/math10101750 - 20 May 2022
Cited by 23 | Viewed by 5343
Abstract
Recently, attention mechanisms combining spatial and channel information have been widely used in various deep convolutional neural networks (CNNs), proving their great potential for improving model performance. However, such methods usually use 2D global pooling operations to compress spatial information, or scaling methods to reduce the computational overhead of channel attention, which results in severe information loss. Therefore, we propose a spatial-channel attention mechanism that captures cross-dimensional interaction, does not involve dimensionality reduction, and brings significant performance improvement with negligible computational overhead. The proposed attention mechanism can be seamlessly integrated into any convolutional neural network, since it is a lightweight, general module. Our method achieves a performance improvement of 2.08% on ResNet and 1.02% on MobileNetV2 in top-1 error rate on the ImageNet dataset. Full article
(This article belongs to the Special Issue Deep Learning and Adaptive Control)
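A lightweight module in this spirit — channel and spatial gates computed without any dimensionality-reduction bottleneck — can be written in a few lines of PyTorch, as below. This is an illustrative stand-in, not the authors' exact attention block; the kernel sizes and pooling choices are assumptions.

```python
import torch
import torch.nn as nn

class SpatialChannelAttention(nn.Module):
    """Illustrative attention sketch: channel and spatial gates computed
    without a dimensionality-reduction MLP (not the paper's exact module)."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        # channel gate: 1-D conv over the channel descriptor (no reduction MLP)
        self.channel_conv = nn.Conv1d(1, 1, kernel_size=3, padding=1, bias=False)
        # spatial gate: 2-D conv over stacked avg/max channel maps
        self.spatial_conv = nn.Conv2d(2, 1, kernel_size,
                                      padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # --- channel attention: global average pool -> 1-D conv -> sigmoid ---
        ch_desc = x.mean(dim=(2, 3)).unsqueeze(1)              # (b, 1, c)
        ch_gate = self.sigmoid(self.channel_conv(ch_desc))      # (b, 1, c)
        x = x * ch_gate.transpose(1, 2).unsqueeze(-1)           # broadcast over h, w
        # --- spatial attention: per-pixel avg/max over channels -> conv ---
        sp_desc = torch.cat([x.mean(dim=1, keepdim=True),
                             x.amax(dim=1, keepdim=True)], dim=1)   # (b, 2, h, w)
        sp_gate = self.sigmoid(self.spatial_conv(sp_desc))          # (b, 1, h, w)
        return x * sp_gate

# Drop-in usage on a ResNet-style feature map
feat = torch.randn(4, 64, 32, 32)
print(SpatialChannelAttention()(feat).shape)   # torch.Size([4, 64, 32, 32])
```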

20 pages, 1598 KiB  
Article
Survival Risk Prediction of Esophageal Cancer Based on the Kohonen Network Clustering Algorithm and Kernel Extreme Learning Machine
by Yanfeng Wang, Haohao Wang, Sanyi Li and Lidong Wang
Mathematics 2022, 10(9), 1367; https://doi.org/10.3390/math10091367 - 19 Apr 2022
Cited by 7 | Viewed by 1857
Abstract
Accurate prediction of the survival risk level of patients with esophageal cancer is significant for the selection of appropriate treatment methods, as it contributes to improving the quality of life and survival chances of patients. However, considering that blood-index characteristics vary with individuals according to their age, personal habits, living environment, etc., a single unified artificial intelligence prediction model is not precisely adequate. In order to enhance the precision of survival-risk prediction for esophageal cancer, this study proposes a model based on the Kohonen network clustering algorithm and the kernel extreme learning machine (KELM), aiming to classify the tested population into five categories and provide better efficiency through machine learning. Firstly, the Kohonen network clustering method was used to cluster the patient samples, yielding five sample types. Secondly, patients were divided into two risk levels based on 5-year net survival. Then, a Taylor expansion was used to analyze the influence of different activation functions on the KELM modeling effect, with experimental verification, and the RBF was selected as the activation function of the KELM. Finally, the adaptive mutation sparrow search algorithm (AMSSA) was used to optimize the model parameters. The experimental results were compared with the artificial bee colony optimized support vector machine (ABC-SVM), the three-layer random forest (TLRF), the grey relational analysis–particle swarm optimization support vector machine (GP-SVM) and the mixed-effects Cox model (Cox-LMM). The results showed that the prediction model proposed in this study had certain advantages in terms of prediction accuracy and running time, and could support medical personnel in choosing the treatment mode for esophageal cancer patients. Full article
(This article belongs to the Special Issue Deep Learning and Adaptive Control)
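The KELM part of the pipeline has a closed-form solution: with kernel matrix K, regularisation C and targets T, the output weights are β = (K + I/C)⁻¹T. The sketch below implements that formula with an RBF kernel on toy two-risk-level data; the clustering, activation-function analysis and AMSSA tuning from the paper are not reproduced.

```python
import numpy as np

def rbf_kernel(A, B, gamma=0.5):
    """Gaussian kernel matrix between row-sample matrices A and B."""
    d2 = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
    return np.exp(-gamma * d2)

class KELM:
    """Kernel extreme learning machine: beta = (K + I/C)^-1 T, f(x) = k(x) beta.
    Binary risk levels are encoded as +/-1 here (an assumed convention)."""

    def __init__(self, C=100.0, gamma=0.5):
        self.C, self.gamma = C, gamma

    def fit(self, X, t):
        self.X = X
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(K + np.eye(len(X)) / self.C, t)
        return self

    def predict(self, Xq):
        return np.sign(rbf_kernel(Xq, self.X, self.gamma) @ self.beta)

# Toy two-risk-level data standing in for the clustered blood-index features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1, 1, (100, 6)), rng.normal(1, 1, (100, 6))])
t = np.array([-1.0] * 100 + [1.0] * 100)
model = KELM().fit(X, t)
print("training accuracy:", (model.predict(X) == t).mean())
```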
