Neural Network Method of Controllers’ Parametric Optimization with Variable Structure and Semi-Permanent Integration Based on the Computation of Second-Order Sensitivity Functions
Abstract
1. Introduction and Related Works
2. Materials and Methods
3. Case Study
4. Discussion
4.1. Generalization of Results
4.2. Limitations and Further Research
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Kulyk, M.S.; Kozlov, V.V.; Volianska, L.G. Automation control system of technical condition of gas turbine engine compressor. Aerosp. Tech. Technol. 2019, 8, 121–128. [Google Scholar] [CrossRef]
- Tovkach, S.S. Control laws of the aviation gas turbine engine. Electron. Control. Syst. 2022, 2, 20–25. [Google Scholar] [CrossRef]
- Chao, C.-Y.; Chen, C.-W.; Wang, B.-C. Variable Structure Control of a Turbojet Engine. In Proceedings of the 1986 American Control Conference, Seattle, WA, USA, 18–20 June 1986; pp. 482–487. [Google Scholar] [CrossRef]
- Delgado-Reyes, G.; Guevara-Lopez, P.; Loboda, I.; Hernandez-Gonzalez, L.; Ramirez-Hernandez, J.; Valdez-Martinez, J.-S.; Lopez-Chau, A. State Vector Identification of Hybrid Model of a Gas Turbine by Real-Time Kalman Filter. Mathematics 2020, 8, 659. [Google Scholar] [CrossRef]
- Nadweh, S.; Khaddam, O.; Hayek, G.; Atieh, B.; Haes Alhelou, H. Optimization of P & PI Controller Parameters for Variable Speed Drive Systems Using a Flower Pollination Algorithm. Heliyon 2020, 6, e04648. [Google Scholar] [CrossRef]
- Mešanović, A.; Münz, U.; Szabo, A.; Mangold, M.; Bamberger, J.; Metzger, M.; Heyde, C.; Krebs, R.; Findeisen, R. Structured Controller Parameter Tuning for Power Systems. Control Eng. Pract. 2020, 101, 104490. [Google Scholar] [CrossRef]
- Sun, J.; Liu, J.; Miao, M.; Lin, H. Research on Parameter Optimization Method of Sliding Mode Controller for the Grid-Connected Composite Device Based on IMFO Algorithm. Sensors 2022, 23, 149. [Google Scholar] [CrossRef]
- Kulikov, V.V.; Kutsyi, A.P.; Kutsyi, N.N. The Gradient-Based Algorithm for Parametric Optimization of a Variable Structure PI Controller with Dead Band. Mekhatronika Avtom. Upr. 2020, 21, 530–534. [Google Scholar] [CrossRef]
- Liang, X.; Bao, D.; Ge, S.S. Modeling of Neuro-Fuzzy System with Optimization Algorithm as a Support in System Boundary Capability Online Assessment. IEEE Trans. Circuits Syst. II Express Briefs 2023, 70, 2974–2978. [Google Scholar] [CrossRef]
- Kinga, S.; Megahed, T.F.; Kanaya, H.; Mansour, D.-E.A. A New Voltage Sensitivity-Based Distributed Feedback Online Optimization for Voltage Control in Active Distribution Networks. Comput. Electr. Eng. 2024, 119, 109574. [Google Scholar] [CrossRef]
- Toan, N.T.; Thuy, L.Q.; Kim, D.S. Sensitivity Analysis in Parametric Multiobjective Discrete-Time Control via Fréchet Subdifferential Calculus of the Frontier Map. J. Comput. Appl. Math. 2023, 418, 114662. [Google Scholar] [CrossRef]
- Xu, Y.; Tang, H.; Chen, M. Design Method of Optimal Control Schedule for the Adaptive Cycle Engine Steady-State Performance. Chin. J. Aeronaut. 2022, 35, 148–164. [Google Scholar] [CrossRef]
- Vladov, S.; Shmelov, Y.; Yakovliev, R. Optimization of Helicopters Aircraft Engine Working Process Using Neural Networks Technologies. CEUR Workshop Proc. 2022, 3171, 1639–1656. Available online: https://ceur-ws.org/Vol-3171/paper117.pdf (accessed on 23 December 2024).
- Zhen, M.; Dong, X.; Liu, X.; Tan, C. Accelerated Formulation of Optimal Control Law for Adaptive Cycle Engines: A Novel Design Methodology. Aerosp. Sci. Technol. 2024, 148, 109076. [Google Scholar] [CrossRef]
- Nikolaidis, T.; Li, Z.; Jafari, S. Advanced Constraints Management Strategy for Real-Time Optimization of Gas Turbine Engine Transient Performance. Appl. Sci. 2019, 9, 5333. [Google Scholar] [CrossRef]
- Cannarsa, P.; Frankowska, H.; Scarinci, T. Second-Order Sensitivity Relations and Regularity of the Value Function for Mayer’s Problem in Optimal Control. SIAM J. Control Optim. 2015, 53, 3642–3672. [Google Scholar] [CrossRef]
- Deng, L. Second Order Necessary Conditions and Sensitivity Relations for Optimal Control Problems on Riemannian Manifolds. In Proceedings of the 2018 37th Chinese Control Conference (CCC), Wuhan, China, 25–27 July 2018; pp. 2138–2143. [Google Scholar] [CrossRef]
- Vassiliadis, V.S.; Canto, E.B.; Banga, J.R. Second-Order Sensitivities of General Dynamic Systems with Application to Optimal Control Problems. Chem. Eng. Sci. 1999, 54, 3851–3860. [Google Scholar] [CrossRef]
- Liang, X.; Bao, D.; Yang, Z. State Evaluation Method for Complex Task Network Models. Inf. Sci. 2024, 653, 119796. [Google Scholar] [CrossRef]
- Slema, S.; Errachdi, A.; Benrejeb, M. A Radial Basis Function Neural Network Model Reference Adaptive Controller for Nonlinear Systems. In Proceedings of the 2018 15th International Multi-Conference on Systems, Signals & Devices (SSD), Yasmine Hammamet, Tunisia, 19–22 March 2018. [Google Scholar] [CrossRef]
- Zarzycki, K.; Ławryńczuk, M. Long Short-Term Memory Neural Networks for Modeling Dynamical Processes and Predictive Control: A Hybrid Physics-Informed Approach. Sensors 2023, 23, 8898. [Google Scholar] [CrossRef]
- Fetanat, M.; Stevens, M.; Jain, P.; Hayward, C.; Meijering, E.; Lovell, N.H. Fully Elman Neural Network: A Novel Deep Recurrent Neural Network Optimized by an Improved Harris Hawks Algorithm for Classification of Pulmonary Arterial Wedge Pressure. IEEE Trans. Biomed. Eng. 2022, 69, 1733–1744. [Google Scholar] [CrossRef]
- Atencia, M.; Joya, G. Hopfield networks: From optimization to adaptive control. In Proceedings of the 2015 International Joint Conference on Neural Networks (IJCNN), Killarney, Ireland, 12–17 July 2015. [Google Scholar] [CrossRef]
- Lee, G.K. On the use of Hamming distance tuning for the generalized adaptive neural network fuzzy inference controller with evolutionary simulated annealing. In Proceedings of the 2011 IEEE International Conference on Information Reuse & Integration, Las Vegas, NV, USA, 3–5 August 2011. [Google Scholar] [CrossRef]
- Kulikov, V.V.; Kutsyi, N.N. Search-Free Algorithm for Parametric Optimization of PI Controller with Semi-Permanent Integration. Ipolytech J. 2018, 22, 98–108. [Google Scholar] [CrossRef]
- Kutsyi, N.N.; Osipova, E.A. The sensitivity analyzers of cascade control system with two integral pulse-duration controllers of cable insulation thickness stabilization. Modern Technologies. System Analysis. Modeling 2011, 4, 111–117. [Google Scholar]
- Kulikov, V.V.; Kutsyi, N.N. Application of the extended frequency response method for parametric synthesis of a proportional-integral difference controller. Inf. Math. Technol. Sci. Manag. 2021, 1, 37–42. [Google Scholar]
- Kulikov, V.V.; Kutsyi, N.N.; Osipova, E.A. Parametric Optimization of the PID Controller with Restriction Based on the Method of Conjugate Polak–Polyak–Ribier Gradients. Mekhatronika Avtom. Upr. 2023, 24, 240–248. [Google Scholar] [CrossRef]
- Kuz’michev, V.; Krupenich, I.; Filinov, E.; Tkachenko, A. Optimization of Gas Turbine Engine Control Using Dynamic Programming. MATEC Web Conf. 2018, 220, 03002. [Google Scholar] [CrossRef]
- Kulikov, V.V.; Kutsyi, A.P.; Kutsyi, N.N. Formation of Algorithm of Automatic Parametric Optimization of PI Controller with Variable Parameters While Using Internal Model Control. IOP Conf. Ser. Mater. Sci. Eng. 2021, 1151, 012031. [Google Scholar] [CrossRef]
- Kulikov, V.V.; Kutsyi, N.N.; Podkorytov, A.A. Gradient-Based Algorithm for Parametric Optimization of Variable-Structure PI Controller When Using a Reference Model. Adv. Intell. Syst. Comput. 2020, 1295, 938–949. [Google Scholar] [CrossRef]
- Vladov, S.; Scislo, L.; Sokurenko, V.; Muzychuk, O.; Vysotska, V.; Osadchy, S.; Sachenko, A. Neural Network Signal Integration from Thermogas-Dynamic Parameter Sensors for Helicopters Turboshaft Engines at Flight Operation Conditions. Sensors 2024, 24, 4246. [Google Scholar] [CrossRef]
- Vladov, S.; Scislo, L.; Sokurenko, V.; Muzychuk, O.; Vysotska, V.; Sachenko, A.; Yurko, A. Helicopter Turboshaft Engines’ Gas Generator Rotor R.P.M. Neuro-Fuzzy On-Board Controller Development. Energies 2024, 17, 4033. [Google Scholar] [CrossRef]
- Vladov, S.; Sachenko, A.; Sokurenko, V.; Muzychuk, O.; Vysotska, V. Helicopters Turboshaft Engines Neural Network Modeling under Sensor Failure. J. Sens. Actuator Netw. 2024, 13, 66. [Google Scholar] [CrossRef]
- Vladov, S.; Shmelov, Y.; Yakovliev, R.; Stushchankyi, Y.; Havryliuk, Y. Neural Network Method for Controlling the Helicopters Turboshaft Engines Free Turbine Speed at Flight Modes. CEUR Workshop Proc. 2023, 3426, 89–108. Available online: https://ceur-ws.org/Vol-3426/paper8.pdf (accessed on 30 December 2024).
- Vladov, S.; Shmelov, Y.; Petchenko, M. A Neuro-Fuzzy Expert System for the Control and Diagnostics of Helicopters Aircraft Engines Technical State. CEUR Workshop Proc. 2021, 3013, 40–52. Available online: https://ceur-ws.org/Vol-3013/20210040.pdf (accessed on 2 January 2025).
- Vladov, S.; Yakovliev, R.; Hubachov, O.; Rud, J. Neuro-Fuzzy System for Detection Fuel Consumption of Helicopters Turboshaft Engines. CEUR Workshop Proc. 2024, 3628, 55–72. Available online: https://ceur-ws.org/Vol-3628/paper5.pdf (accessed on 5 January 2025).
- Vladov, S.; Petchenko, M.; Shmelov, Y.; Drozdova, S.; Yakovliev, R. Helicopters Turboshaft Engines Parameters Identification at Flight Modes Using Neural Networks. In Proceedings of the 2022 IEEE 17th International Conference on Computer Sciences and Information Technologies (CSIT), Lviv, Ukraine, 10–12 November 2022; pp. 5–8. [Google Scholar] [CrossRef]
- Pasieka, M.; Grzesik, N.; Kuźma, K. Simulation modeling of fuzzy logic controller for aircraft engines. Int. J. Comput. 2017, 16, 27–33. [Google Scholar] [CrossRef]
- Aygun, H.; Caliskan, H. Evaluating and Modelling of Thermodynamic and Environmental Parameters of a Gas Turbine Engine and Its Components. J. Clean. Prod. 2022, 365, 132762. [Google Scholar] [CrossRef]
- Vlasenko, D.; Inkarbaieva, O.; Peretiatko, M.; Kovalchuk, D.; Sereda, O. Helicopter Radio System for Low Altitudes and Flight Speed Measuring with Pulsed Ultra-Wideband Stochastic Sounding Signals and Artificial Intelligence Elements. Radioelectron. Comput. Syst. 2023, 3, 48–59. [Google Scholar] [CrossRef]
- Vladov, S.; Banasik, A.; Sachenko, A.; Kempa, W.M.; Sokurenko, V.; Muzychuk, O.; Pikiewicz, P.; Molga, A.; Vysotska, V. Intelligent Method of Identifying the Nonlinear Dynamic Model for Helicopter Turboshaft Engines. Sensors 2024, 24, 6488. [Google Scholar] [CrossRef]
- Kovtun, V.; Grochla, K.; Połys, K. Investigation of the Information Interaction of the Sensor Network End IoT Device and the Hub at the Transport Protocol Level. Electronics 2023, 12, 4662. [Google Scholar] [CrossRef]
- Rusyn, B.; Lutsyk, O.; Kosarevych, R.; Kapshii, O.; Karpin, O.; Maksymyuk, T.; Gazda, J. Rethinking Deep CNN Training: A Novel Approach for Quality-Aware Dataset Optimization. IEEE Access 2024, 12, 137427–137438. [Google Scholar] [CrossRef]
- Balakrishnan, N.; Voinov, V.; Nikulin, M.S. Chapter 2—Pearson’s Sum and Pearson-Fisher Test. In Chi-Squared Goodness of Fit Tests with Applications; Balakrishnan, N., Voinov, V., Nikulin, M.S., Eds.; Academic Press: Waltham, MA, USA, 2013; pp. 11–26. [Google Scholar] [CrossRef]
- Kim, H.-Y. Statistical Notes for Clinical Researchers: Chi-Squared Test and Fisher’s Exact Test. Restor. Dent. Endod. 2017, 42, 152. [Google Scholar] [CrossRef]
- Dyvak, M.; Pukas, A.; Oliynyk, I.; Melnyk, A. Selection the “saturated” block from interval system of linear algebraic equations for recurrent laryngeal nerve identification. In Proceedings of the 2018 IEEE Second International Conference on Data Stream Mining & Processing (DSMP), Lviv, Ukraine, 2018; pp. 444–448. [Google Scholar] [CrossRef]
- Berko, A.; Alieksieiev, V.; Holdovanskyi, V. Determination-based correlation coefficient. CEUR Workshop Proc. 2024, 3711, 198–224. Available online: https://ceur-ws.org/Vol-3711/paper12.pdf (accessed on 11 January 2025).
- Morozov, V.V.; Kalnichenko, O.V.; Mezentseva, O.O. The method of interaction modeling on basis of deep learning the neural networks in complex IT-projects. Int. J. Comput. 2020, 19, 88–96. [Google Scholar] [CrossRef]
- Stefanovic, C.M.; Armada, A.G.; Costa-Perez, X. Second Order Statistics of -Fisher-Snedecor Distribution and Their Application to Burst Error Rate Analysis of Multi-Hop Communications. IEEE Open J. Commun. Soc. 2022, 3, 2407–2424. [Google Scholar] [CrossRef]
- de Voogt, A.; Nero, K. Technical Failures in Helicopters: Non-Powerplant-Related Accidents. Safety 2023, 9, 10. [Google Scholar] [CrossRef]
- de Voogt, A.; Amour, E.S. Safety of Twin-Engine Helicopters: Risks and Operational Specificity. Saf. Sci. 2021, 136, 105169. [Google Scholar] [CrossRef]
- Rusyn, B.; Lutsyk, O.; Kosarevych, R.; Obukh, Y. Application Peculiarities of Deep Learning Methods in the Problem of Big Datasets Classification. Lect. Notes Electr. Eng. 2021, 831, 493–506. [Google Scholar] [CrossRef]
- Lytvyn, V.; Dudyk, D.; Peleshchak, I.; Peleshchak, R.; Pukach, P. Influence of the Number of Neighbours on the Clustering Metric by Oscillatory Chaotic Neural Network with Dipole Synaptic Connections. CEUR Workshop Proc. 2024, 3664, 24–34. Available online: https://ceur-ws.org/Vol-3664/paper3.pdf (accessed on 13 January 2025).
- Babichev, S.; Krejci, J.; Bicanek, J.; Lytvynenko, V. Gene expression sequences clustering based on the internal and external clustering quality criteria. In Proceedings of the 2017 12th International Scientific and Technical Conference on Computer Sciences and Information Technologies (CSIT), Lviv, Ukraine, 5–8 September 2017. [Google Scholar] [CrossRef]
- Wong, H.-T.; Mai, J.; Wang, Z.; Leung, C.-S. Generalized M-Sparse Algorithms for Constructing Fault Tolerant RBF Networks. Neural Netw. 2024, 180, 106633. [Google Scholar] [CrossRef]
- Wysocki, A.; Lawrynczuk, M. Jordan Neural Network for Modelling and Predictive Control of Dynamic Systems. In Proceedings of the 2015 20th International Conference on Methods and Models in Automation and Robotics (MMAR), Miedzyzdroje, Poland, 24–27 August 2015; pp. 145–150. [Google Scholar] [CrossRef]
- Ren, G.; Cao, Y.; Wen, S.; Huang, T.; Zeng, Z. A Modified Elman Neural Network with a New Learning Rate Scheme. Neurocomputing 2018, 286, 11–18. [Google Scholar] [CrossRef]
- Checiu, D.; Bode, M.; Khalil, R. Reconstructing Creative Thoughts: Hopfield Neural Networks. Neurocomputing 2024, 575, 127324. [Google Scholar] [CrossRef]
- Khristodulo, O.I.; Makhmutov, A.A.; Sazonova, T.V. Use Algorithm Based at Hamming Neural Network Method for Natural Objects Classification. Procedia Comput. Sci. 2017, 103, 388–395. [Google Scholar] [CrossRef]
- Turchenko, V.; Chalmers, E.; Luczak, A. A deep convolutional auto-encoder with pooling—Unpooling layers in caffe. Int. J. Comput. 2019, 1, 8–31. [Google Scholar] [CrossRef]
- Kovtun, V.; Altameem, T.; Al-Maitah, M.; Kempa, W. Entropy-Metric Estimation of the Small Data Models with Stochastic Parameters. Heliyon 2024, 10, e24708. [Google Scholar] [CrossRef] [PubMed]
- Cherrat, E.M.; Alaoui, R.; Bouzahir, H. Score fusion of finger vein and face for human recognition based on convolutional neural network model. Int. J. Comput. 2020, 19, 11–19. [Google Scholar] [CrossRef]
- Kosarevych, R.; Lutsyk, O.; Rusyn, B.; Alokhina, O.; Maksymyuk, T.; Gazda, J. Spatial Point Patterns Generation on Remote Sensing Data Using Convolutional Neural Networks with Further Statistical Analysis. Sci. Rep. 2022, 12, 14341. [Google Scholar] [CrossRef]
- Sholomii, Y.; Yakovyna, V. Quality Assessment and Assurance of Machine Learning Systems: A Comprehensive Approach. Commun. Comput. Inf. Sci. 2023, 1980, 265–275. [Google Scholar] [CrossRef]
- Burov, Y. The Introduction of Attentional Mechanism in the Situational Awareness Process. CEUR Workshop Proc. 2022, 3171, 1076–1086. Available online: https://ceur-ws.org/Vol-3171/paper78.pdf (accessed on 15 January 2025).
- Berko, A.; Alieksieiev, V.; Dovbysh, A. Performance evaluation and analysis with code benchmarking and generative AI. CEUR Workshop Proc. 2024, 3711, 169–183. Available online: https://ceur-ws.org/Vol-3711/paper10.pdf (accessed on 15 January 2025).
- Bashtyk, Y.; Campos, J.; Fechan, A.; Konstantyniv, S.; Yakovyna, V. Computer monitoring of physical and chemical parameters of the environment using computer vision systems: Problems and prospects. CEUR Workshop Proc. 2020, 2753, 437–442. [Google Scholar]
- Shakhovska, N.; Yakovyna, V. Feature Selection and Software Defect Prediction by Different Ensemble Classifiers. Lect. Notes Comput. Sci. 2021, 12923, 307–313. [Google Scholar] [CrossRef]
- Shakhovska, N.; Yakovyna, V.; Kryvinska, N. An improved software defect prediction algorithm using self-organizing maps combined with hierarchical clustering and data preprocessing. Lect. Notes Comput. Sci. 2020, 12391, 414–424. [Google Scholar] [CrossRef]
- Lipyanina, H.; Sachenko, S.; Lendyuk, T.; Sachenko, T. Targeting Model of HEI Video Marketing based on Classification Tree. CEUR Workshop Proc. 2020, 2732, 487–498. Available online: https://ceur-ws.org/Vol-2732/20200487.pdf (accessed on 16 January 2025).
- Heßling, H. The challenge of managing and analyzing big data. Int. J. Comput. 2014, 12, 204–209. [Google Scholar] [CrossRef]
- Siroky, P.; Hartansky, R.; Petrilak, J. Possibilities of increasing the reliability by methods of software and time redundancy. Int. J. Comput. 2014, 2, 35–40. [Google Scholar] [CrossRef]
- Fan, L.; Yang, G.; Zhang, Y.; Gao, L.; Wu, B. A Novel Tolerance Optimization Approach for Compressor Blades: Incorporating the Measured out-of-Tolerance Error Data and Aerodynamic Performance. Aerosp. Sci. Technol. 2025, 158, 109920. [Google Scholar] [CrossRef]
- Dyvak, M.; Manzhula, V.; Melnyk, A.; Rusyn, B.; Spivak, I. Modeling the Efficiency of Biogas Plants by Using an Interval Data Analysis Method. Energies 2024, 17, 3537. [Google Scholar] [CrossRef]
- Shubyn, B.; Maksymyuk, T.; Gazda, J.; Rusyn, B.; Mrozek, D. Federated Learning: A Solution for Improving Anomaly Detection Accuracy of Autonomous Guided Vehicles in Smart Manufacturing. Lect. Notes Electr. Eng. 2024, 1198, 746–761. [Google Scholar] [CrossRef]
- Bodyanskiy, Y.; Kostiuk, S. Learnable Extended Activation Function for Deep Neural Networks. Int. J. Comput. 2023, 22, 311–318. [Google Scholar] [CrossRef]
- Kosarevych, R.; Lutsyk, O.; Rusyn, B. Detection of pixels corrupted by impulse noise using random point patterns. Vis. Comput. 2022, 38, 3719–3730. [Google Scholar] [CrossRef]
- Bodyanskiy, Y.; Shafronenko, A.; Pliss, I. Clusterization of Vector and Matrix Data Arrays Using the Combined Evolutionary Method of Fish Schools. Syst. Res. Inf. Technol. 2022, 4, 79–87. [Google Scholar] [CrossRef]
- Marakhimov, A.R.; Khudaybergenov, K.K. Approach to the synthesis of neural network structure during classification. Int. J. Comput. 2020, 19, 20–26. [Google Scholar] [CrossRef]
- Bodyanskiy, Y.V.; Tyshchenko, O.K. A Hybrid Cascade Neuro–Fuzzy Network with Pools of Extended Neo–Fuzzy Neurons and Its Deep Learning. Int. J. Appl. Math. Comput. Sci. 2019, 29, 477–488. [Google Scholar] [CrossRef]
- Vladov, S.; Shmelov, Y.; Yakovliev, R.; Petchenko, M.; Drozdova, S. Neural Network Method for Helicopters Turboshaft Engines Working Process Parameters Identification at Flight Modes. In Proceedings of the 2022 IEEE 4th International Conference on Modern Electrical and Energy System (MEES), Kremenchuk, Ukraine, 20–23 October 2022; pp. 604–609. [Google Scholar] [CrossRef]
- Vladov, S.; Shmelov, Y.; Yakovliev, R. Methodology for Control of Helicopters Aircraft Engines Technical State in Flight Modes Using Neural Networks. CEUR Workshop Proc. 2022, 3137, 108–125. [Google Scholar] [CrossRef]
- Baranovskyi, D.; Bulakh, M.; Myamlin, S.; Kebal, I. New Design of the Hatch Cover to Increase the Carrying Capacity of the Gondola Car. Adv. Sci. Technol. Res. J. 2022, 16, 186–191. [Google Scholar] [CrossRef]
- Sagin, S.; Madey, V.; Sagin, A.; Stoliaryk, T.; Fomin, O.; Kučera, P. Ensuring Reliable and Safe Operation of Trunk Diesel Engines of Marine Transport Vessels. J. Mar. Sci. Eng. 2022, 10, 1373. [Google Scholar] [CrossRef]
- Sagin, S.V.; Sagin, S.S.; Fomin, O.; Gaichenia, O.; Zablotskyi, Y.; Píštěk, V.; Kučera, P. Use of Biofuels in Marine Diesel Engines for Sustainable and Safe Maritime Transport. Renew. Energy 2024, 224, 120221. [Google Scholar] [CrossRef]
- Nazarkevych, M.; Kowalska-Styczen, A.; Lytvyn, V. Research of Facial Recognition Systems and Criteria for Identification. In Proceedings of the IEEE International Conference on Intelligent Data Acquisition and Advanced Computing Systems: Technology and Applications, IDAACS, Dortmund, Germany, 7–9 September 2023; pp. 555–558. [Google Scholar] [CrossRef]
- Kamran, M.; Nadeem, M.; Żywiołek, J.; Abdalla, M.E.M.; Uzair, A.; Ishtiaq, A. Enhancing Transportation Efficiency with Interval-Valued Fermatean Neutrosophic Numbers: A Multi-Item Optimisation Approach. Symmetry 2024, 16, 766. [Google Scholar] [CrossRef]
- Denizci, A.; Karadeniz, S.; Ulu, C. Fuzzy Cognitive Map Based PI Controller Design. Adv. Intell. Syst. Comput. 2020, 1197, 1250–1257. [Google Scholar] [CrossRef]
Approach | Advantages | Disadvantages | Studies |
---|---|---|---|
Gradient optimization methods | High accuracy in calculating the quality functional's gradient in ACSs using first-order sensitivity functions. | Limited applicability to systems with pronounced nonlinearity; high computational complexity in multidimensional systems. | [14,15]
Methods of sensitivity theory | Takes into account parameter interactions and their influence on changes in the system by calculating second-order sensitivity functions. | High computational load, implementation complexity, need for specialized software. | [16,17,18] |
Optimal control methods | Effective for stable control objects; they take into account control errors and the control signal impact. | Scalability issues, difficulties in integration with real-time adaptive controllers. | [10,11,12,13] |
Neural networks with fixed switching functions | Ability to process large amounts of data and change parameters in real time. | Limited ability to take into account nonlinearities and changes in the dynamics of the control object. | [19,20,21,22,23,24] |
Controllers with variable structure and semi-permanent integration | Flexible adaptation of system parameters depending on the current state; improved tuning accuracy. | Complexity in setting up switching functions, high computational complexity in analytical calculation of second-order sensitivity functions. | [3,4] |
Step Number | Step Name | Description |
---|---|---|
1 | Network initialization | The network architecture consists of an input layer (x), three hidden layers (h1, h2, h3), and an output layer (y). The input layer takes two parameters: system error ϵ(t) and reference action r(t). |
2 | Weight initialization | The neural network parameters kp, Ti, k1, k2, k3, and α and other adaptive parameters are initialized randomly from a normal distribution (or using the Xavier/He method). The weight matrices for the hidden layers and the bias vectors for each layer are initialized in the same way. |
3 | Forward pass | For each l-th hidden layer, a linear combination of its inputs is computed. For the first hidden layer (error handling): z1 = W1 ⋅ x + b1. For the second hidden layer (the regulatory effect computation): z2 = W2 ⋅ a1 + b2, a2 = tanh(z2). For the third hidden layer (switching parameters integration): z3 = W3 ⋅ a2 + b3, a3 = SmoothReLU(z3). For the output layer (the output coordinate computation): z4 = W4 ⋅ a3 + b4. |
4 | Error computation | The output error is computed as the difference between the prediction and the actual value, and a loss function L is applied, which could be, for example, the MSE: L = (1/N) ⋅ Σ(y − y*)². To compute the gradients, the sensitivity function Ξ(t) is used, which takes into account the parameters’ influence on the system. |
5 | Backpropagation | Backpropagation involves computing gradients at each layer. For the output layer: δ4 = (∂L/∂y) ⋅ f′(z4), where f′(z4) is the activation function’s derivative at the output. For the remaining layers, starting from the third, the error is computed from the next layer’s error and the activation function derivative: δl = (Wl+1ᵀ ⋅ δl+1) ⊙ f′(zl). |
6 | Updating weights with regularization | For each layer, the weights and biases are updated, taking into account the regularization λ. The gradients for the weights and biases are calculated as ∂L/∂Wl = δl ⋅ al−1ᵀ + λ ⋅ Wl and ∂L/∂bl = δl. The weights and biases are updated using the gradient descent equations Wl ← Wl − ηl ⋅ ∂L/∂Wl and bl ← bl − ηl ⋅ ∂L/∂bl, where ηl is the dynamic training rate, depending on the change in the optimization functional: ηl = η0/(1 + α ⋅ ∣Ξ(t)∣), where α is the training rate adaptation coefficient, and ∣Ξ(t)∣ is the change magnitude in the functional at the current step. |
7 | Parameter adaptation | Network parameters such as kp, Ti, k1, k2, k3, and α are trained via gradient descent, taking into account regularization and the sensitivity function Ξ(t). These parameters include both weights and adaptive coefficients that change in response to changes in the loss function. |
8 | Training repetition | The training process is repeated for each batch of data (or for the whole training set), with weights and parameters updated at each step, and continues until the maximum number of epochs is reached or the error stops improving. |
9 | Performance evaluation | Once training is complete, the network is tested on test data to assess its ability to handle new data by predicting output values given nonlinearities and delays in the system. |
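The forward/backward steps in the table above can be sketched in code. The following is a minimal NumPy illustration, not the authors' implementation: the layer widths, learning rate, and toy target mapping are assumptions, softplus stands in for SmoothReLU, and the sensitivity-function term Ξ(t) and the adaptive parameters kp, Ti, k1, k2, k3 are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def softplus(z):                 # stands in for "SmoothReLU"
    return np.log1p(np.exp(z))

def softplus_grad(z):
    return 1.0 / (1.0 + np.exp(-z))

def tanh_grad(z):
    return 1.0 - np.tanh(z) ** 2

# 2 inputs (system error eps(t), reference r(t)) -> three hidden layers -> 1 output
sizes = [2, 8, 8, 8, 1]
acts  = [np.tanh, np.tanh, softplus, lambda z: z]                  # linear output
grads = [tanh_grad, tanh_grad, softplus_grad, lambda z: np.ones_like(z)]
W = [rng.normal(0.0, np.sqrt(1.0 / m), (n, m)) for m, n in zip(sizes, sizes[1:])]
b = [np.zeros((n, 1)) for n in sizes[1:]]

def forward(x):
    zs, a = [], x
    for Wl, bl, f in zip(W, b, acts):
        z = Wl @ a + bl
        zs.append(z)
        a = f(z)
    return zs, a

def train_step(x, y_true, eta=0.02, lam=1e-4):
    zs, y = forward(x)
    # Step 4: MSE loss; Step 5: backpropagate the error layer by layer
    delta = (y - y_true) * grads[-1](zs[-1])
    a_prev = [x] + [f(z) for f, z in zip(acts[:-1], zs[:-1])]
    for l in range(len(W) - 1, -1, -1):
        gW = delta @ a_prev[l].T + lam * W[l]      # Step 6: L2-regularized gradient
        gb = delta.copy()
        if l > 0:                                  # propagate error to previous layer
            delta = (W[l].T @ delta) * grads[l - 1](zs[l - 1])
        W[l] -= eta * gW
        b[l] -= eta * gb
    return float(0.5 * (y - y_true) ** 2)

# Toy data standing in for (eps(t), r(t)) -> control-output samples
X = rng.uniform(-1.0, 1.0, (2, 200))
Y = (0.5 * X[0] + 0.3 * X[1]).reshape(1, -1)
first = None
for epoch in range(150):
    losses = [train_step(X[:, i:i + 1], Y[:, i:i + 1]) for i in range(X.shape[1])]
    if epoch == 0:
        first = float(np.mean(losses))
last = float(np.mean(losses))
print(first > last)   # training reduces the mean loss
```

The per-sample loop mirrors steps 3–6 of the table; a batched version would simply stack the columns of X.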
Number | The System Error ϵ(t) Values | The Reference Action r(t) Values |
---|---|---|
1 | 0.011 | 0.913 |
… | … | … |
46 | 0.012 | 0.918 |
… | … | … |
115 | 0.015 | 0.933 |
… | … | … |
173 | 0.013 | 0.926 |
… | … | … |
209 | 0.011 | 0.911 |
… | … | … |
256 | 0.010 | 0.908 |
Parameter | Fisher–Pearson Criterion, Calculated | Fisher–Pearson Criterion, Critical | Fisher–Snedecor Criterion, Calculated | Fisher–Snedecor Criterion, Critical | Description |
---|---|---|---|---|---|
ϵ(t) | 12.996 | 13.3 | 15.402 | 15.98 | The calculated Fisher–Pearson and Fisher–Snedecor criteria values fell below the critical thresholds, which confirms the training dataset’s homogeneity. |
r(t) | 13.014 | 13.3 | 15.599 | 15.98 | |
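A homogeneity check of this kind can be sketched as follows. This is an illustrative implementation only, assuming a Pearson chi-square statistic over binned frequencies and a variance-ratio F statistic; the degrees of freedom, binning, and critical values actually used in the study are not reproduced here, and the sample is synthetic.

```python
import numpy as np

def pearson_chi2_homogeneity(sample_a, sample_b, bins=10):
    """Pearson chi-square statistic comparing two samples' binned frequencies."""
    lo = min(sample_a.min(), sample_b.min())
    hi = max(sample_a.max(), sample_b.max())
    edges = np.linspace(lo, hi, bins + 1)
    fa, _ = np.histogram(sample_a, edges)
    fb, _ = np.histogram(sample_b, edges)
    n_a, n_b = fa.sum(), fb.sum()
    chi2 = 0.0
    for oa, ob in zip(fa, fb):
        tot = oa + ob
        if tot == 0:
            continue
        ea = tot * n_a / (n_a + n_b)   # expected counts under homogeneity
        eb = tot * n_b / (n_a + n_b)
        chi2 += (oa - ea) ** 2 / ea + (ob - eb) ** 2 / eb
    return chi2

def f_statistic(sample_a, sample_b):
    """Fisher-Snedecor F statistic: ratio of sample variances (larger over smaller)."""
    va, vb = np.var(sample_a, ddof=1), np.var(sample_b, ddof=1)
    return max(va, vb) / min(va, vb)

rng = np.random.default_rng(1)
# Synthetic eps(t)-like sample near the magnitudes shown in the dataset table
eps = rng.normal(0.012, 0.002, 256)
chi2 = pearson_chi2_homogeneity(eps[:128], eps[128:])
F = f_statistic(eps[:128], eps[128:])
print(round(chi2, 3), round(F, 3))
```

The computed statistics would then be compared against the tabulated critical values for the chosen significance level and degrees of freedom.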
Number | J0 | J* | H(J) | | | | det(H(J)) |
---|---|---|---|---|---|---|---|
1 | 0.1 | 0.01 | 0.253 | 0.016 | 38.325 | 20.974 | 0.011 | |
2 | 0.25 | 0.025 | 0.347 | 0.016 | 38.261 | 20.659 | 0.069 | |
3 | 0.35 | 0.001 | 0.415 | 0.012 | 59.939 | 19.993 | 0.075 |
Modelling Step Number | Modelling Step Name | Description |
---|---|---|
1 | Setting system parameters | = 1.0, = 4.0, kob = 1.0, τob = 5.0. Controller parameters: θ1 = 0.5, θ2 = 0.1. Reference action: r(t) = sin(t). |
2 | System modelling | The system block diagram creation, including the controller Gc(p, θ) and the object Gp(p). Adding blocks to compute ξi(t) and ξij(t). |
3 | Sensitivity computation | The implementation of the numerical calculation of the derivatives ∂y/∂θi and ∂²y/(∂θi ∂θj). The finite difference methods or analytical solution application. |
4 | The optimality criterion computation | The optimality criterion J is computed with the weighting coefficients a1 = 1.0 and a2 = 0.5. |
5 | Analysis of results | The computation of the first- and second-order sensitivity functions. The gradient and Hessian matrix H(J) computation. Testing Sylvester’s criterion for H(J) and optimizing the parameters. |
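The gradient/Hessian computation and the Sylvester's criterion test from step 5 can be sketched numerically. The criterion J below is a hypothetical quadratic surrogate standing in for the simulated closed-loop criterion (the weights a1 = 1.0, a2 = 0.5 enter the real criterion, not this toy function), and the central-difference scheme and step size are illustrative.

```python
import numpy as np

def J(theta):
    # Hypothetical surrogate for the optimality criterion J(theta1, theta2);
    # in the paper J is obtained by simulating the closed loop.
    t1, t2 = theta
    return (t1 - 0.5) ** 2 + 0.5 * (t2 - 0.1) ** 2 + 0.2 * t1 * t2

def grad_hessian(f, theta, h=1e-4):
    """Central finite differences: gradient (first-order sensitivities)
    and Hessian (second-order sensitivities) of f at theta."""
    n = len(theta)
    g = np.zeros(n)
    H = np.zeros((n, n))
    for i in range(n):
        e_i = np.zeros(n); e_i[i] = h
        g[i] = (f(theta + e_i) - f(theta - e_i)) / (2 * h)
        for j in range(n):
            e_j = np.zeros(n); e_j[j] = h
            H[i, j] = (f(theta + e_i + e_j) - f(theta + e_i - e_j)
                       - f(theta - e_i + e_j) + f(theta - e_i - e_j)) / (4 * h * h)
    return g, H

def sylvester_positive_definite(H):
    # Sylvester's criterion: all leading principal minors strictly positive
    return all(np.linalg.det(H[:k, :k]) > 0 for k in range(1, H.shape[0] + 1))

theta = np.array([0.5, 0.1])          # controller parameters theta1, theta2
g, H = grad_hessian(J, theta)
print(sylvester_positive_definite(H))  # True: H(J) is positive definite here
```

For the quadratic surrogate the exact Hessian is [[2, 0.2], [0.2, 1.0]], so the finite-difference result can be checked directly.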
Number | Training Algorithm | Achieved MSEmin Value | Minimum Number of Epochs | Conclusions |
---|---|---|---|---|
1 | Developed algorithm | 0.0050 | 160 | The most efficient learning algorithm is characterized by the minimum MSEmin value achieved in the minimum number of training epochs. |
2 | Traditional backpropagation algorithm | 0.0093 | 220 | The minimum MSEmin value achieved exceeds the developed algorithm’s similar indicator by 1.86 times, with an increase in the optimal number of training epochs by 1.38 times. |
3 | Genetic algorithm | 0.0108 | 240 | The minimum MSEmin value achieved exceeds the developed algorithm’s similar indicator by 2.16 times, with an increase in the optimal number of training epochs by 1.5 times. |
4 | Modified inverse gradient descending method [51] | 0.0163 | 250 | The minimum MSEmin value achieved exceeds the developed algorithm’s similar indicator by 3.26 times, with an increase in the optimal number of training epochs by 1.56 times. |
5 | Traditional inverse gradient descending method | 0.0237 | 330 | The minimum MSEmin value achieved exceeds the developed algorithm’s similar indicator by 4.74 times, with an increase in the optimal number of training epochs by 2.06 times. |
6 | Hybrid algorithm | 0.0301 | 380 | The minimum MSEmin value achieved exceeds the developed algorithm’s similar indicator by 6.02 times, with an increase in the optimal number of training epochs by 2.38 times. |
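The relative factors quoted in the table follow directly from the MSEmin and epoch columns and can be recomputed (algorithm names abbreviated here):

```python
# Baseline: the developed algorithm's MSEmin and epoch count from the table
base_mse, base_epochs = 0.0050, 160
rows = {
    "backprop":            (0.0093, 220),
    "genetic":             (0.0108, 240),
    "modified inverse GD": (0.0163, 250),
    "inverse GD":          (0.0237, 330),
    "hybrid":              (0.0301, 380),
}
# (MSE ratio, epoch ratio) relative to the developed algorithm
factors = {name: (round(mse / base_mse, 2), round(ep / base_epochs, 2))
           for name, (mse, ep) in rows.items()}
print(factors)
```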
Neural Network Architecture | Accuracy | Precision | Recall | F1-Score | AUC-ROC | Training Time |
---|---|---|---|---|---|---|
Developed neural network | 0.993 | 0.991 | 1.0 | 0.995 | 0.825 | 1 min 12 s |
Traditional RBF network [20,56] | 0.992 | 0.989 | 1.0 | 0.994 | 0.811 | 2 min 25 s |
Jordan neural network [21,57] | 0.993 | 0.989 | 1.0 | 0.994 | 0.781 | 4 min 17 s |
Elman neural network [22,58] | 0.993 | 0.989 | 1.0 | 0.994 | 0.780 | 4 min 12 s |
Hopfield neural network [23,59] | 0.837 | 0.832 | 0.835 | 0.833 | 0.635 | 0 min 47 s |
Hamming neural network [24,60] | 0.811 | 0.806 | 0.831 | 0.819 | 0.602 | 0 min 33 s |
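The classification metrics in the table can be derived from confusion-matrix counts. The helper below illustrates the standard definitions; the counts are hypothetical, chosen only to produce a row with recall of 1.0 like the upper entries of the table.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts (fn = 0 gives the perfect recall seen in the table)
acc, prec, rec, f1 = classification_metrics(tp=110, fp=1, fn=0, tn=145)
print(round(acc, 3), round(prec, 3), round(rec, 3), round(f1, 3))
```

AUC-ROC, by contrast, requires the continuous output scores rather than hard class labels, which is why it discriminates between the architectures more strongly than the threshold-based metrics.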
Research Title | Aim | Actions | Expected Results |
---|---|---|---|
| Limitation | Objective | Proposed research directions | Expected outcome |
|---|---|---|---|
| Optimization of model parameters and increased error tolerance [74] | Develop parameter-identification methods with increased accuracy and error tolerance. | Study more accurate methods for estimating controlled-object parameters using machine learning and adaptive filtering; develop algorithms for automatic real-time correction of parameter errors; develop methods that account for uncertainty in model parameters. | Reduce the model's sensitivity to parameter errors, increasing modelling and control accuracy. |
| Controller adaptation and tuning simplification [75,76] | Simplify the tuning of controller adaptation parameters and optimization coefficients for practical applications. | Investigate approaches for automatic controller parameter tuning (e.g., optimization methods such as genetic algorithms or gradient descent); develop universal parameter-optimization methods that adapt automatically to system changes. | Simplify the tuning process and increase the method's flexibility across different control objects. |
| Processing complex nonlinearities [77] | Develop methods for more efficient handling of complex nonlinearities in the object dynamics. | Study more complex neural network architectures (e.g., deep neural networks capturing multilevel nonlinear dependencies); implement methods that account for the dynamic nature of nonlinearities, such as adaptive neural network models. | Increase the model's accuracy and universality for systems with different nonlinear dynamics. |
| Robustness to noise and data quality [78] | Improve the method's robustness to noise and poor data quality. | Develop data preprocessing methods, including improved noise filtering and elimination of sensor errors; use high-reliability data methods, such as data reconstruction or fragmentation, to improve the representativeness of the training datasets. | Improve the method's accuracy by improving the input data quality. |
| Reducing computational complexity [79] | Optimize computational processes and reduce the need for computational resources. | Develop more efficient algorithms for adapting the learning rate and regularization to speed up computation; investigate hardware accelerators (e.g., neural or graphics processors) for computational optimization. | Reduce computational cost while maintaining the method's high accuracy. |
| Increasing the method's generalization ability [80] | Develop adaptation and transfer-learning methods so the technique can be applied to new types of engines and systems. | Study transfer-learning methods that allow the model to be adapted to different systems and conditions; develop universal neural network architectures suitable for different types of control objects. | Increase the method's generalization ability and enable its successful application to different engine and system types. |
| Modelling of significant time delays [81] | Develop methods for more accurate modelling of significant delays in the system. | Research and implement methods that account for dynamic and long time delays; develop prediction methods for long delays and integrate timing models to account for phase shifts. | Improve model accuracy for systems with long delays. |
| Simplifying implementation on real systems [82] | Simplify implementing the method on real systems with limited computing power. | Develop lighter and faster versions of the algorithm for resource-constrained hardware; investigate integration with existing platforms, such as microcontrollers and specialized neural microprocessors. | Increase the availability and practicality of the method for deployment on real, resource-limited systems. |
| Preventing overfitting [83] | Develop methods to prevent overfitting on small or heterogeneous training datasets. | Use cross-validation and regularization methods to prevent overfitting; investigate training-data generation and dataset expansion to improve model robustness. | Reduce the risk of overfitting and improve the model's generalization properties. |
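The "automatic controller parameter tuning" direction in the table above can be illustrated with a minimal sketch. This is not the paper's method: the first-order plant, the PI controller, the cost function, and the random-mutation search (a stand-in for a full genetic algorithm) are all illustrative assumptions.

```python
import random

def step_response_cost(kp, ki, dt=0.01, steps=500, tau=1.0):
    """Integral-squared-error of a PI loop around a toy first-order plant
    dy/dt = (u - y)/tau, tracking a unit step (forward-Euler discretization)."""
    y, integ, cost = 0.0, 0.0, 0.0
    for _ in range(steps):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ          # PI control law
        y += dt * (u - y) / tau          # plant update
        cost += e * e * dt               # accumulate ISE
    return cost

def tune_pi(generations=30, pop=20, seed=0):
    """Toy evolutionary search over (kp, ki): keep the best gain pair,
    mutate it, and retain any improvement."""
    rng = random.Random(seed)
    best = min(((rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(pop)),
               key=lambda g: step_response_cost(*g))
    for _ in range(generations):
        children = [(max(0.0, best[0] + rng.gauss(0, 0.5)),
                     max(0.0, best[1] + rng.gauss(0, 0.5))) for _ in range(pop)]
        best = min(children + [best], key=lambda g: step_response_cost(*g))
    return best

kp, ki = tune_pi()
```

A production variant would add crossover, constraints on control effort, and a cost reflecting the actual engine dynamics; the point here is only the structure of cost-driven gain search.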
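The regularization and cross-validation direction for preventing overfitting can likewise be sketched on a one-parameter model. The closed-form ridge slope and the holdout split below are illustrative assumptions, not the article's training procedure.

```python
def fit_linear(xs, ys, l2=0.0):
    """Least-squares slope for y ~ w*x with an optional L2 (ridge) penalty.
    Closed form: w = sum(x*y) / (sum(x*x) + lambda)."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + l2)

def holdout_select(xs, ys, lambdas):
    """Pick the penalty with the lowest squared error on a held-out half
    (a minimal stand-in for k-fold cross-validation)."""
    half = len(xs) // 2
    tr_x, tr_y, va_x, va_y = xs[:half], ys[:half], xs[half:], ys[half:]
    def val_err(l2):
        w = fit_linear(tr_x, tr_y, l2)
        return sum((w * x - y) ** 2 for x, y in zip(va_x, va_y))
    return min(lambdas, key=val_err)

# Nearly linear data y = 2x with small deterministic perturbations.
xs = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
ns = [0.1, -0.1, 0.05, -0.05, 0.1, -0.1, 0.05, -0.05]
ys = [2.0 * x + n for x, n in zip(xs, ns)]
```

The same pattern scales to neural network weights: the penalty shrinks parameters toward zero, and the validation split chooses how much shrinkage the data supports.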
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Vladov, S.; Scislo, L.; Szczepanik-Ścisło, N.; Sachenko, A.; Vysotska, V. Neural Network Method of Controllers’ Parametric Optimization with Variable Structure and Semi-Permanent Integration Based on the Computation of Second-Order Sensitivity Functions. Appl. Sci. 2025, 15, 2586. https://doi.org/10.3390/app15052586