Article

Soft Fuzzy Reinforcement Neural Network Proportional–Derivative Controller

1 Department of Electrical, Electronic & Computer Engineering, The University of Western Australia, Perth 6009, Australia
2 Department of Computer Science and Software Engineering, The University of Western Australia, Perth 6009, Australia
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(9), 5071; https://doi.org/10.3390/app15095071
Submission received: 21 March 2025 / Revised: 16 April 2025 / Accepted: 17 April 2025 / Published: 2 May 2025

Abstract

Controlling systems with highly nonlinear or uncertain dynamics presents significant challenges, particularly when using conventional Proportional–Integral–Derivative (PID) controllers, as they can be difficult to tune. While PID controllers can be adapted for such systems using advanced tuning methods, they often struggle with lag and instability due to their integral action. In contrast, fuzzy Proportional–Derivative (PD) controllers offer a more responsive alternative by eliminating reliance on error accumulation and enabling rule-based adaptability. However, their industrial adoption remains limited by the difficulty of manual rule design. To overcome this limitation, Fuzzy Neural Networks (FNNs) integrate neural networks with fuzzy logic, enabling self-learning and reducing reliance on manually crafted rules. Yet most fuzzy neural network PD (FNNPD) controllers rely on mean square error (MSE)-based training, which can be inefficient and unstable in complex, dynamic systems. To address these challenges, this paper presents a Soft Fuzzy Reinforcement Neural Network PD (SFPD) controller, which integrates the Soft Actor–Critic (SAC) framework into FNNPD control to improve training speed and stability. While the actor–critic framework is widely used in reinforcement learning, its application to FNNPD controllers has remained unexplored. The proposed controller leverages reinforcement learning to autonomously adjust its parameters, eliminating the need for manual tuning, while entropy-regularized stochastic exploration enhances learning efficiency. The controller can operate with or without expert knowledge, leveraging neural network-driven adaptation; expert input is not required, but its inclusion accelerates convergence and improves initial performance. Experimental results show that the proposed SFPD controller achieves fast learning, superior control performance, and strong robustness to noise, making it effective for complex control tasks.
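To make the fuzzy PD idea in the abstract concrete, the following is a minimal illustrative sketch (not the authors' SFPD implementation, and without the SAC learning component): a PD controller whose proportional gain is blended by simple triangular fuzzy memberships of the error magnitude, driving a first-order plant toward a setpoint. All gain values and membership breakpoints here are assumed for illustration.

```python
def tri(x, a, b, c):
    """Triangular fuzzy membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

class FuzzyPD:
    """PD controller with a fuzzy blend between a gentle and an aggressive P gain."""
    def __init__(self, kp_small=4.0, kp_large=12.0, kd=0.2):
        self.kp_small, self.kp_large, self.kd = kp_small, kp_large, kd
        self.prev_error = 0.0

    def step(self, error, dt):
        # Membership of |error| in "small"; its complement acts as "large".
        w_small = tri(abs(error), -1.0, 0.0, 1.0)
        w_large = 1.0 - w_small
        kp = w_small * self.kp_small + w_large * self.kp_large
        d_error = (error - self.prev_error) / dt
        self.prev_error = error
        # No integral term: responsive, but with a small steady-state offset.
        return kp * error + self.kd * d_error

# Simulate a first-order plant x' = -x + u tracking a unit setpoint.
ctrl, x, setpoint, dt = FuzzyPD(), 0.0, 1.0, 0.01
for _ in range(2000):
    u = ctrl.step(setpoint - x, dt)
    x += dt * (-x + u)
```

Note the residual steady-state offset: that is the known tradeoff of dropping the integral term, which approaches like the paper's controller address by learning the gains and rule parameters online rather than fixing them as done here.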
Keywords: deep reinforcement learning; soft actor–critic; fuzzy neural networks; PD control

Share and Cite

MDPI and ACS Style

Han, Q.; Boussaid, F.; Bennamoun, M. Soft Fuzzy Reinforcement Neural Network Proportional–Derivative Controller. Appl. Sci. 2025, 15, 5071. https://doi.org/10.3390/app15095071


Note that from the first issue of 2016, this journal uses article numbers instead of page numbers.
