Article

Refined Continuous Control of DDPG Actors via Parametrised Activation

by Mohammed Hossny 1,*,†, Julie Iskander 2,†, Mohamed Attia 3,†, Khaled Saleh 4,† and Ahmed Abobakr 5,†
1 School of Engineering and IT, University of New South Wales, Canberra, ACT 2612, Australia
2 Walter and Eliza Hall Institute of Medical Research, Melbourne, VIC 3052, Australia
3 Medical Research Institute, Alexandria University, Alexandria 21568, Egypt
4 Faculty of Engineering and IT, University of Technology Sydney, Sydney, NSW 2007, Australia
5 Faculty of Computers and Artificial Intelligence, Cairo University, Cairo 12613, Egypt
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Academic Editors: Andrea Prati, Carlos A. Iglesias, Luis Javier García Villalba and Vincent A. Cicirello
AI 2021, 2(4), 464-476; https://doi.org/10.3390/ai2040029
Received: 11 August 2021 / Revised: 21 September 2021 / Accepted: 22 September 2021 / Published: 29 September 2021
Continuous action spaces pose a serious challenge for reinforcement learning agents. While several off-policy reinforcement learning algorithms provide a universal solution to continuous control problems, the real challenge lies in the fact that different actuators feature different response functions due to wear and tear (in mechanical systems) and fatigue (in biomechanical systems). In this paper, we propose enhancing actor-critic reinforcement learning agents by parameterising the final layer in the actor network. This layer produces the actions and accommodates the behavioural discrepancy of different actuators under different load conditions during interaction with the environment. To achieve this, the actor is trained to learn the tuning parameter controlling the activation layer (e.g., Tanh and Sigmoid). The learned parameters are then used to create tailored activation functions for each actuator. We ran experiments on three OpenAI Gym environments, i.e., Pendulum-v0, LunarLanderContinuous-v2, and BipedalWalker-v2. Results showed average increases in total episode reward of 23.15% and 33.80% in the LunarLanderContinuous-v2 and BipedalWalker-v2 environments, respectively. There was no apparent improvement in the Pendulum-v0 environment, but the proposed method produced a more stable actuation signal than the state-of-the-art method. The proposed method allows the reinforcement learning actor to produce more robust actions that accommodate the discrepancy in the actuators' response functions. This is particularly useful for real-life scenarios where actuators exhibit different response functions depending on the load and the interaction with the environment. It also simplifies the transfer learning problem: the parameterised activation layers can be fine-tuned instead of retraining the entire policy every time an actuator is replaced. Finally, the proposed method would allow better accommodation of biological actuators (e.g., muscles) in biomechanical systems.
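The core idea, as described in the abstract, is an actor whose output layer learns a tuning parameter that shapes the activation (e.g., Tanh) separately for each actuator. The following is a minimal sketch of that idea, not the authors' implementation: it assumes the actor emits both a pre-activation value x and a positive per-actuator parameter k, and produces the action as tanh(k·x). All names (ParametrisedTanhActor, hidden sizes, the softplus constraint) are illustrative assumptions.

```python
import torch
import torch.nn as nn


class ParametrisedTanhActor(nn.Module):
    """DDPG-style actor with a learned, per-actuator tanh tuning parameter (sketch)."""

    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.pre_activation = nn.Linear(hidden, act_dim)  # raw action signal x
        self.tuning_param = nn.Linear(hidden, act_dim)    # per-actuator parameter k

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        h = self.body(obs)
        x = self.pre_activation(h)
        # softplus keeps k positive so each tailored activation stays monotonic
        k = nn.functional.softplus(self.tuning_param(h))
        return torch.tanh(k * x)  # bounded action in [-1, 1] per actuator


# Usage example: a batch of observations for a 24-dim state, 4-actuator task
# (dimensions chosen to match BipedalWalker-v2).
actor = ParametrisedTanhActor(obs_dim=24, act_dim=4)
actions = actor(torch.randn(8, 24))
print(actions.shape)  # torch.Size([8, 4])
```

Because only the tuning head shapes the activation, swapping an actuator could, under this sketch, be handled by fine-tuning that head rather than retraining the whole policy, which is the transfer-learning benefit the abstract points to.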
Keywords: continuous control; deep reinforcement learning; actor-critic; DDPG
MDPI and ACS Style

Hossny, M.; Iskander, J.; Attia, M.; Saleh, K.; Abobakr, A. Refined Continuous Control of DDPG Actors via Parametrised Activation. AI 2021, 2, 464-476. https://doi.org/10.3390/ai2040029

AMA Style

Hossny M, Iskander J, Attia M, Saleh K, Abobakr A. Refined Continuous Control of DDPG Actors via Parametrised Activation. AI. 2021; 2(4):464-476. https://doi.org/10.3390/ai2040029

Chicago/Turabian Style

Hossny, Mohammed, Julie Iskander, Mohamed Attia, Khaled Saleh, and Ahmed Abobakr. 2021. "Refined Continuous Control of DDPG Actors via Parametrised Activation" AI 2, no. 4: 464-476. https://doi.org/10.3390/ai2040029
