Communication

Continuous Control of an Underground Loader Using Deep Reinforcement Learning

1 Algoryx Simulation AB, Kuratorvägen 2, 90736 Umeå, Sweden
2 Department of Physics, Umeå University, 90187 Umeå, Sweden
3 Epiroc AB, Sickla Industriväg 19, 13154 Nacka, Sweden
* Author to whom correspondence should be addressed.
Academic Editor: Dan Zhang
Machines 2021, 9(10), 216; https://doi.org/10.3390/machines9100216
Received: 28 August 2021 / Revised: 20 September 2021 / Accepted: 24 September 2021 / Published: 27 September 2021
(This article belongs to the Special Issue Design and Control of Advanced Mechatronics Systems)
The reinforcement learning control of an underground loader was investigated in a simulated environment by using a multi-agent deep neural network approach. At the start of each loading cycle, one agent selects the dig position from a depth camera image of a pile of fragmented rock. A second agent is responsible for continuous control of the vehicle, with the goal of filling the bucket at the selected loading point while avoiding collisions, getting stuck, or losing ground traction. This relies on motion and force sensors, as well as on a camera and lidar. Using a soft actor–critic algorithm, the agents learn policies for efficient bucket filling over many subsequent loading cycles, with a clear ability to adapt to the changing environment. The best results—on average, 75% of the maximum capacity—were obtained when including a penalty for energy usage in the reward.
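The reward shaping highlighted in the abstract (rewarding bucket filling while penalizing energy usage) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the weights, signal names, and units are assumptions chosen only to show the trade-off between fill progress and energy cost.

```python
def loading_reward(fill_fraction, energy_joules,
                   fill_weight=1.0, energy_weight=1e-6):
    """Per-step reward for the control agent (illustrative sketch).

    fill_fraction  -- fraction of bucket capacity gained this step
    energy_joules  -- energy consumed by the vehicle this step
    The energy penalty discourages aggressive digging that fills the
    bucket at a high energetic cost.
    """
    return fill_weight * fill_fraction - energy_weight * energy_joules

# Example: a step that fills 5% of the bucket while using 20 kJ.
r = loading_reward(0.05, 20_000.0)  # 0.05 - 0.02 = 0.03
```

In a soft actor–critic setup, such a scalar reward would be returned by the simulation environment at each control step; tuning the energy weight shifts the learned policy between fast, forceful bucket filling and more energy-efficient trajectories.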
Keywords: autonomous excavation; bucket filling; deep reinforcement learning; mining robotics; simulation; wheel loader
MDPI and ACS Style

Backman, S.; Lindmark, D.; Bodin, K.; Servin, M.; Mörk, J.; Löfgren, H. Continuous Control of an Underground Loader Using Deep Reinforcement Learning. Machines 2021, 9, 216. https://doi.org/10.3390/machines9100216

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
