Search Results (1)

Search Parameters:
Keywords = polydexterous

27 pages, 11040 KB  
Article
PolyDexFrame: Deep Reinforcement Learning-Based Pick-and-Place of Objects in Clutter
by Muhammad Babar Imtiaz, Yuansong Qiao and Brian Lee
Machines 2024, 12(8), 547; https://doi.org/10.3390/machines12080547 - 11 Aug 2024
Cited by 1 | Viewed by 2087
Abstract
This research study presents a polydexterous deep reinforcement learning-based pick-and-place framework for industrial clutter scenarios. In the proposed framework, the agent learns the pick-and-place of regularly and irregularly shaped objects in clutter through a sequential combination of prehensile and non-prehensile robotic manipulations involving different robotic grippers, in a completely self-supervised manner. The problem was tackled as a reinforcement learning problem; after the Markov decision process (MDP) was designed, the off-policy model-free Q-learning algorithm was deployed, using deep Q-networks as a Q-function approximator. Four distinct robotic manipulations were considered as actions: grasp from the prehensile manipulation category, and inward slide, outward slide, and suction grip from the non-prehensile manipulation category. The Q-function comprised four fully convolutional networks (FCNs), one per action, each based on a memory-efficient DenseNet-121 variant and outputting a pixel-wise map of action values; the networks were jointly trained via the pixel-wise parametrization technique. Rewards were awarded according to the outcome of the action performed, and backpropagation was conducted only for the FCN that generated the maximum Q-value. The results showed that the agent learned the sequential combination of the polydexterous prehensile and non-prehensile manipulations, with the non-prehensile manipulations increasing the possibility of successful prehensile manipulations. The framework achieved promising results in comparison with the baselines, differently designed variants, and test clutter of varying density.
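The Q-function structure described in the abstract (four DenseNet-121-based FCNs, each producing a pixel-wise Q-value map, with the manipulation primitive and the workspace pixel chosen by a joint argmax) can be illustrated with a minimal sketch. This is not the authors' code: it assumes PyTorch/torchvision, and all class names, the upsampling head, and the input size are illustrative assumptions.

```python
# Minimal sketch (not the paper's implementation) of a pixel-wise Q-function:
# one fully convolutional network (FCN) per manipulation primitive, each built
# on a DenseNet-121 trunk and producing a per-pixel map of Q-values.
import torch
import torch.nn as nn
import torchvision.models as models

ACTIONS = ["grasp", "inward_slide", "outward_slide", "suction_grip"]  # assumed names

class PixelWiseQNet(nn.Module):
    """One FCN: DenseNet-121 backbone + 1x1 conv head -> HxW Q-value map."""
    def __init__(self):
        super().__init__()
        backbone = models.densenet121(weights=None)
        self.features = backbone.features           # fully convolutional trunk (/32)
        self.head = nn.Sequential(
            nn.ReLU(inplace=True),
            nn.Conv2d(1024, 1, kernel_size=1),       # one Q-value per feature-map pixel
            nn.Upsample(scale_factor=32, mode="bilinear", align_corners=False),
        )

    def forward(self, x):                            # x: (B, 3, H, W) scene observation
        return self.head(self.features(x))           # (B, 1, H, W) Q-map

class PolyDexQFunction(nn.Module):
    """Q-function as four parallel FCNs, one per prehensile/non-prehensile action."""
    def __init__(self):
        super().__init__()
        self.nets = nn.ModuleDict({a: PixelWiseQNet() for a in ACTIONS})

    def forward(self, x):
        # Stack per-action Q-maps: (B, num_actions, H, W)
        return torch.stack([self.nets[a](x).squeeze(1) for a in ACTIONS], dim=1)

# Greedy action selection: the joint argmax over actions and pixels picks both
# the manipulation primitive and the location at which to execute it. Only the
# FCN that produced that maximum Q-value would then receive the backward pass.
q_fn = PolyDexQFunction()
obs = torch.randn(1, 3, 224, 224)                    # placeholder observation
with torch.no_grad():
    q_maps = q_fn(obs)                                # (1, 4, 224, 224)
    flat_idx = q_maps.view(1, -1).argmax(dim=1).item()
    a_idx, pixel = divmod(flat_idx, 224 * 224)
    print(ACTIONS[a_idx], divmod(pixel, 224))         # chosen primitive and (row, col)
```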