Article

Effective Free-Driving Region Detection for Mobile Robots by Uncertainty Estimation Using RGB-D Data

Toan-Khoa Nguyen, Phuc T.-T. Nguyen, Dai-Dong Nguyen and Chung-Hsien Kuo *
1 Department of Electrical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
2 Department of Mechanical Engineering, National Taiwan University, Taipei 106319, Taiwan
* Author to whom correspondence should be addressed.
Academic Editor: João Miguel da Costa Sousa
Sensors 2022, 22(13), 4751; https://doi.org/10.3390/s22134751
Received: 30 May 2022 / Revised: 17 June 2022 / Accepted: 22 June 2022 / Published: 23 June 2022
(This article belongs to the Special Issue Advanced Sensors for Intelligent Control Systems)
Accurate segmentation of drivable areas and road obstacles is critical for autonomous mobile robots to navigate safely in indoor and outdoor environments. With the rapid advancement of deep learning, mobile robots can now navigate autonomously based on what they learned during training. However, existing techniques often perform poorly in complex situations because unfamiliar objects are not included in the training dataset. Additionally, training deep neural networks to achieve good performance generally requires a large amount of labeled data, which is time-consuming and labor-intensive to obtain. This paper addresses these issues by proposing a self-supervised learning method for drivable-area and road-anomaly segmentation. First, we propose the Automatic Generating Segmentation Label (AGSL) framework, an efficient system that automatically generates segmentation labels for drivable areas and road anomalies by finding dissimilarities between the input and the resynthesized image and by localizing obstacles in the disparity map. We then train a semantic segmentation network on RGB-D datasets using the self-generated ground-truth labels derived from our method (AGSL labels) to obtain a pre-trained model. The results show that AGSL achieves high labeling performance, and the pre-trained model also performs reliably in real-time segmentation applications on mobile robots.
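To make the label-generation idea concrete, the following is a minimal sketch, not the authors' AGSL implementation, of how a dissimilarity map (between the input image and its resynthesized version) and a disparity map might be fused into coarse pseudo-labels for drivable areas and road anomalies. The function name, thresholds, class ids, and the flat-ground-disparity input are assumptions made for illustration only.

```python
import numpy as np

# Hypothetical class ids for the pseudo-labels (not taken from the paper).
UNKNOWN, DRIVABLE, ANOMALY = 0, 1, 2

def generate_pseudo_labels(dissimilarity, disparity, ground_disparity,
                           dissim_thresh=0.5, height_thresh=3.0):
    """Fuse an RGB dissimilarity map and a disparity map into coarse pseudo-labels.

    dissimilarity    : HxW array in [0, 1]; difference between the input image and
                       its resynthesized version (higher = more likely anomalous).
    disparity        : HxW measured disparity map from the RGB-D sensor.
    ground_disparity : HxW disparity expected for a flat ground plane
                       (e.g., fitted beforehand with a ground-plane model).
    """
    labels = np.full(dissimilarity.shape, UNKNOWN, dtype=np.uint8)

    # Pixels whose disparity protrudes above the ground plane are obstacle candidates.
    above_ground = (disparity - ground_disparity) > height_thresh

    # Drivable: the image resynthesizes well AND the pixel lies on the ground plane.
    labels[(dissimilarity < dissim_thresh) & ~above_ground] = DRIVABLE

    # Anomaly: the image resynthesizes poorly AND the pixel sticks out of the ground.
    labels[(dissimilarity >= dissim_thresh) & above_ground] = ANOMALY

    # Pixels where the two cues disagree remain UNKNOWN and can be ignored
    # when training the segmentation network.
    return labels
```

In a pipeline of this kind, the resulting label maps would then serve as the self-generated ground truth for training the semantic segmentation network, as described in the abstract.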
Keywords: mobile robots; self-supervised learning; semantic segmentation; automatic labeling
MDPI and ACS Style

Nguyen, T.-K.; Nguyen, P.T.-T.; Nguyen, D.-D.; Kuo, C.-H. Effective Free-Driving Region Detection for Mobile Robots by Uncertainty Estimation Using RGB-D Data. Sensors 2022, 22, 4751. https://doi.org/10.3390/s22134751
