
Simultaneous Localization and Mapping (SLAM) and Artificial Intelligence (AI) Based Localization for Positioning Applications and Mobile Robot Navigation—Second Edition

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensors and Robotics".

Deadline for manuscript submissions: 30 November 2024 | Viewed by 3502

Special Issue Editors


Dr. Henrik Hesse
Guest Editor
James Watt School of Engineering, University of Glasgow, Glasgow G12 8QQ, UK
Interests: robotics; unmanned systems; sensor fusion; perception; artificial intelligence; GPS-denied localization; simultaneous localization and mapping

Dr. Chee Kiat Seow
Guest Editor
School of Computing Science, University of Glasgow, Glasgow G12 8RZ, UK
Interests: cyber-physical security; localization/navigation with wireless communication system; Internet of Things (IoT) using Machine Learning (ML) or Artificial Intelligence methodology (AI)

Special Issue Information

Dear Colleagues,

With the proliferation of 5G technologies and the Internet of Things (IoT), there has been a surge of mobile robot technologies and location-based services entering our daily lives. This trend accelerated during the COVID-19 pandemic, amplifying the need for automated solutions, which require knowledge of the sensor/robot location and perception of the dynamic environment, e.g., robots/drones in indoor and outdoor environments for delivery, surveillance, inspection, or mapping applications. Simultaneous Localization and Mapping (SLAM) and Artificial Intelligence (AI) are seen as key enablers for precise localization and mobile robot navigation. Despite the popularity of these methods, it remains a challenge for them to work robustly in dynamic, poorly lit, or unknown environments with possible multipath effects. Hence, data from computer vision, inertial, LiDAR, and other time-of-flight sensors are typically coupled with the latest AI and Machine Learning techniques to meet the challenging requirements of high precision in location accuracy, especially in dynamic indoor environments.

This Special Issue explores novel techniques in SLAM and AI for high-precision localization to enable applications of intelligent mobile robots in realistic indoor and outdoor environments. It provides the opportunity to uncover new ground and applications for precise localization and mobile robot navigation. 

Dr. Henrik Hesse
Dr. Chee Kiat Seow
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website; once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • applications of SLAM for mobile robot navigation
  • AI and machine learning algorithms for precise localization
  • location-based AI applications for (mobile) robots
  • data fusion for localization/navigation using vision, inertial, LiDAR, UWB, or other time-of-flight sensors
  • fast SLAM and localization for edge deployment
  • map-based or landmark-based navigation
  • 3D SLAM for indoor mapping
  • algorithms and methods for mobile robot navigation
  • co-operative localization and SLAM
  • ultra-wideband (UWB)-based and other GPS-denied localization approaches
  • AI for non-line-of-sight (NLOS) detection and mitigation
  • Wi-Fi, 5G technology, and Bluetooth Low Energy (BLE) applications for localization


Published Papers (5 papers)


Research

16 pages, 3398 KiB  
Article
Enhancing Pure Inertial Navigation Accuracy through a Redundant High-Precision Accelerometer-Based Method Utilizing Neural Networks
by Qinyuan He, Huapeng Yu, Dalei Liang and Xiaozhuo Yang
Sensors 2024, 24(8), 2566; https://doi.org/10.3390/s24082566 - 17 Apr 2024
Viewed by 419
Abstract
The pure inertial navigation system, crucial for autonomous navigation in GPS-denied environments, faces challenges of error accumulation over time, impacting its effectiveness for prolonged missions. Traditional methods to enhance accuracy have focused on improving instrumentation and algorithms but face limitations due to complexity and costs. This study introduces a novel device-level redundant inertial navigation framework using high-precision accelerometers combined with a neural network-based method to refine navigation accuracy. Experimental validation confirms that this integration significantly boosts navigational precision, outperforming conventional system-level redundancy approaches. The proposed method utilizes the advanced capabilities of high-precision accelerometers and deep learning to achieve superior predictive accuracy and error reduction. This research paves the way for the future integration of cutting-edge technologies like high-precision optomechanical and atom interferometer accelerometers, offering new directions for advanced inertial navigation systems and enhancing their application scope in challenging environments.

18 pages, 17778 KiB  
Article
A Compact Handheld Sensor Package with Sensor Fusion for Comprehensive and Robust 3D Mapping
by Peng Wei, Kaiming Fu, Juan Villacres, Thomas Ke, Kay Krachenfels, Curtis Ryan Stofer, Nima Bayati, Qikai Gao, Bill Zhang, Eric Vanacker and Zhaodan Kong
Sensors 2024, 24(8), 2494; https://doi.org/10.3390/s24082494 - 12 Apr 2024
Viewed by 761
Abstract
This paper introduces an innovative approach to 3D environmental mapping through the integration of a compact, handheld sensor package with a two-stage sensor fusion pipeline. The sensor package, incorporating LiDAR, IMU, RGB, and thermal cameras, enables comprehensive and robust 3D mapping of various environments. By leveraging Simultaneous Localization and Mapping (SLAM) and thermal imaging, our solution offers good performance in conditions where global positioning is unavailable and in visually degraded environments. The sensor package runs a real-time LiDAR-Inertial SLAM algorithm, generating a dense point cloud map that accurately reconstructs the geometric features of the environment. Following the acquisition of that point cloud, we post-process these data by fusing them with images from the RGB and thermal cameras and produce a detailed, color-enriched 3D map that is useful and adaptable to different mission requirements. We demonstrated our system in a variety of scenarios, from indoor to outdoor conditions, and the results showcased the effectiveness and applicability of our sensor package and fusion pipeline. This system can be applied in a wide range of applications, ranging from autonomous navigation to smart agriculture, and has the potential to deliver substantial benefits across diverse fields.
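The post-processing stage described in the abstract (fusing the SLAM point cloud with RGB/thermal imagery) ultimately comes down to projecting each 3D map point into a calibrated camera and sampling a pixel color. The sketch below illustrates that projection step with a standard pinhole model; the function name, matrices, and numeric values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def colorize_points(points, image, K, T_cam_from_map):
    """Attach pixel colors to map-frame 3D points via a pinhole camera.

    points: (N, 3) points in the map frame
    image:  (H, W, 3) camera image
    K:      3x3 camera intrinsic matrix
    T_cam_from_map: 4x4 extrinsic transform (map frame -> camera frame)
    Returns (colors, valid): per-point colors and a visibility mask.
    """
    n = points.shape[0]
    pts_h = np.hstack([points, np.ones((n, 1))])          # homogeneous coords
    pc = (T_cam_from_map @ pts_h.T).T[:, :3]              # camera-frame points
    in_front = pc[:, 2] > 0                               # behind-camera points are invalid
    uv = (K @ pc.T).T
    uv = uv[:, :2] / uv[:, 2:3]                           # perspective divide
    u = uv[:, 0].round().astype(int)
    v = uv[:, 1].round().astype(int)
    h, w = image.shape[:2]
    valid = in_front & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    colors = np.zeros((n, 3), dtype=image.dtype)
    colors[valid] = image[v[valid], u[valid]]             # sample pixel colors
    return colors, valid
```

The same routine works for the thermal camera by swapping in its own intrinsics and extrinsics; points falling outside the image or behind the camera are simply left uncolored.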

20 pages, 5360 KiB  
Article
An Appearance-Semantic Descriptor with Coarse-to-Fine Matching for Robust VPR
by Jie Chen, Wenbo Li, Pengshuai Hou, Zipeng Yang and Haoyu Zhao
Sensors 2024, 24(7), 2203; https://doi.org/10.3390/s24072203 - 29 Mar 2024
Viewed by 449
Abstract
In recent years, semantic segmentation has made significant progress in visual place recognition (VPR) by using semantic information that is relatively invariant to appearance and viewpoint, demonstrating great potential. However, in some extreme scenarios, there may be semantic occlusion and semantic sparsity, which can lead to confusion when relying solely on semantic information for localization. Therefore, this paper proposes a novel VPR framework that employs a coarse-to-fine image matching strategy, combining semantic and appearance information to improve algorithm performance. First, we construct SemLook global descriptors using semantic contours, which can preliminarily screen images to enhance the accuracy and real-time performance of the algorithm. Based on this, we introduce SemLook local descriptors for fine screening, combining robust appearance information extracted by deep learning with semantic information. These local descriptors can address issues such as semantic overlap and sparsity in urban environments, further improving the accuracy of the algorithm. Through this refined screening process, we can effectively handle the challenges of complex image matching in urban environments and obtain more accurate results. The performance of SemLook descriptors is evaluated on three public datasets (Extended-CMU Season, Robot-Car Seasons v2, and SYNTHIA) and compared with six state-of-the-art VPR algorithms (HOG, CoHOG, AlexNet_VPR, Region VLAD, Patch-NetVLAD, Forest). In the experimental comparison, considering both real-time performance and evaluation metrics, the SemLook descriptors are found to outperform the other six algorithms. Evaluation metrics include the area under the curve (AUC) based on the precision–recall curve, Recall@100%Precision, and Precision@100%Recall. On the Extended-CMU Season dataset, SemLook descriptors achieve a 100% AUC value, and on the SYNTHIA dataset, they achieve a 99% AUC value, demonstrating outstanding performance. The experimental results indicate that introducing global descriptors for initial screening and utilizing local descriptors combining both semantic and appearance information for precise matching can effectively address the issue of location recognition in scenarios with semantic ambiguity or sparsity. This algorithm enhances descriptor performance, making it more accurate and robust in scenes with variations in appearance and viewpoint.
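The coarse-to-fine strategy the abstract describes — shortlist candidates with a cheap global descriptor, then re-rank the shortlist with richer local descriptors — can be sketched generically. Everything below (descriptor contents, cosine scoring, the choice of k) is an illustrative assumption; this is not the SemLook implementation.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two descriptor vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def coarse_to_fine_match(q_global, q_local, db_global, db_local, k=3):
    """Return the index of the best-matching database image.

    Coarse step: rank the whole database by similarity of the cheap
    global descriptors and keep only the top-k candidates.
    Fine step: re-rank just those k candidates with the costlier
    local descriptors, so the expensive comparison runs k times
    instead of once per database image.
    """
    scores = np.array([cosine(q_global, g) for g in db_global])
    shortlist = np.argsort(scores)[::-1][:k]
    return max(shortlist, key=lambda i: cosine(q_local, db_local[i]))
```

The speed/accuracy trade-off lives in k: a larger shortlist is more forgiving of a weak global descriptor but costs more fine-grained comparisons per query.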

29 pages, 3153 KiB  
Article
Ultra-Wideband Ranging Error Mitigation with Novel Channel Impulse Response Feature Parameters and Two-Step Non-Line-of-Sight Identification
by Hongchao Yang, Yunjia Wang, Shenglei Xu, Jingxue Bi, Haonan Jia and Cheekiat Seow
Sensors 2024, 24(5), 1703; https://doi.org/10.3390/s24051703 - 6 Mar 2024
Viewed by 673
Abstract
The effective identification and mitigation of non-line-of-sight (NLOS) ranging errors are essential for achieving high-precision positioning and navigation with ultra-wideband (UWB) technology in harsh indoor environments. In this paper, an efficient UWB ranging-error mitigation strategy that uses novel channel impulse response parameters based on the results of a two-step NLOS identification, composed of a decision tree and feedforward neural network, is proposed to realize indoor localization. NLOS ranging errors are classified into three types, and corresponding mitigation strategies and recall mechanisms are developed, which are also extended to partial line-of-sight (LOS) errors. Extensive experiments involving three obstacles (humans, walls, and glass) and two sites show an average NLOS identification accuracy of 95.05%, with LOS/NLOS recall rates of 95.72%/94.15%. The mitigated LOS errors are reduced by 50.4%, while the average improvement in the accuracy of the three types of NLOS ranging errors is 61.8%, reaching up to 76.84%. Overall, this method achieves a reduction in LOS and NLOS ranging errors of 25.19% and 69.85%, respectively, resulting in a 54.46% enhancement in positioning accuracy. This performance surpasses that of state-of-the-art techniques, such as the convolutional neural network (CNN), long short-term memory–extended Kalman filter (LSTM-EKF), least-squares–support vector machine (LS-SVM), and k-nearest neighbor (K-NN) algorithms.
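As a rough illustration of the two-step identification idea in this abstract (a decision tree for a fast coarse decision, a feedforward network for refinement), here is a minimal scikit-learn sketch on synthetic channel-impulse-response features. The feature choices, cluster parameters, and the both-stages-agree rule are assumptions made for illustration, not the paper's actual features, thresholds, or architecture.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic CIR-derived features [rise time, RMS delay spread, kurtosis]
# in arbitrary normalized units; NLOS returns tend to show longer rise
# times and delay spreads than LOS returns.
X_los = rng.normal([1.0, 1.0, 2.0], 0.3, size=(200, 3))
X_nlos = rng.normal([3.0, 2.5, 0.8], 0.3, size=(200, 3))
X = np.vstack([X_los, X_nlos])
y = np.array([0] * 200 + [1] * 200)  # 0 = LOS, 1 = NLOS

# Step 1: a shallow decision tree gives a fast, interpretable coarse label.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Step 2: a small feedforward network learns a finer decision boundary.
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X, y)

# Declare NLOS only when both stages agree; otherwise fall back to LOS.
pred = (tree.predict(X) == 1) & (mlp.predict(X) == 1)
acc = (pred.astype(int) == y).mean()
```

In a real pipeline the two stages would be trained and evaluated on separate splits of measured CIR data, and the identification result would gate which of the ranging-error mitigation strategies is applied.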

20 pages, 9873 KiB  
Article
GY-SLAM: A Dense Semantic SLAM System for Plant Factory Transport Robots
by Xiaolin Xie, Yibo Qin, Zhihong Zhang, Zixiang Yan, Hang Jin, Man Xu and Cheng Zhang
Sensors 2024, 24(5), 1374; https://doi.org/10.3390/s24051374 - 20 Feb 2024
Cited by 1 | Viewed by 779
Abstract
Simultaneous Localization and Mapping (SLAM), as one of the core technologies in intelligent robotics, has gained substantial attention in recent years. Addressing the limitations of SLAM systems in dynamic environments, this research proposes a system specifically designed for plant factory transportation environments, named GY-SLAM. GY-SLAM incorporates a lightweight target detection network, GY, based on YOLOv5, which utilizes GhostNet as the backbone network. This integration is further enhanced with CoordConv coordinate convolution, CARAFE up-sampling operators, and the SE attention mechanism, leading to simultaneous improvements in detection accuracy and model complexity reduction. While mAP@0.5 increased by 0.514% to 95.364%, the model simultaneously reduced the number of parameters by 43.976%, computational cost by 46.488%, and model size by 41.752%. Additionally, the system constructs pure static octree maps and grid maps. Tests conducted on the TUM dataset and a proprietary dataset demonstrate that GY-SLAM significantly outperforms ORB-SLAM3 in dynamic scenarios in terms of system localization accuracy and robustness. It shows a remarkable 92.59% improvement in RMSE for Absolute Trajectory Error (ATE), along with a 93.11% improvement in RMSE for the translational drift of Relative Pose Error (RPE) and a 92.89% improvement in RMSE for the rotational drift of RPE. Compared to YOLOv5s, the GY model brings a 41.5944% improvement in detection speed and a 17.7975% increase in SLAM operation speed to the system, indicating strong competitiveness and real-time capabilities. These results validate the effectiveness of GY-SLAM in dynamic environments and provide substantial support for the automation of logistics tasks by robots in specific contexts.
