Search Results (15)

Search Parameters:
Keywords = autonomous scanning robots in indoor

26 pages, 24227 KB  
Article
A Base-Map-Guided Global Localization Solution for Heterogeneous Robots Using a Co-View Context Descriptor
by Xuzhe Duan, Meng Wu, Chao Xiong, Qingwu Hu and Pengcheng Zhao
Remote Sens. 2024, 16(21), 4027; https://doi.org/10.3390/rs16214027 - 30 Oct 2024
Cited by 1 | Viewed by 2050
Abstract
With the continuous advancement of autonomous driving technology, an increasing number of high-definition (HD) maps have been generated and stored in geospatial databases. These HD maps can provide strong localization support for mobile robots equipped with light detection and ranging (LiDAR) sensors. However, the global localization of heterogeneous robots under complex environments remains challenging. Most of the existing point cloud global localization methods perform poorly due to the different perspective views of heterogeneous robots. Leveraging existing HD maps, this paper proposes a base-map-guided localization solution for heterogeneous robots. A novel co-view context descriptor with rotational invariance is developed to represent the characteristics of heterogeneous point clouds in a unified manner. The pre-set base map is divided into virtual scans, each of which generates a candidate co-view context descriptor. These descriptors are assigned to robots before operations. By matching the query co-view context descriptors of a working robot with the assigned candidate descriptors, coarse localization is achieved. Finally, the refined localization is obtained through point cloud registration. The proposed solution can be applied to both single-robot and multi-robot global localization scenarios, especially when communication is impaired. The heterogeneous datasets used for the experiments cover both indoor and outdoor scenarios, utilizing various scanning modes. The average rotation and translation errors are within 1° and 0.30 m, indicating the proposed solution can provide reliable localization support despite communication failures, even across heterogeneous robots. Full article
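The abstract does not define the co-view context descriptor itself, but the rotation-invariant place-matching pattern it describes can be sketched. Below is a minimal, assumed illustration (not the authors' method): a polar bin histogram of scan points, compared under all circular shifts so that a rotated revisit of the same place still matches.

```python
import math

def ring_descriptor(points, bins=36):
    """Normalized histogram of point bearings around the scan origin (toy descriptor)."""
    hist = [0.0] * bins
    for x, y in points:
        a = math.atan2(y, x) % (2 * math.pi)
        hist[int(a / (2 * math.pi) * bins) % bins] += 1.0
    total = sum(hist) or 1.0
    return [h / total for h in hist]

def match_score(d1, d2):
    """Best (lowest) squared difference over all circular shifts -> rotation invariance."""
    bins = len(d1)
    return min(
        sum((d1[i] - d2[(i + s) % bins]) ** 2 for i in range(bins))
        for s in range(bins)
    )
```

Coarse localization would then pick the virtual scan whose candidate descriptor minimizes `match_score`, after which a registration step (e.g. ICP) refines the pose.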

17 pages, 16026 KB  
Article
ARM4CH: A Methodology for Autonomous Reality Modelling for Cultural Heritage
by Nikolaos Giakoumidis and Christos-Nikolaos Anagnostopoulos
Sensors 2024, 24(15), 4950; https://doi.org/10.3390/s24154950 - 30 Jul 2024
Cited by 2 | Viewed by 1326
Abstract
Nowadays, the use of advanced sensors, such as terrestrial, mobile 3D scanners and photogrammetric imaging, has become the prevalent practice for 3D Reality Modeling (RM) and the digitization of large-scale monuments of Cultural Heritage (CH). In practice, this process is heavily related to the expertise of the surveying team handling the laborious planning and time-consuming execution of the 3D scanning process tailored to each site’s specific requirements and constraints. To minimize human intervention, this paper proposes a novel methodology for autonomous 3D Reality Modeling of CH monuments by employing autonomous robotic agents equipped with the appropriate sensors. These autonomous robotic agents are able to carry out the 3D RM process in a systematic, repeatable, and accurate approach. The outcomes of this automated process may also find applications in digital twin platforms, facilitating secure monitoring and the management of cultural heritage sites and spaces, in both indoor and outdoor environments. The main purpose of this paper is the initial release of an Industry 4.0-based methodology for reality modeling and the survey of cultural spaces in the scientific community, which will be evaluated in real-life scenarios in future research. Full article

13 pages, 3080 KB  
Article
Online Calibration of Extrinsic Parameters for Solid-State LIDAR Systems
by Mark O. Mints, Roman Abayev, Nick Theisen, Dietrich Paulus and Anselm von Gladiss
Sensors 2024, 24(7), 2155; https://doi.org/10.3390/s24072155 - 27 Mar 2024
Viewed by 1850
Abstract
This work addresses the challenge of calibrating multiple solid-state LIDAR systems. The study focuses on three different solid-state LIDAR sensors that implement different hardware designs, leading to distinct scanning patterns for each system. Consequently, detecting corresponding points between the point clouds generated by these LIDAR systems—as required for calibration—is a complex task. To overcome this challenge, this paper proposes a method that involves several steps. First, the measurement data are preprocessed to enhance their quality. Next, features are extracted from the acquired point clouds using the Fast Point Feature Histogram method, which captures important characteristics of the data. Finally, the extrinsic parameters are computed using the Fast Global Registration technique. The best set of parameters for the pipeline and the calibration success are evaluated using the normalized root mean square error. In a static real-world indoor scenario, a minimum root mean square error of 7 cm was achieved. Importantly, the paper demonstrates that the presented approach is suitable for online use, indicating its potential for real-time applications. By effectively calibrating the solid-state LIDAR systems and establishing point correspondences, this research contributes to the advancement of multi-LIDAR fusion and facilitates accurate perception and mapping in various fields such as autonomous driving, robotics, and environmental monitoring. Full article
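The pipeline above is judged by a normalized root mean square error. The abstract does not give the exact normalization, so the sketch below assumes RMSE over known point correspondences, divided by the reference cloud's diagonal extent (a common but here assumed choice), after applying a candidate 2D extrinsic.

```python
import math

def apply_extrinsic_2d(points, theta, tx, ty):
    """Transform source points into the target frame with a 2D rigid extrinsic."""
    c, s = math.cos(theta), math.sin(theta)
    return [(c * x - s * y + tx, s * x + c * y + ty) for x, y in points]

def nrmse(pred, ref):
    """RMSE over corresponding points, normalized by the reference cloud's extent."""
    mse = sum((px - rx) ** 2 + (py - ry) ** 2
              for (px, py), (rx, ry) in zip(pred, ref)) / len(ref)
    xs = [x for x, _ in ref]
    ys = [y for _, y in ref]
    span = math.hypot(max(xs) - min(xs), max(ys) - min(ys)) or 1.0
    return math.sqrt(mse) / span
```

A perfect extrinsic drives the score to zero; a wrong one leaves a large residual, which is what the calibration search minimizes.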
(This article belongs to the Special Issue LIDAR Applications in Mobile Robots)

19 pages, 12776 KB  
Article
Advanced 3D Navigation System for AGV in Complex Smart Factory Environments
by Yiduo Li, Debao Wang, Qipeng Li, Guangtao Cheng, Zhuoran Li and Peiqing Li
Electronics 2024, 13(1), 130; https://doi.org/10.3390/electronics13010130 - 28 Dec 2023
Cited by 10 | Viewed by 4285
Abstract
The advancement of Industry 4.0 has significantly propelled the widespread application of automated guided vehicle (AGV) systems within smart factories. As the structural diversity and complexity of smart factories escalate, the conventional two-dimensional plan-based navigation systems with fixed routes have become inadequate. Addressing this challenge, we devised a novel mobile robot navigation system encompassing foundational control, map construction positioning, and autonomous navigation functionalities. Initially, employing point cloud matching algorithms facilitated the construction of a three-dimensional point cloud map within indoor environments, subsequently converted into a navigational two-dimensional grid map. Simultaneously, the utilization of a multi-threaded normal distribution transform (NDT) algorithm enabled precise robot localization in three-dimensional settings. Leveraging grid maps and the robot’s inherent localization data, the A* algorithm was utilized for global path planning. Moreover, building upon the global path, the timed elastic band (TEB) algorithm was employed to establish a kinematic model, crucial for local obstacle avoidance planning. This research substantiated its findings through simulated experiments and real vehicle deployments: mobile robots scanned environmental data via laser radar and constructed point clouds and grid maps. This facilitated centimeter-level localization and successful circumvention of static obstacles, while simultaneously charting optimal paths to bypass dynamic hindrances. The devised navigation system demonstrated commendable autonomous navigation capabilities. Experimental evidence showcased satisfactory accuracy in practical applications, with positioning errors of 3.6 cm along the x-axis, 3.3 cm along the y-axis, and 4.3° in orientation. This innovation stands to substantially alleviate the low navigation precision and sluggishness encountered by AGV vehicles within intricate smart factory environments, promising a favorable prospect for practical applications. Full article
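The global planner described above runs A* on the 2D grid map derived from the point cloud. A minimal 4-connected version (a generic textbook sketch, not the authors' implementation) looks like:

```python
import heapq
from itertools import count

def astar(grid, start, goal):
    """A* on a 4-connected occupancy grid (1 = occupied); returns a list of cells."""
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])  # Manhattan heuristic
    tie = count()  # tiebreaker so the heap never compares parent entries
    frontier = [(h(start), 0, next(tie), start, None)]
    parent, g_best = {}, {start: 0}
    while frontier:
        _, g, _, cur, prev = heapq.heappop(frontier)
        if cur in parent:
            continue  # already expanded with an equal or better cost
        parent[cur] = prev
        if cur == goal:  # walk the parent chain back to the start
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_best.get((nr, nc), float("inf")):
                    g_best[(nr, nc)] = ng
                    heapq.heappush(frontier, (ng + h((nr, nc)), ng, next(tie), (nr, nc), cur))
    return None  # goal unreachable
```

The TEB-based local planner would then deform this global path around obstacles subject to the vehicle's kinematic model.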

15 pages, 14761 KB  
Technical Note
A Benchmark for Multi-Modal LiDAR SLAM with Ground Truth in GNSS-Denied Environments
by Ha Sier, Qingqing Li, Xianjia Yu, Jorge Peña Queralta, Zhuo Zou and Tomi Westerlund
Remote Sens. 2023, 15(13), 3314; https://doi.org/10.3390/rs15133314 - 28 Jun 2023
Cited by 20 | Viewed by 5951
Abstract
LiDAR-based simultaneous localization and mapping (SLAM) approaches have obtained considerable success in autonomous robotic systems. This is in part owing to the high accuracy of robust SLAM algorithms and the emergence of new and lower-cost LiDAR products. This study benchmarks the current state-of-the-art LiDAR SLAM algorithms with a multi-modal LiDAR sensor setup, showcasing diverse scanning modalities (spinning and solid state) and sensing technologies, and LiDAR cameras, mounted on a mobile sensing and computing platform. We extend our previous multi-modal multi-LiDAR dataset with additional sequences and new sources of ground truth data. Specifically, we propose a new multi-modal multi-LiDAR SLAM-assisted and ICP-based sensor fusion method for generating ground truth maps. With these maps, we then match real-time point cloud data using a normal distributions transform (NDT) method to obtain the ground truth with a full six-degrees-of-freedom (DOF) pose estimation. These novel ground truth data leverage high-resolution spinning and solid-state LiDARs. We also include new open road sequences with GNSS-RTK data and additional indoor sequences with motion capture (MOCAP) ground truth, complementing the previous forest sequences with MOCAP data. We perform an analysis of the positioning accuracy achieved, comprising ten unique configurations generated by pairing five distinct LiDAR sensors with five SLAM algorithms, to critically compare and assess their respective performance characteristics. We also report the resource utilization in four different computational platforms and a total of five settings (Intel and Jetson ARM CPUs). Our experimental results show that the current state-of-the-art LiDAR SLAM algorithms perform very differently for different types of sensors. More results, code, and the dataset can be found at GitHub. Full article

19 pages, 41334 KB  
Article
3D LiDAR Based SLAM System Evaluation with Low-Cost Real-Time Kinematics GPS Solution
by Stefan Hensel, Marin B. Marinov and Markus Obert
Computation 2022, 10(9), 154; https://doi.org/10.3390/computation10090154 - 4 Sep 2022
Cited by 5 | Viewed by 8842
Abstract
Positioning mobile systems with high accuracy is a prerequisite for intelligent autonomous behavior, both in industrial environments and in field robotics. This paper describes the setup of a robotic platform and its use for the evaluation of simultaneous localization and mapping (SLAM) algorithms. A configuration using a mobile robot Husky A200, and a LiDAR (light detection and ranging) sensor was used to implement the setup. For verification of the proposed setup, different scan matching methods for odometry determination in indoor and outdoor environments are tested. An assessment of the accuracy of the baseline 3D-SLAM system and the selected evaluation system is presented by comparing different scenarios and test situations. It was shown that the hdl_graph_slam in combination with the LiDAR OS1 and the scan matching algorithms FAST_GICP and FAST_VGICP achieves good mapping results with accuracies up to 2 cm. Full article
(This article belongs to the Special Issue Applications of Statistics and Machine Learning in Electronics)

23 pages, 9451 KB  
Article
Research and Implementation of Autonomous Navigation for Mobile Robots Based on SLAM Algorithm under ROS
by Jianwei Zhao, Shengyi Liu and Jinyu Li
Sensors 2022, 22(11), 4172; https://doi.org/10.3390/s22114172 - 31 May 2022
Cited by 55 | Viewed by 13795
Abstract
Aiming at the problems of low mapping accuracy, slow path planning efficiency, and high radar frequency requirements in the process of mobile robot mapping and navigation in an indoor environment, this paper proposes a four-wheel drive adaptive robot positioning and navigation system based on ROS. By comparing and analyzing the mapping effects of various 2D-SLAM algorithms (Gmapping, Karto SLAM, and Hector SLAM), the Karto SLAM algorithm is used for map building. By comparing the Dijkstra algorithm with the A* algorithm, the A* algorithm is used for heuristic searches, which improves the efficiency of path planning. The DWA algorithm is used for local path planning, and real-time path planning is carried out by combining sensor data, which have a good obstacle avoidance performance. The mathematical model of four-wheel adaptive robot sliding steering was established, and the URDF model of the mobile robot was established under a ROS system. The map environment was built in Gazebo, and the simulation experiment was carried out by integrating lidar and odometer data, so as to realize the functions of mobile robot scanning mapping and autonomous obstacle avoidance navigation. The communication between the ROS system and STM32 is realized, the packaging of the ROS chassis node is completed, and the ROS chassis node has the function of receiving speed commands and feeding back odometer data and TF transformation, and the slip rate of the four-wheel robot in situ steering is successfully measured, making the chassis pose more accurate. Simulation tests and experimental verification show that the system has a high precision in environment map building and can achieve accurate navigation tasks. Full article
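The DWA local planner mentioned above samples admissible velocities, forward-simulates each candidate arc against sensor data, and scores the outcomes. A toy 2D version of that sample-simulate-score loop (the sampling grid, safety radius, and weights are assumptions, not the paper's tuning):

```python
import math

def dwa_choose(v_max, w_max, pose, goal, obstacles, dt=0.5, steps=6):
    """Sample (v, w) pairs, roll out short arcs, score goal progress + clearance."""
    best, best_score = (0.0, 0.0), -float("inf")
    for vi in range(5):
        for wi in range(-4, 5):
            v, w = v_max * vi / 4, w_max * wi / 4
            x, y, th = pose
            ok = True
            for _ in range(steps):  # forward-simulate the arc
                th += w * dt
                x += v * math.cos(th) * dt
                y += v * math.sin(th) * dt
                if any(math.hypot(x - ox, y - oy) < 0.3 for ox, oy in obstacles):
                    ok = False  # arc collides: discard this velocity pair
                    break
            if not ok:
                continue
            heading = -math.hypot(x - goal[0], y - goal[1])  # closer to goal = better
            clear = min((math.hypot(x - ox, y - oy) for ox, oy in obstacles),
                        default=1.0)
            score = heading + 0.2 * min(clear, 1.0) + 0.05 * v
            if score > best_score:
                best, best_score = (v, w), score
    return best
```

With a clear corridor the scorer drives straight at full speed; once an obstacle blocks the straight arc, a curved or slower arc wins instead.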
(This article belongs to the Special Issue Artificial Intelligence (AI) and Machine-Learning-Based Localization)

21 pages, 24254 KB  
Article
Text-MCL: Autonomous Mobile Robot Localization in Similar Environment Using Text-Level Semantic Information
by Gengyu Ge, Yi Zhang, Wei Wang, Qin Jiang, Lihe Hu and Yang Wang
Machines 2022, 10(3), 169; https://doi.org/10.3390/machines10030169 - 23 Feb 2022
Cited by 24 | Viewed by 4067
Abstract
Localization is one of the most important issues in mobile robotics, especially when an autonomous mobile robot performs a navigation task. The current and popular occupancy grid map, based on 2D LiDar simultaneous localization and mapping (SLAM), is suitable and easy for path planning, and the adaptive Monte Carlo localization (AMCL) method can realize localization in most of the rooms in indoor environments. However, the conventional method fails to locate the robot when there are similar and repeated geometric structures, like long corridors. To solve this problem, we present Text-MCL, a new method for robot localization based on text information and laser scan data. A coarse-to-fine localization paradigm is used for localization: firstly, we find the coarse place for global localization by finding text-level semantic information, and then get the fine local localization using the Monte Carlo localization (MCL) method based on laser data. Extensive experiments demonstrate that our approach improves the global localization speed and success rate to 96.2% with few particles. In addition, the mobile robot using our proposed approach can recover from robot kidnapping after a short movement, while conventional MCL methods converge to the wrong position. Full article
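Text-MCL's coarse-to-fine idea, seeding particles only where the detected text says the robot could be and then running standard MCL, can be illustrated with a 1D corridor toy. The room labels, geometry, and noise levels below are invented for the sketch and are not from the paper:

```python
import math
import random

random.seed(0)

# Hypothetical text labels mapped to corridor segments (coarse localization prior).
ROOMS = {"Room A": (0.0, 10.0), "Room B": (10.0, 15.0)}

def localize(text, true_pos, iters=5, n=500):
    lo, hi = ROOMS[text]  # coarse: seed only the segment matching the detected text
    parts = [random.uniform(lo, hi) for _ in range(n)]
    meas = lambda p: 20.0 - p  # range reading to the corridor's far end
    z = meas(true_pos)
    for _ in range(iters):  # fine: classic MCL weight-and-resample
        w = [math.exp(-((meas(p) - z) / 0.5) ** 2) for p in parts]
        parts = random.choices(parts, weights=w, k=n)
        parts = [p + random.gauss(0.0, 0.1) for p in parts]  # diffusion / motion noise
    return sum(parts) / n
```

Without the text prior, the particle set would have to cover every look-alike corridor segment, which is exactly the geometric ambiguity the paper targets.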
(This article belongs to the Section Robotics, Mechatronics and Intelligent Machines)

21 pages, 5355 KB  
Article
Navigation of Autonomous Light Vehicles Using an Optimal Trajectory Planning Algorithm
by Ángel Valera, Francisco Valero, Marina Vallés, Antonio Besa, Vicente Mata and Carlos Llopis-Albert
Sustainability 2021, 13(3), 1233; https://doi.org/10.3390/su13031233 - 25 Jan 2021
Cited by 11 | Viewed by 3904
Abstract
Autonomous navigation is a complex problem that involves different tasks, such as location of the mobile robot in the scenario, robotic mapping, generating the trajectory, navigating from the initial point to the target point, detecting objects it may encounter in its path, etc. This paper presents a new optimal trajectory planning algorithm that allows the assessment of the energy efficiency of autonomous light vehicles. To the best of our knowledge, this is the first time in the literature that this is carried out by minimizing the travel time while considering the vehicle’s dynamic behavior, its limitations, and with the capability of avoiding obstacles and constraining energy consumption. This enables the automotive industry to design environmentally sustainable strategies towards compliance with governmental greenhouse gas (GHG) emission regulations and for climate change mitigation and adaptation policies. The reduction in energy consumption also allows companies to stay competitive in the marketplace. The vehicle navigation control is efficiently implemented through a middleware of component-based software development (CBSD) based on a Robot Operating System (ROS) package. It boosts the reuse of software components and the development of systems from other existing systems. Therefore, it allows the avoidance of complex control software architectures to integrate the different hardware and software components. The global maps are created by scanning the environment with FARO 3D and 2D SICK laser sensors. The proposed algorithm presents a low computational cost and has been implemented as a new module of distributed architecture. It has been integrated into the ROS package to achieve real time autonomous navigation of the vehicle. The methodology has been successfully validated in real indoor experiments using a light vehicle under different scenarios entailing several obstacle locations and dynamic parameters. Full article
(This article belongs to the Section Sustainable Transportation)

29 pages, 12171 KB  
Article
Low-Cost Calibration of Matching Error between Lidar and Motor for a Rotating 2D Lidar
by Chang Yuan, Shusheng Bi, Jun Cheng, Dongsheng Yang and Wei Wang
Appl. Sci. 2021, 11(3), 913; https://doi.org/10.3390/app11030913 - 20 Jan 2021
Cited by 15 | Viewed by 3498
Abstract
For a rotating 2D lidar, the inaccurate matching between the 2D lidar and the motor is an important error resource of the 3D point cloud, where the error is shown both in shape and attitude. Existing methods need to measure the angle position of the motor shaft in real time to synchronize the 2D lidar data and the motor shaft angle. However, the sensor used for measurement is usually expensive, which can increase the cost. Therefore, we propose a low-cost method to calibrate the matching error between the 2D lidar and the motor, without using an angular sensor. First, the sequence between the motor and the 2D lidar is optimized to eliminate the shape error of the 3D point cloud. Next, we eliminate the attitude error with uncertainty of the 3D point cloud by installing a triangular plate on the prototype. Finally, the Levenberg–Marquardt method is used to calibrate the installation error of the triangular plate. Experiments verified that the accuracy of our method can meet the requirements of the 3D mapping of indoor autonomous mobile robots. While we use a 2D lidar Hokuyo UST-10LX with an accuracy of ±40 mm in our prototype, we can limit the mapping error within ±50 mm when the distance is no more than 2.2996 m for a 1 s scan (mode 1), and we can limit the mapping error within ±50 mm at the measuring range 10 m for a 16 s scan (mode 7). Our method can reduce the cost while the accuracy is ensured, which can make a rotating 2D lidar cheaper. Full article
(This article belongs to the Special Issue Laser Sensing in Robotics)

24 pages, 23670 KB  
Article
Visual-Based Localization Using Pictorial Planar Objects in Indoor Environment
by Yu Meng, Kwei-Jay Lin, Bo-Lung Tsai, Ching-Chi Chuang, Yuheng Cao and Bin Zhang
Appl. Sci. 2020, 10(23), 8583; https://doi.org/10.3390/app10238583 - 30 Nov 2020
Cited by 3 | Viewed by 2890
Abstract
Localization is an important technology for smart services like autonomous surveillance, disinfection or delivery robots in future distributed indoor IoT applications. Visual-based localization (VBL) is a promising self-localization approach that identifies a robot’s location in an indoor or underground 3D space by using its camera to scan and match the robot’s surrounding objects and scenes. In this study, we present a pictorial planar surface based 3D object localization framework. We have designed two object detection methods for localization, ArPico and PicPose. ArPico detects and recognizes framed pictures by converting them into binary marker codes for matching with known codes in the library. It then uses the corner points on a picture’s border to identify the camera’s pose in the 3D space. PicPose detects the pictorial planar surface of an object in a camera view and produces the pose output by matching the feature points in the view with those in the original picture and producing the homography to map the object’s actual location in the 3D real world map. We have built an autonomous moving robot that can localize itself using its on-board camera and the PicPose technology. The experiment study shows that our localization methods are practical, have very good accuracy, and can be used for real time robot navigation. Full article
(This article belongs to the Special Issue Indoor Localization Systems: Latest Advances and Prospects)

21 pages, 3370 KB  
Review
Autonomous Mobile Scanning Systems for the Digitization of Buildings: A Review
by Antonio Adán, Blanca Quintana and Samuel A. Prieto
Remote Sens. 2019, 11(3), 306; https://doi.org/10.3390/rs11030306 - 2 Feb 2019
Cited by 26 | Viewed by 5796
Abstract
Mobile scanning systems are being used more and more frequently in industry, construction, and artificial intelligence applications. More particularly, autonomous scanning plays an essential role in the field of the automatic creation of 3D models of buildings. This paper presents a critical review of current autonomous scanning systems, discussing essential aspects that determine the efficiency and applicability of a scanning system in real environments. Some important issues, such as data redundancy, occlusion, initial assumptions, the complexity of the scanned scene, and autonomy, are analysed in the first part of the document, while the second part discusses other important aspects, such as pre-processing, time requirements, evaluation, and opening detection. A set of representative autonomous systems is then chosen for comparison, and the aforementioned characteristics are shown together in several illustrative tables. Principal gaps, limitations, and future developments are presented in the last section. The paper provides the reader with a general view of the world of autonomous scanning and emphasizes the difficulties and challenges that new autonomous platforms should tackle in the future. Full article
(This article belongs to the Special Issue Mobile Laser Scanning)

19 pages, 6253 KB  
Article
An Eight-Direction Scanning Detection Algorithm for the Mapping Robot Pathfinding in Unknown Indoor Environment
by Le Jiang, Pengcheng Zhao, Wei Dong, Jiayuan Li, Mingyao Ai, Xuan Wu and Qingwu Hu
Sensors 2018, 18(12), 4254; https://doi.org/10.3390/s18124254 - 4 Dec 2018
Cited by 12 | Viewed by 5491
Abstract
Aiming at the problem of how to enable the mobile robot to navigate and traverse efficiently and safely in the unknown indoor environment and map the environment, an eight-direction scanning detection (eDSD) algorithm is proposed as a new pathfinding algorithm. Firstly, we use a laser-based SLAM (Simultaneous Localization and Mapping) algorithm to perform simultaneous localization and mapping to acquire the environment information around the robot. Then, according to the proposed algorithm, the 8 certain areas around the 8 directions which are developed from the robot’s center point are analyzed in order to calculate the probabilistic path vector of each area. Considering the requirements of efficient traverse and obstacle avoidance in practical applications, the proposal can find the optimal local path in a short time. In addition to local pathfinding, the global pathfinding is also introduced for unknown environments of large-scale and complex structures to reduce the repeated traverse. The field experiments in three typical indoor environments demonstrate that deviation of the planned path from the ideal path can be kept to a low level in terms of the path length and total time consumption. It is confirmed that the proposed algorithm is highly adaptable and practical in various indoor environments. Full article
(This article belongs to the Special Issue Mobile Robot Navigation)

23 pages, 7738 KB  
Article
Graph Structure-Based Simultaneous Localization and Mapping Using a Hybrid Method of 2D Laser Scan and Monocular Camera Image in Environments with Laser Scan Ambiguity
by Taekjun Oh, Donghwa Lee, Hyungjin Kim and Hyun Myung
Sensors 2015, 15(7), 15830-15852; https://doi.org/10.3390/s150715830 - 3 Jul 2015
Cited by 23 | Viewed by 9315
Abstract
Localization is an essential issue for robot navigation, allowing the robot to perform tasks autonomously. However, in environments with laser scan ambiguity, such as long corridors, the conventional SLAM (simultaneous localization and mapping) algorithms exploiting a laser scanner may not estimate the robot pose robustly. To resolve this problem, we propose a novel localization approach based on a hybrid method incorporating a 2D laser scanner and a monocular camera in the framework of a graph structure-based SLAM. 3D coordinates of image feature points are acquired through the hybrid method, with the assumption that the wall is normal to the ground and vertically flat. However, this assumption can be relaxed, because the subsequent feature matching process rejects the outliers on an inclined or non-flat wall. Through graph optimization with constraints generated by the hybrid method, the final robot pose is estimated. To verify the effectiveness of the proposed method, real experiments were conducted in an indoor environment with a long corridor. The experimental results were compared with those of the conventional GMapping approach. The results demonstrate that it is possible to localize the robot in environments with laser scan ambiguity in real time, and the performance of the proposed method is superior to that of the conventional approach. Full article
(This article belongs to the Special Issue Sensors for Indoor Mapping and Navigation)

24 pages, 9048 KB  
Article
Towards the Automatic Scanning of Indoors with Robots
by Antonio Adán, Blanca Quintana, Andres S. Vázquez, Alberto Olivares, Eduardo Parra and Samuel Prieto
Sensors 2015, 15(5), 11551-11574; https://doi.org/10.3390/s150511551 - 19 May 2015
Cited by 30 | Viewed by 8782
Abstract
This paper is framed in both 3D digitization and 3D data intelligent processing research fields. Our objective is focused on developing a set of techniques for the automatic creation of simple three-dimensional indoor models with mobile robots. The document presents the principal steps of the process, the experimental setup and the results achieved. We distinguish between the stages concerning intelligent data acquisition and 3D data processing. This paper is focused on the first stage. We show how the mobile robot, which carries a 3D scanner, is able to, on the one hand, make decisions about the next best scanner position and, on the other hand, navigate autonomously in the scene with the help of the data collected from earlier scans. After this stage, millions of 3D data are converted into a simplified 3D indoor model. The robot imposes a stopping criterion when the whole point cloud covers the essential parts of the scene. This system has been tested under real conditions indoors with promising results. The future is addressed to extend the method in much more complex and larger scenarios. Full article
(This article belongs to the Special Issue Sensors for Indoor Mapping and Navigation)