
Special Issue "Mobile Robot Navigation"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (31 August 2019).

Special Issue Editors

Prof. Dr. Jesús Ureña
Guest Editor
Department of Electronics, School of Engineering, University of Alcala, Campus Universitario s/n, 28805 Alcala de Henares, Madrid, Spain
Interests: ultrasonic signal processing; Local Positioning Systems (LPSs); mobile robots; electronic control, tracking and navigation; daily life monitoring; algorithm implementation in software and hardware
Prof. Dr. Felipe Espinosa Zapata
Guest Editor
Department of Electronics, School of Engineering, University of Alcala, Campus Universitario s/n, 28805 Alcala de Henares, Madrid, Spain
Interests: network control systems; wireless sensor networks; event-based control; event-based estimation; electronic control engineering; robot formation; target approaching; trajectory tracking
Dr. Roberto Iglesias Rodríguez
Guest Editor
CiTIUS Research Centre, University of Santiago de Compostela, Rúa de Jenaro de la Fuente Domínguez, Campus Vida, 15782 Santiago de Compostela, Spain
Interests: control and navigation in robotics; continuous and on-line robot and machine learning; indoor localization; scientific methods in robotics (modelling and characterization of robot behavior); pattern recognition

Special Issue Information

Dear Colleagues,

Navigation is one of the main challenges in robotics. Over the last decades, a wealth of work, from theoretical research to practical applications, has been devoted to endowing robots with the ability to navigate. Yet important advances in many topics are still required to handle the increasingly complex environments and tasks imposed by the continuous evolution of robot technology across a great variety of domains (from autonomous cars and service robots to underwater and aerial vehicles). Nowadays, the massive use of drones has extended navigation from restricted 2D spaces to 3D. Advances in perception and localization, computer vision, context-aware navigation and route planning, dynamic guidance to the target, and adaptation through online learning are some of the challenges, to mention but a few, that still need to be addressed.

Different technologies and strategies are involved: sensing, positioning, mapping, approaching, tracking, formation, control, communication, human–robot interfaces, learning, etc.

The aim of this Special Issue is to contribute to the state of the art and present current applications of robot navigation. The Guest Editors therefore invite papers related to the following topics, among others (the list is non-exhaustive):

  • Perception and localization: stand-alone and cooperative approaches; SLAM
  • Map-based, landmark-based and beacon-based navigation (2D and 3D)
  • Data fusion for mobile robot navigation
  • Wireless sensor networks for mobile robot navigation
  • Network control systems
  • Robot formation and tracking
  • Adaptive robot navigation and control
  • Tracking algorithms
  • Biologically inspired robot navigation
  • Applications of mobile robot navigation

Prof. Dr. Jesús Ureña
Prof. Dr. Felipe Espinosa Zapata
Dr. Roberto Iglesias Rodríguez
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Published Papers (31 papers)


Research


Open Access Article
Leader-Following Consensus and Formation Control of VTOL-UAVs with Event-Triggered Communications
Sensors 2019, 19(24), 5498; https://doi.org/10.3390/s19245498 - 12 Dec 2019
Cited by 3
Abstract
This article presents the design and implementation of an event-triggered control approach, applied to the leader-following consensus and formation of a group of autonomous micro-aircraft with vertical take-off and landing capabilities (VTOL-UAVs). The control strategy is based on an inner–outer loop control approach. The inner control law stabilizes the attitude and position of one agent, whereas the outer control follows a virtual leader to achieve position consensus cooperatively through an event-triggered policy. The communication topology uses undirected and connected graphs. With such an event-triggered control, the closed-loop trajectories converge to a compact sphere centered at the origin of the error space. Furthermore, the minimal inter-sampling time is proven to be bounded from below, avoiding Zeno behavior. The formation problem requires the group of agents to fly in a given shape configuration. The simulation and experimental results highlight the performance of the proposed control strategy. Full article
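The event-triggered policy described above can be illustrated with a minimal sketch: agents steer toward a virtual leader using only the last broadcast states, and re-broadcast only when their local error exceeds a threshold. This is our simplification with single-integrator agents and a static leader, not the paper's VTOL dynamics; all names and parameter values are assumptions.

```python
import random

def event_triggered_consensus(n_agents=4, steps=400, dt=0.01,
                              gain=2.0, threshold=0.05):
    """Each agent steers toward a static virtual leader using only the
    last *broadcast* states; an agent re-broadcasts (an 'event') only
    when its true state drifts more than `threshold` from the last
    broadcast value, saving communication."""
    rng = random.Random(0)
    leader = 0.0
    x = [rng.uniform(-1.0, 1.0) for _ in range(n_agents)]  # true states
    x_hat = list(x)                                        # broadcast states
    events = 0
    for _ in range(steps):
        for i in range(n_agents):
            # control law uses only event-sampled information
            x[i] += dt * (-gain * (x_hat[i] - leader))
            if abs(x[i] - x_hat[i]) > threshold:
                x_hat[i] = x[i]   # trigger: broadcast the fresh state
                events += 1
    return x, events

x, events = event_triggered_consensus()
```

Because transmissions occur only at events, the number of broadcasts stays far below the one-per-step rate of periodic sampling, while the states still converge to a neighborhood of the leader.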
(This article belongs to the Special Issue Mobile Robot Navigation)

Open Access Article
Innovative Mobile Manipulator Solution for Modern Flexible Manufacturing Processes
Sensors 2019, 19(24), 5414; https://doi.org/10.3390/s19245414 - 09 Dec 2019
Cited by 4
Abstract
There is a paradigm shift in current manufacturing needs that is causing a change from the current mass-production-based approach to a mass-customization approach where production volumes are smaller and more variable. Current processes are highly adapted to the previous paradigm and lack the flexibility required to adapt to the new production needs. To solve this problem, an innovative industrial mobile manipulator is presented. The robot is equipped with a variety of sensors that allow it to perceive its surroundings and perform complex tasks in dynamic environments. Following the current needs of the industry, the robot is capable of autonomous navigation, safely avoiding obstacles. It is flexible enough to perform a wide variety of tasks, with changes between tasks made easy thanks to skill-based programming and the ability to change tools autonomously. In addition, its safety systems allow it to share the workspace with human operators. This prototype has been developed as part of the THOMAS European project, and it has been tested and demonstrated in real-world manufacturing use cases. Full article
(This article belongs to the Special Issue Mobile Robot Navigation)

Open Access Article
Social Navigation in a Cognitive Architecture Using Dynamic Proxemic Zones
Sensors 2019, 19(23), 5189; https://doi.org/10.3390/s19235189 - 27 Nov 2019
Cited by 3
Abstract
Robots have begun to populate the everyday environments of human beings. These social robots must perform their tasks without disturbing the people with whom they share their environment. This paper proposes a navigation algorithm for robots that is acceptable to people. While carrying out their tasks, robots detect the personal areas of humans and generate navigation routes that have less impact on human activities. The main novelty of this work is that the robot perceives the moods of people to adjust the size of proxemic areas. This work will contribute to making the presence of robots in human-populated environments more acceptable. We have integrated this approach into a cognitive architecture designed to perform tasks in human-populated environments. The paper provides quantitative experimental results in two scenarios: a controlled one, including social navigation metrics in comparison with a traditional navigation method, and a non-controlled one, in robotic competitions where different aspects of social robotics are measured. Full article
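As a rough illustration of mood-dependent proxemic zones, the sketch below scores a robot position with an asymmetric Gaussian around a person, inflated by a mood factor. The model shape and all parameter values are our assumptions, not the ones used in the paper.

```python
import math

def proxemic_cost(robot_xy, person_xy, person_theta, mood_scale=1.0,
                  sigma_front=1.2, sigma_side=0.8):
    """Cost of a robot position inside a person's proxemic zone:
    an asymmetric 2D Gaussian, elongated along the person's heading,
    whose size is inflated by `mood_scale` (e.g. > 1 for an upset
    person who deserves more personal space)."""
    dx = robot_xy[0] - person_xy[0]
    dy = robot_xy[1] - person_xy[1]
    # offset expressed in the person's frame (x forward, y lateral)
    fx = math.cos(person_theta) * dx + math.sin(person_theta) * dy
    fy = -math.sin(person_theta) * dx + math.cos(person_theta) * dy
    sf = sigma_front * mood_scale
    ss = sigma_side * mood_scale
    return math.exp(-0.5 * ((fx / sf) ** 2 + (fy / ss) ** 2))
```

A planner would add this cost to its navigation costmap, so routes bend farther around people whose mood calls for a larger personal zone.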
(This article belongs to the Special Issue Mobile Robot Navigation)

Open Access Article
Parking Line Based SLAM Approach Using AVM/LiDAR Sensor Fusion for Rapid and Accurate Loop Closing and Parking Space Detection
Sensors 2019, 19(21), 4811; https://doi.org/10.3390/s19214811 - 05 Nov 2019
Cited by 2
Abstract
Parking is a challenging task for autonomous vehicles and requires centimeter-level precision of distance measurement for safe parking at a destination, to avoid collisions with nearby vehicles. In order to avoid collisions with parked vehicles while parking, real-time localization performance should be maintained even when loop closing occurs. This study proposes a simultaneous localization and mapping (SLAM) method, using around view monitor (AVM)/light detection and ranging (LiDAR) sensor fusion, that provides rapid loop closing performance. We extract parking line features from the sensor fusion data for sparse feature-based pose graph optimization, which boosts the loop closing speed. Hence, the proposed method can perform loop closing within a few milliseconds to compensate for accumulated errors, even in a large-scale outdoor environment, which is much faster than other LiDAR-based SLAM algorithms; it therefore easily satisfies real-time localization requirements. Furthermore, thanks to the parking line features, the proposed method can detect a parking space by utilizing the accumulated parking lines in the map. The experiment was performed in three outdoor parking lots to validate the localization and parking space detection performance. All of the proposed methods can be operated in real time in a single-CPU environment. Full article
(This article belongs to the Special Issue Mobile Robot Navigation)

Open Access Article
Vision-Based Multirotor Following Using Synthetic Learning Techniques
Sensors 2019, 19(21), 4794; https://doi.org/10.3390/s19214794 - 04 Nov 2019
Abstract
Deep- and reinforcement-learning techniques have increasingly required large sets of real data to achieve stable convergence and generalization, in the context of image-recognition, object-detection or motion-control strategies. On this subject, the research community lacks robust approaches to overcome unavailable real-world extensive data by means of realistic synthetic-information and domain-adaptation techniques. In this work, synthetic-learning strategies have been used for the vision-based autonomous following of a noncooperative multirotor. The complete maneuver was learned with synthetic images and high-dimensional low-level continuous robot states, with deep- and reinforcement-learning techniques for object detection and motion control, respectively. A novel motion-control strategy for object following is introduced where the camera gimbal movement is coupled with the multirotor motion during the multirotor following. Results confirm that our present framework can be used to deploy a vision-based task in real flight using synthetic data. It was extensively validated in both simulated and real-flight scenarios, providing proper results (following a multirotor up to 1.3 m/s in simulation and 0.3 m/s in real flights). Full article
(This article belongs to the Special Issue Mobile Robot Navigation)

Open Access Article
Passing through Open/Closed Doors: A Solution for 3D Scanning Robots
Sensors 2019, 19(21), 4740; https://doi.org/10.3390/s19214740 - 31 Oct 2019
Cited by 1
Abstract
In this article, a door-traversing methodology for building-scanning mobile platforms is proposed. The problem of passing through open/closed doors entails several actions that can be implemented by processing 3D information provided by dense 3D laser scanners. Our robotized platform, named MoPAD (Mobile Platform for Autonomous Digitization), has been designed to collect dense 3D data and generate basic architectural models of the interiors of buildings. Moreover, the system identifies the doors of the room, recognises their respective states (open, closed or semi-closed) and completes the aforementioned 3D model, which is later integrated into the robot's global planning system. This document is mainly focused on describing how the robot navigates towards the exit door and passes to a contiguous room. The steps of approaching, door-handle recognition/positioning and handle–robot arm interaction (in the case of a closed door) are shown in detail. This approach has been tested using our MoPAD platform on the floors of buildings composed of several rooms in the case of open doors. For closed doors, the solution has been formulated, modeled and successfully tested in the Gazebo robot simulation tool, using a 4-DOF robot arm on board MoPAD. The excellent results yielded in both cases lead us to believe that our solution could be implemented/adapted to other platforms and robot arms. Full article
(This article belongs to the Special Issue Mobile Robot Navigation)

Open Access Article
Topological Frontier-Based Exploration and Map-Building Using Semantic Information
Sensors 2019, 19(20), 4595; https://doi.org/10.3390/s19204595 - 22 Oct 2019
Cited by 2
Abstract
Exploration of unknown environments is a fundamental problem in autonomous robotics that deals with the complexity of autonomously traversing an unknown area while acquiring the most important information about the environment. In this work, a mobile robot exploration algorithm for indoor environments is proposed. It combines frontier-based concepts with behavior-based strategies in order to build a topological representation of the environment. Frontier-based approaches assume that, to gain the most information about an environment, the robot has to move to the regions on the boundary between open space and unexplored space. The novelty of this work lies in the semantic frontier classification and frontier selection according to a cost–utility function. In addition, a probabilistic loop closure algorithm is proposed to solve cyclic situations. The system outputs a topological map of the free areas of the environment for further navigation. Finally, simulated and real-world experiments have been carried out; their results and the comparison to other state-of-the-art algorithms show the feasibility of the proposed exploration algorithm and the improvement it offers with regard to execution time and travelled distance. Full article
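A cost–utility frontier selection of the kind mentioned above can be sketched as follows; the particular utility (frontier size, as a proxy for expected information) and the trade-off weight are hypothetical placeholders for the paper's actual function.

```python
import math

def select_frontier(frontiers, robot_xy, lam=0.3):
    """Pick the frontier maximising a cost-utility score:
    utility (here, frontier size as a proxy for expected information)
    minus lam times the travel cost (Euclidean distance)."""
    def score(f):
        return f["size"] - lam * math.dist(robot_xy, f["centroid"])
    return max(frontiers, key=score)

frontiers = [
    {"centroid": (1.0, 0.0), "size": 4.0},   # small but close
    {"centroid": (8.0, 0.0), "size": 5.0},   # larger but far away
]
best = select_frontier(frontiers, (0.0, 0.0))
```

With lam = 0.3 the nearby frontier wins (score 3.7 vs. 2.6), showing how the weight trades information gain against travelled distance.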
(This article belongs to the Special Issue Mobile Robot Navigation)

Open Access Article
ITC: Infused Tangential Curves for Smooth 2D and 3D Navigation of Mobile Robots
Sensors 2019, 19(20), 4384; https://doi.org/10.3390/s19204384 - 10 Oct 2019
Cited by 2
Abstract
Navigation is an indispensable component of ground and aerial mobile robots. Although there is a plethora of path planning algorithms, most of them generate paths that are not smooth and have angular turns. In many cases, it is not feasible for the robots to execute these sharp turns, and a smooth trajectory is desired. We present ‘ITC: Infused Tangential Curves’, which can generate smooth trajectories for mobile robots. The main characteristics of the proposed ITC algorithm are: (1) The curves are tangential to the path, thus maintaining G1 continuity; (2) The curves are infused into the original global path to smooth out the turns; (3) The straight segments of the global path are kept straight and only the sharp turns are smoothed; (4) Safety is embedded in the ITC trajectories and robots are guaranteed to maintain a safe distance from the obstacles; (5) The curvature of ITC curves can easily be controlled and smooth trajectories can be generated in real-time; (6) The ITC algorithm smooths the global path on a part-by-part basis, so local smoothing at one point does not affect the global path. We compare the proposed ITC algorithm with traditional interpolation-based trajectory smoothing algorithms. Results show that, in the case of mobile navigation in narrow corridors, ITC paths maintain a safe distance from both walls and are easy to generate in real-time. We test the algorithm in complex scenarios to generate curves of different curvatures, while maintaining different safety thresholds from obstacles in the vicinity. We mathematically discuss smooth trajectory generation for both 2D navigation of ground robots and 3D navigation of aerial robots. We also test the algorithm in real environments with actual robots in a complex scenario of multi-robot collision avoidance. Results show that ITC trajectories can be generated quickly and are suitable for real-world scenarios of collision avoidance in narrow corridors. Full article
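As an illustration of tangential corner smoothing, the sketch below replaces a sharp turn with a quadratic Bézier that meets the two straight segments tangentially (G1 continuity), keeping the straight parts straight. This is our simplified stand-in, not the actual ITC curve; the offset distance d is an assumed parameter.

```python
import math

def smooth_corner(a, b, c, d=0.5, n=10):
    """Replace the sharp corner of polyline a-b-c with a quadratic
    Bezier that starts and ends on the two straight segments at
    distance d from b. The Bezier is tangent to both segments at its
    endpoints, so the smoothed path keeps G1 continuity."""
    def towards(p, q, dist):
        # point at `dist` from p along segment p-q
        t = dist / math.dist(p, q)
        return (p[0] + t * (q[0] - p[0]), p[1] + t * (q[1] - p[1]))
    p0 = towards(b, a, d)          # entry tangent point on segment ab
    p2 = towards(b, c, d)          # exit tangent point on segment bc
    pts = []
    for i in range(n + 1):
        t = i / n
        x = (1 - t)**2 * p0[0] + 2*(1 - t)*t * b[0] + t**2 * p2[0]
        y = (1 - t)**2 * p0[1] + 2*(1 - t)*t * b[1] + t**2 * p2[1]
        pts.append((x, y))
    return pts

# 90-degree turn at (2, 0) smoothed into a short curve
curve = smooth_corner((0, 0), (2, 0), (2, 2))
```

Only the segment within distance d of the corner is touched, which mirrors the part-by-part property: smoothing one turn leaves the rest of the global path unchanged.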
(This article belongs to the Special Issue Mobile Robot Navigation)

Open Access Article
Traversability Assessment and Trajectory Planning of Unmanned Ground Vehicles with Suspension Systems on Rough Terrain
Sensors 2019, 19(20), 4372; https://doi.org/10.3390/s19204372 - 10 Oct 2019
Cited by 5
Abstract
This paper presents a traversability assessment method and a trajectory planning method. They are key features for the navigation of an unmanned ground vehicle (UGV) in a non-planar environment. In this work, a 3D light detection and ranging (LiDAR) sensor is used to obtain geometric information about a rough terrain surface. For a given SE(2) pose of the vehicle and a specific vehicle model, the SE(3) pose of the vehicle is estimated based on LiDAR points, and then a traversability is computed. The traversability tells the vehicle the effects of its interaction with the rough terrain. Note that the traversability is computed on demand during trajectory planning, so there is no explicit terrain discretization. The proposed trajectory planner finds an initial path through the non-holonomic A*, which is a modified form of the conventional A* planner. A path is a sequence of poses without timestamps. Then, the initial path is optimized in terms of traversability, using the method of Lagrange multipliers. The optimization accounts for the model of the vehicle's suspension system. Therefore, the optimized trajectory is dynamically feasible, and the trajectory tracking error is small. The proposed methods were tested in both simulation and real-world experiments. The simulation experiments were conducted in Gazebo, a simulator that uses a physics engine to compute the vehicle motion. The real-world experiments were conducted in various non-planar environments. The results indicate that the proposed methods could accurately estimate the SE(3) pose of the vehicle. Moreover, the trajectory cost of the proposed planner was lower than that of other state-of-the-art trajectory planners. Full article
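A minimal on-demand traversability score in the spirit described above can be computed by fitting a plane to the LiDAR points of a terrain patch and penalising its slope and roughness; the thresholds and the scoring formula are our assumptions, not the paper's.

```python
import math
import numpy as np

def traversability(points, max_slope_deg=30.0, max_rough=0.05):
    """Score a terrain patch in [0, 1]: fit the plane z = ax + by + c
    to the LiDAR points by least squares, then penalise the plane's
    slope and the residual roughness (RMS of the fit residuals)."""
    pts = np.asarray(points, dtype=float)
    A = np.c_[pts[:, 0], pts[:, 1], np.ones(len(pts))]
    coef, *_ = np.linalg.lstsq(A, pts[:, 2], rcond=None)
    slope = math.degrees(math.atan(math.hypot(coef[0], coef[1])))
    rough = float(np.sqrt(np.mean((A @ coef - pts[:, 2]) ** 2)))
    return (max(0.0, 1.0 - slope / max_slope_deg)
            * max(0.0, 1.0 - rough / max_rough))

flat  = traversability([(0, 0, 0.0), (1, 0, 0.0), (0, 1, 0.0), (1, 1, 0.0)])
steep = traversability([(0, 0, 0.0), (1, 0, 1.0), (0, 1, 0.0), (1, 1, 1.0)])
```

Because the score is computed from the raw points of whatever patch the planner queries, no terrain grid has to be precomputed, matching the on-demand evaluation described in the abstract.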
(This article belongs to the Special Issue Mobile Robot Navigation)

Open Access Article
Multirobot Heterogeneous Control Considering Secondary Objectives
Sensors 2019, 19(20), 4367; https://doi.org/10.3390/s19204367 - 09 Oct 2019
Abstract
Cooperative robotics has typically considered frequently executed tasks in which the robotic system maintains its shape and orientation while fulfilling a common objective, without taking advantage of the redundancy that the robotic group could present. This paper presents a proposal for controlling a group of terrestrial robots with heterogeneous characteristics, considering primary and secondary tasks so that the group follows a path while modifying its shape and orientation at any time. The proposal is developed using controllers based on linear algebra, resulting in an algorithm with low computational cost and high scalability. Likewise, the stability of the controller is analyzed to determine the conditions that the control constants must meet, that is, their correct values. Finally, experimental results are shown with different configurations and heterogeneous robots, where the graphics corroborate the expected operation of the proposal. Full article
(This article belongs to the Special Issue Mobile Robot Navigation)

Open Access Feature Paper Article
Robust and Fast Scene Recognition in Robotics Through the Automatic Identification of Meaningful Images
Sensors 2019, 19(18), 4024; https://doi.org/10.3390/s19184024 - 18 Sep 2019
Cited by 1
Abstract
Scene recognition is still a very important topic in many fields, and that is definitely the case in robotics. Nevertheless, this task is view-dependent, which implies the existence of preferable directions when recognizing a particular scene. Both human and computer vision-based classification actually often turn out to be biased in this respect. In our case, instead of trying to improve the generalization capability for different view directions, we have opted for the development of a system capable of filtering out noisy or meaningless images while, on the contrary, retaining those views from which correct identification of the scene is likely to be feasible. Our proposal works with a heuristic metric based on the detection of key points in 3D meshes (Harris 3D). This metric is later used to build a model that combines a Minimum Spanning Tree and a Support Vector Machine (SVM). We have performed an extensive number of experiments through which we have addressed (a) the search for efficient visual descriptors, (b) the analysis of the extent to which our heuristic metric resembles the human criteria for relevance and, finally, (c) the experimental validation of our complete proposal. In the experiments, we have used both a public image database and images collected at our research center. Full article
(This article belongs to the Special Issue Mobile Robot Navigation)

Open Access Article
Point-Plane SLAM Using Supposed Planes for Indoor Environments
Sensors 2019, 19(17), 3795; https://doi.org/10.3390/s19173795 - 02 Sep 2019
Cited by 4
Abstract
Simultaneous localization and mapping (SLAM) is a fundamental problem for various applications. In indoor environments, planes are predominant features that are less affected by measurement noise. In this paper, we propose a novel point-plane SLAM system using RGB-D cameras. First, we extract feature points from RGB images and planes from depth images. Then, plane correspondences in the global map can be found using their contours. Considering the limited size of real planes, we exploit constraints on plane edges. In general, a plane edge is an intersecting line of two perpendicular planes. Therefore, instead of line-based constraints, we calculate and generate supposed perpendicular planes from edge lines, resulting in more plane observations and constraints to reduce estimation errors. To exploit the orthogonal structure of indoor environments, we also add structural (parallel or perpendicular) constraints between planes. Finally, we construct a factor graph using all of these features. The cost functions are minimized to estimate the camera poses and the global map. We test our proposed system on public RGB-D benchmarks, demonstrating its robust and accurate pose estimation compared with other state-of-the-art SLAM systems. Full article
(This article belongs to the Special Issue Mobile Robot Navigation)

Open Access Article
End-to-End Learning Framework for IMU-Based 6-DOF Odometry
Sensors 2019, 19(17), 3777; https://doi.org/10.3390/s19173777 - 31 Aug 2019
Cited by 4
Abstract
This paper presents an end-to-end learning framework for performing 6-DOF odometry using only inertial data obtained from a low-cost IMU. The proposed inertial odometry method allows leveraging inertial sensors that are widely available on mobile platforms for estimating their 3D trajectories. For this purpose, neural networks based on convolutional layers combined with a two-layer stacked bidirectional LSTM are explored from the following three aspects. First, two 6-DOF relative pose representations are investigated: one based on a vector in the spherical coordinate system, and the other based on both a translation vector and a unit quaternion. Second, the loss function in the network is designed with a combination of several 6-DOF pose distance metrics: mean squared error, translation mean absolute error, quaternion multiplicative error and quaternion inner product. Third, a multi-task learning framework is integrated to automatically balance the weights of the multiple metrics. In the evaluation, qualitative and quantitative analyses were conducted with publicly available inertial odometry datasets. The best combination of relative pose representation and loss function was the translation and quaternion together with the translation mean absolute error and quaternion multiplicative error, which obtained more accurate results than state-of-the-art inertial odometry techniques. Full article
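The winning loss combination can be sketched for a single pose pair as the translation mean absolute error plus a quaternion multiplicative error, with a fixed weight `beta` standing in for the multi-task balancing described in the paper; this is an illustrative formulation, not the paper's exact code.

```python
import math

def quat_mul(q, r):
    """Hamilton product of two (w, x, y, z) quaternions."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def pose_loss(t_pred, q_pred, t_true, q_true, beta=1.0):
    """Translation mean absolute error plus the norm of the vector
    part of q_pred * conj(q_true), a quaternion multiplicative error
    that vanishes when the two unit quaternions agree."""
    t_err = sum(abs(p - t) for p, t in zip(t_pred, t_true)) / len(t_pred)
    q_conj = (q_true[0], -q_true[1], -q_true[2], -q_true[3])
    _, qx, qy, qz = quat_mul(q_pred, q_conj)
    return t_err + beta * math.sqrt(qx*qx + qy*qy + qz*qz)
```

In training, each per-sample loss of this form would be averaged over a batch, with the weights of the individual metrics learned rather than fixed.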
(This article belongs to the Special Issue Mobile Robot Navigation)

Open Access Article
Safe and Robust Mobile Robot Navigation in Uneven Indoor Environments
Sensors 2019, 19(13), 2993; https://doi.org/10.3390/s19132993 - 07 Jul 2019
Cited by 4
Abstract
Complex environments pose great challenges for autonomous mobile robot navigation. In this study, we address the problem of autonomous navigation in 3D environments with staircases and slopes. An integrated system for safe mobile robot navigation in complex 3D environments is presented, with both the perception and navigation capabilities incorporated into a modular and reusable framework. First, to distinguish slopes from staircases in the environment, the robot builds a 3D OctoMap of the environment with a novel Simultaneous Localization and Mapping (SLAM) framework, using the information from wheel odometry, a 2D laser scanner, and an RGB-D camera. Then, we introduce the traversable map, which is generated from the multi-layer 2D maps extracted from the 3D OctoMap. This traversable map serves as the input for autonomous navigation when the robot faces slopes and staircases. Moreover, to enable robust robot navigation in 3D environments, a novel camera re-localization method based on regression forests, aimed at stable 3D localization, is incorporated into this framework. In addition, we utilize a variable-step-size Rapidly-exploring Random Tree (RRT) method that can adjust the exploration step size automatically according to the environment, without manual tuning, so that navigation efficiency is improved. The experiments were conducted in different kinds of environments and the results demonstrate that the proposed system enables the robot to navigate efficiently and robustly in complex 3D environments. Full article
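The variable-step-size idea can be sketched with a toy 2D RRT in which the extension step grows with the clearance from obstacles; the clearance-based rule, the workspace, and all parameters are our assumptions, not the paper's implementation.

```python
import math
import random

def rrt_variable_step(start, goal, obstacles, max_iter=2000,
                      min_step=0.2, max_step=1.5, goal_tol=1.0):
    """Grow an RRT in a 10x10 workspace whose extension step adapts to
    the clearance from circular obstacles (x, y, radius): bold steps
    in open space, cautious steps near obstacles."""
    random.seed(1)  # deterministic for the sake of the example

    def clearance(p):
        return min((math.dist(p, o[:2]) - o[2] for o in obstacles),
                   default=max_step)

    nodes = [start]
    parent = {0: None}
    for _ in range(max_iter):
        sample = (random.uniform(0.0, 10.0), random.uniform(0.0, 10.0))
        i = min(range(len(nodes)),
                key=lambda k: math.dist(nodes[k], sample))
        near = nodes[i]
        # the variable step: proportional to local clearance, clamped
        step = max(min_step, min(max_step, clearance(near)))
        d = math.dist(near, sample)
        if d == 0.0:
            continue
        new = (near[0] + step * (sample[0] - near[0]) / d,
               near[1] + step * (sample[1] - near[1]) / d)
        if clearance(new) <= 0.0:
            continue  # inside an obstacle, reject
        nodes.append(new)
        parent[len(nodes) - 1] = i
        if math.dist(new, goal) < goal_tol:
            path, k = [], len(nodes) - 1
            while k is not None:      # backtrack to the root
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None

path = rrt_variable_step((0.5, 0.5), (9.0, 9.0), [(5.0, 5.0, 1.5)])
```

Letting the step shrink near obstacles keeps the tree safe in cluttered regions while still crossing open space quickly, which is the efficiency gain the abstract attributes to the adaptive step size.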
(This article belongs to the Special Issue Mobile Robot Navigation)
Open Access Article
Socially Compliant Path Planning for Robotic Autonomous Luggage Trolley Collection at Airports
Sensors 2019, 19(12), 2759; https://doi.org/10.3390/s19122759 - 19 Jun 2019
Cited by 5
Abstract
This paper describes a socially compliant path planning scheme for robotic autonomous luggage trolley collection at airports. The robot is required to efficiently collect all assigned luggage trolleys in a designated area, while avoiding obstacles and not disturbing pedestrians. This path planning problem is formulated as a Traveling Salesman Problem (TSP). Unlike conventional solutions to the TSP, in which the Euclidean distance between two sites is used as the metric, a high-dimensional metric incorporating pedestrians’ feelings is applied in this work. To obtain the new metric, a novel potential function is first proposed to model the relationship between the robot, luggage trolleys, obstacles, and pedestrians. The Social Force Model (SFM) is utilized so that pedestrians exert an influence on the potential field different from that of ordinary obstacles. Directed by the attractive and repulsive forces generated from the potential field, a number of paths connecting the robot and a luggage trolley, or two luggage trolleys, can be obtained. The length of the generated path is taken as the new metric. The Self-Organizing Map (SOM) is well suited to finding a final path connecting all luggage trolleys and the robot located in the potential field, as it can capture the intrinsic connections in the high-dimensional space. Therefore, incorporating the new metric, the SOM is used to find the optimal path along which the robot collects the assigned luggage trolleys in sequence. As a demonstration, the proposed path planning method is implemented in simulation experiments, showing gains in efficiency and efficacy.
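A minimal sketch of such a pedestrian-aware potential, with a hypothetical social weight `k_ped` larger than the obstacle weight so that people repel the robot more strongly than ordinary obstacles at the same distance (all names and constants are illustrative, not the paper's function):

```python
import math

def social_potential(p, trolleys, obstacles, pedestrians,
                     k_att=1.0, k_obs=1.0, k_ped=3.0):
    """Potential at point p: quadratic attraction toward trolleys plus
    inverse-distance repulsion from obstacles and, with a larger gain,
    from pedestrians (echoing the Social Force Model idea)."""
    u = 0.0
    for t in trolleys:                        # attraction toward goals
        u += 0.5 * k_att * math.dist(p, t) ** 2
    for o in obstacles:                       # short-range repulsion
        u += k_obs / max(math.dist(p, o), 1e-6)
    for q in pedestrians:                     # stronger social repulsion
        u += k_ped / max(math.dist(p, q), 1e-6)
    return u
```

Paths traced through the gradient of such a field are longer around pedestrians than around equal-distance obstacles, which is what turns the path length into the "social" TSP metric described above.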
Open Access Article
Self-Triggered Formation Control of Nonholonomic Robots
Sensors 2019, 19(12), 2689; https://doi.org/10.3390/s19122689 - 14 Jun 2019
Cited by 2
Abstract
In this paper, we report the design of an aperiodic remote formation controller applied to nonholonomic robots tracking nonlinear trajectories using an external positioning sensor network. Our main objective is to reduce wireless communication with external sensors and robots while guaranteeing formation stability. Unlike most previous work in the field of aperiodic control, we design a self-triggered controller that only updates the control signal according to the variation of a Lyapunov function, without taking the measurement error into account. The controller is responsible for scheduling measurement requests to the sensor network and for computing and sending control signals to the robots. We design two triggering mechanisms: a centralized one, which takes the formation state into account, and a decentralized one, which considers the individual state of each unit. We present a statistical analysis of simulation results, showing that our control solution significantly reduces the need for communication in comparison with periodic implementations, while preserving the desired tracking performance. To validate the proposal, we also perform experimental tests with robots remotely controlled by a mini PC through an IEEE 802.11g wireless network, in which the robots’ poses are detected by a set of camera sensors connected to the same wireless network.
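The self-triggering idea can be sketched on a scalar toy plant: the control signal is recomputed (one "communication") only when the Lyapunov function V = x² has drifted by a relative threshold since the last trigger, instead of at every sampling step. The plant, gain, and threshold below are illustrative assumptions, not the paper's formation model.

```python
def self_triggered_run(x0, k=1.0, dt=0.01, t_end=5.0, sigma=0.05):
    """Simulate x' = u with u = -k*x held constant between triggers.
    A trigger fires when V = x^2 deviates from its value at the last
    trigger by more than the relative threshold sigma."""
    x, u = x0, -k * x0
    v_trig = x * x                    # Lyapunov value at last trigger
    updates, steps = 1, int(t_end / dt)
    for _ in range(steps):
        x += u * dt                   # plant integrates the held input
        if abs(x * x - v_trig) > sigma * max(v_trig, 1e-9):
            u, v_trig = -k * x, x * x # trigger: refresh control signal
            updates += 1
    return x, updates, steps
```

The state still converges, but the number of control updates is far below the number of sampling steps, which is the communication saving the abstract reports against periodic implementations.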
Open Access Article
A Novel Approach for Lidar-Based Robot Localization in a Scale-Drifted Map Constructed Using Monocular SLAM
Sensors 2019, 19(10), 2230; https://doi.org/10.3390/s19102230 - 14 May 2019
Cited by 5
Abstract
Scale ambiguity and drift are inherent drawbacks of a pure-visual monocular simultaneous localization and mapping (SLAM) system. This can be a crucial challenge for robots with range sensors performing localization in a map previously built by a monocular camera. In this paper, a metrically inconsistent prior map is built by monocular SLAM and subsequently used by another robot, equipped only with a laser range finder (LRF), to perform localization. To tackle the metric inconsistency, this paper proposes a 2D-LRF-based localization algorithm which allows the robot to locate itself and resolve the scale of the local map simultaneously. To align the data from the 2D LRF to the map, 2D structures are extracted from the 3D point cloud map obtained by the visual SLAM process. Next, a modified Monte Carlo localization (MCL) approach is proposed to estimate the robot’s state, which is composed of both the robot’s pose and the map’s relative scale. Finally, the effectiveness of the proposed system is demonstrated in experiments on a public benchmark dataset as well as in a real-world scenario. The experimental results indicate that the proposed method is able to globally localize the robot in real time, and that successful localization can be achieved even in a badly drifted map.
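A sketch of the scale-augmented weighting step of such an MCL filter: each particle carries a local scale s in addition to its pose, and s rescales the range expected from the (scale-drifted) monocular map before it is compared with the real LRF measurement. The sensor model and all names are assumptions for illustration.

```python
import math

def mcl_scale_update(particles, weights, measured_range, map_range_fn,
                     noise=0.1):
    """Reweight particles (x, y, theta, s): the expected range read from
    the map at the particle's pose is multiplied by the particle's scale
    s, then scored against the LRF measurement with a Gaussian model."""
    new_w = []
    for (x, y, th, s), w in zip(particles, weights):
        expected = s * map_range_fn(x, y, th)   # rescale map range by s
        err = measured_range - expected
        new_w.append(w * math.exp(-0.5 * (err / noise) ** 2))
    total = sum(new_w) or 1.0
    return [w / total for w in new_w]           # normalize
```

Particles whose scale hypothesis explains the LRF ranges dominate after a few updates, so pose and map scale are resolved together, as the abstract describes.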
Open Access Article
Learning Environmental Field Exploration with Computationally Constrained Underwater Robots: Gaussian Processes Meet Stochastic Optimal Control
Sensors 2019, 19(9), 2094; https://doi.org/10.3390/s19092094 - 06 May 2019
Cited by 5
Abstract
Autonomous exploration of environmental fields is one of the most promising tasks to be performed by fleets of mobile underwater robots. The goal is to maximize the information gain during the exploration process by integrating an information metric into the path-planning and control step. To this end, the system maintains an internal belief representation of the environmental field which incorporates previously collected measurements from the real field. In contrast to surface robots, mobile underwater systems are forced to run all computations on board due to the limited communication bandwidth in underwater domains. Thus, reducing the computational cost of field exploration algorithms constitutes a key challenge for in-field implementations on micro underwater robot teams. In this work, we present a computationally efficient exploration algorithm which utilizes field belief models based on Gaussian Processes, such as Gaussian Markov random fields or Kalman regression, to enable field estimation with constant computational cost over time. We extend the belief models with weighted shape functions to directly incorporate spatially continuous field observations. The developed belief models serve as information-theoretic value functions to enable path planning through stochastic optimal control with path integrals. We demonstrate the efficiency of our exploration algorithm in a series of simulations, including the case of a stationary spatio-temporal field.
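A minimal sketch of Kalman regression over fixed weighted shape functions: because the filter state is the fixed-size vector of basis weights, the cost of each measurement update is constant over time, unlike a full GP whose cost grows with the number of measurements. Basis choice, length scale, and noise values are illustrative assumptions.

```python
import numpy as np

def rbf_features(p, centers, ell=0.5):
    """Weighted shape functions: a fixed RBF basis evaluated at p."""
    d2 = np.sum((centers - np.asarray(p)) ** 2, axis=1)
    return np.exp(-0.5 * d2 / ell ** 2)

def kalman_regression_update(mu, P, p, y, centers, r=0.01):
    """One scalar-measurement Kalman update of the basis weights.
    State size equals the (fixed) number of basis functions, so the
    per-measurement cost is constant over time."""
    h = rbf_features(p, centers)      # observation row for location p
    s = h @ P @ h + r                 # innovation variance
    k = (P @ h) / s                   # Kalman gain
    mu = mu + k * (y - h @ mu)        # weight mean update
    P = P - np.outer(k, h @ P)        # weight covariance update
    return mu, P
```

The field estimate at any point is then `rbf_features(p, centers) @ mu`, and the posterior covariance `P` supplies the uncertainty that an information-theoretic value function would reward exploring.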
Open Access Article
A Precise and GNSS-Free Landing System on Moving Platforms for Rotary-Wing UAVs
Sensors 2019, 19(4), 886; https://doi.org/10.3390/s19040886 - 20 Feb 2019
Cited by 3
Abstract
This article presents a precise landing system that allows rotary-wing UAVs to approach and land safely on moving platforms, without using GNSS at any stage of the landing maneuver, with centimeter-level accuracy and a high level of robustness. This system implements a novel concept in which the relative position and velocity between the aerial vehicle and the landing platform are calculated from the angles of a cable that physically connects the UAV and the landing platform. The use of a cable brings a number of extra benefits, such as increasing the precision of the UAV altitude control. It also facilitates centering the UAV right on top of the expected landing position, and increases the stability of the UAV just after contacting the landing platform. The system was implemented in an unmanned helicopter, and many tests were carried out under different conditions to measure the accuracy and robustness of the proposed solution. Results show that the developed system allowed landing with centimeter accuracy using only local sensors, and that the helicopter could follow the landing platform along multiple trajectories at different velocities.
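The core geometric idea can be sketched as a spherical-coordinate conversion from the measured cable length and angles to the platform's position relative to the UAV. The angle convention below is a hypothetical choice for illustration, not the paper's exact formulation.

```python
import math

def relative_position(cable_len, azimuth, elevation):
    """Platform position relative to the UAV, from the cable length and
    its two angles measured at the UAV attachment point (no GNSS):
    azimuth in the horizontal plane, elevation below the horizontal."""
    horiz = cable_len * math.cos(elevation)   # horizontal cable span
    x = horiz * math.cos(azimuth)
    y = horiz * math.sin(azimuth)
    z = -cable_len * math.sin(elevation)      # platform below the UAV
    return x, y, z
```

With the cable hanging straight down (elevation = 90°), the platform sits directly beneath the UAV at the full cable length, which is the configuration the landing controller would drive toward before descending.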
Open Access Article
Positioning, Navigation, and Book Accessing/Returning in an Autonomous Library Robot using Integrated Binocular Vision and QR Code Identification Systems
Sensors 2019, 19(4), 783; https://doi.org/10.3390/s19040783 - 14 Feb 2019
Cited by 2
Abstract
With rapid advancements in artificial intelligence and mobile robots, some of the tedious yet simple jobs in modern libraries, like book accessing and returning (BAR) operations that were previously fulfilled manually, could be undertaken by robots. However, due to the limited accuracy of existing positioning and navigation (P&N) technologies and the operational errors accumulated within the robot P&N process, most current robots are not able to fulfill such high-precision operations. To address these practical issues, we propose, for the first time (to the best of our knowledge), to combine binocular vision and Quick Response (QR) code identification techniques to improve the robot P&N accuracy, and then construct an autonomous library robot for high-precision BAR operations. Specifically, the binocular vision system is used for dynamic digital map construction and autonomous P&N, as well as obstacle identification and avoidance, while the QR code identification technique is responsible for both eliminating robot operational errors and determining robotic arm BAR operations. Both simulations and experiments are conducted to verify the effectiveness of the proposed technique combination, as well as the constructed robot. Results show that the technique combination is effective and robust, and can significantly improve the P&N and BAR operation accuracies while reducing the BAR operation time. The implemented robot is fully autonomous and cost-effective, and may find applications far beyond libraries, with only sophisticated technologies employed.
Open Access Article
Obstacle Avoidance of Two-Wheel Differential Robots Considering the Uncertainty of Robot Motion on the Basis of Encoder Odometry Information
Sensors 2019, 19(2), 289; https://doi.org/10.3390/s19020289 - 12 Jan 2019
Cited by 7
Abstract
It is important to overcome different types of uncertainties for the safe and reliable navigation of mobile robots. Uncertainty sources can be categorized into recognition, motion, and environmental sources. Although several challenges of recognition uncertainty have been addressed, little attention has been paid to motion uncertainty. This study shows how the uncertainties of robot motion can be quantitatively modeled through experiments. Although practical motion uncertainties are affected by various factors, this research focuses on the velocity control performance of the wheels as measured by encoder sensors. Experimental results show that the velocity control errors of practical robots are not negligible. This paper proposes a new motion control scheme for reliable obstacle avoidance that reflects the experimentally measured motion uncertainties. The presented experimental results clearly show that considering the motion uncertainty is essential for successful collision avoidance. The presented simulation results show that a robot cannot move through narrow passages when the uncertainty of its motion is high, owing to the risk of collision. This research shows that the proposed method accurately reflects the motion uncertainty and balances collision safety with navigation efficiency.
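A toy version of the resulting clearance check: an assumed velocity-error ratio (of the kind the encoder experiments quantify) induces a position uncertainty over one control period, which inflates the robot's effective radius before a passage is declared traversable. All parameters are illustrative.

```python
def is_passage_safe(passage_width, robot_radius, v_cmd, dt,
                    vel_error_ratio):
    """Clearance check that reflects motion uncertainty: the velocity
    control error accumulates into a position uncertainty over the
    control period dt, inflating the robot's effective radius."""
    pos_uncertainty = vel_error_ratio * v_cmd * dt
    effective_radius = robot_radius + pos_uncertainty
    return passage_width >= 2.0 * effective_radius
```

With a small velocity error a 1 m passage is traversable for a 0.4 m-radius robot, but a large error closes it off, matching the simulation observation that high motion uncertainty blocks narrow passages.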
Open Access Article
Efficient Lazy Theta* Path Planning over a Sparse Grid to Explore Large 3D Volumes with a Multirotor UAV
Sensors 2019, 19(1), 174; https://doi.org/10.3390/s19010174 - 05 Jan 2019
Cited by 4
Abstract
Exploring large, unknown, and unstructured environments is challenging for Unmanned Aerial Vehicles (UAVs), yet they are valuable tools for inspecting large structures safely and efficiently. The Lazy Theta* path-planning algorithm is revisited and adapted to generate paths fast enough to be used in real time and outdoors in large 3D scenarios. In real unknown scenarios, a given minimum safety distance to the nearest obstacle or unknown space must be observed, which increases the number of obstacle detection queries and creates a bottleneck in the path-planning algorithm. We reduce the dimension of the problem by exploiting geometrical properties to speed up these computations. We also apply a non-regular grid representation of the world to increase the performance of the path-planning algorithm. In particular, a sparse resolution grid in the form of an octree is used, organizing the measurements spatially and merging voxels when they are of the same state. Additionally, the number of neighbors is trimmed to match the sparse tree, reducing the number of obstacle detection queries. The development methodology adopted was Test-Driven Development (TDD), and the outcome was evaluated in real outdoor flights with a multirotor UAV. The results show over a 90 percent decrease in overall path generation computation time. Furthermore, our approach scales well as the safety distance increases.
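Central to (Lazy) Theta* is a grid line-of-sight test that decides whether a node can shortcut straight to its parent's parent, producing any-angle paths; it is also where the obstacle detection queries concentrate. A standard Bresenham-style check on a 2D grid looks like this (a sketch of the classic test, not the authors' octree implementation):

```python
def line_of_sight(grid, a, b):
    """Return True if the straight segment from cell a to cell b crosses
    no blocked cell. grid[y][x] is True where the cell is blocked."""
    (x0, y0), (x1, y1) = a, b
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x1 > x0 else -1
    sy = 1 if y1 > y0 else -1
    err = dx - dy
    while True:
        if grid[y0][x0]:              # obstacle detection query
            return False
        if (x0, y0) == (x1, y1):
            return True
        e2 = 2 * err
        if e2 > -dy:
            err -= dy; x0 += sx
        if e2 < dx:
            err += dx; y0 += sy
```

Because every shortcut attempt walks cells like this, reducing the number of such queries (the geometric pruning and octree neighbor trimming above) directly cuts the planner's dominant cost.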
Open Access Article
Bearing-Only Obstacle Avoidance Based on Unknown Input Observer and Angle-Dependent Artificial Potential Field
Sensors 2019, 19(1), 31; https://doi.org/10.3390/s19010031 - 21 Dec 2018
Cited by 4
Abstract
This paper addresses the problem of obstacle avoidance with bearing-only measurements in the case where the obstacle motion is model-free, i.e., its acceleration is absolutely unknown, and therefore cannot be dealt with by mainstream Kalman-like schemes based on a known motion model. First, the essential cause of the collisions arising from the local-minimum problem in the standard artificial potential field method is proved, and a revised method with an angle-dependent factor is proposed. Then, an unknown input observer is proposed to estimate the position and velocity of the obstacle. Finally, numerical simulation demonstrates the effectiveness of the approach in terms of estimation accuracy and termination time.
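A sketch of an angle-dependent potential field force, with a hypothetical factor that strengthens repulsion when the obstacle bearing differs from the goal bearing. This breaks the exact head-on force balance behind the classic local-minimum collision; the paper's actual factor may differ.

```python
import math

def apf_force(robot, goal, obstacle, k_att=1.0, k_rep=1.0, d0=2.0):
    """Total APF force on the robot: linear attraction to the goal plus
    short-range repulsion from the obstacle, with the repulsive gain
    scaled by an angle-dependent factor (illustrative form)."""
    gx, gy = goal[0] - robot[0], goal[1] - robot[1]
    ox, oy = obstacle[0] - robot[0], obstacle[1] - robot[1]
    d = math.hypot(ox, oy)
    fx, fy = k_att * gx, k_att * gy           # attraction to the goal
    if 0.0 < d < d0:                          # repulsion within range d0
        ang = math.atan2(gy, gx) - math.atan2(oy, ox)
        factor = 1.0 + abs(math.sin(ang))     # angle-dependent factor
        mag = k_rep * factor * (1.0 / d - 1.0 / d0) / d ** 2
        fx -= mag * ox / d                    # push away from obstacle
        fy -= mag * oy / d
    return fx, fy
```

An obstacle off to the side is repelled with up to twice the gain of a dead-ahead one, so the force field steers the robot around rather than letting attraction and repulsion cancel on the goal line.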
Open Access Article
Validation of a Dynamic Planning Navigation Strategy Applied to Mobile Terrestrial Robots
Sensors 2018, 18(12), 4322; https://doi.org/10.3390/s18124322 - 07 Dec 2018
Cited by 5
Abstract
This work describes the performance of the DPNA-GA (Dynamic Planning Navigation Algorithm optimized with Genetic Algorithm) applied to autonomous navigation in unknown static and dynamic terrestrial environments. The main aim was to validate the functionality and robustness of the DPNA-GA under variations of genetic parameters, including the crossover rate and population size. To this end, simulations were performed in static and dynamic environments under the different conditions. The simulation results showed satisfactory efficiency and robustness of the DPNA-GA technique, validating it for real applications involving mobile terrestrial robots.
Open Access Article
An Eight-Direction Scanning Detection Algorithm for the Mapping Robot Pathfinding in Unknown Indoor Environment
Sensors 2018, 18(12), 4254; https://doi.org/10.3390/s18124254 - 04 Dec 2018
Cited by 3
Abstract
To enable a mobile robot to navigate and traverse an unknown indoor environment efficiently and safely while mapping it, an eight-direction scanning detection (eDSD) algorithm is proposed as a new pathfinding algorithm. First, we use a laser-based SLAM (Simultaneous Localization and Mapping) algorithm to acquire the environment information around the robot. Then, according to the proposed algorithm, eight areas around the eight directions radiating from the robot’s center point are analyzed in order to calculate the probabilistic path vector of each area. Considering the requirements of efficient traversal and obstacle avoidance in practical applications, the proposed method can find the optimal local path in a short time. In addition to local pathfinding, global pathfinding is also introduced for large-scale, structurally complex unknown environments to reduce repeated traversal. Field experiments in three typical indoor environments demonstrate that the deviation of the planned path from the ideal path can be kept to a low level in terms of path length and total time consumption. This confirms that the proposed algorithm is highly adaptable and practical in various indoor environments.
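The per-sector scoring step can be sketched as blending, for each of the eight directions, the fraction of free space with the fraction of still-unmapped space, so the robot favors headings that are both traversable and new. The inputs and weights are illustrative assumptions, not the paper's probabilistic path vector.

```python
def sector_scores(free_ratio, unknown_ratio, w_free=0.6, w_unknown=0.4):
    """Score each of the 8 sectors (index k = heading of k * 45 degrees)
    by blending traversability with unexplored area, so that traversal
    covers new space without repeats."""
    return [w_free * f + w_unknown * u
            for f, u in zip(free_ratio, unknown_ratio)]

def best_direction(free_ratio, unknown_ratio):
    """Return the heading (degrees) of the best-scoring sector."""
    s = sector_scores(free_ratio, unknown_ratio)
    k = max(range(len(s)), key=lambda i: s[i])
    return k * 45
```

In the example below, sector 1 (45°) is slightly less free than sector 0 but largely unexplored, so the blended score sends the robot there, which is the repeat-reducing behavior the abstract describes.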
Open Access Article
Interval Type-2 Neural Fuzzy Controller-Based Navigation of Cooperative Load-Carrying Mobile Robots in Unknown Environments
Sensors 2018, 18(12), 4181; https://doi.org/10.3390/s18124181 - 28 Nov 2018
Cited by 5
Abstract
In this paper, a navigation method is proposed for cooperative load-carrying mobile robots. A behavior mode manager is used effectively in the navigation control method to switch between two behavior modes, wall-following mode (WFM) and goal-oriented mode (GOM), according to the environmental conditions. Additionally, an interval type-2 neural fuzzy controller based on dynamic group artificial bee colony (DGABC) optimization is proposed. Reinforcement learning is used to develop the WFM adaptively: first, a single robot is trained to learn the WFM; then, the control method is implemented on cooperative load-carrying mobile robots. In WFM learning, the proposed DGABC performs better than the original artificial bee colony algorithm and other improved algorithms. Furthermore, the results of cooperative load-carrying navigation tests demonstrate that the proposed cooperative load-carrying and navigation methods enable the robots to carry the task item to the goal and complete the navigation mission efficiently.
Open Access Article
Bilevel Optimization-Based Time-Optimal Path Planning for AUVs
Sensors 2018, 18(12), 4167; https://doi.org/10.3390/s18124167 - 27 Nov 2018
Cited by 4
Abstract
Using a bilevel optimization (BIO) scheme, this paper presents a time-optimal path planner for autonomous underwater vehicles (AUVs) operating in grid-based environments with ocean currents. In this scheme, the upper optimization problem is to find a collision-free channel of connected grids from a starting point to a destination, and the lower optimization problem is to find an energy-optimal path within the channel generated by the upper level. The proposed scheme integrates an ant colony algorithm at the upper level with quantum-behaved particle swarm optimization at the lower level, and is tested on finding an energy-optimal path for an AUV navigating through an ocean environment in the presence of obstacles. This arrangement avoids the discrete state transitions that constrain a vehicle’s motion to a small set of headings, and improves efficiency through the use of evolutionary algorithms. Simulation results show that the proposed BIO scheme achieves higher computational efficiency, with a slightly lower fitness value, than the sliding wavefront expansion scheme, a grid-based path planner with continuous motion directions.
Open Access Article
Three Landmark Optimization Strategies for Mobile Robot Visual Homing
Sensors 2018, 18(10), 3180; https://doi.org/10.3390/s18103180 - 20 Sep 2018
Cited by 1
Abstract
Visual homing is an attractive autonomous mobile robot navigation technique which uses only vision sensors to guide the robot to a specified target location. Landmarks, usually represented by scale-invariant features, are the only input of visual homing approaches. However, the landmark distribution has a great impact on the homing performance of the robot, as irregularly distributed landmarks significantly reduce the navigation precision. In this paper, we propose three strategies to solve this problem. We use scale-invariant feature transform (SIFT) features as natural landmarks, and the proposed strategies can optimize the landmark distribution without over-eliminating landmarks or increasing the amount of calculation. Experiments on both panoramic image databases and a real mobile robot have verified the effectiveness and feasibility of the proposed strategies.
Open Access Article
Automatic Calibration of Odometry and Robot Extrinsic Parameters Using Multi-Composite-Targets for a Differential-Drive Robot with a Camera
Sensors 2018, 18(9), 3097; https://doi.org/10.3390/s18093097 - 14 Sep 2018
Cited by 5
Abstract
This paper presents an automatic method that simultaneously calibrates the odometry parameters and the relative pose between a monocular camera and a robot. Most camera pose estimation methods use natural features or artificial landmark tools. However, natural features suffer from mismatches and scale ambiguity, and large-scale precision landmark tools are challenging to make. To solve these problems, we propose an automatic process that combines multiple composite targets, selects keyframes, and estimates keyframe poses. Each composite target consists of an ArUco marker and a checkerboard pattern. First, an analytical method is applied to obtain initial values of all calibration parameters; prior knowledge of the calibration parameters is not required. Then, two optimization steps are used to refine the calibration parameters, with planar motion constraints of the camera introduced in these optimizations. The proposed solution is automatic: manual selection of keyframes, manual initialization, and driving the robot along a specific trajectory are not required. The competitive accuracy and stability of the proposed method under different target placements and robot paths are tested experimentally. Positive effects on calibration accuracy and stability are obtained when (1) composite targets are adopted; (2) two optimization steps are used; (3) planar motion constraints are introduced; and (4) the number of targets is increased.
Review

Open Access Review
Towards the Internet of Flying Robots: A Survey
Sensors 2018, 18(11), 4038; https://doi.org/10.3390/s18114038 - 19 Nov 2018
Cited by 25
Abstract
The Internet of Flying Robots (IoFR) has received much attention in recent years thanks to the mobility and flexibility of flying robots. Although a great deal of research has been done, a comprehensive survey on this topic has been lacking. This paper analyzes several typical problems in designing IoFR for real applications, including providing wireless communication support, monitoring targets of interest, serving a wireless sensor network, and collaborating with ground robots. In particular, an overview of the existing publications on the coverage problem, connectivity of flying robots, energy capacity limitations, target searching, path planning, flying robot navigation with collision avoidance, etc., is presented. Beyond the discussion of these available approaches, their shortcomings are indicated and some promising future research directions are pointed out.
Open Access Review
Path Smoothing Techniques in Robot Navigation: State-of-the-Art, Current and Future Challenges
Sensors 2018, 18(9), 3170; https://doi.org/10.3390/s18093170 - 19 Sep 2018
Cited by 43
Abstract
Robot navigation is an indispensable component of any mobile service robot. Many path planning algorithms generate paths with many sharp or angular turns. Such paths are not fit for a mobile robot, which has to slow down at these sharp turns; these robots could be carrying delicate, dangerous, or precious items, and executing sharp turns may not even be feasible kinematically. In contrast, smooth trajectories are often desired for robot motion and must be generated while considering static and dynamic obstacles and other constraints, such as feasible curvature, robot and lane dimensions, and speed. The aim of this paper is to succinctly summarize and review the path smoothing techniques in robot navigation and to discuss the challenges and future trends. Both autonomous mobile robots and autonomous vehicles (outdoor robots or self-driving cars) are discussed. The state-of-the-art algorithms are broadly classified into different categories, and each approach is introduced briefly with the necessary background, merits, and drawbacks. Finally, the paper discusses the current and future challenges in optimal trajectory generation and smoothing research.
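As an example of one classic smoothing family covered by such surveys, Chaikin's corner-cutting replaces each sharp corner with points at 1/4 and 3/4 of the adjacent segments, rounding turns while staying close to the original waypoints:

```python
def chaikin_smooth(path, iterations=2):
    """Chaikin corner-cutting: each pass replaces every segment with two
    interior points at 1/4 and 3/4, so sharp corners are progressively
    rounded. Endpoints are preserved."""
    for _ in range(iterations):
        out = [path[0]]
        for (x0, y0), (x1, y1) in zip(path, path[1:]):
            out.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            out.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        out.append(path[-1])
        path = out
    return path
```

After one pass a right-angle corner is already cut away; note, however, that corner-cutting alone does not check obstacle clearance or curvature bounds, which is exactly why the survey's constraint-aware techniques exist.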