Open Access Article
A High-Fidelity Energy Efficient Path Planner for Unmanned Airships
Robotics 2017, 6(4), 28; doi:10.3390/robotics6040028
Abstract
This paper presents a comparative study on the effects of grid resolution, vehicle velocity, and wind vector fields on the trajectory planning of unmanned airships. A wavefront expansion trajectory planner that minimizes a multi-objective cost function consisting of flight time, energy consumption, and collision avoidance while respecting the differential constraints of the vehicle is presented. Trajectories are generated using a variety of test environments and flight conditions to demonstrate that the inclusion of a high terrain map resolution, a temporal vehicle velocity, and a spatial wind vector field yields significant improvements in trajectory feasibility and energy economy when compared to trajectories generated using only two of these three elements.
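To make the planning idea concrete, below is a minimal sketch of a wavefront-expansion planner over a 2D occupancy grid with a cost mixing flight time and a wind-dependent energy term. Everything here is illustrative: the grid layout, the wind penalty table, and the weights w_time and w_energy stand in for the paper's cost function and are not taken from it.

import heapq

def wavefront_costs(grid, goal, wind, speed=1.0, w_time=1.0, w_energy=0.5):
    """Propagate a cost-to-go wavefront outward from the goal cell.

    grid[r][c] == 1 marks an obstacle (collision avoidance by exclusion);
    wind maps a cell to a headwind penalty, standing in for a wind field.
    """
    rows, cols = len(grid), len(grid[0])
    cost = {goal: 0.0}
    frontier = [(0.0, goal)]
    while frontier:
        c, cell = heapq.heappop(frontier)
        if c > cost.get(cell, float("inf")):
            continue                              # stale queue entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cell[0] + dr, cell[1] + dc)
            if not (0 <= nxt[0] < rows and 0 <= nxt[1] < cols):
                continue
            if grid[nxt[0]][nxt[1]] == 1:
                continue
            step = w_time * (1.0 / speed) + w_energy * (1.0 + wind.get(nxt, 0.0))
            if c + step < cost.get(nxt, float("inf")):
                cost[nxt] = c + step
                heapq.heappush(frontier, (c + step, nxt))
    return cost

def extract_path(cost, start):
    """Greedily descend the cost field from the start cell to the goal."""
    path, cur = [start], start
    while cost[cur] > 0.0:
        cur = min(((cur[0] + dr, cur[1] + dc)
                   for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))),
                  key=lambda n: cost.get(n, float("inf")))
        path.append(cur)
    return path

On a real airship, the per-step time and energy terms would come from the vehicle's velocity profile and the wind vector field at each cell, which is exactly where the paper's high-resolution inputs enter.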

Open Access Article
HexaMob—A Hybrid Modular Robotic Design for Implementing Biomimetic Structures
Robotics 2017, 6(4), 27; doi:10.3390/robotics6040027
Abstract
Modular robots are capable of forming primitive shapes such as lattice and chain structures, with the additional flexibility of distributed sensing. Biomimetic structures built from such modular units provide ease of replacement and reconfiguration in coordinated structures, transportation, and other real-life scenarios. Though research on employing modular robotic units to form biological organisms is at a nascent stage, modular robotic units are already capable of forming such sophisticated structures. The modular robotic designs proposed so far vary significantly in external structure, sensor-actuator mechanisms, docking and undocking interfaces, mobility techniques, coordinated structures, and locomotion, and each design addresses the challenges of modular robotics with a different strategy. This paper presents a novel modular wheeled robotic design, HexaMob, which provides four degrees of freedom on a single module (two for mobility and two for structural reconfiguration) with minimal usage of sensor-actuator assemblies. The crucial features of modular robotics, such as back-driving restriction, docking, and navigation, are addressed in the HexaMob design process. The proposed docking mechanism uses a vision sensor, enhancing docking as well as navigation capabilities in coordinated structures such as humanoid robots.

Open Access Article
Design of a Mobile Robot for Air Ducts Exploration
Robotics 2017, 6(4), 26; doi:10.3390/robotics6040026
Abstract
This work presents the solutions adopted for the design and implementation of an autonomous wheeled robot developed for the exploration and mapping of air ventilation ducts. The hardware is based on commercial off-the-shelf devices, including sensors, motors, processing devices, and interfaces. The mechanical chassis was designed from scratch to meet a trade-off between small size and the volume available to host the components. The software stack is based on the Robot Operating System (ROS). Special attention was dedicated to the design of the mobility strategy, which must take into account constraints and issues specific to the application: the relatively small size of the ducts, the need to detect and avoid possible holes in the duct floor and other unusual obstacles, and the unavailability of external reference frameworks for localization. The main contribution of this paper lies in the design, implementation, and experimentation of the overall system.

Open Access Article
A Novel Docking System for Modular Self-Reconfigurable Robots
Robotics 2017, 6(4), 25; doi:10.3390/robotics6040025
Abstract
Existing self-reconfigurable robots achieve connections and disconnections through a separate drive of the docking system. In this paper, we present a new docking system in which connections and disconnections are driven by the locomotion actuators, without the need for a separate drive, which reduces the weight and complexity of the modules. The self-reconfigurable robot consists of two types of fundamental modules, active and passive. The docking system forms two types of connections between the fundamental modules, and docking and undocking are achieved through simple control with little sensory feedback. This paper describes the design of the robotic modules, the docking system, the docking process, and the docking force analysis. An experiment is performed to demonstrate the self-reconfigurable robot with the docking system.

Open Access Article
The Thorvald II Agricultural Robotic System
Robotics 2017, 6(4), 24; doi:10.3390/robotics6040024
Abstract
This paper presents a novel and modular approach to agricultural robots. Food production is highly diverse in several aspects. Even farms that grow the same crops may differ in topology, infrastructure, production method, and so on. Modular robots help us adapt to this diversity, as they can quickly be configured for various farm environments. The robots presented in this paper are hardware-modular in the sense that they can be reconfigured to obtain the physical properties necessary to operate in different production systems (such as tunnels, greenhouses, and open fields), and their mechanical properties can be adapted to adjust track width, power requirements, ground clearance, load capacity, and so on. The robot’s software generalizes to the great variety of robot designs that can be realized by assembling the hardware modules in different configurations. The paper presents several novel ideas for agricultural robotics, as well as extensive field trials of several different versions of the Thorvald II platform.

Open Access Review
On the Development of Learning Control for Robotic Manipulators
Robotics 2017, 6(4), 23; doi:10.3390/robotics6040023
Abstract
Learning control for robotic manipulators has been developed over the past decade and, to the best of the authors’ knowledge, is still at an early stage of development; the authors believe it will become one of the most promising directions in the control of robotic manipulators. Learning control for robotic manipulators is mainly used to address friction at the joints of robotic mechanisms and other uncertainties that may exist in the dynamic models, which are very complex and may even be impossible to model mathematically. In this paper, the authors review and discuss learning control for robotic manipulators and illustrate several of its open issues. The review provides a general guideline for future research in learning control for robotic manipulators.

Open Access Review
Resilient Robots: Concept, Review, and Future Directions
Robotics 2017, 6(4), 22; doi:10.3390/robotics6040022
Abstract
This paper reviews recent developments in the emerging field of resilient robots and of related robots that share common concerns with them, such as self-reconfigurable robots. It addresses the identity of resilient robots by distinguishing the concept of resilience from other similar concepts, and it summarizes the strategies robots use to recover their original function. By illustrating issues in the design of current resilient robots’ control systems, physical architecture modules, and physical connection systems, the paper shows several possible solutions to facilitate the development of new and improved robots with higher resilience. The conclusion outlines several directions for the future of this field.

Open Access Feature Paper Article
Building a ROS-Based Testbed for Realistic Multi-Robot Simulation: Taking the Exploration as an Example
Robotics 2017, 6(3), 21; doi:10.3390/robotics6030021
Abstract
While the robotics community agrees that benchmarking is of high importance for objectively comparing different solutions, there are only a few, limited tools to support it. To address this issue in the context of multi-robot systems, we have defined a benchmarking process based on experimental designs, which aims to improve the reproducibility of experiments by making explicit all elements of a benchmark, such as parameters, measurements, and metrics. We have also developed a ROS (Robot Operating System)-based testbed with the goal of making it easy for users to validate, benchmark, and compare different algorithms, including coordination strategies. Our testbed uses the MORSE (Modular OpenRobots Simulation Engine) simulator for realistic simulation and a computer cluster for decentralized computation. In this paper, we present our testbed in detail, covering the architecture and infrastructure, the issues encountered in implementing the infrastructure, and the automation of the deployment. We also report a series of experiments on multi-robot exploration to demonstrate the capabilities of our testbed.

Open Access Article
Proposal of Novel Model for a 2 DOF Exoskeleton for Lower-Limb Rehabilitation
Robotics 2017, 6(3), 20; doi:10.3390/robotics6030020
Abstract
Nowadays, engineering works side by side with the medical sciences to design and create devices that help improve medical processes. Physiotherapy is one of the areas of medicine in which engineering is involved; several devices aimed at enhancing and assisting therapies are being studied and developed. Mechanical and electronics engineering, together with physiotherapy, are developing exoskeletons: electromechanical devices attached to limbs that can help the user move or correct the movement of the given limbs, providing automatic therapies with flexible and configurable programs to improve autonomy and fit the needs of each patient. Exoskeletons can enhance the effectiveness of physiotherapy and reduce patient rehabilitation time. As a contribution, this paper proposes a dynamic model for a two-degree-of-freedom (2 DOF) leg exoskeleton acting on the knee and ankle to treat people with partial disability in the lower limbs. The model has the advantage that it can be adapted to any person using the variables of mass and height, making it a flexible alternative for calculating the exoskeleton dynamics very quickly and adapting them easily to a child’s or young adult’s body. In addition, this paper includes the linearization of the model and an analysis of its observability and controllability, as a preliminary study for control strategy applications.
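Since the preliminary study described above boils down to rank tests on the linearized model, a minimal sketch of that check follows. The A, B, and C matrices are illustrative placeholders for a linearized knee-ankle model (states: two joint angles and two joint rates), not the matrices derived in the paper.

import numpy as np

# Illustrative linearized state-space model x_dot = A x + B u, y = C x.
# These numbers are placeholders, not the paper's derived dynamics.
A = np.array([[0, 1, 0, 0],
              [-15.0, -0.5, 2.0, 0.1],
              [0, 0, 0, 1],
              [3.0, 0.2, -20.0, -0.8]])
B = np.array([[0, 0],
              [4.0, 0],
              [0, 0],
              [0, 6.0]])            # one torque input per joint (knee, ankle)
C = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0]])        # measured outputs: the two joint angles

def ctrb(A, B):
    """Controllability matrix [B, AB, ..., A^(n-1)B]."""
    n = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(n)])

def obsv(A, C):
    """Observability matrix [C; CA; ...; CA^(n-1)]."""
    n = A.shape[0]
    return np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(n)])

n = A.shape[0]
print("controllable:", np.linalg.matrix_rank(ctrb(A, B)) == n)
print("observable:  ", np.linalg.matrix_rank(obsv(A, C)) == n)

Full rank of both matrices is what licenses the control strategies the abstract anticipates; the same check applies once the model is re-derived for a specific patient's mass and height.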

Open Access Article
Automated Remote Insect Surveillance at a Global Scale and the Internet of Things
Robotics 2017, 6(3), 19; doi:10.3390/robotics6030019
Abstract
The concept of remote insect surveillance at large spatial scales for many serious insect pests of agricultural and medical importance has been introduced in a series of our papers. We augment typical, low-cost plastic traps for many insect pests with optoelectronic sensors that guard the entrance of the trap to detect, time-stamp, and GPS-tag incoming insects and, in relevant cases, identify their species from their wingbeat. For every important crop pest, there are monitoring protocols to be followed to decide when to initiate a treatment procedure before a serious infestation occurs. Monitoring protocols are mainly based on specifically designed insect traps. Traditional insect monitoring is limited in scope: it is curtailed by its cost, requires intensive labor, is time consuming, and often needs an expert for sufficient accuracy, which can sometimes raise safety issues for humans. These disadvantages reduce the extent to which manual insect monitoring is applied, and therefore its accuracy, which finally results in significant crop loss due to damage caused by pests. With the term ‘surveillance’ we intend to push the monitoring idea to unprecedented levels of information extraction regarding the presence, time-stamped detection events, species identification, and population density of targeted insect pests. Insect counts, as well as environmental parameters that correlate with the insects’ population development, are wirelessly transmitted to the central monitoring agency in real time, where they are visualized and streamed to statistical methods that support pest control decisions. In this work, we emphasize how the traps can self-organize into networks that collectively report data at local, regional, country, continental, and global scales using the emerging technology of the Internet of Things (IoT). This research is necessarily interdisciplinary, falling at the intersection of entomology, optoelectronic engineering, data science, and crop science, and it encompasses the design and implementation of low-cost, low-power technology to help reduce quantitative and qualitative crop losses caused by many of the most significant agricultural pests. We argue that smart traps communicating through the IoT to report the level of the pest population in real time, from the field straight to a human-controlled agency, can in the very near future have a profound impact on the decision-making process in crop protection and will be disruptive of existing manual practices. In the present study, three cases are investigated: monitoring Rhynchophorus ferrugineus (Olivier) (Coleoptera: Curculionidae) using (a) a Picusan and (b) a Lindgren trap; and (c) monitoring various stored-grain beetle pests using the stored-grain pitfall trap. Our approach is very accurate, reaching 98–99% accuracy in automatic counts compared with the real number of insects detected in each type of trap.
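As an illustration of the reporting path from trap to agency, the snippet below sketches one plausible JSON report and a plain HTTPS upload. The endpoint URL, field names, and trap identifier are hypothetical; the paper does not prescribe this exact payload or transport.

import json
import time
import urllib.request

# Hypothetical report from one smart trap: an insect count plus environmental
# covariates that correlate with population development (see the abstract).
report = {
    "trap_id": "trap-0042",                      # hypothetical identifier
    "gps": {"lat": 38.246, "lon": 21.735},       # position recorded at tagging
    "timestamp": time.time(),
    "species": "Rhynchophorus ferrugineus",      # identified from wingbeat
    "count_since_last_report": 3,
    "temperature_c": 27.5,
    "humidity_pct": 61.0,
}

req = urllib.request.Request(
    "https://monitoring.example.org/api/trap-reports",   # placeholder endpoint
    data=json.dumps(report).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # would transmit the report in a real deployment

In a fleet, an identifier scheme built from region, site, and trap names would let the agency aggregate the same reports at field, country, or global scale.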

Open Access Review
Application of Augmented Reality and Robotic Technology in Broadcasting: A Survey
Robotics 2017, 6(3), 18; doi:10.3390/robotics6030018
Abstract
As an innovative technique, Augmented Reality (AR) has been gradually deployed in the broadcast, videography, and cinematography industries. Virtual graphics generated by AR are dynamic and are overlaid on the surface of the environment, so that the original appearance can be greatly enhanced in comparison with traditional broadcasting. In addition, AR enables broadcasters to interact with augmented virtual 3D models on a broadcasting scene in order to enhance the performance of broadcasting. Recently, advanced robotic technologies have been deployed in camera shooting systems to create robotic cameramen, so that the performance of AR broadcasting can be further improved; this development is highlighted in the paper.

Open Access Article
Trajectory Planning and Tracking Control of a Differential-Drive Mobile Robot in a Picture Drawing Application
Robotics 2017, 6(3), 17; doi:10.3390/robotics6030017
Abstract
This paper proposes a method for trajectory planning and control of a mobile robot for drawing pictures from images. The robot is an accurate differential-drive mobile platform controlled by a field-programmable gate array (FPGA) controller. By locating the tip of the pen away from the midpoint between the two wheels, we obtain an omnidirectional mobile platform, which permits a simple and effective trajectory control method. The reference trajectories are generated by line simplification and B-spline approximation of digitized input curves obtained from Canny’s edge-detection algorithm on a grayscale image. Experimental picture-drawing results show the advantage of the proposed method.
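A compact sketch of the reference-trajectory pipeline named in the abstract (Canny edges, line simplification, B-spline approximation) is given below, using OpenCV and SciPy. The thresholds, simplification tolerance, and smoothing factor are illustrative choices, not the paper's values.

import cv2
import numpy as np
from scipy.interpolate import splev, splprep

def image_to_trajectories(gray, n_samples=200):
    """Turn a grayscale image into smooth drawing trajectories."""
    edges = cv2.Canny(gray, 100, 200)                      # 1. edge detection
    contours, _ = cv2.findContours(edges, cv2.RETR_LIST,
                                   cv2.CHAIN_APPROX_NONE)
    trajectories = []
    for c in contours:
        if len(c) < 8:
            continue
        # 2. line simplification of the digitized curve
        simple = cv2.approxPolyDP(c, epsilon=2.0, closed=False)
        pts = simple.reshape(-1, 2).astype(float)
        if len(pts) < 4:
            continue
        # 3. cubic B-spline approximation of the simplified polyline
        tck, _ = splprep([pts[:, 0], pts[:, 1]], s=2.0, k=3)
        u = np.linspace(0.0, 1.0, n_samples)
        x, y = splev(u, tck)
        trajectories.append(np.column_stack([x, y]))
    return trajectories

Each sampled trajectory can then be handed to the tracking controller as a timed sequence of pen-tip positions.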

Open Access Article
Perception-Link Behavior Model: Supporting a Novel Operator Interface for a Customizable Anthropomorphic Telepresence Robot
Robotics 2017, 6(3), 16; doi:10.3390/robotics6030016
Abstract
A customizable anthropomorphic telepresence robot (CATR) is an emerging medium that may offer the highest degree of social presence among existing mediated communication media. Unfortunately, teleoperating a CATR raises problems that can deteriorate its gesture motion: disruption during decoupling, discontinuity due to unstable transmission, and jerkiness due to reactive collision avoidance. A review shows that none of the existing interfaces can fix all of these problems simultaneously. Hence, a novel framework with a perception-link behavior model (PLBM) is proposed. The PLBM adopts a distributed spatiotemporal representation for all of its input signals. Equipped with other components, the PLBM can solve the above problems, with some limitations. For instance, the PLBM can retrieve missing modalities from its experience during decoupling. Next, the PLBM can handle high drop rates in the network connection because it deals with gesture style rather than pose. For collision prevention, the PLBM can tune the incoming gesture style so that the CATR deliberately and smoothly avoids a collision. In summary, the PLBM-based framework is able to increase the user’s presence on a CATR by synthesizing expressive user gestures.

Open Access Article
Compressed Voxel-Based Mapping Using Unsupervised Learning
Robotics 2017, 6(3), 15; doi:10.3390/robotics6030015
Abstract
In order to deal with the scaling problem of volumetric map representations, we propose spatially local methods for high-ratio compression of 3D maps represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare PCA-derived low-dimensional bases with nonlinear auto-encoder networks. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as on the task of map-aided ego-motion estimation. We demonstrate that lossily reconstructed distance fields used as cost functions for ego-motion estimation can outperform the original maps in challenging scenarios from standard RGB-D (color plus depth) data sets, owing to the rejection of high-frequency noise content.
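To make the PCA variant concrete, here is a minimal sketch that learns a low-dimensional basis from flattened TSDF blocks and compresses each block to k coefficients. The block size and basis dimension are illustrative assumptions; the paper's exact pipeline (and its auto-encoder alternative) is not reproduced here.

import numpy as np

def fit_pca_basis(blocks, k=32):
    """Learn a k-dimensional basis from flattened TSDF blocks (n_blocks x d)."""
    mean = blocks.mean(axis=0)
    # Rows of vt are the principal directions of the centered block set.
    _, _, vt = np.linalg.svd(blocks - mean, full_matrices=False)
    return mean, vt[:k]                      # shapes (d,) and (k, d)

def compress(block, mean, basis):
    """Project one flattened block onto the basis: d floats -> k floats."""
    return basis @ (block - mean)

def decompress(code, mean, basis):
    """Lossy reconstruction of the block from its k-dimensional code."""
    return mean + basis.T @ code

# e.g. 16^3-voxel blocks (d = 4096) compressed to k = 32 coefficients give a
# 128:1 ratio before any further entropy coding of the codes themselves.

The low-pass character of such a truncated basis is also a plausible intuition for the paper's finding that lossy reconstructions can outperform the originals: high-frequency noise simply does not survive the projection.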

Open Access Article
Automated Assembly Using 3D and 2D Cameras
Robotics 2017, 6(3), 14; doi:10.3390/robotics6030014
Abstract
2D and 3D computer vision systems are frequently used in automated production to detect and determine the position of objects. Accuracy is important in the production industry, and computer vision systems require structured environments to function optimally. For 2D vision systems, a change in surfaces, lighting, or viewpoint angle can reduce the accuracy of a method, perhaps even to the degree that it becomes erroneous, while for 3D vision systems the accuracy depends mainly on the 3D laser sensors. Commercially available 3D cameras lack the precision found in high-grade 3D laser scanners and are therefore not suited to accurate measurements in industrial use. In this paper, we show that it is possible to identify and locate objects using a combination of 2D and 3D cameras. A rough estimate of the object pose is first found using a commercially available 3D camera. Then, a robotic arm with an eye-in-hand 2D camera is used to determine the pose accurately. We show that this increases the accuracy to <1 mm and <1°. This was demonstrated in a real industrial assembly task where high accuracy is required.

Open Access Article
Augmented Reality Guidance with Multimodality Imaging Data and Depth-Perceived Interaction for Robot-Assisted Surgery
Robotics 2017, 6(2), 13; doi:10.3390/robotics6020013
Abstract
Image-guided surgical procedures are challenged by single image modality, two-dimensional anatomical guidance, and non-intuitive human-machine interaction. The introduction of tablet-based augmented reality (AR) into surgical robots may help surgeons overcome these problems. In this paper, we propose and develop a robot-assisted surgical system with interactive surgical guidance using tablet-based AR with a Kinect sensor for three-dimensional (3D) localization of patient anatomical structures and intraoperative 3D surgical tool navigation. Depth data acquired from the Kinect sensor were visualized in cone-shaped layers for 3D AR-assisted navigation. Virtual visual cues generated by the tablet were overlaid on the images of the surgical field for spatial reference. We evaluated the proposed system, and the experimental results showed that the tablet-based visual guidance system could assist surgeons in locating internal organs, with errors between 1.74 and 2.96 mm. We also demonstrated that the system was able to provide mobile augmented guidance and interaction for surgical tool navigation.

Open Access Article
Bin-Dog: A Robotic Platform for Bin Management in Orchards
Robotics 2017, 6(2), 12; doi:10.3390/robotics6020012
Abstract
Bin management during the apple harvest season is an important activity for orchards. Typically, empty and full bins are handled by tractor-mounted forklifts or bin trailers in two separate trips. In order to simplify this work process and improve the work efficiency of bin management, the concept of a robotic bin-dog system is proposed in this study. This system is designed with a “go-over-the-bin” feature, which allows it to drive over bins between tree rows and complete the above process in one trip. To validate this system concept, a prototype and its control and navigation system were designed and built. Field tests were conducted in a commercial orchard to validate its key functionality in three tasks: headland turning, straight-line tracking between tree rows, and going over a bin. Tests of headland turning showed that the bin-dog followed a predefined path to align with an alleyway with lateral and orientation errors of 0.02 m and 1.5°. Tests of straight-line tracking showed that the bin-dog could successfully track the alleyway centerline at speeds up to 1.00 m·s−1 with an RMSE offset of 0.07 m. The navigation system also successfully guided the bin-dog over a bin at a speed of 0.60 m·s−1. The successful validation tests proved that the prototype achieves all of the desired functionality.

Open Access Article
Feasibility of Using the Optical Sensing Techniques for Early Detection of Huanglongbing in Citrus Seedlings
Robotics 2017, 6(2), 11; doi:10.3390/robotics6020011
Abstract
A vision sensor was introduced and tested for early detection of citrus Huanglongbing (HLB). This disease is caused by the bacterium Candidatus Liberibacter asiaticus (CLas) and is transmitted by the Asian citrus psyllid. HLB is a devastating disease that has exerted a significant impact on citrus yield and quality in Florida. Unfortunately, no cure has been reported for HLB. Starch accumulates in the chloroplasts of HLB-infected leaves, which causes the mottled blotchy green pattern. Starch rotates the polarization plane of light. A polarized imaging technique was used to detect the polarization rotation caused by the hyper-accumulation of starch as a pre-symptomatic indication of HLB in young seedlings. Citrus seedlings were grown in a room with controlled conditions and exposed to intensive feeding by CLas-positive psyllids for eight weeks. A quantitative polymerase chain reaction was employed to confirm the HLB status of the samples. Two datasets were acquired: the first one month after the exposure to psyllids, and the second two months later. The results showed that, with relatively unsophisticated imaging equipment, four levels of HLB infection could be detected with accuracies of 72%–81%. As expected, increasing the time interval between psyllid exposure and imaging increased the development of symptoms and, accordingly, improved the detection accuracy.

Open Access Article
Binaural Range Finding from Synthetic Aperture Computation as the Head is Turned
Robotics 2017, 6(2), 10; doi:10.3390/robotics6020010
Abstract
A solution to binaural direction finding described in Tamsett (Robotics 2017, 6(1), 3) is a synthetic aperture computation (SAC) performed as the head is turned while listening to a sound. A far-range approximation in that paper is relaxed in this one, and the method is extended to perform SAC as a function of range, thereby estimating the range to an acoustic source. An instantaneous angle λ (lambda) between the auditory axis and the direction to an acoustic source locates the source on a small circle of colatitude (a lambda circle) of a sphere symmetric about the auditory axis. As the head is turned, data over successive instantaneous lambda circles are integrated in a virtual field of audition from which the direction to an acoustic source can be inferred. Multiple sets of lambda circles generated as a function of range yield an optimal range at which the circles intersect to focus best at a point in a virtual three-dimensional field of audition, providing an estimate of range. A proof of concept is demonstrated using simulated experimental data. The method enables a binaural robot to estimate not only the direction but also the range to an acoustic source from sufficiently accurate measurements of arrival time/level differences at the antennae.
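The geometry behind the lambda circles can be sketched in a few lines. In the far-range approximation, an arrival-time difference τ gives cos λ = cτ/d for ear separation d; the near-field function below relaxes that by computing the exact path-length difference at a hypothesized range r. The ear separation, sound speed, and bisection solver are illustrative, and this is one simple reading of the relaxation, not necessarily the paper's exact formulation.

import numpy as np

C_SOUND = 343.0   # speed of sound in air, m/s
D_EARS = 0.18     # assumed inter-ear (antenna) separation, m

def lambda_far_field(tau):
    """Far-range approximation: cos(lambda) = c * tau / d."""
    return np.arccos(np.clip(C_SOUND * tau / D_EARS, -1.0, 1.0))

def lambda_at_range(tau, r):
    """Lambda for a hypothesized source range r, from the exact path-length
    difference between ears placed at +/- d/2 along the auditory axis."""
    def time_diff(lam):
        near = np.sqrt(r**2 + (D_EARS / 2) ** 2 - r * D_EARS * np.cos(lam))
        far = np.sqrt(r**2 + (D_EARS / 2) ** 2 + r * D_EARS * np.cos(lam))
        return (far - near) / C_SOUND
    lo, hi = 0.0, np.pi          # time_diff decreases monotonically with lambda
    for _ in range(60):          # bisection on the monotone function
        mid = 0.5 * (lo + hi)
        if time_diff(mid) > tau:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Sweeping r over a set of hypotheses and accumulating the resulting lambda circles in the virtual field of audition is then what lets the best-focused range be selected.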

Open Access Article
Visual Place Recognition for Autonomous Mobile Robots
Robotics 2017, 6(2), 9; doi:10.3390/robotics6020009
Abstract
Place recognition is an essential component of autonomous mobile robot navigation. It is used for loop-closure detection to maintain consistent maps, to localize the robot along a route, or in kidnapped-robot situations. Camera sensors provide rich visual information for this task. We compare different approaches for visual place recognition: holistic methods (visual compass and warping), signature-based methods (using Fourier coefficients or feature descriptors (able for binary-appearance loop-closure evaluation, ABLE)), and feature-based methods (fast appearance-based mapping, FabMap). As new contributions, we investigate whether warping, a successful visual homing method, is suitable for place recognition. In addition, we extend the well-known visual compass to use multiple scale planes, a concept also employed by warping. To achieve tolerance against changing illumination conditions, we examine the NSAD distance measure (normalized sum of absolute differences) on edge-filtered images. To reduce the impact of illumination changes on the distance values, we suggest computing ratios of image distances to normalize these values to a common range. We test all methods on multiple indoor databases, as well as on a small outdoor database, using images with constant or changing illumination. ROC (receiver operating characteristic) analysis and the metric distance between best-matching image pairs are used as evaluation measures. Most methods perform well under constant illumination conditions, but fail under changing illumination. The visual compass using the NSAD measure on edge-filtered images with multiple scale planes, while slower than the signature methods, performs best in the latter case.
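A small sketch of the illumination-tolerance ingredients named above follows: a Sobel edge filter, an NSAD-style distance, and the ratio normalization of distance values. The exact definitions in the paper may differ; in particular, the normalization in nsad() is one plausible reading of "normalized sum of absolute differences".

import numpy as np
from scipy import ndimage

def edge_filter(img):
    """Sobel gradient magnitude: a simple illumination-robust preprocessing."""
    f = img.astype(float)
    return np.hypot(ndimage.sobel(f, axis=1), ndimage.sobel(f, axis=0))

def nsad(a, b, eps=1e-9):
    """Sum of absolute differences, normalized by the total edge energy."""
    return np.abs(a - b).sum() / (np.abs(a).sum() + np.abs(b).sum() + eps)

def distance_ratios(distances):
    """Map raw image distances to ratios against the best match, so that
    values taken under different lighting share a common range."""
    d = np.asarray(distances, dtype=float)
    return d / (d.min() + 1e-9)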