Open Access Review
Application of Augmented Reality and Robotic Technology in Broadcasting: A Survey
Robotics 2017, 6(3), 18; doi:10.3390/robotics6030018
Abstract
As an innovative technique, Augmented Reality (AR) has gradually been deployed in the broadcast, videography and cinematography industries. Virtual graphics generated by AR are dynamic and are overlaid on the surface of the environment, so that the original appearance can be greatly enhanced in comparison with traditional broadcasting. In addition, AR enables broadcasters to interact with augmented virtual 3D models on a broadcasting scene in order to enhance the performance of broadcasting. Recently, advanced robotic technologies have been deployed in camera shooting systems to create robotic cameramen so that the performance of AR broadcasting can be further improved; this development is highlighted in the paper.
Open Access Article
Trajectory Planning and Tracking Control of a Differential-Drive Mobile Robot in a Picture Drawing Application
Robotics 2017, 6(3), 17; doi:10.3390/robotics6030017
Abstract
This paper proposes a method for trajectory planning and control of a mobile robot for drawing pictures from images. The robot is an accurate differential-drive mobile platform controlled by a field-programmable gate array (FPGA) controller. By not locating the tip of the pen at the midpoint between the two wheels, we are able to treat the platform as omnidirectional, thus enabling a simple and effective trajectory control method. The reference trajectories are generated by line simplification and B-spline approximation of digitized input curves obtained from Canny's edge-detection algorithm applied to a grayscale image. Experimental results for picture drawing show the advantage of the proposed method.
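As a rough illustration of the pipeline this abstract outlines (edge detection, line simplification, B-spline approximation), the sketch below uses OpenCV and SciPy; the file name, Canny thresholds, simplification epsilon and smoothing factor are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np
from scipy.interpolate import splev, splprep

# Sketch (not the authors' code): Canny edges -> Douglas-Peucker line
# simplification -> approximating cubic B-spline per contour.
gray = cv2.imread("input.png", cv2.IMREAD_GRAYSCALE)
edges = cv2.Canny(gray, 100, 200)

# OpenCV >= 4 returns (contours, hierarchy).
contours, _ = cv2.findContours(edges, cv2.RETR_LIST, cv2.CHAIN_APPROX_NONE)
trajectories = []
for c in contours:
    # Line simplification; epsilon trades fidelity against point count.
    simplified = cv2.approxPolyDP(c, 2.0, False).reshape(-1, 2)
    if len(simplified) < 4:          # a cubic spline needs at least 4 points
        continue
    # Approximating (not interpolating) B-spline over the simplified polyline.
    tck, _ = splprep([simplified[:, 0], simplified[:, 1]], s=2.0, k=3)
    u = np.linspace(0.0, 1.0, 200)
    x, y = splev(u, tck)
    trajectories.append(np.stack([x, y], axis=1))  # one pen-tip trajectory
```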
Open Access Article
Perception-Link Behavior Model: Supporting a Novel Operator Interface for a Customizable Anthropomorphic Telepresence Robot
Robotics 2017, 6(3), 16; doi:10.3390/robotics6030016
Abstract
A customizable anthropomorphic telepresence robot (CATR) is an emerging medium that might offer the highest degree of social presence among existing mediated communication media. Unfortunately, several problems with teleoperating a CATR can deteriorate its gesture motion: disruption during decoupling, discontinuity due to unstable transmission, and jerkiness due to reactive collision avoidance. Our review shows that none of the existing interfaces can fix all of these problems simultaneously. Hence, a novel framework with the perception-link behavior model (PLBM) is proposed. The PLBM adopts a distributed spatiotemporal representation for all of its input signals. Equipped with other components, the PLBM can solve the above problems, with some limitations. For instance, the PLBM can retrieve missing modalities from its experience during decoupling. Next, the PLBM can handle a high drop rate in the network connection because it deals with gesture style rather than pose. For collision prevention, the PLBM can tune the incoming gesture style so that the CATR deliberately and smoothly avoids a collision. In summary, the framework built around the PLBM is able to increase the user's presence on a CATR by synthesizing expressive user gestures.
Open Access Article
Compressed Voxel-Based Mapping Using Unsupervised Learning
Robotics 2017, 6(3), 15; doi:10.3390/robotics6030015
Abstract
In order to deal with the scaling problem of volumetric map representations, we propose spatially local methods for high-ratio compression of 3D maps represented as truncated signed distance fields. We show that these compressed maps can be used as meaningful descriptors for selective decompression in scenarios relevant to robotic applications. As compression methods, we compare PCA-derived low-dimensional bases with nonlinear auto-encoder networks. Selecting two application-oriented performance metrics, we evaluate the impact of different compression rates on reconstruction fidelity as well as on the task of map-aided ego-motion estimation. We demonstrate that lossily reconstructed distance fields used as cost functions for ego-motion estimation can outperform the original maps in challenging scenarios from standard RGB-D (color plus depth) data sets, owing to the rejection of high-frequency noise content.
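The PCA variant of such a compression scheme can be sketched in a few lines: fixed-size TSDF blocks are projected onto a low-dimensional basis learned from training data. The block size (16³ voxels), the number of retained components and the random stand-in data are our assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def fit_pca_basis(blocks, n_components=32):
    """blocks: (N, 4096) array of flattened 16x16x16 TSDF blocks."""
    mean = blocks.mean(axis=0)
    # SVD of the centered data matrix yields the principal directions.
    _, _, vt = np.linalg.svd(blocks - mean, full_matrices=False)
    return mean, vt[:n_components]          # (4096,), (k, 4096)

def compress(block, mean, basis):
    return basis @ (block - mean)           # k coefficients per block

def decompress(code, mean, basis):
    return basis.T @ code + mean            # lossy TSDF reconstruction

rng = np.random.default_rng(0)
train = rng.normal(size=(1000, 16**3))      # stand-in for real TSDF blocks
mean, basis = fit_pca_basis(train, n_components=32)
code = compress(train[0], mean, basis)      # 4096 -> 32 floats (128:1 ratio)
recon = decompress(code, mean, basis)
```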
Open Access Article
Automated Assembly Using 3D and 2D Cameras
Robotics 2017, 6(3), 14; doi:10.3390/robotics6030014
Abstract
2D and 3D computer vision systems are frequently used in automated production to detect and determine the position of objects. Accuracy is important in the production industry, and computer vision systems require structured environments to function optimally. For 2D vision systems, a change in surfaces, lighting or viewpoint angle can reduce the accuracy of a method, possibly to the point of making it erroneous, while for 3D vision systems, the accuracy mainly depends on the 3D laser sensors. Commercially available 3D cameras lack the precision found in high-grade 3D laser scanners and are therefore not suited for accurate measurements in industrial use. In this paper, we show that it is possible to identify and locate objects using a combination of 2D and 3D cameras. A rough estimate of the object pose is first found using a commercially available 3D camera. Then, a robotic arm with an eye-in-hand 2D camera is used to determine the pose accurately. We show that this increases the accuracy to below 1 mm and 1°. This was demonstrated in a real industrial assembly task where high accuracy is required.
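The coarse-to-fine idea can be illustrated with homogeneous transforms: a rough pose from the 3D camera is composed with a small correction measured by the eye-in-hand 2D camera. The frames, function names and numeric values below are hypothetical, for illustration only.

```python
import numpy as np

def transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Coarse object pose in the robot base frame (from the 3D camera); assumed values.
T_base_obj_coarse = transform(np.eye(3), np.array([0.50, 0.20, 0.10]))

# The arm positions the 2D camera over the object; 2D matching (e.g., against a
# fiducial or edge model) yields a small corrective transform in the object frame.
T_correction = transform(np.eye(3), np.array([0.003, -0.002, 0.001]))

# Refined pose: apply the fine correction to the coarse estimate.
T_base_obj = T_base_obj_coarse @ T_correction
print(T_base_obj[:3, 3])   # refined position, now corrected at the mm level
```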
Open Access Article
Augmented Reality Guidance with Multimodality Imaging Data and Depth-Perceived Interaction for Robot-Assisted Surgery
Robotics 2017, 6(2), 13; doi:10.3390/robotics6020013
Abstract
Image-guided surgical procedures are challenged by mono image modality, two-dimensional anatomical guidance and non-intuitive human-machine interaction. The introduction of tablet-based augmented reality (AR) into surgical robots may assist surgeons in overcoming these problems. In this paper, we propose and develop a robot-assisted surgical system with interactive surgical guidance using tablet-based AR with a Kinect sensor for three-dimensional (3D) localization of patient anatomical structures and intraoperative 3D surgical tool navigation. Depth data acquired from the Kinect sensor were visualized in cone-shaped layers for 3D AR-assisted navigation. Virtual visual cues generated by the tablet were overlaid on the images of the surgical field for spatial reference. We evaluated the proposed system, and the experimental results showed that the tablet-based visual guidance system could assist surgeons in locating internal organs, with errors between 1.74 and 2.96 mm. We also demonstrated that the system was able to provide mobile augmented guidance and interaction for surgical tool navigation.
Open Access Article
Bin-Dog: A Robotic Platform for Bin Management in Orchards
Robotics 2017, 6(2), 12; doi:10.3390/robotics6020012
Abstract
Bin management during the apple harvest season is an important activity for orchards. Typically, empty and full bins are handled by tractor-mounted forklifts or bin trailers in two separate trips. In order to simplify this work process and improve the efficiency of bin management, the concept of a robotic bin-dog system is proposed in this study. The system is designed with a “go-over-the-bin” feature, which allows it to drive over bins between tree rows and complete the above process in one trip. To validate this system concept, a prototype and its control and navigation system were designed and built. Field tests were conducted in a commercial orchard to validate its key functionalities in three tasks: headland turning, straight-line tracking between tree rows, and “go-over-the-bin.” Tests of headland turning showed that the bin-dog followed a predefined path to align with an alleyway with lateral and orientation errors of 0.02 m and 1.5°. Tests of straight-line tracking showed that the bin-dog could successfully track the alleyway centerline at speeds up to 1.00 m·s−1 with an RMSE offset of 0.07 m. The navigation system also successfully guided the bin-dog through the go-over-the-bin task at a speed of 0.60 m·s−1. These successful validation tests proved that the prototype achieves all of the desired functionality.
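A minimal sketch of straight-line (alleyway centerline) tracking for a differential-drive-style platform is shown below; the proportional steering law, gains and kinematic update are illustrative assumptions, not the bin-dog's actual controller.

```python
import numpy as np

def line_tracking_step(x, y, theta, v=1.0, k_y=1.5, k_theta=2.5, dt=0.05):
    """Track the line y = 0 while heading along +x; returns the next state."""
    omega = -k_y * y - k_theta * theta      # steer from offset and heading error
    x += v * np.cos(theta) * dt             # unicycle kinematic update
    y += v * np.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Start with the reported alignment errors (0.07 m offset, 1.5 deg heading).
state = (0.0, 0.07, np.deg2rad(1.5))
for _ in range(200):
    state = line_tracking_step(*state)
print(f"final lateral offset: {state[1]:.4f} m")   # converges toward the line
```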
Open Access Article
Feasibility of Using the Optical Sensing Techniques for Early Detection of Huanglongbing in Citrus Seedlings
Robotics 2017, 6(2), 11; doi:10.3390/robotics6020011
Abstract
A vision sensor was introduced and tested for early detection of citrus Huanglongbing (HLB). This disease is caused by the bacterium Candidatus Liberibacter asiaticus (CLas) and is transmitted by the Asian citrus psyllid. HLB is a devastating disease that has exerted a significant impact on citrus yield and quality in Florida. Unfortunately, no cure has been reported for HLB. Starch accumulates in the chloroplasts of HLB-infected leaves, which causes the mottled blotchy green pattern. Starch rotates the polarization plane of light. A polarized imaging technique was therefore used to detect the polarization rotation caused by the hyper-accumulation of starch as a pre-symptomatic indication of HLB in young seedlings. Citrus seedlings were grown in a room with controlled conditions and exposed to intensive feeding by CLas-positive psyllids for eight weeks. A quantitative polymerase chain reaction was employed to confirm the HLB status of samples. Two datasets were acquired: the first was created one month after the exposure to psyllids, and the second two months later. The results showed that, with relatively unsophisticated imaging equipment, four levels of HLB infection could be detected with accuracies of 72%–81%. As expected, increasing the time interval between psyllid exposure and imaging allowed symptoms to develop further and, accordingly, improved the detection accuracy.
Open Access Article
Binaural Range Finding from Synthetic Aperture Computation as the Head is Turned
Robotics 2017, 6(2), 10; doi:10.3390/robotics6020010
Abstract
A solution to binaural direction finding described in Tamsett (Robotics 2017, 6(1), 3) is a synthetic aperture computation (SAC) performed as the head is turned while listening to a sound. A far-range approximation in that paper is relaxed in this one, and the method is extended to perform SAC as a function of range, allowing the range to an acoustic source to be estimated. An instantaneous angle λ (lambda) between the auditory axis and the direction to an acoustic source locates the source on a small circle of colatitude (a lambda circle) of a sphere symmetric about the auditory axis. As the head is turned, data over successive instantaneous lambda circles are integrated in a virtual field of audition from which the direction to an acoustic source can be inferred. Multiple sets of lambda circles generated as a function of range yield an optimal range at which the circles intersect to best focus at a point in a virtual three-dimensional field of audition, providing an estimate of range. A proof of concept is demonstrated using simulated experimental data. The method enables a binaural robot to estimate not only direction but also range to an acoustic source from sufficiently accurate measurements of arrival time/level differences at the antennae.
Open Access Article
Visual Place Recognition for Autonomous Mobile Robots
Robotics 2017, 6(2), 9; doi:10.3390/robotics6020009
Abstract
Place recognition is an essential component of autonomous mobile robot navigation. It is used for loop-closure detection to maintain consistent maps, to localize the robot along a route, or in kidnapped-robot situations. Camera sensors provide rich visual information for this task. We compare different approaches for visual place recognition: holistic methods (visual compass and warping), signature-based methods (using Fourier coefficients or feature descriptors (able for binary-appearance loop-closure evaluation, ABLE)), and feature-based methods (fast appearance-based mapping, FabMap). As new contributions, we investigate whether warping, a successful visual homing method, is suitable for place recognition. In addition, we extend the well-known visual compass to use multiple scale planes, a concept also employed by warping. To achieve tolerance against changing illumination conditions, we examine the NSAD distance measure (normalized sum of absolute differences) on edge-filtered images. To reduce the impact of illumination changes on the distance values, we suggest computing ratios of image distances to normalize these values to a common range. We test all methods on multiple indoor databases, as well as a small outdoor database, using images with constant or changing illumination conditions. ROC analysis (receiver operating characteristic) and the metric distance between best-matching image pairs are used as evaluation measures. Most methods perform well under constant illumination conditions but fail under changing illumination. The visual compass using the NSAD measure on edge-filtered images with multiple scale planes, while slower than the signature methods, performs best in the latter case.
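One plausible reading of the NSAD measure on edge-filtered images is sketched below; the Sobel edge filter and the exact normalization are our assumptions, since the abstract does not spell them out.

```python
import numpy as np
from scipy import ndimage

def edge_filter(img):
    """Edge magnitude image (Sobel), reducing sensitivity to illumination."""
    f = img.astype(float)
    return np.hypot(ndimage.sobel(f, axis=1), ndimage.sobel(f, axis=0))

def nsad(a, b, eps=1e-9):
    """Normalized sum of absolute differences between edge-filtered images:
    0 for identical images, approaching 1 for unrelated ones (one plausible
    normalization; the paper's exact definition may differ)."""
    ea, eb = edge_filter(a), edge_filter(b)
    return np.abs(ea - eb).sum() / (np.abs(ea).sum() + np.abs(eb).sum() + eps)
```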
Open Access Article
Robot-Assisted Crowd Evacuation under Emergency Situations: A Survey
Robotics 2017, 6(2), 8; doi:10.3390/robotics6020008
Abstract
In emergency situations, robotic systems can play a key role and save human lives in recovery and evacuation operations. To realize such a potential, many scientific and technical challenges encountered during robotic search and rescue missions have to be addressed. This paper reviews current state-of-the-art robotic technologies that have been deployed in the simulation of crowd evacuation, including both macroscopic and microscopic models used in simulating a crowd. Existing work on crowd simulation is analyzed, and the robots used in crowd evacuation are introduced. Finally, the paper demonstrates how autonomous robots could be effectively deployed in disaster evacuation as well as search and rescue missions.
Open Access Article
An Optimal and Energy Efficient Multi-Sensor Collision-Free Path Planning Algorithm for a Mobile Robot in Dynamic Environments
Robotics 2017, 6(2), 7; doi:10.3390/robotics6020007
Abstract
There has been remarkable growth in many different real-time systems in the area of autonomous mobile robots. This paper focuses on the collaboration of efficient multi-sensor systems to create new optimal motion planning for mobile robots. The proposed algorithm is based on a new model that produces the shortest and most energy-efficient path from a given initial point to a goal point. The distance and time traveled, in addition to the consumed energy, have an asymptotic complexity of O(n log n), where n is the number of obstacles. Real-time experiments are performed to demonstrate the accuracy and energy efficiency of the proposed motion planning algorithm.
Open Access Article
A New Combined Vision Technique for Micro Aerial Vehicle Pose Estimation
Robotics 2017, 6(2), 6; doi:10.3390/robotics6020006
Abstract
In this work, a new combined vision technique (CVT) is proposed, comprehensively developed, and experimentally tested for stable, precise unmanned micro aerial vehicle (MAV) pose estimation. The CVT combines two measurement methods (multi-view and mono-view) based on different constraint conditions. These constraints are considered simultaneously within a particle filter framework to improve the accuracy of visual positioning. The framework, which is driven by an onboard inertial module, takes the positioning results from the visual system as measurements and updates the vehicle state. Moreover, experimental testing and data analysis have been carried out to verify the proposed algorithm, covering the multi-camera configuration, the design and assembly of the MAV system, and marker detection and matching between different views. Our results indicate that the combined vision technique is very attractive for high-performance MAV pose estimation.
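A toy version of the prediction/update cycle described above (inertial-driven propagation, then weighting by visual position fixes from the two measurement methods) might look as follows; the one-dimensional state, noise levels and measurement values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
particles = rng.normal(0.0, 0.5, size=1000)   # 1D position hypotheses
weights = np.ones(1000) / 1000

def predict(particles, accel, dt=0.02, sigma=0.01):
    # Inertial-driven propagation (here crudely reduced to position).
    return particles + 0.5 * accel * dt**2 + rng.normal(0, sigma, particles.shape)

def update(particles, weights, z, sigma_z):
    # Reweight particles by the Gaussian likelihood of a visual fix z.
    w = weights * np.exp(-0.5 * ((particles - z) / sigma_z) ** 2)
    return w / w.sum()

particles = predict(particles, accel=0.0)
weights = update(particles, weights, z=0.10, sigma_z=0.02)  # multi-view fix
weights = update(particles, weights, z=0.12, sigma_z=0.05)  # mono-view fix
estimate = np.sum(particles * weights)   # fused pose estimate (resampling omitted)
```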
Open Access Article
Visual Tracking of Deformation and Classification of Non-Rigid Objects with Robot Hand Probing
Robotics 2017, 6(1), 5; doi:10.3390/robotics6010005
Abstract
Performing tasks with a robot hand often requires complete knowledge of the manipulated object, including its properties (shape, rigidity, surface texture) and its location in the environment, in order to ensure safe and efficient manipulation. While well-established procedures exist for the manipulation of rigid objects, as well as several approaches for the manipulation of linear or planar deformable objects such as ropes or fabric, research addressing the characterization of deformable objects occupying a volume remains relatively limited. This paper proposes an approach for tracking the deformation of non-rigid objects under robot hand manipulation using RGB-D data. The purpose is to automatically classify deformable objects as rigid, elastic, plastic, or elasto-plastic, based on the material they are made of, and to support recognition of the category of such objects through a robotic probing process in order to enhance manipulation capabilities. The proposed approach advantageously combines classical color and depth image processing techniques with a novel combination of the fast level set method and a log-polar mapping of the visual data, to robustly detect and track the contour of a deformable object in an RGB-D data stream. Dynamic time warping is employed to characterize the object properties independently of the varying length of the tracked contour as the object deforms. The proposed solution achieves a classification rate over all categories of material of up to 98.3%. When integrated into the control loop of a robot hand, it can contribute to ensuring a stable grasp and a safe manipulation capability that preserves the physical integrity of the object.
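Dynamic time warping, used here to compare contour signatures of different lengths, can be sketched in a few lines; the sinusoidal test signatures below merely stand in for the paper's actual contour features.

```python
import numpy as np

def dtw_distance(s, t):
    """Classic O(n*m) dynamic time warping between two 1D series."""
    n, m = len(s), len(t)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(s[i - 1] - t[j - 1])
            # Best alignment ending at (i, j): match, insertion, or deletion.
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

a = np.sin(np.linspace(0, np.pi, 80))    # contour signature, 80 samples
b = np.sin(np.linspace(0, np.pi, 120))   # same shape, different length
print(dtw_distance(a, b))                 # small despite the length mismatch
```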
Open Access Article
Robot-Assisted Therapy for Learning and Social Interaction of Children with Autism Spectrum Disorder
Robotics 2017, 6(1), 4; doi:10.3390/robotics6010004
Abstract
This paper puts forward the potential of a parrot-inspired robot and an indirect teaching technique, the adapted model-rival method (AMRM), to help improve the learning and social interaction abilities of children with autism spectrum disorder. The AMRM was formulated by adapting two popular conventional approaches, namely the model-rival method and the label-training procedure. In our validation trials, we used a semi-autonomous parrot-inspired robot, called KiliRo, to simulate a set of autonomous behaviors. The proposed robot-assisted therapy using AMRM was pilot tested with nine children with autism spectrum disorder for five consecutive days in a clinical setting. We analyzed the facial expressions of the children when they interacted with KiliRo using an automated emotion recognition and classification system, the Oxford emotion API (Application Programming Interface). The results provided some indication that the children with autism spectrum disorder appeared attracted and happy to interact with the parrot-inspired robot. Short qualitative interviews with the children's parents, the pediatrician, and the child psychologist who participated in this pilot study also acknowledged that the proposed parrot-inspired robot and the AMRM may have merit in improving the learning and social interaction abilities of children with autism spectrum disorder.
Open Access Article
Synthetic Aperture Computation as the Head is Turned in Binaural Direction Finding
Robotics 2017, 6(1), 3; doi:10.3390/robotics6010003
Abstract
Binaural systems measure instantaneous time/level differences between acoustic signals received at the ears to determine angles λ between the auditory axis and the directions to acoustic sources. An angle λ locates a source on a small circle of colatitude (a lambda circle) on a sphere symmetric about the auditory axis. As the head is turned while listening to a sound, acoustic energy over successive instantaneous lambda circles is integrated in a virtual/subconscious field of audition. The directions in azimuth and elevation to maxima in integrated acoustic energy, or to points of intersection of lambda circles, are the directions to acoustic sources. This process in a robotic system, or in nature in a neural implementation equivalent to it, delivers its solutions to the aurally informed worldview. The process is analogous to migration applied to seismic profiler data, and to that in synthetic aperture radar/sonar systems. A slanting auditory axis, e.g., as possessed by species of owl, sweeps the surface of a cone as the head is turned about a single axis. Thus, the plane in which the auditory axis turns continuously changes, enabling robustly unambiguous directions to acoustic sources to be determined.
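The lambda-circle geometry admits a compact worked example: the interaural time difference (ITD) fixes λ via λ = arccos(c·Δt/d), and integrating candidate directions over successive head orientations focuses on the source direction. The inter-ear distance, the horizontal-plane simplification and the voting scheme below are our assumptions, not the paper's implementation.

```python
import numpy as np

C = 343.0    # speed of sound, m/s
D = 0.18     # inter-ear distance, m (assumed)

def lambda_from_itd(itd):
    """Angle between the auditory axis and the source direction, radians."""
    return np.arccos(np.clip(C * itd / D, -1.0, 1.0))

# One simulated ITD per head orientation, for a source at 40 deg azimuth;
# the auditory axis is taken perpendicular to the facing direction.
head_azimuths = np.deg2rad(np.arange(0, 90, 5))
true_src = np.deg2rad(40.0)
itds = D * np.cos(true_src - head_azimuths + np.pi / 2) / C

votes = np.zeros(360)
for head, itd in zip(head_azimuths, itds):
    lam = lambda_from_itd(itd)
    # In the horizontal plane a lambda circle reduces to two candidate
    # azimuths, mirror-symmetric about the auditory axis; vote for both.
    for cand in (head - np.pi / 2 + lam, head - np.pi / 2 - lam):
        votes[int(np.degrees(cand) % 360)] += 1

print(np.argmax(votes))   # peaks near 40: the circles intersect at the source
```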
Open Access Article
Experimental and Simulation-Based Investigation of Polycentric Motion of an Inherent Compliant Pneumatic Bending Actuator with Skewed Rotary Elastic Chambers
Robotics 2017, 6(1), 2; doi:10.3390/robotics6010002
Abstract
To offer functionality that cannot be found in traditional rigid robots, compliant actuators are being developed worldwide for a variety of applications, especially human-robot interaction. Pneumatic bending actuators are a special kind of such actuators. Due to the absence of fixed mechanical axes and their soft behavior, these actuators generally possess a polycentric motion ability. This can be very useful for providing implicit self-alignment to human joint axes in exoskeleton-like rehabilitation devices. As a possible realization, a novel bending actuator (BA) was developed using patented pneumatic skewed rotary elastic chambers (sREC). To analyze the actuator's self-alignment properties, knowledge about the motion of this bending actuator type, the so-called skewed rotary elastic chambers bending actuator (sRECBA), is of high interest, and this paper presents experimental and simulation-based kinematic investigations. First, to describe the actuator motion, the finite helical axes (FHA) of basic actuator elements are determined using a three-dimensional (3D) camera system. Afterwards, a simplified two-dimensional (2D) kinematic simulation model based on a four-bar linkage is developed, and its motion is compared to the experimental data by calculating the instantaneous center of rotation (ICR). The equivalent kinematic model of the sRECBA is realized using a series of four-bar linkages, and the resulting ICR is analyzed in simulation. Finally, the FHA of the sRECBA are determined and analyzed for three different specific motions. The results show that the actuator's FHA adapt to the different motions performed, and it can be assumed that implicit self-alignment to the polycentric motion of the human joint axes will be provided.
Open Access Editorial
Acknowledgement to Reviewers of Robotics in 2016
Robotics 2017, 6(1), 1; doi:10.3390/robotics6010001
Abstract
The editors of Robotics would like to express their sincere gratitude to the following reviewers for assessing manuscripts in 2016.
Open Access Article
Complete Coverage Path Planning for a Multi-UAV Response System in Post-Earthquake Assessment
Robotics 2016, 5(4), 26; doi:10.3390/robotics5040026
Abstract
This paper presents a post-earthquake response system for rapid damage assessment. In this system, multiple Unmanned Aerial Vehicles (UAVs) are deployed to collect images from the earthquake site and create a response map for extracting useful information. The approach is an extension of the well-known coverage path planning (CPP) problem, based on grid-pattern map decomposition. In addition to some linear strengthening techniques, two mathematical formulations, a 4-index and a 5-index model, are proposed and coded in GAMS (Cplex solver). They are tested on a number of problems, and the results show that the 5-index model outperforms the 4-index model. Moreover, the proposed system can be significantly improved by solver-generated cuts, additional constraints, and variable branching priority extensions.
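As a baseline illustration of grid-based coverage underlying such CPP formulations, a single-vehicle boustrophedon (back-and-forth) sweep over the decomposed grid can be sketched as follows; the paper's MILP models additionally handle assignment of cells to multiple UAVs, which this sketch omits.

```python
def boustrophedon_path(rows, cols):
    """Return the (row, col) grid cells of a back-and-forth coverage sweep."""
    path = []
    for r in range(rows):
        # Alternate sweep direction on each row to avoid dead travel.
        cells = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
        path.extend((r, c) for c in cells)
    return path

print(boustrophedon_path(3, 4))
# [(0,0), (0,1), (0,2), (0,3), (1,3), (1,2), (1,1), (1,0), (2,0), ...]
```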
Open Access Article
Improving Robot Mobility by Combining Downward-Looking and Frontal Cameras
Robotics 2016, 5(4), 25; doi:10.3390/robotics5040025
Abstract
This paper presents a novel attempt to combine a downward-looking camera and a forward-looking camera for terrain classification in the field of off-road mobile robots. The first camera is employed to identify the terrain beneath the robot. This information is then used to improve the classification of the forthcoming terrain acquired from the frontal camera. This research also shows the usefulness of the Gist descriptor for terrain classification. Physical experiments conducted in different quasi-planar terrains under different lighting conditions confirm the satisfactory performance of this approach in comparison with a simple color-based classifier that uses only frontal images. Our proposal substantially reduces the misclassification rate of the color-based classifier (∼10% versus ∼20%).