Special Issue "Vision-Based Sensors in Field Robotics"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (1 October 2016)

Special Issue Editors

Guest Editor
Dr. Gabriel Oliver-Codina

Department of Mathematics and Computer Science, Universitat de les Illes Balears, Crta. Valldemossa km. 7,5, E-07122 Palma de Mallorca, Spain
Interests: visual navigation; underwater imaging; 3D mapping; stereo SLAM
Guest Editor
Dr. Nuno Gracias

Departament d'Arquitectura i Tecnologia de Computadors, Universitat de Girona, Edifici P-IV, Campus de Montilivi, 17071 Girona, Spain
Interests: computer vision; underwater robotics
Guest Editor
Dr. Antonio M. López

Department of Computer Science at Universitat Autònoma de Barcelona (UAB), and Computer Vision Center (CVC) at UAB, Barcelona, Spain
Interests: object detection; semantic segmentation; autonomous driving

Special Issue Information

Dear Colleagues,

Vision-based sensing is widely used in many robotic and automation applications, including localization, mapping, object recognition, guidance or obstacle avoidance, among others. Nowadays, mobile robots are continuously being adapted to carry out a wider range of applications in progressively more demanding environments. Quite often, vision plays an essential role to make these challenging tasks possible, although it can be fused or reinforced with other sensing modalities.

This Special Issue aims to bring together novel solutions related to sensing and acting in field robotics applications. We are particularly interested in manuscripts that thoroughly describe new vision-based systems for highly unstructured and dynamic environments, as well as innovative and efficient methods to process the data gathered in these scenarios. Both original research articles and reviews are welcome.

Original research papers should not merely process information from public datasets; they should describe complete solutions to specific field robotics applications, including sensor systems, fundamental methods and experimental results. Alternatively, manuscripts can focus on presenting new (annotated) datasets gathered by novel vision-based sensors and used in field robotics applications, thus contributing to future benchmarking.

Reviews presenting an analytical, up-to-date overview of the state of the art are also appropriate, provided they incorporate quantitative and qualitative scoring of the surveyed solutions using publicly available data.

Robotic solutions, including, but not limited to, the following field applications, are encouraged:

  • Aerial
  • Agriculture
  • Forestry
  • Marine
  • Mining
  • Off-road
  • Planetary
  • Space
  • Underwater

If you have suggestions that you would like to discuss beforehand, please feel free to contact us. We look forward to your participation in this Special Issue.

Dr. Gabriel Oliver-Codina
Dr. Nuno Gracias
Dr. Antonio M. López
Guest Editors

 

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website and then using the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Unmanned aerial vehicles
  • Space and planetary robotics
  • Surface and underwater autonomous vehicles
  • Defense and security robotized systems
  • Demining and clearance of unexploded ordnance
  • Image-based localization in unstructured and dynamic scenarios
  • Multi-sensor integration
  • Off-road autonomous driving
  • Search and rescue robots
  • Mining and subterranean robots
  • Field operational tests of autonomous vehicles

Published Papers (17 papers)


Research

Open Access Article: Vinobot and Vinoculer: Two Robotic Platforms for High-Throughput Field Phenotyping
Sensors 2017, 17(1), 214; doi:10.3390/s17010214
Received: 24 October 2016 / Revised: 26 December 2016 / Accepted: 13 January 2017 / Published: 23 January 2017
Abstract
In this paper, a new robotic architecture for plant phenotyping is introduced. The architecture consists of two robotic platforms: an autonomous ground vehicle (Vinobot) and a mobile observation tower (Vinoculer). The ground vehicle collects data from individual plants, while the observation tower oversees an entire field, identifying specific plants for further inspection by the Vinobot. The advantage of this architecture is threefold: first, it allows the system to inspect large areas of a field at any time, day or night, while identifying specific regions affected by biotic and/or abiotic stresses; second, it provides high-throughput plant phenotyping in the field by either comprehensive or selective acquisition of accurate and detailed data from groups of plants or individual plants; and third, it eliminates the need for expensive and cumbersome aerial vehicles or similarly expensive and confined field platforms. Preliminary results from our algorithms for data collection and 3D image processing, together with data analysis and comparison against hand-collected phenotype data, demonstrate that the proposed architecture is cost-effective, reliable, versatile and extendable.
(This article belongs to the Special Issue Vision-Based Sensors in Field Robotics)

Open Access Article: A Multi-Sensor Fusion MAV State Estimation from Long-Range Stereo, IMU, GPS and Barometric Sensors
Sensors 2017, 17(1), 11; doi:10.3390/s17010011
Received: 1 October 2016 / Revised: 10 December 2016 / Accepted: 16 December 2016 / Published: 22 December 2016
Abstract
State estimation is the most critical capability for MAV (Micro-Aerial Vehicle) localization, autonomous obstacle avoidance, robust flight control and 3D environmental mapping. There are three main challenges for MAV state estimation: (1) it must cope with aggressive 6-DOF (Degree of Freedom) motion; (2) it should be robust to intermittent GPS (Global Positioning System), or even GPS-denied, situations; (3) it should work well for both low- and high-altitude flight. In this paper, we present a state estimation technique that fuses long-range stereo visual odometry, GPS, barometric and IMU (Inertial Measurement Unit) measurements. The estimation system has two main parts. The first is a stochastic cloning EKF (Extended Kalman Filter) estimator that loosely fuses both absolute state measurements (GPS, barometer) and relative state measurements (IMU, visual odometry); it is derived and discussed in detail. The second is a long-range stereo visual odometry, proposed for high-altitude MAV odometry calculation using both multi-view stereo triangulation and a multi-view stereo inverse depth filter. The odometry takes the EKF information (IMU integral) for robust camera pose tracking and image feature matching, and the stereo odometry output serves as the relative measurement for the update of the state estimate. Experimental results on a benchmark dataset and our real flight dataset show the effectiveness of the proposed state estimation system, especially for aggressive motion, intermittent GPS and high-altitude MAV flight.
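The loosely-coupled fusion idea can be illustrated with a toy one-dimensional Kalman filter that augments the state with a cloned past position, in the spirit of stochastic cloning, so that a relative (odometry-style) displacement and an absolute (GPS-style) fix can both be fused. All noise values and measurements below are invented for illustration; this is a sketch of the general technique, not the paper's estimator.

```python
import numpy as np

# State: [current position x, cloned past position x_c].
# An absolute measurement observes x; a relative one observes x - x_c.
x = np.array([0.0, 0.0])
P = np.eye(2)                       # state covariance

def predict(x, P, u, q=0.1):
    """Propagate the live position by a control/IMU increment u."""
    x = x.copy(); x[0] += u
    P = P.copy(); P[0, 0] += q      # process noise only on the live state
    return x, P

def update(x, P, z, H, r):
    """Standard Kalman measurement update for scalar z = H x + noise."""
    S = H @ P @ H.T + r             # innovation covariance (1x1)
    K = (P @ H.T) / S               # Kalman gain (2x1)
    x = x + K.flatten() * (z - H @ x)
    P = (np.eye(2) - np.outer(K, H)) @ P
    return x, P

H_abs = np.array([[1.0, 0.0]])      # absolute fix: GPS-like position
H_rel = np.array([[1.0, -1.0]])     # relative: displacement since the clone

x, P = predict(x, P, u=1.0)
x, P = update(x, P, z=1.05, H=H_abs, r=0.5)   # fuse absolute measurement
x, P = update(x, P, z=0.95, H=H_rel, r=0.2)   # fuse odometry displacement
print(round(x[0], 3))               # fused estimate near 1.0
```

The cloned entry is what lets a relative measurement be expressed linearly in the state; the full estimator in the paper does the same over 6-DOF poses.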

Open Access Article: Vision-Based Corrosion Detection Assisted by a Micro-Aerial Vehicle in a Vessel Inspection Application
Sensors 2016, 16(12), 2118; doi:10.3390/s16122118
Received: 2 October 2016 / Revised: 22 November 2016 / Accepted: 7 December 2016 / Published: 14 December 2016
Abstract
Vessel maintenance requires periodic visual inspection of the hull in order to detect typical defects of steel structures such as, among others, coating breakdown and corrosion. These inspections are typically performed by well-trained surveyors at great cost because of the need to provide access means (e.g., scaffolding and/or cherry pickers) that allow the inspector to be at arm's reach from the structure under inspection. This paper describes a defect detection approach comprising a micro-aerial vehicle, used to collect images of the surfaces under inspection, particularly in remote areas where the surveyor has no visual access, and a coating breakdown/corrosion detector based on a three-layer feed-forward artificial neural network. As discussed in the paper, the success of the inspection process depends not only on the defect detection software but also on a number of assistance functions provided by the control architecture of the aerial platform, whose aim is to improve picture quality. Both aspects of the work are described, together with the classification performance attained.

Open Access Article: A Height Estimation Approach for Terrain Following Flights from Monocular Vision
Sensors 2016, 16(12), 2071; doi:10.3390/s16122071
Received: 5 October 2016 / Revised: 22 November 2016 / Accepted: 29 November 2016 / Published: 6 December 2016
Abstract
In this paper, we present a monocular vision-based height estimation algorithm for terrain-following flights. The impressive growth of Unmanned Aerial Vehicle (UAV) usage, notably in mapping applications, will soon require new technologies that enable these systems to better perceive their surroundings. Specifically, we chose to tackle the terrain-following problem, as it remains unresolved for consumer-available systems. Virtually every mapping aircraft carries a camera; we therefore exploit this presently available hardware to extract height information for terrain-following flights. The proposed methodology uses optical flow to track features in videos obtained by the UAV, together with the vehicle's motion information, to estimate the flying height. To determine whether a height estimate is reliable, we trained a decision tree that takes the optical-flow information as input and classifies whether the output is trustworthy. The classifier achieved accuracies of 80% for positives and 90% for negatives, while the height estimation algorithm also presented good accuracy.
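The geometry underlying such monocular height estimation can be sketched as follows: for a downward-facing camera translating over roughly flat terrain, the image flow of ground features is approximately flow = f·v/h (pixels per second), so height follows by inversion. This shows only the core relation, assuming the feature tracking and the paper's decision-tree reliability check are handled elsewhere; the function name and numbers are illustrative.

```python
def height_from_flow(flow_px_per_s, speed_m_per_s, focal_px):
    """Estimate flying height (m) over flat ground from the average
    optical-flow magnitude of ground features (nadir camera)."""
    if flow_px_per_s <= 0:
        raise ValueError("flow must be positive")
    return focal_px * speed_m_per_s / flow_px_per_s

# Example: f = 800 px, v = 10 m/s, observed flow = 200 px/s -> h = 40 m.
print(height_from_flow(200.0, 10.0, 800.0))  # 40.0
```

In practice the flow estimate is noisy, which is why a reliability classifier on top of the raw estimate, as the paper proposes, is needed.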

Open Access Article: Towards Camera-LIDAR Fusion-Based Terrain Modelling for Planetary Surfaces: Review and Analysis
Sensors 2016, 16(11), 1952; doi:10.3390/s16111952
Received: 31 August 2016 / Revised: 9 November 2016 / Accepted: 10 November 2016 / Published: 20 November 2016
Abstract
In recent decades, terrain modelling and reconstruction techniques for precise short- and long-distance autonomous navigation, localisation and mapping have attracted increasing research interest within field robotics. One of the most challenging applications is autonomous planetary exploration using mobile robots. Rovers deployed to explore extraterrestrial surfaces are required to perceive and model the environment with little or no intervention from the ground station. To date, stereopsis represents the state-of-the-art method and can achieve short-distance planetary surface modelling. However, future space missions will require scene reconstruction at greater distance, fidelity and feature complexity, potentially using other sensors such as Light Detection And Ranging (LIDAR). LIDAR has been extensively exploited for target detection, identification and depth estimation in terrestrial robotics, but is still under development as a viable technology for space robotics. This paper first reviews current methods for scene reconstruction and terrain modelling using cameras in planetary robotics and LIDARs in terrestrial robotics; it then proposes camera-LIDAR fusion as a feasible technique to overcome the limitations of either individual sensor for planetary exploration. A comprehensive analysis is presented to demonstrate the advantages of camera-LIDAR fusion in terms of range, fidelity, accuracy and computation.

Open Access Article: 3D Scene Reconstruction Using Omnidirectional Vision and LiDAR: A Hybrid Approach
Sensors 2016, 16(11), 1923; doi:10.3390/s16111923
Received: 13 September 2016 / Revised: 4 November 2016 / Accepted: 8 November 2016 / Published: 16 November 2016
Abstract
In this paper, we propose a novel approach to obtaining accurate 3D reconstructions of large-scale environments by means of a mobile acquisition platform. The system incorporates a Velodyne LiDAR scanner as well as a Point Grey Ladybug panoramic camera system. It was designed with genericity in mind and hence makes no assumptions about the scene or the sensor set-up. The main novelty of this work is that the proposed LiDAR mapping approach deals explicitly with the inhomogeneous density of the point clouds produced by LiDAR scanners. To this end, we keep track of a global 3D map of the environment, which is continuously improved and refined by means of a surface reconstruction technique. Moreover, we perform surface analysis on consecutively generated point clouds in order to ensure accurate alignment with the global 3D map. To cope with drift, the system incorporates loop closure, determining the pose error and propagating it back through the pose graph. Our algorithm was exhaustively tested on data captured at a conference building, a university campus and an industrial site of a chemical company. Experiments demonstrate that it is capable of generating highly accurate 3D maps in very challenging environments: the average distance between corresponding point pairs in the ground truth and the estimated point cloud is approximately one centimetre for an area covering roughly 4000 m². To prove the genericity of the system, it was also tested on the well-known KITTI vision benchmark. The results show that our approach competes with state-of-the-art methods without making any additional assumptions.

Open Access Article: Image Based Mango Fruit Detection, Localisation and Yield Estimation Using Multiple View Geometry
Sensors 2016, 16(11), 1915; doi:10.3390/s16111915
Received: 14 October 2016 / Revised: 9 November 2016 / Accepted: 9 November 2016 / Published: 15 November 2016
Abstract
This paper presents a novel multi-sensor framework to efficiently identify, track, localise and map every piece of fruit in a commercial mango orchard. A multiple-viewpoint approach is used to solve the problem of occlusion, thus avoiding the need for labour-intensive field calibration to estimate actual yield. Fruit are detected in images using a state-of-the-art Faster R-CNN detector, and pair-wise correspondences are established between images using trajectory data provided by a navigation system. A novel LiDAR component automatically generates image masks for each canopy, allowing each fruit to be associated with the corresponding tree. The tracked fruit are triangulated to locate them in 3D, enabling a number of spatial statistics per tree, row or orchard block. A total of 522 trees and 71,609 mangoes were scanned on a Calypso mango orchard near Bundaberg, Queensland, Australia, with 16 trees counted by hand for validation, both on the tree and after harvest. The results show that single-, dual- and multi-view methods can all provide precise yield estimates, but only the proposed multi-view approach can do so without calibration, with an error rate of only 1.36% for individual trees.
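The triangulation step of such a multi-view pipeline can be sketched with standard linear (DLT) triangulation of one detection from two calibrated views; the projection matrices and point below are toy values, not the orchard system's calibration.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: (u, v) observations."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)      # null vector of A is the homogeneous point
    X = Vt[-1]
    return X[:3] / X[3]              # dehomogenise

# Two axis-aligned cameras one metre apart along x, identity intrinsics.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
x1 = X_true[:2] / X_true[2]                   # projection in camera 1
x2 = (X_true[:2] - [1.0, 0.0]) / X_true[2]    # projection in camera 2
print(np.allclose(triangulate(P1, P2, x1, x2), X_true))  # True
```

With noisy detections the same least-squares machinery returns the point minimising the algebraic error, which is why multiple viewpoints reduce occlusion-induced counting errors.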

Open Access Article: DeepAnomaly: Combining Background Subtraction and Deep Learning for Detecting Obstacles and Anomalies in an Agricultural Field
Sensors 2016, 16(11), 1904; doi:10.3390/s16111904
Received: 15 September 2016 / Revised: 26 October 2016 / Accepted: 7 November 2016 / Published: 11 November 2016
Abstract
Convolutional neural network (CNN)-based systems are increasingly used in autonomous vehicles for detecting obstacles. CNN-based object detection and per-pixel classification (semantic segmentation) algorithms are trained to detect and classify a predefined set of object types. These algorithms have difficulty detecting distant and heavily occluded objects and are, by definition, not capable of detecting unknown object types or unusual scenarios. The visual characteristics of an agricultural field are homogeneous, and obstacles such as people and animals occur rarely and are distinct in appearance from the field. This paper introduces DeepAnomaly, an algorithm combining deep learning and anomaly detection that exploits the homogeneous characteristics of a field. We demonstrate DeepAnomaly as a fast, state-of-the-art detector for obstacles that are distant, heavily occluded or unknown. DeepAnomaly is compared to state-of-the-art obstacle detectors, including "Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks" (RCNN). In a human-detection test case, we demonstrate that DeepAnomaly detects humans at longer ranges (45–90 m) than RCNN, while RCNN performs similarly at short range (0–30 m). However, DeepAnomaly has far fewer model parameters and a 7.28-times faster processing time per image (25 ms vs. 182 ms). Unlike most CNN-based methods, its high accuracy, low computation time and low memory footprint make it suitable for a real-time system running on an embedded GPU (Graphics Processing Unit).

Open Access Article: Dicotyledon Weed Quantification Algorithm for Selective Herbicide Application in Maize Crops
Sensors 2016, 16(11), 1848; doi:10.3390/s16111848
Received: 11 August 2016 / Revised: 17 October 2016 / Accepted: 26 October 2016 / Published: 4 November 2016
Abstract
Stricter legislation within the European Union regulating herbicides that are prone to leaching places a greater economic burden on the agricultural industry through taxation, which has prompted research into reducing herbicide usage. High-resolution images from digital cameras support the study of plant characteristics and can be used to analyse shape and texture for weed identification. Instead of detecting weed patches, weed density can be estimated at a sub-patch level, through which even the identification of a single plant is possible. The aim of this study is to adapt the monocot and dicot coverage ratio vision (MoDiCoVi) algorithm to estimate dicotyledon leaf cover, perform grid spraying in real time, and present initial results on potential herbicide savings in maize. The authors designed and executed an automated, large-scale field trial supported by the Armadillo autonomous tool-carrier robot. The field trial consisted of 299 maize plots; half of the plots (parcels) were sown with additional weeds, while the other half contained only naturally occurring weeds. The in-situ evaluation showed that, compared to conventional broadcast spraying, the proposed method can reduce herbicide usage by 65% without measurable loss in biological effect.

Open Access Article: Pixel-Level and Robust Vibration Source Sensing in High-Frame-Rate Video Analysis
Sensors 2016, 16(11), 1842; doi:10.3390/s16111842
Received: 3 August 2016 / Revised: 13 October 2016 / Accepted: 26 October 2016 / Published: 2 November 2016
Abstract
We investigate the effect of appearance variations on the detectability of vibration feature extraction with pixel-level digital filters for high-frame-rate videos. In particular, we consider robust vibrating-object tracking, which is clearly different from conventional appearance-based object tracking with spatial pattern recognition in a high-quality image region of a certain size. For 512 × 512 videos of a rotating fan located at different positions and orientations and captured at 2000 frames per second with different lens settings, we verify how many pixels are extracted as vibrating regions by the pixel-level digital filters. The effectiveness of dynamics-based vibration features is demonstrated by examining the robustness against changes in the aperture size and focal condition of the camera lens, the apparent size and orientation of the object being tracked, and its rotational frequency, as well as the complexity and movement of background scenes. Tracking experiments for a flying multicopter with rotating propellers are also described to verify the robustness of localization under complex imaging conditions in outdoor scenarios.
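The idea of pixel-level vibration sensing can be sketched by treating each pixel's intensity over time as a signal and inspecting its spectrum. The sketch below uses a batch FFT rather than the streaming pixel-level digital filters of the paper, and the frame rate, clip length, target frequency and threshold are all invented for illustration.

```python
import numpy as np

fps, n_frames, f_target = 2000, 256, 120.0    # frame rate, clip length, Hz
t = np.arange(n_frames) / fps

# Synthetic 8x8 clip: noise everywhere, a 120 Hz vibration in a 2x2 patch.
video = np.random.default_rng(0).normal(0.0, 0.05, (n_frames, 8, 8))
video[:, 2:4, 2:4] += np.sin(2 * np.pi * f_target * t)[:, None, None]

# Per-pixel amplitude spectrum along time; find each pixel's dominant
# frequency, skipping the DC bin, and flag pixels vibrating near the target.
spec = np.abs(np.fft.rfft(video, axis=0))
freqs = np.fft.rfftfreq(n_frames, d=1.0 / fps)
dominant = freqs[np.argmax(spec[1:], axis=0) + 1]
vibrating = np.abs(dominant - f_target) < 10.0

print(bool(vibrating[2:4, 2:4].all()))        # True: the patch is detected
```

A streaming digital bandpass filter, as used in the paper, computes an equivalent per-pixel statistic frame by frame, which is what makes the method viable at 2000 fps.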

Open Access Article: I-AUV Docking and Panel Intervention at Sea
Sensors 2016, 16(10), 1673; doi:10.3390/s16101673
Received: 27 July 2016 / Revised: 28 September 2016 / Accepted: 30 September 2016 / Published: 12 October 2016
Abstract
The use of commercially available autonomous underwater vehicles (AUVs) has increased during the last fifteen years. While they are mainly used for routine survey missions, there is a set of applications that can currently only be addressed by manned submersibles or work-class remotely operated vehicles (ROVs) equipped with teleoperated arms: intervention applications. To allow these heavy, human-operated vehicles to perform intervention tasks, underwater structures such as observatory facilities, subsea panels and oil-well Christmas trees have been adapted to make them more robust and easier to operate. The Spanish-funded TRITON project proposes the use of a lightweight intervention AUV (I-AUV) to carry out intervention applications, simplifying the adaptation of these underwater structures and drastically reducing the operational cost. To prove this concept, the Girona 500 I-AUV is used to autonomously dock with an adapted subsea panel and, once docked, perform an intervention composed of turning a valve and plugging in/unplugging a connector. The techniques used for autonomous docking and manipulation, as well as the design of an adapted subsea panel with a funnel-based docking system, are presented in this article together with the results achieved in a water tank and at sea.

Open Access Article: Vision/INS Integrated Navigation System for Poor Vision Navigation Environments
Sensors 2016, 16(10), 1672; doi:10.3390/s16101672
Received: 8 August 2016 / Revised: 5 October 2016 / Accepted: 7 October 2016 / Published: 12 October 2016
Abstract
In order to improve the performance of an inertial navigation system, many aiding sensors can be used. Among these, a vision sensor is of particular note due to its benefits in terms of weight, cost and power consumption. This paper proposes an inertial and vision integrated navigation method for poor-vision navigation environments. The proposed method uses focal-plane measurements of landmarks in order to provide position, velocity and attitude outputs even when the number of landmarks on the focal plane is insufficient for navigation. In order to verify the proposed method, computer simulations and van tests were carried out. The results show that the proposed method gives accurate and reliable position, velocity and attitude outputs when the number of landmarks is insufficient.

Open Access Article: RGB-D SLAM Based on Extended Bundle Adjustment with 2D and 3D Information
Sensors 2016, 16(8), 1285; doi:10.3390/s16081285
Received: 27 May 2016 / Revised: 4 August 2016 / Accepted: 9 August 2016 / Published: 13 August 2016
Abstract
In studies of the SLAM problem using an RGB-D camera, depth information and visual information, the two types of primary measurement data, are rarely tightly coupled during refinement of the camera pose estimate. In this paper, a new RGB-D camera SLAM method is proposed, based on extended bundle adjustment that integrates 2D and 3D information through a new projection model. First, the geometric relationship between the image-plane coordinates and the depth values is established through RGB-D camera calibration. Then, 2D and 3D feature points are automatically extracted and matched between consecutive frames to build a continuous image network. Finally, extended bundle adjustment based on the new projection model, which takes both image and depth measurements into consideration, is applied to the image network for high-precision pose estimation. Field experiments show that the proposed method performs notably better than the traditional method, and the experimental results demonstrate its effectiveness in improving localization accuracy.
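The idea of coupling image and depth measurements in one cost can be illustrated with a residual that stacks the usual 2D reprojection error with a depth error, so both measurement types constrain the pose during bundle adjustment. This is a generic sketch under a pinhole model, not the paper's specific projection model; the intrinsics, weight and point below are invented.

```python
import numpy as np

def extended_residual(X_cam, u_obs, d_obs, fx, fy, cx, cy, w_depth=1.0):
    """X_cam: 3D point in the camera frame; u_obs: observed pixel (u, v);
    d_obs: observed depth (m). Returns the stacked residual [du, dv, dd]."""
    x, y, z = X_cam
    u_pred = np.array([fx * x / z + cx, fy * y / z + cy])  # pinhole projection
    return np.concatenate([u_pred - u_obs, [w_depth * (z - d_obs)]])

# A point 2 m in front of the camera, observed without noise:
r = extended_residual(np.array([0.1, -0.05, 2.0]),
                      u_obs=np.array([565.0, 305.0]),
                      d_obs=2.0, fx=500.0, fy=500.0, cx=540.0, cy=317.5)
print(np.allclose(r, 0.0))  # True: perfect observation gives zero residual
```

A bundle adjuster then minimises the sum of squared residuals of this form over all poses and points, so the depth term tightens the coupling that image-only reprojection error leaves loose.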
(This article belongs to the Special Issue Vision-Based Sensors in Field Robotics)
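The core idea above is that each observation contributes both a 2D reprojection residual and a depth residual to one joint optimization, tightly coupling the two measurement types. A minimal sketch of such a combined residual, under assumed (not the paper's) intrinsics and names:

```python
# Sketch: per-observation residual for an "extended" bundle adjustment that
# couples image and depth measurements. point_c is a 3D point already
# expressed in the camera frame; obs_uv and obs_depth are measurements.

def extended_residual(point_c, obs_uv, obs_depth, f=525.0, cx=319.5, cy=239.5):
    """Residual for one camera-frame point: [du, dv, ddepth]."""
    X, Y, Z = point_c
    u = f * X / Z + cx          # pinhole projection, image column
    v = f * Y / Z + cy          # pinhole projection, image row
    return [u - obs_uv[0], v - obs_uv[1], Z - obs_depth]

# A point that projects exactly onto its observation yields a zero residual:
r = extended_residual([0.0, 0.0, 2.0], (319.5, 239.5), 2.0)
```

In a full system, a nonlinear least-squares solver would minimize the (weighted) sum of these residuals over all camera poses and points in the image network.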

Open Access Article: DeepFruits: A Fruit Detection System Using Deep Neural Networks
Sensors 2016, 16(8), 1222; doi:10.3390/s16081222
Received: 19 May 2016 / Revised: 25 July 2016 / Accepted: 26 July 2016 / Published: 3 August 2016
Cited by 9 | PDF Full-text (26252 KB) | HTML Full-text | XML Full-text
Abstract
This paper presents a novel approach to fruit detection using deep convolutional neural networks. The aim is to build an accurate, fast and reliable fruit detection system, which is a vital element of an autonomous agricultural robotic platform; it is a key element for fruit yield estimation and automated harvesting. Recent work in deep neural networks has led to the development of a state-of-the-art object detector termed Faster Region-based CNN (Faster R-CNN). We adapt this model, through transfer learning, for the task of fruit detection using imagery obtained from two modalities: colour (RGB) and Near-Infrared (NIR). Early and late fusion methods are explored for combining the multi-modal (RGB and NIR) information. This leads to a novel multi-modal Faster R-CNN model, which achieves state-of-the-art results compared to prior work, with the F1 score (which takes into account both precision and recall) improving from 0.807 to 0.838 for the detection of sweet pepper. In addition to improved accuracy, this approach is also much quicker to deploy for new fruits, as it requires bounding box annotation rather than pixel-level annotation (annotating bounding boxes is approximately an order of magnitude quicker to perform). The model is retrained to perform the detection of seven fruits, with the entire process taking four hours to annotate and train the new model per fruit. Full article
(This article belongs to the Special Issue Vision-Based Sensors in Field Robotics)
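The reported gain (F1 from 0.807 to 0.838) uses the standard F1 score, the harmonic mean of precision and recall. A minimal sketch:

```python
# Sketch: the standard F1 score combining precision and recall.

def f1_score(precision, recall):
    """Harmonic mean of precision and recall; 0.0 when both are zero."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Because it is a harmonic mean, F1 penalizes an imbalance between the two: a detector with perfect precision but zero recall still scores 0.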

Open Access Article: An Improved Otsu Threshold Segmentation Method for Underwater Simultaneous Localization and Mapping-Based Navigation
Sensors 2016, 16(7), 1148; doi:10.3390/s16071148
Received: 25 April 2016 / Revised: 7 July 2016 / Accepted: 19 July 2016 / Published: 22 July 2016
Cited by 2 | PDF Full-text (4456 KB) | HTML Full-text | XML Full-text
Abstract
The main focus of this paper is on extracting features with SOund Navigation And Ranging (SONAR) sensing for further underwater landmark-based Simultaneous Localization and Mapping (SLAM). According to the characteristics of sonar images, an improved Otsu threshold segmentation method (TSM) has been developed in this paper for feature detection. In combination with a contour detection algorithm, the foreground objects, although presenting different feature shapes, are separated much faster and more precisely than by other segmentation methods. Tests have been made with side-scan sonar (SSS) and forward-looking sonar (FLS) images in comparison with four other TSMs, namely the traditional Otsu method, the local TSM, the iterative TSM and the maximum entropy TSM. For all the sonar images presented in this work, the computational time of the improved Otsu TSM is much lower than that of the maximum entropy TSM, which achieves the highest segmentation precision among the four above-mentioned TSMs. As a result of the segmentations, the centroids of the main extracted regions have been computed to represent point landmarks which can be used for navigation, e.g., with the help of an Augmented Extended Kalman Filter (AEKF)-based SLAM algorithm. The AEKF-SLAM approach is a recursive and iterative estimation-update process which, besides a prediction and an update stage (as in the classical Extended Kalman Filter (EKF)), includes an augmentation stage. During navigation, the robot localizes the centroids of different segments of features in sonar images, detected by our improved Otsu TSM, as point landmarks. Using them with the AEKF achieves more accurate and robust estimations of the robot pose and the landmark positions than with those detected by the maximum entropy TSM.
Together with the landmarks identified by the proposed segmentation algorithm, the AEKF-SLAM has achieved reliable detection of cycles in the map and consistent map update on loop closure, which is shown in simulated experiments. Full article
(This article belongs to the Special Issue Vision-Based Sensors in Field Robotics)
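For reference, the classical Otsu method that the paper improves on picks the gray level maximizing the between-class variance of the image histogram. A hedged, minimal sketch (the histogram below is illustrative, not a sonar image):

```python
# Sketch: classical Otsu thresholding over a gray-level histogram.
# Pixels with level <= t are "background"; t maximizes between-class variance.

def otsu_threshold(hist):
    """Return the level t that maximizes between-class variance."""
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = 0        # background pixel count so far
    sum_b = 0      # background intensity sum so far
    best_t, best_var = 0, -1.0
    for t, h in enumerate(hist):
        w_b += h
        if w_b == 0:
            continue
        w_f = total - w_b
        if w_f == 0:
            break
        sum_b += t * h
        m_b = sum_b / w_b                  # background mean
        m_f = (sum_all - sum_b) / w_f      # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# A clearly bimodal 16-level histogram: dark levels 0-3, bright levels 12-15.
hist = [10, 10, 10, 10] + [0] * 8 + [10, 10, 10, 10]
```

On sonar imagery the paper's contribution is an improved variant of this scheme, reported to be much faster than maximum-entropy thresholding at comparable precision.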
Open Access Article: Filtering Based Adaptive Visual Odometry Sensor Framework Robust to Blurred Images
Sensors 2016, 16(7), 1040; doi:10.3390/s16071040
Received: 28 March 2016 / Revised: 18 June 2016 / Accepted: 28 June 2016 / Published: 5 July 2016
Cited by 1 | PDF Full-text (4320 KB) | HTML Full-text | XML Full-text
Abstract
Visual odometry (VO) estimation from blurred images is a challenging problem in practical robot applications, as blurred images severely reduce the estimation accuracy of the VO. In this paper, we address the problem of visual odometry estimation from blurred images and present an adaptive visual odometry estimation framework robust to them. Our approach employs an objective measure of images, named small image gradient distribution (SIGD), to evaluate the blurring degree of an image; an adaptive blurred-image classification algorithm is then proposed to recognize blurred images; finally, we propose an anti-blurred key-frame selection algorithm to make the VO robust to blurred images. We also carried out varied comparative experiments to evaluate the performance of VO algorithms with our anti-blur framework under various blurred images, and the experimental results show that our approach achieves superior performance compared to state-of-the-art methods on blurred images while adding little computational cost to the original VO algorithms. Full article
(This article belongs to the Special Issue Vision-Based Sensors in Field Robotics)
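The intuition behind a gradient-based blur cue like the SIGD measure named above is that blurred images concentrate their gradient magnitudes near zero. The exact SIGD definition is the paper's; the ratio below is only an illustrative stand-in, and the tiny "images" are made up:

```python
# Sketch: fraction of small horizontal gradient magnitudes as a blur cue
# (higher ratio = blurrier). Real systems would use 2D gradients and a
# learned or adaptive threshold rather than the fixed eps here.

def small_gradient_ratio(image, eps=5):
    """Fraction of horizontal gradient magnitudes below eps."""
    grads = [abs(b - a) for row in image for a, b in zip(row, row[1:])]
    return sum(1 for g in grads if g < eps) / len(grads)

sharp = [[0, 100, 0, 100], [100, 0, 100, 0]]      # strong edges everywhere
blurred = [[50, 52, 50, 52], [52, 50, 52, 50]]    # gradients all near zero
```

A key-frame selector can then skip frames whose blur score exceeds a threshold, which is the role the anti-blurred key-frame selection algorithm plays in the framework.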
Open Access Article: Pedestrian Detection at Day/Night Time with Visible and FIR Cameras: A Comparison
Sensors 2016, 16(6), 820; doi:10.3390/s16060820
Received: 17 March 2016 / Revised: 28 May 2016 / Accepted: 30 May 2016 / Published: 4 June 2016
Cited by 7 | PDF Full-text (864 KB) | HTML Full-text | XML Full-text
Abstract
Despite all the significant advances in pedestrian detection brought by computer vision for driving assistance, it is still a challenging problem. One reason is the extremely varying lighting conditions under which such a detector should operate, namely day and nighttime. Recent research has shown that the combination of visible and non-visible imaging modalities may increase detection accuracy, where the infrared spectrum plays a critical role. The goal of this paper is to assess the accuracy gain of different pedestrian models (holistic, part-based, patch-based) when training with images in the far infrared spectrum. Specifically, we want to compare detection accuracy on test images recorded at day and nighttime if trained (and tested) using (a) plain color images; (b) just infrared images; and (c) both of them. In order to obtain results for the last item, we propose an early fusion approach to combine features from both modalities. We base the evaluation on a new dataset that we have built for this purpose as well as on the publicly available KAIST multispectral dataset. Full article
(This article belongs to the Special Issue Vision-Based Sensors in Field Robotics)
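The early fusion approach mentioned above combines features from both modalities before classification. As a hedged sketch (feature values and weights are illustrative, not from the paper): features computed from the visible (RGB) and far-infrared (FIR) images are concatenated into one vector that a single classifier scores.

```python
# Sketch: early fusion of per-window features from two imaging modalities,
# followed by a stand-in linear classifier over the fused vector.

def early_fuse(rgb_feat, fir_feat):
    """Early fusion: one joint feature vector for a single classifier."""
    return list(rgb_feat) + list(fir_feat)

def linear_score(feat, weights, bias=0.0):
    """Illustrative linear detection score over the fused features."""
    return sum(f * w for f, w in zip(feat, weights)) + bias

fused = early_fuse([0.2, 0.7], [0.9])   # e.g. two visible cues + one thermal cue
score = linear_score(fused, [1.0, 0.5, 2.0], bias=-1.0)
```

Late fusion, by contrast, would score each modality separately and combine the scores; the paper compares such strategies against single-modality training.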

Journal Contact

MDPI AG
Sensors Editorial Office
St. Alban-Anlage 66, 4052 Basel, Switzerland
Tel. +41 61 683 77 34
Fax: +41 61 302 89 18