
Active Eye-in-Hand Data Management to Improve the Robotic Object Detection Performance

Department of Computer Science and Engineering, University of Nevada, Reno, NV 89557, USA
* Author to whom correspondence should be addressed.
This paper is an extended version of conference paper “Hoseini A., S.P.; Nicolescu, M.; Nicolescu, M. Active object detection through dynamic incorporation of Dempster-Shafer fusion for robotic applications, 2nd International Conference on Vision, Image and Signal Processing (ICVISP), Las Vegas, USA, August 2018”.
Computers 2019, 8(4), 71; https://doi.org/10.3390/computers8040071
Received: 15 July 2019 / Revised: 20 September 2019 / Accepted: 21 September 2019 / Published: 23 September 2019
(This article belongs to the Special Issue Vision, Image and Signal Processing (ICVISP))
Increasing the number of sources of sensory information can enhance the object detection capability of robots. In vision-based object detection, observing objects of interest from different points of view not only improves general detection performance but is also central to handling occlusions. In this paper, a robotic vision system is proposed that constantly uses a 3D camera while actively switching to a second RGB camera when necessary. The proposed system detects objects in the view of the 3D camera, which is mounted on a humanoid robot's head, and computes a confidence measure for its recognitions. When confidence in the correctness of a detection is low, the secondary camera, installed on the robot's arm, is moved toward the object to obtain another perspective. Objects detected in the scene viewed by the hand camera are then matched to the detections of the head camera, and their recognition decisions are fused. The decision fusion method is a novel approach based on the Dempster–Shafer evidence theory. Significant improvements in object detection performance are observed after employing the proposed active vision system.
Keywords: object detection; active vision; Dempster–Shafer fusion; transferable belief model; distance matching; PR2; robotics
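To make the fusion step concrete, the following is a minimal sketch of Dempster's rule of combination, the core operation of Dempster–Shafer evidence fusion, applied to merging two cameras' recognition decisions. The class labels and mass values below are illustrative assumptions, not taken from the paper, which uses a variant based on the transferable belief model.

```python
from itertools import product

def combine(m1, m2):
    """Combine two mass functions (dicts mapping frozenset of labels -> mass)."""
    combined = {}
    conflict = 0.0  # total mass assigned to contradictory evidence
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb
    # Normalize by 1 - K (classic Dempster rule; the transferable belief
    # model instead leaves the conflict mass on the empty set).
    norm = 1.0 - conflict
    return {s: m / norm for s, m in combined.items()}

# Hypothetical example: the head camera leans toward "cup", and the hand
# camera, viewing from a second perspective, adds supporting evidence.
theta = frozenset({"cup", "bowl"})  # frame of discernment
head = {frozenset({"cup"}): 0.6, theta: 0.4}
hand = {frozenset({"cup"}): 0.7, frozenset({"bowl"}): 0.1, theta: 0.2}
fused = combine(head, hand)
# The fused belief concentrates on "cup", reflecting the agreement
# between the two viewpoints.
```

Note that the normalized rule redistributes conflicting mass across the remaining hypotheses, which is why agreement between the two cameras sharpens the fused decision.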
MDPI and ACS Style

Hoseini, P.; Blankenburg, J.; Nicolescu, M.; Nicolescu, M.; Feil-Seifer, D. Active Eye-in-Hand Data Management to Improve the Robotic Object Detection Performance. Computers 2019, 8, 71.

