Dynamic Catadioptric Sensory Data Fusion for Visual Localization in Mobile Robotics
Abstract
This paper presents a localization technique for mobile robotics based on visual sensory data fusion. A regression inference framework is designed using informative data models of the system, supported by probabilistic techniques such as Gaussian Processes. As a result, the visual data acquired with a catadioptric sensor is fused across robot poses in order to produce a probability distribution of visual information in the robot's 3D global reference frame. In addition, a prediction technique based on a filter gain is defined to improve the matching of visual information extracted from the probability distribution. This work presents an enhanced matching technique for visual information in both the image reference frame and the 3D global reference frame. Results on real data are presented to confirm the validity of the approach in a mobile robotic application for visual localization, together with a comparison against standard visual matching techniques. The suitability and robustness of the contributions are tested in the presented experiments.
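The abstract describes fusing visual observations acquired at different robot poses with Gaussian Process regression to obtain a predictive probability distribution. As an illustration only, the following one-dimensional sketch (the toy data, squared-exponential kernel, and hyperparameters are our own assumptions, not the paper's model) shows how a GP posterior mean and variance can be computed from noisy observations:

```python
import numpy as np

def rbf_kernel(X1, X2, length_scale=1.0, sigma_f=1.0):
    """Squared-exponential covariance between two sets of 1-D inputs."""
    d = X1[:, None] - X2[None, :]
    return sigma_f**2 * np.exp(-0.5 * (d / length_scale)**2)

def gp_predict(X_train, y_train, X_test, noise=0.1):
    """Posterior mean and per-point variance of a GP conditioned on data."""
    K = rbf_kernel(X_train, X_train) + noise**2 * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    # Cholesky-based solve for numerical stability
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))
    mu = K_s.T @ alpha                      # predictive mean
    v = np.linalg.solve(L, K_s)
    var = np.diag(K_ss - v.T @ v)           # predictive variance
    return mu, var

# Toy example: noisy observations of a feature coordinate along a robot path
rng = np.random.default_rng(0)
X_train = np.array([0.0, 1.0, 2.0, 3.0])
y_train = np.sin(X_train) + 0.05 * rng.standard_normal(4)
X_test = np.linspace(0.0, 3.0, 7)
mu, var = gp_predict(X_train, y_train, X_test)
```

The predictive variance is what makes the fused representation a probability distribution rather than a point estimate: it shrinks near observed poses and grows where no data constrains the prediction.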
Cite This Article
Valiente, D.; Payá, L.; Sebastián, J.M.; Jiménez, L.M.; Reinoso, O. Dynamic Catadioptric Sensory Data Fusion for Visual Localization in Mobile Robotics. Proceedings 2019, 15, 2.