Open Access Article
Sensors 2016, 16(10), 1751; doi:10.3390/s16101751

A Self-Synthesis Approach to Perceptual Learning for Multisensory Fusion in Robotics

Neuroscientific System Theory Group, Department of Electrical and Computer Engineering, Technical University of Munich, Arcisstrasse 21, Munich 80333, Germany
* Author to whom correspondence should be addressed.
Academic Editors: Xue-Bo Jin, Feng-Bao Yang, Shuli Sun and Hong Wei
Received: 20 June 2016 / Revised: 11 October 2016 / Accepted: 12 October 2016 / Published: 20 October 2016
(This article belongs to the Special Issue Advances in Multi-Sensor Information Fusion: Theory and Applications)

Abstract

Biological and technical systems operate in a rich multimodal environment. Due to the diversity of the incoming sensory streams a system perceives and the variety of motor capabilities it exhibits, there is no single representation and no singular, unambiguous interpretation of such a complex scene. In this work, we propose a novel sensory processing architecture inspired by the distributed macro-architecture of the mammalian cortex. The underlying computation is performed by a network of computational maps, each representing a different sensory quantity. All the sensory streams enter the system through multiple parallel channels, and the system autonomously associates and combines them into a coherent representation, given the incoming observations. These processes are adaptive and involve learning. The proposed framework introduces mechanisms for the self-creation and learning of the functional relations between the computational maps, which encode sensorimotor streams, directly from the data. Its intrinsic scalability, parallelisation, and automatic adaptation to unforeseen sensory perturbations make our approach a promising candidate for robust multisensory fusion in robotic systems. We demonstrate this by applying our model to 3D motion estimation on a quadrotor.
Keywords: self-construction; self-organization; correlation learning; multisensory fusion; cortically inspired network; mobile robotics
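The abstract's core mechanism, learning the functional relation between computational maps directly from co-occurring sensory observations, can be sketched with a minimal correlation-learning toy model. This is our illustrative reconstruction, not the paper's implementation: the map size, Gaussian tuning width, learning rate, and the example relation y = x² between the two sensory streams are all assumptions made for the sketch.

```python
import numpy as np

def population_code(value, centers, sigma=0.05):
    """Encode a scalar in [0, 1] as Gaussian population activity over a map."""
    return np.exp(-(value - centers) ** 2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
n = 50
centers = np.linspace(0.0, 1.0, n)   # preferred value of each map unit
W = np.zeros((n, n))                 # cross-map weights (map_y units x map_x units)

# Present paired observations of two correlated sensory streams.
# The relation y = x**2 is purely illustrative.
for _ in range(5000):
    x = rng.uniform(0.0, 1.0)
    y = x ** 2
    ax = population_code(x, centers)
    ay = population_code(y, centers)
    W += 0.01 * np.outer(ay, ax)     # Hebbian co-activation update

# Decode the learned relation: for each x-unit, the most strongly
# connected y-unit should lie near x**2.
learned = centers[np.argmax(W, axis=0)]
errors = np.abs(learned - centers ** 2)
print(float(errors.mean()))
```

After training, the cross-map weight matrix itself encodes the functional relation between the two maps; no explicit model of y = f(x) is ever supplied, which mirrors the self-synthesis idea of extracting relations from the data alone.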
Figures: Figure 1
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Axenie, C.; Richter, C.; Conradt, J. A Self-Synthesis Approach to Perceptual Learning for Multisensory Fusion in Robotics. Sensors 2016, 16, 1751.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Sensors EISSN 1424-8220, published by MDPI AG, Basel, Switzerland.