
Task-Agnostic Object Recognition for Mobile Robots through Few-Shot Image Matching

Knowledge Media Institute, The Open University, Milton Keynes MK7 6AA, UK
The Interaction Lab, Heriot-Watt University, Edinburgh EH14 4AS, UK
Faculty of Computer Science, Vrije Universiteit Amsterdam, 1081 HV Amsterdam, The Netherlands
Information Sciences and Technology, The Pennsylvania State University, University Park, PA 16801, USA
Author to whom correspondence should be addressed.
Electronics 2020, 9(3), 380;
Received: 29 November 2019 / Revised: 11 February 2020 / Accepted: 19 February 2020 / Published: 25 February 2020
(This article belongs to the Special Issue Big Data Analytics for Smart Cities)
To assist humans with their daily tasks, mobile robots are expected to navigate complex and dynamic environments, presenting unpredictable combinations of known and unknown objects. Most state-of-the-art object recognition methods are unsuitable for this scenario because they require that: (i) all target object classes are known beforehand, and (ii) a vast number of training examples is provided for each class. These limitations call for novel methods that can handle unknown object classes, for which fewer images are initially available (few-shot recognition). One way of tackling the problem is learning how to match novel objects to their most similar supporting example. Here, we compare different (shallow and deep) approaches to few-shot image matching on a novel data set, consisting of 2D views of common object types drawn from a combination of ShapeNet and Google. First, we assess whether the object similarities learned on this data set can scale up to new object classes, i.e., categories unseen at training time. Furthermore, we show how normalising the learned embeddings can impact the generalisation abilities of the tested methods, in the context of two novel configurations: (i) where the weights of a Convolutional two-branch Network are imprinted and (ii) where the embeddings of a Convolutional Siamese Network are L2-normalised.
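The matching step described in the abstract can be illustrated with a minimal sketch: embeddings are L2-normalised onto the unit hypersphere, after which the dot product between a query and each support embedding equals their cosine similarity, and the query is assigned the label of its nearest support example. The function names (`l2_normalise`, `match_to_support`) and the use of NumPy are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def l2_normalise(embeddings):
    """Project each row embedding onto the unit hypersphere.

    Illustrative helper (not from the paper): divides each row by its
    L2 norm, clipping tiny norms to avoid division by zero.
    """
    norms = np.linalg.norm(embeddings, axis=1, keepdims=True)
    return embeddings / np.clip(norms, 1e-12, None)

def match_to_support(query_emb, support_embs, support_labels):
    """Return the label of the support example most similar to the query.

    After L2 normalisation, the dot product of two embeddings is their
    cosine similarity, so argmax over dot products picks the nearest
    support example on the hypersphere.
    """
    q = l2_normalise(query_emb[None, :])[0]
    s = l2_normalise(np.asarray(support_embs, dtype=float))
    sims = s @ q
    return support_labels[int(np.argmax(sims))]
```

Normalising before matching means the comparison depends only on embedding direction, not magnitude, which is one way the paper's L2-normalised Siamese configuration differs from an unnormalised one.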
Keywords: few-shot object recognition; image matching; robotics
MDPI and ACS Style

Chiatti, A.; Bardaro, G.; Bastianelli, E.; Tiddi, I.; Mitra, P.; Motta, E. Task-Agnostic Object Recognition for Mobile Robots through Few-Shot Image Matching. Electronics 2020, 9, 380.

