Open Access Article
Robotics 2019, 8(1), 3; https://doi.org/10.3390/robotics8010003

Indoor Scene and Position Recognition Based on Visual Landmarks Obtained from Visual Saliency without Human Effect

Faculty of Systems Science and Technology, Akita Prefectural University, Yurihonjo City 015–0055, Japan
* Author to whom correspondence should be addressed.
Received: 29 November 2018 / Revised: 4 January 2019 / Accepted: 8 January 2019 / Published: 11 January 2019
Abstract

Numerous autonomous robots are used not only for factory automation as labor-saving devices, but also for interaction and communication with humans in daily life. Although robust semantic recognition of generic objects would enable a wide range of practical applications, devising an extraction method that remains stable under environmental changes is still a challenging task. This paper proposes a novel method of scene and position recognition based on visual landmarks (VLs) for an autonomous mobile robot operating in an environment shared with humans. The proposed method produces a mask image of human regions using histograms of oriented gradients (HOG). VL features are described with accelerated KAZE (AKAZE) after extracting conspicuous regions obtained from saliency maps (SMs). Experimental results obtained using leave-one-out cross validation (LOOCV) revealed that the recognition accuracy of high-saliency feature points was higher than that of low-saliency feature points. We created original benchmark datasets using a mobile robot. The recognition accuracy evaluated using LOOCV was 49.9% for our method, 3.2 percentage points higher than that of the comparison method without the HOG detector. Analysis of false recognition using a confusion matrix showed that errors occurred mainly between neighboring zones; this tendency diminished as zone separation increased.
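The LOOCV evaluation described in the abstract can be sketched in a few lines. This is a minimal illustrative example, not the authors' implementation: it assumes each image has already been reduced to one fixed-length feature vector (the paper's actual features are AKAZE descriptors of salient regions), and it uses a simple 1-nearest-neighbor classifier as a stand-in for the paper's unspecified classifier.

```python
# Minimal leave-one-out cross validation (LOOCV) sketch.
# Each sample is held out in turn, classified against all remaining
# samples with 1-NN, and the overall accuracy is reported.

def euclidean(a, b):
    # Euclidean distance between two equal-length feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def loocv_accuracy(samples):
    """samples: list of (feature_vector, zone_label) pairs."""
    correct = 0
    for i, (query, true_label) in enumerate(samples):
        # Hold out sample i; classify it with the nearest remaining sample.
        rest = [s for j, s in enumerate(samples) if j != i]
        predicted = min(rest, key=lambda s: euclidean(query, s[0]))[1]
        correct += predicted == true_label
    return correct / len(samples)

if __name__ == "__main__":
    # Toy data: two well-separated zones (hypothetical values).
    data = [([0.10, 0.20], "zone1"), ([0.15, 0.25], "zone1"),
            ([0.90, 0.80], "zone2"), ([0.85, 0.75], "zone2")]
    print(loocv_accuracy(data))  # 1.0 on this toy set
```

With real data, misclassified held-out samples can also be tallied per (true zone, predicted zone) pair to build the confusion matrix the paper uses to analyze errors between neighboring zones.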
Keywords: visual landmark; semantic position recognition; histograms of oriented gradients; saliency maps; machine learning
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Cite This Article

MDPI and ACS Style

Madokoro, H.; Sato, K.; Shimoi, N. Indoor Scene and Position Recognition Based on Visual Landmarks Obtained from Visual Saliency without Human Effect. Robotics 2019, 8, 3.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Robotics EISSN 2218-6581, Published by MDPI AG, Basel, Switzerland