Open Access Article
Sensors 2017, 17(7), 1569; https://doi.org/10.3390/s17071569

Build a Robust Learning Feature Descriptor by Using a New Image Visualization Method for Indoor Scenario Recognition

School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, China
* Author to whom correspondence should be addressed.
Received: 21 May 2017 / Revised: 20 June 2017 / Accepted: 26 June 2017 / Published: 4 July 2017
(This article belongs to the Special Issue Indoor LiDAR/Vision Systems)

Abstract

To recognize indoor scenarios, we extract image features for detecting objects; however, computers can make unexpected mistakes. After visualizing histogram of oriented gradients (HOG) features, we find that the world seen through the eyes of a computer is indeed different from what human eyes see, which helps researchers understand why a computer makes errors. The visualization also shows that while HOG features capture rich texture information, they introduce a large amount of background interference. To enhance the robustness of the HOG feature, we propose an improved method that suppresses this background interference. On the basis of the original HOG feature, we introduce principal component analysis (PCA) to extract the principal components of the image colour information, and then build a new hybrid feature descriptor, named HOG–PCA (HOGP), by deeply fusing these two features. Finally, HOGP is compared with the state-of-the-art HOG descriptor in four scenes under different illumination. In simulation and experimental tests, qualitative and quantitative assessments indicate that the visualized HOGP features are closer to what human eyes observe, and that HOGP outperforms the original HOG feature for object detection. Furthermore, the runtime of the proposed algorithm is barely increased in comparison with the classic HOG feature.
Keywords: image feature extraction; indoor scenario recognition; HOG feature descriptor; feature visualization; sparse representation
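The abstract describes combining a HOG descriptor with PCA-compressed colour information. The exact fusion used by the authors is not specified on this page, so the following is only a minimal sketch: it computes standard HOG on the luminance channel, projects per-pixel colour triples onto their first principal component, pools that component over the HOG cell grid, and concatenates the two vectors. The function name `hogp_descriptor` and the concatenation step are our own assumptions, not the paper's method.

```python
# Illustrative sketch of a HOG + colour-PCA hybrid descriptor ("HOGP").
# NOTE: the paper's actual fusion scheme is not given here; simple
# concatenation is used as a stand-in.
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA


def hogp_descriptor(rgb, pixels_per_cell=(8, 8)):
    """Concatenate a HOG vector with cell-pooled colour PCA components."""
    rgb = np.asarray(rgb, dtype=np.float64)
    h, w, _ = rgb.shape

    # 1) Standard HOG on the luminance channel.
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    hog_vec = hog(gray, orientations=9,
                  pixels_per_cell=pixels_per_cell,
                  cells_per_block=(2, 2))

    # 2) PCA on the per-pixel colour triples: the first principal
    #    component captures the dominant colour axis of the image.
    pc1 = PCA(n_components=1).fit_transform(rgb.reshape(-1, 3)).reshape(h, w)

    # 3) Average-pool the colour component over the same cell grid as
    #    HOG, then fuse the two vectors by concatenation.
    cy, cx = pixels_per_cell
    pooled = pc1[:h - h % cy, :w - w % cx]
    pooled = pooled.reshape(h // cy, cy, w // cx, cx).mean(axis=(1, 3))
    return np.concatenate([hog_vec, pooled.ravel()])


feat = hogp_descriptor(np.random.rand(64, 64, 3))
print(feat.shape)
```

For a 64×64 input with 8×8 cells, the HOG part contributes 7×7 blocks × 2×2 cells × 9 orientations = 1764 values, and the colour part one pooled value per cell (64), so the fused vector has 1828 elements.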

This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Jiao, J.; Wang, X.; Deng, Z. Build a Robust Learning Feature Descriptor by Using a New Image Visualization Method for Indoor Scenario Recognition. Sensors 2017, 17, 1569.


Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.

Sensors EISSN 1424-8220, published by MDPI AG, Basel, Switzerland.