Open Access Article
Sensors 2015, 15(6), 13851-13873; doi:10.3390/s150613851

An Evaluation of the Pedestrian Classification in a Multi-Domain Multi-Modality Setup

1 ISR Laboratory, University of Reading, Reading RG6 6AY, UK
2 INSA Rouen / LITIS Laboratory - EA4108, Saint-Étienne-du-Rouvray 76801, France
3 VisLab, University of Parma, Parco Area delle Scienze 181A, 43100 Parma, Italy
* Author to whom correspondence should be addressed.
Academic Editor: Vittorio M.N. Passaro
Received: 2 April 2015 / Accepted: 8 June 2015 / Published: 12 June 2015
(This article belongs to the Special Issue Frontiers in Infrared Photodetection)

Abstract

The objective of this article is to study the problem of pedestrian classification across different light spectrum domains (visible and far-infrared (FIR)) and modalities (intensity, depth and motion). In recent years, there have been a number of approaches for classifying and detecting pedestrians in both FIR and visible images, but the methods are difficult to compare, because either the datasets are not publicly available or they do not offer a comparison between the two domains. Our two primary contributions are the following: (1) we propose a public dataset, named RIFIR, containing both FIR and visible images collected in an urban environment from a moving vehicle during daytime; and (2) we compare the state-of-the-art features in a multi-modality setup: intensity, depth and flow, in far-infrared over visible domains. The experiments show that the feature families, intensity self-similarity (ISS), local binary patterns (LBP), local gradient patterns (LGP) and histogram of oriented gradients (HOG), computed from the FIR and visible domains are highly complementary, but their relative performance varies across different modalities. In our experiments, the FIR domain has proven superior to the visible one for the task of pedestrian classification, but the overall best results are obtained by a multi-domain multi-modality multi-feature fusion.
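To give a concrete sense of one of the feature families compared in the abstract, the following is a minimal, illustrative HOG-style descriptor in NumPy: per-cell histograms of gradient orientations, weighted by gradient magnitude. This is a simplified sketch (no block normalization over overlapping cells, and the cell size and bin count are assumed defaults), not the authors' exact implementation.

```python
import numpy as np

def hog_descriptor(img, cell=8, bins=9):
    """Simplified HOG sketch: per-cell orientation histograms,
    magnitude-weighted, L2-normalized per cell.
    Illustrative only -- omits the block normalization of full HOG."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0  # unsigned orientation
    h, w = img.shape
    feats = []
    for i in range(0, h - cell + 1, cell):
        for j in range(0, w - cell + 1, cell):
            a = ang[i:i + cell, j:j + cell].ravel()
            m = mag[i:i + cell, j:j + cell].ravel()
            hist, _ = np.histogram(a, bins=bins, range=(0, 180), weights=m)
            feats.append(hist / (np.linalg.norm(hist) + 1e-6))
    return np.concatenate(feats)

# A 64x128-pixel window (a common pedestrian window size) yields
# (64/8) * (128/8) = 128 cells x 9 bins = 1152 feature values.
window = np.random.rand(128, 64)
print(hog_descriptor(window).shape)  # (1152,)
```

In a multi-domain setup like the one evaluated here, such a descriptor would be computed independently on the visible and FIR images of the same window, and the resulting vectors concatenated or fused at the classifier level.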
Keywords: infrared pedestrian classification; multi-domain; multi-modality; multi-cue; feature comparison; intensity self-similarity; stereovision; benchmark
This is an open access article distributed under the Creative Commons Attribution License (CC BY 4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Share & Cite This Article

MDPI and ACS Style

Miron, A.; Rogozan, A.; Ainouz, S.; Bensrhair, A.; Broggi, A. An Evaluation of the Pedestrian Classification in a Multi-Domain Multi-Modality Setup. Sensors 2015, 15, 13851-13873.


Sensors EISSN 1424-8220, published by MDPI AG, Basel, Switzerland.