Pedestrian Detection with Semantic Regions of Interest
Abstract
For many pedestrian detectors, background-vs.-foreground errors heavily degrade detection quality. Our main contribution is the design of semantic regions of interest that roughly extract the foreground targets, reducing the background-vs.-foreground errors of detectors. First, we generate a pedestrian heat map from the input image with a fully convolutional neural network trained on the Caltech Pedestrian Dataset. Next, semantic regions of interest are extracted from the heat map by morphological image processing. Finally, the semantic regions of interest divide the whole image into foreground and background to assist the decision-making of detectors. We evaluate our approach on the Caltech Pedestrian Detection Benchmark. With the help of our semantic regions of interest, the detectors improve to varying degrees, and the best one exceeds the state of the art.
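The second step of the pipeline, extracting semantic regions of interest from the heat map by morphological image processing, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the threshold, structuring element, and minimum-area values are assumptions, and the paper does not specify which morphological operators it uses.

```python
import numpy as np
from scipy import ndimage

def extract_semantic_rois(heat_map, threshold=0.5, min_area=50):
    """Extract rough foreground regions (semantic ROIs) from a pedestrian
    heat map via thresholding and morphological cleanup.

    `threshold` and `min_area` are illustrative values, not the paper's.
    Returns a list of bounding boxes (x0, y0, x1, y1).
    """
    # Binarize the heat map: pixels above the threshold are candidate foreground.
    mask = heat_map > threshold
    # Morphological opening removes small spurious responses;
    # closing fills small holes inside pedestrian blobs.
    mask = ndimage.binary_opening(mask, structure=np.ones((3, 3)))
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))
    # Label connected components and keep those large enough to be pedestrians.
    labels, n = ndimage.label(mask)
    rois = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size >= min_area:
            rois.append((xs.min(), ys.min(), xs.max(), ys.max()))
    return rois
```

A downstream detector could then score candidate windows only inside these boxes (foreground) and suppress or down-weight detections that fall entirely in the background.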
He, M.; Luo, H.; Chang, Z.; Hui, B. Pedestrian Detection with Semantic Regions of Interest. Sensors 2017, 17, 2699.