Article

Global-and-Local Context Network for Semantic Segmentation of Street View Images

1 Department of Electrical Engineering, Yuan Ze University, Taoyuan 32003, Taiwan
2 Department of Computer Science & Information Engineering, National Central University, Taoyuan City 32001, Taiwan
3 Department of Computer Science, Universiti Tunku Abdul Rahman, Kampar 31900, Malaysia
* Authors to whom correspondence should be addressed.
Sensors 2020, 20(10), 2907; https://doi.org/10.3390/s20102907
Received: 12 February 2020 / Revised: 13 April 2020 / Accepted: 18 May 2020 / Published: 21 May 2020
Semantic segmentation of street view images is an important step in scene understanding for autonomous vehicle systems. Recent works have made significant progress in pixel-level labeling using the Fully Convolutional Network (FCN) framework and local multi-scale context information. Rich global context information is also essential in the segmentation process. However, a systematic way to utilize both global and local contextual information in a single network has not been fully investigated. In this paper, we propose a global-and-local network architecture (GLNet) which incorporates global spatial information and dense local multi-scale context information to model the relationships between objects in a scene, thus reducing segmentation errors. A channel attention module is designed to further refine the segmentation results using low-level features from the feature map. Experimental results demonstrate that our proposed GLNet achieves 80.8% test accuracy on the Cityscapes test dataset, comparing favorably with existing state-of-the-art methods.
Keywords: semantic segmentation; global context; local context; fully convolutional networks
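The abstract does not spell out the internals of the channel attention module, so the following is only a minimal sketch of one common design it may resemble: squeeze-and-excitation-style channel attention, where each channel is summarized by global average pooling, passed through a small bottleneck with a sigmoid gate, and the resulting per-channel weight rescales the feature map. The weight matrices `w1` and `w2` here are hypothetical stand-ins for learned parameters, not the paper's actual values.

```python
import math

def channel_attention(feature_map, w1, w2):
    """SE-style channel attention sketch (assumed design, not the paper's exact module).

    feature_map: list of C channels, each an HxW nested list of floats.
    w1: reduction weights, shape (C_r, C); w2: expansion weights, shape (C, C_r).
    Returns the feature map with each channel rescaled by its attention gate.
    """
    # Squeeze: global average pooling per channel -> vector of length C
    squeezed = []
    for ch in feature_map:
        total = sum(sum(row) for row in ch)
        count = len(ch) * len(ch[0])
        squeezed.append(total / count)

    # Excite: bottleneck MLP (ReLU), then a sigmoid gate per channel
    hidden = [max(0.0, sum(w * s for w, s in zip(row, squeezed))) for row in w1]
    gates = [1.0 / (1.0 + math.exp(-sum(w * h for w, h in zip(row, hidden))))
             for row in w2]

    # Rescale: multiply every value in each channel by that channel's gate
    return [[[v * g for v in row] for row in ch]
            for ch, g in zip(feature_map, gates)]
```

In a segmentation network such a gate lets the model emphasize low-level feature channels (edges, textures) that help sharpen object boundaries, which matches the refinement role the abstract describes.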
MDPI and ACS Style

Lin, C.-Y.; Chiu, Y.-C.; Ng, H.-F.; Shih, T.K.; Lin, K.-H. Global-and-Local Context Network for Semantic Segmentation of Street View Images. Sensors 2020, 20, 2907. https://doi.org/10.3390/s20102907

AMA Style

Lin C-Y, Chiu Y-C, Ng H-F, Shih TK, Lin K-H. Global-and-Local Context Network for Semantic Segmentation of Street View Images. Sensors. 2020; 20(10):2907. https://doi.org/10.3390/s20102907

Chicago/Turabian Style

Lin, Chih-Yang, Yi-Cheng Chiu, Hui-Fuang Ng, Timothy K. Shih, and Kuan-Hung Lin. 2020. "Global-and-Local Context Network for Semantic Segmentation of Street View Images" Sensors 20, no. 10: 2907. https://doi.org/10.3390/s20102907

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
