Article

Visual Saliency Prediction Based on Deep Learning

Bashir Ghariba, Mohamed S. Shehata and Peter McGuire
1 Faculty of Engineering & Applied Science, Memorial University, St. John’s, Newfoundland, NL A1B 3X5, Canada
2 Faculty of Engineering, Elmergib University, Khoms 40414, Libya
3 Department of Computer Science, Mathematics, Physics and Statistics, University of British Columbia, Okanagan Campus, Kelowna, BC V1V 1V7, Canada
4 C-CORE, Captain Robert A. Bartlett Building, Morrissey Road, St. John’s, Newfoundland, NL A1C 3X5, Canada
* Author to whom correspondence should be addressed.
Information 2019, 10(8), 257; https://doi.org/10.3390/info10080257
Received: 25 May 2019 / Revised: 30 July 2019 / Accepted: 8 August 2019 / Published: 12 August 2019
Human eye movement is one of the most important functions for understanding our surroundings. When the human eye processes a scene, it quickly focuses on the dominant parts of the scene; predicting these regions is commonly known as visual saliency detection or visual attention prediction. Recently, neural networks have been used to predict visual saliency. This paper proposes a deep learning encoder-decoder architecture, based on transfer learning, to predict visual saliency. In the proposed model, visual features are extracted from raw images through convolutional layers to predict visual saliency. In addition, the proposed model uses the VGG-16 network for semantic segmentation, with a pixel classification layer that predicts the categorical label of every pixel in an input image. The proposed model is evaluated on several datasets, including TORONTO, MIT300, MIT1003, and DUT-OMRON, to illustrate its efficiency. Its results are quantitatively and qualitatively compared to classic and state-of-the-art deep learning models. The proposed deep learning model achieves a global accuracy of up to 96.22% for the prediction of visual saliency.
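To make the encoder-decoder idea concrete, the sketch below pairs a pretrained VGG-16 encoder (transfer learning) with a transposed-convolution decoder that outputs a per-pixel saliency map. This is a minimal illustration, not the authors' implementation: it assumes PyTorch with a recent torchvision, and the class name SaliencyNet, the decoder channel sizes, and the single-channel sigmoid output are illustrative choices (a two-class pixel-labelling variant would use two output channels and a softmax instead).

    # Minimal sketch (assumed PyTorch/torchvision; not the paper's exact network):
    # an encoder-decoder saliency predictor reusing a pretrained VGG-16 encoder.
    import torch
    import torch.nn as nn
    from torchvision import models

    class SaliencyNet(nn.Module):
        def __init__(self):
            super().__init__()
            # Encoder: VGG-16 convolutional layers pretrained on ImageNet
            # (transfer learning), frozen here for simplicity.
            vgg = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
            self.encoder = vgg.features  # output: 512 x H/32 x W/32
            for p in self.encoder.parameters():
                p.requires_grad = False
            # Decoder: five stride-2 transposed convolutions upsample back to
            # the input resolution and predict a saliency value per pixel.
            self.decoder = nn.Sequential(
                nn.ConvTranspose2d(512, 256, kernel_size=4, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(256, 128, kernel_size=4, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(64, 32, kernel_size=4, stride=2, padding=1),
                nn.ReLU(inplace=True),
                nn.ConvTranspose2d(32, 1, kernel_size=4, stride=2, padding=1),
                nn.Sigmoid(),  # saliency map values in [0, 1]
            )

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Usage: a 224x224 RGB image yields a 224x224 saliency map.
    model = SaliencyNet().eval()
    with torch.no_grad():
        saliency = model(torch.rand(1, 3, 224, 224))
    print(saliency.shape)  # torch.Size([1, 1, 224, 224])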
Keywords: visual saliency; Convolutional Neural Networks; VGG-16; semantic segmentation; deep learning

MDPI and ACS Style

Ghariba, B.; Shehata, M.S.; McGuire, P. Visual Saliency Prediction Based on Deep Learning. Information 2019, 10, 257. https://doi.org/10.3390/info10080257

AMA Style

Ghariba B, Shehata MS, McGuire P. Visual Saliency Prediction Based on Deep Learning. Information. 2019; 10(8):257. https://doi.org/10.3390/info10080257

Chicago/Turabian Style

Ghariba, Bashir, Mohamed S. Shehata, and Peter McGuire. 2019. "Visual Saliency Prediction Based on Deep Learning" Information 10, no. 8: 257. https://doi.org/10.3390/info10080257

