
Search Results (2)

Search Parameters:
Keywords = orderless pooling

15 pages, 673 KB  
Article
No-Reference Image Quality Assessment with Multi-Scale Orderless Pooling of Deep Features
by Domonkos Varga
J. Imaging 2021, 7(7), 112; https://doi.org/10.3390/jimaging7070112 - 10 Jul 2021
Cited by 8 | Viewed by 4851
Abstract
The goal of no-reference image quality assessment (NR-IQA) is to evaluate the perceptual quality of digital images without using their distortion-free, pristine counterparts. NR-IQA is an important part of multimedia signal processing, since digital images can undergo a wide variety of distortions during storage, compression, and transmission. In this paper, we propose a novel architecture that extracts deep features from the input image at multiple scales to improve the effectiveness of feature extraction for NR-IQA using convolutional neural networks. Specifically, the proposed method extracts deep activations for local patches at multiple scales and maps them onto perceptual quality scores with the help of trained Gaussian process regressors. Extensive experiments demonstrate that the introduced algorithm performs favorably against state-of-the-art methods on three large benchmark datasets with authentic distortions (LIVE In the Wild, KonIQ-10k, and SPAQ).
(This article belongs to the Special Issue Image and Video Quality Assessment)
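The pipeline the abstract describes — patch-level deep activations pooled without spatial order at several scales, then regressed onto quality scores — can be sketched as follows. This is a hypothetical illustration, not the author's code: the random arrays stand in for CNN activations, and the pooling and regressor choices (mean pooling, scikit-learn's `GaussianProcessRegressor` with default kernel) are assumptions.

```python
# Illustrative sketch of multi-scale orderless pooling + Gaussian process
# regression for NR-IQA. Random features stand in for deep CNN activations.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def orderless_pool(patch_features):
    """Average patch-level features, discarding spatial order (assumed pooling)."""
    return patch_features.mean(axis=0)

def multiscale_descriptor(patch_features_per_scale):
    """Concatenate orderless-pooled features from each scale into one vector."""
    return np.concatenate([orderless_pool(f) for f in patch_features_per_scale])

rng = np.random.default_rng(0)
# 20 "images", two scales: 16 coarse patches and 4 fine patches, 8-dim features each
X = np.stack([
    multiscale_descriptor([rng.normal(size=(16, 8)), rng.normal(size=(4, 8))])
    for _ in range(20)
])
y = rng.uniform(1.0, 5.0, size=20)  # stand-in mean opinion scores

gpr = GaussianProcessRegressor().fit(X, y)  # maps descriptors -> quality scores
pred = gpr.predict(X[:1])                   # predicted quality for one image
```

Because pooling averages over patches, the descriptor length is fixed by the feature dimension and the number of scales, not by image size — which is what lets one regressor handle images of varying resolution.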

18 pages, 4442 KB  
Article
Scene Classification from Synthetic Aperture Radar Images Using Generalized Compact Channel-Boosted High-Order Orderless Pooling Network
by Kang Ni, Yiquan Wu and Peng Wang
Remote Sens. 2019, 11(9), 1079; https://doi.org/10.3390/rs11091079 - 7 May 2019
Cited by 2 | Viewed by 2932
Abstract
The convolutional neural network (CNN) has achieved great success in the field of scene classification. Nevertheless, strong spatial information in CNNs and irregular repetitive patterns in synthetic aperture radar (SAR) images make the feature descriptors less discriminative for scene classification. To provide more discriminative feature representations for SAR scene classification, a generalized compact channel-boosted high-order orderless pooling network (GCCH) is proposed. The GCCH network includes four parts, namely the standard convolution layer, the second-order generalized layer, the squeeze and excitation block, and the compact high-order generalized orderless pooling layer. All of the layers are trained by back-propagation, and the parameters enable end-to-end optimization. First, the second-order orderless feature representation is acquired by the parameterized locality-constrained affine subspace coding (LASC) in the second-order generalized layer, which cascades the first- and second-order orderless feature descriptors of the output of the standard convolution layer. Subsequently, the squeeze and excitation block is employed to learn the channel information of the parameterized LASC statistic representation by explicitly modelling interdependencies between channels. Lastly, the compact high-order orderless feature descriptors are learned automatically by the kernelled outer product, which yields low-dimensional but highly discriminative feature descriptors. For validation and comparison, we conducted extensive experiments on a SAR scene classification dataset built from TerraSAR-X images. Experimental results illustrate that the GCCH network achieves more competitive performance than state-of-the-art networks on the SAR image scene classification task.
(This article belongs to the Section Remote Sensing Image Processing)
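The squeeze and excitation step the abstract mentions recalibrates channels: global-average-pool each channel to a scalar ("squeeze"), pass the resulting vector through a small bottleneck network with a sigmoid gate ("excitation"), then rescale the channels by the gate. The sketch below is a minimal NumPy illustration of that generic mechanism, not the GCCH implementation; the weight shapes and reduction ratio `r` are assumptions.

```python
# Minimal squeeze-and-excitation channel recalibration (illustrative only).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def squeeze_excite(feature_map, w1, w2):
    """feature_map: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
    squeezed = feature_map.mean(axis=(1, 2))              # squeeze: global average pool -> (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ squeezed, 0.0))   # excitation: FC-ReLU-FC-sigmoid -> (C,)
    return feature_map * gate[:, None, None]              # rescale each channel by its gate

rng = np.random.default_rng(1)
C, r = 8, 2                                # channels and reduction ratio (assumed)
fmap = rng.normal(size=(C, 6, 6))
w1 = rng.normal(size=(C // r, C))
w2 = rng.normal(size=(C, C // r))
out = squeeze_excite(fmap, w1, w2)
```

Since each gate value lies in (0, 1), the block can only attenuate channels relative to one another, which is how it models interdependencies between channels without changing the feature map's shape.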
