Single Image Super-Resolution Based on Multi-Scale Competitive Convolutional Neural Network

Deep convolutional neural networks (CNNs) are successful in single-image super-resolution. Traditional CNNs are limited in exploiting multi-scale contextual information for image reconstruction due to the fixed convolutional kernels in their building modules. To restore various scales of image detail, we enhance the multi-scale inference capability of CNNs by introducing competition among multi-scale convolutional filters, and we build a shallow network under limited computational resources. The proposed network has two advantages: (1) the multi-scale convolutional kernels provide multiple contexts for image super-resolution, and (2) the maximum-competition strategy adaptively chooses the optimal scale of information for image reconstruction. Our experimental results on image super-resolution show that the proposed network outperforms state-of-the-art methods.


Introduction
As one of the classical yet challenging problems in image processing, the goal of single-image super-resolution (SISR) is to restore a high-resolution (HR) image from a low-resolution (LR) image input by inferring all the missing high-frequency details. Super-resolution is also a crucial step in many real-world applications, e.g., security and surveillance imaging, television display, satellite imaging and so on.
However, image super-resolution is an inherently ill-posed problem because many HR images can be down-sampled to the same LR image. Such a problem is typically mitigated by constraining the solution space with strong prior information, which assumes that the neighborhood of a pixel provides reasonable information to restore the high-frequency details lost by down-sampling. For a detailed review of these methods, see [1]. In general, current approaches to super-resolution can be categorized into three classes: interpolation-, reconstruction-, and learning-based methods [2][3][4][5][6][7][8].
Recently, learning-based methods [9][10][11] achieved state-of-the-art performance. The above methods typically work at the level of small fixed-size image patches.
Recent years have witnessed significant advances in speech and visual recognition with deep convolutional neural networks (CNNs) [12][13][14]. A CNN consists of multiple convolutional layers. Benefiting from the large number and size of the convolutional kernels in each layer, a CNN has strong learning capacity and can automatically learn hierarchical features from training data [15]. For image super-resolution, Dong et al. proposed a deep learning-based method named super-resolution CNN (SRCNN) [16,17]. Compared with previous learning-based approaches, SRCNN exploits more contextual information to restore lost image details and achieves leading performance. In general, the effective size of the image context used for reconstruction correlates with the receptive field size of the CNN [18]. Specifically, the receptive field size of SRCNN depends on the convolutional kernel size of each layer and the depth of the network. Kim et al. developed a very deep CNN [19], which enlarges the receptive field by stacking more convolutional layers. Yamamoto et al. successfully applied deep CNN-based super-resolution to agriculture [20]. However, both larger kernels and deeper networks bring more parameters and consume more computing resources. Moreover, once the kernel scale and the depth are fixed, a CNN provides only a single scale of contextual information for image reconstruction, ignoring the inherent multi-scale nature of real-world images.
Each image feature has its own optimal scale at which it is most pronounced and distinctive from its surroundings. Considering that HR image restoration may rely on both short- and long-range contextual information, an ideal CNN should adaptively choose a large-scale convolutional kernel on smooth regions and a small-scale kernel on texture regions with abundant details. On one hand, a convolutional layer with large-scale kernels can learn complex features but has more parameters. On the other hand, small-scale kernels make the CNN more compact and thus easier to learn, but less able to represent image features.
A practical solution is to adopt multi-scale inference [21][22][23][24][25] in CNNs, which raises two questions: how to introduce multi-scale convolution into a CNN, and how to choose the optimal scale of the convolutional kernel? In this paper, we introduce a new module to tackle these questions. The proposed module is composed of multi-scale convolutional filters joined by a competitive activation unit.
The contributions of this paper include:
• We introduce multi-scale convolutional kernels into traditional convolutional layers, providing multi-range contextual information for image super-resolution;
• We adopt a competitive strategy in the CNN, which not only adaptively chooses the optimal scale of the convolutional filters but also reduces the dimensionality of the intermediate outputs.
The remainder of this paper is organized as follows. Related work is reviewed in Section 2. In Section 3, the structure and training process of our multi-scale CNN are discussed in detail. Section 4 presents the experimental results on image super-resolution. Section 5 discusses the comparisons with other state-of-the-art methods and potential improvements of our method. The conclusions and future work are given in Section 6.

Related Work
Within the field of object recognition, several multi-scale CNNs have been proposed. In [26], a single classifier is built and the image is rescaled multiple times to cover all possible object sizes. In [27], representations from multiple stages of the classifier are combined to provide different scales of receptive fields. In [28], feature maps at intermediate network layers are exploited to cover a large range of object sizes. The CNNs discussed above perform multi-scale learning outside the network core: they learn features from multi-scale input images or combine the outputs of intermediate layers. Liao proposed a competitive multi-scale CNN for image classification [29].
For image super-resolution, image details are too precious to afford any loss caused by resizing; thus, the details should be extracted by applying multi-scale convolutional filters inside the network.
SRCNN is one of the most successful CNN-based methods for image super-resolution. The network builds an end-to-end mapping between an LR image Y and an HR image X. Given an image Y of any size, SRCNN directly outputs the HR image F(Y).
SRCNN consists of three convolutional layers, and each layer performs one specific task. The l-th convolutional layer convolves its input with a set of filters that share the same size f_l × f_l:

Y_l = W_l * Y_{l-1} + b_l,

where W_l and b_l denote the convolutional filters and biases of the l-th layer, respectively, and '*' represents the convolution operation. Y_{l-1} indicates the input data from the previous layer, and Y_l is the output of the convolution; Y_0 is the original LR image. The detailed structure of each layer is summarized below.
• W_1 corresponds to n_1 filters of size c × f_1 × f_1, where c is the number of image channels and f_1 is the spatial size of the filter. The first convolutional layer outputs n_1 feature maps, extracting and representing each patch as a high-dimensional feature vector.
• The second convolutional layer is responsible for non-linear mapping. Given the n_1-dimensional vectors from the previous step, the second layer applies n_2 filters of size n_1 × f_2 × f_2 on each feature map. The resulting n_2-dimensional vectors are used for reconstruction.
• The last layer reconstructs the final HR image by recombining the above high-dimensional patch-wise representations.
Motivated by different tasks, the above three operations all take the same form as a convolutional layer. The first two convolutional layers are each followed by a rectified linear layer, which uses a rectified linear unit (ReLU) as the activation function to decide which neurons fire. Specifically, the 9-5-5 network refers to the network with f_1 = 9 × 9, f_2 = 5 × 5, and f_3 = 5 × 5 in its three convolutional layers. The size of the image context used for reconstruction is determined by the receptive field of the CNN: one pixel of F(Y) is reconstructed from a 17 × 17 neighborhood of Y.
We introduce a new module composed of multi-scale convolutional filters joined by a competitive activation unit. Figure 1 contrasts the network modules of SRCNN and the proposed method.

Proposed Method
We first introduce the architecture of our network and then describe the implementation details. An overview flowchart of the proposed network is presented in Figure 2.

Multi-scale Competitive Module
Assume that the output of the previous layer is Y_{l-1}, which consists of n_{l-1} feature maps (n_{l-1} channels). The multi-scale filters are first applied to the input data to produce a set of feature maps:

z_l^k = W_l^k * Y_{l-1} + b_l^k, k = 1, ..., K,

where W_l^k corresponds to the k-th type of filter, which contains n_l filters of size n_{l-1} × f_l^k × f_l^k. Each convolution produces n_l feature maps, so the multi-scale convolution yields K × n_l feature maps in total. Second, all the feature maps are divided into n_l non-overlapping groups, such that the i-th group consists of K feature maps. The maxout unit then performs element-wise maximum pooling within each group:

Y_l^i(x, y) = σ(z_l^i)(x, y) = max_{j=1,...,K} z_l^{i,j}(x, y),

where σ(·) represents the maxout activation function and z_l^{i,j}(x, y) refers to the value at position (x, y) in the j-th feature map of the i-th group. As shown in Figure 3, the multi-scale convolutional layer includes K = 3 types of filters, f_l^1 = 5 × 5, f_l^2 = 9 × 9 and f_l^3 = 13 × 13. Suppose each filter bank contains 4 filters; the convolutional output is then 12 feature maps divided into 4 groups, and the maxout function performs element-wise maximum pooling within each of these 4 groups. In each iteration during training, the convolutional layer feeds its feature maps into the maxout activation function, which activates only the units that have the maximum value in each group. The final output is 4 feature maps. Specifically, we denote such a multi-scale convolutional layer as {5,9,13}, i.e., f_l^1 = 5, f_l^2 = 9 and f_l^3 = 13.
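To make the grouping and competition concrete, the following pure-Python sketch applies element-wise maxout across the K scale branches. The helper name `multi_scale_maxout` and the map ordering (scale-major, so the j-th map of every scale belongs to group j) are illustrative assumptions, not the paper's code:

```python
def multi_scale_maxout(feature_maps, n_groups):
    """Element-wise maxout across K scale branches.

    feature_maps: list of K * n_groups 2D maps (lists of lists), laid out
    scale-major (assumption): maps j, j + n_groups, j + 2*n_groups, ...
    belong to group j. Returns n_groups maps, each the element-wise
    maximum over its K members.
    """
    K = len(feature_maps) // n_groups
    out = []
    for j in range(n_groups):
        group = [feature_maps[j + k * n_groups] for k in range(K)]
        h, w = len(group[0]), len(group[0][0])
        out.append([[max(m[y][x] for m in group) for x in range(w)]
                    for y in range(h)])
    return out
```

With 12 input maps and n_groups = 4 this reproduces the Figure 3 setting: the maxout unit collapses the 12 maps from three scales into 4 output maps, keeping only the strongest response per position.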
Our model is inspired by the structure of SRCNN but differs as follows:
• Multi-scale filters are applied to the input image, producing a set of feature maps that provide different ranges of image context for super-resolution. In contrast, SRCNN implements only a single-scale receptive field and provides a fixed range of contextual information.
• A competitive strategy is introduced into the activation function: the ReLU of SRCNN is replaced by maxout in our network. The maxout unit reduces the dimensionality of the joint filter outputs and promotes competition among the multi-scale filters.
• A shortcut connection with identity mapping adds the input image to the output of the last layer. Shortcut connections effectively facilitate gradient flow through multiple layers, thus accelerating deep network training [31].
Compared with the competitive multi-scale CNN for image classification in [29], we design our module for image super-resolution. By removing the Batch Normalization (BN) layers, our method not only makes the network better suited for image reconstruction but also saves GPU memory, allowing a deeper model to be built under limited computational resources. The experimental results in Section 5.3 detail the performance improvement obtained without BN layers.

Training and Prediction
The training process learns the end-to-end mapping function F from training samples.

The Loss Function
We now describe the objective function of our model. Suppose that Y is the input low-resolution image and X is the ground-truth high-resolution image; Θ denotes the network parameters and F(Y; Θ) is the network prediction. We adopt residual learning [19,31] and reformulate the layers as learning residual functions with reference to the layer inputs, rather than learning unreferenced functions. The loss of the residual estimation is defined as

L = ||X - (Y + F(Y; Θ))||^2,

i.e., the Euclidean distance between the reconstructed image (the sum of Y and F(Y)) and the ground truth. Given N pairs of LR images {Y_i} and HR images {X_i}, the loss function is averaged across all pairs:

L(Θ) = (1/N) Σ_{i=1}^{N} ||X_i - (Y_i + F(Y_i; Θ))||^2,

where N is the number of training samples.
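As a sketch of this definition, the residual loss for a single image pair can be computed over flattened pixel vectors. The helper `residual_loss` is illustrative, not the paper's implementation:

```python
def residual_loss(X, Y, F):
    """Mean squared error between ground truth X and reconstruction Y + F(Y).

    X, Y, F are flattened pixel lists of equal length; F holds the
    predicted residual F(Y; Theta).
    """
    n = len(X)
    return sum((X[i] - (Y[i] + F[i])) ** 2 for i in range(n)) / n
```

Averaging this quantity over all N training pairs gives the full objective L(Θ).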

Training
The loss is minimized using stochastic gradient descent with standard back-propagation [32]. In particular, the weights W_l of the convolutional layers are updated as

Δ_{i+1} = γ Δ_i - η ∂L/∂W_l^i,  W_l^{i+1} = W_l^i + Δ_{i+1},

where γ is the momentum; l and i are the indices of layers and iterations, respectively; η is the learning rate; and ∂L/∂W_l^i is the gradient of the loss with respect to W_l at iteration i.
Similar to SRCNN, all the convolutional filters are randomly initialized from a zero-mean Gaussian distribution with standard deviation 0.01, and the biases are set to 0. The learning rate of the multi-scale competitive module and the second convolutional layer is 10^-2, and that of the last layer is 10^-3. The batch size is 32 and the momentum γ is 0.9.
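A single element-wise momentum update can be sketched as follows. This follows the classical momentum form (velocity accumulates the negated, scaled gradient); the sign convention is an assumption, since the paper's update equation was omitted, and `sgd_momentum_step` is an illustrative helper:

```python
def sgd_momentum_step(w, grad, velocity, lr=1e-2, gamma=0.9):
    """One SGD-with-momentum step on flattened weights.

    v <- gamma * v - lr * grad   (accumulate the descent direction)
    w <- w + v                   (apply the update)
    """
    v_new = [gamma * v - lr * g for v, g in zip(velocity, grad)]
    w_new = [wi + vi for wi, vi in zip(w, v_new)]
    return w_new, v_new
```

The defaults mirror the hyper-parameters above: learning rate 10^-2 and momentum γ = 0.9.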

Prediction
In the prediction phase, LR image Y is fed into the network, and the prediction result of the network is F(Y). Therefore, the HR image is the sum of the network input Y and output F(Y), namely, F(Y) + Y.

Multi-Scale Receptive Fields
For 9-5-5 SRCNN, one pixel of F(Y) is reconstructed from a 17 × 17 neighborhood of Y. The proposed network can be unfolded into a group of subnetworks joined by a maxout unit. For example, {5,9,13}-5-5 can be unfolded into 5-5-5, 9-5-5 and 13-5-5 subnetworks, which implement three sizes of receptive fields: 13 × 13, 17 × 17 and 21 × 21. Furthermore, there is a shortcut connection that skips the intermediate layers and adds Y to F(Y) directly; this shortcut corresponds to a 1 × 1 receptive field. Consequently, the proposed network implicitly encodes short- and long-range context information in HR reconstruction. In contrast to the single-scale receptive field of SRCNN, the proposed network provides multi-scale context and improves the flexibility of the network.
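The receptive-field sizes above follow from the standard rule for stacked stride-1 convolutions, RF = 1 + Σ(f_l − 1). A minimal sketch (illustrative helper, not from the paper):

```python
def receptive_field(kernel_sizes):
    """Receptive field of stacked stride-1, dilation-1 conv layers."""
    rf = 1
    for k in kernel_sizes:
        rf += k - 1  # each f x f layer widens the field by f - 1
    return rf
```

Unfolding {5,9,13}-5-5 into its three subnetworks gives receptive_field([5, 5, 5]) = 13, receptive_field([9, 5, 5]) = 17 and receptive_field([13, 5, 5]) = 21, matching the three context sizes listed above.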

Competitive Unit Prevents Filter Co-adaptation
Co-adaptation is a sign of overfitting. Neural units are expected to independently extract features from their inputs rather than relying on other neurons to do so [33]. Imposing the maxout competitive unit to different scale filters explicitly drops the border connections, which not only reduces the chances that these filters will converge to similar regions of the feature space but also protects the 2D structure of the convolutional filters [29].

Experimental Section
In this section, we first describe how to construct the training datasets. Next, we explore the different structures of the network and then investigate the relation between performance and parameters. Last, we compare our model with SRCNN and other state-of-the-art methods.

Datasets and Evaluation Criteria
The training set consists of 91 images from [6] with the addition of 200 images from the Berkeley Segmentation Dataset [34]. The size of the training samples is 33 × 33 for upscaling factor 3 and 32 × 32 for upscaling factors 2 and 4. We extract samples from the original images with a stride of 10 and then randomly choose 300,000 of them as training samples. Figure 4 shows some training samples. These samples are treated as the ground-truth images X. To synthesize the LR samples, the ground-truth samples are first downsampled by a given upscaling factor and then upscaled by the same factor via bicubic interpolation to form the LR images. Following [10], super-resolution is applied only on the luminance channel (the Y channel in YCbCr color space). Note that all the convolutional layers are zero-padded before convolution so that the output has the same size as the input. To evaluate our approach, we adopt the peak signal-to-noise ratio (PSNR) and the structural similarity (SSIM) index as evaluation criteria. The benchmark databases are Set5 and Set14 from [9]. The two datasets consist of different types of images; some include repetitive patterns, such as the face of "Baby" and the tablecloth of "Barbara", whereas others contain rich structural information, such as "Bird" and "Butterfly".
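For reference, PSNR on the luminance channel can be computed as below. This is a minimal pure-Python sketch assuming 8-bit pixel values; the helper `psnr` is illustrative, not the evaluation code used in the paper:

```python
import math

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio between two flattened pixel lists."""
    mse = sum((r - t) ** 2 for r, t in zip(ref, test)) / len(ref)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * math.log10(peak ** 2 / mse)
```

Higher PSNR indicates a reconstruction closer to the ground truth; identical images yield infinite PSNR.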
We report the computation time of training and prediction in Table 2: our method increases the training time by a factor of 1.42 and the prediction time by a factor of 1.87 relative to SRCNN. Table 3 summarizes the memory cost. The proposed method requires 266% of the memory consumed by SRCNN in both the training and prediction phases. Both the time cost and the memory cost of the proposed method are acceptable for image super-resolution.

Parameters and Performance
For fair comparison, we applied residual learning to the original SRCNN; in the following, SRCNN refers to this residual-learning variant. The same training sets, learning rates and initial parameters are used for SRCNN and our proposed model, and all networks are evaluated on Set5 with an upscaling factor of 4.
A reasonably larger filter size can improve reconstruction performance. As shown in Table 4, when the filter sizes are increased from {3,5,9} to {5,9,13} or larger, the PSNR increases from 30.10 dB to 30.44 dB. However, an overly large size, e.g., {7,13,15}, degrades the performance and introduces more computation. Therefore, a moderate filter size such as {5,9,13} is suggested.

Epoch and Performance
As illustrated in Figure 6, the proposed networks quickly reach state-of-the-art performance within a few epochs. Our model {5,9,13}-7-5 improves progressively over time. In the following experiments, we show that better results can be obtained with longer training.

Results
We compare the proposed method with state-of-the-art methods both qualitatively and quantitatively. The compared methods include the baseline bicubic interpolation, the adjusted anchored neighborhood regression method (A+) [10], and three CNN-based methods: SRCNN [17], CSCN [36] and FSRCNN [37]. For a fair comparison with SRCNN, whose training data include 5 million sub-images from ImageNet, we augment our training set to 1 million sub-images by rotating and flipping the 300,000 images in the original training set. Table 5 reports the average quantitative performance of the compared methods. The proposed method outperforms the other methods on most images. Furthermore, the PSNR increases by about 0.1 to 0.4 dB when we build a deeper network by stacking two modules. As shown in Table 6, our method surpasses SRCNN by a large margin on "Butterfly", "Woman", "Bird" and "Monarch", which have rich image details and diverse image features. For images with large smooth regions or repetitive texture patterns, such as "Baby", "PPT3" and "Barbara", the PSNR of our method is lower than that of the other methods. By stacking more multi-scale competitive modules, the network provides a greater variety of context sizes for image reconstruction; Table 6 shows that our deep network performs better than our shallow one. Figures 7-10 present sample results generated by the compared methods. The HR images restored by the proposed method are perceptually more plausible, with relatively sharp edges and few artifacts.

Discussions
In this section, we compare our method with other state-of-the-art methods on large datasets. Furthermore, we show the potential performance improvement achieved with the Iterative Back-Projection (IBP) filter [4].
We divide the compared methods into two types: shallow networks containing only 3-4 layers and much deeper networks with more than 16 layers. The former type includes SRCNN, ESPCN and the proposed method, while the latter includes SRGAN and VDSR. Our method achieves the best evaluation criteria among all the shallow networks and also outperforms a deeper network, SRGAN, which has 16 layers. A much deeper network, VDSR, obtains the best performance among all compared methods, but its network has 20 layers. Overall, our method is a better choice under limited computational resources.

Improvement with Iterative Back Projection
Iterative back-projection (IBP) refinement generally improves the PSNR, as it makes the HR reconstruction consistent with the LR input under the employed degradation operators. We perform IBP as a post-process of our method, and Table 8 shows the improvements obtained with this refinement.
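The IBP update repeatedly simulates the degradation and back-projects the LR residual into the HR estimate. The following is a minimal 1-D sketch assuming average-pooling degradation and nearest-neighbour upsampling of the error; both operators and the helper name `ibp_refine` are illustrative assumptions, not the paper's filter:

```python
def ibp_refine(hr, lr, factor, n_iters=10, beta=1.0):
    """Iterative back-projection on a 1-D signal (sketch).

    hr: initial HR estimate (length = factor * len(lr))
    lr: observed LR signal; beta: back-projection step size.
    """
    hr = list(hr)
    for _ in range(n_iters):
        # simulate the degradation: average pooling by `factor`
        sim = [sum(hr[i * factor:(i + 1) * factor]) / factor
               for i in range(len(lr))]
        # upsample the residual (nearest-neighbour) and correct hr
        for i in range(len(lr)):
            err = beta * (lr[i] - sim[i])
            for j in range(factor):
                hr[i * factor + j] += err
    return hr
```

After convergence, downsampling the refined HR estimate reproduces the LR input, which is exactly the consistency constraint IBP enforces.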

The Effect of Batch Normalization on Super-Resolution
Reference [29] proposed a competitive network with Batch Normalization for image classification. Since [29] addresses image classification while our work targets image super-resolution, we analyzed the effect of BN on super-resolution.
We removed the Batch Normalization (BN) layers from our module and attained better performance in terms of both PSNR and SSIM, as shown in Table 9. Experiments have shown that, even in deeper networks, BN reduces super-resolution performance [42]. In addition, BN layers consume extra GPU memory to store their intermediate results. Thus, without them, our method can more conveniently build a deeper model under limited computational resources.

Conclusions
We propose a super-resolution reconstruction model for single images based on a multi-scale convolutional neural network, in which large and small filters are jointly trained within the same model. The maxout unit not only reduces the dimensionality of the filter outputs but also promotes competition among the multi-scale filters. The success of the proposed network is due to its ability to provide a multi-range context and to adaptively select the optimal local receptive field for image reconstruction. Experiments on super-resolution illustrate the high performance of our network: under limited computational resources, our method achieves the best evaluation criteria among all the shallow networks and also outperforms a deeper network. The experiments demonstrate that our method takes full advantage of the cost/accuracy trade-off. Further improvement is expected when stacking more multi-scale competitive modules.