Article

A Dynamic Convolution Kernel Generation Method Based on Regularized Pattern for Image Super-Resolution

Hesen Feng, Lihong Ma and Jing Tian
1 School of Electronics & Information Engineering, South China University of Technology, Guangzhou 510640, China
2 National Research Center for Mobile Ultrasonic Detection, Guangzhou 510640, China
3 Institute of Systems Science, National University of Singapore, Singapore 119615, Singapore
* Author to whom correspondence should be addressed.
Sensors 2022, 22(11), 4231; https://doi.org/10.3390/s22114231
Submission received: 12 February 2022 / Revised: 28 May 2022 / Accepted: 30 May 2022 / Published: 1 June 2022

Abstract: Image super-resolution aims to reconstruct a high-resolution image from its low-resolution counterpart. Conventional image super-resolution approaches share the same spatial convolution kernel for the whole image in their upscaling modules, which neglects the specificity of the content at different positions of the image. In view of this, this paper proposes a regularized pattern method to represent spatially variant structural features in an image and further exploits a dynamic convolution kernel generation method to match the regularized pattern and improve image reconstruction performance. To be more specific, the proposed approach first extracts features from the low-resolution image using a self-organizing feature mapping network to construct regularized patterns (RP), which describe different content at different locations. Second, a meta-learning mechanism based on the regularized pattern predicts the weights of the convolution kernels that match the regularized pattern at each location; it therefore generates different upscaling functions for images with different content. Extensive experiments are conducted using the benchmark datasets Set5, Set14, B100, Urban100, and Manga109 to demonstrate that the proposed approach outperforms state-of-the-art super-resolution approaches in terms of both PSNR and SSIM.

1. Introduction

The goal of single image super-resolution (SISR) is to reconstruct high-quality high-resolution (HR) images from degraded low-resolution (LR) images. It has wide applications in video surveillance, remote sensing, and medical and military imaging. Another line of work related to SISR is face hallucination, which enlarges facial regions by an approximately linear mapping of SVD values between different resolutions [1]; its hallucination capability was further extended by sharing the mapping across different views [2]. The pioneering network-based SISR work was done by Dong et al. [3]. Their proposed neural network, SRCNN, established an end-to-end mapping from an interpolated LR input image to the output HR image. Then VDSR [4], DRCN [5], DRRN [6], and MemNet [7] were successively proposed, which further improved image reconstruction performance. These methods up-sampled the LR input to the required output size at the very beginning, rather than using an upscaling module to increase the spatial resolution at the end.
However, recent research found that such early interpolation of the LR image inevitably results in detail loss and greatly increases the amount of computation. Extracting features from the original LR input and increasing the spatial resolution at the end of the network has therefore become the dominant deep SISR structure. Shi et al. proposed an efficient sub-pixel convolution layer in ESPCN [8], which enlarges the LR feature map to the output size at the end of the network. With the efficient sub-pixel convolution layer, many methods, such as EDSR [9], RDN [10], RFANet [11], SAN [12], and DID [13], treated SR recovery with different scale factors as independent tasks and applied sub-pixel convolution layers for feature map expansion at the end. However, sub-pixel convolution layers are only feasible for integer scale factors, and a specific network model must be designed for each scale factor; each model can magnify images only by a fixed integer scale factor.
To avoid designing different network models for different scale factors, the meta-learning technique [14] has been introduced to develop various SR approaches. The feed-forward model (FFM) in meta feature representation [14] provides a feedforward mapping that directly predicts the required parameters for a test instance. Similar to Hypernetworks [15], the weights of another neural network are generated in a feedforward process. To perform image super-resolution at any scale with one model, Hu et al. proposed Meta-SR [16], which uses a Meta-Upscale Module to increase the spatial resolution at the end of the network. For different scale factors and position coordinate offsets, the weight prediction network in the Meta-Upscale Module generates different convolution kernels to produce the final SR image. However, the Meta-Upscale Module still shares the convolution kernel spatially and does not consider the content of the current image. Chen et al. proposed LIIF [17], which uses a multi-layer perceptron at the end of the network to replace the traditional upscaling layer and predict the gray value of each pixel in the output SR image. However, since the input of the multi-layer perceptron is a one-dimensional vector, the original position information of the feature vector is lost when the multi-dimensional feature map is flattened into a one-dimensional input vector.
The major challenge of single image SR is how to perform upscaling reconstruction adaptively to the spatially variant image content. According to the characteristics of involution [18], if the convolution kernel is shared spatially, the parameters of the convolution kernel cannot be flexibly adjusted to match different inputs. On the contrary, we can use space-specific kernels for more flexible modeling in the spatial dimension. Similar to the space-specific involution, introducing a regularized pattern to guide the generation of convolution kernel will be helpful in the upscaling module. Motivated by this, in this paper, we propose a specific regularized pattern extraction network to extract the regularized pattern from LR features and then generate a space-specific convolution kernel according to different regularized patterns.
The two contributions of this paper are summarized as follows.
(1) A regularized pattern extraction method is proposed to extract the regularized pattern from LR features. This will adaptively guide the image reconstruction in a spatially variant manner. Furthermore, both position information and scale information are used in the weight prediction network with the proposed regularized pattern. As a result, the convolution weight prediction network can accurately match the relationship between input parameters and output convolution kernel parameters.
(2) A dynamic convolution kernel generation method is proposed to generate the best-matching convolution kernel parameters according to the regularized pattern, together with the position and scale information, at each location. Consequently, the pixels at different positions in the SR image are processed differently, which enhances the texture consistency with the HR image and improves the network performance.
The rest of this paper is organized as follows. The dynamic convolution kernel generation method is proposed and then further exploited to develop a super-resolution approach in Section 2. The proposed approach is evaluated with state-of-the-art approaches in extensive experiments in Section 3. Finally, Section 4 concludes this paper.

2. Proposed Dynamic Convolution Kernel Generation Based on Regularized Pattern for Image Super-Resolution

Different pixel points in the LR image have different image contents. As shown in Figure 1, the blue points are in the flat color block area, and the red points are in the edge area. During the Meta-SR upsampling process [16], the convolution kernels used for these two positions are the same. The difference in the content information of these two positions is not considered. We propose a dynamic convolution kernel generation method to adaptively generate convolution kernels according to local image content, which is represented by using the proposed regularized pattern. For the blue and red points in Figure 1, the proposed method produces different convolution kernels matching their regularized content patterns, implementing space-specific reconstruction operations.
Assuming that the input LR image is $I^{LR}$, the LR feature $F^{LR}$ is extracted from $I^{LR}$ by the LR feature extraction network. We use the feature tensor $V \in \mathbb{R}^{H \times W \times inC}$ to represent $F^{LR}$, where $H$ is the height of $I^{LR}$, $W$ is the width of $I^{LR}$, and $inC$ is the number of channels of $V$. In the feature tensor $V$, the feature vector $V_{i,j} \in \mathbb{R}^{inC}$ corresponds to the feature representation at the pixel $(i, j)$ of the LR image.

2.1. Proposed Regularized Pattern Extraction Method

The regularized pattern extraction method is proposed in this section to guide the image upscaling reconstruction. Different pixel positions on the input LR image $I^{LR}$ contain different image content, such as relatively smooth background regions or object edges that change drastically. Their differences are manifested in the features $F^{LR}$. We define the regularized pattern $P$ as
$P = p(F^{LR}) = S\big(\sigma(F^{LR} * W_1 + B_1) * W_2 + B_2\big)$ (1)
where $p(\cdot)$ is the regularized pattern extraction function, $*$ is the convolution operator, $W_1$ and $B_1$ are the weight and bias of the first convolution layer of the regularized pattern extraction network, $W_2$ and $B_2$ are the weight and bias of the second convolution layer, $\sigma(\cdot)$ is the ReLU activation function, and $S(\cdot)$ is the Sigmoid activation function.
The regularized pattern defined in (1) is an abstraction of the LR feature $F^{LR}$: features that can distinguish the content at different positions are merged to obtain the regularized pattern. During model training, the weights of the convolution kernels in the regularized pattern extraction network defined in (1) are constantly updated under the constraint of the L1 loss function, focusing on the features that best distinguish the content information so as to seek the regularized pattern vector with the least structural risk.
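To make Equation (1) concrete, the following is a minimal PyTorch sketch of the two-layer regularized pattern extractor; the channel counts (inC = 64, outC = 3) and the 3 × 3 kernel size are assumptions for illustration (outC = 3 follows Section 3.2), and the class is not the authors' released code.

```python
import torch
import torch.nn as nn

class RegularizedPatternExtractor(nn.Module):
    """Sketch of Eq. (1): P = S(sigma(F_LR * W1 + B1) * W2 + B2)."""
    def __init__(self, in_channels=64, out_channels=3, kernel_size=3):
        super().__init__()
        padding = kernel_size // 2
        self.conv1 = nn.Conv2d(in_channels, in_channels, kernel_size, padding=padding)
        self.conv2 = nn.Conv2d(in_channels, out_channels, kernel_size, padding=padding)

    def forward(self, f_lr):                 # f_lr: (N, inC, H, W) LR feature F_LR
        x = torch.relu(self.conv1(f_lr))     # sigma(F_LR * W1 + B1)
        return torch.sigmoid(self.conv2(x))  # S(... * W2 + B2), values in [0, 1]
```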

2.2. Proposed Dynamic Convolution Kernel Generation Method

In this section, a dynamic convolution kernel generation method is proposed to adaptively generate convolution kernels according to local image content, which is represented using the proposed regularized pattern described in Section 2.1.
First, task-level samples and data samples need to be generated. Suppose that the scale factor range is $[r_{min}, r_{max}]$ when performing SR reconstruction on LR images, and the probability of using each scale factor in the range for super-resolution reconstruction is equal; that is, the distribution $p(r)$ of the scale factor $r$ is a discrete uniform distribution over $[r_{min}, r_{max}]$:
$p(r) = U(r_{min}, r_{max})$ (2)
We use all the scale factors in the distribution (2) to downsample the training-set HR images and obtain the training-set LR images corresponding to different scale factors $r$. Each time, a scale factor $r_s$ is randomly selected from the distribution $p(r)$ as the current task, and then a pair of LR-HR image patches is randomly selected from the training set corresponding to the scale factor $r_s$ as a training sample.
Suppose that the height and width of the LR image patch are both $L$; then there are $L^2$ pixels on the LR image patch and $(L \times r_s)^2$ pixels on the corresponding reconstructed SR image patch. The weight prediction network needs to generate a convolution kernel for each pixel in the SR image matching its RP, position, and scale information. The generated convolution kernels are then used to map the LR image to the SR image. Therefore, the number of data samples in the current task with scale factor $r_s$ is $(L \times r_s)^2$.
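As an illustration of the task sampling above, the sketch below draws a scale factor from a discrete uniform grid and builds one LR-HR patch pair by bicubic downsampling; the patch size of 50 and the [1, 4] range with step 0.1 follow Section 3.2, while the use of OpenCV bicubic resizing and the helper name sample_task are assumptions.

```python
import random
import numpy as np
import cv2

def sample_task(hr_image, lr_patch_size=50, r_min=1.0, r_max=4.0, step=0.1):
    """Pick a scale factor r_s from a discrete uniform grid and build one LR-HR pair."""
    scales = np.round(np.arange(r_min, r_max + 1e-9, step), 1)
    r_s = float(random.choice(scales))                    # r_s ~ U(r_min, r_max), Eq. (2)
    hr_patch_size = int(lr_patch_size * r_s)
    h, w = hr_image.shape[:2]
    top = random.randint(0, h - hr_patch_size)
    left = random.randint(0, w - hr_patch_size)
    hr_patch = hr_image[top:top + hr_patch_size, left:left + hr_patch_size]
    lr_patch = cv2.resize(hr_patch, (lr_patch_size, lr_patch_size),
                          interpolation=cv2.INTER_CUBIC)  # bicubic downsampling (assumed)
    return lr_patch, hr_patch, r_s
```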
Second, given a scale factor $r_s$ and an input LR image $I^{LR}$ with height $L$ and width $W$, the LR feature obtained after passing $I^{LR}$ through the feature extraction network is $F^{LR}$. Then, $F^{LR}$ is highly abstracted to extract the regularized pattern $P$, which represents the structure at different positions so as to distinguish the image content at different locations:
$P = p_\alpha(F^{LR})$ (3)
where $p_\alpha$ is the regularized pattern extraction function and $\alpha$ is the parameter of the regularized pattern extraction network.
Third, for a pixel $(i, j)$ in the SR image, the mapped pixel position in the LR image is $(i', j')$, and the position and scale information is $M(i, j)$, which is obtained as follows. Suppose that for the pixel $(i, j)$ in $I^{SR}$, its mapping $(i', j')$ can always be found in $F^{LR}$, where $V_{i',j'}$ is most closely related to the RGB value of the pixel $(i, j)$ in $I^{SR}$. The mapping formula from $I^{SR}$ to $F^{LR}$ is [16]
$(i', j') = m(i, j) = \big(\lfloor i / r \rfloor, \lfloor j / r \rfloor\big)$ (4)
where $m(\cdot)$ is the position mapping function, $r$ is the scale factor, and $\lfloor \cdot \rfloor$ is the floor function. Then, for the feature vector $V_{i',j'}$ in the LR feature $F^{LR}$, the multiple corresponding pixels $(i, j)$ in $I^{SR}$ each have a different relative positional relationship with $V_{i',j'}$. The relative offset function is defined to express this difference [16]
$o(i) = \dfrac{i}{r} - \Big\lfloor \dfrac{i}{r} \Big\rfloor$ (5)
where $o(\cdot)$ is the relative offset function. Then, the position and scale information $M(i, j)$ at the pixel $(i, j)$ of the SR image $I^{SR}$ is [16]
$M(i, j) = \Big(o(i), o(j), \dfrac{1}{r}\Big) = \Big(\dfrac{i}{r} - \Big\lfloor \dfrac{i}{r} \Big\rfloor,\ \dfrac{j}{r} - \Big\lfloor \dfrac{j}{r} \Big\rfloor,\ \dfrac{1}{r}\Big)$ (6)
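The mapped positions of Equation (4) and the position-and-scale vectors of Equation (6) can be computed for every SR pixel as in the following NumPy sketch; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def position_scale_info(lr_h, lr_w, r):
    """Return mapped LR coords (i', j') and M(i, j) = (o(i), o(j), 1/r) for all SR pixels."""
    sr_h, sr_w = int(lr_h * r), int(lr_w * r)
    i = np.arange(sr_h).reshape(-1, 1).repeat(sr_w, axis=1)   # SR row index grid
    j = np.arange(sr_w).reshape(1, -1).repeat(sr_h, axis=0)   # SR column index grid
    i_map, j_map = np.floor(i / r).astype(int), np.floor(j / r).astype(int)  # Eq. (4)
    o_i, o_j = i / r - i_map, j / r - j_map                   # relative offsets, Eq. (5)
    m = np.stack([o_i, o_j, np.full_like(o_i, 1.0 / r)], axis=-1)  # Eq. (6), shape (sr_h, sr_w, 3)
    return i_map, j_map, m
```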
Fourth, the corresponding regularized pattern is the vector $P_{i',j'}$ at the mapped position $(i', j')$ of $P$. For different pixels, the regularized pattern, location, and scale information are different; that is, the relative offset from the mapped location and the structural information of that location are unique. We generate the best-matched convolution weights for each pixel as
$W_{i,j} = F_\theta\big(P_{i',j'}, M(i, j)\big)$ (7)
where $F_\theta$ is the convolution weight prediction function, $\theta$ is the parameter of the convolution weight prediction network, and $W_{i,j}$ is the convolution weight corresponding to the pixel $(i, j)$ in the SR image. The convolution weight prediction network generates a total of $\lfloor L \times r \rfloor \times \lfloor W \times r \rfloor$ convolution kernels, which form the convolution weight set $W_{set}$:
$W_{set} = F_\theta(P, M)$ (8)
Fifth, for the gray value of the pixel $(i, j)$ in the SR image, the LR feature $F^{LR}_{i',j'}$ at the mapped position $(i', j')$ in the LR image is the most closely related. Performing the matrix product of the convolution weight $W_{i,j}$ and the LR feature $F^{LR}_{i',j'}$, we obtain the gray value $V_{i,j}$ of the pixel:
$V_{i,j} = F^{LR}_{i',j'} W_{i,j}$ (9)
The entire SR image is obtained by upsampling the LR features:
$I^{SR} = f_{W_{set}}(F^{LR}) = f_{F_\theta(p_\alpha(F^{LR}),\, M)}(F^{LR})$
where $f_{W_{set}}$ is the upsampling function and $W_{set}$ is the convolution kernel weight set.
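The following sketch illustrates Equations (7)-(9): for each SR pixel, the regularized pattern vector at the mapped LR position is concatenated with M(i, j), fed to the weight prediction network F_theta, and the resulting kernel is applied to the unfolded LR feature by a matrix product. The explicit per-pixel loop, the k × k unfolding, and the tensor shapes are illustrative assumptions; a practical implementation would vectorize this.

```python
import torch

def dynamic_upscale(f_lr, p, m, i_map, j_map, weight_net, out_c=3, k=3):
    """f_lr: (inC, H, W) LR feature; p: (3, H, W) regularized pattern (torch tensors);
    m: (srH, srW, 3) position/scale info; weight_net: F_theta mapping a 6-dim vector
    to outC*inC*k*k kernel weights (Eq. (7))."""
    m = torch.as_tensor(m, dtype=torch.float32)
    in_c, h, w = f_lr.shape
    sr_h, sr_w = m.shape[:2]
    # Unfold k x k neighbourhoods so each LR position carries an inC*k*k feature vector.
    f_unfold = torch.nn.functional.unfold(f_lr.unsqueeze(0), k, padding=k // 2)  # (1, inC*k*k, H*W)
    f_unfold = f_unfold.view(in_c * k * k, h, w)
    sr = torch.zeros(out_c, sr_h, sr_w)
    for si in range(sr_h):
        for sj in range(sr_w):
            li, lj = i_map[si, sj], j_map[si, sj]              # mapped LR position (i', j')
            v_in = torch.cat([p[:, li, lj], m[si, sj]])        # concat(P_{i',j'}, M(i, j)), dim 6
            w_ij = weight_net(v_in).view(out_c, in_c * k * k)  # Eq. (7): per-pixel kernel
            sr[:, si, sj] = w_ij @ f_unfold[:, li, lj]         # Eq. (9): matrix product
    return sr
```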
Sixth, for the generated SR image patch, the L1 loss function is used to measure the error between the SR image patch and the HR image patch:
$L_s = \big\| I^{SR} - I^{HR} \big\|_1 = \big\| f_{F_\theta(p_\alpha(F^{LR}),\, M)}(F^{LR}) - I^{HR} \big\|_1$
where $L_s$ is the error between $I^{SR}$ and $I^{HR}$ in the current task with the scale factor $r_s$. In each task, the regularized pattern extraction network parameters $\alpha$ and the convolution weight prediction network parameters $\theta$ are updated using gradient descent:
$\alpha' = \alpha - \beta \nabla_\alpha L_s = \alpha - \beta \nabla_\alpha \big\| f_{F_\theta(p_\alpha(F^{LR}),\, M)}(F^{LR}) - I^{HR} \big\|_1$
$\theta' = \theta - \beta \nabla_\theta L_s = \theta - \beta \nabla_\theta \big\| f_{F_\theta(p_\alpha(F^{LR}),\, M)}(F^{LR}) - I^{HR} \big\|_1$
where $\alpha$ and $\theta$ are the parameters before the update, $\alpha'$ and $\theta'$ are the parameters after the update, and $\beta$ is the learning rate.
By continuously sampling different scale factors from the distribution as different tasks to train the model, the parameters $\alpha$ and $\theta$ are continuously updated. The purpose of meta-learning training is to obtain parameters $\alpha$ and $\theta$ such that the sum of the task losses over all scale factors sampled from the distribution $p(r)$ is minimized (a minimal training-loop sketch is given at the end of this subsection).
Finally, we use the trained network for inference. Suppose that the scale factor of the current task is $r$ and the input LR image has height $L$ and width $W$; then the SR image has height $\lfloor L \times r \rfloor$ and width $\lfloor W \times r \rfloor$. For each pixel in the SR image, the convolution weight prediction network generates a convolution kernel matching its regularized pattern according to Equation (7). The generated convolution kernel is then used to map the LR features of the corresponding positions to RGB values according to Equation (9), and finally the SR image is formed. Figure 2 shows examples of SR images generated with scale factors of 1.6, 2.2, 2.8, 3.4, and 4.0.
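As noted above, a minimal meta-training loop consistent with the L1 loss and the gradient-descent updates of alpha and theta is sketched below; feature_net, rp_net, weight_net, upscale_fn, sample_task, and to_tensor are the illustrative helpers assumed in the earlier sketches, and including the feature extractor in the parameter list is a simplification.

```python
import random
import torch

def to_tensor(img):
    """Hypothetical helper: HWC uint8 image -> (1, C, H, W) float tensor in [0, 1]."""
    return torch.from_numpy(img.copy()).permute(2, 0, 1).float().unsqueeze(0) / 255.0

def meta_train(feature_net, rp_net, weight_net, upscale_fn, hr_images, iterations=1000, lr=1e-4):
    """Update alpha (rp_net) and theta (weight_net) with the L1 loss over random-scale tasks."""
    params = (list(feature_net.parameters()) + list(rp_net.parameters())
              + list(weight_net.parameters()))           # feature net included for simplicity
    optimizer = torch.optim.Adam(params, lr=lr)
    l1 = torch.nn.L1Loss()
    for _ in range(iterations):
        lr_patch, hr_patch, r_s = sample_task(random.choice(hr_images))  # one task, scale r_s
        f_lr = feature_net(to_tensor(lr_patch))          # F_LR
        p = rp_net(f_lr)                                 # P = p_alpha(F_LR)
        sr = upscale_fn(f_lr, p, weight_net, r_s)        # Eqs. (7)-(9), assumed wrapper
        loss = l1(sr, to_tensor(hr_patch))               # L_s = ||I_SR - I_HR||_1
        optimizer.zero_grad()
        loss.backward()                                  # gradients w.r.t. alpha and theta
        optimizer.step()                                 # gradient-descent update
```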

2.3. Justification of the Proposed Dynamic Convolution Kernel Generation Method

To demonstrate the various convolution kernels generated according to different image content, an experiment is conducted as follows.
Assuming that the scale factor $r$ is 2, for the pixel $X_{i,j}$ in the low-resolution image $I^{LR}$, four convolution kernels $W_{2i,2j}$, $W_{2i+1,2j}$, $W_{2i,2j+1}$, and $W_{2i+1,2j+1}$ are generated. We define these convolution kernels as a convolution kernel group $G_{i,j}$ at this pixel location, whose members are denoted $G^1_{i,j}$, $G^2_{i,j}$, $G^3_{i,j}$, and $G^4_{i,j}$. The variation of the convolution kernel group $G_{i,j}$ at the pixel $X_{i,j}$ in $I^{LR}$ is defined as
$C_{i,j} = \sum_{p, q \in \{-1, 0, 1\}} \sum_{x \in \{1, 2, 3, 4\}} D\big(G^x_{i,j}, G^x_{i+p,j+q}\big)$ (10)
where $C_{i,j}$ is the variation of the convolution kernel group $G_{i,j}$ at the pixel $X_{i,j}$ in $I^{LR}$, and $D(\cdot)$ calculates the variation between two convolution kernels, defined as
$D\big(G^x_{i,j}, G^x_{i+p,j+q}\big) = \sum_{m \in G^x_{i,j},\ n \in G^x_{i+p,j+q}} \mathrm{abs}(m - n)$ (11)
where $\mathrm{abs}(\cdot)$ is the absolute value function, and $m$ and $n$ are the values at corresponding positions in the two convolution kernels.
In our experiment, we use the test image 253027 from the B100 dataset [19], the img59 image from the Urban100 dataset [20], and the YumeiroCooking image from the Manga109 dataset [21] as the test images. We apply Equation (10) to these images to obtain the variation value of the convolution kernel group at each position and then normalize the values to the range [0, 255]. These values are visualized as color images using COLORMAP_JET in OpenCV.
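A minimal sketch of this visualization is given below, assuming the four kernels generated at each LR pixel are already collected into an array of shape (H, W, 4, K); wrap-around boundary handling via np.roll is an assumption, and the final call uses OpenCV's COLORMAP_JET as stated above.

```python
import numpy as np
import cv2

def kernel_group_variation_map(groups):
    """groups: (H, W, 4, K) array of the four kernels generated at each LR pixel (scale 2)."""
    h, w = groups.shape[:2]
    c = np.zeros((h, w), dtype=np.float64)
    for p in (-1, 0, 1):
        for q in (-1, 0, 1):
            shifted = np.roll(groups, shift=(p, q), axis=(0, 1))          # neighbouring kernel groups
            c += np.abs(groups - shifted).sum(axis=(2, 3))                # Eqs. (10)-(11)
    c = cv2.normalize(c, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)  # scale to [0, 255]
    return cv2.applyColorMap(c, cv2.COLORMAP_JET)                         # JET colormap visualization
```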
As seen from Figure 3, in the grassland, the sky, and large-area color blocks, where the content changes slowly and the regularized pattern is relatively simple, the variation of the convolution kernel group is very small; the convolution kernel group at these pixels is very similar to those of their neighboring pixels. On the contrary, the zebra stripes, clothing patterns, and architectural textures change drastically; the regularized pattern carries rich information, and the convolution kernel group varies considerably. The regularized pattern guides the generation of the convolution kernel, prompting the convolution weight prediction network to generate a well-matched convolution kernel for each position.

2.4. Proposed Image Super-Resolution Approach

An overview of the proposed network structure is shown in Figure 4. It contains three parts: (i) a feature extraction network, (ii) a regularized pattern extraction network, and (iii) a convolution weight prediction network. We name our network Regularized Pattern-Based RDN (RPB-RDN), since we choose RDN [10] as the feature extraction network of the first part, as is also done in Meta-RDN [16] and LIIF-RDN [17]. The regularized pattern extraction network (second part) and the convolution weight prediction network (third part) are presented below.
The regularized pattern extraction network consists of two convolutional layers, a ReLU activation function layer, and a Sigmoid activation function layer. The numbers of input and output channels of the Conv1 layer are both $inC$, and the ReLU activation function layer performs a nonlinear mapping on the LR feature $F^{LR}$. The number of input channels of the Conv2 layer is $inC$ and the number of output channels is $outC$, so that the final regularized pattern has a suitable number of channels. Finally, the Sigmoid activation function layer maps the regularized pattern to [0, 1] so that it has the same value range as the position and scale information.
The convolution weight prediction network consists of two fully connected layers and a ReLU activation function layer. We concatenate the regularized pattern vector of the current position with the position and scale information to obtain the vector $V_{in}$ as the input of the first fully connected layer. In our network, the dimensions of the regularized pattern vector $P_{i',j'}$ and the position and scale information vector $M(i, j)$ are both 3, so the dimension of $V_{in}$ is 6. Considering that the output vector dimension of the entire convolution weight prediction network is $outC \times inC \times k \times k$, we set the number of output units of the first fully connected layer to 256 to preserve the diversity of the network output while ensuring speed. The number of input units of the second fully connected layer is therefore 256, and its output is a vector $V_{out}$ of dimension $outC \times inC \times k \times k$. We then reshape $V_{out}$ into a group of convolution kernels; the number of convolution kernels equals the number of SR image channels $outC$, and each convolution kernel has $inC \times k \times k$ parameters. This convolution weight prediction network is expressed as
$V_{out} = W_2\, \sigma(W_1 V_{in} + b_1) + b_2$ (12)
where $W_i$ is the weight of the $i$th fully connected layer, $b_i$ is the bias of the $i$th fully connected layer, and $\sigma(\cdot)$ is the ReLU activation function.
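A corresponding sketch of this two-layer prediction network is given below; the input dimension of 6 and the hidden size of 256 follow the description above, while inC = 64 and k = 3 are assumptions about the feature width and kernel size.

```python
import torch.nn as nn

class WeightPredictionNet(nn.Module):
    """Sketch of Eq. (12): V_out = W2 * ReLU(W1 * V_in + b1) + b2."""
    def __init__(self, in_dim=6, hidden=256, out_c=3, in_c=64, k=3):
        super().__init__()
        self.fc1 = nn.Linear(in_dim, hidden)
        self.fc2 = nn.Linear(hidden, out_c * in_c * k * k)
        self.relu = nn.ReLU(inplace=True)
        self.out_c, self.in_c, self.k = out_c, in_c, k

    def forward(self, v_in):                       # v_in: (..., 6) = concat(P, M)
        v_out = self.fc2(self.relu(self.fc1(v_in)))
        return v_out.view(*v_in.shape[:-1], self.out_c, self.in_c * self.k * self.k)
```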

3. Experimental Results

To evaluate the performance of the proposed RPB-RDN network and its various proposed components, including the proposed regularized pattern extraction network and the convolution weight prediction method, extensive experimental results are provided in this section, including the comparison between RPB-RDN and other SOTA methods.

3.1. Experimental Setup

In this paper, the high-resolution image set DIV2K is used. There are 1000 images in DIV2K: 800 for training, 100 for validation, and 100 for testing. All experimental models are trained with the DIV2K training set. For testing, five standard benchmark datasets are used: Set5 [22], Set14 [23], B100 [19], Urban100 [20], and Manga109 [21]. The PSNR and SSIM metrics are used to evaluate the results of image super-resolution reconstruction. All metrics are calculated on the Y channel of the YCbCr color space of the image. Given two images, the formulas of PSNR and SSIM [24] are
$PSNR = 10 \times \log_{10}\!\Big(\dfrac{MaxV^2}{MSE}\Big)$ (13)
$SSIM(x, y) = \dfrac{(2\mu_x \mu_y + c_1)(2\sigma_{xy} + c_2)}{(\mu_x^2 + \mu_y^2 + c_1)(\sigma_x^2 + \sigma_y^2 + c_2)}$ (14)
where $MaxV$ is the maximum intensity value that image pixels can take, $MSE$ is the mean squared error between the two images, $\mu_x$ and $\mu_y$ are the average intensity values of images $x$ and $y$, $\sigma_x^2$ and $\sigma_y^2$ are the variances of images $x$ and $y$, $\sigma_{xy}$ is the covariance of images $x$ and $y$, and $c_1$ and $c_2$ are constants used to maintain stability [24].
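A minimal evaluation sketch consistent with Equations (13) and (14) is given below; it converts RGB to the Y channel with the standard BT.601 weights (the exact conversion variant used by the authors is not stated) and reuses scikit-image's SSIM implementation instead of re-deriving Equation (14).

```python
import numpy as np
from skimage.metrics import structural_similarity

def rgb_to_y(img):
    """RGB uint8 -> luminance (Y) channel, ITU-R BT.601 weights (assumed variant)."""
    img = img.astype(np.float64)
    return 0.299 * img[..., 0] + 0.587 * img[..., 1] + 0.114 * img[..., 2]

def psnr(sr, hr, max_val=255.0):
    mse = np.mean((rgb_to_y(sr) - rgb_to_y(hr)) ** 2)   # MSE on the Y channel
    return 10.0 * np.log10(max_val ** 2 / mse)          # Eq. (13)

def ssim(sr, hr):
    return structural_similarity(rgb_to_y(sr), rgb_to_y(hr), data_range=255.0)  # Eq. (14)
```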

3.2. Implementation Details

We use the L1 loss function to train the network. During training, 8 low-resolution image patches of size 50 × 50 are randomly selected as a batch input. We augment the patches by flipping horizontally or vertically and randomly rotating by 90°. The optimizer is Adam, and the learning rate is initialized to 0.0001 and reduced every 400 epochs. All experiments are run in parallel on 2 GPUs. The training scale factor varies from 1 to 4 with a step size of 0.1, and the distribution of the scale factors is uniform. Each image patch in a batch has the same scale factor. The dimension of the regularized pattern vector $P_{i,j}$ is set to 3, which speeds up the matching and improves the reconstruction quality.
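The patch augmentation described above can be sketched as follows; applying each transform with probability 0.5 is an assumption, since the exact probabilities are not stated.

```python
import random
import numpy as np

def augment(lr_patch, hr_patch):
    """Random horizontal/vertical flips and 90-degree rotation applied jointly to an LR-HR pair."""
    if random.random() < 0.5:                                   # horizontal flip
        lr_patch, hr_patch = lr_patch[:, ::-1], hr_patch[:, ::-1]
    if random.random() < 0.5:                                   # vertical flip
        lr_patch, hr_patch = lr_patch[::-1, :], hr_patch[::-1, :]
    if random.random() < 0.5:                                   # 90-degree rotation
        lr_patch, hr_patch = np.rot90(lr_patch), np.rot90(hr_patch)
    return np.ascontiguousarray(lr_patch), np.ascontiguousarray(hr_patch)
```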

3.3. Performance Evaluation on the Proposed Regularized Pattern Extraction Method

To study the impact of the regularized pattern extraction method, an experiment is conducted to compare two network structures. The first (denoted 'baseline model') is a single-layer convolutional network, which only performs a limited linear transformation on the LR features. The second is our proposed network. Since the proposed model uses the Sigmoid activation function at the end of the network, the regularized pattern has the same value range as the position and scale information, which helps to identify the relationship between input and output and speeds up network convergence. Table 1 shows that the proposed model achieves better results in the X2, X3, and X4 SR tasks on the three datasets B100, Urban100, and Manga109, with an average increase of 0.04 dB in PSNR and 0.0005 in SSIM compared with the baseline model.

3.4. Performance Evaluation on the Proposed Convolution Weight Prediction Method

To verify the effectiveness of the convolution weight prediction method based on the regularized pattern, an experiment is conducted on the benchmark dataset B100 [19] with scale factors ranging from 1.1 to 4.0 in steps of 0.1, using Meta-RDN [16] and our RPB-RDN model, respectively. As shown in Table 2, the proposed model, which integrates the convolution weight prediction method based on the regularized pattern, achieves better results than Meta-RDN [16] in all tasks with different scale factors. Over the thirty tasks, RPB-RDN improves PSNR by 0.06 dB on average over Meta-RDN [16].

3.5. Performance Evaluation on the Inference Time

In this experiment, we compare the running times of RDN [10], Meta-RDN [16], LIIF-RDN [17], and our RPB-RDN using an Intel Xeon 4210 CPU and an NVIDIA 2080 Ti GPU. We choose B100 [19] as the test dataset and exclude the image pre-processing time. The experimental results are shown in Table 3. The meta-upsampling module in RPB-RDN is more time-consuming than the sub-pixel convolutional layer in RDN [10], so the overall running time of RPB-RDN is longer than that of RDN [10]. Compared with Meta-RDN [16], RPB-RDN adds a regularized pattern extraction network, so its overall running time increases, but the difference is small. LIIF-RDN [17] uses a multi-layer perceptron that is more time-consuming than convolutional layers, so the overall running time of LIIF-RDN [17] is longer than that of RPB-RDN.

3.6. The Superiority of the Proposed Method in Texture Reconstruction

We construct a texture dataset, named Texture, by cropping the central part of the images from the five benchmark datasets Set5 [22], Set14 [23], B100 [19], Urban100 [20], and Manga109 [21]. The size of the cropped image is 1/16 of the original image. The foreground part in the center of an image generally has richer textures than the background and is more difficult to restore. Comparing the SR results on the Texture dataset further probes the texture restoration ability of the various methods.
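A minimal sketch of the cropping rule is given below, assuming "1/16 of the original image" refers to the area, i.e., a centered crop of one quarter of the height and width.

```python
def crop_center(img, fraction=4):
    """Central crop whose height and width are 1/4 of the original,
    i.e. 1/16 of the original area (interpretation assumed)."""
    h, w = img.shape[:2]
    ch, cw = h // fraction, w // fraction
    top, left = (h - ch) // 2, (w - cw) // 2
    return img[top:top + ch, left:left + cw]
```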
We use Meta-RDN [16], LIIF-RDN [17], and RPB-RDN to perform the X2, X3, and X4 super-resolution reconstruction tasks on the proposed Texture dataset. The experimental results are shown in Table 4. RPB-RDN achieves better results than Meta-RDN [16] and LIIF-RDN [17] on all scales, which demonstrates the advantage of the proposed content-adaptive convolution kernel generation method for texture restoration. On the PSNR metric, RPB-RDN achieves average improvements of 0.13 dB and 0.14 dB over Meta-RDN [16] and LIIF-RDN [17], respectively. On the SSIM metric, RPB-RDN achieves average improvements of 0.0007 and 0.0008 over Meta-RDN [16] and LIIF-RDN [17], respectively.

3.7. Performance Comparison with Other SOTA Methods

We use the proposed network model to perform X2, X3, and X4 super-resolution reconstruction on the benchmark datasets Set5 [22], Set14 [23], B100 [19], Urban100 [20], and Manga109 [21], and compare the results with RDN [10], Meta-RDN [16], and LIIF-RDN [17]. As seen from Table 5, our method achieves better results than these state-of-the-art methods in most of the reconstruction tasks. Over all fifteen tasks, our method outperforms RDN [10], Meta-RDN [16], and LIIF-RDN [17] by averages of 0.10 dB, 0.11 dB, and 0.11 dB in PSNR, and 0.0007, 0.0003, and 0.0005 in SSIM, respectively. Especially on the high-resolution benchmark datasets Urban100 [20] and Manga109 [21], whose images have richer details, our proposed model achieves improvements of 0.22 dB, 0.21 dB, and 0.21 dB in PSNR, and 0.0016, 0.0007, and 0.0010 in SSIM, over RDN [10], Meta-RDN [16], and LIIF-RDN [17], respectively.

3.8. Qualitative Results

Finally, we compare the SR images generated by our RPB-RDN with those generated by Bicubic, RDN [10], Meta-RDN [16], and LIIF-RDN [17]. As seen in Figure 5, our method recovers textures that other methods reconstruct incorrectly, especially zebra stripes, patterns on clothes, and lines of buildings. Owing to our regularized pattern-based convolution kernel generation method, pixels with different regularized patterns receive different convolution kernels matching their patterns, so the generated SR image has stronger texture consistency with the HR image.

4. Conclusions

In this paper, we propose a dynamic kernel generation method based on the regularized pattern for super-resolution image reconstruction. It can generate convolution kernels for different pixels that match their regularized pattern so that the generated SR images have stronger texture consistency with HR images. Experiments show that our proposed method achieves better performance than other state-of-the-art approaches.

Author Contributions

Conceptualization, H.F., L.M. and J.T.; methodology, H.F., L.M. and J.T.; software, H.F. and L.M.; validation, H.F. and L.M.; writing—original draft preparation, H.F., L.M. and J.T.; writing—review and editing, H.F., L.M. and J.T.; supervision, L.M. and J.T.; project administration, L.M.; funding acquisition, L.M. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by China GDSF No. 2019A1515011949.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jian, M.; Lam, K. Simultaneous Hallucination and Recognition of Low-Resolution Faces Based on Singular Value Decomposition. IEEE Trans. Circuits Syst. Video Technol. 2015, 25, 1761–1772. [Google Scholar] [CrossRef]
  2. Jian, M.; Cui, C.; Nie, X.; Zhang, H.; Nie, L.; Yin, Y. Multi-view face hallucination using SVD and a mapping model. Inf. Sci. 2019, 488, 181–189. [Google Scholar] [CrossRef]
  3. Dong, C.; Loy, C.C.; He, K.; Tang, X. Learning a deep convolutional network for image super-resolution. In Proceedings of the ECCV 2014, Zurich, Switzerland, 6–12 September 2014; Springer: Cham, Switzerland, 2014; pp. 184–189. [Google Scholar]
  4. Kim, J.; Lee, J.K.; Lee, K.M. Accurate image super-resolution using very deep convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; IEEE Computer Society: Los Alamitos, CA, USA, 2016; pp. 1646–1654. [Google Scholar]
  5. Kim, J.; Lee, J.K.; Lee, K.M. Deeply-recursive convolutional network for image super-resolution. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE Computer Society: Los Alamitos, CA, USA, 2016; pp. 1637–1645. [Google Scholar]
  6. Tai, Y.; Yang, J.; Liu, X. Image super-resolution via deep recursive residual network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; IEEE Computer Society: Los Alamitos, CA, USA, 2017; pp. 2790–2798. [Google Scholar]
  7. Tai, Y.; Yang, J.; Liu, X.; Xu, C. Memnet: A persistent memory network for image restoration. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; IEEE Computer Society: Los Alamitos, CA, USA, 2017; pp. 4549–4557. [Google Scholar]
  8. Shi, W.; Caballero, J.; Huszar, F. Real-time single image and video super-resolution using an efficient sub-pixel convolutional neural network. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; IEEE Computer Society: Los Alamitos, CA, USA, 2016; pp. 1874–1883. [Google Scholar]
  9. Lim, B.; Son, S.; Kim, H. Enhanced deep residual networks for single image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; IEEE Computer Society: Los Alamitos, CA, USA, 2017; pp. 1132–1140. [Google Scholar]
  10. Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual dense network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; IEEE Computer Society: Los Alamitos, CA, USA, 2018; pp. 2472–2481. [Google Scholar]
  11. Liu, J.; Zhang, W.; Tang, Y.; Tang, J.; Wu, G. Residual Feature Aggregation Network for Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Seattle, WA, USA, 14–19 June 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 2356–2365. [Google Scholar]
  12. Dai, T.; Cai, J.; Zhang, Y.; Xia, S.; Zhang, L. Second-order Attention Network for Single Image Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; IEEE Computer Society: Los Alamitos, CA, USA, 2019; pp. 11057–11066. [Google Scholar]
  13. Li, L.; Feng, H.; Zheng, B.; Ma, L.; Tian, J. DID: A nested dense in dense structure with variable local dense blocks for super-resolution image reconstruction. In Proceedings of the 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 2582–2589. [Google Scholar]
  14. Hospedales, T.; Antoniou, A.; Micaelli, P.; Storkey, A. Meta-Learning in Neural Networks: A Survey. arXiv 2020, arXiv:2004.05439. [Google Scholar] [CrossRef] [PubMed]
  15. Ha, D.; Dai, A.; Le, Q.V. HyperNetworks. In Proceedings of the International Conference on Learning Representations (ICLR), Toulon, France, 24–26 April 2017. [Google Scholar]
  16. Hu, X.; Mu, H.; Zhang, X.; Wang, Z.; Tan, T.; Sun, J. Meta-SR: A Magnification-Arbitrary Network for Super-Resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 16–20 June 2019; IEEE Computer Society: Los Alamitos, CA, USA, 2019; pp. 1575–1584. [Google Scholar]
  17. Chen, Y.; Liu, S.; Wang, X. Learning Continuous Image Representation with Local Implicit Image Function. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
  18. Li, D.; Hu, J.; Wang, C.; Li, X.; She, Q.; Zhu, L.; Zhang, T.; Chen, Q. Involution: Inverting the Inherence of Convolution for Visual Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Nashville, TN, USA, 20–25 June 2021. [Google Scholar]
  19. Martin, D.; Fowlkes, C.; Tal, D.; Malik, J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the Eighth IEEE International Conference on Computer Vision. ICCV 2001, Vancouver, BC, Canada, 7–14 July 2001; IEEE: Piscataway, NJ, USA, 2001; pp. 416–423. [Google Scholar]
  20. Huang, J.-B.; Singh, A.; Ahuja, N. Single image super-resolution from transformed self-exemplars. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; IEEE: Piscataway, NJ, USA, 2015; pp. 5197–5206. [Google Scholar]
  21. Matsui, Y.; Ito, K.; Aramaki, Y.; Fujimoto, A.; Ogawa, T.; Yamasaki, T.; Aizawa, K. Sketch-based manga retrieval using manga109 dataset. Multimed. Tools Appl. 2017, 76, 21811–21838. [Google Scholar] [CrossRef] [Green Version]
  22. Bevilacqua, M.; Roumy, A.; Guillemot, C.; Morel, A. Low-complexity single-image super-resolution based on nonnegative neighbor embedding. In Proceedings of the British Machine Vision Conference (BMVC), Guildford, UK, 3–7 September 2012. [Google Scholar]
  23. Zeyde, R.; Elad, M.; Protter, M. On single image scale-up using sparse representations. In Proceedings of the International Conference on Curves and Surfaces, Avignon, France, 24–30 June 2010; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  24. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
Figure 1. A conceptual illustration of our proposed SR method. Pixels at different positions are upsampled using different convolution kernels that match the regularized pattern of the current position.
Figure 2. SR images generated by the proposed method at multiple scales.
Figure 3. Visualization of dynamic convolution weights that are generated by our proposed approach. The first column presents the original test images, and the second column presents the visualized convolution weight variation values calculated using Equation (10). (a) 253027 from B100 dataset [19]. (b) img59 from Urban100 dataset [20]. (c) YumeiroCooking from Manga109 dataset [21].
Figure 4. An overview of the network structure of the proposed SR approach.
Figure 5. Qualitative performance comparison of various SR approaches.
Table 1. PSNR (dB) and SSIM performance on proposed regularized pattern extraction network. The best performance is highlighted in the bold format.
Methods            Metric   B100 [19]                 Urban100 [20]             Manga109 [21]
                            X2      X3      X4        X2      X3      X4        X2      X3      X4
Baseline model     PSNR     32.34   29.27   27.75     33.00   28.90   26.68     39.31   34.41   31.36
                   SSIM     0.9011  0.8089  0.7417    0.9359  0.8668  0.8042    0.9781  0.9491  0.9173
RPB-RDN (Ours)     PSNR     32.36   29.30   27.76     33.04   28.95   26.73     39.35   34.46   31.39
                   SSIM     0.9014  0.8095  0.7421    0.9363  0.8677  0.8054    0.9782  0.9494  0.9177
Table 2. PSNR (dB) performance evaluation on the proposed convolution weight prediction method using the B100 dataset [19]. The best performance is highlighted in the bold format.
Methods           X1.1   X1.2   X1.3   X1.4   X1.5   X1.6   X1.7   X1.8   X1.9   X2.0
Bicubic           36.56  35.01  33.84  32.93  32.14  31.49  30.90  30.38  29.97  29.55
Meta-RDN [16]     42.82  40.04  38.28  36.95  35.86  34.90  34.13  33.45  32.86  32.35
Ours              43.03  40.11  38.34  36.96  35.86  34.91  34.14  33.46  32.86  32.36
Methods           X2.1   X2.2   X2.3   X2.4   X2.5   X2.6   X2.7   X2.8   X2.9   X3.0
Bicubic           29.18  28.87  28.57  28.31  28.13  27.89  27.66  27.51  27.31  27.19
Meta-RDN [16]     31.82  31.41  31.06  30.62  30.45  30.13  29.82  29.67  29.40  29.30
Ours              31.88  31.45  31.07  30.75  30.48  30.17  29.95  29.72  29.49  29.30
Methods           X3.1   X3.2   X3.3   X3.4   X3.5   X3.6   X3.7   X3.8   X3.9   X4.0
Bicubic           26.98  26.89  26.59  26.60  26.42  26.35  26.15  26.07  26.01  25.96
Meta-RDN [16]     28.87  28.79  28.68  28.54  28.32  28.27  28.04  27.92  27.82  27.75
Ours              29.09  28.90  28.73  28.57  28.42  28.27  28.15  28.01  27.88  27.76
Table 3. The running time in B100 of various SR approaches (ms). The best performance is highlighted in the bold format.
Methods            X2     X3     X4
RDN [10]           12.8   12.9   13.0
Meta-RDN [16]      14.4   14.8   16.4
LIIF-RDN [17]      21.3   22.7   24.9
RPB-RDN (Ours)     15.1   15.3   16.5
Table 4. The PSNR (dB) and SSIM performance in texture dataset Texture. The best performance is highlighted in the bold format.
Methods            Metric           X2              X3              X4
Meta-RDN [16]      PSNR (dB)/SSIM   34.22/0.9312    30.30/0.8601    28.08/0.7976
LIIF-RDN [17]      PSNR (dB)/SSIM   34.19/0.9309    30.31/0.8601    28.09/0.7975
RPB-RDN (Ours)     PSNR (dB)/SSIM   34.38/0.9318    30.45/0.8611    28.17/0.7981
Table 5. The PSNR (dB) and SSIM performance comparison of various SR approaches. The best performance is highlighted in the bold format.
The PSNR (dB) performance
Dataset          Scale Factor   Bicubic   RDN [10]   Meta-RDN [16]   LIIF-RDN [17]   Ours
Set5 [22]        X2             33.66     38.24      38.22           38.17           38.23
                 X3             30.39     34.71      34.63           34.68           34.74
                 X4             28.42     32.47      32.38           32.50           32.50
Set14 [23]       X2             30.24     34.01      34.04           33.97           34.05
                 X3             27.55     30.57      30.55           30.53           30.56
                 X4             26.00     28.81      28.84           28.80           28.86
B100 [19]        X2             29.56     32.34      32.35           32.32           32.36
                 X3             27.21     29.26      29.30           29.26           29.30
                 X4             25.96     27.72      27.75           27.74           27.76
Urban100 [20]    X2             26.88     32.89      32.92           32.87           33.04
                 X3             24.46     28.80      28.82           28.82           28.95
                 X4             23.14     26.61      26.55           26.68           26.73
Manga109 [21]    X2             30.80     39.18      39.18           39.01           39.35
                 X3             26.95     34.13      34.14           34.13           34.46
                 X4             24.89     31.00      31.03           31.18           31.39

The SSIM performance
Dataset          Scale Factor   Bicubic   RDN [10]   Meta-RDN [16]   LIIF-RDN [17]   Ours
Set5 [22]        X2             0.9299    0.9614     0.9611          0.9610          0.9611
                 X3             0.8682    0.9296     0.9298          0.9293          0.9298
                 X4             0.8104    0.8990     0.8989          0.8986          0.8990
Set14 [23]       X2             0.8688    0.9212     0.9213          0.9208          0.9214
                 X3             0.7742    0.8468     0.8466          0.8470          0.8470
                 X4             0.7027    0.7871     0.7872          0.7876          0.7881
B100 [19]        X2             0.8431    0.9017     0.9019          0.9010          0.9014
                 X3             0.7385    0.8093     0.8096          0.8096          0.8095
                 X4             0.6675    0.7419     0.7423          0.7422          0.7421
Urban100 [20]    X2             0.8403    0.9353     0.9361          0.9350          0.9363
                 X3             0.7349    0.8653     0.8674          0.8662          0.8677
                 X4             0.6577    0.8028     0.8054          0.8040          0.8054
Manga109 [21]    X2             0.9339    0.9780     0.9782          0.9780          0.9782
                 X3             0.8556    0.9484     0.9483          0.9487          0.9494
                 X4             0.7866    0.9151     0.9154          0.9170          0.9177
