Article

Harnessing Spatial-Frequency Information for Enhanced Image Restoration

Cheol-Hoon Park, Hyun-Duck Choi and Myo-Taeg Lim
1 Department of Intelligent Electronics and Computer Engineering, Chonnam National University, Gwangju 61186, Republic of Korea
2 Department of Smart ICT Convergence Engineering, Seoul National University of Science and Technology, Seoul 01811, Republic of Korea
3 School of Electrical Engineering, Korea University, Seoul 02841, Republic of Korea
* Authors to whom correspondence should be addressed.
Appl. Sci. 2025, 15(4), 1856; https://doi.org/10.3390/app15041856
Submission received: 30 December 2024 / Revised: 24 January 2025 / Accepted: 4 February 2025 / Published: 11 February 2025

Abstract

Image restoration aims to recover high-quality, clear images from those that have suffered visibility loss due to various types of degradation. Numerous deep learning-based approaches for image restoration have shown substantial improvements. However, there are two notable limitations: (a) Despite substantial spectral mismatches in the frequency domain between clean and degraded images, only a few approaches leverage information from the frequency domain. (b) Variants of attention mechanisms have been proposed for high-resolution images in low-level vision tasks, but these methods still require inherently high computational costs. To address these issues, we propose a Frequency-Aware Network (FreANet) for image restoration, which consists of two simple yet effective modules. We utilize a multi-branch/domain module that integrates latent features from the frequency and spatial domains using the discrete Fourier transform (DFT) and complex convolutional neural networks. Furthermore, we introduce a multi-scale pooling attention mechanism that employs average pooling along the row and column axes. We conducted extensive experiments on image restoration tasks, including defocus deblurring, motion deblurring, dehazing, and low-light enhancement. The proposed FreANet demonstrates remarkable results compared to previous approaches to these tasks.

1. Introduction

Image restoration, a classic task in computer vision, aims to recover a high-quality, clean image by removing specific types of degradation, such as haze, blur, and low light. Since images affected by degradation can adversely impact the performance of computer vision models, the importance of image restoration is emphasized in various applications, including autonomous driving, surveillance, and remote sensing. However, obtaining a single effective solution is challenging because image restoration is an inherently ill-posed problem. To address this challenge, conventional approaches [1,2] often rely on strong image priors, which are limited in producing consistent results across diverse, unseen environments and scenarios.
With the advent of deep learning, the image restoration task has experienced a rapid paradigm shift from conventional approaches to deep learning-based methods. Various deep learning models have been proposed, demonstrating substantial improvements over traditional approaches, such as multi-stage frameworks [3,4], normalization techniques [5], and attention mechanisms [6,7]. More recently, many researchers have shifted their focus to multi-head self-attention (MHA) [8], which captures long-range dependencies and demonstrates remarkable performance [7,9]. However, for an image with a resolution of $h \times w$, the computational complexity of MHA is $O(h^2w^2)$, which requires significant memory resources, making it unsuitable for high-resolution images.
Since various types of degradation distort texture, edges, and other features in an image, there is a substantial discrepancy between degraded and clean images in the frequency domain. Nonetheless, only a few approaches utilize the latent space of the frequency domain [10,11,12,13,14]. Among them, most methods process features by directly transforming them using various transformation tools, such as the wavelet transform [12,13] and the Fourier transform [14,15]. Some recently published studies designed learnable low-pass filters through pooling techniques [10,11] to implicitly process frequency information. These approaches have substantially improved restoration capabilities in simple yet powerful ways, demonstrating promising potential in image restoration.
However, previous methods [14,15] related to modeling the frequency space, particularly those using the discrete Fourier transform (DFT), apply two independent convolutional neural networks (CNNs) to process complex features consisting of real and imaginary components, e.g., $Y = X_{\mathrm{real}} W_{\mathrm{real}} + j(X_{\mathrm{imag}} W_{\mathrm{imag}})$. Because this formulation does not follow the rules of complex multiplication, it imposes an upper limit on performance.
In this study, we designed a Frequency-Aware Network (FreANet) to efficiently bridge the frequency gap between degraded and sharp images and capture spatial interactions with low computational complexity. Specifically, we propose a multi-domain feature extractor (MDFE) that adaptively aggregates spatial and frequency contexts for various types of degradation. The MDFE is composed of spatial and frequency branches, which utilize convolutions with large strip kernels ($1 \times k$ or $k \times 1$) to produce rich representations [16]. In the frequency branch, we introduce complex convolutions ($\mathbb{C}$CNNs) [17] to handle the complex features transformed via the DFT.
To reduce the computational complexity of the attention mechanism, we developed a multi-scale pooling attention (MPA) mechanism, which leverages unidirectional global average pooling (GAP) and strip convolutions with multiple scales. This design bypasses the quadratic computational complexity, $O(h^2w^2)$, of MHA, allowing the network to achieve linear computational complexity, $O(hw)$.
We conducted extensive experiments on four image restoration tasks, including defocus deblurring, motion deblurring, dehazing, and low-light enhancement. Compared to previous methods [18,19], the proposed FreANet demonstrates substantial restoration capacity with fewer parameters, achieving state-of-the-art performance on various datasets.
Our main contributions are summarized as follows:
  • We propose a multi-domain feature extractor (MDFE) that selectively integrates spatial and frequency information for different types of degradation and is designed to operate appropriately in the corresponding domain.
  • To efficiently capture global information, we propose a multi-scale pooling attention (MPA) mechanism, which utilizes unidirectional pooling functions and convolutions with multiple strip kernels, thereby reducing the computational complexity of vanilla attention from $O(h^2w^2)$ to $O(hw)$.
  • The proposed FreANet is evaluated on 11 benchmark datasets for various restoration tasks, demonstrating remarkable performance compared to other algorithms.

2. Related Works

2.1. Image Restoration

Image restoration aims to reproduce clean counterparts from images damaged by various types of degradation, and this field has been extensively studied for a long time. Conventional image restoration approaches restrict the clean latent space based on various assumptions and handcrafted preprocessing techniques [1,2]. Recently, data-driven methods designed with CNNs have shown substantial improvements in restoration capacity compared to conventional algorithms [3,4,5,20,21]. These CNN-based methods primarily utilize encoder–decoder frameworks, particularly U-shaped architectures, which effectively aggregate hierarchical representations. Furthermore, advanced techniques have been introduced to enhance performance, including dilated convolutions [4,22], multi-stage frameworks [3,23], contrastive learning [20,24], dense connections [25,26], and attention mechanisms [3,21]. More recently, transformer-based and MLP-mixer-based models have been designed to capture long-range dependencies, demonstrating remarkable performance in several image restoration tasks [7,9,23,27]. While these approaches perform restoration only in the spatial domain, our method selectively aggregates features in both the spatial and frequency domains for image restoration.

2.2. Modeling Frequency Information

Some restoration methods have been proposed to bridge the frequency gap between paired degraded and clean images [10,11,14]. Generally, numerous studies have utilized methods that decompose images into different frequency components using various transformation tools, such as the wavelet transform [12,13,28] and the Fourier transform [14,15,29]. Specifically, UHDFour [29] proposed restoring the amplitude and phase separately, as each plays a different role. Fourmer [14] introduced FPE, which consists of spatial interaction and channel evolution for efficient global modeling. Recently, some studies have proposed learnable low-pass filters using pooling techniques [10,11]. In this study, we directly access the frequency latent space using the DFT tool, similar to methods employing Fourier transforms [14,29]. However, we model their implicit interactions through complex convolutions instead of processing the real and imaginary parts separately.

2.3. Attention Mechanisms

The attention mechanism is a crucial factor in boosting performance by allowing the network to focus on essential parts. Vision transformers [30,31] divide an image into a sequence of patches to capture the relationships between these patches. Although they have demonstrated promising performance across various vision tasks, they have the drawback of requiring substantial computational resources. To address this issue, the Swin transformer [32] introduced window-based local attention that learns the interactions between elements within shifted windows. POTTER [33] proposed pooling attention that applies individual h-axis and w-axis pooling operations. Restormer [7] effectively restored clean images by designing attention in the channel direction. SegNeXt [16] was developed with convolutional attention using multi-branch depthwise strip convolutions and achieved performance comparable to that of transformer-based models. In this study, we designed a novel attention mechanism that integrates the advantages of both POTTER and SegNeXt. Our attention compresses features into global statistical information via unidirectional pooling operations and then effectively aggregates spatial information using multi-scale, depthwise strip convolutions.

3. Methodology

In this Section, we provide details on our method, starting with FreANet’s overall architecture and then introducing its core modules: the MDFE and MPA.
Before delving into the details, we define the notations and the Fourier transform used throughout the paper. Let $x \in \mathbb{R}^{H \times W \times C}$ denote an input feature in the spatial domain. The Fourier transform $\mathcal{F}$ converts the input feature into the frequency domain, representing it as its corresponding complex component $X = X_{\mathrm{re}} + jX_{\mathrm{im}} \in \mathbb{C}^{H \times W \times C \times 2}$. Here, the last dimension, ‘2’, refers to the real $X_{\mathrm{re}}$ and imaginary $X_{\mathrm{im}}$ components. The Fourier transform is expressed as

$$\mathcal{F}(x)(u,v) = \frac{1}{\sqrt{HW}} \sum_{h=0}^{H-1} \sum_{w=0}^{W-1} x(h,w)\, e^{-j2\pi\left(\frac{h}{H}u + \frac{w}{W}v\right)},$$ (1)

where $u$ and $v$ are the corresponding coordinates in the frequency domain. The inverse Fourier transform is denoted by $\mathcal{F}^{-1}(X)$. We employ the FFT and IFFT algorithms [34] for the Fourier transform and its inverse process.
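As a concrete illustration of this transform pair, the following minimal sketch (our own illustration in PyTorch, not part of the released implementation; the NCHW tensor layout and the "ortho" normalization, which matches the $1/\sqrt{HW}$ factor above, are assumptions) moves a feature map into the frequency domain and back:

```python
import torch

# Minimal sketch of the Fourier transform pair (assumed PyTorch, NCHW layout).
x = torch.randn(1, 32, 64, 64)                        # spatial-domain feature

# 2D FFT over the spatial axes; norm="ortho" applies the 1/sqrt(HW) scaling.
X = torch.fft.fft2(x, dim=(-2, -1), norm="ortho")     # complex tensor X = X_re + j X_im
X_re, X_im = X.real, X.imag

# The inverse transform recovers the spatial feature up to floating-point error.
x_rec = torch.fft.ifft2(X, dim=(-2, -1), norm="ortho").real
assert torch.allclose(x, x_rec, atol=1e-5)
```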

3.1. Overall Architecture

As shown in Figure 1, our FreANet utilizes the popular U-shaped encoder–decoder architecture to achieve rich hierarchical representations. Specifically, given an input image $I \in \mathbb{R}^{H \times W \times 3}$, FreANet projects it into the embedding space using a single $3 \times 3$ convolution, producing features of size $H \times W \times C$, where $H$ and $W$ indicate the original resolution, and $C$ represents the number of channels. These features are then fed into FreABlock as input to capture high-level representations. FreABlock consists of the MDFE, MPA, and a feed-forward network (Figure 1b). Furthermore, the input features of the MDFE are flexibly enhanced through the residual deformable block (RDB) using DCNv3 [35], allowing a focus on critical features. As the features pass through deeper layers, the number of channels doubles, and their resolution halves. Upsampling and downsampling are performed using transposed and strided convolutions, respectively. Finally, the restored image $\hat{I} \in \mathbb{R}^{H \times W \times 3}$ is estimated using a single $3 \times 3$ convolution.
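The scaffolding described above can be sketched as follows (a schematic under assumptions: PyTorch, a single encoder/decoder scale, FreABlock and the RDB replaced by placeholders, and an additive skip connection; the actual depths and channel widths follow the configurations given in Section 4):

```python
import torch
import torch.nn as nn

class UShapedSkeleton(nn.Module):
    """Schematic of the U-shaped scaffolding only; block internals are placeholders."""
    def __init__(self, c=32):
        super().__init__()
        self.embed = nn.Conv2d(3, c, 3, padding=1)                # 3x3 projection into the embedding space
        self.enc = nn.Identity()                                  # stand-in for FreABlocks at full resolution
        self.down = nn.Conv2d(c, 2 * c, 3, stride=2, padding=1)   # strided conv: halve resolution, double channels
        self.mid = nn.Identity()                                  # stand-in for FreABlocks at 1/2 resolution
        self.up = nn.ConvTranspose2d(2 * c, c, 2, stride=2)       # transposed conv: restore resolution
        self.dec = nn.Identity()
        self.out = nn.Conv2d(c, 3, 3, padding=1)                  # 3x3 conv estimating the restored image

    def forward(self, img):
        f1 = self.enc(self.embed(img))
        f2 = self.mid(self.down(f1))
        f1 = self.dec(self.up(f2) + f1)                           # additive skip connection (assumed)
        return self.out(f1)

print(UShapedSkeleton()(torch.randn(1, 3, 64, 64)).shape)         # torch.Size([1, 3, 64, 64])
```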

3.2. Multi-Domain Feature Extractor (MDFE)

As illustrated in Figure 1c, the MDFE adopts an inter-multi-branch structure in which each branch operates in a different space, namely the spatial or the frequency domain. The spatial branch is designed with an intra-multi-branch structure consisting of parallel branches with small to large receptive fields, effectively capturing the multi-scale spatial context. We adopt two serial depthwise convolutions with strip kernels [16] in each branch to maintain a lightweight network. The process in the spatial branch is expressed as
$$f_s = \sum_{i=0}^{2} \mathrm{DWConv}_{k_i \times 1}\left(\mathrm{DWConv}_{1 \times k_i}(x)\right),$$ (2)

where $f_s$ represents the output features of the spatial branch. $\mathrm{DWConv}$ and $k_i$, with $i \in \{0, 1, 2\}$, refer to the depthwise convolution and the kernel size of the $i$-th branch, respectively.
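For concreteness, a minimal PyTorch sketch of this spatial branch is given below; it is our own illustration, and the kernel sizes $k_i = 7, 11, 21$ are assumed values rather than the paper's exact configuration:

```python
import torch
import torch.nn as nn

class SpatialBranchSketch(nn.Module):
    """Parallel branches of two serial depthwise strip convolutions (1 x k then k x 1), summed."""
    def __init__(self, channels=32, kernel_sizes=(7, 11, 21)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), groups=channels),
                nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), groups=channels),
            )
            for k in kernel_sizes
        )

    def forward(self, x):
        return sum(branch(x) for branch in self.branches)   # summation over the parallel branches

f_s = SpatialBranchSketch()(torch.randn(1, 32, 64, 64))
print(f_s.shape)   # torch.Size([1, 32, 64, 64])
```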
Before being fed into the frequency branch, the input features are converted into their corresponding complex features by applying the FFT algorithm. Given that the real and imaginary components of complex features originate from corresponding spatial features, applying individual convolutions to these components is inappropriate. Therefore, we apply complex convolutions to the complex features, which involve interactions between the real and imaginary components. Complex convolutions are implemented similarly to standard convolutions, but each convolution kernel is responsible for the real $W_{\mathrm{re}}$ and imaginary $W_{\mathrm{im}}$ weights. The complex kernel $W$ is defined as $W = W_{\mathrm{re}} + jW_{\mathrm{im}}$, and the complex convolution is expressed as $W * X = (W_{\mathrm{re}} * X_{\mathrm{re}} - W_{\mathrm{im}} * X_{\mathrm{im}}) + j(W_{\mathrm{re}} * X_{\mathrm{im}} + W_{\mathrm{im}} * X_{\mathrm{re}})$. Like the spatial branch, we utilize two serial depthwise convolutions with strip kernels, employing them in the manner of complex convolutions. The process in the frequency branch is expressed as
$$F_{f,\mathrm{re}} + jF_{f,\mathrm{im}} = \mathrm{DW}\mathbb{C}\mathrm{Conv}_{21 \times 1}\left(\mathrm{DW}\mathbb{C}\mathrm{Conv}_{1 \times 21}(X)\right),$$ (3)
$$X = \mathcal{F}(x), \quad f_f = \mathcal{F}^{-1}(F_{f,\mathrm{re}} + jF_{f,\mathrm{im}}),$$ (4)

where $F_{f,\mathrm{re}}$ and $F_{f,\mathrm{im}}$ are the real and imaginary output features of the frequency branch, $\mathrm{DW}\mathbb{C}\mathrm{Conv}$ refers to the depthwise complex convolution, and $f_f$ denotes the output features of the frequency branch converted back to the spatial domain through the inverse Fourier transform. Finally, inspired by SKNet [36], the output features extracted from the different domains are integrated by computing weights for each channel. The fusion process is expressed as
$$Z = \mathrm{MLP}\left(\mathrm{GAP}(f_s + f_f)\right),$$ (5)

where $Z \in \mathbb{R}^{1 \times 1 \times D}$ represents the compressed features, with the number of channels denoted by $D$. MLP and GAP refer to a fully connected layer and global average pooling, respectively. Next, two separate fully connected layers are applied, followed by a softmax operation, to compute the weights for each channel:

$$[\alpha, \beta] = \left[\mathrm{MLP}_{\alpha}(Z), \mathrm{MLP}_{\beta}(Z)\right],$$ (6)
$$\alpha_c = \frac{e^{\alpha_c}}{e^{\alpha_c} + e^{\beta_c}}, \quad \beta_c = \frac{e^{\beta_c}}{e^{\alpha_c} + e^{\beta_c}},$$ (7)
$$f^c = \alpha_c \otimes f_s^c + \beta_c \otimes f_f^c, \quad \text{where } \alpha_c + \beta_c = 1,$$ (8)

where $f = [f^1, f^2, \ldots, f^C]$. $\alpha$ and $\beta$ represent the weights per channel for $f_s$ and $f_f$, respectively. The notation $\alpha_c$ denotes the $c$-th channel of $\alpha$, and the same notation applies to $\beta_c$, $f_s^c$, and $f_f^c$. The symbol $\otimes$ represents element-wise matrix multiplication.
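The frequency branch can be sketched as follows (our illustration, assuming PyTorch and an orthonormal FFT; the SK-style channel fusion of Equations (5)–(8) is omitted for brevity). The depthwise complex strip convolution implements $W * X = (W_{\mathrm{re}} * X_{\mathrm{re}} - W_{\mathrm{im}} * X_{\mathrm{im}}) + j(W_{\mathrm{re}} * X_{\mathrm{im}} + W_{\mathrm{im}} * X_{\mathrm{re}})$ with two real-valued depthwise convolutions:

```python
import torch
import torch.nn as nn

class ComplexDWStripConv(nn.Module):
    """Depthwise complex strip convolution built from two real-valued depthwise convolutions."""
    def __init__(self, channels, kernel_size):
        super().__init__()
        kh, kw = kernel_size
        pad = (kh // 2, kw // 2)
        self.conv_re = nn.Conv2d(channels, channels, kernel_size, padding=pad, groups=channels)
        self.conv_im = nn.Conv2d(channels, channels, kernel_size, padding=pad, groups=channels)

    def forward(self, x_re, x_im):
        y_re = self.conv_re(x_re) - self.conv_im(x_im)   # real part of W * X
        y_im = self.conv_re(x_im) + self.conv_im(x_re)   # imaginary part of W * X
        return y_re, y_im

# Frequency branch: FFT -> two serial complex strip convolutions (1x21, then 21x1) -> inverse FFT.
x = torch.randn(1, 32, 64, 64)
X = torch.fft.fft2(x, norm="ortho")
conv_1x21 = ComplexDWStripConv(32, (1, 21))
conv_21x1 = ComplexDWStripConv(32, (21, 1))
re, im = conv_1x21(X.real, X.imag)
re, im = conv_21x1(re, im)
f_f = torch.fft.ifft2(torch.complex(re, im), norm="ortho").real
print(f_f.shape)   # torch.Size([1, 32, 64, 64])
```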

3.3. Multi-Scale Pooling Attention (MPA)

The proposed MPA utilizes global average pooling on the input features along specific directions, such as the vertical and horizontal axes, to address the high complexity of spatial attention. The resulting squeezed features are then fed into several strip convolutions, encompassing both small and large receptive fields, to aggregate local and long-range information:
$$[x_h, x_w] = \left[\mathrm{GAP}_h(x), \mathrm{GAP}_w(x)\right],$$ (9)
$$A_h = \sum_{i=0}^{3} \mathrm{DWConv}_{1 \times k_i}(x_h), \quad A_w = \sum_{i=0}^{3} \mathrm{DWConv}_{k_i \times 1}(x_w),$$ (10)

where $\mathrm{GAP}_h$ and $\mathrm{GAP}_w$ denote global average pooling along the h- and w-axis directions, respectively. $A_h \in \mathbb{R}^{1 \times W \times C}$ and $A_w \in \mathbb{R}^{H \times 1 \times C}$ represent the information aggregated from the features squeezed in each direction. Then, we compute the attention map $A \in \mathbb{R}^{H \times W \times C}$ by calculating the dot product between the information aggregated for each axis. After performing pointwise convolution for information exchange between channels, the attention values are projected into the range $(-1, 1)$ using the Tanh function, which helps suppress irrelevant information while enhancing useful information. This process can be expressed as follows:

$$A = \mathrm{Tanh}\left(\mathrm{PWConv}(A_h \cdot A_w)\right), \quad \mathrm{Out} = A \otimes x.$$ (11)

In Equation (11), PWConv refers to the pointwise convolution, and Out represents the final output of the MPA.
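A minimal sketch of the MPA is shown below (our own illustration in PyTorch; the four kernel sizes are assumed values, and the gating follows Equation (11) with element-wise modulation of the input):

```python
import torch
import torch.nn as nn

class MPASketch(nn.Module):
    """Unidirectional pooling, multi-scale strip depthwise convolutions, and Tanh gating."""
    def __init__(self, channels=32, kernel_sizes=(3, 7, 11, 21)):
        super().__init__()
        self.convs_h = nn.ModuleList(
            nn.Conv2d(channels, channels, (1, k), padding=(0, k // 2), groups=channels) for k in kernel_sizes)
        self.convs_w = nn.ModuleList(
            nn.Conv2d(channels, channels, (k, 1), padding=(k // 2, 0), groups=channels) for k in kernel_sizes)
        self.pw = nn.Conv2d(channels, channels, 1)    # pointwise conv for channel-wise information exchange

    def forward(self, x):
        x_h = x.mean(dim=2, keepdim=True)             # GAP along the h-axis -> (N, C, 1, W)
        x_w = x.mean(dim=3, keepdim=True)             # GAP along the w-axis -> (N, C, H, 1)
        a_h = sum(conv(x_h) for conv in self.convs_h)
        a_w = sum(conv(x_w) for conv in self.convs_w)
        attn = torch.tanh(self.pw(a_h * a_w))         # broadcasting (N,C,1,W)*(N,C,H,1) -> (N,C,H,W)
        return attn * x                               # modulate the input with the attention map

print(MPASketch()(torch.randn(1, 32, 64, 64)).shape)  # torch.Size([1, 32, 64, 64])
```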

3.4. Loss Function

Given that the proposed FreANet operates within the embedding spaces of both the spatial and frequency domains, we adopt a main spatial loss $\mathcal{L}_{\mathrm{Spatial}}$ and an auxiliary spectral loss $\mathcal{L}_{\mathrm{Frequency}}$:

$$\mathcal{L}_{\mathrm{Spatial}} = \left\| \hat{I} - Y \right\|_1, \quad \mathcal{L}_{\mathrm{Frequency}} = \left\| \mathcal{F}(\hat{I}) - \mathcal{F}(Y) \right\|_1,$$ (12)

where $Y$ is the corresponding clear image. Additionally, we employ contrastive regularization $\mathcal{L}_{\mathrm{CR}}$ [20] to further improve performance:

$$\mathcal{L}_{\mathrm{CR}} = \sum_{i=0}^{n} w_i \frac{\left\| \Phi_i(\hat{I}) - \Phi_i(Y) \right\|_1}{\left\| \Phi_i(\hat{I}) - \Phi_i(I) \right\|_1},$$ (13)

where $\Phi_i$ refers to the hidden features extracted from the $i$-th layer of a pre-trained network, and $w_i$ is the weight coefficient. The overall loss is then expressed as a combination of the above losses:

$$\mathcal{L} = \mathcal{L}_{\mathrm{Spatial}} + \lambda_1 \mathcal{L}_{\mathrm{Frequency}} + \lambda_2 \mathcal{L}_{\mathrm{CR}}.$$ (14)

In this work, $\lambda_1$ and $\lambda_2$ are set to 0.1 and 0.05, respectively.
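A sketch of this combined objective is given below (our own illustration; the L1 terms are computed as mean absolute errors, and the feature extractors $\Phi_i$ for the contrastive term are left as user-supplied callables, e.g., layers of a pre-trained VGG, since their exact configuration is not specified here):

```python
import torch

def freanet_style_loss(pred, target, degraded=None, phi=(), weights=(), lam1=0.1, lam2=0.05):
    """Spatial L1 + frequency L1 + optional contrastive regularization (all as mean absolute errors)."""
    l_spatial = (pred - target).abs().mean()
    l_freq = (torch.fft.fft2(pred, norm="ortho") - torch.fft.fft2(target, norm="ortho")).abs().mean()
    loss = l_spatial + lam1 * l_freq

    if phi and degraded is not None:                       # contrastive regularization term
        l_cr = 0.0
        for w, layer in zip(weights, phi):
            num = (layer(pred) - layer(target)).abs().mean()
            den = (layer(pred) - layer(degraded)).abs().mean() + 1e-6   # guard against division by zero
            l_cr = l_cr + w * num / den
        loss = loss + lam2 * l_cr
    return loss

pred, target = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(freanet_style_loss(pred, target).item())             # contrastive term omitted in this call
```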

4. Experiments

This Section presents extensive experiments conducted on 11 benchmark datasets for four restoration tasks, including defocus deblurring, motion deblurring, dehazing, and low-light enhancement. Summaries of all the datasets used in these experiments are provided in Table 1. In the result tables, the best and second-best scores for each dataset are highlighted in bold and underlined, respectively.
Implementation Settings. We trained separate models to restore images degraded by different degradation factors. Considering the complexity of each problem, we adjusted the number of encoder–decoder layers and residual blocks [$L_1$, $L_2$, $L_3$, $N$] of the proposed FreANet. We set [1, 2, 3, 10] for defocus/motion deblurring, [1, 2, 3, 3] for dehazing, and [1, 2, 2, 1] for low-light enhancement. Additionally, to ensure a fair evaluation for dehazing, we designed a small version of FreANet with [1, 1, 1, 2]. Unless mentioned otherwise, the following tunable parameters were used in all experiments. We adopted the Adam optimizer [47] with an initial learning rate of $1 \times 10^{-4}$, which decreases to $1 \times 10^{-6}$ in a cosine annealing manner [48]. We used a patch size of 256 × 256 with random horizontal flips and rotations for data augmentation. We trained and evaluated the proposed restoration models for all degradation types on a single NVIDIA H100 GPU (NVIDIA, Santa Clara, CA, USA). FLOPs were measured on data with a resolution of 256 × 256.
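The optimization schedule described above corresponds to a setup along the following lines (a sketch with a placeholder model, batch size, and iteration count; only the optimizer, learning-rate range, and patch size are taken from the text):

```python
import torch

model = torch.nn.Conv2d(3, 3, 3, padding=1)               # placeholder standing in for FreANet
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=300_000, eta_min=1e-6)                # cosine decay from 1e-4 to 1e-6; T_max is a placeholder

for step in range(3):                                      # training-loop skeleton
    batch = torch.rand(2, 3, 256, 256)                     # 256 x 256 patches (flips/rotations applied in practice)
    loss = (model(batch) - batch).abs().mean()             # dummy objective for illustration
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()
```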

4.1. Single-Image Defocus Deblurring Results

Following the training strategy of SFNet [10], our model was trained on LFDOF [38] and fine-tuned on DPDD [37]. Table 2 presents the quantitative results of single-image defocus deblurring on the DPDD. As can be seen in Table 2, our model improves performance on all metrics except SSIM. Compared to IRNeXt [18], the proposed FreANet, with 13.63% fewer parameters, achieves gains of 0.1 dB in PSNR and 0.002 in SSIM in the indoor scenes. FreANet outperforms the frequency-based SFNet [10] in the outdoor scenes, achieving improved scores of 0.1 dB in PSNR and 0.01 in SSIM. Figure 2 demonstrates that our model restores visibility-degraded parts more effectively than previous algorithms.

4.2. Motion Deblurring Results

We evaluated deblurring models on the GoPro [39] and HIDE [40] datasets and report the quantitative results in Table 3. Our method achieves a PSNR of 33.21 dB on the GoPro, using fewer parameters compared to the transformer-based Stripformer [52]. FreANet also demonstrates remarkable generalization performance on HIDE. We visualize the deblurred results in Figure 3, where it can be seen that our FreANet restores images more sharply and distinctly than its competitors.

4.3. Dehazing Results

We conducted dehazing experiments on synthetic datasets (RESIDE/SOTS [41]) and real-world datasets (Dense-Haze [42] and NH-HAZE [43]). Table 4 presents the quantitative results in comparison with previous models. Compared to the most recently proposed OKNet [19], our model obtains significant gains of 0.83 dB and 0.71 dB in PSNR on the SOTS-Indoor and SOTS-Outdoor datasets. For Dense-Haze, FreANet reports PSNR gains of 0.61 dB and 1.58 dB over OKNet [19] and the frequency-based Fourmer [14], respectively. Furthermore, the qualitative results for both indoor and outdoor scenarios are presented in Figure 4. Our model generates images with sharper and more detailed object preservation, even in complex environments, outperforming previous works.
In line with previous work [19], we performed a nighttime dehazing experiment on the NHR [44] dataset, and the results can be found in Table 5. FreANet achieves a PSNR gain of 0.84 dB over the recently proposed OKNet [19].

4.4. Low-Light Enhancement Results

We evaluated our network on the LOLv1 [45] and LOLv2 [46] datasets. The LOLv2 dataset is divided into synthetic (LOLv2-syn) and real (LOLv2-real) subsets. Following Retinex theory, our model is equipped with an illumination estimator [56] at its front end. The quantitative and qualitative comparison results with previous enhancement models are shown in Table 6 and Figure 5, respectively. Although our method shows slightly reduced performance on LOLv1 compared to previous approaches, it demonstrates outstanding performance on both the synthetic and real subsets of LOLv2, even with significantly fewer parameters (2.45 million). As shown in Figure 5, our method restores low-light images with color and texture similar to the target images.

5. Ablation Study

We performed ablation studies to explore the effectiveness of the proposed modules. For the ablation studies, we used variant models derived from a small version of our FreANet. All these variant models were evaluated on the SOTS-Indoor dataset.
Table 7 presents the quantitative results for the variant models. First, the initial model (a) results in a PSNR of 30.75 dB. The model (b) utilizing only the RDB module achieves a PSNR of 36.29 dB, while the variant model (c) with MPA yields a PSNR gain of 0.77 dB, with a slight increase of 0.19 M parameters. We investigated the components of the MDFE module, including the spatial and frequency branches. The model (d) equipped only with the spatial branch achieves a 33.36 dB PSNR. In the case of the frequency branch, model (e) causes a performance degradation due to insufficient spatial texture information. However, when combined with the spatial branch, model (f) leads to a significant gain of 2.47 dB over model (d). Finally, the progressive addition of MPA and RDB modules produces improved results. Our full model (i) outperforms other variant models, achieving an 8.79 dB boost over the initial model (a).
Visual results of each component. To analyze the impact of individual components, we provide the visual results of the MDFE and MPA in Figure 6 and Figure 7, respectively. As shown in Figure 6, the frequency branch of the MDFE captures high-frequency information, such as edges, while the spatial branch emphasizes less degraded regions, such as boxes, humans, and vines. By selectively aggregating the output features, the MDFE enhances edge signals and texture information from the initial features. Figure 7 demonstrates that MPA highlights regions with extreme low-light conditions or severe blur that require more focus for restoration.

6. Conclusions

In this paper, we propose FreANet, which selectively extracts features from multiple domains and captures long-range spatial dependencies for image restoration. Specifically, we introduce a multi-domain feature extractor (MDFE) composed of spatial and frequency branches. Each branch employs strip convolutions with large kernels in parallel. Furthermore, we propose a multi-scale pooling attention (MPA) mechanism that learns interactions among elements in the spatial embedding space, utilizing global average pooling for efficiency. Finally, our FreANet demonstrates outstanding performance on 11 benchmark datasets for various restoration tasks, including defocus deblurring, motion deblurring, dehazing, and low-light enhancement.

Author Contributions

Conceptualization, C.-H.P., H.-D.C. and M.-T.L.; methodology, C.-H.P., H.-D.C. and M.-T.L.; software, C.-H.P.; validation, H.-D.C. and M.-T.L.; formal analysis, C.-H.P. and H.-D.C.; investigation, H.-D.C. and M.-T.L.; resources, C.-H.P.; data curation, C.-H.P. and H.-D.C.; writing—original draft preparation, C.-H.P.; writing—review and editing, H.-D.C. and M.-T.L.; visualization, C.-H.P.; supervision, H.-D.C. and M.-T.L.; project administration, H.-D.C.; funding acquisition, all authors. All authors have read and agreed to the published version of the manuscript.

Funding

This study was financially supported by Seoul National University of Science and Technology.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in the study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353. [Google Scholar]
  2. Li, C.Y.; Guo, J.C.; Cong, R.M.; Pang, Y.W.; Wang, B. Underwater image enhancement by dehazing with minimum information loss and histogram distribution prior. IEEE Trans. Image Process. 2016, 25, 5664–5677. [Google Scholar] [CrossRef]
  3. Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H.; Shao, L. Multi-stage progressive image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 14821–14831. [Google Scholar]
  4. Ren, W.; Ma, L.; Zhang, J.; Pan, J.; Cao, X.; Liu, W.; Yang, M.H. Gated Fusion Network for Single Image Dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3253–3261. [Google Scholar]
  5. Chen, L.; Lu, X.; Zhang, J.; Chu, X.; Chen, C. HINet: Half Instance Normalization Network for Image Restoration. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Nashville, TN, USA, 19–25 June 2021; pp. 182–192. [Google Scholar]
  6. Qin, X.; Wang, Z.; Bai, Y.; Xie, X.; Jia, H. FFA-Net: Feature fusion attention network for single image dehazing. In Proceedings of the AAAI Conference on Artificial Intelligence, New York, NY, USA, 7–12 February 2020; Volume 34, pp. 11908–11915. [Google Scholar]
  7. Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H. Restormer: Efficient transformer for high-resolution image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5728–5739. [Google Scholar]
  8. Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, Long Beach, CA, USA, 4–9 December 2017; Volume 30. [Google Scholar]
  9. Song, Y.; He, Z.; Qian, H.; Du, X. Vision transformers for single image dehazing. IEEE Trans. Image Process. 2023, 32, 1927–1941. [Google Scholar] [CrossRef] [PubMed]
  10. Cui, Y.; Tao, Y.; Bing, Z.; Ren, W.; Gao, X.; Cao, X.; Huang, K.; Knoll, A. Selective frequency network for image restoration. In Proceedings of the International Conference on Learning Representations, Kigali, Rwanda, 1–5 May 2023. [Google Scholar]
  11. Cui, Y.; Ren, W.; Cao, X.; Knoll, A. Focal network for image restoration. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 13001–13011. [Google Scholar]
  12. Yang, H.H.; Fu, Y. Wavelet u-net and the chromatic adaptation transform for single image dehazing. In Proceedings of the IEEE International Conference on Image Processing, Taipei, Taiwan, 22–25 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 2736–2740. [Google Scholar]
  13. Li, R.; Dong, H.; Wang, L.; Liang, B.; Guo, Y.; Wang, F. Frequency-aware deep dual-path feature enhancement network for image dehazing. In Proceedings of the International Conference on Pattern Recognition, Montreal, QC, Canada, 21–25 August 2022; IEEE: Piscataway, NJ, USA, 2022; pp. 3406–3412. [Google Scholar]
  14. Zhou, M.; Huang, J.; Guo, C.L.; Li, C. Fourmer: An efficient global modeling paradigm for image restoration. In Proceedings of the International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023; pp. 42589–42601. [Google Scholar]
  15. Mao, X.; Liu, Y.; Liu, F.; Li, Q.; Shen, W.; Wang, Y. Intriguing findings of frequency selection for image deblurring. In Proceedings of the AAAI Conference on Artificial Intelligence, Washington, DC, USA, 7–14 February 2023; Volume 37, pp. 1905–1913. [Google Scholar]
  16. Guo, M.H.; Lu, C.Z.; Hou, Q.; Liu, Z.; Cheng, M.M.; Hu, S.M. Segnext: Rethinking convolutional attention design for semantic segmentation. In Proceedings of the 36th International Conference on Neural Information Processing Systems, New Orleans, LA, USA, 28 November–9 December 2022; Volume 35, pp. 1140–1156. [Google Scholar]
  17. Trabelsi, C.; Bilaniuk, O.; Zhang, Y.; Serdyuk, D.; Subramanian, S.; Santos, J.F.; Mehri, S.; Rostamzadeh, N.; Bengio, Y.; Pal, C.J. Deep complex networks. arXiv 2017, arXiv:1705.09792. [Google Scholar]
  18. Cui, Y.; Ren, W.; Yang, S.; Cao, X.; Knoll, A. IRNeXt: Rethinking Convolutional Network Design for Image Restoration. In Proceedings of the International Conference on Machine Learning, Honolulu, HI, USA, 23–29 July 2023. [Google Scholar]
  19. Cui, Y.; Ren, W.; Knoll, A. Omni-Kernel Network for Image Restoration. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2024; Volume 38, pp. 1426–1434. [Google Scholar]
  20. Wu, H.; Qu, Y.; Lin, S.; Zhou, J.; Qiao, R.; Zhang, Z.; Xie, Y.; Ma, L. Contrastive learning for compact single image dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 10551–10560. [Google Scholar]
  21. Zamir, S.W.; Arora, A.; Khan, S.; Hayat, M.; Khan, F.S.; Yang, M.H.; Shao, L. Learning enriched features for real image restoration and enhancement. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 492–511. [Google Scholar]
  22. Chen, D.; He, M.; Fan, Q.; Liao, J.; Zhang, L.; Hou, D.; Yuan, L.; Hua, G. Gated Context Aggregation Network for Image Dehazing and Deraining. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 7–11 January 2019. [Google Scholar]
  23. Tu, Z.; Talebi, H.; Zhang, H.; Yang, F.; Milanfar, P.; Bovik, A.; Li, Y. MAXIM: Multi-axis MLP for image processing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5769–5780. [Google Scholar]
  24. Zheng, Y.; Zhan, J.; He, S.; Dong, J.; Du, Y. Curricular contrastive regularization for physics-aware single image dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 5785–5794. [Google Scholar]
  25. Anwar, S.; Barnes, N. Densely residual laplacian super-resolution. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 44, 1192–1204. [Google Scholar] [CrossRef]
  26. Zhang, Y.; Tian, Y.; Kong, Y.; Zhong, B.; Fu, Y. Residual dense network for image super-resolution. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2472–2481. [Google Scholar]
  27. Lee, E.; Hwang, Y. Decomformer: Decompose Self-Attention of Transformer for Efficient Image Restoration. IEEE Access 2024, 12, 38672–38684. [Google Scholar] [CrossRef]
  28. Tian, C.; Zheng, M.; Zuo, W.; Zhang, B.; Zhang, Y.; Zhang, D. Multi-stage image denoising with the wavelet transform. Pattern Recognit. 2023, 134, 109050. [Google Scholar] [CrossRef]
  29. Li, C.; Guo, C.L.; Zhou, M.; Liang, Z.; Zhou, S.; Feng, R.; Loy, C.C. Embedding Fourier for Ultra-High-Definition Low-Light Image Enhancement. In Proceedings of the International Conference on Learning Representations, Kigali, Rwanda, 1–5 May 2023. [Google Scholar]
  30. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An Image is Worth 16 × 16 Words: Transformers for Image Recognition at Scale. In Proceedings of the International Conference on Learning Representations, Addis Ababa, Ethiopia, 30 April 2020. [Google Scholar]
  31. Wang, W.; Xie, E.; Li, X.; Fan, D.P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; Shao, L. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 568–578. [Google Scholar]
  32. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 10–17 October 2021; pp. 10012–10022. [Google Scholar]
  33. Zheng, C.; Liu, X.; Qi, G.J.; Chen, C. Potter: Pooling attention transformer for efficient human mesh recovery. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 1611–1620. [Google Scholar]
  34. Cooley, J.W.; Lewis, P.A.; Welch, P.D. The fast Fourier transform and its applications. IEEE Trans. Educ. 1969, 12, 27–34. [Google Scholar] [CrossRef]
  35. Wang, W.; Dai, J.; Chen, Z.; Huang, Z.; Li, Z.; Zhu, X.; Hu, X.; Lu, T.; Lu, L.; Li, H.; et al. Internimage: Exploring large-scale vision foundation models with deformable convolutions. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 17–24 June 2023; pp. 14408–14419. [Google Scholar]
  36. Li, X.; Wang, W.; Hu, X.; Yang, J. Selective kernel networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 510–519. [Google Scholar]
  37. Abuolaim, A.; Brown, M.S. Defocus deblurring using dual-pixel data. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 111–126. [Google Scholar]
  38. Ruan, L.; Chen, B.; Li, J.; Lam, M.L. Aifnet: All-in-focus image restoration network using a light field-based dataset. IEEE Trans. Comput. Imaging 2021, 7, 675–688. [Google Scholar] [CrossRef]
  39. Nah, S.; Hyun Kim, T.; Mu Lee, K. Deep multi-scale convolutional neural network for dynamic scene deblurring. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Honolulu, HI, USA, 21–26 July 2017; pp. 3883–3891. [Google Scholar]
  40. Shen, Z.; Wang, W.; Lu, X.; Shen, J.; Ling, H.; Xu, T.; Shao, L. Human-aware motion deblurring. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 5572–5581. [Google Scholar]
  41. Li, B.; Ren, W.; Fu, D.; Tao, D.; Feng, D.; Zeng, W.; Wang, Z. Benchmarking single-image dehazing and beyond. IEEE Trans. Image Process. 2018, 28, 492–505. [Google Scholar] [CrossRef] [PubMed]
  42. Ancuti, C.O.; Ancuti, C.; Sbert, M.; Timofte, R. Dense-haze: A benchmark for image dehazing with dense-haze and haze-free images. In Proceedings of the IEEE International Conference on Image Processing, Taipei, Taiwan, 22–25 September 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1014–1018. [Google Scholar]
  43. Ancuti, C.O.; Ancuti, C.; Timofte, R. NH-HAZE: An image dehazing benchmark with non-homogeneous hazy and haze-free images. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Seattle, WA, USA, 14–19 June 2020; pp. 444–445. [Google Scholar]
  44. Zhang, J.; Cao, Y.; Zha, Z.J.; Tao, D. Nighttime dehazing with a synthetic benchmark. In Proceedings of the 28th ACM International Conference on Multimedia, Seattle, WA, USA, 12–16 October 2020; pp. 2355–2363. [Google Scholar]
  45. Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep retinex decomposition for low-light enhancement. In Proceedings of the BMVC, Newcastle, UK, 3–6 September 2018. [Google Scholar]
  46. Yang, W.; Wang, W.; Huang, H.; Wang, S.; Liu, J. Sparse gradient regularized deep retinex network for robust low-light image enhancement. IEEE Trans. Image Process. 2021, 30, 2072–2086. [Google Scholar] [CrossRef] [PubMed]
  47. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  48. Loshchilov, I.; Hutter, F. Sgdr: Stochastic gradient descent with warm restarts. arXiv 2016, arXiv:1608.03983. [Google Scholar]
  49. Lee, J.; Son, H.; Rim, J.; Cho, S.; Lee, S. Iterative filter adaptive network for single image defocus deblurring. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 2034–2042. [Google Scholar]
  50. Cui, Y.; Ren, W.; Cao, X.; Knoll, A. Image Restoration via Frequency Selection. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 1093–1108. [Google Scholar] [CrossRef]
  51. Cui, Y.; Knoll, A. Dual-domain strip attention for image restoration. Neural Netw. 2024, 171, 429–439. [Google Scholar] [CrossRef] [PubMed]
  52. Tsai, F.J.; Peng, Y.T.; Lin, Y.Y.; Tsai, C.C.; Lin, C.W. Stripformer: Strip transformer for fast image deblurring. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; pp. 146–162. [Google Scholar]
  53. Wang, Z.; Cun, X.; Bao, J.; Zhou, W.; Liu, J.; Li, H. Uformer: A general u-shaped transformer for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 17683–17693. [Google Scholar]
  54. Liu, X.; Ma, Y.; Shi, Z.; Chen, J. Griddehazenet: Attention-based multi-scale network for image dehazing. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27 October–2 November 2019; pp. 7314–7323. [Google Scholar]
  55. Wang, T.; Tao, G.; Lu, W.; Zhang, K.; Luo, W.; Zhang, X.; Lu, T. Restoring vision in hazy weather with hierarchical contrastive learning. Pattern Recognit. 2024, 145, 109956. [Google Scholar] [CrossRef]
  56. Cai, Y.; Bian, H.; Lin, J.; Wang, H.; Timofte, R.; Zhang, Y. Retinexformer: One-stage retinex-based transformer for low-light image enhancement. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 1–6 October 2023; pp. 12504–12513. [Google Scholar]
  57. Yang, W.; Wang, S.; Fang, Y.; Wang, Y.; Liu, J. Band representation-based semi-supervised low-light image enhancement: Bridging the gap between signal fidelity and perceptual quality. IEEE Trans. Image Process. 2021, 30, 3461–3473. [Google Scholar] [CrossRef] [PubMed]
  58. Xu, X.; Wang, R.; Fu, C.W.; Jia, J. SNR-aware low-light image enhancement. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 17714–17724. [Google Scholar]
Figure 1. The overall architecture of our FreANet. (a) FreABlock is composed of MPA, the RDB, the MDFE, and a standard feed-forward network. (b,c) The detailed structures of the MDFE and MPA, respectively.
Figure 2. Defocus deblurring results on the DPDD [37] test set. These results demonstrate that our method effectively restores the halo blur effect.
Figure 3. Motion deblurring (removal of motion blur) results on the GoPro [39] test set. Our FreANet produces sharper and more distinct results.
Figure 4. Image dehazing results on the SOTS test sets [41]. The regions restored better compared to other models are highlighted with red boxes.
Figure 5. Low-light image enhancement results on the LOLv1 [45] test set. FreANet effectively enhances the visual quality of the images.
Figure 6. The visual results of the MDFE for dehazing and defocus deblurring. From left to right: degraded images, clean images, input features, and output features of the frequency branch (FB), spatial branch (SB), and MDFE. The FB emphasizes edge information, while the SB captures less degraded regions to facilitate restoration. Through the selective fusion of these outputs, the final output features enhance object textures along with edge information.
Figure 7. The visual results of MPA for low-light enhancement and motion deblurring. From left to right: degraded input images, clean images, and input and output features of MPA. The proposed MPA effectively highlights regions with severe degradation, as indicated by the red boxes.
Table 1. Dataset summary for four restoration tasks.
Task | Dataset | Test Subname | #Train | #Test
Defocus Deblurring | DPDD [37] | - | 350 | 76
Defocus Deblurring | LFDOF [38] | - | 11,261 | 725
Motion Deblurring | GoPro [39] | - | 2103 | 1111
Motion Deblurring | HIDE [40] | - | 0 | 2025
Dehazing | RESIDE-ITS [41] | SOTS-Indoor | 13,990 | 500
Dehazing | RESIDE-OTS [41] | SOTS-Outdoor | 313,950 | 500
Dehazing | Dense-Haze [42] | - | 45 | 5
Dehazing | NH-HAZE [43] | - | 45 | 5
Dehazing | NHR [44] | - | 16,146 | 1794
Enhancement | LOLv1 [45] | - | 485 | 15
Enhancement | LOLv2-real [46] | - | 689 | 100
Enhancement | LOLv2-syn [46] | - | 900 | 100
Table 2. Defocus deblurring comparisons on the DPDD [37] test set.
Methods | Indoor Scenes (PSNR↑ / SSIM↑ / MAE↓ / LPIPS↓) | Outdoor Scenes (PSNR↑ / SSIM↑ / MAE↓ / LPIPS↓) | Combined (PSNR↑ / SSIM↑ / MAE↓ / LPIPS↓) | #Params (M)
IFAN [49] | 28.11 / 0.861 / 0.026 / 0.179 | 22.76 / 0.720 / 0.052 / 0.254 | 25.37 / 0.789 / 0.039 / 0.217 | 10.47
Restormer [7] | 28.87 / 0.882 / 0.025 / 0.145 | 23.24 / 0.743 / 0.050 / 0.209 | 25.98 / 0.811 / 0.038 / 0.178 | 26.13
SFNet [10] | 29.16 / 0.878 / 0.023 / 0.168 | 23.45 / 0.747 / 0.049 / 0.244 | 26.23 / 0.811 / 0.037 / 0.207 | 13.27
FocalNet [11] | 29.10 / 0.876 / 0.024 / 0.173 | 23.41 / 0.743 / 0.049 / 0.246 | 26.18 / 0.808 / 0.037 / 0.210 | 12.82
IRNeXt [18] | 29.22 / 0.879 / 0.024 / 0.167 | 23.53 / 0.752 / 0.049 / 0.244 | 26.30 / 0.814 / 0.037 / 0.206 | 14.75
OKNet [19] | 28.99 / 0.877 / 0.024 / 0.169 | 23.51 / 0.751 / 0.049 / 0.241 | 26.18 / 0.812 / 0.037 / 0.206 | 14.02
FSNet [50] | 29.14 / 0.878 / 0.024 / 0.166 | 23.45 / 0.747 / 0.050 / 0.246 | 26.22 / 0.811 / 0.037 / 0.207 | 13.28
DSANet [51] | 29.27 / 0.881 / 0.024 / 0.158 | 23.50 / 0.749 / 0.049 / 0.231 | 26.31 / 0.813 / 0.036 / 0.195 | 13.16
FreANet | 29.32 / 0.881 / 0.023 / 0.131 | 23.55 / 0.757 / 0.049 / 0.195 | 26.36 / 0.817 / 0.036 / 0.164 | 12.74
The best and second-best scores are highlighted in bold and underlined, respectively. ↑ indicates that a higher score is better, whereas ↓ indicates that a lower score is better.
Table 3. Motion deblurring comparisons on the GoPro [39] and HIDE [40] test sets.
Methods | GoPro [39] (PSNR↑ / SSIM↑) | HIDE [40] (PSNR↑ / SSIM↑) | #Params (M)
MPRNet [3] | 32.66 / 0.959 | 30.96 / 0.939 | 20.1
HINet [5] | 32.71 / 0.959 | 30.32 / 0.932 | 20.1
Restormer [7] | 32.92 / 0.961 | 31.22 / 0.942 | 26.13
Uformer-B [53] | 33.06 / 0.967 | 30.90 / 0.953 | 50.88
Stripformer [52] | 33.08 / 0.962 | 31.03 / 0.940 | 20
IRNeXt [18] | 33.16 / 0.962 | - / - | 14.75
FreANet | 33.21 / 0.962 | 31.11 / 0.937 | 12.74
The best and second-best scores are highlighted in bold and underlined, respectively. ↑ indicates that a higher score is better.
Table 4. Image dehazing comparisons on the synthetic and real dehazing test sets.
Methods | SOTS-Indoor [41] (PSNR↑ / SSIM↑) | SOTS-Outdoor [41] (PSNR↑ / SSIM↑) | Dense-Haze [42] (PSNR↑ / SSIM↑) | NH-HAZE [43] (PSNR↑ / SSIM↑) | #Params (M) | FLOPs (G)
GridDehazeNet [54] | 32.16 / 0.984 | 30.86 / 0.982 | - / - | 13.80 / 0.54 | 0.956 | 21.49
FFA-Net [6] | 36.39 / 0.989 | 33.57 / 0.984 | 14.39 / 0.45 | 19.87 / 0.69 | 4.45 | 287.8
AECR-Net [20] | 37.17 / 0.990 | - / - | 15.80 / 0.47 | 19.88 / 0.72 | 2.61 | 52.20
MAXIM [23] | 38.11 / 0.991 | 34.19 / 0.985 | - / - | - / - | 14.1 | -
Fourmer [14] | 37.32 / 0.990 | - / - | 15.95 / 0.49 | 19.91 / 0.72 | 1.29 | 20.6
OKNet-S [19] | 37.59 / 0.994 | 35.45 / 0.992 | 16.85 / 0.62 | 20.29 / 0.80 | 2.40 | 17.88
OKNet [19] | 40.79 / 0.996 | 37.68 / 0.995 | 16.92 / 0.64 | 20.48 / 0.80 | 4.72 | 39.71
FreANet-S | 39.54 / 0.995 | 36.46 / 0.993 | 17.39 / 0.65 | 20.11 / 0.81 | 1.79 | 19.77
FreANet | 41.62 / 0.997 | 38.39 / 0.995 | 17.53 / 0.67 | 20.52 / 0.82 | 5.37 | 40.29
The best and second-best scores are highlighted in bold and underlined, respectively. ↑ indicates that a higher score is better.
Table 5. Nighttime dehazing comparisons on the NHR [44] test set.
Method | OFSD [44] | HCD [55] | FocalNet [11] | FSNet [50] | OKNet [19] | FreANet (Ours)
PSNR↑ | 21.32 | 23.43 | 25.35 | 26.30 | 27.92 | 28.76
SSIM↑ | 0.804 | 0.953 | 0.969 | 0.976 | 0.979 | 0.978
The best and second-best scores are highlighted in bold and underlined, respectively. ↑ indicates that a higher score is better.
Table 6. Low-light image enhancement comparisons on the LOLv1 [45] and LOLv2 [46] test sets.
Method | LOLv1 [45] (PSNR↑ / SSIM↑) | LOLv2-Real [46] (PSNR↑ / SSIM↑) | LOLv2-Syn [46] (PSNR↑ / SSIM↑) | #Params (M)
DRBN [57] | 20.13 / 0.830 | 20.29 / 0.831 | 23.22 / 0.927 | 5.27
Restormer [7] | 22.43 / 0.823 | 19.94 / 0.827 | 21.41 / 0.830 | 26.13
MIRNet [21] | 24.14 / 0.830 | 20.02 / 0.820 | 21.94 / 0.876 | 31.76
UHDFour [29] | 23.09 / 0.870 | 21.78 / 0.870 | - / - | 28.52
SNR-Net [58] | 24.61 / 0.842 | 21.48 / 0.849 | 24.14 / 0.928 | 4.01
FreANet | 23.60 / 0.836 | 21.78 / 0.860 | 25.91 / 0.939 | 2.45
The best and second-best scores are highlighted in bold and underlined, respectively. ↑ indicates that a higher score is better.
Table 7. Ablation study on SOTS-Indoor.
Variant | MPA | RDB | MDFE (Spatial) | MDFE (Frequency) | PSNR↑ | SSIM↑ | #Params
(a) |  |  |  |  | 30.75 | 0.976 | 0.81 M
(b) |  | ✔ |  |  | 36.29 | 0.991 | 1.40 M
(c) | ✔ | ✔ |  |  | 37.06 | 0.993 | 1.59 M
(d) |  |  | ✔ |  | 33.36 | 0.984 | 0.66 M
(e) |  |  |  | ✔ | 25.06 | 0.941 | 0.66 M
(f) |  |  | ✔ | ✔ | 35.83 | 0.991 | 0.81 M
(g) |  |  |  |  | 36.65 | 0.992 | 1.01 M
(h) |  |  |  |  | 39.18 | 0.994 | 1.59 M
(i) | ✔ | ✔ | ✔ | ✔ | 39.54 | 0.995 | 1.78 M
The best score is highlighted in bold. ↑ indicates that a higher score is better. ✔ indicates that the corresponding component is included in the variant.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
