Search Results (38)

Search Parameters:
Keywords = blind deblurring

20 pages, 11332 KB  
Article
A Fast Nonlinear Sparse Model for Blind Image Deblurring
by Zirui Zhang, Zheng Guo, Zhenhua Xu, Huasong Chen, Chunyong Wang, Yang Song, Jiancheng Lai, Yunjing Ji and Zhenhua Li
J. Imaging 2025, 11(10), 327; https://doi.org/10.3390/jimaging11100327 - 23 Sep 2025
Viewed by 85
Abstract
Blind image deblurring, which requires simultaneous estimation of the latent image and blur kernel, constitutes a classic ill-posed problem. To address this, priors based on L2, L1, and Lp regularizations have been widely adopted. Building on this foundation and the successful experience of previous work, this paper introduces LN regularization, a novel nonlinear sparse regularization combining the Lp and L norms via nonlinear coupling. Statistical probability analysis demonstrates that LN regularization achieves stronger sparsity than traditional L2, L1, and Lp regularizations. Building upon the LN regularization, we propose a novel nonlinear sparse model for blind image deblurring. To optimize the proposed LN regularization, we introduce an Adaptive Generalized Soft-Thresholding (AGST) algorithm and further develop an efficient optimization strategy by integrating AGST with the Half-Quadratic Splitting (HQS) strategy. Extensive experiments conducted on synthetic datasets and real-world images demonstrate that the proposed nonlinear sparse model achieves superior deblurring performance while maintaining competitive computational efficiency.
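The optimization described above pairs an adaptive soft-thresholding step with half-quadratic splitting. Purely as a rough illustration (this is the standard generalized soft-thresholding scheme, not the paper's AGST, and the parameter names are illustrative), such a proximal step for an Lp penalty can be sketched as:

```python
import numpy as np

def gst(v, lam, p, n_iter=4):
    """Generalized soft-thresholding: approximate prox of lam * |x|^p (0 < p < 1).

    Generic sketch of the standard GST scheme, not the paper's adaptive AGST;
    'lam', 'p', and 'n_iter' are illustrative parameters.
    """
    # Threshold below which the prox is exactly zero
    tau = (2.0 * lam * (1.0 - p)) ** (1.0 / (2.0 - p)) \
        + lam * p * (2.0 * lam * (1.0 - p)) ** ((p - 1.0) / (2.0 - p))
    mag = np.abs(v)
    x = mag.copy()
    for _ in range(n_iter):  # fixed-point refinement of the shrinkage amount
        x = np.maximum(mag - lam * p * np.power(np.maximum(x, 1e-12), p - 1.0), 0.0)
    return np.where(mag > tau, np.sign(v) * x, 0.0)
```

Inside an HQS loop, a step like this would alternate with a quadratic data-fit update (see the Fourier-domain sketch further down this list).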

14 pages, 3017 KB  
Article
Investigation of Blind Deconvolution Method with Total Variation Regularization in Cardiac Cine Magnetic Resonance Imaging
by Kyuseok Kim and Youngjin Lee
Electronics 2025, 14(4), 743; https://doi.org/10.3390/electronics14040743 - 13 Feb 2025
Viewed by 1353
Abstract
Various studies have been conducted to reduce the blurring caused by movement in cine magnetic resonance imaging (MRI) of the heart. This study proposed a blind deconvolution method using a total variation regularization algorithm to remove blurring in cardiac cine magnetic resonance (MR) images. The MR data were acquired using a rat cardiac cine sequence in an open format. We investigated a blind deconvolution method with total variation regularization, incorporating a three-dimensional point-spread function, on cardiac cine MRI. The gradient magnitude (GM) and perceptual sharpness index (PSI) were used to evaluate the usefulness of the proposed deblurring method. We confirmed that the proposed method reduces temporal blur relatively efficiently compared with the generalized variation-based deblurring algorithm. In particular, the GM and PSI values of the cardiac cine MR image corrected using the proposed method were improved by approximately 7.59 and 4.18 times, respectively, compared with the degraded image. We achieved improved image quality by validating a blind deconvolution method using a total variation regularization algorithm on the cardiac cine MR images of small animals.
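For readers who want to experiment, a minimal non-blind sketch of TV-regularized deconvolution (a proximal-gradient loop with a TV denoising step) is shown below; the blind variant studied in this paper would additionally alternate a PSF update, and the PSF, weight, and iteration count here are assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve
from skimage.restoration import denoise_tv_chambolle

def tv_deconv(y, psf, weight=0.02, n_iter=50):
    """Proximal-gradient sketch of TV-regularized (non-blind) deconvolution.

    Assumes the PSF is known; a blind scheme such as the paper's would
    alternate a PSF estimation step as well. 'weight' and 'n_iter' are
    illustrative settings, and 'y' is a float image in [0, 1].
    """
    psf = psf / psf.sum()                          # unit gradient step is then safe
    psf_flip = psf[::-1, ::-1]                     # adjoint of the blur operator
    x = y.copy()
    for _ in range(n_iter):
        resid = fftconvolve(x, psf, mode="same") - y       # data-fit residual
        x = x - fftconvolve(resid, psf_flip, mode="same")  # gradient step
        x = denoise_tv_chambolle(x, weight=weight)         # TV proximal step
    return np.clip(x, 0.0, 1.0)
```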

16 pages, 8261 KB  
Article
Enhanced Deblurring for Smart Cabinets in Dynamic and Low-Light Scenarios
by Yali Sun, Siyang Hu, Kai Xie, Chang Wen, Wei Zhang and Jianbiao He
Electronics 2025, 14(3), 488; https://doi.org/10.3390/electronics14030488 - 25 Jan 2025
Viewed by 775
Abstract
In this paper, we propose a novel method, named MIMO-IMF (Multi-input Multi-output U-Net Integrated Motion Framework), to address dynamic blur and low-light issues in smart cabinets. The method incorporates a Frequency-Domain Adaptive Fusion Module (FDAFM), built on the blind deblurring framework MIMO-UNet (Multi-input Multi-output U-Net), to improve the capture of high-frequency information and enhance the accuracy of blur region recovery. Additionally, a low-light luminance information extraction module (IFEM) is designed to complement the multi-scale features of the FDAFM by extracting valuable luminance information, significantly improving the efficiency of merchandise deblurring under low-light conditions. To further optimize the deblurring effect, we introduce an enhanced residual block structure and a novel loss function. The refined multi-scale residual block, combined with the FDAFM, better restores image details by refining frequency bands across different scales. The new loss function improves the model's performance in low-light and dynamic blur scenarios by effectively balancing luminance and structural information. Experiments on the GOPRO dataset and the self-developed MBSI dataset show that our method outperforms the original model, achieving a PSNR improvement of 0.21 dB on the public dataset and 0.23 dB on the MBSI dataset.
(This article belongs to the Special Issue Image Processing Based on Convolution Neural Network)

34 pages, 30049 KB  
Article
Blind Infrared Remote-Sensing Image Deblurring Algorithm via Edge Composite-Gradient Feature Prior and Detail Maintenance
by Xiaohang Zhao, Mingxuan Li, Ting Nie, Chengshan Han and Liang Huang
Remote Sens. 2024, 16(24), 4697; https://doi.org/10.3390/rs16244697 - 16 Dec 2024
Viewed by 1293
Abstract
The problem of blind image deblurring remains a challenging inverse problem due to the ill-posed nature of estimating unknown blur kernels and latent images within the Maximum A Posteriori (MAP) framework. To address this challenge, traditional methods often rely on sparse regularization priors to mitigate the uncertainty inherent in the problem. In this paper, we propose a novel blind deblurring model based on the MAP framework that leverages Composite-Gradient Feature (CGF) variations in edge regions after image blurring. This prior term is specifically designed to exploit the high sparsity of sharp edge regions in clear images, thereby effectively alleviating the ill-posedness of the problem. Unlike existing methods that focus on local gradient information, our approach focuses on the aggregation of edge regions, enabling better detection of both sharp and smoothed edges in blurred images. In the blur kernel estimation process, we enhance the accuracy of the kernel by assigning effective edge information from the blurred image to the smoothed intermediate latent image, preserving critical structural details lost during the blurring process. To further improve edge-preserving restoration, we introduce an adaptive regularizer that outperforms traditional total variation regularization by better maintaining edge integrity in both clear and blurred images. The proposed variational model is efficiently implemented using alternating iterative techniques. Extensive numerical experiments and comparisons with state-of-the-art methods demonstrate the superior performance of our approach, highlighting its effectiveness and real-world applicability in diverse image-restoration tasks.

17 pages, 6702 KB  
Article
A Variational Neural Network Based on Algorithm Unfolding for Image Blind Deblurring
by Shaoqing Gong, Yeran Wang, Guangyu Yang, Weibo Wei, Junli Zhao and Zhenkuan Pan
Appl. Sci. 2024, 14(24), 11742; https://doi.org/10.3390/app142411742 - 16 Dec 2024
Cited by 1 | Viewed by 1196
Abstract
Image blind deblurring is an ill-posed inverse problem in image processing. While deep learning approaches have demonstrated effectiveness, they often lack interpretability and require extensive data. To address these limitations, we propose a novel variational neural network based on algorithm unfolding. The model is solved using the half-quadratic splitting (HQS) method and proximal gradient descent. For blur kernel estimation, we introduce an L0 regularizer to constrain the gradient information and use the fast Fourier transform (FFT) to solve the resulting sub-problems, thereby improving accuracy. Image restoration is initiated with Gabor filters for the convolution kernel, and the activation function is approximated using a Gaussian radial basis function (RBF). Additionally, two attention mechanisms improve feature selection. Experimental results on various datasets demonstrate that our model outperforms state-of-the-art algorithm unfolding networks and other blind deblurring models. Our approach enhances interpretability and generalization while using fewer data and parameters.
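Several entries in this list, including this one, rely on the same building block: with a quadratic data term, each HQS sub-problem has a closed-form solution in the Fourier domain. A generic sketch under a periodic-boundary assumption, with illustrative variable names, is:

```python
import numpy as np

def fft_quadratic_solve(blurred, kernel, aux, lam):
    """Closed-form minimizer of ||k*x - y||^2 + lam*||x - aux||^2 (periodic boundaries).

    Generic HQS sub-problem sketch; 'aux' stands for the auxiliary (split)
    variable and 'lam' for the penalty weight, both illustrative names.
    """
    H, W = blurred.shape
    K = np.fft.fft2(kernel, s=(H, W))   # transfer function of the blur kernel
    Y = np.fft.fft2(blurred)
    A = np.fft.fft2(aux)
    return np.real(np.fft.ifft2((np.conj(K) * Y + lam * A) / (np.abs(K) ** 2 + lam)))
```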

23 pages, 50846 KB  
Article
Blind Deblurring Method for CASEarth Multispectral Images Based on Inter-Band Gradient Similarity Prior
by Mengying Zhu, Jiayin Liu and Feng Wang
Sensors 2024, 24(19), 6259; https://doi.org/10.3390/s24196259 - 27 Sep 2024
Viewed by 1136
Abstract
Multispectral remote sensing images contain abundant information about the distribution and reflectance of ground objects, playing a crucial role in target detection, environmental monitoring, and resource exploration. However, due to the complexity of the imaging process in multispectral remote sensing, image blur is inevitable, and the blur kernel is typically unknown. In recent years, many researchers have focused on blind image deblurring, but most of these methods are based on single-band images; when applied to CASEarth satellite multispectral images, the inter-band spectral correlation goes unexploited. To address this limitation, this paper proposes a novel approach that leverages the characteristics of multispectral data more effectively. We introduce an inter-band gradient similarity prior and incorporate it into the patch-wise minimal pixel (PMP)-based deblurring model. This approach aims to utilize the spectral correlation across bands to improve deblurring performance. A solution algorithm is established by combining the half-quadratic splitting method with alternating minimization. Subjectively, the final experiments on CASEarth multispectral images demonstrate that the proposed method offers good visual quality while enhancing edge sharpness. Objectively, our method leads to an average improvement in point sharpness by a factor of 1.6, an increase in edge strength level by a factor of 1.17, and an enhancement in RMS contrast by a factor of 1.11.
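The exact form of the inter-band gradient similarity prior is defined in the paper; purely to illustrate the underlying idea of shared edge structure across spectral bands, a simple normalized gradient-magnitude correlation between two bands could look like this hypothetical helper:

```python
import numpy as np

def gradient_similarity(band_a, band_b, eps=1e-8):
    """Normalized correlation between gradient magnitudes of two spectral bands.

    Hypothetical helper that only illustrates shared edge structure across
    bands; the paper's inter-band gradient similarity prior is formulated
    inside its variational model and differs from this score.
    """
    ga = np.hypot(*np.gradient(band_a.astype(float)))   # gradient magnitude, band A
    gb = np.hypot(*np.gradient(band_b.astype(float)))   # gradient magnitude, band B
    ga = (ga - ga.mean()) / (ga.std() + eps)
    gb = (gb - gb.mean()) / (gb.std() + eps)
    return float((ga * gb).mean())
```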
(This article belongs to the Collection Remote Sensing Image Processing)

15 pages, 20041 KB  
Article
Investigation of Deconvolution Method with Adaptive Point Spread Function Based on Scintillator Thickness in Wavelet Domain
by Kyuseok Kim, Bo Kyung Cha, Hyun-Woo Jeong and Youngjin Lee
Bioengineering 2024, 11(4), 330; https://doi.org/10.3390/bioengineering11040330 - 28 Mar 2024
Cited by 2 | Viewed by 1633
Abstract
In recent years, indirect digital radiography detectors have been actively studied to improve radiographic image performance with low radiation exposure. This study aimed to achieve low-dose radiation imaging with a thick scintillation detector while simultaneously obtaining the resolution of a thin scintillation detector. The proposed method predicts the optimal point spread function (PSF) between thin and thick scintillation detectors by considering image quality assessment (IQA). The search for the optimal PSF was performed on each sub-band in the wavelet domain to improve restoration accuracy. In the experiments, the edge preservation index (EPI) values of the non-blind deblurred image with a blur of σ = 5.13 pixels and the image obtained with optimal parameters from the thick scintillator using the proposed method were approximately 0.62 and 0.76, respectively. The coefficient of variation (COV) values for the two images were approximately 1.02 and 0.63, respectively. The proposed method was validated through simulations and experimental results, and its viability is expected to be verified on various radiological imaging systems.
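The paper's PSF search is IQA-driven and carried out per wavelet sub-band. A minimal skeleton of that per-sub-band processing, using PyWavelets with a placeholder restoration operator (the wavelet choice and `deblur_fn` are assumptions, not the paper's settings), might look like:

```python
import pywt

def process_subbands(image, deblur_fn, wavelet="db4", level=2):
    """Apply a restoration operator to each wavelet sub-band separately.

    Skeleton of the per-sub-band strategy only; 'deblur_fn' stands in for
    whichever PSF-specific deconvolution an IQA-driven search would select,
    and the wavelet and decomposition level are assumptions.
    """
    coeffs = pywt.wavedec2(image, wavelet, level=level)
    out = [deblur_fn(coeffs[0])]                     # approximation band
    for cH, cV, cD in coeffs[1:]:                    # detail bands, coarse to fine
        out.append(tuple(deblur_fn(c) for c in (cH, cV, cD)))
    return pywt.waverec2(out, wavelet)
```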
(This article belongs to the Special Issue Radiological Imaging and Its Applications)

17 pages, 7142 KB  
Article
Performance Evaluation of L1-Norm-Based Blind Deconvolution after Noise Reduction with Non-Subsampled Contourlet Transform in Light Microscopy Images
by Kyuseok Kim and Ji-Youn Kim
Appl. Sci. 2024, 14(5), 1913; https://doi.org/10.3390/app14051913 - 26 Feb 2024
Cited by 3 | Viewed by 1619
Abstract
Noise and blurring in light microscope images are representative factors that hinder accurate identification of cellular and subcellular structures in biological research. In this study, a method for l1-norm-based blind deconvolution after noise reduction with the non-subsampled contourlet transform (NSCT) was designed and applied to light microscope images to analyze its feasibility. The designed NSCT-based algorithm first separates the low- and high-frequency components. Then, the restored microscope image and the deblurred and denoised images were compared and evaluated. In both the simulations and experiments, the average coefficient of variation (COV) of the image produced by the proposed NSCT-based algorithm was similar to that of the denoised image and significantly better than that of the degraded image. In particular, we confirmed that the restored image in the experiment improved the COV by approximately 2.52 times compared with the deblurred image, and the proposed NSCT-based algorithm showed the best performance in both peak signal-to-noise ratio and edge preservation index in the simulation. In conclusion, the proposed algorithm was successfully modeled, and the applicability of the proposed method to light microscope images was demonstrated using various quantitative evaluation indices.
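Both this entry and the wavelet-domain study above report the coefficient of variation (COV) as a noise measure. For reference, the metric itself is simply (the choice of region of interest is left unspecified here, as the papers define their own ROIs):

```python
import numpy as np

def coefficient_of_variation(roi):
    """COV = standard deviation / mean over a region of interest (ROI).

    Only the definition of the reported metric; the specific ROIs used in
    these studies are not reproduced here.
    """
    roi = np.asarray(roi, dtype=float)
    return float(roi.std() / (roi.mean() + 1e-12))
```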

15 pages, 14034 KB  
Article
HPG-GAN: High-Quality Prior-Guided Blind Face Restoration Generative Adversarial Network
by Xu Deng, Hao Zhang and Xiaojie Li
Electronics 2023, 12(16), 3418; https://doi.org/10.3390/electronics12163418 - 11 Aug 2023
Cited by 3 | Viewed by 2751
Abstract
To address the problems of low resolution, compression artifacts, complex noise, and color loss in image restoration, we propose a High-Quality Prior-Guided Blind Face Restoration Generative Adversarial Network (HPG-GAN). It mainly consists of a Coarse Restoration Sub-Network (CR-Net) and a Fine Restoration Sub-Network (FR-Net). HPG-GAN extracts high-quality structural and textural priors and facial feature priors from coarse restoration images to reconstruct clear and high-quality facial images. FR-Net includes the Facial Feature Enhancement Module (FFEM) and the Asymmetric Feature Fusion Module (AFFM). FFEM enhances facial feature information using high-definition facial feature priors obtained from ArcFace. AFFM fuses and selects asymmetric high-quality structural and textural information from ResNet34 to recover overall structural and textural information. Comparative evaluations on synthetic and real-world datasets demonstrate superior performance and visual restoration quality compared to state-of-the-art methods, and ablation experiments validate the importance of each module. These results show HPG-GAN to be an effective and robust blind face deblurring and restoration network.
(This article belongs to the Special Issue Research Advances in Image Processing and Computer Vision)

26 pages, 7328 KB  
Article
Total Fractional-Order Variation-Based Constraint Image Deblurring Problem
by Shahid Saleem, Shahbaz Ahmad and Junseok Kim
Mathematics 2023, 11(13), 2869; https://doi.org/10.3390/math11132869 - 26 Jun 2023
Cited by 4 | Viewed by 1707
Abstract
When deblurring an image, ensuring that the restored intensities are strictly non-negative is crucial. However, current numerical techniques often fail to consistently produce favorable results, leading to negative intensities that contribute to significant dark regions in the restored images. To address this, our study proposes a mathematical model for non-blind image deblurring based on total fractional-order variational principles. The proposed model not only guarantees strictly positive intensity values but also constrains the intensities to a specified range. By removing negative intensities or constraining them within the prescribed range, we can significantly enhance the quality of deblurred images. The key concept in this paper is converting the constrained total fractional-order variational image deblurring problem into an unconstrained one through the augmented Lagrangian method. To facilitate this conversion and improve convergence, we describe new numerical algorithms and introduce a novel circulant preconditioning matrix, which effectively overcomes the slow convergence typically encountered when using the conjugate gradient method within the augmented Lagrangian framework. The proposed approach is validated through computational tests, demonstrating its effectiveness and viability in practical applications.
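The circulant-preconditioning idea generalizes beyond the fractional-order model used here. As a hedged sketch of the generic pattern only, the following solves a Tikhonov-regularized deblurring normal equation by conjugate gradients with an FFT-diagonalized circulant preconditioner; the regularizer, parameter names, and boundary handling are assumptions, not the paper's formulation.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.sparse.linalg import LinearOperator, cg

def deblur_pcg(y, psf, mu=1e-2, maxiter=100):
    """Circulant-preconditioned CG for the normal equation (K^T K + mu*I) x = K^T y.

    Generic sketch with a simple Tikhonov term; the paper's preconditioner is
    built for a total fractional-order variation model instead. 'same'-mode
    convolution is used, so the adjoint is only approximate near the borders.
    """
    H, W = y.shape
    psf = psf / psf.sum()
    psf_flip = psf[::-1, ::-1]
    Kf = np.fft.fft2(psf, s=(H, W))              # circulant approximation of K

    def matvec(v):                               # apply K^T K + mu*I
        x = v.reshape(H, W)
        kx = fftconvolve(x, psf, mode="same")
        return (fftconvolve(kx, psf_flip, mode="same") + mu * x).ravel()

    def precond(v):                              # FFT-diagonalized circulant inverse
        X = np.fft.fft2(v.reshape(H, W))
        return np.real(np.fft.ifft2(X / (np.abs(Kf) ** 2 + mu))).ravel()

    A = LinearOperator((H * W, H * W), matvec=matvec)
    M = LinearOperator((H * W, H * W), matvec=precond)
    b = fftconvolve(y, psf_flip, mode="same").ravel()
    x, _ = cg(A, b, M=M, maxiter=maxiter)
    return x.reshape(H, W)
```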
(This article belongs to the Special Issue Advances of Mathematical Image Processing)

12 pages, 8991 KB  
Article
An Underwater Image Enhancement Method for a Preprocessing Framework Based on Generative Adversarial Network
by Xiao Jiang, Haibin Yu, Yaxin Zhang, Mian Pan, Zhu Li, Jingbiao Liu and Shuaishuai Lv
Sensors 2023, 23(13), 5774; https://doi.org/10.3390/s23135774 - 21 Jun 2023
Cited by 14 | Viewed by 4650
Abstract
This paper presents an efficient underwater image enhancement method, named ECO-GAN, to address the challenges of color distortion, low contrast, and motion blur in underwater robot photography. The proposed method is built upon a preprocessing framework using a generative adversarial network. ECO-GAN incorporates a convolutional neural network that specifically targets three underwater issues: motion blur, low brightness, and color deviation. To optimize computation and inference speed, an encoder is employed to extract features, while different enhancement tasks are handled by dedicated decoders. Moreover, ECO-GAN employs cross-stage fusion modules between the decoders to strengthen the connections and enhance the quality of output images. The model is trained using supervised learning with paired datasets, enabling blind image enhancement without additional physical knowledge or prior information. Experimental results demonstrate that ECO-GAN effectively achieves denoising, deblurring, and color deviation removal simultaneously. Compared with methods relying on individual modules or simple combinations of multiple modules, our proposed method achieves superior underwater image enhancement and offers the flexibility to expand into multiple underwater image enhancement functions.
(This article belongs to the Special Issue Deep Learning for Computer Vision and Image Processing Sensors)

24 pages, 7656 KB  
Article
Adaptive Multi-Scale Fusion Blind Deblurred Generative Adversarial Network Method for Sharpening Image Data
by Baoyu Zhu, Qunbo Lv and Zheng Tan
Drones 2023, 7(2), 96; https://doi.org/10.3390/drones7020096 - 30 Jan 2023
Cited by 12 | Viewed by 3763
Abstract
Drone and aerial remote sensing images are widely used, but their imaging environment is complex and prone to image blurring. Existing CNN deblurring algorithms usually use multi-scale fusion to extract features in order to make full use of the information in blurred aerial remote sensing images; however, images with different degrees of blurring share the same fusion weights, so errors accumulate layer by layer during feature fusion. Based on the physical properties of image blurring, this paper proposes an adaptive multi-scale fusion blind deblurred generative adversarial network (AMD-GAN), which uses the degree of image blurring to guide the adjustment of the multi-scale fusion weights, effectively suppressing errors in the fusion process and enhancing the interpretability of the feature layers. This work demonstrates the necessity and effectiveness of prior information on image blurring levels in image deblurring tasks: by studying the image blurring level, the network model focuses more on the basic physical features of image blurring. The paper also proposes an image blurring degree description model, which can effectively represent the blurring degree of aerial remote sensing images. Comparative experiments show that the proposed algorithm effectively recovers images with different degrees of blur, produces high-quality images with clear texture details, outperforms the comparison algorithms in both qualitative and quantitative evaluation, and effectively improves object detection performance on blurred aerial remote sensing images. Moreover, on the public RealBlur-R dataset, the algorithm reaches an average PSNR of 41.02 dB, surpassing the latest SOTA algorithms.
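The blur-degree description model is specific to this paper. A common, much simpler proxy for blur level that could in principle drive such adaptive fusion weights is the variance of the Laplacian, shown here purely for illustration:

```python
import numpy as np
from scipy.ndimage import laplace

def blur_degree(image):
    """Variance-of-Laplacian sharpness score: lower values indicate stronger blur.

    A standard proxy shown only to illustrate how a scalar blur-level estimate
    could modulate multi-scale fusion weights; the paper defines its own
    blur-degree description model.
    """
    return float(laplace(np.asarray(image, dtype=float)).var())
```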
(This article belongs to the Topic Artificial Intelligence in Sensors)

32 pages, 13215 KB  
Article
Performance Analysis of Classification and Detection for PV Panel Motion Blur Images Based on Deblurring and Deep Learning Techniques
by Abdullah Ahmed Al-Dulaimi, Muhammet Tahir Guneser, Alaa Ali Hameed, Fausto Pedro García Márquez, Norma Latif Fitriyani and Muhammad Syafrudin
Sustainability 2023, 15(2), 1150; https://doi.org/10.3390/su15021150 - 7 Jan 2023
Cited by 7 | Viewed by 2874
Abstract
Detecting snow-covered solar panels is crucial, as it allows snow to be removed more efficiently using heating techniques and the photovoltaic system to be restored to proper operation. This paper presents classification and detection performance analyses for snow-covered solar panel images. The classification analysis consists of two cases, and the detection analysis consists of one case based on three backbones. In this study, five deep learning models, namely visual geometry group-16 (VGG-16), VGG-19, residual neural network-18 (RESNET-18), RESNET-50, and RESNET-101, are used to classify solar panel images. The models are trained, validated, and tested under different conditions. The first classification case is performed on the original dataset without preprocessing. In the second case, extreme climate conditions are simulated by generating motion noise, and the dataset is replicated using an upsampling technique to handle the class-imbalance issue. For the detection case, a region-based convolutional neural network (RCNN) detector is used to detect the three categories of solar panels, which are all_snow, no_snow, and partial; the dataset for these categories is taken from the second classification case. Finally, we propose a blind image deblurring algorithm (BIDA) that can serve as a preprocessing step before the CNN (BIDA-CNN) model. The accuracy of the models was compared and verified; the results show that the proposed CNN-based blind image deblurring algorithm (BIDA-CNN) outperformed the other models evaluated in this study.
(This article belongs to the Special Issue Applied Artificial Intelligence for Sustainability)

16 pages, 48353 KB  
Article
MedDeblur: Medical Image Deblurring with Residual Dense Spatial-Asymmetric Attention
by S. M. A. Sharif, Rizwan Ali Naqvi, Zahid Mehmood, Jamil Hussain, Ahsan Ali and Seung-Won Lee
Mathematics 2023, 11(1), 115; https://doi.org/10.3390/math11010115 - 27 Dec 2022
Cited by 23 | Viewed by 4036
Abstract
Medical image acquisition devices are susceptible to producing blurry images due to respiratory and patient movement. Despite the notable impact of such blind motion blur, medical image deblurring remains under-explored. This study proposes an end-to-end scale-recurrent deep network to learn deblurring from multi-modal medical images. The proposed network comprises a novel residual dense block with spatial-asymmetric attention to recover salient information while learning medical image deblurring. The performance of the proposed method has been extensively evaluated and compared with existing deblurring methods. The experimental results demonstrate that the proposed method can remove blur from medical images without introducing visually disturbing artifacts. Furthermore, it outperforms deep deblurring methods in qualitative and quantitative evaluation by a noticeable margin. The applicability of the proposed method has also been verified by incorporating it into various medical image analysis tasks, such as segmentation and detection. The proposed deblurring method helps improve the performance of these tasks by removing blur from blurry medical inputs.

21 pages, 6023 KB  
Article
Blind Deblurring of Remote-Sensing Single Images Based on Feature Alignment
by Baoyu Zhu, Qunbo Lv, Yuanbo Yang, Xuefu Sui, Yu Zhang, Yinhui Tang and Zheng Tan
Sensors 2022, 22(20), 7894; https://doi.org/10.3390/s22207894 - 17 Oct 2022
Cited by 14 | Viewed by 3720
Abstract
Motion blur recovery is a common task in remote sensing image processing that can effectively improve the accuracy of detection and recognition. Among existing motion blur recovery methods, algorithms based on deep learning do not rely on a priori knowledge and thus have better generalizability. However, existing deep learning algorithms usually suffer from feature misalignment, resulting in a high probability of missing details or errors in the recovered images. This paper proposes an end-to-end generative adversarial network (SDD-GAN) for single-image motion deblurring to address this problem and to optimize the recovery of blurred remote sensing images. First, a feature alignment module (FAFM) is applied in the generator to learn the offsets between feature maps, adjust the position of each sample in the convolution kernel, and align the feature maps according to context. Second, a feature importance selection module is introduced in the generator to adaptively filter the feature maps in the spatial and channel domains, preserving reliable details in the feature maps and improving the performance of the algorithm. In addition, this paper constructs a remote sensing dataset (RSDATA) based on the mechanism of image blurring caused by the high-speed orbital motion of satellites. Comparative experiments are conducted on the self-built remote sensing dataset and public datasets, as well as on real blurred remote sensing images taken by an in-orbit satellite (CX-6(02)). The results show that the proposed algorithm outperforms the comparison algorithms in terms of both quantitative evaluation and visual effects.
(This article belongs to the Section Remote Sensors)
