
Search Results (217)

Search Parameters:
Keywords = image deblurring

21 pages, 11307 KB  
Article
A Symmetry-Preserving Extrapolated Primal-Dual Hybrid Gradient Method for Saddle-Point Problems
by Xiayang Zhang, Wenzhuo Li, Bowen Chang, Wei Liu and Shiyu Zhang
Axioms 2026, 15(3), 219; https://doi.org/10.3390/axioms15030219 - 16 Mar 2026
Viewed by 187
Abstract
The primal-dual hybrid gradient (PDHG) method is widely used for convex–concave saddle-point problems, yet its extrapolated variants are typically asymmetric because only one side is extrapolated. We propose a symmetry-preserving refinement, E-PDHG, which performs dual-side extrapolation followed by an explicit correction step. Under standard step-size conditions, we establish global convergence for all η ∈ (−1, 1) and derive a pointwise (non-ergodic) O(1/t) rate for the last iterate. The method does not improve the asymptotic complexity order of PDHG; instead, it enlarges the practically stable parameter region while retaining the same per-iteration cost. Numerical experiments on image deblurring/inpainting and additional machine learning benchmarks (logistic regression and LASSO) demonstrate improved finite-iteration stability and efficiency. Full article
(This article belongs to the Section Mathematical Analysis)
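The abstract does not give E-PDHG's exact update, but the classic PDHG iteration it refines can be sketched on a toy saddle-point problem min_x max_y ⟨Kx, y⟩ + ½‖x − a‖² − ½‖y‖². Everything below is illustrative: the problem choice, step sizes, and the single-sided extrapolation weight `eta` are assumptions, not the paper's algorithm.

```python
import numpy as np

def pdhg(K, a, tau=0.4, sigma=0.4, eta=1.0, iters=2000):
    """PDHG for the toy saddle-point problem
        min_x max_y  <K x, y> + 0.5*||x - a||^2 - 0.5*||y||^2,
    with extrapolation weight eta (eta = 1 is the classic scheme).
    Step sizes should satisfy tau * sigma * ||K||^2 < 1."""
    m, n = K.shape
    x = np.zeros(n)
    y = np.zeros(m)
    x_bar = x.copy()
    for _ in range(iters):
        # dual ascent at the extrapolated primal point
        # (prox of g*(y) = 0.5*||y||^2 is v / (1 + sigma))
        y = (y + sigma * (K @ x_bar)) / (1.0 + sigma)
        # primal descent (prox of f(x) = 0.5*||x - a||^2)
        x_new = (x - tau * (K.T @ y) + tau * a) / (1.0 + tau)
        # extrapolation step
        x_bar = x_new + eta * (x_new - x)
        x = x_new
    return x

rng = np.random.default_rng(0)
K = rng.standard_normal((5, 8)) / 5.0
a = rng.standard_normal(8)
# The saddle point satisfies (I + K^T K) x = a, giving a closed-form check.
x_star = np.linalg.solve(np.eye(8) + K.T @ K, a)
x_hat = pdhg(K, a)
```

Because both sides of this toy problem are strongly convex/concave, the iterates converge linearly to the closed-form saddle point, which makes the sketch easy to verify numerically.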

23 pages, 8147 KB  
Article
SDENet: A Novel Approach for Single Image Depth of Field Extension
by Xu Zhang, Miaomiao Wen, Junyang Jia and Yan Liu
Algorithms 2026, 19(3), 216; https://doi.org/10.3390/a19030216 - 13 Mar 2026
Viewed by 187
Abstract
Traditional hardware-based approaches for depth-of-field extension (DOF-E), such as optimized lens design or focus-stacking via layer scanning, are often plagued by bulkiness and prohibitive costs. Meanwhile, conventional multi-focus image fusion algorithms demand precise spatial alignment, a challenge that becomes particularly acute in applications like microscopy. To address these limitations, this paper proposes a novel single-image DOF-E method termed SDENet. The method adopts an encoder–decoder architecture enhanced with multi-scale self-attention and depth enhancement modules, enabling the transformation of a single partially focused image into a fully focused output while effectively recovering regions outside the original depth of field (DOF). To support model training and performance evaluation, we introduce a dedicated dataset (MSED) containing 1772 pairs of single-focus and all-focus images covering diverse scenes. Experimental results on multiple datasets verify that SDENet significantly outperforms state-of-the-art deblurring methods, achieving a PSNR of 26.98 dB and an SSIM of 0.846 on the DPDD dataset, a substantial improvement in clarity and visual coherence over existing techniques. Furthermore, SDENet demonstrates competitive performance with multi-image fusion methods while requiring only a single input. Full article
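Figures like the 26.98 dB PSNR quoted above follow the standard definition 10·log₁₀(range²/MSE). A minimal sketch, assuming images normalized to a data range of 1.0:

```python
import numpy as np

def psnr(pred, target, data_range=1.0):
    """Peak Signal-to-Noise Ratio in dB for images within [0, data_range]."""
    mse = np.mean((np.asarray(pred, float) - np.asarray(target, float)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((8, 8))
noisy = ref + 0.1          # uniform error of 0.1 -> MSE = 0.01
value = psnr(noisy, ref)   # 10 * log10(1 / 0.01) = 20.0 dB
```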

19 pages, 5400 KB  
Article
Image Deblurring via Frequency-Domain Feature Enhanced Convolutional Neural Networks
by Yecai Guo, Lixiang Ma and Yangyang Zhang
Sensors 2026, 26(6), 1784; https://doi.org/10.3390/s26061784 - 12 Mar 2026
Viewed by 201
Abstract
To address the insufficient restoration of texture details in deblurred images and the inadequate learning of frequency-domain features, an image deblurring algorithm based on frequency-domain feature enhancement and convolutional neural networks is proposed. In this architecture, firstly, a Fourier residual module with a parallel structure is constructed to achieve collaborative learning and modeling of spatial- and frequency-domain features, improving frequency-domain feature learning and the restoration of texture details; secondly, a gate-controlled feed-forward unit acts on the Fourier residual module to further enhance the nonlinear expressive ability of the algorithm; thirdly, an improved supervised attention module is added to the decoder to promote more effective capture of key features for image reconstruction; finally, the weighted sum of a spatial-domain Charbonnier loss and a frequency-domain loss is defined as the total loss function. To verify the performance of the proposed algorithm, we conducted experiments on the GOPRO and HIDE datasets. On GOPRO, we obtained an SSIM of 0.961 and an LPIPS of 0.0278; on HIDE, an SSIM of 0.941 and an LPIPS of 0.0286. The parameter count and running time on GOPRO were 1.197 and 9.15 × 10⁶, respectively. Among all compared algorithms, these values are the best, while the PSNR of the proposed algorithm is very close to that of the latest comparison algorithm and is suboptimal. In summary, experimental results demonstrate that the proposed algorithm effectively removes blur while better preserving image details and edges, giving it practical value in computer vision tasks. Full article
(This article belongs to the Section Sensing and Imaging)
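The total loss described above — a weighted sum of a spatial-domain Charbonnier term and a frequency-domain term — can be sketched as follows. The weight `lam`, the ε value, and the L1-on-FFT form of the frequency loss are assumptions; the abstract does not specify them.

```python
import numpy as np

def charbonnier(pred, target, eps=1e-3):
    """Charbonnier (smoothed L1) loss: mean of sqrt(diff^2 + eps^2)."""
    return np.mean(np.sqrt((pred - target) ** 2 + eps ** 2))

def frequency_loss(pred, target):
    """L1 distance between the 2-D Fourier spectra of the two images."""
    return np.mean(np.abs(np.fft.fft2(pred) - np.fft.fft2(target)))

def total_loss(pred, target, lam=0.1):
    """Weighted sum of the spatial Charbonnier and frequency-domain losses."""
    return charbonnier(pred, target) + lam * frequency_loss(pred, target)
```

Note that the Charbonnier term never reaches exactly zero: for identical images it equals ε, which is what smooths the L1 kink at the origin.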

24 pages, 28764 KB  
Article
Restoration of Non-Uniform Motion-Blurred Star Images Based on Dynamic Strip Attention
by Jixin Han, Zhaodong Niu and Jun He
J. Imaging 2026, 12(3), 103; https://doi.org/10.3390/jimaging12030103 - 27 Feb 2026
Viewed by 235
Abstract
When capturing star images in long-exposure mode, relative motion between the stars, space objects, and the observation camera produces strip-like trails of varying direction and length, seriously degrading image quality and centroid-positioning accuracy. Traditional star-image restoration methods are prone to ringing artifacts and cannot restore non-uniformly blurred star images. To address this problem, this paper proposes a star image restoration network based on a dynamic strip attention mechanism. Firstly, a Multi-scale Dynamic Strip Pooling Module is designed to adaptively extract blurred features of different lengths and directions by dynamically adjusting the strip convolution. Then, a Multi-scale Feature Fusion Module is designed to fuse multi-level features and reduce the loss of image detail for stars and space objects. Experimental results demonstrate that the proposed method achieves a PSNR of 84.08 and an SSIM of 0.9928 on a 16-bit simulated dataset, outperforming both traditional methods and other deep learning-based approaches. In particular, star-point recognition accuracy increases by 174% compared with unprocessed images. Furthermore, the network is validated on the real-world spotGEO dataset, where the average number of successfully recognized star points increases by 57% compared with direct processing of the original images. Full article
(This article belongs to the Section Image and Video Processing)

48 pages, 8858 KB  
Review
Advances in Medical Image Processing for Early Breast Cancer Detection: Classical Techniques and Deep Learning Perspectives
by Wenxian Jin and Barmak Honarvar Shakibaei Asli
Electronics 2026, 15(4), 790; https://doi.org/10.3390/electronics15040790 - 12 Feb 2026
Viewed by 386
Abstract
Breast cancer is the most common malignancy among women and a leading cause of cancer-related mortality, making early and accurate detection essential. This review summarises advances in breast imaging and computational diagnostics across mammography, ultrasound, and magnetic resonance imaging (MRI), highlighting challenges in differentiating benign from malignant lesions and identifying rarer tumour types. Key preprocessing steps—denoising, deblurring, and contrast enhancement—are reviewed as they improve image quality prior to analysis. Classical methods (e.g., thresholding, edge detection, and region growing) are compared with deep learning approaches for segmentation and classification. CNNs, RNNs, and emerging transformer-based models consistently outperform handcrafted pipelines, with representative studies reporting 5–15% gains in AUC/accuracy and deep models achieving AUC > 0.85–0.95 on several benchmarks. The review also discusses dataset constraints, common evaluation metrics (AUC, Dice, sensitivity, specificity), and clinical translation barriers such as interpretability and domain shift. Overall, AI-driven methods show strong potential to enhance early detection and support improved breast cancer outcomes. Full article

22 pages, 4393 KB  
Article
Visual–Inertial Fusion-Based Restoration of Image Degradation in High-Dynamic Scenes with Rolling Shutter Cameras
by Jianbin Ye, Cengfeng Luo, Qiuxuan Wu, Yuejun Ye, Shenao Li, Yiyang Chen and Aocheng Li
Sensors 2026, 26(4), 1189; https://doi.org/10.3390/s26041189 - 12 Feb 2026
Viewed by 323
Abstract
Rolling shutter CMOS cameras are widely used in mobile and embedded vision, but rapid motion and vibration often cause coupled degradations, including motion blur and rolling shutter (RS) geometric distortion. This paper presents a visual–inertial fusion framework that estimates unified motion-related degradation parameters from IMU and image measurements and uses them to restore both photometric and geometric image quality in high-dynamic scenes. We further introduce an exposure-aware deblurring pipeline that accounts for the nonlinear photoelectric conversion characteristics of CMOS sensors, as well as a perspective-consistent RS compensation method to improve geometric consistency under depth–motion coupling. Experiments on real mobile data and public RS-visual–inertial sequences demonstrate improved image quality and downstream SLAM pose accuracy compared with representative baselines. Full article
(This article belongs to the Section Sensors and Robotics)

20 pages, 4015 KB  
Article
High-Speed Image Restoration Based on a Dynamic Vision Sensor
by Paul K. J. Park, Junseok Kim, Juhyun Ko and Yeoungjin Chang
Sensors 2026, 26(3), 781; https://doi.org/10.3390/s26030781 - 23 Jan 2026
Viewed by 453
Abstract
We report a post-capture, on-demand deblurring technique based on a Dynamic Vision Sensor (DVS). Motion blur causes photographic defects in most use cases of mobile cameras. To compensate for motion blur in mobile photography, we use a fast event-based vision sensor. However, we found severe artifacts that degrade image quality, caused by color ghosts, event noise, and discrepancies between conventional image sensors and event-based sensors. To overcome these artifacts, we propose and demonstrate event-based compensation techniques including cross-correlation optimization, contrast maximization, resolution-mismatch compensation (event upsampling for alignment), and disparity matching. The results show that deblurring performance can be improved dramatically in terms of metrics such as the Peak Signal-to-Noise Ratio (PSNR), Structural Similarity Index Measure (SSIM), and Spatial Frequency Response (SFR). We therefore expect that the proposed event-based image restoration technique can be widely deployed in mobile cameras. Full article
(This article belongs to the Special Issue Advances in Optical Sensing, Instrumentation and Systems: 2nd Edition)

19 pages, 25889 KB  
Article
Current-Aware Temporal Fusion with Input-Adaptive Heterogeneous Mixture-of-Experts for Video Deblurring
by Yanwen Zhang, Zejing Zhao and Akio Namiki
Sensors 2026, 26(1), 321; https://doi.org/10.3390/s26010321 - 4 Jan 2026
Viewed by 571
Abstract
In image sensing, measurements such as an object’s position or contour are typically obtained by analyzing digitized images. This method is widely used due to its simplicity. However, relative motion or inaccurate focus can cause motion and defocus blur, reducing measurement accuracy. Thus, video deblurring is essential. However, existing deep learning-based video deblurring methods struggle to balance high-quality deblurring, fast inference, and wide applicability. First, we propose a Current-Aware Temporal Fusion (CATF) framework, which focuses on the current frame in terms of both network architecture and modules. This reduces interference from unrelated features of neighboring frames and fully exploits current frame information, improving deblurring quality. Second, we introduce a Mixture-of-Experts module based on NAFBlocks (MoNAF), which adaptively selects expert structures according to the input features, reducing inference time. Third, we design a training strategy to support both sequential and temporally parallel inference. In sequential deblurring, we conduct experiments on the DVD, GoPro, and BSD datasets. Qualitative results show that our method effectively preserves image structures and fine details. Quantitative results further demonstrate that our method achieves clear advantages in terms of PSNR and SSIM. In particular, under the exposure setting of 3 ms–24 ms on the BSD dataset, our method achieves 33.09 dB PSNR and 0.9453 SSIM, indicating its effectiveness even in severely blurred scenarios. Meanwhile, our method achieves a good balance between deblurring quality and runtime efficiency. Moreover, the framework exhibits minimal error accumulation and performs effectively in temporal parallel computation. These results demonstrate that effective video deblurring serves as an important supporting technology for accurate image sensing. Full article
(This article belongs to the Special Issue Smart Remote Sensing Images Processing for Sensor-Based Applications)

32 pages, 59431 KB  
Article
Joint Deblurring and Destriping for Infrared Remote Sensing Images with Edge Preservation and Ringing Suppression
by Ningfeng Wang, Liang Huang, Mingxuan Li, Bin Zhou and Ting Nie
Remote Sens. 2026, 18(1), 150; https://doi.org/10.3390/rs18010150 - 2 Jan 2026
Viewed by 481
Abstract
Infrared remote sensing images are often degraded by blur and stripe noise caused by satellite attitude variations, optical distortions, and electronic interference, which significantly compromise image quality and target detection performance. Existing joint deblurring and destriping methods tend to over-smooth image edges and textures, failing to effectively preserve high-frequency details and sometimes misclassifying ringing artifacts as stripes. This paper proposes a variational framework for simultaneous deblurring and destriping of infrared remote sensing images. By leveraging an adaptive structure tensor model, the method exploits the sparsity and directionality of stripe noise, thereby enhancing edge and detail preservation. During blur kernel estimation, a fidelity term orthogonal to the stripe direction is introduced to suppress noise and residual stripes. In the image restoration stage, a WCOB (Non-blind restoration based on Wiener-Cosine composite filtering) model is proposed to effectively mitigate ringing artifacts and visual distortions. The overall optimization problem is efficiently solved using the alternating direction method of multipliers (ADMM). Extensive experiments on real infrared remote sensing datasets demonstrate that the proposed method achieves superior denoising and restoration performance, exhibiting strong robustness and practical applicability. Full article
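The WCOB model builds on Wiener filtering for its non-blind restoration stage. Plain frequency-domain Wiener deconvolution — without the paper's cosine composite filter or ringing-suppression terms — can be sketched as follows, assuming a known blur kernel and a scalar noise-to-signal ratio `nsr`:

```python
import numpy as np

def wiener_deconvolve(blurred, kernel, nsr=0.01):
    """Non-blind Wiener deconvolution: X = conj(H) * G / (|H|^2 + NSR),
    with circular boundary conditions implied by the FFT."""
    H = np.fft.fft2(kernel, s=blurred.shape)   # zero-padded kernel spectrum
    G = np.fft.fft2(blurred)
    W = np.conj(H) / (np.abs(H) ** 2 + nsr)
    return np.real(np.fft.ifft2(W * G))
```

With nsr → 0 this approaches inverse filtering; a nonzero nsr damps frequencies where the kernel response is weak, which suppresses noise amplification at the cost of some ringing — the artifact the paper's WCOB model targets.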

22 pages, 50111 KB  
Article
Kernel Adaptive Swin Transformer for Image Restoration
by Zhen Ni, Jingyu Wang, Aniruddha Bhattacharjya and Le Yan
Symmetry 2025, 17(12), 2161; https://doi.org/10.3390/sym17122161 - 15 Dec 2025
Viewed by 706
Abstract
In recent years, considerable attention has been devoted to blind super-resolution design, which improves image restoration performance by combining self-attention networks with explicitly introduced degradation information. This paper proposes a novel model called Kernel Adaptive Swin Transformer (KAST) to address the ill-posedness of image super-resolution and the resulting irregular restoration difficulties, including asymmetrical degradation problems. KAST introduces four key innovations: (1) local degradation-aware modeling, (2) parallel attention-based feature fusion, (3) log-space continuous position bias, and (4) comprehensive validation on diverse datasets. The model captures degradation information in different regions of low-resolution images, effectively encodes and distinguishes these degraded features using self-attention mechanisms, and accurately restores image details. The proposed approach innovatively integrates degraded features with image features through a parallel attention fusion strategy, enhancing the network's ability to capture pixel relationships and achieving denoising, deblurring, and high-resolution image reconstruction. Experimental results demonstrate that the model performs well on multiple datasets, verifying the effectiveness of the proposed method. Full article
(This article belongs to the Section Computer)

15 pages, 3356 KB  
Article
Motion Blur-Free High-Speed Hybrid Image Sensing
by Paul K. J. Park, Junseok Kim and Juhyun Ko
Sensors 2025, 25(24), 7496; https://doi.org/10.3390/s25247496 - 9 Dec 2025
Cited by 2 | Viewed by 704
Abstract
We propose and demonstrate a novel motion blur-free hybrid image sensing technique. Unlike previous hybrid image sensors, we developed a homogeneous hybrid image sensing technique combining 60 fps CMOS Image Sensor (CIS) and 1440 fps pseudo Dynamic Vision Sensor (DVS) image frames without the performance degradation caused by static bad pixels. To achieve fast readout, we implemented two one-side ADCs on two photodiodes (PDs), and the pixel output settling time can be reduced significantly by using column switch control. The high-speed pseudo DVS frames are obtained by differentiating fast-readout CIS frames, by which, in turn, the world's smallest pseudo DVS pixel (1.8 μm) can be achieved. In addition, we confirmed that CIS (50 Mp resolution) and DVS (0.78 Mp resolution) data from the hybrid image sensor can be transmitted over the MIPI interface (4.5 Gb/s four-lane D-PHY) without signal loss. The results show that the motion blur of a 60 fps CIS frame can be compensated dramatically by using the proposed pseudo DVS frames together with a deblurring algorithm. Finally, using event simulation, we verified that a 1440 fps pseudo DVS frame can compensate for the motion blur of a CIS image captured while jogging at a 3 m distance. Full article
(This article belongs to the Section Sensing and Imaging)
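The pseudo DVS frames described above come from differencing consecutive fast-readout CIS frames. A minimal sketch, assuming a log-intensity response and a fixed contrast threshold (both assumptions — the sensor's actual readout circuit is not described at this level of detail):

```python
import numpy as np

def pseudo_dvs_frames(frames, threshold=0.05):
    """Turn a stack of intensity frames (T, H, W) into T-1 pseudo event
    frames: +1 where the log intensity rose past the threshold (ON event),
    -1 where it fell past it (OFF event), and 0 elsewhere."""
    log_i = np.log1p(np.asarray(frames, dtype=float))
    diffs = np.diff(log_i, axis=0)            # frame-to-frame change
    events = np.zeros_like(diffs, dtype=np.int8)
    events[diffs > threshold] = 1
    events[diffs < -threshold] = -1
    return events
```

The log-intensity differencing mirrors how native DVS pixels respond to relative rather than absolute brightness changes, which is what makes the pseudo frames usable as an event stream for deblurring.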

21 pages, 17206 KB  
Article
Mean-Curvature-Regularized Deep Image Prior with Soft Attention for Image Denoising and Deblurring
by Muhammad Israr, Shahbaz Ahmad, Muhammad Nabeel Asghar and Saad Arif
Mathematics 2025, 13(24), 3906; https://doi.org/10.3390/math13243906 - 6 Dec 2025
Viewed by 551
Abstract
Sparsity-driven regularization has undergone significant development in single-image restoration, particularly with the transition from handcrafted priors to trainable deep architectures. In this work, a geometric prior-enhanced deep image prior (DIP) framework, termed DIP-MC, is proposed that integrates mean curvature (MC) regularization to promote natural smoothness and structural coherence in reconstructed images. To strengthen the representational capacity of DIP, a self-attention module is incorporated between the encoder and decoder, enabling the network to capture long-range dependencies and preserve fine-scale textures. In contrast to total variation (TV), which frequently produces piecewise-constant artifacts and staircasing, MC regularization leverages curvature information, resulting in smoother transitions while maintaining sharp structural boundaries. DIP-MC is evaluated on standard grayscale and color image denoising and deblurring tasks using benchmark datasets including BSD68, Classic5, LIVE1, Set5, Set12, Set14, and the Levin dataset. Quantitative performance is assessed using peak signal-to-noise ratio (PSNR) and structural similarity index measure (SSIM) metrics. Experimental results demonstrate that DIP-MC consistently outperformed the DIP-TV baseline with 26.49 PSNR and 0.9 SSIM. It achieved competitive performance relative to BM3D and EPLL models with 28.6 PSNR and 0.87 SSIM while producing visually more natural reconstructions with improved detail fidelity. Furthermore, the learning dynamics of DIP-MC are analyzed by examining update-cost behavior during optimization, visualizing the best-performing network weights, and monitoring PSNR and SSIM progression across training epochs. These evaluations indicate that DIP-MC exhibits superior stability and convergence characteristics. Overall, DIP-MC establishes itself as a robust, scalable, and geometrically informed framework for high-quality single-image restoration. Full article
(This article belongs to the Special Issue Mathematical Methods for Image Processing and Understanding)
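The mean-curvature regularizer penalizes the curvature of the image surface (x, y, u(x, y)), i.e. div(∇u / √(1 + |∇u|²)). A finite-difference sketch — the discretization below is a generic central-difference choice, not necessarily the paper's:

```python
import numpy as np

def mean_curvature(u):
    """Mean curvature of the image surface:
    div(grad u / sqrt(1 + |grad u|^2)), via np.gradient."""
    uy, ux = np.gradient(np.asarray(u, dtype=float))
    norm = np.sqrt(1.0 + ux ** 2 + uy ** 2)    # surface area element
    nx, ny = ux / norm, uy / norm              # normalized gradient field
    return np.gradient(nx, axis=1) + np.gradient(ny, axis=0)

def mc_penalty(u):
    """MC regularization term: mean absolute curvature."""
    return np.mean(np.abs(mean_curvature(u)))
```

Unlike total variation, which is minimized only by piecewise-constant images (hence staircasing), this penalty vanishes on any linear ramp, so smooth intensity gradients go unpenalized — the behavior the abstract contrasts with TV.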

19 pages, 7269 KB  
Article
Fully-Cascaded Spatial-Aware Convolutional Network for Motion Deblurring
by Yinghan Hong, Bishenghui Tao, Qian Wang, Guizhen Mai and Cai Guo
Information 2025, 16(12), 1055; https://doi.org/10.3390/info16121055 - 2 Dec 2025
Viewed by 449
Abstract
Motion deblurring is an ill-posed, challenging problem in image restoration due to non-uniform motion blurs. Although recent deep convolutional neural networks have made significant progress, many existing methods adopt multi-scale or multi-patch subnetworks that involve additional inter-subnetwork processing (e.g., feature alignment and fusion) across different scales or patches, leading to substantial computational cost. In this paper, we propose a novel fully-cascaded spatial-aware convolutional network (FSCNet) that effectively restores sharp images from blurry inputs while maintaining a favorable balance between restoration quality and computational efficiency. The proposed architecture consists of simple yet effective subnetworks connected through a fully-cascaded feature fusion (FCFF) module, enabling the exploitation of diverse and complementary features generated at each stage. In addition, we design a lightweight spatial-aware block (SAB), whose core component is a channel-weighted spatial attention (CWSA) module. The SAB is integrated into both the FCFF module and skip connections, enhancing feature fusion by enriching spatial detail representation. On the GoPro dataset, FSCNet achieves 33.01 dB PSNR and 0.962 SSIM, delivering comparable or higher accuracy than state-of-the-art methods such as HINet, while reducing model size by nearly 80%. Furthermore, when the GoPro-trained model is evaluated on three additional benchmark datasets (HIDE, REDS, and RealBlur), FSCNet attains the highest average PSNR (29.53 dB) and SSIM (0.903) among all compared methods. This consistent cross-dataset superiority highlights FSCNet’s strong generalization and robustness under diverse blur conditions, confirming that it achieves state-of-the-art performance with a favorable performance–complexity trade-off. Full article
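The abstract does not define the CWSA module's internals; a generic channel-weighting-then-spatial-attention sketch in NumPy is given below. Every design choice here — global average pooling, sigmoid gates, a single pooled spatial map — is a hypothetical stand-in for illustration, not FSCNet's actual block.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cwsa(feat):
    """Channel-weighted spatial attention sketch for a (C, H, W) feature map:
    channel weights from global average pooling re-weight the channels, and a
    spatial map pooled over the weighted channels gates every position."""
    ch_w = sigmoid(feat.mean(axis=(1, 2)))      # (C,) channel weights
    weighted = feat * ch_w[:, None, None]       # channel re-weighting
    sp_map = sigmoid(weighted.mean(axis=0))     # (H, W) spatial attention
    return feat * sp_map[None, :, :]            # gate each spatial position
```

Because both gates are sigmoids in (0, 1), the module can only attenuate features, never amplify them — a common property of lightweight attention blocks that keeps them stable inside skip connections.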

19 pages, 5931 KB  
Article
Vascular-Aware Multimodal MR–PET Reconstruction for Early Stroke Detection: A Physics-Informed, Topology-Preserving, Adversarial Super-Resolution Framework
by Krzysztof Malczewski
Appl. Sci. 2025, 15(22), 12186; https://doi.org/10.3390/app152212186 - 17 Nov 2025
Cited by 1 | Viewed by 700
Abstract
Rapid and reliable identification of large vessel occlusions and critical stenoses is essential for guiding treatment in acute ischemic stroke. Conventional MR angiography (MRA) and PET protocols are constrained by trade-offs among acquisition time, spatial resolution, and motion tolerance. A multimodal MR–PET angiography reconstruction framework is introduced that integrates joint Hankel-structured sparsity with topology-preserving multitask learning to overcome these limitations. High-resolution time-of-flight MRA and perfusion-sensitive PET volumes are reconstructed from undersampled data using a cross-modal low-rank Hankel prior coupled to a super-resolution generator optimized with adversarial, perceptual, and pixel-wise losses. Vesselness filtering and centerline continuity terms enforce preservation of fine arterial topology, while learned k-space and sinogram sampling concentrate measurements within vascular territories. Motion correction, blind deblurring, and modality-specific denoising are embedded to improve robustness under clinical conditions. A multitask output head estimates occlusion probability, stenosis localization, and collateral flow, with hypoperfusion mapping generated for dynamic PET. Evaluation on clinical and synthetically undersampled MR–PET studies demonstrated consistent improvements over MR-only, PET-only, and conventional fusion methods. The framework achieved higher image quality (MRA PSNR gains up to 3.7 dB and SSIM improvements of 0.042), reduced vascular topology breaks by over 20%, and improved large vessel occlusion detection by nearly 10% in AUROC, while maintaining at least a 40% reduction in sampling. These findings demonstrate that embedding vascular-aware priors within a joint Hankel–sparse MR–PET framework enables accelerated acquisition with clinically relevant benefits for early stroke assessment. Full article

27 pages, 16752 KB  
Article
Unified-Removal: A Semi-Supervised Framework for Simultaneously Addressing Multiple Degradations in Real-World Images
by Yongheng Zhang
J. Imaging 2025, 11(11), 405; https://doi.org/10.3390/jimaging11110405 - 11 Nov 2025
Viewed by 888
Abstract
This work introduces Uni-Removal, an innovative two-stage framework that effectively addresses the critical challenge of domain adaptation in unified image restoration. Contemporary approaches often face significant performance degradation when transitioning from synthetic training environments to complex real-world scenarios due to the substantial domain discrepancy. Our proposed solution establishes a comprehensive pipeline that systematically bridges this gap through dual-phase representation learning. In the first stage, we implement a structured multi-teacher knowledge distillation mechanism that enables a unified student architecture to assimilate and integrate specialized expertise from multiple pre-trained degradation-specific networks. This knowledge transfer is rigorously regularized by our novel Instance-Grained Contrastive Learning (IGCL) objective, which explicitly enforces representation consistency across both feature hierarchies and image spaces. The second stage introduces a groundbreaking output distribution calibration methodology that employs Cluster-Grained Contrastive Learning (CGCL) to adversarially align the restored outputs with authentic real-world image characteristics, effectively embedding the student model within the natural image manifold without requiring paired supervision. Comprehensive experimental validation demonstrates Uni-Removal’s superior performance across multiple real-world degradation tasks including dehazing, deraining, and deblurring, where it consistently surpasses existing state-of-the-art methods. The framework’s exceptional generalization capability is further evidenced by its competitive denoising performance on the SIDD benchmark and, more significantly, by delivering a substantial 4.36 mAP improvement in downstream object detection tasks, unequivocally establishing its practical utility as a robust pre-processing component for advanced computer vision systems. Full article
