Search Results (75)

Search Parameters:
Keywords = coherent speckle noise

19 pages, 11574 KiB  
Article
Multiscale Eight Direction Descriptor-Based Improved SAR–SIFT Method for Along-Track and Cross-Track SAR Images
by Wei Wang, Jinyang Chen and Zhonghua Hong
Appl. Sci. 2025, 15(14), 7721; https://doi.org/10.3390/app15147721 - 10 Jul 2025
Viewed by 252
Abstract
Image matching between spaceborne synthetic aperture radar (SAR) images is frequently disrupted by speckle noise, resulting in low matching accuracy, and the vast coverage of SAR images renders the direct matching approach inefficient. To address these issues, this study puts forward a multi-scale adaptive improved SAR image block matching method (STSU–SAR–SIFT). To improve accuracy, the method controls the number of feature points obtained under different thresholds by using the SAR–Shi–Tomasi response function in a multi-scale space. The SUSAN function is then used to constrain the effect of coherent noise on the initial feature points, and a multi-scale, multi-directional GLOH descriptor construction approach is used to boost the robustness of the descriptors. To improve efficiency, the method matches only the overlapping area of the main and additional images to reduce the search range, and uses multi-core CPU+GPU collaborative parallel computing to accelerate the SAR–SIFT algorithm by processing the overlapping area in blocks. The experimental results demonstrate that STSU–SAR–SIFT achieves better accuracy and feature distribution than the compared methods, and that the accelerated algorithm is markedly more efficient.
(This article belongs to the Section Earth Sciences)
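As an illustration of the response function this method builds on, here is a minimal NumPy sketch of the standard Shi–Tomasi min-eigenvalue corner response; the paper's SAR–Shi–Tomasi variant adapts this idea to speckled imagery and scale space, so the function below (name, window handling, and all) is only a generic stand-in:

```python
import numpy as np

def shi_tomasi_response(img, win=3):
    """Smaller eigenvalue of the windowed structure tensor (classic Shi-Tomasi).

    Illustrative only: the paper replaces the difference gradients with
    SAR-appropriate operators and evaluates the response in scale space.
    """
    img = img.astype(float)
    gy, gx = np.gradient(img)                      # image gradients
    Ixx, Iyy, Ixy = gx * gx, gy * gy, gx * gy

    def boxsum(a):                                 # windowed sum, valid region
        h, w = a.shape[0] - win + 1, a.shape[1] - win + 1
        out = np.zeros((h, w))
        for dy in range(win):
            for dx in range(win):
                out += a[dy:dy + h, dx:dx + w]
        return out

    sxx, syy, sxy = boxsum(Ixx), boxsum(Iyy), boxsum(Ixy)
    # Eigenvalues of [[sxx, sxy], [sxy, syy]]: trace/2 +/- sqrt discriminant.
    tr = (sxx + syy) / 2.0
    disc = np.sqrt(((sxx - syy) / 2.0) ** 2 + sxy ** 2)
    return tr - disc                               # min eigenvalue: high at corners
```

Thresholding this response (the step the abstract tunes across scales) then yields the candidate feature points.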

19 pages, 3591 KiB  
Article
Physics-Informed Generative Adversarial Networks for Laser Speckle Noise Suppression
by Xiangji Guo, Fei Xie, Tingkai Yang, Ming Ming and Tao Chen
Sensors 2025, 25(13), 3842; https://doi.org/10.3390/s25133842 - 20 Jun 2025
Viewed by 412
Abstract
In high-resolution microscopic imaging, using shorter-wavelength ultraviolet (UV) lasers as illumination sources is a common approach. However, the high spatial coherence of such lasers, combined with the surface roughness of the sample, often introduces disturbances in the received optical field, resulting in strong speckle noise. This paper presents a novel speckle noise suppression method specifically designed for coherent laser-based microscopic imaging. The proposed approach integrates statistical physical modeling and image gradient discrepancy into the training of a Cycle Generative Adversarial Network (CycleGAN), capturing the perturbation mechanism of speckle noise in the optical field. By incorporating these physical constraints, the method effectively enhances the model's ability to suppress speckle noise without requiring annotated clean data. Experimental results under high-resolution laser microscopy settings demonstrate that the introduced constraints successfully guide network training and significantly outperform traditional filtering methods and unsupervised CNNs in both denoising performance and training efficiency. While this work focuses on microscopic imaging, the underlying framework offers potential extensibility to other laser-based imaging modalities with coherent noise characteristics.
(This article belongs to the Section Sensing and Imaging)
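The "statistical physical modeling" referred to here conventionally starts from the fully developed speckle model, in which the observed intensity is the underlying intensity multiplied by unit-mean exponential noise (gamma noise for L-look averaging). A minimal sketch under that textbook assumption, not the paper's actual generator:

```python
import numpy as np

rng = np.random.default_rng(42)

def add_speckle(intensity, looks=1):
    """Fully developed speckle: multiply by unit-mean gamma noise.

    looks=1 gives exponential (single-look) speckle; larger `looks`
    models incoherent averaging of L independent looks.
    """
    noise = rng.gamma(shape=looks, scale=1.0 / looks, size=np.shape(intensity))
    return intensity * noise

clean = np.full((256, 256), 10.0)
noisy = add_speckle(clean, looks=1)    # heavily speckled
noisy4 = add_speckle(clean, looks=4)   # 4-look: smoother
```

The multiplicative, signal-dependent character of this noise is exactly what makes plain additive-noise denoisers a poor fit and motivates physics-informed training.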

20 pages, 4172 KiB  
Article
Multi-Level Feature Fusion Attention Generative Adversarial Network for Retinal Optical Coherence Tomography Image Denoising
by Yiming Qian and Yichao Meng
Appl. Sci. 2025, 15(12), 6697; https://doi.org/10.3390/app15126697 - 14 Jun 2025
Viewed by 442
Abstract
Background: Optical coherence tomography (OCT) is limited by inherent speckle noise, degrading retinal microarchitecture visualization and pathological analysis. Existing denoising methods inadequately balance noise suppression and structural preservation, necessitating advanced solutions for clinical OCT reconstruction. Methods: We propose MFFA-GAN, a generative adversarial network integrating multilevel feature fusion and an efficient local attention (ELA) mechanism. It optimizes cross-feature interactions and channel-wise information flow. Evaluations on three public OCT datasets compared traditional methods and deep learning models using PSNR, SSIM, CNR, and ENL metrics. Results: MFFA-GAN achieved good performance (PSNR: 30.107 dB, SSIM: 0.727, CNR: 3.927, ENL: 529.161) on smaller datasets, outperforming benchmarks, and further enhanced interpretability through pixel error maps. It preserved retinal layers and textures while suppressing noise. Ablation studies confirmed the synergy of multilevel features and ELA, improving PSNR by 1.8 dB and SSIM by 0.12 versus baselines. Conclusions: MFFA-GAN offers a reliable OCT denoising solution by harmonizing noise reduction and structural fidelity. Its hybrid attention mechanism enhances clinical image quality, aiding retinal analysis and diagnosis.
(This article belongs to the Special Issue Explainable Artificial Intelligence Technology and Its Applications)
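The PSNR, ENL, and CNR figures quoted in this abstract follow standard definitions; a small sketch of those metrics (function names are illustrative, not taken from the paper):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def enl(region):
    """Equivalent number of looks: mean^2 / variance of a homogeneous patch.

    Higher ENL means smoother (less speckled) flat regions.
    """
    region = region.astype(float)
    return region.mean() ** 2 / region.var()

def cnr(roi, background):
    """Contrast-to-noise ratio between a feature ROI and a background patch."""
    roi, background = roi.astype(float), background.astype(float)
    return abs(roi.mean() - background.mean()) / np.sqrt(roi.var() + background.var())
```

ENL and CNR are measured on selected patches rather than the whole image, which is why denoising papers report them alongside full-reference metrics like PSNR and SSIM.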

16 pages, 4488 KiB  
Technical Note
Land Use and Land Cover Classification with Deep Learning-Based Fusion of SAR and Optical Data
by Ayesha Irfan, Yu Li, Xinhua E and Guangmin Sun
Remote Sens. 2025, 17(7), 1298; https://doi.org/10.3390/rs17071298 - 5 Apr 2025
Cited by 3 | Viewed by 2164
Abstract
Land use and land cover (LULC) classification through remote sensing imagery serves as a cornerstone for environmental monitoring, resource management, and evidence-based urban planning. While Synthetic Aperture Radar (SAR) and optical sensors individually capture distinct aspects of Earth's surface, their complementary nature (SAR excelling in structural and all-weather observation, optical sensors providing rich spectral information) offers untapped potential for improving classification robustness. However, the intrinsic differences in their imaging mechanisms (e.g., SAR's coherent scattering versus optical reflectance) pose significant challenges for effective multimodal fusion in LULC analysis. To address this gap, we propose a multimodal deep-learning framework that systematically integrates SAR and optical imagery. Our approach employs a dual-branch neural network and rigorously compares two fusion paradigms: an Early Fusion strategy and a Late Fusion strategy. Experiments on the SEN12MS dataset, a benchmark containing globally diverse land cover categories, demonstrate the framework's efficacy. The Early Fusion strategy achieved 88% accuracy (F1 score: 87%), outperforming the Late Fusion approach (84% accuracy, F1 score: 82%). The results indicate that optical data provide detailed spectral signatures useful for identifying vegetation, water bodies, and urban areas, whereas SAR data contribute valuable texture and structural details. Early Fusion's superiority stems from synergistic low-level feature extraction, capturing cross-modal correlations that are lost in late-stage fusion. Compared to state-of-the-art baselines, the proposed methods show a significant improvement in classification accuracy, demonstrating that multimodal fusion mitigates single-sensor limitations (e.g., optical cloud obstruction and SAR speckle noise). This study advances remote sensing technology by providing a precise and effective method for LULC classification.

19 pages, 10070 KiB  
Article
SAR Image Target Segmentation Guided by the Scattering Mechanism-Based Visual Foundation Model
by Chaochen Zhang, Jie Chen, Zhongling Huang, Hongcheng Zeng, Zhixiang Huang, Yingsong Li, Hui Xu, Xiangkai Pu and Long Sun
Remote Sens. 2025, 17(7), 1209; https://doi.org/10.3390/rs17071209 - 28 Mar 2025
Cited by 1 | Viewed by 624
Abstract
As a typical visual foundation model, the Segment Anything Model (SAM) has been extensively utilized for optical image segmentation tasks. However, synthetic aperture radar (SAR) employs a unique imaging mechanism, and its images differ greatly from optical images; directly transferring a pretrained SAM from optical scenes to SAR image instance segmentation leads to a substantial decline in performance. This paper therefore fully integrates the SAR scattering mechanism and proposes a SAR image target segmentation method guided by a scattering mechanism-based visual foundation model. First, considering the discrete distribution of strong scattering points in SAR imagery, we develop an edge enhancement morphological adaptor. This adaptor incorporates a limited set of trainable parameters aimed at effectively boosting the target's edge morphology, allowing quick fine-tuning within the SAR domain. Second, an adaptive denoising module based on wavelets and soft-thresholding is implemented to reduce the impact of SAR coherent speckle noise, improving feature representation. Furthermore, an efficient automatic prompt module based on a deep object detector is built to enable rapid target localization in wide-area scenes and improve segmentation performance. Experiments on two open-source datasets, SSDD and HRSID, show that our approach outperforms current segmentation methods. When the ground truth is used as a prompt, SARSAM improves mIoU by more than 10% and mask AP50 by more than 5% over the baseline. In addition, the computational cost is greatly reduced, because the parameters and FLOPs of the structures that require fine-tuning are only 13.5% and 10.1% of the baseline, respectively.
(This article belongs to the Special Issue Physics Informed Foundational Models for SAR Image Interpretation)
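The "wavelets and soft-thresholding" denoising step can be illustrated with a one-level 2-D Haar transform whose detail bands are soft-thresholded before inversion. This is a generic sketch of that classic scheme, not the paper's adaptive module:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding: shrink magnitudes by t, zeroing small coefficients."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def haar_denoise(img, thresh):
    """One-level 2-D Haar transform, soft-threshold the detail bands, invert.

    Assumes even image dimensions; real modules iterate over several levels
    and choose thresholds adaptively.
    """
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0      # approximation band
    lh = (a + b - c - d) / 4.0      # horizontal detail
    hl = (a - b + c - d) / 4.0      # vertical detail
    hh = (a - b - c + d) / 4.0      # diagonal detail
    lh, hl, hh = soft(lh, thresh), soft(hl, thresh), soft(hh, thresh)
    out = np.empty(img.shape, dtype=float)
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out
```

With `thresh=0` the transform is perfectly invertible; increasing the threshold suppresses high-frequency noise at the cost of fine detail, which is the trade-off the paper's adaptive thresholds manage.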

20 pages, 31492 KiB  
Article
The Bright Feature Transform for Prominent Point Scatterer Detection and Tone Mapping
by Gregory D. Vetaw and Suren Jayasuriya
Remote Sens. 2025, 17(6), 1037; https://doi.org/10.3390/rs17061037 - 15 Mar 2025
Viewed by 508
Abstract
Detecting bright point scatterers plays an important role in assessing the quality of many sonar, radar, and medical ultrasound imaging systems, especially for characterizing the resolution. Traditionally, prominent scatterers, also known as coherent scatterers, are usually detected by employing thresholding techniques alongside statistical measures in the detection processing chain. However, these methods can perform poorly in detecting point-like scatterers in relatively high levels of speckle background and can distort the structure of the scatterer when visualized. This paper introduces a fast image-processing method to visually identify and detect point scatterers in synthetic aperture imagery using the bright feature transform (BFT). The BFT is analytic, computationally inexpensive, and requires no thresholding or parameter tuning. We derive this method by analyzing an ideal point scatterer's response with respect to pixel intensity and contrast around neighboring pixels and non-adjacent pixels. We show that this method preserves the general structure and the width of the bright scatterer while performing tone mapping, which can then be used for downstream image characterization and analysis. We then modify the BFT to present a difference of trigonometric functions to mitigate speckle scatterers and other random noise sources found in the imagery. We evaluate the performance of our methods on simulated and real synthetic aperture sonar and radar images, and show qualitative results on how the methods perform tone mapping on reconstructed input imagery in such a way as to highlight the bright scatterer, insensitive to seafloor textures and high speckle noise levels.

15 pages, 5137 KiB  
Article
Ray-Based Physical Modeling and Simulation of Multibeam Sonar for Underwater Robotics in ROS-Gazebo Framework
by Woen-Sug Choi
Sensors 2025, 25(5), 1516; https://doi.org/10.3390/s25051516 - 28 Feb 2025
Cited by 1 | Viewed by 1152
Abstract
While sonar sensors are crucial for underwater robotics perception, traditional multibeam sonar simulation lacks comprehensive physics-based interaction models. The missing physics leads to discrepancies in the simulated sonar imagery, such as the absence of coherent imaging effects and speckle noise, exposing systems that rely on sonar perception to the risk of over-fitted control designs. Previous research addressed this gap with a physics-based simulation approach that directly evaluates the point-scattering model equations on perception data obtained from rasterization. However, the raster-based method cannot control the resolution of the data pipelined into image generation, a limitation that becomes explicit in local search scenarios where the distance between data points is large. To eliminate these limitations and extend capability without losing image quality, this paper introduces a ray-based approach that replaces the raster-based method for obtaining perception data from the simulated world before the physical equation calculations. The ray-based and raster-based models are compared on a floating object in front of the sensor and on a ground-grazing local search scenario, confirming that the ray-based method maintains equal quality of sonar image generation, including physical characteristics, while offering more flexibility and control over data resolution for correct sonar image generation.
(This article belongs to the Section Sensors and Robotics)

20 pages, 7676 KiB  
Article
A High-Precision Matching Method for Heterogeneous SAR Images Based on ROEWA and Angle-Weighted Gradient
by Anxi Yu, Wenhao Tong, Zhengbin Wang, Keke Zhang and Zhen Dong
Remote Sens. 2025, 17(5), 749; https://doi.org/10.3390/rs17050749 - 21 Feb 2025
Viewed by 430
Abstract
The prerequisite for fusion processing of heterogeneous SAR images is high-precision image matching, which is widely applied in areas such as geometric localization, scene matching navigation, and target recognition. This study proposes a high-precision matching method for heterogeneous SAR images based on the combination of a single-scale ratio of exponentially weighted averages (ROEWA) operator and an angle-weighted gradient (RAWG). The method consists of three main steps: feature point extraction, feature description, and feature matching. The algorithm uses a block-based SAR-Harris operator to extract feature points from the reference SAR image, effectively combating the interference of coherent speckle noise and improving the uniformity of feature point distribution. By employing the single-scale ROEWA operator together with angle-weighted gradient projection, a 3D dense feature descriptor is constructed, enhancing the consistency of gradient features in heterogeneous SAR images and smoothing the search surface. An optimal feature construction strategy and a frequency-domain SSD algorithm realize fast template matching. Experimental comparisons with other mainstream matching methods show that the Root Mean Square Error (RMSE) of our method is 47.5% lower than that of CFOG; compared with HOPES, the error is 15.4% lower and the matching time 34.3% shorter. The proposed approach effectively addresses the nonlinear intensity differences, geometric disparities, and coherent speckle noise interference in heterogeneous SAR images, with robustness, high precision, and efficiency as its prominent advantages.
(This article belongs to the Special Issue Temporal and Spatial Analysis of Multi-Source Remote Sensing Images)
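The ROEWA idea is to compare exponentially weighted means computed on opposite sides of each pixel: their ratio (or its reciprocal, whichever is larger) peaks at edges and, unlike difference-based gradients, stays well behaved under multiplicative speckle. A 1-D NumPy sketch of that idea (the smoothing constant and function name are illustrative, not the paper's):

```python
import numpy as np

def roewa_1d(row, alpha=0.5):
    """Edge strength along a 1-D profile via a ratio of one-sided
    exponentially weighted means (the core of the ROEWA operator)."""
    n = len(row)
    left = np.zeros(n)
    right = np.zeros(n)
    acc, norm = 0.0, 0.0
    for i in range(n):                    # left-to-right weighted mean
        acc = row[i] + alpha * acc
        norm = 1.0 + alpha * norm
        left[i] = acc / norm
    acc, norm = 0.0, 0.0
    for i in range(n - 1, -1, -1):        # right-to-left weighted mean
        acc = row[i] + alpha * acc
        norm = 1.0 + alpha * norm
        right[i] = acc / norm
    eps = 1e-12
    r = (left + eps) / (right + eps)
    return np.maximum(r, 1.0 / r)         # symmetric ratio: high at edges
```

The 2-D operator applies the same one-sided smoothing in the row and column directions to obtain gradient magnitude and orientation, which this paper then feeds into the angle-weighted descriptor.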

14 pages, 28035 KiB  
Article
Improving Ultrasound B-Mode Image Quality with Coherent Plane-Wave Compounding Using Adaptive Beamformers Based on Minimum Variance
by Larissa C. Neves, Felipe M. Ribas, Joaquim M. Maia, Acacio J. Zimbico, Amauri A. Assef and Eduardo T. Costa
Sensors 2025, 25(5), 1306; https://doi.org/10.3390/s25051306 - 21 Feb 2025
Viewed by 831
Abstract
Medical ultrasound imaging using coherent plane-wave compounding (CPWC) for higher frame-rate applications has generated considerable interest in the research community. The adaptive eigenspace beamformer technique combined with a Generalized Sidelobe Canceler (GSC) provides noise and interference reduction, improving resolution and contrast compared to the basic methods, Delay and Sum (DAS) and Minimum Variance (MV). Different filtering approaches are applied in ultrasound image processing to reduce speckle signals. This work combines the Eigenspace-Based Minimum Variance (ESBMV) beamformer associated with the GSC (EGSC) with the Kuan (EGSCK), Lee (EGSCL), and Wiener (EGSCW) filters, and their enhanced versions, to obtain better-quality plane-wave ultrasound images. The EGSCK technique did not present significant improvements over the other methods. However, the EGSC with enhanced Kuan (EGSCKe) showed a remarkable reduction in geometric distortion, i.e., 0.13 mm (35%) and 0.49 mm (67%) compared to the EGSC and DAS techniques, respectively. The EGSC with enhanced Wiener (EGSCWe) showed the best improvement in contrast ratio (CR), i.e., 74% compared to the DAS technique and 60% compared to the EGSC technique. Furthermore, our proposed method reduces geometric distortion, making it a good option for plane-wave ultrasound imaging.
(This article belongs to the Section Biomedical Sensors)
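The Lee filter named above is a classic adaptive despeckler: it estimates a local mean and variance, then blends each pixel toward the local mean according to how much the local variation exceeds what pure speckle would produce. A minimal NumPy sketch of the standard formulation (not the enhanced variants this paper evaluates):

```python
import numpy as np

def lee_filter(img, win=3, cu2=1.0):
    """Classic Lee despeckling filter for multiplicative noise.

    cu2 is the squared coefficient of variation of the speckle
    (1.0 for single-look intensity, 1/L for L-look data).
    """
    img = img.astype(float)
    pad = win // 2
    p = np.pad(img, pad, mode='reflect')
    h, w = img.shape
    s = np.zeros((h, w))
    s2 = np.zeros((h, w))
    for dy in range(win):                 # sliding-window sums
        for dx in range(win):
            patch = p[dy:dy + h, dx:dx + w]
            s += patch
            s2 += patch * patch
    n = win * win
    m = s / n                             # local mean
    v = s2 / n - m * m                    # local variance
    cx2 = v / np.maximum(m * m, 1e-12)    # local squared coeff. of variation
    # Gain k: 0 in homogeneous areas (full smoothing), up to 1 at strong edges.
    k = np.clip(1.0 - cu2 / np.maximum(cx2, 1e-12), 0.0, 1.0)
    return m + k * (img - m)
```

The Kuan and Wiener filters mentioned in the abstract differ mainly in how this gain term is derived from the same local statistics.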

31 pages, 6413 KiB  
Article
Noise-to-Convex: A Hierarchical Framework for SAR Oriented Object Detection via Scattering Keypoint Feature Fusion and Convex Contour Refinement
by Shuoyang Liu, Ming Tong, Bokun He, Jiu Jiang and Chu He
Electronics 2025, 14(3), 569; https://doi.org/10.3390/electronics14030569 - 31 Jan 2025
Cited by 1 | Viewed by 752
Abstract
Oriented object detection has become a hot topic in SAR image interpretation. Due to the unique imaging mechanism, SAR objects are represented as clusters of scattering points surrounded by coherent speckle noise, leading to blurred outlines and increased false alarms in complex scenes. To address these challenges, we propose a novel noise-to-convex detection paradigm with a hierarchical framework based on the scattering-keypoint-guided diffusion detection transformer (SKG-DDT), which consists of three levels. At the bottom level, the strong-scattering-region generation (SSRG) module constructs the spatial distribution of strong scattering regions via a diffusion model, enabling the direct identification of approximate object regions. At the middle level, the scattering-keypoint feature fusion (SKFF) module dynamically locates scattering keypoints across multiple scales, capturing their spatial and structural relationships with the attention mechanism. Finally, the convex contour prediction (CCP) module at the top level refines the object outline by predicting fine-grained convex contours. Furthermore, we unify the three-level framework into an end-to-end pipeline via a detection transformer. The proposed method was comprehensively evaluated on three public SAR datasets: HRSID, RSDD-SAR, and SAR-Aircraft-v1.0. The experimental results demonstrate that the proposed method attains an AP50 of 86.5%, 92.7%, and 89.2% on these three datasets, respectively, an increase of 0.7%, 0.6%, and 1.0% over the existing state-of-the-art method. These results indicate that our approach outperforms existing algorithms across multiple object categories and diverse scenes.
(This article belongs to the Section Artificial Intelligence)

12 pages, 20046 KiB  
Communication
Time-Series Change Detection Using KOMPSAT-5 Data with Statistical Homogeneous Pixel Selection Algorithm
by Mirza Muhammad Waqar, Heein Yang, Rahmi Sukmawati, Sung-Ho Chae and Kwan-Young Oh
Sensors 2025, 25(2), 583; https://doi.org/10.3390/s25020583 - 20 Jan 2025
Cited by 1 | Viewed by 947
Abstract
For change detection in synthetic aperture radar (SAR) imagery, amplitude change detection (ACD) and coherent change detection (CCD) are widely employed. However, time-series SAR data often contain noise and variability introduced by system and environmental factors, requiring mitigation. Additionally, the stability of SAR signals is preserved when calibration accounts for temporal and environmental variations. Although ACD and CCD techniques can detect changes, spatial variability outside the primary target area introduces complexity into the analysis. This study presents a robust change detection methodology designed to identify urban changes using KOMPSAT-5 time-series data. A comprehensive preprocessing framework, including coregistration, radiometric terrain correction, normalization, and speckle filtering, was implemented to ensure data consistency and accuracy. Statistical homogeneous pixels (SHPs) were extracted to identify stable targets, and coherence-based analysis was employed to quantify temporal decorrelation and detect changes. Adaptive thresholding and morphological operations refined the detected changes, while small-segment removal mitigated noise effects. Experimental results demonstrated high reliability, with an overall accuracy of 92%, validated using confusion matrix analysis. The methodology effectively identified urban changes, highlighting the potential of KOMPSAT-5 data for post-disaster monitoring and urban change detection. Future improvements are suggested, focusing on the stability of InSAR orbits to further enhance detection precision. The findings underscore the potential for broader applications of the developed SAR time-series change detection technology, promoting increased utilization of KOMPSAT SAR data for both domestic and international research and monitoring initiatives.
(This article belongs to the Section Remote Sensors)
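The "coherence-based analysis" used in CCD is conventionally the windowed sample coherence magnitude between two coregistered complex acquisitions: near 1 where the scene is unchanged, dropping toward 0 where the scattering has decorrelated. A minimal sketch under that standard formulation (window size and names are illustrative):

```python
import numpy as np

def coherence(s1, s2, win=5):
    """Windowed sample coherence magnitude between two coregistered
    complex SAR images: |sum(s1 * conj(s2))| / sqrt(sum|s1|^2 * sum|s2|^2)."""
    pad = win // 2

    def boxsum(a):                        # windowed sum with zero padding
        p = np.pad(a, pad, mode='constant')
        out = np.zeros_like(a)
        for dy in range(win):
            for dx in range(win):
                out = out + p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out

    num = boxsum(s1 * np.conj(s2))
    den = np.sqrt(boxsum(np.abs(s1) ** 2) * boxsum(np.abs(s2) ** 2))
    return np.abs(num) / np.maximum(den, 1e-12)
```

Thresholding this map (the "adaptive thresholding" step in the abstract) separates decorrelated, i.e. changed, regions from stable ones; note that small windows give noisy coherence estimates even for unchanged scenes.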

21 pages, 9375 KiB  
Article
Reconstruction of Optical Coherence Tomography Images from Wavelength Space Using Deep Learning
by Maryam Viqar, Erdem Sahin, Elena Stoykova and Violeta Madjarova
Sensors 2025, 25(1), 93; https://doi.org/10.3390/s25010093 - 27 Dec 2024
Viewed by 4322
Abstract
Conventional Fourier domain Optical Coherence Tomography (FD-OCT) systems depend on resampling into the wavenumber (k) domain to extract the depth profile. This either necessitates additional hardware resources or amplifies the existing computational complexity. Moreover, the OCT images also suffer from speckle noise, due to systemic reliance on low-coherence interferometry. We propose a streamlined and computationally efficient approach based on Deep Learning (DL) which enables reconstructing speckle-reduced OCT images directly from the wavelength (λ) domain. For reconstruction, two encoder–decoder styled networks, namely a Spatial Domain Convolution Neural Network (SD-CNN) and a Fourier Domain CNN (FD-CNN), are used sequentially. The SD-CNN exploits the highly degraded images obtained by Fourier transforming the λ-domain fringes to reconstruct the deteriorated morphological structures while suppressing unwanted noise. The FD-CNN leverages this output to enhance the image quality further by optimization in the Fourier domain (FD). We quantitatively and visually demonstrate the efficacy of the method in obtaining high-quality OCT images. Furthermore, we illustrate the computational complexity reduction achieved by harnessing the power of DL models. We believe that this work lays the framework for further innovations in the realm of OCT image reconstruction.
(This article belongs to the Section Sensing and Imaging)
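The conventional λ-to-k resampling step that this approach bypasses can be sketched in a few lines: convert the wavelength axis to wavenumber (k = 2π/λ), interpolate the spectral fringes onto a uniform k grid, and FFT to obtain the depth profile. The sketch below uses a single simulated reflector; all values are arbitrary illustration units, not from the paper:

```python
import numpy as np

def reconstruct_depth(fringe_lambda, lam):
    """Classic FD-OCT reconstruction: resample wavelength-domain fringes
    to uniform wavenumber, then FFT to get the depth profile."""
    k = 2 * np.pi / lam                               # wavenumber (decreasing)
    k_uniform = np.linspace(k.min(), k.max(), len(k))
    # np.interp needs an increasing x-axis; k descends as lam ascends.
    fringe_k = np.interp(k_uniform, k[::-1], fringe_lambda[::-1])
    return np.abs(np.fft.fft(fringe_k))

# Single reflector: a pure cosine tone in k produces one sharp depth peak.
lam = np.linspace(800e-9, 880e-9, 1024)               # wavelength sweep
k = 2 * np.pi / lam
z = 40                                                # fringe cycles across the k range
fringe = np.cos(2 * np.pi * z * (k - k.min()) / (k.max() - k.min()))
profile = reconstruct_depth(fringe, lam)
```

Skipping the interpolation and transforming the λ-domain fringes directly blurs this peak, which is exactly the degradation the paper's SD-CNN is trained to undo.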

17 pages, 6219 KiB  
Article
DGGNets: Deep Gradient-Guidance Networks for Speckle Noise Reduction
by Li Wang, Jinkai Li, Yi-Fei Pu, Hao Yin and Paul Liu
Fractal Fract. 2024, 8(11), 666; https://doi.org/10.3390/fractalfract8110666 - 15 Nov 2024
Cited by 2 | Viewed by 1313
Abstract
Speckle noise is a granular interference that degrades image quality in coherent imaging systems, including underwater sonar, Synthetic Aperture Radar (SAR), and medical ultrasound. This study aims to enhance speckle noise reduction through advanced deep learning techniques. We introduce the Deep Gradient-Guidance Network (DGGNet), which features an architecture comprising one encoder and two decoders, one dedicated to image recovery and the other to gradient preservation. Our approach integrates a gradient map and fractional-order total variation into the loss function to guide training. The gradient map provides structural guidance for edge preservation and directs the denoising branch to focus on sharp regions, thereby preventing over-smoothing. The fractional-order total variation mitigates detail ambiguity and excessive smoothing, ensuring rich textures and detailed information are retained. Extensive experiments yield an average Peak Signal-to-Noise Ratio (PSNR) of 31.52 dB and a Structural Similarity Index (SSIM) of 0.863 across various benchmark datasets, including McMaster, Kodak24, BSD68, Set12, and Urban100. DGGNet outperforms existing methods, such as RIDNet, which achieved a PSNR of 31.42 dB and an SSIM of 0.853, thereby establishing new benchmarks in speckle noise reduction.
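The gradient-map guidance described here can be sketched as an L1 recovery term plus an L1 penalty on gradient-magnitude mismatch. This simplified stand-in omits the paper's fractional-order total-variation term, and the weighting constant is arbitrary:

```python
import numpy as np

def gradient_map(img):
    """Gradient magnitude via central differences: the structural guide."""
    gy, gx = np.gradient(img.astype(float))
    return np.sqrt(gx ** 2 + gy ** 2)

def gradient_guided_loss(pred, target, lam=0.5):
    """L1 recovery loss plus an L1 penalty on gradient-map mismatch.

    A simplified stand-in for the paper's loss: matching gradients
    penalizes over-smoothed edges that a plain L1 term tolerates.
    """
    l1 = np.mean(np.abs(pred - target))
    lg = np.mean(np.abs(gradient_map(pred) - gradient_map(target)))
    return l1 + lam * lg
```

An over-smoothed prediction can score well on the plain L1 term while its edges are flattened; the gradient term makes that failure mode explicitly costly.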

13 pages, 3957 KiB  
Article
Complex Residual Attention U-Net for Fast Ultrasound Imaging from a Single Plane-Wave Equivalent to Diverging Wave Imaging
by Ahmed Bentaleb, Christophe Sintes, Pierre-Henri Conze, François Rousseau, Aziliz Guezou-Philippe and Chafiaa Hamitouche
Sensors 2024, 24(16), 5111; https://doi.org/10.3390/s24165111 - 7 Aug 2024
Viewed by 1373
Abstract
Plane wave imaging persists as a focal point of research due to its high frame rate and low complexity. In spite of these advantages, however, its performance can be compromised by several factors, such as noise, speckle, and artifacts, that affect image quality and resolution. In this paper, we propose an attention-based complex convolutional residual U-Net that reconstructs improved in-phase/quadrature complex data from a single-insonification acquisition, matching diverging wave imaging. Our approach introduces an attention mechanism in the complex domain, in conjunction with complex convolution, to incorporate phase information and reach the image quality of coherent compounding imaging. To validate the method, we trained our network on a simulated phased-array dataset and evaluated it on in vitro and in vivo data. The experimental results show that our approach improves ultrasound image quality by focusing the network's attention on critical aspects of the complex data to identify and separate regions of interest from background noise.
(This article belongs to the Section Sensing and Imaging)
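Complex convolution, the building block of such complex-valued layers, is conventionally assembled from four real convolutions via (xr + i·xi)(wr + i·wi) = (xr·wr − xi·wi) + i(xr·wi + xi·wr). A 1-D valid-mode sketch of that decomposition (correlation form; names are illustrative, not from the paper):

```python
import numpy as np

def complex_conv1d(x, w):
    """Complex valid-mode correlation built from four real correlations,
    the standard construction behind complex-valued network layers."""
    def corr(a, b):                       # real valid-mode correlation
        n = len(a) - len(b) + 1
        return np.array([np.dot(a[i:i + len(b)], b) for i in range(n)])
    real = corr(x.real, w.real) - corr(x.imag, w.imag)
    imag = corr(x.real, w.imag) + corr(x.imag, w.real)
    return real + 1j * imag
```

Keeping the in-phase and quadrature channels coupled this way is what lets the network exploit phase information that magnitude-only processing discards.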

18 pages, 15772 KiB  
Article
Physics-Based Practical Speckle Noise Modeling for Optical Coherence Tomography Image Denoising
by Lei Yang, Di Wu, Wenteng Gao, Ronald X. Xu and Mingzhai Sun
Photonics 2024, 11(6), 569; https://doi.org/10.3390/photonics11060569 - 17 Jun 2024
Cited by 2 | Viewed by 2111
Abstract
Optical coherence tomography (OCT) has been extensively utilized in biomedical imaging due to its non-invasive nature and its ability to provide high-resolution, in-depth imaging of biological tissues. However, the use of low-coherence light can lead to unintended interference phenomena within the sample, which inevitably introduces speckle noise into the imaging results. This type of noise often obscures key features in the image, reducing the accuracy of medical diagnoses. Existing denoising algorithms, while removing noise, tend also to damage the structural details of the image, affecting diagnostic quality. To overcome this challenge, we propose a practical speckle noise (PSN) framework. Its core is an innovative dual-module noise generator that decomposes the noise in OCT images into speckle noise and equipment noise, addressing each type independently. By integrating the physical properties of noise into the design of the noise generator and training it with unpaired data, we can synthesize realistic noise images that match clear images. These synthesized paired images are then used to train a denoiser that effectively denoises real OCT images. Our method demonstrates its superiority on both private and public datasets, particularly in maintaining the integrity of image structure. This study emphasizes the importance of considering the physical information of noise in denoising tasks, providing a new perspective and solution for enhancing OCT image denoising technology.
(This article belongs to the Section Biophotonics and Biomedical Optics)
