Search Results (49)

Search Parameters:
Keywords = Retinex theory

21 pages, 2514 KiB  
Article
Investigations into Picture Defogging Techniques Based on Dark Channel Prior and Retinex Theory
by Lihong Yang, Zhi Zeng, Hang Ge, Yao Li, Shurui Ge and Kai Hu
Appl. Sci. 2025, 15(15), 8319; https://doi.org/10.3390/app15158319 - 26 Jul 2025
Abstract
To address contrast deterioration, detail loss, and color distortion in images captured under haze conditions in scenarios such as intelligent driving and remote sensing detection, this paper proposes an image defogging algorithm that combines Retinex theory with the dark channel prior. The method builds a two-stage optimization framework: in the first stage, global contrast enhancement is achieved by Retinex preprocessing, which improves the detail information in dark areas and the accuracy of the transmittance map and atmospheric light estimation; in the second stage, a compensation model for the dark channel prior is constructed, and a depth-map-guided transmittance correction mechanism is introduced to obtain a refined transmittance map. At the same time, the atmospheric light intensity is accurately estimated using the Otsu algorithm and edge constraints, which effectively suppresses the halo artifacts and color deviation that the dark channel prior defogging algorithm produces in sky regions. Experiments on self-collected data and public datasets show that the proposed algorithm achieves better detail preservation (the visible edge ratio improves by at least 0.1305) and color reproduction (the saturated pixel ratio is reduced to approximately 0) in subjective evaluation, and its average gradient ratio reaches a maximum of 3.8009 among the objective indexes, an improvement of 36–56% over the classical DCP and Tarel algorithms. The method provides a robust image defogging solution for computer vision systems under complex meteorological conditions. Full article
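As a rough sketch of the dark channel prior step that this abstract combines with Retinex preprocessing (a minimal illustration, not the paper's implementation: the patch size and `omega` default follow common DCP practice, and the function names are made up here):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel: per-pixel minimum over RGB, then a minimum
    filter over a patch x patch neighborhood."""
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    h, w = mins.shape
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def transmission(img, atmosphere, omega=0.95, patch=15):
    """Transmittance estimate t = 1 - omega * dark_channel(I / A),
    as in the classical DCP formulation."""
    return 1.0 - omega * dark_channel(img / atmosphere, patch)
```

The refined transmittance and Otsu-based atmospheric light estimation described in the abstract would replace the plain minimum filter and a fixed `atmosphere` vector here.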

19 pages, 2812 KiB  
Article
Component Generation Network-Based Image Enhancement Method for External Inspection of Electrical Equipment
by Xiong Liu, Juan Zhang, Qiushi Cui, Yingyue Zhou, Qian Wang, Zining Zhao and Yong Li
Electronics 2025, 14(12), 2419; https://doi.org/10.3390/electronics14122419 - 13 Jun 2025
Abstract
For external inspection of electrical equipment, poor lighting conditions often lead to problems such as uneven illumination, insufficient brightness, and detail loss, which directly affect subsequent analysis. To solve this problem, the Retinex image enhancement method based on the Component Generation Network (CGNet) is proposed in this paper. It employs CGNet to accurately estimate and generate the illumination and reflection components of the target image. The CGNet, based on UNet, integrates Residual Branch Dual-convolution blocks (RBDConv) and the Channel Attention Mechanism (CAM) to improve the feature-learning capability. By setting different numbers of network layers, the optimal estimation of the illumination and reflection components is achieved. To obtain the ideal enhancement results, gamma correction is applied to adjust the estimated illumination component, while the HSV transformation model preserves color information. Finally, the effectiveness of the proposed method is verified on a dataset of poorly illuminated images from external inspection of electrical equipment. The results show that this method not only requires no external datasets for training but also improves the detail clarity and color richness of the target image, effectively addressing poor lighting of images in external inspection of electrical equipment. Full article
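The gamma correction applied to the estimated illumination component, as described above, can be illustrated in a few lines (a simplified sketch under the Retinex assumption image = reflectance × illumination; the function name and gamma value are assumptions, and the CGNet-based component estimation is not reproduced):

```python
import numpy as np

def gamma_adjust_illumination(image, illumination, gamma=0.45):
    """Split the image into reflectance and illumination
    (image = reflectance * illumination), gamma-correct the
    illumination to lift dark regions, then recombine."""
    eps = 1e-6  # avoid division by zero in dark pixels
    reflectance = image / (illumination + eps)
    adjusted = np.power(illumination, gamma)  # gamma < 1 brightens
    return np.clip(reflectance * adjusted, 0.0, 1.0)
```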

17 pages, 11008 KiB  
Article
Retinex-Based Low-Light Image Enhancement via Spatial-Channel Redundancy Compression and Joint Attention
by Jinlong Chen, Zhigang Xiao, Xingguo Qin and Deming Luo
Electronics 2025, 14(11), 2212; https://doi.org/10.3390/electronics14112212 - 29 May 2025
Abstract
Low-light image enhancement (LLIE) methods based on Retinex theory often involve complex, multi-stage training and are commonly built on convolutional neural networks (CNNs). However, CNNs suffer from limitations in capturing long-range dependencies and often introduce redundant computations, leading to high computational costs. To address these issues, we propose a lightweight and efficient LLIE framework that incorporates an optimized CNN compression strategy and a novel attention mechanism. Specifically, we design a Spatial-Channel Feature Reconstruction Module (SCFRM) to suppress spatial and channel redundancy via split-reconstruction and separation-fusion strategies. SCFRM is composed of two parts, a Spatial Feature Enhancement Unit (SFEU) and a Channel Refinement Block (CRB), which together enhance feature representation while reducing computational load. Additionally, we introduce a Joint Attention (JOA) mechanism that captures long-range dependencies across spatial dimensions while preserving positional accuracy. Our Retinex-based framework separates the processing of illumination and reflectance components using a Denoising Network (DNNet) and a Light Enhancement Network (LINet). SCFRM is embedded into DNNet for improved denoising, while JOA is applied in LINet for precise brightness adjustment. Extensive experiments on multiple benchmark datasets demonstrate that our method achieves superior or comparable performance to state-of-the-art LLIE approaches, while significantly reducing computational complexity. On the LOL and VE-LOL datasets, our approach achieves the best or second-best scores in terms of PSNR and SSIM metrics, validating its effectiveness and efficiency. Full article

27 pages, 10202 KiB  
Article
WIGformer: Wavelet-Based Illumination-Guided Transformer
by Wensheng Cao, Tianyu Yan, Zhile Li and Jiongyao Ye
Symmetry 2025, 17(5), 798; https://doi.org/10.3390/sym17050798 - 20 May 2025
Abstract
Low-light image enhancement remains a challenging task in computer vision due to the complex interplay of noise, asymmetrical artifacts, illumination non-uniformity, and detail preservation. Existing methods such as traditional histogram equalization, gamma correction, and Retinex-based approaches often struggle to balance contrast improvement and naturalness preservation. Deep learning methods such as CNNs and transformers have shown promise, but face limitations in modeling multi-scale illumination and long-range dependencies. To address these issues, we propose WIGformer, a novel wavelet-based illumination-guided transformer framework for low-light image enhancement. The proposed method extends the single-stage Retinex theory to explicitly model noise in both reflectance and illumination components. It introduces a wavelet illumination estimator with a Wavelet Feature Enhancement Convolution (WFEConv) module to capture multi-scale illumination features and an illumination feature-guided corruption restorer with an Illumination-Guided Enhanced Multihead Self-Attention (IGEMSA) mechanism. WIGformer leverages the symmetry properties of wavelet transforms to achieve multi-scale illumination estimation, ensuring balanced feature extraction across different frequency bands. The IGEMSA mechanism integrates adaptive feature refinement and illumination guidance to suppress noise and artifacts while preserving fine details. The same mechanism allows us to further exploit symmetrical dependencies between illumination and reflectance components, enabling robust and natural enhancement of low-light images. Extensive experiments on the LOL-V1, LOL-V2-Real, and LOL-V2-Synthetic datasets demonstrate that WIGformer achieves state-of-the-art performance and outperforms existing methods, with PSNR improvements of up to 26.12 dB and an SSIM score of 0.935. 
The qualitative results demonstrate WIGformer’s superior capability to not only restore natural illumination but also maintain structural symmetry in challenging conditions, preserving balanced luminance distributions and geometric regularities that are characteristic of properly exposed natural scenes. Full article
(This article belongs to the Section Computer)

19 pages, 5870 KiB  
Article
Tilt-Induced Error Compensation with Vision-Based Method for Polarization Navigation
by Meng Yuan, Xindong Wu, Chenguang Wang and Xiaochen Liu
Appl. Sci. 2025, 15(9), 5060; https://doi.org/10.3390/app15095060 - 2 May 2025
Abstract
To rectify significant heading calculation errors in polarized light navigation for unmanned aerial vehicles (UAVs) under tilted states, this paper proposes a method for compensating horizontal attitude angles based on horizon detection. First, a defogging enhancement algorithm that integrates Retinex theory with dark channel prior is adopted to improve image quality in low-illumination and hazy environments. Second, a dynamic threshold segmentation method in the HSV color space (Hue, Saturation, and Value) is proposed for robust horizon region extraction, combined with an improved adaptive bilateral filtering Canny operator for edge detection, aimed at balancing detail preservation and noise suppression. Then, the progressive probabilistic Hough transform is used to efficiently extract parameters of the horizon line. The calculated horizontal attitude angles are utilized to convert the body frame to the navigation frame, achieving compensation for polarization orientation errors. Onboard experiments demonstrate that the horizontal attitude angle estimation error remains within 0.3°, and the heading accuracy after compensation is improved by approximately 77.4% relative to uncompensated heading accuracy, thereby validating the effectiveness of the proposed algorithm. Full article
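The final step above, turning an extracted horizon line into a horizontal attitude (roll) angle, can be sketched as follows. A least-squares line fit stands in for the progressive probabilistic Hough transform the paper uses, and the names are illustrative:

```python
import numpy as np

def horizon_roll_angle(xs, ys):
    """Fit y = a*x + b to horizon edge points (image coordinates) and
    return the roll angle in degrees implied by the slope, plus the
    intercept."""
    a, b = np.polyfit(xs, ys, deg=1)
    return np.degrees(np.arctan(a)), b
```

With the roll angle (and a pitch derived from the line's vertical offset and the camera intrinsics), the body frame can be rotated into the navigation frame before computing the polarization heading.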

28 pages, 14848 KiB  
Article
Deep-Space Background Low-Light Image Enhancement Method Based on Multi-Image Fusion
by Feixiang Han, Qing Liu, Huawei Wang, Zeyue Ren, Feng Zhou and Chanchan Kang
Appl. Sci. 2025, 15(9), 4837; https://doi.org/10.3390/app15094837 - 27 Apr 2025
Abstract
Existing low-light image enhancement methods often struggle to effectively enhance space targets in deep-space contexts due to the effects of extremely low illumination, stellar stray light, and Earth halos. This work proposes a low-light image enhancement method based on multi-image fusion, which integrates features of space targets with the Retinex theory. The method dynamically adjusts contrast by detecting luminance distribution and incorporates an adaptive noise removal mechanism for enhanced image quality. This method effectively balances detail enhancement with noise suppression. This work presents experiments on deep-space background images featuring 10 types of artificial satellites, including AcrimSat, Calipso, Jason, and others. Experimental results demonstrate that the proposed method outperforms traditional methods and mainstream deep learning models in qualitative and quantitative evaluations, particularly in suppressing Earth halo interference. This study establishes an effective framework for improving the visual quality of spacecraft images and provides important technical support for applications such as spacecraft identification, space target detection, and autonomous spacecraft navigation. Full article

22 pages, 13486 KiB  
Article
Improved Low-Light Image Feature Matching Algorithm Based on the SuperGlue Net Model
by Fengchao Li, Yu Chen, Qunshan Shi, Ge Shi, Hongding Yang and Jiaming Na
Remote Sens. 2025, 17(5), 905; https://doi.org/10.3390/rs17050905 - 4 Mar 2025
Abstract
The SuperGlue algorithm, which integrates deep learning theory with the SuperPoint feature extraction operator and addresses the matching problem using the classical Sinkhorn method, has significantly enhanced matching efficiency and become a prominent research focus. However, existing feature extraction operators often struggle to extract high-quality features from extremely low-light or dark images, resulting in reduced matching accuracy. In this study, we propose a novel feature matching method that combines multi-scale retinex with color restoration (MSRCR) and SuperGlue to address this challenge. The method enables effective feature extraction and matching on dark images, overcoming the difficulties of feature point extraction, sparse matching points, and low matching accuracy in extreme environments such as nighttime autonomous navigation, mine exploration, and tunnel operations. Our approach first employs the retinex-based MSRCR algorithm to enhance features in the original low-light images, and then uses the enhanced image pairs as inputs for SuperGlue feature matching. Experimental results validate the effectiveness of our method: both the number of extracted feature points and the number of correctly matched feature points approximately double compared to traditional methods, significantly improving matching accuracy on dark images. Full article
(This article belongs to the Special Issue GeoAI and EO Big Data Driven Advances in Earth Environmental Science)
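The multi-scale retinex core of MSRCR (before its color restoration step) can be sketched in NumPy. The sigma values follow the commonly cited 15/80/250 defaults, which may differ from this paper's settings:

```python
import numpy as np

def gaussian_blur(channel, sigma):
    """Separable Gaussian filter (truncated at 3 sigma, edge-padded)."""
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    padded = np.pad(channel, radius, mode='edge')
    rows = np.apply_along_axis(
        lambda r: np.convolve(r, kernel, 'valid'), 1, padded)
    return np.apply_along_axis(
        lambda c: np.convolve(c, kernel, 'valid'), 0, rows)

def multi_scale_retinex(channel, sigmas=(15, 80, 250)):
    """MSR: average log-ratio of the image to its Gaussian-blurred
    surround over several scales."""
    log_img = np.log1p(channel)
    return np.mean([log_img - np.log1p(gaussian_blur(channel, s))
                    for s in sigmas], axis=0)
```

The MSR output is typically gain-offset normalized back to display range before being handed to a matcher such as SuperGlue.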

19 pages, 3896 KiB  
Article
No-Reference Quality Assessment Based on Dual-Channel Convolutional Neural Network for Underwater Image Enhancement
by Renzhi Hu, Ting Luo, Guowei Jiang, Zhiqiang Lin and Zhouyan He
Electronics 2024, 13(22), 4451; https://doi.org/10.3390/electronics13224451 - 13 Nov 2024
Abstract
Underwater images are important for underwater vision tasks, yet their quality often degrades during imaging, motivating the development of Underwater Image Enhancement (UIE) algorithms. This paper proposes a Dual-Channel Convolutional Neural Network (DC-CNN)-based quality assessment method to evaluate the performance of different UIE algorithms. Specifically, inspired by intrinsic image decomposition, the enhanced underwater image is decomposed based on Retinex theory into reflectance, carrying color information, and illumination, carrying texture information. Afterward, we design a DC-CNN with two branches to learn color and texture features from the reflectance and illumination, respectively, reflecting the distortion characteristics of enhanced underwater images. To integrate the learned features, a feature fusion module and an attention mechanism are employed to align them efficiently with human visual perception characteristics. Finally, a quality regression module establishes the mapping between the extracted features and quality scores. Experimental results on two public enhanced underwater image datasets (i.e., UIQE and SAUD) show that the proposed DC-CNN method outperforms a variety of existing quality assessment methods. Full article

15 pages, 6894 KiB  
Article
A Novel Approach to Pedestrian Re-Identification in Low-Light and Zero-Shot Scenarios: Exploring Transposed Convolutional Reflectance Decoders
by Zhenghao Li and Jiping Xiong
Electronics 2024, 13(20), 4069; https://doi.org/10.3390/electronics13204069 - 16 Oct 2024
Abstract
In recent years, pedestrian re-identification technology has made significant progress, with various neural network models performing well under normal conditions, such as good weather and adequate lighting. However, most research has overlooked extreme environments, such as rainy weather and nighttime. Additionally, the existing pedestrian re-identification datasets predominantly consist of well-lit images. Although some studies have started to address these issues by proposing methods for enhancing low-light images to restore their original features, the effectiveness of these approaches remains limited. We noted that a method based on Retinex theory designed a reflectance representation learning module aimed at restoring image features as much as possible. However, this method has so far only been applied in object detection networks. In response to this, we improved the method and applied it to pedestrian re-identification, proposing a transposed convolution reflectance decoder (TransConvRefDecoder) to better restore details in low-light images. Extensive experiments on the Market1501, CUHK03, and MSMT17 datasets demonstrated that our approach delivered superior performance. Full article

11 pages, 1496 KiB  
Article
An Improved Retinex-Based Approach Based on Attention Mechanisms for Low-Light Image Enhancement
by Shan Jiang, Yingshan Shi, Yingchun Zhang and Yulin Zhang
Electronics 2024, 13(18), 3645; https://doi.org/10.3390/electronics13183645 - 13 Sep 2024
Cited by 1
Abstract
Captured images often suffer from issues like color distortion, detail loss, and significant noise. Therefore, it is necessary to improve image quality for reliable threat detection. Balancing brightness enhancement with the preservation of natural colors and details is particularly challenging in low-light image enhancement. To address these issues, this paper proposes an unsupervised low-light image enhancement approach using a U-net neural network with Retinex theory and a Convolutional Block Attention Module (CBAM). This method leverages Retinex-based decomposition to separate and enhance the reflectance map, ensuring visibility and contrast without introducing artifacts. A local adaptive enhancement function improves the brightness of the reflection map, while the designed loss function addresses illumination smoothness, brightness enhancement, color restoration, and denoising. Experiments validate the effectiveness of our method, revealing improved image brightness, reduced color deviation, and superior color restoration compared to leading approaches. Full article
(This article belongs to the Special Issue Network Security Management in Heterogeneous Networks)

12 pages, 7794 KiB  
Article
Single-Shot Direct Transmission Terahertz Imaging Based on Intense Broadband Terahertz Radiation
by Zhang Yue, Xiaoyu Peng, Guangyuan Li, Yilei Zhou, Yezi Pu and Yuhui Zhang
Sensors 2024, 24(13), 4160; https://doi.org/10.3390/s24134160 - 26 Jun 2024
Cited by 3
Abstract
There are numerous applications of terahertz (THz) imaging in many fields. However, current THz imaging generally relies on scanning techniques because of the limited intensity of available THz sources; acquiring a frame image of the target therefore takes a long time, which cannot meet the requirements of fast THz imaging. Here, we demonstrate a single-shot direct THz imaging strategy based on a broadband intense THz source with a frequency range of 0.1~23 THz and a THz camera with a frequency response range of 1~7 THz. This THz source was generated from laser–plasma interaction, with its central frequency at ~12 THz. The frame rate of the imaging system was 8.5 frames per second, and the imaging resolution reached 146.2 μm. With this system, a single-shot THz image of a target object larger than 7 cm was routinely obtained, showing potential for fast THz imaging. Furthermore, we proposed and tested an image enhancement algorithm based on an improved dark channel prior (DCP) theory and multi-scale retinex (MSR) theory to optimize image brightness, contrast, entropy, and peak signal-to-noise ratio (PSNR). Full article
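PSNR, one of the metrics the enhancement algorithm above targets, is computed from the mean squared error against a reference image (this is the standard definition, not anything specific to this paper):

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in decibels for images in [0, peak]."""
    mse = np.mean((np.asarray(reference, float) - np.asarray(test, float)) ** 2)
    if mse == 0:
        return float('inf')  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```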

15 pages, 7340 KiB  
Article
Multi-Modular Network-Based Retinex Fusion Approach for Low-Light Image Enhancement
by Jiarui Wang, Yu Sun and Jie Yang
Electronics 2024, 13(11), 2040; https://doi.org/10.3390/electronics13112040 - 23 May 2024
Cited by 1
Abstract
Current low-light image enhancement techniques prioritize increasing image luminance but often fail to address the distortion of colors and the loss of image detail. To address these frequently overlooked issues, this paper proposes a multi-module optimization network for enhancing low-light images that integrates deep learning with Retinex theory. First, we create a decomposition network to separate the lighting components and reflections from the low-light image. We incorporated an enhanced global spatial attention (GSA) module into the decomposition network to boost its flexibility and adaptability. This module enhances the extraction of comprehensive information from the image and safeguards against information loss. To increase the luminosity of the illumination component, we subsequently constructed an enhancement network. The Multiscale Guidance Block (MSGB) has been integrated into the enhancement network, together with multilayer dilated convolutions that expand the receptive field and strengthen the network's feature extraction capability. Our proposed method outperforms existing approaches in both objective measures and subjective evaluations, highlighting the advantages of the procedure outlined in this paper. Full article
(This article belongs to the Special Issue Modern Computer Vision and Image Analysis)

21 pages, 8926 KiB  
Article
Enhancement of Mine Images through Reflectance Estimation of V Channel Using Retinex Theory
by Changlin Wu, Dandan Wang, Kaifeng Huang and Long Wu
Processes 2024, 12(6), 1067; https://doi.org/10.3390/pr12061067 - 23 May 2024
Cited by 1
Abstract
The dim lighting and excessive dust in underground mines often result in uneven illumination, blurriness, and loss of detail in surveillance images, which hinders subsequent intelligent image recognition. To address the limitations of the existing image enhancement algorithms in terms of generalization and accuracy, this paper proposes an unsupervised method for enhancing mine images in the hue–saturation–value (HSV) color space. Inspired by the HSV color space, the method first converts RGB images to the HSV space and integrates Retinex theory into the brightness (V channel). Additionally, a random perturbation technique is designed for the brightness. Within the same scene, a U-Net-based reflectance estimation network is constructed by enforcing consistency between the original reflectance and the perturbed reflectance, incorporating ResNeSt blocks and a multi-scale channel pixel attention module to improve accuracy. Finally, an enhanced image is obtained by recombining the original hue (H channel), brightness, and saturation (S channel), and converting back to the RGB space. Importantly, this image enhancement algorithm does not require any normally illuminated images during training. Extensive experiments demonstrated that the proposed method outperformed most existing unsupervised low-light image enhancement methods, qualitatively and quantitatively, achieving a competitive performance comparable to many supervised methods. Specifically, our method achieved the highest PSNR value of 22.18, indicating significant improvements compared to the other methods, and surpassing the second-best WCDM method by 10.3%. In terms of SSIM, our method also performed exceptionally well, achieving a value of 0.807, surpassing all other methods, and improving upon the second-place WCDM method by 19.5%. These results demonstrate that our proposed method significantly enhanced image quality and similarity, far exceeding the performance of the other algorithms. Full article
(This article belongs to the Topic Green Mining, 2nd Volume)
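The abstract's strategy of enhancing only the V (value) channel while preserving hue and saturation can be illustrated without a full HSV round-trip: V is the per-pixel RGB maximum, and scaling all three channels by the same V ratio leaves hue and saturation unchanged. This is a simplified sketch in which the enhancement function is a placeholder for the paper's reflectance-estimation network:

```python
import numpy as np

def enhance_v_channel(rgb, enhance):
    """Apply an enhancement function to the HSV value channel only,
    scaling R, G, B by the same ratio so hue and saturation are kept."""
    v = rgb.max(axis=2, keepdims=True)      # HSV value channel
    ratio = enhance(v) / np.maximum(v, 1e-6)
    return np.clip(rgb * ratio, 0.0, 1.0)
```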

23 pages, 23372 KiB  
Article
Retinex Jointed Multiscale CLAHE Model for HDR Image Tone Compression
by Yu-Joong Kim, Dong-Min Son and Sung-Hak Lee
Mathematics 2024, 12(10), 1541; https://doi.org/10.3390/math12101541 - 15 May 2024
Cited by 5
Abstract
Tone-mapping algorithms aim to compress a wide dynamic range image into a narrower dynamic range image suitable for display on imaging devices. A representative tone-mapping algorithm, Retinex theory, reflects color constancy based on the human visual system and performs dynamic range compression. However, it may induce halo artifacts in some areas or degrade chroma and detail. Thus, this paper proposes a Retinex jointed multiscale contrast limited adaptive histogram equalization method. The proposed algorithm reduces localized halo artifacts and detail loss while maintaining the tone-compression effect via high-scale Retinex processing. A performance comparison of the experimental results between the proposed and existing methods confirms that the proposed method effectively reduces the existing problems and displays better image quality. Full article
(This article belongs to the Special Issue New Advances and Applications in Image Processing and Computer Vision)
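The contrast-limited histogram equalization at the heart of CLAHE can be sketched for a single tile (the clip limit and bin count here are illustrative; full CLAHE additionally interpolates between neighboring tiles, which this sketch omits):

```python
import numpy as np

def clipped_equalize(tile, clip_limit=0.03, bins=256):
    """Contrast-limited histogram equalization for one tile: clip the
    normalized histogram, redistribute the excess uniformly, then map
    intensities in [0, 1] through the resulting CDF."""
    hist, _ = np.histogram(tile, bins=bins, range=(0.0, 1.0))
    hist = hist.astype(float) / tile.size
    excess = np.maximum(hist - clip_limit, 0.0).sum()
    hist = np.minimum(hist, clip_limit) + excess / bins
    cdf = np.cumsum(hist)
    cdf /= cdf[-1]
    idx = np.clip((tile * bins).astype(int), 0, bins - 1)
    return cdf[idx]
```

Clipping the histogram caps how steep the CDF can become, which is what limits the local contrast amplification and the halo artifacts the abstract targets.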

22 pages, 18573 KiB  
Article
A Multi-Scale Fusion Strategy for Side Scan Sonar Image Correction to Improve Low Contrast and Noise Interference
by Ping Zhou, Jifa Chen, Pu Tang, Jianjun Gan and Hongmei Zhang
Remote Sens. 2024, 16(10), 1752; https://doi.org/10.3390/rs16101752 - 15 May 2024
Cited by 3
Abstract
Side scan sonar images have great application prospects in underwater surveys, target detection, and engineering activities. However, the acquired sonar images exhibit low illumination, scattered noise, distorted outlines, and unclear edge textures due to the complicated undersea environment and intrinsic device flaws. Hence, this paper proposes a multi-scale fusion strategy for side scan sonar (SSS) image correction to improve the low contrast and noise interference. Initially, an SSS image was decomposed into low and high frequency sub-bands via the non-subsampled shearlet transform (NSST). Then, modified multi-scale retinex (MMSR) was employed to enhance the contrast of the low frequency sub-band. Next, sparse dictionary learning (SDL) was utilized to eliminate high frequency noise. Finally, the process of NSST reconstruction was completed by fusing the emerging low and high frequency sub-band images to generate a new sonar image. The experimental results demonstrate that the target features, underwater terrain, and edge contours could be clearly displayed in the image corrected by the multi-scale fusion strategy when compared to eight correction techniques: BPDHE, MSRCR, NPE, ALTM, LIME, FE, WT, and TVRLRA. Effective control was achieved over the speckle noise of the sonar image. Furthermore, the AG, STD, and E values illustrated the delicacy and contrast of the corrected images processed by the proposed strategy. The PSNR value revealed that the proposed strategy outperformed the advanced TVRLRA technology in terms of filtering performance by at least 8.8%. It can provide sonar imagery that is appropriate for various circumstances. Full article
(This article belongs to the Special Issue Radar and Sonar Imaging and Processing IV)
