Search Results (15)

Search Parameters:
Keywords = contrast-preserving guided filter

18 pages, 7710 KiB  
Article
Improved Space Object Detection Based on YOLO11
by Yi Zhou, Tianhao Zhang, Zijing Li and Jianbin Qiu
Aerospace 2025, 12(7), 568; https://doi.org/10.3390/aerospace12070568 - 23 Jun 2025
Viewed by 445
Abstract
Space object detection, as the foundation for ensuring the long-term safe and stable operation of spacecraft, is widely applied in a variety of close-proximity tasks such as non-cooperative target monitoring, space debris avoidance, and spacecraft mission planning. To strengthen the detection capabilities for non-cooperative spacecraft and space debris, a method based on You Only Look Once Version 11 (YOLO11) is proposed in this paper. On the one hand, to tackle the issues of noise and low contrast in images captured by spacecraft, bilateral filtering is applied to remove noise while preserving edge and texture details effectively, and image contrast is enhanced using the contrast-limited adaptive histogram equalization (CLAHE) technique. On the other hand, to address the challenge of small object detection in spacecraft, loss-guided online data augmentation is proposed, along with improvements to the YOLO11 network architecture, to boost detection capabilities for small objects. The experimental results show that the proposed method achieved 99.0% mAP50 (mean Average Precision with an Intersection over Union threshold of 0.50) and 92.6% mAP50-95 on the SPARK-2022 dataset, significantly outperforming the YOLO11 baseline, thereby validating the effectiveness of the proposed method. Full article
(This article belongs to the Special Issue Intelligent Perception, Decision and Autonomous Control in Aerospace)
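The preprocessing this abstract describes pairs bilateral denoising with CLAHE. As a rough illustration of the bilateral half only, here is a minimal brute-force NumPy sketch with illustrative parameter values (not the authors' implementation; CLAHE is omitted):

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_s=2.0, sigma_r=0.1):
    """Brute-force bilateral filter for a float grayscale image in [0, 1].
    Each output pixel is a weighted mean of its neighbourhood; the weight
    decays with spatial distance (sigma_s) and with intensity difference
    (sigma_r), so noise is averaged away while strong edges survive."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_s**2))
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            range_w = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r**2))
            weights = spatial * range_w
            out[i, j] = np.sum(weights * patch) / np.sum(weights)
    return out

# Noisy vertical step edge: flat areas are denoised, the step is kept.
gen = np.random.default_rng(0)
step = np.tile(np.r_[np.zeros(8), np.ones(8)], (16, 1))
noisy = step + 0.05 * gen.standard_normal(step.shape)
smooth = bilateral_filter(noisy)
```

On this toy input the flat regions lose noise while the unit step survives, which is exactly the edge-preserving property the abstract relies on before CLAHE boosts contrast.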

25 pages, 11111 KiB  
Article
Integrating Backscattered Electron Imaging and Multi-Feature-Weighted Clustering for Quantification of Hydrated C3S Microstructure
by Xin Wang and Yongjun Luo
Buildings 2025, 15(10), 1699; https://doi.org/10.3390/buildings15101699 - 17 May 2025
Viewed by 393
Abstract
The microstructure of cement paste is governed by the hydration of its major component, tricalcium silicate (C3S). Quantitative analysis of C3S microstructural images is critical for elucidating the microstructure-property correlation in cementitious systems. Existing image segmentation methods rely on image contrast and therefore struggle with multi-phase segmentation in regions of similar grayscale intensity. This study proposes a weighted K-means clustering method that integrates intensity gradients, texture variations, and spatial coordinates for the quantitative analysis of hydrated C3S microstructure. The results indicate the following: (1) The deep convolutional neural network with guided filtering demonstrates superior denoising performance (mean squared error: 53.52; peak signal-to-noise ratio: 26.35 dB; structural similarity index: 0.8187), enabling high-fidelity preservation of cementitious phases. In contrast, wavelet denoising is effective for pore network analysis but loses part of the solid-phase information. (2) Unhydrated C3S exhibits optimal boundary clarity at intermediate relative image resolutions (0.25–0.56), while calcium hydroxide (CH) peaks at 0.19. (3) Silhouette coefficients (0.70–0.84) validate the robustness of the weighted K-means clustering, and the Clark–Evans index (0.426) indicates CH aggregation around hydration centers, in contrast with the random CH distribution observed in Portland cement systems. Full article
(This article belongs to the Section Building Materials, and Repair & Renovation)
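The core idea of multi-feature-weighted clustering is that scaling each feature column by a weight is equivalent to running ordinary K-means under a weighted Euclidean distance. A minimal sketch, with illustrative toy features and weights rather than the paper's values:

```python
import numpy as np

def weighted_kmeans(X, k, weights, iters=50):
    """Lloyd's K-means on feature-weighted data, with a deterministic
    farthest-point initialisation. Multiplying columns by `weights`
    realises a weighted Euclidean distance over the raw features."""
    Xw = X * np.asarray(weights)
    centers = [Xw[0]]
    for _ in range(1, k):                  # spread the initial centers out
        d = np.min([((Xw - c) ** 2).sum(1) for c in centers], axis=0)
        centers.append(Xw[d.argmax()])
    centers = np.array(centers)
    for _ in range(iters):
        d = ((Xw[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = d.argmin(1)
        new = np.array([Xw[labels == c].mean(0) for c in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels

# Toy rows of [intensity, texture, x, y]: two "phases" identical in
# texture but separable once intensity is weighted up.
gen = np.random.default_rng(1)
a = np.c_[0.1 + 0.02 * gen.standard_normal(15), np.full(15, 0.05),
          gen.uniform(0.0, 0.5, 15), gen.uniform(0, 1, 15)]
b = np.c_[0.9 + 0.02 * gen.standard_normal(15), np.full(15, 0.05),
          gen.uniform(0.5, 1.0, 15), gen.uniform(0, 1, 15)]
X = np.vstack([a, b])
labels = weighted_kmeans(X, k=2, weights=[1.0, 1.0, 0.3, 0.3])
```

Down-weighting the spatial coordinates (0.3 here) keeps clusters spatially coherent without letting position dominate the grayscale/texture evidence.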

27 pages, 13146 KiB  
Article
Underwater-Image Enhancement Based on Maximum Information-Channel Correction and Edge-Preserving Filtering
by Wei Liu, Jingxuan Xu, Siying He, Yongzhen Chen, Xinyi Zhang, Hong Shu and Ping Qi
Symmetry 2025, 17(5), 725; https://doi.org/10.3390/sym17050725 - 9 May 2025
Viewed by 776
Abstract
The properties of light propagation underwater typically cause color distortion and reduced contrast in underwater images. In addition, complex underwater lighting conditions can result in issues such as non-uniform illumination, spotting, and noise. To address these challenges, we propose an innovative underwater-image enhancement (UIE) approach based on maximum information-channel compensation and edge-preserving filtering techniques. Specifically, we first develop a channel information transmission strategy grounded in maximum information preservation principles, utilizing the maximum information channel to improve the color fidelity of the input image. Next, we locally enhance the color-corrected image using guided filtering and generate a series of globally contrast-enhanced images by applying gamma transformations with varying parameter values. In the final stage, the enhanced image sequence is decomposed into low-frequency (LF) and high-frequency (HF) components via side-window filtering. For the HF component, a weight map is constructed by calculating the difference between the current exposedness and the optimum exposure. For the LF component, we derive a comprehensive feature map by integrating the brightness map, saturation map, and saliency map, thereby accurately assessing the quality of degraded regions in a manner that aligns with the symmetry principle inherent in human vision. Ultimately, we combine the LF and HF components through a weighted summation process, resulting in a high-quality underwater image. Experimental results demonstrate that our method effectively achieves both color restoration and contrast enhancement, outperforming several State-of-the-Art UIE techniques across multiple datasets. Full article
(This article belongs to the Special Issue Symmetry and Its Applications in Image Processing)
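The gamma-sequence step of this pipeline can be illustrated with a simplified, single-scale stand-in: generate globally contrast-varied versions of the image and fuse them with per-pixel "well-exposedness" weights. This omits the paper's side-window filtering and LF/HF feature maps; the gamma values and sigma are illustrative:

```python
import numpy as np

def gamma_sequence(img, gammas=(0.5, 0.8, 1.0, 1.5, 2.5)):
    """Globally contrast-varied versions of a [0, 1] image, one per gamma."""
    return [np.power(img, g) for g in gammas]

def fuse_by_exposedness(seq, sigma=0.2):
    """Fuse a sequence with per-pixel weights that peak at mid-grey (0.5),
    as in classic exposure fusion: each pixel is drawn mostly from the
    exposure in which it is best exposed."""
    w = [np.exp(-((s - 0.5) ** 2) / (2 * sigma**2)) for s in seq]
    wsum = np.sum(w, axis=0) + 1e-12
    return np.sum([wi * si for wi, si in zip(w, seq)], axis=0) / wsum

dark = np.full((4, 4), 0.1)        # an under-exposed region
fused = fuse_by_exposedness(gamma_sequence(dark))
```

For the under-exposed patch, the low-gamma (brightening) members of the sequence receive the largest weights, so the fused result is pulled toward a better-exposed value while staying in [0, 1].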

13 pages, 3080 KiB  
Article
Thermal Infrared-Image-Enhancement Algorithm Based on Multi-Scale Guided Filtering
by Huaizhou Li, Shuaijun Wang, Sen Li, Hong Wang, Shupei Wen and Fengyu Li
Fire 2024, 7(6), 192; https://doi.org/10.3390/fire7060192 - 8 Jun 2024
Cited by 7 | Viewed by 3492
Abstract
Obtaining thermal infrared images with prominent details, high contrast, and minimal background noise has always been a focal point of infrared technology research. To address issues such as the blurriness of details and low contrast in thermal infrared images, an enhancement algorithm for thermal infrared images based on multi-scale guided filtering is proposed. This algorithm fully leverages the excellent edge-preserving characteristics of guided filtering and the multi-scale nature of the edge details in thermal infrared images. It uses multi-scale guided filtering to decompose each thermal infrared image into multiple scales of detail layers and a base layer. Then, CLAHE is employed to compress the grayscale and enhance the contrast of the base layer image. Then, detail-enhancement processing of the multi-scale detail layers is performed. Finally, the base layer and the multi-scale detail layers are linearly fused to obtain an enhanced thermal infrared image. Our experimental results indicate that, compared to other methods, the proposed method can effectively enhance image contrast and enrich image details, and has higher image quality and stronger scene adaptability. Full article
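The decomposition-and-refusion scheme described here (multiple detail layers plus a base layer, then a linear recombination) can be sketched with a box filter standing in for the paper's guided filter; the radii and per-scale gains are illustrative, not the authors' values:

```python
import numpy as np

def box_filter(img, r):
    """Mean filter over a (2r+1)^2 window via an integral image (edge-padded)."""
    pad = np.pad(img, r, mode="edge")
    c = pad.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))        # leading zero row/column
    h, w = img.shape
    n = 2 * r + 1
    return (c[n:n + h, n:n + w] - c[:h, n:n + w]
            - c[n:n + h, :w] + c[:h, :w]) / n**2

def multiscale_decompose(img, radii=(1, 2, 4)):
    """Split an image into detail layers (fine to coarse) and a base layer.
    The sum of all layers reconstructs the input exactly."""
    details, cur = [], img
    for r in radii:
        smooth = box_filter(cur, r)
        details.append(cur - smooth)
        cur = smooth
    return details, cur

def enhance(details, base, gains=(2.0, 1.5, 1.2)):
    """Linear re-fusion with per-scale detail gains (base layer contrast
    compression, e.g. CLAHE, is omitted in this sketch)."""
    return base + sum(g * d for g, d in zip(gains, details))

img = np.outer(np.linspace(0, 1, 12), np.linspace(0, 1, 12))
details, base = multiscale_decompose(img)
sharp = enhance(details, base)
```

Because each detail layer is defined as a difference of successive smoothings, the decomposition is lossless with unit gains, and gains above 1 amplify edges at the corresponding scale.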

13 pages, 7097 KiB  
Article
Low-Light Mine Image Enhancement Algorithm Based on Improved Retinex
by Feng Tian, Mengjiao Wang and Xiaopei Liu
Appl. Sci. 2024, 14(5), 2213; https://doi.org/10.3390/app14052213 - 6 Mar 2024
Cited by 9 | Viewed by 2386
Abstract
Aiming to solve the problems of local halo blurring, insufficient edge-detail preservation, and severe noise in traditional image enhancement algorithms, an improved Retinex algorithm for low-light mine image enhancement is proposed. Firstly, in HSV color space, the hue component is left unmodified, and improved multi-scale guided filtering is combined with the Retinex algorithm to estimate the illumination and reflection components from the brightness component. Secondly, the illumination component is equalized using the Weber–Fechner law, and contrast-limited adaptive histogram equalization (CLAHE) is fused with the improved guided filtering to brighten and denoise the reflection component. Then, the saturation component is adaptively stretched. Finally, the image is converted back to RGB space to obtain the enhanced result. Compared with the single-scale Retinex (SSR) and multi-scale Retinex (MSR) algorithms, the mean, standard deviation, information entropy, average gradient, peak signal-to-noise ratio (PSNR), and structural similarity (SSIM) are improved by averages of 50.55%, 19.32%, 3.08%, 28.34%, 29.10%, and 22.97%, respectively. The experimental data demonstrate that the algorithm improves image brightness, prevents halo artifacts while retaining edge details, reduces the effect of noise, and provides a theoretical reference for low-light image enhancement. Full article
(This article belongs to the Section Electrical, Electronics and Communications Engineering)
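The Retinex decomposition at the heart of this method estimates a smooth illumination field and takes the reflectance as a log-domain residual. A minimal single-scale sketch, with repeated 3x3 means standing in for the paper's multi-scale guided filter:

```python
import numpy as np

def mean3(img):
    """3x3 mean filter (edge-padded)."""
    h, w = img.shape
    p = np.pad(img, 1, mode="edge")
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def retinex(v, passes=8, eps=1e-6):
    """Single-scale Retinex sketch on a brightness channel v in (0, 1]:
    estimate illumination by heavy smoothing, then take the reflectance
    as log(v) - log(illumination)."""
    illum = v.copy()
    for _ in range(passes):
        illum = mean3(illum)
    refl = np.log(v + eps) - np.log(illum + eps)
    return illum, refl

# Synthetic brightness: a vertical illumination gradient times a
# checkerboard reflectance pattern.
h = w = 12
illum_true = np.linspace(0.2, 1.0, h)[:, None] * np.ones((1, w))
pattern = np.where(np.indices((h, w)).sum(0) % 2 == 0, 1.0, 0.5)
v = illum_true * pattern
illum, refl = retinex(v)
```

Because the illumination gradient is smooth and the reflectance pattern is not, the log residual recovers the pattern largely independently of the lighting, which is what lets the later Weber–Fechner and CLAHE steps act on the two components separately.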

19 pages, 4389 KiB  
Article
A Low-Delay Dynamic Range Compression and Contrast Enhancement Algorithm Based on an Uncooled Infrared Sensor with Local Optimal Contrast
by Youpan Zhu, Yongkang Zhou, Weiqi Jin, Li Zhang, Guanlin Wu and Yiping Shao
Sensors 2023, 23(21), 8860; https://doi.org/10.3390/s23218860 - 31 Oct 2023
Cited by 1 | Viewed by 2247
Abstract
Real-time compression of images with a high dynamic range into those with a low dynamic range while preserving the maximum amount of detail is still a critical technology in infrared image processing. We propose a dynamic range compression and enhancement algorithm for infrared images with local optimal contrast (DRCE-LOC). The algorithm has four steps. The first involves blocking the original image to determine the optimal stretching coefficient by using the information of the local block. In the second, the algorithm combines the original image with a low-pass filter to create the background and detailed layers, compressing the background layer with a dynamic range of adaptive gain, and enhancing the detailed layer for the visual characteristics of the human eye. Third, the original image was used as input, the compressed background layer was used as a brightness-guided image, and the local optimal stretching coefficient was used for dynamic range compression. Fourth, an 8-bit image was created (from typical 14-bit input) by merging the enhanced details and the compressed background. Implemented on FPGA, it used 2.2554 Mb of Block RAM, five dividers, and a root calculator with a total image delay of 0.018 s. The study analyzed mainstream algorithms in various scenarios (rich scenes, small targets, and indoor scenes), confirming the proposed algorithm’s superiority in real-time processing, resource utilization, preservation of the image’s details, and visual effects. Full article
(This article belongs to the Special Issue Applications of Manufacturing and Measurement Sensors)
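The background/detail split and 14-bit-to-8-bit mapping described above can be sketched as follows. A mean filter and a global log curve stand in for the paper's low-pass filtering and locally optimal stretching, and the detail gain is an illustrative value:

```python
import numpy as np

def mean_filter(img, r=2):
    """Mean filter over a (2r+1)^2 window (edge-padded)."""
    h, w = img.shape
    p = np.pad(img, r, mode="edge")
    n = 2 * r + 1
    return sum(p[i:i + h, j:j + w] for i in range(n) for j in range(n)) / n**2

def compress_dynamic_range(raw, detail_gain=2.0):
    """Sketch of dynamic range compression for a 14-bit IR frame:
    split into a background (low-pass) layer and a detail layer,
    log-compress the background, boost the detail, requantise to 8 bits."""
    img = raw.astype(float)
    base = mean_filter(img)
    detail = img - base
    base_c = np.log1p(base) / np.log1p(2**14 - 1)   # background -> [0, 1]
    detail_c = detail_gain * detail / (2**14 - 1)   # boosted detail layer
    out = np.clip(base_c + detail_c, 0, 1)
    return np.round(out * 255).astype(np.uint8)

gen = np.random.default_rng(2)
raw = gen.integers(0, 2**14, size=(8, 8))           # synthetic 14-bit frame
img8 = compress_dynamic_range(raw)
```

Compressing only the background layer is what preserves small-amplitude detail that a global tone curve over the full 14-bit range would crush.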

17 pages, 3506 KiB  
Article
Visible and Infrared Image Fusion of Forest Fire Scenes Based on Generative Adversarial Networks with Multi-Classification and Multi-Level Constraints
by Qi Jin, Sanqing Tan, Gui Zhang, Zhigao Yang, Yijun Wen, Huashun Xiao and Xin Wu
Forests 2023, 14(10), 1952; https://doi.org/10.3390/f14101952 - 26 Sep 2023
Cited by 5 | Viewed by 1969
Abstract
Aimed at addressing deficiencies in existing image fusion methods, this paper proposed a multi-level and multi-classification generative adversarial network (GAN)-based method (MMGAN) for fusing visible and infrared images of forest fire scenes (the surroundings of firefighters), which solves the problem that GANs tend to ignore visible contrast ratio information and detailed infrared texture information. The study was based on real-time visible and infrared image data acquired by visible and infrared binocular cameras on forest firefighters’ helmets. We improved the GAN by, on the one hand, splitting the input channels of the generator into gradient and contrast ratio paths, increasing the depth of convolutional layers, and improving the extraction capability of shallow networks. On the other hand, we designed a discriminator using a multi-classification constraint structure and trained it against the generator in a continuous and adversarial manner to supervise the generator, generating better-quality fused images. Our results indicated that compared to mainstream infrared and visible image fusion methods, including anisotropic diffusion fusion (ADF), guided filtering fusion (GFF), convolutional neural networks (CNN), FusionGAN, and dual-discriminator conditional GAN (DDcGAN), the MMGAN model was overall optimal and had the best visual effect when applied to image fusions of forest fire surroundings. Five of the six objective metrics were optimal, and one ranked second-to-optimal. The image fusion speed was more than five times faster than that of the other methods. The MMGAN model significantly improved the quality of fused images of forest fire scenes, preserved the contrast ratio information of visible images and the detailed texture information of infrared images of forest fire scenes, and could accurately reflect information on forest fire scene surroundings. Full article
(This article belongs to the Section Natural Hazards and Risk Management)

19 pages, 7483 KiB  
Article
MGFCTFuse: A Novel Fusion Approach for Infrared and Visible Images
by Shuai Hao, Jiahao Li, Xu Ma, Siya Sun, Zhuo Tian and Le Cao
Electronics 2023, 12(12), 2740; https://doi.org/10.3390/electronics12122740 - 20 Jun 2023
Cited by 4 | Viewed by 1693
Abstract
Traditional deep-learning-based fusion algorithms usually take the original image as input to extract features, which easily leads to a lack of rich details and background information in the fusion results. To address this issue, we propose a fusion algorithm, based on mutually guided image filtering and cross-transmission, termed MGFCTFuse. First, an image decomposition method based on mutually guided image filtering is designed, one which decomposes the original image into a base layer and a detail layer. Second, in order to preserve as much background and detail as possible during feature extraction, the base layer is concatenated with the corresponding original image to extract deeper features. Moreover, in order to enhance the texture details in the fusion results, the information in the visible and infrared detail layers is fused, and an enhancement module is constructed to enhance the texture detail contrast. Finally, in order to enhance the communication between different features, a decoding network based on cross-transmission is designed within feature reconstruction, which further improves the quality of image fusion. In order to verify the advantages of the proposed algorithm, experiments are conducted on the TNO, MSRS, and RoadScene image fusion datasets, and the results demonstrate that the algorithm outperforms nine comparative algorithms in both subjective and objective aspects. Full article
(This article belongs to the Special Issue Robotics Vision in Challenging Environment and Applications)

18 pages, 3630 KiB  
Article
CBFM: Contrast Balance Infrared and Visible Image Fusion Based on Contrast-Preserving Guided Filter
by Xilai Li, Xiaosong Li and Wuyang Liu
Remote Sens. 2023, 15(12), 2969; https://doi.org/10.3390/rs15122969 - 7 Jun 2023
Cited by 11 | Viewed by 2254
Abstract
Infrared (IR) and visible image fusion is an important data fusion and image processing technique that can accurately and comprehensively integrate the thermal radiation and texture details of source images. However, existing methods neglect the high-contrast fusion problem, leading to suboptimal fusion performance when thermal radiation target information in IR images is replaced by high-contrast information in visible images. To address this limitation, we propose a contrast-balanced framework for IR and visible image fusion. Specifically, a novel contrast balance strategy is proposed to process visible images and reduce energy while allowing for detailed compensation of overexposed areas. Moreover, a contrast-preserving guided filter is proposed to decompose the image into energy-detail layers to reduce high contrast and filter information. To effectively extract the active information in the detail layer and the brightness information in the energy layer, we proposed a new weighted energy-of-Laplacian operator and a Gaussian distribution of the image entropy scheme to fuse the detail and energy layers, respectively. The fused result was obtained by adding the detail and energy layers. Extensive experimental results demonstrate that the proposed method can effectively reduce the high contrast and highlighted target information in an image while simultaneously preserving details. In addition, the proposed method exhibited superior performance compared to the state-of-the-art methods in both qualitative and quantitative assessments. Full article
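The detail-layer fusion step here selects, per pixel, the source whose neighbourhood carries more "activity", measured by energy of Laplacian (EOL). A minimal unweighted stand-in for the paper's weighted-EOL operator:

```python
import numpy as np

def laplacian(img):
    """Discrete 4-neighbour Laplacian (edge-padded)."""
    p = np.pad(img, 1, mode="edge")
    return (p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2] + p[1:-1, 2:]
            - 4 * p[1:-1, 1:-1])

def eol_fuse(d1, d2, r=1):
    """Fuse two detail layers: at each pixel keep the layer whose local
    (2r+1)^2 neighbourhood has more Laplacian energy."""
    def eol(d):
        e = laplacian(d) ** 2
        h, w = e.shape
        p = np.pad(e, r, mode="edge")
        n = 2 * r + 1
        return sum(p[i:i + h, j:j + w] for i in range(n) for j in range(n))
    return np.where(eol(d1) >= eol(d2), d1, d2)

# d1 carries texture on the left half, d2 on the right half.
gen = np.random.default_rng(3)
d1 = np.zeros((8, 8)); d1[:, :4] = 0.2 * gen.standard_normal((8, 4))
d2 = np.zeros((8, 8)); d2[:, 4:] = 0.2 * gen.standard_normal((8, 4))
fused = eol_fuse(d1, d2)
```

The fused layer takes each half from the source that is textured there, which is the behaviour the abstract's weighted operator refines with per-pixel weights.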

16 pages, 3022 KiB  
Article
Image Noise Removal in Ultrasound Breast Images Based on Hybrid Deep Learning Technique
by Baiju Babu Vimala, Saravanan Srinivasan, Sandeep Kumar Mathivanan, Venkatesan Muthukumaran, Jyothi Chinna Babu, Norbert Herencsar and Lucia Vilcekova
Sensors 2023, 23(3), 1167; https://doi.org/10.3390/s23031167 - 19 Jan 2023
Cited by 39 | Viewed by 6013
Abstract
Rapid improvements in ultrasound imaging technology have made it much more useful for screening and diagnosing breast problems. Local-speckle-noise destruction in ultrasound breast images may impair image quality and impact observation and diagnosis. It is crucial to remove localized noise from images. In the article, we have used the hybrid deep learning technique to remove local speckle noise from breast ultrasound images. The contrast of ultrasound breast images was first improved using logarithmic and exponential transforms, and then guided filter algorithms were used to enhance the details of the glandular ultrasound breast images. In order to finish the pre-processing of ultrasound breast images and enhance image clarity, spatial high-pass filtering algorithms were used to remove the extreme sharpening. In order to remove local speckle noise without sacrificing the image edges, edge-sensitive terms were eventually added to the Logical-Pool Recurrent Neural Network (LPRNN). The mean square error and false recognition rate both fell below 1.1% at the hundredth training iteration, showing that the LPRNN had been properly trained. Ultrasound images that have had local speckle noise destroyed had signal-to-noise ratios (SNRs) greater than 65 dB, peak SNR ratios larger than 70 dB, edge preservation index values greater than the experimental threshold of 0.48, and quick destruction times. The time required to destroy local speckle noise is low, edge information is preserved, and image features are brought into sharp focus. Full article
(This article belongs to the Section Sensing and Imaging)
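The logarithmic and exponential transforms used in the pre-processing stage are standard point operations; for an image normalised to [0, 1] they can be written so that each is the other's inverse (a minimal sketch, not the authors' exact curves):

```python
import numpy as np

def log_transform(img):
    """Expand dark-region contrast: maps [0, 1] -> [0, 1], lifting
    mid-to-low intensities above the identity line."""
    return np.log1p(img) / np.log(2)

def exp_transform(img):
    """Inverse companion: compresses dark regions / expands bright ones."""
    return 2.0**img - 1.0

u = np.linspace(0, 1, 5)
bright = log_transform(u)
```

Applying the log curve before speckle filtering raises the visibility of low-intensity glandular structure, and the exponential curve undoes the global shift where needed.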

12 pages, 5225 KiB  
Article
X-ray Image Enhancement Based on Adaptive Gradient Domain Guided Image Filtering
by Liangliang Li, Ming Lv, Hongbing Ma, Zhenhong Jia, Xinghua Yang and Weiyi Yang
Appl. Sci. 2022, 12(20), 10453; https://doi.org/10.3390/app122010453 - 17 Oct 2022
Cited by 6 | Viewed by 4313
Abstract
Because the contrast of X-ray images is low, significant elements such as organs, bones, and nodules are very difficult to identify, so contrast enhancement is necessary. In this paper, an X-ray image enhancement algorithm based on adaptive gradient domain guided image filtering is proposed. The amplification factor in gradient domain guided image filtering normally has to be set manually and tuned repeatedly to achieve the best enhancement effect, which also increases the computational complexity. To solve this problem, an adaptive amplification factor is defined in this paper, and the resulting algorithm is applied to X-ray image enhancement. Experimental results demonstrate that the proposed method is superior to state-of-the-art algorithms in terms of detail enhancement and edge preservation. Full article
(This article belongs to the Special Issue Advances in Intelligent Control and Image Processing)

16 pages, 3133 KiB  
Article
Infrared and Visible Image Fusion Based on Visual Saliency Map and Image Contrast Enhancement
by Yuanyuan Liu, Zhiyong Wu, Xizhen Han, Qiang Sun, Jian Zhao and Jianzhuo Liu
Sensors 2022, 22(17), 6390; https://doi.org/10.3390/s22176390 - 25 Aug 2022
Cited by 10 | Viewed by 3089
Abstract
The purpose of infrared and visible image fusion is to generate images with prominent targets and rich information which provides the basis for target detection and recognition. Among the existing image fusion methods, the traditional method is easy to produce artifacts, and the information of the visible target and texture details are not fully preserved, especially for the image fusion under dark scenes and smoke conditions. Therefore, an infrared and visible image fusion method is proposed based on visual saliency image and image contrast enhancement processing. Aiming at the problem that low image contrast brings difficulty to fusion, an improved gamma correction and local mean method is used to enhance the input image contrast. To suppress artifacts that are prone to occur in the process of image fusion, a differential rolling guidance filter (DRGF) method is adopted to decompose the input image into the basic layer and the detail layer. Compared with the traditional multi-scale decomposition method, this method can retain specific edge information and reduce the occurrence of artifacts. In order to solve the problem that the salient object of the fused image is not prominent and the texture detail information is not fully preserved, the salient map extraction method is used to extract the infrared image salient map to guide the fusion image target weight, and on the other hand, it is used to control the fusion weight of the basic layer to improve the shortcomings of the traditional ‘average’ fusion method to weaken the contrast information. In addition, a method based on pixel intensity and gradient is proposed to fuse the detail layer and retain the edge and detail information to the greatest extent. Experimental results show that the proposed method is superior to other fusion algorithms in both subjective and objective aspects. Full article
(This article belongs to the Section Sensing and Imaging)

19 pages, 3502 KiB  
Article
Guidance Image-Based Enhanced Matched Filter with Modified Thresholding for Blood Vessel Extraction
by Sonali Dash, Sahil Verma, Kavita, Savitri Bevinakoppa, Marcin Wozniak, Jana Shafi and Muhammad Fazal Ijaz
Symmetry 2022, 14(2), 194; https://doi.org/10.3390/sym14020194 - 19 Jan 2022
Cited by 73 | Viewed by 5672
Abstract
Fundus images have been established as an important factor in analyzing and recognizing many cardiovascular and ophthalmological diseases. Consequently, precise segmentation of blood vessels using computer vision is vital in the recognition of ailments. Although clinicians have adopted computer-aided diagnostics (CAD) in day-to-day diagnosis, it is still quite difficult to conduct fully automated analysis based exclusively on information contained in fundus images. In fundus image applications, one method of conducting an automatic analysis is to ascertain symmetry/asymmetry details from corresponding areas of the retina and investigate their association with positive clinical findings. In the field of diabetic retinopathy, matched filters are an established technique for vessel extraction, but their efficiency is reduced by noisy images. In this work, a joint model of a fast guided filter and a matched filter is suggested for enhancing abnormal retinal images with low vessel contrast. Extracting all information from an image correctly is one of the important factors in the process of image enhancement. The guided filter has an excellent edge-preserving property but still tends to suffer from halo artifacts near edges. Fast guided filtering subsamples the filtering input image and the guidance image, calculates the local linear coefficients at the coarse scale, and upsamples them. In short, the proposed technique applies a fast guided filter and a matched filter to attain improved performance measures for vessel extraction. The recommended technique was assessed on the DRIVE and CHASE_DB1 datasets and achieved accuracies of 0.9613 and 0.960, respectively, both higher than those of the original matched filter and other suggested vessel segmentation algorithms. Full article
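The matched-filter idea for vessel extraction is to correlate the image with a zero-mean kernel shaped like a vessel's Gaussian cross-section, so the response peaks where a dark line of matching width and orientation lies under the kernel. A minimal single-orientation sketch with illustrative parameters (real detectors use a bank of rotated kernels):

```python
import numpy as np

def matched_kernel(sigma=1.0, half_width=3, length=5):
    """Kernel for a dark vertical vessel: an inverted Gaussian profile
    across x, constant along y; made zero-mean so flat background
    produces zero response."""
    x = np.arange(-half_width, half_width + 1)
    prof = -np.exp(-x**2 / (2 * sigma**2))
    prof -= prof.mean()
    return np.tile(prof, (length, 1))

def correlate2d(img, k):
    """Valid-mode 2-D correlation (brute force, for clarity)."""
    kh, kw = k.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * k)
    return out

img = np.ones((15, 15))
img[:, 7] = 0.3                     # a dark vertical "vessel"
resp = correlate2d(img, matched_kernel())
```

The response is exactly zero on the flat background (the zero-mean property) and maximal where the kernel centre sits on the vessel; thresholding such responses is the segmentation step the fast guided filter pre-cleans.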

20 pages, 7490 KiB  
Article
Detail Preserving Low Illumination Image and Video Enhancement Algorithm Based on Dark Channel Prior
by Lingli Guo, Zhenhong Jia, Jie Yang and Nikola K. Kasabov
Sensors 2022, 22(1), 85; https://doi.org/10.3390/s22010085 - 23 Dec 2021
Cited by 4 | Viewed by 3353
Abstract
In low illumination situations, insufficient light in the monitoring device results in poor visibility of effective information, which cannot meet practical applications. To overcome the above problems, a detail preserving low illumination video image enhancement algorithm based on dark channel prior is proposed in this paper. First, a dark channel refinement method is proposed, which is defined by imposing a structure prior to the initial dark channel to improve the image brightness. Second, an anisotropic guided filter (AnisGF) is used to refine the transmission, which preserves the edges of the image. Finally, a detail enhancement algorithm is proposed to avoid the problem of insufficient detail in the initial enhancement image. To avoid video flicker, the next video frames are enhanced based on the brightness of the first enhanced frame. Qualitative and quantitative analysis shows that the proposed algorithm is superior to the contrast algorithm, in which the proposed algorithm ranks first in average gradient, edge intensity, contrast, and patch-based contrast quality index. It can be effectively applied to the enhancement of surveillance video images and for wider computer vision applications. Full article
(This article belongs to the Special Issue AI Multimedia Applications)
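The starting point this paper refines is He et al.'s dark channel: the per-pixel minimum over the colour channels followed by a local minimum filter. A minimal sketch (the radius is illustrative; the paper's contribution is the structure prior imposed on top of this initial channel):

```python
import numpy as np

def dark_channel(img, r=1):
    """Dark channel of an HxWx3 image in [0, 1]: channel-wise minimum,
    then a (2r+1)^2 minimum (erosion) filter, edge-padded."""
    mins = img.min(axis=2)
    h, w = mins.shape
    p = np.pad(mins, r, mode="edge")
    n = 2 * r + 1
    return np.min([p[i:i + h, j:j + w] for i in range(n) for j in range(n)],
                  axis=0)

img = np.full((6, 6, 3), 0.8)
img[2, 2] = [0.1, 0.2, 0.3]          # one locally dark pixel
dark = dark_channel(img)
```

The erosion spreads the darkest channel value over the neighbourhood, which is why the raw dark channel is blocky and benefits from the refinement and anisotropic-guided-filter steps the abstract describes.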

14 pages, 2945 KiB  
Article
Hierarchical Guided-Image-Filtering for Efficient Stereo Matching
by Chengtao Zhu and Yau-Zen Chang
Appl. Sci. 2019, 9(15), 3122; https://doi.org/10.3390/app9153122 - 1 Aug 2019
Cited by 9 | Viewed by 3623
Abstract
Stereo matching is complicated by the uneven distribution of textures on the image pairs. We address this problem by applying the edge-preserving guided-Image-filtering (GIF) at different resolutions. In contrast to most multi-scale stereo matching algorithms, parameters of the proposed hierarchical GIF model are [...] Read more.
Stereo matching is complicated by the uneven distribution of textures on the image pairs. We address this problem by applying the edge-preserving guided-Image-filtering (GIF) at different resolutions. In contrast to most multi-scale stereo matching algorithms, parameters of the proposed hierarchical GIF model are in an innovative weighted-combination scheme to generate an improved matching cost volume. Our method draws its strength from exploiting texture in various resolution levels and performing an effective mixture of the derived parameters. This novel approach advances our recently proposed algorithm, the pervasive guided-image-filtering scheme, by equipping it with hierarchical filtering modules, leading to disparity images with more details. The approach ensures as many different-scale patterns as possible to be involved in the cost aggregation and hence improves matching accuracy. The experimental results show that the proposed scheme achieves the best matching accuracy when compared with six well-recognized cutting-edge algorithms using version 3 of the Middlebury stereo evaluation data sets. Full article
(This article belongs to the Special Issue Selected Papers from IEEE ICASI 2019)
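The GIF building block that the hierarchical scheme applies at several resolutions is He et al.'s guided image filter: fit a local linear model q = a·I + b in every window from box-filter statistics, then average the coefficients. A minimal grayscale-guide sketch (single scale; the hierarchical weighting of coefficients across resolutions is the paper's contribution and is omitted):

```python
import numpy as np

def box(img, r):
    """Mean filter over a (2r+1)^2 window (edge-padded)."""
    h, w = img.shape
    p = np.pad(img, r, mode="edge")
    n = 2 * r + 1
    return sum(p[i:i + h, j:j + w] for i in range(n) for j in range(n)) / n**2

def guided_filter(I, p, r=2, eps=1e-3):
    """Guided image filter with grayscale guide I and input p."""
    mean_I, mean_p = box(I, r), box(p, r)
    var_I = box(I * I, r) - mean_I**2
    cov_Ip = box(I * p, r) - mean_I * mean_p
    a = cov_Ip / (var_I + eps)        # ~1 on strong edges, ~0 on flat noise
    b = mean_p - a * mean_I
    return box(a, r) * I + box(b, r)  # averaged coefficients -> output

# Self-guided filtering of a noisy step: denoising with edge preservation,
# the property that makes GIF useful for cost-volume aggregation in stereo.
gen = np.random.default_rng(4)
step = np.tile(np.r_[np.zeros(8), np.ones(8)], (16, 1))
noisy = step + 0.05 * gen.standard_normal(step.shape)
q = guided_filter(noisy, noisy)
```

The regulariser eps controls the trade-off: windows whose variance is large relative to eps keep a ≈ 1 (edges pass through), while low-variance windows get a ≈ 0 (noise is replaced by the local mean).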