Search Results (18)

Search Parameters:
Keywords = clarity histogram

25 pages, 10025 KB  
Article
MFP-PAINet: Enhancing Underwater Images Through Multi-Dimensional Feature Fusion and Probabilistic Uncertainty Modeling
by Shuangquan Wu, Huanliang Xu, Xinfei Zhou, Zhuobing Wan, Zhaoyu Zhai and Xuehui Wu
J. Mar. Sci. Eng. 2025, 13(12), 2250; https://doi.org/10.3390/jmse13122250 - 27 Nov 2025
Viewed by 232
Abstract
In supervised training for underwater image enhancement (UIE), many deep learning methods rely on approximate reference images yet neglect the inherent uncertainty of these references. To address this problem, this study proposes MFP-PAINet, an underwater image enhancement network that integrates multi-dimensional feature fusion with probabilistic adaptive uncertainty modeling. Specifically, histogram equalization, white balance correction, and gamma correction are first applied to generate preliminary enhanced inputs with improved contrast and color balance. Subsequently, a multi-dimensional feature mapping module extracts high-dimensional representations and produces multiple feature maps, which are dynamically weighted and adaptively fused by a confidence generation module to handle uncertainty at the feature level. Finally, a probabilistic module further models the uncertainty and refines image details. Extensive experiments on eight publicly available datasets demonstrate that the proposed MFP-PAINet outperforms both traditional and deep learning-based UIE methods. For example, on the LSUI dataset, MFP-PAINet achieved SSIM, PSNR, and UIQM scores of 0.854, 23.044, and 3.056, respectively. Furthermore, MFP-PAINet also showed promising performance on image deraining and dehazing tasks, confirming that it can effectively recover image details and improve image clarity.
(This article belongs to the Section Ocean Engineering)
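The preliminary enhancement stage described above (histogram equalization, white balance correction, and gamma correction) is standard enough to sketch. The following is a minimal NumPy illustration, not the authors' code; the gray-world balance, the fixed gamma of 0.8, and the function names are all assumptions:

```python
import numpy as np

def gray_world_white_balance(img):
    """Scale each channel so its mean matches the global mean (gray-world assumption)."""
    img = img.astype(np.float64)
    channel_means = img.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / np.maximum(channel_means, 1e-6)
    return np.clip(img * gain, 0, 255)

def gamma_correct(img, gamma=0.8):
    """Brighten (gamma < 1) or darken (gamma > 1) via a power-law curve."""
    return 255.0 * (img / 255.0) ** gamma

def hist_equalize(channel):
    """Classic histogram equalization of one uint8 channel via its CDF."""
    hist = np.bincount(channel.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    return cdf[channel]

def preliminary_enhance(img_u8):
    """White balance -> gamma -> per-channel HE, mirroring the input stage described."""
    img = gamma_correct(gray_world_white_balance(img_u8)).astype(np.uint8)
    return np.stack([hist_equalize(img[..., c]) for c in range(3)],
                    axis=-1).astype(np.uint8)
```

The order and exact variants of the three operations are not specified in the abstract; any of the three steps can be swapped for a stronger variant without changing the overall shape of the pipeline.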

22 pages, 6249 KB  
Article
Edge-Aware Illumination Enhancement for Fine-Grained Defect Detection on Railway Surfaces
by Geuntae Bae, Sungan Yoon and Jeongho Cho
Mathematics 2025, 13(23), 3780; https://doi.org/10.3390/math13233780 - 25 Nov 2025
Viewed by 272
Abstract
Fine-grained defects on rail surfaces are often inadequately detected by conventional vision-based object detection models in low-light environments. Although this problem can be mitigated by enhancing image brightness and contrast or employing deep learning-based object detectors, these methods frequently distort critical edge and texture information essential for accurate defect recognition. Herein, we propose a preprocessing framework that integrates two complementary modules, namely adaptive illumination enhancement (AIE) and EdgeSeal enhancement (ESE). AIE leverages contrast-limited adaptive histogram equalization and gamma correction to enhance local contrast while adjusting the global brightness distribution. ESE further refines defect visibility through morphological closing and sharpening, enhancing edge continuity and structural clarity. When integrated with the You Only Look Once v11 (YOLOv11) object detection model and evaluated on a rail defect dataset, the proposed framework achieves an ~7% improvement in mean average precision over baseline YOLOv11 and outperforms recent state-of-the-art detectors under diverse low-light and degraded-visibility conditions. The improved precision and recall across three defect classes (defects, dirt, and gaps) demonstrate the robustness of our approach. The proposed framework holds promise for real-time railway infrastructure monitoring and automation systems and is broadly applicable to low-light object detection tasks across other industrial domains.
(This article belongs to the Special Issue Applications of Deep Learning and Convolutional Neural Network)
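The ESE module's morphological closing and sharpening steps can be sketched in plain NumPy. This is an illustrative stand-in, not the paper's implementation; the 3x3 structuring element and mean-blur unsharp mask are assumptions:

```python
import numpy as np

def _neighborhood_stack(img):
    """Stack the 3x3 neighborhood of every pixel (edge-padded), shape (9, h, w)."""
    p = np.pad(img, 1, mode='edge')
    return np.stack([p[i:i + img.shape[0], j:j + img.shape[1]]
                     for i in range(3) for j in range(3)])

def closing(img):
    """Morphological closing (dilation then erosion) with a 3x3 structuring element.

    Closing bridges small dark gaps in bright structures, improving edge continuity."""
    dilated = _neighborhood_stack(img).max(axis=0)
    return _neighborhood_stack(dilated).min(axis=0)

def sharpen(img, amount=1.0):
    """Unsharp-mask style sharpening using a 3x3 mean blur as the low-pass estimate."""
    blur = _neighborhood_stack(img.astype(np.float64)).mean(axis=0)
    return np.clip(img + amount * (img - blur), 0, 255).astype(np.uint8)
```

A one-pixel break in a bright defect edge is filled by `closing`, which is the property the module relies on.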

18 pages, 2054 KB  
Article
A Unified Preprocessing Pipeline for Noise-Resilient Crack Segmentation in Leaky Infrastructure Surfaces
by Jae-Jun Shin and Jeongho Cho
Sensors 2025, 25(17), 5574; https://doi.org/10.3390/s25175574 - 6 Sep 2025
Viewed by 1223
Abstract
Wet cracks caused by leakage often exhibit visual and structural distortions due to surface contamination, salt crystallization, and corrosion byproducts. These factors significantly degrade the performance of sensor- and vision-based crack detection systems. In moist environments, the initiation and propagation of cracks tend to be highly nonlinear and irregular, making it challenging to distinguish crack regions from the background—especially under visual noise such as reflections, stains, and low contrast. To address these challenges, this study proposes a segmentation framework that integrates a dedicated preprocessing pipeline aimed at suppressing noise and enhancing feature clarity, all without altering the underlying segmentation architecture. The pipeline begins with adaptive thresholding to perform initial binarization under varying lighting conditions. This is followed by morphological operations and connected component analysis to eliminate micro-level noise and restore structural continuity of crack patterns. Subsequently, both local and global contrast are enhanced using histogram stretching and contrast limited adaptive histogram equalization. Finally, a background fusion step is applied to emphasize crack features while preserving the original surface texture. Experimental results demonstrate that the proposed method significantly improves segmentation performance under adverse conditions. Notably, it achieves a precision of 97.5% and exhibits strong robustness against noise introduced by moisture, reflections, and surface irregularities. These findings confirm that targeted preprocessing can substantially enhance the accuracy and reliability of crack detection systems deployed in real-world infrastructure inspection scenarios.
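Histogram stretching, one step of the pipeline above, is simple to illustrate. A minimal percentile-based sketch in NumPy (the 2/98 percentile choice is an assumption, not the paper's setting):

```python
import numpy as np

def stretch(img, low_pct=2, high_pct=98):
    """Linearly stretch intensities so the given percentiles map to 0 and 255.

    Percentile endpoints (rather than min/max) make the stretch robust to a few
    outlier pixels such as specular reflections."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    out = (img.astype(np.float64) - lo) / max(hi - lo, 1e-6) * 255.0
    return np.clip(out, 0, 255).astype(np.uint8)
```

On a low-contrast crack image whose gray values occupy a narrow band, the output spans the full 8-bit range.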

25 pages, 10815 KB  
Article
Enhancing Heart Disease Diagnosis Using ECG Signal Reconstruction and Deep Transfer Learning Classification with Optional SVM Integration
by Mostafa Ahmad, Ali Ahmed, Hasan Hashim, Mohammed Farsi and Nader Mahmoud
Diagnostics 2025, 15(12), 1501; https://doi.org/10.3390/diagnostics15121501 - 13 Jun 2025
Cited by 3 | Viewed by 3368
Abstract
Background/Objectives: Accurate and efficient diagnosis of heart disease through electrocardiogram (ECG) analysis remains a critical challenge in clinical practice due to noise interference, morphological variability, and the complexity of overlapping cardiac signals. Methods: This study presents a comprehensive deep learning (DL) framework that integrates advanced ECG signal segmentation with transfer learning-based classification, aimed at improving diagnostic performance. The proposed ECG segmentation algorithm takes a distinct approach compared to prior research by integrating adaptive preprocessing, histogram-based lead separation, and robust point-tracking techniques into a unified framework. While most earlier studies have addressed ECG image processing using basic filtering, fixed-region cropping, or template matching, our method focuses on automated and precise reconstruction of individual ECG leads from noisy and overlapping multi-lead images, a challenge often overlooked in previous work. This segmentation strategy significantly enhances signal clarity and enables the extraction of richer, more localized features, boosting the performance of DL classifiers. The dataset used in this work consists of standard 12-lead ECG images spanning four primary classes. Results: Experiments conducted with various DL models, such as VGG16, VGG19, ResNet50, InceptionNetV2, and GoogleNet, reveal that segmentation notably enhances model performance in terms of recall, precision, and F1 score. The hybrid VGG19 + SVM model achieved 98.01% and 100% accuracy in multi-class classification, along with average accuracies of 99% and 97.95% in binary classification tasks, using the original and reconstructed datasets, respectively. Conclusions: The results highlight the superiority of deep, feature-rich models in handling reconstructed ECG signals and confirm the value of segmentation as a critical preprocessing step. These findings underscore the importance of effective ECG segmentation in DL applications for automated heart disease diagnosis, offering a more reliable and accurate solution.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
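The histogram-based lead separation mentioned above resembles a classic projection-profile split. A hedged NumPy sketch of that idea, assuming a binarized image with ink = 1 and blank rows between vertically stacked leads; this is not the paper's algorithm, which also handles overlapping leads:

```python
import numpy as np

def split_leads(binary_img):
    """Split a stacked multi-lead ECG image into horizontal strips.

    The row-wise ink count (a projection profile, i.e. a per-row histogram of
    ink) is zero between leads; contiguous runs of inked rows become strips."""
    profile = binary_img.sum(axis=1)   # ink pixels per row
    inked = profile > 0
    strips, start = [], None
    for r, on in enumerate(inked):
        if on and start is None:
            start = r                  # strip begins
        elif not on and start is not None:
            strips.append((start, r))  # strip ends at a blank row
            start = None
    if start is not None:
        strips.append((start, len(inked)))
    return [binary_img[a:b] for a, b in strips]
```

Real scans need smoothing and a nonzero blank threshold; the clean zero-gap assumption here keeps the sketch short.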

23 pages, 32021 KB  
Article
SVDDD: SAR Vehicle Target Detection Dataset Augmentation Based on Diffusion Model
by Keao Wang, Zongxu Pan and Zixiao Wen
Remote Sens. 2025, 17(2), 286; https://doi.org/10.3390/rs17020286 - 15 Jan 2025
Cited by 3 | Viewed by 2645
Abstract
In the field of target detection using synthetic aperture radar (SAR) images, deep learning-based supervised learning methods have demonstrated outstanding performance. However, the effectiveness of deep learning methods is largely influenced by the quantity and diversity of samples in the dataset. Unfortunately, due to various constraints, the availability of labeled image data for training SAR vehicle detection networks is quite limited. This scarcity of data has become one of the main obstacles hindering the further development of SAR vehicle detection. In response to this issue, this paper collects SAR images of the Ka, Ku, and X bands to construct a labeled dataset for training Stable Diffusion. It then proposes a data augmentation framework for SAR vehicle detection based on the diffusion model, consisting of a fine-tuned Stable Diffusion model, a ControlNet, and a series of methods for processing and filtering images based on image clarity, histograms, and an influence function, to enhance the diversity of the original dataset and thereby improve the performance of deep learning detection models. In the experiment, the samples we generated and screened achieved an average improvement of 2.32%, with a maximum of 6.6%, in mAP75 on five different strong baseline detectors.
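Filtering generated samples by image clarity, as described above, is commonly done with a no-reference sharpness score such as the variance of the Laplacian. A small sketch of that filter; the score and threshold are illustrative stand-ins, not the paper's exact criteria:

```python
import numpy as np

def laplacian_variance(gray):
    """Variance of the 4-neighbour Laplacian: a common no-reference sharpness score.

    Blurry, low-detail images have a flat Laplacian and hence a low variance."""
    g = gray.astype(np.float64)
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return lap.var()

def filter_by_clarity(images, threshold):
    """Keep only generated samples whose sharpness score exceeds the threshold."""
    return [im for im in images if laplacian_variance(im) > threshold]
```

A high-frequency pattern scores far above a flat image, so a single threshold separates the two cleanly.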

15 pages, 7532 KB  
Article
Inhomogeneous Illumination Image Enhancement Under Extremely Low Visibility Condition
by Libang Chen, Jinyan Lin, Qihang Bian, Yikun Liu and Jianying Zhou
Appl. Sci. 2024, 14(22), 10111; https://doi.org/10.3390/app142210111 - 5 Nov 2024
Cited by 3 | Viewed by 1772
Abstract
Imaging through dense fog presents unique challenges: it obscures visual information essential for applications like object detection and recognition and defeats conventional image processing methods. Despite improvements from neural network-based approaches, these techniques falter under extremely low visibility conditions exacerbated by inhomogeneous illumination, which degrades deep learning performance due to inconsistent signal intensities. We introduce in this paper a novel method that adaptively filters background illumination based on Structural Differential and Integral Filtering (SDIF) to enhance only the vital signal information. Grayscale banding is eliminated by incorporating a visual optimization strategy based on image gradients. Maximum Histogram Equalization (MHE) is used to achieve high contrast while maintaining fidelity to the original content. We evaluated our algorithm using data collected from both a fog chamber and outdoor environments and performed comparative analyses with existing methods. Our findings demonstrate that the proposed method significantly enhances signal clarity under extremely low visibility conditions and outperforms existing techniques, offering substantial improvements for deep fog imaging applications.
(This article belongs to the Special Issue Advances in Image Enhancement and Restoration Technology)
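As a rough illustration of background-illumination filtering (not SDIF itself, whose structural filters are more involved), one can subtract a blurred illumination estimate and rescale the residual. The kernel size, gain, and mid-gray offset below are arbitrary assumptions:

```python
import numpy as np

def box_blur(img, k):
    """Mean filter with an edge-padded k x k window (a crude low-pass)."""
    p = np.pad(img.astype(np.float64), k // 2, mode='edge')
    h, w = img.shape
    acc = np.zeros((h, w))
    for i in range(k):
        for j in range(k):
            acc += p[i:i + h, j:j + w]
    return acc / (k * k)

def remove_background(img, k=15, gain=2.0):
    """Subtract the low-frequency illumination estimate, then rescale around mid-gray.

    The blur stands in for the inhomogeneous background illumination; the
    residual keeps the signal detail that the enhancement should amplify."""
    detail = img.astype(np.float64) - box_blur(img, k)
    return np.clip(128 + gain * detail, 0, 255).astype(np.uint8)
```

On a smooth illumination gradient with one bright target, the gradient flattens to mid-gray while the target survives with boosted contrast.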

15 pages, 5779 KB  
Article
Enhancing Visual Perception in Immersive VR and AR Environments: AI-Driven Color and Clarity Adjustments Under Dynamic Lighting Conditions
by Maryam Abbasi, Paulo Váz, José Silva and Pedro Martins
Technologies 2024, 12(11), 216; https://doi.org/10.3390/technologies12110216 - 3 Nov 2024
Cited by 7 | Viewed by 5965
Abstract
The visual fidelity of virtual reality (VR) and augmented reality (AR) environments is essential for user immersion and comfort. Dynamic lighting often leads to chromatic distortions and reduced clarity, causing discomfort and disrupting user experience. This paper introduces an AI-driven chromatic adjustment system based on a modified U-Net architecture, optimized for real-time applications in VR/AR. This system adapts to dynamic lighting conditions, addressing the shortcomings of traditional methods like histogram equalization and gamma correction, which struggle with rapid lighting changes and real-time user interactions. We compared our approach with state-of-the-art color constancy algorithms, including Barron’s Convolutional Color Constancy and STAR, demonstrating superior performance. Experimental results from 60 participants show significant improvements, with up to 41% better color accuracy and 39% enhanced clarity under dynamic lighting conditions. The study also included eye-tracking data, which confirmed increased user engagement with AI-enhanced images. Our system provides a practical solution for developers aiming to improve image quality, reduce visual discomfort, and enhance overall user satisfaction in immersive environments. Future work will focus on extending the model’s capability to handle more complex lighting scenarios.
(This article belongs to the Section Information and Communication Technologies)

34 pages, 9166 KB  
Article
Enhancing Daylight Comfort with Climate-Responsive Kinetic Shading: A Simulation and Experimental Study of a Horizontal Fin System
by Marcin Brzezicki
Sustainability 2024, 16(18), 8156; https://doi.org/10.3390/su16188156 - 19 Sep 2024
Cited by 4 | Viewed by 4260
Abstract
This study employs both simulation and experimental methodologies to evaluate the effectiveness of bi-sectional kinetic shading systems (KSS) with horizontal fins in enhancing daylight comfort across various climates. It emphasizes the importance of optimizing daylight levels while minimizing solar heat gain, particularly in the context of increasing energy demands and shifting climatic patterns. The study introduces a custom-designed bi-sectional KSS, simulated in three distinct climates (Wroclaw, Tehran, and Bangkok) using climate-based daylight modeling methods with the Ladybug and Honeybee tools in Rhino v.7 software. Standard daylight metrics, such as Useful Daylight Illuminance (UDI) and Daylight Glare Probability (DGP), were employed alongside custom metrics tailored to capture the unique dynamics of the bi-sectional KSS. The results were statistically analyzed using box plots and histograms, revealing UDI300–3000 medians of 78.51%, 88.96%, and 86.22% for Wroclaw, Tehran, and Bangkok, respectively. These findings demonstrate the KSS’s effectiveness in providing optimal daylight conditions across diverse climatic regions. Annual simulations based on standardized weather data showed that the KSS improved visual comfort by 61.04%, 148.60%, and 88.55%, respectively, compared to a scenario without any shading, and by 31.96%, 54.69%, and 37.05%, respectively, compared to a scenario with open static horizontal fins. The inclusion of KSS switching schedules, often overlooked in similar research, enhances the reproducibility and clarity of the findings. A physical reduced-scale mock-up of the bi-sectional KSS was then tested under real-weather conditions in Wroclaw (latitude 51° N) during June–July 2024. The mock-up consisted of two chambers: Chamber ‘2’, fitted with the bi-sectional KSS prototype, and Chamber ‘1’, left unshaded as a control. Stepper motors operated the fins via a Python script on a Raspberry Pi 3 minicomputer. The control Chamber ‘1’ provided a baseline for comparing the KSS’s efficiency. Experimental results supported the simulations, demonstrating the KSS’s robustness in reducing high illuminance levels, with illuminance kept below 3000 lx for 68% of the time during the experiment (conducted from 1 to 4 PM on three analysis days). While UDI and DA calculations were not feasible due to the limited number of sensors, the Eh1 values enabled evaluation of the time during which illuminance remained below the threshold. However, during the June–July 2024 heat waves, illuminance levels briefly exceeded the comfort threshold, reaching 4674 lx. Quantitative and qualitative analyses advocate for the broader application and further development of KSS as a climate-responsive shading system in various architectural contexts.
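The UDI metric used above is straightforward to compute from an hourly illuminance series. A minimal sketch; the 300-3000 lx band follows the UDI300–3000 notation in the abstract, and the half-open interval is a convention choice:

```python
import numpy as np

def udi(illuminance_lx, low=300.0, high=3000.0):
    """Useful Daylight Illuminance: fraction of hours with illuminance in [low, high) lx."""
    e = np.asarray(illuminance_lx, dtype=float)
    return float(np.mean((e >= low) & (e < high)))
```

For four hourly readings of which two fall in the useful band, the metric is 0.5.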

22 pages, 5638 KB  
Article
A Method for Defogging Sea Fog Images by Integrating Dark Channel Prior with Adaptive Sky Region Segmentation
by Kongchi Hu, Qingyan Zeng, Junyan Wang, Jianqing Huang and Qi Yuan
J. Mar. Sci. Eng. 2024, 12(8), 1255; https://doi.org/10.3390/jmse12081255 - 25 Jul 2024
Cited by 7 | Viewed by 1673
Abstract
Due to the detrimental impact of fog on image quality, dehazing maritime images is essential for applications such as safe maritime navigation, surveillance, environmental monitoring, and marine research. Traditional dehazing techniques depend on presupposed conditions and often fail to perform effectively, particularly when processing sky regions within marine fog images in which these conditions are not met. This study proposes a marine image dehazing method based on the dark channel prior with adaptive sky-region segmentation. It effectively addresses challenges associated with traditional marine image dehazing methods, improving dehazing results affected by bright targets in the sky region and mitigating the grayish appearance caused by the dark channel. Exploiting the grayscale discontinuity at region boundaries, the method takes the grayscale value with the fewest discontinuity regions in the grayscale histogram as a segmentation threshold adapted to the characteristics of the sea fog image to segment bright areas such as the sky, and then uses grayscale gradients to identify differences between bright areas, accurately distinguishing boundaries between sky and non-sky regions. By comparing region parameters, non-sky blocks are filled in; this adaptively eliminates interference from other bright non-sky areas and accurately locks onto the sky region. Furthermore, this study proposes an enhanced dark channel prior approach that optimizes transmittance locally within the sky region and globally across the image, using a transmittance optimization algorithm combined with guided filtering. The atmospheric light estimate is refined through iterative adjustments, ensuring consistent brightness between the dehazed and original images. The image is reconstructed from the calculated atmospheric light and transmittance values through an atmospheric scattering model. Finally, gamma correction ensures that images more accurately replicate natural colors and brightness levels. Experimental outcomes demonstrate substantial improvements in the contrast, color saturation, and visual clarity of marine fog images. Additionally, a set of foggy marine image datasets is developed for monitoring purposes. Compared with traditional dark channel prior dehazing techniques, this new approach significantly improves fog removal. This advancement enhances the clarity of images obtained from maritime equipment and effectively mitigates the risk of maritime transportation accidents.
(This article belongs to the Section Ocean Engineering)
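The dark channel prior underlying the method can be sketched compactly: per-pixel channel minimum, local minimum filter, then the transmission estimate t = 1 - omega * dark(I/A). This is the textbook form, not the paper's enhanced variant; patch size and omega are conventional defaults:

```python
import numpy as np

def dark_channel(img, patch=3):
    """Per-pixel min over RGB, then a local min filter: the dark channel."""
    mins = img.min(axis=2).astype(np.float64)
    p = np.pad(mins, patch // 2, mode='edge')
    h, w = mins.shape
    out = np.full((h, w), np.inf)
    for i in range(patch):
        for j in range(patch):
            out = np.minimum(out, p[i:i + h, j:j + w])
    return out

def transmission(img, atmos, omega=0.95, patch=3):
    """Transmission estimate t = 1 - omega * dark_channel(I / A).

    omega < 1 keeps a trace of haze for depth perception, as in the classic prior."""
    norm = img.astype(np.float64) / np.maximum(atmos, 1e-6)
    return 1.0 - omega * dark_channel((norm * 255).astype(np.float64), patch) / 255.0
```

A uniformly white (fully fogged) patch yields t = 1 - omega everywhere, which is exactly the failure mode in sky regions that motivates the paper's sky segmentation.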

25 pages, 12172 KB  
Article
Contrast Enhancement Method Using Region-Based Dynamic Clipping Technique for LWIR-Based Thermal Camera of Night Vision Systems
by Cheol-Ho Choi, Joonhwan Han, Jeongwoo Cha, Hyunmin Choi, Jungho Shin, Taehyun Kim and Hyun Woo Oh
Sensors 2024, 24(12), 3829; https://doi.org/10.3390/s24123829 - 13 Jun 2024
Cited by 5 | Viewed by 5072
Abstract
In the autonomous driving industry, there is a growing trend to employ long-wave infrared (LWIR)-based uncooled thermal-imaging cameras, capable of robustly collecting data even in extreme environments. Consequently, both industry and academia are actively researching contrast-enhancement techniques to improve the quality of LWIR-based thermal-imaging cameras. However, most research results only showcase experimental outcomes using mass-produced products that already incorporate contrast-enhancement techniques. Put differently, there is a lack of experimental data on contrast enhancement applied after the non-uniformity correction (NUC) and temperature compensation (TC) processes that generate the images seen in final products. To bridge this gap, we propose a histogram equalization (HE)-based contrast enhancement method that incorporates a region-based clipping technique, and we present experimental results on images obtained after applying the NUC and TC processes, evaluated both visually and qualitatively. In the visual evaluation, the proposed method improved image clarity and contrast ratio compared to conventional HE-based methods, even in challenging driving scenarios such as tunnels. In the qualitative evaluation, the proposed method achieved upper-middle-class rankings in both image quality and processing speed metrics. Therefore, our proposed method proves effective for the essential contrast enhancement process in LWIR-based uncooled thermal-imaging cameras intended for autonomous driving platforms.
(This article belongs to the Special Issue Infrared Sensing and Target Detection)
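The core of a clipping-based HE method can be illustrated with a global variant: clip the histogram before building the CDF so the contrast gain is bounded. A sketch; the clip fraction and uniform redistribution of the excess are assumptions, and the paper's method clips per region rather than globally:

```python
import numpy as np

def clipped_he(gray_u8, clip_frac=0.02):
    """Histogram equalization with the histogram clipped before building the CDF.

    Capping each bin at clip_frac of the pixel count and spreading the excess
    uniformly limits the slope of the mapping, i.e. the local contrast gain."""
    hist = np.bincount(gray_u8.ravel(), minlength=256).astype(np.float64)
    limit = clip_frac * hist.sum()
    excess = np.maximum(hist - limit, 0).sum()
    hist = np.minimum(hist, limit) + excess / 256.0   # redistribute the clipped mass
    cdf = hist.cumsum()
    lut = np.round((cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1e-6)
                   * 255).astype(np.uint8)
    return lut[gray_u8]
```

Because the CDF is non-decreasing, the lookup table preserves the intensity ordering of the input, which keeps thermal gradients monotone after enhancement.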

20 pages, 6160 KB  
Article
A Multi-Step Image Pre-Enhancement Strategy for a Fish Feeding Behavior Analysis Using Efficientnet
by Guofu Feng, Xiaojuan Kan and Ming Chen
Appl. Sci. 2024, 14(12), 5099; https://doi.org/10.3390/app14125099 - 12 Jun 2024
Cited by 8 | Viewed by 1818
Abstract
To enhance the accuracy of lightweight CNN classification models in analyzing fish feeding behavior, this paper addresses image quality issues caused by external environmental factors and lighting conditions, such as low contrast and uneven illumination, by proposing a Multi-step Image Pre-enhancement Strategy (MIPS). The strategy comprises three steps: first, images undergo preliminary processing with the Multi-Scale Retinex with Color Restoration (MSRCR) algorithm, effectively reducing the impact of water surface reflections and enhancing the visual quality of the images; second, the Multi-Metric-Driven Contrast Limited Adaptive Histogram Equalization (mdc) technique is applied to further improve image contrast, especially in low-contrast areas, by adjusting local contrast levels; finally, Unsharp Masking (UM) is employed to sharpen the images, emphasizing edges to increase the clarity of image details and thereby significantly improving overall image quality. Experimental results on a custom dataset confirm that this pre-enhancement strategy significantly boosts the accuracy of various CNN-based classification models, particularly lightweight ones, and drastically reduces the time required for model training compared to the use of advanced ResNet models. This research provides an effective technical route for improving the accuracy and efficiency of image-based analysis of fish feeding behavior in complex environments.
(This article belongs to the Section Marine Science and Engineering)
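The Retinex step in the strategy above can be sketched in its single-channel multi-scale form (MSR); the paper's MSRCR adds a color-restoration term that is omitted here, and the blur scales and normalization are illustrative choices:

```python
import numpy as np

def box_blur(img, k):
    """Mean filter with an edge-padded k x k window (stands in for a Gaussian)."""
    p = np.pad(img, k // 2, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(k) for j in range(k)) / (k * k)

def multi_scale_retinex(gray, scales=(3, 7, 15)):
    """Average of log(I) - log(blur_k(I)) over several blur scales: the MSR core.

    Dividing out (in log space) a smoothed illumination estimate at multiple
    scales suppresses uneven lighting while keeping reflectance detail."""
    g = gray.astype(np.float64) + 1.0           # avoid log(0)
    msr = sum(np.log(g) - np.log(box_blur(g, k) + 1e-6)
              for k in scales) / len(scales)
    lo, hi = msr.min(), msr.max()
    return ((msr - lo) / max(hi - lo, 1e-9) * 255).astype(np.uint8)
```

The min-max rescale at the end is one simple way back to 8-bit; MSRCR implementations typically use gain/offset constants instead.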

16 pages, 3114 KB  
Article
Underwater Degraded Image Restoration by Joint Evaluation and Polarization Partition Fusion
by Changye Cai, Yuanyi Fan, Ronghua Li, Haotian Cao, Shenghui Zhang and Mianze Wang
Appl. Sci. 2024, 14(5), 1769; https://doi.org/10.3390/app14051769 - 21 Feb 2024
Cited by 2 | Viewed by 1906
Abstract
Images of underwater environments suffer from contrast degradation, reduced clarity, and information attenuation. The traditional approach relies on a global estimate of polarization. However, targets in water often have complex polarization properties: in low-polarization regions, where the target's polarization is similar to that of the background, traditional methods struggle to distinguish target from non-target regions. Therefore, this paper proposes a joint evaluation and partition fusion method. First, we use histogram stretching to preprocess the two orthogonally polarized images, which increases contrast and enhances image detail. Then, the target is partitioned according to the value of each pixel in the polarization image, and low- and high-polarization target regions are extracted based on polarization values. The low-polarization region is recovered using the polarization difference method, and the high-polarization region is recovered using a joint estimate over multiple optimization metrics. Finally, the low- and high-polarization regions are fused. Subjectively, the experimental results are fully restored as a whole, with information completely retained; our method can fully recover the low-polarization region, effectively remove the scattering effect, and increase an image's contrast. Objectively, the experimental evaluation indexes EME, Entropy, and Contrast show that our method performs significantly better than the other methods, confirming the feasibility of the proposed algorithm for specific underwater scenarios.
(This article belongs to the Topic Computer Vision and Image Processing, 2nd Edition)
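The partition idea above (split by per-pixel polarization value, recover low-polarization regions by polarization difference) can be sketched from two orthogonally polarized images. This is a simplified two-channel illustration; the intensity fallback for high-polarization regions and the 0.2 threshold stand in for the paper's multi-metric optimization:

```python
import numpy as np

def degree_of_polarization(i0, i90):
    """DoLP approximation from two orthogonal polarization images: |I0-I90| / (I0+I90)."""
    s0 = i0 + i90
    return np.abs(i0 - i90) / np.maximum(s0, 1e-6)

def partition_and_fuse(i0, i90, thresh=0.2):
    """Partition pixels by DoLP, recover each partition separately, then fuse.

    Low-DoLP pixels use the polarization-difference image; high-DoLP pixels here
    simply keep the mean intensity (a placeholder for the joint-metric recovery)."""
    dolp = degree_of_polarization(i0, i90)
    diff = np.abs(i0 - i90)        # polarization-difference image
    total = (i0 + i90) / 2.0       # intensity image
    return np.where(dolp < thresh, diff, total)
```

A pixel with equal orthogonal intensities has DoLP 0 and is taken from the difference image; a strongly polarized pixel keeps its intensity value.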

26 pages, 1181 KB  
Article
Advanced Medical Image Segmentation Enhancement: A Particle-Swarm-Optimization-Based Histogram Equalization Approach
by Shoffan Saifullah and Rafał Dreżewski
Appl. Sci. 2024, 14(2), 923; https://doi.org/10.3390/app14020923 - 22 Jan 2024
Cited by 31 | Viewed by 6260
Abstract
Accurate medical image segmentation is paramount for precise diagnosis and treatment in modern healthcare. This research presents a comprehensive study of the efficacy of particle swarm optimization (PSO) combined with histogram equalization (HE) preprocessing for medical image segmentation, focusing on lung CT scan and chest X-ray datasets. Best-cost values reveal the PSO algorithm’s performance, with HE preprocessing demonstrating significant stabilization and enhanced convergence, particularly for complex lung CT scan images. Evaluation metrics, including accuracy, precision, recall, F1-score/Dice, specificity, and Jaccard, show substantial improvements with HE preprocessing, emphasizing its impact on segmentation accuracy. Comparative analyses against alternative methods, such as Otsu, Watershed, and K-means, confirm the competitiveness of the PSO-HE approach, especially for chest X-ray images. The study also underscores the positive influence of preprocessing on image clarity and precision. These findings highlight the promise of the PSO-HE approach for advancing the accuracy and reliability of medical image segmentation and pave the way for further research and method integration to enhance this critical healthcare application.
(This article belongs to the Special Issue Artificial Intelligence in Biomedical Image Processing)

21 pages, 77017 KB  
Article
Color Night Light Remote Sensing Images Generation Using Dual-Transformation
by Yanling Lu, Guoqing Zhou, Meiqi Huang and Yaqi Huang
Sensors 2024, 24(1), 294; https://doi.org/10.3390/s24010294 - 3 Jan 2024
Cited by 2 | Viewed by 2446
Abstract
Traditional night light images are black and white with low resolution, which has largely limited their applications in areas such as high-accuracy urban electricity consumption estimation. For this reason, this study proposes a fusion algorithm based on a dual transformation (wavelet transform and IHS (Intensity Hue Saturation) color space transform) to generate color night light remote sensing images (color-NLRSIs). In the dual transformation, the red and green bands of Landsat multi-spectral images are merged with “NPP-VIIRS-like” night light remote sensing images. The Landsat multi-spectral image is first converted into the IHS color space to obtain its I, H, and S components, which carry the main effective information of the original image; the components are histogram-matched and then fused using a two-dimensional discrete wavelet transform. Finally, the result is inverted back into an RGB (red, green, and blue) color image. The experimental results demonstrate the following: (1) Compared with traditional single-transform fusion algorithms, the dual transformation achieves the best overall performance in spatial resolution, detail contrast, and color information before and after fusion, so its fused image quality is the best; (2) The fused color-NLRSIs can visualize the features covered by lights at night, and the image resolution is improved from 500 m to 40 m, enabling more accurate analysis of small-scale lighting and the ground features it covers; (3) The fused color-NLRSIs are improved in terms of MEAN (mean value), STD (standard deviation), EN (entropy), and AG (average gradient), giving the images better detail texture, spectral characteristics, and clarity. In summary, the dual-transformation algorithm has the best overall performance and the highest quality of fused color-NLRSIs. Full article
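The core of a wavelet-IHS fusion (intensity extraction, moment matching, detail substitution in the wavelet domain, additive restoration) can be illustrated with a single-level Haar transform in plain NumPy. This is a generic, simplified sketch under our own assumptions (mean-based intensity, mean/std matching instead of full histogram matching, equal-size inputs), not the paper's algorithm; all function names are ours:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform: approximation plus three detail bands."""
    a = (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4
    h = (x[0::2, 0::2] + x[0::2, 1::2] - x[1::2, 0::2] - x[1::2, 1::2]) / 4
    v = (x[0::2, 0::2] - x[0::2, 1::2] + x[1::2, 0::2] - x[1::2, 1::2]) / 4
    d = (x[0::2, 0::2] - x[0::2, 1::2] - x[1::2, 0::2] + x[1::2, 1::2]) / 4
    return a, h, v, d

def haar_idwt2(a, h, v, d):
    """Exact inverse of haar_dwt2."""
    x = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    x[0::2, 0::2] = a + h + v + d
    x[0::2, 1::2] = a + h - v - d
    x[1::2, 0::2] = a - h + v - d
    x[1::2, 1::2] = a - h - v + d
    return x

def ihs_wavelet_fuse(rgb, pan):
    """Keep the multi-spectral approximation (color content), inject the other
    band's wavelet details (spatial structure), restore RGB additively."""
    i = rgb.mean(axis=2)                               # intensity in the IHS model
    pan_m = (pan - pan.mean()) / (pan.std() + 1e-9) * i.std() + i.mean()
    a_i, _, _, _ = haar_dwt2(i)                        # low-frequency color/intensity
    _, h, v, d = haar_dwt2(pan_m)                      # high-frequency detail
    i_new = haar_idwt2(a_i, h, v, d)
    return rgb + (i_new - i)[:, :, None]               # additive IHS substitution
```

Because the Haar pair is perfectly invertible, the fused image keeps the spectral (approximation) content of the multi-spectral input exactly while taking its fine detail from the matched second band.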
(This article belongs to the Section Sensing and Imaging)

16 pages, 5351 KB  
Article
A Novel Hybrid Retinal Blood Vessel Segmentation Algorithm for Enlarging the Measuring Range of Dual-Wavelength Retinal Oximetry
by Yongli Xian, Guangxin Zhao, Congzheng Wang, Xuejian Chen and Yun Dai
Photonics 2023, 10(7), 722; https://doi.org/10.3390/photonics10070722 - 24 Jun 2023
Cited by 5 | Viewed by 2192
Abstract
The non-invasive measurement of hemoglobin oxygen saturation (SO2) in retinal vessels is based on spectrophotometry and the absorption spectral characteristics of the tissue. Dual-wavelength retinal images are captured simultaneously via retinal oximetry, and SO2 is calculated by processing the image series and computing the optical density ratio of the two images. However, existing SO2 research has focused on the thick vessels in the high-clarity regions of retinal images, whereas the thin vessels in the low-clarity regions could provide significant information for the detection and diagnosis of neovascular diseases. To this end, we propose a novel hybrid vessel segmentation algorithm. Firstly, a median filter was employed for image denoising. Secondly, high- and low-clarity regions were segmented based on a clarity histogram. The vessels in the high-clarity regions were segmented using a Gaussian filter, a matched filter, and morphological segmentation, while the vessels in the low-clarity regions were segmented using a guided filter, matched filtering, and dynamic threshold segmentation. Finally, the results were obtained through image merging and morphological operations. The experimental results and analysis show that the proposed method can effectively segment both thick and thin vessels and can extend the measuring range of dual-wavelength retinal oximetry. Full article
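The matched-filter and dynamic-threshold steps of such a pipeline can be sketched with NumPy alone: an inverted, zero-mean Gaussian cross-profile responds strongly where a dark vessel crosses it, and a mean-plus-k·std threshold adapts to the response statistics of each region. A minimal sketch under our own parameter choices (vertical vessels only, no multi-orientation filter bank), not the authors' code:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def matched_filter_kernel(sigma=2.0, length=9):
    """Inverted zero-mean Gaussian cross-profile, replicated along the
    (here vertical) vessel direction; flat background yields zero response."""
    x = np.arange(length) - length // 2
    profile = -np.exp(-x**2 / (2 * sigma**2))
    return np.tile(profile - profile.mean(), (length, 1))

def correlate2d_valid(img, kernel):
    """'Valid'-mode 2-D correlation via sliding windows (no SciPy needed)."""
    windows = sliding_window_view(img, kernel.shape)
    return np.einsum('ijkl,kl->ij', windows, kernel)

def segment_vessels(img, sigma=2.0, length=9, k=2.0):
    """Matched filtering followed by a dynamic (mean + k*std) threshold."""
    resp = correlate2d_valid(img.astype(float), matched_filter_kernel(sigma, length))
    return resp > resp.mean() + k * resp.std()
```

The dynamic threshold is what lets the same detector operate at different local contrast levels; the paper applies separate branches to the high- and low-clarity regions (with a guided filter replacing the Gaussian pre-filter in the low-clarity branch) before merging the results.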
