Search Results (12)

Search Parameters:
Keywords = NIQE optimization

19 pages, 3619 KiB  
Article
An Adaptive Underwater Image Enhancement Framework Combining Structural Detail Enhancement and Unsupervised Deep Fusion
by Semih Kahveci and Erdinç Avaroğlu
Appl. Sci. 2025, 15(14), 7883; https://doi.org/10.3390/app15147883 - 15 Jul 2025
Viewed by 234
Abstract
The underwater environment severely degrades image quality by absorbing and scattering light. This causes significant challenges, including non-uniform illumination, low contrast, color distortion, and blurring. These degradations compromise the performance of critical underwater applications, including water quality monitoring, object detection, and identification. To address these issues, this study proposes a detail-oriented hybrid framework for underwater image enhancement that synergizes the strengths of traditional image processing with the powerful feature extraction capabilities of unsupervised deep learning. Our framework introduces a novel multi-scale detail enhancement unit to accentuate structural information, followed by a Latent Low-Rank Representation (LatLRR)-based simplification step. This unique combination effectively suppresses common artifacts like oversharpening, spurious edges, and noise by decomposing the image into meaningful subspaces. The principal structural features are then optimally combined with a gamma-corrected luminance channel using an unsupervised MU-Fusion network, achieving a balanced optimization of both global contrast and local details. The experimental results on the challenging Test-C60 and OceanDark datasets demonstrate that our method consistently outperforms state-of-the-art fusion-based approaches, achieving average improvements of 7.5% in UIQM, 6% in IL-NIQE, and 3% in AG. Wilcoxon signed-rank tests confirm that these performance gains are statistically significant (p < 0.01). Consequently, the proposed method significantly mitigates prevalent issues such as color aberration, detail loss, and artificial haze, which are frequently encountered in existing techniques. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
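The gamma-corrected luminance channel used in the fusion step above can be sketched in a few lines; the Rec. 601 luma weights and the gamma value below are illustrative assumptions, since the abstract does not specify them:

```python
import numpy as np

def gamma_corrected_luminance(rgb: np.ndarray, gamma: float = 0.7) -> np.ndarray:
    """Extract the luminance of an RGB image (values in [0, 1]) and
    brighten it with a power-law (gamma) correction."""
    # Rec. 601 luma weights; assumed here, not specified by the paper.
    luma = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return np.clip(luma, 0.0, 1.0) ** gamma

# A gamma below 1 lifts dark regions, as needed for dim underwater scenes.
img = np.full((4, 4, 3), 0.25)          # uniformly dark test image
lifted = gamma_corrected_luminance(img, gamma=0.5)
```

A gamma of 0.5 maps the 0.25 luminance to 0.5, brightening shadows while leaving highlights nearly untouched.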

15 pages, 3474 KiB  
Article
New Underwater Image Enhancement Algorithm Based on Improved U-Net
by Sisi Zhu, Zaiming Geng, Yingjuan Xie, Zhuo Zhang, Hexiong Yan, Xuan Zhou, Hao Jin and Xinnan Fan
Water 2025, 17(6), 808; https://doi.org/10.3390/w17060808 - 12 Mar 2025
Viewed by 1430
Abstract
(1) Objective: As light propagates through water, it undergoes significant attenuation and scattering, causing underwater images to experience color distortion and exhibit a bluish or greenish tint. Additionally, suspended particles in the water further degrade image quality. This paper proposes an improved U-Net network model for underwater image enhancement to generate high-quality images. (2) Method: Instead of incorporating additional complex modules into enhancement networks, we opted to simplify the classic U-Net architecture. Specifically, we replaced the standard convolutions in U-Net with our self-designed efficient basic block, which integrates a simplified channel attention mechanism. Moreover, we employed Layer Normalization to enhance the capability of training with a small number of samples and used the GELU activation function to achieve additional benefits in image denoising. Furthermore, we introduced the SK fusion module into the network to aggregate feature information, replacing traditional concatenation operations. In the experimental section, we used the “Underwater ImageNet” dataset from “Enhancing Underwater Visual Perception (EUVP)” for training and testing. EUVP, established by Islam et al., is a large-scale dataset comprising paired images (high-quality clear images and low-quality blurry images) as well as unpaired underwater images. (3) Results: We compared our proposed method with several high-performing traditional algorithms and deep learning-based methods. The traditional algorithms include He, UDCP, ICM, and ULAP, while the deep learning-based methods include CycleGAN, UGAN, UGAN-P, and FUnIE-GAN. The results demonstrate that our algorithm exhibits outstanding competitiveness on the Underwater ImageNet dataset. Compared to the currently optimal lightweight model, FUnIE-GAN, our method reduces the parameter count by a factor of 0.969 and decreases the number of floating-point operations (FLOPs) by more than half.
In terms of image quality, our approach achieves a minimal UCIQE reduction of only 0.008 while improving the NIQE by 0.019 compared to state-of-the-art (SOTA) methods. Finally, extensive ablation experiments validate the feasibility of our designed network. (4) Conclusions: The underwater image enhancement algorithm proposed in this paper significantly reduces model size and accelerates inference speed while maintaining high processing performance, demonstrating strong potential for practical applications. Full article
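The simplified channel attention mechanism named in the method can be illustrated generically: global average pooling yields one descriptor per channel, a learned projection turns it into per-channel gains, and the feature map is rescaled. The matrix `w` below stands in for the learned weights; the paper's exact block design may well differ:

```python
import numpy as np

def simplified_channel_attention(x: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Reweight feature channels: global average pooling produces one
    descriptor per channel, a learned linear map (here the matrix w)
    converts it into per-channel gains, and the input is rescaled."""
    # x: (C, H, W) feature map, w: (C, C) learned projection.
    pooled = x.mean(axis=(1, 2))        # (C,) global context vector
    gains = w @ pooled                  # (C,) per-channel attention
    return x * gains[:, None, None]     # broadcast the rescaling

C, H, W = 3, 8, 8
feat = np.ones((C, H, W))
attn = simplified_channel_attention(feat, np.eye(C))  # identity weights: no change
```

With identity weights the block is a no-op; training would learn `w` so that informative channels are amplified.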

28 pages, 10234 KiB  
Article
Estimating QoE from Encrypted Video Conferencing Traffic
by Michael Sidorov, Raz Birman, Ofer Hadar and Amit Dvir
Sensors 2025, 25(4), 1009; https://doi.org/10.3390/s25041009 - 8 Feb 2025
Cited by 1 | Viewed by 993
Abstract
Traffic encryption is vital for internet security but complicates analytical applications like video delivery optimization or quality of experience (QoE) estimation, which often rely on clear text data. While many models address the problem of QoE prediction in video streaming, the video conferencing (VC) domain remains underexplored despite rising demand for these applications. Existing models often provide low-resolution predictions, categorizing QoE into broad classes such as “high” or “low”, rather than providing precise, continuous predictions. Moreover, most models focus on clear-text rather than encrypted traffic. This paper addresses these challenges by analyzing a large dataset of Zoom sessions and training five classical machine learning (ML) models and two custom deep neural networks (DNNs) to predict three QoE indicators: frames per second (FPS), resolution (R), and the naturalness image quality evaluator (NIQE). The models achieve mean error rates of 8.27%, 7.56%, and 2.08% for FPS, R, and NIQE, respectively, using a 10-fold cross-validation technique. This approach advances QoE assessment for encrypted traffic in VC applications. Full article
(This article belongs to the Special Issue Machine Learning in Image/Video Processing and Sensing)

16 pages, 1981 KiB  
Article
Optimizing Natural Image Quality Evaluators for Quality Measurement in CT Scan Denoising
by Rudy Gunawan, Yvonne Tran, Jinchuan Zheng, Hung Nguyen and Rifai Chai
Computers 2025, 14(1), 18; https://doi.org/10.3390/computers14010018 - 7 Jan 2025
Viewed by 1413
Abstract
Evaluating the results of image denoising algorithms in Computed Tomography (CT) scans typically involves several key metrics that assess noise reduction while preserving essential details. Full-Reference (FR) quality evaluators are popular for judging image quality in CT-scan denoising, whereas little has been reported on the use of Blind/No-Reference (NR) quality evaluators in the medical imaging area. This paper applies the Natural Image Quality Evaluator (NIQE), commonly used to assess photographic images, to CT scans and provides an extensive assessment of the optimum NIQE settings. The evaluation was performed on a library of good-quality images, most of which also belong to the Convolutional Neural Network (CNN) training dataset, tested against the testing dataset and a new dataset; it identifies an optimum patch size and contrast levels suitable for the task. These results indicate that the NIQE can serve as a new option for evaluating denoised image quality, whether to measure improvement or to compare quality between CNN models. Full article
(This article belongs to the Special Issue Machine and Deep Learning in the Health Domain 2024)
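NIQE's natural-scene-statistics features are built on mean-subtracted contrast-normalized (MSCN) coefficients, so the patch size and contrast handling the paper tunes act on this normalization. A minimal sketch, using a 3x3 box window in place of NIQE's Gaussian window (an assumption made for brevity):

```python
import numpy as np

def mscn(image: np.ndarray, eps: float = 1.0) -> np.ndarray:
    """Mean-subtracted contrast-normalized (MSCN) coefficients, the
    local normalization underlying NIQE's natural-scene statistics.
    A 3x3 box window stands in for NIQE's Gaussian window here."""
    pad = np.pad(image.astype(float), 1, mode="edge")
    # all 3x3 neighborhoods as a (H, W, 3, 3) view of the padded image
    windows = np.lib.stride_tricks.sliding_window_view(pad, (3, 3))
    mu = windows.mean(axis=(2, 3))      # local mean
    sigma = windows.std(axis=(2, 3))    # local contrast
    return (image - mu) / (sigma + eps)

coeffs = mscn(np.random.default_rng(0).uniform(0, 255, (32, 32)))
```

NIQE then fits a multivariate Gaussian to features of these coefficients and scores an image by its distance from a model fitted on pristine images.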

12 pages, 7530 KiB  
Article
Wavefront Correction for Extended Sources Imaging Based on a 97-Element MEMS Deformable Mirror
by Huizhen Yang, Lingzhe Tang, Zhaojun Yan, Peng Chen, Wenjie Yang, Xianshuo Li and Yongqi Ge
Micromachines 2025, 16(1), 50; https://doi.org/10.3390/mi16010050 - 31 Dec 2024
Cited by 1 | Viewed by 3623
Abstract
Adaptive optics (AO) systems are capable of correcting wavefront aberrations caused by transmission media or defects in optical systems. The deformable mirror (DM) plays a crucial role as a component of the adaptive optics system. In this study, our focus is on analyzing the ability of a 97-element MEMS (Micro-Electro-Mechanical System) DM to correct blurred images of extended sources affected by atmospheric turbulence. The RUN optimizer is employed as the control method to evaluate the correction capability of the DM through simulations and physical experiments. Simulation results demonstrate that within 100 iterations, both the normalized gray variance and Strehl Ratio can converge, leading to an improvement in image quality by approximately 30%. In physical experiments, we observe an increase in the normalized gray variance (NGV) from 0.53 to 0.97 and in the Natural Image Quality Evaluator (NIQE) score from 15.35 to 19.73, representing an overall improvement in image quality of about 28%. These findings can offer theoretical and technical support for applying MEMS DMs in correcting imaging issues related to extended sources degraded by wavefront aberrations. Full article
(This article belongs to the Special Issue Integrated Photonics and Optoelectronics, 2nd Edition)
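The normalized gray variance used as the correction feedback above is essentially a sharpness measure: sharper images show stronger intensity fluctuations. A minimal sketch, where normalizing against a reference frame is one common choice rather than the paper's stated definition:

```python
import numpy as np

def gray_variance(image: np.ndarray) -> float:
    """Gray variance: a simple image-sharpness metric. Sharper (better
    corrected) images have stronger intensity fluctuations."""
    return float(np.var(image))

def normalized_gray_variance(image: np.ndarray, reference: np.ndarray) -> float:
    """NGV as a ratio against a reference (e.g., best-focus) frame.
    The paper's exact normalization is not given; this is one option."""
    return gray_variance(image) / gray_variance(reference)

sharp = np.tile([0.0, 1.0], (8, 8))      # high-contrast checker-like pattern
blurred = np.full((8, 16), 0.5)          # fully blurred: zero variance
```

A stochastic optimizer such as RUN can then drive the DM actuators to maximize this scalar without any wavefront sensor.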

25 pages, 14192 KiB  
Article
A Low-Cost Remotely Configurable Electronic Trap for Insect Pest Dataset Generation
by Fernando León-García, Jose M. Palomares, Meelad Yousef-Yousef, Enrique Quesada-Moraga and Cristina Martínez-Ruedas
Appl. Sci. 2024, 14(22), 10307; https://doi.org/10.3390/app142210307 - 9 Nov 2024
Viewed by 1683
Abstract
The precise monitoring of insect pest populations is the foundation of Integrated Pest Management (IPM) for pests of plants, humans, and animals. Digital technologies can be employed to address key challenges, such as reducing the IPM workload and enhancing decision-making accuracy. In this study, digital technologies are used to deploy an automated trap for capturing images of insects and generating centralized repositories on a server. Subsequently, advanced computational models can be applied to analyze the collected data. The study provides a detailed description of the prototype, designed with a particular focus on its remote reconfigurability to optimize repository quality, and of the server, accessible via an API interface to enhance system interoperability and scalability. Quality metrics are presented through an experimental study conducted on the constructed demonstrator, emphasizing trap reliability, stability, performance, and energy consumption, along with an objective analysis of image quality using metrics such as RMS contrast, image entropy, an image sharpness metric, the Natural Image Quality Evaluator (NIQE), and the Modulation Transfer Function (MTF). This study contributes to the current knowledge of automated insect pest monitoring techniques and offers advanced solutions for existing systems. Full article

22 pages, 12643 KiB  
Article
Boosting the Performance of LLIE Methods via Unsupervised Weight Map Generation Network
by Shuichen Ji, Shaoping Xu, Nan Xiao, Xiaohui Cheng, Qiyu Chen and Xinyi Jiang
Appl. Sci. 2024, 14(12), 4962; https://doi.org/10.3390/app14124962 - 7 Jun 2024
Cited by 2 | Viewed by 1280
Abstract
Over the past decade, significant advancements have been made in low-light image enhancement (LLIE) methods due to the robust capabilities of deep learning in non-linear mapping, feature extraction, and representation. However, the pursuit of a universally superior method that consistently outperforms others across diverse scenarios remains challenging. This challenge primarily arises from the inherent data bias in deep learning-based approaches, stemming from disparities in image statistical distributions between training and testing datasets. To tackle this problem, we propose an unsupervised weight map generation network aimed at effectively integrating pre-enhanced images generated from carefully selected complementary LLIE methods. Our ultimate goal is to enhance the overall enhancement performance by leveraging these pre-enhanced images, organizing the enhancement workflow into a dual-stage execution paradigm. More specifically, in the preprocessing stage, we initially employ two distinct LLIE methods, namely Night and PairLIE, chosen specifically for their complementary enhancement characteristics, to process the given input low-light image. The resultant outputs, termed pre-enhanced images, serve as dual target images for fusion in the subsequent image fusion stage. Subsequently, at the fusion stage, we utilize an unsupervised UNet architecture to determine the optimal pixel-level weight maps for merging the pre-enhanced images. This process is directed by a specially formulated loss function in conjunction with a no-reference image quality algorithm, the naturalness image quality evaluator (NIQE). Finally, based on a mixed weighting mechanism that combines generated pixel-level local weights with image-level global empirical weights, the pre-enhanced images are fused to produce the final enhanced image.
Our experimental findings demonstrate exceptional performance across a range of datasets, surpassing various state-of-the-art methods, including two pre-enhancement methods, involved in the comparison. This outstanding performance is attributed to the harmonious integration of diverse LLIE methods, which yields robust and high-quality enhancement outcomes across various scenarios. Furthermore, our approach exhibits scalability and adaptability, ensuring compatibility with future advancements in enhancement technologies while maintaining superior performance in this rapidly evolving field. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
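The mixed weighting mechanism described above (pixel-level local weights combined with an image-level global weight) can be sketched as follows; the balance factor `alpha` and the constant weight map are illustrative placeholders, not values from the paper:

```python
import numpy as np

def fuse(pre_a: np.ndarray, pre_b: np.ndarray,
         local_w: np.ndarray, global_w: float = 0.5,
         alpha: float = 0.8) -> np.ndarray:
    """Blend two pre-enhanced images with a mixed weighting scheme:
    a pixel-level weight map (as the paper's UNet would predict) is
    combined with a scalar image-level weight; alpha balances the two."""
    w = alpha * local_w + (1 - alpha) * global_w   # mixed weight in [0, 1]
    return w[..., None] * pre_a + (1 - w[..., None]) * pre_b

a = np.zeros((4, 4, 3))                   # stand-ins for the two
b = np.ones((4, 4, 3))                    # pre-enhanced images
half = np.full((4, 4), 0.5)               # stand-in for a predicted weight map
out = fuse(a, b, half)
```

In the actual method the map `local_w` would come from the unsupervised UNet, trained so that the fused result minimizes a NIQE-driven loss.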

14 pages, 5302 KiB  
Article
Low-Light Image Enhancement by Combining Transformer and Convolutional Neural Network
by Nianzeng Yuan, Xingyun Zhao, Bangyong Sun, Wenjia Han, Jiahai Tan, Tao Duan and Xiaomei Gao
Mathematics 2023, 11(7), 1657; https://doi.org/10.3390/math11071657 - 30 Mar 2023
Cited by 8 | Viewed by 3748
Abstract
In low-light imaging environments, the insufficient reflected light from objects often results in unsatisfactory images with degradations of low contrast, noise artifacts, or color distortion. The captured low-light images usually lead to poor visual perception quality for color-deficient or normal observers. To address the above problems, we propose an end-to-end low-light image enhancement network combining a transformer and a CNN (convolutional neural network) to restore normal-light images. Specifically, the proposed enhancement network is designed as a U-shape structure with several functional fusion blocks. Each fusion block includes a transformer stem and a CNN stem, and those two stems collaborate to accurately extract local and global features. In this way, the transformer stem is responsible for efficiently learning global semantic information and capturing long-term dependencies, while the CNN stem is good at learning local features and focusing on detailed features. Thus, the proposed enhancement network can accurately capture the comprehensive semantic information of low-light images, which significantly contributes to recovering normal-light images. The proposed method is compared with current popular algorithms quantitatively and qualitatively. Subjectively, our method significantly improves image brightness, suppresses image noise, and maintains texture details and color information. For objective metrics such as peak signal-to-noise ratio (PSNR), structural similarity (SSIM), learned perceptual image patch similarity (LPIPS), DeltaE, and NIQE, our method improves the optimal values by 1.73 dB, 0.05, 0.043, 0.7939, and 0.6906, respectively, compared with other methods. The experimental results show that our proposed method can effectively solve the problems of underexposure, noise interference, and color inconsistency in micro-optical images, and has practical application value. Full article
(This article belongs to the Special Issue Advances in Computer Vision and Machine Learning)

15 pages, 2662 KiB  
Article
AP Shadow Net: A Remote Sensing Shadow Removal Network Based on Atmospheric Transport and Poisson’s Equation
by Fan Li, Zhiyi Wang and Guoliang He
Entropy 2022, 24(9), 1301; https://doi.org/10.3390/e24091301 - 14 Sep 2022
Cited by 4 | Viewed by 2684
Abstract
Shadow is one of the fundamental features of remote sensing images and can cause loss of, or interference with, the target data. As a result, shadow detection and removal have become a focus of current research, owing to the complicated background information involved. In this paper, a model combining the Atmospheric Transport Model (ATM) with the Poisson equation, AP ShadowNet, is proposed for unsupervised shadow detection and removal in remote sensing images. The network consists of a preprocessing network based on the ATM, A Net, and a network based on the Poisson equation, P Net. First, a mapping between shadowed and unshaded areas is generated by the ATM. The brightened image then undergoes adversarial discrimination in the P Net. Finally, the reconstructed image is optimized for color consistency and edge transitions using the Poisson equation. At present, most neural-network-based shadow removal models are heavily data-driven; the model proposed here frees unsupervised shadow detection and removal from the data-source restrictions of the remote sensing images themselves. Verification of shadow removal with our model shows satisfying results from both qualitative and quantitative perspectives. Qualitatively, our results are notably strong in tone consistency and in the removal of detailed shadows. Quantitatively, we adopt the no-reference evaluation indicators gradient structure similarity (NRSS) and the Natural Image Quality Evaluator (NIQE). Considering additional factors such as inference speed and memory occupation, the model stands out among current algorithms. Full article

22 pages, 11602 KiB  
Article
Low Light Image Enhancement Algorithm Based on Detail Prediction and Attention Mechanism
by Yanming Hui, Jue Wang, Ying Shi and Bo Li
Entropy 2022, 24(6), 815; https://doi.org/10.3390/e24060815 - 11 Jun 2022
Cited by 11 | Viewed by 3453
Abstract
Most low-light image enhancement (LLIE) algorithms focus solely on enhancing image brightness and ignore the extraction of image details, losing much of the information that reflects image semantics (edges, textures, and shape features) and resulting in image distortion. In this paper, the DELLIE algorithm is proposed, a deep-learning-based algorithmic framework that focuses on the extraction and fusion of image detail features. Unlike existing methods, basic enhancement preprocessing is performed first, and then the detail enhancement components are obtained using the proposed detail component prediction model. The V channel is then decomposed into a reflectance map and an illumination map by the proposed decomposition network, where the enhancement component is used to enhance the reflectance map. Next, the S and H channels are nonlinearly constrained using an improved adaptive loss function, and an attention mechanism is introduced into the proposed algorithm. Finally, the three channels are fused to obtain the final enhancement effect. The experimental results show that, compared with current mainstream LLIE algorithms, the proposed DELLIE algorithm can extract and recover image detail information well while improving luminance, and the PSNR, SSIM, and NIQE are improved by 1.85%, 4.00%, and 2.43% on average on recognized datasets. Full article
(This article belongs to the Special Issue Information Theory in Signal Processing and Image Processing)

12 pages, 2678 KiB  
Article
Application of Fast Non-Local Means Algorithm for Noise Reduction Using Separable Color Channels in Light Microscopy Images
by Seong-Hyeon Kang and Ji-Youn Kim
Int. J. Environ. Res. Public Health 2021, 18(6), 2903; https://doi.org/10.3390/ijerph18062903 - 12 Mar 2021
Cited by 12 | Viewed by 2728
Abstract
The purpose of this study is to evaluate the various control parameters of a modeled fast non-local means (FNLM) noise reduction algorithm which can separate color channels in light microscopy (LM) images. To achieve this objective, the tendency of image characteristics with changes in parameters, such as smoothing factors and kernel and search window sizes for the FNLM algorithm, was analyzed. To quantitatively assess image characteristics, the coefficient of variation (COV), blind/referenceless image spatial quality evaluator (BRISQUE), and natural image quality evaluator (NIQE) were employed. When high smoothing factors and large search window sizes were applied, excellent COV but unsatisfactory BRISQUE and NIQE results were obtained. In addition, all three evaluation parameters improved as the kernel size increased. However, the kernel and search window sizes of the FNLM algorithm were shown to be dependent on the image processing time (time resolution). In conclusion, this work has demonstrated that the FNLM algorithm can effectively reduce noise in LM images, and parameter optimization is important for its appropriate application. Full article
(This article belongs to the Section Digital Health)
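The role of the FNLM smoothing factor can be illustrated with a deliberately minimal non-local means that weights pixels by intensity similarity; real FNLM compares patches within a search window and processes each color channel separately, so this is only a sketch of the parameter's effect:

```python
import numpy as np

def nlm_channel(ch: np.ndarray, h: float) -> np.ndarray:
    """Minimal non-local means on one channel: every output pixel is a
    weighted average of all pixels, weighted by intensity similarity.
    The smoothing factor h controls how quickly weights fall off."""
    flat = ch.ravel().astype(float)
    d2 = (flat[:, None] - flat[None, :]) ** 2      # pairwise squared differences
    w = np.exp(-d2 / (h * h))                      # similarity weights
    out = (w @ flat) / w.sum(axis=1)               # normalized weighted average
    return out.reshape(ch.shape)

noisy = np.array([[0.0, 0.1], [0.9, 1.0]])
smooth_small_h = nlm_channel(noisy, h=0.05)   # small h: little smoothing
smooth_large_h = nlm_channel(noisy, h=10.0)   # large h: near-uniform average
```

This mirrors the trade-off reported above: a large smoothing factor drives pixel variation (and thus COV) down, but over-smoothing is exactly what degrades BRISQUE and NIQE.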

23 pages, 1453 KiB  
Article
An Underwater Image Enhancement Algorithm Based on MSR Parameter Optimization
by Kai Hu, Yanwen Zhang, Feiyu Lu, Zhiliang Deng and Yunping Liu
J. Mar. Sci. Eng. 2020, 8(10), 741; https://doi.org/10.3390/jmse8100741 - 25 Sep 2020
Cited by 22 | Viewed by 4207
Abstract
The quality of underwater images is often affected by the absorption of light and the scattering and diffusion of floating objects. Therefore, underwater image enhancement algorithms have been widely studied. In this area, algorithms based on Multi-Scale Retinex (MSR) represent an important research direction. Although these algorithms can improve the visual quality of underwater images to some extent, their enhancement effect is limited because their parameters cannot adapt to different underwater environments. To solve this problem, based on classical MSR, we propose an underwater image enhancement algorithm with MSR parameter optimization (MSR-PO), which uses a no-reference image quality assessment (NR-IQA) index as the optimization criterion. First, guided by a large number of experiments, we choose the Natural Image Quality Evaluator (NIQE) as the NR-IQA index and determine the appropriate MSR parameters as the optimization objects. Then, we use the Gravitational Search Algorithm (GSA) to optimize the MSR-based underwater image enhancement algorithm against the NIQE index. The experimental results show that this algorithm has an excellent adaptive ability to environmental changes. Full article
(This article belongs to the Special Issue Signals and Images in Sea Technologies)
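The MSR core that MSR-PO tunes can be sketched as the average log-ratio between the image and Gaussian-blurred illumination estimates at several scales; the sigma values below are conventional defaults, not the NIQE-optimized parameters the paper searches for with the GSA:

```python
import numpy as np

def gaussian_blur(img: np.ndarray, sigma: float) -> np.ndarray:
    """Separable Gaussian blur implemented with 1-D convolutions."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    # convolve rows, then columns
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, blurred)

def msr(img: np.ndarray, sigmas=(15, 80, 250)) -> np.ndarray:
    """Multi-Scale Retinex: average the log-ratio of the image to its
    Gaussian-blurred illumination estimate over several scales."""
    img = img.astype(float) + 1.0          # offset to avoid log(0)
    out = np.zeros_like(img)
    for s in sigmas:
        out += np.log(img) - np.log(gaussian_blur(img, s) + 1e-6)
    return out / len(sigmas)

flat = np.full((32, 32), 50.0)             # small demo image, small scales
result = msr(flat, sigmas=(1, 2, 4))
```

In MSR-PO, the GSA would search over the sigma values (and any weighting), scoring each candidate by the NIQE of the enhanced output.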
