Search Results (9)

Search Parameters:
Keywords = no-reference loss

21 pages, 3281 KiB  
Article
Multi-Space Feature Fusion and Entropy-Based Metrics for Underwater Image Quality Assessment
by Baozhen Du, Hongwei Ying, Jiahao Zhang and Qunxin Chen
Entropy 2025, 27(2), 173; https://doi.org/10.3390/e27020173 - 6 Feb 2025
Viewed by 988
Abstract
In marine remote sensing, underwater images play an indispensable role in ocean exploration owing to their richness in information and their intuitiveness. However, underwater images often suffer from color shifts, loss of detail, and reduced clarity, leading to a decline in image quality. It is therefore critical to develop precise and efficient methods for assessing underwater image quality. This paper proposes a no-reference multi-space feature fusion and entropy-based metric for underwater image quality assessment (MFEM-UIQA). To account for the color shifts of underwater images, a chrominance difference map is created from the chrominance space and statistical features are extracted from it. Exploiting the information representation capability of entropy, entropy-based multi-channel mutual information features are extracted to further characterize chrominance. For the luminance space, contrast features based on gamma correction and luminance uniformity features are extracted. In addition, logarithmic Gabor filtering is applied to the luminance images for subband decomposition, and entropy-based mutual information between subbands is captured. Furthermore, underwater image noise features, multi-channel dispersion information, and visibility features are extracted to jointly represent perceptual quality. Experiments demonstrate that the proposed MFEM-UIQA surpasses state-of-the-art methods.
(This article belongs to the Collection Entropy in Image Analysis)
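
As a rough illustration of the entropy-based mutual-information features mentioned in the abstract, the sketch below estimates the mutual information between two 8-bit image channels from their joint histogram. The channel pairing and the 64-bin histogram are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def mutual_information(chan_a, chan_b, bins=64):
    """Estimate mutual information (in bits) between two 8-bit image channels."""
    # Joint histogram of the two channels, normalized to a joint probability table.
    joint, _, _ = np.histogram2d(chan_a.ravel(), chan_b.ravel(),
                                 bins=bins, range=[[0, 255], [0, 255]])
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)   # marginal of channel A
    p_b = p_ab.sum(axis=0, keepdims=True)   # marginal of channel B
    nonzero = p_ab > 0
    # I(A; B) = sum over (a, b) of p(a,b) * log2( p(a,b) / (p(a) * p(b)) )
    return float(np.sum(p_ab[nonzero] * np.log2(p_ab[nonzero] / (p_a @ p_b)[nonzero])))

# Example: mutual information between the blue and green channels of a random image.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
print(mutual_information(img[..., 2], img[..., 1]))
```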

30 pages, 40714 KiB  
Article
Zero-TCE: Zero Reference Tri-Curve Enhancement for Low-Light Images
by Chengkang Yu, Guangliang Han, Mengyang Pan, Xiaotian Wu and Anping Deng
Appl. Sci. 2025, 15(2), 701; https://doi.org/10.3390/app15020701 - 12 Jan 2025
Cited by 1 | Viewed by 1311
Abstract
To address the common issues of low brightness, poor contrast, and blurred details in images captured at night, under backlight, or in adverse weather, we propose a zero-reference dual-path network based on multi-scale depth curve estimation for low-light image enhancement. Using a no-reference loss function, enhancement is cast as depth curve estimation, with three curves fitted to recover dark details: a brightness adjustment curve (LE-curve), a contrast enhancement curve (CE-curve), and a multi-scale feature fusion curve (MF-curve). We first introduce the TCE-L and TCE-C modules to improve image brightness and contrast, respectively. We then design a multi-scale feature fusion (MFF) module that integrates the original and enhanced images at multiple scales in the HSV color space, based on the brightness distribution characteristics of low-light images, yielding an optimally enhanced image that avoids overexposure and color distortion. We compare the proposed method against ten other advanced algorithms on multiple datasets with complex illumination variations, including LOL, DICM, MEF, NPE, and ExDark. Experimental results demonstrate that the proposed algorithm adapts well to images captured in low-light environments, producing enhanced images with sharp contrast, rich details, and preserved color authenticity while effectively mitigating overexposure.
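
For readers unfamiliar with zero-reference curve estimation, the sketch below applies the iterative quadratic light-enhancement curve popularized by the Zero-DCE family, LE(x) = x + a*x*(1 - x). The fixed per-pixel parameter map and the iteration count are illustrative assumptions; in the paper the LE/CE/MF curves are predicted by the network.

```python
import numpy as np

def apply_le_curve(image, alpha_maps):
    """Iteratively apply the quadratic light-enhancement curve LE(x) = x + a*x*(1 - x).

    image      : float array in [0, 1], shape (H, W, 3)
    alpha_maps : list of per-pixel curve parameters in [-1, 1], each of shape (H, W, 1)
    """
    x = image.astype(np.float64)
    for alpha in alpha_maps:
        x = x + alpha * x * (1.0 - x)   # one curve-adjustment iteration
    return np.clip(x, 0.0, 1.0)

# Example: brighten a synthetic dark image with a fixed alpha over 8 iterations.
rng = np.random.default_rng(1)
dark = rng.uniform(0.0, 0.3, size=(32, 32, 3))
alphas = [np.full((32, 32, 1), 0.6)] * 8
print(apply_le_curve(dark, alphas).mean() > dark.mean())  # enhanced image is brighter
```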

25 pages, 5085 KiB  
Article
Enhancing Underwater Images through Multi-Frequency Detail Optimization and Adaptive Color Correction
by Xiujing Gao, Junjie Jin, Fanchao Lin, Hongwu Huang, Jiawei Yang, Yongfeng Xie and Biwen Zhang
J. Mar. Sci. Eng. 2024, 12(10), 1790; https://doi.org/10.3390/jmse12101790 - 8 Oct 2024
Cited by 1 | Viewed by 3816
Abstract
This paper presents a novel underwater image enhancement method addressing the challenges of low contrast, color distortion, and detail loss prevalent in underwater photography. Unlike existing methods that may introduce color bias or blur during enhancement, our approach follows a two-pronged strategy. First, an Efficient Fusion Edge Detection (EFED) module preserves crucial edge information, ensuring detail clarity even under challenging turbidity and illumination conditions. Second, a Multi-scale Color Parallel Frequency-division Attention (MCPFA) module integrates multi-color space data with edge information. This module dynamically weights features based on their frequency-domain positions, prioritizing high-frequency details and areas affected by light attenuation. Our method further incorporates a dual multi-color space structural loss function that optimizes network performance across the RGB, Lab, and HSV color spaces. This approach improves structural alignment and minimizes the color distortion, edge artifacts, and detail loss often observed in existing techniques. Comprehensive quantitative and qualitative evaluations using both full-reference and no-reference image quality metrics demonstrate that the proposed method effectively suppresses scattering noise, corrects color deviations, and significantly enhances image details. In terms of objective evaluation metrics, our method achieves the best performance on the EUVP test set with a PSNR of 23.45, an SSIM of 0.821, and a UIQM of 3.211, indicating that it outperforms state-of-the-art methods in improving image quality.
(This article belongs to the Section Ocean Engineering)
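
The "dual multi-color space structural loss" is only described at a high level in the abstract. As a hedged sketch of the general idea, the snippet below sums a plain L1 discrepancy over RGB, Lab, and HSV using skimage conversions; the equal weighting and the L1 distance are assumptions, not the authors' formulation.

```python
import numpy as np
from skimage.color import rgb2lab, rgb2hsv

def multi_color_space_l1(pred_rgb, target_rgb):
    """Average L1 discrepancy between two images measured in RGB, Lab, and HSV.

    Both inputs are float RGB arrays in [0, 1] with shape (H, W, 3).
    Note: Lab values live on a larger numeric scale, so a real loss would rescale terms.
    """
    losses = []
    for convert in (lambda x: x, rgb2lab, rgb2hsv):   # identity keeps the RGB term
        losses.append(np.mean(np.abs(convert(pred_rgb) - convert(target_rgb))))
    return float(np.mean(losses))

# Example with two random images.
rng = np.random.default_rng(2)
a, b = rng.uniform(size=(2, 16, 16, 3))
print(multi_color_space_l1(a, b))
```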

19 pages, 1708 KiB  
Article
No-Reference Image Quality Assessment Combining Swin-Transformer and Natural Scene Statistics
by Yuxuan Yang, Zhichun Lei and Changlu Li
Sensors 2024, 24(16), 5221; https://doi.org/10.3390/s24165221 - 12 Aug 2024
Cited by 6 | Viewed by 3045
Abstract
No-reference image quality assessment aims to evaluate image quality in line with human subjective perception. Current methods struggle to attend to global and local information simultaneously and suffer information loss caused by image resizing. To address these issues, we propose a model that combines a Swin-Transformer with natural scene statistics. The model uses the Swin-Transformer to extract multi-scale features, incorporates a feature enhancement module and deformable convolution to improve feature representation and adapt to structural variations in images, and applies dual-branch attention to focus on key regions, aligning the assessment more closely with human visual perception. The natural scene statistics compensate for the information lost through image resizing. Additionally, we use a normalized loss function to accelerate convergence and improve training stability. We evaluate the model on six standard image quality assessment datasets (both synthetic and authentic) and show that it achieves advanced results across multiple datasets. Compared with the advanced DACNN method, our model achieves Spearman rank correlation coefficients of 0.922 and 0.923 on the KADID and KonIQ datasets, respectively, improvements of 1.9% and 2.4%, and it performs strongly on both synthetic and authentic scenes.
(This article belongs to the Section Sensing and Imaging)
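
Natural scene statistics features of the kind referenced above are commonly built on mean-subtracted contrast-normalized (MSCN) coefficients. The sketch below computes them with a Gaussian local mean and variance; the sigma and the stabilizing constant are chosen for illustration rather than taken from the paper.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def mscn_coefficients(gray, sigma=7 / 6, c=1e-3):
    """Mean-subtracted contrast-normalized coefficients of a grayscale image in [0, 1]."""
    gray = gray.astype(np.float64)
    mu = gaussian_filter(gray, sigma)                       # local mean
    var = gaussian_filter(gray * gray, sigma) - mu * mu     # local variance
    sigma_map = np.sqrt(np.clip(var, 0.0, None))            # local standard deviation
    return (gray - mu) / (sigma_map + c)

# NSS models then fit distributions (e.g., a generalized Gaussian) to these coefficients.
rng = np.random.default_rng(3)
coeffs = mscn_coefficients(rng.uniform(size=(64, 64)))
print(coeffs.mean(), coeffs.std())
```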

22 pages, 12643 KiB  
Article
Boosting the Performance of LLIE Methods via Unsupervised Weight Map Generation Network
by Shuichen Ji, Shaoping Xu, Nan Xiao, Xiaohui Cheng, Qiyu Chen and Xinyi Jiang
Appl. Sci. 2024, 14(12), 4962; https://doi.org/10.3390/app14124962 - 7 Jun 2024
Cited by 2 | Viewed by 1280
Abstract
Over the past decade, low-light image enhancement (LLIE) methods have advanced significantly thanks to the strength of deep learning in non-linear mapping, feature extraction, and representation. However, a universally superior method that consistently outperforms others across diverse scenarios remains elusive. The difficulty arises primarily from the inherent data bias of deep learning-based approaches, stemming from disparities in image statistical distributions between training and testing datasets. To tackle this problem, we propose an unsupervised weight map generation network that effectively integrates pre-enhanced images produced by carefully selected, complementary LLIE methods, organizing the enhancement workflow as a dual-stage pipeline. Specifically, in the preprocessing stage we apply two distinct LLIE methods, Night and PairLIE, chosen for their complementary enhancement characteristics, to the input low-light image. The resulting outputs, termed pre-enhanced images, serve as the two target images for the subsequent fusion stage. In the fusion stage, an unsupervised UNet architecture determines the optimal pixel-level weight maps for merging the pre-enhanced images, guided by a specially formulated loss function built around the no-reference naturalness image quality evaluator (NIQE). Finally, using a mixed weighting mechanism that combines the generated pixel-level local weights with image-level global empirical weights, the pre-enhanced images are fused to produce the final enhanced image. Our experiments demonstrate excellent performance across a range of datasets, surpassing various state-of-the-art methods, including the two pre-enhancement methods involved in the comparison. This performance is attributed to the harmonious integration of diverse LLIE methods, which yields robust, high-quality enhancement across scenarios. Furthermore, the approach is scalable and adaptable, remaining compatible with future advances in enhancement technology while maintaining superior performance in this rapidly evolving field.
(This article belongs to the Section Computing and Artificial Intelligence)
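
The mixed weighting described above (pixel-level weights from the network combined with image-level empirical weights) amounts to a convex blend of the two pre-enhanced images. In the sketch below the UNet-predicted map is replaced by an arbitrary array, and the mixing coefficient is an illustrative assumption.

```python
import numpy as np

def fuse_pre_enhanced(img_a, img_b, local_weight, global_weight=0.5, beta=0.7):
    """Fuse two pre-enhanced images with a blend of local and global weights.

    img_a, img_b  : float images in [0, 1], shape (H, W, 3)
    local_weight  : per-pixel weight for img_a in [0, 1], shape (H, W, 1)
                    (stands in for the map a weight-generation network would predict)
    global_weight : image-level empirical weight for img_a
    beta          : how much the local map dominates the global weight
    """
    w = beta * local_weight + (1.0 - beta) * global_weight   # mixed weighting
    return np.clip(w * img_a + (1.0 - w) * img_b, 0.0, 1.0)

# Example: fuse two random "pre-enhanced" images with a random local weight map.
rng = np.random.default_rng(4)
a, b = rng.uniform(size=(2, 32, 32, 3))
w_local = rng.uniform(size=(32, 32, 1))
print(fuse_pre_enhanced(a, b, w_local).shape)
```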

14 pages, 4602 KiB  
Article
Improved Retinex-Theory-Based Low-Light Image Enhancement Algorithm
by Jiarui Wang, Hanjia Wang, Yu Sun and Jie Yang
Appl. Sci. 2023, 13(14), 8148; https://doi.org/10.3390/app13148148 - 13 Jul 2023
Cited by 7 | Viewed by 3044
Abstract
Low-light images have long been difficult to process owing to their low contrast, heavy noise, and low brightness. This paper presents an improved low-light image enhancement method based on Retinex theory, with a network composed mainly of a Decom-Net and an Enhance-Net. Residual connections are used throughout both networks to reduce the loss of image detail, and Enhance-Net introduces a positional pixel attention mechanism that directly incorporates the global information of the image. Specifically, Decom-Net decomposes the low-light image into illumination and reflectance maps, and Enhance-Net increases the brightness of the illumination map. Finally, adaptive image fusion combines the reflectance map with the enhanced illumination map to obtain the final enhanced image. Experiments show improved results in both subjective visual quality and objective evaluation metrics. Compared with RetinexNet, the proposed method improves the full-reference metrics by 4.6% in PSNR, 1.8% in SSIM, and 10.8% in LPIPS, and achieves an average improvement of 17.3% in the no-reference metric NIQE.
(This article belongs to the Special Issue Advances in Digital Image Processing)
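
To make the Retinex pipeline above concrete, the sketch below recombines a reflectance map with a brightened illumination map. The gamma-based brightening and the simple element-wise product stand in for the paper's learned Enhance-Net and adaptive fusion, so treat them as assumptions.

```python
import numpy as np

def retinex_recompose(reflectance, illumination, gamma=0.45):
    """Recombine Retinex components after brightening the illumination map.

    reflectance  : float array in [0, 1], shape (H, W, 3)
    illumination : float array in [0, 1], shape (H, W, 1)
    gamma        : < 1 brightens dark regions (stand-in for a learned Enhance-Net)
    """
    enhanced_illumination = np.power(illumination, gamma)
    return np.clip(reflectance * enhanced_illumination, 0.0, 1.0)

# Example: a dim illumination map is lifted before recomposition.
rng = np.random.default_rng(5)
refl = rng.uniform(size=(32, 32, 3))
illum = rng.uniform(0.0, 0.2, size=(32, 32, 1))
print(retinex_recompose(refl, illum).mean() > (refl * illum).mean())
```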

31 pages, 43485 KiB  
Article
QMRNet: Quality Metric Regression for EO Image Quality Assessment and Super-Resolution
by David Berga, Pau Gallés, Katalin Takáts, Eva Mohedano, Laura Riordan-Chen, Clara Garcia-Moll, David Vilaseca and Javier Marín
Remote Sens. 2023, 15(9), 2451; https://doi.org/10.3390/rs15092451 - 6 May 2023
Cited by 2 | Viewed by 2736
Abstract
The latest advances in super-resolution (SR) have been tested on general-purpose images such as faces, landscapes, and objects, but have rarely been applied to super-resolving earth observation (EO) images. In this paper, we benchmark state-of-the-art SR algorithms on distinct EO datasets using both full-reference and no-reference image quality assessment metrics. We also propose a novel Quality Metric Regression Network (QMRNet) that can predict quality (as a no-reference metric) by training on any property of the image (e.g., its resolution or its distortions) and can also optimize SR algorithms toward a specific metric objective. This work is part of the implementation of the IQUAFLOW framework, developed for evaluating image quality, detecting and classifying objects, and compressing images in EO use cases. We integrated our experimentation and tested QMRNet on predicting properties such as blur, sharpness, SNR, RER, and ground sampling distance, obtaining validation medRs below 1.0 (out of N = 50) and recall rates above 95%. The overall benchmark shows promising results for LIIF, CAR, and MSRN, as well as the potential of QMRNet as a loss for optimizing SR predictions. Owing to its simplicity, QMRNet could also be applied to other use cases and image domains, as its architecture and data processing are fully scalable.
(This article belongs to the Special Issue Artificial Intelligence in Computational Remote Sensing)
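
The medR and recall figures above are ranking metrics. As a hedged sketch, the snippet below computes them in the usual sense (rank of the ground-truth class among predicted per-class scores, and the fraction of samples ranked within the top k); the exact evaluation protocol in IQUAFLOW may differ in detail.

```python
import numpy as np

def median_rank_and_recall(scores, true_labels, k=1):
    """Median rank of the true class and recall@k from per-class prediction scores.

    scores      : (num_samples, num_classes) array, higher means more likely
    true_labels : (num_samples,) integer class indices
    """
    # Rank 1 means the true class received the highest score.
    order = np.argsort(-scores, axis=1)
    ranks = np.array([np.where(order[i] == true_labels[i])[0][0] + 1
                      for i in range(len(true_labels))])
    return float(np.median(ranks)), float(np.mean(ranks <= k))

# Example with N = 50 quality classes, as in the abstract's evaluation setup.
rng = np.random.default_rng(6)
scores = rng.uniform(size=(200, 50))
labels = rng.integers(0, 50, size=200)
scores[np.arange(200), labels] += 1.0    # boost the true class so it ranks highest
med_r, recall_at_1 = median_rank_and_recall(scores, labels, k=1)
print(med_r, recall_at_1)
```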

18 pages, 1541 KiB  
Article
No-Reference Quality Assessment of Transmitted Stereoscopic Videos Based on Human Visual System
by Md Mehedi Hasan, Md. Ariful Islam, Sejuti Rahman, Michael R. Frater and John F. Arnold
Appl. Sci. 2022, 12(19), 10090; https://doi.org/10.3390/app121910090 - 7 Oct 2022
Cited by 2 | Viewed by 2016
Abstract
Provisioning stereoscopic 3D (S3D) video transmission services of acceptable quality in a wireless environment is an immense challenge for video service providers. Unlike for 2D videos, a widely accepted no-reference objective model for assessing transmitted 3D videos that appropriately exploits the Human Visual System (HVS) has not yet been developed. Distortions are perceived very differently in 2D and 3D video because of the sophisticated way in which the HVS reconciles dissimilarities between the two views. In real-time video transmission, viewers only have the distorted, receiver-end content of the original video acquired through the communication medium. In this paper, we propose a no-reference quality assessment method that estimates the quality of a stereoscopic 3D video based on the HVS. By evaluating the perceptual aspects and correlations of binocular visual impacts in a stereoscopic sequence, the approach lets the objective quality measure assess impairments much as a human observer viewing the same material would. First, disparity is measured and quantified by a region-based similarity matching algorithm; then, the magnitude of the edge difference is calculated to delimit the visually perceptible areas of an image. Finally, an objective metric is approximated by extracting these significant perceptual image features. Experimental analysis on standard S3D video datasets demonstrates low computational complexity for the video decoder, and comparison with state-of-the-art algorithms shows the efficiency of the proposed approach for 3D video transmission at different quantization (QP 26 and QP 32) and loss-rate (1% and 3% packet loss) settings, together with the perceptual distortion features.
(This article belongs to the Special Issue Computational Intelligence in Image and Video Analysis)
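
A rough sketch of the edge-difference step mentioned above: Sobel gradient magnitudes are computed for two grayscale views and their absolute difference highlights visually perceptible regions. The Sobel operator and the normalization are assumptions standing in for the paper's exact procedure.

```python
import numpy as np
from scipy.ndimage import sobel

def edge_difference_map(view_a, view_b):
    """Absolute difference of Sobel gradient magnitudes between two grayscale views."""
    def gradient_magnitude(img):
        gx = sobel(img.astype(np.float64), axis=1)   # horizontal gradients
        gy = sobel(img.astype(np.float64), axis=0)   # vertical gradients
        return np.hypot(gx, gy)
    diff = np.abs(gradient_magnitude(view_a) - gradient_magnitude(view_b))
    return diff / (diff.max() + 1e-12)               # normalize to [0, 1]

# Example with two slightly shifted random frames.
rng = np.random.default_rng(7)
left = rng.uniform(size=(48, 48))
right = np.roll(left, 2, axis=1)                      # crude stand-in for disparity
print(edge_difference_map(left, right).mean())
```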

24 pages, 5110 KiB  
Article
Pan-Sharpening Based on CNN+ Pyramid Transformer by Using No-Reference Loss
by Sijia Li, Qing Guo and An Li
Remote Sens. 2022, 14(3), 624; https://doi.org/10.3390/rs14030624 - 27 Jan 2022
Cited by 26 | Viewed by 3874
Abstract
The majority of existing deep learning pan-sharpening methods use simulated degraded reference data because real fusion labels are missing, which limits fusion performance. The commonly used convolutional neural network (CNN) extracts local detail well but may lose important global contextual characteristics with long-range dependencies during fusion. To address these issues and to fuse high-quality spatial and spectral information from the original panchromatic (PAN) and multispectral (MS) images, this paper presents a novel pan-sharpening method that combines a CNN with a pyramid Transformer network trained with a no-reference loss (CPT-noRef). Specifically, the Transformer serves as the main fusion architecture to supply global features, local features from a shallow CNN are combined with them, and multi-scale features from the pyramid structure added to the Transformer encoder are learned simultaneously. The loss function learns the spatial information directly from the PAN image and the spectral information from the MS image, which matches the theory of pan-sharpening and lets the network control the spatial and spectral losses simultaneously. Both training and testing are performed on real data, so simulated degraded reference data are no longer needed, in contrast to most existing deep learning fusion methods. The proposed CPT-noRef network effectively mitigates the large data requirements of the Transformer and extracts abundant image features for fusion. To assess the effectiveness and generality of the fusion model, we trained and evaluated it on WorldView-2 (WV-2) and Gaofen-1 (GF-1) data and compared it with other representative deep learning pan-sharpening methods in terms of both subjective visual quality and objective indices. The results show that CPT-noRef offers superior performance in both qualitative and quantitative evaluations compared with existing state-of-the-art methods. In addition, our method shows the strongest generalization capability when Pleiades and WV-2 images are tested on the network trained with GF-1 data. The proposed no-reference loss function greatly enhances the spatial and spectral information of the fused image with good performance and robustness.
(This article belongs to the Special Issue Pan-Sharpening Methods for Remotely Sensed Images)
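
The no-reference loss idea above can be summarized as a spectral term against the low-resolution MS image plus a spatial term against the PAN image. The block-averaging downsampling, the gradient operator, and the equal weighting in the sketch below are illustrative assumptions rather than the CPT-noRef formulation.

```python
import numpy as np

def no_reference_pansharpen_loss(fused, ms, pan, scale=4, spectral_w=1.0, spatial_w=1.0):
    """Spectral + spatial no-reference loss for a pan-sharpened image.

    fused : (H, W, B) fused high-resolution multispectral image
    ms    : (H // scale, W // scale, B) original low-resolution MS image
    pan   : (H, W) original panchromatic image
    """
    h, w, b = fused.shape
    # Spectral term: block-average the fused image back to MS resolution.
    downsampled = fused.reshape(h // scale, scale, w // scale, scale, b).mean(axis=(1, 3))
    spectral = np.mean(np.abs(downsampled - ms))
    # Spatial term: match gradients of the band-averaged fused image to the PAN gradients.
    intensity = fused.mean(axis=2)
    grad = lambda x: np.abs(np.diff(x, axis=0))[:, :-1] + np.abs(np.diff(x, axis=1))[:-1, :]
    spatial = np.mean(np.abs(grad(intensity) - grad(pan)))
    return spectral_w * spectral + spatial_w * spatial

# Example with synthetic 4-band data.
rng = np.random.default_rng(8)
fused = rng.uniform(size=(64, 64, 4))
ms = rng.uniform(size=(16, 16, 4))
pan = rng.uniform(size=(64, 64))
print(no_reference_pansharpen_loss(fused, ms, pan))
```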
