Search Results (32)

Search Parameters:
Keywords = saliency and distortion

32 pages, 4696 KB  
Article
GATF-PCQA: A Graph Attention Transformer Fusion Network for Point Cloud Quality Assessment
by Abdelouahed Laazoufi, Mohammed El Hassouni and Hocine Cherifi
J. Imaging 2025, 11(11), 387; https://doi.org/10.3390/jimaging11110387 - 1 Nov 2025
Abstract
Point cloud quality assessment remains a critical challenge due to the high dimensionality and irregular structure of 3D data, as well as the need to align objective predictions with human perception. To address this, we propose a novel graph-based learning architecture that integrates perceptual features with advanced graph neural networks. Our method consists of four main stages. First, key perceptual features, including curvature, saliency, and color, are extracted to capture relevant geometric and visual distortions. Second, a graph-based representation of the point cloud is created from these features, where nodes represent perceptual clusters and weighted edges encode their feature similarities, yielding a structured adjacency matrix. Third, a novel Graph Attention Network Transformer Fusion (GATF) module dynamically refines the importance of these features and generates a unified, view-specific representation. Finally, a Graph Convolutional Network (GCN) regresses the fused features into a final quality score. We validate our approach on three benchmark datasets: ICIP2020, WPC, and SJTU-PCQA. Experimental results demonstrate that our method achieves high correlation with human subjective scores, outperforming existing state-of-the-art metrics by effectively modeling the perceptual mechanisms of quality judgment.
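To make the graph construction step concrete, here is a minimal Python sketch that builds a weighted adjacency matrix from per-cluster perceptual feature vectors via a Gaussian similarity kernel; the kernel choice and bandwidth are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def build_adjacency(features: np.ndarray, sigma: float = 1.0) -> np.ndarray:
    """Weighted adjacency from pairwise feature similarity (Gaussian kernel).

    features: (N, D) array, one row per perceptual cluster
    (e.g., mean curvature, saliency, and color per cluster).
    """
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    adj = np.exp(-d2 / (2.0 * sigma**2))
    np.fill_diagonal(adj, 0.0)  # no self-loops
    return adj
```
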
30 pages, 10206 KB  
Article
Evaluation and Improvement of Image Aesthetics Quality via Composition and Similarity
by Xinyu Cui, Guoqing Tu, Guoying Wang, Senjun Zhang and Lufeng Mo
Sensors 2025, 25(18), 5919; https://doi.org/10.3390/s25185919 - 22 Sep 2025
Viewed by 544
Abstract
The evaluation and enhancement of image aesthetics play a pivotal role in the development of visual media, impacting fields including photography, design, and computer vision. Composition, a key factor shaping visual aesthetics, significantly influences an image’s vividness and expressiveness. However, existing image optimization methods face practical challenges: compression-induced distortion, imprecise object extraction, and cropping-caused unnatural proportions or content loss. To tackle these issues, this paper proposes an image aesthetic evaluation with composition and similarity (IACS) method that harmonizes composition aesthetics and image similarity through a unified function. When evaluating composition aesthetics, the method calculates the distance between the main semantic line (or salient object) and the nearest rule-of-thirds line or central line. For images featuring prominent semantic lines, a modified Hough transform is utilized to detect the main semantic line, while for images containing salient objects, a salient object detection method based on luminance channel salience features (LCSF) is applied to determine the salient object region. In evaluating similarity, edge similarity measured by the Canny operator is combined with the structural similarity index (SSIM). Furthermore, we introduce a Framework for Image Aesthetic Evaluation with Composition and Similarity-Based Optimization (FIACSO), which uses semantic segmentation and generative adversarial networks (GANs) to optimize composition while preserving the original content. Compared with prior approaches, the proposed method improves both the aesthetic appeal and fidelity of optimized images. Subjective evaluation involving 30 participants further confirms that FIACSO outperforms existing methods in overall aesthetics, compositional harmony, and content integrity. Beyond methodological contributions, this study also offers practical value: it supports photographers in refining image composition without losing context, assists designers in creating balanced layouts with minimal distortion, and provides computational tools to enhance the efficiency and quality of visual media production.
(This article belongs to the Special Issue Recent Innovations in Computational Imaging and Sensing)
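The composition and similarity terms lend themselves to a short sketch. The Python fragment below scores the distance of a salient-object centroid to the nearest rule-of-thirds line and blends SSIM with Canny edge overlap; the blending weight and Canny thresholds are illustrative assumptions (the paper's unified function is not reproduced here), and both images are assumed to be same-size 8-bit grayscale.

```python
import cv2
import numpy as np
from skimage.metrics import structural_similarity

def thirds_distance(cx: float, cy: float, w: int, h: int) -> float:
    """Normalized distance from a salient-object centroid to the
    nearest rule-of-thirds line (smaller is better composed)."""
    dx = min(abs(cx - w / 3), abs(cx - 2 * w / 3)) / w
    dy = min(abs(cy - h / 3), abs(cy - 2 * h / 3)) / h
    return min(dx, dy)

def similarity_score(orig: np.ndarray, edited: np.ndarray, alpha: float = 0.5) -> float:
    """Blend of SSIM and Canny edge overlap, one plausible reading of
    combining structural and edge similarity."""
    ssim = structural_similarity(orig, edited)
    e1 = cv2.Canny(orig, 100, 200) > 0
    e2 = cv2.Canny(edited, 100, 200) > 0
    edge = (e1 & e2).sum() / max((e1 | e2).sum(), 1)  # IoU of edge maps
    return alpha * ssim + (1 - alpha) * edge
```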

39 pages, 684 KB  
Review
Targeting the Roots of Psychosis: The Role of Aberrant Salience
by Giuseppe Marano, Francesco Maria Lisci, Greta Sfratta, Ester Maria Marzo, Francesca Abate, Gianluca Boggio, Gianandrea Traversi, Osvaldo Mazza, Roberto Pola, Eleonora Gaetani and Marianna Mazza
Pediatr. Rep. 2025, 17(3), 63; https://doi.org/10.3390/pediatric17030063 - 4 Jun 2025
Cited by 1 | Viewed by 2676
Abstract
Aberrant salience, defined as the inappropriate attribution of significance to neutral stimuli, is increasingly recognized as a critical mechanism in the onset of psychotic disorders. In young individuals at ultra-high risk (UHR) for psychosis, abnormal salience processing may serve as a precursor to full-blown psychotic symptoms, contributing to distorted perceptions and the onset of psychotic ideation. This review examines the current literature on aberrant salience among UHR youth, exploring its neurobiological, psychological, and behavioral dimensions. Through a comprehensive analysis of studies involving neuroimaging, cognitive assessments, and symptomatology, we assess the consistency of findings across diverse methodologies. Additionally, we evaluate factors contributing to aberrant salience, including neurochemical imbalances, dysregulation in dopamine pathways, and environmental stressors, which may jointly increase psychosis vulnerability. Identifying aberrant salience as a measurable trait in UHR populations could facilitate earlier identification and targeted interventions. Implications for clinical practice are discussed, highlighting the need for specialized therapeutic approaches that address cognitive and emotional dysregulation in salience attribution. Recent research underscores the importance of aberrant salience in early psychosis research and advocates for further studies on intervention strategies to mitigate progression to psychosis among UHR individuals.
(This article belongs to the Special Issue Mental Health and Psychiatric Disorders of Children and Adolescents)

27 pages, 13146 KB  
Article
Underwater-Image Enhancement Based on Maximum Information-Channel Correction and Edge-Preserving Filtering
by Wei Liu, Jingxuan Xu, Siying He, Yongzhen Chen, Xinyi Zhang, Hong Shu and Ping Qi
Symmetry 2025, 17(5), 725; https://doi.org/10.3390/sym17050725 - 9 May 2025
Cited by 1 | Viewed by 1923
Abstract
The properties of light propagation underwater typically cause color distortion and reduced contrast in underwater images. In addition, complex underwater lighting conditions can result in issues such as non-uniform illumination, spotting, and noise. To address these challenges, we propose an innovative underwater-image enhancement (UIE) approach based on maximum information-channel compensation and edge-preserving filtering techniques. Specifically, we first develop a channel information transmission strategy grounded in maximum information preservation principles, utilizing the maximum information channel to improve the color fidelity of the input image. Next, we locally enhance the color-corrected image using guided filtering and generate a series of globally contrast-enhanced images by applying gamma transformations with varying parameter values. In the final stage, the enhanced image sequence is decomposed into low-frequency (LF) and high-frequency (HF) components via side-window filtering. For the HF component, a weight map is constructed by calculating the difference between the current exposedness and the optimum exposure. For the LF component, we derive a comprehensive feature map by integrating the brightness map, saturation map, and saliency map, thereby accurately assessing the quality of degraded regions in a manner that aligns with the symmetry principle inherent in human vision. Ultimately, we combine the LF and HF components through a weighted summation process, resulting in a high-quality underwater image. Experimental results demonstrate that our method effectively achieves both color restoration and contrast enhancement, outperforming several state-of-the-art UIE techniques across multiple datasets.
(This article belongs to the Special Issue Symmetry and Its Applications in Image Processing)
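Two of the stages above, the gamma-transformed global-contrast sequence and the exposedness-based HF weighting, can be sketched in a few lines of Python; the gamma values and the Gaussian width around the optimum exposure are illustrative assumptions.

```python
import numpy as np

def gamma_series(img: np.ndarray, gammas=(0.5, 0.75, 1.0, 1.5, 2.0)):
    """Globally contrast-enhanced sequence via gamma transforms.
    img is assumed to be a float array scaled to [0, 1]."""
    return [np.power(img, g) for g in gammas]

def exposedness_weight(img: np.ndarray, optimum: float = 0.5, sigma: float = 0.25):
    """Per-pixel weight favouring pixels close to the optimum exposure;
    the Gaussian form and sigma are assumptions."""
    return np.exp(-((img - optimum) ** 2) / (2.0 * sigma**2))
```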

38 pages, 13077 KB  
Article
Accentuation as a Mechanism of Visual Illusions: Insights from Adaptive Resonance Theory (ART)
by Baingio Pinna, Jurģis Šķilters and Daniele Porcheddu
Information 2025, 16(3), 172; https://doi.org/10.3390/info16030172 - 25 Feb 2025
Cited by 1 | Viewed by 1419
Abstract
This study introduces and examines the principle of accentuation as a novel mechanism in perceptual organization, analyzing its effects through the framework of Grossberg’s Adaptive Resonance Theory (ART). We demonstrate that localized accentuators, manifesting as minimal dissimilarities or discontinuities, can significantly modulate global perceptions, inducing illusions of geometric distortion, orientation shifts, and apparent motion. Through a series of phenomenological experiments, we establish that accentuation can supersede classical Gestalt principles, influencing figure-ground segregation, shape perception, and lexical processing. Our findings suggest that accentuation functions as an autonomous organizing principle, leveraging salience-driven attentional capture to generate perceptual effects. We then apply the ART model to elucidate these phenomena, focusing on its core constructs of complementary computing, boundary–surface interactions, and resonant states. Specifically, we show how accentuation-induced asymmetries in boundary signals within the boundary contour system (BCS) can propagate through laminar cortical circuits, biasing figure-ground assignments and shape representations. The interaction between these biased signals and top–down expectations, as modeled by ART’s resonance mechanisms, provides a neurally plausible account for the observed illusions. This integration of accentuation effects with ART offers novel insights into the neural substrates of visual perception and presents a unifying theoretical framework for a diverse array of perceptual phenomena, bridging low-level feature processing with high-level cognitive representations.

20 pages, 9814 KB  
Article
Research on Performance of Interior Permanent Magnet Synchronous Motor with Fractional Slot Concentrated Winding for Electric Vehicles Applications
by Zhiqiang Xi, Lianbo Niu, Xianghai Yan and Liyou Xu
World Electr. Veh. J. 2024, 15(10), 470; https://doi.org/10.3390/wevj15100470 - 14 Oct 2024
Cited by 1 | Viewed by 2941
Abstract
The fractional-slot, concentrated-winding, interior permanent magnet synchronous motor (FSCW IPMSM) offers advantages such as reduced copper loss, improved flux-weakening capability, and better fault tolerance, and has development potential in applications such as electric vehicles. However, fractional-slot concentrated-winding motors often contain rich harmonic components due to their winding characteristics, leading to increased motor losses and back electromotive force harmonics, thereby limiting the efficiency and constant-power speed range of the motor. This article first uses the winding function method to explore the inductance and saliency ratio of interior permanent magnet synchronous motors with different slot-pole combinations for fractional-slot concentrated windings in electric vehicles. Second, it establishes a 2D finite element parameterized model to analyze and compare the performance of fractional-slot concentrated-winding motors with different slot-pole combinations, including air-gap flux density, back electromotive force distortion rate, overload capacity, and torque. The structural parameters of the motor were optimized to minimize torque ripple while constraining the reduction in average torque; the slot width, permanent magnet angle, and permanent magnet pole arc angle were analyzed and optimized. The simulation results showed that 12 slots and 8 poles is the optimal design, providing a theoretical basis for slot-pole selection in fractional-slot concentrated-winding interior permanent magnet synchronous motors for electric vehicles.
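The back-EMF distortion rate mentioned above is commonly quantified as the total harmonic distortion (THD) of the back-EMF waveform. A minimal sketch, assuming exactly one electrical period sampled uniformly:

```python
import numpy as np

def back_emf_thd(signal: np.ndarray) -> float:
    """THD of a sampled back-EMF waveform: RMS of harmonics 2..N
    relative to the fundamental. Assumes one exact electrical period."""
    spectrum = np.abs(np.fft.rfft(signal))
    fundamental = spectrum[1]                     # bin 1 = fundamental
    harmonics = np.sqrt(np.sum(spectrum[2:] ** 2))
    return harmonics / fundamental
```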

22 pages, 15192 KB  
Article
Joint Luminance-Saliency Prior and Attention for Underwater Image Quality Assessment
by Zhiqiang Lin, Zhouyan He, Chongchong Jin, Ting Luo and Yeyao Chen
Remote Sens. 2024, 16(16), 3021; https://doi.org/10.3390/rs16163021 - 17 Aug 2024
Cited by 5 | Viewed by 2069
Abstract
Underwater images, as a crucial medium for storing ocean information in underwater sensors, play a vital role in various underwater tasks. However, they are prone to distortion due to the imaging environment, leading to a decline in visual quality that marine vision systems urgently need to address. It is therefore necessary to develop underwater image enhancement (UIE) methods and corresponding quality assessment methods. At present, most underwater image quality assessment (UIQA) methods rely on handcrafted features that characterize degradation attributes; these struggle to measure complex mixed distortions and often diverge from human visual perception in practical applications. Furthermore, current UIQA methods do not consider the perceptual effect of enhancement. To this end, this paper employs luminance and saliency priors as critical visual information, for the first time, to measure the global and local quality enhancement achieved by UIE algorithms; the resulting method is named JLSAU. The proposed JLSAU is built on a pyramid-structured backbone, supplemented by the Luminance Feature Extraction Module (LFEM) and the Saliency Weight Learning Module (SWLM), which obtain perceptual features with luminance and saliency priors at multiple scales. The luminance prior captures visually sensitive global luminance distortion, including histogram statistical features and grayscale features with positional information. The saliency prior captures visual information that reflects local quality variation in both the spatial and channel domains. Finally, to model the relationship among the different levels of visual information contained in the multi-scale features, the Attention Feature Fusion Module (AFFM) is proposed. Experimental results on the public UIQE and UWIQA datasets demonstrate that the proposed JLSAU outperforms existing state-of-the-art UIQA methods.
(This article belongs to the Special Issue Ocean Remote Sensing Based on Radar, Sonar and Optical Techniques)
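The histogram-based luminance statistics underlying the luminance prior can be sketched simply; the bin count and the normalized intensity range are illustrative assumptions.

```python
import numpy as np

def luminance_histogram_features(gray: np.ndarray, bins: int = 32) -> np.ndarray:
    """Global luminance statistics of the kind a luminance prior relies on:
    a normalized histogram plus simple moments. gray is assumed in [0, 1]."""
    hist, _ = np.histogram(gray, bins=bins, range=(0.0, 1.0), density=True)
    return np.concatenate([hist, [gray.mean(), gray.std()]])
```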

18 pages, 4373 KB  
Article
BPG-Based Lossy Compression of Three-Channel Remote Sensing Images with Visual Quality Control
by Fangfang Li, Oleg Ieremeiev, Vladimir Lukin and Karen Egiazarian
Remote Sens. 2024, 16(15), 2740; https://doi.org/10.3390/rs16152740 - 26 Jul 2024
Cited by 1 | Viewed by 1492
Abstract
The number of acquired remote sensing images, and their average size, keep growing. To manage such data, compression is needed, and lossy compression is often preferable. Since lossy compression introduces distortions, it degrades classification and object detection; it must therefore be controlled, i.e., the introduced distortions must stay under a certain limit. The distortions and the limit can be characterized by different metrics (quantitative criteria). Here, we consider the HaarPSI metric, which correlates very highly with visual quality and human attention (saliency maps), for three-channel optical-band images compressed by the better portable graphics (BPG) encoder, one of the best modern compression techniques. We analyze a two-step procedure for providing a desired visual quality and show its peculiarities for the 4:4:4, 4:2:2, and 4:2:0 compression modes. We show how the HaarPSI metric relates to other known metrics of image visual quality and to thresholds of distortion visibility. The two-step procedure is demonstrated to provide about three times better accuracy in reaching the desired visual quality than a fixed setting of the parameter Q that controls compression for the BPG encoder. The achieved accuracy is close to the limit imposed by the integer-valued setting of the Q parameter. We also briefly analyze the influence of compression on the classification accuracy of real-life remote sensing data.
(This article belongs to the Section Remote Sensing Image Processing)
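The two-step idea can be sketched as: compress once at an initial Q, measure the metric, then correct Q using the approximately linear local dependence of the metric on Q. In the sketch below, `compress` and `metric` are assumed placeholder callables and the slope is a stand-in for a value estimated offline; this is not the paper's exact procedure or a real BPG API.

```python
def two_step_q(image, target, compress, metric, q_init=28, q_range=(1, 51)):
    """Two-step selection of the quality parameter Q.

    compress(image, q) -> decoded image after lossy coding (assumed callable)
    metric(image, decoded) -> quality score, e.g. HaarPSI (assumed callable)
    """
    decoded = compress(image, q_init)
    m0 = metric(image, decoded)
    slope = -0.01  # assumed local slope of metric vs. Q (estimated offline)
    q = int(round(q_init + (target - m0) / slope))
    q = max(q_range[0], min(q_range[1], q))  # clamp to the valid integer range
    return q, compress(image, q)
```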

34 pages, 5234 KB  
Article
Simulated Dopamine Modulation of a Neurorobotic Model of the Basal Ganglia
by Tony J. Prescott, Fernando M. Montes González, Kevin Gurney, Mark D. Humphries and Peter Redgrave
Biomimetics 2024, 9(3), 139; https://doi.org/10.3390/biomimetics9030139 - 25 Feb 2024
Cited by 3 | Viewed by 2986
Abstract
The vertebrate basal ganglia play an important role in action selection—the resolution of conflicts between alternative motor programs. The effective operation of basal ganglia circuitry is also known to rely on appropriate levels of the neurotransmitter dopamine. We investigated reducing or increasing the tonic level of simulated dopamine in a prior model of the basal ganglia integrated into a robot control architecture engaged in a foraging task inspired by animal behaviour. The main findings were that progressive reductions in the levels of simulated dopamine caused slowed behaviour and, at low levels, an inability to initiate movement. These states were partially relieved by increased salience levels (stronger sensory/motivational input). Conversely, increased simulated dopamine caused distortion of the robot’s motor acts through partially expressed motor activity relating to losing actions. This could also lead to an increased frequency of behaviour switching. Levels of simulated dopamine that were either significantly lower or higher than baseline could cause a loss of behavioural integration, sometimes leaving the robot in a ‘behavioral trap’. That some analogous traits are observed in animals and humans affected by dopamine dysregulation suggests that robotic models could prove useful in understanding the role of dopamine neurotransmission in basal ganglia function and dysfunction.
(This article belongs to the Special Issue Bio-Inspired and Biomimetic Intelligence in Robotics)
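The reported interplay of dopamine and salience can be caricatured in a few lines: tonic dopamine scales the gain on salience inputs, and an action is expressed only if its gated activity clears a selection threshold, so low dopamine blocks initiation unless salience rises. This toy sketch is a heavily simplified stand-in, not the paper's basal ganglia model.

```python
import numpy as np

def select_action(salience: np.ndarray, dopamine: float, theta: float = 0.5):
    """Winner-take-all selection with dopamine-scaled salience gain."""
    gated = dopamine * salience
    winner = int(np.argmax(gated))
    if gated[winner] < theta:
        return None  # low dopamine: no action initiated
    return winner

print(select_action(np.array([0.4, 0.6]), dopamine=0.5))  # None (no initiation)
print(select_action(np.array([0.4, 0.9]), dopamine=0.8))  # 1 (salience compensates)
```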

18 pages, 39167 KB  
Article
Underwater Image Enhancement via Triple-Branch Dense Block and Generative Adversarial Network
by Peng Yang, Chunhua He, Shaojuan Luo, Tao Wang and Heng Wu
J. Mar. Sci. Eng. 2023, 11(6), 1124; https://doi.org/10.3390/jmse11061124 - 26 May 2023
Cited by 6 | Viewed by 2433
Abstract
The complex underwater environment and light scattering lead to severe degradation in underwater images, such as color distortion, noise interference, and loss of detail, posing a significant challenge to underwater applications. To address these problems, we propose a triple-branch dense block-based generative adversarial network (TDGAN) for the quality enhancement of underwater images. A residual triple-branch dense block is designed in the generator, which improves performance and feature-extraction efficiency and retains more image details. A dual-branch discriminator network is also developed, which helps to capture more high-frequency information and guides the generator to use more global content and detailed features. Experimental results show that TDGAN is more competitive than many advanced methods in terms of visual perception and quantitative metrics. Application tests illustrate that TDGAN can significantly improve the accuracy of underwater target detection, and it is also applicable to image segmentation and saliency detection.
(This article belongs to the Section Physical Oceanography)
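One plausible reading of a residual triple-branch dense block is three parallel convolutions with different receptive fields, concatenated and fused by a 1x1 convolution plus a residual connection. A minimal PyTorch sketch, with layer sizes chosen as assumptions rather than taken from the paper:

```python
import torch
import torch.nn as nn

class TripleBranchDenseBlock(nn.Module):
    """Three parallel conv branches (1x1, 3x3, 5x5), densely concatenated,
    fused by a 1x1 conv, with a residual skip. Illustrative layout only."""
    def __init__(self, ch: int):
        super().__init__()
        self.b1 = nn.Conv2d(ch, ch, kernel_size=1)
        self.b2 = nn.Conv2d(ch, ch, kernel_size=3, padding=1)
        self.b3 = nn.Conv2d(ch, ch, kernel_size=5, padding=2)
        self.fuse = nn.Conv2d(3 * ch, ch, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        y = torch.cat([self.act(self.b1(x)),
                       self.act(self.b2(x)),
                       self.act(self.b3(x))], dim=1)
        return x + self.fuse(y)  # residual connection
```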

24 pages, 19626 KB  
Article
An Innovative Approach for Effective Removal of Thin Clouds in Optical Images Using Convolutional Matting Model
by Renzhe Wu, Guoxiang Liu, Jichao Lv, Yin Fu, Xin Bao, Age Shama, Jialun Cai, Baikai Sui, Xiaowen Wang and Rui Zhang
Remote Sens. 2023, 15(8), 2119; https://doi.org/10.3390/rs15082119 - 17 Apr 2023
Cited by 3 | Viewed by 2876
Abstract
Clouds are the major source of clutter in optical remote sensing (RS) images. Approximately 60% of the Earth’s surface is covered by clouds, with the equatorial and Tibetan Plateau regions the most affected. Although cloud removal can significantly improve the usability of remote sensing imagery, its application is restricted by the poor timeliness of time-series cloud removal techniques and the distortion-prone nature of single-frame techniques. To thoroughly remove thin clouds from remote sensing imagery, we propose the Saliency Cloud Matting Convolutional Neural Network (SCM-CNN) from an image fusion perspective. The network automatically balances multiple loss functions, extracts cloud opacity and cloud-top reflectance intensity from cloudy remote sensing images, and recovers ground surface information under thin cloud cover through inverse operations. The SCM-CNN was trained on simulated samples and validated on both simulated samples and Sentinel-2 images, achieving average peak signal-to-noise ratios (PSNRs) of 30.04 and 25.32, respectively. Comparative studies demonstrate that SCM-CNN is effective at removing clouds from individual remote sensing images, is robust, and can recover ground surface information under thin cloud cover without compromising the original image. The method can be widely applied in regions with year-round cloud cover, providing data support for geological hazard, vegetation, and frozen-area studies, among others.
(This article belongs to the Special Issue Remote Sensing and Machine Learning of Signal and Image Processing)
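The inverse operation is presumably the standard matting equation, in which an observed pixel blends cloud-top reflectance and ground surface according to cloud opacity. A minimal sketch under that assumption:

```python
import numpy as np

def remove_thin_cloud(obs, alpha, cloud_reflectance, eps=1e-6):
    """Invert the matting model  obs = alpha * cloud + (1 - alpha) * surface
    to recover the surface under thin cloud. alpha (opacity) and
    cloud_reflectance are the quantities the network would estimate;
    the compositing form is the generic matting equation, which may
    differ in detail from the paper's exact model."""
    return (obs - alpha * cloud_reflectance) / np.maximum(1.0 - alpha, eps)
```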

25 pages, 7792 KB  
Article
Image Enhancement of Maritime Infrared Targets Based on Scene Discrimination
by Yingqi Jiang, Lili Dong and Junke Liang
Sensors 2022, 22(15), 5873; https://doi.org/10.3390/s22155873 - 5 Aug 2022
Cited by 6 | Viewed by 2585
Abstract
Infrared image enhancement can effectively improve image quality and enhance the saliency of targets, and is a critical component of marine target search and tracking systems. However, the quality of maritime infrared images is easily affected by weather and sea conditions: images suffer from low contrast and weak target contours, and targets are disturbed by sea clutter of varying intensity, so target characteristics differ and cannot be handled by a single algorithm. To address these problems, the relationship between the directional texture features of the target and the roughness of the sea surface is analyzed in depth. According to the texture roughness of the waves, the image scene is adaptively classified as calm or rough sea surface. Using a Gabor filter at a specific frequency and the gradient-based target feature extraction operator proposed in this paper, clutter suppression and feature fusion strategies are set, and a multi-scale fused target feature image is obtained for each of the two scene types; this serves as the guide image for guided filtering. The original image is decomposed into a target layer and a background layer to extract target features and avoid image distortion. The blurred background around the target contour is extracted by Gaussian filtering based on the potential target region, and the edge blur caused by the target’s heat conduction is eliminated. Finally, an enhanced image is obtained by fusing the target and background layers with appropriate weights. The experimental results show that, compared with current image enhancement methods, the proposed method improves the clarity and contrast of images, enhances the detectability of targets in distress, removes sea surface clutter while retaining natural environment features in the background, and provides more information for target detection and continuous tracking in maritime search and rescue.
(This article belongs to the Collection Remote Sensing Image Processing)
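The directional texture analysis can be illustrated with a small multi-orientation Gabor bank at one spatial frequency; the kernel size, sigma, and orientations here are illustrative assumptions, not the paper's tuned values.

```python
import cv2
import numpy as np

def gabor_responses(img: np.ndarray, freq: float = 0.1, thetas=(0, 45, 90, 135)):
    """Multi-orientation Gabor responses at one spatial frequency,
    a sketch of directional texture features for sea-state discrimination."""
    lambd = 1.0 / freq  # wavelength corresponding to the chosen frequency
    responses = []
    for t in thetas:
        k = cv2.getGaborKernel((31, 31), sigma=4.0, theta=np.deg2rad(t),
                               lambd=lambd, gamma=0.5)
        responses.append(cv2.filter2D(img.astype(np.float32), -1, k))
    return np.stack(responses)  # (orientations, H, W)
```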

27 pages, 34983 KB  
Article
Multi-Resolution Collaborative Fusion of SAR, Multispectral and Hyperspectral Images for Coastal Wetlands Mapping
by Yi Yuan, Xiangchao Meng, Weiwei Sun, Gang Yang, Lihua Wang, Jiangtao Peng and Yumiao Wang
Remote Sens. 2022, 14(14), 3492; https://doi.org/10.3390/rs14143492 - 21 Jul 2022
Cited by 33 | Viewed by 6109
Abstract
Hyperspectral, multispectral, and synthetic aperture radar (SAR) remote sensing images generally provide complementary advantages: high spectral resolution, high spatial resolution, and geometric and polarimetric properties, respectively. Effectively integrating this cross-modal information to obtain a high-spatial-resolution hyperspectral image that also carries SAR characteristics is therefore promising. However, owing to the divergent imaging mechanisms of the modalities, existing SAR and optical image fusion techniques generally remain limited by spectral or spatial distortions, especially for complex surface features such as coastal wetlands. This paper provides, for the first time, an efficient multi-resolution collaborative fusion method for multispectral, hyperspectral, and SAR images. We improve generic multi-resolution analysis with spectral-spatial weighted modulation and spectral compensation to achieve minimal spectral loss. The backscattering gradients of the SAR image, calculated from saliency gradients with edge preservation, guide the fusion. The experiments were performed on ZiYuan-1 02D (ZY-1 02D) and GaoFen-5B (AHSI) hyperspectral, Sentinel-2 and GaoFen-5B (VIMI) multispectral, and Sentinel-1 SAR images of challenging coastal wetlands. The fusion results were comprehensively tested and verified on qualitative, quantitative, and classification metrics, and show the competitive performance of the proposed method.
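Generic multi-resolution analysis (MRA) fusion injects the high-pass spatial detail of a high-resolution guide into the upsampled low-resolution band. The sketch below reduces the paper's spectral-spatial weighted modulation to a single scalar weight, purely for illustration:

```python
import cv2
import numpy as np

def mra_detail_injection(low_res_band: np.ndarray, guide: np.ndarray,
                         weight: float = 1.0) -> np.ndarray:
    """One band of generic MRA fusion: upsample the low-resolution band,
    then add the guide's high-frequency residue. The scalar weight stands
    in for the paper's per-pixel modulation; illustrative only."""
    up = cv2.resize(low_res_band, guide.shape[::-1], interpolation=cv2.INTER_CUBIC)
    low_pass = cv2.GaussianBlur(guide, (0, 0), sigmaX=2.0)
    detail = guide - low_pass  # high-frequency spatial detail of the guide
    return up + weight * detail
```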

14 pages, 1640 KB  
Article
Saliency-Guided Local Full-Reference Image Quality Assessment
by Domonkos Varga
Signals 2022, 3(3), 483-496; https://doi.org/10.3390/signals3030028 - 11 Jul 2022
Cited by 15 | Viewed by 4377
Abstract
Research and development of image quality assessment (IQA) algorithms has been a focus of the computer vision and image processing community for decades. The intent of IQA methods is to estimate the perceptual quality of digital images, correlating as closely as possible with human judgements. Full-reference image quality assessment algorithms, which have full access to the distortion-free images, usually contain two phases: local image quality estimation and pooling. Previous works have utilized visual saliency in the final pooling stage, typically as weights in the weighted averaging of local image quality scores, emphasizing image regions that are salient to human observers. In contrast to this common practice, this study applies visual saliency in the computation of local image quality itself, based on the observation that local image quality is determined simultaneously by local image degradation and visual saliency. Experimental results on KADID-10k, TID2013, TID2008, and CSIQ show that the proposed method improves on the state of the art at low computational cost.
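The core idea, letting saliency enter the local quality term itself rather than only the pooling weights, can be sketched as follows; the modulation form (an exponent on the SSIM map) is an illustrative choice, not the paper's exact formula.

```python
import numpy as np
from skimage.metrics import structural_similarity

def saliency_modulated_quality(ref, dist, saliency):
    """ref, dist: same-size 8-bit grayscale images; saliency in [0, 1],
    same shape. Saliency modulates the local quality map before pooling."""
    _, ssim_map = structural_similarity(ref, dist, full=True)
    ssim_map = np.clip(ssim_map, 0.0, 1.0)   # guard against negative values
    local_q = ssim_map ** (1.0 + saliency)   # salient regions judged more strictly
    return float(local_q.mean())             # plain average pooling
```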

10 pages, 7456 KB  
Article
Quality Assessment of View Synthesis Based on Visual Saliency and Texture Naturalness
by Lijuan Tang, Kezheng Sun, Shuaifeng Huang, Guangcheng Wang and Kui Jiang
Electronics 2022, 11(9), 1384; https://doi.org/10.3390/electronics11091384 - 26 Apr 2022
Cited by 2 | Viewed by 2309
Abstract
Depth-Image-Based Rendering (DIBR) is one of the core techniques for generating new views in 3D video applications. However, the distortion characteristics of DIBR-synthesized views differ from those of 2D images, so it is necessary to study these unique distortion characteristics and to design effective and efficient algorithms to evaluate DIBR-synthesized images and guide DIBR algorithms. In this work, visual saliency and texture naturalness features are extracted to evaluate the quality of DIBR views. After feature extraction, we adopt a machine learning method to map the extracted features to the quality score of the DIBR views. Experiments were conducted on two synthetic-view databases, IETR and IRCCyN/IVC, and the results show that the proposed algorithm performs better than the compared synthetic-view quality evaluation methods.
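The feature-to-score mapping stage can be sketched with a standard regressor; the paper's exact learner and hyperparameters are not given here, so SVR with an RBF kernel is an assumption, and the feature and score arrays below are placeholders.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVR

# Placeholder data: one row of saliency/texture-naturalness features per
# synthesized view (X), with corresponding subjective scores (y).
rng = np.random.default_rng(0)
X = rng.random((100, 16))
y = rng.random(100)

model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0))
model.fit(X, y)                 # learn the feature-to-quality mapping
pred = model.predict(X[:5])     # predicted quality scores for new views
```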
