Search Results (33)

Search Parameters:
Keywords = non-subsampled shearlet transform

24 pages, 3908 KB  
Article
Transform Domain Based GAN with Deep Multi-Scale Features Fusion for Medical Image Super-Resolution
by Huayong Yang, Qingsong Wei and Yu Sang
Electronics 2025, 14(18), 3726; https://doi.org/10.3390/electronics14183726 - 20 Sep 2025
Viewed by 770
Abstract
High-resolution (HR) medical images provide clearer anatomical details and facilitate early disease diagnosis, yet acquiring HR scans is often limited by imaging conditions, device capabilities, and patient factors. We propose a transform domain deep multiscale feature fusion generative adversarial network (MSFF-GAN) for medical image super-resolution (SR). Considering the advantages of generative adversarial networks (GANs) and convolutional neural networks (CNNs), MSFF-GAN integrates a deep multi-scale convolution network into the GAN generator, which is composed primarily of a series of cascaded multi-scale feature extraction blocks in a coarse-to-fine manner to restore the medical images. Two tailored blocks are designed: a multiscale information distillation (MSID) block that adaptively captures long- and short-path features across scales, and a granular multiscale (GMS) block that expands receptive fields at fine granularity to strengthen multiscale feature extraction with reduced computational cost. Unlike conventional methods that predict HR images directly in the spatial domain, which often yield excessively smoothed outputs with missing textures, we formulate SR as the prediction of coefficients in the non-subsampled shearlet transform (NSST) domain. This transform domain modeling enables better preservation of global anatomical structure and local texture details. The predicted coefficients are inverted to reconstruct HR images, and the transform domain subbands are also fed to the discriminator to enhance its discrimination ability and improve perceptual fidelity. Extensive experiments on medical image datasets demonstrate that MSFF-GAN outperforms state-of-the-art approaches in structural similarity index (SSIM) and peak signal-to-noise ratio (PSNR), while more effectively preserving global anatomy and fine textures. 
These results validate the effectiveness of combining multiscale feature fusion with transform domain prediction for high-quality medical image super-resolution. Full article
(This article belongs to the Special Issue New Trends in AI-Assisted Computer Vision)
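The abstract above reports PSNR among its evaluation metrics. As a minimal illustrative sketch (not the paper's evaluation code), PSNR between a reference image and a reconstruction can be computed as:

```python
import numpy as np

def psnr(reference: np.ndarray, test: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio (dB) between two same-shape images."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    if mse == 0.0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

The `peak` value of 255 assumes 8-bit images; medical volumes with other bit depths need a different peak.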

16 pages, 13327 KB  
Article
Fusion of Infrared and Visible Light Images Based on Improved Adaptive Dual-Channel Pulse Coupled Neural Network
by Bin Feng, Chengbo Ai and Haofei Zhang
Electronics 2024, 13(12), 2337; https://doi.org/10.3390/electronics13122337 - 14 Jun 2024
Cited by 3 | Viewed by 1605
Abstract
The pulse-coupled neural network (PCNN), due to its effectiveness in simulating the mammalian visual system to perceive and understand visual information, has been widely applied in the fields of image segmentation and image fusion. To address the issues of low contrast and the loss of detail information in infrared and visible light image fusion, this paper proposes a novel image fusion method based on an improved adaptive dual-channel PCNN model in the non-subsampled shearlet transform (NSST) domain. Firstly, NSST is used to decompose the infrared and visible light images into a series of high-pass sub-bands and a low-pass sub-band, respectively. Next, the PCNN models are stimulated using the weighted sum of the eight-neighborhood Laplacian of the high-pass sub-bands and the energy activity of the low-pass sub-band. The high-pass sub-bands are fused using local structural information as the basis for the linking strength for the PCNN, while the low-pass sub-band is fused using a linking strength based on multiscale morphological gradients. Finally, the fused high-pass and low-pass sub-bands are reconstructed to obtain the fused image. Comparative experiments demonstrate that, subjectively, this method effectively enhances the contrast of scenes and targets while preserving the detail information of the source images. Compared to the best mean values of the objective evaluation metrics of the compared methods, the proposed method shows improvements of 2.35%, 3.49%, and 11.60% in information entropy, mutual information, and standard deviation, respectively. Full article
(This article belongs to the Special Issue Machine Learning Methods for Solving Optical Imaging Problems)
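The decompose-fuse-reconstruct pipeline described above can be sketched with a toy single-level decomposition: a box blur stands in for the NSST low-pass band, and simple average/max-absolute rules stand in for the paper's PCNN-driven fusion rules. All names here are illustrative:

```python
import numpy as np

def box_blur(img, k=5):
    """Separable box filter, used here as a toy low-pass stand-in for NSST."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    kern = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="valid"), 0, rows)

def fuse(a, b, k=5):
    la, lb = box_blur(a, k), box_blur(b, k)
    ha, hb = a - la, b - lb                            # high-pass residuals
    low = 0.5 * (la + lb)                              # average rule for the base band
    high = np.where(np.abs(ha) >= np.abs(hb), ha, hb)  # max-absolute rule for detail
    return low + high
```

A real NSST yields many directional sub-bands per scale; this two-band split only illustrates the flow of the method.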

22 pages, 18573 KB  
Article
A Multi-Scale Fusion Strategy for Side Scan Sonar Image Correction to Improve Low Contrast and Noise Interference
by Ping Zhou, Jifa Chen, Pu Tang, Jianjun Gan and Hongmei Zhang
Remote Sens. 2024, 16(10), 1752; https://doi.org/10.3390/rs16101752 - 15 May 2024
Cited by 5 | Viewed by 2669
Abstract
Side scan sonar images have great application prospects in underwater surveys, target detection, and engineering activities. However, the acquired sonar images exhibit low illumination, scattered noise, distorted outlines, and unclear edge textures due to the complicated undersea environment and intrinsic device flaws. Hence, this paper proposes a multi-scale fusion strategy for side scan sonar (SSS) image correction to improve the low contrast and noise interference. Initially, an SSS image was decomposed into low and high frequency sub-bands via the non-subsampled shearlet transform (NSST). Then, modified multi-scale retinex (MMSR) was employed to enhance the contrast of the low frequency sub-band. Next, sparse dictionary learning (SDL) was utilized to eliminate high frequency noise. Finally, the process of NSST reconstruction was completed by fusing the emerging low and high frequency sub-band images to generate a new sonar image. The experimental results demonstrate that the target features, underwater terrain, and edge contours could be clearly displayed in the image corrected by the multi-scale fusion strategy when compared to eight correction techniques: BPDHE, MSRCR, NPE, ALTM, LIME, FE, WT, and TVRLRA. Effective control was achieved over the speckle noise of the sonar image. Furthermore, the AG, STD, and E values illustrated the delicacy and contrast of the corrected images processed by the proposed strategy. The PSNR value revealed that the proposed strategy outperformed the advanced TVRLRA technology in terms of filtering performance by at least 8.8%. It can provide sonar imagery that is appropriate for various circumstances. Full article
(This article belongs to the Special Issue Radar and Sonar Imaging and Processing IV)
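The modified multi-scale retinex (MMSR) step above follows the classic retinex recipe: subtract the log of a blurred "surround" from the log of the image at several scales, then average. A hedged sketch, with a box blur standing in for the usual Gaussian surround (the paper's modifications are not reproduced):

```python
import numpy as np

def box_blur(img, k):
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    kern = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="valid"), 0, rows)

def multiscale_retinex(img, scales=(3, 7, 15)):
    """Average of single-scale retinex outputs log(I) - log(surround)."""
    img = img.astype(np.float64) + 1.0  # avoid log(0)
    out = np.zeros_like(img)
    for k in scales:
        out += np.log(img) - np.log(box_blur(img, k))
    return out / len(scales)
```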

19 pages, 27088 KB  
Article
Research on Multi-Scale Fusion Method for Ancient Bronze Ware X-ray Images in NSST Domain
by Meng Wu, Lei Yang and Ruochang Chai
Appl. Sci. 2024, 14(10), 4166; https://doi.org/10.3390/app14104166 - 14 May 2024
Cited by 1 | Viewed by 1464
Abstract
X-ray imaging is a valuable non-destructive tool for examining bronze wares, but the complexity of their coverings and the limitations of single-energy imaging often obscure critical details, such as lesions and ornamentation. Multiple exposures are therefore required to capture the key information of a bronze artifact, which fragments that information and complicates analysis and interpretation. Fusing X-ray images acquired at different energies into a single image can effectively solve this problem, yet no specialized fusion method for bronze-artifact images currently exists. Considering the special requirements of bronze restoration and existing fusion frameworks, this paper proposes a novel approach that combines multi-scale morphological gradients with local topology-coupled neural P systems in the non-subsampled shearlet transform (NSST) domain. The proposed method is compared with eight high-performance fusion methods and validated using six evaluation metrics. The results demonstrate its significant theoretical and practical potential for advancing the analysis and preservation of cultural heritage artifacts. Full article
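A basic multi-scale morphological gradient, one ingredient named above, can be sketched in plain NumPy. Square structuring elements and per-scale down-weighting are assumptions here; the paper's exact construction may differ:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def _window(img, k, reduce):
    """Apply a k-by-k sliding-window reduction (max for dilation, min for erosion)."""
    p = np.pad(img, k // 2, mode="edge")
    return reduce(sliding_window_view(p, (k, k)), axis=(-2, -1))

def multiscale_morph_gradient(img, scales=(3, 5, 7)):
    """Sum of morphological gradients (grey dilation minus grey erosion)
    over square structuring elements of increasing size, each down-weighted."""
    img = img.astype(np.float64)
    grad = np.zeros_like(img)
    for i, k in enumerate(scales, start=1):
        grad += (_window(img, k, np.max) - _window(img, k, np.min)) / (2 * i + 1)
    return grad
```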

20 pages, 13665 KB  
Article
ECFuse: Edge-Consistent and Correlation-Driven Fusion Framework for Infrared and Visible Image Fusion
by Hanrui Chen, Lei Deng, Lianqing Zhu and Mingli Dong
Sensors 2023, 23(19), 8071; https://doi.org/10.3390/s23198071 - 25 Sep 2023
Cited by 4 | Viewed by 2272
Abstract
Infrared and visible image fusion (IVIF) aims to render fused images that maintain the merits of both modalities. To tackle the challenge in fusing cross-modality information and avoiding texture loss in IVIF, we propose a novel edge-consistent and correlation-driven fusion framework (ECFuse). This framework leverages our proposed edge-consistency fusion module to maintain rich and coherent edges and textures, simultaneously introducing a correlation-driven deep learning network to fuse the cross-modality global features and modality-specific local features. Firstly, the framework employs a multi-scale transformation (MST) to decompose the source images into base and detail layers. Then, the edge-consistent fusion module fuses detail layers while maintaining the coherence of edges through consistency verification. A correlation-driven fusion network is proposed to fuse the base layers containing both modalities’ main features in the transformation domain. Finally, the fused spatial image is reconstructed by inverse MST. We conducted experiments to compare our ECFuse with both conventional and deep learning approaches on the TNO, LLVIP and M3FD datasets. The qualitative and quantitative evaluation results demonstrate the effectiveness of our framework. We also show that ECFuse can boost the performance of downstream infrared–visible object detection in a unified benchmark. Full article
(This article belongs to the Section Sensing and Imaging)

23 pages, 11979 KB  
Article
Multi-Focus Image Fusion via PAPCNN and Fractal Dimension in NSST Domain
by Ming Lv, Zhenhong Jia, Liangliang Li and Hongbing Ma
Mathematics 2023, 11(18), 3803; https://doi.org/10.3390/math11183803 - 5 Sep 2023
Cited by 4 | Viewed by 1807
Abstract
Multi-focus image fusion is a popular technique for generating a full-focus image, where all objects in the scene are clear. In order to achieve a clearer and fully focused fusion effect, in this paper, the multi-focus image fusion method based on the parameter-adaptive pulse-coupled neural network and fractal dimension in the nonsubsampled shearlet transform domain was developed. The parameter-adaptive pulse coupled neural network-based fusion rule was used to merge the low-frequency sub-bands, and the fractal dimension-based fusion rule via the multi-scale morphological gradient was used to merge the high-frequency sub-bands. The inverse nonsubsampled shearlet transform was used to reconstruct the fused coefficients, and the final fused multi-focus image was generated. We conducted comprehensive evaluations of our algorithm using the public Lytro dataset. The proposed method was compared with state-of-the-art fusion algorithms, including traditional and deep-learning-based approaches. The quantitative and qualitative evaluations demonstrated that our method outperformed other fusion algorithms, as evidenced by the metrics data such as QAB/F, QE, QFMI, QG, QNCIE, QP, QMI, QNMI, QY, QAG, QPSNR, and QMSE. These results highlight the clear advantages of our proposed technique in multi-focus image fusion, providing a significant contribution to the field. Full article
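The fractal-dimension fusion rule above relies on estimating a fractal dimension for high-frequency content. A simple box-counting estimator, shown here as a stand-in (not necessarily the estimator used in the paper), fits the slope of log(occupied boxes) against log(1 / box size):

```python
import numpy as np

def box_counting_dimension(mask, sizes=(1, 2, 4, 8, 16)):
    """Fractal dimension of a binary mask estimated by box counting."""
    counts = []
    for s in sizes:
        h, w = mask.shape
        trimmed = mask[: h - h % s, : w - w % s]  # make the grid divide evenly
        blocks = trimmed.reshape(trimmed.shape[0] // s, s, trimmed.shape[1] // s, s)
        counts.append(blocks.any(axis=(1, 3)).sum())
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return float(slope)
```

A straight line should come out near 1 and a filled region near 2, which is a useful sanity check for any implementation.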

21 pages, 4502 KB  
Article
TDFusion: When Tensor Decomposition Meets Medical Image Fusion in the Nonsubsampled Shearlet Transform Domain
by Rui Zhang, Zhongyang Wang, Haoze Sun, Lizhen Deng and Hu Zhu
Sensors 2023, 23(14), 6616; https://doi.org/10.3390/s23146616 - 23 Jul 2023
Cited by 8 | Viewed by 2389
Abstract
In this paper, a unified optimization model for medical image fusion based on tensor decomposition and the non-subsampled shearlet transform (NSST) is proposed. The model is based on the NSST method and the tensor decomposition method to fuse the high-frequency (HF) and low-frequency (LF) parts of two source images to obtain a mixed-frequency fused image. In general, we integrate low-frequency and high-frequency information from the perspective of tensor decomposition (TD) fusion. Due to the structural differences between the high-frequency and low-frequency representations, potential information loss may occur in the fused images. To address this issue, we introduce a joint static and dynamic guidance (JSDG) technique to complement the HF/LF information. To improve the result of the fused images, we combine the alternating direction method of multipliers (ADMM) algorithm with the gradient descent method for parameter optimization. Finally, the fused images are reconstructed by applying the inverse NSST to the fused high-frequency and low-frequency bands. Extensive experiments confirm the superiority of our proposed TDFusion over other comparison methods. Full article
(This article belongs to the Special Issue Computer-Aided Diagnosis Based on AI and Sensor Technology)

15 pages, 2065 KB  
Article
Multimodality Medical Image Fusion Using Clustered Dictionary Learning in Non-Subsampled Shearlet Transform
by Manoj Diwakar, Prabhishek Singh, Ravinder Singh, Dilip Sisodia, Vijendra Singh, Ankur Maurya, Seifedine Kadry and Lukas Sevcik
Diagnostics 2023, 13(8), 1395; https://doi.org/10.3390/diagnostics13081395 - 12 Apr 2023
Cited by 19 | Viewed by 3140
Abstract
Imaging data fusion is becoming a bottleneck in clinical applications and translational research in medical imaging. This study incorporates a novel multimodality medical image fusion technique into the shearlet domain. The proposed method uses the non-subsampled shearlet transform (NSST) to extract both low- and high-frequency image components. The low-frequency components are fused with a novel modified sum-modified Laplacian (MSML)-based clustered dictionary learning technique, while the high-frequency coefficients are fused using directed contrast in the NSST domain. The fused multimodal medical image is then obtained via the inverse NSST. Compared to state-of-the-art fusion techniques, the proposed method provides superior edge preservation. According to performance metrics, it is approximately 10% better than existing methods in terms of standard deviation, mutual information, and related measures, and it produces excellent visual results in edge preservation, texture preservation, and information content. Full article
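The MSML rule above builds on the sum-modified Laplacian sharpness measure. A plain (unmodified) SML sketch, assuming a 3x3 accumulation window; the paper's modified variant differs in details:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def sum_modified_laplacian(img):
    """Modified Laplacian |2I - I_left - I_right| + |2I - I_up - I_down|,
    accumulated over a 3x3 neighbourhood as a local sharpness measure."""
    p = np.pad(img.astype(np.float64), 1, mode="edge")
    c = p[1:-1, 1:-1]
    ml = (np.abs(2 * c - p[1:-1, :-2] - p[1:-1, 2:])
          + np.abs(2 * c - p[:-2, 1:-1] - p[2:, 1:-1]))
    q = np.pad(ml, 1, mode="edge")
    return sliding_window_view(q, (3, 3)).sum(axis=(-2, -1))
```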

16 pages, 6327 KB  
Article
Research on Multi-Scale Feature Extraction and Working Condition Classification Algorithm of Lead-Zinc Ore Flotation Foam
by Xiaoping Jiang, Huilin Zhao, Junwei Liu, Suliang Ma and Mingzhen Hu
Appl. Sci. 2023, 13(6), 4028; https://doi.org/10.3390/app13064028 - 22 Mar 2023
Cited by 6 | Viewed by 2363
Abstract
To address the difficulty of online monitoring, the low recognition efficiency, and the subjectivity of working-condition identification in mineral flotation processes, a foam flotation performance state recognition method is developed that combines multi-dimensional CNN (convolutional neural network) features with improved LBP (local binary pattern) features. The foam flotation conditions are divided into six categories. First, the multi-directional, multi-scale selectivity and anisotropy of the nonsubsampled shearlet transform (NSST) are used to decompose the flotation foam images at multiple frequency scales, and a multi-channel CNN is designed to extract static features from the images at different frequencies. Then, the flotation video sequences are rotated and dynamic features are extracted by LBP-TOP (local binary patterns from three orthogonal planes), and the CNN-extracted static image features are fused with the LBP dynamic video features. Finally, classification decisions are made by a PSO-RVFLNs (particle swarm optimization-random vector functional link networks) algorithm to accurately identify the foam flotation performance states. Experimental results show that the detection accuracy of the new method is improved by 4.97% and 6.55% compared to the single CNN algorithm and the traditional LBP algorithm, respectively. The accuracy of flotation performance state classification reached 95.17%, and the method reduces manual intervention, thus improving production efficiency. Full article
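The improved LBP features above extend the basic local binary pattern. The plain 8-neighbour baseline can be sketched as follows (the bit ordering is an arbitrary choice here):

```python
import numpy as np

def lbp(img):
    """Basic 8-neighbour local binary pattern for interior pixels:
    each neighbour >= centre sets one bit of an 8-bit code."""
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros(c.shape, dtype=np.int32)
    h, w = img.shape
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]  # shifted neighbour plane
        code += (nb >= c).astype(np.int32) << bit
    return code
```

LBP-TOP, used in the paper, applies this same operator on three orthogonal planes of the video volume.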

18 pages, 5006 KB  
Article
Classification of Mineral Foam Flotation Conditions Based on Multi-Modality Image Fusion
by Xiaoping Jiang, Huilin Zhao and Junwei Liu
Appl. Sci. 2023, 13(6), 3512; https://doi.org/10.3390/app13063512 - 9 Mar 2023
Cited by 6 | Viewed by 2278
Abstract
Accurate and rapid identification of mineral foam flotation states can increase mineral utilization and reduce the consumption of reagents. The traditional flotation process concentrates on extracting foam features from a single-modality foam image, and accuracy suffers once problems such as insufficient image clarity or poor foam boundaries are encountered. In this work, a classification method based on multi-modality image fusion and CNN-PCA-SVM is proposed for working-condition recognition of visible and infrared gray foam images. Specifically, the visible and infrared gray images are fused in the non-subsampled shearlet transform (NSST) domain using the parameter adaptive pulse coupled neural network (PAPCNN) method and the image quality detection method for the high and low frequencies, respectively. A convolutional neural network (CNN) is used as a trainable feature extractor to process the fused foam images, principal component analysis (PCA) reduces the feature dimensionality, and a support vector machine (SVM) is used as a recognizer to classify the foam flotation condition. Experiments show that this model can fuse the foam images and classify the flotation condition with high accuracy. Full article
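The PCA stage of the CNN-PCA-SVM pipeline above reduces the CNN feature vectors before classification. A minimal SVD-based sketch (the CNN and SVM stages are omitted; `pca_reduce` is an illustrative name):

```python
import numpy as np

def pca_reduce(features, n_components):
    """Project row-vector features onto their top principal components."""
    centered = features - features.mean(axis=0)
    # right singular vectors of the centered data are the principal axes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T
```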

17 pages, 12590 KB  
Article
PCNN Model Guided by Saliency Mechanism for Image Fusion in Transform Domain
by Liqun Liu and Jiuyuan Huo
Sensors 2023, 23(5), 2488; https://doi.org/10.3390/s23052488 - 23 Feb 2023
Cited by 3 | Viewed by 2183
Abstract
In heterogeneous image fusion, time-of-flight and visible light images collected by binocular acquisition systems in orchard environments arise from different imaging mechanisms, and enhancing the fusion quality is key. A shortcoming of the pulse coupled neural network model is that its parameters depend on manual, experience-based settings and its iteration cannot terminate adaptively. These limitations are most apparent during the ignition process: the impact of image changes and fluctuations on the results is ignored, and pixel artifacts, area blurring, and unclear edges occur. To address these problems, an image fusion method in the pulse coupled neural network transform domain guided by a saliency mechanism is proposed. A non-subsampled shearlet transform is used to decompose the accurately registered images; the time-of-flight low-frequency component, after multiple lighting segmentation with a pulse coupled neural network, is simplified to a first-order Markov model. The significance function is defined as first-order Markov mutual information to measure the termination condition. A new momentum-driven multi-objective artificial bee colony algorithm is used to optimize the parameters of the link channel feedback term, the link strength, and the dynamic threshold attenuation factor. The low-frequency components of the time-of-flight and color images, after multiple lighting segmentation with a pulse coupled neural network, are fused using a weighted average rule, and the high-frequency components are fused using improved bilateral filters. According to nine objective image evaluation indicators, the proposed algorithm achieves the best fusion effect on time-of-flight confidence images and the corresponding visible light images collected in natural scenes, and it is suitable for heterogeneous image fusion in complex orchard environments. Full article
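The significance function above is defined via mutual information. A basic histogram-based estimate of mutual information between two images (not the paper's first-order Markov variant) can be written as:

```python
import numpy as np

def mutual_information(a, b, bins=16):
    """Mutual information (bits) between two equal-size images,
    estimated from their joint grey-level histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0  # only non-zero cells contribute
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))
```

For an image with 16 equiprobable grey levels, MI with itself equals its entropy, 4 bits; MI with a constant image is 0.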

14 pages, 8261 KB  
Article
Panchromatic and Multispectral Image Fusion Combining GIHS, NSST, and PCA
by Lina Xu, Guangqi Xie and Sitong Zhou
Appl. Sci. 2023, 13(3), 1412; https://doi.org/10.3390/app13031412 - 20 Jan 2023
Cited by 7 | Viewed by 2825
Abstract
Spatial and spectral information are essential sources of information in remote sensing applications, and the fusion of panchromatic and multispectral images effectively combines the advantages of both. Due to the existence of two main classes of fusion methods—component substitution (CS) and multi-resolution analysis (MRA), which have different advantages—mixed approaches are possible. This paper proposes a fusion algorithm that combines the advantages of generalized intensity–hue–saturation (GIHS) and non-subsampled shearlet transform (NSST) with principal component analysis (PCA) technology to extract more spatial information. Unlike traditional algorithms, the algorithm in this paper uses PCA transformation to obtain spatial structure components from PAN and MS, which can effectively inject spatial information while maintaining spectral information with high fidelity. First, PCA is applied to each band of the low-resolution multispectral (MS) images and the panchromatic (PAN) images to obtain the first principal component and to calculate the intensity of the MS image. Then, the PAN image is fused with the first principal component using NSST, and the fused image is used to replace the original intensity component. Finally, a fused image is obtained using the GIHS algorithm. Using the urban, plants and water, farmland, and desert images from GeoEye-1, WorldView-4, GaoFen-7 (GF-7), and Gaofen Multi-Mode (GFDM) as experimental data, this fusion method was tested in both reference-based and no-reference evaluation modes and was compared with five other classic fusion algorithms. The results showed that the algorithm in this paper had better fusion performance in both spectral preservation and spatial information incorporation. Full article
(This article belongs to the Special Issue Recent Advances in Image Processing)
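Plain GIHS substitution, the backbone of the method above, adds the difference between the intensity substitute and the mean-band intensity to every MS band. A minimal sketch, in which the raw PAN image stands in for the paper's NSST-fused replacement component:

```python
import numpy as np

def gihs_pansharpen(ms, pan):
    """Generalized IHS fusion: inject (pan - mean-band intensity) into every band.
    ms: (H, W, B) multispectral; pan: (H, W), co-registered."""
    intensity = ms.mean(axis=2)
    return ms + (pan - intensity)[:, :, None]
```

A defining property of GIHS is that the fused image's mean-band intensity equals the substituted component, which the test below checks.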

19 pages, 3522 KB  
Article
SI2FM: SID Isolation Double Forest Model for Hyperspectral Anomaly Detection
by Zhenhua Mu, Ming Wang, Yihan Wang, Ruoxi Song and Xianghai Wang
Remote Sens. 2023, 15(3), 612; https://doi.org/10.3390/rs15030612 - 20 Jan 2023
Cited by 3 | Viewed by 2570
Abstract
Hyperspectral image (HSI) anomaly detection (HSI-AD) has become a hot issue in hyperspectral information processing because it detects undesired targets without a priori information about the background or the targets, which suits the needs of practical applications. However, the demanding detection environment, with no priors and small targets, as well as the large data volume and high redundancy of HSI itself, make the study of HSI-AD very challenging. In this paper, we propose an HSI-AD method based on nonsubsampled shearlet transform (NSST) domain spectral information divergence isolation double forest (SI2FM). The method mines the intrinsic correlations between the NSST subband coefficients of the HSI in two ways to provide synergistic constraints and guidance for predicting abnormal target coefficients. On the one hand, with the “difference band” as a guide, the global isolation forest and local isolation forest models are constructed based on the spectral information divergence (SID) attribute values of the difference band and the low-frequency and high-frequency subbands, and the anomaly scores are determined by evaluating the path lengths of the isolation binary tree nodes in the forest model to obtain a progressively optimized anomaly detection map. On the other hand, based on the relationship of NSST high-frequency subband coefficients across the spatial-spectral dimensions, a three-dimensional forest structure is constructed to realize the co-optimization of multiple anomaly detection maps obtained from the isolation forest. Finally, the guidance of the difference band suppresses the background noise and anomaly interference to a certain extent, enhancing the separability of target and background.
The two-branch collaborative optimization based on the NSST subband coefficient correlation mining of HSI enables the prediction of anomaly sample coefficients to be gradually improved from multiple perspectives, which effectively improves the accuracy of anomaly detection. The effectiveness of the algorithm is verified by comparing real hyperspectral datasets captured in four different scenes with eleven typical anomaly detection algorithms currently available. Full article
(This article belongs to the Special Issue Hyperspectral Remote Sensing Imaging and Processing)
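The SID attribute used above measures spectral dissimilarity as a symmetric relative entropy between normalised spectra. A minimal sketch (the `eps` handling is an implementation choice, not taken from the paper):

```python
import numpy as np

def spectral_information_divergence(x, y, eps=1e-12):
    """Symmetric spectral information divergence between two non-negative
    spectra: KL(p||q) + KL(q||p) over the normalised spectra."""
    p = x / x.sum() + eps
    q = y / y.sum() + eps
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))
```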

21 pages, 11923 KB  
Article
A Remote Sensing Image Fusion Method Combining Low-Level Visual Features and Parameter-Adaptive Dual-Channel Pulse-Coupled Neural Network
by Zhaoyang Hou, Kaiyun Lv, Xunqiang Gong and Yuting Wan
Remote Sens. 2023, 15(2), 344; https://doi.org/10.3390/rs15020344 - 6 Jan 2023
Cited by 11 | Viewed by 3299
Abstract
Remote sensing image fusion can effectively solve the inherent contradiction between the spatial resolution and the spectral resolution of imaging systems. At present, remote sensing image fusion methods based on multi-scale transforms usually set fusion rules according to local feature information and a pulse-coupled neural network (PCNN), but they suffer from several problems: a single local feature used as the fusion rule cannot effectively extract feature information, PCNN parameter setting is complex, and spatial correlation is poor. To this end, a remote sensing image fusion method that combines low-level visual features and a parameter-adaptive dual-channel pulse-coupled neural network (PADCPCNN) in the non-subsampled shearlet transform (NSST) domain is proposed in this paper. In the low-frequency sub-band fusion process, a low-level visual feature fusion rule is constructed by combining three local features (local phase congruency, local abrupt measure, and local energy information) to enhance the extraction of feature information. In the high-frequency sub-band fusion process, the structure and parameters of the dual-channel pulse-coupled neural network (DCPCNN) are optimized in two ways: (1) the multi-scale morphological gradient is used as an external stimulus to enhance the spatial correlation of the DCPCNN; and (2) parameter-adaptive representation is implemented according to the difference box-counting, the Otsu threshold, and the image intensity to eliminate complex manual parameter setting. Five sets of remote sensing image data from different satellite platforms and ground objects are selected for experiments. The proposed method is compared with 16 other methods and evaluated from qualitative and quantitative aspects.
The experimental results show that, compared with the average value of the sub-optimal method in the five sets of data, the proposed method is optimized by 0.006, 0.009, 0.009, 0.035, 0.037, 0.042, and 0.020, respectively, in the seven evaluation indexes of information entropy, mutual information, average gradient, spatial frequency, spectral distortion, ERGAS, and visual information fidelity, indicating that the proposed method has the best fusion effect. Full article
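Three of the evaluation indexes cited above (information entropy, average gradient, spatial frequency) have standard closed forms. A hedged NumPy sketch, assuming 8-bit-range grey images:

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (bits) of the grey-level histogram."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Mean magnitude of horizontal/vertical finite differences."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def spatial_frequency(img):
    """Root of the summed squared row and column frequencies."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    return float(np.hypot(rf, cf))
```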

13 pages, 13090 KB  
Article
Seismic Coherent Noise Removal of Source Array in the NSST Domain
by Minghao Yu, Xiangbo Gong and Xiaojie Wan
Appl. Sci. 2022, 12(21), 10846; https://doi.org/10.3390/app122110846 - 26 Oct 2022
Cited by 3 | Viewed by 2298
Abstract
The source array technique based on the vibroseis provides a seismic wave field with strong energy, which better meets the needs of seismic exploration. Seismic coherent noise, however, reduces the signal-to-noise ratio (SNR) of source array seismic data and hampers seismic data processing. Traditional coherent noise removal methods often damage the effective signal while suppressing coherent noise, or fail to suppress the interference wave at all. Based on the multi-scale, multi-direction properties of the non-subsampled shearlet transform (NSST) and its simple mathematical structure, a seismic coherent noise removal method for the source array in the NSST domain is proposed. The method is applied to both synthetic and field seismic data. After processing, the coherent noise is largely removed while the effective signal is well preserved. Analysis of the results demonstrates the effectiveness and practicality of the proposed method for coherent noise attenuation. Full article
(This article belongs to the Special Issue Technological Advances in Seismic Data Processing and Imaging)
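Transform-domain noise attenuation of the kind described above is often realised by thresholding detail coefficients. A toy sketch, with a box-blur decomposition standing in for NSST and soft thresholding as the suppression rule (all parameters here are illustrative):

```python
import numpy as np

def box_blur(img, k=5):
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    kern = np.ones(k) / k
    rows = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="valid"), 1, p)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="valid"), 0, rows)

def soft_threshold(x, t):
    """Shrink coefficients toward zero by t; small ones vanish."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def denoise(img, k=5, t=0.2):
    """Toy coefficient-thresholding denoiser: split into a smooth band and a
    residual band, soft-threshold the residual, and recombine."""
    low = box_blur(img, k)
    return low + soft_threshold(img - low, t)
```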
