Search Results (40)

Search Parameters:
Keywords = non-subsampled contourlet transform

25 pages, 3158 KB  
Article
Design of a High-Speed Pavement Image Acquisition System Based on Binocular Vision
by Ruipeng Gao, Zhuofan Dang, Yiran Wang, Qing Jiang and Yuechen Meng
Appl. Syst. Innov. 2025, 8(6), 173; https://doi.org/10.3390/asi8060173 - 18 Nov 2025
Viewed by 459
Abstract
The acquisition of images of road surfaces not only establishes a theoretical foundation for road maintenance by relevant departments but also is instrumental in ensuring the safe operation of highway transportation systems. To address the limitations of traditional road surface image acquisition systems, such as low collection speed, poor image clarity, insufficient information richness, and prohibitive costs, this study has developed a high-speed binocular-vision-based system. Through theoretical analysis, we developed a complete system that integrates hybrid anti-shake technology. Specifically, a hardware device was designed for stable installation at the rear of high-speed vehicles, and a software algorithm was implemented to develop an electronic anti-shake module that compensates for horizontal, vertical, and rotational motion vectors with sub-pixel-level accuracy. Furthermore, a road surface image fusion algorithm that combines the stationary wavelet transform (SWT) and nonsubsampled contourlet transform (NSCT) was proposed to preserve multi-scale edge and textural details by leveraging their complementary multidirectional characteristics. Experimental results demonstrate that the fusion algorithm based on SWT and NSCT outperforms those using either SWT or NSCT alone across quality evaluation metrics such as QAB/F, SF, MI, and RMSE: at 80 km/h, the SF value reaches 4.5, representing an improvement of 0.088 over the SWT algorithm and 4.412 over the NSCT algorithm, indicating that the fused images are clearer. The increases in QAB/F and MI values confirm that the fused road surface images retain rich edge and detailed information, achieving excellent fusion results. Consequently, the system can economically and efficiently capture stable, clear, and information-rich road surface images in real-time under high-speed conditions with low energy consumption and outstanding fidelity. Full article
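The SWT+NSCT fusion strategy and the SF metric reported above can be illustrated with a short sketch. This is a minimal numpy/scipy illustration using an undecimated (à trous) decomposition as a stand-in for the paper's SWT/NSCT pipeline, with the common max-absolute-detail fusion rule; the function names are my own, not the authors' implementation.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def spatial_frequency(img):
    """SF metric: sqrt(row-frequency^2 + column-frequency^2)."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))
    cf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def atrous_fuse(a, b, levels=3):
    """Undecimated (a-trous) multiscale fusion: keep the larger-magnitude
    detail coefficient per scale, average the final approximations."""
    a = a.astype(np.float64); b = b.astype(np.float64)
    fused = 0.0
    for lv in range(levels):
        sa, sb = gaussian_filter(a, 2 ** lv), gaussian_filter(b, 2 ** lv)
        da, db = a - sa, b - sb              # detail planes at this scale
        fused = fused + np.where(np.abs(da) >= np.abs(db), da, db)
        a, b = sa, sb                        # continue on the smoothed images
    return fused + 0.5 * (a + b)             # average final approximations
```

Because the decomposition is undecimated, fusing an image with itself telescopes back to the original, which makes the rule easy to sanity-check.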

16 pages, 2334 KB  
Article
A Comprehensive Image Quality Evaluation of Image Fusion Techniques Using X-Ray Images for Detonator Detection Tasks
by Lynda Oulhissane, Mostefa Merah, Simona Moldovanu and Luminita Moraru
Appl. Sci. 2025, 15(20), 10987; https://doi.org/10.3390/app152010987 - 13 Oct 2025
Viewed by 727
Abstract
Purpose: Luggage X-rays suffer from low contrast, material overlap, and noise; dual-energy imaging reduces ambiguity but creates colour biases that impair segmentation. This study aimed to (1) employ connotative fusion by embedding realistic detonator patches into real X-rays to simulate threats and enhance unattended detection without requiring ground-truth labels; (2) thoroughly evaluate fusion techniques in terms of balancing image quality, information content, contrast, and the preservation of meaningful features. Methods: A total of 1000 X-ray luggage images and 150 detonator images were used for fusion experiments based on deep learning, transform-based, and feature-driven methods. The proposed approach does not need ground truth supervision. Deep learning fusion techniques, including VGG, FusionNet, and AttentionFuse, enable the dynamic selection and combination of features from multiple input images. The transform-based fusion methods convert input images into different domains using mathematical transforms to enhance fine structures. The Nonsubsampled Contourlet Transform (NSCT), Curvelet Transform, and Laplacian Pyramid (LP) are employed. Feature-driven image fusion methods combine meaningful representations for easier interpretation. Singular Value Decomposition (SVD), Principal Component Analysis (PCA), Random Forest (RF), and Local Binary Pattern (LBP) are used to capture and compare texture details across source images. Entropy (EN), Standard Deviation (SD), and Average Gradient (AG) assess factors such as spatial resolution, contrast preservation, and information retention and are used to evaluate the performance of the analysed methods. Results: The results highlight the strengths and limitations of the evaluated techniques, demonstrating their effectiveness in producing sharpened fused X-ray images with clearly emphasized targets and enhanced structural details. 
Conclusions: The Laplacian Pyramid fusion method emerges as the most versatile choice for applications demanding a balanced trade-off. This is evidenced by its overall multi-criteria balance, supported by a composite (geometric mean) score on normalised metrics. It consistently achieves high performance across all evaluated metrics, making it reliable for detecting concealed threats under diverse imaging conditions. Full article
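The EN/SD/AG metrics and the geometric-mean composite score used to rank the fusion methods can be computed as below. This is a hedged sketch of the standard metric definitions and a min-max-normalised geometric mean, not the authors' evaluation code.

```python
import numpy as np

def entropy(img, bins=256):
    """Shannon entropy (EN) of the grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def std_dev(img):
    """Standard deviation (SD): a global contrast measure."""
    return float(np.std(img.astype(np.float64)))

def avg_gradient(img):
    """Average gradient (AG): mean magnitude of local intensity change."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]
    gy = np.diff(img, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def composite_score(metrics):
    """Geometric mean of min-max normalised metric columns.
    metrics: dict mapping method name -> [EN, SD, AG]."""
    names = list(metrics)
    m = np.array([metrics[n] for n in names], dtype=np.float64)
    lo, hi = m.min(axis=0), m.max(axis=0)
    norm = (m - lo) / np.where(hi > lo, hi - lo, 1.0) + 1e-9  # avoid log(0)
    return dict(zip(names, np.exp(np.log(norm).mean(axis=1))))
```

A method that dominates on every metric gets the highest composite score; the geometric mean penalises methods that do well on one metric but collapse on another.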

27 pages, 11214 KB  
Article
Study on Spatiotemporal Coupling Between Urban Form and Carbon Footprint from the Perspective of Color Nighttime Light Remote Sensing
by Jingwen Li, Xinyi Gong, Yanling Lu and Jianwu Jiang
Remote Sens. 2025, 17(18), 3208; https://doi.org/10.3390/rs17183208 - 17 Sep 2025
Viewed by 620
Abstract
This study addresses the limitations of traditional nighttime light remote sensing data in ground object feature recognition and carbon emission monitoring by proposing a fusion framework based on Nonsubsampled Contourlet Transform (NSCT) and Intensity-Hue-Saturation (IHS). This framework successfully generates a high-resolution color nighttime light remote sensing imagery (color-NLRSI) dataset. Focusing on Guangzhou, an important city in the Guangdong-Hong Kong-Macao Greater Bay Area, the study systematically analyzes the spatiotemporal coupling mechanism between urban form evolution and carbon footprint by integrating multiple remote sensing data sources and socio-economic statistical information. Key findings include: (i) The color-NLRSI dataset outperforms traditional NPP-VIIRS data in built-up area extraction, providing more accurate spatial information by refining urban boundary recognition logic. (ii) Spatial correlation analysis reveals a remarkably strong positive relationship between built-up area expansion and carbon emissions, with the correlation coefficient for numerous districts exceeding 0.9. High-density built-up areas are strongly associated with a carbon lock-in effect, hindering low-carbon transformation efficiency. (iii) Geographically Weighted Regression analysis demonstrates that in population-polarized regions, the impact coefficient of built-up area expansion on carbon emissions is notably high at 0.961. This factor’s association (22.43%) surpasses economic development (10.34%) and urbanization rate (14.91%). The established “data fusion—dynamic monitoring—mechanism analysis” technical system generates a novel high-resolution color-NLRSI dataset and reveals a distinct ‘core-periphery’ heterogeneity pattern in Guangzhou, demonstrating that urban expansion is the dominant driver of carbon emissions. 
This approach offers a scientific basis for tailored urban low-carbon development strategies, spatial optimization, and enhanced precision in carbon emission monitoring. Full article
(This article belongs to the Special Issue Smart Monitoring of Urban Environment Using Remote Sensing)
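The IHS injection step of such a framework can be illustrated with the fast additive IHS model: the difference between a high-resolution intensity band and the multispectral intensity is added to every channel, preserving hue and saturation. In the paper the detail plane would come from NSCT sub-bands rather than a raw pan band; the function name here is assumed.

```python
import numpy as np

def fast_ihs_fuse(rgb, pan):
    """Fast IHS-style fusion: inject the difference between a high-resolution
    intensity band (`pan`) and the RGB intensity into every channel.
    rgb: (H, W, 3) float array; pan: (H, W) float array on the same grid."""
    rgb = rgb.astype(np.float64)
    intensity = rgb.mean(axis=2)               # I component of the IHS model
    detail = pan.astype(np.float64) - intensity
    return rgb + detail[..., None]             # hue/saturation are preserved
```

By construction, the channel mean of the fused image equals the injected intensity band, which is the defining property of this substitution.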

12 pages, 2172 KB  
Article
Instance Segmentation Method for Insulators in Complex Backgrounds Based on Improved SOLOv2
by Ze Chen, Yangpeng Ji, Xiaodong Du, Shaokang Zhao, Zhenfei Huo and Xia Fang
Sensors 2025, 25(17), 5318; https://doi.org/10.3390/s25175318 - 27 Aug 2025
Viewed by 874
Abstract
To precisely delineate the contours of insulators in complex transmission line images obtained from Unmanned Aerial Vehicle (UAV) inspections and thereby facilitate subsequent defect analysis, this study proposes an instance segmentation framework predicated upon an enhanced SOLOv2 model. The proposed framework integrates a preprocessed edge channel, generated through the Non-Subsampled Contourlet Transform (NSCT), which augments the model’s capability to accurately capture the edges of insulators. Moreover, the input image resolution to the network is heightened to 1200 × 1600, permitting more detailed extraction of edges. Rather than the original ResNet + FPN architecture, the improved HRNet is utilized as the backbone to effectively harness multi-scale feature information, thereby enhancing the model’s overall efficacy. In response to the increased input size, there is a reduction in the network’s channel count, concurrent with an increase in the number of layers, ensuring an adequate receptive field without substantially escalating network parameters. Additionally, a Convolutional Block Attention Module (CBAM) is incorporated to refine mask quality and augment object detection precision. Furthermore, to bolster the model’s robustness and minimize annotation demands, a virtual dataset is crafted utilizing the fourth-generation Unreal Engine (UE4). Empirical results reveal that the proposed framework exhibits superior performance, with AP0.50 (90.21%), AP0.75 (83.34%), and AP[0.50:0.95] (67.26%) on a test set consisting of images supplied by the power grid. This framework surpasses existing methodologies and contributes significantly to the advancement of intelligent transmission line inspection. Full article
(This article belongs to the Special Issue Recent Trends and Advances in Intelligent Fault Diagnostics)
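The idea of feeding the network a precomputed edge channel can be sketched generically. Here a Sobel gradient magnitude stands in for the paper's NSCT-derived edge channel, purely for illustration; the function name is my own.

```python
import numpy as np
from scipy import ndimage

def add_edge_channel(img):
    """Append a normalised edge-magnitude channel to a greyscale image,
    as a simplified stand-in for an NSCT-derived edge channel.
    img: (H, W) -> output (H, W, 2)."""
    img = img.astype(np.float64)
    gx = ndimage.sobel(img, axis=1)          # horizontal gradient
    gy = ndimage.sobel(img, axis=0)          # vertical gradient
    edge = np.hypot(gx, gy)
    if edge.max() > 0:
        edge /= edge.max()                   # normalise to [0, 1]
    return np.stack([img, edge], axis=-1)
```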

26 pages, 23383 KB  
Article
Multi-Focus Image Fusion Based on Dual-Channel Rybak Neural Network and Consistency Verification in NSCT Domain
by Ming Lv, Sensen Song, Zhenhong Jia, Liangliang Li and Hongbing Ma
Fractal Fract. 2025, 9(7), 432; https://doi.org/10.3390/fractalfract9070432 - 30 Jun 2025
Cited by 8 | Viewed by 1215
Abstract
In multi-focus image fusion, accurately detecting and extracting focused regions remains a key challenge. Some existing methods suffer from misjudgment of focus areas, resulting in incorrect focus information or the unintended retention of blurred regions in the fused image. To address these issues, this paper proposes a novel multi-focus image fusion method that leverages a dual-channel Rybak neural network combined with consistency verification in the nonsubsampled contourlet transform (NSCT) domain. Specifically, the high-frequency sub-bands produced by NSCT decomposition are processed using the dual-channel Rybak neural network and a consistency verification strategy, allowing for more accurate extraction and integration of salient details. Meanwhile, the low-frequency sub-bands are fused using a simple averaging approach to preserve the overall structure and brightness information. The effectiveness of the proposed method has been thoroughly evaluated through comprehensive qualitative and quantitative experiments conducted on three widely used public datasets: Lytro, MFFW, and MFI-WHU. Experimental results show that our method consistently outperforms several state-of-the-art image fusion techniques, including both traditional algorithms and deep learning-based approaches, in terms of visual quality and objective performance metrics (QAB/F, QCB, QE, QFMI, QMI, QMSE, QNCIE, QNMI, QP, and QPSNR). These results clearly demonstrate the robustness and superiority of the proposed fusion framework in handling multi-focus image fusion tasks. Full article
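The focus-detection-plus-consistency-verification idea can be sketched as follows. Local variance stands in for the dual-channel Rybak network as the focus measure, and a median (majority) filter on the binary decision map plays the role of consistency verification; this is a toy version, not the paper's method.

```python
import numpy as np
from scipy.ndimage import uniform_filter, median_filter

def fuse_with_consistency(a, b, win=7):
    """Multi-focus fusion sketch: pick the source with higher local variance,
    then enforce spatial consistency on the decision map with a median
    (majority) filter before selecting pixels."""
    a = a.astype(np.float64); b = b.astype(np.float64)

    def local_var(x):
        return uniform_filter(x ** 2, win) - uniform_filter(x, win) ** 2

    decision = (local_var(a) >= local_var(b)).astype(np.uint8)
    decision = median_filter(decision, size=win)  # consistency verification
    return np.where(decision == 1, a, b), decision
```

The median filter removes isolated misjudged pixels in the decision map, which is exactly the failure mode (incorrect focus information leaking into the fused image) the abstract describes.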

19 pages, 16134 KB  
Article
Non-Subsampled Contourlet Transform-Based Domain Feedback Information Distillation Network for Suppressing Noise in Seismic Data
by Kang Chen, Guangzhi Zhang, Cong Tang, Qi Ran, Long Wen, Song Han, Han Liang and Haiyong Yi
Appl. Sci. 2025, 15(12), 6734; https://doi.org/10.3390/app15126734 - 16 Jun 2025
Viewed by 714
Abstract
Seismic signal processing often relies on general convolutional neural network (CNN)-based models, which typically focus on features in the time domain while neglecting frequency characteristics. Moreover, down-sampling operations in these models tend to cause the loss of critical high-frequency details. To this end, we propose a feedback information distillation network (FID-N) in the non-subsampled contourlet transform (NSCT) domain to remarkably suppress seismic noise. The method aims to enhance denoising performance by preserving the fine-grained details and frequency characteristics of seismic data. The FID-N mainly consists of a two-path information distillation block used in a recurrent manner to form a feedback mechanism, carrying an output to correct previous states, which fully exploits competitive features from seismic signals and effectively realizes the signal restoration step by step across time. Additionally, the NSCT has an excellent high-frequency response and powerful curve and surface description capabilities. We suggest converting the noise suppression problem into NSCT coefficient prediction, which maintains more detailed high-frequency information and promotes the FID-N to further suppress noise. Extensive experiments on both synthetic and real seismic datasets demonstrated that our method significantly outperformed the SOTA methods, particularly in scenarios with low signal-to-noise ratios and in recovering high-frequency components. Full article

18 pages, 2939 KB  
Article
Feature-Level Image Fusion Scheme for X-Ray Multi-Contrast Imaging
by Zhuo Zuo, Jinglei Luo, Haoran Liu, Xiang Zheng and Guibin Zan
Electronics 2025, 14(1), 210; https://doi.org/10.3390/electronics14010210 - 6 Jan 2025
Cited by 1 | Viewed by 1449
Abstract
Since the mid-1990s, X-ray phase contrast imaging (XPCI) has attracted increasing interest in the industrial and bioimaging fields due to its high sensitivity to weakly absorbing materials and has gained widespread acceptance. XPCI can simultaneously provide three imaging modalities with complementary information, offering enriched details and data. This study proposes an image fusion method that simultaneously retrieves the three complementary channels of XPCI. It integrates block features, non-subsampled contourlet transform (NSCT), and a spiking cortical model (SCM), comprising three steps: (I) Image denoising, (II) Block-based feature-level NSCT-SCM fusion, and (III) Image quality enhancement. Compared with other methods in the XPCI image fusion field, the fusion results of the proposed algorithm demonstrated significant advantages, particularly with an impressive increase in the standard deviation by over 50% compared to traditional NSCT-SCM. The results revealed that the proposed algorithm exhibits high contrast, clear contours, and a short operation time. Experimental outcomes also demonstrated that the block-based feature extraction procedure performs better in retaining edge strength and texture information, with released computational resource consumption, thus, offering new possibilities for the industrial application of XPCI technology. Full article

26 pages, 19476 KB  
Article
Fractal Dimension-Based Multi-Focus Image Fusion via Coupled Neural P Systems in NSCT Domain
by Liangliang Li, Xiaobin Zhao, Huayi Hou, Xueyu Zhang, Ming Lv, Zhenhong Jia and Hongbing Ma
Fractal Fract. 2024, 8(10), 554; https://doi.org/10.3390/fractalfract8100554 - 25 Sep 2024
Cited by 12 | Viewed by 2154
Abstract
In this paper, we introduce an innovative approach to multi-focus image fusion by leveraging the concepts of fractal dimension and coupled neural P (CNP) systems in nonsubsampled contourlet transform (NSCT) domain. This method is designed to overcome the challenges posed by the limitations of camera lenses and depth-of-field effects, which often prevent all parts of a scene from being simultaneously in focus. Our proposed fusion technique employs CNP systems with a local topology-based fusion model to merge the low-frequency components effectively. Meanwhile, for the high-frequency components, we utilize the spatial frequency and fractal dimension-based focus measure (FDFM) to achieve superior fusion performance. The effectiveness of the method is validated through extensive experiments conducted on three benchmark datasets: Lytro, MFI-WHU, and MFFW. The results demonstrate the superiority of our proposed multi-focus image fusion method, showcasing its potential to significantly enhance image clarity across the entire scene. Our algorithm has achieved advantageous values on metrics QAB/F, QCB, QCV, QE, QFMI, QG, QMI, and QNCIE. Full article
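The fractal-dimension ingredient of the focus measure can be illustrated with a classic box-counting estimate on a binary mask. This is a toy version of the paper's fractal dimension-based focus measure (FDFM), shown only to make the concept concrete.

```python
import numpy as np

def box_counting_dimension(mask):
    """Box-counting estimate of fractal dimension for a square binary mask
    whose side is a power of two. Fits the slope of log N(s) against
    log(1/s), where N(s) is the number of occupied boxes of side s."""
    mask = np.asarray(mask, dtype=bool)
    n = mask.shape[0]
    sizes, counts = [], []
    s = n
    while s >= 1:
        # count boxes of side s containing at least one foreground pixel
        view = mask.reshape(n // s, s, n // s, s).any(axis=(1, 3))
        sizes.append(s)
        counts.append(max(int(view.sum()), 1))
        s //= 2
    coeffs = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return float(coeffs[0])  # slope = dimension estimate
```

A filled square estimates close to 2 and a single line close to 1, which matches the intuition that richer (more focused) texture fills space more densely.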

20 pages, 8425 KB  
Article
An NSCT-Based Multifrequency GPR Data-Fusion Method for Concealed Damage Detection
by Junfang Wang, Xiangxiong Li, Huike Zeng, Jianfu Lin, Shiming Xue, Jing Wang and Yanfeng Zhou
Buildings 2024, 14(9), 2657; https://doi.org/10.3390/buildings14092657 - 27 Aug 2024
Viewed by 1778
Abstract
Ground-penetrating radar (GPR) is widely employed as a non-destructive tool for subsurface detection of transport infrastructures. Typically, data collected by high-frequency antennas offer high resolution but limited penetration depth, whereas data from low-frequency antennas provide deeper penetration but lower resolution. To simultaneously achieve high resolution and deep penetration via a composite radargram, a Non-Subsampled Contourlet Transform (NSCT) algorithm-based multifrequency GPR data-fusion method is proposed by integrating NSCT with appropriate fusion rules, respectively, for high-frequency and low-frequency coefficients of decomposed radargrams and by incorporating quantitative assessment metrics. Despite the advantages of NSCT in image processing, its applications to GPR data fusion for concealed damage identification of transport infrastructures are rarely reported. Numerical simulation, tunnel model test, and on-site road test are conducted for performance validation. The comparison between the evaluation metrics before and after fusion demonstrates the effectiveness of the proposed fusion method. Both shallow and deep hollow targets hidden in the simulated concrete structure, real tunnel model, and road are identified through one radargram obtained by fusing different radargrams. The significance of this study is producing a high-quality composite radargram to enable multi-depth concealed damage detection and exempting human interference in the interpretation of multiple radargrams. Full article
(This article belongs to the Special Issue Structural Health Monitoring and Vibration Control)

17 pages, 7142 KB  
Article
Performance Evaluation of L1-Norm-Based Blind Deconvolution after Noise Reduction with Non-Subsampled Contourlet Transform in Light Microscopy Images
by Kyuseok Kim and Ji-Youn Kim
Appl. Sci. 2024, 14(5), 1913; https://doi.org/10.3390/app14051913 - 26 Feb 2024
Cited by 3 | Viewed by 1889
Abstract
Noise and blurring in light microscope images are representative factors that affect accurate identification of cellular and subcellular structures in biological research. In this study, a method for l1-norm-based blind deconvolution after noise reduction with non-subsampled contourlet transform (NSCT) was designed and applied to a light microscope image to analyze its feasibility. The designed NSCT-based algorithm first separated the low- and high-frequency components. Then, the restored microscope image and the deblurred and denoised images were compared and evaluated. In both the simulations and experiments, the average coefficient of variation (COV) value in the image using the proposed NSCT-based algorithm showed similar values compared to the denoised image; moreover, it significantly improved the results compared with that of the degraded image. In particular, we confirmed that the restored image in the experiment improved the COV by approximately 2.52 times compared with the deblurred image, and the NSCT-based proposed algorithm showed the best performance in both the peak signal-to-noise ratio and edge preservation index in the simulation. In conclusion, the proposed algorithm was successfully modeled, and the applicability of the proposed method in light microscope images was proved based on various quantitative evaluation indices. Full article
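The deconvolution half of this pipeline can be illustrated with a much simpler, non-blind counterpart: frequency-domain inversion with Tikhonov regularisation, where the point-spread function is assumed known. The paper's l1-norm blind deconvolution estimates the PSF as well; this sketch only shows why regularised inversion recovers detail that blurring destroyed.

```python
import numpy as np

def tikhonov_deconvolve(blurred, psf, lam=1e-2):
    """Frequency-domain deconvolution with Tikhonov regularisation:
    F = conj(H) * G / (|H|^2 + lam). Assumes circular blur with known psf."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) * G / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(F))
```

The regularisation constant `lam` trades noise amplification against sharpness: near-zero frequencies of the blur kernel are damped rather than divided through.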

20 pages, 1159 KB  
Article
Image Watermarking Using Discrete Wavelet Transform and Singular Value Decomposition for Enhanced Imperceptibility and Robustness
by Mahbuba Begum, Sumaita Binte Shorif, Mohammad Shorif Uddin, Jannatul Ferdush, Tony Jan, Alistair Barros and Md Whaiduzzaman
Algorithms 2024, 17(1), 32; https://doi.org/10.3390/a17010032 - 12 Jan 2024
Cited by 13 | Viewed by 6484
Abstract
Digital multimedia elements such as text, image, audio, and video can be easily manipulated because of the rapid rise of multimedia technology, making data protection a prime concern. Hence, copyright protection, content authentication, and integrity verification are today’s new challenging issues. To address these issues, digital image watermarking techniques have been proposed by several researchers. Image watermarking can be conducted through several transformations, such as discrete wavelet transform (DWT), singular value decomposition (SVD), orthogonal matrix Q and upper triangular matrix R (QR) decomposition, and non-subsampled contourlet transform (NSCT). However, a single transformation cannot simultaneously satisfy all the design requirements of image watermarking, which makes a platform to design a hybrid invisible image watermarking technique in this work. The proposed work combines four-level (4L) DWT and two-level (2L) SVD. The Arnold map initially encrypts the watermark image, and 2L SVD is applied to it to extract the s components of the watermark image. A 4L DWT is applied to the host image to extract the LL sub-band, and then 2L SVD is applied to extract s components that are embedded into the host image to generate the watermarked image. The dynamic-sized watermark maintains a balanced visual impact and non-blind watermarking preserves the quality and integrity of the host image. We have evaluated the performance after applying several intentional and unintentional attacks and found high imperceptibility and improved robustness with enhanced security to the system than existing state-of-the-art methods. Full article
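The DWT+SVD embedding step can be sketched at reduced depth: a one-level Haar transform and a single SVD, with the watermark's singular values added to the host's LL singular values. The paper uses 4-level DWT, 2-level SVD, and Arnold-map scrambling of the watermark, all omitted here; function names are my own.

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2-D Haar transform. Returns (LL, (LH, HL, HH))."""
    a = (x[0::2] + x[1::2]) / 2.0; d = (x[0::2] - x[1::2]) / 2.0   # rows
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0; LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0; HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, (LH, HL, HH)

def haar_idwt2(LL, bands):
    """Inverse of haar_dwt2."""
    LH, HL, HH = bands
    a = np.empty((LL.shape[0], LL.shape[1] * 2)); d = np.empty_like(a)
    a[:, 0::2], a[:, 1::2] = LL + LH, LL - LH
    d[:, 0::2], d[:, 1::2] = HL + HH, HL - HH
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2], x[1::2] = a + d, a - d
    return x

def embed_watermark(host, mark, alpha=0.05):
    """Add alpha-scaled watermark singular values to the host LL sub-band."""
    LL, bands = haar_dwt2(host.astype(np.float64))
    U, s, Vt = np.linalg.svd(LL, full_matrices=False)
    sw = np.linalg.svd(mark.astype(np.float64), compute_uv=False)
    s_mod = s + alpha * sw[:len(s)]
    return haar_idwt2(U @ np.diag(s_mod) @ Vt, bands)
```

Embedding in singular values rather than raw coefficients is what gives this family of schemes its robustness: small geometric and filtering attacks perturb the singular values only slightly.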

30 pages, 20140 KB  
Article
Comparative Analysis of Pixel-Level Fusion Algorithms and a New High-Resolution Dataset for SAR and Optical Image Fusion
by Jinjin Li, Jiacheng Zhang, Chao Yang, Huiyu Liu, Yangang Zhao and Yuanxin Ye
Remote Sens. 2023, 15(23), 5514; https://doi.org/10.3390/rs15235514 - 27 Nov 2023
Cited by 24 | Viewed by 7266
Abstract
Synthetic aperture radar (SAR) and optical images often present different geometric structures and texture features for the same ground object. Through the fusion of SAR and optical images, it can effectively integrate their complementary information, thus better meeting the requirements of remote sensing applications, such as target recognition, classification, and change detection, so as to realize the collaborative utilization of multi-modal images. In order to select appropriate methods to achieve high-quality fusion of SAR and optical images, this paper conducts a systematic review of current pixel-level fusion algorithms for SAR and optical image fusion. Subsequently, eleven representative fusion methods, including component substitution methods (CS), multiscale decomposition methods (MSD), and model-based methods, are chosen for a comparative analysis. In the experiment, we produce a high-resolution SAR and optical image fusion dataset (named YYX-OPT-SAR) covering three different types of scenes, including urban, suburban, and mountain. This dataset and a publicly available medium-resolution dataset are used to evaluate these fusion methods based on three different kinds of evaluation criteria: visual evaluation, objective image quality metrics, and classification accuracy. In terms of the evaluation using image quality metrics, the experimental results show that MSD methods can effectively avoid the negative effects of SAR image shadows on the corresponding area of the fusion result compared with CS methods, while model-based methods exhibit relatively poor performance. Among all of the fusion methods involved in the comparison, the non-subsampled contourlet transform method (NSCT) presents the best fusion results. In the evaluation using image classification, most experimental results show that the overall classification accuracy after fusion is better than that before fusion. 
This indicates that optical-SAR fusion can improve land classification, with the gradient transfer fusion method (GTF) yielding the best classification results among all of these fusion methods. Full article
(This article belongs to the Special Issue Multi-Sensor Systems and Data Fusion in Remote Sensing II)

17 pages, 2655 KB  
Article
Detection Method of Cracks in Expressway Asphalt Pavement Based on Digital Image Processing Technology
by Hui Fang and Na He
Appl. Sci. 2023, 13(22), 12270; https://doi.org/10.3390/app132212270 - 13 Nov 2023
Cited by 9 | Viewed by 2283
Abstract
Considering the limitations of the current pavement crack damage detection methods, this study proposes a method based on digital image processing technology for detecting highway asphalt pavement crack damage. Firstly, a non-subsampled contourlet transform is used to enhance the image of highway asphalt pavement. Secondly, the non-crack regions in the image are screened, and the crack extraction is completed by obtaining and enhancing the crack intensity map. Finally, the features of cracks are extracted and input into the support vector machine for classification and recognition to complete the detection of cracks in highway asphalt pavement. The experimental results show that the proposed method can effectively enhance the quality of a pavement image and precisely extract a crack area from the image with a high level of damage detection accuracy. Full article
(This article belongs to the Special Issue Advances in Renewable Asphalt Pavement Materials)
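The "crack intensity map" stage of such a pipeline can be sketched with a morphological black-hat, which highlights thin dark structures like pavement cracks. Only this one step is shown; the NSCT enhancement, non-crack-region screening, and SVM classification stages of the abstract are omitted, and the function name is assumed.

```python
import numpy as np
from scipy.ndimage import grey_closing

def crack_intensity_map(img, size=5):
    """Black-hat (grey closing minus image) crack-intensity sketch:
    thin dark features narrower than `size` respond strongly.
    Returns a map normalised to [0, 1]."""
    img = img.astype(np.float64)
    bh = grey_closing(img, size=(size, size)) - img
    return bh / bh.max() if bh.max() > 0 else bh
```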

18 pages, 9526 KB  
Article
Multi-Focus Image Fusion via Distance-Weighted Regional Energy and Structure Tensor in NSCT Domain
by Ming Lv, Liangliang Li, Qingxin Jin, Zhenhong Jia, Liangfu Chen and Hongbing Ma
Sensors 2023, 23(13), 6135; https://doi.org/10.3390/s23136135 - 4 Jul 2023
Cited by 11 | Viewed by 2121
Abstract
In this paper, a multi-focus image fusion algorithm via the distance-weighted regional energy and structure tensor in non-subsampled contourlet transform domain is introduced. The distance-weighted regional energy-based fusion rule was used to deal with low-frequency components, and the structure tensor-based fusion rule was used to process high-frequency components; fused sub-bands were integrated with the inverse non-subsampled contourlet transform, and a fused multi-focus image was generated. We conducted a series of simulations and experiments on the multi-focus image public dataset Lytro; the experimental results of 20 sets of data show that our algorithm has significant advantages compared to advanced algorithms and that it can produce clearer and more informative multi-focus fusion images. Full article
(This article belongs to the Section Sensing and Imaging)
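The distance-weighted regional energy rule for low-frequency components can be sketched with a Gaussian-weighted local sum of squares, so that nearer neighbours contribute more energy than distant ones. This is a toy version of the abstract's rule; the weighting kernel and selection logic are assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regional_energy_fuse(a, b, sigma=1.5):
    """Low-frequency fusion rule sketch: at each pixel keep the coefficient
    whose distance-weighted regional energy (Gaussian-weighted local sum of
    squares) is larger."""
    a = a.astype(np.float64); b = b.astype(np.float64)
    ea = gaussian_filter(a ** 2, sigma)   # weights fall off with distance
    eb = gaussian_filter(b ** 2, sigma)
    return np.where(ea >= eb, a, b)
```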

19 pages, 8211 KB  
Article
Multi-Focus Image Fusion for Full-Field Optical Angiography
by Yuchan Jie, Xiaosong Li, Mingyi Wang and Haishu Tan
Entropy 2023, 25(6), 951; https://doi.org/10.3390/e25060951 - 16 Jun 2023
Cited by 5 | Viewed by 2327
Abstract
Full-field optical angiography (FFOA) has considerable potential for clinical applications in the prevention and diagnosis of various diseases. However, owing to the limited depth of focus attainable using optical lenses, only information about blood flow in the plane within the depth of field can be acquired using existing FFOA imaging techniques, resulting in partially unclear images. To produce fully focused FFOA images, an FFOA image fusion method based on the nonsubsampled contourlet transform and contrast spatial frequency is proposed. Firstly, an imaging system is constructed, and the FFOA images are acquired by intensity-fluctuation modulation effect. Secondly, we decompose the source images into low-pass and bandpass images by performing nonsubsampled contourlet transform. A sparse representation-based rule is introduced to fuse the lowpass images to effectively retain the useful energy information. Meanwhile, a contrast spatial frequency rule is proposed to fuse bandpass images, which considers the neighborhood correlation and gradient relationships of pixels. Finally, the fully focused image is produced by reconstruction. The proposed method significantly expands the range of focus of optical angiography and can be effectively extended to public multi-focused datasets. Experimental results confirm that the proposed method outperformed some state-of-the-art methods in both qualitative and quantitative evaluations. Full article
(This article belongs to the Special Issue Methods in Artificial Intelligence and Information Processing II)
