Search Results (41)

Search Parameters:
Keywords = curvelets

16 pages, 2334 KB  
Article
A Comprehensive Image Quality Evaluation of Image Fusion Techniques Using X-Ray Images for Detonator Detection Tasks
by Lynda Oulhissane, Mostefa Merah, Simona Moldovanu and Luminita Moraru
Appl. Sci. 2025, 15(20), 10987; https://doi.org/10.3390/app152010987 - 13 Oct 2025
Viewed by 944
Abstract
Purpose: Luggage X-rays suffer from low contrast, material overlap, and noise; dual-energy imaging reduces ambiguity but creates colour biases that impair segmentation. This study aimed to (1) employ connotative fusion by embedding realistic detonator patches into real X-rays to simulate threats and enhance unattended detection without requiring ground-truth labels; (2) thoroughly evaluate fusion techniques in terms of balancing image quality, information content, contrast, and the preservation of meaningful features. Methods: A total of 1000 X-ray luggage images and 150 detonator images were used for fusion experiments based on deep learning, transform-based, and feature-driven methods. The proposed approach does not need ground truth supervision. Deep learning fusion techniques, including VGG, FusionNet, and AttentionFuse, enable the dynamic selection and combination of features from multiple input images. The transform-based fusion methods convert input images into different domains using mathematical transforms to enhance fine structures. The Nonsubsampled Contourlet Transform (NSCT), Curvelet Transform, and Laplacian Pyramid (LP) are employed. Feature-driven image fusion methods combine meaningful representations for easier interpretation. Singular Value Decomposition (SVD), Principal Component Analysis (PCA), Random Forest (RF), and Local Binary Pattern (LBP) are used to capture and compare texture details across source images. Entropy (EN), Standard Deviation (SD), and Average Gradient (AG) assess factors such as spatial resolution, contrast preservation, and information retention and are used to evaluate the performance of the analysed methods. Results: The results highlight the strengths and limitations of the evaluated techniques, demonstrating their effectiveness in producing sharpened fused X-ray images with clearly emphasized targets and enhanced structural details. 
Conclusions: The Laplacian Pyramid fusion method emerges as the most versatile choice for applications demanding a balanced trade-off. This is evidenced by its overall multi-criteria balance, supported by a composite (geometric mean) score on normalised metrics. It consistently achieves high performance across all evaluated metrics, making it reliable for detecting concealed threats under diverse imaging conditions. Full article
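The EN, SD, and AG metrics used in this study have standard definitions; the sketch below is a minimal NumPy rendition (function names are mine, not the authors' code):

```python
import numpy as np

def entropy(img, bins=256):
    """EN: Shannon entropy of the grey-level histogram (information content)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256), density=True)
    p = hist[hist > 0]
    return float(-np.sum(p * np.log2(p)))

def std_dev(img):
    """SD: standard deviation, a proxy for global contrast."""
    return float(np.std(img))

def avg_gradient(img):
    """AG: mean local gradient magnitude, a proxy for sharpness/detail."""
    gx, gy = np.gradient(img.astype(float))
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))
```

A higher value on each metric indicates, respectively, more retained information, stronger contrast, and finer structural detail in the fused image.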

23 pages, 4505 KB  
Article
Enhanced ResNet-50 with Multi-Feature Fusion for Robust Detection of Pneumonia in Chest X-Ray Images
by Neenu Sebastian and B. Ankayarkanni
Diagnostics 2025, 15(16), 2041; https://doi.org/10.3390/diagnostics15162041 - 14 Aug 2025
Cited by 2 | Viewed by 2605
Abstract
Background/Objectives: Pneumonia is a critical lung infection that demands timely and precise diagnosis, particularly during the evaluation of chest X-ray images. Deep learning is widely used for pneumonia detection but faces challenges such as poor denoising, limited feature diversity, low interpretability, and class imbalance issues. This study aims to develop an optimized ResNet-50 based framework for accurate pneumonia detection. Methods: The proposed approach integrates Multiscale Curvelet Filtering with Directional Denoising (MCF-DD) as a preprocessing step to suppress noise while preserving diagnostic details. Multi-feature fusion is performed by combining deep features extracted from ResNet-50 with handcrafted texture descriptors such as Local Binary Patterns (LBPs), leveraging both semantic and structural information. Precision attention mechanisms are incorporated to enhance interpretability by highlighting diagnostically relevant regions. Results: Validation on the Kaggle chest radiograph dataset demonstrates that the proposed model achieves higher accuracy, sensitivity, specificity, and other performance metrics compared to existing methods. The inclusion of MCF-DD preprocessing, multi-feature fusion, and precision attention contributes to improved robustness and diagnostic reliability. Conclusions: The optimized ResNet-50 framework, enhanced by noise suppression, multi-feature fusion, and attention mechanisms, offers a more accurate and interpretable solution for pneumonia detection from chest X-ray images, addressing key challenges in existing deep learning approaches. Full article
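The handcrafted LBP texture descriptor fused with the ResNet-50 deep features is a standard construction; below is a minimal NumPy sketch of the basic 8-neighbour variant (the paper may use a different radius or a uniform-pattern encoding):

```python
import numpy as np

def lbp_8(img):
    """Basic 8-neighbour Local Binary Pattern codes for interior pixels:
    each neighbour >= centre sets one bit, giving codes in 0..255."""
    img = img.astype(float)
    h, w = img.shape
    c = img[1:-1, 1:-1]
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    codes = np.zeros(c.shape, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        codes += (nb >= c).astype(int) << bit
    return codes

def lbp_histogram(img, bins=256):
    """Normalised LBP histogram: the handcrafted texture descriptor that gets
    concatenated with deep features in a multi-feature fusion pipeline."""
    hist, _ = np.histogram(lbp_8(img), bins=bins, range=(0, 256))
    return hist / hist.sum()
```

The normalised histogram is what would be concatenated with the CNN feature vector before classification.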
(This article belongs to the Special Issue Machine Learning in Precise and Personalized Diagnosis)

23 pages, 19331 KB  
Article
Multi-Focus Image Fusion Based on Fractal Dimension and Parameter Adaptive Unit-Linking Dual-Channel PCNN in Curvelet Transform Domain
by Liangliang Li, Sensen Song, Ming Lv, Zhenhong Jia and Hongbing Ma
Fractal Fract. 2025, 9(3), 157; https://doi.org/10.3390/fractalfract9030157 - 3 Mar 2025
Cited by 15 | Viewed by 1776
Abstract
Multi-focus image fusion is an important method for obtaining fully focused information. In this paper, a novel multi-focus image fusion method based on fractal dimension (FD) and parameter adaptive unit-linking dual-channel pulse-coupled neural network (PAUDPCNN) in the curvelet transform (CVT) domain is proposed. The source images are decomposed into low-frequency and high-frequency sub-bands by CVT, respectively. The FD and PAUDPCNN models, along with consistency verification, are employed to fuse the high-frequency sub-bands, the average method is used to fuse the low-frequency sub-band, and the final fused image is generated by inverse CVT. The experimental results demonstrate that the proposed method shows superior performance in multi-focus image fusion on Lytro, MFFW, and MFI-WHU datasets. Full article
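The transform-domain fusion pattern described here (average the low-frequency sub-band, select high-frequency coefficients by activity) can be sketched with a stand-in two-band split; the paper's curvelet decomposition and FD/PAUDPCNN fusion are replaced by a box-blur low-pass and a max-abs rule, so this mirrors only the structure, not the method:

```python
import numpy as np

def two_band(img, k=5):
    """Stand-in two-band split: a k-by-k box-blur low-pass plus its residual.
    (The paper uses the curvelet transform; this only mimics the structure.)"""
    img = img.astype(float)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    low = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= k * k
    return low, img - low

def fuse(img_a, img_b):
    """Average the low-pass bands, keep the larger-magnitude high-pass
    coefficient per pixel, then recombine (the split is trivially invertible)."""
    la, ha = two_band(img_a)
    lb, hb = two_band(img_b)
    low = (la + lb) / 2.0
    high = np.where(np.abs(ha) >= np.abs(hb), ha, hb)
    return low + high
```

In the real method, the max-abs rule is replaced by FD-driven PAUDPCNN firing maps with consistency verification, and the recombination is the inverse CVT.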

24 pages, 1533 KB  
Article
Unsupervised SAR Image Change Detection Based on Curvelet Fusion and Local Patch Similarity Information Clustering
by Yuhao Huang, Zhihui Xin, Guisheng Liao, Penghui Huang, Guangyu Hou and Rui Zou
Remote Sens. 2025, 17(5), 840; https://doi.org/10.3390/rs17050840 - 27 Feb 2025
Cited by 3 | Viewed by 1545
Abstract
Change detection for synthetic aperture radar (SAR) images effectively identifies and analyzes changes in the ground surface, demonstrating significant value in applications such as urban planning, natural disaster assessment, and environmental protection. Since speckle noise is an inherent characteristic of SAR images, noise suppression has always been a challenging problem. At the same time, existing unsupervised deep learning-based methods rely on pseudo labels, which can produce low-performance networks, and they are highly data-dependent. To this end, we propose a novel unsupervised change detection method based on curvelet fusion and local patch similarity information clustering (CF-LPSICM). First, a curvelet fusion module is designed to exploit the complementary information of different difference images, with separate fusion rules for the low-frequency subband, mid-frequency directional subbands, and high-frequency subband of the curvelet coefficients. The proposed local patch similarity information clustering algorithm then classifies the image pixels to output the final change map. Pixels with similar structures and the weight of spatial information are incorporated into the traditional clustering algorithm in a fuzzy way, which greatly suppresses speckle noise and enhances the structural information of the changed areas. Experimental results and analysis on five datasets verify the effectiveness and robustness of the proposed method. Full article
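SAR change detection pipelines of this kind start from difference images; two common operators, log-ratio and mean-ratio, are plausibly among the "different difference images" fused here (the paper does not name them in this abstract). A minimal sketch:

```python
import numpy as np

def log_ratio(img1, img2, eps=1e-6):
    """Log-ratio difference image: robust to SAR's multiplicative speckle,
    since the ratio turns multiplicative noise into an additive term."""
    return np.abs(np.log((img2 + eps) / (img1 + eps)))

def mean_ratio(img1, img2, eps=1e-6):
    """Mean-ratio difference image: 1 - min(r, 1/r), large where the two
    acquisitions differ and zero where they agree."""
    r = (img1 + eps) / (img2 + eps)
    return 1.0 - np.minimum(r, 1.0 / r)
```

Fusing such complementary operators in the curvelet domain lets the strengths of one compensate for the weaknesses of the other before clustering.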
(This article belongs to the Special Issue Spaceborne High-Resolution SAR Imaging (Second Edition))

29 pages, 6970 KB  
Article
Traumatic Brain Injury Structure Detection Using Advanced Wavelet Transformation Fusion Algorithm with Proposed CNN-ViT
by Abdullah, Ansar Siddique, Zulaikha Fatima and Kamran Shaukat
Information 2024, 15(10), 612; https://doi.org/10.3390/info15100612 - 6 Oct 2024
Cited by 2 | Viewed by 2498
Abstract
Detecting Traumatic Brain Injuries (TBI) through imaging remains challenging due to limited sensitivity in current methods. This study addresses the gap by proposing a novel approach integrating deep-learning algorithms and advanced image-fusion techniques to enhance detection accuracy. The method combines contextual and visual models to effectively assess injury status. Using a dataset of repeat mild TBI (mTBI) cases, we compared various image-fusion algorithms: PCA (89.5%), SWT (89.69%), DCT (89.08%), HIS (83.3%), and averaging (80.99%). Our proposed hybrid model achieved a significantly higher accuracy of 98.78%, demonstrating superior performance. Metrics including the Dice coefficient (98%), sensitivity (97%), and specificity (98%) verified that the strategy is efficient in improving image quality and feature extraction. Additional validations with entropy, average pixel intensity, standard deviation, correlation coefficient, and edge similarity measure confirmed the robustness of the fused images. The hybrid CNN-ViT model, integrating curvelet transform features, was trained and validated on a comprehensive dataset of 24 types of brain injuries. The overall accuracy was 99.8%, with precision, recall, and F1-score of 99.8%. The average PSNR was 39.0 dB, SSIM was 0.99, and MI was 1.0. Cross-validation across five folds proved the model's dependability and generalizability. In conclusion, this study introduces a promising method for TBI detection, leveraging advanced image-fusion and deep-learning techniques, significantly enhancing medical imaging and diagnostic capabilities for brain injuries. Full article
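The average PSNR figure quoted above follows the standard definition; a minimal sketch, assuming an 8-bit peak value (not the authors' evaluation code):

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between a reference and a test image.
    Identical images give infinity; larger values mean less distortion."""
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else float(10.0 * np.log10(peak ** 2 / mse))
```

A PSNR around 39 dB, as reported, corresponds to a mean squared error of roughly 8 grey levels squared on an 8-bit scale.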
(This article belongs to the Special Issue Real-World Applications of Machine Learning Techniques)

19 pages, 26310 KB  
Article
Concrete Crack Detection and Segregation: A Feature Fusion, Crack Isolation, and Explainable AI-Based Approach
by Reshma Ahmed Swarna, Muhammad Minoar Hossain, Mst. Rokeya Khatun, Mohammad Motiur Rahman and Arslan Munir
J. Imaging 2024, 10(9), 215; https://doi.org/10.3390/jimaging10090215 - 31 Aug 2024
Cited by 8 | Viewed by 4389
Abstract
Scientific knowledge of image-based crack detection methods is limited in understanding their performance across diverse crack sizes, types, and environmental conditions. Builders and engineers often face difficulties with image resolution, detecting fine cracks, and differentiating between structural and non-structural issues. Enhanced algorithms and analysis techniques are needed for more accurate assessments. Hence, this research aims to generate an intelligent scheme that can recognize the presence of cracks and visualize the percentage of cracks from an image, along with an explanation. The proposed method fuses features from concrete surface images through a ResNet-50 convolutional neural network (CNN) and a curvelet transform handcrafted (HC) method, optimized by linear discriminant analysis (LDA); the eXtreme gradient boosting (XGB) classifier then uses these features to recognize cracks. This study evaluates several CNN models, including VGG-16, VGG-19, Inception-V3, and ResNet-50, and various HC techniques, such as the wavelet transform, contourlet transform, and curvelet transform, for feature extraction. Principal component analysis (PCA) and LDA are assessed for feature optimization. For classification, XGB, random forest (RF), adaptive boosting (AdaBoost), and category boosting (CatBoost) are tested. To isolate and quantify the crack region, this research combines image thresholding, morphological operations, and contour detection with the convex hull method to form a novel algorithm. Two explainable AI (XAI) tools, local interpretable model-agnostic explanations (LIMEs) and gradient-weighted class activation mapping++ (Grad-CAM++), are integrated with the proposed method to enhance result clarity. This research introduces a novel feature fusion approach that enhances crack detection accuracy and interpretability.
The method demonstrates superior performance by achieving 99.93% and 99.69% accuracy on two existing datasets, outperforming state-of-the-art methods. Additionally, the development of an algorithm for isolating and quantifying crack regions represents a significant advancement in image processing for structural analysis. The proposed approach provides a robust and reliable tool for real-time crack detection and assessment in concrete structures, facilitating timely maintenance and improving structural safety. By offering detailed explanations of the model’s decisions, the research addresses the critical need for transparency in AI applications, thus increasing trust and adoption in engineering practice. Full article
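The crack isolation and quantification step can be illustrated in a drastically simplified form: a single global threshold plus a pixel count, standing in for the paper's thresholding + morphology + contour detection + convex-hull algorithm (the threshold value here is an arbitrary assumption):

```python
import numpy as np

def crack_percentage(gray, thresh=90):
    """Flag pixels darker than `thresh` as crack and report their percentage.
    A drastic simplification: the actual pipeline additionally applies
    morphological cleaning, contour detection, and convex hulls to the mask."""
    mask = gray < thresh
    return 100.0 * float(mask.mean()), mask
```

The returned mask is what a morphological opening and convex-hull pass would subsequently refine before the percentage is reported.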
(This article belongs to the Special Issue Image Processing and Computer Vision: Algorithms and Applications)

20 pages, 2121 KB  
Article
Data-Proximal Complementary ℓ1-TV Reconstruction for Limited Data Computed Tomography
by Simon Göppel, Jürgen Frikel and Markus Haltmeier
Mathematics 2024, 12(10), 1606; https://doi.org/10.3390/math12101606 - 20 May 2024
Cited by 1 | Viewed by 1956
Abstract
In a number of tomographic applications, data cannot be fully acquired, resulting in severely underdetermined image reconstruction. Conventional methods in such cases lead to reconstructions with significant artifacts. To overcome these artifacts, regularization methods are applied that incorporate additional information. An important example is TV reconstruction, which is known to be efficient in compensating for missing data and reducing reconstruction artifacts. On the other hand, tomographic data are also contaminated by noise, which poses an additional challenge. The use of a single regularizer must therefore account for both the missing data and the noise. A particular regularizer may not be ideal for both tasks. For example, the TV regularizer is a poor choice for noise reduction over multiple scales, in which case ℓ1 curvelet regularization methods are well suited. To address this issue, in this paper, we present a novel variational regularization framework that combines the advantages of different regularizers. The basic idea of our framework is to perform reconstruction in two stages. The first stage is mainly aimed at accurate reconstruction in the presence of noise, and the second stage is aimed at artifact reduction. Both reconstruction stages are connected by a data proximity condition. The proposed method is implemented and tested for limited-view CT using a combined curvelet–TV approach. We define and implement a curvelet transform adapted to the limited-view problem and illustrate the advantages of our approach in numerical experiments. Full article
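ℓ1 regularization of the kind referred to here is typically solved with iterative soft-thresholding; a generic ISTA sketch for a small dense operator follows (this is not the paper's curvelet–TV scheme, just the textbook building block):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal map of t*||.||_1: shrink every coefficient toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, n_iter=100):
    """Minimise 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the data-fit gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
    return x
```

In a curvelet setting, `A` would compose the forward projection with the inverse curvelet synthesis, so the soft-thresholding acts on curvelet coefficients rather than pixels.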
(This article belongs to the Section E: Applied Mathematics)

20 pages, 8469 KB  
Article
Sparsity and M-Estimators in RFI Mitigation for Typical Radio Astrophysical Signals
by Hao Shan, Ming Jiang, Jianping Yuan, Xiaofeng Yang, Wenming Yan, Zhen Wang and Na Wang
Universe 2023, 9(12), 488; https://doi.org/10.3390/universe9120488 - 23 Nov 2023
Viewed by 1993
Abstract
In this paper, radio frequency interference (RFI) mitigation by robust maximum likelihood estimators (M-estimators) for typical radio astrophysical signals of, e.g., pulsars and fast radio bursts (FRBs), will be investigated. The current status reveals several defects in signal modeling, manual factors, and a limited range of RFI morphologies. The goal is to avoid these defects while realizing RFI mitigation with an attempt at feature detection for FRB signals. The motivation behind this paper is to combine the essential signal sparsity with the M-estimators, which are pertinent to the RFI outliers. Thus, the sparsity of the signals is fully explored. Consequently, typical isotropic and anisotropic features of multichannel radio signals are accurately approximated by sparse transforms. The RFI mitigation problem is thus modeled as a sparsity-promoting robust nonlinear estimator. This general model can reduce manual factors and is expected to be effective in mitigating most types of RFI, thus alleviating the defects. Comparative studies are carried out among three classes of M-estimators combined with several sparse transforms. Numerical experiments focus on real radio signals of several pulsars and FRB 121102. There are two discoveries in the high-frequency components of FRB 121102-11A. First, highly varying narrow-band isotropic flux regions of superradiance are discovered. Second, emission centers revealed by the isotropic features can be completely separated in the time axis. The results have demonstrated that the M-estimator-based sparse optimization frameworks show competitive results and have potential application prospects. Full article
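M-estimators work by down-weighting outliers such as RFI spikes. The classic Huber weight function is one plausible member of the classes compared (the abstract does not name the exact estimators, so this is an illustrative assumption):

```python
import numpy as np

def huber_weights(residuals, delta=1.345):
    """Huber M-estimator weights: 1 for small residuals (quadratic regime),
    delta/|r| for large ones, so outliers like RFI spikes are down-weighted
    instead of dominating the fit as they would under least squares."""
    r = np.abs(residuals)
    return np.where(r <= delta, 1.0, delta / np.maximum(r, 1e-12))
```

Combining such weights with a sparsity-promoting penalty on the transform coefficients gives exactly the robust, sparse estimation problem the paper formulates.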
(This article belongs to the Special Issue A New Horizon of Pulsar and Neutron Star: The 55-Year Anniversary)

17 pages, 1470 KB  
Article
Gynecological Healthcare: Unveiling Pelvic Masses Classification through Evolutionary Gravitational Neocognitron Neural Network Optimized with Nomadic People Optimizer
by M. Deeparani and M. Kalamani
Diagnostics 2023, 13(19), 3131; https://doi.org/10.3390/diagnostics13193131 - 5 Oct 2023
Cited by 6 | Viewed by 2004
Abstract
Accurate and early detection of a malignant pelvic mass is important for suitable referral, triage, and further care for women diagnosed with a pelvic mass. Several deep learning (DL) methods have been proposed to detect pelvic masses, but existing methods cannot provide sufficient accuracy and increase computational time when classifying a pelvic mass. To overcome these issues, in this manuscript, an evolutionary gravitational neocognitron neural network optimized with the nomadic people optimizer (EGNNN-NPOA-PM-UI) is proposed for classifying gynecological abdominal pelvic masses. Real-time ultrasound pelvic mass images are augmented using random transformations. The augmented images are then given to a 3D Tsallis entropy-based multilevel thresholding technique for extraction of the ROI region, and its features are further extracted with the fast discrete curvelet transform with wrapping (FDCT-WRP) method. The method was executed in Python, and its efficiency was analyzed under several performance metrics. The proposed EGNNN-NPOA-PM-UI method attained 99.8% accuracy and predicts pelvic masses more accurately than the existing methods. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)

16 pages, 5303 KB  
Article
A Curvelet-Transform-Based Image Fusion Method Incorporating Side-Scan Sonar Image Features
by Xinyang Zhao, Shaohua Jin, Gang Bian, Yang Cui, Junsen Wang and Bo Zhou
J. Mar. Sci. Eng. 2023, 11(7), 1291; https://doi.org/10.3390/jmse11071291 - 25 Jun 2023
Cited by 9 | Viewed by 2392
Abstract
Current methods of fusing side-scan sonar images fail to tackle the issues of shadow removal, preservation of information from adjacent strip images, and maintenance of image clarity and contrast. To address these deficiencies, a novel curvelet-transform-based approach that integrates the complementary attribute of details from side-scan sonar strip images is proposed. By capitalizing on the multiple scales and orientations of the curvelet transform and its intricate hierarchical nature, myriad fusion rules were applied at the corresponding frequency levels, enabling a more-tailored image fusion technique for side-scan sonar imagery. The experimental results validated the effectiveness of this method in preserving valuable information from side-scan sonar images, reducing the presence of shadows and ensuring both clarity and contrast in the fused images. By meeting the aforementioned challenges encountered in existing methodologies, this approach demonstrated great practical significance. Full article

19 pages, 5449 KB  
Article
Optimal Deep Learning Architecture for Automated Segmentation of Cysts in OCT Images Using X-Let Transforms
by Reza Darooei, Milad Nazari, Rahele Kafieh and Hossein Rabbani
Diagnostics 2023, 13(12), 1994; https://doi.org/10.3390/diagnostics13121994 - 7 Jun 2023
Cited by 7 | Viewed by 2261
Abstract
The retina is a thin, light-sensitive membrane with a multilayered structure found in the back of the eyeball. There are many types of retinal disorders. The two most prevalent retinal illnesses are Age-Related Macular Degeneration (AMD) and Diabetic Macular Edema (DME). Optical Coherence Tomography (OCT) is a vital retinal imaging technology. X-lets (such as the curvelet, DTCWT, contourlet, etc.) have several benefits in image processing and analysis: they can capture both local and non-local features of an image simultaneously. The aim of this paper is to propose an optimal deep learning architecture based on sparse basis functions for the automated segmentation of cystic areas in OCT images. Different X-let transforms were used to produce different network inputs, including the curvelet, Dual-Tree Complex Wavelet Transform (DTCWT), circlet, and contourlet. Additionally, three different combinations of these transforms are suggested to achieve more accurate segmentation results. Various metrics, including the Dice coefficient, sensitivity, false positive ratio, Jaccard index, and qualitative results, were evaluated to find the optimal networks and combinations of the X-lets' sub-bands. The proposed network was tested on both original and noisy datasets. The results show the following: (1) the contourlet achieves the optimal results among the different combinations; (2) the five-channel decomposition using high-pass sub-bands of the contourlet transform achieves the best performance; and (3) the five-channel decomposition using high-pass sub-bands outperforms the state-of-the-art methods, especially on the noisy dataset. The proposed method has the potential to improve the accuracy and speed of the segmentation process in clinical settings, facilitating the diagnosis and treatment of retinal diseases. Full article
(This article belongs to the Section Biomedical Optics)

24 pages, 14664 KB  
Article
Night Vision Anti-Halation Algorithm of Different-Source Image Fusion Based on Low-Frequency Sequence Generation
by Quanmin Guo, Jiahao Liang and Hanlei Wang
Mathematics 2023, 11(10), 2237; https://doi.org/10.3390/math11102237 - 10 May 2023
Cited by 3 | Viewed by 2701
Abstract
The abuse of high-beam headlights dazzles oncoming drivers when vehicles meet at night, which can easily cause traffic accidents. Existing night vision anti-halation algorithms based on different-source image fusion can eliminate halation and obtain fusion images with rich color and details. However, these algorithms mistakenly eliminate some important high-brightness information. To address this problem, a night vision anti-halation algorithm based on low-frequency sequence generation is proposed. A low-frequency sequence generation model is constructed to generate image sequences with different degrees of halation elimination. According to the estimated illuminance of the image sequence, the proposed sequence synthesis based on visual information maximization assigns large weights to areas with good brightness so as to obtain a fusion image without halation and with rich details. In four typical halation scenes covering most cases of night driving, the proposed algorithm effectively eliminates halation while retaining useful high-brightness information and shows better universality than seven other advanced comparison algorithms. The experimental results show that the fusion image obtained by the proposed algorithm is more suitable for human visual perception and helps to improve night driving safety. Full article
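The illuminance-based weighting idea can be sketched as a brightness-proximity blend over the generated sequence. This is a stand-in with an assumed Gaussian weight and images normalized to [0, 1], not the paper's visual-information-maximization rule:

```python
import numpy as np

def synthesize(seq, target=0.5, sigma=0.2):
    """Blend a halation-elimination sequence, weighting each pixel by how
    close its brightness is to a well-exposed target (images in [0, 1]).
    Halated (near-saturated) regions get small weights, well-lit ones large."""
    seq = [im.astype(float) for im in seq]
    weights = [np.exp(-((im - target) ** 2) / (2.0 * sigma ** 2)) for im in seq]
    total = np.sum(weights, axis=0) + 1e-12
    return np.sum([w * im for w, im in zip(weights, seq)], axis=0) / total
```

Because the weights are per pixel, a region that is halated in one sequence member but well exposed in another is taken mostly from the latter.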
(This article belongs to the Special Issue Advances of Mathematical Image Processing)

20 pages, 4940 KB  
Article
Correlated-Weighted Statistically Modeled Contourlet and Curvelet Coefficient Image-Based Breast Tumor Classification Using Deep Learning
by Shahriar M. Kabir and Mohammed I. H. Bhuiyan
Diagnostics 2023, 13(1), 69; https://doi.org/10.3390/diagnostics13010069 - 26 Dec 2022
Cited by 7 | Viewed by 2708
Abstract
Deep learning-based automatic classification of breast tumors using parametric imaging techniques from ultrasound (US) B-mode images is still an exciting research area. The Rician inverse Gaussian (RiIG) distribution is currently emerging as an appropriate example of statistical modeling. This study presents a new approach of correlated-weighted contourlet-transformed RiIG (CWCtr-RiIG) and curvelet-transformed RiIG (CWCrv-RiIG) image-based deep convolutional neural network (CNN) architecture for breast tumor classification from B-mode ultrasound images. A comparative study with other statistical models, such as the Nakagami and normal inverse Gaussian (NIG) distributions, is also presented. The weighting here refers to scaling the contourlet and curvelet sub-band coefficient images by their correlation with the corresponding RiIG statistically modeled images. Using three freely accessible datasets (Mendeley, UDIAT, and BUSI), it is demonstrated that the proposed approach can provide more than 98 percent accuracy, sensitivity, specificity, NPV, and PPV values using the CWCtr-RiIG images. On the same datasets, the suggested method offers superior classification performance to several other existing strategies. Full article
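The correlation weighting described above reduces, per sub-band, to a Pearson coefficient used as a scalar weight; a hedged NumPy sketch (the function names and the scalar-weight simplification are mine, not the paper's formulation):

```python
import numpy as np

def correlation_weight(coeff_img, model_img):
    """Pearson correlation between a sub-band coefficient image and its
    statistically modelled (e.g. RiIG) counterpart, used as a scalar weight."""
    return float(np.corrcoef(coeff_img.ravel(), model_img.ravel())[0, 1])

def weighted_subbands(subbands, models):
    """Scale each coefficient image by its correlation with its model image;
    sub-bands that match the statistical model well are emphasised."""
    return [correlation_weight(s, m) * s for s, m in zip(subbands, models)]
```

The weighted sub-band images would then be reassembled into the parametric images fed to the CNN.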
(This article belongs to the Special Issue Frontline of Breast Imaging)

16 pages, 1354 KB  
Article
Optimized Convolutional Neural Network Recognition for Athletes’ Pneumonia Image Based on Attention Mechanism
by Hui Zhang, Ruipu Ma, Yingao Zhao, Qianqian Zhang, Quandang Sun and Yuanyuan Ma
Entropy 2022, 24(10), 1434; https://doi.org/10.3390/e24101434 - 8 Oct 2022
Cited by 3 | Viewed by 2005
Abstract
After high-intensity exercise, athletes face a greatly increased risk of pneumonia infection because their immune function is weakened. Diseases caused by pulmonary bacterial or viral infections can have serious consequences for athletes' health in a short period of time and can even lead to their early retirement. Therefore, early diagnosis is key to athletes' early recovery from pneumonia. Existing identification methods rely too much on professional medical knowledge, which leads to inefficient diagnosis given the shortage of medical staff. To solve this problem, this paper presents an optimized convolutional neural network recognition method based on an attention mechanism after image enhancement. For the collected images of athlete pneumonia, we first use a contrast boost to adjust the coefficient distribution. Then, the edge coefficients are extracted and enhanced to highlight the edge information, and enhanced images of the athletes' lungs are obtained using the inverse curvelet transform. Finally, an optimized convolutional neural network with an attention mechanism is used to identify the athlete lung images. A series of experimental results shows that, compared with typical image recognition methods based on DecisionTree and RandomForest, the proposed method has higher recognition accuracy for lung images. Full article

19 pages, 61563 KB  
Article
Remote Sensing Image Fusion Based on Morphological Convolutional Neural Networks with Information Entropy for Optimal Scale
by Bairu Jia, Jindong Xu, Haihua Xing and Peng Wu
Sensors 2022, 22(19), 7339; https://doi.org/10.3390/s22197339 - 27 Sep 2022
Cited by 3 | Viewed by 2416
Abstract
Remote sensing image fusion is a fundamental issue in the field of remote sensing. In this paper, we propose a remote sensing image fusion method based on optimal scale morphological convolutional neural networks (CNN) using the principle of entropy from information theory. We use an attentional CNN to fuse the optimal cartoon and texture components of the original images to obtain a high-resolution multispectral image. We obtain the cartoon and texture components using sparse decomposition-morphological component analysis (MCA) with an optimal threshold value determined by calculating the information entropy of the fused image. In the sparse decomposition process, the local discrete cosine transform dictionary and the curvelet transform dictionary compose the MCA dictionary. We sparsely decompose the original remote sensing images into a texture component and a cartoon component at an optimal scale using the information entropy to control the dictionary parameter. Experimental results show that the remote sensing image fusion method proposed in this paper can effectively retain the information of the original image, improve the spatial resolution and spectral fidelity, and provide a new idea for image fusion from the perspective of multi-morphological deep learning. Full article
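Using information entropy to pick the optimal decomposition scale can be sketched as a simple search over candidate thresholds; `fuse_at` is a hypothetical caller-supplied routine mapping a candidate MCA threshold to the corresponding fused image:

```python
import numpy as np

def image_entropy(img, bins=64):
    """Shannon entropy of an image's intensity histogram."""
    hist, _ = np.histogram(img, bins=bins)
    p = hist[hist > 0] / hist.sum()
    return float(-np.sum(p * np.log2(p)))

def best_threshold(fuse_at, thresholds):
    """Pick the decomposition threshold whose fused image has maximum entropy;
    higher entropy is taken as a proxy for more retained information."""
    return max(thresholds, key=lambda t: image_entropy(fuse_at(t)))
```

In the paper's pipeline, each candidate threshold controls the cartoon/texture MCA split, and the entropy of the resulting fusion selects the optimal scale.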
(This article belongs to the Special Issue State-of-the-Art Multimodal Remote Sensing Technologies)
