Search Results (50)

Search Parameters:
Keywords = enhanced maximum Relevance and minimum Redundancy

29 pages, 2539 KB  
Article
Inertial Sensor-Based Recognition of Field Hockey Activities Using a Hybrid Feature Selection Framework
by Norazman Shahar, Muhammad Amir As’ari, Mohamad Hazwan Mohd Ghazali, Nasharuddin Zainal, Mohd Asyraf Zulkifley, Ahmad Asrul Ibrahim, Zaid Omar, Mohd Sabirin Rahmat, Kok Beng Gan and Asraf Mohamed Moubark
Sensors 2025, 25(24), 7615; https://doi.org/10.3390/s25247615 - 16 Dec 2025
Viewed by 349
Abstract
Accurate recognition of complex human activities from wearable sensors plays a critical role in sports analytics and human performance monitoring. However, the high dimensionality and redundancy of raw inertial data can hinder model performance and interpretability. This study proposes a hybrid feature selection framework that combines Minimum Redundancy Maximum Relevance (MRMR) and Regularized Neighborhood Component Analysis (RNCA) to improve classification accuracy while reducing computational complexity. Multi-sensor inertial data were collected from field hockey players performing six activity types. Time- and frequency-domain features were extracted from four body-mounted inertial measurement units (IMUs), resulting in 432 initial features. MRMR, combined with Pearson correlation filtering (|ρ| > 0.7), eliminated redundant features, and RNCA further refined the subset by learning supervised feature weights. The final model achieved a test accuracy of 92.82% and F1-score of 86.91% using only 83 features, surpassing the MRMR-only configuration and slightly outperforming the full feature set. This performance was supported by reduced training time, improved confusion matrix profiles, and enhanced class separability in PCA and t-SNE visualizations. These results demonstrate the effectiveness of the proposed two-stage feature selection method in optimizing classification performance while enhancing model efficiency and interpretability for real-time human activity recognition systems. Full article
(This article belongs to the Section Intelligent Sensors)
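
The two-stage selection this abstract describes can be illustrated with a short, self-contained sketch. The snippet below is only a minimal approximation under stated assumptions: scikit-learn's mutual_info_classif stands in for the MRMR relevance term, the |ρ| > 0.7 Pearson filter is implemented greedily, the RNCA refinement stage is omitted, and the synthetic matrix X merely stands in for the 432 IMU features.

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def correlation_filter(X, threshold=0.7):
    """Drop features whose absolute Pearson correlation with an
    already-kept feature exceeds the threshold."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    keep = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in keep):
            keep.append(j)
    return keep

def greedy_mrmr(X, y, n_select=83):
    """Greedy mRMR: maximise relevance (mutual information with y)
    minus mean absolute correlation with already-selected features."""
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]
    candidates = set(range(X.shape[1])) - set(selected)
    while len(selected) < n_select and candidates:
        scores = {j: relevance[j] - corr[j, selected].mean() for j in candidates}
        best = max(scores, key=scores.get)
        selected.append(best)
        candidates.remove(best)
    return selected

# Illustrative use with synthetic data standing in for the 432 IMU features.
rng = np.random.default_rng(0)
X = rng.normal(size=(600, 432))
y = rng.integers(0, 6, size=600)          # six activity classes
kept = correlation_filter(X, threshold=0.7)
subset = greedy_mrmr(X[:, kept], y, n_select=83)
print(len(kept), "features after correlation filter,", len(subset), "after mRMR")
```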

13 pages, 1931 KB  
Article
Habitat Model Based on Lung CT for Predicting Brain Metastasis in Patients with Non-Small Cell Lung Cancer
by Feiyu Xing, Yan Lei, Qin Zhong, Yan Wu, Huan Liu and Yuanliang Xie
Diagnostics 2025, 15(23), 3043; https://doi.org/10.3390/diagnostics15233043 - 28 Nov 2025
Viewed by 526
Abstract
Background: In lung cancer, the occurrence of brain metastasis (BM) is closely associated with the heterogeneity of the primary lung tumor. This study aimed to develop a habitat-based radiomics model using enhanced computed tomography (CT) lung imaging to predict the risk of BM in patients with non-small cell lung cancer (NSCLC). Methods: A retrospective cohort of 475 patients with NSCLC who underwent enhanced CT of the lungs prior to anti-tumor treatment was analyzed. Volumetric CT images were segmented into tumor subregions via k-means clustering based on voxel intensity and entropy values. Radiomics features were extracted from these subregions, and predictive features were selected using minimum redundancy maximum relevance and least absolute shrinkage and selection operator regression. Two logistic regression models were constructed: a whole-tumor radiomics model and a habitat-based model integrating subregional heterogeneity. Model performance was evaluated via receiver operating characteristic analysis and compared via DeLong’s test. Results: A total of 195 eligible patients with NSCLC were included. The volume of interest of the whole tumor was clustered into three subregions based on voxel intensity and entropy values. In the training cohort (n = 138), the areas under the curve of the clinical model, the whole-tumor model, and the habitat-based model were 0.639 (95% confidence interval [CI]: 0.543–0.731), 0.728 (95% CI: 0.645–0.812), and 0.819 (95% CI: 0.744–0.894), respectively. The habitat-based model demonstrated superior predictive performance compared with the whole-tumor model (p = 0.022). Conclusions: The habitat-based radiomics model outperformed the whole-tumor model in terms of predicting BM, highlighting the importance of subregional tumor heterogeneity analysis. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
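
The habitat step (k-means clustering of voxels by intensity and entropy) can be sketched as follows; this is an illustrative 2D approximation rather than the study's pipeline, and the entropy filter, slice, and mask are assumed stand-ins for the volumetric CT data.

```python
import numpy as np
from sklearn.cluster import KMeans
from skimage.filters.rank import entropy
from skimage.morphology import disk
from skimage.util import img_as_ubyte

def habitat_labels(ct_slice, mask, n_subregions=3):
    """Cluster voxels inside the tumour mask into subregions ("habitats")
    using per-voxel intensity and local entropy as features."""
    norm = (ct_slice - ct_slice.min()) / (np.ptp(ct_slice) + 1e-8)
    ent = entropy(img_as_ubyte(norm), disk(3))          # local texture measure
    feats = np.column_stack([ct_slice[mask], ent[mask]])
    labels = KMeans(n_clusters=n_subregions, n_init=10, random_state=0).fit_predict(feats)
    out = np.zeros_like(ct_slice, dtype=int)
    out[mask] = labels + 1                               # 0 stays background
    return out

# Illustrative call on a synthetic slice and circular mask.
rng = np.random.default_rng(0)
img = rng.normal(0, 1, size=(128, 128))
yy, xx = np.mgrid[:128, :128]
mask = (yy - 64) ** 2 + (xx - 64) ** 2 < 40 ** 2
subregions = habitat_labels(img, mask)
print(np.unique(subregions))                             # background + 3 habitats
```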

26 pages, 1271 KB  
Article
Predicting the Forest Fire Duration Enriched with Meteorological Data Using Feature Construction Techniques
by Constantina Kopitsa, Ioannis G. Tsoulos, Andreas Miltiadous and Vasileios Charilogis
Symmetry 2025, 17(11), 1785; https://doi.org/10.3390/sym17111785 - 22 Oct 2025
Viewed by 666
Abstract
The spread of contemporary artificial intelligence technologies, particularly machine learning, has significantly enhanced the capacity to predict asymmetrical natural disasters. Wildfires constitute a prominent example, as machine learning can be employed to forecast not only their spatial extent but also their environmental and socio-economic impacts, propagation dynamics, symmetrical or asymmetrical patterns, and even their duration. Such predictive capabilities are of critical importance for effective wildfire management, as they inform the strategic allocation of material resources and the optimal deployment of human personnel in the field. Beyond that, examination of symmetrical or asymmetrical patterns in fires helps us to understand the causes and dynamics of their spread. Leveraging machine learning tools has become imperative in our era, as climate change has disrupted traditional wildfire management models due to prolonged droughts, rising temperatures, asymmetrical patterns, and the increasing frequency of extreme weather events. For this reason, our research seeks to fully exploit the potential of Principal Component Analysis (PCA), Minimum Redundancy Maximum Relevance (MRMR), and Grammatical Evolution, both for constructing Artificial Features and for generating Neural Network Architectures. For this purpose, we utilized the highly detailed and publicly available symmetrical datasets provided by the Hellenic Fire Service for the years 2014–2021, which we further enriched with meteorological data corresponding to the prevailing conditions at both the onset and the suppression of each wildfire event. The research concluded that the Feature Construction technique based on Grammatical Evolution, which combines both symmetrical and asymmetrical conditions with the weather phenomena, outperforms the other methods in terms of stability and accuracy. In our research, the asymmetric phenomenon is therefore defined as the unpredictable outcome of climate change (meteorological data), which prolongs the duration of forest fires over time. Specifically, when predicting wildfire duration with Feature Construction, the mean error was 8.25%, corresponding to an overall accuracy of 91.75%. Full article
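
Grammatical Evolution itself is not reproduced here, but the idea of evolving artificial features and comparing them against raw inputs can be sketched with gplearn's SymbolicTransformer, a genetic-programming transformer used purely as a stand-in; the data and target below are synthetic.

```python
import numpy as np
from gplearn.genetic import SymbolicTransformer
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for fire records enriched with meteorological columns.
rng = np.random.default_rng(0)
X = rng.normal(size=(800, 12))                  # e.g. temperature, wind, humidity, ...
y = 2.0 * X[:, 0] * X[:, 3] + X[:, 5] ** 2 + rng.normal(0, 0.1, 800)  # "duration"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Evolve a handful of artificial features from the raw inputs.
constructor = SymbolicTransformer(generations=10, population_size=500,
                                  n_components=5,
                                  function_set=('add', 'sub', 'mul', 'div'),
                                  random_state=0)
F_tr = constructor.fit_transform(X_tr, y_tr)
F_te = constructor.transform(X_te)

# Compare a model on raw features against one on constructed features.
raw = LinearRegression().fit(X_tr, y_tr)
constructed = LinearRegression().fit(F_tr, y_tr)
print("raw MAE:        ", mean_absolute_error(y_te, raw.predict(X_te)))
print("constructed MAE:", mean_absolute_error(y_te, constructed.predict(F_te)))
```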

18 pages, 2025 KB  
Article
A Priori Prediction of Neoadjuvant Chemotherapy Response in Breast Cancer Using Deep Features from Pre-Treatment MRI and CT
by Deok Hyun Jang, Laurentius O. Osapoetra, Lakshmanan Sannachi, Belinda Curpen, Ana Pejović-Milić and Gregory J. Czarnota
Cancers 2025, 17(20), 3394; https://doi.org/10.3390/cancers17203394 - 21 Oct 2025
Viewed by 1108
Abstract
Background: Response to neoadjuvant chemotherapy (NAC) is a key prognostic indicator in breast cancer, yet current assessment relies on postoperative pathology. This study investigated the use of deep features derived from pre-treatment MRI and CT scans, in conjunction with clinical variables, to predict treatment response a priori. Methods: Two response endpoints were analyzed: pathologic complete response (pCR) versus non-pCR, and responders versus non-responders, with response defined as a reduction in tumor size of at least 30%. Intratumoral and peritumoral segmentations were generated on contrast-enhanced T1-weighted (CE-T1) and T2-weighted MRI, as well as contrast-enhanced CT images of tumors. Deep features were extracted from these regions using ResNet10, ResNet18, ResNet34, and ResNet50 architectures pre-trained with MedicalNet. Handcrafted radiomic features were also extracted for comparison. Feature selection was conducted with minimum redundancy maximum relevance (mRMR) followed by recursive feature elimination (RFE), and classification was performed using XGBoost across ten independent data partitions. Results: A total of 177 patients were analyzed in this study. ResNet34-derived features achieved the highest overall classification performance under both criteria, outperforming handcrafted features and deep features from other ResNet architectures. For distinguishing pCR from non-pCR, ResNet34 achieved a balanced accuracy of 81.6%, whereas handcrafted radiomics achieved 77.9%. For distinguishing responders from non-responders, ResNet34 achieved a balanced accuracy of 73.5%, compared with 70.2% for handcrafted radiomics. Conclusions: Deep features extracted from routinely acquired MRI and CT, when combined with clinical information, improve the prediction of NAC response in breast cancer. This multimodal framework demonstrates the value of deep learning-based approaches as a complement to handcrafted radiomics and provides a basis for more individualized treatment strategies. Full article
(This article belongs to the Special Issue CT/MRI/PET in Cancer)
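
The deep-feature branch of this pipeline can be sketched as follows. Note the assumptions: a 2D torchvision ResNet-34 with random weights stands in for the 3D MedicalNet-pretrained backbones, the mRMR stage is omitted (a greedy variant is shown in the first sketch on this page), and the image tensors and labels are synthetic.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision.models import resnet34
from sklearn.feature_selection import RFE
from xgboost import XGBClassifier

# Build a feature extractor: ResNet-34 without its classification head.
backbone = resnet34(weights=None)    # MedicalNet 3D weights would be loaded in practice
extractor = nn.Sequential(*list(backbone.children())[:-1]).eval()

def deep_features(images):
    """images: float tensor (N, 3, H, W) -> (N, 512) pooled deep features."""
    with torch.no_grad():
        feats = extractor(images)    # (N, 512, 1, 1) after global average pooling
    return feats.flatten(1).numpy()

# Illustrative tensors standing in for intratumoral/peritumoral image patches.
images = torch.randn(60, 3, 224, 224)
labels = np.random.default_rng(0).integers(0, 2, size=60)   # pCR vs non-pCR
X = deep_features(images)

# Recursive feature elimination wrapped around an XGBoost classifier,
# loosely mirroring the mRMR + RFE + XGBoost stage of the pipeline.
selector = RFE(XGBClassifier(n_estimators=100, eval_metric="logloss"),
               n_features_to_select=32, step=0.2)
selector.fit(X, labels)
clf = XGBClassifier(n_estimators=200, eval_metric="logloss")
clf.fit(X[:, selector.support_], labels)
print("selected deep features:", int(selector.support_.sum()))
```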

24 pages, 661 KB  
Article
Brain Network Analysis and Recognition Algorithm for MDD Based on Class-Specific Correlation Feature Selection
by Zhengnan Zhang, Yating Hu, Jiangwen Lu and Yunyuan Gao
Information 2025, 16(10), 912; https://doi.org/10.3390/info16100912 - 17 Oct 2025
Viewed by 890
Abstract
Major Depressive Disorder (MDD) is a high-risk mental illness that severely affects individuals across all age groups. However, existing research lacks comprehensive analysis and utilization of brain topological features, making it challenging to reduce redundant connectivity while preserving depression-related biomarkers. This study proposes a brain network analysis and recognition algorithm based on class-specific correlation feature selection. Leveraging electroencephalogram monitoring as a more objective MDD detection tool, this study employs tensor sparse representation to reduce the dimensionality of functional brain network time-series data, extracting the most representative functional connectivity matrices. To mitigate the impact of redundant connections, a feature selection algorithm combining topologically aware maximum class-specific dynamic correlation and minimum redundancy is integrated, identifying an optimal feature subset that best distinguishes MDD patients from healthy controls. The selected features are then ranked by relevance and fed into a hybrid CNN-BiLSTM classifier. Experimental results demonstrate classification accuracies of 95.96% and 94.90% on the MODMA and PRED + CT datasets, respectively, significantly outperforming conventional methods. This study not only improves the accuracy of MDD identification but also enhances the clinical interpretability of feature selection results, offering novel perspectives for pathological MDD research and clinical diagnosis. Full article
(This article belongs to the Section Artificial Intelligence)
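
A compact PyTorch sketch of a CNN-BiLSTM classifier of the kind described above; the layer sizes, feature length, and two-class head are illustrative assumptions rather than the authors' architecture.

```python
import torch
import torch.nn as nn

class CNNBiLSTM(nn.Module):
    """A compact CNN-BiLSTM classifier: 1D convolutions extract local patterns
    from the ranked feature sequence, a bidirectional LSTM models longer-range
    dependencies, and a linear head outputs MDD vs. healthy-control logits."""
    def __init__(self, n_features, n_classes=2, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2), nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
        )
        self.lstm = nn.LSTM(input_size=32, hidden_size=hidden,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                  # x: (batch, n_features)
        z = self.cnn(x.unsqueeze(1))       # (batch, 32, n_features // 2)
        z = z.transpose(1, 2)              # (batch, seq_len, 32) for the LSTM
        out, _ = self.lstm(z)
        return self.head(out[:, -1])       # last time step -> class logits

# Illustrative forward pass on random data standing in for selected connectivity features.
model = CNNBiLSTM(n_features=128)
logits = model(torch.randn(8, 128))
print(logits.shape)                        # torch.Size([8, 2])
```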

26 pages, 4145 KB  
Article
Enhanced Feature Engineering Symmetry Model Based on Novel Dolphin Swarm Algorithm
by Fei Gao and Mideth Abisado
Symmetry 2025, 17(10), 1736; https://doi.org/10.3390/sym17101736 - 15 Oct 2025
Cited by 1 | Viewed by 638
Abstract
This study addresses the challenges of high-dimensional data, such as the curse of dimensionality and feature redundancy, which can be viewed as an inherent asymmetry in the data space. To restore a balanced symmetry and build a more complete feature representation, we propose an enhanced feature engineering model (EFEM) that employs a novel dual-strategy approach. First, we present a symmetrical feature selection algorithm that combines an improved Dolphin Swarm Algorithm (DSA) with the Maximum Relevance–Minimum Redundancy (mRMR) criterion. This method not only selects an optimal, high-relevance feature subset, but also identifies the remaining features as a complementary, redundant subset. Second, an ensemble learning-based feature reconstruction algorithm is introduced to mine potential information from these redundant features. This process transforms fragmented, redundant information into a new, synthetic feature, thereby establishing a form of information symmetry with the selected optimal subset. Finally, the EFEM constructs a high-performance feature space by symmetrically integrating the optimal feature subset with the synthetic feature. The model’s superior performance is extensively validated on nine standard UCI regression datasets, with comparative analysis showing that it significantly outperforms similar algorithms and achieves an average goodness-of-fit of 0.9263. The statistical significance of this improvement is confirmed by the Wilcoxon signed-rank test. Comprehensive analyses of parameter sensitivity, robustness, convergence, and runtime, as well as ablation experiments, further validate the efficiency and stability of the proposed algorithm. The successful application of the EFEM in a real-world product demand forecasting task fully demonstrates its practical value in complex scenarios. Full article
(This article belongs to the Section Computer)
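
The feature-reconstruction idea (mining the redundant subset into one synthetic feature with an ensemble) can be sketched as below; the DSA/mRMR selection itself is not reproduced, and the selected/redundant splits and data are synthetic stand-ins. Out-of-fold predictions are used on the training split so the synthetic feature does not leak the target.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict, train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 20))
y = X[:, :5].sum(axis=1) + 0.3 * X[:, 5:].sum(axis=1) + rng.normal(0, 0.2, 500)

selected = list(range(5))          # stand-in for the DSA/mRMR-selected subset
redundant = list(range(5, 20))     # stand-in for the complementary subset

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Reconstruct the redundant subset into one synthetic feature with an ensemble.
ens = GradientBoostingRegressor(random_state=0)
synth_tr = cross_val_predict(ens, X_tr[:, redundant], y_tr, cv=5)
ens.fit(X_tr[:, redundant], y_tr)
synth_te = ens.predict(X_te[:, redundant])

Z_tr = np.column_stack([X_tr[:, selected], synth_tr])
Z_te = np.column_stack([X_te[:, selected], synth_te])

base = LinearRegression().fit(X_tr[:, selected], y_tr)
augmented = LinearRegression().fit(Z_tr, y_tr)
print("R2 selected only:", round(r2_score(y_te, base.predict(X_te[:, selected])), 3))
print("R2 with synthetic feature:", round(r2_score(y_te, augmented.predict(Z_te)), 3))
```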

19 pages, 2624 KB  
Article
Research on Feature Variable Set Optimization Method for Data-Driven Building Cooling Load Prediction Model
by Di Bai, Shuo Ma, Liwen Wu, Kexun Wang and Zhipeng Zhou
Buildings 2025, 15(19), 3583; https://doi.org/10.3390/buildings15193583 - 5 Oct 2025
Viewed by 546
Abstract
Short-term building cooling load prediction is crucial for optimizing building energy management and promoting sustainability. While data-driven models excel in this task, their performance heavily depends on the input feature set. Feature selection must balance predictive accuracy (relevance) and model simplicity (minimal redundancy), a challenge that existing methods often address incompletely. This study proposes a novel feature optimization framework that integrates the Maximum Information Coefficient (MIC) to measure non-linear relevance and the Maximum Relevance Minimum Redundancy (MRMR) principle to control redundancy. The proposed MRMR-MIC method was evaluated against four benchmark feature selection methods using three predictive models in a simulated office building case study. The results demonstrate that MRMR-MIC significantly outperforms other methods: it reduces the feature dimensionality from over 170 to merely 40 variables while maintaining a prediction error below 5%. This represents a substantial reduction in model complexity without sacrificing accuracy. Furthermore, the selected features cover a more comprehensive and physically meaningful set of attributes compared to other redundancy-control methods. The study concludes that the MRMR-MIC framework provides a robust, systematic methodology for identifying essential feature variables, which can not only enhance the performance of prediction models, but also offer practical guidance for designing cost-effective data acquisition systems in real-building applications. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
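
A sketch of combining MIC relevance with MRMR-style redundancy control; the minepy package is assumed as the MIC implementation, the greedy difference criterion is a generic formulation rather than the authors' exact method, and the data stand in for weather, occupancy, and cooling-load variables.

```python
import numpy as np
from minepy import MINE

def mic(x, y):
    """Maximal Information Coefficient between two 1-D arrays."""
    m = MINE(alpha=0.6, c=15)
    m.compute_score(x, y)
    return m.mic()

def mrmr_mic(X, y, n_select=10):
    """Greedy selection: maximise MIC relevance to the target minus the
    mean MIC redundancy with the features already chosen."""
    n = X.shape[1]
    relevance = np.array([mic(X[:, j], y) for j in range(n)])
    selected = [int(np.argmax(relevance))]
    candidates = set(range(n)) - set(selected)
    while len(selected) < n_select and candidates:
        best, best_score = None, -np.inf
        for j in candidates:
            redundancy = np.mean([mic(X[:, j], X[:, s]) for s in selected])
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
        candidates.remove(best)
    return selected

# Illustrative data standing in for weather/occupancy inputs and the cooling load.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 25))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + rng.normal(0, 0.1, 300)
print(mrmr_mic(X, y, n_select=8))
```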

20 pages, 1036 KB  
Review
Radiomics-Driven Tumor Prognosis Prediction Across Imaging Modalities: Advances in Sampling, Feature Selection, and Multi-Omics Integration
by Mohan Huang, Helen K. W. Law and Shing Yau Tam
Cancers 2025, 17(19), 3121; https://doi.org/10.3390/cancers17193121 - 25 Sep 2025
Cited by 1 | Viewed by 3010
Abstract
Radiomics has shown remarkable potential in predicting cancer prognosis by noninvasive and quantitative analysis of tumors through medical imaging. This review summarizes recent advances in the use of radiomics across various cancer types and imaging modalities, including computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET), and interventional radiology. Innovative sampling methods, including deep learning-based segmentation, multiregional analysis, and adaptive region of interest (ROI) methods, have contributed to improved model performance. The review examines various feature selection approaches, including least absolute shrinkage and selection operator (LASSO), minimum redundancy maximum relevance (mRMR), and ensemble methods, highlighting their roles in enhancing model robustness. The integration of radiomics with multi-omics data has further boosted predictive accuracy and enriched biological interpretability. Despite these advancements, challenges remain in terms of reproducibility, workflow standardization, clinical validation and acceptance. Future research should prioritize multicenter collaborations, methodological coordination, and clinical translation to fully unlock the prognostic potential of radiomics in oncology. Full article
(This article belongs to the Special Issue Radiomics and Imaging in Cancer Analysis)

14 pages, 1881 KB  
Article
MRI Radiomics for Predicting the Diffuse Type of Tenosynovial Giant Cell Tumor: An Exploratory Study
by Seul Ki Lee, Min Wook Joo, Jee-Young Kim and Mingeon Kim
Diagnostics 2025, 15(18), 2399; https://doi.org/10.3390/diagnostics15182399 - 20 Sep 2025
Viewed by 778
Abstract
Objective: To develop and validate a radiomics-based MRI model for prediction of diffuse-type tenosynovial giant cell tumor (D-TGCT), which has higher postoperative recurrence and more aggressive behavior than localized-type (L-TGCT). The study was conducted under the hypothesis that MRI-based radiomics models can predict D-TGCT with diagnostic performance significantly greater than chance level, as measured by the area under the receiver operating characteristic (ROC) curve (AUC) (null hypothesis: AUC ≤ 0.5; alternative hypothesis: AUC > 0.5). Materials and Methods: This retrospective study included 84 patients with histologically confirmed TGCT (54 L-TGCT, 30 D-TGCT) who underwent preoperative MRI between January 2005 and December 2024. Tumor segmentation was manually performed on T2-weighted (T2WI) and contrast-enhanced T1-weighted images. After standardized preprocessing, 1691 radiomic features were extracted, and feature selection was performed using minimum redundancy maximum relevance. Multivariate logistic regression (MLR) and random forest (RF) classifiers were developed using a training cohort (n = 52) and tested in an independent test cohort (n = 32). Model performance was assessed using AUC, sensitivity, specificity, and accuracy. Results: In the training set, D-TGCT prevalence was 32.6%; in the test set, it was 40.6%. The MLR model used three T2WI features: wavelet-LHL_glszm_GrayLevelNonUniformity, wavelet-HLL_gldm_LowGrayLevelEmphasis, and square_firstorder_Median. Training performance was high (AUC 0.94; sensitivity 75.0%; specificity 90.9%; accuracy 85.7%) but dropped in testing (AUC 0.60; sensitivity 62.5%; specificity 60.6%; accuracy 61.2%). The RF classifier demonstrated more stable performance [(training) AUC 0.85; sensitivity 43.8%; specificity 87.9%; accuracy 73.5% and (test) AUC 0.73; sensitivity 56.2%; specificity 72.7%; accuracy 67.3%]. Conclusions: Radiomics-based MRI models may help predict D-TGCT. While the MLR model overfitted, the RF classifier demonstrated relatively greater robustness and generalizability, suggesting that it may support clinical decision-making for D-TGCT in the future. Full article
(This article belongs to the Special Issue Innovative Diagnostic Imaging Technology in Musculoskeletal Tumors)
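
The train-versus-test comparison that exposes the MLR overfitting can be sketched generically; the data below are synthetic stand-ins for the selected radiomic features, and the split sizes only mirror the cohort sizes quoted above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(84, 30))                      # stand-in for selected radiomic features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 84) > 0).astype(int)   # D-TGCT vs L-TGCT
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=32, stratify=y, random_state=0)

for name, clf in [("MLR", LogisticRegression(max_iter=1000)),
                  ("RF", RandomForestClassifier(n_estimators=300, random_state=0))]:
    clf.fit(X_tr, y_tr)
    auc_tr = roc_auc_score(y_tr, clf.predict_proba(X_tr)[:, 1])
    auc_te = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name}: train AUC {auc_tr:.2f}, test AUC {auc_te:.2f}")   # large gap = overfitting
```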

15 pages, 1786 KB  
Article
Application of Gaussian SVM Flame Detection Model Based on Color and Gradient Features in Engine Test Plume Images
by Song Yan, Yushan Gao, Zhiwei Zhang and Yi Li
Sensors 2025, 25(17), 5592; https://doi.org/10.3390/s25175592 - 8 Sep 2025
Viewed by 1155
Abstract
This study presents a flame detection model that is based on real experimental data that were collected during turbopump hot-fire tests of a liquid rocket engine. In these tests, a MEMRECAM ACS-1 M40 high-speed camera—serving as an optical sensor within the test instrumentation system—captured plume images for analysis. To detect abnormal flame phenomena in the plume, a Gaussian support vector machine (SVM) model was developed using image features that were derived from both color and gradient information. Six representative frames containing visible flames were selected from a single test failure video. These images were segmented in the YCbCr color space using the k-means clustering algorithm to distinguish flame and non-flame pixels. A 10-dimensional feature vector was constructed for each pixel and then reduced to five dimensions using the Maximum Relevance Minimum Redundancy (mRMR) method. The reduced vectors were used to train the Gaussian SVM model. The model achieved a 97.6% detection accuracy despite being trained on a limited dataset. It has been successfully applied in multiple subsequent engine tests, and it has proven effective in detecting ablation-related anomalies. By combining real-world sensor data acquisition with intelligent image-based analysis, this work enhances the monitoring capabilities in rocket engine development. Full article
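
A minimal sketch of the two steps named above: k-means segmentation of plume pixels in YCbCr space, then an RBF ("Gaussian") SVM trained on per-pixel features. The frame is synthetic, the 5-D feature vector here (YCbCr plus two gradients) only approximates the paper's 10-D vector reduced by mRMR, and the k-means labels stand in for the flame/non-flame ground truth.

```python
import numpy as np
from skimage.color import rgb2ycbcr
from sklearn.cluster import KMeans
from sklearn.svm import SVC

# Stand-in for one plume frame: an RGB image with values in [0, 1].
rng = np.random.default_rng(0)
frame = rng.random((120, 160, 3))

# 1) Segment pixels into two groups (candidate flame vs. non-flame) in YCbCr space.
ycbcr = rgb2ycbcr(frame).reshape(-1, 3)
pixel_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(ycbcr)

# 2) Build simple per-pixel colour + gradient features and train an RBF SVM
#    on the k-means labels.
gy, gx = np.gradient(ycbcr[:, 0].reshape(120, 160))
features = np.column_stack([ycbcr, gx.ravel(), gy.ravel()])

idx = rng.choice(features.shape[0], 3000, replace=False)      # subsample for speed
svm = SVC(kernel="rbf", gamma="scale", C=1.0)
svm.fit(features[idx], pixel_labels[idx])
print("training accuracy:", round(svm.score(features[idx], pixel_labels[idx]), 3))
```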

23 pages, 14694 KB  
Article
PLCNet: A 3D-CNN-Based Plant-Level Classification Network Hyperspectral Framework for Sweetpotato Virus Disease Detection
by Qiaofeng Zhang, Wei Wang, Han Su, Gaoxiang Yang, Jiawen Xue, Hui Hou, Xiaoyue Geng, Qinghe Cao and Zhen Xu
Remote Sens. 2025, 17(16), 2882; https://doi.org/10.3390/rs17162882 - 19 Aug 2025
Cited by 1 | Viewed by 1381
Abstract
Sweetpotato virus disease (SPVD) poses a significant threat to global sweetpotato production; therefore, early, accurate field-scale detection is necessary. To address the limitations of the currently utilized assays, we propose PLCNet (Plant-Level Classification Network), a rapid, non-destructive SPVD identification framework using UAV-acquired hyperspectral imagery. High-resolution data from early sweetpotato growth stages were processed via three feature selection methods—Random Forest (RF), Minimum Redundancy Maximum Relevance (mRMR), and Local Covariance Matrix (LCM)—in combination with 24 vegetation indices. Variance Inflation Factor (VIF) analysis reduced multicollinearity, yielding an optimized SPVD-sensitive feature set. First, using the RF-selected bands and vegetation indices, we benchmarked four classifiers—Support Vector Machine (SVM), Gradient Boosting Decision Tree (GBDT), Residual Network (ResNet), and 3D Convolutional Neural Network (3D-CNN). Under identical inputs, the 3D-CNN achieved superior performance (OA = 96.55%, Macro F1 = 95.36%, UA_mean = 0.9498, PA_mean = 0.9504), outperforming SVM, GBDT, and ResNet. Second, with the same spectral–spatial features and 3D-CNN backbone, we compared a pixel-level baseline (CropdocNet) against our plant-level PLCNet. CropdocNet exhibited spatial fragmentation and isolated errors, whereas PLCNet’s two-stage pipeline—deep feature extraction followed by connected-component analysis and majority voting—aggregated voxel predictions into coherent whole-plant labels, substantially reducing noise and enhancing biological interpretability. By integrating optimized feature selection, deep learning, and plant-level post-processing, PLCNet delivers a scalable, high-throughput solution for precise SPVD monitoring in agricultural fields. Full article
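
The VIF screening step can be sketched with statsmodels; the 10.0 threshold, the column names, and the deliberately collinear table below are assumptions for illustration.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.outliers_influence import variance_inflation_factor

def drop_high_vif(df, threshold=10.0):
    """Iteratively drop the column with the highest variance inflation factor
    until every remaining column is below the threshold."""
    cols = list(df.columns)
    while True:
        vifs = [variance_inflation_factor(df[cols].values, i) for i in range(len(cols))]
        worst = int(np.argmax(vifs))
        if vifs[worst] < threshold or len(cols) <= 2:
            return cols
        cols.pop(worst)

# Illustrative vegetation-index table with a deliberately collinear column.
rng = np.random.default_rng(0)
base = rng.normal(size=(200, 4))
df = pd.DataFrame(np.column_stack([base, base[:, 0] * 0.98 + rng.normal(0, 0.01, 200)]),
                  columns=["NDVI", "EVI", "GNDVI", "SAVI", "NDVI_like"])
print(drop_high_vif(df))        # the near-duplicate index is removed
```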

15 pages, 3326 KB  
Article
Radiomics and Machine Learning Approaches for the Preoperative Classification of In Situ vs. Invasive Breast Cancer Using Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE–MRI)
by Luana Conte, Rocco Rizzo, Alessandra Sallustio, Eleonora Maggiulli, Mariangela Capodieci, Francesco Tramacere, Alessandra Castelluccia, Giuseppe Raso, Ugo De Giorgi, Raffaella Massafra, Maurizio Portaluri, Donato Cascio and Giorgio De Nunzio
Appl. Sci. 2025, 15(14), 7999; https://doi.org/10.3390/app15147999 - 18 Jul 2025
Cited by 1 | Viewed by 1470
Abstract
Accurate preoperative distinction between in situ and invasive Breast Cancer (BC) is critical for clinical decision-making and treatment planning. Radiomics and Machine Learning (ML) have shown promise in enhancing diagnostic performance from breast MRI, yet their application to this specific task remains underexplored. The aim of this study was to evaluate the performance of several ML classifiers, trained on radiomic features extracted from DCE–MRI and supported by basic clinical information, for the classification of in situ versus invasive BC lesions. In this study, we retrospectively analysed 71 post-contrast DCE–MRI scans (24 in situ, 47 invasive cases). Radiomic features were extracted from manually segmented tumour regions using the PyRadiomics library, and a limited set of basic clinical variables was also included. Several ML classifiers were evaluated in a Leave-One-Out Cross-Validation (LOOCV) scheme. Feature selection was performed using two different strategies: Minimum Redundancy Maximum Relevance (MRMR) and mutual information. Axial 3D rotation was used for data augmentation. Support Vector Machine (SVM), K Nearest Neighbors (KNN), Random Forest (RF), and Extreme Gradient Boosting (XGBoost) were the best-performing models, with an Area Under the Curve (AUC) ranging from 0.77 to 0.81. Notably, KNN achieved the best balance between sensitivity and specificity without the need for data augmentation. Our findings confirm that radiomic features extracted from DCE–MRI, combined with well-validated ML models, can effectively support the differentiation of in situ vs. invasive breast cancer. This approach remains robust even with small datasets and may aid in improving preoperative planning. Further validation on larger cohorts and integration with additional imaging or clinical data are recommended. Full article
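
A minimal sketch of LOOCV with feature selection refitted inside each fold, so no information from the held-out case leaks into the selection; SelectKBest with mutual information stands in for the MRMR/mutual-information strategies, KNN is the example classifier, and the data are synthetic stand-ins for the 71-scan cohort.

```python
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(71, 120))                      # stand-in for radiomic + clinical features
y = (rng.random(71) < 0.34).astype(int)             # in situ (1) vs invasive (0)

# Selection lives inside the pipeline so it is re-fitted on each LOOCV training fold.
pipe = make_pipeline(StandardScaler(),
                     SelectKBest(mutual_info_classif, k=20),
                     KNeighborsClassifier(n_neighbors=5))

loo = LeaveOneOut()
scores = np.zeros(len(y))
for train_idx, test_idx in loo.split(X):
    pipe.fit(X[train_idx], y[train_idx])
    scores[test_idx] = pipe.predict_proba(X[test_idx])[:, 1]
print("LOOCV AUC:", round(roc_auc_score(y, scores), 3))
```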

39 pages, 2612 KB  
Article
A Deep Learning-Driven CAD for Breast Cancer Detection via Thermograms: A Compact Multi-Architecture Feature Strategy
by Omneya Attallah
Appl. Sci. 2025, 15(13), 7181; https://doi.org/10.3390/app15137181 - 26 Jun 2025
Cited by 3 | Viewed by 2396
Abstract
Breast cancer continues to be the most common malignancy among women worldwide, presenting a considerable public health issue. Mammography, though the gold standard for screening, has limitations that catalyzed the advancement of non-invasive, radiation-free alternatives, such as thermal imaging (thermography). This research introduces a novel computer-aided diagnosis (CAD) framework aimed at improving breast cancer detection via thermal imaging. The suggested framework mitigates the limitations of current CAD systems, which frequently utilize intricate convolutional neural network (CNN) structures and resource-intensive preprocessing, by incorporating streamlined CNN designs, transfer learning strategies, and multi-architecture ensemble methods. Features are primarily obtained from various layers of MobileNet, EfficientNetB0, and ShuffleNet architectures to assess the impact of individual layers on classification performance. Following that, feature transformation methods, such as discrete wavelet transform (DWT) and non-negative matrix factorization (NNMF), are employed to diminish feature dimensionality and enhance computational efficiency. Features from all layers of the three CNNs are subsequently incorporated, and the Minimum Redundancy Maximum Relevance (MRMR) algorithm is utilized to determine the most prominent features. Ultimately, support vector machine (SVM) classifiers are employed for classification purposes. The results indicate that integrating features from various CNNs and layers markedly improves performance, attaining a maximum accuracy of 99.4%. Furthermore, the combination of attributes from all three layers of the CNNs, in conjunction with NNMF, attained a maximum accuracy of 99.9% with merely 350 features. This CAD system demonstrates the efficacy of thermal imaging and multi-layer feature amalgamation to enhance non-invasive breast cancer diagnosis by reducing computational requirements through multi-layer feature integration and dimensionality reduction techniques. Full article
(This article belongs to the Special Issue Application of Decision Support Systems in Biomedical Engineering)
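
The NNMF-based reduction can be sketched as follows; the matrix stands in for pooled CNN activations (non-negative by construction for ReLU networks), the 50-component choice is arbitrary, and the DWT branch and the 350-feature MRMR stage are not reproduced here.

```python
import numpy as np
from sklearn.decomposition import NMF
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
deep_feats = np.abs(rng.normal(size=(300, 2048)))    # stand-in for pooled CNN activations
labels = rng.integers(0, 2, size=300)                # normal vs. abnormal thermogram

# NMF needs non-negative input; MinMaxScaler keeps every column in [0, 1].
X = MinMaxScaler().fit_transform(deep_feats)
reduced = NMF(n_components=50, init="nndsvda", max_iter=400,
              random_state=0).fit_transform(X)

svm = SVC(kernel="rbf", gamma="scale")
print("5-fold accuracy on 50 NMF components:",
      cross_val_score(svm, reduced, labels, cv=5).mean().round(3))
```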

21 pages, 41092 KB  
Article
UAV as a Bridge: Mapping Key Rice Growth Stage with Sentinel-2 Imagery and Novel Vegetation Indices
by Jianping Zhang, Rundong Zhang, Qi Meng, Yanying Chen, Jie Deng and Bingtai Chen
Remote Sens. 2025, 17(13), 2180; https://doi.org/10.3390/rs17132180 - 25 Jun 2025
Cited by 1 | Viewed by 1463
Abstract
Rice is one of the three primary staple crops worldwide. The accurate monitoring of its key growth stages is crucial for agricultural management, disaster early warning, and ensuring food security. The effective collection of ground reference data is a critical step for monitoring rice growth stages using satellite imagery, traditionally achieved through labor-intensive field surveys. Here, we propose utilizing UAVs as an alternative means to collect spatially continuous ground reference data across larger areas, thereby enhancing the efficiency and scalability of training and validation processes for rice growth stage mapping products. The UAV data collection covered the Nanchuan, Yongchuan, Tongnan, and Kaizhou districts of Chongqing City, encompassing a total area of 377.5 hectares. After visual interpretation, centimeter-level high-resolution labels of the key rice growth stages were constructed. These labels were then mapped to Sentinel-2 imagery through spatiotemporal matching and scale conversion, resulting in a Sentinel-2 reference dataset that covered growth stages such as jointing and heading. Furthermore, we employed 30 vegetation index calculation methods to explore 48,600 spectral band combinations derived from 10 Sentinel-2 spectral bands, thereby constructing a series of novel vegetation indices. Based on the maximum relevance minimum redundancy (mRMR) algorithm, we identified an optimal subset of features that were both highly correlated with rice growth stages and mutually complementary. The results demonstrate that multi-feature modeling significantly enhanced classification performance. The optimal model, incorporating 300 features, achieved an F1 score of 0.864, representing a 2.5% improvement over models based on original spectral bands and a 38.8% improvement over models using a single feature. Notably, a model utilizing only 12 features maintained a high classification accuracy (F1 = 0.855) while substantially reducing computational costs. Compared with existing methods, this study constructed a large-scale ground-truth reference dataset for satellite imagery based on UAV observations, demonstrating its potential and providing an effective technical framework for the large-scale mapping of rice growth stages using satellite data. Full article
(This article belongs to the Special Issue Recent Progress in UAV-AI Remote Sensing II)
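
One of the index families mentioned above, normalized differences over all band pairs, can be generated and ranked by relevance as sketched below; mutual information stands in for the mRMR relevance term (the redundancy term appears in the first sketch on this page), and the reflectance values and stage labels are synthetic.

```python
import numpy as np
from itertools import combinations
from sklearn.feature_selection import mutual_info_classif

# Stand-in for per-pixel Sentinel-2 reflectance in 10 bands with growth-stage labels.
rng = np.random.default_rng(0)
bands = rng.random((2000, 10))
stage = rng.integers(0, 3, size=2000)        # e.g. tillering / jointing / heading

# Build a normalized-difference index for every band pair, then rank the
# resulting candidate features by relevance to the growth stage.
names, indices = [], []
for i, j in combinations(range(10), 2):
    names.append(f"ND(B{i},B{j})")
    indices.append((bands[:, i] - bands[:, j]) / (bands[:, i] + bands[:, j] + 1e-8))
X = np.column_stack(indices)

relevance = mutual_info_classif(X, stage, random_state=0)
top = np.argsort(relevance)[::-1][:5]
for k in top:
    print(names[k], round(relevance[k], 4))
```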

15 pages, 1496 KB  
Article
Radiomics-Based Classification of Clear Cell Renal Cell Carcinoma ISUP Grade: A Machine Learning Approach with SHAP-Enhanced Explainability
by María Aymerich, Alejandra García-Baizán, Paolo Niccolò Franco, Mariña González, Pilar San Miguel Fraile, José Antonio Ortiz-Rey and Milagros Otero-García
Diagnostics 2025, 15(11), 1337; https://doi.org/10.3390/diagnostics15111337 - 26 May 2025
Viewed by 1880
Abstract
Background: Clear cell renal cell carcinoma (ccRCC) is the most common subtype of renal cancer, and its prognosis is closely linked to the International Society of Urological Pathology (ISUP) grade. While histopathological evaluation remains the gold standard for grading, non-invasive methods, such as radiomics, offer potential for automated classification. This study aims to develop a radiomics-based machine learning model for the ISUP grade classification of ccRCC using nephrographic-phase CT images, with an emphasis on model interpretability through SHAP (SHapley Additive exPlanations) values. Objective: To develop and interpret a radiomics-based machine learning model for classifying ISUP grade in clear cell renal cell carcinoma (ccRCC) using nephrographic-phase CT images. Materials and Methods: This retrospective study included 109 patients with histopathologically confirmed ccRCC. Radiomic features were extracted from the nephrographic-phase CT scans. Feature robustness was evaluated via intraclass correlation coefficient (ICC), followed by redundancy reduction using Pearson correlation and minimum Redundancy Maximum Relevance (mRMR). Logistic regression, support vector machine, and random forest classifiers were trained using 8-fold cross-validation. SHAP values were computed to assess feature contribution. Results: The logistic regression model achieved the highest classification performance, with an accuracy of 82% and an AUC of 0.86. SHAP analysis identified major axis length, busyness, and large area emphasis as the most influential features. These variables represented shape and texture information, critical for distinguishing between high and low ISUP grades. Conclusions: A radiomics-based logistic regression model using nephrographic-phase CT enables accurate, non-invasive classification of ccRCC according to ISUP grade. The use of SHAP values enhances model transparency, supporting clinical interpretability and potential adoption in precision oncology. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
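
The SHAP step can be sketched with the shap package's LinearExplainer applied to a logistic regression; the feature names, data, and binary grade label below are illustrative assumptions, and mean absolute SHAP values are used as a simple importance summary.

```python
import numpy as np
import shap
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
feature_names = [f"radiomic_{i}" for i in range(15)]      # hypothetical shape/texture features
X = rng.normal(size=(109, 15))
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 1, 109) > 0).astype(int)   # high vs low ISUP grade

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
scaler = StandardScaler().fit(X_tr)
model = LogisticRegression(max_iter=1000).fit(scaler.transform(X_tr), y_tr)

# Linear SHAP values: each entry is a feature's signed contribution to one prediction.
explainer = shap.LinearExplainer(model, scaler.transform(X_tr))
shap_values = explainer.shap_values(scaler.transform(X_te))

mean_abs = np.abs(shap_values).mean(axis=0)
for idx in np.argsort(mean_abs)[::-1][:3]:
    print(feature_names[idx], round(mean_abs[idx], 3))     # top contributors to the grade call
```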