Search Results (159)

Search Parameters:
Keywords = Minimum Redundancy Maximum Relevance

29 pages, 2539 KB  
Article
Inertial Sensor-Based Recognition of Field Hockey Activities Using a Hybrid Feature Selection Framework
by Norazman Shahar, Muhammad Amir As’ari, Mohamad Hazwan Mohd Ghazali, Nasharuddin Zainal, Mohd Asyraf Zulkifley, Ahmad Asrul Ibrahim, Zaid Omar, Mohd Sabirin Rahmat, Kok Beng Gan and Asraf Mohamed Moubark
Sensors 2025, 25(24), 7615; https://doi.org/10.3390/s25247615 - 16 Dec 2025
Viewed by 238
Abstract
Accurate recognition of complex human activities from wearable sensors plays a critical role in sports analytics and human performance monitoring. However, the high dimensionality and redundancy of raw inertial data can hinder model performance and interpretability. This study proposes a hybrid feature selection framework that combines Minimum Redundancy Maximum Relevance (MRMR) and Regularized Neighborhood Component Analysis (RNCA) to improve classification accuracy while reducing computational complexity. Multi-sensor inertial data were collected from field hockey players performing six activity types. Time- and frequency-domain features were extracted from four body-mounted inertial measurement units (IMUs), resulting in 432 initial features. MRMR, combined with Pearson correlation filtering (|ρ| > 0.7), eliminated redundant features, and RNCA further refined the subset by learning supervised feature weights. The final model achieved a test accuracy of 92.82% and F1-score of 86.91% using only 83 features, surpassing the MRMR-only configuration and slightly outperforming the full feature set. This performance was supported by reduced training time, improved confusion matrix profiles, and enhanced class separability in PCA and t-SNE visualizations. These results demonstrate the effectiveness of the proposed two-stage feature selection method in optimizing classification performance while enhancing model efficiency and interpretability for real-time human activity recognition systems. Full article
(This article belongs to the Section Intelligent Sensors)
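A minimal sketch of the two filtering stages named in the abstract above (a |ρ| > 0.7 Pearson correlation filter followed by a greedy max-relevance/min-redundancy ranking), assuming tabular feature data. Mutual information stands in for the MRMR relevance term and absolute correlation for its redundancy term; the data are synthetic placeholders, not the authors' IMU features.

```python
# Simplified two-stage filter, assuming X is (n_samples, n_features) and y holds
# activity labels. Mutual information stands in for the MRMR relevance term and
# absolute Pearson correlation for the redundancy term (a common practical variant).
import numpy as np
from sklearn.feature_selection import mutual_info_classif

def correlation_filter(X, threshold=0.7):
    """Keep a feature only if |rho| <= threshold against every feature kept so far."""
    corr = np.abs(np.corrcoef(X, rowvar=False))
    kept = []
    for j in range(X.shape[1]):
        if all(corr[j, k] <= threshold for k in kept):
            kept.append(j)
    return kept

def greedy_mrmr(X, y, n_select=83):
    """Greedy relevance-minus-redundancy ranking (MID-style score)."""
    relevance = mutual_info_classif(X, y, random_state=0)
    corr = np.abs(np.corrcoef(X, rowvar=False))
    selected = [int(np.argmax(relevance))]
    while len(selected) < min(n_select, X.shape[1]):
        rest = [j for j in range(X.shape[1]) if j not in selected]
        scores = [relevance[j] - corr[j, selected].mean() for j in rest]
        selected.append(rest[int(np.argmax(scores))])
    return selected

X = np.random.randn(200, 432)                 # placeholder for the 432 IMU features
y = np.random.randint(0, 6, size=200)         # six field-hockey activity classes
kept = correlation_filter(X)
subset = greedy_mrmr(X[:, kept], y, n_select=83)
print(len(kept), len(subset))
```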

13 pages, 1931 KB  
Article
Habitat Model Based on Lung CT for Predicting Brain Metastasis in Patients with Non-Small Cell Lung Cancer
by Feiyu Xing, Yan Lei, Qin Zhong, Yan Wu, Huan Liu and Yuanliang Xie
Diagnostics 2025, 15(23), 3043; https://doi.org/10.3390/diagnostics15233043 - 28 Nov 2025
Viewed by 372
Abstract
Background: In lung cancer, the occurrence of brain metastasis (BM) is closely associated with the heterogeneity of the primary lung tumor. This study aimed to develop a habitat-based radiomics model using enhanced computed tomography (CT) lung imaging to predict the risk of BM in patients with non-small cell lung cancer (NSCLC). Methods: A retrospective cohort of 475 patients with NSCLC who underwent enhanced CT of the lungs prior to anti-tumor treatment was analyzed. Volumetric CT images were segmented into tumor subregions via k-means clustering based on voxel intensity and entropy values. Radiomics features were extracted from these subregions, and predictive features were selected using minimum redundancy maximum relevance and least absolute shrinkage and selection operator regression. Two logistic regression models were constructed: a whole-tumor radiomics model and a habitat-based model integrating subregional heterogeneity. Model performance was evaluated via receiver operating characteristic analysis and compared via DeLong’s test. Results: A total of 195 eligible patients with NSCLC were included. The volume of interest of the whole tumor was clustered into three subregions based on voxel intensity and entropy values. In the training cohort (n = 138), the areas under the curve of the clinical model, the whole-tumor model and the habitat-based model were 0.639 (95% confidence interval [CI]: 0.543–0.731), 0.728 (95% CI: 0.645–0.812) and 0.819 (95% CI: 0.744–0.894), respectively. The habitat-based model demonstrated superior predictive performance compared with the whole-tumor model (p = 0.022). Conclusions: The habitat-based radiomics model outperformed the whole-tumor model in terms of predicting BM, highlighting the importance of subregional tumor heterogeneity analysis. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
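An illustrative sketch of the habitat step described above: tumor voxels are clustered into subregions by intensity and local entropy with k-means. The array names, voxel counts, and use of scikit-learn are assumptions for the example, not details from the paper.

```python
# Habitat clustering on per-voxel intensity and entropy, using scikit-learn's KMeans.
import numpy as np
from sklearn.cluster import KMeans

def habitat_labels(intensity, entropy, n_habitats=3, seed=0):
    """intensity, entropy: 1-D arrays of values for voxels inside the tumor VOI."""
    voxel_features = np.column_stack([intensity, entropy])
    km = KMeans(n_clusters=n_habitats, n_init=10, random_state=seed)
    return km.fit_predict(voxel_features)     # one subregion label per voxel

rng = np.random.default_rng(0)
labels = habitat_labels(rng.normal(size=5000), rng.random(5000))   # 5,000 toy voxels
print(np.bincount(labels))                    # voxels per habitat
```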

26 pages, 1271 KB  
Article
Predicting the Forest Fire Duration Enriched with Meteorological Data Using Feature Construction Techniques
by Constantina Kopitsa, Ioannis G. Tsoulos, Andreas Miltiadous and Vasileios Charilogis
Symmetry 2025, 17(11), 1785; https://doi.org/10.3390/sym17111785 - 22 Oct 2025
Viewed by 604
Abstract
The spread of contemporary artificial intelligence technologies, particularly machine learning, has significantly enhanced the capacity to predict asymmetrical natural disasters. Wildfires constitute a prominent example, as machine learning can be employed to forecast not only their spatial extent but also their environmental and socio-economic impacts, propagation dynamics, symmetrical or asymmetrical patterns, and even their duration. Such predictive capabilities are of critical importance for effective wildfire management, as they inform the strategic allocation of material resources and the optimal deployment of human personnel in the field. Beyond that, examination of symmetrical or asymmetrical patterns in fires helps us to understand the causes and dynamics of their spread. The necessity of leveraging machine learning tools has become imperative in our era, as climate change has disrupted traditional wildfire management models due to prolonged droughts, rising temperatures, asymmetrical patterns, and the increasing frequency of extreme weather events. For this reason, our research seeks to fully exploit the potential of Principal Component Analysis (PCA), Minimum Redundancy Maximum Relevance (MRMR), and Grammatical Evolution, both for constructing Artificial Features and for generating Neural Network Architectures. For this purpose, we utilized the highly detailed and publicly available symmetrical datasets provided by the Hellenic Fire Service for the years 2014–2021, which we further enriched with meteorological data corresponding to the prevailing conditions at both the onset and the suppression of each wildfire event. The research concluded that the Feature Construction technique using Grammatical Evolution, which combines both symmetrical and asymmetrical conditions together with weather phenomena, can outperform other methods in terms of stability and accuracy. Therefore, the asymmetric phenomenon in our research is defined as the unpredictable outcome of climate change (meteorological data), which prolongs the duration of forest fires over time. Specifically, when predicting wildfire duration with Feature Construction, the mean error was 8.25%, corresponding to an overall accuracy of 91.75%. Full article

18 pages, 2025 KB  
Article
A Priori Prediction of Neoadjuvant Chemotherapy Response in Breast Cancer Using Deep Features from Pre-Treatment MRI and CT
by Deok Hyun Jang, Laurentius O. Osapoetra, Lakshmanan Sannachi, Belinda Curpen, Ana Pejović-Milić and Gregory J. Czarnota
Cancers 2025, 17(20), 3394; https://doi.org/10.3390/cancers17203394 - 21 Oct 2025
Viewed by 977
Abstract
Background: Response to neoadjuvant chemotherapy (NAC) is a key prognostic indicator in breast cancer, yet current assessment relies on postoperative pathology. This study investigated the use of deep features derived from pre-treatment MRI and CT scans, in conjunction with clinical variables, to predict treatment response a priori. Methods: Two response endpoints were analyzed: pathologic complete response (pCR) versus non-pCR, and responders versus non-responders, with response defined as a reduction in tumor size of at least 30%. Intratumoral and peritumoral segmentations were generated on contrast-enhanced T1-weighted (CE-T1) and T2-weighted MRI, as well as contrast-enhanced CT images of tumors. Deep features were extracted from these regions using ResNet10, ResNet18, ResNet34, and ResNet50 architectures pre-trained with MedicalNet. Handcrafted radiomic features were also extracted for comparison. Feature selection was conducted with minimum redundancy maximum relevance (mRMR) followed by recursive feature elimination (RFE), and classification was performed using XGBoost across ten independent data partitions. Results: A total of 177 patients were analyzed in this study. ResNet34-derived features achieved the highest overall classification performance under both criteria, outperforming handcrafted features and deep features from other ResNet architectures. For distinguishing pCR from non-pCR, ResNet34 achieved a balanced accuracy of 81.6%, whereas handcrafted radiomics achieved 77.9%. For distinguishing responders from non-responders, ResNet34 achieved a balanced accuracy of 73.5%, compared with 70.2% for handcrafted radiomics. Conclusions: Deep features extracted from routinely acquired MRI and CT, when combined with clinical information, improve the prediction of NAC response in breast cancer. This multimodal framework demonstrates the value of deep learning-based approaches as a complement to handcrafted radiomics and provides a basis for more individualized treatment strategies. Full article
(This article belongs to the Special Issue CT/MRI/PET in Cancer)
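A rough sketch of the selection-then-classification chain described above, with a mutual-information ranking standing in for mRMR, recursive feature elimination, and scikit-learn's GradientBoostingClassifier standing in for XGBoost; the feature matrix and all sizes are synthetic placeholders.

```python
# Mutual-information pre-selection, then RFE, then a gradient-boosted classifier,
# evaluated with cross-validated balanced accuracy on synthetic placeholder data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import RFE, SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline

pipe = Pipeline([
    ("relevance", SelectKBest(mutual_info_classif, k=64)),   # coarse relevance cut
    ("rfe", RFE(GradientBoostingClassifier(random_state=0), n_features_to_select=16)),
    ("clf", GradientBoostingClassifier(random_state=0)),
])

X = np.random.randn(177, 512)                 # e.g. one deep-feature vector per patient
y = np.random.randint(0, 2, size=177)         # pCR vs non-pCR
print(cross_val_score(pipe, X, y, cv=5, scoring="balanced_accuracy").mean())
```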

21 pages, 2556 KB  
Article
Comparison of Machine Learning Models in Nonlinear and Stochastic Signal Classification
by Elzbieta Olejarczyk and Carlo Massaroni
Appl. Sci. 2025, 15(20), 11226; https://doi.org/10.3390/app152011226 - 20 Oct 2025
Viewed by 468
Abstract
This study aims to compare different classifiers in the context of distinguishing two classes of signals: nonlinear electrocardiography (ECG) signals and stochastic artifacts occurring in ECG signals. The ECG signals from a single-lead wearable Movesense device were analyzed with a set of eight features: variance (VAR), three fractal dimension measures (Higuchi fractal dimension (HFD), Katz fractal dimension (KFD), and Detrended Fluctuation Analysis (DFA)), and four entropy measures (approximate entropy (ApEn), sample entropy (SampEn), and multiscale entropy (MSE) for scales 1 and 2). The minimum-redundancy maximum-relevance algorithm was applied to evaluate feature importance. A broad spectrum of machine learning models was considered for classification. The proposed approach allowed for comparison of classifier features, as well as providing a broader insight into the characteristics of the signals themselves. The most important features for classification were VAR, DFA, ApEn, and HFD. The best performance among 34 classifiers was obtained using an optimized RUSBoosted Trees ensemble classifier (sensitivity, specificity, and positive and negative predictive values were 99.8%, 73.7%, 99.8%, and 74.3%, respectively). The accuracy of the Movesense device was very high (99.6%). Moreover, the multifractality of ECG during sleep was observed in the relationship between SampEn (or ApEn) and MSE. Full article
(This article belongs to the Special Issue New Advances in Electrocardiogram (ECG) Signal Processing)
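A small sketch of one of the listed descriptors, the Katz fractal dimension, computed with its standard formula on synthetic segments; this is not the authors' code, and the example signals only illustrate that noisier (artifact-like) segments tend to score higher.

```python
# Katz fractal dimension of a 1-D segment, following Katz's formula.
import numpy as np

def katz_fd(x):
    x = np.asarray(x, dtype=float)
    dists = np.sqrt(1.0 + np.diff(x) ** 2)            # distances between successive samples
    L = dists.sum()                                    # total curve length
    d = np.sqrt(np.arange(len(x)) ** 2 + (x - x[0]) ** 2).max()   # maximal planar extent
    n = len(x) - 1
    return np.log10(n) / (np.log10(n) + np.log10(d / L))

rng = np.random.default_rng(0)
ecg_like = np.sin(np.linspace(0, 20 * np.pi, 2000)) + 0.05 * rng.normal(size=2000)
artifact_like = rng.normal(size=2000)                  # stochastic segment
print(katz_fd(ecg_like), katz_fd(artifact_like))       # the noisier segment scores higher
```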

24 pages, 661 KB  
Article
Brain Network Analysis and Recognition Algorithm for MDD Based on Class-Specific Correlation Feature Selection
by Zhengnan Zhang, Yating Hu, Jiangwen Lu and Yunyuan Gao
Information 2025, 16(10), 912; https://doi.org/10.3390/info16100912 - 17 Oct 2025
Viewed by 758
Abstract
Major Depressive Disorder (MDD) is a high-risk mental illness that severely affects individuals across all age groups. However, existing research lacks comprehensive analysis and utilization of brain topological features, making it challenging to reduce redundant connectivity while preserving depression-related biomarkers. This study proposes a brain network analysis and recognition algorithm based on class-specific correlation feature selection. Leveraging electroencephalogram monitoring as a more objective MDD detection tool, this study employs tensor sparse representation to reduce the dimensionality of functional brain network time-series data, extracting the most representative functional connectivity matrices. To mitigate the impact of redundant connections, a feature selection algorithm combining topologically aware maximum class-specific dynamic correlation and minimum redundancy is integrated, identifying an optimal feature subset that best distinguishes MDD patients from healthy controls. The selected features are then ranked by relevance and fed into a hybrid CNN-BiLSTM classifier. Experimental results demonstrate classification accuracies of 95.96% and 94.90% on the MODMA and PRED + CT datasets, respectively, significantly outperforming conventional methods. This study not only improves the accuracy of MDD identification but also enhances the clinical interpretability of feature selection results, offering novel perspectives for pathological MDD research and clinical diagnosis. Full article
(This article belongs to the Section Artificial Intelligence)
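A generic CNN-BiLSTM classifier head of the kind referenced above, written with tf.keras; the layer sizes, input shape, and class count are placeholder assumptions rather than the architecture reported in the paper.

```python
# Placeholder CNN-BiLSTM: Conv1D feature extraction, a bidirectional LSTM, and a
# softmax output for MDD vs. healthy-control classification.
import tensorflow as tf

def build_cnn_bilstm(n_timesteps=256, n_features=32, n_classes=2):
    return tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_timesteps, n_features)),
        tf.keras.layers.Conv1D(64, kernel_size=5, padding="same", activation="relu"),
        tf.keras.layers.MaxPooling1D(2),
        tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
        tf.keras.layers.Dropout(0.3),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])

model = build_cnn_bilstm()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```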

26 pages, 4145 KB  
Article
Enhanced Feature Engineering Symmetry Model Based on Novel Dolphin Swarm Algorithm
by Fei Gao and Mideth Abisado
Symmetry 2025, 17(10), 1736; https://doi.org/10.3390/sym17101736 - 15 Oct 2025
Viewed by 579
Abstract
This study addresses the challenges of high-dimensional data, such as the curse of dimensionality and feature redundancy, which can be viewed as an inherent asymmetry in the data space. To restore a balanced symmetry and build a more complete feature representation, we propose an enhanced feature engineering model (EFEM) that employs a novel dual-strategy approach. First, we present a symmetrical feature selection algorithm that combines an improved Dolphin Swarm Algorithm (DSA) with the Maximum Relevance–Minimum Redundancy (mRMR) criterion. This method not only selects an optimal, high-relevance feature subset, but also identifies the remaining features as a complementary, redundant subset. Second, an ensemble learning-based feature reconstruction algorithm is introduced to mine potential information from these redundant features. This process transforms fragmented, redundant information into a new, synthetic feature, thereby establishing a form of information symmetry with the selected optimal subset. Finally, the EFEM constructs a high-performance feature space by symmetrically integrating the optimal feature subset with the synthetic feature. The model’s superior performance is extensively validated on nine standard UCI regression datasets, with comparative analysis showing that it significantly outperforms similar algorithms and achieves an average goodness-of-fit of 0.9263. The statistical significance of this improvement is confirmed by the Wilcoxon signed-rank test. Comprehensive analyses of parameter sensitivity, robustness, convergence, and runtime, as well as ablation experiments, further validate the efficiency and stability of the proposed algorithm. The successful application of the EFEM in a real-world product demand forecasting task fully demonstrates its practical value in complex scenarios. Full article
(This article belongs to the Section Computer)

19 pages, 1951 KB  
Article
Enhancing Lemon Leaf Disease Detection: A Hybrid Approach Combining Deep Learning Feature Extraction and mRMR-Optimized SVM Classification
by Ahmet Saygılı
Appl. Sci. 2025, 15(20), 10988; https://doi.org/10.3390/app152010988 - 13 Oct 2025
Viewed by 920
Abstract
This study presents a robust and extensible hybrid classification framework for accurately detecting diseases in citrus leaves by integrating transfer learning-based deep learning models with classical machine learning techniques. Features were extracted using advanced pretrained architectures—DenseNet201, ResNet50, MobileNetV2, and EfficientNet-B0—and refined via the minimum redundancy maximum relevance (mRMR) method to reduce redundancy while maximizing discriminative power. These features were classified using support vector machines (SVMs), ensemble bagged trees, k-nearest neighbors (kNNs), and neural networks under stratified 10-fold cross-validation. On the lemon dataset, the best configuration (DenseNet201 + SVM) achieved 94.1 ± 4.9% accuracy, 93.2 ± 5.7% F1 score, and a balanced accuracy of 93.4 ± 6.0%, demonstrating strong and stable performance. To assess external generalization, the same pipeline was applied to mango and pomegranate leaves, achieving 100.0 ± 0.0% and 98.7 ± 1.5% accuracy, respectively—confirming the model’s robustness across citrus and non-citrus domains. Beyond accuracy, lightweight models such as EfficientNet-B0 and MobileNetV2 provided significantly higher throughput and lower latency, underscoring their suitability for real-time agricultural applications. These findings highlight the importance of combining deep representations with efficient classical classifiers for precision agriculture, offering both high diagnostic accuracy and practical deployability in field conditions. Full article
(This article belongs to the Topic Digital Agriculture, Smart Farming and Crop Monitoring)
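A hedged sketch of the hybrid pipeline described above: a pretrained DenseNet201 used as a frozen feature extractor feeding a classical SVM. A simple variance threshold stands in for the mRMR step, and the images and labels are random placeholders, so this only outlines the plumbing, not the reported results.

```python
# Frozen DenseNet201 features into an SVM; a variance threshold stands in for mRMR.
import numpy as np
import tensorflow as tf
from sklearn.feature_selection import VarianceThreshold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

extractor = tf.keras.applications.DenseNet201(
    include_top=False, weights="imagenet", pooling="avg", input_shape=(224, 224, 3))

def deep_features(images):
    """images: uint8/float array (n, 224, 224, 3); returns (n, 1920) feature vectors."""
    x = tf.keras.applications.densenet.preprocess_input(images.astype("float32"))
    return extractor.predict(x, verbose=0)

images = np.random.randint(0, 255, size=(8, 224, 224, 3))   # stand-in leaf photos
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])                 # healthy vs. diseased
clf = make_pipeline(VarianceThreshold(1e-6), StandardScaler(), SVC(kernel="rbf"))
clf.fit(deep_features(images), labels)
print(clf.score(deep_features(images), labels))
```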

19 pages, 2624 KB  
Article
Research on Feature Variable Set Optimization Method for Data-Driven Building Cooling Load Prediction Model
by Di Bai, Shuo Ma, Liwen Wu, Kexun Wang and Zhipeng Zhou
Buildings 2025, 15(19), 3583; https://doi.org/10.3390/buildings15193583 - 5 Oct 2025
Viewed by 514
Abstract
Short-term building cooling load prediction is crucial for optimizing building energy management and promoting sustainability. While data-driven models excel in this task, their performance heavily depends on the input feature set. Feature selection must balance predictive accuracy (relevance) and model simplicity (minimal redundancy), a challenge that existing methods often address incompletely. This study proposes a novel feature optimization framework that integrates the Maximum Information Coefficient (MIC) to measure non-linear relevance and the Maximum Relevance Minimum Redundancy (MRMR) principle to control redundancy. The proposed MRMR-MIC method was evaluated against four benchmark feature selection methods using three predictive models in a simulated office building case study. The results demonstrate that MRMR-MIC significantly outperforms other methods: it reduces the feature dimensionality from over 170 to merely 40 variables while maintaining a prediction error below 5%. This represents a substantial reduction in model complexity without sacrificing accuracy. Furthermore, the selected features cover a more comprehensive and physically meaningful set of attributes compared to other redundancy-control methods. The study concludes that the MRMR-MIC framework provides a robust, systematic methodology for identifying essential feature variables, which can not only enhance the performance of prediction models, but also offer practical guidance for designing cost-effective data acquisition systems in real-building applications. Full article
(This article belongs to the Section Building Energy, Physics, Environment, and Systems)
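A compact sketch of the MRMR-MIC idea for a regression target, using scikit-learn's mutual_info_regression as a stand-in for the Maximum Information Coefficient and a Pearson-correlation redundancy penalty, followed by a quick error check; the data, the 40-feature budget, and the regressor are illustrative assumptions.

```python
# Relevance-minus-redundancy selection for a regression target, then a quick
# hold-out error check with a random forest.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import mutual_info_regression
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

def select_features(X, y, n_select=40):
    relevance = mutual_info_regression(X, y, random_state=0)   # stand-in for MIC
    corr = np.abs(np.corrcoef(X, rowvar=False))
    chosen = [int(np.argmax(relevance))]
    while len(chosen) < n_select:
        rest = [j for j in range(X.shape[1]) if j not in chosen]
        score = [relevance[j] - corr[j, chosen].mean() for j in rest]
        chosen.append(rest[int(np.argmax(score))])
    return chosen

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 170))               # weather, schedule, lagged-load inputs
y = 100 + 5 * X[:, :5].sum(axis=1) + rng.normal(scale=2, size=2000)   # synthetic load
cols = select_features(X, y)
X_tr, X_te, y_tr, y_te = train_test_split(X[:, cols], y, random_state=0)
model = RandomForestRegressor(random_state=0).fit(X_tr, y_tr)
print(f"MAPE with {len(cols)} features: "
      f"{mean_absolute_percentage_error(y_te, model.predict(X_te)):.1%}")
```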

34 pages, 4605 KB  
Article
Forehead and In-Ear EEG Acquisition and Processing: Biomarker Analysis and Memory-Efficient Deep Learning Algorithm for Sleep Staging with Optimized Feature Dimensionality
by Roberto De Fazio, Şule Esma Yalçınkaya, Ilaria Cascella, Carolina Del-Valle-Soto, Massimo De Vittorio and Paolo Visconti
Sensors 2025, 25(19), 6021; https://doi.org/10.3390/s25196021 - 1 Oct 2025
Viewed by 2168
Abstract
Advancements in electroencephalography (EEG) technology and feature extraction methods have paved the way for wearable, non-invasive systems that enable continuous sleep monitoring outside clinical environments. This study presents the development and evaluation of an EEG-based acquisition system for sleep staging, which can be adapted for wearable applications. The system utilizes a custom experimental setup with the ADS1299EEG-FE-PDK evaluation board to acquire EEG signals from the forehead and in-ear regions under various conditions, including visual and auditory stimuli. Afterward, the acquired signals were processed to extract a wide range of features in time, frequency, and non-linear domains, selected based on their physiological relevance to sleep stages and disorders. The feature set was reduced using the Minimum Redundancy Maximum Relevance (mRMR) algorithm and Principal Component Analysis (PCA), resulting in a compact and informative subset of principal components. Experiments were conducted on the Bitbrain Open Access Sleep (BOAS) dataset to validate the selected features and assess their robustness across subjects. The feature set extracted from a single EEG frontal derivation (F4-F3) was then used to train and test a two-step deep learning model that combines Long Short-Term Memory (LSTM) and dense layers for 5-class sleep stage classification, utilizing attention and augmentation mechanisms to mitigate the natural imbalance of the feature set. The results—overall accuracies of 93.5% and 94.7% using the reduced feature sets (94% and 98% cumulative explained variance, respectively) and 97.9% using the complete feature set—demonstrate the feasibility of obtaining a reliable classification using a single EEG derivation, mainly for unobtrusive, home-based sleep monitoring systems. Full article
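A minimal sketch of the two-stage reduction mentioned above: a mutual-information ranking standing in for mRMR, then PCA kept at a target share of cumulative explained variance (scikit-learn accepts a float n_components for this); the feature matrix and sleep-stage labels are synthetic placeholders.

```python
# Mutual-information pre-selection followed by PCA at a fixed explained-variance target.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif

X = np.random.randn(3000, 120)                 # epochs x extracted features
y = np.random.randint(0, 5, size=3000)         # five sleep-stage classes

X_ranked = SelectKBest(mutual_info_classif, k=60).fit_transform(X, y)
pca = PCA(n_components=0.94, svd_solver="full")   # keep 94% cumulative variance
X_reduced = pca.fit_transform(X_ranked)
print(X_reduced.shape, round(pca.explained_variance_ratio_.sum(), 3))
```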

20 pages, 1036 KB  
Review
Radiomics-Driven Tumor Prognosis Prediction Across Imaging Modalities: Advances in Sampling, Feature Selection, and Multi-Omics Integration
by Mohan Huang, Helen K. W. Law and Shing Yau Tam
Cancers 2025, 17(19), 3121; https://doi.org/10.3390/cancers17193121 - 25 Sep 2025
Viewed by 2661
Abstract
Radiomics has shown remarkable potential in predicting cancer prognosis by noninvasive and quantitative analysis of tumors through medical imaging. This review summarizes recent advances in the use of radiomics across various cancer types and imaging modalities, including computed tomography (CT), magnetic resonance imaging (MRI), ultrasound, positron emission tomography (PET), and interventional radiology. Innovative sampling methods, including deep learning-based segmentation, multiregional analysis, and adaptive region of interest (ROI) methods, have contributed to improved model performance. The review examines various feature selection approaches, including least absolute shrinkage and selection operator (LASSO), minimum redundancy maximum relevance (mRMR), and ensemble methods, highlighting their roles in enhancing model robustness. The integration of radiomics with multi-omics data has further boosted predictive accuracy and enriched biological interpretability. Despite these advancements, challenges remain in terms of reproducibility, workflow standardization, clinical validation and acceptance. Future research should prioritize multicenter collaborations, methodological coordination, and clinical translation to fully unlock the prognostic potential of radiomics in oncology. Full article
(This article belongs to the Special Issue Radiomics and Imaging in Cancer Analysis)

14 pages, 1881 KB  
Article
MRI Radiomics for Predicting the Diffuse Type of Tenosynovial Giant Cell Tumor: An Exploratory Study
by Seul Ki Lee, Min Wook Joo, Jee-Young Kim and Mingeon Kim
Diagnostics 2025, 15(18), 2399; https://doi.org/10.3390/diagnostics15182399 - 20 Sep 2025
Viewed by 704
Abstract
Objective: To develop and validate a radiomics-based MRI model for prediction of diffuse-type tenosynovial giant cell tumor (D-TGCT), which has higher postoperative recurrence and more aggressive behavior than localized-type (L-TGCT). The study was conducted under the hypothesis that MRI-based radiomics models can predict D-TGCT with diagnostic performance significantly greater than chance level, as measured by the area under the receiver operating characteristic (ROC) curve (AUC) (null hypothesis: AUC ≤ 0.5; alternative hypothesis: AUC > 0.5). Materials and Methods: This retrospective study included 84 patients with histologically confirmed TGCT (54 L-TGCT, 30 D-TGCT) who underwent preoperative MRI between January 2005 and December 2024. Tumor segmentation was manually performed on T2-weighted (T2WI) and contrast-enhanced T1-weighted images. After standardized preprocessing, 1691 radiomic features were extracted, and feature selection was performed using minimum redundancy maximum relevance. Multivariate logistic regression (MLR) and random forest (RF) classifiers were developed using a training cohort (n = 52) and tested in an independent test cohort (n = 32). Model performance was assessed using AUC, sensitivity, specificity, and accuracy. Results: In the training set, D-TGCT prevalence was 32.6%; in the test set, it was 40.6%. The MLR model used three T2WI features: wavelet-LHL_glszm_GrayLevelNonUniformity, wavelet-HLL_gldm_LowGrayLevelEmphasis, and square_firstorder_Median. Training performance was high (AUC 0.94; sensitivity 75.0%; specificity 90.9%; accuracy 85.7%) but dropped in testing (AUC 0.60; sensitivity 62.5%; specificity 60.6%; accuracy 61.2%). The RF classifier demonstrated more stable performance [(training) AUC 0.85; sensitivity 43.8%; specificity 87.9%; accuracy 73.5% and (test) AUC 0.73; sensitivity 56.2%; specificity 72.7%; accuracy 67.3%]. Conclusions: Radiomics-based MRI models may help predict D-TGCT. While the MLR model overfitted, the RF classifier demonstrated relatively greater robustness and generalizability, suggesting that it may support clinical decision-making for D-TGCT in the future. Full article
(This article belongs to the Special Issue Innovative Diagnostic Imaging Technology in Musculoskeletal Tumors)
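A bare-bones version of the modelling step described above: a random-forest classifier on a radiomic feature table, scored with ROC AUC on a held-out split; the feature values, split sizes, and hyperparameters are placeholders, not the study's data.

```python
# Random-forest classifier on a radiomic feature table, scored with ROC AUC.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X = np.random.randn(84, 1691)                  # patients x radiomic features
y = np.random.binomial(1, 0.36, size=84)       # 1 = diffuse-type TGCT

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=32, stratify=y, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("test AUC:", roc_auc_score(y_te, rf.predict_proba(X_te)[:, 1]))
```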

15 pages, 1786 KB  
Article
Application of Gaussian SVM Flame Detection Model Based on Color and Gradient Features in Engine Test Plume Images
by Song Yan, Yushan Gao, Zhiwei Zhang and Yi Li
Sensors 2025, 25(17), 5592; https://doi.org/10.3390/s25175592 - 8 Sep 2025
Viewed by 1099
Abstract
This study presents a flame detection model that is based on real experimental data that were collected during turbopump hot-fire tests of a liquid rocket engine. In these tests, a MEMRECAM ACS-1 M40 high-speed camera—serving as an optical sensor within the test instrumentation system—captured plume images for analysis. To detect abnormal flame phenomena in the plume, a Gaussian support vector machine (SVM) model was developed using image features that were derived from both color and gradient information. Six representative frames containing visible flames were selected from a single test failure video. These images were segmented in the YCbCr color space using the k-means clustering algorithm to distinguish flame and non-flame pixels. A 10-dimensional feature vector was constructed for each pixel and then reduced to five dimensions using the Maximum Relevance Minimum Redundancy (mRMR) method. The reduced vectors were used to train the Gaussian SVM model. The model achieved a 97.6% detection accuracy despite being trained on a limited dataset. It has been successfully applied in multiple subsequent engine tests, and it has proven effective in detecting ablation-related anomalies. By combining real-world sensor data acquisition with intelligent image-based analysis, this work enhances the monitoring capabilities in rocket engine development. Full article
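A simplified walk-through of the per-pixel pipeline described above: RGB frames converted to YCbCr, k-means used to split flame-like from background pixels, and an RBF (Gaussian) SVM fit on per-pixel features; the frame data, feature choice, and cluster count are illustrative assumptions, not the authors' setup.

```python
# RGB-to-YCbCr conversion, k-means pixel grouping, and an RBF SVM on per-pixel features.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def rgb_to_ycbcr(img):
    """img: float array (h, w, 3) with channels in [0, 255]; BT.601 full-range conversion."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

frame = np.random.randint(0, 255, size=(60, 80, 3)).astype(float)   # stand-in plume frame
pixels = rgb_to_ycbcr(frame).reshape(-1, 3)

# k-means gives a rough flame / non-flame split to bootstrap pixel labels.
pixel_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(pixels)

svm = SVC(kernel="rbf", gamma="scale")         # "Gaussian" SVM on per-pixel features
svm.fit(pixels, pixel_labels)
print(svm.score(pixels, pixel_labels))
```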

25 pages, 804 KB  
Article
The Role of Mutual Information Estimator Choice in Feature Selection: An Empirical Study on mRMR
by Nikolaos Papaioannou, Georgios Myllis, Alkiviadis Tsimpiris and Vasiliki Vrana
Information 2025, 16(9), 724; https://doi.org/10.3390/info16090724 - 25 Aug 2025
Cited by 5 | Viewed by 2522
Abstract
Maximum Relevance Minimum Redundancy (mRMR) is a widely used feature selection method applied across a broad range of fields. mRMR adds to the optimal subset the features that have high relevance to the target variable while having minimum redundancy with each other. Mutual information is a key component of mRMR as it measures the degree of dependence between two variables. However, the true value of mutual information is not known and needs to be estimated. The aim of this study is to examine whether the choice of mutual information estimator affects the performance of mRMR. To this end, three variations of mRMR are compared. The first one uses Parzen window estimation to assess mutual information between continuous variables. The second is based on equidistant partitioning using the cells method, while the third incorporates a bias-corrected version of the same estimator. All methods are tested with and without a regularization term in the mRMR denominator, introduced to improve numerical stability. The evaluation is conducted on synthetic datasets where the target variable is defined as a combination of continuous features, simulating both linear and nonlinear dependencies. To demonstrate the applicability of the proposed methods, we also include a case study on real-world classification tasks. The study showed that the choice of mutual information estimator can affect the performance of mRMR and that the estimator must be carefully selected depending on the dataset and the parameters of the examined problem. The application of the corrected mutual information estimator improves the performance of mRMR in the examined setup. Full article
(This article belongs to the Section Information and Communications Technology)
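A toy version of the equidistant-partitioning estimator discussed above: both variables are binned into equal-width cells and plug-in mutual information is computed from the joint histogram (this naive estimator is biased, which is exactly what the corrected variant addresses); bin count and test signals are arbitrary.

```python
# Plug-in mutual information from an equal-width 2-D histogram (in nats).
import numpy as np

def binned_mutual_information(x, y, n_bins=16):
    joint, _, _ = np.histogram2d(x, y, bins=n_bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0                                   # skip empty cells to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
print(binned_mutual_information(x, 2 * x + rng.normal(size=5000)),   # dependent pair
      binned_mutual_information(x, rng.normal(size=5000)))           # ~independent pair
```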

23 pages, 14694 KB  
Article
PLCNet: A 3D-CNN-Based Plant-Level Classification Network Hyperspectral Framework for Sweetpotato Virus Disease Detection
by Qiaofeng Zhang, Wei Wang, Han Su, Gaoxiang Yang, Jiawen Xue, Hui Hou, Xiaoyue Geng, Qinghe Cao and Zhen Xu
Remote Sens. 2025, 17(16), 2882; https://doi.org/10.3390/rs17162882 - 19 Aug 2025
Viewed by 1283
Abstract
Sweetpotato virus disease (SPVD) poses a significant threat to global sweetpotato production; therefore, early, accurate field-scale detection is necessary. To address the limitations of the currently utilized assays, we propose PLCNet (Plant-Level Classification Network), a rapid, non-destructive SPVD identification framework using UAV-acquired hyperspectral imagery. High-resolution data from early sweetpotato growth stages were processed via three feature selection methods—Random Forest (RF), Minimum Redundancy Maximum Relevance (mRMR), and Local Covariance Matrix (LCM)—in combination with 24 vegetation indices. Variance Inflation Factor (VIF) analysis reduced multicollinearity, yielding an optimized SPVD-sensitive feature set. First, using the RF-selected bands and vegetation indices, we benchmarked four classifiers—Support Vector Machine (SVM), Gradient Boosting Decision Tree (GBDT), Residual Network (ResNet), and 3D Convolutional Neural Network (3D-CNN). Under identical inputs, the 3D-CNN achieved superior performance (OA = 96.55%, Macro F1 = 95.36%, UA_mean = 0.9498, PA_mean = 0.9504), outperforming SVM, GBDT, and ResNet. Second, with the same spectral–spatial features and 3D-CNN backbone, we compared a pixel-level baseline (CropdocNet) against our plant-level PLCNet. CropdocNet exhibited spatial fragmentation and isolated errors, whereas PLCNet’s two-stage pipeline—deep feature extraction followed by connected-component analysis and majority voting—aggregated voxel predictions into coherent whole-plant labels, substantially reducing noise and enhancing biological interpretability. By integrating optimized feature selection, deep learning, and plant-level post-processing, PLCNet delivers a scalable, high-throughput solution for precise SPVD monitoring in agricultural fields. Full article
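A stand-alone sketch of the plant-level post-processing step: pixel-wise class predictions are grouped into connected plant regions with scipy.ndimage.label and each region receives its majority label; the prediction map and plant mask are tiny synthetic placeholders.

```python
# Connected-component analysis plus per-component majority voting over pixel predictions.
import numpy as np
from scipy import ndimage

def plant_level_labels(pixel_pred, plant_mask):
    """pixel_pred: (h, w) int class map; plant_mask: (h, w) bool foreground mask."""
    components, n = ndimage.label(plant_mask)      # one id per connected plant region
    out = pixel_pred.copy()
    for cid in range(1, n + 1):
        region = components == cid
        out[region] = np.argmax(np.bincount(pixel_pred[region]))   # majority vote
    return out

pred = np.array([[0, 0, 1, 2, 2],
                 [0, 1, 1, 2, 2],
                 [0, 0, 0, 0, 2]])
mask = np.ones_like(pred, dtype=bool)
mask[:, 2] = False                                 # split the toy map into two plants
print(plant_level_labels(pred, mask))
```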