Search Results (634)

Search Parameters:
Keywords = Bayes network

21 pages, 2536 KB  
Article
Predicting Star Scientists in the Field of Artificial Intelligence: A Machine Learning Approach
by Koosha Shirouyeh, Andrea Schiffauerova and Ashkan Ebadi
Metrics 2025, 2(4), 22; https://doi.org/10.3390/metrics2040022 (registering DOI) - 11 Oct 2025
Abstract
Star scientists are highly influential researchers who have made significant contributions to their field, gained widespread recognition, and often attracted substantial research funding. They are critical for the advancement of science and innovation and significantly influence the transfer of knowledge and technology to industry. Identifying potential star scientists before their performance becomes outstanding is important for recruitment, collaboration, networking, and research funding decisions. This study utilizes machine learning techniques and builds four different classifiers, i.e., random forest, support vector machines, naïve Bayes, and logistic regression, to predict star scientists in the field of artificial intelligence while highlighting features related to their success. The analysis is based on publication data collected from Scopus from 2000 to 2019, incorporating a diverse set of features such as gender, ethnic diversity, and collaboration network structural properties. The random forest model achieved the best performance with an AUC of 0.75. Our results confirm that star scientists follow different patterns compared to their non-star counterparts in almost all the early-career features. We found that certain features, such as gender and ethnic diversity, play important roles in scientific collaboration and can significantly impact an author’s career development and success. The most important features in predicting star scientists in the field of artificial intelligence were the number of articles, betweenness centrality, research impact indicators, and weighted degree centrality. Our approach offers valuable insights for researchers, practitioners, and funding agencies interested in identifying and supporting talented researchers.
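A minimal sketch of the four-classifier comparison this abstract describes, scored by AUC as in the paper; the feature matrix below is synthetic stand-in data, not the study's Scopus-derived features:

```python
# Compare the four classifier families named in the abstract by AUC.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Synthetic stand-in: star scientists are a rare positive class (~10%).
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.9], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

models = {
    "random forest": RandomForestClassifier(random_state=0),
    "support vector machine": SVC(probability=True, random_state=0),
    "naive Bayes": GaussianNB(),
    "logistic regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.2f}")
```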

31 pages, 3644 KB  
Article
Machine Learning for Basketball Game Outcomes: NBA and WNBA Leagues
by João M. Alves and Ramiro S. Barbosa
Computation 2025, 13(10), 230; https://doi.org/10.3390/computation13100230 - 1 Oct 2025
Viewed by 241
Abstract
Artificial intelligence has become crucial in sports, leveraging its analytical capabilities to enhance the understanding and prediction of complex events. Machine learning algorithms in sports, especially basketball, are transforming performance analysis by identifying patterns and trends invisible to traditional methods. This technology provides in-depth insights into individual and team performance, enabling precise evaluation of strategies and tactics. Consequently, the detailed analysis of every aspect of a team’s routine can significantly elevate the level of competition in the sport. This study investigates a range of machine learning models, including Logistic Regression (LR), Ridge Regression Classifier (RR), Random Forest (RF), Naive Bayes (NB), K-Nearest Neighbors (KNNs), Support Vector Machine (SVM), Stacking Classifier (STACK), Bagging Classifier (BAG), Multi-Layer Perceptron (MLP), AdaBoost (AB), and XGBoost (XGB), as well as deep learning architectures such as Long Short-Term Memory (LSTM) networks and Convolutional Neural Networks (CNNs), to compare their effectiveness in predicting game outcomes in the NBA and WNBA leagues. The results show highly acceptable prediction accuracies of 65.50% for the NBA and 67.48% for the WNBA. This study allows us to understand the impact that artificial intelligence can have on the world of basketball and its current state in relation to previous studies. It can provide valuable insights for coaches, performance analysts, team managers, and sports strategists by using machine learning and deep learning models to predict NBA and WNBA outcomes, enabling informed decisions and enhancing competitive performance.
(This article belongs to the Section Computational Engineering)
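Among the listed models, the Stacking Classifier (STACK) is the least self-explanatory; a minimal sketch of that setup, with base learners and meta-learner chosen here purely for illustration:

```python
# Stacking: out-of-fold predictions from base learners feed a meta-learner.
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier

stack = StackingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("nb", GaussianNB()),
        ("knn", KNeighborsClassifier()),
    ],
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,  # base-model predictions are generated out-of-fold to avoid leakage
)
# stack.fit(X_train, y_train); stack.score(X_test, y_test)  # X: game features
```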

31 pages, 5909 KB  
Article
Machine Learning Approaches for Classification of Composite Materials
by Dmytro Tymoshchuk, Iryna Didych, Pavlo Maruschak, Oleh Yasniy, Andrii Mykytyshyn and Mykola Mytnyk
Modelling 2025, 6(4), 118; https://doi.org/10.3390/modelling6040118 - 1 Oct 2025
Viewed by 179
Abstract
The paper presents a comparative analysis of various machine learning algorithms for the classification of epoxy composites reinforced with basalt fiber and modified with inorganic fillers. The classification is based on key thermophysical characteristics, in particular, the mass fraction of the filler, temperature, and thermal conductivity coefficient. A dataset of 16,056 interpolated samples was used to train and evaluate more than a dozen models. Among the tested algorithms, the MLP neural network model showed the highest accuracy of 99.7% and balanced classification metrics (F1-measure and G-Mean). Ensemble methods, including XGBoost, CatBoost, ExtraTrees, and HistGradientBoosting, also showed high classification accuracy. To interpret the results of the MLP model, SHAP analysis was applied, which confirmed the predominant influence of the mass fraction of the filler on decision-making for all classes. The results of the study confirm the high effectiveness of machine learning methods for recognizing filler type in composite materials, as well as the potential of interpretable AI in materials science tasks.
(This article belongs to the Special Issue Machine Learning and Artificial Intelligence in Modelling)
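A sketch of the SHAP-on-MLP analysis the abstract reports, using the model-agnostic KernelExplainer; the data here is a random stand-in for the three thermophysical inputs, and the toy target is not the paper's labeling:

```python
# SHAP attribution for an MLP classifier on three thermophysical features.
import numpy as np
import shap
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.random((500, 3))            # [filler mass fraction, temperature, conductivity]
y = (X[:, 0] > 0.5).astype(int)     # toy target driven by the mass fraction

mlp = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0).fit(X, y)
explainer = shap.KernelExplainer(mlp.predict_proba, shap.sample(X, 50))
shap_values = explainer.shap_values(X[:20])   # per-class attributions for 20 samples
```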

35 pages, 3558 KB  
Article
Realistic Performance Assessment of Machine Learning Algorithms for 6G Network Slicing: A Dual-Methodology Approach with Explainable AI Integration
by Sümeye Nur Karahan, Merve Güllü, Deniz Karhan, Sedat Çimen, Mustafa Serdar Osmanca and Necaattin Barışçı
Electronics 2025, 14(19), 3841; https://doi.org/10.3390/electronics14193841 - 27 Sep 2025
Viewed by 393
Abstract
As 6G networks become increasingly complex and heterogeneous, effective classification of network slicing is essential for optimizing resources and managing quality of service. While recent advances demonstrate high accuracy under controlled laboratory conditions, a critical gap exists between algorithm performance evaluation under idealized conditions and their actual effectiveness in realistic deployment scenarios. This study presents a comprehensive comparative analysis of two distinct preprocessing methodologies for 6G network slicing classification: Pure Raw Data Analysis (PRDA) and Literature-Validated Realistic Transformations (LVRTs). We evaluate the impact of these strategies on algorithm performance, resilience characteristics, and practical deployment feasibility to bridge the laboratory–reality gap in 6G network optimization. Our experimental methodology involved testing eleven machine learning algorithms—including traditional ML, ensemble methods, and deep learning approaches—on a dataset comprising 10,000 network slicing samples (expanded to 21,033 through realistic transformations) across five network slice types. The LVRT methodology incorporates realistic operational impairments including market-driven class imbalance (9:1 ratio), multi-layer interference patterns, and systematic missing data reflecting authentic 6G deployment challenges. The experimental results revealed significant differences in algorithm behavior between the two preprocessing approaches. Under PRDA conditions, deep learning models achieved perfect accuracy (100% for CNN and FNN), while traditional algorithms ranged from 60.9% to 89.0%. However, LVRT results exposed dramatic performance variations, with accuracies spanning from 58.0% to 81.2%. Most significantly, we discovered that algorithms achieving excellent laboratory performance experience substantial degradation under realistic conditions, with CNNs showing an 18.8% accuracy loss (dropping from 100% to 81.2%), FNNs experiencing an 18.9% loss (declining from 100% to 81.1%), and Naive Bayes models suffering a 34.8% loss (falling from 89% to 58%). Conversely, SVM (RBF) and Logistic Regression demonstrated counter-intuitive resilience, improving by 14.1 and 10.3 percentage points, respectively, under operational stress, demonstrating superior adaptability to realistic network conditions. This study establishes a resilience-based classification framework enabling informed algorithm selection for diverse 6G deployment scenarios. Additionally, we introduce a comprehensive explainable artificial intelligence (XAI) framework using SHAP analysis to provide interpretable insights into algorithm decision-making processes. The XAI analysis reveals that Packet Loss Budget emerges as the dominant feature across all algorithms, while Slice Jitter and Slice Latency constitute secondary importance features. Cross-scenario interpretability consistency analysis demonstrates that CNN, LSTM, and Naive Bayes achieve perfect or near-perfect consistency scores (0.998–1.000), while SVM and Logistic Regression maintain high consistency (0.988–0.997), making them suitable for regulatory compliance scenarios. In contrast, XGBoost shows low consistency (0.106) despite high accuracy, requiring intensive monitoring for deployment. This research contributes essential insights for bridging the critical gap between algorithm development and deployment success in next-generation wireless networks, providing evidence-based guidelines for algorithm selection based on accuracy, resilience, and interpretability requirements. Our findings establish quantitative resilience boundaries: algorithms achieving >99% laboratory accuracy exhibit 58–81% performance under realistic conditions, with CNN and FNN maintaining the highest absolute accuracy (81.2% and 81.1%, respectively) despite experiencing significant degradation from laboratory conditions.
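A sketch of how realistic impairments like those in the LVRT methodology could be injected into a clean dataset; the 9:1 imbalance ratio comes from the abstract, while the 5% missingness rate, per-class downsampling scheme, and column handling are illustrative assumptions, not the authors' exact procedure:

```python
# Inject LVRT-style impairments: class imbalance plus missing feature values.
import numpy as np
import pandas as pd

def apply_lvrt(df: pd.DataFrame, label_col: str, missing_rate: float = 0.05) -> pd.DataFrame:
    rng = np.random.default_rng(0)
    # Downsample non-majority slice classes toward a 9:1 majority/minority ratio.
    counts = df[label_col].value_counts()
    majority = counts.idxmax()
    parts = [df[df[label_col] == majority]]
    per_minority = len(parts[0]) // 9
    for cls in counts.index.drop(majority):
        rows = df[df[label_col] == cls]
        parts.append(rows.sample(min(len(rows), per_minority), random_state=0))
    out = pd.concat(parts).sample(frac=1.0, random_state=0).reset_index(drop=True)
    # Blank out a fraction of feature cells to mimic systematic missing data.
    features = out.columns.drop(label_col)
    mask = rng.random(out[features].shape) < missing_rate
    out[features] = out[features].mask(mask)
    return out
```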

22 pages, 3646 KB  
Article
Machine Learning in the Classification of RGB Images of Maize (Zea mays L.) Using Texture Attributes and Different Doses of Nitrogen
by Thiago Lima da Silva, Fernanda de Fátima da Silva Devechio, Marcos Silva Tavares, Jamile Raquel Regazzo, Edson José de Souza Sardinha, Liliane Maria Romualdo Altão, Gabriel Pagin, Adriano Rogério Bruno Tech and Murilo Mesquita Baesso
AgriEngineering 2025, 7(10), 317; https://doi.org/10.3390/agriengineering7100317 - 23 Sep 2025
Viewed by 386
Abstract
Nitrogen fertilization is decisive for maize productivity, fertilizer use efficiency, and sustainability, which calls for fast and nondestructive nutritional diagnosis. This study evaluated the classification of maize plant nutritional status from red, green, and blue (RGB) leaf images using texture attributes. A greenhouse experiment was conducted under a completely randomized factorial design with four nitrogen doses, one maize hybrid (Pioneer 30F35), and four replicates, at two sampling times corresponding to distinct phenological stages, totaling thirty-two experimental units. Images were processed with the gray-level co-occurrence matrix computed at three distances (1, 3, and 5 pixels) and four orientations (0°, 45°, 90°, and 135°), yielding eight texture descriptors that served as inputs to five supervised classifiers: an artificial neural network, a support vector machine, k nearest neighbors, a decision tree, and Naive Bayes. The results indicated that texture descriptors discriminated nitrogen doses with good performance and moderate computational cost, and that homogeneity, dissimilarity, and contrast were the most informative attributes. The artificial neural network showed the most stable performance at both stages, followed by the support vector machine and k nearest neighbors, whereas the decision tree and Naive Bayes were less suitable. Confusion matrices and receiver operating characteristic curves indicated greater separability for omission and excess classes, with D1 standing out, and the patterns were consistent with the chemical analysis. Future work should include field validation, multiple seasons and genotypes, integration with spectral indices and multisensor data, application of model explainability techniques, and assessment of latency and scalability in operational scenarios.
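A sketch of the gray-level co-occurrence texture extraction described here, using scikit-image; the distances and orientations match the abstract, while the descriptor list covers the six properties graycoprops exposes rather than the paper's exact set of eight:

```python
# GLCM texture features at distances 1/3/5 px and angles 0/45/90/135 degrees.
import numpy as np
from skimage.feature import graycomatrix, graycoprops  # scikit-image >= 0.19

def glcm_features(gray_leaf: np.ndarray) -> np.ndarray:
    """gray_leaf: 2-D uint8 image of a leaf region; returns a flat feature vector."""
    glcm = graycomatrix(
        gray_leaf,
        distances=[1, 3, 5],
        angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
        levels=256,
        symmetric=True,
        normed=True,
    )
    props = ["contrast", "dissimilarity", "homogeneity", "energy", "correlation", "ASM"]
    # One value per (distance, angle) pair for each property: 6 * 3 * 4 = 72 features.
    return np.concatenate([graycoprops(glcm, p).ravel() for p in props])
```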

18 pages, 456 KB  
Article
Machine Learning-Powered IDS for Gray Hole Attack Detection in VANETs
by Juan Antonio Arízaga-Silva, Alejandro Medina Santiago, Mario Espinosa-Tlaxcaltecatl and Carlos Muñiz-Montero
World Electr. Veh. J. 2025, 16(9), 526; https://doi.org/10.3390/wevj16090526 - 18 Sep 2025
Viewed by 475
Abstract
Vehicular Ad Hoc Networks (VANETs) enable critical communication for Intelligent Transportation Systems (ITS) but are vulnerable to cybersecurity threats, such as Gray Hole attacks, where malicious nodes selectively drop packets, compromising network integrity. Traditional detection methods struggle with the intermittent nature of these attacks, necessitating advanced solutions. Methods: This study proposes a machine learning-based Intrusion Detection System (IDS) to detect Gray Hole attacks in VANETs. Features were extracted from network traffic simulations on NS-3 (a discrete-event network simulator widely used in communication protocol research) and categorized into time-, packet-, and protocol-based attributes. Multiple classifiers, including Random Forest, Support Vector Machine (SVM), Logistic Regression, and Naive Bayes, were evaluated using precision, recall, and F1-score metrics. The Random Forest classifier outperformed the others, achieving an F1-score of 0.9927 with 15 estimators and a depth of 15. In contrast, SVM variants exhibited limitations due to overfitting, with precision and recall below 0.76. Feature analysis highlighted transmission rate and packet/byte counts as the most influential for detection. The Random Forest-based IDS effectively identifies Gray Hole attacks, offering high accuracy and robustness. This approach addresses a critical gap in VANET security, enhancing resilience against sophisticated threats. Future work could explore hybrid models or real-world deployment to further validate the system’s efficacy.
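A sketch of the reported best configuration (Random Forest with 15 estimators and a maximum depth of 15), scored by F1; synthetic features stand in for the NS-3-derived time-, packet-, and protocol-based attributes:

```python
# Random Forest gray-hole detector at the abstract's reported configuration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for NS-3-derived traffic features (1 = gray-hole node).
X, y = make_classification(n_samples=1500, n_features=12, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

rf = RandomForestClassifier(n_estimators=15, max_depth=15, random_state=0).fit(X_tr, y_tr)
print(f"F1 = {f1_score(y_te, rf.predict(X_te)):.4f}")
```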

18 pages, 3374 KB  
Article
Evaluation of Apical Closure in Panoramic Radiographs Using Vision Transformer Architectures: ViT-Based Apical Closure Classification
by Sümeyye Coşgun Baybars, Merve Daldal, Merve Parlak Baydoğan and Seda Arslan Tuncer
Diagnostics 2025, 15(18), 2350; https://doi.org/10.3390/diagnostics15182350 - 16 Sep 2025
Viewed by 396
Abstract
Objective: To evaluate the performance of vision transformer (ViT)-based deep learning models in the classification of open apex on panoramic radiographs (orthopantomograms (OPGs)) and compare their diagnostic accuracy with conventional convolutional neural network (CNN) architectures. Materials and Methods: OPGs were retrospectively collected and labeled by two observers based on apex closure status. Two ViT models (Base Patch16 and Patch32) and three CNN models (ResNet50, VGG19, and EfficientNetB0) were evaluated using eight classifiers (support vector machine (SVM), random forest (RF), XGBoost, logistic regression (LR), K-nearest neighbors (KNN), naïve Bayes (NB), decision tree (DT), and multi-layer perceptron (MLP)). Performance metrics (accuracy, precision, recall, F1 score, and area under the curve (AUC)) were computed. Results: ViT Base Patch16 384 with MLP achieved the highest accuracy (0.8462 ± 0.0330) and AUC (0.914 ± 0.032). Although CNN models like EfficientNetB0 + MLP performed competitively (0.8334 ± 0.0479 accuracy), ViT models demonstrated more balanced and robust performance. Conclusions: ViT models outperformed CNNs in classifying open apex, suggesting their integration into dental radiologic decision support systems. Future studies should focus on multi-center and multimodal data to improve generalizability.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
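A sketch of the ViT-embeddings-plus-classifier pattern evaluated here; the timm library, the frozen backbone, and the scikit-learn MLP head are assumptions for illustration, not the authors' exact stack:

```python
# Extract ViT embeddings, then train a classical classifier head on them.
import timm
import torch
from sklearn.neural_network import MLPClassifier

vit = timm.create_model("vit_base_patch16_384", pretrained=True, num_classes=0)
vit.eval()  # frozen backbone used purely as a feature extractor

@torch.no_grad()
def embed(batch: torch.Tensor) -> torch.Tensor:
    """batch: (N, 3, 384, 384) normalized radiograph crops -> (N, 768) embeddings."""
    return vit(batch)

# features = embed(images).numpy()
# clf = MLPClassifier(max_iter=1000).fit(features_train, labels_train)
```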

11 pages, 833 KB  
Proceeding Paper
Heart Failure Prediction Through a Comparative Study of Machine Learning and Deep Learning Models
by Mohid Qadeer, Rizwan Ayaz and Muhammad Ikhsan Thohir
Eng. Proc. 2025, 107(1), 61; https://doi.org/10.3390/engproc2025107061 - 4 Sep 2025
Viewed by 4728
Abstract
The heart is essential to human life, so it is important to protect it and understand any damage it can sustain. Many heart-related diseases ultimately lead to heart failure. To help address this, a tool for predicting survival is needed. This study explores the use of several classification models for forecasting heart failure outcomes using the Heart Failure Clinical Records dataset. The study contrasts a deep learning (DL) model, the Convolutional Neural Network (CNN), with several machine learning models, including Random Forest (RF), K-Nearest Neighbors (KNN), Decision Tree (DT), and Naïve Bayes (NB). Various data processing techniques, like standard scaling and the Synthetic Minority Oversampling Technique (SMOTE), are used to improve prediction accuracy. The CNN model performs best, achieving 99% accuracy. In comparison, the best-performing ML model, Naïve Bayes, reaches 92.57%. This shows that deep learning provides better predictions of heart failure, making it a useful tool for early detection and better patient care.
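A sketch of the preprocessing named in the abstract, standard scaling plus SMOTE, wrapped in an imbalanced-learn pipeline so that oversampling touches only the training data; Naive Bayes stands in for any of the listed models:

```python
# Scaling + SMOTE + classifier in a leakage-safe imblearn pipeline.
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.naive_bayes import GaussianNB
from sklearn.preprocessing import StandardScaler

pipe = Pipeline([
    ("scale", StandardScaler()),
    ("smote", SMOTE(random_state=0)),  # resampling runs only during fit, never predict
    ("model", GaussianNB()),
])
# pipe.fit(X_train, y_train); pipe.score(X_test, y_test)
```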

26 pages, 3398 KB  
Article
Hybrid Mamba and Attention-Enhanced Bi-LSTM for Obesity Classification and Key Determinant Identification
by Chongyang Fu, Mohd Shahril Nizam Bin Shaharom and Syed Kamaruzaman Bin Syed Ali
Electronics 2025, 14(17), 3445; https://doi.org/10.3390/electronics14173445 - 29 Aug 2025
Viewed by 626
Abstract
Obesity is a major public health challenge linked to increased risks of chronic diseases. Effective prevention and intervention strategies require accurate classification and identification of key determinants. This study aims to develop a robust deep learning framework to enhance the accuracy and interpretability of obesity classification using comprehensive datasets, and to compare its performance with both traditional and state-of-the-art deep learning models. We propose a hybrid deep learning framework that combines an improved Mamba model with an attention-enhanced bidirectional LSTM (ABi-LSTM). The framework utilizes the Obesity and CDC datasets. A feature tokenizer is integrated into the Mamba model to improve scalability and representation learning. Channel-independent processing is employed to prevent overfitting through independent feature analysis. The ABi-LSTM component is used to capture complex temporal dependencies in the data, thereby enhancing classification performance. The proposed framework achieved an accuracy of 93.42%, surpassing existing methods such as ID3 (91.87%), J48 (89.98%), Naïve Bayes (90.31%), Bayesian Network (89.23%), as well as deep learning-based approaches such as VAE (92.12%) and LightCNN (92.50%). Additionally, the model improved sensitivity to 91.11% and specificity to 92.34%. The hybrid model demonstrates superior performance in obesity classification and determinant identification compared to both traditional and advanced deep learning methods. These results underscore the potential of deep learning in enabling data-driven personalized healthcare and targeted obesity interventions.
(This article belongs to the Special Issue Knowledge Representation and Reasoning in Artificial Intelligence)
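A minimal sketch of an attention-enhanced bidirectional LSTM like the ABi-LSTM component; layer sizes are illustrative, and the paper's Mamba branch and feature tokenizer are omitted:

```python
# Bi-LSTM with a learned attention pooling over time steps.
import torch
import torch.nn as nn

class ABiLSTM(nn.Module):
    def __init__(self, n_features: int, hidden: int = 64, n_classes: int = 2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)   # scores each time step
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                       # x: (batch, time, features)
        out, _ = self.lstm(x)                   # (batch, time, 2*hidden)
        weights = torch.softmax(self.attn(out), dim=1)
        context = (weights * out).sum(dim=1)    # attention-pooled sequence summary
        return self.head(context)

logits = ABiLSTM(n_features=16)(torch.randn(8, 10, 16))  # -> shape (8, 2)
```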

12 pages, 811 KB  
Article
Determination of Malignancy Risk Factors Using Gallstone Data and Comparing Machine Learning Methods to Predict Malignancy
by Sirin Cetin, Ayse Ulgen, Ozge Pasin, Hakan Sıvgın and Meryem Cetin
J. Clin. Med. 2025, 14(17), 6091; https://doi.org/10.3390/jcm14176091 - 28 Aug 2025
Viewed by 628
Abstract
Background/Objectives: Gallstone disease, a prevalent and costly digestive system disorder, is influenced by multifactorial risk factors, some of which may predispose to malignancy. This study aims to evaluate the association between gallstone disease and malignancy using advanced machine learning (ML) algorithms. Methods: A dataset comprising approximately 1000 patients was analyzed, employing six ML methods: random forests (RFs), support vector machines (SVMs), multi-layer perceptron (MLP), MLP with PyTorch 2.3.1 (MLP_PT), naive Bayes (NB), and Tabular Prior-data Fitted Network (TabPFN). Comparative performance was assessed using Pearson correlation, sensitivity, specificity, Kappa, receiver operating characteristic (ROC), area under curve (AUC), and accuracy metrics. Results: Our results revealed that age, body mass index (BMI), and history of hormone replacement therapy (HRT) were the most significant predictors of malignancy. Among the ML models, TabPFN emerged as the most effective, achieving superior performance across multiple evaluation criteria. Conclusions: This study highlights the potential of leveraging cutting-edge ML methodologies to uncover complex relationships in clinical datasets, offering a novel perspective on gallstone-related malignancy. By identifying critical risk factors and demonstrating the efficacy of TabPFN, this research provides actionable insights for predictive modeling and personalized patient management in clinical practice.
(This article belongs to the Section General Surgery)
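A sketch of applying the TabPFN classifier the study found strongest; the tabpfn package exposes a scikit-learn-style interface, though the exact constructor arguments vary by version and are an assumption here:

```python
# TabPFN: a pretrained transformer for small tabular classification problems.
from tabpfn import TabPFNClassifier

clf = TabPFNClassifier()                 # defaults assumed; see package docs per version
# clf.fit(X_train, y_train)              # suits small cohorts (~1000 patients)
# malignancy_risk = clf.predict_proba(X_test)[:, 1]
```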

19 pages, 2221 KB  
Article
Leveraging Deep Learning to Enhance Malnutrition Detection via Nutrition Risk Screening 2002: Insights from a National Cohort
by Nadir Yalçın, Merve Kaşıkcı, Burcu Kelleci-Çakır, Kutay Demirkan, Karel Allegaert, Meltem Halil, Mutlu Doğanay and Osman Abbasoğlu
Nutrients 2025, 17(16), 2716; https://doi.org/10.3390/nu17162716 - 21 Aug 2025
Cited by 1 | Viewed by 985
Abstract
Purpose: This study aimed to develop and validate a new machine learning (ML)-based screening tool for a two-step prediction of the need for and type of nutritional therapy (enteral, parenteral, or combined) using Nutrition Risk Screening 2002 (NRS-2002) and other demographic parameters from the Optimal Nutrition Care for All (ONCA) national cohort data. Methods: This multicenter retrospective cohort study included 191,028 patients, with data on age, gender, body mass index (BMI), NRS-2002 score, presence of cancer, and hospital unit type. In the first step, classification models estimated whether patients required nutritional therapy, while the second step predicted the type of therapy. The dataset was divided into 60% training, 20% validation, and 20% test sets. Random Forest (RF), Artificial Neural Network (ANN), deep learning (DL), Elastic Net (EN), and Naive Bayes (NB) algorithms were used for classification. Performance was evaluated using AUC, accuracy, balanced accuracy, MCC, sensitivity, specificity, PPV, NPV, and F1-score. Results: Of the patients, 54.6% were male, 9.2% had cancer, and 49.9% were hospitalized in internal medicine units. According to NRS-2002, 11.6% were at risk of malnutrition (≥3 points). The DL algorithm performed best in both classification steps. The top three variables for determining the need for nutritional therapy were severe illness, reduced dietary intake in the last week, and mildly impaired nutritional status (AUC = 0.933). For determining the type of nutritional therapy, the most important variables were severe illness, severely impaired nutritional status, and ICU admission (AUC = 0.741). Adding gender, cancer status, and ward type to NRS-2002 improved AUC by 0.6% and 3.27% for steps 1 and 2, respectively. Conclusions: Incorporating gender, cancer status, and ward type into the widely used and validated NRS-2002 led to the development of a new scale that accurately classifies nutritional therapy type. This ML-enhanced model has the potential to be integrated into clinical workflows as a decision support system to guide nutritional therapy, although further external validation with larger multinational cohorts is needed. 
(This article belongs to the Section Clinical Nutrition)
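A sketch of the two-step scheme the abstract describes: first predict whether nutritional therapy is needed, then predict the therapy type only for flagged patients. The models and helper function are illustrative stand-ins, not the study's tuned pipeline:

```python
# Two-step cascade: (1) therapy needed? (2) which type, for flagged patients only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def two_step_predict(X, need_model, type_model):
    need = need_model.predict(X)                  # step 1: binary need decision
    therapy = np.full(len(X), "none", dtype=object)
    if need.any():
        # step 2: enteral / parenteral / combined, predicted only where needed
        therapy[need == 1] = type_model.predict(X[need == 1])
    return therapy

# need_model = RandomForestClassifier().fit(X_train, y_need)
# type_model = RandomForestClassifier().fit(X_train[y_need == 1], y_type[y_need == 1])
```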

24 pages, 4431 KB  
Article
Fault Classification in Power Transformers Using Dissolved Gas Analysis and Optimized Machine Learning Algorithms
by Vuyani M. N. Dladla and Bonginkosi A. Thango
Machines 2025, 13(8), 742; https://doi.org/10.3390/machines13080742 - 20 Aug 2025
Viewed by 616
Abstract
Power transformers are critical assets in electrical power systems, yet their fault diagnosis often relies on conventional dissolved gas analysis (DGA) methods such as the Duval Pentagon and Triangle, Key Gas, and Rogers Ratio methods. Even though these methods are commonly used, they present limitations in classification accuracy, concurrent fault identification, and manual sample handling. In this study, a framework of optimized machine learning algorithms that integrates Chi-squared statistical feature selection with Random Search hyperparameter optimization algorithms was developed to enhance transformer fault classification accuracy using DGA data, thereby addressing the limitations of conventional methods and improving diagnostic precision. Utilizing the R2024b MATLAB Classification Learner App, several optimized machine learning algorithms were trained and tested using 282 transformer oil samples with varying DGA gas concentrations obtained from industrial transformers, the IEC TC10 database, and the literature. The optimized and assessed models are Linear Discriminant, Naïve Bayes, Decision Trees, Support Vector Machine, Neural Networks, k-Nearest Neighbor, and the Ensemble Algorithm. From the proposed models, the best performing algorithm, Optimized k-Nearest Neighbor, achieved an overall performance accuracy of 92.478%, followed by the Optimized Neural Network at 89.823%. To assess their performance against the conventional methods, the same dataset used for the optimized machine learning algorithms was used to evaluate the performance of the Duval Triangle and Duval Pentagon methods using VAISALA DGA software version 1.1.0; the proposed models outperformed the conventional methods, which could only achieve a classification accuracy of 35.757% and 30.818%, respectively. This study concludes that the application of the proposed optimized machine learning algorithms can enhance the classification accuracy of DGA-based faults in power transformers, supporting more reliable diagnostics and proactive maintenance strategies.
(This article belongs to the Section Electrical Machines and Drives)
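A sketch of the two optimization stages named here, Chi-squared feature selection followed by Random Search hyperparameter tuning, shown for the best-performing k-NN model (the paper uses MATLAB; this scikit-learn equivalent and its search ranges are illustrative):

```python
# Chi-squared feature selection + randomized hyperparameter search over k-NN.
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import RandomizedSearchCV
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import Pipeline

pipe = Pipeline([
    ("select", SelectKBest(chi2)),   # chi2 needs non-negative inputs; DGA ppm values are
    ("knn", KNeighborsClassifier()),
])
search = RandomizedSearchCV(
    pipe,
    param_distributions={
        "select__k": [3, 5, 7, "all"],
        "knn__n_neighbors": range(1, 30),
        "knn__weights": ["uniform", "distance"],
    },
    n_iter=25, cv=5, random_state=0,
)
# search.fit(X_dga, y_fault); print(search.best_params_, search.best_score_)
```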

18 pages, 1127 KB  
Article
Comparative Analysis of Machine Learning Techniques in Enhancing Acoustic Noise Loggers’ Leak Detection
by Samer El-Zahab, Eslam Mohammed Abdelkader, Ali Fares and Tarek Zayed
Water 2025, 17(16), 2427; https://doi.org/10.3390/w17162427 - 17 Aug 2025
Viewed by 3187
Abstract
Urban areas face a significant challenge with water pipeline leaks, resulting in resource wastage and economic consequences. The application of noise logger sensors, integrated with ensemble machine learning, emerges as a promising real-time monitoring solution, enhancing efficiency in Water Distribution Networks (WDNs) and mitigating environmental impacts. The paper investigates the integrated use of Noise Loggers with machine learning models, including Support Vector Machines (SVMs), Random Forest (RF), Naïve Bayes (NB), K-Nearest Neighbors (KNN), Decision Tree (DT), Logistic Regression (LogR), Multi-Layer Perceptron (MLP), and YamNet, along with ensemble models, for effective leak detection. The study utilizes a dataset comprising 2110 sound signals collected from various locations in Hong Kong through wireless acoustic Noise Loggers. The RF model stands out with 93.68% accuracy, followed closely by KNN at 93.40%, and MLP with 92.15%, demonstrating machine learning’s potential in scrutinizing acoustic signals. The ensemble model, combining these diverse models, achieves an impressive 94.40% accuracy, surpassing individual models and YamNet. The comparison of various machine learning models provides researchers with valuable insights into the use of machine learning for leak detection applications. Additionally, this paper introduces a novel method to develop a robust ensemble leak detection model by selecting the most performing machine learning models.
(This article belongs to the Special Issue Advances in Management and Optimization of Urban Water Networks)
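A sketch of one way to combine the best individual detectors into an ensemble, as the paper proposes; soft voting over RF, KNN, and MLP is a plausible reading of "selecting the most performing models", not the authors' exact recipe:

```python
# Soft-voting ensemble over the three strongest individual leak detectors.
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

ensemble = VotingClassifier(
    estimators=[
        ("rf", RandomForestClassifier(random_state=0)),
        ("knn", KNeighborsClassifier()),
        ("mlp", MLPClassifier(max_iter=1000, random_state=0)),
    ],
    voting="soft",  # average predicted probabilities across members
)
# ensemble.fit(X_train, y_train)  # X: features extracted from logger sound signals
```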

22 pages, 4300 KB  
Article
Optimised DNN-Based Agricultural Land Mapping Using Sentinel-2 and Landsat-8 with Google Earth Engine
by Nisha Sharma, Sartajvir Singh and Kawaljit Kaur
Land 2025, 14(8), 1578; https://doi.org/10.3390/land14081578 - 1 Aug 2025
Viewed by 1319
Abstract
Agriculture is the backbone of Punjab’s economy, and with much of India’s population dependent on agriculture, the requirement for accurate and timely monitoring of land has become even more crucial. Blending remote sensing with state-of-the-art machine learning algorithms enables the detailed classification of agricultural lands through thematic mapping, which is critical for crop monitoring, land management, and sustainable development. Here, a Hyper-tuned Deep Neural Network (Hy-DNN) model was created and used for land use and land cover (LULC) classification into four classes: agricultural land, vegetation, water bodies, and built-up areas. The technique made use of multispectral data from Sentinel-2 and Landsat-8, processed on the Google Earth Engine (GEE) platform. To measure classification performance, Hy-DNN was contrasted with traditional classifiers—Convolutional Neural Network (CNN), Random Forest (RF), Classification and Regression Tree (CART), Minimum Distance Classifier (MDC), and Naive Bayes (NB)—using performance metrics including producer’s and consumer’s accuracy, Kappa coefficient, and overall accuracy. Hy-DNN performed the best, with an overall accuracy of 97.60% using Sentinel-2 and 91.10% using Landsat-8, outperforming all base models. These results further highlight the superiority of the optimised Hy-DNN in agricultural land mapping and its potential use in crop health monitoring, disease diagnosis, and strategic agricultural planning.
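A sketch of a small dense network for the four LULC classes, with depth and width exposed as the kind of knobs a hyper-tuning step would search; PyTorch is an assumption here, and the paper's GEE preprocessing and tuning procedure are omitted:

```python
# Dense network over per-pixel band values for 4-class LULC classification.
import torch
import torch.nn as nn

def make_dnn(n_bands: int, hidden: int = 64, depth: int = 3) -> nn.Sequential:
    layers, width = [], n_bands
    for _ in range(depth):                 # depth and width are the tunable knobs
        layers += [nn.Linear(width, hidden), nn.ReLU()]
        width = hidden
    layers.append(nn.Linear(width, 4))     # agriculture, vegetation, water, built-up
    return nn.Sequential(*layers)

logits = make_dnn(n_bands=10)(torch.randn(32, 10))  # e.g., 10 spectral bands per pixel
```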

18 pages, 4863 KB  
Article
Evaluation of Explainable, Interpretable and Non-Interpretable Algorithms for Cyber Threat Detection
by José Ramón Trillo, Felipe González-López, Juan Antonio Morente-Molinera, Roberto Magán-Carrión and Pablo García-Sánchez
Electronics 2025, 14(15), 3073; https://doi.org/10.3390/electronics14153073 - 31 Jul 2025
Viewed by 528
Abstract
As anonymity-enabling technologies such as VPNs and proxies become increasingly exploited for malicious purposes, detecting traffic associated with such services emerges as a critical first step in anticipating potential cyber threats. This study analyses a network traffic dataset focused on anonymised IP addresses—not direct attacks—to evaluate and compare explainable, interpretable, and opaque machine learning models. Through advanced preprocessing and feature engineering, we examine the trade-off between model performance and transparency in the early detection of suspicious connections. We evaluate explainable ML-based models such as k-nearest neighbours, fuzzy algorithms, decision trees, and random forests, alongside interpretable models like naïve Bayes and support vector machines, and non-interpretable algorithms such as neural networks. Results show that neural networks achieve the highest performance, with a macro F1-score of 0.8786, but explainable models like HFER offer strong performance (macro F1-score = 0.6106) with greater interpretability. The choice of algorithm depends on project-specific needs: neural networks excel in accuracy, while explainable algorithms are preferred for resource efficiency and transparency. This work underscores the importance of aligning cybersecurity strategies with operational requirements, providing insights into balancing performance with interpretability.
(This article belongs to the Special Issue Network Security and Cryptography Applications)
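A sketch of the performance-versus-transparency comparison on a shared split, scoring an interpretable tree against an opaque neural network by macro F1; the data is a synthetic stand-in, not the paper's anonymised-traffic features:

```python
# Interpretable vs. opaque model on the same split, compared by macro F1.
from sklearn.datasets import make_classification
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=15, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, model in [("decision tree (interpretable)", DecisionTreeClassifier(random_state=0)),
                    ("neural network (opaque)", MLPClassifier(max_iter=1000, random_state=0))]:
    pred = model.fit(X_tr, y_tr).predict(X_te)
    print(f"{name}: macro F1 = {f1_score(y_te, pred, average='macro'):.4f}")
```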
