Search Results (8,153)

Search Parameters:
Keywords = machine learning—ML

42 pages, 2372 KB  
Systematic Review
The Road to Autonomy: A Systematic Review Through AI in Autonomous Vehicles
by Adrian Domenteanu, Paul Diaconu, Margareta-Stela Florescu and Camelia Delcea
Electronics 2025, 14(21), 4174; https://doi.org/10.3390/electronics14214174 (registering DOI) - 25 Oct 2025
Abstract
In the last decade, the incorporation of Artificial Intelligence (AI) with autonomous vehicles (AVs) has transformed transportation, mobility, and smart mobility systems. The present study provides a systematic review of global trends, applications, and challenges at the intersection of AI, including Machine Learning (ML), Deep Learning (DL), and autonomous vehicle technologies. Using data extracted from Clarivate Analytics’ Web of Science Core Collection and a set of specific keywords related to both AI and autonomous (electric) vehicles, this paper identifies the themes presented in the scientific literature using thematic maps and thematic map evolution analysis. Furthermore, the research topics are identified using thematic maps as well as Latent Dirichlet Allocation (LDA) and BERTopic, offering a more multifaceted insight into the research field: LDA enables the probabilistic discovery of high-level research themes, while BERTopic, based on transformer-based language models, captures deeper semantic patterns and emerging topics over time. This approach offers richer insights into the systematic review analysis, and comparison of the results obtained through the various methods considered leads to a better overview of the themes associated with the field of AI in autonomous vehicles. As a result, a strong correspondence can be observed between core topics, such as object detection, driving models, control, safety, cybersecurity, and system vulnerabilities. The findings offer a roadmap for researchers and industry practitioners by outlining critical gaps and discussing opportunities for future exploration. Full article
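As a rough illustration of the two topic-modeling routes this abstract contrasts, the sketch below runs scikit-learn's LDA on a toy corpus of invented abstract snippets and shows where a BERTopic call would slot in (it requires the separate bertopic package). This is not the authors' code or data, only a minimal sketch of the technique.

```python
# Toy sketch, not the study's pipeline: LDA topics from a bag-of-words model.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = [  # invented stand-ins for paper abstracts
    "object detection for autonomous driving with deep learning",
    "reinforcement learning based control of autonomous vehicles",
    "cybersecurity vulnerabilities in connected vehicle systems",
]

vec = CountVectorizer(stop_words="english")
X = vec.fit_transform(docs)
lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(X)
terms = vec.get_feature_names_out()
for k, weights in enumerate(lda.components_):
    top = [terms[i] for i in weights.argsort()[-5:][::-1]]
    print(f"LDA topic {k}: {top}")

# BERTopic (transformer embeddings + clustering), if the bertopic package is installed:
# from bertopic import BERTopic
# topics, probs = BERTopic(min_topic_size=2).fit_transform(docs)
```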
22 pages, 6015 KB  
Article
Data-Driven Estimation of Reference Evapotranspiration in Paraguay from Geographical and Temporal Predictors
by Bilal Cemek, Erdem Küçüktopçu, Maria Gabriela Fleitas Ortellado and Halis Simsek
Appl. Sci. 2025, 15(21), 11429; https://doi.org/10.3390/app152111429 (registering DOI) - 25 Oct 2025
Abstract
Reference evapotranspiration (ET0) is a fundamental variable for irrigation scheduling and water management. Conventional estimation methods, such as the FAO-56 Penman–Monteith equation, are of limited use in developing regions where meteorological data are scarce. This study evaluates the potential of machine learning (ML) approaches to estimate ET0 in Paraguay, using only geographical and temporal predictors—latitude, longitude, altitude, and month. Five algorithms were tested: artificial neural networks (ANNs), k-nearest neighbors (KNN), random forest (RF), extreme gradient boosting (XGB), and adaptive neuro-fuzzy inference systems (ANFISs). The framework consisted of ET0 calculation, baseline model testing (ML techniques), ensemble modeling, leave-one-station-out validation, and spatial interpolation by inverse distance weighting. ANFIS achieved the highest prediction accuracy (R2 = 0.950, RMSE = 0.289 mm day−1, MAE = 0.202 mm day−1), while RF and XGB showed stable and reliable performance across all stations. Spatial maps highlighted strong seasonal variability, with higher ET0 values in the Chaco region in summer and lower values in winter. These results confirm that ML algorithms can generate robust ET0 estimates under data-constrained conditions, and provide scalable and cost-effective solutions for irrigation management and agricultural planning in Paraguay. Full article
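A hedged sketch of the leave-one-station-out validation this abstract describes, using scikit-learn's LeaveOneGroupOut; the features, targets, and station IDs below are synthetic placeholders, not the Paraguayan dataset.

```python
# Minimal leave-one-station-out sketch: each fold holds out all records of one station.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)
n = 240
X = np.column_stack([
    rng.uniform(-27, -19, n),   # latitude (placeholder range)
    rng.uniform(-62, -54, n),   # longitude
    rng.uniform(50, 400, n),    # altitude (m)
    rng.integers(1, 13, n),     # month
])
y = 3 + 2 * np.sin((X[:, 3] - 1) / 12 * 2 * np.pi) + rng.normal(0, 0.3, n)  # mock ET0
stations = rng.integers(0, 12, n)  # 12 hypothetical weather stations

rmses = []
for train, test in LeaveOneGroupOut().split(X, y, groups=stations):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X[train], y[train])
    rmses.append(mean_squared_error(y[test], model.predict(X[test])) ** 0.5)
print(f"leave-one-station-out RMSE: {np.mean(rmses):.3f} mm/day")
```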
29 pages, 2242 KB  
Systematic Review
Artificial Intelligence for Optimizing Solar Power Systems with Integrated Storage: A Critical Review of Techniques, Challenges, and Emerging Trends
by Raphael I. Areola, Abayomi A. Adebiyi and Katleho Moloi
Electricity 2025, 6(4), 60; https://doi.org/10.3390/electricity6040060 (registering DOI) - 25 Oct 2025
Abstract
The global transition toward sustainable energy has significantly accelerated the deployment of solar power systems. Yet, the inherent variability of solar energy continues to present considerable challenges in ensuring its stable and efficient integration into modern power grids. As the demand for clean and dependable energy sources intensifies, the integration of artificial intelligence (AI) with solar systems, particularly those coupled with energy storage, has emerged as a promising and increasingly vital solution. This review explores the practical applications of machine learning (ML), deep learning (DL), fuzzy logic, and emerging generative AI models, focusing on their roles in areas such as solar irradiance forecasting, energy management, fault detection, and overall operational optimisation. Alongside these advancements, the review also addresses persistent challenges, including data limitations, difficulties in model generalization, and the integration of AI in real-time control scenarios. We included peer-reviewed journal articles published between 2015 and 2025 that apply AI methods to PV + ESS, with empirical evaluation. We excluded studies lacking evaluation against baselines or those focusing solely on PV or ESS in isolation. We searched IEEE Xplore, Scopus, Web of Science, and Google Scholar up to 1 July 2025. Two reviewers independently screened titles/abstracts and full texts; disagreements were resolved via discussion. Risk of bias was assessed with a custom tool evaluating validation method, dataset partitioning, baseline comparison, overfitting risk, and reporting clarity. Results were synthesized narratively by grouping AI techniques (forecasting, MPPT/control, dispatch, data augmentation). We screened 412 records and included 67 studies published between 2018 and 2025, following a documented PRISMA process. The review revealed that AI-driven techniques significantly enhance performance in solar + battery energy storage system (BESS) applications. In solar irradiance and PV output forecasting, deep learning models, in particular long short-term memory (LSTM) and hybrid convolutional neural network–LSTM (CNN–LSTM) architectures, repeatedly outperform conventional statistical methods, obtaining significantly lower Root Mean Square Error (RMSE) and Mean Absolute Error (MAE) and higher R-squared. Smarter energy dispatch and market-based storage decisions are made possible by reinforcement learning and deep reinforcement learning frameworks, which increase economic returns and lower curtailment risks. Furthermore, hybrid metaheuristic–AI optimisation improves control tuning and system sizing with increased efficiency and convergence. In conclusion, AI enables transformative gains in forecasting, dispatch, and optimisation for solar-BESSs. Future efforts should focus on explainable, robust AI models, standardized benchmark datasets, and real-world pilot deployments to ensure scalability, reliability, and stakeholder trust. Full article
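For readers unfamiliar with the LSTM-style irradiance forecasting the review highlights, here is a hedged Keras sketch on a synthetic series; the data, window length, and training settings are placeholders, not any reviewed study's setup.

```python
# Sketch of one-step-ahead forecasting with an LSTM on a mock irradiance series.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
series = np.sin(np.linspace(0, 60, 2000)) + 0.1 * rng.normal(size=2000)  # mock irradiance

window = 24  # past 24 samples predict the next one
X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
y = series[window:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(window, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=64, verbose=0)
print("1-step-ahead forecast:", float(model.predict(X[-1:], verbose=0)[0, 0]))
```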
21 pages, 2903 KB  
Review
Nematode Detection and Classification Using Machine Learning Techniques: A Review
by Arjun Neupane, Tej Bahadur Shahi, Richard Koech, Kerry Walsh and Philip Kibet Langat
Agronomy 2025, 15(11), 2481; https://doi.org/10.3390/agronomy15112481 (registering DOI) - 25 Oct 2025
Abstract
Nematode identification and quantification are critical for understanding their impact on agricultural ecosystems. However, traditional methods rely on specialised expertise in nematology, making the process costly and time-consuming. Recent developments in technologies such as Artificial Intelligence (AI) and computer vision (CV) offer promising alternatives for automating nematode identification and counting at scale. This work reviews the current literature on nematode detection using AI techniques, focusing on their application, performance, and limitations. First, we discuss various image analysis, machine learning (ML), and deep learning (DL) methods, including You Only Look Once (YOLO) models, and evaluate their effectiveness in detecting and classifying nematodes. Second, we compare and contrast the performance of ML- and DL-based approaches on different nematode datasets. Next, we highlight how these techniques can support sustainable agricultural practices and optimise crop productivity. Finally, we conclude by outlining the key opportunities and challenges in integrating ML and DL methods for precise and efficient nematode management. Full article
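To make the YOLO-based detection surveyed above concrete, the sketch below uses the ultralytics package with generic pretrained weights; the weights file and image path are placeholders, and in practice a model fine-tuned on annotated nematode micrographs would be required.

```python
# Hedged sketch of YOLO inference; not a nematode-specific model.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                       # generic pretrained weights (placeholder)
# model.train(data="nematodes.yaml", epochs=50)  # hypothetical fine-tuning config
results = model("micrograph_001.jpg")            # placeholder image path
for box in results[0].boxes:
    print(box.cls, box.conf, box.xyxy)           # class id, confidence, bounding box
```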
33 pages, 1433 KB  
Article
Hybrid Time Series Transformer–Deep Belief Network for Robust Anomaly Detection in Mobile Communication Networks
by Anita Ershadi Oskouei, Mehrdad Kaveh, Francisco Hernando-Gallego and Diego Martín
Symmetry 2025, 17(11), 1800; https://doi.org/10.3390/sym17111800 (registering DOI) - 25 Oct 2025
Abstract
The rapid evolution of 5G and emerging 6G networks has increased system complexity, data volume, and security risks, making anomaly detection vital for ensuring reliability and resilience. However, existing machine learning (ML)-based approaches still face challenges related to poor generalization, weak temporal modeling, and degraded accuracy under heterogeneous and imbalanced real-world conditions. To overcome these limitations, a hybrid time series transformer–deep belief network (HTST-DBN) is introduced, integrating the sequential modeling strength of TST with the hierarchical feature representation of DBN, while an improved orchard algorithm (IOA) performs adaptive hyper-parameter optimization. The framework also embodies the concept of symmetry and asymmetry. The IOA introduces controlled symmetry-breaking between exploration and exploitation, while the TST captures symmetric temporal patterns in network traffic whose asymmetric deviations often indicate anomalies. The proposed method is evaluated across four benchmark datasets (ToN-IoT, 5G-NIDD, CICDDoS2019, and Edge-IoTset) that capture diverse network environments, including 5G core traffic, IoT telemetry, mobile edge computing, and DDoS attacks. Experimental evaluation is conducted by benchmarking HTST-DBN against several state-of-the-art models, including TST, bidirectional encoder representations from transformers (BERT), DBN, deep reinforcement learning (DRL), convolutional neural network (CNN), and random forest (RF) classifiers. The proposed HTST-DBN achieves outstanding performance, with the highest accuracy reaching 99.61%, alongside strong recall and area under the curve (AUC) scores. The HTST-DBN framework presents a scalable and reliable solution for anomaly detection in next-generation mobile networks. Its hybrid architecture, reinforced by hyper-parameter optimization, enables effective learning in complex, dynamic, and heterogeneous environments, making it suitable for real-world deployment in future 5G/6G infrastructures. Full article
(This article belongs to the Special Issue AI-Driven Optimization for EDA: Balancing Symmetry and Asymmetry)
20 pages, 944 KB  
Article
Predicting Corrosion Behaviour of Magnesium Alloy Using Machine Learning Approaches
by Tülay Yıldırım and Hüseyin Zengin
Metals 2025, 15(11), 1183; https://doi.org/10.3390/met15111183 (registering DOI) - 24 Oct 2025
Abstract
The primary objective of this study is to develop a machine learning-based predictive model using corrosion rate data for magnesium alloys compiled from the literature. Corrosion rates measured under different deformation rates and heat treatment parameters were analyzed using artificial intelligence algorithms. Variables such as chemical composition, heat treatment temperature and time, deformation state, pH, test method, and test duration were used as inputs in the dataset. Various regression algorithms were compared with the PyCaret AutoML library, and the models with the highest accuracy scores were analyzed with Gradient Extra Trees and AdaBoost regression methods. The findings of this study demonstrate that modelling corrosion behaviour by integrating chemical composition with experimental conditions and processing parameters substantially enhances predictive accuracy. The regression models, developed using the PyCaret library, achieved high accuracy scores, producing corrosion rate predictions that are remarkably consistent with experimental values reported in the literature. Detailed tables and figures confirm that the most influential factors governing corrosion were successfully identified, providing valuable insights into the underlying mechanisms. These results highlight the potential of AI-assisted decision systems as powerful tools for material selection and experimental design, and, when supported by larger databases, for predicting the corrosion life of magnesium alloys and guiding the development of new alloys. Full article
(This article belongs to the Section Computation and Simulation on Metals)
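A hedged sketch of the PyCaret AutoML comparison described in the abstract above; the CSV file name and column names are invented placeholders, not the study's actual dataset.

```python
# Sketch of an AutoML regression comparison with PyCaret on a hypothetical dataset.
import pandas as pd
from pycaret.regression import setup, compare_models, predict_model

df = pd.read_csv("mg_corrosion_dataset.csv")  # placeholder: composition, heat treatment,
                                              # deformation state, pH, test method, duration
setup(data=df, target="corrosion_rate", session_id=42, train_size=0.8)

# Rank candidate regressors by cross-validated error and keep the two best
best_models = compare_models(n_select=2, sort="RMSE")

for model in best_models:
    holdout = predict_model(model)  # predictions on the held-out split
    print(type(model).__name__, holdout.head())
```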
19 pages, 1572 KB  
Article
Exploring the Impact of Cooling Environments on the Machinability of AM-AlSi10Mg: Optimizing Cooling Techniques and Predictive Modelling
by Zhenhua Dou, Kai Guo, Jie Sun and Xiaoming Huang
Machines 2025, 13(11), 984; https://doi.org/10.3390/machines13110984 (registering DOI) - 24 Oct 2025
Abstract
Additively manufactured (AM) aluminum (Al) alloys are widely used in sectors such as automotive, manufacturing, and aerospace because of their distinctive mechanical properties, notably their light weight. AlSi10Mg made by laser powder bed fusion (LPBF) is one of the most promising materials because it has a high strength-to-weight ratio, good thermal resistance, and good corrosion resistance. However, machining AlSi10Mg parts remains challenging because of the distinctive microstructural properties arising from the production process. This research investigates the machining efficacy of the AM-AlSi10Mg alloy in distinct cutting conditions (dry, flood, chilled air, and minimal quantity lubrication with castor oil). The study assesses how different cooling conditions affect important performance metrics such as cutting temperature, surface roughness, and tool wear. Due to castor oil’s superior lubricating and film-forming properties, MQL (Minimal Quantity Lubrication) reduces heat generation between 80 °C and 98 °C for the distinct speed–feed combinations. The Multi-Objective Optimization by Ratio Analysis (MOORA) approach is used to determine the ideal cooling and machining conditions (MQL, Vc of 90 m/min, and fr of 0.05 mm/rev). The relative closeness values derived from the MOORA approach were used to predict machining results using machine learning (ML) models (MLP, GPR, and RF). The MLP showed the strongest relationship between the measured and predicted values, with R values of 0.9995 in training and 0.9993 in testing. Full article
(This article belongs to the Special Issue Neural Networks Applied in Manufacturing and Design)
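A compact illustration of the MOORA ratio analysis named in the abstract above: normalize the decision matrix, then score each alternative as the weighted sum of benefit criteria minus cost criteria. The alternatives, criteria values, and equal weights below are made up for clarity and are not the study's measurements.

```python
# Toy MOORA (Multi-Objective Optimization by Ratio Analysis) scoring.
import numpy as np

# rows = cooling/machining alternatives, columns = criteria (all to be minimized here)
# criteria: [cutting temperature, surface roughness, tool wear]  -- placeholder values
decision = np.array([
    [150.0, 1.20, 0.30],   # dry
    [110.0, 0.90, 0.22],   # flood
    [120.0, 0.95, 0.25],   # chilled air
    [ 90.0, 0.70, 0.15],   # MQL with castor oil
])
benefit = np.array([False, False, False])              # True where larger is better
weights = np.full(decision.shape[1], 1 / decision.shape[1])  # equal weights (assumption)

# Vector normalization, then weighted benefit-minus-cost sum per alternative
norm = decision / np.sqrt((decision ** 2).sum(axis=0))
scores = (norm * weights * np.where(benefit, 1, -1)).sum(axis=1)
ranking = scores.argsort()[::-1]
print("best alternative index:", ranking[0], "scores:", scores.round(4))
```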
14 pages, 694 KB  
Article
Machine Learning for ADHD Diagnosis: Feature Selection from Parent Reports, Self-Reports and Neuropsychological Measures
by Yun-Wei Dai and Chia-Fen Hsu
Children 2025, 12(11), 1448; https://doi.org/10.3390/children12111448 (registering DOI) - 24 Oct 2025
Abstract
Background: Attention-deficit/hyperactivity disorder (ADHD) is a heterogeneous neurodevelopmental condition that currently relies on subjective clinical judgment for diagnosis, emphasizing the need for objective, clinically applicable tools. Methods: We applied machine learning techniques to parent reports, self-reports, and performance-based measures in a sample of 255 Taiwanese children and adolescents (108 ADHD and 147 controls; mean age = 11.85 years). Models were trained under a nested cross-validation framework to avoid performance overestimation. Results: Most models achieved high classification accuracy (AUCs ≈ 0.886–0.906), while convergent feature importance across models highlighted parent-rated social problems, executive dysfunction, and self-regulation traits as robust predictors. Additionally, ex-Gaussian parameters derived from reaction time distributions on the Continuous Performance Test (CPT) proved more informative than raw scores. Conclusions: These findings support the utility of integrating multi-informant ratings and task-based measures in interpretable ML models to enhance ADHD diagnosis in clinical practice. Full article
(This article belongs to the Special Issue Attention Deficit/Hyperactivity Disorder in Children and Adolescents)
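The nested cross-validation mentioned in the abstract above (an inner loop tunes hyper-parameters, an outer loop estimates performance) can be sketched with scikit-learn as below; the features and labels are synthetic placeholders, not the clinical sample.

```python
# Nested CV sketch: GridSearchCV inside cross_val_score avoids optimistic AUC estimates.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=255, n_features=30, weights=[0.58], random_state=0)

inner = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=1)

grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 5]},
    scoring="roc_auc",
    cv=inner,
)
auc = cross_val_score(grid, X, y, scoring="roc_auc", cv=outer)
print(f"nested-CV AUC: {auc.mean():.3f} ± {auc.std():.3f}")
```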
28 pages, 1050 KB  
Perspective
Toward Artificial Intelligence in Oncology and Cardiology: A Narrative Review of Systems, Challenges, and Opportunities
by Visar Vela, Ali Yasin Sonay, Perparim Limani, Lukas Graf, Besmira Sabani, Diona Gjermeni, Andi Rroku, Arber Zela, Era Gorica, Hector Rodriguez Cetina Biefer, Uljad Berdica, Euxhen Hasanaj, Adisa Trnjanin, Taulant Muka and Omer Dzemali
J. Clin. Med. 2025, 14(21), 7555; https://doi.org/10.3390/jcm14217555 (registering DOI) - 24 Oct 2025
Abstract
Background: Artificial intelligence (AI), the overarching field that includes machine learning (ML) and its subfield deep learning (DL), is rapidly transforming clinical research by enabling the analysis of high-dimensional data and automating the output of diagnostic and prognostic tests. As clinical trials become increasingly complex and costly, ML-based approaches (especially DL for image and signal data) offer promising solutions, although they require new approaches in clinical education. Objective: Explore current and emerging AI applications in oncology and cardiology, highlight real-world use cases, and discuss the challenges and future directions for responsible AI adoption. Methods: This narrative review summarizes various aspects of AI technology in clinical research, exploring its promise, use cases, and limitations. The review was based on a literature search in PubMed covering publications from 2019 to 2025. Search terms included “artificial intelligence”, “machine learning”, “deep learning”, “oncology”, “cardiology”, “digital twin”, and “AI-ECG”. Preference was given to studies presenting validated or clinically applicable AI tools, while non-English articles, conference abstracts, and gray literature were excluded. Results: AI demonstrates significant potential in improving diagnostic accuracy, facilitating biomarker discovery, and detecting disease at an early stage. In clinical trials, AI improves patient stratification, site selection, and virtual simulations via digital twins. However, there are still challenges in harmonizing data, validating models, providing cross-disciplinary training, and ensuring fairness, explainability, and the robustness of the gold standards against which AI models are built. Conclusions: The integration of AI in clinical research can enhance efficiency, reduce costs, and lead the way towards personalized medicine. Realizing this potential requires robust validation frameworks, transparent model interpretability, and collaborative efforts among clinicians, data scientists, and regulators. Interoperable data systems and cross-disciplinary education will be critical to enabling the integration of scalable, ethical, and trustworthy AI into healthcare. Full article
(This article belongs to the Section Clinical Research Methods)
47 pages, 36851 KB  
Article
Comparative Analysis of ML and DL Models for Data-Driven SOH Estimation of LIBs Under Diverse Temperature and Load Conditions
by Seyed Saeed Madani, Marie Hébert, Loïc Boulon, Alexandre Lupien-Bédard and François Allard
Batteries 2025, 11(11), 393; https://doi.org/10.3390/batteries11110393 (registering DOI) - 24 Oct 2025
Abstract
Accurate estimation of lithium-ion battery (LIB) state of health (SOH) underpins safe operation, predictive maintenance, and lifetime-aware energy management. Despite recent advances in machine learning (ML), systematic benchmarking across heterogeneous real-world cells remains limited, often confounded by data leakage and inconsistent validation. Here, we establish a leakage-averse, cross-battery evaluation framework encompassing 32 commercial LIBs (B5–B56) spanning diverse cycling histories and temperatures (≈4 °C, 24 °C, 43 °C). Models ranging from classical regressors to ensemble trees and deep sequence architectures were assessed under blocked 5-fold GroupKFold splits using RMSE, MAE, R2 with confidence intervals, and inference latency. The results reveal distinct stratification among model families. Sequence-based architectures—CNN–LSTM, GRU, and LSTM—consistently achieved the highest accuracy (mean RMSE ≈ 0.006; per-cell R2 up to 0.996), demonstrating strong generalization across regimes. Gradient-boosted ensembles such as LightGBM and CatBoost delivered competitive mid-tier accuracy (RMSE ≈ 0.012–0.015) yet unrivaled computational efficiency (≈0.001–0.003 ms), confirming their suitability for embedded applications. Transformer-based hybrids underperformed, while approximately one-third of cells exhibited elevated errors linked to noise or regime shifts, underscoring the necessity of rigorous evaluation design. Collectively, these findings establish clear deployment guidelines: CNN–LSTM and GRU are recommended where robustness and accuracy are paramount (cloud and edge analytics), while LightGBM and CatBoost offer optimal latency–efficiency trade-offs for embedded controllers. Beyond model choice, the study highlights data curation and leakage-averse validation as critical enablers for transferable and reliable SOH estimation. This benchmarking framework provides a robust foundation for future integration of ML models into real-world battery management systems. Full article
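The leakage-averse, blocked GroupKFold split described above (all cycles of a given cell stay in the same fold) can be sketched with scikit-learn as follows; the cycle-level features, SOH values, and cell IDs are synthetic stand-ins, not the benchmark data.

```python
# GroupKFold sketch: grouping by cell ID prevents the same battery from
# appearing in both the training and test folds.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(1)
n = 1600
cell_id = rng.integers(0, 32, n)              # 32 cells, as in the benchmark
X = rng.normal(size=(n, 8))                   # placeholder cycle-level features
y = 1.0 - 0.002 * rng.gamma(2.0, 1.0, n)      # mock SOH values near 1.0

for fold, (tr, te) in enumerate(GroupKFold(n_splits=5).split(X, y, groups=cell_id)):
    model = GradientBoostingRegressor().fit(X[tr], y[tr])
    rmse = np.sqrt(np.mean((model.predict(X[te]) - y[te]) ** 2))
    assert set(cell_id[tr]).isdisjoint(cell_id[te])  # no cell in both splits
    print(f"fold {fold}: RMSE={rmse:.4f}")
```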
65 pages, 3348 KB  
Systematic Review
The Role of Graph Neural Networks, Transformers, and Reinforcement Learning in Network Threat Detection: A Systematic Literature Review
by Thilina Prasanga Doremure Gamage, Jairo A. Gutierrez and Sayan K. Ray
Electronics 2025, 14(21), 4163; https://doi.org/10.3390/electronics14214163 (registering DOI) - 24 Oct 2025
Abstract
Traditional network threat detection based on signatures is becoming increasingly inadequate as network threats and attacks continue to grow in their novelty and sophistication. Such advanced network threats are better handled by anomaly detection based on Machine Learning (ML) models. However, conventional anomaly-based network threat detection with traditional ML and Deep Learning (DL) faces fundamental limitations. Graph Neural Networks (GNNs) and Transformers are recent deep learning models with innovative architectures, capable of addressing these challenges. Reinforcement learning (RL) can facilitate adaptive learning strategies for GNN- and Transformer-based Intrusion Detection Systems (IDS). However, no systematic literature review (SLR) has jointly analyzed and synthesized these three powerful modeling algorithms in network threat detection. To address this gap, this SLR analyzed 36 peer-reviewed studies published between 2017 and 2025, collectively identifying 56 distinct network threats via the proposed threat classification framework by systematically mapping them to Enterprise MITRE ATT&CK tactics and their corresponding Cyber Kill Chain stages. The reviewed literature consists of 23 GNN-based studies implementing 19 GNN model types, 9 Transformer-based studies implementing 13 Transformer architectures, and 4 RL-based studies with 5 different RL algorithms, evaluated across 50 distinct datasets, demonstrating their overall effectiveness in network threat detection. Full article
(This article belongs to the Special Issue AI-Enhanced Security: Advancing Threat Detection and Defense)
26 pages, 1535 KB  
Article
Prognostic and Predictive Significance of B7-H3 and CD155 Expression in Gastric Cancer Patients
by Ozlem Dalda, Zehra Bozdag, Sami Akbulut, Hasan Gokce, Yasin Dalda, Ayse Nur Akatli and Mustafa Huz
Diagnostics 2025, 15(21), 2695; https://doi.org/10.3390/diagnostics15212695 (registering DOI) - 24 Oct 2025
Abstract
Background/Objectives: This study aimed to characterize the expression patterns of B7 homolog 3 (B7-H3) and cluster of differentiation 155 (CD155), two immune-related transmembrane glycoproteins, in resectable gastric adenocarcinoma and to elucidate their clinicopathological, prognostic, and molecular implications. Methods: The study included 112 patients who underwent gastrectomy for gastric adenocarcinoma between 2020 and 2025, along with 30 samples of normal gastric tissue obtained from sleeve gastrectomy specimens. Histological subtype, grade of differentiation, TNM stage, and invasion parameters were re-evaluated. Immunohistochemical expression of B7-H3 and CD155 was quantified for membranous, stromal and membranous/cytoplasmic staining patterns. Quantitative reverse transcription polymerase chain reaction (RT-PCR) was performed on 29 tumor and 25 normal samples to confirm mRNA expression levels, with fold change ≥2 considered biologically significant upregulation and ≤0.5 considered downregulation. Machine learning models were developed to predict metastasis and mortality based on clinical and immunohistochemical features. Results: 78.5% of tumors were at an advanced stage (T3–T4), and metastasis was present in 22.3% of patients. Perineural invasion (PNI) and lymphovascular invasion (LVI) were observed in 67.9% and 88.4% of cases, respectively. Increased B7-H3 and CD155 expression were significantly associated with advanced tumor stage, metastasis, and the presence of PNI and LVI (all p < 0.05). In metastatic tumors, median membranous B7-H3, stromal B7-H3, and CD155 scores were 60, 130, and 190, respectively, compared with 20, 90, and 120 in non-metastatic tumors. A significant positive correlation was found between stromal B7-H3 and CD155 expression (r = 0.384, p < 0.001), indicating parallel upregulation. Quantitative RT-PCR confirmed significant overexpression of both genes in tumor tissues relative to normal controls. B7-H3 was upregulated in 75.9% and CD155 in 58.6% of samples, with co-upregulation in 55.2%. Fold-change levels were markedly higher in metastatic versus non-metastatic cases (B7-H3: 7.69-fold vs. 3.04-fold; CD155: 7.44-fold vs. 1.79-fold). ML analysis using the XGBoost model achieved 91.1% accuracy for metastasis prediction (F1-score 0.800). Key variables included pathological T4b stage, perineural invasion, N3b status, T4a stage, and CD155 score. The mortality model yielded 86.7% accuracy (F1-score 0.864), with metastasis, differentiation status, nodal involvement, age, lymph node ratio, and perineural invasion emerging as principal predictors. Conclusions: Combined evaluation of B7-H3 and CD155, supported by immunohistochemical staining and RT-PCR quantification of B7-H3 and CD155 mRNA expression levels, provides meaningful prognostic insights and supports their potential as dual molecular biomarkers for aggressive gastric adenocarcinoma phenotypes. Full article
(This article belongs to the Section Pathology and Molecular Diagnostics)
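As a hedged sketch of the kind of XGBoost metastasis classifier evaluated with accuracy and F1 in the abstract above, the snippet below trains on synthetic clinicopathological features; the data, feature set, and split are placeholders, not the patient cohort.

```python
# Toy XGBoost classifier with accuracy/F1 reporting on imbalanced synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score
from xgboost import XGBClassifier

# 112 synthetic "patients" with roughly a 22% positive (metastasis) rate
X, y = make_classification(n_samples=112, n_features=6, weights=[0.78], random_state=0)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = XGBClassifier(n_estimators=200, max_depth=3, eval_metric="logloss")
clf.fit(X_tr, y_tr)
pred = clf.predict(X_te)
print(f"accuracy={accuracy_score(y_te, pred):.3f}  F1={f1_score(y_te, pred):.3f}")
print("feature importances:", clf.feature_importances_.round(3))
```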
22 pages, 4258 KB  
Article
Visible Image-Based Machine Learning for Identifying Abiotic Stress in Sugar Beet Crops
by Seyed Reza Haddadi, Masoumeh Hashemi, Richard C. Peralta and Masoud Soltani
Algorithms 2025, 18(11), 680; https://doi.org/10.3390/a18110680 (registering DOI) - 24 Oct 2025
Abstract
Previous research has shown that the combined use of inexpensive RGB images, image processing, and machine learning (ML) can accurately identify crop stress. Four Machine Learning Image Modules (MLIMs) were developed to enable the rapid and cost-effective identification of sugar beet stresses caused by water and/or nitrogen deficiencies. RGB images representing stressed and non-stressed crops were used in the analysis. To improve robustness, data augmentation was applied, generating six variations of each image and expanding the dataset from 150 to 900 images for training and testing. Each MLIM was trained and tested using 54 combinations derived from nine canopy and RGB-based input features and six ML algorithms. The most accurate MLIM used RGB bands as inputs to a Multilayer Perceptron, achieving 96.67% accuracy for overall stress detection, and 95.93% and 94.44% for water and nitrogen stress identification, respectively. A Random Forest model, using only the green band, achieved 92.22% accuracy for stress detection while requiring only one-fourth the computation time. For specific stresses, a Random Forest (RF) model using a Scale-Invariant Feature Transform (SIFT) descriptor achieved 93.33% for water stress, while RF with RGB bands and canopy cover reached 85.56% for nitrogen stress. To address the trade-off between accuracy and computational cost, a bargaining theory-based framework was applied. This approach identified optimal MLIMs that balance performance and execution efficiency. Full article
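As a rough sketch of feeding SIFT descriptors into a Random Forest, as the abstract above reports for water-stress detection, the snippet below pools descriptors into one vector per image; the image paths and labels are hypothetical placeholders, and the pooling step is a simplification of the paper's pipeline.

```python
# Hedged sketch: average SIFT descriptor per image as input to a Random Forest.
import cv2
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def sift_feature(path: str, dim: int = 128) -> np.ndarray:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if gray is None:
        raise FileNotFoundError(path)       # placeholder paths must exist on disk
    sift = cv2.SIFT_create()
    _, desc = sift.detectAndCompute(gray, None)
    if desc is None:                        # no keypoints found
        return np.zeros(dim)
    return desc.mean(axis=0)                # average descriptor as the image feature

paths = ["plot_001.jpg", "plot_002.jpg"]    # hypothetical canopy images
labels = [0, 1]                             # 0 = no water stress, 1 = water stress
X = np.vstack([sift_feature(p) for p in paths])
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, labels)
print(clf.predict(X))
```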
23 pages, 1063 KB  
Article
Assessment of Airport Pavement Condition Index (PCI) Using Machine Learning
by Bertha Santos, André Studart and Pedro Almeida
Appl. Syst. Innov. 2025, 8(6), 162; https://doi.org/10.3390/asi8060162 (registering DOI) - 24 Oct 2025
Abstract
Pavement condition assessment is a fundamental aspect of airport pavement management systems (APMS) for ensuring safe and efficient airport operations. However, conventional methods, which rely on extensive on-site inspections and complex calculations, are often time-consuming and resource-intensive. In response, Industry 4.0 has introduced machine learning (ML) as a powerful tool to streamline these processes. This study explores five ML algorithms (Linear Regression (LR), Decision Tree (DT), Random Forest (RF), Artificial Neural Network (ANN), and Support Vector Machine (SVM)) for predicting the Pavement Condition Index (PCI). Using basic alphanumeric distress data from three international airports, this study predicts both numerical PCI values (on a 0–100 scale) and categorical PCI values (3 and 7 condition classes). To address data imbalance, oversampling with the Synthetic Minority Oversampling Technique (SMOTE) and random undersampling (RUS) were used. This study fills a critical knowledge gap by identifying the most effective algorithms for both numerical and categorical PCI determination, with a particular focus on validating class-based predictions using relatively small data samples. The results demonstrate that ML algorithms, particularly Random Forest, are highly effective at predicting both the numerical and the three-class PCI for the original database. However, accurate prediction of the seven-class PCI required the application of oversampling techniques, indicating that a larger, more balanced database is necessary for this detailed classification. Using 10-fold cross-validation, the successful models achieved excellent performance, yielding Kappa statistics between 0.88 and 0.93, an error rate of less than 7.17%, and an area under the ROC curve greater than 0.93. The approach not only significantly reduces the complexity and time required for PCI calculation, but it also makes the technology accessible, enabling resource-limited airports and smaller management entities to adopt advanced pavement management practices. Full article
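A hedged sketch of combining SMOTE oversampling with a Random Forest under 10-fold cross-validation, in the spirit of the study above; the distress features and class labels are synthetic placeholders, and the imbalanced-learn pipeline keeps the oversampling inside each training fold.

```python
# SMOTE + Random Forest inside an imblearn Pipeline, evaluated with 10-fold CV.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 6))                              # stand-in distress features
y = rng.choice([0, 1, 2], size=400, p=[0.7, 0.2, 0.1])     # imbalanced 3-class PCI labels

# Oversampling only within each training fold avoids leaking synthetic samples
pipe = Pipeline([
    ("smote", SMOTE(random_state=0)),
    ("rf", RandomForestClassifier(n_estimators=300, random_state=0)),
])
scores = cross_val_score(pipe, X, y, cv=StratifiedKFold(n_splits=10), scoring="accuracy")
print(f"10-fold accuracy with SMOTE in-pipeline: {scores.mean():.3f}")
```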
34 pages, 385 KB  
Review
Machine Learning in MRI Brain Imaging: A Review of Methods, Challenges, and Future Directions
by Martyna Ottoni, Anna Kasperczuk and Luis M. N. Tavora
Diagnostics 2025, 15(21), 2692; https://doi.org/10.3390/diagnostics15212692 (registering DOI) - 24 Oct 2025
Abstract
In recent years, machine learning (ML) has been increasingly used in many fields, including medicine. Magnetic resonance imaging (MRI) is a non-invasive and effective diagnostic technique; however, manual image analysis is time-consuming and prone to human variability. In response, ML models have been developed to support MRI analysis, particularly in segmentation and classification tasks. This work presents an updated narrative review of ML applications in brain MRI, with a focus on tumor classification and segmentation. A literature search was conducted in PubMed and Scopus databases and Mendeley Catalog (MC)—a publicly accessible bibliographic catalog linked to Elsevier’s Scopus indexing system—covering the period from January 2020 to April 2025. The included studies focused on patients with primary or secondary brain neoplasms and applied machine learning techniques to MRI data for classification or segmentation purposes. Only original research articles written in English and reporting model validation were considered. Studies using animal models, non-imaging data, lacking proper validation, or without accessible full texts (e.g., abstract-only records or publications unavailable through institutional access) were excluded. In total, 108 studies met all inclusion criteria and were analyzed qualitatively. In general, models based on convolutional neural networks (CNNs) were found to dominate current research due to their ability to extract spatial features directly from imaging data. Reported classification accuracies ranged from 95% to 99%, while Dice coefficients for segmentation tasks varied between 0.83 and 0.94. Hybrid architectures (e.g., CNN-SVM, CNN-LSTM) achieved strong results in both classification and segmentation tasks, with accuracies above 95% and Dice scores around 0.90. Transformer-based models, such as the Swin Transformer, reached the highest performance, up to 99.9%. Despite high reported accuracy, challenges remain regarding overfitting, generalization to real-world clinical data, and lack of standardized evaluation protocols. Transfer learning and data augmentation were frequently applied to mitigate limited data availability, while radiomics-based models introduced new avenues for personalized diagnostics. ML has demonstrated substantial potential in enhancing brain MRI analysis and supporting clinical decision-making. Nevertheless, further progress requires rigorous clinical validation, methodological standardization, and comparative benchmarking to bridge the gap between research settings and practical deployment. Full article
(This article belongs to the Special Issue Brain/Neuroimaging 2025–2026)
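For reference, the Dice coefficient cited for the segmentation results above can be computed as below; the masks are toy arrays, not real MRI segmentations.

```python
# Dice coefficient between a predicted and a ground-truth binary mask.
import numpy as np

def dice(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.zeros((8, 8), dtype=int); a[2:6, 2:6] = 1   # toy "prediction"
b = np.zeros((8, 8), dtype=int); b[3:7, 3:7] = 1   # toy "ground truth"
print(f"Dice = {dice(a, b):.3f}")
```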