Search Results (9,172)

Search Parameters:
Keywords = machine learning (ML)

17 pages, 1303 KB  
Article
Prediction of Adherence to an Online Wellness Program for People with Mobility Limitations: A Machine Learning Approach
by Salma Aly, Hui-Ju Young, James H. Rimmer and Tapan Mehta
Healthcare 2026, 14(6), 781; https://doi.org/10.3390/healthcare14060781 - 19 Mar 2026
Abstract
Background/Objectives: People with mobility limitations face disproportionately high rates of chronic health conditions and demonstrate lower adherence to wellness interventions. Digital programs such as MENTOR offer accessible alternatives but often face high rates of attrition. This study applied machine learning (ML) methods to predict adherence to the eight-week MENTOR telewellness program and identify key predictors of participant attendance. Methods: Data were drawn from 1218 adults enrolled in MENTOR (2023–2024). Adherence was defined as the percentage of 40 sessions attended. Baseline demographic, socioeconomic, psychosocial, mindfulness, resilience, health status, and physical activity variables were included as predictors. Following preprocessing and imputation, 13 ML regression models were trained using an 80/20 train–test split. The best-performing model was identified using mean absolute error (MAE), followed by feature selection and SHAP interpretability analyses. Pairwise synergy analysis quantified interactions between top predictors. Results: Model performance was modest overall. Bayesian ridge regression achieved the best performance (MAE 20.98; RMSE 25.26; R2 = 0.12). SHAP analyses revealed that education, race, emotional support, Area Deprivation Index, household size, mindfulness, life satisfaction, and disability onset were the strongest predictors of adherence. Higher emotional support, mindfulness, and life satisfaction were associated with greater adherence, while socioeconomic disadvantage predicted lower adherence. Synergy analyses showed the strongest predictive interactions between low education and psychosocial resources (emotional support and life satisfaction). Conclusions: Baseline characteristics alone modestly predicted adherence to a digital wellness program. However, psychosocial and socioeconomic factors emerged as meaningful predictors, underscoring the need for personalized support strategies to reduce dropout among participants with mobility limitations. Full article
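As a rough illustration of the kind of pipeline this abstract describes (an 80/20 train–test split, Bayesian ridge regression, and MAE), the sketch below uses synthetic stand-in data; scikit-learn's permutation importance is substituted for the paper's SHAP analysis, and the feature names are hypothetical.

```python
# Minimal sketch (not the authors' code): Bayesian ridge regression with an
# 80/20 split and MAE on synthetic stand-in data; permutation importance is
# used here in place of the paper's SHAP analysis.
import numpy as np
from sklearn.linear_model import BayesianRidge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n = 1218
features = ["education", "emotional_support", "mindfulness", "life_satisfaction"]
X = rng.normal(size=(n, len(features)))                      # placeholder baseline predictors
y = np.clip(50 + 10 * X[:, 1] + 8 * X[:, 2]                  # % of sessions attended (synthetic)
            + rng.normal(scale=20, size=n), 0, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
model = BayesianRidge().fit(X_tr, y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te)))

# Rank predictors by permutation importance (a stand-in for SHAP here).
imp = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=42)
for name, score in sorted(zip(features, imp.importances_mean), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```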

15 pages, 2099 KB  
Review
Current Trends and Future Prospects of Radiomics and Machine Learning (ML) Models in Spinal Tumors—A Narrative Review
by Vivek Sanker, Suhrud Panchawgh, Anmol Kaur, Vinay Suresh, Dhanya Mahesh, Eeman Ahmad, Srinath Hariharan, Dhiraj Pangal, Maria Jose Cavgnaro, Mirabela Rusu, John Ratliff and Atman Desai
J. Imaging 2026, 12(3), 138; https://doi.org/10.3390/jimaging12030138 - 19 Mar 2026
Abstract
The intersection between radiomics, the computational analysis of imaging data, and machine learning (ML) may lead to new developments in the diagnosis, prognosis, and management of diseases. For spinal tumors specifically, applications of these fields appear promising. In this educational narrative review, we provide a summary of the current advancements in radiomics and artificial intelligence (AI), as well as applications of both fields in the diagnosis and management of spinal tumors. We also provide a suggested workflow of radiomics and machine learning analysis of spinal tumors for researchers, including a list and description of commonly used radiomic features. Future directions in the field of radiomics and machine learning applications to spinal tumors may involve validating already proposed algorithms with larger datasets, ensuring that all computational applications to patient care maintain high ethical standards, and continuing work in developing novel and highly accurate computational techniques to enhance patient outcomes. Full article
(This article belongs to the Topic Machine Learning and Deep Learning in Medical Imaging)

33 pages, 2201 KB  
Review
Machine Learning Models for Non-Intrusive Load Monitoring: A Systematic Review and Meta-Analysis
by Herman Cristiano Jaime, Adler Diniz de Souza, Raphael Carlos Santos Machado and Otávio de Souza Martins Gomes
Inventions 2026, 11(2), 29; https://doi.org/10.3390/inventions11020029 - 19 Mar 2026
Abstract
Non-Intrusive Load Monitoring (NILM) systems are increasingly applied in residential and commercial environments to disaggregate energy consumption without requiring additional hardware sensors. The integration of Machine Learning (ML) techniques has enhanced the accuracy and efficiency of load identification and classification in smart meter-based systems. This study presents a systematic review and meta-analysis aimed at identifying, classifying, and quantitatively evaluating ML models applied to NILM. Searches were conducted in the IEEE Xplore and Scopus databases, restricted to peer-reviewed publications from 2017 to 2024. Thirty studies met the eligibility criteria and were included in the quantitative synthesis using a random-effects meta-analysis model (DerSimonian–Laird estimator). The primary effect measure was the F1-score. Statistical analyses were performed using R (version 4.5.0) and Python (version 3.10.0), including heterogeneity assessment and subgroup analyses according to model type. Hybrid models, such as SVDT-KNN-MLP, LE-CRNN, and RBFNN-MOGA, achieved the highest pooled F1-scores, although supported by a limited number of studies. Traditional approaches, including CNN, KNN, and Random Forest, demonstrated consistently strong performance and broader validation, whereas Boosted Trees and RNN-based models showed lower or more variable results. Substantial heterogeneity was observed across studies, highlighting the need for dataset standardization, reproducible evaluation frameworks, and further validation of emerging hybrid architectures in diverse operational scenarios. This study contributes by providing a quantitative synthesis of machine learning models applied to NILM using a structured PRISMA-based methodology and subgroup analysis by model architecture. Unlike previous narrative reviews, this work integrates scientometric analysis with meta-analytic performance aggregation, offering a consolidated and comparative evidence base for future NILM research. Full article
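The DerSimonian–Laird random-effects pooling mentioned above can be written out directly; the sketch below uses invented study-level F1-scores and variances purely to show the computation, not the review's actual data.

```python
# Illustrative sketch of a DerSimonian–Laird random-effects pooling of
# per-study F1-scores (all numbers below are made up).
import numpy as np

f1 = np.array([0.92, 0.88, 0.95, 0.81, 0.90])              # hypothetical study-level F1-scores
var = np.array([0.0008, 0.0015, 0.0005, 0.0020, 0.0010])   # hypothetical within-study variances

w = 1.0 / var                                               # fixed-effect (inverse-variance) weights
y_fe = np.sum(w * f1) / np.sum(w)
Q = np.sum(w * (f1 - y_fe) ** 2)                            # Cochran's Q heterogeneity statistic
C = np.sum(w) - np.sum(w ** 2) / np.sum(w)
tau2 = max(0.0, (Q - (len(f1) - 1)) / C)                    # DL between-study variance estimate

w_re = 1.0 / (var + tau2)                                   # random-effects weights
pooled = np.sum(w_re * f1) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"pooled F1 = {pooled:.3f} "
      f"(95% CI {pooled - 1.96 * se:.3f} to {pooled + 1.96 * se:.3f}), tau2 = {tau2:.4f}")
```

A nonzero tau2 is what distinguishes the random-effects pooling from a plain inverse-variance average and is the quantity behind the heterogeneity the review reports.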

22 pages, 2209 KB  
Article
Predictive Traumatic Brain Injury Model for Determining Discharge Disposition and Infection Outcomes: A Machine Learning Approach Developed from the National Trauma Data Bank
by Asher Ralphs, Constana Gracia, Devesh Sarda, Subhajit Chakrabarty, Navdeep Samra, Bharat Guthikonda, Deepak Kumbhare and Julie Schwertfeger
Trauma Care 2026, 6(1), 6; https://doi.org/10.3390/traumacare6010006 - 19 Mar 2026
Abstract
Background/Objectives: Traumatic brain injury (TBI) affects more than 50 million people annually worldwide. Challenges in managing moderate-to-severe TBI include high rates of hospital-acquired infections and substantial variability in discharge disposition, and these combined challenges contribute significantly to the cost and trajectory of health recovery. Although current strategies such as antibiotic-impregnated external ventricular drains (EVDs) offer some benefit in controlling infections, they remain limited by high cost and inconsistent implementation. A clearer understanding of clinical and demographic factors associated with infection risk and discharge disposition is essential for improving care pathways. This study aims to identify and quantify key determinants of infection and discharge outcomes in patients with TBI. Methods: The National Trauma Data Bank (NTDB) was queried using structured query language (SQL) based on predefined inclusion criteria (adult patients with ICD-coded TBI), input variables (basic demographics, injury location and severity, and vital signs), and specified outcome variables (emergency department discharge disposition, infection, and sepsis) to identify and filter the eligible patient cohort. A set of machine learning models was trained for each outcome (e.g., Emergency Department (ED) discharge, types of infections, and sepsis). Results: Data from 310,494 patients were extracted. The prediction model we developed, the Predictive TBI-Disposition Model (PTDM), was able to predict the outcome of a patient’s discharge with 96% accuracy. The accuracy of the models for infection and sepsis was 93% and 94%, respectively. Conclusions: Demographic and clinical factors significantly influence the discharge disposition and infection risk among TBI patients. Machine learning models demonstrated strong predictive performance, suggesting their utility in early risk stratification and targeted clinical decision-making. Full article
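A hedged sketch of the per-outcome modeling idea (one classifier each for discharge disposition, infection, and sepsis) on synthetic data; the random-forest choice and all variables are assumptions, not the authors' PTDM specification.

```python
# Sketch on synthetic data (not the NTDB): one classifier per outcome, mirroring
# the per-outcome training described above; model family and features are assumed.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
n = 5000
X = np.column_stack([
    rng.integers(18, 90, n),        # age
    rng.integers(0, 2, n),          # sex (0/1)
    rng.integers(3, 16, n),         # Glasgow Coma Scale
    rng.normal(120, 25, n),         # systolic blood pressure
])
# Fabricated outcome labels loosely tied to the features, with label noise.
outcomes = {
    "discharge_home": ((X[:, 2] > 10) ^ (rng.random(n) < 0.1)).astype(int),
    "infection":      ((X[:, 2] < 8) ^ (rng.random(n) < 0.1)).astype(int),
    "sepsis":         ((X[:, 3] < 95) ^ (rng.random(n) < 0.1)).astype(int),
}
for name, y in outcomes.items():
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    print(name, "accuracy:", round(accuracy_score(y_te, clf.predict(X_te)), 3))
```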

30 pages, 1145 KB  
Review
Trust Assessment Methods for Blockchain-Empowered Internet of Things Systems: A Comprehensive Review
by Mostafa E. A. Ibrahim, Yassine Daadaa and Alaa E. S. Ahmed
Appl. Sci. 2026, 16(6), 2949; https://doi.org/10.3390/app16062949 - 18 Mar 2026
Abstract
The Internet of Things (IoT) is rapidly pervading daily life and linking everything. Although higher connectivity offers many benefits, including higher productivity, robotic processes, and decision-making guided by data, it also poses a number of security risks. Modern threats to data authenticity and trust are increasingly difficult to address with conventional centralized security solutions. In this paper, we present a detailed investigation of the latest innovations and approaches for assessing reputation and trust in the blockchain-empowered Internet of Things (BIoT) area. A comprehensive literature search was conducted across major electronic databases, including IEEE, Springer, Elsevier, Wiley, MDPI, and top indexed conference proceedings. The publication year was restricted to the period from 2018 to 2025. A total of 122 studies met the inclusion criteria, and their methodological quality was assessed using predefined quality measures. We identify existing weaknesses at each layer of the IoT architecture, illustrating how autonomous, transparent, and tamper-resistant blockchain ledgers address these weaknesses. In addition, we analytically compare public, private, consortium, and hybrid blockchain networking architectures to highlight the underlying trade-offs among security, reliability, and decentralization. We also assess how reputation evaluation techniques have evolved over time, moving from classical fuzzy logic and weighted average models to more mature game-theoretic and machine learning (ML) models, addressing their limitations in terms of computational overhead, scalability, adaptability, and deployment feasibility in IoT systems. Additionally, we outline future directions for BIoT system trust assessment and identify research limitations and potential solutions. Our research indicates that although ML-driven models offer more accurate predictions for identifying illicit node activities, they are still constrained by limited, imbalanced data and high processing overhead. Full article
(This article belongs to the Special Issue Advanced Blockchain Technologies and Their Applications)
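For context, a toy example of the classical weighted-average reputation scoring that the review contrasts with game-theoretic and ML-based approaches; the weights and ratings below are invented.

```python
# Toy sketch of a classical weighted-average reputation score of the kind the
# review contrasts with ML-based approaches; weights and ratings are invented.
def trust_score(direct_ratings, recommended_ratings, w_direct=0.7, w_indirect=0.3):
    """Blend a node's own interaction history with neighbours' recommendations."""
    direct = sum(direct_ratings) / len(direct_ratings) if direct_ratings else 0.0
    indirect = sum(recommended_ratings) / len(recommended_ratings) if recommended_ratings else 0.0
    return w_direct * direct + w_indirect * indirect

# Example: a node with mostly successful direct interactions but mixed reports.
print(round(trust_score([1.0, 0.9, 1.0, 0.8], [0.6, 0.4, 0.9]), 3))   # ~0.838
```
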
28 pages, 2467 KB  
Review
Light-Curve Classification of Resident Space Objects for Space Situational Awareness: A Scoping Review
by Minyoung Hwang, Vithurshan Suthakar, Randa Qashoa, Regina S. K. Lee and Gunho Sohn
Aerospace 2026, 13(3), 287; https://doi.org/10.3390/aerospace13030287 - 18 Mar 2026
Abstract
The proliferation of Resident Space Objects (RSOs), including satellites, rocket bodies, and debris, poses escalating challenges for Space Situational Awareness (SSA). Optical light curves capture temporal brightness variations influenced by factors such as attitude variation, viewing geometry, and surface properties. When appropriately processed and analyzed, these data can support RSO characterization and classification. This paper presents a scoping review of machine learning (ML) and deep learning (DL) methods for RSO classification using light-curve data. From 297 peer-reviewed studies published between 2014 and 2025, a screened subset of 29 works is selected for detailed methodological comparison. We trace the methodological evolution from handcrafted feature engineering toward convolutional, recurrent, and self-supervised models that learn representations directly from photometric time series. An analysis of three publicly accessible databases, Mini Mega TORTORA, Space Debris Light-Curve Database, and Ukrainian Database, reveals pronounced class imbalance, with payloads comprising over 80% of observations. While models trained on simulated data routinely achieve 95 to 99% accuracy, performance on measured light curves degrades to 75 to 92%, exposing a persistent gap between simulation and observation. We further identify data scarcity, repeated observations of the same objects, and inconsistent evaluation protocols as key barriers to reproducible benchmarking. Future progress will require benchmark-ready, sensor-aware datasets spanning diverse orbital regimes and viewing geometries, alongside physics-informed and transfer-learning approaches that improve robustness across sensors and between synthetic and observational domains. Full article
(This article belongs to the Special Issue Advances in Space Surveillance and Tracking)
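To make the feature-engineering end of the reviewed spectrum concrete, the sketch below builds handcrafted summary features from synthetic light curves and trains a class-weighted classifier; the curve shapes, class ratio, and model choice are illustrative assumptions.

```python
# Sketch (synthetic light curves, not the MMT/SDLCD/Ukrainian data): handcrafted
# features from brightness time series plus a class-weighted classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import balanced_accuracy_score

rng = np.random.default_rng(2)

def make_curve(kind, length=200):
    t = np.linspace(0, 1, length)
    if kind == "rocket_body":    # fast tumbling: strong periodic glints
        return 8 + 1.5 * np.sin(40 * np.pi * t) + rng.normal(0, 0.2, length)
    return 8 + 0.3 * np.sin(4 * np.pi * t) + rng.normal(0, 0.2, length)   # stabilised payload

def features(curve):             # simple handcrafted summary statistics
    return [curve.mean(), curve.std(), curve.max() - curve.min(),
            np.abs(np.diff(curve)).mean()]

labels = ["payload"] * 850 + ["rocket_body"] * 150   # imbalance in the spirit of the databases
X = np.array([features(make_curve(lbl)) for lbl in labels])
X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2,
                                          stratify=labels, random_state=0)
clf = RandomForestClassifier(n_estimators=300, class_weight="balanced",
                             random_state=0).fit(X_tr, y_tr)
print("balanced accuracy:", round(balanced_accuracy_score(y_te, clf.predict(X_te)), 3))
```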

22 pages, 1823 KB  
Article
Machine Learning-Based Models for Identifying Learning Disabilities
by Wun-Tsong Chaou, Yu-Hui Liu, Ying-Lei Lin, Chao-Chien Huang and Ping-Feng Pai
Electronics 2026, 15(6), 1278; https://doi.org/10.3390/electronics15061278 - 18 Mar 2026
Abstract
Timely and accurate identification of learning disability (LD) severity is critical for early screening and for guiding appropriate clinical and educational interventions. This study developed a machine learning model with feature selection and hyperparameter optimization (MLFSHO) architecture to predict the severity of LD using heterogeneous clinical data with clinical expert labeling. Four machine learning models, including eXtreme Gradient Boosting (XGB), Categorical Boosting (CAT), Light Gradient Boosting Machine (LGBM), and Multi-Layer Perceptron (MLP), were implemented within the MLFSHO architecture, which integrates HSIC-based feature selection and Optuna-based joint optimization of feature-related parameters and model hyperparameters. Experimental results indicated that all machine learning-based (ML-based) models achieved an average accuracy of more than 85%. In addition, hyperparameter optimization consistently improved predictive performance for most models. Joint optimization of feature-related parameters and model hyperparameters achieved the best overall performance across models. These findings suggest that treating feature selection and hyperparameter tuning as a unified optimization problem can improve the reliability of severity prediction in learning disabilities and support early screening in clinical settings. The proposed MLFSHO architecture provides a systematic approach for modeling heterogeneous clinical data and improves the performance of LD severity prediction. Full article
(This article belongs to the Special Issue Feature Papers in "Computer Science & Engineering", 2nd Edition)
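A minimal sketch of the joint-optimization idea described above, with stated assumptions: mutual-information feature selection stands in for the paper's HSIC criterion, XGBoost stands in for the four model families, and the data are synthetic.

```python
# Sketch of jointly tuning a feature-selection parameter and model hyperparameters
# with Optuna; mutual information replaces HSIC here, and the data are synthetic.
import optuna
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from xgboost import XGBClassifier

X, y = make_classification(n_samples=400, n_features=40, n_informative=8, random_state=0)

def objective(trial):
    k = trial.suggest_int("n_features", 5, 30)             # feature-selection parameter
    pipe = Pipeline([
        ("select", SelectKBest(mutual_info_classif, k=k)),
        ("model", XGBClassifier(
            n_estimators=trial.suggest_int("n_estimators", 100, 500),
            max_depth=trial.suggest_int("max_depth", 2, 8),
            learning_rate=trial.suggest_float("learning_rate", 1e-3, 0.3, log=True),
            eval_metric="logloss",
        )),
    ])
    return cross_val_score(pipe, X, y, cv=5, scoring="accuracy").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params, round(study.best_value, 3))
```

Putting the selector and the model in one pipeline is what makes the search "joint": each trial evaluates a feature count and a hyperparameter set together rather than tuning them in separate stages.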

26 pages, 10653 KB  
Review
AI/ML-Enhanced Wind Forecasts for Reducing Uncertainty in Prescribed Fire Planning
by Sara Brambilla, Shane Xavier Coffing, Jesse Edward Slaten, Diego Rojas, David Joseph Robinson and Arvind Thanam Mohan
Atmosphere 2026, 17(3), 312; https://doi.org/10.3390/atmos17030312 - 18 Mar 2026
Abstract
Prescribed fire is a vital tool for ecosystem management and wildfire risk reduction, but its wider use is constrained by overly conservative burn windows driven by uncertainties in, for instance, wind forecasts. This review describes the state of the art in weather product use by fire/smoke models and identifies three priority research gaps that artificial intelligence/machine learning (AI/ML) is well positioned to address: (1) spatial and temporal downscaling to meter-scale, sub-hourly wind fields; (2) bias correction for systematic model errors in complex terrain; and (3) robust uncertainty quantification to inform ensemble-based simulations. Emerging AI/ML techniques offer promising frameworks to address all three challenges. By providing high-resolution, bias-corrected, and probabilistic wind fields, AI/ML-enhanced forecasts will allow for expanded burn windows, improved ignition strategy design, and reduced reliance on expert intuition, especially when a prescribed fire is introduced into new areas. Full article
(This article belongs to the Section Atmospheric Techniques, Instruments, and Modeling)
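As one concrete example of the bias-correction gap identified above, the sketch below applies empirical quantile mapping to synthetic forecast and observation pairs; it is not an operational method from the review.

```python
# Illustrative sketch of forecast bias correction via empirical quantile mapping
# (synthetic wind speeds, not an operational product).
import numpy as np

rng = np.random.default_rng(3)
obs = rng.weibull(2.0, 2000) * 5.0                       # "observed" 10 m wind speeds (m/s)
fcst = obs * 1.25 + 1.0 + rng.normal(0, 0.8, 2000)       # forecast with multiplicative + additive bias

q = np.linspace(0.01, 0.99, 99)
fcst_q, obs_q = np.quantile(fcst, q), np.quantile(obs, q)

def correct(new_fcst):
    """Map a raw forecast onto the observed climatology via the training quantiles."""
    return np.interp(new_fcst, fcst_q, obs_q)

new = rng.weibull(2.0, 5) * 5.0 * 1.25 + 1.0             # new biased forecasts
print("raw:           ", np.round(new, 2))
print("bias-corrected:", np.round(correct(new), 2))
```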

24 pages, 3360 KB  
Article
Satellite-Based Machine Learning for Temporal Assessment of Water Quality Parameter Prediction in a Coastal Shallow Lake
by Anja Batina, Ljiljana Šerić, Andrija Krtalić and Ante Šiljeg
J. Mar. Sci. Eng. 2026, 14(6), 566; https://doi.org/10.3390/jmse14060566 - 18 Mar 2026
Abstract
Satellite remote sensing increasingly supports water quality monitoring, yet the temporal transferability of machine learning (ML) models remains insufficiently tested, particularly in coastal shallow lakes subject to hydrological variability. This study evaluates the predictive robustness of satellite-based ML models for electrical conductivity (EC), turbidity (TUR), water temperature (WT), and dissolved oxygen (DO) in Vrana Lake, Croatia. A total of 409 in situ measurements collected during 2023–2024 and 2025 were paired with Sentinel-2 and Landsat 8–9 imagery. Pearson, Spearman, and Kendall correlation analyses were applied for parameter-specific band selection using original, inverse, quadratic, and logarithmic feature transformations. Seventeen regression algorithms were evaluated under six training–testing split strategies, including strict temporal projection. WT exhibited high robustness (R2 ≈ 0.90 under temporal projection) due to its strong dependence on thermal bands, while DO achieved moderate temporal stability (R2 = 0.51) using log-transformed predictors. EC and TUR demonstrated substantial performance degradation under temporal separation (R2 = 0.14 and −4.62, respectively), reflecting sensitivity to distribution shifts. For parameters showing sufficient stability, interpretable band-based retrieval equations were derived using the most strongly correlated spectral predictors. These findings highlight the importance of temporally structured validation and demonstrate that model complexity does not guarantee operational robustness in shallow, dynamically evolving lake systems. Full article
(This article belongs to the Special Issue Assessment and Monitoring of Coastal Water Quality)
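A small sketch of the correlation-based band-selection step described above, using fabricated reflectance and turbidity values; the band names and transformations mirror the abstract, but the numbers are illustrative only.

```python
# Sketch of parameter-specific band screening with Pearson, Spearman and Kendall
# correlations over original, inverse, quadratic and logarithmic transforms
# (synthetic values, not the Vrana Lake data).
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau

rng = np.random.default_rng(4)
n = 409
bands = {"B03": rng.uniform(0.01, 0.15, n),
         "B08": rng.uniform(0.01, 0.20, n),
         "B11": rng.uniform(0.005, 0.10, n)}
turbidity = 50 * bands["B03"] + rng.normal(0, 0.5, n)     # fabricated target for illustration

candidates = {}
for name, b in bands.items():
    candidates[name] = b
    candidates[f"1/{name}"] = 1.0 / b                      # inverse transform
    candidates[f"log({name})"] = np.log(b)                 # logarithmic transform
    candidates[f"{name}^2"] = b ** 2                       # quadratic transform

# Report the three candidates most strongly correlated with the target.
top = sorted(candidates.items(), key=lambda kv: -abs(pearsonr(kv[1], turbidity)[0]))[:3]
for name, x in top:
    print(f"{name}: Pearson {pearsonr(x, turbidity)[0]:.2f}, "
          f"Spearman {spearmanr(x, turbidity)[0]:.2f}, "
          f"Kendall {kendalltau(x, turbidity)[0]:.2f}")
```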

20 pages, 1573 KB  
Review
Real-Time Engine Oil Quality Monitoring: A Review and Future Perspectives on Microcontroller-Based Sensor Fusion and AI
by Mathew Habyarimana and Abayomi A. Adebiyi
Appl. Sci. 2026, 16(6), 2919; https://doi.org/10.3390/app16062919 - 18 Mar 2026
Abstract
Engine oil degradation critically influences the performance, efficiency, and longevity of internal combustion engines. Conventional mileage or time-based replacement schedules often result in premature oil changes or delayed servicing, both of which compromise engine health and increase costs. This review examines recent advances in real-time oil condition monitoring and evaluates the feasibility of a low-cost microcontroller-based system that integrates physical sensors with machine learning models for continuous on-board oil health assessment. Drawing on established techniques from industrial lubrication monitoring, we propose an experimental framework that leverages electrical engineering principles, including sensor interface, analog front-end design, signal acquisition, and embedded AI deployment to enable accurate, affordable, and scalable oil health diagnostics. The review highlights opportunities for innovation in embedded systems and electrical engineering design, positioning AI-driven monitoring as a practical solution for predictive automotive maintenance. Full article

24 pages, 9747 KB  
Article
Stress Detection and Classification Using Dendrometer-Based SDVs in Citrus Trees
by Omer Hanan, Or Sperling, Eran Raveh and Guy Shani
Agriculture 2026, 16(6), 683; https://doi.org/10.3390/agriculture16060683 - 18 Mar 2026
Abstract
Chronic nutrient stress, whether deficiency or excess, alters trees’ physiology and disrupts their growth. Because growth and hydraulics co-determine stem diameter dynamics, we hypothesize that stem diameter variation (SDV, measured by point dendrometers) carries identifiable signatures of nutrient stress that can be detected and classified. We evaluated this hypothesis using SDV time series from a controlled experiment in citrus trees covering nitrogen (N), phosphorus (P), and potassium (K) deficiency, optimal levels, and excess. We extracted temporal features from hourly SDV measurements and trained a hierarchical machine learning framework that first detected stress, then separated deficiency from excess, and finally attributed the nutrient axis (if possible). The framework substantially outperformed flat baselines. It achieved more than 70% precision for nutrient-specific classification under the full hierarchy and nearly 90% accuracy with a modified variant in which the deficiency and excess branches were each decomposed into potassium vs. {nitrogen, phosphorus} without further subdivision of nitrogen and phosphorus. Accuracy stabilized within one to two weeks of temporal aggregation, indicating an agronomically actionable detection horizon. Dendrometer-derived SDV enables noninvasive nutrient stress detection. Hierarchical ML outperforms flat baselines for NPK stress classification. Two-week temporal voting stabilizes accuracy at actionable timescales. Full article
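A toy rendering of the hierarchical classification idea (stress detection, then deficiency versus excess, then nutrient attribution) on synthetic features; the model family and feature construction are assumptions, not the authors' implementation.

```python
# Toy sketch of hierarchical stress classification on synthetic SDV-style features
# (not the citrus data): stress first, then direction, then nutrient.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(5)
classes = ["optimal", "N_deficiency", "K_deficiency", "N_excess", "K_excess"]
y = rng.choice(classes, 1000)
# Fabricated temporal features (e.g., daily shrinkage amplitude, growth slope, variance).
X = rng.normal(size=(1000, 6)) + np.array([classes.index(c) for c in y])[:, None] * 0.8

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
stressed = [c != "optimal" for c in y_tr]

stress = RandomForestClassifier(random_state=0).fit(X_tr, stressed)
direction = RandomForestClassifier(random_state=0).fit(
    X_tr[stressed], ["excess" if "excess" in c else "deficiency" for c in y_tr if c != "optimal"])
nutrient = RandomForestClassifier(random_state=0).fit(
    X_tr[stressed], [c.split("_")[0] for c in y_tr if c != "optimal"])

def predict(x):
    if not stress.predict([x])[0]:
        return "optimal"
    return f"{nutrient.predict([x])[0]}_{direction.predict([x])[0]}"

preds = [predict(x) for x in X_te]
print("hierarchical accuracy:", round(np.mean([p == t for p, t in zip(preds, y_te)]), 3))
```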

21 pages, 1343 KB  
Systematic Review
The Role of Artificial Intelligence in the Detection and Diagnosis of Neurocognitive Disorders: A Systematic Review
by Pasqualina Perna, Alessandra Claudi, Fabrizio Stasolla and Raffaele Nappo
Technologies 2026, 14(3), 183; https://doi.org/10.3390/technologies14030183 - 18 Mar 2026
Abstract
Dementia represents a major healthcare challenge, as pathological changes often occur years before overt symptoms. Early manifestations such as mild cognitive impairment (MCI) and subjective cognitive decline (SCD) represent critical transitional stages between normal aging and dementia. Thus, distinguishing these conditions (i.e., MCI and SCD) and determining their potential evolution into dementia remains crucial. However, current clinical tools, mainly neuroimaging and neuropsychological assessments, are not always clearly interpretable and are often resource-intensive. In recent years, artificial intelligence (AI), including machine learning (ML) and deep learning (DL), has demonstrated promising potential in early detection, progression prediction, and differential diagnosis of neurocognitive disorders. This systematic review aims to synthesize current evidence on the application of AI-based approaches to improve diagnostic accuracy and prognostic assessments in dementia. A comprehensive literature search of studies published between 2015 and 2025 was conducted across PubMed/MEDLINE, Scopus, and Web of Science, following PRISMA 2020 guidelines. Studies were evaluated for data modality, methodological rigor, performance metrics, and clinical applicability. Seventeen (17) studies, of which twelve (12) are primary studies and five (5) are secondary studies, examining AI applications in detecting and diagnosing neurocognitive disorders (NCDs) in adults with dementia, MCI, or SCD were included. Results indicate that AI models, particularly DL applied to neuroimaging, electrophysiological data, speech and language features, biomarkers, and digital behavioral data, achieve high diagnostic accuracy in distinguishing MCI, Alzheimer’s disease, and healthy aging. Predictive models also show potential in forecasting conversion from MCI to dementia and monitoring cognitive trajectories via wearable or smart-home technologies. Nonetheless, heterogeneity, limited external validation, and methodological inconsistencies hinder clinical translation. In conclusion, AI represents a rapidly evolving and promising tool for early detection and monitoring of neurocognitive disorders. Collectively, the reviewed studies underscore the need for standardized pipelines, larger multicenter datasets, and explainable AI frameworks to enable effective clinical implementation. Full article

12 pages, 710 KB  
Article
FTIR-Based Machine Learning Identification of Virgin and Recycled Polyester for Textile Recycling in Industry 4.0
by Maria Inês Barbosa, Ana Margarida Teixeira, Maria Leonor Sousa, Pedro Ribeiro, Clara Sousa and Pedro Miguel Rodrigues
Processes 2026, 14(6), 964; https://doi.org/10.3390/pr14060964 - 18 Mar 2026
Abstract
Advances in Industry 4.0 manufacturing have accelerated the adoption of machine learning (ML) for automated classification. Polyester (PES), a widely used synthetic fiber, competes with natural fibers like cotton and other synthetics, highlighting the need for continuous research and improvement. In the textile sector, distinguishing recycled polyester (rPES) from virgin polyester (vPES) remains challenging due to overlapping chemical signatures and material variability. A combination of Fourier transform infrared (FTIR) spectroscopy and ML has not been explored for this purpose. In this study, we evaluated ML models to discriminate three PES fiber types (45 vPES, 65 rPES, and 55 mixed PES) using 165 FTIR spectra across four spectral regions, R1, R2, R3, and R4, as well as their combined representation. Six ML approaches were tested on data reduced with fast independent component analysis (FastICA) (1–30 components) using an 80/20 train–test dataset split. The Decision Tree classifier achieved the highest Accuracy in four of the five spectral evaluations, with classification accuracies ranging from 66.67% to 77.78% for region R4, which also had a balanced classification profile with an area-under-the-curve (AUC) value of 0.81. Notably, despite the moderate overall Accuracy, the model achieved 100% discrimination of rPES when distinguishing it from both mixed and vPES. Mixed fibers remained the most difficult to classify, highlighting the need for improved feature representation. Full article
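A minimal sketch of the described FastICA-plus-Decision-Tree pipeline with an 80/20 split, run on crude simulated spectra rather than the measured FTIR data; peak positions, class counts and noise levels are stand-ins.

```python
# Sketch (simulated spectra, not the measured FTIR data): FastICA dimensionality
# reduction followed by a Decision Tree classifier with an 80/20 split.
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(6)
labels = ["vPES"] * 45 + ["rPES"] * 65 + ["mixed"] * 55        # 165 spectra, as in the study
wavenumbers = np.linspace(650, 4000, 600)
X = np.array([np.exp(-((wavenumbers - (1715 if l == "vPES" else 1710)) / 40) ** 2)
              + 0.05 * rng.normal(size=wavenumbers.size)       # crude synthetic spectra
              for l in labels])

X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2,
                                          stratify=labels, random_state=0)
ica = FastICA(n_components=10, random_state=0, max_iter=1000)  # component count is illustrative
Z_tr, Z_te = ica.fit_transform(X_tr), ica.transform(X_te)

clf = DecisionTreeClassifier(random_state=0).fit(Z_tr, y_tr)
print("accuracy:", round(accuracy_score(y_te, clf.predict(Z_te)), 3))
```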

41 pages, 4823 KB  
Article
AI-Driven Bankruptcy Prediction in Manufacturing SMEs: Comparing Machine Learning Techniques with Logistic Regression
by Stanislav Letkovský, Sylvia Jenčová, Petra Vašaničová, Marta Miškufová and Michal Erben
Adm. Sci. 2026, 16(3), 148; https://doi.org/10.3390/admsci16030148 - 18 Mar 2026
Abstract
Bankruptcy prediction is currently a widely researched topic, as it typically results from a chain of negative events. Logistic Regression (LR) is one of the standard prediction tools; however, with advances in technology, machine learning (ML) methods are gaining prominence and demonstrating improvements in performance and accuracy. It remains inconclusive whether ML methods significantly outperform traditional approaches such as LR in bankruptcy prediction. In this study, we identified the most commonly applied basic ML techniques—namely, Artificial Neural Networks (ANNs), Support Vector Machines (SVMs), and Decision Trees (DTs)—which are frequently used in the literature for classification tasks. These methods were selected for empirical comparison with LR to evaluate their relative predictive performance and potential advantages in bankruptcy forecasting. In the EU, small and medium-sized enterprises (SMEs) constitute more than 99% of the economy; however, only a few survive beyond five years. This study examines bankruptcy prediction in the specific context of the Slovak Republic, using a sample of 2754 SME manufacturing enterprises from 2020 to 2021 and 3158 from 2022 to 2023. All models show good predictive performance; however, the small statistical difference between the results does not conclusively demonstrate the superiority of ML methods over LR. Full article
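For illustration, the sketch below compares Logistic Regression with the three basic ML techniques named in the abstract (ANN, SVM, Decision Tree) on a synthetic, imbalanced stand-in for the financial-ratio dataset; the use of AUC as the comparison metric is an assumption.

```python
# Sketch of an LR-versus-ML comparison on synthetic, imbalanced stand-in data
# (not the Slovak SME dataset); model settings are illustrative defaults.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import roc_auc_score

X, y = make_classification(n_samples=2754, n_features=12, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "LR":  LogisticRegression(max_iter=1000),
    "ANN": MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0),
    "SVM": SVC(probability=True, random_state=0),
    "DT":  DecisionTreeClassifier(max_depth=5, random_state=0),
}
for name, model in models.items():
    pipe = make_pipeline(StandardScaler(), model).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, pipe.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```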

20 pages, 6030 KB  
Article
Grassland Productivity and Ewes’ Forage Intake Monitoring by Combined Multispectral Vegetation Indices and Machine Learning Approaches for Precision Grazing Management
by Pasquale Caparra, Salvatore Praticò, Gaetano Messina, Caterina Cilione, Paolo De Caria, Emilio Lo Presti, Ada Braghieri, Adriana Di Trana, Rosanna Paolino and Giuseppe Badagliacca
Land 2026, 15(3), 485; https://doi.org/10.3390/land15030485 - 17 Mar 2026
Abstract
Grassland productivity and precise monitoring of animal herbage intake are key requirements for sustainable grazing management in Mediterranean upland systems. This study aimed to evaluate the potential of uncrewed aerial vehicle (UAV)-based multispectral vegetation indices (VIs) combined with machine learning (ML) algorithms to estimate forage biomass, quality parameters and daily herbage dry matter intake (HDMI) of grazing ewes at the paddock scale. The experiment was conducted in a managed ryegrass–white clover meadow–pasture in southern Italy, where four plots were grazed sequentially by lactating Sarda ewes during spring–summer 2025. Ground measurements included pre- and post-grazing biomass inside and outside exclusion cages, botanical composition and forage quality. Concurrently, UAV multispectral imagery was acquired, from which several VIs were computed. Pearson’s correlations were used to explore relationships between VIs and forage variables, and five ML algorithms were then applied to predict them. Indices such as MCARI2, MTVI2, MTVI, MSAVI and OSAVI showed the strongest associations with biomass and quality traits, while support vector machines and neural networks provided the best prediction accuracies, particularly for HDMI (R2 up to 0.91). The integrated UAV–ML approach proved effective in simultaneously capturing spatial variability of pasture productivity and animal intake, supporting the development of operational precision grazing tools for heterogeneous Mediterranean grasslands. Full article
(This article belongs to the Section Land Innovations – Data and Machine Learning)
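A toy sketch of the workflow summarized above: Pearson screening of vegetation indices against a ground-truth variable, followed by support-vector and neural-network regressors; the index values and intake figures are random stand-ins, not the UAV data.

```python
# Sketch (random stand-in numbers, not the UAV data): correlation screening of
# vegetation indices, then SVR and MLP regression of the kind that performed best.
import numpy as np
from scipy.stats import pearsonr
from sklearn.svm import SVR
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

rng = np.random.default_rng(7)
n = 200
vis = {"MCARI2": rng.uniform(0.2, 0.9, n),
       "MTVI2":  rng.uniform(0.2, 0.9, n),
       "OSAVI":  rng.uniform(0.1, 0.8, n)}
hdmi = 1.5 * vis["MCARI2"] + 0.8 * vis["MTVI2"] + rng.normal(0, 0.1, n)   # fabricated intake values

for name, x in vis.items():                                # correlation screening
    print(name, "r =", round(pearsonr(x, hdmi)[0], 2))

X = np.column_stack(list(vis.values()))
X_tr, X_te, y_tr, y_te = train_test_split(X, hdmi, test_size=0.2, random_state=0)
regressors = [("SVR", SVR()),
              ("MLP", MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000, random_state=0))]
for label, reg in regressors:
    model = make_pipeline(StandardScaler(), reg).fit(X_tr, y_tr)
    print(label, "R2 =", round(r2_score(y_te, model.predict(X_te)), 2))
```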
