Search Results (3,087)

Search Parameters:
Keywords = learned prior

20 pages, 1972 KB  
Article
Few-Shot Identification of Individuals in Sports: The Case of Darts
by Val Vec, Anton Kos, Rongfang Bie, Libin Jiao, Haodi Wang, Zheng Zhang, Sašo Tomažič and Anton Umek
Information 2025, 16(10), 865; https://doi.org/10.3390/info16100865 (registering DOI) - 5 Oct 2025
Abstract
This paper contains an analysis of methods for person classification based on signals from wearable IMU sensors during sports. While this problem has been investigated in prior work, existing approaches have not addressed it within the context of few-shot or minimal-data scenarios. A few-shot scenario is especially useful as the main use case for person identification in sports systems is to be integrated into personalised biofeedback systems in sports. Such systems should provide personalised feedback that helps athletes learn faster. When introducing a new user, it is impractical to expect them to first collect many recordings. We demonstrate that the problem can be solved with over 90% accuracy in both open-set and closed-set scenarios using established methods. However, the challenge arises when applying few-shot methods, which do not require retraining the model to recognise new people. Most few-shot methods perform poorly due to feature extractors that learn dataset-specific representations, limiting their generalizability. To overcome this, we propose a combination of an unsupervised feature extractor and a prototypical network. This approach achieves 91.8% accuracy in the five-shot closed-set setting and 81.5% accuracy in the open-set setting, with a 99.6% rejection rate for unknown athletes. Full article
(This article belongs to the Special Issue Machine Learning and Data Mining for User Classification)
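The prototypical-network approach described in the abstract above can be sketched in a few lines: each athlete's prototype is the mean embedding of their few labelled "shots", a query takes the nearest prototype's label, and in the open-set case a distance threshold rejects unknown athletes. This is an illustrative sketch with toy 2-D embeddings and an invented rejection threshold, not the authors' implementation.

```python
import numpy as np

def prototypes(support_embeddings, support_labels):
    """Mean embedding per class from the few labelled 'shots'."""
    classes = sorted(set(support_labels))
    return classes, np.stack([
        np.mean([e for e, y in zip(support_embeddings, support_labels) if y == c], axis=0)
        for c in classes
    ])

def classify(query, classes, protos, reject_dist=None):
    """Nearest-prototype label; distances above the threshold are
    rejected as 'unknown' (the open-set case)."""
    d = np.linalg.norm(protos - query, axis=1)
    i = int(np.argmin(d))
    if reject_dist is not None and d[i] > reject_dist:
        return "unknown"
    return classes[i]

# Toy 2-D "embeddings" for two athletes, five shots each.
rng = np.random.default_rng(0)
support = np.vstack([rng.normal(0, 0.1, (5, 2)), rng.normal(3, 0.1, (5, 2))])
labels = ["A"] * 5 + ["B"] * 5
cls, protos = prototypes(support, labels)
print(classify(np.array([0.05, -0.02]), cls, protos))                   # near athlete A
print(classify(np.array([10.0, 10.0]), cls, protos, reject_dist=1.0))   # far from both
```

In the paper the embeddings come from an unsupervised feature extractor; here they are random clusters, which is the only part that changes between this toy and the real pipeline.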

17 pages, 1613 KB  
Article
Superimposed CSI Feedback Assisted by Inactive Sensing Information
by Mintao Zhang, Haowen Jiang, Zilong Wang, Linsi He, Yuqiao Yang, Mian Ye and Chaojin Qing
Sensors 2025, 25(19), 6156; https://doi.org/10.3390/s25196156 (registering DOI) - 4 Oct 2025
Abstract
In massive multiple-input and multiple-output (mMIMO) systems, superimposed channel state information (CSI) feedback is developed to improve the occupation of uplink bandwidth resources. Nevertheless, the interference from this superimposed mode degrades the recovery performance of both downlink CSI and uplink data sequences. Although machine learning (ML)-based methods effectively mitigate superimposed interference by leveraging the multi-domain features of downlink CSI, the complex interactions among network model parameters cause a significant burden on system resources. To address these issues, inspired by sensing-assisted communication, we propose a novel superimposed CSI feedback method assisted by inactive sensing information that previously existed but was not utilized at the base station (BS). To the best of our knowledge, this is the first time that inactive sensing information is utilized to enhance superimposed CSI feedback. In this method, a new type of modal data, different from communication data, is developed to aid in interference suppression without requiring additional hardware at the BS. Specifically, the proposed method utilizes location, speed, and path information extracted from sensing devices to derive prior information. Then, based on the derived prior information, denoising processing is applied to both the delay and Doppler dimensions of downlink CSI in the delay—Doppler (DD) domain, significantly enhancing the recovery accuracy. Simulation results demonstrate the performance improvement of downlink CSI and uplink data sequences when compared to both classic and novel superimposed CSI feedback methods. Moreover, against parameter variations, simulation results also validate the robustness of the proposed method. Full article
(This article belongs to the Section Communications)

25 pages, 666 KB  
Article
Continual Learning for Intrusion Detection Under Evolving Network Threats
by Chaoqun Guo, Xihan Li, Jubao Cheng, Shunjie Yang and Huiquan Gong
Future Internet 2025, 17(10), 456; https://doi.org/10.3390/fi17100456 (registering DOI) - 4 Oct 2025
Abstract
In the face of ever-evolving cyber threats, modern intrusion detection systems (IDS) must achieve long-term adaptability without sacrificing performance on previously encountered attacks. Traditional IDS approaches often rely on static training assumptions, making them prone to forgetting old patterns, underperforming in label-scarce conditions, and struggling with imbalanced class distributions as new attacks emerge. To overcome these limitations, we present a continual learning framework tailored for adaptive intrusion detection. Unlike prior methods, our approach is designed to operate under real-world network conditions characterized by high-dimensional, sparse traffic data and task-agnostic learning sequences. The framework combines three core components: a clustering-based memory strategy that selectively retains informative historical samples using DP-Means; multi-level knowledge distillation that aligns current and previous model states at output and intermediate feature levels; and a meta-learning-driven class reweighting mechanism that dynamically adjusts to shifting attack distributions. Empirical evaluations on benchmark intrusion detection datasets demonstrate the framework’s ability to maintain high detection accuracy while effectively mitigating forgetting. Notably, it delivers reliable performance in continually changing environments where the availability of labeled data is limited, making it well-suited for real-world cybersecurity systems. Full article
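The DP-Means step behind the memory strategy above behaves like k-means with one twist: a point farther than a penalty λ from every centroid opens a new cluster, so the number of clusters is chosen by the data rather than fixed in advance. A minimal sketch on synthetic points (not the paper's code, and without the sample-selection logic built on top of it):

```python
import numpy as np

def dp_means(X, lam, n_iter=10):
    """DP-Means: like k-means, but a point farther than lam from every
    centroid spawns a new cluster, so k is chosen by the data."""
    centroids = [X[0]]
    assign = np.zeros(len(X), dtype=int)
    for _ in range(n_iter):
        for i, x in enumerate(X):
            d = [np.linalg.norm(x - c) for c in centroids]
            j = int(np.argmin(d))
            if d[j] > lam:            # too far from everything: new cluster
                centroids.append(x.copy())
                assign[i] = len(centroids) - 1
            else:
                assign[i] = j
        centroids = [X[assign == j].mean(axis=0) for j in range(len(centroids))]
    return np.array(centroids), assign

# Two well-separated groups; lam=3 lets DP-Means discover both.
X = np.vstack([np.zeros((5, 2)), np.full((5, 2), 10.0)])
cents, assign = dp_means(X, lam=3.0)
print(len(cents))
```

For memory selection, one would then retain the samples nearest each centroid as the informative historical exemplars.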

15 pages, 265 KB  
Article
Supporting Teacher Professionalism for Inclusive Education: Integrating Cognitive, Emotional, and Contextual Dimensions
by Michal Nissim and Fathi Shamma
Educ. Sci. 2025, 15(10), 1317; https://doi.org/10.3390/educsci15101317 (registering DOI) - 4 Oct 2025
Abstract
This study examined how cognitive, affective, and sociocultural factors shape teachers’ readiness for inclusive education, focusing on the interplay between attitudes, emotional concerns, and self-efficacy. A survey of 149 elementary school teachers from diverse communities employed three validated instruments to assess these constructs. Overall, teachers expressed moderately positive attitudes toward inclusion and relatively high levels of self-efficacy, yet emotional concerns were consistently present. Importantly, correlational analyses revealed that emotional concerns fully mediated the relationship between attitudes and self-efficacy, underscoring the central role of affective dimensions in shaping teachers’ professional confidence. Teachers with prior training or direct experience with students with disabilities reported lower emotional concerns, suggesting the value of practice-based professional learning opportunities. Sociocultural differences also emerged, with differences across communities, pointing to the influence of communal norms on emotional readiness for inclusion. These findings highlight the need to reconceptualize teacher professionalism in inclusive education as integrating cognitive, emotional, and contextual dimensions. Implications include designing professional development programs that combine knowledge, practice, and emotional preparedness, alongside culturally responsive approaches tailored to minority communities. Full article
(This article belongs to the Special Issue Supporting Teaching Staff Development for Professional Education)
18 pages, 3251 KB  
Article
Classifying Advanced Driver Assistance System (ADAS) Activation from Multimodal Driving Data: A Real-World Study
by Gihun Lee, Kahyun Lee and Jong-Uk Hou
Sensors 2025, 25(19), 6139; https://doi.org/10.3390/s25196139 (registering DOI) - 4 Oct 2025
Abstract
Identifying the activation status of advanced driver assistance systems (ADAS) in real-world driving environments is crucial for safety, responsibility attribution, and accident forensics. Unlike prior studies that primarily rely on simulation-based settings or unsynchronized data, we collected a multimodal dataset comprising synchronized controller area network (CAN)-bus and smartphone-based inertial measurement unit (IMU) signals from drivers on consistent highway sections under both ADAS-enabled and manual modes. Using these data, we developed lightweight classification pipelines based on statistical and deep learning approaches to explore the feasibility of distinguishing ADAS operation. Our analyses revealed systematic behavioral differences between modes, particularly in speed regulation and steering stability, highlighting how ADAS reduces steering variability and stabilizes speed control. Although classification accuracy was moderate, this study provides one of the first data-driven demonstrations of ADAS status detection under naturalistic conditions. Beyond classification, the released dataset enables systematic behavioral analysis and offers a valuable resource for advancing research on driver monitoring, adaptive ADAS algorithms, and accident forensics. Full article
(This article belongs to the Special Issue Applications of Machine Learning in Automotive Engineering)

20 pages, 4264 KB  
Article
Skeleton-Guided Diffusion for Font Generation
by Li Zhao, Shan Dong, Jiayi Liu, Xijin Zhang, Xiaojiao Gao and Xiaojun Wu
Electronics 2025, 14(19), 3932; https://doi.org/10.3390/electronics14193932 - 3 Oct 2025
Abstract
Generating non-standard fonts, such as running script (e.g., XingShu), poses significant challenges due to their high stroke continuity, structural flexibility, and stylistic diversity, which traditional component-based prior knowledge methods struggle to model effectively. While diffusion models excel at capturing continuous feature spaces and stroke variations through iterative denoising, they face critical limitations: (1) style leakage, where large stylistic differences lead to inconsistent outputs due to noise interference; (2) structural distortion, caused by the absence of explicit structural guidance, resulting in broken strokes or deformed glyphs; and (3) style confusion, where similar font styles are inadequately distinguished, producing ambiguous results. To address these issues, we propose a novel skeleton-guided diffusion model with three key innovations: (1) a skeleton-constrained style rendering module that enforces semantic alignment and balanced energy constraints to amplify critical skeletal features, mitigating style leakage and ensuring stylistic consistency; (2) a cross-scale skeleton preservation module that integrates multi-scale glyph skeleton information through cross-dimensional interactions, effectively modeling macro-level layouts and micro-level stroke details to prevent structural distortions; (3) a contrastive style refinement module that leverages skeleton decomposition and recombination strategies, coupled with contrastive learning on positive and negative samples, to establish robust style representations and disambiguate similar styles. Extensive experiments on diverse font datasets demonstrate that our approach significantly improves the generation quality, achieving superior style fidelity, structural integrity, and style differentiation compared to state-of-the-art diffusion-based font generation methods. Full article
38 pages, 1412 KB  
Article
A Framework for Understanding the Impact of Integrating Conceptual and Quantitative Reasoning in a Quantum Optics Tutorial on Students’ Conceptual Understanding
by Paul D. Justice, Emily Marshman and Chandralekha Singh
Educ. Sci. 2025, 15(10), 1314; https://doi.org/10.3390/educsci15101314 - 3 Oct 2025
Abstract
We investigated the impact of incorporating quantitative reasoning for deeper sense-making in a Quantum Interactive Learning Tutorial (QuILT) on students’ conceptual performance using a framework emphasizing integration of conceptual and quantitative aspects of quantum optics. In this investigation, we compared two versions of the QuILT that were developed and validated to help students learn various aspects of quantum optics using a Mach–Zehnder Interferometer with single photons and polarizers. One version of the QuILT is entirely conceptual while the other version integrates quantitative and conceptual reasoning (hybrid version). Performance on conceptual questions of upper-level undergraduate and graduate students who engaged with the hybrid QuILT was compared with that of those who utilized the conceptual QuILT emphasizing the same concepts. Both versions of the QuILT focus on the same concepts, use a scaffolded approach to learning, and take advantage of research on students’ difficulties in learning these challenging concepts as well as a cognitive task analysis from an expert perspective as a guide. The hybrid and conceptual QuILTs were used in courses for upper-level undergraduates or first-year physics graduate students in several consecutive years at the same university. The same conceptual pre-test and post-test were administered after traditional lecture-based instruction in relevant concepts and after students engaged with the QuILT, respectively. We find that the post-test performance of physics graduate students who utilized the hybrid QuILT on conceptual questions, on average, was better than that of those who utilized the conceptual QuILT. For undergraduates, the results showed differences for different classes.
One possible interpretation of these findings that is consistent with our framework is that integrating conceptual and quantitative aspects of physics in research-based tools and pedagogies should be commensurate with students’ prior knowledge of physics and mathematics involved so that students do not experience cognitive overload while engaging with such learning tools and have appropriate opportunities for metacognition, deeper sense-making, and knowledge organization. In the undergraduate course in which many students did not derive added benefit from the integration of conceptual and quantitative aspects, their pre-test performance suggests that the traditional lecture-based instruction may not have sufficiently provided a “first coat” to help students avoid cognitive overload when engaging with the hybrid QuILT. These findings suggest that different groups of students can benefit from a research-based learning tool that integrates conceptual and quantitative aspects if cognitive overload while learning is prevented either due to students’ high mathematical facility or due to their reasonable conceptual facility before engaging with the learning tool. Full article
20 pages, 781 KB  
Article
Development of a Brief Screener for Crosscutting Patterns of Family Maltreatment and Psychological Health Problems
by Shu Xu, Michael F. Lorber, Richard E. Heyman and Amy M. Smith Slep
Psychol. Int. 2025, 7(4), 83; https://doi.org/10.3390/psycholint7040083 - 3 Oct 2025
Abstract
Prior work established the presence of six crosscutting patterns of clinically significant family maltreatment (FM) and psychological health (PH) problems among active-duty service members. Here, we develop a brief screener for these patterns via Classification and Regression Trees (CART) analyses using a sample of active-duty members of the United States Air Force. CART is a predictive algorithm used in machine learning. It balances prediction accuracy and model parsimony to identify an optimal set of predictors and identifies the thresholds on those predictors in relation to a discrete condition of interest (e.g., diagnosis of pathology). A 22-item screener predicted membership in five of the six classes (sensitivities and specificities > 0.96; positive and negative predictive values > 0.90). However, for service members at extremely high risk of clinically significant externalizing behavior, sensitivity and positive predictive values were much lower. The resulting 22-item brief screener can facilitate feasible, cost-effective detection of five of the six identified FM and PH problem patterns with a small number of items. The sixth pattern can be predicted far better than chance. Researchers and policymakers can use this tool to guide prevention efforts for FM and PH problems in service members. Full article
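The core CART mechanism the screener study relies on can be shown with a single split: choose the threshold on an item score that minimizes the weighted Gini impurity of the two resulting groups, then recurse. A minimal one-split sketch on hypothetical screener scores (the actual screener, items, and thresholds come from the paper's CART analyses, not from this toy):

```python
import numpy as np

def best_split(x, y):
    """One CART step: choose the threshold on a single item score that
    minimises weighted Gini impurity of the two resulting groups."""
    def gini(labels):
        if len(labels) == 0:
            return 0.0
        p = np.mean(labels)          # fraction of positives in the group
        return 2 * p * (1 - p)
    best_t, best_g = None, float("inf")
    for t in np.unique(x):
        left, right = y[x <= t], y[x > t]
        g = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if g < best_g:
            best_t, best_g = t, g
    return best_t, best_g

# Hypothetical item scores vs. class membership (1 = at-risk pattern).
scores = np.array([0, 1, 1, 2, 3, 3, 4, 5])
label  = np.array([0, 0, 0, 0, 1, 1, 1, 1])
t, g = best_split(scores, label)
print(t, g)   # threshold 2 separates the classes perfectly (impurity 0)
```

Balancing accuracy against parsimony, as the abstract describes, then amounts to pruning such splits until only the most predictive items remain.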

26 pages, 6668 KB  
Article
Using Entity-Aware LSTM to Enhance Streamflow Predictions in Transboundary and Large Lake Basins
by Yunsu Park, Xiaofeng Liu, Yuyue Zhu and Yi Hong
Hydrology 2025, 12(10), 261; https://doi.org/10.3390/hydrology12100261 - 2 Oct 2025
Abstract
Hydrological simulation of large, transboundary water systems like the Laurentian Great Lakes remains challenging. Although deep learning has advanced hydrologic forecasting, prior efforts are fragmented, lacking a unified basin-wide model for daily streamflow. We address this gap by developing a single Entity-Aware Long Short-Term Memory (EA-LSTM) model, an architecture that distinctly processes static catchment attributes and dynamic meteorological forcings, trained without basin-specific calibration. We compile a cross-border dataset integrating daily meteorological forcings, static catchment attributes, and observed streamflow for 975 sub-basins across the United States and Canada (1980–2023). With a temporal training/testing split, the unified EA-LSTM attains a median Nash–Sutcliffe Efficiency (NSE) of 0.685 and a median Kling–Gupta Efficiency (KGE) of 0.678 in validation, substantially exceeding a standard LSTM (median NSE 0.567, KGE 0.555) and the operational NOAA National Water Model (median NSE 0.209, KGE 0.440). Although skill is reduced in the smallest basins (median NSE 0.554) and during high-flow events (median PBIAS −29.6%), the performance is robust across diverse hydroclimatic settings. These results demonstrate that a single, calibration-free deep learning model can provide accurate, scalable streamflow prediction across an international basin, offering a practical path toward unified forecasting for the Great Lakes and a transferable framework for other large, data-sparse watersheds. Full article
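The NSE and KGE skill scores quoted in the abstract above have simple closed forms, which makes the reported medians easy to interpret: both equal 1 for a perfect simulation, and NSE = 0 means the model is no better than predicting the mean observed flow. A minimal sketch:

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect; 0 matches the mean-flow baseline."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta Efficiency: combines correlation r, variability ratio
    alpha, and bias ratio beta into a single distance from the ideal (1,1,1)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    alpha = sim.std() / obs.std()
    beta = sim.mean() / obs.mean()
    return 1 - np.sqrt((r - 1) ** 2 + (alpha - 1) ** 2 + (beta - 1) ** 2)

obs = [1.0, 2.0, 3.0, 4.0, 5.0]
print(nse(obs, obs), kge(obs, obs))                     # perfect simulation: 1.0, 1.0
print(round(nse(obs, [1.1, 2.1, 2.9, 4.2, 4.8]), 3))   # small errors: close to 1
```

Against these definitions, the unified EA-LSTM's median NSE of 0.685 versus 0.209 for the NOAA National Water Model is a large gap in explained variance.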
17 pages, 1747 KB  
Article
Weighted Transformer Classifier for User-Agent Progression Modeling, Bot Contamination Detection, and Traffic Trust Scoring
by Geza Lucz and Bertalan Forstner
Mathematics 2025, 13(19), 3153; https://doi.org/10.3390/math13193153 - 2 Oct 2025
Abstract
In this paper, we present a unique method to determine the level of bot contamination of web-based user agents. It is common practice for bots and robotic agents to masquerade as human-like to avoid content and performance limitations. This paper continues our previous work, using over 600 million web log entries collected from over 4000 domains to derive and generalize how the prominence of specific web browser versions progresses over time, assuming genuine human agency. Here, we introduce a parametric model capable of reproducing this progression in a tunable way. This simulation allows us to tag human-generated traffic in our data accurately. Along with the highest confidence self-tagged bot traffic, we train a Transformer-based classifier that can determine the bot contamination—a botness metric of user-agents without prior labels. Unlike traditional syntactic or rule-based filters, our model learns temporal patterns of raw and heuristic-derived features, capturing nuanced shifts in request volume, response ratios, content targeting, and entropy-based indicators over time. This rolling window-based pre-classification of traffic allows content providers to bin streams according to their bot infusion levels and direct them to several specifically tuned filtering pipelines, given the current load levels and available free resources. We also show that aggregated traffic data from multiple sources can enhance our model’s accuracy and can be further tailored to regional characteristics using localized metadata from standard web server logs. Our ability to adjust the heuristics to geographical or use case specifics makes our method robust and flexible. Our evaluation highlights that 65% of unclassified traffic is bot-based, underscoring the urgency of robust detection systems. We also propose practical methods for independent or third-party verification and further classification by abusiveness. Full article

23 pages, 3987 KB  
Article
From Symmetry to Semantics: Improving Heritage Point Cloud Classification with a Geometry-Aware, Uniclass-Informed Taxonomy for Random Forest Implementation in Automated HBIM Modelling
by Aleksander Gil and Yusuf Arayici
Symmetry 2025, 17(10), 1635; https://doi.org/10.3390/sym17101635 - 2 Oct 2025
Abstract
Heritage Building Information Modelling (HBIM) requires the accurate classification of diverse building elements from 3D point clouds. This study presents a novel classification approach integrating a bespoke Uniclass-derived taxonomy with a hierarchical Random Forest model. It was applied to the 17th-century Queen’s House in Greenwich, a building rich in classical architectural elements whose geometric properties are often defined by principles of symmetry. The bespoke classification was implemented across three levels (50 mm, 20 mm, 5 mm point cloud resolutions) and evaluated against the prior experiment that used Uniclass classification. Results showed a substantial improvement in classification precision and overall accuracy at all levels. The Level 1 classifier’s accuracy increased by 15% of points (relative ~50% improvement) with the bespoke classification taxonomy, reducing the misclassifications and error propagation in subsequent levels. This research demonstrates that tailoring the Uniclass building classification for heritage-specific geometry significantly enhances machine learning performance, which, to date, has not been published in the academic domain. The findings underscore the importance of adaptive taxonomies and suggest pathways for integrating multi-scale features and advanced learning methods to support automated HBIM workflows. Full article
(This article belongs to the Section Computer)

19 pages, 2848 KB  
Article
Monitoring of Cropland Abandonment Integrating Machine Learning and Google Earth Engine—Taking Hengyang City as an Example
by Yefeng Jiang and Zichun Guo
Land 2025, 14(10), 1984; https://doi.org/10.3390/land14101984 - 2 Oct 2025
Abstract
Cropland abandonment, a global challenge, necessitates comprehensive monitoring to achieve the zero hunger goal. Prior monitoring approaches to cropland abandonment often face constraints in resolution, time series, drivers, prediction, or a combination of these. Here, we proposed an artificial intelligence framework to comprehensively monitor cropland abandonment and tested the framework in Hengyang City, China. Specifically, we first mapped land cover at 30 m resolution from 1985 to 2023 using Landsat, stable sample points, and a machine learning model. Subsequently, we constructed the extent, time, and frequency of cropland abandonment from 1986 to 2022 by analyzing pixel-level land-use trajectories. Finally, we quantified the drivers of cropland abandonment using machine learning models and predicted the spatial distribution of cropland abandonment risk from 2032 to 2062. Our results indicated that the abandonment maps achieved overall accuracies of 0.88 and 0.78 for identifying abandonment locations and timing, respectively. From 1986 to 2022, the proportion of cropland abandonment ranged between 0.15% and 4.06%, with an annual average abandonment rate of 1.32%. Additionally, the duration of abandonment varied from 2 to 38 years, averaging approximately 14 years, indicating widespread cropland abandonment in the study area. Furthermore, 62.99% of the abandoned cropland experienced abandonment once, 27.17% experienced it twice, and only 0.23% experienced it five times or more. Over 50% of cropland abandonment remained unreclaimed or reused. During the study period, tree cover, soil pH, soil total phosphorus, potential crop yield, and the multiresolution index of valley bottom flatness emerged as the five most important environmental covariates, with relative importances of 0.087, 0.074, 0.068, 0.050, and 0.043, respectively. 
Temporally, cropland abandonment in 1992 was influenced by transportation inaccessibility and low agricultural productivity, soil quality degradation became an additional factor by 2010, and synergistic effects of all three drivers were observed from 2012 to 2022. Notably, most cropland had a low abandonment risk (mean: 0.36), with only 0.37% exceeding 0.7, primarily distributed in transitional zones between cropland and non-cropland. Future risk predictions suggested a gradual decline in both risk values and the spatial extent of cropland abandonment from 2032 to 2062. In summary, we developed a comprehensive framework for monitoring cropland abandonment using artificial intelligence technology, which can be used in national or regional land-use policies, warning systems, and food security planning. Full article

20 pages, 990 KB  
Article
Hybrid Stochastic–Machine Learning Framework for Postprandial Glucose Prediction in Type 1 Diabetes
by Irina Naskinova, Mikhail Kolev, Dilyana Karova and Mariyan Milev
Algorithms 2025, 18(10), 623; https://doi.org/10.3390/a18100623 - 1 Oct 2025
Abstract
This research introduces a hybrid framework that integrates stochastic modeling and machine learning for predicting postprandial glucose levels in individuals with Type 1 Diabetes (T1D). The primary aim is to enhance the accuracy of glucose predictions by merging a biophysical Glucose–Insulin–Meal (GIM) model with advanced machine learning techniques. This framework is tailored to utilize the Kaggle BRIST1D dataset, which comprises real-world data from continuous glucose monitoring (CGM), insulin administration, and meal intake records. The methodology employs the GIM model as a physiological prior to generate simulated glucose and insulin trajectories, which are then utilized as input features for the machine learning (ML) component. For this component, the study leverages the Light Gradient Boosting Machine (LightGBM) due to its efficiency and strong performance with tabular data, while Long Short-Term Memory (LSTM) networks are applied to capture temporal dependencies. Additionally, Bayesian regression is integrated to assess prediction uncertainty. A key advancement of this research is the transition from a deterministic GIM formulation to a stochastic differential equation (SDE) framework, which allows the model to represent the probabilistic range of physiological responses and improves uncertainty management when working with real-world data. The findings reveal that this hybrid methodology enhances both the precision and applicability of glucose predictions by integrating the physiological insights of Glucose Interaction Models (GIM) with the flexibility of data-driven machine learning techniques to accommodate real-world variability. This innovative framework facilitates the creation of robust, transparent, and personalized decision-support systems aimed at improving diabetes management. Full article
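The step from a deterministic GIM to an SDE formulation, as described above, can be illustrated with the Euler–Maruyama scheme on a toy mean-reverting glucose model, dG = -θ(G - G_basal) dt + σ dW. All parameters here are invented for illustration; the paper's GIM has a richer glucose-insulin-meal structure, and this sketch only shows how simulated trajectories acquire a probabilistic spread.

```python
import numpy as np

def euler_maruyama(g0, drift, sigma, dt, n_steps, rng):
    """Simulate dG = drift(G) dt + sigma dW with the Euler-Maruyama scheme."""
    g = np.empty(n_steps + 1)
    g[0] = g0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
        g[k + 1] = g[k] + drift(g[k]) * dt + sigma * dw
    return g

rng = np.random.default_rng(42)
basal = 90.0                              # assumed basal glucose, mg/dL
drift = lambda g: -0.05 * (g - basal)     # relax toward basal
paths = np.array([euler_maruyama(180.0, drift, sigma=2.0, dt=1.0,
                                 n_steps=120, rng=rng) for _ in range(200)])
print(round(paths[:, -1].mean(), 1))      # ensemble mean decays toward basal
```

An ensemble of such trajectories, rather than a single deterministic curve, is what lets the hybrid model feed uncertainty-aware features to the downstream LightGBM/LSTM predictors.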
14 pages, 1037 KB  
Article
MMSE-Based Dementia Prediction: Deep vs. Traditional Models
by Yuyeon Jung, Yeji Park, Jaehyun Jo and Jinhyoung Jeong
Life 2025, 15(10), 1544; https://doi.org/10.3390/life15101544 - 1 Oct 2025
Abstract
Early and accurate diagnosis of dementia is essential to improving patient outcomes and reducing societal burden. The Mini-Mental State Examination (MMSE) is widely used to assess cognitive function, yet traditional statistical and machine learning approaches often face limitations in capturing nonlinear interactions and [...] Read more.
Early and accurate diagnosis of dementia is essential to improving patient outcomes and reducing societal burden. The Mini-Mental State Examination (MMSE) is widely used to assess cognitive function, yet traditional statistical and machine learning approaches often face limitations in capturing nonlinear interactions and subtle decline patterns. This study developed a novel deep learning-based dementia prediction model using MMSE data collected from domestic clinical settings and compared its performance with traditional machine learning models. A notable strength of this work lies in its use of item-level MMSE features combined with explainable AI (SHAP analysis), enabling both high predictive accuracy and clinical interpretability, an advancement over prior approaches that primarily relied on total scores or linear modeling. Data from 164 participants, classified into cognitively normal, mild cognitive impairment (MCI), and dementia groups, were analyzed. Individual MMSE items and total scores were used as input features, and the dataset was divided into training and validation sets (8:2 split). A fully connected neural network with regularization techniques was constructed and evaluated alongside Random Forest and support vector machine (SVM) classifiers. Model performance was assessed using accuracy, F1-score, confusion matrices, and receiver operating characteristic (ROC) curves. The deep learning model achieved the highest performance (accuracy 0.90, F1-score 0.90), surpassing Random Forest (0.86) and SVM (0.82). SHAP analysis identified Q11 (immediate memory), Q12 (calculation), and Q17 (drawing shapes) as the most influential variables, aligning with clinical diagnostic practices. These findings suggest that deep learning not only enhances predictive accuracy but also offers interpretable insights aligned with clinical reasoning, underscoring its potential utility as a reliable tool for early dementia diagnosis. However, the study is limited by the use of data from a single clinical site with a relatively small sample size, which may restrict generalizability. Future research should validate the model using larger, multi-institutional, and multimodal datasets to strengthen clinical applicability and robustness. Full article
(This article belongs to the Section Biochemistry, Biophysics and Computational Biology)
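The metrics reported above (accuracy, F1-score, confusion matrix) for the three-class problem can be computed as follows. The labels and predictions here are toy values, not the study's data; class codes 0/1/2 stand in for normal, MCI, and dementia.

```python
import numpy as np

def confusion_matrix(y_true, y_pred, n_classes=3):
    """Rows are true classes, columns are predicted classes."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def macro_f1(cm):
    """Unweighted mean of per-class F1 scores."""
    f1s = []
    for c in range(cm.shape[0]):
        tp = cm[c, c]
        fp = cm[:, c].sum() - tp   # predicted c but actually another class
        fn = cm[c, :].sum() - tp   # actually c but predicted another class
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if (precision + recall) else 0.0)
    return float(np.mean(f1s))

# Toy labels: 0 = normal, 1 = MCI, 2 = dementia.
y_true = [0, 0, 1, 1, 2, 2, 2, 0]
y_pred = [0, 1, 1, 1, 2, 2, 0, 0]
cm = confusion_matrix(y_true, y_pred)
acc = np.trace(cm) / cm.sum()   # diagonal entries are correct predictions
```

Macro-averaging, as sketched here, treats each diagnostic class equally regardless of its prevalence, which matters when the dementia group is smaller than the normal group.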
13 pages, 647 KB  
Article
Critical Data Discovery for Self-Driving: A Data Distillation Approach
by Xiangyi Liao, Zhenyu Shou and Xu Chen
Appl. Sci. 2025, 15(19), 10649; https://doi.org/10.3390/app151910649 - 1 Oct 2025
Abstract
Deep learning models have achieved significant progress in developing self-driving algorithms. Despite their advantages, these algorithms typically require substantial amounts of data for effective training. Critical driving data, in particular, is essential for enhancing training efficiency and ensuring driving safety. However, existing methods [...] Read more.
Deep learning models have achieved significant progress in developing self-driving algorithms. Despite their advantages, these algorithms typically require substantial amounts of data for effective training. Critical driving data, in particular, is essential for enhancing training efficiency and ensuring driving safety. However, existing methods for identifying critical data often rely on human prior knowledge or are disconnected from the training of self-driving algorithms. In this paper, we introduce a novel data distillation technique designed to autonomously identify critical data for training self-driving algorithms. We conducted experiments with both numerical simulations and the NGSIM dataset, which consists of real-world car trajectories on highway US-101, to validate our approach. In the numerical experiments, the distillation method achieved a test root mean squared error of 1.933 using only 200 distilled training samples, nearly matching the 1.872 test error obtained with 20,000 randomly sampled training samples and demonstrating a substantial gain in data efficiency. The distilled critical data represents only 1% of the original dataset, optimizing data usage and significantly enhancing computational efficiency. For real-world NGSIM data, we demonstrate the performance of the proposed method in scenarios with extremely sparse data availability and show that it outperforms other sampling baselines, including Herding and K-centering. These experimental results highlight the capability of the proposed method to autonomously identify critical data without relying on human prior knowledge. Full article
(This article belongs to the Special Issue Pushing the Boundaries of Autonomous Vehicles)
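One of the baselines named above, K-centering, greedily selects points that maximize the minimum distance to the points already chosen, so the selection spreads across the data rather than concentrating in dense regions. The sketch below is a generic illustration of that baseline on toy 2-D features, not the paper's implementation.

```python
import numpy as np

def k_center_greedy(X, k, seed=0):
    """Greedy K-center selection: repeatedly add the point farthest
    from the current selection. Returns the indices of k points."""
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    selected = [int(rng.integers(n))]          # arbitrary first center
    # Squared distance of every point to its nearest selected center.
    d = np.sum((X - X[selected[0]]) ** 2, axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(d))                # farthest remaining point
        selected.append(nxt)
        d = np.minimum(d, np.sum((X - X[nxt]) ** 2, axis=1))
    return selected

# Toy driving-feature data: three well-separated clusters of 50 points each.
rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(c, 0.1, size=(50, 2))
                    for c in ((0, 0), (5, 0), (0, 5))])
idx = k_center_greedy(X, k=3)
```

On clustered data like this, the greedy rule picks one representative per cluster, which is what makes it a reasonable coverage-oriented baseline for data distillation methods to beat.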
