Search Results (700)

Search Parameters:
Keywords = meta-learning approach

19 pages, 2206 KB  
Review
International Benchmarking of Pharmacology Curricula and Prescribing Related Learning Outcomes, Implications for Australian Health Professional Education: A Systematic Review and Meta-Analysis
by Syed Haris Omar and Anna Barwick
Pharmacy 2026, 14(1), 27; https://doi.org/10.3390/pharmacy14010027 - 3 Feb 2026
Abstract
Background: Pharmacology plays a central role in linking biomedical science concepts with their application in clinical practice across medical and healthcare education. Globally, the pharmacology curriculum has evolved, like other disciplines, through the integration of case-based, problem-based, and hybrid teaching models that foster sound clinical reasoning and long-term learning. This study therefore aims to evaluate and compare the learning outcomes of pharmacology curricula across the globe through a systematic review and meta-analysis. Methods: This review was conducted in accordance with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA 2020) guidelines and was registered with PROSPERO (CRD420251207753). Five electronic databases, MEDLINE (PubMed), EMBASE, CINAHL, PsycINFO, and the Cochrane Library, were searched from January 2000 to October 2025. The Cochrane risk-of-bias tool was used to assess randomised controlled trials, while the Joanna Briggs Institute (JBI) checklist was used for mixed-design, quasi-experimental, and cross-sectional cohorts. Review Manager 5.4 was used for statistical analysis. Results: Of 3300 identified studies, 11 met the inclusion criteria, spanning 11 countries (published between 2007 and 2025). Integrated and case-based curricula significantly improved pharmacology knowledge compared to traditional lecture-based methods (SMD = 0.35; 95% CI: 0.07–0.64; I2 = 75%). Student satisfaction also favoured integrated learning (OR = 1.53; 95% CI: 1.16–2.02; I2 = 46%). Most included studies were of moderate-to-high methodological quality. Conclusion: Globally, active and integrated pharmacology curricula foster greater cognitive understanding and learner satisfaction than conventional models. However, significant variability persists in resource-limited settings, leading to unequal competency in prescribing and therapeutic reasoning. Australian pharmacology programmes align broadly with international standards but require greater standardisation in assessment and experiential learning. Full article
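
The pooled effects above (SMD = 0.35; OR = 1.53; with I2 heterogeneity) come from inverse-variance random-effects models. The authors ran their analysis in Review Manager 5.4; as a point of reference only, here is a minimal Python sketch of DerSimonian–Laird random-effects pooling with invented per-study SMDs and standard errors, not a reproduction of their analysis.

```python
import numpy as np

def dersimonian_laird(effects, ses):
    """Random-effects pooling of per-study effects (DerSimonian-Laird tau^2)."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses**2                            # inverse-variance weights
    fixed = np.sum(w * effects) / np.sum(w)     # fixed-effect estimate
    q = np.sum(w * (effects - fixed)**2)        # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)               # between-study variance
    w_re = 1.0 / (ses**2 + tau2)                # random-effects weights
    pooled = np.sum(w_re * effects) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    i2 = 0.0 if q <= df else 100.0 * (q - df) / q   # I^2 heterogeneity (%)
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), i2

# Invented per-study SMDs and standard errors, for illustration only
smd, ci, i2 = dersimonian_laird([0.10, 0.50, 0.60, 0.20],
                                [0.12, 0.15, 0.20, 0.10])
print(f"SMD = {smd:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f}), I2 = {i2:.0f}%")
```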

46 pages, 1262 KB  
Systematic Review
Financial Risk Prediction Models Integrating Environmental, Social and Governance Factors: A Systematic Review
by Cristina Caro-González, Daniel Jato-Espino and Yudith Cardinale
Int. J. Financial Stud. 2026, 14(2), 31; https://doi.org/10.3390/ijfs14020031 - 3 Feb 2026
Abstract
This systematic review explores the incorporation of Environmental, Social, and Governance (ESG) factors within financial risk prediction models, with a particular focus on Machine Learning (ML), Natural Language Processing (NLP), and Large Language Models (LLMs). Adhering to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) and PICOC frameworks, we identified 74 peer-reviewed publications disseminated between 2009 and March 2025 from the Scopus database. After excluding 10 systematic and literature reviews to avoid double-counting of evidence, we conducted quantitative analysis on 64 empirical studies. The findings indicate that traditional econometric methodologies continue to prevail (48%), followed by ML strategies (39%), NLP methodologies (8%), and Other (5%). Research that concurrently focuses on all three dimensions of ESG constitutes the most substantial category (44%), whereas the Social dimension, in isolation, receives minimal focus (5%). A geographic analysis reveals a concentration of research activity in China (13 studies), Italy (10), and the United States and India (6 each). Chi-square tests reveal no statistically significant relationship between the methodological approaches employed and the ESG dimensions examined (p = 0.62). The principal findings indicate that ML models—particularly ensemble methodologies and neural networks—exhibit enhanced predictive accuracy in the context of credit risk and default probability, whereas NLP methodologies reveal significant potential for the analysis of unstructured ESG disclosures. The review highlighted ongoing challenges, including inconsistencies in ESG data, variability in ratings across different providers, insufficient coverage of emerging markets, and the disparity between academic research and practical application in model implementation. Full article
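
The reported chi-square result (p = 0.62) tests independence between methodological approach and ESG dimension. A minimal sketch of such a test on a purely hypothetical contingency table (the review's actual counts are not reproduced here):

```python
from scipy.stats import chi2_contingency

# Rows: methodology (econometric, ML, NLP); columns: ESG focus (E, S, G, all three).
# Counts are hypothetical, for illustration only.
table = [[6, 2, 4, 14],
         [5, 1, 3, 11],
         [1, 0, 1, 3]]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.2f}")  # p > 0.05: no evidence of dependence
```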

30 pages, 1988 KB  
Systematic Review
MRI-Based Radiomics for Non-Invasive Prediction of Molecular Biomarkers in Gliomas
by Edoardo Agosti, Karen Mapelli, Gianluca Grimod, Amedeo Piazza, Marco Maria Fontanella and Pier Paolo Panciani
Cancers 2026, 18(3), 491; https://doi.org/10.3390/cancers18030491 - 2 Feb 2026
Abstract
Background: Radiomics has emerged as a promising approach to non-invasively characterize the molecular landscape of gliomas, providing quantitative, high-dimensional data derived from routine MRI. Given the recent shift toward molecularly driven classification, radiomics may support precision oncology by predicting key genomic, epigenetic, and phenotypic alterations without the need for invasive tissue sampling. This systematic review aimed to synthesize current radiomics applications for the non-invasive prediction of molecular biomarkers in gliomas, evaluating methodological trends, performance metrics, and translational readiness. Methods: This review followed the PRISMA 2020 guidelines. A systematic search was conducted in PubMed, Ovid MEDLINE, and Scopus on 10 January 2025, and updated on 1 February 2025, using predefined MeSH terms and keywords related to glioma, radiomics, machine learning, deep learning, and molecular biomarkers. Eligible studies included original research using MRI-based radiomics to predict molecular alterations in human gliomas, with reported performance metrics. Data extraction covered study design, cohort size, MRI sequences, segmentation approaches, feature extraction software, computational methods, biomarkers assessed, and diagnostic performance. Methodological quality was evaluated using the Radiomics Quality Score (RQS), Image Biomarker Standardization Initiative (IBSI) criteria, and Newcastle–Ottawa Scale (NOS). Due to heterogeneity, no meta-analysis was performed. Results: Of 744 screened records, 70 studies met the inclusion criteria. A total of 10,324 patients were included across all studies (mean 140 patients/study, range 23–628). The most frequently employed MRI sequences were T2-weighted (59 studies, 84.3%), contrast-enhanced T1WI (53 studies, 75.7%), T1WI (50 studies, 71.4%), and FLAIR (48 studies, 68.6%); diffusion-weighted imaging was used in only 7 studies (12.8%). Manual segmentation predominated (52 studies, 74.3%), whereas automated approaches were used in 13 studies (18.6%). Common feature extraction platforms included 3D Slicer (20 studies, 28.6%) and MATLAB-based tools (17 studies, 24.3%). Machine learning methods were applied in 47 studies (67.1%), with support vector machines used in 29 studies (41.4%); deep learning models were implemented in 27 studies (38.6%), primarily convolutional neural networks (20 studies, 28.6%). IDH mutation was the most frequently predicted biomarker (49 studies, 70%), followed by ATRX (27 studies, 38.6%), MGMT methylation (8 studies, 11.4%), and 1p/19q codeletion (7 studies, 10%). Reported AUC values ranged from 0.80 to 0.99 for IDH, approximately 0.71–0.953 for 1p/19q, 0.72–0.93 for MGMT, and 0.76–0.97 for ATRX, with deep learning or hybrid pipelines generally achieving the highest performance. RQS values highlighted substantial methodological variability, and IBSI adherence was inconsistent. NOS scores indicated high-quality methodology in a limited subset of studies. Conclusions: Radiomics demonstrates strong potential for the non-invasive prediction of key glioma molecular biomarkers, achieving high diagnostic performance across diverse computational approaches. However, widespread clinical translation remains hindered by heterogeneous imaging protocols, limited standardization, insufficient external validation, and variable methodological rigor. Full article
(This article belongs to the Special Issue Radiomics and Molecular Biology in Glioma: A Synergistic Approach)
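
Support vector machines were the most common classical learner among the included studies (29 of 70). For orientation, a minimal sketch of that shared pipeline stage, training an SVM on extracted radiomic features and reporting a test AUC; the feature matrix and binary biomarker label below are synthetic stand-ins, not data from any reviewed study:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 50))                           # stand-in for 50 radiomic features
y = (X[:, :5].sum(axis=1) + rng.normal(size=200)) > 0    # stand-in biomarker label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
clf.fit(X_tr, y_tr)
print(f"test AUC = {roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]):.2f}")
```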

18 pages, 582 KB  
Article
Interdisciplinary Conceptualizations of Variables and Parameters Through Narratives
by Eugenia Taranto, Sara Bagossi, Ferdinando Arzarello and Silvia Beltramino
Educ. Sci. 2026, 16(2), 217; https://doi.org/10.3390/educsci16020217 - 2 Feb 2026
Abstract
The concepts of variable and parameter are fundamental for mathematical activity but are also challenging to rigorously introduce at a didactic level. In this paper, we discuss findings from an 11th-grade class interdisciplinary project involving mathematics and linguistics, promoting a conceptual approach to the learning of variables and parameters through narratives. We analyze the written productions of 22 Italian students across two tasks: the first aimed at exploring differences in the meaning of variables and parameters while employing a logico–scientific or narrative mode of thinking; the second involved a meta-reflection on the methodological tools used in the disciplines involved. Students’ productions were qualitatively analyzed using content analysis and thematic analysis, respectively. The analysis enables an elaboration on students’ understanding of the different roles of variables and parameters and of their epistemic features. Students’ conceptualizations of variables and parameters went beyond formal description, drawing on a variety of real-world situations. Moreover, the meta-reflection on the methodological tools shows new awareness of students’ understanding of disciplinary concepts. These findings suggest that interdisciplinary approaches involving mathematics and linguistics can effectively support the conceptual understanding of algebraic notions in secondary school. We therefore recommend further research exploring the integration of narrative contexts and cross-disciplinary collaborations in mathematics teaching. Full article

39 pages, 3699 KB  
Article
Enhancing Decision Intelligence Using Hybrid Machine Learning Framework with Linear Programming for Enterprise Project Selection and Portfolio Optimization
by Abdullah, Nida Hafeez, Carlos Guzmán Sánchez-Mejorada, Miguel Jesús Torres Ruiz, Rolando Quintero Téllez, Eponon Anvi Alex, Grigori Sidorov and Alexander Gelbukh
AI 2026, 7(2), 52; https://doi.org/10.3390/ai7020052 - 1 Feb 2026
Abstract
This study presents a hybrid analytical framework that enhances project selection by achieving reasonable predictive accuracy through the integration of expert judgment and modern artificial intelligence (AI) techniques. Using an enterprise-level dataset of 10,000 completed software projects with verified real-world statistical characteristics, we develop a three-step architecture for intelligent decision support. First, we introduce an extended Analytic Hierarchy Process (AHP) that incorporates organizational learning patterns to compute expert-validated criteria weights with a consistent level of reliability (CR = 0.04), and Linear Programming is used for portfolio optimization. Second, we propose a machine learning architecture that integrates expert knowledge derived from AHP into models such as Transformers, TabNet, and Neural Oblivious Decision Ensembles through mechanisms including attention modulation, split criterion weighting, and differentiable tree regularization. Third, the hybrid AHP-Stacking classifier generates a meta-ensemble that adaptively balances expert-derived information with data-driven patterns. The analysis shows that the model achieves 97.5% accuracy, a 96.9% F1-score, and a 0.989 AUC-ROC, representing a 25% improvement compared to baseline methods. The framework also indicates a projected 68.2% improvement in portfolio value (estimated incremental value of USD 83.5 M) based on post factum financial results from the enterprise's ventures. This study is evaluated retrospectively using data from a single enterprise, and while the results demonstrate strong robustness, generalizability to other organizational contexts requires further validation. This research contributes a structured approach to hybrid intelligent systems and demonstrates that combining expert knowledge with machine learning can provide reliable, transparent, and high-performing decision-support capabilities for project portfolio management. Full article
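
The reported consistency ratio (CR = 0.04) is Saaty's standard AHP consistency check; values below 0.10 are conventionally accepted. The paper's extended AHP is not reproducible from the abstract, so the sketch below shows only the conventional principal-eigenvector weighting and CR computation on a hypothetical 4x4 criteria matrix:

```python
import numpy as np

def ahp_weights(A):
    """Standard AHP: principal-eigenvector priority weights and consistency ratio."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                 # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()                                # normalized priority vector
    ci = (eigvals[k].real - n) / (n - 1)        # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]         # Saaty's random index (n = 3..5)
    return w, ci / ri

# Hypothetical pairwise comparison matrix over four project-selection criteria
A = [[1,   3,   2,   5],
     [1/3, 1,   1/2, 2],
     [1/2, 2,   1,   3],
     [1/5, 1/2, 1/3, 1]]
w, cr = ahp_weights(A)
print("weights:", np.round(w, 3), f"CR = {cr:.3f}")  # CR < 0.10 => acceptably consistent
```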

24 pages, 3870 KB  
Article
Hybrid Ensemble Learning for TWSA Prediction in Water-Stressed Regions: A Case Study from Casablanca–Settat Region, Morocco
by Youssef Laalaoui, Naïma El Assaoui, Oumaima Ouahine, Thanh Thi Nguyen and Ahmed M. Saqr
Hydrology 2026, 13(2), 53; https://doi.org/10.3390/hydrology13020053 - 1 Feb 2026
Abstract
A hybrid machine learning framework has been developed in this study to estimate Terrestrial Water Storage Anomalies (TWSA) in Morocco’s Casablanca–Settat region, which faces serious groundwater stress due to rapid urbanization, intensive agriculture, and climate variability. In this study, TWSA is used as an integrated proxy for groundwater-related storage changes, while acknowledging that it also includes contributions from soil moisture and surface water. The approach combines satellite-based observations from the Gravity Recovery and Climate Experiment (GRACE) and GRACE Follow-On (GRACE-FO) with key environmental indicators such as rainfall, evapotranspiration, and land use data to track changes in groundwater availability with improved spatial detail. After preprocessing the data through feature selection, normalization, and outlier handling, the model applies six base learners, i.e., Huber regressor, automatic relevance determination regression, kernel ridge, long short-term memory, k-nearest neighbors, and gradient boosting. Their predictions are aggregated using a random forest meta-learner to improve accuracy and stability. The ensemble achieved strong results, with a root mean square error of 0.13, a mean absolute error of 0.108, and a determination coefficient of 0.97—far better than single-model baselines—based on a temporally independent train-test split. Spatial analysis highlighted clear patterns of groundwater depletion linked to land cover and usage. These results can guide targeted aquifer recharge efforts, drought response planning, and smarter irrigation management. The model also aligns with national goals under Morocco’s water sustainability initiatives and can be adapted for use in other regions with similar environmental challenges. Full article
(This article belongs to the Topic Advances in Hydrological Remote Sensing)
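
The architecture described, several heterogeneous base learners whose predictions are aggregated by a random forest meta-learner, maps closely onto scikit-learn's stacking API. A minimal sketch under that assumption, using five of the six named base learners (the LSTM is omitted since it requires a deep-learning framework) and synthetic data in place of the GRACE/GRACE-FO features:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import (GradientBoostingRegressor, RandomForestRegressor,
                              StackingRegressor)
from sklearn.kernel_ridge import KernelRidge
from sklearn.linear_model import ARDRegression, HuberRegressor
from sklearn.neighbors import KNeighborsRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error, r2_score

# Synthetic stand-in for rainfall/evapotranspiration/land-use features vs. TWSA
X, y = make_regression(n_samples=300, n_features=8, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False)  # temporal-style split

base = [("huber", HuberRegressor()),
        ("ard", ARDRegression()),
        ("kridge", KernelRidge(alpha=1.0)),
        ("knn", KNeighborsRegressor()),
        ("gboost", GradientBoostingRegressor(random_state=0))]
model = StackingRegressor(estimators=base,
                          final_estimator=RandomForestRegressor(random_state=0))
model.fit(X_tr, y_tr)
pred = model.predict(X_te)
print(f"MAE = {mean_absolute_error(y_te, pred):.2f}, R2 = {r2_score(y_te, pred):.2f}")
```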

29 pages, 1843 KB  
Systematic Review
Deep Learning for Tree Crown Detection and Delineation Using UAV and High-Resolution Imagery for Biometric Parameter Extraction: A Systematic Review
by Abdulrahman Sufyan Taha Mohammed Aldaeri, Chan Yee Kit, Lim Sin Ting and Mohamad Razmil Bin Abdul Rahman
Forests 2026, 17(2), 179; https://doi.org/10.3390/f17020179 - 29 Jan 2026
Abstract
Mapping individual-tree crowns (ITCs) along with extracting tree morphological attributes provides the core parameters required for estimating thermal stress and carbon emission functions. However, calculating morphological attributes relies on the prior delineation of ITCs. Using the Preferred Reporting Items for Systematic Reviews and Meta-Analysis (PRISMA) framework, this review synthesizes how deep-learning (DL)-based methods enable the conversion of crown geometry into reliable biometric parameter extraction (BPE) from high-resolution imagery. This addresses a gap often overlooked in studies focused solely on detection by providing a direct link to forest inventory metrics. Our review showed that instance segmentation dominates (approximately 46% of studies), producing the most accurate pixel-level masks for BPE, while RGB imagery is most common (73%), often integrated with canopy-height models (CHM) to enhance accuracy. New architectural approaches, such as StarDist, outperform Mask R-CNN by 6% in dense canopies. However, performance differs with crown overlap, occlusion, species diversity, and the poor transferability of allometric equations. Future work could prioritize multisensor data fusion, develop end-to-end biomass modeling to minimize allometric dependence, develop open datasets to address model generalizability, and enhance and test models like StarDist for higher accuracy in dense forests. Full article
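
Delineation quality in such studies is conventionally scored by mask intersection-over-union (IoU), the quantity behind pixel-level accuracy comparisons such as StarDist vs. Mask R-CNN. A minimal sketch of the metric on toy crown masks (illustrative only, not tied to any reviewed study):

```python
import numpy as np

def mask_iou(pred, ref):
    """Intersection-over-union of two boolean crown masks."""
    pred, ref = np.asarray(pred, bool), np.asarray(ref, bool)
    union = np.logical_or(pred, ref).sum()
    return np.logical_and(pred, ref).sum() / union if union else 0.0

# Toy 5x5 predicted vs. reference crown masks
pred = np.zeros((5, 5), bool); pred[1:4, 1:4] = True   # predicted crown pixels
ref = np.zeros((5, 5), bool);  ref[2:5, 2:5] = True    # reference (ground-truth) pixels
print(f"IoU = {mask_iou(pred, ref):.2f}")              # 4 / 14 ≈ 0.29
```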

20 pages, 1381 KB  
Systematic Review
AI-Enhanced Skill Assessment in Higher Vocational Education: A Systematic Review and Meta-Analysis
by Xia Sun and Haoheng Tian
Informatics 2026, 13(2), 20; https://doi.org/10.3390/informatics13020020 - 28 Jan 2026
Abstract
This study synthesizes empirical evidence on AI-supported skill assessment systems in higher vocational education through a systematic review and meta-analysis. Despite growing interest in generative AI within higher education, empirical research on AI-enabled assessment remains fragmented and methodologically uneven, particularly in vocational contexts. Following PRISMA 2020 guidelines, 27 peer-reviewed empirical studies published between 2010 and 2024 were identified from major international and Chinese databases and included in the analysis. Using a random-effects model, the meta-analysis indicates a moderate positive association between AI-supported assessment systems and skill-related learning outcomes (Hedges’ g = 0.72), alongside substantial heterogeneity across study designs, outcome measures, and implementation contexts. Subgroup analyses suggest variation across regional and institutional settings, which should be interpreted cautiously given small sample sizes and diverse methodological approaches. Based on the synthesized evidence, the study proposes a conceptual AI-supported skill assessment framework that distinguishes empirically grounded components from forward-looking extensions related to generative AI. Rather than offering prescriptive solutions, the framework provides an evidence-informed baseline to support future research, system design, and responsible integration of generative AI in higher education assessment. Overall, the findings highlight both the potential and the current empirical limitations of AI-enabled assessment, underscoring the need for more robust, theory-informed, and transparent studies as generative AI applications continue to evolve. Full article
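
Hedges' g is the small-sample-corrected standardized mean difference. A minimal sketch of the single-study computation with hypothetical group statistics; the review's pooled g = 0.72 additionally applies random-effects weighting across studies, which is not shown here:

```python
import numpy as np

def hedges_g(m1, s1, n1, m2, s2, n2):
    """Hedges' g: Cohen's d with the small-sample correction factor J."""
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                      # Cohen's d with pooled SD
    j = 1 - 3 / (4 * (n1 + n2) - 9)         # small-sample correction
    return j * d

# Hypothetical AI-supported vs. control skill-assessment scores
print(f"g = {hedges_g(78.0, 10.0, 40, 71.0, 9.5, 42):.2f}")
```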

31 pages, 947 KB  
Systematic Review
A Systematic Review of Cyber Risk Analysis Approaches for Wind Power Plants
by Muhammad Arsal, Tamer Kamel, Hafizul Asad and Asiya Khan
Energies 2026, 19(3), 677; https://doi.org/10.3390/en19030677 - 28 Jan 2026
Abstract
Wind power plants (WPPs), as large-scale cyber–physical systems (CPSs), have become essential to renewable energy generation but are increasingly exposed to cyber threats. Attacks on supervisory control and data acquisition (SCADA) networks can cause cascading physical and economic impacts. A systematic synthesis of cyber risk analysis methods specific to WPPs and cyber–physical energy systems (CPESs) is urgently needed to identify research gaps and guide the development of resilient protection frameworks. This study employs a Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) protocol to review the state of the art in this area. Peer-reviewed studies published between January 2010 and January 2025 were retrieved from four major journals using a structured set of nine search queries. After removing duplicates, applying inclusion and exclusion criteria, and screening titles and abstracts, 62 studies were examined using a synthesis framework. The studies were classified along three methodological dimensions: qualitative vs. quantitative, model-based vs. data-driven, and informal vs. formal, yielding a unified taxonomy of cyber risk analysis approaches. Among the included studies, 45% appeared to be qualitative or semi-quantitative frameworks such as STRIDE, DREAD, or MITRE ATT&CK; 35% were classified as quantitative or model-based techniques such as Bayesian networks, Markov decision processes, and Petri nets; and 20% adopted data-driven or hybrid AI/ML methods. Only 28% implemented formal verification, and fewer than 10% explicitly linked cyber vulnerabilities to safety consequences. Key research gaps include limited integration of safety–security interdependencies, scarce operational datasets, and inadequate modelling of environmental factors in WPPs. This systematic review highlights a predominance of qualitative approaches and a shortage of data-driven and formally verified frameworks for WPP cybersecurity. Future research should prioritise hybrid methods that integrate formal modelling, synthetic data generation, and machine learning-based risk prioritisation to enhance resilience and operational safety of renewable-energy infrastructures. Full article
(This article belongs to the Special Issue Trends and Challenges in Cyber-Physical Energy Systems)

23 pages, 3441 KB  
Article
Integrating Large Language Models with Deep Learning for Breast Cancer Treatment Decision Support
by Heeseung Park, Serin Ok, Taewoo Kang and Meeyoung Park
Diagnostics 2026, 16(3), 394; https://doi.org/10.3390/diagnostics16030394 - 26 Jan 2026
Abstract
Background/Objectives: Breast cancer is one of the most common malignancies, but its heterogeneous molecular subtypes make treatment decision-making complex and patient-specific. Both pathology reports and the electronic medical record (EMR) play a critical role in appropriate treatment decisions. This study aimed to develop an integrated clinical decision support system (CDSS) that combines a large language model (LLM)-based pathology analysis with deep learning-based treatment prediction to support standardized and reliable decision-making. Methods: Real-world data (RWD) obtained from a cohort of 5015 patients diagnosed with breast cancer were analyzed. Meta-Llama-3-8B-Instruct automatically extracted the TNM stage and tumor size from the pathology reports, which were then integrated with EMR variables. A multi-label classification of 16 treatment combinations was performed using six models, including Decision Tree, Random Forest, GBM, XGBoost, DNN, and Transformer. Performance was evaluated using accuracy, macro/micro-averaged precision, recall, F1 score, and AUC. Results: Using combined LLM-extracted pathology and EMR features, GBM and XGBoost achieved the highest and most stable predictive performance across all feature subset configurations (macro-F1 ≈ 0.88–0.89; AUC = 0.867–0.868). Both models demonstrated strong discrimination ability and consistent recall and precision, highlighting their robustness for multi-label classification in real-world settings. Decision Tree and Random Forest showed moderate but reliable performance (macro-F1 = 0.84–0.86; AUC = 0.849–0.821), indicating their applicability despite lower predictive capability. By contrast, the DNN and Transformer models produced comparatively lower scores (macro-F1 = 0.74–0.82; AUC = 0.780–0.757), especially when using the full feature set, suggesting limited suitability for structured clinical data without strong contextual dependencies. These findings indicate that gradient-boosting ensemble approaches are better optimized for tabular medical data and generate more clinically reliable treatment recommendations. Conclusions: The proposed artificial intelligence-based CDSS improves accuracy and consistency in breast cancer treatment decision support by integrating automated pathology interpretation with deep learning, demonstrating its potential utility in real-world cancer care. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
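
One common way to realize the multi-label setup described above is to fit one gradient-boosting classifier per binary treatment indicator and report macro/micro F1. A minimal sketch under that assumption, with synthetic stand-ins for the LLM-extracted pathology and EMR features (the study's own GBM/XGBoost configuration is not public):

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.multioutput import MultiOutputClassifier

# Synthetic stand-in: clinical features -> binary treatment indicators
X, Y = make_multilabel_classification(n_samples=500, n_features=20,
                                      n_classes=4, random_state=0)
X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, random_state=0)

# One gradient-boosting classifier per treatment label
clf = MultiOutputClassifier(GradientBoostingClassifier(random_state=0))
clf.fit(X_tr, Y_tr)
pred = clf.predict(X_te)
print(f"macro-F1 = {f1_score(Y_te, pred, average='macro'):.2f}")
print(f"micro-F1 = {f1_score(Y_te, pred, average='micro'):.2f}")
```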

27 pages, 3850 KB  
Article
A Robust Meta-Learning-Based Map-Matching Method for Vehicle Navigation in Complex Environments
by Fei Meng and Jiale Zhao
Symmetry 2026, 18(1), 210; https://doi.org/10.3390/sym18010210 - 22 Jan 2026
Abstract
Map matching is a fundamental technique for aligning noisy GPS trajectory data with digital road networks and constitutes a key component of Intelligent Transportation Systems (ITS) and Location-Based Services (LBS). Nevertheless, existing approaches still suffer from notable limitations in complex environments, particularly urban and urban-like scenarios characterized by heterogeneous GPS noise and sparse observations, including inadequate adaptability to dynamically varying noise, unavoidable trade-offs between real-time efficiency and matching accuracy, and limited generalization capability across heterogeneous driving behaviors. To overcome these challenges, this paper presents a Meta-learning-driven Progressive map-Matching (MPM) method with a symmetry-aware design, which integrates a two-layer pattern-mining-based noise-robust meta-learning mechanism with a dynamic weight adjustment strategy. By explicitly modeling topological symmetry in road networks, symmetric trajectory patterns, and symmetric noise variation characteristics, the proposed method effectively enhances prior knowledge utilization, accelerates online adaptation, and achieves a more favorable balance between accuracy and computational efficiency. Extensive experiments on two real-world datasets demonstrate that MPM consistently outperforms state-of-the-art methods, achieving up to 10–15% improvement in matching accuracy while reducing online matching latency by over 30% in complex urban environments. Furthermore, the symmetry-aware design significantly improves robustness against asymmetric interference, thereby providing a reliable and scalable solution for high-precision map matching in complex and dynamic traffic environments. Full article
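
The MPM method itself cannot be reconstructed from the abstract. For contrast, the simplest geometric baseline that all map matchers improve upon snaps each GPS fix to the nearest road segment by perpendicular projection; a minimal sketch with hypothetical planar segments:

```python
import numpy as np

def snap_to_segment(p, a, b):
    """Project point p onto segment ab; return (distance, snapped point)."""
    p, a, b = (np.asarray(v, float) for v in (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)  # clamp to segment
    q = a + t * ab
    return float(np.linalg.norm(p - q)), q

def match_point(p, segments):
    """Naive map matching: choose the segment with the smallest projected distance."""
    return min((snap_to_segment(p, a, b) for a, b in segments), key=lambda dq: dq[0])

# Hypothetical road segments (planar coordinates) and one noisy GPS fix
segments = [((0, 0), (10, 0)), ((10, 0), (10, 10)), ((0, 1), (0, 11))]
dist, snapped = match_point((4.2, 0.7), segments)
print(f"snapped to {snapped} at distance {dist:.2f}")
```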

15 pages, 801 KB  
Systematic Review
Artificial Intelligence in Pediatric Dentistry: A Systematic Review and Meta-Analysis
by Nevra Karamüftüoğlu, Büşra Yavuz Üçpunar, İrem Birben, Asya Eda Altundağ, Kübra Örnek Mullaoğlu and Cenkhan Bal
Children 2026, 13(1), 152; https://doi.org/10.3390/children13010152 - 21 Jan 2026
Abstract
Background/Objectives: Artificial intelligence (AI) has gained substantial prominence in pediatric dentistry, offering new opportunities to enhance diagnostic precision and clinical decision-making. AI-based systems are increasingly applied in caries detection, early childhood caries (ECC) risk prediction, tooth development assessment, mesiodens identification, and other key diagnostic tasks. This systematic review and meta-analysis aimed to synthesize evidence on the diagnostic performance of AI models developed specifically for pediatric dental applications. Methods: A systematic search was conducted in PubMed, Scopus, Web of Science, and Embase following PRISMA-DTA guidelines. Studies evaluating AI-based diagnostic or predictive models in pediatric populations (≤18 years) were included. Reference screening, data extraction, and quality assessment were performed independently by two reviewers. Pooled sensitivity, specificity, and area under the receiver operating characteristic curve (AUC) were calculated using random-effects models. Sources of heterogeneity related to imaging modality, annotation strategy, and dataset characteristics were examined. Results: Thirty-two studies met the inclusion criteria for qualitative synthesis, and fifteen were eligible for quantitative analysis. For radiographic caries detection, pooled sensitivity, specificity, and AUC were 0.91, 0.97, and 0.98, respectively. Prediction models demonstrated good diagnostic performance, with pooled sensitivity of 0.86, specificity of 0.82, and AUC of 0.89. Deep learning architectures, particularly convolutional neural networks, consistently outperformed traditional machine learning approaches. Considerable heterogeneity was identified across studies, primarily driven by differences in imaging protocols, dataset balance, and annotation procedures. Beyond quantitative accuracy estimates, this review critically evaluates whether current evidence supports meaningful clinical translation and identifies pediatric domains that remain underrepresented in AI-driven diagnostic innovation. Conclusions: AI technologies exhibit strong potential to improve diagnostic accuracy in pediatric dentistry. However, limited external validation, methodological variability, and the scarcity of prospective real-world studies restrict immediate clinical implementation. Future research should prioritize the development of multicenter pediatric datasets, harmonized annotation workflows, and transparent, explainable AI (XAI) models to support safe and effective clinical translation. Full article
(This article belongs to the Section Pediatric Dentistry & Oral Medicine)

16 pages, 3176 KB  
Article
Stacking Ensemble Learning for Genomic Prediction Under Complex Genetic Architectures
by Maurício de Oliveira Celeri, Moyses Nascimento, Ana Carolina Campana Nascimento, Filipe Ribeiro Formiga Teixeira, Camila Ferreira Azevedo, Cosme Damião Cruz and Laís Mayara Azevedo Barroso
Agronomy 2026, 16(2), 241; https://doi.org/10.3390/agronomy16020241 - 20 Jan 2026
Abstract
Genomic selection (GS) estimates genomic estimated breeding values (GEBVs) from genome-wide markers to reduce generation intervals and optimize germplasm selection, which is particularly advantageous for high-cost or late-expressed traits. While models like GBLUP are popular, they assume a polygenic architecture. In contrast, the Bayesian alphabet and machine learning (ML) can accommodate other types of genetic architectures. Given that no single model is universally optimal, stacking ensembles, which train a meta-model using predictions from diverse base learners, emerge as a compelling solution. However, the application of stacking in GS often overlooks non-additive effects. This study evaluated different stacking configurations for genomic prediction across 10 simulated traits, covering additive, dominance, and epistatic genetic architectures. A 5-fold cross-validation scheme was used to assess predictive ability and other evaluation metrics. The stacking approach demonstrated superior predictive ability in all scenarios. Gains were especially pronounced in complex architectures (100 QTLs, h2 = 0.3), reaching an 83% increment over the best individual model (BayesA with dominance), and also in oligogenic scenarios with epistasis (10 QTLs, h2 = 0.6), with a 27.59% gain. The success of stacking was attributed to two key strategies: base learner selection and the use of robust meta-learners (such as principal component or penalized regression) that effectively handled multicollinearity. Full article
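
The stacking mechanics credited here, out-of-fold base-learner predictions fed to a penalized meta-learner that absorbs their multicollinearity, can be sketched directly. The base learners and marker data below are illustrative stand-ins, not the study's models:

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.linear_model import BayesianRidge, RidgeCV
from sklearn.model_selection import KFold, cross_val_predict

# Synthetic stand-in for SNP markers (columns) vs. a quantitative phenotype
X, y = make_regression(n_samples=400, n_features=300, n_informative=50,
                       noise=10.0, random_state=0)
cv = KFold(n_splits=5, shuffle=True, random_state=0)
base = [BayesianRidge(),                        # stand-in for a Bayesian-alphabet model
        RandomForestRegressor(random_state=0),
        GradientBoostingRegressor(random_state=0)]

# Out-of-fold predictions become meta-features: one column per base learner
Z = np.column_stack([cross_val_predict(m, X, y, cv=cv) for m in base])

# Penalized meta-learner copes with strongly correlated base predictions
meta = RidgeCV().fit(Z, y)
r = np.corrcoef(meta.predict(Z), y)[0, 1]       # in-sample for the meta-model;
print(f"predictive ability (corr) = {r:.3f}")   # a real study would hold out data
```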

15 pages, 1045 KB  
Systematic Review
AI at the Bedside of Psychiatry: Comparative Meta-Analysis of Imaging vs. Non-Imaging Models for Bipolar vs. Unipolar Depression
by Andrei Daescu, Ana-Maria Cristina Daescu, Alexandru-Ioan Gaitoane, Ștefan Maxim, Silviu Alexandru Pera and Liana Dehelean
J. Clin. Med. 2026, 15(2), 834; https://doi.org/10.3390/jcm15020834 - 20 Jan 2026
Abstract
Background: Differentiating bipolar disorder (BD) from unipolar major depressive disorder (MDD) at first episode is clinically consequential but challenging. Artificial intelligence/machine learning (AI/ML) may improve early diagnostic accuracy across imaging and non-imaging data sources. Methods: Following PRISMA 2020 and a pre-registered protocol on protocols.io, we searched PubMed, Scopus, Europe PMC, Semantic Scholar, OpenAlex, The Lens, medRxiv, ClinicalTrials.gov, and Web of Science (2014–8 October 2025). Eligible studies developed/evaluated supervised ML classifiers for BD vs. MDD at first episode and reported test-set discrimination. AUCs were meta-analyzed on the logit (GEN) scale using random effects (REML) with Hartung–Knapp adjustment and then back-transformed. Subgroup (imaging vs. non-imaging), leave-one-out (LOO), and quality sensitivity (excluding high risk of leakage) analyses were prespecified. Risk of bias used QUADAS-2 with PROBAST/AI considerations. Results: Of 158 records, 39 duplicates were removed and 119 records screened; 17 met qualitative criteria; and 6 had sufficient data for meta-analysis. The pooled random-effects AUC was 0.84 (95% CI 0.75–0.90), indicating above-chance discrimination, with substantial heterogeneity (I2 = 86.5%). Results were robust to LOO, exclusion of two high-risk-of-leakage studies (pooled AUC 0.83, 95% CI 0.72–0.90), and restriction to higher-rigor validation (AUC 0.83, 95% CI 0.69–0.92). Non-imaging models showed higher point estimates than imaging models; however, subgroup comparisons were exploratory due to the small number of studies: pooled AUC ≈ 0.90–0.92 with I2 = 0% vs. 0.79 with I2 = 64%; test for subgroup difference Q = 7.27, df = 1, p = 0.007. Owing to the small number of studies, funnel plot inspection and Egger/Begg tests could not reliably assess small-study effects/publication bias. Conclusions: AI/ML models provide good and robust discrimination of BD vs. MDD at first episode. Non-imaging approaches are promising due to higher point estimates in the available studies and practical scalability, but prospective evaluation is needed and conclusions about modality superiority remain tentative given the small number of non-imaging studies (k = 2). Full article
(This article belongs to the Special Issue How Clinicians See the Use of AI in Psychiatry)
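
Pooling AUCs on the logit scale keeps the back-transformed confidence interval inside (0, 1). A minimal sketch using DerSimonian–Laird weights in place of the paper's REML estimation with Hartung–Knapp adjustment, with invented per-study AUCs and logit-scale standard errors:

```python
import numpy as np

def pool_auc_logit(aucs, ses_logit):
    """Random-effects (DerSimonian-Laird) pooling of AUCs on the logit scale."""
    aucs, ses = np.asarray(aucs, float), np.asarray(ses_logit, float)
    theta = np.log(aucs / (1 - aucs))                 # logit(AUC)
    w = 1.0 / ses**2
    fixed = np.sum(w * theta) / np.sum(w)
    q = np.sum(w * (theta - fixed)**2)                # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(theta) - 1)) / c)       # between-study variance
    w_re = 1.0 / (ses**2 + tau2)
    pooled = np.sum(w_re * theta) / np.sum(w_re)
    se = np.sqrt(1.0 / np.sum(w_re))
    expit = lambda x: 1.0 / (1.0 + np.exp(-x))        # back-transform to AUC scale
    return expit(pooled), (expit(pooled - 1.96 * se), expit(pooled + 1.96 * se))

# Invented per-study test-set AUCs and logit-scale standard errors
auc, ci = pool_auc_logit([0.78, 0.92, 0.81, 0.90, 0.75, 0.88],
                         [0.30, 0.35, 0.25, 0.40, 0.28, 0.33])
print(f"pooled AUC = {auc:.2f}, 95% CI ({ci[0]:.2f}, {ci[1]:.2f})")
```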

19 pages, 13379 KB  
Perspective
The Affordances of AI-Powered, Deepfake, Avatar Creator Systems in Archaeological Facial Depiction and the Related Changes in the Cultural Heritage Sector
by Caroline M. Wilkinson, Mark Roughley, Ching Yiu Jessica Liu, Sarah Shrimpton, Cydney Davidson and Thomas Dickinson
Appl. Sci. 2026, 16(2), 1023; https://doi.org/10.3390/app16021023 - 20 Jan 2026
Abstract
Technological advances have influenced and changed cultural heritage in the galleries, libraries, archives, and museums (GLAM) sector by facilitating new forms of experimentation and knowledge exchange. In this context, this paper explores the evolving practice of archaeological facial depiction using AI-powered deepfake avatar creator software programs, such as Epic Games’ MetaHuman Creator (MHC), which offer new affordances in terms of agility, realism, and engagement, and build upon traditional workflows involving the physical sculpting or digital modelling of faces from the past. Through a case-based approach, we illustrate these affordances via real-world applications, including four-dimensional portraits, multi-platform presentations, Augmented Reality (AR), and enhanced audience interaction. We consider the limitations and challenges of these digital avatar systems, such as misrepresentation or cultural insensitivity, and we position this advanced technology within the broader context of digital heritage, considering both the technical possibilities and ethical concerns around synthetic representations of individuals from the past. Finally, we propose that the use of MHC is not a replacement for current practice, but rather an augmentation, expanding the potential for storytelling and public learning outcomes in the GLAM sector, as a result of increased efficiency and new forms of public engagement. Full article
(This article belongs to the Special Issue Application of Digital Technology in Cultural Heritage)