Search Results (6,319)

Search Parameters:
Keywords = reflective learning

16 pages, 1695 KB  
Article
DU-Net: A Dual-Path Architecture for High-Contrast Velocity Anomaly Detection in Seismic Inversion
by Maksim Nikishin, Alexey Vasyukov and Nikolay Khokhlov
Minerals 2026, 16(5), 530; https://doi.org/10.3390/min16050530 - 15 May 2026
Abstract
Full-waveform inversion (FWI) is a powerful interpretation method in geophysics for inferring high-resolution subsurface models by minimizing the difference between observed and simulated seismic data. In mineral exploration, FWI has shown particular promise for delineating complex ore bodies in hard-rock environments where conventional reflection seismic methods often fail. However, traditional FWI remains computationally expensive due to the iterative solution of forward and adjoint problems. The integration of deep learning, particularly the U-Net architecture, has recently emerged as a promising approach to address these computational challenges. Originally developed for biomedical image segmentation, U-Net employs a symmetric encoder–decoder structure with skip connections, enabling precise localization and efficient feature extraction from complex data. This paper proposes a modified dual-path architecture, termed DU-Net, specifically designed for the simultaneous detection and extraction of high-contrast velocity anomalies (representing potential ore bodies) and reconstruction of the background velocity model. The key innovation lies in parallel processing branches—one dedicated to anomaly segmentation and another to background reconstruction—combined with a specialized composite loss function, SeismoLoss, that independently supervises each component. This design allows the network to focus on the distinctive features of the anomaly while filtering out background complexity that typically degrades prediction quality in single-path approaches. We provide a detailed description of the DU-Net architecture and evaluate its performance on two synthetic datasets representing different styles of mineralization and host-rock complexity. Experimental results demonstrate that DU-Net achieves superior accuracy in localizing anomalous bodies and reconstructing background geology compared to the standard U-Net baseline, with a substantial reduction in boundary blurring artifacts.
(This article belongs to the Section Mineral Exploration Methods and Applications)
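The dual-supervision idea behind a composite loss like the one described can be sketched in a few lines. This is a minimal NumPy illustration, not the paper's actual SeismoLoss: the terms (binary cross-entropy on the anomaly mask, MSE on the background model) and the weights `w_seg`/`w_bg` are assumptions chosen for clarity.

```python
import numpy as np

def seismo_style_loss(pred_mask, true_mask, pred_bg, true_bg,
                      w_seg=1.0, w_bg=1.0, eps=1e-7):
    """One plausible shape for a dual-path composite loss: binary
    cross-entropy supervises the anomaly-segmentation branch, while
    mean-squared error supervises the background-reconstruction branch.
    Terms and weights are illustrative, not the paper's SeismoLoss."""
    p = np.clip(pred_mask, eps, 1.0 - eps)
    bce = -np.mean(true_mask * np.log(p) + (1 - true_mask) * np.log(1 - p))
    mse = np.mean((pred_bg - true_bg) ** 2)
    return w_seg * bce + w_bg * mse
```

Supervising the two branches with separate terms is what lets each path specialize; tuning the relative weights trades anomaly sharpness against background fidelity.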
23 pages, 1710 KB  
Review
Co-Creation of Immersive Learning for Cultural Heritage Education: A Scoping Review
by Jiajia Zhang and Fanke Peng
Heritage 2026, 9(5), 192; https://doi.org/10.3390/heritage9050192 - 15 May 2026
Abstract
Immersive technologies—such as virtual reality (VR), augmented reality (AR), mixed reality (MR), and extended reality (XR)—are increasingly adopted in cultural heritage settings to support education, public engagement, and digital preservation. This scoping review systematically maps existing research on immersive learning within cultural heritage contexts, identifying major trends, pedagogical approaches, and reported outcomes. Following the PRISMA-ScR framework, nineteen studies were selected from 235 publications published between 2016 and 2025 across four databases: ACM Digital Library, Web of Science, ProQuest, and Scopus. Findings reveal a predominant focus on enhancing learner motivation, engagement, and the perceived authenticity of immersive experiences. However, empirical validation of learning outcomes—particularly regarding sustained knowledge retention, critical reflection, and inclusive participation—remains scarce. Persistent gaps are also evident in accessibility and scalability, alongside ethical concerns related to cultural sensitivity, power asymmetries, and the representation of diverse heritage voices. By foregrounding participatory and co-creation approaches, this review highlights how collaborative design processes can enhance learner engagement and support the sustainable digital preservation of cultural heritage.
(This article belongs to the Section Cultural Heritage)
10 pages, 462 KB  
Article
Dental Students’ Perceptions of a Self-Directed Simulation-Based Learning Methodology (MAES©): A Pilot Study
by Sonia Guzmán, Alfonso García, María Ángeles Velló-Ribes and Olga Cortés
Dent. J. 2026, 14(5), 305; https://doi.org/10.3390/dj14050305 - 15 May 2026
Abstract
Background/Objectives: Simulation-based education is increasingly used in health sciences to promote active learning and the development of clinical and non-technical skills. However, its implementation in undergraduate dental education remains limited. This study aimed to explore dental students’ perceptions of the Self-Learning Methodology in Simulated Environments (MAES©) applied to high-fidelity simulation. Methods: A mixed-methods, cross-sectional pilot study was conducted with 80 fourth-year dental students enrolled in a Pediatric Dentistry course at a Spanish university. Quantitative data were collected using a validated satisfaction questionnaire (Cronbach’s alpha = 0.905), and descriptive statistics were performed. Qualitative data were obtained through open-ended questions and analyzed using inductive content analysis. Results: Students reported high levels of satisfaction, motivation, and perceived learning, with mean scores above 8.5 out of 10 across all evaluated dimensions. The facilitator’s role received the highest ratings. Qualitative analysis identified four main themes: perceived advantages of the methodology, increased engagement and participation, the value of structured debriefing, and areas for improvement related to group dynamics and performance-related stress. Conclusions: The MAES© methodology was well received and perceived as a feasible approach in dental simulation-based education. It may support student-centered learning, collaboration, and reflective practice, providing practical guidance for educators interested in implementing active learning strategies. As an exploratory pilot study conducted in a single institution, these findings should be interpreted cautiously and warrant further research.
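The questionnaire's reliability is reported as Cronbach's alpha = 0.905. For readers unfamiliar with the statistic, the standard formula can be computed directly from an item-response matrix; the sketch below uses hypothetical data, not the study's responses.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents, k_items) response matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)     # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1) # variance of each respondent's total
    return k / (k - 1) * (1 - item_vars.sum() / total_var)
```

Values near 1 (as here, 0.905) indicate that items covary strongly, i.e. the scale is internally consistent.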
14 pages, 256 KB  
Article
Development of Undergraduate Nursing Students’ Clinical Performance Self-Efficacy Beliefs: A Cross-Sectional Study
by Beth Pierce, Jeanne Allen and Thea van de Mortel
Educ. Sci. 2026, 16(5), 784; https://doi.org/10.3390/educsci16050784 - 15 May 2026
Abstract
Self-efficacy is a person’s belief in their ability to perform a task effectively despite difficulties and predicts future willingness to undertake similar tasks. The study’s aim was to determine the extent to which undergraduate nursing students develop their clinical performance self-efficacy beliefs throughout their degree. Using a cross-sectional survey design, Year 1, 2 and 3 students from a three-year undergraduate nursing program completed a clinical performance self-efficacy scale, comprising the domains of assessment, planning, implementation and evaluation. Welch’s one-way ANOVA and Games–Howell post hoc analyses compared self-efficacy scores across year levels. Self-efficacy predictors were identified with multiple linear regression. Descriptive statistics determined students’ confidence with clinical activities. Participants’ self-efficacy scores increased significantly from Year 1 to 2 and Year 2 to 3. Year level of study was the only unique positive predictor of scores. Over the years, participants were most confident implementing care and least confident planning and evaluating care. Given that clinical placement frequency was not a unique significant predictor of self-efficacy, but rather weakly correlated, future studies should examine whether other learning activities such as high-fidelity simulation may play a greater role in its development. The lower confidence with planning and evaluation underscores the need for curricula that scaffold higher-order skills like critical thinking and reflection.
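Welch's one-way ANOVA (used above because it does not assume equal group variances) has no direct SciPy one-liner, but the standard Welch (1951) formulation is short to implement; SciPy is used here only for the F-distribution p-value. The group data in the test are hypothetical, not the study's scores.

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's one-way ANOVA for groups with unequal variances.
    Returns (F, df1, df2, p) per the standard Welch (1951) formulation."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                        # precision weights
    W = w.sum()
    mw = (w * m).sum() / W           # variance-weighted grand mean
    A = (w * (m - mw) ** 2).sum() / (k - 1)
    tmp = ((1 - w / W) ** 2 / (n - 1)).sum()
    B = 1 + 2 * (k - 2) / (k ** 2 - 1) * tmp
    df1, df2 = k - 1, (k ** 2 - 1) / (3 * tmp)
    F = A / B
    return F, df1, df2, stats.f.sf(F, df1, df2)
```

Games–Howell post hoc comparisons extend the same unequal-variance logic to pairwise contrasts.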
30 pages, 5573 KB  
Article
Physics-Inspired Frequency-Decoupled Network for Remote Sensing Image Dehazing
by Hao Yang, Xiaohan Chen and Gang Xu
Sensors 2026, 26(10), 3124; https://doi.org/10.3390/s26103124 - 15 May 2026
Abstract
Remote sensing (RS) imagery often suffers from non-uniform atmospheric scattering, resulting in severe contrast degradation, detail blurring, and spectral distortion. While recent advanced State Space Models (SSMs) offer efficient long-range modeling, they frequently struggle with spectral–spatial coupling interference and lack explicit physical constraints, leading to over-smoothed textures and color biases in high-reflectance regions. In this paper, we propose PhysWave-SSN, a Physics-Inspired Frequency-Decoupled Network specifically designed for high-fidelity RS image dehazing. The architecture employs a task-adaptive frequency-specific screening strategy to effectively isolate structural details from atmospheric interference. Specifically, we first introduce a Frequency-Aware Selection Gate (FASG) that unifies adaptive channel screening with physical transmission estimation, enabling precise recalibration of frequency components. To bridge the gap between physical scattering principles and state space representation learning, we develop a Physics-Informed SSM (PI-SSM), where the discretization step size of Mamba is dynamically modulated by the estimated haze density. This mechanism allows the model to adaptively adjust its spatial receptive field according to local degradation levels, enhancing physical interpretability. Furthermore, a Luminance-Adaptive Fusion Module (LAFM) is presented to protect high-reflectance land covers and maintain spectral consistency. Extensive experiments on multiple RS datasets demonstrate that PhysWave-SSN achieves superior performance, notably attaining a maximum PSNR gain of 2.49 dB while ensuring high structural and spectral fidelity.
(This article belongs to the Special Issue Remote Sensing Technology for Agricultural and Land Management)
20 pages, 1998 KB  
Systematic Review
Machine Learning and Deep Learning for Wildfire Prediction: A Systematic and Bibliometric Review of Methods, Data Practices, and Reproducibility (2020–2025)
by Kevin Manuel Galván Lara, Yosune Miquelajauregui, Luis Fernando Enriquez Ocaña, Alf Enrique Meling-López, Christoph Neger, John Abatzoglou, Leopoldo Galicia, César Hinojo, Graciela Jiménez-Guzmán and Edelmira Rodríguez Alcantar
Fire 2026, 9(5), 204; https://doi.org/10.3390/fire9050204 - 15 May 2026
Abstract
Wildfire prediction using machine learning (ML) and deep learning (DL) has expanded rapidly, yet synthesis regarding algorithmic configurations, data practices, and transparency remains limited. This systematic review characterizes ML/DL applications in wildfire prediction (2020–2025) using a PRISMA-EcoEvo framework across 341 peer-reviewed studies, with detailed analysis of 110 articles from 2024. Publication output increased steadily, concentrated geographically in China and the United States. Methodologically, ensemble tree-based methods (26.7%) and deep learning architectures (59.4%) coexist, reflecting adaptation to diverse data modalities. Input data are dominated by vegetation/fuel characteristics (44.7%) and historical fire labels (41.2%), while socioeconomic variables remain marginal (1.2%). Evaluation practices distinguish classification and regression tasks, yet metric heterogeneity constrains cross-study comparability. Critically, only 7.7% of studies provided publicly accessible code, with a significant association between algorithm family and code availability (χ² = 78, p = 0.0012). Collectively, wildfire ML/DL research demonstrates technical advancement but remains geographically concentrated and constrained by limited transparency. Strengthening reporting standards, metric-task alignment, dataset documentation, and open-code practices is essential to translate computational innovation into globally robust, reproducible wildfire decision-support systems.
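The association reported above (χ² = 78, p = 0.0012, algorithm family vs. code availability) is a chi-square test of independence on a contingency table. Its mechanics can be sketched with `scipy.stats.chi2_contingency`; the counts below are hypothetical, since the review's actual table is not reproduced here.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical counts: rows = algorithm family (ensemble trees, deep
# learning, other); columns = (code available, code not available).
table = np.array([[8, 83],
                  [15, 188],
                  [3, 44]])
chi2, p, dof, expected = chi2_contingency(table)
```

A small p relative to the chosen alpha would indicate that code-sharing rates differ across algorithm families rather than varying by chance.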
17 pages, 4761 KB  
Article
Predicting Urban PM2.5 Dynamics with XGBoost: Insights from a Dense Mobile Monitoring Network in Malaysia
by Noraishah Mohammad Sham, Siti Hazimah Ayu Ismain and Siti Syakirin Sazali
Atmosphere 2026, 17(5), 501; https://doi.org/10.3390/atmos17050501 - 14 May 2026
Abstract
This study applies and evaluates established machine learning (ML) models for predicting monthly PM2.5 concentrations across the Greater Klang Valley (GKV), Malaysia, using one year of data collected from 36 mobile monitoring stations between July 2022 and June 2023. Daily PM2.5, temperature (T), relative humidity (RH), and station location (L) were aggregated to form monthly datasets. Exploratory analysis showed substantial temporal variability, with elevated PM2.5 levels during the southwest monsoon and reduced concentrations during the northeast monsoon due to enhanced rainfall washout. Three tree-based ML algorithms, decision tree (DT), random forest (RF), and Extreme Gradient Boosting (XGBoost), were developed following data cleaning, transformation, partitioning, and hyperparameter optimization via grid search. Model performance was evaluated using R², RMSE, MAE, and NAE. Across all months, XGBoost consistently outperformed DT and RF, achieving the highest R² values (0.214–0.559) and generally lower error metrics. Model performance varied seasonally, with the highest accuracy observed in March 2023 (R² = 0.559) and February 2023 (R² = 0.552), whereas November 2022 showed the weakest predictive capability. Feature-importance analysis revealed that temperature exerted the strongest influence during the southwest monsoon, while station location dominated predictions in several months, reflecting spatial heterogeneity likely associated with land-use and emission patterns. RH was most influential in September 2022, when low humidity coincided with higher PM2.5 levels. Comparison of predicted and observed values showed strong alignment except during extreme pollution events, where the model tended to underperform. Overall, the findings demonstrate that XGBoost provides a robust modeling framework for monthly PM2.5 prediction in the GKV and highlight the importance of incorporating meteorological and spatial drivers to improve localized air-quality assessments.
(This article belongs to the Special Issue Advances in Air Quality Monitoring and Source Apportionment)
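The described pipeline (partitioning, grid-search tuning, R²/RMSE evaluation) can be sketched end to end. This is a stand-in, not the study's model: scikit-learn's `GradientBoostingRegressor` replaces XGBoost (which may not be installed), the T/RH/L relationship is invented, and the data are synthetic rather than GKV measurements.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import GridSearchCV, train_test_split

rng = np.random.default_rng(0)
n = 400
T = rng.uniform(24, 34, n)        # temperature (synthetic)
RH = rng.uniform(55, 95, n)       # relative humidity (synthetic)
L = rng.integers(0, 36, n)        # station index, 36 mobile stations
# Invented relationship standing in for the real measurements:
pm25 = 20 + 1.5 * (T - 24) - 0.1 * (RH - 55) + 0.2 * L + rng.normal(0, 3, n)

X = np.column_stack([T, RH, L])
X_tr, X_te, y_tr, y_te = train_test_split(X, pm25, random_state=0)

# Hyperparameter optimization via grid search, as in the abstract
grid = GridSearchCV(
    GradientBoostingRegressor(random_state=0),
    {"n_estimators": [100, 200], "max_depth": [2, 3]},
    cv=3, scoring="r2",
)
grid.fit(X_tr, y_tr)
r2 = r2_score(y_te, grid.predict(X_te))
rmse = mean_squared_error(y_te, grid.predict(X_te)) ** 0.5
```

Swapping in `xgboost.XGBRegressor` would follow the same `GridSearchCV` pattern, since it exposes the scikit-learn estimator interface.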
28 pages, 125254 KB  
Article
Bridging Image-Based Detection and Field Evaluation: A Semi-Automated Pavement Distress Assessment Framework
by Betül Değer Şitilbay and Mehmet Ozan Yılmaz
Sustainability 2026, 18(10), 4935; https://doi.org/10.3390/su18104935 - 14 May 2026
Abstract
Accurate, rapid, and consistent evaluation of pavement condition across large-scale road networks is critical for sustainable maintenance and rehabilitation planning. However, conventional approaches largely rely on manual visual inspections, which are time-consuming, subjective, and difficult to implement at the network level. In this study, a semi-automated pavement distress evaluation framework that integrates field-based assessment with computer vision techniques is proposed. The study was conducted on a 3 km roadway network located within the Yıldız Technical University Davutpaşa Campus. Field-based distress observations were used as reference data, while street-level images obtained from the Mapillary platform were analyzed using a deep learning-based YOLOv8 model trained on the RDD2022 dataset, which was specifically developed for road distress detection. The analysis focuses on crack and pothole distress, which have a dominant influence on the pavement condition rating (PCR) and are highly distinguishable in image-based approaches. Correlation analyses between automated detection results and field-based data demonstrate a strong agreement, reaching values of approximately ρ ≈ 0.90 on some routes. These findings indicate that these distress types are effective in representing variations in pavement condition. The results demonstrate that multi-source image data and deep learning-based detection methods can be reliably used for section-level pavement condition assessment. The proposed approach addresses a key gap in the literature by transforming image-level detections into engineering-based decision-support information. Furthermore, by leveraging publicly available data sources, the framework offers a low-cost and scalable solution that enables rapid preliminary assessment over large road networks, thereby providing significant potential for sustainable infrastructure management and the development of data-driven maintenance strategies. Several practical challenges encountered during the detection process—including sensitivity to contrast enhancement parameters, false positives from shadows and surface reflections, heterogeneous image resolution across crowdsourced imagery, and training distribution gaps for locally prevalent infrastructure features—are discussed, and directions for reducing human intervention through adaptive preprocessing and targeted model refinement are identified.
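The agreement statistic quoted above is Spearman's rank correlation (ρ), which tests monotonic rather than linear association. A minimal sketch with `scipy.stats.spearmanr` follows; the per-section values are hypothetical, not the study's data.

```python
from scipy.stats import spearmanr

# Hypothetical per-section values: automated crack/pothole counts from an
# image pipeline vs. field-based distress scores for the same sections.
auto_counts = [2, 5, 9, 14, 3, 7, 11, 1]
field_scores = [1.0, 2.2, 3.9, 5.5, 1.4, 3.0, 5.8, 0.6]
rho, p = spearmanr(auto_counts, field_scores)
```

Spearman's ρ is a natural choice here because detection counts and field severity scores sit on different scales; only their rank ordering needs to agree.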
21 pages, 1011 KB  
Review
Artificial Intelligence in the Assessment of Heart Rate Variability as an Instrument to Understand the Connection Between Psychologic and Psychiatric Conditions and the Heart
by Simon W. Rabkin
Bioengineering 2026, 13(5), 554; https://doi.org/10.3390/bioengineering13050554 - 14 May 2026
Abstract
Heart rate variability (HRV) refers to variations in the time intervals between consecutive heart beats. Changes in HRV reflect either increased sympathetic or decreased parasympathetic tone that can originate in the brain. This brain–heart connection has led to the proposal that HRV may have utility in the diagnosis of psychiatric conditions and/or be a predictor of the response to psychiatric medications. There have been attempts to improve the correlation between HRV and psychological and psychiatric conditions by using artificial intelligence or specific machine learning algorithms. The objective of this review is to synthesize data on the use of machine learning to improve accuracy in differentiating psychological conditions such as mental stress, as well as distinguishing persons with anxiety disorders, panic disorders, major depressive disorder and schizophrenia from healthy subjects. Reported accuracies for the identification of mental stress vary from 42 to 94%, while accuracies for anxiety vary from 67 to 98%, panic disorders from 71 to 93% and depression from 71 to 95%. The ability of HRV to differentiate psychological or psychiatric conditions from one another requires more investigation. The ‘best’ machine learning algorithm varied between studies, with some reporting the k-nearest neighbor algorithm, support vector machine, random forest, or neural networks to be the best. A number of studies combined HRV with other variables such as respiration, EEG, or electromyography to obtain a composite index, but in doing so obscured the independent contribution of HRV. In summary, HRV has shown promise in detecting abnormalities in a range of psychological and psychiatric conditions. The use of machine learning algorithms improves diagnostic accuracy. Full article
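HRV features that feed such classifiers usually start from standard time-domain statistics over the RR-interval series. The sketch below computes two of the most common, SDNN and RMSSD; these are generic definitions, and individual studies in the review may use other feature sets.

```python
import numpy as np

def hrv_time_domain(rr_ms):
    """Standard time-domain HRV metrics from RR intervals (milliseconds):
    SDNN  = sample standard deviation of the intervals,
    RMSSD = root mean square of successive interval differences."""
    rr = np.asarray(rr_ms, dtype=float)
    sdnn = rr.std(ddof=1)
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))
    return sdnn, rmssd
```

Roughly, SDNN captures overall variability while RMSSD emphasizes beat-to-beat (vagally mediated) changes, which is why the two are often reported together.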
19 pages, 1186 KB  
Review
Applications of Artificial Intelligence in Endobronchial Ultrasound for Lung Cancer Diagnosis and Staging: A Scoping Review
by Jacobo Echeverri-Hoyos, Jaime A. Echeverri-Franco, Nicole Bonilla, Gustavo Monsalve-Morales and Eduardo Tuta-Quintero
Curr. Oncol. 2026, 33(5), 287; https://doi.org/10.3390/curroncol33050287 - 13 May 2026
Abstract
Introduction: Lung cancer remains highly lethal. Endobronchial ultrasound (EBUS) enables minimally invasive diagnosis and staging. Artificial intelligence (AI) improves image analysis and diagnostic accuracy, though current evidence is limited by retrospective, small, single-center studies. Methods: A scoping review following the Arksey–O’Malley, Levac, and JBI frameworks was reported per PRISMA-ScR. Databases were searched for studies (2015–2026) on AI in EBUS. Two reviewers screened, extracted standardized data, and performed narrative synthesis grouped by algorithm type, application, and performance metrics. Results: A total of 26 studies were included. Of these, 73.1% (19/26) employed deep learning-based models, while 26.9% (7/26) used traditional or hybrid machine learning approaches. The most frequent clinical objective was diagnostic classification of malignancy (14/26; 53.8%), followed by segmentation or cytological analysis (5/26; 19.2%), anatomical navigation or lymph node station classification (3/26; 11.5%), and multimodal predictive or staging support models (4/26; 15.4%). Most studies were based on EBUS-derived images or videos (18/26; 69.2%), including both convex-probe and radial-probe applications. Studies were distributed among Convex Probe-EBUS for mediastinal staging, Radial Probe-EBUS for peripheral lesion assessment, and rapid on-site evaluation-based cytology analysis, reflecting diverse clinical contexts. Most models were developed using static images. Conclusions: AI applications in EBUS are predominantly based on deep learning and mainly focused on diagnostic classification, with growing but still limited exploration of segmentation, navigation, and multimodal approaches. The evidence reflects diverse clinical contexts and data sources, particularly image-based inputs, but remains unevenly distributed across applications.
17 pages, 494 KB  
Article
Discordance Between Electronic Health Records and Self-Reported Data: Evidence from Traumatic Brain Injury and Colorectal Cancer
by Zahra Mojtahedi, Alireza Bolourian, Taylor S. Lane and Monica R. Lininger
Healthcare 2026, 14(10), 1337; https://doi.org/10.3390/healthcare14101337 - 13 May 2026
Abstract
Background/Objectives: Discordance between electronic health records (EHR) and self-reported survey data may reflect incomplete clinical documentation on the provider side, as well as sociodemographic differences among survey participants. Cancer conditions are frequently reported with the least discordance. Traumatic brain injury (TBI) may be particularly prone to discordance. The aim of this study was to nationally investigate discordance between EHR and self-reported data for TBI and colorectal cancer. Methods: This cross-sectional study used data from the national All of Us Research Program, including participants with both linked EHR and self-reported survey data. Participants in each condition were stratified into four groups: EHR+/Survey+ (concordant positive), EHR−/Survey− (concordant negative), EHR+/Survey− (discordant), and EHR−/Survey+ (discordant). EHR-documented and survey-reported conditions were compared using a 2 × 2 classification framework to assess concordance. Agreement metrics, including sensitivity, specificity, predictive values, overall concordance/discordance, directional discordance, and Cohen’s kappa, were calculated. Logistic regression models were used to examine the association between the outcomes and sociodemographic factors. Machine learning models additionally investigated the predictive performance of these factors. Results: For TBI, concordance between EHR and survey was fair (κ = 0.33), with sensitivity of 60.9% and specificity of 92.9%. In regression models, increasing age was associated with higher odds of both discordant groups (EHR+/Survey− and EHR−/Survey+); lower educational levels and non-White participants had higher odds of discordance specifically in the EHR+/Survey− group. Medicaid insurance had higher odds in the EHR−/Survey+ group. In contrast, colorectal cancer showed stronger concordance (κ = 0.66; sensitivity 74.5%; specificity 98.6%) and fewer sociodemographic associations in regression models. The association between race and Medicaid coverage showed a similar pattern to TBI. Machine learning results were also consistent with logistic regression models. Conclusions: Concordance between EHR and self-reported data was fair for TBI. Older age, lower education, non-White race, and Medicaid insurance were associated with greater discordance. These sociodemographic patterns were less pronounced in colorectal cancer, except for race and Medicaid insurance. Policies are needed to improve concordance between EHR and self-reported data, particularly across certain sociodemographic groups.
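The 2 × 2 agreement metrics above (sensitivity, specificity, Cohen's kappa) follow directly from the four cell counts. A minimal sketch, using hypothetical counts rather than the study's data, and treating the EHR as the reference standard as the abstract's framing suggests:

```python
def concordance_metrics(a, b, c, d):
    """2x2 agreement between EHR and survey:
    a = EHR+/Survey+, b = EHR+/Survey-, c = EHR-/Survey+, d = EHR-/Survey-.
    Sensitivity/specificity of self-report are computed against the EHR."""
    n = a + b + c + d
    sensitivity = a / (a + b)   # survey-positive among EHR-positive
    specificity = d / (c + d)   # survey-negative among EHR-negative
    po = (a + d) / n            # observed agreement
    # Chance agreement from the marginal totals:
    pe = ((a + b) * (a + c) + (c + d) * (b + d)) / n ** 2
    kappa = (po - pe) / (1 - pe)
    return sensitivity, specificity, kappa
```

Kappa discounts the agreement expected by chance, which is why a table can show high raw concordance yet only "fair" kappa, as with TBI here.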
11 pages, 440 KB  
Article
Territorial Performance by Disciplinary Themes Assessed in Chilean Physical Education Teacher Education
by Francisco Gallardo-Fuentes, Bastian Carter-Thuillier, Jorge Gallardo-Fuentes, Johan Rivas-Valenzuela and Sebastián Peña-Troncoso
Trends High. Educ. 2026, 5(2), 40; https://doi.org/10.3390/higheredu5020040 - 13 May 2026
Abstract
Territorial inequalities in higher education systems remain a persistent challenge in highly centralized countries. In Chile, the concentration of academic resources and institutional capacities in the Metropolitan Region has historically shaped disparities in educational opportunities and outcomes. In this context, the National Diagnostic Assessment (END) serves as a standardized instrument designed to evaluate the achievement of professional standards in initial teacher education programs. This study aimed to identify and characterize the territorial patterns of achievement in disciplinary domains of the END assessment, examining whether significant differences between macrozones reflect structural inequalities in educational resources and institutional capacities. A quantitative approach was adopted using secondary data from the national open database of the Ministry of Education. Statistical analyses were conducted in R, applying Mann–Whitney U tests for independent comparisons between macrozones and Wilcoxon tests for paired comparisons between disciplinary topics. The results reveal a consistent territorial pattern in which the Metropolitan Region and the Central–North macrozone present the highest performance levels, while the Northern and Southern macrozones show comparatively lower averages. These findings suggest that territorial conditions and institutional resources may influence learning outcomes even within nationally standardized evaluation frameworks. Full article
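The two nonparametric tests named above pair naturally: Mann–Whitney U for independent groups (different macrozones), Wilcoxon signed-rank for paired measurements (two topics for the same programs). The study ran these in R; the sketch below uses SciPy equivalents with hypothetical scores, since the Ministry of Education data are not reproduced here.

```python
from scipy.stats import mannwhitneyu, wilcoxon

# Hypothetical END-style scores for two macrozones (independent samples):
metropolitan = [62, 68, 71, 65, 70, 66, 69]
southern = [55, 58, 61, 54, 60, 57, 59]
u_stat, u_p = mannwhitneyu(metropolitan, southern, alternative="two-sided")

# Hypothetical paired comparison: two disciplinary topics, same programs.
topic_a = [60, 64, 59, 67, 62, 66]
topic_b = [55, 60, 57, 61, 61, 63]
w_stat, w_p = wilcoxon(topic_a, topic_b)
```

Both tests compare ranks rather than raw means, which suits assessment scores whose distributions may be skewed or ordinal.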
46 pages, 2849 KB  
Systematic Review
Artificial Intelligence Approaches for Energy Consumption and Generation Forecasting, Anomaly Detection, and Public Decision-Making: A Systematic Review
by David Velasco Ayuso, Jesús Ángel Román Gallego and Carolina Zato Domínguez
Energies 2026, 19(10), 2347; https://doi.org/10.3390/en19102347 - 13 May 2026
Abstract
The large-scale integration of variable renewable energy sources introduces critical challenges of intermittency and uncertainty, yet consumption forecasting, generation forecasting, and anomaly detection are typically addressed in isolation, neglecting the bidirectional feedback between consumption patterns, generation mix, and public decision-making. This PRISMA 2020-compliant systematic review compared statistical, machine learning, and deep learning models for energy forecasting and machine learning and deep learning models for anomaly detection. Searches in Google Scholar and Scopus used seven targeted strings, restricted to peer-reviewed empirical studies (2022–2026; 2023–2026 for anomaly detection), indexed in Q1–Q3 JCR journals, excluding theoretical and non-benchmarked works. A six-item risk-of-bias questionnaire, with a threshold of four points, guided inclusion, yielding 60 articles. Addressing the first research question (RQ1) on comparative model performance, hybrid deep learning architectures optimized with bio-inspired metaheuristics achieved the highest forecasting accuracy (R² up to 0.9984), with metaheuristic optimization acting as a cost-reducing factor; statistical models remained competitive for long-horizon forecasting, while large-language-model-based approaches addressed data scarcity through few-shot learning. Addressing the second research question (RQ2) on smart grid optimization, predictive techniques reduce forecasting errors, enabling real-time load adjustment and Demand Response, though a systematic asymmetry constrains their potential: consumption studies integrate socio-economic variables, whereas generation studies rely on meteorological inputs. Addressing the third research question (RQ3) on infrastructure security, supervised and unsupervised approaches detect anomalous operational states and support fault diagnosis, yet remain constrained by scarce labeled fault data and limited cross-regional validation; generative models such as GANs and diffusion models partially address this limitation by enabling Sim2Real strategies and realistic digital twin construction. Evidence is strongest for hybrid forecasting; certainty is lower for anomaly detection given reliance on experimental surrogates. No single paradigm achieves universal superiority. The primary finding is the consistent absence of integrated frameworks jointly modeling consumption, generation, anomaly detection, and public decision-making across the reviewed literature. This result reflects a structural limitation of the current state of the art, rather than a forward-looking research agenda. This study was funded by the ENIA International Chair on Trustworthy Artificial Intelligence European Recovery Plan; the protocol was not pre-registered.
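The review's headline forecasting accuracy is reported as a coefficient of determination (R² up to 0.9984). As a minimal sketch of how that metric is computed for a forecast, the code below evaluates R² on synthetic load values; the observed and predicted series are invented for demonstration only.

```python
# Illustrative sketch: the coefficient of determination (R^2) used to compare
# forecasting models in the review. All values below are synthetic.
def r_squared(observed, predicted):
    """R^2 = 1 - SS_res / SS_tot: the fraction of variance in the observed
    series explained by the forecast."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

# Synthetic hourly load (MW) and a hypothetical model's forecast
observed = [420.0, 455.0, 490.0, 510.0, 480.0, 440.0]
predicted = [418.0, 457.0, 488.0, 512.0, 479.0, 442.0]

score = r_squared(observed, predicted)
```

With residuals of only 1–2 MW against swings of roughly 90 MW, this toy forecast scores about 0.996, on the order of the best hybrid architectures the review reports.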
27 pages, 2068 KB  
Review
A Risk-Tiered Validation Framework for Artificial Intelligence in Drug Discovery: From Reproducibility to Clinical Translation
by Sarfaraz K. Niazi
Int. J. Mol. Sci. 2026, 27(10), 4349; https://doi.org/10.3390/ijms27104349 - 13 May 2026
Abstract
Artificial intelligence has advanced from merely predicting static protein structures to modeling equilibrium conformational ensembles. It now concurrently forecasts structure and binding affinity and actively participates in candidate selection during the initial stages of drug discovery. Foundation models introduced between 2024 and 2026, including BioEmu, AlphaFlow, DiG, Boltz-2, Chai-1, NeuralPLexer, and explicit-solvent prediction systems such as SuperWater, have begun to address issues previously identified as fundamental concerns in earlier critiques of AI in drug discovery. Nevertheless, many of these models are presently accessible only as preprints and require validation through independent peer review. Evidence indicates a shift in the primary bottleneck from representation challenges to validation difficulties. However, this transition remains incomplete and heavily dependent on context. The risks associated with AI-enabled drug discovery are increasingly not solely about the models’ capacity to accurately represent ensembles, but also about whether the evidentiary standards used to validate AI-derived predictions keep pace with the rapidity with which these predictions are generated and employed. This article introduces a four-tier validation framework designed to align the extent of computational and experimental evidence with the translational and regulatory risks associated with various artificial intelligence (AI) applications within the molecular sciences. These applications include machine learning (ML) models that analyze sequences, structures, conformational ensembles, protein–ligand complexes, and molecular dynamics trajectories. Tier 1 addresses the internal reproducibility of ML inference when applied to molecular inputs; Tier 2 pertains to the robustness of molecular-science benchmarks such as CASP, CASF-2016, PoseBusters, and OpenFE; Tier 3 involves prospective experimental validation against biophysical and biochemical measurements; and Tier 4 encompasses clinical and translational calibration within physiologically based pharmacokinetic (PBPK) and quantitative systems pharmacology (QSP) frameworks. This validation hierarchy functions as an explicit conceptual guide: a framework rather than a regulatory requirement. It is firmly grounded in established principles derived from ICH Q8/Q9/Q10, the FDA model-informed drug development (MIDD) approach, the EMA reflection paper on AI in the medicinal product lifecycle, and the EU AI Act. The manuscript further incorporates recent evidence from ensemble-aware AI, prospective docking, free-energy campaigns, and clinical-stage AI-derived candidates. It concludes with specific recommendations pertaining to lifecycle governance, uncertainty reporting, and the adoption of harmonized evidentiary templates for AI/ML applications in the molecular sciences.
21 pages, 8121 KB  
Article
Research on Real-Time Drowning Detection in Open Water Using Unmanned Aerial Vehicles and Artificial Intelligence Image Recognition
by Shun-Yuan Cheng, Meng-Dar Shieh, Shuo-Yen Chen, Jin-Hua Chen, Ming-Chen Chen and An-Che Lee
Drones 2026, 10(5), 374; https://doi.org/10.3390/drones10050374 - 13 May 2026
Abstract
Accurate detection of drowning victims in open water remains a major challenge for search-and-rescue (SAR) operations due to low illumination, reflections, occlusions, and complex backgrounds that degrade human visual performance. This study proposes a multi-modal AI-assisted UAV system for real-time drowning detection using a multi-rotor platform (<15 kg) equipped with integrated visual, thermal, and distance sensing, along with geolocation capabilities. A deep learning-based detection model was trained on 7103 images collected from real human subjects simulating four drowning scenarios in riverine and coastal environments, with additional stabilization and preprocessing modules to improve data quality. The proposed system achieves 98% detection accuracy, with a mean Average Precision (mAP@0.5) of 0.991 and a peak F1-score of 0.97. Results demonstrate reliable detection performance under challenging conditions, including low light, reflective water surfaces, and complex backgrounds, and show improved identification of low-contrast targets such as dark-clothed victims. These findings indicate that the proposed system provides a robust and scalable solution for real-time aquatic SAR applications and enhances the effectiveness of UAV-assisted rescue operations.
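The abstract reports detection quality as mAP@0.5 and an F1-score. As a minimal sketch of what those metrics rest on, the code below implements the intersection-over-union (IoU) matching rule behind the @0.5 threshold and the F1 formula; the bounding boxes and true/false positive counts are hypothetical, not the paper's data.

```python
# Illustrative sketch: IoU matching at the 0.5 threshold and the F1-score,
# the metrics reported for the detection model. Boxes are (x1, y1, x2, y2)
# in pixels and are invented for demonstration.
def iou(a, b):
    """Intersection over union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# A predicted box counts as a true positive at mAP@0.5 when it overlaps a
# ground-truth box with IoU >= 0.5.
pred = (10, 10, 50, 50)
truth = (12, 8, 48, 52)
match = iou(pred, truth) >= 0.5

# Hypothetical detection counts: 97 true positives, 3 false positives,
# 3 missed victims, giving precision = recall = F1 = 0.97.
score = f1_score(tp=97, fp=3, fn=3)
```

The hypothetical counts were chosen so the sketch reproduces the paper's peak F1 of 0.97; averaging precision over recall levels and detection classes at the IoU ≥ 0.5 threshold is what yields the reported mAP@0.5.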