Search Results (3,517)

Search Parameters:
Keywords = generalizability

53 pages, 5533 KB  
Systematic Review
Embodied AI with Foundation Models for Mobile Service Robots: A Systematic Review
by Matthew Lisondra, Beno Benhabib and Goldie Nejat
Robotics 2026, 15(3), 55; https://doi.org/10.3390/robotics15030055 - 4 Mar 2026
Abstract
Rapid advancements in foundation models, including Large Language Models, Vision-Language Models, Multimodal Large Language Models, and Vision-Language-Action models, have opened new avenues for embodied AI in mobile service robotics. By combining foundation models with the principles of embodied AI, where intelligent systems perceive, reason, and act through physical interaction, mobile service robots can achieve more flexible understanding, adaptive behavior, and robust task execution in dynamic real-world environments. Despite this progress, embodied AI for mobile service robots continues to face fundamental challenges related to the translation of natural language instructions into executable robot actions, multimodal perception in human-centered environments, uncertainty estimation for safe decision-making, and computational constraints for real-time onboard deployment. In this paper, we present the first systematic review of foundation models in mobile service robotics, following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Using an OpenAlex literature search, we considered 7506 papers spanning the years 1968–2025. Our detailed analysis identified these four main challenges and examined how recent advances in foundation models have addressed each of them. We further examine real-world applications in domestic assistance, healthcare, and service automation, highlighting how foundation models enable context-aware, socially responsive, and generalizable robot behaviors. Beyond technical considerations, we discuss the ethical, societal, human-interaction, and physical design and ergonomic implications of deploying foundation-model-enabled service robots in human environments. Finally, we outline future research directions emphasizing reliability and lifelong adaptation, privacy-aware and resource-constrained deployment, and the governance and human-in-the-loop frameworks required for safe, scalable, and trustworthy mobile service robotics.
(This article belongs to the Special Issue Embodied Intelligence: Physical Human–Robot Interaction)
17 pages, 278 KB  
Article
Augmented Reality’s Impact on Student Creativity in Design and Technology: An Immersive Learning Study
by Zuraini Yakob, Nazlena Mohamad Ali, Mohamad Hidir Mhd Salim and Norshita Mat Nayan
Multimodal Technol. Interact. 2026, 10(3), 25; https://doi.org/10.3390/mti10030025 - 4 Mar 2026
Abstract
This quasi-experimental study examined the effectiveness of Augmented Reality (AR)-enhanced instruction on creativity development in Malaysian Design and Technology education. Forty-six fifteen-year-old female students were assigned to AR-enhanced (n = 23) or traditional instruction (n = 23) groups for a four-week Mechatronic Design unit. Creativity was assessed using an adapted Torrance Tests of Creative Thinking-Figural (TTCT-F) instrument with expert validation and independent scoring by three raters. Bootstrapped ANCOVA (5000 iterations) controlling for pretest differences revealed significant improvements across all Guilford creativity components in the AR group: Elaboration (F = 27.093, p < 0.001, η² = 0.387), Originality (F = 20.445, p < 0.001, η² = 0.322), Fluency (F = 17.896, p < 0.001, η² = 0.294), and Flexibility (F = 7.593, p = 0.008, η² = 0.150). The differential effect pattern suggests AR operates through multiple mechanisms, primarily socio-constructivist collaborative scaffolding, followed by motivational enhancement and cognitive load reduction. These findings demonstrate AR’s substantial potential for creativity development in Design and Technology education, particularly for collaborative elaboration and generative ideation. However, single-gender sampling, brief intervention duration, and the quasi-experimental design limit generalizability, warranting future research with diverse populations and extended interventions.
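As context for the 5000-iteration bootstrap mentioned above, the general resampling idea can be sketched in Python. This is a toy percentile bootstrap for a sample mean, not the study's ANCOVA procedure, and the `scores` data are invented for illustration:

```python
import random

def bootstrap_mean_ci(data, n_boot=5000, alpha=0.05, seed=42):
    """Percentile bootstrap confidence interval for the mean.

    Illustration only: resample with replacement n_boot times,
    compute the statistic each time, and take empirical quantiles.
    """
    rng = random.Random(seed)
    n = len(data)
    means = sorted(
        sum(rng.choice(data) for _ in range(n)) / n
        for _ in range(n_boot)
    )
    lo = means[int(alpha / 2 * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

# Hypothetical creativity scores for eight students
scores = [12, 15, 14, 10, 13, 17, 11, 16]
lo, hi = bootstrap_mean_ci(scores)
print(lo <= sum(scores) / len(scores) <= hi)  # the CI brackets the sample mean
```

The same resample-and-recompute loop generalizes to any statistic (here it would wrap the ANCOVA F-tests), which is why bootstrapping is attractive when parametric assumptions are doubtful in small samples.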
16 pages, 683 KB  
Article
Artificial Intelligence and Error Analysis: Effects on Feedback of Recurrent Errors and Fossilisation Tendencies
by Manuel Macías-Borrego
Educ. Sci. 2026, 16(3), 393; https://doi.org/10.3390/educsci16030393 - 4 Mar 2026
Abstract
This study investigates the pedagogical value of integrating AI-supported feedback with Error Analysis in university-level English as a Foreign Language (EFL) writing instruction, where English is the target language (TL). Adopting a comparative, corpus-based design, the research examines whether AI-mediated feedback can complement traditional teacher-led Error Analysis in reducing recurrent errors, improving grammatical accuracy, and supporting revision practices among Spanish L1 learners of English at the B2 (CEFR) level. Seventy participants completed two writing tasks over a twelve-week period, generating a learner corpus that was randomly assigned to two groups: AI-assisted feedback and teacher-mediated feedback. Quantitative Error Analysis and learner-perception surveys were conducted to assess both linguistic outcomes and attitudinal responses. Results indicate that students receiving AI-assisted feedback demonstrated lower rates of error repetition (25%) compared to those receiving teacher-based correction (40%), particularly in subject–verb agreement, preposition use, tense selection, and L1-induced lexical transfer in L2 English writing. Survey findings further reveal higher perceived levels of clarity, usefulness, and immediacy for AI-generated feedback, although participants continued to value teacher input for higher-order writing concerns. Overall, the findings suggest that AI-supported Error Analysis can contribute to short-term error reduction and foster learner autonomy. This study highlights the potential of blended and mixed feedback models within a focused pedagogical context and underscores the need for longitudinal research examining long-term retention, pragmatic development, and cross-context generalizability.
33 pages, 5521 KB  
Article
Contrast-Free Myocardial Infarction Segmentation with Attention U-Net
by Khaled Ali Deeb, Yasmeen Alshelle, Hala Hammoud, Andrey Briko, Vladislava Kapravchuk, Alexey Tikhomirov, Amaliya Latypova and Ahmad Hammoud
Diagnostics 2026, 16(5), 768; https://doi.org/10.3390/diagnostics16050768 - 4 Mar 2026
Abstract
Background: Cardiovascular magnetic resonance (CMR) is the clinical gold standard for assessing cardiac anatomy and function. However, the manual segmentation of cardiac structures and myocardial infarction (MI) is time-consuming, prone to inter-observer variability, and often depends on contrast-enhanced imaging. Although deep learning (DL) has enabled substantial automation, challenges remain in generalizability, particularly for MI detection from non-contrast cine CMR. Objective: This study proposes a comprehensive DL-based framework for automatic segmentation of cardiac structures and myocardial infarction using contrast-free cine CMR. Methods: The framework integrates multiple convolutional neural network (CNN) architectures for cardiac structure segmentation with an attention-based deep learning model for MI localization. Post-processing refinement using stacked autoencoders and active contour modeling is applied to improve anatomical consistency. Segmentation performance is evaluated using overlap-based and boundary-based metrics, including the Dice Similarity Coefficient (DSC), Mean Contour Distance (MCD), and Hausdorff Distance (HD). Results: The best-performing model achieved Dice scores of 0.93 ± 0.05 for the left ventricular (LV) cavity, 0.89 ± 0.04 for the LV myocardium, and 0.91 ± 0.06 for the right ventricular (RV) cavity, with consistently low boundary errors across all structures. Myocardial infarction segmentation achieved a Dice score of 0.80 ± 0.02 with high recall, demonstrating reliable infarct localization without the use of contrast agents. Conclusions: By enabling accurate cardiac structure and myocardial infarction segmentation from contrast-free cine CMR, the proposed framework supports broader clinical applicability, particularly for patients with contraindications to gadolinium-based contrast agents and in emergency or resource-limited settings. This approach facilitates scalable, contrast-independent cardiac assessment.
(This article belongs to the Special Issue Artificial Intelligence and Computational Methods in Cardiology 2026)
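The Dice scores reported in the abstract above follow the standard overlap definition, DSC = 2|A∩B| / (|A| + |B|). A minimal sketch for binary segmentation masks (toy arrays, not the study's CMR data):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray) -> float:
    """Dice Similarity Coefficient for binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    # Convention: two empty masks agree perfectly
    return 1.0 if total == 0 else 2.0 * intersection / total

# Toy 4x4 masks: 3 overlapping pixels, each mask has 4 -> 2*3 / (4+4) = 0.75
a = np.array([[1, 1, 0, 0], [1, 1, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]])
b = np.array([[1, 1, 0, 0], [1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 0]])
print(dice_coefficient(a, b))  # 0.75
```

Because DSC weights overlap against the combined mask size, it is more forgiving of small-structure errors than pixel accuracy, which is why it is the dominant metric for cardiac segmentation benchmarks like those cited here.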
22 pages, 2161 KB  
Systematic Review
Prognostic Models for Predicting Coronary Heart Disease Risk in Patients with Type 2 Diabetes Mellitus: A Systematic Review and Meta-Analysis
by Maicol Cortez-Sandoval, César J. Eras Lévano, Joaquín Fernández Álvarez, Jorge López-Leal, Lady Morán Valenzuela, Raul H. Sandoval-Ato, Hady Keita, Martin Gomez-Lujan, Fernando M. Quevedo Candela, Jesús I. Parra Prado, José Luis Muñoz-Carrillo, Oriana Rivera-Lozada and Joshuan J. Barboza
Diagnostics 2026, 16(5), 765; https://doi.org/10.3390/diagnostics16050765 - 4 Mar 2026
Abstract
Background: Individuals with type 2 diabetes mellitus (T2DM) are at markedly increased risk of developing coronary heart disease (CHD); however, the generalizability and transportability of existing prediction models remain uncertain. Objective: To identify and evaluate multivariable prognostic models developed to predict CHD in adults with T2DM. Methods: We conducted a PRISMA-guided systematic review and meta-analysis of multivariable prognostic models predicting CHD in T2DM populations. Model characteristics and performance metrics were extracted following the CHARMS and TRIPOD-SRMA frameworks, and pooled discrimination was estimated on the logit-transformed AUC scale using a random-effects model (REML, Hartung–Knapp adjustment). Between-study heterogeneity and 95% prediction intervals were quantified, while risk of bias and applicability were assessed using the PROBAST tool. Results: Thirteen studies encompassing clinical, imaging-based, and omics-augmented models met the inclusion criteria. The pooled AUC was 0.69 (95% CI: 0.66–0.71), with high heterogeneity (I² = 97.4%; τ² = 0.0979) and a wide 95% prediction interval (0.54–0.81). Classical regression-based models demonstrated modest discrimination, whereas machine learning, imaging, and proteomic approaches achieved higher AUC estimates but were frequently constrained by small sample sizes, internal-only validation, and poor calibration reporting. The analysis domain emerged as the principal source of bias in PROBAST evaluations, and applicability issues were most frequent in models requiring advanced imaging or molecular platforms. Conclusions: Prognostic models for CHD in T2DM demonstrate moderate-to-good discrimination but substantial heterogeneity and frequent miscalibration across studies. Their clinical utility depends on rigorous external validation and local recalibration, particularly when incorporating imaging or molecular predictors. Future research should prioritize standardized CHD outcomes, consistent calibration reporting, decision-analytic assessments, and the development of transportable multimodal prediction models across diverse populations.
(This article belongs to the Section Clinical Diagnosis and Prognosis)
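The pooling described in the abstract above works on the logit scale, logit(AUC) = ln(AUC / (1 − AUC)), with the pooled estimate transformed back to the AUC scale; the logit transform keeps pooled values inside (0, 1). A simplified sketch follows, using a plain weighted mean on the logit scale; the review itself fits REML random effects with the Hartung–Knapp adjustment, and the AUC values here are illustrative:

```python
import math

def logit(p: float) -> float:
    """Log-odds transform: ln(p / (1 - p))."""
    return math.log(p / (1.0 - p))

def inv_logit(x: float) -> float:
    """Inverse logit (logistic function), mapping back to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def pool_auc(aucs, weights=None):
    """Pool AUCs via a weighted mean on the logit scale.

    Simplification: real meta-analyses use inverse-variance weights
    and a random-effects model, not equal weights.
    """
    if weights is None:
        weights = [1.0] * len(aucs)
    z = sum(w * logit(a) for a, w in zip(aucs, weights)) / sum(weights)
    return inv_logit(z)

# Illustrative study-level AUCs; the pooled value stays within their range
print(round(pool_auc([0.66, 0.69, 0.71]), 3))
```

Pooling on the logit scale rather than averaging raw AUCs also makes the sampling distribution more symmetric near the 0/1 boundaries, which is the main reason meta-analyses of discrimination adopt it.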
34 pages, 2108 KB  
Systematic Review
A Systematic Review of Cross-Population Shifts in Medical Imaging Analysis with Deep Learning
by Aminu Musa, Rajesh Prasad, Peter Onwualu and Monica Hernandez
Big Data Cogn. Comput. 2026, 10(3), 76; https://doi.org/10.3390/bdcc10030076 - 4 Mar 2026
Abstract
Deep learning has achieved expert-level performance in medical imaging analysis. However, models often fail to generalize across patient populations due to cross-population domain shifts: distributional differences arising from demographic variability, variations in imaging protocols, scanner hardware, and differences in disease prevalence. This challenge limits real-world deployment and can increase health inequities. This review systematically examines the nature, causes, and impact of cross-population domain shift in deep learning-based medical imaging analysis. We analyzed 50 peer-reviewed studies from 2020 to 2025, evaluating the proposed methodologies for handling population shifts, the datasets employed, and the metrics used to assess performance. Our findings demonstrate that performance degradation ranged from 10% to 25% when models were tested on unseen populations, emphasizing the substantial impact of domain shifts on model generalizability. The literature reveals that mitigation strategies broadly fall into two categories: data-centric approaches, such as augmentation and harmonization, and model-centric approaches, including domain adaptation, transfer learning, adversarial learning, multi-task learning, and continual learning. While domain adaptation and transfer learning are the most widely used, their performance gains across populations remain modest, ranging from 5% to 15%, and are not supported by external validation. Our synthesis reveals a significant reliance on large, publicly available datasets from limited regions, with an underrepresentation of data from low- and middle-income countries. Evaluation practices are inconsistent, with few studies employing standardized external test sets. This review provides a structured taxonomy of mitigation techniques, a refined analysis of domain shift characteristics, and an in-depth critique of methodological challenges. We highlight the urgent need for more geographically and demographically inclusive datasets, adaptable modeling techniques, and standardized evaluation protocols to enable accurate and equitable AI-driven diagnostics across diverse populations. Finally, we outline future research directions to guide the development of robust, generalizable, and fair models for medical imaging analysis.
34 pages, 2813 KB  
Review
AI in Membrane Design and Optimization for Hydrogen Fuel Cells
by Bshaer Nasser, Hisham Kazim, Moin Sabri, Muhammad Tawalbeh and Amani Al-Othman
Membranes 2026, 16(3), 97; https://doi.org/10.3390/membranes16030097 - 3 Mar 2026
Abstract
This paper reviews artificial intelligence (AI) applications in the design and optimization of proton exchange membrane (PEM) materials for hydrogen fuel cells. PEM fuel cells offer substantial benefits for clean energy conversion, yet conventional membrane development relies on time-consuming trial-and-error methods that are inadequate for capturing the interdependencies between membrane structure and environmental variables. The review establishes foundational design principles of PEMs, outlines their challenges, and describes the computational methodologies constructed to address them. Advanced AI methods are highlighted, including graph neural networks, multitask frameworks, and physics-informed models that facilitate rapid prediction of polymer properties. Optimization methods have been reported with 10–30% performance improvements, for instance NSGA-II frameworks achieving 13–27% gains in power density. Experimental requirements are reduced by 40–60%, as seen with Bayesian optimization identifying optimal designs within as few as 40 iterations. Current challenges include data availability, generalizability, and scalability, which are closely assessed in this review.
(This article belongs to the Special Issue Advanced Membrane Design for Hydrogen Technologies)
25 pages, 1057 KB  
Review
Transforming Intracerebral Hemorrhage Care with Artificial Intelligence: Opportunities, Challenges, and Future Directions
by Qian Gao, Yujia Jin, Yuxuan Sun, Meng Jin, Lili Tang, Yuxiao Chen, Yutong She and Meng Li
Diagnostics 2026, 16(5), 752; https://doi.org/10.3390/diagnostics16050752 - 3 Mar 2026
Abstract
Spontaneous intracerebral hemorrhage (ICH) is associated with substantial mortality and morbidity. Current management paradigms rely heavily on the rapid interpretation of neuroimaging and clinical data, yet are frequently constrained by limitations in processing speed, diagnostic accuracy, and prognostic precision. Artificial intelligence (AI), specifically machine learning (ML) and deep learning (DL), offers transformative potential to circumvent these challenges across the entire continuum of ICH care. This comprehensive review synthesizes the rapidly evolving landscape of AI applications in ICH management. Through a systematic evaluation of recent literature, we examine studies focused on the development, validation, or critical appraisal of AI-driven technologies for ICH care. Our analysis encompasses automated neuroimaging, computer-assisted surgical navigation, brain–computer interfaces (BCIs), prognostic modeling, and fundamental research into disease mechanisms. AI has demonstrated performance comparable to that of clinical experts in automating hematoma segmentation, predicting complications such as hematoma expansion, and refining surgical planning via augmented reality. Furthermore, BCIs present innovative therapeutic avenues for motor rehabilitation. However, the translation of these technological advances into routine clinical practice is impeded by substantial challenges, including data heterogeneity, model opacity (“black-box” issues), workflow integration barriers, regulatory ambiguities, and ethical concerns surrounding accountability and algorithmic bias. The integration of AI into ICH care signifies a paradigm shift from standardized treatment protocols toward dynamic, precision medicine. Realizing this vision necessitates interdisciplinary collaboration to engineer robust, generalizable, and interpretable AI systems. Key priorities include the establishment of large-scale multimodal data repositories, the advancement of explainable AI (XAI) frameworks, the execution of rigorous prospective clinical trials to validate efficacy, and the implementation of adaptive regulatory and ethical guidelines. By systematically addressing these barriers, AI can evolve from a mere analytical tool into an indispensable clinical partner, ultimately optimizing patient outcomes.
(This article belongs to the Special Issue Cerebrovascular Lesions: Diagnosis and Management, 2nd Edition)
17 pages, 7794 KB  
Review
Artificial Intelligence and Digital Technology in Cardiovascular Imaging: A Narrative Review
by Constantinos H. Papadopoulos, Dimitris Karelas, Christina Floropoulou, Konstantina Tzavida, Dimitrios Oikonomidis, Athanasios Tasoulis, Evangelos Tatsis, Ioannis Kouloulias and Nikolaos P. E. Kadoglou
BioTech 2026, 15(1), 22; https://doi.org/10.3390/biotech15010022 - 3 Mar 2026
Abstract
The rapid expansion of digital technologies and artificial intelligence (AI) has profoundly transformed cardiovascular imaging, enabling more precise, efficient, and reproducible assessment of cardiac structure and function. This narrative review summarizes recent advances in AI-driven methods across echocardiography, cardiac computed tomography, cardiac magnetic resonance, and nuclear imaging, with emphasis on image acquisition, automated quantification, and diagnostic and prognostic interpretation. We reviewed contemporary literature describing machine-learning and deep-learning applications for image reconstruction, segmentation, radiomics, and multimodal data integration. Current evidence demonstrates that AI improves image quality, reduces acquisition and analysis time, and enables automated, highly reproducible measurements of chamber volumes, function, tissue characterization, coronary anatomy, and myocardial perfusion, while facilitating advanced pattern recognition for differential diagnosis and risk stratification. Furthermore, digital platforms support remote acquisition, tele-echocardiography, and AI-assisted training of non-expert operators. Despite these advances, challenges remain regarding external validation, generalizability across vendors and populations, explainability, data governance, and regulatory compliance. In conclusion, AI and digital technologies are reshaping cardiovascular imaging by enhancing accuracy, efficiency, and accessibility, but their safe and effective clinical integration requires robust multicenter validation, transparent reporting, and ethical-legal frameworks that ensure trust, equity, and accountability.
(This article belongs to the Special Issue Advances in Bioimaging Technology)
45 pages, 7022 KB  
Article
Digitalization of Railway Traffic Dispatching Systems: From Legacy Infrastructure to a Software-Centric Platform
by Ivan Kokić, Jovana Vuleta-Radoičić, Iva Salom, Goran Dimić, Bratislav Planić, Sandra Velimirović and Slavica Boštjančič Rakas
Computers 2026, 15(3), 163; https://doi.org/10.3390/computers15030163 - 3 Mar 2026
Abstract
Digitalization of railway traffic dispatching systems is a key step in the modernization of railway telecommunication infrastructure. This paper presents a case study of the migration from legacy analog technology to a software-centric dispatching platform that integrates digital signal processing, optical fiber transmission, and Internet Protocol (IP)-based network architectures, as implemented in the Serbian railway system. The modernization is performed through an iterative, incremental process: existing analog dispatcher equipment and established operating procedures are preserved, while digital dispatching centers, trackside communication nodes, and radio-dispatching services are introduced gradually. This staged evolution enables high-capacity, noise-resilient communication and seamless interconnection between the old and the new subsystems without disrupting railway operations. The adoption of software-based control and integrated digital signal processing provides modular scalability, real-time system supervision, automated diagnostics, and improved maintainability. One of the critical services within the new architecture, the Centralized Call Record- and Message-Archiving System (CCRMAS), provides a centralized platform that captures, secures, and retrieves operational railway communication in real time for monitoring, post-incident analysis, and regulatory compliance. The resulting architecture, deployed within Serbian Railways, establishes a scalable and resilient foundation for future automation, interoperability, and integration within intelligent railway traffic-management environments. Thus, the paper extracts a generalizable hybrid migration architecture model and transferable design principles, supported by deployment artifacts and illustrated through migration scenarios, that can be applied to the modernization of other legacy-intensive railway networks.
15 pages, 222 KB  
Article
The Knowledge, Attitudes, and Practices of Parents of Children Admitted to the Paediatric Emergency Department with Fever
by Sema Bayraktar, Gülay Türk, Ahmet Butun and Zeynep Olgac Tay
Healthcare 2026, 14(5), 638; https://doi.org/10.3390/healthcare14050638 - 3 Mar 2026
Abstract
Introduction: Fever is one of the most common reasons for Paediatric Emergency Department (PED) visits, often driven by parental anxiety and misconceptions about fever management. This study aimed to evaluate the knowledge, attitudes, and practices of parents regarding childhood fever to identify gaps and guide targeted educational interventions. Understanding parental behaviors is crucial for improving care outcomes and reducing unnecessary PED utilization. Methods: This descriptive, cross-sectional study included a total of 440 parents of children admitted to the PED with complaints of fever, selected by convenience sampling. Data were collected using a questionnaire covering sociodemographics, a form surveying the parents’ fever knowledge and attitudes, and the validated Turkish version of the Parents’ Fever Management Scale (PFMS-TR). The data were analyzed using the SPSS 22.0 statistical program. Results: Most parents (95.5%) reported prior experience with childhood fever, yet 54.1% lacked a regular physician. Common fever detection methods included tactile assessment (56.4%) and thermometers (27.3%). Parental concern arose at 39 °C (48.6%). Cold applications (41.6%) and antipyretics (21.1%) were frequent interventions. The mean PFMS-TR score was high (34.97 ± 4.27), indicating elevated caregiver burden. Scores varied significantly by the child’s age (higher for infants, p = 0.044) and maternal education (higher for educated mothers, p = 0.008). Satisfaction with healthcare staff correlated with higher scores (p = 0.024). Negative correlations emerged between parental age, number of children, and fever management scores (p < 0.05). Conclusions: Parents exhibited highly interventionist behaviors and persistent knowledge gaps, underscoring the need for targeted education programs tailored to parental demographics and misconceptions. Healthcare providers, particularly pediatric nurses, should prioritize clear communication and evidence-based guidance to empower parents and reduce unnecessary healthcare burdens. Future research should expand to diverse geographic and cultural settings to enhance generalizability.
20 pages, 4390 KB  
Article
NeuroFusion-ViT: A Hybrid CNN–EVA Transformer Model with Cross-Attention Fusion for MRI-Based Alzheimer’s Stage Classification
by Derya Öztürk Söylemez and Sevinç Ay Doğru
Diagnostics 2026, 16(5), 754; https://doi.org/10.3390/diagnostics16050754 - 3 Mar 2026
Abstract
Background: Alzheimer’s disease is the most common type of dementia and a progressive neurodegenerative disease that begins with neuronal damage and leads to a reduction in brain tissue. Currently, there is no cure for this disease, and existing approaches focus on alleviating symptoms. Methods: This study proposes NeuroFusion-ViT, a highly accurate and computationally efficient hybrid deep learning model for early-stage detection of Alzheimer’s disease. The model combines an EVA-02-based Vision Transformer (ViT) with the ConvNeXt-Small CNN architecture, providing powerful representation learning that can process both global context and local details. The proposed Gated Cross-Attention Fusion (G-CAF) mechanism dynamically combines two different features, offering high discriminative power and model stability. Results: In experiments conducted on the OASIS MRI dataset, the model achieved 99.86% accuracy, 0.9989 Macro F1, and 0.999 ROC-AUC values, demonstrating clear superiority over single-modal and hybrid models described in the literature. Furthermore, 5-fold cross-validation results also support the model’s high generalizability. Ablation studies showed that each of the components—cross-attention, gate mechanism, Dual LayerNorm, and FFN-Dropout—made a meaningful contribution to performance. Conclusions: The results demonstrate that the NeuroFusion-ViT architecture offers a reliable, stable, and clinically applicable solution for Alzheimer’s stage classification.
(This article belongs to the Special Issue Alzheimer's Disease Diagnosis Based on Deep Learning)
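The abstract above describes a Gated Cross-Attention Fusion (G-CAF) step that blends ViT global-context features with CNN local-detail features. Below is a minimal NumPy sketch of one plausible reading of such a mechanism; the single-head attention, the dimensions, and all projection matrices (`Wq`, `Wk`, `Wv`, `Wg`) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_cross_attention_fusion(vit_tok, cnn_tok, Wq, Wk, Wv, Wg, bg):
    """Single-head sketch: ViT tokens query CNN tokens, and a sigmoid
    gate blends the cross-attended signal back into the ViT stream."""
    q, k, v = vit_tok @ Wq, cnn_tok @ Wk, cnn_tok @ Wv
    attn = softmax(q @ k.T / np.sqrt(q.shape[-1]))  # (n_vit, n_cnn)
    cross = attn @ v                                # CNN detail routed to ViT tokens
    gate = sigmoid(np.concatenate([vit_tok, cross], axis=-1) @ Wg + bg)
    return gate * cross + (1.0 - gate) * vit_tok    # gated residual blend

rng = np.random.default_rng(0)
d, n_vit, n_cnn = 8, 4, 6
fused = gated_cross_attention_fusion(
    rng.standard_normal((n_vit, d)), rng.standard_normal((n_cnn, d)),
    rng.standard_normal((d, d)), rng.standard_normal((d, d)),
    rng.standard_normal((d, d)), rng.standard_normal((2 * d, d)),
    np.zeros(d),
)
print(fused.shape)  # (4, 8)
```

The gate lets the model fall back on the ViT representation per feature when the cross-attended CNN signal is unhelpful, which is one way such a fusion can contribute to the stability the abstract reports.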
20 pages, 6711 KB  
Article
RUL Prediction Based on xLSTM–Transformer Neural Network for Rolling Element Bearings Under Different Working Conditions
by Runzhong Jiang, Ziqi Li, Haiyu Lu, Weizhong Mo, Wei Huang and Minmin Xu
Sensors 2026, 26(5), 1578; https://doi.org/10.3390/s26051578 - 3 Mar 2026
Abstract
Remaining useful life (RUL) prediction of rolling bearings is a crucial task in intelligent predictive maintenance, helping to ensure equipment safety and reduce maintenance costs. To address the challenge that traditional deep learning models struggle to simultaneously capture local temporal features and global degradation trends when processing degradation health indicators (HI), this paper proposes a hybrid RUL prediction model based on extended Long Short-Term Memory (xLSTM) and Transformer. The model employs an encoder–decoder architecture, integrating the Multi-Head Attention mechanism with the xLSTM module. This design simultaneously enhances the modeling of short-term dynamic features and effectively captures long-term degradation patterns. Validation was conducted on the XJTU-SY and PHM2012 datasets. The proposed model outperformed the comparative models across evaluation metrics such as Root Mean Square Error (RMSE), Coefficient of Determination (R²), and the Score, achieving a significant improvement in prediction accuracy and multi-dataset generalization capability. The proposed network provides a more accurate and generalizable solution for bearing health assessment and remaining useful life prediction and demonstrates significant potential for intelligent health management of industrial equipment. Full article
(This article belongs to the Special Issue Sensor-Based Fault Diagnosis and Prognosis)
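The pipeline described in this abstract, a recurrent pass over the health indicator for short-term dynamics followed by attention for long-term degradation trends and a regression head, can be sketched as follows. This toy NumPy version uses a plain LSTM cell and single-head self-attention in place of the paper's xLSTM and Transformer blocks; all shapes and parameters are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_encode(hi_seq, Wx, Wh, b):
    """Recurrent pass over a 1-D health indicator (short-term dynamics)."""
    d = Wh.shape[1]
    h, c = np.zeros(d), np.zeros(d)
    states = []
    for x in hi_seq:
        i, f, o, g = np.split(Wx * x + Wh @ h + b, 4)  # stacked gate pre-activations
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
        states.append(h)
    return np.stack(states)                            # (T, d)

def rul_predict(hi_seq, lstm_params, w_out, b_out):
    H = lstm_encode(hi_seq, *lstm_params)
    A = softmax(H @ H.T / np.sqrt(H.shape[1]))         # self-attention: long-term trend
    return float(w_out @ (A @ H)[-1] + b_out)          # regression head on last step

rng = np.random.default_rng(1)
d = 8
hi = np.linspace(1.0, 0.2, 50)                         # toy degradation indicator
rul = rul_predict(
    hi,
    (rng.standard_normal(4 * d), rng.standard_normal((4 * d, d)), np.zeros(4 * d)),
    rng.standard_normal(d), 0.0,
)
print(np.isfinite(rul))
```

The division of labor is the point: the recurrence summarizes local fluctuations step by step, while the attention matrix lets every time step weigh the entire degradation history when forming the final estimate.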
20 pages, 448 KB  
Article
Assessing the Generalizability of Mobile Software Engineering Research Through Combined Systematic Methods
by Robin Nunkesser
Software 2026, 5(1), 12; https://doi.org/10.3390/software5010012 - 3 Mar 2026
Abstract
Mobile Software Engineering has emerged as a distinct subfield, raising questions about the transferability of its research findings to general software engineering. This paper addresses the challenge of evaluating the generalizability of mobile-specific research, using Green Computing as a representative case. We propose a combination of systematic methods to identify potentially overlooked mobile-specific papers with a focused literature review to assess their broader relevance. Applying this approach, we find that several mobile-specific studies offer insights applicable beyond their original context, particularly in areas such as energy efficiency guidelines, measurement, and trade-offs. The results demonstrate that systematic identification and evaluation can reveal valuable contributions for the wider software engineering community. The proposed method provides a structured framework for future research to assess the generalizability of findings from specialized domains, fostering greater integration and knowledge transfer across software engineering disciplines. Full article
32 pages, 9401 KB  
Article
A Leakage-Aware Multimodal Machine Learning Framework for Nutrition Supply–Demand Forecasting Using Temporal and Spatial Data Fusion
by Abdullah, Muhammad Ateeb Ather, Jose Luis Oropeza Rodriguez, Carlos Guzmán Sánchez-Mejorada, Miguel Jesús Torres Ruiz and Rolando Quintero Tellez
Computers 2026, 15(3), 156; https://doi.org/10.3390/computers15030156 - 2 Mar 2026
Abstract
Accurate forecasting of nutrition supply–demand dynamics is essential for reducing resource wastage and improving equitable allocation. However, this task remains challenging due to heterogeneous data sources, cold-start regions, and the risk of information leakage in spatiotemporal modeling. This study presents a leakage-aware multimodal machine learning framework for nutrition supply–demand forecasting. The framework integrates temporal, spatial, and contextual information within a unified architecture. It combines self-supervised temporal representation learning, causal time-lag modeling, and few-shot adaptation to improve generalization under limited or previously unseen data conditions. Heterogeneous inputs include epidemiological, environmental, demographic, sentiment, and biologically derived indicators. These signals are encoded using a PatchTST-inspired temporal backbone coupled with a feature-token transformer employing cross-modal attention. Spatial dependencies are explicitly modeled using graph neural networks. Hierarchical decoding enables multi-horizon forecasting with calibrated uncertainty estimates. Model evaluation is conducted under strict spatiotemporal hold-out protocols with explicit leakage detection. All synthetic signals are excluded from testing. Across geographically and temporally disjoint datasets, the proposed framework consistently outperforms strong unimodal and multimodal baselines. It achieves macro-F1 scores above 99.5% and stable early-warning lead times of approximately 9 days under distribution shift. Ablation studies indicate that causal time-lag enforcement and few-shot adaptation contribute most strongly to performance robustness. Closed-loop simulation experiments suggest potential reductions in nutrient wastage of approximately 38%, response latency of 19%, and operational costs of 16% when deployed as a decision-support tool. 
External validation on fully unseen regions confirms the generalizability of the framework under realistic forecasting constraints. Full article
(This article belongs to the Special Issue AI in Bioinformatics)
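The strict spatiotemporal hold-out protocol this abstract emphasizes can be illustrated with a small sketch: test data come only from held-out regions after a temporal cutoff, training data only from other regions before it, with an explicit check that no region leaks across the splits. The record schema (`region`, `date`, `demand`) is a hypothetical stand-in for the paper's multimodal inputs.

```python
from datetime import date

def leakage_aware_split(records, holdout_regions, cutoff):
    """Spatiotemporal hold-out: test rows come only from unseen regions
    after the cutoff; training rows only from other regions before it."""
    train = [r for r in records
             if r["region"] not in holdout_regions and r["date"] < cutoff]
    test = [r for r in records
            if r["region"] in holdout_regions and r["date"] >= cutoff]
    # Leakage check: no region may appear in both splits.
    assert not ({r["region"] for r in train} & {r["region"] for r in test})
    return train, test

records = [
    {"region": "A", "date": date(2024, 1, 1), "demand": 10},
    {"region": "A", "date": date(2024, 6, 1), "demand": 12},
    {"region": "B", "date": date(2024, 1, 1), "demand": 7},
    {"region": "B", "date": date(2024, 6, 1), "demand": 9},
]
train, test = leakage_aware_split(records, {"B"}, date(2024, 3, 1))
print(len(train), len(test))  # 1 1
```

Note that rows from a held-out region before the cutoff (and from training regions after it) are discarded entirely; strict protocols accept this data loss to guarantee that neither the region nor the future period has been seen during training.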