Search Results (6,734)

Search Parameters:
Keywords = intelligent tools

26 pages, 843 KB  
Article
Artificial Intelligence in Literature Review Synthesis: A Step-by-Step Methodological Approach for Researchers and Academics
by Matolwandile M. Mtotywa, Jeri-Lee J. Mowers, Wavhudi Ndou, Thabang V. Q. Moleko and Matsobane J. Ledwaba
Informatics 2026, 13(3), 43; https://doi.org/10.3390/informatics13030043 (registering DOI) - 13 Mar 2026
Abstract
The integration of artificial intelligence (AI) in literature reviews aims to transform research by potentially automating processes, enhancing rigour, and improving quality. The study proposes a structured step-by-step approach to integrate AI tools into the literature review synthesis process. The developed methodological approach has five steps. The first step, planning and readiness, involves scoping, understanding practices, and defining boundaries of AI use. Next is selecting AI tools and aligning their capabilities with the literature needs through a matrix. The third step focuses on using AI to conduct the review, followed by validation and cross-referencing of AI-generated results. The final step is disclosing AI use in line with ethical and reporting standards. The approach is demonstrated through five scenarios: emerging or fragmented literature, large or saturated fields, interdisciplinary domains, methodologically diverse studies, and under-researched topics. This approach is designed to enhance transparency, potentially reduce bias, and support reproducibility by aligning AI functions with research goals. It also addresses ethical considerations and promotes human–AI collaboration. For researchers and academics, it aims to provide a practical roadmap for the responsible adoption of AI in literature reviews, supporting efficiency, ethical tool use, transparency, and the balance between machine assistance and academic judgment. Full article

17 pages, 602 KB  
Review
Artificial Intelligence Applications in Gastric Cancer Surgery: Bridging Early Diagnosis and Responsible Precision Medicine
by Silvia Malerba, Miljana Vladimirov, Aman Goyal, Audrius Dulskas, Augustinas Baušys, Tomasz Cwalinski, Sergii Girnyi, Jaroslaw Skokowski, Ruslan Duka, Robert Molchanov, Bojan Jovanovic, Francesco Antonio Ciarleglio, Alberto Brolese, Kebebe Bekele Gonfa, Abdi Tesemma Demmo, Zilvinas Dambrauskas, Adolfo Pérez Bonet, Mario Testini, Francesco Paolo Prete, Valentin Calu, Natale Calomino, Vikas Jain, Aleksandar Karamarkovic, Karol Polom, Adel Abou-Mrad, Rodolfo J. Oviedo, Yogesh Vashist and Luigi Marano
J. Clin. Med. 2026, 15(6), 2208; https://doi.org/10.3390/jcm15062208 (registering DOI) - 13 Mar 2026
Abstract
Background: Artificial intelligence is emerging as a promising tool in surgical oncology, with growing evidence suggesting potential applications in diagnostic support, intraoperative guidance, and perioperative risk assessment. In gastric cancer surgery, emerging applications range from AI-assisted endoscopic detection to data-driven perioperative risk prediction, while some technological developments, particularly in robotic autonomy, derive from broader surgical or experimental models that may inform future gastric procedures. Methods: A narrative review was conducted following established methodological standards, including the Scale for the Assessment of Narrative Review Articles (SANRA) and the Search–Appraisal–Synthesis–Analysis (SALSA) framework. English-language studies indexed in PubMed, Scopus, Embase, and Web of Science up to October 2025 were included. Evidence was synthesized thematically across five domains: AI-assisted anatomical recognition and lymphadenectomy support, autonomous robotic systems, early cancer detection, perioperative predictive and frailty models, and ethical and regulatory considerations. Results: AI-based computer vision and deep learning algorithms have demonstrated promising capabilities for real-time anatomical recognition, surgical phase classification, and intraoperative guidance, although evidence of direct patient-level benefit remains limited. In diagnostic settings, AI-assisted endoscopy and Raman spectroscopy have been shown to improve early lesion detection and reduce dependence on operator experience. Predictive models, including MySurgeryRisk and AI-driven frailty assessments, may support individualized prehabilitation planning and perioperative risk stratification. Persistent limitations include small and heterogeneous datasets, insufficient external validation, and unresolved concerns related to data privacy, algorithmic interpretability, and medico-legal responsibility. 
Conclusions: Artificial intelligence is progressively emerging as a promising tool in gastric cancer surgery, integrating automation, advanced analytics, and human clinical reasoning. Its safe and ethical adoption requires robust validation, transparent governance, and continuous surgeon oversight. When developed within human-centered and ethically grounded frameworks, AI can augment, rather than replace, surgical expertise, potentially advancing precision, safety, and equity in oncologic care. Full article

26 pages, 656 KB  
Article
Artificial Intelligence in Gastronomic Heritage Preservation: Governance and Community Acceptance in Tourism Contexts
by Marina Bugarčić, Dragan Vukolić, Ana Spasojević, Marija Mandarić, Mirjana Penić, Bojana Drašković, Maja Vrbanac, Gordana Bejatović, Momčilo Conić, Andrija Milutinović and Tamara Gajić
Heritage 2026, 9(3), 114; https://doi.org/10.3390/heritage9030114 (registering DOI) - 13 Mar 2026
Abstract
Gastronomic tourism heritage represents a significant segment of intangible cultural heritage, reflecting traditional knowledge, local identity, and long-standing culinary practices. The contemporary development of digital technologies, particularly artificial intelligence (AI), opens new possibilities for its preservation, documentation, and sustainable interpretation within cultural tourism. The aim of this research is to examine the role of artificial intelligence as a tool for preserving gastronomic tourism heritage from the perspective of local community members in Bosnia and Herzegovina, Serbia, and North Macedonia, regions characterised by shared gastronomic and cultural traditions. The study was conducted using a quantitative research design based on a structured questionnaire administered to 571 respondents. A convenience sampling approach was applied, targeting individuals involved in the preparation, transmission, or promotion of traditional gastronomy. Data were collected through a combination of field-based and online survey distribution. The analysis focuses on respondents’ perceptions of AI applications in documenting traditional recipes, interpreting gastronomic heritage, and promoting it within tourism, as well as on attitudes related to authenticity and cultural identity preservation. The findings indicate that, within the surveyed sample, artificial intelligence is generally perceived as a useful tool for safeguarding gastronomic heritage. At the same time, respondents emphasise the importance of transparent governance, community participation, and culturally sensitive implementation in order to minimise risks of commodification and loss of authenticity. Full article

13 pages, 1024 KB  
Article
Artificial Intelligence as a Support Tool for Preoperative Patient Education in Anesthesiology: A Comparative Evaluation of Five Large Language Models
by Ahmet Tuğrul Şahin, Mehtap Gürler Balta, Vildan Kölükçü, Ali Genç, Serkan Karaman, Tuğba Karaman and Hakan Tapar
J. Clin. Med. 2026, 15(6), 2197; https://doi.org/10.3390/jcm15062197 - 13 Mar 2026
Abstract
Background/Objectives: Large language models (LLMs) are increasingly used for patient education, yet comparative evidence regarding their accuracy, safety, and ethical performance remains limited, particularly in high-risk fields such as anesthesiology. This study aimed to conduct a multidimensional comparison of five contemporary LLMs in answering common patient questions in anesthesiology. Methods: In this cross-sectional, comparative in silico study, 30 standardized patient questions covering general anesthesia, spinal/epidural anesthesia, and peripheral nerve blocks were submitted to ChatGPT, Gemini, Microsoft Copilot, DeepSeek, and Grok. Responses were independently evaluated under full blinding by five senior anesthesiology professors using a 5-point Likert scale across six domains: accuracy, safety, completeness, understandability, ethics, and overall assessment. Inter-rater reliability was assessed using intraclass correlation coefficients (ICC). Performance differences were analyzed using linear mixed-effects models accounting for question- and evaluator-level variability, with results reported as estimated marginal means. Results: Inter-rater agreement was good to excellent across all domains (ICC > 0.75). Significant model-related differences were observed for overall assessment, accuracy, safety, completeness, and ethics (all p < 0.001), whereas understandability did not differ significantly between models. ChatGPT achieved the highest overall performance, while Gemini demonstrated superior accuracy. Model performance varied across anesthesiology subspecialties, with significant model × topic interactions identified in multiple domains (p < 0.01). Conclusions: LLMs may serve as supportive tools for patient education in anesthesiology; however, their performance varies substantially across models and clinical contexts. 
Differences in accuracy, safety, and ethical performance highlight the need for cautious, context-aware integration of LLMs into clinical practice rather than their use as substitutes for anesthesiologists’ clinical judgment. Full article
(This article belongs to the Section Anesthesiology)
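The study reports good-to-excellent inter-rater agreement (ICC > 0.75). As an illustration of how such an intraclass correlation coefficient is computed, here is a minimal ICC(2,1) (two-way random effects, absolute agreement, single rater) in plain Python; the ratings matrix is invented for the sketch, not the study's data.

```python
# Minimal ICC(2,1) after the Shrout & Fleiss two-way random-effects model.
# Rows = targets (questions), columns = raters (evaluators).
def icc2_1(ratings):
    n = len(ratings)        # number of questions
    k = len(ratings[0])     # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)                 # mean square, rows (targets)
    msc = ss_cols / (k - 1)                 # mean square, columns (raters)
    mse = ss_err / ((n - 1) * (k - 1))      # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Five raters score four questions on a 1-5 Likert scale (made-up numbers).
ratings = [
    [5, 4, 5, 5, 4],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 2, 3, 2, 2],
]
icc = icc2_1(ratings)
```

With these consistent made-up ratings the ICC lands well above the 0.75 "good agreement" cutoff the study uses.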

33 pages, 446 KB  
Review
Language Models and Food–Health Evidence: Challenges, Opportunities, and Implications
by David Jackson, Athanasios Gousiopoulos and Theodoros G. Soldatos
BioMedInformatics 2026, 6(2), 13; https://doi.org/10.3390/biomedinformatics6020013 - 13 Mar 2026
Abstract
Scientific evidence is fundamental to uncovering insights about health, including food and nutritional claims. Substantiating such claims requires robust scientific procedures that often include clinical studies, biochemical analyses, and the examination of multiple forms of data. The growing capabilities of artificial intelligence (AI) and large language models (LLMs) present new opportunities for analyzing food–health relationships and supporting health claim validation. Yet, applying these technologies to the food and nutrition domain raises challenges that differ from those encountered in broader biomedical text mining (TM). In this perspective, we review key issues, including the complexity and heterogeneity of food-related data, the scarcity of food-specific language models and standardized resources, difficulties in interpreting nuanced and often contradictory evidence, and requirements for integrating AI tools into regulatory workflows. We compare modern LLM approaches with traditional TM methods and discuss how each may complement the other. Our position is that, despite their promise, current AI and LLM tools cannot yet reliably handle the subtleties of food–health evidence without substantial domain-specific refinement and human expert oversight. We advocate for hybrid approaches that combine the precision of established TM techniques with the analytical breadth of LLMs, supported by harmonized ontologies, multidimensional evaluation frameworks, and human-in-the-loop validation, particularly in regulatory contexts. We also highlight the importance of public education, transparent communication standards, and coordinated cross-disciplinary efforts to ensure these technologies serve broader goals of food safety, consumer trust, and global health. Full article
21 pages, 2278 KB  
Review
Artificial Intelligence for Microbial Isolation and Cultivation: Progress and Challenges
by Mingyu Li, Xiangwu Yao, Meng Zhang and Baolan Hu
Microorganisms 2026, 14(3), 654; https://doi.org/10.3390/microorganisms14030654 - 13 Mar 2026
Abstract
Microbial resources are crucial for biotechnology development and fundamental scientific research. Traditional microbial techniques fail to isolate and cultivate the vast majority of microorganisms in nature, severely limiting the discovery of novel microbial resources. The rise in artificial intelligence (AI) technologies provides new computational tools to overcome bottlenecks in microbial resource discovery and utilization. This review comprehensively examines the development of AI technologies in microbial isolation and cultivation over the past three decades from the perspective of microbial resource discovery. We propose a five-stage framework: the germination period (1997–2008), the early exploration period (2008–2015), the rapid development period (2015–2019), the deep learning (DL) explosion period (2020–2022), and the AI integration period (2023–present). We focus on how AI technologies at each stage address core challenges in microbiology—including insufficient knowledge reserves, dynamic phenotypic changes, and complex cultivation conditions—through applications at the genome, individual, and community levels. Our analysis demonstrates that, as AI technologies advance iteratively, microbial isolation and cultivation methods are transitioning from experience-driven to data-driven approaches, from single-objective to systematic integration, and from passive screening to active design. This methodological transition is expanding the scope of microbial resource discovery. Full article
(This article belongs to the Special Issue Advancing Microbial Biotechnology)

28 pages, 6918 KB  
Article
Improving Manufacturing Line Design Efficiency Using Digital Value Stream Mapping
by P Paryanto, Muhammad Faizin and Jörg Franke
J. Manuf. Mater. Process. 2026, 10(3), 98; https://doi.org/10.3390/jmmp10030098 - 13 Mar 2026
Abstract
This study proposes a real-time data-based Digital Value Stream Mapping (Digital VSM) framework that integrates Artificial Intelligence (AI) feature selection and discrete-event simulation validation to enhance production system performance. Unlike conventional VSM approaches that rely on static, manually aggregated data, the proposed framework uses real-time operational data to dynamically quantify Value Added (VA), Non-Value Added (NVA), and Necessary Non-Value Added (NNVA) activities. To improve decision accuracy, an Artificial Neural Network (ANN) combined with Genetic Algorithm (GA) feature selection is employed to identify dominant production variables influencing lead time and line imbalance. Furthermore, Ranked Positional Weight (RPW) optimization results are validated through Tecnomatix Plant Simulation to ensure robustness before physical implementation. The proposed framework was applied to a discrete manufacturing line, resulting in a reduction of total lead time from 8755 s to 6400 s and an increase in process ratio from 33.64% to 45.91%, with line efficiency reaching 91.7%. The findings demonstrate that integrating Digital VSM with AI-driven feature selection and simulation validation transforms Lean analysis from a descriptive tool into a predictive and validated decision-support system suitable for Industry 4.0 environments. Full article
(This article belongs to the Special Issue Emerging Methods in Digital Manufacturing)
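The VSM bookkeeping behind the reported numbers (lead time 8755 s → 6400 s, process ratio 33.64% → 45.91%) is simple: classify each timed activity as VA, NNVA, or NVA, sum to a lead time, and take the process ratio as VA over lead time. A sketch with invented activity names and durations (not the paper's data):

```python
# Digital VSM accounting step: classify timed activities and derive
# lead time and process ratio (VA time / total lead time).
def vsm_summary(activities):
    totals = {"VA": 0.0, "NNVA": 0.0, "NVA": 0.0}
    for _name, seconds, category in activities:
        totals[category] += seconds
    lead_time = sum(totals.values())
    process_ratio = totals["VA"] / lead_time
    return lead_time, process_ratio, totals

# Illustrative line: VA = transformation work, NNVA = required overhead,
# NVA = pure waste (the improvement target).
activities = [
    ("welding",          300, "VA"),
    ("assembly",         450, "VA"),
    ("quality check",    120, "NNVA"),
    ("tool changeover",   90, "NNVA"),
    ("waiting on parts", 380, "NVA"),
    ("rework",           160, "NVA"),
]
lead_time, process_ratio, totals = vsm_summary(activities)
```

Removing NVA seconds shrinks the lead time while VA stays fixed, which is exactly why the paper's process ratio rises as its lead time falls.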

20 pages, 2310 KB  
Review
Beyond Computer-Aided Diagnosis: Artificial Intelligence as a “Digital Mentor” for POCUS Image Acquisition and Quality Assurance: A Narrative Review
by Hyub Huh and Jeong Jun Park
Diagnostics 2026, 16(6), 858; https://doi.org/10.3390/diagnostics16060858 - 13 Mar 2026
Abstract
Point-of-care ultrasound (POCUS) is portable and radiation-free, but its clinical reliability is constrained by operator-dependent image acquisition and the limited scalability of expert quality assurance (QA) review. As handheld devices proliferate faster than mentorship capacity, trainees increasingly rely on heterogeneous free open access medical education (FOAMed) resources that rarely provide real-time psychomotor feedback. We conducted a structured narrative review (MEDLINE, Embase, Scopus, and Web of Science; last searched on 23 February 2026), with searches performed by H.H. and independently checked by J.J.P. (both POCUS-trained clinicians). After screening, 31 studies were included. We synthesized evidence on artificial intelligence (AI) systems that support bedside image acquisition and automate QA. The primary synthesis centered on key prospective or comparative clinical evaluations of AI-guided acquisition across echocardiography, focused assessment with sonography in trauma, abdominal aortic aneurysm screening, and lung ultrasound, complemented by peer-reviewed studies of FOAMed appraisal tools and online resource quality. These evaluations suggest that real-time probe guidance, view recognition, anatomy labeling, and automated capture may enable novices, after brief training, to acquire diagnostically adequate images for narrowly defined tasks. Early reports of automated QA scoring and program-level triage for expert review suggest potential to reduce expert workload and shorten feedback cycles, but external validation, generalizability across devices and patient habitus, and patient-centered outcomes remain limited. Acquisition-focused AI may therefore serve as an upstream “digital mentor” to improve novice image acquisition. We propose a practical pathway that integrates curated FOAMed resources and simulation with AI-guided bedside acquisition and continuous QA governance for safe deployment. Full article
(This article belongs to the Special Issue Application of Ultrasound Imaging in Clinical Diagnosis)

22 pages, 1101 KB  
Systematic Review
Radiomics for Detection and Differentiation of Intrahepatic Cholangiocarcinoma: A Systematic Review and Meta-Analysis
by Zayan Alidina, Illiyun Banani, Umm E. Abiha, Ujala Sultan and Timothy M. Pawlik
Cancers 2026, 18(6), 937; https://doi.org/10.3390/cancers18060937 - 13 Mar 2026
Abstract
Background: Intrahepatic cholangiocarcinoma (ICC) is an aggressive primary liver malignancy with limited survival, largely due to delayed diagnosis, recurrence and limited effective therapeutic options. Radiomics- and artificial intelligence (AI)-based imaging models have emerged as promising tools to improve noninvasive detection and differentiation of ICC. We conducted a systematic review and meta-analysis to evaluate the diagnostic performance of radiomics-based AI models for ICC. Methods: A systematic search of PubMed, Embase, Scopus, and the Cochrane Library was performed from inception through 2025 in accordance with PRISMA guidelines. Studies assessing radiomics- or AI-based models derived from CT, MRI, PET, or ultrasound for differentiation of ICC from other hepatic lesions were included. Pooled sensitivity, specificity, positive likelihood ratio (PLR), and negative likelihood ratio (NLR) were estimated using a bivariate random-effects model. Study quality and risk of bias were assessed using the Radiomics Quality Score (RQS) and QUADAS-2 tools. Results: Twenty retrospective studies comprising 8746 participants were included. Across pooled validation and test datasets, radiomics-based AI models demonstrated a pooled sensitivity of 0.77 (95% CI, 0.69–0.84) and specificity of 0.88 (95% CI, 0.78–0.94) for differentiating ICC from non-ICC hepatic lesions. The pooled PLR was 6.81 (95% CI, 3.51–13.2), and the pooled NLR was 0.23 (95% CI, 0.09–0.61). CT-based models showed higher diagnostic performance compared with MRI and ultrasound. Subgroup and meta-regression analyses identified imaging modality, contrast phase, segmentation strategy, and validation approach as contributors to interstudy heterogeneity. The overall methodological quality demonstrated a mean Radiomics Quality Score (RQS) of 14.0 (range 11–24), corresponding to approximately 39% of the maximum achievable score. 
External validation cohorts were incorporated in 60% of the studies, although adherence to standardized feature reproducibility frameworks varied. Conclusions: Radiomics-based AI models demonstrate clinically meaningful diagnostic accuracy for noninvasive differentiation of ICC and may complement conventional imaging in preoperative evaluation. Prospective, multicenter studies with standardized imaging protocols and rigorous external validation are required before routine clinical adoption. Full article
(This article belongs to the Section Systematic Review or Meta-Analysis in Cancer Research)
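The likelihood ratios in this meta-analysis relate to sensitivity and specificity by PLR = sens / (1 − spec) and NLR = (1 − sens) / spec. Note that the review pools the LRs with a bivariate random-effects model, so its reported PLR (6.81) and NLR (0.23) need not equal the naive plug-in of the pooled point estimates shown here:

```python
# Diagnostic likelihood ratios from sensitivity and specificity:
#   PLR = sens / (1 - spec)    NLR = (1 - sens) / spec
def likelihood_ratios(sensitivity, specificity):
    plr = sensitivity / (1.0 - specificity)
    nlr = (1.0 - sensitivity) / specificity
    return plr, nlr

# Pooled point estimates from the review (sens 0.77, spec 0.88).
plr, nlr = likelihood_ratios(0.77, 0.88)
```

The plug-in PLR comes out near 6.4 versus the bivariate-model estimate of 6.81, a reminder that pooled LRs are not simply derived from pooled accuracy.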

10 pages, 238 KB  
Article
Feasibility of Artificial Intelligence Models for Longitudinal CT Analysis of Epicardial Adipose Tissue After Immunotherapy
by Eliodoro Faiella, Stefania Lamja, Rebecca Casati, Michele Tondo, Raffaele Ragone, Adriano Redi, Elva Vergantino, Bruno Beomonte Zobel, Francesco Grasso and Domiziana Santucci
Diagnostics 2026, 16(6), 852; https://doi.org/10.3390/diagnostics16060852 - 13 Mar 2026
Abstract
Background: Epicardial adipose tissue (EAT) is an imaging-derived biomarker increasingly associated with cardiovascular inflammation and metabolic risk. Computed tomography (CT) allows for accurate volumetric quantification of EAT, but the clinical interpretation of longitudinal changes remains challenging. Artificial Intelligence (AI) may provide additional value by identifying patterns and predictors of EAT variation. Purpose: To evaluate longitudinal changes in CT-derived EAT volume and to assess the feasibility and performance of AI-based models in discriminating patients with EAT increase after immunotherapy. Methods: In this retrospective single-center study, EAT was volumetrically segmented on baseline and follow-up CT scans. EAT change (ΔEAT) was calculated, and patients were dichotomized according to EAT increase (ΔEAT > 0). Three supervised AI models—Support Vector Machine (SVM), Artificial Neural Network (ANN), and Extreme Gradient Boosting (XGBoost)—were trained using imaging-derived and clinical variables. Given the limited sample size and class imbalance, stratified two-fold cross-validation was adopted. Model performance was assessed using AUC, accuracy, and F1-score, and model interpretability was explored using permutation importance. Results: EAT volume showed a statistically significant increase at follow-up. In the AI analysis, SVM and ANN demonstrated good discriminative performance, with ANN achieving the highest AUC (~0.90). XGBoost failed to show meaningful predictive capability. Baseline EAT volume and follow-up duration emerged as the most relevant features. Conclusions: AI-based models, particularly SVM and ANN, are feasible tools for the analysis of CT-derived EAT changes, even in small cohorts. These results support the integration of AI-assisted EAT assessment into imaging-based cardio-oncology research. Full article
(This article belongs to the Special Issue Machine-Learning-Based Disease Diagnosis and Prediction)
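The labeling step the classifiers are trained on is ΔEAT = follow-up volume − baseline volume, dichotomized as 1 when ΔEAT > 0. A minimal sketch with invented patient volumes (the units, mL, are an assumption for the example):

```python
# Compute DeltaEAT and the binary "EAT increase" label (1 if DeltaEAT > 0).
def label_eat_change(baseline_ml, followup_ml):
    delta = followup_ml - baseline_ml
    return delta, 1 if delta > 0 else 0

# Made-up patients: (id, baseline volume, follow-up volume).
patients = [
    ("P1", 110.0, 128.5),   # increase  -> label 1
    ("P2",  95.0,  92.1),   # decrease  -> label 0
    ("P3", 140.2, 140.2),   # no change -> label 0 (strict > 0)
]
labels = {pid: label_eat_change(b, f) for pid, b, f in patients}
```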
26 pages, 5125 KB  
Article
A Hybrid Ensemble-Based Intelligent Decision Framework for Risk-Aware Photovoltaic Panel Soiling Detection and Cleaning
by Bakht Muhammad Khan, Abdul Wadood, Hani Albalawi, Shahbaz Khan, Aadel Mohammed Alatwi and Omar H. Albalawi
Electronics 2026, 15(6), 1192; https://doi.org/10.3390/electronics15061192 - 12 Mar 2026
Abstract
Soiling of solar panels has a considerable impact on the performance of photovoltaic (PV) systems, emphasizing the importance of developing reliable decision support tools for solar panel cleaning. Although recent convolutional neural network (CNN)-based models, including lightweight architectures such as SolPowNet, have demonstrated high classification accuracy, their performance can be sensitive to dataset variability and domain shifts encountered in real-world PV environments. Motivated by the lightweight design philosophy of SolPowNet, this paper proposes a hybrid and ensemble-based intelligent cleaning decision framework that integrates classical image processing, machine learning, and deep learning techniques. The proposed approach combines physically interpretable handcrafted texture and sharpness features classified using a Random Forest model with a pretrained MobileNetV3-Small CNN through a conservative OR-based ensemble fusion strategy. In addition, a probability-driven Soiling Index (SI) is introduced to translate classification confidence into actionable cleaning decisions, including no cleaning, light cleaning, and full cleaning. Experimental results on multiple PV image datasets demonstrate that, under domain-shift conditions where individual models may experience performance degradation, the proposed ensemble framework achieves an accuracy of up to 85.93% and attains a dusty-panel detection rate of 0.90 on the unseen dataset. On the in-distribution evaluation, the proposed OR-ensemble achieves an average accuracy of 0.9663 ± 0.0177 with dusty recall of 0.9896 ± 0.0104 over repeated stratified runs. Importantly, the conservative fusion strategy minimizes high-risk false negative cases while avoiding excessive misclassification of clean panels.
Overall, the proposed framework offers a robust, scalable, and deployment-ready solution for intelligent PV cleaning decision support, advancing CNN-based soiling detection toward practical and risk-aware operation and maintenance systems. Full article
(This article belongs to the Special Issue Image Processing Based on Convolution Neural Network: 2nd Edition)
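The conservative OR fusion can be sketched in a few lines: flag "dusty" if either the Random Forest or the CNN flags it (trading false positives for fewer high-risk false negatives), then map a probability-driven Soiling Index to a cleaning action. The SI definition (mean dusty probability) and the 0.3/0.7 cutoffs below are assumptions for illustration; the paper's exact formulation may differ.

```python
# OR-based ensemble fusion plus an assumed Soiling Index (SI) decision rule.
# p_rf_dusty / p_cnn_dusty: each model's probability that the panel is dusty.
def ensemble_decision(p_rf_dusty, p_cnn_dusty, threshold=0.5):
    # Conservative fusion: dusty if EITHER model crosses the threshold.
    dusty = (p_rf_dusty >= threshold) or (p_cnn_dusty >= threshold)
    # Assumed SI: mean of the two dusty probabilities (not the paper's spec).
    si = (p_rf_dusty + p_cnn_dusty) / 2.0
    if not dusty:
        action = "no cleaning"
    elif si < 0.7:
        action = "light cleaning"
    else:
        action = "full cleaning"
    return dusty, si, action

decision = ensemble_decision(0.2, 0.6)  # models disagree -> still flagged dusty
```

Because one confident model is enough to trigger cleaning, a borderline disagreement like the example above still produces an intervention, which is the intended risk-aware behavior.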

33 pages, 4366 KB  
Article
Structured and Factorized Multi-Modal Representation Learning for Physiological Affective State and Music Preference Inference
by Wenli Qu and Mu-Jiang-Shan Wang
Symmetry 2026, 18(3), 488; https://doi.org/10.3390/sym18030488 - 12 Mar 2026
Abstract
Emotions and affective responses are core intervention targets in music therapy. Through acoustic elements, music can evoke emotional responses at physiological and neurological levels, influencing cognition and behavior while providing an important dimension for evaluating therapeutic efficacy. However, emotions are inherently abstract and difficult to represent directly. Artificial intelligence models therefore provide a promising tool for modeling and quantifying such abstract affective states from physiological signals. In this paper, we propose a structured and explicitly factorized multi-modal representation learning framework for joint affective state and preference inference. Instead of entangling heterogeneous dynamics within monolithic encoders, the framework decomposes representation learning into cross-channel interaction modeling and intra-channel temporal–spectral organization modeling. The framework integrates electroencephalography (EEG), peripheral physiological signals (GSR, BVP, EMG, respiration, and temperature), and eye-movement data (EOG) within a unified temporal modeling paradigm. At its core, a Dynamic Token Feature Extractor (DTFE) transforms raw time series into compact token representations and explicitly factorizes representation learning into (i) explicit channel-wise cross-series interaction modeling and (ii) temporal–spectral refinement via learnable frequency-domain gating. These complementary structural modules are implemented through Cross-Series Intersection (CSI) and Intra-Series Intersection (ISI), which perform low-rank channel dependency learning and adaptive spectral modulation, respectively. A hierarchical cross-modal fusion strategy integrates modality-level tokens in a representation-consistent and interaction-aware manner, enabling coordinated modeling of neural, autonomic, and attentional responses. The entire framework is optimized under a unified multi-task objective for valence, arousal, and liking prediction. 
Experiments on the DEAP dataset demonstrate consistent improvements over state-of-the-art methods. The model achieves 98.32% and 98.45% accuracy for valence and arousal prediction, 97.96% for quadrant classification in single-task evaluation, and 92.8%, 91.8%, and 93.6% accuracy for valence, arousal, and liking in joint multi-task settings. Overall, this work establishes a structure-aware and factorized multi-modal representation learning framework for robust affective decoding and intelligent music therapy systems. Full article
(This article belongs to the Section Computer)

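The abstract above describes intra-series temporal–spectral refinement via learnable frequency-domain gating. The toy sketch below shows the underlying mechanism: transform a 1-D physiological channel to the frequency domain, apply a per-bin gate, and transform back. The gate values here are fixed for illustration; in the paper's ISI module they would be learnable parameters, and the signal is synthetic.

```python
# Minimal numpy sketch of frequency-domain gating on a single channel.
# Gate values are hand-set here; a learnable version would optimize them.
import numpy as np

def spectral_gate(signal: np.ndarray, gate: np.ndarray) -> np.ndarray:
    """Apply a real-valued per-bin gate to the rFFT of a 1-D signal,
    then return the inverse transform at the original length."""
    spectrum = np.fft.rfft(signal)
    return np.fft.irfft(spectrum * gate, n=signal.shape[0])

# Toy signal: a slow 2 Hz component plus a fast 40 Hz component,
# sampled at 128 Hz over one second.
t = np.linspace(0, 1, 128, endpoint=False)
x = np.sin(2 * np.pi * 2 * t) + 0.3 * np.sin(2 * np.pi * 40 * t)

gate = np.zeros(128 // 2 + 1)  # rFFT of a length-128 signal has 65 bins
gate[:8] = 1.0                 # pass only the lowest-frequency bins

y = spectral_gate(x, gate)     # y now tracks the 2 Hz component alone
```

Because both tones fall exactly on FFT bins, the gated output recovers the slow component almost exactly; with real EEG or peripheral signals the gate instead reweights broad spectral bands rather than isolating single tones.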
16 pages, 592 KB  
Article
Artificial Intelligence and Interreligious Dialogue: Emerging Implications for Faith-Based Organizations
by Jeff Clyde G. Corpuz
Religions 2026, 17(3), 354; https://doi.org/10.3390/rel17030354 - 12 Mar 2026
Abstract
This article advances a constructive theological account of Human-Centered Artificial Intelligence (HCAI) for Faith-Based Organizations (FBOs) engaged in interreligious dialogue (IRD). Drawing on a practical–theological methodology, the study follows four interrelated steps—descriptive–empirical, interpretive, normative, and pragmatic—to examine how AI-enabled practices such as translation, textual analysis, and cross-scriptural synthesis are reshaping contemporary forms of dialogue among religious and non-religious communities. Through the empirical mapping of current AI applications, interdisciplinary interpretation informed by social and ethical analysis, and normative theological evaluation, the study identifies both the opportunities and risks of AI-mediated IRD. On this basis, it synthesizes three interdependent dimensions that structure the proposed framework: (1) Ethics, which clarifies the moral purpose and values guiding AI use; (2) Technology, which addresses mediation, governance, and power in AI systems; and (3) Humans, which centers institutional responsibility, agency, and sustainability within FBOs. From this synthesis, the article introduces an AI–IRD Integration Framework that translates theological and ethical reflection into practical guidance for responsible AI adoption. The study contributes an original interdisciplinary perspective that equips religious leaders, theologians, policymakers, and faith communities to engage AI not merely as a tool, but as a human-centered partner in fostering inclusive, sustainable, and ethically grounded dialogue in an era of AI–human coexistence. Full article
(This article belongs to the Special Issue Interreligious Dialogue: Validity and Sustainability)

32 pages, 4555 KB  
Review
AI-Enabled Digital Twins in Agriculture
by Marios Tsaousidis, Theofanis Kalampokas, Eleni Vrochidou and George A. Papakostas
AI 2026, 7(3), 108; https://doi.org/10.3390/ai7030108 - 12 Mar 2026
Abstract
Digital Twins (DTs) have emerged within the last decade due to the adequate maturity of several key technologies contributing to the realization of real-time virtual–physical world synchronization. Advancements in sensing, connectivity, computing processing power, and artificial intelligence have contributed to the deployment of DTs in several application sectors, such as agriculture. This work aims to provide a scoping review of recent advancements in digital twin technologies and agricultural applications. Results indicate a particular focus on plant-level models, soil moisture, and machinery, while most works are based on drone imagery combined with machine learning routines. Several works use the term DTs rather loosely, often describing systems that resemble decision support tools rather than a fully synchronized virtual–physical setup. Data integration emerges as the most important bottleneck, especially when the system mixes satellite data, local sensory data, and simulation outputs. Yet it is suggested that DTs could eventually support more adaptive and resource-efficient farm management. However, the field still lacks common frameworks and long-term evaluations. Based on this review, progress depends on better data-handling pipelines, clearer definitions of operational DTs, and more attention to the economic and practical constraints faced by farmers rather than just technical proofs of concept. Full article

13 pages, 1037 KB  
Systematic Review
Artificial Intelligence in Esophagectomy: A Systematic Review
by Vladimir Aleksiev, Daniel Markov, Kristian Bechev, Desislav Stanchev, Filip Shterev and Galabin Markov
J. Clin. Med. 2026, 15(6), 2169; https://doi.org/10.3390/jcm15062169 - 12 Mar 2026
Abstract
Background: Esophagectomy remains a technically demanding oncologic procedure with substantial morbidity, despite ongoing advances in minimally invasive and robotic techniques. Limitations in intraoperative visualization and anatomical recognition contribute to complications such as nerve injury and bleeding. Artificial intelligence (AI)-based intraoperative video analysis has emerged as a potential adjunct to enhance surgical perception and safety, but its application in esophagectomy has not been comprehensively reviewed. Methods: A systematic review was conducted in accordance with PRISMA guidelines. PubMed, Scopus, and Web of Science were searched without a lower date limit to identify eligible studies published up to January 2026, capturing early and contemporary applications of intraoperative AI in esophagectomy. Human studies involving any surgical approach were included. Data on the AI task, methodology, validation strategy, performance metrics, and reported clinical outcomes were extracted. Risk of bias was assessed using the ROBINS-I tool. Results: Six studies met the inclusion criteria, predominantly evaluating AI-driven analysis of intraoperative video during minimally invasive or robotic esophagectomy. Reported applications included real-time anatomical structure recognition, recurrent laryngeal nerve segmentation, detection of excessive nerve traction, instrument and event recognition, and surgical phase identification. Across studies, AI systems demonstrated performance comparable to expert surgeons for selected tasks and achieved real-time or near–real-time inference. One study reported earlier detection of excessive recurrent laryngeal nerve traction compared to conventional nerve integrity monitoring. However, most studies were retrospective, single-center, and feasibility-focused, with limited external validation and minimal assessment of patient-centered clinical outcomes.
Conclusions: Artificial intelligence-based intraoperative analysis in esophagectomy is increasingly achievable and may enhance anatomical recognition, intraoperative risk detection, and procedural awareness. Nevertheless, current evidence remains preliminary, heterogeneous, and largely exploratory. Prospective, multicenter studies with standardized reporting and clinically meaningful outcome evaluation are required before routine implementation. Until such data are available, AI should be regarded as a complementary intraoperative tool rather than a standalone clinical decision-making system. Full article
(This article belongs to the Special Issue Recent Clinical Advances in Esophageal Surgery)
