Search Results (2,874)

Search Parameters:
Keywords = artificial intelligence capability

51 pages, 2921 KB  
Systematic Review
Uncovering the Mechanisms of Organisational Resilience: A Critical Realist Systematic Review
by Moataz Mahmoud, Ka Ching Chan and Mustafa Ally
Sustainability 2026, 18(10), 5003; https://doi.org/10.3390/su18105003 (registering DOI) - 15 May 2026
Abstract
This systematic review examines how organisational resilience is conceptualised, enacted, and enabled in the Digital Age, characterised by Artificial Intelligence (AI), Generative AI, the Internet of Things (IoT), Big Data, and Robotics. Despite their transformative potential, these technologies are often treated as peripheral tools rather than core mechanisms in resilience architectures. Adopting a critical realist paradigm, we conducted a Systematic Literature Review (SLR) following the PRISMA 2020 protocol to review thirty (30) peer-reviewed empirical studies (2017–present). A pre-SLR conceptual framework, linking Business Intelligence and Responsiveness constructs, guided data extraction and synthesis. Building on this, we propose a conceptual framework and explanatory model grounded in the Context–Mechanism–Outcome logic. The model distinguishes generative mechanisms (real domain), organisational responses (actual domain), and observable indicators (empirical domain). The review identifies Collective Capability (CC), Adaptive Capability (AC) and Dynamic Capability (DC) mechanisms as key generative powers, with Digital Age enablers embedded within Adaptive Capability (AC) and Dynamic Capability (DC). Together, these mechanisms contribute to Systemic Preparedness (SP), Rapid Recovery (RR) and Generative Stability (GS), thereby supporting the emergence of Organisational Resilience (OR). This reconceptualises resilience as an emergent, non-linear outcome of mechanism interactions, offering a unified direction. Future research should prioritise longitudinal multi-case studies and quantitative testing of Context–Mechanism–Outcome configurations, supported by mixed-method designs to validate and refine the proposed framework. Full article
22 pages, 1068 KB  
Article
Public Health Responsible AI Capability (PH-RAIC) Framework: A Conceptual Model for Integrating AI into Public Health Agencies
by Arnob Zahid, Ravishankar Sharma and Rezwan Ahmed
Healthcare 2026, 14(10), 1364; https://doi.org/10.3390/healthcare14101364 - 15 May 2026
Abstract
Background: Artificial intelligence (AI) is transitioning from experimental pilots to core public health functions such as disease surveillance, resource planning, and analysis of social and structural determinants of health. Yet, health data collection and stewardship remain fragmented across the globe; some jurisdictions still rely on paper-based systems, while others operate noninteroperable digital systems that can exacerbate inequities. Treating health data as a global good therefore requires governance that enables innovation while protecting rights, safety, and trust. This study aims to develop a conceptual meso-level capability framework that translates responsible AI principles into organizational practices for public health agencies. Methods: We developed the framework using a targeted narrative synthesis of contemporary governance guidance and documented early implementation experiences, purposively selected to represent major strands of current practice and debate. A structured expert panel consultation (n = 9) was subsequently conducted to assess the face validity and content validity of the proposed framework domains. Results: We propose the Public Health Responsible AI Capability (PH-RAIC) framework, which adapts principles of transparency, accountability, fairness, ethics, and safety to institutional realities faced by public health agencies. PH-RAIC identifies four interdependent capability domains: (1) strategic governance and alignment; (2) data and infrastructure stewardship; (3) participatory design, equity, and public engagement; and (4) lifecycle oversight, learning, and decommissioning. All four domains achieved Content Validity Index (CVI) values ≥ 0.85 in the expert panel consultation. The framework is presented as a conceptual, meso-level model that has undergone preliminary expert validation but requires further empirical testing in real-world agency settings. 
Conclusions: PH-RAIC links these domains to example practices, diagnostic questions, and illustrative measurement indicators to help agencies navigate efficiency–equity trade-offs and strengthen legitimacy and accountability in AI-enabled public health systems. It offers a validated conceptual basis for future empirical testing and operational readiness tools. Full article
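The expert-panel validation above reports a Content Validity Index (CVI) of at least 0.85 for every domain with n = 9 panelists. A minimal sketch of the item-level CVI computation is below; the function name and the 4-point relevance scale convention are assumptions for illustration, not details taken from the paper.

```python
def item_cvi(ratings, relevant_levels=(3, 4)):
    """Item-level Content Validity Index (I-CVI): the fraction of
    panelists rating an item as relevant (conventionally 3 or 4 on
    a 4-point relevance scale, the convention assumed here)."""
    if not ratings:
        raise ValueError("at least one rating is required")
    return sum(r in relevant_levels for r in ratings) / len(ratings)

# With 9 experts, a domain clears a 0.85 bar only if at least 8 of
# the 9 rate it relevant (8/9 ≈ 0.889).
panel = [4, 4, 3, 4, 2, 4, 3, 4, 4]  # one dissenting rating of 2
print(round(item_cvi(panel), 3))     # → 0.889
```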
31 pages, 5601 KB  
Article
Protection-Oriented Non-Intrusive Arc Fault Detection in Photovoltaic DC Systems via Rule–AI Fusion
by Lu HongMing and Ko JaeHa
Sensors 2026, 26(10), 3138; https://doi.org/10.3390/s26103138 - 15 May 2026
Abstract
Series arc faults on the DC side of photovoltaic (PV) systems are a critical hazard that can trigger system fires. Conventional contact-based detection methods suffer from cumbersome installation and high retrofit cost, whereas existing non-contact approaches mostly rely on megahertz-level high-frequency sampling and therefore require expensive radio-frequency instrumentation or high-performance computing platforms. As a result, it remains difficult to simultaneously achieve strong interference immunity and real-time performance on low-cost embedded devices with limited resources. To address this engineering paradox between high-frequency sampling and constrained computational capability, this paper proposes a fully embedded, non-contact arc fault detection system based on a 12–80 kHz low-frequency sub-band selection strategy. By exploiting the physical characteristic of broadband energy elevation induced by arc faults, the proposed strategy avoids dependence on high-bandwidth hardware. Guided by this strategy, a Moebius-topology coaxial shielded loop antenna is employed as the near-field sensor, while an ultra-simplified passive analog front end is constructed directly by using the on-chip programmable gain amplifier and analog-to-digital converter of the microcontroller unit, enabling efficient signal acquisition and fast Fourier transform processing within the target sub-band. To cope with complex background noise in the low-frequency range, an environment-adaptive baseline mechanism based on exponential moving average and exponential absolute deviation is developed for dynamic decoupling. In addition, a lightweight INT8-quantized multilayer perceptron is introduced as a nonlinear auxiliary module, thereby forming a robust hybrid decision architecture with complementary rule-based and artificial intelligence components. 
Experimental results show that, under the tested household, laboratory, and PV-site conditions, the proposed system achieved an overall detection rate of 97%, while the remaining 3% mainly corresponded to failed ignition or non-sustained arc attempts rather than persistent false triggering during normal monitoring. Full article
(This article belongs to the Topic AI Sensors and Transducers)
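The environment-adaptive baseline described in the abstract (an exponential moving average of the sub-band energy plus an exponential absolute deviation of its spread) can be sketched as follows. The update rule, the gain `alpha`, and the threshold factor `k` are illustrative assumptions, not the paper's exact parameters.

```python
def update_baseline(band_energy, mean, dev, alpha=0.05, k=4.0):
    """One update of an adaptive baseline: an exponential moving
    average (EMA) tracks the normal 12-80 kHz sub-band energy, and
    an exponential absolute deviation (EAD) tracks its spread. A
    sample is flagged when it exceeds mean + k * dev."""
    flagged = band_energy > mean + k * dev
    # Freeze the statistics while a sample is flagged so a sustained
    # arc signature is not absorbed into the background baseline.
    if not flagged:
        mean = (1 - alpha) * mean + alpha * band_energy
        dev = (1 - alpha) * dev + alpha * abs(band_energy - mean)
    return mean, dev, flagged
```

In the paper's hybrid architecture this rule-based flag is fused with the verdict of an INT8-quantized MLP; only the baseline update is sketched here.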
9 pages, 3746 KB  
Article
Ultrafast Physical Random Bit Generation Based on an Integrated Mutual Injection DFB Laser
by Jianyu Yu, Pai Peng, Qi Zhou, Pan Dai, Xiangfei Chen and Yi Yang
Photonics 2026, 13(5), 493; https://doi.org/10.3390/photonics13050493 (registering DOI) - 15 May 2026
Abstract
Ultrafast physical random bit generators (PRBGs) are essential components for modern applications in secure communication, quantum cryptography, encrypted optical fiber sensing and artificial intelligence. While optical chaos-based PRBGs offer high-speed capabilities, conventional systems often rely on discrete components that suffer from system complexity and environmental instability. This paper proposes and experimentally demonstrates a robust, integrated solution using a two-section mutual injection DFB laser. The device was fabricated using the reconstruction equivalent chirp (REC) technique, which provides precise control over grating phase variation while utilizing low-cost, high-volume fabrication methods. The laser sections, each measuring 450 μm in length, were designed with a free-running wavelength difference of 0.3 nm to ensure a flat optical spectrum and enhanced chaotic dynamics. By optimizing the bias currents, we achieved a chaos RF bandwidth of 20.1 GHz. Notably, the resulting chaotic signal lacks time-delayed signatures, which simplifies the randomness extraction process. To generate random bits, the chaotic waveform was sampled by an 8-bit analog-to-digital converter at 100 GSa/s. Following post-processing through delay-subtracting and the extraction of the four least significant bits (4-LSBs), we realized a total physical random bit rate of 400 Gb/s. The randomness of the generated sequence was successfully verified using the NIST SP 800-22 statistical test suite. This approach offers a compact, energy-efficient, and high-performance integrated chaotic source suitable for secure communication and high-performance computation. Full article
(This article belongs to the Special Issue Advanced Lasers and Their Applications, 3rd Edition)
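The post-processing chain in the abstract (delay-subtracting the 8-bit sample stream, then keeping the four least significant bits of each difference) can be sketched roughly as below; the delay value and the function shape are assumptions for illustration, not the paper's implementation.

```python
def extract_bits(samples, delay, lsb=4):
    """Delay-subtract an 8-bit sample stream modulo 256, then keep
    the `lsb` least significant bits of each difference, MSB first."""
    mask = (1 << lsb) - 1
    bits = []
    for i in range(delay, len(samples)):
        diff = (samples[i] - samples[i - delay]) & 0xFF  # 8-bit wraparound
        kept = diff & mask
        bits.extend((kept >> b) & 1 for b in range(lsb - 1, -1, -1))
    return bits

# Each retained sample contributes 4 bits; at 100 GSa/s this is the
# 400 Gb/s aggregate rate quoted in the abstract.
print(extract_bits([10, 200, 55], delay=1))  # → [1, 1, 1, 0, 1, 1, 1, 1]
```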
30 pages, 1418 KB  
Review
Digital Twins as an Emerging Solution in AI-Driven Modeling and Metrology of Industry 5.0/6.0 Production Systems
by Izabela Rojek and Dariusz Mikołajewski
Appl. Sci. 2026, 16(10), 4942; https://doi.org/10.3390/app16104942 (registering DOI) - 15 May 2026
Abstract
This article discusses Digital Twins (DTs) as a solution for artificial intelligence (AI)-based modeling and metrology in Industry 5.0 and Industry 6.0 manufacturing systems. DTs enable the creation of real-time virtual replicas of physical assets, processes, and systems, increasing transparency, prediction, and optimization in manufacturing environments. By integrating AI, machine learning (ML), and advanced sensor data, DTs support adaptive, self-learning production models capable of responding to dynamic operating conditions. In metrology, DTs improve measurement accuracy, traceability, and quality assurance by continuously synchronizing data between the physical and virtual domains. This technology improves process simulation, predictive maintenance, and fault detection, reducing downtime and operating costs. Furthermore, DTs facilitate human-centric production by enabling collaborative decision-making between intelligent systems and skilled workers. Their role in sustainable production is significant, supporting energy optimization, waste reduction, and lifecycle performance analysis. In Industry 6.0, DTs go beyond cyber-physical integration to encompass cognitive intelligence, ethical automation, and autonomous optimization. However, challenges remain in data interoperability, cybersecurity, model scalability, and real-time computational performance. DTs represent a revolutionary framework for the development of intelligent, resilient, and precise manufacturing ecosystems in next-generation industrial systems. Full article
(This article belongs to the Special Issue Recent Advances and Future Challenges in Manufacturing Metrology)
16 pages, 750 KB  
Review
Role of Artificial Neural Networks in Optimizing Bioconversion of Antiretroviral Drugs: A Review
by Nelson T. Tsotetsi, Ndiwanga F. Rasifudi, Beauty Magage and Lukhanyo Mekuto
BioMedInformatics 2026, 6(3), 30; https://doi.org/10.3390/biomedinformatics6030030 - 15 May 2026
Abstract
Antiretroviral drugs (ARVDs) remain the cornerstone of HIV/AIDS management, but their therapeutic efficacy and safety are highly influenced by bioconversion processes such as hepatic metabolism and enzymatic transformation. Variability in metabolic pathways, mediated by cytochrome P450 enzymes and other liver-based systems, contributes to interindividual differences in drug response, toxicity, and resistance. Recent advances in artificial intelligence, particularly artificial neural networks (ANNs), offer promising tools for modeling and optimizing these complex bioconversion processes. ANNs are capable of learning nonlinear relationships from high-dimensional datasets, making them ideal for predicting the pharmacokinetic parameters, enzyme–substrate interactions, and metabolic stability of ARVDs. This review explores the emerging role of ANNs in understanding and optimizing the metabolic transformation of antiretroviral agents. Key applications are discussed, including prediction of drug–enzyme interactions, in silico modeling of hepatic clearance, and simulation of enzyme kinetics. The integration of molecular descriptors, omics data, and clinical parameters into ANN models allows for improved prediction accuracy and personalized therapy. Furthermore, ANN-based tools can aid in early-stage drug development by identifying metabolic liabilities and guiding structural modifications to enhance metabolic stability. Despite their potential, challenges such as data scarcity, model interpretability, and standardization remain. Future research should focus on hybrid models combining ANNs with mechanistic pharmacokinetics, the incorporation of real-world patient data, and validation against experimental outcomes. Overall, ANNs represent a powerful approach to optimizing ARVD bioconversion, with the potential to improve efficacy, reduce toxicity, and support the development of next-generation antiretroviral therapies. Full article
52 pages, 1516 KB  
Review
Multinuclear NMR and MRI Beyond Proton Imaging: Principles, Contrast Mechanisms, and Applications in Materials and Biomedicine
by Dorota Bartusik-Aebisher, Klaudia Dynarowicz, Barbara Smolak, Rostyslav Marunych, Wiesław Guz and David Aebisher
Int. J. Mol. Sci. 2026, 27(10), 4384; https://doi.org/10.3390/ijms27104384 - 14 May 2026
Abstract
Magnetic resonance techniques have evolved beyond conventional proton-based imaging, enabling access to a broader range of nuclei that provide complementary structural, functional, and molecular information. This review presents a comprehensive overview of multinuclear NMR and MRI in solid and soft materials as well as in biomedical applications, with particular emphasis on 1H, 13C, 31P, 23Na, and 19F nuclei. Proton-based methods remain the foundation of magnetic resonance due to their high sensitivity and widespread applicability, offering insights into molecular mobility, hydration, and microstructural heterogeneity. In contrast, heteronuclear approaches enable more specific characterization of chemical structure (13C), phosphorus-containing functional groups and membranes (31P), ionic homeostasis and transport (23Na), and exogenous tracers with negligible biological background (19F). Together, these techniques extend magnetic resonance from primarily anatomical imaging toward functional, metabolic, and molecular-level analysis. The review further discusses key hardware aspects, including magnetic field strength and radiofrequency coil design, highlighting the trade-offs between low- and high-field systems and the growing importance of multinuclear coil architectures. For example, because 1H, 23Na, 31P, and 19F resonate at different Larmor frequencies, multinuclear experiments require dedicated or multi-tuned RF coils that balance sensitivity, field homogeneity, and decoupling between channels. Mechanisms of contrast generation are examined in detail, distinguishing between endogenous sources—such as water, ions, and metabolites—and exogenous contrast agents, including gadolinium-, manganese-, and fluorine-based compounds, as well as targeted and theranostic platforms. A comparative framework of endogenous and exogenous signals is presented, emphasizing their complementary roles in balancing safety, specificity, and sensitivity. 
Finally, the opportunities and challenges of multinuclear magnetic resonance are critically evaluated, including limitations in sensitivity, signal-to-noise ratio, data interpretation in heterogeneous systems, and technical complexity. Emerging directions such as ultrahigh-field imaging, advanced RF technologies, hyperpolarization, and artificial intelligence-assisted reconstruction are discussed as key drivers for future development. Overall, multinuclear NMR and MRI represent a powerful and expanding toolbox for probing complex material and biological systems, with the potential to significantly enhance diagnostic capabilities and deepen our understanding of structure–function relationships across multiple scales. Full article
(This article belongs to the Special Issue Application of NMR Spectroscopy in Biomolecules: 2nd Edition)
17 pages, 5771 KB  
Article
Leveraging Artificial Intelligence in Hydrology to Process Citizen Science Photos of Water Levels
by Abhinna Manandhar and Christopher S. Lowry
Hydrology 2026, 13(5), 134; https://doi.org/10.3390/hydrology13050134 - 14 May 2026
Abstract
Emerging Large Language Model capabilities create opportunities for applying AI reasoning across various domains with minimal technical complexity. Motivated by the development of citizen scientists submitting photos of water levels on staff gauges and the increasing need for hydrologic data in ungauged watersheds, this research develops an artificial intelligence approach to measuring stream stage across an existing citizen science monitoring network. To lower the barrier to entry for professional scientists, this research develops a methodology leveraging a Large Language Model (LLM) to extract water levels from images submitted by citizen scientists, and then follows a human-in-the-loop workflow for validating the final results, leaving space for correcting reasoning errors and hallucinations. Various techniques, such as labeling the input image, are also explored in this research to extract maximum accuracy from the LLM. Full article
27 pages, 1555 KB  
Article
Integrating AI and Big Data for Firm Resilience: The Mediating Roles of AI Capabilities and Supply Chain Agility
by Thamir Hamad Alaskar
Systems 2026, 14(5), 554; https://doi.org/10.3390/systems14050554 (registering DOI) - 14 May 2026
Abstract
The integration of Artificial Intelligence (AI) and Big Data is increasingly associated with firms’ resilience in dynamic business environments. This study examines the relationships between AI–Big Data integration, AI capabilities, supply chain agility, and firm resilience, with particular attention paid to the mediating roles of AI capabilities and supply chain agility. Data were collected from 475 experts across firms in Saudi Arabia and analyzed using Partial Least Squares Structural Equation Modeling (PLS-SEM). The results indicate that AI–Big Data integration is positively associated with AI capabilities and supply chain agility, both of which, in turn, significantly contribute to firm resilience. In addition, AI capabilities show a direct positive relationship with supply chain agility. The findings further confirm the mediating roles of AI capabilities and supply chain agility in strengthening organizational resilience. This study contributes to the Dynamic Capabilities View (DCV) and Knowledge-Based View (KBV) by empirically examining how integrated AI–Big Data relates to capability development and firm outcomes. The results also provide implications for managers seeking to align AI and Big Data initiatives with supply chain capabilities to support resilience in dynamic environments. Full article
8 pages, 896 KB  
Proceeding Paper
OSIRIS—Generation of System-Specific Failure Cases Using Artificial Intelligence Based on Information from Abstract System Models
by Durga Sri Sharan Katabathula, Marcel Mischke, Sebastian Stoppa and Robin Frank
Eng. Proc. 2026, 133(1), 134; https://doi.org/10.3390/engproc2026133134 (registering DOI) - 13 May 2026
Abstract
The importance of system safety rises with the introduction of novel technologies in the aviation industry. With the rise of system complexity, regular safety practices include iterative workflows and heavy reliance on expert knowledge. For the development of modern, efficient aircraft systems, there is a need for innovative approaches. This paper presents a tool, OSIRIS (operational safety and integrated risk analysis), that supports safety and risk analyses utilizing artificial intelligence (AI) concepts. Developed as a key safety feature within the HADES modeling framework, OSIRIS aligns with an architecture-based design approach for abstract system modeling, adhering to model-based systems engineering (MBSE) principles and standards. It currently aids safety engineers in formulating system failure cases consistent with functional hazard assessments (FHA), representing model-based safety assessment (MBSA) in compliance with SAE ARP4761A. The methodological concepts and their implementation in OSIRIS are demonstrated considering an abstract system model from aeronautical applications. The generated results were evaluated against the system context to confirm compliance with the FHA process required for certification. Future work will explore refining OSIRIS’s capabilities and extending its application cases. Full article
20 pages, 8004 KB  
Review
Advances in Zirconia Crowns: A Comprehensive Review of Strength, Aesthetics, Digital Manufacturing, and Clinical Performance
by Sohaib Fadhil Mohammed, Mohd Firdaus Yhaya, Matheel Al-Rawas and Tahir Yusuf Noorani
Ceramics 2026, 9(5), 50; https://doi.org/10.3390/ceramics9050050 (registering DOI) - 13 May 2026
Abstract
Zirconia serves as a base material in modern restorative dentistry due to its high strength, biocompatibility, and improved aesthetic performance. The aim of this review is to provide an integrated and coherent overview of recent developments in zirconia crowns, focusing on the development of materials, microstructure, digital fabrication processes, optical capabilities, and clinical performance. A narrative literature review was conducted across the most significant databases, such as PubMed, Scopus, Web of Science, and Google Scholar, including publications since 2000, with a focus on systematic reviews, meta-analyses, clinical studies, and materials science studies. The results show that zirconia materials have developed beyond traditional 3Y-TZP systems, characterized by high strength and fracture toughness, to high-translucency and multilayer zirconia (4Y/6Y-PSZ) systems, which provide better aesthetics at the cost of lower mechanical reliability. The implementation of CAD/CAM technologies has enhanced the accuracy of fabrication, marginal fit, and reproducibility, and the development of sintering, surface modification, and bonding protocols has enhanced clinical performance. Recent clinical results have shown high survival rates (around 85–95 percent over 5–10 years), with outcomes depending on the design of the restoration, the zirconia generation, and the functional loading circumstances. Despite these developments, there are still concerns about the durability of bonding, trade-offs between translucency and strength, and the long-term performance of high-translucency zirconia. The development of new technologies, such as additive manufacturing, artificial intelligence-aided design, and bioactive surface modification, is a promising avenue toward improving clinical reliability and performance. Full article
36 pages, 3578 KB  
Article
Task Scheduling Optimization in Cloud-Edge Collaborative Architecture via a Multi-Strategy Artificial Lemming Algorithm
by Yue Zhang and Jianfeng Wang
Mathematics 2026, 14(10), 1659; https://doi.org/10.3390/math14101659 - 13 May 2026
Abstract
In the cloud computing environment, various heterogeneous architectures have emerged, and the cloud-edge collaborative task scheduling architecture has arisen against this background. However, the complexity of cloud-edge heterogeneous architectures significantly restricts the improvement of scheduling performance. Therefore, researchers propose solving this problem by leveraging intelligent optimization algorithms. The Artificial Lemming Algorithm has received extensive attention due to its strong robustness. However, when dealing with the problem of cloud-edge collaborative task scheduling, it still has drawbacks such as long system response time and unstable scheduling performance. In response to these problems, this paper proposes a multi-strategy artificial lemming algorithm (MALA). Specifically, by coordinating high-order Chebyshev polynomials with chaotic mapping to enhance the richness of the initial population, the scheduling response time is indirectly shortened. Secondly, the Adaptive Spatial Search Mechanism is introduced to make up for deficiencies in the exploration stage, enhance the algorithm’s exploration ability, and thereby improve the optimization effect of scheduling satisfaction. Furthermore, the Bernstein-Guided Correction Strategy is introduced to enhance the exploitation capability of the algorithm and improve the stability of cloud-edge scheduling. The experimental results demonstrate that, compared with the baseline algorithms, the proposed MALA reduces the total scheduling cost by at least 3% across cloud-edge collaborative resource scheduling problems of different scales. Full article
(This article belongs to the Special Issue AI, Machine Learning and Optimization)
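The Chebyshev chaotic-map population initialization mentioned in the abstract can be sketched as below. The map order, the seed, and the rescaling into the search bounds are assumed details (the commonly used Chebyshev chaotic map is x_{k+1} = cos(n·arccos(x_k)) on [-1, 1]), not the paper's exact formulation.

```python
import math

def chebyshev_population(pop_size, dim, lb, ub, order=4, seed=0.7):
    """Generate an initial population by iterating the Chebyshev
    chaotic map and rescaling each value from [-1, 1] into [lb, ub].
    Chaotic sequences cover the search space more evenly than many
    pseudo-random draws, enriching the initial population."""
    x = seed
    population = []
    for _ in range(pop_size):
        individual = []
        for _ in range(dim):
            # Clamp guards against tiny floating-point excursions
            # outside acos's domain of [-1, 1].
            x = math.cos(order * math.acos(max(-1.0, min(1.0, x))))
            individual.append(lb + (x + 1.0) / 2.0 * (ub - lb))
        population.append(individual)
    return population
```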
33 pages, 2079 KB  
Review
Quantum Computing as a Disruptive Technology: Implications for Advanced Manufacturing and Industry 5.0
by Ganiyat Salawu and Bright Glen
Appl. Sci. 2026, 16(10), 4856; https://doi.org/10.3390/app16104856 - 13 May 2026
Abstract
Quantum computing is increasingly seen as a disruptive technology capable of expanding the computational limits of advanced manufacturing systems within the emerging Industry 5.0 framework. By utilizing quantum mechanical principles such as superposition, entanglement, and quantum parallelism, quantum computation enables new approaches to solving complex optimization, simulation, and data-intensive problems that are challenging or impractical for classical computers. This paper offers a comprehensive and critical review of the potential impacts of quantum computing on advanced manufacturing, focusing on intelligent production planning, supply chain optimization, materials discovery, predictive maintenance, and human–machine collaboration, key aspects of Industry 5.0. The originality of this review lies in its integrated analysis of quantum computing alongside artificial intelligence, digital twins, and cyber–physical systems, highlighting how these technologies, when combined, improve decision-making speed, process efficiency, and sustainability. Despite these opportunities, the integration of quantum computing into Industry 5.0 systems faces critical challenges, including hardware limitations, algorithm scalability, data security concerns, workforce readiness, and the complexity of integrating quantum solutions with existing industrial infrastructures. The role of hybrid quantum-classical architectures is examined as a feasible and transitional approach for near-term manufacturing applications. By critically assessing both technological strengths and practical constraints, this review positions quantum computing as a promising enabler of resilient, human-centered, and sustainable manufacturing ecosystems. The insights aim to assist researchers, industry players, and policymakers in strategically managing the integration of quantum technologies as manufacturing systems advance toward Industry 5.0. Full article
(This article belongs to the Section Quantum Science and Technology)
36 pages, 4480 KB  
Article
An Explainable Transformer-Based Framework for Lung Cancer Classification and Automated Radiology Report Generation from Multi-Slice CT Images
by Oguzhan Katar, Tulin Akbalik and Ozal Yildirim
Biomedicines 2026, 14(5), 1103; https://doi.org/10.3390/biomedicines14051103 - 13 May 2026
Viewed by 23
Abstract
Background/Objectives: Lung cancer is one of the most common and lethal malignancies worldwide. Early detection remains challenging due to its variable biological behavior. Computed tomography (CT) is the primary imaging method used for early detection. However, the manual interpretation of CT scans is constrained by several challenges such as reliance on expert experience, increasing clinical workload, and considerable variability among observers. Methods: This study introduces an explainable transformer-based framework capable of distinguishing among the three principal clinical categories of lung cancer (small-cell lung cancer, non-small-cell lung cancer, and normal) while simultaneously generating automated radiology reports from CT images. In contrast to conventional single-slice methodologies, the proposed model employs a multi-slice volumetric encoding strategy that captures spatial continuity and anatomical relationships across the CT slices. Visual features extracted by a ViT-based encoder are transformed into a compact patient-level representation through a Learnable Query Attention Pooling (LQAP) mechanism, and this unified representation is subsequently used for both three-class prediction and report generation with a GPT-2-based decoder. To enhance explainability, slice-wise Grad-CAM maps are produced, visually highlighting the anatomical cues that guide the model’s decisions. Results: Experiments conducted on the newly curated LungCA dataset comprising 767 patients demonstrate that the model achieves 97.40% accuracy in the Turkish (TR) reporting scenario and 94.81% accuracy in the English (EN) scenario, alongside strong alignment with human-written reports in BLEU, ROUGE, METEOR, and CIDEr metrics. 
Conclusions: The findings demonstrate that the proposed multi-slice transformer framework achieves robust performance in both classification and radiology report generation, enhances transparency throughout the decision-making process, and provides a reliable artificial intelligence solution capable of effectively supporting clinical workflows in lung cancer assessment. Full article
18 pages, 448 KB  
Review
AI-Assisted Training for Teleconsultation Competencies in Undergraduate Medical Education: A Narrative Review
by Wojciech Michał Glinkowski, Barbara Jacennik, Aldona Katarzyna Jankowska, Tomasz Cedro, Szymon Wilk and Rafał Doniec
Appl. Sci. 2026, 16(10), 4858; https://doi.org/10.3390/app16104858 - 13 May 2026
Viewed by 25
Abstract
Telemedicine has become a routine component of healthcare delivery, creating a need for dedicated undergraduate training in teleconsultation-specific competencies. Although artificial intelligence (AI)-assisted educational systems have been proposed as scalable tools to support teleconsultation training, the evidence remains fragmented, and their educational role is not yet clearly defined. Objective: To map and critically synthesize empirical evidence on AI-assisted teleconsultation training systems used in undergraduate medical education, with attention to skill domains, system capabilities, and implementation considerations. Methods: A structured narrative review with transparent search and study selection procedures was conducted. Literature published between January 2019 and December 2025 was identified through searches of major bibliographic databases and supplementary semantic and citation-based sources. Studies involving undergraduate medical students and evaluating AI-assisted interventions targeting teleconsultation-related skills were included. Results: Eight empirical full-text studies met the final eligibility criteria and were included in the structured narrative synthesis. Across the included studies, AI-assisted systems tended to show favorable patterns in structured domains such as verbal communication, history-taking, and selected aspects of early clinical reasoning during virtual consultations. Evidence regarding nonverbal communication and empathic or relational skills was more limited and methodologically heterogeneous, and human-based simulation remained important in these domains. Students generally reported favorable perceptions of usability, accessibility, and psychological safety, although satisfaction and perceived realism were not uniformly superior to human-based approaches. AI-assisted systems also appeared scalable and potentially cost-efficient, particularly as preparatory or supplementary training modalities. 
Conclusions: Current evidence suggests that AI-assisted teleconsultation training systems may be useful as preparatory and supportive tools in undergraduate medical education, particularly for structured and repeatable components of remote consultation practice. However, the evidence base remains limited and heterogeneous, and these systems do not replace human-led training for relational, nonverbal, and context-sensitive competencies. Their educational value appears greatest within blended training models that align platform capabilities with specific teleconsultation skills. Full article
(This article belongs to the Section Computing and Artificial Intelligence)
Show Figures

Figure 1