Search Results (83)

Search Parameters:
Keywords = AI optimisation techniques

19 pages, 510 KB  
Perspective
Beyond CABG vs. PCI: Contemporary and Future Coronary Revascularisation from Historical Evolution to Artificial Intelligence, Robotics, and Hybrid Strategies
by Justin Ren, Christopher M. Reid, Dion Stub, William Chan, Colin Royse, Jason E. Bloom, Garry W. Hamilton, Liam Munir, Gihwan Song, Daksh Tyagi, Joshua G. Kovoor, Aashray Gupta, Nilesh Srivastav and Alistair Royse
J. Clin. Med. 2026, 15(7), 2681; https://doi.org/10.3390/jcm15072681 - 1 Apr 2026
Viewed by 416
Abstract
Coronary artery bypass grafting (CABG) and percutaneous coronary intervention (PCI) are the two dominant revascularisation strategies for obstructive coronary artery disease, yet their relative roles continue to shift because they address coronary pathophysiology differently with ever-evolving techniques. PCI has advanced through iterative improvements, including balloon angioplasty, bare-metal stents, and drug-eluting stents, with contemporary outcomes increasingly driven by procedural optimisation using intracoronary imaging and physiology-guided lesion selection rather than device category alone. CABG has progressed through perioperative management, improvements in operative safety, and, critically, conduit durability. Recognition of progressive saphenous vein graft failure has underpinned a conduit-optimisation era in which the left internal mammary artery to left anterior descending artery remains the gold standard. Further, broader arterial grafting (including radial artery use, multiple arterial grafting, and selected total-arterial strategies) has been increasingly applied, albeit with deliverability and competing-risk constraints highlighted in randomised evidence. This perspective review reframes the CABG versus PCI comparison not as a binary contest, but as a context-dependent assessment in which the relative value of each strategy depends on the specific technologies, techniques, and conduits available at the time of comparison. We summarise comparative effectiveness where evidence is most consistent and where it remains sensitive to anatomy, comorbidity, and endpoint definitions. In diabetes with multivessel disease, trial data favour CABG for long-term survival and clinical outcomes despite higher stroke risk. In left main disease, outcomes depend on lesion pattern and overall complexity, with trial-era stent technology and composite endpoint definitions influencing conclusions. 
In ischaemic left ventricular dysfunction, a long-term survival benefit is established for CABG added to medical therapy, while multi-vessel PCI has not demonstrated comparable prognostic modification in contemporary data. We then examine hybrid coronary revascularisation as territory-specific allocation, highlighting its physiological rationale, program dependence, and limited, adequately powered randomised evidence. Finally, we outline how artificial intelligence (AI) and robotics may accelerate a precision revascularisation paradigm by standardising lesion assessment, supporting procedural planning, improving procedural reproducibility, and enabling more patient-specific selection among PCI, contemporary CABG with optimised conduits, and hybrid pathways. Full article
(This article belongs to the Section Cardiology)

20 pages, 1310 KB  
Perspective
AI-Based Optimisation Techniques for Agrivoltaic Systems: Benefits, Challenges, and the Way Forward
by Aiken Monasterio and Alan Colin Brent
Energies 2026, 19(6), 1554; https://doi.org/10.3390/en19061554 - 21 Mar 2026
Viewed by 252
Abstract
The application of artificial intelligence (AI) and machine learning (ML) to the optimisation of agrivoltaic systems represents a promising frontier for enhancing dual land-use efficiency. Insights from the literature identify substantial opportunities for the transfer of mature AI methodologies from renewable energy and agriculture applications to the emerging field of agrivoltaics. Despite agrivoltaic systems achieving reported Land Equivalent Ratios (LERs) of between 1.2 and 1.6—corresponding to a 20 to 60% increase in combined energy and crop productivity per unit of land—the adoption of dynamic, real-time optimisation remains limited. Key research gaps include the absence of cross-domain learning architectures, the limited integration of economic considerations within optimisation frameworks, and the lack of adaptive, multi-temporal modelling approaches. This perspective paper proposes a research roadmap for the development of next-generation AI systems capable of simultaneously optimising energy generation and agricultural productivity, thereby supporting sustainable land-use transitions in integrated agri-energy landscapes. Full article
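The Land Equivalent Ratio (LER) figures cited above follow the standard definition: relative crop yield plus relative electricity yield on the same land. A minimal sketch, with hypothetical yields rather than values from the paper:

```python
def land_equivalent_ratio(crop_av, crop_mono, pv_av, pv_alone):
    """LER = relative crop yield + relative PV yield.

    LER > 1 means the dual-use (agrivoltaic) plot produces more, in
    combined terms, than splitting the land between monocultures.
    """
    return crop_av / crop_mono + pv_av / pv_alone

# Hypothetical example: the agrivoltaic plot retains 85% of the
# monoculture crop yield plus 55% of standalone PV output.
ler = land_equivalent_ratio(crop_av=8.5, crop_mono=10.0, pv_av=0.55, pv_alone=1.0)
print(round(ler, 2))  # 1.4
```

An LER of 1.4 sits inside the 1.2 to 1.6 range the abstract reports, i.e. roughly 40% more combined productivity per unit of land.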

35 pages, 10077 KB  
Article
Physically Interpretable and AI-Powered Applied-Field Thrust Modelling for Magnetoplasmadynamic Space Thrusters Using Symbolic Regression: Towards More Explainable Predictions
by Miguel Rosa-Morales, Matthew Ravichandran, Wenjuan Song and Mohammad Yazdani-Asrami
Aerospace 2026, 13(3), 245; https://doi.org/10.3390/aerospace13030245 - 5 Mar 2026
Viewed by 406
Abstract
Magnetoplasmadynamic thrusters (MPDTs) are becoming increasingly viable as electric propulsion (EP) technology for space missions, yet their complex plasma behaviour, intricate thrust-generation process, and nonlinear multi-physics thrust–field interactions prove difficult for conventional modelling approaches, including empirical techniques. Shortcomings of traditional empirical modelling include failure to predict accurately across wide operational regimes. This paper introduces a physically interpretable, artificial intelligence (AI)-powered thrust model for Applied-Field Magnetoplasmadynamic Thrusters (AF-MPDTs), developed using symbolic regression (SR) to address the gap between data-driven prediction and physics-based understanding. The proposed method, an alternative to traditional black-box AI methods, incorporates physics-aware composite-term operators, ensuring that the resulting analytical expressions are bounded by known physical behaviours while retaining the flexibility to discover previously overlooked nonlinear couplings. A comprehensive dataset of AF-MPDTs undergoes rigorous preprocessing to ensure dimensional consistency and noise robustness. The SR model then evolves candidate equations, balancing predictive accuracy with interpretability through Tree-Structured Parzen Estimator (TPE) optimisation. The results are closed-form surrogate correlations with a goodness of fit of 95.98%, a root mean square error of 0.0199, a mean absolute error of 0.0143, and a 28.91% reduction in mean absolute percentage error relative to the benchmark model in the literature. A post-discovery protocol is implemented in which Shapley Additive Explanations (SHAP) provide insight into the influence of each composite term in the developed correlation, followed by numerical robustness and physical consistency validation using a Monte Carlo (MC) envelope. 
A StabilityScore is calculated for all developed correlations, enabling explicit accuracy–complexity–stability comparisons. In doing so, we demonstrated that SR can systematically recover known physical relationships—such as the scaling of thrust with discharge current and applied magnetic field—while proposing interpretable higher-order corrections that improve fit quality. The resulting SR-based thrust models not only achieve competitive accuracy relative to state-of-the-art numerical and empirical methods but also offer more explainable and interpretable results capable of revealing compact formulations that capture essential acceleration mechanisms with transparency. Overall, this paper, using SR, advances explainable AI (XAI) methodologies capable of generating trustworthy, analytically transparent models for next-generation electric propulsion systems. Full article
(This article belongs to the Special Issue Artificial Intelligence in Aerospace Propulsion)
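For reference, the three error metrics quoted in this abstract (RMSE, MAE, MAPE) are standard and can be computed as below; the thrust values are hypothetical, not data from the paper:

```python
import math

def rmse(y_true, y_pred):
    """Root mean square error: penalises large deviations quadratically."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true, y_pred):
    """Mean absolute error: average magnitude of the deviations."""
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def mape(y_true, y_pred):
    """Mean absolute percentage error; assumes no zero targets."""
    return 100 * sum(abs((t - p) / t) for t, p in zip(y_true, y_pred)) / len(y_true)

measured = [1.0, 2.0, 4.0]   # hypothetical measured thrust (N)
predicted = [1.1, 1.9, 4.2]  # hypothetical model predictions (N)
print(round(rmse(measured, predicted), 4))  # 0.1414
print(round(mae(measured, predicted), 4))   # 0.1333
```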

29 pages, 7418 KB  
Article
EvoDropX: Evolutionary Optimization of Feature Corruption Sequences for Faithful Explanations of Transformer Models
by Dhiraj Kumar Singh and Conor Ryan
Algorithms 2026, 19(3), 187; https://doi.org/10.3390/a19030187 - 2 Mar 2026
Viewed by 312
Abstract
As deep learning models become increasingly integrated into critical decision-making systems, the need for explainable Artificial Intelligence (xAI) has grown paramount to ensure transparency, accountability, and trust. Post hoc explainability methods, which analyse trained models to interpret their predictions without modifying the underlying architecture, have become increasingly important, especially in fields such as healthcare and finance. Modern xAI techniques often produce feature importance rankings that fail to capture the true causal influence of features, particularly in transformer-based models. Recent quantitative metrics, such as Symmetric Relevance Gain (SRG), which measures the area between the feature corruption performance curves of the Most Important Feature (MIF) and the Least Important Feature (LIF), provide a more rigorous basis for evaluating explanation fidelity. In this study, we first show that existing xAI methods exhibit consistently poor performance under the SRG criterion when explaining transformer-based text classifiers. To address these limitations, we introduce EvoDropX, a novel framework that formulates explanation as an optimisation problem. EvoDropX leverages Grammatical Evolution (GE) to evolve sequences of feature corruption with the explicit objective of maximising SRG, thereby identifying features that most strongly influence model predictions. EvoDropX provides interventional, input–output (behavioural) explanations and does not attempt to infer or interpret internal model mechanisms. 
Through comprehensive experiments across multiple datasets (IMDb movie reviews (IMDB), Stanford Sentiment Treebank (SST-2), Amazon Polarity (AP)), multiple transformer models (Bidirectional Encoder Representations from Transformers (BERT), RoBERTa, DistilBERT), and multiple metrics (SRG, MIF, LIF, Counterfactual Conciseness (CFC)), we demonstrate that EvoDropX significantly outperforms all state-of-the-art (SOTA) xAI baselines, including Attention-Aware Layer-Wise Relevance Propagation for Transformers (AttnLRP), SHapley Additive exPlanations (SHAP), and Local Interpretable Model-agnostic Explanations (LIME), when evaluated using intervention-based faithfulness criteria. Notably, EvoDropX achieves a 74.77% improvement in SRG over the best-performing baseline on the IMDB dataset with the BERT model, with consistent improvements observed across all dataset-model pairs. Finally, qualitative and linguistic analyses reveal that EvoDropX captures both sentiment-bearing terms and their structural relationships within sentences, yielding explanations that are both faithful and interpretable. Full article
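The SRG idea described above, comparing how performance decays when features are corrupted most-important-first (MIF) versus least-important-first (LIF), can be sketched on a toy scorer. This illustrates the general principle under assumed definitions; it is not the authors' implementation:

```python
def corruption_curve(score_fn, order):
    """Model performance after masking features one at a time in `order`."""
    masked, curve = set(), []
    for f in order:
        masked.add(f)
        curve.append(score_fn(masked))
    return curve

def srg(score_fn, ranking):
    """Mean gap between the LIF-first and MIF-first corruption curves.

    A faithful ranking degrades performance fastest when the most
    important features are removed first, so a larger gap is better.
    """
    mif = corruption_curve(score_fn, ranking)        # most important first
    lif = corruption_curve(score_fn, ranking[::-1])  # least important first
    return sum(l - m for m, l in zip(mif, lif)) / len(ranking)

# Toy "model": the score drops by each token's true weight when masked.
weights = {"great": 0.5, "movie": 0.3, "the": 0.05}
score = lambda masked: 1.0 - sum(weights[f] for f in masked)
print(round(srg(score, ["great", "movie", "the"]), 2))  # 0.3
```

A deliberately bad (reversed) ranking yields a negative value on the same toy scorer, which is the failure mode the study reports for existing xAI methods.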

41 pages, 815 KB  
Article
XAI-Compliance-by-Design: A Modular Framework for GDPR- and AI Act-Aligned Decision Transparency in High-Risk AI Systems
by Antonio Goncalves and Anacleto Correia
J. Cybersecur. Priv. 2026, 6(2), 43; https://doi.org/10.3390/jcp6020043 - 2 Mar 2026
Viewed by 1030
Abstract
High-risk Artificial Intelligence (AI) systems deployed in cybersecurity and privacy-critical contexts must satisfy not only demanding performance targets but also stringent obligations for transparency, accountability, and human oversight under the General Data Protection Regulation (GDPR) and the Artificial Intelligence Act (AI Act). Existing approaches often treat these concerns in isolation: Explainable Artificial Intelligence (XAI) methods are added ad hoc to machine learning pipelines, while governance and regulatory frameworks remain largely conceptual and weakly connected to the concrete artefacts produced in practice. This article proposes XAI-Compliance-by-Design, a modular framework that integrates XAI techniques, compliance-by-design principles, and trustworthy Machine Learning Operations (MLOps) practices into a unified architecture for high-risk AI systems in cybersecurity and privacy domains. The framework follows a dual-flow design that couples an upstream technical pipeline (data, model, explanation, and monitoring) with a downstream governance pipeline (policy, oversight, audit, and decision-making), orchestrated by a Compliance-by-Design Engine and a technical–regulatory correspondence matrix aligned with the GDPR, the AI Act, and ISO/IEC 42001. The framework is instantiated and evaluated through an end-to-end, Python-based proof of concept using a synthetic intrusion detection system (IDS)-inspired anomaly detection scenario with a Random Forest (RF) classifier, Shapley Additive exPlanations (SHAP) and Local Interpretable Model-agnostic Explanations (LIME), drift indicators, and tamper-evident evidence bundles and decision dossiers. The results show that, even in a modest toy setting, the framework systematically produces verifiable artefacts that support auditability and accountability across the model lifecycle. 
By linking explanation reports, drift statistics and compliance logs to concrete regulatory provisions, the approach illustrates how organisations operating high-risk AI for cybersecurity and privacy can move from model-centric optimisation to evidence-centric governance. The article discusses how the proposed framework can be generalised to real-world high-risk AI applications, contributing to the operationalisation of European digital sovereignty in AI governance. This article does not introduce a new intrusion detection algorithm; instead, it proposes an evidence-centric governance pipeline that captures decision provenance and compliance artefacts so that decisions can be audited and justified against regulatory obligations. Full article
(This article belongs to the Section Security Engineering & Applications)
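The "tamper-evident evidence bundles" mentioned in this abstract are commonly realised as hash chains, where each entry commits to its predecessor. A minimal sketch of the idea (the field names are illustrative; this is not the authors' implementation):

```python
import hashlib
import json

def append_evidence(log, record):
    """Append a record to a tamper-evident log by chaining SHA-256 hashes."""
    prev = log[-1]["hash"] if log else "0" * 64
    payload = json.dumps(record, sort_keys=True)
    log.append({
        "record": record,
        "prev": prev,
        "hash": hashlib.sha256((prev + payload).encode()).hexdigest(),
    })
    return log

def verify(log):
    """Recompute the chain; any edited record breaks every later link."""
    prev = "0" * 64
    for entry in log:
        payload = json.dumps(entry["record"], sort_keys=True)
        expected = hashlib.sha256((prev + payload).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append_evidence(log, {"event": "model_trained", "auc": 0.97})
append_evidence(log, {"event": "shap_report", "top_feature": "bytes_out"})
print(verify(log))  # True
```

Silently changing any recorded value (say, the logged AUC) makes `verify` fail, which is what lets auditors trust the decision dossiers after the fact.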

39 pages, 3580 KB  
Review
Application of AI in Cyberattack Detection: A Review
by Yaw Jantuah Boateng, Nusrat Jahan Mim, Nasrin Akhter, Ranesh Naha, Aniket Mahanti and Alistair Barros
Sensors 2026, 26(5), 1518; https://doi.org/10.3390/s26051518 - 28 Feb 2026
Viewed by 715
Abstract
In today’s fast-changing digital environment, cyber-physical systems face escalating security challenges due to increasingly sophisticated cyberattacks. Artificial Intelligence (AI) has emerged as a powerful enabler of modern cyberattack detection, offering scalable, accurate, and adaptive solutions to counter dynamic threats. This paper provides a comprehensive review of recent advancements in AI-based cyberattack detection, focusing on Machine Learning (ML), Deep Learning (DL), Reinforcement Learning (RL), Federated Learning (FL), and emerging techniques such as generative AI, neuro-symbolic AI, swarm intelligence, lightweight AI, and quantum computing. We evaluate the strengths and limitations of these approaches, highlighting their performance on benchmark datasets. The review discusses traditional signature-based Intrusion Detection Systems (IDS) and their limitations against novel attack patterns, contrasted with AI-driven anomaly-based and hybrid detection methods that improve detection rates for unknown and zero-day attacks. Key challenges, including computational costs, data quality, privacy concerns, and model interpretability, are analysed alongside the role of Explainable AI (XAI) in enhancing trust and transparency. The impact of computational resources, dataset representativeness, and evaluation metrics on AI model performance is also explored. Furthermore, we investigate the potential of lightweight AI for resource-constrained environments like IoT and edge devices, and quantum computing’s role in advancing detection efficiency and cryptographic security. The paper also draws attention to future research directions, particularly the development of up-to-date datasets, integration of hybrid quantum–classical models, and optimisation of asynchronous FL protocols to address evolving cybersecurity challenges. 
This study aims to inspire innovation in AI-driven cyberattack detection, fostering robust, interpretable, and efficient solutions for securing complex digital environments. Full article
(This article belongs to the Section Communications)

35 pages, 1715 KB  
Review
Optimization Strategies for Large-Scale PV Integration in Smart Distribution Networks: A Review
by Stefania Conti, Antonino Laudani, Santi A. Rizzo, Nunzio Salerno, Gian Giuseppe Soma, Giuseppe M. Tina and Cristina Ventura
Energies 2026, 19(5), 1191; https://doi.org/10.3390/en19051191 - 27 Feb 2026
Viewed by 371
Abstract
The large-scale integration of photovoltaic systems into modern distribution networks requires advanced forecasting and optimisation tools to address variability, uncertainty, and increasingly complex operational conditions. This review examines 160 peer-reviewed studies published primarily between 2018 and 2026 and provides a unified, system-level perspective that links photovoltaic power forecasting, photovoltaic optimisation, and energy storage system management within the broader context of Smart Grid operation. The analysis covers forecasting techniques across all temporal horizons, compares deterministic, stochastic, metaheuristic, and hybrid optimisation approaches, and reviews siting, sizing, and operational strategies for both PV units and Energy Storage Systems, including their effects on hosting capacity, reactive power control, and network flexibility. A key contribution of this work is the consolidation of planning- and operation-oriented methods into a coherent framework that clarifies how forecasting accuracy influences Distributed Energy Resources optimisation and system-level performance. The review also highlights emerging trends, such as reinforcement learning for real-time Energy Storage Systems control, surrogate-assisted multi-objective optimisation, data-driven hosting capacity evaluation, and explainable AI for grid transparency, as essential enablers for flexible, resilient, and sustainable distribution networks. Open challenges include uncertainty modelling, real-world validation of optimisation tools, interoperability with flexibility markets, and the development of scalable and adaptive optimisation frameworks for next-generation smart grids. Full article
(This article belongs to the Section A1: Smart Grids and Microgrids)

36 pages, 5431 KB  
Article
Explainable AI-Driven Quality and Condition Monitoring in Smart Manufacturing
by M. Nadeem Ahangar, Z. A. Farhat, Aparajithan Sivanathan, N. Ketheesram and S. Kaur
Sensors 2026, 26(3), 911; https://doi.org/10.3390/s26030911 - 30 Jan 2026
Cited by 1 | Viewed by 929
Abstract
Artificial intelligence (AI) is increasingly adopted in manufacturing for tasks such as automated inspection, predictive maintenance, and condition monitoring. However, the opaque, black-box nature of many AI models remains a major barrier to industrial trust, acceptance, and regulatory compliance. This study investigates how explainable artificial intelligence (XAI) techniques can be used to systematically open and interpret the internal reasoning of AI systems commonly deployed in manufacturing, rather than to optimise or compare model performance. A unified explainability-centred framework is proposed and applied across three representative manufacturing use cases encompassing heterogeneous data modalities and learning paradigms: vision-based classification of casting defects, vision-based localisation of metal surface defects, and unsupervised acoustic anomaly detection for machine condition monitoring. Diverse models are intentionally employed as representative black-box decision-makers to evaluate whether XAI methods can provide consistent, physically meaningful explanations independent of model architecture, task formulation, or supervision strategy. A range of established XAI techniques, including Grad-CAM, Integrated Gradients, Saliency Maps, Occlusion Sensitivity, and SHAP, are applied to expose model attention, feature relevance, and decision drivers across visual and acoustic domains. The results demonstrate that XAI enables alignment between model behaviour and physically interpretable defect and fault mechanisms, supporting transparent, auditable, and human-interpretable decision-making. By positioning explainability as a core operational requirement rather than a post hoc visual aid, this work contributes a cross-modal framework for trustworthy AI in manufacturing, aligned with Industry 5.0 principles, human-in-the-loop oversight, and emerging expectations for transparent and accountable industrial AI systems. Full article
(This article belongs to the Section Intelligent Sensors)
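Among the XAI techniques this abstract lists, occlusion sensitivity is the simplest to state: mask each region of the input and record how much the model's score drops. A generic sketch on a toy scorer (not the paper's code):

```python
def occlusion_sensitivity(score_fn, image, patch=2, fill=0):
    """Heatmap of score drop when each patch of the image is occluded.

    Regions whose occlusion causes a large drop are the ones the
    model relies on, e.g. the pixels covering a casting defect.
    """
    h, w = len(image), len(image[0])
    base = score_fn(image)
    heat = [[0.0] * (w // patch) for _ in range(h // patch)]
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = [row[:] for row in image]  # copy, then mask one patch
            for di in range(patch):
                for dj in range(patch):
                    occluded[i + di][j + dj] = fill
            heat[i // patch][j // patch] = base - score_fn(occluded)
    return heat

# Toy scorer: sums the top-left quadrant, so occluding there matters most.
score = lambda img: sum(img[i][j] for i in range(2) for j in range(2))
heat = occlusion_sensitivity(score, [[1, 1, 0, 0] for _ in range(4)], patch=2)
print(heat)  # [[4, 0], [0, 0]]
```

Real deployments slide the mask over CNN inputs and plot the heatmap over the image, which is how the paper checks that model attention aligns with physical defect locations.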

24 pages, 1526 KB  
Article
EQARO-ECS: Efficient Quantum ARO-Based Edge Computing and SDN Routing Protocol for IoT Communication to Avoid Desertification
by Thair A. Al-Janabi, Hamed S. Al-Raweshidy and Muthana Zouri
Sensors 2026, 26(3), 824; https://doi.org/10.3390/s26030824 - 26 Jan 2026
Viewed by 397
Abstract
Desertification is the impoverishment of fertile land, caused by various factors and environmental effects, such as temperature and humidity. An appropriate Internet of Things (IoT) architecture, routing algorithms based on artificial intelligence (AI), and emerging technologies are essential to monitor and avoid desertification. However, classical AI algorithms often become trapped in local optima and consume more energy. This research proposes an improved multi-objective routing protocol, namely, the efficient quantum (EQ) artificial rabbit optimisation (ARO) based on edge computing (EC) and a software-defined network (SDN) concept (EQARO-ECS), which provides the best cluster table for the IoT network to avoid desertification. The proposed EQARO-ECS protocol reduces energy consumption and improves data analysis speed by deploying new technologies, such as the Cloud, SDN, EC, and the quantum-technique-based ARO. The protocol increases data analysis speed because the suggested iterated quantum gates within the ARO can rapidly escape local optima and reach the global optimum. It helps avoid desertification through a new, effective objective function that considers energy consumption, communication cost, and desertification parameters. The simulation results establish that the suggested EQARO-ECS protocol increases accuracy and improves network lifetime by reducing energy depletion compared to other algorithms. Full article

36 pages, 949 KB  
Systematic Review
Towards Sustainable Health Management in the Kingdom of Saudi Arabia: The Role of Artificial Intelligence—A Systematic Review, Challenges, and Future Directions
by Kholoud Maswadi and Ali Alhazmi
Sustainability 2026, 18(2), 905; https://doi.org/10.3390/su18020905 - 15 Jan 2026
Viewed by 1215
Abstract
The incorporation of Artificial Intelligence (AI) into medical services in Saudi Arabia offers a substantial opportunity. Despite the increasing integration of AI techniques such as machine learning, natural language processing, and predictive analytics, a thorough understanding of their applications, advantages, and challenges within the Saudi healthcare framework is still lacking. This study aims to perform a systematic literature review (SLR) to assess the current status of AI in Saudi healthcare, determine its alignment with Vision 2030, and suggest practical recommendations for future research and policy. In accordance with the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) methodology, 699 studies were initially obtained from electronic databases, with 24 studies selected after the application of established inclusion and exclusion criteria. The results indicated that AI has been effectively utilised in disease prediction, diagnosis, therapy optimisation, patient monitoring, and resource allocation, resulting in notable advancements in diagnostic accuracy, operational efficiency, and patient outcomes. Nonetheless, barriers to adoption, such as ethical issues, legislative complexity, data protection concerns, and workforce skill shortages, were also recognised. This review emphasises the necessity for strong ethical frameworks, regulatory control, and capacity-building efforts to guarantee the responsible and fair implementation of AI in healthcare. Recommendations encompass the creation of national AI ethics and governance frameworks, investment in AI education and training initiatives, and the formulation of modular AI solutions to guarantee scalability and cost-effectiveness. Together, these measures can position Saudi Arabia to realise its Vision 2030 objectives, establishing the Kingdom as a global leader in AI-driven healthcare innovation. Full article
(This article belongs to the Section Health, Well-Being and Sustainability)

64 pages, 13395 KB  
Review
Low-Cost Malware Detection with Artificial Intelligence on Single Board Computers
by Phil Steadman, Paul Jenkins, Rajkumar Singh Rathore and Chaminda Hewage
Future Internet 2026, 18(1), 46; https://doi.org/10.3390/fi18010046 - 12 Jan 2026
Viewed by 1787
Abstract
The proliferation of Internet of Things (IoT) devices has significantly expanded the threat landscape for malicious software (malware), rendering traditional signature-based detection methods increasingly ineffective in coping with the volume and evolving nature of modern threats. In response, researchers are utilising artificial intelligence (AI) for more dynamic and robust malware detection. An innovative AI-based approach focuses on image classification techniques to detect malware on resource-constrained Single-Board Computers (SBCs) such as the Raspberry Pi. This method converts malware binaries into 2D images, which deep learning models such as convolutional neural networks (CNNs) can then analyse to classify them as benign or malicious. The results show that the image-based approach demonstrates high efficacy, with many studies reporting detection accuracy rates exceeding 98%. That said, deploying these demanding models on devices with limited processing power and memory remains a significant challenge, particularly given their computational and time complexity. Overcoming this issue requires critical model optimisation strategies. Successful approaches include lightweight CNN architectures and federated learning, which preserves privacy by training models on decentralised data. This hybrid workflow, in which models are trained on powerful servers before the learnt models are deployed on SBCs, is an emerging field attracting significant interest in cybersecurity. This paper synthesises the current state of the art, performance compromises, and optimisation techniques, contributing to the understanding of how AI and image representation can enable effective low-cost malware detection on resource-constrained systems. Full article
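The binary-to-image conversion step this abstract describes typically maps each byte to one greyscale pixel at a fixed row width. A minimal sketch (the width and zero-padding policy are assumptions; the surveyed studies differ in both):

```python
import math

def binary_to_grayscale(data: bytes, width: int = 256):
    """Lay raw bytes out as a 2D greyscale image (one byte = one pixel,
    0-255 intensity), zero-padding the final row to full width."""
    rows = math.ceil(len(data) / width)
    padded = data + bytes(rows * width - len(data))
    return [list(padded[r * width:(r + 1) * width]) for r in range(rows)]

# A hypothetical 10-byte sample at width 4 becomes a 3x4 image,
# with the last two pixels zero-padded.
img = binary_to_grayscale(bytes(range(10)), width=4)
print(img[2])  # [8, 9, 0, 0]
```

The resulting matrix can be fed to a CNN like any greyscale image; malware families tend to share visual texture, which is the regularity the classifiers reviewed here exploit.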

57 pages, 9972 KB  
Review
Harnessing Transition Metal Chalcogenides for Efficient Performance in Magnesium–Sulfur Battery: Synergising Experimental and Theoretical Techniques
by Hassan O. Shoyiga and Msimelelo Siswana
Solids 2026, 7(1), 7; https://doi.org/10.3390/solids7010007 - 8 Jan 2026
Viewed by 1211
Abstract
Magnesium–sulfur (Mg-S) batteries represent a novel category of multivalent energy storage systems, characterised by enhanced theoretical energy density, material availability, and ecological compatibility. Notwithstanding these benefits, their practical implementation continues to be hindered by persistent issues such as polysulfide shuttle effects, slow Mg2+ transport, and significant interfacial instability. This review highlights recent progress in utilising transition metal chalcogenides (TMCs) as cathode materials and modifiers to overcome these challenges. We assess the structural, electrical, and catalytic characteristics of TMCs such as MoS2, CoSe2, WS2, and TiS2, highlighting their contributions to improving redox kinetics, retaining polysulfides, and enabling reversible Mg2+ intercalation. The review synthesises results from experimental and theoretical studies to offer a thorough comprehension of structure–function interactions. Particular emphasis is placed on morphological engineering, modulation of electronic conductivity, and techniques for surface functionalisation. Furthermore, we examine insights from density functional theory (DFT) simulations that corroborate the observed enhancements in electrochemical performance and offer predictive direction for material optimisation. This paper delineates nascent opportunities in Artificial Intelligence (AI)-enhanced materials discovery and hybrid system design, proposing future trajectories to fully realise the potential of TMC-based Mg-S battery systems. Full article

31 pages, 3607 KB  
Article
Hybrid AI–Taguchi–ANOVA Approach for Thermographic Monitoring of Electronic Devices
by Filippo Laganà, Danilo Pratticò, Marco F. Quattrone, Salvatore A. Pullano and Salvatore Calcagno
Eng 2026, 7(1), 28; https://doi.org/10.3390/eng7010028 - 6 Jan 2026
Cited by 8 | Viewed by 665
Abstract
Defects in printed circuit boards (PCBs), if not detected promptly, may persist over time until they cause the failure of critical components. Traditional monitoring methods, limited to simulations or superficial measurements, hinder predictive maintenance and real-time fault detection. To address these issues and enhance real-time diagnostics of thermal anomalies in PCBs, this work proposes an integrated system that combines infrared thermography (IRT), artificial intelligence (AI) algorithms, and Taguchi–ANOVA statistical techniques. IR thermography was employed to identify thermal stresses in the devices during normal operation. The IR acquisitions were used to build a dataset for training a specialised AI model that combines U-Net-based segmentation of thermal anomalies with a Multilayer Perceptron (MLP) classifier of heat distribution patterns. The Taguchi method determines the optimal configuration of the selected parameters, while Analysis of Variance (ANOVA) evaluates the effect of each factor on the F1-score response. These techniques statistically validated the AI performance, confirming the optimal set of selected hyperparameters and quantifying their contribution to the F1-score. The novelty of the study lies in the integration of real-time infrared thermography with an interpretable AI pipeline and a Taguchi–ANOVA statistical framework, which enables both optimisation and rigorous validation of AI performance under real-time operating conditions. Full article
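For a single factor, the ANOVA step this abstract describes reduces to comparing between-level and within-level variation in the response. A minimal one-way ANOVA sketch, using illustrative F1-scores rather than the paper's data:

```python
def one_way_anova_F(groups):
    """F statistic for a one-way ANOVA: ratio of between-group to
    within-group mean squares. Each group holds the response values
    (e.g. F1-scores) observed at one level of a factor."""
    all_vals = [v for g in groups for v in g]
    grand_mean = sum(all_vals) / len(all_vals)
    means = [sum(g) / len(g) for g in groups]
    ss_between = sum(len(g) * (m - grand_mean) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((v - m) ** 2
                    for g, m in zip(groups, means) for v in g)
    df_between = len(groups) - 1
    df_within = len(all_vals) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)

# Hypothetical F1-scores at two levels of one hyperparameter: a large
# F suggests the factor materially affects model performance.
F = one_way_anova_F([[0.90, 0.92], [0.80, 0.82]])  # F = 50.0
```

The full Taguchi–ANOVA workflow extends this to several factors at once, using an orthogonal array to cover the hyperparameter combinations with few runs.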
(This article belongs to the Special Issue Artificial Intelligence for Engineering Applications, 2nd Edition)

26 pages, 808 KB  
Article
Artificial Intelligence for Sustainable Consumption: Assessing Its Role in Emission Reduction and Resource Optimisation in Bahrain
by Jaafar Al-Mesaiadeen
Sustainability 2026, 18(1), 322; https://doi.org/10.3390/su18010322 - 29 Dec 2025
Viewed by 741
Abstract
The rise in energy demand has heightened concerns about the inefficient use of resources, escalating emissions, and unsustainable consumption trends in Bahrain. Conventional methods of managing such challenges have proved inadequate, demanding innovative approaches that balance environmental sustainability with economic growth. This study investigates the impact of artificial intelligence on sustainable consumption, emission reduction, and resource optimisation in Bahrain's energy sector. It employed a descriptive quantitative research design, with a questionnaire distributed to 230 respondents from the sector selected through stratified random sampling. According to the statistical findings, artificial intelligence has a significant positive effect on sustainable consumption (B = 0.634, t = 14.323, R2 = 0.474, p < 0.001), emission reduction (B = 0.450, t = 9.950, R2 = 0.303, p < 0.001), and resource optimisation (B = 0.426, t = 10.316, R2 = 0.318, p < 0.001). These results confirm strong positive correlations between AI and the three sustainability outcomes, with AI accounting for between 30.3% and 47.4% of the variance in the dependent variables. This study presents new empirical insights into the role of artificial intelligence technologies in supporting national sustainability objectives and energy transition efforts. Full article
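The B and R2 values this abstract reports are standard simple-regression quantities. A minimal sketch of how they are computed, on toy scores rather than the study's survey responses:

```python
def simple_ols(x, y):
    """Ordinary least squares for y = a + B*x, returning the slope B
    (the unstandardised coefficient reported as 'B' in regression
    tables) and R^2, the share of variance in y explained by x."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    s_xy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    s_xx = sum((xi - mean_x) ** 2 for xi in x)
    slope = s_xy / s_xx
    intercept = mean_y - slope * mean_x
    ss_res = sum((yi - intercept - slope * xi) ** 2
                 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return slope, 1 - ss_res / ss_tot

# Toy AI-adoption scores vs. sustainability-outcome scores: a perfect
# linear relation gives R^2 = 1.0.
B, r2 = simple_ols([1, 2, 3, 4], [3, 5, 7, 9])  # B = 2.0, r2 = 1.0
```

The t and p values in the abstract come from dividing B by its standard error; survey studies typically report all four together, as here.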

16 pages, 1776 KB  
Review
Artificial Intelligence and the Future of Cardiac Implantable Electronic Devices: Diagnostics, Monitoring, and Therapy
by Ibrahim Antoun, Alkassem Alkhayer, Ahmed Abdelrazik, Mahmoud Eldesouky, Kaung Myat Thu, Harshil Dhutia, Riyaz Somani and G. André Ng
J. Clin. Med. 2025, 14(24), 8824; https://doi.org/10.3390/jcm14248824 - 13 Dec 2025
Cited by 6 | Viewed by 1235
Abstract
Cardiac implantable electronic devices (CIEDs) such as pacemakers, implantable cardioverter-defibrillators (ICDs), and cardiac resynchronisation therapy (CRT) devices are generating unprecedented volumes of data in both inpatient and remote settings. Artificial intelligence (AI) techniques are increasingly being applied to enhance the management of these devices and the patients who rely on them. Recent advances demonstrate that machine learning (ML) and deep learning (DL) can improve diagnostic capabilities (for example, by detecting arrhythmias and predicting clinical events), streamline remote monitoring workflows, and optimise device-based therapies. Key applications include AI-driven algorithms that accurately detect true arrhythmias while filtering out false alerts from pacemakers and implantable monitors, neural network models that predict ventricular arrhythmias weeks before ICD shocks, and personalised models that forecast which heart failure patients will respond to CRT. Moreover, novel approaches such as natural language processing (NLP) and reinforcement learning are being explored to integrate diverse data sources and to enable devices to self-adjust their programming. This narrative review summarises the major applications of AI in the CIED domain—diagnostics, remote monitoring, and therapy optimisation—with an emphasis on the recent literature over the past five years. The review highlights important studies and randomised trials in each area, discusses the variety of AI techniques employed, and outlines future directions and challenges (including data standardisation, validation in clinical trials, and regulatory considerations) for translating these innovations into routine clinical care. Full article
(This article belongs to the Special Issue Application of Artificial Intelligence in Cardiology)
