Search Results (58)

Search Parameters:
Keywords = AI hardware security

36 pages, 4047 KB  
Review
Application of FPGA Devices in Network Security: A Survey
by Abdulmunem A. Abdulsamad and Sándor R. Répás
Electronics 2025, 14(19), 3894; https://doi.org/10.3390/electronics14193894 - 30 Sep 2025
Abstract
Field-Programmable Gate Arrays (FPGAs) are increasingly shaping the future of network security, thanks to their flexibility, parallel processing capabilities, and energy efficiency. In this survey, we examine 50 peer-reviewed studies published between 2020 and 2025, selected from an initial pool of 210 articles based on relevance, hardware implementation, and the presence of empirical performance data. Our review focuses on five major application areas: cryptographic acceleration, intrusion detection and prevention systems (IDS/IPS), hardware firewalls, artificial intelligence (AI) integration, and post-quantum cryptography (PQC). We propose a structured taxonomy that organises the field by technical domain and challenge, and compare solutions in terms of scalability, resource usage, and real-world performance. Beyond summarising current advances, we explore ongoing limitations, such as hardware constraints, integration complexity, and the lack of standard benchmarking. We also outline future research directions, including low-power cryptographic designs, FPGA–AI collaboration for detecting zero-day attacks, and efficient PQC implementations. This survey aims to offer both a clear overview of recent progress and a valuable roadmap for researchers and engineers working toward secure, high-performance FPGA-based systems.

37 pages, 3784 KB  
Review
A Review on the Detection of Plant Disease Using Machine Learning and Deep Learning Approaches
by Thandiwe Nyawose, Rito Clifford Maswanganyi and Philani Khumalo
J. Imaging 2025, 11(10), 326; https://doi.org/10.3390/jimaging11100326 - 23 Sep 2025
Viewed by 492
Abstract
The early and accurate detection of plant diseases is essential for ensuring food security, enhancing crop yields, and facilitating precision agriculture. Manual methods are labour-intensive and prone to error, especially under varying environmental conditions. Artificial intelligence (AI), particularly machine learning (ML) and deep learning (DL), has advanced automated disease identification through image classification. However, challenges persist, including limited generalisability, small and imbalanced datasets, and poor real-world performance. Unlike previous reviews, this paper critically evaluates model performance in both lab and real-time field conditions, emphasising robustness, generalisation, and suitability for edge deployment. It introduces recent architectures such as GreenViT, hybrid ViT–CNN models, and YOLO-based single- and two-stage detectors, comparing their accuracy, inference speed, and hardware efficiency. The review discusses multimodal and self-supervised learning techniques to enhance detection in complex environments, highlighting key limitations, including reliance on handcrafted features, overfitting, and sensitivity to environmental noise. Strengths and weaknesses of models across diverse datasets are analysed with a focus on real-time agricultural applicability. The paper concludes by identifying research gaps and outlining future directions, including the development of lightweight architectures, integration with Deep Convolutional Generative Adversarial Networks (DCGANs), and improved dataset diversity for real-world deployment in precision agriculture.
(This article belongs to the Section Image and Video Processing)

34 pages, 7182 KB  
Article
AI-Driven Attack Detection and Cryptographic Privacy Protection for Cyber-Resilient Industrial Control Systems
by Archana Pallakonda, Kabilan Kaliyannan, Rahul Loganathan Sumathi, Rayappa David Amar Raj, Rama Muni Reddy Yanamala, Christian Napoli and Cristian Randieri
IoT 2025, 6(3), 56; https://doi.org/10.3390/iot6030056 - 22 Sep 2025
Viewed by 285
Abstract
Industrial control systems (ICS) are increasingly vulnerable to evolving cyber threats due to the convergence of operational and information technologies. This research presents a robust cybersecurity framework that integrates machine learning-based anomaly detection with advanced cryptographic techniques to protect ICS communication networks. Using the ICS-Flow dataset, we evaluate several ensemble models, with XGBoost achieving 99.92% accuracy in binary classification and Decision Tree attaining 99.81% accuracy in multi-class classification. Additionally, we implement an LSTM autoencoder for temporal anomaly detection and employ the ADWIN technique for real-time drift detection. To ensure data security, we apply AES-CBC with HMAC and AES-GCM with RSA encryption, which demonstrates resilience against brute-force, tampering, and cryptanalytic attacks. Security assessments, including entropy analysis and adversarial evaluations (IND-CPA and IND-CCA), confirm the robustness of the encryption schemes against passive and active threats. A hardware implementation on a PYNQ Zynq board shows the feasibility of real-time deployment, with a runtime of 0.11 s. The results demonstrate that the proposed framework enhances ICS security by combining AI-driven anomaly detection with RSA-based cryptography, offering a viable solution for protecting ICS networks from emerging cyber threats.
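The tamper-resistance the authors attribute to AES-CBC with HMAC comes from the encrypt-then-MAC pattern, which can be illustrated with Python's standard library alone. This is a minimal sketch, not the paper's implementation: the key, payload, and tag layout are invented placeholders, and a real deployment would MAC genuine AES ciphertext rather than plaintext bytes.

```python
import hmac, hashlib, os

TAG_LEN = 32  # HMAC-SHA256 digest size in bytes

def protect(key: bytes, ciphertext: bytes) -> bytes:
    """Append an HMAC-SHA256 tag (encrypt-then-MAC)."""
    tag = hmac.new(key, ciphertext, hashlib.sha256).digest()
    return ciphertext + tag

def verify(key: bytes, blob: bytes) -> bytes:
    """Return the ciphertext if the tag checks out, else raise."""
    ciphertext, tag = blob[:-TAG_LEN], blob[-TAG_LEN:]
    expected = hmac.new(key, ciphertext, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):  # constant-time comparison
        raise ValueError("tag mismatch: message was tampered with")
    return ciphertext

key = os.urandom(32)
blob = protect(key, b"sensor frame 0x1A")
assert verify(key, blob) == b"sensor frame 0x1A"

tampered = bytes([blob[0] ^ 1]) + blob[1:]  # flip one ciphertext bit
try:
    verify(key, tampered)
except ValueError:
    print("tamper detected")
```

`hmac.compare_digest` matters here: a naive `==` comparison can leak timing information an active attacker could exploit.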

49 pages, 1462 KB  
Article
A Deep Learning Approach for Real-Time Intrusion Mitigation in Automotive Controller Area Networks
by Anila Kousar, Saeed Ahmed and Zafar A. Khan
World Electr. Veh. J. 2025, 16(9), 492; https://doi.org/10.3390/wevj16090492 - 1 Sep 2025
Cited by 1 | Viewed by 600 | Correction
Abstract
The digital revolution has profoundly influenced the automotive industry, shifting the paradigm from conventional vehicles to smart cars (SCs). SCs rely on in-vehicle communication among electronic control units (ECUs) enabled by assorted protocols. The Controller Area Network (CAN) serves as the de facto standard for interconnecting these units, enabling critical functionalities. However, CAN's inherited lack of message delineation, whereby frames are broadcast without explicit destination addressing, poses significant security risks, necessitating an astute and resilient self-defense mechanism (SDM) to neutralize cyber threats. To this end, this study introduces a lightweight intrusion mitigation mechanism based on an adaptive momentum-based deep denoising autoencoder (AM-DDAE). Employing real-time CAN bus data from well-known smart vehicles, the proposed framework effectively reconstructs original data compromised by adversarial activities. Simulation results illustrate the efficacy of the AM-DDAE-based SDM, achieving a reconstruction error (RE) of less than 1% and an average execution time of 0.145532 s for data recovery. When validated on a previously unseen attack and on an adversarial machine learning attack, the proposed model demonstrated equally strong performance, with RE < 1%. Furthermore, the model's decision-making was analysed using explainable AI techniques such as SHAP and LIME. Additionally, the scheme offers flexible deployment options: it can either be (a) embedded directly into individual ECU firmware or (b) implemented as a centralized hardware component interfacing between the CAN bus and the ECUs, preloaded with the proposed mitigation algorithm.
(This article belongs to the Special Issue Vehicular Communications for Cooperative and Automated Mobility)

47 pages, 10198 KB  
Article
A Comprehensive Survey on Wearable Computing for Mental and Physical Health Monitoring
by Tarek Elfouly and Ali Alouani
Electronics 2025, 14(17), 3443; https://doi.org/10.3390/electronics14173443 - 29 Aug 2025
Viewed by 2367
Abstract
Wearable computing is evolving from a passive data collection paradigm into an active, precision-guided health orchestration system. This survey synthesizes developments across sensing modalities, wireless protocols, computational frameworks, and AI-driven analytics that collectively define the state of the art in mental and physical health monitoring. A narrative review methodology is used to map the landscape of hardware innovations—including microfluidic sweat sensing, smart textiles, and textile-embedded biosensing ecosystems—alongside advances in on-device AI acceleration, context-aware multimodal fusion, and privacy-preserving learning frameworks. The analysis highlights a shift toward multiplexed biochemical sensing for real-time metabolic profiling, neuromorphic and analog AI processors for ultra–low-power analytics, and closed-loop therapeutic systems capable of adapting interventions dynamically to both physiological and psychological states. These trends are examined in the context of emerging clinical and consumer use cases, with a focus on scalability, personalization, and data security. By grounding these insights in current research trajectories, this work positions wearable computing as a cornerstone of preventive, personalized, and participatory healthcare. Addressing identified technical and ethical challenges will be essential for the next generation of systems to become trusted, equitable, and clinically indispensable tools.

23 pages, 535 KB  
Article
Feasibility Evaluation of Secure Offline Large Language Models with Retrieval-Augmented Generation for CPU-Only Inference
by Erick Tyndall, Torrey Wagner, Colleen Gayheart, Alexandre Some and Brent Langhals
Information 2025, 16(9), 744; https://doi.org/10.3390/info16090744 - 28 Aug 2025
Viewed by 666
Abstract
Recent advances in large language models and retrieval-augmented generation, a method that enhances language models by integrating retrieved external documents, have created opportunities to deploy AI in secure, offline environments. This study explores the feasibility of using locally hosted, open-weight large language models [...] Read more.
Recent advances in large language models and retrieval-augmented generation, a method that enhances language models by integrating retrieved external documents, have created opportunities to deploy AI in secure, offline environments. This study explores the feasibility of using locally hosted, open-weight large language models with integrated retrieval-augmented generation capabilities on CPU-only hardware for tasks such as question answering and summarization. The evaluation reflects typical constraints in environments like government offices, where internet access and GPU acceleration may be restricted. Four models were tested using LocalGPT, a privacy-focused retrieval-augmented generation framework, on two consumer-grade systems: a laptop and a workstation. A technical project management textbook served as the source material. Performance was assessed using BERTScore and METEOR metrics, along with latency and response timing. All models demonstrated strong performance in direct question answering, providing accurate responses despite limited computational resources. However, summarization tasks showed greater variability, with models sometimes producing vague or incomplete outputs. The analysis also showed that quantization and hardware differences affected response time more than output quality; this is a tradeoff that should be considered in potential use cases. This study does not aim to rank models but instead highlights practical considerations in deploying large language models locally. The findings suggest that secure, CPU-only deployments are viable for structured tasks like factual retrieval, although limitations remain for more generative applications such as summarization. This feasibility-focused evaluation provides guidance for organizations seeking to use local large language models under privacy and resource constraints and lays the groundwork for future research in secure, offline AI systems. Full article
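The retrieval step at the heart of such a retrieval-augmented pipeline can be sketched with the standard library alone: rank documents by cosine similarity to the query and hand the best match to the generator. This is a toy bag-of-words sketch under stated assumptions — LocalGPT and comparable frameworks use dense vector embeddings, and the three-sentence corpus below is invented for illustration.

```python
import math, re
from collections import Counter

def tokens(text: str) -> Counter:
    """Lowercased bag-of-words term counts."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str]) -> str:
    """Return the document most similar to the query."""
    q = tokens(query)
    return max(docs, key=lambda d: cosine(q, tokens(d)))

docs = [
    "Milestones and deliverables define the project schedule.",
    "Risk registers track known threats to the project.",
    "Earned value analysis compares planned and actual cost.",
]
# The risk-register sentence shares the most terms with the query.
print(retrieve("what tracks project threats", docs))
```

In a full pipeline, the retrieved passage would be prepended to the prompt before the language model generates its answer.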

28 pages, 2070 KB  
Article
Enhancing Security and Applicability of Local LLM-Based Document Retrieval Systems in Smart Grid Isolated Environments
by Kiho Lee, Sumi Yang, Jaeyeong Jeong, Yongjoon Lee and Dongkyoo Shin
Electronics 2025, 14(17), 3407; https://doi.org/10.3390/electronics14173407 - 27 Aug 2025
Viewed by 621
Abstract
The deployment of large language models (LLMs) in closed-network industrial environments remains constrained by privacy and connectivity limitations. This study presents a retrieval-augmented question-answering system designed to operate entirely offline, integrating local vector embeddings, ontology-based semantic enrichment, and quantized LLMs, while ensuring compliance with industrial security standards like IEC 62351. The system was implemented using OpenChat-3.5 models with two quantization variants (Q5 and Q8), and evaluated through comparative experiments focused on response accuracy, generation speed, and secure document handling. Empirical results show that both quantized models delivered comparable answer quality, with the Q5 variant achieving approximately 1.5 times faster token generation under limited hardware. The ontology-enhanced retriever further improved semantic relevance by incorporating structured domain knowledge into the retrieval stage. Throughout the experiments, the system demonstrated effective performance across speed, accuracy, and information containment—core requirements for AI deployment in security-sensitive domains. These findings underscore the practical viability of offline LLM systems for privacy-compliant document search, while also highlighting architectural considerations essential for extending their utility to environments such as smart grids or defense-critical infrastructures.

16 pages, 1492 KB  
Proceeding Paper
Hardware Challenges in AI Sensors and Innovative Approaches to Overcome Them
by Filip Tsvetanov
Eng. Proc. 2025, 104(1), 19; https://doi.org/10.3390/engproc2025104019 - 25 Aug 2025
Viewed by 1588
Abstract
Intelligent sensors with embedded AI are key to modern cyber-physical systems. They find applications in industrial automation, medical diagnostics and healthcare, smart cities, and autonomous systems. Despite their significant potential, they face several hardware challenges related to computing power, energy consumption, communication capabilities, and security, which limit their effectiveness. This article analyzes factors influencing the production and deployment of AI sensors. The key limitations are energy efficiency, computing power, scalability, and integration of AI sensors in real-time conditions. Among the main problems are the high requirements for data processing, the limitations of traditional microprocessors, and the balance between performance and energy consumption. To meet these challenges, the article presents several practical and innovative approaches, including the development of specialized microprocessors and optimized architectures for “edge computing,” which promise radical reductions in latency and power consumption. Through a synthesis of current research and practical examples, the article emphasizes the need for intermediate hardware–software solutions and standardization for mass deployment of AI sensors.

28 pages, 968 KB  
Article
EVuLLM: Ethereum Smart Contract Vulnerability Detection Using Large Language Models
by Eleni Mandana, George Vlahavas and Athena Vakali
Electronics 2025, 14(16), 3226; https://doi.org/10.3390/electronics14163226 - 14 Aug 2025
Viewed by 1047
Abstract
Smart contracts have become integral to decentralized applications, yet their programmability introduces critical security risks, exemplified by high-profile exploits such as the DAO and Parity Wallet incidents. Existing vulnerability detection methods, including static and dynamic analysis, as well as machine learning-based approaches, often struggle with emerging threats and rely heavily on large, labeled datasets. This study investigates the effectiveness of open-source, lightweight large language models (LLMs) fine-tuned using parameter-efficient techniques, including Quantized Low-Rank Adaptation (QLoRA), for smart contract vulnerability detection. We introduce the EVuLLM dataset to address the scarcity of diverse evaluation resources and demonstrate that our fine-tuned models achieve up to 94.78% accuracy, surpassing the performance of larger proprietary models, while significantly reducing computational requirements. Moreover, we emphasize the advantages of lightweight models deployable on local hardware, such as enhanced data privacy, reduced reliance on internet connectivity, lower infrastructure costs, and improved control over model behavior, factors that are especially critical in security-sensitive blockchain applications. We also explore Retrieval-Augmented Generation (RAG) as a complementary strategy, achieving competitive results with minimal training. Our findings highlight the practicality of using locally hosted LLMs for secure, efficient, and reproducible smart contract analysis, paving the way for broader adoption of AI-driven security in blockchain ecosystems.
(This article belongs to the Special Issue Network Security and Cryptography Applications)
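The low-rank adaptation idea behind QLoRA (shown here without the quantization step) can be sketched in NumPy: a frozen pretrained weight W gains a trainable update B·A scaled by alpha/r, and because B is initialized to zero, the adapted model starts out exactly equal to the base model. The dimensions and random values below are arbitrary illustrations, not anything from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, r, alpha = 8, 8, 2, 16  # output dim, input dim, LoRA rank, scaling factor

W = rng.standard_normal((d, k))         # frozen pretrained weight (not trained)
A = rng.standard_normal((r, k)) * 0.01  # trainable down-projection
B = np.zeros((d, r))                    # trainable up-projection, zero-initialized

def lora_forward(x: np.ndarray) -> np.ndarray:
    # Base path plus low-rank update, scaled by alpha / r.
    return x @ W.T + (alpha / r) * (x @ A.T @ B.T)

x = rng.standard_normal((1, k))
# With B = 0, the LoRA output equals the frozen model's output exactly.
assert np.allclose(lora_forward(x), x @ W.T)
```

Only A and B (2·r·k values here instead of d·k) would receive gradients during fine-tuning, which is what makes the approach parameter-efficient.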

26 pages, 1033 KB  
Article
Internet of Things Platform for Assessment and Research on Cybersecurity of Smart Rural Environments
by Daniel Sernández-Iglesias, Llanos Tobarra, Rafael Pastor-Vargas, Antonio Robles-Gómez, Pedro Vidal-Balboa and João Sarraipa
Future Internet 2025, 17(8), 351; https://doi.org/10.3390/fi17080351 - 1 Aug 2025
Viewed by 579
Abstract
Rural regions face significant barriers to adopting IoT technologies due to limited connectivity, energy constraints, and poor technical infrastructure. While urban environments benefit from advanced digital systems and cloud services, rural areas often lack the conditions needed to deploy and evaluate secure and autonomous IoT solutions. To help overcome this gap, this paper presents the Smart Rural IoT Lab, a modular and reproducible testbed designed to replicate deployment conditions in rural areas using open-source tools and affordable hardware. The laboratory integrates long-range and short-range communication technologies in six experimental scenarios, implementing protocols such as MQTT, HTTP, UDP, and CoAP. These scenarios simulate realistic rural use cases, including environmental monitoring, livestock tracking, infrastructure access control, and heritage site protection. Local data processing is achieved through containerized services such as Node-RED, InfluxDB, MongoDB, and Grafana, ensuring complete autonomy without dependence on cloud services. A key contribution of the laboratory is the generation of structured datasets from real network traffic captured with Tcpdump and preprocessed using Zeek. Unlike simulated datasets, the collected data reflect communication patterns generated by real devices. Although the current dataset includes only benign traffic, the platform is prepared for the future incorporation of adversarial scenarios (spoofing, DoS) to support AI-based cybersecurity research. While experiments were conducted in a controlled indoor environment, the testbed architecture is portable and suitable for future outdoor deployment. The Smart Rural IoT Lab addresses a critical gap in current research infrastructure, providing a realistic and flexible foundation for developing secure, cloud-independent IoT solutions and contributing to the digital transformation of rural regions.
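Of the protocols the testbed exercises, UDP telemetry is simple enough to sketch with the standard library alone: one sensor reading serialized as JSON and sent as a single datagram. The node name and field values are invented, and both endpoints run on loopback purely for illustration — this is not the lab's actual traffic format.

```python
import json, socket

# Loopback stand-in for a field gateway: bind to an ephemeral port.
server = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
server.bind(("127.0.0.1", 0))
host, port = server.getsockname()

# A hypothetical sensor node sends one reading as a JSON datagram.
reading = {"node": "soil-07", "temp_c": 18.4, "moisture": 0.31}
client = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
client.sendto(json.dumps(reading).encode(), (host, port))

# The gateway receives and decodes the reading unchanged.
payload, _ = server.recvfrom(1024)
assert json.loads(payload) == reading
client.close()
server.close()
```

Traffic like this, captured with Tcpdump and parsed by Zeek as the paper describes, is what yields the structured datasets the lab contributes.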

30 pages, 10173 KB  
Article
Integrated Robust Optimization for Lightweight Transformer Models in Low-Resource Scenarios
by Hui Huang, Hengyu Zhang, Yusen Wang, Haibin Liu, Xiaojie Chen, Yiling Chen and Yuan Liang
Symmetry 2025, 17(7), 1162; https://doi.org/10.3390/sym17071162 - 21 Jul 2025
Viewed by 932
Abstract
With the rapid proliferation of artificial intelligence (AI) applications, an increasing number of edge devices—such as smartphones, cameras, and embedded controllers—are being tasked with performing AI-based inference. Due to constraints in storage capacity, computational power, and network connectivity, these devices are often categorized as operating in resource-constrained environments. In such scenarios, deploying powerful Transformer-based models like ChatGPT and Vision Transformers is highly impractical because of their large parameter sizes and intensive computational requirements. While lightweight Transformer models, such as MobileViT, offer a promising solution to meet storage and computational limitations, their robustness remains insufficient. This poses a significant security risk for AI applications, particularly in critical edge environments. To address this challenge, our research focuses on enhancing the robustness of lightweight Transformer models under resource-constrained conditions. First, we propose a comprehensive robustness evaluation framework tailored for lightweight Transformer inference. This framework assesses model robustness across three key dimensions: noise robustness, distributional robustness, and adversarial robustness. It further investigates how model size and hardware limitations affect robustness, thereby providing valuable insights for robustness-aware model design. Second, we introduce a novel adversarial robustness enhancement strategy that integrates lightweight modeling techniques. This approach leverages methods such as gradient clipping and layer-wise unfreezing, as well as decision boundary optimization techniques like TRADES and SMART. Together, these strategies effectively address challenges related to training instability and decision boundary smoothness, significantly improving model robustness. Finally, we deploy the robust lightweight Transformer models in real-world resource-constrained environments and empirically validate their inference robustness. The results confirm the effectiveness of our proposed methods in enhancing the robustness and reliability of lightweight Transformers for edge AI applications.
(This article belongs to the Section Mathematics)
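Of the training-stabilization techniques the authors combine, gradient clipping is the most self-contained to illustrate. A NumPy sketch of the common global-norm variant follows; the threshold and gradient values are invented for illustration and are not the paper's settings.

```python
import numpy as np

def clip_by_global_norm(grads: list[np.ndarray], max_norm: float) -> list[np.ndarray]:
    """Rescale a list of gradient arrays so their joint L2 norm is at most max_norm."""
    total = float(np.sqrt(sum(float(np.sum(g * g)) for g in grads)))
    scale = min(1.0, max_norm / total) if total > 0 else 1.0
    return [g * scale for g in grads]

grads = [np.array([3.0, 4.0]), np.array([0.0])]
clipped = clip_by_global_norm(grads, max_norm=1.0)
# Joint norm was 5.0; after clipping it is exactly 1.0, with direction preserved.
assert np.isclose(np.sqrt(sum(np.sum(g * g) for g in clipped)), 1.0)
```

Because every array is scaled by the same factor, the update direction is preserved; only its magnitude is bounded, which is what damps the training instability the abstract mentions.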

33 pages, 2217 KB  
Review
A Comprehensive Review of Artificial Intelligence-Based Algorithms for Predicting the Remaining Useful Life of Equipment
by Weihao Li, Jianhua Chen, Sijuan Chen, Peilin Li, Bing Zhang, Ming Wang, Ming Yang, Jipu Wang, Dejian Zhou and Junsen Yun
Sensors 2025, 25(14), 4481; https://doi.org/10.3390/s25144481 - 18 Jul 2025
Cited by 1 | Viewed by 851
Abstract
In the contemporary big data era, data-driven prognostic and health management (PHM) methodologies have emerged as indispensable tools for ensuring the secure and reliable operation of complex equipment systems. Central to these methodologies is the accurate prediction of remaining useful life (RUL), which serves as a pivotal cornerstone for effective maintenance and operational decision-making. While significant advancements in computer hardware and artificial intelligence (AI) algorithms have catalyzed substantial progress in AI-based RUL prediction, extant research frequently exhibits a narrow focus on specific algorithms, neglecting a comprehensive and comparative analysis of AI techniques across diverse equipment types and operational scenarios. This study endeavors to bridge this gap through the following contributions: (1) A rigorous analysis and systematic categorization of application scenarios for equipment RUL prediction, elucidating their distinct characteristics and requirements. (2) A comprehensive summary and comparative evaluation of several AI algorithms deemed suitable for RUL prediction, delineating their respective strengths and limitations. (3) An in-depth comparative analysis of the applicability of AI algorithms across varying application contexts, informed by a nuanced understanding of different application scenarios and AI algorithm research. (4) An insightful discussion on the current challenges confronting AI-based RUL prediction technology, coupled with a forward-looking examination of its future prospects. By furnishing a meticulous and holistic understanding of the traits of various AI algorithms and their contextual applicability, this study aspires to facilitate the attainment of optimal application outcomes in the realm of equipment RUL prediction.
(This article belongs to the Section Intelligent Sensors)

23 pages, 1575 KB  
Article
An Integrated Blockchain Framework for Secure Autonomous Vehicle Communication System
by Juan de Anda-Suárez, José Luis López-Ramírez, Daniel Jimenez-Mendoza, José Manuel Benitez-Quintero, Eli Gabriel Avina-Bravo, David Asael Gutierrez-Hernandez and Juan Gabriel Avina-Cervantes
Information 2025, 16(7), 557; https://doi.org/10.3390/info16070557 - 30 Jun 2025
Viewed by 837
Abstract
Autonomous Vehicles (AVs) have been extensively studied in both scientific and social contexts. Over the past two decades, there has been a significant rise in real-world applications of their enabling technologies, including neural networks, Blockchain, the Internet of Things, autonomous navigation, computer vision, and automation processes. Hence, it is imperative to investigate the interplay between software, hardware, and people. To guarantee secure and untampered interactions within autonomous vehicle devices and networks, decentralized Blockchain technology is proposed. This study introduces a framework, named "DEMU-NAV", for an ecosystem that includes Artificial Intelligence (AI), humans, and robots. The framework makes use of a decentralized Blockchain, Smart Contracts (SCs), and an Internet of Things (IoT) network. Our framework was implemented using Ethereum and Python, enabling us to oversee the Blockchain, Smart Contracts, and the IoT for the facilitation of autonomous vehicle navigation.
(This article belongs to the Special Issue Blockchain, Technology and Its Application)

46 pages, 4362 KB  
Review
AI-Driven Wearable Bioelectronics in Digital Healthcare
by Guangqi Huang, Xiaofeng Chen and Caizhi Liao
Biosensors 2025, 15(7), 410; https://doi.org/10.3390/bios15070410 - 26 Jun 2025
Cited by 3 | Viewed by 5974
Abstract
The integration of artificial intelligence (AI) with wearable bioelectronics is revolutionizing digital healthcare by enabling proactive, personalized, and data-driven medical solutions. These advanced devices, equipped with multimodal sensors and AI-powered analytics, facilitate real-time monitoring of physiological and biochemical parameters—such as cardiac activity, glucose levels, and biomarkers—allowing for early disease detection, chronic condition management, and precision therapeutics. By shifting healthcare from reactive to preventive paradigms, AI-driven wearables address critical challenges, including rising chronic disease burdens, aging populations, and healthcare accessibility gaps. However, their widespread adoption faces technical, ethical, and regulatory hurdles, such as data interoperability, privacy concerns, algorithmic bias, and the need for robust clinical validation. This review comprehensively examines the current state of AI-enhanced wearable bioelectronics, covering (1) foundational technologies in sensor design, AI algorithms, and energy-efficient hardware; (2) applications in continuous health monitoring, diagnostics, and personalized interventions; (3) key challenges in scalability, security, and regulatory compliance; and (4) future directions involving 5G, the IoT, and global standardization efforts. We highlight how these technologies could democratize healthcare through remote patient monitoring and resource optimization while emphasizing the imperative of interdisciplinary collaboration to ensure equitable, secure, and clinically impactful deployment. By synthesizing advancements and critical gaps, this review aims to guide researchers, clinicians, and policymakers toward responsible innovation in the next generation of digital healthcare.

23 pages, 650 KB  
Review
Advancing TinyML in IoT: A Holistic System-Level Perspective for Resource-Constrained AI
by Leandro Antonio Pazmiño Ortiz, Ivonne Fernanda Maldonado Soliz and Vanessa Katherine Guevara Balarezo
Future Internet 2025, 17(6), 257; https://doi.org/10.3390/fi17060257 - 11 Jun 2025
Cited by 1 | Viewed by 2552
Abstract
Resource-constrained devices, including low-power Internet of Things (IoT) nodes, microcontrollers, and edge computing platforms, have increasingly become the focal point for deploying on-device intelligence. By integrating artificial intelligence (AI) closer to data sources, these systems aim to achieve faster responses, reduce bandwidth usage, and preserve privacy. Nevertheless, implementing AI in limited hardware environments poses substantial challenges in terms of computation, energy efficiency, model complexity, and reliability. This paper provides a comprehensive review of state-of-the-art methodologies, examining how recent advances in model compression, TinyML frameworks, and federated learning paradigms are enabling AI in tightly constrained devices. We highlight both established and emergent techniques for optimizing resource usage while addressing security, privacy, and ethical concerns. We then illustrate opportunities in key application domains—such as healthcare, smart cities, agriculture, and environmental monitoring—where localized intelligence on resource-limited devices can have broad societal impact. By exploring architectural co-design strategies, algorithmic innovations, and pressing research gaps, this paper offers a roadmap for future investigations and industrial applications of AI in resource-constrained devices.
