Search Results (1,212)

Search Parameters:
Keywords = IoT edge computing

25 pages, 2737 KB  
Review
Integration of Artificial Intelligence in Food Processing Technologies
by Ali Ayoub
Processes 2026, 14(3), 513; https://doi.org/10.3390/pr14030513 (registering DOI) - 2 Feb 2026
Abstract
The food processing industry is undergoing a profound transformation with the integration of Artificial Intelligence (AI), evolving from traditional automation to intelligent, adaptive systems aligned with Industry 5.0 principles. This review examines AI’s role across the food value chain, including supply chain management, quality control, process optimization in key unit operations, and emerging areas. Recent advancements in machine learning (ML), computer vision, and predictive analytics have significantly improved detection in food processing, achieving accuracy exceeding 98%. These technologies have also contributed to energy savings of 15–20% and reduced waste through real-time process optimization and predictive maintenance. The integration of blockchain and Internet of Things (IoT) technologies further strengthens traceability and sustainability across the supply chain, while generative AI accelerates the development of novel food products. Despite these benefits, several challenges persist, including substantial implementation costs, heterogeneous data sources, ethical considerations related to workforce displacement, and the opaque, “black box” nature of many AI models. Moreover, the effectiveness of AI solutions remains context-dependent; some studies report only marginal improvements in dynamic or data-poor environments. Looking ahead, the sector is expected to embrace autonomous manufacturing, edge computing, and bio-computing, with projections indicating that the AI market in food processing could approach $90 billion by 2030. Full article

35 pages, 8093 KB  
Article
DACCA: Distributed Adaptive Cloud Continuum Architecture
by Nektarios Deligiannakis, Vassilis Papataxiarhis, Michalis Loukeris, Stathes Hadjiefthymiades, Marios Touloupou, Syed Mafooq Ul Hassan, Herodotos Herodotou, Thanasis Moustakas, Emmanouil Bampis, Konstantinos Ioannidis, Iakovos T. Michailidis, Stefanos Vrochidis, Elias Kosmatopoulos, Francisco Javier Romero Martínez, Rafael Marín Pérez, Amr Mousa, Jacopo Castellini and Pablo Strasser
Future Internet 2026, 18(2), 74; https://doi.org/10.3390/fi18020074 (registering DOI) - 1 Feb 2026
Abstract
Recently, the need for unified orchestration frameworks that can manage extremely heterogeneous, distributed, and resource-constrained environments has emerged due to the rapid development of cloud, edge, and IoT computing. Kubernetes and other traditional cloud-native orchestration systems are not built to facilitate autonomous, decentralized decision-making across the computing continuum or to seamlessly integrate non-container-native devices. This paper presents the Distributed Adaptive Cloud Continuum Architecture (DACCA), a Kubernetes-native architecture that extends orchestration beyond the data center to encompass edge and Internet of Things infrastructures. Decentralized self-awareness and swarm formation are supported for adaptive and resilient operation, a resource and application abstraction layer is established for uniform resource representation, and a Distributed and Adaptive Resource Optimization (DARO) framework based on multi-agent reinforcement learning is integrated for intelligent scheduling in the proposed architecture. Verifiable identity, access control, and tamper-proof data exchange across heterogeneous domains are further ensured by a zero-trust security framework based on distributed ledger technology. When combined, these elements enable increasingly autonomous workload orchestration, trading centralized control for adaptive, decentralized operation with enhanced interoperability, scalability, and trust. Thus, the proposed architecture enables self-managing and context-aware orchestration systems that support next-generation AI-driven distributed applications across the entire computing continuum. Full article
26 pages, 3401 KB  
Article
Toward an Integrated IoT–Edge Computing Framework for Smart Stadium Development
by Nattawat Pattarawetwong, Charuay Savithi and Arisaphat Suttidee
J. Sens. Actuator Netw. 2026, 15(1), 15; https://doi.org/10.3390/jsan15010015 (registering DOI) - 1 Feb 2026
Abstract
Large sports stadiums require robust real-time monitoring due to high crowd density, complex spatial configurations, and limited network infrastructure. This research evaluates a hybrid edge–cloud architecture implemented in a national stadium in Thailand. The proposed framework integrates diverse surveillance subsystems, including automatic number plate recognition, face recognition, and panoramic cameras, with edge-based processing to enable real-time situational awareness during high-attendance events. A simulation based on the stadium’s physical layout and operational characteristics is used to analyze coverage patterns, processing locations, and network performance under realistic event scenarios. The results show that geometry-informed sensor deployment ensures continuous visual coverage and minimizes blind zones without increasing camera density. Furthermore, relocating selected video processing tasks from the cloud to the edge reduces uplink bandwidth requirements by approximately 50–75%, depending on the processing configuration, and stabilizes data transmission during peak network loads. These findings suggest that processing location should be considered a primary architectural design factor in smart stadium systems. The combination of edge-based processing with centralized cloud coordination offers a practical model for scalable, safety-oriented monitoring solutions in high-density public venues. Full article
(This article belongs to the Section Big Data, Computing and Artificial Intelligence)
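
For a sense of where the reported 50–75% uplink savings come from, the sketch below compares streaming raw video to the cloud against forwarding only edge-derived detection metadata. It is a back-of-the-envelope Python illustration; the camera count, bitrates, payload size, and the share of streams still sent raw are assumptions, not figures from the paper.

```python
# Rough uplink-bandwidth comparison for cloud-only vs. edge-assisted processing.
# All numbers are illustrative assumptions, not values taken from the paper.

CAMERAS = 40                  # hypothetical camera count
RAW_STREAM_MBPS = 4.0         # hypothetical per-camera video bitrate
EVENTS_PER_SEC = 5            # hypothetical detections per camera emitted at the edge
EVENT_SIZE_BYTES = 600        # hypothetical metadata payload per detection
RAW_FRACTION_KEPT = 0.3       # hypothetical share of cameras still streamed raw to the cloud

def cloud_only_mbps() -> float:
    """Every camera streams full video to the cloud."""
    return CAMERAS * RAW_STREAM_MBPS

def edge_assisted_mbps() -> float:
    """Most analytics run at the edge; only metadata (plus a few raw streams) goes uplink."""
    metadata = CAMERAS * EVENTS_PER_SEC * EVENT_SIZE_BYTES * 8 / 1e6
    raw_kept = RAW_FRACTION_KEPT * CAMERAS * RAW_STREAM_MBPS
    return metadata + raw_kept

if __name__ == "__main__":
    cloud, edge = cloud_only_mbps(), edge_assisted_mbps()
    print(f"cloud-only uplink:    {cloud:6.1f} Mbps")
    print(f"edge-assisted uplink: {edge:6.1f} Mbps")
    print(f"reduction: {100 * (1 - edge / cloud):.0f}%")
```

With these illustrative numbers the reduction lands around 70%, in the middle of the 50–75% band the study reports for its processing configurations.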

42 pages, 7319 KB  
Review
A Comprehensive Survey on VANET–IoT Integration Toward the Internet of Vehicles: Architectures, Communications, and System Challenges
by Khalid Kandali, Said Nouh, Lamyae Bennis and Hamid Bennis
Future Transp. 2026, 6(1), 32; https://doi.org/10.3390/futuretransp6010032 (registering DOI) - 31 Jan 2026
Abstract
The convergence of Vehicular Ad Hoc Networks (VANETs) and the Internet of Things (IoT) is giving rise to the Internet of Vehicles (IoV), a key enabler of next-generation intelligent transportation systems. This survey provides a comprehensive analysis of the architectural, communication, and computing foundations that support VANET–IoT integration. We examine the roles of cloud, edge, and in-vehicle computing, and compare major V2X and IoT communication technologies, including DSRC, C-V2X, MQTT, and CoAP. The survey highlights how sensing, communication, and distributed intelligence interact to support applications such as collision avoidance, cooperative perception, and smart traffic management. We identify four central challenges—security, scalability, interoperability, and energy constraints—and discuss how these issues shape system design across the network stack. In addition, we review emerging directions including 6G-enabled joint communication and sensing, reconfigurable surfaces, digital twins, and quantum-assisted optimization. The survey concludes by outlining open research questions and providing guidance for the development of reliable, efficient, and secure VANET–IoT systems capable of supporting future transportation networks. Full article
17 pages, 1498 KB  
Article
Enhancing Network Security with Generative AI on Jetson Orin Nano
by Jackson Diaz-Gorrin, Candido Caballero-Gil and Ljiljana Brankovic
Appl. Sci. 2026, 16(3), 1442; https://doi.org/10.3390/app16031442 - 30 Jan 2026
Abstract
This study presents an edge-based intrusion detection methodology designed to enhance cybersecurity in Internet of Things environments, which remain highly vulnerable to complex attacks. The approach employs an Auxiliary Classifier Generative Adversarial Network capable of classifying network traffic in real-time while simultaneously generating high-fidelity synthetic data within a unified framework. The model is implemented in TensorFlow and deployed on the energy-efficient NVIDIA Jetson Orin Nano, demonstrating the feasibility of executing advanced deep learning models at the edge. Training is conducted on network traffic collected from diverse IoT devices, with preprocessing focused on TCP-based threats. The integration of an auxiliary classifier enables the generation of labeled synthetic samples that mitigate data scarcity and improve supervised learning under imbalanced conditions. Experimental results demonstrate strong detection performance, achieving a precision of 0.89 and a recall of 0.97 using the standard 0.5 decision threshold inherent to the sigmoid-based binary classifier, indicating an effective balance between intrusion detection capability and false-positive reduction, which is critical for reliable operation in IoT scenarios. The generative component enhances data augmentation, robustness, and generalization. These results show that combining generative adversarial learning with edge computing provides a scalable and effective approach for IoT security. Future work will focus on stabilizing training procedures and refining hyperparameters to improve detection performance while maintaining high precision. Full article
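
The defining trait of an Auxiliary Classifier GAN is a discriminator with two heads: a sigmoid real-versus-synthetic output and an auxiliary classifier output, with the 0.5 sigmoid threshold giving the final intrusion decision. The Keras sketch below shows such a two-headed discriminator in miniature; the feature dimension, layer sizes, and training details are assumptions, not the authors' configuration.

```python
# Minimal sketch of an ACGAN-style discriminator with an auxiliary classifier head.
# The 42-feature input, layer sizes, and training details are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, Model

N_FEATURES = 42  # hypothetical number of preprocessed traffic features

def build_discriminator() -> Model:
    x_in = layers.Input(shape=(N_FEATURES,), name="traffic_features")
    h = layers.Dense(128, activation="relu")(x_in)
    h = layers.Dropout(0.3)(h)
    h = layers.Dense(64, activation="relu")(h)

    # Head 1: real-vs-synthetic score used for adversarial training.
    validity = layers.Dense(1, activation="sigmoid", name="validity")(h)
    # Head 2: auxiliary sigmoid classifier -> attack probability; >= 0.5 flags an intrusion.
    attack_prob = layers.Dense(1, activation="sigmoid", name="attack_prob")(h)

    model = Model(x_in, [validity, attack_prob])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(1e-4),
        loss={"validity": "binary_crossentropy", "attack_prob": "binary_crossentropy"},
    )
    return model

# Usage sketch: validity_scores, attack_scores = build_discriminator().predict(features)
# intrusions = attack_scores >= 0.5
```
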
28 pages, 3862 KB  
Review
A Review of Wireless Charging Solutions for FANETs in IoT-Enabled Smart Environments
by Nelofar Aslam, Hongyu Wang, Hamada Esmaiel, Naveed Ur Rehman Junejo and Adel Agamy
Sensors 2026, 26(3), 912; https://doi.org/10.3390/s26030912 - 30 Jan 2026
Abstract
Unmanned Aerial Vehicles (UAVs) are emerging as a fundamental part of Flying Ad Hoc Networks (FANETs). However, owing to the limited energy capacity of UAV batteries, wireless power transfer (WPT) technologies have recently gained interest from researchers, offering recharging possibilities for FANETs. Against this background, this study highlights the need for wireless charging to enhance the operational endurance of FANETs in Internet-of-Things (IoT) environments. This review investigates WPT-based power replenishment for UAVs in two ways: the first uses a UAV as a mobile charger to recharge ground nodes, whereas the second applies WPT to in-flight (UAV-to-UAV) charging. For both research domains, we describe the different WPT methods and their latest advancements across the academic and industrial literature. We categorize the results by power transfer range, efficiency, wireless charger topology (ground or in-flight), coordination among multiple UAVs, and trajectory optimization formulation. A crucial finding is that in-flight UAV charging can extend endurance roughly threefold compared with standalone batteries. Furthermore, the integration of IoT for deploying groups of UAVs as a FANET is emphasized. Our findings also present current and forecast global-market trends for UAVs and IoT-integrated UAVs. Existing systems face scalability issues beyond 20 UAVs; therefore, future research requires edge computing for WPT scheduling and blockchains for energy trading. Full article
(This article belongs to the Special Issue Security and Privacy Challenges in IoT-Driven Smart Environments)

42 pages, 4980 KB  
Article
Socially Grounded IoT Protocol for Reliable Computer Vision in Industrial Applications
by Gokulnath Chidambaram, Shreyanka Subbarayappa and Sai Baba Magapu
Future Internet 2026, 18(2), 69; https://doi.org/10.3390/fi18020069 (registering DOI) - 27 Jan 2026
Abstract
The Social Internet of Things (SIoT) enables collaborative service provisioning among interconnected devices by leveraging socially inspired trust relationships. This paper proposes a socially driven SIoT protocol for trust-aware service selection, enabling dynamic friendship formation and ranking among distributed service-providing devices based on observed execution behavior. The protocol integrates detection accuracy, round-trip time (RTT), processing time, and device characteristics within a graph-based friendship model and employs PageRank-based scoring to guide service selection. Industrial computer vision workloads are used as a representative testbed to evaluate the proposed SIoT trust-evaluation framework under realistic execution and network constraints. In homogeneous environments with comparable service-provider capabilities, friendship scores consistently favor higher-accuracy detection pipelines, with F1-scores in the range of approximately 0.25–0.28, while latency and processing-time variations remain limited. In heterogeneous environments comprising resource-diverse devices, trust differentiation reflects the combined influence of algorithm accuracy and execution feasibility, resulting in clear service-provider ranking under high-resolution and high-frame-rate workloads. Experimental results further show that reducing available network bandwidth from 100 Mbps to 10 Mbps increases round-trip communication latency by approximately one order of magnitude, while detection accuracy remains largely invariant. The evaluation is conducted on a physical SIoT testbed with three interconnected devices, forming an 11-node, 22-edge logical trust graph, and on synthetic trust graphs with up to 50 service-providing nodes. Across all settings, service-selection decisions remain stable, and PageRank-based friendship scoring is completed in approximately 20 ms, incurring negligible overhead relative to inference and communication latency. Full article
(This article belongs to the Special Issue Social Internet of Things (SIoT))
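
The friendship-ranking step, PageRank over a weighted trust graph of service providers, can be sketched directly with networkx. The devices, edge weights, and damping factor below are illustrative placeholders, not the paper's 11-node testbed graph.

```python
# Sketch of PageRank-based friendship scoring over a small weighted trust graph.
# Nodes, edge weights, and alpha are illustrative; the paper's testbed graph differs.
import networkx as nx

G = nx.DiGraph()
# Edge weight = observed trust (e.g., combining detection accuracy, RTT, processing time).
G.add_weighted_edges_from([
    ("camera_gateway", "edge_node_A", 0.9),
    ("camera_gateway", "edge_node_B", 0.6),
    ("edge_node_A", "edge_node_B", 0.4),
    ("edge_node_B", "edge_node_A", 0.8),
    ("edge_node_A", "camera_gateway", 0.7),
])

# PageRank over the weighted digraph yields a global "friendship" score per device.
scores = nx.pagerank(G, alpha=0.85, weight="weight")

# Pick the most trusted service provider for the next vision task.
best = max(scores, key=scores.get)
print(sorted(scores.items(), key=lambda kv: -kv[1]))
print("selected provider:", best)
```
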

38 pages, 6181 KB  
Article
An AIoT-Based Framework for Automated English-Speaking Assessment: Architecture, Benchmarking, and Reliability Analysis of Open-Source ASR
by Paniti Netinant, Rerkchai Fooprateepsiri, Ajjima Rukhiran and Meennapa Rukhiran
Informatics 2026, 13(2), 19; https://doi.org/10.3390/informatics13020019 - 26 Jan 2026
Abstract
The emergence of low-cost edge devices has enabled the integration of automatic speech recognition (ASR) into IoT environments, creating new opportunities for real-time language assessment. However, achieving reliable performance on resource-constrained hardware remains a significant challenge, especially on the Artificial Internet of Things (AIoT). This study presents an AIoT-based framework for automated English-speaking assessment that integrates architecture and system design, ASR benchmarking, and reliability analysis on edge devices. The proposed AIoT-oriented architecture incorporates a lightweight scoring framework capable of analyzing pronunciation, fluency, prosody, and CEFR-aligned speaking proficiency within an automated assessment system. Seven open-source ASR models—four Whisper variants (tiny, base, small, and medium) and three Vosk models—were systematically benchmarked in terms of recognition accuracy, inference latency, and computational efficiency. Experimental results indicate that Whisper-medium deployed on the Raspberry Pi 5 achieved the strongest overall performance, reducing inference latency by 42–48% compared with the Raspberry Pi 4 and attaining the lowest Word Error Rate (WER) of 6.8%. In contrast, smaller models such as Whisper-tiny, with a WER of 26.7%, exhibited two- to threefold higher scoring variability, demonstrating how recognition errors propagate into automated assessment reliability. System-level testing revealed that the Raspberry Pi 5 can sustain near real-time processing with approximately 58% CPU utilization and around 1.2 GB of memory, whereas the Raspberry Pi 4 frequently approaches practical operational limits under comparable workloads. Validation using real learner speech data (approximately 100 sessions) confirmed that the proposed system delivers accurate, portable, and privacy-preserving speaking assessment using low-power edge hardware. Overall, this work introduces a practical AIoT-based assessment framework, provides a comprehensive benchmark of open-source ASR models on edge platforms, and offers empirical insights into the trade-offs among recognition accuracy, inference latency, and scoring stability in edge-based ASR deployments. Full article
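
The Word Error Rate quoted above is the standard word-level edit distance between the ASR hypothesis and the reference transcript, divided by the reference length. A small self-contained Python reminder of that definition (the example sentences are made up, and this is not the authors' evaluation code):

```python
# Standard word error rate: (substitutions + deletions + insertions) / reference words.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words (Levenshtein).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# Example (made-up utterance): one substitution and one deletion over six reference words.
print(wer("the quick brown fox jumps high", "the quick brown fox jump"))  # ≈ 0.33
```
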

24 pages, 1526 KB  
Article
EQARO-ECS: Efficient Quantum ARO-Based Edge Computing and SDN Routing Protocol for IoT Communication to Avoid Desertification
by Thair A. Al-Janabi, Hamed S. Al-Raweshidy and Muthana Zouri
Sensors 2026, 26(3), 824; https://doi.org/10.3390/s26030824 - 26 Jan 2026
Abstract
Desertification is the impoverishment of fertile land, caused by various factors and environmental effects, such as temperature and humidity. An appropriate Internet of Things (IoT) architecture, routing algorithms based on artificial intelligence (AI), and emerging technologies are essential to monitor and avoid desertification. However, classical AI algorithms often become trapped in local optima and consume excessive energy. This research proposes an improved multi-objective routing protocol, namely the efficient quantum (EQ) artificial rabbit optimisation (ARO) based on edge computing (EC) and the software-defined network (SDN) concept (EQARO-ECS), which provides the best cluster table for the IoT network to avoid desertification. The proposed EQARO-ECS protocol reduces energy consumption and improves data analysis speed by deploying new technologies such as the Cloud, SDN, EC, and quantum-technique-based ARO. The protocol speeds up data analysis through the suggested iterated quantum gates combined with ARO, which can rapidly escape local optima and reach the global optimum. It helps avoid desertification through a new objective function that considers energy consumption, communication cost, and desertification parameters. The simulation results establish that the suggested EQARO-ECS protocol increases accuracy and improves network lifetime by reducing energy depletion compared with other algorithms. Full article
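
The abstract names the ingredients of the objective function (energy consumption, communication cost, and desertification parameters) but not its exact form, so the sketch below is only a generic weighted formulation of such a multi-objective cluster-head fitness; the weights and normalized inputs are hypothetical, not the paper's formula.

```python
# Generic weighted multi-objective fitness for cluster-head selection.
# Weights and the desertification term are hypothetical, not the paper's formula.
from dataclasses import dataclass

@dataclass
class NodeState:
    residual_energy: float   # normalized to [0, 1], higher is better
    comm_cost: float         # normalized to [0, 1], lower is better
    desert_risk: float       # normalized temperature/humidity risk, lower is better

W_ENERGY, W_COMM, W_DESERT = 0.5, 0.3, 0.2  # hypothetical weights summing to 1

def fitness(node: NodeState) -> float:
    """Higher fitness -> better candidate cluster head."""
    return (W_ENERGY * node.residual_energy
            - W_COMM * node.comm_cost
            - W_DESERT * node.desert_risk)

candidates = [NodeState(0.9, 0.4, 0.2), NodeState(0.6, 0.1, 0.1), NodeState(0.8, 0.3, 0.6)]
cluster_head = max(candidates, key=fitness)
print(cluster_head)
```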

32 pages, 2032 KB  
Article
Utilizing AIoT to Achieve Sustainable Agricultural Systems in a Climate-Change-Affected Environment
by Mohamed Naeem, Mohamed A. El-Khoreby, Hussein M. ELAttar and Mohamed Aboul-Dahab
Future Internet 2026, 18(2), 68; https://doi.org/10.3390/fi18020068 - 26 Jan 2026
Abstract
Smart agricultural systems are continually evolving to provide high-quality planting and defend against threats such as climate change, which necessitate improved adaptation and resource allocation. IoT technology offers a cost-effective approach to monitoring and managing system performance. However, this approach faces challenges, including connectivity issues and complex decision-making. While researchers have studied these problems individually, no fully automated solution has addressed them simultaneously. There is still a need for an offline solution that manages multiple processes and reduces human error. This paper introduces an AI-powered edge computing system that serves as an early-warning solution for climate impacts. This system enables autonomous management through an Agentic AI model that observes, predicts, decides, and adapts. It provides a low-cost AIoT platform for data forecasting, classification, and decision-making, converting sensor data into actionable insights. The system integrates forecast evaluation with real-time data comparisons to optimize scheduling, efficiency, sustainability, and yields. Moreover, this solution is totally autonomous and independent of internet connectivity. Demonstrating its superior performance, it reduced errors by 50% and achieved an R-squared value of 0.985. Full article
(This article belongs to the Topic Smart Edge Devices: Design and Applications)
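
The reported R-squared of 0.985 is the ordinary coefficient of determination between predicted and observed sensor values. As a quick reminder of the computation (the readings below are made up, not the paper's data):

```python
# Coefficient of determination (R^2) between observed and predicted sensor values.
# The sample readings are made up for illustration only.
import numpy as np

observed = np.array([21.3, 22.1, 23.0, 24.2, 25.1, 24.8])   # e.g., soil temperature
predicted = np.array([21.5, 22.0, 23.2, 24.0, 25.3, 24.6])

ss_res = np.sum((observed - predicted) ** 2)          # residual sum of squares
ss_tot = np.sum((observed - observed.mean()) ** 2)    # total sum of squares
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.3f}")
```
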
20 pages, 1854 KB  
Article
Dual-Optimized Genetic Algorithm for Edge-Ready IoT Intrusion Detection on Raspberry Pi
by Khawlah Harasheh, Satinder Gill, Kendra Brinkley, Salah Garada, Dindin Aro Roque, Hayat MacHrouhi, Janera Manning-Kuzmanovski, Jesus Marin-Leal, Melissa Isabelle Arganda-Villapando and Sayed Ahmad Shah Sekandary
J 2026, 9(1), 3; https://doi.org/10.3390/j9010003 - 25 Jan 2026
Abstract
The Internet of Things (IoT) is increasingly deployed at the edge under resource and environmental constraints, which limits the practicality of traditional intrusion detection systems (IDSs) on IoT hardware. This paper presents two IDS configurations. First, we develop a baseline IDS with fixed hyperparameters, achieving 99.20% accuracy and ~0.002 ms/sample inference latency on a desktop machine; this configuration is suitable for high-performance platforms but is not intended for constrained IoT deployment. Second, we propose a lightweight, edge-oriented IDS that applies ANOVA-based filter feature selection and uses a genetic algorithm (GA) for the bounded hyperparameter tuning of the classifier under stratified cross-validation, enabling efficient execution on Raspberry Pi-class devices. The lightweight IDS achieves 98.95% accuracy with ~4.3 ms/sample end-to-end inference latency on Raspberry Pi while detecting both low-volume and high-volume (DoS/DDoS) attacks. Experiments are conducted in a Raspberry Pi-based real lab using an up-to-date mixed-modal dataset combining system/network telemetry and heterogeneous physical sensors. Overall, the proposed framework demonstrates a practical, hardware-aware, and reproducible way to balance detection performance and edge-level latency using established techniques for real-world IoT IDS deployment. Full article
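
The edge-oriented pipeline combines ANOVA (F-test) filter feature selection with a genetic algorithm that tunes bounded classifier hyperparameters under stratified cross-validation. The scikit-learn sketch below shows that combination in miniature; the classifier choice, feature count, bounds, and GA settings are assumptions rather than the authors' configuration.

```python
# Sketch: ANOVA filter feature selection + a small GA for bounded hyperparameter tuning.
# Classifier choice, bounds, and GA settings are illustrative assumptions.
import random
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import StratifiedKFold, cross_val_score

X, y = make_classification(n_samples=2000, n_features=40, n_informative=12, random_state=0)

# 1) ANOVA F-test keeps the k most discriminative features (k is an assumption).
X_sel = SelectKBest(f_classif, k=15).fit_transform(X, y)

BOUNDS = {"n_estimators": (20, 150), "max_depth": (3, 20)}  # bounded search space
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)

def fitness(genes: dict) -> float:
    clf = RandomForestClassifier(**genes, random_state=0, n_jobs=-1)
    return cross_val_score(clf, X_sel, y, cv=cv, scoring="accuracy").mean()

def random_genes() -> dict:
    return {k: random.randint(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def crossover(a: dict, b: dict) -> dict:
    return {k: random.choice([a[k], b[k]]) for k in BOUNDS}

def mutate(genes: dict) -> dict:
    child = dict(genes)
    k = random.choice(list(BOUNDS))
    lo, hi = BOUNDS[k]
    child[k] = int(np.clip(child[k] + random.randint(-10, 10), lo, hi))
    return child

# 2) Tiny GA: keep the fitter half, refill with mutated crossovers of survivors.
population = [random_genes() for _ in range(8)]
for generation in range(5):
    survivors = sorted(population, key=fitness, reverse=True)[:4]
    population = survivors + [mutate(crossover(*random.sample(survivors, 2)))
                              for _ in survivors]

best = max(population, key=fitness)
print("best hyperparameters:", best, "CV accuracy:", round(fitness(best), 4))
```
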
26 pages, 3900 KB  
Review
A Survey on the Computing Continuum and Meta-Operating Systems: Perspectives, Architectures, Outcomes, and Open Challenges
by Panagiotis K. Gkonis, Anastasios Giannopoulos, Nikolaos Nomikos, Lambros Sarakis, Vasileios Nikolakakis, Gerasimos Patsourakis and Panagiotis Trakadas
Sensors 2026, 26(3), 799; https://doi.org/10.3390/s26030799 - 25 Jan 2026
Abstract
The goal of the study presented in this work is to analyze recent advances in the context of the computing continuum and meta-operating systems (meta-OSs). The term continuum includes a variety of diverse hardware and computing elements, as well as network protocols, ranging from lightweight Internet of Things (IoT) components to more complex edge or cloud servers. Indeed, the rapid penetration of IoT technology into modern networks, along with associated applications, poses new challenges for efficient application deployment over heterogeneous network infrastructures. These challenges involve, among others, the interconnection of a vast number of IoT devices and protocols, proper resource management, and threat protection and privacy preservation. Hence, unified access mechanisms, data management policies, and security protocols are required across the continuum to support the vision of seamless connectivity and diverse device integration. This task becomes even more important as discussions on sixth generation (6G) networks are already taking place, and such networks are envisaged to coexist with IoT applications. Therefore, in this work the most significant technological approaches for satisfying the aforementioned challenges and requirements are presented and analyzed. A proposed architectural approach is also presented and discussed, which takes into consideration all key players and components in the continuum. In the same context, indicative use cases and scenarios enabled by meta-OSs in the computing continuum are presented as well. Finally, open issues and related challenges are also discussed. Full article
(This article belongs to the Section Internet of Things)

54 pages, 3083 KB  
Review
A Survey on Green Wireless Sensing: Energy-Efficient Sensing via WiFi CSI and Lightweight Learning
by Rod Koo, Xihao Liang, Deepak Mishra and Aruna Seneviratne
Energies 2026, 19(2), 573; https://doi.org/10.3390/en19020573 - 22 Jan 2026
Abstract
Conventional sensing expends energy at three stages: powering dedicated sensors, transmitting measurements, and executing computationally intensive inference. Wireless sensing re-purposes the WiFi channel state information (CSI) inherent in every packet, eliminating extra sensors and uplink traffic, though reliance on deep neural networks (DNNs), often trained and run on graphics processing units (GPUs), can negate these gains. This review highlights two core energy-efficiency levers in CSI-based wireless sensing. First, ambient CSI harvesting cuts power use by an order of magnitude compared to radar and active Internet of Things (IoT) sensors. Second, integrated sensing and communication (ISAC) embeds sensing functionality into existing WiFi links, thereby reducing device count, battery waste, and carbon impact. We review conventional handcrafted and accuracy-first methods to set the stage for surveying green learning strategies and lightweight learning techniques, including compact hybrid neural architectures, pruning, knowledge distillation, quantisation, and semi-supervised training, that preserve accuracy while reducing model size and memory footprint. We also discuss hardware co-design, from low-power microcontrollers to edge application-specific integrated circuits (ASICs) and WiFi firmware extensions, that aligns computation with platform constraints. Finally, we identify open challenges in domain-robust compression, multi-antenna calibration, energy-proportionate model scaling, and standardised joules-per-inference metrics. Our aim is a practical, battery-friendly wireless sensing stack ready for smart home and 6G-era deployments. Full article
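
Of the lightweight-learning levers listed, post-training quantisation is the simplest to illustrate: weights are mapped to 8-bit integers with a per-tensor scale, cutting memory roughly fourfold versus float32 at a small accuracy cost. A minimal numpy sketch of symmetric per-tensor quantisation (not tied to any specific model in the survey):

```python
# Symmetric per-tensor int8 post-training quantisation of a weight matrix.
# The random weights are placeholders; real use would quantise trained model tensors.
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(scale=0.2, size=(64, 64)).astype(np.float32)   # float32 weights

scale = np.abs(w).max() / 127.0                 # one scale for the whole tensor
w_int8 = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale   # what inference effectively sees

print("memory: float32", w.nbytes, "bytes -> int8", w_int8.nbytes, "bytes")
print("mean absolute quantisation error:", float(np.abs(w - w_dequant).mean()))
```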

43 pages, 898 KB  
Systematic Review
Transforming Digital Accounting: Big Data, IoT, and Industry 4.0 Technologies—A Comprehensive Survey
by Georgios Thanasas, Georgios Kampiotis and Constantinos Halkiopoulos
J. Risk Financial Manag. 2026, 19(1), 92; https://doi.org/10.3390/jrfm19010092 - 22 Jan 2026
Abstract
(1) Background: The convergence of Big Data and the Internet of Things (IoT) is transforming digital accounting from retrospective documentation into real-time operational intelligence. This systematic review examines how Industry 4.0 technologies—artificial intelligence (AI), blockchain, edge computing, and digital twins—transform accounting practices through intelligent automation, continuous compliance, and predictive decision support. (2) Methods: The study synthesizes 176 peer-reviewed sources (2015–2025) selected using explicit inclusion criteria emphasizing empirical evidence. Thematic analysis across seven domains—conceptual foundations, system evolution, financial reporting, fraud detection, audit transformation, implementation challenges, and emerging technologies—employs systematic bias-reduction mechanisms to develop evidence-based theoretical propositions. (3) Results: Key findings document fraud detection accuracy improvements from 65–75% (rule-based) to 85–92% (machine learning), audit cycle reductions of 40–60% with coverage expansion from 5–10% sampling to 100% population analysis, and reconciliation effort decreases of 70–80% through triple-entry blockchain systems. Edge computing reduces processing latency by 40–75%, enabling compliance response within hours versus 24–72 h. Four propositions are established with empirical support: IoT-enabled reporting superiority (15–25% error reduction), AI-blockchain fraud detection advantage (60–70% loss reduction), edge computing compliance responsiveness (55–75% improvement), and GDPR-blockchain adoption barriers (67% of European institutions affected). Persistent challenges include cybersecurity threats (300% incident increase, $5.9 million average breach cost), workforce deficits (70–80% insufficient training), and implementation costs ($100,000–$1,000,000). (4) Conclusions: The research contributes a four-layer technology architecture and challenge-mitigation framework bridging technical capabilities with regulatory requirements. Future research must address quantum computing applications (5–10 years), decentralized finance accounting standards (2–5 years), digital twins with 30–40% forecast improvement potential (3–7 years), and ESG analytics frameworks (1–3 years). The findings demonstrate accounting’s fundamental transformation from historical record-keeping to predictive decision support. Full article
(This article belongs to the Section Financial Technology and Innovation)

25 pages, 7167 KB  
Article
Edge-Enhanced YOLOV8 for Spacecraft Instance Segmentation in Cloud-Edge IoT Environments
by Ming Chen, Wenjie Chen, Yanfei Niu, Ping Qi and Fucheng Wang
Future Internet 2026, 18(1), 59; https://doi.org/10.3390/fi18010059 - 20 Jan 2026
Abstract
The proliferation of smart devices and the Internet of Things (IoT) has led to massive data generation, particularly in complex domains such as aerospace. Cloud computing provides essential scalability and advanced analytics for processing these vast datasets. However, relying solely on the cloud introduces significant challenges, including high latency, network congestion, and substantial bandwidth costs, which are critical for real-time on-orbit spacecraft services. Cloud-edge Internet of Things (cloud-edge IoT) computing emerges as a promising architecture to mitigate these issues by pushing computation closer to the data source. This paper proposes an improved YOLOV8-based model specifically designed for edge computing scenarios within a cloud-edge IoT framework. By integrating the Cross Stage Partial Spatial Pyramid Pooling Fast (CSPPF) module and the WDIOU loss function, the model achieves enhanced feature extraction and localization accuracy without significantly increasing computational cost, making it suitable for deployment on resource-constrained edge devices. Meanwhile, by processing image data locally at the edge and transmitting only the compact segmentation results to the cloud, the system effectively reduces bandwidth usage and supports efficient cloud-edge collaboration in IoT-based spacecraft monitoring systems. Experimental results show that, compared to the original YOLOV8 and other mainstream models, the proposed model demonstrates superior accuracy and instance segmentation performance at the edge, validating its practicality in cloud-edge IoT environments. Full article
(This article belongs to the Special Issue Convergence of IoT, Edge and Cloud Systems)
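
The WDIOU loss mentioned above belongs to the family of IoU-based localization losses. The paper's exact formulation is not reproduced here; the sketch below only shows the baseline box IoU that such losses build on, with made-up coordinates.

```python
# Generic intersection-over-union for axis-aligned boxes (x1, y1, x2, y2).
# This is the baseline overlap measure that IoU-family losses (such as the paper's
# WDIOU) build on; it is not the WDIOU formulation itself.
def box_iou(a: tuple, b: tuple) -> float:
    ax1, ay1, ax2, ay2 = a
    bx1, by1, bx2, by2 = b
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Example: predicted vs. ground-truth spacecraft bounding boxes (made-up coordinates).
print(box_iou((10, 10, 60, 50), (20, 15, 70, 55)))  # ≈ 0.54
```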
