Search Results (8,998)

Search Parameters:
Keywords = IoT (Internet of Things)

20 pages, 1385 KB  
Article
Development of an IoT System for Acquisition of Data and Control Based on External Battery State of Charge
by Aleksandar Valentinov Hristov, Daniela Gotseva, Roumen Ivanov Trifonov and Jelena Petrovic
Electronics 2026, 15(3), 502; https://doi.org/10.3390/electronics15030502 - 23 Jan 2026
Abstract
In the context of small, battery-powered systems, a lightweight, reusable architecture is needed for integrated measurement, visualization, and cloud telemetry that minimizes hardware complexity and energy footprint. Existing solutions demand substantial resources, which limits their applicability in low-power Internet of Things (IoT) devices. The present work demonstrates the design, implementation, and experimental evaluation of a single-cell lithium-ion battery monitoring prototype, intended for standalone operation or integration into other systems. The architecture is compact and energy-efficient, reducing complexity and memory usage through a modular design with clearly distinguished responsibilities, avoidance of unnecessary dynamic memory allocations, centralized error handling, and a low-power policy based on deep sleep mode. Data are stored in a cloud platform, with only minimal local storage. The developed system meets the functional requirements for an embedded external battery monitoring system: local voltage and current measurement, approximate estimation of the State of Charge (SoC) using a look-up table (LUT) based on the discharge characteristic, and visualization on a monochrome OLED display. The conducted experiments demonstrate the typical U(t) curve and the triggering of the indicator at low charge levels (LOW: SoC ≤ 20%; CRITICAL: SoC ≤ 5%) under real-world conditions, with no unwanted state switching near the voltage thresholds.
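A minimal sketch of the LUT-based SoC estimation and the LOW/CRITICAL indicator described above, assuming an illustrative discharge curve and a 2% hysteresis margin to reproduce the reported absence of state chatter near the thresholds; none of the numeric breakpoints come from the paper:

```python
# Minimal sketch of LUT-based SoC estimation with threshold hysteresis.
# Discharge-curve breakpoints and the 2% hysteresis margin are
# illustrative assumptions, not values from the paper.

import bisect

# (voltage [V], SoC [%]) pairs from an assumed single-cell Li-ion discharge curve
LUT = [(3.0, 0), (3.3, 5), (3.5, 20), (3.7, 50), (3.9, 80), (4.2, 100)]
VOLTS = [v for v, _ in LUT]

def soc_from_voltage(v: float) -> float:
    """Linearly interpolate SoC from the discharge-curve LUT."""
    if v <= VOLTS[0]:
        return LUT[0][1]
    if v >= VOLTS[-1]:
        return LUT[-1][1]
    i = bisect.bisect_right(VOLTS, v)
    (v0, s0), (v1, s1) = LUT[i - 1], LUT[i]
    return s0 + (s1 - s0) * (v - v0) / (v1 - v0)

class ChargeIndicator:
    """LOW/CRITICAL indicator with hysteresis to avoid chatter near thresholds."""
    HYST = 2.0  # percentage points; assumed margin

    def __init__(self):
        self.state = "OK"

    def update(self, soc: float) -> str:
        if soc <= 5:
            self.state = "CRITICAL"
        elif soc <= 20 and self.state != "CRITICAL":
            self.state = "LOW"
        # only recover once SoC clears the threshold by the hysteresis margin
        elif self.state == "CRITICAL" and soc > 5 + self.HYST:
            self.state = "LOW" if soc <= 20 else "OK"
        elif self.state == "LOW" and soc > 20 + self.HYST:
            self.state = "OK"
        return self.state
```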
24 pages, 3559 KB  
Article
Design of a Dynamic Key Generation Mechanism and Secure Image Transmission Based on Synchronization of Fractional-Order Chaotic Systems
by Chih-Yung Chen, Teh-Lu Liao, Jun-Juh Yan and Yu-Han Chang
Mathematics 2026, 14(3), 402; https://doi.org/10.3390/math14030402 - 23 Jan 2026
Abstract
With the rapid development of Internet of Things (IoT) and Artificial Intelligence (AI) technologies, information security has become a critical issue. To develop a highly secure image encryption transmission system, this study proposes a novel key generation mechanism based on the combination of fractional-order chaotic system synchronization control and the SHA-256 algorithm. The proposed method dynamically generates high-quality synchronous random number sequences and is combined with the Advanced Encryption Standard (AES) algorithm. To quantitatively evaluate the mechanism, the generated sequences are tested using the NIST SP 800-22, ENT, and DIEHARD suites. The comparative results show that the key generation mechanism produces sequences with higher randomness and unpredictability. In the evaluation of image encryption, histogram distribution, information entropy, adjacent pixel correlation, NPCR, and UACI are used as performance metrics. Experimental results show that the histogram distributions are uniform, the values of information entropy, NPCR, and UACI are close to their ideal levels, and the pixel correlation is significantly reduced. Compared to recent studies, the proposed method demonstrates higher encryption performance and stronger resistance to statistical attacks. Furthermore, the system effectively addresses the key distribution and management problems inherent in traditional symmetric encryption schemes. These results validate the reliability and practical feasibility of the proposed approach.
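The key-generation idea, hashing a synchronized chaotic state with SHA-256 to obtain a symmetric key, can be sketched as follows. A simple logistic map stands in for the paper's fractional-order chaotic system; all parameters are illustrative:

```python
# Sketch of the key-generation idea: a synchronized chaotic trajectory is
# hashed with SHA-256 to derive a symmetric key. A logistic map stands in
# for the fractional-order chaotic system; parameters are illustrative.

import hashlib

def chaotic_sequence(x0: float, r: float = 3.99, n: int = 64) -> bytes:
    """Iterate the logistic map and quantize each state to one byte."""
    x, out = x0, bytearray()
    for _ in range(n):
        x = r * x * (1.0 - x)
        out.append(int(x * 256) & 0xFF)
    return bytes(out)

def derive_key(shared_state: float) -> bytes:
    """Hash the chaotic trajectory to a 256-bit key (e.g., for AES-256)."""
    return hashlib.sha256(chaotic_sequence(shared_state)).digest()

# A transmitter and receiver that have synchronized to the same state
# derive identical keys without ever exchanging the key itself.
k_tx = derive_key(0.423001)
k_rx = derive_key(0.423001)
assert k_tx == k_rx and len(k_tx) == 32
```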
35 pages, 7524 KB  
Review
Fiber-Optical-Sensor-Based Technologies for Future Smart-Road-Based Transportation Infrastructure Applications
by Ugis Senkans, Nauris Silkans, Remo Merijs-Meri, Viktors Haritonovs, Peteris Skels, Jurgis Porins, Mayara Sarisariyama Siverio Lima, Sandis Spolitis, Janis Braunfelds and Vjaceslavs Bobrovs
Photonics 2026, 13(2), 106; https://doi.org/10.3390/photonics13020106 - 23 Jan 2026
Abstract
The rapid evolution of smart transportation systems necessitates the integration of advanced sensing technologies capable of supporting the real-time, reliable, and cost-effective monitoring of road infrastructure. Fiber-optic sensor (FOS) technologies, given their high sensitivity, immunity to electromagnetic interference, and suitability for harsh environments, have emerged as promising tools for enabling intelligent transportation infrastructure. This review critically examines the current landscape of classical mechanical and electrical sensors in monitoring solutions. Focus is also given to fiber-optic-sensor-based solutions for smart road applications, encompassing both well-established techniques such as Fiber Bragg Grating (FBG) sensors and distributed sensing systems, and emerging hybrid sensor networks. The article examines the most topical physical parameters that can be measured by FOSs in road infrastructure monitoring to support traffic monitoring, structural health assessment, weigh-in-motion (WIM) system development, pavement condition evaluation, and vehicle classification. In addition, strategies for FOS integration with digital twins, machine learning, artificial intelligence, quantum sensing, and Internet of Things (IoT) platforms are analyzed to highlight their potential for data-driven infrastructure management. Limitations related to deployment, scalability, long-term reliability, and standardization are also discussed. The review concludes by identifying key technological gaps and proposing future research directions to accelerate the adoption of FOS technologies in next-generation road transportation systems.
(This article belongs to the Special Issue Advances in Optical Fiber Sensing Technology)
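As a worked example of the FBG principle central to this review, the Bragg wavelength shift under strain follows the standard relation Δλ_B = λ_B(1 − p_e)ε; the values below are typical textbook numbers for silica fiber, not data from the article:

```python
# Worked example of the standard FBG strain response (illustrative values,
# not from the article): delta_lambda = lambda_B * (1 - p_e) * strain.

lambda_b = 1550e-9   # Bragg wavelength [m], typical telecom-band FBG
p_e = 0.22           # effective photo-elastic coefficient of silica fiber
strain = 100e-6      # 100 microstrain, e.g., from pavement loading

delta_lambda = lambda_b * (1 - p_e) * strain
print(f"Wavelength shift: {delta_lambda * 1e12:.1f} pm")  # ~120.9 pm
```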
25 pages, 2071 KB  
Review
Power Control in Wireless Body Area Networks: A Review of Mechanisms, Challenges, and Future Directions
by Haoru Su, Zhiyi Zhao, Boxuan Gu and Shaofu Lin
Sensors 2026, 26(3), 765; https://doi.org/10.3390/s26030765 - 23 Jan 2026
Abstract
Wireless Body Area Networks (WBANs) enable real-time data collection for medical monitoring, sports tracking, and environmental sensing, driven by Internet of Things advancements. Their layered architecture supports efficient sensing, aggregation, and analysis, but energy constraints from transmission (over 60% of consumption), idle listening, and dynamic conditions like body motion hinder adoption. Challenges include minimizing energy waste while ensuring data reliability, Quality of Service (QoS), and adaptation to channel variations, alongside algorithm complexity and privacy concerns. This paper reviews recent power control mechanisms in WBANs, encompassing feedback control, dynamic and convex optimization, graph theory-based path optimization, game theory, reinforcement learning, deep reinforcement learning, hybrid frameworks, and emerging architectures such as federated learning and cell-free massive MIMO, adopting a systematic review approach with a focus on healthcare and IoT application scenarios. These mechanisms achieve energy savings ranging from 6% (simple feedback control) to 50% (hybrid frameworks with emerging architectures), depending on method complexity and application scenario, prolonging network lifetime and improving reliability while preserving QoS requirements in healthcare and IoT applications.
(This article belongs to the Special Issue e-Health Systems and Technologies)
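A minimal sketch of the simplest mechanism family reviewed above, feedback (closed-loop) transmit power control; the target RSSI, step size, and power bounds are illustrative assumptions:

```python
# Minimal sketch of feedback (closed-loop) transmit power control.
# Target RSSI, step size, and power bounds are illustrative assumptions.

P_MIN, P_MAX = -25.0, 0.0   # transmit power bounds [dBm]
RSSI_TARGET = -85.0         # receiver-side target [dBm]
STEP = 1.0                  # adjustment step [dB]

def adjust_power(tx_power_dbm: float, rssi_dbm: float) -> float:
    """Raise power when the link is below target, lower it when above."""
    if rssi_dbm < RSSI_TARGET - STEP:
        tx_power_dbm += STEP      # link too weak: spend more energy
    elif rssi_dbm > RSSI_TARGET + STEP:
        tx_power_dbm -= STEP      # link better than needed: save energy
    return max(P_MIN, min(P_MAX, tx_power_dbm))

# Example: body motion degrades the channel, feedback ramps power back up.
p = -10.0
for rssi in (-80.0, -92.0, -90.0, -84.0):
    p = adjust_power(p, rssi)
    print(f"RSSI {rssi:6.1f} dBm -> TX power {p:5.1f} dBm")
```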
24 pages, 2099 KB  
Article
MoviGestion: Automating Fleet Management for Personnel Transport Companies Using a Conversational System and IoT Powered by AI
by Elias Torres-Espinoza, Luiggi Raúl Juarez-Vasquez and Vicky Huillca-Ayza
Computers 2026, 15(2), 71; https://doi.org/10.3390/computers15020071 - 23 Jan 2026
Abstract
The increasing complexity of fleet operations often forces drivers and administrators to alternate between fragmented tools for geolocation, messaging, and spreadsheet-based reporting, which slows response times and increases cognitive load. This study evaluates a comprehensive architectural framework designed to automate fleet management in personnel transport companies. The research proposes a unified methodology integrating Internet-of-Things (IoT) telemetry, cloud analytics, and Conversational AI to mitigate information fragmentation. Through a Lean UX iterative process, the proposed system was modeled and validated with 30 participants (10 administrators and 20 drivers), who performed representative operational tasks in a simulated environment. Usability was assessed through the System Usability Scale (SUS), obtaining a score of 71.5 out of 100, classified as “Good Usability”. The results demonstrate that combining conversational interfaces with centralized operational data reduces friction, accelerates decision-making, and improves the overall user experience in fleet management contexts.
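The reported 71.5/100 comes from the standard SUS scoring formula (odd items contribute the score minus 1, even items 5 minus the score, and the raw sum is scaled by 2.5); a short sketch with made-up responses, not study data:

```python
# Standard SUS scoring (Brooke, 1996), the scale behind the 71.5/100 result.
# The ten responses below are a made-up illustration, not study data.

def sus_score(responses):  # ten Likert answers, 1-5, item 1 first
    total = sum(
        (r - 1) if i % 2 == 0 else (5 - r)   # odd items positive, even negative
        for i, r in enumerate(responses)
    )
    return total * 2.5  # scale the 0-40 raw sum to 0-100

print(sus_score([4, 2, 4, 2, 4, 2, 5, 2, 4, 3]))  # -> 75.0
```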
52 pages, 12794 KB  
Article
Generative Adversarial Networks for Energy-Aware IoT Intrusion Detection: Comprehensive Benchmark Analysis of GAN Architectures with Accuracy-per-Joule Evaluation
by Iacovos Ioannou and Vasos Vassiliou
Sensors 2026, 26(3), 757; https://doi.org/10.3390/s26030757 - 23 Jan 2026
Abstract
The proliferation of Internet of Things (IoT) devices has created unprecedented security challenges characterized by resource constraints, heterogeneous network architectures, and severe class imbalance in attack detection datasets. This paper presents a comprehensive benchmark evaluation of five Generative Adversarial Network (GAN) architectures for energy-aware intrusion detection: Standard GAN, Progressive GAN (PGAN), Conditional GAN (cGAN), Graph-based GAN (GraphGAN), and Wasserstein GAN with Gradient Penalty (WGAN-GP). Our evaluation framework introduces novel energy-normalized performance metrics, including Accuracy-per-Joule (APJ) and F1-per-Joule (F1PJ), that enable principled architecture selection for energy-constrained deployments. We propose an optimized WGAN-GP architecture incorporating diversity loss, feature matching, and noise injection mechanisms specifically designed for classification-oriented data augmentation. Experimental results on a stratified subset of the BoT-IoT dataset (approximately 1.83 million records) demonstrate that our optimized WGAN-GP achieves state-of-the-art performance, with 99.99% classification accuracy, a 0.99 macro-F1 score, and superior generation quality (MSE 0.01). While traditional classifiers augmented with SMOTE (i.e., Logistic Regression and CNN1D-TCN) also achieve 99.99% accuracy, they suffer from poor minority class detection (77.78–80.00%); our WGAN-GP improves minority class detection to 100.00% on the reported test split (45 of 45 attack instances correctly identified). Furthermore, WGAN-GP provides substantial efficiency advantages under our energy-normalized metrics, achieving superior accuracy-per-joule performance compared to Standard GAN. Additionally, a cross-dataset validation across five benchmarks (BoT-IoT, CICIoT2023, ToN-IoT, UNSW-NB15, CIC-IDS2017) was conducted using 250 pooled test attacks to confirm generalizability, with WGAN-GP achieving 98.40% minority class accuracy (246/250 attacks detected) compared to 76.80% for Classical + SMOTE methods, a statistically significant improvement of 21.60 percentage points (p < 0.0001). Finally, our analysis reveals that incorporating diversity-promoting mechanisms in GAN training simultaneously achieves the best generation quality and the best classification performance, demonstrating that these objectives are complementary rather than competing.
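A sketch of the energy-normalized metrics named above; the exact definitions are inferred from the metric names (a quality score divided by the energy consumed), and all numbers are invented:

```python
# Sketch of the energy-normalized metrics named above. The definitions are
# inferred from the metric names (score divided by energy consumed); all
# numbers are illustrative, not results from the paper.

def accuracy_per_joule(accuracy: float, energy_j: float) -> float:
    """APJ: classification accuracy per joule of training/inference energy."""
    return accuracy / energy_j

def f1_per_joule(f1: float, energy_j: float) -> float:
    """F1PJ: macro-F1 per joule, penalizing energy-hungry architectures."""
    return f1 / energy_j

# Two hypothetical candidates: near-equal accuracy, very different energy cost.
candidates = {"WGAN-GP": (0.9999, 120.0), "StandardGAN": (0.9980, 310.0)}
best = max(candidates, key=lambda k: accuracy_per_joule(*candidates[k]))
print(best)  # the cheaper model wins on accuracy-per-joule
```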
31 pages, 1140 KB  
Review
A Survey of Multi-Layer IoT Security Using SDN, Blockchain, and Machine Learning
by Reorapetse Molose and Bassey Isong
Electronics 2026, 15(3), 494; https://doi.org/10.3390/electronics15030494 - 23 Jan 2026
Abstract
The integration of Software-Defined Networking (SDN), blockchain (BC), and machine learning (ML) has emerged as a promising approach to securing Internet of Things (IoT) and Industrial IoT (IIoT) networks. This paper presents a comprehensive review of recent studies focusing on multi-layered security across device, control, network, and application layers. The analysis reveals that BC technology ensures decentralised trust, immutability, and secure access validation, while SDN enables programmability, load balancing, and real-time monitoring. In addition, ML/deep learning (DL) techniques, including federated and hybrid learning, strengthen anomaly detection, predictive security, and adaptive mitigation. Reported evaluations show similar gains in detection accuracy, latency, throughput, and energy efficiency, with effective defence against threats, though differing experimental contexts limit direct comparison. The review also shows that the solutions’ effectiveness depends on ecosystem factors such as SDN controllers, BC platforms, cryptographic protocols, and ML frameworks. However, most studies rely on simulations or small-scale testbeds, leaving large-scale and heterogeneous deployments unverified. Significant challenges include scalability, computational and energy overhead, dataset dependency, limited adversarial resilience, and the explainability of ML-driven decisions. Based on the findings, future research should focus on lightweight consensus mechanisms for constrained devices, privacy-preserving ML/DL, and cross-layer adversarial-resilient frameworks. Advancing these directions will be important in achieving scalable, interoperable, and trustworthy SDN-IoT/IIoT security solutions.
(This article belongs to the Section Artificial Intelligence)
55 pages, 3089 KB  
Review
A Survey on Green Wireless Sensing: Energy-Efficient Sensing via WiFi CSI and Lightweight Learning
by Rod Koo, Xihao Liang, Deepak Mishra and Aruna Seneviratne
Energies 2026, 19(2), 573; https://doi.org/10.3390/en19020573 - 22 Jan 2026
Abstract
Conventional sensing expends energy at three stages: powering dedicated sensors, transmitting measurements, and executing computationally intensive inference. Wireless sensing re-purposes the WiFi channel state information (CSI) inherent in every packet, eliminating extra sensors and uplink traffic, though reliance on deep neural networks (DNNs), often trained and run on graphics processing units (GPUs), can negate these gains. This review highlights two core energy efficiency levers in CSI-based wireless sensing. First, ambient CSI harvesting cuts power use by an order of magnitude compared to radar and active Internet of Things (IoT) sensors. Second, integrated sensing and communication (ISAC) embeds sensing functionality into existing WiFi links, thereby reducing device count, battery waste, and carbon impact. We review conventional handcrafted and accuracy-first methods to set the stage for surveying green learning strategies and lightweight learning techniques, including compact hybrid neural architectures, pruning, knowledge distillation, quantisation, and semi-supervised training, that preserve accuracy while reducing model size and memory footprint. We also discuss hardware co-design, from low-power microcontrollers to edge application-specific integrated circuits (ASICs) and WiFi firmware extensions, that aligns computation with platform constraints. Finally, we identify open challenges in domain-robust compression, multi-antenna calibration, energy-proportionate model scaling, and standardised joules-per-inference metrics. Our aim is a practical, battery-friendly wireless sensing stack ready for smart home and 6G-era deployments.
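One of the lightweight-learning techniques surveyed, unstructured magnitude pruning, in a minimal sketch; the sparsity target and weights are illustrative:

```python
# Sketch of unstructured magnitude pruning, one of the model-compression
# techniques surveyed above. The 80% sparsity target and random weights
# are illustrative assumptions.

import numpy as np

def magnitude_prune(weights: np.ndarray, sparsity: float) -> np.ndarray:
    """Zero out the smallest-magnitude fraction of weights."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 64)).astype(np.float32)
w_pruned = magnitude_prune(w, sparsity=0.8)
print(f"non-zero fraction: {np.count_nonzero(w_pruned) / w.size:.2f}")  # ~0.20
```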
44 pages, 5287 KB  
Systematic Review
Cybersecurity in Radio Frequency Technologies: A Scientometric and Systematic Review with Implications for IoT and Wireless Applications
by Patrícia Rodrigues de Araújo, José Antônio Moreira de Rezende, Décio Rennó de Mendonça Faria and Otávio de Souza Martins Gomes
Sensors 2026, 26(2), 747; https://doi.org/10.3390/s26020747 - 22 Jan 2026
Abstract
Cybersecurity in radio frequency (RF) technologies has become a critical concern, driven by the expansion of connected systems in urban and industrial environments. Although research on wireless networks and the Internet of Things (IoT) has advanced, comprehensive studies that provide a global and integrated view of cybersecurity development in this field remain limited. This work presents a scientometric and systematic review of international publications from 2009 to 2025, integrating the PRISMA protocol with semantic screening supported by a Large Language Model to enhance classification accuracy and reproducibility. The analysis identified two interdependent axes: one focusing on signal integrity and authentication in GNSS systems and cellular networks; the other addressing the resilience of IoT networks, both strongly associated with spoofing and jamming, as well as replay, relay, eavesdropping, and man-in-the-middle (MitM) attacks. The results highlight the relevance of RF cybersecurity in securing communication infrastructures and expose gaps in widely adopted technologies such as RFID, NFC, BLE, ZigBee, LoRa, Wi-Fi, and unlicensed ISM bands, as well as in emerging areas like terahertz and 6G. These gaps directly affect the reliability and availability of IoT and wireless communication systems, increasing security risks in large-scale deployments such as smart cities and cyber–physical infrastructures.
(This article belongs to the Special Issue Cyber Security and Privacy in Internet of Things (IoT))
52 pages, 3528 KB  
Review
Advanced Fault Detection and Diagnosis Exploiting Machine Learning and Artificial Intelligence for Engineering Applications
by Davide Paolini, Pierpaolo Dini, Abdussalam Elhanashi and Sergio Saponara
Electronics 2026, 15(2), 476; https://doi.org/10.3390/electronics15020476 - 22 Jan 2026
Abstract
Modern engineering systems require reliable and timely Fault Detection and Diagnosis (FDD) to ensure operational safety and resilience. Traditional model-based and rule-based approaches, although interpretable, exhibit limited scalability and adaptability in complex, data-intensive environments. This survey provides a systematic overview of recent studies exploring Machine Learning (ML) and Artificial Intelligence (AI) techniques for FDD across industrial, energy, Cyber-Physical Systems (CPS)/Internet of Things (IoT), and cybersecurity domains. Deep architectures such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), Transformers, and Graph Neural Networks (GNNs) are compared with unsupervised, hybrid, and physics-informed frameworks, emphasizing their respective strengths in adaptability, robustness, and interpretability. Quantitative synthesis and radar-based assessments suggest that AI-driven FDD approaches offer increased adaptability, scalability, and early fault detection capabilities compared to classical methods, while also introducing new challenges related to interpretability, robustness, and deployment. Emerging research directions include the development of foundation and multimodal models, federated learning (FL), and privacy-preserving learning, as well as physics-guided trustworthy AI. These trends indicate a paradigm shift toward self-adaptive, interpretable, and collaborative FDD systems capable of sustaining reliability, transparency, and autonomy across critical infrastructures.
37 pages, 483 KB  
Review
Lattice-Based Cryptographic Accelerators for the Post-Quantum Era: Architectures, Optimizations, and Implementation Challenges
by Hua Yan, Lei Wu, Qiming Sun and Pengzhou He
Electronics 2026, 15(2), 475; https://doi.org/10.3390/electronics15020475 - 22 Jan 2026
Abstract
The imminent threat of large-scale quantum computers to modern public-key cryptographic devices has led to extensive research into post-quantum cryptography (PQC). Lattice-based schemes have proven to be the top candidates among existing PQC schemes due to their strong security guarantees, versatility, and relatively efficient operations. However, the computational cost of lattice-based algorithms—including arithmetic operations such as the Number Theoretic Transform (NTT), polynomial multiplication, and sampling—poses considerable performance challenges in practice. This survey offers a comprehensive review of hardware acceleration for lattice-based cryptographic schemes, covering both the architectural and implementation details of the standardized algorithms CRYSTALS-Kyber, CRYSTALS-Dilithium, and FALCON (Fast Fourier Lattice-Based Compact Signatures over NTRU). It examines optimization measures at various levels, such as algorithmic optimization, arithmetic unit design, memory hierarchy management, and system integration. The paper compares the performance measures (throughput, latency, area, and power) of Field-Programmable Gate Array (FPGA) and Application-Specific Integrated Circuit (ASIC) implementations. We also address major implementation issues: side-channel resistance, resource constraints within Internet of Things (IoT) devices, and the trade-offs between performance and security. Finally, we point out new research opportunities and existing challenges, with implications for hardware accelerator design in the post-quantum cryptographic environment.
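A naive O(n²) number-theoretic transform, the core arithmetic kernel these accelerators optimize, sketched with toy parameters (q = 17, n = 8) rather than Kyber's q = 3329, n = 256:

```python
# Naive O(n^2) number-theoretic transform (NTT). Toy parameters are chosen
# for readability; production schemes use larger moduli and fast butterflies.

Q, N, OMEGA = 17, 8, 2          # 2 is a primitive 8th root of unity mod 17

def ntt(a, omega=OMEGA):
    """Forward transform: X[k] = sum_j a[j] * omega^(j*k) mod Q."""
    return [sum(a[j] * pow(omega, j * k, Q) for j in range(N)) % Q
            for k in range(N)]

def intt(x):
    """Inverse transform: use omega^-1, then scale by N^-1."""
    n_inv = pow(N, -1, Q)
    inv = ntt(x, pow(OMEGA, -1, Q))
    return [(v * n_inv) % Q for v in inv]

# Cyclic convolution (poly multiplication mod X^N - 1) via pointwise product.
a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]
c = intt([(x * y) % Q for x, y in zip(ntt(a), ntt(b))])
assert intt(ntt(a)) == a
print(c)  # product coefficients, reduced mod Q
```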
26 pages, 4670 KB  
Article
Construction of Ultra-Wideband Virtual Reference Station and Research on High-Precision Indoor Trustworthy Positioning Method
by Yinzhi Zhao, Jingui Zou, Bing Xie, Jingwen Wu, Zhennan Zhou and Gege Huang
ISPRS Int. J. Geo-Inf. 2026, 15(1), 50; https://doi.org/10.3390/ijgi15010050 - 22 Jan 2026
Abstract
With the development of the Internet of Things (IoT) and smart industry, the demand for high-precision indoor positioning is becoming increasingly urgent. Ultra-wideband (UWB) technology has become a research hotspot due to its centimeter-level ranging accuracy, good penetration, and high multipath resolution. However, in complex environments, it still faces problems such as the high cost of anchor node layout, gross errors in observation data, and difficulty in eliminating systematic errors such as electronic time delay. To address these problems, this paper proposes a comprehensive UWB indoor positioning scheme. By constructing virtual reference stations to enhance the observation network, the geometric structure is optimized and the dependence on physical anchors is reduced. Combined with a gross error elimination method under short-baseline constraints and a double-difference positioning model including virtual observations, the scheme systematically suppresses systematic errors such as electronic delay. Additionally, a quality control strategy with velocity constraints is introduced to improve trajectory smoothness and reliability. Static experimental results show that the proposed double-difference model can effectively eliminate systematic errors. For example, the positioning deviation in the X direction is reduced from approximately 2.88 cm to 0.84 cm, while the positioning accuracy in the Y direction slightly decreases; overall, the positioning accuracy is improved. The gross error elimination method achieves an identification efficiency of over 85% and an accuracy of higher than 99%, providing high-quality observation data for subsequent calculations. Dynamic experimental results show that the positioning trajectory after geometric enhancement with virtual reference stations and velocity-constrained quality control is highly consistent with the reference trajectory, with significantly improved trajectory smoothness and reliability. In summary, this study constructs a complete technical chain from data preprocessing to result quality control, effectively improving the accuracy and robustness of UWB positioning in complex indoor environments, and exhibits promising engineering application potential.
(This article belongs to the Special Issue Indoor Mobile Mapping and Location-Based Knowledge Services)
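A toy numeric illustration of why double-differencing suppresses device-dependent systematic errors such as electronic time delay; the geometry and bias values are invented:

```python
# Toy illustration of double-differencing, the model used above to cancel
# device-dependent systematic errors such as electronic time delay.
# Geometry and delay values are invented for the example.

import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

anchors = {"i": (0.0, 0.0), "j": (10.0, 0.0)}     # two anchors (or one virtual)
ref, rover = (2.0, 3.0), (7.0, 4.0)               # reference station and rover
delay = {"i": 0.30, "j": 0.45, "ref": 0.20, "rover": 0.55}  # bias per device [m]

def measured(station, name, a):
    """Range observation = geometry + anchor delay + station delay."""
    return dist(station, anchors[a]) + delay[a] + delay[name]

# Single differences (between anchors) cancel each station's own delay;
# the double difference then cancels the anchor delays as well.
sd_rover = measured(rover, "rover", "i") - measured(rover, "rover", "j")
sd_ref = measured(ref, "ref", "i") - measured(ref, "ref", "j")
dd = sd_rover - sd_ref

dd_true = (dist(rover, anchors["i"]) - dist(rover, anchors["j"])) \
        - (dist(ref, anchors["i"]) - dist(ref, anchors["j"]))
assert abs(dd - dd_true) < 1e-12  # all device biases have cancelled
print(f"double difference: {dd:.3f} m (bias-free)")
```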
15 pages, 6829 KB  
Article
RF Energy Harvesting–Aided IoT Network: System Design and Prototype Implementation
by Yang Wang and Hangyi Chen
Micromachines 2026, 17(1), 137; https://doi.org/10.3390/mi17010137 - 22 Jan 2026
Abstract
As Internet of Things (IoT) devices are deployed ever more widely, the issue of energy supply for these devices is becoming increasingly prominent. Considering not only the wireless information transfer (WIT) function of traditional IoT networks but also the characteristics of wireless power transfer (WPT), an RF energy harvesting–aided IoT network is proposed in this paper. In the new IoT network, a WPT transmitter is introduced to the new gateway and a WPT receiver to the new end-device. The WPT transmitter is mainly composed of an antenna selection circuit, a power amplifier, and a directional antenna. The WPT receiver consists of a directional antenna, a matching network, a rectification circuit, and an energy management circuit. To coordinate WPT and WIT in an orderly manner and minimize WPT interference with WIT, a time-division scheme is adopted. The proposed network offers a new IoT scheme combining both WIT and WPT technologies, so that IoT devices can obtain a new energy supply through RF energy harvesting. Both the effectiveness and efficiency of the proposed RF energy harvesting–aided IoT network have been validated through experimentation.
(This article belongs to the Special Issue Micro-Energy Harvesting Technologies and Self-Powered Sensing Systems)
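A minimal sketch of the time-division coordination between WPT and WIT; slot lengths and power levels are illustrative assumptions, not prototype measurements:

```python
# Sketch of the time-division scheme that coordinates wireless power transfer
# (WPT) and wireless information transfer (WIT) so they never overlap.
# All timing and power numbers are illustrative assumptions, not values
# measured on the prototype.

FRAME_MS = 100
WPT_MS, WIT_MS = 70, 30        # assumed split: harvest first, then transmit
HARVEST_MW, TX_MW = 2.0, 4.0   # assumed harvested vs. transmit power [mW]

def run_frames(n_frames: int, energy_uj: float = 0.0) -> float:
    """Track the end-device energy budget over successive frames."""
    for _ in range(n_frames):
        energy_uj += HARVEST_MW * WPT_MS   # WPT slot: radio silent, harvesting
        energy_uj -= TX_MW * WIT_MS        # WIT slot: spend energy on uplink
    return energy_uj

# With this split the device gains 20 uJ per frame, i.e., harvesting keeps
# pace with communication and the node is energy-neutral with margin.
print(f"net energy after 10 frames: {run_frames(10):.0f} uJ")  # 200 uJ
```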
43 pages, 898 KB  
Systematic Review
Transforming Digital Accounting: Big Data, IoT, and Industry 4.0 Technologies—A Comprehensive Survey
by Georgios Thanasas, Georgios Kampiotis and Constantinos Halkiopoulos
J. Risk Financial Manag. 2026, 19(1), 92; https://doi.org/10.3390/jrfm19010092 - 22 Jan 2026
Abstract
(1) Background: The convergence of Big Data and the Internet of Things (IoT) is transforming digital accounting from retrospective documentation into real-time operational intelligence. This systematic review examines how Industry 4.0 technologies—artificial intelligence (AI), blockchain, edge computing, and digital twins—transform accounting practices through intelligent automation, continuous compliance, and predictive decision support. (2) Methods: The study synthesizes 176 peer-reviewed sources (2015–2025) selected using explicit inclusion criteria emphasizing empirical evidence. Thematic analysis across seven domains—conceptual foundations, system evolution, financial reporting, fraud detection, audit transformation, implementation challenges, and emerging technologies—employs systematic bias-reduction mechanisms to develop evidence-based theoretical propositions. (3) Results: Key findings document fraud detection accuracy improvements from 65–75% (rule-based) to 85–92% (machine learning), audit cycle reductions of 40–60% with coverage expansion from 5–10% sampling to 100% population analysis, and reconciliation effort decreases of 70–80% through triple-entry blockchain systems. Edge computing reduces processing latency by 40–75%, enabling compliance response within hours versus 24–72 h. Four propositions are established with empirical support: IoT-enabled reporting superiority (15–25% error reduction), AI-blockchain fraud detection advantage (60–70% loss reduction), edge computing compliance responsiveness (55–75% improvement), and GDPR-blockchain adoption barriers (67% of European institutions affected). Persistent challenges include cybersecurity threats (300% incident increase, $5.9 million average breach cost), workforce deficits (70–80% insufficient training), and implementation costs ($100,000–$1,000,000). (4) Conclusions: The research contributes a four-layer technology architecture and challenge-mitigation framework bridging technical capabilities with regulatory requirements. Future research must address quantum computing applications (5–10 years), decentralized finance accounting standards (2–5 years), digital twins with 30–40% forecast improvement potential (3–7 years), and ESG analytics frameworks (1–3 years). The findings demonstrate accounting’s fundamental transformation from historical record-keeping to predictive decision support.
(This article belongs to the Section Financial Technology and Innovation)
33 pages, 2850 KB  
Article
Automated Vulnerability Scanning and Prioritisation for Domestic IoT Devices/Smart Homes: A Theoretical Framework
by Diego Fernando Rivas Bustos, Jairo A. Gutierrez and Sandra J. Rueda
Electronics 2026, 15(2), 466; https://doi.org/10.3390/electronics15020466 - 21 Jan 2026
Abstract
The expansion of Internet of Things (IoT) devices in domestic smart homes has created new conveniences but also significant security risks. Insecure firmware, weak authentication and weak encryption leave households exposed to privacy breaches, data leakage and systemic attacks. Although research has addressed several challenges, contributions remain fragmented and difficult for non-technical users to apply. This work addresses the following research question: How can a theoretical framework be developed to enable automated vulnerability scanning and prioritisation for non-technical users in domestic IoT environments? A Systematic Literature Review of 40 peer-reviewed studies, conducted under PRISMA 2020 guidelines, identified four structural gaps: dispersed vulnerability knowledge, fragmented scanning approaches, over-reliance on technical severity in prioritisation and weak protocol standardisation. The paper introduces a four-module framework: a Vulnerability Knowledge Base, an Automated Scanning Engine, a Context-Aware Prioritisation Module and a Standardisation and Interoperability Layer. The framework advances knowledge by integrating previously siloed approaches into a layered and iterative artefact tailored to households. While limited to conceptual evaluation, the framework establishes a foundation for future work in prototype development, household usability studies and empirical validation. By addressing fragmented evidence with a coherent and adaptive design, the study contributes to both academic understanding and practical resilience, offering a pathway toward more secure and trustworthy domestic IoT ecosystems.
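The Context-Aware Prioritisation Module is defined only conceptually in the paper; one plausible shape for it, with invented context factors and weights, is sketched below:

```python
# One plausible shape for the Context-Aware Prioritisation Module described
# above: weight raw technical severity (CVSS) by household context factors.
# The factors and weights are invented for illustration; the paper defines
# the module only conceptually.

from dataclasses import dataclass

@dataclass
class Finding:
    device: str
    cvss: float                  # technical severity, 0-10
    internet_exposed: bool       # reachable from outside the home network
    handles_private_data: bool   # camera, voice assistant, health sensor...

def priority(f: Finding) -> float:
    """Context-weighted score: severity scaled by exposure and data sensitivity."""
    score = f.cvss
    score *= 1.5 if f.internet_exposed else 1.0
    score *= 1.3 if f.handles_private_data else 1.0
    return round(score, 1)

findings = [
    Finding("smart bulb", cvss=7.5, internet_exposed=False, handles_private_data=False),
    Finding("baby monitor", cvss=6.0, internet_exposed=True, handles_private_data=True),
]
# Context can outrank raw severity: the baby monitor (11.7) ranks first.
for f in sorted(findings, key=priority, reverse=True):
    print(f"{f.device}: priority {priority(f)}")
```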