IoT, Volume 6, Issue 3 (September 2025) – 14 articles

  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive the tables of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link and use the free Adobe Reader to open them.
25 pages, 3109 KB  
Article
Radio Frequency Fingerprinting Authentication for IoT Networks Using Siamese Networks
by Raju Dhakal, Laxima Niure Kandel and Prashant Shekhar
IoT 2025, 6(3), 47; https://doi.org/10.3390/iot6030047 - 22 Aug 2025
Viewed by 196
Abstract
As IoT (Internet of Things) devices grow in prominence, safeguarding them from cyberattacks is becoming a pressing challenge. To bootstrap IoT security, device identification or authentication is crucial for establishing trusted connections among devices without prior trust. In this regard, radio frequency fingerprinting (RFF) is gaining attention because it is more efficient and requires fewer computational resources compared to resource-intensive cryptographic methods, such as digital signatures. RFF identifies unique manufacturing defects in the radio circuitry of IoT devices by analyzing over-the-air signals that embed these imperfections, allowing the transmitting hardware to be identified. Recent studies on RFF often leverage advanced classification models, including classical machine learning techniques such as K-Nearest Neighbor (KNN) and Support Vector Machine (SVM), as well as modern deep learning architectures like Convolutional Neural Networks (CNNs). In particular, CNNs are well suited because they use multidimensional mapping to detect and extract reliable fingerprints during the learning process. However, a significant limitation of these approaches is that they require large datasets and necessitate retraining when new devices not included in the initial training set are added. This retraining can cause service interruptions and is costly, especially in large-scale IoT networks. In this paper, we propose a novel solution to this problem: RFF using Siamese networks, which eliminates the need for retraining and allows for seamless authentication in IoT deployments. The proposed Siamese network is trained using in-phase and quadrature (I/Q) samples from 10 different Software-Defined Radios (SDRs). Additionally, we present a new algorithm, Similarity-Based Embedding Classification (SBEC), for RFF. Experimental results demonstrate that the Siamese network effectively distinguishes between malicious and trusted devices with a remarkable 98% identification accuracy. Full article
(This article belongs to the Special Issue Cybersecurity in the Age of the Internet of Things)
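A minimal sketch of the similarity-matching idea behind SBEC, assuming embeddings already produced by a Siamese encoder: a query embedding is compared against enrolled per-device references with cosine similarity, and any probe below a threshold is treated as unknown (potentially malicious). Device names, embedding size, and the threshold are illustrative, not the authors' implementation.

```python
# Similarity-based embedding check in the spirit of SBEC (illustrative sketch).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def classify_embedding(query: np.ndarray,
                       references: dict[str, np.ndarray],
                       threshold: float = 0.9) -> str:
    """Return the best-matching enrolled device, or 'unknown' (potentially malicious)."""
    best_device, best_score = "unknown", threshold
    for device_id, ref in references.items():
        score = cosine_similarity(query, ref)
        if score > best_score:
            best_device, best_score = device_id, score
    return best_device

# Toy usage: 128-dimensional embeddings from a hypothetical Siamese encoder.
rng = np.random.default_rng(0)
enrolled = {f"sdr_{i}": rng.normal(size=128) for i in range(10)}
probe = enrolled["sdr_3"] + 0.05 * rng.normal(size=128)   # noisy capture of a known radio
print(classify_embedding(probe, enrolled))                 # -> 'sdr_3'
```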

21 pages, 510 KB  
Review
IoT and Machine Learning for Smart Bird Monitoring and Repellence: Techniques, Challenges, and Opportunities
by Samson O. Ooko, Emmanuel Ndashimye, Evariste Twahirwa and Moise Busogi
IoT 2025, 6(3), 46; https://doi.org/10.3390/iot6030046 - 7 Aug 2025
Viewed by 605
Abstract
The activities of birds present increasing challenges in agriculture, aviation, and environmental conservation. This has led to economic losses, safety risks, and ecological imbalances. Attempts have been made to address the problem, with traditional deterrent methods proving to be labour-intensive, environmentally unfriendly, and ineffective over time. Advances in artificial intelligence (AI) and the Internet of Things (IoT) present opportunities for enabling automated real-time bird detection and repellence. This study reviews recent developments (2020–2025) in AI-driven bird detection and repellence systems, emphasising the integration of image, audio, and multi-sensor data in IoT and edge-based environments. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses framework was used, with 267 studies initially identified and screened from key scientific databases. A total of 154 studies met the inclusion criteria and were analysed. The findings show the increasing use of convolutional neural networks (CNNs), YOLO variants, and MobileNet in visual detection, and the growing use of lightweight audio-based models such as BirdNET, MFCC-based CNNs, and TinyML frameworks for microcontroller deployment. Multi-sensor fusion is proposed to improve detection accuracy in diverse environments. Repellence strategies include sound-based deterrents, visual deterrents, predator-mimicking visuals, and adaptive AI-integrated systems. Deployment success depends on edge compatibility, power efficiency, and dataset quality. The limitations of current studies include species-specific detection challenges, data scarcity, environmental changes, and energy constraints. Future research should focus on tiny and lightweight AI models, standardised multi-modal datasets, and intelligent, behaviour-aware deterrence mechanisms suitable for precision agriculture and ecological monitoring. Full article
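Many of the reviewed audio pipelines feed MFCC features to a lightweight CNN. The sketch below, assuming librosa is available, shows one way such a fixed-size MFCC input might be produced; the file path, clip length, and feature sizes are illustrative.

```python
# Rough sketch of an MFCC front-end of the kind the reviewed audio models use.
import numpy as np
import librosa

def mfcc_features(path: str, sr: int = 22050, duration: float = 3.0,
                  n_mfcc: int = 13) -> np.ndarray:
    """Load a short clip and return an (n_mfcc, frames) MFCC matrix."""
    y, sr = librosa.load(path, sr=sr, duration=duration)
    # Pad short recordings so every clip yields the same input shape.
    y = librosa.util.fix_length(y, size=int(sr * duration))
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)

# features = mfcc_features("bird_clip.wav")   # shape ~(13, 130); fed to a small CNN
```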

25 pages, 663 KB  
Systematic Review
IoT Devices and Their Impact on Learning: A Systematic Review of Technological and Educational Affordances
by Dimitris Tsipianitis, Anastasia Misirli, Konstantinos Lavidas and Vassilis Komis
IoT 2025, 6(3), 45; https://doi.org/10.3390/iot6030045 - 7 Aug 2025
Viewed by 708
Abstract
A principal factor of the fourth Industrial Revolution is the Internet of Things (IoT), a network of “smart” objects that communicate by exchanging helpful information about themselves and their environment. Our research aims to address the gaps in the existing literature regarding the educational and technological affordances of IoT applications in learning environments in secondary education. Our systematic review using the PRISMA method allowed us to extract 25 empirical studies from the last 10 years. We present the categorization of educational and technological affordances, as well as the devices used in these environments. Moreover, our findings indicate widespread adoption of organized educational activities and design-based learning, often incorporating tangible interfaces, smart objects, and IoT applications, which enhance student engagement and interaction. Additionally, we identify the impact of IoT-based learning on knowledge building, autonomous learning, student attitude, and motivation. The results suggest that the IoT can facilitate personalized and experiential learning, fostering a more immersive and adaptive educational experience. Based on these findings, we discuss key recommendations for educators, policymakers, and researchers, while also addressing this study’s limitations and potential directions for future research. Full article

40 pages, 87432 KB  
Article
Optimizing Urban Mobility Through Complex Network Analysis and Big Data from Smart Cards
by Li Sun, Negin Ashrafi and Maryam Pishgar
IoT 2025, 6(3), 44; https://doi.org/10.3390/iot6030044 - 6 Aug 2025
Viewed by 433
Abstract
Urban public transportation systems face increasing pressure from shifting travel patterns, rising peak-hour demand, and the need for equitable and resilient service delivery. While complex network theory has been widely applied to analyze transit systems, limited attention has been paid to behavioral segmentation within such networks. This study introduces a frequency-based framework that differentiates high-frequency (HF) and low-frequency (LF) passengers to examine how distinct user groups shape network structure, congestion vulnerability, and robustness. Using over 20 million smart-card records from Beijing’s multimodal transit system, we construct and analyze directed weighted networks for HF and LF users, integrating topological metrics, temporal comparisons, and community detection. Results reveal that HF networks are densely connected but structurally fragile, exhibiting lower modularity and significantly greater efficiency loss during peak periods. In contrast, LF networks are more spatially dispersed yet resilient, maintaining stronger intracommunity stability. Peak-hour simulation shows a 70% drop in efficiency and a 99% decrease in clustering, with HF networks experiencing higher vulnerability. Based on these findings, we propose differentiated policy strategies for each user group and outline a future optimization framework constrained by budget and equity considerations. This study contributes a scalable, data-driven approach to integrating passenger behavior with network science, offering actionable insights for resilient and inclusive transit planning. Full article
(This article belongs to the Special Issue IoT-Driven Smart Cities)
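A toy sketch of the kind of network construction and metrics the study reports, using networkx: smart-card trips become a directed weighted graph, and global efficiency and modularity are computed on it. Station names, trip counts, and the HF/LF split are illustrative assumptions.

```python
# Build a directed weighted transit graph from (origin, destination, count) trips.
import networkx as nx
from networkx.algorithms import community

def build_transit_graph(trips):
    G = nx.DiGraph()
    for o, d, w in trips:
        if G.has_edge(o, d):
            G[o][d]["weight"] += w
        else:
            G.add_edge(o, d, weight=w)
    return G

hf_trips = [("A", "B", 120), ("B", "C", 95), ("C", "A", 80), ("A", "C", 60)]
G_hf = build_transit_graph(hf_trips)

# Structural indicators comparable to those reported for HF vs. LF networks.
undirected = G_hf.to_undirected()
print("global efficiency:", nx.global_efficiency(undirected))
comms = community.greedy_modularity_communities(undirected, weight="weight")
print("modularity:", community.modularity(undirected, comms, weight="weight"))
```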

37 pages, 6916 KB  
Review
The Role of IoT in Enhancing Sports Analytics: A Bibliometric Perspective
by Yuvanshankar Azhagumurugan, Jawahar Sundaram, Zenith Dewamuni, Pritika, Yakub Sebastian and Bharanidharan Shanmugam
IoT 2025, 6(3), 43; https://doi.org/10.3390/iot6030043 - 31 Jul 2025
Viewed by 545
Abstract
The use of Internet of Things (IoT) for sports innovation has transformed the way athletes train, compete, and recover in any sports activity. This study performs a bibliometric analysis to examine research trends, collaborations, and publications in the realm of IoT and Sports. Our analysis included 780 Scopus articles and 150 WoS articles published during 2012–2025, and duplicates were removed. We analyzed and visualized the bibliometric data using R version 3.6.1, VOSviewer version 1.6.20, and the bibliometrix library. The study provides insights from a bibliometric analysis, showcasing the allocation of topics, scientific contributions, patterns of co-authorship, prominent authors and their productivity over time, notable terms, key sources, publications with citations, analysis of citations, source-specific citation analysis, yearly publication patterns, and the distribution of research papers. The results indicate that China and India have the leading scientific production in the development of IoT and Sports research, with prominent authors like Anton Umek, Anton Kos, and Emiliano Schena making significant contributions. Wearable technology and wearable sensors are the most trending topics in IoT and Sports, followed by medical sciences and artificial intelligence paradigms. The analysis also emphasizes the importance of open-access journals like ‘Journal of Physics: Conference Series’ and ‘IEEE Access’ for their contributions to IoT and Sports research. Future research directions focus on enhancing effective, lightweight, and efficient wearable devices while implementing technologies like edge computing and lightweight AI in wearable technologies. Full article

26 pages, 3844 KB  
Article
A No-Code Educational Platform for Introducing Internet of Things and Its Application to Agricultural Education
by George Lagogiannis and Avraam Chatzopoulos
IoT 2025, 6(3), 42; https://doi.org/10.3390/iot6030042 - 31 Jul 2025
Viewed by 387
Abstract
This study introduces a no-code educational platform created to introduce the Internet of Things (IoT) to university students who lack programming experience. The platform allows users to set up IoT sensor nodes and create a wireless sensor network through a simple graphical interface. Sensor data can be sent to cloud services, but they can also be stored locally, which makes our platform particularly realistic in fieldwork settings where internet access may be limited. The platform was tested in a pilot activity within a university course that previously covered IoT only in theory and was evaluated using the Technology Acceptance Model (TAM). Results showed strong student engagement and high ratings for ease of use, usefulness, and future use intent. These findings suggest that a no-code approach can effectively bridge the gap between IoT technologies and learners in non-engineering fields. Full article

18 pages, 651 KB  
Article
Enhancing IoT Connectivity in Suburban and Rural Terrains Through Optimized Propagation Models Using Convolutional Neural Networks
by George Papastergiou, Apostolos Xenakis, Costas Chaikalis, Dimitrios Kosmanos and Menelaos Panagiotis Papastergiou
IoT 2025, 6(3), 41; https://doi.org/10.3390/iot6030041 - 31 Jul 2025
Viewed by 312
Abstract
The widespread adoption of the Internet of Things (IoT) has driven major advancements in wireless communication, especially in rural and suburban areas where low population density and limited infrastructure pose significant challenges. Accurate Path Loss (PL) prediction is critical for the effective deployment and operation of Wireless Sensor Networks (WSNs) in such environments. This study explores the use of Convolutional Neural Networks (CNNs) for PL modeling, utilizing a comprehensive dataset collected in a smart campus setting that captures the influence of terrain and environmental variations. Several CNN architectures were evaluated based on different combinations of input features—such as distance, elevation, clutter height, and altitude—to assess their predictive accuracy. The findings reveal that CNN-based models outperform traditional propagation models (Free Space Path Loss (FSPL), Okumura–Hata, COST 231, Log-Distance), achieving lower error rates and more precise PL estimations. The best performing CNN configuration, using only distance and elevation, highlights the value of terrain-aware modeling. These results underscore the potential of deep learning techniques to enhance IoT connectivity in sparsely connected regions and support the development of more resilient communication infrastructures. Full article
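One of the closed-form baselines the CNN is compared against, free-space path loss, can be written directly; the frequency and distances below are illustrative, and the CNN itself is not reproduced here.

```python
# Free-space path loss (FSPL), one of the classical baselines in the comparison.
import numpy as np

def fspl_db(distance_km: np.ndarray, frequency_mhz: float) -> np.ndarray:
    """FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * np.log10(distance_km) + 20 * np.log10(frequency_mhz) + 32.44

distances = np.array([0.1, 0.5, 1.0, 2.0])      # km
print(fspl_db(distances, frequency_mhz=868.0))  # typical sub-GHz IoT band
```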

15 pages, 271 KB  
Article
Evaluating the Energy Costs of SHA-256 and SHA-3 (KangarooTwelve) in Resource-Constrained IoT Devices
by Iain Baird, Isam Wadhaj, Baraq Ghaleb, Craig Thomson and Gordon Russell
IoT 2025, 6(3), 40; https://doi.org/10.3390/iot6030040 - 11 Jul 2025
Viewed by 565
Abstract
The rapid expansion of Internet of Things (IoT) devices has heightened the demand for lightweight and secure cryptographic mechanisms suitable for resource-constrained environments. While SHA-256 remains a widely used standard, the emergence of SHA-3, particularly the KangarooTwelve variant, offers potential benefits in flexibility and post-quantum resilience for lightweight resource-constrained devices. This paper presents a comparative evaluation of the energy costs associated with SHA-256 and SHA-3 hashing in Contiki 3.0, using three generationally distinct IoT platforms: Sky Mote, Z1 Mote, and Wismote. Unlike previous studies that rely on hardware acceleration or limited scope, our work conducts a uniform, software-only analysis across all motes, employing consistent radio duty cycling with ContikiMAC (a low-power Medium Access Control protocol) and isolating the cryptographic workload from network overhead. The empirical results from the Cooja simulator reveal that while SHA-3 provides advanced security features, it incurs significantly higher CPU and, in some cases, radio energy costs, particularly on legacy hardware. However, modern platforms like Wismote demonstrate a more balanced trade-off, making SHA-3 viable in higher-capability deployments. These findings offer actionable guidance for designers of secure IoT systems, highlighting the practical implications of cryptographic selection in energy-sensitive environments. Full article
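The per-hash energy comparison follows the usual Energest-style accounting in Contiki/Cooja, energy = (ticks / RTIMER_SECOND) × current × voltage. The sketch below applies that formula; the tick counts, current draw, and timer frequency are illustrative placeholders, not the paper's measurements.

```python
# Energest-style energy estimate for comparing hash workloads (illustrative values).
RTIMER_SECOND = 32768          # energy-timer ticks per second (platform-dependent; illustrative)

def energy_mj(cpu_ticks: int, current_ma: float, voltage_v: float = 3.0) -> float:
    """Energy in millijoules for a block of CPU-active ticks (mA * V * s = mJ)."""
    seconds = cpu_ticks / RTIMER_SECOND
    return seconds * current_ma * voltage_v

sha256_ticks, sha3_ticks = 5_000, 12_000      # hypothetical per-hash tick counts
print("SHA-256:", energy_mj(sha256_ticks, current_ma=1.8), "mJ")   # ~1.8 mA CPU-active draw (illustrative)
print("SHA-3  :", energy_mj(sha3_ticks, current_ma=1.8), "mJ")
```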

18 pages, 721 KB  
Article
An Adaptive Holt–Winters Model for Seasonal Forecasting of Internet of Things (IoT) Data Streams
by Samer Sawalha and Ghazi Al-Naymat
IoT 2025, 6(3), 39; https://doi.org/10.3390/iot6030039 - 10 Jul 2025
Viewed by 460
Abstract
In various applications, IoT temporal data play a crucial role in accurately predicting future trends. Traditional models, including Rolling Window, SVR-RBF, and ARIMA, suffer from a potential accuracy decrease because they generally use all available data or the most recent data window during training, which can result in the inclusion of noisy data. To address this critical issue, this paper proposes a new forecasting technique called Adaptive Holt–Winters (AHW). The AHW approach utilizes two models grounded in an exponential smoothing methodology. The first model is trained on the most current data window, whereas the second extracts information from a historical data segment exhibiting patterns most analogous to the present. The outputs of the two models are then combined, demonstrating enhanced prediction precision since the focus is on the relevant data patterns. The effectiveness of the AHW model is evaluated against well-known models (Rolling Window, SVR-RBF, ARIMA, LSTM, CNN, RNN, and Holt–Winters), utilizing various metrics, such as RMSE, MAE, p-value, and time performance. A comprehensive evaluation covers various real-world datasets at different granularities (daily and monthly), including temperature from the National Climatic Data Center (NCDC), humidity and soil moisture measurements from the Basel City environmental system, and global intensity and global reactive power from the Individual Household Electric Power Consumption (IHEPC) dataset. The evaluation results demonstrate that AHW consistently attains higher forecasting accuracy across the tested datasets compared to other models. This indicates the efficacy of AHW in leveraging pertinent data patterns for enhanced predictive precision, offering a robust solution for temporal IoT data forecasting. Full article
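A rough sketch of the AHW idea, assuming statsmodels: one Holt–Winters model is fitted on the most recent window, another on the historical segment most similar to it, and their forecasts are combined. The window length, Euclidean similarity measure, and equal-weight combination are assumptions, not the authors' exact settings.

```python
# Adaptive Holt-Winters sketch: recent window + most-similar historical segment.
import numpy as np
from statsmodels.tsa.holtwinters import ExponentialSmoothing

def most_similar_segment(history: np.ndarray, window: np.ndarray) -> np.ndarray:
    """Slide over the history and return the segment closest to the recent window."""
    w = len(window)
    dists = [np.linalg.norm(history[i:i + w] - window)
             for i in range(len(history) - 2 * w)]   # leave a gap before the recent window
    start = int(np.argmin(dists))
    return history[start:start + w]

def ahw_forecast(series: np.ndarray, window_len: int = 48,
                 season: int = 12, horizon: int = 12) -> np.ndarray:
    recent = series[-window_len:]
    history = series[:-window_len]
    similar = most_similar_segment(history, recent) if len(history) > 2 * window_len else recent

    def fit_forecast(y):
        model = ExponentialSmoothing(y, trend="add", seasonal="add",
                                     seasonal_periods=season).fit()
        return model.forecast(horizon)

    # Equal-weight combination of the two forecasts (illustrative choice).
    return 0.5 * (fit_forecast(recent) + fit_forecast(similar))

# Synthetic monthly-like signal: trend + seasonality + noise.
t = np.arange(240)
y = 10 + 0.02 * t + 3 * np.sin(2 * np.pi * t / 12) + np.random.normal(0, 0.3, t.size)
print(ahw_forecast(y)[:3])
```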

24 pages, 76230 KB  
Article
Secure and Efficient Video Management: A Novel Framework for CCTV Surveillance Systems
by Swarnalatha Camalapuram Subramanyam, Ansuman Bhattacharya and Koushik Sinha
IoT 2025, 6(3), 38; https://doi.org/10.3390/iot6030038 - 4 Jul 2025
Viewed by 481
Abstract
This paper presents a novel video encoding and decoding method aimed at enhancing security and reducing storage requirements, particularly for CCTV systems. The technique merges two video streams of matching frame dimensions into a single stream, optimizing disk space usage without compromising video quality. The combined video is secured using an advanced encryption standard (AES)-based shift algorithm that rearranges pixel positions, preventing unauthorized access. During decoding, the AES shift is reversed, enabling precise reconstruction of the original videos. This approach provides a space-efficient and secure solution for managing multiple video feeds while ensuring accurate recovery of the original content. The experimental results demonstrate that the transmission time for the encoded video is consistently shorter compared to transmitting the video streams separately. This, in turn, leads to an approximately 54% reduction in energy consumption across diverse outdoor and indoor video datasets, highlighting significant improvements in both transmission efficiency and energy savings achieved by our proposed scheme. Full article
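A sketch of the merge-then-scramble idea: two equal-size frames are stacked into one and pixel positions are permuted with a key-derived permutation, which the decoder inverts. A seeded NumPy permutation stands in for the paper's AES-based shift; it demonstrates reversibility, not a secure cipher.

```python
# Merge two frames, scramble pixel positions with a keyed permutation, then invert.
import numpy as np

def merge_frames(f1: np.ndarray, f2: np.ndarray) -> np.ndarray:
    return np.concatenate([f1, f2], axis=1)          # side-by-side merge

def scramble(frame: np.ndarray, key: int) -> np.ndarray:
    flat = frame.reshape(-1, frame.shape[-1])
    perm = np.random.default_rng(key).permutation(len(flat))
    return flat[perm].reshape(frame.shape)

def unscramble(frame: np.ndarray, key: int) -> np.ndarray:
    flat = frame.reshape(-1, frame.shape[-1])
    perm = np.random.default_rng(key).permutation(len(flat))
    out = np.empty_like(flat)
    out[perm] = flat                                  # invert the permutation
    return out.reshape(frame.shape)

f1 = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
f2 = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
encoded = scramble(merge_frames(f1, f2), key=42)
decoded = unscramble(encoded, key=42)
assert np.array_equal(decoded[:, :160], f1) and np.array_equal(decoded[:, 160:], f2)
```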

22 pages, 557 KB  
Article
Using Blockchain Ledgers to Record AI Decisions in IoT
by Vikram Kulothungan
IoT 2025, 6(3), 37; https://doi.org/10.3390/iot6030037 - 3 Jul 2025
Viewed by 1264
Abstract
The rapid integration of AI into IoT systems has outpaced the ability to explain and audit automated decisions, resulting in a serious transparency gap. We address this challenge by proposing a blockchain-based framework to create immutable audit trails of AI-driven IoT decisions. In our approach, each AI inference, comprising key inputs, model ID, and output, is logged to a permissioned blockchain ledger, ensuring that every decision is traceable and auditable. IoT devices and edge gateways submit cryptographically signed decision records via smart contracts, resulting in an immutable, timestamped log that is tamper-resistant. This decentralized approach guarantees non-repudiation and data integrity while balancing transparency with privacy (e.g., hashing personal data on-chain) to meet data protection norms. Our design aligns with emerging regulations, such as the EU AI Act’s logging mandate and GDPR’s transparency requirements. We demonstrate the framework’s applicability in two domains: healthcare IoT (logging diagnostic AI alerts for accountability) and industrial IoT (tracking autonomous control actions), showing its generalizability to high-stakes environments. Our contributions include the following: (1) a novel architecture for AI decision provenance in IoT, (2) a blockchain-based design to securely record AI decision-making processes, and (3) a simulation-informed performance assessment based on projected metrics (throughput, latency, and storage) to assess the approach’s feasibility. By providing a reliable, immutable audit trail for AI in IoT, our framework enhances transparency and trust in autonomous systems and offers a much-needed mechanism for auditable AI under increasing regulatory scrutiny. Full article
(This article belongs to the Special Issue Blockchain-Based Trusted IoT)
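A minimal sketch of the kind of decision record the framework logs: hashed inputs (for privacy), model ID, output, timestamp, and a device signature, chained by hash. A SHA-256 hash chain and an HMAC signature stand in here for the permissioned ledger and smart contracts described in the paper.

```python
# Hash-chained, signed AI decision records (stand-in for an on-chain audit trail).
import hashlib, hmac, json, time

def log_decision(chain: list, inputs: dict, model_id: str, output: str,
                 device_key: bytes) -> dict:
    record = {
        "inputs_hash": hashlib.sha256(json.dumps(inputs, sort_keys=True).encode()).hexdigest(),
        "model_id": model_id,
        "output": output,
        "timestamp": time.time(),
        "prev_hash": chain[-1]["record_hash"] if chain else "0" * 64,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(device_key, payload, hashlib.sha256).hexdigest()
    record["record_hash"] = hashlib.sha256(payload).hexdigest()
    chain.append(record)
    return record

ledger: list = []
log_decision(ledger, {"spo2": 91, "hr": 118}, "sepsis-alert-v2", "ALERT", b"device-secret")
print(ledger[-1]["record_hash"])
```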

27 pages, 569 KB  
Article
Construction Worker Activity Recognition Using Deep Residual Convolutional Network Based on Fused IMU Sensor Data in Internet-of-Things Environment
by Sakorn Mekruksavanich and Anuchit Jitpattanakul
IoT 2025, 6(3), 36; https://doi.org/10.3390/iot6030036 - 28 Jun 2025
Viewed by 486
Abstract
With the advent of Industry 4.0, sensor-based human activity recognition has become increasingly vital for improving worker safety, enhancing operational efficiency, and optimizing workflows in Internet-of-Things (IoT) environments. This study introduces a novel deep learning-based framework for construction worker activity recognition, employing a deep residual convolutional neural network (ResNet) architecture integrated with multi-sensor fusion techniques. The proposed system processes data from multiple inertial measurement unit sensors strategically positioned on workers’ bodies to identify and classify construction-related activities accurately. A comprehensive pre-processing pipeline is implemented, incorporating Butterworth filtering for noise suppression, data normalization, and an adaptive sliding window mechanism for temporal segmentation. Experimental validation is conducted using the publicly available VTT-ConIoT dataset, which includes recordings of 16 construction activities performed by 13 participants in a controlled laboratory setting. The results demonstrate that the ResNet-based sensor fusion approach outperforms traditional single-sensor models and other deep learning methods. The system achieves classification accuracies of 97.32% for binary discrimination between recommended and non-recommended activities, 97.14% for categorizing six core task types, and 98.68% for detailed classification across sixteen individual activities. Optimal performance is consistently obtained with a 4-second window size, balancing recognition accuracy with computational efficiency. Although the hand-mounted sensor proved to be the most effective as a standalone unit, multi-sensor configurations delivered significantly higher accuracy, particularly in complex classification tasks. The proposed approach demonstrates strong potential for real-world applications, offering robust performance across diverse working conditions while maintaining computational feasibility for IoT deployment. This work advances the field of innovative construction by presenting a practical solution for real-time worker activity monitoring, which can be seamlessly integrated into existing IoT infrastructures to promote workplace safety, streamline construction processes, and support data-driven management decisions. Full article
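A sketch of the pre-processing pipeline described above, assuming scipy: Butterworth low-pass filtering followed by fixed-length sliding-window segmentation. The sampling rate, cutoff, 4 s window, and 50% overlap are illustrative choices.

```python
# Butterworth filtering + sliding-window segmentation of a multi-channel IMU stream.
import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_imu(signal: np.ndarray, fs: float = 50.0, cutoff: float = 10.0,
                   window_s: float = 4.0, overlap: float = 0.5) -> np.ndarray:
    """signal: (samples, channels) IMU stream -> (windows, win_len, channels)."""
    b, a = butter(4, cutoff, btype="low", fs=fs)
    filtered = filtfilt(b, a, signal, axis=0)
    win = int(window_s * fs)
    step = int(win * (1 - overlap))
    windows = [filtered[i:i + win]
               for i in range(0, len(filtered) - win + 1, step)]
    return np.stack(windows)

imu = np.random.normal(size=(1000, 6))        # e.g. 3-axis accel + 3-axis gyro
print(preprocess_imu(imu).shape)              # (windows, 200, 6)
```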

24 pages, 9073 KB  
Article
Data-Bound Adaptive Federated Learning: FedAdaDB
by Fotios Zantalis and Grigorios Koulouras
IoT 2025, 6(3), 35; https://doi.org/10.3390/iot6030035 - 24 Jun 2025
Viewed by 570
Abstract
Federated Learning (FL) enables decentralized Machine Learning (ML), focusing on preserving data privacy, but faces a unique set of optimization challenges, such as dealing with non-IID data, communication overhead, and client drift. Adaptive optimizers like AdaGrad, Adam, and Adam variations have been applied in FL, showing good results in convergence speed and accuracy. However, it can be quite challenging to combine good convergence, model generalization, and stability in an FL setup. Data-bound adaptive methods like AdaDB have demonstrated promising results in centralized settings by incorporating dynamic, data-dependent bounds on Learning Rates (LRs). In this paper, FedAdaDB is introduced, which is an FL version of AdaDB aiming to address the aforementioned challenges. FedAdaDB uses the AdaDB optimizer on the server side to dynamically adjust LR bounds based on the aggregated client updates. Extensive experiments have been conducted comparing FedAdaDB with FedAvg and FedAdam on three different datasets (EMNIST, CIFAR100, and Shakespeare). The results show that FedAdaDB consistently offers better and more robust outcomes in terms of the measured final validation accuracy across all datasets, at the cost of a small delay in convergence speed at an early stage. Full article
(This article belongs to the Special Issue IoT Meets AI: Driving the Next Generation of Technology)
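A rough sketch of a server-side adaptive update whose per-coordinate step sizes are clipped to dynamic bounds, in the spirit of FedAdaDB; the Adam-style moments and the bound schedule below are illustrative assumptions, not the exact AdaDB rule.

```python
# Server-side adaptive update with bounded per-coordinate step sizes (illustrative).
import numpy as np

def server_update(weights, pseudo_grad, state, lr=0.01, b1=0.9, b2=0.99,
                  eps=1e-8, step=1):
    """weights: global model; pseudo_grad: aggregated client delta (FedOpt-style)."""
    state["m"] = b1 * state["m"] + (1 - b1) * pseudo_grad
    state["v"] = b2 * state["v"] + (1 - b2) * pseudo_grad ** 2
    raw = lr / (np.sqrt(state["v"]) + eps)    # Adam-style per-coordinate step size
    lo = 0.001 * (1 - 1 / (step + 1))         # round-dependent bounds (illustrative schedule)
    hi = 0.1 * (1 + 1 / step)
    return weights - np.clip(raw, lo, hi) * state["m"]

w = np.zeros(5)
state = {"m": np.zeros(5), "v": np.zeros(5)}
delta = np.random.normal(size=5)              # aggregated client update for this round
w = server_update(w, delta, state, step=1)
print(w)
```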

24 pages, 1446 KB  
Article
MQTT Broker Architectural Enhancements for High-Performance P2P Messaging: TBMQ Scalability and Reliability in Distributed IoT Systems
by Dmytro Shvaika, Andrii Shvaika and Volodymyr Artemchuk
IoT 2025, 6(3), 34; https://doi.org/10.3390/iot6030034 - 23 Jun 2025
Viewed by 922
Abstract
The Message Queuing Telemetry Transport (MQTT) protocol remains a key enabler for lightweight and low-latency messaging in Internet of Things (IoT) applications. However, traditional broker implementations often struggle with the demands of large-scale point-to-point (P2P) communication. This paper presents a performance and architectural evaluation of TBMQ, an open source MQTT broker designed to support reliable P2P messaging at scale. The broker employs Redis Cluster for session persistence and Apache Kafka for message routing. Additional optimizations include asynchronous Redis access via Lettuce and Lua-based atomic operations. Stepwise load testing was performed using Kubernetes-based deployments on Amazon EKS, progressively increasing message rates to 1 million messages per second (msg/s). The results demonstrate that TBMQ achieves linear scalability and stable latency as the load increases. It reaches an average throughput of 8900 msg/s per CPU core, while maintaining end-to-end delivery latency within two-digit millisecond bounds. These findings confirm that TBMQ’s architecture provides an effective foundation for reliable, high-throughput messaging in distributed IoT systems. Full article
(This article belongs to the Special Issue IoT and Distributed Computing)
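A back-of-envelope sizing example from the reported figures: at roughly 8900 msg/s per CPU core, a 1 million msg/s point-to-point workload implies on the order of 110+ broker cores before headroom. The target rate and headroom factor below are illustrative.

```python
# Core-count estimate from the reported per-core throughput.
target_rate = 1_000_000                       # msg/s, illustrative deployment target
per_core = 8_900                              # msg/s per core, as reported for TBMQ
headroom = 1.3                                # illustrative safety margin
cores = target_rate / per_core * headroom
print(f"approx. broker cores needed: {cores:.0f}")   # ~146 with 30% headroom
```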
