Article

A Blockchain-Based Framework for Secure Data Stream Dissemination in Federated IoT Environments

by Jakub Sychowiec * and Zbigniew Zieliński
Faculty of Cybernetics, Military University of Technology, 46, 00-908 Warsaw, Poland
* Author to whom correspondence should be addressed.
Electronics 2025, 14(10), 2067; https://doi.org/10.3390/electronics14102067
Submission received: 31 March 2025 / Revised: 8 May 2025 / Accepted: 15 May 2025 / Published: 20 May 2025
(This article belongs to the Special Issue Feature Papers in "Computer Science & Engineering", 2nd Edition)

Abstract
An industrial-scale increase in applications of the Internet of Things (IoT), a significant number of which are based on the concept of federation, presents unique security challenges due to their distributed nature and the need for secure communication between components from different administrative domains. A federation may be created for the duration of a mission, such as military operations or Humanitarian Assistance and Disaster Relief (HADR) operations. These missions often take place in very difficult or even hostile environments, which poses additional challenges for ensuring reliability and security. The heterogeneity of devices, protocols, and security requirements in different domains further complicates the secure distribution of data streams in federated IoT environments. Effective dissemination of data streams in federated environments also requires the flexibility to filter and search for patterns in real time to detect critical events or threats (e.g., fires and hostile objects) as the information needs of end users change. The paper presents a novel and practical framework for secure and reliable data stream dissemination in federated IoT environments, leveraging blockchain, Apache Kafka brokers, and microservices. To authenticate IoT devices and verify data streams, we integrated a hardware and software IoT gateway with the Hyperledger Fabric (HLF) blockchain platform, which records the distinguishing features of IoT devices (fingerprints). In this paper, we analyze our platform’s security, focusing on secure data distribution. We formally discuss potential attack vectors and ways to mitigate them through the platform’s design. We thoroughly assess the effectiveness of the proposed framework by conducting extensive performance tests in two setups: the Amazon Web Services (AWS) cloud-based and Raspberry Pi resource-constrained environments. Implementing our framework in the AWS cloud infrastructure demonstrated that it is suitable for processing audiovisual streams in environments that require immediate interoperability. The results are promising, as the average time it takes for a consumer to read a verified data stream is on the order of seconds. The measured time for complete processing of an audiovisual stream corresponds to approximately 25 frames per second (fps). The results obtained also confirmed the computational stability of our framework. Furthermore, we confirmed that our environment can be deployed on resource-constrained commercial off-the-shelf (COTS) platforms while maintaining low operational costs.

1. Introduction

We are observing an industrial-scale increase in the use of the Internet of Things (IoT) in both the civilian and military spheres, with a significant number of deployments spanning multiple domains. Multi-domain IoT environments such as intelligent transportation, smart power grids, resilient smart cities, advanced healthcare, and hybrid military operations increasingly adopt a federated concept. Establishing a federation aims to enable different entities to use shared resources and exchange (disseminate) information securely and efficiently without relying on a central authority, thus facilitating cooperation and increasing the resilience of the entire system.
Federated IoT environments are distributed, with their components and IoT devices located in different places and belonging to various entities. An illustrative example is the formation of a federation comprising NATO countries and non-NATO mission actors (Federated Mission Networking, FMN) [1]. In this model, each actor retains control over its capabilities and operations while accepting and meeting the requirements outlined in pre-negotiated and agreed-upon arrangements, such as the security policy. Another example involves the interaction of civilian services and military forces, which form a federation to provide humanitarian assistance during natural disasters (Humanitarian Assistance and Disaster Relief, HADR).
This collaboration necessitates a system architecture capable of processing data streams in real-time, filtering and searching for patterns to identify critical events (e.g., traffic congestion) or various threats (e.g., fire and hostile entities). Such requirements underscore the need for a system that can dynamically distribute data in response to the evolving information demands of end users (contextual dissemination). One approach to achieve this is by ranking information and services based on the value of information (VoI), a subjective measure that assesses the value of information to its users. Platforms like VOICE [2] utilize VoI to rank information objects (IOs) and data producers according to their relevance and usefulness.
The aforementioned system presents unique security and reliability challenges due to its distributed nature and the necessity for secure communication between components from different administrative domains. The heterogeneity of devices and protocols makes fulfilling secure data stream distribution requirements even more demanding. The basic problem remains: how do we carry out the acquisition and fusion of data from various sources with different levels of trust and operate in computing environments with varying degrees of reliability and security? To solve it, it is first necessary to answer the following sub-questions:
  • Security gap: How do we implement secure data distribution among participants in a federated environment, adhering to the data-centric security paradigm [3]? This involves ensuring robust data protection starting from the point of origin, maintaining integrity throughout its entire life cycle, and facilitating granular access control mechanisms to enforce strict data permissions.
  • Identity management gap: How do we manage the identity of devices? How do we identify devices?
  • Real-time data access gap: How do we enable the processing and sharing of relevant IOs based on their VoI, importance, and relevance within a specified time range?
  • Resource allocation gap: How do we dynamically allocate resources to effectively manage trade-offs in a data dissemination environment while taking customer key performance indicators (KPIs) into account?
  • Network integration and interoperability gap: How do we organize interconnections, especially between unclassified systems (civilian systems) and military systems?
  • Resilience and centralization gap: How do we ensure data availability in constrained (partially isolated) environments?
To close these gaps effectively, it is essential to integrate various concepts and technologies while considering the security requirements of federated IoT environments. This integration involves implementing a data-centric authentication mechanism that employs a unique identity (fingerprint) that can be reliably stored within a DLT. Additionally, it involves establishing a data stream processing system built on a lightweight and manageable pool of microservices, complemented by context-based real-time data dissemination technologies. Consequently, this paper presents a multi-layered framework architecture aimed at ensuring the secure and reliable dissemination of data streams within a multi-organizational federation environment. Our framework implements data-centric authentication based on the unique identities of IoT devices. We consider our main contributions to be as follows:
  • We have developed a novel and practical framework for the secure and dynamic dissemination of data streams within a multi-organizational federation environment, utilizing Hyperledger Fabric [4], Apache Kafka as the data queuing technology, and microservice processing logic to verify and disseminate data streams (implemented with the Kafka Streams API library in Java [5] and the Sarama library in Go [6]);
  • We have integrated a hardware-software IoT gateway with a DLT (Hyperledger Fabric) to authenticate IoT devices and verify data streams, which involves the deployment of the fingerprint enrichment layer in conjunction with the protocol forwarder (proxy) component;
  • We validated the effectiveness of the proposed framework by conducting extensive performance tests in two setups: the Amazon Web Services cloud-based environment and the Raspberry Pi resource-constrained environment.
The rationale for utilizing Kafka is grounded in its implementation of a publish–subscribe model and its adherence to a commercial off-the-shelf (COTS) approach. These are crucial for addressing the needs and constraints of federated IoT environments, where we emphasize interoperability between military and civilian systems. Moreover, we have adopted the Kafka Streams API library, a decision motivated by the inherent advantages of Kafka technology (the library leverages Kafka’s built-in mechanisms), including its fault tolerance and low operational costs associated with infrastructure deployment.
Furthermore, we have opted to integrate Hyperledger Fabric technology into our system, which requires that all participating parties be familiar with each other. This results in a permissioned blockchain that employs public key infrastructure. Our solution also features a single global instance of the distributed ledger, enabling the seamless transfer (mobility) of devices between organizations within the federation. This allows these devices to leverage another organization’s infrastructure for secure data dissemination. Additionally, private data channels can be established between organizations, ensuring that the identities of selected devices remain confidential from others.
By integrating these proposed solutions, multi-domain IoT environments can ensure secure and efficient distribution of data streams while effectively addressing the unique challenges associated with IoT devices and heterogeneous networks. This involves the implementation of end-to-end encryption to prevent unauthorized access during transmission, robust authentication measures for both devices and data, and utilizing blockchain technology to maintain data integrity and accountability. Additionally, achieving optimal performance and resource utilization can be facilitated through context-aware data dissemination, prioritizing information based on its significance and relevance. Furthermore, processing data streams locally at edge gateways can help minimize latency and enhance security.
The remainder of the article is structured as follows: Section 2 provides an overview of the relevant research that serves as the foundation for our solution. Section 3 details our multi-layered framework architecture, highlighting its key components and the security and reliability mechanisms employed to enhance confidentiality, integrity, availability, and accountability for data: in-process, in-transit, and at-rest. Section 4 outlines our experimental framework’s proposed message types and key operations. Section 5 presents a high-level security risk assessment considering several security and reliability threats. Section 6 introduces two configurations of our environment: one cloud-based and the other resource-constrained, along with benchmarks for latency metrics. Section 7 includes a thorough discussion of the results we obtained. Section 8 summarizes our conclusions and delineates our goals for future work. Lastly, the abbreviations used throughout this publication are defined at the end.

2. Related Work

This section presents related works that have had the greatest impact on the proposed framework architecture for secure and reliable data stream dissemination in federated IoT environments. These works address the basic problems related to the following:
  • Securing data processed by IoT devices with the usage of distributed ledger technology and blockchain mechanisms;
  • Behavior-based IoT device identification (IoT distinctive features);
  • The integration of heterogeneous military and civilian systems based on IoT devices, where specific KPIs must be achieved (e.g., zero-day interoperability).
Additionally, at the end of this section, we briefly discuss our solution against the analyzed works.

2.1. Blockchain-Based Device and Data Protection Mechanisms

The literature presents numerous attempts to integrate the IoT and blockchain (distributed ledger) technology. Ref. [7] describes the challenges and benefits of integrating blockchain with the IoT and its impact on the security of processed data. Similarly, in [8,9], a proposal for a four-tier structural model of blockchain and the IoT is presented.
Guo et al. [10] proposed a mechanism for authenticating IoT devices in different domains, where cooperating DLTs operating in the master–slave mode were used for data exchange. Xu et al. [11] presented the DIoTA framework based on a private Hyperledger Fabric blockchain, which was used to protect the authenticity of data processed by IoT devices.
Ref. [12] proposed an access control mechanism for devices, which used the Ethereum public blockchain placed in the Fog layer and public key infrastructure based on elliptic curves. Furthermore, NIST defined attribute-based access control (ABAC) [13] as a logical access control method that authorizes actions based on the attributes of both the publisher and subscriber, requested operations, and the specific context (current situational awareness).
H. Song et al. [14] proposed a blockchain-based ABAC system that employs smart contracts to manage dynamic policies, which are crucial in environments with frequently changing attributes, such as the IoT. Additionally, Lu Ye et al. [15] introduced an access control scheme that integrates blockchain with ciphertext-policy attribute-based encryption, featuring fine-grained attribute revocation specifically designed for cloud health systems.

2.2. Fingerprint Sampling Techniques

In addition to classification methods for identifying a group or type of similar IoT devices [16], an interesting area of research is fingerprint techniques [17,18], which aim to identify a unique image of a device’s identity through the appropriate selection of its distinctive features. The fundamental premise of fingerprint methods is the occurrence of manufacturing errors and configuration distinctions, which implies that no two devices are identical. Consequently, the main challenge associated with fingerprinting techniques is the selection of non-ephemeral parameters that make it possible to distinguish devices uniquely. Generally, three main classes of fingerprint methods for IoT devices can be distinguished:
  • Hardware/Software class: hardware and software features of the device;
  • Flow class: characteristics of generated network traffic;
  • RF class: characteristics of generated radio signals.
The authors of the LAAFFI framework [19] presented a protocol designed to authenticate devices in federated environments based on unique hardware and software parameters extracted from a given IoT device.
Concerning distinctive radio features, Sanogo et al. [20] evaluated the power spectral density parameter. Ref. [21] indicates a proposal to use neural networks to identify devices based on the physical unclonable function in combination with radio features: frequency offset, in-phase (I) and quadrature (Q) imbalance, and channel distortion.
Charyyev et al. [22] proposed the LSIF fingerprint technique, where the Nilsimsa hash function was used to determine a unique IoT device network flow. In contrast, [23] demonstrated the inter-arrival time (IAT) differences between successively received data packets as a unique identification parameter.

2.3. Reliable Data Stream Dissemination

In addressing one of the primary challenges of deploying various IoT devices within federated environments, which necessitates the processing of vast data streams in a secure, reliable, and context-dependent manner, we undertook a thorough analysis of relevant literature on this subject. Our emphasis was also on developing solutions to the sub-questions outlined in Section 1, particularly those concerning the gaps in real-time data access, network integration, interoperability, and security, all of which can be effectively tackled with a data-centric approach.
Notably, NATO countries continually refine their requirements (KPIs) to address the mentioned challenges. Moreover, they establish research groups dedicated to identifying optimal solutions for coalition data dissemination systems within the framework of Federated Mission Networking.
Jansen et al. [24] developed an experimental environment involving four organizations, where data are distributed across two configurations. The first configuration employs two types of MQTT brokers: Mosquitto and VerneMQ. In contrast, the second configuration operates without brokers, broadcasting MQTT streams using a connectionless protocol (e.g., the user datagram protocol, UDP).
Suri et al. [25] performed an analysis and performance evaluation of eight data exchange systems utilized in mobile tactical networks, revealing that the authors’ DisService protocol significantly outperforms alternatives such as Redis and RabbitMQ. Additionally, another study [26] suggests a data exchange system for IoT devices based on the MQTT protocol, incorporating elliptic curve cryptography for data security.
Furthermore, Yang et al. [27] introduced a system architecture designed for anonymized data exchange among participants, leveraging the Federation-as-a-Service cloud service model and built upon the Hyperledger Fabric.

2.4. Discussion

Although the previously mentioned publications provide solutions for data access management, data exchange systems, and device/data authentication methods, a notable gap exists for a data dissemination system tailored for dynamic and distributed environments, such as the federated IoT. Furthermore, there is a lack of systems that incorporate device authentication techniques based on unique fingerprints while ensuring data protection in accordance with the data-centric paradigm.
In our literature analysis, we have not identified a solution that effectively combines components of a distributed ledger (specifically, Hyperledger Fabric), data queue systems (like Apache Kafka), and stream processing microservices to address the identified gap in data dissemination.
Most of the reviewed publications rely on a trusted third-party infrastructure and a private DLT to enhance the security of processed data. In contrast to the approaches taken by Guo et al. (master–slave chain) [10] and Xu et al. (DIoTA framework) [11], our solution utilizes a single global instance of the distributed ledger. This allows for the seamless transfer of devices between organizations within the federation, enabling these devices to use another organization’s infrastructure for secure data dissemination.
Refs. [24,25] do not assess data queue technologies like Apache Kafka. Moreover, they concentrate solely on efficient data exchange and overlook the security of data streams. In our proposed solution, it is feasible to implement attribute-based access control while concurrently adhering to the data-centric paradigm [3].
Furthermore, we have prioritized interoperability between military and civilian systems, particularly considering the limitations of such environments. To this end, our proposed system incorporates recommendations from the NATO IST-150 working group [24], which examined disconnected, intermittent, and limited (DIL) tactical networks. Our system employs a publish–subscribe model and utilizes commercial off-the-shelf (COTS) components that are widely available, thereby minimizing operational costs, which is crucial for ensuring immediate interoperability.
Additionally, our work distinguishes the key used for securing the communication channel of IoT devices from the key used for data authenticity protection. Unlike the DIoTA framework, which employs an HMAC-based commitment scheme with randomly generated keys for message authentication, we propose using the unique fingerprint samples of the device. Specifically, we propose to utilize a sealing key as a hybrid identity image based on a combination of several fingerprint method classes.
Finally, our framework (specifically, stream microservices) can be enhanced with components that analyze, classify, and share data streams based on the specific VoI that IOs provide to consumers (context), as well as their relevance [2].

3. Framework Design

This section outlines our multilayered framework architecture for the reliable dissemination of data streams (messages) within a multi-organizational federation environment that requires the data-centric security approach. Figure 1 depicts the overall structure of a federation comprising two organizations: Org1 and Org2. We proposed the adoption of specific technologies to address the various layers of our system. In the data queue layer, we deployed the Apache Kafka stream-processing platform. For the distributed ledger layer, we selected the Hyperledger Fabric solution [4]. Finally, for the streams microservice layer, we implemented the same stream processing logic twice, utilizing the Kafka Streams API library in Java [5] and the Sarama library in Go [6].
Apache Kafka brokers ingest, merge, and replicate data streams generated by publishers (producers) and provide access to these data for subscribers (consumers). Simultaneously, the proposed system facilitates the verification of data streams based on device fingerprints (identities), which are redundantly stored in a distributed Hyperledger Fabric blockchain. Additionally, a hardware–software IoT gateway has been introduced to support the verification process, enabling stream processing microservices to communicate with the distributed ledger. This approach allows devices to leverage various organizational data queue layers, ensuring secure and reliable data dissemination while adhering to predefined policies, such as one-to-many relationships. As part of our pluggable architecture, we have identified the following layers:
  • Publisher Layer: This encompasses entities that facilitate the data-centric protection of generated (produced) data streams through the sealing process, where the sealing key is derived from the device’s fingerprint (identity) defined during the registration operation managed by the device maintain layer;
  • Subscriber Layer: This is made up of authenticated and authorized entities that read (consume) available and verified data streams from the data queue layer according to the access policy (e.g., one to many);
  • Data Queue Layer: This is composed of distributed data queues that facilitate the acquisition, merging, storage, and replication of data streams transmitted from the publisher layer, subsequently making them accessible to the subscriber layer;
  • Fingerprint Enrichment Layer: This can transport messages from connectionless protocols like UDP by employing a protocol forwarder that converts data streams into a connection-oriented format (e.g., the transmission control protocol, TCP). This layer is essential due to the constraints of the connection types supported by the technologies available for the data queue layer. Moreover, it facilitates device behavior-based authentication by utilizing analyzers to gather various features, including network and radio characteristics, and by enriching messages with entity fingerprint samples;
  • Distributed Ledger Layer: This enables the interchangeable deployment of various DLTs. Furthermore, it reliably stores the identities of devices belonging to organizations within the federation and retains information about entity data quality and subscriber data context dissemination [28];
  • Communication Layer: This enables the streams microservice and the device maintain layer to communicate with the distributed ledger via a hardware–software IoT gateway;
  • Streams Microservice Layer: This is tasked with verifying sealed data streams originating from entities within the publisher layer. Additionally, it has the capability to analyze, categorize, and disseminate streams pertinent to these entities, and enrich them (e.g., by detecting objects during image processing);
  • Device Maintain Layer: This manages the device registration operation, with additional responsibilities including updating and revoking identities. The registration process is initiated with the establishment of the entity’s identity through the use of fingerprint methods.
Figure 2 illustrates our detailed architecture, where entities representing the publisher’s layer ensure the security of their data streams by sealing them with identities registered within the device maintain layer. These sealed streams are transmitted using the available (supported) communication protocols and mediums to the data queue layer (Kafka Cluster). In this layer, the streams are stored under a specific topic, such as topic_1-in. Additionally, the fingerprint enrichment layer participates in (proxies) the transmission, transforming connectionless messages into a connection-oriented message format.
The subsequent step occurs in the streams microservice layer, which sequentially retrieves messages from the brokers to verify the sealed messages. The verify streams microservice queries the distributed ledger layer (Hyperledger Fabric) through the communication layer (IoT gateway) to obtain an image of the device’s identity. Next, the identity extracted from the message is compared with the one stored in the ledger.
Once a message is verified and approved, it is written back to the Kafka Cluster under a dedicated topic, such as topic_1-out, making it accessible to entities representing the subscriber’s layer. Optionally, any rejected messages during this process can be directed to a separate topic to aid in identifying and detecting potentially malicious devices.
As previously mentioned, a device identity image is critical to the verification process. The device maintain layer encompasses a registration operation through the key generation center, during which the device identity is established using hybrid fingerprint techniques. These techniques determine the specific hardware and software features, generated network characteristics, and radio signals related to the device. Once the identity is defined, it is redundantly added (stored) in the distributed ledger layer through the execution of chaincode (a group of smart contracts). Successful registration is contingent upon achieving consensus among the participating federated organizations.
Moreover, within our framework, we have delineated two categories of data stores: the on-chain store and the off-chain store. The on-chain store refers to the ledger, while the off-chain store encompasses local storage utilized by applications and microservices, such as those leveraging the Kafka Streams API library. One of our key proposals is to employ the off-chain store as a micro-caching mechanism to store identities from the on-chain store temporarily. This mechanism can significantly mitigate the delays associated with ledger queries and improve the overall performance of an individual instance of the verify streams microservice.
To realize this, we proposed a minor extension to our environment. In the publisher’s layer, a dedicated listening application, known as the blockchain event listener, can be utilized to manage events emitted from the distributed ledger layer. These events will pertain to ledger operations, including device registration, updates, and revocations. Furthermore, a dedicated topic, such as dev_id, can be established specifically for storing these events. Subsequently, the identity streams microservice can be deployed to add or update identities in the off-chain store. Additionally, by enforcing appropriate retention policies for this designated topic, we can preserve historical event-based records of device identity changes, functioning as a blockchain topic.
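For illustration, the following minimal Java sketch shows how such a dedicated topic could be created with an unbounded retention policy using Kafka’s Admin API. The topic name dev_id follows the example above; the broker address, partition count, and replication factor are illustrative assumptions rather than our deployed configuration.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.List;
import java.util.Map;
import java.util.Properties;

// Creates the dedicated identity-event topic with unbounded retention so that the
// historical record of identity changes is preserved (the "blockchain topic").
public final class IdentityTopicSetup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-1:9092"); // illustrative address

        try (Admin admin = Admin.create(props)) {
            NewTopic devId = new NewTopic("dev_id", 3, (short) 3) // partitions/replicas assumed
                    .configs(Map.of("retention.ms", "-1"));       // never delete identity events
            admin.createTopics(List.of(devId)).all().get();
        }
    }
}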
Alternatively, the off-chain store can be integrated with the continuum computing concept (CC) [2], which involves providing computing capabilities across the diverse layers of an IoT system (edge, fog, and cloud). The fundamental notion of this concept is to relocate high-performance cloud-based services to lower layers. While this escalates resource management’s complexity, it facilitates deploying services that demand ubiquitous and efficient computing capabilities. The concept also aims to optimize resource allocation, ensuring that devices perform service tasks as close as possible to data sources.
In the realm of computing modeling, our system is designed to process incoming requests sequentially to minimize the decay of information objects. Our architecture facilitates the dynamic deployment of microservices at all layers of the IoT system, offering both horizontal and vertical scaling for resource allocation. It also accommodates various deployment contexts.
The upcoming headings will provide an in-depth overview of the mentioned layers, along with mechanisms that enhance confidentiality, integrity, availability, and accountability for data: in-process, in-transit, and at-rest. Furthermore, a thorough analysis of the built-in security and reliability features associated with each specified technology will be provided.

3.1. Publisher’s Layer

The publisher layer encompasses entities that facilitate the data-centric protection of generated data streams through symmetric encryption and sealing operations. This leverages digital signatures and the device’s hybrid identity. Although the selection of encryption and digital signature algorithms is not the primary focus of this study, it invites consideration of the potential applications of lightweight and quantum-resilient cryptography. Among the quantum-resistant digital signature algorithms selected by NIST in 2022 [29] are FALCON and SPHINCS+, which utilize distinct mathematical approaches: lattice- and hash-based systems, respectively. In our research, however, we opted for well-established algorithms such as AES for encryption and HMAC for signatures. Devices with limited resources, like the Raspberry Pi, have adequate computational power to perform the cryptographic operations required by these algorithms.
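For illustration, the following minimal Java sketch shows a producer-side sealing operation of the kind described above, combining AES-GCM payload encryption with an HMAC-SHA256 seal over the ciphertext. It is a simplified sketch, not our exact implementation: the class and parameter names are illustrative, and the seal key is assumed to be derived from the device fingerprint as described in Section 4.

import javax.crypto.Cipher;
import javax.crypto.Mac;
import javax.crypto.spec.GCMParameterSpec;
import javax.crypto.spec.SecretKeySpec;
import java.security.SecureRandom;

// Illustrative producer-side sealing sketch (not the exact framework API).
public final class StreamSealer {

    // Encrypts the payload with AES-GCM, then seals the ciphertext with an
    // HMAC-SHA256 signature keyed by the fingerprint-derived session seal key.
    public static byte[][] sealPayload(byte[] payload, byte[] aesKey, byte[] sealKey)
            throws Exception {
        byte[] iv = new byte[12];
        new SecureRandom().nextBytes(iv);

        Cipher cipher = Cipher.getInstance("AES/GCM/NoPadding");
        cipher.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(aesKey, "AES"),
                new GCMParameterSpec(128, iv));
        byte[] ciphertext = cipher.doFinal(payload);

        Mac hmac = Mac.getInstance("HmacSHA256");
        hmac.init(new SecretKeySpec(sealKey, "HmacSHA256"));
        hmac.update(iv);
        byte[] signature = hmac.doFinal(ciphertext);

        // IV, ciphertext, and seal signature travel together in the message.
        return new byte[][] { iv, ciphertext, signature };
    }
}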

3.2. Subscriber’s Layer

The subscriber layer comprises entities that have been granted authorized access to the data queue layer. Our framework’s architecture is designed to ensure the reliable dissemination of data across all levels of command: tactical, operational, and strategic. Additionally, the framework enables seamless data sharing across different domains within a federation, which includes allied forces and civilian institutions, fostering collaboration and coordinated efforts toward common objectives. Moreover, our system adeptly manages data security along the path from producer to consumer, while also implementing fine-grained access control mechanisms.

3.3. Data Queue Layer

The data queue layer holds a pivotal position within our system, performing a multitude of important functions:
  • Storing and replicating sealed data streams within the layer;
  • Storing invalid records to trigger a detection mechanism of potentially malicious entities;
  • Intermediating within the micro-caching mechanism by linking the publisher layer (blockchain event listeners) with the streams microservice layer.
The publisher’s layer comprises numerous data sources, each generating real-time data streams that require processing. To facilitate the seamless processing of these messages, the Apache Kafka solution has been used. For example, Kul et al. [30] introduced a framework that leverages Kafka and neural networks to monitor (track) vehicles. In their study, the dataset was represented as data streams captured by CCTV.
Apache Kafka is a stream-message system that utilizes a producer–broker–consumer (publish–subscribe) model and classifies messages based on their topics. The Kafka Cluster is composed of message brokers that acquire, merge, and store data generated from the publisher layer (producers), and make it available to the subscriber layer (consumers). The layer is designed for the availability and reliability of data records, thanks to built-in synchronization and distributed data replication between brokers. Furthermore, it leverages serialization and compression mechanisms, such as lz4 and gzip, making messages payload format- and protocol-independent.
Furthermore, Kafka technology incorporates built-in components and mechanisms for defining and managing entity authorization through access control lists (ACLs). Essentially, ACLs outline which entities can access specific resources (topics) and delineate the operations (e.g., READ and WRITE) they are allowed to perform on those resources. Establishing a distinct principal for each entity (device) and assigning only the necessary ACLs enables debugging and auditing, leading to the identification of which entity executes each operation. An example of creating such an ACL programmatically is sketched below.
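The following Java sketch illustrates how a per-principal ACL could be created with Kafka’s Admin API, granting a single device principal WRITE access to its input topic only. The broker address, principal, and topic name are illustrative assumptions.

import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.acl.*;
import org.apache.kafka.common.resource.PatternType;
import org.apache.kafka.common.resource.ResourcePattern;
import org.apache.kafka.common.resource.ResourceType;

import java.util.List;
import java.util.Properties;

// Grants a device principal WRITE on its input topic only (least privilege).
public final class AclSetup {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-1:9092"); // illustrative address

        try (Admin admin = Admin.create(props)) {
            AclBinding producerAcl = new AclBinding(
                    new ResourcePattern(ResourceType.TOPIC, "topic_1-in", PatternType.LITERAL),
                    new AccessControlEntry("User:ORG1-SENSOR-0001", "*",
                            AclOperation.WRITE, AclPermissionType.ALLOW));
            admin.createAcls(List.of(producerAcl)).all().get();
        }
    }
}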
However, large Kafka Cluster topologies that involve multiple publishing and subscribing entities (numerous topics) often encounter significant challenges in managing entity authorization. Although it is feasible to implement a more intricate authorization hierarchy within the cluster, this can imply an additional operational burden. In a previous article [3], we explored the application of ABAC in our environment. This solution can alleviate the authorization burden on the Kafka Cluster, thereby freeing its internal resources and allowing the streams microservice layer to manage access control more efficiently.

3.4. Fingerprint Enrichment Layer

A notable limitation of Kafka technology is its dependence on connection-oriented protocols, specifically TCP. In contrast, our framework requires the ability to communicate with entities utilizing connectionless protocols, such as UDP. To address this challenge, we propose the introduction of a fingerprint enrichment layer. A fundamental component of this layer is a protocol forwarder, which is designed to handle connectionless messages and convert them into a connection-oriented message format.
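The following minimal Java sketch illustrates the idea behind the protocol forwarder: it receives connectionless UDP datagrams and republishes them to a Kafka topic over the broker’s connection-oriented (TCP) producer protocol. The socket port, broker address, and topic name are illustrative assumptions; the actual forwarder additionally participates in fingerprint enrichment, as described next.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.util.Arrays;
import java.util.Properties;

// Converts connectionless UDP datagrams into connection-oriented Kafka records.
public final class ProtocolForwarder {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-1:9092"); // illustrative address
        props.put("key.serializer", ByteArraySerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());

        try (DatagramSocket socket = new DatagramSocket(5000); // illustrative port
             KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props)) {
            byte[] buffer = new byte[65535];
            while (true) {
                DatagramPacket packet = new DatagramPacket(buffer, buffer.length);
                socket.receive(packet); // blocking read of one connectionless message
                byte[] value = Arrays.copyOf(packet.getData(), packet.getLength());
                // Fingerprint samples gathered by the analyzers could be attached
                // here as Kafka record headers before forwarding.
                producer.send(new ProducerRecord<>("topic_1-in", value));
            }
        }
    }
}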
Furthermore, the mentioned layer can contribute to device behavior-based (fingerprint-compliance) authentication. This can be accomplished through the utilization of dedicated components, referred to as analyzers, which gather radio signals, network flows, and hardware features. For the radio fingerprinting analyzer, we propose employing a software-defined radio. This system replaces or supplements traditional hardware components, such as mixers, filters, amplifiers, and detectors, with software-based digital signal processing techniques, providing flexibility, cost-effectiveness, versatile wideband reception, and enhanced interoperability within our architecture and radio communication systems. For the network analyzer, we can capture distinctive features from the headers of TCP/IP layers through comprehensive packet inspection. A detailed description of the proposed analyzer’s subsystem is beyond the scope of this publication and will be the subject of our future research.

3.5. Distributed Ledger Layer

Our experimental framework features a pluggable structure that allows for the interchangeable deployment of different DLTs. The distributed ledger layer performs several functions:
  • Redundant and reliable storing of all identities of devices belonging to organizations that participate in the federation;
  • Redundant and reliable storing of information about entities regarding the value of information and context dissemination;
  • Secure handling of the chaincode (smart contracts) execution (transaction steps) during the device registration operation;
  • Obtaining approvals (transaction authorizations) under endorsement policy from participating organizations;
  • Generating events related to actions on the distributed ledger (blockchain);
  • Being an integrated part of the verification process of devices and sealed messages.
Based on a performance comparison, we have opted to integrate the Hyperledger Fabric technology into our system. This particular technology has achieved a transaction throughput of 10,000 tps, as documented in [7]. It is noteworthy that the Ethereum ledger had a lower throughput; however, only Ethereum’s proof-of-work consensus protocol was examined, while the less energy-intensive and more scalable proof-of-stake consensus was not covered in these experiments.
The Fabric DLT adopts the practical Byzantine fault tolerance consensus algorithm, which mandates that all participating parties know of one another. As a consequence, it is a permissioned blockchain where public key infrastructure is deployed. To register the identity of devices, complex business logic is executed using multilingual chaincode (Go, Java, and Node.js). Moreover, the target execution environment for chaincode is a Docker container, and its resources can be controlled (e.g., limited and isolated) through a Linux kernel feature, cgroups. Also, private data channels can be created between organizations, where the identity of selected devices can be hidden from other organizations.
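For illustration, the following sketch outlines what identity-registration chaincode could look like using the fabric-chaincode-java contract API. The contract name, transaction names, and state model are illustrative assumptions, not a verbatim listing of our chaincode.

import org.hyperledger.fabric.contract.Context;
import org.hyperledger.fabric.contract.ContractInterface;
import org.hyperledger.fabric.contract.annotation.Contract;
import org.hyperledger.fabric.contract.annotation.Transaction;
import org.hyperledger.fabric.shim.ChaincodeException;

import java.nio.charset.StandardCharsets;

// Illustrative identity-registration chaincode sketch (fabric-chaincode-java).
@Contract(name = "DeviceIdentityContract")
public final class DeviceIdentityContract implements ContractInterface {

    // Records the serialized seal store under the device GUID in the world state.
    @Transaction(intent = Transaction.TYPE.SUBMIT)
    public void registerIdentity(Context ctx, String guid, String sealStoreJson) {
        String existing = ctx.getStub().getStringState(guid);
        if (existing != null && !existing.isEmpty()) {
            throw new ChaincodeException("Identity already registered: " + guid);
        }
        ctx.getStub().putStringState(guid, sealStoreJson);
        // Emit a ledger event for the blockchain event listeners (Section 4.3).
        ctx.getStub().setEvent("IdentityRegistered",
                sealStoreJson.getBytes(StandardCharsets.UTF_8));
    }

    // Read-only lookup used by the verification path.
    @Transaction(intent = Transaction.TYPE.EVALUATE)
    public String readIdentity(Context ctx, String guid) {
        return ctx.getStub().getStringState(guid);
    }
}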
The on-chain store used in Fabric technology consists of systematically organized structures: world state and transaction log. The world state serves as the ledger’s current state database, while the transaction log acts as a change data capture mechanism. This mechanism incrementally records both approved and rejected transactions, ensuring that data at rest are secure and accountable.

3.6. Communication Layer

The hardware–software IoT gateway manages communication between the device maintain layer, the streams microservice layer, and the distributed ledger layer via an interface to the Hyperledger Fabric Gateway services [4] that run on the ledger nodes. The communication layer facilitates seamless communication exchange for the following actions:
  • Performing queries to the on-chain store to read the examined identity from the distributed ledger layer during the verification process;
  • Participating in the entity identity registration, update, and revocation operations called by the device maintain layer;
  • Broadcasting of events generated by the distributed ledger layer as a result of approved transactions and blocks.
The IoT gateway utilizes a dynamic connection profile with the distributed ledger layer. This profile leverages the internal capabilities of ledger nodes to detect alterations in the network topology in real-time, thus ensuring the seamless functioning of the streams microservices layer, even in the event of node failure. Furthermore, the checkpointing mechanism proves to be beneficial, as it enables uninterrupted monitoring of ledger events without the risk of data loss due to connection interruptions.
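For illustration, the following Java fragment sketches the read-only ledger query path using the Fabric Gateway client API, in which an evaluate (query) transaction avoids the endorsement round-trip required by submit transactions. The channel, chaincode, and transaction names are illustrative assumptions; identity, signer, and gRPC connection setup are omitted for brevity.

import org.hyperledger.fabric.client.Contract;
import org.hyperledger.fabric.client.Gateway;
import org.hyperledger.fabric.client.Network;

// Sketch of the IoT gateway's ledger query path (Fabric Gateway client API).
public final class LedgerQuery {
    public static byte[] queryIdentity(Gateway gateway, String guid) throws Exception {
        Network network = gateway.getNetwork("identity-channel");   // assumed channel name
        Contract contract = network.getContract("device-identity"); // assumed chaincode name
        // Evaluate (read-only) transaction: no endorsement round-trip is needed.
        return contract.evaluateTransaction("readIdentity", guid);
    }
}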

3.7. Streams Microservice Layer

The streams microservice layer is responsible for verifying sealed messages originating from entities within the publisher’s layer. This layer is notable for executing complex operations on individual messages (records) in a sequential manner. To enhance the efficiency of message (stream) processing, selecting an appropriate framework or library is essential. Evaluations conducted by Karimov et al. [31] and Poel et al. [32] assessed various solutions designed for this purpose. Both studies concluded that the Apache Flink framework outperformed alternatives such as Kafka Streams API, Spark Streaming, and Structured Streaming, earning the highest overall ranking.
However, our proposed system architecture leverages the Streams API library to define custom verification processing logic. This choice is based on the inherent advantages of Kafka technology, including its failover and fault tolerance features. The Streams API library offers two approaches for implementing processing logic: the high-level Streams DSL and the low-level Processor API. We chose the Processor API for its pluggable architecture, which facilitates the deployment of various types of local off-chain stores. Moreover, the library employs a semantic guarantee pattern that ensures each message is processed exactly once from start to finish, thereby preventing any loss or duplication of messages in the event of a stream processor failure.
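The following condensed Java sketch illustrates this design: a Processor API topology with a persistent local state store (the identity micro-cache of Section 3) and the exactly-once processing guarantee enabled. The verification step is reduced to a placeholder, and all topic, store, and configuration values are illustrative assumptions.

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.Topology;
import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;
import org.apache.kafka.streams.state.KeyValueStore;
import org.apache.kafka.streams.state.Stores;

import java.util.Properties;

// Sketch of the verify streams microservice built on the low-level Processor API.
public final class VerifyStreamsApp {

    static final class VerifyProcessor implements Processor<byte[], byte[], byte[], byte[]> {
        private ProcessorContext<byte[], byte[]> context;
        private KeyValueStore<byte[], byte[]> identityCache;

        @Override
        public void init(ProcessorContext<byte[], byte[]> context) {
            this.context = context;
            this.identityCache = context.getStateStore("identity-cache");
        }

        @Override
        public void process(Record<byte[], byte[]> record) {
            // Micro-cache lookup by GUID; on a miss, the ledger would be
            // queried via the IoT gateway (Step 11a in Section 4.4).
            byte[] cachedSeals =
                    record.key() == null ? null : identityCache.get(record.key());
            boolean verified = cachedSeals != null; // placeholder for signature check
            if (verified) {
                context.forward(record); // routed to the "-out" sink below
            }
        }
    }

    public static void main(String[] args) {
        Topology topology = new Topology();
        topology.addSource("source", "topic_1-in")
                .addProcessor("verify", VerifyProcessor::new, "source")
                .addStateStore(Stores.keyValueStoreBuilder(
                        Stores.persistentKeyValueStore("identity-cache"),
                        Serdes.ByteArray(), Serdes.ByteArray()), "verify")
                .addSink("sink", "topic_1-out", "verify");

        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "verify-streams");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "broker-1:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.ByteArraySerde.class);
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.ByteArraySerde.class);
        // Exactly-once processing guarantee mentioned above.
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_V2);
        new KafkaStreams(topology, props).start();
    }
}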
We opted against using the Spark Streaming and Structured Streaming frameworks, as they rely on micro-batching, which processes messages within fixed time windows. Similarly, we did not consider the Apache Flink framework because it requires a separate processing cluster, which would increase operational costs for infrastructure deployment and negatively impact interoperability.
It is essential to highlight that our experimental system can further leverage the Kafka Streams API to support a range of tasks, including data enrichment, data quality assessment, and data context dissemination. In particular, this capability can be applied to object detection and classification during image processing. Figure 3 showcases the integration of our system for the mentioned use case.

3.8. Device Maintain Layer

The primary function of the device maintain layer is to manage device registration operations, with additional responsibilities including updating and revoking device identities. The registration process begins with establishing the entity’s identity through hybrid fingerprint techniques. Entities of the publisher’s layer will use these identities during the sealing process, where the device identity image, acting as a key, will be used with digital signatures to seal data streams sent to the data queue layer.
In the process of defining entity identity, we advocate for the use of a confidential computing strategy [33] that incorporates the defense-in-depth and hardware root of trust concepts. This strategy involves the implementation of multiple heterogeneous security layers (countermeasures) that are built on highly reliable hardware, firmware, and software components. These countermeasures are essential for executing critical security functions, such as session key generation and the secure storage of cryptographic materials, ensuring that any adverse operations not detected by one technology can still be identified and mitigated by another.
To safeguard data across their various states, whether in process, in transit, or at rest, secure enclaves, including trusted execution environments (TEEs), hardware security modules (HSMs), or trusted platform modules (TPMs), can be employed. TEEs provide a secure area within the processor, whereas HSMs are specialized hardware created specifically for key storage. Meanwhile, TPMs are hardware chips that offer a range of security functions, including secure key storage and platform (entity) integrity checks.
The specific procedures for key management fall outside the scope of this article. Instead, we propose a general procedure for defining the entity key (identity). During the registration operation, the device administrator places the device in an RF-shielded chamber to minimize potential interference that could impact the radio waves emitted by the device. Using specialized software and measurement equipment, the distinct characteristics of the device undergo a series of tests to establish a unique identity profile. In this study, we proposed a hybrid approach that combines several fingerprinting methods, primarily based on the parameters of the generated radio signals. The rationale for this selection includes the following:
  • Limitations arising from the heterogeneity of the environment and the need to maintain the mobility of IoT devices;
  • Devices’ vulnerability to extreme environmental factors (e.g., temperature and humidity);
  • Autonomy from the protocols used in the network.
The complete entity management process is conducted through the communication layer. Once the identity is established, it can be securely stored using the designated storage solutions on the device. Furthermore, the identity will also be recorded in the distributed ledger layer and may optionally be retained in the off-chain stores of the streams microservice layer.

4. Framework Basic Operations

This section delineates the main operations of our experimental framework, carefully examining the interrelationships among system layers and the flow of messages. Notably, we have integrated certain elements conceptualized by Jarosz et al. concerning the novel LAAFFI protocol [19].

4.1. Security Mechanisms and Message Types

As outlined in Section 3.4, the fingerprint enrichment layer utilizes the protocol forwarder component, which is specifically designed to manage connectionless streams and convert them into a connection-oriented format. This process employs an ETL (extract, transform, and load) mechanism, where a series of functions is applied to the extracted data, allowing them to be transformed into a standardized format. Moreover, the producer, utilizing a data serialization mechanism, is solely responsible for determining how to convert the data from a specific protocol (such as MQTT) into a byte representation. In contrast, the consumer defines how to interpret the byte string received from the broker through the deserialization process.
Within our environment, we proposed two primary types of messages that can be assigned to a single data stream. The first type is the data stream authentication message, referred to as AUTH_MSG (Figure 4). The second type is the session-related data stream message, known as DATA_MSG (Figure 5). The specific message fields are described below:
  • Globally Unique Identifier, $GUID$: This is assigned to the entity during the registration process in the device maintain layer. It functions as a unique identifier, ensuring that each device can be distinctly recognized within a set of registered devices. For example, it may be represented as a human-readable combination of the federated organization name, type, and number, such as ORG1-SENSOR-0001;
  • Session Nonce, $N_s$: This is a unique pseudo-random value generated by the entity that identifies a specific data stream, thus facilitating the correlation between the AUTH_MSG and the DATA_MSG;
  • Seals Indices, $LS_{ids} = \{x_{i_1}, \ldots, x_{i_j}\}$: These consist of a subset of indices for the seals selected from the secure seal store, $Sl_{store} = \{Sl_1, Sl_2, \ldots, Sl_k\}$, where the seal $Sl_x = H(FID_x \oplus HSMV_k)$. For each specific seal, we proposed to use a hash, $H(\cdot)$, of the exclusive OR (XOR, $\oplus$) of the entity fingerprint sample, $FID_x$, recorded in the entity features store, $EF_{store} = \{FID_1, FID_2, \ldots, FID_n\}$, combined with internal parameters from the hardware security module, $HSMV_k$;
  • Timestamp, $T$: This is used as a protection mechanism against replay attacks. When the Kafka topic is configured to use CreateTime for timestamps, the timestamp of a record will be the time that the producer sets when the record (message) is created;
  • Message Signatures, $S_{AUTH}$, $S_{DATA}$: These are utilized to guarantee both the integrity and authenticity of the data streams. These signatures are compared with the signature values calculated during the processing of messages within the streams microservice layer. The seal, $Sl_x$, or subset of seals, $SSl_{store} \subseteq Sl_{store}$, and the session nonce work in conjunction with a key-derivation function to generate the key for the signature function, $Seal_{sk} = Keygen(SSl_{store}, N_s)$ (see the sketch after Figure 5). The use of signatures fits naturally into the architecture of our system because of the Kafka message format autonomy (independence).
Figure 4. AUTH_MSG structure.
Figure 5. DATA_MSG structure.
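To make the notation above concrete, the following Java sketch shows one possible realization of seal generation and session key derivation. SHA-256 stands in for $H(\cdot)$ and an HMAC-based construction for $Keygen(\cdot)$; the framework does not mandate these particular primitives, so the sketch should be read as an assumption-laden example.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.io.ByteArrayOutputStream;
import java.security.MessageDigest;
import java.util.List;

// Worked sketch of Sl_x = H(FID_x XOR HSMV_k) and Seal_sk = Keygen(SSl_store, N_s).
public final class SealKeyDerivation {

    // Sl_x: hash of the fingerprint sample XORed with HSM-internal parameters.
    public static byte[] seal(byte[] fingerprintSample, byte[] hsmParams) throws Exception {
        byte[] xored = new byte[fingerprintSample.length];
        for (int i = 0; i < xored.length; i++) {
            xored[i] = (byte) (fingerprintSample[i] ^ hsmParams[i % hsmParams.length]);
        }
        return MessageDigest.getInstance("SHA-256").digest(xored);
    }

    // Seal_sk: derives the session seal key from the selected seal subset and nonce N_s.
    public static byte[] keygen(List<byte[]> selectedSeals, byte[] sessionNonce)
            throws Exception {
        ByteArrayOutputStream concat = new ByteArrayOutputStream();
        for (byte[] s : selectedSeals) {
            concat.write(s);
        }
        Mac kdf = Mac.getInstance("HmacSHA256");
        kdf.init(new SecretKeySpec(concat.toByteArray(), "HmacSHA256"));
        return kdf.doFinal(sessionNonce); // key material for Sign(), e.g., HMAC signatures
    }
}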
The message structures of AUTH_MSG and DATA_MSG that the Kafka Broker can handle (store and queue) are illustrated in Figure 6 and Figure 7. The Kafka_Key and Kafka_Value consist of sequences of bytes that form the message. The Kafka_Key plays a crucial role in directing a message to a specific partition within the Kafka topic. When a key is provided, all messages associated with that key are directed to the same partition, ensuring they are processed sequentially. The Kafka_Value contains the data that consumers will read and process. Additionally, we have included optional header fields related to the implementation of specialized feature (behavior-based) analyzers deployed within the fingerprint enrichment layer (see Figure 6).
We acknowledge the critical need to safeguard data throughout their various states, whether in process, in transit, or at rest. In our deployment, we have employed SSL/TLS communication solely between the streams microservice layer and the distributed ledger layer. Our framework follows a data-centric approach, which facilitates the encryption of data stream payloads using AES, along with data authentication through device fingerprints. This approach allows us to bypass the need for communication protection within the data queue layer, particularly among producers, subscribers, and brokers. This decision reduces additional overhead affecting microservice verification latency.
In conjunction with the Kafka serialization and deserialization mechanism, the aforementioned approach facilitates independent data exchange between producers and subscribers. It is essential to highlight that a microservice is unable to access the data payload, as it may be encrypted using a key that has been exchanged in advance between producers and subscribers. Nonetheless, this limitation does not impact the verification process of the data streams. A detailed discussion of this topic was provided in our previous work [3].
Additionally, to safeguard the $EF_{store}$ and $Sl_{store}$ during transit, we propose utilizing an encryption mechanism that employs one-time pre-shared keys, which will be securely maintained solely within the entity (e.g., event listener) and the microservice instance. To protect these stores in an off-chain environment (at-rest state), we recommend implementing the confidential computing strategy and secure enclaves, as detailed in Section 3.8.

4.2. Entity Registration

The top-level sequence diagram (Figure 8) outlines the message flow and the specific actions (steps) that are related to the operation of registering entity identity:
  • Step 1: The device registration process begins with the device maintain layer defining an entity’s identity through hybrid fingerprint techniques. This operation is conducted within a secure enclave (e.g., TEE);
  • Step 2: The entity’s identity, represented by fingerprint samples, is recorded in the entity features store, $EF_{store} = \{FID_1, FID_2, \ldots, FID_n\}$;
  • Step 3: A set of seals, $Sl_{store} = \{Sl_1, Sl_2, \ldots, Sl_k\}$, is generated based on a subset of fingerprint samples from the features store. For each specific seal, $Sl_x$, we proposed to use a hash, $H(\cdot)$, of the exclusive OR (XOR, $\oplus$) of the feature sample, $FID_x$, combined with internal parameters from the hardware security module, $HSMV_p$;
  • Step 4: The chaincode, a component of the distributed ledger layer that manages the secure transaction of adding a new identity to the ledger, is invoked. This transaction incorporates the entity features and secure seal stores as part of its payload, $(EF_{store}, Sl_{store})$ (a client-side sketch of this submission follows Figure 8);
  • Step 5: The transaction is executed upon receiving the necessary approvals from the organizations specified by the endorsement policy. This step is crucial for ensuring the integrity and transparency of the ledger, as it guarantees that all identities are accurately and securely recorded;
  • Step 6: The entity features and secure seal store are recorded in the distributed ledger through separate channels: an entity features channel, $EF_{channel}$, and a secure seal channel, $Sl_{channel}$, respectively;
  • Step 7: A confirmation of the registration is sent back to the device maintain layer;
  • Step 8: Considering the confidential computing strategy, the S l s t o r e is written to the entity’s secure enclave. Any cached values related to the registration process should be cleared (wiped out).
Figure 8. Top-level sequence diagram for registering entity identity.
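For illustration, Steps 4 and 5 can be sketched from the client side with the Fabric Gateway API as follows; submitTransaction() returns only after the transaction has been endorsed according to the endorsement policy, ordered, and committed. The chaincode and argument names follow the earlier illustrative sketches and are assumptions.

import org.hyperledger.fabric.client.Contract;

// Sketch of Steps 4-5 from the device maintain layer's perspective: submitting
// the identity-registration transaction through the Fabric Gateway client.
public final class RegisterIdentity {
    public static void register(Contract contract, String guid, String sealStoreJson)
            throws Exception {
        // Blocks until endorsement, ordering, and commit have completed.
        contract.submitTransaction("registerIdentity", guid, sealStoreJson);
    }
}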

4.3. Blockchain Event Listener Application

As an enhancement to the entity registration operation, we proposed to incorporate device identity into the local off-chain data store, which is a component of the streams microservice layer. This improvement aims to reduce time delays during message verification by eliminating the need for a ledger query step (micro-caching mechanism). Figure 9 presents a top-level sequence diagram for the operation, illustrating the flow of messages and interactions involved (steps):
  • Steps 1–4: These remain the same as in the top-level sequence diagram for registering the entity identity (Figure 8);
  • Step 5: Upon receiving the necessary approvals from the organizations specified by the endorsement policy, the transaction is executed (the distributed ledger layer);
  • Step 6: The entity features and secure seal stores are recorded in the distributed ledger through separate channels: $EF_{channel}$ and $Sl_{channel}$;
  • Step 7: An application called the blockchain event listener monitors events that are emitted by the distributed ledger layer. This application represents a special entity within the publisher’s layer. As a result of the approved and executed transaction, the $GUID$ and the seal store are written to the event payload. Then, the event is emitted by the distributed ledger layer (a sketch of the listener follows Figure 9);
  • Step 8: A confirmation of the registration is sent back to the device maintain layer;
  • Step 9: The secure seal store is written to the entity’s secure enclave (the device maintain layer);
  • Step 10: The blockchain event listener (the publisher’s layer) interprets the occurrence of the event, and the seal store is extracted from the event payload;
  • Step 11: The seal store extracted from the event payload is written to the data queue layer (Kafka Cluster). The dedicated Kafka topic is utilized for this purpose;
  • Step 12: The streams microservice layer reads the seal store from the cluster in a sequential manner;
  • Step 13: During the process() method handled by the streams microservice layer, the seal store is added to the local off-chain data store and can be utilized by other streams microservices within the pool.
Figure 9. Top-level sequence diagram for adding identities using a blockchain event listener.
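The following Java sketch outlines the blockchain event listener’s core loop (Steps 7 and 10–11): it iterates over chaincode events exposed by the Fabric Gateway client and republishes each seal store to the dedicated Kafka topic. The event, chaincode, broker, and topic names match the earlier illustrative sketches and are assumptions rather than our exact implementation.

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.ByteArraySerializer;
import org.hyperledger.fabric.client.ChaincodeEvent;
import org.hyperledger.fabric.client.CloseableIterator;
import org.hyperledger.fabric.client.Network;

import java.util.Properties;

// Sketch of the blockchain event listener bridging ledger events to Kafka.
public final class BlockchainEventListener {
    public static void run(Network network) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "broker-1:9092"); // illustrative address
        props.put("key.serializer", ByteArraySerializer.class.getName());
        props.put("value.serializer", ByteArraySerializer.class.getName());

        try (KafkaProducer<byte[], byte[]> producer = new KafkaProducer<>(props);
             CloseableIterator<ChaincodeEvent> events =
                     network.getChaincodeEvents("device-identity")) {
            while (events.hasNext()) {
                ChaincodeEvent event = events.next();
                if ("IdentityRegistered".equals(event.getEventName())) {
                    // The identity streams microservice later adds this payload
                    // to the local off-chain store (Steps 12-13).
                    producer.send(new ProducerRecord<>("dev_id", event.getPayload()));
                }
            }
        }
    }
}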

4.4. Data Streams Verification

A tailored stream processing algorithm has been proposed. The top-level sequence diagram (Figure 10) outlines the message flow and the specific actions (steps) that are related to the operation of data streams verification. Additionally, the local off-chain data store has been incorporated into the streams microservice layer, which can enhance the overall performance of the system:
  • Step 1: When an entity (IoT device) from the publisher’s layer intends to transmit a data stream, it invokes the generate_data_stream() method. During this execution, initial parameters for the cryptographic primitive (sealing) are chosen from the secure seal store, S l s t o r e = { S l 1 , S l 2 , , S l k } , where the seal, S l x = H ( F I D x H S M V k ) , or subset of seals, S S l s t o r e S l s t o r e , is selected along with its corresponding indices, L S i d s = { x i 1 , , x i j } , and a session nonce, N s , is generated;
  • Step 2a: For the chosen seal and the session nonce, a session seal key is generated, $Seal_{sk} = Keygen(SSl_{store}, N_s)$. Next, the AUTH_MSG is crafted and sealed using a signature algorithm based on the specified parameters: $S_{AUTH}^{N_s} = Sign(AUTH\_MSG, Seal_{sk})$;
  • Step 2b: Subsequently, the session-related data stream messages are sealed using the session seal key generated in Step 2a: $S_{DATA_i}^{N_s} = Sign(DATA\_MSG_i, Seal_{sk})$;
  • Step 3a: The sealed AUTH_MSG is transmitted through a reliable communication channel via the fingerprint enrichment layer to the data queue layer (Kafka Cluster), stored under a designated topic. The AUTH_MSG includes $(GUID, LS_{ids}, N_s, T, S_{AUTH}^{N_s})$;
  • Step 3b: The sealed DATA_MSG messages are transmitted in the same manner as described in Step 3a. The DATA_MSG contains $(GUID, N_s, T, S_{DATA_i}^{N_s})$;
  • Step 4: Optionally, within the fingerprint enrichment layer, specialized behavior-based analyzers handle the sampling() method to capture fingerprint samples associated with a specific device’s AUTH_MSG;
  • Step 5a: The handle_auth() method transforms the raw AUTH_MSG message into a structure suitable for loading into the Kafka Broker;
  • Step 5b: The handle_data() method transforms the raw DATA_MSG message into a format that can be processed by the data queue layer;
  • Step 6a: The data stream authentication message, AUTH_MSG, is forwarded to the data queue layer;
  • Step 6b: The session-related messages, DATA_MSG, are forwarded;
  • Step 7: The process() method of the streams microservice layer sequentially reads the AUTH_MSG message from the specified topic;
  • Step 8: The parameters $(GUID, LS_{ids}, N_s, T, S_{AUTH}^{N_s})$ are extracted from the AUTH_MSG for subsequent verification;
  • Step 9a: The micro-caching mechanism is utilized. The verify streams microservice queries local off-chain storage to retrieve the device fingerprint (identity) that sealed the message. The query is composed of the GUID and seal indices $(GUID, LS_{ids})$. As an extension, a stored procedure or a trigger-like mechanism can be employed with the local off-chain storage to generate the session seal key, $Seal_{sk}^x = Keygen(SSl_{store}^x, N_s)$. In this case, the query body is extended with the session nonce parameter $(N_s)$, allowing a reduction in the number of steps necessary before proceeding to Step 12. The confidential computing strategy should be implemented to ensure strict protection of data in transit;
  • Step 9b: The appropriate identity $(GUID, SSl_{store})$ is returned, or a Not Found error is generated. If the extension mentioned in Step 9a is applied, an alternative response of $(GUID, Seal_{sk}^x)$ is returned;
  • Step 10: If the session seal key, $Seal_{sk}^x$, is successfully generated (or obtained), the steps related to querying the distributed ledger layer (Hyperledger Fabric) are omitted, and Step 12 is executed instead;
  • Step 11a: Otherwise, the Not Found error triggers a query via the communication layer to the Hyperledger Fabric Gateway service of the distributed ledger layer;
  • Step 11b: The chaincode (transaction) is executed to generate $Seal_{sk}^x$ based on the device GUID, a list of seal indices, and the session nonce $(GUID, LS_{ids}, N_s)$;
  • Step 11c: The device GUID along with the session seal key is returned from the ledger $(GUID, Seal_{sk}^x)$, or a relevant error is produced;
  • Step 12: Entity identities are compared by verifying the extracted signature $S_{AUTH}^{N_s}$ that sealed the AUTH_MSG against the signature $S_{AUTH}^{N_s,x} = Sign(AUTH\_MSG, Seal_{sk}^x)$ with the session seal key returned from Step 10, Step 11c, or optionally, Step 9b. If the signatures match ($S_{AUTH}^{N_s} = S_{AUTH}^{N_s,x}$), the AUTH_MSG undergoing the verification process will be preserved. If they do not match, the message may either be discarded or stored in a separate queue for the identification of potentially malicious or faulty devices;
  • Step 13: The streams microservice sequentially reads and verifies the session-related messages, $DATA\_MSG_i^{N_s}$;
  • Step 14: The session seal key from either Step 10 or Step 11c is used, and the signatures of the DATA_MSG are compared;
  • Step 15: Verified DATA_MSG is sent to the appropriate topic;
  • Step 16: An entity representing the subscribers layer reads the verified data streams via the read_streams() method, based on the subscribed topics and the authorization policy.
Figure 10. Sequence diagram for verification of the data streams operation.
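For illustration, the sketch below shows one way Steps 1, 2a, and 12 could be realized in Java. It assumes $Keygen$ is an HMAC-based derivation over the selected seals keyed by the session nonce, and $Sign$ is HMAC-SHA256 under the derived session seal key; the paper defines these primitives abstractly, so the concrete algorithm choices here are assumptions:

```java
import java.security.MessageDigest;
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;

public final class SealVerifier {

    private static final String MAC_ALGO = "HmacSHA256";

    // Seal_sk = Keygen(SSl_store, N_s): derive the session seal key from the
    // selected subset of seals and the session nonce (assumed HMAC-based KDF).
    public static byte[] keygen(byte[][] selectedSeals, byte[] sessionNonce) throws Exception {
        Mac mac = Mac.getInstance(MAC_ALGO);
        mac.init(new SecretKeySpec(sessionNonce, MAC_ALGO));
        for (byte[] seal : selectedSeals) {
            mac.update(seal);
        }
        return mac.doFinal();
    }

    // S = Sign(MSG, Seal_sk): seal a message under the session seal key.
    public static byte[] sign(byte[] message, byte[] sealKey) throws Exception {
        Mac mac = Mac.getInstance(MAC_ALGO);
        mac.init(new SecretKeySpec(sealKey, MAC_ALGO));
        return mac.doFinal(message);
    }

    // Step 12: constant-time comparison of the extracted and recomputed seals.
    public static boolean verify(byte[] message, byte[] extractedSignature, byte[] sealKey)
            throws Exception {
        return MessageDigest.isEqual(sign(message, sealKey), extractedSignature);
    }
}
```

The constant-time comparison in verify() avoids leaking how many signature bytes matched, which complements the side-channel discussion in Section 5.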

5. Security and Reliability Risk Assessment

In this section, we have conducted a high-level security risk assessment considering several security and reliability threats across the application, network, and perception layers of the dissemination framework.

5.1. Analysis of Attack Resilience

The proposed framework for data distribution within the federation may be exposed to risks due to the valuable data collected, transmitted, and processed. These data can be particularly sensitive, and their disclosure may expose mission participants to serious human or material losses. The attacker’s goal in federated IoT ecosystems is most often to steal or forge data through manipulated IoT devices, disrupt transmissions, and use IoT devices to launch other attacks, including DoS/DDoS, preventing access to federated services. The attacks that can be carried out on federated IoT networks are the same as those on non-federated networks. However, when analyzing attacks on federated networks, it is essential to also consider an attacker from a federation organization other than the one that owns the targeted devices or services. Such an attacker is likely to have more rights than a non-federated entity, but fewer than a member of the organization that owns the devices or services. Table 1 lists possible attacks on the developed data dissemination system; the criterion for selecting them is the possibility of violating information security properties. The list of attacks is based on work on the security of federated environments [19]. Some attacks presented in Table 1 are generalized categories encompassing different attack techniques; for example, the DoS/DDoS category includes flood attacks, fragmentation attacks, and reflection amplification attacks.
Cryptanalysis comprises a set of attacks aimed at obtaining the plaintext of a transmitted message and can be carried out in all three distinguished layers of the framework. Examples include brute-force, dictionary, and statistical attacks. To make it difficult to obtain the plaintext of a message sent between an IoT device and an application gateway or another IoT device, we recommend strong encryption and HMAC algorithms that are considered secure. The parameters used to create the key must also have sufficient entropy so that an attacker cannot guess the key used to secure the message.
Spoofing attacks comprise a large set of attacks designed to impersonate, in the case of our framework, a device, an application gateway, or a distributed ledger node. Since every message in the framework is authenticated, attacks of this type are impossible unless the attacker gains access to the parameters from which the key can be created. Furthermore, mutual authentication between the application gateway and the ledger node is carried out using TLS based on the certificates of both parties.

5.1.1. General Attacks

An attack that can apply to all layers of the adopted solution’s architecture is a (distributed) denial-of-service (DoS/DDoS) attack. At the perception layer, it can involve the unauthorized seizure of IoT device resources. The attack aims to exhaust an available resource; the most common targets are network bandwidth, CPU time, and internal memory. IoT devices are vulnerable to this type of attack due to their resource limitations. At the network layer, a DoS/DDoS attack can involve jamming communications between devices: the attacker tries to hijack the transmission bandwidth by emitting a signal on the same frequencies used by the communicating devices. As with the perception-layer DoS/DDoS attack, defense options are very limited. At the application layer, the framework is also susceptible to DoS/DDoS attacks through transactions involving the distributed ledger. These transactions either add data to the ledger (e.g., device registration) or read data from it (e.g., reading IoT devices’ identity parameters), and through them Hyperledger Fabric resources can be exhausted. However, the solutions adopted in this work, such as off-chain data storage and the multiplication of ledger nodes in each organization, significantly reduce the effectiveness of DoS attacks at the application layer. Furthermore, in our framework, the number of Kafka brokers, microservices, and IoT gateways can be increased to handle more requests. Moreover, using security information and event management (SIEM), we can identify specific properties of requests involved in DoS attacks to detect the source of the overload and reject all malicious requests at the gateway level.

5.1.2. Perception Layer Attacks

  • Device capture: The attacker can access the IoT device and generate messages sealed with its identity. In this situation, it is assumed that such a device’s behavioral pattern (distinctive features) will change. Consequently, it will be possible to use analytics tools (SIEM) to detect these changes, mainly related to network fingerprints. Moreover, when a compromised device is detected, it can be immediately marked and revoked from the distributed ledger. Also, security enclaves (e.g., HSM) can increase device resilience against capture and manipulation.
  • Malicious devices: The attack involves adding a fake IoT device to the network. In our framework, a device is registered once in a protected environment, and we assume that the process is coordinated by an authorized person; registering a fake device is therefore not possible. An unregistered device will not be authenticated, its messages will be rejected, and consequently the device will be detected and blocked.
  • Device tampering: The attack consists of changing software or hardware components of the IoT device. In our framework, any changes to a unique device fingerprint would generate numerous failed verification attempts.
  • Sybil attacks: The attack consists of a single IoT device presenting multiple identities. This situation is prevented by the secure registration process.
  • Side-channel (timing) attacks: These attacks obtain the key by analyzing the implementation of the protocol (e.g., power consumption and timing dependencies). The framework could be susceptible to a timing attack when the device uses unique data to seal messages. In this situation, it is possible to predict where these data are read from, but not their values; therefore, we believe this attack is difficult to perform in practice.

5.1.3. Network Layer Attacks

  • Eavesdropping: This attack involves eavesdropping on transmissions and obtaining messages (credentials). In our framework, we have separated the key used to secure the communication channel for IoT devices from the key used for the data authenticity protection mechanism. Only the registration phase is critical and must be carried out in a protected, trusted environment.
  • Replay attack: This attack is based on intercepting a message and sending it again later. In the developed solution, this attack is infeasible because each message contains a creation timestamp, which is verified: when decrypting the message, the IoT device or the gateway compares this timestamp with the current time (see the freshness-check sketch after this list). The disadvantage of this solution is that it requires correctly synchronized clocks on all IoT devices.
  • Packet injection: This attack involves injecting packets to disrupt communications. In the case of our framework, the attack can consist of duplicating transmitted packets or creating invalid packets. Since the packets are verified by the streams microservice layer in both cases, the recipient will ignore them.
  • Session hijacking: This attack compromises the session key by stealing or predicting it. Within our framework, it is mitigated through the session nonce, timestamps, and the device-based key used for data authenticity.
  • Man-in-the-Middle: This attack consists of changing the messages sent between the IoT device and the verifying microservices. Any change to the message will prevent it from being verified due to the data authenticity protection mechanism. Moreover, invalid messages can be logged to identify faulty (malicious) devices.
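As a minimal illustration of the replay countermeasure referenced above, the following Java sketch rejects messages whose creation timestamps fall outside an assumed freshness window; the actual tolerance would depend on the clock synchronization achievable across the devices:

```java
import java.time.Duration;
import java.time.Instant;

public final class ReplayGuard {

    // The 30 s window is an assumed tolerance, not a value from the framework.
    private static final Duration MAX_SKEW = Duration.ofSeconds(30);

    // Accept the message only if its creation timestamp is within the window
    // of the verifier's current time; otherwise treat it as a possible replay.
    public static boolean isFresh(Instant messageTimestamp) {
        Duration age = Duration.between(messageTimestamp, Instant.now()).abs();
        return age.compareTo(MAX_SKEW) <= 0;
    }
}
```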

5.1.4. Application Layer Attacks

  • Storage attack: This attack consists of changing device identity features. To prevent this attack, access to the device should be properly secured to prevent changes to data stored in it. In our framework, if the device identity is changed, the device will not be able to authenticate itself. It is almost impossible to change data in a distributed ledger without the knowledge and consent of the organization that owns the IoT device.
  • Malicious insider attack: This attack consists of the misuse of credentials by an authorized person. In the framework, access to data stored in Hyperledger Fabric is possible only for an authenticated and authorized entity that uses an appropriate private key and a valid X.509 certificate. Each access attempt is logged. Resistance to this attack can be enhanced by using SIEM.
  • Abuse of authority: The possibility of a successful attack is minimized through chaincodes designed to verify every operation performed on data stored in the distributed ledger. For this reason, each organization has, among other things, the ability to modify the permissions of its own IoT devices, but unauthorized entities cannot do so.
  • Permissions modification by unauthorized users: In the framework, permissions can be modified only through chaincodes; thus, any modification requires the conditions written in the chaincode to be met. If the chaincode is written correctly, only the device owner or an authorized entity can modify the IoT device’s permissions.

5.2. Analysis of Fault Resilience

  • Distributed ledger node failures: In our framework, we propose that each organization has a minimum of two nodes. Since the data in the blockchain are replicated, the failure of a single node does not affect the operation of the entire Hyperledger Fabric network.
  • Kafka cluster (brokers) failure: In our framework, the availability of data generated by IoT devices can be maintained by using the built-in synchronization mechanism and setting an appropriate replication factor, as illustrated in the sketch after this list.
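A minimal Java sketch of this setting uses Kafka’s AdminClient to create a topic with a replication factor of three and a minimum of two in-sync replicas, so a single-broker failure neither loses data nor halts writes. Broker addresses, the topic name, and the sizing are illustrative:

```java
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;

public final class TopicProvisioner {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG,
                  "broker1:9092,broker2:9092,broker3:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // 3 partitions, replication factor 3: each partition survives the
            // loss of any single broker.
            NewTopic topic = new NewTopic("verified-data-streams", 3, (short) 3)
                // Require at least 2 in-sync replicas before a write is acknowledged.
                .configs(Map.of("min.insync.replicas", "2"));
            admin.createTopics(List.of(topic)).all().get();
        }
    }
}
```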

6. Framework Evaluation

Benchmarking the performance of streaming data processing systems poses a considerable challenge due to the difficulty of establishing a global notion of time. This section provides benchmarks for our framework in the Amazon Web Services (AWS) cloud environment and the Raspberry Pi device-based environment. In the AWS environment, we measured the average times for consumers to read the data stream processed through microservices utilizing the Java Kafka Streams API. Our objective was to validate the applicability of our framework in the context of audiovisual streams and to assess its computational stability. Conversely, the Raspberry Pi setup focused on gathering the latency metrics of key internal operations on resource-constrained devices. We also compared the operation latencies of implementations in the Java (Kafka Streams API library) and Go (Sarama library) programming languages.

6.1. Cloud Setup

Our experiment seeks to verify our framework’s capability to process audiovisual streams in a distributed and federated cloud environment. Table 2 describes and Figure 11 illustrates the various components of our experimental framework deployed using AWS technology (Setup I).
The AWS cloud, due to its pay-as-you-go model and pluggable architecture for the COTS services Amazon Managed Streaming for Apache Kafka and Amazon Managed Blockchain (Hyperledger Fabric), enables efficient deployment of our framework while minimizing the operational costs associated with provisioning, configuring, and maintaining its various components. Consequently, our framework is suitable for federated environments that are required to ensure zero-day interoperability.

6.2. Resource-Constrained Setup

In the second benchmark (Setup II), we chose resource-constrained devices to evaluate their utilization and computational power. We positioned the specific layers of our framework within the IoT environment locations: actuators/sensors, edge, Fog, data center (Cloud). Moreover, our setup simulates infrastructure placement in tactical military networks and can be adapted for scenarios involving civil components during HADR operations. Table 3 and Figure 12 represent the device-based environment.
Furthermore, in Section 3, we presented the meaning of the computing continuum concept. By relocating the ledger peer from the data center to the Fog location, we refined Setup II to facilitate benchmarking with the mentioned concept (Figure 13).
The Raspberry Pi device-based environment facilitates the straightforward out-of-the-box deployment of our framework while maintaining low operational costs. Moreover, the presented environment promotes the computing continuum concept, which can enhance the deployment of services at the tactical level in settings characterized by DIL networks, as well as in federated environments that require zero-day interoperability.
Additionally, we considered integrating information classification approaches with the CC concept to enable the processing and sharing of relevant IOs based on their value of information. The versatility of Raspberry Pi devices allows for integration with a neural processing unit, a specialized processor designed to accelerate artificial intelligence and machine learning tasks. This feature can facilitate parallel processing of large data volumes within the Fog component, making it especially well-suited for applications involving contextual dissemination through image recognition or natural language processing, as well as for data quality assessment.
Moreover, the hardware of these devices is compatible with various operating systems, including Windows 10 IoT Core and Linux OS. Lastly, the general-purpose input/output (GPIO) pins provide the flexibility to experiment with a range of communication protocols such as Sigfox, LoRaWAN, NB-IoT, Zigbee, and BLE.

6.3. Processing Systems Benchmarking

When conducting performance studies (benchmarks) of streaming data processing systems, it is necessary to consider the following key metrics [31,32]: latency, throughput, the usage of hardware–software resources (CPU and RAM), and power consumption (PC). Furthermore, the overall performance evaluation can be affected by the input parameters (e.g., system configuration) and processing scenarios (workloads) [30]. In the context of the proposed framework, several parameters are listed below:
  • Configuration of the data queue layer: number of brokers, partitions, and data stream replication factor;
  • Parallelization (horizontal-scaling) of stream processors (microservices);
  • Kind (e.g., windowed aggregation) and type of operations (e.g., stateless);
  • Number of organizations that joined a federated IoT environment and registered devices (identity count);
  • Number of peers of the distributed ledger layer;
  • Selected programming language for microservices and chaincodes.
Generally, latency is the time interval it takes a system (platform) under test (SUT) to process a message, calculated from the moment the input message is read from the source until the output message is written by the SUT. Hence, it is important to distinguish two types of the latency metric [31]: event-time latency and processing-time latency. The first refers to the interval between a timestamp being assigned to the input message by the source and the time the SUT generates an output message. The second refers to the interval between the time the SUT ingests (reads) an input message and the time the SUT generates an output message.
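Restating the two definitions symbolically, with $t_{source}$ the source-assigned timestamp, $t_{read}$ the SUT ingestion time, and $t_{out}$ the output time:

```latex
% Event-time vs. processing-time latency, restated from the definitions above.
\begin{align*}
  L_{\mathrm{event}}      &= t_{\mathrm{out}} - t_{\mathrm{source}},\\
  L_{\mathrm{processing}} &= t_{\mathrm{out}} - t_{\mathrm{read}},\\
  L_{\mathrm{event}}      &= L_{\mathrm{processing}}
      + \underbrace{(t_{\mathrm{read}} - t_{\mathrm{source}})}_{\text{queueing before ingestion}}.
\end{align*}
```

The two metrics therefore differ exactly by the time a message waits in the source before the SUT reads it.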

6.3.1. Data Dissemination System Benchmarking

In the context of Apache Kafka, latency (end-to-end, E2E) is defined as the total time from when application logic produces a record using KafkaProducer.send() to when the application logic can consume that record through KafkaConsumer.poll(). This E2E latency includes various intermediate operations that can affect the overall duration. The publish operation time encompasses flying time, queuing time, and internal record processing time. Flying time refers to the duration of transmitting the record to the leader broker, queuing time pertains to the time taken by the network thread to add the request to the broker queue, and record processing time involves reading the request from the queue and appending the logs (records). Furthermore, the replication factor and cluster load impact commit time, which is the time required for the slowest in-sync follower broker to retrieve a record from the leader. Moreover, the catch-up operation refers to the time a consumer takes to read the latest record ($Offset_N$) while its offset pointer is lagging ($Offset_K$, where $K < N$). Lastly, fetch operation time impacts the Kafka consumer record read latency, as the consumer continually polls the leader broker for more data from its subscribed topic partition.
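As a minimal sketch of measuring this E2E latency from the consumer side, the Java fragment below compares the consumer’s wall clock at poll() time against the record timestamp assigned at KafkaProducer.send() (CreateTime, the default). It assumes synchronized clocks when producer and consumer run on different hosts; the topic name is illustrative:

```java
import java.time.Duration;
import java.util.List;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public final class E2eLatencyProbe {
    public static void sample(KafkaConsumer<String, byte[]> consumer) {
        consumer.subscribe(List.of("verified-data-streams"));
        ConsumerRecords<String, byte[]> records = consumer.poll(Duration.ofMillis(500));
        long now = System.currentTimeMillis();
        for (ConsumerRecord<String, byte[]> record : records) {
            // record.timestamp() holds the producer-side CreateTime.
            long e2eMillis = now - record.timestamp();
            System.out.printf("offset=%d e2e=%d ms%n", record.offset(), e2eMillis);
        }
    }
}
```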
As previously noted, various factors can influence latency. Our experiments focused on the detailed configuration of the Kafka parameters for our microservice that operates both as a producer and a consumer during single record processing. Consequently, the following configuration was uniformly applied across all scenarios (Table 4).

6.3.2. Distributed Ledger System Benchmarking

Regarding the distributed ledger operations benchmarking, we focused on collecting latency metrics from two sources: one from the microservice ledger query operation (DL query), and another directly from the log entries of the ledger peer server. By analyzing the peer logs, we focused on two types of entries: the duration of the gRPC call and the time required to evaluate the chaincode (chaincode execution), specifically to acquire the device’s seal from the ledger. The first metric is essential for monitoring and performance analysis, as it offers valuable insights into the efficiency and responsiveness of the services involved in gRPC communication. Conversely, the second metric pertains to a peer node that endorses transactions, reflecting the computation burden.
In examining potential peer (database) microcaching mechanisms, we can differentiate between two phases of the ledger peer: the cold peer (CP) and the hot peer (HP). The CP occurs when the peer is restarted each time before the microservice is launched. Conversely, the HP refers to a state in which the peer has already been queried for all identities present in the ledger before the microservice under test is launched. These phases can significantly affect chaincode latency [34,35]. We selected Apache CouchDB as the primary backend database for our ledger peers. This NoSQL database employs a schema-free JSON format and a B-tree indexing system, making it ideal for complex, read-heavy queries. However, it is not optimized for write operations in the way a log-structured merge tree is. Fortunately, our framework primarily performs ledger read operations.
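For context, the chaincode evaluated in these benchmarks acquires a device’s seal from the ledger. A minimal Java contract sketch of such a read-only (EVALUATE) transaction is shown below; the contract name, method signature, and key layout are assumptions, as the paper does not publish its chaincode:

```java
import java.nio.charset.StandardCharsets;
import org.hyperledger.fabric.contract.Context;
import org.hyperledger.fabric.contract.ContractInterface;
import org.hyperledger.fabric.contract.annotation.Contract;
import org.hyperledger.fabric.contract.annotation.Default;
import org.hyperledger.fabric.contract.annotation.Transaction;
import org.hyperledger.fabric.shim.ChaincodeException;

@Contract(name = "IdentityContract")
@Default
public final class IdentityContract implements ContractInterface {

    // Read-only evaluation queried during verification: fetch the seal recorded
    // under the device GUID. The "seal:<GUID>:<index>" key layout is assumed.
    @Transaction(intent = Transaction.TYPE.EVALUATE)
    public String GetSeal(final Context ctx, final String guid, final String sealIndex) {
        byte[] seal = ctx.getStub().getState("seal:" + guid + ":" + sealIndex);
        if (seal == null || seal.length == 0) {
            throw new ChaincodeException("Identity not found for GUID " + guid);
        }
        return new String(seal, StandardCharsets.UTF_8);
    }
}
```

Because the transaction is declared with EVALUATE intent, it is served by a single endorsing peer without ordering, which is what the peer-log “chaincode execution” entries measure.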

6.3.3. Microservice Benchmarking

The Go programming language is statically compiled into machine code, resulting in a single executable that includes all necessary dependencies and can be run directly on the operating system. This approach eliminates the two-step compilation process utilized by Java, which embodies the concept of “write once, run anywhere”.
At the onset of a Java microservice, the Java virtual machine (JVM) performs just-in-time (JIT) dynamic compilation, leading to fluctuations in internal operation latencies during the initial execution phase. This phase, commonly referred to as the warm-up state (WS), involves the JIT compiler, which analyzes the bytecode (the pre-compiled form) and translates it into machine code. It identifies frequently executed methods and loops, omits unused code segments, applies constant folding, and manages object allocation. Once this optimization concludes, the microservice attains a steady state (SS) of performance [36].
We recognize the impact of JIT optimization on our metrics. Hence, as a mitigation strategy, our benchmarks capture latencies for microservices operating in both the WS and the SS. Nevertheless, performance fluctuations during the WS can lead to notable latencies, especially on resource-constrained platforms.
Our implementation involves collecting the durations of key microservice operations: cryptographic primitives initialization, distributed ledger query, HMAC validation, AES decryption, and message forwarding. Each operation is executed a fixed number of times based on the selected message volume. Lastly, the primary method, process(), which includes these operations, is repeated multiple times using fresh JVM instances (forks); a sketch of such a harness is given below. A similar approach is employed for the Go microservice.
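The paper does not name its measurement harness; one way to realize fresh-JVM forks with separate WS and SS phases is JMH, as in the following sketch (iteration counts are illustrative):

```java
import java.util.concurrent.TimeUnit;
import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Fork;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Warmup;

@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@Fork(10)                     // 10 fresh JVM instances, matching the 10 launches
@Warmup(iterations = 5)       // captures the JIT warm-up state (WS)
@Measurement(iterations = 20) // captures the steady state (SS)
public class ProcessBenchmark {

    @Benchmark
    public void fullProcessing() {
        // Placeholder for the microservice process() pipeline: crypto init,
        // DL query, HMAC validation, AES decryption, message forwarding.
    }
}
```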

6.4. Resource Utilization

We employed the real-time monitoring system based on the node exporter service to manage device resource utilization during benchmarks and identify potential anomalies. This service allows for the accumulation of an extensive range of metrics related to hardware and operating systems, such as RAM, I/O disk, and CPU usage. The collected data can then be ingested and stored in Prometheus [37], an open-source monitoring system that implements a pull model. For in-depth analysis, we utilized Grafana, a web application that provides interactive visualizations and customizable dashboards, allowing us to derive valuable insights regarding the tested platforms.

6.5. Power Consumption

During benchmarking, monitoring potential voltage spikes that may decrease CPU instructions per clock (IPC) is crucial, as these issues can adversely affect microservice execution. Moreover, maintaining a focus on power consumption (PC) facilitates the identification of high-power instructions (operations) within an actively running microservice under specific workloads, allowing for further optimization.
It is also important to detect throttle events that could result in microservice instability and crashes. For the Raspberry Pi platform, the risk of such events arises when the CPU temperature, as measured by the built-in sensor, ranges from 80 to 85 °C. This leads to gradual throttling of the CPU cores, reducing their frequencies.
We recognize that PC can fluctuate, and the current draw may vary based on usage. Therefore, during the benchmarking process, we took the following preliminary steps:
  • Devices for testing were deployed in their unmodified state, without any overclocking;
  • The lower limit for CPU temperature (throttle protection) was not modified;
  • No additional devices were connected to the GPIO and USB ports; an SSH connection was used to control the devices.
To monitor power consumption during our tests, we employed the RPi5 Power Management Integrated Circuit (PMIC, Renesas DA9091 “Gilmour”) [38], which integrates eight switch-mode power supplies to deliver the various voltages required by the PCB components, including the Cortex-A76 CPU cores (VDD_CORE). We integrated a revision-agnostic software tool called vcgencmd, which provides access to information about the voltage and current of each component managed by the PMIC. This tool is particularly versatile for monitoring various parameters, including the CPU cores’ status, temperature, and throttle state (represented as a bit pattern; flag 0x40000). Additionally, we implemented the Prometheus Scraper, which operates on a pull model, to collect power metrics from the devices under test. These metrics are then ingested into a Prometheus instance for further visualization in Grafana (Figure 14).
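A minimal Java sketch of such a sampler is shown below. It shells out to vcgencmd pmic_read_adc and sums per-rail voltage-current products; the assumed output format (one current line and one voltage line per rail) may vary across firmware revisions:

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.HashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public final class PmicPowerSampler {

    // Assumed line shapes: "VDD_CORE_A current(15)=4.479A" / "VDD_CORE_V volt(16)=0.724V".
    private static final Pattern LINE =
        Pattern.compile("(\\w+)_([AV]) (?:current|volt)\\(\\d+\\)=([0-9.]+)[AV]");

    public static double totalWatts() throws Exception {
        Process p = new ProcessBuilder("vcgencmd", "pmic_read_adc").start();
        Map<String, Double> amps = new HashMap<>();
        Map<String, Double> volts = new HashMap<>();
        try (BufferedReader r = new BufferedReader(new InputStreamReader(p.getInputStream()))) {
            String line;
            while ((line = r.readLine()) != null) {
                Matcher m = LINE.matcher(line.trim());
                if (m.find()) {
                    ("A".equals(m.group(2)) ? amps : volts)
                        .put(m.group(1), Double.parseDouble(m.group(3)));
                }
            }
        }
        // P = sum over rails of V * I, for rails reporting both readings.
        return amps.entrySet().stream()
            .filter(e -> volts.containsKey(e.getKey()))
            .mapToDouble(e -> e.getValue() * volts.get(e.getKey()))
            .sum();
    }
}
```

A scraper can expose totalWatts() as a Prometheus gauge, matching the pull-model collection described above.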

6.6. Processing Scenarios

6.6.1. Scenarios of Setup I: Cloud-Based Environment

This paper examined the processing time latency of microservices developed using the Java Kafka Streams API and deployed in the AWS cloud-based environment (Setup I). Our objective was to validate the applicability of our framework in the context of audiovisual streams and to assess its computational stability. We designed two scenarios for this purpose:
  • Setup I, Scenario I: This involved verifying the input (sealed) message by comparing the extracted device identity and the identity stored in the distributed ledger.
  • Setup I, Scenario II: We verify the sealed message by comparing the extracted device identity with the identity stored in the off-chain data store. In this scenario, all device identities from the ledger were synchronized and stored in the off-chain store.
In the scenarios outlined, the burst-at-startup technique was employed [30]. This process involved generating and sealing each input message with a pseudo-random device identity in advance. Once a predetermined number of input messages had been created, a single instance of the microservice responsible for their verification was activated. Furthermore, we aimed to extend workloads utilizing varying quantities of registered identities within the distributed ledger.

6.6.2. Scenarios of Setup II: Resource-Constrained Environment

For the Raspberry Pi resource-constrained environment (Setup II), we benchmarked microservices developed in Java (Kafka Streams API) and the Go (Sarama) programming language. Furthermore, beyond the full processing time, we measured the duration of microservices’ significant (time-consuming) operations: cryptographic primitives initialization, distributed ledger query, HMAC validation, AES decryption, and message forwarding. Moreover, we looked at hardware and software utilization through real-time monitoring. We planned to conduct the following scenarios:
  • Setup II, Scenario I: This involved verifying the sealed message by comparing the extracted device identity with the identity stored in the distributed ledger located in the data center component, while also applying the burst at startup technique.
  • Setup II, Scenario II: We verified the sealed message by comparing the extracted device identity with the identity stored in the ledger peer relocated to the Fog processing component (the continuum computing concept enabled). The burst at startup technique was applied.
  • Setup II, Scenario III: We verified the sealed message by comparing the extracted device identity with the identity stored in the distributed ledger within the data center component. Throughout this scenario, continuous writing operations were conducted to the data queue layer, maintaining a consistent message (real-world data flow) throughput of 400 messages per second [32].

6.7. Reference Points for Experiments

6.7.1. Processing Scenarios

As a reference for comparing the RPi3 and RPi5 platforms, we present results from Setup II, Scenario I (steady state, hot peer phase, 10,000 identities, and 10,000 messages) obtained from microservice benchmarking on the SLap platform (Table 5: Java; Table 6: Go). For consistency in further reading, the tables detailing microservice latency present averaged results, with the average absolute deviation in brackets, calculated from 10 microservice launches (forks). The following metrics were employed to evaluate the results:
  • Average (Avg), average absolute deviation of data points from their mean value (Avg Dev), and minimum (Min) latency;
  • Standard deviation based on the entire population (Pop Std Dev);
  • Confidence intervals for the mean and standard deviation of the normal distribution ($\alpha = 0.10$; $\alpha = 0.05$; $\alpha = 0.01$);
  • Quantiles of order p90, p95, and p99 (percentiles $p = 0.9$; $p = 0.95$; $p = 0.99$).
The Java-based verify streams microservice took an average of 5.76 (±0.10) ms, with the minimum delay recorded at 3.59 (±0.07) ms, and an average latency deviation of 0.90 (±0.08) ms. The long-tail latencies of the p90 and p95 quantiles were 7.04 (±0.17) and 7.81 (±0.24) ms, respectively. The confidence intervals for α levels of 0.1 and 0.05 were ±0.029 (±0.50% of the average) and ±0.0345 (±0.60%) ms. The most time-intensive operation was the DL query, which averaged 5.69 (±0.10) ms. Based on log entries, the chaincode execution operation took an average of 1.10 (±0.02) ms, while a full gRPC call lasted 2.47 (±0.11) ms. For the message forwarding operation, p90 percentiles did not exceed 0.051 (±0.0016) ms.
Table 5. Setup II, Scenario I: Processing-time latency metrics (in ms) for Java code.

| Operation | Avg | Avg Dev | Min | Pop Std Dev | Confidence Intervals [α = 0.10; α = 0.05; α = 0.01] | Percentiles [p = 0.9; p = 0.95; p = 0.99] |
|---|---|---|---|---|---|---|
| Crypt. Prim. Init | 0.003 (0.0001) | 0.001 (0.0001) | 0.001 (0.0001) | 0.005 (0.0003) | [0.0003; 0.0003; 0.0004] | [0.004 (0.0003); 0.006 (0.0002); 0.009 (0.0005)] |
| DL Query | 5.69 (0.10) | 0.89 (0.08) | 3.56 (0.07) | 1.74 (0.36) | [0.0286; 0.0341; 0.0448] | [6.95 (0.17); 7.71 (0.23); 10.96 (0.78)] |
| HMAC Validation | 0.006 (0.0002) | 0.002 (0.0001) | 0.001 (0.0001) | 0.004 (0.0004) | [0.0001; 0.0001; 0.0001] | [0.009 (0.0003); 0.012 (0.0004); 0.022 (0.0009)] |
| AES Decryption | 0.024 (0.0006) | 0.009 (0.0005) | 0.006 (0.0004) | 0.021 (0.0064) | [0.0003; 0.0004; 0.0005] | [0.041 (0.0015); 0.052 (0.0015); 0.083 (0.0028)] |
| Msg Forwarding | 0.030 (0.0007) | 0.015 (0.0005) | 0.005 (0.0008) | 0.089 (0.0073) | [0.0015; 0.0017; 0.0023] | [0.051 (0.0016); 0.077 (0.0017); 0.117 (0.0031)] |
| Full Processing | 5.76 (0.10) | 0.90 (0.08) | 3.59 (0.07) | 1.76 (0.36) | [0.0290; 0.0345; 0.0453] | [7.04 (0.17); 7.81 (0.24); 11.05 (0.79)] |
| Chaincode Exec (logs) | 1.10 (0.02) | 0.19 (0.03) | 0.00 (0.00) | 0.49 (0.06) | [0.0081; 0.0096; 0.0126] | [1.10 (0.18); 2.00 (0.00); 3.00 (0.00)] |
| gRPC Call (logs) | 2.47 (0.11) | 0.34 (0.07) | 1.51 (0.02) | 0.70 (0.19) | [0.0115; 0.0137; 0.0180] | [2.93 (0.27); 3.29 (0.42); 4.71 (0.69)] |
The Go-based microservice fork averaged 6.48 (±0.24) ms, with a minimum delay of 4.30 (±0.17) ms and an average latency deviation of 0.95 (±0.13) ms. The p90 and p95 latencies were measured at 7.87 (±0.47) and 9.08 (±0.63) ms, respectively. The confidence intervals for α levels of 0.1 and 0.05 were ±0.0275 (±0.42%) and ±0.0327 (±0.50%) ms. The most time-intensive operation was the DL query, which averaged 4.23 (±0.23) ms. Chaincode execution took 1.31 (±0.12) ms, while the gRPC call had an average duration of 2.81 (±0.16) ms. For the p90 quantile, the latency of the message forwarding operation was 2.53 (±0.08) ms.
Table 6. Setup II, Scenario I: Processing-time latency metrics (in ms) for Go code.

| Operation | Avg | Avg Dev | Min | Pop Std Dev | Confidence Intervals [α = 0.10; α = 0.05; α = 0.01] | Percentiles [p = 0.9; p = 0.95; p = 0.99] |
|---|---|---|---|---|---|---|
| Crypt. Prim. Init | 0.009 (0.0003) | 0.002 (0.0001) | 0.001 (0.0002) | 0.004 (0.0005) | [0.0001; 0.0001; 0.0001] | [0.012 (0.0004); 0.015 (0.0011); 0.029 (0.0046)] |
| DL Query | 4.23 (0.23) | 0.81 (0.15) | 2.64 (0.11) | 1.40 (0.25) | [0.0230; 0.0274; 0.0361] | [5.44 (0.54); 6.56 (0.60); 9.56 (1.31)] |
| HMAC Validation | 0.011 (0.0003) | 0.002 (0.0004) | 0.002 (0.0002) | 0.008 (0.0049) | [0.0001; 0.0002; 0.0002] | [0.012 (0.0005); 0.016 (0.001); 0.031 (0.0064)] |
| AES Decryption | 0.009 (0.0006) | 0.003 (0.0009) | 0.001 (0.0002) | 0.012 (0.0074) | [0.0002; 0.0002; 0.0003] | [0.013 (0.001); 0.018 (0.0009); 0.037 (0.0038)] |
| Msg Forwarding | 2.197 (0.0914) | 0.276 (0.0321) | 1.499 (0.0590) | 0.760 (0.1696) | [0.0125; 0.0149; 0.0196] | [2.526 (0.0845); 2.688 (0.0866); 3.813 (0.6615)] |
| Full Processing | 6.48 (0.24) | 0.95 (0.13) | 4.30 (0.17) | 1.67 (0.20) | [0.0275; 0.0327; 0.0430] | [7.87 (0.47); 9.08 (0.63); 13.12 (1.55)] |
| Chaincode Exec (logs) | 1.31 (0.12) | 0.48 (0.13) | 0.00 (0.00) | 0.86 (0.18) | [0.0141; 0.0169; 0.0222] | [1.90 (0.36); 2.9 (0.36); 4.9 (0.9)] |
| gRPC Call (logs) | 2.81 (0.16) | 0.58 (0.13) | 1.72 (0.06) | 0.99 (0.20) | [0.0163; 0.0194; 0.0255] | [3.67 (0.41); 4.60 (0.51); 6.69 (0.86)] |

6.7.2. Resource Utilization

In light of Kingman’s formula [39], which indicates that queue latencies grow steeply with resource utilization, we assessed the performance of the ledger peer, Kafka Cluster, and microservice execution on the SLap platform. We did not observe any significant increases in resource utilization during our assessment. There were no events such as CPU idling, decreases in IPC, or throttling. Additionally, we recorded a stable temperature of 30 °C. Furthermore, cluster I/O disk utilization and queuing delays did not significantly dominate the time taken for the message forwarding operation. Lastly, the network traffic monitoring revealed that the microservice producer’s limit on the number of unacknowledged requests sent to the leader broker remained within the configured upper limit of five, which did not affect the latency associated with the record publish.
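For reference, a minimal statement of Kingman’s approximation for the mean waiting time in a single-server queue with general arrivals and service (G/G/1), with utilization $\rho$, coefficients of variation $c_a$ and $c_s$ of inter-arrival and service times, and mean service time $\tau$:

```latex
% Kingman's (VUT) approximation: delay explodes as utilization approaches 1.
\mathbb{E}[W_q] \;\approx\;
\underbrace{\left(\frac{\rho}{1-\rho}\right)}_{\text{utilization}}
\underbrace{\left(\frac{c_a^2 + c_s^2}{2}\right)}_{\text{variability}}
\underbrace{\tau}_{\text{mean service time}}
```

The $\rho/(1-\rho)$ factor grows without bound as utilization approaches saturation, which is why we monitored broker and peer utilization throughout the benchmarks.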

6.7.3. Power Consumption

According to the official documentation for the Raspberry Pi platform [40], all models require a 5.1 V power supply. The current requirements generally increase with the specific model. Notably, the RPi5 consumes between 1.2 W and 1.6 W when connected to a power source, even when powered off. When connected to a power supply unit that delivers 5 A, the RPi5 can provide up to 1.6 A of power to downstream USB peripherals. Typical active current consumption is 800 mA for the RPi5 and 500 mA for the RPi3.
The average power consumption (PC) for the RPi3 platform during the idle state was 1.53 W. Under stress conditions, the average PC increased to 4.335 (max. 6.834) W. The documentation does not include the PC for the RPi5 platform; therefore, we performed our own measurements, and the following tables present the average PC for the RPi5. Table 7 details the PC for the device under test (SM) and the Kafka Cluster (B1, B2, and B3) during the idle state, in which all associated services were inactive. In contrast, Table 8 illustrates the PC when all related services were operational in a listening state, the CC concept was enabled, and the ledger peer was running on the device under test (SM+CC); however, no record processing or chaincode execution took place.
By comparing both states, we recorded slightly different average PC values: 2.12 (±0.18) W for the idle state (SM) and 2.15 (±0.22) W for the running state (SM + CC). The PC for the Kafka Cluster during the running state did not exceed 2.9 W. Furthermore, during the catch-up state, where full ledger peer synchronization was required, the PC was 51% greater than during the concurrent state (Table 9).

6.8. Benchmarks

The following section of the article provides the results achieved for the workloads described in Section 6.6 along with the resource utilization and power consumption of the platforms under test.
  • Processing Scenarios:
  • Setup I Scenario I
Table 10 presents the results from Setup I, Scenario I, where we benchmarked the microservice using the burst-at-startup technique. Changing the number of registered identities did not affect the processing-time latency associated with verifying a single message. An average delay of approximately 39 ms was measured, with a minimum delay of 31.7 (±1.30) ms. The average deviations for the p90 and p99 quantiles did not exceed 2 ms and 3 ms, respectively. For 100,000 registered identities, the p90 latency was 44.7 (±0.56) ms and the p99 latency was 55.8 (±1.64) ms. The confidence intervals for α levels of 0.1 and 0.05 were ±0.2861 (±0.75%) and ±0.3409 (±0.89%) ms, respectively.
  • Setup I Scenario II
Table 11 illustrates the average time required for consumers to read the data stream while the SUT was simultaneously processing it. Importantly, every second message referenced an identity not registered in the DLT. This method was designed to minimize the influence of optimization mechanisms (caching). The use of the off-chain data store reduced the reading time of the data streams by nearly half, with an average delay of approximately 21 ms recorded.
  • Setup II Scenario I
The upcoming tables detail the metrics for the microservice during the hot peer phase and Java steady state. Table 12 (Java) and Table 13 (Go) provide results for the RPi5 platform, while Table 14 (Java) and Table 15 (Go) focus on the RPi3. Our analysis revealed that the full processing latency for the Java-based microservice on the RPi5 was 7.38 (±0.18) ms, approximately 1.62 ms longer than that on the standalone laptop. Remarkably, 98.7% of this processing time was associated with the DL query operation. Additionally, the microservice on the RPi3 demonstrated a latency of 21.14 (±0.28) ms, nearly three times greater than that of the RPi5.
Our comparison of metrics from Go-based microservice execution revealed that the RPi5 consistently surpassed the performance of the SLap and RPi3 platforms. The average time to verify a single message was 6.18 (±0.29) ms, with a minimum delay of 4.14 (±0.17) ms. The recorded p90 and p95 latencies were 8.30 (±0.44) ms and 9.32 (±0.56) ms, respectively. The confidence intervals for α levels of 0.1 and 0.05 were ±0.0314 (±0.51%) and ±0.0374 (±0.61%) ms. Notably, 68.6% of the processing time was attributed to the DL query, while 31.2% was related to the message forwarding operation.
Figure 15 presents a comparison of the message forwarding operation latencies on different platforms under test. During the Java warm-up state, we recorded the following operation times: SLap, 0.09 (±0.004) ms; RPi5, 0.24 (±0.008) ms; RPi5 with CC enabled, 0.24 (±0.009) ms; RPi3, 0.99 (±0.031) ms. In the Java steady state, the times were 0.03 (±0.001) ms, 0.08 (±0.002) ms, 0.09 (±0.002) ms, and 0.39 (±0.010) ms, respectively. For the Go-based microservice, we measured the following: SLap, 2.25 (±0.017) ms; RPi5, 1.97 (±0.016) ms; RPi5 with CC enabled, 2.12 (±0.016) ms; RPi3, 2.70 (±0.020) ms. Notably, the ledger peer state (hot or cold) did not influence the latency of the examined operation. The variations in latencies were primarily attributable to the handling of synchronous operations and the wait time associated with acknowledgement responses. In contrast, we found that the metric latency for the Go implementation using the non-blocking asynchronous producer (sarama.AsyncProducer) was significantly lower: SLap, 0.007 (±0.001) ms; RPi5, 0.002 (±0.001) ms; RPi5 with CC enabled, 0.002 (±0.001) ms; RPi3, 0.032 (±0.003) ms. Furthermore, it is noteworthy that for the Go implementation, the latencies for the RPi5 were, on average, about 0.3 ms lower than those recorded for the SLap platform.
  • Setup II Scenario II
Table 16 (Java) and Table 17 (Go) outline the results from Setup II, Scenario II, in which the ledger peer was relocated to the Fog processing component. The benchmarking within this scenario concentrated on comparing latencies for the RPi5 platform with the continuum computing concept enabled (CC). In this configuration, both the ledger peer and the microservice were running on the same device platform, Raspberry Pi 5.
The results (Figure 16 and Figure 17) showed closely aligned metrics for the ledger peer during the HP phase and both the WS and SS states, with latencies averaging approximately 11 ms for WS and 7.3 ms for SS. Furthermore, the DL query operation latency for RPi5 during WS was recorded at 14.55 (±0.95) ms, while for the RPi5 with CC enabled, it was 18.62 (±3.66) ms. Lastly, we observed a significant error of 1.65 ms for the WS phase and HP state, which was associated with the DL query. Figure 18 presents the results obtained for Go-based microservice tested on different platforms and with both ledger peer states.
  • Resource Utilization:
Section 6.7 presents reference results from our experiments evaluating the performance (resource utilization) of the ledger peer, Kafka Cluster, and microservice execution on the SLap platform. We applied similar evaluation criteria to the measurements collected from the RPi5 and RPi3 platforms. Notably, we did not observe significant increases in resource utilization throughout our assessment; no reductions in CPU instructions per clock or throttling events occurred. Additionally, we recorded a stable temperature of about 52 °C on both devices. Moreover, the ratio of CPU usage to I/O wait time for the Kafka Cluster did not exceed 10%, and disk utilization (queuing delays) did not substantially affect the time required for record publish operations. Lastly, our network traffic monitoring revealed that the microservice producer’s limit on the number of unacknowledged requests sent to the leader broker remained within the configured upper limit of five, ensuring that the latency associated with the message forwarding operation was not impacted.
  • Power Consumption:
Table 18 and Table 19 provide an analysis of the power consumption for the RPi5 platform during microservice execution. The findings indicate that the Java microservice (SM) exhibits a higher power consumption (PC) compared to its Go counterpart. Specifically, we recorded an average PC of 3.11 (±0.63) W for Java and 2.27 (±0.29) W for Go. When comparing these results to the reference PC, we observed an increase of approximately 1.00 W for Java and 0.15 W for Go. With the CC concept enabled (SM + CC), the power consumption values remained stable at 3.88 (±0.58) W for Java and 3.17 (±0.59) W for Go. During the Java microservice execution, the ledger peer consumed approximately 0.77 W, while the consumption for Go was about 0.87 W. The average PC for the Kafka cluster during both microservice executions did not exceed 3.4 W. Finally, no voltage spikes that could potentially impact CPU IPC were detected. Only drops associated with logging computation statistics were identified.

7. Discussion

Deploying our framework within the AWS Cloud infrastructure revealed that it is suitable for processing audiovisual streams within environments that demand immediate interoperability. Moreover, we confirmed that our environment can be deployed on resource-constrained COTS platforms while maintaining low operational costs. Furthermore, the computational stability of our framework is essential for future performance studies, including those focused on maximum throughput. Table 20 and Table 21 highlight our key findings.

8. Conclusions and Future Work

One of the ongoing challenges is acquiring, analyzing, and disseminating the considerable volumes of data generated by various entities, especially IoT devices, while ensuring that distribution is secure and reliable. Furthermore, to effectively manage trade-offs in a data dissemination environment while considering key performance indicators, it is imperative to implement thorough monitoring for potential adjustments.
To address the identified challenges, we proposed and evaluated our experimental framework architecture, which was designed for secure and fault-tolerant distribution of data streams within the multi-organizational federation environment. Our research primarily emphasized data durability, fault tolerance, and low-latency KPIs. We also considered Kingman’s formula, which indicates that increasing the number of producers or consumers can exponentially increase the load on brokers as they accommodate more connections (queue burden). This can result in request bursts and long-tail latencies, underscoring the necessity of effectively scaling the data queue layer.
After thoroughly comparing metrics for the SLap, RPi5, and RPi3 platforms, we determined that adopting horizontal scaling will be more advantageous for our environment. This strategy will allow us to distribute workloads across multiple devices, enhancing resilience and adaptability to fluctuating demands of the IoT environments. On the one hand, by increasing the pool of microservices, we can improve the latency-throughput trade-off. On the other hand, we proposed deploying the fingerprint enrichment layer in conjunction with the protocol forwarder (proxy) component. This component can reduce network bandwidth consumption while simultaneously alleviating the load on the cluster (the data queue layer).
Moreover, this article identified and measured the negative impact of Java’s just-in-time compilation mechanism on stream processing microservice latencies. Fortunately, techniques exist to minimize the warm-up state: employing suitable JVM flags, selecting the appropriate compilation tier, applying ahead-of-time compilation, coordinating restoration at checkpoints, or utilizing ahead-of-time fake traffic mirroring. In our upcoming research, we intend to implement the latter technique and apply dynamic estimation [36].
Additionally, we validated the effectiveness of the computing continuum concept within our framework. The ledger peer arrangement utilized a mechanism similar to a local off-chain store, enhancing efficiency by positioning the peer and the microservice in close proximity within the Fog component. An intriguing avenue for future research is integrating the Kafka Streams API off-chain store handling mechanism with the on-chain peer, which could prove advantageous for DIL network deployments.
Furthermore, our research will thoroughly evaluate security and reliability risk assessments, considering a diverse range of threats across the application, network, and perception layers of the IoT system. We aim to establish a formal multi-level security model that addresses data confidentiality and integrity protection requirements. In addition, we will conduct experiments aimed at maximizing throughput by applying various configurations to the components defining each layer of our environment. We will also undertake more in-depth studies focusing on the value of information, data quality, and context dissemination. Additionally, we plan to deploy lightweight and delay-tolerant consensus algorithms within the distributed ledger layer, which employs a directed-acyclic graph structure for achieving consensus.
Although the proposed implementation of the streams microservice layer enables handling single record batches (individual messages in the data stream), our pluggable architecture supports integration with resource orchestration and automation platforms, allowing for the microservice dynamic allocation based on varying KPIs. Moreover, our framework offers significant potential as a foundation for various applications. For instance, it could be incorporated into systems designed to detect and neutralize UAVs. Such integration would empower civilian and military organizations to maintain comprehensive oversight of air defense operations, enabling IoT devices within smart city and army infrastructures to relay data regarding UAV locations securely via our solution. Another possible application involves establishing an ad hoc system for coordinating international efforts focused on HADR operations. In this scenario, our system would facilitate reliable data dissemination from CCTV cameras and health devices, such as SOS wristbands. This capability would significantly reduce response times for individuals needing assistance and enhance decision-making through improved information sharing.
Another possible use case for a data distribution system operating in a federated IoT ecosystem environment is a border monitoring system based mainly on distributed multispectral sensors (IoT devices), including thermal imaging, daylight, radar, and acoustic sensors. These devices are deployed stationary along the border line and on vehicles and UAVs. The military and border guards own drones equipped with these sensors. The data obtained from the sensors are sent to a distribution center, which, after initial processing, forwards them to five coordination centers corresponding to the individual services within the federation: the army, police, border guards, fire department, and emergency medical services. All of these services operate within the framework of the federation. Each coordination center should maintain at least two distributed registry nodes and a group of Kafka brokers. A group of servers with microservices and Kafka brokers filters and transmits the processed data in search of events such as the detection of persons at the border or an unidentified object crossing the border. Data about these events are forwarded to patrol vehicles and individual soldiers near the incident and, if necessary, to medical services. After processing, the data are also sent to a separate analytical server that aggregates operational and technical data for further analysis by the border guard commander.

Author Contributions

Conceptualization, J.S. and Z.Z.; methodology, Z.Z.; software, J.S.; validation, J.S.; investigation, J.S. and Z.Z.; resources, Z.Z.; data curation, J.S.; writing—original draft preparation, J.S. and Z.Z.; writing—review and editing, J.S. and Z.Z.; visualization, J.S.; supervision, Z.Z.; project administration, Z.Z.; funding acquisition, Z.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ABAC   Attribute-Based Access Control
ACL   Access Control List
AES   Advanced Encryption Standard
AWS   Amazon Web Services
AoR   Area of Responsibility
CC   Compute Continuum concept
CCTV   Closed-Circuit Television
COTS   Commercial Off-The-Shelf
CP   Cold Ledger Peer Phase
DIL   Disconnected, Intermittent, Limited
DLT   Distributed Ledger Technology
FMN   Federated Mission Networking
gRPC   Remote Procedure Calls
GUID   Globally Unique Identifier
HADR   Humanitarian Assistance and Disaster Relief
HMAC   Keyed-Hash Message Authentication Code
HP   Hot Ledger Peer Phase
IOs   Information Objects
IPC   CPU Instructions per Clock
IoT   Internet of Things
JIT   Just-In-Time
JVM   Java Virtual Machine
KPIs   Key Performance Indicators
MQTT   MQ Telemetry Transport
NATO   North Atlantic Treaty Organization
PC   Power Consumption
RPi3   Raspberry Pi 3
RPi5   Raspberry Pi 5
SLap   Standalone Laptop
SS   Java JIT Steady State
TCP   Transmission Control Protocol
TLS   Transport Layer Security
UAV   Unmanned Aerial Vehicle
UDP   User Datagram Protocol
VoI   Value of Information
WS   Java JIT Warm-Up State

References

  1. Johnsen, F.T.; Hauge, M. Interoperable, adaptable, information exchange in NATO coalition operations. J. Mil. Stud. 2022, 11, 49–62. [Google Scholar] [CrossRef]
  2. Zaccarini, M.; Cantelli, B.; Fazio, M.; Fornaciari, W.; Poltronieri, F.; Stefanelli, C.; Tortonesi, M. VOICE: Value-of-Information for Compute Continuum Ecosystems. In Proceedings of the 2024 27th Conference on Innovation in Clouds, Internet and Networks (ICIN), Paris, France, 11–14 March 2024; pp. 73–80. [Google Scholar] [CrossRef]
  3. Pióro, L.; Sychowiec, J.; Kanciak, K.; Zieliński, Z. Application of Attribute-Based Encryption in Military Internet of Things Environment. Sensors 2024, 24, 5863. [Google Scholar] [CrossRef]
  4. Hyperledger Fabric Documentation. Available online: https://hyperledger-fabric.readthedocs.io/ (accessed on 3 March 2025).
  5. Apache Kafka Streams API Documentation. Available online: https://kafka.apache.org/documentation/streams/ (accessed on 3 March 2025).
  6. Sarama Go Library. Available online: https://pkg.go.dev/github.com/shopify/sarama (accessed on 3 March 2025).
  7. Wang, X.; Zha, X.; Ni, W.; Liu, R.P.; Guo, Y.J.; Niu, X.; Zheng, K. Survey on blockchain for Internet of Things. Comput. Commun. 2019, 136, 10–29. [Google Scholar] [CrossRef]
  8. Kumar, R.; Khan, F.; Kadry, S.N.; Rho, S. A Survey on blockchain for industrial Internet of Things. Alex. Eng. J. 2021, 61, 6001–6022. [Google Scholar] [CrossRef]
  9. Alfandi, O.; Khanji, S.I.R.; Ahmad, L.; Khattak, A.M. A survey on boosting IoT security and privacy through blockchain. Clust. Comput. 2020, 24, 37–55. [Google Scholar] [CrossRef]
  10. Guo, S.; Wang, F.; Zhang, N.; Qi, F.; Song, Q.X. Master-slave chain based trusted cross-domain authentication mechanism in IoT. J. Netw. Comput. Appl. 2020, 172, 102812. [Google Scholar] [CrossRef]
  11. Xu, L.; Chen, L.; Gao, Z.; Fan, X.; Suh, T.; Shi, W.L. DIoTA: Decentralized-Ledger-Based Framework for Data Authenticity Protection in IoT Systems. IEEE Netw. 2020, 34, 38–46. [Google Scholar] [CrossRef]
  12. Khalid, U.; Asim, M.; Baker, T.; Hung, P.C.K.; Tariq, M.A.; Rafferty, L. A decentralized lightweight blockchain-based authentication mechanism for IoT systems. Clust. Comput. 2020, 23, 2067–2087. [Google Scholar] [CrossRef]
  13. Hu, V.C.; Ferraiolo, D.; Kuhn, D.; Schnitzer, A.; Sandlin, K.; Miller, R.; Scarfone, K. Guide to Attribute Based Access Control (ABAC) Definition and Considerations; U.S. Department of Commerce: Washington, DC, USA, 2019. [CrossRef]
  14. Song, H.; Tu, Z.; Qin, Y. Blockchain-Based Access Control and Behavior Regulation System for IoT. Sensors 2022, 22, 8339. [Google Scholar] [CrossRef]
  15. Lu, Y.; Feng, T.; Liu, C.; Zhang, W. A Blockchain and CP-ABE Based Access Control Scheme with Fine-Grained Revocation of Attributes in Cloud Health. Comput. Mater. Contin. 2024, 78, 2787–2811. [Google Scholar] [CrossRef]
  16. Sivanathan, A.; Gharakheili, H.H.; Loi, F.; Radford, A.; Wijenayake, C.; Vishwanath, A.; Sivaraman, V. Classifying IoT Devices in Smart Environments Using Network Traffic Characteristics. IEEE Trans. Mob. Comput. 2019, 18, 1745–1759. [Google Scholar] [CrossRef]
  17. Xu, Q.; Zheng, R.; Saad, W.; Han, Z. Device Fingerprinting in Wireless Networks: Challenges and Opportunities. IEEE Commun. Surv. Tutor. 2015, 18, 94–104. [Google Scholar] [CrossRef]
  18. Jagannath, A.; Jagannath, J.; Kumar, P.S.P.V. A Comprehensive Survey on Radio Frequency (RF) Fingerprinting: Traditional Approaches, Deep Learning, and Open Challenges. Comput. Netw. 2022, 219, 109455. [Google Scholar] [CrossRef]
  19. Jarosz, M.; Wrona, K.; Zieliński, Z. Distributed Ledger-Based Authentication and Authorization of IoT Devices in Federated Environments. Electronics 2024, 13, 3932. [Google Scholar] [CrossRef]
  20. Sanogo, L.; Alata, E.; Takacs, A.; Dragomirescu, D. Intrusion Detection System for IoT: Analysis of PSD Robustness. Sensors 2023, 23, 2353. [Google Scholar] [CrossRef]
  21. Chatterjee, B.; Das, D.; Maity, S.; Sen, S. RF-PUF: Enhancing IoT Security Through Authentication of Wireless Nodes Using In-Situ Machine Learning. IEEE Internet Things J. 2018, 6, 388–398. [Google Scholar] [CrossRef]
  22. Charyyev, B.; Gunes, M.H. IoT Traffic Flow Identification using Locality Sensitive Hashes. In Proceedings of the ICC 2020—2020 IEEE International Conference on Communications (ICC), Dublin, Ireland, 7–11 June 2020; pp. 1–6. [Google Scholar] [CrossRef]
  23. Neumann, C.; Heen, O.; Onno, S. An Empirical Study of Passive 802.11 Device Fingerprinting. In Proceedings of the 2012 32nd International Conference on Distributed Computing Systems Workshops, Macau, China, 18–21 June 2012; pp. 593–602. [Google Scholar] [CrossRef]
  24. Jansen, N.; Manso, M.; Toth, A.; Chan, K.S.; Bloebaum, T.H.; Johnsen, F.T. NATO Core Services profiling for Hybrid Tactical Networks—Results and Recommendations. In Proceedings of the 2021 International Conference on Military Communication and Information Systems (ICMCIS), Hague, The Netherlands, 4–5 May 2021; pp. 1–8. [Google Scholar] [CrossRef]
  25. Suri, N.; Fronteddu, R.; Cramer, E.; Breedy, M.R.; Marcus, K.M.; in’t Velt, R.; Nilsson, J.; Mantovani, M.; Campioni, L.; Poltronieri, F.; et al. Experimental Evaluation of Group Communications Protocols for Tactical Data Dissemination. In Proceedings of the MILCOM 2018—2018 IEEE Military Communications Conference (MILCOM), Los Angeles, CA, USA, 29–31 October 2018; pp. 133–139. [Google Scholar] [CrossRef]
  26. Rango, F.D.; Potrino, G.; Tropea, M.; Fazio, P. Energy-aware dynamic Internet of Things security system based on Elliptic Curve Cryptography and Message Queue Telemetry Transport protocol for mitigating Replay attacks. Pervasive Mob. Comput. 2020, 61, 101105. [Google Scholar] [CrossRef]
  27. Yang, M.; Margheri, A.; Hu, R.; Sassone, V. Differentially Private Data Sharing in a Cloud Federation with Blockchain. IEEE Cloud Comput. 2018, 5, 69–79. [Google Scholar] [CrossRef]
  28. Byabazaire, J.; O’Hare, G.M.P.; Collier, R.W.; Delaney, D.T. IoT Data Quality Assessment Framework Using Adaptive Weighted Estimation Fusion. Sensors 2023, 23, 5993. [Google Scholar] [CrossRef]
  29. Post-Quantum Cryptography PQC—Selected Algorithms: Digital Signature Algorithms. Available online: https://csrc.nist.gov/Projects/post-quantum-cryptography/selected-algorithms-2022 (accessed on 3 March 2025).
  30. Kul, S.; Tashiev, I.; Sentas, A.; Sayar, A. Event-Based Microservices with Apache Kafka Streams: A Real-Time Vehicle Detection System Based on Type, Color, and Speed Attributes. IEEE Access 2021, 9, 83137–83148. [Google Scholar] [CrossRef]
  31. Karimov, J.; Rabl, T.; Katsifodimos, A.; Samarev, R.S.; Heiskanen, H.; Markl, V. Benchmarking Distributed Stream Data Processing Systems. In Proceedings of the 2018 IEEE 34th International Conference on Data Engineering (ICDE), Paris, France, 16–19 April 2018. [Google Scholar] [CrossRef]
  32. van Dongen, G.; Van den Poel, D. Evaluation of Stream Processing Frameworks. IEEE Trans. Parallel Distrib. Syst. 2020, 31, 1845–1858. [Google Scholar] [CrossRef]
  33. Bartock, M.; Souppaya, M.; Savino, R.F.; Knoll, T.; Shetty, U.; Cherfaoui, M.; Yeluri, R.; Malhotra, A.; Banks, D.; Jordan, M.; et al. Hardware-Enabled Security: Enabling a Layered Approach to Platform Security for Cloud and Edge Computing Use Cases; U.S. Department of Commerce: Washington, DC, USA, 2021. [CrossRef]
  34. Nakaike, T.; Zhang, Q.; Ueda, Y.; Inagaki, T.; Ohara, M. Hyperledger Fabric Performance Characterization and Optimization Using GoLevelDB Benchmark. In Proceedings of the 2020 IEEE International Conference on Blockchain and Cryptocurrency (ICBC), Toronto, ON, Canada, 2–6 May 2020; pp. 1–9. [Google Scholar] [CrossRef]
  35. Swathi, P.; Venkatesan, M. Scalability improvement and analysis of permissioned-blockchain. ICT Express 2021, 7, 283–289. [Google Scholar] [CrossRef]
  36. Traini, L.; Cortellessa, V.; Pompeo, D.D.; Tucci, M. Towards effective assessment of steady state performance in Java software: Are we there yet? Empir. Softw. Eng. 2022, 28, 13. [Google Scholar] [CrossRef]
  37. Monitoring Linux with the Node Exporter. Available online: https://prometheus.io/docs/guides/node-exporter/ (accessed on 3 March 2025).
  38. Extra PMIC Features: RPi 4, RPi 5, and Compute Module 4. Available online: https://pip.raspberrypi.com/categories/685-whitepapers-app-notes/documents/RP-004340-WP/Extra-PMIC-features-on-Raspberry-Pi-4-and-Compute-Module-4.pdf (accessed on 3 March 2025).
  39. Kingman, J.F.C.; Atiyah, M.F. The single server queue in heavy traffic. Math. Proc. Camb. Philos. Soc. 1961, 57, 902–904. [Google Scholar] [CrossRef]
  40. Raspberry Pi Hardware—Power Supply: Typical Power Requirements. Available online: https://www.raspberrypi.com/documentation/computers/raspberry-pi.html#typical-power-requirements (accessed on 3 March 2025).
  41. Keval, H.U.; Sasse, M.A. "To Catch a Thief, You Need at Least 8 Frames per Second": The Impact of Frame Rates on User Performance in a CCTV Detection Task. In Proceedings of the ACM Multimedia, Vancouver, BC, Canada, 27–31 October 2008. [Google Scholar] [CrossRef]
  42. Bekaroo, G.; Santokhee, A. Power consumption of the Raspberry Pi: A comparative analysis. In Proceedings of the 2016 IEEE International Conference on Emerging Technologies and Innovative Business Practices for the Transformation of Societies (EmergiTech), Balaclava, Mauritius, 3–6 August 2016; pp. 361–366. [Google Scholar] [CrossRef]
Figure 1. Proposed framework general overview.
Figure 2. Proposed framework detailed overview.
Figure 3. Message enrichment within our experimental system.
Figure 6. AUTH_MSG Kafka structure.
Figure 7. DATA_MSG Kafka structure.
Figure 11. Setup I: An overview of the AWS cloud-based environment.
Figure 12. Setup II: An overview of the Raspberry Pi device-based environment.
Figure 13. Setup II: The enabled computing continuum concept.
Figure 14. Power Consumption visualization in the Grafana tool.
Figure 15. Message forwarding latencies on different platforms under test.
Figure 16. Java microservice latencies on different platforms under test: ledger hot peer phase.
Figure 17. Java microservice latencies on different platforms under test: ledger cold peer phase.
Figure 18. Go microservice latencies on different platforms under test: HP and CP ledger peer states.
Table 1. Types of attacks that can be attempted against the framework.

Perception Layer | Network Layer | Application Layer
DoS/DDoS | DoS/DDoS | DoS/DDoS
Spoofing | Spoofing | Spoofing
Cryptanalysis | Cryptanalysis | Cryptanalysis
Device capture | Eavesdropping | Storage attack
Malicious device | Replay attack | Malicious insider attack
Device tampering | Packet injection | Abuse of authority
Side-channel (timing) attacks | Session hijacking | Permissions modification by unauthorized users
Sybil attack | Man-in-the-middle attack | -
Table 2. Setup I: An overview of AWS cloud-based environment components.

Framework Layer | Hardware/Software Component | Description
Communication | Amazon Virtual Private Cloud (VPC) and PrivateLink interface (VPC Endpoint) | The multi-region link was set to connect two AWS regions that simulate geographical distances: the MSK region belongs to Org1 and the AMB region belongs to Org1 and Org2. Moreover, the VPC Endpoint enables communication between the streams microservice and the distributed ledger layer.
Publishers/Subscribers | Two Amazon Elastic Compute Cloud (EC2) virtual machines: t2.micro (vCPU: 1 and Memory: 1 GiB) | Virtual machines were deployed in the MSK region in two isolated (availability) zones to simulate the Org1 data stream producer and consumer.
Data Queue | Cluster of the Amazon Managed Streaming for Apache Kafka (ver. 2.8.1) consisting of three brokers: kafka.t3.small (vCPU: 2 and Memory: 2 GiB) | To ensure durability (fault tolerance), cluster brokers were deployed in separate MSK region zones. Furthermore, each broker has a default configuration with a single partition and a replication factor of three.
Streams Microservice | Single EC2 virtual machine: t2.micro (vCPU: 1 and Memory: 1 GiB) | A Java microservice was deployed in the MSK region to verify data streams.
Distributed Ledger | Amazon Managed Blockchain Starter Edition for Hyperledger Fabric (ver. 2.2) with two nodes (peers) per organization: bc.t3.small (vCPU: 2 and Memory: 2 GiB) | Ledger peers were deployed in the AMB region, where a single channel for identities was created within the blockchain network. Each member (Org1 and Org2) of the channel has two peers to ensure durability.
Table 3. Setup II: An overview of the Raspberry Pi device-based environment components.

Framework Layer | Hardware/Software Component | Description
Communication | Gigabit switch and an IEEE 802.11 router | At the Tactical level (unit area of responsibility, AoR), three zones (subunit command posts) were designated to simulate geographical distances.
Publishers/Subscribers | Three RPi3 (CPU: 1.4 GHz Arm Cortex A53 and Memory: 1 GiB) | Devices were deployed at each subunit AoR to be used as producers and consumers.
Data Queue | Apache Kafka cluster consisting of three brokers: RPi5 devices (CPU: 2.4 GHz Arm Cortex A76 and Memory: 8 GiB) | The cluster was deployed at the edge processing location. Furthermore, each broker was configured with a single partition and a replication factor of three.
Streams Microservice | Three different devices: RPi3 (CPU: 1.4 GHz Arm Cortex A53 and Memory: 1 GiB), RPi5 (CPU: 2.4 GHz Arm Cortex A76 and Memory: 8 GiB), and SLap (CPU: 3.3 GHz i5-12450H and Memory: 16 GB) | Resource-constrained devices (RPi3 and RPi5) were used to test Java and Go microservices. The SLap platform was deployed as a reference for microservice benchmarking. All devices were placed at the Fog processing location.
Distributed Ledger | Hyperledger Fabric Blockchain Network (ver. 2.2) with two nodes (peers) per organization: VMs (vCPU: 2 and Memory: 2 GiB) | Ledger peers were deployed on a server using virtualization technology (data center processing location).
Table 4. All scenarios configuration parameters.

Configuration Parameter | Description
linger.ms = 0 | The produce operation was set without artificial delay, and a record was sent immediately to the Kafka Cluster.
acks = all | Microservice reliability was configured to complete operations only after all in-sync replicas received the record and sent back an acknowledgment. This setting includes both publish and commit times, which adds latency overhead to our microservice.
replication-factor 3 | The replication factor of a single record was set to three. Fault-tolerance KPI.
num.replica.fetchers = 1 | The number of replica fetcher threads per source broker was set to the default value of one.
enable.idempotence = true | Idempotence was set. This enables the Kafka mechanism to identify and eliminate duplicate messages by comparing the unique sequence number of each record sent to a partition.
listeners | There was no separate listener for replication traffic on brokers and for client (producer/consumer) traffic.
fetch.min.bytes = 1 | The microservice record consumer was configured to operate without incurring any additional latency, while ensuring that the processing guarantee is established as exactly once. Upon submission of a fetch request by the microservice, a response is provided immediately when a single byte of data (record) becomes available from the Kafka Broker.
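For illustration, the client-side parameters in Table 4 map onto the standard Java client configuration roughly as follows. This is a sketch assuming the plain kafka-clients API; broker addresses and the consumer group name are placeholders.

```java
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public final class ScenarioConfig {

    static Properties producerProps() {
        Properties p = new Properties();
        p.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092,broker3:9092");
        p.put(ProducerConfig.LINGER_MS_CONFIG, 0);              // send immediately, no batching delay
        p.put(ProducerConfig.ACKS_CONFIG, "all");               // wait for all in-sync replicas
        p.put(ProducerConfig.ENABLE_IDEMPOTENCE_CONFIG, true);  // deduplicate via sequence numbers
        p.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        p.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        return p;
    }

    static Properties consumerProps() {
        Properties c = new Properties();
        c.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092,broker3:9092");
        c.put(ConsumerConfig.FETCH_MIN_BYTES_CONFIG, 1);        // respond as soon as one byte is ready
        c.put(ConsumerConfig.GROUP_ID_CONFIG, "streams-microservice");
        c.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        c.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        return c;
    }
}
```

Note that the replication factor and num.replica.fetchers are topic- and broker-level settings rather than client properties; the former is typically fixed at topic creation (e.g., `kafka-topics.sh --create ... --replication-factor 3`), the latter in the broker's server.properties.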
Table 7. Power Consumption: Idle state.

Device Name | CPU Idle (%) | RAM Usage (%) | PC (W)
SM | 98.6 | 9.8 | 2.12 (0.18)
B1 | 98.7 | 5.5 | 2.20 (0.14)
B2 | 98.2 | 5.6 | 2.25 (0.18)
B3 | 98.1 | 5.7 | 2.30 (0.16)
Table 8. Power Consumption: Running state.

Device Name | CPU Idle (%) | RAM Usage (%) | PC (W)
SM + CC | 98.1 | 20.6 | 2.15 (0.22)
B1 | 95.2 | 16.9 | 2.83 (0.51)
B2 | 94.8 | 17.1 | 2.71 (0.48)
B3 | 94.8 | 17.1 | 2.87 (0.49)
Table 9. Different ledger peer syncing states.

Sync State | CPU Idle (%) | RAM Usage (%) | PC (W)
No Sync | 98.1 | 20.6 | 2.15 (0.22)
Catch Up | 62.8 | 21.0 | 3.30 (0.72)
Concurrent | 97.1 | 21.0 | 2.18 (0.22)
Table 10. Setup I Scenario I: Processing-time latency metrics (in milliseconds).

Identity Count | Avg | Avg Dev | Min | Pop Std Dev | Confidence Intervals [α = 0.10; α = 0.05; α = 0.01] | Percentiles [p = 0.9; p = 0.95; p = 0.99]
10,000 | 38.3 (0.76) | 3.0 (0.00) | 32.7 (0.56) | 5.5 (0.70) | [0.2861; 0.3409; 0.448] | [44.3 (0.76); 47.7 (1.24); 56.7 (2.16)]
20,000 | 39.5 (1.20) | 4.0 (0.00) | 32.2 (1.28) | 6.6 (0.60) | [0.343; 0.409; 0.538] | [46.7 (1.90); 50.7 (1.70); 61.4 (2.00)]
35,000 | 39.9 (1.12) | 3.7 (0.42) | 32.6 (1.08) | 5.9 (0.54) | [0.307; 0.366; 0.481] | [46.5 (1.10); 50.5 (1.30); 60.2 (2.16)]
50,000 | 38.2 (1.28) | 3.3 (0.42) | 31.7 (1.30) | 5.5 (0.60) | [0.2861; 0.3409; 0.448] | [44.6 (1.72); 47.8 (2.00); 55.7 (2.70)]
100,000 | 38.6 (0.72) | 3.3 (0.42) | 32.0 (0.40) | 5.5 (0.50) | [0.2861; 0.3409; 0.448] | [44.7 (0.56); 47.8 (0.64); 55.8 (1.64)]
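For reference, the estimators behind these columns can be reproduced from raw per-message latencies in a few lines. The sketch below shows the average-deviation, population-standard-deviation, and nearest-rank percentile computations; the confidence intervals would additionally use the normal quantiles for the stated α levels.

```java
import java.util.Arrays;

public final class LatencyStats {

    // Mean absolute deviation from the mean, as in the "Avg Dev" column.
    static double avgDev(double[] samples) {
        double mean = Arrays.stream(samples).average().orElse(0.0);
        return Arrays.stream(samples).map(x -> Math.abs(x - mean)).average().orElse(0.0);
    }

    // Population standard deviation, as in the "Pop Std Dev" column.
    static double popStdDev(double[] samples) {
        double mean = Arrays.stream(samples).average().orElse(0.0);
        double sumSq = Arrays.stream(samples).map(x -> (x - mean) * (x - mean)).sum();
        return Math.sqrt(sumSq / samples.length);
    }

    // Nearest-rank percentile, e.g. p = 0.95 for the p95 column.
    static double percentile(double[] samples, double p) {
        double[] sorted = samples.clone();
        Arrays.sort(sorted);
        int rank = (int) Math.ceil(p * sorted.length) - 1;
        return sorted[Math.max(0, rank)];
    }
}
```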
Table 11. Time of data stream (1000 messages) retrieval by consumer (in seconds).

Identity Count | 10,000 | 20,000 | 35,000 | 50,000 | 100,000
Scenario I | 38.86 (0.64) | 39.95 (1.39) | 40.19 (0.98) | 38.59 (1.22) | 39.15 (0.61)
Scenario II | 20.42 (0.77) | 21.03 (1.20) | 21.15 (1.06) | 20.39 (1.12) | 20.06 (0.69)
Table 12. Setup II (RPi5), Scenario I: Processing-time latency metrics (in ms) for Java code.

Operation | Avg | Avg Dev | Min | Pop Std Dev | Confidence Intervals [α = 0.10; α = 0.05; α = 0.01] | Percentiles [p = 0.9; p = 0.95; p = 0.99]
Crypt. Prim. Init | 0.007 (0.0004) | 0.003 (0.0003) | 0.002 (0.0003) | 0.024 (0.0156) | [0.0004; 0.0005; 0.0006] | [0.010 (0.0005); 0.014 (0.0004); 0.027 (0.0005)]
DL Query | 7.20 (0.18) | 1.10 (0.19) | 5.29 (0.06) | 3.21 (2.66) | [0.0528; 0.0629; 0.0827] | [8.80 (0.33); 9.99 (0.43); 13.83 (0.76)]
HMAC Validation | 0.015 (0.0004) | 0.006 (0.0003) | 0.006 (0.0010) | 0.013 (0.0017) | [0.0002; 0.0003; 0.0003] | [0.022 (0.0004); 0.03 (0.0007); 0.062 (0.0016)]
AES Decryption | 0.057 (0.0017) | 0.027 (0.0005) | 0.024 (0.0037) | 0.061 (0.0100) | [0.001; 0.0012; 0.0016] | [0.098 (0.0034); 0.136 (0.0021); 0.244 (0.0041)]
Msg Forwarding | 0.082 (0.0021) | 0.044 (0.0021) | 0.027 (0.0038) | 0.262 (0.0435) | [0.0043; 0.0051; 0.0067] | [0.144 (0.0058); 0.193 (0.0046); 0.354 (0.0247)]
Full Processing | 7.38 (0.18) | 1.16 (0.18) | 5.38 (0.05) | 3.31 (2.64) | [0.0544; 0.0649; 0.0853] | [9.08 (0.32); 10.34 (0.41); 14.42 (0.74)]
Chaincode Exec (logs) | 1.19 (0.09) | 0.34 (0.14) | 0.00 (0.00) | 0.77 (0.19) | [0.0127; 0.0151; 0.0198] | [1.50 (0.60); 2.30 (0.48); 3.70 (0.84)]
gRPC Call (logs) | 2.55 (0.11) | 0.47 (0.11) | 1.57 (0.09) | 0.91 (0.16) | [0.015; 0.0178; 0.0234] | [3.28 (0.4); 3.89 (0.52); 5.59 (0.68)]
Table 13. Setup II (RPi5), Scenario I: Processing-time latency metrics (in ms) for Go code.

Operation | Avg | Avg Dev | Min | Pop Std Dev | Confidence Intervals [α = 0.10; α = 0.05; α = 0.01] | Percentiles [p = 0.9; p = 0.95; p = 0.99]
Crypt. Prim. Init | 0.004 (0.0002) | 0.002 (0.0003) | 0.002 (0.0001) | 0.006 (0.0038) | [0.0001; 0.0001; 0.0002] | [0.006 (0.0001); 0.007 (0.0003); 0.013 (0.002)]
DL Query | 4.24 (0.27) | 1.26 (0.16) | 2.50 (0.07) | 1.84 (0.28) | [0.0303; 0.0361; 0.0474] | [6.25 (0.39); 7.13 (0.52); 10.12 (1.03)]
HMAC Validation | 0.003 (0.0002) | 0.001 (0.0004) | 0.002 (0.0001) | 0.008 (0.0063) | [0.0001; 0.0002; 0.0002] | [0.004 (0.0001); 0.005 (0.0003); 0.011 (0.0019)]
AES Decryption | 0.005 (0.0006) | 0.002 (0.0010) | 0.003 (0.0001) | 0.015 (0.0131) | [0.0002; 0.0003; 0.0004] | [0.006 (0.0002); 0.008 (0.0006); 0.027 (0.0086)]
Msg Forwarding | 1.926 (0.0622) | 0.252 (0.0387) | 1.463 (0.0193) | 0.678 (0.1895) | [0.0112; 0.0133; 0.0175] | [2.221 (0.0706); 2.391 (0.1051); 3.691 (0.8053)]
Full Processing | 6.18 (0.29) | 1.33 (0.16) | 4.14 (0.11) | 1.91 (0.24) | [0.0314; 0.0374; 0.0492] | [8.30 (0.44); 9.32 (0.56); 13.00 (1.46)]
Chaincode Exec (logs) | 1.79 (0.21) | 1.05 (0.16) | 0.00 (0.00) | 1.48 (0.31) | [0.0243; 0.029; 0.0381] | [3.50 (0.50); 4.40 (0.48); 6.60 (0.72)]
gRPC Call (logs) | 3.24 (0.24) | 1.13 (0.16) | 1.73 (0.05) | 1.62 (0.3) | [0.0266; 0.0318; 0.0417] | [5.13 (0.31); 5.87 (0.48); 8.24 (0.67)]
Table 14. Setup II (RPi3), Scenario I: Processing-time latency metrics (in ms) for Java code.

Operation | Avg | Avg Dev | Min | Pop Std Dev | Confidence Intervals [α = 0.10; α = 0.05; α = 0.01] | Percentiles [p = 0.9; p = 0.95; p = 0.99]
Crypt. Prim. Init | 0.039 (0.0015) | 0.013 (0.0017) | 0.015 (0.0013) | 0.071 (0.017) | [0.0012; 0.0014; 0.0018] | [0.052 (0.0016); 0.067 (0.0018); 0.108 (0.0022)]
DL Query | 20.28 (0.29) | 3.70 (0.29) | 14.50 (0.13) | 11.28 (2.42) | [0.1856; 0.2211; 0.2906] | [25.49 (0.36); 29.99 (0.50); 42.24 (1.51)]
HMAC Validation | 0.078 (0.0012) | 0.019 (0.0014) | 0.046 (0.0063) | 0.063 (0.0288) | [0.001; 0.0012; 0.0016] | [0.097 (0.0019); 0.133 (0.0031); 0.236 (0.0035)]
AES Decryption | 0.280 (0.0061) | 0.109 (0.0068) | 0.148 (0.0100) | 0.321 (0.0980) | [0.0053; 0.0063; 0.0083] | [0.457 (0.0214); 0.586 (0.0152); 0.952 (0.0144)]
Msg Forwarding | 0.389 (0.0103) | 0.182 (0.0056) | 0.185 (0.0157) | 0.968 (0.0928) | [0.0159; 0.019; 0.0249] | [0.647 (0.0172); 0.866 (0.0355); 1.636 (0.1188)]
Full Processing | 21.14 (0.28) | 3.97 (0.28) | 15.02 (0.14) | 11.56 (2.37) | [0.1902; 0.2266; 0.2978] | [26.9 (0.33); 31.54 (0.45); 44.62 (1.55)]
Chaincode Exec (logs) | 1.14 (0.04) | 0.25 (0.07) | 0.00 (0.00) | 0.65 (0.21) | [0.0107; 0.0127; 0.0167] | [1.40 (0.48); 2.00 (0.00); 2.90 (0.72)]
gRPC Call (logs) | 2.58 (0.05) | 0.36 (0.05) | 1.37 (0.17) | 0.77 (0.18) | [0.0127; 0.0151; 0.0198] | [3.06 (0.07); 3.39 (0.15); 4.70 (0.58)]
Table 15. Setup II (RPi3), Scenario I: Processing-time latency metrics (in ms) for Go code.

Operation | Avg | Avg Dev | Min | Pop Std Dev | Confidence Intervals [α = 0.10; α = 0.05; α = 0.01] | Percentiles [p = 0.9; p = 0.95; p = 0.99]
Crypt. Prim. Init | 0.030 (0.0005) | 0.008 (0.0007) | 0.021 (0.0002) | 0.017 (0.0058) | [0.0003; 0.0003; 0.0004] | [0.043 (0.0013); 0.054 (0.0014); 0.085 (0.0072)]
DL Query | 5.10 (0.18) | 0.76 (0.15) | 3.72 (0.06) | 1.40 (0.40) | [0.023; 0.0274; 0.0361] | [6.14 (0.31); 7.08 (0.54); 10.49 (2.01)]
HMAC Validation | 0.030 (0.0009) | 0.008 (0.0012) | 0.023 (0.0001) | 0.023 (0.0071) | [0.0004; 0.0005; 0.0006] | [0.042 (0.0013); 0.052 (0.0035); 0.086 (0.0073)]
AES Decryption | 0.062 (0.0018) | 0.014 (0.0026) | 0.048 (0.0003) | 0.053 (0.0436) | [0.0009; 0.001; 0.0014] | [0.080 (0.0016); 0.091 (0.0023); 0.126 (0.006)]
Msg Forwarding | 2.591 (0.1124) | 0.303 (0.0225) | 1.947 (0.0196) | 0.703 (0.1376) | [0.0116; 0.0138; 0.0181] | [2.988 (0.1207); 3.204 (0.1204); 4.407 (0.3333)]
Full Processing | 7.86 (0.21) | 0.94 (0.14) | 6.04 (0.07) | 1.65 (0.37) | [0.0271; 0.0323; 0.0425] | [9.22 (0.33); 10.21 (0.62); 14.18 (2.01)]
Chaincode Exec (logs) | 1.20 (0.08) | 0.33 (0.11) | 0.00 (0.00) | 0.70 (0.19) | [0.0115; 0.0137; 0.018] | [1.64 (0.46); 2.18 (0.30); 3.45 (0.68)]
gRPC Call (logs) | 2.67 (0.15) | 0.44 (0.09) | 1.71 (0.07) | 0.84 (0.22) | [0.0138; 0.0165; 0.0216] | [3.32 (0.25); 3.87 (0.34); 5.61 (0.81)]
Table 16. Setup II (RPi5), Scenario II: Processing-time latency metrics (in ms) for Java code.

Operation | Avg | Avg Dev | Min | Pop Std Dev | Confidence Intervals [α = 0.10; α = 0.05; α = 0.01] | Percentiles [p = 0.9; p = 0.95; p = 0.99]
DL Query | 7.13 (0.19) | 1.10 (0.28) | 5.77 (0.02) | 1.40 (0.84) | [0.023; 0.0274; 0.0361] | [8.30 (0.05); 9.33 (0.09); 13.04 (0.20)]
Chaincode Exec (logs) | 0.79 (0.02) | 0.41 (0.01) | 0.00 (0.00) | 0.57 (0.01) | [0.0094; 0.0112; 0.0147] | [1.00 (0.00); 1.00 (0.00); 2.20 (0.32)]
gRPC Call (logs) | 2.67 (0.02) | 0.32 (0.02) | 2.04 (0.1) | 1.2 (0.8) | [0.0197; 0.0235; 0.0309] | [3.11 (0.03); 3.43 (0.06); 4.85 (0.11)]
Table 17. Setup II (RPi5), Scenario II: Processing-time latency metrics (in ms) for Go code.

Operation | Avg | Avg Dev | Min | Pop Std Dev | Confidence Intervals [α = 0.10; α = 0.05; α = 0.01] | Percentiles [p = 0.9; p = 0.95; p = 0.99]
DL Query | 3.23 (0.24) | 0.22 (0.06) | 2.83 (0.21) | 0.35 (0.06) | [0.0058; 0.0069; 0.009] | [3.59 (0.15); 3.79 (0.15); 4.51 (0.25)]
Chaincode Exec (logs) | 0.53 (0.26) | 0.35 (0.08) | 0.00 (0.00) | 0.43 (0.05) | [0.0071; 0.0084; 0.0111] | [1.00 (0.00); 1.00 (0.00); 1.10 (0.18)]
gRPC Call (logs) | 2.60 (0.20) | 0.17 (0.05) | 2.27 (0.18) | 0.27 (0.05) | [0.0044; 0.0053; 0.007] | [2.90 (0.13); 3.04 (0.12); 3.51 (0.18)]
Table 18. Cluster and Peer: Java.

Device Name | CPU Idle (%) | RAM Usage (%) | PC (W)
SM | 85.2 | 16.0 | 3.11 (0.63)
SM + CC | 60.1 | 31.6 | 3.88 (0.58)
B1 | 86.0 | 17.7 | 3.34 (0.68)
B2 | 83.3 | 17.8 | 3.32 (0.68)
B3 | 91.4 | 17.7 | 3.40 (0.66)
Table 19. Cluster and Peer: Go.

Device Name | CPU Idle (%) | RAM Usage (%) | PC (W)
SM | 96.1 | 11.0 | 2.27 (0.29)
SM + CC | 71.2 | 27.4 | 3.17 (0.59)
B1 | 86.4 | 17.9 | 3.29 (0.71)
B2 | 85.9 | 18.1 | 3.13 (0.66)
B3 | 92.4 | 18.0 | 3.21 (0.57)
Table 20. Benchmarking key findings: Setup I.

Setup I, Scenario I: Suitable for audiovisual stream processing
The average time for a consumer to read the verified data stream was approximately 39 s. In the context of audiovisual streams, the measured rate (1000 msg/39 s) corresponds to about 25 frames per second (fps). Notably, the work of [41] demonstrated that CCTV cameras require a rate of at least 8 fps for objects on video to be identified correctly (Table 10).

Setup I, Scenario II: Microcaching with off-chain data store usage
The results confirmed the suitability of a local off-chain data store as a microcaching mechanism for stream verification: with the store in use, the data stream read time was nearly halved (Table 11; a cache-aside sketch of the idea follows below).
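To make the microcaching mechanism concrete, the following cache-aside sketch checks a local off-chain store before falling back to a ledger query. The in-memory map, the method names, and the "ReadIdentity" chaincode function are illustrative assumptions, not the deployed implementation.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public final class IdentityCache {

    // Stand-in for the local off-chain data store.
    private final Map<String, String> offChainStore = new ConcurrentHashMap<>();

    // Returns the device fingerprint for a device ID, hitting the ledger only on a miss.
    String fingerprintFor(String deviceId) {
        return offChainStore.computeIfAbsent(deviceId, this::queryLedger);
    }

    // Placeholder for a Fabric Gateway chaincode query, e.g.
    // contract.evaluateTransaction("ReadIdentity", deviceId) in the Fabric Gateway SDK.
    private String queryLedger(String deviceId) {
        return "fingerprint-for-" + deviceId; // stub value for the sketch
    }
}
```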
Table 21. Benchmarking key findings: Setup II.

Setup II, Scenario I: Resource-constrained Java microservice benchmarking
The overall processing latency for the Java-based microservice on the RPi5 was 7.38 (±0.18) ms, approximately 1.62 ms longer than on the SLap platform. Notably, 98.7% of this processing time was associated with the DL query operation. The RPi3 microservice execution exhibited nearly three times the latency of the RPi5 (Table 5, Table 12, and Table 14).
Setup II, Scenario I: Resource-constrained Go microservice benchmarking
For the Go-based microservice execution, we observed that the RPi5 consistently outperformed both the SLap and the RPi3. The average time required to verify a single message on the RPi5 was 6.18 (±0.29) ms, of which 68.6% was attributed to the DL query and 31.2% to the message forwarding operation (Table 6, Table 13, and Table 15).
Setup II, Scenario I: The message forwarding operation (libraries) comparison
When the message forwarding configuration was applied uniformly, the Kafka Streams API outperformed the Sarama library. The latency discrepancies in stream forwarding can largely be attributed to the handling of synchronous operations and the wait time for acknowledgment responses. An asynchronous data flow would avoid this wait, but it adversely affects reliability and thus conflicts with the key performance indicators we established (Figure 15); the two modes are contrasted in the sketch below.
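The two forwarding modes differ only in whether the producer blocks on the broker acknowledgment. A minimal illustration using the standard Java client follows; the topic name and payload are placeholders.

```java
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public final class ForwardingModes {

    // Synchronous forwarding: blocks until the brokers acknowledge (with acks=all,
    // until all in-sync replicas have the record), so latency includes publish + commit.
    static void forwardSync(KafkaProducer<String, String> producer, String payload) throws Exception {
        producer.send(new ProducerRecord<>("verified-data", payload)).get();
    }

    // Asynchronous forwarding: returns immediately; failures surface only in the
    // callback, which is why this mode conflicts with the reliability KPI.
    static void forwardAsync(KafkaProducer<String, String> producer, String payload) {
        producer.send(new ProducerRecord<>("verified-data", payload),
                (metadata, exception) -> {
                    if (exception != null) exception.printStackTrace();
                });
    }
}
```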
Setup II, Scenario II: The computing continuum concept comparison
For cold and hot phases with the computing continuum enabled, the DL query operation exhibited the lowest latencies among all platforms, averaging 5.05 (±0.09) ms for CP and 3.23 (±0.24) ms for HP. Moreover, chaincode execution latency on the CC platform was less than half that of the other platforms. In contrast, gRPC call latencies remained largely consistent across the various platforms (Table 12 and Table 16; Figure 16 and Figure 17).
Setup II, Scenario II: The ledger peer phase comparison
The ledger peer phase, whether cold or hot, combined with JVM dynamic compilation and garbage collection, produces fluctuations in internal operation latencies during the microservice warm-up state. We observed particularly pronounced instability (deviations) in both chaincode execution and gRPC calls in tests that combined the warm-up state with the cold peer phase. Notably, chaincode execution latency averaged 9.11 (±3.04) ms, exceeding the gRPC call average of 8.16 (±0.13) ms, even though the chaincode executes within the gRPC call. This anomaly can be attributed to the durations extracted from log entries, where chaincode execution times are rounded to whole milliseconds. To mitigate warm-up effects, the duration of the warm-up state can be estimated statically with tools such as the Java Microbenchmark Harness; the drawback of this approach is its dependence on the researcher's coding experience, which can lead to inaccurate estimates. Alternatively, dynamic estimation methods can be employed, such as a sliding window that compares the most recent iterations [36]; this can effectively shorten performance testing while maintaining an acceptable accuracy level (Table 13 and Table 17; Figure 18). A sliding-window sketch follows below.
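A minimal sketch of the dynamic (sliding-window) estimation just mentioned, in the spirit of [36]: steady state is declared once the coefficient of variation over the most recent latencies drops below a threshold. The window size and threshold values are illustrative assumptions.

```java
import java.util.ArrayDeque;
import java.util.Deque;

public final class SteadyStateDetector {

    private final Deque<Double> window = new ArrayDeque<>();
    private final int windowSize;
    private final double cvThreshold;

    SteadyStateDetector(int windowSize, double cvThreshold) {
        this.windowSize = windowSize;    // e.g., the 300 most recent latencies
        this.cvThreshold = cvThreshold;  // e.g., 0.05 (5% relative variability)
    }

    // Feed one latency sample; returns true once the recent window looks stable.
    boolean offer(double latencyMs) {
        window.addLast(latencyMs);
        if (window.size() > windowSize) window.removeFirst();
        if (window.size() < windowSize) return false;
        double mean = window.stream().mapToDouble(Double::doubleValue).average().orElse(0.0);
        double var = window.stream()
                .mapToDouble(x -> (x - mean) * (x - mean)).sum() / window.size();
        return mean > 0 && Math.sqrt(var) / mean < cvThreshold;
    }
}
```

Once offer() first returns true, the warm-up samples collected so far can be discarded and measurement of the steady state can begin, shortening the overall benchmarking run.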
Setup II, Scenario III: Platform power consumption
Comparing our results with those reported by Bekaroo et al. [42], the PC of the RPi5 under test in the idle state was nearly identical (2.20 W). Furthermore, the results for the CC concept associated with the Java microservice are comparable to activities that require a persistent network connection and graphics rendering (3.80 W) (Table 7, Table 8, Table 9, Table 18, and Table 19).
Framework monitoring
As noted earlier, several factors can influence our environment's metrics, including the cluster configuration, resource usage, the producer/consumer application settings, and the communication medium used. Detailed monitoring of resource utilization and power consumption with the Prometheus environment integrated into our setup allowed us to watch for potential issues such as a CPU IPC decrease, queueing delays, and PC spikes; none of these events occurred during our experiments.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
