Latency, Throughput and Success Rate Evaluation
Our second experimental setup analyzes the performance of DDS communication in our proposed CTI sharing framework. Latency, throughput, and success rate are the key metrics used in this evaluation. Latency is the time it takes for a data sample to travel from the Publisher to the Subscriber. Throughput is the average number of data samples successfully delivered to the consumers per second. Success rate is the percentage of samples successfully received out of the total number of samples sent during a test execution. For this evaluation, we ran the prototype of our framework on two machines connected via a network switch.
Table 9 summarizes the technical specifications of the test environment.
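The three metrics above can be derived directly from per-sample send and receive timestamps. The following minimal sketch illustrates one way to compute them; the function and its inputs are hypothetical and do not represent our exact measurement tooling.

```python
# Minimal sketch: deriving latency, throughput, and success rate from
# per-sample timestamps (illustrative only; the actual tooling may differ).

def evaluate(send_times_us, recv_times_us, sent_count, payload_bytes):
    """send_times_us / recv_times_us: timestamps (microseconds) of the samples
    that were actually received, paired by sample index."""
    latencies = [r - s for s, r in zip(send_times_us, recv_times_us)]
    avg_latency_us = sum(latencies) / len(latencies)

    duration_s = (max(recv_times_us) - min(send_times_us)) / 1e6
    received = len(recv_times_us)
    throughput_msgs = received / duration_s                       # messages per second
    throughput_mbps = throughput_msgs * payload_bytes * 8 / 1e6   # megabits per second

    success_rate = 100.0 * received / sent_count                  # percent delivered
    return avg_latency_us, throughput_msgs, throughput_mbps, success_rate
```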
We configure the DDS QoS policies that are essential to CTI sharing. These policies govern how data is published, stored, and transmitted during the sharing process.
Table 10 outlines the QoS policies and parameter values selected for this evaluation. The policies are briefly described below, followed by a configuration sketch.
Partition: The Partition policy controls which DataWriters and DataReaders of a topic are allowed to exchange data. In CTI sharing, it enables logical grouping of CTI Providers and Consumers according to their organizations or trust groups. This grouping is achieved by assigning string-based identifiers, known as partitions, to Publishers and Subscribers. The policy is particularly useful when multiple organizations participate in CTI sharing, as it allows dynamic control over data visibility at the topic level. For instance, we can assign specific partitions to healthcare organizations, ensuring they only share and receive threat intelligence relevant to their sector. Similarly, industrial organizations can operate within their own partitions, maintaining a clear separation of data streams.
Durability: Specifies whether data published by a DataWriter is stored and made available to late-joining DataReaders. We can choose among several options: VOLATILE, which does not store data; TRANSIENT_LOCAL, which retains data in the DataWriter’s own memory for as long as it remains active; TRANSIENT, which leverages the Persistence Service to hold data in volatile memory; and PERSISTENT, which uses non-volatile storage via the Persistence Service. In our CTI sharing framework, we configure this policy to PERSISTENT. This configuration ensures that critical threat intelligence remains accessible to late-joining organizations, allowing them to retrieve previously published data. By leveraging the PERSISTENT setting, we enhance the resilience and reliability of our framework, ensuring that all participants have access to the complete threat intelligence data.
Reliability: Specifies whether data lost during transmission should be repaired by the system. We can choose between BEST_EFFORT, where samples are sent without delivery guarantees, and RELIABLE, which automatically retransmits any lost data. In our framework, we set this policy to RELIABLE to ensure that all threat intelligence data is successfully delivered to all participating organizations. This guarantees the completeness of shared data, even in the presence of network instability. By ensuring reliable delivery, we enhance the dependability of our CTI sharing framework and help organizations maintain a comprehensive understanding of potential threats, thereby improving their security posture.
History: Controls how much data is stored and how stored data is managed for a DataWriter or DataReader. It determines whether the system keeps all data samples or only the most recent ones, and it works in conjunction with the Resource Limits QoS Policy to define the actual memory allocation for the data samples. We configured the History QoS Policy to control how our system stores and manages threat intelligence data. This policy offers two modes. In KEEP_LAST, we retain only the most recent n samples per data instance, where n is set by the depth parameter. This mode suits applications that need only the latest updates. In KEEP_ALL, we keep every sample until it is explicitly acknowledged or removed, with overall storage capped by our Resource Limits QoS Policy. In our CTI sharing framework, we choose KEEP_ALL to ensure that all threat intelligence data is retained until it is successfully delivered to all subscribers.
Resource Limits: Configures the amount of memory that a DataWriter or DataReader may allocate to store data in local caches, also referred to as send or receive queues. It is set by the integer variables max_samples and max_samples_per_instance. The default value is defined as LENGTH_UNLIMITED, which means there is no predefined limit, and the middleware can allocate memory dynamically as needed. In our CTI sharing framework, we use the default LENGTH_UNLIMITED setting for this QoS policy. This ensures that the system can handle varying workloads and high volumes of threat intelligence, allowing the system to scale effectively.
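The sketch below shows how the QoS choices in Table 10 could be expressed programmatically, assuming the RTI Connext DDS Python API (rti.connextdds); the exact policy attributes, enum names, and constructor signatures may differ across DDS vendors and API versions, so this should be read as an illustrative configuration rather than our literal setup.

```python
# Illustrative QoS configuration (assumes rti.connextdds; names may vary by
# vendor/version). Reflects the settings described above: PERSISTENT
# durability, RELIABLE reliability, KEEP_ALL history, unlimited resources,
# and sector-based partitions.
import rti.connextdds as dds

participant = dds.DomainParticipant(0)            # single DDS domain (domain 0)

# Partition: group Publishers/Subscribers by sector or trust group.
pub_qos = dds.PublisherQos()
pub_qos.partition.name = ["healthcare"]           # e.g., healthcare trust group
publisher = dds.Publisher(participant, pub_qos)

writer_qos = dds.DataWriterQos()
writer_qos.durability.kind = dds.DurabilityKind.PERSISTENT   # available to late joiners
writer_qos.reliability.kind = dds.ReliabilityKind.RELIABLE   # repair lost samples
writer_qos.history.kind = dds.HistoryKind.KEEP_ALL           # retain until acknowledged
writer_qos.resource_limits.max_samples = dds.LENGTH_UNLIMITED
writer_qos.resource_limits.max_samples_per_instance = dds.LENGTH_UNLIMITED
```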
To simulate a real-world CTI sharing environment, we emulate the exchange of CTI information between the CTI Publisher and Subscribers within a single DDS domain. In each test iteration, the CTI provider publishes 1000 samples, which the CTI consumers receive and acknowledge. To ensure a meaningful performance comparison with the similar study [
18], we adopted message transmission rates of 50, 75, 100, and 125 messages per second. This choice was inspired by a report that hacking attempts expose about 44 records every second; the authors of that study used these send rates, with 1000 events per test iteration, to exceed this threat level and demonstrate that their framework can still function under it. However, while they used a fixed message size and performed their experiment on a single machine, our study extends the scope by also varying the message payload size across 32, 64, 128, 256, 512, 1024, 2048, 4096, and 8192 bytes. This provides a more comprehensive view of our framework’s scalability and performance under diverse operating conditions.
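One test iteration can be sketched as a rate-limited publishing loop; the snippet below is illustrative only, and `writer` stands in for a hypothetical DDS DataWriter bound to the CTI topic with the QoS described above.

```python
# Sketch of one test iteration: publish 1000 samples of a given payload size
# at a fixed send rate. `writer` is a hypothetical DDS DataWriter; pacing is
# done with simple sleep-based rate limiting.
import time

def run_iteration(writer, payload_size, send_rate, num_samples=1000):
    payload = bytes(payload_size)      # dummy CTI payload of the tested size
    interval = 1.0 / send_rate         # e.g., 1/50 s between samples at 50 msg/s
    start = time.perf_counter()
    for i in range(num_samples):
        writer.write({"seq": i, "data": payload, "ts_us": time.time_ns() // 1000})
        # Sleep until the next scheduled send time to hold the target rate.
        next_send = start + (i + 1) * interval
        time.sleep(max(0.0, next_send - time.perf_counter()))
```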
Table 11 presents the average latency, throughput, and success rate of transferring 1000 messages between the CTI provider and the CTI consumer. The Reliability QoS policy is set to RELIABLE to ensure that all CTI messages are reliably delivered to the CTI consumers.
To establish a performance baseline, we first evaluate our framework without security and then measure the impact of security on performance.
Figure 11 illustrates the average latency in microseconds observed for each send rate in relation to the message sizes without security. The figure shows that our framework consistently maintains latency below 1 millisecond across all test scenarios. However, latency increases steadily as message size and send rate grow. At the lowest send rate (50 msg/s), latency ranged from 779 µs (32-byte messages) to 824 µs (8192-byte messages). At the highest send rate (125 msg/s), latency values increased slightly, ranging from 844 µs to 995 µs. This minor increase indicates a modest impact of message size and send rate on latency and aligns with expectations, since larger messages require more time to transmit.
To understand how different security mechanisms influence the performance of our proposed CTI sharing framework, we evaluate it under varying security configurations. This assessment examines end-to-end latency across different message sizes for the following scenarios: No Protection, which employs no security mechanism; Integrity Only, which ensures message integrity through a message authentication code (MAC); Integrity plus Confidentiality, which combines the MAC with an encryption algorithm; and Full Security, which provides integrity, confidentiality, and origin authentication.
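For reference, these four scenarios map naturally onto the protection kinds defined in the OMG DDS Security specification. The sketch below shows one plausible mapping as it would appear in a governance document's topic rules; the exact governance settings used in our tests are not reproduced here, so this mapping should be read as an assumption.

```python
# Plausible mapping of the four evaluation scenarios to DDS Security
# protection kinds (values as defined by the OMG DDS Security spec);
# illustrative, not our literal governance configuration.
SECURITY_SCENARIOS = {
    "No Protection": {
        "data_protection_kind": "NONE",
        "metadata_protection_kind": "NONE",
    },
    "Integrity Only": {                       # MAC over the payload
        "data_protection_kind": "SIGN",
        "metadata_protection_kind": "SIGN",
    },
    "Integrity + Confidentiality": {          # AES-GCM encryption plus MAC
        "data_protection_kind": "ENCRYPT",
        "metadata_protection_kind": "ENCRYPT",
    },
    "Full Security (CIA)": {                  # adds per-receiver origin authentication
        "data_protection_kind": "ENCRYPT",
        "metadata_protection_kind": "ENCRYPT_WITH_ORIGIN_AUTHENTICATION",
    },
}
```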
Our results demonstrate that adding security features has a minimal impact on latency. As depicted in
Figure 12, No Protection latencies remain consistently low, increasing only modestly as message size grows. As security features are enabled incrementally, each additional mechanism (integrity checking, encryption, and origin authentication) introduces some additional overhead. For example, with confidentiality, integrity, and origin authentication enabled, latency increases by approximately 20% relative to the no-security baseline for large message sizes. This trend is expected given the computational cost of cryptographic operations; however, the overall system latency remains far below the commonly accepted threshold of 100 milliseconds for real-time systems [
70].
Having established the system’s baseline and security-protected performance, we turn our attention to the scalability of our proposed framework. Peer-to-peer CTI sharing involves dissemination to multiple CTI consumers. To emulate these conditions, we incrementally increase the number of subscribers, testing configurations with 1, 2, 4, 8, and 10 subscribers. In each scenario, we measure end-to-end latency for different message sizes.
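A fan-out test of this kind can be scripted by launching the consumer prototype multiple times before each publisher run; the harness below is a sketch, and `subscriber.py` together with its command-line flags are hypothetical placeholders rather than our actual consumer entry point.

```python
# Sketch of the fan-out test harness: launch N subscriber processes, then
# run one publisher iteration against them. `subscriber.py` and its CLI
# flags are hypothetical placeholders.
import subprocess
import sys

def launch_subscribers(count, payload_size):
    procs = []
    for i in range(count):
        procs.append(subprocess.Popen(
            [sys.executable, "subscriber.py",
             "--payload-size", str(payload_size), "--id", str(i)]))
    return procs

for n in (1, 2, 4, 8, 10):            # subscriber counts used in the scaling test
    subs = launch_subscribers(n, payload_size=8192)
    # ... run the publisher iteration and collect latency measurements here ...
    for p in subs:
        p.terminate()
```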
Figure 13 highlights our proposed framework’s ability to scale out to many CTI participants with minimal impact on system performance. For all tested message sizes at a send rate of 50 msg/s, adding subscribers leads to only a marginal increase in average latency. For instance, when scaling from a single subscriber to ten, the maximum latency increase observed is less than 10%, even at the largest message size.
Another important aspect of our results is the success rate, which remained consistently high at 100% across all testing scenarios, as shown in
Table 11. This indicates that DDS successfully delivered all messages, even at the largest message sizes and highest send rates. Our framework’s ability to consistently ensure reliable message delivery, without packet loss, is vital for CTI sharing, where even minor data loss can enable the successful execution of a cyberattack.
As seen in
Figure 14, throughput shows a clear relationship with both message size and send rate. At the lowest send rate, 50 messages per second (msg/s), throughput starts at 0.013 Mbps for 32-byte messages and increases linearly to 3.4 Mbps for 8192-byte messages. This trend becomes more prominent at higher send rates. For instance, at 75 msg/s, throughput reaches 4.9 Mbps, while at 100 msg/s and 125 msg/s, it further increases to 6.6 Mbps and 8.2 Mbps, respectively. From
Table 11, we can see that the throughput in messages per second consistently matched the targeted send rates across all the payload sizes. These results confirm that our proposed framework can handle increased data flow efficiently, providing scalable throughput as the message size and send rate increase.
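Ignoring RTPS and transport headers, these bit rates follow directly from the send rate R and the payload size S; for the highest configuration, a quick check gives:

\[
\text{Throughput} \approx R \times S \times 8
= 125\,\tfrac{\text{msg}}{\text{s}} \times 8192\,\text{B} \times 8\,\tfrac{\text{bit}}{\text{B}}
\approx 8.2\ \text{Mbps}.
\]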
Table 12 presents the CPU and memory consumption of the proposed framework under varying send rates, evaluated both with and without security features enabled. As observed, CPU usage without security slightly increases from 6.83% at 50 msg/s to 7.59% at 125 msg/s, reflecting the expected computational effort required to handle higher message throughput. Similarly, memory usage slightly increases from 84 MB at the lowest send rate to 89 MB at the highest send rate. With security enabled, the system incurs slightly higher resource overheads, which is expected with the added cost of cryptographic operations. CPU usage rises from 6.88% at 50 msg/s to 7.64% at 125 msg/s, while memory usage increases from 86 MB to 94 MB over the same range. Overall, our proposed framework demonstrates lightweight behavior in terms of both CPU and memory usage, which highlights its suitability for resource-constrained environments such as edge computing or IoT devices.
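Resource figures of this kind can be collected by periodically sampling the framework's process; the sketch below uses psutil as one plausible way to do so and is an illustrative approximation rather than our exact instrumentation.

```python
# Illustrative sketch of sampling CPU and memory during a test run using
# psutil (an assumption; our actual instrumentation may differ).
import time
import psutil

def sample_usage(pid, duration_s=20, interval_s=1.0):
    proc = psutil.Process(pid)
    cpu, rss = [], []
    proc.cpu_percent(None)                            # prime the CPU counter
    end = time.time() + duration_s
    while time.time() < end:
        time.sleep(interval_s)
        cpu.append(proc.cpu_percent(None))            # CPU % since the previous call
        rss.append(proc.memory_info().rss / 2**20)    # resident memory in MiB
    return sum(cpu) / len(cpu), max(rss)              # average CPU %, peak memory
```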
To ensure real-time performance, it is essential to consider not only the average latency but also the maximum latency observed, as this represents the worst-case scenario experienced by the system. The average latency reflects the typical behavior under normal operating conditions, whereas the minimum latency corresponds to the best-case scenario achievable by the middleware. However, because there is no guarantee that any particular run will capture the absolute best case, the minimum values do not always increase smoothly with payload size.
Figure 15 displays the minimum, average, and maximum latency measured for each message size under three different security settings: Integrity only, Integrity plus confidentiality, and Integrity plus confidentiality with origin authentication. For every message size, three lines are plotted per security level: dashed for minimum and maximum, and solid for average latency. The results show that adding additional security features consistently increases the average latency, which scales smoothly with message size due to serialization and cryptographic processing overhead. For example, in the most secure configuration (CIA), the mean latency rises from 822 µs at 32 bytes to 986 µs at 8192 bytes, representing a 20% increase as message size grows. In contrast, both the minimum and maximum latencies exhibit fluctuations rather than a monotonic trend, which is expected given the non-real-time operating system used for testing. These irregularities arise from operating system scheduling jitter, process preemption, and network stack behavior, rather than the DDS middleware itself. Despite these variations, the maximum latency remains bounded within a few milliseconds across all configurations, ensuring that the system can still support real-time communication requirements.
Figure 16 shows the relative percentage latency overhead introduced by different DDS Security configurations (Integrity Only, Integrity plus Confidentiality, and Integrity plus Confidentiality with Origin Authentication, CIA) compared to the baseline with no security, across a range of message sizes. The results reveal a clear progression in latency overhead as additional security mechanisms are enabled. Integrity protection alone adds a negligible cost, with overhead ranging from 0.13% to 1.7%, confirming that signing is lightweight and suitable for real-time deployments without noticeable performance penalties. Enabling confidentiality in addition to integrity introduces a measurable cost, with overhead increasing from 2.2% at 32 bytes to 6.9% at 8192 bytes, reflecting the block-based nature of AES encryption, whose computational cost scales with payload size. The CIA configuration introduces the highest overhead, rising from 5.5% at 32 bytes to nearly 20% at 8192 bytes, due to the per-receiver MACs and an additional 20 bytes per message, which increase both message size and cryptographic computation. Overall, while stronger security features increase latency, the overhead remains within acceptable limits for real-time CTI sharing; where the lowest latency is required, Integrity Only or Integrity plus Confidentiality provides an effective balance between performance and protection.
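The overhead percentages in Figure 16 are computed relative to the no-security baseline at the same message size; for example, using the 8192-byte latencies reported above (986 µs with full CIA protection versus an 824 µs baseline):

\[
\text{Overhead}(\%) = \frac{L_{\text{secure}} - L_{\text{baseline}}}{L_{\text{baseline}}} \times 100
= \frac{986 - 824}{824} \times 100 \approx 19.7\%.
\]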
When comparing our proposed CTI sharing framework with existing research [
18] that utilizes blockchain, our proposed framework outperforms it in terms of latency, throughput, and success rate.
Table 13 shows that our proposed framework is superior in terms of latency, success rate, and throughput. Both frameworks were tested under similar conditions, using identical message send rates of 50, 75, 100, and 125 messages per second, and a fixed workload of 1000 events per test iteration. While other researchers employ blockchain in their framework, our proposed framework leverages DDS Secure with RTPS protocol to provide an effective and efficient CTI sharing framework.
The blockchain framework’s evaluation was conducted on a single machine running Ubuntu 20.04.4 LTS, equipped with an Intel Core i7-6700HQ processor (3.5 GHz, 4 cores) and 16 GB RAM. As reported in the blockchain-based CTI framework evaluation, the latency per message consistently exceeded 3 s, regardless of the send rate. In contrast, our framework achieved latencies in the range of 822 to 1013 microseconds across all payload sizes and send rates, a three-order-of-magnitude improvement in delivery time. This gap is critical in CTI sharing, where such delays can render the shared intelligence obsolete. Throughput further reinforces this contrast. The blockchain-based framework, constrained by consensus and queuing overhead, peaks at just 0.32 messages per second regardless of the configured send rate. In comparison, our DDS-based framework achieves full delivery of all published messages, scaling linearly with the input rate as shown in
Figure 17. The advantage of the RELIABLE QoS policy in our proposed framework is evident when we look at the success rate results in
Table 11. While the blockchain-based framework demonstrates relatively high success rates ranging between 96% and 99%, our proposed DDS-based framework consistently achieves a 100% success rate across all testing scenarios, regardless of message size or transmission rate. This highlights the effectiveness of DDS’s reliability mechanisms in ensuring successful delivery of all CTI information to the relevant consumers.
Figure 18 illustrates the CPU usage observed at increasing message send rates for both frameworks. As shown in the figure, the blockchain-based framework exhibits a higher CPU consumption, starting at approximately 34% and rising to nearly 37% as the send rate approaches 120 messages per second. In contrast, our proposed framework demonstrates remarkable efficiency, maintaining CPU usage consistently below 8% across all tested rates.
This substantial gap in processor demand reflects the inherent computational overhead introduced by consensus algorithms and transaction validation in blockchain systems, which is not the case in DDS architecture. A similar pattern emerges in memory utilization as seen in
Table 13. Our proposed framework demonstrates a significant reduction in memory consumption. Notably, in Pahlevan & Ionita [
18], memory usage was reported in ‘GiB’ (e.g., 1696.54 GiB), although their testbed machine had only 16 GB RAM. These values therefore correspond to megabytes (MB), not gibibytes, indicating a likely unit mislabeling in the original paper. We therefore report them in MB for consistency. This substantial decrease in resource consumption not only underlines the scalability of our proposed framework, but also affirms its suitability for deployment in resource-constrained devices.
In addition to the detailed comparison with the blockchain-based framework [
18], we extend our comparison to other comparable solutions, including another blockchain-based solution proposed by Hajizadeh et al. [
17] and the TAXII-based system in [
22]. To ensure fairness, we restrict our comparison to overlapping workloads where comparable metrics are reported, specifically at 50 and 100 msg/s send rates.
Table 14 summarizes this horizontal comparison.
Notably, the blockchain system by Hajizadeh et al. [
17] was evaluated using a Linux machine running Ubuntu 16.04, equipped with an Intel i5 CPU at 2.40 GHz and 8 GB of RAM. Both Kathara and Hyperledger Fabric v1.1 were deployed within a single virtual machine configured with 4 vCPUs and 4 GB of RAM. For the TAXII-based system in [
22], the evaluation setup included a Windows 10 Enterprise Edition (build 17134) and a Raspbian system; however, detailed hardware specifications such as processor or RAM were not reported. At both send rates, our proposed DDS-based framework sustains the full workload with sub-millisecond latency and a 100% success rate. In contrast, the blockchain framework by Hajizadeh et al. [
17] demonstrates better throughput than the blockchain solution in [
18], reaching up to 49.7 msg/s at a 50 msg/s send rate, but fails to scale effectively. When the send rate is doubled to 100 msg/s, throughput only increases to 59.7 msg/s while average latency rises to 10 s and maximum latency approaches 36 s, highlighting its limited scalability. Similarly, the TAXII-based solution achieves average transmission delays between 48 and 72 ms, which is superior to blockchain approaches but still demonstrates higher latency than DDS. These results reinforce the architectural benefits of DDS middleware. Unlike blockchain systems that are constrained by consensus mechanisms or TAXII’s centralized bottlenecks, DDS leverages the Real-time Publish-Subscribe (RTPS) protocol with fine-grained QoS policies, which provides scalable and efficient real-time communication among distributed systems [
71,
72,
73].
When we look at the throughput result in
Figure 14, we can see that the highest throughput we achieved, at a send rate of 125 msg/s and an 8192-byte message size, is 8.2 Mbps. A recent study [
74] reports that approximately 44 records are exposed to hacking attacks per second.
Table 11 shows that our framework can send 125 messages per second, demonstrating its ability to reliably handle message volumes nearly three times the current threat level. However, the achieved throughput of 8.2 Mbps is far below the theoretical capacity of our testbed, given that both the network interface card (NIC) and the Ethernet switch support 1 Gbps. This gap arises because the earlier experiments were conducted with a fixed batch of 1000 events and controlled send rates (50, 75, 100, and 125 msg/s) for comparison with existing solutions. To determine how close our framework can come to the 1 Gbps limit, we conducted an additional stress test to evaluate its maximum achievable throughput.
In this test, the system was allowed to send messages continuously for 20 s without imposing a send-rate limit or a fixed number of messages. To further investigate the advantages of DDS in our proposed framework, we compared its performance when using DDS against socket-based communication. Looking at
Figure 19, we can see that for smaller message sizes, ranging from 32 to 256 bytes, the average number of messages per second sent by our framework with DDS is significantly higher than with sockets. When message sizes are small (32–256 B), the limiting factor is not the network bandwidth but message-handling overhead. DDS is optimized with efficient serialization, enabling it to send a large number of small messages per second. In contrast, socket-based communication has a less optimized data path and incurs more per-message overhead, which is why the number of messages sent via sockets is much lower than what DDS achieves at small message sizes. At larger payload sizes (1024–8192 bytes), the gap narrows as throughput becomes bounded by available bandwidth rather than message-handling efficiency, making the message rates of DDS and sockets nearly the same.
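For reference, the socket side of this stress test can be sketched as an unthrottled 20-second send loop; the use of UDP, and the address, port, and framing shown below, are illustrative assumptions rather than our exact implementation.

```python
# Sketch of the socket-based baseline in the stress test: send as many
# messages of a given size as possible for 20 s. UDP, address, port, and
# framing are illustrative assumptions.
import socket
import time

def socket_stress(payload_size, host="192.168.1.20", port=5000, duration_s=20):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = bytes(payload_size)
    sent = 0
    end = time.perf_counter() + duration_s
    while time.perf_counter() < end:
        sock.sendto(payload, (host, port))    # one message per datagram, no pacing
        sent += 1
    sock.close()
    return sent / duration_s                  # achieved messages per second
```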
Figure 20 further highlights throughput trends and sample loss. For small payloads (32–256 bytes), our framework with DDS achieves substantially higher throughput due to batching, which aggregates multiple small samples into a single packet, thereby reducing transmission overhead. At larger payload sizes of 512 to 8192 bytes, the throughput advantage of DDS starts to shrink and sockets begin to achieve performance closer to DDS. This occurs because, as message size increases, the number of messages that DDS can aggregate into a single packet decreases due to bandwidth limitations, thereby reducing the efficiency gains of batching. At the largest payload (8192 bytes), DDS nearly saturates the available bandwidth, reaching approximately 95% bandwidth utilization, the maximum that the NICs support. By contrast, socket throughput begins to decline and exhibits packet loss, with 83 dropped samples at 512 bytes and an additional 49 at 8192 bytes.
While sockets achieve throughput relatively close to DDS at larger payloads, it is important to note that they lack several essential features that DDS provides for effective and efficient CTI sharing. Unlike DDS, socket-based communication lacks discovery mechanisms (connections must be manually established with each consumer), reliability and durability QoS settings (lost CTI messages cannot be repaired), and a real-time publish-subscribe model for automated and scalable intelligence dissemination.
Although our experimental evaluation was limited to two physical machines to establish a controlled performance baseline, the proposed DDS-based CTI sharing framework is designed for deployment in larger, real-world environments. DDS natively supports large participant counts through its decentralized publish-subscribe model, and federation across organizational or geographic boundaries can be achieved using RTI Routing Service to bridge domains and subnets and RTI Cloud Discovery Service to enable discovery where multicast is unavailable; partitions further help scope and manage high fan-out data flows. While we used Tkinter as a lightweight prototype interface for administrator review during testing, this was only to validate the sanitization workflow. In a production deployment, the review step would be implemented through scalable solutions such as a REST-based API or a web dashboard, which are better suited for enterprise-scale operation.
To study scalability beyond our physical testbed, the framework can be emulated using containerized or virtualized nodes that instantiate large numbers of publishers and subscribers. Such simulation approaches are common in DDS-intensive domains such as aerospace and autonomous systems, where hundreds of participants are validated in controlled testbeds before field deployment. A full-scale benchmark of CTI sharing across organizational domains is left as future work, but the architectural properties of DDS and the availability of federation mechanisms such as Routing Service, Cloud Discovery Service, and the Partition QoS policy provide confidence in the framework’s scalability for real-world CTI sharing.