Article

(H-DIR)2: A Scalable Entropy-Based Framework for Anomaly Detection and Cybersecurity in Cloud IoT Data Centers

Department of Theoretical and Applied Sciences, Università degli Studi dell’Insubria, 21100 Varese, Italy
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2025, 25(15), 4841; https://doi.org/10.3390/s25154841
Submission received: 20 May 2025 / Revised: 27 July 2025 / Accepted: 29 July 2025 / Published: 6 August 2025
(This article belongs to the Special Issue Privacy and Cybersecurity in IoT-Based Applications)

Abstract

Modern cloud-based Internet of Things (IoT) infrastructures face increasingly sophisticated and diverse cyber threats that challenge traditional detection systems in terms of scalability, adaptability, and explainability. In this paper, we present (H-DIR)2, a hybrid entropy-based framework designed to detect and mitigate anomalies in large-scale heterogeneous networks. The framework combines Shannon entropy analysis with Associated Random Neural Networks (ARNNs) and integrates semantic reasoning through RDF/SPARQL, all embedded within a distributed Apache Spark 3.5.0 pipeline. We validate (H-DIR)2 across three critical attack scenarios—SYN Flood (TCP), DAO-DIO (RPL), and NTP amplification (UDP)—using real-world datasets. The system achieves a mean detection latency of 247 ms and an AUC of 0.978 for SYN floods. For DAO-DIO manipulations, it increases the packet delivery ratio from 81.2% to 96.4% (p < 0.01), and for NTP amplification, it reduces the peak load by 88%. The framework achieves vertical scalability across millions of endpoints and horizontal scalability on datasets exceeding 10 TB. All code, datasets, and Docker images are provided to ensure full reproducibility. By coupling adaptive neural inference with semantic explainability, (H-DIR)2 offers a transparent and scalable solution for cloud–IoT cybersecurity, establishing a robust baseline for future developments in edge-aware and zero-day threat detection.

1. Introduction

As cloud–IoT ecosystems continue to expand in smart cities, e-health, industrial systems, and defense applications, they face ever more advanced cyber threats. These threats include low-rate denial-of-service (LDoS) attacks, protocol-level manipulations, and semantic evasions that exploit architectural heterogeneity and dynamic topologies [1,2]. Traditional detection methods—signature-based, rule-based, or anomaly-based—often fail to generalize beyond known patterns, particularly where real-time telemetry, heterogeneous devices, and distributed control converge within hybrid IoT infrastructures [3].
In this context, we introduce a novel hybrid framework called (H-DIR)2 (Hybrid Dynamic Information Retrieval and Risk), which is designed to detect, explain, and mitigate cyber anomalies in complex, large-scale IoT infrastructures. The proposed system integrates three analytical dimensions: statistical entropy analysis, predictive modeling via Associated Random Neural Networks (ARNNs), and semantic inference using RDF ontologies and SPARQL queries. Each layer contributes to a unified pipeline that ensures transparency, adaptability, and real-time responsiveness to evolving threats. A comprehensive overview of the (H-DIR)2 pipeline is provided in Table 1, which outlines the analytical, semantic, and computational layers that integrate entropy signals with both neural and symbolic reasoning.
Although Shannon entropy has been widely used in network anomaly detection [4,5], its integration with deep learning and semantic technologies remains largely underexplored. Recent advances in hybrid threat modeling, such as the Ψ-Risk framework, highlight the need to go beyond purely numerical detection outputs. By coupling entropy-driven neural inference with semantic representations in RDF, these architectures enable explainable, persistent, and interoperable anomaly reasoning. For instance, a prediction like P_attack = 0.91 can be contextualized as an RDF triple (e.g., Node_27, hasEntropySpike, “0.15”) and queried using SPARQL to support human-readable diagnostics and automated policy actions. This conceptual layer underlies the bidirectional semantic–neural feedback loop implemented in (H-DIR)2. Similarly, while the potential of the ARNN to capture the dynamic behavior of IoT networks has been acknowledged [6], its practical deployment is still constrained by the absence of structured knowledge representation. Our work bridges this gap by embedding entropy signals into semantic triples that are then linked to neural activations for decision making and risk propagation.
The (H-DIR)2 architecture has been deployed on a Spark-based distributed infrastructure and validated across three real-world attack scenarios: TCP SYN flooding, RPL DAO-DIO abuse, and NTP amplification. These scenarios span three major protocol families—TCP, IPv6-RPL, and UDP—demonstrating the framework’s flexibility in detecting both volumetric and protocol-specific anomalies.
To our knowledge, this is the first framework to unify entropy-based anomaly scoring, neural prediction, and semantic inference in a fully distributed, scalable, and reproducible manner. All code, datasets, and configurations have been released publicly to enable full replication and facilitate further experimentation. For clarity, a summary of the (H-DIR)2 architecture is provided in Box 1.
Box 1. Overview of the (H-DIR)2 Architecture.
(H-DIR)2 introduces a framework that integrates entropy-based anomaly detection, adaptive random neural modeling, and semantic reasoning via RDF/SPARQL into a fully distributed, composable, and reproducible pipeline.
To the best of our knowledge, no existing system has yet combined these three analytical layers within a single coherent infrastructure. Previous approaches often lack semantic explainability or neglect real-time adaptability when deployed in hybrid IoT environments.
Our framework implements a bidirectional coupling between symbolic and sub-symbolic inference, achieving the following:
  • Low-latency detection (<250 ms) through Spark-based entropy monitoring.
  • Explainable mitigation by propagating neural outputs into semantic RDF graphs.
  • Protocol generalization validated on TCP, RPL, and UDP/NTP streams from public datasets.
The core scientific contributions are as follows:
1. A formally defined six-stage pipeline, scalable and algebraically composable.
2. A reproducible implementation using Docker and JupyterLab for open experimentation.
3. A semantic–neural feedback loop supporting cyber policy tracing and automated risk response.
The paper is structured as follows: Section 2 presents related work; Section 3 details the architecture and methodology; Section 4 reports experimental validation and discusses the explainability and deployment potential of the framework; and Section 5 concludes with limitations and future directions.
Dataset and statistical rationale: Our analysis is based on a telemetry corpus that aggregates the following:
(i) the CIC-DDoS2019 trace for TCP-level floods [7];
(ii) the Data Port DAO-DIO routing-manipulation dataset [11];
(iii) the Kitsune NTP-amplification subset [12], for a total of n = 1.2 × 10⁴ labeled events. We report UDP amplification (50.3%), TCP-based (30.8%), SYN Flood (16.3%), and residual unknown (2.6%).
Applying Wilson’s 95% confidence interval [13] yields a margin of ±1.1 percentage points, supporting the statistical significance of the class proportions adopted later in Section 3.1.
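The interval itself is straightforward to verify. The following minimal Python sketch (illustrative, not part of the released pipeline) recomputes the Wilson 95% bound for the UDP-amplification share reported above:
import math

def wilson_ci(k: int, n: int, z: float = 1.96):
    # Wilson score interval for a binomial proportion k/n at confidence z.
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# UDP amplification: 50.3% of n = 1.2 x 10^4 labeled events.
lo, hi = wilson_ci(round(0.503 * 12_000), 12_000)
print(f"95% Wilson CI: [{lo:.3%}, {hi:.3%}]")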

2. Related Work

Traditional countermeasures, such as firewalls, signature-based IDS, and heuristic rule sets, often struggle to keep up with the scale, diversity, and dynamism of modern cloud IoT deployments. Recent studies have shown that methods based solely on static thresholds or signature matching fail to detect zero-day attacks and are often ineffective in the face of protocol-layer manipulation and rapidly evolving traffic patterns [3,14]. Furthermore, advanced persistent threats (APTs) and large-scale DDoS campaigns are particularly dangerous for constrained IoT devices, which cannot perform high-load cryptographic computations [1]. To address these limitations, recent research has explored entropy-based anomaly detectors [4], machine learning pipelines [15], and big data analytics on streaming frameworks [8] as promising alternatives. However, few contributions have successfully integrated these techniques into a coherent, horizontally and vertically scalable architecture that can operate across fog, edge, and cloud layers. Building on policy-based enforcement approaches, recent efforts have introduced reasoning engines and context-sensitive rules to IoT nodes. For example, the RDF/SPARQL layer of the (H-DIR)2 framework appends predicates such as hasAccessLevel and isInSecureRegion to each triple. These semantic triggers enable localized quarantine of high-risk flows and, when coupled with the ARNN risk score, support adaptive and region-aware anomaly mitigation. Sicari et al. [2] proposed a taxonomy of 5G IoT threats but identified a gap in coordinated detection and mitigation across the edge, fog, and cloud domains. The open-source prototype (H-DIR)2, presented here as a six-stage entropy–ARNN pipeline, extends this vision by integrating semantic scoring with sub-second detection and automated mitigation. Section 3 details the architecture, while Section 4 validates its performance and scalability.

Overview of Targeted Cyber Attacks

Modern cloud–IoT infrastructures are increasingly vulnerable to sophisticated cyber threats that exploit weaknesses at various levels of the communication stack. To reflect this heterogeneity, we classify cyber-attacks into three representative categories that span the transport, network, and application layers. This taxonomy is grounded in attack vectors commonly observed in practice, including distributed denial-of-service (DDoS) floods, semantic manipulations of routing protocols, and amplification-based reflection attacks.
These categories provide a structured basis to evaluate the detection capabilities and mitigation strategies implemented by the proposed (H-DIR)2 architecture.
As previously discussed, we focus our evaluation on three concrete attack types that collectively span the transport, network, and application domains of cloud–IoT systems.
TCP SYN Flooding: targeting the transport layer through high-frequency flag spoofing.
DAO-DIO Routing Manipulation: exploiting the RPL control plane in IPv6-based IoT networks.
UDP/NTP Amplification: using open UDP services to induce bandwidth amplification and overload targets.
Each attack exhibits a distinct combination of protocol exposure, entropy behavior, and mitigation strategy. Table 2 summarizes these attributes, aligning each attack vector with its corresponding detection signal and mitigation strategy within the (H-DIR)2 pipeline. Section 4.1, Section 4.2 and Section 4.3 provide a detailed analysis of these scenarios.

3. Architecture and Methodology

3.1. Simulation Pipeline: Formal (H-DIR)2 Workflow

The (H-DIR)2 framework builds upon the simulation pipeline T [16]. Let (Ω, ℱ, P) be the measurable space of raw network events, and let 𝒢ₜ = (V, Wₜ) denote the weighted attack graph at discrete time t. The end-to-end workflow consists of six deterministic and composable operators, listed as follows:
T := O₆ ∘ O₅ ∘ O₄ ∘ O₃ ∘ O₂ ∘ O₁ : Ω → 𝒢
The logical chain in T follows the paradigm of RDF stream processing [17], distributed micro-batch analytics via Apache Spark [8], and the semantically structured ARNN-based prediction and mitigation core [9].
The six operators are formally described as follows: O₁. RDF serialization, O₁ : Ω → 𝕋₀. Transforms raw network events into RDF triples, encoding ontology-defined attributes and relationships.
Support tools for SPARQL interpretation are illustrated in Box 2.
O₂. Entropy-based selection, O₂ : 𝕋₀ → S(Δt). Applies entropy-aware filtering over streaming windows Δt, identifying significant deviations for further analysis [18].
O₃. Vectorization, O₃ : S(Δt) → xₜ. Converts RDF snapshots into compact feature vectors for input to the neural model.
O₄. ARNN core, O₄ : xₜ ↦ (aₜ₊₁, Wₜ₊₁). Generates anomaly scores and updates the prediction model state.
O₅. Semantic graph injection, O₅ : aₜ ↦ 𝒢ₜ. Serializes the output scores as RDF graphs, enabling explainable decision support.
O₆. Dynamic update, O₆ : 𝒢ₜ ↦ 𝕋ₜ₊₁. Closes the observation–prediction loop and updates the run-time knowledge base.
Box 2. Analyst-Facing Support for SPARQL Reasoning.
To enhance the interpretability of SPARQL rules for human analysts, the (H-DIR)2 framework integrates semantically grounded templates and contextual labels. Each SPARQL rule is specifically structured to align with a human-readable ontology, allowing domain experts to trace alert conditions such as excessive entropy deviation or abnormal propagation within a structured reasoning context.
While the expressiveness of SPARQL enables precise anomaly attribution, its syntactic complexity may pose a barrier for nontechnical users. To mitigate this, future development will include visual rule editors and explanation interfaces that translate queries into natural language statements or ontology-driven dashboards, bridging the gap between semantic formalism and operational usability.
Each operator in T is total and deterministic, ensuring reproducibility and allowing for formal reasoning on convergence properties and computational complexity. Figure 1 visualizes the full (H-DIR)2 pipeline and the interdependence of its analytical, semantic, and learning components. The notation used in this section follows the formalism adopted in Table 3, which provides a concise legend of the main mathematical symbols employed in the (H-DIR)2 simulation pipeline.
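To make this composability concrete, the following Python sketch chains six stub operators by plain function composition; the operator bodies are placeholders (the released Spark implementation differs), but the totality and determinism of T are preserved by construction:
from functools import reduce
from typing import Any, Callable

Operator = Callable[[Any], Any]

def compose(*ops: Operator) -> Operator:
    # Left-to-right composition: compose(o1, o2)(x) == o2(o1(x)).
    return lambda x: reduce(lambda acc, op: op(acc), ops, x)

# Stub operators O1..O6; each is a total, deterministic function.
def o1_rdf_serialize(events):  return {"triples": events}
def o2_entropy_select(t0):     return {"windows": t0["triples"]}
def o3_vectorize(s):           return {"x": s["windows"]}
def o4_arnn(x):                return {"scores": x["x"]}
def o5_graph_inject(a):        return {"graph": a["scores"]}
def o6_dynamic_update(g):      return g

T = compose(o1_rdf_serialize, o2_entropy_select, o3_vectorize,
            o4_arnn, o5_graph_inject, o6_dynamic_update)
print(T(["pkt_1", "pkt_2"]))   # raw events -> updated knowledge graph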
The design of the (H-DIR)2 pipeline is guided by three key requirements:
(i) Achieving sub-second detection latency even under high-throughput conditions;
(ii) Ensuring explainability via symbolic traceability;
(iii) Enabling modular deployment across heterogeneous environments (cloud, edge, and industrial).
Each stage of the pipeline—entropy-based selection, neural scoring, and semantic graph injection—is implemented as a deterministic operator, ensuring both reproducibility and formal tractability.

3.2. Formal Workflow and Composability of the (H-DIR)2 Pipeline

The composite operator T that governs the (H-DIR)2 framework is parametrized by the ordered pair ⟨Π, Λ⟩, where Π ∈ {TCP, RPL, UDP/NTP, …} denotes the transport or routing protocol under scrutiny and Λ ∈ {SYN Flood, DAO-DIO, Amplification, …} encodes the corresponding attack semantics. The resulting architecture implements a fixed operator sequence that is structurally invariant but semantically contextualized for each ⟨Π, Λ⟩.
Adaptive specialization: protocol-level knowledge is incorporated into the pipeline via the following:
Feature schema: the vectorizer ϕ loads a protocol-specific dictionary DΠ (e.g., TCP flags, RPL codes).
Loss re-weighting: The framework tunes (α, β) per attack Λ to balance node classification and edge prediction objectives. For SYN Flood, α ≫ β prioritizes rapid node-compromise detection; for DAO-DIO, β dominates to reveal routing loops.
Graph semantics: the risk-injection operator Ψneu→sym appends triples in a dedicated namespace Π, ensuring that protocol-specific SPARQL rules remain valid.
The pipeline thus remains structurally invariant while behaviorally adaptive, guaranteeing analytic consistency across heterogeneous cyber–physical threat surfaces.
Stage 1: Data Ingestion: The pipeline begins with the ingestion of raw packet streams 𝒟 = {pᵢ}ᵢ₌₁ᴺ, each with timestamp τᵢ. These events originate from live telemetry or replayed traffic traces (e.g., Wireshark, Minikube) and define the observational substrate for entropy and anomaly detection.

3.2.1. RDF Conversion Level

Given a raw input stream 𝒟, an injective serialization function f_RDF : 𝒟 → 𝒢₀ maps each network event into a structured RDF triple representation. Each packet pᵢ is transformed into a triple (sᵢ, pᵢ, oᵢ) ∈ 𝒢₀, stored as a Boolean tensor T₀ = [tᵢⱼₖ⁽⁰⁾].

3.2.2. Spark SQL/Streaming Selection Level

A sliding window operator 𝒲Δt processes T₀, while a set of SQL queries 𝒬 = {q_l}ₗ₌₁ᵐ generates the structured feature matrix
S(Δt) = [s_lr(Δt)] ∈ ℝ^(m×R)
For each window, the operator computes Shannon entropy deviations over selected categorical features, such as IP addresses, protocol flags, and packet sizes. Substantial entropy fluctuations identify windows with anomalous activity and trigger their selection for further analysis.
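As an illustration of this stage, the following PySpark sketch computes per-window Shannon entropy over TCP flags; a small batch DataFrame stands in for the streaming source, and the column names are illustrative rather than those of the released pipeline:
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("hdir2-entropy").getOrCreate()

# Toy stand-in for the streaming source: one SYN-dominated window.
df = spark.createDataFrame(
    [("2025-06-29 10:00:00", "SYN")] * 90 + [("2025-06-29 10:00:00", "ACK")] * 10,
    ["event_time", "tcp_flag"],
).withColumn("event_time", F.to_timestamp("event_time"))

counts = df.groupBy(F.window("event_time", "500 milliseconds").alias("w"),
                    "tcp_flag").count()
totals = counts.groupBy("w").agg(F.sum("count").alias("n"))

# H = -sum(p * log2 p) per window; low H flags flag-dominated traffic.
entropy = (counts.join(totals, "w")
                 .withColumn("p", F.col("count") / F.col("n"))
                 .groupBy("w")
                 .agg((-F.sum(F.col("p") * F.log2("p"))).alias("H")))
entropy.show(truncate=False)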

3.2.3. Vectorization Level

The encoder φ maps the table S(Δt) into binary or real-valued vectors, as follows:
xₜ = φ(S(Δt)),  X = {xₜ}ₜ₌₁ᵀ,  xₜ ∈ {0,1}ᵈ
These vectors capture semantic structure and contextual attributes, enabling downstream learning by the ARNN model.

3.2.4. ARNN Core Level

The Associated Random Neural Network (ARNN) evolves over time, as indicated below:
aₜ₊₁ = f(Wₜ aₜ + b + xₜ)
with trainable weights W ∈ [0,1]ⁿˣⁿ. Learning minimizes the composite loss, as indicated below:
𝓛ₜ = α 𝓛cls + β 𝓛graph
Wₜ₊₁ = Wₜ − η ∇_W 𝓛ₜ
This matrix induces the risk graph 𝒢ₜ = (V, Wₜ).

3.2.5. Semantic Graph Coupling (SPARQL)

Symbolic and neural layers interface bidirectionally as follows:
Ψₛᵧₘ→ₙₑᵤ : 𝕋(Δt) ↦ xₜ   (1)
Ψₙₑᵤ→ₛᵧₘ : Wₜ ↦ Δ𝕋ₜ (SPARQL INSERT)   (2)
These functions define the translation between RDF-encoded input streams and the corresponding neural prediction outputs. Equation (1) encodes semantic triples into vectorized features, while Equation (2) rematerializes the learned risk scores into RDF triples for SPARQL querying, supporting ontology enrichment with scores (e.g., :IP :hasRiskScore "0.87"^^xsd:float).

3.2.6. Dynamic Update Loop

We formalize the closed inference loop as follows:
𝒟 →[f_RDF] 𝕋₀ →[𝒲Δt, 𝒬] S(Δt) →[φ] xₜ →[ARNN] (aₜ₊₁, Wₜ₊₁) →[Ψₙₑᵤ→ₛᵧₘ] 𝕋ₜ₊₁
This ensures the following:
(i) Low detection latency, τ_det ≤ Δt + ϑ_Q;
(ii) Entropy-based triggering when ΔHₜ > θ_H;
(iii) Critical node identification via Σⱼ wᵢⱼ > γ.
The (H-DIR)2 architecture implements a semantic–neural loop where entropy-based observations drive neural scoring, whose outputs in turn update a symbolic RDF graph via the Ψ-like operator 𝒪₅. This loop enables multi-layer adaptation and symbolic traceability of detected anomalies.

3.3. Entropy-Based Detection and Adaptive Defense with (H-DIR)2

This section presents the fundamental detection strategies and underlying mathematical principles of the (H-DIR)2 framework. It explains how entropy measures deviations in network behavior and how detected anomalies are handled through Apache Spark analytics, semantic graph reasoning, and adaptive neural inference.
Entropy-based anomaly detection: (H-DIR)2 leverages Shannon entropy as a statistical measure of uncertainty in categorical network attributes, such as source IPs, protocol flags, or payload lengths. In this context, entropy quantifies the dispersion of event frequencies within a windowed traffic stream, allowing the system to detect shifts from baseline distributions.
Let X be a discrete random variable associated with an observable feature of network traffic (e.g., TCP flag, ICMP type, RPL option). The entropy H(X) is defined as follows.
H(X) = −Σᵢ₌₁ⁿ P(xᵢ) log₂ P(xᵢ)
where P(xᵢ) is the empirical probability of observing the i-th outcome in the window, and n is the number of distinct values assumed by X.
Within our architecture, a low-entropy state (for example, domination by a single source IP or flag) may indicate SYN flood attacks, while a high-variance entropy spike (for example, unpredictable routing changes) can signal protocol-level anomalies such as DAO-DIO manipulation.
To formalize deviation from expected behavior, we define the entropy-based anomaly score as follows:
ΔH = H(X) − H_baseline
where Hbaseline denotes the mean entropy calculated under normal attack-free conditions. When the magnitude of the variation exceeds a protocol-specific threshold θH, that is,
|ΔH| > θ_H
the event window is flagged as anomalous and propagated to subsequent processing layers in the pipeline.
The entropy signal ∆H serves two purposes:
(i) It filters candidate traffic windows for deeper neural inference;
(ii) It tags RDF triples in the semantic graph layer with contextual anomaly metadata (e.g., :hasEntropyDrift "0.86"), enabling transparent querying via SPARQL.
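Both uses can be sketched together in a few lines of Python; the helper names and the namespace below are illustrative, while :hasEntropyDrift follows the example above:
import math
from collections import Counter
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import XSD

NS = Namespace("http://iot.net/ns#")   # illustrative namespace

def shannon_entropy(values):
    n = len(values)
    return -sum((c / n) * math.log2(c / n) for c in Counter(values).values())

def score_window(window_id, flags, h_baseline, theta_h, g):
    # Delta-H = H(X) - H_baseline; tag the window if |Delta-H| > theta_H.
    dh = shannon_entropy(flags) - h_baseline
    if abs(dh) > theta_h:
        g.add((NS[f"Window_{window_id}"], NS.hasEntropyDrift,
               Literal(round(dh, 2), datatype=XSD.float)))
    return dh

g = Graph()
flags = ["SYN"] * 95 + ["ACK"] * 5     # SYN-dominated (low-entropy) window
print(score_window(42, flags, h_baseline=1.5, theta_h=0.5, g=g))
print(g.serialize(format="turtle"))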

3.4. Dual Scalability of the (H-DIR)2 Architecture

The (H-DIR)2 framework has been designed to satisfy a twofold scalability requirement:
1. Vertical (Quantitative) Scalability: Leveraging in-memory cluster computing, the system can ingest telemetry produced by millions of IoT endpoints without a proportional increase in detection latency. Empirically, throughput increases linearly with the number of worker cores until network saturation is reached, confirming the theoretical bounds derived in [19].
2. Horizontal (Qualitative) Scalability: By sharding feature vectors across resilient distributed datasets (RDDs) and using a micro-batch streaming model, (H-DIR)2 sustains multi-terabyte traffic volumes while preserving sub-second sliding-window semantics. This property is critical for capturing low-frequency but high-impact anomalies that only emerge at large data scales [8].
Figure 2 visualizes the two orthogonal axes: device cardinality in the vertical dimension and data volume in the horizontal dimension. This dual-scaling capability is further validated experimentally in Section 4.4.

3.5. Integration with Apache Spark and RDF Graphs

Real-time processing is orchestrated by Apache Spark, whose RDD abstraction offers fault-tolerant in-memory data partitions that are compatible with low-latency analytics and iterative machine learning workloads [19]. Structured traffic logs (for example, TCP SYN/SYN-ACK exchanges) are first mapped to Spark DataFrames and then fed into a pipeline of Spark SQL operators for statistical summarization.
The same logs are simultaneously serialized as RDF triples, producing a semantic graph where:
Nodes represent entities such as IP addresses or ports;
Edges encode typed interactions (packet type, temporal correlation).
Thanks to SPARQL 1.1, complex pattern-matching queries can be issued on this evolving knowledge graph, resulting in protocol-specific alerts (e.g., an excess of incomplete TCP handshakes). The formal semantics of SPARQL ensure that the detection rules remain compositional and provably correct across heterogeneous datasets [17].
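A sketch of such a rule, issued from Python via rdflib, is shown below; the predicates :hasFlag, :srcIP, and :inReplyTo are illustrative stand-ins for the released ontology:
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix : <http://iot.net/ns#> .
:pkt1 :hasFlag "SYN" ; :srcIP :h1 .
:pkt2 :hasFlag "SYN" ; :srcIP :h1 .
""", format="turtle")

# Count SYNs per source that never received a SYN-ACK (half-open handshakes).
q = """
PREFIX : <http://iot.net/ns#>
SELECT ?src (COUNT(?p) AS ?halfOpen) WHERE {
  ?p :hasFlag "SYN" ; :srcIP ?src .
  FILTER NOT EXISTS { ?r :hasFlag "SYN-ACK" ; :inReplyTo ?p . }
} GROUP BY ?src
"""
for row in g.query(q):
    print(row.src, row.halfOpen)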
In general, the tight coupling between Spark physical scalability and RDF logical expressiveness enables (H-DIR)2 to operate seamlessly in cloud data centers and large-scale IoT deployments.

3.6. ARNN: Adaptive Neural Modeling for Attack Propagation

To model how threats propagate throughout the monitored infrastructure, (H-DIR)2 incorporates an Associated Random Neural Network (ARNN) [20], which dynamically adjusts the connection weights Wij among network nodes in response to real-time traffic patterns.
State Update Equation
The model computes the activation ai(t + 1) of node Ni as
aᵢ(t + 1) = f(Σⱼ₌₁ⁿ wᵢⱼ aⱼ(t) + bᵢ + xᵢ(t))
The components of the equation correspond to the following elements:
  • ai(t + 1): activation of node i at time t + 1;
  • f(∙) : activation function (e.g., sigmoid);
  • wᵢⱼ: weight of the connection from node j to node i;
  • bi: bias of node (i);
  • xi(t): external input (e.g., entropy variation or packet count features).
Multi-objective Training
Learning minimizes a composite loss, as follows:
L_total = αL(Classification) + βL(Graph)
where
L(Classification) is a cross-entropy term for node compromise detection, and L(Graph) is a binary cross-entropy term that regularizes the attack graph topology [21].
Hyperparameters α and β are protocol- and attack-specific (cf. Section 3.2).
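A minimal PyTorch sketch of the state update and composite loss follows; shapes, labels, and the single gradient step are illustrative assumptions, not the released training code:
import torch
import torch.nn.functional as F

n = 8                                          # monitored nodes
W = torch.rand(n, n, requires_grad=True)       # trainable weights in [0,1]
b = torch.zeros(n)

def step(a, x):
    # One ARNN iteration: a(t+1) = f(W a(t) + b + x(t)), f = sigmoid.
    return torch.sigmoid(W @ a + b + x)

a = step(torch.zeros(n), torch.rand(n))

y_node = torch.randint(0, 2, (n,)).float()     # toy compromise labels
y_edge = torch.randint(0, 2, (n, n)).float()   # toy edge labels
alpha, beta = 0.3, 0.7                         # e.g., DAO-DIO weighting
loss = alpha * F.binary_cross_entropy(a, y_node) \
     + beta * F.binary_cross_entropy(torch.sigmoid(W), y_edge)
loss.backward()
with torch.no_grad():                          # W <- W - eta * grad
    W -= 1e-3 * W.grad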

3.7. Semantic–Neural Coupling and Dynamic Update

The (H-DIR)2 framework maintains a bidirectional bridge between the following two complementary layers:
Semantic layer—an ontology of protocol rules and expert heuristics that prunes forbidden state transitions;
Neural layer—an Associated Random Neural Network (ARNN) that learns temporal correlations directly from telemetry streams.
Information flows downwards when semantic constraints mask illegal ARNN states and upwards when unexpected entropy shifts, ∆H, trigger joint optimization of neural weights and rule parameters. The process thus closes a self-adaptive loop, as illustrated in Figure 3.
Network Attack Graph Construction Details
To further formalize the adaptive update loop introduced above, we now describe how the learned weight matrix induces a dynamic Network Attack Graph, which enables structured inference and targeted mitigation.
Attack graph inference.
The learned weight matrix W = [wᵢⱼ] induces a directed Network Attack Graph (NAG). The probability that an adversary traverses a path P = {N₁, …, Nₖ} is given by the following:
P_attack(P) = ∏ₗ₌₁^(k−1) W_{Nₗ Nₗ₊₁}
This guides proactive mitigation (see Section 3). The following describes the components of the equation (a minimal code sketch follows the list):
  • P_attack(P): the probability that an attacker traverses path P.
  • ∏: the product operator, iterating over each consecutive pair of nodes in the path.
  • l: the index of the current step in the path, from 1 to k − 1.
  • W_{Nₗ Nₗ₊₁}: the weight associated with the edge from node Nₗ to node Nₗ₊₁.
  • k: the total number of nodes in the path P = {N₁, …, Nₖ}.
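Under these definitions, the path probability reduces to a one-line product; the sketch below uses an illustrative dictionary-of-dictionaries representation for W:
from math import prod

def p_attack(path, W):
    # Product of learned edge weights along path = [N1, ..., Nk].
    return prod(W[a][b] for a, b in zip(path, path[1:]))

W = {"N1": {"N2": 0.9}, "N2": {"N3": 0.8}}
print(p_attack(["N1", "N2", "N3"], W))   # 0.9 * 0.8 = 0.72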
Semantic reinforcement loop: Risk estimates are converted back into RDF triples (e.g., :Host_192_0_2_7 :hasRiskScore "0.87"^^xsd:float) and are immediately queryable via SPARQL, closing the observation→prediction→update cycle. This tight coupling between symbolic (RDF/SPARQL) and sub-symbolic (ARNN) reasoning underpins the transparency, adaptability, and real-time performance highlighted throughout Section 4.

3.8. Dynamic Update of the Semantic Graph

To maintain a continuously evolving representation of network conditions, the predictions produced by the ARNN module are fed back into the RDF Knowledge Base, shown in Figure 3. This process allows for dynamic semantic enrichment of the graph.
For example, a prediction indicating that IP 192.168.50.8 is likely to be targeted by IP 172.16.0.5 is formalized as follows:
:Host_192_168_50_8 :potentialVictimOf :Host_172_16_0_5.
Such semantic assertions support real-time updates of potential attack paths and risk propagation, reinforcing the (H-DIR)2 reasoning capabilities.
To illustrate how ARNN input is generated from packet-level traffic, the following Python script simulates TCP traffic encoded as RDF triples. These triples are then converted into one-hot encoded vectors suitable for both model training and real-time ARNN inference.
Pre-processing pipeline: The full Python routine used for one-hot feature encoding and normalization is available in our open-source repository (file one-hot-encoder.py); the code listing is omitted here for brevity. The (H-DIR)2 pipeline implements a semantic reinforcement loop: risk scores Rᵢ predicted by the adaptive layer (ARNN + NAG) are re-materialized as RDF triples, e.g.,
:Host_192_0_2_7 :hasRiskScore “0.87”^^xsd:float.
These triples become immediately queryable via SPARQL, thereby closing the observation → prediction → update cycle shown in Figure 3. This tight coupling between the symbolic layer (RDF/SPARQL) and the sub-symbolic layer (ARNN) guarantees both explainability and real-time adaptability.
Worked example on the Syn-ridotto dataset: The file Syn-ridotto.xlsx (a trimmed subset of the CIC-DDoS2019 trace) contains 100 k TCP flows summarized by 88 features. Listing 1 shows, step by step, how a single row is (i) serialized via rdflib and (ii) one-hot encoded into a vector x ∈ {0,1}ᵈ used to feed the ARNN.
The mapping Ψsym→neu therefore acts as an ETL (Extract-Transform-Load) mechanism bridging semantic space and neural space.
Listing 1. RDF serialisation example.
<rdf:Description rdf:about="http://iot.net/flow/42">
<rdf:type rdf:resource="http://iot.net/types/TCP_SYN"/>
<rdfs:label>TCP_SYN (flag: 1)</rdfs:label>
</rdf:Description>
Code availability: All preprocessing scripts and notebooks are openly released at https://github.com/RobUninsubria/HDIR2-paper.git (tag v1.4, accessed on 29 June 2025); the full listing is omitted for brevity.
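In place of the omitted listing, the following hedged sketch shows the Ψₛᵧₘ→ₙₑᵤ direction on a flow like the one in Listing 1; the feature dictionary D_PI and the one_hot helper are illustrative (see one-hot-encoder.py for the released routine):
import numpy as np
from rdflib import Graph, URIRef
from rdflib.namespace import RDF

g = Graph()
g.parse(data="""
<http://iot.net/flow/42> a <http://iot.net/types/TCP_SYN> .
""", format="turtle")

# Protocol-specific dictionary D_Pi: one slot per RDF type of interest.
D_PI = ["TCP_SYN", "TCP_SYN_ACK", "TCP_ACK", "TCP_RST"]

def one_hot(flow: URIRef, graph: Graph) -> np.ndarray:
    # Psi_sym->neu: map the flow's rdf:type into x in {0,1}^d.
    x = np.zeros(len(D_PI), dtype=np.uint8)
    for t in graph.objects(flow, RDF.type):
        name = str(t).rsplit("/", 1)[-1]
        if name in D_PI:
            x[D_PI.index(name)] = 1
    return x

print(one_hot(URIRef("http://iot.net/flow/42"), g))   # [1 0 0 0]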
Once the ARNN estimates the compromise probability ai(t + 1) for each node Ni, the inverse transformation Ψneu → sym writes back RDF triples such as
:Host_192_168_50_8 :potentialVictimOf :Host_172_16_0_5.
These semantic assertions feed subsequent SPARQL rules (e.g., isolating high-risk hosts or enabling risk-aware load balancing). The bidirectional flow empowers (H-DIR)2 with explainability and situational awareness; every neural prediction is anchored to an explicit semantic assertion, updated in real time as new evidence arrives. As shown in Figure 4, the RDF input stream is semantically enriched and passed to the ARNN module via the operator Ψₛᵧₘ→ₙₑᵤ, an architectural coupling that enables an end-to-end explainable inference loop.

Semantic Reasoning Layer: Enabling Explainable and Actionable Intelligence

While the ARNN core (𝒪₄) provides high-performance anomaly detection through deep modeling of temporal and entropy-based features, it operates as a statistical black box, producing anomaly scores aₜ ∈ [0,1] without contextual semantics. As such, even high-confidence alerts (e.g., aₜ = 0.91) offer limited operational utility in complex environments.
By contrast, the RDF semantic injection module (𝒪₅) translates ARNN outputs into interpretable knowledge graphs. Each graph instance encodes triples such as the following:
(:192.168.1.100 :isAmplificationSource “suspected”);
(:Alert#1045 :hasSeverity “high”);
(:Alert#1045 :explainedBy "Abnormal UDP packet size pattern detected").
These semantic assertions allow for actionable intelligence beyond numeric scores, supporting SPARQL-based queries, automatic policy triggers, and human-readable dashboards. For instance, a non-expert operator can retrieve all high-risk hosts under mitigation with a single SPARQL query. Table 4 illustrates how analysts can retrieve high-risk hosts under mitigation using a structured SPARQL query embedded in the semantic graph.
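A sketch of the kind of query Table 4 refers to is given below, reusing the :hasRiskScore and :underMitigation predicates from the examples above (the threshold and host names are illustrative):
from rdflib import Graph

g = Graph()
g.parse(data="""
@prefix : <http://iot.net/ns#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
:Host_192_0_2_7 :hasRiskScore "0.87"^^xsd:float ; :underMitigation true .
:Host_10_0_0_3  :hasRiskScore "0.21"^^xsd:float .
""", format="turtle")

# Retrieve all high-risk hosts currently under mitigation.
q = """
PREFIX : <http://iot.net/ns#>
SELECT ?host ?score WHERE {
  ?host :hasRiskScore ?score ; :underMitigation true .
  FILTER (?score > 0.6)
} ORDER BY DESC(?score)
"""
for row in g.query(q):
    print(row.host, row.score)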
This level of explainability is currently unattainable through ARNN alone, which yields scalar outputs without embedded semantics. By bridging statistical inference and symbolic reasoning, the semantic layer enables analysts to justify actions, trace attack propagation, and design context-aware mitigation strategies. In this sense, (H-DIR)2 does not merely detect anomalies but enables explainable and operationally effective responses within cloud–IoT infrastructures.

4. Experimental Validation and Results

This section presents the empirical validation of the (H-DIR)2 framework through three representative attack scenarios: SYN Flood, DAO-DIO routing manipulation, and NTP amplification. Each scenario evaluates the performance of the framework in terms of detection latency, classification accuracy, entropy variation, and mitigation efficiency under realistic network conditions. Experiments were run with Python 3.11.4, Spark 3.5.0, and PyTorch 2.1 (see the Reproducibility section in Appendix A). The complete environment files are included in the repository https://github.com/RobUninsubria/HDIR2-paper.git (accessed on 29 June 2025).

4.1. SYN Flood Case Study

Objective: Quantify the performance of the (H-DIR)2 pipeline against a volumetric TCP SYN Flood in terms of detection latency, classification quality, and backlog/exhaustion risk.

4.1.1. Data Collection and Preprocessing

A stratified 50,000-packet excerpt of the CIC-DDoS2019 trace [7] is replayed at line rate. Each packet is
(i) Serialized into an RDF triple (operator O₁);
(ii) Windowed by Spark SQL over Δt = 500 ms (operator O₂);
(iii) One-hot vectorized on srcIP, dstIP, and TCP flags (d = 256; operator O₃);
(iv) Streamed into the ARNN core (operator O₄).
The semantic feedback loop (operators O₅–O₆) updates the Network Attack Graph in real time. All code and random seeds are released in the companion notebook reproduce-syn-flood.ipynb (commit 3a98f1b).

4.1.2. Evaluation Metrics

We compute Shannon entropy H(X) over the distribution of TCP flags X = {SYN, SYN–ACK, ACK} and raise an alarm whenever the entropy drop exceeds a predefined threshold, as follows:
ΔH = Hₜ − H_baseline < −θ_H, with θ_H = 0.50 bits
Imbalance ratio: r = #SYN/#SYN–ACK (continuous feature).
ARNN quality: accuracy, false positive rate (FPR), area under the ROC curve (AUC).
Detection latency: τ_det, the time from the first spoofed SYN to the alarm.
The performance metrics for the SYN Flood attack are summarized in Table 5.
Analytical backlog threshold: A closed-form expression for the backlog cutoff time t*, together with its complete derivation, is presented in Appendix A (Equation (A1)). For completeness, the adaptive scheduler converges when ΔH(t*) = τ, yielding t* = 0.43 s under the worst-case load defined in Section 4.1.1.

4.2. DAO-DIO Routing Manipulation Case Study

Objective: Evaluate the capability of the (H-DIR)2 pipeline to detect and mitigate RPL-centric attacks—routing loops, black holes, and path diversions in low-power mesh networks. Figure 5 shows the entropy-based anomaly distribution and the effect of the mitigation mechanism in the RPL DAO-DIO attack scenario.

4.2.1. Data Collection and Preprocessing

The annotated Dryad DAO-DIO Routing Manipulation trace by Marcov et al. [11] (200 motes, sampling at 1 Hz, 10 Hz, and 1 h) serves as the ground truth. The six operators O1 through O6 sequentially process the packets.
(O₁) RDF serialization into the IoT–RPL–OWL ontology, yielding 𝕋₀.
(O₂) Streaming windowing with Δt = 5 s and Spark SQL filtering.
(O₃) Vectorization (d = 256) with one-hot encodings for node, rank, and message type.
(O₄) ARNN core—an attentive RNN with n_in = 128, η = 10⁻³, and loss weights (α, β) = (0.3, 0.7).
(O₅) Risk scoring Rᵢ = σ(aᵢ), flagging hosts where Rᵢ > 0.6.
(O₆) Graph feedback via SPARQL INSERT triples (:hasHighRisk true), closing the adaptive loop.
All artifacts have been made available in reproduce-dao-dio.ipynb (commit 61f5c7d) https://github.com/RobUninsubria/HDIR2-paper.git (accessed on 29 June 2025).

4.2.2. Evaluation Metrics

Routing loops—number of closed rank cycles.
Maximum incoming risk maxᵢ Σⱼ wⱼᵢ in the learned graph (over incoming edges j → i).
Packet delivery ratio (PDR).
Average loop duration in seconds.
Entropy deviation ΔH in the DAO/DIO message mix; alarm if ΔH > θ_H = 1.2 bits [21].
The performance improvements obtained through the mitigation process are reported in Table 6.
Figure 6 summarizes the quantitative impact of the mitigation procedure, while Figure 7 illustrates the dynamically reconfigured routing DAG produced by the risk-aware semantic graph. A paired t-test confirms that both loop reduction and PDR improvements are statistically significant (p < 0.01). The measured detection latency is 0.9 ± 0.2 s—bounded by the 5-s window—and the ARNN achieves an F1-score of 0.92 for node compromise classification.
In Figure 7, the left panel shows the original topology. The center panel illustrates the impact of a DAO-DIO manipulation attack, which increases disorder and creates abnormal links. The right panel displays the post-mitigation topology, in which compromised nodes are explicitly identified and logically isolated, resulting in a reoptimized and stabilized routing structure. As illustrated in Figure 8, the victim receives a volume of traffic that consistently exceeds the anomaly threshold, simulating the overload condition triggered by a spoofed NTP amplification attack.

4.3. NTP Amplification Case Study

Objective: Assess how the (H-DIR)2 pipeline mitigates UDP-level NTP amplification, a reflection-based attack that multiplies small monlist queries into large traffic bursts.

4.3.1. Data Collection and Preprocessing

We replay the Kitsune Network Attack subset dedicated to NTP amplification [22]—100 spoofed queries, amplification factor ×500, and victim bandwidth saturated within <3 s. The packets traverse the six operators O₁ → ⋯ → O₆, with protocol-specific adaptations, listed as follows:
(O₁) RDF serialization into the IoT–UDP–OWL schema.
(O₂) Windowing with Δt = 1 s; Spark SQL computes per-IP entropy.
(O₃) Vectorization (d = 128) on srcIP, dstIP, UDP ports, and NTP-cmd.
(O₄) ARNN core—an LSTM variant (3 layers, 64 cells, η = 2 × 10⁻³).
(O₅) Risk scoring, with alarms triggered for Rᵢ > 0.55.
(O₆) Graph feedback via RDF injection (:underMitigation true), closing the adaptive mitigation loop.
Defense Stack (protocol-level countermeasures):
Edge caching (C = 0.9) to absorb duplicate replies.
Anycast load distribution over S = 5 edge nodes.
Entropy filter—alarm if ΔH ≥ θH = 1.5 bits.
ARNN early predictor (trained with validation accuracy ACC = 0.90) drives proactive throttling.

4.3.2. Evaluation Metrics

Peak load on the victim (Gb/s) as a measure of attack intensity.
Mitigation latency τmit (in seconds), defined as the delay between entropy shift ΔH and effective traffic suppression.
Reduction in back-end traffic (ratio), expressed as a percentage of mitigated throughput.
Early-stage prediction accuracy of the ARNN model, evaluated within the first second of attack onset.
The quantitative performance of the defense stack against NTP amplification is summarized in Table 7, which reports peak traffic load, mitigation latency, and back-end throughput reduction across different configurations.
The defense stack cuts the maximum bandwidth by an order of magnitude and reacts in 1.7 s (±0.3), well before link saturation. Early ARNN warnings (accuracy 90.4%) enable smart load shedding. The peak load reduction trends and learning performance of the ARNN predictor are illustrated in Figure 9.

4.4. Cross-Dataset Evaluation and Generalization

To assess the robustness and portability of the (H-DIR)2 framework beyond a single scenario, we extended our evaluation to include heterogeneous datasets. In addition to the primary CIC-DDoS2019 corpus, two representative alternatives were adopted to enable broader generalization:
TON_IoT [23]: a telemetry-rich dataset that integrates system logs, network flow, and telemetry data from industrial control environments, allowing for an assessment of (H-DIR)2 under mixed-signal conditions.
Edge-IIoTset [24]: an edge-oriented dataset that captures multi-protocol traffic and adversarial sequences in decentralized IoT topologies.
Despite differences in annotation granularity and protocol layering, the entropy-based detection module and the ARNN backbone retained consistent AUC levels (≥0.95) and sub-second inference latency. This demonstrates the portability of (H-DIR)2 to diverse network conditions and edge-centric infrastructures.
While Edge-IIoTset offers valuable insight into multi-protocol edge environments, it also presents certain limitations affecting its representativeness. First, attack sequences are artificially generated using scripted procedures with deterministic timing patterns, which can introduce overfitting risks unless properly cross-validated. Second, the dataset lacks fine-grained annotations for packet payloads and actuator telemetry, limiting its utility for semantic enrichment via RDF ontologies.
Although protocols like MQTT and CoAP are included, Edge-IIoTset lacks contextual signals such as actuation feedback and energy consumption metrics—critical for modeling industrial Digital Twin scenarios. Nevertheless, it remains a structurally diverse benchmark to test the generalizability of (H-DIR)2 in decentralized and protocol-heterogeneous contexts, as summarized in Table 8. Scalability considerations of semantic reasoning are discussed in Box 3.
All configuration files, mappings, and raw datasets are openly released on GitHub (v1.4, last updated on 29 June 2025).
Box 3. Scalability Considerations of Semantic Reasoning.
Experimental evaluations demonstrate that integrating the semantic layer with entropy analysis and the ARNN yields response times compatible with real-time edge applications. Internal tests show that the ARNN requires less than 12 ms per packet on conventional CPUs and drops below 4 ms with GPU acceleration. Precompiled indexes and pruning strategies reduce semantic graph lookups and updates to sub-millisecond operations. As a result, overall latencies fall below 60 ms on a Raspberry Pi 4, under 20 ms on a Jetson Nano, and around 8 ms on medium-sized cloud instances.
To ground these figures, independent assessments of GraphDB and Apache Jena indicate that complex queries over approximately 500,000 triples achieve average latencies of 7–10 ms on high-end multi-core servers. On edge platforms such as the Jetson Nano or Raspberry Pi 5, energy-efficient designs and local caching keep graph updates firmly in the sub-millisecond range; ontological pruning can reduce processing time by 30–50% on low-power devices, underscoring the importance of limiting the number of active rules.
The neural component likewise benefits from targeted hardware considerations. For example, on a Jetson Nano the inference of complex neural networks (such as ResNet-50 optimized with TensorRT) completes in about 2.67 ms, whereas on a Raspberry Pi 4, YOLO-like models achieve roughly 0.9 frames/s (~1.1 s per frame) yet remain under 12 ms per packet when lightweight vectorizations are used. On desktop- or server-class GPUs, quantization techniques reduce inference latency to below 4 ms. Comparing latency and accuracy across accelerators (CPU, edge GPU, desktop GPU) and semantic engines allows readers to identify the configuration best suited to their requirements. Finally, we addressed threats to external validity (i.e., whether the findings generalize) from a theoretical standpoint.

4.5. Computational Complexity and Deployability

To assess the feasibility of (H-DIR)2 in edge-to-cloud scenarios, we analyze the computational demands of its key components, considering both inference time and memory consumption.
Entropy Estimator: The Shannon entropy module operates on fixed-size sliding windows over categorical network features. Let n be the number of bins and T be the window length. The entropy score is calculated in 𝒪(n) time with minimal memory overhead. Entropy histograms are continuously updated in a streaming fashion using the Spark micro-batching model, supporting real-time analytics at the edge.
ARNN Core: The Associated Random Neural Network (ARNN) operates as a discrete-time dynamical system with a weight matrix Wₜ ∈ [0,1]^(d×d).
Each iteration involves the following:
aₜ₊₁ = f(Wₜaₜ + b + xₜ)
with time complexity O(d²) and memory O(d² + d), where d is the embedding dimensionality. In our implementation (d = 88), inference takes <12 ms per packet on mid-range CPUs (Intel i5) and <4 ms with GPU acceleration.
Semantic Layer: RDF triples are generated and encoded using rdflib, then queried via SPARQL (through rdflib and SPARQLWrapper). The semantic layer is deployed as a rule-based filter in which each risk assertion (e.g., :hasRiskScore) triggers logical rules. Search/update cost is amortized as O(r), scaling linearly with the number of active rules r (typically r < 50). Precompiled indexes keep rule-lookup latency below 1 ms.
Resource Footprint: A full (H-DIR)2 stack (Spark node, ARNN, RDF processor, and dashboard) was tested on the following:
Edge node: Raspberry Pi 4 Model B (4 GB RAM): inference latency < 60 ms; RAM < 2.2 GB.
Embedded board: NVIDIA Jetson Nano: latency < 20 ms with GPU; RAM < 1.4 GB.
Cloud instance: AWS t3.medium (2 vCPU, 4 GB RAM): latency < 8 ms.
Conclusion: The framework is deployable on low-resource edge platforms and scales linearly with the number of monitored protocols. The deployment results across different hardware platforms are summarized in Table 9. All benchmarks, configuration files, and Docker containers are publicly available.

4.6. Comparative Summary Across Scenarios

The experimental results confirm that (H-DIR)2 is a robust and scalable solution for detecting and mitigating diverse cyber threats in both cloud and IoT environments.
Stress Test with Distributed Traffic: To validate the dual scalability of the architecture, we performed a synthetic stress test by varying both the number of simulated IoT nodes (vertical scalability) and the data volume per node (horizontal scalability). Figure 10 and Table 10 illustrate the results, showing that the (H-DIR)2 framework consistently maintains sub-second detection latency (≤500 ms) up to 1 million simulated endpoints and 10 TB of daily telemetry. The system exhibits near-linear throughput scaling with the number of Spark worker cores, while maintaining stable latency even under extreme load conditions. Table 11 summarizes the three attack scenarios—SYN Flood, DAO-DIO, and NTP amplification—highlighting the targeted protocol layer and spoofed destination signature, along with the corresponding mitigation strategies employed within the pipeline.
All scripts used to reproduce the stress tests, including parameter configurations and synthetic data generation routines, are available in the companion GitHub repository, https://github.com/RobUninsubria/HDIR2-paper.git (accessed on 29 June 2025), supporting full replicability and independent verification of the results.
To further detail the internal architecture of (H-DIR)2, Table 12 offers a structured overview of the key analytics layers involved in the mitigation pipeline. For each layer—Entropy Monitor, ARNN, Network-Attack Graph, and Load-Balancing—the table outlines the governing mathematical formulations, operational purposes, and the most relevant tunable parameters.
This compact formulation highlights the modular and explainable design of (H-DIR)2, ensuring clarity on how entropy variation, neural propagation, risk estimation, and caching interact in response to different attack profiles.
Such a representation not only improves transparency and reproducibility but also provides a precise reference for future deployments and ablation studies, especially in scenarios where protocol heterogeneity or resource constraints must be considered.
Table 12 acts as an interpretable and rigorous blueprint of the core system logic, enabling readers and practitioners to trace each detection decision to its corresponding mathematical foundation within the framework. All modules in the (H-DIR)2 pipeline were implemented as isolated, composable operators, allowing reuse, testing, and independent optimization. The entire setup was validated across multiple datasets and is available in Dockerized Jupyter notebooks to ensure full reproducibility.

4.7. Extended Comparison with State-of-the-Art Methods

To rigorously assess the capabilities of the proposed (H-DIR)2 framework, Table 13 presents a comparative analysis against four representative state-of-the-art anomaly detection systems: Spark IDS [25], Kitsune-AE [1], Isolation Forest [26], and Gated RNNs [27].
The Spark IDS method [M1] leverages distributed analytics but lacks semantic layering and explainability mechanisms. Kitsune-AE [M2], a lightweight autoencoder ensemble for online detection, achieves competitive AUC (0.93) but does not support explainability or semantic traceability. Isolation Forest [M3], a classic unsupervised method, performs well in terms of simplicity and robustness; however, its interpretability remains limited due to the lack of semantic information in the score distribution. Gated RNN [M4], a recurrent deep learning model, demonstrates strong classification accuracy (AUC = 0.94) but suffers from non-trivial training complexity, limited real-time responsiveness, and the absence of domain-specific entropy modeling.
As reported in Table 13, (H-DIR)2 achieves superior performance across three critical dimensions:
(i) Lowest detection latency (247 ms);
(ii) Highest AUC (0.978);
(iii) Native explainability via RDF/SPARQL semantic graphs.
The system encodes detected anomalies as machine-readable triples (e.g., host :hasRiskScore “0.94”), enabling transparent SPARQL-based diagnostics and policy-level rule enforcement.
Moreover, unlike all other methods, (H-DIR)2 supports dual-mode coupling between symbolic and sub-symbolic layers, ensuring that runtime decisions are both statistically grounded and semantically auditable. This makes the framework particularly suitable for scalable, heterogeneous, and regulation-sensitive environments such as smart infrastructure, critical IoT networks, and autonomous systems.
The combination of adaptive neural inference, entropy signal processing, and ontology-aware reasoning positions (H-DIR)2 not merely as a strong detector but as a semantically explainable, high-frequency defender.

4.8. Dynamic Integration Between Semantics and Prediction in (H-DIR)2

The experiment conducted on real-world data from the Kitsune Network Attack dataset [12] concretely demonstrates the integrated cycle between symbolic representation and adaptive modeling within the (H-DIR)2 framework. Network packets were initially serialized into RDF triples and queried using SPARQL 1.1, whose formal semantics ensure soundness and completeness in pattern matching [17]. These triples were then vectorized and fed into an Associated Random Neural Network (ARNN) [9], producing a weight matrix W = [wᵢⱼ] that encodes the probability of compromise between nodes.
The resulting Network Attack Graph allows for the identification of attack paths and critical assets. Neural risk scores are directly re-instantiated as additional RDF triples (e.g., :potentialVictimOf), closing a continuous observation → prediction → update loop.
This bidirectional feedback mechanism, in line with recent graph neural approaches to industrial cybersecurity [21], constitutes the intelligent core of (H-DIR)2. It supports both real-time adaptive mitigation and human-interpretable diagnostics under dynamic and distributed threat conditions.
Figure 11 illustrates the semantic-adaptive integration loop at the core of the (H-DIR)2 architecture. The process begins with the semantic layer—composed of RDF triples and SPARQL queries—used to detect suspicious patterns in network traffic flows. These semantically enriched observations are subsequently encoded into vectorized inputs and passed to the ARNN model, which estimates the propagation risk and identifies critical nodes within the infrastructure. The resulting predictions, such as the likelihood of compromise or attack, are integrated into a dynamically updated weight graph. Finally, these inferences are re-instantiated as new RDF statements, effectively closing the loop and enabling a continuous cycle of observation, inference, and knowledge augmentation. This closed-loop integration ensures real-time responsiveness and semantic explainability, allowing (H-DIR)2 to adaptively learn and react to evolving threats in highly distributed environments.

5. Conclusions and Future Work

This paper presented the Hybrid Dynamic Information Retrieval and Risk framework, (H-DIR)2, a novel entropy-driven architecture that unifies statistical analysis, adaptive graph learning, and symbolic reasoning. Through six deterministic operators ranging from RDF serialization to semantic graph enrichment, (H-DIR)2 enables sub-second anomaly detection, interpretable threat reasoning, and semantic traceability across IoT-scale infrastructures. The experimental validation on three representative attack vectors—TCP SYN Flood, RPL DAO-DIO manipulation, and NTP amplification—demonstrated that (H-DIR)2 achieves competitive precision (AUC = 0.978), low detection latency (247 ms), and full RDF-based explainability. The framework is fully reproducible and scalable, relying on open-source artifacts (datasets, Docker images, Spark workflows), enabling rapid deployment in both cloud-native and edge-aware scenarios.
Limitations: Current limitations include the following:
(i) Sensitivity of entropy measures to statistical noise in low-volume flows;
(ii) Lack of GPU-accelerated ARNN training;
(iii) Limited testing on high-churn edge mobility.
Furthermore, while the Edge-IIoTset dataset allowed for benchmarking in multi-protocol edge workloads, its synthetic generation patterns and reduced semantic annotation limit its representational depth in real-world industrial settings. Future work will address these issues through more expressive telemetry streams and contextual labeling. By tightly integrating symbolic inference with adaptive neural reasoning, (H-DIR)2 demonstrates that semantic traceability is not just an add-on but a foundational pillar for building interpretable, resilient cybersecurity frameworks in heterogeneous IoT environments.
Future Work: Forthcoming developments will focus on the following:
  • Multimodal telemetry: extending entropy and RDF encoding to streams such as EPC logs, OPC-UA messages, and container-level resource metrics.
  • Edge stress testing: deploying (H-DIR)2 on resource-constrained microcontrollers and ARM-based edge nodes to benchmark resilience and latency.
  • Live threat intelligence: integrating the semantic layer with external feeds (e.g., STIX, MISP) to support real-time inference updates and zero-day anticipation.
To the best of our knowledge, (H-DIR)2 is the first pipeline that unifies entropy analytics, neural decision making, and semantic RDF reasoning into a cohesive, scalable, and explainable cybersecurity solution for complex cloud and IoT infrastructures.

Author Contributions

Conceptualization, D.T. and R.P.; methodology, R.P.; software, R.P.; validation, D.T. and R.P.; formal analysis, R.P.; investigation, R.P.; resources, D.T.; data curation, R.P.; writing—original draft preparation, R.P.; writing—review and editing, D.T.; visualization, R.P.; supervision, D.T.; project administration, D.T.; funding acquisition, D.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by project SERICS (PE00000014) under the NRRP MUR program funded by the EU-NGEU.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All datasets used in this study are publicly available. The CIC-DDoS2019 dataset can be accessed at https://www.unb.ca/cic/datasets/ddos-2019.html (accessed on 29 June 2025). The Kitsune dataset is available at https://github.com/ymirsky/Kitsune-dataset (accessed on 29 June 2025). The DAO-DIO Contiki/Cooja wireless sensor–network simulation traces are archived on the Dryad Digital Repository. All configuration files, mappings, and synthetic data used for validation are openly released on GitHub: https://github.com/RobUninsubria/HDIR2-paper.git (accessed on 29 June 2025).

Acknowledgments

This research was conducted at the Department of Theoretical and Applied Sciences (DiSTA), University of Insubria, Varese–Como, Italy. It received partial funding from the Italian National Recovery and Resilience Plan (PNRR), Mission 4 “Education and Research”, Component 2 “From Research to Business”, Investment 1.3 “Partnerships for Research and Innovation”, under project SERICS (PE00000014), Ministry of University and Research (MUR), and is financed by the European Union—NextGenerationEU. The authors appreciate the availability of open source traces made possible by the Canadian Institute for Cybersecurity (CIC–DDoS2019), the Kitsune Network Attack Dataset initiative, and the DAO-DIO Contiki/Cooja wireless sensor–network simulations archived on the Dryad Digital Repository.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
IoT	Internet of Things
RDF	Resource Description Framework
ARNN	Associated Random Neural Network
SPARQL	SPARQL Protocol and RDF Query Language
NTP	Network Time Protocol
RPL	Routing Protocol for Low Power and Lossy Networks
AUC	Area Under the Curve

Appendix A. Mathematical Proofs

Appendix A.1. Closed-Form Backlog Threshold

Starting from the M/M/1 queue with entropy-weighted arrival rate λ and service rate μ, the backlog cutoff time t* that nulls the queue derivative satisfies ΔH(t*) = τ. Solving the differential equation gives the following:

t^{*} = \frac{1}{\lambda}\, W\!\left(\frac{\mathrm{d}B}{\mathrm{d}t} \cdot \frac{\mu}{\lambda}\right) \quad (A1)

where
t* denotes the threshold time at which the backlog derivative becomes null (i.e., ΔH(t*) = τ);
λ is the entropy-weighted arrival rate;
μ is the service rate;
dB/dt is the time derivative of the backlog function B(t);
W(·) is the Lambert W function.
Experimental consistency: During mitigation of the NTP amplification attack, the observed latency to suppress peak load (1.2 Gbps) was below 18 ms, with an additional activation delay of 6 ms introduced by the IDmit module.
These empirical values confirm the suitability of the analytical threshold t* = 0.43 s, derived from Equations (A1) and (A2):

t^{*} = \frac{1}{\mu - \lambda}\,\ln\!\left(\frac{\mu}{\lambda}\right) \quad (A2)

This expression is useful for real-time estimation of the cutoff threshold.
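For a quick numerical check, the following Python sketch evaluates both forms; the arrival and service rates below are illustrative placeholders (not the experimental values), and the Lambert-W form relies on scipy.special.lambertw.

import math
from scipy.special import lambertw

def t_star_closed_form(lam, mu):
    # Eq. (A2): closed-form cutoff threshold for a stable M/M/1 queue (mu > lam).
    assert mu > lam > 0, "queue must be stable (mu > lambda)"
    return math.log(mu / lam) / (mu - lam)

def t_star_lambert(lam, mu, dB_dt):
    # Eq. (A1): Lambert-W form; dB_dt stands for the backlog derivative dB/dt.
    return (1.0 / lam) * lambertw(dB_dt * mu / lam).real

# Illustrative rates only, chosen so t* lands near the 0.43 s quoted in the text:
lam, mu = 1.0, 4.5
print(f"t* (A2) = {t_star_closed_form(lam, mu):.2f} s")   # -> t* (A2) = 0.43 s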

Appendix A.2. Pseudocode of Semantic Injection Module

The following pseudocode illustrates the behavior of module 𝒪4 in the (H-DIR)2 pipeline. It governs the semantic enrichment of RDF graphs based on anomaly scores and entropy deviation as follows:
for each node i:
    if ΔH(i) > τ and a_i(t) > θ then
        G ← AddTriple(G, (Node_i, isLikelyUnderAttack, Attack_Type))
This rule-based injection operator ensures that only statistically significant anomalies are promoted to symbolic assertions. It operationalizes the transition from numerical detection to explainable semantic representation, enabling downstream tasks such as SPARQL reasoning, policy mapping, and ontology-based traceability.
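A minimal executable sketch of this operator, written in Python with rdflib, is shown below. The namespace IRI, the threshold values for τ and θ, and the per-node input dictionaries are illustrative assumptions rather than values prescribed by the pipeline.

from rdflib import Graph, Literal, Namespace

HDIR = Namespace("http://example.org/hdir#")  # hypothetical ontology namespace

def inject_anomalies(g, delta_h, activation, attack_type, tau=0.12, theta=0.85):
    # Rule-based injection operator (sketch of O4): promote a node to a symbolic
    # assertion only when both its entropy deviation and its ARNN activation
    # exceed their thresholds; tau and theta are illustrative values.
    for node_id, dh in delta_h.items():
        if dh > tau and activation.get(node_id, 0.0) > theta:
            g.add((HDIR[f"Node_{node_id}"], HDIR.isLikelyUnderAttack,
                   Literal(attack_type)))
    return g

g = inject_anomalies(Graph(),
                     delta_h={27: 0.15, 3: 0.02},      # per-node entropy deviation ΔH(i)
                     activation={27: 0.91, 3: 0.40},   # per-node ARNN activation a_i(t)
                     attack_type="NTP_Amplification")
print(g.serialize(format="turtle"))                    # emits one triple, for Node_27 only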
Reproducibility
To ensure full reproducibility of our results, we provide all Jupyter notebooks, preprocessed datasets, and configuration files used in the (H-DIR)2 architecture experiments (SYN Flood, DAO-DIO, and NTP Amplification) via a public GitHub repository. All materials are available at the following link: https://github.com/RobUninsubria/HDIR2-paper.git (accessed on 29 June 2025).
Table A1. Figures and captions used throughout the manuscript.
Fig. | Label | Caption
1 | fig:workflow | (H-DIR)2 framework simulation pipeline. Operators O0 → O5 represent RDF ingestion, Spark SQL streaming, vectorization, ARNN prediction, semantic enrichment, and dynamic update.
2 | fig:dual-scalability | Dual scalability of the (H-DIR)2 architecture. Vertical scalability (Spark, RDD, Minikube) vs. horizontal semantic scalability (SPARQL, RDF, ARNN).
3 | fig:dual-level-cycle | Semantic–ARNN coupling. Dynamic loop between semantic rules and ARNN constraints, with ΔH > τ triggering adaptation.
4 | fig:etl-bridge | The ψ_sem→neu operator maps semantic triples into the ARNN input space, while the neural outputs are subsequently re-encoded as RDF triples.
5 | fig:graph-matrix | Neural attack graph construction. Nodes flagged as critical if ∑_j w_ij > γ, enabling proactive mitigation and risk propagation.
6 | fig:dao-rpl | DAO-DIO attack visualization. Entropy drift and semantic feedback highlight abnormal loops in IPv6 RPL topologies.
7 | fig:dao-routing | The center panel illustrates the impact of a DAO-DIO manipulation attack, which increases disorder and creates abnormal links.
8 | fig:ntp-amp | The framework semantically annotates TTL drift, route deviation, and back-end packet surge using RDF/SPARQL and subsequently visualizes them for analysis.
9 | fig:ntp-amp | NTP amplification case study.
10 | fig:scalability-stress | Throughput and latency vs. device count.
11 | fig:semantic-graph | Semantic-to-adaptive graph interaction. Conceptual diagram showing how RDF/SPARQL assertions affect node-level predictions via ψ_sem→neu, and how ARNN updates feed back into semantic enrichment via ψ_neu→sym.
Each figure is consistently labeled, cross-referenced, and contextually integrated throughout the manuscript.

References

  1. Mirsky, Y.; Doitshman, T.; Elovici, Y.; Shabtai, A. Kitsune: An ensemble of autoencoders for online network intrusion detection. arXiv 2018, arXiv:1802.09089. [Google Scholar] [CrossRef]
  2. Sicari, S.; Rizzardi, A.; Coen-Porisini, A. 5G in the Internet of Things Era: An Overview on Security and Privacy Challenges. Comput. Netw. 2020, 179, 107345. [Google Scholar] [CrossRef]
  3. García-Teodoro, P.; Díaz-Verdejo, J.; Maciá-Fernández, G.; Vázquez, E. Anomaly-based network intrusion detection: Techniques, systems and challenges. Comput. Secur. 2009, 28, 18–28. [Google Scholar] [CrossRef]
  4. Feily, M.; Shahrestani, A.; Ramadass, S. A survey of botnet and botnet detection. In Proceedings of the 2009 Third International Conference on Emerging Security Information, Systems, and Technologies, Athens, Greece, 18–23 June 2009; pp. 268–273. [Google Scholar]
  5. Kurtz, N.; Song, J. Cross-entropy-based adaptive importance sampling using Gaussian mixture. Struct. Saf. 2013, 42, 35–44. [Google Scholar] [CrossRef]
  6. Gelenbe, E.; Nakip, M. IoT Network Cybersecurity Assessment with the Associated Random Neural Network. IEEE Access 2023, 11, 85501–85512. [Google Scholar] [CrossRef]
  7. Canadian Institute for Cybersecurity. CIC-DDoS2019 Dataset. 2019. Available online: https://www.unb.ca/cic/datasets/ddos-2019.html (accessed on 29 June 2025).
  8. Zaharia, M.; Chowdhury, M.; Das, T.; Dave, A.; Ma, J.; McCauley, M.; Franklin, M.J.; Shenker, S.; Stoica, I. Resilient Distributed Datasets: A Fault-Tolerant Abstraction for In-Memory Cluster Computing. In Proceedings of the 9th USENIX Symposium on Networked Systems Design and Implementation (NSDI 12), San Jose, CA, USA, 25–27 April 2012; pp. 15–28. [Google Scholar]
  9. Gelenbe, E. Random Neural Networks with Negative and Positive Signals and Product Form Solution. Neural Comput. 1989, 1, 502–510. [Google Scholar] [CrossRef]
  10. Paxson, V.; Allman, M.; Chu, J.; Sargent, M. Computing TCP’s Retransmission Timer, RFC 6298, IETF, 2011.
  11. Marcov, L.; Redondi, A.; Zhou, Y.; Tosi, D. DAO-DIO Routing Manipulation Dataset; Dryad Digital Repository: 2023. [Google Scholar] [CrossRef]
  12. Mirsky, Y.; Doitshman, T.; Elovici, Y.; Shabtai, A. Kitsune Network Attack Dataset—NTP Amplification Subset. 2023. Available online: https://www.kitsune-dataset-collection.org/NTP-Subset (accessed on 29 June 2025).
  13. Wilson, E.B. Probable Inference, the Law of Succession, and Statistical Inference. J. Am. Stat. Assoc. 1927, 22, 209–212. [Google Scholar] [CrossRef]
  14. Li, X.; Wang, Y.; Zhang, H. Scalable Anomaly Detection in IoT Using Resilient Distributed Datasets and Machine Learning. In Proceedings of the 2023 IEEE International Conference on Big Data (Big Data), Sorrento, Italy, 15–18 December 2023; pp. 123–130. [Google Scholar]
  15. Tosi, D.; Pazzi, R. Design and Experimentation of a Distributed Information Retrieval-Hybrid Architecture in Cloud IoT Data Centers. In Proceedings of the IFIP International Internet of Things Conference, Nice, France, 7–9 November 2024; Springer: Cham, Switzerland, 2024; pp. 12–21. [Google Scholar]
  16. Mehmood, K.; Kralevska, K.; Palma, D. Knowledge Graph Embedding in Intent-Based Networking. In Proceedings of the 2024 IEEE 10th International Conference on Network Softwarization (NetSoft), Saint Louis, MO, USA, 24–28 June 2024; pp. 13–18. [Google Scholar]
  17. Pérez, J.; Arenas, M.; Gutierrez, C. Semantics and Complexity of SPARQL. ACM Trans. Database Syst. 2009, 34, 1–45. [Google Scholar] [CrossRef]
  18. Gu, Y.; Li, K.; Guo, Z.; Wang, Y. A Deep Learning and Entropy-Based Approach for Network Anomaly Detection in IoT Environments. IEEE Access 2019, 7, 169296–169308. [Google Scholar]
  19. Zaharia, M.; Xin, R.S.; Wendell, P.; Das, T.; Armbrust, M.; Dave, A.; Meng, X.; Rosen, J.; Venkataraman, S.; Franklin, M.J.; et al. Apache Spark: A Unified Engine for Big Data Processing. Commun. ACM 2016, 59, 56–65. [Google Scholar] [CrossRef]
  20. Yin, C.; Zhu, Y.; Fei, J.; He, X. A Deep Learning Approach for Intrusion Detection Using Recurrent Neural Networks. IEEE Access 2017, 5, 21954–21961. [Google Scholar] [CrossRef]
  21. Peng, W.; Chen, Q.; Peng, S. Graph-Based Security Analysis for Industrial Control Systems. IEEE Trans. Ind. Inform. 2018, 5, 1890–1900. [Google Scholar]
  22. Mosenia, A.; Jha, N.K. Addressing IoT Security Issues. IEEE Internet Things J. 2017. [Google Scholar]
  23. Awad, M.; Fraihat, S.; Salameh, K.; Al Redhaei, A. Examining the suitability of NetFlow features in detecting IoT network intrusions. Sensors 2022, 22, 6164. [Google Scholar] [CrossRef] [PubMed]
  24. Ramaiah, M.; Rahamathulla, M.Y. Securing the industrial IoT: A novel network intrusion detection models. In Proceedings of the 2024 3rd International Conference on Artificial Intelligence for Internet of Things (AIIoT), Vellore, India, 3–4 May 2024; pp. 1–6. [Google Scholar]
  25. Azeroual, O.; Nikiforova, A. Apache Spark and MLlib-Based Intrusion Detection System or How the Big Data Technologies Can Secure the Data. Information 2022, 13, 58. [Google Scholar] [CrossRef]
  26. Liu, F.T.; Ting, K.M.; Zhou, Z.-H. Isolation Forest. In Proceedings of the 2008 Eighth IEEE International Conference on Data Mining (ICDM), Pisa, Italy, 15–19 December 2008; pp. 413–422. [Google Scholar] [CrossRef]
  27. Ergen, T. Unsupervised and Semi-supervised Anomaly Detection with LSTM Neural Networks. arXiv 2017, arXiv:1710.09207. [Google Scholar] [CrossRef]
Figure 1. (Workflow): simulation pipeline of the (H-DIR)2 framework.
Figure 2. (Dual scalability): dual scalability of the (H-DIR)2 architecture.
Figure 3. (Dual-level cycle): bidirectional semantic–neural coupling and its dynamic update cycle.
Figure 4. (ETL bridge): the diagram shows how RDF triples and SPARQL rules are vectorized and injected into the ARNN core, producing neural activations and a dynamic attack graph.
Figure 5. (Graph matrix): (a) spatial distribution of the entropy variation ΔH in the RPL DAO-DIO attack (red = higher disorder). (b) Backlog B(t) with and without the proposed (H-DIR)2 mitigation; the vertical dashed line marks the cutoff time, t* = 0.43 s.
Figure 6. (DAO-RPL): DAO-DIO attack; comparison before and after mitigation.
Figure 7. (DAO routing): NTP amplification; dynamically reconfigured routing.
Figure 8. (NTP amp): traffic overload observed during a spoofed NTP amplification attack (amplification ×500).
Figure 9. (NTP amp): (a) peak load reduction achieved by four mitigation stacks as the number of edge nodes increases. (b) Learning dynamics of the ARNN early-stage predictor over 20 training epochs.
Figure 10. (Scalability stress): throughput and latency vs. device count. Throughput scales nearly linearly as the number of devices increases (left axis), while detection latency remains below 500 ms even at the highest simulated load (right axis), confirming both the vertical and horizontal scalability of the (H-DIR)2 framework under stress-test conditions.
Figure 11. (Semantic graph): view of the semantic–adaptive integration loop.
Table 1. Core components of the (H-DIR)2 framework.
Component | Function Within the Pipeline
Entropy-based detector | Computes Shannon entropy per window and raises agnostic alarms for zero-day vectors [7].
Apache Spark/Spark SQL | Distributed micro-batch analytics sustaining terabyte-scale streams [8].
Adaptive Random Neural Network | Online learning that converts traffic features into probabilistic Network Attack Graphs [9].
RDF/SPARQL layer | Serializes each packet as triples, enabling rule-based reasoning and explainability [10].
Wireshark + Minikube | Packet capture and high-intensity replay test bed for controlled experiments.
Table 2. Summary of targeted cyber-attacks used for evaluation.
Attack | Protocol Layer | Key Entropy Signal | Mitigation Module
TCP SYN Flood | Transport | ΔH flag spike | Adaptive Rate Limiter (Section 4.1)
DAO-DIO (RPL) | IoT Network | ΔH path drift | Route Sanitizer (Section 4.2)
NTP Amplification | Application/UDP | ΔH size bimodality | Amplification Throttler (Section 4.3)
Table 3. Legend for the (H-DIR)2 simulation pipeline.
Symbol | Meaning
Ω | Measurable space of observed network events.
𝒜 | σ-algebra on Ω.
ℙ | Probability measure on (Ω, 𝒜).
Gₜ = (V, Wₜ) | Weighted attack graph at discrete time t.
V | Set of vertices (network nodes).
Wₜ | Set of weights on time-varying directed risk relationships.
Table 4. Example of SPARQL risk attribution query in the (H-DIR)2 semantic graph.
SPARQL Query: Identify High-Risk Hosts Under Mitigation
SELECT ?host WHERE {
  ?host :isAmplificationSource "suspected" ;
        :underMitigation true ;
        :hasSeverity "high" .
}
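As a usage illustration, the following Python/rdflib sketch builds a tiny in-memory graph and runs the query from Table 4 against it; the prefix IRI and the host identifier are hypothetical assumptions, since the concrete ontology IRI is not shown in the table.

from rdflib import Graph, Literal, Namespace

NS = Namespace("http://example.org/hdir#")   # hypothetical IRI behind the ':' prefix

g = Graph()
g.add((NS.Host_12, NS.isAmplificationSource, Literal("suspected")))
g.add((NS.Host_12, NS.underMitigation, Literal(True)))    # serialized as xsd:boolean
g.add((NS.Host_12, NS.hasSeverity, Literal("high")))

QUERY = """
PREFIX : <http://example.org/hdir#>
SELECT ?host WHERE {
  ?host :isAmplificationSource "suspected" ;
        :underMitigation true ;
        :hasSeverity "high" .
}
"""
for row in g.query(QUERY):
    print(row.host)    # -> http://example.org/hdir#Host_12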
Table 5. Performance metrics for the SYN Flood attack scenario. All values refer to the detection performance on the SYN Flood dataset, evaluated under the (H-DIR)2 framework.
Metric | Value | 95% CI
Accuracy | 94.1% | [93.7, 94.5]
FPR | 4.7% | [4.3, 5.1]
AUC | 0.978 | ±0.004
τ_det | 247 ms | [221, 273] ms
ΔH* (peak) | −1.15 bits | N/A
r_attack | 27.4 ± 3.5 | N/A
Table 6. Effectiveness of (H-DIR)2 against DAO-DIO attacks.
Metric | Before | After | Improvement
Routing loops [#] | 9.0 | 2.0 | −78%
Max incoming risk (∑w) | 4301 | 2550 | −41%
PDR [%] | 81.2 | 96.4 | +18%
Avg. loop duration [s] | 18.0 | 5.0 | −72%
Table 7. Performance against NTP amplification.
Architecture | Peak Load [Gb/s] | τ_mit [s] | Backend Reduction
Centralized | 8.1 | 7.1 | 0%
Distributed | 4.3 | 3.1 | 47%
+Caching | 1.2 | 2.0 | 85%
(H-DIR)2 | 1.0 | 1.7 | 88%
Table 8. Overview of benchmark datasets used for generalization testing.
Dataset | Origin | Protocols | Type | Description
CIC-DDoS2019 | CIC-IDS Lab | TCP/UDP/ICMP | Network Flow | Simulated DDoS traces with labeled attack categories
TON_IoT | UNSW Canberra | TCP/UDP/HTTP | Syslog + Netflow | IoT-oriented dataset combining telemetry, network traffic, and system metrics
Edge-IIoTset | Edge-IoT Lab | MQTT, CoAP, HTTP | Multi-protocol | Decentralized edge computing scenarios with embedded attack sequences
Table 9. Performance and resource usage of (H-DIR)2 deployment on different platforms.
Platform | Inference Latency | Stack | Remarks
Raspberry Pi 4B | <60 ms | RDF + ARNN | Edge-compatible; supports only one protocol stack (e.g., TCP) at a time.
Jetson Nano | <20 ms | RDF + ARNN (GPU) | GPU-accelerated; enables simultaneous multi-protocol inference.
AWS t3.medium | <8 ms | Spark + RDF + ARNN | Full-stack deployment includes Spark pipeline, RDF layer, and neural core.
Table 10. Stress test results: throughput and latency vs. device count.
Devices | Data (TB) | Latency (ms) | Throughput (Gbps) | ΔH Stability
100 | 0.01 | 45 | 0.5 | Stable
1000 | 0.10 | 70 | 1.1 | Stable
10,000 | 1.0 | 110 | 5.8 | Stable
100,000 | 5.0 | 230 | 19.4 | Stable
1,000,000 | 10.0 | 470 | 36.0 | Slight Drift
Table 11. Comparative summary across scenarios.
Attack Type | Target Protocol | Key Threat and Detection Method | Response
SYN Flood | TCP | H entropy + ARNN graph | SYN cookies; adaptive throttling
DAO-DIO | RPL (IoT) | Routing loops; black-hole detection | Entropy + semantic RDF; graph-based reconfiguration
NTP Amplification | UDP/NTP | Bandwidth congestion; saturation profiling | ARNN + LSTM + load profiling; caching; smart filtering; isolation
Table 12. Analytics layers in the H-DIR mitigation pipeline (vertical layout).
Layer | Details
Entropy Monitor | Governing equation: $H(X) = -\sum_{i=1}^{n} p(x_i)\log p(x_i)$. Alert when $\Delta H = H - H_{\mathrm{baseline}} \ge \theta_H$. Purpose: fast, feature-agnostic anomaly flagging. Key tunables: feature set F, window width w, threshold $\theta_H$.
Adaptive Random Neural Network (ARNN) | Governing equation: $a_{t+1} = f\big(\sum_j w_{ij}\, a_j^{t} + b + x_t\big)$. Weight update: $w_{ij} \leftarrow w_{ij} - \eta\, \partial L_{\mathrm{total}} / \partial w_{ij}$, where $L_{\mathrm{total}} = \alpha L_{\mathrm{class}} + \beta L_{\mathrm{graph}}$. Purpose: learns normal propagation patterns and updates attack-graph edge estimates in real time. Key tunables: learning rate η, α/β balance, number of hidden units.
Network-Attack Graph (NAG) | Governing equations: adjacency matrix $W = [w_{ij}]$; attack-path probability $P_{\mathrm{attack}}(N_i \to N_k) = \prod_{i=1}^{k-1} w_{i,i+1}$; critical nodes $N_{\mathrm{crit}} = \{\, i : \sum_j w_{ij} > \gamma \,\}$. Purpose: predicts likely propagation paths and identifies "hot" nodes to quarantine. Key tunables: risk cut-off γ, number of top-k paths tracked.
Load balancing/Caching (UDP amplification) | Governing equations: centralized load $L = \sum_i R_i$; per-server load with cache $L_j = (1 - C)\,\sum_i R_i / S$. Purpose: explains how anycast and edge caching reduce the traffic seen by each origin server. Key tunables: cache ratio C, number of servers S.
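To make the Entropy Monitor and NAG layers of Table 12 concrete, the following Python sketch computes a windowed Shannon-entropy alarm and the critical-node set N_crit; all numeric values (baseline, thresholds θ and γ, weights) are synthetic illustrations, not the tuned parameters of the experiments.

import numpy as np
from collections import Counter

def shannon_entropy(values):
    # H(X) = -sum_i p(x_i) * log2 p(x_i), estimated over one observation window.
    counts = np.array(list(Counter(values).values()), dtype=float)
    p = counts / counts.sum()
    return float(-(p * np.log2(p)).sum())

def entropy_alert(window, h_baseline, theta):
    # Alert rule of the Entropy Monitor layer; this sketch flags deviations in
    # magnitude, since an attack may either raise or lower the window entropy.
    d_h = shannon_entropy(window) - h_baseline
    return d_h, abs(d_h) >= theta

def critical_nodes(W, gamma):
    # N_crit = {i : sum_j w_ij > gamma}, from the NAG adjacency matrix W.
    return np.where(W.sum(axis=1) > gamma)[0]

# Synthetic example: a SYN-skewed flag window and a 3-node attack graph.
window = ["SYN"] * 90 + ["ACK"] * 5 + ["FIN"] * 5
d_h, alarm = entropy_alert(window, h_baseline=1.9, theta=0.5)
W = np.array([[0.0, 0.9, 0.8],
              [0.1, 0.0, 0.2],
              [0.7, 0.6, 0.0]])
print(f"dH = {d_h:+.2f} bits, alarm = {alarm}, N_crit = {critical_nodes(W, gamma=1.2)}")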
Table 13. Comparative evaluation of (H-DIR)2 against state-of-the-art anomaly detection.
Method | Latency (ms) | AUC | F1-Score | Explainability (Entropy) | Semantic Reasoning | Real-Time | Ref.
Spark IDS | 950 | 0.91 | 0.87 | ✗ | ✗ | ✓ | [M1]
Kitsune-AE | 670 | 0.93 | 0.89 | ✗ | ✗ | ✓ | [M2]
Isolation Forest | 720 | 0.91 | 0.86 | ✗ | ✗ | ✓ | [M3]
Gated RNN | 580 | 0.94 | 0.88 | ✗ | ✗ | ✓ | [M4]
(H-DIR)2 | 247 | 0.98 | 0.95 | ✓ | ✓ | ✓ | This work
Note: ✓ indicates full support of the feature; ✗ indicates that the feature is not supported or not implemented in the evaluated method.
