1. Introduction
Vehicular Ad Hoc Networks (VANETs) enable time-critical vehicle-to-vehicle and vehicle-to-infrastructure communication to enhance road safety, optimize traffic flow, and support cooperative services [1,2]. However, high mobility, rapidly changing topologies, intermittent connectivity, and short-lived communication windows make integrity, availability, and timely agreement difficult to achieve in practice [3,4]. In this setting, the core systems challenge is not only which consensus algorithm to use but governance: deciding when and how strongly to validate and commit information under strict V2X freshness constraints and mobility-driven fragmentation.
Blockchain-style ledgers have been proposed to improve auditability, integrity, and coordination in VANETs, yet most prior work focuses on isolated objectives: fixed-parameter consensus performance [5,6], entropy-inspired indicators without closed-loop control [7], or single-algorithm evaluations under restricted conditions [8]. What remains missing is a unified operational framework that (i) makes injection–validation dynamics explicit under deadlines, (ii) measures spatial dispersion in real time (a key driver of partitions, forks, and coherence loss), and (iii) adapts the consensus regime and validation rigor to instantaneous disorder rather than relying on fixed baselines.
We model VANET ledger governance as a control loop with two antagonistic legs: (i) message/transaction injection, which increases informational disorder, and (ii) consensus validation, which consumes resources to compress disorder and improve an operational Quality-of-Information (QoI) proxy. Injection tends to increase informational entropy S and is exacerbated by topology fragmentation captured by spatial entropy H, whereas validation acts as the “work” leg that filters stale/invalid microstates and stabilizes convergence under deadlines. The Ideal Information Cycle (Figure 1) is used strictly as an operational governance abstraction—not as a claim of physical thermodynamic equivalence—and motivates the monotonicity/stability constraints imposed during policy-map calibration.
Building on this abstraction, we introduce a decentralized, cluster-local VANET Engine (e.g., per intersection/segment/connected subgraph) implemented as a real-time control loop in NS-3.35 [9]. The Engine monitors two normalized Shannon entropies, informational entropy S over active transactions and spatial entropy H over occupancy bins (both normalized to [0, 1]), and adapts (a) the consensus mode (latency-feasible PoW vs. signature/quorum-based modes such as PoS/FBA) and (b) key rigor parameters via calibrated policy maps. Governance is formulated as a constrained operational objective trading per-block resource expenditure (radio + cryptography) against a QoI proxy derived from delay/error tiers, subject to latency and ledger-coherence pressure (formal statement in Section 3.6). To implement this objective, the Engine applies the calibrated maps

D = g(S, H),  T = f(S, H),

where D denotes a PoW target register (256-bit scale) and T a dimensionless stake/quorum rigor threshold; both maps are learned via constrained policy approximation with cross-validation and stability constraints (Section 3.5.5).
Our claims are limited to operational security and performance for V2X-oriented ledgers under mobility and churn, with full PHY/MAC fidelity up to
vehicles. Entropy is used strictly as a measurable governance signal (not a physical quantity). Signature/quorum-based modes (PoS/DPoS/FBA) are interpreted in a permissioned/consortium sense (configured validator sets/quorum slices), consistent with realistic deployments involving RSUs and credentialing. PoW targets are intentionally latency-feasible to respect freshness constraints and therefore do not provide cryptocurrency-grade majority-hash security; security implications and mode-transition safeguards (including entropy manipulation) are discussed in
Section 4.
To match the empirical tests in
Section 6, we evaluate the following hypotheses using matched-seed contrasts:
- H1:
In latency-feasible regimes, PoW cryptographic energy per block increases with informational entropy S, while signature/quorum-based modes are more weakly coupled to S.
- H2:
A consensus-first (CF) policy (validate immediately at the consensus trigger) reduces agreement/commit latency (time from consensus trigger to block commit) and packet overhead and increases throughput relative to broadcast-first (BF) baselines that wait a fixed dwell time before validating.
- H3:
Increasing spatial disorder (higher H) degrades timeliness and validation accuracy under static settings; the adaptive Engine mitigates this degradation by tightening rigor where dispersion is highest.
- H4:
Increasing mobility (speed v) increases orphan/fork rates and finality latency (time to irreversible confirmation) under static schemes; the adaptive Engine limits these increases via entropy-driven mode/parameter updates.
- H5:
Under high spatial disorder, the adaptive Engine reduces ledger divergence relative to static baselines.
We implement the Engine across urban and highway settings with 30 matched random seeds per configuration and 600 s runs (unless otherwise stated), evaluating scenarios up to
vehicles under full PHY/MAC fidelity. We include scheduled partitions (controlled
k-cut disconnections) and bounded adversarial stressors (Sybil-like pseudonyms, Byzantine proposers, and eclipse windows) as sensitivity analyses; parameters and definitions are reported where used (
Section 5 and
Section 3.5.5). All references to ledger divergence use the same LCP-normalized (longest-common-prefix) definition, given for all curves and statistics in Section Ledger Coherence and Fork/Orphan Detection within Section 3.1; binning/aggregation and confidence intervals follow that definition consistently (and it is reiterated in
Appendix A for completeness). Reproducibility artifacts (scenario drivers, seed lists, configuration files, and plotting scripts) are released in the public repository snapshot described in
Section 5.6.
Our main contributions are as follows:
- Control-inspired injection–validation abstraction. An Ideal Information Cycle providing a consistent operational interpretation of injection, validation effort, and QoI under V2X deadline constraints.
- Entropy-aware decentralized governance loop. A modular VANET Engine that monitors normalized entropies and adapts consensus regimes and rigor parameters in real time.
- Stability-constrained policy maps for hybrid consensus. Cross-validated mappings D = g(S, H) and T = f(S, H) learned under monotonicity and Lipschitz stability constraints to avoid brittle fixed-parameter hybrids.
- Prototype and evaluation under mobility and stressors. An NS-3 implementation with 30 matched seeds and bootstrap confidence intervals, reporting latency, per-block energy, throughput, finality, and coherence under mobility, partitions, and bounded adversarial stressors.
3. Hypothesis Formulation and Methodology
This work targets secure, low-latency governance in VANETs by combining a control-inspired Ideal Information Cycle—used strictly as an engineering analogy for resource–quality trade-offs—with a modular VANET Engine deployed per geographic cluster. The Engine monitors system disorder and adapts the consensus mode and rigor in real time. To minimize redundancy, formal metric definitions are provided below (
Section 3.1), while symbols are summarized in
Table 4.
To operationalize the control loop, we use two normalized Shannon entropies (mapped to [0, 1]) as the primary real-time observables: S, the entropy of the active transaction microstate distribution, and H, the entropy over spatial occupancy bins.
These observables quantify the disorder that the system must counteract. The resulting governance cycle, where validation rigor balances injected disorder to maintain a Quality-of-Information (QoI) proxy, is illustrated in
Figure 1.
3.1. Core Metrics, Normalization, and Boundedness
We distinguish (i) state observables used by the controller (entropies and QoI proxy) and (ii) evaluation metrics reported in the results (agreement latency, finality, energy, throughput, orphaning, and coherence).
Let T(t) be the set of distinct pending transactions at time t, and let c_i(t) be the number of nodes currently holding transaction id i ∈ T(t). Define the normalized copy distribution p_i(t) = c_i(t) / Σ_{j ∈ T(t)} c_j(t) and the bounded Shannon entropy

S(t) = −(1 / ln |T(t)|) Σ_{i ∈ T(t)} p_i(t) ln p_i(t).

We apply Laplace smoothing with a small constant ε to avoid numerical issues; results are insensitive to ε. We use natural logarithms (nats) and normalize by ln |T(t)| so that S(t) ∈ [0, 1] by construction (restated in Appendix A.8).
Partition the area into M spatial bins; let n_j(t) be the number of vehicles in bin j and N(t) = Σ_{j=1}^{M} n_j(t). With q_j(t) = n_j(t) / N(t),

H(t) = −(1 / ln M) Σ_{j=1}^{M} q_j(t) ln q_j(t).

Spatial binning uses a fixed M per scenario, and normalization by ln M ensures H(t) ∈ [0, 1] (restated in Appendix A.8).
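For concreteness, both normalized entropies above can be computed from raw counts in a few lines; the sketch below is illustrative (the function names and default smoothing constant are ours, not the simulator's API).

```python
import math
from collections import Counter

def normalized_entropy(counts, eps=1e-9):
    """Normalized Shannon entropy in [0, 1] (nats) with Laplace smoothing.

    `counts` are nonnegative copy/occupancy counts over K categories;
    dividing by ln(K) bounds the result: 0 = fully concentrated, 1 = uniform.
    """
    k = len(counts)
    if k <= 1:
        return 0.0  # a single category carries no disorder
    smoothed = [c + eps for c in counts]          # Laplace smoothing
    total = sum(smoothed)
    probs = [c / total for c in smoothed]
    h = -sum(p * math.log(p) for p in probs)      # Shannon entropy (nats)
    return h / math.log(k)                        # normalize by ln(K)

# S over per-transaction copy counts (uniform copies -> S near 1):
copy_counts = Counter({"tx1": 5, "tx2": 5, "tx3": 5})
S = normalized_entropy(list(copy_counts.values()))

# H over M spatial occupancy bins (all vehicles in one bin -> H near 0):
bin_counts = [12, 0, 0, 0]
H = normalized_entropy(bin_counts)
```

Because both observables share the same normalization, a single routine serves S and H; only the count source differs.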
For each transaction i, we record (i) the end-to-end delay d_i (ms) and (ii) a binary validity indicator v_i ∈ {0, 1}, where v_i = 0 denotes an invalid signature, stale timestamp, or failed format/consistency check. QoI is operationalized through delay/error tiers (Section 3.5.5) and used in candidate-set formation (prioritize small d_i and discard v_i = 0). QoI is an engineering proxy and is not interpreted as a physical quantity.
Agreement (commit) latency L is measured as the time from a consensus trigger (proposal/validation start) to the time the block is committed locally:

L = t_commit − t_trigger.

This metric matches the “sub-100 ms” agreement claim in favorable regimes; finality is reported separately.
Finality F measures time to irreversibility under the mode-specific rule: for PoW, F is the time from commit until the block is buried by k subsequent blocks (we report k with the results); for signature/quorum modes, F is the time from proposal until the required quorum/threshold is reached and a commit certificate is formed.
Ledger Coherence and Fork/Orphan Detection
Let B_u(t) denote the ordered block sequence (main-chain view) at node u at time t, with height h_u(t) = |B_u(t)|. For any pair (u, v), define LCP_{uv}(t) as the length of the longest common prefix of B_u(t) and B_v(t). We define the pairwise bounded divergence

d_{uv}(t) = 1 − LCP_{uv}(t) / max(h_u(t), h_v(t)),

and the network-average ledger divergence

Δ(t) = (2 / (n(t)(n(t) − 1))) Σ_{u<v} d_{uv}(t),

where n(t) is the number of active nodes at time t. In all figures and statistical analyses, “ledger divergence” refers to this LCP-normalized definition.
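The pairwise and network-average divergences reduce to a short computation over per-node main-chain views; the sketch below uses illustrative block-id lists and helper names of our choosing.

```python
def lcp_len(chain_u, chain_v):
    """Length of the longest common prefix of two block-id sequences."""
    n = 0
    for a, b in zip(chain_u, chain_v):
        if a != b:
            break
        n += 1
    return n

def ledger_divergence(chains):
    """Network-average LCP-normalized divergence over all node pairs.

    `chains` maps node id -> ordered main-chain view; pairwise divergence
    is 1 - LCP / max(height_u, height_v), averaged over distinct pairs.
    """
    nodes = list(chains)
    pairs = [(u, v) for i, u in enumerate(nodes) for v in nodes[i + 1:]]
    if not pairs:
        return 0.0
    total = 0.0
    for u, v in pairs:
        denom = max(len(chains[u]), len(chains[v]))
        lcp_frac = lcp_len(chains[u], chains[v]) / denom if denom else 1.0
        total += 1.0 - lcp_frac
    return total / len(pairs)
```

A one-block fork between two three-block chains yields a pairwise divergence of 1/3, matching the bounded definition.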
We report the orphan/fork rate O as the fraction of produced blocks that do not lie on the final main chain at simulation end:

O = (number of blocks with orphan_flag = 1) / (total number of produced blocks),

computed from the logged orphan_flag (Section 5.5).
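Given per-block rows carrying the logged orphan_flag field (Section 5.5), the orphan rate is a one-line aggregate; the row layout below is an illustrative stand-in for the CSV schema.

```python
def orphan_rate(block_log):
    """Fraction of produced blocks not on the final main chain at sim end.

    `block_log` is a list of per-block records carrying an `orphan_flag`
    field (1 = block left off the final main chain).
    """
    if not block_log:
        return 0.0
    orphans = sum(1 for b in block_log if b["orphan_flag"])
    return orphans / len(block_log)
```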
3.2. Nomenclature
A compact list of symbols is provided in
Table 4 to avoid re-defining variables throughout the methods.
3.3. The VANET Engine: Entropy-Driven Governance
Every Δt (default 1 s), each cluster-local Engine samples S and H and applies exponential smoothing (EMA) to avoid thrashing around thresholds (Appendix A, Algorithm A1). If either smoothed observable exceeds its threshold (S > S_th or H > H_th), the Engine increases validation rigor; otherwise, it relaxes rigor. The thresholds S_th and H_th are selected and validated as described in Section 3.5.5.
Consensus rigor is instantiated via the calibrated maps

D = g(S, H),  T = f(S, H),

where D is the PoW target and T denotes the mode-specific stake/quorum threshold. The maps are selected by a cross-validated function-class search under monotonicity and stability constraints (Section 3.5.5).
When the policy switches away from PoW (typically under elevated S and/or H), the Engine selects a signature/quorum-based mode using a deterministic rule based on the same logged observables to ensure reproducibility. Concretely, (i) PoS is preferred when the validator set remains stable within the current cluster window (low churn and low dispersion); (ii) DPoS is selected when limiting committee size is beneficial to reduce message overhead under moderate density; and (iii) FBA is preferred under high dispersion/fragmentation because quorum slices can be configured to preserve safety within partially connected components. The selected mode and its rigor parameter T are logged per event (Section 5.5), enabling exact replay of all mode decisions in post-processing.
Candidate sets prioritize fresher transactions (small d_i) and discard invalid/stale ones (v_i = 0) before consensus. Algorithm 1 summarizes the cluster-local control loop.
| Algorithm 1 VANET Engine (cluster-local control loop) |
1: Repeat every Δt: sample the cluster state; compute S and H |
2: if S > S_th or H > H_th then |
3: mode ← signature/quorum-based |
4: else |
5: mode ← PoW |
6: end if |
7: Build a QoI-aware candidate set; execute the selected consensus; append block |
8: Purge invalid transactions; update local state; loop |
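A minimal sketch of Algorithm 1's sensing-and-switching core follows; the EMA factor and hysteresis band are illustrative placeholders, not the calibrated values, and candidate-set construction and consensus execution (lines 7–8) are elided.

```python
class VanetEngine:
    """Cluster-local control loop: EMA-smoothed (S, H) observables,
    two-threshold hysteresis, and mode selection (sketch)."""

    def __init__(self, s_th=0.5, h_th=0.6, hyst=0.05, alpha=0.3):
        self.s_th, self.h_th = s_th, h_th   # thresholds (cf. --Sth/--Hth)
        self.hyst = hyst                    # hysteresis band (assumed width)
        self.alpha = alpha                  # EMA smoothing factor (assumed)
        self.s_ema = self.h_ema = 0.0
        self.mode = "PoW"

    def tick(self, s_sample, h_sample):
        # EMA smoothing avoids thrashing around the thresholds.
        self.s_ema += self.alpha * (s_sample - self.s_ema)
        self.h_ema += self.alpha * (h_sample - self.h_ema)
        # Hysteresis: escalate eagerly, relax only once clearly below band.
        if self.s_ema > self.s_th or self.h_ema > self.h_th:
            self.mode = "quorum"            # signature/quorum-based family
        elif (self.s_ema < self.s_th - self.hyst
              and self.h_ema < self.h_th - self.hyst):
            self.mode = "PoW"
        return self.mode
```

Sustained high-entropy samples drive the EMA above threshold and flip the mode to the quorum family; brief spikes are absorbed by the smoothing and hysteresis band.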
3.4. Hypotheses
We test the following hypotheses using matched-seed contrasts across identical mobility/load conditions.
- H1:
In latency-feasible regimes, PoW cryptographic energy per block increases with informational disorder S, while signature/quorum-based modes are more weakly coupled to S.
- H2:
A consensus-first (CF) policy (validate immediately at the consensus trigger) reduces agreement latency L and packet overhead and increases throughput relative to dwell-based broadcast-first (BF) baselines.
- H3:
Increasing spatial disorder (higher H) degrades timeliness and coherence under static settings; the adaptive Engine mitigates this degradation by tightening rigor where dispersion is highest.
- H4:
Increasing mobility increases orphaning and finality time F under static schemes; the adaptive Engine limits these increases via entropy-driven mode/parameter updates.
- H5:
Under high disorder, the Engine preserves microstate consistency by reducing ledger divergence relative to static baselines.
3.5. Detailed Methodology
3.5.1. Simulation Environment and Implementation
All reported experiments rely on NS-3.35 [9]. The implementation follows a modular architecture (detailed in Section 5).
3.5.2. Mobility and Dynamic Conditions
We generate mobility traces with BonnMotion [
27] and evaluate multiple densities to capture both moderate and city-scale regimes. Unless otherwise stated, we report results for
vehicles (matched seeds), and we include an additional high-density trial with
vehicles to stress the MAC/PHY under broadcast-heavy urban conditions (i.e., a broadcast-storm-like regime). Each configuration runs for 600 s with 30 matched seeds.
3.5.3. Partition and Adversarial Stressors (Sensitivity Analysis)
We include scheduled partitions and bounded adversarial stressors (Sybil-like pseudonyms, Byzantine proposers, and eclipse windows) to assess sensitivity and failure modes. Parameters are fully disclosed for reproducibility (
Table 5 and related text). Extended cases and implementation hooks are documented in
Appendix A and
Section 5.9.
3.5.4. Experimental Metrics
We record agreement latency L (ms), finality F (ms), per-block energy (J; radio + crypto), throughput (tx/s), orphan rate O (%), and ledger divergence. Metrics are computed per run and aggregated across matched seeds.
3.5.5. Parameterization and Calibration
To avoid overfitting crypto energy to a single hardware stack, we model cryptographic costs using OBU profiles that capture variability in CPU class, firmware/crypto library efficiency, and power states. Concretely, the per-operation energy terms are treated as profile-dependent: E_hash (per hash) and E_verify (per signature verification), with values drawn from the active OBU profile. Unless otherwise stated, the main plots use the mid-profile (OBU-M), and we report sensitivity across profiles in the scalability/robustness analysis.
Table 6 reports the OBU-aware ranges used throughout.
Let D denote the PoW target (smaller ⇒ harder). Under uniform hashes, the per-hash success probability is D/2^256, so the expected number of attempts satisfies E[N] = 2^256 / D. Define the effective difficulty bits b_eff = 256 − log2(D).
Targets are calibrated for timeliness in the simulated environment and are not intended to match cryptocurrency-grade PoW hardness; implications are discussed in
Section 4.
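Under the uniform-hash model, the calibration arithmetic is straightforward; the sketch below inverts the timeliness constraint for an assumed OBU hash rate and latency budget (both numbers are hypothetical, not measured values).

```python
import math

def expected_attempts(target):
    """Expected hash attempts E[N] = 2**256 / D under uniform hashing."""
    return 2**256 / target

def effective_bits(target):
    """Effective difficulty bits b_eff = 256 - log2(D)."""
    return 256 - math.log2(target)

def latency_feasible_target(hash_rate_hps, budget_s):
    """Pick the target D so that the expected mining time
    E[N] / hash_rate fits the latency budget (assumed calibration model)."""
    return 2**256 / (hash_rate_hps * budget_s)

# A hypothetical OBU at 1e5 H/s with a 100 ms budget solves ~1e4 hashes
# per block, i.e. only ~13 effective difficulty bits -- far below
# cryptocurrency-grade hardness, as discussed in Section 4.
D = latency_feasible_target(1e5, 0.1)
```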
We select S_th and H_th via a grid scan over (S_th, H_th) on a training subset of seeds using knee detection: the operating point beyond which latency increases nonlinearly and delivery drops below 95%. Robustness is evaluated on held-out seeds; threshold perturbations yield only modest variation in headline metrics.
To instantiate g and f, we perform a constrained function-class search with five-fold cross-validation over mobility and load profiles. Candidate families include low-order polynomial, exponential/log, and spline bases; Fourier terms in (S, H) are admitted only if they improve out-of-sample fit without violating stability constraints. We enforce (i) a non-increasing D = g(S, H) in S, (ii) bounded sensitivity to H via a Lipschitz cap to prevent thrashing, and (iii) feasibility bounds for D and T. Coefficients, diagnostics, and constraint checks are reported for reproducibility (Section 3.5.6 and Appendix B).
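The monotonicity and Lipschitz constraints can be checked empirically on a sampled grid; the sketch below assumes a candidate map g and an illustrative cap value.

```python
def check_policy_constraints(g, s_grid, h_grid, lipschitz_cap):
    """Grid check of the calibration constraints:
    (i) D = g(S, H) non-increasing in S (more disorder => no easier PoW);
    (ii) bounded sensitivity to H via a Lipschitz cap (anti-thrashing).
    Returns True when both hold on the sampled grid."""
    for h in h_grid:
        vals = [g(s, h) for s in s_grid]
        if any(b > a for a, b in zip(vals, vals[1:])):
            return False                  # monotonicity in S violated
    for s in s_grid:
        for h0, h1 in zip(h_grid, h_grid[1:]):
            if abs(g(s, h1) - g(s, h0)) > lipschitz_cap * abs(h1 - h0):
                return False              # Lipschitz bound in H violated
    return True
```

In the actual calibration, any candidate family failing these checks is discarded before cross-validated fit comparison.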
3.5.6. Fitted Policy Maps and Coefficients
We report the final closed-form maps D = g(S, H) and T = f(S, H) used by the Engine, with coefficient values in Table 7.
3.5.7. Statistical Procedures
All figures report means over 30 matched seeds with 95% bias-corrected and accelerated (BCa) bootstrap confidence intervals (10,000 resamples). Seeds are matched across conditions (blocking by mobility/load), and Holm–Bonferroni correction is applied for multiple pairwise contrasts. For headline comparisons we additionally report paired contrasts across matched seeds (
Appendix A).
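To illustrate the resampling step over seed-wise aggregates, the sketch below implements the simpler percentile bootstrap; the reported results use BCa intervals, which additionally correct for bias and acceleration.

```python
import random
import statistics

def percentile_bootstrap_ci(per_seed_values, n_resamples=10_000,
                            level=0.95, rng=None):
    """Percentile bootstrap CI for the mean of seed-wise aggregates.

    Resamples the per-seed values with replacement and takes empirical
    quantiles of the resampled means (illustrative; not BCa).
    """
    rng = rng or random.Random(0)         # fixed seed for reproducibility
    n = len(per_seed_values)
    means = sorted(
        statistics.fmean(rng.choices(per_seed_values, k=n))
        for _ in range(n_resamples)
    )
    lo_idx = int((1 - level) / 2 * n_resamples)
    hi_idx = int((1 + level) / 2 * n_resamples) - 1
    return means[lo_idx], means[hi_idx]
```

With 30 matched seeds per condition, the resampled distribution of means tightens around the seed-wise average, and the interval endpoints are the 2.5%/97.5% empirical quantiles.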
3.6. Theoretical Formulation and Validation Protocol
We model governance as a control decision taken at discrete times k (period Δt), based on the observed state x_k = (S_k, H_k). The governance action is

a_k = (m_k, θ_k) = π(x_k),

where m_k is the consensus mode, θ_k are the mode-dependent rigor registers (the active D or T), and π denotes the triggering policy.
We define a free-governance potential that trades resource cost against QoI under timeliness and coherence pressure,

Φ_k = E_k − λ Q_k,   minimized subject to L_k ≤ L_max and Δ_k ≤ Δ_max,   (1)

where the latency budget L_max and the divergence bound Δ_max define the operational V2X envelope.
We impose bounded registers, timeliness/coherence envelopes, a monotone response in S, a Lipschitz response in H, and anti-thrashing switching (hysteresis plus a minimum dwell time). These constraints are enforced during policy-map calibration and in the runtime supervisor (Appendix A).
Clamping ensures well-posed bounded registers; EMA and Lipschitz caps bound control variation; and hysteresis/minimum dwell prevent chattering. The monotone envelope in S ensures that increasing transaction disorder cannot yield weaker validation, aligning the implemented policy with (1) in high-disorder regimes.
We compare (i) the adaptive Engine (mode + rigor adaptation), (ii) static PoW (fixed D), (iii) static signature/quorum-based modes (fixed T), and (iv) a Vanilla Hybrid (mode switching with fixed rigor) under matched seeds and identical mobility/load.
Vanilla Hybrid uses the same (S_th, H_th) switching surface as the Engine but keeps all rigor parameters fixed at their default values. Thus, VH isolates the effect of mode switching alone, without continuous entropy-conditioned tuning.
4. Security Analysis and Threat Model
Our objective is operational security for V2X-oriented ledgers: (i) integrity at admission (reject invalid/stale items), (ii) timeliness under churn and contention, and (iii) convergence/coherence within cluster-local connectivity windows. We do not claim cryptocurrency-grade, permissionless Nakamoto security. In particular, PoW targets in our experiments are calibrated to satisfy V2X timeliness constraints (
Section 3.6); therefore, PoW in this paper must be interpreted as a latency-feasible mechanism rather than a stand-alone defense against sustained majority-hash adversaries.
4.1. Threat Model and Assumptions
We aim to protect (A1) ledger integrity (no invalid/stale transactions admitted), (A2) timely agreement under V2X budgets, and (A3) ledger coherence (bounded divergence/orphaning) despite mobility-driven fragmentation.
Beyond static attacker capabilities (Sybil identities, Byzantine proposers, and eclipse/partition attempts), we explicitly consider a control-loop adversary that adaptively observes recent mode decisions and perturbs the transaction/event stream to steer the controller toward unfavorable operating points (e.g., increased overhead, elevated latency, or prolonged conservative operation). This includes attempts to manipulate measured disorder signals (entropy inflation/deflation) and to exploit transition dynamics. We assume that the adversary remains crypto-bounded (cannot break signatures/hashes) and resource-bounded (cannot sustain global majority control indefinitely) but can mount targeted, bursty attacks.
Section 4.4 details hardenings that maintain safety under such adaptive pressure.
Transactions and control messages are signed and undergo timestamp/freshness checks; invalid or stale items are labeled (v_i = 0) and handled by QoI-aware admission (
Section 3.1). Signature/quorum-based modes (PoS/DPoS/FBA) are treated as permissioned/consortium-style modes (configured validator sets/quorum slices), consistent with V2X deployments where RSUs/authorities or membership services exist. We do not assume a global always-connected network; instead, we assume cluster-local connectivity windows in which quorums may form and convergence can be evaluated.
4.2. Latency-Feasible PoW and Explicit Majority-Hash Risk
To meet sub-second (often sub-150 ms) freshness constraints, PoW targets are calibrated so that expected mining/confirmation latency fits the timeliness envelope. This necessarily implies targets that are far easier than cryptocurrency-scale PoW.
Under the uniform-hash assumption, the success probability per hash is proportional to the target, and the expected number of trials satisfies E[N] = 2^256 / D (Section 3.5.5). Define the effective difficulty bits b_eff = 256 − log2(D). In our timeliness-feasible regimes, b_eff is intentionally small relative to open permissionless cryptocurrencies, implying that an adversary with substantially higher hashing capacity than honest OBUs could out-mine the network if PoW were used as a permissionless security anchor. Therefore, we make the following operational stance explicit:
We do not claim PoW-based Sybil/majority-hash resistance comparable to Bitcoin/Ethereum-style assumptions.
PoW in this paper is a latency-feasible operational mode suitable only under a restricted envelope (e.g., credentialed miners, RSU anchoring/checkpointing, or limited adversarial exposure).
In adversarially exposed deployments, PoW should be disabled or strictly constrained, and the Engine should prefer permissioned signature/quorum modes (PoS/FBA) as the security-bearing consensus family.
4.3. Mode Transitions and Safety Across Switching
A natural concern is whether transitions between modes can introduce double commits or allow adversaries to exploit weaker regimes. We enforce a transition barrier that makes switching protocol-safe:
Epoch boundary (anti-chattering). A mode is fixed for an epoch lasting at least the minimum dwell time, preventing rapid oscillations around thresholds.
Commit (checkpoint) barrier. Before a switch, nodes either (i) finalize the current candidate under the current mode or (ii) time out and discard the candidate. The epoch then freezes a snapshot consisting of the mempool candidate set and the last committed header.
Prefix consistency rule. The next epoch can only extend the checkpoint header (no backward reorg beyond the barrier). This bounds reorganization depth across transitions and prevents concurrent commits under multiple modes.
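The three safeguards above can be sketched as a small supervisor state machine; the dwell value, header names, and time model below are illustrative.

```python
class ModeSupervisor:
    """Transition-barrier sketch: minimum dwell (epoch boundary), a
    checkpoint frozen at each switch (commit barrier), and a prefix
    consistency rule on post-switch chains."""

    def __init__(self, min_dwell_s=5.0):
        self.min_dwell_s = min_dwell_s
        self.mode = "PoW"
        self.mode_since = 0.0
        self.checkpoint = None            # last committed header at barrier

    def request_switch(self, new_mode, now, last_committed_header):
        # Epoch boundary: refuse switches before the minimum dwell elapses.
        if now - self.mode_since < self.min_dwell_s:
            return False
        # Commit barrier: freeze the checkpoint the next epoch must extend.
        self.checkpoint = last_committed_header
        self.mode, self.mode_since = new_mode, now
        return True

    def extends_checkpoint(self, chain):
        # Prefix consistency: no reorg behind the barrier -- the checkpoint
        # header must appear in any chain accepted after the switch.
        return self.checkpoint is None or self.checkpoint in chain
```

A switch request inside the dwell window is simply rejected; once granted, any chain that omits the frozen checkpoint is refused, bounding reorganization depth across transitions.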
Operationally, the barrier ensures that switching does not create ambiguous concurrent commits. In our NS-3 implementation, transition events are logged and included in the divergence/orphan computation (
Section 5.5).
4.4. Entropy Manipulation and Control Signal Robustness
A second concern is whether adversaries can manipulate the entropy observables to force mode selection.
- (1)
The policy is conservative under disorder.
The Engine increases validation rigor (and switches away from PoW) when either S or H rises above its threshold. Thus, inflating disorder observables cannot force the system into a weaker validation regime; the dominant risk is instead a DoS-like overhead increase (forcing conservative validation more often), not a security bypass.
- (2)
Hardening against entropy inflation (bounded impact).
To mitigate entropy inflation attempts (transaction flooding and Sybil-induced dissemination noise), the implementation supports:
Validated-only estimation: Compute S and H from signed, freshness-checked events only (invalid/stale items do not contribute).
Admission control: Cap per-credential transaction injection and bound candidate-set growth per epoch, limiting the ability to inflate
S via flooding. In addition, we can pre-filter transaction streams using lightweight anomaly detection to suppress injected noise before it biases entropy estimation. For example, ML-based fraud/anomaly detection pipelines designed for high-volume transaction streams can be adapted to flag bursty, repetitive, or statistically implausible submissions prior to admission, reinforcing the admission control gate [
28].
Robust aggregation: Optionally use trimmed/median aggregation across cluster reports to limit outlier influence.
Hysteresis and smoothing: EMA + two-threshold hysteresis reduces the effect of short-lived spikes on mode selection (constraints C5–C6 in
Section 3.6).
These mechanisms bound the impact of entropy manipulation primarily to performance degradation within the stated envelope rather than consensus compromise.
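The robust-aggregation option can be sketched as a trimmed mean over per-node entropy reports; the trim fraction shown is an illustrative placeholder.

```python
def trimmed_mean(reports, trim_frac=0.1):
    """Aggregate per-node entropy reports robustly: drop the lowest and
    highest `trim_frac` fraction of reports before averaging, bounding
    the influence of outlier (possibly adversarial) cluster views."""
    xs = sorted(reports)
    k = int(len(xs) * trim_frac)
    kept = xs[k:len(xs) - k] if k else xs
    return sum(kept) / len(kept)
```

A single inflated report among ten honest ones is discarded before averaging, so the aggregate stays close to the honest consensus value.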
- (3)
Hardening against entropy deflation (under-reporting).
An adaptive adversary may also attempt to suppress the measured disorder signal (e.g., selective forwarding, under-reporting, or shaping the observed microstate) to keep S and/or H below thresholds and prolong PoW operation. We mitigate this risk via (i) local computation from directly observed, validated events (not solely neighbor reports), (ii) cross-checks/robust aggregation across cluster views, and (iii) a safety default: when signal quality is low or reports are inconsistent, the controller biases toward the rigorous family and enforces minimum rigor floors.
Entropy Injection Attacks (Control-Loop Manipulation)
In an entropy injection attack, an adversary crafts inputs that mimic high-disorder dynamics (e.g., bursty multi-identity submissions, timing jitter, and content variation) to artificially elevate measured entropy and force premature transitions into more conservative operating modes. This differs from classic Sybil/Byzantine roles because the target is the measurement–decision pipeline. The primary consequence is therefore controlled performance degradation (higher overhead/latency), not validation weakening. We mitigate entropy injection via (i) robust estimation (windowed aggregation with trimming/winsorization), (ii) hysteresis and dwell-time constraints that prevent rapid mode flipping, (iii) safety floors on rigor (never dropping below a minimum verification set under uncertainty), and (iv) stream pre-filtering (admission control plus anomaly detection) to remove suspicious bursts prior to entropy computation. These measures preserve bounded adaptation even when disorder signals are partially adversarial.
4.5. What Our Evaluation Secures (and Non-Claims)
Within the stated envelope, we empirically evaluate (i) integrity at admission via QoI-aware filtering (invalid/stale exclusion), (ii) timeliness improvements under CF vs. BF policies, and (iii) convergence/coherence improvements under partitions/eclipses via reduced divergence and bounded orphaning (
Section 3.5.3 and
Section 6).
We explicitly do not claim robustness to (i) sustained majority-hash adversaries in PoW under latency-feasible targets, (ii) fully adaptive collusion that learns and exploits the control dynamics globally, or (iii) permissionless cryptoeconomic security without external membership/identity services.
For deployments requiring stronger guarantees, we recommend permissioned validator governance (PoS/FBA), RSU anchoring/checkpointing to bound reorg depth, Sybil-resistant admission controls, and eclipse-resistant peer rotation as natural extensions.
5. NS-3 Implementation
We validate the proposed entropy-driven governance loop with a modular NS-3.35 framework [
9]. Our implementation encapsulates the cluster-local VANET Engine (metric sampling, mode/rigor selection, and state updates) into a reusable helper (
VanetEngineHelper) and a node application (
NodeApp). Consensus is pluggable (PoW, PoS, DPoS, and FBA) through a unified
ConsensusModule interface, and a structured logging pipeline records all observables required to reproduce figures and tables.
All results in
Section 6 and
Section 7 follow a single execution profile: NS-3.35, 600 s per run, 30 matched seeds per configuration, and 95% confidence intervals computed via BCa bootstrap in post-processing (
Section 3.5.7). The exact simulator configuration, command-line flags, and per-event logs are versioned in the public artifact described in
Section 5.6; each run stores a
run_id and the repository
git_commit in the CSV header for traceability.
5.1. General Configuration and Simulation Parameters
We consider two mobility settings with identical communication stacks and Engine sampling policies: an urban grid (1 km × 1 km) and a highway segment (5 km). Unless explicitly stated otherwise, the communication stack uses IEEE 802.11p/WAVE via WaveHelper. Experiments involving additional C-V2X/LTE-V2X integrations are reported only when explicitly flagged and are fully versioned in the artifact (to avoid ambiguity across NS-3 releases).
Safety messaging is emulated at the application layer: we generate periodic CAM-like beacons at 10 Hz (100 ms) and DENM-like event-driven alerts, which allows controlled freshness constraints without claiming a full ETSI ITS-G5 stack.
Radio energy is modeled with
BasicEnergySource and
WifiRadioEnergyModel. Cryptographic energy is added analytically using the per-operation constants in
Table 6 (parameterization in
Section 3.5.5). Entropy metrics follow the normalized definitions in
Section 3.1. QoI tiers are derived from delay and validity indicators v_i ∈ {0, 1}, where v_i = 0 denotes invalid/stale items (
Section 3.1). The full NS-3 configuration is summarized in
Table 8.
5.2. VanetEngineHelper and NodeApp Workflow
Figure 2 summarizes the helper and application pipelines. Each cluster-local Engine instance retrieves
S and H from a shared metrics service, compares them against the thresholds (S_th, H_th), and selects the active
ConsensusConfig (mode and rigor parameters). To reduce control-plane chatter, our implementation optionally disseminates the selected configuration to neighboring nodes; however, the policy is deterministic given the logged observables, and any node can compute the same decision locally.
The per-tick computational cost is linear in the number of local pending transactions and spatial bins; in our settings, this overhead is negligible relative to PHY/MAC processing and consensus execution.
5.3. Partition and Adversary Plugins
We expose explicit partition and adversary plugins for controlled stress testing, aligned with the bounded stressor profiles summarized in
Table 5. These plugins are used in sensitivity analyses and in the adversarial/partition result blocks reported in
Section 6 (and referenced from
Section 4).
Our primary (claiming) partition mechanism is a scheduled
k-cut over
NetDevice links: during a partition window, selected links are programmatically disabled (Tx/Rx) to separate the network into disconnected components and then re-enabled to measure recovery. Each partition window is logged with
partition_id, start/end timestamps, affected link sets, and realized component sizes (fields
cc_id,
cc_size; see
Section 5.5). Alternative fading-style partitions (e.g., corridor fades) are used only as non-claiming sensitivity checks and are explicitly flagged when reported.
Sybil identities, Byzantine proposers (equivocation/double-propose), eclipse windows, and corrupted transactions are injected via an
AdversaryModule wrapper around
NodeApp send/receive paths and consensus callbacks. Stressor parameters (e.g.,
sybil_ratio,
bz_rate, and
eclipse_dur_s) are logged per run and per event (
Section 5.5). These stressors characterize robustness and failure modes; headline figures focus on baseline mobility/load unless a stressor is explicitly stated.
5.4. Consensus Modules
Mechanisms are pluggable via a unified ConsensusModule. We differentiate instrumentation by protocol family:
PoW: iterates hashes to find a nonce; we log realized hash attempts per committed block.
PoS/DPoS/FBA: we log the cryptographic workload (proposal, voting, and verification) as an aggregate operation count.
Total energy combines NS-3 device models (radio) with analytical per-operation terms (crypto), as detailed in
Section 5.7 and
Figure 3.
5.5. Instrumentation and Logging
All results are derived from per-event logs written in a normalized CSV schema. Each row corresponds to a key event (tx reception/validation, proposal, commit, mode switch, and stressor/partition markers), enabling matched-seed pairing and fully reproducible post-processing.
We log observables so that all reported metrics follow exactly the manuscript definitions: (i) the informational entropy S and the spatial entropy are normalized Shannon entropies in [0, 1] (Section 3.1); (ii) the validity indicator flags invalid/stale items (Section 3.1); (iii) agreement latency L is measured as the proposal-to-commit time for a block (Section 3.5); (iv) finality F follows the operational definitions in Section 3.5 (PoW: k-deep; quorum modes: commit threshold); and (v) ledger divergence uses the LCP-normalized definition in Section Ledger Coherence and Fork/Orphan Detection and is reconstructed offline from logged chain pointers and per-node heads (and may optionally be logged at sample_state).
BCa bootstrap confidence intervals (95%, 10,000 resamples) are computed offline from seed-wise aggregates (
Section 3.5.7); no CI quantities are logged. This avoids mixing sampling uncertainty with simulator observables and ensures that all inference is reproducible from the raw event logs.
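As an illustration, the offline BCa computation over seed-wise aggregates can be sketched in pure Python; the artifact's pipeline is authoritative, and the function name `bca_ci` and its defaults are ours, not names from the artifact:

```python
import random
from statistics import NormalDist, mean

def bca_ci(sample, stat=mean, alpha=0.05, n_resamples=10000, rng=None):
    """Bias-corrected and accelerated (BCa) bootstrap CI for one statistic."""
    rng = rng or random.Random(0)
    n = len(sample)
    theta = stat(sample)
    # Bootstrap distribution of the statistic (sorted for percentile lookup).
    boots = sorted(
        stat([sample[rng.randrange(n)] for _ in range(n)])
        for _ in range(n_resamples)
    )
    nd = NormalDist()
    # Bias correction z0: proportion of resamples below the observed statistic.
    prop = sum(b < theta for b in boots) / n_resamples
    prop = min(max(prop, 1.0 / n_resamples), 1.0 - 1.0 / n_resamples)
    z0 = nd.inv_cdf(prop)
    # Acceleration a from the jackknife (leave-one-out) statistics.
    jack = [stat(sample[:i] + sample[i + 1:]) for i in range(n)]
    jmean = mean(jack)
    num = sum((jmean - j) ** 3 for j in jack)
    den = 6.0 * sum((jmean - j) ** 2 for j in jack) ** 1.5
    a = num / den if den else 0.0

    def level(z):
        # BCa-adjusted percentile level for nominal quantile z.
        adj = z0 + (z0 + z) / (1.0 - a * (z0 + z))
        return nd.cdf(adj)

    idx = lambda q: min(n_resamples - 1, max(0, int(q * n_resamples)))
    lo = boots[idx(level(nd.inv_cdf(alpha / 2)))]
    hi = boots[idx(level(nd.inv_cdf(1 - alpha / 2)))]
    return lo, hi
```

Applied per metric to the 30 seed-wise aggregates of a condition, this yields the 95% intervals reported alongside the means.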
All knobs are exposed via NS-3
CommandLine (e.g.,
--Seeds=30 --BlockSize=10 --EngineDt=1s --Sth=0.5 --Hth=0.6 --Adversary=Sybil:0.05). The artifact provides a script enumerating the full factorial design used in this paper (
Section 5.6).
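For illustration, a factorial enumeration in the style of the artifact's script might look as follows; the factor grid shown here is hypothetical, and only the flag names mirror the example above:

```python
from itertools import product

# Hypothetical factor grid; the artifact's script defines the real design.
GRID = {
    "--Seeds": [30],
    "--BlockSize": [10],
    "--EngineDt": ["1s"],
    "--Sth": [0.4, 0.5, 0.6],
    "--Hth": [0.5, 0.6, 0.7],
}

def enumerate_runs(grid):
    """Yield one NS-3 command-line string per cell of the factorial design."""
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        yield " ".join(f"{k}={v}" for k, v in zip(keys, values))
```

Each emitted string corresponds to one run configuration; pairing it with a seed list gives the matched-seed campaign.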
5.6. Reproducibility Artifact
To enable exact replication of all reported figures and tables, we release a versioned artifact that bundles (i) the NS-3 scenario drivers and helper/application code (VanetEngineHelper and NodeApp); (ii) the full set of configuration files (mobility/load profiles, thresholds, and default parameters); (iii) explicit seed lists per experimental condition (seeds.csv) and the script that enumerates the factorial design used in this paper; (iv) the normalized per-event CSV logging schema and parsers; and (v) the Python post-processing pipeline that generates every plot/table from raw logs (software versions are recorded in the artifact metadata).
Each simulation run writes a deterministic
run_id and the repository
git_commit hash in the CSV header, together with the full command-line string (all flags) and a normalized configuration snapshot. This provenance block ensures that every data point in
Section 6 can be traced to an immutable code revision and a unique seed/configuration tuple.
The artifact follows a fixed directory structure: /ns3/ (scenario drivers, helpers, and modules), /configs/ (scenario and policy parameters), /seeds/ (matched-seed lists), /logs/ (raw per-run CSV outputs, indexed by run_id), and /analysis/ (Pandas/Matplotlib scripts producing all figures/tables; versions recorded in the artifact metadata). This layout is used consistently throughout this paper and is sufficient to regenerate all results from raw logs.
To reproduce the main results, one executes the provided run script to generate logs for the declared design (600 s runs; 30 matched seeds per configuration) and then runs the plotting pipeline on the generated CSVs. The post-processing uses only the logged observables (including S, the spatial entropy, QoI tiers, orphan flags, agreement latency L, finality F, and chain-pointer/head fields enabling reconstruction per Section Ledger Coherence and Fork/Orphan Detection), ensuring that all results are derived from recorded, seed-indexed events rather than manual interventions.
The stressor-specific result blocks (Sybil/Byzantine/eclipse/partition) reported in
Section 6 are generated exclusively from the fields in
Table 9 together with the declared stressor parameters in
Table 5.
5.7. Crypto Energy Accounting
Cryptographic energy per block is computed analytically as the product of logged operation counts and per-operation energy constants. In all runs, these constants are selected from the OBU profile ranges in
Table 6. We report OBU-M as the default configuration and include OBU-L/OBU-H sweeps to quantify hardware-driven variability in both latency and energy. For PoW under a 256-bit target scale
D, the per-attempt hit probability is D / 2^256 and the expected attempt count is 2^256 / D; we log the realized number of attempts until success and compute per-block crypto energy from the same accounting. For PoS/DPoS/FBA, we set the hashing term to zero and count signature operations (proposal + votes/endorsements + verifications).
Total per-block energy reported in the Results is the sum of the radio term, which is logged from NS-3 radio energy models and aligned to the block commit event, and the analytical cryptographic term. No host-side power tools are used in reported energy curves, tables, or confidence intervals.
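A minimal sketch of this accounting, with placeholder per-operation constants (the real values come from the OBU profiles in Table 6; the function names are ours):

```python
TWO_256 = 2 ** 256

def expected_pow_attempts(d_target):
    """E[n_hash] = 1/p, with per-attempt hit probability p = D / 2^256."""
    return TWO_256 / d_target

def crypto_energy(mode, n_hash=0, n_sig=0, e_hash=1e-7, e_sig=1e-3):
    """Per-block analytical crypto energy from logged operation counts.

    e_hash / e_sig are placeholder per-operation constants in joules;
    the paper's values are drawn from the OBU profiles in Table 6.
    """
    if mode == "PoW":
        return n_hash * e_hash   # hashing only
    return n_sig * e_sig         # proposal + votes/endorsements + verifications

def total_block_energy(e_radio, e_crypto):
    """Total per-block energy: NS-3 radio term plus analytical crypto term."""
    return e_radio + e_crypto
```

The PoW branch uses the logged realized attempt count rather than the expectation; `expected_pow_attempts` is only the calibration-time sanity check.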
5.8. Default Parameter Rationale and Sensitivity
Default simulation parameters are summarized in
Table 8. Sensitivity checks for the cryptographic constants and controller thresholds follow
Section 3.5.5. When sensitivity analyses are reported (including bounded stressors), the enabled stressor type and its parameters are recorded in the event logs (
Table 9) and in the run-level configuration snapshot stored in the artifact (
Section 5.6).
Algorithmic complexity, profiling costs, and the justification for default control parameters are summarized in
Table A1 and
Table A2, and are traceable to the versioned artifact described in
Section 5.6.
5.9. Post-Processing and Visualization
After each run, CSV logs and FlowMonitor outputs are processed with Python (Pandas [
29] and Matplotlib [
30]) to generate all figures and summary tables. The repository organizes raw logs, derived summaries, and plotting scripts in a fixed layout to support exact replication. Inference (BCa bootstrap CIs; Holm–Bonferroni adjustments for multiple contrasts) is performed strictly in post-processing from seed-wise aggregates (
Section 3.5.7).
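A minimal sketch of the Holm–Bonferroni step-down decision rule applied in post-processing (the function name is ours):

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Per-hypothesis reject decisions under Holm's step-down procedure."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # ascending p-values
    reject = [False] * m
    for rank, i in enumerate(order):
        if pvals[i] <= alpha / (m - rank):
            reject[i] = True
        else:
            break  # step-down: once one test fails, all larger p-values fail
    return reject
```

Given the seed-wise paired p-values for the headline contrasts, this returns the family-wise-corrected significance decisions.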
5.10. OBU Resource Contention Model (CPU/Radio Co-Scheduling)
Real OBUs multiplex safety-critical stacks (perception, control, and V2X safety beacons) with ledger operations over shared CPU, memory, and radio resources. Because the default NS-3 pipeline does not model full ECU scheduling and contention across heterogeneous in-vehicle workloads, we introduce an abstraction that captures effective compute and radio availability.
We model compute availability via a CPU budget factor applied to cryptographic service time: as the available CPU fraction shrinks under contention, the effective service time of a cryptographic operation is inflated accordingly, which increases both queueing and energy for the same operation. Similarly, we model radio availability via a MAC duty factor that captures background safety traffic and channel busy time, effectively reducing the usable service rate for ledger messages. In
Section 6.8 and the stress/sensitivity analysis, we sweep both factors to quantify robustness under resource competition.
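A first-order sketch of this contention abstraction, assuming service time scales inversely with the CPU budget factor and the ledger service rate scales linearly with the MAC duty factor (these functional forms are our assumption; the calibrated model is defined in the text):

```python
def effective_crypto_time(t_crypto, rho_cpu):
    """Assumed first-order model: crypto service time inflates as the
    available CPU fraction rho_cpu in (0, 1] shrinks."""
    assert 0.0 < rho_cpu <= 1.0
    return t_crypto / rho_cpu

def effective_service_rate(mu_radio, rho_mac):
    """Usable ledger-message service rate under MAC duty factor rho_mac."""
    assert 0.0 < rho_mac <= 1.0
    return mu_radio * rho_mac
```

Sweeping both factors toward small values then reproduces the resource-competition stress regime of Section 6.8.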
5.11. Scalability Harness (Algorithmic Profiling)
To facilitate profiling beyond full PHY/MAC fidelity, we provide an auxiliary harness with simplified PHY and FlowMonitor-only instrumentation to scale to 500+ nodes. Results from this harness are reported only as algorithmic profiling and are not used for the headline claims in this paper.
All reported results are produced with NS-3.35 using matched-seed configurations and a single energy accounting pipeline: NS-3 radio device models plus an analytical cryptographic term parameterized by per-operation constants and computed from operation counts. No host-side power profilers are mixed with simulated energy values; versioning, seeds, and configuration manifests are provided in the artifact repository.
6. Results
Unless otherwise noted, results are means over 30 matched seeds with 95% confidence intervals via bias-corrected and accelerated (BCa) bootstrap (
Section 3.5.7). Symbols and units follow
Table 4. Throughout this section,
S and
the spatial entropy denote the normalized entropies defined in
Section 3.1.
Energy is reported per confirmed block and is decomposed into a radio term (from NS-3 device energy models) and a cryptographic term (analytical accounting from logged operation counts;
Section 5.7).
Latency-related metrics are reported for three operational endpoints: (i) agreement/commit latency
L (consensus trigger →
commit_block), (ii) transaction confirmation latency (tx admission → inclusion/confirmation; mode-dependent), and (iii) finality
F, measured as
finality_ms at commit events (
Section 5.5). All plotted/tabulated quantities are derived from the normalized CSV log schema in
Table 9 plus standard NS-3 counters (FlowMonitor) where explicitly stated.
Unless explicitly stated otherwise, quoted percentage improvements refer to the urban baseline configuration with the default broadcast-first dwell and the default adaptive maps from Section 3.5.5. We do not extrapolate beyond the tested scales; an auxiliary (non-claiming) scalability harness is described in
Section 5.11.
6.1. Entropy-Conditioned Crypto Energy Scaling
We test whether cryptographic expenditure per confirmed block increases sharply with informational disorder under PoW, whereas signature/quorum-based modes (PoS/DPoS/FBA) remain comparatively weakly coupled to S.
Because S is an observable rather than a directly settable treatment, we aggregate per-block cryptographic energy conditionally on realized
S. We partition observed S values into five bins (equal-width windows around each center) and compute the mean cryptographic energy for PoW, PoS, DPoS, and FBA within each bin. All points aggregate 30 matched seeds using the energy constants in
Table 6 and the calibration in
Section 3.5.5.
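The conditional aggregation can be sketched as follows; the bin centers and half-width here are illustrative placeholders, not the paper's calibrated values:

```python
from statistics import mean

# Illustrative bin centers with equal-width windows around each center.
CENTERS = [0.1, 0.3, 0.5, 0.7, 0.9]
HALF_WIDTH = 0.1

def bin_mean_energy(samples, centers=CENTERS, half=HALF_WIDTH):
    """samples: list of (realized_S, e_crypto) pairs -> {center: mean energy}.

    Each per-block observation is assigned to the bin whose window
    [center - half, center + half) contains its realized S value.
    """
    out = {}
    for c in centers:
        vals = [e for s, e in samples if c - half <= s < c + half]
        if vals:
            out[c] = mean(vals)
    return out
```

Running this per consensus mode over the matched-seed logs yields the binned curves plotted in Figure 4.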
Figure 4 shows a strong coupling between PoW crypto-energy and disorder: mean per-block crypto energy rises from approximately 100 J at low
S to approximately 600 J at high
S. In contrast, PoS remains at a low, nearly constant level across bins, DPoS remains around 10–12 J, and FBA around 5–7 J, indicating comparatively weak dependence on
As shown in
Figure 4, PoW exhibits a steep increase with disorder, consistent with increased aggregate hashing and re-work pressure under unstable propagation and elevated orphaning. In contrast, PoS/DPoS/FBA remain nearly flat across
S bins. This supports the stated hypothesis and motivates entropy-aware avoidance of PoW in high-
S regimes within this paper’s stated security envelope (
Section 4).
Energy is reported per confirmed block aggregated over the cluster participants (i.e., the sum of cryptographic work attributed to the block’s production/verification events). Using the per-hash energy constant from
Table 6, a high-S PoW block corresponds directly to the logged number of aggregate hash attempts across participating miners during that block window. Under the 256-bit target scale logged as
D_target, the per-attempt hit probability is D_target / 2^256 and the expected attempt count is 2^256 / D_target; hence the effective
D used in the PoW baseline is deadline-calibrated rather than cryptocurrency-grade (
6.2. Consensus-First Trigger vs. Broadcast-First Dwell
We compare consensus-first (CF: start consensus as soon as the trigger condition is met) versus broadcast-first (BF: delay consensus by a dwell time) in terms of (i) agreement/commit latency, (ii) confirmed throughput, and (iii) message overhead.
We use the urban topology with the default density, load, and dwell settings and 30 matched seeds. Metrics follow
Section 3.5. Latency
L denotes agreement/commit latency (trigger →
commit_block) and thus includes the dwell time in BF by design.
Figure 5 shows that CF consistently reduces agreement latency and improves throughput relative to BF. In the urban baseline, CF reduces agreement/commit latency from roughly 300 ms (BF) to roughly 150 ms (CF) and increases confirmed throughput from roughly 120 to roughly 200 tx/s. Normalized message overhead is also reduced by approximately half (
Figure 6).
As shown in
Figure 5, in the urban baseline, CF reduces agreement/commit latency from ∼300 ms to ∼150 ms and increases confirmed throughput from ∼120 tx/s to ∼200 tx/s. Additionally, it halves normalized message overhead (
Figure 6). Across densities, latency reductions range within 33–57%, supporting the hypothesis. We stress that the sub-100 ms operational target is achieved only in favorable regimes (lower disorder/dispersion and/or density); the baseline under the stated dwell and load yields mean commit latencies above 100 ms.
6.3. Mitigation Effects of QoI Filtering (Reducing Wasted Work)
We test whether early filtering of low-QoI transactions (high delay and/or invalid/stale items) reduces wasted consensus work (invalid blocks/forks) and improves latency.
We use the urban scenario and default thresholds. We compare the Engine with QoI filtering ON (candidate set excludes stale/invalid transactions;
Section 3.3) versus QoI filtering OFF (candidate set is FIFO without QoI screening). All other parameters are unchanged; we use 30 matched seeds (
Table 10).
QoI filtering substantially reduces invalid traffic admitted to consensus and lowers the share of wasted cryptographic work, while improving agreement latency and reducing forks. These effects support the hypothesis and justify retaining QoI-aware candidate selection as a default Engine policy.
6.4. Spatial Dispersion Effects (Spatial Entropy as a Governance Signal)
We quantify how spatial entropy affects latency and validation quality and whether entropy-aware adaptation mitigates degradation.
We use the urban scenario. We compare static PoW (fixed difficulty) against the adaptive Engine (entropy-driven difficulty and rigor). Because spatial entropy is an observable, we bin observations into four equal-width bins and report means at bin midpoints.
We define validation accuracy as the share of committed content that remains on the final main chain and is valid at commit time. At
commit_block events, we record
tx_valid=1 iff all included transactions pass signature/freshness checks; otherwise
tx_valid=0. Operationally, accuracy is computed from the logged
event,
tx_valid, and
orphan_flag fields (
Table 9).
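A sketch of the corresponding offline computation over the logged fields (the row layout below is illustrative; the CSV schema in Table 9 is authoritative):

```python
def validation_accuracy(rows):
    """Share of commit_block events whose content is valid at commit time
    and remains on the final main chain (not orphaned).

    rows: dicts carrying the logged fields 'event', 'tx_valid', 'orphan_flag'.
    """
    commits = [r for r in rows if r["event"] == "commit_block"]
    if not commits:
        return None
    good = sum(1 for r in commits if r["tx_valid"] == 1 and r["orphan_flag"] == 0)
    return good / len(commits)
```

Grouping rows by realized spatial-entropy bin before calling this function reproduces the per-bin accuracy curves.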
Figure 7 shows that, under static PoW, latency increases from ∼100 ms to ∼250 ms as spatial entropy grows, while accuracy degrades from ∼92% to ∼80%. The adaptive Engine mitigates latency growth (remaining at ∼175 ms at the highest bin) and improves accuracy to >95% in high-dispersion regimes.
As illustrated in
Figure 7, under static PoW, dispersion drives latencies up to ∼250 ms and reduces validation accuracy to ∼80%. The adaptive Engine mitigates this degradation by switching away from PoW and/or tightening validation when dispersion is greatest, consistent with the conservative policy under disorder described in
Section 4.4. This supports the role of spatial entropy as a governance signal and motivates the mobility/coherence evaluations that follow.
6.5. Mobility, Orphans, and Finality
We evaluate orphaned-block rate O and finality F under increasing mobility (v) for static PoW/PoS vs. the adaptive Engine.
We use the urban scenario with mobility swept up to 30 m/s and 30 matched seeds. Orphan rate
O is computed from
orphan_flag; finality
F is the logged
finality_ms at
commit_block events (
Table 9).
Figure 8 shows that mobility increases instability for static baselines: at 30 m/s, both static PoW and static PoS exhibit markedly higher orphan rates and longer finality than at low speed. The adaptive Engine holds orphaning and finality to substantially lower levels across the tested mobility range.
As illustrated in
Figure 8, at 30 m/s static PoW reaches its worst orphaning and finality, whereas the adaptive Engine contains both by modulating consensus selection/rigor based on instantaneous entropy. This supports the stated hypothesis and links mobility-induced disorder to control actions. The Engine approaches sub-100 ms finality in less adverse regimes; the worst case shown here remains within ∼140 ms under the stated configuration.
6.6. Microstate Consistency (Ledger Divergence)
We relate spatial entropy to ledger divergence and assess whether the adaptive Engine preserves microstate coherence under high dispersion.
We investigate the urban scenario with static PoW vs. the adaptive Engine. For each seed, we compute ledger divergence following
Section 3.1 using the logged longest common prefix information (
Table 9). We time-average divergence within four equal-width bins of spatial entropy and report the mean across seeds with 95% BCa CIs.
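A sketch of the offline reconstruction from per-node heads; the normalization shown is an assumed form, and Section 3.1 gives the exact LCP-normalized definition:

```python
def lcp_len(chains):
    """Length of the longest common prefix across per-node chains,
    each represented as a list of block IDs from genesis to head."""
    n = min(len(c) for c in chains)
    for i in range(n):
        if any(c[i] != chains[0][i] for c in chains):
            return i
    return n

def divergence(chains):
    """Assumed LCP-normalized divergence: 1 - lcp / longest head height.

    This only illustrates the offline reconstruction from logged chain
    pointers; the paper's exact normalization is defined in Section 3.1.
    """
    longest = max(len(c) for c in chains)
    if longest == 0:
        return 0.0
    return 1.0 - lcp_len(chains) / longest
```

Identical chains yield divergence 0; chains that agree only on a short prefix approach 1.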
Figure 9 shows that, without adaptation, divergence increases sharply with dispersion, reaching ∼0.18 in the highest bin. The adaptive Engine limits divergence to below ∼0.07 by switching conservatively and strengthening validation in high-dispersion regimes.
As illustrated in
Figure 9, without adaptation, divergence rises to ∼0.18 at high dispersion. The Engine caps divergence below ∼0.07 by strengthening validation and enforcing conservative mode selection when spatial entropy is largest. This supports the stated hypothesis and demonstrates improved ledger coherence under stress within cluster-local connectivity windows.
6.7. Stress Tests: Adversaries and Partitions (Sensitivity Analyses)
We quantify robustness under bounded Sybil behavior, bounded Byzantine proposers (equivocation), transient eclipse windows, and scheduled partitions, consistent with the threat model in
Section 4.1.
We use the urban scenario with the baseline configuration. The Sybil ratio is bounded (3–5 pseudonyms per attacker). Byzantine proposers double-propose within 200 ms. Eclipse attacks drop non-colluding peers for 5–10 s. Partitions are scheduled
k-cuts (link-disabling windows) for 10–30 s (implementation details in
Section 5.3). Metrics are computed from
orphan_flag,
finality_ms, and
ledger divergence (via
lcp_len), as logged in
Table 9.
(i) Sybil: Median orphan rate increases for static PoW and static PoS, while the Engine’s increase remains small. (ii) Byzantine proposers: Finality tails widen for signature-based modes; the Engine’s consensus-first policy limits dwell-induced amplification at the 95th percentile relative to static PoS. (iii) Eclipse: Transient divergence spikes decay within ∼1.2 s under the Engine vs. ∼2.4 s for static PoW. (iv) Partitions: Mean recovery time to pre-partition divergence levels is shorter for the Engine than for static PoW; once reconnected, F reduces from >250 ms (static PoW) to ∼140 ms (Engine).
Across bounded stressors, the Engine primarily exhibits graceful degradation: performance overhead and short-lived divergence spikes increase, but convergence recovers faster than static baselines. These tests are sensitivity analyses rather than claims of security against fully adaptive global coalitions (
Section 4.5).
6.8. City-Scale Trial: 500 Vehicles Under Broadcast-Heavy Load
We validate controller stability and confirmation-latency tails when scaling beyond 100 vehicles, approaching dense urban regimes where broadcast contention and channel pressure can dominate.
We instantiate an urban grid with 500 vehicles under the same IEEE 802.11p/WAVE PHY/MAC stack and the same logging pipeline used in the main experiments. Safety messaging follows the CAM-like periodic beaconing configuration described in
Section 5 (10 Hz) and the ledger workload is injected concurrently, yielding a broadcast-heavy regime. Each run lasts 600 s and uses matched seeds (same protocol as the main campaign).
Table 11 reports confirmation-latency percentiles (median and tail), confirmed throughput, and orphan rate under high contention.
Figure 10 summarizes median and tail latencies in the city-scale regime. The Engine maintains bounded tails (∼180 ms) while static baselines degrade sharply under broadcast contention.
Under the 500-vehicle load, the Hybrid Engine reduces tail latency by 69% versus the non-adaptive Vanilla Hybrid (180 vs. 580 ms) and by 86% versus static PoS (180 vs. 1250 ms), while also increasing confirmed throughput by 45% (450 vs. 310 tx/s) and reducing orphaning by roughly half (9.2% vs. 18.5%). Compared to FBA, the Engine lowers the tail by 56% (180 vs. 410 ms) while improving throughput (+18%) and reducing orphaning (−19% relative).
This city-scale behavior is consistent with the comparative evaluation (
Section 6.9), where the Engine achieves the lowest confirmation latency and remains among the most efficient options. As density increases from 100 to 500 vehicles, all methods experience contention-driven tail inflation; however, the Engine preserves deadline-feasible tails (∼180 ms) by combining conservative mode selection under disorder with admission control/QoI gating (
Section 4.4), limiting broadcast-spike amplification into consensus.
6.9. Expanded Comparative Evaluation
We compare five consensus options in a 100-vehicle urban scenario (5–15 m/s): pure PoW, pure PoS, FBA, a non-adaptive hybrid, and the proposed Hybrid Engine.
When a method fails to confirm within the 600 s simulation horizon, the plotted confirmation latency is right-censored at 600,000 ms (reported as “≥600 s”).
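The censoring rule can be sketched as follows (the helper name is ours):

```python
CENSOR_MS = 600_000  # 600 s simulation horizon, in milliseconds

def censor_latency(latency_ms):
    """Right-censor unconfirmed (None) or over-horizon latencies at 600 s.

    Returns a (value, censored) pair so downstream plots can report the
    censored points as '>= 600 s' rather than as ordinary observations.
    """
    if latency_ms is None or latency_ms >= CENSOR_MS:
        return CENSOR_MS, True
    return latency_ms, False
```

Keeping the censored flag alongside the clipped value preserves the distinction between a slow confirmation and a failure to confirm.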
Figure 11,
Figure 12 and
Figure 13 summarize the comparative trade-offs across consensus options in the 100-vehicle urban configuration, reporting confirmation latency, total per-block energy, and confirmed throughput under matched-seed evaluation.
The Hybrid Engine achieves the lowest confirmation latency (approximately 30 ms), near-minimal total energy (order ∼10 J per confirmed block), and higher throughput among practical options, outperforming both pure schemes and non-adaptive hybrids. These gains are consistent with entropy-aware control and with the governance-signal role of
S and the spatial entropy, while maintaining the explicit non-claims about permissionless PoW security (
Section 4.2).
6.10. Fork-Rate Definition and Distribution
We report orphan/fork rate
O as the fraction of produced blocks that do not lie on the final main chain at simulation end (field
orphan_flag;
Table 9). For the urban case at 30 m/s, the median
O over seeds is highest for static PoW, lower for static PoS, and lowest for the Engine. The Engine reduces but does not eliminate forks under high mobility, which is consistent with intermittent connectivity and partition-like micro-cuts in the mobility traces.
6.11. Robustness and Statistical Validity
Trends are stable under small perturbations of calibrated parameters (
Section 3.5.5) and across additional highway traces (omitted for space; logs in the repository described in
Section 5.9). With 30 seeds per condition, bootstrap 95% CI half-widths are typically a small fraction of the mean for latency and energy and narrow (in percentage points) for orphan rates in Engine configurations.
Seed-wise paired contrasts align with medium-to-large effects for headline comparisons (e.g., CF vs. BF agreement latency; adaptive vs. static PoW energy). Where applicable, paired t-tests across matched seeds indicate significance after Holm–Bonferroni correction for the headline contrasts. We also repeated key analyses with disjoint seed folds (five-way split); qualitative trends persisted with comparable CI overlap (logs included in the artifact).
7. Discussion
The entropy-driven
VANET Engine provides convergent evidence for the stated hypotheses and delivers consistent improvements in energy, latency, throughput, validation quality, and ledger coherence under realistic urban dynamics. We interpret these outcomes through the Ideal Information Cycle (
Figure 1), relate them to the challenge–mitigation matrix (
Table 3), and clarify (i) normalization and units, (ii) the observable (rather than treatment) nature of entropy, (iii) parameter provenance and censoring, and (iv) instrumentation/traceability via the logging schema (
Table 9). To avoid redundancy, we do not restate metric definitions already introduced in
Section 3.1 and the detailed protocol in
Section 3.5.
7.1. Connection to the Ideal Information Cycle
The Ideal Information Cycle comprises an injection leg (increasing informational disorder) and a validation leg (compressing disorder through consensus and verification work). Our empirical results align with these mechanics:
(Crypto energy vs. S). Figure 4 shows that PoW crypto energy per confirmed block rises steeply with realized informational disorder
S (from ∼100 J at low
S to ∼600 J at high
S). This is consistent with increased hashing demand and higher orphaning pressure during unstable propagation windows (validation work must “catch up” with injection). By contrast, PoS/DPoS/FBA remain weakly coupled to
S (sub-Joule to single-digit Joule regimes), motivating entropy-aware avoidance of PoW in high-
S windows under this paper’s stated threat model and deadline regime.
(Consensus-first vs. broadcast-first dwell). Figure 5 and
Figure 6 show that triggering consensus as soon as the trigger condition is met shortens the injection leg: agreement/commit latency drops by 33–57% across densities, throughput increases by roughly 1.6× or more, and normalized message overhead is roughly halved in the urban baseline.
(QoI filtering reduces wasted work). Table 10 shows that QoI-aware candidate selection reduces invalid/stale admission and wasted cryptographic work (energy spent on orphaned or invalid blocks), while improving agreement latency and reducing forks/orphans. Operationally, this concentrates validation effort on higher-quality microstates, increasing the probability that expended work contributes to the final ledger.
(Mobility, orphans, and finality). Figure 8 shows that increasing mobility inflates orphaning and finality for static schemes. The Engine keeps both bounded (worst-case finality within ∼140 ms at 30 m/s) by switching mode/rigor as the entropies rise; i.e., it schedules additional validation work when injection becomes more volatile.
(Ledger coherence under dispersion). Figure 9 shows that the Engine constrains ledger divergence to ∼0.07 at high
spatial entropy versus ∼0.18 under static PoW, indicating improved microstate consistency and more effective validation under fragmented connectivity.
7.2. Spatial Dispersion as a Governance Signal
Beyond the hypothesis tests,
Figure 7 demonstrates that
spatial entropy is a strong predictor of consensus stress: dispersion increases latency and degrades validation accuracy under static PoW. The adaptive Engine mitigates this trend by (i) switching away from expensive regimes when dispersion is greatest and/or (ii) tightening validation rigor in those windows, improving both timeliness and correctness during intermittent connectivity windows.
7.3. Normalization, Units, and Terminology (to Prevent Ambiguity)
Normalized entropies. S and the spatial entropy are Shannon entropies computed over normalized distributions (
Section 3.1) and scaled to [0, 1]. For readability, plots and text use the same symbols to denote these normalized quantities throughout the Results and Discussion.
Observables vs. treatments (binning interpretation). S and the spatial entropy are observed time-varying signals, not externally set controls. Curves reported against either entropy therefore represent conditional aggregation (binning) over realized entropy levels rather than causal effects of a manipulated treatment.
Ledger divergence definition. All references to
ledger divergence follow the same LCP-normalized definition in
Section 3.1, computed from the logged longest common prefix field
lcp_len (
Table 9).
Consensus knobs and rigor parameters. The PoW quantity D_target is a 256-bit target (dimensionless; smaller ⇒ harder) used only when mode=PoW. The rigor knob T_rigor is a mode-specific threshold (stake/quorum/committee rigor) used for the non-PoW modes. These inherit simulator control scales rather than physical units.
Energy reporting across figures. Figure 4 reports the cryptographic term only, to isolate entropy-conditioned cryptographic scaling. The expanded comparison (
Figure 12) reports total energy in the NS-3 energy-source scale (
Section 5.7); absolute magnitudes should be interpreted within that scale, while qualitative trends remain the focus.
7.4. From Metaphor to an Operational Control Model
Our thermodynamic vocabulary is operational and does not claim physical equivalence:
State variables. S quantifies transaction-state disorder and
the spatial entropy quantifies occupancy dispersion (
Section 3.1). QoI is a monotone proxy of freshness/validity implemented through delay/error tiers (
Section 3.5).
Work and dissipation as design constraints. Consensus work is the expected resource cost (crypto operations plus message exchange) required to reduce disorder sufficiently for safe acceptance. The Landauer-like bound in
Section 3.6 is used as a design constraint motivating adaptive rigor, not as a physical law.
Cycle semantics. Injection raises S (and often the spatial entropy) under churn and bursty loads; validation expends work to compress disorder, improving correctness and coherence. The Engine schedules this work locally and adaptively based on the instantaneous entropies.
7.5. Why the Adaptive Forms g and f
The chosen function families impose empirically supported shape constraints and remain compact enough for calibration:
Difficulty/target map g. The selected family (
Section 3.5.5) captures increased PoW burden as disorder rises and includes dispersion sensitivity through a low-order dependence on
the spatial entropy. In practice, this map primarily serves to avoid PoW in high-entropy regimes by making it unattractive under deadlines, complementing the mode-switch logic.
Rigor map f. The chosen family increases with the spatial entropy (with diminishing returns) and adjusts with
S so that high-disorder windows receive stronger validation while avoiding oscillations. Monotonicity/stability constraints and small coefficient perturbations preserve qualitative decisions (
Section 3.5.5).
7.6. Calibration Anchors, Magnitude Sanity Checks, and Sensitivity
Magnitude sanity check (PoW crypto energy). The high-entropy PoW point in
Figure 4 follows directly from multiplying the per-hash energy constant in
Table 6 (nJ/hash scale) by the logged realized hash attempts
n_hash aggregated per confirmed block. Under the 256-bit target scale (
D_target), the per-attempt hit probability is D_target / 2^256 and the expected attempt count is 2^256 / D_target, implying targets far easier than cryptocurrency-grade difficulty but consistent with deadline-constrained V2X-style operation.
Sensitivity. Qualitative trends persist under
small perturbations of calibrated parameters and across additional traces (logs described in
Section 5.9), indicating that improvements are not brittle to small calibration changes.
7.7. Instrumentation Choices and Measurement Scope
7.8. Statistical Validity, Forks, and Stressors
Uncertainty and power. We use 30 matched seeds per condition and report BCa bootstrap CIs throughout. Seed-wise paired contrasts (Holm–Bonferroni-adjusted) support headline comparisons, as summarized in
Section 6.11.
Fork metric. Orphan/fork rate
O is the fraction of blocks not on the final main chain (field
orphan_flag;
Section 6.10). Under high mobility, the Engine reduces but does not eliminate forks (small but nonzero median orphan rates), consistent with intermittent connectivity and micro-partition behavior in mobility traces.
Adversaries and partitions as sensitivity analyses. Stress tests (Sybil, Byzantine proposers, eclipse windows, and scheduled
k-cuts) in
Section 6.7 demonstrate graceful degradation and faster recovery relative to static baselines. These are not claims of robustness against fully adaptive, global coalitions beyond the stated threat model.
7.9. Interpreting the Expanded Comparison (Censoring and Practicality)
In the expanded comparative evaluation (
Section 6.9), Pure PoW may fail to confirm within the 600 s horizon in the densest/high-disorder regimes. Reported PoW confirmation latency is therefore right-censored at 600,000 ms when applicable (“≥600 s”). Under these operating conditions, the Engine’s advantage reflects practical deadline satisfaction and reduced wasted work, not marginal improvement under unconstrained convergence.
7.10. Implications for the Challenge–Mitigation Map (Table 3)
Mobility/topology churn. Entropy-aware switching lowers orphaning and shortens finality times (
Figure 8), operationalizing mitigation under churn.
Congestion and bandwidth fluctuation. Consensus-first triggering reduces message overhead (
Figure 6) and increases throughput (
Figure 5), limiting injection-phase amplification.
Integrity/coherence under disorder. Strengthening rigor in high-dispersion windows reduces divergence (
Figure 9) and improves validation outcomes (
Table 10).
OBU constraints. Favoring PoS/FBA (or an adaptive hybrid) over PoW in high-
S intervals reduces cryptographic energy by orders of magnitude (
Figure 4) while preserving correctness/coherence.
7.11. Limitations and Threats to Validity
This study is simulation-based (NS-3.35) and omits several physical and deployment effects, including heterogeneous hardware accelerators, GNSS error, clock skew, firmware-level scheduling, and cross-layer resource contention. Adversaries are stylized and bounded (sensitivity analyses) rather than fully adaptive; mixed deployments (802.11p with LTE-V2X) may shift the optimal thresholds. These limitations do not negate the central finding within the reported regime: entropy-conditioned mode/rigor control improves timeliness and coherence under churn and dispersion.
To partially address this gap, we incorporate an explicit contention abstraction (
Section 5.10) using CPU and MAC availability factors that bound cryptographic service time and effective radio service rate, providing a first-order sensitivity analysis despite the absence of full ECU scheduling in NS-3.
Because many IoT/IoV security stacks adopt hierarchical ledgers (cluster/region chains anchored to higher tiers), the proposed Engine can be integrated as a regional control layer that adapts rigor locally while maintaining periodic anchoring to a backbone chain for audit and cross-domain authentication [
16,
17,
18,
19].
7.12. Operational Guidance
For dense corridors, trigger consensus first as soon as the trigger condition is met to avoid dwell-induced rebroadcast storms. Prefer signature/quorum-based modes (PoS/FBA) or an adaptive hybrid in high-disorder/high-dispersion windows to meet V2X-style deadlines and conserve energy. Deploy Engines per cluster/segment in partition-prone areas to preserve convergence and accelerate recovery, and retain QoI filtering to reduce invalid admission and wasted work.