1. Introduction
Connected vehicles represent a pivotal advancement in transportation technology, offering transformative capabilities to enhance system efficiency, reduce traffic incidents, improve safety measures, and mitigate congestion impacts [1]. Industry projections indicate a substantial growth trajectory, with the global connected vehicle market expected to reach 9.9 billion units by 2035 [2]. However, the rapid development of these technologies has concurrently introduced significant security vulnerabilities. A notable demonstration of these challenges occurred in 2015, when Chrysler recalled 1.4 million vehicles following the discovery of critical remote attack vulnerabilities [3]. The rapid development of IoT technologies (as surveyed in [4]) and mobile edge computing standardization efforts [5] have created both opportunities and challenges for secure V2X communications.
The automotive industry has progressively integrated vehicle-to-everything (V2X) communication technologies, encompassing both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) systems, into modern vehicles. From a security perspective, ensuring the trustworthiness of messages exchanged among vehicles, infrastructure, and cloud platforms is paramount. Within the Car Communication Consortium, extensive deliberations have been conducted regarding security protocols and protective measures for vehicular systems. These discussions culminated in the establishment of comprehensive security frameworks, including the definition of trusted assurance levels and specific security requirements [6].
The connected vehicle network, encompassing V2V, V2I, and V2X communications, exhibits significant architectural similarities with edge computing paradigms, as illustrated in Figure 1. Recent advancements in industrial edge computing [7] have demonstrated the critical role of low-latency processing and distributed trust management in mission-critical systems, providing valuable insights for vehicular network design. This correspondence necessitates a targeted consideration of both connected vehicle network characteristics and edge computing features in system design. The core research focus lies in developing efficient and comprehensive interaction mechanisms for trusted node access within these networks. However, existing trust models demonstrate limited capability in addressing the critical requirements for secure interaction among connected vehicle nodes, highlighting a substantial research gap in this domain.
The establishment of efficient node interaction fundamentally depends on the maintenance of robust trust relationships, which can be effectively quantified through trust models [8]. However, the implementation of these models necessitates the comprehensive consideration of multiple attributes. Current trust modeling approaches are primarily categorized into two distinct paradigms: centralized trust modeling [9,10] and distributed trust modeling [11,12], differentiated by the entity responsible for trust metric computation. A critical research challenge lies in developing effective trust assessment mechanisms for connected vehicle nodes, particularly in leveraging architectural features to integrate the respective advantages of both centralized and distributed trust models for comprehensive node trust modeling in connected vehicle networks.
In this study, we propose a novel trusted measurement scheme for connected vehicles, incorporating trust classification and trust reversal mechanisms. The primary contributions of this work are as follows:
We have developed a comprehensive trusted measurement framework specifically designed for connected vehicle systems. This framework integrates carefully selected trust attributes and calculation methodologies, meticulously aligned with the unique security requirements of connected vehicle networks.
We introduce an innovative score reversal mechanism to address malicious mutual evaluation scenarios. This mechanism demonstrates superior capability in identifying concealed malicious nodes compared to conventional approaches that fail to account for problematic scoring patterns.
This paper is organized into six distinct sections. Section 1 presents the research background and outlines the core innovations of this study. Section 2 provides a comprehensive review of recent advancements in relevant research domains. Section 3 elaborates on the rationale behind the selection of trust attributes in our proposed framework. Section 4 details the methodology for trust value computation. Section 5 presents the experimental evaluation and analysis of the results. Finally, Section 6 concludes this paper with a summary of the proposed scheme and its key findings.
3. Trusted Multidimensional Assessment Metrics
In response to the critical data collection requirements and diverse security risks inherent in connected vehicle networks, we have developed a comprehensive trust evaluation framework comprising two distinct components: static trust and dynamic trust. The static trust component evaluates the inherent characteristics and integrity of node hardware and software, enabling the identification of fundamental device faults or persistent security compromises. In turn, the dynamic trust component assesses real-time node behavior and data quality, facilitating the timely detection of anomalous activities and potential data inaccuracies, even in otherwise operational nodes. This dual-component approach provides a robust mechanism for comprehensive trust evaluation in connected vehicle systems.
The proposed network architecture in this study organizes connected vehicles into multiple clusters, where each cluster is responsible for executing similar tasks or connecting to shared infrastructure. Within each cluster, numerous nodes operate under the supervision of designated management nodes. These management nodes, selected from infrastructure components or ordinary nodes, facilitate intra-cluster coordination and inter-cluster communication. For the purpose of this study, we assume that all management nodes maintain complete security through the implementation of established security mechanisms. The comprehensive network architecture is illustrated in Figure 2.
Each node must verify its trustworthiness to the management node through both static (inherent characteristics) and dynamic (real-time behavior) trust assessments.
3.1. Hardware Metrics
We partition a node's hardware into two classes: critical components, which are non-replaceable, and extended components, which are user-replaceable (e.g., batteries). During initialization, each node sends its hardware information to the management node.
The hardware trust value is shown in Equation (1):
The formulation explicitly distinguishes between critical hardware components, which are non-replaceable, and external extended hardware, which permits partial replacement. However, the safety thresholds governing such replacements are exclusively determined by system administrators.
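To make this concrete, the following Python sketch shows how a management node might score reported hardware against an expected manifest. The hash-whitelist comparison, the function names, and the averaging rule for replaceable parts are illustrative assumptions; the exact form of Equation (1) is not reproduced here.

```python
import hashlib

def fingerprint(component: str) -> str:
    """Hash a component descriptor (model, firmware revision, serial class)."""
    return hashlib.sha256(component.encode()).hexdigest()

def hardware_trust(reported, expected_critical, allowed_replaceable):
    """Score reported hardware: any critical mismatch drops trust to 0;
    replaceable parts outside the approved list reduce the score."""
    for c in reported["critical"]:
        if fingerprint(c) not in expected_critical:
            return 0.0  # a tampered critical component is never trusted
    ext = reported["replaceable"]
    if not ext:
        return 1.0
    ok = sum(1 for c in ext if fingerprint(c) in allowed_replaceable)
    return ok / len(ext)

# Illustrative manifests kept by the management node
expected = {fingerprint("ECU-fw-2.1")}
allowed = {fingerprint("battery-A"), fingerprint("battery-B")}
node = {"critical": ["ECU-fw-2.1"], "replaceable": ["battery-A", "battery-X"]}
print(hardware_trust(node, expected, allowed))  # 0.5
```

Under this sketch, a tampered critical component immediately voids trust, matching the distinction drawn above, while an unapproved battery only lowers the score.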
3.2. Software Metrics
We characterize software analogously: critical, non-modifiable system software, including the bootloader and OS kernel, and user-configurable applications. Following system configuration, nodes submit their software information to the management node during the initialization phase.
The software trust value is shown in Equation (2):
The formulation explicitly distinguishes between critical software components, which are non-modifiable, and external extended software, which permits partial modification. While users may deploy additional applications over time, the software trust value remains resilient to such legitimate changes. However, the safety thresholds governing such modifications are exclusively determined by system administrators.
3.3. Dynamic Metrics
Given the frequent state transitions of operational nodes, the management node performs a periodic dynamic assessment of each node.
We categorize dynamic trust metrics into two distinct dimensions based on node characteristics: behavioral trust and data trust. Behavioral trust encompasses network behavior patterns, energy consumption profiles, and security policy compliance. Data trust is established through a comparative analysis of data collected by individual nodes against peer node datasets.
The dynamic assessment framework incorporates five key parameters: security policy compliance, energy efficiency, behavioral consistency, network activity, and data reliability.
3.3.1. Security Policy Score
In connected vehicle networks, multiple nodes collaborate through inter-node communication to accomplish shared tasks. Within each device, communication occurs between specific processes. We model these communication processes together with the permission sets required for task execution and the access privileges that a node grants to external processes.
Then, the security policy trust value is shown in Equation (3):
The security policy score quantifies the degree of alignment between the permissions required by the current process and the authorized permission set.
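A minimal sketch of such an alignment score follows, assuming a Jaccard-style ratio between the required and granted permission sets (an illustrative choice; the exact form of Equation (3) is not reproduced here):

```python
def policy_score(required: set, granted: set) -> float:
    """Alignment between the permissions a process requires and those
    actually granted; 1.0 means an exact match."""
    if not required and not granted:
        return 1.0
    return len(required & granted) / len(required | granted)

print(policy_score({"read_gps", "send_v2v"}, {"read_gps", "send_v2v"}))        # 1.0
print(policy_score({"read_gps", "send_v2v"}, {"read_gps"}))                    # 0.5
print(policy_score({"read_gps"}, {"read_gps", "flash_firmware", "send_v2v"}))  # ~0.33
```

Under this choice, over-granted permissions also lower the score, since they widen the attack surface beyond what the task requires.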
3.3.2. Energy Score
Energy status serves as a critical metric for node evaluation, providing dual insights into system operation and security. Primarily, energy consumption patterns reflect the operational load distribution across IoT devices, potentially indicating network segmentation or uneven workload distribution. Secondarily, anomalous energy fluctuations may signify device malfunctions or security breaches. Node energy consumption originates from three primary sources: data transmission/reception, computational processing, and device operation. The input/output energy consumption grows with the number of transmitted/received bits b and the inter-node distance d, combining the average energy consumption per bit with the energy required to establish communication.
Computational energy consumption is determined by task complexity and device efficiency, while collection device consumption depends on device power requirements and quantity. Given the practical challenges of measuring these components in real time, we employ an estimation approach based on the assumption of relatively stable operational patterns. Within a defined timeframe, the total energy reduction can be equated to the sum of these consumption components, as shown in Equation (4):
These consumption variations serve as critical indicators for dynamic trust assessment. For the energy score evaluation, we establish a threshold-based mechanism: each device i is assigned an acceptable energy deviation threshold, and the thresholds of the n devices form a collective threshold vector. Considering historical energy consumption data, the energy score can be computed as shown in Equation (5):
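The threshold mechanism described above can be sketched as follows; the estimated decomposition and the smooth penalty beyond the threshold are illustrative assumptions rather than the exact Equation (5):

```python
def energy_score(observed_drop, estimated_drop, threshold):
    """Score one device: within the deviation threshold the node keeps full
    credit; beyond it, credit decays with the size of the excess."""
    dev = abs(observed_drop - estimated_drop)
    if dev <= threshold:
        return 1.0
    return threshold / dev  # smooth penalty for out-of-band consumption

# estimated = tx/rx + compute + sensing, per the decomposition above
est = 1.2 + 0.8 + 0.5
print(energy_score(2.6, est, threshold=0.3))  # 1.0 (deviation 0.1)
print(energy_score(4.0, est, threshold=0.3))  # 0.2 (deviation 1.5)
```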
3.3.3. Behavior Score
Node behavior analysis focuses on data forwarding characteristics, encompassing three critical metrics:
Network forwarding rate: when a node requests N data packets from a peer and the peer successfully forwards n of them, the forwarding rate is evaluated based on the ratio of delivered packets received by the requester.
Network repeat rate: this metric quantifies data duplication in node reports. A lower repetition rate indicates higher node trustworthiness. Conversely, increased repetition rates may suggest malicious activity, as compromised nodes often replay historical messages to evade detection, thereby reducing their trustworthiness.
Network transmission delay: the transmission delay must remain within the network architecture's permissible range; the time interval for data forwarding is constrained by system-specified thresholds.
The above node behavior metrics can be combined as shown in Equation (6):
The behavior score of a node is shown in Equation (7):
Each component of the behavior metric vector is compared against its historical value, which reflects the node's normal behavior pattern in the past. By comparing the current value of the i-th component with its historical value, we can effectively detect abnormal changes in the node's behavior.
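One plausible instantiation of this comparison is an in-band check against the historical behavior vector; the relative tolerance and the in-band-fraction scoring below are illustrative assumptions, not Equation (7) itself:

```python
def behavior_score(current, historical, tolerance=0.1):
    """Compare each behavior metric (forwarding rate, repeat rate, delay)
    with its historical value; deviations beyond the tolerance count
    against the node. Returns the fraction of metrics still in band."""
    in_band = sum(
        1 for cur, hist in zip(current, historical)
        if abs(cur - hist) <= tolerance * max(abs(hist), 1e-9)
    )
    return in_band / len(current)

hist = [0.95, 0.02, 12.0]   # forwarding rate, repeat rate, delay (ms)
print(behavior_score([0.93, 0.02, 12.5], hist))  # 1.0 -- all within 10%
print(behavior_score([0.60, 0.30, 40.0], hist))  # 0.0 -- replay-like pattern
```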
3.3.4. Node Activity
Node activity serves as a crucial indicator of the vitality and operational stability of a node. The trustworthiness of a node is positively correlated with its activity level, which is determined by the frequency and success rate of interactions with other nodes and clusters.
We define the node activity calculation function using Equation (8):
The function includes a positive regulation constant that governs how quickly the activity value grows toward its upper bound.
The node activity is determined using Equation (9):
The computation uses the number of nodes interacting with the target node and the number of clusters interacting with that node.
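A saturating function of the interaction counts matches this description: activity starts at zero for an idle node and approaches one as interactions accumulate, with a positive regulation constant controlling the growth rate. The exponential form below is an illustrative assumption, not necessarily the exact Equations (8) and (9):

```python
import math

def node_activity(n_nodes: int, n_clusters: int, c: float = 5.0) -> float:
    """Saturating activity score: grows with the number of interacting
    nodes and clusters, approaching 1 as interactions accumulate.
    c is the positive regulation constant controlling the growth rate."""
    return 1.0 - math.exp(-(n_nodes + n_clusters) / c)

print(round(node_activity(0, 0), 3))   # 0.0 -- an idle node has no activity
print(round(node_activity(8, 2), 3))   # 0.865
print(round(node_activity(40, 5), 3))  # 1.0 (rounded) -- saturates
```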
3.3.5. Data Trust
Data trust is established by comparing a node’s reported values with those of other nodes in the same environment to assess data accuracy. For instance, a temperature variation exceeding 5 °C within a confined space is highly improbable.
Significant deviations in a node's reported data relative to its peers may indicate either node malfunction or environmental anomalies. This cross-node validation mechanism aligns with the IoT data integration frameworks proposed in [32], where distributed device collaboration is essential for ensuring data credibility in large-scale sensing networks. To quantify data trust, we define an acceptable offset threshold for each measurable metric within the system; the thresholds of the n metrics form a collective threshold vector. Given historical data, the data trust value can be calculated as shown in Equation (10):
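The cross-node validation can be sketched as follows, using the peer median as the reference value and per-metric offset thresholds; the median choice and the in-band-fraction scoring are illustrative assumptions rather than the exact Equation (10):

```python
from statistics import median

def data_trust(readings, peer_readings, thresholds):
    """Fraction of metrics whose reported value stays within the allowed
    offset of the peer median for the same environment."""
    ok = 0
    for name, value in readings.items():
        ref = median(peer_readings[name])
        if abs(value - ref) <= thresholds[name]:
            ok += 1
    return ok / len(readings)

peers = {"temp_c": [21.0, 21.4, 20.8], "speed_kmh": [48, 50, 51]}
limits = {"temp_c": 5.0, "speed_kmh": 15}
print(data_trust({"temp_c": 22.1, "speed_kmh": 47}, peers, limits))  # 1.0
print(data_trust({"temp_c": 35.0, "speed_kmh": 47}, peers, limits))  # 0.5
```

The 5 °C threshold for temperature mirrors the confined-space example above: a reading 14 °C away from the peer median fails that metric.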
4. Trusted Multidimensional Assessment Methods
4.1. Dynamic Trusted Weight
To effectively weight the different trust dimensions, our methodology incorporates information entropy, a concept Shannon adapted from thermodynamic entropy in founding information theory. This approach provides a robust mechanism for quantifying uncertainty in trust evaluation, ensuring that metrics from more reliable nodes align more closely with actual conditions.
Building upon the trust value components outlined in Section 3, the information entropy for each dimension is computed as shown in Equation (11):
The entropy of each dimension is computed from the trust value of that dimension for a node and the corresponding untrustworthiness, its complement.
When comparing the information entropy of two distinct dimensions, a higher entropy value indicates greater uncertainty within that trust dimension. This increased uncertainty suggests a more significant contribution to the overall trustworthiness assessment, thereby warranting a higher weighting in the evaluation process.
Following the computation of information entropy for the various trust indicators, the trust differentiation for each dimension can be derived as shown in Equation (12):
L represents the number of trust levels, standardized to five in this study; in practical applications, this parameter is configurable by system administrators. The metric weights for the different trust indicators are calculated as shown in Equation (13):
In the formula, each weight is non-negative and the weights sum to one.
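The computation can be sketched directly from the definitions above: each dimension's entropy is the Shannon entropy of its trust value and the complementary untrustworthiness, and, per the rule that higher entropy warrants higher weight, the weights are obtained by normalizing the entropies to sum to one. The proportional normalization is an illustrative reading of Equations (12) and (13):

```python
import math

def binary_entropy(t: float) -> float:
    """Shannon entropy of a trust value t and its complement 1 - t."""
    if t in (0.0, 1.0):
        return 0.0
    return -t * math.log2(t) - (1 - t) * math.log2(1 - t)

def entropy_weights(trust_values):
    """Weight each trust dimension by its entropy (uncertainty), then
    normalize so the weights sum to one."""
    h = [binary_entropy(t) for t in trust_values]
    total = sum(h)
    if total == 0:
        return [1.0 / len(h)] * len(h)
    return [x / total for x in h]

dims = [0.9, 0.5, 0.7]          # e.g. policy, energy, behavior scores
w = entropy_weights(dims)
print([round(x, 3) for x in w])
print(round(sum(w), 6))          # 1.0
```

The dimension at 0.5 carries maximal entropy (one bit) and therefore receives the largest weight under this rule.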
4.2. Time Decay
Trust evaluation should incorporate not only the current state but also historical trust patterns and their temporal evolution. This temporal sensitivity necessitates a dynamic decay mechanism for historically acquired trust values over time.
Consider a node subjected to n trust evaluations between an initial time and the current time t, yielding a sequence of multidimensional trust assessment indicators, where the first entry represents the oldest metric and the last corresponds to the most recent assessment. The time weight coefficient is defined as in Equation (14):
The time decay function, constrained to values between 0 and 1, assigns temporally weighted significance to trust metrics, prioritizing recent interactions over historical ones. It can be formulated as shown in Equation (15):
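An exponential decay is a common choice satisfying these constraints; the sketch below assumes that form, with a rate parameter lam, both illustrative rather than the exact Equation (15):

```python
import math

def time_weights(timestamps, now, lam=0.1):
    """Exponential time decay: each weight lies in (0, 1], highest for the
    most recent assessment; lam controls how fast old evidence fades."""
    return [math.exp(-lam * (now - ts)) for ts in timestamps]

def decayed_trust(values, timestamps, now, lam=0.1):
    """Weighted average of historical trust values under time decay."""
    w = time_weights(timestamps, now, lam)
    return sum(v * x for v, x in zip(values, w)) / sum(w)

vals = [0.9, 0.9, 0.4]   # node degraded in its most recent assessment
times = [0, 10, 20]
print(round(decayed_trust(vals, times, now=20), 3))  # pulled toward the recent low score
```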
4.3. Trust Feedback and Dispersion
Nodes inherently operate within clusters, necessitating frequent communication with peer devices and servers. Consequently, trust evaluation must incorporate assessments from other devices, with trust feedback and dispersion serving as fundamental mechanisms for node management. Each node is obligated to provide feedback following interactions, enabling the server to maintain unified control and update trust assessments based on collective feedback.
Following an interaction between a node and other devices, the subject node evaluates the guest node's behavior on a scale of [0, 1], where 0 indicates complete dissatisfaction and 1 represents full satisfaction.
Let the interaction satisfaction of node i toward node j be calculated as shown in Equation (16):
In this formulation, the trust weights defined in Section 4.1 are applied to the corresponding components of the node behavior metrics.
Upon computing the interaction satisfaction, the evaluating node stores the results and updates the historical record vector as shown in Equation (17):
Each record pairs the timestamp of the i-th interaction with the subjective satisfaction observed at that time.
The feedback trust value is calculated by aggregating historical interaction satisfaction, as shown in Equation (18):
Given the potential instability of node services, it is essential to calculate the trust dispersion based on historical interaction satisfaction after obtaining the direct trust value. This dispersion metric serves as a subjective evaluation of the reliability of the node's self-assessment and is subsequently reported to the server during feedback submission. The dispersion degree is calculated as shown in Equation (19):
The server maintains a feedback trust record matrix structured as shown in Equation (20):
The feedback trust value from node i to node j is recorded along with the corresponding dispersion degree. When the dispersion degree exceeds its threshold, the record is excluded from the calculation of trust feedback, dispersion, and overall trust.
The feedback trust evaluation for node j is calculated as the mathematical expectation of the j-th column of the feedback trust matrix.
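These steps can be sketched as follows; using the arithmetic mean for Equation (18), the population standard deviation for Equation (19), and a fixed dispersion cutoff for excluding unreliable evaluators are illustrative assumptions:

```python
from statistics import mean, pstdev

def feedback_trust(satisfaction_history):
    """Aggregate a node's historical interaction satisfaction."""
    return mean(satisfaction_history)

def dispersion(satisfaction_history):
    """Spread of the satisfaction records: a high value flags unstable
    service, so the server can discount that evaluator."""
    return pstdev(satisfaction_history)

def column_trust(feedback_matrix, j, max_dispersion=0.25):
    """Expected feedback trust for node j, skipping rows whose scores
    are too erratic to be reliable."""
    usable = [row[j] for row in feedback_matrix
              if dispersion(row) <= max_dispersion]
    return mean(usable) if usable else 0.0

stable  = [0.8, 0.82, 0.79, 0.81]
erratic = [0.1, 0.95, 0.2, 0.9]
print(round(feedback_trust(stable), 3), round(dispersion(stable), 3))
print(round(dispersion(erratic), 3))   # large -- this evaluator is excluded
```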
4.4. Detection and Reversal Mechanism of Scores from a Malicious Evaluation
In scenarios where a system is compromised, attackers may infiltrate multiple devices to collaboratively manipulate trust evaluations. These compromised nodes may artificially inflate trust scores for other malicious nodes during the feedback process, thereby deceiving the system into classifying them as trustworthy. To address this vulnerability, we propose a score reversal mechanism designed to mitigate the impact of fraudulent evaluations from compromised devices.
Prior to implementing the reversal mechanism, we employ a filtering process to identify suspicious scores. Our methodology utilizes the k-means clustering algorithm for anomaly detection in trust evaluations.
The inputs are the node list, each node's trust value, the feedback record matrix, and the threshold for each variable, together with the maximum number of K-means iterations. The output is a list of problematic mutual evaluations. The algorithm consists of the following steps:
Initialization phase: The method uses node i's trust value and its evaluation tuple toward node j to construct data points. Two initial centroids are randomly selected from these points. The iteration counter is initialized to zero, and a list is created to store the identified problematic evaluations.
K-means clustering: Two cluster lists are initialized to represent the K-means clusters, and a centroid change flag, defaulting to false, tracks updates to the centroids. In each iteration, the counter is incremented; for each data point, the distances to the two current centroids are computed, the point is assigned to the cluster with the smaller distance, and the centroids are recalculated from the updated clusters. The algorithm terminates when the maximum iteration count is reached or the centroids remain unchanged.
Screening stage: The direct trust values of the nodes in the two clusters are compared, and the cluster with the lower trust values is identified as containing problematic mutual evaluations. These nodes are added to the problem group.
The pseudo-code is listed in Algorithm 1.
Algorithm 1 Calculate problematic scores
Input: Node list; each node's trust value; feedback record matrix; each variable's threshold; maximum number of iterations. Output: Problematic scores record matrix. Each cluster manager calculates over its own nodes.
1: for each evaluation of node j by node i do
2:     construct a data point from node i's trust value and its evaluation tuple toward node j
3: end for
4: randomly select two data points as the initial centroids c1 and c2; iter ← 0
5: repeat
6:     iter ← iter + 1; empty the clusters S1 and S2; changed ← false
7:     for each data point p do
8:         compute the distances d1 = dist(p, c1) and d2 = dist(p, c2)
9:         if d1 ≤ d2 then add p to S1 else add p to S2
10:    end for
11:    recompute the centroids c1′ and c2′ from S1 and S2
12:    if c1′ ≠ c1 or c2′ ≠ c2 then c1 ← c1′; c2 ← c2′; changed ← true
13: until iter ≥ the maximum number of iterations or changed = false
14: if the mean trust of S1 < the mean trust of S2 then P ← S1 else P ← S2
15: return P
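A runnable sketch of the detection procedure follows, using squared Euclidean distance and two-dimensional data points built from an evaluator's own trust and the score it gave; the point construction, distance measure, and seed are illustrative assumptions:

```python
import random

def dist(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def centroid(cluster):
    if not cluster:
        return (0.0, 0.0)
    return tuple(sum(c) / len(cluster) for c in zip(*cluster))

def two_means(points, max_iter=50, seed=0):
    """Plain 2-means over (evaluator trust, given score) pairs; mirrors
    the iteration in Algorithm 1 and returns the two clusters."""
    rng = random.Random(seed)
    c1, c2 = rng.sample(points, 2)
    for _ in range(max_iter):
        s1 = [p for p in points if dist(p, c1) <= dist(p, c2)]
        s2 = [p for p in points if dist(p, c1) > dist(p, c2)]
        n1, n2 = centroid(s1), centroid(s2)
        if (n1, n2) == (c1, c2):
            break  # centroids unchanged: converged
        c1, c2 = n1, n2
    return s1, s2

def problematic(points):
    """Screening stage: the cluster whose evaluators have the lower mean
    trust is flagged as the problem group."""
    s1, s2 = two_means(points)
    m1 = sum(p[0] for p in s1) / len(s1)
    m2 = sum(p[0] for p in s2) / len(s2)
    return s1 if m1 < m2 else s2

# (evaluator trust, score it gave): low-trust nodes rating each other high
pts = [(0.9, 0.85), (0.88, 0.8), (0.92, 0.82), (0.2, 0.95), (0.25, 0.9)]
print(problematic(pts))  # the two collusive low-trust evaluators
```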
The score reversal mechanism operates as follows: when a node receives a high trust score during feedback evaluation but demonstrates consistently low scores across other trust dimensions, the server will automatically replace the inflated score with a lower, more representative value. Simultaneously, the system generates an administrative alert regarding the anomalous evaluation of the node in question.
The pseudo-code is listed in Algorithm 2.
Algorithm 2 Trust reverse
Input: Node list; each node's trust value; feedback record matrix; each variable's threshold; problematic scores record matrix. Output: Fixed feedback record matrix. Each cluster manager calculates over its own nodes.
1: for each node j in the node list do
2:     if node j's other trust dimensions fall below their thresholds then
3:         for each feedback score given to node j do
4:             if the score appears in the problematic scores record matrix and is high then
5:                 replace the score with a lower, representative value and raise an administrative alert
6:             end if
7:         end for
8:     end if
9: end for
10: return the fixed feedback record matrix
4.5. Trust Grouping
Connected vehicle nodes exhibit characteristics analogous to social communities, where each cluster functions as a micro-society with distinct structures and roles. Within these clusters, nodes share resources and engage in interactions. A node’s initial trust is significantly influenced by its cluster’s reputation, particularly before establishing its own trust history.
Building upon the variables defined in the previous sections, we associate each node with a static metric vector and an operating vector.
Let the expected static metric vector of managed devices be given from the perspective of the cluster manager. For a node under evaluation, the static metric evaluation function in the trust grouping process is computed as shown in Equation (21):
The node under evaluation also exhibits dynamic trust characteristics, requiring the evaluation of its reliability over a time interval to determine its trustworthiness. This reliability is calculated as shown in Equation (22):
The administrator must establish a threshold for group membership. A node is permitted to join a manager's group only if its calculated trust metric meets or exceeds that threshold.
For nodes lacking direct neighbor trust relationships, their feedback trust values are derived from the cluster’s average trust value.
4.6. Total Trust
For each node, we obtain both its direct trust status and the modified trust feedback from other nodes. Consequently, our proposed method can be applied to compute the total trust of a node as shown in Equation (23):
The subjective and objective coefficients are determined by the system's conditions and are constrained to sum to one. The remaining terms take the values of the relevant variables defined in the previous sections.
Other nodes assess whether they should interact with a given node based on its computed total trust and their own trust requirements.
Following an interaction, the global trustworthiness of both nodes is updated based on the evaluation of the new interaction.
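The combination and the resulting interaction decision can be sketched as follows; the coefficient value and the requirement threshold are illustrative assumptions:

```python
def total_trust(direct, feedback, alpha=0.6):
    """Combine a node's direct trust with the modified feedback trust;
    the two coefficients are constrained to sum to one."""
    beta = 1.0 - alpha
    return alpha * direct + beta * feedback

def should_interact(total, requirement=0.7):
    """A peer interacts only if total trust meets its own requirement."""
    return total >= requirement

t = total_trust(direct=0.82, feedback=0.64)
print(round(t, 3), should_interact(t))  # 0.748 True
```

After each interaction, both parties would recompute these values from the updated evaluation, reflecting the update rule described above.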
5. Scheme Analysis
To validate the proposed scheme, we developed a comprehensive test model and established an experimental environment, the detailed configuration of which is presented in Table 1. We conducted comprehensive performance evaluations using MATLAB R2022a, comparing the proposed scheme with Hussein's scheme (scheme 1) [33] and Xie's scheme (scheme 2) [34]. Table 2 compares the model architectures of the three solutions.
The experimental setup consists of 100 connected vehicle nodes distributed within a 100 × 100 m area. Nodes in this environment are susceptible to malicious attacks or internal failures, both of which result in reduced trust values. Upon detection of anomalous behavior, compromised nodes are isolated from the network. We measured two critical performance metrics: the information success rate (ISR), which reflects communication reliability, and the trust rate (TR), which indicates system integrity.
Each simulation iteration involves randomly deploying a predetermined number of malicious nodes within the test area, followed by system initialization and operation.
For each evaluated scheme, we executed the simulation environment over 150 operational cycles. The resulting system node states are visually represented in Figure 3 and Figure 4.
A comparative analysis of the three schemes reveals that our proposed approach demonstrates superior performance in minimizing the incorporation of problematic nodes, represented by red hollow nodes in the visualization.
We systematically measured the average information success rate (ISR) across varying initial percentages of problematic nodes, ranging from 5% to 20%. The comprehensive results are presented in Figure 5.
The information success rate (ISR) for each operational cycle is graphically presented in Figure 6.
As illustrated in Figure 6, our proposed scheme demonstrates superior capability in maintaining the system ISR, particularly under conditions of low initial problematic node concentrations. Furthermore, in scenarios with higher initial problematic node populations, our approach exhibits enhanced performance following the stabilization of mutual evaluation metrics.
We systematically analyzed the trust rate (TR) across various initial system configurations, with the comprehensive results presented in Figure 7.
The proposed scheme demonstrates superior resilience, as evidenced by a slower degradation rate of trust states across various initial conditions compared to alternative approaches. When analyzed in conjunction with ISR metrics, these results confirm the efficacy of our novel score reversal mechanism and validate the overall effectiveness of our proposed solution.
6. Conclusions
This study develops a trusted measurement framework for connected vehicle networks that unifies the static attestation of hardware and software with an entropy-weighted, time-decayed assessment of five dynamic dimensions. A two-stage defense, built on K-means outlier detection followed by score reversal, neutralizes collusive rating abuse and preserves rating integrity. Simulations with one hundred vehicles arranged in four clusters reveal that the framework raises the information success rate by 12 percent and caps the false-positive isolation rate at 6.1 percent, markedly outperforming Hussein’s community trust and Xie’s dynamic behavior baselines under identical conditions. Because the core algorithms execute with distributed O(n log n) complexity, they can run in real time on resource-constrained vehicular edge devices. Although the current evaluation is limited to MATLAB, the design is hardware-agnostic and cluster-aware, suggesting readiness for large-scale V2X deployments. Future work will extend verification to 5G/ITS-G5 testbeds, introduce reinforcement-based threshold tuning, and investigate latency and energy overhead in high-mobility settings to further refine practical applicability.