Article

A Trusted Measurement Scheme for Connected Vehicles Based on Trust Classification and Trust Reverse

1 China National Institute of Standardization, Beijing 100191, China
2 College of Computer Science, Beijing University of Technology, Beijing 100124, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(9), 1453; https://doi.org/10.3390/math13091453
Submission received: 28 March 2025 / Revised: 18 April 2025 / Accepted: 26 April 2025 / Published: 28 April 2025
(This article belongs to the Section E1: Mathematics and Computer Science)

Abstract

As security issues in vehicular networks continue to intensify, ensuring the trustworthiness of message exchanges among vehicles, infrastructure, and cloud platforms has become increasingly critical. Although trust authentication serves as a fundamental solution to this challenge, existing models fail to effectively address the specific requirements of vehicular networks, particularly in defending against malicious evaluations. This paper proposes a novel multidimensional trust evaluation framework that integrates both static and dynamic metrics. To tackle the issue of malicious ratings in peer assessments, a rating reversal mechanism based on K-means clustering is designed to effectively identify and correct abnormal trust feedback. In addition, the framework incorporates an entropy-based trust weight allocation mechanism and a time decay model to enhance adaptability in dynamic environments. The simulation results demonstrate that, compared with traditional approaches, the proposed scheme improves the average successful information rate by 12% and reduces the false positive rate to 6.1%, confirming its superior performance in securing communications within the vehicular network ecosystem.

1. Introduction

Connected vehicles represent a pivotal advancement in transportation technology, offering transformative capabilities to enhance system efficiency, reduce traffic incidents, improve safety measures, and mitigate congestion impacts [1]. Industry projections indicate a substantial growth trajectory, with the global connected vehicle market expected to reach 9.9 billion units by 2035 [2]. However, the rapid development of these technologies has concurrently introduced significant security vulnerabilities. A notable demonstration of these challenges occurred in 2015 when Chrysler initiated a recall of 1.4 million vehicles following the discovery of critical remote attack vulnerabilities [3]. The rapid development of IoT technologies (as surveyed in [4]) and mobile edge computing standardization efforts [5] have created both opportunities and challenges for secure V2X communications.
The automotive industry has progressively integrated vehicle-to-everything (V2X) communication technologies, encompassing both vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) systems, into modern vehicles. From a security perspective, ensuring the trustworthiness of messages exchanged among vehicles, infrastructure, and cloud platforms is paramount. Within the Car Communication Consortium, extensive deliberations have been conducted regarding security protocols and protective measures for vehicular systems. These discussions culminated in the establishment of comprehensive security frameworks, including the definition of trusted assurance levels and specific security requirements [6].
The connected vehicles network, encompassing V2V, V2I, and V2X communications, exhibits significant architectural similarities with edge computing paradigms, as illustrated in Figure 1. Recent advancements in industrial edge computing [7] have demonstrated the critical role of low-latency processing and distributed trust management in mission-critical systems, providing valuable insights for vehicular network design. This correspondence necessitates a targeted consideration of both connected vehicle network characteristics and edge computing features in system design. The core research focus lies in developing efficient and comprehensive interaction mechanisms for trusted node access within these networks. However, existing trust models demonstrate limited capability in addressing the critical requirements for secure interaction among connected vehicle nodes, highlighting a substantial research gap in this domain.
The establishment of efficient node interaction fundamentally depends on the maintenance of robust trust relationships, which can be effectively quantified through trust models [8]. However, the implementation of these models necessitates the comprehensive consideration of multiple attributes. Current trust modeling approaches are primarily categorized into two distinct paradigms: centralized trust modeling [9,10] and distributed trust modeling [11,12], differentiated by the entity responsible for trust metric computation. A critical research challenge lies in developing effective trust assessment mechanisms for connected vehicle nodes, particularly in leveraging architectural features to integrate the respective advantages of both centralized and distributed trust models for comprehensive node trust modeling in connected vehicle networks.
In this study, we propose a novel trusted measurement scheme for connected vehicles, incorporating trust classification and trust reversal mechanisms. The primary contributions of this work are as follows:
  • We have developed a comprehensive trusted measurement framework specifically designed for connected vehicle systems. This framework integrates carefully selected trust attributes and calculation methodologies, meticulously aligned with the unique security requirements of connected vehicle networks.
  • We introduce an innovative score reversal mechanism to address malicious mutual evaluation scenarios. This mechanism demonstrates superior capability in identifying concealed malicious nodes compared to conventional approaches that fail to account for problematic scoring patterns.
This paper is organized into six distinct sections. Section 1 presents the research background and outlines the core innovations of this study. Section 2 provides a comprehensive review of recent advancements in relevant research domains. Section 3 elaborates on the rationale behind the selection of trust attributes in our proposed framework. Section 4 details the methodology for trust value computation. Section 5 presents the experimental evaluation and analysis of the results. Finally, Section 6 concludes this paper with a summary of the proposed scheme and its key findings.

2. Related Work

2.1. Security Mechanism of Connected Vehicles

Despite extensive research efforts by both academia and industry, connected vehicle nodes continue to face significant security challenges in open environments. Several approaches have been proposed to address these challenges. Jiang et al. [13] identified four primary research directions for security enhancement in IoT and connected vehicles: authentication, access control, management, and low-cost encryption schemes. Alfardus and Rawat [15] developed an RNN-based detection system using wavelet transforms for real-time IVN protection. Aledhari et al. [14] proposed new CAV security protocols and threat classifications. Traditional solutions include Ning's [16] implementation of vendor-provided trusted execution environments to ensure secure node operation and Jung et al.'s [17] development of security-enhanced management platforms utilizing the ARM Platform Security Architecture. However, these solutions exhibit limitations in terms of hardware dependency and limited generalizability. Recent studies on edge computing security frameworks [18] have emphasized the necessity of hierarchical protection mechanisms aligned with cybersecurity standards, which resonates with the distributed trust challenges in vehicular networks. Furthermore, while some researchers have explored time-automata-based secure gateways through firewall retrofitting similar to traditional network security approaches, such solutions demonstrate inadequate support for the distributed nature of connected vehicle networks. Current research continues to develop more adaptive solutions to overcome these challenges. The proposed trust evaluation framework operates without relying on proprietary hardware. It integrates multidimensional metrics, including static and dynamic trust, offering strong scalability and platform independence. This enables effective security protection for the Internet of Things and makes the framework well suited for large-scale deployment environments.

2.2. Trust Model

The concept of trust, originating from sociological studies, was first adapted for device management by Blaze et al. [19] in 1996. Their pioneering work conceptualized trust models as proactive defense mechanisms, establishing a framework for enhancing the trustworthiness of connected systems through trust extension. Building upon this foundation, recent advancements have significantly expanded trust management techniques for vehicular networks. Shahariar and Phillips [20] proposed a tamper-proof device-based trust management framework for VANETs that regulates driver behavior and reduces communication overhead by disseminating feedback only when conflicting information arises. Mwanje et al. [21] further introduced a fuzzy-logic-based trust evaluation framework that considers factors such as message freshness and sender location to determine the trust level of vehicles in the network. Ma et al. [22] developed an integrated trust-based evaluation model for edge nodes, facilitating efficient edge computing operations, while Sultan et al. [23] proposed a collaborative trust approach for detecting malicious nodes in vehicular ad hoc networks (VANETs). However, the literature lacks detailed documentation regarding the specific trust metric processes employed in their model. In the domain of vehicular networks, Hasrouny et al. [24] introduced a hybrid trust model (HTM) coupled with a malicious behavior detection system (MDS) specifically designed for VANET systems. While effective within its target domain, this approach demonstrates limited applicability to other technological domains. The information-entropy-based multidimensional trust evaluation mechanism designed in this paper incorporates a time decay function, enabling the system to dynamically adjust the weight of each trust dimension over time. This approach allows for more effective differentiation between short-term anomalies and long-term malicious behaviors, thereby improving the accuracy of trust inference and ensuring its effective applicability in the Internet of Things domain.

2.3. Machine Learning

Machine learning (ML) has emerged as a prevalent and effective technique for malicious node detection in various network environments. Akbani et al. [25] pioneered an approach combining machine learning with reputation systems, while Liu et al. [26] developed a trust system that evaluates node trustworthiness through routing path analysis. Subsequent advancements by Liu et al. [27] introduced a classification scheme utilizing linear regression methods to distinguish between malicious and benign nodes, incorporating the potential for multiple mixed attacks. Aslan and Sen [28] proposed a dynamic ML-based trust management model for VANETs, adaptively assessing trust levels to enhance malicious node detection while reducing computational overhead. Further research by Yang et al. [29] identified sophisticated selective edge packet attacks, highlighting the evolving intelligence of malicious nodes in executing targeted attacks. Wang et al. [30] developed MESMERIC, an ML-driven trust management mechanism that integrates direct trust, indirect trust, and contextual information to separate trustworthy vehicles via optimal decision boundaries. Rashid et al. [31] further designed an adaptive real-time detection framework for VANETs, employing a distributed multi-layer classifier validated through OMNET++/SUMO simulations, achieving high DDoS attack detection accuracy. However, despite these advancements, existing machine-learning-based approaches face limitations in their applicability to connected vehicle systems with edge computing capabilities, primarily due to high computational overhead and inherent model constraints. The lightweight rating reversal mechanism proposed in this paper employs the K-means clustering algorithm to detect abnormal rating behaviors and automatically correct malicious ratings. This significantly reduces the system's computational overhead while maintaining high detection accuracy, thereby enhancing its practical deployability in resource-constrained vehicular environments.

3. Trusted Multidimensional Assessment Metrics

In response to the critical data collection requirements and diverse security risks inherent in connected vehicle networks, we have developed a comprehensive trust evaluation framework comprising two distinct components: static trust and dynamic trust. The static trust component evaluates the inherent characteristics and integrity of node hardware and software, enabling the identification of fundamental device faults or persistent security compromises. Conversely, the dynamic trust component assesses real-time node behavior and data quality, facilitating the timely detection of anomalous activities and potential data inaccuracies, even in otherwise operational nodes. This dual-component approach provides a robust mechanism for comprehensive trust evaluation in connected vehicle systems.
The proposed network architecture in this study organizes connected vehicles into multiple clusters (clu_1, clu_2, ..., clu_n), where each cluster is responsible for executing similar tasks or connecting to shared infrastructure. Within each cluster, numerous nodes (node_n1, node_n2, ..., node_nm) operate under the supervision of designated management nodes (manage_n1, manage_n2, ..., manage_nm). These management nodes, selected from infrastructure components or ordinary nodes, facilitate intra-cluster coordination and inter-cluster communication. For the purpose of this study, we assume that all management nodes maintain complete security through the implementation of established security mechanisms. The comprehensive network architecture is illustrated in Figure 2.
Each node must verify its trustworthiness to the management node through both static (inherent characteristics) and dynamic (real-time behavior) trust assessments.

3.1. Hardware Metrics

We represent the hardware components as (h_0, h_1, ..., h_k, ..., h_n), where (h_1, ..., h_k) denotes non-replaceable node components and (h_{k+1}, ..., h_n) indicates user-replaceable parts (e.g., batteries). During initialization, each node sends its hardware information (h_0, h_1, ..., h_n) to the management node.
The hardware trust value is shown in Equation (1):
Me(Ha) = \prod_{i=1}^{k} \left( h_i \cdot h_i' \right) \cdot \left( \frac{1}{n-k} \sum_{i=k+1}^{n} h_i \cdot h_i' \right)
where h_i' denotes the corresponding value registered at initialization.
The formulation explicitly distinguishes between critical hardware components, which are non-replaceable, and external extended hardware, which permits partial replacement. However, the safety thresholds governing such replacements are exclusively determined by system administrators.

3.2. Software Metrics

We characterize the software components as (so_0, so_1, ..., so_k, ..., so_n), where (so_0, ..., so_k) encompasses critical, non-modifiable system software, including the bootloader and OS kernel, while (so_{k+1}, ..., so_n) comprises user-configurable applications. Following system configuration, nodes submit their software information (so_0, so_1, ..., so_k, ..., so_n) to the management node during the initialization phase.
The software trust value is shown in Equation (2):
Me(So) = \prod_{i=1}^{k} \left( so_i \cdot so_i' \right) \cdot \left( \frac{1}{n-k} \sum_{i=k+1}^{n} so_i \cdot so_i' \right)
where so_i' denotes the corresponding value submitted at initialization.
The formulation explicitly distinguishes between critical software components, which are non-replaceable, and external extended software, which permits partial modification. While users may sequentially deploy additional applications, the software trust value maintains inherent resilience. However, the safety thresholds governing such modifications are exclusively determined by system administrators.
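To make the static check concrete, the following Python sketch scores a reported component inventory against the baseline registered at initialization. It assumes an indicator reading of Equations (1) and (2): any deviation in a critical component collapses the score to zero, while replaceable components contribute the fraction that still matches; the exact comparison operator used in the paper may differ.

```python
def static_trust(baseline, reported, k):
    """Static trust for one node (hardware or software inventory).

    baseline, reported: lists of component fingerprints (e.g., hashes);
    indices 0..k-1 are critical (non-replaceable), k..n-1 are replaceable.
    Any critical mismatch yields 0; otherwise return the matching fraction
    of the replaceable components.
    """
    if any(b != r for b, r in zip(baseline[:k], reported[:k])):
        return 0.0
    tail = list(zip(baseline[k:], reported[k:]))
    if not tail:
        return 1.0
    return sum(b == r for b, r in tail) / len(tail)


# One of two user-replaceable applications was changed after initialization.
print(static_trust(["bl_v1", "kern_v5", "app_a", "app_b"],
                   ["bl_v1", "kern_v5", "app_a", "app_c"], k=2))  # 0.5
```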

3.3. Dynamic Metrics

Given the frequent state transitions of operational nodes, the management node performs a periodic dynamic assessment of each node.
We categorize dynamic trust metrics into two distinct dimensions based on node characteristics: behavioral trust and data trust. Behavioral trust encompasses network behavior patterns, energy consumption profiles, and security policy compliance. Data trust is established through a comparative analysis of data collected by individual nodes against peer node datasets.
The dynamic assessment framework incorporates five key parameters: security policy compliance Me(St), energy efficiency Me(ΔE), behavioral consistency Me(Beh), network activity Me(Na), and data reliability Me(Data).

3.3.1. Security Policy Score

In connected vehicle networks, multiple nodes collaborate through inter-node communication to accomplish shared tasks. Within each device, communication occurs between specific processes. We model these communication processes as c_1, c_2, ..., c_l, with the corresponding permission sets required for task execution denoted as {j_1, j_2, ..., j_l}. The access privileges granted to external processes by a node are represented as j_a.
Then, the security policy trust value is shown in Equation (3):
Me(St) = \frac{1}{l} \sum_{i=1}^{l} \frac{j_i}{j_a}
The security policy score quantifies the degree of alignment between the permissions required by the current process and the authorized permission set.
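A minimal sketch of this alignment check, assuming permissions are modeled as sets and that the score averages, over the communicating processes, the share of each required permission set j_i that lies inside the authorized set j_a (the set-based reading of Equation (3) is an interpretation, not a transcription):

```python
def security_policy_score(required_perm_sets, authorized_perms):
    """Average over processes of how well required permissions fit the authorized set j_a."""
    authorized = set(authorized_perms)
    ratios = [len(set(req) & authorized) / len(req) for req in required_perm_sets if req]
    return sum(ratios) / len(ratios) if ratios else 1.0


# Two processes; the second requests one permission outside the authorized set.
print(security_policy_score([{"can_read"}, {"can_read", "gps_write"}], {"can_read"}))  # 0.75
```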

3.3.2. Energy Score

Energy status serves as a critical metric for node evaluation, providing dual insights into system operation and security. Primarily, energy consumption patterns reflect the operational load distribution across IoT devices, potentially indicating network segmentation or uneven workload distribution. Secondarily, anomalous energy fluctuations may signify device malfunctions or security breaches. Node energy consumption originates from three primary sources: data transmission/reception, computational processing, and device operation. The input/output energy consumption E_{IO} is calculated as E_{IO} = C_{IO} · b + C_{Init} · b · d², where b represents the number of transmitted/received bits, d denotes the inter-node distance, C_{IO} indicates the average energy consumption per bit, and C_{Init} accounts for the energy required to establish communication.
Computational energy consumption (E_{Com}) is determined by task complexity and device efficiency, while collection device consumption (E_{Coll}) depends on device power requirements and quantity. Given the practical challenges in the real-time measurement of E_{IO}, E_{Com}, and E_{Coll}, we employ an estimation approach based on the assumption of relatively stable operational patterns. Within a defined timeframe, the total energy reduction can be equated to the sum of these consumption components, as shown in Equation (4):
\Delta E = E_t - E_{t-1} = E_{IO} + E_{Com} + E_{Coll}
These consumption variations serve as critical indicators for dynamic trust assessment. For the energy score evaluation, we establish a threshold-based mechanism. Let Eth_i represent the acceptable energy deviation threshold for device i. The collective threshold vector for n devices is defined as Energy_threshold = {Eth_1, Eth_2, ..., Eth_n}. Considering historical energy consumption data ΔE_h, the energy score can be computed through the following formulation, as shown in Equation (5):
Me(\Delta E) = \begin{cases} 1, & \left| \Delta E - \Delta E_h \right| < Energy_{threshold} \\ 1 - \left| \Delta E - \Delta E_h \right|, & \left| \Delta E - \Delta E_h \right| \ge Energy_{threshold} \end{cases}
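The threshold logic of Equation (5) can be sketched as follows; the proportional penalty applied once the deviation exceeds the threshold is an assumption, since only the thresholded structure is fixed by the text.

```python
def energy_score(delta_e, delta_e_hist, threshold):
    """Energy trust: full credit while the deviation from the historical
    consumption profile stays within the admissible band, otherwise the
    score shrinks with the relative deviation (floored at 0)."""
    deviation = abs(delta_e - delta_e_hist)
    if deviation < threshold:
        return 1.0
    return max(0.0, 1.0 - deviation / max(delta_e_hist, 1e-9))


print(energy_score(delta_e=22.0, delta_e_hist=20.0, threshold=5.0))  # 1.0 (within band)
print(energy_score(delta_e=35.0, delta_e_hist=20.0, threshold=5.0))  # 0.25 (anomalous draw)
```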

3.3.3. Behavior Score

Node behavior analysis focuses on data forwarding characteristics, encompassing three critical metrics:
Network forwarding rate n_fr: when a requesting node asks a forwarding node for N data packets and the forwarding node successfully forwards n of them (n ≤ N), the forwarding rate is evaluated as the ratio of delivered packets actually received by the requesting node.
Network repeat rate n r r : This metric quantifies data duplication in node reports. A lower repetition rate indicates higher node trustworthiness. Conversely, increased repetition rates may suggest malicious activity, as compromised nodes often replay historical messages to evade detection, thereby reducing their trustworthiness.
Network transmission delay n_td: the transmission delay must remain within the network architecture's permissible range. The time interval for data forwarding from a node is constrained by system-specified thresholds.
The above node behavior Ω_B can be described as shown in Equation (6):
\Omega_B = (n_{fr}, n_{rr}, n_{td})
The behavior score of Ω_B is shown in Equation (7):
Me(Beh) = \sum_{i=1}^{n} \Omega_{B_i} \times \Omega_{B_i}^{h}
Ω_{B_i}^{h} represents the historical value of the i-th component of the node behavior metric vector. It reflects the normal behavior pattern of the node in the past. By comparing the current value Ω_{B_i} with its historical value Ω_{B_i}^{h}, we can effectively detect abnormal changes in the node's behavior.
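One plausible way to realize this comparison in code is shown below; it scores each behavioral component by its relative deviation from the historical profile and averages the results, which is an illustrative reading rather than a literal transcription of Equation (7).

```python
def behaviour_score(current, historical):
    """Compare the current behaviour vector (n_fr, n_rr, n_td) with its history:
    each component earns 1 minus its relative deviation, floored at 0."""
    scores = []
    for cur, hist in zip(current, historical):
        if hist == 0:
            scores.append(1.0 if cur == 0 else 0.0)
        else:
            scores.append(max(0.0, 1.0 - abs(cur - hist) / abs(hist)))
    return sum(scores) / len(scores)


# Forwarding rate, repeat rate, transmission delay versus their historical values.
print(behaviour_score(current=(0.93, 0.05, 12.0), historical=(0.95, 0.04, 10.0)))
```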

3.3.4. Node Activity

Node activity serves as a crucial indicator of the vitality and operational stability of a node. The trustworthiness of a node is positively correlated with its activity level, which is determined by the frequency and success rate of interactions with other nodes and clusters.
We define the node activity calculation function N o d e A c t C a l c using Equation (8):
NodeActCalc(x) = 1 - \frac{1}{x + \varepsilon}
ε is a positive regulation constant; as x → ∞, NodeActCalc(x) → 1.
The node activity M e ( N a ) is determined using Equation (9):
Me(Na) = \frac{NodeActCalc(node) + NodeActCalc(clu)}{2}
Here, node denotes the number of nodes interacting with the target node, while clu denotes the number of clusters interacting with it.
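Equations (8) and (9) translate directly into code; the sketch below assumes NodeActCalc(x) = 1 - 1/(x + ε), so the score saturates toward 1 as the interaction count grows.

```python
def node_act_calc(x, eps=1.0):
    """Saturating activity score for an interaction count x (Equation (8))."""
    return 1.0 - 1.0 / (x + eps)


def node_activity(num_peer_nodes, num_clusters, eps=1.0):
    """Me(Na): mean of node-level and cluster-level activity (Equation (9))."""
    return (node_act_calc(num_peer_nodes, eps) + node_act_calc(num_clusters, eps)) / 2.0


print(node_activity(num_peer_nodes=9, num_clusters=3))  # (0.9 + 0.75) / 2 = 0.825
```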

3.3.5. Data Trust

Data trust is established by comparing a node’s reported values with those of other nodes in the same environment to assess data accuracy. For instance, a temperature variation exceeding 5 °C within a confined space is highly improbable.
Significant deviations in a node's reported data relative to its peers may indicate either node malfunction or environmental anomalies. This cross-node validation mechanism aligns with the IoT data integration frameworks proposed in [32], where distributed device collaboration is essential for ensuring data credibility in large-scale sensing networks. To quantify data trust, we define an acceptable offset threshold for each measurable metric within the system. The collective threshold vector for n metrics is represented as Data_threshold = {Dt_th1, Dt_th2, ..., Dt_thn}. Given historical data Data_h, the data trust value can be calculated as shown in Equation (10):
Me(Data) = \sum_{i=1}^{n} \begin{cases} 1, & \left| Data_i - Data_i^{h} \right| < Dt_{th_i} \\ 1 - \left| Data_i - Data_i^{h} \right|, & \left| Data_i - Data_i^{h} \right| \ge Dt_{th_i} \end{cases}
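The same thresholded comparison as the energy score applies per metric; the sketch below averages the per-metric scores (whereas Equation (10) accumulates them), and the penalty applied beyond the threshold is again an assumption.

```python
def data_trust(readings, references, thresholds):
    """Per-metric data trust against peer/historical references, averaged over metrics."""
    scores = []
    for x, ref, th in zip(readings, references, thresholds):
        dev = abs(x - ref)
        scores.append(1.0 if dev < th else max(0.0, 1.0 - dev / max(abs(ref), 1e-9)))
    return sum(scores) / len(scores)


# Temperature and humidity readings versus the cluster consensus; the temperature outlier is penalized.
print(data_trust(readings=[31.0, 0.41], references=[22.0, 0.40], thresholds=[5.0, 0.05]))
```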

4. Trusted Multidimensional Assessment Methods

4.1. Dynamic Trusted Weight

To effectively weight the different trust dimensions, our methodology incorporates information entropy, a concept that Shannon adapted from thermodynamic entropy when founding information theory. This approach provides a robust mechanism for quantifying uncertainty in trust evaluation, ensuring that metrics from more reliable nodes align more closely with actual conditions.
Building upon the trust value components outlined in Section 3, the information entropy C t for each dimension is computed as shown in Equation (11):
C_t(Me(i)) = -Me(i) \log Me(i) - (1 - Me(i)) \log (1 - Me(i))
Me(i) represents the trust value of a specific dimension for a node, and 1 - Me(i) denotes the corresponding untrustworthiness.
When comparing the information entropy of two distinct dimensions, a higher entropy value indicates greater uncertainty within that trust dimension. This increased uncertainty suggests a more significant contribution to the overall trustworthiness assessment, thereby warranting a higher weighting in the evaluation process.
Following the computation of information entropy C t for various trust indicators, the trust differentiation T d for each dimension can be derived as shown in Equation (12):
Td_i = \begin{cases} 1 - \frac{1}{\log L} C_t(Me(i)), & C_t(Me(i)) > \rho \\ 0, & C_t(Me(i)) \le \rho \end{cases}
L represents the number of trust levels, standardized to five in this study. In practical applications, this parameter is configurable by system administrators. The metric weights α i for different trust indicators are calculated as shown in Equation (13):
\alpha_i = \frac{Td_i}{\sum_{i=1}^{n} Td_i}, \quad i = 1, 2, \ldots, n
In the formula, 0 \le \alpha_i \le 1 and \sum_{i=1}^{n} \alpha_i = 1.
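The weight allocation of Equations (11)-(13) can be implemented in a few lines; the sketch below uses natural logarithms and treats ρ as the cut-off below which a dimension receives zero differentiation, following the equations as written.

```python
import math


def entropy_weights(trust_values, levels=5, rho=0.1):
    """Entropy-based weights for trust dimensions (Equations (11)-(13))."""
    def binary_entropy(p):
        if p <= 0.0 or p >= 1.0:
            return 0.0
        return -p * math.log(p) - (1 - p) * math.log(1 - p)

    td = []
    for me in trust_values:
        ct = binary_entropy(me)                                     # Equation (11)
        td.append(1 - ct / math.log(levels) if ct > rho else 0.0)   # Equation (12)
    total = sum(td)
    return [t / total for t in td] if total > 0 else [1.0 / len(td)] * len(td)  # Equation (13)


print(entropy_weights([0.9, 0.6, 0.55]))  # weights sum to 1
```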

4.2. Time Decay

Trust evaluation should incorporate not only the current state but also historical trust patterns and their temporal evolution. This temporal sensitivity necessitates a dynamic decay mechanism for historically acquired trust values over time.
Consider a node subjected to n trust evaluations between time t - Δt and t, yielding a sequence of multidimensional trust assessment indicators {M_1, M_2, ..., M_n}. Here, M_1 represents the oldest metric while M_n corresponds to the most recent assessment. The time weight coefficient is defined as in Equation (14):
w_i = h(i) / i
h(i) denotes the time decay function, constrained by h(i) ∈ [0, 1]. This decay function assigns temporally weighted significance to trust metrics, prioritizing recent interactions over historical ones. The time decay function can be formulated as shown in Equation (15):
h(n) = 1, \qquad h(i-1) = h(i) - \frac{1}{n}, \quad 1 < i \le n

4.3. Trust Feedback and Dispersion

Nodes inherently operate within clusters, necessitating frequent communication with peer devices and servers. Consequently, trust evaluation must incorporate assessments from other devices, with trust feedback and dispersion serving as fundamental mechanisms for node management. Each node is obligated to provide feedback following interactions, enabling the server to maintain unified control and update trust assessments based on collective feedback.
Following an interaction between a node and other devices, the subject node evaluates the guest node’s behavior on a scale of [ 0 , 1 ] , where 0 indicates complete dissatisfaction and 1 represents full satisfaction.
Let Ex_{ij} denote the interaction satisfaction of node P_i toward node P_j, which can be calculated as shown in Equation (16):
Ex_{ij} = \left( \prod_{i=1}^{n} \alpha_i \, \Omega_{B_i} \right)^{1/n}
In this formulation, α_i represents the trust weight defined in Section 4.1, while Ω_{B_i} denotes the i-th component of the node behavior metrics.
Upon computing the interaction satisfaction, the server stores the results and updates the historical record vector as shown in Equation (17):
Ex_{ij}^{h} = \left( \langle t_1, Ex_{ij}^{h}(1) \rangle, \langle t_2, Ex_{ij}^{h}(2) \rangle, \ldots, \langle t_n, Ex_{ij}^{h}(n) \rangle \right)
t_i represents the timestamp of the i-th interaction and Ex_{ij}^{h}(i) records the subjective satisfaction at time t_i.
The feedback trust value is calculated by aggregating historical interaction satisfaction, as shown in Equation (18):
Me(Ex) = w_t \, Ex_{ij} + w_{t-1} \, T_{ij}^{h}
Given the potential instability of node services, it is essential to calculate the trust dispersion based on historical interaction satisfaction after obtaining the direct trust value. This dispersion metric serves as a subjective evaluation of the reliability of the node's self-assessment and is subsequently reported to the server during feedback submission. The dispersion degree ρ_{ij} is calculated as shown in Equation (19):
\rho_{ij} = \frac{1}{\frac{1}{n-1} \sum_{k=1}^{n-1} \left( Ex_{ij}^{h}(k) - Me(Ex) \right)^2 + 1}
The server maintains a feedback trust record matrix structured as shown in Equation (20):
TMa_{P_i P_j} = \begin{pmatrix} \langle Ex_{11}, \rho_{11} \rangle & \langle Ex_{12}, \rho_{12} \rangle & \cdots & \langle Ex_{1n}, \rho_{1n} \rangle \\ \langle Ex_{21}, \rho_{21} \rangle & \langle Ex_{22}, \rho_{22} \rangle & \cdots & \langle Ex_{2n}, \rho_{2n} \rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle Ex_{n1}, \rho_{n1} \rangle & \langle Ex_{n2}, \rho_{n2} \rangle & \cdots & \langle Ex_{nn}, \rho_{nn} \rangle \end{pmatrix}
The feedback trust value from P_i to P_j, denoted as TMa_{P_i P_j}, along with the dispersion degree ρ_{ij}, is recorded. When i = j, this record is excluded from the calculation of trust feedback, dispersion, and overall trust.
The trust evaluation for node j, denoted as TMa_{P_* P_j}, is calculated as the mathematical expectation of the j-th column in the feedback trust matrix.
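The feedback bookkeeping reduces to simple statistics over the stored satisfaction records; the sketch below assumes the dispersion of Equation (19) is the reciprocal of the sample variance plus one, and that the per-node feedback trust is the column mean of the matrix in Equation (20).

```python
import statistics


def interaction_dispersion(history, feedback_trust):
    """Dispersion of a rater's historical satisfaction around the feedback trust value:
    stable ratings give a value close to 1, erratic ratings a smaller one."""
    if len(history) < 2:
        return 1.0
    var = sum((h - feedback_trust) ** 2 for h in history) / (len(history) - 1)
    return 1.0 / (var + 1.0)


def feedback_trust_for_node(feedback_column):
    """Trust evaluation of node j: expectation of the j-th matrix column."""
    return statistics.mean(feedback_column)


print(interaction_dispersion([0.80, 0.82, 0.79], feedback_trust=0.80))
print(feedback_trust_for_node([0.80, 0.75, 0.90]))
```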

4.4. Detection and Reversal Mechanism of Scores from a Malicious Evaluation

In scenarios where a system is compromised, attackers may infiltrate multiple devices to collaboratively manipulate trust evaluations. These compromised nodes may artificially inflate trust scores for other malicious nodes during the feedback process, thereby deceiving the system into classifying them as trustworthy. To address this vulnerability, we propose a score reversal mechanism designed to mitigate the impact of fraudulent evaluations from compromised devices.
Prior to implementing the reversal mechanism, we employ a filtering process to identify suspicious scores. Our methodology utilizes the k-means clustering algorithm for anomaly detection in trust evaluations.
The inputs are the node list node = {node_n1, node_n2, ..., node_nm}, each node's trust value Me, the feedback record matrix TMa_{P_i P_j}, the threshold for each variable, and the maximum number of K-means iterations km_max. The output is a list of problematic mutual evaluations, denoted as fix. The algorithm procedure consists of the following steps:
  • Initialization phase: The method utilizes node i's trust value Me_i and its evaluation tuple ⟨Ex_ij, ρ_ij⟩ toward node j to construct data points. Two initial centroids are randomly selected from these points: (Me_x, ⟨Ex_xj, ρ_xj⟩) and (Me_y, ⟨Ex_yj, ρ_yj⟩). The iteration counter km_i is initialized to 0, and a list fix is created to store identified problematic nodes.
  • K-means clustering: Two cluster lists, List1 and List2, are initialized to represent the K-means clusters. A centroid change flag flag_change is introduced to track updates to the centroids, with its default value set to false. In each iteration, km_i is increased by 1. For each data point (Me_i, ⟨Ex_ij, ρ_ij⟩), the distances dist1 and dist2 to the current centroids are computed. The point is assigned to the cluster with the smaller distance, and the centroids are recalculated based on the updated clusters. The algorithm terminates if the maximum iteration count km_max is reached or if the centroids remain unchanged.
  • Screening stage: The direct trust values of the nodes in List1 and List2 are compared, and the cluster with the lower trust values is identified as containing problematic mutual evaluations. These nodes are added to the problem group fix.
The pseudo-code is listed in Algorithm 1.
Algorithm 1 Calculate problematic scores
  • Input: Node list node = {node_n1, node_n2, ..., node_nm}; each node's trust value Me; feedback record matrix TMa_{P_i P_j}; each variable's threshold; maximum number of iterations km_max.
  • Output: Problematic scores record matrix Fix.
  • Each manager of clusters calculates its own nodes.
 1: for each node_ni do
 2:     construct the data point (Me_i, ⟨Ex_ij, ρ_ij⟩)
 3: end for
 4: randomly select two centroids (Me_x, ⟨Ex_xj, ρ_xj⟩) and (Me_y, ⟨Ex_yj, ρ_yj⟩); flag_change = false; km_i = 0
 5: repeat
 6:     C1 = ∅, C2 = ∅, flag_change = false, km_i = km_i + 1
 7:     for each (Me_i, ⟨Ex_ij, ρ_ij⟩) do
 8:         dist1 = ‖Me_i − Me_x‖² + ‖⟨Ex_ij, ρ_ij⟩ − ⟨Ex_xj, ρ_xj⟩‖²
 9:         dist2 = ‖Me_i − Me_y‖² + ‖⟨Ex_ij, ρ_ij⟩ − ⟨Ex_yj, ρ_yj⟩‖²
10:         if dist1 ≤ dist2 then
11:             C1 = C1 ∪ {(Me_i, ⟨Ex_ij, ρ_ij⟩)}
12:         else
13:             C2 = C2 ∪ {(Me_i, ⟨Ex_ij, ρ_ij⟩)}
14:         end if
15:     end for
16:     (Me_temp1, ⟨Ex_temp1j, ρ_temp1j⟩) = (1/|C1|) ∑_{C1} (Me_i, ⟨Ex_ij, ρ_ij⟩)
17:     (Me_temp2, ⟨Ex_temp2j, ρ_temp2j⟩) = (1/|C2|) ∑_{C2} (Me_i, ⟨Ex_ij, ρ_ij⟩)
18:     if (Me_temp1, ⟨Ex_temp1j, ρ_temp1j⟩) ≠ (Me_x, ⟨Ex_xj, ρ_xj⟩) then
19:         (Me_x, ⟨Ex_xj, ρ_xj⟩) = (Me_temp1, ⟨Ex_temp1j, ρ_temp1j⟩)
20:         flag_change = true
21:     end if
22:     if (Me_temp2, ⟨Ex_temp2j, ρ_temp2j⟩) ≠ (Me_y, ⟨Ex_yj, ρ_yj⟩) then
23:         (Me_y, ⟨Ex_yj, ρ_yj⟩) = (Me_temp2, ⟨Ex_temp2j, ρ_temp2j⟩)
24:         flag_change = true
25:     end if
26: until km_i ≥ km_max or flag_change == false
27: if Me_x ≤ Me_y then
28:     Fix = C1
29: else
30:     Fix = C2
31: end if
32: return Fix
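For reference, the screening stage can be sketched in Python as below. The helper screen_problematic_scores is hypothetical (it is not the authors' code): it runs two-centroid K-means over (Me, Ex, ρ) points with plain squared Euclidean distances and flags the cluster whose centroid carries the lower direct trust, mirroring Algorithm 1.

```python
import random


def screen_problematic_scores(points, max_iter=50):
    """Two-cluster K-means over (me, ex, rho) tuples; returns the low-trust cluster."""
    if len(points) < 2:
        return []
    centroids = random.sample(points, 2)
    for _ in range(max_iter):
        c1, c2 = [], []
        for p in points:
            d1 = sum((a - b) ** 2 for a, b in zip(p, centroids[0]))
            d2 = sum((a - b) ** 2 for a, b in zip(p, centroids[1]))
            (c1 if d1 <= d2 else c2).append(p)

        def mean(cluster, fallback):
            if not cluster:
                return fallback
            return tuple(sum(vals) / len(cluster) for vals in zip(*cluster))

        new_centroids = [mean(c1, centroids[0]), mean(c2, centroids[1])]
        if new_centroids == centroids:   # centroids unchanged: converged
            break
        centroids = new_centroids
    # Flag the cluster whose centroid has the lower direct trust value Me.
    return c1 if centroids[0][0] <= centroids[1][0] else c2


suspicious = screen_problematic_scores([(0.90, 0.85, 0.90), (0.88, 0.80, 0.95),
                                        (0.30, 0.95, 0.40), (0.25, 0.90, 0.35)])
print(suspicious)  # expected: the two low-trust raters that handed out inflated scores
```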
The score reversal mechanism operates as follows: when a node receives a high trust score during feedback evaluation but demonstrates consistently low scores across other trust dimensions, the server will automatically replace the inflated score with a lower, more representative value. Simultaneously, the system generates an administrative alert regarding the anomalous evaluation of the node in question.
The pseudo-code is listed in Algorithm 2.
Algorithm 2 Trust reverse
  • Input: Node list node = {node_n1, node_n2, ..., node_nm}; each node's trust value Me; feedback record matrix TMa_{P_i P_j}; each variable's threshold; problematic scores record matrix Fix.
  • Output: Fixed feedback record matrix TMa_fix_{P_i P_j}.
  • Each manager of clusters calculates its own nodes.
 1: for each node_ni do
 2:     if node_ni's Me < threshold then
 3:         for each ⟨Ex_ij, ρ_ij⟩ ∈ TMa_{P_i P_j} do
 4:             if ⟨Ex_ij, ρ_ij⟩ ∈ Fix and node_nj's Me < threshold then
 5:                 TMa_fix_{P_i P_j} = 1 − TMa_{P_i P_j}
 6:             else
 7:                 TMa_fix_{P_i P_j} = TMa_{P_i P_j}
 8:             end if
 9:         end for
10:     end if
11: end for
12: return TMa_fix_{P_i P_j}
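A compact sketch of the reversal step is given below. Because the exact replacement value in Algorithm 2 is not fully legible in the source, the sketch substitutes the complement 1 - score, which is consistent with scores lying in [0, 1] and with the stated goal of replacing inflated ratings by lower ones; the dictionary-based feedback structure is likewise illustrative.

```python
def reverse_scores(feedback, node_trust, flagged, threshold=0.5):
    """Replace suspicious high ratings between low-trust nodes with their complement.

    feedback: dict (rater, ratee) -> satisfaction score in [0, 1]
    node_trust: dict node -> overall trust Me
    flagged: set of (rater, ratee) pairs identified by the K-means screening
    """
    fixed = dict(feedback)
    for (rater, ratee), score in feedback.items():
        if ((rater, ratee) in flagged
                and node_trust.get(rater, 1.0) < threshold
                and node_trust.get(ratee, 1.0) < threshold):
            fixed[(rater, ratee)] = 1.0 - score
    return fixed


trust = {"n1": 0.30, "n2": 0.25, "n3": 0.90}
scores = {("n1", "n2"): 0.95, ("n3", "n2"): 0.40}
print(reverse_scores(scores, trust, flagged={("n1", "n2")}))  # n1's inflated rating of n2 is reversed
```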

4.5. Trust Grouping

Connected vehicle nodes exhibit characteristics analogous to social communities, where each cluster functions as a micro-society with distinct structures and roles. Within these clusters, nodes share resources and engage in interactions. A node’s initial trust is significantly influenced by its cluster’s reputation, particularly before establishing its own trust history.
Building upon the variables defined in the previous sections, we define the static metric vector of a node as IC = (Me(Ha), Me(So)) and its operating vector as RC = (Me(St), Me(ΔE), Me(Beh)).
Let \overline{IC} = (\overline{Me(Ha)}, \overline{Me(So)}) represent the expected static metric vector of managed devices from the perspective of manage_nm. For a node under evaluation node_n, the static metric evaluation function Me_IC in the trust grouping process is computed as shown in Equation (21):
Me_{IC}(\overline{IC}, IC) = \frac{\overline{IC} \cdot IC}{\| \overline{IC} \|_2 \, \| IC \|_2}
The node node_n exhibits dynamic trust characteristics, requiring the evaluation of its reliability over the time interval [t_i, t_{i+1}] to determine its trustworthiness. This reliability is calculated as shown in Equation (22):
Me_{RC} = \frac{Me_{IC}(\overline{IC}, IC)}{t_{i+1} - t_i} \int_{t_i}^{t_{i+1}} Me(St) \, Me(\Delta E) \, Me(Beh) \, dt
The administrator must establish a threshold th for group membership. A node node_n is permitted to join the group managed by manage_nm if its calculated trust metric satisfies Me_RC > th.
For nodes lacking direct neighbor trust relationships, their feedback trust values are derived from the cluster’s average trust value.
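The admission test of Equations (21) and (22) amounts to a cosine similarity scaled by a time-averaged product of dynamic scores. The sketch below replaces the integral with a discrete average over sampled assessment rounds; the 0.7 threshold is illustrative.

```python
import math


def static_match(expected, observed):
    """Cosine similarity between the expected and observed static vectors (Equation (21))."""
    dot = sum(a * b for a, b in zip(expected, observed))
    norm = math.sqrt(sum(a * a for a in expected)) * math.sqrt(sum(b * b for b in observed))
    return dot / norm if norm else 0.0


def admit_node(expected_ic, node_ic, dynamic_samples, th=0.7):
    """Group admission: scale the static match by the averaged product of the
    sampled dynamic scores (St, dE, Beh) and compare against the threshold th."""
    avg_dynamic = sum(st * de * beh for st, de, beh in dynamic_samples) / len(dynamic_samples)
    return static_match(expected_ic, node_ic) * avg_dynamic > th


print(admit_node((0.90, 0.95), (0.88, 0.90),
                 [(0.95, 0.90, 0.97), (0.96, 0.92, 0.95)]))  # True
```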

4.6. Total Trust

For each node, we obtain both its direct trust status and the modified trust feedback from other nodes. Consequently, our proposed method can be applied to compute the total trust of a node as shown in Equation (23):
T_{total} = \beta_o \sum_{i=1}^{7} \alpha_i Me(i) + \beta_s \, TMa_{P_* P_j}
β_o and β_s represent the objective and subjective coefficients, respectively, which are determined by the system's conditions. These coefficients satisfy the constraint β_o + β_s = 1. The term Me(i) denotes the value of each relevant variable as defined in the previous sections.
Other nodes assess whether they should interact with a given node based on its computed total trust T_total and their own trust requirements.
Following an interaction, the global trustworthiness of both nodes is updated based on the evaluation of the new interaction.
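Putting the pieces together, the total trust of Equation (23) is a convex combination of the node's own entropy-weighted dimension scores and the aggregated peer feedback; the 0.6/0.4 split below is purely illustrative.

```python
def total_trust(dimension_scores, weights, feedback_trust, beta_o=0.6, beta_s=0.4):
    """T_total = beta_o * sum(alpha_i * Me(i)) + beta_s * TMa, with beta_o + beta_s = 1."""
    own = sum(w * m for w, m in zip(weights, dimension_scores))
    return beta_o * own + beta_s * feedback_trust


# Seven dimensions: hardware, software, security policy, energy, behaviour, activity, data.
print(total_trust([0.95, 0.90, 0.85, 0.90, 0.80, 0.75, 0.88],
                  [0.15, 0.15, 0.15, 0.15, 0.15, 0.10, 0.15],
                  feedback_trust=0.82))
```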

5. Scheme Analysis

To validate the proposed scheme, we developed a comprehensive test model and established an experimental environment, the detailed configuration of which is presented in Table 1.
We conducted comprehensive performance evaluations using MATLAB R2022a, comparing the proposed scheme with Hussein's scheme (Scheme 1) [33] and Xie's scheme (Scheme 2) [34]. Table 2 compares the model architectures of the three schemes.
The experimental setup consists of 100 connected vehicle nodes distributed within a 100 × 100 m area. Nodes in this environment are susceptible to malicious attacks or internal failures, both of which result in reduced trust values. Upon detection of anomalous behavior, compromised nodes are isolated from the network. We measured two critical performance metrics: the information success rate (ISR), which reflects communication reliability, and the trust rate (TR), which indicates system integrity.
Each simulation iteration involves randomly deploying a predetermined number of malicious nodes within the test area, followed by system initialization and operation.
For each evaluated scheme, we executed the simulation environment over 150 operational cycles. The resulting system node states are visually represented in Figure 3 and Figure 4.
A comparative analysis of the three schemes reveals that our proposed approach demonstrates superior performance in minimizing the incorporation of problematic nodes, represented by red hollow nodes in the visualization.
We systematically measured the average information success rate (ISR) across varying initial percentages of problematic nodes, ranging from 5% to 20%. The comprehensive results are presented in Figure 5.
The information success rate (ISR) for each operational cycle is graphically presented in Figure 6.
As illustrated in Figure 6, our proposed scheme demonstrates superior capability in maintaining the system ISR, particularly under conditions of low initial problematic node concentrations. Furthermore, in scenarios with higher initial problematic node populations, our approach exhibits enhanced performance following the stabilization of mutual evaluation metrics.
We systematically analyzed the trust rate (TR) across various initial system configurations, with the comprehensive results presented in Figure 7.
The proposed scheme demonstrates superior resilience, as evidenced by a slower degradation rate of trust states across various initial conditions compared to alternative approaches. When analyzed in conjunction with ISR metrics, these results confirm the efficacy of our novel score reversal mechanism and validate the overall effectiveness of our proposed solution.

6. Conclusions

This study develops a trusted measurement framework for connected vehicle networks that unifies the static attestation of hardware and software with an entropy-weighted, time-decayed assessment of five dynamic dimensions. A two-stage defense, built on K-means outlier detection followed by score reversal, neutralizes collusive rating abuse and preserves rating integrity. Simulations with one hundred vehicles arranged in four clusters reveal that the framework raises the information success rate by 12 percent and caps the false-positive isolation rate at 6.1 percent, markedly outperforming Hussein’s community trust and Xie’s dynamic behavior baselines under identical conditions. Because the core algorithms execute with distributed O(n log n) complexity, they can run in real time on resource-constrained vehicular edge devices. Although the current evaluation is limited to MATLAB, the design is hardware-agnostic and cluster-aware, suggesting readiness for large-scale V2X deployments. Future work will extend verification to 5G/ITS-G5 testbeds, introduce reinforcement-based threshold tuning, and investigate latency and energy overhead in high-mobility settings to further refine practical applicability.

Author Contributions

Z.D.: methodology; M.W.: software; Q.F.: resources; B.G.: validation; M.C.: text polishing. All authors have read and agreed to the published version of the manuscript.

Funding

Special Fundamental Research Fund for the Central Public Scientific Research Institutes (602024Y-11436, 602025Y-12515 and 602025Y-12516).

Data Availability Statement

The original contributions presented in this study are included in the article; further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gupta, R.; Kumari, A.; Tanwar, S. A Taxonomy of Blockchain Envisioned Edge as a Connected Autonomous Vehicles. Trans. Emerg. Telecommun. Technol. 2021, 32, e4009. [Google Scholar] [CrossRef]
  2. Fuji Keizai. Investigations of the Connected Vehicle Marketing; Marketing Reports; Fuji Keizai: Tokyo, Japan, 2018. [Google Scholar]
  3. Miller, C.; Valasek, C. Remote Exploitation of an Unaltered Passenger Vehicle. In Proceedings of the Black Hat USA, Las Vegas, NV, USA, 1–6 August 2015. [Google Scholar]
  4. Al-Fuqaha, A.; Guizani, M.; Mohammadi, M.; Aledhari, M.; Ayyash, M. Internet of Things: A Survey on Enabling Technologies, Protocols, and Applications. IEEE Commun. Surv. Tutor. 2015, 17, 2347–2376. [Google Scholar] [CrossRef]
  5. Hu, Y.C.; Patel, M.; Sabella, D.; Sprecher, N.; Young, V. Mobile Edge Computing—A Key Technology Towards 5G. ETSI White Paper 2015, 11, 1–16. [Google Scholar]
  6. Kiening, A.; Angermeier, D.; Seudie, H.; Stodart, T.; Wolf, M. Trust Assurance Levels of Cybercars in V2X Communication. In Proceedings of the 2013 ACM Workshop on Security, Privacy & Dependability for Cyber Vehicles, Berlin, Germany, 4 November 2013; ACM: New York, NY, USA, 2013; pp. 49–60. [Google Scholar]
  7. Wang, Q.; Jin, G.; Li, Q.; Wang, K.; Yang, Z.; Wang, H. Industrial Edge Computing: Vision and Challenges. Inf. Control 2021, 50, 257–274. [Google Scholar]
  8. Li, X.-Y.; Gui, X.-L. Research on Dynamic Trust Model for Large Scale Distributed Environment. Ph.D. Thesis, South China University of Technology, Guangzhou, China, 2013. [Google Scholar]
  9. Niu, B.; You, W.; Tang, H.; Wang, X. 5G Network Slice Security Trust Degree Calculation Model. In Proceedings of the 2017 3rd IEEE International Conference on Computer and Communications (ICCC), Chengdu, China, 13–16 December 2017; IEEE: New York, NY, USA, 2017; pp. 1150–1157. [Google Scholar]
  10. Al-Khafajiy, M.; Baker, T.; Asim, M.; Guo, Z.; Ranjan, R.; Longo, A.; Puthal, D.; Taylor, M. COMITMENT: A Fog Computing Trust Management Approach. J. Parallel Distrib. Comput. 2020, 137, 1–16. [Google Scholar] [CrossRef]
  11. You, J.; Shangguan, J.; Xu, S.; Li, Q.; Wang, Y. Distributed Dynamic Trust Management Model Based on Trust Reliability. J. Softw. 2017, 28, 2354–2369. [Google Scholar]
  12. Amirian, S.; Taha, T.R.; Rasheed, K.; Arabnia, H.R. Generative Adversarial Network Applications in Creating a Meta-Universe. In Proceedings of the 2021 International Conference on Computational Science and Computational Intelligence (CSCI), Las Vegas, NV, USA, 15–17 December 2021; IEEE: New York, NY, USA, 2021; pp. 175–179. [Google Scholar]
  13. Jiang, Y.; Zhang, Y.; Xu, A.; Kuang, X.; Meng, J.; Chu, H. An Overview: Data Security Mechanism of Power Terminal in Edge Computing. In Proceedings of the 2020 IEEE International Conference on Energy Internet (ICEI), Sydney, NSW, Australia, 24–28 August 2020; IEEE: New York, NY, USA, 2020; pp. 22–27. [Google Scholar]
  14. Aledhari, M.; Razzak, R.; Rahouti, M.; Yazdinejad, A.; Parizi, R.M.; Qolomany, B.; Guizani, M.; Qadir, J.; Al-Fuqaha, A. Safeguarding Connected Autonomous Vehicle Communication: Protocols, Intra- and Inter-Vehicular Attacks and Defenses. Comput. Secur. 2025, 151, 104352. [Google Scholar] [CrossRef]
  15. Alfardus, A.; Rawat, D.B. Machine Learning-Based Anomaly Detection for Securing In-Vehicle Networks. Electronics 2024, 13, 1962. [Google Scholar] [CrossRef]
  16. Ning, Z.; Zhang, F.; Shi, W. A Study of Using TEE on Edge Computing. J. Comput. Res. Dev. 2019, 56, 1441–1453. [Google Scholar]
  17. Jung, J.; Kim, B.; Cho, J.; Lee, B. A Secure Platform Model Based on ARM Platform Security Architecture for IoT Devices. IEEE Internet Things J. 2021, 9, 5548–5560. [Google Scholar] [CrossRef]
  18. Wu, W.; Zhang, Q.; Wang, H.J. Edge Computing Security Protection from the Perspective of Classified Protection of Cybersecurity. In Proceedings of the 2019 6th International Conference on Information Science and Control Engineering (ICISCE), Shanghai, China, 20–22 December 2019; IEEE: New York, NY, USA, 2019; pp. 278–281. [Google Scholar]
  19. Blaze, M.; Feigenbaum, J.; Lacy, J. Decentralized Trust Management. In Proceedings of the 1996 IEEE Symposium on Security and Privacy, Oakland, CA, USA, 6–8 May 1996; IEEE: New York, NY, USA, 1996; pp. 164–173. [Google Scholar]
  20. Shahariar, R.; Phillips, C. A Trust Management Framework for Vehicular Ad Hoc Networks. Int. J. Secur. Priv. Trust Manag. 2023, 12, 15–36. [Google Scholar] [CrossRef]
  21. Mwanje, M.D.; Kaiwartya, O.; Aljaidi, M.; Cao, Y.; Kumar, S.; Jha, D.; Naser, A.; Lloret, J. Cyber Security Analysis of Connected Vehicles. IET Intell. Transp. Syst. 2024, 18, 1175–1195. [Google Scholar] [CrossRef]
  22. Ma, X.; Li, X. Trust Evaluation Model in Edge Computing Based on Integrated Trust. In Proceedings of the 2018 International Conference on Algorithms, Computing and Artificial Intelligence; ACM: Sanya, China, 2018; pp. 1–6. [Google Scholar]
  23. Sultan, S.; Javaid, Q.; Malik, A.J.; Al-Turjman, F.; Attique, M. Collaborative-Trust Approach Toward Malicious Node Detection in Vehicular Ad Hoc Networks. Environ. Dev. Sustain. 2022, 24, 7532–7550. [Google Scholar] [CrossRef]
  24. Hasrouny, H.; Samhat, A.E.; Bassil, C.; Laouiti, A. Trust Model for Secure Group Leader-Based Communications in VANET. Wirel. Netw. 2019, 25, 4639–4661. [Google Scholar] [CrossRef]
  25. Akbani, R.; Korkmaz, T.; Raju, G.V.S. A Machine Learning Based Reputation System for Defending Against Malicious Node Behavior. In Proceedings of the IEEE GLOBECOM 2008—2008 IEEE Global Telecommunications Conference, New Orleans, LA, USA, 30 November–4 December 2008; IEEE: New York, NY, USA, 2008; pp. 1–5. [Google Scholar]
  26. Liu, X.; Abdelhakim, M.; Krishnamurthy, P.; Tipper, D. Identifying Malicious Nodes in Multihop IoT Networks Using Dual Link Technologies and Unsupervised Learning. Open J. Internet Things 2018, 4, 109–125. [Google Scholar]
  27. Liu, L.; Ma, Z.; Meng, W. Detection of Multiple-Mix-Attack Malicious Nodes Using Perceptron-Based Trust in IoT Networks. Future Gener. Comput. Syst. 2019, 101, 865–879. [Google Scholar] [CrossRef]
  28. Aslan, M.; Sen, S. A Dynamic Trust Management Model for Vehicular Ad Hoc Networks. Vehic. Commun. 2023, 41, 100608. [Google Scholar] [CrossRef]
  29. Yang, L.; Liu, L.; Ma, Z.; Ding, Y. Detection of Selective-Edge Packet Attack Based on Edge Reputation in IoT Networks. Comput. Netw. 2021, 188, 107842. [Google Scholar] [CrossRef]
  30. Wang, Y.; Mahmood, A.; Sabri, M.F.M.; Zen, H.; Kho, L.C. MESMERIC: Machine Learning-Based Trust Management Mechanism for the Internet of Vehicles. Sensors 2024, 24, 863. [Google Scholar] [CrossRef]
  31. Rashid, K.; Saeed, Y.; Ali, A.; Jamil, F.; Alkanhel, R.; Muthanna, A. An Adaptive Real-Time Malicious Node Detection Framework Using Machine Learning in Vehicular Ad-Hoc Networks (VANETs). Sensors 2023, 23, 2594. [Google Scholar] [CrossRef]
  32. Georgakopoulos, D.; Jayaraman, P.P. Internet of Things: From Internet Scale Sensing to Smart Services. Computing 2016, 98, 1041–1058. [Google Scholar] [CrossRef]
  33. Hussein, D.; Bertin, E.; Frey, V. A Community-Driven Access Control Approach in Distributed IoT Environments. IEEE Commun. Mag. 2017, 55, 146–153. [Google Scholar] [CrossRef]
  34. Xie, L.X.; Wei, R.X. A Dynamic Trustworthiness Assessment Method for Internet of Things Nodes. Comput. Appl. 2019, 3, 2597–2603. [Google Scholar]
Figure 1. Connected vehicles structure.
Figure 2. Node organization structure.
Figure 3. Initialized node distribution with 5% problem node status.
Figure 4. Node status after 150 cycles (5% problem node initial) for different evaluated schemes. Red nodes indicate malicious devices; the rest are normal devices. (a) Node status after 150 cycles with 5% initial problem nodes in our scheme. (b) Node status after 150 cycles with 5% initial problem nodes in Scheme 1. (c) Node status after 150 cycles with 5% initial problem nodes in Scheme 2.
Figure 5. Average ISR in different initial states. Schemes follow the same formatting.
Figure 6. Change in the system ISR under different schemes. (a) ISR change in 5% initial problem mode. (b) ISR change in 10% initial problem mode. (c) ISR change in 15% initial problem mode. (d) ISR change in 20% initial problem mode.
Figure 7. Change in the system trust value under different schemes. (a) System trust value change in 5% initial problem mode. (b) System trust value change in 10% initial problem mode. (c) System trust value change in 15% initial problem mode. (d) System trust value change in 20% initial problem mode.
Table 1. Experiment parameters.

Parameter | Value | Description
Node | 100 | Number of nodes
Management | 4 | Number of clusters
Times | 150 | System run cycles
Transenergy | 20 | Energy consumption during transmission (nJ per transmission)
Computeenergy | 2 | Energy consumption during computation (nJ per computation)
Init problem node rate (%) | 5, 10, 15, 20 | Percentage of problem nodes at initialization
Table 2. Comparison of trust evaluation schemes.

Dimension | Our Scheme | Hussein's Scheme | Xie's Scheme
Trust dimensions | Five-dimensional: static + dynamic (hardware/software/behavior/energy/data) | Three-dimensional: community trust (interaction frequency/success rate/reputation) | Four-dimensional: dynamic behavior (network/computation/storage/communication)
Malicious node detection | Two-stage detection (K-means clustering + score reversal) | Community-voting-based anomaly removal | Single-threshold behavior deviation detection
Dynamic adaptability | Time decay function + entropy-based dynamic weighting | Fixed sliding time window averaging | Exponentially weighted moving average (EWMA)
Computational complexity | O(n log n), distributed computing | O(n²), full-connection voting mechanism | O(n), centralized computation
Applicable scenario | Highly dynamic vehicular networks | Static IoT device networking | Medium/low-speed mobile sensor networks
