Article

Computation Offloading Strategy Based on Improved Polar Lights Optimization Algorithm and Blockchain in Internet of Vehicles

1 College of Computer Science and Technology, Changchun University, Changchun 130012, China
2 College of Computer Science and Technology, Jilin University, Changchun 130025, China
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(13), 7341; https://doi.org/10.3390/app15137341
Submission received: 3 May 2025 / Revised: 8 June 2025 / Accepted: 13 June 2025 / Published: 30 June 2025

Abstract

The rapid growth of computationally intensive tasks in the Internet of Vehicles (IoV) poses a triple challenge to the efficiency, security, and stability of Mobile Edge Computing (MEC). To address the problems that traditional optimization algorithms tend to fall into local optima during task offloading and that edge computing nodes are exposed to the risk of data tampering, this paper proposes a secure offloading strategy that integrates an Improved Polar Lights Optimization algorithm (IPLO) with blockchain. First, the truncation applied when a particle crosses the boundary is replaced by a dynamic rebound through a bounce boundary handling mechanism, which enhances the global search capability of the algorithm; second, a blockchain framework based on the Delegated Byzantine Fault Tolerance (dBFT) consensus is designed to prevent data tampering and to enable trustworthy cross-node sharing during the offloading process. Simulation results show that the strategy significantly reduces the average task processing latency (by 64.4%), the average system energy consumption (by 71.1%), and the average system overhead (by 75.2%); at the same time, it effectively extends the vehicle's driving range, improves the real-time performance of emergency accident warning and dynamic path planning, and markedly reduces the cost of edge computing for small and medium-sized fleets, providing an efficient, secure, and stable collaborative computing solution for the IoV.

1. Introduction

With the rapid development of automotive technology, the Internet of Vehicles (IoV), as a core component of intelligent transportation systems, is gradually changing how people travel and how traffic is managed, aiming to build an efficient, intelligent, and safe transportation ecosystem through vehicle-to-vehicle (V2V), vehicle-to-infrastructure (V2I), and vehicle-to-pedestrian (V2P) interconnection [1].
In IoV environments, vehicles are equipped with sensors and computing devices that generate large amounts of data and require complex computational processing. However, the vehicles' own computational resources are often insufficient to meet these growing demands [2]. Mobile Edge Computing (MEC) [3], as an innovative technology, shows unique advantages in IoV scenarios. By deploying computing resources at base stations or local servers close to the vehicles, it enables vehicles to offload computation-intensive and latency-sensitive tasks to edge nodes for processing. This not only alleviates the dilemma of limited on-board computing resources but also significantly reduces data transmission latency and energy consumption. However, in the complex and dynamically changing IoV environment, computation offloading faces many challenges. On the one hand, the high-speed mobility of vehicles leads to unstable network connections between vehicles and edge servers, and frequent handovers and channel-quality fluctuations make offloading decisions extremely difficult. On the other hand, since offloading involves the transmission of sensitive vehicle information and critical data, preventing data leakage, tampering, and malicious attacks is also crucial.
To solve complex computation offloading decision problems, traditional heuristic algorithms have been widely used in the IoV, for example, the Genetic Algorithm (GA) [4], Ant Colony Optimization (ACO) [5], and its derivatives [6]. These algorithms search a complex solution space with specific heuristic rules for a better offloading strategy to optimize task scheduling and resource allocation. However, they often suffer from slow convergence and easily fall into local optima. In this regard, the literature [7] proposes an improved Artificial Bee Colony algorithm (MGABC), which mitigates slow convergence and premature convergence in complex optimization problems. The literature [8] proposes a novel hybrid differential evolution algorithm with adaptive mutation factors and crossover rates designed to balance the global and local search capabilities of the algorithm. While improved algorithms are being proposed, the development of MEC continues to accelerate. The literature [9] proposes an auxiliary architecture in which the MEC layer and the cloud layer collaborate, achieving complementary resource utilization, reducing service latency, and improving resource utilization through a task migration mechanism. The literature [10] proposes a Vehicular Fog Edge Computing (VFEC) network architecture, which coordinates the computing resources of the MEC layer with the idle resources of vehicles in the fog layer and improves system performance.
Blockchain technology, with its unique features of decentralization, tamper resistance, and traceability, provides a potential solution to the security and trust issues in the IoV computation offloading process [11]. By constructing a distributed ledger, blockchain can record all transaction information generated during computation offloading, realize trusted interaction between vehicles and servers, and enhance the security of the whole system. In this regard, the literature [12] proposes a trust management system to address the reliability of vehicle messages in the IoV, using a specific consensus mechanism and smart contracts to effectively improve the accuracy of vehicles' judgments of events. The literature [13] proposes a roadside-unit-assisted authentication and key agreement protocol that adopts a trusted-organization network model, uses blockchain to manage the vehicle information ledger, realizes cross-organization authentication, and moves the computational load to the roadside unit to improve efficiency. The literature [14] proposes a secure event-information sharing protocol that solves the single point of failure in traditional IoV systems by designing a consensus mechanism and a smart contract to ensure that information is not tampered with and is securely shared. The literature [15] proposes a vehicle data sharing scheme based on a consortium blockchain, designs an enhanced delegated proof-of-stake consensus algorithm to balance security and efficiency, and introduces a trust score model to improve the reliability of data.
Although computation offloading and blockchain have been integrated and some progress has been made in IoV scenarios, existing convergence schemes still have obvious deficiencies: on the one hand, there is still much room for improvement in reducing key performance indicators such as system latency, energy consumption, and overhead; on the other hand, existing solutions cannot guarantee system stability under dynamic changes in network topology, making it difficult to meet the demand for reliable services in high-speed mobile IoV environments. These limitations severely restrict the practical application of converged solutions in complex IoV scenarios. To solve these problems, we propose a dynamic adaptive edge computing offloading architecture for the IoV that deeply integrates a blockchain security mechanism with an intelligent optimization algorithm. First, a blockchain-based single-vehicle multi-task computation offloading model is constructed; second, a novel polar lights optimization algorithm is designed to avoid falling into local optima, achieving significant improvements in key indicators such as system delay, energy consumption, overhead, and stability; finally, a blockchain consensus optimization mechanism based on delegated Byzantine fault tolerance is used to ensure the security of data during transmission. In summary, the core work of this paper is as follows:
  • A blockchain-based multi-vehicle multi-task computation offloading model for mobile scenarios is established that can accurately capture the task computation demands, offloading logic, and blockchain consensus process generated by vehicles during dynamic movement;
  • A new polar lights optimization algorithm is adopted to optimize the computational offloading strategy, which accelerates the convergence speed while reducing the risk of falling into the local optimal solution and is significantly better than the traditional optimization methods;
  • An authorized Byzantine fault-tolerant consensus mechanism is adopted to dynamically select consensus nodes through stake-vote election, which ensures that transactions cannot be tampered with;
  • The impact of task load, number of vehicles, and data volume on system performance under different offloading strategies is demonstrated in simulation experiments, which verifies the stability of the proposed strategies in dynamic IoV environments and confirms the effectiveness of the strategies proposed in this paper.

2. System Model

2.1. Network Model

As shown in Figure 1, the network model in this paper consists of vehicles, roadside units (RSUs), edge servers, and a blockchain network. Each RSU can act as a relay node to realize the information interaction between vehicles and edge servers within its communication coverage. RSUs are deployed along the roadside at equal intervals, an MEC server is deployed at each RSU, and they are connected by wired fiber-optic links. The set of vehicles is defined as M = {1, 2, ..., m}, and the set of RSUs is defined as N = {1, 2, ..., n}. Within the blockchain system, all RSUs act as blockchain nodes, which fall into two categories: ordinary nodes, which only transmit and receive and do not participate in the consensus process, and consensus nodes elected by voting, which are responsible for generating new blocks and carrying out the consensus process. When the vehicle generates computation-intensive and delay-sensitive tasks, the system determines the offloading decision based on energy consumption, delay, and overhead. If a task is assigned to local processing, the vehicle relies on its own computational resources to perform the computation and, after the computation ends, transmits the results to the RSU for on-chain storage and sharing; if the task is assigned to the MEC server, the computation results are fed back to the RSU for on-chain storage and data sharing among RSUs. Therefore, regardless of whether the task is computed locally or at the edge server, the results cannot be tampered with, and secure, reliable data sharing among RSUs is realized.

2.2. Task Model

Tasks generated by the vehicles follow a Poisson distribution [16], and each vehicle has multiple computing tasks P = {D_m, C_m, T_m^{max}, T_{cons}^{max}}, where D_m is the data size of the task, C_m is the number of CPU cycles required by the task, T_m^{max} is the maximum delay tolerated for task completion, and T_{cons}^{max} is the maximum time tolerated for the blockchain consensus process.

2.3. Communications Model

The vehicle offloads the computation task to the MEC server via the RSU, and the server computes the result and then feeds it back to the vehicle via the RSU. Channel interference is ignored in this process [17]. According to the Shannon formula, the rate at which the vehicle transmits data to the RSU is defined as:
R_m = B_m \log_2 \left( 1 + \frac{L_m P_m}{B_0} \right)
where B_m denotes the subchannel bandwidth between the vehicle and the MEC server, L_m denotes the channel gain between the vehicle and the RSU, P_m denotes the vehicle's transmit power, and B_0 denotes the power of the Gaussian white noise.

2.4. Computational Model

(1)
Local Computing
When a vehicle performs a computational task locally, the delay and energy consumption of the computation depend on the vehicle's own computational capability, which is determined by the on-board unit (OBU) [18] installed in the vehicle. Therefore, the energy consumption and latency of a task computed locally are, respectively:
E_m^{LOC} = \delta (f_m^{LOC})^2 C_m
T_m^{LOC} = \frac{C_m}{f_m^{LOC}}
where f_m^{LOC} denotes the computing power of vehicle m, \delta (f_m^{LOC})^2 denotes the power consumed by the vehicle's computing core, and \delta denotes the effective switched capacitance coefficient of the vehicle's chip architecture [19].
(2)
Edge Server Computing
When a vehicle generates a computation-intensive and delay-sensitive task and offloads it to an edge server for processing, the delay and energy consumption of feeding the results back to the vehicle are negligible because the output data after computation at the edge server are much smaller than the input data [20]. Therefore, the energy consumption and latency of a task computed at the edge server are:
E_m^{MEC} = \frac{P_m D_m}{R_m}
T_m^{MEC} = T_m^{up} + T_m^{count}
where R_m denotes the wireless transmission rate between the task vehicle and the edge server, and T_m^{up} and T_m^{count} denote the transmission delay from the task vehicle to the edge server and the processing delay of the task at the edge server, respectively, defined as:
T_m^{up} = \frac{D_m}{R_m}
T_m^{count} = \frac{C_m}{f_m^{MEC}}
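The local and edge cost expressions above can be combined into a small cost calculator. The following Python sketch is illustrative only: the function and parameter names are assumptions, and all quantities are expected in consistent linear units (W, Hz, cycles, bits) rather than the dB values listed later in Table 1.

```python
import math

def uplink_rate(B_m, L_m, P_m, B_0):
    """Shannon rate R_m between the vehicle and the RSU (linear units assumed)."""
    return B_m * math.log2(1 + L_m * P_m / B_0)

def local_cost(C_m, f_loc, delta):
    """Energy E_m^LOC and delay T_m^LOC when the task is computed on the vehicle."""
    return delta * f_loc ** 2 * C_m, C_m / f_loc

def mec_cost(D_m, C_m, P_m, R_m, f_mec):
    """Energy E_m^MEC and delay T_m^MEC when the task is offloaded to the MEC server."""
    T_up = D_m / R_m        # uplink transmission delay
    T_count = C_m / f_mec   # processing delay at the edge server
    return P_m * D_m / R_m, T_up + T_count
```

For a given task, both branches can be evaluated and the cheaper one selected by the offloading decision variable introduced in Section 2.7.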

2.5. Vehicle Mobility Model

The vehicle travels at a constant speed v. To prevent task transmission from being interrupted by RSU handover, the task should be completed within the coverage of the current RSU as far as possible. Assuming the vehicle is within the coverage of an RSU, the expected dwell time [21] of the vehicle in this range is:
T_m^{stop} = \frac{R + r}{v}
where R is the radius of communication coverage of RSUs and r is the horizontal distance between the vehicle and the RSU, denoted as:
r = \begin{cases} R - v t_m, & t_m < \frac{R}{v} \\ v t_m - R, & t_m \geq \frac{R}{v} \end{cases}
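A direct transcription of the dwell-time model is sketched below, assuming the sign convention reconstructed above (the vehicle first approaches and then passes the RSU).

```python
def dwell_time(R, r, v):
    """Expected remaining time T_m^stop of the vehicle inside the current RSU coverage."""
    return (R + r) / v

def distance_to_rsu(v, t_m, R):
    """Horizontal distance r between the vehicle and the RSU at travel time t_m
    (sign convention assumed: approaching before t_m = R/v, receding afterwards)."""
    return R - v * t_m if t_m < R / v else v * t_m - R
```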

2.6. Blockchain Model

As shown in Figure 2, the Delegated Byzantine Fault Tolerance algorithm (dBFT) [22] is used as the consensus mechanism in this paper; it is mainly responsible for the distributed verification of the task scheduling scheme generated by the optimization algorithm, ensuring the credibility and consistency of scheduling decisions. Consensus nodes are selected by stake-weighted voting to determine the nodes participating in the next consensus round, which allows consensus to be reached quickly while tolerating a certain proportion of malicious nodes. In the consensus process, the number of consensus nodes is assumed to satisfy K ≥ 3f + 1 (f is the maximum number of malicious nodes that can be tolerated); the master node (moderator) is responsible for broadcasting the new block, and the deputy nodes (participants) vote on the new block. When the number of malicious nodes is below f, the view-switching mechanism can re-elect the master node even in the event of a network outage, ensuring that consensus is eventually reached and no erroneous blocks are generated. However, if the number of malicious nodes exceeds f, the system triggers a dynamic member-group adjustment mechanism that restores system liveness and security by adding or removing nodes [23]. Define S_k as the size of a block and S_p as the average size of a transaction. The consensus process is divided into four phases: preparation request, preparation reply, submission, and broadcast, and each phase incurs energy consumption and latency [24].
(1)
Preparation request:
The moderator pulls transactions from the memory pool to build a new block and then sends a preparation request message to the participants, starting a new consensus round. In this phase, the moderator generates one signature and K − 1 message authentication codes (MACs). It is assumed that generating or verifying a signature takes α CPU cycles and generating or verifying a MAC takes β CPU cycles. Therefore, the CPU cycles required by the moderator are denoted as:
C_n^{pr} = (K - 1)\beta + \alpha
The energy consumption and latency generated in the preparation request phase are, respectively:
E_n^{pr} = P_{cons} T_n^{pr}
T_n^{pr} = \frac{C_n^{pr}}{f_{cons}}
where f_{cons} denotes the average computational power of a node and P_{cons} denotes the average transmission power of a node.
(2)
Preparation reply:
The participants receive the preparation request message and verify it. If the verification passes, they send a preparation reply message to all consensus nodes. In this phase, a participant needs to verify the signature and MACs in the new block and then generate one signature and K − 1 MACs for the reply. Therefore, the CPU cycles required by a participant are denoted as:
C_n^{ps} = \alpha + \beta + \frac{S_k (\alpha + \beta)}{S_p} + \alpha + (K - 1)\beta
The energy consumption and latency generated in the preparation reply phase are, respectively:
E_n^{ps} = \frac{1}{K} P_{cons} T_n^{ps}
T_n^{ps} = \frac{C_n^{ps}}{f_{cons}}
(3)
Submission:
A consensus node begins verifying the legitimacy of each message once it has received preparation replies from no fewer than K − f other nodes, and it broadcasts a submission message after the verification passes. In this phase, the consensus node has to verify K − f signatures and MACs and generate one signature and K − 1 MACs as the submission message. Therefore, the CPU cycles required by the consensus node are denoted as:
C_n^{com} = (K - f)(\alpha + \beta) + \alpha + (K - 1)\beta
The energy consumption and latency generated in the submission phase are, respectively:
E_n^{com} = \frac{1}{K} P_{cons} T_n^{com}
T_n^{com} = \frac{C_n^{com}}{f_{cons}}
(4)
Broadcast:
A consensus node broadcasts to all nodes once it has received no fewer than K − f submission messages and has verified them. In this phase, the consensus node has to verify K − f signatures and MACs. Therefore, the CPU cycles required by the consensus node are denoted as:
C_n^{bro} = (K - f)(\alpha + \beta)
The energy consumption and latency generated in the broadcast phase are, respectively:
E_n^{bro} = \frac{1}{K} P_{cons} T_n^{bro}
T_n^{bro} = \frac{C_n^{bro}}{f_{cons}}
To summarize, the energy consumption and latency generated in the consensus process are:
E_n^{CONS} = E_n^{pr} + E_n^{ps} + E_n^{com} + E_n^{bro}
T_n^{CONS} = T_n^{pr} + T_n^{ps} + T_n^{com} + T_n^{bro}
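The four phases can be aggregated into a single consensus-cost routine. The sketch below follows the reconstructed equations of this section; the 1/K factor applied to the reply, submission, and broadcast phases and all parameter names are assumptions taken from the text, not a verified reference implementation of dBFT.

```python
def dbft_cost(K, f, alpha, beta, S_k, S_p, f_cons, P_cons):
    """Total energy E_n^CONS and latency T_n^CONS of one dBFT consensus round."""
    C_pr = alpha + (K - 1) * beta                                  # preparation request
    C_ps = (alpha + beta + S_k * (alpha + beta) / S_p
            + alpha + (K - 1) * beta)                              # preparation reply
    C_com = (K - f) * (alpha + beta) + alpha + (K - 1) * beta      # submission
    C_bro = (K - f) * (alpha + beta)                               # broadcast

    T = [C / f_cons for C in (C_pr, C_ps, C_com, C_bro)]
    # 1/K scaling of the last three phases follows the reconstructed equations (assumption)
    E = [P_cons * T[0]] + [P_cons * t / K for t in T[1:]]
    return sum(E), sum(T)
```

With the Table 1 values (K = 4, α = 1 and β = 10 Megacycles, S_p = 200 B, f_cons = 3 GHz, P_cons = 1000 mW) and, for example, f = 1, this routine yields the per-round consensus overhead that is added to every offloading decision.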

2.7. Joint Optimization Problems

Whether a task is processed locally or offloaded to the MEC server is determined by the offloading decision variable λ. When λ = 0, the task is computed locally; when λ = 1, the task is offloaded to the MEC server. Thus, for the offloading task of vehicle m, the total energy consumption incurred by the system is E(m) = (1 − λ) E_m^{LOC} + λ E_m^{MEC} + E_n^{CONS}, and the total delay is T(m) = (1 − λ) T_m^{LOC} + λ T_m^{MEC} + T_n^{CONS}.
In this paper, minimizing the weighted sum of the energy consumption and delay incurred by joint task offloading and blockchain consensus is taken as the optimization objective, with the aim of minimizing the system overhead. The problem model is expressed as:
O(m) = \eta E(m) + \gamma T(m)
s.t.  C1: λ ∈ {0, 1}
      C2: T_m^{up} ≤ T_m^{stop}
      C3: S_k > S_p
      C4: T_m^{LOC} + T_m^{MEC} + T_n^{CONS} ≤ T_m^{max}
      C5: T_n^{CONS} ≤ T_{cons}^{max}
where η and γ denote the weighting coefficients of energy consumption and delay, with 0 ≤ η ≤ 1, 0 ≤ γ ≤ 1, and η + γ = 1. In practical applications, the choice of weighting factors needs to be determined according to the specific scenario and requirements [25]. C1 indicates that the vehicle can choose only one offloading decision: local computing or MEC server offloading; C2 ensures that the vehicle completes the offloading task before driving out of the RSU's coverage area; C3 indicates that the block size must be larger than the total transaction size, ensuring that only one consensus process is needed; C4 means that the total delay in processing the offloading task must not exceed the maximum delay tolerated by the system; and C5 means that the blockchain consensus process must be completed within the specified time.
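For use as a fitness function in the optimizer of Section 3, the objective and constraints can be wrapped as follows. This is a minimal sketch: the penalty value for infeasible candidates and the argument packing are assumptions of this sketch, and the constraint checks follow the reconstructed C1-C5 above.

```python
def objective(lam, E_loc, T_loc, E_mec, T_mec, E_cons, T_cons, eta, gamma):
    """Weighted system overhead O(m) for one task under offloading decision lam."""
    E = (1 - lam) * E_loc + lam * E_mec + E_cons
    T = (1 - lam) * T_loc + lam * T_mec + T_cons
    return eta * E + gamma * T

def fitness(lam, costs, limits, eta, gamma, penalty=1e9):
    """Objective plus a large penalty (assumed value) when any of C1-C5 is violated."""
    E_loc, T_loc, E_mec, T_mec, E_cons, T_cons, T_up = costs
    T_stop, S_k, S_p, T_max, T_cons_max = limits
    ok = (lam in (0, 1)                                   # C1: binary decision
          and T_up <= T_stop                              # C2: finish inside RSU coverage
          and S_k > S_p                                   # C3: block holds the transactions
          and T_loc + T_mec + T_cons <= T_max             # C4: delay within tolerance
          and T_cons <= T_cons_max)                       # C5: consensus finishes in time
    base = objective(lam, E_loc, T_loc, E_mec, T_mec, E_cons, T_cons, eta, gamma)
    return base if ok else base + penalty
```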

3. Computation Offloading Strategy Based on Improved Polar Lights Optimization Algorithm

3.1. Improved Polar Lights Optimization

The Polar Lights Optimization algorithm (PLO) is a meta-heuristic algorithm that derives a unique optimization strategy by simulating the trajectories and behaviors of energetic particles during the formation of the aurora [26]. However, the original algorithm may lose diversity by directly truncating solutions during boundary handling, which weakens its global search ability. For this reason, this paper proposes an Improved Polar Lights Optimization algorithm (IPLO) that introduces a bounce boundary handling mechanism [27] to enhance the algorithm's search ability in the boundary region of the solution space: when a particle's position crosses the boundary of the solution space, it is no longer fixed directly at the boundary value; instead, its direction of motion is adjusted through a rebound. The three main forms of motion are gyration motion, aurora oval walk, and particle collision. As shown in Figure 3, the problem space is explored and the optimal solution is searched for through these three kinds of motion.

3.2. Algorithm Design

(1)
Initializing the Population
In IPLO, the population is initialized with a random initialization method and represented as a matrix with N rows and D columns. By randomly generating a set of initial solutions in the given solution space according to a uniform distribution, the algorithm can cover different potentially high-quality regions at the beginning and avoid prematurely falling into local patterns. The population initialization is denoted as:
X_{N,D} = rand \times (ub - lb) + lb = \begin{bmatrix} X_{1,1} & X_{1,2} & \cdots & X_{1,D} \\ X_{2,1} & X_{2,2} & \cdots & X_{2,D} \\ \vdots & \vdots & \ddots & \vdots \\ X_{N,1} & X_{N,2} & \cdots & X_{N,D} \end{bmatrix}
where ub and lb denote the upper and lower boundaries of the solution space, respectively, and rand denotes a random value in the range [0, 1].
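A NumPy equivalent of this initialization is sketched below; the function name is an assumption, and the bounds may be scalars or per-dimension arrays.

```python
import numpy as np

def init_population(N, D, lb, ub, rng=None):
    """Uniformly random initial population X of shape (N, D) inside [lb, ub]."""
    rng = np.random.default_rng() if rng is None else rng
    return lb + rng.random((N, D)) * (ub - lb)
```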
(2)
Updating Population Locations
First, two adaptive weights w_1 and w_2 are calculated to coordinate the global exploration and local exploitation of the algorithm. These two weights are dynamically adjusted as the evaluation process proceeds, denoted as:
w_1 = \frac{2}{e^{-(2t/M_t)^4} + 1} - 1
w_2 = e^{-(2t/M_t)^3}
where t is the current number of fitness evaluations (initialized to 0) and M_t denotes the maximum number of evaluations. Then, the gyration motion is carried out, denoted as:
LS(t) = C e^{-\frac{qB}{\mu m} t}
where the constant C provides the basic physical reference for the motion, q and m represent the charge and mass of the charged particle and affect the rotational momentum and direction, respectively, B is an analog of the strength of the Earth's magnetic field and introduces an external guiding force for the particle motion, and the damping factor μ mimics the energy loss in a real environment, effectively controlling the amplitude of the rotation and preventing the rotation from converging too quickly. Subsequently, the aurora oval walk phase is entered, denoted as:
GS = levy(d) \times (X_{mean}(j) - X(i,j)) + \frac{lb + a_1 \times (ub - lb)}{2}
where levy(d) is the global search step that guarantees the search efficiency of the algorithm over a large range, X_{mean}(j) is the centroid position of the energetic particle population, denoted as X_{mean}(j) = \frac{1}{N}\sum_{i=1}^{N} X(i,j), and a_1 and a_2 are interference terms produced by the environment and other factors acting on the particles, taking values in the interval [0, 1]. Finally, based on the combined results of the gyration motion and the aurora oval walk, the position changes brought about by the two motions are fused to calculate the final population update position, denoted as:
X_{new}(i,j) = X(i,j) + a_2 \times (w_1 \times LS(t) + w_2 \times GS)
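The adaptive weights and the two motions can be written compactly as in the sketch below. The gyration constants and the use of a heavy-tailed Cauchy draw in place of a true Levy-flight step are assumptions of this sketch, not values from the paper.

```python
import numpy as np

def adaptive_weights(t, Mt):
    """w1 grows toward exploitation and w2 decays from exploration as t/Mt increases."""
    w1 = 2.0 / (np.exp(-(2.0 * t / Mt) ** 4) + 1.0) - 1.0
    w2 = np.exp(-(2.0 * t / Mt) ** 3)
    return w1, w2

def gyration(t, C=1.0, q=1.0, B=1.0, m=100.0, mu=100.0):
    """Damped gyration term LS(t); the physical constants are placeholders."""
    return C * np.exp(-q * B / (mu * m) * t)

def aurora_oval_walk(X, i, j, lb, ub, rng):
    """Walk of particle i in dimension j toward the population centroid."""
    a1 = rng.random()
    step = rng.standard_cauchy()          # heavy-tailed stand-in for levy(d)
    return step * (X[:, j].mean() - X[i, j]) + (lb + a1 * (ub - lb)) / 2.0

def update_position(X, i, j, w1, w2, LS, GS, rng):
    """Fused position update X_new(i, j)."""
    return X[i, j] + rng.random() * (w1 * LS + w2 * GS)
```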
(3)
Population Variation
To avoid falling into local optima, the high-energy particles perform a mutation operation at appropriate times, injecting an element of randomness into the mutation process so that the direction of the mutation is unpredictable, denoted as:
X_{new}(i,j) = X(i,j) + \sin(a_3 \times \pi) \times (X(i,j) - X(n,j))
where a_3, a_4, and a_5 are random values in [0, 1], Z = \frac{t}{M_t} is the mutation probability threshold, with the mutation performed when a_4 < Z and a_5 < 0.05, and X(n,j) is an arbitrary particle in the swarm whose random selection further enriches the diversity of the population.
(4)
Bounce Boundary Handling Mechanism
The traditional PLO algorithm usually adopts truncation when a particle's position exceeds the solution space, directly setting the out-of-range value to the boundary value. Although simple, this may reduce population diversity and lead to local optima. For this reason, this paper uses bounce boundary handling: when a particle's position crosses the boundary, the rebound of a physical collision is simulated to adjust the position, expressed as:
X_{new}(i,j) = \begin{cases} 2 X_{min}(j) - X_{new}(i,j), & X_{new}(i,j) < X_{min}(j) \\ 2 X_{max}(j) - X_{new}(i,j), & X_{new}(i,j) > X_{max}(j) \\ X_{new}(i,j), & otherwise \end{cases}
where X_{min}(j) denotes the lower bound of the solution space in the j-th dimension and X_{max}(j) denotes its upper bound.
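A vectorized version of the bounce rule is sketched below. The final clip is a safeguard added in this sketch (not part of the equation) for reflections that would overshoot the opposite bound.

```python
import numpy as np

def bounce_boundary(X_new, lb, ub):
    """Reflect out-of-range components back into [lb, ub] instead of truncating them."""
    X = np.array(X_new, dtype=float)
    below, above = X < lb, X > ub
    X = np.where(below, 2 * lb - X, X)   # mirror around the lower bound
    X = np.where(above, 2 * ub - X, X)   # mirror around the upper bound
    return np.clip(X, lb, ub)            # safeguard against double overshoot (assumption)
```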

3.3. The Overall Flow of the Algorithm

In summary, the overall flow of the IPLO algorithm is shown in Algorithm 1.
Algorithm 1. Computation offloading strategy based on IPLO algorithm
Input: X
Initialize the high-energy particle population X[i] (i = 1, 2, ..., N);
Initialize the evaluation counter FEs;
Initialize the maximum number of evaluations MaxFEs;
Initialize N, X_new, BestS, BestP, BestTotal;
Calculate the fitness values f(X);
While FEs <= MaxFEs do
  Use Equation (27) to calculate w_1;
  Use Equation (28) to calculate w_2;
  For i = 0 to N do
    Use Equation (29) to calculate LS;
    Use Equation (30) to calculate GS;
    Use Equation (31) to update X_new;
    If a_4 < Z and a_5 < 0.05 then
      Use Equation (32) to update X_new;
      Use Equation (33) to update X_new;
    End if
    Calculate the fitness value f(X_new);
    FEs = FEs + 1;
  End for
  If f(X_new) < f(X) then
    BestS = f(X_new);
  End if
  Use Equations (22) and (23) to calculate BestP;
  Calculate BestTotal as the sum of BestS and BestP;
End while
Output: BestTotal.
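Putting the pieces together, the sketch below follows the control flow of Algorithm 1 using the helper functions sketched in Section 3.2; the population size, evaluation budget, and rounding of the continuous decision vector to {0, 1} are assumptions of this sketch, and `fitness` is the penalized objective sketched in Section 2.7.

```python
import numpy as np

def iplo_offload(fitness, D, N=30, lb=0.0, ub=1.0, max_fes=3000, seed=0):
    """Minimal IPLO loop: returns a binary offloading vector and its overhead."""
    rng = np.random.default_rng(seed)
    X = init_population(N, D, lb, ub, rng)
    fit = np.array([fitness(x) for x in X])
    fes = N
    best = fit.argmin()
    best_x, best_f = X[best].copy(), fit[best]

    while fes <= max_fes:
        w1, w2 = adaptive_weights(fes, max_fes)
        LS = gyration(fes)
        for i in range(N):
            X_new = X[i].copy()
            for j in range(D):
                GS = aurora_oval_walk(X, i, j, lb, ub, rng)
                X_new[j] = update_position(X, i, j, w1, w2, LS, GS, rng)
                # particle-collision mutation, gated by a4 < Z and a5 < 0.05
                if rng.random() < fes / max_fes and rng.random() < 0.05:
                    n = rng.integers(N)
                    X_new[j] = X[i, j] + np.sin(rng.random() * np.pi) * (X[i, j] - X[n, j])
            X_new = bounce_boundary(X_new, lb, ub)   # improved boundary handling
            f_new = fitness(X_new)
            fes += 1
            if f_new < fit[i]:
                X[i], fit[i] = X_new, f_new
                if f_new < best_f:
                    best_x, best_f = X_new.copy(), f_new
    return np.rint(best_x).astype(int), best_f
```

Each entry of the returned vector can be read as the offloading decision λ of one task, with the consensus cost of Section 2.6 folded into the fitness evaluation.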

4. Simulation Experiments and Data Analysis

4.1. Simulation Parameters

To verify the efficiency and stability of the proposed optimization scheme, this paper simplifies the network model and limits the scope of the study to a small-scale environment, with the following parameter settings. The scenario is a vehicle traveling at a constant speed v on a 2000 m road, and the number of tasks generated by the vehicle is randomly varied in the range [20, 100]. An RSU is installed every 400 m along the road. Each RSU is equipped with an Intel Atom x7-E3950 (4-core) CPU, and each RSU is accompanied by an MEC server equipped with an AMD EPYC 9554P (64-core) processor. Five RSUs are set up as blockchain nodes, of which four participate in consensus. The simulation experiment is repeated for 20 independent trials to eliminate the effect of randomness. Other parameter settings of the simulation environment are shown in Table 1.

4.2. Analysis of Experimental Results

To verify the efficiency of the proposed algorithm, it is compared with traditional algorithms in terms of three metrics (average energy consumption, average delay, and average overhead) from three perspectives: the number of CPU cycles required by the task, the data volume, and the number of tasks.
Figure 4, Figure 5 and Figure 6 show the relationship between the number of CPU cycles required by the task and the average energy consumption, average latency, and average overhead of the system for five different schemes processing the same task. All three metrics increase with the number of CPU cycles required by the task for all schemes. Because the CPU resources of the local device are limited and the device must run at full capacity for a long time, the curves of the locally executed scheme rise more steeply. In contrast, the curves of the optimization scheme proposed in this paper grow relatively gently and, at the same time, yield lower energy consumption, latency, and overhead than the other schemes. The reason is that tasks with high CPU-cycle demands are preferentially assigned to idle and efficient nodes, achieving a globally optimal balance. When the number of CPU cycles required by the task reaches 1900 Megacycles, the average energy consumption obtained by the proposed optimization scheme is 81.1%, 48.2%, 24.8%, and 8.6% lower than that of the local execution scheme, the average offloading scheme, the FOX scheme [38], and the PLO scheme, respectively; the average latency is reduced by 83.9%, 53.7%, 28.2%, and 9.7%, respectively; and the average overhead is reduced by 86.1%, 59.7%, 39.7%, and 24.6%, respectively. This result shows that the proposed scheme maintains low energy consumption, latency, and overhead as the number of CPU cycles required by a task increases.
The effect of task data volume on average energy consumption, average delay, and average overhead under different scenarios is shown in Figure 7, Figure 8 and Figure 9. The average energy consumption, average delay, and average overhead within the system all show a linear growth trend with the increase in data volume. The relatively flat growth of the local execution scheme is due to the fact that the local scheme avoids the uncertainty associated with network communication, making the performance metrics dependent on the fixed parameters of the local device. Moreover, it can be observed that the proposed scheme in this paper is optimal in terms of reducing energy consumption, delay, and overhead. When the task data size reaches 2.0 MB, the average energy consumption obtained by the optimization scheme proposed in this paper is reduced by 60.9%, 23.9%, 10.3%, and 7.4% over the local execution scheme, the average offloading scheme, the FOX scheme, and the PLO scheme, respectively, while the average latency is reduced by 73.4%, 39.6%, 18.2%, and 13.3%, respectively, and the average overhead is reduced by 63.9%, 26.5%, 11.8%, and 7.1%. Therefore, the results show that the scheme in this paper can dynamically balance the utilization efficiency of computing resources according to the amount of task data and improve the overall performance.
Figure 10, Figure 11 and Figure 12 demonstrate the relationship between the number of tasks and the average energy consumption, average latency, and average overhead within the system. As the number of tasks increases, the local execution scheme has the least fluctuating metrics due to the fact that the local scheme processes the tasks in a fixed order and does not need to rely on optimization algorithms to assign tasks. However, its performance is the worst; in contrast, the FOX scheme, PLO scheme, and IPLO scheme all show better performance, among which the optimization scheme proposed in this paper performs the most outstandingly due to the fact that its optimization algorithm can effectively adapt to high-load scenarios and dynamically adjusts the task allocation strategy to reduce the system’s energy consumption, latency, and overhead. When the number of tasks reaches 100, the average energy consumption obtained by the optimization scheme proposed in this paper is reduced by 51.2%, 47.5%, 17.7%, and 13.8% compared with the local execution scheme, average offloading scheme, FOX scheme, and PLO scheme, respectively; the average latency is reduced by 57.7%, 54.7%, 17.2%, and 9.4%, respectively; and the average overhead is reduced by 75.5%, 50.7%, 15%, and 9.1%, respectively.
The above experiments fully confirm the efficiency of the IPLO scheme. Compared with the other schemes, the lower energy consumption can significantly extend the vehicle's driving range and, especially in long-distance transportation or high-load tasks, avoid overloading the battery; the lower latency improves the response speed of emergency tasks, which is typically relevant to traffic accident warning and dynamic path planning; and the lower overhead reduces the cost of computing resources for vehicles, making high-performance edge computing services affordable for small and medium-sized fleets. At the same time, the dynamic decision thresholds of IPLO are determined from the experimental data (1000 Megacycles of required CPU cycles, 0.5 MB of data, and 20 tasks). Tasks below these thresholds are mostly simple tasks that can be handled locally by the vehicle (e.g., tire pressure monitoring, mileage counting), as they have low computational requirements and loose real-time requirements, whereas tasks above the thresholds usually involve complex scenarios (e.g., multi-vehicle cooperative obstacle avoidance, real-time rendering of high-definition maps) that must be dynamically allocated to edge nodes through the IPLO algorithm to achieve global resource optimization. This design ensures that the algorithm strikes a precise balance between resource consumption and performance requirements. The scheme is also highly applicable to variable networks, quickly adapting to changes in network state through the dynamic task allocation mechanism. In scenarios such as signal fluctuations and bandwidth limitations in 5G networks, tasks can be assigned to more appropriate nodes in a timely manner, avoiding task backlogs and performance degradation caused by network instability.
To further compare the stability of the schemes, the FOX scheme, the PLO scheme, and the IPLO scheme are tested under different constraints (fixed data volume and number of tasks, fixed CPU cycles and number of tasks, and fixed CPU cycles and data volume).
Table 2 demonstrates the stability performance of the algorithm overhead under different conditions. From the results, it can be seen that the IPLO scheme has the smallest standard deviation in the test scenarios, indicating that the algorithm has high stability compared to the FOX scheme and PLO scheme, which have larger standard deviations. The reason is that the FOX scheme is prone to ineffective searching at the edges of the solution space due to the lack of a specialized design for the boundary problem; although the PLO scheme possesses better search capability, the boundary truncation processing leads to a rapid decline in population diversity at the later stage of the iteration. Therefore, the improved boundary processing strategy significantly improves the stability of the algorithm in complex optimization scenarios.
In order to analyze the sensitivity of the IPLO scheme, we systematically investigate the impact of different thresholds on the performance of the algorithm by controlling the probability of particle collision in the mutation operation. The probability of particle collision directly determines how often the algorithm introduces random perturbations during the search process, and by adjusting this threshold, we can explore the adaptive ability of the algorithm under different environmental conditions. Same as the constraints for testing the stability of the algorithm, we test the algorithm in terms of fixed data volume and number of tasks, fixed CPU cycles and number of tasks, and fixed CPU cycles and data volume, respectively.
Table 3 demonstrates the sensitivity of the average overhead of the IPLO algorithm under different collision probabilities. The experimental results show that the average overhead of the algorithm does not differ significantly when the collision probability threshold varies in the range of 0.01 to 0.1. The reason is that IPLO always keeps the globally optimal particle in each generation during the iteration process, and the updates of the other particles are strictly guided by the optimal particle. Even if the collision probability threshold is high, newly generated random perturbation solutions that are inferior to the current optimal solution are directly discarded. This low sensitivity ensures that the system can still maintain stable service quality in a dynamic environment with high-speed vehicle movement and frequently changing network topology, and the algorithm can quickly restore stable scheduling based on the historical optimal solution even if a local communication interruption occurs.

5. Conclusions

In this paper, we propose a computation offloading strategy for the IoV that integrates the IPLO algorithm and blockchain technology; by establishing a single-vehicle multi-task computation offloading model and adopting a delegated Byzantine fault-tolerant consensus mechanism, it realizes efficient allocation and secure control of computing resources in the IoV environment. The IPLO algorithm significantly reduces system latency, energy consumption, and total overhead by introducing the bounce boundary handling mechanism while demonstrating excellent stability in high-speed mobile scenarios, and the dBFT consensus mechanism based on stake-vote election effectively prevents the risk of data tampering by dynamically selecting consensus nodes, providing a reliable trust foundation for the IoV. Through the co-design of algorithm optimization and security mechanisms, the scheme provides an efficient, stable, and secure edge computing solution for the IoV.
The designed scheme demonstrates great practical value in the IoV. In terms of performance, it significantly enhances the real-time performance and economy of data processing in the IoV by reducing system latency, energy consumption, and total overhead, providing a smoother operating environment for IoV applications; at the same time, lower latency can reduce the risk of traffic accidents and improve road safety. In terms of security, the introduction of blockchain technology ensures the integrity of data transmission and effectively guards against malicious attacks and data leakage. In terms of system stability, the optimized algorithm design allows the system to maintain reliable operation under high-speed vehicle movement and frequently changing network topology. Together, these advantages provide a solid technical foundation for the IoV. In the future, this research will expand to frontier areas such as quantum computing optimization and intelligent consensus mechanisms, striving to break through existing technical bottlenecks and help IoV technology advance to a higher level.

Author Contributions

Conceptualization, Y.L. and Y.D.; methodology, Y.L. and B.Y.; software, Y.L. and B.Y.; validation, B.W. and Q.S.; investigation, B.W. and Q.S.; writing—original draft preparation, B.Y. and B.W.; writing—review and editing, Y.L. and B.Y.; supervision, Y.L. and Y.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by a fund from the Natural Science Foundation Program of Jilin Province: 20250102232JC; Key Scientific Research Project of the Jilin Provincial Department of Education: JJKH20251099KJ.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Maanak, G.; James, B.; Farhan, P.; Ravi, S. Secure V2V and V2I Communication in Intelligent Transportation Using Cloudlets. IEEE Trans. Serv. Comput. 2020, 15, 1912–1925. [Google Scholar]
  2. Yueyue, D.; Du, X.; Sabita, M.; Yan, Z. Joint Load Balancing and Offloading in Vehicular Edge Computing and Networks. IEEE Internet Things J. 2019, 6, 4377–4387. [Google Scholar]
  3. Chen, C.; Yini, Z.; Huan, L.; Yangyang, L.; Shaohua, W. A Multihop Task Offloading Decision Model in MEC-Enabled Internet of Vehicles. IEEE Internet Things J. 2023, 10, 3215–3230. [Google Scholar] [CrossRef]
  4. Katoch, S.; Chauhan, S.; Kumar, V. A Review on Genetic Algorithm: Past, Present, and Future. Multimed. Tools Appl. 2020, 80, 8091–8126. [Google Scholar] [CrossRef]
  5. Jonas, S.; Tatiana, K.; Ian, D.; Mani, J. Dynamic Impact for Ant Colony Optimization Algorithm. Swarm Evol. Comput. 2021, 69, 100993. [Google Scholar]
  6. Shugang, L.; Yanfang, W.; Xin, L.; He, Z.; Zhaoxu, Y. A New Fast Ant Colony Optimization Algorithm: The Saltatory Evolution Ant Colony Optimization Algorithm. Mathematics 2022, 10, 925. [Google Scholar]
  7. Xinyu, Z.; Jiaxin, L.; Junhong, H.; Maosheng, Z.; Mingwen, W. Enhancing Artificial Bee Colony Algorithm with Multi-Elite Guidance. Inf. Sci. 2021, 543, 242–258. [Google Scholar]
  8. Du, X.; Zhou, Y. A Novel Hybrid Differential Evolutionary Algorithm for Solving Multi-objective Distributed Permutation Flow-Shop Scheduling Problem. Int. J. Comput. Intell. Syst. 2025, 18, 67. [Google Scholar] [CrossRef]
  9. Penglin, D.; Kaiwen, H.; Xiao, W.; Huanlai, X.; Fei, T.; Zhaofei, Y. A Probabilistic Approach for Cooperative Computation Offloading in MEC-Assisted Vehicular Networks. IEEE Trans. Intell. Transp. Syst. 2020, 23, 899–911. [Google Scholar]
  10. Yuwei, L.; Bo, Y.; Hao, W.; Qiaoni, H.; Cailian, C.; Xinping, G. Joint Offloading Decision and Resource Allocation for Vehicular Fog-Edge Computing Networks: A Contract-Stackelberg Approach. IEEE Internet Things J. 2022, 9, 15969–15982. [Google Scholar]
  11. Ramesh, R. Blockchain Technology: An Overview. IEEE Potentials 2022, 41, 6–12. [Google Scholar]
  12. Haibin, Z.; Jiajia, L.; Huanlei, Z.; Peng, W.; Nei, K. Blockchain-Based Trust Management for Internet of Vehicles. IEEE Trans. Emerg. Top. Comput. 2020, 9, 1397–1409. [Google Scholar]
  13. Zisang, X.; Wei, L.; Kuanching, L.; Jianbo, X.; Hai, J. A Blockchain-Based Roadside Unit-assisted Authentication and Key Agreement Protocol for Internet of Vehicles. J. Parallel Distrib. Comput. 2021, 149, 29–39. [Google Scholar]
  14. Sanjeev, K.D.; Ruhul, A.; Satyanarayana, V.; Rashmi, C. Blockchain-based Secured Event-Information Sharing Protocol in Internet of Vehicles for Smart Cities. Comput. Electr. Eng. 2020, 86, 106719. [Google Scholar]
  15. Cui, J.; Ouyang, F.; Ying, Z.; Wei, L.; Zhong, H. Secure and Efficient Data Sharing among Vehicles Based on Consortium Blockchain. IEEE Trans. Intell. Transp. Systems 2021, 23, 8857–8867. [Google Scholar]
  16. Qinglai, W.; Liyuan, H.; Tielin, Z. Spiking Adaptive Dynamic Programming Based on Poisson Process for Discrete-Time Nonlinear Systems. IEEE Trans. Neural Netw. Learn. Syst. 2021, 33, 1846–1856. [Google Scholar]
  17. Pengfei, H.; Wai, C. Software-Defined Edge Computing (SDEC): Principle, Open IoT System Architecture, Applications, and Challenges. UIC 2019, 7, 5934–5945. [Google Scholar]
  18. Zhang, K.; Mao, Y.; Leng, S.; Maharjan, S.; Zhang, Y. Optimal delay constrained offloading for vehicular edge computing networks. In Proceedings of the 2017 IEEE International Conference on Communications (ICC), Paris, France, 21–25 May 2017; pp. 1–6. [Google Scholar]
  19. Quyuan, L.; Changle, L.; Tom, H.L.; Weisong, S.; Weigang, W. Self-Learning Based Computation Offloading for Internet of Vehicles: Model and Algorithm. IEEE Trans. Wirel. Commun. 2021, 20, 5913–5925. [Google Scholar]
  20. Zhaolong, N.; Peiran, D.; Xiaojie, W.; Liang, G.; Joel, R.; Xiangjie, K.; Jun, H.; Ricky, Y.K.K. Deep Reinforcement Learning for Intelligent Internet of Vehicles: An Energy-Efficient Computational Offloading Scheme. IEEE Trans. Cogn. Commun. Netw. 2019, 5, 1060–1072. [Google Scholar]
  21. Zhen, D.; Fan, L.; Weijie, Y.; Christos, M.; Zenghui, Z.; Shuqiang, X.; Giuseppe, C. Integrated Sensing and Communications for V2I Networks: Dynamic Predictive Beamforming for Extended Vehicle Targets. IEEE Trans. Wirel. Commun. 2023, 22, 3612–3627. [Google Scholar]
  22. Tyler, C.; Vincent, G.; Mikel, L.; Michel, R. DBFT: Efficient Leaderless Byzantine Consensus and Its Application to Blockchains. In Proceedings of the 2018 IEEE 17th International Symposium on Network Computing and Applications (NCA), Cambridge, MA, USA, 1–3 November 2018. [Google Scholar]
  23. Qin, W.; Jiangshan, Y.; Zhiniang, P.; Shiping, C.; Yong, D.; Yang, X. Formal Security Analysis on Dbft Protocol of NEO. Comput. Res. Repos. 2022, 2, 20–31. [Google Scholar]
  24. Dodo, K.; Low, T.J.; Manzoor, A.H. Systematic Literature Review of Challenges in Blockchain Scalability. Appl. Sci. 2021, 11, 9372. [Google Scholar]
  25. Wang, J.; Lv, T.; Huang, P. Mobility-aware partial computation offloading in vehicular networks: A deep reinforcement learning based scheme. China Commun. 2020, 17, 31–49. [Google Scholar] [CrossRef]
  26. Chong, Y.; Dong, Z.; Ali, A.H.; Lei, L.; Yi, C.; Huiling, C. Polar Lights Optimizer: Algorithm and Applications in Image Segmentation and Feature Selection. Neurocomputing 2024, 607, 128427. [Google Scholar]
  27. Shi, T.; Qing, H.; Baiman, C.; Xiaoping, Y.; Simin, H. One-point Second-Order Curved Boundary Condition for Lattice Boltzmann Simulation of Suspended Particles. Comput. Math. Appl. 2018, 76, 1593–1607. [Google Scholar]
  28. Orlando, E.; Contreras, P.; Cyrlene, C.; Rohit, N. Knowledge sharing among engineers: An empirical examination. In Proceedings of the 2017 IEEE Technology & Engineering Management Conference (TEMSCON), Santa Clara, CA, USA, 8–10 June 2017; pp. 260–266. [Google Scholar]
  29. Yi, L.; Chao, Y.; Li, J.; Shengli, X.; Yan, Z. Intelligent Edge Computing for IoT-Based Energy Management in Smart Cities. IEEE Netw. 2019, 33, 111–117. [Google Scholar]
  30. Ali Bulut, U.; Ali Ozgur, Y. Oversampling in One-Bit Quantized Massive MIMO Systems and Performance Analysis. IEEE Trans. Wirel. Commun. 2018, 17, 7952–7964. [Google Scholar]
  31. Ranjeet Singh, T.; Shekhar, V. RSU-supported MAC Protocol for Vehicular Ad Hoc Networks. Int. J. Veh. Saf. 2012, 6, 162. [Google Scholar]
  32. Ray, K. Adaptive Bernstein–von Mises Theorems in Gaussian White Noise. Ann. Stat. 2017, 45, 2511–2536. [Google Scholar] [CrossRef]
  33. Choi, J. NOMA Based Random Access with Multichannel ALOHA. IEEE J. Sel. Areas Commun. 2017, 35, 2736–2743. [Google Scholar] [CrossRef]
  34. Fengxian, G.; Richard, F.Y.; Heli, Z.; Hong, J.; Mengting, L. Adaptive Resource Allocation in Future Wireless Networks with Blockchain and Mobile Edge Computing. IEEE Trans. Wirel. Commun. 2020, 19, 1689–1703. [Google Scholar]
  35. Mengting, L.; Richard, F.Y.; Yinglei, T.; Victor, C.M. Leung.; Mei, Song. Performance Optimization for Blockchain-Enabled Industrial Internet of Things (iiot) Systems: A Deep Reinforcement Learning Approach. IEEE Trans. Ind. Inform. 2019, 15, 3559–3570. [Google Scholar]
  36. Leliang, R.; Weilin, G.; Yong, X.; Zhenyu, L.; Daqiao, Z.; Shaopeng, L. Deep Reinforcement Learning Based Integrated Evasion and Impact Hierarchical Intelligent Policy of Exo-Atmospheric Vehicles. Chin. J. Aeronaut. 2025, 38, 103193. [Google Scholar]
  37. Chunmei, M.; Jinqi, Z.; Ming, L.; Hui, Z.; Nianbo, L.; Xinyu, Z. Parking Edge Computing: Parked-Vehicle-Assisted Task Of-floading for Urban VANETs. IEEE Internet Things J. 2021, 8, 9344–9358. [Google Scholar]
  38. Dawid, P.; Marcin, W. Red Fox Optimization Algorithm. Expert Syst. Appl. 2020, 166, 114107. [Google Scholar]
Figure 1. Network model.
Figure 2. The consensus process.
Figure 3. Three forms of motion for the IPLO algorithm.
Figure 4. The relationship between the number of CPU cycles required for a task and the average energy consumption of the system.
Figure 5. The relationship between the number of CPU cycles required for a task and the average latency of the system.
Figure 6. The relationship between the number of CPU cycles required for a task and the average overhead of the system.
Figure 7. The relationship between the task data volume and the average energy consumption of the system.
Figure 8. The relationship between the task data volume and the average latency of the system.
Figure 9. The relationship between the task data volume and the average overhead of the system.
Figure 10. The relationship between the number of tasks and the average energy consumption of the system.
Figure 11. The relationship between the number of tasks and the average latency of the system.
Figure 12. The relationship between the number of tasks and the average overhead of the system.
Table 1. Simulation parameters.

Parameter | Value
Data size of the task D_m | [0.5–1.5] MB
CPU cycles required for the task C_m | [1000–2000] Megacycles
Transmitting power of the vehicle P_m [28] | 46 dBm
Computing power of the vehicles f_m^LOC [29] | 2.0 GHz
Computing power of the MEC server f_m^MEC [30] | 8.0 GHz
Radius of communication coverage of RSUs R [31] | 500 m
Gaussian channel white noise B_0 [32] | −147 dBm
Subchannel bandwidth B_m [33] | 2 MHz
Weighting of latency η | 0.8
Weighting of energy γ | 0.2
CPU cycles required for signature α [34] | 1 Megacycle
CPU cycles required for MAC β [34] | 10 Megacycles
Average transaction size S_p [35] | 200 B
Average computing power of RSUs f_cons [36] | 3 GHz
Average transmission power of RSUs P_cons [37] | 1000 mW
Number of blockchain nodes n | 6
Number of blockchain consensus nodes K | 4
Constant speed v | 40 Km/s
Table 2. Standard deviations of algorithm overhead under different conditions.

Algorithm | Fixed Data Volume and Number of Tasks | Fixed CPU Cycles and Number of Tasks | Fixed CPU Cycles and Data Volume
FOX | 0.022325712 | 0.121529847 | 0.084908155
PLO | 0.006160379 | 0.120456452 | 0.086015252
IPLO | 0.001792003 | 0.115658619 | 0.071276243
Table 3. Average overhead of the IPLO algorithm under different collision probabilities.

Collision Probability | Fixed Data Volume and Number of Tasks | Fixed CPU Cycles and Number of Tasks | Fixed CPU Cycles and Data Volume
0.01 | 0.450215907 | 0.40370882 | 0.615417646
0.03 | 0.451200618 | 0.397350783 | 0.619286688
0.05 | 0.451540167 | 0.400150143 | 0.616115548
0.07 | 0.450900238 | 0.399467543 | 0.618433646
0.1 | 0.45002346 | 0.403075415 | 0.616594892

Share and Cite

Liu, Y.; Yan, B.; Wang, B.; Sun, Q.; Dai, Y. Computation Offloading Strategy Based on Improved Polar Lights Optimization Algorithm and Blockchain in Internet of Vehicles. Appl. Sci. 2025, 15, 7341. https://doi.org/10.3390/app15137341