Article

Intelligent Data-Enabled Task Offloading for Vehicular Fog Computing

by
Ahmed S. Alfakeeh
1,* and
Muhammad Awais Javed
2
1
Department of Information Systems, King Abdul Aziz University, Jeddah 21589, Saudi Arabia
2
Department of Electrical and Computer Engineering, COMSATS University Islamabad, Islamabad 45550, Pakistan
*
Author to whom correspondence should be addressed.
Appl. Sci. 2023, 13(24), 13034; https://doi.org/10.3390/app132413034
Submission received: 20 September 2023 / Revised: 4 December 2023 / Accepted: 4 December 2023 / Published: 6 December 2023
(This article belongs to the Special Issue New Insights into Pervasive and Mobile Computing)

Abstract:
Fog computing is a key component of future intelligent transportation systems (ITSs) that can support the high computation and large storage requirements needed for autonomous driving applications. A major challenge in such fog-enabled ITS networks is the design of algorithms that can reduce the computation times of different tasks by efficiently utilizing available computational resources. In this paper, we propose a data-enabled cooperative technique that offloads some parts of a task to the nearest fog roadside unit (RSU), depending on the current channel quality indicator (CQI). The rest of the task is offloaded to a nearby cooperative computing vehicle with available computing resources. We developed a cooperative computing vehicle selection technique using an artificial neural network (ANN)-based prediction model that predicts both the computing availability once the task is offloaded to the potential computing vehicle and the link connectivity when the task result is to be transmitted back to the source vehicle. Using detailed simulation results in MATLAB 2020a software, we show the accuracy of our proposed prediction model. Furthermore, we also show that the proposed technique reduces total task delay by 37% compared to other techniques reported in the literature.

1. Introduction

Vehicular networks are important components of future intelligent transportation systems (ITSs). These networks equip vehicles with wireless transmitters and receivers, thus enabling them to share traffic information with other vehicles and the cloud [1,2]. As a result, city-level traffic information can be collected and used to manage congestion on the roads and implement safe driving practices [3,4,5,6].
Due to the increasing number of ITS applications, the data shared among ITS entities (including vehicles and infrastructure nodes, known as roadside units) are growing at a rapid pace [7,8,9,10]. Also, these ITS applications require the computation of several application-related tasks, such as decisions on particular driving actions using the current state of traffic in the vicinity. Thus, fog computing nodes, in the form of roadside units (RSUs), are needed to manage the data collection and computation of application-related tasks [11,12,13,14]. Vehicles offload their tasks to nearby RSUs, which act as fog nodes and provide computation services [15,16].
Task offloading in vehicular fog computing works in four steps. In the first step, the appropriate fog node for a given task is selected [17]. This decision is made based on several factors, such as the workloads of fog nodes, the channels between vehicles and fog nodes, etc. In the second step, the task is transmitted to the selected fog node using wireless communication technology, such as 6G [18,19]. In the third step, the task is computed at the fog node. Finally, the last step involves transmitting the result of the task back to the vehicle [20,21].
A major challenge in vehicular fog computing is the consideration of vehicle mobility and dynamic wireless channel quality for making task offloading decisions [22]. In addition, other factors, such as computing resource availability at fog nodes, the distances between vehicles and fog nodes, etc. also need attention. Moreover, the offloading ratio, which dictates the percentage of tasks that are computed locally by the vehicle itself, also needs to be decided.
Neighborhood vehicles with available computing resources can also share them with task-generating vehicles, thus computing tasks even closer to the source than fog RSUs. While fog RSUs have much higher computing capacities, they also carry higher computing loads generated by the many vehicles in their range. Similarly, there are scenarios where the channel quality between a fog RSU and a task-generating source vehicle is poor, so transmitting tasks to the fog RSU results in higher transmission delay and increased overall task delay. Cooperative computing by neighborhood vehicles will therefore play an important role in supporting fog RSUs in managing task computation within vehicular networks.
In this paper, we propose a data-enabled task offloading technique for vehicular fog computing. The key contributions of the proposed work are as follows:
  • We consider a scenario in which vehicles use fog RSUs and other nearby vehicles to compute a task. A vehicle with a task to compute offloads some percentage of the task to a fog RSU, based on the channel quality indicator (CQI) of the link with the fog RSU. The remaining task is offloaded to a cooperative computing vehicle in its neighborhood.
  • We propose a cooperative computing vehicle selection algorithm, based on the metrics of link connectivity time and computing availability time. These metrics consider factors such as source vehicle speed, potential computing vehicles’ speed, distance between vehicles, the computing loads of vehicles and channel quality between vehicles.
  • We propose an ANN-based prediction model to predict the values of the link connectivity time and computing availability time of potential cooperative computing vehicles. This prediction helps to evaluate the computing loads at the potential cooperative computing vehicles once the task has been transmitted and find the link connectivity value once the task result is transmitted back from the potential computing vehicle to the source vehicle.
  • We present detailed simulation results to show the high accuracy of the proposed prediction model. Moreover, we also implement the complete task offloading process and show that task delay is significantly reduced when using the proposed technique compared to using matching-based task offloading techniques.
The paper is organized as follows: Section 2 reviews recent work related to task offloading in vehicular fog computing; Section 3 presents the system model considered in the paper; Section 4 explains the working of the proposed technique; Section 5 discusses the performance evaluation and simulation results; Section 6 presents our conclusions.

2. Literature Review

In this section, we review the recent literature related to task offloading in vehicular fog computing. A summary of the literature review is presented in Table 1.
In [23], the authors highlight the significance of fog computing through efficient task offloading for mobile devices. To deal with delay-sensitive applications, an online learning-based offloading approach is applied in two stages. In the first stage, resources are distributed among fog nodes while keeping computation costs at a minimum. In the second stage, offloading delay is minimized by optimizing task allocation and spectrum scheduling. Simulation results show the performance gain of the proposed technique compared to the upper confidence bound (UCB) technique.
The work in [24] involves the efficient engagement of servers for computing, as well as the offloading of tasks under the uncertainty and asymmetry of the transmitted information. A two-phase-based design is proposed to achieve this goal. For the efficient assignment of servers, a convex–concave optimization approach is defined while maximizing the anticipated usage of the operator under the condition of symmetric information. Additionally, to minimize the total delay of the network, a price-matching solution is also proposed. This price matching is extended to information uncertainty scenarios and a matching-based offloading framework is developed, which helps to minimize total delay. Their results show that the proposed algorithm achieves the efficient sharing of resources without requiring global information and assures the bounded divergence of the algorithm.
Vehicular fog computing (VFC) is employed in [25], where computation tasks are offloaded from a base station (BS) to the underutilized computational resources of vehicles in the vicinity. Major challenges in VFC, such as the absence of efficient incentives and mechanisms for task assignment, are addressed. Firstly, an efficient incentive mechanism is proposed based on contract-theoretic modeling, in which a contract is designed by examining the distinctive characteristics of every vehicle type to maximize the expected utility of the BS. The task assignment problem is converted into a dual-sided matching problem between user equipment and vehicles, which is resolved using a stable matching algorithm based on pricing. Their numerical results show a substantial performance improvement when using the proposed technique.
Autonomous driving is a promising paradigm for enabling C-ITSs. A significant amount of computation, as well as delay-sensitive response, is needed for autonomous driving. Therefore, in [26], VFC is applied to migrate computing tasks from overcrowded BSs to nearby vehicles. A two-stage VFC architecture is suggested to address the challenges of server assignment and task processing strategies under the assumption of information asymmetry and uncertainty. Vehicular computing resources are managed according to contract theory and the efficient offloading of tasks is governed by learning-based matching. Their simulation results also exhibit improvements in performance, in terms of both resource management and offloading delay.
VFC can also contribute to load issues in highly congested areas during peak times. Thus, vehicles can be assumed to be fog nodes, which are then used to assist in computing offloaded tasks. However, there are some concerns regarding the deployment of VFC, like the deficiency of incentives for the distribution of resources, the increased complexity of systems and collisions between offloading vehicles. An innovative contract-based procedure is proposed in [27] whereby joint resource contribution and utilization are achieved via deep reinforcement learning. The aim is to distribute resources while reducing the complexity of the system. Furthermore, to handle the collision problem in simultaneous offloading in multiple vehicle scenarios, a queuing model is also proposed. Their results show that the performance of both task offloading and resource allocation is significantly improved.
Similar to the above model, in [28], the authors present the idea of vehicles enabled with computation capabilities, which ensures less latency and enhancements in overall system efficiency. A major issue faced in this scenario is how and which vehicle is to be selected for task offloading when taking into consideration the delay costs and resource allocation for the multi-vehicle model. A convex optimization problem is formulated for the consumption function of the offloading model, which is then subjected to inequality constraints. Further, this problem is solved using the Lagrange dual approach and, in the end, a low-complexity algorithm is designed to find the optimized values of offloading ratios, the selection of computing vehicles and the consumption of the system.
Vehicular fog and cloud computing (VFCC) systems may consume large computing power for processing numerous computation-intensive and delay-sensitive tasks. To solve this problem, in [29], an optimal offloading scheme is proposed that takes into account the departure of occupied vehicles. Task offloading is formulated as a semi-Markov decision process (SMDP) and a value iteration algorithm is used to maximize the utility of the system. Compared to the greedy scheme, the proposed algorithm achieves higher utility.
In [30], the authors propose a task offloading technique for vehicular networks. The key idea is to use the stable matching technique to find the most suitable fog node for a given task. Vehicle mobility and dynamics are considered in the developed algorithm and a one-to-one matching technique based on the Kuhn–Munkres algorithm is used to find a stable match. Their results show that the task offloading rate is improved by the proposed technique.
In addition, there are several intelligent routing protocols for vehicular networks that are also used to transmit packets from vehicles to fog nodes [31,32,33,34,35]. These algorithms utilize techniques, such as fuzzy logic, genetic algorithms and machine learning, to obtain information about optimal routes.

3. System Model

In this paper, we consider a segment of a bidirectional multi-lane highway that contains N number of vehicles and several RSUs located along the road. The RSUs serve as fog nodes, thus providing computing services to the vehicles, as shown in Figure 1. The RSUs are connected to the cloud server via optical fiber links. The time taken to transmit tasks between neighboring RSUs and from RSUs to the cloud is considered negligible. Each vehicle is equipped with a communication device that allows for communication with nearby vehicles and other RSUs.

3.1. Mobility Modeling

For the vehicle mobility modeling, we assume that each vehicle has a random initial speed $s_i$, which is independent of the speed of other vehicles. As time progresses, the speed of the vehicle changes but remains within a uniformly distributed speed range. Thus, fast-moving vehicles can overtake slower vehicles using the other lane.

3.2. Data Sharing Model

Vehicles periodically share traffic information with neighboring vehicles and RSUs every 100 ms using wireless communication [6]. Furthermore, vehicles also share the current status of their computational resources in the form of a metric we define in this paper as the computation availability time $T_{ca}$. This is defined as the time after which the vehicle's computational resources will become available. Thus, it measures the busy time of a vehicle's computational resources and is calculated directly from the computational latency of the current task allocated to the vehicle. $T_{ca}$ is given by the following equation:
$$T_{ca} = \frac{W\,C_b}{C}$$
where $W$ is the task size (in bits), $C_b$ is the number of cycles taken by the vehicle's CPU to compute one bit (in cycles/bit) and $C$ is the frequency of the vehicle's CPU (in cycles/s).
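As a quick sketch of this formula (Python is used here for illustration only; the paper's simulations are in MATLAB), with the parameter values taken from Section 5.2:

```python
def computation_availability_time(task_bits, cycles_per_bit, cpu_freq_hz):
    """T_ca = W * C_b / C: busy time of the vehicle's CPU for its current task."""
    return task_bits * cycles_per_bit / cpu_freq_hz

# 0.5 MB task (4e6 bits), 500 cycles/bit, 2 GHz vehicle CPU -> 1.0 s
t_ca = computation_availability_time(0.5 * 8e6, 500, 2e9)
```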

3.3. Computational Model

Our proposed work considers task offloading at fog nodes as well as at cooperative vehicles. Let $\alpha$ be the offloading ratio, defined as the fraction of a task that is offloaded to a fog node. The remaining fraction of the task, $1-\alpha$, is offloaded to a cooperative vehicle. The computational delay is the sum of the two task delays (i.e., the task delay at the fog node and the task delay at the cooperative vehicle), each weighted by the respective fraction of the offloaded task:
$$T_{comp} = (1-\alpha)\times T_{ca} + \alpha\times T_{fog}$$
where $T_{ca}$ is the task delay at the cooperative vehicle and $T_{fog}$ is given as follows:
$$T_{fog} = \frac{W\,C_b^{fog}}{C_{fog}}$$
where $C_b^{fog}$ is the cycles/bit of the fog node and $C_{fog}$ is the frequency of the CPU at the fog node (in cycles/s).
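The weighted delay above can be sketched as follows (illustrative Python; the example parameter values are those listed in Section 5.2, and the value of $\alpha$ is arbitrary):

```python
def computational_delay(alpha, task_bits, cb_veh, c_veh, cb_fog, c_fog):
    """T_comp = (1 - alpha) * T_ca + alpha * T_fog (Eq. 2)."""
    t_ca = task_bits * cb_veh / c_veh    # delay if computed at the cooperative vehicle
    t_fog = task_bits * cb_fog / c_fog   # delay if computed at the fog node
    return (1 - alpha) * t_ca + alpha * t_fog
```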

4. Proposed Technique

In this section, we explain the working of the proposed technique, which is divided into two major parts. The first part evaluates the offloading ratio of the task to the nearest fog node, whereas the second part finds the best cooperative computing vehicle to compute the rest of the task. A flow chart for the proposed technique is shown in Figure 2.

4.1. CQI-Based Offloading to Fog Nodes

The key idea of the first part of the proposed technique is to offload the task to the nearest fog node, depending on the channel quality between the task node and the fog node. In a vehicular network, RSUs regularly send pilot signals to vehicles for control and synchronization purposes. The received signal strength of these pilot signals is mapped to a 4-bit value (ranging from 0 to 15), known as the channel quality indicator (CQI), which is a measure of the channel conditions between RSUs and vehicles. These CQI values are periodically reported by each vehicle to the RSUs.
Let $v_s$ be the source vehicle and $r_s$ be the connected fog RSU. We take $\alpha(v_s, r_s)$ as the offloading ratio of the task transmitted from $v_s$ to $r_s$. The proposed technique uses the mapped CQI value obtained at $v_s$ to find $\alpha(v_s, r_s)$, as follows:
$$\alpha(v_s, r_s) = \frac{1}{1 + e^{c_1\left((CQI_{max} - CQI_{v_s, r_s}) - c_2\right)}}$$
Equation (4) is a logistic (sigmoid) function. Here, $CQI_{max}$ is the maximum CQI value of 15. Depending on the values of $c_1$ and $c_2$, the sigmoid function returns a different offloading ratio for different $CQI_{v_s, r_s}$ values. A key feature of the sigmoid function in Equation (4) is that it returns a higher offloading ratio when the channel conditions are good, i.e., when $(CQI_{max} - CQI_{v_s, r_s})$ is small. As channel conditions deteriorate, the offloading ratio reduces significantly.
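A minimal sketch of Equation (4) in Python (the values of $c_1$ and $c_2$ are not specified in the text, so the defaults below are purely illustrative):

```python
import math

def offloading_ratio(cqi, c1=1.0, c2=7.0, cqi_max=15):
    """Sigmoid CQI-to-offloading-ratio mapping of Eq. (4).
    c1 controls the steepness and c2 the midpoint; both are illustrative here."""
    return 1.0 / (1.0 + math.exp(c1 * ((cqi_max - cqi) - c2)))
```

With these defaults, a CQI of 8 sits exactly at the sigmoid midpoint, and the ratio grows monotonically with channel quality.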

4.2. Cooperative Computing Vehicle Selection and Task Offloading

As the source vehicle offloads part of the task to a fog RSU, the remaining part of the task is offloaded to a cooperative computing vehicle, which is selected using an ANN-based prediction algorithm. The key idea of the second part of the proposed technique is to predict which neighborhood vehicles will have long-term connectivity with the source vehicle and also have a low computation availability time (which means that the cooperative vehicle's CPU will become available sooner). When the source vehicle $v_s$ has a task to offload, it requests an RSU to nominate the most suitable cooperative computing vehicle for task offloading.
To find the optimal cooperative vehicle, we utilize two key metrics. The first metric is known as link connectivity time, which calculates the time for which the two vehicles (moving either in the same direction or opposite directions) will remain connected [36,37] and is calculated using the following equation:
$$T_{i,j}^{LLT} = \frac{\Delta s_{i,j}\,\Delta d_{i,j} + |\Delta s_{i,j}|\,R}{(\Delta s_{i,j})^2}$$
where $\Delta s_{i,j}$ is the relative speed, $\Delta d_{i,j}$ is the relative distance between the two vehicles and $R$ is the communication range dictated by the channel conditions.
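The link connectivity time of Equation (5) can be sketched directly (illustrative Python; the example inputs below are arbitrary and not from the paper):

```python
def link_lifetime(rel_speed, rel_dist, comm_range):
    """T^LLT per Eq. (5): time two vehicles stay within communication range R.
    rel_speed and rel_dist are the relative speed (m/s) and distance (m)."""
    return (rel_speed * rel_dist + abs(rel_speed) * comm_range) / rel_speed**2
```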
The second metric used to find the optimal cooperative computing vehicle is the computation availability time, which was already explained in Section 3. Each RSU gathers traffic information for all vehicles within its communication range.
Once a request from source vehicle $v_s$ to find the optimal cooperative computing vehicle is received at the RSU, it selects all neighborhood vehicles $j$ of $v_s$ that have a $T_{s,j}^{LLT}$ greater than a selected threshold. This threshold is selected based on the time taken for the task to be computed and received back at $v_s$. The rationale for choosing such a threshold is to ensure that the source vehicle and the cooperative computing vehicle remain connected when the task result is transmitted back to the source vehicle.
After the list of potential cooperative computing vehicles is finalized, the RSU sorts them according to their computation availability time and selects the one with the lowest computation availability time. The task is then offloaded to the selected cooperative computing vehicle.
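The filter-then-sort selection described above can be sketched as follows (illustrative Python; the candidate tuples and threshold are hypothetical):

```python
def select_cooperative_vehicle(candidates, llt_threshold):
    """candidates: list of (vehicle_id, t_llt, t_ca) tuples.
    Keep vehicles whose link lifetime exceeds the threshold, then
    pick the one whose CPU becomes available soonest (lowest t_ca)."""
    eligible = [c for c in candidates if c[1] > llt_threshold]
    if not eligible:
        return None  # no suitable cooperative vehicle in range
    return min(eligible, key=lambda c: c[2])[0]
```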

4.3. ANN-Based Prediction for Cooperative Computing Vehicle Selection

In this paper, we further utilize ANN-based prediction to find cooperative computing vehicles. This is because the CPU of a cooperative computing vehicle must be available once the task has been transmitted to it, rather than at the current time. Task transmission takes a certain amount of time, depending on the task size and channel conditions. Hence, this prediction mechanism helps to better evaluate cooperative computing vehicles by taking the task transmission time into account. Similarly, this prediction also helps to assess the link connectivity between the source and cooperative computing vehicles at the moment the task is computed and ready for transmission back to the source vehicle. Thus, instead of using the current values of these metrics, predicted values are used to improve the selection of cooperative computing vehicles. An ANN is a non-linear tool that consists of an input layer, one or more hidden layers and an output layer. The main constituents of an ANN are neurons (nodes). The input layer acquires data from an external source, which could be images, data files or audio data, and the desired output is calculated after those data have been processed through the hidden layers of neurons.

4.3.1. Structure of ANN

The fundamental structure of the ANN is shown in Figure 3, where $x_i$ represents the $i$th input of the model, which is multiplied by the weight $w_{ij}$ connecting input neuron $i$ to hidden-layer neuron $j$.
The net output at hidden-layer neuron $j$ after passing through the activation function is given as follows [38]:
$$y_j = f(U_j)$$
where $U_j$ is the net input at hidden-layer neuron $j$ after adding the bias:
$$U_j = \sum_{i=1}^{m} x_i w_{ij} + b_j$$
and, in Equation (6), $f$ is the activation function.

4.3.2. Activation Function

In our implementation, we utilize the rectified linear unit (ReLU) activation function, defined as:
$$f(U) = \max(0, U)$$
The ReLU activation function is non-linear; its output equals the input $U$ when the input is positive and is zero otherwise. The hidden-layer outputs are passed on to the output layer, which has $k$ neurons; the net input at output neuron $k$ is defined as follows:
$$Z_k = \sum_{j=1}^{h} y_j w_{jk} + b_k$$
In Equation (9), $w_{jk}$ represents the weight connecting the $j$th neuron of the hidden layer to the $k$th neuron of the output layer, and $b_k$ is the bias. After passing through the ReLU activation function, the output is given as follows:
$$z_k = f(Z_k)$$
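The forward pass of Equations (6)-(10) can be sketched for a single hidden layer as follows (illustrative Python/NumPy; the paper's actual models have multiple hidden layers and were built in MATLAB):

```python
import numpy as np

def relu(u):
    """ReLU activation, Eq. (8)."""
    return np.maximum(0.0, u)

def forward(x, W1, b1, W2, b2):
    """Single-hidden-layer forward pass following Eqs. (6)-(10)."""
    U = x @ W1 + b1   # net input at the hidden layer, Eq. (7)
    y = relu(U)       # hidden-layer output, Eq. (6)
    Z = y @ W2 + b2   # net input at the output layer, Eq. (9)
    return relu(Z)    # network output, Eq. (10)
```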

4.3.3. Loss Function

The calculated output $z_k$ is then compared to the target output $t_k$, which gives us the error between the target and calculated outputs, $L_{N_t}$, as follows:
$$L_{N_t} = \frac{1}{N_t} \sum_{k=1}^{N_t} L(t_k, z_k)$$
$L(t_k, z_k)$ is the loss function, which gives us the error between the actual and desired outputs. The most common loss function is the mean square error (MSE), which is defined as follows:
$$L_{N_t} = MSE(t_k, z_k) = \frac{1}{N_t} \sum_{k=1}^{N_t} (t_k - z_k)^2$$
where $N_t$ is the total number of samples. Furthermore, the output-layer gradient is given as follows:
$$\delta_k = (t_k - z_k)\, f'(Z_k)$$

4.3.4. Backpropagation Algorithm

As part of the ANN, we implement backpropagation (BP) using the Adam optimizer. The learning rate is given as $\eta$, whereas the momentum is adapted by the Adam optimizer itself. The stability and convergence of the algorithm depend on the value of $\eta$: the lower the rate, the more stable the network but the slower the convergence. On the contrary, higher rates lead to faster convergence at the cost of stability.
The BP algorithm is used to update the weights and biases. The updated weights w j k and biases b k from the hidden layer to the output layer are given by:
$$\Delta w_{jk} = \eta\, \delta_k\, y_j$$
$$\Delta b_k = \eta\, \delta_k$$
The gradient error at the hidden layer is specified as follows:
$$\delta_j = \left( \sum_{k=1}^{K} \delta_k w_{jk} \right) f'(U_j)$$
From the input neuron i to the hidden layer neuron j, the updated weights and biases are calculated as follows:
$$\Delta w_{ij} = \eta\, \delta_j\, x_i$$
$$\Delta b_j = \eta\, \delta_j$$
Algorithm 1 describes the backpropagation algorithm for updating the weights and biases [39]. It consists of two parts: a feed-forward pass and a corresponding backward pass. The forward pass is performed by Equations (6)–(10), whereas the backward pass is calculated by Equations (13)–(16b).
Algorithm 1: Backpropagation algorithm
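One feed-forward/backward iteration of Equations (6)-(16b) can be sketched as follows (illustrative Python/NumPy; for simplicity a plain gradient step with rate $\eta$ stands in for the Adam update of Section 4.3.5, and the tiny network shape is hypothetical):

```python
import numpy as np

def relu(u): return np.maximum(0.0, u)
def relu_grad(u): return (u > 0).astype(float)

def backprop_step(x, t, W1, b1, W2, b2, eta=0.05):
    """One BP iteration for a single-hidden-layer network (Eqs. 13-16b)."""
    # forward pass, Eqs. (6)-(10)
    U = x @ W1 + b1
    y = relu(U)
    Z = y @ W2 + b2
    z = relu(Z)
    # backward pass
    delta_k = (t - z) * relu_grad(Z)           # output-layer gradient, Eq. (13)
    delta_j = (delta_k @ W2.T) * relu_grad(U)  # hidden-layer gradient, Eq. (15)
    W2 += eta * np.outer(y, delta_k)           # Eq. (14a)
    b2 += eta * delta_k                        # Eq. (14b)
    W1 += eta * np.outer(x, delta_j)           # Eq. (16a)
    b1 += eta * delta_j                        # Eq. (16b)
    return W1, b1, W2, b2
```

Repeating this step drives the output toward the target, since the weight changes follow the negative gradient of the MSE loss.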

4.3.5. Adam Optimization

An extension of the classical stochastic gradient descent (SGD) algorithm, called the Adam optimization algorithm, is used to improve the performance of the ANN. The cost and loss functions used in our case are taken from [40] and are given in Equations (11) and (12).
$$\hat{L}_{W,b} = \frac{1}{N_t} \sum_{n_t \in S_{SGD}} L\left(t_k, z_k \mid W, b\right)$$
A major difference between the Adam algorithm and the SGD algorithm is that the SGD algorithm uses a single learning rate η for the whole weight update process. On the other hand, the Adam optimization algorithm uses adaptive independent rates that are computed for different parameters based on the estimation of the first and second moments of the gradient. It is a combination of two algorithms: the adaptive gradient algorithm (AdaGrad) and root mean square propagation (RMSProp) [41]. Therefore, the Adam algorithm has the advantages of both algorithms. Moreover, when the algorithm uses Adam optimization, it applies correction terms with a scaling factor close to one to both the first and the second moments. Algorithm 2 presents the Adam optimization algorithm [39].

4.3.6. Configuration Parameters for the Adam Optimization Algorithm

The following are the configuration parameters for the Adam optimization algorithm that were used for the ANN model training:
  • $\eta$ represents the learning rate or step size;
  • $\gamma_1$ and $\gamma_2$ are the exponential decay rate parameters, which are initialized close to 1 (such as 0.9 and 0.9999, respectively);
  • $\varepsilon$ is a tolerance factor (usually taken as the very small value of $10^{-8}$);
  • $m$ and $v$ represent the moving averages of the first and second moments of the gradient, while their bias-corrected counterparts are denoted by $\hat{m}$ and $\hat{v}$;
  • At the end of the algorithm, $W$ and $b$ are the updated adaptive weights and biases.
The Adam optimization algorithm is faster at choosing parameters compared to the traditional SGD algorithm, which ultimately results in quicker convergence. Choosing appropriate initial values for the parameters used in Adam optimization also plays an important role in efficient convergence, given the fact that the problem is non-convex.    
Algorithm 2: Adam optimization algorithm for ANN model training.
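A single Adam parameter update, using the configuration parameters listed above, can be sketched as follows (illustrative Python/NumPy; the decay-rate defaults shown are the common 0.9/0.999 choices and are assumptions here):

```python
import numpy as np

def adam_update(w, grad, m, v, t, eta=0.01, g1=0.9, g2=0.999, eps=1e-8):
    """One Adam step: moving first/second moments with bias correction.
    t is the 1-based iteration counter used in the correction terms."""
    m = g1 * m + (1 - g1) * grad          # first-moment moving average
    v = g2 * v + (1 - g2) * grad**2       # second-moment moving average
    m_hat = m / (1 - g1**t)               # bias-corrected first moment
    v_hat = v / (1 - g2**t)               # bias-corrected second moment
    w = w - eta * m_hat / (np.sqrt(v_hat) + eps)
    return w, m, v
```

Applied to a simple quadratic, repeated updates drive the parameter toward the minimizer with an adaptive per-parameter step size.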

4.3.7. Proposed Prediction Model

Our proposed ANN-based prediction model is divided into two parts:
  • Link connectivity time prediction: The link connectivity time $T_{i,j}^{LLT}$ between the source vehicle and all potential computing vehicles is predicted in the first part of the model using four inputs: (i) the speed of the source vehicle, (ii) the speed of a potential computing vehicle, (iii) the distance between the source and potential computing vehicles and (iv) the channel gain between the vehicles.
  • Computation availability time prediction: The computation availability time $T_{ca}$ of potential computing vehicles is predicted in the second part of the model using two inputs: (i) the number of tasks already at the potential computing vehicle and (ii) the sizes of those tasks.
The developed ANN model was first trained and then used for prediction. Based on the developed ANN model, the proposed technique predicts the values of the link connectivity time and computation availability time $T_1$ and $T_2$ seconds ahead, respectively. The value of $T_1$ is chosen so that the link connectivity between the source vehicle and the computing vehicle remains intact until the task result is transmitted back to the source vehicle. The value of $T_2$ is selected so that the computing vehicle's CPU is available once the task has been transmitted to it from the source vehicle for computation.

5. Performance Evaluation

In this section, we present our performance evaluation of the proposed technique.

5.1. Dataset Generation for ANN Training

The details of the dataset used for training the proposed ANN model are shown in Table 2. Then, we explain the salient characteristics of the dataset generated to train the ANN model.
  • Link connectivity time model: The input parameters for the link connectivity time model are the vehicle speeds, the inter-vehicle distance and the channel gain. In Table 2, we list the ranges of the values for all features used to generate the dataset. The speed values are drawn from the range of 30–120 km/h using a Gaussian random distribution. The distance between the vehicles, which depends on the speeds of the source and potential computing vehicles, is taken from the range of 5–100 m. The channel gain indicates the conditions of the vehicle-to-vehicle channel and is generated using complex Gaussian random variables of zero mean, thus providing a Rayleigh fading channel.
  • Computation availability model: There are two inputs for the computation availability model. The first input is the number of tasks or offloading requests at a particular time slot $t$. The Poisson distribution is used to generate the number of tasks, with an average rate of $\lambda$. The second input is the task size, the value of which lies in the range of 0.5–16 MB. This range is used to realistically model the computation availability at a particular time slot; for example, there may be scenarios with fewer but larger tasks, or with more but smaller tasks.
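The dataset generation described above can be sketched as follows (illustrative Python/NumPy; the paper's data were generated in MATLAB, and the Gaussian mean/standard deviation for speed and the Poisson rate $\lambda$ below are assumed values, as they are not stated in the text):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000  # samples for this sketch (the paper uses 100,000 for training)

# Link connectivity inputs: speeds (km/h, Gaussian, clipped to 30-120),
# inter-vehicle distance (m), Rayleigh channel gain
speed_src = np.clip(rng.normal(75, 15, n), 30, 120)   # assumed mean/std
speed_coop = np.clip(rng.normal(75, 15, n), 30, 120)
distance = rng.uniform(5, 100, n)
h = (rng.normal(0, 1, n) + 1j * rng.normal(0, 1, n)) / np.sqrt(2)
channel_gain = np.abs(h)  # zero-mean complex Gaussian -> Rayleigh envelope

# Computation availability inputs: Poisson task count, uniform task size (MB)
num_tasks = rng.poisson(lam=3, size=n)  # assumed average rate
task_size_mb = rng.uniform(0.5, 16, n)
```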

5.2. Simulation Parameters

The values of the simulation parameters used are given in Table 3. All simulations were implemented in MATLAB 2020a software. We use task sizes of 0.5–16 MB. The cycles per bit of the vehicles' CPUs is taken as 500, whereas that of the fog node is taken as $C_b^{fog}$ = 2000 cycles/bit. The frequencies of the vehicles' CPUs and the fog node's CPU are taken as 2 G and 8 G cycles per second (i.e., 2 GHz and 8 GHz), respectively.

5.3. Performance Metrics

The following four error score indicators are used as performance metrics for the evaluation of the ANN prediction model: the root mean square error (RMSE), mean absolute percentage error (MAPE), mean absolute bias error (MABE) and the coefficient of determination ($R^2$). The following equations describe the mathematical formulae used to evaluate these four scores:
$$RMSE = \sqrt{\frac{1}{n} \sum_{i=1}^{n} (Z_i - \hat{Z}_i)^2}$$
$$MAPE = \frac{1}{n} \sum_{i=1}^{n} \frac{|Z_i - \hat{Z}_i|}{Z_i} \times 100$$
$$MABE = \frac{1}{n} \sum_{i=1}^{n} |Z_i - \hat{Z}_i|$$
$$R^2 = 1 - \frac{\sum_{i=1}^{n} (Z_i - \hat{Z}_i)^2}{\sum_{i=1}^{n} (Z_i - \bar{Z})^2}$$
where $n$ is the total number of dataset samples, $Z_i$ and $\hat{Z}_i$ are the actual and estimated values, respectively, and $\bar{Z}$ is the mean of the actual values.
RMSE is the first indicator employed to examine model performance; it is zero in the ideal case and positive in non-ideal cases. MAPE is another indicator of the accuracy of ANN models and has low values for accurate models. Similarly, the MABE score can be used to evaluate the precision of ANN models. Lastly, $R^2$ shows the goodness of fit of the model; its value lies in the range 0–1, with 1 indicating the most accurate models.
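The four error scores can be computed together as follows (illustrative Python/NumPy; the paper evaluates them in MATLAB):

```python
import numpy as np

def error_scores(actual, predicted):
    """RMSE, MAPE (%), MABE and R^2 per Eqs. (17)-(20)."""
    actual = np.asarray(actual, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    err = actual - predicted
    rmse = np.sqrt(np.mean(err**2))
    mape = np.mean(np.abs(err) / actual) * 100
    mabe = np.mean(np.abs(err))
    r2 = 1 - np.sum(err**2) / np.sum((actual - actual.mean())**2)
    return rmse, mape, mabe, r2
```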

5.4. ANN Prediction Model Results

The results of the ANN prediction model are presented in this subsection.
  • Training Phase: In the training phase of the ANN model for the link connectivity time, the learning rate $\eta$ is taken as 0.01. The value of the momentum is adaptively changed by the Adam optimization algorithm and is finally set to 0.9 for optimal results. The number of training samples is taken as 100,000. The numbers of neurons in the hidden layers are taken as [30, 20, 1], which provides the best model results in terms of all four evaluation error scores ($RMSE = 3.5598$, $MABE = 1.9919$, $MAPE = 8.6981$ and $R^2 = 0.9854$).
    In the computation availability time ANN model, the same parameters are used, except that the numbers of hidden-layer neurons are taken as [20, 20, 1] to provide the best results. The error scores for this model are calculated as $RMSE = 1.7857$, $MABE = 1.0189$, $MAPE = 8.8644$ and $R^2 = 0.9853$.
  • Testing Phase: After training, the models are evaluated on held-out data samples in the testing phase. Prediction is performed with both models for look-ahead times T 1 and T 2 of 1, 2 and 3 s. Generally, T 1 , which is associated with link connectivity time, is larger than T 2 , since link connectivity is needed later, once the task has been computed and its result is returned to the source vehicle.
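One way to realize such T-seconds-ahead prediction is to pair each observed sample with the value T samples later and train the network on these pairs. The helper below sketches this pairing under the assumption of one sample per second; the paper's exact data pipeline may differ:

```python
def make_ahead_pairs(series, steps_ahead):
    """Pair each sample with the value `steps_ahead` positions later,
    producing (input, target) examples for T-second-ahead prediction
    (assuming one sample per second). The last `steps_ahead` samples
    have no future target and are dropped."""
    return [(series[i], series[i + steps_ahead])
            for i in range(len(series) - steps_ahead)]
```

For example, a two-seconds-ahead model would be trained on pairs in which each input is matched with the measurement recorded two samples later.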
The one-second-ahead prediction is shown in Figure 4, in which the x-axis represents the testing-sample index and the y-axis gives the link connectivity time in seconds. We utilize 20,000 values to test the prediction model and, in Figure 4, show the results of the first 5000 testing samples. The graph shows that the actual and predicted link connectivity times closely match, indicating a high level of accuracy. The prediction results at two and three seconds ahead are presented in Figure 5 and Figure 6, respectively.
The error scores of the link connectivity model for all predictions are given in Table 4. The general trend is that prediction accuracy decreases as the look-ahead time increases.
For the computation availability model, the predicted and actual results are illustrated in Figure 7, Figure 8 and Figure 9. Similar to the previous model, we use 100,000 samples for training and 20,000 samples for testing. In the graphs, we show the prediction results of the first 5000 samples. It can be seen that the actual and predicted values of computation availability time are quite close in all cases.
Table 5 lists the error scores for the computation availability model. With these results, it can be concluded that the ANN model can accurately predict computation availability time a few seconds ahead of the current time.

5.5. Task Offloading Results

We implement the proposed offloading technique and compare it to three other techniques: the matching-based task offloading technique [30], the knapsack-based task offloading technique [42] and the exhaustive search algorithm. The comparison is conducted using different ratios of vehicles generating the task. In Figure 10, the results show that the proposed task offloading technique computes all tasks within 5 s at a high task workload scenario, i.e., when 90% of vehicles have a computation task. On the other hand, the matching-based and knapsack-based task offloading techniques show inferior results, with a total delay of around 7.8–8 s in the high task workload scenario. Lastly, the exhaustive search technique shows the best results because it tries all combinations and selects the best combination with the lowest task delay. It can be seen that the proposed technique achieves a similar performance to the exhaustive search technique, with a slightly higher task delay.
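The exhaustive search baseline can be sketched as a brute-force enumeration of task-to-vehicle assignments; the version below is an illustrative re-implementation, with `delay(task, vehicle)` standing in for a hypothetical per-assignment delay model rather than the paper's exact formulation:

```python
from itertools import product

def exhaustive_search(tasks, vehicles, delay):
    """Exhaustive-search baseline: evaluate every assignment of tasks
    to candidate computing vehicles and keep the combination with the
    lowest total task delay. Runtime grows as |vehicles| ** |tasks|,
    which is why lower-complexity heuristics are needed in practice."""
    best, best_delay = None, float("inf")
    for combo in product(vehicles, repeat=len(tasks)):
        d = sum(delay(t, v) for t, v in zip(tasks, combo))
        if d < best_delay:
            best, best_delay = combo, d
    return best, best_delay
```

Because it tries all combinations, this baseline lower-bounds the achievable task delay, which is consistent with the exhaustive search technique performing best in Figure 10.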
Figure 11 compares the task delay of the proposed technique with the case in which CQI-based task offloading is not used and, instead, a fixed 50% of the task is offloaded to the fog node. The results show that without CQI-based offloading, the task delay rises to 10 s. This is because the channel conditions between the source vehicle and the fog node are not considered, resulting in a higher transmission delay.
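A CQI-based split can be illustrated as follows. The linear mapping from CQI to offload fraction below is an assumption made for illustration (the paper's exact rule may differ), using the usual 0–15 CQI reporting range:

```python
def offload_fraction(cqi, cqi_max=15):
    """Hypothetical mapping from the reported CQI (0..cqi_max) to the
    fraction of the task offloaded to the fog RSU: the better the
    channel, the larger the offloaded share. The linear form is an
    illustrative assumption, not the paper's exact rule."""
    cqi = max(0, min(cqi, cqi_max))
    return cqi / cqi_max

def split_task(task_bits, cqi):
    """Split a task into (fog_part, vehicle_part) in bits based on
    the current CQI."""
    fog = int(task_bits * offload_fraction(cqi))
    return fog, task_bits - fog
```

Under this sketch, a poor channel keeps most of the task for local/cooperative computation, which avoids the inflated transmission delay seen with the fixed 50% split.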
Figure 12 shows the task delay when cooperative computing vehicles are either selected without ANN-based prediction or are selected randomly. It can be seen that when cooperative computing vehicles are randomly selected, task delay goes up to 12 s. Without using ANN-based prediction and only making decisions based on the current values of link connectivity time and computation availability time, task delay increases by up to 20% compared to using the proposed technique.
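The prediction-based selection step can be sketched as a feasibility filter over candidate vehicles. The dictionary keys `pred_avail` and `pred_link` (predicted computation availability time and link connectivity time, in seconds) and the tie-breaking rule are illustrative assumptions:

```python
def select_cooperative_vehicle(vehicles, task_compute_time):
    """Select a cooperative computing vehicle using ANN-*predicted*
    values: a candidate is feasible if its predicted computation
    availability covers the task and its predicted link connectivity
    lasts until the result can be returned. Among feasible candidates,
    the one with the largest predicted availability is chosen (an
    assumed tie-breaking rule). Returns None if no vehicle qualifies."""
    feasible = [v for v in vehicles
                if v["pred_avail"] >= task_compute_time
                and v["pred_link"] >= task_compute_time]
    if not feasible:
        return None
    return max(feasible, key=lambda v: v["pred_avail"])
```

Replacing the predicted values with the current measurements reproduces the "without ANN prediction" baseline of Figure 12, which overestimates how long a vehicle will remain available and connected.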

6. Conclusions

In this paper, we present a data-enabled task offloading technique for vehicular networks that utilizes cooperation between fog nodes and other vehicles in the neighborhood. Initially, a certain percentage of a task is offloaded to a nearby fog node, based on channel quality indicator values. Subsequently, the rest of the task is offloaded to a nearby cooperative computing vehicle. The selection of the cooperative computing vehicle is based on factors such as computing availability time and link connectivity time. An ANN-based prediction model is used to find the values of these parameters once the task is offloaded to the cooperative computing vehicle and the result of the task is to be sent back to the source vehicle. Our simulation results highlight the accuracy of the prediction model in terms of computation availability time and link connectivity time. Moreover, we also show that total task delay is reduced when using the proposed technique compared to other techniques available in the literature. In the future, we aim to utilize more than one cooperative computing vehicle to further parallelize task processing and further reduce task delay.

Author Contributions

This article was prepared through the collective efforts of all authors. Conceptualization, A.S.A. and M.A.J.; Writing—original draft, A.S.A. and M.A.J.; Writing—review and editing, A.S.A. and M.A.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was funded by Institutional Fund Projects under grant no. IFPIP: 917-611-1443. The authors gratefully acknowledge technical and financial support from the Ministry of Education and King Abdulaziz University, DSR, Jeddah, Saudi Arabia.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data presented in this study are available in the article.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Wang, F.Y.; Lin, Y.; Ioannou, P.A.; Vlacic, L.; Liu, X.; Eskandarian, A.; Lv, Y.; Na, X.; Cebon, D.; Ma, J.; et al. Transportation 5.0: The DAO to Safe, Secure, and Sustainable Intelligent Transportation Systems. IEEE Trans. Intell. Transp. Syst. 2023, 24, 10262–10278. [Google Scholar] [CrossRef]
  2. Chen, J.; Zhang, Y.; Teng, S.; Chen, Y.; Zhang, H.; Wang, F.Y. ACP-Based Energy-Efficient Schemes for Sustainable Intelligent Transportation Systems. IEEE Trans. Intell. Veh. 2023, 8, 3224–3227. [Google Scholar] [CrossRef]
  3. Sun, Y.; Hu, Y.; Zhang, H.; Chen, H.; Wang, F.Y. A Parallel Emission Regulatory Framework for Intelligent Transportation Systems and Smart Cities. IEEE Trans. Intell. Veh. 2023, 8, 1017–1020. [Google Scholar] [CrossRef]
  4. Gong, T.; Zhu, L.; Yu, F.R.; Tang, T. Edge Intelligence in Intelligent Transportation Systems: A Survey. IEEE Trans. Intell. Transp. Syst. 2023, 24, 8919–8944. [Google Scholar] [CrossRef]
  5. Moso, J.C.; Cormier, S.; de Runz, C.; Fouchal, H.; Wandeto, J.M. Streaming-Based Anomaly Detection in ITS Messages. Appl. Sci. 2023, 13, 7313. [Google Scholar] [CrossRef]
  6. Javed, M.A.; Zeadally, S.; Hamida, E.B. Data analytics for Cooperative Intelligent Transport Systems. Veh. Commun. 2019, 15, 63–72. [Google Scholar] [CrossRef]
  7. Falahatraftar, F.; Pierre, S.; Chamberland, S. An Intelligent Congestion Avoidance Mechanism Based on Generalized Regression Neural Network for Heterogeneous Vehicular Networks. IEEE Trans. Intell. Veh. 2023, 8, 3106–3118. [Google Scholar] [CrossRef]
  8. Hosseini, M.; Ghazizadeh, R. Stackelberg Game-Based Deployment Design and Radio Resource Allocation in Coordinated UAVs-Assisted Vehicular Communication Networks. IEEE Trans. Veh. Technol. 2023, 72, 1196–1210. [Google Scholar] [CrossRef]
  9. Al-Essa, R.I.; Al-Suhail, G.A. AFB-GPSR: Adaptive Beaconing Strategy Based on Fuzzy Logic Scheme for Geographical Routing in a Mobile Ad Hoc Network (MANET). Computation 2023, 11, 174. [Google Scholar] [CrossRef]
  10. Hamdi, A.M.A.; Hussain, F.K.; Hussain, O.K. Task offloading in vehicular fog computing: State-of-the-art and open issues. Future Gener. Comput. Syst. 2022, 133, 201–212. [Google Scholar] [CrossRef]
  11. Li, L.; Fan, P. Latency and Task Loss Probability for NOMA Assisted MEC in Mobility-Aware Vehicular Networks. IEEE Trans. Veh. Technol. 2023, 72, 6891–6895. [Google Scholar] [CrossRef]
  12. Hui, Y.; Huang, Y.; Li, C.; Cheng, N.; Zhao, P.; Chen, R.; Luan, T.H. On-Demand Self-Media Data Trading in Heterogeneous Vehicular Networks. IEEE Trans. Veh. Technol. 2023, 72, 11787–11799. [Google Scholar] [CrossRef]
  13. Ren, Q.; Liu, K.; Zhang, L. Multi-objective optimization for task offloading based on network calculus in fog environments. Digit. Commun. Netw. 2022, 8, 825–833. [Google Scholar] [CrossRef]
  14. Hamdi, A.; Hussain, F.K.; Hussain, O.K. iVFC: An Intelligent Framework for Task Offloading in Vehicular Fog Computing. Available online: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4465948 (accessed on 30 November 2023).
  15. Geng, N.; Bai, Q.; Liu, C.; Lan, T.; Aggarwal, V.; Yang, Y.; Xu, M. A Reinforcement Learning Framework for Vehicular Network Routing Under Peak and Average Constraints. IEEE Trans. Veh. Technol. 2023, 72, 6753–6764. [Google Scholar] [CrossRef]
  16. Wei, Z.; Li, B.; Zhang, R.; Cheng, X.; Yang, L. Dynamic Many-to-Many Task Offloading in Vehicular Fog Computing: A Multi-Agent DRL Approach. In Proceedings of the GLOBECOM 2022—2022 IEEE Global Communications Conference, Rio de Janeiro, Brazil, 4–8 December 2022; pp. 6301–6306. [Google Scholar] [CrossRef]
  17. Gao, Z.; Yang, L.; Dai, Y. Fast Adaptive Task Offloading and Resource Allocation via Multiagent Reinforcement Learning in Heterogeneous Vehicular Fog Computing. IEEE Internet Things J. 2023, 10, 6818–6835. [Google Scholar] [CrossRef]
  18. Giordani, M.; Polese, M.; Mezzavilla, M.; Rangan, S.; Zorzi, M. Toward 6G Networks: Use Cases and Technologies. IEEE Commun. Mag. 2020, 58, 55–61. [Google Scholar] [CrossRef]
  19. Saad, W.; Bennis, M.; Chen, M. A Vision of 6G Wireless Systems: Applications, Trends, Technologies, and Open Research Problems. IEEE Netw. 2020, 34, 134–142. [Google Scholar] [CrossRef]
  20. Singh, J.; Singh, P.; Hedabou, M.; Kumar, N. An Efficient Machine Learning-Based Resource Allocation Scheme for SDN-Enabled Fog Computing Environment. IEEE Trans. Veh. Technol. 2023, 72, 8004–8017. [Google Scholar] [CrossRef]
  21. Tong, S.; Liu, Y.; Chang, X.; Mišić, J.; Zhang, Z. Joint Task Offloading and Resource Allocation: A Historical Cumulative Contribution Based Collaborative Fog Computing Model. IEEE Trans. Veh. Technol. 2023, 72, 2202–2215. [Google Scholar] [CrossRef]
  22. Ning, Z.; Huang, J.; Wang, X. Vehicular Fog Computing: Enabling Real-Time Traffic Management for Smart Cities. IEEE Wirel. Commun. 2019, 26, 87–93. [Google Scholar] [CrossRef]
  23. Wang, K.; Tan, Y.; Shao, Z.; Ci, S.; Yang, Y. Learning-Based Task Offloading for Delay-Sensitive Applications in Dynamic Fog Networks. IEEE Trans. Veh. Technol. 2019, 68, 11399–11403. [Google Scholar] [CrossRef]
  24. Zhou, Z.; Liao, H.; Zhao, X.; Ai, B.; Guizani, M. Reliable Task Offloading for Vehicular Fog Computing Under Information Asymmetry and Information Uncertainty. IEEE Trans. Veh. Technol. 2019, 68, 8322–8335. [Google Scholar] [CrossRef]
  25. Zhou, Z.; Liu, P.; Feng, J.; Zhang, Y.; Mumtaz, S.; Rodriguez, J. Computation Resource Allocation and Task Assignment Optimization in Vehicular Fog Computing: A Contract-Matching Approach. IEEE Trans. Veh. Technol. 2019, 68, 3113–3125. [Google Scholar] [CrossRef]
  26. Zhou, Z.; Liao, H.; Wang, X.; Mumtaz, S.; Rodriguez, J. When Vehicular Fog Computing Meets Autonomous Driving: Computational Resource Management and Task Offloading. IEEE Netw. 2020, 34, 70–76. [Google Scholar] [CrossRef]
  27. Zhao, J.; Kong, M.; Li, Q.; Sun, X. Contract-Based Computing Resource Management via Deep Reinforcement Learning in Vehicular Fog Computing. IEEE Access 2020, 8, 3319–3329. [Google Scholar] [CrossRef]
  28. Li, H.; Li, X.; Wang, W. Joint optimization of computation cost and delay for task offloading in vehicular fog networks. Trans. Emerg. Telecommun. Technol. 2020, 31, e3818. [Google Scholar] [CrossRef]
  29. Wu, Q.; Ge, H.; Liu, H.; Fan, Q.; Li, Z.; Wang, Z. A Task Offloading Scheme in Vehicular Fog and Cloud Computing System. IEEE Access 2020, 8, 1173–1184. [Google Scholar] [CrossRef]
  30. Tian, S.; Deng, X.; Chen, P.; Pei, T.; Oh, S.; Xue, W. A dynamic task offloading algorithm based on greedy matching in vehicle network. Ad Hoc Netw. 2021, 123, 102639. [Google Scholar] [CrossRef]
  31. Elhoseny, M.; El-Hasnony, I.M.; Tarek, Z. Intelligent energy aware optimization protocol for vehicular adhoc networks. Sci. Rep. 2023, 13, 9019. [Google Scholar] [CrossRef]
  32. Naeem, A.; Rizwan, M.; Alsubai, S.; Almadhor, A.; Akhtaruzzaman, M.; Islam, S.; Rahman, H. Enhanced clustering based routing protocol in vehicular ad-hoc networks. IET Electr. Syst. Transp. 2023, 13, e12069. [Google Scholar] [CrossRef]
  33. Karabulut, M.A.; Shah, A.F.M.S.; Ilhan, H.; Pathan, A.S.K.; Atiquzzaman, M. Inspecting VANET with Various Critical Aspects—A Systematic Review. Ad Hoc Netw. 2023, 150, 103281. [Google Scholar] [CrossRef]
  34. Kumar, S.; Raw, R.S.; Bansal, A.; Singh, P. UF-GPSR: Modified geographical routing protocol for flying ad-hoc networks. Trans. Emerg. Telecommun. Technol. 2023, 34, e4813. [Google Scholar] [CrossRef]
  35. Arafat, M.Y.; Moh, S. A Q-Learning-Based Topology-Aware Routing Protocol for Flying Ad Hoc Networks. IEEE Internet Things J. 2022, 9, 1985–2000. [Google Scholar] [CrossRef]
  36. Zhang, J.; Dai, L.; He, Z.; Jin, S.; Li, X. Performance Analysis of Mixed-ADC Massive MIMO Systems Over Rician Fading Channels. IEEE J. Sel. Areas Commun. 2017, 35, 1327–1338. [Google Scholar] [CrossRef]
  37. Zhang, Z.; Xu, K.; Gan, C. The Vehicle-to-Vehicle Link Duration Scheme Using Platoon-Optimized Clustering Algorithm. IEEE Access 2019, 7, 78584–78596. [Google Scholar] [CrossRef]
  38. Mohamed, Z.E. Using the artificial neural networks for prediction and validating solar radiation. J. Egypt. Math. Soc. 2019, 27, 47. [Google Scholar] [CrossRef]
  39. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  40. Zappone, A.; Di Renzo, M.; Debbah, M. Wireless networks design in the era of deep learning: Model-based, AI-based, or both? IEEE Trans. Commun. 2019, 67, 7331–7376. [Google Scholar] [CrossRef]
  41. Kingma, D.P.; Ba, J. Adam: A Method for Stochastic Optimization. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, 7–9 May 2015. [Google Scholar]
  42. Alvi, A.N.; Javed, M.A.; Hasanat, M.H.A.; Khan, M.B.; Saudagar, A.K.J.; Alkhathami, M.; Farooq, U. Intelligent Task Offloading in Fog Computing Based Vehicular Networks. Appl. Sci. 2022, 12, 4521. [Google Scholar] [CrossRef]
Figure 1. Illustration of the considered system model.
Figure 2. Flow chart of the proposed technique.
Figure 3. Internal structure of an ANN.
Figure 4. One-second-ahead prediction of link connectivity time.
Figure 5. Two-seconds-ahead prediction of link connectivity time.
Figure 6. Three-seconds-ahead prediction of link connectivity time.
Figure 7. One-second-ahead prediction of computation availability.
Figure 8. Two-seconds-ahead prediction of computation availability.
Figure 9. Three-seconds-ahead prediction of computation availability.
Figure 10. Task offloading delay with different numbers of task-generating vehicles.
Figure 11. Task offloading delay with and without CQI-based offloading.
Figure 12. Task offloading delay with and without ANN prediction.
Table 1. Recent work conducted in vehicular fog computing.
Ref. | Scenario | Proposed Technique | Design Objectives
[23] | Vehicular fog computing | Online learning-based offloading | 1. Resources are shared between fog nodes while maintaining lower computation costs. 2. Task allocation decisions and spectrum scheduling are used to minimize offloading delay.
[24] | Vehicular fog computing | Convex–concave optimization and price-matching task offloading | 1. The efficient assignment of servers is achieved via the convex–concave optimization approach while maximizing the anticipated utility of the operator. 2. The total delay of the network is minimized using a price-matching solution.
[25] | Vehicular fog computing | Contract theoretical modeling and stable matching algorithm | 1. Based on contract theoretical modeling, a mechanism is proposed to maximize the projected utility of the BS. 2. A stable matching algorithm is used for task assignment amongst user equipment and vehicles.
[26] | Vehicular fog computing | Contract theory and learning-based matching | 1. A contract-theory-based computing resource management mechanism is proposed. 2. A learning-based matching task offloading method is also proposed.
[27] | Multi-vehicular fog computing | Deep reinforcement learning | 1. Resources are allocated while reducing the complexity of the system. 2. A queuing model is proposed to solve the collision problem faced by simultaneous offloading.
[28] | Multi-vehicular fog computing | Lagrange dual approach | 1. A low-complexity algorithm is proposed to find the optimized values of the offloading ratios, which vehicles to select and the consumption of the system.
[29] | Vehicular fog and cloud computing | Semi-Markov decision process | 1. The task offloading problem is presented as a semi-Markov decision process. 2. A value iteration algorithm is proposed to maximize the total long-term reward of the system.
[30] | Vehicular fog computing | Stable matching | 1. Vehicle mobility is considered while offloading. 2. The Kuhn–Munkres algorithm is used to find a stable match.
Table 2. Details of the dataset generated to train the ANN model.
ANN Model | Features Used | Data Range
Computation Available Time | Vehicle speed | 30–120 km/h
 | Distance between source and computing vehicles | 5–100 m
 | Channel gain | Rayleigh distribution
Link Connectivity Time | Number of tasks | Poisson distribution
 | Task size | 0.5–16 MB
Table 3. Simulation parameters and values used.
Parameter | Value
Task size | 0.5–16 MB
C b of vehicles' CPUs | 500 cycles/bit
C b of fog node's CPU | 2000 cycles/bit
Frequency of vehicles' CPUs | 2 × 10⁹ cycles/s
Frequency of fog node's CPU | 8 × 10⁹ cycles/s
Table 4. Error scores of link connectivity model.
Error | One-Second-Ahead Prediction | Two-Seconds-Ahead Prediction | Three-Seconds-Ahead Prediction
RMSE | 3.9839 | 6.0147 | 7.4742
MABE | 2.3066 | 3.2745 | 4.4182
R 2 | 0.9838 | 0.9631 | 0.9430
MAPE | 5.7767 | 8.8480 | 11.4301
Table 5. Error scores of the computation availability model.
Error | One-Second-Ahead Prediction | Two-Seconds-Ahead Prediction | Three-Seconds-Ahead Prediction
RMSE | 1.5859 | 2.5756 | 3.2739
MABE | 0.7570 | 1.2707 | 1.6789
R 2 | 0.9746 | 0.9331 | 0.8919
MAPE | 8.8964 | 14.8086 | 18.3649
