Article

Joint Incentive Mechanism Design and Energy-Efficient Resource Allocation for Federated Learning in UAV-Assisted Internet of Vehicles

1
Beijing Key Laboratory of Work Safety Intelligent Monitoring, Beijing University of Posts and Telecommunications, Beijing 100876, China
2
School of Network Security, Jinling Institute of Technology, Nanjing 211169, China
3
Department of Computing, Glasgow Caledonian University, Glasgow G4 0BA, Scotland, UK
*
Author to whom correspondence should be addressed.
Drones 2024, 8(3), 82; https://doi.org/10.3390/drones8030082
Submission received: 30 December 2023 / Revised: 21 February 2024 / Accepted: 23 February 2024 / Published: 26 February 2024

Abstract

With the increasing application development demand of task publishers (e.g., automobile enterprises) in the Internet of Vehicles (IoV), federated learning (FL) enables vehicle users (VUs) to conduct local application training without disclosing their data. However, VUs' intermittent connectivity, low proactivity, and limited resources are inevitable challenges in the FL process. In this paper, we propose a UAV-assisted FL framework in the context of the IoV, which involves an incentive stage and a training stage. UAVs serve as central servers that incentivize VUs, manage VUs' contributed resources, and perform model aggregation, ensuring communication efficiency and mobility support in FL. The numerical results show that, compared with the baseline algorithms, the proposed algorithm reduces energy consumption by 50.3% and improves model convergence speed by 30.6%.

1. Introduction

With the deployment of the Internet of Things (IoT) in more fields, there is an increasing number of connected devices working together to jointly serve various applications. These devices have sensing and computing capabilities, which support them in collecting raw data in the environment for further processing [1]. As an important extension of the IoT, the Internet of Vehicles (IoV) has established a wide range of vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) connections [2,3]. Vehicle users (VUs) are equipped with more powerful integrated sensors to capture information in the road environment.
The extensive data collected by VUs have become a new driving force for emerging applications [4], such as autopilot, location management, traffic prediction, etc. However, traditional cloud computing requires VUs to transfer raw data to the cloud, which brings the challenges of a high communication overhead, a high transmission delay, and privacy disclosure [5]. Federated learning (FL) [6] has been proposed as a distributed machine learning architecture to push the computing process to the devices to protect users’ data privacy.
However, communication inefficiency remains a key bottleneck in FL. Vehicles are constantly moving, leading to a dynamic, changing network topology. This mobility can cause frequent changes in the availability of communication links between vehicles and roadside units (RSUs), affecting the reliability and stability of FL. In addition, due to the mobility of vehicles, connections between them may be intermittent, which challenges the continuous communication needed for model updates and coordination. Inspired by recent advancements in unmanned aerial vehicle (UAV)-assisted cellular communications, we propose a UAV-assisted FL framework for the IoV network, which brings several advantages:
  • Edge computing capabilities: UAVs can serve as edge servers for federated learning, performing computation and learning tasks locally. This reduces the need for extensive data transfers and minimizes latency. Edge computing on UAVs enhances the scalability and efficiency of federated learning in distributed environments.
  • Communication efficiency and mobility enhancement: By performing model aggregation locally on UAVs, the need to transmit large amounts of raw data to a central server is greatly reduced. Moreover, UAVs can cover a wide geographic area efficiently, which addresses the training challenges associated with the high dynamicity of vehicles in the IoV.
  • Privacy-preserving surveillance: Federated learning allows for model training on decentralized data, addressing privacy concerns. UAVs can collect data or models locally without transmitting sensitive information to a central server. Privacy-preserving techniques, such as differential privacy, can be integrated into federated learning to further protect individual privacy.
  • Network resilience: UAVs can operate in areas with limited network infrastructure or during network failures. Federated learning’s decentralized nature makes it resilient to intermittent connectivity, aligning well with the mobility and varying connectivity conditions in the IoV.
The proposed framework involves two stages: the incentive stage and the training stage. In the incentive stage, a contract-based incentive mechanism is designed to encourage VUs to proactively contribute their data. In the training stage, an energy-efficient resource allocation algorithm is designed between the UAVs and the VUs to manage the process of federated training, which allocates VUs’ computing and communication resources according to the result of the contract. The contributions of this article are as follows:
  • We propose a UAV-assisted FL framework in the IoV. Each UAV acts as a central server to provide the model aggregation or model parameter relay in the sky, which increases the reliability of FL under the uncertain and high-mobility conditions in the IoV network.
  • We design a contract-based incentive mechanism between UAVs and VUs. UAVs offer contracts to assist VUs in responding to task publishers' service requests and participating in federated training. Moreover, the contract mechanism determines each VU's contributed local data based on its specific type in the presence of information asymmetry. VUs receive corresponding revenues according to their data contributions.
  • We design an energy-efficient resource allocation algorithm to minimize the total energy consumption in the training stage. According to the willingness and types specified in the contracts, the UAV manages VUs' computing and communication resources to achieve energy-efficient federated training.
The rest of this paper is organized as follows. We review the relevant literature in Section 2. Then, we present the UAV-assisted FL framework in Section 3. We present the design of the contract incentive mechanism in Section 4 and the energy-efficient resource allocation algorithm in Section 5. Section 6 provides the performance of the proposed system. Finally, we give the discussion and conclusions of the paper in Section 7 and Section 8.

2. Literature Review

In this section, we review the most relevant achievements and milestones in incentive mechanisms and resource allocation for FL.

2.1. Incentive Mechanism in FL

Incentives are an economic concept often used in crowdsensing [7,8], edge computing [9], and other fields to encourage members to participate in certain tasks. Classified by design method, most existing incentive mechanisms adopt the Stackelberg game [10,11,12,13,14], contract theory [15,16,17,18,19,20], or auctions [21,22,23,24,25].
In the Stackelberg game, both sides choose their own strategies according to the possible strategies of the other side to ensure the maximization of their own interests. Y. Sarikaya et al. [10] established a Stackelberg game between the terminal devices and the central server. The central server allocates revenue according to the CPU power consumption of the devices, and both parties optimize their utility functions individually.
Contract theory maximizes utility functions by designing contract optimization problems between the two parties. A contract optimization problem is designed between mobile users and mobile application providers in [17], and an iterative contract algorithm is designed to maximize the utility function of all agents.
Like contract theory, auctions can address the problem of information asymmetry. Auctions are divided into unilateral auctions and double auctions. A double auction can protect the interests of both buyers and sellers through incentive mechanisms and maximize the total welfare of the whole market. Work [21] used a multi-dimensional procurement auction to select multiple edge nodes to participate in FL and adopted a Nash equilibrium strategy for the edge nodes. A matched double auction is used to model the interaction between edge computing servers and terminal devices in [22]: a device requests computing services from a server with a bid, and the server asks a price to sell its services.

2.2. Resource Allocation in FL

At present, there are a large number of resource allocation studies on the energy-saving optimization of FL in wireless communication scenarios [26,27,28,29,30]. Here, we focus on resource allocation in the IoV scenario.
In the IoV, research on resource allocation mostly aims at high performance or high energy efficiency. Research that takes high performance as the optimization goal focuses on performance indicators such as delay and transmission rate. L. Feng et al. jointly performed computation and URLLC resource allocation in [31] to ensure the stability of the C-V2X network. Works [32,33] used deep reinforcement learning to allocate resources: S. Bhadauria et al. allocated communication resources in [32] to ensure a low transmission delay for users, and H. Ye et al. allocated V2V resources in [33] to seek the optimal transmission bandwidth and power according to the interaction between V2V communication link agents and the environment.
The research on resource allocation with high energy efficiency as the optimization goal focuses more on energy consumption. Work [34] proposed a cellular network uplink energy-saving transmission scheme based on V2X relay communication. Work [35] studied the resource management problem of maximizing vehicle energy efficiency and proposed a new resource allocation scheme based on Lyapunov theory.
The above papers consider either high performance or high energy efficiency; few studies combine the two goals into a joint optimization. Moreover, few studies simultaneously optimize both the data quantity and the resources of VUs.

3. Proposed FL Framework in UAV-Assisted IoV

We consider an application scenario that consists of task publishers, a fleet of UAVs, and VUs, as illustrated in Figure 1.
Task publishers are automobile enterprises that want to train intelligent road applications, such as automatic driving applications, road landmark recognition applications, traffic management applications, etc. They employ VUs to utilize local data to train intelligent applications. The set of task publishers is denoted as $A = \{a_1, a_2, \ldots, a_K\}$, where K is the number of publishers.
Due to its limited coverage, a UAV often serves as the wireless relay for a specific area of the road, and it carries a specific task from the task publishers. The UAV incentivizes VUs to participate and organizes VUs to train models within its coverage area. It is responsible for aggregating the VUs' models and distributing new models. Assuming that there are M UAVs, their set is defined as $S = \{s_1, s_2, \ldots, s_M\}$. The UAV $s_i$ at position $(x_{s,i}, y_{s,i})$ has a coverage radius $R_{s,i}$.
VUs use their collected data for local training and only need to upload the trained model parameters to the UAVs. Assume that N VUs are selected to participate in training in one area. The set of VUs is denoted as $C = \{c_1, c_2, \ldots, c_N\}$, and the location of $c_j$ as $(x_{c,j}, y_{c,j})$.
Our proposed UAV-assisted FL framework in the IoV involves an incentive stage and a training stage, as shown in Figure 2.
During the incentive stage, the UAV is responsible for signing contracts with VUs participating in FL within the coverage area. The VUs are stimulated to select contracts that align with their respective types and determine the data quantity needed for local training. In the training stage, UAVs organize the VUs for FL in their areas. Each VU conducts local training using the needed data quantity and allocated computing resources. Subsequently, the VUs upload the local model parameters to the UAV. The UAV aggregates local models of all participating VUs and updates the global model. The global model is then distributed to the VUs again, and the training process is iterated until the global model converges. It should be noted that if a VU exceeds the coverage range of the UAV in the current area during the training process, the UAV can obtain the model parameters of the VU through mobile vehicle tracking or model feedback. Each UAV submits the final global model to the corresponding task publisher.
Notations used in this paper are listed in Table 1.

4. Incentive Mechanism for Contracts of Vehicles

As mentioned earlier, we introduce an incentive stage to boost the enthusiasm of VUs to respond to task publishers and to finalize the amount of data that different types of VUs need to contribute during training. Due to the information asymmetry between the task publishers and the VUs, the UAVs design specific contracts for different types of VUs with different orders of magnitude of data to improve their profits. The UAVs provide different reward packages from the publishers to the VUs based on VU type to reward them for providing local data.
For VUs with different amounts of data, the UAV provides a contract $(R_j, q_j)$ to VU $c_j$, where $q_j$ refers to the amount of data contributed by a type-j VU, $q_j \leq D_j$, and $R_j$ refers to the corresponding reward for that VU. Assume that $\rho_j$ denotes the proportion of type-j VUs, which satisfies $\sum_{j=1}^{J} \rho_j = 1$, and x denotes the number of participating VUs. Focusing on one task publisher, the utility function (i.e., the publisher's revenue from the achieved models) can be modeled as follows:
$$U_{TP} = \sum_{j=1}^{J} \rho_j x \left[\omega \log(1 + b q_j) - R_j\right]$$
where $\omega$ is the transformation parameter from model performance to revenue, and b is a dynamic parameter. The log function captures the relationship between the data quantity and the performance of the model.
Correspondingly, the utility function $U_j$ for a VU of type-j is as follows:
$$U_j = \varepsilon_j R_j - c q_j$$
where $\varepsilon_j$ is the willingness of a type-j VU to participate in training, $\varepsilon_1 < \cdots < \varepsilon_j < \cdots < \varepsilon_J$, and c is the VU's unit cost for training on the data.
As such, the optimization problem can be expressed as follows:
$$\begin{aligned} \max_{R_j, q_j} \; & U_{TP} = \sum_{j=1}^{J} \rho_j x \left[\omega \log(1 + b q_j) - R_j\right] \\ \mathrm{s.t.} \; & \varepsilon_1 R_1 - c q_1 \geq 0 \\ & \varepsilon_j R_j - c q_j \geq \varepsilon_j R_{j-1} - c q_{j-1} \\ & \sum_{j=1}^{J} \rho_j x q_j \leq q_{max} \end{aligned}$$
where $q_{max}$ is the maximum total amount of data contributed by the VUs.

4.1. Contract Feasibility

For feasibility, each contract must satisfy the following conditions:
Definition 1.
Individual rationality (IR): Each VU only participates in the federated learning task when its utility is not less than zero, i.e.,
$$\varepsilon_j R_j - c q_j \geq 0$$
Definition 2.
Incentive compatibility (IC): Each VU of type-j only chooses the contract designed for its type, i.e., $(R_j, q_j)$ instead of any other contract $(R_z, q_z)$, to maximize its utility, i.e.,
$$\varepsilon_j R_j - c q_j \geq \varepsilon_j R_z - c q_z$$
Lemma 1.
Monotonicity: For contracts $(R_j, q_j)$ and $(R_z, q_z)$, $R_j \geq R_z$ if and only if $\varepsilon_j \geq \varepsilon_z$, $\forall j \neq z$, $j, z \in \{1, \ldots, J\}$.
Proof. 
Based on the IC constraints of the type-j VU and the type-z VU, we can see that
$$\varepsilon_j R_j - c q_j \geq \varepsilon_j R_z - c q_z$$
$$\varepsilon_z R_z - c q_z \geq \varepsilon_z R_j - c q_j$$
We first prove sufficiency: adding Equations (6) and (7) and rearranging, we obtain
$$\varepsilon_j R_j + \varepsilon_z R_z \geq \varepsilon_j R_z + \varepsilon_z R_j$$
$$\varepsilon_j R_j - \varepsilon_z R_j \geq \varepsilon_j R_z - \varepsilon_z R_z$$
Combining Equations (8) and (9), we obtain $R_j (\varepsilon_j - \varepsilon_z) \geq R_z (\varepsilon_j - \varepsilon_z)$. Note that $\varepsilon_j - \varepsilon_z \geq 0$; thus, it can be proven that $R_j \geq R_z$. Similarly, for necessity, we can obtain $\varepsilon_j (R_j - R_z) \geq \varepsilon_z (R_j - R_z)$. If $R_j - R_z \geq 0$, it follows that $\varepsilon_j \geq \varepsilon_z$. As such, Lemma 1 is proven.    □
Lemma 2.
If the IR constraint of type-1 is satisfied, the other IR constraints will also hold.
According to the IC constraint, $\forall j \in \{2, \ldots, J\}$, we can obtain
$$\varepsilon_j R_j - c q_j \geq \varepsilon_j R_1 - c q_1$$
and, because $\varepsilon_1 < \cdots < \varepsilon_j < \cdots < \varepsilon_J$, we can obtain
$$\varepsilon_j R_1 - c q_1 \geq \varepsilon_1 R_1 - c q_1$$
Therefore, by combining Equations (10) and (11), we can obtain
$$\varepsilon_j R_j - c q_j \geq \varepsilon_1 R_1 - c q_1 \geq 0$$
Equation (12) indicates that when the IR constraint of the type-1 VUs is satisfied, the other IR constraints automatically hold. Therefore, the IR constraints can be reduced to the IR condition of the type-1 VUs.
Lemma 3.
According to the monotonicity in Lemma 1, the IC constraints can be simplified to local downward incentive constraints (LDICs), expressed as follows:
$$\varepsilon_j R_j - c q_j \geq \varepsilon_j R_{j-1} - c q_{j-1}$$
Proof. 
The IC constraint between type-j and type-z, $z \in \{1, \ldots, j-1\}$, is defined as a downward IC (DIC), expressed as $\varepsilon_j R_j - c q_j \geq \varepsilon_j R_z - c q_z$.
First, we prove that the DICs can be reduced to the DICs between adjacent types, which are called LDICs. Given that $\varepsilon_{j-1} < \varepsilon_j < \varepsilon_{j+1}$, $\forall j \in \{2, \ldots, J-1\}$, we can obtain
$$\varepsilon_{j+1} R_{j+1} - c q_{j+1} \geq \varepsilon_{j+1} R_j - c q_j$$
$$\varepsilon_j R_j - c q_j \geq \varepsilon_j R_{j-1} - c q_{j-1}$$
By utilizing the monotonicity, i.e., $R_j \geq R_z$ if and only if $\varepsilon_j \geq \varepsilon_z$, $j \neq z$, $j, z \in \{1, \ldots, J\}$, we can obtain
$$\varepsilon_{j+1} (R_j - R_{j-1}) \geq \varepsilon_j (R_j - R_{j-1})$$
Equation (15) can be transformed to obtain the following:
$$\varepsilon_j (R_j - R_{j-1}) \geq c (q_j - q_{j-1})$$
Combining Equations (16) and (17), we can obtain
$$\varepsilon_{j+1} R_j - c q_j \geq \varepsilon_{j+1} R_{j-1} - c q_{j-1}$$
Combining Equations (14) and (18), we can obtain
$$\varepsilon_{j+1} R_{j+1} - c q_{j+1} \geq \varepsilon_{j+1} R_{j-1} - c q_{j-1}$$
The above formula can be generalized to prove that all the DICs can be extended down to type-1, thus obtaining
$$\varepsilon_{j+1} R_{j+1} - c q_{j+1} \geq \varepsilon_{j+1} R_{j-1} - c q_{j-1} \geq \cdots \geq \varepsilon_{j+1} R_1 - c q_1, \quad z \in \{1, \ldots, j-1\}$$
Similarly, we can prove that all the UICs hold up to type-J, expressed as follows:
$$\varepsilon_{j-1} R_{j-1} - c q_{j-1} \geq \varepsilon_{j-1} R_{j+1} - c q_{j+1} \geq \cdots \geq \varepsilon_{j-1} R_J - c q_J, \quad z \in \{j+1, \ldots, J\}$$
   □
Proof. 
The IC constraint between type-j and type-z, $z \in \{j+1, \ldots, J\}$, is defined as an upward IC (UIC), expressed as $\varepsilon_j R_j - c q_j \geq \varepsilon_j R_z - c q_z$.
First, we prove that the UICs can be reduced to the UICs between adjacent types, which are called LUICs. Given that $\varepsilon_{j-1} < \varepsilon_j < \varepsilon_{j+1}$, $\forall j \in \{2, \ldots, J-1\}$, we can obtain
$$\varepsilon_{j-1} R_{j-1} - c q_{j-1} \geq \varepsilon_{j-1} R_j - c q_j$$
$$\varepsilon_j R_j - c q_j \geq \varepsilon_j R_{j+1} - c q_{j+1}$$
By utilizing the monotonicity, i.e., $R_j \geq R_z$ if and only if $\varepsilon_j \geq \varepsilon_z$, $j \neq z$, $j, z \in \{1, \ldots, J\}$, we can obtain
$$\varepsilon_j (R_{j+1} - R_j) \geq \varepsilon_{j-1} (R_{j+1} - R_j)$$
Equation (23) can be transformed to obtain the following:
$$\varepsilon_j (R_{j+1} - R_j) \leq c (q_{j+1} - q_j)$$
Combining Equations (24) and (25), we can obtain
$$\varepsilon_{j-1} R_j - c q_j \geq \varepsilon_{j-1} R_{j+1} - c q_{j+1}$$
Combining Equations (22) and (26), we can obtain
$$\varepsilon_{j-1} R_{j-1} - c q_{j-1} \geq \varepsilon_{j-1} R_{j+1} - c q_{j+1}$$
Hence, with the LUICs, all the UICs hold and can be reduced, i.e., Equation (27) can be extended to Equation (21).    □

4.2. Optimal Contract

In order to derive the optimal contract in Equation (3), we first solve the relaxed problem of Equation (3) without the monotonicity constraint and then verify whether the obtained solution satisfies the monotonicity condition. By iterating over the IC and IR conditions, we can obtain the optimal reward, expressed as
$$R_j^* = \begin{cases} \frac{c q_1}{\varepsilon_1}, & j = 1 \\ R_{j-1} + \frac{c}{\varepsilon_j} (q_j - q_{j-1}), & j = 2, \ldots, J \end{cases}$$
According to [7], the optimal rewards can be expressed as
$$R_T^* = \frac{c q_1}{\varepsilon_1} + \sum_{t=1}^{T} \Delta_t$$
where $\Delta_t = \frac{c}{\varepsilon_t} (q_t - q_{t-1})$ and $\Delta_1 = 0$. By plugging $R_j^*$ into $\sum_{j=1}^{J} \rho_j x R_j$, we can obtain
$$\sum_{j=1}^{J} \rho_j x R_j = \sum_{j=1}^{J} \left( \rho_j x \frac{c q_j}{\varepsilon_j} + \sum_{t=j+1}^{J} \rho_t x \Lambda_j \right)$$
where $\Lambda_j = \frac{c q_j}{\varepsilon_j} - \frac{c q_j}{\varepsilon_{j+1}}$ and $\Lambda_J = 0$.
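To make the reward recursion in Equation (28) concrete, the sketch below computes the optimal rewards for a small set of VU types; the numerical values are illustrative, not from the paper:

```python
def optimal_rewards(q, eps, c):
    """Optimal rewards from Equation (28): R_1* = c*q_1/eps_1, and
    R_j* = R_{j-1}* + (c/eps_j)*(q_j - q_{j-1}) for j >= 2.
    q and eps are lists ordered by type (ascending willingness)."""
    rewards = [c * q[0] / eps[0]]
    for j in range(1, len(q)):
        rewards.append(rewards[-1] + (c / eps[j]) * (q[j] - q[j - 1]))
    return rewards

# Three illustrative VU types with increasing data quantity and willingness.
R = optimal_rewards(q=[10.0, 20.0, 40.0], eps=[1.0, 2.0, 4.0], c=0.5)
```

Here the type-1 reward exactly offsets the training cost (the binding IR constraint), and each higher type earns an information rent on top of the previous type's reward, so the rewards are monotone in type, matching Lemma 1.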
By plugging Equation (30) into the problem in Equation (3) and eliminating all $R_j$, we can rewrite Equation (3) as
$$\begin{aligned} \max_{q_j} \; & U_{TP} = \sum_{j=1}^{J} \rho_j x \omega \log(1 + b q_j) - \sum_{j=1}^{J} \left( \rho_j x \frac{c q_j}{\varepsilon_j} + \sum_{t=j+1}^{J} \rho_t x \Lambda_j \right) \\ \mathrm{s.t.} \; & \sum_{j=1}^{J} \rho_j x q_j \leq q_{max} \end{aligned}$$
By taking the second derivative of $U_{TP}$ with respect to $q_j$, we obtain $\partial^2 U_{TP} / \partial q_j^2 < 0$, so $U_{TP}$ is a concave function. Maximizing a concave function subject to a linear constraint is a convex optimization problem. Therefore, we can use convex optimization toolkits such as CVX to solve for the optimal data quantity $q_j^*$ and the corresponding reward $R_j^*$.
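The paper solves the rewritten problem with CVX. As a dependency-free alternative, the sketch below exploits the KKT stationarity condition directly: after eliminating the rewards, the objective is concave and separable in each q_j, so each q_j has a closed form in the budget multiplier mu, which can be found by bisection. All parameter values are illustrative.

```python
def optimal_data(rho, eps, x, omega, b, c, q_max, iters=100):
    """Sketch: maximize sum_j rho_j*x*omega*log(1+b*q_j) minus the linear
    reward cost (Equation (30)), s.t. sum_j rho_j*x*q_j <= q_max, via
    bisection on the budget multiplier mu."""
    J = len(rho)
    # Linear cost coefficient of q_j after eliminating R_j:
    # rho_j*x*c/eps_j + (c/eps_j - c/eps_{j+1}) * sum_{t>j} rho_t*x
    kappa = []
    for j in range(J):
        tail = sum(rho[t] * x for t in range(j + 1, J))
        step = (c / eps[j] - c / eps[j + 1]) if j + 1 < J else 0.0
        kappa.append(rho[j] * x * c / eps[j] + step * tail)

    def q_of(mu):
        # Stationarity: rho_j*x*omega*b/(1+b*q_j) = kappa_j + mu*rho_j*x
        return [max(0.0, rho[j] * x * omega / (kappa[j] + mu * rho[j] * x) - 1.0 / b)
                for j in range(J)]

    def used(q):
        return sum(rho[j] * x * q[j] for j in range(J))

    if used(q_of(0.0)) <= q_max:      # budget not binding
        return q_of(0.0)
    lo, hi = 0.0, 1.0
    while used(q_of(hi)) > q_max:     # find a multiplier that over-shrinks q
        hi *= 2.0
    for _ in range(iters):            # bisect to the binding budget
        mid = (lo + hi) / 2.0
        if used(q_of(mid)) > q_max:
            lo = mid
        else:
            hi = mid
    return q_of(hi)

q_star = optimal_data(rho=[0.5, 0.3, 0.2], eps=[1.0, 2.0, 4.0],
                      x=10, omega=5.0, b=0.1, c=0.2, q_max=100.0)
```

More willing types face a lower marginal cost coefficient, so the resulting data quantities increase with the VU type, consistent with the contract's monotonicity.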

5. Contract-Based Energy-Efficient Resource Allocation for UAV-Assisted FL

In the incentive stage, the UAVs have already incentivized VUs to sign the corresponding optimal contract based on their willingness, and the contract determines the VUs’ contributed data quantity q j for local training. During the training stage, the VUs utilize the results of the contract for local training. In this section, we design a resource allocation problem based on the contract’s willingness to minimize the VUs’ total energy consumption in training.
UAVs broadcast the applications to the VUs in their coverage, and the responding VUs are scheduled by the UAVs to form federations. The VUs use local data for training and upload the trained models to the current UAV. The UAV aggregates the models once all of them are received and then distributes the new model parameters. The whole process repeats for a number of rounds until convergence.
Assuming that the initial model is defined as $\omega(0)$, $c_j$ trains its local model on its assigned dataset $D_j(t)$ in each iteration t. If the local model of $c_j$ at timeslot t is denoted as $\omega_j(t)$, the local training process can be expressed as
$$\omega_j(t+1) = \omega_j(t) - \frac{\epsilon}{|b|} \nabla f(\omega_j(t), D_j(t))$$
where $\epsilon$ is the learning rate, $\epsilon \in [0, 1]$, $|b|$ is the batch size, and $f(\omega_j(t), D_j(t))$ is the loss function of $c_j$.
In each iteration t, the VUs upload their local models to the current UAV. The UAV uses the federated averaging algorithm (FedAvg) to aggregate all models. Assume the number of VUs involved is J, where $J \leq N$; then, the aggregation process can be expressed as
$$\omega(t) = \frac{1}{J} \sum_{j=1}^{J} \omega_j(t)$$
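A minimal sketch of one round of Equations (32) and (33), with models as plain Python lists and a toy quadratic loss; all values are illustrative:

```python
def local_step(w, grad_fn, lr, batch):
    """One local update per Equation (32): w <- w - (lr/|b|) * grad."""
    g = grad_fn(w, batch)
    return [wi - (lr / len(batch)) * gi for wi, gi in zip(w, g)]

def fedavg(local_models):
    """FedAvg aggregation per Equation (33): elementwise mean over J VUs."""
    J = len(local_models)
    return [sum(ws) / J for ws in zip(*local_models)]

# Toy quadratic loss: f(w) = 0.5 * sum_d ||w - d||^2, so grad_i = sum_d (w_i - d).
grad = lambda w, batch: [sum(wi - d for d in batch) for wi in w]
m1 = local_step([1.0, 2.0], grad, lr=0.5, batch=[0.0, 0.0])
m2 = local_step([3.0, 4.0], grad, lr=0.5, batch=[0.0, 0.0])
w_global = fedavg([m1, m2])
```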

5.1. System Models

During the FL process, the real-time communication model, computation model, and mobility model are formulated as follows.

5.1.1. Communication Model

The channel states of the VUs vary over time. Different from terrestrial channel models, the channel between a UAV and a VU is affected by line-of-sight (LoS) and non-line-of-sight (NLoS) propagation. We define the probability of having a LoS link between the UAV and the VU as P(LoS, t), which can be represented as in Equation (34):
$$P(\mathrm{LoS}, t) = \frac{1}{1 + a \exp\left(-b \left(\frac{180}{\pi} \theta(t) - a\right)\right)}$$
where a and b are determined by the environment and $\theta(t)$ is the elevation angle, which is equal to $\arctan(h(t)/r(t))$, where h(t) is the height of the UAV and r(t) is the horizontal distance between the UAV and the VU, as depicted in Figure 2. The probability increases with the elevation angle $\theta(t)$.
Assume that the path loss between the VU and the UAV is defined as $g_j(t)$; we can express it as in Equation (35):
$$g_j(t) = 20 \log\left(\frac{4 \pi f_c d(t)}{c}\right) + P(\mathrm{LoS}, t) \eta_{\mathrm{LoS}} + P(\mathrm{NLoS}, t) \eta_{\mathrm{NLoS}}$$
where $f_c$ is the carrier frequency, d(t) is the distance between the UAV and the VU, and c is the speed of light. $P(\mathrm{NLoS}, t)$ is the probability of having an NLoS link and is equal to $1 - P(\mathrm{LoS}, t)$. $\eta_{\mathrm{LoS}}$ and $\eta_{\mathrm{NLoS}}$ are the path loss coefficients. The values of a, b, $f_c$, $\eta_{\mathrm{LoS}}$, and $\eta_{\mathrm{NLoS}}$ are referenced in [36].
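The two channel equations can be sketched as follows; the environment constants a = 9.61 and b = 0.16 and the excess-loss values are common urban air-to-ground settings from the literature, not taken from this paper:

```python
import math

def p_los(h, r, a=9.61, b_env=0.16):
    """LoS probability, Equation (34); a and b_env are illustrative
    urban environment parameters."""
    theta_deg = math.degrees(math.atan2(h, r))  # elevation angle in degrees
    return 1.0 / (1.0 + a * math.exp(-b_env * (theta_deg - a)))

def path_loss_db(h, r, fc=2e9, eta_los=1.0, eta_nlos=20.0):
    """Average path loss g_j(t) in dB, Equation (35): free-space loss plus
    LoS/NLoS excess losses weighted by their probabilities."""
    c = 3e8                      # speed of light, m/s
    d = math.hypot(h, r)         # UAV-VU distance
    fspl = 20 * math.log10(4 * math.pi * fc * d / c)
    p = p_los(h, r)
    return fspl + p * eta_los + (1 - p) * eta_nlos
```

Hovering higher above the same VU raises the elevation angle and hence the LoS probability, which is the trend the paper describes after Equation (34).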
Assume that the UAV allocates the subchannels equally to all VUs and that the bandwidth of each VU is B. The achievable data rate of $c_j$ in iteration t is
$$r_j(t) = B \log_2\left(1 + \frac{p_j(t) g_j(t)}{N_0}\right)$$
where $p_j(t)$ is the transmission power and $N_0$ is the noise power.
When $c_j$ uploads its local model $\omega_j(t)$, the real-time communication delay of the process is computed as
$$T_j^{com}(t) = \frac{\omega_j(t)}{r_j(t)}$$
The corresponding communication energy consumption is computed as
$$E_j^{com}(t) = p_j(t) T_j^{com}(t)$$

5.1.2. Computation Model

$f_j$ is the CPU cycle frequency of $c_j$, and $\xi_j$ is the number of CPU cycles required to train one data sample. The computation energy consumption of one local training round of $c_j$ can be computed as
$$E_j^{train}(t) = \tau \xi_j D_j(t) f_j^2$$
where $\tau$ is the capacitance coefficient of the computing chipset. The computation delay of one local training round of $c_j$ can be computed as
$$T_j^{train}(t) = \frac{\xi_j D_j(t)}{f_j}$$
In this paper, the landmark recognition application is taken as an example. The amount of a VU's training data affects the accuracy and the number of local iterations of its local model. Define the local iterations and the global iterations as $I_j^{loc}$ and $I_0$, respectively, where $I_j^{loc}$ is calculated from the data quantity determined in Section 4. Therefore, the total time consumed by the communication and computation processes of $c_j$ can be computed as $T_j$ in Equation (41):
$$T_j(t) = I_0 \left( I_j^{loc} T_j^{train}(t) + T_j^{com}(t) \right)$$
Similarly, the total energy consumption is
$$E_j(t) = I_0 \left( I_j^{loc} E_j^{train}(t) + E_j^{com}(t) \right)$$
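Putting Equations (36) through (42) together, the per-VU delay and energy can be sketched as below; g_lin is a linear-scale channel gain and all numbers are illustrative:

```python
import math

def vu_cost(model_bits, B, p_tx, g_lin, n0, tau, xi, q, f, i_loc, i_glob):
    """Per-VU total delay and energy, Equations (36)-(42).
    g_lin is a linear-scale channel gain (illustrative, not dB)."""
    r = B * math.log2(1 + p_tx * g_lin / n0)      # rate, Equation (36)
    t_com = model_bits / r                        # upload delay, (37)
    e_com = p_tx * t_com                          # upload energy, (38)
    e_train = tau * xi * q * f ** 2               # training energy, (39)
    t_train = xi * q / f                          # training delay, (40)
    t_total = i_glob * (i_loc * t_train + t_com)  # total delay, (41)
    e_total = i_glob * (i_loc * e_train + e_com)  # total energy, (42)
    return t_total, e_total

# Doubling the CPU frequency shortens delay but raises energy (E ~ f^2).
t_slow, e_slow = vu_cost(1e6, 1e6, 0.1, 1e-3, 1e-7, 1e-28, 1e4, 1000, 1e9, 5, 10)
t_fast, e_fast = vu_cost(1e6, 1e6, 0.1, 1e-3, 1e-7, 1e-28, 1e4, 1000, 2e9, 5, 10)
```

This delay-versus-energy tension in the CPU frequency is exactly the trade-off that sub-problem P2 later resolves under the sojourn-time constraint.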

5.1.3. Mobility Model

In this work, we consider the VU's sojourn time under the current UAV to ensure that its current training process can be completed within that time. When the VU enters the next UAV's coverage, it may participate in other services. According to [37], the remaining distance $d_{i,j}$ of $c_j$ under the coverage of $s_i$ can be computed as
$$d_{i,j} = \sqrt{R_{s,i}^2 - (y_{s,i} - y_{c,j})^2} \pm (x_{s,i} - x_{c,j})$$
Assuming the average speed of $c_j$ under the UAV is $\bar{v}_j$, its sojourn time in the coverage of $s_i$ is defined by $T_{sojourn}^{i,j}$ in Equation (44):
$$T_{sojourn}^{i,j} = \frac{d_{i,j}}{\bar{v}_j}$$
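Equations (43) and (44) can be sketched as follows for a VU driving in the +x direction; the '+' branch of the plus-or-minus sign (a VU that has not yet passed the coverage center) is assumed:

```python
import math

def sojourn_time(Rs, uav_xy, vu_xy, v_avg):
    """Remaining distance (Equation (43)) and sojourn time (Equation (44))
    for a VU driving in the +x direction under UAV coverage; the '+' branch
    of the +/- in Equation (43) is assumed."""
    xs, ys = uav_xy
    xc, yc = vu_xy
    d = math.sqrt(Rs ** 2 - (ys - yc) ** 2) + (xs - xc)
    return d / v_avg

# A VU directly below the coverage center has the full radius ahead of it.
t_center = sojourn_time(500.0, (0.0, 0.0), (0.0, 0.0), 25.0)
```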

5.2. Problem Formulation

Since landmark recognition is used as the VUs' local task, the classic MNIST dataset is used to fit the local training accuracy. By fitting a large number of experimental results on the MNIST dataset, we obtain the relationship between the accuracy of a VU's local model and its data size, as shown in Figure 3.
In Figure 3, the x axis represents the data size of a VU, and the y axis represents the model accuracy of its local training. We define the local model accuracy as $\eta_j^{loc} = 0.32 \ln(q_j) - 1.91$. According to [26], we represent the number of local iterations as $I_j^{loc} = \alpha \log(1 / \eta_j^{loc})$, where $\alpha$ is the difficulty coefficient.
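Reading Figure 3's fit as eta = 0.32 ln(q) - 1.91 and the iteration count as I_loc = alpha * log(1/eta), the sketch below shows the resulting trend; the exact form of the iteration relation and the value alpha = 10.0 are illustrative assumptions:

```python
import math

def local_accuracy(q):
    """Fitted local accuracy vs. data size from the MNIST curve (Figure 3)."""
    return 0.32 * math.log(q) - 1.91

def local_iterations(q, alpha=10.0):
    """I_loc = alpha * log(1/eta_loc); the form and alpha=10.0 are
    illustrative assumptions (valid only where 0 < eta_loc < 1)."""
    return alpha * math.log(1.0 / local_accuracy(q))

# More data -> higher local accuracy -> fewer local iterations needed.
```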
According to the result of the contract in the incentive stage, the optimal data quantity for each VU is $q_j^*$. By substituting $q_j^*$ into the fitted formula, the local iterations of each VU can be obtained. Our optimization problem is the minimization of the total energy consumption of the VUs, which is the sum of the local training and model transmission energy consumption. The variables to be optimized are the CPU frequency $f_j$ and the transmission power $p_j$. Based on the energy consumption defined in Equation (42), the optimization problem can be formulated as P1:
$$\begin{aligned} \mathrm{P1}: \; \min_{f_j, p_j} \; & \sum_{j=1}^{J} E_j \\ \mathrm{s.t.} \; & I_0 \left( \frac{s}{r_j} + \frac{I_j^{loc} \xi_j q_j}{f_j} \right) \leq \varepsilon_j T_{sojourn}^{i,j}, \; \forall j \in J \quad \mathrm{(45a)} \\ & f_j^{min} < f_j \leq f_j^{max}, \; \forall j \in J \quad \mathrm{(45b)} \\ & p_j^{min} < p_j \leq p_j^{max}, \; \forall j \in J \quad \mathrm{(45c)} \end{aligned}$$
Constraint (45a) restricts the training time of a VU to within its sojourn time under the current UAV, where s denotes the size of the model to be transmitted. $\varepsilon_j$ corresponds to the degree of willingness in the contract: the higher the VU's willingness to participate in training, the larger the value of $\varepsilon_j$. Here, a VU with greater willingness is granted a longer sojourn-time budget for training. Constraints (45b) and (45c) set the ranges of the computation frequency and transmission power of each VU.
In the contract, the greater the willingness to participate, the higher the data quantity allocated to the vehicle. Due to the constraint of sojourn time in constraint (45a), VUs with higher data quantities are more likely to use more computing and transmission resources to ensure that training can be completed within the time limit. Therefore, in the contract-based resource allocation problem designed, each optimization variable is mutually restrictive.
Since several products in the objective function and constraint (45a) are not convex, P 1 is non-convex and thus can be difficult to solve. Consequently, we decompose this problem to make it solvable.
We characterize P1's solution by decomposing it into simpler sub-problems with respect to the variables $f_j$ and $p_j$, called P2 and P3, respectively. If we first fix a set of p, the sub-problem P2 involves only the variable $f_j$.
$$\begin{aligned} \mathrm{P2}: \; \min_{f_j} \; & I_0 \sum_{j=1}^{J} \tau I_j^{loc} \xi_j q_j f_j^2 \\ \mathrm{s.t.} \; & I_0 \left( \frac{s}{r_j} + \frac{I_j^{loc} \xi_j q_j}{f_j} \right) \leq \varepsilon_j T_{sojourn}^{i,j}, \; \forall j \in J \quad \mathrm{(46a)} \\ & f_j^{min} < f_j \leq f_j^{max}, \; \forall j \in J \quad \mathrm{(46b)} \end{aligned}$$
When $f_j$ is solved and substituted into P1, the sub-problem P3 becomes an optimization problem related only to the variable $p_j$.
$$\begin{aligned} \mathrm{P3}: \; \min_{p_j} \; & I_0 \sum_{j=1}^{J} \frac{p_j s}{B \log_2\left(1 + \frac{p_j g_j}{N_0}\right)} \\ \mathrm{s.t.} \; & I_0 \left[ \frac{s}{B \log_2\left(1 + \frac{p_j g_j}{N_0}\right)} + I_j^{loc} T_j^{train} \right] \leq \varepsilon_j T_{sojourn}^{i,j}, \; \forall j \in J \quad \mathrm{(47a)} \\ & p_j^{min} < p_j \leq p_j^{max}, \; \forall j \in J \quad \mathrm{(47b)} \end{aligned}$$

5.3. Solution of the Optimization Problem

5.3.1. Solution to P2

It is obvious that the objective in Equation (46) is a quadratic function of $f_j$ with a positive quadratic coefficient, so P2 is a convex quadratic optimization problem. Among the constraints, (46b) is a linear constraint that does not affect convexity. Although (46a) is a non-linear constraint, $f_j$ only appears in a denominator, so the constraint function is convex in $f_j$.
Therefore, classical Karush–Kuhn–Tucker (KKT) conditions can be used to solve P 2 . A set of initial solutions sol ( 0 ) = ( p ( 0 ) , f ( 0 ) ) should be defined. We then have the Lagrangian of P 2 by transferring the constraints to the objective.
$$L(f_j, \lambda) = I_0 \sum_{j=1}^{J} \tau I_j^{loc} \xi_j q_j f_j^2 + \lambda \left[ I_0 \left( \frac{s}{r_j} + \frac{I_j^{loc} \xi_j q_j}{f_j} \right) - \varepsilon_j T_{sojourn}^{i,j} \right]$$
where $\lambda$ is the multiplier corresponding to constraint (46a). By applying the KKT conditions, we obtain
$$\frac{\partial L}{\partial f_j} = 2 I_0 \tau I_j^{loc} \xi_j q_j f_j - \lambda \frac{I_0 I_j^{loc} \xi_j q_j}{f_j^2} = 0$$
$$\lambda \left[ I_0 \left( \frac{s}{r_j} + \frac{I_j^{loc} \xi_j q_j}{f_j} \right) - \varepsilon_j T_{sojourn}^{i,j} \right] = 0$$
From condition (49), we can obtain
$$f_j^* = \left( \frac{\lambda^*}{2 \tau} \right)^{\frac{1}{3}}$$
From condition (50), since $\lambda$ cannot be 0, constraint (46a) must be active, and we have
$$f_j^* = \frac{I_0 I_j^{loc} \xi_j q_j r_j}{r_j \varepsilon_j T_{sojourn}^{i,j} - s I_0}$$
Therefore, the optimal solution of P2 is $f_j^*$ in Equation (52).
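The closed-form solution of P2 can be sketched as follows, clipping the result to the feasible frequency range; the parameter values in the example are illustrative:

```python
def optimal_frequency(i0, i_loc, xi, q, s, r, eps, t_soj, f_min, f_max):
    """f_j* from Equation (52): the sojourn-time constraint is active, so the
    VU runs at the slowest CPU frequency that still meets the deadline."""
    f = i0 * i_loc * xi * q * r / (r * eps * t_soj - s * i0)
    return min(max(f, f_min), f_max)

f_star = optimal_frequency(i0=10, i_loc=5, xi=1e4, q=1000, s=1e6,
                           r=1e7, eps=1.0, t_soj=30.0, f_min=1e6, f_max=2e9)
# At f_star, the total time I0*(s/r + i_loc*xi*q/f_star) equals eps*t_soj.
```

Running at the slowest feasible frequency minimizes energy because the training energy grows quadratically with f while the deadline only requires a minimum f.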

5.3.2. Solution to P3

Theorem 1.
P3 is quasiconvex with respect to $p_j$.
Proof. 
The theorem is proven in Appendix A.    □
Since a quasiconvex problem is also a unimodal problem, we use the bisection method to solve it. By translating constraint (47a) into Equation (53), the lower limit of the bisection method can be obtained:
$$r_j \geq \frac{s}{\frac{\varepsilon_j T_{sojourn}^{i,j}}{I_0} - \frac{I_j^{loc} \xi_j q_j}{f_j}}$$
where $r_j$ is defined in Equation (36). From Equation (53), we can derive the lower limit of $p_j$:
$$p_j \geq \left( 2^{\frac{s I_0 f_j}{B \left( f_j \varepsilon_j T_{sojourn}^{i,j} - I_0 I_j^{loc} \xi_j q_j \right)}} - 1 \right) \frac{N_0}{g_j}$$
Let the right-hand side of Equation (54) be equal to $G_j$, which represents the lower limit of $p_j$. It should be noted that $p_j^{min}$ is very close to 0, so $G_j$ must be greater than $p_j^{min}$. If $G_j$ is greater than $p_j^{max}$, the VU has a short sojourn time or a long local training time, resulting in a transmission power requirement that cannot be met; in this case, the local solution for this VU is set to $p_j^{max}$, and the overall optimal solution is obtained by iterating the solutions of P2 and P3.
The bisection process of solving the optimal p * is given in Algorithm 1.
Algorithm 1 Bisection Method for Transmission Power Allocation Algorithm
Input:  Initial interval [ a 0 , b 0 ] = [ G j , p j m a x ] , maximum tolerance ϵ > 0 ,
       Set $f(p_j) = I_i^{glob} \sum_{j=1}^{J} \frac{p_j s}{B \log_2 (1 + p_j g_j / N_0)}$, t = 1 , c o n v = 0
Output:   G j ≤ p j * ≤ p j m a x
     1:  while c o n v = 0 do
     2:        p j t = ( a t − 1 + b t − 1 ) / 2
     3:       Compute f ( p j t )
     4:       if  f ( p j t ) = 0  then
     5:               p j * = p j t , set c o n v = 1
     6:       end if
     7:       if  f ( p j t ) < 0  then
     8:              Update the interval to [ a t , b t ] = [ p j t , b t − 1 ]
     9:              if  | b t − 1 − p j t | ≤ ϵ  then
    10:                   p j * = p j t , set c o n v = 1
    11:              end if
    12:              set t = t + 1
    13:       end if
    14:       if  f ( p j t ) > 0  then
    15:              Update the interval to [ a t , b t ] = [ a t − 1 , p j t ]
    16:              if  | p j t − a t − 1 | ≤ ϵ  then
    17:                       p j * = p j t , set c o n v = 1
    18:              end if
    19:              set t = t + 1
    20:       end if
    21:   end while
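Algorithm 1 is a standard interval-halving search. A minimal sketch is given below, written for a generic sign-changing function f (for instance, the derivative of the transmission-energy objective in p_j); the function names and tolerance are illustrative, not the paper's exact implementation.

```python
def bisection(f, a, b, tol=1e-6, max_iter=100):
    """Bisection root search on [a, b], mirroring Algorithm 1:
    halve the interval toward the sign change until it is <= tol wide."""
    fa = f(a)
    for _ in range(max_iter):
        p = 0.5 * (a + b)
        fp = f(p)
        if fp == 0 or (b - a) <= tol:
            return p
        if fa * fp < 0:       # sign change in [a, p]: keep the left half
            b = p
        else:                  # sign change in [p, b]: keep the right half
            a, fa = p, fp
    return 0.5 * (a + b)
```

In the resource-allocation context, a = G_j and b = p_j^max, so the returned power always respects the bounds derived from Equation (54).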
We achieve the optimal f * and p * by solving the sub-problems iteratively. The whole solution of P 1 is shown in Algorithm 2.
Algorithm 2 Iterative Algorithm for the Whole Optimization Problem
Input:  Initial sol ( 0 ) = ( f ( 0 ) , p ( 0 ) ) , t = 1 , c o n v = 0 , maximum tolerance ϵ 0 > 0
Output:  Optimal sol * = ( f * , p * )
  1:   while  c o n v = 0 do
  2:         Solve P 2 to obtain f j t according to Equation (52)
  3:         Solve P 3 to obtain p j t according to Algorithm 1
  4:          sol ( t ) = ( f ( t ) , p ( t ) ) , and set t = t + 1
  5:         Check the convergence of | sol ( t ) − sol ( t − 1 ) |
  6:         if  | sol ( t ) − sol ( t − 1 ) | ≤ ϵ 0  then
  7:                sol ( * ) = sol ( t ) , and set c o n v = 1
  8:         end if
  9:  end while
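The alternating structure of Algorithm 2 can be sketched as follows. Here solve_P2 and solve_P3 stand in for the closed-form solution of Equation (52) and the bisection of Algorithm 1, respectively, and are passed in as callables; the scalar state is a simplification of the per-VU vectors.

```python
def alternating_optimization(solve_P2, solve_P3, f0, p0, tol=1e-6, max_iter=100):
    """Block-coordinate iteration of Algorithm 2: alternately solve P2
    (CPU frequency f with p fixed) and P3 (power p with f fixed) until
    the joint solution stops changing."""
    f, p = f0, p0
    for _ in range(max_iter):
        f_new = solve_P2(p)       # Eq. (52) with p fixed
        p_new = solve_P3(f_new)   # Algorithm 1 with f fixed
        if abs(f_new - f) + abs(p_new - p) <= tol:
            return f_new, p_new
        f, p = f_new, p_new
    return f, p
```

Because each sub-problem is solved to optimality with the other block fixed, the objective is non-increasing across iterations, which is what guarantees the convergence check eventually fires.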

6. Experimental Evaluation

In this section, we analyze the performance of the contract incentive mechanism and resource allocation algorithm for UAV-assisted FL.

6.1. Simulation Settings

In the contract incentive algorithm, we use the CVX tool to solve the optimization problem. In the resource allocation algorithm, we deploy 20 to 50 VUs under the coverage of one UAV. The coverage radius of the UAV is set to 3 km, the coordinates of the VUs are randomly generated with the UAV located directly above the road, and the height of the UAV is set to 100 m. The values of a , b , η LoS , and η NLoS are set to 12.08, 0.11, 1.6, and 23, respectively, as employed in [36].
Taking the landmark recognition task as an example, we conduct the VUs’ FL experiments on the MNIST dataset, a grayscale handwritten-digit dataset that includes 60,000 training images and 10,000 test images. A CNN is used as the backbone model of FL, composed of two convolutional layers, two activation functions, two pooling layers, and two fully connected layers. The two convolutional layers have 10 and 20 (5 × 5) convolution kernels, respectively, and the two fully connected layers have 50 and 10 neurons, respectively. We exported the CNN parameter file (including all weights and biases), whose size is approximately 0.05 Mbit.
Due to the small size of the transmitted parameters, only one resource block (RB) is needed to complete the transmission; an RB spans 180 kHz in both the 4G and 5G protocols. The other simulation parameters are listed in Table 2. Among them, the maximum transmission power follows the 3GPP specifications [38,39].
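As a rough sanity check on these settings, the per-RB rate and the cost of one model upload can be computed as below. This sketch takes the noise power as N_0 · B (the paper's rate expression writes the denominator simply as N_0), and the path-gain value in the example call is an arbitrary assumption.

```python
import math

def shannon_rate(p_w, g, n0_dbm_hz=-174.0, bandwidth_hz=180e3):
    """Per-RB achievable rate r = B * log2(1 + p * g / (N0 * B)),
    with N0 converted from dBm/Hz to W/Hz."""
    noise_w = 10 ** ((n0_dbm_hz - 30) / 10) * bandwidth_hz
    return bandwidth_hz * math.log2(1 + p_w * g / noise_w)

def upload_cost(model_bits, p_w, g):
    """Transmission delay (s) and energy (J) for one model upload."""
    r = shannon_rate(p_w, g)
    delay = model_bits / r
    return delay, p_w * delay

# 0.05 Mbit model at 20 dBm (0.1 W); the path gain 1e-10 is assumed.
delay, energy = upload_cost(0.05e6, 0.1, 1e-10)
```

With these assumed values the upload completes in tens of milliseconds over a single 180 kHz RB, consistent with the claim that one RB suffices for the 0.05 Mbit parameter file.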

6.2. Contract Optimality

Firstly, a total of 20 VUs and 5 VU types are set up to determine the optimal data quantity and optimal reward based on the contract. The optimal data quantities for the 5 VU types are 550, 600, 700, 750, and 800, and the 20 VUs are distributed across the 5 types as 5, 4, 6, 3, and 2.
We can see from Figure 4 and Figure 5 that, as the VU type increases, both the data quantity and the reward for the VU type increase. This means that the contract we designed satisfies the monotonicity constraint, which is proved in Lemma 1.
The utility of VUs is shown in Figure 6. We can observe that each type of VU achieves its maximum utility only when it chooses the contract item designed for its own type, which validates the IC constraints. In addition, each VU obtains a non-negative utility when selecting the contract item corresponding to its type, thus validating the IR constraints.
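The IC/IR checks behind Figure 6 can be reproduced in miniature. The stylized utility below, U_j(k) = θ_j R_k − q_k, and the type values θ_j are illustrative assumptions rather than the paper's exact utility function; the rewards are built with the IR constraint binding for the lowest type and the local IC constraint binding for each higher type, the standard contract-theoretic construction.

```python
def vu_utility(theta, reward, q):
    """Stylized type-theta VU utility for contract item (q, reward)."""
    return theta * reward - q

def build_rewards(thetas, qs):
    """Rewards with IR binding for the lowest type and local (downward)
    IC binding for every higher type: R_j = R_{j-1} + (q_j - q_{j-1}) / theta_j."""
    rewards = [qs[0] / thetas[0]]
    for j in range(1, len(thetas)):
        rewards.append(rewards[-1] + (qs[j] - qs[j - 1]) / thetas[j])
    return rewards

thetas = [1, 2, 3, 4, 5]          # hypothetical VU types (higher = more willing)
qs = [550, 600, 700, 750, 800]    # optimal data quantities from Section 6.2
rewards = build_rewards(thetas, qs)
```

A brute-force check confirms that under this construction every type maximizes its utility at its own contract item (IC) and earns a non-negative utility there (IR), matching the monotone pattern of Figures 4–6.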

6.3. Performance of the Resource Allocation

In this part, we evaluate the performance of the proposed energy-efficient resource allocation. By substituting the solution of the optimal data quantity obtained from the contract into the optimization problem, the optimal computing resource, transmission power, and total energy consumption corresponding to resource allocation can be obtained. To demonstrate the energy-efficient performance of the proposed algorithm, we set up four baseline algorithms as follows.
  • DCM: DCM [40] is a resource allocation algorithm in mobile edge computing that aims at capturing the trade-off between learning efficiency and energy consumption. It optimizes CPU frequency, data volume, and total FL delay.
  • Benchmark1: Compared with the proposed algorithm, the CPU frequency f j of the vehicle is directly set to f j m a x .
  • Benchmark2: Compared with the proposed algorithm, the power p j of the vehicle is directly set to p j m a x .
  • Benchmark3: Compared with the proposed algorithm, the data volume of vehicles is randomly allocated.
Figure 7 illustrates the relationship between different transmission parameter sizes and energy consumption. It can be observed that, as the parameter size increases, the total energy consumption also increases. This is because larger parameters require more energy for transmission.
The proposed algorithm achieves the lowest energy consumption. This is because our algorithm accounts for the sojourn time according to the degree of contract willingness: the higher a VU’s willingness, the longer its sojourn time, so even when the transmission delay increases, there is still enough time for training. In contrast, when the transmission delay increases, the DCM algorithm reduces the training time due to the limit on total sojourn time, thereby allocating a higher CPU frequency and incurring greater energy consumption.
Figure 8 illustrates the relationship between VU’s sojourn time and energy consumption. It can be observed that, as the sojourn time increases, the energy consumption decreases. This is because VUs have a longer training time, leading to a lower allocated CPU frequency and transmission power, resulting in reduced computational and transmission energy consumption. The curve’s trend gradually flattens out as the computation or communication resources reach their minimum values. Among the compared algorithms, Benchmark1 exhibits the least significant decrease in energy consumption. This is because its CPU frequency is set to the maximum value and cannot be reduced. In the DCM algorithm, latency is optimized, which may lead to a trade-off between latency and energy consumption, resulting in higher energy consumption compared to the proposed algorithm in this paper. The proposed algorithm comprehensively considers contract-based sojourn time, computational capacity, and transmission capacity, aiming to minimize the total energy consumption within a given sojourn time requirement.
Figure 9 illustrates the relationship between the number of VUs (J) and the total energy consumption in FL. As the number of VUs increases, the energy consumption inevitably increases. For Benchmark1 and Benchmark2, the allocation of maximum CPU frequency and transmission power, respectively, results in high energy consumption. For Benchmark3, the data allocation to VUs is not optimized, resulting in many VUs being assigned the minimum amount of data, which may lead to poor model performance. As the number of VUs increases, the performance gap between the DCM algorithm and the proposed algorithm significantly increases. This is because the DCM algorithm optimizes the total latency and model accuracy and may allocate as much data and as many resources as possible to improve accuracy and reduce latency, but it ignores the consideration of the willingness of VUs. The proposed algorithm outperforms all benchmarks when the number of VUs increases, demonstrating that the designed algorithm achieves good stability.

6.4. Performance of Federated Learning

We validated the performance of FL on the MNIST dataset. Taking 20 VUs as an example, we substituted the data quantity obtained from the contract algorithm into the fitting formula mentioned earlier to determine the number of local training iterations for each VU. By incorporating these values into FL, we obtain the corresponding model accuracy and loss in Figure 10.
Figure 10 compares the accuracy of the proposed algorithm with local training and with the classical federated averaging (FedAVG) algorithm under an equal dataset allocation. The results show that the proposed scheme achieves higher accuracy and faster convergence than the other algorithms on the MNIST dataset. This is because the proposed algorithm assigns each VU a training dataset size, and a large number of experimental fitting results are used to calculate the training rounds of each VU for that allocation. Compared with local training, our algorithm compensates for the insufficient training caused by small data volumes. Compared with classical FedAVG, the proposed algorithm also uses FedAvg aggregation but performs data allocation and sets iteration rounds more precisely, so the model improves faster. The training loss results further confirm the better performance of the proposed algorithm.
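The aggregation step shared by both schemes is plain FedAvg: a data-quantity-weighted average of the local parameter vectors. A minimal sketch, with plain Python lists standing in for model weight tensors:

```python
def fedavg(local_weights, data_sizes):
    """FedAvg aggregation: weight each VU's parameter vector by its share
    of the total training data, then sum the weighted vectors."""
    total = sum(data_sizes)
    dim = len(local_weights[0])
    global_w = [0.0] * dim
    for w, n in zip(local_weights, data_sizes):
        for i in range(dim):
            global_w[i] += (n / total) * w[i]
    return global_w
```

In the proposed scheme the data_sizes are the contract-determined quantities q_j, which is exactly how the contract stage feeds into the training stage.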

7. Discussion

With the emergence of a large number of intelligent road applications and the increasing competition in the intelligent transportation industry, more and more intelligent transportation application enterprises (task publishers) tend to dispatch intelligent VUs to assist in completing the training of application models. Currently, a large number of studies have proposed applying the FL framework to IoV scenarios, utilizing VUs’ resources for local training, thereby greatly reducing communication latency and improving training efficiency. However, the dynamicity, low participation, and limited resources of VUs are all challenges faced by FL in the IoV.
The proposed UAV-assisted FL framework utilizes UAVs to enhance connectivity. The contract incentive mechanism proposed in the system aims to stimulate VUs to participate and determine VUs’ data quantity used for training. The experiments show that VUs of different types should contribute different amounts of data and achieve corresponding optimal rewards. In addition, VUs can obtain their maximum utility and ensure non-negative utility only when they choose appropriate contract items, which verifies the IC and IR principles of the contract mechanism.
The energy-efficient resource allocation algorithm is designed to manage VUs’ computing and communication resources during the training process. This algorithm determines the local training iterations and training willingness of different VUs based on the results of the contract mechanism, and then constructs and solves the energy consumption minimization problem. Experiments show that the energy consumption of the proposed algorithm is lower than that of the baseline algorithms under the conditions of different transmission parameters, different sojourn times, and different numbers of participating VUs.
Utilizing the data quantity determined by the contract for FL, it is observed that the accuracy and convergence speed of the proposed system surpass those of the comparative algorithms. This further validates the efficacy of the proposed system, demonstrating its ability to effectively incentivize VUs to participate in federated training using appropriate resources.
The UAV-assisted FL framework leverages the role of UAVs as edge servers, enhancing connectivity in dynamic vehicular networks and assisting in the training process of federated models. Future research needs to focus more on the augmenting role of UAVs in system communication, addressing issues such as reliability and stability. More attention should be given to the UAVs’ contribution to reducing the risk of communication interruptions and minimizing data transmission latency.

8. Conclusions

In this study, we have proposed a UAV-assisted FL framework in the context of the IoV to overcome the challenges of VUs’ intermittent connectivity, low proactivity, and limited resources. An incentive stage and a training stage are involved in this framework, where a contract-based incentive mechanism and an energy-efficient resource allocation algorithm are designed separately. With the assistance of UAVs, VUs can benefit from enhanced communication efficiency and mobility, thus ensuring better training performance. The experimental results show that the proposed framework achieves effectiveness in terms of incentives and outperforms the baseline methods in terms of FL performance.
Due to the energy consumption during FL and the limited computing resources, VUs with a higher data quantity and quality might be reluctant to participate in federated training because their standalone local training could yield better-performing models. To address this challenge of low proactivity, we have designed a contract-based incentive mechanism that categorizes VUs into different types, each signing distinct contracts. Due to the information asymmetry between task publishers and VUs, UAVs serve as intermediaries to assist in contract signing. These contracts specify the amount of data required for training for each vehicle type and the corresponding rewards. This contract mechanism enhances the proactive involvement of VUs in training and effectively manages data resources. The corresponding experiments verify the effectiveness of the contract mechanism.
During the federated training process, it is necessary to consider both model performance and the total energy consumption induced by training to comprehensively manage training resources. Given the variation in data collected by VUs and their limited computing and communication resources, the complexity of resource management increases. Existing resource management studies mostly focus on single metrics of federated training and rarely optimize both performance and energy consumption simultaneously. The proposed contract-based efficient resource allocation algorithm in this paper addresses both performance and energy consumption in FL. It formulates the energy minimization problem based on contract results. As the contract mechanism manages data based on VU types and determines the training iterations, it enhances the performance of FL. Moreover, by introducing VUs’ participation willingness in the energy optimization problem, the algorithm effectively manages computing and communication resources for training, thereby reducing the total energy consumption. The experiments have demonstrated that the proposed algorithm reduces energy consumption and improves performance in FL.
In summary, this study addresses the challenges of VUs’ intermittent connectivity, low proactivity, and limited resources in the context of FL. In potential future work, we would primarily focus on two aspects. Firstly, while we have already considered the role of UAVs as edge servers to assist FL, future attention could be directed towards evaluating the contribution of UAVs to system communication metrics. This includes assessing the extent to which UAVs contribute to reducing the risk of communication interruptions and minimizing data transmission latency, among other factors. Secondly, we have not yet considered issues of cooperation and competition among UAVs. Future research could explore the formation of alliances among UAVs to enhance the overall efficiency of the system.

Author Contributions

Funding acquisition, S.L., J.M., and H.T.; Investigation, S.L., Y.L., and Z.H.; Methodology, S.L., Y.L., and Z.H.; Project administration, S.L. and H.T.; Resources, S.L.; Writing—original draft, Y.L. and Z.H.; Writing—review and editing, S.L., B.Z., and H.T. All authors have read and agreed to the published version of the manuscript.

Funding

This work is funded by the Royal Society of Edinburgh–National Natural Science Foundation of China Joint Project (under grants No. 61701034 (NSFC) and No. 63007 (RSE)).

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

For P 3 , since all variables except p j are fixed, constraint (45a) can be rewritten as $r_j = B \log_2 \left( 1 + \frac{p_j |h_j|^2 g_j}{N_0} \right) \geq r_j^{req}$. The regions defined by constraints (45a) and (45b) are convex sets.
In addition, since $r_j = B \log_2 \left( 1 + \frac{p_j |h_j|^2 g_j}{N_0} \right)$ is concave in $p_j$, its reciprocal $\frac{1}{B \log_2 \left( 1 + p_j |h_j|^2 g_j / N_0 \right)}$ is convex. The sublevel sets of convex functions are convex sets. Therefore, all the sublevel sets and constraints of P 3 are convex sets, which satisfies the necessary and sufficient condition for quasiconvexity [41].
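The quasiconvexity argument can be spot-checked numerically: for the per-upload transmission energy E(p) = p · s / r(p), the defining inequality E(c·x + (1 − c)·y) ≤ max{E(x), E(y)} should hold for all feasible powers. The constants below (the g/N_0 ratio and the model size) are illustrative assumptions.

```python
import math
import random

def tx_energy(p, s=0.05e6, B=180e3, g_over_n0=1e4):
    """Transmission energy p * s / r with rate r = B * log2(1 + p * g/N0)."""
    return p * s / (B * math.log2(1 + p * g_over_n0))

# Spot-check the quasiconvexity inequality
# E(c*x + (1-c)*y) <= max(E(x), E(y)) at random point pairs.
random.seed(0)
ok = all(
    tx_energy(c * x + (1 - c) * y) <= max(tx_energy(x), tx_energy(y)) + 1e-9
    for x, y, c in (
        (random.uniform(1e-3, 1.0), random.uniform(1e-3, 1.0), random.random())
        for _ in range(1000)
    )
)
```

The check passes for every sampled triple, as expected from the sublevel-set argument above; it is a numerical illustration of the proof, not a substitute for it.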

References

  1. Zhao, L.; Valero, M.; Pouriyeh, S.; Li, L.; Sheng, Q.Z. Communication-Efficient Semihierarchical Federated Analytics in IoT Networks. IEEE Internet Things J. 2022, 9, 12614–12627. [Google Scholar] [CrossRef]
  2. Qureshi, K.N.; Din, S.; Jeon, G.; Piccialli, F. Internet of Vehicles: Key Technologies, Network Model, Solutions and Challenges with Future Aspects. IEEE Trans. Intell. Transp. Syst. 2021, 22, 1777–1786. [Google Scholar] [CrossRef]
  3. Lu, N.; Cheng, N.; Zhang, N.; Shen, X.; Mark, J.W. Connected Vehicles: Solutions and Challenges. IEEE Internet Things J. 2014, 1, 289–299. [Google Scholar] [CrossRef]
  4. Sodhro, A.H.; Pirbhulal, S.; de Albuquerque, V.H.C. Artificial Intelligence-Driven Mechanism for Edge Computing-Based Industrial Applications. IEEE Trans. Ind. Inform. 2019, 15, 4235–4243. [Google Scholar] [CrossRef]
  5. Zhang, J.; Letaief, K.B. Mobile Edge Intelligence and Computing for the Internet of Vehicles. Proc. IEEE 2020, 108, 246–261. [Google Scholar] [CrossRef]
  6. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated Machine Learning: Concept and Applications. ACM Trans. Intell. Syst. Technol. 2019, 10, 12. [Google Scholar] [CrossRef]
  7. Lim, W.Y.B.; Xiong, Z.; Miao, C.; Niyato, D.; Yang, Q.; Leung, C.; Poor, H.V. Hierarchical Incentive Mechanism Design for Federated Machine Learning in Mobile Networks. IEEE Internet Things J. 2020, 7, 9575–9588. [Google Scholar] [CrossRef]
  8. Zhan, Y.; Li, P.; Wang, K.; Guo, S.; Xia, Y. Big Data Analytics by CrowdLearning: Architecture and Mechanism Design. IEEE Netw. 2020, 34, 143–147. [Google Scholar] [CrossRef]
  9. Liu, Y.; Xu, C.; Zhan, Y.; Liu, Z.; Guan, J.; Zhang, H. Incentive mechanism for computation offloading using edge computing: A Stackelberg game approach. Comput. Netw. 2017, 129, 399–409. [Google Scholar] [CrossRef]
  10. Sarikaya, Y.; Ercetin, O. Motivating Workers in Federated Learning: A Stackelberg Game Perspective. IEEE Netw. Lett. 2020, 2, 23–27. [Google Scholar] [CrossRef]
  11. Khan, L.U.; Pandey, S.R.; Tran, N.H.; Saad, W.; Han, Z.; Nguyen, M.N.H.; Hong, C.S. Federated Learning for Edge Networks: Resource Optimization and Incentive Mechanism. IEEE Commun. Mag. 2020, 58, 88–93. [Google Scholar] [CrossRef]
  12. Liu, T.; Di, B.; Wang, S.; Song, L. A Privacy-Preserving Incentive Mechanism for Federated Cloud-Edge Learning. In Proceedings of the 2021 IEEE Global Communications Conference (GLOBECOM), Madrid, Spain, 7–11 December 2021; pp. 1–6. [Google Scholar] [CrossRef]
  13. Zhao, Y.; Liu, Z.; Qiu, C.; Wang, X.; Yu, F.R.; Leung, V.C. An Incentive Mechanism for Big Data Trading in End-Edge-Cloud Hierarchical Federated Learning. In Proceedings of the 2021 IEEE Global Communications Conference (GLOBECOM), Madrid, Spain, 7–11 December 2021; pp. 1–6. [Google Scholar] [CrossRef]
  14. Jiang, S.; Wu, J. A Reward Response Game in the Federated Learning System. In Proceedings of the 2021 IEEE 18th International Conference on Mobile Ad Hoc and Smart Systems (MASS), Denver, CO, USA, 4–7 October 2021; pp. 127–135. [Google Scholar] [CrossRef]
  15. Ding, N.; Fang, Z.; Huang, J. Optimal Contract Design for Efficient Federated Learning with Multi-Dimensional Private Information. IEEE J. Sel. Areas Commun. 2021, 39, 186–200. [Google Scholar] [CrossRef]
  16. Deng, Y.; Lyu, F.; Ren, J.; Chen, Y.C.; Yang, P.; Zhou, Y.; Zhang, Y. FAIR: Quality-Aware Federated Learning with Precise User Incentive and Model Aggregation. In Proceedings of the IEEE INFOCOM 2021—IEEE Conference on Computer Communications, Vancouver, BC, Canada, 10–13 May 2021; pp. 1–10. [Google Scholar] [CrossRef]
  17. Saputra, Y.M.; Nguyen, D.N.; Hoang, D.T.; Dutkiewicz, E. Incentive Mechanism for AI-Based Mobile Applications with Coded Federated Learning. In Proceedings of the 2021 IEEE Global Communications Conference (GLOBECOM), Madrid, Spain, 7–11 December 2021; pp. 1–6. [Google Scholar] [CrossRef]
  18. Li, L.; Yu, X.; Cai, X.; He, X.; Liu, Y. Contract-Theory-Based Incentive Mechanism for Federated Learning in Health CrowdSensing. IEEE Internet Things J. 2023, 10, 4475–4489. [Google Scholar] [CrossRef]
  19. Kang, J.; Xiong, Z.; Niyato, D.; Yu, H.; Liang, Y.C.; Kim, D.I. Incentive Design for Efficient Federated Learning in Mobile Networks: A Contract Theory Approach. In Proceedings of the 2019 IEEE VTS Asia Pacific Wireless Communications Symposium (APWCS), Singapore, 28–30 August 2019; pp. 1–5. [Google Scholar] [CrossRef]
  20. Li, G.; Cai, J. An Online Incentive Mechanism for Crowdsensing with Random Task Arrivals. IEEE Internet Things J. 2020, 7, 2982–2995. [Google Scholar] [CrossRef]
  21. Zeng, R.; Zhang, S.; Wang, J.; Chu, X. FMore: An Incentive Scheme of Multi-dimensional Auction for Federated Learning in MEC. In Proceedings of the 2020 IEEE 40th International Conference on Distributed Computing Systems (ICDCS), Singapore, 29 November–1 December 2020; pp. 278–288. [Google Scholar] [CrossRef]
  22. Sun, W.; Liu, J.; Yue, Y.; Zhang, H. Double Auction-Based Resource Allocation for Mobile Edge Computing in Industrial Internet of Things. IEEE Trans. Ind. Inform. 2018, 14, 4692–4701. [Google Scholar] [CrossRef]
  23. Naveen, K.P.; Sundaresan, R. A double-auction mechanism for mobile data-offloading markets with strategic agents. In Proceedings of the 2018 16th International Symposium on Modeling and Optimization in Mobile, Ad Hoc, and Wireless Networks (WiOpt), Shanghai, China, 7–11 May 2018; pp. 1–8. [Google Scholar] [CrossRef]
  24. Ng, J.S.; Bryan Lim, W.Y.; Dai, H.N.; Xiong, Z.; Huang, J.; Niyato, D.; Hua, X.S.; Leung, C.; Miao, C. Communication-Efficient Federated Learning in UAV-enabled IoV: A Joint Auction-Coalition Approach. In Proceedings of the GLOBECOM 2020—2020 IEEE Global Communications Conference, Taipei, Taiwan, 7–11 December 2020; pp. 1–6. [Google Scholar] [CrossRef]
  25. Le, T.H.T.; Tran, N.H.; Tun, Y.K.; Han, Z.; Hong, C.S. Auction based Incentive Design for Efficient Federated Learning in Cellular Wireless Networks. In Proceedings of the 2020 IEEE Wireless Communications and Networking Conference (WCNC), Seoul, Republic of Korea, 25–28 May 2020; pp. 1–6. [Google Scholar] [CrossRef]
  26. Yang, Z.; Chen, M.; Saad, W.; Hong, C.S.; Shikh-Bahaei, M. Energy Efficient Federated Learning Over Wireless Communication Networks. IEEE Trans. Wirel. Commun. 2021, 20, 1935–1949. [Google Scholar] [CrossRef]
  27. Dinh, C.T.; Tran, N.H.; Nguyen, M.N.H.; Hong, C.S.; Bao, W.; Zomaya, A.Y.; Gramoli, V. Federated Learning Over Wireless Networks: Convergence Analysis and Resource Allocation. IEEE/ACM Trans. Netw. 2021, 29, 398–409. [Google Scholar] [CrossRef]
  28. Lu, Y.; Huang, X.; Zhang, K.; Maharjan, S.; Zhang, Y. Communication-Efficient Federated Learning for Digital Twin Edge Networks in Industrial IoT. IEEE Trans. Ind. Inform. 2021, 17, 5709–5718. [Google Scholar] [CrossRef]
  29. Zhou, X.; Zhao, J.; Han, H.; Guet, C. Joint Optimization of Energy Consumption and Completion Time in Federated Learning. In Proceedings of the 2022 IEEE 42nd International Conference on Distributed Computing Systems (ICDCS), Bologna, Italy, 10–13 July 2022; pp. 1005–1017. [Google Scholar] [CrossRef]
  30. Feng, J.; Zhang, W.; Pei, Q.; Wu, J.; Lin, X. Heterogeneous Computation and Resource Allocation for Wireless Powered Federated Edge Learning Systems. IEEE Trans. Commun. 2022, 70, 3220–3233. [Google Scholar] [CrossRef]
  31. Feng, L.; Li, W.; Lin, Y.; Zhu, L.; Guo, S.; Zhen, Z. Joint Computation Offloading and URLLC Resource Allocation for Collaborative MEC Assisted Cellular-V2X Networks. IEEE Access 2020, 8, 24914–24926. [Google Scholar] [CrossRef]
  32. Bhadauria, S.; Shabbir, Z.; Roth-Mandutz, E.; Fischer, G. QoS based Deep Reinforcement Learning for V2X Resource Allocation. In Proceedings of the 2020 IEEE International Black Sea Conference on Communications and Networking (BlackSeaCom), Odessa, Ukraine, 26–29 May 2020; pp. 1–6. [Google Scholar] [CrossRef]
  33. Ye, H.; Li, G.Y.; Juang, B.H.F. Deep Reinforcement Learning Based Resource Allocation for V2V Communications. IEEE Trans. Veh. Technol. 2019, 68, 3163–3173. [Google Scholar] [CrossRef]
  34. Anwar, W.; Franchi, N.; Fettweis, G. Physical Layer Evaluation of V2X Communications Technologies: 5G NR-V2X, LTE-V2X, IEEE 802.11bd, and IEEE 802.11p. In Proceedings of the 2019 IEEE 90th Vehicular Technology Conference (VTC2019-Fall), Honolulu, HI, USA, 22–25 September 2019; pp. 1–7. [Google Scholar] [CrossRef]
  35. Gao, L.; Hou, Y.; Tao, X.; Zhu, M. Energy-Efficient Power Control and Resource Allocation for V2V Communication. In Proceedings of the 2020 IEEE Wireless Communications and Networking Conference (WCNC), Seoul, Republic of Korea, 25–28 May 2020; pp. 1–6. [Google Scholar] [CrossRef]
  36. Bor-Yaliniz, R.I.; El-Keyi, A.; Yanikomeroglu, H. Efficient 3-D placement of an aerial base station in next generation cellular networks. In Proceedings of the 2016 IEEE International Conference on Communications (ICC), Kuala Lumpur, Malaysia, 22–27 May 2016; pp. 1–5. [Google Scholar] [CrossRef]
  37. Shinde, S.S.; Bozorgchenani, A.; Tarchi, D.; Ni, Q. On the Design of Federated Learning in Latency and Energy Constrained Computation Offloading Operations in Vehicular Edge Computing Systems. IEEE Trans. Veh. Technol. 2022, 71, 2041–2057. [Google Scholar] [CrossRef]
  38. Study LTE-Based V2X Services (Release 14), Document TR 36.885 Std., 3GPP; ETSI: Sophia Antipolis, France, 2016.
  39. LTE; 5G; Overall Description of Radio Access Network (RAN) Aspects for Vehicle-to-Everything (V2X) Based on LTE and NR V.16.0.0. Technical Specification 3GPP TS 37.985 Release 16; ETSI: Sophia Antipolis, France, 2020.
  40. Kim, J.; Kim, D.; Lee, J.; Hwang, J. A Novel Joint Dataset and Computation Management Scheme for Energy-Efficient Federated Learning in Mobile Edge Computing. IEEE Wirel. Commun. Lett. 2022, 11, 898–902. [Google Scholar] [CrossRef]
  41. Boyd, S.; Vandenberghe, L.; Faybusovich, L. Convex Optimization. IEEE Trans. Autom. Control 2006, 51, 1859. [Google Scholar] [CrossRef]
Figure 1. Application scenario.
Figure 2. UAV-assisted FL framework.
Figure 3. Fitting of local data size and local model accuracy.
Figure 4. Different data quantities versus VU types.
Figure 5. Different optimal rewards versus VU types.
Figure 6. VUs’ utility versus contract items.
Figure 7. Total energy consumption versus parameter size.
Figure 8. Total energy consumption versus average sojourn time.
Figure 9. Total energy consumption versus number of VUs.
Figure 10. The performance of federated learning. (a) Accuracy on a representative VU and (b) Loss on a representative VU.
Table 1. Notations.
  g_j(t): Path loss between c_j and the UAV at timeslot t
  ω_j: Transmission model size of c_j
  p_j: Transmission power of VU c_j
  f_j: CPU cycle frequency of c_j
  T_j^com: Communication delay
  E_j^com: Communication energy consumption
  T_j^train: Computation delay
  E_j^train: Computation energy consumption
  T_j: Total delay of c_j
  E_j: Total energy consumption of c_j
  η_j^loc: Local model accuracy of c_j
  I_j^loc: Local training iterations of c_j
  I_0: Global training iterations
  T_sojourn^{i,j}: Sojourn time of c_j under UAV s_i
  q_j: Data quantity that c_j contributes to the task publisher
  R_j: Reward of c_j
  ε_j: Willingness of c_j to participate in training
  ρ_j: Proportion of type-j VUs
  c: VU's unit cost required for training
  U_TP: Utility of the task publisher
  U_j: Utility of c_j
Table 2. Simulation parameters.
  Average speed of VU (v̄_j): [20–30] m/s
  Noise power (N_0): −174 dBm/Hz
  Total data size: 47 MB
  Bandwidth of each VU (B): 180 kHz
  Transmitted model size (ω_j): 0.05 Mbit
  Maximum transmission power (p_j^max): 20 dBm
  Maximum CPU frequency (f_j^max): 2 GHz
  Learning rate: 0.001
  Batch size: 128

Share and Cite

MDPI and ACS Style

Lin, S.; Li, Y.; Han, Z.; Zhuang, B.; Ma, J.; Tianfield, H. Joint Incentive Mechanism Design and Energy-Efficient Resource Allocation for Federated Learning in UAV-Assisted Internet of Vehicles. Drones 2024, 8, 82. https://doi.org/10.3390/drones8030082

