Next Article in Journal
Trustworthiness Optimisation Process: A Methodology for Assessing and Enhancing Trust in AI Systems
Previous Article in Journal
An Evolutionary Game-Theoretic Approach to Triple-Strategy Coordination in RRT*-Based Path Planning
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

FAFedZO: Faster Zero-Order Adaptive Federated Learning Algorithm

School of Mathematics and Statistics, Henan University of Science and Technology, Luoyang 471023, China
*
Author to whom correspondence should be addressed.
Electronics 2025, 14(7), 1452; https://doi.org/10.3390/electronics14071452
Submission received: 7 March 2025 / Revised: 31 March 2025 / Accepted: 1 April 2025 / Published: 3 April 2025

Abstract

:
Federated learning represents a newly emerging methodology in the field of machine learning that enables distributed agents to collaboratively learn a centralized model without sharing their raw data. Some scholars have already proposed many first-order algorithms and second-order algorithms for federated learning to reduce communication costs and speed up convergence. However, these algorithms generally rely on gradient or Hessian information, and we find it difficult to solve such federated optimization problems when the analytical expression of the loss function is not available, that is, when gradient information is not available. Therefore, we employed derivative-free federated zero-order optimization in this paper, which does not rely on specific gradient information, but instead utilizes the changes in function values or model outputs to estimate the optimization direction. Furthermore, to enhance the performance of derivative-free zero-order optimization, we propose an effective adaptive algorithm that can dynamically adjust the learning rate and other hyperparameters based on the performance during the optimization process, aiming to accelerate convergence. We rigorously analyze the convergence of our approach, and the experimental findings demonstrate our method can indeed achieve faster convergence speed on the MNIST, CIFAR-10 and Fashion-MNIST datasets in cases where gradient information is not available.

1. Introduction

With the rapid development of big data and artificial intelligence technologies, machine learning has become one of the key technologies in various fields. However, in practical applications, issues of data privacy and security have become a non-negligible challenge. In traditional machine learning, data usually need to be sent to data centers for model training, but in the process of transmission, the problems of data leakage and privacy protection are faced. To address this issue, federated learning (FL) [1], an emerging distributed machine learning technology, has emerged. It achieves collaborative learning among multiple data sources while protecting data privacy by exchanging model parameters instead of raw data. Currently, FL is utilized across diverse domains such as autonomous driving [2], personalized recommendation systems [3], and medical informatics [4], among others.
In order to meet the requirements of various scenarios in real-world applications, a large number of corresponding algorithms for federated learning have been studied. In recent years, to achieve fast convergence rates and reduce communication loads, a wide range of algorithms have been proposed, and these algorithms are commonly divided into two categories. One category is first-order methods [1,5,6], which only rely on the first-order derivative (gradient) of the objective function. The gradient indicates the direction in which the function increases most rapidly, and this type of method uses this information to iteratively update the model parameters. The other category is second-order methods. This kind of method not only utilizes the first-order derivative but also makes use of the second-order derivative (Hessian matrix), such as Fed-Sophia [7]. Most of these algorithms rely on gradient or Hessian information, and their scope of application is usually limited to differentiable functions. However, when the objective function has difficulty computing gradients or the gradient computation cost is excessively high, these algorithms will not be able to solve such problems, such as in the case of black-box functions and in federated hyperparameter tuning [8], so we must look for other solutions.
Zero-order optimization is a method proposed to reduce the dependence on gradient or Hessian information for optimizing the objective function. Its characteristic is that it does not require the calculation of the gradient. Instead, it estimates the change of the function by sampling the objective function to find the optimal solution. Therefore, this method is applicable to cases where the objective function is non-differentiable or the gradient is difficult to calculate. However, existing zero-order methods lack the utilization of gradient and other related information, resulting in the need for more iterations to achieve convergence during the optimization process. This greatly increases the computational time and affects the performance and convergence speed of the model. In deep learning, an excessively large learning rate may lead to unstable model training, resulting in overfitting or underfitting. Furthermore, traditional fixed learning rate methods often struggle to adapt to complex and diverse training data and model structures, potentially leading to a slow convergence speed of the model. The emergence of adaptive methods has effectively alleviated the problem of slow convergence. The adaptive method can dynamically adjust the learning rate based on feedback from the training process, thereby accelerating the convergence of the model. In addition, compared with stochastic gradient descent, this method can also escape saddle points more quickly [9]. Therefore, introducing it in practice represents a crucial approach to enhancing the performance of federated learning algorithms.
However, the improper design of adaptive federated learning methods may lead to convergence issues [10]. In the context of federated learning, data distribution across different devices often does not follow the IID (Independent and Identically Distributed) assumption, which further complicates the convergence problem. Reddi et al. [11] first proposed a federated version of the adaptive optimizer, among which are FedAdagrad and FedYogi. However, their analysis is only valid under the condition that β 1 = 0 , where β 1 is the decay parameter, which cannot leverage the advantages of momentum. And these limitations become even more pronounced when dealing with non-IID (Non-Independent and Identically Distributed) data. The algorithm FedCAMS [12] provides a complete proof, but it does not improve the convergence speed. Moreover, they usually require a global learning rate for initialization and adjustment, and the selection of this global learning rate may affect the performance of the algorithm. Existing adaptive methods still have shortcomings, such as a lagging response to complex environmental changes and a high consumption of computational resources. Our goal is to design adaptive algorithms tailored to various real-world scenarios, characterized by efficiency, precision, and flexibility, so as to rapidly adapt to environmental changes, optimize resource allocation, and provide robust support for development across various fields.
Based on the above, we combine the gradient-free optimization with adaptive methods, aiming to leverage the advantages of both to solve the problem of unavailable gradient information of the objective function in a finite-sum optimization problem while achieving fast convergence and effectively improving the training efficiency and performance of the model.
The contributions of this paper are summarized as follows:
  • By combining the zero-order optimization and adaptive gradient method, we proposed a novel faster zero-order adaptive federated learning algorithm, called FAFedZO, which can eliminate the reliance on gradient information and accelerate convergence at the same time.
  • We conducted a theoretical analysis of the proposed zero-order adaptive algorithm and provided a convergence analysis framework under some mild assumptions, demonstrating its convergence. Additionally, we have analyzed the computational complexity and convergence rate of the algorithm.
  • We have conducted a large number of comparative experiments on the MNIST, CIFAR-10, and Fashion-MNIST datasets. The experimental results verify the effectiveness of the FAFedZO algorithm. Compared with traditional zero-order optimization algorithms, this algorithm demonstrates significant performance advantages in both IID and non-IID scenarios.
The structure of the rest of this paper is outlined below. Section 2 provides a summary of the related work. In Section 3, we present the formulation of the federated optimization problem and the algorithm framework of FAFedZO. Section 4 provides the convergence analysis of FAFedZO. Section 5 presents the outcomes of our experiments. The paper concludes with Section 6.

2. Related Work

2.1. Federated Learning

There have been numerous studies on federated learning. Just as in the flight control neural network [13], optimal feedback gains are obtained through the meticulous design of the network architecture and training, and federated learning is also seeking the optimal parameter combinations for each client to achieve better global model performance. The pioneering work on federated learning began with [1], which proposed an algorithm called FedAvg. After FedAvg, numerous additional first-order schemes emerged, such as FedNova [14], FedProx [15], SCAFFOLD [16], FedSplit [17], and FedPD [18]. Among them, SCAFFOLD employs control variates to rectify “client drift” while maintaining the same sampling and communication complexity as FedAvg. FedProx introduced a penalty-based approach, which can reduce communication complexity to O ( ϵ 2 ) . To further minimize communication costs, various second-order optimization methods have been introduced, including GIANT [19] and FedDANE [20]. Additionally, there are also some FL algorithms based on momentum, including [21,22,23]. The work presented in [21] proposes a momentum fusion approach for synchronizing the server and local momentum buffers; however, it does not aim to decrease complexity. Ref. [22] proposed a momentum-based global update algorithm, Fed-GLOMO, which reduces variance on the server side using variance-reduction techniques. Ref. [23] proposed the STEM algorithm, which employs momentum-assisted stochastic gradient directions for updates at both the worker nodes and the central server. However, there are still numerous federated optimization problems in reality that are challenging to solve, for instance, when gradient information is unavailable or costly to acquire. Therefore, the study of gradient-free zero-order optimization is essential.

2.2. Zero-Order Optimization

Early literature that began to use the zero-order idea for estimation includes [24,25,26]. Specifically, in [24], the authors developed a distributed zero-order algorithm utilizing gradient tracking techniques. In [25], the author provides the first generalization error analysis for black-box learning via derivative-free optimization and demonstrates that under the assumption of Lipschitz and smoothing (unknown) losses, the ZoSS method attains a comparable generalization error boundary to stochastic gradient descent (SGD). Ref. [26] proposed and analyzed zero-order stochastic approximation algorithms for non-convex and convex objective functions, focusing on solving the problems of constrained optimization and high-dimensional settings. Recent key research achievements have also applied zero-order methods to various fields, such as [27,28,29,30]. In [27], a derivative-free algorithm, FedZO, is proposed. It is proven that this algorithm achieves a linear increase in speed in relation to the number of devices involved and local iteration times under non-convex settings. Ref. [28] proposed the FedDisco algorithm, leveraging zero-order optimization techniques to reduce communication overhead significantly. Ref. [29] designed a federated zero-order algorithm, FedZeN, which focused on convex optimization, and estimated the curvature of the global objective. Under the framework of cross-device federated learning, Ref. [30] introduces a dual-communication zero-order method, which is the first technique to incorporate wireless channels into the algorithm. This represents a significant new achievement in the field of zero-order optimization, achieving a convergence rate of O ( 1 K 3 ) in non-convex settings, where K represents the total number of iterations. However, due to the nature of zero-order optimization, it can lead to slower convergence speeds. Therefore, an adaptive method is introduced below.

2.3. Adaptive Methods

Early proposed adaptive algorithms include [31,32]. The Adam method in [31] introduces decay coefficients and combines momentum optimization to better adapt to non-stationary data and large-scale datasets, thereby accelerating model convergence. The AdaGrad method in [32] can adaptively adjust the learning rate of each parameter that can make sparse features obtain larger learning rates, while frequently occurring features obtain smaller learning rates. Then, researchers have extended these algorithms to the context of federated learning, like recent studies such as [33,34,35,36]. In [33], the authors designed an Adaptive Local Iteration Differential Privacy Federated Learning algorithm (ALI-DPFL) and demonstrated its superiority in resource-constrained scenarios. The adaptive methods in [34,35] effectively mitigate the issue of non-IID data, achieving critical progress in enabling better performance when dealing with non-IID data. Ref. [35] introduced a novel framework called FedARF, which adopted an adaptive feature fusion strategy, enabling the model to better adapt to the data distribution of each client, thereby accelerating the convergence speed on non-IID data. Ref. [36] introduced a novel federated learning algorithm named AFedAvg, which significantly reduced communication costs and accelerated convergence speed by combining adaptive communication frequency and gradient sparsity techniques. However, none of these studies have investigated the integration of adaptive methods within the context of zero-order optimization. Hence, our article addresses this by combining zero-order optimization with adaptive approaches to conduct our discussion.

3. Problem Formulation and Algorithm Design

In this section, we will introduce the federated optimization problem and the design of the zero-order adaptive federated optimization algorithm FAFedZO.

3.1. Federated Optimization Problem Formulation

We consider a federated learning task involving a central server and Q edge devices with an index of { 1 , 2 , , Q } . The central server aims to facilitate collaboration among these devices to address a specific optimization problem
min Ξ R d f ( Ξ ) 1 Q i = 1 Q f i ( Ξ ) ,
in which
f i ( Ξ ) E ϑ i D i [ F i ( Ξ , ϑ i ) ] ,
where Ξ R d represents a d-dimensional model parameter. For each edge device i [ Q ] , [ Q ] represents all Q edge devices from 1 to Q, f i ( Ξ ) denotes its local loss function, while f ( Ξ ) stands for the global loss function. In Equation (1), f i ( Ξ ) evaluates the anticipated risk on the data distribution D i on the edge device i, which is presented in Equation (2). ϑ i D i denotes that the random variable ϑ i is uniformly drawn from the distribution D i , and F i ( Ξ , ϑ i ) denotes the loss function of ϑ i at the parameter Ξ .

3.2. Algorithm Design of FAFedZO

We explored how to design a method that combines an adaptive gradient with zero-order optimization in federated learning.
Firstly, we expand our algorithm description within the context of the FedAvg framework. We are focused here on solving problem (1) through zero-order optimization methods and propose a novel zero-order fast adaptive FL method (FAFedZO) that employs a shared adaptive learning rate. In particular, Algorithm 1 outlines the specifics of our FAFedZO method.
At the beginning, input the parameters and perform initialization. For all i, compute Ξ 1 , i = Ξ 0 , i η 0 n 0 , i , which represents the first model update based on the initial parameters and the initial gradient estimates.
Then, from round t = 1 to T, perform the following: for each edge device i [ Q ] , first, extract a mini-batch B t , i of size b from the local dataset D i . Next, compute the stochastic gradient estimates g ^ t , i and g ^ t 1 , i for the current model parameters Ξ t , i based on the mini-batch B t , i . Here, we elaborate on the specific method for estimating the gradients.
To address the issue of unavailable gradient information and to reduce the frequency of model exchange, we use gradient estimators and perform stochastic zero-order updates in each communication round. By utilizing the gradient estimator, we can approximate the gradient through random sampling and estimation, without the need to precisely calculate the derivative of the function. In particular, at the t-th round, edge device i calculates a two-point stochastic gradient estimator [24], as outlined below
˜ v μ F i ( Ξ t , i , ϑ t , i ) = d v t μ t F i ( Ξ t , i + μ t v t , ϑ t , i ) F i ( Ξ t , i , ϑ t , i ) ,
where Ξ t , i denotes the local model of edge device i, while ϑ t , i signifies a random variable drawn by edge device i according to its local data distribution D i during the t-th round. v t represents a randomly chosen d-dimensional direction, uniformly sampled from the unit sphere S d , while μ t stands for a positive step size.
Afterward, in step 8 of Algorithm 1, we calculate the first-order momentum n t , i . Each update step not only depends on the current gradient but also integrates information from historical gradients. In this way, it can provide more stable and directional guidance for the update of model parameters, which helps accelerate the convergence of the model to the optimal solution. Its definition is as follows
n t , i = ˜ v μ F i ( Ξ t , i ; ϑ t , i ) + ( 1 χ t ) ( n t 1 , i ˜ v μ F i ( Ξ t 1 , i ; ϑ t , i ) ) ,
where the hyperparameter χ t ( 0 ,   1 ) , represents the decay factor.
In step 9 of Algorithm 1, we calculate the second-order momentum ι t , i . It measures the second moment of the gradient, enabling the adaptive adjustment of the learning rate in different parameter dimensions according to the historical changes in gradients. We adopt the coordinate-wise adaptive learning rate approach, akin to that utilized in Adam [31], which is defined as follows:
ι t , i = ϱ ι t 1 , i + ( 1 ϱ ) ( ˜ v μ F i ( Ξ t , i ; ϑ t , i ) ) 2 ,
where the hyperparameter ϱ ( 0 ,   1 ) , represents the decay factor.
Then, when the number of rounds t is an integer multiple of the local update number p (i.e., mod ( t , p ) = 0 ), for ι t , i , we perform the aggregation and periodic averaging steps at the server side, resulting in ι ¯ t . Subsequently, we utilize the ι ¯ t to create an adaptive matrix K t = d i a g ( ι ¯ t + ρ ) , where d i a g denotes diagonal matrix and ρ > 0 represents the decay factor. Since ι ¯ t is obtained by averaging ι t , i across all devices, and ι t , i is related to the square of the gradient estimation value, as ι ¯ t is updated, K t is correspondingly updated. In this way, K t can be adaptively adjusted based on the changes in the gradient information during the update process of the local model. This enables the algorithm to better adapt to the data distributions and model training situations on different devices, accelerate the convergence speed of the model, and enhance the final performance of the model.
Then, the averaging step is also performed on all the first-order momentums n t , i at the server side, and the global model is updated based on the obtained K t and n ¯ t . Otherwise (if mod ( t , p ) 0 ), keep K t as its previous value K t 1 and update the global model in the same manner as before. Here, the central server aggregates the model parameters, the first-order momentum n t , i , and the second-order momentum ι t , i every p steps.
Finally, after all iterations are completed, the algorithm outputs a model Ξ ¯ , which is uniformly randomly selected from all the global models { Ξ ¯ t } t = 1 T obtained during the iterations, as the final result.
To have a more intuitive and clear understanding of the workflow of the FAFedZO algorithm; we roughly summarize it as the following workflow diagram shown in Figure 1:
Algorithm 1: FAFedZO Algorithm
1:
Input: the number of iterations T, decay factor χ t , ϱ , learning rate η t , the number of local updates p, mini batch size b and initial batch-size B;
2:
initialize: Initialize: Ξ 0 , i = Ξ ¯ 0 = 1 Q i = 1 Q Ξ 0 , i , n 0 , i = n ¯ 0 = 1 Q i = 1 Q n ^ 0 , i with n ^ 0 , i = ˜ v μ F ( Ξ 0 , i , B 0 , i ) and ι 0 , i = ι ¯ 0 = 1 Q i = 1 Q ι ^ 0 , i with ι ^ 0 , i = ( ˜ v μ F ( Ξ 0 , i , B 0 , i ) ) 2 where B 0 , i = B from D i for i [ Q ] . K 0 = d i a g ( ι ¯ 0 + ρ )
3:
Ξ 1 , i = Ξ 0 , i η 0 n 0 , i , for all i [ Q ]
  
Electronics 14 01452 i001
16:
Output: Ξ ¯ chosen uniformly random from { Ξ ¯ t } t = 1 T .

4. Convergence Analysis of FAFedZO Method

In this section, the convergence of the FAFedZO method will be discussed. To facilitate the theoretical analysis of the proposed algorithm, we need to make some assumptions as follows.
Assumption 1. 
The global loss function f ( Ξ ) in (1) has a lower bound, that is, there is a fixed value f * that exists f ( Ξ ) f * > .
Assumption 1 means that regardless of the value of the optimization variable Ξ , the global loss function f ( Ξ ) will not decrease indefinitely, and there exists a minimum value f * , ensuring that the optimal solution we are looking for is within a meaningful range. Otherwise, the algorithm may keep trying to find a lower value and fail to converge.
Assumption 2. 
We suppose that functions F i ( Ξ , ϑ i ) , f i ( Ξ ) , f ( Ξ ) are all L-smooth; that is, for all Ξ 1 R d , Ξ 2 R d , we can conclude that
f i ( Ξ 1 ) f i ( Ξ 2 ) L Ξ 1 Ξ 2 ,
where L > 0 represents the Lipschitz constant, and · represents the l 2 norm. In addition, it can also be expressed in the form of f ( Ξ 1 ) f ( Ξ 2 ) + f ( Ξ 2 ) , Ξ 1 Ξ 2 + L 2 Ξ 1 Ξ 2 2 , where the symbol · , · represents the inner product.
Assumption 2 is common in the theoretical analysis of non-convex optimization [24,27], indicating that the gradient changes of f i ( Ξ ) , f ( Ξ ) are smooth and do not change suddenly. It is widely used in optimization analysis, and many typical federated learning algorithms have adopted this assumption, such as Fed-GLOMO [22], STEM [23], and so on.
Assumption 3. 
F i ( Ξ , ϑ i ) serves as a precise approximation of f i ( Ξ ) without bias, namely
E ϑ i [ F i ( Ξ , ϑ i ) ] = f i ( Ξ ) , Ξ R d .
Assumption 3 is usually used in stochastic optimization. It allows us to approximate the true gradient by using the gradient of random sampling, thus reducing the computational complexity. This is because, especially in the cases of large-scale data and complex models, it is often infeasible to calculate the gradient precisely.
Assumption 4. 
The variance of the stochastic gradient is limited within a certain range; that is, there is a constant σ g > 0 that satisfies
E ϑ i F i ( Ξ , ϑ i ) f i ( Ξ ) 2 σ g 2 , Ξ R d .
Assumption 4 indicates that the variance of stochastic gradients will not be infinite. If the variance is too large, the algorithm may experience violent oscillations during the optimization process and fail to converge. Bounded variance is a necessary condition for controlling the noise of SGD and ensuring the convergence rate.
Assumption 5. 
The difference between each local loss function and the global loss function remains within a certain limit; that is, there exists a constant σ h > 0 that satisfies
f ( Ξ ) f i ( Ξ ) 2 σ h 2 , Ξ R d .
Assumption 5 is used to describe the heterogeneity between the local loss and the global loss and is also taken into account in distributed zero-order optimization [37]. It ensures that local optimization will not lead to a significant decline in global performance.
Assumption 6. 
The inter-node variance is bounded, namely,
f i ( Ξ ) f j ( Ξ ) 2 ζ 2 , Ξ R d ,
where ζ is the heterogeneity parameter, which represents the level of data heterogeneity.
Assumption 6 is a typical assumption used to constrain data heterogeneity in the federated learning algorithm. When the data are set to be IID, that is, for all i , j [ Q ] , D i = D j , then ζ = 0 . This assumption indicates that the differences between the loss functions on different devices are limited, ensuring that the optimization processes on different devices will not deviate too much.
Assumption 7. 
In our algorithms, for all t 1 , the adaptive matrx K t satisfies the condition that
λ m i n ( K t ) ρ > 0 ,
where ρ represents an appropriate positive value and λ m i n ( · ) represents the minimum eigenvalue of a matrix.
Assumption 7 ensures that the adaptive matrix K t is positive definite for any t 1 . By ensuring that the minimum eigenvalue is greater than the positive number ρ , it can guarantee that the algorithm has a sufficient update step size in each iteration, thus ensuring the convergence of the algorithm.
Assumption 8. 
The gradients of function f i ( Ξ ) are G-bounded; that is, for all i, Ξ R d , we have
f i ( Ξ ) G .
where G > 0 is a positive constant.
Assumption 8 provides an upper bound for the gradient in the adaptive method. This is a typical assumption in the adaptive method, which is used to constrain the upper bound of the adaptive learning rate. This is reasonable and can usually be satisfied in practice. For example, it holds for the finite sum problem.
In the context of optimization algorithms, in order to prove the final specific conclusion, we need to introduce the concept of the ϵ -stationary point. We define the ϵ -stationary point as follows:
Definition 1. 
A point Ξ is called ϵ-stationary if f ( Ξ ) ϵ . Generally, a stochastic algorithm is defined to achieve an ϵ-stationary point in T iterations if E f ( Ξ T ) ϵ .
Then, we investigate the convergence characteristics of our novel method based on Assumptions 1–8. We first obtain the following six lemmas:
Lemma 1 gives the upper bound of the adaptive matrix K t .
Lemma 1. 
Suppose the adaptive matrix sequence { K t } t = 1 T is derived from the algorithm. On the basis of Assumptions 1–8, we can conclude that
E K t 2 2 d ( G 2 + σ g 2 ) + 2 ρ 2 + 1 2 d 2 L 2 μ 2
Lemma 2 measures the boundary of variance between gradients.
Lemma 2. 
For i [ Q ] , where [ Q ] represents all Q edge devices from 1 to Q, we can conclude that
E ˜ v μ F i ( Ξ t , i ; B t , i ) g t , i 2 L 2 μ 2
E g t 1 g ¯ t 2 6 L 2 E Ξ t 1 Ξ ¯ t 2 + 3 Q ζ 2
Lemma 3 studies the bounds under different values of i = 1 Q Ξ t , i Ξ ¯ t 2 , which is important for the proof of the subsequent theorems.
Lemma 3. 
Given that t [ t / p p , t / p ( p + 1 ) ] , Ξ t is derived from the algorithm; we can conclude that
(1) If t = s t = t / p p , then we can obtain
i = 1 Q Ξ s t , i Ξ ¯ s t 2 = 0
(2) If t t / p p , then we can obtain
i = 1 Q Ξ t , i Ξ ¯ t 2 ( p 1 ) s = s t t 1 η s 2 i = 1 Q K s 1 ( n s , i n ¯ s ) 2
Lemma 4 gives the relationship between the expected values of the function f at the model parameters Ξ ¯ t + 1 and Ξ ¯ t .
Lemma 4. 
Assuming the sequence { Ξ t } 0 T is generated by the algorithm, then we can obtain
E f ( Ξ ¯ t + 1 ) E f ( Ξ ¯ t ) ( 3 ρ 4 η t L 2 ) E Ξ ¯ t + 1 Ξ ¯ t 2 η t 4 ρ E f ( Ξ ¯ t ) n ¯ t 2 + 5 η t 2 ρ E g ¯ t n ¯ t 2 + 5 η t L 2 2 ρ Q E Ξ t 1 Ξ ¯ t 2
Lemma 5 gives the iterative relationship between E n ¯ t g ¯ t 2 and E n ¯ t 1 g ¯ t 1 2 .
Lemma 5. 
Suppose that n t they are produced by the algorithm; we subsequently obtain
E n ¯ t g ¯ t 2 2 ( 1 χ t ) 2 E n ¯ t 1 g ¯ t 1 2 + 4 ( 1 χ t ) 2 L 2 Q E Ξ t Ξ t 1 2 + 4 χ t 2 L 2 μ 2
Lemma 6 is also a crucial conclusion for proving the final theorem.
Lemma 6. 
Suppose that n t is produced by the algorithm; we subsequently obtain
30 ρ 72 Q t = s t s ¯ η t i = 1 Q E K t 1 ( n t , i n ¯ t ) 2 ρ 4 t = s t s ¯ 1 η t E Ξ ¯ t + 1 Ξ ¯ t 2 + ρ μ 2 c 2 4 Q + 3 ρ ζ 2 c 2 4 L 2 t = s t s ¯ η t 3
The proofs of these lemmas are provided in Appendix A. Then, based on the conclusions of the above lemmas, we can prove the final convergence theorem:
Theorem 1. 
Assume that the sequence { Ξ ¯ t } t = 1 T is produced by Algorithm 1. Based on Assumptions 1–8, given that t 0 , χ t + 1 = c · η t 2 , c = 1 12 L p h ¯ 3 ρ 2 + 30 L 2 ρ 2 60 L 2 ρ 2 , w t = max ( 3 2 , 1728 L 3 p 3 h ¯ 3 t ) , h ¯ = 1 L and
η t = ρ h ¯ w t + t 1 / 3
we can conclude that
1 T t = 1 T E f ( Ξ ¯ t ) P [ 12 L p ρ 2 T + L ρ 2 T 2 / 3 E f Ξ ¯ 0 f * + 6 L 2 μ 2 p 2 ρ 2 Q T + 12 2 × 75 p ρ 2 T + 900 ρ 2 T 2 / 3 5 L 2 μ 2 3 + 3 ζ 2 ( ln T + 1 ) + L 2 μ 2 p 2 ρ 2 Q T 2 / 3 ] 1 / 2 ,
where P = 5 2 d ( G 2 + σ g 2 ) + 2 ρ 2 + 1 2 d 2 L 2 μ 2 .
Due to limited space, we will only briefly introduce the outline of the proof here: Firstly, by combining the objective function f ( Ξ ¯ t ) and the gradient estimation error term, we construct a Lyapunov function Γ t for analysis. Secondly, we establish a recurrence inequality for Γ t . Through cumulative summation, we can estimate the bounds and convergence of the term 1 T t = 0 T 1 E 1 12 η t 2 Ξ ¯ t + 1 Ξ ¯ t 2 + 1 4 ρ 2 f Ξ ¯ t n ¯ t 2 . Finally, based on the above, we can derive the convergence bound of the objective function 1 T t = 1 T E f ( Ξ ¯ t ) .
The detailed proof of Theorem 1 can be found in Appendix A.
Remark 1. 
We utilize ζ as an indicator of data heterogeneity. The final results demonstrate that an increase in ζ (indicating greater data heterogeneity) leads to a slowdown in the training process. In addition, we consider that the step size μ t remains unchanged during the training process. Therefore, we omit its subscript and represent it with μ.
Remark 2. 
A suitable value of ρ ensures a balanced incorporation of adaptive information in the learning rate. In practice, we commonly select ρ to be within the order of O ( 1 ) , steering clear of excessively small or large values.
Remark 3 
(Computational complexity). For Q t = 1 12 η t 2 Ξ ¯ t + 1 Ξ ¯ t 2 + 1 4 ρ 2 f Ξ ¯ t n ¯ t 2 , we have
1 T t = 0 T 1 E [ Q t ] = 1 T t = 0 T 1 E 1 12 η t 2 Ξ ¯ t + 1 Ξ ¯ t 2 + 1 4 ρ 2 f Ξ ¯ t n ¯ t 2 12 L p ρ 2 T + L ρ 2 T 2 / 3 E f Ξ ¯ 0 f * + 6 L 2 μ 2 p 2 ρ 2 Q T + L 2 μ 2 p 2 ρ 2 Q T 2 / 3 + 12 2 × 75 p ρ 2 T + 900 ρ 2 T 2 / 3 5 L 2 μ 2 3 + 3 ζ 2 ( ln T + 1 )
Referring to the method in [38], without loss of generality, we let p Q = O ( 1 ) and choose p = T 1 3 . To make the right side of the inequality less than ϵ 2 , we can obtain T = O ( ϵ 3 ) and T p = T 2 3 = ϵ 2 . Therefore, to satisfy the definition of an ϵ-stationary point, which is E f ( Ξ T ) ϵ and E [ Q t ] ϵ 2 , we obtain the total sample cost as O ( ϵ 3 ) and the communication round as O ( ϵ 2 ) .
Remark 4 
(Convergence speed). In the FedZO algorithm [27], regardless of whether all devices participate or only partial devices participate, we can observe that the FedZO algorithm achieves a convergence rate of O ( 1 T 1 / 2 ) . Furthermore, as indicated by Theorem 1, our algorithm can achieve a convergence rate of O ( 1 T 2 / 3 ) . Therefore, our FAFedZO method theoretically possesses a faster convergence rate compared to general zero-order methods. This further validates the advantages of our approach.

5. Experimental Results

In this section, we conduct comparative experiments on the MNIST, CIFAR-10 and Fashion-MNIST datasets and present some experimental outcomes to assess the performance of the proposed FAFedZO method in federated black-box attacks, thereby confirming the advantages of this algorithm.

5.1. Experimental Environment and Datasets

Experimental environment configuration: This study utilizes the FedZO framework as the experimental baseline to conduct research on federated learning algorithms. For this study, Python version 3.10.13 is adopted as the programming language, and PyTorch 2.2.0 serves as the development platform. All tests are conducted on a Windows 10 platform equipped with an NV GTX1650 GPU version 572.83 and CUDA version 12.4.99.
Dataset: We conduct comparative experimental studies on the MNIST, CIFAR-10 and Fashion-MNIST datasets.
The MNIST dataset is a classic handwritten digit image dataset. This dataset comprises a collection of grayscale images of handwritten digits (0–9) with 60,000 training samples and 10,000 test samples, with each image measuring 28 × 28 pixels. The CIFAR-10 dataset consists of 60,000 32 × 32 color images across 10 categories, each category contains 6000 images, and the dataset is divided into 50,000 training and 10,000 test images. The Fashion-MNIST dataset has the same data structure as the MNIST dataset, containing 10 categories of grayscale images representing different types of clothing like T-shirts, trousers, sweaters, and so on.

5.2. Experimental Result Analysis

Here, we present the results of simulation experiments to assess the performance of the FAFedZO method in the context of black-box attack strategies.
Given the characteristics of black-box scenarios, optimizing black-box attacks falls into the realm of zero-order optimization. We investigate black-box attacks on a trained deep neural network (DNN) classification model. An attack refers to an intentional attempt to manipulate the input data of a model so as to interfere with its normal operation. It can be achieved by adding carefully designed perturbations to the original data, for example, the interference images that we aim to train on in this experiment. The normal learning process is based on the assumption that the input data follow a certain distribution. However, when under attack, the introduced adversarial examples will violate this distribution assumption. This will force the model to make incorrect predictions, and if this kind of attack is not properly dealt with, it may lead to the failure of the model’s generalization ability. The purpose of our experiment is to train an interference image with the same size as the image in the dataset, which makes it difficult for the human eyes to recognize the difference between the original image and the adversarial image after adding the interference image, but it can induce the classification model to make a wrong judgment. We want to achieve a higher attack success rate with as little disturbance as possible, so we consider the loss function shown below:
ψ ( Ξ , ϑ ) = max φ y ϑ 1 2 t a n h t a n h 1 2 ϑ + Ξ max j y ϑ φ j 1 2 t a n h t a n h 1 2 ϑ + Ξ , 0 I + c 1 2 t a n h t a n h 1 2 ϑ + Ξ ϑ 2 I I
where Ξ represents the interference image to be trained, ϑ represents the original image in the dataset, y ϑ represents the label corresponding to the image ϑ (for example, the label of the image “deer” is 4), and φ j ( ϑ ) represents the confidence that the classification model recognizes image ϑ as label j. The item I in Equation (16) measures the probability of attack failure (marked as attack loss), I I represents the image distortion caused by disturbance, and c is the balance coefficient. In this way, our goal can be achieved by minimizing ψ ( Ξ , ϑ ) . Next, we use Equation (16) to construct the local loss function.
We divide the samples in the dataset into Q groups randomly and unevenly without repetition and then distribute them to each edge device, where Q is the total count of edge devices we preset. Then, for all edge devices, we define their local loss function as
f i ( Ξ ) E ϑ i D i ψ ( Ξ , ϑ i ) , i { 1 , 2 , , Q }
where D i represents the dataset at the ith edge device. In this way, the federal black-box attack problem on the DNN classification model can be formulated as a federated optimization problem: f ( Ξ ) 1 Q i = 1 Q f i ( Ξ ) . Next, we use the FAFedZO algorithm proposed in this paper to solve this problem.
We select balance parameter c = 1 . For the remaining parameters, we select them as ( b 1 , b 2 , μ ) = ( 25 , 20 , 0.001 ) and set the initial learning rate η to be 0.1.
We select the total number of edge devices Q = 50 and the number of participating edge devices M = 30 and study the influence of the number of local updates E on the attack loss and attack accuracy of the proposed FAFedZO algorithm. Then, we compare it with the FedZO algorithm. We conduct experiments on the MNIST, CIFAR-10, and Fashion-MNIST datasets, respectively, and the results are shown in Figure 2.
Taking the MNIST dataset as an example, Figure 2a,d illustrate the impact of the number of local updates E { 3 , 15 , 60 } on the performance of the algorithm. It can be observed that the larger the value of E, the lower the attack loss of the algorithm and the higher the accuracy. In addition, the attack accuracy of FAFedZO is significantly higher than that of FedZO. When E = 60 , they achieve comparable accuracy, which proves that the algorithm proposed in this paper is superior to the original FedZO algorithm. Similar results can also be obtained on the remaining two datasets.
In Figure 3, with the total number of edge devices Q = 100 and the number of local updates E = 60 , the impact of the number M of participating edge devices on the convergence performance of the FAFedZO method is studied on the MNIST and Fashion-MNIST datasets. It can be seen that by adjusting the value of M, the FAFedZO algorithm can effectively enhance the attack accuracy.
In Figure 4, we select the parameters Q = 50 and M = 50 ; that is, when all devices are involved, to study the impact of the number of local updates E on the convergence performance of the algorithm. We have also conducted experiments on the MNIST, CIFAR-10, and Fashion-MNIST datasets, respectively. It is obvious that, compared with the FedZO algorithm under different values of E, the FAFedZO method can significantly reduce the attack loss and improve the attack accuracy. Moreover, as the value of E increases, the convergence speed of the FAFedZO algorithm also accelerates, with lower attack loss and higher attack accuracy, and both tend to stabilize more quickly.
Finally, we study the impact of the number of participating edge devices M on the algorithm under the conditions of Q = 50 and E = 1 . We conducted experiments on the MNIST and Fashion-MNIST datasets, and the results are shown in Figure 5. It can be seen that the FAFedZO algorithm far outperforms the FedZO algorithm in terms of both attack loss and accuracy. Even when the FedZO algorithm takes the optimal value among the selected values of M, its performance is still weaker than the worst value of the FAFedZO algorithm. In addition, as M increases, the attack loss value decreases and the accuracy increases, which is in line with our expectations.
In addition to investigating the impact of the number of local updates and the number of participating edge devices on the algorithm’s performance, we also study the influence of varying the number of random directions on the proposed algorithm. As shown in Figure 6, on the MNIST dataset, under the conditions of Q = 50 and M = 10 , we present the curves illustrating the impact of the number of directions H { 3 , 15 , 60 } on attack loss and attack accuracy. It can be observed that we have similar conclusions to the above figures, which further confirms the superiority of the FAFedZO algorithm.
In addition, we also discuss the performance of our algorithm under the setting of non-Independent and Identically Distributed (non-IID) data.
Figure 7 presents the impact of the number of local updates on the attack loss and accuracy of the algorithm under the non-IID setting. It can be observed that, compared with Figure 2a,d, the attack accuracy in this case is lower than that under the IID setting, which is attributed to the influence of the non-IID setting and aligns with our expectations. Furthermore, the results also indicate that under the non-IID setting, the performance of the FAFedZO algorithm remains superior to that of the FedZO algorithm, and both achieve comparable performance levels when E = 60 .
The conclusions we previously mentioned are also supported by Figure 8 and Figure 9. Therefore, in summary, the FAFedZO algorithm demonstrates superior performance in both IID and non-IID environments.

6. Conclusions

In this paper, we proposed FAFedZO, a federated optimization algorithm that combines derivative-free zero-order optimization with an adaptive method. We conducted a theoretical analysis of the algorithm to prove its convergence and presented the computational complexity and convergence rate of the FAFedZO algorithm. Finally, we conducted a large number of comparative experiments on the MNIST, CIFAR-10, and Fashion-MNIST datasets. The results demonstrate that our method can achieve a faster convergence rate compared to general zero-order optimization algorithms, verifying the effectiveness of the algorithm proposed in this paper.

Author Contributions

Methodology, Y.X.; Formal analysis, Y.Z.; Writing—original draft, Y.L.; Writing—review & editing, H.G.; Supervision, Y.X.; Project administration, Y.X.; Funding acquisition, H.G. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by Key Technologies Research and Development Program of Henan Province under Grant No. 242102210102.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The authors declare there is no conflicts of interest.

Appendix A

Here, we present a thorough examination of the convergence properties of our algorithm. To facilitate the analysis, we introduce the notation g t , i that g t , i = f i ( Ξ t , i ) in the subsequent sections. Wedefine s t as s t = t / p p and use ⊗ as Kronecker product symbol.
At first, for the zero-order gradient ˜ v μ F i ( Ξ t , i , ϑ t , i ) , based on the characteristics of the gradient estimator outlined in ([39], Lemma 4.2), we can deduce that
E ˜ v μ F i ( Ξ t , i , ϑ t , i ) = f i μ Ξ t , i
Then, we can obtain
E ˜ v μ F i ( Ξ t , i , ϑ t , i ) f i μ Ξ t , i 2 E ˜ v μ F i ( Ξ t , i , ϑ t , i ) 2 2 d E F i ( Ξ t , i , ϑ t , i ) 2 + 1 2 d 2 L 2 μ 2 = 2 d E F i ( Ξ t , i , ϑ t , i ) f i ( Ξ t , i ) + f i ( Ξ t , i ) 2 + 1 2 d 2 L 2 μ 2 = 2 d E F i ( Ξ t , i , ϑ t , i ) f i ( Ξ t , i ) 2 + 2 d E f i ( Ξ t , i ) 2 + 1 2 d 2 L 2 μ 2 2 d σ g 2 + 2 d E f i ( Ξ t , i ) 2 + 1 2 d 2 L 2 μ 2 2 d ( G 2 + σ g 2 ) + 1 2 d 2 L 2 μ 2
where the second equality is due to Assumption 3, and the establishment of these four inequalities is because E z E z 2 E z 2 , ([39], Lemma 4.1), Assumptions 4 and 8.
Thus, we can obtain
E ˜ v μ F i ( Ξ t , i , ϑ t , i ) f i μ Ξ t , i 2 2 d ( G 2 + σ g 2 ) + 1 2 d 2 L 2 μ 2
At the same time, we can also obtain
E ˜ v μ F i ( Ξ t , i , ϑ t , i ) 2 2 d ( G 2 + σ g 2 ) + 1 2 d 2 L 2 μ 2
The following presents the specific proofs of Lemma 1 to Lemma 6:
Proof of Lemma 1. 
Firstly, we know that
E K t 2 = E K s t 2 = E ι ¯ s t + ρ 2 2 E ι ¯ s t + 2 ρ 2
where K t = diag ( ι ¯ t + ρ ) is a diagonal matrix. And following the definition of ι s t , we can deduce that
E ι ¯ s t = E ϱ ι ¯ s t 1 + ( 1 ϱ ) 1 Q i = 1 Q [ ˜ v μ F i ( Ξ s t 1 , i , ϑ s t 1 , i ) ] 2 ϱ E ι ¯ s t 1 + ( 1 ϱ ) 1 Q i = 1 Q E [ ˜ v μ F i ( Ξ s t 1 , i , ϑ s t 1 , i ) ] 2 ϱ E ι ¯ s t 1 + ( 1 ϱ ) 1 Q i = 1 Q E ˜ v μ F i ( Ξ s t 1 , i , ϑ s t 1 , i ) 2 2 ϱ E ι ¯ s t 1 + ( 1 ϱ ) 1 Q i = 1 Q [ 2 d ( G 2 + σ g 2 ) + 1 2 d 2 L 2 μ 2 ] = ϱ E ι ¯ s t 1 + ( 1 ϱ ) [ 2 d ( G 2 + σ g 2 ) + 1 2 d 2 L 2 μ 2 ] ϱ E ι ¯ s t 1 + ( 1 ϱ ) C
Therefore, because of ϱ ( 0 , 1 ) , taking the recursive expansion, we can obtain
E ι ¯ s t ( 1 ϱ ) C + ϱ ( 1 ϱ ) C + ϱ 2 ( 1 ϱ ) C + + ϱ s t 1 ( 1 ϱ ) C = 1 ϱ s t 1 ϱ ( 1 ϱ ) C 1 2 C
So, we finally obtain
E K t 2 2 E ι ¯ s t + 2 ρ 2 2 ρ 2 + C = 2 d ( G 2 + σ g 2 ) + 2 ρ 2 + 1 2 d 2 L 2 μ 2
Lemma 1 is proved. □
Proof of Lemma 2. 
For the inequality (7), we have
E ˜ v μ F i ( Ξ t , i ; B t , i ) g t , i 2 = E 1 b ϑ t , i B t , i ( f i μ ( Ξ t , i ; ϑ t , i ) g t , i ) 2 = 1 b 2 E ϑ t , i B t , i ( f i μ ( Ξ t , i ; ϑ t , i ) g t , i ) 2 1 b ϑ t , i B t , i E f i μ ( Ξ t , i ; ϑ t , i ) g t , i 2 L 2 μ 2
where the first inequality is because i = 1 q a i 2 q i = 1 q a i 2 , and the last inequality follows from ([24], Lemma 5.2).
For the inequality (8), we have
E g t 1 g ¯ t 2 = i = 1 Q E g t , i g ¯ t 2 3 i = 1 Q E [ g t , i f i ( Ξ ¯ t ) 2 + 1 Q j = 1 Q f j ( Ξ ¯ t ) g t , j 2 + 1 Q j = 1 Q f i ( Ξ ¯ t ) f j ( Ξ ¯ t ) 2 ] = 3 i = 1 Q E g t , i f i ( Ξ ¯ t ) 2 + 3 i = 1 Q 1 Q j = 1 Q E f j ( Ξ ¯ t ) g t , j 2 + 3 i = 1 Q 1 Q j = 1 Q E f i ( Ξ ¯ t ) f j ( Ξ ¯ t ) 2 3 L 2 i = 1 Q E Ξ t , i Ξ ¯ t 2 + 3 L 2 j = 1 Q E Ξ ¯ t Ξ t , j 2 + 3 i = 1 Q 1 Q j = 1 Q E f i ( Ξ ¯ t ) f j ( Ξ ¯ t ) 2 6 L 2 E Ξ t 1 Ξ ¯ t 2 + 3 Q ζ 2
the last two inequalities here are attributed to Assumptions 2 and 6.
Thus, Lemma 2 is proved. □
Proof of Lemma 3. 
(1) t = s t = t / p p ; then, we can conclude that mod ( t , p ) = 0 , so there is Ξ t + 1 , i = Ξ ¯ t + 1 , and thus, we have obtained the final result.
(2) We have
Ξ t , i = Ξ s t , i s = s t t 1 η s K s 1 n s , i Ξ ¯ t = Ξ ¯ s t s = s t t 1 η s K s 1 n ¯ s
thus,
i = 1 Q Ξ t , i Ξ ¯ t 2 = i = 1 Q Ξ s t , i Ξ ¯ s t s = s t t 1 η s K s 1 n s , i s = s t t 1 η s K s 1 n ¯ s 2 = i = 1 Q s = s t t 1 η s K s 1 n s , i n ¯ s 2 ( t s t ) s = s t t 1 η s 2 i = 1 Q K s 1 ( n s , i n ¯ s ) 2 ( p 1 ) s = s t t 1 η s 2 i = 1 Q K s 1 ( n s , i n ¯ s ) 2
Lemma 3 is proved. □
Proof of Lemma 4. 
From the smoothness assumption, we know that
f ( Ξ ¯ t + 1 ) f ( Ξ ¯ t ) + f ( Ξ ¯ t ) , Ξ ¯ t + 1 Ξ ¯ t + L 2 Ξ ¯ t + 1 Ξ ¯ t 2 = f ( Ξ ¯ t ) + f ( Ξ ¯ t ) n ¯ t , Ξ ¯ t + 1 Ξ ¯ t ( 1 ) + n ¯ t , Ξ ¯ t + 1 Ξ ¯ t ( 2 ) + L 2 Ξ ¯ t + 1 Ξ ¯ t 2
Regarding (1), we can obtain
( 1 ) = f ( Ξ ¯ t ) n ¯ t , Ξ ¯ t + 1 Ξ ¯ t f Ξ ¯ t n ¯ t Ξ ¯ t + 1 Ξ ¯ t η t ρ f ( Ξ ¯ t ) n ¯ t 2 + ρ 4 η t Ξ ¯ t + 1 Ξ ¯ t 2
Regarding term (2), K t = diag ( ι ¯ s t + ρ ) , Ξ ¯ t + 1 = Ξ ¯ t η t K t 1 n ¯ t . According to the definition of K t and Assumption 7, we can obtain
n ¯ t , 1 η t ( Ξ ¯ t Ξ ¯ t + 1 ) = K t 1 η t ( Ξ ¯ t Ξ ¯ t + 1 ) , 1 η t ( Ξ ¯ t Ξ ¯ t + 1 ) ρ 1 η t ( Ξ ¯ t Ξ ¯ t + 1 ) 2
So, when estimating the second term, there is
( 2 ) = n ¯ t , Ξ ¯ t + 1 Ξ ¯ t η t ρ 1 η t ( Ξ ¯ t Ξ ¯ t + 1 ) 2 = ρ η t Ξ ¯ t + 1 Ξ ¯ t 2
Bringing (A13), (A11) into (A10), we can obtain
f ( Ξ ¯ t + 1 ) f ( Ξ ¯ t ) + η t ρ f ( Ξ ¯ t ) n ¯ t 2 + ρ 4 η t Ξ ¯ t + 1 Ξ ¯ t 2 ρ η t Ξ ¯ t + 1 Ξ ¯ t 2 + L 2 Ξ ¯ t + 1 Ξ ¯ t 2 f ( Ξ ¯ t ) η t 4 ρ f ( Ξ ¯ t ) n ¯ t 2 + 5 η t 4 ρ f ( Ξ ¯ t ) n ¯ t 2 ( 3 ρ 4 η t L 2 ) Ξ ¯ t + 1 Ξ ¯ t 2 f ( Ξ ¯ t ) ( 3 ρ 4 η t L 2 ) Ξ ¯ t + 1 Ξ ¯ t 2 η t 4 ρ f ( Ξ ¯ t ) n ¯ t 2 + 5 η t 2 ρ g ¯ t n ¯ t 2 + 5 η t 2 ρ f ( Ξ ¯ t ) g ¯ t 2
We consider the last term in (A14); taking this expectation on both sides, we have
E f ( Ξ ¯ t ) g ¯ t 2 = E 1 Q i = 1 Q ( f i ( Ξ ¯ t ) g t , i ) 2 1 Q 2 Q i = 1 Q E f i ( Ξ ¯ t ) g t , i 2 L 2 Q i = 1 Q E Ξ t , i Ξ ¯ t 2 = L 2 Q E Ξ t 1 Ξ ¯ t 2
By substituting it into (A14) and then taking the expectation of both sides, we can obtain this conclusion.
Lemma 4 is proved. □
Proof of Lemma 5. 
We know that
n ¯ t = 1 Q i = 1 Q [ ˜ v μ F i ( Ξ t , i ; B t , i ) + ( 1 χ t ) ( n ¯ t 1 ˜ v μ F i ( Ξ t 1 , i ; B t , i ) ) ]
So we can obtain
E n ¯ t g ¯ t 2 = E 1 Q i = 1 Q [ ˜ v μ F i ( Ξ t , i ; B t , i ) + ( 1 χ t ) ( n ¯ t 1 ˜ v μ F i ( Ξ t 1 , i ; B t , i ) ) ] g ¯ t 2 = E 1 Q i = 1 Q [ ( ˜ v μ F i ( Ξ t , i ; B t , i ) g t , i ) ( 1 χ t ) ( ˜ v μ F i ( Ξ t 1 , i ; B t , i ) g t 1 , i ) ] + ( 1 χ t ) ( n ¯ t 1 g ¯ t 1 ) 2 = E 1 b ϑ t , i B t , i 1 Q i = 1 Q [ ( f i μ ( Ξ t , i ; ϑ t , i ) g t , i ) ( 1 χ t ) ( f i μ ( Ξ t 1 , i ; ϑ t , i ) g t 1 , i ) ] + ( 1 χ t ) ( n ¯ t 1 g ¯ t 1 ) 2 2 ( 1 χ t ) 2 E n ¯ t 1 g ¯ t 1 2 + 2 b 2 Q 2 b Q ϑ t , i B t , i i = 1 Q E ( f i μ ( Ξ t , i ; ϑ t , i ) g t , i ) ( 1 χ t ) ( f i μ ( Ξ t 1 , i ; ϑ t , i ) g t 1 , i ) 2 2 ( 1 χ t ) 2 E n ¯ t 1 g ¯ t 1 2 + 2 b Q ϑ t , i B t , i i = 1 Q E ( 1 χ t ) [ ( f i μ ( Ξ t , i ; ϑ t , i ) g t , i ) ( f i μ ( Ξ t 1 , i ; ϑ t , i ) g t 1 , i ) ] + χ t ( f i μ ( Ξ t , i ; ϑ t , i ) g t , i ) 2 2 ( 1 χ t ) 2 E n ¯ t 1 g ¯ t 1 2 + 4 ( 1 χ t ) 2 b Q ϑ t , i B t , i i = 1 Q E f i μ ( Ξ t , i ; ϑ t , i ) f i μ ( Ξ t 1 , i ; ϑ t , i ) 2 + 4 χ t 2 b Q ϑ t , i B t , i i = 1 Q E f i μ ( Ξ t , i ; ϑ t , i ) g t , i 2 2 ( 1 χ t ) 2 E n ¯ t 1 g ¯ t 1 2 + 4 ( 1 χ t ) 2 L 2 Q i = 1 Q E Ξ t , i Ξ t 1 , i 2 + 4 χ t 2 L 2 μ 2 = 2 ( 1 χ t ) 2 E n ¯ t 1 g ¯ t 1 2 + 4 ( 1 χ t ) 2 L 2 Q E Ξ t Ξ t 1 2 + 4 χ t 2 L 2 μ 2
where the first inequality is due to a + b 2 2 a 2 + 2 b 2 and the last inequality is because of the L-smoothness and Lemma 2.
Lemma 5 is proved. □
Proof of Lemma 6. 
We first observe that
i = 1 Q E K t 1 ( n t , i n ¯ t ) 2 i = 1 Q E K t 1 [ [ ˜ v μ F i ( Ξ t , i ; B t , i ) + ( 1 χ t ) ( n t 1 , i ˜ v μ F i ( Ξ t 1 , i ; B t , i ) ) ] 1 Q i = 1 Q [ ˜ v μ F i ( Ξ t , i ; B t , i ) + ( 1 χ t ) ( n t 1 , i ˜ v μ F i ( Ξ t 1 , i ; B t , i ) ) ] ] 2 = i = 1 Q E K t 1 [ ( 1 χ t ) ( n t 1 , i n ¯ t 1 ) + [ ˜ v μ F i ( Ξ t , i ; B t , i ) 1 Q i = 1 Q ˜ v μ F i ( Ξ t , i ; B t , i ) ( 1 χ t ) ( ˜ v μ F i ( Ξ t 1 , i ; B t , i ) 1 Q i = 1 Q ˜ v μ F i ( Ξ t 1 , i ; B t , i ) ) ] ] 2 ( 1 + γ ) ( 1 χ t ) 2 i = 1 Q E K t 1 ( n t 1 , i n ¯ t 1 ) 2 + ( 1 + 1 γ ) 1 ρ 2 i = 1 Q E [ ˜ v μ F i ( Ξ t , i ; B t , i ) 1 Q i = 1 Q ˜ v μ F i ( Ξ t , i ; B t , i ) ] ( 1 χ t ) [ ˜ v μ F i ( Ξ t 1 , i ; B t , i ) 1 Q i = 1 Q ˜ v μ F i ( Ξ t 1 , i ; B t , i ) ] 2
where (A17) arises is because K t ρ I d . Continuing to process the latter term in (A17) yields
i = 1 Q E ˜ v μ F i ( Ξ t , i ; B t , i ) 1 Q i = 1 Q ˜ v μ F i ( Ξ t , i ; B t , i ) ( 1 χ t ) [ ˜ v μ F i ( Ξ t 1 , i ; B t , i ) 1 Q i = 1 Q ˜ v μ F i ( Ξ t 1 , i ; B t , i ) ] 2 2 i = 1 Q E [ ˜ v μ F i ( Ξ t , i ; B t , i ) 1 Q i = 1 Q ˜ v μ F i ( Ξ t , i ; B t , i ) ] [ ˜ v μ F i ( Ξ t 1 , i ; B t , i ) 1 Q i = 1 Q ˜ v μ F i ( Ξ t 1 , i ; B t , i ) ] 2 + 2 χ t 2 i = 1 Q E ˜ v μ F i ( Ξ t 1 , i ; B t , i ) 1 Q i = 1 Q ˜ v μ F i ( Ξ t 1 , i ; B t , i ) 2 2 i = 1 Q E ˜ v μ F i ( Ξ t , i ; B t , i ) ˜ v μ F i ( Ξ t 1 , i ; B t , i ) 2 + 2 χ t 2 i = 1 Q E ˜ v μ F i ( Ξ t 1 , i ; B t , i ) 1 Q i = 1 Q ˜ v μ F i ( Ξ t 1 , i ; B t , i ) 2 2 i = 1 Q L 2 E Ξ t , i Ξ t 1 , i 2 + 2 χ t 2 i = 1 Q E ˜ v μ F i ( Ξ t 1 , i ; B t , i ) 1 Q i = 1 Q ˜ v μ F i ( Ξ t 1 , i ; B t , i ) 2 = 2 L 2 E Ξ t Ξ t 1 2 + 2 χ t 2 i = 1 Q E ˜ v μ F i ( Ξ t 1 , i ; B t , i ) 1 Q i = 1 Q ˜ v μ F i ( Ξ t 1 , i ; B t , i ) 2
the last two inequalities here are attributed to the Lemma 1 in [38] and the L-smoothness. Regarding the last term, we can obtain
i = 1 Q E ˜ v μ F i ( Ξ t 1 , i ; B t , i ) 1 Q j = 1 Q ˜ v μ F j ( Ξ t 1 , j ; B t , j ) 2 = i = 1 Q E [ ˜ v μ F i ( Ξ t 1 , i ; B t , i ) g t 1 , i ] 1 Q j = 1 Q [ ˜ v μ F j ( Ξ t 1 , j ; B t , j ) g t 1 , j ] + [ g t 1 , i g ¯ t 1 ] 2 2 i = 1 Q E ˜ v μ F i ( Ξ t 1 , i ; B t , i ) g t 1 , i 2 + 2 i = 1 Q E g t 1 , i g ¯ t 1 2 2 Q L 2 μ 2 + 6 Q ζ 2 + 12 L 2 E Ξ t 1 1 Ξ ¯ t 1 2
The last two inequalities here are attributed to Lemma 1 in [38] and Lemma 2. Then, by integrating the aforementioned inequalities (A17)–(A19) with the definition of K t , we can deduce that when mod ( t , p ) 0 , the following formula can be reached.
i = 1 Q E K t 1 ( n t , i n ¯ t ) 2 ( 1 χ t ) 2 ( 1 + γ ) i = 1 Q E K t 1 1 ( n t 1 , i n ¯ t 1 ) 2 + 2 L 2 ρ 2 ( 1 + 1 γ ) E Ξ t Ξ t 1 2 + 4 L 2 μ 2 ρ 2 ( 1 + 1 γ ) χ t 2 + 12 Q ρ 2 ζ 2 ( 1 + 1 γ ) χ t 2 + 24 L 2 ( 1 + 1 γ ) χ t 2 ρ 2 E Ξ t 1 1 Ξ ¯ t 1 2 ( 1 χ t ) 2 ( 1 + γ ) i = 1 Q E K t 1 1 ( n t 1 , i n ¯ t 1 ) 2 + 4 L 2 μ 2 ρ 2 ( 1 + 1 γ ) χ t 2 + 12 Q ρ 2 ζ 2 ( 1 + 1 γ ) χ t 2 + 2 L 2 ρ 2 ( 1 + 1 γ ) i = 1 Q E Ξ t , i Ξ t 1 , i 2 + 24 L 2 ( 1 + 1 γ ) χ t 2 ρ 2 i = 1 Q E Ξ t 1 , i Ξ ¯ t 1 2 ( 1 χ t ) 2 ( 1 + γ ) i = 1 Q E K t 1 1 ( n t 1 , i n ¯ t 1 ) 2 + 4 L 2 μ 2 ρ 2 ( 1 + 1 γ ) χ t 2 + 12 Q ρ 2 ζ 2 ( 1 + 1 γ ) χ t 2 + 2 L 2 ρ 2 ( 1 + 1 γ ) i = 1 Q E η t 1 K t 1 1 n t 1 , i 2 + 24 L 2 ( 1 + 1 γ ) χ t 2 ρ 2 ( p 1 ) s = s t t 1 η s 2 i = 1 Q E K s 1 ( n s , i n ¯ s ) 2 ( 1 χ t ) 2 ( 1 + γ ) i = 1 Q E K t 1 1 ( n t 1 , i n ¯ t 1 ) 2 + 4 L 2 μ 2 ρ 2 ( 1 + 1 γ ) χ t 2 + 12 Q ρ 2 ζ 2 ( 1 + 1 γ ) χ t 2 + 24 L 2 ( 1 + 1 γ ) χ t 2 ρ 2 ( p 1 ) s = s t t 1 η s 2 i = 1 Q E K s 1 ( n s , i n ¯ s ) 2 + 4 L 2 ρ 2 ( 1 + 1 γ ) i = 1 Q E η t 1 K t 1 1 ( n t 1 , i n ¯ t 1 ) 2 + η t 1 K t 1 1 n ¯ t 1 2
where we utilize Lemma 3. Then, combining like terms, we have
i = 1 Q E K t 1 ( n t , i n ¯ t ) 2 [ ( 1 χ t ) 2 ( 1 + γ ) + 4 L 2 ρ 2 ( 1 + 1 γ ) η t 1 2 ] i = 1 Q E K t 1 1 ( n t 1 , i n ¯ t 1 ) 2 + 4 Q L 2 ρ 2 ( 1 + 1 γ ) η t 1 2 E K t 1 1 n ¯ t 1 2 + 4 L 2 μ 2 ρ 2 ( 1 + 1 γ ) χ t 2 + 12 Q ρ 2 ζ 2 ( 1 + 1 γ ) χ t 2 + 24 L 2 ( 1 + 1 γ ) χ t 2 ρ 2 ( p 1 ) s = s t t 1 η s 2 i = 1 Q E K s 1 ( n s , i n ¯ s ) 2
We select γ = 1 p , η t ρ 12 L p , and we know that χ t ( 0 ,   1 ) , then
( 1 χ t ) 2 ( 1 + γ ) + 4 L 2 ρ 2 ( 1 + 1 γ ) η t 1 2 1 + 1 p + 4 L 2 ρ 2 ( 1 + p ) η t 1 2 1 + 1 p + p + 1 36 p 2 1 + 19 18 p
Substitute (A22) into (A21); then, we can obtain
i = 1 Q E K t 1 ( n t , i n ¯ t ) 2 ( 1 + 19 18 p ) i = 1 Q E K t 1 1 ( n t 1 , i n ¯ t 1 ) 2 + 4 Q L 2 ρ 2 ( 1 + 1 γ ) η t 1 2 E K t 1 1 n ¯ t 1 2 + 4 L 2 μ 2 ρ 2 ( 1 + 1 γ ) χ t 2 + 12 Q ρ 2 ζ 2 ( 1 + 1 γ ) χ t 2 + 24 L 2 ( 1 + 1 γ ) χ t 2 ρ 2 ( p 1 ) s = s t t 1 η s 2 i = 1 Q E K s 1 ( n s , i n ¯ s ) 2 ( 1 + 19 18 p ) i = 1 Q E K t 1 1 ( n t 1 , i n ¯ t 1 ) 2 + 2 Q L 3 ρ η t 1 E K t 1 1 n ¯ t 1 2 + 2 L μ 2 c 2 3 ρ η t 1 3 + 2 Q ζ 2 c 2 L ρ η t 1 3 + 48 L 2 p 2 c 2 η t 1 4 ρ 2 s = s t t 1 η s 2 i = 1 Q E K s 1 ( n s , i n ¯ s ) 2
where we consider χ t = c η t 1 2 and 1 + p p + p = 2 p .
On the other hand, if mod ( t , p ) = 0 , that is t = s t , it follows that i = 1 Q E K t 1 ( n t , i n ¯ t ) 2 = 0 . So taking the recursive expansion to (A23), we have
i = 1 Q E K t 1 ( n t , i n ¯ t ) 2 2 Q L 3 ρ s = s t t 1 ( 1 + 19 18 p ) t 1 s η s E K s 1 n ¯ s 2 + 2 L μ 2 c 2 3 ρ + 2 Q ζ 2 c 2 L ρ s = s t t 1 ( 1 + 19 18 p ) t 1 s η s 3 + 48 L 2 p 2 c 2 ρ 2 s = s t t 1 ( 1 + 19 18 p ) t 1 s η s 4 s ¯ = s t s η s ¯ 2 i = 1 Q E K s ¯ 1 ( n s ¯ , i n ¯ s ¯ ) 2 2 Q L 3 ρ s = s t t 1 ( 1 + 19 18 p ) p η s E K s 1 n ¯ s 2 + 2 L μ 2 c 2 3 ρ + 2 Q ζ 2 c 2 L ρ s = s t t 1 ( 1 + 19 18 p ) p η s 3 + 48 L 2 p 3 c 2 ρ 2 ( ρ 12 L p ) 5 ( 1 + 19 18 p ) p s = s t t η s i = 1 Q E K s 1 ( n s , i n ¯ s ) 2 2 Q L ρ s = s t t η s E K s 1 n ¯ s 2 + 2 L μ 2 c 2 ρ + 6 Q ζ 2 c 2 L ρ s = s t t η s 3 + 144 L 2 p 3 c 2 ρ 2 ( ρ 12 L p ) 5 s = s t t η s i = 1 Q E K s 1 ( n s , i n ¯ s ) 2
where we utilize that t 1 s p 1 < p and ( 1 + 19 18 p ) p e 19 18 3 . Then, by multiplying both sides by t = s t s ¯ η t , we can obtain
t = s t s ¯ η t i = 1 Q E K t 1 ( n t , i n ¯ t ) 2 2 Q L ρ t = s t s ¯ η t s = s t t η s E K s 1 n ¯ s 2 + 2 L μ 2 c 2 ρ + 6 Q ζ 2 c 2 L ρ t = s t s ¯ η t s = s t t η s 3 + 144 L 2 p 3 c 2 ρ 2 ( ρ 12 L p ) 5 t = s t s ¯ η t s = s t t η s i = 1 Q E K s 1 ( n s , i n ¯ s ) 2
Finally,
t = s t s ¯ η t i = 1 Q E K t 1 ( n t , i n ¯ t ) 2 2 Q L ρ ( t = s t s ¯ η t ) t = s t s ¯ η t E K t 1 n ¯ t 2 + 2 L μ 2 c 2 ρ + 6 Q ζ 2 c 2 L ρ ( t = s t s ¯ η t ) t = s t s ¯ η t 3 + 144 L 2 p 3 c 2 ρ 2 ( ρ 12 L p ) 5 ( t = s t s ¯ η t ) t = s t s ¯ η t i = 1 Q E K t 1 ( n t , i n ¯ t ) 2 Q 6 t = s t s ¯ η t E K t 1 n ¯ t 2 + μ 2 c 2 6 + Q ζ 2 c 2 2 L 2 t = s t s ¯ η t 3 + 144 L 2 p 4 c 2 ρ 2 ( ρ 12 L p ) 6 t = s t s ¯ η t i = 1 Q E K t 1 ( n t , i n ¯ t ) 2
where the last inequality is because η t ρ 12 L p , so t = s t s ¯ η t ( s ¯ s t ) ρ 12 L p p ρ 12 L p = ρ 12 L .
Therefore,
[ 1 144 L 2 p 4 c 2 ρ 2 ( ρ 12 L p ) 6 ] t = s t s ¯ η t i = 1 Q E K t 1 ( n t , i n ¯ t ) 2 Q 6 t = s t s ¯ η t E K t 1 n ¯ t 2 + μ 2 c 2 6 + Q ζ 2 c 2 2 L 2 t = s t s ¯ η t 3
given that c 60 L 2 ρ 2 and 1 144 L 2 p 4 c 2 ρ 2 ( ρ 12 L p ) 6 20 72 . By multiplying by 3 ρ 2 Q on both sides, we can obtain
30 ρ 72 Q t = s t s ¯ η t i = 1 Q E K t 1 ( n t , i n ¯ t ) 2 ρ 4 t = s t s ¯ η t E K t 1 n ¯ t 2 + ρ μ 2 c 2 4 Q + 3 ρ ζ 2 c 2 4 L 2 t = s t s ¯ η t 3 = ρ 4 t = s t s ¯ 1 η t E Ξ ¯ t + 1 Ξ ¯ t 2 + ρ μ 2 c 2 4 Q + 3 ρ ζ 2 c 2 4 L 2 t = s t s ¯ η t 3
Lemma 6 is proved. □
Now let us prove the final theorem.
Proof of Theorem 1. 
We set η t = ρ h ¯ w t + t 1 / 3 , χ t + 1 = c · η t 2 , c = 1 12 L p h ¯ 3 ρ 2 + 30 L 2 ρ 2 60 L 2 ρ 2 , h ¯ = 1 L and w t = max ( 3 2 , 1728 L 3 p 3 h ¯ 3 t ) . So we can infer that
η t = min { ρ 1 L 3 2 + t 1 3 , ρ 1 L 1728 L 3 p 3 1 L 3 t + t 1 3 } = min { ρ L 3 2 + t 1 3 , ρ 12 L p }
It is clear that η t ρ 12 L p . And
2 η t 1 η t 1 1 = 2 ( w t + t ) 1 / 3 ρ h ¯ ( w t 1 + t 1 ) 1 / 3 ρ h ¯ 2 3 ρ h ¯ ( w t + ( t 1 ) ) 2 / 3 2 3 ρ h ¯ ( w t / 3 + t / 3 ) 2 / 3 = 2 · 3 2 / 3 3 ρ h ¯ ( w t + t ) 2 / 3 = 2 · 3 2 / 3 3 ρ 3 h ¯ 3 · ρ 2 h ¯ 2 ( w t + t ) 2 / 3 = 2 · 3 2 / 3 3 ρ 3 h ¯ 3 η t 2 η t 6 ρ 2 h ¯ 3 L p
where we leverage the concavity of $f(\Xi)=\Xi^{1/3}$, that is, $(\Xi+y)^{1/3}\le\Xi^{1/3}+\frac{y}{3\Xi^{2/3}}$, together with the fact that $w_{t-1}\ge w_t$. The second inequality is valid because $w_t\ge\frac{3}{2}$, and the last inequality is obtained based on $\eta_t\le\frac{\rho}{12Lp}$.
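As a quick numerical sanity check of this schedule, the following minimal Python sketch evaluates $\eta_t$ from the definition of $w_t$, confirms the closed-form $\min\{\cdot\}$ expression, and verifies the bound (A29) on a sampled range. The constants $L$, $\rho$ and $p$ below are illustrative choices of ours, not values taken from the paper:

```python
import math

# Illustrative constants (chosen for this sketch, not taken from the paper).
L, rho, p = 2.0, 0.5, 1
h_bar = 1.0 / L

def w(t):
    # w_t = max(3/2, 1728 L^3 p^3 h_bar^3 - t), as set in the proof of Theorem 1.
    return max(1.5, 1728.0 * L**3 * p**3 * h_bar**3 - t)

def eta(t):
    # eta_t = rho * h_bar / (w_t + t)^(1/3)
    #       = min{ rho / (L (3/2 + t)^(1/3)), rho / (12 L p) }.
    return rho * h_bar / (w(t) + t) ** (1.0 / 3.0)

for t in range(1, 20000):
    # Bound (A29): 2 (1/eta_t - 1/eta_{t-1}) <= eta_t / (6 rho^2 h_bar^3 L p).
    lhs = 2.0 * (1.0 / eta(t) - 1.0 / eta(t - 1))
    rhs = eta(t) / (6.0 * rho**2 * h_bar**3 * L * p)
    assert lhs <= rhs + 1e-12, (t, lhs, rhs)
    # The closed-form min{...} expression agrees with eta_t.
    closed = min(rho / (L * (1.5 + t) ** (1.0 / 3.0)), rho / (12.0 * L * p))
    assert math.isclose(eta(t), closed)
print("(A29) and the min-form of eta_t verified on the sampled range")
```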
$$
\begin{aligned}
\frac{\mathbb{E}\left\|\bar{n}_{t+1}-\bar{g}_{t+1}\right\|^2}{\eta_t}-\frac{\mathbb{E}\left\|\bar{n}_t-\bar{g}_t\right\|^2}{\eta_{t-1}}
&\le2\Big(\frac{(1-\chi_{t+1})^2}{\eta_t}-\frac{1}{\eta_{t-1}}\Big)\mathbb{E}\left\|\bar{n}_t-\bar{g}_t\right\|^2+\frac{4(1-\chi_{t+1})^2L^2}{Q\eta_t}\mathbb{E}\left\|\Xi_{t+1}-\Xi_t\right\|^2+\frac{4\chi_{t+1}^2L^2\mu^2}{\eta_t}\\
&\le\Big[2\Big(\frac{1}{\eta_t}-\frac{1}{\eta_{t-1}}\Big)-2c\eta_t\Big]\mathbb{E}\left\|\bar{n}_t-\bar{g}_t\right\|^2+\frac{8(1-\chi_{t+1})^2L^2\eta_t}{Q}\sum_{i=1}^{Q}\mathbb{E}\left\|K_t^{-1}(n_{t,i}-\bar{n}_t)\right\|^2\\
&\quad+8(1-\chi_{t+1})^2L^2\eta_t\,\mathbb{E}\left\|K_t^{-1}\bar{n}_t\right\|^2+\frac{4\chi_{t+1}^2L^2\mu^2}{\eta_t}\\
&\le-\frac{60L^2}{\rho^2}\eta_t\,\mathbb{E}\left\|\bar{n}_t-\bar{g}_t\right\|^2+\frac{8L^2\eta_t}{Q}\sum_{i=1}^{Q}\mathbb{E}\left\|K_t^{-1}(n_{t,i}-\bar{n}_t)\right\|^2+8L^2\eta_t\,\mathbb{E}\left\|K_t^{-1}\bar{n}_t\right\|^2+4c^2\eta_t^3L^2\mu^2,
\end{aligned}
$$
where the second inequality holds true because $(a+b)^2\le2a^2+2b^2$ and $(1-\chi_{t+1})^2\le1-\chi_{t+1}$ with $\chi_{t+1}=c\cdot\eta_t^2$, and the last inequality is obtained based on (A29). Therefore, we have
$$
\frac{\rho}{24L^2}\Big[\frac{\mathbb{E}\left\|\bar{n}_{t+1}-\bar{g}_{t+1}\right\|^2}{\eta_t}-\frac{\mathbb{E}\left\|\bar{n}_t-\bar{g}_t\right\|^2}{\eta_{t-1}}\Big]
\le-\frac{5\eta_t}{2\rho}\mathbb{E}\left\|\bar{n}_t-\bar{g}_t\right\|^2+\frac{\rho\eta_t}{3Q}\sum_{i=1}^{Q}\mathbb{E}\left\|K_t^{-1}(n_{t,i}-\bar{n}_t)\right\|^2+\frac{\rho\eta_t}{3}\mathbb{E}\left\|K_t^{-1}\bar{n}_t\right\|^2+\frac{\rho c^2\eta_t^3\mu^2}{6}
$$
Subsequently, we set
$$
\Gamma_t=f(\bar{\Xi}_t)+\frac{\rho}{24L^2}\cdot\frac{\left\|\bar{n}_t-\bar{g}_t\right\|^2}{\eta_{t-1}}
$$
Then
$$
\begin{aligned}
\mathbb{E}[\Gamma_{t+1}-\Gamma_t]&=\mathbb{E}\Big[f(\bar{\Xi}_{t+1})-f(\bar{\Xi}_t)+\frac{\rho}{24L^2}\Big(\frac{\left\|\bar{n}_{t+1}-\bar{g}_{t+1}\right\|^2}{\eta_t}-\frac{\left\|\bar{n}_t-\bar{g}_t\right\|^2}{\eta_{t-1}}\Big)\Big]\\
&\le-\Big(\frac{3\rho}{4\eta_t}-\frac{L}{2}\Big)\mathbb{E}\left\|\bar{\Xi}_{t+1}-\bar{\Xi}_t\right\|^2-\frac{\eta_t}{4\rho}\mathbb{E}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|^2+\frac{5\eta_tL^2(p-1)}{2\rho Q}\sum_{s=s_t}^{t}\eta_s^2\sum_{i=1}^{Q}\mathbb{E}\left\|K_s^{-1}(n_{s,i}-\bar{n}_s)\right\|^2\\
&\quad+\frac{\rho\eta_t}{3Q}\sum_{i=1}^{Q}\mathbb{E}\left\|K_t^{-1}(n_{t,i}-\bar{n}_t)\right\|^2+\frac{\rho\eta_t}{3}\mathbb{E}\left\|K_t^{-1}\bar{n}_t\right\|^2+\frac{\rho c^2\eta_t^3\mu^2}{6}\\
&=-\Big(\frac{3\rho}{4\eta_t}-\frac{L}{2}\Big)\mathbb{E}\left\|\bar{\Xi}_{t+1}-\bar{\Xi}_t\right\|^2-\frac{\eta_t}{4\rho}\mathbb{E}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|^2+\frac{5\eta_tL^2(p-1)}{2\rho Q}\sum_{s=s_t}^{t}\eta_s^2\sum_{i=1}^{Q}\mathbb{E}\left\|K_s^{-1}(n_{s,i}-\bar{n}_s)\right\|^2\\
&\quad+\frac{\rho\eta_t}{3Q}\sum_{i=1}^{Q}\mathbb{E}\left\|K_t^{-1}(n_{t,i}-\bar{n}_t)\right\|^2+\frac{\rho}{3\eta_t}\mathbb{E}\left\|\bar{\Xi}_{t+1}-\bar{\Xi}_t\right\|^2+\frac{\rho c^2\eta_t^3\mu^2}{6}\\
&\le-\frac{\rho}{3\eta_t}\mathbb{E}\left\|\bar{\Xi}_{t+1}-\bar{\Xi}_t\right\|^2-\frac{\eta_t}{4\rho}\mathbb{E}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|^2+\frac{5\eta_tL^2(p-1)}{2\rho Q}\sum_{s=s_t}^{t}\eta_s^2\sum_{i=1}^{Q}\mathbb{E}\left\|K_s^{-1}(n_{s,i}-\bar{n}_s)\right\|^2\\
&\quad+\frac{\rho\eta_t}{3Q}\sum_{i=1}^{Q}\mathbb{E}\left\|K_t^{-1}(n_{t,i}-\bar{n}_t)\right\|^2+\frac{\rho c^2\eta_t^3\mu^2}{6}
\end{aligned}
$$
where we utilize Lemma 3, Lemma 4 and $\frac{L}{2}\le\frac{\rho}{24\eta_tp}\le\frac{\rho}{24\eta_t}$. By summing the results from $t=s_t$ to $\bar{s}$, where $\bar{s}\in\big[\lfloor t/p\rfloor p,\,(\lfloor t/p\rfloor+1)p\big]$, we can obtain
$$
\begin{aligned}
\mathbb{E}[\Gamma_{\bar{s}+1}-\Gamma_{s_t}]&\le\sum_{t=s_t}^{\bar{s}}\Big[-\frac{\rho}{3\eta_t}\mathbb{E}\left\|\bar{\Xi}_{t+1}-\bar{\Xi}_t\right\|^2-\frac{\eta_t}{4\rho}\mathbb{E}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|^2\Big]+\sum_{t=s_t}^{\bar{s}}\frac{\rho c^2\eta_t^3\mu^2}{6}\\
&\quad+\sum_{t=s_t}^{\bar{s}}\frac{5\eta_tL^2(p-1)}{2\rho Q}\sum_{s=s_t}^{t}\eta_s^2\sum_{i=1}^{Q}\mathbb{E}\left\|K_s^{-1}(n_{s,i}-\bar{n}_s)\right\|^2+\frac{\rho}{3Q}\sum_{t=s_t}^{\bar{s}}\eta_t\sum_{i=1}^{Q}\mathbb{E}\left\|K_t^{-1}(n_{t,i}-\bar{n}_t)\right\|^2\\
&\le\sum_{t=s_t}^{\bar{s}}\Big[-\frac{\rho}{3\eta_t}\mathbb{E}\left\|\bar{\Xi}_{t+1}-\bar{\Xi}_t\right\|^2-\frac{\eta_t}{4\rho}\mathbb{E}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|^2\Big]+\sum_{t=s_t}^{\bar{s}}\frac{\rho c^2\eta_t^3\mu^2}{6}\\
&\quad+\frac{5L^2(p-1)}{2\rho Q}\Big(\sum_{t=s_t}^{\bar{s}}\eta_t\Big)\sum_{t=s_t}^{\bar{s}}\eta_t^2\sum_{i=1}^{Q}\mathbb{E}\left\|K_t^{-1}(n_{t,i}-\bar{n}_t)\right\|^2+\frac{\rho}{3Q}\sum_{t=s_t}^{\bar{s}}\eta_t\sum_{i=1}^{Q}\mathbb{E}\left\|K_t^{-1}(n_{t,i}-\bar{n}_t)\right\|^2\\
&\le\sum_{t=s_t}^{\bar{s}}\Big[-\frac{\rho}{3\eta_t}\mathbb{E}\left\|\bar{\Xi}_{t+1}-\bar{\Xi}_t\right\|^2-\frac{\eta_t}{4\rho}\mathbb{E}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|^2\Big]+\frac{\rho}{3Q}\sum_{t=s_t}^{\bar{s}}\eta_t\sum_{i=1}^{Q}\mathbb{E}\left\|K_t^{-1}(n_{t,i}-\bar{n}_t)\right\|^2\\
&\quad+\frac{5L^2(p-1)}{2\rho Q}\times p\times\frac{\rho}{12Lp}\times\frac{\rho}{12Lp}\sum_{t=s_t}^{\bar{s}}\eta_t\sum_{i=1}^{Q}\mathbb{E}\left\|K_t^{-1}(n_{t,i}-\bar{n}_t)\right\|^2+\sum_{t=s_t}^{\bar{s}}\frac{\rho c^2\eta_t^3\mu^2}{6}\\
&\le\sum_{t=s_t}^{\bar{s}}\Big[-\frac{\rho}{3\eta_t}\mathbb{E}\left\|\bar{\Xi}_{t+1}-\bar{\Xi}_t\right\|^2-\frac{\eta_t}{4\rho}\mathbb{E}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|^2\Big]+\sum_{t=s_t}^{\bar{s}}\frac{\rho c^2\eta_t^3\mu^2}{6}+\frac{26\rho}{72Q}\sum_{t=s_t}^{\bar{s}}\eta_t\sum_{i=1}^{Q}\mathbb{E}\left\|K_t^{-1}(n_{t,i}-\bar{n}_t)\right\|^2\\
&\le\sum_{t=s_t}^{\bar{s}}\Big[-\frac{\rho}{3\eta_t}\mathbb{E}\left\|\bar{\Xi}_{t+1}-\bar{\Xi}_t\right\|^2-\frac{\eta_t}{4\rho}\mathbb{E}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|^2\Big]+\sum_{t=s_t}^{\bar{s}}\frac{\rho c^2\eta_t^3\mu^2}{6}+\frac{\rho}{4}\sum_{t=s_t}^{\bar{s}}\frac{1}{\eta_t}\mathbb{E}\left\|\bar{\Xi}_{t+1}-\bar{\Xi}_t\right\|^2+\Big(\frac{\rho\mu^2c^2}{4Q}+\frac{3\rho\zeta^2c^2}{4L^2}\Big)\sum_{t=s_t}^{\bar{s}}\eta_t^3
\end{aligned}
$$
where, by utilizing Lemma 6 and the fact that $\frac{26}{72}<\frac{30}{72}$, we can derive the last inequality. Subsequently, summing these per-block bounds from the start, we can obtain
$$
\mathbb{E}[\Gamma_T-\Gamma_0]\le\sum_{t=0}^{T-1}\Big[-\frac{\rho}{12\eta_t}\mathbb{E}\left\|\bar{\Xi}_{t+1}-\bar{\Xi}_t\right\|^2-\frac{\eta_t}{4\rho}\mathbb{E}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|^2\Big]+\sum_{t=0}^{T-1}\frac{\rho c^2\eta_t^3\mu^2}{6}+\frac{\rho\mu^2c^2}{4Q}\sum_{t=0}^{T-1}\eta_t^3+\frac{3\rho\zeta^2c^2}{4L^2}\sum_{t=0}^{T-1}\eta_t^3
$$
Furthermore, we can obtain
$$
\sum_{t=0}^{T-1}\mathbb{E}\Big[\frac{\rho}{12\eta_t}\left\|\bar{\Xi}_{t+1}-\bar{\Xi}_t\right\|^2+\frac{\eta_t}{4\rho}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|^2\Big]\le\mathbb{E}[\Gamma_0-\Gamma_T]+\frac{5\rho\mu^2c^2}{12}\sum_{t=0}^{T-1}\eta_t^3+\frac{3\rho\zeta^2c^2}{4L^2}\sum_{t=0}^{T-1}\eta_t^3\le\mathbb{E}\Big[f(\bar{\Xi}_0)-f^*+\frac{\rho}{24L^2}\cdot\frac{\left\|\bar{n}_0-\bar{g}_0\right\|^2}{\eta_0}\Big]+\frac{5\rho\mu^2c^2}{12}\sum_{t=0}^{T-1}\eta_t^3+\frac{3\rho\zeta^2c^2}{4L^2}\sum_{t=0}^{T-1}\eta_t^3
$$
Then, consider that $\sum_{t=0}^{T-1}\eta_t^3=\sum_{t=0}^{T-1}\frac{\rho^3\bar{h}^3}{w_t+t}\le\sum_{t=0}^{T-1}\frac{\rho^3\bar{h}^3}{1+t}\le\rho^3\bar{h}^3(\ln T+1)$, since $w_t\ge\frac{3}{2}>1$. Applying Lemma 2 and dividing both sides of the above result by $\rho\,\eta_TT$, we can obtain (A36):
$$
\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\Big[\frac{1}{12\eta_t^2}\left\|\bar{\Xi}_{t+1}-\bar{\Xi}_t\right\|^2+\frac{1}{4\rho^2}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|^2\Big]\le\frac{\mathbb{E}\big[f(\bar{\Xi}_0)-f^*\big]}{\eta_TT\rho}+\frac{\mu^2}{24Q\,\eta_0\,\eta_TT}+\frac{\rho^3c^2\bar{h}^3}{\eta_TTL^2}\Big(\frac{5L^2\mu^2}{12}+\frac{3\zeta^2}{4}\Big)(\ln T+1)
$$
Regarding the first term in (A36),
$$
\frac{1}{\eta_TT}=\frac{(w_T+T)^{1/3}}{\rho\bar{h}\,T}\le\frac{w_T^{1/3}}{\rho\bar{h}\,T}+\frac{1}{\rho\bar{h}\,T^{2/3}}\le\frac{12Lp}{\rho T}+\frac{L}{\rho T^{2/3}}
$$
For the middle term, we have
$$
\frac{\mu^2}{24Q\,\eta_0\,\eta_TT}\le\Big(\frac{12Lp}{\rho T}+\frac{L}{\rho T^{2/3}}\Big)\times\frac{\mu^2}{24Q}\times\frac{w_0^{1/3}}{\rho\bar{h}}\le\Big(\frac{12Lp}{\rho T}+\frac{L}{\rho T^{2/3}}\Big)\times\frac{\mu^2}{24Q}\times\frac{12Lp}{\rho}\le\frac{6L^2\mu^2p^2}{\rho^2QT}+\frac{L^2\mu^2p}{2\rho^2QT^{2/3}}
$$
For the third term,
$$
\frac{\rho^3c^2\bar{h}^3}{4\eta_TTL^2}\le\Big(\frac{12Lp}{\rho T}+\frac{L}{\rho T^{2/3}}\Big)\times\Big(\frac{60L^2}{\rho^2}\Big)^2\times\frac{\rho^3\bar{h}^3}{4L^2}=\Big(\frac{12Lp}{\rho T}+\frac{L}{\rho T^{2/3}}\Big)\times\frac{3600L^4}{\rho^4}\times\frac{\rho^3\frac{1}{L^3}}{4L^2}=\Big(\frac{12Lp}{\rho T}+\frac{L}{\rho T^{2/3}}\Big)\times\frac{900}{\rho L}=\frac{12^2\times75\,p}{\rho^2T}+\frac{900}{\rho^2T^{2/3}}
$$
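The constant simplification in this last chain is pure arithmetic, using only $c\le\frac{60L^2}{\rho^2}$ and $\bar{h}=\frac{1}{L}$:
$$
c^2\cdot\frac{\rho^3\bar{h}^3}{4L^2}\le\frac{3600L^4}{\rho^4}\cdot\frac{\rho^3}{4L^5}=\frac{900}{\rho L}.
$$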
We let $Q_t=\frac{1}{12\eta_t^2}\left\|\bar{\Xi}_{t+1}-\bar{\Xi}_t\right\|^2+\frac{1}{4\rho^2}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|^2$. Then
$$
\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[Q_t]=\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}\Big[\frac{1}{12\eta_t^2}\left\|\bar{\Xi}_{t+1}-\bar{\Xi}_t\right\|^2+\frac{1}{4\rho^2}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|^2\Big]\le\Big(\frac{12Lp}{\rho^2T}+\frac{L}{\rho^2T^{2/3}}\Big)\mathbb{E}\big[f(\bar{\Xi}_0)-f^*\big]+\frac{6L^2\mu^2p^2}{\rho^2QT}+\frac{L^2\mu^2p}{2\rho^2QT^{2/3}}+\Big(\frac{12^2\times75\,p}{\rho^2T}+\frac{900}{\rho^2T^{2/3}}\Big)\Big(\frac{5L^2\mu^2}{3}+3\zeta^2\Big)(\ln T+1)
$$
and if we choose $p=\big(\frac{T}{Q^2}\big)^{1/3}$, then $\frac{p}{T}=\frac{1}{(QT)^{2/3}}$, $\frac{p^2}{QT}=\frac{1}{Q^{7/3}T^{1/3}}$ and $\frac{p}{QT^{2/3}}=\frac{1}{Q^{5/3}T^{1/3}}$. So we can infer that $\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[Q_t]$ is convergent.
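Substituting these three relations into the bound above and suppressing the constants, every term on the right-hand side is a vanishing power of $T$ (up to the logarithmic factor); schematically,
$$
\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[Q_t]=O\Bigg(\frac{\ln T}{(QT)^{2/3}}+\frac{\ln T}{T^{2/3}}+\frac{1}{Q^{7/3}T^{1/3}}+\frac{1}{Q^{5/3}T^{1/3}}\Bigg)\longrightarrow0\quad\text{as }T\to\infty.
$$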
Then, with Jensen's inequality and $K_t\succeq\rho I$, we can obtain
$$
\frac{1}{\eta_t}\left\|\bar{\Xi}_t-\bar{\Xi}_{t+1}\right\|+\frac{1}{\rho}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|
=\left\|K_t^{-1}\bar{n}_t\right\|+\frac{1}{\rho}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|
\ge\frac{1}{\|K_t\|}\left\|K_tK_t^{-1}\bar{n}_t\right\|+\frac{1}{\rho}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|
\ge\frac{1}{\|K_t\|}\left\|\bar{n}_t\right\|+\frac{1}{\|K_t\|}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|
\ge\frac{1}{\|K_t\|}\left\|\nabla f(\bar{\Xi}_t)\right\|
$$
Finally,
$$
\begin{aligned}
\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\left\|\nabla f(\bar{\Xi}_t)\right\|
&\le\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\Big[\|K_t\|\Big(\frac{1}{\eta_t}\left\|\bar{\Xi}_t-\bar{\Xi}_{t+1}\right\|+\frac{1}{\rho}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|\Big)\Big]\\
&\le\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\Big[\frac{\lambda}{2}\|K_t\|^2+\frac{1}{2\lambda}\Big(\frac{1}{\eta_t}\left\|\bar{\Xi}_t-\bar{\Xi}_{t+1}\right\|+\frac{1}{\rho}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|\Big)^2\Big]\\
&=\frac{\lambda}{2}\cdot\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\|K_t\|^2+\frac{1}{2\lambda}\cdot\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\Big(\frac{1}{\eta_t}\left\|\bar{\Xi}_t-\bar{\Xi}_{t+1}\right\|+\frac{1}{\rho}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|\Big)^2\\
&\le\sqrt{\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\|K_t\|^2}\cdot\sqrt{\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\Big[\frac{2}{\eta_t^2}\left\|\bar{\Xi}_t-\bar{\Xi}_{t+1}\right\|^2+\frac{2}{\rho^2}\left\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\right\|^2\Big]}\\
&\le\sqrt{\frac{5}{2}d(G^2+\sigma_g^2)+2\rho^2+\frac{1}{2}d^2L^2\mu^2}\cdot\sqrt{\frac{24}{T}\sum_{t=0}^{T-1}\mathbb{E}[Q_t]}
\end{aligned}
$$
where we utilize Young's inequality and $(a+b)^2\le2a^2+2b^2$, with $\lambda=\sqrt{\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\big(\frac{1}{\eta_t}\|\bar{\Xi}_t-\bar{\Xi}_{t+1}\|+\frac{1}{\rho}\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\|\big)^2\Big/\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\|K_t\|^2}$, together with $\frac{2}{\eta_t^2}\|\bar{\Xi}_t-\bar{\Xi}_{t+1}\|^2+\frac{2}{\rho^2}\|\nabla f(\bar{\Xi}_t)-\bar{n}_t\|^2\le24Q_t$.
Since $\frac{1}{T}\sum_{t=0}^{T-1}\mathbb{E}[Q_t]$ is convergent, as shown above, it follows that $\frac{1}{T}\sum_{t=1}^{T}\mathbb{E}\left\|\nabla f(\bar{\Xi}_t)\right\|$ is also convergent; thus, the theorem is proved. □
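Although the proof above is purely analytical, it may help to see where the smoothing radius $\mu$ and the dimension $d$ enter in practice. The following is a minimal Python sketch of a generic two-point zeroth-order gradient estimator of the kind analyzed in this family of results; the function name and constants are ours for illustration, and it is not claimed to be the exact estimator used by FAFedZO:

```python
import numpy as np

def zo_gradient(f, x, mu, num_dirs=1, rng=None):
    """Two-point zeroth-order estimate of the gradient of f at x.

    Averages d * (f(x + mu*u) - f(x - mu*u)) / (2*mu) * u over random unit
    directions u. For an L-smooth f, the bias of estimators of this type
    grows with L, mu and d, which is why terms such as d^2 L^2 mu^2 appear
    in the bounds above.
    """
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    g = np.zeros_like(x)
    for _ in range(num_dirs):
        u = rng.standard_normal(d)
        u /= np.linalg.norm(u)  # uniform random direction on the unit sphere
        g += d * (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u
    return g / num_dirs

# Usage: estimate the gradient of a simple quadratic and compare with the truth.
f = lambda x: 0.5 * float(np.dot(x, x))   # gradient of f is x itself
x0 = np.ones(10)
est = zo_gradient(f, x0, mu=1e-4, num_dirs=500)
print(np.linalg.norm(est - x0) / np.linalg.norm(x0))  # small relative error
```

Only function values of $f$ are queried here, which is exactly the black-box setting that motivates the algorithm.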

References

  1. McMahan, B.; Moore, E.; Ramage, D.; Hampson, S.; y Arcas, B.A. Communication-efficient learning of deep networks from decentralized data. In Proceedings of the Artificial Intelligence and Statistics, Fort Lauderdale, FL, USA, 20–22 April 2017; pp. 1273–1282. [Google Scholar]
  2. Shi, Y.; Yang, K.; Yang, Z.; Zhou, Y. Mobile Edge Artificial Intelligence: Opportunities and Challenges; Elsevier: Amsterdam, The Netherlands, 2021. [Google Scholar]
  3. Yang, L.; Tan, B.; Zheng, V.W.; Chen, K.; Yang, Q. Federated recommendation systems. In Federated Learning: Privacy and Incentive; Springer International Publishing: Cham, Switzerland, 2020; pp. 225–239. [Google Scholar]
  4. Yang, K.; Shi, Y.; Zhou, Y.; Yang, Z.; Fu, L.; Chen, W. Federated machine learning for intelligent IoT via reconfigurable intelligent surface. IEEE Netw. 2020, 34, 16–22. [Google Scholar]
  5. Tian, J.; Smith, J.S.; Kira, Z. Fedfor: Stateless heterogeneous federated learning with first-order regularization. arXiv 2022, arXiv:2209.10537. [Google Scholar]
  6. Zhang, M.; Sapra, K.; Fidler, S.; Yeung, S.; Alvarez, J.M. Personalized federated learning with first order model optimization. arXiv 2021, arXiv:2012.08565. [Google Scholar]
  7. Elbakary, A.; Issaid, C.B.; Shehab, M.; Seddik, K.G.; ElBatt, T.A.; Bennis, M. Fed-Sophia: A Communication-Efficient Second-Order Federated Learning Algorithm. arXiv 2024, arXiv:2406.06655. [Google Scholar]
  8. Dai, Z.; Low, B.K.H.; Jaillet, P. Federated Bayesian optimization via Thompson sampling. Adv. Neural Inf. Process. Syst. 2020, 33, 9687–9699. [Google Scholar]
  9. Staib, M.; Reddi, S.; Kale, S.; Kumar, S.; Sra, S. Escaping saddle points with adaptive gradient methods. In Proceedings of the International Conference on Machine Learning, Long Beach, CA, USA, 10–15 June 2019; pp. 5956–5965. [Google Scholar]
  10. Chen, X.; Li, X.; Li, P. Toward communication efficient adaptive gradient method. In Proceedings of the 2020 ACM-IMS on Foundations of Data Science Conference, San Francisco, CA, USA, 18–20 October 2020; pp. 119–128. [Google Scholar]
  11. Reddi, S.; Charles, Z.; Zaheer, M.; Garrett, Z.; Rush, K.; Konečnỳ, J.; Kumar, S.; McMahan, H.B. Adaptive federated optimization. arXiv 2020, arXiv:2003.00295. [Google Scholar]
  12. Wang, Y.; Lin, L.; Chen, J. Communication-efficient adaptive federated learning. In Proceedings of the International Conference on Machine Learning, Baltimore, MD, USA, 17–23 July 2022; pp. 22802–22838. [Google Scholar]
  13. Zhang, P.; Yang, X.; Chen, Z. Neural network gain scheduling design for large envelope curve flight control law. J. Beijing Univ. Aeronaut. Astronaut. 2005, 31, 604–608. [Google Scholar]
  14. Wang, J.; Liu, Q.; Liang, H.; Joshi, G.; Poor, H.V. A novel framework for the analysis and design of heterogeneous federated learning. IEEE Trans. Signal Process. 2021, 69, 5234–5249. [Google Scholar]
  15. Li, T.; Sahu, A.K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; Smith, V. Federated optimization in heterogeneous networks. Proc. Mach. Learn. Syst. 2020, 2, 429–450. [Google Scholar]
  16. Karimireddy, S.P.; Kale, S.; Mohri, M.; Reddi, S.; Stich, S.; Suresh, A.T. Scaffold: Stochastic controlled averaging for federated learning. In Proceedings of the International Conference on Machine Learning, Virtual, 13–18 July 2020; pp. 5132–5143. [Google Scholar]
  17. Pathak, R.; Wainwright, M.J. FedSplit: An algorithmic framework for fast federated optimization. Adv. Neural Inf. Process. Syst. 2020, 33, 7057–7066. [Google Scholar]
  18. Zhang, X.; Hong, M.; Dhople, S.; Yin, W.; Liu, Y. Fedpd: A federated learning framework with adaptivity to non-IID data. IEEE Trans. Signal Process. 2021, 69, 6055–6070. [Google Scholar]
  19. Wang, S.; Roosta, F.; Xu, P.; Mahoney, M.W. Giant: Globally improved approximate newton method for distributed optimization. Adv. Neural Inf. Process. Syst. 2018, 31, 2332–2342. [Google Scholar]
  20. Li, T.; Sahu, A.K.; Zaheer, M.; Sanjabi, M.; Talwalkar, A.; Smith, V. FedDANE: A federated Newton-type method. In Proceedings of the 2019 53rd Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 3–6 November 2019; pp. 1227–1231. [Google Scholar]
  21. Xu, A.; Huang, H. Coordinating momenta for cross-silo federated learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Online, 22 February–1 March 2022; Volume 36, pp. 8735–8743. [Google Scholar]
  22. Das, R.; Acharya, A.; Hashemi, A.; Sanghavi, S.; Dhillon, I.S.; Topcu, U. Faster non-convex federated learning via global and local momentum. In Proceedings of the Uncertainty in Artificial Intelligence, Eindhoven, The Netherlands, 1–5 August 2022; pp. 496–506. [Google Scholar]
  23. Khanduri, P.; Sharma, P.; Yang, H.; Hong, M.; Liu, J.; Rajawat, K.; Varshney, P. Stem: A stochastic two-sided momentum algorithm achieving near-optimal sample and communication complexities for federated learning. Adv. Neural Inf. Process. Syst. 2021, 34, 6050–6061. [Google Scholar]
  24. Tang, Y.; Zhang, J.; Li, N. Distributed zero-order algorithms for nonconvex multiagent optimization. IEEE Trans. Control Netw. Syst. 2020, 8, 269–281. [Google Scholar]
  25. Nikolakakis, K.; Haddadpour, F.; Kalogerias, D.; Karbasi, A. Black-box generalization: Stability of zeroth-order learning. Adv. Neural Inf. Process. Syst. 2022, 35, 31525–31541. [Google Scholar]
  26. Balasubramanian, K.; Ghadimi, S. Zeroth-order (non)-convex stochastic optimization via conditional gradient and gradient updates. Adv. Neural Inf. Process. Syst. 2018, 31, 3459–3468. [Google Scholar]
  27. Fang, W.; Yu, Z.; Jiang, Y.; Shi, Y.; Jones, C.N.; Zhou, Y. Communication-efficient stochastic zeroth-order optimization for federated learning. IEEE Trans. Signal Process. 2022, 70, 5058–5073. [Google Scholar]
  28. Li, Z.; Ying, B.; Liu, Z.; Yang, H. Achieving Dimension-Free Communication in Federated Learning via Zeroth-Order Optimization. arXiv 2024, arXiv:2405.15861. [Google Scholar]
  29. Maritan, A.; Dey, S.; Schenato, L. FedZeN: Quadratic convergence in zeroth-order federated learning via incremental Hessian estimation. In Proceedings of the 2024 European Control Conference, Stockholm, Sweden, 25–28 June 2024; pp. 2320–2327. [Google Scholar]
  30. Mhanna, E.; Assaad, M. Rendering wireless environments useful for gradient estimators: A zero-order stochastic federated learning method. In Proceedings of the 2024 60th Annual Allerton Conference on Communication, Control, and Computing, Urbana, IL, USA, 24–27 September 2024; IEEE: New York, NY, USA, 2024; pp. 1–8. [Google Scholar]
  31. Kingma, D.P.; Ba, J. Adam: A method for stochastic optimization. arXiv 2014, arXiv:1412.6980. [Google Scholar]
  32. Duchi, J.; Hazan, E.; Singer, Y. Adaptive subgradient methods for online learning and stochastic optimization. J. Mach. Learn. Res. 2011, 12, 2121–2159. [Google Scholar]
  33. Ling, X.; Fu, J.; Wang, K.; Liu, H.; Chen, Z. Ali-dpfl: Differentially private federated learning with adaptive local iterations. In Proceedings of the 2024 IEEE 25th International Symposium on a World of Wireless, Mobile and Multimedia Networks, Perth, Australia, 4–7 June 2024; pp. 349–358. [Google Scholar]
  34. Cong, Y.; Qiu, J.; Zhang, K.; Fang, Z.; Gao, C.; Su, S.; Tian, Z. Ada-FFL: Adaptive computing fairness federated learning. CAAI Trans. Intell. Technol. 2024, 9, 573–584. [Google Scholar]
  35. Huang, Y.; Zhu, S.; Chen, W.; Huang, Z. FedAFR: Enhancing Federated Learning with adaptive feature reconstruction. Comput. Commun. 2024, 214, 215–222. [Google Scholar]
  36. Li, Y.; He, Z.; Gu, X.; Xu, H.; Ren, S. AFedAvg: Communication-efficient federated learning aggregation with adaptive communication frequency and gradient sparse. J. Exp. Theor. Artif. Intell. 2024, 36, 47–69. [Google Scholar]
  37. Yi, X.; Zhang, S.; Yang, T.; Johansson, K.H. Zeroth-order algorithms for stochastic distributed nonconvex optimization. Automatica 2022, 142, 110353. [Google Scholar]
  38. Wu, X.; Huang, F.; Hu, Z.; Huang, H. Faster adaptive federated learning. In Proceedings of the AAAI Conference on Artificial Intelligence, Vancouver, BC, Canada, 20–27 February 2023; Volume 37, pp. 10379–10387. [Google Scholar]
  39. Gao, X.; Jiang, B.; Zhang, S. On the information-adaptive variants of the ADMM: An iteration complexity perspective. J. Sci. Comput. 2018, 76, 327–363. [Google Scholar]
Figure 1. FAFedZO framework.
Figure 2. Influence of varying the number of local updates, with (Q, M) = (50, 30). (a) The attack loss on the MNIST dataset. (b) The attack loss on the CIFAR-10 dataset. (c) The attack loss on the Fashion-MNIST dataset. (d) The testing accuracy on the MNIST dataset. (e) The testing accuracy on the CIFAR-10 dataset. (f) The testing accuracy on the Fashion-MNIST dataset.
Figure 3. Influence of the number of participating edge devices, with (Q, E) = (100, 60). (a) The testing accuracy on the MNIST dataset. (b) The testing accuracy on the Fashion-MNIST dataset.
Figure 4. Influence of the number of local updates, with Q = 50 and M = 50. (a) The attack loss on the MNIST dataset. (b) The attack loss on the CIFAR-10 dataset. (c) The attack loss on the Fashion-MNIST dataset. (d) The testing accuracy on the MNIST dataset. (e) The testing accuracy on the CIFAR-10 dataset. (f) The testing accuracy on the Fashion-MNIST dataset.
Figure 5. Influence of the number of participating edge devices, with Q = 50 and E = 1. (a) The attack loss on the MNIST dataset; (b) the attack loss on the Fashion-MNIST dataset; (c) the testing accuracy on the MNIST dataset; (d) the testing accuracy on the Fashion-MNIST dataset.
Figure 6. Influence of the number of directions, with Q = 50 and M = 10. (a) Attack loss; (b) testing accuracy.
Figure 7. The impact of different numbers of local updates, with Q = 50 and M = 30, in a non-IID setting. (a) Attack loss; (b) testing accuracy.
Figure 8. The impact of the number of participating edge devices, with Q = 50 and E = 1, under the non-IID setting. (a) Attack loss; (b) testing accuracy.
Figure 9. The impact of different numbers of local updates, with Q = 50 and M = 50, under the non-IID setting. (a) Attack loss; (b) testing accuracy.