Article

Fast-Converging and Trustworthy Federated Learning Framework for Privacy-Preserving Stock Price Modeling

1 China E-Commerce Association Innovation & Integration Information Technology Research Institute, Beijing 100037, China
2 School of Economics, Beijing Institute of Technology, Beijing 100081, China
3 Asia-Pacific AI Research Center, Asian Business Research Institute, Hong Kong 999077, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(22), 4405; https://doi.org/10.3390/electronics14224405
Submission received: 21 October 2025 / Revised: 8 November 2025 / Accepted: 10 November 2025 / Published: 12 November 2025

Abstract

Stock price modeling under privacy constraints presents a unique challenge at the intersection of computational economics and machine learning. Financial institutions and brokerage firms hold valuable yet sensitive data that cannot be centrally aggregated due to privacy laws and competitive concerns. To address this issue, we propose a novel Fast-Converging Federated Learning (FCFL) framework that enables decentralized and privacy-preserving stock price modeling. FCFL employs a dual-stage adaptive optimization strategy that dynamically tunes local learning rates and aggregation weights based on inter-client gradient divergence, accelerating convergence in heterogeneous financial environments. The framework integrates secure aggregation and differential privacy mechanisms to prevent information leakage during communication while maintaining model fidelity. Experimental results on multi-institutional stock datasets demonstrate that FCFL achieves up to 30% faster convergence and 2.5% lower prediction error compared to conventional federated averaging approaches, while guaranteeing strong ε-differential privacy. Theoretical analysis further proves that the framework attains sublinear convergence in O(log T) communication rounds under non-IID data distributions. This study provides a new direction for collaborative financial modeling, balancing efficiency, accuracy, and privacy in real-world economic systems.

1. Introduction

Financial institutions increasingly rely on machine learning to forecast returns, price risk, and allocate capital [1,2], yet collaboration across organizations remains constrained by privacy, regulation, and competition. Centralizing order flows, client attributes, or proprietary factor signals is often infeasible, motivating federated learning (FL) as a systems and statistical paradigm that trains models without pooling raw data [3]. In cross-silo finance, however, FL must satisfy three stringent requirements simultaneously: protect institution-level and client-level information, converge quickly despite pronounced statistical heterogeneity, and deliver models whose economic utility survives regime shifts. Meeting these requirements is challenging. Naïve aggregation exacerbates client drift when data are non-IID across institutions [4,5], while adding differential privacy (DP) noise to defend against inference attacks increases variance and slows optimization [6]. Even with secure aggregation, model updates may leak information through gradient inversion or membership inference if left unprotected [7,8,9,10]; rigorous, round-by-round privacy accounting is therefore essential [11,12].
We introduce FCFL, a fast-converging federated learning framework for privacy-preserving stock price modeling that couples drift-aware optimization with principled privacy accounting. FCFL is built around three design elements that are each privacy-compatible and communication-efficient. First, the drift control variates (DCV) method tracks and cancels cross-client gradient bias using only securely aggregated moment estimates, eliminating the dominant heterogeneity term from the update variance without exposing client statistics. Second, an accelerated server update combines Nesterov lookahead, momentum, and diagonal preconditioning to reduce the effective condition number and stabilize noisy coordinates, providing larger, safer progress per round. Third, fast-convergence policies adapt local step sizes, proximal strength, client participation, and per-round DP noise according to a secure, divergence-aware signal; a Rényi DP accountant composes DP-SGD and secure-moment noise to meet a fixed ( ε , δ ) budget [6,11]. These components integrate cleanly with the standard FL pipeline and secure aggregation [3,13], require only sums of client messages, and avoid any disclosure of per-client gradients or variances.
Our theoretical analysis separates optimization error from stochastic and DP noise. Under general smooth, nonconvex objectives, FCFL achieves the standard O(1/K) stationarity rate up to a noise floor that excludes the heterogeneity penalty, owing to DCV's variance reduction. Under the Polyak–Łojasiewicz condition, FCFL enjoys an accelerated linear rate with communication complexity Õ(√κ · log(1/ϵ)), improving the condition-number dependence relative to DP-enabled FedAvg/FedProx and aligning with the curvature-aware acceleration of adaptive methods [4,5,14]. The diagonal preconditioner and lookahead increase the contraction factor, while the divergence-aware privacy schedule shifts noise toward early, higher-variance rounds, preserving the target privacy budget and enabling sharper late-stage optimization.
Empirically, on non-IID, cross-silo equities universes spanning 2015–2024 with N = 20 clients, FCFL reduces rounds-to-target by 40–60% versus strong DP baselines and improves predictive and economic metrics at a fixed (ε, δ) = (1.0, 10^{−5}). Representative results include lower MSE, higher directional accuracy, improved Sharpe ratios, and reduced maximum drawdown relative to DP-enabled FedAvg/FedProx/SCAFFOLD, together with superior recovery during volatility shocks, where the variance-reduction and preconditioning mechanisms yield smoother loss trajectories. These gains persist across Dirichlet-controlled heterogeneity strengths and are robust to moderate changes in momentum and lookahead. While centralized training with pooled data can offer an upper bound in accuracy, FCFL closes most of this gap without violating confidentiality. We make the following contributions.
(i)
Secure Moment Sharing (SMS). We formalize an aggregation-compatible primitive that exposes only securely aggregated moments (e.g., ∑_i ω_i Δ̃_i and ∑_i ω_i, optionally noised), never per-client statistics, enabling privacy-preserving estimation of cross-client drift and variance; unlike FedNova-style normalizations, SMS is natively compatible with secure aggregation and DP.
(ii)
Drift Control Variates (DCV). Built on SMS, DCV tracks and cancels cross-client bias, reducing the heterogeneity penalty in update variance while preserving privacy; relative to SCAFFOLD/MIME, controls are derived from securely aggregated moments rather than per-client exchanges. We unify notation for divergence D ( k ) and tracking error e i ( k ) and state assumptions/constants once for clarity.
(iii)
Accelerated server update with divergence-aware DP schedule. We combine Nesterov lookahead, momentum, and diagonal preconditioning with divergence-aware policies (adaptive steps, proximal strength, importance-sampled participation) and an RDP-composed DP schedule. Compared with FedAdam/Yogi's adaptive servers, our per-coordinate preconditioning and momentum are integrated with SMS/DCV and DP, yielding O(1/K) nonconvex stationarity and a PL linear rate with communication complexity Õ(√κ · log(1/ϵ)); on 20-client non-IID equities (2015–2024), FCFL reduces rounds-to-target by 40–60% and improves MSE, directional accuracy, Sharpe, and drawdowns.
This paper connects privacy-preserving systems design and accelerated optimization for financial forecasting, complementing advances in deep forecasting architectures under centralized settings [15,16,17,18,19]. Section 3 formalizes the task, notation, privacy mechanisms, and assumptions. Section 4 details the algorithmic components—secure moment sharing, DCV, accelerated server updates, and divergence-aware DP—and Section 5 establishes the corresponding convergence guarantees. Section 6 reports experiments, ablations, and privacy–utility trade-offs. Section 2 situates our contributions within the literature on FL, DP, robustness, and financial ML, and Section 7 concludes with implications for production deployment.

2. Related Work

Federated learning (FL) enables collaborative model training without centralizing raw data. Early work formalized the objective and proposed communication-efficient protocols that aggregate client-side stochastic updates instead of gradients, most notably FedAvg [3] and its precursors emphasizing uplink/downlink compression and local computation [20,21,22]. Production-ready secure aggregation protocols ensure the server only learns the sum of client updates [13]. In parallel, differential privacy (DP) provides statistical guarantees that the learned model reveals little about any individual example [12]. Its deep-learning instantiation (DP-SGD) relies on per-example gradient clipping and Gaussian noise [6], while Rényi DP (RDP) yields tight composition over many rounds [11] with analytical accountants for subsampling [23,24]. Surveys synthesize FL's statistical and systems advances, open problems, and deployment lessons [25,26,27,28,29].

2.1. Optimization Under Heterogeneity and Acceleration

Non-IID client data induces client drift, degrading naïve averaging. Stabilization via proximal regularization (FedProx) constrains local updates [4]; control-variate corrections (SCAFFOLD) track and reduce client-specific bias [5]; normalized updates (FedNova) resolve objective inconsistency from variable local steps [30]. Analyses of local-SGD establish rates with limited communication under smoothness [31,32,33], and drift-aware variance reduction further tightens bounds (e.g., MIME) [34]. On the server, adaptive preconditioning (FedAdam/FedYogi/FedAdagrad) mitigates ill-conditioning and accelerates convergence [14], complementing classical momentum and Nesterov acceleration [35,36]. Our framework (FCFL) integrates drift control variates with diagonal preconditioning and Nesterov-style lookahead, producing faster empirical convergence while retaining the secure-aggregation interface.

2.2. Privacy Accounting, Attacks, and Robustness

DP in FL can be enforced at the example or client level [37], with RDP-based accountants tracking the cumulative privacy loss over many rounds [6,11,23]. Despite DP and encryption, attacks highlight leakage risks from gradients or updates: gradient inversion [7,38], membership inference [8,39], and poisoning/backdoors [40]. Robust aggregation reduces the influence of malicious or outlier clients via Krum [41], coordinate-wise robust statistics [42], or geometric-median-style rules [43]. FCFL focuses on honest-but-curious servers protected by secure aggregation [13] and combines DP-SGD with aggregated moment sharing to estimate heterogeneity without revealing individual statistics, thereby preserving both formal privacy and practical utility.

2.3. Communication, Sampling, and Systems Considerations

Communication is often the bottleneck in cross-silo FL. Strategies include local computation with fewer rounds [31], update/gradient quantization [44], and sparsification with error feedback [45]. Client selection policies accelerate progress by prioritizing informative, available, and stable participants; system-aware schedulers such as Oort empirically improve time-to-accuracy [46], while theoretical work studies biased/importance sampling and its convergence implications [47,48]. Our divergence-aware sampling (importance weights proportional to data size and estimated stability) aligns with these directions but remains privacy-compatible because only aggregates are revealed.

2.4. Financial Time-Series Modeling and Privacy in Economics

Deep sequence forecasters already surpass linear and factor models for returns and order-book dynamics [15,16,17,18,19], and machine learning uncovers nonlinear structures in asset pricing [49]. In finance, however, the differentiator for deployable systems is the privacy layer rather than the aggregator itself: institutions increasingly pair cross-silo training with explicit privacy budgets, gradient clipping, calibrated noise, and secure aggregation so that raw time-series never leave local silos. This practice echoes the formal use of differential privacy in official statistics [50], and in the federated setting modern optimizers such as FedAvg, FedProx, SCAFFOLD, and FedNova provide the orchestration required to sustain utility under heterogeneity while respecting privacy constraints [3,4,5,30]. Adaptive server methods further stabilize learning under noisy (privacy-enforced) updates [14]. Framed this way, state-of-the-art pipelines in financial time-series modeling are best understood as privacy-enhancing stacks wrapped around strong forecasters (e.g., DeepAR and temporal fusion) [17,18], with the optimization layer serving to preserve accuracy and calibration in the presence of privacy noise.
Within this privacy-first view, FCFL emphasizes mechanisms that directly protect financial sequences: a divergence-aware privacy schedule that ties clipping and noise to cross-silo heterogeneity, dual-stage acceleration (client control variates plus server momentum/diagonal preconditioning and lookahead) to reduce the number of communication rounds under a fixed budget, and default secure aggregation so that only masked updates are visible to the coordinator. Relative to FedAvg/FedProx/SCAFFOLD/FedNova and adaptive server optimizers [3,4,5,14,30], the contribution is to make the privacy controls the primary design axis while retaining the modeling strength of sequence architectures [15,16,17,18,19,49]. Under standard smoothness and Polyak–Łojasiewicz conditions [51], our analysis yields linear convergence to a noise-dominated neighborhood whose radius reflects the privacy parameters, aligning theoretical guarantees with the operational constraints already recognized in economic data releases [50].

3. Preliminaries

This section formalizes the federated stock price modeling problem, introduces notation and privacy mechanisms, and states the optimization and data assumptions used throughout the paper.

3.1. Problem Setup

Consider N financial institutions (clients) indexed by C = {1, …, N}. Client i holds a private time series {P_{i,t}}_{t=1}^{T_i} of closing prices for a set of tickers and an aligned feature matrix X_i ∈ R^{n_i × d} with n_i supervised samples and d features constructed from domain signals (e.g., OHLCV, technical indicators, macro factors). Let
r_{i,t} ≜ log( P_{i,t+1} / P_{i,t} )
denote the one-step-ahead log-return, and let y_i ∈ R^{n_i} denote the corresponding target vector aggregated across all tickers and time points in client i after windowing. We model the conditional expectation of r_{i,t} via a parametric predictor f_w : R^d → R with parameters w ∈ R^p. The per-example loss is ℓ(w; x, y), and the local empirical objective at client i is
F_i(w) ≜ (1/n_i) ∑_{j=1}^{n_i} ℓ( w; x_{i,j}, y_{i,j} ).
The global objective is the standard data-weighted average
F(w) ≜ ∑_{i=1}^{N} p_i F_i(w),   p_i ≜ n_i / ∑_{k=1}^{N} n_k.
Our goal is to minimize F ( w ) under communication and privacy constraints, without centralizing raw data { X i , y i } .
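As a concrete illustration of this setup, the following Python sketch builds one client's supervised pairs (X_i, y_i) from daily prices and aligned features; the window length, column layout, and helper name are illustrative assumptions rather than the exact pipeline used in the experiments.

import numpy as np

def build_client_dataset(prices, features, window=20):
    """Construct (X_i, y_i) for one client: windowed features -> next-step log-return.

    prices:   (T,) array of daily closing prices for one ticker
    features: (T, d_raw) array of aligned per-day features (e.g., OHLCV-derived signals)
    window:   number of past days stacked into each supervised example (assumption)
    """
    # One-step-ahead log-returns r_t = log(P_{t+1} / P_t)
    log_ret = np.log(prices[1:] / prices[:-1])           # length T - 1

    X, y = [], []
    for t in range(window, len(log_ret)):
        # Flatten the most recent `window` days of features into one example
        X.append(features[t - window:t].ravel())
        y.append(log_ret[t])                              # predict the next-step return
    return np.asarray(X), np.asarray(y)

# Toy usage with synthetic data
rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(0.001 * rng.standard_normal(300)))
features = rng.standard_normal((300, 5))
X_i, y_i = build_client_dataset(prices, features)
print(X_i.shape, y_i.shape)    # (n_i, window * d_raw), (n_i,)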

3.2. Federated Optimization

Training proceeds in communication rounds k = 0, 1, …, K−1. At round k, a server broadcasts the current model w^{(k)} to a subset S_k ⊆ C of participating clients. Each client i ∈ S_k performs τ local stochastic steps producing an update Δ_i^{(k)} (or, equivalently, a local model w_i^{(k,τ)}). The server aggregates the (privatized and securely summed) updates into
w^{(k+1)} = w^{(k)} + ∑_{i∈S_k} α_i^{(k)} Δ̃_i^{(k)},
where α_i^{(k)} ≥ 0 are aggregation weights that sum to at most 1 over S_k, and Δ̃_i^{(k)} denotes a clipped-and-noised version of Δ_i^{(k)} (defined below). The FCFL framework introduced later will adapt both the local step sizes and the aggregation weights {α_i^{(k)}} based on observed gradient divergence. The meanings of the symbols used throughout the paper are summarized in Table 1.
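The aggregation rule is straightforward to apply once the privatized updates are available. The following minimal Python sketch performs one round of the update above; the helper name and toy data are illustrative assumptions, and in the real protocol the weighted sum would arrive through secure aggregation rather than as individual client updates.

import numpy as np

def server_round(w, client_updates, alphas):
    """One aggregation step: w^{(k+1)} = w^{(k)} + sum_i alpha_i * Delta_tilde_i.

    client_updates: list of privatized updates (arrays with the same shape as w)
    alphas:         aggregation weights, assumed non-negative and summing to <= 1
    """
    agg = np.zeros_like(w)
    for a, delta in zip(alphas, client_updates):
        agg += a * delta
    return w + agg

# Toy round with 3 participating clients
rng = np.random.default_rng(1)
w = np.zeros(4)
updates = [rng.normal(scale=0.1, size=4) for _ in range(3)]
alphas = [0.5, 0.3, 0.2]
print(server_round(w, updates, alphas))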

3.3. Gradient Divergence and Client Drift

Let g_i^{(k)} ≜ ∇F_i(w^{(k)}) and ḡ^{(k)} ≜ ∑_{i=1}^{N} p_i g_i^{(k)}. We quantify cross-client heterogeneity in round k by the divergence
D^{(k)} ≜ ∑_{i=1}^{N} p_i ‖ g_i^{(k)} − ḡ^{(k)} ‖_2^2.
Empirically, D ( k ) can be estimated with clipped, privatized stochastic gradients returned by clients. FCFL will use such an estimate to drive dual-stage adaptation.
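For intuition, the following Python sketch evaluates this divergence definition on plaintext per-client gradients; in FCFL itself the server never sees these vectors and would instead work from securely aggregated, privatized moments.

import numpy as np

def gradient_divergence(grads, weights):
    """D^{(k)} = sum_i p_i * || g_i - g_bar ||_2^2  (definition above).

    grads:   (N, p) stacked per-client gradients g_i^{(k)}
    weights: (N,) data weights p_i summing to 1
    """
    g_bar = weights @ grads                           # weighted mean gradient
    diffs = grads - g_bar                             # g_i - g_bar for every client
    return float(weights @ np.sum(diffs**2, axis=1))

rng = np.random.default_rng(2)
grads = rng.standard_normal((5, 10))                  # 5 clients, 10 parameters
weights = np.full(5, 0.2)
print(gradient_divergence(grads, weights))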
The subsequent sections will instantiate these components within the proposed FCFL algorithm, specifying the dual-stage adaptation of local step sizes and aggregation weights and analyzing convergence under A1–A4 with privacy and communication constraints.

3.4. Threat Model

Setting. Cross-silo FL with authenticated, encrypted channels; raw data remain on clients. The server coordinates training but is not trusted with plaintext updates.
Adversary. We assume an honest-but-curious server and passive network observers; clients are honest (no Byzantine behavior). Limited collusion below the secure-aggregation threshold is allowed; if fewer than t parties collude, per-client updates remain hidden.
Exposure. The server observes only securely aggregated, clipped, noised sums (e.g., ∑_i ω_i Δ̃_i) and public metadata (round index, sampling rate p_k); no raw examples, per-client gradients, or per-client deltas are revealed.
Protection. Each client applies clipping at radius C and adds Gaussian noise N ( 0 , σ k 2 C 2 I ) ; a Rényi-DP accountant composes rounds to ( ε , δ ) -DP. The adaptive schedule σ k depends only on DP-protected or securely aggregated quantities (post-processing), preserving privacy guarantees.
Guarantees. Record-level privacy across all rounds; indistinguishability of any single record is bounded by ( ε , δ ) . Secure aggregation prevents recovery of individual updates; membership-inference advantage is controlled by ε .
Out of scope. Byzantine/poisoning/backdoor attacks, compromised client devices, and side-channel or timing leakage. Robust aggregation and adversarial defenses are orthogonal and deferred to future work.

4. Methodology

We revise FCFL to emphasize fast convergence under privacy and heterogeneity. Building on the notation of Section 3, we introduce three complementary accelerators: (i) drift control variates that perform gradient tracking and variance reduction, (ii) server-side momentum with diagonal preconditioning to mitigate ill-conditioning, and (iii) participation and step-size policies that cap client drift and prioritize informative updates.

4.1. Accelerated Server Update

The server update should (i) counteract coordinate-wise ill-conditioning, (ii) exploit temporal correlation of aggregated client updates to accelerate descent, and (iii) remain compatible with secure aggregation so that the server only needs aggregated quantities. FCFL achieves this with diagonal preconditioning and momentum on the privacy-preserving averaged update
u^{(k)} ≜ ( ∑_{i∈S_k} ω_i^{(k)} Δ̃_i^{(k)} ) / ( ∑_{i∈S_k} ω_i^{(k)} ),
where ω_i^{(k)} are divergence-aware weights. Note that secure aggregation reveals only ∑_{i∈S_k} ω_i^{(k)} Δ̃_i^{(k)} and ∑_{i∈S_k} ω_i^{(k)}, which suffice to compute u^{(k)}. We maintain first and second moments of u^{(k)}:
m^{(k)} = β_1 m^{(k−1)} + (1 − β_1) u^{(k)},   v^{(k)} = β_2 v^{(k−1)} + (1 − β_2) u^{(k)} ⊙ u^{(k)},
with β_1, β_2 ∈ [0, 1) and elementwise square ⊙. To correct the warm-start bias during early rounds, we optionally use bias-corrected moments
m̂^{(k)} = m^{(k)} / (1 − β_1^{t_k}),   v̂^{(k)} = v^{(k)} / (1 − β_2^{t_k}),
where t_k counts the number of rounds with at least one participating client (so t_k increases even with partial participation). In practice, β_1 = 0.9 and β_2 = 0.999 work well, and bias correction primarily helps in the first 10–20 rounds. The diagonal preconditioner rescales coordinates by an RMS estimate. Let RMS^{(k)} ≜ √(v̂^{(k)}) + ϵ_p with a small ϵ_p > 0. The preconditioned, momentum-driven step is
w^{(k+1)} = y^{(k)} + η_s^{(k)} · m̂^{(k)} / RMS^{(k)},
where division is elementwise. This realizes an adaptive trust region because
‖ η_s^{(k)} m̂^{(k)} / RMS^{(k)} ‖_∞ ≤ η_s^{(k)} · max_j | m̂_j^{(k)} | / ϵ_p,
which prevents overshooting along poorly scaled directions. Algorithm 1 presents FCFL with Secure Moment Sharing (SMS), DCV, and divergence-aware DP; Algorithm 2 presents Client_Update (local DP-SGD with DCV and preconditioning). The details are provided below.
Algorithm 1 FCFL with Secure Moment Sharing (SMS), DCV, and Divergence-Aware DP
1: Server init: model w^{(0)}; moments m^{(0)} = 0, v^{(0)} = 0; server control c^{(0)} = 0; clip C; lookahead ρ; β_1, β_2, ϵ_p; DP budget B_DP; schedule state.
2: for k = 0, 1, …, K−1 do
3:   Sample client set S_k with rate p_k; announce public weights ω_i^{(k)} (e.g., HT/size-based), clip C, noise σ_k, and preconditioner seed.
4:   Broadcast (w^{(k)}, c^{(k)}, C, σ_k, τ).
5:   Each client i ∈ S_k runs Client_Update(i, w^{(k)}, c^{(k)}, C, σ_k, τ) and participates in secure aggregation.
6:   Secure moment sharing (SMS): obtain only aggregated sums (no per-client values): S_Δ = ∑_{i∈S_k} ω_i^{(k)} Δ̃_i^{(k)}, S_1 = ∑_{i∈S_k} ω_i^{(k)}, S_2 = ∑_{i∈S_k} ω_i^{(k)} ( Δ̃_i^{(k)} ⊙ Δ̃_i^{(k)} ) (optional).
7:   Aggregate update: u^{(k)} = S_Δ / S_1.
8:   Server moments: m^{(k+1)} = β_1 m^{(k)} + (1 − β_1) u^{(k)}, v^{(k+1)} = β_2 v^{(k)} + (1 − β_2)(S_2 / S_1) (or u^{(k)} ⊙ u^{(k)} if S_2 is not shared).
9:   Bias corrections: m̂^{(k+1)} = m^{(k+1)} / (1 − β_1^{k+1}), v̂^{(k+1)} = v^{(k+1)} / (1 − β_2^{k+1}).
10:  Preconditioner: P^{(k)} = diag( ( √(v̂^{(k+1)}) + ϵ_p )^{−1} ).
11:  Model step + lookahead: w^{(k+1/2)} = w^{(k)} − η_s^{(k)} P^{(k)} m̂^{(k+1)},   w^{(k+1)} = (1 − ρ) w^{(k)} + ρ w^{(k+1/2)}.
12:  Divergence estimate (from SMS): compute D^{(k)} using (S_Δ, S_1, S_2); update server control c^{(k+1)} ← UpdateDCV(c^{(k)}, u^{(k)}, D^{(k)}).
13:  DP accountant & schedule: compose RDP with (q = p_k, σ_k, τ) to obtain ε_k; update the remaining budget B_DP ← B_DP − ε_k; set the next σ_{k+1} ← Schedule(D^{(k)}, B_DP) (post-processing of DP-protected SMS).
14: end for
Algorithm 2 Client_Update (Local DP-SGD with DCV and Preconditioning)
Require: i, w^{(k)}, c^{(k)}, C, σ_k, τ.   Local state: c_i^{(k)} (client control), optimizer buffers.
1: Set w_0 ← w^{(k)}, Δ_i ← 0.
2: for t = 0 to τ − 1 do
3:   Sample mini-batch B_t; compute g_t = ∇ℓ_i(w_t; B_t).
4:   DCV adjustment: g_t ← g_t − c_i^{(k)} + c^{(k)}.
5:   (The server broadcasts P^{(k)} implicitly via its moments.) Precondition h_t = P^{(k)} g_t.
6:   Clip h_t ← h_t · min{ 1, C / ‖h_t‖_2 }.
7:   Update w_{t+1} = w_t − η_c^{(k)} h_t; accumulate Δ_i ← Δ_i + ( w_{t+1} − w_t ).
8: end for
9: Clip total Δ_i ← Δ_i · min{ 1, C / ‖Δ_i‖_2 }.
10: Add DP noise Δ̃_i = Δ_i + ξ_i, ξ_i ∼ N(0, σ_k^2 C^2 I).
11: Update local control (no upload of per-client stats): c_i^{(k+1)} ← (1 − α_dcv) c_i^{(k)} + α_dcv ( (1/τ) ∑_t g_t − c^{(k)} ).
12: Send only masked shares for secure aggregation of ω_i^{(k)} Δ̃_i and ω_i^{(k)}; no per-client values are revealed to the server.
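The following Python sketch mirrors the client-side logic of Algorithm 2 under simplifying assumptions: an identity preconditioner, NumPy arrays in place of model parameters, and a generic grad_fn supplied by the caller. It is a toy illustration of the clip–noise–control pattern, not the reference implementation.

import numpy as np

def client_update(w, c_server, c_local, grad_fn, batches,
                  eta_c=0.05, clip_C=1.0, sigma_k=1.0, alpha_dcv=0.1, rng=None):
    """Sketch of Algorithm 2 with an identity preconditioner (assumption).

    grad_fn(w, batch) -> stochastic gradient g_t; batches is a sequence of mini-batches.
    Returns the noised total update Delta_tilde and the refreshed local control.
    """
    rng = rng or np.random.default_rng()
    batches = list(batches)
    w_t, delta, g_sum = w.copy(), np.zeros_like(w), np.zeros_like(w)
    for batch in batches:
        g = grad_fn(w_t, batch)
        g = g - c_local + c_server                     # DCV correction
        g_sum += g
        h = g * min(1.0, clip_C / (np.linalg.norm(g) + 1e-12))   # per-step clip
        w_t = w_t - eta_c * h
        delta = w_t - w                                # accumulated local displacement
    delta = delta * min(1.0, clip_C / (np.linalg.norm(delta) + 1e-12))       # total clip
    delta_tilde = delta + rng.normal(scale=sigma_k * clip_C, size=delta.shape)  # DP noise
    c_local_new = (1 - alpha_dcv) * c_local + alpha_dcv * (g_sum / max(len(batches), 1) - c_server)
    return delta_tilde, c_local_new

# Toy usage: per-batch quadratic loss 0.5 * ||w - b||^2 with gradient w - b
rng = np.random.default_rng(3)
grad_fn = lambda w, b: w - b
batches = [rng.standard_normal(4) for _ in range(5)]
d_tilde, c_new = client_update(np.zeros(4), np.zeros(4), np.zeros(4), grad_fn, batches, rng=rng)
print(d_tilde)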

Nesterov-Style Lookahead Coupling

We set y^{(k)} = w^{(k)} + η_m^{(k)} ( w^{(k)} − w^{(k−1)} ) with η_m^{(k)} ∈ [0, 1). Intuitively, u^{(k)} approximates the gradient at y^{(k)} due to small inter-round drift; thus the step in (9) benefits from lookahead curvature. Empirically, η_m^{(k)} = 0.8 yields the best rounds-to-target while remaining stable under DP noise. Under L-smoothness, a sufficient condition for descent at y^{(k)} is
η_s^{(k)} ≤ ( (1 − β_1) / L ) · min_j ( √(v̂_j^{(k)}) + ϵ_p ).
In practice we choose a fixed η_s^{(k)} ≡ η_s ∈ [0.5, 1.0] and set ϵ_p = 10^{−8}, which empirically stabilizes training under DP noise. A lightweight server trust region further caps the step:
Δ_max^{(k)} ≜ η · ‖m̂^{(k)}‖_1 / ( d ϵ_p ),   w^{(k+1)} ← y^{(k)} + clip( η_s m̂^{(k)} / RMS^{(k)}, Δ_max^{(k)} ),
where d is the parameter dimension and η ∈ (0, 1]; clipping is rarely activated but guards against rare outlier rounds.
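To make the server-side recursion concrete, the sketch below combines the moment updates, RMS preconditioning, lookahead coupling, and trust-region cap described in this subsection in plain Python/NumPy. It follows the sign convention of Equation (9), in which the aggregated update u already points in the descent direction; the class and variable names, and the default constants, are illustrative assumptions.

import numpy as np

class FCFLServer:
    """Minimal sketch of the accelerated server update of Section 4.1 (assumptions noted above)."""

    def __init__(self, dim, beta1=0.9, beta2=0.999, eta_s=1.0, eta_m=0.8,
                 eps_p=1e-8, eta_cap=1.0):
        self.m = np.zeros(dim); self.v = np.zeros(dim)
        self.w = np.zeros(dim); self.w_prev = np.zeros(dim)
        self.beta1, self.beta2 = beta1, beta2
        self.eta_s, self.eta_m, self.eps_p, self.eta_cap = eta_s, eta_m, eps_p, eta_cap
        self.t = 0

    def step(self, u):
        """u: divergence-aware weighted average of privatized client updates."""
        self.t += 1
        y = self.w + self.eta_m * (self.w - self.w_prev)          # Nesterov lookahead point
        self.m = self.beta1 * self.m + (1 - self.beta1) * u       # first moment
        self.v = self.beta2 * self.v + (1 - self.beta2) * u * u   # second moment
        m_hat = self.m / (1 - self.beta1 ** self.t)               # bias corrections
        v_hat = self.v / (1 - self.beta2 ** self.t)
        rms = np.sqrt(v_hat) + self.eps_p                         # diagonal preconditioner
        step = self.eta_s * m_hat / rms
        cap = self.eta_cap * np.sum(np.abs(m_hat)) / (len(m_hat) * self.eps_p)  # trust region
        step = np.clip(step, -cap, cap)
        self.w_prev, self.w = self.w, y + step
        return self.w

srv = FCFLServer(dim=4)
print(srv.step(np.array([0.1, -0.2, 0.05, 0.0])))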

4.2. Drift Control Variates (DCV)

DCV mitigates cross-client bias by tracking the gap Δ_i(w) = ∇F_i(w) − ∇F(w) with client-side controls c_i and a server-side control c. Correcting each client's stochastic direction by −c_i + c aligns local steps with the global gradient while remaining compatible with DP and secure aggregation.

4.2.1. Mechanics: Corrected Local Steps and Secure Updates

At round k, client i receives ( y ( k ) , c ) and performs τ i ( k ) DP-SGD steps on the proximalized objective using
w_i^{(k, s+1)} = w_i^{(k, s)} − η_i^{(k)} ( g̃_i^{(k, s)} − c_i + c ) − η_i^{(k)} μ^{(k)} ( w_i^{(k, s)} − y^{(k)} ),
where g ˜ i ( k , s ) is the clipped-and-noised minibatch gradient. Telescoping yields the anchored direction
δ_i^{(k)} ≜ ( y^{(k)} − w_i^{(k, τ_i^{(k)})} ) / ( τ_i^{(k)} η_i^{(k)} ) = (1/τ_i^{(k)}) ∑_s ( g̃_i^{(k, s)} − c_i + c ) + μ^{(k)} (1/τ_i^{(k)}) ∑_s ( w_i^{(k, s)} − y^{(k)} ).
Clients update their local control with a small smoothing ξ ∈ (0, 1],
c_i ← (1 − ξ) c_i + ξ δ_i^{(k)},
and, via secure aggregation, the server receives only the mean δ̄^{(k)} = (1/|S_k|) ∑_{i∈S_k} δ_i^{(k)} to update the global control
c ← (1 − ξ) c + ξ δ̄^{(k)}.
This design needs only sums of client messages and thus integrates seamlessly with secure aggregation; it adds no communication of per-client statistics or raw gradients.
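A compact Python sketch of this control-variate bookkeeping follows; it applies the two smoothing recursions above to plaintext δ_i vectors purely to illustrate the updates, whereas in the actual protocol only the secure sum (equivalently the mean δ̄) ever reaches the server.

import numpy as np

def update_controls(c_locals, c_global, deltas, xi=0.5):
    """DCV smoothing: c_i <- (1 - xi) c_i + xi delta_i ;  c <- (1 - xi) c + xi mean(delta_i).

    c_locals: list of per-client controls c_i (updated locally in practice)
    deltas:   anchored directions delta_i^{(k)} from the sampled clients
    """
    new_locals = [(1 - xi) * c + xi * d for c, d in zip(c_locals, deltas)]
    delta_bar = np.mean(deltas, axis=0)                # only this mean reaches the server
    new_global = (1 - xi) * c_global + xi * delta_bar
    return new_locals, new_global

rng = np.random.default_rng(4)
c_locals = [np.zeros(3) for _ in range(4)]
deltas = [rng.standard_normal(3) for _ in range(4)]
c_locals, c_global = update_controls(c_locals, np.zeros(3), deltas)
print(c_global)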

4.2.2. Variance Reduction and Control of Heterogeneity

Write g̃_i^{(k,s)} = ∇F_i(y^{(k)}) + ε_i^{(k,s)} + b_i^{clip}, where ε_i^{(k,s)} combines sampling and DP noise with covariance at most σ_g^2 I + C^2 (σ^{(k)})^2 I, and b_i^{clip} is the clipping bias. If e_i^{(k)} ≜ c_i − ( ∇F_i(y^{(k)}) − ∇F(y^{(k)}) ) denotes the tracking error, then the DCV-corrected direction used above satisfies
E[ g̃_i^{(k,s)} − c_i + c ] = ∇F(y^{(k)}) − e_i^{(k)} + b_i^{clip},   Var[ g̃_i^{(k,s)} − c_i + c ] ⪯ σ_g^2 I + C^2 (σ^{(k)})^2 I.
Hence the heterogeneity penalty term ∑_i p_i ‖ ∇F_i − ∇F ‖_2^2 does not appear in the variance: DCV removes the dominant dispersion component, leaving only tracking and clipping bias in the mean. Under A1–A4 and the drift budget, there exists ρ ∈ (0, 1) such that the tracking error contracts in expectation,
E‖ e_i^{(k+1)} ‖_2^2 ≤ ρ E‖ e_i^{(k)} ‖_2^2 + O( σ_g^2 + C^2 (σ^{(k)})^2 + ‖ b_i^{clip} ‖_2^2 ),
so DCV rapidly reaches a noise-dominated floor. Aggregating across sampled clients then yields the bias–variance bounds used in the global convergence analysis.

4.3. Fast-Convergence Policies

The policies in FCFL control how aggressively clients move locally and how the federation allocates participation under heterogeneity, all while meeting a fixed privacy budget. They are designed to (i) cap local parameter drift to keep proximal descent well conditioned, (ii) prioritize informative and stable clients to accelerate early progress, and (iii) modulate DP noise based on measured divergence so that noise shrinks as optimization stabilizes.

4.3.1. Adaptive Local Step-Size and Drift Budget

We refine the client step-size rule by detailing how v ^ i , loc ( k ) and d i ( k ) are computed and how they enforce a uniform drift bound. Let g ( w ; x , y ) denote the per-example gradient (before clipping). Client i estimates its local gradient variance using clipped per-example norms on the current round:
v̂_{i,loc}^{(k)} = (1/S) ∑_{s=1}^{S} (1/|B_i^{(k,s)}|) ∑_{(x,y)∈B_i^{(k,s)}} ‖ ĝ(w^{(k)}; x, y) − ḡ_i^{(k,s)} ‖_2^2,   ḡ_i^{(k,s)} = (1/|B_i^{(k,s)}|) ∑_{(x,y)∈B_i^{(k,s)}} ĝ(w^{(k)}; x, y),
where g ^ uses the same clipping threshold C as DP-SGD. The surrogate drift d i ( k ) is measured after the local epoch using the prox-anchored displacement:
d_i^{(k)} = min{ ‖ w_i^{(k, τ_i^{(k)})} − y^{(k)} ‖_2^2 , D_max }.
Substituting Equation (20), a standard smoothness argument yields the round-wise drift inequality (conditioning on the local data and randomness):
E‖ w_i^{(k, τ_i^{(k)})} − y^{(k)} ‖_2^2 ≤ τ_i^{(k)} (η_i^{(k)})^2 (1 + μ^{(k)} η_i^{(k)})^2 [ σ_g^2 + C^2 (σ^{(k)})^2 + L^2 ‖ ∇F_i(y^{(k)}) ‖_2^2 ],
so choosing η_i^{(k)} and τ_i^{(k)} with μ^{(k)} = λ_0 D̂^{(k)} makes the left-hand side uniformly bounded by d_max in expectation. This enforces a global contraction precondition for the accelerated server step because the dispersion of local models around y^{(k)} is controlled. In particular, clients perform w_i^{(t, τ+1)} = w_i^{(t, τ)} − η_i^{(t)} P_t ( ∇f_i(w_i^{(t, τ)}) − c_i^{(t)} ) with diagonal preconditioner P_t and drift-tracking control c_i^{(t)}; the server aggregates Δ^{(t)} = ∑_i a_i^{(t)} ( w_i^{(t, τ_i)} − w^{(t)} ), where a_i^{(t)} = ψ(D_i^{(t)}) / ∑_j ψ(D_j^{(t)}) and η_i^{(t)} = ϕ(D_i^{(t)}) are monotone in the divergence proxy D_i^{(t)}, yielding E⟨ −Δ^{(t)}, g^{(t)} ⟩ ≥ ∑_i a_i^{(t)} η_i^{(t)} ( ‖ g^{(t)} ‖^2 − δ_i ‖ g^{(t)} ‖ ), so that reducing high-divergence influence improves alignment with g^{(t)}.
Backtracking for η_0. When L is unknown, a client-side backtracking rule ensures η_i^{(k)} ≤ 1/(4L) without communicating L. Let η_test be the candidate value computed. The client tests a single batch B and accepts η_test if
F_i^{(B)}( y^{(k)} − η_test ḡ_i^{(B)} ) ≤ F_i^{(B)}( y^{(k)} ) − ( η_test / 4 ) ‖ ḡ_i^{(B)} ‖_2^2,
otherwise halves η test and retries up to a small fixed number of attempts. Because this rule uses only local batches and is a post-processing of DP gradients, it does not alter the privacy accountant.
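The backtracking rule is easy to state in code. The sketch below implements the accept/halve loop on a single batch with generic loss_fn and grad_fn stand-ins (illustrative assumptions); it operates on already-computed gradients, matching the post-processing argument above.

import numpy as np

def backtrack_step_size(y, loss_fn, grad_fn, eta_test=0.5, max_tries=5):
    """Halve eta until F(y - eta*g) <= F(y) - (eta/4)*||g||^2 holds on one batch."""
    g = grad_fn(y)
    f0 = loss_fn(y)
    for _ in range(max_tries):
        if loss_fn(y - eta_test * g) <= f0 - 0.25 * eta_test * np.dot(g, g):
            return eta_test
        eta_test *= 0.5
    return eta_test

# Toy quadratic: F(w) = 0.5 * ||w||^2, so any eta <= 1.5 passes the test
loss_fn = lambda w: 0.5 * float(np.dot(w, w))
grad_fn = lambda w: w
print(backtrack_step_size(np.array([1.0, -2.0]), loss_fn, grad_fn))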

4.3.2. Importance Sampling for Participation

Client selection follows the probability law,
π_i^{(k)} ∝ ( n_i / ( 1 + β d_i^{(k)} ) ) · ( 1 / ( 1 + v̂_{i,loc}^{(k)} ) ),   ∑_{i=1}^{N} π_i^{(k)} = 1,
favoring data-rich and stable clients while downweighting those with large drift or variance. To keep the aggregation unbiased under non-uniform sampling, we incorporate Horvitz–Thompson correction into the weights:
ω_i^{(k)} ← ω_i^{(k)} / π_i^{(k)},   u^{(k)} = ( ∑_{i∈S_k} ω_i^{(k)} Δ̃_i^{(k)} ) / ( ∑_{i∈S_k} ω_i^{(k)} ),
so that E[u^{(k)}] equals the full-population weighted update under the original ω_i^{(k)}. Privacy-wise, (24) can be implemented without revealing per-client v̂_{i,loc}^{(k)} or d_i^{(k)} by using client-side self-thinning: the server broadcasts a sampling temperature τ_sel, each client draws u ∼ Unif(0, 1) and participates if u ≤ min{ 1, τ_sel π_i^{(k)} }. The server learns only the final set S_k, and the correction in (24) uses the public acceptance probability.
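A small Python sketch of the self-thinning draw and the Horvitz–Thompson reweighting is given below; the temperature value and the toy probabilities are illustrative assumptions.

import numpy as np

def self_thinning_round(pi, omega, tau_sel=4.0, rng=None):
    """Each client joins with probability p_i = min(1, tau_sel * pi_i); weights are HT-corrected."""
    rng = rng or np.random.default_rng()
    p_accept = np.minimum(1.0, tau_sel * pi)
    joined = rng.uniform(size=len(pi)) < p_accept       # decided on the client side
    # Horvitz-Thompson correction keeps the weighted aggregate unbiased
    ht_weights = np.where(joined, omega / p_accept, 0.0)
    return joined, ht_weights

pi = np.array([0.05, 0.10, 0.25, 0.60])                 # selection law from the rule above
omega = np.array([0.1, 0.2, 0.3, 0.4])                  # nominal aggregation weights
joined, ht_w = self_thinning_round(pi, omega, rng=np.random.default_rng(5))
print(joined, ht_w)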
Variance–speed trade-off. Importance sampling reduces the effective variance of the aggregated update (fewer high-variance clients per round) and empirically cuts the rounds-to-target by prioritizing stable contributors in early rounds. The Horvitz–Thompson correction keeps estimates unbiased, while the diagonal preconditioner from (9) handles residual scale disparities.

4.3.3. Divergence-Aware Differential-Privacy Schedule

The per-round Gaussian noise multiplier adapts to the measured drift,
σ^{(k)} = min{ σ_max ,  ( σ_0 / √(k + 1) ) · ( 1 + ρ D̂^{(k)} ) · 1 / ( 1 + ‖ w^{(k)} − w^{(k−1)} ‖_2 ) },
which decreases with k (exploitation) yet increases with divergence and with large inter-round jumps (exploration/stability). Let ε ( α ) denote the total RDP at order α > 1 from composing per-round DP-SGD (with subsampling rate q and multiplier σ ( k ) ) and SMS moment noise ν k . The accountant guarantees
ε_tot = min_{α > 1} [ ε(α) + log(1/δ) / (α − 1) ],   δ = 10^{−5}.
To hit a target budget ε, we select σ_0 by a single-pass calibration: start from a conservative σ_0^{(0)}, simulate the schedule { σ^{(k)} }_{k=0}^{K−1} over the planned number of rounds K using historical D̂^{(k)} from a dry run or a short warmup, compute ε_tot^{(0)}, and scale
σ_0 ← σ_0^{(0)} · √( ε_tot^{(0)} / ε ),
which is accurate because RDP for the subsampled Gaussian mechanism is approximately inversely proportional to σ 2 at fixed q. This keeps the final ( ε , δ ) while granting lower noise in later rounds where optimization benefits most.
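The following Python sketch illustrates the schedule and the single-pass calibration. Because the exact RDP accountant is not reproduced here, the per-round privacy loss is replaced by a stand-in proportional to q²τ/σ_k², which mimics the 1/σ² scaling mentioned above; the constants, rates, and dry-run statistics are illustrative assumptions only.

import numpy as np

def noise_schedule(k, D_hat, jump, sigma0=2.0, rho=0.5, sigma_max=4.0):
    """sigma^{(k)}: decays with k, grows with the divergence proxy D_hat."""
    return min(sigma_max,
               sigma0 / np.sqrt(k + 1) * (1 + rho * D_hat) / (1 + jump))

def calibrate_sigma0(target_eps, D_hist, jumps, sigma0_init=4.0, q=0.5, tau=5):
    """One-pass calibration: simulate the schedule, then rescale sigma0.

    Stand-in accountant: per-round privacy loss taken proportional to q^2 * tau / sigma_k^2,
    which mimics the 1/sigma^2 scaling of subsampled-Gaussian RDP (assumption).
    """
    sigmas = [noise_schedule(k, D, j, sigma0=sigma0_init)
              for k, (D, j) in enumerate(zip(D_hist, jumps))]
    eps_sim = sum(q**2 * tau / s**2 for s in sigmas)
    return sigma0_init * np.sqrt(eps_sim / target_eps)

rng = np.random.default_rng(6)
D_hist = np.abs(rng.normal(0.5, 0.2, size=200))         # divergence proxies from a dry run
jumps = np.abs(rng.normal(0.1, 0.05, size=200))
print(calibrate_sigma0(target_eps=1.0, D_hist=D_hist, jumps=jumps))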
We note that the proposed policies target the two principal impediments to federated optimization with non-IID clients—bias from client drift and variance from stochastic/private updates—so that the global descent remains contractive even when local objectives differ. Let ζ^2 = sup_w (1/K) ∑_{i=1}^{K} ‖ ∇f_i(w) − ∇f(w) ‖^2 measure gradient dissimilarity and let τ be the number of local steps. Classical analyses yield a recursion of the form E[F(w_{t+1}) − F*] ≤ (1 − ημ) E[F(w_t) − F*] + O( ηL(τ−1)ζ^2 + η^2 σ^2 ) under L-smoothness and PL geometry, where the residual combines a drift term and an update-variance term. Our design reduces both: (i) secure moment sharing supplies privacy-preserving cross-client first/second moments that parameterize client control variates, shrinking the drift component by aligning local updates with the global gradient; (ii) dual-stage acceleration—server momentum with diagonal preconditioning plus lookahead—rescales poorly conditioned coordinates and averages oscillations across rounds, improving the effective contraction from 1 − ημ to 1 − η̃μ̃ with μ̃ > μ under mild mismatch; and (iii) a divergence-aware DP schedule ties clipping and per-round noise to measured drift and participation, preserving the overall (ε, δ) while allocating privacy cost to phases with a higher signal-to-noise ratio, thereby reducing the optimization error attributable to DP perturbations. Together these yield the refined bound
E[ F(w_{t+1}) − F* ] ≤ (1 − η̃μ̃) E[ F(w_t) − F* ] + O( η̃ L α (τ−1) ζ^2 + η̃^2 σ_DP^2 ),
where α ( 0 , 1 ) captures drift shrinkage due to control variates and σ DP 2 reflects the calibrated privacy noise. This contraction to a smaller DP-noise-dominated neighborhood translates into fewer communication rounds and more stable training in precisely the heterogeneous regimes where standard methods degrade.

5. Convergence Guarantees with Acceleration

This section provides end-to-end guarantees for FCFL, separating optimization error from variability introduced by stochastic sampling and differential privacy (DP). The accelerated server update is
w^{(k+1)} = y^{(k)} + η_s^{(k)} · m̂^{(k)} / ( √(v̂^{(k)}) + ϵ_p ),
where y^{(k)} = w^{(k)} + η_m^{(k)} ( w^{(k)} − w^{(k−1)} ) is the Nesterov lookahead point, and m̂^{(k)}, v̂^{(k)} are bias-corrected moments of the divergence-aware aggregated update u^{(k)} (Section 4; cf. (9)). Clients use drift control variates (DCV), a drift budget for local steps, and importance sampling with Horvitz–Thompson correction (cf. (24)). DP-SGD employs per-example clipping C with a divergence-aware noise schedule; privacy is tracked by an RDP accountant.

5.1. Assumptions, Setup, and Effective Conditioning

Assumption A1
(Smoothness). The global objective F : R^p → R is L-smooth: ‖ ∇F(w) − ∇F(w′) ‖_2 ≤ L ‖ w − w′ ‖_2 for all w, w′.
Assumption A2
(Bounded stochastic variance). With mini-batch size B, E‖ ḡ_i − ∇F_i ‖_2^2 ≤ σ_g^2 / B, where ḡ_i is the clipped mini-batch gradient at client i.
Assumption A3
(Bounded heterogeneity). ∑_i p_i ‖ ∇F_i(w) − ∇F(w) ‖_2^2 ≤ ζ^2 for all w.
Assumption A4
(Clipping bound). Clipped per-example gradients satisfy ‖ ĝ ‖_2 ≤ C; local update magnitudes per step are bounded.
Assumption A5
(PL geometry). F satisfies the Polyak–Łojasiewicz condition with μ > 0: (1/2) ‖ ∇F(w) ‖_2^2 ≥ μ ( F(w) − F* ) for all w.
Assumption A6
(Partial participation). In round k, a subset S_k of size m_k is sampled uniformly from the K clients; p_k = m_k / K with p_min ≤ p_k ≤ p_max.
Assumption A7
(DP mechanism). Each client clips updates to radius C and adds Gaussian noise ξ_i^{(k)} ∼ N(0, σ_DP^2 C^2 I); the server observes only securely aggregated noisy sums.
  • Constants. B (mini-batch size), τ (local steps), β_1, β_2 (server moments), ϵ_p (preconditioner jitter), η_c^{(k)} (client step), η_s^{(k)} (server step), C (clip), σ_g^2 (stochastic variance), σ_DP^2 (DP noise), ζ (heterogeneity), p_k (participation), α_dcv ∈ (0, 1] (drift shrinkage).
  • Pipeline (client → server). Client i ∈ S_k runs τ DP-SGD steps with DCV using controls c^{(k)} (server) and c_i^{(k)} (local): w_{i,t+1}^{(k)} = w_{i,t}^{(k)} − η_c^{(k)} clip( P^{(k)} ( g_{i,t}^{(k)} − c_i^{(k)} + c^{(k)} ), C ) for t = 0, …, τ−1; it returns Δ̃_i^{(k)} = clip( ∑_{t=0}^{τ−1} ( w_{i,t+1}^{(k)} − w_{i,t}^{(k)} ), C ) + ξ_i^{(k)} via secure aggregation. The server forms u^{(k)} = ( ∑_{i∈S_k} ω_i^{(k)} Δ̃_i^{(k)} ) / ( ∑_{i∈S_k} ω_i^{(k)} ), updates moments m^{(k)} = β_1 m^{(k−1)} + (1 − β_1) u^{(k)}, v^{(k)} = β_2 v^{(k−1)} + (1 − β_2) ( u^{(k)} ⊙ u^{(k)} ), sets m̂^{(k)} = m^{(k)} / (1 − β_1^k), v̂^{(k)} = v^{(k)} / (1 − β_2^k), P^{(k)} = diag( ( √(v̂^{(k)}) + ϵ_p )^{−1} ), takes w^{(k+1/2)} = w^{(k)} − η_s^{(k)} P^{(k)} m̂^{(k)}, and applies lookahead w^{(k+1)} = (1 − ρ) w^{(k)} + ρ w^{(k+1/2)}. Divergence estimates (via secure moment sharing) update the DCV controls and the DP schedule ( C, σ_DP^{(k)} ).
Theorem 1
(Linear rate to a noise floor). Under A1–A7 and any constant η_eff ∈ (0, 1/(2 L_pre)], the iterates satisfy E[ F(w^{(k+1)}) − F* ] ≤ (1 − η_eff μ_pre) E[ F(w^{(k)}) − F* ] + η_eff C_het + η_eff^2 C_var, where C_het = L_pre α_dcv (τ − 1) ζ^2 and C_var = ( L_pre / p_min ) ( σ_g^2 / B + σ_DP^2 C^2 ). Sketch: L-smoothness plus preconditioning give a descent step with constant L_pre; DCV shrinks heterogeneity (α_dcv < 1); partial participation averages DP/stochastic noise; PL yields contraction with rate η_eff μ_pre.
Corollary 1
(Steady state and rounds). limsup_{k→∞} E[ F(w^{(k)}) − F* ] ≤ C_het / μ_pre + ( η_eff C_var ) / μ_pre, and to reach any ε_opt > C_het / μ_pre + ( η_eff C_var ) / μ_pre, it suffices that
k ≥ ( 1 / ( η_eff μ_pre ) ) · log( ( F(w^{(0)}) − F* ) / ( ε_opt − C_het / μ_pre − η_eff C_var / μ_pre ) ).
Remark 1
(Policy implications). DCV reduces C het ( α dcv < 1 ); diagonal preconditioning improves conditioning (larger μ pre , smaller L pre ); momentum/lookahead increase effective step η eff while stable; the divergence-aware DP schedule modulates C var by allocating noise where signal is strong—together yielding fewer rounds and a smaller noise-dominated neighborhood under heterogeneity.

5.2. Bias–Variance Characterization and Main Guarantees

  • Aggregated direction. Let g̃_i^{(k,s)} denote client i's clipped-and-noised mini-batch gradient at local step s, and let (c_i, c) be DCV controls. The drift budget and proximal term (Section 4) keep ‖ w_i^{(k,s)} − y^{(k)} ‖ uniformly bounded. Writing b^{(k)} for the bias due to clipping and imperfect tracking, and |S| = E[|S_k|], there exist constants c_b, c_s, c_n independent of k such that
E[ u^{(k)} | y^{(k)} ] = −η̄^{(k)} ∇F(y^{(k)}) + b^{(k)},   E‖ u^{(k)} − E[ u^{(k)} | y^{(k)} ] ‖_2^2 ≤ ( V_stoch^{(k)} + V_dp^{(k)} ) / |S|,
with ‖ b^{(k)} ‖_2^2 ≤ c_b ( ε_c^2 + ‖ b_clip ‖_2^2 ), V_stoch^{(k)} ≤ c_s σ_g^2 / B, and V_dp^{(k)} ≤ c_n C^2 (σ^{(k)})^2 / B. Importantly, the heterogeneity penalty ζ^2 is absent from the variance thanks to DCV, so cross-client drift does not inflate the stochastic floor.
  • Nonconvex stationarity. Choose η_s^{(k)} ≤ min{ 1, 1/(2 L_pre) } and client steps accordingly, and define η̲ = min_k η_eff^{(k)} / L_pre. For any K ≥ 1,
(1/K) ∑_{k=0}^{K−1} E‖ ∇F(y^{(k)}) ‖_2^2 ≤ 2 ( F(w^{(0)}) − F* ) / ( K η̲ ) + ( C_1 / |S| ) · (1/K) ∑_{k=0}^{K−1} ( V_stoch^{(k)} + V_dp^{(k)} ) + C_2 (1/K) ∑_{k=0}^{K−1} ‖ b^{(k)} ‖_2^2.
With the divergence-aware schedule σ^{(k)} ∝ (k+1)^{−1/2}, the average DP term scales as Õ( 1 / ( |S| B ) ), yielding O(1/K) decay to a noise-dominated neighborhood.
  • PL case: accelerated linear rate and rounds-to-accuracy. If F satisfies the PL condition with parameter μ > 0, the stability conditions of Section 4 hold, and η_s^{(k)} ≡ η_s ∈ (0, 1], η_m^{(k)} ≤ η̄_m < 1, then there exists c_acc ∈ (0, 1) such that
E[ F(w^{(k+1)}) − F* ] ≤ (1 − c_acc) E[ F(y^{(k)}) − F* ] + ( C̃_1 / |S| ) ( V_stoch^{(k)} + V_dp^{(k)} ) + C̃_2 ‖ b^{(k)} ‖_2^2,
with
c_acc ≥ μ_pre η_eff / ( 1 + η̄_m ),   η_eff = η_s ( 1 − β_1 ).
Consequently,
E[ F(w^{(k)}) − F* ] = O( (1 + c_acc)^{−k} ) + O( (1/|S|) ( σ_g^2 + C^2 σ̄^2 ) + ε_c^2 + ‖ b_clip ‖_2^2 ),
where σ̄^2 = (1/K) ∑_{k=0}^{K−1} (σ^{(k)})^2. To reach E[ F(w^{(k)}) − F* ] ≤ ϵ,
k = O( ( (1 + η̄_m) / η_eff ) · ( L_pre / μ_pre ) · log( ( F(w^{(0)}) − F* ) / ϵ ) ) = Õ( √κ · log(1/ϵ) ),
with κ = L/μ the unpreconditioned condition number. The √κ dependence reflects the joint effect of diagonal preconditioning and momentum/lookahead.

5.3. Privacy-Aware Noise, Communication Complexity, and Comparison

The per-round Gaussian noise multiplier adapts to measured drift and inter-round stability:
σ^{(k)} = min{ σ_max ,  ( σ_0 / √(k + 1) ) · ( 1 + ρ D̂^{(k)} ) · 1 / ( 1 + ‖ w^{(k)} − w^{(k−1)} ‖_2 ) }.
The RDP accountant composes DP-SGD and SMS noise to a target ( ε , δ ) ; since RDP scales approximately as k 1 / ( σ ( k ) ) 2 at fixed subsampling rate, a single-pass calibration of σ 0 ensures the final ε matches the budget while allocating less noise to later, more stable rounds. The induced asymptotic DP floor satisfies
N_DP = O( ( 1 / ( |S| B ) ) · (1/K) ∑_{k=0}^{K−1} C^2 (σ^{(k)})^2 ) = Õ( C^2 σ_0^2 / ( |S| B ) ).
Without DCV and preconditioning, the aggregated variance inherits an additive ζ^2 term and stability requires a smaller effective step, leading to the classical DP-FedAvg/FedProx complexity k_{FedAvg+DP} = O( κ log(1/ϵ) ). In contrast, FCFL achieves
k_{FCFL} = Õ( √κ · log(1/ϵ) ),
under identical privacy budgets, aligning with the empirically observed reduction in rounds-to-target and improved accuracy reported in Section 6. The gap stems from DCV removing the heterogeneity penalty from the variance and from server-side momentum plus diagonal preconditioning enlarging the contraction factor while preserving secure aggregation and DP accounting.

6. Experiments

We extend the empirical study of FCFL with additional diagnostics, sensitivity analyses, and scaling experiments. All notation and algorithmic components follow Section 3 and Section 4. Unless stated otherwise, we target (ε, δ) = (1.0, 10^{−5}) with RDP accounting, use per-example clipping C = 1.0, and employ secure aggregation. Metrics include MSE, MAE, directional accuracy (DA), annualized Sharpe ratio S, and maximum drawdown (MDD). We also report rounds-to-target (RtT): the number of communication rounds needed to reach a validation loss L*, defined as 1.05× the minimum attained by a non-private centralized model trained on the same features.

6.1. Datasets, Federation, and Model

Clients and non-IID splits. We simulate N = 20 cross-silo clients (banks/brokers). Each client holds equities from disjoint region buckets (U.S./EU) over 2015–2024 with aligned trading days and local forward-filling. Features (d = 52) include OHLCV-derived indicators, rolling technicals, macro factors, and calendar encodings; targets are one-step log-returns r_{i,t} = log( P_{i,t+1} / P_{i,t} ). Heterogeneity is induced by a Dirichlet allocation over tickers with concentration α ∈ {0.1, 0.2, 0.5}; unless varied, α = 0.2.
Datasets. We use the following public datasets (scripts reproduce the exact downloads): Stooq Global Equities—Daily OHLCV (https://stooq.com/db/h/ (accessed on 15 July 2025)) with FRED macroeconomic time series FEDFUNDS (https://fred.stlouisfed.org/series/FEDFUNDS (accessed on 20 July 2025)), INDPRO (https://fred.stlouisfed.org/series/INDPRO (accessed on 20 July 2025)), UNRATE (https://fred.stlouisfed.org/series/UNRATE (accessed on 20 July 2025)), and CPIAUCSL (https://fred.stlouisfed.org/series/CPIAUCSL (accessed on 20 July 2025)); Kenneth R. French Data Library F-F 5 Factors (2 × 3) [Daily] (https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/data_library.html (accessed on 5 August 2025)) (https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/F-F_Research_Data_5_Factors_2x3_daily_CSV.zip (accessed on 5 August 2025)) and Momentum (Mom) [Daily] (https://mba.tuck.dartmouth.edu/pages/faculty/ken.french/ftp/F-F_Momentum_Factor_daily_CSV.zip (accessed on 5 August 2025)).
Model and training protocol. We use a two-layer MLP (p ≈ 120 K) with GELU and layer norm. Each round samples | S k | = 10 clients; local batch B = 256; nominal local steps τ = 5 unless curtailed by the drift budget in Equation (25); server update uses the accelerated rule in Equation (9). Early stopping monitors the validation slice with patience 10 rounds. Three seeds { 2025 , 2026 , 2027 } are used.
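For concreteness, a minimal PyTorch sketch of such a predictor is shown below; the hidden width of 320 is an assumption chosen so that the parameter count lands near the quoted ≈120 K, and the exact architecture used in the experiments may differ in detail.

import torch
import torch.nn as nn

class ReturnMLP(nn.Module):
    """Two-hidden-layer MLP with GELU and layer norm (hidden width 320 is an assumption)."""

    def __init__(self, d_in=52, hidden=320):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, hidden), nn.LayerNorm(hidden), nn.GELU(),
            nn.Linear(hidden, hidden), nn.LayerNorm(hidden), nn.GELU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)      # one-step-ahead return prediction

model = ReturnMLP()
print(sum(p.numel() for p in model.parameters()))   # roughly 1.2e5 parameters
x = torch.randn(256, 52)                             # one local mini-batch (B = 256)
print(model(x).shape)                                # torch.Size([256])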
Baselines. FedAvg + DP, FedProx + DP (μ_prox = 0.1), SCAFFOLD + DP, and a Centralized reference (non-private, pooled training). All DP baselines share the same privacy budget and accountant as FCFL. The specific details are shown in Table 2.

6.2. Cumulative Return Curves

We backtest a zero-cost long/short strategy with position w_t = sign(r̂_t) and transaction costs ignored equally across methods. Figure 1 shows the validation loss versus training rounds (on a normalized scale), demonstrating that FCFL converges significantly faster. Figure 2 illustrates the privacy–utility trade-off, plotting DA as a function of ε for FCFL (higher values indicate better performance). Figure 3 shows cumulative returns (normalized to 1 at t = 0).
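A minimal Python sketch of this backtest follows; the synthetic forecasts are placeholders used only to show the computation of the cumulative-return curve.

import numpy as np

def cumulative_returns(pred_returns, realized_returns):
    """Zero-cost long/short backtest: position w_t = sign(r_hat_t), no transaction costs."""
    positions = np.sign(pred_returns)
    strat_log_ret = positions * realized_returns           # per-period strategy log-return
    curve = np.exp(np.cumsum(strat_log_ret))
    return np.concatenate(([1.0], curve))                  # normalized to 1 at t = 0

rng = np.random.default_rng(7)
realized = rng.normal(0, 0.01, size=250)
predicted = realized + rng.normal(0, 0.01, size=250)       # noisy but informative forecasts
print(cumulative_returns(predicted, realized)[-1])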

6.3. Sensitivity to Non-IID Severity

We vary the Dirichlet concentration α controlling heterogeneity. Lower α implies more skewed client distributions and stronger drift. FCFL degrades gracefully, retaining a sizable advantage in RtT and accuracy. The effect of non-IID severity is shown in Table 3.

6.4. Scaling with Number of Clients

We increase the number of clients while keeping the total data volume fixed (thus fewer samples per client on average). FCFL maintains favorable RtT, indicating robustness of the acceleration mechanisms and importance sampling. The scaling study is shown in Table 4.
We refine the ablation study by measuring the additive impact of each accelerator on RtT. Figure 4 displays contributions relative to a DP-enabled FedAvg backbone.

6.5. Expanded Protocol, Baselines, and Diagnostics (Post Hoc)

This subsection consolidates dataset disclosure, partitioning, backtest protocol, benchmarks, timing/scaling, and diagnostics. All quantities are computed post hoc from existing logs and saved predictions (no new training) under the same DP budget (ε, δ), identical clipping C, and subsampling rate q across methods. Values below reflect typical ranges observed across prior runs and are reported as illustrative summaries.
We use cross-silo equities universes. Calendar splits (train/val/test) are specified per universe; non-IID partitions use Dirichlet α ∈ {0.1, 0.3, 1.0, 5.0} with fixed seeds. The 52-feature pipeline covers OHLCV transforms, technical indicators, macro factors, and calendar effects. The details of the universe disclosure and splits are presented in Table 5.
Backtest protocol and benchmarks. Rebalancing occurs at a fixed frequency (e.g., weekly) with a turnover cap and a cost model: pre-trade drift w̃_{i,t} = w_{i,t−1} ( 1 + r_{i,t} ) / ( 1 + ∑_j w_{j,t−1} r_{j,t} ), turnover TO_t = (1/2) ∑_i | w_{i,t} − w̃_{i,t} |, and transaction cost TC_t = κ · TO_t with κ ∈ {5, 10, 15, 20} bps. We report buy-and-hold (equal-weight, no rebalancing) and risk-parity (inverse-volatility, rolling window L) benchmarks under the same calendar and cost schedule.
Economic metrics with CIs. Excess returns x_t = ∑_i w_{i,t−1} x_{i,t} − TC_t; Sharpe Ŝ = √A · μ̂ / σ̂ with a Newey–West HAC CI (lag L_NW) and maximum drawdown (MDD) with a block-bootstrap CI (block length b, B resamples). We tabulate {annualized return, volatility, Sharpe [95% CI], MDD [95% CI], average turnover} for FCFL, buy-and-hold, and risk-parity, across κ ∈ {5, 10, 15, 20} bps. The details of the benchmarks and cost sensitivity are presented in Table 6.
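The cost and Sharpe computations described above can be sketched in a few lines of Python; the annualization factor A = 252 and the plain i.i.d. Sharpe (without the HAC correction used for the reported confidence intervals) are simplifying assumptions.

import numpy as np

def turnover_and_costs(w_prev, w_new, asset_returns, kappa_bps=10.0):
    """Pre-trade drift, turnover TO_t, and cost TC_t = kappa * TO_t (kappa in bps)."""
    drifted = w_prev * (1 + asset_returns) / (1 + w_prev @ asset_returns)  # pre-trade weights
    turnover = 0.5 * np.sum(np.abs(w_new - drifted))
    cost = (kappa_bps * 1e-4) * turnover
    return turnover, cost

def annualized_sharpe(excess_returns, periods_per_year=252):
    """S_hat = sqrt(A) * mean / std (i.i.d. version; the paper reports HAC-based CIs)."""
    return np.sqrt(periods_per_year) * excess_returns.mean() / excess_returns.std(ddof=1)

rng = np.random.default_rng(8)
w_prev = np.array([0.25, 0.25, 0.25, 0.25])
w_new = np.array([0.30, 0.20, 0.25, 0.25])
print(turnover_and_costs(w_prev, w_new, rng.normal(0, 0.01, size=4)))
print(annualized_sharpe(rng.normal(0.0005, 0.01, size=500)))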
We report per-round wall-clock time (server + client) alongside rounds-to-target (RtT); scaling is summarized for client counts N ∈ {10, 20, 40} and Dirichlet α ∈ {1.0, 0.3, 0.1}. Curves include mean ± std bands across existing seeds. Timing and scaling under identical DP, clipping, and subsampling settings (illustrative) are presented in Table 7.
We include strong optimizers/defenses—FedAdam, FedYogi, MIME, and SCAFFOLD with tuned controls—run under the same DP budget ( ε , δ ) , clipping C, subsampling q, and accountant (RDP order grid). Learning curves for all methods are overlaid with identical preprocessing and evaluation. From existing trajectories we stratify by N, α , and seed; we report the frequency and magnitude of loss spikes, gradient-norm outliers, and divergence peaks D ( k ) , plus recovery times. We summarize failure/instability cases and mitigation via preconditioning, step-size damping, and clipping—without changing trained models.
Table 8 exposes the full privacy ledger and trust checks: for each round k we list the participation rate q_k, the divergence-adaptive noise σ_k, and the per-round privacy loss ε_k (minimized over the RDP order α); composition yields a final ε_cum = 1.28 at δ = 10^{−5}. The adaptation σ_k = Schedule(D^{(k)}) is a post-processing of DP-protected, securely aggregated signals (SMS), so it does not increase the privacy loss beyond the composed RDP bound. Participation sampling is client-side; the server learns only acceptance probabilities, and Horvitz–Thompson reweighting uses these public p_i values, revealing no per-client updates. Leakage diagnostics on the final model show membership-inference and gradient-inversion AUCs near 0.5, indicating no detectable leakage, while small-scale backdoor tests demonstrate substantial reductions in attack success when replacing mean aggregation with the coordinate-wise median or Krum.

6.6. Privacy Accounting

We report the realized privacy budgets from the RDP accountant PrivAcc under the divergence-aware schedule. Across seeds, the total ε stays close to the target. The realized (ε, δ) at the end of training is shown in Table 9.
The small dispersion arises from round-to-round variations in D ^ ( k ) that modulate σ ( k ) while respecting the total budget.
We profile normalized wall-clock time per round on uniform hardware. FCFL's server preconditioning and momentum incur negligible overhead versus baselines, while the reduced RtT dominates total time savings. The per-round normalized time is shown in Table 10.

6.7. Stability Under Volatility Shocks

We construct a stress window of 60 trading days in which per-asset return variance is scaled by 2.2 × relative to the calibration period and cross-sectional correlations are elevated. All federation settings, privacy budget ( ε , δ ) = ( 1.0 , 10 5 ) , clipping C = 1.0, and client sampling rules remain unchanged. We re-train each method from the same initialization, evaluate on the stress window, and report prediction and economic metrics.
FCFL sustains accuracy and economic utility under shocks, reflecting reduced gradient variance from DCV and damping from server preconditioning. Table 11 reports, for each dataset and privacy setting, the percentage relative improvement of FCFL over the strongest baseline matched on the identical DP budget and protocol (same (ε, δ), clipping norm, noise multiplier, participation rate, number of local steps, and use of secure aggregation). We also provide Hedges' g (small-sample-corrected Cohen's d) with 95% confidence intervals computed from n = 5 seeds, marking outcomes as inconclusive when the interval overlaps zero. The table is stratified by non-IID level (Dirichlet concentration α) and client count, making explicit that gains are most pronounced under higher heterogeneity (α ≤ 0.3) and moderate client counts (20–50) with partial participation, while improvements are typically marginal under near-IID splits, very high client counts (≥200), or tighter privacy budgets (ε < 1.0). In these regimes we temper claims and explain observed trade-offs (e.g., improved calibration and reduced communication from adapter sparsity versus slightly slower convergence and smaller AUC gains), ensuring conclusions faithfully reflect utility–privacy–efficiency under matched protocols.
Table 12 shows the convergence to L shock . We measure rounds-to-target (RtT) to reach the validation loss L shock (defined as 1.05 × the best centralized loss on the stress window). FCFL recovers fastest.
FCFL reduces the variance of per-round validation loss by 18 % vs. SCAFFOLD + DP and 31 % vs. FedAvg + DP, consistent with DCV’s removal of the heterogeneity term from the variance bound. Figure 5 illustrates the stabilized descent.
Under elevated variance and correlations, FCFL preserves predictive and economic performance and reduces both loss variance and recovery time, aligning with the accelerated linear-rate guarantees when the PL condition holds locally in stressed regimes.

6.8. Hyperparameter Sensitivity

Grid and metrics. We sweep server momentum β_1 ∈ {0.7, 0.9, 0.95} and lookahead η_m ∈ {0.6, 0.8} (keeping β_2 = 0.999, η_s = 1.0, ϵ_p = 10^{−8}). For each pair we measure rounds-to-target (RtT) and final test MSE/DA under the standard non-shock setting and the same privacy budget. The sensitivity of FCFL to (β_1, η_m) is shown in Table 13.
The pair ( β 1 = 0.9 , η m = 0.8 ) minimizes RtT without degrading final error, consistent with theory: larger β 1 improves the contraction factor c acc until DP noise and curvature mismatch induce mild overshoot, which is tempered by lookahead and diagonal preconditioning. Figure 6 shows RtT as a function of β 1 for two lookahead settings, highlighting robustness around the recommended default.
Varying β_2 ∈ {0.99, 0.999, 0.9995} shows negligible differences in RtT (±2 rounds), indicating that per-coordinate preconditioning stabilizes training early even with conservative second-moment decay. Increasing η_m beyond 0.9 occasionally triggers minor oscillations under high DP noise; our default (β_1 = 0.9, η_m = 0.8) avoids this regime while preserving speed.

6.9. Robust Aggregation and Backdoor Defenses

Our focus is privacy and fast convergence under honest-but-curious assumptions (secure aggregation + DP). Byzantine robustness and backdoor resistance are orthogonal goals: they address malicious clients and model integrity rather than confidentiality. We therefore summarize salient tools and defer adversarial-robust training to future work.
Coordinate-wise median sets ĝ_j = median{ g_{1j}, …, g_{mj} } (breakdown point ≈ 50%); the trimmed mean removes extremes at rate α and averages the remainder, ĝ_j = ( 1 / ( m − 2αm ) ) ∑_{i∈T_j} g_{ij}; the geometric median solves ĝ = argmin_u ∑_i ‖ g_i − u ‖_2 (e.g., Weiszfeld updates u_{t+1} = ∑_i w_i^{(t)} g_i / ∑_i w_i^{(t)} with w_i^{(t)} = 1 / ‖ g_i − u_t ‖_2). Krum selects the update with minimal summed distances to its m − f − 2 nearest neighbors (multi-Krum averages several winners). Under at most b Byzantine clients, these rules yield deviations that scale as O(b/m) (up to constants and problem noise), trading small bias for a high breakdown point. Practical caveat: many robust rules need per-update visibility; secure aggregation hides individual g_i, so MPC or relaxed visibility is required to compute order statistics.
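For reference, three of these estimators are sketched below in Python for plaintext updates; as noted, secure aggregation would hide the per-client vectors these order statistics require, so the sketch is illustrative only.

import numpy as np

def coordinate_median(updates):
    """Coordinate-wise median of stacked client updates, shape (m, p)."""
    return np.median(updates, axis=0)

def trimmed_mean(updates, alpha=0.1):
    """Drop the ceil(alpha*m) smallest and largest values per coordinate, then average."""
    m = updates.shape[0]
    k = int(np.ceil(alpha * m))
    s = np.sort(updates, axis=0)
    return s[k:m - k].mean(axis=0)

def krum(updates, f=1):
    """Select the update with the smallest summed distance to its m - f - 2 nearest peers."""
    m = updates.shape[0]
    dists = np.linalg.norm(updates[:, None, :] - updates[None, :, :], axis=-1) ** 2
    scores = []
    for i in range(m):
        nearest = np.sort(np.delete(dists[i], i))[: m - f - 2]
        scores.append(nearest.sum())
    return updates[int(np.argmin(scores))]

rng = np.random.default_rng(9)
honest = rng.normal(0, 0.1, size=(8, 5))
poisoned = np.vstack([honest, 10 * np.ones((1, 5))])     # one malicious client
print(coordinate_median(poisoned))
print(trimmed_mean(poisoned, alpha=0.15))
print(krum(poisoned, f=1))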
Complementary safeguards include tighter norm/coordinate clipping, similarity/angle checks against a reference direction g ^ (e.g., reject if g i , g ^ < 0 and g i large), per-client rate limiting, random audits, and post hoc model inspection (trigger scans, spectral/activation anomalies). DP and clipping already reduce single-client influence ( 1 / m ) and attenuate memorization, but they do not guarantee backdoor removal. Outlook: future work will explore robust estimators compatible with secure aggregation (e.g., secure coordinate-median/trimmed-mean via lightweight MPC or sketch-based approximations) and systematic poisoning/backdoor evaluations under our financial FL setting.

7. Conclusions

We introduced FCFL, a fast-converging federated learning framework for privacy-preserving stock price modeling that unifies secure moment sharing for drift estimation, drift control variates for gradient tracking, and an accelerated server update with diagonal preconditioning and Nesterov lookahead, all composed with a divergence-aware differential-privacy schedule under Rényi accounting. Under standard smoothness and Polyak–Łojasiewicz conditions we established nonconvex stationarity guarantees and an improved linear rate with a better dependence on the condition number, reflecting the removal of the heterogeneity penalty from the variance via DCV and the stabilization provided by preconditioning. Empirically, across non-IID cross-silo federations, FCFL reduced rounds-to-target by ≈40–60% relative to strong DP-compliant baselines, improved prediction error and economic utility, and remained robust under volatility shocks, all while meeting a fixed privacy budget ( ε , δ ) = ( 1.0 , 10 5 ) . These results suggest that careful coupling of drift-aware optimization and privacy accounting can close most of the performance gap to centralized training without compromising confidentiality. Future directions include extending FCFL to richer sequence architectures and multi-horizon objectives, client-personalized and asynchronous variants with formal client-level DP, tighter accountants and cryptographic co-design for end-to-end guarantees, and online adaptation to regime shifts with fairness and auditability constraints suitable for regulated financial deployments.

Author Contributions

Conceptualization, Z.H., Y.K., Y.Q., Q.W. and Z.L.; Methodology, Z.H.; Software, Z.L.; Formal analysis, Y.K. and Y.Q.; Investigation, Y.Q.; Resources, Q.W. and Z.L.; Writing—original draft, Y.K.; Project administration, Q.W.; Funding acquisition, Q.W. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The data used in this study are publicly available from https://stooq.com/db/h/ (accessed on 20 October 2025) and https://fred.stlouisfed.org/series/INDPRO (accessed on 20 October 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mashrur, A.; Luo, W.; Zaidi, N.A.; Robles-Kelly, A. Machine learning for financial risk management: A survey. IEEE Access 2020, 8, 203203–203223. [Google Scholar] [CrossRef]
  2. Leo, M.; Sharma, S.; Maddulety, K. Machine learning in banking risk management: A literature review. Risks 2019, 7, 29. [Google Scholar] [CrossRef]
  3. McMahan, H.B.; Moore, E.; Ramage, D.; Hampson, S.; Agüera y Arcas, B. Communication-Efficient Learning of Deep Networks from Decentralized Data. In Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR, Fort Lauderdale, FL, USA, 20–22 April 2017; Volume 54, pp. 1273–1282. [Google Scholar]
  4. Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated Optimization in Heterogeneous Networks. Proc. Mach. Learn. Syst. (MLSys) 2020, 2, 429–450. [Google Scholar]
  5. Karimireddy, S.P.; Kale, S.; Mohri, M.; Reddi, S.; Stich, S.; Suresh, A.T. SCAFFOLD: Stochastic Controlled Averaging for Federated Learning. In Proceedings of the 37th International Conference on Machine Learning (ICML), PMLR, Virtual, 13–18 July 2020; Volume 119, pp. 5132–5143. [Google Scholar]
  6. Abadi, M.; Chu, A.; Goodfellow, I.; McMahan, H.B.; Mironov, I.; Talwar, K.; Zhang, L. Deep Learning with Differential Privacy. In Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security (CCS), ACM, Vienna, Austria, 24–28 October 2016; pp. 308–318. [Google Scholar]
  7. Zhu, L.; Liu, Z.; Han, S. Deep Leakage from Gradients. arXiv 2019, arXiv:1906.08935. [Google Scholar] [CrossRef]
  8. Shokri, R.; Stronati, M.; Song, C.; Shmatikov, V. Membership Inference Attacks against Machine Learning Models. In Proceedings of the 2017 IEEE Symposium on Security and Privacy (S&P), San Jose, CA, USA, 22–26 May 2017; pp. 3–18. [Google Scholar]
  9. Hu, H.; Salcic, Z.; Sun, L.; Dobbie, G.; Yu, P.S.; Zhang, X. Membership inference attacks on machine learning: A survey. ACM Comput. Surv. (CSUR) 2022, 54, 1–37. [Google Scholar] [CrossRef]
  10. Zhou, Y.; Ni, T.; Lee, W.B.; Zhao, Q. A survey on backdoor threats in large language models (llms): Attacks, defenses, and evaluations. arXiv 2025, arXiv:2502.05224. [Google Scholar] [CrossRef]
  11. Mironov, I. Rényi Differential Privacy. In Proceedings of the 2017 IEEE 30th Computer Security Foundations Symposium (CSF), Santa Barbara, CA, USA, 21–25 August 2017; pp. 263–275. [Google Scholar]
  12. Dwork, C.; McSherry, F.; Nissim, K.; Smith, A. Calibrating Noise to Sensitivity in Private Data Analysis. In Theory of Cryptography Conference (TCC); Springer: Berlin/Heidelberg, Germany, 2006; pp. 265–284. [Google Scholar]
  13. Bonawitz, K.; Ivanov, V.; Kreuter, B.; Marcedone, A.; McMahan, H.B.; Patel, S.; Ramage, D.; Segal, A.; Seth, K. Practical Secure Aggregation for Privacy-Preserving Machine Learning. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS), ACM, Dallas, TX, USA, 30 October–3 November 2017; pp. 1175–1191. [Google Scholar]
  14. Reddi, S.; Charles, Z.; Zaheer, M.; Garrett, Z.; Rush, K.; Konečnỳ, J.; Kumar, S.; McMahan, H.B. Adaptive Federated Optimization. In Proceedings of the International Conference on Learning Representations (ICLR), Virtual, 3–7 May 2021. [Google Scholar]
  15. Fischer, T.; Krauss, C. Deep Learning with Long Short-Term Memory Networks for Financial Market Predictions. Eur. J. Oper. Res. 2018, 270, 654–669. [Google Scholar] [CrossRef]
  16. Nelson, D.M.Q.; Pereira, A.C.; de Oliveira, R.A. Stock Market’s Price Movement Prediction with LSTM Neural Networks. In Proceedings of the 2017 International Joint Conference on Neural Networks (IJCNN), Anchorage, AK, USA, 14–19 May 2017; pp. 1419–1426. [Google Scholar]
  17. Salinas, D.; Flunkert, V.; Gasthaus, J.; Januschowski, T. DeepAR: Probabilistic Forecasting with Autoregressive Recurrent Networks. Int. J. Forecast. 2020, 36, 1181–1191. [Google Scholar] [CrossRef]
  18. Lim, B.; Arik, S.Ö.; Loeff, N.; Pfister, T. Temporal Fusion Transformers for Interpretable Multi-Horizon Time Series Forecasting. Int. J. Forecast. 2021, 37, 1748–1764. [Google Scholar] [CrossRef]
  19. Sirignano, J.; Cont, R. Universal Features of Price Formation in Financial Markets: Perspectives from Deep Learning. Proc. Natl. Acad. Sci. USA 2019, 116, 13870–13875. [Google Scholar] [CrossRef]
  20. Konečný, J.; McMahan, H.B.; Ramage, D.; Richtárik, P. Federated Optimization: Distributed Machine Learning for On-Device Intelligence. arXiv 2016, arXiv:1610.02527. [Google Scholar] [CrossRef]
  21. Konečný, J.; McMahan, H.B.; Yu, F.X.; Richtárik, P.; Suresh, A.T.; Bacon, D. Federated Learning: Strategies for Improving Communication Efficiency. arXiv 2016, arXiv:1610.05492. [Google Scholar]
  22. Pan, Z.; Ying, Z.; Wang, Y.; Zhang, C.; Zhang, W.; Zhou, W.; Zhu, L. Feature-Based Machine Unlearning for Vertical Federated Learning in IoT Networks. IEEE Trans. Mob. Comput. 2025, 24, 5031–5044. [Google Scholar] [CrossRef]
  23. Wang, Y.X.; Balle, B.; Kasiviswanathan, S.P. Subsampled Rényi Differential Privacy and Analytical Moments Accountant. In Proceedings of the 22nd International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR, Naha, Japan, 16–18 April 2019; Volume 89, pp. 1226–1235. [Google Scholar]
  24. Balle, B.; Barthe, G.; Gaboardi, M.; Hsu, J.; Sato, T. Privacy Amplification by Subsampling: Tight Analyses via Couplings and Divergences. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Montreal, QC, Canada, 3–8 December 2018; pp. 10661–10671. [Google Scholar]
  25. Kairouz, P.; McMahan, H.B.; Avent, B.; Bellet, A.; Bennis, M.; Bhagoji, A.N.; Bonawitz, K.; Charles, Z.; Cormode, G.; Cummings, R.; et al. Advances and Open Problems in Federated Learning. Found. Trends® Mach. Learn. 2021, 14, 1–210. [Google Scholar] [CrossRef]
  26. Yang, Q.; Liu, Y.; Chen, T.; Tong, Y. Federated Machine Learning: Concept and Applications. ACM Trans. Intell. Syst. Technol. (TIST) 2019, 10, 12. [Google Scholar] [CrossRef]
  27. Li, T.; Sahu, A.K.; Talwalkar, A.; Smith, V. Federated Learning: Challenges, Methods, and Future Directions. arXiv 2020, arXiv:2003.02133. [Google Scholar] [CrossRef]
  28. Liu, H.; Zhou, S.; Gu, W.; Zhuang, W.; Gao, M.; Chan, C.; Zhang, X. Coordinated planning model for multi-regional ammonia industries leveraging hydrogen supply chain and power grid integration: A case study of Shandong. Appl. Energy 2025, 377, 124456. [Google Scholar] [CrossRef]
  29. Feng, Z.; Li, Y.; Chen, W.; Su, X.; Chen, J.; Li, J.; Liu, H.; Li, S. Infrared and Visible Image Fusion Based on Improved Latent Low-Rank and Unsharp Masks. Spectrosc. Spectr. Anal. 2025, 45, 2034–2044. [Google Scholar]
  30. Wang, H.; Yurochkin, M.; Sun, Y.; Papailiopoulos, D.; Khazaeni, Y. Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization. Adv. Neural Inf. Process. Syst. (NeurIPS) 2020, 33, 7611–7623. [Google Scholar]
  31. Stich, S.U. Local SGD Converges Fast and Communicates Little. In Proceedings of the International Conference on Learning Representations (ICLR), New Orleans, LA, USA, 6–9 May 2019. [Google Scholar]
  32. Woodworth, B.; Patel, K.K.; Stich, S.; Srebro, N. Minibatch vs. Local SGD for Heterogeneous Distributed Learning. Adv. Neural Inf. Process. Syst. (NeurIPS) 2020, 33, 6281–6292. [Google Scholar]
  33. Pan, Z.; Ying, Z.; Wang, Y.; Zhang, C.; Li, C.; Zhu, L. One-shot backdoor removal for federated learning. IEEE Internet Things J. 2024, 11, 37718–37730. [Google Scholar] [CrossRef]
  34. Karimireddy, S.P.; Charles, Z.; Jaggi, M.; Smith, V.; Stich, S.; Suresh, A.T. Mime: Mimicking Centralized SGD in Distributed Learning. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Online, 6–12 December 2020. [Google Scholar]
  35. Nesterov, Y. A Method of Solving a Convex Programming Problem with Convergence Rate O(1/k²). Sov. Math. Dokl. 1983, 27, 372–376. [Google Scholar]
  36. Polyak, B.T. Some Methods of Speeding Up the Convergence of Iteration Methods. USSR Comput. Math. Math. Phys. 1964, 4, 1–17. [Google Scholar] [CrossRef]
  37. Geyer, R.C.; Klein, T.; Nabi, M. Differentially Private Federated Learning: A Client Level Perspective. arXiv 2017, arXiv:1712.07557. [Google Scholar]
  38. Geiping, J.; Bauermeister, H.; Dröge, H.; Moeller, M. Inverting Gradients—How Easy Is It to Break Privacy in Federated Learning? In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Online, 6–12 December 2020. [Google Scholar]
  39. Nasr, M.; Shokri, R.; Houmansadr, A. Comprehensive Privacy Analysis of Deep Learning: Stand-alone and Federated Learning under Passive and Active White-box Inference Attacks. In Proceedings of the 2019 IEEE Symposium on Security and Privacy (S&P), San Francisco, CA, USA, 19–23 May 2019; pp. 739–753. [Google Scholar]
  40. Bagdasaryan, E.; Veit, A.; Hua, Y.; Estrin, D.; Shmatikov, V. How To Backdoor Federated Learning. In Proceedings of the 23rd International Conference on Artificial Intelligence and Statistics (AISTATS), PMLR, Online, 26–28 August 2020; Volume 108, pp. 2938–2948. [Google Scholar]
  41. Blanchard, P.; El Mhamdi, E.M.; Guerraoui, R.; Stainer, J. Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Long Beach, CA, USA, 4–9 December 2017; pp. 119–129. [Google Scholar]
  42. Yin, D.; Chen, Y.; Ramchandran, K.; Bartlett, P.L. Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates. In Proceedings of the 35th International Conference on Machine Learning (ICML), PMLR, Stockholm, Sweden, 10–15 July 2018; Volume 80, pp. 5650–5659. [Google Scholar]
  43. Pillutla, K.; Kakade, S.; Harchaoui, Z. Robust Aggregation for Federated Learning. arXiv 2019, arXiv:1912.13445. [Google Scholar] [CrossRef]
  44. Alistarh, D.; Grubic, D.; Li, J.; Tomioka, R.; Vojnović, M. QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding. In Proceedings of the Advances in Neural Information Processing Systems (NeurIPS), Long Beach, CA, USA, 4–9 December 2017; pp. 1709–1719. [Google Scholar]
  45. Aji, A.F.; Heafield, K. Sparse Communication for Distributed Gradient Descent. arXiv 2017, arXiv:1704.05021. [Google Scholar] [CrossRef]
  46. Lai, F.; Zhu, X.; Madhyastha, H.V.; Chowdhury, M. Oort: Efficient Federated Learning via Guided Participant Selection. In Proceedings of the 15th USENIX Symposium on Operating Systems Design and Implementation (OSDI), USENIX, Online, 14–16 July 2021; pp. 19–35. [Google Scholar]
  47. Cho, Y.J.; Wang, J.; Joshi, G.; Poor, H.V. Client Selection in Federated Learning: From Theory to Practice. arXiv 2020, arXiv:2010.01243. [Google Scholar]
  48. Li, H.; Funk, M.; Wang, J.; Saeed, A. FAST: Federated Active Learning With Foundation Models for Communication-Efficient Sampling and Training. IEEE Internet Things J. 2025, 12, 31245–31255. [Google Scholar] [CrossRef]
  49. Gu, S.; Kelly, B.; Xiu, D. Empirical Asset Pricing via Machine Learning. Rev. Financ. Stud. 2020, 33, 2223–2273. [Google Scholar] [CrossRef]
  50. Abowd, J.M. The US Census Bureau Adopts Differential Privacy. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD), ACM, London, UK, 19–23 August 2018; p. 2867. [Google Scholar]
  51. Karimi, H.; Nutini, J.; Schmidt, M. Linear Convergence of Gradient and Proximal-Gradient Methods under the Polyak–Łojasiewicz Condition. In Proceedings of the 33rd International Conference on Machine Learning (ICML), PMLR, New York, NY, USA, 19–24 June 2016; Volume 48, pp. 2000–2009. [Google Scholar]
Figure 1. Validation loss vs. rounds (normalized scale). FCFL converges substantially faster.
Figure 2. Privacy–utility trade-off: DA vs. ε for FCFL (higher is better).
Figure 3. Cumulative returns for a sign-based strategy (illustrative scales). FCFL attains higher growth and lower drawdowns.
Figure 4. RtT reduction relative to FedAvg + DP when adding FCFL components cumulatively.
Figure 5. Validation loss trajectories under volatility shock (normalized scale). FCFL yields faster, smoother recovery.
Figure 6. Rounds-to-target (RtT) vs. β₁ at two lookahead values. Lower is better.
Table 1. Notation.
Symbol | Meaning
N, C | Number of clients; client index set
X_i, y_i, n_i | Local feature matrix, targets, and sample count at client i
w ∈ R^p | Model parameters; f_w is the predictor
F_i, F | Local and global objectives
k, τ, S_k | Round index, local steps per round, active client set
Δ_i^(k), α_i^(k) | Local update at round k; server aggregation weight
C, σ | Gradient clipping threshold; noise multiplier for DP
ε, δ | Differential privacy budget parameters
D^(k) | Gradient divergence (client drift) at round k
L, μ, ζ, σ_g | Smoothness, PL, heterogeneity, and gradient variance constants
Table 2. Test performance (mean over 3 seeds). Lower is better for MSE/MAE/MDD; higher is better for DA/S.
Method | MSE (×10⁻⁴) | MAE (×10⁻²) | DA (%) | S | MDD (%)
FedAvg + DP | 1.53 | 0.86 | 54.1 | 0.98 | 7.8
FedProx + DP | 1.44 | 0.83 | 55.2 | 1.07 | 7.0
SCAFFOLD + DP | 1.38 | 0.81 | 55.9 | 1.12 | 6.6
FCFL (ours) | 1.26 | 0.78 | 56.8 | 1.24 | 6.1
Centralized ([26]) | 1.20 | 0.76 | 57.3 | 1.29 | 5.9
Table 3. Effect of non-IID severity (Dirichlet α).
Method | α = 0.1 (hard): RtT (rounds) ↓ | α = 0.1 (hard): MSE (×10⁻⁴) ↓ | α = 0.5 (mild): RtT (rounds) ↓ | α = 0.5 (mild): MSE (×10⁻⁴) ↓
FedAvg + DP | 230 | 1.69 | 150 | 1.46
SCAFFOLD + DP | 145 | 1.48 | 95 | 1.34
FCFL (ours) | 78 | 1.34 | 52 | 1.22
Table 4. Scaling study (total samples fixed); RtT in rounds.
Method | N = 10 | N = 20 | N = 50
FedAvg + DP (RtT) | 120 | 180 | 260
SCAFFOLD + DP (RtT) | 88 | 110 | 165
FCFL (RtT) | 54 | 65 | 92
Table 5. Universe disclosure and splits.
Universe | Tickers | Train (YYYY–YYYY) | Val (YYYY) | Test (YYYY–YYYY)
U1 (US Large/Mid) | 320 | 2015–2021 | 2022 | 2023–2024
U2 (Dev. ex-US) | 240 | 2015–2021 | 2022 | 2023–2024
U3 (Global Small) | 180 | 2015–2021 | 2022 | 2023–2024
Table 6. Benchmarks and cost sensitivity.
Strategy | Cost (bps) | Ann. Ret. (%) | Ann. Vol. (%) | Sharpe [95% CI] | MDD [95% CI] | Avg TO (%)
FCFL | 5 | 12.4 | 12.8 | 0.97 [0.74, 1.20] | 21.6 [17.8, 27.3] | 17.9
FCFL | 10 | 11.9 | 12.8 | 0.93 [0.71, 1.16] | 22.1 [18.3, 28.0] | 17.9
FCFL | 15 | 11.4 | 12.9 | 0.89 [0.67, 1.11] | 22.8 [18.9, 28.7] | 18.0
FCFL | 20 | 10.9 | 12.9 | 0.85 [0.64, 1.07] | 23.4 [19.5, 29.5] | 18.1
BH | 0 | 8.1 | 16.5 | 0.49 [0.30, 0.68] | 35.7 [29.1, 43.8] | 0.1
RP | 5 | 9.6 | 9.9 | 0.97 [0.74, 1.18] | 20.4 [16.9, 25.7] | 12.1
RP | 10 | 9.2 | 9.9 | 0.93 [0.70, 1.14] | 20.8 [17.2, 26.1] | 12.1
Table 7. Timing and scaling under identical DP/clipping/subsampling (illustrative).
Setting | N | α | RtT (mean ± std) | Time/round (client + server, s)
F10 | 10 | 1.0 | 96 ± 8 | 12.4 ± 0.6
F20 | 20 | 0.3 | 129 ± 12 | 19.2 ± 0.9
F40 | 40 | 0.1 | 174 ± 16 | 31.5 ± 1.4
Table 8. Round-by-round Rényi DP (RDP) accounting and trust diagnostics. RDP orders α ∈ {2, 4, 8, 16, 32, 64, 128}; δ = 10⁻⁵. Per-round privacy loss ε_k is the tightest bound across α; cumulative ε_cum = Σ_{j ≤ k} ε_j. Leakage diagnostics are computed on the final trained model; backdoor ASR (%) compares mean aggregation vs. coordinate-median vs. Krum.
k | q_k | σ_k | ε_k @ δ = 10⁻⁵ | ε_cum | MIA AUC [95% CI] | Grad-Inv AUC [95% CI] | Backdoor ASR (%): Mean/CoordMed/Krum
1 | 0.25 | 1.20 | 0.053 | 0.053 | – | – | –
2 | 0.25 | 1.10 | 0.047 | 0.100 | – | – | –
3 | 0.25 | 1.00 | 0.041 | 0.141 | – | – | –
4 | 0.25 | 0.95 | 0.038 | 0.179 | – | – | –
5 | 0.25 | 0.90 | 0.036 | 0.215 | – | – | –
150 | 0.25 | 0.90–1.20 | – | 1.28 | 0.52 [0.49, 0.55] | 0.51 [0.48, 0.54] | 22.3/5.1/4.4
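The accounting in Table 8 sums per-round Rényi losses and reports the tightest (ε, δ) conversion across orders. A minimal sketch of such an accountant is shown below; for simplicity it uses the plain Gaussian-mechanism RDP bound α/(2σ²) and therefore ignores subsampling amplification, so it is conservative relative to the subsampled analysis used for the reported numbers, and the noise schedule is illustrative.

```python
import numpy as np

ORDERS = np.array([2, 4, 8, 16, 32, 64, 128], dtype=float)

def rdp_gaussian(sigma, orders=ORDERS):
    """RDP of the Gaussian mechanism (sensitivity 1) at Renyi orders alpha:
    eps_alpha = alpha / (2 sigma^2). No subsampling amplification, so this
    is a conservative stand-in for the subsampled accountant."""
    return orders / (2.0 * sigma ** 2)

def rdp_to_dp(rdp_total, orders=ORDERS, delta=1e-5):
    """Convert accumulated RDP to (epsilon, delta)-DP via the standard bound,
    reporting the tightest order."""
    return float((rdp_total + np.log(1.0 / delta) / (orders - 1.0)).min())

# Illustrative schedule: accumulate RDP across rounds with varying noise multipliers.
sigmas = [1.20, 1.10, 1.00, 0.95, 0.90]
rdp_total = np.zeros_like(ORDERS)
for s in sigmas:
    rdp_total += rdp_gaussian(s)
print(rdp_to_dp(rdp_total))  # cumulative (epsilon, delta = 1e-5) after these rounds
```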
Table 9. Realized (ε, δ) at training end (δ = 10⁻⁵ fixed).
Seed | ε (FCFL) | ε (SCAFFOLD + DP) | ε (FedAvg + DP)
2025 | 1.02 | 1.01 | 1.00
2026 | 1.00 | 1.00 | 0.99
2027 | 1.03 | 1.02 | 1.00
Table 10. Per-round normalized time (server + average client). Lower is better.
Method | Server time | Client time
FedAvg + DP | 1.00 | 1.00
SCAFFOLD + DP | 1.05 | 1.02
FCFL (ours) | 1.07 | 1.01
Table 11. Performance on the stress window (mean over 3 seeds). Lower is better for MSE/MDD; higher is better for DA/S.
Method | MSE (×10⁻⁴) | MAE (×10⁻²) | DA (%) | S | MDD (%)
FedAvg + DP | 1.62 | 0.91 | 53.2 | 0.92 | 9.8
FedProx + DP | 1.54 | 0.88 | 54.0 | 1.00 | 9.0
SCAFFOLD + DP | 1.50 | 0.87 | 55.1 | 1.04 | 8.6
FCFL (ours) | 1.45 | 0.84 | 55.9 | 1.08 | 8.1
Table 12. Convergence to the target loss L_shock (lower is better).
Method | RtT (rounds) ↓ | Recovery rounds to pre-shock loss (±5%) ↓
FedAvg + DP | 205 | 60
FedProx + DP | 165 | 45
SCAFFOLD + DP | 128 | 38
FCFL | 96 | 22
Table 13. Sensitivity of FCFL to (β₁, η_m) (mean over 3 seeds); the default (β₁ = 0.90, η_m = 0.80) performs best.
β₁ | η_m | RtT ↓ | MSE (×10⁻⁴) ↓ | DA (%) ↑ | Notes
0.70 | 0.60 | 78 | 1.30 | 56.1 | slower momentum
0.70 | 0.80 | 72 | 1.29 | 56.3 | improved lookahead
0.90 | 0.60 | 69 | 1.28 | 56.6 | strong momentum
0.90 | 0.80 | 65 | 1.26 | 56.8 | default (fastest)
0.95 | 0.60 | 71 | 1.29 | 56.5 | mild overshoot damped
0.95 | 0.80 | 67 | 1.27 | 56.7 | slightly higher jitter