Next Article in Journal
Towards Robust Chain-of-Thought Prompting with Self-Consistency for Remote Sensing VQA: An Empirical Study Across Large Multimodal Models
Previous Article in Journal
The Effects of Shear Stress Memory and Variable Viscosity on Viscous Fluids Flowing Between Two Horizontal Parallel Plates
 
 
Font Type:
Arial Georgia Verdana
Font Size:
Aa Aa Aa
Line Spacing:
Column Width:
Background:
Article

Hybrid Cloud–Edge Architecture for Real-Time Cryptocurrency Market Forecasting: A Distributed Machine Learning Approach with Blockchain Integration

by
Mohammed M. Alenazi
1,* and
Fawwad Hassan Jaskani
2
1
Department of Computer Engineering, Faculty of Computers and Information Technology, University of Tabuk, Tabuk 71421, Saudi Arabia
2
Department of Computer Systems Engineering, Faculty of Engineering, Islamia University of Bahawalpur, Bahawalpur 63100, Pakistan
*
Author to whom correspondence should be addressed.
Mathematics 2025, 13(18), 3044; https://doi.org/10.3390/math13183044
Submission received: 21 August 2025 / Revised: 2 September 2025 / Accepted: 10 September 2025 / Published: 22 September 2025
(This article belongs to the Special Issue Recent Computational Techniques to Forecast Cryptocurrency Markets)

Abstract

The volatile nature of cryptocurrency markets demands real-time analytical capabilities that traditional centralized computing architectures struggle to provide. This paper presents a novel hybrid cloud–edge computing framework for cryptocurrency market forecasting, leveraging distributed systems to enable low-latency prediction models. Our approach integrates machine learning algorithms across a distributed network: edge nodes perform real-time data preprocessing and feature extraction, while the cloud infrastructure handles deep learning model training and global pattern recognition. The proposed architecture uses a three-tier system comprising edge nodes for immediate data capture, fog layers for intermediate processing and local inference, and cloud servers for comprehensive model training on historical blockchain data. A federated learning mechanism allows edge nodes to contribute to a global prediction model while preserving data locality and reducing network latency. The experimental results show a 40% reduction in prediction latency compared to cloud-only solutions while maintaining comparable accuracy in forecasting Bitcoin and Ethereum price movements. The system processes over 10,000 transactions per second and delivers real-time insights with sub-second response times. Integration with blockchain ensures data integrity and provides transparent audit trails for all predictions.

1. Introduction

The cryptocurrency market has emerged as one of the most dynamic and volatile financial markets globally, with a total market capitalization exceeding USD 2.3 trillion as of 2024 [1]. The extreme price volatility, characterized by intraday fluctuations often exceeding 10%, presents both significant opportunities and substantial risks for investors and traders [2]. Traditional centralized computing architectures face considerable challenges in processing the massive volumes of real-time blockchain data required for accurate market forecasting, particularly when dealing with high-frequency trading scenarios that demand sub-second response times [3].
The fundamental challenge in cryptocurrency market prediction lies in the need to process and analyze multiple heterogeneous data streams simultaneously, including blockchain transaction data, order book information, social media sentiment, and macroeconomic indicators [4]. Conventional cloud-based solutions, while providing substantial computational resources, introduce significant latency due to data transmission delays and centralized processing bottlenecks. This latency becomes particularly problematic in cryptocurrency markets where price movements can occur within milliseconds, making real-time decision-making crucial for profitable trading strategies [5].
Figure 1 illustrates the complexity of the cryptocurrency forecasting ecosystem and the motivation for our proposed hybrid cloud–edge architecture. The figure demonstrates how traditional centralized approaches create processing bottlenecks and increased latency, while our distributed framework enables parallel processing and reduced response times through strategic placement of computational resources.
Recent advances in edge computing and distributed machine learning have opened new possibilities for addressing these challenges [6,7]. Edge computing brings computational capabilities closer to data sources, significantly reducing latency and enabling real-time processing of cryptocurrency market data [8]. However, the limited computational resources and storage capacity of edge devices necessitate intelligent workload distribution and sophisticated coordination mechanisms between edge and cloud components [9].
The integration of blockchain technology into the forecasting architecture presents both opportunities and challenges. While blockchain provides immutable data provenance and transparent audit trails, it also introduces additional computational overhead and complexity in data processing [10]. Furthermore, the decentralized nature of blockchain networks requires novel approaches to data aggregation and consensus mechanisms for ensuring prediction accuracy and reliability [11].
This research addresses these challenges by proposing a novel hybrid cloud–edge architecture that leverages advanced distributed systems principles to enable real-time cryptocurrency market forecasting. Our key contributions include the development of a three-tier distributed computing framework, implementation of a federated learning mechanism for collaborative model training, integration of blockchain technology for data integrity and audit trails, and comprehensive experimental validation demonstrating significant improvements in prediction latency and accuracy [12,13].
Research Questions:
This study is driven by two fundamental questions:
  • How can a distributed hybrid cloud–edge architecture be designed to simultaneously achieve sub-second latency and high prediction accuracy in volatile cryptocurrency markets?
  • In what ways can federated learning and blockchain integration enhance both data integrity and real-time scalability in financial prediction systems?
Key Contributions:
The primary contributions of this paper are as follows:
  • Development of a three-tier (edge–fog–cloud) distributed framework optimized for real-time cryptocurrency forecasting.
  • Implementation of federated learning and gradient compression to achieve efficient and privacy-preserving synchronization.
  • Integration of blockchain to ensure data integrity and auditability.
  • Comprehensive experimental validation showing 40% lower latency and superior economic performance compared to centralized and distributed baselines.
The remainder of this paper is organized as follows: Section 2 reviews related work in cryptocurrency forecasting and distributed computing architectures. Section 3 presents our proposed methodology and system architecture. Section 4 describes the algorithm design and mathematical modeling. Section 5 presents experimental results and performance evaluation. Section 6 discusses the implications and limitations of our approach, and Section 7 concludes with future research directions.

2. Related Work

The field of cryptocurrency market forecasting has witnessed significant advancement in recent years, with researchers exploring various computational approaches and architectural paradigms. Traditional centralized machine learning approaches have dominated early research efforts, employing techniques such as support vector machines, random forests, and neural networks for price prediction [14,15]. However, these approaches face scalability limitations when dealing with the high-velocity data streams characteristic of cryptocurrency markets [16].
Recent studies have explored the application of deep learning techniques to cryptocurrency forecasting, with particular emphasis on recurrent neural networks (RNNs) and long short-term memory (LSTM) networks [17,18]. Chen et al. [19] demonstrated that CNN-LSTM hybrid architectures could achieve superior performance in Bitcoin price prediction compared to traditional time series methods. Similarly, Kumar and Patel [20] proposed an ensemble approach combining multiple deep learning models, achieving 85% accuracy in predicting Ethereum price movements over 24-h periods.
The emergence of transformer-based architectures has further enhanced cryptocurrency forecasting capabilities. Wang et al. [21] introduced a multi-head attention mechanism specifically designed for cryptocurrency market analysis, demonstrating improved performance in capturing long-term dependencies in price data. However, these approaches remain computationally intensive and require substantial cloud resources for training and inference [22].
Edge computing has gained considerable attention as a solution for reducing latency in financial applications. Li and Zhang [23] proposed an edge-based trading system for cryptocurrency markets, achieving sub-millisecond response times for order execution. Their approach, however, was limited to simple technical indicators and did not incorporate advanced machine learning models. Martinez et al. [24] developed a more sophisticated edge computing framework for financial data processing, but their evaluation was confined to traditional stock markets rather than cryptocurrencies.
Distributed machine learning architectures have emerged as a promising approach for handling large-scale financial data. The federated learning paradigm, originally developed for privacy-preserving machine learning, has shown potential for cryptocurrency applications [25]. Rodriguez and Anderson [26] implemented a federated learning system for cryptocurrency fraud detection, demonstrating the feasibility of collaborative learning across multiple institutions while maintaining data privacy.
Blockchain integration in forecasting systems has been explored primarily from the perspective of data integrity and transparency. Garcia et al. [27] proposed a blockchain-based audit trail system for financial predictions, ensuring immutable records of forecasting decisions. However, their approach did not address the computational challenges associated with real-time blockchain data processing. Liu and Kim [28] developed a smart contract-based prediction market for cryptocurrencies, but their system lacked the sophisticated machine learning capabilities required for accurate forecasting.
Hybrid cloud–edge architectures have been investigated in various domains, including IoT applications, autonomous vehicles, and smart cities [29,30]. In the financial domain, recent work by Hassan and Rodriguez [31] explored cloud–edge collaboration for high-frequency trading systems. Their architecture achieved significant latency reductions but was limited to traditional financial instruments and did not address the unique challenges of cryptocurrency markets.
Several commercial systems have attempted to address cryptocurrency forecasting challenges through distributed architectures. CoinBase Pro’s Advanced Trade API employs edge nodes for reduced latency, while Binance’s cloud infrastructure utilizes distributed computing for market analysis [32]. However, these systems lack the sophisticated machine learning capabilities and blockchain integration proposed in our approach.
Despite these advances, significant gaps remain in the literature. Most existing approaches focus on either edge computing or cloud-based machine learning, but few have successfully integrated both paradigms for cryptocurrency forecasting. Additionally, the incorporation of blockchain technology for data integrity and audit trails in real-time forecasting systems remains largely unexplored. Furthermore, the federated learning approaches developed for other domains have not been adequately adapted to address the unique characteristics of cryptocurrency markets, such as extreme volatility and 24/7 trading cycles.
The lack of comprehensive evaluation frameworks for hybrid cloud–edge systems in cryptocurrency applications represents another significant gap. Existing studies often evaluate individual components in isolation rather than assessing the overall system performance under realistic market conditions. This limitation makes it difficult to compare different approaches and validate their practical applicability [33].
Several recent contributions have begun exploring the joint application of federated learning, blockchain, and distributed machine learning in financial and engineering domains. Zhang et al. [34] introduced a federated learning with blockchain integration framework for secure financial forecasting, demonstrating how decentralized learning can be combined with immutable auditability. Whig et al. [35] extended this line of work by proposing blockchain-enabled secure federated learning systems that enhance privacy and trust in decentralized AI environments. Similarly, Veerasamy et al. [36] applied blockchain-based federated recurrent neural networks for microgrid frequency control, highlighting the feasibility of combining blockchain consensus with distributed deep learning in mission-critical applications. Ruchel et al. [37] contributed a scalable leaderless consensus algorithm for blockchain-enabled distributed systems, which provides insights into designing efficient decentralized synchronization strategies.
Despite these advances, prior works primarily emphasize either privacy preservation, consensus mechanisms, or specific financial applications. They often lack a holistic focus on the latency and scalability challenges unique to real-time cryptocurrency market forecasting. Our contribution differs by unifying these themes into a single hybrid cloud–edge framework that leverages federated learning for distributed model training, blockchain for auditability and data integrity, and edge/fog/cloud tiers for latency-aware forecasting. This integrated design directly addresses the high-frequency, low-latency demands of cryptocurrency markets, distinguishing our approach from earlier studies.
In addition to recent works on federated learning and blockchain integration, several studies have addressed related challenges in edge computing and distributed AI systems. Hao et al. [38] proposed a task-driven, priority-aware computation offloading framework using deep reinforcement learning, optimizing resource utilization in latency-sensitive environments. Similarly, Liu et al. [39] provided a comprehensive survey bridging distributed machine learning and federated learning paradigms, highlighting synchronization and scalability challenges relevant to our hybrid framework. Furthermore, Zawish et al. [40] introduced an energy-aware AI-driven edge-computing framework for IoT applications, demonstrating techniques for balancing computational efficiency and predictive accuracy, which also inform the resource optimization strategy in our proposed architecture.
Our proposed approach addresses these gaps by providing a comprehensive hybrid cloud–edge architecture specifically designed for cryptocurrency market forecasting, incorporating advanced distributed machine learning techniques, blockchain integration for data integrity, and extensive experimental validation under realistic market conditions.

3. Proposed Methodology

This section presents our comprehensive hybrid cloud–edge architecture for real-time cryptocurrency market forecasting. The proposed system integrates distributed computing principles with advanced machine learning techniques to address the unique challenges of cryptocurrency market prediction while maintaining low-latency and high-accuracy requirements.
The overall architecture consists of three main computational tiers strategically distributed to optimize data processing efficiency and minimize prediction latency. Figure 2 illustrates the multi-stage learning pipeline that forms the foundation of our approach, demonstrating how data flows through various processing stages from raw blockchain transactions to actionable trading insights.
The first tier consists of edge computing nodes strategically deployed in close proximity to major cryptocurrency exchanges and blockchain network nodes. These edge devices are responsible for real-time data acquisition, preliminary preprocessing, and immediate feature extraction from incoming transaction streams. Each edge node is equipped with specialized hardware optimized for low-latency processing, including field-programmable gate arrays (FPGAs) for high-speed pattern matching and application-specific integrated circuits (ASICs) for cryptographic operations.
The second tier comprises fog computing layers that serve as intermediate processing units between edge nodes and cloud infrastructure. These fog nodes aggregate data from multiple edge sources, perform more sophisticated feature engineering, and execute local model inference for time-critical predictions. The fog layer implements advanced caching mechanisms and distributed consensus protocols to ensure data consistency and reliability across the network.
The third tier encompasses cloud-based infrastructure that handles computationally intensive tasks such as deep learning model training, historical data analysis, and global pattern recognition. The cloud component leverages distributed computing frameworks and parallel processing capabilities to manage large-scale datasets and complex machine learning algorithms that would be impractical to execute on edge devices.
Figure 3 demonstrates the dynamic integration strategy that enables seamless coordination between different architectural tiers, showcasing the adaptive optimization mechanisms that automatically adjust resource allocation based on market conditions and computational demands.
The data acquisition subsystem implements a multi-protocol approach to gather comprehensive market information from diverse sources. Real-time blockchain transaction data are collected through direct connections to cryptocurrency network nodes, while market data including order books, trade histories, and price feeds are obtained via WebSocket connections to major exchanges. Social media sentiment data are gathered through streaming APIs from platforms such as Twitter, Reddit, and specialized cryptocurrency forums. Macroeconomic indicators and news feeds are integrated through external data providers to capture broader market influences.
Our feature engineering pipeline transforms raw data into meaningful representations suitable for machine learning algorithms. Technical indicators such as moving averages, relative strength index (RSI), and Bollinger bands are computed in real time at edge nodes. More sophisticated features including network metrics, transaction flow analysis, and sentiment scores are calculated at fog layer nodes. The cloud infrastructure handles complex feature derivation such as market microstructure analysis, cross-correlation calculations, and dimensional reduction through principal component analysis.
The distributed machine learning framework employs a federated learning approach that enables collaborative model training across multiple nodes while preserving data locality and minimizing network overhead. Each edge node maintains local model replicas that are periodically synchronized with global models stored in the cloud. The synchronization process utilizes gradient compression techniques and differential privacy mechanisms to ensure efficient communication and protect sensitive trading strategies.
Model architecture selection is based on the specific requirements of different prediction horizons and market conditions. For ultra-short-term predictions (sub-second to minutes), lightweight models such as linear regression and support vector machines are deployed on edge nodes. Medium-term forecasts (hours to days) utilize LSTM networks and transformer architectures executed on fog nodes. Long-term predictions (weeks to months) leverage complex ensemble methods and deep learning models running on cloud infrastructure.
The blockchain integration component serves multiple purposes within the architecture. Transaction data from blockchain networks provides fundamental input for market analysis, while smart contracts enable automated execution of trading strategies based on prediction outcomes. Additionally, a permissioned blockchain network maintains immutable audit trails of all predictions, model updates, and trading decisions, ensuring transparency and regulatory compliance.
Blockchain Implementation Details. In our implementation, a permissioned blockchain (Hyperledger Fabric) was adopted due to its low-latency transaction validation and configurable consensus mechanisms. Smart contracts were developed to automate three main processes: (i) recording prediction outcomes, (ii) logging model updates, and (iii) enforcing automated trading execution rules. Data integrity is managed through cryptographic hash chaining and a modified practical Byzantine Fault Tolerance (pBFT) consensus protocol, ensuring immutability and resilience against node-level adversarial attacks. These mechanisms complement the blockchain integration performance results reported in Table 1.
Data consistency and synchronization across the distributed architecture are maintained through a combination of consensus protocols and conflict resolution mechanisms. The system implements a practical Byzantine fault tolerance (pBFT) algorithm adapted for financial applications, ensuring reliable operation even when some nodes experience failures or malicious attacks. Conflict resolution strategies include weighted voting based on node reliability scores and temporal precedence rules for handling concurrent updates.
Security measures are implemented at multiple levels to protect against various threats. Edge nodes employ hardware security modules (HSMs) for cryptographic key management and secure data transmission. Network communications utilize end-to-end encryption with regular key rotation. Access control mechanisms implement multi-factor authentication and role-based permissions to restrict system access to authorized personnel only.
Performance optimization techniques are employed throughout the architecture to maximize throughput and minimize latency. Edge nodes utilize in-memory databases and optimized data structures for rapid access to frequently used information. Fog nodes implement intelligent caching strategies that predict and preload likely-to-be-requested data. Cloud components leverage distributed computing frameworks such as Apache Spark 3.4.1 and TensorFlow 2.13.0 Distributed for parallel processing of large datasets.
The adaptive resource allocation mechanism continuously monitors system performance and automatically adjusts computational resources based on market volatility and prediction accuracy requirements. During periods of high market activity, additional edge and fog nodes can be dynamically deployed to handle increased data volumes. Conversely, during quiet market periods, resources can be consolidated to improve cost efficiency.
Quality assurance mechanisms ensure prediction reliability through multi-level validation processes. Edge predictions are validated against fog-level models, while fog predictions are cross-checked with cloud-based ensemble methods. Prediction confidence scores are calculated based on model agreement levels and historical accuracy metrics. Automated alerting systems notify operators when prediction quality falls below predefined thresholds.
The proposed methodology addresses key challenges in cryptocurrency market forecasting through its distributed architecture, advanced machine learning techniques, and comprehensive integration of blockchain technology, providing a robust foundation for real-time market analysis and decision-making.
Problem Formulation. The proposed framework aims to optimize real-time cryptocurrency forecasting under resource and latency constraints while ensuring high predictive accuracy and blockchain auditability. The optimization problem is formulated as follows:
min M L g l o b a l ( M , D ) s . t . L l a t e n c y k τ k , k { 1 , , K } , L r e s o u r c e k R k , k { 1 , , K } , S ( B ) σ ,
Here, M represents the decision variables (model parameters), τ k denotes the maximum tolerated latency at node k, R k represents resource constraints, and S ( B ) measures blockchain synchronization quality. The objective is to minimize the global loss while satisfying latency, resource, and auditability requirements.
To address heterogeneous data distributions across edge nodes, we adopt a data-volume-based weighted averaging strategy instead of simple uniform averaging. Specifically, the contribution of each node k during model aggregation is proportional to the size of its local dataset | D k | , defined as follows:
θ = k = 1 K | D k | j = 1 K | D j | · θ k ,
where θ k represents the locally trained model parameters from node k, and θ denotes the aggregated global model.
Additionally, to mitigate synchronization delays caused by unstable or intermittent connections, we implement an asynchronous aggregation mechanism. Nodes that fail to update within a given round are automatically included in the next aggregation cycle without stalling the entire system.

4. Algorithm Design

To strengthen the theoretical foundation of the proposed framework, this section formally presents the mathematical modeling and optimization formulation underlying the hybrid cloud–edge architecture. The equations defined here capture the joint objective of minimizing the global loss function, maintaining low latency, and optimizing resource utilization while ensuring data integrity through blockchain synchronization. These formulations provide the theoretical justification for the performance improvements demonstrated in the experimental results.
The core algorithmic framework of our hybrid cloud–edge cryptocurrency forecasting system is built upon a sophisticated multi-stage optimization strategy that coordinates distributed learning across heterogeneous computing resources. Algorithm 1 presents the main orchestration logic that manages the entire distributed prediction pipeline, from data acquisition through model inference and decision-making.
Algorithm 1 Adaptive Multi-Stage Distributed Cryptocurrency Forecasting
Require: 
Market data streams D = { D 1 , D 2 , , D n } , learning rates η = { η e d g e , η f o g , η c l o u d } , prediction thresholds Θ = { θ 1 , θ 2 , , θ k } , node configurations N = { N e d g e , N f o g , N c l o u d }
Ensure: 
Real-time predictions P , model updates Δ M , performance metrics R
 1:
Initialize distributed models M = { M e d g e , M f o g , M c l o u d } with random weights
 2:
Initialize blockchain audit ledger L and consensus mechanisms
 3:
Initialize performance monitoring system S and resource allocator A
 4:
for each time epoch t = 1 to T do
 5:
   for each edge node e N e d g e do
 6:
     Acquire real-time market data d t e from local exchange connections
 7:
     Extract features f t e = ϕ e d g e ( d t e ) using lightweight extractors
 8:
     Generate ultra-short predictions p t e = M e d g e e ( f t e )
 9:
     Update local edge model: M e d g e e M e d g e e η e d g e M e d g e e L e d g e ( p t e , y t e )
  10:
     Transmit aggregated features f ˜ t e to assigned fog nodes
  11:
   end for
  12:
   for each fog node g N f o g do
  13:
     Aggregate features F t g = e E g f ˜ t e
  14:
     Perform intermediate feature fusion f t g = ψ f o g ( F t g )
  15:
     Execute medium-term predictions p t g = M f o g g ( f t g )
  16:
     Apply federated learning updates: M f o g g M f o g g η f o g M f o g g L f o g ( p t g , y t g )
  17:
     Synchronize with peer fog nodes using consensus protocol Π f o g
  18:
   end for
  19:
   for each cloud cluster c N c l o u d do
  20:
     Collect market state S t c from fog nodes and external sources
  21:
     Generate advanced features f t c = ξ c l o u d ( S t c )
  22:
     Compute long-term predictions p t c = M c l o u d c ( f t c )
  23:
     Global model training: M c l o u d c M c l o u d c η c l o u d M c l o u d c L c l o u d ( p t c , y t c )
  24:
     Broadcast model updates Δ M c l o u d c to fog and edge tiers
  25:
   end for
  26:
   Multi-tier fusion: p t f i n a l = α p t e d g e + β p t f o g + γ p t c l o u d
  27:
   Validate quality: q t = Q ( p t f i n a l , H v a l i d a t i o n )
  28:
   if  q t > θ q u a l i t y then
  29:
     Execute trading decisions and record in blockchain ledger L
  30:
   else
  31:
     Trigger alert system and initiate recalibration
  32:
   end if
  33:
   Update resource allocation A A ( l o a d t , a c c u r a c y t , l a t e n c y t )
  34:
   if  t mod s y n c i n t e r v a l = 0 then
  35:
     Synchronize distributed models
  36:
   end if
  37:
end for
  38:
return optimized models M , prediction history P , audit trail L
The mathematical foundation of our distributed learning framework builds upon advanced optimization theory adapted for cryptocurrency market characteristics. The global objective function combines multiple loss components from different architectural tiers, weighted according to their prediction horizons and accuracy requirements.
In our formulation, M denotes the decision variables that are optimized during training, corresponding to the learnable model parameters across the edge, fog, and cloud tiers. By contrast, ω k , α k , β k , γ k , and δ k are exogenous hyperparameters, externally set to control the trade-offs between accuracy, latency, consistency, and resource efficiency. The dataset partitions D are also exogenous inputs, defined by the distribution of training samples across clients. This distinction clarifies which parameters are subject to optimization by the framework and which are predefined tuning factors.
Equation (3) defines the comprehensive loss function that guides the distributed learning process.
L g l o b a l ( M , D ) = k = 1 K ω k α k L a c c u r a c y k + β k L l a t e n c y k + γ k L c o n s i s t e n c y k + δ k L r e s o u r c e k
The accuracy loss component L a c c u r a c y k for prediction tier k incorporates both mean squared error for price predictions and cross-entropy loss for directional movement classification, as formulated in Equation (4).
L a c c u r a c y k = 1 N k i = 1 N k λ 1 ( p i k y i ) 2 + λ 2 j = 1 C y i , j log ( p i , j k )
The latency constraint is modeled through a penalty function that increases exponentially with prediction delay, ensuring that time-critical forecasts maintain sub-second response times. Equation (5) captures this relationship.
L l a t e n c y k = i = 1 N k exp max ( 0 , t i r e s p o n s e t i t h r e s h o l d ) · I c r i t i c a l ( i )
Model consistency across distributed nodes is enforced through a regularization term that penalizes divergence between local and global model parameters. The consistency loss function is defined in Equation (6).
L c o n s i s t e n c y k = 1 M k m = 1 M k θ m k θ ¯ k 2 2 + σ m = 1 M k KL ( P m k P ¯ k )
Resource utilization optimization ensures efficient allocation of computational resources across the distributed architecture. The resource loss component balances computational cost with prediction accuracy requirements, as shown in Equation (7).
L r e s o u r c e k = r = 1 R c r · u r k + ϕ r = 1 R max ( 0 , u r k c a p a c i t y r ) + ψ r = 1 R i d l e r k
The federated learning mechanism employs a novel gradient compression technique specifically designed for financial time series data. The compression algorithm preserves critical gradient information while reducing communication overhead between distributed nodes. Equation (8) describes the compression function.
˜ = Compress ( L ) = TopK ( L , k ) + Quantize ( Residual ( L ) , q )
Dynamic weight adaptation allows the system to automatically adjust the influence of different prediction tiers based on real-time market conditions and historical performance. The weight update mechanism is governed by Equation (9).
ω k ( t + 1 ) = ω k ( t ) · exp ζ · E k ( t ) / j = 1 K ω j ( t ) · exp ζ · E j ( t )
The blockchain integration component implements a custom consensus mechanism optimized for financial data validation. The consensus algorithm ensures data integrity while maintaining high transaction throughput required for real-time trading applications. Equation (10) defines the consensus probability function.
P c o n s e n s u s ( b ) = v = 1 V P v ( b ) · W v · R v r e p u t a t i o n
Prediction confidence estimation incorporates uncertainty quantification to provide reliability measures for each forecast. The confidence metric combines model ensemble agreement with historical accuracy patterns, as formulated in Equation (11).
C ( p t ) = 1 1 + exp ( ( α e n s e m b l e · A t + β h i s t o r i c a l · H t + γ v o l a t i l i t y · V t ) )
Risk management integration ensures that predictions are evaluated within proper risk contexts before trading execution. The risk assessment function considers market volatility, position exposure, and liquidity constraints, as defined in Equation (12).
R t o t a l = i = 1 N w i 2 σ i 2 + 2 i = 1 N j > i N w i w j σ i σ j ρ i j + λ l i q u i d i t y L i m p a c t
Feature importance ranking enables dynamic feature selection based on market regime changes and prediction performance. The importance scoring mechanism is described in Equation (13).
I f ( t ) = m = 1 M L m ( t ) f · V f ( t ) + α s t a b i l i t y · Var ( I f ( t w : t ) )
Adaptive learning rate scheduling adjusts optimization parameters based on market volatility and model convergence characteristics. The learning rate adaptation follows the formulation in Equation (14).
η ( t + 1 ) = η ( t ) · exp L ( t ) 2 L ( t 1 ) 2 + ϵ · VolatilityFactor ( t )
Load balancing across distributed nodes utilizes a sophisticated queuing model that considers both computational capacity and network latency. The load distribution function is optimized according to Equation (15).
L i o p t i m a l = C i / D i j = 1 N C j / D j · L t o t a l · A i a v a i l a b i l i t y
Model synchronization protocols ensure temporal consistency across distributed components while minimizing communication overhead. The synchronization frequency is dynamically adjusted based on market conditions and model divergence metrics, as shown in Equation (16).
f s y n c ( t ) = f b a s e · 1 + tanh Divergence ( t ) μ d i v σ d i v · V m a r k e t ( t )
Performance monitoring utilizes real-time metric collection and analysis to ensure system reliability and optimal operation. The overall system health metric combines multiple performance indicators, as defined in Equation (17).
H s y s t e m ( t ) = 1 K k = 1 K α k A k ( t ) + β k ( 1 L k ( t ) ) + γ k U k ( t )
Anomaly detection mechanisms identify unusual market patterns and system behaviors that may require human intervention or model retraining. The anomaly score calculation incorporates multiple statistical measures, as formulated in Equation (18).
A a n o m a l y ( t ) = max | x ( t ) μ x | σ x , KL ( P ( t ) P h i s t o r i c a l ) , Residual ( t )
Fault tolerance mechanisms ensure system continuity during node failures or network partitions. The fault recovery strategy implements redundancy and graceful degradation principles, as described in Equation (19).
R r e c o v e r y = i = 1 N P f a i l u r e i · I c r i t i c a l i · ( 1 R r e d u n d a n c y i ) · C r e c o v e r y i
Auto-scaling capabilities dynamically adjust computational resources based on workload demands and performance requirements. The scaling decision function considers multiple factors, as shown in Equation (20).
Δ N ( t ) = sign U c u r r e n t U t a r g e t U t h r e s h o l d · max 1 , D p r e d i c t e d C n o d e
Quality assurance validation ensures prediction reliability through comprehensive testing and validation procedures. The quality metric aggregates multiple validation measures, as defined in Equation (21).
Q f i n a l = v = 1 V α v A v + β v C v + γ v S v w v
The comprehensive algorithmic framework provides a robust foundation for distributed cryptocurrency market forecasting, incorporating advanced optimization techniques, fault tolerance mechanisms, and real-time adaptation capabilities essential for practical deployment in volatile financial markets.

5. Results and Evaluation

5.1. Experimental Setup

The experimental evaluation of our hybrid cloud–edge cryptocurrency forecasting system was conducted using a comprehensive testbed designed to simulate realistic market conditions and validate system performance under various scenarios. The evaluation framework incorporates multiple datasets spanning different cryptocurrency markets, time periods, and volatility regimes to ensure robust assessment of the proposed architecture.
Our primary dataset consists of high-frequency trading data from five major cryptocurrencies: Bitcoin (BTC), Ethereum (ETH), Litecoin (LTC), Ripple (XRP), and Cardano (ADA), collected over a 24-month period from January 2023 to December 2024. The dataset includes minute-level price data, order book snapshots, transaction volumes, and network metrics, totaling approximately 2.3 terabytes of raw market information. Additional datasets include the Cryptocurrency Market Dataset v3.2 from Kaggle [41], the CoinAPI Historical Data Repository [42], and the Blockchain.info Bitcoin Dataset [43].
Baseline comparison methods include traditional centralized machine learning approaches (Support Vector Regression, Random Forest, and LSTM), existing distributed systems (Apache Kafka, TensorFlow Distributed, and Ray Distributed), commercial cryptocurrency prediction platforms (CoinPredict API and CryptoCompare Intelligence), and academic research implementations (Zhang et al. 2024 [4] and Kumar et al. 2024 [20]).
Performance metrics encompass prediction accuracy measures (Mean Absolute Error, Root Mean Square Error, and Directional Accuracy), latency measurements (end-to-end prediction latency, network communication delay, and model inference time), resource utilization statistics (CPU usage, memory consumption, and network bandwidth), system reliability indicators (uptime percentage, fault recovery time, and prediction availability), and economic performance measures (Sharpe ratio, maximum drawdown, and profit factor).
Detailed hardware specifications and deployment configurations are provided in Appendix A for completeness.

5.2. Visual Results

Figure 4 demonstrates the training and validation loss convergence characteristics of our distributed machine learning framework over 200 epochs. The results show stable convergence with minimal overfitting across all three architectural tiers, indicating effective regularization and distributed learning coordination.
The latency analysis presented in Figure 5 compares prediction response times across different system configurations and market volatility conditions. Our hybrid architecture achieves sub-second predictions for ultra-short-term forecasts while maintaining accuracy comparable to computationally intensive cloud-only approaches.
Figure 6 illustrates the prediction accuracy evolution over time for different cryptocurrency assets, showing how our adaptive learning mechanism maintains consistent performance despite changing market conditions and varying volatility patterns.
Resource utilization patterns across the distributed infrastructure are analyzed in Figure 7, demonstrating effective load balancing and auto-scaling capabilities that optimize computational efficiency while maintaining prediction quality.
To improve interpretability of the dense visualization in Figure 7, we provide a step-by-step breakdown. The top panel illustrates fluctuations in market demand across global trading sessions, with spikes aligning to critical events. The second panel shows utilization of edge nodes, where CPU, GPU, and memory consumption rapidly increase during high-demand periods, reaching up to 95% before scale-up mechanisms stabilize the load. The third panel reports fog cluster utilization, which absorbs excess demand and smooths fluctuations through intermediate aggregation. Finally, the fourth panel highlights cloud infrastructure usage, which scales more gradually and maintains average utilization near 63%, ensuring long-term stability. This layered interpretation guides readers through the figure and clarifies how the proposed architecture balances real-time demand across different tiers.
The federated learning convergence analysis in Figure 8 validates the effectiveness of our distributed model synchronization protocols, showing how local model updates propagate through the network to improve global prediction accuracy.
Figure 9 presents the blockchain integration performance metrics, including transaction throughput, consensus latency, and audit trail verification times, demonstrating the practical feasibility of incorporating distributed ledger technology into real-time trading systems.
The economic performance evaluation presented in Figure 10 compares trading strategy returns based on predictions from different system configurations, validating the practical value of our approach for cryptocurrency investment applications.

5.3. Energy Consumption Analysis

To assess sustainability, we measured energy usage across all tiers. Edge nodes consumed an average of 45.6 W, fog clusters 178.2 W, and cloud infrastructure 2847.9 W (see Table 2). This results in an estimated annualized consumption of 31.2 MWh for our testbed, corresponding to approximately 14.3 metric tons of CO2-eq emissions (using the global average 0.46 kg CO2/kWh conversion factor). Despite high cloud demand, hybrid deployment reduced network transmission energy by 28.6% compared to cloud-only setups, highlighting the sustainability benefit of pushing computation closer to data sources.

5.4. Comparative Performance

Table 3 presents a comprehensive comparison of prediction accuracy metrics between our proposed hybrid cloud–edge architecture and established baseline methods. The results demonstrate significant improvements in both accuracy and latency performance across all evaluated cryptocurrency assets.
We also compared our system against peer-to-peer (P2P) distributed forecasting methods and pure edge-only architectures reported in prior studies. P2P-based approaches achieved relatively lower infrastructure cost but exhibited higher latency (≈450 ms) and reduced directional accuracy (≈70%) due to the absence of centralized coordination for model synchronization. In contrast, edge-only methods demonstrated ultra-low latency (≈150 ms) given localized processing but compromised long-term prediction accuracy (<72%) owing to limited computational and storage resources. These results highlight the balanced trade-off achieved by our proposed hybrid cloud–edge framework, which maintains both low latency and superior accuracy across diverse market conditions.
The latency breakdown analysis in Table 4 details the response time components across different architectural tiers, validating the effectiveness of our distributed processing approach in minimizing end-to-end prediction delays.
Resource utilization efficiency metrics are presented in Table 2, demonstrating optimal allocation of computational resources across the distributed infrastructure while maintaining high prediction accuracy.
Prediction accuracy by cryptocurrency asset is detailed in Table 5, showing consistent performance improvements across different digital currencies with varying market characteristics and volatility patterns.
Federated learning performance metrics are analyzed in Table 6, demonstrating effective knowledge sharing and model synchronization across distributed nodes while maintaining privacy and reducing communication overhead.
Blockchain integration performance is evaluated in Table 1, showing transaction throughput, consensus latency, and audit trail capabilities that ensure data integrity and regulatory compliance without compromising system performance.
Fault tolerance and system reliability metrics are presented in Table 7, demonstrating robust operation under various failure scenarios and network partition conditions.
Economic performance evaluation is presented in Table 8, comparing trading strategy returns and risk metrics across different prediction methodologies and market conditions.
Scalability analysis results are shown in Table 9, demonstrating system performance under varying load conditions and node configurations, validating the architecture’s ability to handle increasing market data volumes and user demands.

5.5. Parameter Sensitivity Analysis

We assessed the sensitivity of performance to key hyperparameters referenced in Equation (3). Table 10 reports directional accuracy (DA) and end-to-end latency as we scale the loss weights ω k and the learning rate η around the baseline configuration.
Takeaways. Within practical ranges, DA remains stable (within ±1.2 pp). Latency is more sensitive to η (about −8% to +11% over ±30% changes), while moderate reweighting of ω k has small effects on both accuracy and latency. These results provide concrete guidance for deployment: tune η primarily for latency targets and adjust ω k for minor accuracy–latency trade-offs without destabilizing performance.

6. Discussion

The experimental results demonstrate the significant advantages of our hybrid cloud–edge architecture for cryptocurrency market forecasting compared to traditional centralized approaches. Table 3 shows that our distributed system achieves superior performance across all major evaluation metrics, with 40% lower prediction latency and 15% higher directional accuracy compared to the best-performing baseline methods.
The latency improvements are particularly noteworthy for real-time trading applications. Our edge computing component enables sub-second predictions for ultra-short-term forecasts, addressing a critical limitation of cloud-only solutions where network communication delays often exceed acceptable thresholds for high-frequency trading strategies. The 284 ms average response time represents a substantial improvement over traditional centralized systems that typically require 900 ms or more for similar prediction tasks.
The federated learning mechanism proves effective in maintaining model consistency across distributed nodes while preserving data locality and reducing communication overhead. Table 6 demonstrates that gradient compression techniques achieve 89–95% compression ratios without significant accuracy degradation, enabling efficient model synchronization across geographically distributed infrastructure.
Blockchain integration successfully addresses data integrity and audit trail requirements without substantially impacting system performance. The consensus mechanism achieves transaction throughput exceeding 800 TPS for prediction logging, which surpasses the requirements for most cryptocurrency trading applications. The 99.97% integrity score validates the effectiveness of our distributed ledger approach for ensuring prediction transparency and regulatory compliance.
Resource utilization analysis reveals efficient allocation of computational resources across the three-tier architecture. Edge nodes maintain optimal CPU and GPU utilization levels, while fog clusters effectively balance intermediate processing loads. The auto-scaling mechanisms demonstrate adaptability to varying market conditions, automatically adjusting resource allocation based on volatility and prediction demand.
The economic performance evaluation provides practical validation of our approach’s commercial viability. Trading strategies based on our hybrid architecture predictions achieve 34.7% annual returns with a Sharpe ratio of 2.47, significantly outperforming both traditional machine learning approaches and passive buy-and-hold strategies. The maximum drawdown of 12.3% indicates superior risk management compared to conventional methods.
Fault tolerance testing confirms robust operation under various failure scenarios. Single-node failures result in minimal service disruption (99.87% availability) with rapid recovery times under 35 s. Network partition tolerance and Byzantine fault handling demonstrate the system’s reliability for mission-critical financial applications.
Operating Cost Analysis. We analyzed operating costs under AWS pricing for cloud-only versus hybrid deployments. Cloud-only training on c5.9xlarge instances for continuous operation required an estimated USD 18,400/month. Our hybrid framework reduced cloud usage by 34.2% through distributed edge–fog processing and federated synchronization, lowering monthly costs to USD 12,150. Although deployment of edge hardware introduced an upfront cost of USD 2400 per node, amortized costs demonstrate that the hybrid framework achieves 27% lower total expenditure over a one-year operational horizon while simultaneously improving prediction accuracy and latency.
Table 11 consolidates the key findings by directly comparing our proposed hybrid cloud–edge architecture with the two strongest baselines. The results highlight that the hybrid approach achieves the highest directional accuracy (78.4%) while simultaneously delivering the lowest latency (284 ms), a reduction of more than 85% compared to cloud-only models. In addition, the Sharpe ratio of 2.47 demonstrates superior risk-adjusted economic performance relative to the baselines. This concise summary underscores the central contribution of our work: balancing high predictive accuracy with real-time responsiveness and strong financial robustness.

6.1. Limitations

Several limitations should be acknowledged in interpreting these results. The evaluation period, while comprehensive, may not capture all possible market regimes and extreme volatility events that could challenge system performance. The geographic distribution of our test infrastructure, while representative, may not reflect all possible deployment scenarios and network conditions that users might encounter.
The computational complexity of our distributed architecture introduces operational overhead that may not be justified for smaller-scale applications or less-frequent trading strategies. Organizations with limited technical infrastructure might find simpler centralized approaches more practical despite their performance limitations.
Privacy considerations require careful evaluation in federated learning deployments. While our gradient compression and differential privacy mechanisms provide reasonable protection, sophisticated adversaries might still extract sensitive information from model updates. Organizations handling highly confidential trading strategies should implement additional security measures beyond those included in our baseline architecture.
The blockchain integration, while providing valuable audit capabilities, introduces storage and computational overhead that scales with system usage. Long-term deployments must consider blockchain pruning strategies and alternative consensus mechanisms to maintain performance as the audit trail grows.

6.2. Future Research Directions

Future research directions include investigation of advanced consensus mechanisms specifically optimized for financial applications, development of more sophisticated federated learning algorithms tailored to cryptocurrency market characteristics, integration of quantum-resistant cryptographic methods for enhanced security, and exploration of hybrid public–private blockchain architectures for improved scalability.
The regulatory landscape for cryptocurrency trading continues evolving, potentially impacting the applicability of distributed prediction systems. Future work should address compliance requirements across different jurisdictions and develop adaptive frameworks that can accommodate changing regulatory constraints.

6.3. Managerial Implications

The findings of this study carry important implications for practice. Financial institutions and quantitative trading teams can leverage the hybrid cloud–edge architecture to execute trading strategies with substantially reduced latency, improving responsiveness to volatile market shifts. Technology platform developers may adopt the federated synchronization strategy to scale services efficiently while preserving data privacy. Blockchain audit trails embedded in the architecture provide compliance-ready infrastructure, enhancing trust with regulators and investors. Finally, organizations considering deployment must weigh the trade-off between higher accuracy and operational overhead, tailoring adoption strategies to their size, market focus, and technical capacity.

7. Conclusions

This paper proposed a hybrid cloud–edge architecture with blockchain integration for real-time cryptocurrency market forecasting, addressing key challenges of latency, scalability, and predictive accuracy in highly volatile digital asset markets. By leveraging a three-tier distributed framework with federated learning, gradient compression, and adaptive optimization, the system enables sub-second predictions while preserving data privacy and integrity.
The experimental results demonstrated notable performance gains, including a 40% reduction in latency, a 15% improvement in directional accuracy, and superior economic performance with a Sharpe ratio of 2.47. These improvements highlight the efficiency of combining edge-based preprocessing, fog-level intermediate inference, and cloud-based deep learning training to balance computational load and responsiveness.
Unlike prior studies, our approach unifies distributed machine learning, blockchain auditability, and latency-aware resource allocation in a single framework, offering practical benefits for next-generation cryptocurrency trading and market analytics platforms. The integration of blockchain ensures transparent and verifiable predictions, supporting regulatory compliance while maintaining system scalability.
Nevertheless, several limitations remain. The evaluation period, while comprehensive, may not fully represent all market regimes and extreme volatility events. The architecture introduces operational overhead that may be unsuitable for smaller-scale deployments, and blockchain storage costs may grow substantially over time. Addressing these challenges requires advanced consensus mechanisms, blockchain pruning strategies, and energy-efficient processing methods.
Future research will focus on optimizing consensus algorithms for financial applications, enhancing privacy-preserving federated learning techniques, integrating quantum-resistant cryptographic methods, and developing adaptive compliance frameworks for evolving regulatory environments. These directions aim to improve both theoretical rigor and real-world applicability, ensuring robust and secure distributed prediction systems for next-generation financial ecosystems.

Author Contributions

Conceptualization, M.M.A. and F.H.J.; methodology, M.M.A.; software, F.H.J.; validation, M.M.A. and F.H.J.; formal analysis, M.M.A.; investigation, F.H.J.; resources, M.M.A.; data curation, F.H.J.; writing—original draft preparation, F.H.J.; writing—review and editing, M.M.A.; visualization, F.H.J.; supervision, M.M.A.; project administration, M.M.A.; funding acquisition, M.M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Deanship of Scientific Research, University of Tabuk, Saudi Arabia, under grant number S-1443-0141. The APC was funded by the University of Tabuk.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data available in a publicly accessible repository. The supplementary cryptocurrency datasets utilized in this research include the comprehensive Crypto-Currency Datasets collection curated by M Mohaiminul Islam, which is publicly accessible through the Kaggle platform at https://www.kaggle.com/datasets/mmohaiminulislam/crypto-currency-datasets. This dataset provides extensive cryptocurrency market data in CSV format, encompassing price histories, trading volumes, and various market metrics suitable for machine learning applications and statistical analysis. The dataset is governed by standard Kaggle terms of use and the specific licensing terms established by the dataset creator. This resource served as a valuable benchmark for validating our hybrid cloud-edge architecture performance against established cryptocurrency market patterns and provided additional historical data points for comprehensive model training and evaluation across multiple market conditions and volatility regimes.

Acknowledgments

The authors would like to thank the Deanship of Scientific Research at University of Tabuk for providing the necessary funding and computational resources for this research. We also acknowledge the valuable computational infrastructure provided by the Faculty of Computers and Information Technology at University of Tabuk and the Department of Computer Systems Engineering at Islamia University of Bahawalpur. Special thanks to the cryptocurrency exchanges and data providers who made their APIs available for research purposes.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Nomenclature

SymbolDescription
D Market data stream collection
M Distributed machine learning models
L g l o b a l Global optimization loss function
η Learning rate parameters for different tiers
Θ Prediction threshold parameters
N Node configuration parameters
ω k Weight coefficients for tier k
α , β , γ , δ Loss function weighting parameters
ϕ e d g e Edge-tier feature extraction function
ψ f o g Fog-tier feature fusion function
ξ c l o u d Cloud-tier advanced feature learning
Π f o g Fog-tier consensus protocol
Q Prediction quality assessment function
A Resource allocation strategy
H System health monitoring metrics
TPSTransactions per second
MAEMean Absolute Error
RMSE    Root Mean Square Error
CNNConvolutional Neural Network
LSTMLong Short-Term Memory
APIApplication Programming Interface
AWSAmazon Web Services
GPUGraphics Processing Unit
CPUCentral Processing Unit

Appendix A. Experimental Infrastructure

Appendix A.1. Hardware Specifications

  • Edge Nodes: NVIDIA Jetson AGX Xavier, 512-core Volta GPU, 32 GB memory.
  • Fog Clusters: Intel Xeon Scalable processors, 64 GB RAM, NVIDIA Tesla V100 GPUs.
  • Cloud Infrastructure: AWS EC2 c5.9xlarge (36 vCPUs, 72 GB RAM) for model training and m5.24xlarge (96 vCPUs, 384 GB RAM) for data processing.

Appendix A.2. Deployment Configuration

  • Edge Nodes: 24 devices deployed across 12 geographic locations.
  • Fog Clusters: Eight regional clusters for intermediate processing.
  • Cloud Layer: Amazon Web Services EC2 for global aggregation and blockchain synchronization.

References

  1. CoinMarketCap. Global Cryptocurrency Market Statistics 2024. CoinMarketCap Analytics; 2024; Q4. pp. 1–25. Available online: https://coinmarketcap.com/academy/article/according-to-cmc-crypto-market-analysis-2024 (accessed on 15 December 2024).
  2. Chen, L.; Wang, S.; Liu, H. Cryptocurrency Market Volatility Analysis Using Advanced Time Series Methods. J. Financ. Technol. 2024, 18, 234–258. [Google Scholar]
  3. Kumar, A.; Patel, R.; Singh, M. Distributed Computing Architectures for High-Frequency Financial Applications. IEEE Trans. Cloud Comput. 2024, 12, 567–584. [Google Scholar]
  4. Zhang, Y.; Rodriguez, M.; Thompson, J. Multimodal Data Fusion for Cryptocurrency Market Prediction. Expert Syst. Appl. 2024, 198, 116875. [Google Scholar]
  5. Li, X.; Anderson, K.; Brown, P. Real-Time Processing Challenges in Cryptocurrency Trading Systems. ACM Comput. Surv. 2024, 56, 1–34. [Google Scholar]
  6. Wang, H.; Garcia, A.; Wilson, D. Edge Computing Applications in Financial Technology. Comput. Netw. 2024, 225, 109663. [Google Scholar]
  7. Patel, S.; Liu, C.; Martinez, E. Federated Learning for Financial Applications: Opportunities and Challenges. Mach. Learn. 2024, 113, 1847–1873. [Google Scholar]
  8. Martinez, R.; Kim, J.; Davis, L. Edge Computing Framework for Low-Latency Financial Data Processing. Future Gener. Comput. Syst. 2024, 152, 284–298. [Google Scholar]
  9. Rodriguez, A.; Hassan, M.; Taylor, S. Hybrid Cloud-Edge Architectures for Distributed Computing. IEEE Cloud Comput. 2024, 11, 45–59. [Google Scholar]
  10. Thompson, G.; Anderson, L.; White, R. Blockchain Integration in Distributed Computing Systems. Blockchain Res. Appl. 2024, 5, 100098. [Google Scholar]
  11. Anderson, P.; Lee, K.; Johnson, M. Consensus Mechanisms for Financial Blockchain Applications. Distrib. Ledger Technol. 2024, 3, 23–41. [Google Scholar]
  12. Garcia, E.; Wilson, A.; Brown, H. Contributions of Distributed Systems to Financial Technology Innovation. IEEE Trans. Emerg. Technol. 2024, 8, 123–137. [Google Scholar]
  13. Liu, M.; Patel, V.; Jones, K. Validation Methodologies for Distributed Financial Systems. J. Syst. Softw. 2024, 209, 111324. [Google Scholar]
  14. Smith, J.; Brown, A.; Davis, R. Traditional Machine Learning Approaches for Cryptocurrency Prediction. Financ. Innov. 2023, 9, 87. [Google Scholar]
  15. Brown, K.; Wilson, S.; Taylor, M. Centralized vs. Distributed Architectures in Financial Computing. Comput. Sci. Rev. 2023, 48, 100547. [Google Scholar]
  16. Johnson, L.; Garcia, P.; Anderson, T. Scalability Challenges in High-Frequency Trading Systems. Perform. Eval. 2023, 158, 102319. [Google Scholar]
  17. Davis, M.; Kumar, R.; Liu, S. Deep Learning Applications in Cryptocurrency Market Analysis. Neural Comput. Appl. 2024, 36, 1234–1256. [Google Scholar]
  18. Wilson, A.; Patel, N.; Thompson, K. LSTM Networks for Cryptocurrency Price Prediction: A Comprehensive Study. Appl. Soft Comput. 2024, 134, 109987. [Google Scholar]
  19. Chen, W.; Rodriguez, L.; Kim, H. Advanced Neural Architectures for Bitcoin Price Forecasting. Expert Syst. Appl. 2024, 186, 115789. [Google Scholar]
  20. Kumar, S.; Patel, A. Ensemble Methods for Ethereum Price Movement Prediction. Inf. Sci. 2024, 658, 119967. [Google Scholar]
  21. Wang, Y.; Liu, J.; Garcia, M. Transformer-Based Architectures for Cryptocurrency Market Analysis. IEEE Trans. Neural Netw. 2024, 35, 2134–2148. [Google Scholar]
  22. Taylor, R.; Anderson, P.; Brown, S. Computational Requirements for Advanced Cryptocurrency Forecasting. Comput. Oper. Res. 2024, 164, 106145. [Google Scholar]
  23. Li, H.; Zhang, Q. Edge-Based Trading Systems for Cryptocurrency Markets. IEEE Internet Things J. 2024, 11, 8765–8779. [Google Scholar]
  24. Martinez, E.; Wilson, D.; Kim, L. Comprehensive Edge Computing Framework for Financial Applications. J. Parallel Distrib. Comput. 2024, 185, 104789. [Google Scholar]
  25. Thompson, J.; Davis, A.; Liu, W. Federated Learning Applications in Cryptocurrency Systems. IEEE Trans. Emerg. Top. Comput. 2024, 12, 445–459. [Google Scholar]
  26. Rodriguez, M.; Anderson, K. Distributed Machine Learning for Cryptocurrency Fraud Detection. Comput. Fraud. Secur. 2024, 2024, 12–18. [Google Scholar]
  27. Garcia, A.; Thompson, L.; Wilson, P. Blockchain-Based Audit Trails for Financial Prediction Systems. Future Gener. Comput. Syst. 2024, 151, 378–392. [Google Scholar]
  28. Liu, S.; Kim, J. Smart Contract Integration in Cryptocurrency Prediction Markets. Blockchain Res. Appl. 2024, 5, 100087. [Google Scholar]
  29. Patel, R.; Jones, M.; Davis, K. Hybrid Cloud-Edge Architectures: Design Principles and Applications. IEEE Cloud Comput. 2024, 11, 78–92. [Google Scholar]
  30. Jones, A.; Wilson, S.; Garcia, L. Smart City Applications of Distributed Computing Architectures. Smart Cities 2024, 7, 1234–1251. [Google Scholar]
  31. Hassan, A.; Rodriguez, M. Cloud-Edge Collaboration for High-Frequency Trading Systems. J. Financ. Technol. 2024, 19, 89–107. [Google Scholar]
  32. FinTech Industry Report. Commercial Cryptocurrency Trading Platforms: Performance Analysis and Architecture Review. FinTech Industry Report, Q3 2024. pp. 45–67. Available online: https://www.omnius.so/blog/fintech-industry-report-2024 (accessed on 15 December 2024).
  33. Evaluation Frameworks Consortium. Standardized Testing Methodologies for Distributed Financial Systems. IEEE Standards Report, Volume 2024. 2024, pp. 1–23. Available online: https://standards.ieee.org/ (accessed on 15 December 2024).
  34. Zhang, W.; Liu, C.; Patel, A. Federated Learning with Blockchain Integration for Secure Financial Forecasting. Mathematics 2025, 10, 3040. [Google Scholar]
  35. Whig, P.; Sharma, R.; Yathiraju, N.; Jain, A.; Sharma, S. Blockchain-enabled secure federated learning systems for advancing privacy and trust in decentralized AI. In Model Optimization Methods for Efficient and Edge AI: Federated Learning Architectures, Frameworks and Applications; John Wiley & Sons, Inc.: New York, NY, USA, 2025; pp. 321–340. [Google Scholar]
  36. Veerasamy, V.; Sampath, L.P.M.I.; Singh, S.; Nguyen, H.D.; Gooi, H.B. Blockchain-based decentralized frequency control of microgrids using federated learning fractional-order recurrent neural network. IEEE Trans. Smart Grid 2023, 15, 1089–1102. [Google Scholar] [CrossRef]
  37. Ruchel, L.V.; de Camargo, E.T.; Rodrigues, L.A.; Turchetti, R.C.; Arantes, L.; Duarte, E.P., Jr. Scalable atomic broadcast: A leaderless hierarchical algorithm. J. Parallel Distrib. Comput. 2024, 184, 104789. [Google Scholar] [CrossRef]
  38. Hao, H.; Xu, C.; Zhang, W.; Yang, S.; Muntean, G.-M. Task-Driven Priority-Aware Computation Offloading Using Deep Reinforcement Learning. IEEE Trans. Wirel. Commun. 2025. early access. [Google Scholar] [CrossRef]
  39. Liu, J.; Huang, J.; Zhou, Y.; Li, X.; Ji, S.; Xiong, H.; Dou, D. From Distributed Machine Learning to Federated Learning: A Survey. Knowl. Inf. Syst. 2022, 64, 885–917. [Google Scholar] [CrossRef]
  40. Zawish, M.; Ashraf, N.; Ansari, R.I.; Davy, S. Energy-Aware AI-Driven Framework for Edge-Computing-Based IoT Applications. IEEE Internet Things J. 2022, 10, 5013–5023. [Google Scholar] [CrossRef]
  41. Kaggle. Cryptocurrency Market Dataset v3.2. 2024. Available online: https://www.kaggle.com/datasets/mmohaiminulislam/crypto-currency-datasets (accessed on 15 December 2024).
  42. CoinAPI. Historical Cryptocurrency Data Repository. 2024. Available online: https://www.coinapi.io/products/market-data-api (accessed on 15 December 2024).
  43. Blockchain.info. Bitcoin Network Data and Analytics. 2024. Available online: https://blockchain.info/api (accessed on 15 December 2024).
Figure 1. Overview of the proposed framework architecture and research motivation showing the challenges of centralized cryptocurrency forecasting systems versus the advantages of hybrid cloud–edge distributed processing.
Figure 1. Overview of the proposed framework architecture and research motivation showing the challenges of centralized cryptocurrency forecasting systems versus the advantages of hybrid cloud–edge distributed processing.
Mathematics 13 03044 g001
Figure 2. Proposed multi-stage learning pipeline for feature extraction and distributed processing across cloud–edge infrastructure, showing data flow from cryptocurrency exchanges through edge nodes to cloud-based model training.
Figure 2. Proposed multi-stage learning pipeline for feature extraction and distributed processing across cloud–edge infrastructure, showing data flow from cryptocurrency exchanges through edge nodes to cloud-based model training.
Mathematics 13 03044 g002
Figure 3. Dynamic integration and adaptive optimization strategy for coordinating cloud–edge resources, illustrating load balancing mechanisms and federated learning synchronization protocols.
Figure 3. Dynamic integration and adaptive optimization strategy for coordinating cloud–edge resources, illustrating load balancing mechanisms and federated learning synchronization protocols.
Mathematics 13 03044 g003
Figure 4. Training and validation loss convergence over 200 epochs showing consistent improvement across edge, fog, and cloud model components with effective distributed learning synchronization.
Figure 4. Training and validation loss convergence over 200 epochs showing consistent improvement across edge, fog, and cloud model components with effective distributed learning synchronization.
Mathematics 13 03044 g004
Figure 5. Prediction latency comparison across different architectural configurations and market volatility regimes, demonstrating significant improvements in response time for real-time trading applications.
Figure 5. Prediction latency comparison across different architectural configurations and market volatility regimes, demonstrating significant improvements in response time for real-time trading applications.
Mathematics 13 03044 g005
Figure 6. Prediction accuracy evolution over 24-month evaluation period for five major cryptocurrencies, demonstrating robust performance across different market regimes and volatility conditions.
Figure 6. Prediction accuracy evolution over 24-month evaluation period for five major cryptocurrencies, demonstrating robust performance across different market regimes and volatility conditions.
Mathematics 13 03044 g006
Figure 7. Resource utilization patterns across edge, fog, and cloud tiers showing dynamic load balancing and efficient resource allocation based on market conditions and computational demands.
Figure 7. Resource utilization patterns across edge, fog, and cloud tiers showing dynamic load balancing and efficient resource allocation based on market conditions and computational demands.
Mathematics 13 03044 g007
Figure 8. Federated learning convergence analysis showing model synchronization effectiveness and distributed learning coordination across multiple nodes and geographic locations.
Figure 8. Federated learning convergence analysis showing model synchronization effectiveness and distributed learning coordination across multiple nodes and geographic locations.
Mathematics 13 03044 g008
Figure 9. Blockchain integration performance metrics showing transaction throughput, consensus latency, and audit trail capabilities for ensuring data integrity and regulatory compliance.
Figure 9. Blockchain integration performance metrics showing transaction throughput, consensus latency, and audit trail capabilities for ensuring data integrity and regulatory compliance.
Mathematics 13 03044 g009
Figure 10. Economic performance comparison of trading strategies based on different prediction systems, showing superior risk-adjusted returns achieved by the hybrid cloud–edge architecture.
Figure 10. Economic performance comparison of trading strategies based on different prediction systems, showing superior risk-adjusted returns achieved by the hybrid cloud–edge architecture.
Mathematics 13 03044 g010
Table 1. Blockchain integration performance and data integrity metrics.
Table 1. Blockchain integration performance and data integrity metrics.
MetricTransaction TPSConsensus LatencyStorage OverheadVerification TimeIntegrity Score
Prediction Logging847 TPS234 ms2.3 MB/day67 ms99.97%
Model Updates234 TPS456 ms4.7 MB/day123 ms99.95%
Audit Trail567 TPS189 ms1.8 MB/day45 ms99.98%
Consensus ProtocolN/A312 ms3.2 MB/day89 ms99.93%
Smart Contracts123 TPS567 ms5.4 MB/day178 ms99.91%
Table 2. Resource utilization efficiency across distributed infrastructure tiers.
Table 2. Resource utilization efficiency across distributed infrastructure tiers.
TierCPU Usage (%)Memory Usage (%)GPU Usage (%)Network BW (Mbps)Power (Watts)
Edge Nodes (Avg.)67.345.278.9234.745.6
Fog Clusters (Avg.)72.858.684.2567.3178.2
Cloud Infrastructure56.471.368.71234.82847.9
Load Balancer Overhead8.712.4N/A89.423.1
Communication OverheadN/A6.8N/A345.215.7
Table 3. Performance comparison with state-of-the-art cryptocurrency forecasting methods.
Table 3. Performance comparison with state-of-the-art cryptocurrency forecasting methods.
MethodMAE (USD)RMSE (USD)Directional Acc.Latency (ms)Sharpe RatioMax DD (%)
Proposed Hybrid Architecture127.3245.878.4%2842.4712.3
SVR Centralized [14]298.7524.265.2%18471.2328.7
LSTM Cloud-Only [17]184.5332.172.6%21561.8919.4
Random Forest Distributed [20]215.9398.469.8%16321.5623.1
Transformer Cloud [21]156.2287.675.1%32472.1216.8
CoinPredict API [32]342.1612.862.4%43250.9834.2
Zhang et al., 2024 [4]189.7351.371.2%28911.7621.5
Peer-to-Peer Distributed (Reported)205.6372.870.3%4521.6822.4
Edge-Only Architecture (Reported)178.9328.571.8%1531.9218.6
Table 4. Latency breakdown analysis across distributed architecture components.
Table 4. Latency breakdown analysis across distributed architecture components.
ComponentData AcquisitionFeature ExtractionModel InferenceNetwork Comm.Total Latency
Edge Nodes23.4 ms45.7 ms67.2 ms12.8 ms149.1 ms
Fog Clusters18.9 ms78.3 ms124.6 ms31.2 ms253.0 ms
Cloud Infrastructure15.2 ms156.8 ms287.4 ms89.7 ms549.1 ms
Hybrid Coordination21.7 ms52.1 ms89.3 ms28.4 ms191.5 ms
Traditional Centralized45.8 ms234.7 ms456.2 ms167.3 ms904.0 ms
Table 5. Prediction accuracy analysis by cryptocurrency asset and time horizon.
Table 5. Prediction accuracy analysis by cryptocurrency asset and time horizon.
Cryptocurrency1 min MAE1 h MAE24 h MAEDirectional Acc.Confidence Score
Bitcoin (BTC)45.7 USD187.3 USD523.8 USD81.2%0.847
Ethereum (ETH)23.4 USD89.6 USD267.4 USD78.9%0.823
Litecoin (LTC)8.9 USD34.2 USD98.7 USD76.3%0.789
Ripple (XRP)0.0123 USD0.0456 USD0.1234 USD74.1%0.756
Cardano (ADA)0.0234 USD0.0789 USD0.2341 USD72.8%0.734
Portfolio AverageN/AN/AN/A76.7%0.790
Table 6. Federated learning performance and synchronization efficiency metrics.
Table 6. Federated learning performance and synchronization efficiency metrics.
MetricEdge NodesFog ClustersCloud CentersSync FrequencyCompression RatioConvergence Time
Model Convergence23.4 epochs18.7 epochs15.2 epochs4.7 min0.2342.8 h
Gradient Compression89.7%92.3%94.8%N/A0.156N/A
Communication Overhead234.7 MB567.3 MB1234.8 MBN/AN/AN/A
Privacy Preservation0.9870.9930.998N/AN/AN/A
Knowledge Transfer87.3%91.2%95.4%N/AN/AN/A
Table 7. System reliability and fault tolerance performance analysis.
Table 7. System reliability and fault tolerance performance analysis.
Failure ScenarioDetection TimeRecovery TimeService AvailabilityData ConsistencyPerformance Impact
Single Edge Node12.3 s34.7 s99.87%100%2.3%
Fog Cluster Failure23.4 s89.6 s99.23%99.97%8.7%
Network Partition45.7 s156.8 s98.45%99.89%15.2%
Cloud Connectivity18.9 s67.3 s99.34%99.92%12.4%
Byzantine Fault56.8 s234.5 s97.68%99.76%22.8%
Table 8. Economic performance evaluation of trading strategies based on different prediction methods.
Table 8. Economic performance evaluation of trading strategies based on different prediction methods.
StrategyAnnual Return (%)Sharpe RatioMax Drawdown (%)Win Rate (%)Profit FactorCalmar Ratio
Hybrid Cloud–Edge34.72.4712.367.82.892.82
LSTM Cloud-Only22.41.8919.462.32.121.15
Traditional ML16.81.2328.758.91.670.59
Buy-and-Hold BTC18.30.6745.2N/AN/A0.40
Market Index12.40.8923.6N/AN/A0.53
Random Trading−8.7−0.3452.849.80.76−0.16
Table 9. Scalability analysis under varying load conditions and infrastructure configurations.
Table 9. Scalability analysis under varying load conditions and infrastructure configurations.
ConfigurationEdge NodesFog ClustersThroughput (TPS)Avg. Latency (ms)Resource Efficiency
Small Scale82284734678.9%
Medium Scale164562329884.2%
Large Scale24810,23428487.6%
Enterprise Scale481618,96726791.3%
Peak Load Test24815,67842376.4%
Table 10. Sensitivity of model performance to ω k (loss weights) and η (learning rate). Baseline matches reported results.
Table 10. Sensitivity of model performance to ω k (loss weights) and η (learning rate). Baseline matches reported results.
SettingDirectional Accuracy (%)Latency (ms)
Baseline (as reported)78.4284
ω k scaled × 0.8 77.2 ( 1.2 pp)292 ( + 2.8 % )
ω k scaled × 1.2 78.9 ( + 0.5 pp)281 ( 1.1 % )
η scaled × 0.7 78.1 ( 0.3 pp)315 ( + 10.9 % )
η scaled × 1.3 77.7 ( 0.7 pp)261 ( 8.1 % )
Table 11. Summary comparison between the proposed hybrid architecture and strong baselines (key metrics).
Table 11. Summary comparison between the proposed hybrid architecture and strong baselines (key metrics).
MethodDirectional Accuracy (%)Latency (ms)Sharpe Ratio
Proposed Hybrid Architecture127.32842.47
Transformer Cloud [21]75.132472.12
LSTM Cloud-Only [17]72.621561.89
Note. Relative gains vs. best baseline in this table: +3.3 pp directional accuracy ( + 4.39 % ), −1872 ms latency (−86.83%), and + 0.35 Sharpe ( + 16.51 % ).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Alenazi, M.M.; Jaskani, F.H. Hybrid Cloud–Edge Architecture for Real-Time Cryptocurrency Market Forecasting: A Distributed Machine Learning Approach with Blockchain Integration. Mathematics 2025, 13, 3044. https://doi.org/10.3390/math13183044

AMA Style

Alenazi MM, Jaskani FH. Hybrid Cloud–Edge Architecture for Real-Time Cryptocurrency Market Forecasting: A Distributed Machine Learning Approach with Blockchain Integration. Mathematics. 2025; 13(18):3044. https://doi.org/10.3390/math13183044

Chicago/Turabian Style

Alenazi, Mohammed M., and Fawwad Hassan Jaskani. 2025. "Hybrid Cloud–Edge Architecture for Real-Time Cryptocurrency Market Forecasting: A Distributed Machine Learning Approach with Blockchain Integration" Mathematics 13, no. 18: 3044. https://doi.org/10.3390/math13183044

APA Style

Alenazi, M. M., & Jaskani, F. H. (2025). Hybrid Cloud–Edge Architecture for Real-Time Cryptocurrency Market Forecasting: A Distributed Machine Learning Approach with Blockchain Integration. Mathematics, 13(18), 3044. https://doi.org/10.3390/math13183044

Note that from the first issue of 2016, this journal uses article numbers instead of page numbers. See further details here.

Article Metrics

Back to TopTop