Article

Symmetry-Aware Multi-Dimensional Attention Spiking Neural Network with Optimization Techniques for Accurate Workload and Resource Time Series Prediction in Cloud Computing Systems

by Thulasi Karpagam 1,* and Jayashree Kanniappan 2

1 Department of Artificial Intelligence and Data Science, R.M.K College of Engineering and Technology, Chennai 601206, India
2 Department of Artificial Intelligence and Data Science, Panimalar Engineering College, Chennai 600123, India
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(3), 383; https://doi.org/10.3390/sym17030383
Submission received: 12 January 2025 / Revised: 15 February 2025 / Accepted: 27 February 2025 / Published: 3 March 2025

Abstract

Cloud computing offers scalable and adaptable resources on demand and has emerged as an essential technology for contemporary enterprises. Nevertheless, efficiently managing cloud resources remains challenging because of dynamic changes in load requirements. Existing forecasting approaches are unable to handle the intricate temporal symmetries and nonlinear patterns in cloud workload data, leading to degraded prediction accuracy. In this manuscript, a Symmetry-Aware Multi-Dimensional Attention Spiking Neural Network with Optimization Techniques for Accurate Workload and Resource Time Series Prediction in Cloud Computing Systems (MASNN-WL-RTSP-CS) is proposed. Here, the input data from the Google cluster trace dataset were preprocessed using a Multi Window Savitzky–Golay Filter (MWSGF) to remove noise while preserving important data patterns and maintaining structural symmetry in time series trends. Then, the Multi-Dimensional Attention Spiking Neural Network (MASNN) effectively models symmetric patterns in workload fluctuations to predict the workload and resource time series. To enhance accuracy, the Secretary Bird Optimization Algorithm (SBOA) was utilized to optimize the MASNN parameters, ensuring accurate workload and resource time series predictions. Experimental results show that the MASNN-WL-RTSP-CS method achieves 35.66%, 32.73%, and 31.43% lower Root Mean Squared Logarithmic Error (RMSLE), 25.49%, 32.77%, and 28.93% lower Mean Square Error (MSE), and 24.54%, 23.65%, and 23.62% lower Mean Absolute Error (MAE) compared with other approaches, like ICNN-WL-RP-CS, PA-ENN-WLP-CS, and DCRNN-RUP-RP-CCE, respectively. These advances emphasize the utility of MASNN-WL-RTSP-CS in achieving more accurate workload and resource forecasts, thereby facilitating effective cloud resource management.

1. Introduction

Cloud computing is now widely embraced by numerous organizations, establishing a flexible pool of storage, network resources, and computing through the integration of cloud data center networks, software, servers, and storage services [1]. A Cloud Data Center (CDC) dynamically allocates memory, storage, and network bandwidth resources depending on user demands [2]. Major providers like Amazon, Google, Facebook, and Alibaba offer CDCs for users to rent computing resources. Hence, CDC providers face the challenge of efficiently managing a multitude of user tasks while upholding QoS standards, ultimately aiming to enhance profits and minimize costs [3,4,5]. Ensuring higher resource availability and meeting Service-Level Agreement requirements necessitates proactive resource allocation by CDC providers [6]. Optimal resource allocation for workload management hinges on precise workload prediction within CDC servers, a challenge exacerbated by the need for comprehensive insights into workload and resource usage characteristics [7]. The workload exhibits dynamic fluctuations, while resource usage undergoes constant changes throughout task execution [8]. Presently, many providers employ a tiered payment model in which users must preselect resource types and quantities, such as virtual machine instances, prior to accessing the desired services [9]. Non-IT professionals often lack clarity regarding their resource consumption, leading to unnecessary costs and resource wastage, thereby reducing CDC providers’ revenue [10]. Inadequate resource selection by users can result in task delays or incompletions, compromising service quality and user satisfaction, ultimately leading to user loss. Accurate prediction of future workload and resource needs based on historical data can enable CDC providers to optimize resource management, thereby increasing revenue potential [11,12,13]. Additionally, recent advancements that offer improved accuracy and mitigate issues like gradient disappearance are essential [14]. Presently, some methods are utilized for cross-language pronoun prediction, while others are employed for multi-channel speech frequency change estimation; such models capture bi-directional dependence, whereas deep learning handles information across different dimensions [15].
Typical prediction techniques are unable to forecast large-scale data because of the constantly shifting nature of workload and resource utilization, and because the amount of data produced by CDCs during operation grows over time [16,17]. They are also incapable of representing the complex temporal dependencies and nonlinear patterns of cloud workload data. In addition, noise in cloud workload data degrades prediction accuracy, which makes it especially hard to obtain reliable workload predictions [18]. These constraints highlight the demand for a deeper and more computationally efficient predictive paradigm that can improve accuracy while ensuring computational efficiency in cloud computing environments [19,20].
To overcome the deficiencies of traditional prediction techniques on large-scale data, this work develops a model combining MASNN with SBOA. MASNN leverages a Symmetry-Aware Multi-Dimensional Attention Mechanism to effectively capture complex temporal dependencies and nonlinear workload variations. This means that the model can concentrate on the relevant temporal features and dependencies, hence providing more accurate workload and resource predictions. As a solution to the noise problem, MWSGF is applied as a preprocessing step. The proposed SBOA finds the optimal hyperparameters for MASNN and dynamically adjusts the weights and structure of the network to ensure maximum accuracy with minimized computational time.
The major contributions of this investigative work are briefly described below.
The proposed MASNN-WL-RTSP-CS method employs MWSGF to remove noise from input data.
MASNN constructs a predictive model for workload and resource usage time series. This integration adeptly captures symmetric patterns and intricate features within the series, yielding notably high prediction accuracy.
Experiments conducted with real-life workload, RAM, and CPU data illustrate that MASNN surpasses numerous benchmark methods regarding prediction accuracy, especially for extended time series in the Google cluster trace dataset.
The remainder of the paper is structured as follows: Section 2 presents the literature review, Section 3 discusses the materials and procedures employed, Section 4 describes the results and discussion, and Section 5 presents the conclusion.

2. Literature Review

In 2021, Bi, J. et al. [21] presented an integrated DL technique for workload and resource prediction in cloud systems. After minimizing the standard deviation using a logarithmic operation, the workload and resource sequences are smoothed. A robust filter was employed to remove noise interference and outlier points effectively. A comprehensive deep learning methodology was crafted for time series prediction, integrating network architectures such as bi-directional and grid LSTM networks, ensuring superior quality forecasts for both workload and resource time series. It provides low Mean Absolute Percentage Error (MAPE) and high RMSLE.
In 2023, Saxena, D. et al. [22] presented a performance analysis of ML-centered workload prediction methods for cloud. Here, they introduced a comprehensive survey and performance analysis, marking the inaugural systematic exploration of diverse ML-driven cloud workload prediction methods. It categorizes various prediction approaches into five distinct classes, delving into their theoretical underpinnings and mathematical frameworks. It conducts an extensive survey and comparison of the most prominent prediction methods within each machine learning class. It provides high throughput and high MAE.
In 2022, Al-Asaly, M.S. et al. [23] presented a DL-dependent resource usage prediction method for resource provisioning in autonomic CCE. Here, they focused on forecasting future CPU usage demand and devising strategies to address workload fluctuations in subsequent intervals. To tackle the challenges posed by inconsistent, nonlinear workloads in CC systems and to introduce an effective deep learning framework, they proposed the DCRNN method. Unlike existing DL methods, which often struggle with precise real-time forecasting, this model aims to enhance accuracy and minimize discrepancies between predicted and actual workloads. It provides low computation time and high MSE.
In 2022, Al-Sayed, M.M. et al. [24] presented the Workload Time Series Cumulative Prediction Mechanism for Cloud Resources Utilizing the Neural Machine Translation Method. Here, they approached workload sequence prediction as a translation task, thus introducing an Attention Seq2Seq-dependent method for forecasting cloud resources’ workloads. To assess the efficacy of this method, a real-world dataset gathered from a Google cluster of machines was employed. Additionally, to enhance the performance of the presented method, a new method termed cumulative validation was introduced as an alternative to cross-validation. It provides low MAPE and low throughput.
In 2023, Ruan, L. et al. [25] presented workload time series prediction in storage systems: a DL-dependent method. Here, CrystalLP is a storage workload prediction methodology grounded in deep learning principles. It encompasses workload collection, time series estimation, and data preprocessing and post-processing phases. The core of the time series prediction phase relies on LSTM. Comprehensive experimental findings demonstrate that CrystalLP yields performance enhancements when juxtaposed with three traditional time series prediction processes. It attains low MAE and high computation time.
In 2023, Dogani, J. et al. [26] presented multivariate workload and resource prediction in CC utilizing CNN and GRU through an attention mechanism. A hybrid approach was developed to forecast workload for subsequent steps and multivariate time series workloads of host machines in cloud data centers. Initially, statistical analysis was employed to create a training set. Finally, spatial features extracted by CNN were inputted into a Gated Recurrent Unit network and optimized by an attention mechanism to extract temporal correlation features. It provides low MSE and low MAPE.
In 2023, Devi, K.L. et al. [27] presented MAG-D: a multivariate attention network-dependent method for cloud workload estimating. ’MAG-D’ is a Multivariate Attention and GRU-dependent DL methodology for cloud workload estimating in data centers. Extensive testing on Google cluster traces confirms that MAG-DL efficiently captures long-range nonlinear correlations in cloud workload data, improving prediction accuracy when compared to more recent techniques that include GRU, LSTM, CNN, and BiLSTM. It provides low RMSLE and low throughput.
Table 1 presents a comparative survey of deep learning-based approaches for workload and resource prediction in cloud schemes. Bi et al. [21] utilized bi-directional LSTM networks, achieving low MAPE but high RMSLE. Saxena et al. [22] combined evolutionary and quantum neural networks with LSTM-RNN for high throughput, albeit with high MAE. Al-Asaly et al. [23] employed Diffusion Convolutional Recurrent Neural Networks, ensuring low MAPE but yielding high MSE. Al-Sayed et al. [24] approached workload prediction as a translation task using Attention Seq2Seq, offering high normalized correlation and structural similarity index measure (SSIM). Ruan et al. [25] introduced CrystalLP with LSTM for lower MAE but faced higher computation times. Dogani et al. [26] proposed a hybrid CNN-GRU with attention, excelling in low MSE and MAPE, while Devi et al. [27] used GRU, LSTM, CNN, and BiLSTM for low RMSLE and throughput.

3. Proposed Methodology

This section describes the proposed approach, which integrates multi-dimensional attention mechanisms with spiking neural networks to enhance workload prediction accuracy and optimize real-time streaming performance in cloud environments. The process consists of data acquisition, preprocessing, prediction, and optimization. This study initially gathered historical data from the Google cluster trace, organizing information such as task timestamps, task counts, and records of resource usage (CPU, RAM) for all time slots to create the workload and resource utilization time series for experimentation. Subsequently, various methods are employed for data preprocessing. The natural logarithm is first applied to scale down the data, followed by the use of MWSGF to reduce noise. With the preprocessed data, MASNN is used for training and testing on the time series. MASNN, whose parameters are tuned by SBOA, processes the preprocessed input data, followed by a fully connected network for generating the final output. A block diagram of the proposed MASNN-WL-RTSP-CS approach is represented in Figure 1.

3.1. Data Acquisition

The input data were captured from the Google cluster trace dataset [28]. The trace covers 12,000 machines, with 25,462,157 tasks and 672,003 jobs over 29 days in the workload statistics. The workload arrives in the form of jobs, each of which comprises several tasks; all tasks are regarded as Linux programs that need to run on particular machines. In this work, the 29-day duration is divided into 20,880 time slots, each two minutes long. The task count and the resources, such as CPU and RAM consumption records, are obtained by examining the timestamp data of the tasks. Finally, the workload is obtained together with the resource usage time series. The dataset is split into 70% training, 15% testing, and 15% validation.
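To make the slot construction concrete, the following sketch shows one way the per-task trace records could be aggregated into two-minute slots and split chronologically. The column names (timestamp, task_id, cpu, ram) are illustrative assumptions, not the trace’s actual schema.

```python
import pandas as pd

SLOT_SECONDS = 120  # each time slot is two minutes long

def build_time_series(events: pd.DataFrame) -> pd.DataFrame:
    """Aggregate per-task trace records into per-slot workload and
    resource usage series (task count, mean CPU, mean RAM)."""
    events = events.copy()
    events["slot"] = (events["timestamp"] // SLOT_SECONDS).astype(int)
    series = events.groupby("slot").agg(
        task_count=("task_id", "count"),
        cpu_usage=("cpu", "mean"),
        ram_usage=("ram", "mean"),
    )
    # Reindex so empty slots appear as zero-workload entries.
    full_range = range(series.index.min(), series.index.max() + 1)
    return series.reindex(full_range, fill_value=0)

def chronological_split(series: pd.DataFrame):
    """70% training, 15% testing, 15% validation, in time order."""
    n = len(series)
    train = series.iloc[: int(0.70 * n)]
    test = series.iloc[int(0.70 * n): int(0.85 * n)]
    val = series.iloc[int(0.85 * n):]
    return train, test, val
```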

3.2. Preprocessing Under Multi Window Savitzky–Golay Filter

This section discusses preprocessing using MWSGF [29]. Physical machine failures in CDCs and other anomalous occurrences, such as abnormal workload and resource utilization caused by a few aberrant actions, introduce substantial noise into the actual workload and resource utilization time series. The workload and resource consumption time series therefore contain more extreme points than usual. If these are not removed from the data when a prediction model is constructed, the accuracy of the predictions is severely compromised. MWSGF removes noise interference along with extreme points in the data, using adaptive smoothing with different window sizes and polynomial fitting to eliminate noise and suppress outliers. MWSGF approximates the noisy time series with polynomial fitting. The polynomial approximation is given in Equation (1).
q(a) = \sum_{h=0}^{t} x_h a^h
q(a) represents the polynomial function approximating the time series data at point a; h indicates the polynomial term index; t is the highest polynomial degree considered for smoothing; x_h represents the coefficients of the polynomial; and a is the time series data point in the dataset. The error minimization function is given in Equation (2).
\vartheta_t = \sum_{a=-w}^{w} \big(q(a) - d[a]\big)^2 = \sum_{a=-w}^{w} \Big(\sum_{h=0}^{t} x_h a^h - d[a]\Big)^2
\vartheta_t represents the computed smoothing error, d[a] is the observed noisy data at point a, q(a) is the estimated smoothed value at a, and w represents the window size. In Equation (2), the difference between the fitting polynomial and the actual noisy data is minimized. Then, a weight function \varpi_a is introduced that makes it possible to separate the contributions of different data points more discriminatively, reducing the influence of extreme points. The weighted filtering is given in Equation (3).
b(h) = \sum_{a=-w}^{w} \varpi_a \, d[h-a] = \sum_{a=-w}^{w} \varpi_{h-a} \, d[a]
b(h) represents the weighted filtering output and d[h-a] represents the data value at the shifted index h-a, so that Equation (3) takes the form of a discrete convolution. Finally, the least-squares solution is used to compute the polynomial coefficients, as given in Equation (4).
x = (X^T X)^{-1} X^T d
x represents the polynomial coefficients, X is the design matrix built from the time series data points, X^T is the transposed version of X, and d is the vector of observed data values. Equation (4) determines the optimal polynomial coefficients that minimize the squared error between the model and the real data. Thereby, noise interference and extreme points are reduced using MWSGF. Then, the preprocessed data are fed into the prediction phase.
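As a rough illustration of this idea, the sketch below combines standard Savitzky–Golay passes (each of which solves exactly the least-squares polynomial fit of Equations (1), (2), and (4)) at several window lengths. The simple averaging across windows is an illustrative stand-in for the weighted blending of Equation (3), not the authors’ exact scheme.

```python
import numpy as np
from scipy.signal import savgol_filter

def multi_window_sg(d: np.ndarray,
                    windows=(5, 11, 21),
                    polyorder: int = 3) -> np.ndarray:
    """Smooth a noisy series with Savitzky-Golay filters of several
    window lengths and blend the results; each SG pass solves the
    least-squares polynomial fit described above."""
    passes = [savgol_filter(d, window_length=w, polyorder=polyorder)
              for w in windows if w > polyorder]
    return np.mean(passes, axis=0)

# Example: denoise a synthetic workload-like trace.
rng = np.random.default_rng(0)
t = np.linspace(0, 6 * np.pi, 500)
noisy = np.sin(t) + 0.3 * rng.standard_normal(t.size)
smooth = multi_window_sg(noisy)
```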

3.3. Predicting the Workload with Resource Time Series Using the Multi-Dimensional Attention Spiking Neural Network

In this section, workload with resource time series prediction using MASNN [30] is discussed. MASNN integrates attention mechanisms with spiking neural networks (SNNs) to improve the accuracy of time series prediction in cloud computing systems. Standard SNNs struggle with handling multi-dimensional dependencies and symmetric dependencies in time series data. The Symmetry-Aware Multi-Dimensional Attention Mechanism addresses this by dynamically weighting features across multiple time steps, improving long-term dependencies. Compared to traditional deep learning models like RNNs and LSTMs, MASNNs are inherently designed for sequential data processing using event-driven computation.
Figure 2 presents an architecture diagram of MASNN. The architecture for predicting workload and resource time series in cloud systems integrates several components for efficient performance. It starts with feature extraction from historical data, like CPU utilization and network traffic, using time series and statistical analysis methods. The extracted features are processed through multiple convolutional layers, capturing complex temporal dependencies and symmetric patterns in workload fluctuations. A Temporal-wise Attention (TA) layer incorporates structured temporal symmetry, such as attention mechanisms, to focus on relevant time windows. The core of the architecture is an MASNN, which uses biologically inspired neuron models like Conductance-Based (CA) dynamics, along with Synaptic Adaption (SA) for learning. The final output of the MASNN is decoded to predict future workload and resource usage, optimizing cloud resource allocation and management.
Spiking Neural Networks: MASNN employs spiking neurons, the most common neuron model in SNNs being the Leaky Integrate-and-Fire (LIF) model; these mimic biological neurons by transmitting information through discrete spikes rather than continuous activations. The input information is given in Equation (5).
E_{t,0} = p(S_t) = \sum_{\tau = xt}^{x(t+1)-1} E_\tau
E_{t,0} represents the real-valued frames, p signifies the parameter value of the data, S_t indicates the millisecond-level temporal resolution, x is the number of consecutive spike patterns aggregated, E_\tau denotes the valued data at time \tau, and t is a time step.
Multi-Dimensional Attention Mechanism: MASNN integrates a multi-dimensional attention mechanism, which empowers the method to concentrate on pertinent aspects of the input data while disregarding irrelevant details. The membrane dynamics of the large-scale spiking neural network layer, which give MASNN its potential to be scalable and effective in cloud systems across diverse applications, are given in Equation (6).
\eta \frac{dv(t)}{dt} = -v(t) + A(t)
η represents the time constant, v ( t ) is the membrane potential for the postsynaptic neuron, and A ( t ) denotes the input gathered through presynaptic neurons. The original input through a convolution operation is calculated in Equation (7).
D_{n,t} = \mathrm{avgpool}\big(\mathrm{bn}\big(\mathrm{conv}(M_t, E_{n,t-1})\big)\big)
D_{n,t} represents the original number of input data predictions, avgpool is the average pooling, bn denotes the batch normalization, conv(·) the convolution operation, M_t is the weight matrix, and E_{n,t-1} indicates the spike tensor at the previous time step. Spike-timing-dependent plasticity (STDP) is a biologically inspired learning mechanism in which synaptic weight updates depend on the timing of neuron spikes. The recurrent structure of spiking neurons enables them to learn complex time-dependent relationships, as sketched below.
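Since STDP drives the weight updates just described, a minimal pair-based STDP rule is sketched here; the constants a_plus, a_minus, and tau are illustrative assumptions rather than values from the paper.

```python
import numpy as np

def stdp_update(w, t_pre, t_post, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP rule (illustrative): potentiate when the
    presynaptic spike precedes the postsynaptic one, depress
    otherwise; the magnitude decays with the spike-time gap."""
    dt = t_post - t_pre
    if dt > 0:      # pre before post -> strengthen synapse
        w += a_plus * np.exp(-dt / tau)
    else:           # post before pre -> weaken synapse
        w -= a_minus * np.exp(dt / tau)
    return float(np.clip(w, 0.0, 1.0))

# Example: a synapse whose presynaptic neuron fires 5 ms before
# the postsynaptic neuron is potentiated.
w = stdp_update(0.5, t_pre=10.0, t_post=15.0)
```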
Temporal-wise Attention: Its multi-dimensional nature indicates that the attention mechanism operates across various dimensions of the input data, thereby improving its capability to capture intricate relationships at the available level of temporal resolution. Then, the temporal-wise refined feature blocks are calculated using Equation (8).
DNX_t = r_n(D_n) \odot D_n
DNX_t represents the temporal-wise refined feature blocks, r_n is the workload prediction, D_n is the prediction of the number of data, and \odot is the element-wise multiplication operator. MASNN is custom-designed to forecast resource time series in cloud systems, a critical task for optimizing resource allocation and ensuring the seamless operation of cloud infrastructure. By harnessing its spiking neural network architecture and attention mechanism, MASNN accurately predicts future values of workload and resource metrics using historical data. Then, the output of the prediction is given in Equation (9).
ued_{n,t} = r_e(V_{n,t}) \odot V_{n,t}
ued_{n,t} represents the resource time series prediction, r_e denotes the original output of the data, and V_{n,t} signifies the variation in the number of data. Finally, MASNN predicts the workload and resource time series in the cloud system. Because of its convenience, an artificial intelligence-based optimization strategy is incorporated into MASNN: SBOA is employed to tune the MASNN weight parameters.
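The sketch below illustrates, in NumPy, the two ingredients just described: an Euler-discretized LIF update corresponding to Equation (6) and a toy temporal-wise attention that reweights feature blocks by element-wise multiplication in the spirit of Equation (8). It is a minimal illustration of the mechanism under assumed constants, not the authors’ implementation.

```python
import numpy as np

def lif_step(v, inp, tau=10.0, v_th=1.0, dt=1.0):
    """One Euler step of the LIF dynamics of Equation (6):
    tau * dv/dt = -v + A(t). Returns (new potential, spike flags)."""
    v = v + (dt / tau) * (-v + inp)
    spike = (v >= v_th).astype(float)
    v = v * (1.0 - spike)          # reset membrane after a spike
    return v, spike

def temporal_attention(features):
    """Toy temporal-wise attention in the spirit of Equation (8):
    score each time step, softmax the scores, and reweight the
    feature blocks by element-wise multiplication."""
    scores = features.mean(axis=1)                 # (T,)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return features * weights[:, None]             # refined blocks

# Drive a small layer of LIF neurons with attention-refined inputs.
T, N = 50, 8
rng = np.random.default_rng(1)
x = temporal_attention(rng.random((T, N)))
v = np.zeros(N)
spikes = []
for step in range(T):
    v, s = lif_step(v, x[step])
    spikes.append(s)
spike_train = np.stack(spikes)                     # (T, N) binary spikes
```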

3.4. Optimization Utilizing Secretary Bird Optimization Algorithm

SBOA [31] is a new nature-inspired metaheuristic method. It is utilized to optimize the weight parameters D_{n,t} and ued_{n,t} of MASNN and improve its performance. Here, the parameter D_{n,t} is optimized to improve throughput, and ued_{n,t} is optimized to reduce MAE. SBOA is inspired by the unique hunting behavior of the secretary bird and leverages movement and update rules directly modeled on the bird’s predatory tactics. These operators are designed to enhance convergence by adapting the search behavior to the optimization problem. By using multiple specialized operators, SBOA inherently supports multiple exploration paths simultaneously. It efficiently finds the optimal weight parameters by balancing global search and local refinement. This process is repeated until convergence is reached, yielding the optimal parameter weights for MASNN, which are then used in the final prediction task. All steps of this method are presented below.
Step 1: Initialization
Since each secretary bird is a member of the algorithm’s population, SBOA is categorized as a population-dependent metaheuristic method. In this context, the location of each secretary bird within the search space dictates the values of the decision variables, here the MASNN weight parameters; the population is exhibited in Equation (10).
P = \begin{bmatrix} P_{11} & P_{12} & P_{13} & \cdots & P_{1p} \\ P_{21} & P_{22} & P_{23} & \cdots & P_{2p} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ P_{d1} & P_{d2} & P_{d3} & \cdots & P_{dp} \end{bmatrix}
P represents the population matrix of candidate parameter weights for the workload and resource time series prediction in the cloud system, and P_{dp} denotes the p-th parameter weight of the d-th candidate.
Step 2: Random Generation
The input parameters are generated randomly. Depending on the particular hyperparameter conditions they satisfy, the weight parameters D_{n,t} and ued_{n,t} are generated randomly through SBOA.
Step 3: Fitness Function
A random solution is generated at initialization. The fitness function is estimated using a parameter optimization value to optimize the weight parameters of MASNN. It is articulated in Equation (11).
Fitness\ Function = \mathrm{optimizing}\ [D_{n,t},\ ued_{n,t}]
Step 4: Hunting approach of the secretary bird, optimizing D_{n,t}
SBOA, drawing inspiration from the secretary bird’s hunting strategy, assigns each bird as a candidate solution to predict the workload and resource time series in the cloud system. By employing a population-based metaheuristic method, SBOA continually adjusts the positions of the birds to pinpoint the optimal workload and resource time series. This approach fortifies real-time access control measures, leveraging cloud functionalities. It is given in Equation (12).
D_a = \begin{cases} D_a^{new}, & \text{if } D_{n,t} \cdot Z_a^{new,q_1} < Z_a \\ D_a, & \text{else} \end{cases}
D_a represents the new stage of the secretary bird, D_a^{new} denotes the new stage of the secretary bird at the initial stage, Z_a^{new,q_1} signifies the random candidate solutions in the initial-stage iteration, and Z_a denotes the randomly generated array of the dimension. Then, the Levy flight distribution function is calculated using Equation (13).
levy(X) = e \times v_{\varepsilon} \times u^{1/\pi}
levy(X) represents the Levy flight distribution function, e is the fixed constant value, v_\varepsilon is the standard normal distribution, and u^{1/\pi} is the update of the secretary bird’s position. Figure 3 shows the corresponding flowchart.
Step 5: Escape strategy of the secretary bird, optimizing ued_{n,t}
Inspired by the secretary bird, the escape method uses dynamic evasion strategies against adversarial networks to improve cloud system security. Emulating the bird’s swift evasion tactics, this approach ensures resilience against attempts to deceive the prediction process. Through adaptive measures, the algorithm fortifies security, responding effectively to potential threats and maintaining robustness in workload and resource time series prediction. Then, the updated condition is calculated in Equation (14).
D_a = \begin{cases} D_a^{new}, & \text{if } Z_a^{new,q_2} \cdot ued_{n,t} < Z_a \\ D_a, & \text{else} \end{cases}
D_a represents the new state of the secretary bird, D_a^{new} denotes the new state of the secretary bird at the initial stage, Z_a^{new,q_2} signifies the random candidate solutions in the second-stage iteration, and Z_a signifies the randomly generated array of the dimension. Then, the random selection of an integer is calculated in Equation (15).
H = \mathrm{round}\big(1 + \mathrm{rand}(0,1)\big)
H represents the randomly selected integer, round(·) is the rounding operation, and rand(0,1) is a uniformly generated random number.
Step 6: Termination
The weight parameters D_{n,t} and ued_{n,t} of MASNN are optimized by utilizing SBOA; step 3 is repeated, incrementing the counter P = P + 1, until the halting condition is satisfied. The MASNN-WL-RTSP-CS method thereby effectively performs workload and resource time series prediction in the cloud system with higher throughput and a lower MAE. A compact sketch of the whole loop follows.
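The following sketch captures the overall SBOA loop described in steps 1–6: a random population, a greedy accept rule as in Equations (12) and (14), a heavy-tailed hunting step standing in for the Levy flight of Equation (13), and a random escape jump. The fitness is any scalar the user supplies (e.g., validation MAE of MASNN); all constants and the function name are illustrative assumptions, not the authors’ exact operators.

```python
import numpy as np

def sboa_optimize(fitness, dim, pop_size=20, iters=100,
                  lower=-1.0, upper=1.0, seed=0):
    """Minimal sketch of an SBOA-style search over MASNN weight
    parameters; `fitness` maps a parameter vector to a scalar
    (e.g., validation MAE), lower being better."""
    rng = np.random.default_rng(seed)
    pop = rng.uniform(lower, upper, size=(pop_size, dim))
    scores = np.array([fitness(p) for p in pop])
    best = pop[scores.argmin()].copy()
    for _ in range(iters):
        for i in range(pop_size):
            if rng.random() < 0.5:
                # Hunting phase: move toward best with a heavy-tailed
                # (Levy-like) perturbation, cf. Equations (12)-(13).
                step = rng.standard_cauchy(dim) * 0.01
                cand = pop[i] + rng.random() * (best - pop[i]) + step
            else:
                # Escape phase: random evasive jump, cf. Equation (14).
                cand = pop[i] + rng.normal(0.0, 0.1, dim) * (upper - lower)
            cand = np.clip(cand, lower, upper)
            c_score = fitness(cand)
            if c_score < scores[i]:    # greedy acceptance
                pop[i], scores[i] = cand, c_score
        best = pop[scores.argmin()].copy()
    return best, scores.min()

# Toy usage: tune two weights against a quadratic surrogate loss.
best_w, best_mae = sboa_optimize(lambda w: ((w - 0.3) ** 2).sum(), dim=2)
```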

4. Results

In this section, the experimental outcomes of MASNN-WL-RTSP-CS are deliberated. The experimental setup for this work is a system with an Intel(R) Core(TM) i7-2620M CPU @ 2.70 GHz and 8.0 GB of RAM. This platform uses Windows 10 Pro as its operating system. The proposed technique is executed in MATLAB version R2023a, and several performance measures are estimated. The effectiveness of the proposed technique is analyzed against existing models: the combined DL model for workload and resource prediction in cloud schemes (ICNN-WL-RP-CS) [21], the performance analysis of machine learning centered workload prediction models for cloud (PA-ENN-WLP-CS) [22], and the DL-driven resource usage prediction method for resource provisioning in autonomic cloud computing environments (DCRNN-RUP-RP-CCE) [23], respectively.

4.1. Performance Metrics

Performance is measured utilizing many performance metrics, like RMSLE, MAPE, MAE, MSE, computation time, and throughput. These metrics provide a comprehensive evaluation of the model’s accuracy and effectiveness.

4.1.1. RMSLE

RMSLE takes the logarithm of the predicted and actual values before squaring their differences. This minimizes the effect of very large outliers and highlights the relative error. RMSLE is therefore effective for quantifying a model’s ability to generalize to tasks with large variability in workload and resource requirements, such as cloud settings. It is measured by Equation (16).
\mathrm{RMSLE} = \sqrt{\frac{1}{T} \sum_{a=1}^{T} \big(\log(\hat{d}_a + 1) - \log(d_a + 1)\big)^2}
T represents the total number of predictions, d ^ a represents the predicted value, and d a represents the actual value.

4.1.2. MSE

MSE is the mean of the squared differences between the predicted and actual values. It emphasizes larger errors by squaring them. MSE is efficient in revealing the difference between the predicted and the true time series data, which describes the capacity of the method to track changes in resource workloads over time. It is measured by Equation (17).
\mathrm{MSE} = \frac{1}{T} \sum_{a=1}^{T} \big(\hat{d}_a - d_a\big)^2

4.1.3. MAE

MAE measures the average absolute difference between the estimated and actual values. As the average error in the workload and resource estimates, it is used to assess the accuracy of workload and resource estimation. It is measured by Equation (18).
\mathrm{MAE} = \frac{1}{T} \sum_{a=1}^{T} \big|\hat{d}_a - d_a\big|

4.1.4. MAPE

MAPE represents the error as a percentage, which provides a standardized perspective on prediction accuracy. MAPE allows the quantification of relative prediction errors for comparison between models and can be used to show the extent of generalization of the proposed model across different cloud scenarios. It is calculated by Equation (19).
\mathrm{MAPE} = \frac{100}{T} \sum_{a=1}^{T} \left| \frac{\hat{d}_a - d_a}{d_a} \right|

4.1.5. Throughput

Throughput is one of the most important performance measures in cloud computing, which represents the number of completed tasks per unit of time. This is computed by Equation (20).
\mathrm{Throughput} = \frac{\text{Total number of completed tasks}}{\text{Total time taken}}
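For reference, the metrics in Equations (16)–(20) can be computed directly from the formulas above. The NumPy sketch below does exactly that; since the experiments themselves were run in MATLAB, this Python version is only a convenience illustration.

```python
import numpy as np

def prediction_metrics(actual: np.ndarray, predicted: np.ndarray) -> dict:
    """Compute the error metrics of Equations (16)-(19);
    `actual` must be positive for MAPE to be well defined."""
    err = predicted - actual
    return {
        "RMSLE": np.sqrt(np.mean((np.log1p(predicted) - np.log1p(actual)) ** 2)),
        "MSE": np.mean(err ** 2),
        "MAE": np.mean(np.abs(err)),
        "MAPE": 100.0 * np.mean(np.abs(err / actual)),
    }

def throughput(completed_tasks: int, total_time_s: float) -> float:
    """Equation (20): completed tasks per unit time."""
    return completed_tasks / total_time_s
```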

4.2. Performance Analysis

Figure 4, Figure 5, Figure 6, Figure 7, Figure 8 and Figure 9 show the performance analysis of the proposed model compared to the existing approaches.
Figure 4 depicts the RMSLE analysis. In this figure, the x axis represents the timestamps and the y axis represents RMSLE. In the proposed model, MASNN captures temporal dependencies more efficiently by utilizing spikes, leading to better generalization in workload and resource time series prediction. The attention mechanism mitigates bias caused by irregular workload fluctuations, thus minimizing RMSLE. The proposed MASNN-WL-RTSP-CS method achieves 35.66%, 32.73%, and 31.43% lower RMSLE compared with the existing methods ICNN-WL-RP-CS, PA-ENN-WLP-CS, and DCRNN-RUP-RP-CCE, respectively.
Figure 5 depicts the MSE analysis. In this figure, the x axis represents the timestamps and the y axis represents MSE. Workload and resource time series data in cloud computing exhibit complex temporal and spatial dependencies. MASNN leverages an optimized spiking neural network that integrates attention mechanisms to learn both short-term bursts and long-range correlations effectively. The proposed MASNN-WL-RTSP-CS method achieves 25.49%, 32.77%, and 28.93% lower MSE compared with the existing methods ICNN-WL-RP-CS, PA-ENN-WLP-CS, and DCRNN-RUP-RP-CCE, respectively.
Figure 6 depicts the MAE analysis. In this figure, the x axis represents the timestamps and the y axis represents MAE. MASNN’s combination of spiking neurons and attention mechanisms ensures that only relevant features are selected, leading to more precise predictions with lower errors. The proposed MASNN-WL-RTSP-CS method achieves 24.54%, 23.65%, and 23.62% lower MAE compared with the existing methods ICNN-WL-RP-CS, PA-ENN-WLP-CS, and DCRNN-RUP-RP-CCE, respectively.
Figure 7 depicts the MAPE analysis. In this figure, the x axis represents the timestamps and the y axis represents MAPE. Unlike other models, MASNNs process information asynchronously, mimicking the biological neural system. MASNNs encode temporal dependencies efficiently and have better information retention due to STDP, leading to higher prediction accuracy. Lower MAPE values indicate better performance in capturing the true underlying patterns of the data. The proposed MASNN-WL-RTSP-CS method achieves 34.73%, 32.96%, and 31.74% lower MAPE compared with the existing methods ICNN-WL-RP-CS, PA-ENN-WLP-CS, and DCRNN-RUP-RP-CCE, respectively.
Figure 8 depicts the computation time analysis. The reduced computation time of the proposed MASNN-WL-RTSP-CS method is due to the efficient preprocessing through MWSGF and the optimized prediction capabilities of MASNN. By focusing on relevant features and minimizing unnecessary computations, the model ensures faster processing without compromising accuracy, resulting in a lower overall computation time. This simplifies the problem without discarding crucial information. Here, MASNN-WL-RTSP-CS attains 14.39%, 20.78%, and 17.12% lower computational time compared with the existing methods ICNN-WL-RP-CS, PA-ENN-WLP-CS, and DCRNN-RUP-RP-CCE, respectively.
Figure 9 depicts the throughput analysis. In this figure, the x axis represents the timestamps and the y axis represents throughput. In the proposed model, MASNN integrates multi-dimensional attention to focus on important temporal and spatial features in workload patterns. This selective attention mechanism enhances feature extraction, leading to better resource allocation decisions and higher throughput. Maximizing throughput facilitates faster and more efficient handling of time series prediction tasks, enabling real-time or near-real-time analysis of workload and resource dynamics. In this context, the proposed MASNN-WL-RTSP-CS method achieves 26.35%, 28.56%, and 32.46% greater throughput than the existing methods ICNN-WL-RP-CS, PA-ENN-WLP-CS, and DCRNN-RUP-RP-CCE.
Figure 10 depicts the training and validation accuracy vs. epoch, showing the learning progression of the model during the workload prediction task. A steady rise in training accuracy is usually observed as the model adapts to the training data. Validation accuracy represents the method’s generalization to unseen data and should ideally improve initially and then plateau. Any divergence, such as rising training accuracy with stagnant or declining validation accuracy, may indicate overfitting. On the other hand, if both accuracies are low, it may suggest underfitting, where the approach is too simple to capture the fundamental patterns. Monitoring these metrics helps fine-tune the model to avoid overfitting or underfitting.
Figure 11 depicts the training and validation loss vs. epoch, illustrating how the model’s error evolves during training. As the model learns from the training data, the training loss usually decreases. Validation loss reflects the model’s performance on unseen data and should ideally decrease initially, indicating improved generalization. Overfitting, in which the model becomes overly specialized to the training data, may be indicated if the validation loss begins to rise while the training loss keeps falling. On the other hand, underfitting, in which the model is not able to learn the patterns in the data, might be indicated by both training and validation loss remaining high.
Table 2 presents a performance analysis of optimization algorithms, comparing Particle Swarm Optimization (PSO), the Genetic Algorithm (GA), and the proposed SBOA. The proposed SBOA achieves the lowest MSE of 1.1% and MAE of 0.38% with the fastest convergence time of 7.3 s, indicating its superiority in optimizing MASNN. In the proposed model, integrating SBOA can enhance training efficiency and predictive accuracy by refining weight optimization, reducing computational cost, and accelerating convergence.
Table 3 presents the comparison with state-of-the-art models. Compared to previous methods, MASNN-WL-RTSP-CS achieves the lowest error rates, with an RMSLE of 0.7%, MSE of 1.1%, MAE of 0.38%, and MAPE of 3.7%, and the highest throughput of 97.89 KBPS, while also significantly reducing computational time to 99 s. In contrast, methods like those of Bi et al. (2021) [21] and Al-Sayed et al. (2022) [24] are restricted by higher RMSLE and MAPE because of their reliance on standard deep learning models, which lack the advantages of specialized feature selection and optimization. The proposed method benefits from MWSGF, which handles noise robustly while preserving useful pattern information. Methods like those of Bi et al. (2021) [21] and Ruan et al. (2023) [25] use simple noise reduction techniques, resulting in increased RMSLE and MAPE due to residual noise. Additionally, the proposed method exploits SBOA to tune the MASNN parameters so that workload prediction is highly accurate, whereas methods like those of Dogani et al. (2023) [26] and Devi et al. (2023) [27] rely on manual hyperparameter tuning or standard optimizers, leading to suboptimal accuracy and increased error rates. This demonstrates the superiority of the proposed approach in terms of both accuracy and effectiveness.

4.3. Ablation Study

Ablation analysis with and without each individual component is carried out in order to understand the significance of each component in the proposed MASNN-WL-RTSP-CS.
Table 4 presents the ablation study for the proposed MASNN-WL-RTSP-CS method for workload with resource time series prediction. Without MWSGF preprocessing, error metrics such as RMSLE (3.2%) and MAPE (9.15%) remain high, indicating the importance of noise removal. Incorporating MASNN alone improves results but retains higher MAPE (9.4%), showing limited optimization. Excluding SBOA reduces the model’s capability, with RMSLE and MAE at 1.3% and 0.88%, respectively, highlighting the critical role of optimization in weight tuning. The full proposed method, combining MWSGF, MASNN, and SBOA, achieves the best performance with RMSLE (0.7%), MSE (1.1%), MAE (0.38%), and MAPE (3.7%), demonstrating significant accuracy and robustness improvements.

4.4. Discussion

The proposed methodology, MASNN-WL-RTSP-CS, introduces an innovative model tailored for workload and resource time series prediction within cloud systems. It commences by subjecting input data collected from the Google cluster trace dataset to preprocessing through MWSGF, a method geared toward refining data quality by effectively filtering out noise and mitigating the influence of extreme points in the inputs. MWSGF smooths the resource and workload data, rendering them easier-to-predict time series. Following this preprocessing step, MASNN is employed as the primary predictive network. While MASNN effectively captures symmetric temporal dependencies in workload variations, it lacks an inherent optimization mechanism for parameter selection. In response to this limitation, SBOA is proposed as a solution, specifically designed to fine-tune the MASNN parameters. The simulation outcomes demonstrate that the proposed approach attains an impressive MAE of 0.38% and a MAPE of 3.7%. The disadvantage of MWSGF is that it can be computationally intensive for large datasets, potentially increasing preprocessing time. The disadvantage of SBOA is its computational complexity, which may increase processing time and resource usage, especially during parameter optimization for high-dimensional data. The proposed MASNN-WL-RTSP-CS method attains higher throughput evaluation metrics than existing models; the comparative models are therefore more expensive than the MASNN-WL-RTSP-CS technique. With its lower computation time and greater efficiency, MASNN-WL-RTSP-CS has important potential to translate into efficiency, scalability, and cost benefits. Lowered computation time guarantees shorter processing times for workload prediction, which allows live decision-making for dynamic resource allocation, decreasing delay and increasing the responsiveness of the system. Increased throughput enables the system to process larger amounts of information and more simultaneous requests, which makes it suitable for high-availability applications like financial transactions, healthcare analytics, and IoT networks at scale.

5. Conclusions

The proposed MASNN-WL-RTSP-CS method can significantly enhance workload and resource prediction accuracy in cloud computing. By combining MWSGF to reduce noise, MASNN to produce high-accuracy time series forecasts, and SBOA to tune the model, the proposed model is significantly improved compared with state-of-the-art methods. The MASNN-WL-RTSP-CS method provides 34.73%, 32.96%, and 31.74% lower MAPE, and 29.45%, 32.77%, and 37.93% lower computation time compared with existing techniques like ICNN-WL-RP-CS, PA-ENN-WLP-CS, and DCRNN-RUP-RP-CCE, respectively. Experimental results demonstrate a substantial reduction in error, which validates the robustness and efficiency of the proposed framework. Beyond its application in cloud computing, the MASNN-WL-RTSP-CS model also exhibits great extensibility to other domains with dynamic and complex time series forecasting requirements. With demand forecasting, anomaly detection, resource optimization, and predictive maintenance among its many applications, this model can be leveraged within the healthcare, financial, energy management, and smart city industries. In future work, the MASNN-WL-RTSP-CS framework can be extended to incorporate reinforcement learning (RL) and online learning processes for dynamic adaptation to real-time streaming data. By introducing RL-based adaptive learning, the model can iteratively improve its predictions through reward-driven feedback, which supports optimal decisions under dynamic conditions.

Author Contributions

Conceptualization, T.K. and J.K.; methodology, T.K.; resources, T.K.; writing—original draft preparation, T.K.; writing—review and editing, J.K.; visualization, T.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

This research uses Google cloud traces data (https://www.kaggle.com/datasets/derrickmwiti/google-2019-cluster-sample) (accessed on 6 January 2025). All users have access to the aforementioned datasets made accessible under a public license. Data generated and acquired for the project will be provided upon request.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Gadhavi, L.J.; Bhavsar, M.D. Adaptive cloud resource management through workload prediction. Energy Syst. 2022, 13, 601–623. [Google Scholar] [CrossRef]
  2. Wang, X.; Cao, J.; Yang, D.; Qin, Z.; Buyya, R. Online cloud resource prediction via scalable window waveform sampling on classified workloads. Future Gener. Comput. Syst. 2021, 117, 338–358. [Google Scholar] [CrossRef]
  3. Kumar, J.; Singh, A.K.; Buyya, R. Self directed learning based workload forecasting model for cloud resource management. Inf. Sci. 2021, 543, 345–366. [Google Scholar] [CrossRef]
  4. Bedi, J.; Patel, Y.S. STOWP: A light-weight deep residual network integrated windowing strategy for storage workload prediction in cloud systems. Eng. Appl. Artif. Intell. 2022, 115, 105303. [Google Scholar] [CrossRef]
  5. St-Onge, C.; Benmakrelouf, S.; Kara, N.; Tout, H.; Edstrom, C.; Rabipour, R. Generic SDE and GA-based workload modeling for cloud systems. J. Cloud Comput. 2021, 10, 6. [Google Scholar] [CrossRef] [PubMed]
  6. Patel, E.; Kushwaha, D.S. A hybrid CNN-LSTM model for predicting server load in cloud computing. J. Supercomput. 2022, 78, 1–30. [Google Scholar] [CrossRef]
  7. Rjoub, G.; Bentahar, J.; Wahab, O.A.; Bataineh, A.S. Deep and reinforcement learning for automated task scheduling in large-scale cloud computing systems. Concurr. Comput. Pract. Exp. 2021, 33, e5919. [Google Scholar] [CrossRef]
  8. Saxena, D.; Singh, A.K.; Buyya, R. OP-MLB: An online VM prediction-based multi-objective load balancing framework for resource management at cloud data center. IEEE Trans. Cloud Comput. 2021, 10, 2804–2816. [Google Scholar] [CrossRef]
  9. Amekraz, Z.; Hadi, M.Y. CANFIS: A chaos adaptive neural fuzzy inference system for workload prediction in the cloud. IEEE Access 2022, 10, 49808–49828. [Google Scholar] [CrossRef]
  10. Malik, S.; Tahir, M.; Sardaraz, M.; Alourani, A. A resource utilization prediction model for cloud data centers using evolutionary algorithms and machine learning techniques. Appl. Sci. 2022, 12, 2160. [Google Scholar] [CrossRef]
  11. Ullah, F.; Bilal, M.; Yoon, S.K. Intelligent time-series forecasting framework for non-linear dynamic workload and resource prediction in cloud. Comput. Netw. 2023, 225, 109653. [Google Scholar] [CrossRef]
  12. Dogani, J.; Khunjush, F.; Seydali, M. Host load prediction in cloud computing with discrete wavelet transformation (dwt) and bidirectional gated recurrent unit (bigru) network. Comput. Commun. 2023, 198, 157–174. [Google Scholar] [CrossRef]
  13. Kumar, J.; Singh, A.K. Performance assessment of time series forecasting models for cloud datacenter networks’ workload prediction. Wirel. Pers. Commun. 2021, 116, 1949–1969. [Google Scholar] [CrossRef]
  14. Nawrocki, P.; Osypanka, P. Cloud resource demand prediction using machine learning in the context of qos parameters. J. Grid Comput. 2021, 19, 20. [Google Scholar] [CrossRef]
  15. Seshadri, K.; Pavana, C.; Sindhu, K.; Kollengode, C. Unsupervised Modeling of Workloads as an Enabler for Supervised Ensemble-based Prediction of Resource Demands on a Cloud. In Advances in Data Computing, Communication and Security: Proceedings of I3CS2021; Springer: Singapore, 2022; pp. 109–120. [Google Scholar]
  16. Singh, A.K.; Saxena, D.; Kumar, J.; Gupta, V. A quantum approach towards the adaptive prediction of cloud workloads. IEEE Trans. Parallel Distrib. Syst. 2021, 32, 2893–2905. [Google Scholar] [CrossRef]
  17. Bao, L.; Yang, J.; Zhang, Z.; Liu, W.; Chen, J.; Wu, C. On accurate prediction of cloud workloads with adaptive pattern mining. J. Supercomput. 2023, 79, 160–187. [Google Scholar] [CrossRef]
  18. Bhalaji, N. Cloud load estimation with deep logarithmic network for workload and time series optimization. J. Soft Comput. Paradig. 2021, 3, 234–248. [Google Scholar] [CrossRef]
  19. Karthikeyan, R.; Balamurugan, V.; Cyriac, R.; Sundaravadivazhagan, B. COSCO2: AI-augmented evolutionary algorithm based workload prediction framework for sustainable cloud data centers. Trans. Emerg. Telecommun. Technol. 2023, 34, e4652. [Google Scholar] [CrossRef]
  20. Xu, M.; Song, C.; Wu, H.; Gill, S.S.; Ye, K.; Xu, C. esDNN: Deep neural network based multivariate workload prediction in cloud computing environments. ACM Trans. Internet Technol. 2022, 22, 1–24. [Google Scholar] [CrossRef]
  21. Bi, J.; Li, S.; Yuan, H.; Zhou, M. Integrated deep learning method for workload and resource prediction in cloud systems. Neurocomputing 2021, 424, 35–48. [Google Scholar] [CrossRef]
  22. Saxena, D.; Kumar, J.; Singh, A.K.; Schmid, S. Performance analysis of machine learning centered workload prediction models for cloud. IEEE Trans. Parallel Distrib. Syst. 2023, 34, 1313–1330. [Google Scholar] [CrossRef]
  23. Al-Asaly, M.S.; Bencherif, M.A.; Alsanad, A.; Hassan, M.M. A deep learning-based resource usage prediction model for resource provisioning in an autonomic cloud computing environment. Neural Comput. Appl. 2022, 34, 10211–10228. [Google Scholar] [CrossRef]
  24. Al-Sayed, M.M. Workload time series cumulative prediction mechanism for cloud resources using neural machine translation technique. J. Grid Comput. 2022, 20, 16. [Google Scholar] [CrossRef]
  25. Ruan, L.; Bai, Y.; Li, S.; He, S.; Xiao, L. Workload time series prediction in storage systems: A deep learning based approach. Clust. Comput. 2023, 26, 25–35. [Google Scholar] [CrossRef]
  26. Dogani, J.; Khunjush, F.; Mahmoudi, M.R.; Seydali, M. Multivariate workload and resource prediction in cloud computing using CNN and GRU by attention mechanism. J. Supercomput. 2023, 79, 3437–3470. [Google Scholar] [CrossRef]
  27. Devi, K.L.; Valli, S. Time series-based workload prediction using the statistical hybrid model for the cloud environment. Computing 2023, 105, 353–374. [Google Scholar] [CrossRef]
  28. Available online: https://www.kaggle.com/datasets/derrickmwiti/google-2019-cluster-sample (accessed on 6 January 2025).
  29. Liu, W.; Wang, H.; Xi, Z.; Zhang, R. Smooth Deep Learning Magnetotelluric Inversion based on Physics-informed Swin Transformer and Multi-Window Savitzky-Golay Filter. IEEE Trans. Geosci. Remote Sens. 2023, 61, 4505214. [Google Scholar]
  30. Yao, M.; Zhao, G.; Zhang, H.; Hu, Y.; Deng, L.; Tian, Y.; Xu, B.; Li, G. Attention spiking neural networks. IEEE Trans. Pattern Anal. Mach. Intell. 2023, 45, 8. [Google Scholar] [CrossRef] [PubMed]
  31. Fu, Y.; Liu, D.; Chen, J.; He, L. Secretary bird optimization algorithm: A new metaheuristic for solving global optimization problems. Artif. Intell. Rev. 2024, 57, 123. [Google Scholar] [CrossRef]
  32. Bacanin, N.; Simic, V.; Zivkovic, M.; Alrasheedi, M.; Petrovic, A. Cloud computing load prediction by decomposition reinforced attention long short-term memory network optimized by modified particle swarm optimization algorithm. Ann. Oper. Res. 2023, 1–34. [Google Scholar] [CrossRef]
  33. Hung, L.H.; Wu, C.H.; Tsai, C.H.; Huang, H.C. Migration-based load balance of virtual machine servers in cloud computing by load prediction using genetic-based methods. IEEE Access 2021, 9, 49760–49773. [Google Scholar] [CrossRef]
Figure 1. Block diagram of the proposed MASNN-WL-RTSP-CS method.
Figure 2. Architecture diagram of MASNN.
Figure 3. Flowchart of SBOA for optimizing MASNN parameters.
Figure 4. Performance analysis of RMSLE.
Figure 5. Performance analysis of MSE.
Figure 6. Performance analysis of MAE.
Figure 7. Performance analysis of MAPE.
Figure 8. Performance analysis of computational time.
Figure 9. Performance analysis of throughput.
Figure 10. Training and validation accuracy vs. epoch.
Figure 11. Training and validation loss vs. epoch.
Table 1. Comparison table of literature survey.

Author | Objectives | Models | Advantages | Disadvantages
Bi, J. et al. [21] | To improve workload and resource prediction accuracy in cloud systems. | Logarithmic operation, smoothing, bi-directional, and grid LSTM networks. | It provides low MAPE. | It provides high RMSLE.
Saxena, D. et al. [22] | To enhance the performance analysis of ML-based workload prediction. | Evolutionary Neural Networks, Quantum Neural Network, and LSTM-RNN. | It provides high throughput. | It provides high MAE.
Al-Asaly, M.S. et al. [23] | To improve CPU usage forecasting and manage workload fluctuations. | Diffusion Convolutional Recurrent Neural Network. | It provides low MAPE. | It provides high MSE.
Al-Sayed, M.M. et al. [24] | To develop workload sequence estimation as a translation task. | Attention Seq2Seq method and Recurrent Neural Network. | It provides high normalized correlation. | It provides high structural similarity index measure.
Ruan, L. et al. [25] | To improve storage workload time series prediction. | CrystalLP using LSTM. | It provides low MAE. | It provides high computation time.
Dogani, J. et al. [26] | To predict multivariate workload with resource usage in cloud systems. | Hybrid CNN-GRU with attention. | It provides low MSE. | It provides low MAPE.
Devi, K.L. et al. [27] | To enhance workload prediction accuracy in cloud data centers. | GRU, LSTM, CNN, and BiLSTM. | It provides low RMSLE. | It provides low throughput.
Table 2. Performance analysis of optimization algorithms.

Methods | MSE (%) | MAE (%) | Convergence Time (s)
PSO [32] | 2.2 | 0.54 | 10.2
GA [33] | 2.5 | 0.62 | 12.5
SBOA (Proposed) | 1.1 | 0.38 | 7.3
Table 3. Comparison with state-of-the-art techniques.

Authors | RMSLE (%) | MSE (%) | MAE (%) | MAPE (%) | Computational Time (s) | Throughput (KBPS)
Bi, J. et al. [21] | 3.2 | 4.2 | 0.99 | 9.15 | 250 | 65.4
Saxena, D. et al. [22] | 2.2 | 2.8 | 0.92 | 9.4 | 190 | 55.6
Al-Asaly, M.S. et al. [23] | 1.3 | 1.9 | 0.88 | 6.2 | 260 | 51.9
Al-Sayed, M.M. et al. [24] | 3.8 | 4.5 | 0.98 | 9.2 | 211.27 | 70.9
Ruan, L. et al. [25] | 2.5 | 3.1 | 0.99 | 1 | 92.32 | 59.4
Dogani, J. et al. [26] | 1.7 | 2.3 | 0.74 | 5.5 | 123.27 | 55.3
Devi, K.L. et al. [27] | 1.6 | 4.8 | 0.95 | 8.9 | 219.53 | 70.8
MASNN-WL-RTSP-CS (Proposed) | 0.7 | 1.1 | 0.38 | 3.7 | 99 | 97.89
Table 4. Ablation study of proposed MASNN-WL-RTSP-CS method.

Ablation Model | RMSLE (%) | MSE (%) | MAE (%) | MAPE (%)
Without MWSGF | 3.2 | 4.2 | 0.99 | 9.15
With MASNN only | 2.2 | 2.8 | 0.92 | 9.4
Without SBOA | 1.3 | 1.9 | 0.88 | 6.2
MASNN-WL-RTSP-CS (Proposed) | 0.7 | 1.1 | 0.38 | 3.7
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
