Return Rate Prediction in Blockchain Financial Products Using Deep Learning

Abstract: Recently, bitcoin-based blockchain technologies have received significant interest among investors, who have concentrated on predicting the return and risk rates of financial products. An automated tool to predict the return rate of bitcoin is therefore needed for financial products. Recently designed machine learning and deep learning models pave the way for the return rate prediction process. In this context, this study develops an intelligent return rate predictive approach using deep learning for blockchain financial products (RRP-DLBFP). The proposed RRP-DLBFP technique involves designing a long short-term memory (LSTM) model for the predictive analysis of the return rate. In addition, the Adam optimizer is applied to optimally adjust the LSTM model's hyperparameters, consequently increasing the predictive performance. The learning rate of the LSTM model is adjusted using the oppositional glowworm swarm optimization (OGSO) algorithm. The design of the OGSO algorithm to optimize the LSTM hyperparameters for bitcoin return rate prediction shows the novelty of the work. To ensure the supreme performance of the RRP-DLBFP technique, the Ethereum (ETH) return rate is chosen as the target, and the simulation results are investigated with respect to different measures. The simulation outcomes highlighted the supremacy of the RRP-DLBFP technique over current state-of-the-art techniques in terms of diverse evaluation parameters: the proposed RRP-DLBFP attains an MSE of 0.0435 and 0.0655 in training and testing, respectively, compared to averages of 0.6139 and 0.723 for the compared methods.


Introduction
Recently, economic globalization has developed rapidly; various obstacles restraining industrial development have been overcome, and the fastest-developing resources have produced rapid growth in economic markets [1,2]. Financial marketplaces are the medium of economic development: they control the allocation of resources across the entire public and economic system and have become an important part of economic development [3,4]. The global expansion of the internet has led to the growth of many internet-based financial products, such as Baidu Economic Management and Yu'EBao, and this growth has had an important impact on society. Recently, new internet economic schemes with worldwide influence, such as peer-to-peer (P2P) lending, crowdfunding, digital currency, and blockchain, have come to play a major part in developing the worldwide financial marketplace [5]. Blockchain models represent a systematic advance whose disruption has far-reaching impacts, transferring the operation of businesses from centralized to decentralized forms. Blockchain replaces the untrusted agent without needing entity-based intermediaries [6]; it alters the way transactions are identified and unlocks huge potential in various sectors, such as decentralized autonomous organizations (DAOs) and multi-party computation (MPC) in the government sector. Blockchain consists of three evolutionary stages, namely Blockchain 1.0, 2.0, and 3.0. In the beginning, Blockchain 1.0 covered commercial applications using digital payment, money transfer, and remittance, which acquired widely dispersed applications in derivatives and Bitcoin. Later, Blockchain 2.0 extended to contracts, markets, and broader financial fields, employed not just for minor cash transactions. Lastly, Blockchain 3.0 represents applications beyond markets and money, particularly in areas such as health, science, and government [7,8].
Blockchain technology uses decentralized storage with smart contracts to store massive amounts of data linked from the current blocks back to the initial blocks. The InterPlanetary File System (IPFS), BigchainDB, LitecoinDB, MoneroDB, Swarm, and SiacoinDB are everyday instances of decentralized databases [9,10,11]. IPFS is a distributed, peer-to-peer, decentralized database in which standard files are transmitted and linked; it is a huge storage network exploited via the blockchain method in IoT software for maximal efficacy. Some research related to the proposed method has been studied. The authors in [12] used a fixed proportion for different character traits and utilized the support vector machine (SVM) method for identification and classification tasks; their experimental results show that the presented approach attained greater performance. In [13], the authors examined the Taiwanese stock marketplaces and applied an SVM-based genetic algorithm (GA) method; their system was used to solve the optimal portfolio problem and achieved higher performance. The authors in [14] employed a group of ML approaches for prediction assessment on the Nikkei 225 index; the results show that the SVM gained the optimal results among the four models.
The yield rate of bitcoin is the subject of [15], which collects 943 data points from 2 June 2016 to 30 December 2018. Empirical analyses and model simulations are performed on the collected data with a back-propagation neural network (BPNN), an SVM, and a particle swarm optimization least-squares support vector regression (PSO-LSSVR) approach, concluding that the PSO-LSSVR approach achieves the most appropriate fit. The generative adversarial network (GAN)-MLP model is used in [16] to develop a new return rate forecasting approach for blockchain financial goods; the suggested system can intelligently predict the return rates of blockchain financial goods and, given historical stock price information, predict stock price closings with high accuracy.
In [17], price predictions are implemented by two ML approaches, Logistic Regression (LR) and SVM, using the daily closing price of the Ether cryptocurrency as a time sequence. Various window lengths with different weight coefficients are employed to filter the Ether price series for prediction. In the training stage, a cross-validation technique on the daily collected data is employed to construct a method whose accuracy is independent of the dataset. The optimal least-squares support vector machine (OLSSVM) algorithm was used by Sivaram et al. [18] to develop an effective return rate prediction strategy for blockchain financial products. The LSSVM constraint optimization was carried out by combining differential evolution (DE) with grey wolf optimization (GWO), resulting in the OGWO strategy; this hybridization eliminates GWO's tendency to become trapped in local optima while also increasing the diversity of the population.
The authors in [19] show that Bitcoin prices demonstrate long-term memory, though this trend diminishes over time. It is observed that Bitcoin can be described effectively by a random walk, an emerging sign of market maturity; in contrast, other cryptocurrencies such as Ethereum and Ripple offer evidence of increasing underlying memory behavior. The authors in [20] employed a battery of statistical tests to determine price leadership between the two cryptocurrencies, namely Bitcoin and Ethereum.
This study designs an intelligent return rate predictive approach using deep learning for blockchain financial products (RRP-DLBFP). The proposed RRP-DLBFP technique involves developing a long short-term memory (LSTM) model for the predictive analysis of the return rate. The Adam optimizer is also used to fine-tune the hyperparameters of the LSTM model so that it can make more accurate predictions. In addition, the learning rate of the LSTM model is adjusted using the oppositional glowworm swarm optimization (OGSO) algorithm. To ensure the supreme performance of the RRP-DLBFP technique, Ethereum (ETH) returns are chosen as the target, and various simulation results are examined. The remainder of this paper is organized as follows. Section 2 discusses the related literature on return rate prediction models. Section 3 introduces the blockchain method, deep learning models, and the Adam algorithm. Section 4 details the proposed RRP-DLBFP. Section 5 discusses the experimental results. Section 6 concludes the paper.

Related Work
Using machine learning algorithms to predict Bitcoin's price is a relatively new area of research. It was possible to make an 89 percent profit in three months by employing the latent source model developed in [21] to predict Bitcoin's price. Text data gleaned from social media and other sources have also been used to predict Bitcoin's price: three studies looked into sentiment analysis by combining support vector machines (SVMs) with Wikipedia view frequency and hash rate (HashN). The authors in [22] looked into the connection between Bitcoin's price and the number of tweets and Google Trends searches for the term "Bitcoin". In [23], the authors used transaction volume rather than Google Trends views to predict the Bitcoin price in their analysis. The limitation of these studies is the small sample size and the propensity for misinformation to spread over discussion boards and media channels such as Twitter, which artificially inflates and deflates prices. As noted in [24], liquidity is extremely scarce on Bitcoin exchanges, so market manipulation is more likely; as a result, opinions expressed on social media are ignored here. For the analysis of the Bitcoin Blockchain, researchers in [25] employed SVMs and artificial neural networks (ANNs). They found that traditional ANNs were 55 percent accurate in forecasting the Bitcoin price and concluded that Blockchain data on its own has limited predictability. Prediction accuracy above 97% was reported by [26], who used Blockchain data with Random Forests, SVM, and a Binomial Generalised Linear Model; however, they did not cross-validate their models, which limits the generalization of the findings. It has been shown using wavelets that network hash rate, search engine views, and mining difficulty positively correlate with the Bitcoin price [27,28]. Based on these findings, this study incorporates Blockchain data, such as hash rate and difficulty, from CoinDesk.
Similar financially based prediction tasks, such as stock prediction, can be compared to Bitcoin price prediction. Several research groups have used the Multilayer Perceptron (MLP) for stock price prediction [29]. The MLP can study a single observation, but only one step at a time [30]. A recurrent neural network (RNN) uses a context layer to store the output of each layer before looping it back as input, so that, in contrast to the MLP, the network has some memory. The length of this memory is referred to as the "temporal window length" (TWL). Internal states explicitly represent the temporal relationships in the series, as shown by [31]. The authors in [32] utilized a genetic algorithm and an RNN for network optimization and successfully predicted stock returns. There are also other types of RNNs, such as the Long Short-Term Memory (LSTM) network. While an Elman RNN remembers and retrieves data regardless of the significance or weight of a feature, LSTM models can selectively remember and forget data. In [33], the authors used an LSTM for a time-series prediction challenge and discovered that it worked just as well as an RNN; this kind of model is likewise utilized here. The large amount of computation required to train RNNs and LSTMs is a drawback: training an RNN with a 50-day temporal window is comparable to training 50 individual MLP models. Since NVIDIA released its CUDA framework in 2006, many applications have benefited from the GPU's extraordinary parallelism, including machine learning. The authors in [34] stated that using a GPU instead of a CPU accelerated the testing and training of their ANN model threefold. Similarly, in [35], the authors found that running an SVM on a GPU was eighty times faster for classification than a CPU-based SVM method, while the CPU implementation took nine times as long to train.
Training a deep neural network for image identification on a GPU instead of a CPU resulted in forty times faster training for [36]. The LSTM and RNN models here use both the CPU and GPU because of these obvious advantages. In the automotive components supply chain, rough set theory (RST) and ABC analysis were proposed in combination to verify inventory for demand forecasts and ordering decisions. In the initial stage, the customer places a specific order with the reseller for specific spare parts. After that, a classification is created using the ABC analysis of the demand projection for each period. Next, a weighted-average approach based on the mileage of the cars registered in the workshops uses each period's starting and ending mileage. This was accomplished by performing an ABC analysis and then creating an RST model to forecast which ABC group each element will belong to in the future. A method for controlling spare parts inventory based on machine learning for demand forecasting has also been proposed [37]. Pre-processing, weight determination, and extraction are all parts of the strategy. Historical forecasting examples must be trained for each subset corresponding to the effect-mechanism models before extracting the data using ELM and SVM learning methods, and the prediction model is determined by equalizing the accuracy of the two models. However, the upkeep of weather-related spare parts remains complicated. As mentioned in [38], a new hybrid model, abbreviated WT-Adam-LSTM, uses Adam and wavelet transforms to improve the LSTM neural network's price prediction accuracy. When nonlinear electricity price sequences are decomposed using a wavelet transform, the variance of the processed data becomes more stable, allowing Adam, one of the most efficient stochastic gradient-based optimizers, and the LSTM to better capture the behaviour of electricity prices.

Blockchain
The term "blockchain" [39] refers to a data structure that sequentially records transactions and facilitates them as a distributed record set. Each block is divided into two sections, a header and a transaction section, and stores information about transaction specifics; these data include both the source and destination addresses. Using a cryptographic digest, each block generates its unique ID. The header stores the hash of the preceding block and thereby links the blocks together, which is primarily why the structure is dubbed a "blockchain". Viewed differently, this link requires a partial hash collision, which demands significant computing power over the hash function. Since every block references its predecessor, a change in a single bit would change the corresponding hash, and all subsequent hashes would have to be recalculated in order down the chain. As is well established, a lengthy chain that grows block by block is unchangeable and ensures the security of the preserved transactions.
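As a concrete sketch of this chaining, the following minimal Python example (illustrative only; the field names and the SHA-256 digest are assumptions, not the bitcoin wire format) shows how each header's stored hash links the blocks and why a single-bit change breaks the chain:

```python
import hashlib
import json

def block_hash(block: dict) -> str:
    """Cryptographic digest serving as the block's unique ID."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(transactions, prev_hash: str) -> dict:
    """A block stores transaction details plus the hash of the preceding block."""
    return {"header": {"prev_hash": prev_hash}, "transactions": transactions}

# Build a tiny chain: each header references the digest of its predecessor.
genesis = make_block([{"from": "a", "to": "b", "amount": 1}], prev_hash="0" * 64)
second = make_block([{"from": "b", "to": "c", "amount": 2}], prev_hash=block_hash(genesis))

# Tampering with the earlier block changes its hash, breaking the stored link,
# so every later hash would have to be recalculated.
genesis["transactions"][0]["amount"] = 100
assert block_hash(genesis) != second["header"]["prev_hash"]
```

The same mechanism, at scale and combined with proof of work, is what makes a long chain effectively immutable.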
The immutability, auditability, and nonrepudiation of transactions are all guaranteed using blockchain technology. As a huge distributed ledger, this technology ensures digital transactions are secure and private, and it provides the route to gaining consensus among disparate groups of individuals. Security of transaction data is generally the responsibility of banks and notaries, referred to as 'trusted third parties'. Because this model's registries are public and furnished in a decentralized manner by a vast set of system participants, such entities are eliminated.
As shown in Figure 1 [39], the bitcoin network tracks the major operations of all the nodes in use at every level. As noted previously, it is an overlay network built on top of another network, creating new levels of network abstraction with innovative security benefits.

LSTM
RNNs are effective models that differ from standard feed-forward networks in that their neuron connections are not limited to one direction: neurons in an RNN can send information to a previous or the same layer. In a typical neural network model, nodes within a layer are disconnected while adjacent layers are fully connected, and such a general network cannot solve numerous complex problems. Predicting the next word in a sentence, for example, requires using the preceding words, since the words of a sentence are not independent [40]. The RNN is a cyclic neural network, meaning that the current output is also linked to the preceding result. Because the network remembers earlier information, it can use that knowledge in calculating the current output. Thus, the hidden layer's input includes both the input layer's output and the hidden layer's output from an earlier time step.

As shown in Figure 2, the neural network chunk A receives the input value and the previous output at each step, and then outputs the result; this is like having many copies of the same neural network, each passing a message to its successor. An RNN can theoretically process sequence data of any length. LSTM networks use memory blocks instead of neurons, with each memory block comprising gates that regulate the output and status of the block.

Each memory block also stores the most recent part of the sequence, acting as a smarter, more capable version of a classical neuron. A sigmoid activation unit controls each gate in the block, operating on the input sequence. Depending on whether the gates are activated, the state changes and new information enters the block. A unit has three different kinds of gates (see Figure 3) [41].
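The recurrence that gives the network its memory can be sketched in a few lines. This is a minimal, hypothetical NumPy illustration of a plain Elman-style recurrent step (weights and dimensions are arbitrary), as a baseline contrast to the gated LSTM unit:

```python
import numpy as np

rng = np.random.default_rng(0)
input_dim, hidden_dim = 3, 4

# Weights of a single Elman-style recurrent layer.
W_x = rng.normal(size=(hidden_dim, input_dim)) * 0.1
W_h = rng.normal(size=(hidden_dim, hidden_dim)) * 0.1
b = np.zeros(hidden_dim)

def rnn_step(x, h_prev):
    """The hidden state mixes the current input with the previous hidden state,
    which is the 'context layer' memory described above."""
    return np.tanh(W_x @ x + W_h @ h_prev + b)

h = np.zeros(hidden_dim)
for t in range(5):                       # unroll over a length-5 sequence
    h = rnn_step(rng.normal(size=input_dim), h)
print(h.shape)  # (4,)
```

Unlike this plain step, an LSTM block inserts the gates around its internal state so that it can selectively keep or discard what enters the recurrence.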

Adam Optimizer
Adam [42] is a first-order gradient-based optimization technique for stochastic objective functions, based on adaptive estimates of lower-order moments. The method is easy to implement, has excellent computational efficiency, requires little memory, and is invariant to diagonal rescaling of the gradients. It is also a good fit for problems with large amounts of data or many parameters. Algorithm 1 shows Adam's pseudocode [11].

Given the parameters β1, α, β2 and the stochastic objective function f(θ), we must initialize the first-moment vector, the parameter vector, the time step, and the second-moment vector as given in the procedure. The loop then iteratively updates the various components of the model until the parameters converge.

Algorithm 1 presents the proposed method for optimizing the LSTM model's parameters. The elementwise square g_t^2 is represented by g_t ⊙ g_t; β_1^t and β_2^t denote β_1 and β_2 raised to the power t, and all vector operations are performed element by element. Default settings that have proven effective in machine learning are β_1 = 0.9, β_2 = 0.999, α = 0.001, and ε = 10^−8.

Algorithm 1. The proposed method for optimizing the LSTM model's parameters.
1. Initialize m_0 ← 0, v_0 ← 0, t ← 0, and the parameter vector θ_0
2. while θ_t not converged do
3.   t ← t + 1
4.   g_t ← ∇_θ f_t(θ_{t−1})
5.   m_t ← β_1 · m_{t−1} + (1 − β_1) · g_t
6.   v_t ← β_2 · v_{t−1} + (1 − β_2) · g_t ⊙ g_t
7.   m̂_t ← m_t / (1 − β_1^t),  v̂_t ← v_t / (1 − β_2^t)
8.   θ_t ← θ_{t−1} − α · m̂_t / (√v̂_t + ε)
9. end while
10. return θ_t
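A direct NumPy rendering of Algorithm 1 might look as follows; this is a sketch under the default settings above, and the toy quadratic objective is only for demonstration:

```python
import numpy as np

def adam(grad_fn, theta, alpha=0.001, beta1=0.9, beta2=0.999, eps=1e-8, steps=500):
    """Adam: exponentially decayed 1st/2nd moment estimates with bias correction."""
    m = np.zeros_like(theta)   # first-moment vector
    v = np.zeros_like(theta)   # second-moment vector
    for t in range(1, steps + 1):
        g = grad_fn(theta)                      # gradient at time step t
        m = beta1 * m + (1 - beta1) * g         # biased first-moment estimate
        v = beta2 * v + (1 - beta2) * g * g     # biased second moment (elementwise square)
        m_hat = m / (1 - beta1 ** t)            # bias-corrected estimates
        v_hat = v / (1 - beta2 ** t)
        theta = theta - alpha * m_hat / (np.sqrt(v_hat) + eps)
    return theta

# Minimize f(theta) = ||theta - 3||^2; its gradient is 2 * (theta - 3).
theta = adam(lambda th: 2 * (th - 3.0), np.zeros(2), alpha=0.01, steps=2000)
print(theta)  # ≈ [3. 3.]
```

In the proposed model, `grad_fn` would instead return the gradient of the LSTM's mean squared error with respect to its weights.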

The Proposed RRP-DLBFP Model Design
In this study, the RRP-DLBFP technique is designed to predict the return rate of financial blockchain products. The LSTM learns dependencies that range over arbitrarily long time intervals and resolves the vanishing gradient issue by replacing a typical neuron with a more complex LSTM unit. The essential components of the LSTM framework [10] are as follows:
Constant error carousel (CEC): A central element with a recurrent link of unit weight. The recurrent link forms a feedback loop with a time step of 1, and the CEC's activation is an internal state that serves as a memory of prior information.
Input gate (IG): A multiplicative unit that protects the data stored in the CEC from interruption by irrelevant inputs.
Output gate (OG): A multiplicative unit that protects other units from interference by the data stored in the CEC.
The input and output gates control activation of the CEC. During training, the input gate learns when to admit data into the CEC; while the gate is assigned a value of zero, no input enters. Similarly, the output gate learns when to release the data flow from the CEC; while the gate is closed, the activation remains inside the memory cell. This gating allows error signals to flow across long time lags, mitigating the vanishing gradient problem.
The structure of LSTM units also contains a forget gate, which is utilized to address the remaining issues. The fundamental elements of the LSTM unit are given below.
Input: An LSTM unit receives the current input vector, denoted x_n, and the output saved from the last step, signified h_n−1. The weighted inputs are summed and transformed by a tanh activation, yielding z_n.
IG: This gate reads x_n and h_n−1, computes the weighted sum, and applies sigmoid activation. The outcome is multiplied with z_n, and the resulting input flow is given to the memory cell.
Forget gate (FG): When the network processes a new sequence, the forget gate reads x_n and h_n−1 and applies sigmoid activation to the weighted input. The result f_n then scales the cell state from the preceding time step, s_n−1, so that unimportant memory data are forgotten.
Memory cell: The CEC, whose recurrent edge has unit weight. The present cell state s_n is computed by removing unwanted data from the previous time step and adding the relevant data from the current input.
OG: This gate applies sigmoid activation to the weighted sum of x_n and h_n−1 to manage the output flow of the LSTM block.
Output: The output of an LSTM unit, h_n, is estimated by transforming the cell state s_n with tanh and scaling by the output gate. The working rule of the LSTM is defined by the implemented functions, where i, f, and o represent the input, forget, and output gates, correspondingly, and σ denotes the sigmoid function applied at every iteration; the weights are the attributes learned in training.
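The gate interactions above can be tied together in a minimal NumPy sketch of one LSTM step in this section's notation; the weight shapes and initialization are illustrative assumptions:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_n, h_prev, s_prev, W, U, b):
    """One LSTM step: candidate z, gates i/f/o, cell state s_n (the CEC), output h_n."""
    z = np.tanh(W["z"] @ x_n + U["z"] @ h_prev + b["z"])   # transformed weighted input
    i = sigmoid(W["i"] @ x_n + U["i"] @ h_prev + b["i"])   # input gate
    f = sigmoid(W["f"] @ x_n + U["f"] @ h_prev + b["f"])   # forget gate
    o = sigmoid(W["o"] @ x_n + U["o"] @ h_prev + b["o"])   # output gate
    s_n = f * s_prev + i * z        # forget the unimportant, add the relevant
    h_n = o * np.tanh(s_n)          # gated unit output
    return h_n, s_n

rng = np.random.default_rng(1)
d_in, d_hid = 3, 5
W = {k: rng.normal(size=(d_hid, d_in)) * 0.1 for k in "zifo"}
U = {k: rng.normal(size=(d_hid, d_hid)) * 0.1 for k in "zifo"}
b = {k: np.zeros(d_hid) for k in "zifo"}

h, s = np.zeros(d_hid), np.zeros(d_hid)
for t in range(4):                  # run a short sequence through the cell
    h, s = lstm_step(rng.normal(size=d_in), h, s, W, U, b)
print(h.shape, s.shape)  # (5,) (5,)
```

In training, the weight matrices W and U and biases b are exactly the attributes learned by gradient descent.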
For the LSTM neural network, Adam [38] optimizes the target function f(θ) (the mean squared error in the proposed model), aiming to find the parameters that minimize it. Adam works well with sparse gradients and non-stationary targets and accomplishes a form of step-size annealing without any additional effort.
The Adam method dynamically adjusts the parameters by computing the gradient's first-order moment estimate m_t and second-order moment estimate v_t, as illustrated in Equations (6)–(8), where β_1 and β_2 represent the first-order and second-order exponential decay rates, correspondingly, and g_t stands for the gradient of the parameters at time step t from the loss function J_sparse(W, b).
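Equations (6)–(8) referenced here did not survive in the text; assuming the standard Adam formulation written in this section's symbols, the moment estimates are plausibly:

```latex
g_t = \nabla_{W,b}\, J_{\mathrm{sparse}}(W, b) \tag{6}
```
```latex
m_t = \beta_1\, m_{t-1} + (1 - \beta_1)\, g_t \tag{7}
```
```latex
v_t = \beta_2\, v_{t-1} + (1 - \beta_2)\, g_t^2 \tag{8}
```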
Compute the bias-corrected estimates m̂_t = m_t/(1 − β_1^t) and v̂_t = v_t/(1 − β_2^t), and update the parameters as θ_t = θ_t−1 − γ m̂_t/(√v̂_t + ξ), where γ indicates the update step size and ξ is a small constant that prevents the denominator from being 0.
In addition, the learning rate of the LSTM model is adjusted using the OGSO algorithm. GSO is a swarm intelligence optimization technique modeled on the luminescent behavior of glowworms. The technique distributes the glowworm swarm over the solution space and evaluates the fitness function (FF) at each glowworm's position [19]. A stronger glowworm has higher brightness, and the optimal position corresponds to the maximum FF value. Each glowworm possesses a dynamic line of sight, i.e., a decision domain, whose radius depends on the density of neighboring glowworms; the decision radius is constrained when the glowworm travels near similarly strong fluorescence within its decision domain. On reaching the maximum number of iterations, every glowworm settles in a better position. The process of the GSO technique is demonstrated in Figure 4 [43].
It involves five stages, given as follows. The fluorescence-concentration updating technique is written below, where l_i(f) signifies the fluorescence concentration of the ith glowworm at time f, α refers to the fluorescence volatilization coefficient, β represents the fluorescence enhancement factor, and f(x) refers to the FF. The position x_i(f) of glowworm i at time f is updated over its neighbor set N_i(f), and r_d^i(f) stands for the radius of the decision domain of the ith glowworm at time f, where r_s refers to the sensing radius of a glowworm, γ denotes the decision-domain update rate, and n_t shows the neighbor threshold.
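The fluorescence-update, position-update, and decision-radius equations did not survive in the text; in the standard GSO formulation, written with this section's symbols (the step size s is an assumption not named in the text), they read:

```latex
l_i(f) = (1 - \alpha)\, l_i(f-1) + \beta\, f\!\big(x_i(f)\big)
```
```latex
x_i(f+1) = x_i(f) + s \,\frac{x_j(f) - x_i(f)}{\lVert x_j(f) - x_i(f) \rVert}
```
```latex
r_d^i(f+1) = \min\Big\{ r_s,\ \max\big\{ 0,\ r_d^i(f) + \gamma\,\big(n_t - |N_i(f)|\big) \big\} \Big\}
```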
The moving-probability updating technique is provided as follows: p_ij(f) refers to the probability that glowworm i moves toward glowworm j at time f, given by p_ij(f) = (l_j(f) − l_i(f)) / Σ_{k ∈ N_i(f)} (l_k(f) − l_i(f)).
Opposition-based learning (OBL) is an effective optimization technique whose main aim is to enhance the convergence speed of distinct heuristic optimization techniques. An effective execution of OBL evaluates both the opposite and the existing population of the same generation to recognize the optimal candidate solutions of a given problem. OBL techniques are effectively utilized in distinct metaheuristics to enhance convergence speed. The concept of the opposite number used in OBL is explained next.
Let N ∈ [x, y] be a real number. Its opposite number N′ is given as N′ = x + y − N. In the d-dimensional search region, the representation is extended as N′_i = x_i + y_i − N_i, where (N_1, N_2, ..., N_d) defines a point in the d-dimensional exploration area and N_i ∈ [x_i, y_i], i = 1, 2, ..., d. In the OGSO, this OBL technique is utilized in the initialization process of the GSO algorithm and in every iteration through the application of a jump rate.
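The opposition step can be sketched as follows; the bounds, population size, and toy fitness function are illustrative assumptions, standing in for the paper's LSTM-error fitness:

```python
import numpy as np

def opposite(population, lower, upper):
    """Opposition-based learning: mirror each candidate within its bounds,
    N' = lower + upper - N, applied elementwise per dimension."""
    return lower + upper - population

rng = np.random.default_rng(2)
lower = np.array([0.0, 1e-4])            # e.g. bounds on two tunable quantities
upper = np.array([1.0, 1e-1])
pop = lower + (upper - lower) * rng.random((6, 2))   # 6 random glowworms

def fitness(x):
    """Toy fitness: closer to the middle of the box is better."""
    return -np.sum((x - 0.5 * (lower + upper)) ** 2, axis=1)

# Evaluate the population together with its opposite and keep the fitter half.
combined = np.vstack([pop, opposite(pop, lower, upper)])
best = combined[np.argsort(fitness(combined))[-6:]]
print(best.shape)  # (6, 2)
```

Applying the mirror twice returns the original candidates, which is a quick sanity check on the bounds.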

Experimental Validation
Ethereum (ETH) return rates are used as the target, and simulations are run to obtain a more accurate forecast for the time-series data. The proposed model is simulated with the Python tool. Data were collected over the 365 days from January 2018 to December 2018 to measure the return rate of bitcoin, which is verified through model checking and experimental simulations. For this purpose, the dataset is split into two parts, training data and testing data, with the training data comprising 80% of the total.
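The chronological 80/20 split, together with the sliding windows typically fed to an LSTM, can be sketched as follows; the synthetic series and the window length w are illustrative assumptions:

```python
import numpy as np

# Toy daily return series standing in for the collected 365-day dataset.
returns = np.random.default_rng(3).normal(0, 0.02, size=365)

split = int(len(returns) * 0.8)          # 80% train / 20% test, kept in time order
train, test = returns[:split], returns[split:]
print(len(train), len(test))  # 292 73

# Sliding windows: predict the next day's return from the previous w days.
w = 10
X = np.stack([train[i:i + w] for i in range(len(train) - w)])
y = train[w:]
print(X.shape, y.shape)  # (282, 10) (282,)
```

Splitting in time order (rather than shuffling) avoids leaking future information into the training set.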
A detailed analysis of the predictive performance of the RRP-DLBFP technique takes place in this section. Table 1 reports the results of the RRP-DLBFP technique in terms of MSE and MAPE on the training and testing sets.
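The two reported measures can be computed as follows; this is a standard sketch, and the MAPE definition assumes no zero targets:

```python
import numpy as np

def mse(y_true, y_pred):
    """Mean squared error."""
    return np.mean((y_true - y_pred) ** 2)

def mape(y_true, y_pred):
    """Mean absolute percentage error (assumes no zero targets)."""
    return np.mean(np.abs((y_true - y_pred) / y_true)) * 100

y_true = np.array([1.0, 2.0, 4.0])
y_pred = np.array([1.1, 1.8, 4.4])
print(round(mse(y_true, y_pred), 4))   # 0.07
print(round(mape(y_true, y_pred), 2))  # 10.0
```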

Conclusions
This study has developed an RRP-DLBFP technique to predict the return rate of blockchain financial products. The proposed RRP-DLBFP technique involves the design of the LSTM model for the predictive analysis of return rate. In addition, Adam optimizer and the OGSO algorithm are applied to adjust the hyperparameters of the LSTM model optimally, consequently increasing the predictive performance. The ETH return rate is preferred to ensure the supreme performance of the RRP-DLBFP technique, and the simulation results are investigated in terms of different measures. The simulation outcomes highlighted the supremacy of the RRP-DLBFP technique over the current state-of-the-art techniques in terms of diverse evaluation parameters. In the future, metaheuristic-based hyperparameter tuning models can be devised to boost the predictive outcome further.
