Applied Sciences
  • Article
  • Open Access

28 September 2023

Research on a Time Series Data Prediction Model Based on Causal Feature Weight Adjustment

1 Institute for Quantum Information & State Key Laboratory of High Performance Computing, College of Computer Science and Technology, National University of Defense Technology, Changsha 410073, China
2 Institute of Software, School of Computer Science, Peking University, Beijing 100871, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
This article belongs to the Section Computing and Artificial Intelligence

Abstract

As the Information Age generates ever-growing amounts of data, research on data prediction has attracted wide attention. Time series data, sequences of data points collected over intervals of time, are very common in many areas, such as weather forecasting and stock markets. Thus, research on time series data prediction is of great significance. Traditional prediction methods are usually based on correlations between data points, but correlations do not always reflect the true relationships within the data. In this paper, we propose the LiNGAM Weight Adjust–LSTM (LWA-LSTM) algorithm, which combines a causality discovery algorithm, feature weight adjustment, and a deep neural network for time series data prediction. Several stocks in the Chinese stock market were selected, and the algorithm was used to predict their prices. Comparing the prediction performance of the model with that of the LSTM model alone, the results show that the LWA-LSTM model finds stable relationships in the data better and yields smaller prediction errors.

1. Introduction

Time series data, also referred to as time-stamped data, are a sequence of data points indexed in time order. They are widely present in daily life, appearing in weather forecasting, stock markets, and other fields. The accurate prediction and analysis of time series data have become a hot research topic in artificial intelligence. Stock prices are typical time series data, and many scholars have studied stock price forecasting because of its potential for high returns. However, since stock data are often unstable and noisy and are strongly affected by international relations and national policies, predicting them has always been a challenging problem. We therefore take stock price data as an example for studying time series forecasting and propose a new method to improve prediction accuracy.
Thanks to extensive research, many effective methods have been proposed. Before the advent of machine learning, statistical approaches were widely tried and tested. The Exponential Smoothing Model [1] applies an exponential window function to smooth time series data before analyzing them. ARMA [2] is another popular technique for stock market analysis; it combines the Auto-Regressive (AR) model, which captures the momentum and mean-reversion effects observed in trading markets, with the Moving Average (MA) model, which captures shock effects. ARIMA [3] is a natural extension of ARMA that reduces a non-stationary series to a stationary one through differencing. Machine learning methods include random forests [4], XGBoost [5], and support vector machines [6]. As neural network models have shown great potential, more and more deep learning models have been proposed for stock price prediction, based on RNNs [7], CNNs [8], LSTM [9], GRU [10], and so on. We introduce some of them in the next section.
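For concreteness, a minimal sketch of one classical baseline from this family is shown below, assuming the statsmodels library; the file name "prices.csv" and the (2, 1, 2) order are hypothetical illustrations, not part of our method.

```python
# Minimal ARIMA baseline sketch (assumes statsmodels is installed).
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical CSV of daily closing prices indexed by date.
closing_prices = pd.read_csv("prices.csv", index_col=0, parse_dates=True)["close"]

# order=(p, d, q): p AR lags, d differencing steps (non-stationary -> stationary), q MA lags.
fitted = ARIMA(closing_prices, order=(2, 1, 2)).fit()
print(fitted.forecast(steps=5))  # forecast the next five trading days
```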
We found that when using machine learning methods for time series prediction, the choice of features greatly affects prediction performance. Existing methods can often only find correlations between variables. Correlation, however, differs from causality, which is a more essential and therefore more stable and reliable relationship in time series data. It is thus of great significance to explore the causal relationships within stock price data for forecasting.
To achieve this, we devised an effective novel method. First, we selected a set of stock factors as features and used a causal discovery algorithm to model the causal relationship between these features and the objective. Next, we adjusted the weights of the features that had a causal relationship with the objective. Finally, we used the weighted feature set as input to predict the stock price with the LSTM neural network.
The contributions of this paper are as follows: (1) To the best of our knowledge, we apply a causal discovery algorithm based on a structural causal model to stock price prediction for the first time. (2) Based on the LiNGAM and LSTM algorithms, we propose LWA-LSTM, which is capable of both discovering causality and predicting time series data. (3) We find that many features have a causal relationship, rather than a mere correlation, with the stock price to be predicted. (4) LWA-LSTM achieves excellent performance when tested on real stock data. Our work validates the potential of adding causality discovery and causal weight adjustment to produce more reliable and accurate predictions for such tasks.
This paper is organized as follows. Section 2 introduces the relevant work. Section 3 provides a detailed description of the LWA-LSTM method we devised. In Section 4, we show the experiment settings and results. Finally, in Section 5, we summarize our work and look forward to the future.

3. LWA-LSTM Method

We propose a stock prediction method called LWA-LSTM based on the structural causal model. This approach begins by using a causal discovery algorithm to identify features that have a causal relationship with the predicted values within the feature set. Subsequently, these identified features are subjected to feature weight adjustment to enhance their importance within the entire feature set. Finally, the adjusted feature set is fed into a neural network for prediction. The flowchart of this method is shown in Figure 1.
Figure 1. LWA-LSTM model flowchart.

3.1. Multifactorial Forecasting

The daily real-time trading in the stock market generates a large amount of data for analysis. Various types of trading data that reflect changes in stock prices are suitable for stock price prediction, making them the focus of research. Traditional stock prediction methods often rely on single features such as opening and closing prices for forecasts. However, due to the limited number of features, these models struggle to capture the patterns of stock price fluctuations, resulting in limited predictive accuracy. A multifactor model utilizes multiple relevant features, which can improve the accuracy and robustness of stock prediction while enhancing the model’s interpretability. Therefore, we have chosen to use a multifactor model for stock forecasting.
In addition to common factors such as the highest price, lowest price, and trading volume, the factors also include some manually constructed composite technical indicators. They can be divided into the following categories: scale-related, valuation-related, trading-related, and price-related features. When selecting features, several considerations apply: first, the selected features should be representative and able to reflect the stock's trading situation and changing trends; second, the selected features should have a strong causal relationship with the predicted value, which for stock price prediction offers greater stability and interpretability. Based on these requirements, we selected 33 initial features for the model, including price-related features, trading-related features, and others, as shown in Table 1.
Table 1. Complete set of multifactorial features.

3.2. LiNGAM Algorithm and Causal Weight Adjustment

Currently, the mainstream causal discovery algorithms can be classified into three main types: constraint-based methods, structure-based causal model methods, and hybrid methods. Constraint-based methods remove redundant edges in the causal graph by conducting independence tests on variables. Structure-based causal model methods start from the causal mechanisms generated by data and construct functions to determine the causal relationships between variables, thereby identifying the direction of causality. Hybrid methods combine both approaches, aiming to achieve both the high-dimensional scalability of constraint-based methods and the strong causal discovery capability of structure-based causal model methods.
Constraint-based methods often have drawbacks such as misidentification and high time complexity. Additionally, these methods are unable to learn all edges in a causal network graph; they can only obtain a directed acyclic graph comprising a set of Markov equivalent classes. On the other hand, structure-based causal modeling methods overcome these limitations by studying the distribution properties of data to discover causal relationships. However, research on hybrid approaches is still in its early stages and faces challenges like insufficient theoretical analysis. Therefore, we have chosen to study the LiNGAM algorithm, which is based on structure-based causal modeling.
LiNGAM, short for Linear Non-Gaussian Acyclic Model, was proposed by Shimizu et al. It is a variation of structural equation models and Bayesian networks. The model requires that the causal structure of the variables satisfies three conditions: first, the directed graph formed by all the variables must be acyclic; second, the model must be linear, with the target variable being a linear sum of its corresponding cause variables; third, the noise variables follow non-Gaussian distributions with nonzero variances and are mutually independent.
The variables in the LiNGAM model are generated in a causal order, so once the variables are arranged in that order, a variable cannot be a cause of any variable that precedes it. In practice, the observed variables are arranged in an arbitrary order rather than the causal order. Writing the variables as $\{v_1, v_2, \dots, v_n\}$ and denoting the causal order by $k(i)$, where $k(i) \in [1, n]$ is the position of the $i$-th observed variable in the causal order, the generation process of each variable can be described as:

$$v_i = \sum_{k(j) < k(i)} b_{ij} v_j + n_i, \qquad i, j \in [1, n]$$

In the formula, $n_i$ denotes the noise term, which follows a non-Gaussian distribution; the noise terms are pairwise independent. If $b_{ij}$ is nonzero, there is an edge pointing from $v_j$ to $v_i$.
Under the linear non-Gaussian acyclic conditions described above, the LiNGAM model can be written in matrix form:

$$V = BV + n$$

Here, $V$ is a $p$-dimensional random vector, $B$ is a $p \times p$ adjacency matrix, and $n$ is a $p$-dimensional non-Gaussian random noise vector. Under the acyclicity assumption, there exists a permutation matrix $P \in \mathbb{R}^{p \times p}$ such that $B' = PBP^{T}$ is strictly lower triangular with all diagonal elements equal to zero. The solving method is based on the Independent Component Analysis (ICA) algorithm. The algorithm first estimates, from the observed data, the unmixing matrix $W$ in $Y = WV$ via ICA, where $Y$ is the vector of independent components; $W$ can be derived from $(I - B)$ up to row permutation and scaling. Then, exploiting the fact that $B$ can be permuted to a strictly lower triangular matrix, the causal order can be recovered from $W$ using row and column permutations. Finally, a pruning algorithm yields the final causal graph. The basic flow is shown in Figure 2.
Figure 2. LiNGAM algorithm flowchart.
In our algorithm, we first take the daily data of the features and the next day's closing price as variables. We apply the LiNGAM algorithm to determine the causal ordering of the variables, obtain the causal adjacency matrix, and draw a causal graph. Then, we identify the factors in the causal graph that have a causal relationship with the next day's closing price. We select these feature factors and duplicate their columns in the original feature set, which increases the weight of the causal feature factors within the feature set, as in the sketch below.
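A minimal sketch of this weight-adjustment step follows; the function name, suffix, and DataFrame layout are our own illustration rather than code from the paper.

```python
import pandas as pd

def adjust_causal_weights(features: pd.DataFrame, causal_cols: list) -> pd.DataFrame:
    """Duplicate the columns identified as causes of the target,
    doubling their effective weight in the input feature set."""
    adjusted = features.copy()
    for col in causal_cols:
        adjusted[col + "_dup"] = adjusted[col]  # duplicated causal column
    return adjusted

# e.g., for Qingdao Bank only the lowest price was found to cause X1:
# adjusted = adjust_causal_weights(daily_factors, ["low"])
```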

3.3. LSTM Algorithm

Due to the good performance of LSTM in time series data prediction, our approach selects the LSTM algorithm for stock price prediction.
The specific structure of the LSTM unit is as follows:

$$f_t = \mathrm{sigm}(W_{fx} x_t + W_{fh} h_{t-1} + b_f)$$
$$i_t = \mathrm{sigm}(W_{ix} x_t + W_{ih} h_{t-1} + b_i)$$
$$g_t = \tanh(W_{gx} x_t + W_{gh} h_{t-1} + b_g)$$
$$c_t = f_t \times c_{t-1} + i_t \times g_t$$
$$o_t = \mathrm{sigm}(W_{ox} x_t + W_{oh} h_{t-1} + b_o)$$
$$h_t = o_t \times \tanh(c_t)$$

Here, $f_t$ is the forget gate, which determines how much information from the previous step is retained. $i_t$ is the input gate, which determines how much of the input information is used. $g_t$ is the candidate information, ready to update the new cell state $c_t$. The final output $h_t$ is determined by the current cell state $c_t$ and the output gate $o_t$, and is passed to the LSTM unit at the next time step. By building input, forget, and output gates, LSTM realizes the long-term transmission of information, ensuring that earlier information can always participate in network training.
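To make the gate equations concrete, the following is a plain NumPy sketch of a single LSTM step; the weight layout is an assumption for illustration and is not the Keras implementation used in the experiments.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, b):
    """One step of the LSTM recurrence above. W is a dict of weight
    matrices keyed by gate and input ('fx', 'fh', ...), b a dict of biases."""
    f_t = sigmoid(W["fx"] @ x_t + W["fh"] @ h_prev + b["f"])  # forget gate
    i_t = sigmoid(W["ix"] @ x_t + W["ih"] @ h_prev + b["i"])  # input gate
    g_t = np.tanh(W["gx"] @ x_t + W["gh"] @ h_prev + b["g"])  # candidate state
    c_t = f_t * c_prev + i_t * g_t                            # new cell state
    o_t = sigmoid(W["ox"] @ x_t + W["oh"] @ h_prev + b["o"])  # output gate
    h_t = o_t * np.tanh(c_t)                                  # hidden output
    return h_t, c_t
```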

4. Experiment

In order to verify the effectiveness of the algorithm, we selected actual trading data from the Chinese stock market for training and for predicting future stock prices. The experimental results show that the algorithm reduces the prediction error and improves the accuracy of stock prediction.

4.1. Data Sources and Preprocessing

All stock data come from the Tushare data interface package in Python. We selected 33 factors, including the opening price, trading volume, trading value, and adjustment factors, as features. The full sample period runs from 20 May 2019 to 24 May 2022, and the test set consists of the last 20% of the total sample. In total, there are 733 days of data, of which 137 days form the test set.
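A sketch of fetching the raw daily bars through Tushare's pro API is shown below; "YOUR_TOKEN" is a placeholder, and the remaining factor columns would be assembled from further Tushare endpoints.

```python
# Sketch of the data download step via Tushare's pro API.
import tushare as ts

pro = ts.pro_api("YOUR_TOKEN")  # placeholder token
df = pro.daily(ts_code="000001.SZ", start_date="20190520", end_date="20220524")
df = df.sort_values("trade_date").reset_index(drop=True)  # oldest first
```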
Ping An Bank (000001.SZ), formerly known as Shenzhen Development Bank, is the first nationwide joint-stock commercial bank in mainland China to be publicly listed. We believe that Ping An Bank is well representative of the Shenzhen stock market, so we chose it for our experiment; some of its data are presented in Table 2. Because there are various types of stocks in the market, and in order to ensure that prediction errors are not caused by differences in stock type, we used four banking stocks listed on the Shenzhen Stock Exchange as data sources: Ping An Bank (000001.SZ), Jiangyin Bank (002807.SZ), Zhengzhou Bank (002936.SZ), and Qingdao Bank (002948.SZ).
Table 2. Ping An Bank partial factor data graph.
Since variables in the raw data may have different scales, we first normalize the training data with the fit_transform method and then apply the transform method to the test set, scaling each feature to zero mean and unit variance.
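This fit_transform/transform usage matches scikit-learn's StandardScaler; a minimal sketch, assuming that library, is:

```python
from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()                  # zero mean, unit variance per feature
X_train = scaler.fit_transform(train_df)   # fit scaling statistics on training data only
X_test = scaler.transform(test_df)         # reuse the same statistics on the test set
```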

4.2. Causality Discovery

We used the LiNGAM algorithm from the causal-learn library in Python to discover causal relationships between variables. The display threshold was set to 0.9, meaning that only causal edges with weights greater than 0.9 are shown in the causal graph.
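A minimal sketch of this discovery step is given below, assuming causal-learn's lingam module (which mirrors the standalone lingam package API); variable names are illustrative.

```python
# X: (n_days, n_vars) array whose column 0 is X1, the next-day closing price.
import numpy as np
from causallearn.search.FCMBased import lingam

model = lingam.ICALiNGAM()
model.fit(X)

B = model.adjacency_matrix_  # B[i, j] != 0 means an edge v_j -> v_i
causes_of_target = np.where(np.abs(B[0, :]) > 0.9)[0]  # edges into X1 above the 0.9 threshold
print(causes_of_target)      # indices of features pointing to X1
```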
Because of the large number of feature values and the complexity of the connections among the feature factors, the causal discovery result for the Bank of Qingdao (002948.SZ) is taken as an illustrative example in Figure 3.
Figure 3. Thirty-three-factor causal discovery diagram of the Bank of Qingdao.
Among the variables, X1 represents the closing price of the next day. From the chart, it can be observed that the lowest price (low) has a causal relationship with the closing price. Examining the causal relationships for the remaining stocks, the characteristic factors that have a causal relationship with the closing price are shown in Table 3: for Ping An Bank, the lowest price (low) and the pre-adjustment closing price (close_qfq) have a causal relationship with X1; for Jiangyin Bank, the lowest price (low) and the trading volume (amount); for Zhengzhou Bank, the lowest price (low), highest price (high), and opening price (open); for Qingdao Bank, the lowest price (low).
Table 3. Characteristic values in the four stocks that have a causal relationship with the closing price X1.

4.3. Model Building and Parameter Setting

The experimental model in this paper is built and run under the TensorFlow framework with Python 3.10, using the Keras Sequential model and stacking two LSTM layers and one Dense layer to complete the stock prediction. As for the model parameters, the first LSTM layer has 80 neurons and the second has 100. The parameters are optimized over 200 epochs using the Adam optimizer with a learning rate of 0.001 and a batch size of 128. The experiment uses a 10-day sliding time window with a 1-day forecast step; that is, we use the stock feature data of the previous 10 days to predict the closing price on the eleventh day. A sketch of this setup is given below.
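The following sketch assumes TensorFlow/Keras and follows the hyperparameters above; n_features stands for the width of the weight-adjusted feature set, and its value here is an assumption for illustration.

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.layers import LSTM, Dense

def make_windows(X, y, window=10):
    """Stack 10-day feature windows to predict the 11th-day closing price."""
    xs, ys = [], []
    for t in range(window, len(X)):
        xs.append(X[t - window:t])
        ys.append(y[t])
    return np.array(xs), np.array(ys)

n_features = 35  # e.g., 33 factors plus duplicated causal columns (assumption)
model = Sequential([
    LSTM(80, return_sequences=True, input_shape=(10, n_features)),  # first LSTM layer
    LSTM(100),                                                      # second LSTM layer
    Dense(1),                                                       # next-day close
])
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss="mse")
# model.fit(X_train_w, y_train_w, epochs=200, batch_size=128)
```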

4.4. Evaluation Indicators

Since our task is to predict stock prices, a regression problem, several evaluation indicators can be used to assess how well the predictions work. In this paper, three evaluation indicators, mean squared error (MSE), root mean square error (RMSE), and mean absolute error (MAE), are used to evaluate the agreement between the predicted and true values and to quantify the predictive performance of the model. Their formulas are as follows:
$$\mathrm{MSE} = \frac{1}{n}\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2$$

$$\mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n} \left( y_i - \hat{y}_i \right)^2}$$

$$\mathrm{MAE} = \frac{1}{n}\sum_{i=1}^{n} \left| y_i - \hat{y}_i \right|$$

where $n$ is the number of samples, $y_i$ is the true value, and $\hat{y}_i$ is the predicted value. All three indicators measure the deviation between the true and predicted values: a smaller value indicates that the predictions are closer to the truth and hence that the model selection and fitting are better.
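For reference, the three indicators can be computed with a few lines of NumPy; this is a straightforward transcription of the formulas above.

```python
import numpy as np

def evaluate(y_true: np.ndarray, y_pred: np.ndarray) -> dict:
    """MSE, RMSE, and MAE between true and predicted prices."""
    err = y_true - y_pred
    mse = float(np.mean(err ** 2))
    return {"MSE": mse, "RMSE": float(np.sqrt(mse)), "MAE": float(np.mean(np.abs(err)))}
```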

4.5. Experimental Results and Analysis

Figure 4 shows the stock price prediction obtained by our algorithm, taking 000001.SZ as an example. Table 4, Table 5 and Table 6 present the comparative results between our approach, the original LSTM model, and the LSTM model after eliminating the causal feature factors. The results indicate that our algorithm outperforms the alternatives on all evaluation metrics for all four stocks. In particular, Ping An Bank's mean squared error decreased by 39.24%, an excellent result.
Figure 4. Stock price prediction chart of 000001.SZ using the LWA-LSTM algorithm.
Table 4. MSE of predicted values under different causal weights.
Table 5. RMSE of predicted values under different causal weights.
Table 6. MAE of predicted values under different causal weights.
The causal feature factors selected for the experiment, represented by variables pointing to the closing price in the causal graph, are all cause variables of the prediction target, the closing price. Adding these feature values as additional columns to the complete feature set increases the initial weights of the causally related features when they are input into the neural network. After removing the features with causal relationships, the predictive performance declined for three of the four stocks, demonstrating the importance of these variables in predicting the closing price. After applying the new algorithm, the prediction errors of all four stocks decreased, with the smallest errors observed for those three stocks. For stock 002948.SZ, the MSE under our algorithm even dropped by 39%, further validating the effectiveness of our algorithm.
In previous research, it was widely recognized that features such as the lowest price and highest price have a positive impact on predictive effectiveness. However, there has not been any study analyzing whether this effect is based on correlation or causation. Our research demonstrates that there is a causal relationship between these feature values and the closing price. Additionally, we also provide evidence that boosting the weights of these causal features effectively enhances predictive effectiveness.

5. Summary and Outlook

In this paper, the features found with the LiNGAM algorithm are used for stock price prediction after weight adjustment. In order to make causal features play a more important role in stock price prediction, a new method is proposed in this paper, which combines the LiNGAM algorithm, feature weight adjustment, and LSTM algorithm. In this algorithm, we only select the features that point to the predicted objective in the causal graph to adjust their weights, which increases the interpretability of the method. We conducted experiments on real stock data, and the results show that our method can effectively improve prediction accuracy.
Mining causal relationships in data to make predictions and adjusting factor weights accordingly is a very promising research direction, though it still faces various challenges. Building on this paper, future research directions include: (1) combining causality discovery with more advanced models such as GRU and Transformers; (2) mining the adjustment rules of feature weights to enhance the method's generality across different stocks.

Author Contributions

Conceptualization, D.H. and Q.Z.; methodology, D.H.; software, Q.Z.; validation, D.H., Q.Z. and W.X.; formal analysis, W.X.; investigation, Z.W.; writing—original draft preparation, Q.Z.; writing—review and editing, Z.W.; visualization, M.H.; supervision, W.X. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Gardner, E.S., Jr. Exponential smoothing: The state of the art. J. Forecast. 1985, 4, 1–28.
  2. McLeod, A.I.; Li, W.K. Diagnostic checking ARMA time series models using squared-residual autocorrelations. J. Time Ser. Anal. 1983, 4, 269–273.
  3. Ariyo, A.A.; Adewumi, A.O.; Ayo, C.K. Stock price prediction using the ARIMA model. In Proceedings of the UKSim-AMSS 16th International Conference on Computer Modelling and Simulation, Cambridge, UK, 26–28 March 2014; pp. 106–112.
  4. Breiman, L. Random forests. Mach. Learn. 2001, 45, 5–32.
  5. Chen, T.; Guestrin, C. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, San Francisco, CA, USA, 13–17 August 2016; pp. 785–794.
  6. Joachims, T. Making Large-Scale SVM Learning Practical; Technical Report; Cornell University: Ithaca, NY, USA, 1998.
  7. Sherstinsky, A. Fundamentals of recurrent neural network (RNN) and long short-term memory (LSTM) network. Phys. D Nonlinear Phenom. 2020, 404, 132306.
  8. Alzubaidi, L.; Zhang, J.; Humaidi, A.J.; Al-Dujaili, A.; Duan, Y.; Al-Shamma, O.; Santamaría, J.; Fadhel, M.A.; Al-Amidie, M.; Farhan, L. Review of deep learning: Concepts, CNN architectures, challenges, applications, future directions. J. Big Data 2021, 8, 53.
  9. Yu, Y.; Si, X.; Hu, C.; Zhang, J. A review of recurrent neural networks: LSTM cells and network architectures. Neural Comput. 2019, 31, 1235–1270.
  10. Yamak, P.T.; Yujian, L.; Gadosey, P.K. A comparison between ARIMA, LSTM, and GRU for time series forecasting. In Proceedings of the 2nd International Conference on Algorithms, Computing and Artificial Intelligence, Sanya, China, 20–22 December 2019; pp. 49–55.
  11. Fama, E.F. Efficient capital markets: A review of theory and empirical work. J. Financ. 1970, 25, 383–417.
  12. Welch, I.; Goyal, A. A comprehensive look at the empirical performance of equity premium prediction. Rev. Financ. Stud. 2008, 21, 1455–1508.
  13. Goyal, A.; Welch, I.; Zafirov, A. A Comprehensive Look at the Empirical Performance of Equity Premium Prediction II; Swiss Finance Institute: Zürich, Switzerland, 2021.
  14. Li, C.; Yang, B.; Li, M. Forecasting analysis of Shanghai stock index based on ARIMA model. MATEC Web Conf. 2017, 100, 02029.
  15. Kim, K.-J. Financial time series forecasting using support vector machines. Neurocomputing 2003, 55, 307–319.
  16. Qiu, M.; Song, Y.; Akagi, F. Application of artificial neural network for the prediction of stock market returns: The case of the Japanese stock market. Chaos Solitons Fractals 2016, 85, 1–7.
  17. Catalin, S.; Wieslaw, P.; Ruxandra, S.; Adrian, S. Deep architectures for long-term stock price prediction with a heuristic-based strategy for trading simulations. PLoS ONE 2019, 14, e0223593.
  18. Selvin, S.; Vinayakumar, R.; Gopalakrishnan, E.A.; Menon, V.K.; Soman, K.P. Stock price prediction using LSTM, RNN and CNN-sliding window model. In Proceedings of the International Conference on Advances in Computing, Communications and Informatics (ICACCI), Udupi, India, 13–16 September 2017; pp. 1643–1647.
  19. Chen, Y.; Lin, W.; Wang, J.Z. A dual-attention-based stock price trend prediction model with dual features. IEEE Access 2019, 7, 148047–148058.
  20. Wen, Q.; Zhou, T.; Zhang, C.; Chen, W.; Ma, Z.; Yan, J.; Sun, L. Transformers in time series: A survey. arXiv 2022, arXiv:2202.07125.
  21. Liu, J.; Lin, H.; Liu, X.; Xu, B.; Ren, Y.; Diao, Y.; Yang, L. Transformer-based capsule network for stock movement prediction. In Proceedings of the 1st Workshop on Financial Technology and Natural Language Processing, Macao, China, 12 August 2019; pp. 66–73.
  22. Lin, Y.; Koprinska, I.; Rana, M. SSDNet: State space decomposition neural network for time series forecasting. In Proceedings of the IEEE International Conference on Data Mining (ICDM), Auckland, New Zealand, 7–10 December 2021; pp. 370–378.
  23. Gupta, U.; Bhattacharjee, V.; Bishnu, P.S. StockNet—GRU based stock index prediction. Expert Syst. Appl. 2022, 207, 117986.
  24. Hossain, M.A.; Karim, R.; Thulasiram, R.; Bruce, N.D.B.; Wang, Y. Hybrid deep learning model for stock price prediction. In Proceedings of the IEEE Symposium Series on Computational Intelligence (SSCI), Bangalore, India, 18–21 November 2018; pp. 1837–1844.
  25. Lien Minh, D.; Sadeghi-Niaraki, A.; Huy, H.D.; Min, K.; Moon, H. Deep learning approach for short-term stock trends prediction based on two-stream gated recurrent unit network. IEEE Access 2018, 6, 55392–55404.
  26. Shah, S.Y.; Patel, D.; Vu, L.; Dang, X.H.; Chen, B.; Kirchner, P.; Samulowitz, H.; Wood, D.; Bramble, G.; Gifford, W.M.; et al. AutoAI-TS: AutoAI for time series forecasting. In Proceedings of the International Conference on Management of Data, Shaanxi, China, 20–25 June 2021; pp. 2584–2596.
  27. Granger, C.W.J. Investigating causal relations by econometric models and cross-spectral methods. Econom. J. Econom. Soc. 1969, 37, 424–438.
  28. Hiemstra, C.; Jones, J.D. Testing for linear and nonlinear Granger causality in the stock price-volume relation. J. Financ. 1994, 49, 1639–1664.
  29. Param, S. Testing for linear and nonlinear Granger causality in the stock price-volume relation: Korean evidence. Q. Rev. Econ. Financ. 1999, 39, 598–612.
  30. Zhuo, Q.; Michael, M.; Wing-Keung, W. Linear and nonlinear causality between changes in consumption and consumer attitudes. Econ. Lett. 2008, 102, 161–164.
  31. Hu, Y.; Liu, K.; Zhang, X.; Xie, K.; Chen, W.; Zeng, Y.; Liu, M. Concept drift mining of portfolio selection factors in stock market. Electron. Commer. Res. Appl. 2015, 14, 444–455.
  32. Zhang, X.; Hu, Y.; Xie, K.; Wang, S.; Ngai, E.; Liu, M. A causal feature selection algorithm for stock prediction modeling. Neurocomputing 2014, 142, 48–59.