Article

A Neutrosophic Forecasting Model for Time Series Based on First-Order State and Information Entropy of High-Order Fluctuation

Hongjun Guan, Zongli Dai, Shuang Guan and Aiwu Zhao
1 School of Management Science and Engineering, Shandong University of Finance and Economics, Jinan 250014, China
2 Courant Institute of Mathematical Sciences, New York University, New York, NY 10012, USA
3 School of Management, Jiangsu University, Zhenjiang 212013, China
* Author to whom correspondence should be addressed.
Entropy 2019, 21(5), 455; https://doi.org/10.3390/e21050455
Submission received: 31 March 2019 / Revised: 26 April 2019 / Accepted: 26 April 2019 / Published: 1 May 2019
(This article belongs to the Special Issue Entropy Application for Forecasting)

Abstract:
In time series forecasting, information presentation directly affects prediction efficiency. Most existing time series forecasting models follow logical rules derived from the relationships between neighboring states, without considering the inconsistency of fluctuations over a related period. In this paper, we propose a new perspective on the prediction problem, in which inconsistency is quantified and regarded as a key characteristic of prediction rules. First, a time series is converted into a fluctuation time series by comparing each data point with the corresponding previous one. Then, the upward trend of each fluctuation value is mapped to the truth-membership of a neutrosophic set, while the downward trend is mapped to the falsity-membership. Information entropy of the high-order fluctuation time series is introduced to describe the inconsistency of historical fluctuations and is mapped to the indeterminacy-membership of the neutrosophic set. Finally, an existing similarity measurement method for neutrosophic sets is introduced to find similar states during the forecasting stage, and a weighted arithmetic averaging (WAA) aggregation operator is used to obtain the forecasting result according to the corresponding similarities. Compared to existing forecasting models, the neutrosophic forecasting model based on information entropy (NFM-IE) can represent both fluctuation trend and fluctuation consistency information. In order to test its performance, we used the proposed model to forecast several real-world time series, such as the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX), the Shanghai Stock Exchange Composite Index (SHSECI), and the Hang Seng Index (HSI). The experimental results show that the proposed model predicts stably across different datasets. Moreover, comparing the prediction error with that of other approaches shows that the model has outstanding prediction accuracy and universality.

1. Introduction

Financial markets are a complex system whose fluctuations are the combined result of many variables. These variables cause frequent market fluctuations with trends exhibiting degrees of ambiguity, inconsistency, and uncertainty. This pattern implies the importance of time series representation, and thus an urgent demand arises for analyzing time series data in more detail. To some extent, effective time series representations can be understood from two aspects: traditional time series prediction approaches [1,2,3,4] and fuzzy time series prediction approaches [5,6]. The former emphasizes the use of a crisp set to represent the time series, while the latter uses the fuzzy set.
Generally speaking, data are the source of any prediction process and the input to any prediction system. The original data, however, are full of noise, incompleteness, and inconsistency, which limits the effectiveness of traditional prediction methods. Therefore, Song and Chissom [7,8,9] developed a fuzzy time series model to predict real-world scenarios such as college enrollments. The fuzzification method effectively eliminates part of the noise inside the data, and the prediction performance of the time series is strengthened. Subsequently, with advancing research, the non-determinacy of information has become the main obstacle affecting prediction accuracy. Some studies proposed novel information representation approaches, such as the type 2 fuzzy time series [5], rough set fuzzy time series [10], and intuitionistic fuzzy time series [11].
Although the above work has achieved considerable results for specific problems, certain shortcomings remain that pose a barrier to the accuracy and applicability of predictions. More specifically, complex scenarios and variables in actual situations make it unrealistic to define and classify explicitly the membership and non-membership of elements.
The neutrosophic sets (NSs) method, first proposed by Smarandache [12], is suitable for the expression of incomplete, indeterminate, and inconsistent information. A neutrosophic set consists of truth-, indeterminacy-, and falsity-memberships. From the perspective of information representation, scholars have proposed two specific concepts based on the neutrosophic set: single-valued NSs [13] and interval-valued NSs [14]. These concepts are intended to seek a more detailed information representation, thereby enabling NSs to quantify uncertain information more accurately. To deal with the above problem, entropy provides an important representation of the degree of complexity and inconsistency. In a nutshell, entropy is more focused on the representation and measurement of inconsistency, while NSs tend to describe uncertainty. Zadeh [15] first proposed the entropy of fuzzy events, which measures the uncertainty of fuzzy events by probability. Subsequently, De Luca and Termini [16] proposed the concept of entropy for fuzzy sets (FSs) based on Shannon’s information entropy theory and further proposed a method of fuzzy entropy measurement. Since information entropy is an effective measure of the degree of systematic order, it has been gaining popularity in different applications, such as climate variability [17], uncertainty analysis [18,19], financial analysis [20], image encryption [21], and detection [22]. Specifically, He et al. [23] proposed a collapse hazard forecasting method and applied the information entropy measurement to reduce the influence of collapse activity indices. Bariviera [24] proposed a prediction method based on the maximum entropy principle to predict the market and further monitor market anomalies. In Liang’s research [25], information entropy was introduced to analyze trends for capacity assessment of sustainable hydropower development. Zhang et al. [26] proposed a signal recognition theory and algorithm based on information entropy and integrated learning, which applied various types of information entropy including energy entropy and Renyi entropy.
In order to describe the indeterminacy of fluctuations and further measure the inconsistency and uncertainty of dynamic fluctuation trends, we propose a neutrosophic forecasting model based on NSs and the information entropy of high-order fuzzy fluctuation time series (NFM-IE). The biggest difference compared to the original models is that the NFM-IE represents both fluctuation trend information and fluctuation consistency information. First of all, a time series is converted into a fluctuation time series by comparing each data point with the corresponding previous one. Then, the upward trend of each fluctuation value is mapped to the truth-membership of a neutrosophic set and the downward trend to the falsity-membership. Information entropy of the high-order fluctuation time series is introduced to describe the inconsistency of historical fluctuations and is mapped to the indeterminacy-membership of the neutrosophic set. Finally, an existing similarity measurement method for neutrosophic sets is introduced to find similar states during the forecasting stage, and the weighted arithmetic averaging (WAA) aggregation operator is employed to obtain the forecasting result according to the corresponding similarities. The main contributions of the proposed model are as follows: (1) Introducing information entropy to quantify the inconsistency of fluctuations in related periods and mapping it to the indeterminacy-membership of neutrosophic sets allows NFM-IE to extend traditional forecasting models to a certain extent. (2) Employing a similarity measurement method and an aggregation operator allows NFM-IE to integrate more possible rules. In order to test its performance, we used the proposed model to forecast several real-world time series, such as the Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX), the Shanghai Stock Exchange Composite Index (SHSECI), and the Hang Seng Index (HSI). The experimental results show that the model has a stable prediction ability for different datasets. Simultaneously, comparing the prediction error with that from other approaches proves that the model has outstanding prediction accuracy and universality.
The rest of this paper is organized as follows: Section 2 introduces the basic concepts of fluctuation time series and information entropy. Then, the concepts proposed in this paper, such as the neutrosophic fluctuation time series (NFTS) and the neutrosophic fluctuation logical relationship, are defined. Section 3 presents the specific modules of the proposed model. Section 4 details the prediction steps and validates the model using TAIEX as the dataset. Section 5 further analyzes the prediction accuracy and universality of the model based on SHSECI and HSI. Finally, the conclusions and prospects are presented in Section 6.

2. Preliminaries

2.1. Fluctuation Time Series

Definition 1.
Let {Vt|t = 1, 2, …, T} be a stock time series, where T is the number of observations. Then, {Ut|t = 2, 3, …, T} is called a fluctuation time series, where Ut = Vt − Vt−1 (t = 2, 3, …, T).
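As a minimal illustration (a Python/NumPy sketch of ours; the price values are borrowed from Table 1 purely for illustration), the fluctuation time series is simply the first-order difference of the price series:

```python
import numpy as np

# Hypothetical price window; values borrowed from Table 1 for illustration only.
V = np.array([7814.89, 7721.59, 7580.09, 7469.23, 7488.26])

# Fluctuation time series U_t = V_t - V_{t-1}, defined for t = 2, ..., T (Definition 1).
U = np.diff(V)
print(U)  # approximately [-93.3, -141.5, -110.86, 19.03]
```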

2.2. Information Entropy of the mth-Order Fluctuation in a Time Series

Information entropy (IE) [27] was proposed as a measurement of event uncertainty. The amount of information can be expressed as a function of event occurrence probability. The general formula for information entropy is:
E = -\sum_{t=1}^{N} p(x_t) \log_2 p(x_t)
where p(·) is the probability function of a set of N events. In addition, the information entropy must satisfy the following conditions: \sum_{t=1}^{N} p(x_t) = 1 and 0 < p(x_t) < 1. The information entropy is always positive.
According to the fuzzy set definition by Zadeh [28], each number in a time series can be fuzzified by its membership function of a fuzzy set L = \{L_1, L_2, \ldots, L_g\}, which can be regarded as an event in a time series. For example, when g = 5, it might represent a set of linguistic event variants such as: L = {L_1, L_2, L_3, L_4, L_5} = {very low, low, equal, high, very high}.
Definition 2.
Let F(t − 1), F(t − 2), …, F(t − m) be fuzzy sets of the mth-order fluctuation time series {U_t|t = m + 1, m + 2, …, T}. Let p_{U_t}(L_1), p_{U_t}(L_2), p_{U_t}(L_3), p_{U_t}(L_4), and p_{U_t}(L_5) be the probabilities of occurrence of the linguistic variants L_1, L_2, L_3, L_4, and L_5 in F(t − 1), F(t − 2), …, F(t − m). The information entropy of the mth-order fluctuation is defined as:
E(U_t) = -\sum_{n=1}^{g} p_{U_t}(L_n) \log_2 p_{U_t}(L_n)
where g = 5 and E(U_t) is the information entropy of the mth-order fluctuation at point t in the fluctuation time series {U_t|t = m + 1, m + 2, …, T}.
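The following sketch (Python, our own illustration) computes Equation (2) from the label sequence of the last m fluctuations; labels with zero probability contribute nothing to the sum:

```python
import math
from collections import Counter

def fluctuation_entropy(labels):
    """Information entropy (Equation (2)) of the linguistic labels of the last m fluctuations."""
    m = len(labels)
    counts = Counter(labels)
    # Probabilities are relative frequencies; labels that never occur are simply absent.
    return -sum((c / m) * math.log2(c / m) for c in counts.values())

# Worked example from Section 4: the nine labels preceding U_12 give E(U_12) = 1.8911.
print(round(fluctuation_entropy(['l4', 'l5', 'l3', 'l3', 'l2', 'l2', 'l2', 'l5', 'l3']), 4))
```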

2.3. Neutrosophic Fluctuation Time Series

Definition 3.
(Smarandache [12]) Let W be a space of points (objects), with a generic element in W denoted by w. A neutrosophic set A in W is characterized by a truth-membership function T_A(w), an indeterminacy-membership function I_A(w), and a falsity-membership function F_A(w). The functions T_A(w), I_A(w), and F_A(w) are real standard or nonstandard subsets of ]⁻0, 1⁺[, where ⁻0 = 0 − ε, 1⁺ = 1 + ε, and ε > 0 is an infinitesimal number. There is no restriction on the sum of T_A(w), I_A(w), and F_A(w).
Definition 4.
Let {U_t|t = 2, 3, …, T} be a fluctuation time series of a stock time series as defined in Definition 1. A number U_t in U is characterized by an upward-trend function T(U_t), a fluctuation-inconsistency function I(U_t), and a downward-trend function F(U_t), which are mapped to the truth-membership, indeterminacy-membership, and falsity-membership of a neutrosophic set, respectively. The upward-trend function T(U_t) and the downward-trend function F(U_t) are defined according to the number U_t as follows:
T(U_t) = \begin{cases} 0, & U_t \le m_1 \\ f_1(U_t, m_1, m_2), & m_1 \le U_t \le m_2 \\ 1, & \text{otherwise} \end{cases} \qquad F(U_t) = \begin{cases} 1, & U_t \le o_1 \\ f_2(U_t, o_1, o_2), & o_1 \le U_t \le o_2 \\ 0, & \text{otherwise} \end{cases}
where m_j and o_j (j = 1, 2) are parameters determined from the fluctuation time series.
The fluctuation-inconsistency function I(U_t) can be represented by the information entropy E(U_t) as defined in Equation (2).
Thus, a fluctuation time series {U_t|t = 2, 3, …, T} can be represented by a neutrosophic fluctuation time series {X_t|t = m + 1, m + 2, …, T}, where X_t = (T(U_t), I(U_t), F(U_t)) is a neutrosophic set.

2.4. Neutrosophic Logical Relationship

Definition 5.
Let {Xt|t = 1, 2, 3, …, T} be a fluctuation time series. If there exists a relation R(t, t + 1), such that:
X_{t+1} = X_t \circ R(t, t+1)
where ∘ is a max–min composition operator, X_{t+1} is said to be derived from X_t, denoted by the neutrosophic logical relationship (NLR) X_t → X_{t+1}. X_t and X_{t+1} are called the left-hand side (LHS) and the right-hand side (RHS) of the NLR, respectively. X_{t+1} can also be represented by D_t. Therefore, X_t → X_{t+1} can also be represented by X_t → D_t.
The Jaccard index, also known as the Jaccard similarity coefficient, is used to compare similarities and differences between finite sample sets [29]. The larger the Jaccard similarity value, the higher the similarity.
Definition 6.
Let Xt, Xj be two NSs. The Jaccard similarity between Xt and Xj in vector space can be expressed as follows:
J(X_t, X_j) = \frac{T_{X_t} T_{X_j} + I_{X_t} I_{X_j} + F_{X_t} F_{X_j}}{(T_{X_t})^2 + (I_{X_t})^2 + (F_{X_t})^2 + (T_{X_j})^2 + (I_{X_j})^2 + (F_{X_j})^2 - (T_{X_t} T_{X_j} + I_{X_t} I_{X_j} + F_{X_t} F_{X_j})}
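A minimal sketch of Definition 6 (Python; function and variable names are ours), treating each neutrosophic value as a (T, I, F) triple:

```python
def jaccard_similarity(x, y):
    """Jaccard similarity (Definition 6) between two neutrosophic values x = (T, I, F) and y = (T, I, F)."""
    dot = sum(a * b for a, b in zip(x, y))
    norm_x = sum(a * a for a in x)
    norm_y = sum(b * b for b in y)
    return dot / (norm_x + norm_y - dot)

# Worked example from Section 4: J(X_223, X_12) ≈ 0.7742.
print(round(jaccard_similarity((1.0000, 0.3910, 0.0000), (0.5584, 0.5111, 0.1082)), 4))
```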

2.5. Aggregation Operator for NLRs

Definition 7.
Let X = \{X_1, X_2, \ldots, X_t, \ldots, X_n\} and D = \{D_1, D_2, \ldots, D_t, \ldots, D_n\} be the LHSs and RHSs of a group of NLRs, respectively. The Jaccard similarities between X_t (t = 1, 2, …, n) and X_j are S_{X_t,j} (t = 1, 2, …, n), respectively. The corresponding D_j can be calculated by an aggregation operator [30] as:
T_{D_j} = \frac{\sum_{t=1}^{n} S_{X_t,j} \times T_{D_t}}{\sum_{t=1}^{n} S_{X_t,j}}, \quad I_{D_j} = \frac{\sum_{t=1}^{n} S_{X_t,j} \times I_{D_t}}{\sum_{t=1}^{n} S_{X_t,j}}, \quad F_{D_j} = \frac{\sum_{t=1}^{n} S_{X_t,j} \times F_{D_t}}{\sum_{t=1}^{n} S_{X_t,j}}
According to the definition of NLR, Dj can be represented by Xj+1.
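A sketch of Definition 7 (Python, our own naming): each right-hand side D_t is weighted by its similarity S_{X_t,j}, and the three membership components are averaged separately:

```python
def aggregate_rhs(similarities, rhs_values):
    """Weighted arithmetic averaging (Definition 7) of the RHS neutrosophic triples,
    each weighted by the Jaccard similarity of its LHS to the current state."""
    total = sum(similarities)
    return tuple(
        sum(s * d[k] for s, d in zip(similarities, rhs_values)) / total
        for k in range(3)  # k = 0, 1, 2 correspond to the T, I, F components
    )

# Hypothetical example with two matched rules.
print(aggregate_rhs([0.9, 0.8], [(1.0, 0.4, 0.0), (0.0, 0.5, 1.0)]))
```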

3. Research Methodology

In this section, we introduce a neutrosophic forecasting model for time series based on the first-order state and the information entropy of high-order fluctuations. The detailed steps are described below and illustrated in Figure 1.

3.1. Step 1: Using Neutrosophic Fluctuation Sets to Describe a Time Series

Let {V_t|t = 1, 2, 3, …, T} be a stock index time series and {U_t|t = 2, 3, …, T} be its fluctuation time series, where U_t = V_t − V_{t−1} (t = 2, 3, …, T). Then, we can calculate len = \sum_{t=2}^{T} |U_t| / (T - 1), which is the benchmark for interval division when calculating memberships. Let {X_t|t = m, m + 1, m + 2, …, T} be the mth-order neutrosophic expression of the fluctuation time series {U_t|t = 2, 3, …, T}. The conversion rules for the truth-membership T_{X_t} and falsity-membership F_{X_t} of X_t are defined as follows:
T_{X_t} = \begin{cases} 0, & U_t \le -0.5 \times len \\ \frac{U_t}{3/2 \times len} + \frac{1}{3}, & -0.5 \times len \le U_t \le len \\ 1, & U_t \ge len \end{cases} \qquad F_{X_t} = \begin{cases} 1, & U_t \le -len \\ -\frac{U_t}{3/2 \times len} + \frac{1}{3}, & -len \le U_t \le 0.5 \times len \\ 0, & U_t \ge 0.5 \times len \end{cases}
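A sketch of the conversion rule above (Python; `length` stands for len and the function name is ours):

```python
def trend_memberships(u, length):
    """Truth- and falsity-memberships of a fluctuation value u (Equation (7)),
    where `length` is the benchmark len (mean absolute fluctuation)."""
    if u <= -0.5 * length:
        t = 0.0
    elif u >= length:
        t = 1.0
    else:
        t = u / (1.5 * length) + 1.0 / 3.0

    if u <= -length:
        f = 1.0
    elif u >= 0.5 * length:
        f = 0.0
    else:
        f = -u / (1.5 * length) + 1.0 / 3.0
    return t, f

# Worked example from Section 4: U_12 = 28.7 with len = 85 gives (0.5584, 0.1082).
print(tuple(round(v, 4) for v in trend_memberships(28.7, 85)))
```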

3.2. Step 2: Using Information Entropy to Represent the Complexity of Historical Fluctuations

{U_t|t = 2, 3, …, T} can be fuzzified according to a linguistic set L = {l_1, l_2, l_3, l_4, l_5}. Specifically, l_1 = [U_{min}, -1.5 \times len), l_2 = [-1.5 \times len, -0.5 \times len), l_3 = [-0.5 \times len, 0.5 \times len), l_4 = [0.5 \times len, 1.5 \times len), and l_5 = [1.5 \times len, U_{max}). The conversion rule for the indeterminacy-membership I_{X_t} is defined as follows:
I_{X_t} = -\sum_{n=1}^{g} p_{X_t}(l_n) \log_2 p_{X_t}(l_n)
where g = 5 and p_{X_t}(l_n) indicates the probability of occurrence of the label l_n in the past m days.
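The sketch below (Python; it assumes the fluctuation_entropy helper from the sketch after Definition 2) assigns a fluctuation value to one of the labels l1–l5 and derives the indeterminacy-membership from the past m fluctuations:

```python
def fluctuation_label(u, length):
    """Assign a fluctuation value to a linguistic label using the interval
    boundaries -1.5*len, -0.5*len, 0.5*len, and 1.5*len."""
    if u < -1.5 * length:
        return 'l1'
    if u < -0.5 * length:
        return 'l2'
    if u < 0.5 * length:
        return 'l3'
    if u < 1.5 * length:
        return 'l4'
    return 'l5'

def indeterminacy_membership(past_fluctuations, length):
    """Indeterminacy-membership I_Xt: information entropy of the labels of the past m fluctuations."""
    labels = [fluctuation_label(u, length) for u in past_fluctuations]
    return fluctuation_entropy(labels)  # helper sketched after Definition 2
```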

3.3. Step 3: Establishing Logical Relationships for Training Data

According to Definition 5, NLRs are established to form the training dataset.

3.4. Step 4: Calculating the Similarities between Current Data and Training Data

According to Definition 6, the similarities between the current data and the training data are calculated. Let t be the current point. S_{X_t,j} denotes the NFTS similarity between the current point t and training data point j.

3.5. Step 5: Forecasting Neutrosophic Value Using the Aggregation Operator

According to Definition 7, the future neutrosophic fluctuation number X_{t+1} can be generated based on the training dataset and the similarities with X_t. In order to eliminate data with very low similarity, valid NLRs must satisfy S_{X_t,j} \ge w, where w is a similarity threshold.

3.6. Step 6: Deneutrosophication for the Neutrosophic Fluctuation Set and Calculating the Forecasted Value

By calculating the expected value of the forecasted neutrosophic set X_{t+1}, the forecasted value can be obtained as:
V_{t+1} = (T_{X_{t+1}} - F_{X_{t+1}}) \times len + V_t
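A compact sketch tying Steps 4–6 together (Python; it reuses the jaccard_similarity and aggregate_rhs helpers sketched in Section 2, and the default threshold is the example value used in Section 4):

```python
def forecast_next(current_x, current_price, training_pairs, length, w=0.89):
    """One forecasting step (Steps 4-6): keep NLRs whose LHS similarity is at least w,
    aggregate their RHS neutrosophic values, then deneutrosophy to a price forecast."""
    scored = [(jaccard_similarity(current_x, lhs), rhs) for lhs, rhs in training_pairs]
    matched = [(s, rhs) for s, rhs in scored if s >= w]
    similarities = [s for s, _ in matched]
    rhs_values = [rhs for _, rhs in matched]
    t_next, _, f_next = aggregate_rhs(similarities, rhs_values)
    return current_price + (t_next - f_next) * length
```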

4. Empirical Analysis

4.1. Prediction Process

4.1.1. Step 1: Calculate the Fluctuation Values of the Stock Time Series and Convert Them to a Neutrosophic Time Series

This study needs to select the parameters of the model and estimate its performance. Many studies in the field of fuzzy forecasting have used the data from January–October as the training set and the data from November–December as the test dataset. To facilitate comparison with these existing studies, we also selected the data from November–December as the test dataset. Considering the characteristics of time series, traditional cross-validation methods (such as k-fold cross-validation) are poorly suited, because a subset of data occurring after the training subset must be retained to validate model performance. Therefore, we chose a special nested cross-validation, the outer layer of which was used to estimate the model performance and the inner layer of which was used to select the parameters. Specifically, in this paper, we used TAIEX's 1999 data as an example. The closing prices from 1 January–31 October were used as the training dataset; within this period, the data from January–August formed the training subset, and the data from September–October were used for validation. Logical relationships were constructed between each data point and its nine most recent historical values. The closing prices from 1 November–31 December were used as forecast data, and performance was evaluated by comparing the forecasts against the realistic data.
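A small sketch of this time-ordered split (Python with pandas; the DataFrame name and indexing are our assumptions):

```python
import pandas as pd

def split_taiex_1999(df):
    """Time-ordered split used above: an inner training subset (Jan-Aug), an inner
    validation subset (Sep-Oct) for parameter selection, and a held-out test period
    (Nov-Dec) for the final performance estimate. df must have a DatetimeIndex."""
    train = df.loc["1999-01-01":"1999-08-31"]
    valid = df.loc["1999-09-01":"1999-10-31"]
    test = df.loc["1999-11-01":"1999-12-31"]
    return train, valid, test
```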
For example, when the fluctuation value is U_{12} = 28.7, the sequence of linguistic variables for the preceding nine fluctuations is l_4, l_5, l_3, l_3, l_2, l_2, l_2, l_5, l_3, so p_{U_{12}}(l_1) = 0, p_{U_{12}}(l_2) = 0.3333, p_{U_{12}}(l_3) = 0.3333, p_{U_{12}}(l_4) = 0.1111, and p_{U_{12}}(l_5) = 0.2222. Then, we can calculate the ninth-order fuzzy fluctuation information entropy as follows:
E(U_{12}) = E(28.7) = -\sum_{i=1}^{5} p_{U_{12}}(l_i) \log_2 p_{U_{12}}(l_i) = 1.8911
E(U_{13}) = E(-106.5) = -\sum_{i=1}^{5} p_{U_{13}}(l_i) \log_2 p_{U_{13}}(l_i) = 1.5307
E(U_{14}) = E(-33.89) = -\sum_{i=1}^{5} p_{U_{14}}(l_i) \log_2 p_{U_{14}}(l_i) = 1.3923
The information entropy of the fluctuation time series proposed in this paper is the intermediate (indeterminacy) term of the NS. In order to maintain consistency with the other two terms, the above results must be normalized. The information entropy normalized by the maximum value of the information entropy is calculated as follows:
E(U_{12}) = 1.8911 / 3.7000 = 0.5111
E(U_{13}) = 1.5307 / 3.7000 = 0.4137
E(U_{14}) = 1.3923 / 3.7000 = 0.3763
In order to convert the numerical data of the stock market fluctuation time series into an NS, it is necessary to calculate the elements corresponding to the truth-membership term and the falsity-membership term of the NS. According to Equation (7), the neutrosophic set memberships can be calculated. For example, when the fluctuation value is U_{12} = 28.7, the truth-membership T_{X_{12}} of X_{12} is \frac{28.7}{3/2 \times len} + \frac{1}{3} = 0.5584 and the falsity-membership F_{X_{12}} of X_{12} is -\frac{28.7}{3/2 \times len} + \frac{1}{3} = 0.1082. Then, the fluctuations can be represented by neutrosophic sets as follows:
X_{12}(28.7) = (0.5584, 0.5111, 0.1082)
X_{13}(-106.5) = (0.0000, 0.4137, 1.0000)
X_{14}(-33.89) = (0.0675, 0.3763, 0.5991)
X_{223}(148.18) = (1.0000, 0.3910, 0.0000)

4.1.2. Step 2: According to Definition 5, Establishing Mapping Relationships Based on Historical Values, Historical Trends, and Current Values

This step requires establishing neutrosophic logical relationships based on the feature and target sets, where X12 is the feature item of X13.
X_{12} \to X_{13} = D_{12}
X_{13} \to X_{14} = D_{13}

4.1.3. Step 3: Calculating the Jaccard Similarity

Jaccard similarity is usually used to compare the similarities and differences of a limited set of samples. The higher the value, the higher the similarity. We used it to compare the current logical group with the logical groups in the training set in order to identify similar groups. S_{X_{223},12} indicates the similarity between the 223rd and 12th groups:
S_{X_{223},12} = \frac{0.5584 \times 1.0000 + 0.5111 \times 0.3910 + 0.1082 \times 0.0000}{0.5584^2 + 0.5111^2 + 0.1082^2 + 1.0000^2 + 0.3910^2 + 0.0000^2 - (0.5584 \times 1.0000 + 0.5111 \times 0.3910 + 0.1082 \times 0.0000)} = 0.7742

4.1.4. Step 4: Forecasting the Neutrosophic Fluctuation Point Using the Aggregation Operator

First, we applied the Jaccard similarity measure method to locate similar LHSs of NLRs. We tested different threshold values for the training data. In this example, it was set to 0.89, and we identified 65 groups that met the criteria.
Furthermore, we calculated the forecasting NFTS using the aggregation operator:
D224 = (0.5005, 0.5067, 0.3401)

4.1.5. Step 5: Calculating the Forecasted Value

Then, we calculated the predicted fuzzy fluctuation:
Y(t+1) = 0.5005 - 0.3401 = 0.1604
We then converted it to the actual fluctuation value:
U(t+1) = Y(t+1) \times len = 0.1604 \times 85 = 13.63
Finally, the predicted value was obtained from the actual value of the previous day and the predicted fluctuation value:
V(t+1) = V(t) + U(t+1) = 7854.85 + 13.63 = 7868.47
For the sample dataset, the complete prediction result of stock fluctuation trends and the actual values are shown in Table 1 and Figure 2.
Table 1 and Figure 2 show that NFM-IE was able to successfully forecast TAIEX data from 1 November 1999–30 December 1999 based on the logical rules derived from training data.

4.2. Performance Assessments

During the experimental analysis, several measures were used to quantify prediction accuracy. These measures are widely used in the forecasting field and include the mean squared error (MSE), the root mean squared error (RMSE), the mean absolute error (MAE), and the mean absolute percentage error (MAPE).
These expressions are respectively illustrated by Equations (26)–(29):
MSE = \frac{\sum_{t=1}^{n} (forecast_t - actual_t)^2}{n}
RMSE = \sqrt{\frac{\sum_{t=1}^{n} (forecast_t - actual_t)^2}{n}}
MAE = \frac{\sum_{t=1}^{n} |forecast_t - actual_t|}{n}
MAPE = \frac{\sum_{t=1}^{n} |forecast_t - actual_t| / actual_t}{n}
where forecast_t represents the predicted observation and actual_t represents the actual observation at time t.
Theil’s U index [31] is primarily used to measure the deviation between predicted and actual values. It yields a relative value between zero and one, where zero means that the predicted values equal the actual values, that is, the prediction model is perfect, while one indicates that the prediction performance is unsatisfactory. Theil’s U index is expressed as follows:
U = \frac{\sqrt{\sum_{t=1}^{n} (forecast_t - actual_t)^2 / n}}{\sqrt{\sum_{t=1}^{n} forecast_t^2 / n} + \sqrt{\sum_{t=1}^{n} actual_t^2 / n}}
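The following sketch (Python with NumPy, our own naming) computes all five accuracy measures of Equations (26)–(30) for aligned forecast and actual series:

```python
import numpy as np

def error_metrics(forecast, actual):
    """MSE, RMSE, MAE, MAPE, and Theil's U (Equations (26)-(30))."""
    forecast = np.asarray(forecast, dtype=float)
    actual = np.asarray(actual, dtype=float)
    err = forecast - actual
    mse = np.mean(err ** 2)
    rmse = np.sqrt(mse)
    mae = np.mean(np.abs(err))
    mape = np.mean(np.abs(err) / actual)
    theil_u = rmse / (np.sqrt(np.mean(forecast ** 2)) + np.sqrt(np.mean(actual ** 2)))
    return {"MSE": mse, "RMSE": rmse, "MAE": mae, "MAPE": mape, "Theil's U": theil_u}
```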
According to Equations (26)–(30), we separately predicted TAIEX data from 1997–2005 and further calculated the error for each year.
From Table 2, the results of different error statistics methods showed that NFM-IE can successfully forecast different time series of TAIEX 1997–2005.

5. Results Analysis

5.1. Taiwan Stock Exchange Capitalization Weighted Stock Index

In general, TAIEX is a widely used dataset in stock market forecasting. In order to facilitate comparison with other forecasting models, this paper also uses it as the main dataset to verify the model. Using non-stationary data can lead to spurious regressions, so we first performed a stationarity test based on the unit root test using EViews software (version 10.0, Enterprise Edition). The results show that the first-order difference of TAIEX 1997–2005 is stationary, which indicates that the fluctuation data used in this study were stationary. The other datasets in this study were also stationary.
The model in this paper is based on high-order fluctuations, and thus different orders may affect prediction accuracy. The experimental analysis showed that when the order of the fuzzy fluctuation information entropy was 9–11, the stability of the model was most satisfactory. Table 3 shows the experimental errors for different years under different orders.
Not surprisingly, accurate fluctuation trend predictions are very important. Therefore, the performance of different methods must be compared and evaluated to verify the superiority or deficiency of the model. In order to verify the effects of the model prediction, this section focuses on comparing this model’s experimental results with those from other models. Comparing the errors across models showed that the current model had certain advantages in prediction accuracy. Table 4 shows the prediction errors for the different methods between 1997 and 2005. The NFM-IE hybrid model achieved better prediction accuracy compared to the traditional regression model, autoregressive model, neural network model, and fuzzy model (Table 4). In addition, NFM-IE exhibited better predictive power in some years compared to other hybrid models based on fuzzy theory.

5.2. Forecasting Shanghai Stock Exchange Composite Index

SHSECI is one of the most typical stock indices in China, with certain representativeness. We selected it as an experimental dataset to verify the model’s applicability.
Recently, scholars have proposed more comprehensive models based on traditional prediction methods. For example, Guan et al. [39] proposed a two-factor autoregressive moving average model based on fuzzy logical relationships (ARMA-FR). Guan et al. [40] proposed a model based on a back propagation neural network and high-order fuzzy-fluctuation trends (BPNN-HFT). This section compares several typical prediction methods. The results indicate that the model can also effectively predict this stock index. Table 5 and Figure 3 show a comparison of the different prediction methods.
The comparison shows that NFM-IE outperformed other methods in predicting SHSECI from 2007–2015.
Comparing the average value of the SHSECI prediction error showed that NFM-IE had better prediction accuracy and stability compared to the neural network-based BPNN-HFT model and the statistical-based ARMA-FR model.

5.3. Forecasting Hong Kong-Hang Seng Index

Finally, the Hong Kong-Hang Seng Index (HSI) was selected as the experimental dataset. Comparing several authoritative prediction methods, we can verify the universality of the model in other stock markets. Table 6 and Figure 4 show a comparison of the different prediction methods from 1998–2012.
To further evaluate the validity of the proposed model, we used Friedman’s test to perform a significance test based on the study of Demšar [44]. For reference, Friedman’s test is a non-parametric statistical test that was proposed by Milton Friedman [45,46]. To further illustrate the significance of the model’s predictions compared to other prediction methods, this section uses Friedman’s test and a post-hoc test for significance analysis. In the Friedman test phase, SPSS was used for statistical testing, and the post-hoc test phase was based on manual calculations.
In the first stage, Friedman’s test requires comparison of the average rankings of the different algorithms, R_j = \frac{1}{N} \sum_{i} r_i^j, where r_i^j is the rank of the j-th of k algorithms on the i-th of N datasets. The ranking of each method was based on the analysis of the HSI forecast results, as shown in Table 7.
The software analysis indicated that the proposed method achieved the best overall ranking. In addition, according to the Chi-square distribution, there were significant differences among these methods.
CD = q_{\alpha} \sqrt{\frac{k(k+1)}{6N}}
In the second stage, in order to further compare the different methods, we used the Nemenyi test [47]. According to Equation (31), with α = 0.05, CD = 1.575. Upon further comparison, we found that the method proposed in this study had significant advantages over Yu (2005) [41], Wan (2017) [42], and Ren (2016) [43]. Although the difference compared with Cheng’s method (2018) [10] was not significant, the NFM-IE still had certain advantages in terms of mean error and average rank.
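As a sanity check, the critical difference of Equation (31) can be reproduced directly (Python sketch; q_0.05 ≈ 2.728 is the Nemenyi critical value for five compared methods):

```python
import math

def nemenyi_cd(q_alpha, k, n):
    """Critical difference for the Nemenyi post-hoc test (Equation (31))."""
    return q_alpha * math.sqrt(k * (k + 1) / (6.0 * n))

# k = 5 compared methods, N = 15 yearly HSI datasets.
print(round(nemenyi_cd(2.728, 5, 15), 3))  # ~1.575
```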

5.4. Discussion

The research mainly focused on two issues. The first was whether the uncertainty of stock market volatility can be used as a key forecasting feature in a complex environment. The other was whether a prediction method that considers both uncertainty and trend is effective. We first used the inconsistency of historical fluctuations as a stock forecasting feature and further characterized and quantified it. Then, we applied the neutrosophic set to represent this information and established neutrosophic logical relationships based on fluctuation inconsistency. Through experimental analysis, the proposed model achieved robustness and stability with relatively few parameters. In addition, it was also proven that predictions that consider inconsistency are meaningful and effective. The advantages are embodied in the following aspects: First, NFM-IE does not need to establish complex assumptions compared to traditional regression-based prediction models. Second, the NFM-IE prediction process is more interpretable than a neural network. Finally, compared with fuzzy prediction methods, NFM-IE effectively utilizes data inconsistency as key information. All in all, the model showed satisfactory performance. However, it also has certain limitations: First, the model uses single stock market data as the system input and fails to consider multiple factors fully. Second, using information entropy as a key tool for uncertainty measurement requires further optimization in characterizing the data.

6. Conclusions

In this paper, we presented the concept of the NFTS and proposed a prediction model based on the neutrosophic set and the information entropy of high-order fuzzy fluctuation time series. This model showed significant performance advantages over existing fuzzy time series models, machine learning prediction models, and traditional economic prediction models. We applied three typical test datasets to prove that the model has a certain universality and stability. In addition, this paper makes a scientific contribution in the following aspects: First, the concept of the NFTS was proposed. Second, this paper proposed an information entropy based on high-order fluctuation time series. Finally, this paper established NLRs based on the NFTS and information entropy. This paper discussed the first-order neutrosophic time series to characterize the historical state of uncertainty and the high-order fluctuation information entropy to measure the complexity of historical fluctuations. Other types of time series will be tested in the future. Meanwhile, future research should aim to establish detailed high-order neutrosophic time series models indicating the uncertainty of historical trends. In this study, we have considered the Jaccard similarity measure for comparing X_t and X_j; further work could consider the Jensen–Shannon distance [20], which satisfies the triangle inequality. Furthermore, in order to verify the robustness of the forecast in longer forecast scenarios, we will extend the model to 2, 3, or 4 periods ahead.

Author Contributions

Data curation, Z.D.; Supervision, H.G.; Validation, H.G., S.G. and A.Z.; Writing—original draft, Z.D.; Writing—review & editing, A.Z.

Funding

This work was supported by the National Natural Science Foundation of China under Grant 71704066.

Acknowledgments

The authors are indebted to the anonymous reviewers for their very insightful comments and constructive suggestions, which helped improve the quality of this paper.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; nor in the decision to publish the results.

References

  1. Han, M.; Xu, M. Laplacian Echo State Network for Multivariate Time Series Prediction. IEEE Trans. Neural Netw. Learn. Syst. 2018, 29, 238–244. [Google Scholar] [CrossRef] [PubMed]
  2. Mishra, N.; Soni, H.K.; Sharma, S.; Upadhyay, A.K. Development and Analysis of Artificial Neural Network Models for Rainfall Prediction by Using Time-Series Data. Int. J. Intell. Syst. Appl. 2018, 10, 16–23. [Google Scholar] [CrossRef]
  3. Safari, N.; Chung, C.Y.; Price, G.C.D. A Novel Multi-Step Short-Term Wind Power Prediction Framework Based on Chaotic Time Series Analysis and Singular Spectrum Analysis. IEEE Trans. Power Syst. 2018, 33, 590–601. [Google Scholar] [CrossRef]
  4. Moskowitz, D. Implementing the template method pattern in genetic programming for improved time series prediction. Genet. Program. Evol. Mach. 2018, 19, 271–299. [Google Scholar] [CrossRef]
  5. Soto, J.; Melin, P.; Castillo, O. Ensembles of Type 2 Fuzzy Neural Models and Their Optimization with Bio-Inspired Algorithms for Time Series Prediction; Springer Briefs in Applied Sciences & Technology; Springer: Basel, Switzerland, 2018. [Google Scholar]
  6. Soares, E.; Costa, P., Jr.; Costa, B.; Leite, D. Ensemble of evolving data clouds and fuzzy models for weather time series prediction. Appl. Soft Comput. 2018, 64, 445–453. [Google Scholar] [CrossRef]
  7. Song, Q.; Chissom, B.S. Forecasting enrollments with fuzzy time series—Part I. Fuzzy Sets Syst. 1993, 54, 1–9. [Google Scholar] [CrossRef]
  8. Song, Q.; Chissom, B.S. Fuzzy time series and its models. Fuzzy Sets Syst. 1993, 54, 269–277. [Google Scholar] [CrossRef]
  9. Song, Q.; Chissom, B.S. Forecasting enrollments with fuzzy time series—Part II. Fuzzy Sets Syst. 1991, 62, 1–8. [Google Scholar] [CrossRef]
  10. Cheng, C.H.; Yang, J.H. Fuzzy Time-Series Model Based on Rough Set Rule Induction For Forecasting Stock Price. Neurocomputing 2018, 302, 33–45. [Google Scholar] [CrossRef]
  11. Kumar, S.; Gangwar, S. Intuitionistic fuzzy time series: An approach for handling non-determinism in time series forecasting. IEEE Trans. Fuzzy Syst. 2016, 24, 1270–1281. [Google Scholar] [CrossRef]
  12. Smarandache, F. A unifying field in logics: Neutrosophic logic. Mult.-Valued Log. 1999, 8, 489–503. [Google Scholar]
  13. Wang, H.; Smarandache, F.; Zhang, Y.Q.; Sunderraman, R. Single valued neutrosophic sets. Multispace Multistruct 2010, 4, 410–413. [Google Scholar]
  14. Wang, H.; Smarandache, F.; Zhang, Y.Q.; Sunderraman, R. Interval Neutrosophic Sets and Logic: Theory and Applications in Computing; Hexis: Phoenix, AZ, USA, 2005. [Google Scholar]
  15. Zadeh, L.A. Probability measure of fuzzy events. J. Math. Anal. Appl. 1968, 23, 421–427. [Google Scholar] [CrossRef]
  16. DeLuca, A.S.; Termini, S. A definition of nonprobabilistic entropy in the setting of fuzzy set theory. Inf. Control 1972, 20, 301–312. [Google Scholar] [CrossRef]
  17. Vu, T.M.; Mishra, A.K.; Konapala, G. Information Entropy Suggests Stronger Nonlinear Associations between Hydro-Meteorological Variables and ENSO. Entropy 2018, 20, 38. [Google Scholar] [CrossRef]
  18. Zeng, X.; Wu, J.; Wang, D.; Zhu, X.; Long, Y. Assessing Bayesian model averaging uncertainty of groundwater modeling based on information entropy method. J. Hydrol. 2016, 538, 689–704. [Google Scholar] [CrossRef]
  19. Arellano-Valle, R.B.; Contreras-Reyes, J.E.; Stehlík, M. Generalized skew-normal negentropy and its application to fish condition factor time series. Entropy 2017, 19, 528. [Google Scholar] [CrossRef]
  20. Liu, Z.; Shang, P. Generalized information entropy analysis of financial time series. Physica A 2018, 505, 1170–1185. [Google Scholar] [CrossRef]
  21. Ye, G.; Pan, C.; Huang, X.; Zhao, Z.; He, J. A Chaotic Image Encryption Algorithm Based on Information Entropy. Int. J. Bifurcation Chaos 2018, 28, 9. [Google Scholar] [CrossRef]
  22. Tang, Y.; Liu, Z.; Pan, M.; Zhang, Q.; Wan, C.; Guan, F.; Wu, F.; Chen, D. Detection of Magnetic Anomaly Signal Based on Information Entropy of Differential Signal. IEEE Geosci. Remote Sens. Lett. 2018, 15, 512–516. [Google Scholar] [CrossRef]
  23. He, H.; An, L.; Liu, W.; Zhang, J. Prediction Model of Collapse Risk Based on Information Entropy and Distance Discriminant Analysis Method. Math. Prob. Eng. 2017, 2017. [Google Scholar] [CrossRef]
  24. Bariviera, A.F.; Martín, M.T.; Plastino, A.; Vampa, V. LIBOR troubles: Anomalous movements detection based on maximum entropy. Physica A 2016, 449, 401–407. [Google Scholar] [CrossRef]
  25. Liang, X.; Si, D.; Xu, J. Quantitative Evaluation of the Sustainable Development Capacity of Hydropower in China Based on Information Entropy. Sustainability 2018, 10, 529. [Google Scholar] [CrossRef]
  26. Zhang, Z.; Li, Y.; Jin, S.; Zhang, Z.; Wang, H.; Qi, L.; Zhou, R. Modulation Signal Recognition Based on Information Entropy and Ensemble Learning. Entropy 2018, 20, 198. [Google Scholar] [CrossRef]
  27. Shannon, C.E. A mathematical theory of communication. Bell Labs Tech. J. 1948, 27, 379–423. [Google Scholar] [CrossRef]
  28. Zadeh, L.A. The Concept of a Linguistic Variable and its Application to Approximate Reasoning. Inf. Sci. 1974, 8, 199–249. [Google Scholar] [CrossRef]
  29. Fu, J.; Ye, J. Simplified neutrosophic exponential similarity measures for the initial evaluation/diagnosis of benign prostatic hyperplasia symptoms. Symmetry 2017, 9, 154. [Google Scholar] [CrossRef]
  30. Ali, M.; Son, L.H.; Thanh, N.D.; Minh, N.V. A neutrosophic recommender system for medical diagnosis based on algebraic neutrosophic measures. Appl. Soft Comput. 2017, 71, 1054–1071. [Google Scholar] [CrossRef]
  31. Theil, H. Applied Economic Forecasting; North-Holland: Amsterdam, The Netherlands, 1966. [Google Scholar]
  32. Yu, T.H.K.; Huarng, K.H. A bivariate fuzzy time series model to forecast the TAIEX. Expert Syst. Appl. 2008, 34, 2945–2952. [Google Scholar] [CrossRef]
  33. Yu, T.H.K.; Huarng, K.H. Corrigendum to ‘‘A bivariate fuzzy time series model to forecast the TAIEX”. Expert Syst. Appl. 2010, 37, 5529. [Google Scholar] [CrossRef]
  34. Sullivan, J.; Woodall, W.H. A comparison of fuzzy forecasting and Markov modeling. Fuzzy Sets Syst. 1994, 64, 279–293. [Google Scholar] [CrossRef]
  35. Chen, S.M.; Chang, Y.C. Multi-variable fuzzy forecasting based on fuzzy clustering and fuzzy rule interpolation techniques. Inf. Sci. 2010, 180, 4772–4783. [Google Scholar] [CrossRef]
  36. Chen, S.M.; Chen, C.D. TAIEX Forecasting Based on Fuzzy Time Series and Fuzzy Variation Groups. IEEE Trans. Fuzzy Syst. 2011, 19, 1–12. [Google Scholar] [CrossRef]
  37. Chen, S.M.; Manalu, G.M.; Pan, J.S.; Liu, H.C. Fuzzy Forecasting Based on Two-Factors Second-Order Fuzzy-Trend Logical Relationship Groups and Particle Swarm Optimization Techniques. IEEE Trans. Cybern. 2013, 43, 1102–1117. [Google Scholar] [CrossRef]
  38. Jia, J.; Zhao, A.W.; Guan, S. Forecasting Based on High-Order Fuzzy-Fluctuation Trends and Particle Swarm Optimization Machine Learning. Symmetry 2017, 9, 124. [Google Scholar] [CrossRef]
  39. Guan, S.; Zhao, A. A Two-Factor Autoregressive Moving Average Model Based on Fuzzy Fluctuation Logical Relationships. Symmetry 2017, 9, 207. [Google Scholar] [CrossRef]
  40. Guan, H.; Dai, Z.; Zhao, A. A novel stock forecasting model based on High-order-fuzzy-fluctuation Trends and Back Propagation Neural Network. PLoS ONE 2018, 13. [Google Scholar] [CrossRef]
  41. Yu, H.K. A refined fuzzy time-series model for forecasting. Physica A 2005, 346, 657–681. [Google Scholar] [CrossRef]
  42. Wan, Y.; Si, Y.W. Adaptive neuro fuzzy inference system for chart pattern matching in financial time series. Appl. Soft Comput. 2017, 57, 1–18. [Google Scholar] [CrossRef]
  43. Ren, Y.; Suganthan, P.N.; Srikanth, N. A Novel Empirical Mode Decomposition With Support Vector Regression for Wind Speed Forecasting. IEEE Trans. Neural Netw. Learn. Syst. 2016, 27, 1793–1798. [Google Scholar] [CrossRef] [PubMed]
  44. Demšar, J. Statistical comparisons of classifiers over multiple datasets. J. Mach. Learn. Res. 2006. Available online: http://www.jmlr.org/papers/v7/demsar06a.html (accessed on 26 April 2019).
  45. Friedman, M. The use of ranks to avoid the assumption of normality implicit in the analysis of variance. J. Am. Stat. Assoc. 1937, 32, 675–701. [Google Scholar] [CrossRef]
  46. Friedman, M. A comparison of alternative tests of significance for the problem of m rankings. Ann. Math. Stat. 1940, 11, 86–92. [Google Scholar] [CrossRef]
  47. Nemenyi, P. Distribution-free Multiple Comparisons. Ph.D. Thesis, Princeton University, Princeton, NJ, USA, 1963. [Google Scholar]
Figure 1. The flowchart of the neutrosophic forecasting model.
Figure 2. Forecasting results from 1 November 1999–30 December 1999.
Figure 3. RMSEs of forecast errors for SHSECI from 2007–2015.
Figure 4. RMSEs of forecast errors for HSI from 1998–2012.
Table 1. Forecasting results from 1 November 1999–30 December 1999.
Date (MM/DD/YYYY) | Actual | Forecast | (Forecast − Actual)² | Date (MM/DD/YYYY) | Actual | Forecast | (Forecast − Actual)²
11/1/1999 | 7814.89 | 7868.47 | 2871.08 | 12/1/1999 | 7766.20 | 7719.40 | 2190.11
11/2/1999 | 7721.59 | 7821.82 | 10,046.31 | 12/2/1999 | 7806.26 | 7770.62 | 1270.07
11/3/1999 | 7580.09 | 7722.04 | 20,149.71 | 12/3/1999 | 7933.17 | 7814.75 | 14,022.27
11/4/1999 | 7469.23 | 7577.92 | 11,813.96 | 12/4/1999 | 7964.49 | 7944.99 | 380.16
11/5/1999 | 7488.26 | 7466.90 | 456.14 | 12/6/1999 | 7894.46 | 7968.41 | 5468.57
11/6/1999 | 7376.56 | 7489.54 | 12,764.37 | 12/7/1999 | 7827.05 | 7895.11 | 4631.50
11/8/1999 | 7401.49 | 7374.68 | 718.73 | 12/8/1999 | 7811.02 | 7826.02 | 225.13
11/9/1999 | 7362.69 | 7399.02 | 1320.19 | 12/9/1999 | 7738.84 | 7808.59 | 4864.78
11/10/1999 | 7401.81 | 7371.66 | 909.13 | 12/10/1999 | 7733.77 | 7738.76 | 24.94
11/11/1999 | 7532.22 | 7391.20 | 19,887.04 | 12/13/1999 | 7883.61 | 7723.92 | 25,501.56
11/15/1999 | 7545.03 | 7543.08 | 3.82 | 12/14/1999 | 7850.14 | 7897.06 | 2201.62
11/16/1999 | 7606.20 | 7536.55 | 4851.14 | 12/15/1999 | 7859.89 | 7854.28 | 31.42
11/17/1999 | 7645.78 | 7613.89 | 1017.07 | 12/16/1999 | 7739.76 | 7860.82 | 14,654.64
11/18/1999 | 7718.06 | 7643.21 | 5603.26 | 12/17/1999 | 7723.22 | 7738.34 | 228.50
11/19/1999 | 7770.81 | 7729.37 | 1716.87 | 12/18/1999 | 7797.87 | 7722.01 | 5754.66
11/20/1999 | 7900.34 | 7780.44 | 14,376.84 | 12/20/1999 | 7782.94 | 7811.00 | 787.09
11/22/1999 | 8052.31 | 7915.24 | 18,788.73 | 12/21/1999 | 7934.26 | 7782.84 | 22,929.50
11/23/1999 | 8046.19 | 8068.19 | 483.82 | 12/22/1999 | 8002.76 | 7946.35 | 3182.30
11/24/1999 | 7921.85 | 8046.12 | 15,443.79 | 12/23/1999 | 8083.49 | 8016.21 | 4526.63
11/25/1999 | 7904.53 | 7919.37 | 220.29 | 12/24/1999 | 8219.45 | 8096.51 | 15,113.68
11/26/1999 | 7595.44 | 7906.37 | 96,679.93 | 12/27/1999 | 8415.07 | 8233.25 | 33,058.13
11/29/1999 | 7823.90 | 7592.64 | 53,479.11 | 12/28/1999 | 8448.84 | 8429.73 | 365.06
11/30/1999 | 7720.87 | 7836.52 | 13,376.00 | Root Mean Square Error (RMSE) |  |  | 102.02
Table 2. Comparing results of different error statistics methods for Taiwan Stock Exchange Capitalization Weighted Stock Index (TAIEX) data collected from 1997–2005.
Year | 1997 | 1998 | 1999 | 2000 | 2001 | 2002 | 2003 | 2004 | 2005
RMSE | 141.42 | 114.69 | 102.02 | 129.94 | 114.22 | 66.84 | 53.88 | 55.24 | 53.1
MSE | 19,999.62 | 13,153.80 | 10,408.08 | 16,884.40 | 13,046.21 | 4467.59 | 2903.05 | 3051.46 | 2819.61
MAE | 113.42 | 96.31 | 79.38 | 96.65 | 92.48 | 51.65 | 41.11 | 38.65 | 41.27
MAPE | 0.0143 | 0.0138 | 0.0102 | 0.0182 | 0.019 | 0.0111 | 0.007 | 0.0065 | 0.0067
Theil’s U | 0.0089 | 0.0082 | 0.0065 | 0.0122 | 0.0119 | 0.0072 | 0.0046 | 0.0047 | 0.0043
Table 3. Comparing average RMSEs based on different order fuzzy fluctuation time series from 1997–2005.
Order | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 | 16
1997 | 141.41 | 141.42 | 141.46 | 141.9 | 141.53 | 141.72 | 141.68 | 141.8 | 141.69
1998 | 114.67 | 114.69 | 114.61 | 114.76 | 114.63 | 114.39 | 114.46 | 114.29 | 114.23
1999 | 101.86 | 102.02 | 101.7 | 101.66 | 101.55 | 101.59 | 101.7 | 101.26 | 101.54
2000 | 129.07 | 129.94 | 129.62 | 129.34 | 129.87 | 129.49 | 128.64 | 128.6 | 128.43
2001 | 113.97 | 114.22 | 114.53 | 114.86 | 115.37 | 115.11 | 115.39 | 116.06 | 116.02
2002 | 67.29 | 66.84 | 66.95 | 66.85 | 66.76 | 67.21 | 66.98 | 67.02 | 67.48
2003 | 53.84 | 53.88 | 53.99 | 53.68 | 53.74 | 53.8 | 53.55 | 53.48 | 53.45
2004 | 54.7 | 55.24 | 55.17 | 55.08 | 55.07 | 55.36 | 55.47 | 55.1 | 55.25
2005 | 53.09 | 53.1 | 53.22 | 53.09 | 53.14 | 53.11 | 53.13 | 53.04 | 52.97
average | 92.21 | 92.37 | 92.36 | 92.36 | 92.41 | 92.42 | 92.33 | 92.29 | 92.34
total | 829.9 | 831.35 | 831.25 | 831.22 | 831.66 | 831.78 | 831 | 830.65 | 831.06
Table 4. Performance comparison of prediction RMSEs with other models. NFM-IE, neutrosophic forecasting model based on information entropy.
TYPE | Methods | RMSE
 |  | 1997 | 1998 | 1999 | 2000 | 2001 | 2002 | 2003 | 2004 | 2005 | Average | Total
Regression Model | Univariate conventional regression model (U_R) [32,33] | N/A | N/A | 164 | 420 | 1070 | 116 | 329 | 146 | N/A | 374.20 | 2245
 | Bivariate conventional regression model (B_R) [32,33] | N/A | N/A | 103 | 154 | 120 | 77 | 54 | 85 | N/A | 98.80 | 593
Auto-regressive | Autoregressive model for order one (AR_1) [34] | 146.22 | 144.53 | 116.84 | 155.12 | 112.39 | 97.09 | 91.67 | 79.94 | N/A | 117.98 | 653.05
 | Autoregressive model for order two (AR_2) [34] | 174.09 | 135.21 | 128.15 | 142.3 | 129.84 | 89.8 | 66.58 | 60.33 | N/A | 115.79 | 617
Neural network | Univariate neural network model (U_NN) [32,33] | N/A | N/A | 107 | 309 | 259 | 78 | 57 | 60 | N/A | 145.00 | 870
 | Bivariate neural network model (B_NN) [32,33] | N/A | N/A | 112 | 274 | 131 | 69 | 52 | 61 | N/A | 116.40 | 699
Fuzzy | Fuzzy forecasting and fuzzy rule (F-R) [35] | N/A | N/A | 123.64 | 131.1 | 115.08 | 73.06 | 66.36 | 60.48 | N/A | 94.95 | 569.72
 | Fuzzy time-series model based on rough set rule (F-RS) [10] | N/A | 120.8 | 110.7 | 150.6 | 113.2 | 66 | 53.1 | 58.6 | 53.5 | 90.81 | 605.7
 | Fuzzy variation groups (F-VG) [36] | 140.86 | 144.13 | 119.32 | 129.87 | 123.12 | 71.01 | 65.14 | 61.94 | N/A | 106.92 | 570.4
Fuzzy+ | Multi-variable fuzzy and particle swarm optimization (M_F-PSO) [37] | 138.41 | 113.88 | 102.34 | 131.25 | 113.62 | 65.77 | 52.23 | 56.16 | N/A | 96.71 | 521.37
 | Univariate fuzzy and particle swarm optimization (U_F-PSO) [38] | 143.6 | 115.34 | 99.12 | 125.7 | 115.91 | 70.43 | 54.26 | 57.24 | 54.68 | 92.92 | 577.34
 | Autoregressive moving average and fuzzy logical relationships (ARMA-FR) [39] | 141.89 | 119.85 | 99.03 | 128.62 | 125.64 | 66.29 | 53.2 | 56.11 | 55.83 | 94.05 | 584.72
 | Back propagation neural network and high-order fuzzy-fluctuation trends (BPNN-HFT) [40] | 142.99 | 112.51 | 96.77 | 126.85 | 120.12 | 66.39 | 54.87 | 58.1 | 54.7 | 92.59 | 577.8
 | NFM-IE | 141.42 | 114.69 | 102.02 | 129.94 | 114.22 | 66.84 | 53.88 | 55.24 | 53.1 | 92.37 | 575.24
Table 5. RMSEs of forecast errors for the Shanghai Stock Exchange Composite Index SHSECI from 2007–2015.
Year | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | 2013 | 2014 | 2015 | Average
ARMA-FR (2017) [39] | 129.22 | 79.77 | 59.96 | 49.48 | 29.7 | 23.14 | 22.13 | 44.11 | 58.89 | 55.15
BPNN-HFT (2018) [40] | 123.89 | 57.44 | 48.92 | 47.34 | 28.37 | 25.84 | 21.43 | 50.59 | 59.69 | 51.50
NFM-IE | 112.10 | 51.98 | 49.37 | 45.58 | 28.22 | 24.92 | 20.21 | 50.44 | 59.77 | 49.17
Table 6. RMSEs of forecast errors for the Hong Kong-Hang Seng Index (HSI) from 1998–2012.
Method | 1998 | 1999 | 2000 | 2001 | 2002 | 2003 | 2004 | 2005 | 2006 | 2007 | 2008 | 2009 | 2010 | 2011 | 2012 | Average
Yu (2005) [41] | 291.4 | 469.6 | 297.05 | 316.85 | 123.7 | 186.16 | 264.34 | 112.4 | 252.44 | 912.67 | 684.9 | 442.64 | 382.06 | 419.67 | 239.11 | 359.66
Wan (2017) [42] | 326.62 | 637.1 | 356.7 | 299.43 | 155.09 | 226.38 | 239.63 | 147.2 | 466.24 | 1847.8 | 2179 | 437.24 | 445.41 | 688.04 | 477.34 | 595.26
Ren (2016) [43] | 296.67 | 761.9 | 356.81 | 254.07 | 155.4 | 199.58 | 540.19 | 1127 | 407.89 | 1028.7 | 593.8 | 435.18 | 718.33 | 578.7 | 442.44 | 526.46
Cheng (2018) [10] | 201.99 | 231.91 | 251.7 | 156.58 | 106.26 | 118.74 | 105.38 | 103.96 | 189.2 | 682.08 | 460.12 | 326.65 | 260.67 | 346.33 | 190.13 | 248.78
NFM-IE | 195.86 | 223.91 | 246.11 | 163.49 | 105.65 | 122.04 | 102.23 | 105.37 | 173.55 | 694.89 | 469.11 | 319.7 | 274.73 | 347.2 | 181.98 | 248.39
Table 7. The rank of the forecasting results of the HSI.
Method | Rank
Yu (2005) [41] | 3.40
Wan (2017) [42] | 4.40
Ren (2016) [43] | 4.20
Cheng (2018) [10] | 1.53
NFM-IE | 1.47
