Modified MF-DFA Model Based on LSSVM Fitting

This paper proposes a multifractal least squares support vector machine detrended fluctuation analysis (MF-LSSVM-DFA) model, an extension of the traditional MF-DFA model. To address the potential overfitting or underfitting caused by the fixed-order polynomial fitting in MF-DFA, LSSVM is employed as a superior alternative for fitting. This approach enhances model accuracy and adaptability, ensuring more reliable analysis results. We use the p-model to construct multiplicative cascade time series and evaluate the performance of MF-LSSVM-DFA, MF-DFA, and two other recently proposed improvements of MF-DFA. The results demonstrate that the proposed modified model yields generalized Hurst exponents h(q) and scaling exponents τ(q) that align more closely with the analytical solutions, indicating superior correction effectiveness. In addition, we explore the sensitivity of MF-LSSVM-DFA to the overlapping window size s and find it to be lower than that of MF-DFA: even when s exceeds the range permitted by traditional MF-DFA, the resulting h(q) and τ(q) remain closer to the analytical solutions than those obtained by MF-DFA with s inside its permitted range. We also compare the fitting performance of the two models, and the results imply that MF-LSSVM-DFA achieves a clearly better fit. Furthermore, we put the proposed MF-LSSVM-DFA into practice in the medical field and find that, compared with MF-DFA, it improves the accuracy of EEG signal classification as well as the stability and robustness of the algorithm. Finally, numerous image segmentation experiments verify the effectiveness and robustness of the proposed method.


Introduction
Fractal theory has been an important basis for research on nonstationary and nonlinear time series since it was proposed by Mandelbrot. Many scholars have utilized fractal methods to explore the complexity and systemic behavior of time series. Fractal theory has played a significant role in the natural sciences [1], material structure [2], and medicine [3], where it has been used for accurate EEG signal segmentation through fractal dimension analysis. It provides unique tools and methodologies that aid in understanding and analyzing the complex phenomena and structures within these fields. To obtain a more detailed description of a fractal, parameters describing different fractal subsets must be added, which leads to multifractal theory, a current research hotspot [4][5][6][7].
Based on extensive empirical research on Nile water flow, Hurst [8,9] proposed Rescaled Range Analysis (R/S), the most classic method for exploring the long-memory nature of time series. Owing to the sensitivity of R/S to outliers, Peng et al. [10] proposed the detrended fluctuation analysis (DFA) method. However, both R/S and DFA capture only the monofractal features of a time series; they cannot describe the whole structure of the object. Therefore, Kantelhardt et al. [11] combined multifractal theory and DFA to construct multifractal detrended fluctuation analysis (MF-DFA), which has been widely used to study the multifractal nature of time series [12].
Since the MF-DFA method was put forward, it has shone in many fields, playing an important role in the analysis of one-dimensional sequences. In finance, MF-DFA can effectively analyze multifractal characteristics at different time scales. Thompson and Wilson [13] used MF-DFA to show the volatility clustering and other highly irregular behavior of financial time series, and the method is commonly used to examine market efficiency [14][15][16]. MF-DFA also enables the effective analysis of social phenomena and events [17,18] and has been widely used in other sequence processing tasks [19][20][21][22]. Wang et al. [23] applied MF-DFA to the classification of ECG signals. Cao et al. [24] used MF-DFA to extract features of EEG signals, and the proposed method can be helpful for the recognition of sudden pain. Yu et al. [25] investigated daily rainfall time series in the Yangtze River basin. Beyond one-dimensional time series, MF-DFA performs well on two-dimensional (2D) images. Many researchers have explored 2D surfaces using the MF-DFA method [26][27][28]. Wang et al. utilized MF-DFA for classifying 2D images, including magnetic resonance images [29] and retinal images [30]. Additionally, Shi et al. [31] applied MF-DFA to the segmentation of 2D images.
To enhance the performance of the MF-DFA model, many scholars have extended it in different ways. Since MF-DFA studies only a single time series, Zhou [32] combined MF-DFA with DCCA, proposed by Podobnik and Stanley [33], to construct the MF-DCCA method, which can explore the cross-correlation of two sets of sequences. However, the fitting in both MF-DFA and MF-DCCA is polynomial fitting, so inherent and polynomial trends cannot be completely eliminated [34]. Therefore, considering the sensitivity of DFA to trends, Xu et al. [35] proposed a smoothing algorithm based on the discrete Laplace transform to minimize the influence of exponential trends and distortion in the double-log plots produced by MF-DFA. Subsequently, the detrended moving average was applied to two nonlinear time series. Nian and Fu [36] incorporated extended self-similarity to construct a novel MF-DFA method whose performance exceeded that of traditional MF-DFA.
The fitting method in the traditional MF-DFA model is polynomial fitting, which is simple and easy to implement. Polynomial fitting works well for small datasets and simple relationships and is the foundation of many nonlinear models. However, the fitting order must be determined manually, which easily leads to underfitting or overfitting. The support vector machine (SVM), with its excellent generalizability and robustness, is commonly used in classification, regression prediction, function approximation, and time series prediction. The least squares support vector machine (LSSVM) is an extended form of SVM that replaces the quadratic optimization of SVM with a least squares formulation. By using the Lagrange function, LSSVM reduces the number of parameters, decreases the complexity of the algorithm, and improves computational efficiency.
Since the MF-DFA method is widely used to handle nonlinear and nonstationary time series, and since the fixed order of its polynomial fitting destines the method to suffer from underfitting or overfitting, in this paper the LSSVM fit is used to replace the polynomial fit.
The paper is organized as follows. In Section 2, we introduce the generation of multiplicative cascade time series and describe the MF-DFA, LSSVM, and MF-LSSVM-DFA algorithms. Section 3 evaluates the performance of the MF-LSSVM-DFA method through numerical experiments. In Section 4 we apply MF-LSSVM-DFA to the classification of EEG signals in the medical field, and in Section 5 to image segmentation. Conclusions are presented in Section 6.

Methodology
All computations are performed using MATLAB R2020a on an Intel(R) Core(TM) i5-7200U CPU @ 2.50 GHz.

Multifractal Detrended Fluctuation Analysis
MF-DFA develops from DFA and is an important tool for exploring the multifractal nature of non-stationary time series. The specific steps of the MF-DFA algorithm are as follows.

Firstly, for a given time series {x_1, x_2, . . . , x_N}, construct the cumulative summation sequence

T(i) = Σ_{k=1}^{i} (x_k − x̄), i = 1, 2, . . . , N, (1)

where x̄ is the mean value of the sequence.

Secondly, fix the window size s and divide the new series into W non-overlapping windows, where W = int[N/s]. Since N is generally not a multiple of s, part of the series could be missed; therefore, T(i) is divided both from the beginning forward and from the end backward, so that a total of 2W intervals is obtained.

Thirdly, apply the least squares method to perform a k-th order polynomial fit y_ω(i) for the points in each interval ω:

y_ω(i) = a_k i^k + a_{k−1} i^{k−1} + · · · + a_1 i + a_0. (2)

The following step is to calculate the mean square error E²(s, ω):

E²(s, ω) = (1/s) Σ_{i=1}^{s} {T[(ω − 1)s + i] − y_ω(i)}², ω = 1, 2, . . . , W, (3)

E²(s, ω) = (1/s) Σ_{i=1}^{s} {T[N − (ω − W)s + i] − y_ω(i)}², ω = W + 1, . . . , 2W. (4)

Afterwards, the q-th order fluctuation function is obtained through

F_q(s) = { (1/2W) Σ_{ω=1}^{2W} [E²(s, ω)]^{q/2} }^{1/q}, q ≠ 0, (5)

while for q = 0 the logarithmic average is used:

F_0(s) = exp{ (1/4W) Σ_{ω=1}^{2W} ln E²(s, ω) }. (6)

The fluctuation function F_q(s) increases with s in a power-law relationship, which can be written as F_q(s) ∼ s^{h(q)}. Here h(q) is the generalized Hurst exponent, and when q = 2, h(q) is the standard Hurst exponent. If the original time series is monofractal, the scaling exponents of all intervals are consistent and h(q) is a constant independent of q; otherwise, the original time series has a multifractal nature.

When the sequence has a multifractal nature, the scaling exponent τ(q) is used to evaluate the degree of multifractality:

τ(q) = q h(q) − 1. (7)
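The steps above can be sketched in a few lines of Python (a minimal illustration rather than the authors' MATLAB implementation; the scale range, fitting order, and test signal are our own arbitrary choices):

```python
import numpy as np

def mf_dfa(x, scales, q_list, order=2):
    """Sketch of MF-DFA with k-th order polynomial detrending."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    # Step 1: cumulative profile T(i) = sum_{k<=i} (x_k - mean(x))
    T = np.cumsum(x - x.mean())
    F = np.zeros((len(q_list), len(scales)))
    for si, s in enumerate(scales):
        W = N // s
        # Step 2: 2W windows -- divide forward and backward
        segs = [T[w * s:(w + 1) * s] for w in range(W)]
        segs += [T[N - (w + 1) * s:N - w * s] for w in range(W)]
        E2 = []
        for seg in segs:
            t = np.arange(s)
            coef = np.polyfit(t, seg, order)      # Step 3: k-th order fit
            resid = seg - np.polyval(coef, t)
            E2.append(np.mean(resid ** 2))        # mean square error E^2(s, w)
        E2 = np.array(E2)
        for qi, q in enumerate(q_list):
            if q == 0:                            # logarithmic average for q = 0
                F[qi, si] = np.exp(0.5 * np.mean(np.log(E2)))
            else:
                F[qi, si] = np.mean(E2 ** (q / 2)) ** (1.0 / q)
    # Step 4: h(q) is the slope of log F_q(s) versus log s
    h = np.array([np.polyfit(np.log(scales), np.log(F[qi]), 1)[0]
                  for qi in range(len(q_list))])
    tau = np.array(q_list) * h - 1                # tau(q) = q h(q) - 1
    return h, tau
```

For uncorrelated white noise the standard Hurst exponent h(2) should come out near 0.5, which is a quick sanity check on the implementation.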

Least Square Support Vector Machine
SVM is a supervised learning model commonly used for pattern recognition, classification, and regression analysis. In particular, the fitting accuracy of SVM is high, which makes it suitable for small-sample data. LSSVM, proposed by Suykens [37], adopts a least squares linear system as the loss function, which decreases the computational complexity. In addition, when the same kernel function is used, LSSVM has one parameter fewer than traditional SVM, so its operation speed is faster. The LSSVM model is given as follows.
Given a set of training samples {(x_i, y_i)}, i = 1, 2, . . . , n, the LSSVM regression model takes the form

y(x) = ω^T φ(x) + α, (8)

where ω ∈ R^n represents the weight vector, φ(x) is the nonlinear function that maps the input space to a high-dimensional feature space, and α represents the bias.

According to the principle of risk minimization, the optimization problem is given as follows:

min J(ω, e) = (1/2) ω^T ω + (λ/2) Σ_{i=1}^{n} e_i², subject to y_i = ω^T φ(x_i) + α + e_i, i = 1, . . . , n. (9)

Here, λ represents the regularization parameter and e_i represents the random errors. The Lagrange multiplier method is adopted to solve the problem:

L(ω, α, e, β) = J(ω, e) − Σ_{l=1}^{n} β_l [ω^T φ(x_l) + α + e_l − y_l], (10)

where β_l is the Lagrange multiplier, also called the support value.

According to the optimality conditions ∂L/∂ω = 0, ∂L/∂α = 0, ∂L/∂e_l = 0, and ∂L/∂β_l = 0, we obtain

ω = Σ_{l=1}^{n} β_l φ(x_l), Σ_{l=1}^{n} β_l = 0, β_l = λ e_l, ω^T φ(x_l) + α + e_l − y_l = 0. (11)

Solving the above equations and eliminating ω and e, in a manner similar to the dual problem of SVM, gives

Σ_{l=1}^{n} β_l = 0, y_i = Σ_{l=1}^{n} β_l φ(x_l)^T φ(x_i) + α + β_i/λ, (12)

which can be written as the linear system

[0, 1^T; 1, Ω + I/λ] [α; β] = [0; y], (13)

with Ω_{ij} = φ(x_i)^T φ(x_j). Therefore, the model is as follows:

y(x) = Σ_{l=1}^{n} β_l K(x, x_l) + α, (14)

where K(x_i, x_j) = φ(x_i)^T φ(x_j) is called the kernel function.
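As a concrete illustration, the LSSVM linear system can be assembled and solved directly in Python (a sketch with a Gaussian/RBF kernel; the regularization and bandwidth values below are our own arbitrary choices, not taken from the paper):

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    # Gaussian (RBF) kernel matrix K[i, j] = exp(-||a_i - b_j||^2 / (2 sigma^2))
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, lam=10.0, sigma=1.0):
    """Solve the LSSVM linear system for the bias alpha and support values beta."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    # [[0, 1^T], [1, K + I/lam]] [alpha; beta] = [0; y]
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / lam
    b = np.concatenate(([0.0], y))
    sol = np.linalg.solve(A, b)
    return sol[0], sol[1:]                     # alpha, beta

def lssvm_predict(X_train, alpha, beta, X_new, sigma=1.0):
    # f(x) = sum_l beta_l K(x, x_l) + alpha
    return rbf_kernel(X_new, X_train, sigma) @ beta + alpha
```

Because the solution comes from a single linear solve rather than a quadratic program, the training step is simple and fast, which is the computational advantage LSSVM offers over standard SVM regression.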

MF-LSSVM-DFA Model
As shown in Section 2.1, fitting is an important step. However, the fitting method adopted by the traditional MF-DFA needs to be improved. Most time series are nonlinear, so using a single fitting order may cause some segments to be overfitted or underfitted. As an example, we give a nonlinear time series in Figure 1 and apply first-, second-, and third-order fits to it, as shown in Figure 2. It is obvious that when the fitting order equals 1, the second and third segments are underfitted. When the fitting order is 2, the fit of the second segment is good, but the first segment is overfitted and the last segment is underfitted. Figure 2c shows that when x belongs to [0, 100], the fit is overfitted. Therefore, when a single fitting order is used for a nonlinear time series, the fitting performance on some segments is disappointing. Accordingly, we replace the polynomial fitting function in the MF-DFA method with LSSVM; that is, we replace Equation (2) with Equation (14). Equations (3) and (4) are updated as follows:

E²(s, ω) = (1/s) Σ_{i=1}^{s} {T[(ω − 1)s + i] − ŷ_ω(i)}², ω = 1, 2, . . . , W, (15)

E²(s, ω) = (1/s) Σ_{i=1}^{s} {T[N − (ω − W)s + i] − ŷ_ω(i)}², ω = W + 1, . . . , 2W, (16)

where ŷ_ω(i) denotes the LSSVM fit of interval ω.
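In code, the modification amounts to swapping the per-window polynomial detrending for an LSSVM fit of the window's profile. A minimal self-contained sketch (our own illustration; the RBF bandwidth heuristic and regularization value are assumptions, not taken from the paper):

```python
import numpy as np

def lssvm_detrend(seg, lam=50.0, sigma=None):
    """Detrend one MF-DFA window with an LSSVM (RBF kernel) fit
    instead of a fixed-order polynomial; returns E^2(s, w)."""
    s = len(seg)
    t = np.arange(s, dtype=float)[:, None]
    sigma = sigma or s / 10.0                    # heuristic kernel bandwidth
    K = np.exp(-((t - t.T) ** 2) / (2 * sigma ** 2))
    # LSSVM linear system: [[0, 1^T], [1, K + I/lam]] [alpha; beta] = [0; seg]
    A = np.zeros((s + 1, s + 1))
    A[0, 1:] = A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(s) / lam
    sol = np.linalg.solve(A, np.concatenate(([0.0], seg)))
    fit = K @ sol[1:] + sol[0]                   # LSSVM trend of the window
    return np.mean((seg - fit) ** 2)             # mean square error E^2
```

Calling this routine in place of the polynomial fit inside the MF-DFA window loop yields the MF-LSSVM-DFA fluctuation function; no fitting order needs to be chosen.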

Multiplicative Cascades Sequence
In this paper, we use multiplicative cascade sequences to measure the robustness of the proposed model. The multiplicative cascade time series is constructed by the p-model [38] owing to its simplicity [39]. The parameter p mainly determines the generated sequence. We define the length of the multiplicative cascade time series as N = 2^φ, where φ is a positive integer. The initial time series is generated as follows:

x_i = p^{ψ(i)} (1 − p)^{φ − ψ(i)}, i = 1, 2, . . . , N, (17)

where ψ(i) denotes the number of 1s in the binary representation of the index. In this paper, we set φ = 12 and the initial p = 0.75; the resulting series is shown in Figure 3. The analytic values of the scaling exponent τ(q) and the generalized Hurst exponent h(q) can be calculated as follows:

τ(q) = −log₂[p^q + (1 − p)^q], h(q) = [τ(q) + 1]/q. (18)
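The cascade construction and the analytic exponents can be reproduced with a short script (our sketch; we count ψ from the 0-based position of each element, one common indexing convention for the p-model):

```python
import numpy as np

def p_model(phi=12, p=0.75):
    """Multiplicative cascade series of length N = 2^phi (binomial p-model).
    x_i = p^psi * (1 - p)^(phi - psi), where psi is the number of 1s in the
    binary expansion of the 0-based index (conventions vary)."""
    N = 2 ** phi
    psi = np.array([bin(i).count("1") for i in range(N)])
    return p ** psi * (1 - p) ** (phi - psi)

def analytic_exponents(q, p=0.75):
    """Analytic tau(q) and h(q) for the binomial cascade,
    using the MF-DFA convention tau(q) = q h(q) - 1 (q != 0)."""
    q = np.asarray(q, dtype=float)
    tau = -np.log2(p ** q + (1 - p) ** q)
    h = (tau + 1) / q
    return h, tau
```

A useful property for checking the construction: the cascade conserves total measure, so the series sums to 1 regardless of p, and the analytic exponents satisfy h(1) = 1 and τ(1) = 0.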

Experiment Results
Several numerical experiments based on the p-model are performed to examine the robustness of the MF-LSSVM-DFA method. The related multifractal quantities are calculated as functions of q, which grows from −10 to 10 in steps of 1. The values obtained by MF-DFA and MF-LSSVM-DFA, together with the analytical values, are shown in Figure 4. It is obvious that the generalized Hurst exponent h(q) and the scaling exponent τ(q) calculated by MF-LSSVM-DFA are closer to the analytical solution than those calculated by MF-DFA, especially for q ⩾ 0, implying that the MF-LSSVM-DFA method performs better than the MF-DFA method.
We calculate the difference between the obtained Hurst exponent h(q) and the analytic value h_av(q). We set h1err(q) = |h1(q) − h_av(q)| and h2err(q) = |h2(q) − h_av(q)|, where h1(q) and h2(q) represent the generalized Hurst exponents computed by MF-DFA and MF-LSSVM-DFA, respectively. Similarly, we define τ1err(q) = |τ1(q) − τ_av(q)| and τ2err(q) = |τ2(q) − τ_av(q)|, where τ1(q) and τ2(q) are the scaling exponents calculated by MF-DFA and MF-LSSVM-DFA, respectively, and τ_av(q) represents the analytic value. These quantities measure the difference between the calculated and analytic values: the smaller the difference, the more accurate the fit. The results can be seen in Figure 5.
The comparison of the fitting in MF-DFA and MF-LSSVM-DFA is shown in Figure 6. The red lines show the fitting in MF-DFA, with fitting order 2; the yellow lines show the LSSVM fitting in MF-LSSVM-DFA. For the nonlinear multiplicative cascade time series, a single polynomial fit cannot fit every segment perfectly. Comparing the two fits, i.e., the red and yellow lines, it is obvious that the LSSVM fit captures more detail than the polynomial fit. The LSSVM fit of each segment has a clear boundary, whereas the polynomial fit appears to continue across segment boundaries. Moreover, as s varies, the polynomial fit changes more than the LSSVM fit.

Comparison between MF-LSSVM-DFA and MF-DFA with Different Fitting Order
Next, we compare the performance of MF-LSSVM-DFA with that of polynomial fitting of different orders. We set the order of the polynomial fitting to 1 and 3; the results are shown in Figures 7 and 8. It is obvious that regardless of whether the fitting order in MF-DFA is 1 or 3, the distance between the generalized Hurst exponent h(q) computed by MF-DFA and the analytic value is larger than that for MF-LSSVM-DFA. The h(q) calculated by MF-LSSVM-DFA almost coincides with the analytic value, indicating that the accuracy of the MF-LSSVM-DFA model is higher than that of the traditional MF-DFA model. We also find that the fitting order influences the accuracy of the MF-DFA model; therefore, a suitable fitting order should be chosen when using MF-DFA to decrease the difference between the calculated and analytic values. With the MF-LSSVM-DFA model, the order problem does not arise, and the relative error is smaller than that of the traditional MF-DFA model. Figure 8 also illustrates the comparison between polynomial fitting and LSSVM fitting: first, there is no need to select a fitting order when using LSSVM fitting; second, for most nonlinear and nonstationary time series, a fixed fitting order cannot perform well on every segment, so overfitting or underfitting of some intervals is inevitable.

Multifractal Analysis with Different s
In the proposed model, s is the size of the overlapping window. The traditional model is sensitive to changes in s; therefore, we explore the sensitivity of the proposed modified model to the window size s. Figure 9 shows the fitting in the two models when s = 60, 70, 80, respectively. Figures 10 and 11 present the generalized Hurst exponents and the scaling exponents when the range of s is small. We find that when q is less than 0, h1(q) is far from the analytic solution while h2(q) is distributed around it, especially in Figure 10a. In Figure 10b, the scaling exponent τ2(q) is almost completely consistent with the analytic value. Compared with h2(q), the difference between h1(q) and the analytic value is large, exceeding 10. As s grows, the difference between h2(q) and the analytic solution becomes slightly larger. However, compared with Figure 10b, although h1(q) becomes closer to the analytic value and its error becomes smaller, it still performs worse than h2(q). Therefore, when the segment size s is small, the MF-LSSVM-DFA model is excellent.
According to [40,41], in the traditional MF-DFA model the overlapping window size s should be smaller than N/9; that is, the maximum s ought to be smaller than N/9. Therefore, we explore the performance of the MF-LSSVM-DFA model when the segment size s is near and beyond this maximum value. The length of the multiplicative cascade sequence used in the numerical experiments is 2^12. We therefore select the interval [300, 400], whose maximum value is slightly less than N/9; the interval [500, 600], whose minimum value is slightly greater than N/9; and the interval [700, 800], whose minimum value is far larger than N/9. When the maximum s is slightly less than N/9, i.e., the range of s is a little below the boundary, the performance of the polynomial fitting is similar to before, while the performance of LSSVM is quite different. Observing Figure 12, the segmentation of the LSSVM fit is more obvious, and the trends of the LSSVM fit remain similar. By contrast, in the interval [1500, 2000], the polynomial fit opens upward when s equals 300 and 350, but opens downward when s is 400, completely opposite to the former cases. Therefore, the LSSVM fit is more suitable for nonlinear and nonstationary time series. From Figure 13a,b, when s ∈ [300, 400], h(q) and τ(q) are tightly distributed around the analytic value in both models; moreover, the generalized Hurst exponent and the scaling exponent calculated by MF-LSSVM-DFA are almost identical to the analytic values.
Combining Figures 4, 10, 11 and 15, we find that when the range of s lies within the traditionally specified interval, the performances of both the MF-DFA model and the MF-LSSVM-DFA model are gratifying, and the MF-LSSVM-DFA model is more outstanding.
Subsequently, we explore the performance of the MF-LSSVM-DFA model when the range of s outstrips this limit. Figures 14 and 16 present the fitting in the two models when s ∈ [500, 600] and s ∈ [700, 800], respectively. In MF-LSSVM-DFA, the fitting trend remains similar as s alters, whereas in the traditional MF-DFA model the opening directions of the fit in the same interval become opposite when s changes. Figures 15 and 17 exhibit the corresponding generalized Hurst exponents and scaling exponents. In both figures, when the minimum s is greater than N/9, the generalized Hurst exponent calculated by the proposed model is still closer to the analytic solution than that computed by conventional MF-DFA, and so is the scaling exponent. In addition, we find that the scaling exponent is less sensitive to s than the generalized Hurst exponent. In particular, when q is smaller than 0, h1(q) shifts from above the analytic solution to below it, and the difference between h1(q) and the solution in Figure 15 is larger than that in Figure 17. Therefore, compared with the MF-DFA model, the proposed MF-LSSVM-DFA model is relatively less affected by s.

In addition, we report the differences between h1(q) and the analytic value, and between h2(q) and the analytic value, for different ranges of s in Tables 1 and 2, respectively, together with the average value of H2err(q) in each range. The results show that when the maximum s is smaller than N/9, the average values of H2err(q) are almost all around 0.1. When s ∈ [500, 600], the average value increases, yet, surprisingly, only a little. When the minimum s becomes far larger than N/9, H2err(q) becomes large, and the average value for s ∈ [700, 800] is far larger than those of the former ranges. Comparing with Table 1, we find that for the same range of s, the average value of H1err(q) is greater than that of H2err(q). Moreover, the mean values of H1err(q) all stay below 1 while s is within the traditional range; however, when the minimum s exceeds N/9, the mean value grows beyond 1, indicating a significant discrepancy between the calculated Hurst exponent and the analytic value. Therefore, our proposed model achieves a higher-accuracy fit. From Tables 1 and 2, we find that when s belongs to [500, 600], H2err(q) is less than all the H1err(q) values, denoting that even when the overlapping window size s exceeds N/9, our proposed model still surpasses the traditional MF-DFA model. Even when s belongs to [700, 800], H2err(q) is smaller than H1err(q) for s in [50, 70]. At the same time, we explore whether the behavior of the scaling exponent is consistent with that of the generalized Hurst exponent. We calculate the difference between the scaling exponent and the theoretical value: τ1err(q) represents the absolute difference computed in MF-DFA, and τ2err(q) that computed in MF-LSSVM-DFA. We calculate τ2err(q) when s belongs to [500, 600] and [700, 800], respectively; the minimum values of both selected ranges exceed one ninth of the length of the multiplicative cascade time series. In addition, we compute τ1err(q) for the first five intervals of MF-DFA. The specific results are shown in Tables 3 and 4.
Figure 18 presents the distribution of the scaling exponents. From Figure 18a, we find that for the same q, the error of the scaling exponent obtained by MF-LSSVM-DFA with s in [500, 600] is almost always smaller than that calculated by MF-DFA, regardless of the interval of s. When the interval of s becomes larger, with the minimum s far larger than N/9, τ2err(q) remains smaller than τ1err(q) for s in [50, 70] and [300, 400], i.e., the scaling exponent of the proposed model stays closer to the analytic value. Therefore, our proposed model places weaker restrictions on the window size s.
To highlight the effectiveness of the proposed method, we compare it with existing improved MF-DFA methods. Yang et al. [42] addressed the potential presence of negative values in the original MF-DFA model by introducing sign retention, resulting in the sign-retention model MF-S-DFA. Additionally, Wang et al. [43] proposed the MF-LF-DFA algorithm, which improves the performance of MF-DFA by setting the fitting order of each local interval according to the fluctuation characteristics of the sequence. We calculate the Hurst values corresponding to different ranges of s. Let H3err(q) and H4err(q) denote the absolute differences between the results computed using the MF-S-DFA and MF-LF-DFA methods, respectively, and the analytical values. The final results are presented in Table 5.
Comparing the averages of H3err(q) and H4err(q) in each range of s with those in Table 2, we find that the proposed method achieves better performance than the two existing improved MF-DFA methods. Similarly, we calculate the differences between the scaling exponents of the two improved methods and the theoretical values across different ranges of s. Let τ3err(q) and τ4err(q) represent the absolute differences between the scaling exponents computed using the MF-S-DFA and MF-LF-DFA methods, respectively, and the theoretical values. We compare the τ2err(q) values for s in the range [700, 800] with the τ3err(q) and τ4err(q) values computed for three different intervals, as presented in Table 6. From the results, we can see that our method is superior to the existing improved MF-DFA methods.

MF-LSSVM-DFA for EEG Signal Classification
In this section, we apply the proposed method to a practical problem to verify whether MF-LSSVM-DFA is more effective than traditional MF-DFA. EEG signals, as typical nonlinear signals, have been widely utilized by scholars for analytical research [44][45][46]. This section conducts an empirical analysis of EEG data from the Epilepsy Laboratory at the University of Bonn in Germany. The generalized Hurst exponents of the EEG signals are extracted using both the MF-LSSVM-DFA and MF-DFA models and then used as input vectors for SVM to compare the effectiveness of the two methods.
The EEG signals are collected from the clinical EEG database of the Epilepsy Laboratory at the University of Bonn in Germany, a widely used public database. The dataset consists of EEG data from five healthy individuals and five patients with epileptic seizures. A total of 200 normal EEG signals and 300 epileptic signals are collected, all of which are single-channel recordings. Each sub-dataset contains 100 data segments, each lasting 23.6 s and comprising 4097 data points. The signal resolution is 12 bits, and the sampling frequency is 173.61 Hz.
For the EEG signal data, feature extraction is conducted using the MF-LSSVM-DFA algorithm and the MF-DFA algorithm. The fluctuation function order q ranges from −10 to 10. Non-overlapping windows are set with s_min = 100 and s_max = 190, with s increasing in steps of 9 from s_min to s_max. Using the MF-LSSVM-DFA algorithm for feature extraction, six generalized Hurst exponents are extracted to serve as input vectors for SVM classification. The extracted generalized Hurst exponents are depicted in Figure 19. From Figure 19, it can be seen that the EEG feature values extracted using the MF-LSSVM-DFA algorithm show a clear concentration trend for healthy individuals, with values primarily distributed in the range of 0.5 to 1.5. In contrast, the EEG feature values of epileptic patients exhibit a more dispersed distribution. For the same q value, the generalized Hurst values of EEG for healthy individuals are generally higher than those of epileptic patients, indicating a significant difference in the complexity and regularity of their brain electrical activity. Furthermore, we feed the obtained generalized Hurst exponents h(q) into an SVM classifier for further validation, measuring the classification by accuracy, sensitivity, and specificity. During classification, the SVM classifier employs a Gaussian kernel function, with 90% of the dataset used as the training set and 10% as the test set. We conduct classification validation using k-fold cross-validation with k = 10 and compute each classification evaluation metric after 100 iterations. The results are depicted in Figure 20. Meanwhile, to validate the effectiveness of the MF-LSSVM-DFA algorithm, the traditional MF-DFA algorithm is also employed for feature extraction from the EEG signals, with SVM used for classification; the corresponding accuracy, sensitivity, and specificity can be found in Figure 21. In addition, the mean values of accuracy, sensitivity, and specificity over the iterations are calculated and summarized in Table 7. From the data in Table 7, it can be observed that MF-LSSVM-DFA demonstrates higher accuracy, sensitivity, and specificity in the classification of EEG signals, proving its effectiveness and reliability in this field.
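The classification stage described above can be sketched as follows (a hedged illustration using scikit-learn with synthetic stand-in features; in the paper the feature vectors are the h(q) values extracted from the Bonn EEG segments, and the exact classifier settings are not confirmed by the source):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Stand-in feature matrix: six h(q) values per recording. These are
# synthetic placeholders; in the paper they come from MF-LSSVM-DFA
# applied to each EEG segment (200 healthy, 300 epileptic).
healthy = rng.normal(1.0, 0.15, size=(200, 6))
epileptic = rng.normal(0.6, 0.30, size=(300, 6))
X = np.vstack([healthy, epileptic])
y = np.array([0] * 200 + [1] * 300)

clf = SVC(kernel="rbf")                       # Gaussian kernel SVM
scores = cross_val_score(clf, X, y, cv=10)    # k-fold cross-validation, k = 10
print(f"mean accuracy: {scores.mean():.3f}")
```

Swapping in the MF-DFA-derived features in place of `X` reproduces the comparison between the two feature extractors under identical classifier settings.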

MF-LSSVM-DFA for Image Segmentation
In this section, to verify the effectiveness and robustness of our model, we apply the proposed model to image segmentation. For the specific segmentation method, refer to [30]. We first perform segmentation on a simple model; the segmentation results are shown in Figure 22. We observe that three different patterns reflect different segmentation states, indicating that our method can segment different targets under different states.
The results indicate that our proposed method can effectively perform edge segmentation on simple synthesized images. Finally, we analyze an image of an outdoor transmission tower, shown in Figure 24a. Owing to the complex background, the multiple image components, and the large gradients between pixels in outdoor images, effective target localization of transmission equipment is more difficult. We use the MF-LSSVM-DFA method for target localization in this image, as shown in Figure 24. We note that our proposed method can effectively segment transmission towers and their ancillary equipment against complex backgrounds.

Conclusions
In this paper, the MF-LSSVM-DFA model was proposed as a modified version of the MF-DFA model. Fitting is an important step in the traditional MF-DFA model and has a significant impact on its results. The traditional MF-DFA model adopts polynomial fitting; however, although the order of the polynomial can be adjusted, overfitting or underfitting, or even both, can still occur during the fitting process. Considering the excellent performance of LSSVM, we replaced the polynomial fitting with LSSVM to construct the MF-LSSVM-DFA model. Subsequently, multiplicative cascade time series were constructed for numerical experiments based on the p-model, and the generalized Hurst exponent h(q) and the scaling exponent τ(q) were used to examine the performance of MF-LSSVM-DFA. Firstly, we compared the performance of MF-LSSVM-DFA and MF-DFA with polynomial fitting order 2. Then, we varied the order of the polynomial fitting, and the results showed that the proposed model performs more admirably than MF-DFA. Afterwards, we examined the sensitivity of MF-LSSVM-DFA to the overlapping window size s. When s was larger than N/9, the increase in the difference between h(q) and the analytic value in the MF-LSSVM-DFA model was smaller than that in the MF-DFA model, denoting that MF-LSSVM-DFA is less sensitive to s than MF-DFA and thus more universally applicable. In addition, we analyzed the fitting performance and found that the fit was more detailed in MF-LSSVM-DFA than in MF-DFA, indicating that our proposed model outperforms the traditional MF-DFA. Finally, we put the proposed MF-LSSVM-DFA into practice, where it performed better than the traditional MF-DFA method. The improvement not only raised the accuracy of EEG signal classification but also enhanced the stability and robustness of the algorithm. By introducing LSSVM as an optimization tool, MF-LSSVM-DFA can deal with complex, nonlinear EEG signals more effectively, providing a more accurate and reliable method for EEG signal analysis and epilepsy diagnosis. Therefore, replacing the polynomial fit with the LSSVM fit can effectively improve the performance of MF-DFA. In addition, we employed the proposed method to segment images of different categories, such as simple synthesized, hyperspectral insulator, and outdoor transmission tower images. The results validated the excellent performance of our model in image segmentation.

Figure 1. A nonlinear time series.

Figure 2. Least-squares fitting of different orders. The blue dots indicate the original data, and the red lines show the fitted data. (a) The order of the least-squares fit for each segment is 1; (b) the order is 2; (c) the order is 3.

Figure 4. Multifractal analysis of the p-model: (a) the generalized Hurst exponent h(q) obtained by MF-DFA and MF-LSSVM-DFA together with the analytic value; (b) the scaling exponent spectrum τ(q) obtained by MF-DFA and MF-LSSVM-DFA together with the analytic value.

In Figure 4, ♦ represents the values calculated by MF-DFA, ▲ represents the values calculated by MF-LSSVM-DFA, and the black line is the analytic value. For q ⩾ 0, the generalized Hurst exponent h(q) and the scaling exponent τ(q) calculated by MF-LSSVM-DFA are clearly closer to the analytic solutions than those calculated by MF-DFA, implying that the MF-LSSVM-DFA method performs better than the MF-DFA method. In addition, we calculate the difference between the obtained Hurst exponent h(q) and the analytic value h_av(q). We set h1_err(q) = |h1(q) − h_av(q)| and h2_err(q) = |h2(q) − h_av(q)|, where h1(q) and h2(q) represent the generalized Hurst exponents computed by MF-DFA and MF-LSSVM-DFA, respectively. Analogously, we define τ1_err(q) = |τ1(q) − τ_av(q)| and τ2_err(q) = |τ2(q) − τ_av(q)|, where τ1(q) and τ2(q) are the scaling exponents calculated by MF-DFA and MF-LSSVM-DFA, respectively, and τ_av(q) is the analytic value. The smaller these differences are, the more accurate the fit is. The results can be seen in Figure 5, and the comparison of the fits of MF-DFA and MF-LSSVM-DFA is shown in Figure 6. The red lines denote the second-order polynomial fit in MF-DFA, and the yellow lines show the LSSVM fit in MF-LSSVM-DFA. For the nonlinear multiplicative cascade time series, a single polynomial cannot fit every segment well; comparing the red and yellow lines, it is evident that the LSSVM fit follows the data in more detail than the polynomial fit.
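The error metrics defined above can be computed in a few lines. The sketch below assumes the standard analytic solution of the binomial (p-model) cascade, h(q) = 1/q − ln(a^q + (1 − a)^q)/(q ln 2) for q ≠ 0, with τ(q) = q h(q) − 1; the parameter value a = 0.3 is purely illustrative, not the paper's setting.

```python
import numpy as np

def binomial_h_analytic(q, a=0.3):
    """Analytic generalized Hurst exponent h_av(q) of the binomial
    p-model cascade: h(q) = 1/q - ln(a^q + (1-a)^q) / (q ln 2), q != 0."""
    q = np.asarray(q, dtype=float)
    return 1.0 / q - np.log(a ** q + (1.0 - a) ** q) / (q * np.log(2.0))

def hurst_and_tau_errors(h_est, q, a=0.3):
    """Absolute errors h_err(q) = |h(q) - h_av(q)| and
    tau_err(q) = |tau(q) - tau_av(q)| with tau(q) = q h(q) - 1
    (the constant -1 cancels in the difference)."""
    q = np.asarray(q, dtype=float)
    h_av = binomial_h_analytic(q, a)
    h_err = np.abs(h_est - h_av)
    tau_err = np.abs(q * h_est - q * h_av)
    return h_err, tau_err
```

With a = 0.5 the cascade degenerates to a monofractal with h(q) = 1 for all q, which provides a quick sanity check of the formula.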

Figure 5. Comparison of the difference between the calculated value and the analytic value: (a) the relative error of the Hurst exponent h_err(q); (b) the relative error of the scaling exponent τ_err(q). The fitting order adopted in MF-DFA is 2 and q belongs to [−10, 10].

Figure 7. Multifractal features of the MF-DFA and MF-LSSVM-DFA models: the generalized Hurst exponent in (a,b) and the scaling exponent in (c,d). From left to right, the fitting order of the MF-DFA model is 1 and 3. The range of s is [50, 70].

Figure 8. (a,b) The performance of the polynomial fitting with fitting order 1 and 3, respectively (black); (c) the performance of the LSSVM fitting (red). The range of s is [50, 70].

Figures 9-17 exhibit the fitting performance, the generalized Hurst exponent, and the scaling exponent. In general, the distribution of the generalized Hurst exponent computed by the MF-LSSVM-DFA model is more compact than that calculated by the MF-DFA model, and the Hurst exponents obtained from the MF-LSSVM-DFA model lie around the analytic values.
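The "compactness" of the h(q) estimates across window sizes can be quantified as the per-q standard deviation over the different values of s. This is one simple way to express the comparison made above, not necessarily the measure used in the paper; the input array shape is an assumption for illustration.

```python
import numpy as np

def hq_spread(h_estimates):
    """Per-q spread of h(q) estimates across several window sizes s.
    h_estimates has shape (num_s_values, num_q); a smaller standard
    deviation per column means a tighter (more compact) distribution
    of the estimated Hurst exponents around a single curve."""
    H = np.asarray(h_estimates, dtype=float)
    return H.std(axis=0)
```

Comparing hq_spread of the MF-DFA estimates against that of the MF-LSSVM-DFA estimates then gives a scalar-per-q summary of which model is less sensitive to the choice of s.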

Figure 9. Fitting forms of the dispersion sequence. From top to bottom, the adopted models are MF-DFA (black) and MF-LSSVM-DFA (red), respectively. From left to right, s = 60, 70, 80, respectively. The fitting order in MF-DFA is set to 2.

Figure 12. Fitting forms of the dispersion sequence. From top to bottom, the adopted models are MF-DFA (black) and MF-LSSVM-DFA (red), respectively. From left to right, s = 300, 350, 400, respectively. The fitting order in MF-DFA is set to 2.

Figure 14. Fitting forms of the dispersion sequence. From top to bottom, the adopted models are MF-DFA (black) and MF-LSSVM-DFA (red), respectively. From left to right, s = 500, 550, 600, respectively. The fitting order in MF-DFA is set to 2.

Figure 19. (a,b) The generalized Hurst exponents extracted from EEG signal data based on MF-LSSVM-DFA for healthy persons and epileptic patients, respectively.

Figure 21. Evaluation metrics for EEG signal classification based on the conventional MF-DFA algorithm: (a) accuracy; (b) sensitivity; and (c) specificity.

Figure 22. Simple synthesized image segmentation by the MF-LSSVM-DFA method: (a) original image; (b) mask (white); and (c) segmented state (blue).

Subsequently, we apply our proposed method to the segmentation of hyperspectral insulator images. Because hyperspectral insulator images have low recognizability, effective segmentation of such images demonstrates the advantages and efficiency of the MF-LSSVM-DFA method. The segmentation process of the hyperspectral insulator is shown in Figure 23.

Table 1. MF-DFA: h1_err(q) based on different ranges of s.

Table 3. Comparison of the absolute value of the scaling exponent error τ_err(q). The range of s in MF-LSSVM-DFA is [500, 600].

Table 4. Comparison of the absolute value of the scaling exponent error τ_err(q). The range of s in MF-LSSVM-DFA is [700, 800].

Table 6. Comparison of the proposed method, MF-S-DFA, and LF-MF-DFA in terms of the absolute value of the scaling exponent error τ_err(q).

Table 7. Evaluation metrics for the two classification algorithms.