Development of the Non-Iterative Supervised Learning Predictor Based on the Ito Decomposition and SGTM Neural-Like Structure for Managing Medical Insurance Costs †

The paper describes a new non-iterative linear supervised learning predictor. It is based on the use of Ito decomposition and the neural-like structure of the successive geometric transformations model (SGTM). Ito decomposition (Kolmogorov–Gabor polynomial) is used to extend the inputs of the SGTM neural-like structure. This provides high approximation properties for solving various tasks. The search for the coefficients of this polynomial is carried out using the fast, non-iterative training algorithm of the SGTM linear neural-like structure. The developed method provides high speed and increased generalization properties. The simulation of the developed method’s work for solving the medical insurance costs prediction task showed a significant increase in accuracy compared with existing methods (common SGTM neural-like structure, multilayer perceptron, Support Vector Machine, adaptive boosting, linear regression). Given the above, the developed method can be used to process large amounts of data from a variety of industries (medicine, materials science, economics, etc.) to improve the accuracy and speed of their processing.


Introduction
Health insurance is one of the main directions of modern healthcare system development [1,2]. The prediction of individual health insurance costs is one of the most important tasks in this direction. The application of commonly used regression methods [3] does not provide satisfactory results in solving this task. In the big data era, the problem is deepened by the need for accurate and quick operation of such methods [4,5].
The availability of a large amount of data makes it possible to use artificial intelligence to solve this task. The use of computational intelligence allows the hidden dependencies in the data set to be taken into account [6]. In most cases, it can increase the accuracy of individual health insurance cost prediction. Existing neural network tools [7,8] demonstrate sufficient accuracy. However, they do not always provide a satisfactory speed of the training procedures. The use of a multilayer perceptron [9] for processing large amounts of data necessitates large volumes of memory. In addition, this tool does not always provide satisfactory generalization properties [10]. The main drawback of RBF networks for solving this task is that they provide only a local approximation of the nonlinear response surface [10]. Moreover, this method is characterized by the "curse of dimensionality", which imposes a number of restrictions on its use for processing large amounts of data [11]. In the works of [12,13], the backpropagation algorithm is used to implement the training procedure. The large number of epochs of this algorithm, as well as the large amount of input data, causes large time delays during its use.
Deep learning methods are associated with large time delays for training, long-term debugging procedures, and the need to interpret the output signals of each hidden layer. They are designed primarily for image processing tasks [14].
The training procedures of the known machine learning algorithms are fast [15]; however, these methods are inferior in the accuracy of their prediction results [16].
That is why it is necessary to develop new, or improve existing, individual insurance cost prediction methods and tools that provide high prediction accuracy with sufficient training speed.

Data Analysis
To solve the regression task, the medical insurance cost prediction dataset (dataset: https://www.kaggle.com/mirichoi0218/insurance; license: Open Database) was selected from Kaggle [17]. It contains 1338 observations of personal medical insurance costs. Each vector includes six input attributes and one output (Table 1). The task is to predict the individual costs of health insurance. We will consider all of the independent variables in more detail:

•
Body mass index (BMI). This is the ratio of a person's weight to the square of their height (kg/m²). The minimal BMI is 15.96, the maximum is 53.12, and the average is 30.66, which is higher than normal.

•
The number of dependents (Children). This is the number of children covered by medical insurance. This indicator ranges from 0 to 5, and the average is 1.095. The individual insurance costs (IIC) is the output variable.


Data Preparation
We will make a series of transformations of the input data in order to represent it from text in a numerical form (binary coding). In particular, we will add five new columns as follows: each of the Insurance contractor gender and Smoking columns will turn into two, namely male (M) and female (F), and smoker and non-smoker, respectively. The column Beneficiary's residential area in the United States will be transformed into four different ones, each corresponding to one of the four U.S. regions: Area 1 is Southwest, Area 2 is Southeast, Area 3 is Northwest, and Area 4 is Northeast. Thus, a new data sample was obtained. The vectors of each of the 1338 observations contain 11 input numeric attributes. They are given in Table 2. Figure 1 shows the scatter plot of the dataset from Table 2 using Orange Software, version 3.13.0 [18]. The circles mark women, and the crosses mark men. The blue circles mark women non-smokers, and the red circles mark women smokers. The blue crosses mark male smokers, and the red crosses mark male non-smokers. The size of the figures reflects the value of the body mass index: the larger the index, the larger the corresponding shape. Shapes and colors were chosen randomly.
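The encoding described above can be sketched as follows. This is a minimal illustration: the category strings ("male", "southwest", etc.) are assumed labels, since the text does not reproduce the dataset's exact values.

```python
# Sketch of the described preprocessing (text -> numeric, binary coding).
# The category strings below are assumptions for illustration only.

def encode_row(age, gender, bmi, children, smoker, area):
    """Turn one raw observation into the 11 numeric inputs of Table 2."""
    gender_cols = [1 if gender == "male" else 0,    # M
                   1 if gender == "female" else 0]  # F
    smoker_cols = [1 if smoker else 0,              # smoker
                   0 if smoker else 1]              # non-smoker
    areas = ["southwest", "southeast", "northwest", "northeast"]
    area_cols = [1 if area == a else 0 for a in areas]  # Areas 1-4
    return [age, bmi, children] + gender_cols + smoker_cols + area_cols

# 3 numeric attributes + 2 gender + 2 smoking + 4 area = 11 inputs
row = encode_row(19, "female", 27.9, 0, True, "southwest")
# -> [19, 27.9, 0, 0, 1, 1, 0, 1, 0, 0, 0]
```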

Predictor Based on the Ito Decomposition and Neural-Like Structure of the Successive Geometric Transformations Model (SGTM)
This paper proposes a new method focused on high-speed realization and universal application for regression and classification tasks.

Linear Neural-Like Structure of the Successive Geometric Transformations Model
The authors of [19] have described the topology and the training algorithm of a new non-iterative neural-like structure for solving various tasks. It is based on the successive geometric transformations model (SGTM) and can work in supervised and unsupervised modes. The topology of this linear computational intelligence tool is demonstrated in Figure 2. Its distinctive feature is the ordered lateral connections between adjacent neurons of the hidden layer. The procedures of training and functioning of this instrument are of the same type.
The greedy non-iterative training algorithm ensures the repeatability of the solution and allows the common SGTM neural-like structure to be used effectively for processing large amounts of data. Detailed mathematical descriptions and flowcharts of the training and operation procedures of the common SGTM neural-like structure are given in the work of [20].



The Ito Decomposition
The accurate approximation of nonlinear dependencies is one of the important tasks in processing large amounts of data. Existing machine learning methods do not always provide sufficiently precise results for solving this task.
According to the Weierstrass theorem, any continuous function on a given interval can be approximated arbitrarily precisely by a series of polynomials [21]. Another mathematical proof of the approximability of any continuous function is the universal approximation theorem (an extension of the Weierstrass theorem).
The Ito decomposition (Kolmogorov-Gabor polynomial) is widely used for the development of various nonlinear approximation models [22-26]. The general view of the second-degree polynomial can be written as follows [6]:

Y(x_1, ..., x_n) = a_0 + Σ_{i=1}^{n} a_i x_i + Σ_{i=1}^{n} Σ_{j=i}^{n} a_{ij} x_i x_j. (1)

Under the conditions of processing large amounts of multiparametric data, the search for the polynomial's coefficients is a non-trivial task. Existing methods, in particular the least squares method and singular value decomposition, do not provide sufficient speed [6]. That is why applying the Kolmogorov-Gabor polynomial to the elaboration of big data processing models requires the development of new, more efficient algorithms for finding its coefficients.
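As a concrete illustration, the members of the second-degree Kolmogorov-Gabor expansion can be generated as follows (a minimal sketch; the constant term a_0 is treated as a separate bias):

```python
from itertools import combinations_with_replacement

def ito_second_degree(x):
    """Second-degree Kolmogorov-Gabor terms of x: every x_i plus every
    product x_i * x_j with i <= j (the constant a_0 is a separate bias)."""
    linear = list(x)
    quadratic = [a * b for a, b in combinations_with_replacement(x, 2)]
    return linear + quadratic

# n inputs expand into n + n*(n+1)/2 terms; e.g. n = 3 gives 3 + 6 = 9:
terms = ito_second_degree([1.0, 2.0, 3.0])
# -> [1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 4.0, 6.0, 9.0]
```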


The Composition of the Non-Iterative Supervised Learning Predictor Using Ito Decomposition
The proposed linear non-iterative prediction method is based on combining the Ito decomposition (Kolmogorov-Gabor polynomial) and the SGTM neural-like structure [6]. According to the method, the input (independent) parameters are represented as members of this polynomial. The SGTM neural-like structure is used to find the Kolmogorov-Gabor polynomial's coefficients. The benefits of this process are fast training, as well as the repeatability of the solution. Figure 3 demonstrates the topology of the proposed non-iterative neural-like predictor, which contains two blocks [6]. The input data is converted in the first block (preprocessing) according to Equation (1). The number of the input layer's neurons of the proposed method, when a second-degree polynomial is chosen, can be calculated according to the following formula [6]:

N = n + n(n + 1)/2 = n(n + 3)/2, (2)

where n is the number of initial inputs from Table 2 (n = 11, so N = 77). As a result of the fast, non-iterative training, the coefficients of the Kolmogorov-Gabor members are calculated in the hidden layer of the proposed model's second block (Figure 3). Then, they are used to solve the task [6].
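A rough end-to-end sketch of the two-block scheme follows. The coefficient search here uses ordinary least squares purely as a readily available stand-in: the paper's actual contribution is performing this step with the fast non-iterative SGTM training algorithm, which is not reproduced here. The data is synthetic and illustrative.

```python
import numpy as np
from itertools import combinations_with_replacement

def expand(x):
    # Block 1: second-degree Kolmogorov-Gabor terms (bias a_0 added later).
    return list(x) + [a * b for a, b in combinations_with_replacement(x, 2)]

n = 11                       # inputs after preprocessing (Table 2)
width = n * (n + 3) // 2     # Equation (2): n + n*(n+1)/2
print(width)                 # -> 77, the input-layer size used in the text

# Block 2: coefficient search. Least-squares stand-in on synthetic data;
# the paper uses the non-iterative SGTM structure for this step instead.
rng = np.random.default_rng(0)
X_raw = rng.random((200, n))
y = X_raw @ rng.random(n)                     # synthetic target, illustration only
X = np.array([expand(r) for r in X_raw])
X = np.hstack([np.ones((len(X), 1)), X])      # bias column for a_0
coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
pred = X @ coeffs                             # approximates y
```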


Modelling and Results
The simulation of the proposed method was carried out using the authors' own software (a console application). The main parameters of the computer on which the experiments were carried out are as follows: memory: 8 GB; CPU: Intel® Core(TM) i5-6200U, 2.40 GHz.
The parameters of the proposed method (SGTM + Ito decomposition) are as follows: 77 neurons in the input and hidden layers and 1 output. The second-degree Kolmogorov-Gabor polynomial was chosen for modeling. The mean absolute percentage error (MAPE) for the proposed method was 30.82%.
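The text does not spell out its error formula; presumably MAPE is computed in the standard way, as in this minimal sketch:

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent (standard definition,
    assumed here). Assumes no zero actual values, which holds for
    insurance costs."""
    return 100.0 * sum(abs((a - p) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

# e.g. errors of 10% and 5% on two observations average to 7.5%:
err = mape([100.0, 200.0], [110.0, 190.0])  # -> 7.5
```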
The mathematical basis for applying feedforward networks with one hidden layer to approximation tasks is the universal approximation theorem. According to the theorem, the best approximation accuracy is obtained with a large number of neurons in the hidden layer [27]. However, in this case, according to the authors of [28], there is a possibility of overfitting.
A necessary estimation of the proposed method's work is the ratio of the model's complexity to the accuracy of its work [24,29]. The complexity of the model in this case is influenced by two parameters: the degree of the Kolmogorov-Gabor polynomial (which is why the second-degree polynomial was chosen) and the number of the hidden layer's neurons of the SGTM linear neural-like structure. The conducted experimental studies demonstrated that some of the polynomial's coefficients, which are formed in the hidden layer, make a very small contribution to the result, while their calculation greatly increases the duration of the method. That is why research was conducted to determine the optimal complexity of the model for the proposed method. The results of this experiment are listed in Appendix A, Table A1.
Figure 4a demonstrates the ratio of the number of neurons in the hidden layer of the proposed method to the accuracy of its work (MAPE) on the interval of 25-50 neurons with a step of 5. As can be seen from Figure 4a, the optimal result of the method is 35 neurons in the hidden layer. All other indicators in Appendix A, Table A1 demonstrate the same result. Figure 4b confirms the obtained result regarding the duration of the training procedure. The training procedure for the optimized version of the method is much shorter than the training of the proposed method without optimal parameter selection. In addition, by reducing the number of neurons in the hidden layer from 77 to 35, it was possible to neutralize the effect of noise components. The topology of the optimized version of the method is demonstrated in Figure 5.
As can be seen from Table 3, the optimal parameter selection of the proposed method (according to all five indicators) allowed the following:

•
to increase the generalization properties of the method (the difference between the MAPE indicators in the training and testing modes is 2.40% and 1.12% for the developed and optimized methods, respectively);

•
to increase the accuracy of the optimized method by 1.34%.
In addition, it was possible to reduce the duration of the training procedure by 0.22 s. In terms of big data processing, all of the above are significant advantages.

Comparison and Discussion
The results of the developed method (optimized version) were compared with the results of the known methods [6], which are demonstrated in Figure 6.
As can be seen from the figure, the common SGTM neural-like structure provides the lowest error value for the regression task among all of the known methods. However, the use of the Ito decomposition significantly improves the accuracy of the method in both modes of operation, by 1.5 and 1.3 times, respectively. This is because the linear non-iterative SGTM neural-like structure provides an exact search for the coefficients of the Kolmogorov-Gabor polynomial. In addition, reducing the number of the hidden layer's neurons allows discarding components (members of the polynomial) that do not affect the result. In this way, an effective approximation procedure with high accuracy is carried out.
The duration of the training procedure plays an important role in applying computational intelligence methods to the practical tasks of processing large data arrays. That is why this work compares the duration of the training procedure for all of the considered methods. Figure 7 demonstrates the results of this investigation. As can be seen from Figure 7, the multilayer perceptron demonstrates the longest training time. The linear common SGTM neural-like structure provides one of the best results and is inferior only to linear regression. However, the latter method demonstrates poor accuracy (Figure 6). The developed method demonstrates 10 times faster training compared with the multilayer perceptron and less than 8 times slower training compared with the common SGTM neural-like structure. Obviously, the working time of the developed method has increased, as the dimension of the input space has significantly increased due to the use of the Ito decomposition in accordance with Equation (2). However, the developed method demonstrates the best results both in the accuracy of its work and in the generalization properties of the chosen computational intelligence instrument.
Figure 8 illustrates the work of all of the investigated methods in the form of scatter plots. Figure 8f confirms that the developed method provides the best accuracy among those considered.

Figure 1.
Figure 1. Dataset visualization using Orange Software, version 3.13.0. The x-axis represents the insurance contractor's age, and the y-axis is the size of the medical insurance costs. The circles mark women, and the crosses mark men. The blue circles mark women non-smokers, and the red circles mark women smokers. The blue crosses mark male smokers, and the red crosses mark male non-smokers. The size of the figures reflects the value of the body mass index: the larger the index, the larger the corresponding shape. Shapes and colors were chosen randomly.

Figure 2.
Figure 2. Topology of the common linear neural-like structure of the successive geometric transformations model (SGTM).


Figure 3.
Figure 3. Topology of the proposed model (combining the use of the Ito decomposition and SGTM linear neural-like structure).


Figure 4.
Figure 4. Optimal parameters identification for training and test modes: (a) mean absolute percentage error (MAPE) value according to changing the hidden layer's neurons number, (b) MAPE value according to changing the training time.


Figure 5.
Figure 5. Topology of the proposed optimized model.


Figure 6.
Figure 6. Mean absolute error (MAE) in the training and testing modes for the developed and existing methods. On the x-axis, the mean absolute error value for all of the considered methods is illustrated.


Figure 7.
Figure 7. The training time for all methods.


•
Smoking (Smoker). The dataset contains 274 smokers and 1064 non-smokers.

•
Beneficiary's residential area in the United States (Area). This column displays four regions of the United States, where the number of observations for each of them is northeast: 324, northwest: 325, southeast: 364, and southwest: 325.

Table 3 presents quantitative indicators for evaluating the work of the developed method and its optimized version in both the training and testing modes according to the following indicators [30,31]: MAPE, SSE, SMAPE, RMSE, and MAE.

Table 3.
Table 3. Modeling results based on mean absolute percentage error (MAPE), sum square error (SSE), symmetric mean absolute percentage error (SMAPE), root mean square error (RMSE), and mean absolute error (MAE) in the training and test modes. SGTM: successive geometric transformations model.
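As a sketch, the five indicators named in the caption can be computed as follows. Note that SMAPE has several variants in the literature; the variant below (mean of |a − p| divided by the average magnitude, in percent) is an assumption, since the text does not specify which one is used.

```python
import math

def indicators(actual, predicted):
    """MAPE, SSE, SMAPE, RMSE, and MAE for one run (assumed definitions)."""
    n = len(actual)
    errs = [a - p for a, p in zip(actual, predicted)]
    sse = sum(e * e for e in errs)
    return {
        "MAPE": 100.0 / n * sum(abs(e / a) for e, a in zip(errs, actual)),
        "SSE": sse,
        # One common SMAPE variant; the paper's exact variant is unstated.
        "SMAPE": 100.0 / n * sum(abs(a - p) / ((abs(a) + abs(p)) / 2)
                                 for a, p in zip(actual, predicted)),
        "RMSE": math.sqrt(sse / n),
        "MAE": sum(abs(e) for e in errs) / n,
    }

m = indicators([100.0, 200.0], [90.0, 210.0])
# e.g. m["SSE"] -> 200.0, m["RMSE"] -> 10.0, m["MAE"] -> 10.0
```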
