Optimal-Setting for Ore and Water Feeding in Grinding Process Based on Improved Case-Based Reasoning

Abstract: The grinding product particle size is the most crucial operational index of mineral grinding processes. The size and consistency of the product directly affect the subsequent dressing and sintering. In this paper, a novel expert system is proposed for guiding the operating variables to keep the product stable under widely varying ore properties. First, case-based reasoning (CBR) is introduced to describe the whole grinding process with historical data and expert experience. Second, a generative adversarial network (GAN) is employed to extend the raw data and enhance the flexibility of CBR. Moreover, the weights of the different features in CBR are optimized by an improved non-dominated sorting genetic algorithm II (NSGA-II). Finally, the proposed method is validated on a set of actual data collected from a Chinese dressing plant. The experimental results demonstrate the effectiveness of the proposed method.


Introduction
In mineral processing, grinding is used to reduce the particle size of the ore so that the valuable mineral constituents are exposed and can be recovered in subsequent beneficiation operations, such as flotation or magnetic separation. The product particle size (PPS) is the most crucial production index and must be kept constant to ensure the quality of the subsequent products. However, grinding, which is a complex and time-variant process, is affected by a large number of operational parameters. Moreover, the composition of iron ore becomes increasingly unstable as mineral resources are depleted [1].
Because of these difficulties, research on the grinding process has received much attention. Generally, the PPS needs to be controlled within a desired range suitable for the subsequent beneficiation process. With the development of new intelligent modeling and control methods, many of them have been successfully implemented for the grinding process. Recently, advanced control methods based on model predictive control [2][3][4] or supervisory control [5,6] for the grinding process have been discussed. It is worth noting that the results mentioned above are strictly based on mathematical models of the system. However, the grinding process is difficult to describe by mathematical models, since the actual industrial process is essentially a multi-input multi-output (MIMO) system with large inertia, nonlinearity, strong coupling, uncertainty, and interference factors [7]. Moreover, due to the lack of online analyzers measuring the grinding particle size and the difficulty of obtaining transfer functions of the grinding process, many concentrators cannot realize optimal control by adopting the model-based control methods proved to be effective in other plants [8]. The distinct element method, finite element method, smoothed particle hydrodynamics method, and discrete element coupling algorithms have been applied to the grinding process, focusing on the media motion state, mechanism research, mill parameter optimization, ball mill power consumption, and energy distribution [9,10]. Coupling these methods combines their individual advantages and simulates the actual working conditions of the ball mill more faithfully and comprehensively [11][12][13]. However, due to the complex and unstable models, the uncertainty of ore properties, the high level of technical experience required, and the high computational cost, coupling methods remain difficult to implement in mill simulation.
With the development of science and technology, hybrid intelligent optimal control methods based on expert systems have been proposed for the operation of complex industrial processes. CBR [14,15], rule-based reasoning (RBR) [16,17], and fuzzy expert systems [18][19][20] have been applied to provide potential solutions for such grinding problems. Among these methods, CBR is an approach to problem solving that utilizes previous cases and experiences similar to the current one. The advantage of this method is that it bypasses the unavailable parameters and makes full use of the available experience and data to obtain satisfactory results with little change to the process equipment. Compared with other intelligent algorithms, another important advantage of CBR is that it enables sustained learning by updating the case base after a problem has been solved.
Based on the previous research, an intelligent optimization of the ball mill ore and water feeding based on CBR is proposed. Historical production data are first converted into cases, each consisting of characteristics and a characteristic solution. The characteristics comprise the crushing ore size, ore grade, magnetic tube mFe content, magnetic tube recovery rate, raw ore FeO content, raw ore mFe content, and overflow particle size target value, while the characteristic solution is the ball mill ore feeding and water feeding. Then, the k-nearest neighbor (KNN) algorithm is applied in case retrieval to calculate the similarity between cases with the Euclidean distance. The accuracy of CBR is mainly affected by the quantity, quality, and weights of the cases. Considering that it is very difficult to obtain a large number of samples in industrial practice, this paper adopts data expansion to increase the sample density. The GAN, proposed by Goodfellow et al. in 2014, has proved to be one of the most effective data augmentation methods in recent years [21]. The GAN differs from other data expansion methods, such as the synthetic minority oversampling technique (SMOTE) and the variational autoencoder, in that it pays more attention to learning the internal distribution of the original data rather than interpolating or adding noise to the original data. Through a zero-sum game, a GAN can be trained to generate more realistic data. GANs have been successfully applied to generating realistic data in many fields. Shao et al. employed a GAN to enrich fault samples to detect faults in an induction motor [22]. Wang et al. proposed a new fault diagnosis model combining a GAN and stacked denoising autoencoders to recognize planetary gearbox fault patterns. Mao et al. [23] and Liu et al. [24] adopted GANs to generate a large number of fault samples to obtain a balanced data set, which was then input into a classifier to realize fault diagnosis of the induction motor. Spyridon proposed an unsupervised fault detection scheme based on a GAN in which the judgment on fault occurrence is made by the discriminator [25].
By its very nature, the distribution of attribute weights also influences the similarity assessment. Since the grinding process has more than one operating parameter, all case solutions should be considered when weighting the case descriptions. However, the commonly used CBR assigns weights through expert experience, which is generally qualitative rather than quantitative. To address these issues, this paper uses NSGA-II, which has been applied in industrial production [26][27][28], for multi-objective optimization of the weights based on historical cases.
The main contributions of this paper can be summarized as follows.
(1) A CBR-based grinding parameter optimization algorithm is proposed to maximize the use of available data and expert experience for plants with changing ore properties and incomplete monitoring equipment. (2) The operational parameters and observation data are simplified by combining process knowledge and actual field conditions, and the GAN algorithm is used to expand the data. (3) An allocation method for the attribute weights, based on NSGA-II, is introduced to make the retrieval effective and precise for multiple case solutions. (4) The proposed method was tested in a dressing plant and achieved satisfactory results, which provides experience for subsequent studies.
The paper is organized as follows: Section 2 presents background information about the ball mill grinding process and the data augmentation method. Section 3 introduces CBR and optimizes the weight of each characteristic by NSGA-II. This is followed by Section 4, which presents the simulation results of the optimization, comparing them with the other algorithms. Section 5 presents the application results in a Chinese ore dressing mill. The last part gives the conclusion.

Parameter Selection and Data Reduction
In this section, the two-stage closed-circuit mineral grinding process is first introduced. Combined with the actual situation, some key parameters are selected to optimize the grinding process. Then, the collected data are expanded by a data augmentation method to make the data-driven model more precise.

Parameter Selection
The grinding process follows the ore crushing process; its purpose is to liberate all or most of the useful mineral components as monomers, while avoiding excessive over-grinding and meeting the particle size requirements of the sorting operations. The typical two-stage closed-circuit wet mineral grinding process considered in this paper is shown in Figure 1. The coarse ore from the primary crusher is mixed in the ore bin and then fed into the ball mill together with water for wet grinding. By changing the frequency of the pendulum feeder, we can control the quantity of ore entering the ball mill. The tumbling and crushing action of the grinding medium grinds the coarse ore to finer sizes. The slurry containing the fine product is discharged from the ball mill to the pump sump and then pumped to hydrocyclones for classification. The slurry is separated into two streams in the hydrocyclones: the underflow with the larger particles and the overflow with the finer particles. The underflow is recycled back to the ball mill with the new coarse ore for further regrinding. The overflow slurry flows to the next stage of the grinding system for finer grinding. The second stage of the grinding process is the same as the first, and its overflow flows into the subsequent beneficiation operation as the desired product. The grinding process is a complex controlled object that is influenced by many factors. The PPS is the key operational index that dictates the product quality and is usually used as the controlled variable in the grinding process. The manipulated variables are listed as follows: the milling ore feed velocity (denoted by f1), the water feed velocities of the two-stage hydrocyclones (denoted by w11, w12), the pump water feed velocities of the two-stage sumps (denoted by w21, w22), and the frequencies of the two-stage pumps (denoted by p1, p2).
Combined with the actual situation of a concentrator in northeast China, the fresh ore is mainly composed of hematite, magnetic ore, and carbonate iron ore. We use the characteristics of the crushing ore size, ore grade, magnetic tube mFe, magnetic tube recovery rate, raw ore FeO content, and raw ore mFe content to describe them in this paper.
Adjusting all of the above operating parameters according to the nature of the ore is too complex to calculate, and on-site implementation involves large lags and uncertainty. Therefore, combined with the actual situation of a concentrator in northeast China, this paper makes the following simplifications.
(1) Long-term industrial tests showed that the grinding efficiency is highest when the underflow concentrations of the two-stage cyclones are around 65% and 85%, respectively. When the amounts of ore and water are well controlled so that the first-stage underflow concentration is 65%, the second-stage underflow concentration is exactly 85% without adding water to the cyclone. So the water feed rate of the second-stage hydrocyclones is set to zero. (2) The main function of feeding water to the sumps is to avoid slurry precipitation. So the water feed rate of the two-stage sumps is a fixed value that does not change as a manipulated variable. (3) The frequencies of the two-stage pumps are also fixed, based on long-term industrial trials. (4) Operators adjust the amount of steel balls supplied to complete the real-time regulation. According to the above assumptions, the operating parameters in this paper are simplified to the water feed velocity of the first-stage hydrocyclones and the milling ore feed velocity. These parameters need to be adjusted according to the fluctuation of the ore properties and the PPS requirements.

Data Augmentation
Limited by outdated equipment and the slow pace of technical upgrading at the industrial site, there are not enough data for parameter optimization and calculation. The primary purpose of the method in this section is to generate a large amount of data so as to provide sufficient samples for the data-driven methods.
The GAN was proposed by Goodfellow et al. in 2014 and consists of two essential components, namely, the generator and the discriminator. Through a zero-sum game, GANs can be trained to generate more realistic data. GANs are widely used in image generation and recognition, speech generation and recognition, data generation, and other fields. The main idea behind the GAN is to use adversarial networks to improve the quality of the generated data. The generator learns the probability distribution of the original data and generates artificial samples that mimic its pattern, using random noise. The discriminator distinguishes the artificially generated data from the actual data and prompts the generator to produce better quality data in the next iteration. The general flow chart of the GAN is depicted in Figure 2. The GAN framework is equivalent to a minimax two-player game, which can be described by the following function [21]:

min_G max_D V(D, G) = E_{x∼p_x}[log D(x)] + E_{z∼p_z}[log(1 − D(G(z)))]   (1)

where x is a real datum with distribution p_x, and z is a random vector drawn from a prior p_z. z is fed into G, through which the generated datum G(z) is obtained. p_g denotes the distribution of G(z). Then, D(x) represents the probability that x comes from p_x rather than from p_g. Ideally, D(x) = 1 if x ∼ p_x and D(x) = 0 if x ∼ p_g. D is trained to maximize the probability of assigning the correct label to both the real datum x and the generated datum G(z).
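The minimax objective of the GAN can be estimated empirically from a batch of real and generated samples. The sketch below is a toy illustration only, not the paper's implementation: the logistic discriminator and the Gaussian stand-in data are our own assumptions.

```python
import numpy as np

def gan_value(D, real, fake):
    # Monte Carlo estimate of V(D, G) = E_x[log D(x)] + E_z[log(1 - D(G(z)))]
    return np.mean(np.log(D(real))) + np.mean(np.log(1.0 - D(fake)))

# Toy logistic discriminator: scores samples near 1.0 as "real"
D = lambda x: 1.0 / (1.0 + np.exp(-(x - 0.5)))

rng = np.random.default_rng(0)
real = rng.normal(1.0, 0.1, 1000)  # stand-in for real grinding data
fake = rng.normal(0.0, 0.1, 1000)  # stand-in for generator output G(z)

# At the GAN equilibrium (p_g = p_x with an optimal D), V equals -log 4;
# a discriminator that separates the two batches pushes V above that.
v = gan_value(D, real, fake)
```

As the generator improves, the fake batch drifts toward the real distribution, the discriminator's advantage shrinks, and V falls back toward −log 4 ≈ −1.386.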
In this paper, GAN is used to generate grinding industry data. The data include the crushing ore size, ore grade, magnetic tube mFe, magnetic tube recovery rate, raw ore FeO content, raw ore mFe content, the water feed rate, and the fresh ore feed rate. After the actual field investigation records, 182 sets of data were used as the original sample.
The data from the actual production process contain some incorrect values. Some values are 0, while others are unreasonably large or small. To remove the abnormal data, Formula (2) is first used for data screening:

|X_i − X̄| > α·σ   (2)

where X_i is the ith parameter value, X̄ is the mean of the values of that parameter, and σ is the standard deviation. When the deviation of a parameter value from the mean exceeds α times the standard deviation, the sample is deleted. After data screening, 164 sets of data are left to generate samples. In the generator, the input is 80-dimensional noise randomly generated from a Gaussian distribution. There are two hidden layers, with sizes 120 and 240, each followed by a rectified linear unit (ReLU) activation function. The output of the generator is a one-dimensional data sample, using Sigmoid as the activation function. As for the discriminator, the input layer takes the eight-dimensional data; the first hidden layer has 120 units, using LeakyReLU as the activation function. Another layer with 240 LeakyReLU units is added, followed by dropout with probability 0.5. Finally, the model is flattened and linked to one fully connected output layer with 0.5 dropout.
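The screening rule described above amounts to a per-feature sigma filter. A minimal sketch follows; the function name and the default α = 3 are our own assumptions, since the paper does not state its α.

```python
import numpy as np

def screen(samples, alpha=3.0):
    # Keep a sample only if every parameter lies within alpha standard
    # deviations of that parameter's mean (the screening rule of Formula (2)).
    X = np.asarray(samples, dtype=float)
    mu, sigma = X.mean(axis=0), X.std(axis=0)
    keep = np.all(np.abs(X - mu) <= alpha * sigma, axis=1)
    return X[keep]

# 20 normal records plus one corrupted record with an impossible value
data = [[0.2, 30.0]] * 20 + [[100.0, 30.0]]
clean = screen(data)  # the corrupted record is removed
```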
The GAN generates 886 samples. To verify the accuracy of the generated samples, a backpropagation neural network (BP), support vector machines (SVM), and an extreme learning machine (ELM) are used to compare the original data and the generated data. We used 30% of the original samples as test samples.
The mean absolute error (MAE), mean absolute percentage error (MAPE), and root mean square error (RMSE) of the generated data and the original data under the different algorithms are shown in Tables 1 and 2. As can be seen from the tables, the generated samples improve each algorithm's accuracy. The reason is that the original data volume is relatively small and the data intervals are large, which makes it difficult to fit the data well. The GAN can supplement the data to simulate working conditions that are not in the original data but may occur, so that the database covers more working conditions. Although the generated data do not necessarily have a positive effect on the optimization, their validity can be confirmed after several generation experiments, when all the different methods fit the expanded data better than the original data.
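The three error measures used throughout the paper can be computed as follows; this is a routine sketch with placeholder variable names, not the paper's own code.

```python
import numpy as np

def mae(y, y_hat):
    # Mean absolute error
    return np.mean(np.abs(np.asarray(y, float) - np.asarray(y_hat, float)))

def mape(y, y_hat):
    # Mean absolute percentage error, in percent; assumes no zero targets
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.mean(np.abs((y - y_hat) / y)) * 100.0

def rmse(y, y_hat):
    # Root mean square error
    y, y_hat = np.asarray(y, float), np.asarray(y_hat, float)
    return np.sqrt(np.mean((y - y_hat) ** 2))
```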

Optimal Setting Based on CBR
Generated samples can improve the accuracy of the algorithm when the original samples are insufficient. However, the generated data do not have the same accurate guiding significance as the measured data. Some generated data may even mislead the subsequent data-driven training. Therefore, an algorithm is needed to assign different weight coefficients to the original and generated samples, so that the generated data play only an auxiliary role rather than a deciding one. As an algorithm that relies on expert experience, CBR has the advantages of fast speed, strong adaptability, and low hardware requirements. These advantages meet the needs of the grinding site very well. This section improves CBR by distinguishing between generated samples and original samples. In addition, the feature weights are optimized by an evolutionary algorithm.

Basic Flowchart of Case-Based Reasoning
The case-based reasoning process is mainly divided into four basic steps: (1) case retrieval, in which, through a series of searches and similarity calculations, the case most similar to the current problem is found in the case base; (2) case reuse, which compares the differences between the source case and the target case; the solution recognized as applicable is then submitted to the user, and the effect of its application is observed; (3) case revision, in which the solution strategy is adjusted by combining the effect of the case reuse with the current issue to fit the current problem; and (4) case storage, in which the solved case is stored in the case base for future use [28]. In the CBR process, case retrieval is the core of the technology, as it directly determines the decision-making speed and accuracy.
Combining the real problems in grinding process control with the theory of case-based reasoning, the basic procedure of the intelligent ball mill feed optimization is shown in Figure 3 [29]. By formulating the engineering problem mathematically and applying case-based reasoning to the complex process, the ball mill feed is obtained in an optimized manner. The details are presented as follows.

Case Description
In a CBR model, the samples in the case base should be representative and cover the typical operational conditions as much as possible. The model's generalization ability could be limited if a series of continuous samples is directly selected as the case base. On the other hand, if a large amount of data that might cover all conditions becomes the case base, high data redundancy might appear. The model precision might also be negatively impacted if anomalous data are added to the case base.
Usually, each case consists of characteristics and characteristic solutions. To distinguish the impact of confirmed cases and generated cases on retrieval, this article adds case weights to further describe each case. The characteristics comprise the crushing ore size f1, ore grade f2, magnetic tube mFe f3, magnetic tube recovery rate f4, raw ore FeO content f5, raw ore mFe content f6, and overflow particle size target value f7. The characteristic solutions are the ball mill ore feed j1 and water feed j2. The case weight is denoted by β.
The case model in the CBR process is described as C_k = {F_k, J_k, β_k}, k = 1, · · · , m, where m is the total number of cases and C_k stands for the kth case in the case base. F_k = {f_{k,1}, f_{k,2}, · · · , f_{k,7}} expresses the characteristics of C_k, J_k = {j_{k,1}, j_{k,2}} expresses the characteristic solutions of C_k, and β_k expresses the case weight of C_k.
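The case model C_k = {F_k, J_k, β_k} maps naturally onto a small record type. A sketch follows; the class and field names are our own, and the numeric values are illustrative only.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class Case:
    features: Tuple[float, ...]    # f_{k,1}..f_{k,7}: ore size, grade, mFe, ...
    solution: Tuple[float, float]  # j_{k,1} (ore feed), j_{k,2} (water feed)
    beta: float = 1.0              # case weight: full for original cases,
                                   # reduced for GAN-generated ones

# Hypothetical case with seven characteristics and its two-value solution
c = Case(features=(12.0, 28.5, 64.2, 79.1, 9.3, 20.4, 75.0),
         solution=(68.0, 23.5),
         beta=1.0)
```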

Case Retrieve and Matching
Case matching and case retrieval are essential steps in the case-based reasoning process and the key to information extraction from the case base. In general, case matching strategies include serial and parallel search methods. In a serial search, the cases are organized hierarchically and a top-down, layer-by-layer refinement retrieval approach is adopted: the further down the hierarchy, the higher the similarity. A parallel search weakens the hierarchical features among the cases and returns the most similar case after examining many cases. Commonly used search strategies include the nearest-neighbor strategy, the inductive reasoning strategy, and the knowledge-guided strategy.
When a new case C arrives, we define SIM(F, F_k) as the similarity between C and C_k. We can use the Euclidean distance to calculate the similarity as Equation (3):

SIM(F, F_k) = 1 / (1 + √(Σ_{i=1}^{7} ω_i (f_i − f_{k,i})²))   (3)

where ω_i is the weight of f_i.
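A weighted-Euclidean similarity of this kind can be sketched as below. The exact 1/(1 + d) form and the assumption that features are normalized to [0, 1] are ours; the paper only states that the Euclidean distance is used.

```python
import numpy as np

def similarity(f_new, f_case, w):
    # Weighted Euclidean distance between feature vectors, mapped to a
    # similarity in (0, 1]: identical cases score 1, distant cases near 0.
    f_new, f_case = np.asarray(f_new, float), np.asarray(f_case, float)
    d = np.sqrt(np.sum(np.asarray(w, float) * (f_new - f_case) ** 2))
    return 1.0 / (1.0 + d)

w = [1.0 / 7.0] * 7          # equal feature weights, before NSGA-II tuning
f = [0.5] * 7                # normalized features of the new case
g = [0.9] * 7                # a more distant stored case
s_same, s_far = similarity(f, f, w), similarity(f, g, w)
```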

Case Reuse
In the actual production process, the cases in the case base cannot cover all working conditions. Therefore, the most similar retrieved case solution must be reused; that is, the CBR system adjusts the retrieved case solution according to the specific circumstances of the new case to get the solution of the new case. After case retrieval and matching, the cases in the case base are arranged in descending order of SIM. Then, the k-nearest neighbor (KNN) algorithm is used in this step: the first n historical cases are chosen as reference cases, and their solutions are reused using Formula (4) [30]:

J = Σ_{k=1}^{n} β_k · SIM(F, F_k) · J_k / Σ_{k=1}^{n} β_k · SIM(F, F_k)   (4)

The value of n clearly affects the final solution. If the value is much larger than a reasonable one, some insufficiently similar cases get involved in the case reuse process, and the accuracy of the case solution suffers. If the value is too small, there are not enough cases to reflect all the features of the new case. So either a too-large or a too-small value makes the solution unreliable. The value β_k increases the proportion of the original cases: this parameter reduces the weight of the generated cases and is optimized together with ω_i.
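Case reuse then combines the n best matches while down-weighting generated cases through β_k. The sketch below assumes the reuse formula weights each retrieved solution by β_k · SIM_k; the function and argument names are our own.

```python
import numpy as np

def reuse(solutions, betas, sims, n):
    # Combine the solutions of the n most similar cases, each weighted by
    # beta_k * SIM_k, so generated cases (smaller beta) contribute less.
    order = np.argsort(sims)[::-1][:n]
    w = np.asarray(betas, float)[order] * np.asarray(sims, float)[order]
    J = np.asarray(solutions, float)[order]
    return (w[:, None] * J).sum(axis=0) / w.sum()

# Three retrieved cases: two original (beta = 1) and one generated (beta = 0.5)
J_new = reuse(solutions=[[10.0, 5.0], [20.0, 8.0], [30.0, 9.0]],
              betas=[1.0, 1.0, 0.5],
              sims=[0.9, 0.5, 0.1],
              n=2)
```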

Case Maintenance
The accuracy of the cases in the case base is directly reflected in the case solutions of CBR. Case maintenance stores new cases and their solutions in the case base according to a certain strategy. A newly solved case needs to be stored to increase the search scope of the case base and adapt it to changing working conditions. However, an incorrect case contaminates the case base and permanently degrades the accuracy of the CBR algorithm. What is more, the unrestrained storage of new cases constantly increases the calculation time. Therefore, there should be a strategy for judging whether a new case should be retained. This paper puts forward the following strategy: (1) Experts judge whether the new case is correct; expert judgment reduces the influence of errors such as input errors. (2) Set m_max as the maximum storage amount and S_max as the maximum storage similarity. Identify the maximum similarity SIM_max among all the SIM(F, F_k), and let m be the current number of cases in the case base. (3) If the retrieved case with SIM_max is a generated case, delete this case and store the new case in the case base to replace it. (4) If the retrieved case with SIM_max is an original case, make decisions as follows:

Weight Optimization for CBR
ω_i and β are key parameters in case matching and case retrieval [15]. Their values directly affect SIM(F, F_k), which in turn affects the final optimization result of CBR. Usually, the values of ω_i and β are assigned by expertise. However, experts can only give qualitative indications for ω_i and β based on experience, and it is difficult for them to determine exact values. In this paper, we use the NSGA-II intelligent optimization algorithm to calculate ω_i and β based on the historical cases of the case base. The algorithm is summarized in Figure 4. The algorithm steps are as follows: (1) Organize the case base; select 70% of the original cases and all generated cases as the training samples, and the remaining 30% of the original cases as the testing samples. Regard the training samples as the new case base and the testing samples as the new cases. (2) Set ω_i (i = 1, · · · , 7) and β as the decision variables. Generate initial random parent populations of ω_i and β. Use the CBR method mentioned above to calculate the solution J of every new case.
(3) Build the mathematical model. Objective functions: • The MAE (mean absolute error) between the optimized amount of ore feeding and the actual value of the test set is minimized. • The MAE between the optimized amount of water feeding and the actual value of the test set is minimized.

Constraint conditions:
(4) Adopt the NSGA-II multi-objective optimization algorithm, obtaining the non-inferior solution set through selection, crossover, and mutation of the populations. (5) Adopt the TOPSIS algorithm, based on the Euclidean distance, to select the optimal solution of the non-inferior solution set as the optimized feature weights. (6) When a new case arrives, adopt the optimized feature weights to retrieve cases from the whole case base. Then, solve for the ore and water feeding following the typical CBR steps introduced above.
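The TOPSIS selection step can be sketched as a plain closeness ranking over the Pareto set. Equal objective weights and vector normalization are our assumptions; both objectives are costs here (the ore- and water-feeding MAEs).

```python
import numpy as np

def topsis_pick(F):
    # F: rows are Pareto solutions, columns are cost objectives to minimize.
    F = np.asarray(F, dtype=float)
    N = F / np.linalg.norm(F, axis=0)            # vector-normalize each objective
    ideal, anti = N.min(axis=0), N.max(axis=0)   # best / worst values for costs
    d_best = np.linalg.norm(N - ideal, axis=1)   # distance to the ideal point
    d_worst = np.linalg.norm(N - anti, axis=1)   # distance to the anti-ideal point
    closeness = d_worst / (d_best + d_worst)     # 1 = ideal, 0 = anti-ideal
    return int(np.argmax(closeness))

# Three non-inferior (ore-MAE, water-MAE) trade-offs: the balanced one wins
best = topsis_pick([[1.0, 10.0], [5.0, 5.0], [10.0, 1.0]])
```

With equal weights, TOPSIS favors the solution that compromises between both feeding errors rather than an extreme of either objective.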

Simulations Results Using Actual Industrial Data
In this section, 164 sets of actual industrial data collected from a Chinese dressing plant are used in simulation to verify the effectiveness and performance of the proposed algorithm. The actual production data are expanded with the GAN, and the feeding values are optimized with the proposed method. To verify the accuracy of this algorithm, it is compared with the SVM, ELM, and traditional CBR algorithms.

Parameters Training
Of the 164 sets of original data remaining after data screening, 50 sets are used as the testing data, and the remaining 114 sets, together with the 886 sets of generated data, are used as the training data. Then, through multiple simulations in the laboratory, the initial population size is chosen as 200, and the number of generations is set to 80. Figures 5 and 6 show the Pareto optimal fronts for the ore and water feeding errors of the first generation and the 80th generation. Inspecting the figures, we find that training has stabilized by the 80th generation and that NSGA-II shows excellent convergence on both objectives. The solutions along the Pareto front are equally optimal, and the TOPSIS method is applied to select the most appropriate solution. The optimization results for the different case features are shown in Table 3. ω_1 to ω_7 describe the importance of each case feature in case retrieval, and ω_8 represents the importance of the generated cases.

Further Comparative Analysis
To prove the accuracy of the optimal setting, two other traditional algorithms (SVM and ELM) are compared, based on the expanded data. Further, to assess the importance of assigning different weights to the generated and original cases, the traditional CBR is also added for comparison. Unlike the scheme proposed in this paper, the traditional method does not distinguish between generated and original samples, and the weight of each case description is selected by manual experience. The optimal setting values and actual values of the water and ore feeding with the different algorithms are shown in Figures 7 and 8. The MAE, MAPE, and RMSE of each algorithm are shown in Table 4. As can be seen from the figures and the table, the results obtained by CBR accurately track the actual values. Although there are some deviations, they are much smaller than those of the other algorithms. At the same time, compared with the other algorithms, the CBR results have almost no abnormal points: the range of variation is smaller and the MAPE is much smaller for both ore and water feeding. The reason is that, compared with other machine learning algorithms, CBR assigns human experience to the different case descriptions and places more emphasis on the important variables. What is more, the case solutions are jointly derived from similar cases and are less affected by outliers. The training process is also simpler than that of other machine learning algorithms and does not suffer from overfitting or underfitting. Therefore, this method is well suited to practical engineering applications.

Industrial Application Experiments
After testing and comparison with the actual industrial data, the proposed approach was applied to the dressing mill to verify its effectiveness. The average grade of the processed iron ore is 30%. The algorithm is programmed in MATLAB, and the human-computer interface (shown in Figure 9) is designed in Visual Basic. The interface shown is a simplified version; the language of the actual application version is Chinese, with some additional features such as querying historical data curves and setting parameters. Figure 10 is a typical flowsheet of the optimal-setting system running manually. Auto-run indicates that the soft-sensor system carries out estimation operations at a specified time interval. When a new set of ore characteristics or a new overflow particle size target is obtained, the operator first judges whether to update the feature weights, then presses "calculate" to run the system. When the feedback of the actual overflow particle size arrives, the operator inputs the data into the system, and the entire group of data is saved on the host in Excel format. After two months of successful operation, a part of the data is shown in Table 5. f1 to f6 in the table represent the ore property retrieval conditions and f7 represents the target particle size requirement. J1 and J2 are the optimized feeding values, and O represents the actual PPS achieved with the optimized feeding values.
Figure 10. Flowsheet of the optimal-setting system.
Figure 11 shows the target and the actual value of the overflow particle size. The mean absolute error (MAE) between the target and the actual value is 1.1289, which is within the allowable range. What is more, the curves of the actual value and the target value are basically consistent.
Compared with the traditional manual method, the algorithm can quickly and accurately calculate the set values of the ore and water feeding according to the ore characteristics and the overflow particle size target. However, when the target value is less than 72 or greater than 80, the error is large. This is due to the following factors: (1) There are not enough cases in the case base, especially outside the 72-80 range, so the cases cannot cover all the characteristics of the ore. (2) A target value outside 72-80 means that the ore is much better or worse than the normal state, and the other operating parameters of the grinding process are not adjusted according to the ore properties. In industrial applications, errors such as human error and equipment error introduce some deviation into the algorithm, and some special cases also affect the accuracy of the system. As the optimization system is used over a long period, a large number of cases covering various situations will be stored in the case base, and the algorithm will become more accurate.
Figure 11. Target and actual value of the overflow particle size.

Conclusions
To solve the unstable production caused by fluctuating ore properties in the ball mill, an improved CBR method for setting the ore and water feeding is proposed in this paper. Firstly, the GAN algorithm is used to expand the data in view of the shortage of field data. Then, the CBR algorithm solves a new problem with the experience of dealing with similar ones: based on the similarity between the new case and the old cases, CBR can give the ball mill feed quantities from historical data, and it can update its case base with new cases to constantly adapt to new conditions. To improve the accuracy of the algorithm, NSGA-II is applied to identify the key parameters, namely the weight of each characteristic and the coefficient that distinguishes between the original and generated data. The method is verified by experimental simulation to be feasible and more accurate than the other algorithms. The application results show that the system achieved satisfactory estimation accuracy and adaptability in an actual production environment.

Acknowledgments: The authors are grateful to ANSTEEL Group Corporation for offering us the experimental environment.

Conflicts of Interest:
The authors declare no conflict of interest.
