Article

New Machine-Learning Control Charts for Simultaneous Monitoring of Multivariate Normal Process Parameters with Detection and Identification

by Hamed Sabahno 1,* and Seyed Taghi Akhavan Niaki 2
1 Department of Statistics, School of Business, Economics and Statistics, Umeå University, Umeå 901 87, Sweden
2 Department of Industrial Engineering, Sharif University of Technology, Tehran 1458889694, Iran
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(16), 3566; https://doi.org/10.3390/math11163566
Submission received: 21 July 2023 / Revised: 14 August 2023 / Accepted: 15 August 2023 / Published: 17 August 2023

Abstract
Simultaneous monitoring of the process parameters in a multivariate normal process has caught researchers’ attention during the last two decades. However, only statistical control charts have been developed so far for this purpose, and machine-learning (ML) techniques have rarely been used to construct control charts. In this paper, three ML control charts are proposed using the concepts of artificial neural networks, support vector machines, and random forests. These ML techniques are trained to produce linear (continuous) outputs, and then, based on the concepts of memory-less control charts, the process is classified as in control or out of control. Two different input scenarios and two different training methods are used for the proposed ML structures. In addition, two different process control scenarios are utilized: in one, the goal is only the detection of the out-of-control situation; in the other, the identification of the variable(s)/process parameter(s) responsible for the out-of-control signal is also an aim (detection–identification). After developing the ML control charts for each scenario, we compare them to one another, as well as to the most recently developed statistical control charts. The results show significantly better performance of the proposed ML control charts than the traditional memory-less statistical control charts in most compared cases. Finally, an illustrative example is presented to show how the proposed scheme can be implemented in a healthcare process.

1. Introduction

When two or more quality characteristics are to be monitored in a process, multivariate control charts can be employed. In a multivariate normal process, there are two process parameters: the mean vector and the variance–covariance matrix. Simultaneous monitoring of these parameters is usually preferred over monitoring only one of them due to its better overall performance. There are two main schemes for simultaneous monitoring of the process parameters: (i) a single-chart scheme and (ii) a double-chart scheme. In a single-chart scheme, one chart is used for both process parameters, whereas in a double-chart scheme, one chart is employed for each process parameter. A single-chart scheme is usually preferred due to its simplicity, since only one control chart must be administered, compared with the two control charts that must be administered simultaneously in a double-chart scheme. Reynolds and Gyo-Young [1], Hawkins and Maboudou-Tchao [2], and Zhang and Chang [3] studied double-chart cases, and Khoo [4], Zhang et al. [5], Wang et al. [6], Sabahno et al. [7,8,9], Sabahno and Khoo [10], and Sabahno [11] investigated single-chart cases. Sabahno et al. [9] proposed new memory-less statistical control charts with fixed and adaptive design parameters (sample size, sampling interval, and control limits) for simultaneous monitoring of multivariate normal process parameters. They used two statistics, each monitoring one process parameter, but in the end, they combined them into one statistic.
Machine-learning techniques have been extensively used for process monitoring with control charts for different purposes, such as dimension reduction, pattern recognition, change point estimation, signal detection, identification, and fault diagnosis. The artificial neural network (ANN) is the most widely used ML technique in this regard. Some of the most notable research employing ML techniques in control charts was conducted by Chang and Ho [12], Niaki and Abbasi [13,14], Cheng and Cheng [15], Abbasi [16], Salehi et al. [17], Hosseinifard et al. [18], Weese et al. [19], Escobar and Morales-Menendez [20], Apsemidis et al. [21], Mohd Amiruddin et al. [22], Diren et al. [23], Yeganeh et al. [24], Mohammadzadeh et al. [25], Sabahno and Amiri [26], and Yeganeh et al. [27,28,29].
ML structures have rarely been used to construct control charts in the literature. Niaki and Abbasi [14] developed a perceptron neural network for monitoring and classifying mean shifts in multi-attribute processes. Hosseinifard et al. [18] developed three ANN control charts for monitoring simple linear profile parameters (the intercept, the slope, and the residual variance). One of their control charts was involved in detection and identification, while the other two were only involved in detection. Mohammadzadeh et al. [25] developed an SVR (support vector regression) control chart for monitoring a logistic profile by extending Hosseinifard et al.’s [18] paper. Sabahno and Amiri [26] developed different statistical and machine-learning-based control charts with fixed and variable design parameters to monitor generalized linear regression profiles. Yeganeh et al. [27] extended Hosseinifard et al.’s [18] work for social network surveillance. Yeganeh et al. [28] proposed an ANN-based control chart to monitor binary surgical outcomes, while in another study, Yeganeh et al. [29] proposed ML-based control charts for monitoring autocorrelated profiles.
The previously mentioned papers have utilized machine-learning (ML) techniques for regression to construct control charts. While our approach in this paper is similar to theirs, by proposing a special input set for ML structures (our first input scenario), we extend them to address multivariate processes and enable simultaneous monitoring of their parameters. Moreover, while they employed a single ML structure to build their control charts, we utilize multiple machine-learning techniques and extensively compare them in different scenarios. Additionally, all the ML control charts mentioned above solely trained their ML structures using a random shift size to represent out-of-control data. In contrast, we employ an alternative method in addition to this approach.
In this paper, for the first time in the literature, ANN-, SVM (support vector machine)-, and RF (random forest)-based control charts are proposed for simultaneous monitoring of multivariate normal process parameters. We use two different input scenarios (in one of them, for the first time, the two statistics used by Sabahno et al. [9] are utilized as the inputs) and two different control scenarios (detection and detection–identification). We also use two training methods to see which type of dataset suits each ML structure better (in one, we train the ML structures with small shifts only, and in the other, for the first time, we train them with both small and large shifts). The ML control charts are developed for the cases of two, three, and four quality characteristics, and their performances are compared with one another and with the statistical control charts proposed by Sabahno et al. [9].
This paper is structured as follows: In Section 2, machine-learning control charts are developed. In Section 3, different models are developed, and then extensive numerical analyses in each scenario under different separate and simultaneous shift sizes are conducted. In Section 4, an illustrative example is presented. In Section 5, concluding remarks and suggestions for future research are discussed.

2. Machine-Learning Control Charts

In this section, three ML control charts are proposed. After obtaining the ML structure for each control chart, which is explained later in Section 3, the upper control limit (UCL) of each control chart is obtained using the algorithm presented below. First, note that the design parameters of a control chart are the sample size n, the sampling interval t, and the probability of type-I error α.
The monitoring strategy for each ML control chart is as follows:
  • If at sample i, the ML structure’s output ≤ UCL, then the process is declared in control;
  • If at sample i, the ML structure’s output > UCL, then the process is declared out of control.
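In code, the monitoring rule above is a one-sided comparison of the ML structure’s continuous output against the UCL. A minimal sketch (the function name is illustrative, not from the paper):

```python
def classify_sample(ml_output: float, ucl: float) -> str:
    """Apply the one-sided monitoring rule: signal only when the output exceeds the UCL."""
    return "out-of-control" if ml_output > ucl else "in-control"
```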
The following algorithm can be used to compute the UCLs:
Step 1.
Choose values for α and n;
Step 2.
Choose and train an ML structure;
Step 3.
Obtain the initial value of the UCL by generating 10,000 in-control samples, computing the ML structure’s output for each, sorting the outputs in ascending order, and choosing the [10,000(1 − α)]th value in the range;
Step 4.
Run 10,000 simulations and adjust the UCL so that the average of the 10,000 run lengths (ARL) equals 1/α.
The best way to estimate the UCL is to employ the above algorithm. However, it can be very time- and energy-consuming, especially when many numerical analyses are conducted and many UCLs must be estimated for different parameter settings. An easier way with almost the same accuracy is to obtain the value of the UCL by generating and sorting 10,000 in-control samples in ascending order and choosing the [10,000(1 − α)]th value in the range, repeating this 10,000 times, and finally taking the average of these 10,000 values. This approach works very well in memory-less schemes in most cases; however, it is better to confirm its result using Step 4 of the above algorithm.
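The shortcut estimator above can be sketched as follows; the normally distributed “in-control outputs” are a toy stand-in for a trained ML structure, and all names are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_ucl(ic_outputs, alpha):
    """Empirical UCL: the [N(1 - alpha)]th value of the N sorted
    in-control outputs (Step 3 of the algorithm above)."""
    s = np.sort(np.asarray(ic_outputs))
    idx = int(len(s) * (1 - alpha)) - 1  # 1-based [N(1 - alpha)]th value
    return float(s[idx])

def estimate_ucl_averaged(draw_ic_outputs, alpha, reps=50, n=10_000):
    """The faster variant described above: repeat the quantile estimate
    on fresh batches of n in-control outputs and average the results."""
    return float(np.mean([estimate_ucl(draw_ic_outputs(n), alpha)
                          for _ in range(reps)]))

# Toy stand-in for an ML structure's in-control outputs (illustrative only).
draw = lambda n: rng.normal(0.0, 0.1, size=n)
ucl = estimate_ucl_averaged(draw, alpha=0.005)
```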
Instead of defining a statistic and developing a control chart based on it, one can use supervised ML techniques. Since in this research we only need linear (continuous) outputs from the ML techniques (because, as mentioned before, we then apply a classification rule to these outputs), the types of ML techniques that can be used are limited, as not all ML techniques are capable of regression. Fortunately, however, the most popular ML techniques, namely ANN, SVM, and RF, can generate linear (regression) outputs.
In the next subsections, each of these ML techniques is briefly described. It should also be noted that after the ML structures are trained, they are tested and validated by achieving the desired in-control performance of the control chart, as explained later in Section 3.

2.1. ANN Control Chart

The ANN is one of the most popular ML techniques and mimics human brain activity. It has one input layer, one or more hidden layers, and one output layer, and each layer contains at least one node. ANNs with more than one hidden layer are called deep-learning techniques. ANNs can be used for both classification and regression. Determining the number of nodes in each layer and the number of hidden layers is very important in ANNs and can be carried out using trial and error. For simple problems, one hidden layer usually works best. Regarding the number of nodes in the hidden layer, although there are some rules of thumb, there is no solid rule, and the number of nodes should be determined so as to obtain the desired performance.
Another problem in ANNs is determining the optimal values of the connection weights and node biases. Several optimization algorithms are used in ANNs for this purpose, among which are gradient descent, stochastic gradient descent, mini-batch gradient descent, Broyden–Fletcher–Goldfarb–Shanno (BFGS), momentum, Nesterov accelerated gradient, Adagrad, AdaDelta, and Adam. In this research, we use the BFGS method, a quasi-Newton optimization method, simply because it is the only optimization method that our selected R package (the ‘nnet’ package) uses (for more information about the package, refer to Section 3). The BFGS method overcomes some of the weaknesses of gradient descent by seeking a stationary point of the cost function; the cost function, in this case, is the mean-squared error (MSE) for regression problems and the cross-entropy (negative log-likelihood) for classification problems.
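A rough Python analogue of such a structure, with scikit-learn’s MLPRegressor and its quasi-Newton ‘lbfgs’ solver standing in for the ‘nnet’ package; the toy data, the 0/1 targets, and the four-node hidden layer are illustrative assumptions, not the paper’s actual settings:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Toy training data: two inputs (mimicking the T2/W input scenario) and a
# 0/1 target (0 = in control, 1 = out of control); purely illustrative.
X = rng.normal(size=(400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 2).astype(float)

ann = MLPRegressor(hidden_layer_sizes=(4,),  # one hidden layer, 2x the inputs
                   solver="lbfgs",           # quasi-Newton, as in 'nnet'
                   max_iter=2000, random_state=0)
ann.fit(X, y)
out = ann.predict(X)  # continuous (regression) outputs, one per sample
```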
In this paper, the ANN technique is used for regression, and the trained ANN structure’s output for sample i is ANNi. The different input scenarios, training methods, and output (control) scenarios used for each ANN structure are explained in Section 3.

2.2. SVR Control Chart

The SVM is a kernel-based method and a powerful classification and regression technique. It can be used for classification, novelty detection (one-class classification), and regression (in which case it is called SVR). By creating a tube around the true output and ignoring errors smaller than a certain threshold, the SVM can perform regression.
SVMs usually work in two main steps: (i) transformation of the input space to a higher-dimensional feature space through a non-linear mapping function; and (ii) construction of the separating hyperplane with maximum distance from the closest points (called support vectors) of the training set. It has been shown that maximizing the margin of separation improves the generalization ability of the classifier/regressor. Training of an SVM is finding the solution for a quadratic optimization problem.
In SVMs, the calculation of dot products in a high-dimensional space can be avoided by introducing a kernel function, which allows all the necessary computations to be performed directly in the input space. The most popular kernel functions are linear, polynomial, radial basis, and sigmoid. In this research, we use an R package (‘e1071’ package, which is explained more in Section 3) for training and optimization, and we do not apply any optimization algorithms on our own to find the optimal values of the SVM structure’s parameters. However, there are several optimization algorithms for SVMs, some of which are as follows: sequential minimization optimization (SMO), trust region Newton method (TRON), and chunking. The computer package we chose uses the SMO algorithm. The SMO algorithm solves the SVM’s quadratic problem without using any numerical optimization steps. The adopted package also implements epsilon-support vector regression (ε-SVR) as the cost function for regression problems. The cost function in ε-SVR aims to minimize the deviations between the predicted values and the actual values using an ε-insensitive loss function. It also implements the hinge loss function, which penalizes misclassifications by introducing a linear error term, in classification problems. To construct a control chart, we use the SVM technique for regression; therefore, it is called SVR.
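As a sketch of ε-SVR producing continuous outputs from 0/1-labeled training data, with scikit-learn’s SVR standing in for the ‘e1071’ package (the toy data, kernel choice, and parameter values are illustrative assumptions):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
# Toy 0/1-labeled data (0 = in control, 1 = out of control); illustrative.
X = rng.normal(size=(300, 2))
y = (X[:, 0] > 0).astype(float)

# epsilon-SVR with a radial-basis kernel: errors inside the epsilon tube
# around the target are ignored by the loss, as described above.
svr = SVR(kernel="rbf", epsilon=0.1, C=1.0)
svr.fit(X, y)
out = svr.predict(X)  # continuous outputs for the monitoring statistic
```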
As in the ANN case, the trained SVR structure’s output at sample i is SVRi, and the different input scenarios, training methods, and output (control) scenarios used for each SVR structure are explained in Section 3.

2.3. RFR Control Chart

Decision trees are tree-structured classifiers/regressors with three types of nodes: (i) the root node, the initial node, which represents the entire sample and may be split into further nodes; (ii) the interior nodes, which represent the features of the dataset and are connected to other nodes through branches representing the decision rules; and (iii) the leaf nodes, which represent the outcomes. In the regression case (the case used in this paper), the procedure starts at the root of the tree and follows splits based on variable outcomes until a leaf node is reached, where a real-valued result is given. Although decision trees work best for classification, they also perform very well in regression. The most popular decision tree algorithms are Iterative Dichotomiser 3 (ID3), C4.5, and CART (classification and regression tree).
Random forest (RF) is an ML technique that uses ensemble learning for regression or classification. Ensemble learning combines the predictions of multiple machine-learning models to make a more accurate prediction than a single model; in the RF case, the single model is a decision tree, and the predictions are combined by taking the average in the regression case (called RFR) or the majority class in the classification case. The RF technique is powerful and accurate and overcomes the over-fitting issue of individual decision trees. It usually performs well on many problems, including those with non-linear feature relationships. The most popular RF algorithm was introduced by Leo Breiman; it builds a forest of uncorrelated trees using a CART-like procedure. The computer package we use (the ‘randomForest’ package) employs a CART algorithm and uses the mean-squared error (MSE) as the cost function for regression problems and the Gini impurity or the cross-entropy for classification problems. The main problem in using RF is choosing the number of trees to include in the model. Using more trees is not always better, as it might unnecessarily and significantly increase the computational time. The best number of trees varies from problem to problem and should be determined so as to minimize errors and obtain the desired performance.
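A minimal RFR sketch in the same spirit, with scikit-learn’s RandomForestRegressor standing in for the ‘randomForest’ package (the toy data are illustrative; 100 trees matches the count used later in Section 3.1.1). Because the forest averages 0/1 training labels, its regression output is bounded between 0 and 1:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 2).astype(float)  # toy 0/1 labels

# 100 CART-style trees whose predictions are averaged into a continuous output.
rfr = RandomForestRegressor(n_estimators=100, random_state=0)
rfr.fit(X, y)
out = rfr.predict(X)  # averages of 0/1 leaves, so each output lies in [0, 1]
```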
As before, the trained RFR structure’s output at sample i is RFRi.

3. Model Development and Analysis

In this section, different control–input–training scenarios are modeled for the above three ML control charts, based on which numerical analyses are conducted. We, for the first time, consider several input sets, training methods, and control scenarios to see which works best for each ML structure when building a multivariate control chart for simultaneous monitoring of the process parameters. There are two input scenarios for each ML structure, and two different training methods are used for each. The two training methods are (i) training the ML structures with only a certain small shift size to familiarize them with out-of-control situations or (ii) training them with both a certain small and a certain large shift size. The same number of in-control and out-of-control data is used in both training cases. Note that in all training scenarios, the ML structures are trained with an output of 0 representing an in-control situation and an output of 1 representing an out-of-control situation.
In the first input scenario, the two statistics employed by Sabahno et al. [9] are used. They used these statistics in their statistical structure to develop a multivariate control chart for simultaneous monitoring of the process parameters; we instead use them as the inputs of the ML structures. By doing so, we enable the ML structures to handle multivariate processes without adding complexity. These statistics are Hotelling’s T2 and W.
When the in-control process parameters (μ0 and Σ0) are known, the Hotelling T2 statistic is evaluated for each sample i to monitor the process mean vector as follows:
T_i^2 = n(\bar{X}_i - \mu_0)' \Sigma_0^{-1} (\bar{X}_i - \mu_0),
where n is the sample size and \bar{X}_i is the sample mean vector.
Regarding the process variability (Σ0), they used the following statistic:
W_i = (n - 1)\, |S_i|^{1/p} / |\Sigma_0|^{1/p},
where S_i is the sample variance–covariance matrix and p is the number of quality characteristics.
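The two statistics can be computed per sample as follows; this is a sketch with illustrative in-control parameters (standard bivariate normal), and the W computation follows the form stated above:

```python
import numpy as np

def hotelling_t2(sample, mu0, sigma0):
    """Hotelling T2 for one sample (rows = observations)."""
    n = sample.shape[0]
    d = sample.mean(axis=0) - mu0
    return float(n * d @ np.linalg.inv(sigma0) @ d)

def w_statistic(sample, sigma0):
    """W = (n - 1) |S|^(1/p) / |Sigma0|^(1/p), with S the sample
    variance-covariance matrix."""
    n, p = sample.shape
    S = np.cov(sample, rowvar=False)
    return float((n - 1) * np.linalg.det(S) ** (1 / p)
                 / np.linalg.det(sigma0) ** (1 / p))

rng = np.random.default_rng(0)
mu0, sigma0 = np.zeros(2), np.eye(2)  # illustrative in-control parameters
sample = rng.multivariate_normal(mu0, sigma0, size=10)  # n = 10, as in the paper
t2, w = hotelling_t2(sample, mu0, sigma0), w_statistic(sample, sigma0)
```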
In the second input scenario, however, we use all the elements of the sample mean vector and variance–covariance matrix as the inputs.
For example, in the case of two quality characteristics, we have five inputs as follows:
\bar{x}_1, \bar{x}_2, s_1^2, s_2^2, and s_{12} (the covariance), where \bar{X} = (\bar{x}_1, \bar{x}_2)' and S = \begin{pmatrix} s_1^2 & s_{12} \\ s_{12} & s_2^2 \end{pmatrix}.
Note that as the process dimension (the number of quality characteristics) increases, the number of inputs in the second input scenario also increases, whereas the number of inputs in the first input scenario is always two (T2 and W). This makes the first input scenario easier and more efficient to use in higher dimensions.
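The input counts can be made explicit: scenario 1 always has two inputs, while scenario 2 has the p means plus the p(p + 1)/2 distinct elements of the symmetric variance–covariance matrix (the function name is illustrative):

```python
def n_inputs_scenario2(p: int) -> int:
    """Inputs for scenario 2: the p means plus the p(p + 1)/2 distinct
    elements of the symmetric variance-covariance matrix."""
    return p + p * (p + 1) // 2
```

For p = 2 this gives the five inputs listed above, and the count grows quadratically with the dimension, which is why the two-input first scenario scales better.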
We also consider two types of control schemes in this paper. In the first type, the goal is only the detection of the out-of-control situation, and in the other type, the goal is to identify the responsible process parameters as well. In the first one, we only have one chart/output with which we determine whether the process is in or out of control. In the second one, we have several charts/outputs to identify the responsible variables, as well as process parameters, for the out-of-control situation. In summary, two input scenarios, two training scenarios, and two control scenarios are involved.
Moreover, three different numbers of quality characteristics are considered, i.e., p = 2, p = 3, and p = 4. However, to reduce the paper’s size, we only consider the p = 3 case with the first input scenario and the p = 4 case with the first input scenario and the first control scenario (detection only). It should also be noted that as the process dimension (the number of variables) increases, the amount and diversity of the data increase as well in both training methods. Furthermore, since the second input scenario is directly affected by the number of variables, increasing the number of variables increases the number of inputs in this scenario. Finally, as the dimension increases, the number of control charts in the second control scenario increases as well, which could increase the false-alarm rate.
The ARL (average run length) and SDRL (standard deviation of run length) are used in this paper to measure each chart’s performance, but in this section, we only make comparisons based on the ARL. The ARL is the average number of samples taken before an out-of-control signal is triggered by a control chart. A larger ARL is preferred when the process is in control, and a smaller one is preferred when it is out of control.
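For a memory-less chart whose in-control signal probability per sample is α, the run length is geometric, so both the ARL and the SDRL are close to 1/α. A simulation sketch, with a uniform random “chart output” as a toy stand-in for a real monitoring statistic:

```python
import numpy as np

rng = np.random.default_rng(2)

def run_length(chart_output, ucl, max_len=100_000):
    """Number of samples taken until the chart's output first exceeds the UCL."""
    for i in range(1, max_len + 1):
        if chart_output() > ucl:
            return i
    return max_len

# Toy chart whose output exceeds its UCL with probability alpha = 0.005
# when the process is in control, so the run length is geometric.
alpha = 0.005
rls = [run_length(lambda: rng.uniform(), 1 - alpha) for _ in range(5_000)]
arl, sdrl = float(np.mean(rls)), float(np.std(rls))  # both close to 1/alpha = 200
```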
The ‘svm’ function of the ‘e1071’ package in R is employed to train the SVR structures, the ‘nnet’ package for the ANN structures, and the ‘randomForest’ package for the RFR structures. In general, we try to use the default hyperparameters in each package; those we had to change are explained separately in each scenario. However, one thing we had to change in all these packages is the output type, from a classification output (the default setting) to a linear (regression) output.
Note that most of the mean and variation shifts in both the p = 2 and p = 3 cases are chosen similarly to those of Sabahno et al. [9] so that the proposed control charts in each scenario can be compared with their statistical charts. To avoid adding additional tables, we do not repeat their tables here and refer readers to their work (the ARLs in their Table 4 for the p = 2 case and in their Table 5 for the p = 3 case are the subjects of comparison). Therefore, we only include the comparison results in this paper. Moreover, similar to Sabahno et al. [9], the sample size used in this paper is 10 (n = 10).

3.1. Scenario a: Control Charts for Detection

In scenario a, the cases in which only signal detection is important are investigated. Since only one control chart is involved, assuming that the probability of type-I error equals 0.005, the in-control performance measure is ARL = 1/α = 200.

3.1.1. Scenario a1 (Control Type a, Input Set 1)

As mentioned before, only two inputs are considered for each ML control chart in this scenario: the T2 and W statistics.

Scenario a11 (Control Type a, Input Set 1, Training Method 1)

In this section, the shift size selected for the mean shifts is 0.2, and for the variance and covariance shifts, it is 1.2 times the in-control values. For the out-of-control dataset, we consider 250 data with only μ1 shifted; 250 data with only μ2 shifted; 250 data with both means shifted; 250 data with only σ1² shifted; 250 data with only σ2² shifted; 250 data with both σ1² and σ2² shifted; 250 data with only the covariance shifted; 250 data with σ1², σ2², and the covariance shifted; and 250 data with all the parameters shifted together. For the in-control dataset, the same amount of data, i.e., 9 × 250 = 2250 data, is included. We train each ML structure with these in-control and out-of-control datasets.
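The training-set construction above can be sketched as follows. This is a minimal version with assumed in-control parameters: the off-diagonal in-control covariance of 0.1 is an illustrative assumption, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 10, 250
mu0 = np.zeros(2)
sigma0 = np.array([[1.0, 0.1], [0.1, 1.0]])  # assumed in-control parameters
delta, rho = 0.2, 1.2  # the 0.2 mean shift and the 1.2x variation shift

def shifted_params(mean_mask, var_mask, cov_shift):
    """Apply the scenario's shifts to the masked parameters."""
    mu = mu0 + delta * np.asarray(mean_mask, dtype=float)
    sigma = sigma0.copy()
    for j, m in enumerate(var_mask):
        if m:
            sigma[j, j] *= rho
    if cov_shift:
        sigma[0, 1] = sigma[1, 0] = sigma0[0, 1] * rho
    return mu, sigma

# The nine out-of-control blocks listed above, 250 samples each:
blocks = [((1, 0), (0, 0), False), ((0, 1), (0, 0), False), ((1, 1), (0, 0), False),
          ((0, 0), (1, 0), False), ((0, 0), (0, 1), False), ((0, 0), (1, 1), False),
          ((0, 0), (0, 0), True), ((0, 0), (1, 1), True), ((1, 1), (1, 1), True)]
ooc = [rng.multivariate_normal(*shifted_params(*b), size=n)
       for b in blocks for _ in range(reps)]
ic = [rng.multivariate_normal(mu0, sigma0, size=n) for _ in range(len(ooc))]
```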
In this scenario, a linear kernel is used for the SVR structure and the RMSE is computed as 0.51. For the ANN structure, four nodes (twice the number of inputs) in the hidden layer are used and the trained ANN structure has an RMSE of 0.49. For the random forest package, 100 trees are used with an RMSE of 0.53. The UCLs of the ANN, SVR, and RFR control charts are computed as 0.7073, 1.016, and 0.920144, respectively. The result of this analysis, which is reported in Table 1, shows that the SVR and ANN charts perform better than the RFR chart. In general, as the mean shift increases, the SVR chart performs better, and as the variation shift increases, the ANN chart performs better. Another interesting result is that, as the shift size increases more than the values that the ML structure is trained with, the RFR chart performs worse, such that under the mean shift of size 2, it is not even able to detect the shift at all. However, this phenomenon does not happen in ANN and SVR charts.
Moreover, by comparing the results of Table 1 to Sabahno et al.’s [9] Table 4, one can see that in all cases, at least one of the ML control charts performs better than all their proposed control charts (fixed parameters and adaptive ones). Although we do not use any adaptive strategies, the proposed ML charts perform even better (much better in most cases) than their adaptive control charts.
In the case of three quality characteristics (p = 3), for the out-of-control dataset, we consider 150 data with only μ1 shifted; 150 data with only μ2 shifted; 150 data with only μ3 shifted; 150 data with both μ1 and μ2 shifted; 150 data with both μ1 and μ3 shifted; 150 data with both μ2 and μ3 shifted; 150 data with only σ1² shifted; 150 data with only σ2² shifted; 150 data with only σ3² shifted; 150 data with both σ1² and σ2² shifted; 150 data with both σ1² and σ3² shifted; 150 data with both σ2² and σ3² shifted; 150 data with only the covariance shifted; 150 data with all three variances and the covariance shifted; and 150 data with all the parameters shifted together. For the in-control dataset, we again include the same amount of data, which in this case is 2250 data. The same hyperparameters as in the p = 2 case are used for the ML structures. The RMSEs of the ANN, SVR, and RFR structures are computed as 0.49, 0.51, and 0.53, respectively, and the computed UCLs are 0.724, 0.9955, and 0.9305, respectively. The results in Table 2 show that the RFR chart mostly performs better in higher dimensions (compared with Table 1), especially under large mean shifts, where the deterioration in performance under shifts larger than the trained value is much less noticeable. For the ANN and SVR charts, the results are mixed (in some cases they perform better, in others worse); however, they still perform better than the RFR chart even in higher dimensions. The other conclusions derived for the p = 2 case are valid here as well.
Moreover, by comparing the results in Table 2 to Sabahno et al.’s [9] Table 5, it is evident that in most compared cases, at least one of the proposed ML control charts performs better than all their proposed control charts (fixed design parameters and adaptive ones). Only in the case of (0.5, 0.8, 0.5) mean shift, together with no/small variation shift, does at least one of their proposed charts perform a little bit better than the best of the three proposed charts in this paper.
In the case of four quality characteristics (p = 4), for the out-of-control dataset, we consider 125 data with only μ1 shifted; 125 data with only μ2 shifted; 125 data with only μ3 shifted; 125 data with only μ4 shifted; 125 data with μ1 and μ2 shifted; 125 data with μ1 and μ3 shifted; 125 data with μ2 and μ3 shifted; 125 data with μ1 and μ4 shifted; 125 data with μ2 and μ4 shifted; 125 data with μ3 and μ4 shifted; 125 data with μ1, μ2, and μ3 shifted; 125 data with μ1, μ2, and μ4 shifted; 125 data with μ1, μ3, and μ4 shifted; 125 data with μ2, μ3, and μ4 shifted; 125 data with only σ1² shifted; 125 data with only σ2² shifted; 125 data with only σ3² shifted; 125 data with only σ4² shifted; 125 data with σ1² and σ2² shifted; 125 data with σ1² and σ3² shifted; 125 data with σ2² and σ3² shifted; 125 data with σ1² and σ4² shifted; 125 data with σ2² and σ4² shifted; 125 data with σ3² and σ4² shifted; 125 data with σ1², σ2², and σ3² shifted; 125 data with σ1², σ2², and σ4² shifted; 125 data with σ1², σ3², and σ4² shifted; 125 data with σ2², σ3², and σ4² shifted; 125 data with only the covariance shifted; 125 data with all four variances and the covariance shifted; and 125 data with all the parameters shifted together. For the in-control dataset, we again include the same amount of data, which in this case is 31 × 125 = 3875 data. The same hyperparameters as in the p = 2 case are used for the ML structures. The RMSEs of the ANN, SVR, and RFR structures are computed as 0.49, 0.51, and 0.52, respectively, and the computed UCLs are 0.7164, 1.0439, and 0.9332, respectively.
The results in Table 3 show that the RFR chart again performs mostly better in higher dimensions (compared with both Table 1 and Table 2). For the ANN and SVR charts, the results are again mixed (in some cases they perform better, in others worse); however, they still perform better than the RFR chart even in higher dimensions. The other conclusions derived for the p = 2 case are valid here as well.

Scenario a12 (Control Type a, Input Set 1, Training Method 2)

In this scenario, we use the same input set as before but with the second training method (training with both small and large shifts). For the out-of-control dataset, the shift size for the small mean shifts is 0.2; for the large mean shifts, it is 1; for the small variance and covariance shifts, it is 1.2 times the in-control values; and for the large variance and covariance shifts, it is 2 times the in-control values. For the small shifts, the following are considered: 150 data with only μ1 shifted; 150 data with only μ2 shifted; 150 data with both means shifted; 150 data with only σ1² shifted; 150 data with only σ2² shifted; 150 data with both variances shifted; 150 data with only the covariance shifted; 150 data with both variances and the covariance shifted; and 150 data with all the parameters shifted together. Similarly, for the large shifts, we have 150 data with only μ1 shifted; 150 data with only μ2 shifted; 150 data with both means shifted; 150 data with only σ1² shifted; 150 data with only σ2² shifted; 150 data with both variances shifted; 150 data with only the covariance shifted; 150 data with both variances and the covariance shifted; and 150 data with all the parameters shifted together. The total number of out-of-control data in this scenario is 2700; therefore, the same number of in-control data is used. For all the ML structures, the same hyperparameters as in the previous case are used. The RMSEs of the ANN, SVR, and RFR structures are 0.44, 0.53, and 0.47, respectively. The UCLs of the ANN, SVR, and RFR control charts are computed as 0.8658, 0.5958, and 0.9729, respectively. The results for this case are reported in Table 4.
The results in Table 4 show that the ANN and SVR charts perform better with the second training method only in the cases of no/small variation shifts. In contrast, this training method suits the RFR chart the most, as it performs better than with the previous training method in all the cases. Regarding which of the three charts performs best in this scenario: despite the significant improvements in the performance of the RFR chart, it still cannot outperform the ANN and SVR charts; the ANN chart performs better in most compared cases, and in the rest (no or small variation shifts), despite its loss of performance under this training method, the SVR chart still performs better than the others. In addition, unlike with the previous training method, the RFR chart does not experience a deterioration in performance as the shift size becomes larger than the trained value.
Moreover, by comparing the results of Table 4 to Sabahno et al.’s [9] Table 4, one can again see that in all the cases, at least one of the proposed ML control charts performs better (mostly much better) than all their charts.
In the case of p = 3, the out-of-control dataset is constructed as follows: for each of the three individual means and each of the three mean pairs, we consider 125 data with shifts of 0.2 and 125 data with shifts of 1; likewise, for each individual variance and each variance pair, 125 data shifted 1.2 times and 125 data shifted 2 times; 125 data with only the covariance shifted 1.2 times and 125 data shifted 2 times; 125 data with all three variances and the covariance shifted 1.2 times and 125 data shifted 2 times; and 125 data with all the parameters shifted together with small shift sizes and 125 data with large shift sizes. For the in-control dataset, we include the same amount of data, which in this scenario is 3750 data. Keeping the previous hyperparameters, the RMSEs of the ANN, SVR, and RFR structures are 0.44, 0.52, and 0.5, respectively. The UCLs are computed as 0.8635, 0.5781, and 0.9783, respectively. The results are presented in Table 5.
This table shows that, again, under no/small variation shifts, the SVR chart mostly performs better than the other two; but as the variation shift increases, the other two begin to perform better than the SVR chart, with the ANN chart having the best performance. By comparing Table 2 and Table 5 (the first and second training methods), the conclusions drawn for the p = 2 case (comparing Table 1 and Table 4) can be drawn for the p = 3 case as well. By comparing the cases of p = 3 and p = 2 (Table 4 and Table 5), we realize that the conclusions are similar to those of Scenario a11.
Moreover, by comparing the results of Table 5 to Sabahno et al.’s [9] Table 5, we can see that with the second training method, at least one of the ML control charts performs better than all their control charts, in all the shift cases.
In the case of four quality characteristics (p = 4), the out-of-control dataset is constructed analogously: for each of the four individual means, each of the six mean pairs, and each of the four mean triples, we consider 100 data with shifts of 0.2 and 100 data with shifts of 1; likewise, for each individual variance, each variance pair, and each variance triple, 100 data shifted 1.2 times and 100 data shifted 2 times; 100 data with only the covariance shifted 1.2 times and 100 data shifted 2 times; 100 data with all four variances and the covariance shifted 1.2 times and 100 data shifted 2 times; and 100 data with all the parameters shifted together with small shift sizes and 100 data with large shift sizes. For the in-control dataset, we include the same amount of data, which in this scenario is 6200 data.
Keeping the previous hyperparameters, the RMSEs of the ANN, SVR, and RFR structures are 0.43, 0.53, and 0.46, respectively. The UCLs are computed as 0.856, 0.5108, and 0.9735, respectively. The results are presented in Table 6.
This table shows that, again, under no/small variation shifts, the SVR chart mostly performs better than the other two; but as the variation shift increases, the other two begin to perform better than the SVR chart, with the ANN chart having the best performance. By comparing Table 3 and Table 6 (the first and second training methods), the conclusions drawn for the p = 2 case (comparing Table 1 and Table 4) can be drawn for the p = 4 case as well. The results in Table 6 show that all the charts mostly perform better in higher dimensions when the second training method is used.

3.1.2. Scenario a2 (Control Type a, Input Set 2)

As mentioned before, different inputs for the ML structures are considered in this scenario. The sample's mean vector and variance–covariance matrix are used, and in the case of p = 2, there are five inputs (the two sample means, the two sample variances, and the sample covariance), as described in the following subsections.
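For illustration, the five inputs of this scenario can be computed from a subgroup as follows (a minimal Python sketch; the function name and the toy in-control parameters are ours):

```python
import numpy as np

# Input set 2 for p = 2: five features per subgroup, taken from the sample
# mean vector and variance-covariance matrix (two means, two variances,
# and the covariance).
def input_set_2(X):
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)  # sample covariance matrix (n - 1 divisor)
    return np.array([xbar[0], xbar[1], S[0, 0], S[1, 1], S[0, 1]])

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0, 0], [[1, 0.5], [0.5, 1]], size=10)  # one subgroup
features = input_set_2(X)  # length-5 feature vector fed to the ML structure
```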

Scenario a21 (Control Type a, Input Set 2, Training Method 1)

In-control and out-of-control datasets are the same as in Scenario a11. In this scenario, again, a linear kernel is used for the SVR structure, and the RMSE is computed as 0.51. For the ANN control chart, 10 nodes (twice the number of inputs) are used in the hidden layer, and the trained ANN structure has an RMSE of 0.48. For the random forest structure, again 100 trees are used, with an RMSE of 0.5. The UCLs of the ANN, SVR, and RFR control charts are computed as 0.8746, 1.0631, and 0.8143, respectively. The results of this analysis are reported in Table 7. Based on these results, the SVR chart performs better than the other two under all shift cases. By comparing their performance against the previous input scenario (comparing Table 1 and Table 7), we realize that the ANN chart mostly performs worse when the second input scenario is used. The results are mixed for the SVR chart; overall, the input set does not seem to affect its performance. The RFR chart, on the other hand, mostly performs better in this input scenario, especially under moderate and large mean shifts, and the deterioration in performance as the shift size becomes larger than the trained value is much less when this input set is used.
Moreover, by comparing the results in Table 7 to Sabahno et al.’s [9] Table 4, it can be seen that in most cases, at least one of the proposed ML control charts performs better (mostly much better) than all their proposed control charts. Only in the case of (0.5, 0.8) mean shift, together with no/small variation shift, at least one of their proposed charts performs a little bit better than the best of ours.

Scenario a22 (Control Type a, Input Set 2, Training Method 2)

In-control and out-of-control datasets are the same as in Scenario a12. As before, a linear kernel is used for the SVR structure, and the RMSE is computed as 0.48. For the ANN control chart, 10 nodes (twice the number of inputs) in the hidden layer are used, and the trained ANN structure has an RMSE of 0.42. For the random forest structure, 100 trees are used again, with an RMSE of 0.45. The UCLs of the ANN, SVR, and RFR control charts are computed as 0.8846, 0.8136, and 0.894, respectively.
According to the results of this analysis (reported in Table 8), the ANN chart is the worst-performing in all the compared cases. Under zero variation shift, the RFR chart performs better than the SVR chart. Other than that, under small variation shifts, together with small mean shifts, the SVR chart performs better, and together with moderate and large mean shifts, the RFR chart performs better. In large variation shift cases, the SVR chart performs better. By comparing the first and second training cases (Table 7 and Table 8), it is evident that the ANN and RFR charts perform better with the second training method being applied, in all the shift cases. On the other hand, under no or small variation shifts, the SVR chart also performs better with the second training method, but under larger variation shifts, the chart performance is rather the same no matter which training method is used. In addition, no deterioration in performance as the shift size becomes larger than the trained value is noticeable in any of the charts in this scenario. By comparing Table 4 and Table 8 (different input sets), we realize that all the charts mostly perform better with the second input scenario, with the RFR chart having the least improved cases.
Moreover, by comparing the results in Table 8 to Sabahno et al.’s [9] Table 4, we can again see that in all the cases, at least one of the proposed ML control charts performs better (mostly much better) than all their control charts. Therefore, when the second input scenario is used, if the second training method is utilized, the proposed scheme performs better under all shift sizes and types, unlike the first training method (Scenario a21).

3.2. Scenario b: Control Charts for Detection and Identification

In scenario b, several ML structures are involved, and consequently, so are several control charts in each control scheme. Each process parameter has its own output (control chart), and if the corresponding chart signals, it means that this parameter (and consequently the variable associated with it) has shifted. However, as the number of quality characteristics increases, the number of control charts (outputs) also increases in this scenario. This might increase the false-alarm rates. As such, this scenario should be used with more caution, especially in larger dimensions. In addition, even before conducting any numerical analysis (next subsections), overall worse performance compared to scenario a is expected in this scenario, because more than one control chart is being monitored together. Having said that, the advantage of this scenario is not its performance; rather, it sacrifices a little performance in exchange for identifying the responsible variable(s)/process parameter(s).
In the case of p = 2, since we have five process parameters, namely μ 1 , μ 2 , σ 1 2 , σ 2 2 and σ 12 , five ML structures, each with its own control chart, are required. These charts are monitored together, and a signal from any one of them means that the process is out of control. Before developing the control charts for each scenario, we should mention that it was more difficult to choose suitable hyperparameters for this scenario to obtain the desired ARL performance, especially in the case of p = 3. Therefore, we tried many combinations of hyperparameter values for the ML structures to obtain the desired overall performance. The desired performance, in this case, is computed (assuming independence of the control charts as well as equality of their type-I error probabilities) as α_overall = 1 − (1 − α)^m, where m is the number of control charts, and ARL = 1/α_overall = 1/0.005 = 200. Consequently, for the case of p = 2, in which a maximum of five control charts is required (m = 5), the above formula gives α = 0.001. This means that each individual control chart's in-control ARL is 1/0.001 = 1000, but together they should have an in-control ARL of 1/0.005 = 200. Similarly, for the case of p = 3, the individual α s are computed as 0.000716, which results in an individual ARL of 1/0.000716 = 1396.6. Note that, for simplicity, we assume that all the covariances are equal in the case of p = 3. Therefore, we only have one control chart for monitoring the covariance, making a total of seven control charts (m = 7, i.e., three to monitor the means, three to monitor the variances, and one to monitor the covariance).
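The individual type-I error probability implied by a target overall in-control ARL can be verified with a short Python sketch (the function name is ours):

```python
# Assuming m independent charts sharing the same type-I error alpha:
#   alpha_overall = 1 - (1 - alpha)**m
# so the individual alpha for a target overall in-control ARL is
#   alpha = 1 - (1 - alpha_overall)**(1/m).
def individual_alpha(m, arl_overall=200):
    alpha_overall = 1.0 / arl_overall
    return 1.0 - (1.0 - alpha_overall) ** (1.0 / m)

alpha_p2 = individual_alpha(5)  # p = 2: five charts, alpha ~ 0.001
alpha_p3 = individual_alpha(7)  # p = 3: seven charts, alpha ~ 0.000716
```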

3.2.1. Scenario b1 (Control Type b, Input Set 1)

Similar to Scenario a1, the inputs, in this case, are T2 and W statistics. Again, two different training methods are applied on each control chart (one trained with only small shifts and the other one trained with small and large shifts).
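As an illustration of input set 1, a Hotelling-type T2 statistic for the subgroup mean can be computed as below; for the dispersion input we use a generalized-variance ratio as a stand-in, since the exact form of the W statistic follows the earlier sections of the paper (both the stand-in and the toy parameters are our assumptions):

```python
import numpy as np

# Sketch of the two chart statistics used as ML inputs (input set 1).
# T2 is the standard Hotelling statistic for the subgroup mean; the
# dispersion statistic below (|S|/|Sigma0|) is only an illustrative
# stand-in for the paper's W statistic.
def chart_statistics(X, mu0, Sigma0):
    n, p = X.shape
    xbar = X.mean(axis=0)
    S = np.cov(X, rowvar=False)  # sample covariance matrix
    d = xbar - mu0
    T2 = n * d @ np.linalg.inv(Sigma0) @ d
    W = np.linalg.det(S) / np.linalg.det(Sigma0)  # assumed dispersion input
    return T2, W

rng = np.random.default_rng(1)
mu0 = np.zeros(2)
Sigma0 = np.array([[1.0, 0.5], [0.5, 1.0]])
X = rng.multivariate_normal(mu0, Sigma0, size=10)  # one subgroup, n = 10
T2, W = chart_statistics(X, mu0, Sigma0)
```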

Scenario b11 (Control Type b, Input Set 1, Training Method 1)

In this scenario, the ML structures are trained with only small shift sizes. First, we generate 1000 in-control data that are going to be used in all the control charts. Second, we generate 1000 out-of-control data with 0.2 shifts in μ 1 and use them for the control chart that monitors μ 1 ; 1000 out-of-control data with 0.2 shifts in μ 2 and use them for the control chart that monitors μ 2 ; 1000 out-of-control data with shifts of 1.2 times the in-control σ 1 2 and use them for the control chart that monitors σ 1 2 ; 1000 out-of-control data with shifts of 1.2 times the in-control σ 2 2 and use them for the control chart that monitors σ 2 2 ; and 1000 out-of-control data with shifts of 1.2 times the in-control σ 12 and use them for the control chart that monitors σ 12 . Now that we have five datasets, we need to train five different ML structures. For the ANN scheme, for monitoring μ 1 , μ 2 , σ 1 2 , σ 2 2 and σ 12 , we use 5, 5, 5, 5, and 4 nodes in the hidden layers, respectively, to obtain an overall performance of 200. The RMSEs of these ANN structures are, respectively, 0.49, 0.49, 0.48, 0.48, and 0.49. Finally, the UCLs are, respectively, computed as 1.0292, 1.1628, 0.8846, 1.048, and 0.8393.
For the SVR scheme, the major difference was that the linear kernel did not work for all the structures in this scenario, and we had to try the radial kernel as well (other kernel types did not work at all). The kernel types we used for μ 1 ,   μ 2 ,   σ 1 2 , σ 2 2   and σ 12 control charts are, respectively, radial, radial, radial, radial, and linear. The RMSEs and UCLs are as follows. The RMSEs are 0.52, 0.52, 0.52, 0.53, and 0.51, and the UCLs are 1.1174, 1.1204, 1.1089, 1.0972, and 1.283. Regarding the RFR scheme, 100 trees still worked well for each structure in this scenario. The RMSEs and UCLs for the RFR charts are computed as follows. The RMSEs are 0.53, 0.54, 0.52, 0.52, and 0.52, and the UCLs are 0.9576, 0.9409, 0.9597, 0.9681, and 0.9433.
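To make the kernel choice concrete, the following sketch trains one radial-kernel SVR structure on a 0/1 in-control/out-of-control regression target (scikit-learn's `SVR` is used here as a stand-in for the R packages used in the paper; the placeholder inputs and the 0/1 target are our assumptions):

```python
import numpy as np
from sklearn.svm import SVR

# One SVR structure with a radial (RBF) kernel, regressing a 0/1 target
# from two placeholder chart-statistic inputs. In the paper, each such
# structure feeds one control chart with its own UCL.
rng = np.random.default_rng(3)
X_in = rng.normal(0.0, 1.0, size=(200, 2))   # placeholder in-control inputs
X_out = rng.normal(1.0, 1.0, size=(200, 2))  # placeholder shifted inputs
X = np.vstack([X_in, X_out])
y = np.r_[np.zeros(200), np.ones(200)]       # assumed regression targets

svr = SVR(kernel="rbf").fit(X, y)
scores = svr.predict(X)  # each score would be compared against the chart's UCL
```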
According to the reported results in Table 9, all the control charts mostly experience a deterioration in performance compared to when only one control chart is used (Table 1). However, the RFR scheme mostly performs better in this control case when the mean shift is large. The worst deterioration in performance can be seen in the SVR scheme. It even experiences a deterioration in performance under shifts larger than the trained value in this scenario (unlike Table 1). The RFR scheme, however, experiences that effect only when the variation shift size is very large in this scenario. Out of these three, the ANN is the best-performing scheme. Its performance is even very close to that reported in Table 1 (where the chart is used only for detection).
Moreover, by comparing the results of Table 9 to Sabahno et al.'s [9] Table 4, one can still see that in most cases (except mostly in the case of (0.5, 0.8) mean shift together with no/small variation shift), at least one of the proposed ML control charts performs better than all their proposed control charts, even though their control charts are only designed for detection.

In the case of three quality characteristics, the construction of the dataset is the same as in the case of p = 2, except that here we add the following: 1000 out-of-control data with 0.2 shifts in μ 3 , used for the control chart that monitors μ 3 , and 1000 out-of-control data with shifts of 1.2 times the in-control σ 3 2 , used for the control chart that monitors σ 3 2 . For the ANN scheme, for monitoring μ 1 , μ 2 , μ 3 , σ 1 2 , σ 2 2 , σ 3 2 and the covariance, we use 5, 5, 5, 5, 5, 5, and 4 nodes in the hidden layers, respectively, with RMSEs of 0.49, 0.49, 0.49, 0.49, 0.49, and 0.48, and UCLs of 0.9169, 1.2852, 0.9164, 0.822, 0.8787, 0.8193, and 0.7305. For the SVR scheme, the used kernels are, respectively, radial, radial, radial, radial, radial, radial, and linear. The RMSEs are 0.51, 0.51, 0.51, 0.52, 0.52, 0.52, and 0.52, and the UCLs are 1.1506, 1.0763, 1.0906, 1.0908, 1.1556, 1.0914, and 1.6437. Regarding the RFR scheme, the least numbers of trees that we had to use for each structure (to obtain the overall ARL of 200) are, respectively, 100, 100, 100, 500, 100, 300, and 100. The RMSEs are 0.53, 0.53, 0.53, 0.53, 0.52, 0.53, and 0.52, and the UCLs are 0.9485, 0.9585, 0.9474, 0.9577, 0.9647, 0.9647, and 0.9677. The results of this analysis are reported in Table 10.
According to the results in Table 10, the ANN chart performs better than the others. By comparing Table 2 and Table 10, one can conclude that when the identification is a goal as well, all the schemes perform worse. Also, both the SVR and RFR schemes experience significant deterioration in performance as the shift size becomes larger than the trained value in this scenario. By comparing Table 9 and Table 10 (p = 2 and p = 3 cases), we realize that the ANN scheme mostly performs worse in higher dimensions and the SVR scheme mostly performs better (except under no variation shift). Regarding the RFR scheme, only under small mean shifts and very large variation shifts, it mostly performs better in higher dimensions.
Moreover, by comparing the results in Table 10 to Sabahno et al.’s [9] Table 5, we can see that the number of cases in which at least one of our control charts performs better than all their charts is almost equal to the cases that at least one of their proposed charts performs better than ours. Once again, their charts are designed for detection, and in this scenario, we are looking for both detection and identification; therefore, the performance deterioration is normal.

Scenario b12 (Control Type b, Input Set 1, Training Method 2)

In this scenario, the ML structures are trained with both small and large shift sizes. Here again, 1000 in-control data are generated to be used in all control charts. Regarding the out-of-control dataset, we generate 500 out-of-control data with 0.2 shifts and 500 out-of-control data with 1 shift in μ 1 and use it for the control chart that monitors μ 1 ; 500 out-of-control data with 0.2 shifts in μ 2 and 500 out-of-control data with 1 shift in μ 2 and use it for the control chart that monitors μ 2 ; 500 out-of-control data with shifts 1.2 times that of the in-control σ 1 2 and 500 out-of-control data with shifts 2 times that of the in-control σ 1 2 and use it for the control chart that monitors σ 1 2 ; 500 out-of-control data with shifts 1.2 times that of the in-control σ 2 2 and 500 out-of-control data with shifts 2 times that of the in-control σ 2 2 and use it for the control chart that monitors σ 2 2 ; and finally, 500 out-of-control data with shifts 1.2 times that of the in-control σ 12 and 500 out-of-control data with shifts 2 times that of the in-control σ 12 and use it for the control chart that monitors   σ 12 . As five datasets are considered, five different ML structures for each scheme are required.
For the ANN scheme, for μ 1 ,   μ 2 ,   σ 1 2 , σ 2 2   and σ 12 , we use five nodes in all the hidden layers. The RMSE of these ANN structures, respectively, are 0.42, 0.42, 0.46, 0.46, and 0.43. Finally, the UCLs are computed as 1.1086, 0.9762, 0.9519, 0.985, and 1.007. For the SVR scheme, we use the radial kernel for all the structures. The RMSEs and UCLs are as follows. The RMSEs are 0.45, 0.44, 0.49, 0.49, and 0.46, and the UCLs are 1.0793, 1.1018, 1.0266, 1.0194, and 1.0103. Regarding the RFR scheme, 100 trees worked well for each structure. The RMSEs and UCLs for the RFR charts are as follows. The RMSEs are 0.45, 0.45, 0.49, 0.49, and 0.45, and the UCLs are 0.9998, 0.9998, 0.9937, 0.9961, and 0.999.
The results in Table 11 show that in most cases, the ANN scheme performs better than the other charts. However, under no variation shift, and under small variation shifts when the mean shift is also small, the RFR scheme performs better. By comparing the results of this table with those of its equivalent scenario in which only detection was the goal (comparing Table 4 and Table 11), one can see the performance deterioration in all the schemes when identification is added as a goal. However, as in the previous case (Scenario b11), the deterioration is lowest for the ANN chart. In addition, by comparing the results of the two training methods (Scenario b11/Table 9 and Scenario b12/Table 11), one can see that the SVR and RFR schemes benefit from the second training method, with the RFR benefiting the most, such that we no longer see any deterioration in performance under shifts larger than the trained value. The ANN scheme, however, experiences a slight deterioration in performance in most cases.
Moreover, by comparing the results in Table 11 to Sabahno et al.'s [9] Table 4, we realize that although there are now more cases in which at least one of their proposed control charts performs better, the cases in which at least one of the proposed ML schemes performs better than all their control charts are still more numerous, even though their control charts are only designed for detection.
In the case of three quality characteristics, the construction of the dataset is similar to the p = 2 case, with the only difference being that since two more process parameters ( μ 3 and σ 3 2 ) /charts are added in the case of p = 3, 500 out-of-control data with 0.2 shifts and 500 out-of-control data with 1 shift in μ 3 are generated to be used for the control chart that monitors   μ 3 , and 500 out-of-control data with shifts 1.2 times that of the in-control σ 3 2 and 500 out-of-control data with shifts 2 times that of the in-control σ 3 2 are considered to be used for the control chart that monitors   σ 3 2 .
For the ANN control charts to monitor μ 1 , μ 2 , μ 3 , σ 1 2 , σ 2 2 , σ 3 2 , and the covariance, five nodes are utilized in each hidden layer, with RMSEs of 0.38, 0.38, 0.38, 0.45, 0.44, 0.44, and 0.27 as well as UCLs of 0.913, 0.9082, 0.96518, 0.9581, 0.9339, 0.9797, and 0.9951. For the SVR control charts, the used kernels are all radial. The RMSEs are 0.39, 0.39, 0.39, 0.48, 0.48, 0.48, and 0.29, and the UCLs are 1.0722, 1.0839, 1.1073, 1.0892, 1.0128, 1.0823, and 1.0281. Regarding the RFR control charts, the least numbers of trees that we had to use for each structure (to obtain the overall ARL of 200) are 100, 100, 100, 500, 300, 300, and 100. The RMSEs are 0.4, 0.4, 0.4, 0.45, 0.46, 0.46, and 0.29, and the UCLs are 0.7026, 0.7052, 0.7254, 0.3872, 0.4453, 0.4077, and 0.8023. Note that for the RFR structures in this scenario, for the first time, we had to activate a feature of the 'randomForest' package called 'corr.bias', which performs bias correction for the regression model. Without this feature activated, we were unable to obtain the desired performance, no matter how many trees we tried.
The result of this analysis is reported in Table 12. This table shows that in this case, the RFR scheme performs better than the other two in all the shift cases. By comparing this case with the first training method (Table 10 and Table 12), we realize that the RFR scheme performs better with the second training method, and on the contrary, the ANN and SVR schemes mostly perform worse. Comparing the p = 3 and p = 2 cases (Table 11 and Table 12), the RFR scheme mostly performs better in higher dimensions. On the other hand, the ANN and SVR charts mostly perform worse in higher dimensions (however, under very large mean shifts, the SVR scheme performs better). The highest deterioration in performance belongs to the ANN scheme. By comparing two process control scenarios (Table 5 and Table 12), one can see that all the schemes experience deterioration in performance, more so for the SVR scheme, as unlike the previous control case, it experiences a significant deterioration in performance as the shift size becomes larger than the trained value.
Moreover, by comparing the results in Table 12 to Sabahno et al.’s [9] Table 5, we can see that in the cases of large mean shifts and/or large variation shifts, at least one of their proposed charts performs better than all our proposed charts. Otherwise, at least one of our charts performs better than all their charts.

3.2.2. Scenario b2 (Control Type b, Input Set 2)

The individual elements of each sample’s mean vector and variance–covariance matrix are considered as the inputs in this scenario.

Scenario b21 (Control Type b, Input Set 2, Training Method 1)

The first training method is used in this scenario. The constructions of the in-control and out-of-control datasets are the same as in Scenario b11. For the ANN scheme, 11 nodes in each hidden layer are used. The obtained RMSEs are 0.45, 0.43, 0.46, 0.46, and 0.45. Also, the computed UCLs are 0.9644, 1.1054, 1.3556, 1.1599, and 1.177. For the SVR scheme, we use linear kernels in all the structures in this scenario. The RMSEs are 0.49, 0.47, 0.5, 0.51, and 0.5. The UCLs are 1.2462, 1.2957, 1.3706, 1.3492, and 1.1669. For the RFR scheme, we use 100 trees for each structure. The RMSEs are 0.49, 0.48, 0.49, 0.49, and 0.5. The UCLs are 0.8489, 0.9042, 0.88, 0.8534, and 0.8459. The results of this scenario are reported in Table 13.
It is clear from the results in Table 13 that the SVR scheme performs better than the other two in all cases in this scenario. By comparing Table 13 to its equivalent scenario with only detection as the goal (Table 7), we realize that the ANN scheme mostly performs worse in this scenario. The RFR scheme mostly performs worse as well, and its deterioration is more severe, especially as the mean shifts increase. The SVR scheme mostly performs worse too, but in most cases its deterioration in performance is not as noticeable as that of the ANN and RFR schemes. By comparing Table 9 and Table 13 (different input sets but the same training method), we can see that the ANN scheme performs significantly worse with the second input set. On the contrary, the SVR scheme performs significantly better with this input set, such that it does not experience any deterioration in performance under large shifts. Regarding the RFR scheme, it performs better under small mean shifts, but as the mean shift increases, its performance becomes significantly worse.
Moreover, by comparing the results in Table 13 to Sabahno et al.’s [9] Table 4, one can see that the situation is improved compared to the previous scenario, and in most cases (except in the case of (0.5, 0.8) mean shifts), at least one of the proposed ML control charts performs better than all their proposed control charts again, even though their control charts are only designed for detection.

Scenario b22 (Control Type b, Input Set 2, Training Method 2)

The construction of in-control and out-of-control datasets is the same as in Scenario b12. For the ANN scheme, 10 nodes in each hidden layer are used. The obtained RMSEs are 0.38, 0.36, 0.41, 0.42, and 0.38. The UCLs are computed as 0.9637, 1.2445, 1.7237, 1.034, and 1.061. For the SVR scheme, again linear kernels are used in all the structures, with the RMSEs equal to 0.41, 0.4, 0.47, 0.47, and 0.43 and the UCLs computed as 0.8145, 0.8271, 0.9899, 0.9878, and 0.9457. Regarding the RFR scheme, 100 trees for each ML structure are used. The obtained RMSEs are 0.42, 0.4, 0.45, 0.45, and 0.43. The computed UCLs are 0.9401, 0.9528, 0.9458, 0.9284, and 0.894. The results of this analysis are reported in Table 14.
According to the results in Table 14, in most cases, the SVR scheme performs better than the other two. By comparing Table 14 to Table 13 (two different training methods), we realize that the ANN and RFR schemes perform better with the second training method. On the other hand, on average, the performance of the SVR scheme is very similar in the two training methods. By comparing Table 14 to its equivalent in scenario a (which is Table 8), we can conclude that all the schemes mostly perform worse when the identification is a goal and when the second training method is used. Having said that, the RFR scheme is the most affected one, while the SVR scheme is the least affected. By comparing Table 14 to Table 11 (different input sets), we realize that the ANN scheme mostly performs worse with this input set. The RFR scheme mostly performs worse too, but it even experiences a significant deterioration in performance as the shifts become larger than the training value in this input set. The SVR scheme performs better with this input method such that there is no deterioration in performance anymore as the shift size becomes larger than the trained value (unlike the RFR scheme).
Moreover, by comparing the results in Table 14 to Sabahno et al.’s [9] Table 4, one can see that similar to the previous scenario, except in the case of (0.5, 0.8) mean shift, at least one of the proposed ML control charts performs better than all their proposed control charts.

4. An Illustrative Example

A real case originally discussed by Hawkins and Maboudou-Tchao [2] regarding a healthcare process for monitoring blood pressure and heart rate is used in this section to illustrate the application of the proposed ML schemes. The main indicators, in this case, are heart attack and stroke. The quality characteristics are x1 = systolic blood pressure, x2 = diastolic blood pressure, and x3 = heart rate. They follow a multivariate normal distribution with the following in-control parameter values.
$$ \boldsymbol{\mu}_0 = \left( 126.61,\; 77.48,\; 80.95 \right)^{\prime} \quad \text{and} \quad \boldsymbol{\Sigma}_0 = \begin{pmatrix} 15.04 & 8.66 & 10.51 \\ 8.66 & 5.83 & 5.56 \\ 10.51 & 5.56 & 15.17 \end{pmatrix}. $$
To identify the quality characteristic and the process parameter responsible for the chart signal, the second control scenario is used for this practical case. Based on the results of the numerical analyses section, the first training method for the ANN and SVR schemes and the second training method for the RFR scheme (because the results showed that the RFR scheme performs better with the second training method) are utilized. The mean shift size used for training is 0.2 × σ for small shifts (note that in the numerical analyses section, since all the standard deviations were equal to 1, we simply used 0.2 × 1 = 0.2), and similarly, it is 1 × σ for large shifts. The shifted variances used for training are as they were in Section 3 (because in both cases, they are multiplied by a coefficient). In addition, we assume that detection of the covariance shift is not a priority for the quality system; therefore, we only consider six control charts, for monitoring μ 1 , μ 2 , μ 3 , σ 1 2 , σ 2 2 , and σ 3 2 . The UCLs are computed using the proposed algorithm in Section 2 with α = 0.005 and n = 10. Also, the same R packages as in the simulation study section are used here. Similarly, the only changes we made to those packages' default settings were changing the output type to regression, the number of trees in the RFR scheme, the number of nodes in the hidden layer in the ANN scheme, and the kernel type in the SVR scheme.
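As a simplified illustration of how a UCL can be set by simulation (this is a stand-in for the algorithm of Section 2, not a reproduction of it; the toy "model" and function names are ours), one can feed many in-control subgroups through a trained structure and take the empirical (1 − α) quantile of its output:

```python
import numpy as np

# Simplified UCL-by-simulation sketch: simulate in-control subgroups,
# score each with the trained model, and use the empirical (1 - alpha)
# quantile of the scores as the chart's UCL.
def simulated_ucl(model_output, mu0, Sigma0, alpha=0.005, n=10, reps=5000, seed=0):
    rng = np.random.default_rng(seed)
    scores = np.empty(reps)
    for i in range(reps):
        X = rng.multivariate_normal(mu0, Sigma0, n)
        scores[i] = model_output(X)
    return np.quantile(scores, 1.0 - alpha)

# Toy stand-in "model": distance of the subgroup mean from mu0.
mu0 = np.zeros(2)
Sigma0 = np.eye(2)
ucl = simulated_ucl(lambda X: np.linalg.norm(X.mean(axis=0) - mu0), mu0, Sigma0)
```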
For the SVR scheme, radial kernels are used in all the structures. The RMSEs are 0.47, 0.47, 0.51, 0.44, 0.46, and 0.52, and the UCLs are 1.0714, 1.0792, 1.0735, 1.0744, 1.0033, and 1.0932. For the ANN scheme, five nodes are used in the hidden layers. The RMSEs are 0.45, 0.45, 0.49, 0.42, 0.44, and 0.49, and the UCLs are 0.9934, 1.7028, 1.0451, 0.9771, 0.9736, and 0.8285. For the RFR scheme, the numbers of trees in the six structures are 100, 100, 100, 300, 100, and 500, respectively, and the bias-correction feature is turned on only for the last structure. The RMSEs are 0.4, 0.4, 0.43, 0.38, 0.41, and 0.47, and the UCLs are 0.9858, 0.98, 0.9999, 0.9986, 0.9993, and 0.9914.
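The RMSE values above summarize how closely each trained structure reproduces its training targets. As a reminder, RMSE is the square root of the mean squared prediction error; a generic helper (not code from the paper) is:

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-squared error between targets and predictions."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))
```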
To see how each control chart reacts to an out-of-control situation, we introduce an artificial shift into the process and take ten consecutive samples. Specifically, we shift the third mean by 1.1 (μ3 = 80.95 + 1.1) and the third variance by a factor of 1.4 (σ3² = 1.4 × 15.17). Since we use the detection–identification scenario, there are six control charts in each of the proposed ML schemes, and if any of those six charts signals, we call the process out of control.
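The shift and the signaling rule can be sketched as follows. The per-chart outputs would come from the six trained structures, so only the decision rule is shown, with hypothetical names; this is an illustration, not the paper's implementation.

```python
import numpy as np

mu0 = np.array([126.61, 77.48, 80.95])
sigma0 = np.array([
    [15.04, 8.66, 10.51],
    [8.66, 5.83, 5.56],
    [10.51, 5.56, 15.17],
])

# Artificial out-of-control condition: shift the third mean by 1.1
# and inflate the third variance by a factor of 1.4.
mu1 = mu0.copy()
mu1[2] += 1.1
sigma1 = sigma0.copy()
sigma1[2, 2] *= 1.4

rng = np.random.default_rng(2)
samples = [rng.multivariate_normal(mu1, sigma1, size=10) for _ in range(10)]

def out_of_control(chart_outputs, ucls):
    """Detection-identification rule: the process is declared out of
    control if any per-parameter chart exceeds its UCL; the exceeding
    charts identify the responsible parameter(s)."""
    flags = [out > ucl for out, ucl in zip(chart_outputs, ucls)]
    return any(flags), flags
```

Inflating a diagonal entry of Σ0 keeps the matrix positive definite, so the shifted distribution remains a valid multivariate normal.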
The results of ten consecutive random samplings from the process for each control scheme, i.e., SVR, ANN, and RFR, are reported in Table 15, Table 16 and Table 17, respectively. Note that since six control charts are involved in each scheme (18 control charts in total), the results are presented in tables to reduce the paper size.
Regarding the SVR scheme (Table 15), none of its control charts signal during the first ten consecutive samplings. Regarding the ANN scheme (Table 16), the ANN6 chart, which is responsible for detecting σ3² shifts, signals at samples no. 2 and no. 8, and the ANN3 chart, which is responsible for detecting μ3 shifts, signals at sample no. 10. Regarding the RFR scheme (Table 17), its RFR1 and RFR3 charts, which are responsible for detecting μ1 and μ3 shifts, respectively, both signal at sample no. 10. Since there is no shift in the first mean, the RFR1 signal is clearly a false alarm.
Note that this is only a simple example showing how each proposed control scheme can be implemented in practice; no comparisons can be made based on only ten samples. Performance comparisons were the purpose of the previous section.

5. Concluding Remarks

This paper proposed new control charts for simultaneous monitoring of the mean vector and the variance–covariance matrix of multivariate normal processes. For the first time, machine-learning techniques were used for this purpose. The three ML techniques used are ANN, SVM, and RF. We obtained linear outputs from these ML structures and then applied control chart rules to decide whether the process is in control or out of control. Two different input sets and two different training methods were employed for the proposed ML structures. In the first input set, two statistics (one representing the process mean vector and the other the process variability) were employed; in the second input set, we used each process parameter of each quality characteristic separately as an input. In the first training method, we trained the ML structures with only a small shift size, while in the second, both a small and a large shift size were considered. We also used two different process control scenarios. In the first scenario, the only goal was detecting the out-of-control situation, regardless of which variable and process parameter is responsible for it. In the second scenario, identifying the variable(s)/process parameter(s) responsible for the signal was an additional goal, which required several control charts to be monitored together.
For each of these control–input–training scenarios, the ML structures were trained, and control charts were developed. Numerical analyses were performed for the cases of two, three, and four quality characteristics. The results, in general, showed that depending on which control–input–training scenario is used, as well as on the number of variables, each of these ML control charts performed better in some cases, and there is no absolute winner among them. However, considering that its decision-making procedure works by splitting tree branches, the RFR scheme mostly tended to perform better when there were more inputs, more diverse training, and more quality characteristics (higher dimension). This did not happen in all cases (except for the diverse training part), and most importantly, even when its performance was improved by all these diversities, that did not mean it would perform better than the ANN and SVR charts (which, in most cases, it did not). It was also concluded that the charts performed worse when identification was also a goal; however, this deterioration in performance was usually smallest for one ML scheme, which differed based on the scenario.
We also compared the proposed ML charts with some recently developed multivariate statistical control charts with fixed and adaptive chart parameters (designed only for detection). For the case of p = 2, the results showed that in the detection-only scenario with the first input set, at least one of the proposed ML charts performed better than all of their charts in all the shift cases, even though our proposed schemes all use fixed parameters. With the second input set and the first training method, our proposed charts performed better in most cases, and with the second training method, they performed better in all cases. Regarding the detection–identification scenario, our proposed ML charts still performed better in more cases, even though their charts were designed only for detection, which usually implies better performance by default. For the case of p = 3, the results showed that in the detection-only scenario, at least one of the ML charts performed better than all of their charts in most shift cases, and with the second training method, in all the shift cases. In the detection–identification scenario, the number of cases in which at least one of our proposed schemes performed better than all of their charts, and vice versa, was almost the same. However, keep in mind that, unlike their charts, our proposed charts are also capable of identification. Lastly, an illustrative example based on a healthcare-related practical case was presented to show how the proposed schemes can be implemented in practice.
Highlighting the primary focus of this paper, our investigation centered on the use of diverse machine-learning techniques in constructing control charts, effectively substituting traditional statistical methods. This exploration involved rigorous testing of different input sets and training methodologies to surpass the performance of statistical control charts. While our study concentrated on specific control charts and a limited selection of input sets and training approaches, it opens the door to further exploration of a wider range of control charts, input sets, and training methods, thus broadening the horizons of research in this field.
For future developments, one might be interested in trying different input sets, training methods, and even output sets for ML structures. Adding adaptive features to the proposed control charts would also be a major improvement. Since ML control charts have rarely been developed, developing them for many other applications and comparing them to traditional statistical control charts would also be of interest. In particular, since all the ML control charts developed so far are memory-less, developing memory-type ML control charts and comparing them to memory-type statistical control charts is a promising direction. In addition, how to train the ML structures when the process distribution is unknown is another challenge worth investigating.

Author Contributions

Conceptualization, H.S.; methodology, H.S. and S.T.A.N.; software, H.S.; validation, H.S.; formal analysis, H.S.; investigation, H.S. and S.T.A.N.; resources, H.S.; data curation, H.S.; writing—original draft preparation, H.S.; writing—review and editing, S.T.A.N.; visualization, H.S.; supervision, S.T.A.N.; project administration, H.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

All data are available in the manuscript.

Acknowledgments

The authors thank the journal’s editorial board and appreciate the esteemed reviewers for their constructive comments, which led to significant improvements in the quality of the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Reynolds, M.R., Jr.; Gyo-Young, C. Multivariate control charts for monitoring the mean vector and covariance matrix. J. Qual. Technol. 2006, 38, 230–253.
2. Hawkins, D.M.; Maboudou-Tchao, E.M. Multivariate exponentially weighted moving covariance matrix. Technometrics 2008, 50, 155–166.
3. Zhang, G.; Chang, S.I. Multivariate EWMA control charts using individual observations for process mean and variance monitoring and diagnosis. Int. J. Prod. Res. 2008, 46, 6855–6881.
4. Khoo, M.B.C. A new bivariate control chart to monitor the multivariate process mean and variance simultaneously. Qual. Eng. 2004, 17, 109–118.
5. Zhang, J.; Li, Z.; Wang, Z. A multivariate control chart for simultaneously monitoring process mean and variability. Comput. Stat. Data Anal. 2010, 54, 2244–2252.
6. Wang, K.; Yeh, A.B.; Li, B. Simultaneous monitoring of process mean vector and covariance matrix via penalized likelihood estimation. Comput. Stat. Data Anal. 2014, 78, 206–217.
7. Sabahno, H.; Castagliola, P.; Amiri, A. A variable parameters multivariate control chart for simultaneous monitoring of the process mean and variability with measurement errors. Qual. Reliab. Eng. Int. 2020, 36, 1161–1196.
8. Sabahno, H.; Castagliola, P.; Amiri, A. An adaptive variable-parameters scheme for the simultaneous monitoring of the mean and variability of an autocorrelated multivariate normal process. J. Stat. Comput. Simul. 2020, 90, 1430–1465.
9. Sabahno, H.; Amiri, A.; Castagliola, P. A new adaptive control chart for the simultaneous monitoring of the mean and variability of multivariate normal processes. Comput. Ind. Eng. 2021, 151, 106524.
10. Sabahno, H.; Khoo, M.B.C. A multivariate adaptive control chart for simultaneously monitoring of the process parameters. Commun. Stat. Simul. Comput. 2022, 1–19.
11. Sabahno, H. An adaptive max-type multivariate control chart by considering measurement errors and autocorrelation. J. Stat. Comput. Simul. 2023, 1–26.
12. Chang, S.I.; Ho, E.S. A two-stage network approach for process variance change detection and classification. Int. J. Prod. Res. 1999, 37, 1581–1599.
13. Niaki, S.T.A.; Abbasi, B. Fault diagnosis in multivariate control charts using artificial neural networks. Qual. Reliab. Eng. Int. 2005, 21, 825–840.
14. Niaki, S.T.A.; Abbasi, B. Detection and classification mean-shifts in multiattribute processes by artificial neural networks. Int. J. Prod. Res. 2008, 46, 2945–2963.
15. Cheng, C.-S.; Cheng, H.-P. Identifying the source of variance shifts in the multivariate process using neural networks and support vector machines. Expert Syst. Appl. 2008, 35, 198–206.
16. Abbasi, B. A neural network applied to estimate process capability of nonnormal processes. Expert Syst. Appl. 2009, 36, 3093–3100.
17. Salehi, M.; Bahreininejad, A.; Nakhai, I. On-line analysis of out-of-control signals in multivariate manufacturing processes using a hybrid learning-based model. Neurocomputing 2011, 74, 2083–2095.
18. Hosseinifard, S.Z.; Abdollahian, M.; Zeephongsekul, P. Application of artificial neural networks in linear profile monitoring. Expert Syst. Appl. 2011, 38, 4920–4928.
19. Weese, M.; Martinez, W.; Megahed, F.M.; Jones-Farmer, L.A. Statistical learning methods applied to process monitoring: An overview and perspective. J. Qual. Technol. 2016, 48, 4–24.
20. Escobar, C.A.; Morales-Menendez, R. Machine learning techniques for quality control in high conformance manufacturing environment. Adv. Mech. Eng. 2018, 10, 1–16.
21. Apsemidis, A.; Psarakis, S.; Moguerza, J.M. A review of machine learning kernel methods in statistical process monitoring. Comput. Ind. Eng. 2020, 142, 106376.
22. Mohd Amiruddin, A.A.A.; Zabiri, H.; Taqvi, S.A.A.; Dendena Tufa, L. Neural network applications in fault diagnosis and detection: An overview of implementations in engineering-related systems. Neural Comput. Appl. 2020, 32, 447–472.
23. Demircioglu Diren, D.; Boran, S.; Cil, I. Integration of machine learning techniques and control charts in multivariate processes. Sci. Iran. 2020, 27, 3233–3241.
24. Yeganeh, A.; Pourpanah, F.; Shadman, A. An ANN-based ensemble model for change point estimation in control charts. Appl. Soft Comput. 2021, 110, 107604.
25. Mohammadzadeh, M.; Yeganeh, A.; Shadman, A. Monitoring logistic profiles using variable sample interval approach. Comput. Ind. Eng. 2021, 158, 107438.
26. Sabahno, H.; Amiri, A. New statistical and machine learning based control charts with variable parameters for monitoring generalized linear model profiles. Comput. Ind. Eng. 2023; submitted.
27. Yeganeh, A.; Chukhrova, N.; Johannssen, A.; Fotuhi, H. A network surveillance approach using machine learning based control charts. Expert Syst. Appl. 2023, 219, 119660.
28. Yeganeh, A.; Shadman, A.; Shongwe, S.C.; Abbasi, S.A. Employing evolutionary artificial neural network in risk-adjusted monitoring of surgical performance. Neural Comput. Appl. 2023, 35, 10677–10693.
29. Yeganeh, A.; Johannssen, A.; Chukhrova, N.; Abbasi, S.A.; Pourpanah, F. Employing machine learning techniques in monitoring autocorrelated profiles. Neural Comput. Appl. 2023, 35, 16321–16340.
Table 1. ARL, SDRL. Scenario a11, p = 2.

(μ1, μ2) | ANN | SVR | RFR | ANN | SVR | RFR
σ1², σ2², σ12 = 1, 1, 0.5 (left) / 1.05, 1, 0.5 (right)
(0, 0) | 200, 200 | 200, 200 | 200, 200 | 144.92, 145.2 | 154.96, 152.82 | 161.62, 158.7
(0.1, 0) | 162, 160 | 138.66, 137.05 | 186.33, 189.21 | 122.4, 121.94 | 113.02, 116.26 | 152.91, 157.33
(0.1, 0.3) | 62.19, 60.02 | 47.61, 48.63 | 140.32, 146.05 | 52.04, 51.61 | 41.22, 40.37 | 130.6, 125.98
(0.3, 0) | 49.56, 49.55 | 35.99, 35.32 | 128.69, 125.66 | 40.8, 40.52 | 29.97, 30.13 | 116.94, 115.29
(0.5, 0.8) | 5.12, 4.5 | 3.71, 3.18 | 60.35, 61.59 | 4.76, 4.3 | 3.46, 2.9 | 59.86, 58.94
(1, 1) | 1.77, 1.17 | 1.54, 0.9 | 80.38, 78.19 | 1.83, 1.24 | 1.48, 0.82 | 74.51, 71.41
(2, 2) | 1, 0 | 1, 0 | >10,000 | 1, 0 | 1, 0 | >10,000
σ1², σ2², σ12 = 1.05, 1.05, 0.5 (left) / 1.05, 1.3, 0.5 (right)
(0, 0) | 107.32, 109.12 | 125.16, 121.37 | 146.96, 148.91 | 36.78, 36.65 | 51.55, 49.01 | 83.77, 81.68
(0.1, 0) | 90.78, 89.31 | 95.82, 94.79 | 131.3, 129.98 | 32.13, 31.66 | 44.63, 44.43 | 84.89, 83.49
(0.1, 0.3) | 43.42, 43.17 | 34.9, 34.19 | 113.39, 105.59 | 17.99, 17.23 | 19.29, 18.62 | 72.91, 73.01
(0.3, 0) | 34.78, 33.34 | 29.68, 30.57 | 104.35, 99.9 | 16.6, 16.41 | 16.84, 16.49 | 68.34, 66.56
(0.5, 0.8) | 4.38, 3.81 | 3.38, 2.95 | 56.93, 59.03 | 3.18, 2.56 | 2.87, 2.17 | 45.87, 46.56
(1, 1) | 1.78, 1.17 | 1.48, 0.87 | 73.73, 72.28 | 1.57, 0.96 | 1.36, 0.7 | 64.34, 63.55
(2, 2) | 1, 0 | 1, 0 | >10,000 | 1, 0 | 1, 0 | >10,000
σ1², σ2², σ12 = 1.2, 1.2, 0.6 (left) / 1.4, 1.4, 0.5 (right)
(0, 0) | 43.95, 42.82 | 64.05, 62.16 | 90.6, 87.93 | 9.8, 9.55 | 19.51, 18.17 | 48.4, 45.87
(0.1, 0) | 39.19, 37.37 | 52.2, 51.32 | 87.75, 90.61 | 9.13, 8.71 | 17.15, 16.7 | 47.2, 46.22
(0.1, 0.3) | 20.18, 19.7 | 22.4, 22.59 | 71.62, 69.17 | 6.78, 6.28 | 10.3, 9.88 | 41.01, 39.85
(0.3, 0) | 17.46, 16.44 | 17.58, 17.42 | 70.123, 69.91 | 5.88, 5.26 | 8.56, 8.3 | 40.36, 38.33
(0.5, 0.8) | 3.37, 2.84 | 2.8, 2.26 | 47.63, 46.9 | 2.25, 1.65 | 2.29, 1.78 | 37.1, 34.78
(1, 1) | 1.6, 0.97 | 1.38, 0.73 | 64.8, 70.47 | 1.35, 0.7 | 1.31, 0.64 | 58.83, 57.21
(2, 2) | 1, 0 | 1, 0 | >10,000 | 1, 0 | 1, 0 | >10,000
σ1², σ2², σ12 = 1.4, 1.4, 0.75 (left) / 3, 3, 0.5 (right)
(0, 0) | 17.5, 16.75 | 32.32, 31.66 | 61.28, 61.73 | 1.23, 0.53 | 2.79, 2.19 | 96.38, 94.14
(0.1, 0) | 16.19, 15.61 | 27.22, 26.43 | 57.54, 56.92 | 1.22, 0.55 | 2.69, 2.12 | 94.52, 93.75
(0.1, 0.3) | 10.5, 9.75 | 13.93, 13.28 | 53.38, 52.48 | 1.2, 0.5 | 2.46, 1.91 | 100.8, 94.77
(0.3, 0) | 9.3, 9.08 | 12.07, 11.34 | 51.55, 49.99 | 1.21, 0.51 | 2.42, 1.85 | 96.16, 96.41
(0.5, 0.8) | 2.75, 2.19 | 2.34, 2.02 | 40.78, 38.25 | 1.09, 0.33 | 1.51, 0.84 | 167.92, 165.58
(1, 1) | 1.51, 0.876 | 1.37, 0.688 | 58.84, 60.1 | 1.04, 0.2 | 1.15, 0.41 | 325.44, 318.73
(2, 2) | 1, 0 | 1, 0 | >10,000 | 1, 0 | 1, 0 | >10,000
Table 2. ARL, SDRL. Scenario a11, p = 3.

(μ1, μ2, μ3) | ANN | SVR | RFR | ANN | SVR | RFR
σ1², σ2², σ3², Cov = 1, 1, 1, 0.5 (left) / 1.05, 1, 1, 0.5 (right)
(0, 0, 0) | 200, 200 | 200, 200 | 200, 200 | 153.9, 156.8 | 158.9, 160 | 180.2, 179.7
(0.1, 0, 0.1) | 150.8, 145.7 | 147.5, 147.7 | 181.6, 185.1 | 125.6, 124.1 | 113.3, 114.2 | 164.8, 170.4
(0.1, 0.3, 0.3) | 53.19, 53.38 | 43.97, 43.62 | 96.25, 91.14 | 45.96, 44.98 | 37.58, 37.34 | 87.61, 86.93
(0.3, 0, 0) | 53.66, 52.37 | 44.3, 44.42 | 95.67, 91.49 | 44.13, 42.38 | 35.45, 35.58 | 92.73, 93.9
(0.5, 0.8, 0.5) | 5.51, 5.03 | 5.28, 4.65 | 27.19, 26.22 | 5.26, 4.745 | 4.81, 4.126 | 27.98, 27.18
(1, 1, 1) | 1.73, 1.198 | 1.69, 1.06 | 21.56, 20.9 | 1.67, 1 | 1.65, 1.024 | 21.52, 19.81
(2, 2, 2) | 1.02, 0.15 | 1, 0.03 | 33.14, 33.19 | 1.01, 0.11 | 1, 0 | 33.36, 33.13
σ1², σ2², σ3², Cov = 1.05, 1.05, 1.05, 0.5 (left) / 1.05, 1.3, 1.05, 0.5 (right)
(0, 0, 0) | 94.39, 93.96 | 105.5, 106.1 | 135.2, 133.7 | 34.38, 34.84 | 43.38, 41.55 | 81.86, 83.72
(0.1, 0, 0.1) | 79.17, 78.14 | 80.75, 78.77 | 133.9, 128.4 | 31.71, 30.83 | 36.21, 35.35 | 77.94, 71.35
(0.1, 0.3, 0.3) | 33.21, 32.77 | 28.43, 28.67 | 75, 78.04 | 17.3, 16.7 | 15.98, 15.44 | 58.63, 59.42
(0.3, 0, 0) | 33.71, 32.65 | 28.68, 27.97 | 83.53, 87.01 | 18.44, 18.62 | 16.73, 15.74 | 55.94, 54.5
(0.5, 0.8, 0.5) | 4.85, 4.32 | 4.28, 3.8 | 28.38, 27.23 | 3.77, 3.2 | 3.4, 2.847 | 27.64, 26.99
(1, 1, 1) | 1.598, 0.99 | 1.53, 0.89 | 22.56, 21.6 | 1.55, 0.93 | 1.38, 0.71 | 21.04, 20.5
(2, 2, 2) | 1.01, 0.09 | 1, 0.02 | 27.36, 25.29 | 1, 0.07 | 1, 0.02 | 19.81, 19.74
σ1², σ2², σ3², Cov = 1.2, 1.2, 1.2, 0.6 (left) / 1.4, 1.4, 1.05, 0.5 (right)
(0, 0, 0) | 34.11, 33.26 | 44.19, 43.62 | 79.7, 78.44 | 10.17, 9.71 | 15.38, 14.77 | 34.67, 35.8
(0.1, 0, 0.1) | 28.58, 27.37 | 37.97, 38.48 | 73.85, 71.42 | 9.56, 9.17 | 14.02, 13.49 | 33.19, 31.55
(0.1, 0.3, 0.3) | 16.5, 15.76 | 15.85, 14.85 | 59, 59.56 | 7.16, 6.69 | 8.18, 7.43 | 30.1, 29.57
(0.3, 0, 0) | 16.31, 15.65 | 16.3, 15.62 | 57.71, 51.46 | 6.82, 6.53 | 7.86, 7.21 | 30.88, 31.9
(0.5, 0.8, 0.5) | 3.72, 3.16 | 3.14, 2.56 | 27.61, 26.05 | 2.65, 2.07 | 2.44, 1.8 | 22.63, 21.41
(1, 1, 1) | 1.51, 0.89 | 1.43, 0.77 | 21.26, 21.87 | 1.34, 0.68 | 0.68, 0.6 | 17.97, 18.27
(2, 2, 2) | 1, 0.06 | 1, 0 | 20.33, 19.3 | 1, 0.02 | 1, 0 | 16.29, 15.34
σ1², σ2², σ3², Cov = 1.4, 1.4, 1.05, 0.75 (left) / 3, 3, 1.4, 0.5 (right)
(0, 0, 0) | 45.86, 44.37 | 52.07, 50.04 | 104.4, 105.9 | 1.14, 0.41 | 1.92, 1.35 | 4.28, 3.87
(0.1, 0, 0.1) | 42.02, 40.68 | 43.13, 42.54 | 99.6, 105.9 | 1.14, 0.41 | 1.87, 1.26 | 4.26, 3.69
(0.1, 0.3, 0.3) | 20.61, 19.97 | 18.29, 17.97 | 66.77, 66.11 | 1.14, 0.4 | 1.67, 1.03 | 4.68, 4.35
(0.3, 0, 0) | 20.79, 20.67 | 18.76, 18.69 | 67.56, 63.59 | 1.12, 0.36 | 1.77, 1.12 | 4.76, 4.25
(0.5, 0.8, 0.5) | 3.97, 3.49 | 3.6, 3.04 | 28.77, 28.26 | 1.05, 0.23 | 1.27, 0.58 | 7.36, 6.835
(1, 1, 1) | 1.6, 0.99 | 1.55, 0.9 | 21.1, 19.75 | 1.01, 0.13 | 1.05, 0.23 | 21.84, 20.87
(2, 2, 2) | 1, 0.08 | 1, 0.02 | 23.68, 22.4 | 1, 0 | 1, 0 | 54.86, 55.44
Table 3. ARL, SDRL. Scenario a11, p = 4.

(μ1, μ2, μ3, μ4) | ANN | SVR | RFR | ANN | SVR | RFR
σ1², σ2², σ3², σ4², Cov = 1, 1, 1, 1, 0.5 (left) / 1.05, 1, 1, 1, 0.5 (right)
(0, 0, 0, 0) | 200, 200 | 200, 200 | 200, 200 | 152.8, 154.7 | 175.2, 177 | 180.2, 174.3
(0.1, 0, 0.1, 0) | 167.8, 173.3 | 138.6, 138 | 169.2, 171.5 | 132.8, 134.3 | 125, 120.6 | 152.7, 153.2
(0.1, 0.3, 0.3, 0.1) | 79.52, 80.04 | 45.18, 42.7 | 101.4, 105.8 | 66.18, 65.77 | 40.66, 40.88 | 92.34, 92.61
(0.3, 0, 0, 0.3) | 51.29, 50.1 | 27.61, 26.23 | 77.29, 73.17 | 43.4, 42.92 | 26.25, 25.07 | 71.88, 68.42
(0.5, 0.8, 0.5, 0.8) | 5.4, 5.05 | 3.41, 2.83 | 19.91, 19.3 | 5.14, 4.65 | 3.31, 2.79 | 20.03, 19.75
(1, 1, 1, 1) | 1.84, 1.26 | 1.5, 0.86 | 17.81, 17.81 | 1.83, 1.22 | 1.5, 0.88 | 17.47, 17.37
(2, 2, 2, 2) | 1, 0 | 1, 0.02 | >10,000 | 1, 0 | 1, 0 | >10,000
σ1², σ2², σ3², σ4², Cov = 1.05, 1.05, 1.05, 1.05, 0.5 (left) / 1.05, 1.3, 1.05, 1.3, 0.5 (right)
(0, 0, 0, 0) | 80.29, 78.62 | 103.8, 104.2 | 118.4, 118.3 | 16.56, 15.96 | 29.58, 27.9 | 42.75, 42.33
(0.1, 0, 0.1, 0) | 69.13, 69.04 | 78.11, 79.96 | 108.2, 117.8 | 15.13, 14.66 | 26.21, 26.3 | 41.4, 42.44
(0.1, 0.3, 0.3, 0.1) | 40.03, 39.38 | 28.81, 27.94 | 71.83, 64.33 | 11.11, 10.62 | 13.11, 12.92 | 32.93, 31.47
(0.3, 0, 0, 0.3) | 28.05, 28.22 | 18.5, 17.23 | 56.97, 56.89 | 9.82, 9.27 | 9.6, 9.15 | 29.23, 28.9
(0.5, 0.8, 0.5, 0.8) | 4.57, 4.03 | 3.04, 2.52 | 21.23, 20.74 | 2.97, 2.37 | 2.38, 1.85 | 18.68, 18.81
(1, 1, 1, 1) | 1.75, 1.12 | 1.4, 0.75 | 19.71, 19.52 | 1.53, 0.92 | 1.32, 0.66 | 24.26, 25.35
(2, 2, 2, 2) | 1, 0 | 1, 0 | >10,000 | 1, 0 | 1, 0 | >10,000
σ1², σ2², σ3², σ4², Cov = 1.2, 1.2, 1.2, 1.2, 0.6 (left) / 1.4, 1.4, 1.05, 1.05, 0.5 (right)
(0, 0, 0, 0) | 25.26, 24.02 | 47.95, 46.88 | 62.73, 59.53 | 10.38, 9.69 | 20.91, 20.63 | 28.59, 28.24
(0.1, 0, 0.1, 0) | 24.18, 23.71 | 38.09, 38.59 | 54.33, 55.11 | 10.39, 9.58 | 18.75, 18.68 | 29.28, 27.89
(0.1, 0.3, 0.3, 0.1) | 16.12, 15.35 | 17.27, 16.97 | 42.41, 41.98 | 7.93, 7.43 | 10.35, 9.55 | 25.98, 25.84
(0.3, 0, 0, 0.3) | 13.12, 12.61 | 11.73, 10.91 | 39.12, 42.09 | 6.86, 6.32 | 7.58, 6.97 | 22.48, 23.2
(0.5, 0.8, 0.5, 0.8) | 3.44, 2.92 | 2.59, 2.05 | 21, 20.11 | 2.64, 2.08 | 2.21, 1.66 | 16.46, 16.63
(1, 1, 1, 1) | 1.6, 0.95 | 1.37, 0.71 | 23.52, 21.43 | 1.47, 0.84 | 1.24, 0.56 | 23.3, 23.02
(2, 2, 2, 2) | 1, 0 | 1, 0 | >10,000 | 1, 0 | 1, 0 | >10,000
σ1², σ2², σ3², σ4², Cov = 1.4, 1.4, 1.05, 1.05, 0.75 (left) / 3, 3, 1.4, 1.4, 0.5 (right)
(0, 0, 0, 0) | 135.3, 130.2 | 107.6, 108.6 | 166.1, 162.6 | 1.13, 0.39 | 2.28, 1.65 | 2.69, 2.12
(0.1, 0, 0.1, 0) | 117.1, 118.2 | 79.76, 79.14 | 145.8, 141.2 | 1.14, 0.41 | 2.27, 1.67 | 2.82, 2.31
(0.1, 0.3, 0.3, 0.1) | 55.12, 54.93 | 29.68, 29.4 | 83.9, 80.92 | 1.11, 0.36 | 2.12, 1.57 | 2.85, 2.23
(0.3, 0, 0, 0.3) | 38.91, 38.39 | 22.2, 22.06 | 68.92, 69.37 | 1.12, 0.36 | 1.87, 1.32 | 2.98, 2.53
(0.5, 0.8, 0.5, 0.8) | 4.78, 4.327 | 3.34, 2.78 | 22.08, 21.25 | 1.06, 0.26 | 1.3, 0.65 | 4.34, 3.7
(1, 1, 1, 1) | 1.88, 1.32 | 1.54, 0.93 | 20.19, 20.95 | 1.02, 0.17 | 1.07, 0.28 | 9.06, 8.13
(2, 2, 2, 2) | 1, 0 | 1, 0.02 | >10,000 | 1, 0 | 1, 0 | >10,000
Table 4. ARL, SDRL. Scenario a12, p = 2.

(μ1, μ2) | ANN | SVR | RFR | ANN | SVR | RFR
σ1², σ2², σ12 = 1, 1, 0.5 (left) / 1.05, 1, 0.5 (right)
(0, 0) | 200, 200 | 200, 200 | 200, 200 | 155.48, 153.15 | 165.01, 161.56 | 156.21, 152.98
(0.1, 0) | 154.13, 167.78 | 151.92, 149.84 | 158.22, 155.65 | 125.77, 118.97 | 119.68, 124.59 | 129.18, 121.49
(0.1, 0.3) | 52.94, 50.03 | 43.71, 43.53 | 56.77, 56.35 | 46.37, 46.44 | 42.12, 41.52 | 56.99, 55.06
(0.3, 0) | 37.56, 39.89 | 33.85, 32.85 | 47.98, 44.91 | 34.51, 34.49 | 28.73, 27.4 | 41.32, 39.63
(0.5, 0.8) | 3.78, 3.23 | 3.38, 2.88 | 4.5, 3.95 | 3.69, 3.19 | 3.28, 2.72 | 4.55, 4.01
(1, 1) | 1.53, 0.91 | 1.4, 0.76 | 1.75, 1.13 | 1.5, 0.87 | 1.43, 0.78 | 1.77, 1.2
(2, 2) | 1, 0 | 1, 0 | 1.09, 0.3 | 1, 0 | 1, 0 | 1.09, 0.32
σ1², σ2², σ12 = 1.05, 1.05, 0.5 (left) / 1.05, 1.3, 0.5 (right)
(0, 0) | 123.73, 124.92 | 141.1, 142.5 | 120.62, 118.66 | 45.01, 44.81 | 66.29, 65.08 | 47.93, 48.98
(0.1, 0) | 102.23, 102.65 | 98.54, 95.45 | 104.1, 104.57 | 37.23, 36.58 | 56.49, 55.84 | 43.89, 43.52
(0.1, 0.3) | 40.99, 41.03 | 35.92, 36.23 | 46.84, 47.33 | 20.49, 19.86 | 22.62, 21.77 | 22.76, 22.35
(0.3, 0) | 30.97, 30.54 | 27.23, 26.92 | 37, 35.01 | 17.84, 17.01 | 20.25, 19.52 | 21.47, 20.87
(0.5, 0.8) | 3.7, 3.15 | 3.15, 2.56 | 4.28, 3.82 | 3.25, 2.75 | 2.83, 2.19 | 3.94, 3.32
(1, 1) | 1.47, 0.83 | 1.41, 0.76 | 1.74, 1.08 | 1.47, 0.8 | 1.39, 0.75 | 1.77, 1.18
(2, 2) | 1, 0 | 1, 0 | 1.08, 0.3 | 1, 0 | 1, 0 | 1.08, 0.29
σ1², σ2², σ12 = 1.2, 1.2, 0.6 (left) / 1.4, 1.4, 0.5 (right)
(0, 0) | 51.69, 50.41 | 80.46, 77.9 | 58.56, 56.6 | 11.5, 11.04 | 29.25, 29.02 | 13.79, 13.32
(0.1, 0) | 46.38, 47.18 | 62.89, 60.51 | 48.64, 49.39 | 10.32, 9.458 | 24.92, 25.17 | 13.44, 12.6
(0.1, 0.3) | 23.45, 23.36 | 25.63, 25.32 | 26.47, 25.15 | 7.87, 7.24 | 14.1, 13.59 | 9.72, 9.48
(0.3, 0) | 20.17, 19.56 | 21.13, 19.76 | 21.94, 21.43 | 7.12, 6.69 | 11.42, 11.33 | 8.58, 7.85
(0.5, 0.8) | 3.33, 2.8 | 3.05, 2.51 | 3.87, 3.39 | 2.52, 1.95 | 2.66, 2.06 | 3.08, 2.42
(1, 1) | 1.53, 0.91 | 1.41, 0.75 | 1.74, 1.13 | 1.42, 0.76 | 1.33, 0.67 | 1.66, 1.05
(2, 2) | 1, 0 | 1, 0 | 1.09, 0.31 | 1, 0 | 1, 0 | 1.08, 0.29
σ1², σ2², σ12 = 1.4, 1.4, 0.75 (left) / 3, 3, 0.5 (right)
(0, 0) | 21.33, 20.72 | 46.28, 46.25 | 25.54, 24.71 | 1.2, 0.49 | 4.14, 3.6 | 1.49, 0.87
(0.1, 0) | 19.5, 19.1 | 39.61, 38.78 | 24.42, 23.76 | 1.17, 0.46 | 4.08, 3.53 | 1.48, 0.84
(0.1, 0.3) | 12.64, 11.76 | 18.49, 17.03 | 14.88, 14.3 | 1.16, 0.42 | 3.43, 2.9 | 1.47, 0.84
(0.3, 0) | 11.19, 10.53 | 15.26, 14.42 | 13.72, 13.02 | 1.16, 0.45 | 3.36, 2.82 | 1.51, 0.89
(0.5, 0.8) | 2.96, 2.36 | 2.79, 2.18 | 3.43, 2.97 | 1.08, 0.3 | 1.85, 1.2 | 1.27, 0.59
(1, 1) | 1.49, 0.86 | 1.43, 0.76 | 1.77, 1.18 | 1.03, 0.18 | 1.31, 0.66 | 1.13, 0.39
(2, 2) | 1, 0 | 1, 0 | 1.1, 0.32 | 1, 0 | 1, 0.3 | 1.02, 0.15
Table 5. ARL, SDRL. Scenario a12, p = 3.

(μ1, μ2, μ3) | ANN | SVR | RFR | ANN | SVR | RFR
σ1², σ2², σ3², Cov = 1, 1, 1, 0.5 (left) / 1.05, 1, 1, 0.5 (right)
(0, 0, 0) | 200, 200 | 200, 200 | 200, 200 | 162.9, 170.9 | 167.6, 163 | 177.2, 186
(0.1, 0, 0.1) | 148.9, 145.8 | 141.3, 136.4 | 145.8, 148.7 | 123.2, 125.3 | 119.5, 118.7 | 132.2, 131.8
(0.1, 0.3, 0.3) | 44.9, 45.35 | 38.54, 37.57 | 52.27, 52.87 | 41.05, 38.39 | 34.83, 34.83 | 44, 44.2
(0.3, 0, 0) | 44.83, 44.09 | 36.98, 35.26 | 55.52, 53.86 | 38.93, 37.94 | 33.9, 32.46 | 43.5, 43.37
(0.5, 0.8, 0.5) | 4.55, 3.9 | 3.93, 3.21 | 5.74, 4.92 | 4.44, 4.04 | 3.9, 3.55 | 5.53, 4.97
(1, 1, 1) | 1.47, 0.81 | 1.42, 0.79 | 1.69, 1 | 1.51, 0.87 | 1.4, 0.73 | 1.78, 1.17
(2, 2, 2) | 1, 0 | 1, 0 | 1.05, 0.23 | 1, 0 | 1, 0 | 1.06, 0.25
σ1², σ2², σ3², Cov = 1.05, 1.05, 1.05, 0.5 (left) / 1.05, 1.3, 1.05, 0.5 (right)
(0, 0, 0) | 109.1, 110.3 | 117.8, 117.4 | 114.2, 120.1 | 46.17, 45.71 | 65.58, 67.24 | 48.03, 46.07
(0.1, 0, 0.1) | 87.09, 88.44 | 93.44, 91.19 | 90.04, 94.33 | 39.46, 39.27 | 51.23, 49.16 | 40.36, 41.61
(0.1, 0.3, 0.3) | 33.59, 32.73 | 30.85, 31.25 | 37.79, 36.54 | 19.72, 19.87 | 20.69, 21.13 | 21.6, 21.77
(0.3, 0, 0) | 33.12, 33.06 | 29.23, 29.36 | 36.43, 35.96 | 20.39, 19.7 | 21.64, 21.35 | 21.92, 20.55
(0.5, 0.8, 0.5) | 4.37, 3.84 | 3.81, 3.2 | 5.17, 4.93 | 3.76, 3.27 | 3.49, 2.82 | 4.42, 3.87
(1, 1, 1) | 1.49, 0.88 | 1.35, 0.7 | 1.76, 1.16 | 1.47, 0.87 | 1.38, 0.71 | 1.7, 1.11
(2, 2, 2) | 1, 0 | 1, 0 | 1.06, 0.24 | 1, 0 | 1, 0 | 1.07, 0.29
σ1², σ2², σ3², Cov = 1.2, 1.2, 1.2, 0.6 (left) / 1.4, 1.4, 1.05, 0.5 (right)
(0, 0, 0) | 42.13, 41.58 | 68.06, 67.12 | 44.54, 42.65 | 12.74, 12.42 | 25.15, 24.26 | 13.02, 12.54
(0.1, 0, 0.1) | 37.37, 35.82 | 54.46, 55.72 | 39, 39.25 | 12.14, 12.15 | 22.95, 22.95 | 12.32, 12.16
(0.1, 0.3, 0.3) | 20, 19.04 | 20.41, 20.32 | 20.24, 19.76 | 8.43, 7.74 | 13.01, 12.13 | 9.54, 8.17
(0.3, 0, 0) | 19.19, 19.2 | 20.66, 19.71 | 20.83, 20.14 | 8.34, 7.62 | 12.11, 11.96 | 8.66, 8.44
(0.5, 0.8, 0.5) | 3.83, 3.29 | 3.32, 2.82 | 4.42, 4.03 | 3, 2.4 | 2.9, 2.38 | 3.36, 2.76
(1, 1, 1) | 1.49, 0.9 | 1.38, 0.73 | 1.69, 1.1 | 1.37, 0.73 | 1.32, 0.63 | 1.52, 0.88
(2, 2, 2) | 1, 0 | 1, 0 | 1.06, 0.25 | 1, 0 | 1, 0 | 1.06, 0.27
σ1², σ2², σ3², Cov = 1.4, 1.4, 1.05, 0.75 (left) / 3, 3, 1.4, 0.5 (right)
(0, 0, 0) | 54.68, 54.76 | 69.61, 72.43 | 56.6, 55.83 | 1.2, 0.48 | 3.72, 3.25 | 1.36, 0.72
(0.1, 0, 0.1) | 48.04, 48.68 | 52.62, 50.82 | 49.07, 49.57 | 1.19, 0.48 | 3.53, 3.1 | 1.36, 0.72
(0.1, 0.3, 0.3) | 21.29, 21.31 | 20.82, 20.21 | 24.63, 23.86 | 1.17, 0.45 | 3.02, 2.5 | 1.3, 0.62
(0.3, 0, 0) | 22.63, 22.02 | 19.92, 19.11 | 24.3, 24.31 | 1.19, 0.47 | 3.06, 2.42 | 1.34, 0.7
(0.5, 0.8, 0.5) | 3.84, 3.32 | 3.39, 2.85 | 4.59, 4.16 | 1.1, 0.34 | 1.82, 1.23 | 1.25, 0.55
(1, 1, 1) | 1.53, 0.89 | 1.47, 0.84 | 1.74, 1.2 | 1.03, 0.19 | 1.21, 0.52 | 1.08, 0.28
(2, 2, 2) | 1, 0 | 1, 0.02 | 1.08, 0.3 | 1, 0 | 1, 0 | 1.01, 0.11
Table 6. ARL, SDRL. Scenario a12, p = 4.

(μ1, μ2, μ3, μ4) | ANN | SVR | RFR | ANN | SVR | RFR
σ1², σ2², σ3², σ4², Cov = 1, 1, 1, 1, 0.5 (left) / 1.05, 1, 1, 1, 0.5 (right)
(0, 0, 0, 0) | 200, 200 | 200, 200 | 200, 200 | 163.9, 158.3 | 174.04, 169.22 | 160.2, 170.4
(0.1, 0, 0.1, 0) | 148, 149.2 | 133.58, 131.92 | 152.2, 159.5 | 127.9, 126.5 | 121.35, 122.85 | 127.7, 126.7
(0.1, 0.3, 0.3, 0.1) | 52.81, 51.7 | 44.38, 43.31 | 58.32, 56.33 | 46.43, 44.73 | 37.04, 36.51 | 54.4, 54.9
(0.3, 0, 0, 0.3) | 34.3, 34.65 | 25.74, 25.18 | 40.84, 40.07 | 29.85, 29.68 | 23.79, 23.6 | 35.56, 36.57
(0.5, 0.8, 0.5, 0.8) | 3.72, 3.2 | 3.21, 2.67 | 4.79, 4.23 | 3.73, 3.19 | 3.15, 2.57 | 4.47, 3.93
(1, 1, 1, 1) | 1.58, 0.98 | 1.44, 0.78 | 1.83, 1.19 | 1.53, 0.89 | 1.4, 0.74 | 1.79, 1.203
(2, 2, 2, 2) | 1, 0 | 1, 0 | 1.02, 0.15 | 1, 0 | 1, 0 | 1.02, 0.16
σ1², σ2², σ3², σ4², Cov = 1.05, 1.05, 1.05, 1.05, 0.5 (left) / 1.05, 1.3, 1.05, 1.3, 0.5 (right)
(0, 0, 0, 0) | 90.44, 90.41 | 113.47, 112.57 | 92.41, 100.2 | 19.97, 19.34 | 38.03, 37.17 | 22.14, 21.16
(0.1, 0, 0.1, 0) | 73.3, 72.54 | 80.82, 80.86 | 75.77, 81.53 | 18.39, 18.98 | 31.29, 30.39 | 20.76, 20.77
(0.1, 0.3, 0.3, 0.1) | 32.89, 34.2 | 29.7, 30.17 | 35.04, 36.03 | 11.77, 11.75 | 15.89, 15.8 | 13.84, 12.81
(0.3, 0, 0, 0.3) | 21.97, 21.22 | 18.94, 18.27 | 26.83, 27.45 | 9.25, 8.63 | 10.53, 9.91 | 10.68, 10.29
(0.5, 0.8, 0.5, 0.8) | 3.3, 2.62 | 2.9, 2.38 | 4.18, 3.6 | 2.51, 1.92 | 2.43, 1.88 | 3.2, 2.61
(1, 1, 1, 1) | 1.47, 0.8 | 1.37, 0.72 | 1.8, 1.17 | 1.36, 0.7 | 1.31, 0.62 | 1.65, 1.05
(2, 2, 2, 2) | 1, 0 | 1, 0 | 1.02, 0.17 | 1, 0 | 1, 0 | 1.03, 0.19
σ1², σ2², σ3², σ4², Cov = 1.2, 1.2, 1.2, 1.2, 0.6 (left) / 1.4, 1.4, 1.05, 1.05, 0.5 (right)
(0, 0, 0, 0) | 31.71, 31.05 | 55.85, 54.67 | 36.09, 33.88 | 13.98, 13.43 | 27.48, 27.52 | 15.11, 15.02
(0.1, 0, 0.1, 0) | 29.45, 28.48 | 45.9, 47.48 | 28.89, 28.02 | 12.06, 11.21 | 22.92, 22.49 | 14.42, 13.81
(0.1, 0.3, 0.3, 0.1) | 16.28, 15.78 | 19.25, 18.42 | 18.4, 17.29 | 8.66, 8.11 | 12.33, 12.17 | 10.21, 10.01
(0.3, 0, 0, 0.3) | 11.75, 11.49 | 13.26, 12.59 | 14.95, 14.45 | 6.87, 6.63 | 9.14, 8.6 | 8.51, 7.64
(0.5, 0.8, 0.5, 0.8) | 2.75, 2.21 | 2.59, 2.08 | 3.56, 3.1 | 2.28, 1.69 | 2.3, 1.69 | 2.97, 2.32
(1, 1, 1, 1) | 1.46, 0.81 | 1.36, 0.69 | 1.68, 1.06 | 1.31, 0.65 | 1.3, 0.61 | 1.54, 0.88
(2, 2, 2, 2) | 1, 0 | 1, 0 | 1.04, 0.21 | 1, 0 | 1, 0 | 1.05, 0.24
σ1², σ2², σ3², σ4², Cov = 1.4, 1.4, 1.05, 1.05, 0.75 (left) / 3, 3, 1.4, 1.4, 0.5 (right)
(0, 0, 0, 0) | 111.4, 112.1 | 96.52, 96.41 | 120.3, 118.1 | 1.15, 0.41 | 2.99, 2.39 | 1.3, 0.63
(0.1, 0, 0.1, 0) | 92.3, 90.43 | 79.73, 80.99 | 106.6, 99.79 | 1.18, 0.46 | 2.95, 2.49 | 1.3, 0.63
(0.1, 0.3, 0.3, 0.1) | 35.14, 33.45 | 28.81, 28.11 | 41.53, 40.01 | 1.16, 0.42 | 2.64, 2.01 | 1.27, 0.57
(0.3, 0, 0, 0.3) | 25.72, 25.52 | 21.05, 20.43 | 30.23, 28.25 | 1.17, 0.45 | 2.37, 1.84 | 1.26, 0.59
(0.5, 0.8, 0.5, 0.8) | 3.55, 3.06 | 3.11, 2.68 | 4.1, 3.57 | 1.06, 0.26 | 1.44, 0.80 | 1.17, 0.44
(1, 1, 1, 1) | 1.59, 0.95 | 1.5, 0.84 | 1.86, 1.25 | 1.02, 0.16 | 1.12, 0.37 | 1.07, 0.27
(2, 2, 2, 2) | 1, 0.02 | 1, 0 | 1.03, 0.17 | 1, 0 | 1, 0 | 1.03, 0.17
Table 7. ARL, SDRL. Scenario a21, p = 2.

(μ1, μ2) | ANN | SVR | RFR | ANN | SVR | RFR
σ1², σ2², σ12 = 1, 1, 0.5 (left) / 1.05, 1, 0.5 (right)
(0, 0) | 200, 200 | 200, 200 | 200, 200 | 186.2, 187.6 | 142, 138.4 | 176.7, 177.4
(0.1, 0) | 231.6, 222.2 | 162.3, 167.9 | 181.6, 174.8 | 212.4, 221.1 | 124.5, 121.7 | 181.7, 179.7
(0.1, 0.3) | 39.04, 40.21 | 49.88, 49.47 | 50.44, 49.63 | 35.28, 34.55 | 40.77, 41.27 | 50.68, 48.63
(0.3, 0) | 274.8, 282.8 | 116.7, 117.2 | 165.7, 171.1 | 225.2, 225.2 | 86.65, 84.26 | 168, 170.8
(0.5, 0.8) | 5.17, 4.66 | 6.08, 5.38 | 12.31, 12.28 | 5, 4.38 | 5.52, 4.93 | 12.26, 11.75
(1, 1) | 2.28, 1.74 | 2.39, 1.76 | 10.02, 8.69 | 2.17, 1.63 | 2.29, 1.69 | 10.06, 9.615
(2, 2) | 1, 0.09 | 1, 0.08 | 11.04, 10.03 | 1, 0.08 | 1, 0.05 | 11.28, 10.79
σ1², σ2², σ12 = 1.05, 1.05, 0.5 (left) / 1.05, 1.3, 0.5 (right)
(0, 0) | 150, 150.2 | 112.4, 111.3 | 178.2, 170.9 | 56.75, 53.19 | 35.34, 34.7 | 135.6, 133.3
(0.1, 0) | 167.8, 165.7 | 91.67, 88.75 | 172.5, 167.6 | 63.89, 60 | 29.74, 30.07 | 108.6, 110.8
(0.1, 0.3) | 31.64, 31.06 | 32.33, 32.36 | 46.03, 47.7 | 17.87, 17.33 | 13.91, 12.76 | 47.6, 45.41
(0.3, 0) | 181.1, 185.6 | 66.24, 66.01 | 145, 139.8 | 70.16, 68.29 | 24.4, 23.79 | 98.41, 93.14
(0.5, 0.8) | 4.89, 4.46 | 4.72, 4.41 | 12, 12.29 | 3.99, 3.49 | 3.11, 2.55 | 12.31, 11.62
(1, 1) | 2.16, 1.6 | 2.18, 1.61 | 10.49, 9.53 | 2.01, 1.386 | 1.77, 1.172 | 11.4, 11.41
(2, 2) | 1, 0.07 | 1, 0.05 | 11.06, 10.5 | 1, 0.08 | 1, 0.03 | 14.01, 13.45
σ1², σ2², σ12 = 1.2, 1.2, 0.6 (left) / 1.4, 1.4, 0.5 (right)
(0, 0) | 82.78, 81.39 | 40.88, 41.09 | 112.5, 112.3 | 21.28, 20.96 | 9.24, 8.37 | 91.04, 88.54
(0.1, 0) | 84.68, 82.83 | 35.39, 34.28 | 102, 101.2 | 22.17, 22.54 | 8.59, 7.91 | 86.13, 84.34
(0.1, 0.3) | 22.31, 22.49 | 15.23, 14.85 | 35.87, 33.51 | 8.81, 8.45 | 5.02, 4.51 | 41.04, 42.17
(0.3, 0) | 85.71, 88.17 | 27.79, 27.52 | 102, 106.5 | 21.26, 21.9 | 7.29, 6.61 | 74.15, 69.47
(0.5, 0.8) | 3.84, 3.17 | 3.24, 2.67 | 9.72, 9.28 | 2.65, 2.13 | 1.91, 1.35 | 12.47, 12.18
(1, 1) | 1.88, 1.29 | 1.79, 1.21 | 9.06, 8.25 | 1.59, 0.95 | 1.34, 0.66 | 12.14, 12.33
(2, 2) | 1.01, 0.09 | 1, 0.06 | 10.77, 10.35 | 1, 0.06 | 1, 0.03 | 17.15, 16.48
σ1², σ2², σ12 = 1.4, 1.4, 0.75 (left) / 3, 3, 0.5 (right)
(0, 0) | 42.26, 39.87 | 16.45, 15.6 | 75.73, 71.09 | 1.91, 1.3 | 1.2, 0.48 | 105.9, 108.5
(0.1, 0) | 41.82, 39.94 | 15.16, 14.77 | 68.19, 70.49 | 1.98, 1.4 | 1.22, 0.53 | 92.67, 90.9
(0.1, 0.3) | 13.78, 14.03 | 7.58, 7.1 | 26.01, 24.69 | 1.49, 0.86 | 1.13, 0.38 | 57.07, 56.54
(0.3, 0) | 40.9, 40.55 | 12.26, 11.34 | 65.37, 64.64 | 1.97, 1.39 | 1.17, 0.45 | 84.41, 78.99
(0.5, 0.8) | 3.02, 2.45 | 2.39, 1.84 | 8.55, 8.2 | 1.17, 0.47 | 1.03, 0.19 | 23.51, 23.63
(1, 1) | 1.67, 1.04 | 1.54, 0.9 | 7.81, 7.57 | 1.11, 0.35 | 1.01, 0.12 | 22.48, 21.63
(2, 2) | 1, 0.08 | 1, 0.07 | 10.28, 9.77 | 1, 0.08 | 1, 0 | 59.71, 60.83
Table 8. ARL, SDRL. Scenario a22, p = 2.

| (μ1, μ2) | ANN | SVR | RFR | ANN | SVR | RFR |
|---|---|---|---|---|---|---|
| | **σ1², σ2², σ12 = 1, 1, 0.5** | | | **σ1², σ2², σ12 = 1.05, 1, 0.5** | | |
| (0, 0) | 200, 200 | 200, 200 | 200, 200 | 123.77, 120.89 | 112.98, 110.77 | 146.04, 137.69 |
| (0.1, 0) | 175.12, 169.66 | 155.3, 149.36 | 122.8, 117.12 | 107.61, 104.76 | 87.61, 86.69 | 100.7, 94.8 |
| (0.1, 0.3) | 55.5, 53.37 | 49.26, 48.86 | 37.66, 35.35 | 40.88, 40.46 | 32.22, 32.18 | 30.83, 29.54 |
| (0.3, 0) | 68.36, 69.04 | 93.47, 91.08 | 33.89, 32.93 | 49.6, 46.63 | 57.54, 59.13 | 30.19, 30.05 |
| (0.5, 0.8) | 4.73, 4.28 | 5.52, 5.12 | 3.47, 2.97 | 4.07, 3.59 | 4.19, 3.63 | 3.27, 2.64 |
| (1, 1) | 1.94, 1.35 | 2.1, 1.51 | 1.39, 0.72 | 1.77, 1.16 | 1.87, 1.26 | 1.36, 0.73 |
| (2, 2) | 1, 0.03 | 1, 0.03 | 1, 0.04 | 1, 0.05 | 1, 0.04 | 1, 0.05 |
| | **σ1², σ2², σ12 = 1.05, 1.05, 0.5** | | | **σ1², σ2², σ12 = 1.05, 1.3, 0.5** | | |
| (0, 0) | 119.7, 117.43 | 113.66, 111.26 | 145.8, 143.26 | 36.98, 36.29 | 35.36, 34.15 | 67.92, 71.98 |
| (0.1, 0) | 109.92, 110.25 | 91.58, 89.65 | 94.87, 95.08 | 32.44, 31.9 | 28.9, 29.51 | 57.81, 57.1 |
| (0.1, 0.3) | 39.17, 38.3 | 31.75, 30.94 | 32.75, 32.57 | 16.98, 16.67 | 13.64, 12.93 | 19.89, 19.42 |
| (0.3, 0) | 46.38, 46.47 | 57.45, 56.99 | 30.82, 28.69 | 21.45, 21.07 | 22.37, 21.61 | 26.26, 25.91 |
| (0.5, 0.8) | 4.2, 3.63 | 4.33, 3.76 | 3.36, 2.79 | 3.36, 2.75 | 3.03, 2.5 | 3.07, 2.52 |
| (1, 1) | 1.79, 1.18 | 1.9, 1.32 | 1.32, 0.68 | 1.63, 1.03 | 1.62, 0.98 | 1.36, 0.69 |
| (2, 2) | 1, 0.03 | 1, 0.02 | 1, 0.07 | 1, 0.04 | 1, 0.03 | 1, 0.07 |
| | **σ1², σ2², σ12 = 1.2, 1.2, 0.6** | | | **σ1², σ2², σ12 = 1.4, 1.4, 0.5** | | |
| (0, 0) | 49.03, 47.37 | 43.5, 43.13 | 70.52, 70.58 | 8.71, 8.04 | 9.21, 8.72 | 26.31, 26.03 |
| (0.1, 0) | 46.69, 46.07 | 35.71, 37.08 | 50.66, 50.53 | 7.93, 7.31 | 8.41, 7.83 | 22.27, 21.87 |
| (0.1, 0.3) | 21.07, 20.83 | 15.34, 14.99 | 20.19, 19.52 | 5.9, 5.29 | 5.08, 4.4 | 10.86, 10.34 |
| (0.3, 0) | 26.63, 25.92 | 24.93, 24.83 | 22.78, 23.16 | 6.36, 6.04 | 6.86, 6.31 | 12.83, 12.98 |
| (0.5, 0.8) | 3.61, 3.05 | 3.18, 2.74 | 2.86, 2.32 | 2.23, 1.56 | 1.89, 1.27 | 2.41, 1.76 |
| (1, 1) | 1.66, 1.05 | 1.64, 1 | 1.39, 0.75 | 1.39, 0.76 | 1.28, 0.59 | 1.3, 0.66 |
| (2, 2) | 1, 0.05 | 1, 0.05 | 1, 0.03 | 1.01, 0.12 | 1, 0 | 1, 0.07 |
| | **σ1², σ2², σ12 = 1.4, 1.4, 0.75** | | | **σ1², σ2², σ12 = 3, 3, 0.5** | | |
| (0, 0) | 21.16, 20.8 | 17.8, 17.78 | 29.82, 28.27 | 1.26, 0.57 | 1.26, 0.58 | 2.01, 1.5 |
| (0.1, 0) | 19.72, 19.11 | 15.32, 14.3 | 25.04, 24.31 | 1.24, 0.53 | 1.2, 0.48 | 1.96, 1.32 |
| (0.1, 0.3) | 11.45, 10.87 | 7.954, 7.319 | 11.55, 10.3 | 1.26, 0.58 | 1.14, 0.39 | 1.67, 1.09 |
| (0.3, 0) | 14.49, 13.59 | 11.25, 10.4 | 13.41, 12.79 | 1.2, 0.47 | 1.18, 0.46 | 1.85, 1.26 |
| (0.5, 0.8) | 2.93, 2.43 | 2.26, 1.67 | 2.57, 2 | 1.23, 0.54 | 1.03, 0.2 | 1.26, 0.57 |
| (1, 1) | 1.59, 0.97 | 1.42, 0.76 | 1.34, 0.7 | 1.22, 0.54 | 1.01, 0.12 | 1.1, 0.35 |
| (2, 2) | 1, 0.09 | 1, 0.07 | 1, 0.04 | 1.345, 0.67 | 1, 0 | 1, 0.05 |
Table 9. ARL, SDRL. Scenario b11 (with identification), p = 2.

| (μ1, μ2) | ANN | SVR | RFR | ANN | SVR | RFR |
|---|---|---|---|---|---|---|
| | **σ1², σ2², σ12 = 1, 1, 0.5** | | | **σ1², σ2², σ12 = 1.05, 1, 0.5** | | |
| (0, 0) | 200, 200 | 200, 200 | 200, 200 | 149.42, 151.52 | 201.87, 203.46 | 181.8, 177.7 |
| (0.1, 0) | 160.4, 155.4 | 173.9, 176.6 | 186.7, 191 | 124, 120 | 184.7, 191.4 | 180.1, 189 |
| (0.1, 0.3) | 62.01, 58.38 | 110.02, 110.37 | 155.2, 158.5 | 52.15, 50.89 | 109.1, 107.4 | 143, 151.4 |
| (0.3, 0) | 48.04, 46.43 | 93.51, 90 | 136.3, 138.5 | 40.94, 40.87 | 97.53, 97.58 | 139.4, 143.1 |
| (0.5, 0.8) | 4.75, 4.3 | 36.58, 36.75 | 49.6, 49.63 | 4.55, 4.16 | 38.09, 38.81 | 51.89, 50.52 |
| (1, 1) | 1.79, 1.15 | 51.36, 49.96 | 29.7, 30.33 | 1.77, 1.19 | 47.34, 46.34 | 31.93, 32.45 |
| (2, 2) | 1, 0.03 | >10,000 | 19.38, 18.91 | 1, 0.03 | >10,000 | 20.84, 20.32 |
| | **σ1², σ2², σ12 = 1.05, 1.05, 0.5** | | | **σ1², σ2², σ12 = 1.05, 1.3, 0.5** | | |
| (0, 0) | 118.52, 119.79 | 202.07, 207.02 | 180.8, 175.1 | 39.63, 39.97 | 168.7, 169 | 131.2, 131.3 |
| (0.1, 0) | 97.77, 99.47 | 172.9, 172.1 | 176.5, 178.1 | 36.89, 36.02 | 161.9, 158.9 | 130.8, 130.4 |
| (0.1, 0.3) | 43.95, 44.88 | 108.1, 106.8 | 138.2, 133 | 20.56, 20.27 | 102.3, 103.9 | 113.9, 116.9 |
| (0.3, 0) | 35.95, 35.47 | 95.47, 93.45 | 134.1, 128.2 | 19.35, 19.4 | 88.08, 86.51 | 122.5, 117.7 |
| (0.5, 0.8) | 4.422, 3.934 | 38.92, 37.7 | 56.43, 55.79 | 3.69, 3.19 | 39.36, 39.55 | 74.89, 70.93 |
| (1, 1) | 1.76, 1.16 | 47.73, 44.82 | 34.79, 32.97 | 1.77, 1.16 | 47.39, 47.61 | 52.86, 50.55 |
| (2, 2) | 1, 0.02 | >10,000 | 22.8, 22.2 | 1, 0.05 | >10,000 | 35.4, 34.89 |
| | **σ1², σ2², σ12 = 1.2, 1.2, 0.6** | | | **σ1², σ2², σ12 = 1.4, 1.4, 0.5** | | |
| (0, 0) | 47.96, 47.6 | 186.1, 183.4 | 132.9, 133.4 | 11.57, 11.02 | 146.5, 145.7 | 94.65, 99.79 |
| (0.1, 0) | 41.51, 40.24 | 157.3, 157.5 | 144.8, 156.5 | 11.2, 10.7 | 135.7, 139.9 | 97.2, 96.26 |
| (0.1, 0.3) | 23.57, 22.57 | 104.1, 106.2 | 126.2, 117.9 | 8.69, 8.15 | 103.9, 105.3 | 94.03, 94.87 |
| (0.3, 0) | 20.93, 20.87 | 88.51, 86.74 | 122.6, 121.7 | 7.41, 6.69 | 87.51, 88.02 | 96.93, 95.25 |
| (0.5, 0.8) | 3.79, 3.23 | 38.36, 38.36 | 69.34, 69.38 | 3, 2.44 | 43.3, 44.68 | 106.1, 105.8 |
| (1, 1) | 1.83, 1.24 | 48.31, 49.39 | 49.93, 52.86 | 1.65, 1.08 | 56.18, 54.86 | 109.8, 109.8 |
| (2, 2) | 1, 0.04 | >10,000 | 33.44, 32.79 | 1, 0.07 | >10,000 | 91.38, 85.05 |
| | **σ1², σ2², σ12 = 1.4, 1.4, 0.75** | | | **σ1², σ2², σ12 = 3, 3, 0.5** | | |
| (0, 0) | 19.79, 18.66 | 160.9, 161.4 | 108, 107.1 | 1.27, 0.6 | 725.8, 726.7 | 222.9, 221.5 |
| (0.1, 0) | 19.09, 18.97 | 141.7, 132.8 | 103.9, 106.7 | 1.27, 0.6 | 700.6, 692.5 | 206.9, 200.7 |
| (0.1, 0.3) | 13.15, 12.39 | 99.46, 98.47 | 104.4, 106 | 1.26, 0.58 | 735.2, 706.8 | 217.9, 228.2 |
| (0.3, 0) | 11.99, 10.93 | 88.22, 83.6 | 108.71, 104.8 | 1.28, 0.58 | 709.2, 712.3 | 221.7, 215.4 |
| (0.5, 0.8) | 3.46, 3.03 | 41.47, 41.09 | 94.66, 93.9 | 1.13, 0.38 | 643.3, 611.6 | 436.7, 432.3 |
| (1, 1) | 1.81, 1.18 | 53.56, 55.79 | 69.6, 75.21 | 1.08, 0.28 | 812.3, 578 | 1434, 1027 |
| (2, 2) | 1, 0.08 | >10,000 | 59.42, 58.75 | 1, 0.02 | >10,000 | >10,000 |
Table 10. ARL, SDRL. Scenario b11 (with identification), p = 3.

| (μ1, μ2, μ3) | ANN | SVR | RFR | ANN | SVR | RFR |
|---|---|---|---|---|---|---|
| | **σ1², σ2², σ3², Cov = 1, 1, 1, 0.5** | | | **σ1², σ2², σ3², Cov = 1.05, 1, 1, 0.5** | | |
| (0, 0, 0) | 200, 200 | 200, 200 | 200, 200 | 173.2, 170 | 200, 197 | 180.7, 171.2 |
| (0.1, 0, 0.1) | 162.7, 158.5 | 191.7, 180.4 | 193.2, 182.2 | 140.1, 139.8 | 184.4, 186.3 | 169.1, 178.3 |
| (0.1, 0.3, 0.3) | 65.37, 63.74 | 112.4, 113.3 | 140.2, 136.4 | 62.19, 58.74 | 104.3, 106.4 | 137.1, 134.9 |
| (0.3, 0, 0) | 70, 68.85 | 112.1, 108.8 | 145.3, 147.1 | 62.41, 62.24 | 105.7, 102.8 | 136.1, 129.4 |
| (0.5, 0.8, 0.5) | 8.79, 8.26 | 39.13, 40.1 | 67.1, 65.66 | 8.88, 8.04 | 39.58, 38.35 | 64.91, 66.47 |
| (1, 1, 1) | 2.45, 1.88 | 41.89, 39.86 | 85.13, 79.04 | 2.41, 1.79 | 40.68, 41.82 | 81.15, 81.58 |
| (2, 2, 2) | 1, 0 | >10,000 | >10,000 | 1, 0 | >10,000 | >10,000 |
| | **σ1², σ2², σ3², Cov = 1.05, 1.05, 1.05, 0.5** | | | **σ1², σ2², σ3², Cov = 1.05, 1.3, 1.05, 0.5** | | |
| (0, 0, 0) | 119.6, 120.1 | 166.9, 173.5 | 185.3, 174.1 | 58.1, 56.94 | 117, 121.3 | 122, 123.7 |
| (0.1, 0, 0.1) | 111.3, 112 | 161.5, 155.2 | 152, 144.9 | 53.26, 53.94 | 113.3, 114.5 | 114.6, 112.5 |
| (0.1, 0.3, 0.3) | 54.7, 54.6 | 98.81, 95.89 | 113.1, 106.3 | 34.07, 34.35 | 79.56, 82.71 | 98.41, 96.06 |
| (0.3, 0, 0) | 53.08, 51.83 | 96.81, 100.2 | 117.3, 121.1 | 32.76, 32.4 | 83.34, 78.16 | 106.1, 108 |
| (0.5, 0.8, 0.5) | 8.64, 7.86 | 39.58, 37.68 | 64.28, 62.58 | 6.88, 6.14 | 41.4, 40 | 70.57, 68.66 |
| (1, 1, 1) | 2.43, 1.86 | 41.77, 42.75 | 89.83, 86.32 | 2.27, 1.63 | 47.5, 45.62 | 99.91, 96.18 |
| (2, 2, 2) | 1, 0.04 | >10,000 | >10,000 | 1, 0.03 | >10,000 | >10,000 |
| | **σ1², σ2², σ3², Cov = 1.2, 1.2, 1.2, 0.6** | | | **σ1², σ2², σ3², Cov = 1.4, 1.4, 1.05, 0.5** | | |
| (0, 0, 0) | 54.77, 54.89 | 115.1, 112 | 126.4, 129.6 | 17.75, 16.96 | 71.26, 68.07 | 74.38, 82.69 |
| (0.1, 0, 0.1) | 51.34, 48.68 | 108.7, 106.1 | 113.1, 105.3 | 17.79, 17.88 | 70.62, 71.74 | 79.58, 83.47 |
| (0.1, 0.3, 0.3) | 32.2, 32.72 | 81.05, 76.15 | 90.85, 86.53 | 13.11, 13.1 | 59.12, 58.77 | 71.82, 68.78 |
| (0.3, 0, 0) | 31.45, 30.61 | 77.98, 76.26 | 94.03, 86.58 | 12.84, 12.08 | 61.83, 58.73 | 80.5, 82.38 |
| (0.5, 0.8, 0.5) | 6.52, 5.72 | 40.93, 40.51 | 70.07, 71.17 | 4.77, 3.92 | 50.43, 49.86 | 76.7, 73.9 |
| (1, 1, 1) | 2.21, 1.58 | 47.82, 46.98 | 105.9, 98.5 | 1.96, 1.33 | 60.61, 61.3 | 138.3, 146.4 |
| (2, 2, 2) | 1, 0 | >10,000 | >10,000 | 1, 0.03 | >10,000 | >10,000 |
| | **σ1², σ2², σ3², Cov = 1.4, 1.4, 1.05, 0.75** | | | **σ1², σ2², σ3², Cov = 3, 3, 1.4, 0.5** | | |
| (0, 0, 0) | 72.1, 76.37 | 130.7, 130 | 135.9, 136.5 | 1.24, 0.58 | 132.5, 133.2 | 66.75, 65.52 |
| (0.1, 0, 0.1) | 66.25, 67.53 | 129.3, 128.7 | 126.2, 122.1 | 1.25, 0.57 | 140.8, 132.4 | 72.06, 71.05 |
| (0.1, 0.3, 0.3) | 38.7, 39.57 | 90.35, 95.11 | 101.9, 105.1 | 1.22, 0.52 | 174.8, 182.1 | 80.7, 78.22 |
| (0.3, 0, 0) | 35.49, 36.39 | 84.25, 83.67 | 103.4, 106.2 | 1.2, 0.47 | 141.8, 138.7 | 72.15, 73.2 |
| (0.5, 0.8, 0.5) | 7.22, 6.9 | 43.91, 44.9 | 72.57, 70.33 | 1.16, 0.45 | 320.2, 319.3 | 167.3, 171.3 |
| (1, 1, 1) | 2.24, 1.62 | 43.59, 42.46 | 106.5, 100.2 | 1.06, 0.26 | 953.8, 947.7 | 944.5, 964.4 |
| (2, 2, 2) | 1, 0.05 | >10,000 | >10,000 | 1, 0 | >10,000 | >10,000 |
Table 11. ARL, SDRL. Scenario b12 (with identification), p = 2.

| (μ1, μ2) | ANN | SVR | RFR | ANN | SVR | RFR |
|---|---|---|---|---|---|---|
| | **σ1², σ2², σ12 = 1, 1, 0.5** | | | **σ1², σ2², σ12 = 1.05, 1, 0.5** | | |
| (0, 0) | 200, 200 | 200, 200 | 200, 200 | 167.4, 170.5 | 193.6, 186.8 | 172.1, 177.2 |
| (0.1, 0) | 171.1, 171.3 | 181.2, 203.5 | 166.5, 145.6 | 139, 139.6 | 167.5, 170.9 | 157.8, 164 |
| (0.1, 0.3) | 79.9, 78.78 | 99.75, 105.6 | 63.04, 65.82 | 69.15, 70.55 | 83.67, 75.27 | 70.87, 68.67 |
| (0.3, 0) | 62.84, 59.85 | 75.31, 76.62 | 50.5, 52.81 | 53.81, 54.14 | 75.63, 74.51 | 44.89, 40.81 |
| (0.5, 0.8) | 5.51, 5.04 | 30.7, 35.05 | 5.29, 4.51 | 5.57, 5.1 | 24.71, 23 | 5.65, 4.97 |
| (1, 1) | 1.92, 1.27 | 31.48, 30.6 | 2.01, 1.36 | 1.89, 1.24 | 28.24, 25.56 | 2.01, 1.29 |
| (2, 2) | 1, 0.03 | >10,000 | 1.22, 0.5414 | 1, 0.03 | >10,000 | 1.26, 0.6 |
| | **σ1², σ2², σ12 = 1.05, 1.05, 0.5** | | | **σ1², σ2², σ12 = 1.05, 1.3, 0.5** | | |
| (0, 0) | 133.9, 131.2 | 157.5, 144.2 | 127.9, 123.6 | 52.48, 51.18 | 109.5, 115 | 55.23, 53.85 |
| (0.1, 0) | 116.3, 115.4 | 144.6, 145.5 | 119.3, 116.9 | 47.29, 46.04 | 102.5, 108.9 | 54.11, 51.69 |
| (0.1, 0.3) | 54.95, 57.15 | 83.44, 89.2 | 56.89, 56.27 | 26.13, 26.2 | 66.23, 71.56 | 29.01, 30.95 |
| (0.3, 0) | 45.51, 45 | 74.85, 71.91 | 44.08, 41.1 | 23.33, 22.87 | 56.56, 56.35 | 28.85, 28.8 |
| (0.5, 0.8) | 5.32, 4.73 | 23.63, 21.63 | 4.93, 4.2 | 4.29, 3.84 | 18.89, 20 | 5.05, 4.82 |
| (1, 1) | 1.83, 1.22 | 27.98, 25.33 | 1.98, 1.28 | 1.73, 1.09 | 22.17, 19.72 | 2.07, 1.56 |
| (2, 2) | 1, 0 | >10,000 | 1.26, 0.51 | 1, 0 | >10,000 | 1.34, 0.71 |
| | **σ1², σ2², σ12 = 1.2, 1.2, 0.6** | | | **σ1², σ2², σ12 = 1.4, 1.4, 0.5** | | |
| (0, 0) | 60.34, 59.04 | 124.5, 132.4 | 75.91, 77.46 | 14.94, 14.66 | 58.04, 54.73 | 25.81, 25.53 |
| (0.1, 0) | 55.62, 53.89 | 107.8, 98.68 | 65.75, 63.07 | 14.1, 14.42 | 53.29, 48.72 | 21.36, 20.77 |
| (0.1, 0.3) | 31.88, 31.27 | 62.41, 56.23 | 34.39, 33.71 | 10.38, 9.79 | 36.14, 35.83 | 15.69, 15.49 |
| (0.3, 0) | 26.52, 25.58 | 55.24, 51.44 | 33.01, 31.47 | 9.14, 8.97 | 34.89, 34.84 | 14.04, 13.73 |
| (0.5, 0.8) | 4.41, 3.95 | 21.47, 22.07 | 5.49, 4.335 | 3.03, 2.45 | 15.77, 14.66 | 4.94, 4.5 |
| (1, 1) | 1.76, 1.18 | 24.05, 24.97 | 2.04, 1.35 | 1.55, 0.92 | 21.01, 21.06 | 2.38, 1.64 |
| (2, 2) | 1, 0.03 | >10,000 | 1.29, 0.6 | 1, 0 | 3684.07, 2428.36 | 1.61, 0.98 |
| | **σ1², σ2², σ12 = 1.4, 1.4, 0.75** | | | **σ1², σ2², σ12 = 3, 3, 0.5** | | |
| (0, 0) | 27.24, 26.27 | 79.79, 76.59 | 35.43, 35.04 | 1.5, 0.85 | 64.87, 66.76 | 14.6, 14.12 |
| (0.1, 0) | 24.84, 23.87 | 70.3, 71.4 | 33.03, 32.2 | 1.49, 0.85 | 63.42, 63.09 | 14.04, 13.35 |
| (0.1, 0.3) | 16.94, 16.28 | 45.1, 47.75 | 21.66, 21.23 | 1.44, 0.8 | 62.29, 61.14 | 12.93, 11.99 |
| (0.3, 0) | 14.64, 13.86 | 42.77, 41.06 | 19.98, 19.13 | 1.46, 0.81 | 60.07, 59.96 | 12.91, 12.09 |
| (0.5, 0.8) | 3.5, 2.93 | 19.98, 19.23 | 4.85, 4.39 | 1.34, 0.64 | 55.18, 50.77 | 11.38, 10.41 |
| (1, 1) | 1.71, 1.12 | 22.8, 21.87 | 2.25, 1.657 | 1.16, 0.45 | 79.59, 74.07 | 11.11, 10.37 |
| (2, 2) | 1, 0.03 | 4761.09, 5179.76 | 1.4, 0.73 | 1, 0.04 | 4256.1, 4516.53 | 11.47, 10.73 |
Table 12. ARL, SDRL. Scenario b12 (with identification), p = 3.

| (μ1, μ2, μ3) | ANN | SVR | RFR | ANN | SVR | RFR |
|---|---|---|---|---|---|---|
| | **σ1², σ2², σ3², Cov = 1, 1, 1, 0.5** | | | **σ1², σ2², σ3², Cov = 1.05, 1, 1, 0.5** | | |
| (0, 0, 0) | 200, 200 | 200, 200 | 200, 200 | 231.6, 237.3 | 222.7, 234.9 | 168.8, 170.3 |
| (0.1, 0, 0.1) | 190.1, 196.3 | 191.7, 190.8 | 160.7, 155 | 216.5, 214.1 | 212.96, 212.47 | 147.2, 157.1 |
| (0.1, 0.3, 0.3) | 145.1, 157.2 | 143.6, 145.7 | 53.17, 54.18 | 171, 171.5 | 153.5, 157.2 | 47.92, 49.33 |
| (0.3, 0, 0) | 155.2, 150.9 | 143.7, 148.2 | 52.73, 53.01 | 152.1, 143.3 | 157.2, 155.2 | 49.32, 51.13 |
| (0.5, 0.8, 0.5) | 18.42, 17.86 | 72.88, 72.84 | 6.11, 5.824 | 18.61, 18.54 | 80.7, 75.61 | 5.48, 4.97 |
| (1, 1, 1) | 3.56, 2.95 | 58.57, 58.1 | 1.97, 1.44 | 3.58, 3.03 | 61.34, 61.48 | 1.87, 1.27 |
| (2, 2, 2) | 1.11, 0.37 | 1494, 1397 | 1.2, 0.5 | 1.11, 0.34 | 1504, 1629 | 1.2, 0.4738 |
| | **σ1², σ2², σ3², Cov = 1.05, 1.05, 1.05, 0.5** | | | **σ1², σ2², σ3², Cov = 1.05, 1.3, 1.05, 0.5** | | |
| (0, 0, 0) | 272, 274.4 | 277.6, 294.1 | 127.3, 127.9 | 389.4, 380.8 | 350.1, 358.3 | 57.92, 54.64 |
| (0.1, 0, 0.1) | 275.7, 287.7 | 268.9, 261.7 | 99.14, 94.62 | 361.7, 363.8 | 335.5, 331.6 | 54.21, 53.41 |
| (0.1, 0.3, 0.3) | 176.8, 173 | 184.2, 192.1 | 39.81, 39.34 | 191.4, 205.4 | 250.8, 270 | 27.95, 26.8 |
| (0.3, 0, 0) | 171.5, 174.2 | 172.6, 163.8 | 41.3, 39.46 | 224.7, 198.1 | 263.7, 257.4 | 30.73, 29.67 |
| (0.5, 0.8, 0.5) | 19.84, 19.12 | 88.31, 84.7 | 5.93, 4.911 | 23.47, 22.21 | 103.5, 104 | 5.5, 5.17 |
| (1, 1, 1) | 4.18, 3.65 | 62.5, 60.63 | 1.94, 1.35 | 4.89, 4.44 | 76.9, 73.98 | 1.93, 1.48 |
| (2, 2, 2) | 1.2, 0.5 | 1453.11, 1538 | 1.18, 0.45 | 1.32, 0.61 | 1120.58, 1064.46 | 1.28, 0.59 |
| | **σ1², σ2², σ3², Cov = 1.2, 1.2, 1.2, 0.6** | | | **σ1², σ2², σ3², Cov = 1.4, 1.4, 1.05, 0.5** | | |
| (0, 0, 0) | 427.05, 391.7 | 382.5, 392.9 | 62.2, 68.76 | 522.8, 501.9 | 602.7, 583 | 22.73, 20.64 |
| (0.1, 0, 0.1) | 400.4, 414.5 | 333, 328.5 | 54.46, 54.61 | 426.9, 392.3 | 496.7, 505.2 | 21.92, 21.14 |
| (0.1, 0.3, 0.3) | 215.4, 216.1 | 228.3, 233.4 | 27.86, 26.09 | 219.6, 217.7 | 354.7, 347.3 | 15.05, 14.52 |
| (0.3, 0, 0) | 234.6, 226.5 | 263.7, 278.7 | 27.52, 26.72 | 173.5, 167 | 384.7, 382.9 | 14.34, 13.9 |
| (0.5, 0.8, 0.5) | 22.8, 22.6 | 116.9, 126.7 | 5.2, 4.68 | 29.7, 28.48 | 139, 149.3 | 4.61, 4.04 |
| (1, 1, 1) | 5.22, 5.02 | 66.91, 65.02 | 2.02, 1.35 | 8.44, 7.63 | 68.62, 67.18 | 2.21, 1.58 |
| (2, 2, 2) | 1.33, 0.65 | 932.7, 871.2 | 1.28, 0.66 | 1.73, 1.12 | 974.3, 982.5 | 1.32, 0.6323 |
| | **σ1², σ2², σ3², Cov = 1.4, 1.4, 1.05, 0.75** | | | **σ1², σ2², σ3², Cov = 3, 3, 1.4, 0.5** | | |
| (0, 0, 0) | 286.3, 291.4 | 285.6, 255.2 | 68.35, 64.36 | 63.12, 60.09 | 458.3, 432.8 | 2.47, 1.84 |
| (0.1, 0, 0.1) | 286.4, 284 | 271.5, 280 | 64.79, 74.13 | 63.15, 61.07 | 427.3, 411.2 | 2.5, 2.01 |
| (0.1, 0.3, 0.3) | 159.1, 150.6 | 189.4, 188.1 | 30.58, 30.93 | 55.69, 53.68 | 344, 340.6 | 2.51, 2.01 |
| (0.3, 0, 0) | 158.1, 165 | 219.4, 192.7 | 30.31, 31.45 | 50.76, 48.21 | 337.1, 351.6 | 2.45, 1.94 |
| (0.5, 0.8, 0.5) | 18.46, 16.9 | 106.8, 106.7 | 5.12, 4.67 | 29.64, 29.25 | 203.3, 191.9 | 1.97, 1.41 |
| (1, 1, 1) | 4.3, 4.01 | 70.44, 77.17 | 2.05, 1.33 | 19.25, 18.14 | 108, 110.8 | 1.48, 0.85 |
| (2, 2, 2) | 1.23, 0.5 | 1803.2, 715.9 | 1.2, 0.49 | 13.82, 12.84 | 1979.95, 2028.43 | 1.13, 0.36 |
Table 13. ARL, SDRL. Scenario b21 (with identification), p = 2.

| (μ1, μ2) | ANN | SVR | RFR | ANN | SVR | RFR |
|---|---|---|---|---|---|---|
| | **σ1², σ2², σ12 = 1, 1, 0.5** | | | **σ1², σ2², σ12 = 1.05, 1, 0.5** | | |
| (0, 0) | 200, 200 | 200, 200 | 200, 200 | 176.7, 177.2 | 141.8, 144.9 | 173.2, 178.9 |
| (0.1, 0) | 179.64, 185.25 | 165.69, 162.43 | 180.9, 184.9 | 160.4, 164.7 | 118.7, 121.7 | 172.2, 171.5 |
| (0.1, 0.3) | 105.6, 109.4 | 55.46, 54.82 | 124.8, 125.5 | 111.2, 112.3 | 51.25, 50.59 | 129.1, 127.6 |
| (0.3, 0) | 141, 140.7 | 40.08, 38.88 | 153.3, 148.8 | 126, 123.9 | 32.26, 31.24 | 143, 142.8 |
| (0.5, 0.8) | 52.1, 51.68 | 9.35, 8.8 | 117.3, 118.2 | 52.31, 52.34 | 8.49, 7.61 | 118.2, 111.6 |
| (1, 1) | 57.59, 58.31 | 5.07, 4.43 | 403.6, 369.5 | 52.29, 50.81 | 4.74, 4.28 | 382.2, 413.6 |
| (2, 2) | 48.33, 47.02 | 1.03, 0.19 | >10,000 | 43.19, 41.41 | 1.04, 0.22 | >10,000 |
| | **σ1², σ2², σ12 = 1.05, 1.05, 0.5** | | | **σ1², σ2², σ12 = 1.05, 1.3, 0.5** | | |
| (0, 0) | 169.6, 167.2 | 111.5, 114 | 160.3, 155.6 | 99.24, 98.47 | 30.91, 31.46 | 111.5, 107.7 |
| (0.1, 0) | 144.2, 142.4 | 98.1, 100.21 | 157, 157.8 | 85.06, 84.01 | 33.06, 31.67 | 116, 125 |
| (0.1, 0.3) | 117.3, 116.6 | 42.04, 39.82 | 121.8, 125.3 | 123.7, 127.8 | 16.58, 17.14 | 109.6, 106.4 |
| (0.3, 0) | 106.5, 110.8 | 31.94, 29.97 | 145.3, 143.2 | 56.9, 56.08 | 20.03, 18.95 | 117, 119.1 |
| (0.5, 0.8) | 54.98, 51.53 | 7.92, 7.35 | 125.4, 126.1 | 73.16, 72.22 | 6.06, 5.25 | 134, 136.4 |
| (1, 1) | 50.73, 49.55 | 4.52, 3.93 | 370.3, 353.2 | 55.04, 56.33 | 3.83, 3.26 | 318.4, 337.9 |
| (2, 2) | 41.48, 43.25 | 1.03, 0.19 | >10,000 | 45.47, 43.74 | 1.04, 0.21 | >10,000 |
| | **σ1², σ2², σ12 = 1.2, 1.2, 0.6** | | | **σ1², σ2², σ12 = 1.4, 1.4, 0.5** | | |
| (0, 0) | 138.6, 134.6 | 54.05, 54.47 | 132, 131.3 | 38.51, 36.3 | 8.93, 9.49 | 73.89, 75.73 |
| (0.1, 0) | 127.4, 131 | 50.91, 49.72 | 125.2, 117.8 | 32.91, 32.83 | 8.87, 8.62 | 72.35, 75.25 |
| (0.1, 0.3) | 122, 118.4 | 25.85, 24.53 | 108.3, 104.1 | 54.57, 52.75 | 6.55, 6.16 | 83.54, 83.87 |
| (0.3, 0) | 83.93, 82.46 | 21.56, 19.51 | 131.3, 130 | 24.62, 24.27 | 6.96, 6.24 | 82.86, 79.47 |
| (0.5, 0.8) | 60.66, 62.22 | 7.25, 6.57 | 123.7, 128.5 | 46.98, 45.42 | 3.75, 3.27 | 151.9, 146.2 |
| (1, 1) | 52.75, 54.07 | 3.99, 3.56 | 342.6, 346.2 | 32.56, 31.69 | 2.57, 2.02 | 364.4, 311.5 |
| (2, 2) | 45.81, 48.53 | 1.05, 0.24 | >10,000 | 35.74, 34.2 | 1.03, 0.2 | >10,000 |
| | **σ1², σ2², σ12 = 1.4, 1.4, 0.75** | | | **σ1², σ2², σ12 = 3, 3, 0.5** | | |
| (0, 0) | 112.3, 112.9 | 24.84, 23.7 | 118.8, 120.6 | 2.53, 2.05 | 1.16, 0.43 | 58.39, 58.91 |
| (0.1, 0) | 90.95, 95.76 | 24.12, 23.53 | 114.7, 114.9 | 2.53, 1.98 | 1.13, 0.38 | 62.33, 63.42 |
| (0.1, 0.3) | 121.1, 119.5 | 15.19, 15.28 | 95.5, 89.7 | 2.8, 2.16 | 1.15, 0.43 | 72.95, 70.41 |
| (0.3, 0) | 67.03, 68.44 | 13.98, 13.48 | 122.5, 120.3 | 2.55, 1.97 | 1.13, 0.36 | 66.19, 65.73 |
| (0.5, 0.8) | 68.94, 66.51 | 5.32, 4.85 | 113.6, 110.5 | 3.36, 2.78 | 1.11, 0.36 | 224.6, 232.5 |
| (1, 1) | 55.07, 52.25 | 3.34, 2.84 | 260.3, 259.9 | 3.44, 2.74 | 1.07, 0.29 | 679.6, 665.7 |
| (2, 2) | 54.17, 54.49 | 1.05, 0.23 | >10,000 | 6.23, 5.54 | 1, 0.06 | >10,000 |
Table 14. ARL, SDRL. Scenario b22 (with identification), p = 2.

| (μ1, μ2) | ANN | SVR | RFR | ANN | SVR | RFR |
|---|---|---|---|---|---|---|
| | **σ1², σ2², σ12 = 1, 1, 0.5** | | | **σ1², σ2², σ12 = 1.05, 1, 0.5** | | |
| (0, 0) | 200, 200 | 200, 200 | 200, 200 | 178.1, 179.7 | 148.2, 152 | 153.3, 149.7 |
| (0.1, 0) | 153.9, 151.9 | 155.4, 152 | 148, 142 | 132.5, 132 | 117.4, 121.1 | 129.3, 133.3 |
| (0.1, 0.3) | 108.7, 107.5 | 54.17, 56.74 | 69.72, 70.86 | 118.4, 120.6 | 44.3, 44.38 | 65.13, 63.27 |
| (0.3, 0) | 44.82, 42.93 | 40.26, 39.92 | 47.45, 47.45 | 38.06, 36.72 | 32.54, 32.8 | 44.68, 47.01 |
| (0.5, 0.8) | 28.11, 29.32 | 8.69, 7.84 | 23.04, 21.48 | 30.53, 29.86 | 8.05, 7.17 | 22.21, 21.46 |
| (1, 1) | 16.27, 15.87 | 4.88, 4.32 | 60.94, 56.68 | 16.11, 14.79 | 4.57, 4.01 | 54.22, 54.52 |
| (2, 2) | 8.55, 8.31 | 1.02, 0.15 | >10,000 | 9.41, 8.35 | 1.03, 0.18 | >10,000 |
| | **σ1², σ2², σ12 = 1.05, 1.05, 0.5** | | | **σ1², σ2², σ12 = 1.05, 1.3, 0.5** | | |
| (0, 0) | 160.7, 165.5 | 117.5, 119.7 | 140.7, 146.2 | 68.12, 68.09 | 31.45, 31.49 | 65.31, 66.4 |
| (0.1, 0) | 115.2, 113.9 | 97.86, 100.5 | 110, 116.6 | 54.93, 53.79 | 30.67, 30.35 | 65.18, 64 |
| (0.1, 0.3) | 103.9, 106.1 | 36.53, 35.09 | 63.48, 61.96 | 65.09, 65.87 | 16.59, 15.88 | 44.09, 42.66 |
| (0.3, 0) | 35.67, 35.27 | 32.35, 31.18 | 42.7, 39.96 | 26.15, 26.13 | 20.27, 19.6 | 34.88, 33.61 |
| (0.5, 0.8) | 26.22, 26 | 7.21, 6.7 | 21.78, 21.91 | 16.03, 15.26 | 5.81, 5.3 | 22.63, 22.88 |
| (1, 1) | 14.59, 14.39 | 4.46, 3.8 | 54.27, 54.18 | 8.78, 7.95 | 3.76, 3.33 | 41.85, 42.24 |
| (2, 2) | 9.35, 8.75 | 1.04, 0.2 | >10,000 | 8.85, 8.08 | 1.04, 0.21 | >10,000 |
| | **σ1², σ2², σ12 = 1.2, 1.2, 0.6** | | | **σ1², σ2², σ12 = 1.4, 1.4, 0.5** | | |
| (0, 0) | 121.5, 126.7 | 55.63, 54.19 | 94.1, 97.06 | 26.39, 25.6 | 8.66, 8.15 | 28.33, 27.67 |
| (0.1, 0) | 84.17, 84.25 | 49.78, 50.41 | 75.83, 76.77 | 22.12, 21.59 | 8.36, 7.67 | 27.04, 25.97 |
| (0.1, 0.3) | 93.44, 92.86 | 24.24, 23.77 | 55.67, 56.03 | 33.73, 34.4 | 6.8, 6.24 | 26.5, 26.05 |
| (0.3, 0) | 28.67, 28.06 | 22.83, 22.92 | 34.83, 34.89 | 12.12, 11.71 | 7.4, 7.06 | 21.43, 20.75 |
| (0.5, 0.8) | 21.62, 20.51 | 6.47, 5.76 | 22.83, 22.38 | 15.73, 14.94 | 3.42, 2.81 | 21.8, 20.96 |
| (1, 1) | 11.13, 10.06 | 4, 3.5 | 48.59, 44.41 | 7.24, 6.82 | 2.66, 2.1 | 40.76, 38.22 |
| (2, 2) | 7.87, 7.28 | 1.04, 0.2 | >10,000 | 8.43, 7.65 | 1.02, 0.16 | >10,000 |
| | **σ1², σ2², σ12 = 1.4, 1.4, 0.75** | | | **σ1², σ2², σ12 = 3, 3, 0.5** | | |
| (0, 0) | 79.65, 74.89 | 26.41, 74.89 | 26.41, 26.34 | 2.99, 2.39 | 1.13, 0.37 | 12.99, 12.41 |
| (0.1, 0) | 55.06, 55.22 | 26.12, 25.51 | 64.21, 59.84 | 2.74, 2.13 | 1.15, 0.41 | 13.86, 13.97 |
| (0.1, 0.3) | 68.62, 71.7 | 15.83, 15.82 | 46.4, 44.21 | 3.19, 2.55 | 1.12, 0.35 | 14.38, 13.47 |
| (0.3, 0) | 22.16, 21.4 | 14.23, 13.54 | 33.38, 31.51 | 2.46, 1.87 | 1.13, 0.4 | 16.09, 15.11 |
| (0.5, 0.8) | 15.89, 15.82 | 5.39, 4.8 | 25.09, 24.34 | 2.82, 2.27 | 1.09, 0.31 | 23.9, 23.52 |
| (1, 1) | 8.03, 7.9 | 3.44, 2.83 | 51.26, 51.36 | 2.35, 1.74 | 1.08, 0.3 | 61.49, 58.96 |
| (2, 2) | 5.75, 5.25 | 1.03, 0.19 | >10,000 | 1.85, 1.3 | 1, 0.08 | >10,000 |
Table 15. The SVM scheme for the illustrative example.

| Sample | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Input T | 26.5128 | 15.7031 | 1.3442 | 1.3567 | 3.2905 | 1.2372 | 6.0249 | 10.778 | 4.0491 | 4.8453 |
| Input W | 7.2728 | 8.1192 | 6.4587 | 9.7029 | 5.818 | 6.3745 | 9.3089 | 7.5685 | 14.8656 | 16.7297 |
| SVR1 output (UCL = 1.0714) | 0.8457 | 0.8835 | 0.1029 | 0.1004 | 0.0977 | 0.1025 | 0.9014 | 0.7506 | 0.1469 | 0.9076 |
| SVR2 output (UCL = 1.0792) | 0.4584 | 0.411 | 0.0593 | 0.1037 | 0.0547 | 0.0609 | 0.9269 | 0.9038 | 0.4274 | 0.8055 |
| SVR3 output (UCL = 1.0735) | 0.8544 | 0.2529 | 0.1407 | 0.2313 | 0.1673 | 0.137 | 0.8628 | 0.8551 | 0.1161 | 0.7355 |
| SVR4 output (UCL = 1.0744) | 0.9032 | 0.9224 | 0.8991 | 0.0927 | 0.8993 | 0.8989 | 0.9874 | 0.9166 | 0.1002 | 0.4982 |
| SVR5 output (UCL = 1.0033) | 0.9005 | 0.8972 | 0.9018 | 0.1064 | 0.9019 | 0.9049 | 0.9948 | 0.9173 | 0.0978 | 0.502 |
| SVR6 output (UCL = 1.0932) | 0.8993 | 0.7554 | 0.8613 | 0.8612 | 0.3868 | 0.8588 | 0.9125 | 0.8946 | 0.0479 | 0.4052 |

Every output falls below its UCL, so all six SVR sub-charts classify all ten samples as in-control.
Table 16. The ANN scheme for the illustrative example.

| Sample | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Input T | 26.5128 | 15.7031 | 1.3442 | 1.3567 | 3.2905 | 1.2372 | 6.0249 | 10.778 | 4.0491 | 4.8453 |
| Input W | 7.2728 | 8.1192 | 6.4587 | 9.7029 | 5.818 | 6.3745 | 9.3089 | 7.5685 | 14.8656 | 16.7297 |
| ANN1 output (UCL = 0.9934) | 0.5011 | 0.8582 | 0.2728 | 0.1333 | 0.1906 | 0.2616 | 0.4963 | 0.7294 | 0.3312 | 0.8917 |
| ANN2 output (UCL = 1.7028) | 0.4603 | 0.8571 | 0.0971 | 0.0984 | 0.298 | 0.0894 | 0.4286 | 0.7975 | 0.3115 | 0.778 |
| ANN3 output (UCL = 1.0451) | 0.5155 | 0.9792 | 0.1965 | 0.3162 | 0.2041 | 0.1703 | 0.4309 | 0.7775 | 0.3236 | **1.1416** |
| ANN4 output (UCL = 0.9771) | 0.7431 | 0.9343 | 0.6765 | 0.2614 | 0.8417 | 0.6917 | 0.4771 | 0.8653 | 0.0965 | 0.2469 |
| ANN5 output (UCL = 0.9736) | 0.7848 | 0.9268 | 0.668 | 0.3124 | 0.8712 | 0.683 | 0.4604 | 0.8637 | 0.1478 | 0.535 |
| ANN6 output (UCL = 0.8285) | 0.7737 | **0.8694** | 0.793 | 0.4344 | 0.7838 | 0.7943 | 0.6009 | **0.8315** | 0.3094 | 0.4433 |

Outputs above their UCL (in bold) are out-of-control signals: ANN3 signals at sample 10, and ANN6 signals at samples 2 and 8; all other samples are classified as in-control.
Table 17. The RFR scheme for the illustrative example.

| Sample | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 |
|---|---|---|---|---|---|---|---|---|---|---|
| Input T | 26.5128 | 15.7031 | 1.3442 | 1.3567 | 3.2905 | 1.2372 | 6.0249 | 10.778 | 4.0491 | 4.8453 |
| Input W | 7.2728 | 8.1192 | 6.4587 | 9.7029 | 5.818 | 6.3745 | 9.3089 | 7.5685 | 14.8656 | 16.7297 |
| RFR1 output (UCL = 0.9858) | 0.1858 | 0.7008 | 0.0095 | 0.0045 | 0.412 | 0.02083 | 0.7746 | 0.9075 | 0.4241 | **1** |
| RFR2 output (UCL = 0.98) | 0.1595 | 0.639 | 0.0875 | 0.0506 | 0.3076 | 0.1833 | 0.4221 | 0.7321 | 0.6366 | 0.9625 |
| RFR3 output (UCL = 0.9999) | 0.2488 | 0.9231 | 0.1631 | 0.3348 | 0.1533 | 0.1613 | 0.8306 | 0.5681 | 0.126 | **1** |
| RFR4 output (UCL = 0.9986) | 0.7712 | 0.8366 | 0.7156 | 0.2556 | 0.7266 | 0.5866 | 0.7521 | 0.8556 | 0.4362 | 0.5173 |
| RFR5 output (UCL = 0.9993) | 0.4531 | 0.8206 | 0.775 | 0.4805 | 0.948 | 0.7651 | 0.903 | 0.8585 | 0.4093 | 0.5771 |
| RFR6 output (UCL = 0.9914) | 0.3192 | 0.4266 | 0.2557 | 0.2768 | 0.3157 | 0.379 | 0.3957 | 0.3162 | 0.1386 | 0.1974 |

Outputs above their UCL (in bold) are out-of-control signals: RFR1 and RFR3 both signal at sample 10; all other samples are classified as in-control.
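Each sub-chart in Tables 15–17 applies the same memory-less decision rule: the trained model's output for a sample is compared with that sub-chart's UCL, and a value above the limit is an out-of-control signal (and, under the identification scenarios, points to the responsible variable/parameter). A minimal sketch of this rule, replayed on the ANN6 row of Table 16 (the helper name `chart_status` is ours, not from the paper):

```python
def chart_status(outputs, ucl):
    """Classify each sample: a model output above the UCL is an out-of-control signal."""
    return ["Out-of-control" if y > ucl else "In-control" for y in outputs]

# ANN6 sub-chart outputs and its UCL, taken from Table 16.
ann6 = [0.7737, 0.8694, 0.793, 0.4344, 0.7838, 0.7943,
        0.6009, 0.8315, 0.3094, 0.4433]
status = chart_status(ann6, 0.8285)
signals = [i + 1 for i, s in enumerate(status) if s == "Out-of-control"]
```

Running this reproduces the ANN6 status row: samples 2 and 8 exceed the UCL of 0.8285 and are flagged out-of-control, while the remaining samples stay in-control.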
Sabahno, H.; Niaki, S.T.A. New Machine-Learning Control Charts for Simultaneous Monitoring of Multivariate Normal Process Parameters with Detection and Identification. Mathematics 2023, 11, 3566. https://doi.org/10.3390/math11163566