Article

Robust Process Parameter Design Methodology: A New Estimation Approach by Using Feed-Forward Neural Network Structures and Machine Learning Algorithms

1 Department of Electrical Engineering, Faculty of Engineering and Technology, Quy Nhon University, Quy Nhon 591417, Vietnam
2 Department of Industrial & Management Systems Engineering, Dong-A University, Busan 49315, Korea
3 Department of Technology Management Engineering, Jeonju University, Jeonju 55069, Korea
* Authors to whom correspondence should be addressed.
Appl. Sci. 2022, 12(6), 2904; https://doi.org/10.3390/app12062904
Submission received: 5 January 2022 / Revised: 17 February 2022 / Accepted: 19 February 2022 / Published: 11 March 2022

Abstract

In robust design (RD) modeling, the response surface methodology (RSM) based on the least-squares method (LSM) is a useful statistical tool for estimating functional relationships between input factors and their associated output responses. Neural network (NN)-based models provide an alternative means of establishing input-output functions without the assumptions required by LSM-based RSM. However, current NN-based estimation methods do not always provide suitable response functions, so there is room for improvement in RD modeling. First, this study proposes a new NN-based RD modeling procedure to obtain the process mean and standard deviation response functions. Second, RD modeling methods based on the feed-forward back-propagation neural network (FFNN), cascade-forward back-propagation neural network (CFNN), and radial basis function network (RBFN) are proposed. Third, two simulation studies are conducted using a given true function to verify the three proposed methods. Fourth, a case study is examined to illustrate the potential of the proposed approach. In conclusion, a comparative analysis of the three feed-forward NN structure-based modeling methods proposed in this study and conventional LSM-based RSM showed that the proposed methods yield significantly lower expected quality loss (EQL) and variability measures.

1. Introduction

In recent decades, robust design (RD) has been considered essential for improvement of product quality, as the primary purpose of RD is to seek a set of parameters that make a product insensitive to various sources of noise factors. In other words, RD attempts to minimize the variability of quality characteristics while ensuring that the process mean meets the target value. To solve the RD problem, Taguchi [1] considered both the process mean and variance as a single performance measure and defined a number of signal-to-noise ratios to obtain the optimal factor settings. Unfortunately, the orthogonal arrays (OAs), statistical analysis, and signal-to-noise ratios associated with this technique were criticized by Box et al. [2], Leon et al. [3], Box [4], and Nair [5]. Therefore, Vining and Myers [6] proposed the dual response (DR) approach based on response surface methodology (RSM), in which the process mean and variance are estimated separately as functions of control factors. The result is an RD optimization model in which the process mean is prioritized by setting it as a constraint and the process variability is set as an objective function. From this starting point, the three sequential steps of the RD procedure were generated: design of experiment (DoE), estimation, and optimization. The DoE step exploits information about the relationship between the input and output variables. In the second step, the functional form of this relationship is defined by estimating the model parameters. Ultimately, the optimal settings for the input factors are identified in the third step.
As several RD optimization models have been proposed and modified according to various criteria, the optimization step is relatively well developed. The priority criterion was used in the DR models proposed by Vining and Myers [6], Copeland and Nelson [7], and Del Castillo and Montgomery [8], whereas the process mean and variance were considered simultaneously in the mean squares error (MSE) model developed by Lin and Tu [9]. The weight criterion was used to consider the trade-off between the mean and variance in the weighted sum models reported by Cho et al. [10], Ding et al. [11], and Koksoy and Doganaksoy [12], whereas Ames et al. [13] used the weight criterion in a quality loss function model. Shin and Cho [14,15] extended the DR model by integrating the customized maximum value on the process bias and variance. Based on the MSE concept, Robinson et al. [16] and Truong and Shin [17] proposed generalized linear mixed models and inverse problem models. Using the goal programming approach, Kim and Cho [18] and Tang and Xu [19] introduced prioritized models, whereas Borror [20] and Fogliatto [21] both considered the output responses on the same scale as the desirability functions. In an attempt to identify the Pareto efficient solutions, Kim and Lin [22] and Shin and Cho [23] developed a fuzzy model and a lexicographical weighted Tchebycheff model for the RD multiple-objective optimization problem, respectively. Furthermore, Goethals and Cho [24] integrated the economic factor in the economic time-oriented model to handle time-oriented dynamic characteristics, while Nha et al. [25] introduced the lexicographical dynamic goal programming model.
The DoE is a systematic method that aims to identify the effects of controllable factors on the quality characteristics of interest. DoE techniques have been developed since the 1920s; Montgomery [26] reviewed the history of DoE in the published literature. Several DoE techniques have been developed and intensively researched as a means of conducting experiments in industrial applications; these include full factorial designs, fractional factorial designs (screening designs), mixture designs, Box-Behnken designs, central composite designs (CCD), Taguchi array designs, Latin square designs, and other non-conventional methods (e.g., D-optimal designs).
Although many techniques for the estimation stage of an RD process are reported in the literature, there is room for improvement. Indeed, the accuracy and reliability of prediction and optimization depend directly on the estimation results. Additionally, most regression methods are RSM-based approaches, which commonly rely on assumptions such as normality and homogeneous variance of the response data. In practice, however, these assumptions may not hold.
Along with the development of RD, RSM has been widely applied in various fields of applied statistics. The most extensive applications of RSM are in the industrial world, particularly in situations where several input variables may influence some performance measure or quality characteristic of the product or process [27]. RSM uses mathematical and statistical techniques to explore the functional relationship between input control variables and an output response variable of interest, with the unknown coefficients in this functional relationship typically estimated by the least-squares method (LSM). The usual assumptions behind LSM-based RSM are that the experimental data and error terms are normally distributed and that the distribution of the error terms has constant variance and zero mean. When one of these assumptions is violated, the Gauss-Markov theorem no longer holds. Instead, alternative techniques can be applied, such as maximum likelihood estimation (MLE), weighted least-squares (WLS), and Bayesian methods. From the viewpoint of MLE, the model parameters are regarded as fixed and unknown quantities, and the observed data are considered random variables [28]. Truong and Shin [17,29] showed that LSM-based RSM does not always estimate the input-output functions effectively, so they developed a procedure to estimate these unknown coefficients using an inverse problem.
In recent decades, neural networks (NNs) have become a hot topic of research; NNs are now widely used in various fields, including speech recognition, multi-objective optimization, function estimation, and classification. NNs can model linear and nonlinear relationships between inputs and outputs without any assumptions, based on the generalization capacity of the activation function. NNs are universal function approximators: Irie and Miyake [30], Funahashi [31], Cybenko [32], Hornik et al. [33], and Zainuddin and Pauline [34] showed that NNs are capable of approximating any arbitrary nonlinear function to the desired accuracy without predetermined models, as NNs are data-driven and self-adaptive. Therefore, an NN provides a powerful regression method for modeling the functional relationship between RD input factors and output responses without making any assumptions. In RD settings, Rowlands et al. [35] integrated an NN into RD by using the NN to conduct the DoE stage. Su and Hsieh [36] applied two NNs to train the data to obtain the optimal parameter sets and predict the system response value. Cook et al. [37] developed an NN model to forecast a set of critical process parameters and employed a genetic algorithm to train the NN model to achieve the desired level of efficiency. The integration of NNs into RD has also been discussed by Chow et al. [38], Chang [39], and Chang and Chen [40]. Arungpadang and Kim [41] developed a feed-forward NN-based RSM to model the functional relationship between input variables and output responses to improve the precision of estimation without increasing the number of experimental runs. Sabouri et al. [42] proposed an NN-based method for function estimation and optimization by adjusting weights until the response reached the target conditions. With regard to input-output relationship modeling, Hong and Satriani [43] also discussed a convolutional neural network (CNN) with an architecture determined by Taguchi's orthogonal array to predict renewable power. Recently, Arungpadang et al. [44] proposed a hybrid neural network-genetic algorithm to predict process parameters. Le et al. [45] proposed an NN-based response function estimation (NRFE) approach that identifies a new screening procedure for obtaining the best transfer function in an NN structure using a desirability function family while determining its associated weight parameters. Le and Shin [46] proposed an NN-based estimation method as an RD modeling approach: a feedback NN structure was first integrated into the estimation of the RD response functions, two new feedback NN structures were then proposed, and the existing recurrent NNs (i.e., Jordan-type and Elman-type NNs) together with the proposed feedback NN approaches were suggested as alternative RD modeling methods.
The primary motive of this research is to establish feed-forward NN structure-based estimation methods as alternative RD modeling approaches. First, an NN-based estimation framework is incorporated into the RD modeling procedure. Second, RD modeling methods based on the feed-forward back-propagation neural network (FFNN), cascade-forward back-propagation neural network (CFNN), and radial basis function network (RBFN) are proposed. These are applied to estimate the process mean and standard deviation response functions. Third, the efficiency of the proposed modeling methods is illustrated through simulation studies with a given true function. Fourth, the efficacy of the proposed modeling methods is illustrated through a printing case study. Finally, the results of comparative studies show that the proposed methods obtain better optimal solutions than conventional LSM-based RSM. The proposed estimation methods based on feed-forward NN structures are illustrated in Figure 1. From the experimental data, the optimal numbers of hidden neurons in the FFNN and CFNN structures and the dispersion constant "spread" in the RBFN are identified to finalize the optimal structures of the corresponding NNs. The DR functions can then be estimated separately using the proposed estimation methods from the optimal NN structures with their control factors and output responses.
The statistical estimation method of RSM based on conventional LSM, as introduced by Box and Wilson [47], generates the response surface approximation using linear, quadratic, and other functions, while the coefficients are estimated by minimizing the total error between the actual and estimated values. For a more comprehensive understanding of RSM, Myers [48] and Khuri and Mukhopadhyay [49] discuss the various development stages and future directions of RSM. When the exact functional relationship is unknown or very complicated, conventional LSM is typically used to estimate the input-output functional responses in RSM [50,51]. In general, the output response y can be identified as a function of input factors x as follows:
y = \mathbf{x}\boldsymbol{\beta} + \varepsilon    (1)

where \mathbf{x} is a vector of the control factors, \boldsymbol{\beta} is a column vector of the estimated parameters, and \varepsilon is the random error. The estimated second-order models for the process mean \hat{\mu}_{LSM} and standard deviation \hat{\sigma}_{LSM} are represented as
\hat{\mu}_{LSM}(\mathbf{x}) = \hat{\beta}_0 + \sum_{i=1}^{p} \hat{\beta}_i x_i + \sum_{i=1}^{p} \hat{\beta}_{ii} x_i^2 + \sum_{i<j} \hat{\beta}_{ij} x_i x_j    (2)

\hat{\sigma}_{LSM}(\mathbf{x}) = \hat{\delta}_0 + \sum_{i=1}^{p} \hat{\delta}_i x_i + \sum_{i=1}^{p} \hat{\delta}_{ii} x_i^2 + \sum_{i<j} \hat{\delta}_{ij} x_i x_j    (3)

where \hat{\beta} and \hat{\delta} are the estimators of the unknown parameters in the mean and standard deviation functions, respectively. These coefficients are estimated using LSM as

\hat{\boldsymbol{\beta}} = \left(\mathbf{X}^{T}\mathbf{X}\right)^{-1}\mathbf{X}^{T}\bar{\mathbf{y}} \quad \text{and} \quad \hat{\boldsymbol{\delta}} = \left(\mathbf{X}^{T}\mathbf{X}\right)^{-1}\mathbf{X}^{T}\mathbf{s}    (4)
where y ¯ and s are the average and standard deviation values for the experimental data, respectively.
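To make the estimation step concrete, a minimal MATLAB sketch of Equations (2)-(4) follows; the 5 × 5 design and response values are synthetic placeholders, not data from this paper.

% Minimal MATLAB sketch of LSM estimation for the dual-response model
% (Equations (2)-(4)); the design and responses are synthetic placeholders.
[x1g, x2g] = meshgrid(-1:0.5:1);                 % 5x5 factorial, coded units
x1 = x1g(:);  x2 = x2g(:);
ybar = 110 + 3*x1 - 15*x2 + randn(size(x1));     % stand-in mean responses
s    = exp(0.35*x1 - 0.35*x2 + 2);               % stand-in std-dev responses
X = [ones(size(x1)) x1 x2 x1.^2 x2.^2 x1.*x2];   % second-order model matrix
beta_hat  = X \ ybar;    % numerically stabler than inv(X'*X)*X'*ybar
delta_hat = X \ s;       % same least-squares fit for the std-dev response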

2. Proposed Feed-Forward NN Structure-Based Estimation Methods

Abbreviations and main variables used in this study are summarized in Appendix D.

2.1. NN Structures

An artificial neuron is a computational model inspired by biological neurons [52]. An NN consists of numerous artificial neurons, or nodes, and the connections between them, with weight coefficients applied to each connection. Based on the pattern of connection, NNs typically fall into two distinct categories: feed-forward networks, in which the connections flow unidirectionally from input to output, and recurrent networks, in which the connections among layers run in both directions. Feed-forward networks are the most popular type for function approximation [34]. Therefore, this study uses feed-forward back-propagation networks and RBFN structures to estimate the desired functions.

2.2. Proposed NN-Based Estimation Method 1: FFNN Structure

2.2.1. Response Function Estimation Using FFNN

According to Taguchi's philosophy, there are two output responses of interest, namely, the process mean and standard deviation. These output responses can be estimated simultaneously from the output layer in a single FFNN. Moreover, as demonstrated by Cybenko [32], any continuous function of n input variables can be approximated by an FFNN with only one hidden layer; similar results are discussed by Funahashi [31], Hornik et al. [33], and Hartman et al. [53]. The proposed FFNN-based RD modeling method with one hidden layer is illustrated in Figure 2.
The input layer has k control factors, x_1, …, x_i, …, x_k. The input to each hidden neuron is the weighted sum of these k factors plus an associated bias, \sum_{i=1}^{k} w_{ji} x_i + a_j^{hid}. This value is transformed by a transfer function (f_1), also called an activation function, as expressed by Equation (5). The transformed value y_j^{hid} is both the output of hidden neuron j and an input to the next layer (here, the output layer), and the final transformed outcome \hat{y}_{FFNN} is the network output. The outputs of the hidden neurons and of the FFNN can then be expressed as
y_j^{hid} = f_1\left(\sum_{i=1}^{k} w_{ji} x_i + a_j^{hid}\right)    (5)

where w_{ji} is the weight connecting input factor x_i to hidden node j, a_j^{hid} is the associated bias of hidden node j, and f_1 is the transfer function of the hidden layer.

\hat{y}_{FFNN} = g_1\left(\sum_{j=1}^{h} t_j y_j^{hid} + b^{out}\right) = g_1\left(\sum_{j=1}^{h} t_j f_1\left(\sum_{i=1}^{k} w_{ji} x_i + a_j^{hid}\right) + b^{out}\right)    (6)

where g_1 is the activation function of the output layer, h is the number of hidden nodes, t_j is the weight connecting hidden node y_j^{hid} to the output layer, and b^{out} is the output bias.
The typical transfer functions for the hidden and output layers are the hyperbolic tangent sigmoid and linear functions, respectively. The estimated mean and standard deviation functions in this case are
\hat{\mu}_{FFNN}(\mathbf{x}) = \sum_{j=1}^{h\_mean} t_j^{mean}\left(\frac{2}{1+\exp\left(-2\left(\sum_{i=1}^{k} w_{ij}^{mean} x_i + a_j^{mean}\right)\right)} - 1\right) + b_{mean}^{FFNN}    (7)

\hat{\sigma}_{FFNN}(\mathbf{x}) = \sum_{j=1}^{h\_std} t_j^{std}\left(\frac{2}{1+\exp\left(-2\left(\sum_{i=1}^{k} w_{ij}^{std} x_i + a_j^{std}\right)\right)} - 1\right) + b_{std}^{FFNN}    (8)

where h_mean and h_std represent the numbers of hidden neurons in the mean and standard deviation estimation models, respectively. Moreover, a_j^{mean} and a_j^{std} represent the bias of hidden node j in the two models, while b_{mean}^{FFNN} and b_{std}^{FFNN} denote the bias of the output neuron. Similarly, t_j^{mean} and t_j^{std} signify the weight connecting hidden node j to the output in the two models, while w_{ij}^{mean} and w_{ij}^{std} refer to the weights connecting the input factors to hidden node j of the FFNN estimation models for the mean and standard deviation, respectively.
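One possible realization of this estimator is sketched below in MATLAB using the toolbox function feedforwardnet, whose defaults match the hyperbolic tangent sigmoid hidden layer and linear output layer described above; the hidden-layer sizes and the data are illustrative assumptions.

% MATLAB sketch of FFNN-based estimation of the dual responses
% (Equations (7)-(8)); hidden sizes and data are illustrative.
[x1g, x2g] = meshgrid(-1:0.5:1);
X    = [x1g(:)'; x2g(:)'];                     % inputs: one column per run
ybar = 110 + 3*x1g(:)' - 15*x2g(:)';           % stand-in mean responses
s    = exp(0.35*x1g(:)' - 0.35*x2g(:)' + 2);   % stand-in std-dev responses
net_mu = feedforwardnet(17);                   % tansig hidden, purelin output
net_mu = train(net_mu, X, ybar);               % fit the process-mean model
net_sd = feedforwardnet(6);
net_sd = train(net_sd, X, s);                  % fit the std-dev model
mu_hat = net_mu(X);  sd_hat = net_sd(X);       % estimated response values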

2.2.2. Number of Hidden Neurons

For an NN model with a single hidden layer, the number of neurons in the hidden layer must be chosen carefully: a larger number of hidden neurons can produce a model that more accurately reflects the training data, but it also makes the model increasingly complex and may lead to overfitting. Conversely, too few hidden neurons may cause underfitting. Several researchers have proposed approaches for determining the ideal number of hidden neurons; Sheela and Deepa [54] review methods for fixing the number of hidden neurons in NNs, but no single method is effective in every circumstance. In this paper, the well-established Schwarz's Bayesian criterion, or the Bayesian information criterion (BIC), is used to determine the number of hidden neurons. This criterion is defined as
BIC = n \ln\left(\frac{1}{n}\sum_{i=1}^{n} E_i^2\right) + p \ln n    (9)
where n and p denote the sample size and the number of model parameters, respectively, and E_i is the estimation error for the i-th observation. In BIC, the \ln n term imposes a strong penalty on free parameters. In addition, the accuracy of the NN model increases as the sample size increases.
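A sketch of how such a BIC screen over candidate hidden-layer sizes might look in MATLAB follows, reusing X and ybar from the sketch in Section 2.2.1; the candidate range and the parameter count p = h(k + 2) + 1 for k inputs are assumptions.

% Sketch: choosing the number of hidden neurons by BIC (Equation (9)).
best_bic = Inf;
for h = 1:20                                % candidate sizes (assumed range)
    net = feedforwardnet(h);
    net.trainParam.showWindow = false;      % suppress the training GUI
    net = train(net, X, ybar);
    E = ybar - net(X);                      % training residuals
    n = numel(ybar);
    p = h*(size(X,1) + 2) + 1;              % all weights plus biases
    bic = n*log(mean(E.^2)) + p*log(n);
    if bic < best_bic, best_bic = bic; h_best = h; end
end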

2.2.3. Integration into a Learning Algorithm: Back-Propagation

NN-based estimation requires a learning algorithm, such as error correction, perceptron learning, Boltzmann learning, Hebbian rules, or back-propagation (BP). Among these, BP is one of the most popular network training algorithms because it is both simple and generally applicable [55]. The back-propagation algorithm trains an NN by applying the chain rule. The weights of the network are randomly initialized and then repeatedly adjusted through minimization of the cost function, which is calculated by measuring the difference between the actual output \hat{y}^{out} and the desired value y^{out}. One of the most common error measures is the root mean square (RMS) error, defined by
E = \frac{1}{2}\left(y^{out} - \hat{y}^{out}\right)^2    (10)
During the training process, the RMS error (cost function) is minimized as much as possible. The iterative step of the gradient descent algorithm then changes the weights w j according to
w_j \leftarrow w_j + \Delta w_j, \quad \text{where} \quad \Delta w_j = -\eta \frac{\partial E}{\partial w_j}    (11)
Here, \eta is the learning rate, which determines the influence of the gradient. There are a few alternative optimization techniques for finding the local minimum, such as conjugate gradient, steepest descent, and Newton's method; the following training functions implement the variants discussed in this paper (a MATLAB selection sketch follows the list).
  • Steepest descent is an iterative method that finds a local minimum by moving in the direction opposite to the gradient; the learning rate (as in traingda or traingdx) indicates how quickly it moves. Usually, the smaller the learning rate, the slower the convergence to a local minimum. In standard steepest descent, the learning rate remains constant throughout the training stage while the weight and bias values are iteratively updated.
  • The resilient back-propagation training algorithm (i.e., trainrp) is a local adaptive learning scheme for supervised batch learning in an FFNN. It is similar to the standard BP algorithm but can train the NN model faster than the regular method without requiring any free parameter values to be specified. Additionally, since trainrp disregards the magnitude of the partial derivatives, the direction of the weight update is affected only by the sign of the derivative.
  • The Levenberg-Marquardt algorithm (i.e., trainlm) is a standard method for solving nonlinear least-squares minimization problems without computing the Hessian matrix. It can be thought of as a middle ground between the steepest descent and Gauss-Newton methods.
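In MATLAB, these algorithms are selected through the network's trainFcn property, as in the sketch below; only the toolbox identifiers already cited above (traingda, traingdx, trainrp, trainlm) are assumed, the particular choice shown is illustrative, and X and ybar are reused from the earlier sketch.

% Sketch: selecting one of the training functions named above.
net = feedforwardnet(10);
net.trainFcn = 'trainlm';         % Levenberg-Marquardt; alternatives:
                                  % 'trainrp', 'traingda', 'traingdx'
net.trainParam.epochs = 1000;     % illustrative iteration limit
net = train(net, X, ybar);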

2.2.4. Generalization and Overfitting Issues

NN models are intended to generalize the information learned from the training data to unseen data, making it possible to draw inferences or predictions for the problem domain. Nevertheless, overfitting can occur, in which case a trained NN model works well on the training data but performs poorly on test data. Overfitting can happen when the model is too complicated or has too many parameters. To avoid this problem, the additional technique of "early stopping" is often used to improve the generalizability of the model: the training process is halted at an early iteration to prevent overfitting. Normally, to ensure the accuracy and efficiency of the NN model, the whole dataset is divided into three subsets: the training, validation, and test datasets. The training data are used for NN model construction, while the validation and test data are used to check the model's error and efficiency, respectively. The exact proportions of the training, validation, and test datasets are determined by the designer; the most widely used ratios are 50:25:25 and 60:20:20. In this technique, the optimal point at which to stop training is indicated by the minimum estimated true error, as shown in Figure 3 [56].
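In MATLAB, this division is controlled by the network's divideParam ratios, as in the following sketch; the 60:20:20 split is one of the common ratios mentioned above, and validation-based early stopping is the toolbox default behavior.

% Sketch: early stopping with a 60:20:20 random data division.
net = feedforwardnet(10);
net.divideFcn = 'dividerand';           % random train/val/test assignment
net.divideParam.trainRatio = 0.60;
net.divideParam.valRatio   = 0.20;
net.divideParam.testRatio  = 0.20;
net = train(net, X, ybar);              % halts when validation error
                                        % stops improving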

2.3. Proposed NN-Based Estimation Method 2: CFNN Structure

An alternative NN-based model, the cascade-forward neural network (CFNN), has been shown to improve model accuracy and learning speed in cases with complicated relationships. The CFNN is similar to the FFNN, except that it also includes a weighted connection between the input and output layers. This additional connection means that the inputs can directly affect the output, which increases the complexity of the model but also its accuracy. The CFNN likewise uses the BP algorithm to update the weights, but it has the drawback that each layer of neurons is connected to all previous layers of neurons [57]. The proposed CFNN-based modeling method is illustrated in Figure 4. The output of the CFNN is expressed as follows:
\hat{y}_{CFNN} = g_2\left(\sum_{o=1}^{l} q_o f_2\left(\sum_{i=1}^{k} r_{oi} x_i + c_o^{hid}\right) + \sum_{i=1}^{k} z_{1i} x_i + d\right)    (12)
Again, the hyperbolic tangent sigmoid and linear functions are usually used as the transfer functions in the hidden and output layers, respectively. In this case, the derived mean and standard deviation functions are expressed as
\hat{\mu}_{CFNN}(\mathbf{x}) = \sum_{o=1}^{l\_mean} q_o^{mean}\left(\frac{2}{1+\exp\left(-2\left(\sum_{i=1}^{k} r_{oi}^{mean} x_i + c_o^{mean}\right)\right)} - 1\right) + \sum_{i=1}^{k} z_{1i}^{mean} x_i + d_{mean}^{CFNN}    (13)

\hat{\sigma}_{CFNN}(\mathbf{x}) = \sum_{o=1}^{l\_std} q_o^{std}\left(\frac{2}{1+\exp\left(-2\left(\sum_{i=1}^{k} r_{oi}^{std} x_i + c_o^{std}\right)\right)} - 1\right) + \sum_{i=1}^{k} z_{1i}^{std} x_i + d_{std}^{CFNN}    (14)

where l_mean and l_std, c_o^{mean} and c_o^{std}, d_{mean}^{CFNN} and d_{std}^{CFNN}, q_o^{mean} and q_o^{std}, r_{oi}^{mean} and r_{oi}^{std}, and z_{1i}^{mean} and z_{1i}^{std} denote the number of hidden neurons, the bias at hidden node o, the bias at the output neuron, the weight connecting hidden node o to the output neuron, the weight connecting input factor x_i to hidden node o, and the weights directly linking input factor x_i to the output neuron of the CFNN for the mean and standard deviation, respectively.
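A CFNN counterpart of the FFNN sketch in Section 2.2.1 can be obtained with the toolbox function cascadeforwardnet, which adds the direct input-to-output weights of Equation (12); the hidden-layer sizes are illustrative assumptions, and X, ybar, and s are reused from the earlier sketch.

% Sketch: CFNN-based estimation of the dual responses (Equations (13)-(14)).
net_mu = cascadeforwardnet(17);     % as feedforwardnet, plus direct
net_mu = train(net_mu, X, ybar);    % input-to-output connections
net_sd = cascadeforwardnet(6);
net_sd = train(net_sd, X, s);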

2.4. Proposed NN-Based Estimation Method 3: RBFN

RBFN generally gives excellent, fast approximations for curve-fitting problems. This multilayer feed-forward NN has a single hidden layer and uses radial basis activation functions in the hidden neurons. The radial basis function (kernel) is a function whose value depends on the distance between the input value and a center point; the centers and widths can be determined by methods such as random selection, k-means clustering, supervised selection, and unsupervised learning [58]. Unlike in an FFNN, the net input of a hidden neuron is given by the product of its bias and the Euclidean distance between the input vector and its weight (center) vector. Figure 5 shows the RBFN structure used to estimate the process mean and standard deviation functions.
In this study, the design of an RBFN requires a two-step training procedure. First, the kernel positions, kernel widths, and weights linking the hidden layer to the output layer nodes are estimated using an unsupervised LMS algorithm. After this initial solution is obtained, a supervised gradient-based algorithm can be used to refine the network parameters. The dispersion constant, or spread, strongly affects the output of the network. Specifically, when the spread is large, the radial basis function covers a large input region; when the spread is relatively narrow, the radial basis function curve is steeper, and the neuron responds much more strongly to input vectors close to its weight vector. As a result, the network output is closer to the expected output [59]. The output of the RBFN is expressed as
\hat{y}_{out} = \sum_{p=1}^{m} u_p \exp\left(-\frac{\left(\left\|\mathbf{v}_p - \mathbf{x}\right\| e_p\right)^2}{2\sigma^2}\right) + f    (15)
In this case, the estimated mean and standard deviation functions are
\hat{\mu}_{RBFN}(\mathbf{x}) = \sum_{p=1}^{m\_mean} u_p^{mean} \exp\left(-\frac{\left(\left\|\mathbf{v}_p^{mean} - \mathbf{x}\right\| e_p^{mean}\right)^2}{2\sigma^2}\right) + f_{mean}^{RBFN}    (16)

\hat{\sigma}_{RBFN}(\mathbf{x}) = \sum_{p=1}^{m\_std} u_p^{std} \exp\left(-\frac{\left(\left\|\mathbf{v}_p^{std} - \mathbf{x}\right\| e_p^{std}\right)^2}{2\sigma^2}\right) + f_{std}^{RBFN}    (17)

where m_mean and m_std, e_p^{mean} and e_p^{std}, f_{mean}^{RBFN} and f_{std}^{RBFN}, u_p^{mean} and u_p^{std}, and v_{pi}^{mean} and v_{pi}^{std} denote the number of hidden neurons, the bias at hidden node p, the bias at the output neuron, the weight connecting hidden node p to the output, and the weight connecting input factor x_i to hidden node p of the RBFN for the mean and standard deviation, respectively.
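In MATLAB, an RBFN of this form can be constructed with newrb, whose fourth argument is the spread discussed above; the goal and spread values below are illustrative assumptions, and X, ybar, and s are reused from the earlier sketches.

% Sketch: RBFN-based estimation with newrb (Equations (16)-(17)).
goal = 0;  spread = 0.6;                 % error goal and spread (assumed)
net_mu = newrb(X, ybar, goal, spread);   % adds neurons until goal is met
net_sd = newrb(X, s, goal, spread);
mu_hat = sim(net_mu, X);                 % evaluate the fitted RBFN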

3. Simulation Studies

A variety of simulation-based examples are presented to demonstrate the efficacy of the NN-based RD modeling methods. Assume that a given exact function representing the true relationship between the input factors and output response is represented as
y = x_2^2 - 4 x_2 \exp\left(-2 x_1^2 - 3 x_2^2 - x_1 x_2 + 4\right) + 110    (18)
A factorial design is used to evaluate how two factors, each with five levels, affect one response. The related experimental data are exhibited in Table 1, and the true relationship function is given in Equation (18). Figure 6 shows a plot of the experimental data and the true response values. The replicated response values are created at each treatment level by randomly adding deviations to the true response value. In this paper, two different simulation studies are conducted to check the efficiency of the proposed models.
In RD, the expected quality loss (EQL) is often used as a critical optimization criterion to compare different methods. The EQL is given by
EQL = \theta\left[\left(\hat{\mu}(\mathbf{x}) - \tau\right)^2 + \hat{\sigma}^2(\mathbf{x})\right]    (19)

where \theta denotes the loss coefficient (normally \theta = 1), and \hat{\mu}(\mathbf{x}), \hat{\sigma}(\mathbf{x}), and \tau represent the estimated mean function, the estimated standard deviation function, and the target value, respectively. In both simulation studies, the target value is defined as 128 (\tau_1 = 128).
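Given fitted mean and standard deviation models (e.g., net_mu and net_sd from the sketches in Section 2), the EQL of Equation (19) can be minimized over a grid of candidate settings, as in the following MATLAB sketch; the grid resolution is an assumption.

% Sketch: grid search for the setting that minimizes EQL (Equation (19)).
theta = 1;  tau = 128;                    % loss coefficient and target
[g1, g2] = meshgrid(-1:0.01:1);           % candidate settings, coded units
G = [g1(:)'; g2(:)'];
eql = theta*((net_mu(G) - tau).^2 + net_sd(G).^2);
[eql_min, idx] = min(eql);
x_opt = G(:, idx);                        % optimal setting under the model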

3.1. Simulation Study 1

Simulation study 1 was conducted by adding random noise with a small standard deviation, \sigma_{noise,i\_small} = e^{0.35 x_1 - 0.35 x_2 + 2}, to the true response to generate 50 replicates of the output response. The experimental data from simulation study 1 are presented in Table 2.
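For concreteness, the following MATLAB sketch generates data of this kind, assuming the reconstructed forms of Equation (18) and of the noise standard deviation above; it is illustrative and does not reproduce the paper's exact data.

% Sketch: 50 noisy replicates per design point for simulation study 1.
ytrue = @(x1,x2) x2.^2 - 4*x2.*exp(-2*x1.^2 - 3*x2.^2 - x1.*x2 + 4) + 110;
sig_s = @(x1,x2) exp(0.35*x1 - 0.35*x2 + 2);     % small-noise std dev
[x1g, x2g] = meshgrid(-1:0.5:1);                 % 5x5 factorial design
y_rep = ytrue(x1g(:), x2g(:)) + sig_s(x1g(:), x2g(:)).*randn(25, 50);
ybar = mean(y_rep, 2);   s = std(y_rep, 0, 2);   % per-run mean and std dev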
The approximated mean and standard deviation functions shown in Equations (20) and (21) were estimated using the LSM-based RSM method in MATLAB software.
\hat{\mu}_{LSM}(\mathbf{x}) = 103.5282 + 3.3709 x_1 - 15.4188 x_2 - 2.7589 x_1^2 - 0.4410 x_2^2 - 0.6280 x_1 x_2    (20)

\hat{\sigma}_{LSM}(\mathbf{x}) = 4.7239 + 0.8883 x_1 - 0.7607 x_2 + 0.1752 x_1^2 - 0.0442 x_2^2 + 0.0713 x_1 x_2    (21)
Information about the trained multilayer feed-forward NN model, including the transfer function and NN architecture (number of inputs, number of hidden neurons, number of outputs), is given in Table 3 and Table 4. Furthermore, the weight and bias values associated with each layer of the three proposed NN models (FFNN, CFNN, and RBFN) for both the process mean and standard deviation function estimations are exhibited in Appendix A (Table A1, Table A2, Table A3 and Table A4, respectively).
The conventional LSM-based RSM method is compared to the proposed FFNN-, CFNN-, and RBFN-based RD modeling methods in Figure 7, Figure 8, Figure 9 and Figure 10 in contour plot form. Furthermore, Table 5 shows the ultimate solution set and corresponding best response values, while the comparison is done in terms of EQL.
The results in Table 5 clearly show that the proposed RD modeling approach with the three NN architectures produces much smaller EQL values than the approach using LSM-based RSM. Specifically, both the process bias and variance values obtained from the proposed NN-based model are significantly lower than those of LSM-based RSM. The squared process bias vs. variance results of the conventional LSM-based RSM and proposed FFNN-, CFNN-, and RBFN-based modeling methods in simulation study 1 are illustrated in Figure 11. The optimal settings are marked with green stars in each figure.

3.2. Simulation Study 2

Simulation study 2 was conducted by adding random noise with a large standard deviation, \sigma_{noise,i\_large} = e^{0.35 x_1 - 0.35 x_2 + 5}, to the true response to generate 50 replicates. The experimental data of simulation study 2 are presented in Table 6. The estimated mean and standard deviation functions given by LSM-based RSM in simulation study 2 are

\hat{\mu}_{LSM}(\mathbf{x}) = 103.7401 + 2.8205 x_1 - 16.0324 x_2 - 4.7497 x_1^2 + 2.4012 x_2^2 + 2.6455 x_1 x_2    (22)

\hat{\sigma}_{LSM}(\mathbf{x}) = 21.5666 + 3.4305 x_1 - 3.5459 x_2 - 0.8637 x_1^2 + 0.2259 x_2^2 - 0.7160 x_1 x_2    (23)
Similarly, detailed information about the multilayer FFNN is given in Table 7 and Table 8, while the weight and bias values of all proposed models for both the process mean and standard deviation functions are summarized in Appendix B (Table A5, Table A6, Table A7 and Table A8). Contour plots of the process mean and standard deviation response values for each model are displayed in Figure 12, Figure 13, Figure 14 and Figure 15, respectively.
Table 9 presents the optimal input factor settings, associated process mean, process bias, process variance, and EQL values obtained from the proposed NN-based modeling methods and LSM-based RSM in simulation study 2. The proposed NN-based modeling methods again produced significantly smaller EQL values than LSM-based RSM. Plots of squared process bias vs. variance for LSM-based RSM and the proposed FFNN-, CFNN-, and RBFN-based modeling methods are illustrated in Figure 16. The optimal settings are marked with green stars in each figure.
Clearly, the proposed NN-based modeling methods produced better solutions than LSM-based RSM in both simulation studies. Whereas RSM is generally used to estimate second-order functions, the proposed modeling methods can effectively estimate nonlinear functions.

4. Case Study

The printing data example used by Vining and Myers [6] and Lin and Tu [9] was selected to demonstrate the application of the proposed methods. The printing experiment investigates the effects of speed (x_1), pressure (x_2), and distance (x_3) on a printing machine's ability to apply colored ink to a package (y). The case study uses a factorial design with three factors at three levels, so the total number of experimental runs is 3^3 = 27. To ensure the accuracy of the experiment, each treatment combination is replicated three times. In this case study, the target is \tau_2 = 500. The experimental data are given in Vining and Myers [6]. The estimated mean and standard deviation functions given by LSM-based RSM are
\hat{\mu}_{LSM}(\mathbf{x}) = 327.6296 + \mathbf{x}^{T}\mathbf{a}_{LSM} + \mathbf{x}^{T}\mathbf{A}_{LSM}\mathbf{x}    (24)

\hat{\sigma}_{LSM}(\mathbf{x}) = 34.8832 + \mathbf{x}^{T}\mathbf{b}_{LSM} + \mathbf{x}^{T}\mathbf{B}_{LSM}\mathbf{x}    (25)

where \mathbf{x} = \begin{bmatrix} x_1 & x_2 & x_3 \end{bmatrix}^{T},

\mathbf{a}_{LSM} = \begin{bmatrix} 177.0001 \\ 109.4259 \\ 131.4630 \end{bmatrix}, \quad \mathbf{A}_{LSM} = \begin{bmatrix} 32.0001 & 66.0278 & 75.4721 \\ 66.0278 & 6.02780 & 43.5833 \\ 75.4721 & 43.5833 & 3.58330 \end{bmatrix},

\mathbf{b}_{LSM} = \begin{bmatrix} 11.5268 \\ 15.3231 \\ 29.1904 \end{bmatrix}, \quad \mathbf{B}_{LSM} = \begin{bmatrix} 4.2038 & 7.7195 & 5.1093 \\ 7.7195 & 0.71954 & 14.082 \\ 5.1093 & 14.082 & 16.778 \end{bmatrix}.
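For illustration, the quadratic models of Equations (24) and (25) can be evaluated at a coded setting as follows; the coefficient values are copied from the text as printed, and the setting x is an arbitrary example.

% Sketch: evaluating the case-study models of Equations (24)-(25).
a = [177.0001; 109.4259; 131.4630];
A = [32.0001 66.0278 75.4721; 66.0278 6.0278 43.5833; 75.4721 43.5833 3.5833];
b = [11.5268; 15.3231; 29.1904];
B = [4.2038 7.7195 5.1093; 7.7195 0.71954 14.082; 5.1093 14.082 16.778];
x = [0.5; -0.5; 1.0];                  % example coded setting (assumed)
mu_hat = 327.6296 + x'*a + x'*A*x;     % estimated process mean
sd_hat = 34.8832  + x'*b + x'*B*x;     % estimated standard deviation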
The information used for the RD modeling methods after training, with the associated transfer functions, training functions, NN architectures (number of inputs, number of hidden neurons, number of outputs), and number of epochs in the case study, is summarized in Table 10 and Table 11. The specified weight and bias values of the proposed FFNN, CFNN, and RBFN used to approximate the process mean and standard deviation functions in the case study are shown in Appendix C (Table A9, Table A10, Table A11, Table A12, Table A13 and Table A14).
The contour plots of the response functions for the process mean and standard deviation estimated by the LSM-based RSM, FFNN, CFNN, and RBFN robust design modeling methods are demonstrated in Figure 17, Figure 18, Figure 19 and Figure 20, respectively.
Table 12 presents the optimal factor settings, associated process mean, process bias, process variance, and EQL values obtained from the proposed NN-based modeling methods and LSM-based RSM in the case study. The proposed RD modeling methods produced significantly smaller EQL values than LSM-based RSM.
The process variance obtained from the proposed NN-based estimation methods is markedly lower than that of conventional RSM. Scatter plots of squared process bias vs. variance for the traditional LSM-based RSM model and the three proposed NN-based modeling methods are shown in Figure 21. The optimal settings are marked with green stars in each figure.

5. Conclusions and Further Studies

This paper identified three NN-based modeling approaches (FFNN, CFNN, and RBFN) that obviate the need for the assumptions required when LSM-based RSM is used to approximate the mean and standard deviation functions. The feed-forward NN structure-based RD modeling methods are alternative options for identifying the functional relationship between the input factors and the process mean and standard deviation in RD. Compared with the conventional RD modeling method, the proposed approach has significant advantages with regard to accuracy and efficiency, and the proposed RD modeling methods can easily be implemented using existing software such as MATLAB. The results of both simulation studies and the case study show that the proposed RD modeling methods provide better optimal solutions than conventional LSM-based RSM. The main results are summarized below; the variability indices, EQL, and R² are central to the model validation criteria. (i) In simulation study 1, the proposed RD modeling methods showed, on average, 139 times lower process bias and 1.5 times lower process variance than LSM-based RSM in terms of variability, and an EQL about four times smaller. Among the three NN-based modeling methods, the FFNN showed the lowest EQL. (ii) In simulation study 2, the proposed RD modeling methods showed, on average, 11 times lower process bias and process variance on par with LSM-based RSM in terms of variability, and an EQL about 1.2 times smaller. Among the three NN-based modeling methods, the FFNN again showed the lowest value. (iii) In the case study, the proposed RD modeling methods produced markedly better results than LSM-based RSM, with process bias and variance values that were, on average, 102 and 85 times lower, respectively, and an EQL about 86 times smaller. Among the three NN-based modeling methods, the CFNN and FFNN showed much lower values than the RBFN. (iv) Specifically, in the printing machine case study, the R² values for the estimated standard deviation functions are 45.452%, 73.51%, 69.47%, and 99.99% when the conventional LSM-based RSM, FFNN-based, CFNN-based, and RBFN-based modeling methods are applied, respectively.
In future work, the proposed NN structure-based RD methods could be extended to multiple responses (the RD multi-objective optimization problem), time-series data, big data, and simulation data [60]. However, the activation and transfer functions in the hidden and output layers should be carefully investigated and selected separately when the methods are applied. In addition, this study used a classical case study to verify the proposed methodology; in future work, a field-based case study that suggests optimal process conditions for productivity improvement in a smart factory will be performed.

Author Contributions

Conceptualization, T.-H.L. and S.S.; Methodology, T.-H.L. and S.S.; Modeling, T.-H.L.; Validation T.-H.L. and S.S.; Writing—Original Draft Preparation, T.-H.L. and S.S.; Writing—Review and Editing, L.D., H.J. and S.S.; Funding Acquisition, H.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Research Foundation of Korea (NRF) (No. NRF-2019R1G1A1010335).

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A. Weight and Bias Values of the Proposed Neural Network Structure in Simulation Study 1

Table A1. Estimated parameter values (weight and bias) of the proposed FFNN for simulation study 1.
a. Mean Function
Weights | Biases
W_FFNN^mean  (t_FFNN^mean)^T  a_FFNN^mean  b_FFNN^mean
3.6550  4.7000  −0.5320  −5.6150  −0.3370
4.6370  3.2890  0.7650  −5.1200
−3.6880  4.6390  0.3910  4.1080
5.4630  1.9730  −0.1880  −3.5370
1.3880  −5.7220  0.2690  −2.5120
−4.3340  3.7490  −0.2760  2.2050
−2.5860  5.1880  −0.1570  1.3530
1.4470  5.5880  −0.3560  −0.7180
4.9970  2.8280  0.1800  −0.0290
5.1210  2.5860  −0.1410  0.8320
−5.5550  −1.5900  −0.4130  −1.4570
5.4750  1.8190  −0.2640  2.1750
4.6360  −3.3980  0.0280  2.9310
−0.4150  5.7380  0.4640  −3.6420
3.1110  −4.8490  0.0690  4.3330
−4.8490  −2.9730  −0.0830  −5.0940
−1.0380  −5.9250  −0.5680  −5.5340
b. Standard Deviation Function
Weights | Biases
W_FFNN^std  (t_FFNN^std)^T  a_FFNN^std  b_FFNN^std
2.3640  −2.2900  0.6380  −3.2240  0.5100
3.1150  3.1890  0.2420  −1.9530
−1.9210  1.7710  0.1340  2.8660
0.7590  5.2410  −0.0100  1.6240
1.4680  −3.0400  0.4510  1.2370
0.8010  2.7570  0.1990  −3.3430
Table A2. Estimated parameter values (weight and bias) of the proposed CFNN for the mean function in simulation study 1.
Weights | Biases
R_CFNN^mean  (z_CFNN^mean)^T  (q_CFNN^mean)^T  c_CFNN^mean  d_CFNN^mean
3.4680  4.5940  0.5890  −5.7880  −0.3310
4.7000  3.4200  −0.1830  −5.0120
−3.5790  4.9050  0.5820  3.9430
5.2270  2.2140  −0.1130  −3.7440
1.5730  −5.5360  0.2110  −2.9200
−4.3610  3.8520  −0.5380  2.0050
−2.6110  5.1060  −0.2640  1.4290
1.3240  5.6320  −0.2540  −0.3850  −0.8230
5.0350  2.7780  0.3700  −0.2030  0.3890
5.1120  2.6770  0.0370  0.7280
−5.4880  −1.8040  −0.0510  −1.4370
5.4600  1.8770  0.1170  2.1230
4.5330  −3.4080  0.3510  3.1740
−0.6110  5.8300  0.0350  −3.4760
3.1690  −4.8040  −0.3720  4.4170
−4.9220  −3.0140  −0.0030  −5.0410
−0.9670  −5.6650  −0.5560  −5.7940
Table A3. Estimated parameter values (weight and bias) of the proposed CFNN for the standard deviation function in simulation study 1.
Weights | Biases
R_CFNN^std  (z_CFNN^std)^T  (q_CFNN^std)^T  c_CFNN^std  d_CFNN^std
1.9650  −1.2350  −0.1600
0.1370
0.9570  −3.1820  0.9310
2.8420  −1.4330  0.3030  1.1660
−2.7860  −0.2660  −0.3060  0.4420
1.7340  3.2990  −0.4380  1.9590
0.6130  3.8920  0.1590  3.1660
Table A4. Estimated parameter values (weight and bias) of the proposed RBFN in simulation study 1.
a. Mean Function | b. Standard Deviation Function
Weights  Biases | Weights  Biases
V_RBFN^mean  (u_RBFN^mean)^T  e_RBFN^mean  f_RBFN^mean | V_RBFN^std  (u_RBFN^std)^T  e_RBFN^std  f_RBFN^std
0.0  −0.5  126.7450  1.3870  31.5280 | 0.5  −0.5  6.8760  1.3870  33.6620
0.5  1.0  −29.6660  1.3870 | 0.5  1.0  −3.6160  1.3870
−1.0  0.5  9.1310  1.3870 | −1.0  −0.5  3.9420  1.3870
1.0  −0.5  −35.0720  1.3870 | −1.0  1.0  −28.7180  1.3870
−1.0  −1.0  55.8470  1.3870 | 1.0  −1.0  −24.2000  1.3870
1.0  1.0  72.6050  1.3870 | 1.0  0.5  0.1440  1.3870
−0.5  1.0  7.0470  1.3870 | −0.5  −0.5  8.5470  1.3870
−0.5  0.5  −52.0380  1.3870 | 0.0  0.5  −11.9050  1.3870
−1.0  0.0  31.4830  1.3870 | −0.5  −1.0  0.5760  1.3870
1.0  −1.0  77.7170  1.3870 | −1.0  0.5  5.4580  1.3870
1.0  0.0  79.6550  1.3870 | 0.5  0.5  11.8210  1.3870
−1.0  1.0  56.6220  1.3870 | 0.0  0.0  −4.1700  1.3870
0.5  0.5  43.0670  1.3870 | 0.0  1.0  −16.5340  1.3870
0.0  1.0  92.4890  1.3870 | 1.0  1.0  −23.5990  1.3870
−1.0  −0.5  13.2020  1.3870 | 1.0  0.0  −21.1190  1.3870
−0.5  −0.5  −60.8050  1.3870 | 0.0  −1.0  −18.9870  1.3870
0.0  0.5  −59.6330  1.3870 | 0.5  0.0  −9.2400  1.3870
−0.5  0.0  65.2090  1.3870 | −1.0  0.0  −25.1250  1.3870
−0.5  −1.0  11.5900  1.3870 | 0.0  −0.5  −9.8510  1.3870
0.0  0.0  −2.2410  1.3870 | −0.5  0.0  −7.7680  1.3870
0.5  −1.0  −31.3730  1.3870 | 1.0  −0.5  2.7330  1.3870
1.0  0.5  −27.4060  1.3870 | −1.0  −1.0  −27.3400  1.3870
0.5  −0.5  51.3100  1.3870 | 0.5  −1.0  0.6880  1.3870
0.5  0.0  −51.9210  1.3870 | −0.5  0.5  6.3400  1.3870

Appendix B. Weight and Bias Values of the Proposed Neural Network Structure in Simulation Study 2

Table A5. Estimated parameter values (weight and bias) of the proposed FFNN in simulation study 2.
a. Mean Function | b. Standard Deviation Function
Weights  Biases | Weights  Biases
W_FFNN^mean  (t_FFNN^mean)^T  a_FFNN^mean  b_FFNN^mean | W_FFNN^std  (t_FFNN^std)^T  a_FFNN^std  b_FFNN^std
2.0570  3.9280  0.7630  −4.7780  0.0530 | 1.7820  1.8550  −0.3890  −1.6150  0.6640
2.4000  3.9130  0.4830  −3.4680 | 2.2310  −0.7030  0.5740  0.0480
−4.5010  −0.4190  0.5650  3.0550 | −0.2220  −1.8100  1.0090  −2.9250
2.9150  3.7590  0.2290  −1.3400
1.4850  −4.3180  0.9410  −1.3350
−4.6270  −0.2660  0.6220  −0.1490
−2.2500  3.9870  −0.6890  −1.0590
0.7070  4.5580  0.1550  1.5870
2.6160  3.8280  0.5310  2.4090
4.3540  1.5560  −0.9940  4.0990
−2.6170  −3.7890  −1.3930  −4.6190
Table A6. Estimated parameter values (weight and bias) of the proposed CFNN for the mean function in simulation study 2.
Weights | Biases
R_CFNN^mean  (z_CFNN^mean)^T  (q_CFNN^mean)^T  c_CFNN^mean  d_CFNN^mean
5.1170  −0.1120  0.3710
0.8340
−0.3840  −4.9710  0.0220
4.2260  2.9460  0.2340  −4.0130
−3.7930  −3.5990  0.0640  3.0950
4.5130  −0.8420  −0.0300  −3.2350
1.8880  4.7280  −0.2380  −1.5650
−4.2200  2.7300  0.0610  0.9880
−1.7950  4.8310  −0.4210  0.2780
1.5620  4.7170  −0.5560  0.9210
3.5040  −3.6720  0.5150  1.6860
4.1320  2.8270  0.2500  2.6160
−3.2120  4.0320  0.5630  −3.1850
4.8210  1.5590  −0.5310  4.0190
4.4480  2.5990  0.2130  4.9300
Table A7. Estimated parameter values (weight and bias) of the proposed CFNN for the standard deviation function in simulation study 2.
Weights | Biases
R_CFNN^std  (z_CFNN^std)^T  (q_CFNN^std)^T  c_CFNN^std  d_CFNN^std
1.2450  0.7210  0.5230
−0.5240
−0.0310  0.2590  −0.0380
Table A8. Estimated parameter values (weight and bias) of the proposed RBFN in simulation study 2.
a. Mean Function | b. Standard Deviation Function
Weights  Biases | Weights  Biases
V_RBFN^mean  (u_RBFN^mean)^T  e_RBFN^mean  f_RBFN^mean | V_RBFN^std  (u_RBFN^std)^T  e_RBFN^std  f_RBFN^std
0.0  −0.5  64.8980  1.6650  38.201 | 0.5  −0.5  −28.2270  1.3870  −1.6450
0.5  1.0  −6.6220  1.6650 | −0.5  0.5  8.2530  1.3870
−1.0  0.5  17.0920  1.6650 | 1.0  1.0  21.6710  1.3870
1.0  −0.5  −8.6860  1.6650 | −1.0  −1.0  34.5170  1.3870
−1.0  −1.0  34.9310  1.6650 | 1.0  −1.0  9.1260  1.3870
1.0  0.5  −21.9170  1.6650 | −0.5  −1.0  −17.9100  1.3870
−0.5  1.0  25.6960  1.6650 | 1.0  0.0  −5.4440  1.3870
0.5  −1.0  10.9180  1.6650 | 0.0  −1.0  22.5250  1.3870
−1.0  0.0  26.8410  1.6650 | 0.5  1.0  −3.2470  1.3870
−1.0  1.0  31.4250  1.6650 | −1.0  0.0  32.0960  1.3870
−0.5  −1.0  34.7400  1.6650 | −1.0  1.0  17.5940  1.3870
1.0  1.0  62.8200  1.6650 | 0.0  0.5  4.8550  1.3870
0.5  0.0  −47.0190  1.6650 | −0.5  0.0  −11.1920  1.3870
1.0  −1.0  40.4510  1.6650 | 0.0  1.0  12.6800  1.3870
1.0  0.0  63.1560  1.6650 | −1.0  −0.5  −25.7220  1.3870
0.0  1.0  55.4460  1.6650 | 1.0  0.5  4.0170  1.3870
−0.5  −0.5  −4.6920  1.6650 | 0.5  0.5  −7.4010  1.3870
0.5  −0.5  62.5500  1.6650 | −0.5  −0.5  20.5440  1.3870
−0.5  0.0  17.0590  1.6650 | 1.0  −0.5  26.7630  1.3870
−0.5  0.5  −11.3100  1.6650 | 0.5  0.0  27.5990  1.3870
−1.0  −0.5  21.5900  1.6650 | 0.5  −1.0  13.4280  1.3870
0.5  0.5  52.6940  1.6650 | −1.0  0.5  −12.6070  1.3870
0.0  0.5  −59.7970  1.6650 | −0.5  1.0  −3.6890  1.3870
0.0  0.0  35.7290  1.6650 | 0.0  −0.5  3.9230  1.3870

Appendix C. Weight and Bias Values of the Proposed Neural Network Structure in the Case Study

Table A9. Estimated parameter values (weight and bias) of the FFNN for the mean function in the case study.
Weights | Biases
W_FFNN^mean  (t_FFNN^mean)^T  a_FFNN^mean  b_FFNN^mean
1.5660  1.9450  1.1790  0.3610  −3.0880  −0.1520
1.7450  −1.3140  2.0400  0.4270  −2.0620
−1.6810  2.2590  0.6690  0.2180  1.4730
1.5770  1.7570  −1.5510  0.1660  −0.9160
1.1460  −0.0730  2.6140  0.0510  0.1540
−1.7480  1.4660  1.7830  −0.0630  −0.4130
−1.3760  −2.3660  0.9550  −0.1300  −1.4310
0.5280  −0.8930  2.4800  0.2700  2.4590
1.9980  1.7550  1.2700  −0.0220  2.8680
Table A10. Estimated parameter values (weight and bias) of the FFNN for the standard deviation function in the case study.
Weights | Biases
W_FFNN^std  (t_FFNN^std)^T  a_FFNN^std  b_FFNN^std
1.9520  −2.0810  0.9550  −0.2470  −3.0060  −0.3080
1.4520  1.6590  −1.9370  −0.5730  −2.5620
−1.6750  1.9270  1.5750  −0.0660  1.6940
1.9030  −0.2630  2.2310  0.1620  −1.0300
0.9430  2.3320  1.6200  0.1020  −0.3810
−2.0160  −1.7170  1.3500  −0.2570  −0.1860
−1.9130  −0.4820  2.2770  0.0650  −0.9540
0.2000  2.8330  −0.8230  0.1420  1.7220
2.7580  1.5070  0.6520  0.3000  1.9940
1.497  1.726  −1.141  −0.938  3.480
Table A11. Estimated parameter values (weight and bias) of the CFNN for the mean function in the case study.
Weights | Biases
R_CFNN^mean  (z_CFNN^mean)^T  (q_CFNN^mean)^T  c_CFNN^mean  d_CFNN^mean
1.7480  2.4100  1.0820  0.5820
0.1600
0.6080
−0.0060  −3.2030  0.1530
2.6590  −0.1570  1.7430  −0.1080  −2.6440
−2.1980  1.7500  1.4620  −0.2100  2.0920
2.3100  −1.9110  −0.7440  0.0770  −1.6370
1.9620  −0.9270  2.3510  −0.0110  −0.8430
−1.9680  1.8610  −1.6700  −0.0560  0.2480
−1.5170  2.3360  1.6270  0.4240  −0.2960
0.2480  2.1750  −2.3370  0.0730  0.8240
2.7660  0.9810  −1.3500  −0.0010  1.3920
1.8210  −1.8120  −1.8440  0.2630  2.0910
−1.6870  1.7400  −2.0190  0.2820  −2.6670
2.1030  1.8880  1.2460  −0.5660  3.3270
Table A12. Estimated parameter values (weight and bias) of the CFNN for the standard deviation function in the case study.
Weights | Biases
R_CFNN^std  (z_CFNN^std)^T  (q_CFNN^std)^T  c_CFNN^std  d_CFNN^std
2.0800  2.0450  1.7150  −0.4480
−0.6100
0.7170
0.8060  −2.7520  −0.0920
1.3360  2.1850  1.1350  0.0190  −3.1570
−2.1980  −0.2670  1.5240  0.0550  2.4750
1.6410  1.7640  2.2460  −0.5140  −1.7790
0.9380  −2.5720  1.7050  0.1360  −0.7120
−3.1130  −0.7290  0.9550  −0.4490  0.6660
0.8410  2.7640  1.8200  0.4000  −1.2890
0.9370  2.2670  −1.8640  0.8030  0.9310
1.7920  2.1840  1.0390  −0.0180  1.7790
2.2030  0.7220  −2.3830  0.0520  2.2580
−1.8960  −2.6360  −0.6560  0.0710  −2.9400
Table A13. Estimated parameter values (weight and bias) of the RBFN for the mean function in the case study.
Weights | Biases
V_RBFN^mean  (u_RBFN^mean)^T  e_RBFN^mean  f_RBFN^mean
1.000  1.000  1.000  804.3300  1.6650  124.7800
1.000  1.000  0.000  513.9900  1.6650
1.000  0.000  1.000  442.760  1.6650
0.000  0.000  1.000  288.7100  1.6650
1.000  0.000  0.000  274.6200  1.6650
0.000  1.000  0.000  219.6200  1.6650
1.000  −1.000  1.000  250.5800  1.6650
0.000  1.000  1.000  287.6200  1.6650
1.000  0.000  −1.000  185.9900  1.6650
1.000  −1.000  0.000  195.1100  1.6650
0.000  0.000  0.000  193.3600  1.6650
−1.000  1.000  0.000  121.6500  1.6650
0.000  1.000  −1.000  110.810  1.6650
−1.000  −1.000  1.000  89.9600  1.665
1.000  1.000  −1.000  94.2940  1.6650
1.000  −1.000  −1.000  64.2240  1.6650
0.000  −1.000  1.000  75.5390  1.6650
−1.000  0.000  1.000  45.0000  1.6650
−1.000  0.000  0.000  27.0270  1.6650
−1.000  1.000  1.000  21.3410  1.6650
0.000  −1.000  −1.000  0.0000  1.6650
−1.000  1.000  −1.000  −25.8380  1.6650
0.000  0.000  −1.000  −18.8420  1.6650
−1.000  0.000  −1.000  −33.2240  1.6650
−1.000  −1.000  0.000  −42.9520  1.6650
0.000  −1.000  0.000  −53.0300  1.6650
−1.000  −1.000  −1.000  −95.8940  1.6650
Table A14. Estimated parameter values (weight and bias) of the RBFN for the standard deviation function in the case study.
Weights | Biases
V_RBFN^std  (u_RBFN^std)^T  e_RBFN^std  f_RBFN^std
1.000  0.000  1.000  158.1600  2.7750  −0.0810
1.000  1.000  1.000  142.3900  2.7750
0.000  1.000  1.000  138.8700  2.7750
−1.000  −1.000  1.000  133.8800  2.7750
1.000  0.000  0.000  92.4780  2.7750
0.000  1.000  0.000  88.5840  2.7750
0.000  0.000  −1.000  80.4730  2.7750
−1.000  1.000  0.000  63.4950  2.7750
−1.000  1.000  1.000  55.4860  2.7750
0.000  0.000  1.000  44.5590  2.7750
1.000  −1.000  −1.000  42.8840  2.7750
1.000  −1.000  0.000  32.9120  2.7750
−1.000  0.000  1.000  29.4130  2.7750
−1.000  1.000  −1.000  27.6230  2.7750
1.000  1.000  −1.000  23.6910  2.7750
0.000  −1.000  1.000  23.4430  2.7750
1.000  1.000  0.000  21.0030  2.7750
1.000  −1.000  1.000  18.5040  2.7750
0.000  −1.000  0.000  17.7250  2.7750
1.000  0.000  −1.000  16.1390  2.7750
−1.000  0.000  0.000  15.0480  2.7750
−1.000  −1.000  −1.000  12.5660  2.7750
0.000  −1.000  −1.000  8.3970  2.7750
0.000  1.000  −1.000  4.6000  2.7750
−1.000  0.000  −1.000  3.4830  2.7750
0.000  0.000  0.000  −0.0720  2.7750

Appendix D. Summary of Abbreviations and Main Variables

Table A15. List of symbols.
Division  Description
BIC  Bayesian information criterion
BP  Back-propagation
CCD  Central composite design
CFNN  Cascade-forward back-propagation neural network
CNN  Convolutional neural network
DoE  Design of experiments
DR  Dual response
EQL  Expected quality loss
FFNN  Feed-forward back-propagation neural network
LSM  Least-squares method
MLE  Maximum likelihood estimation
MSE  Mean squared error
NN  Neural network
OA  Orthogonal array
RBFN  Radial basis function network
RD  Robust design
RMS  Root mean square
RSM  Response surface methodology
WLS  Weighted least-squares
x  Input factor
x  Vector of input factors
z  Noise factors
y  Output response
y  Vector of output responses
ȳ  Mean of observed data
s  Standard deviation of observed data
s²  Variance of observed data
ε  Error
τ  Desired target value of a quality characteristic
LSM
  • \hat{\mu}_{LSM}(\mathbf{x}): Estimated mean response function by LSM
  • \hat{\sigma}_{LSM}(\mathbf{x}): Estimated standard deviation response function by LSM
FFNN
  • \hat{\mu}_{FFNN}(\mathbf{x}): Estimated mean response function by FFNN
  • \hat{\sigma}_{FFNN}(\mathbf{x}): Estimated standard deviation response function by FFNN
  • \hat{y}_{FFNN}: Output of the FFNN
CFNN
  • \hat{\mu}_{CFNN}(\mathbf{x}): Estimated mean response function by CFNN
  • \hat{\sigma}_{CFNN}(\mathbf{x}): Estimated standard deviation response function by CFNN
  • \hat{y}_{CFNN}: Output of the CFNN
RBFN
  • \hat{\mu}_{RBFN}(\mathbf{x}): Estimated mean response function by RBFN
  • \hat{\sigma}_{RBFN}(\mathbf{x}): Estimated standard deviation response function by RBFN
  • \hat{y}_{RBFN}: Output of the RBFN

References

  1. Taguchi, G. Introduction to Quality Engineering: Designing Quality into Products and Processes; UNIPUB/Kraus International: New York, NY, USA, 1986.
  2. Box, G.; Bisgaard, S.; Fung, C. An explanation and critique of Taguchi's contributions to quality engineering. Qual. Reliab. Eng. Int. 1988, 4, 123–131.
  3. Leon, R.V.; Shoemaker, A.C.; Kackar, R.N. Performance measures independent of adjustment: An explanation and extension of Taguchi's signal-to-noise ratios. Technometrics 1987, 29, 253–265.
  4. Box, G. Signal-to-noise ratios, performance criteria, and transformations. Technometrics 1988, 30, 1–17.
  5. Nair, V.N.; Abraham, B.; MacKay, J.; Nelder, J.A.; Box, G.; Phadke, M.S.; Kacker, R.N.; Sacks, J.; Welch, W.J.; Lorenzen, T.J.; et al. Taguchi's parameter design: A panel discussion. Technometrics 1992, 34, 127–161.
  6. Vining, G.G.; Myers, R.H. Combining Taguchi and response surface philosophies: A dual response approach. J. Qual. Technol. 1990, 22, 38–45.
  7. Copeland, K.A.F.; Nelson, P.R. Dual response optimization via direct function minimization. J. Qual. Technol. 1996, 28, 331–336.
  8. Del Castillo, E.; Montgomery, D.C. A nonlinear programming solution to the dual response problem. J. Qual. Technol. 1993, 25, 199–204.
  9. Lin, D.K.J.; Tu, W. Dual response surface optimization. J. Qual. Technol. 1995, 27, 34–39.
  10. Cho, B.R.; Philips, M.D.; Kapur, K.C. Quality improvement by RSM modeling for robust design. In Proceedings of the Fifth Industrial Engineering Research Conference, Minneapolis, MN, USA, 18–20 May 1996; pp. 650–655.
  11. Ding, R.; Lin, D.K.J.; Wei, D. Dual-response surface optimization: A weighted MSE approach. Qual. Eng. 2004, 16, 377–385.
  12. Koksoy, O.; Doganaksoy, N. Joint optimization of mean and standard deviation using response surface methods. J. Qual. Technol. 2003, 35, 239–252.
  13. Ames, A.E.; Mattucci, N.; Macdonald, S.; Szonyi, G.; Hawkins, D.M. Quality loss functions for optimization across multiple response surfaces. J. Qual. Technol. 1997, 29, 339–346.
  14. Shin, S.; Cho, B.R. Bias-specified robust design optimization and its analytical solutions. Comput. Ind. Eng. 2005, 48, 129–140.
  15. Shin, S.; Cho, B.R. Robust design models for customer-specified bounds on process parameters. J. Syst. Sci. Syst. Eng. 2006, 15, 2–18.
  16. Robinson, T.J.; Wulff, S.S.; Montgomery, D.C.; Khuri, A.I. Robust parameter design using generalized linear mixed models. J. Qual. Technol. 2006, 38, 65–75.
  17. Truong, N.K.V.; Shin, S. Development of a new robust design methodology based on Bayesian perspectives. Int. J. Qual. Eng. Technol. 2012, 3, 50–78.
  18. Kim, Y.J.; Cho, B.R. Development of priority-based robust design. Qual. Eng. 2002, 14, 355–363.
  19. Tang, L.C.; Xu, K. A unified approach for dual response surface optimization. J. Qual. Technol. 2002, 34, 437–447.
  20. Borror, C.M. Mean and variance modeling with qualitative responses: A case study. Qual. Eng. 1998, 11, 141–148.
  21. Fogliatto, F.S. Multiresponse optimization of products with functional quality characteristics. Qual. Reliab. Eng. Int. 2008, 24, 927–939.
  22. Kim, K.J.; Lin, D.K.J. Dual response surface optimization: A fuzzy modeling approach. J. Qual. Technol. 1998, 30, 1–10.
  23. Shin, S.; Cho, B.R. Studies on a biobjective robust design optimization problem. IIE Trans. 2009, 41, 957–968.
  24. Goethals, P.L.; Cho, B.R. The development of a robust design methodology for time-oriented dynamic quality characteristics with a target profile. Qual. Reliab. Eng. Int. 2011, 27, 403–414.
  25. Nha, V.T.; Shin, S.; Jeong, S.H. Lexicographical dynamic goal programming approach to a robust design optimization within the pharmaceutical environment. Eur. J. Oper. Res. 2013, 229, 505–517.
  26. Montgomery, D.C. Design and Analysis of Experiments, 4th ed.; John Wiley & Sons: New York, NY, USA, 1997.
  27. Myers, R.H.; Montgomery, D.C. Response Surface Methodology: Process and Product Optimization Using Designed Experiments; John Wiley & Sons: New York, NY, USA, 1995.
  28. Box-Steffensmeier, J.M.; Brady, H.E.; Collier, D. (Eds.) The Oxford Handbook of Political Methodology; Oxford Handbooks Online: Oxford, UK, 2008; Chapter 16.
  29. Truong, N.K.V.; Shin, S. A new robust design method from an inverse-problem perspective. Int. J. Qual. Eng. Technol. 2013, 3, 243–271.
  30. Irie, B.; Miyake, S. Capabilities of three-layered perceptrons. In Proceedings of the IEEE 1988 International Conference on Neural Networks, San Diego, CA, USA, 24–27 July 1988; IEEE: Piscataway Township, NJ, USA, 1988; pp. 641–648.
  31. Funahashi, K. On the approximate realization of continuous mappings by neural networks. Neural Netw. 1989, 2, 183–192.
  32. Cybenko, G. Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst. 1989, 2, 303–314.
  33. Hornik, K.; Stinchcombe, M.; White, H. Multilayer feedforward networks are universal approximators. Neural Netw. 1989, 2, 359–366.
  34. Zainuddin, Z.; Pauline, O. Function approximation using artificial neural networks. WSEAS Trans. Math. 2008, 7, 333–338.
  35. Rowlands, H.; Packianather, M.S.; Oztemel, E. Using artificial neural networks for experimental design in off-line quality control. J. Syst. Eng. 1996, 6, 46–59.
  36. Su, C.; Hsieh, K. Applying neural network approach to achieve robust design for dynamic quality characteristics. Int. J. Qual. Reliab. Manag. 1998, 15, 509–519.
  37. Cook, D.F.; Ragsdale, C.T.; Major, R.L. Combining a neural network with a genetic algorithm for process parameter optimization. Eng. Appl. Artif. Intell. 2000, 13, 391–396.
  38. Chow, T.T.; Zhang, G.Q.; Lin, Z.; Song, C.L. Global optimization of absorption chiller system by genetic algorithm and neural network. Energy Build. 2002, 34, 103–109.
  39. Chang, H. Applications of neural networks and genetic algorithms to Taguchi's robust design. Int. J. Electron. Bus. Manag. 2005, 3, 90–96.
  40. Chang, H.; Chen, Y. Neuro-genetic approach to optimize parameter design of dynamic multiresponse experiments. Appl. Soft Comput. 2011, 11, 436–442.
  41. Arungpadang, R.T.; Kim, J.Y. Robust parameter design based on back propagation neural network. Korean Manag. Sci. Rev. 2012, 29, 81–89.
  42. Javad Sabouri, K.; Effati, S.; Pakdaman, M. A neural network approach for solving a class of fractional optimal control problems. Neural Process. Lett. 2017, 45, 59–74.
  43. Hong, Y.Y.; Satriani, T.R.A. Day-ahead spatiotemporal wind speed forecasting using robust design-based deep learning neural network. Energy 2020, 209, 118441.
  44. Arungpadang, T.A.; Maluegha, B.L.; Patras, L.S. Development of dual response approach using artificial intelligence for robust parameter design. In Proceedings of the 1st Ahmad Dahlan International Conference on Mathematics and Mathematics Education, Universitas Ahmad Dahlan, Yogyakarta, Indonesia, 13–14 October 2017; pp. 148–155.
  45. Le, T.-H.; Jang, H.; Shin, S. Determination of the optimal neural network transfer function for response surface methodology and robust design. Appl. Sci. 2021, 11, 6768.
  46. Le, T.-H.; Shin, S. Structured neural network models to improve robust design solutions. Comput. Ind. Eng. 2021, 156, 107231.
  47. Box, G.E.P.; Wilson, K.B. On the experimental attainment of optimum conditions. J. R. Stat. Soc. Ser. B 1951, 13, 1–45.
  48. Myers, R.H. Response surface methodology—Current status and future directions. J. Qual. Technol. 1999, 31, 30–44. [Google Scholar] [CrossRef]
  49. Khuri, A.I.; Mukhopadhyay, S. Response surface methodology. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 128–149. [Google Scholar] [CrossRef]
  50. Box, G.E.P.; Draper, N.R. Empirical Model-Building and Response Surfaces; Wiley: New York, NY, USA, 1987. [Google Scholar]
  51. Khuri, A.I.; Cornell, J.A. Response Surface: Design and Analyses; CRC Press: New York, NY, USA, 1987. [Google Scholar]
  52. Chang, S.W. The Application of Artificial Intelligent Techniques in Oral Cancer Prognosis Based on Clinicopathologic and Genomic Markers. Ph.D. Thesis, University of Malaya, Kuala Lumpur, Malaysia, 2013. [Google Scholar]
  53. Hartman, E.J.; Keeler, J.D.; Kowalski, J.M. Layered neural networks with Gaussian hidden units as universal approximations. Neural Comput. 1990, 2, 210–215. [Google Scholar] [CrossRef]
  54. Gnana Sheela, K.; Deepa, S.N. Review on methods to fix number of hidden neurons in neural networks. Math. Probl. Eng. 2013, 2013, 425740. [Google Scholar] [CrossRef] [Green Version]
  55. Zilouchian, A.; Jamshidi, M. (Eds.) Intelligent Control Systems Using Soft Computing Methodologies; CRC Press: Boca Raton, FL, USA, 2001. [Google Scholar]
  56. Marsland, S. Machine Learning: An Algorithmic Perspective, 2nd ed.; CRC Press: Boca Raton, FL, USA, 2014. [Google Scholar]
  57. Vaghefi, M.; Mahmoodi, K.; Setayeshi, S.; Akbari, M. Application of artificial neural networks to predict flow velocity in a 180° sharp bend with and without a spur dike. Soft Comput. 2020, 24, 8805–8821. [Google Scholar] [CrossRef]
  58. Jayawardena, A.W.; Xu, P.C.; Tsang, F.L.; Li, W.K. Determining the structure of a radial basis function network for prediction of nonlinear hydrological time series. Hydrol. Sci. J. 2006, 51, 21–44. [Google Scholar] [CrossRef]
  59. Heimes, F.; Van Heuveln, B. The normalized radial basis function neural network. In Proceedings of the SMC’98 Conference Proceedings, 1998 IEEE International Conference on Systems, Man, and Cybernetics 2, San Diego, CA, USA, 11–14 October 1998; pp. 1609–1614. [Google Scholar] [CrossRef]
  60. Leong, J.; Ponnambalam, K.; Binns, J.; Elkamel, A. Thermally Constrained Conceptual Deep Geological Repository Design under Spacing and Placing Uncertainties. Appl. Sci. 2021, 11, 11874. [Google Scholar] [CrossRef]
Figure 1. Proposed NN-based estimation procedure.
Figure 2. Proposed FFNN-based RD modeling method.
Figure 3. Early stopping technique.
Figure 4. Proposed CFNN-based RD modeling method.
Figure 5. Proposed RBFN-based RD modeling method. The operation “*” denotes element-by-element multiplication.
Figure 6. Sampled points on the true response surface. (a) Surface plot. (b) Contour plot.
Figure 7. Response plots created using the conventional LSM-based RSM in simulation study 1: (a) Mean (R² = 29.54%), (b) Standard deviation (R² = 86.61%).
Figure 8. Response plots created using the proposed FFNN-based modeling method in simulation study 1: (a) Mean (R² = 89.96%), (b) Standard deviation (R² = 89.28%).
Figure 9. Response plots created using the proposed CFNN-based modeling method in simulation study 1: (a) Mean (R² = 83.22%), (b) Standard deviation (R² = 86.68%).
Figure 10. Response plots created using the proposed RBFN-based modeling method in simulation study 1: (a) Mean (R² = 99.99%), (b) Standard deviation (R² = 99.99%).
Figure 11. Criterion space of the estimated functions in simulation study 1. (a) LSM-based RSM. (b) Proposed FFNN. (c) Proposed CFNN. (d) Proposed RBFN.
Figure 12. Response plots created using the conventional LSM-based RSM in simulation study 2: (a) Mean (R² = 32.04%), (b) Standard deviation (R² = 91.36%).
Figure 13. Response plots created using the proposed FFNN-based modeling method in simulation study 2: (a) Mean (R² = 90.76%), (b) Standard deviation (R² = 91.36%).
Figure 14. Response plots created using the proposed CFNN-based modeling method in simulation study 2: (a) Mean (R² = 80.98%), (b) Standard deviation (R² = 89.32%).
Figure 15. Response plots created using the proposed RBFN-based modeling method in simulation study 2: (a) Mean (R² = 99.99%), (b) Standard deviation (R² = 99.99%).
Figure 16. Criterion space of the estimated functions in simulation study 2. (a) LSM-based RSM. (b) Proposed FFNN. (c) Proposed CFNN. (d) Proposed RBFN.
Figure 17. Response plots created using the conventional LSM-based RSM in the case study: (a) Mean (R² = 92.68%), (b) Standard deviation (R² = 45.42%).
Figure 18. Response plots created using the proposed FFNN-based modeling method in the case study: (a) Mean (R² = 91.70%), (b) Standard deviation (R² = 73.51%).
Figure 19. Response plots created using the proposed CFNN-based modeling method in the case study: (a) Mean (R² = 83.05%), (b) Standard deviation (R² = 69.47%).
Figure 20. Response plots created using the proposed RBFN-based modeling method in the case study: (a) Mean (R² = 99.99%), (b) Standard deviation (R² = 99.99%).
Figure 21. Criterion space of the estimated functions in the case study. (a) LSM-based RSM. (b) Proposed FFNN. (c) Proposed CFNN. (d) Proposed RBFN.
Table 1. Experimental data and true function response value.

Run   x1     x2     y_true
1     0.5    −1.0   113.5914
2     1.0    −0.5   112.9478
3     −1.0   −0.5   104.7632
4     1.0    0.0    100.0000
5     0.5    0.5    78.6806
6     0.0    0.0    100.0000
7     0.5    0.0    100.0000
8     −1.0   0.5    89.9294
9     −1.0   1.0    97.0000
10    1.0    −1.0   105.0000
11    0.0    1.0    91.8451
12    −1.0   0.0    100.0000
13    1.0    1.0    99.5939
14    −0.5   0.5    64.8503
15    0.0    −0.5   158.0282
16    −0.5   −1.0   105.0000
17    −0.5   −0.5   127.4106
18    0.5    1.0    97.0000
19    −1.0   −1.0   100.6766
20    0.0    −1.0   113.5914
21    0.0    0.5    54.8669
22    1.0    0.5    96.2952
23    −0.5   0.0    100.0000
24    −0.5   1.0    91.8451
25    0.5    −0.5   145.1924
Table 2. Experimental data from simulation study 1.

Run   x1      x2      ȳ          s        s²
1     0.50    −1.00   112.5910   5.9420   35.3060
2     1.00    −0.50   111.1480   6.3660   40.5310
3     −1.00   −0.50   105.1830   4.9330   24.3300
4     1.00    0.00    100.6400   5.2010   27.0510
5     0.50    0.50    77.4810    5.3530   28.6530
6     0.00    0.00    99.7600    5.1250   26.2680
7     0.50    0.00    101.0600   5.0650   25.6490
8     −1.00   0.50    89.9690    3.9950   15.9580
9     −1.00   1.00    96.6600    2.9110   8.4740
10    1.00    −1.00   105.8000   6.6580   44.3270
11    0.00    1.00    91.7450    4.2200   17.8060
12    −1.00   0.00    99.7800    3.7970   14.4200
13    1.00    1.00    100.3540   5.0330   25.3290
14    −0.50   0.50    66.1700    3.4490   11.8960
15    0.00    −0.50   158.7880   4.8550   23.5740
16    −0.50   −1.00   104.5000   5.0920   25.9290
17    −0.50   −0.50   127.9310   5.3540   28.6630
18    0.50    1.00    96.5800    4.4950   20.2080
19    −1.00   −1.00   100.8770   4.3940   19.3060
20    0.00    −1.00   113.8910   5.2110   27.1530
21    0.00    0.50    54.8070    4.0020   16.0170
22    1.00    0.50    95.8950    5.4510   29.7140
23    −0.50   0.00    100.8800   4.0340   16.2710
24    −0.50   1.00    91.2050    3.4390   11.8270
25    0.50    −0.50   144.5120   5.3620   28.7530
Table 3. Summary of multilayer feed-forward NN information for simulation 1.

Model   Response             Transfer Function   Training Function   Architecture   #Epochs
FFNN    Mean                 Tansig-Purelin      Trainlm             2-17-1         3
FFNN    Standard deviation   Tansig-Purelin      Trainrp             2-6-1          44
CFNN    Mean                 Tansig-Purelin      Trainlm             2-17-1         5
CFNN    Standard deviation   Tansig-Purelin      Trainrp             2-5-1          28
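
Tables 2 and 3 together specify the multilayer fits. As a minimal sketch, assuming MATLAB’s Deep Learning Toolbox (whose tansig/purelin and trainlm/trainrp names Table 3 uses) and illustrative variable names for the Table 2 columns, the mean and standard-deviation estimators could be built as follows; this is an illustration of the listed settings, not the authors’ published code:

% Sketch (assumed, not the published code): fit the Table 2 responses
% with the FFNN/CFNN settings of Table 3. x1, x2, ybar, and s are
% assumed to be column vectors holding the Table 2 data.
X     = [x1'; x2'];              % 2-by-25 input matrix (factors in rows)
Ymean = ybar';                   % 1-by-25 sample means
Ystd  = s';                      % 1-by-25 sample standard deviations

% FFNN for the mean: 2-17-1, tansig hidden layer and purelin output
% (the feedforwardnet defaults), trained with Levenberg-Marquardt.
ffnnMean = feedforwardnet(17, 'trainlm');
ffnnMean = train(ffnnMean, X, Ymean);

% FFNN for the standard deviation: 2-6-1, resilient back-propagation.
ffnnStd = feedforwardnet(6, 'trainrp');
ffnnStd = train(ffnnStd, X, Ystd);

% The CFNN rows of Table 3 only swap the constructor: cascadeforwardnet
% adds direct input-to-output connections on top of the same layers.
cfnnMean = cascadeforwardnet(17, 'trainlm');
cfnnMean = train(cfnnMean, X, Ymean);
cfnnStd  = cascadeforwardnet(5, 'trainrp');
cfnnStd  = train(cfnnStd, X, Ystd);

% Predicted mean and standard deviation at a candidate setting:
muHat    = ffnnMean([-0.5; -0.5]);
sigmaHat = ffnnStd([-0.5; -0.5]);

By default, train holds out a validation subset and stops when the validation error rises, which corresponds to the early stopping technique of Figure 3.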
Table 4. Summary of RBFN information for simulation 1.

Model   Response             Transfer Function   Goal       Spread
RBFN    Mean                 Radbas-Purelin      1 × 10⁻⁷   0.6
RBFN    Standard deviation   Radbas-Purelin      1 × 10⁻⁷   0.6
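
The RBFN rows read directly as arguments to a radial basis construction routine. A sketch under the same assumptions (MATLAB’s newrb, with X, Ymean, and Ystd as above):

% Sketch: newrb keeps adding radbas neurons until the mean-squared-error
% goal is reached, which is consistent with the near-exact RBFN fits
% (R² = 99.99%) reported in Figure 10. Goal and spread are from Table 4.
goal   = 1e-7;                   % MSE goal
spread = 0.6;                    % width of the radbas units

rbfnMean = newrb(X, Ymean, goal, spread);
rbfnStd  = newrb(X, Ystd,  goal, spread);

muHat    = sim(rbfnMean, [-0.5; -0.5]);
sigmaHat = sim(rbfnStd,  [-0.5; -0.5]);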
Table 5. Comparative results in simulation study 1.

Estimation Model   x1        x2        Process Mean   Process Bias   Process Variance   EQL
LSM                −0.5870   −1.0000   119.9030       8.0960         35.7730            101.3270
FFNN               −0.5230   −0.5110   127.9900       0.0090         23.0860            23.0860
CFNN               −0.3280   −0.9290   127.9830       0.0160         27.0260            27.0260
RBFN               0.6860    −0.7690   127.9410       0.0580         24.3250            24.3280
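
The Table 5 columns follow from any fitted pair of response functions once a target is fixed: process bias is the absolute gap between the estimated mean and the target, and EQL is squared bias plus variance, which reproduces the tabulated values up to rounding (e.g., FFNN: 0.0090² + 23.0860 ≈ 23.0860). A grid-search sketch of the optimization step, assuming a target of 128 (consistent with mean + bias in the table) and the fitted models from the earlier sketches:

% Sketch: evaluate bias² + variance over a grid of candidate settings
% and keep the minimizer. The target value and grid resolution are
% illustrative assumptions, not taken from the paper.
target   = 128;
[g1, g2] = meshgrid(-1:0.01:1, -1:0.01:1);   % candidate factor settings
pts      = [g1(:)'; g2(:)'];

mu    = ffnnMean(pts);                       % fitted process mean
sigma = ffnnStd(pts);                        % fitted process std
eql   = (mu - target).^2 + sigma.^2;         % squared bias + variance

[bestEQL, k] = min(eql);
bestSetting  = pts(:, k);                    % optimal (x1, x2)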
Table 6. Experimental data for simulation 2.

Run   x1     x2     ȳ          s         s²
1     0.5    −1.0   117.0310   28.4560   809.7210
2     1.0    −0.5   108.9480   28.5790   816.7760
3     −1.0   −0.5   107.1230   17.1750   294.9700
4     1.0    0.0    100.5800   22.3820   500.9420
5     0.5    0.5    83.9410    22.0650   486.8490
6     0.0    0.0    98.3400    20.3830   415.4530
7     0.5    0.0    98.0000    23.8250   567.6330
8     −1.0   0.5    90.1890    16.3130   266.1150
9     −1.0   1.0    92.2200    14.7930   218.8280
10    1.0    −1.0   100.1200   26.8040   718.4340
11    0.0    1.0    91.2050    17.0730   291.5000
12    −1.0   0.0    97.5800    18.1380   328.9830
13    1.0    1.0    104.2340   21.8010   475.2960
14    −0.5   0.5    68.7300    18.4750   341.3320
15    0.0    −0.5   157.4680   23.2770   541.8020
16    −0.5   −1.0   115.1800   21.6910   470.5180
17    −0.5   −0.5   128.1910   21.9950   483.7670
18    0.5    1.0    98.3600    18.7680   352.2350
19    −1.0   −1.0   104.5970   20.9930   440.6870
20    0.0    −1.0   114.6310   26.1640   684.5700
21    0.0    0.5    53.3870    20.1200   404.8260
22    1.0    0.5    93.0950    20.7210   429.3470
23    −0.5   0.0    97.8600    19.4930   379.9600
24    −0.5   1.0    95.2650    16.5790   274.8610
25    0.5    −0.5   147.8720   25.1310   631.5690
Table 7. Summary of the multilayer FFNN in simulation 2.

Model   Response             Transfer Function   Training Function   Architecture   #Epochs
FFNN    Mean                 Tansig-Purelin      Trainlm             2-11-1         5
FFNN    Standard deviation   Tansig-Purelin      Trainrp             2-3-1          23
CFNN    Mean                 Tansig-Purelin      Trainlm             2-13-1         6
CFNN    Standard deviation   Tansig-Purelin      Trainrp             2-1-1          29
Table 8. Summary of the RBFN in simulation 2.

Model   Response             Transfer Function   Goal        Spread
RBFN    Mean                 Radbas-Purelin      1 × 10⁻²⁰   0.5
RBFN    Standard deviation   Radbas-Purelin      1 × 10⁻²⁰   0.6
Table 9. Comparative results of simulation study 2.

Estimation Model   x1       x2       Process Mean   Process Bias   Process Variance   EQL
LSM                −1.000   −1.000   117.2490       10.7500        413.2320           528.8130
FFNN               −0.804   −0.561   127.2800       0.7190         359.3420           359.8600
CFNN               −0.383   −0.274   127.2120       0.7870         445.7840           446.4040
RBFN               −0.141   −0.190   127.0680       0.9310         422.7920           423.6600
Table 10. Summary of the multilayer FFNN used in the case study.

Model   Response             Transfer Function   Training Function   Architecture   #Epochs
FFNN    Mean                 Tansig-Purelin      Traingdx            3-9-1          105
FFNN    Standard deviation   Tansig-Purelin      Traingda            3-10-1         19
CFNN    Mean                 Tansig-Purelin      Traingda            3-12-1         107
CFNN    Standard deviation   Tansig-Purelin      Trainrp             3-11-1         9
Table 11. Summary of the RBFN used in the case study.

Model   Response             Transfer Function   Goal        Spread
RBFN    Mean                 Radbas-Purelin      1 × 10⁻²⁷   0.5
RBFN    Standard deviation   Radbas-Purelin      1 × 10⁻²⁷   0.8
Table 12. Comparative results of the case study.

Estimation Model   x1      x2       x3        Process Mean   Process Bias   Process Variance   EQL
LSM                1.000   0.071    −0.2500   494.6720       5.3270         1977.5320          2005.9170
FFNN               0.999   0.999    −0.6230   499.8690       0.1310         7.7630             7.7800
CFNN               0.792   −0.752   0.9990    499.6870       0.3120         2.8540             2.9520
RBFN               0.999   0.999    −0.4300   500.0520       0.0520         23.2190            23.2210