Abstract
The computational complexity of airfoil optimization for aircraft wing design typically involves high-dimensional parameter spaces defined by geometric variables, where each Computational Fluid Dynamics (CFD) simulation cycle may require significant processing resources. Variable selection to identify the influential inputs is therefore crucial for minimizing the number of necessary model evaluations, particularly for complex systems exhibiting nonlinear and poorly understood input–output relationships. As a result, it is desirable to determine the influential inputs from as few samples as possible to achieve a simpler, more efficient optimization process. This article provides a systematic, novel approach to solving aircraft optimization problems. First, a Kriging-based variable screening method (KRG-VSM) is proposed to determine the active inputs using a likelihood-based screening criterion, and new stopping criteria for KRG-VSM are proposed and discussed. A genetic algorithm (GA) is employed to seek the global optimum of the log-likelihood function. The airfoil optimization is then conducted using the identified active design variables. The Kriging-based variable screening method is tested on numerical benchmarks and applied to an airfoil aerodynamic optimization problem. According to the results, the method can select all the active inputs with only a few samples, and applying the variable screening technique enhances the efficiency of the airfoil optimization process at acceptable accuracy.
1. Introduction
Solving expensive optimization problems with high-fidelity CFD models and many variables usually incurs a large computational cost. This is because the sample size required to find the optimal solution in the search space usually grows exponentially with the number of optimization variables [1]. A possible solution is to replace the high-fidelity simulation model with a metamodel to reduce the computational burden. A metamodel is an approximate model constructed by sampling points from the high-fidelity simulation and fitting the objective function and constraints. The optimization is then conducted on the approximate model to identify a candidate design point, and this process is repeated until the best design is obtained. Kriging metamodels, which model the problem functions using stochastic processes, excel at capturing nonlinear and multimodal relationships in engineering systems [2]. However, it is difficult to construct an effective Kriging metamodel for high-dimensional optimization problems, so the dimensionality of such problems must be reduced. Dimensionality reduction (DR) methods transform high-dimensional data into meaningful lower-dimensional representations [3]. Principal component analysis (PCA) [4] is a linear method widely employed for dimensionality reduction; it describes the original data in a lower dimension while reproducing the variance of the original data as closely as possible. Other DR methods include Isomap [5], Kernel PCA [6], Maximum Variance Unfolding (MVU) [7], Local Linear Embedding (LLE) [8], and so on. Such DR techniques fall in the class of convex methods, where the cost function must be strictly convex [3]. In contrast, for a general engineering problem we cannot ensure that the objective function has a unique minimum.
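As a concrete illustration of the linear DR idea described above, the following sketch implements PCA directly via the SVD; the toy data, dimensions, and variance threshold are illustrative assumptions rather than anything taken from the cited works.

```python
import numpy as np

def pca(X, k):
    """Project X (n_samples x n_features) onto its top-k principal components."""
    Xc = X - X.mean(axis=0)                            # center the data
    # the principal axes are the right singular vectors of the centered data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    Z = Xc @ Vt[:k].T                                  # low-dimensional representation
    explained = (s[:k] ** 2).sum() / (s ** 2).sum()    # fraction of variance captured
    return Z, explained

rng = np.random.default_rng(0)
# 200 points in 5-D that vary mostly along a single latent direction
X = rng.normal(size=(200, 1)) @ rng.normal(size=(1, 5)) + 0.05 * rng.normal(size=(200, 5))
Z, var_ratio = pca(X, 1)
print(Z.shape)   # one component suffices for this rank-one-plus-noise data
```

For a strongly nonlinear response surface, however, such a linear projection can discard exactly the directions an optimizer needs, which motivates the screening approach taken in this paper.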
While these DR methods are powerful tools for data compression and feature extraction, they are generally ineffective in reducing the computational complexity of high-dimensional black-box optimization problems. Variable screening identifies the important inputs and prunes less significant variables or noise to reduce the dimensionality of the problem. The screening process is usually carried out by sampling and analyzing the sampling results. To distinguish the active inputs of a model, sensitivity analysis (SA), both local and global, is widely employed. Local sensitivity analysis measures the local variability of the output relative to an input variable at a certain point, which is in effect a partial derivative. Global Sensitivity Analysis (GSA) comprises a family of mathematical methods for studying how changes in the output of a numerical model depend on changes in its inputs. Ciuffo [9] proposed a variance-based technique utilizing the Sobol variance decomposition [10], which proved to be a valid GSA approach. However, variance-based methods require a large sample size, so the number of evaluations they demand is computationally taxing for complex models. For this reason, variance-based sensitivity analysis is often performed on a Gaussian process metamodel. Ge [11] combined the quasi-optimized trajectory-based elementary effects (quasi-OTEE) method with the Kriging method to solve high-dimensional SA problems: quasi-OTEE SA first identifies the active and inactive inputs of the problem, and Kriging-based SA is then leveraged to compute the sensitivity indices and rank the most active inputs more accurately. Pianosi [12] provides a Matlab toolbox for solving GSA problems named SAFE (Sensitivity Analysis for Everybody), which contains a variety of GSA methods, such as the variance-based SA method.
According to the results of some high-dimensional screening test problems, the Kriging-based SA method is inaccurate in ranking the factors when the sample size is small. Welch [13] proposed the Kriging-based variable screening method (KRG-VSM), which builds a Kriging metamodel and employs a maximum likelihood estimator (MLE) to screen the active inputs. KRG-VSM maximizes the likelihood sequentially, taking the contribution of each input into account: at every iteration, the most active input is removed from the restrained collection until only inactive inputs remain.
This paper proposes a systematic variable screening-based method to address airfoil optimization problems. Simulation results indicate that the proposed method effectively balances computational efficiency and aerodynamic precision in airfoil design optimization. The key contribution of this work lies in the enhancement of KRG-VSM and its successful application to airfoil optimization problems.
The remainder of this paper is organized as follows. The section “Abbreviations and Notations” systematically presents all abbreviations and mathematical notations used in this study. Section 2 discusses the variance-based sensitivity analysis method and the Kriging-based variable screening method. Section 3 introduces the procedure of the KRG-VSM and validates the effectiveness of the proposed variable screening method through numerical simulation cases. Subsequently, the proposed algorithm is applied to the airfoil aerodynamic optimization problems. The conclusion of this article is summarized in Section 4.
Abbreviations and Notations
This subsection provides a comprehensive reference for the abbreviations and mathematical notations employed throughout the study, with full definitions provided in Table 1 and Table 2, respectively.
Table 1.
List of abbreviations.
Table 2.
Notations description.
2. Review of the Variable Screening Methods
2.1. Variance Based Sensitivity Analysis
Analysis of Variance (ANOVA) is a set of statistical models used to attribute the variability observed in an output to the inputs and corresponding processes. In ANOVA, the variance associated with a given input can be classified according to different causes of variation.
The total sensitivity indicator is given by

S_{T_i} = \frac{E_{X_{\sim i}}\left[\mathrm{Var}_{X_i}(Y \mid X_{\sim i})\right]}{\mathrm{Var}(Y)},

where the numerator is estimated as

E_{X_{\sim i}}\left[\mathrm{Var}_{X_i}(Y \mid X_{\sim i})\right] \approx \frac{1}{2N} \sum_{j=1}^{N} \left( f(A)_j - f\!\left(A_B^{(i)}\right)_j \right)^2,

where the matrices A and B are two independent N × d matrices of input factors, A_B^{(i)} denotes A with its i-th column replaced by the i-th column of B, and f denotes the objective function. As mentioned above, the sample size N is usually large for complex optimization problems; thus, a metamodel is leveraged to implement the variance-based sensitivity analysis. For Equation (9), the active inputs are , and . The Latin hypercube sampling (LHS) method is used to obtain 50 samples, and the total sensitivity indices of the inputs are computed from the metamodel constructed with these 50 expensive samples. The metamodel is evaluated 5000 times to collect data for the ANOVA-based sensitivity analysis. The result is shown in Table 3. The active inputs , and are selected, while the active inputs and are missed. It is thus difficult to construct an acceptable metamodel for variance-based sensitivity analysis using only a few samples.
Table 3.
Metamodel-based total sensitivity indices (50 samples).
As the sample size is increased to 500, the metamodel constructed from these samples can identify all the active inputs through ANOVA-based sensitivity analysis. The result is shown in Table 4. Due to the stochastic nature of LHS, however, ANOVA-based SA may occasionally fail to identify all active inputs.
Table 4.
Metamodel-based total sensitivity indices (500 samples).
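The variance-based total index used above can be estimated with the standard two-matrix scheme; the sketch below uses the Jansen estimator on a toy four-input response (the response, sample size, and seed are our illustrative assumptions, not the paper's Equation (9)).

```python
import numpy as np

def total_sobol_indices(f, d, N, rng):
    """Total-effect Sobol indices via the Jansen estimator.

    A and B are independent N x d sample matrices; A_B^(i) is A with its
    i-th column replaced by the i-th column of B."""
    A = rng.random((N, d))
    B = rng.random((N, d))
    fA = f(A)
    var_y = np.var(np.concatenate([fA, f(B)]))    # total output variance
    S_T = np.empty(d)
    for i in range(d):
        ABi = A.copy()
        ABi[:, i] = B[:, i]
        S_T[i] = np.mean((fA - f(ABi)) ** 2) / (2.0 * var_y)
    return S_T

# toy response: x1 and x3 are active, x2 and x4 are inert
f = lambda X: 5.0 * X[:, 0] + np.sin(6.0 * X[:, 2])
S_T = total_sobol_indices(f, d=4, N=20000, rng=np.random.default_rng(1))
print(np.round(S_T, 3))   # indices of the inert inputs are near zero
```

Each total index costs N extra model evaluations, which is why the text resorts to a metamodel when f is an expensive CFD code.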
2.2. Review of KRG-VSM
The Kriging-based variable screening method (KRG-VSM) screens the active inputs by numerically maximizing the concentrated log-likelihood

\ell(\theta) = -\frac{n}{2} \ln \hat{\sigma}^2(\theta) - \frac{1}{2} \ln |R(\theta)|,

which is determined only by the correlation parameters and the sample points. Here n refers to the sample size, R is an n × n matrix of correlations among the samples, and \hat{\sigma}^2 denotes the estimate of the process variance obtained by MLE. Compared with ANOVA-based sensitivity analysis, KRG-VSM [13] could use 50 samples to select all six active inputs of Equation (9) efficiently.
However, as the original authors noted, the KRG-VSM described in [13] was subject to two limitations imposed by the computational power constraints of that era. The first is the lack of proper criteria for identifying the active inputs: an inactive input is sometimes identified as active because it happens to produce the largest increment in the log-likelihood. The second is the difficulty of obtaining a global maximum of the likelihood, especially at later stages when optimizing problems with many parameters. The simplex search method [13] is applied in the algorithm to optimize the likelihood function; restarting it from several different starting points may converge to the same optimum, but this does not guarantee that a global optimum has been found. These problems are explored in this paper.
3. Proposed KRG-VSM
3.1. Procedure of KRG-VSM
Kriging models the function as a stochastic process.
The regression model offers little advantage in applications and is therefore set to a constant; F then reduces to a column vector of ones because the regression model has only a constant term. R denotes the stochastic-process correlation among the inputs. For two input vectors w and x, the correlation can be expressed as

R(w, x) = \prod_{j=1}^{d} \exp\left(-\theta_j (w_j - x_j)^2\right), \quad \theta_j \geq 0, (5)

where d is the dimensionality of the problem. Note that the correlation function should be selected based on the characteristics of the objective function. The correlation function (5), which has higher-order derivatives, is appropriate for a smooth response, e.g., aircraft optimization problems. Correspondingly, this type of function is not suitable for fitting step-response-type black-box functions.
Given the correlation parameters \theta, the maximum likelihood estimates of \beta and \sigma^2 are

\hat{\beta} = (F^{T} R^{-1} F)^{-1} F^{T} R^{-1} y, \qquad \hat{\sigma}^2 = \frac{1}{n} (y - F\hat{\beta})^{T} R^{-1} (y - F\hat{\beta}). (6)

The regression process of the Kriging metamodel is to find a set of correlation parameters \theta that maximizes the log-likelihood

\ell(\theta) = -\frac{n}{2} \ln \hat{\sigma}^2(\theta) - \frac{1}{2} \ln |R(\theta)|.
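Putting the estimates of this section together, the following is a minimal sketch of the concentrated log-likelihood for a constant-trend Kriging model with Gaussian correlation; the small nugget term and the toy data are our assumptions, added for numerical stability and illustration.

```python
import numpy as np

def neg_log_likelihood(theta, X, y):
    """Negated concentrated log-likelihood of a constant-trend Kriging model
    with Gaussian correlation R(w, x) = exp(-sum_j theta_j * (w_j - x_j)**2)."""
    n = X.shape[0]
    diff = X[:, None, :] - X[None, :, :]
    R = np.exp(-np.sum(theta * diff ** 2, axis=2)) + 1e-8 * np.eye(n)  # small nugget
    L = np.linalg.cholesky(R)
    solve = lambda b: np.linalg.solve(L.T, np.linalg.solve(L, b))      # R^{-1} b
    F = np.ones(n)                             # constant regression term
    beta = (F @ solve(y)) / (F @ solve(F))     # MLE of the trend coefficient
    r = y - beta * F
    sigma2 = (r @ solve(r)) / n                # MLE of the process variance
    log_det_R = 2.0 * np.sum(np.log(np.diag(L)))
    return 0.5 * (n * np.log(sigma2) + log_det_R)

rng = np.random.default_rng(2)
X = rng.random((30, 2))
y = np.sin(5.0 * X[:, 0])                      # only the first input matters
# a large theta on the active input should fit better than the reverse
print(neg_log_likelihood(np.array([5.0, 0.01]), X, y) <
      neg_log_likelihood(np.array([0.01, 5.0]), X, y))
```

The comparison at the end shows why the likelihood is informative for screening: assigning a large correlation parameter to the active input yields a noticeably better fit than assigning it to the inert one.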
The variable screening process is closely related to the regression process of the Kriging metamodel. The flowchart of KRG-VSM is illustrated in Figure 1. It should be noted that, unlike the regression problem in Kriging modeling, the maximum likelihood optimization in the variable screening process is a single-variable optimization problem, as shown in Figure 1. The independent variables for the first and second optimizations are and , respectively. The genetic algorithm (GA) is employed to maximize the log-likelihood function because conventional line search methods fail to locate the global optimum of this non-convex optimization problem.
Figure 1.
Flowchart of KRG-VSM.
Step 1. N initial samples are generated throughout the entire design space using the LHS method.

Step 2. Let S denote the index set of inputs restrained to share a common value \theta_c, so that the optimization of the likelihood function is over \theta_c only. All inputs are on the same footing because the factors share the same scale. Initially, S contains all d inputs, and the common \theta_c is set to its upper limit.

Step 3. GA optimization is performed to maximize the log-likelihood subject to \theta_j = \theta_c for all j in S; the maximum is denoted by \ell_0. The objective function can be expressed as

\max_{\theta_c} \ \ell(\theta) \quad \text{s.t.} \quad \theta_j = \theta_c, \ j \in S,

where the log-likelihood can be calculated by combining Equations (5) and (6). The fitness tolerance is set as 10 . The variable is bounded between and 1. The maximum number of generations equals the problem dimension d, and the population size is fixed at 50 individuals. All optimization problems considered in this study are formulated as unconstrained optimization problems. The maximum number of executions for the GA is . If the value of the common \theta_c is reduced but remains greater than 10 , the process proceeds to Step 4. If not, the screening process terminates.

Step 4. For every j in S, GA is used to maximize the log-likelihood subject to releasing \theta_j while the remaining inputs in S keep the common value \theta_c; the maximum is denoted by \ell_j.

Step 5. Let j^* denote the input producing the largest increase, \Delta\ell = \ell_{j^*} - \ell_0, in the log-likelihood at Step 4. If \Delta\ell is positive, the process turns to Step 6. Otherwise, the screening process terminates.

Step 6. If \theta_{j^*} > \theta_c, input j^* is selected from set S as an active input and allowed its own value of \theta; remove j^* from S and skip to Step 7. Otherwise, let \ell_{j^*} be infinitely small and return to Step 5.

Step 7. Let \ell_0 = \ell_{j^*} and retain the released value \theta_{j^*}. Go to Step 3.

An input that produces the largest increase in the log-likelihood, has a \theta larger than that of the restrained inputs, and retains a considerable \theta at the end of the algorithm can be regarded as active. An input that produces the largest increase but has a \theta smaller than that of the restrained inputs is not selected; instead, its flag is incremented by one. The importance of inputs with nonzero flags cannot be ensured. An unrestrained input whose \theta is close to 0 at the end of the screening algorithm can be regarded as inactive, as can an input that remains restrained throughout and carries no flag. The more active and inactive inputs that can be identified, the better the quality of the screening result.
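The seven steps above can be sketched as a greedy likelihood-driven loop. In this sketch SciPy's differential evolution stands in for the GA; the \theta bounds, the improvement threshold tol, and the toy response are our illustrative assumptions, and previously released \theta values are simply held fixed between iterations.

```python
import numpy as np
from scipy.optimize import differential_evolution

def nll(theta, X, y):
    """Negated concentrated log-likelihood (constant trend, Gaussian correlation)."""
    n = len(y)
    d2 = np.sum(theta * (X[:, None, :] - X[None, :, :]) ** 2, axis=2)
    L = np.linalg.cholesky(np.exp(-d2) + 1e-8 * np.eye(n))
    solve = lambda b: np.linalg.solve(L.T, np.linalg.solve(L, b))
    ones = np.ones(n)
    r = y - (ones @ solve(y)) / (ones @ solve(ones))
    sigma2 = (r @ solve(r)) / n
    return 0.5 * (n * np.log(sigma2) + 2.0 * np.sum(np.log(np.diag(L))))

def screen(X, y, tol=1e-3, bounds=(1e-6, 10.0), seed=0):
    """Release one theta_j at a time while the inputs left in S share a common theta."""
    d = X.shape[1]
    theta, free = np.zeros(d), []
    S = list(range(d))
    while S:
        # Step 3: optimise the common theta of the restrained inputs
        def common_obj(t):
            th = theta.copy(); th[S] = t[0]
            return nll(th, X, y)
        res = differential_evolution(common_obj, [bounds], seed=seed, maxiter=60, tol=1e-7)
        theta[S] = res.x[0]
        base = -res.fun
        # Steps 4-5: try releasing each restrained input in turn
        best_j, best_ll, best_t = None, base, None
        for j in S:
            def obj(t):
                th = theta.copy(); th[j] = t[0]
                return nll(th, X, y)
            rj = differential_evolution(obj, [bounds], seed=seed, maxiter=60, tol=1e-7)
            if -rj.fun > best_ll:
                best_j, best_ll, best_t = j, -rj.fun, rj.x[0]
        # Steps 6-7: keep the release only if it improves on the common-theta fit
        if best_j is None or best_ll - base < tol:
            break
        theta[best_j] = best_t
        free.append(best_j)
        S.remove(best_j)
    return free, theta

rng = np.random.default_rng(3)
X = rng.random((40, 6))
y = np.sin(5.0 * X[:, 0]) + 2.0 * X[:, 3] ** 2   # inputs 0 and 3 are active
active, theta = screen(X, y)
print(sorted(active))
```

The sketch omits the flag bookkeeping of Step 6; inputs released with a large \theta at the end of the loop correspond to the active set.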
3.2. Test on Benchmark Problem
3.2.1. Benchmark Description and Screening Results
To verify the effectiveness of the proposed algorithm, we test it on five well-known variable screening problems, and the screening results are presented and discussed. To further demonstrate the effectiveness of the proposed algorithm relative to the original KRG-VSM, this subsection employs a benchmark numerical case from the original KRG-VSM, i.e., the Welch function, to conduct comparative simulations with our enhanced method; the simulation results demonstrate the advantages of our proposed approach. In the result tables, \theta_c denotes the value of \theta for the inputs left in the set S, and \theta_j denotes the values of \theta for the unrestrained inputs at the end of the algorithm. We obtain the samples using the LHS approach.
Throughout all test cases, sample points were generated using LHS with randomized initialization, ensuring statistically equivalent comparison conditions across all simulations.
Welch function
Equation (9) [13] is evaluated on , for all . The active inputs are , and . For input screening purposes, some inputs of this function have a much stronger effect on the output than others. As Welch [14] points out, interactions and nonlinear effects make this function challenging.
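Equation (9) is not reproduced above; for reference, the following is the Welch et al. (1992) function as tabulated in the VLSE library. This is our transcription and should be checked against [13]; inputs x8 and x16 do not appear in the expression at all.

```python
import numpy as np

def welch(x):
    """Welch et al. (1992) screening function on [-0.5, 0.5]^20,
    transcribed from the VLSE library (1-based indexing via g)."""
    x = np.asarray(x, dtype=float)
    g = lambda i: x[..., i - 1]
    return (5.0 * g(12) / (1.0 + g(1)) + 5.0 * (g(4) - g(20)) ** 2 + g(5)
            + 40.0 * g(19) ** 3 - 5.0 * g(19)
            + 0.05 * g(2) + 0.08 * g(3) - 0.03 * g(6) + 0.03 * g(7)
            - 0.09 * g(9) - 0.01 * g(10) - 0.07 * g(11) + 0.25 * g(13) ** 2
            - 0.04 * g(14) + 0.06 * g(15) - 0.01 * g(17) - 0.03 * g(18))

print(welch(np.zeros(20)))   # 0.0 at the centre of the domain
```

Note the mix of strongly nonlinear terms (the ratio and cubic terms) with many small linear terms, which is exactly what makes the function a hard screening benchmark.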
Eighty samples are used for the input identification. The screening results are presented in Table 5. The first six inputs to be unconstrained are , and . The procedure then terminated as the value of the common \theta was less than 10 . All six active inputs could be selected accurately.
Table 5.
Result of the test function (9).
The screening results for the Welch function using the original KRG-VSM are presented in Table 6. Comparing Table 5 and Table 6, one can observe that the most active variables, and , are selected first by the proposed algorithm, whereas the original KRG-VSM initially selects different active inputs and , as described in [14]. We therefore conclude that the proposed algorithm demonstrates superior active-variable selection capability compared with the original method, which can be attributed to its use of the more computationally intensive GA.
Table 6.
Result of the test function (9) with original KRG-VSM.
Moon high-dimensionality function [14]
Equation (10) is a nonlinear function with a large number of variables, in which the most active ones are , and . Notably, the active variables all appear in interaction or quadratic terms, which makes variable screening on Equation (10) difficult.
Seventy samples are used for the input identification. The screening results are presented in Table 7. The first five inputs to be selected are , and , which are the five active inputs. The screening procedure continues because the value of \theta for the restrained inputs remains considerably larger than 10 . Then , and are selected from S. The procedure terminated as the value of the common \theta fell below 10 after was selected. We note that on two occasions produced the largest increase in the log-likelihood while having a smaller value of \theta than the restrained inputs, and that , and did so many times, so the importance of , and cannot be determined. Because this function contains many interaction terms, it is difficult to judge which inputs are inactive.
Table 7.
Result of the test function (10).
Linkletter sinusoidal function
Equation (11) [15] is evaluated on the scaled input domain , for all . As shown above, only two variables in this function are active; the last eight inputs do not appear in the expression. The input variable ranges for most numerical examples are defined as symmetric intervals centered around zero. The choice of min–max scaling to [0, 1] for the input variables of this function is made to maintain consistency with the benchmark problem adopted from the well-established Virtual Library of Simulation Experiments (VLSE) at Simon Fraser University (https://www.sfu.ca/~ssurjano/linketal06sin.html, accessed on 17 March 2025).
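For reference, the VLSE form of this function (our transcription of Equation (11), to be checked against [15]) depends only on the first two of its ten inputs:

```python
import numpy as np

def linkletter_sin(x):
    """Linkletter et al. (2006) sinusoidal function on [0, 1]^10,
    transcribed from the VLSE library: f(x) = sin(x1) + sin(5*x2)."""
    x = np.asarray(x, dtype=float)
    return np.sin(x[..., 0]) + np.sin(5.0 * x[..., 1])

# the last eight coordinates have no effect on the response
a = linkletter_sin(np.array([0.3, 0.7] + [0.1] * 8))
b = linkletter_sin(np.array([0.3, 0.7] + [0.9] * 8))
print(a == b)
```

Because eight of the ten inputs are exactly inert, this is a clean test of whether a screening method avoids false positives.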
Eighty samples are used for the variable screening. The results are shown in Table 8. The first two factors to be selected are and . Then, the procedure terminated as the value of the common theta was less than 10 .
Table 8.
Result of the test function (11).
Loeppky function
For Equation (12), the first three inputs are active, the last three are inactive, and the rest are moderately active. The function is evaluated on the hypercube ; the last three input variables do not appear in the expression. The screening procedure terminated as the value of the common \theta was 7.7 , which is less than 10 . No variable is selected from the set S.
Linkletter simple function
Equation (13) is used for input variable screening because only four of its input variables have a nonzero effect on the response. The function is evaluated on the scaled input domain , for all .
The screening procedure terminated as the value of the common \theta was 2.8 , which is less than 10 . No variable is selected from the set S.
3.2.2. Discussion on Numerical Benchmarks
According to the screening results for the test functions (12) and (13), the screening algorithm fails under the following conditions: the sample size is not commensurate with the number of active inputs; the terms of the problem are linear; the distribution of the samples is improper; or the significance of all the inputs is similar.
For a linear problem, the algorithm fails because no input can be selected. The algorithm usually makes mistakes when the input that produces the largest increase in the maximum likelihood function has a smaller \theta value than the common \theta during the screening process. In other words, the screening quality is negatively related to the number of such abnormal occurrences.
For a fixed sample dataset, every optimization instance consistently identifies the same number and identical sequence of active variables, and the maximum likelihood function values obtained at each optimization step remain comparable. For different sampling datasets, the obtained likelihood function values from variable selection vary with the sampling data. Even with identical sample sizes, insufficient sample quantities will lead to divergent variable selection results between datasets.
3.3. Airfoil Aerodynamic Optimization Example
3.3.1. Description of the Problem
Airfoil optimization is one of the most important parts of whole-wing optimization. In this section, GA is used to optimize the NACA0012 airfoil, mainly to increase the lift-drag ratio. KRG-VSM is conducted to screen out the inactive inputs of the optimization problem, reducing the computational expense while keeping a similar optimum. The mathematical model of the airfoil optimization problem is as follows:
where represents the lift-drag ratio, denotes the maximum thickness of the baseline airfoil, and the lift coefficient of the baseline. The candidate airfoil shapes are generated by the shape parameterization method, while the CFD analysis model computes the response corresponding to each parameterized airfoil. The freestream Mach number is 0.8, and the angle of attack is 2.5 degrees.
3.3.2. CFD Model and Parametric Modeling of the Airfoil
We employ the analytical shape parameterization method proposed by Khurana [16] to fit the airfoil. The polynomial is expressed by
where and n are, respectively, the design variables and the size of the variable space, is the shape function, and represents the chord length interval. It can be seen that the design variable “” acts as the multiplier of the shape function, determining the contribution of each function to the final shape [17].
The polynomial of Hicks–Henne [18] is used as the shape function in this paper:
where the order of is usually similar to . Substituting (15) into (14), Equation (14) can fully describe any smooth airfoil given the correct curvature coefficients (or design variables). It has been shown that five variables per surface are sufficient to fit airfoils for aerodynamic analysis. So , which means there are five design variables each for the upper and lower surfaces. The fitting curve of NACA0012 is shown in Figure 2.
Figure 2.
NACA0012 airfoil fitting with parameterization method.
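A minimal sketch of the parameterization of Equations (14) and (15): each Hicks–Henne bump is commonly written as b(x) = sin(pi * x^m)^t, with m placing the bump's peak. The exponent t = 4, the five peak locations, and the closed-form NACA0012 thickness polynomial used as the baseline are our illustrative assumptions, not the paper's exact settings.

```python
import numpy as np

def hicks_henne_bump(x, peak, t=4.0):
    """One Hicks-Henne bump sin(pi * x**m)**t; m is chosen so that the bump
    peaks at x = peak, with x the normalised chord position in [0, 1]."""
    m = np.log(0.5) / np.log(peak)
    return np.sin(np.pi * x ** m) ** t

def naca0012_upper(x):
    """Closed-form half-thickness distribution of the NACA0012 airfoil."""
    return 0.6 * (0.2969 * np.sqrt(x) - 0.1260 * x - 0.3516 * x ** 2
                  + 0.2843 * x ** 3 - 0.1015 * x ** 4)

def perturbed_surface(x, coeffs, peaks):
    """Baseline surface plus a weighted sum of bump functions; the design
    variables are the bump amplitudes in coeffs."""
    y = naca0012_upper(x)
    for a, p in zip(coeffs, peaks):
        y = y + a * hicks_henne_bump(x, p)
    return y

x = np.linspace(0.0, 1.0, 101)
peaks = [0.15, 0.3, 0.5, 0.7, 0.85]            # five design variables per surface
coeffs = [0.002, -0.001, 0.003, 0.0, -0.002]   # illustrative amplitudes
y = perturbed_surface(x, coeffs, peaks)
print(y.shape)
```

Because each bump vanishes at the leading and trailing edges, the perturbed surface automatically preserves the endpoints of the baseline airfoil.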
The airfoil coordinate data generated by the parametric model are imported into Gambit to create high-quality meshes. Fluent is then used as the CFD solver to perform the aerodynamic analysis, yielding the lift and drag coefficient distributions, which are important data for airfoil design. GA is employed to optimize the airfoil design problem due to the unique challenges posed by airfoil aerodynamic optimization: a high-dimensional, black-box problem lacking mathematical interpretability. Unlike classical nonlinear regression problems, where theoretically grounded methods (e.g., hybrid global optimization or interval arithmetic optimization algorithms) are used, our problem involves a ten-variable design space and relies on computationally intensive CFD simulations, which introduce non-convexity and numerical noise. The GA was selected for its inherent strengths in handling such complexity: (1) its population-based stochastic search effectively navigates multimodal landscapes without requiring gradient information, and (2) its global exploration capability mitigates the risk of convergence to suboptimal solutions, a common pitfall of local search methods (e.g., line search algorithms). The population size of the GA is set to 10, and the maximum number of generations is fixed at 50; all other termination conditions follow MATLAB’s default settings. The whole process is automated. To verify the accuracy of the CFD model, the experimental data and the pressure coefficient distribution of the CFD model are compared in Figure 3. The accuracy of the CFD model is acceptable according to this comparison.
Figure 3.
Comparison of the CFD result and experiment result.
3.3.3. Screening Result of the Problem
For the 10-dimensional airfoil optimization problem, the LHS method is used to generate 70 samples for evaluation by the CFD model. Forty-eight of these samples produce valid results satisfying the accuracy criteria, and these 48 samples are then used for the variable screening. The result is shown in Table 9. , and can be recognized as the active inputs. , and are inactive because their values of \theta at the end of the algorithm are close to 0. The importance of and cannot be ensured: led to the biggest increase in the log-likelihood with a \theta smaller than the common value once, and there are five times that led to the biggest increase with a \theta smaller than the common value.
Table 9.
Airfoil optimization problem (10D).
3.3.4. Comparisons of the Optimized Results
To validate the effectiveness of the variable screening process, Table 10 compares direct numerical optimization with all ten inputs against optimization with the four inputs retained after variable screening; the inactive inputs are fixed at the midpoints of their domains. The optimization results in Table 10 demonstrate that reducing the input variables from ten to four (a 60% reduction) achieves an excellent balance between computational efficiency and aerodynamic performance. The simplified four-input model shows only a marginal 2.4% decrease in lift coefficient, retaining over 97% of the original lift performance, while the trade-off appears as a 12.8% increase in drag coefficient and a 13.5% reduction in lift-to-drag ratio. Most remarkably, variable screening decreases the number of function evaluations (Nfe) from 22,540 to 1120, reducing the computational cost by about 95% and highlighting the method’s effectiveness in preserving the key aerodynamic characteristics while dramatically improving optimization efficiency. This ability to maintain aerodynamic performance while drastically reducing the computational burden demonstrates strong potential for airfoil optimization problems where both accuracy and efficiency are critical.
Table 10.
Comparisons of the optimized results.
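The reported saving in function evaluations can be checked directly from the counts quoted above:

```python
nfe_full, nfe_screened = 22540, 1120   # evaluation counts reported in Table 10
reduction = 1.0 - nfe_screened / nfe_full
print(f"{reduction:.1%}")              # about 95% fewer evaluations
```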
Figure 4 shows the optimal airfoil profile, and the pressure coefficient distribution of the optimal airfoil is presented in Figure 5. The lift-drag ratio after direct numerical optimization with four inputs increases by 88.38% compared with the baseline, and the maximum thickness and lift coefficient both meet the constraints.
Figure 4.
Airfoil profile comparison.
Figure 5.
Pressure coefficient distribution.
4. Conclusions
Variable screening offers a promising approach to mitigating the curse of dimensionality, under which the number of required model evaluations grows rapidly with the input dimension. GSA is an efficient tool for screening the active inputs, but it usually requires a large number of model evaluations, which poses computational challenges. In recent years, metamodels have been adopted to address this issue. However, for high-dimensional and computationally expensive problems with small sample sizes, it is difficult to construct a metamodel accurate enough for GSA to select all the active variables. The Kriging-based screening algorithm, via the likelihood, can select the active inputs efficiently with few expensive samples. In this work, a set of numerical benchmarks has been employed to validate the screening effectiveness of the proposed algorithm, a genetic algorithm (GA) is employed to optimize the maximum likelihood function, and more reasonable criteria for the active inputs are proposed, making the screening process more accurate.
Author Contributions
Conceptualization, Y.W. and J.W.; methodology, Y.W.; software, Y.W.; validation, J.G.; formal analysis, Y.W.; investigation, M.H.; resources, Y.W.; data curation, Y.W.; writing—original draft preparation, Y.W.; writing—review and editing, Y.W.; visualization, X.D.; supervision, J.W. and J.G.; project administration, J.W. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Data Availability Statement
All data in this paper can be calculated using the provided simulation conditions, with no dependency on external databases.
Conflicts of Interest
Author Minglei Han was employed by Anhui Fangyuan Mechanical and Electrical Co., Ltd. The remaining authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
References
- Bellman, R.E. Adaptive Control Processes: A Guided Tour; Princeton University Press: Princeton, NJ, USA, 1961; pp. 112–115. [Google Scholar]
- Jones, D.R.; Schonlau, M.; Welch, W.J. Efficient global optimization of expensive black-box functions. J. Glob. Optim. 1998, 13, 455–492. [Google Scholar] [CrossRef]
- Van Der Maaten, L.; Postma, E.; Van den Herik, J. Dimensionality reduction: A comparative review. J. Mach. Learn. Res. 2009, 10, 66–71. [Google Scholar]
- Abdi, H.; Williams, L.J. Principal component analysis. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 433–459. [Google Scholar] [CrossRef]
- Tenenbaum, J.B.; Silva, V.d.; Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science 2000, 290, 2319–2323. [Google Scholar] [CrossRef] [PubMed]
- Schölkopf, B.; Smola, A.; Müller, K.R. Nonlinear component analysis as a kernel eigenvalue problem. Neural Comput. 1998, 10, 1299–1319. [Google Scholar] [CrossRef]
- Weinberger, K.Q.; Sha, F.; Saul, L.K. Learning a kernel matrix for nonlinear dimensionality reduction. In Proceedings of the Twenty-First International Conference on Machine Learning, Banff, AB, Canada, 4–8 July 2004; p. 106. [Google Scholar]
- Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326. [Google Scholar] [CrossRef] [PubMed]
- Ciuffo, B.; Casas, J.; Montanino, M.; Perarnau, J.; Punzo, V. Gaussian process metamodels for sensitivity analysis of traffic simulation models: Case study of AIMSUN mesoscopic model. Transp. Res. Rec. 2013, 2390, 87–98. [Google Scholar] [CrossRef]
- Sobol’, I.M. On sensitivity estimation for nonlinear mathematical models. Mat. Model. 1990, 2, 112–118. [Google Scholar]
- Ge, Q.; Ciuffo, B.; Menendez, M. Combining screening and metamodel-based methods: An efficient sequential approach for the sensitivity analysis of model outputs. Reliab. Eng. Syst. Saf. 2015, 134, 334–344. [Google Scholar] [CrossRef]
- Pianosi, F.; Sarrazin, F.; Wagener, T. A Matlab toolbox for global sensitivity analysis. Environ. Model. Softw. 2015, 70, 80–85. [Google Scholar] [CrossRef]
- Welch, W.J.; Buck, R.J.; Sacks, J.; Wynn, H.P.; Mitchell, T.J.; Morris, M.D. Screening, predicting, and computer experiments. Technometrics 1992, 34, 15–25. [Google Scholar] [CrossRef]
- Moon, H. Design and Analysis of Computer Experiments for Screening Input Variables. Ph.D. Thesis, The Ohio State University, Columbus, OH, USA, 2010. [Google Scholar]
- Linkletter, C.; Bingham, D.; Hengartner, N.; Higdon, D.; Ye, K.Q. Variable selection for Gaussian process models in computer experiments. Technometrics 2006, 48, 478–490. [Google Scholar] [CrossRef]
- Khurana, M.; Winarto, H.; Sinha, A. Airfoil geometry parameterization through shape optimizer and computational fluid dynamics. In Proceedings of the 46th AIAA Aerospace Sciences Meeting and Exhibit, Reno, Nevada, 7–10 January 2008; p. 295. [Google Scholar]
- Jamshid, A. A Survey of Shape Parameterization Techniques; NASA Conference Publication: Washington, DC, USA, 1999; pp. 333–344. [Google Scholar]
- Hicks, R.M.; Henne, P.A. Wing design by numerical optimization. J. Aircr. 1978, 15, 407–412. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).