Abstract
Traditional calibration methods typically assume that the calibration parameters remain constant across input conditions, which can limit predictive accuracy when the parameters actually vary. To address this issue, we propose a novel calibration framework with time-varying parameters. Building on the idea of profile least squares, we first apply local linear smoothing to estimate the discrepancy function between the computer model and the true process, and then use local linear smoothing again to obtain pointwise estimates of the functional calibration parameter. Through rigorous theoretical analysis, we establish the consistency and asymptotic normality of the proposed estimator. Simulation studies and an application to NASA’s OCO-2 mission demonstrate that the proposed method effectively captures parameter variation and improves predictive performance.
Keywords:
functional parameter; profile least squares estimator; local linear smoother; uncertainty quantification
MSC:
62E20; 62G05; 62G20
1. Introduction
Computer models are employed to simulate or emulate complex physical systems that are costly to study directly. With the rapid advancement of computing technology, computer models have become increasingly appealing and are applied across diverse domains owing to their relatively low cost and time efficiency. Despite these advantages, computer models have certain drawbacks, one being the uncertainty stemming from unknown parameters, referred to as “calibration parameters”, which are intricately linked to the inherent properties of the physical system. Calibration aims to identify these parameters in order to achieve accurate predictions of the physical system.
The concept of calibration was first proposed in [1], where the authors developed a Bayesian calibration framework. Since then, Bayesian calibration methods have continued to emerge; see, e.g., [2,3,4,5]. These Bayesian calibration procedures have been applied successfully in real applications, but they lack rigorous theoretical guarantees. Frequentist calibration methods were developed to fill this theoretical gap.
Recently, Tuo and Wu proved the inconsistency of K-O (Kennedy and O’Hagan) calibration based on rigorous derivations [6], and they developed an L2 calibration framework by minimizing the L2-norm of the discrepancy function between the computer model and the true process (physical system) [7]. Moreover, Tuo and Wu derived the consistency and asymptotic normality of the proposed estimator and proved its semiparametric efficiency. Subsequently, Wong et al. investigated the theoretical properties of the ordinary least squares estimator of the calibration parameter and estimated the discrepancy function between the computer model and the true process via smoothing spline ANOVA [8]. In recent years, several new calibration methods have appeared. Tuo proposed projected kernel calibration, which has a natural Bayesian version and can construct credible intervals for the proposed estimator without large-sample approximation [9]. To address the issue of local convergence, Wang proposed a penalized projected kernel calibration that achieves both semiparametric efficiency and global convergence of the proposed estimator [10].
While existing frequentist calibration methods assume continuous output for convenience, practical scenarios often involve discrete or other types of output, such as binary output in biology, count output in medicine, etc. Sung employed a kernel logistic regression-based calibration procedure for binary output and applied it to cell adhesion studies [11]. For the count output, Sun and Fang adopted a penalized maximum likelihood method and constructed a calibration procedure, which also enjoys asymptotic normality and semiparametric efficiency [12].
To date, constant calibration has been considered, assuming the calibration parameters are constant and independent of the input variables. In reality, the calibration parameter often varies with the input variables; that is, it is a function of some of them. Tuo et al. addressed this issue and developed a functional calibration framework based on the reproducing kernel Hilbert space, where the calibration parameter is a function of the input variables X [13]. They derived the theoretical properties from two perspectives: the consistency of estimation and the consistency of prediction. Sometimes, the calibration parameter is not related to all input variables and varies only with one specified variable, such as time. Tian et al. proposed a novel framework for the inference of real-time parameters based on reinforcement learning and applied their method to physics-based models of turbofan engines [14]. The calibration procedure proposed in [14] is computationally effective but lacks theoretical justification. To the best of our knowledge, there is no real-time calibration procedure with rigorous theoretical guarantees.
Real-time calibration parameters resemble varying coefficients in statistics. Since computer models are non-linear, real-time calibration can be regarded as the problem of estimating a varying-coefficient non-linear model. Few articles address varying-coefficient non-linear models; a notable exception is [15], whose authors constructed a pointwise estimator of the functional coefficient based on a local linear smoother and successfully applied their procedure to a photosynthesis study.
The motivation for this article arises primarily from both theoretical and applied perspectives. From the theoretical perspective, most existing calibration approaches assume that model parameters are constant over time. However, in many scientific applications, the underlying parameters evolve dynamically with time. Ignoring this time variation may cause systematic bias, as the calibrated computer model cannot adequately capture the changing physical process. Our work addresses this limitation by proposing a calibration framework that explicitly accounts for time-varying parameters, thereby improving both theoretical understanding and estimation accuracy. From the application perspective, the motivation is exemplified by the NASA Orbiting Carbon Observatory-2 (OCO-2) mission [16]. In this mission, the forward model plays a central role in the Observing System Uncertainty Experiments (OSUE), where its accuracy directly affects the evaluation of retrieval algorithms. Importantly, the forward model involves several geometric parameters—such as instrument and solar azimuth/zenith angles—that are inherently time-dependent. Accurately calibrating these time-varying parameters is crucial for the reliable prediction of radiances and, ultimately, for improving the quality of atmospheric carbon retrievals. This concrete application highlights the practical relevance and necessity of the proposed framework.
Building on this motivation, we adopt the idea of [15], obtain a local linear estimator of the discrepancy function between the computer model and the true process, and construct a pointwise estimator of the functional parameter via quasi-profile least squares in this paper. Furthermore, we establish the rate of convergence for the estimator of the discrepancy function, as well as the asymptotic normality of the pointwise estimator for the real-time parameter.
This paper is organized as follows. In Section 2, we develop the proposed calibration procedure based on the local linear approximation and the quasi-profile least squares. In Section 3, we investigate the asymptotic properties of the proposed estimators. In Section 4 and Section 5, two examples including simulated and practical models and an application in NASA’s OCO-2 mission are employed to illustrate the superior accuracy of the proposed method. Finally, we draw some conclusions and discuss future extensions in Section 6. The proofs of all the theorems in this paper are provided in Appendix A.
2. Main Framework
2.1. Notations
Throughout this paper, we denote by the output of the i-th physical experiment at time t, with corresponding input variables . And stands for the random noise at time t. The true physical response function is written as , while denotes the computer model output with functional calibration parameter , and indicates the optimal calibration parameter. The model discrepancy between the true process and the computer model is represented by , and its true counterpart by . For discrete observations, denotes the measurement at time . The local linear approximation coefficients of are and , with and as their estimators. Kernel smoothing functions are written as and , with corresponding bandwidths , and . Other notations include and , where . Let represent the norm and be the Euclidean norm for a matrix or vector. We use “≍” to indicate asymptotic equivalence, i.e., means for some positive constant c as , and “⇒” indicates “converges in distribution”. Finally, n and T denote the number of experiments and the number of discretization time points, respectively.
2.2. Methodology
We suppose the output of the physical experiments is generated by the following model:
where , , is a random process, and is an unknown deterministic function. Accordingly, we denote the computer model with the functional parameter by for and . Similarly to [7,13], we define the optimal functional parameter as follows:
Let be the discrepancy between and . We regard as the true discrepancy function. Thus, we can rewrite model (1) as follows:
We discretize the above model (3) by taking values on for every . Then, we have
where . By Taylor’s expansion, for close to , we have
where . If are given, then we can obtain an initial estimator of by solving the following optimization problem:
where is a two-dimensional kernel function. By [17], we can obtain the explicit expression of as follows:
where
diag , , and Furthermore, we can obtain quasi-profile least squares estimators and of and by replacing with .
where is a univariate kernel function, and and are obtained by the iterative linear regression algorithm of the following optimization problem:
Since is generally non-linear, optimization problem (7) does not have a closed-form solution, which poses a challenge for proving the asymptotic properties of . Inspired by [15], we replace in (7) with its first-order Taylor expansion, that is,
where
We adopt the iterative linear regression algorithm to obtain the solutions of (7). Denote the k-th iteration of a and b by . Then, we update the values of a and b as follows:
where with
and diag .
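As a concrete illustration of the algorithm above, the following sketch implements the iterative linear regression update for a scalar calibration parameter on a common time grid. All names and signatures here (`f`, `df_dtheta`, `delta_hat`, the Epanechnikov kernel, the bandwidth `h`) are illustrative assumptions rather than part of the paper’s formal development:

```python
import numpy as np

def epanechnikov(u):
    # Epanechnikov kernel: symmetric with compact support, a common
    # choice satisfying conditions such as A1.
    return 0.75 * np.maximum(1.0 - u**2, 0.0)

def profile_ls_pointwise(t0, t_grid, y, x, f, df_dtheta, delta_hat,
                         h, theta0=0.0, n_iter=50, tol=1e-8):
    """Quasi-profile least squares estimate of theta(t0) (sketch).

    Hypothetical signatures: f(x, t, theta) is the computer model,
    df_dtheta its partial derivative in theta, delta_hat(x, t) the
    estimated discrepancy, and y[i, j] the i-th replicate at t_grid[j].
    The local linear approximation theta(t) ~ a + b*(t - t0) is refit
    by linearizing f around the current iterate, as in the text.
    """
    n, T = y.shape
    w = epanechnikov((t_grid - t0) / h) / h       # kernel weights in time
    a, b = theta0, 0.0
    for _ in range(n_iter):
        theta_t = a + b * (t_grid - t0)           # current local linear fit
        # linearize the computer model in theta at theta_t
        f0 = np.array([[f(x[i], t_grid[j], theta_t[j]) for j in range(T)]
                       for i in range(n)])
        g = np.array([[df_dtheta(x[i], t_grid[j], theta_t[j]) for j in range(T)]
                      for i in range(n)])
        resid = y - f0 - np.array([[delta_hat(x[i], t_grid[j]) for j in range(T)]
                                   for i in range(n)])
        # weighted linear regression of resid on (g, g*(t - t0))
        Z1, Z2 = g, g * (t_grid - t0)
        W = np.tile(w, (n, 1))
        A = np.array([[np.sum(W * Z1 * Z1), np.sum(W * Z1 * Z2)],
                      [np.sum(W * Z1 * Z2), np.sum(W * Z2 * Z2)]])
        r = np.array([np.sum(W * Z1 * resid), np.sum(W * Z2 * resid)])
        da, db = np.linalg.solve(A, r)
        a, b = a + da, b + db
        if abs(da) + abs(db) < tol:
            break
    return a   # pointwise estimate of theta(t0)
```

Because each update solves a weighted least squares problem in the increments (a Gauss-Newton step), the iteration converges in a single step whenever the computer model is linear in the parameter.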
Remark 1.
In practice, the computer model is generally unknown, and thus, a surrogate model is often employed in the calibration procedure. In this paper, we focus on the estimation of the functional parameter in the computer model , and for convenience, we assume that the computer model is known. Even if is unknown, one can construct a surrogate model via Gaussian process regression based on the dataset obtained from N computer experiments. By replacing in Equation (7) with , the proposed estimator can still be obtained. Provided that is consistent, the resulting estimator also maintains consistency; see [7,12] for more details. Hence, the proposed calibration method based on the surrogate model performs as well as when the true computer model is available.
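The surrogate construction described in this remark can be sketched as follows, assuming noise-free simulator runs and a squared-exponential kernel with an illustrative fixed length scale (a real application would estimate the kernel hyperparameters from the design):

```python
import numpy as np

def rbf_kernel(A, B, length_scale=0.3):
    # squared-exponential kernel; the length scale is an illustrative choice
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / length_scale**2)

def gp_surrogate(X_sim, y_sim, nugget=1e-6):
    """Fit a zero-mean GP to N simulator runs and return a predictor.

    X_sim: (N, d) design points (inputs and calibration parameter stacked),
    y_sim: (N,) simulator outputs. The returned f_hat plays the role of
    the surrogate that replaces the true computer model in Equation (7).
    """
    K = rbf_kernel(X_sim, X_sim) + nugget * np.eye(len(X_sim))
    alpha = np.linalg.solve(K, y_sim)            # K^{-1} y, cached once
    def f_hat(X_new):
        # posterior mean at new points
        return rbf_kernel(np.atleast_2d(X_new), X_sim) @ alpha
    return f_hat
```

The small nugget stabilizes the linear solve; as the remark notes, consistency of the surrogate is what carries over to the resulting calibration estimator.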
3. Theoretical Properties
3.1. Assumptions
Before we give the asymptotic properties of the proposed estimators, we must list some necessary conditions. First, we make some general assumptions for the structure of the dataset and the kernel functions as follows:
- A1. The kernel function is symmetric, and satisfies
- A2. The bivariate kernel function is of order , that is, where is a multi-index and .
- A3. The kernel function is continuously differentiable on its support (0, 1) with for all and .
- A4. The input variables , are independent and identically distributed (i.i.d.) random vectors. are independent and identically distributed with mean zero and variance , and . For every , and are independent.
- A5. is a unique solution of Equation (2), and is a compact subset of .
- A6. Let and , then
- A7. Both and are non-singular.
- A8.
Conditions A1–A3 impose restrictions on the kernel functions and , and were adopted in [17,18,19]. Furthermore, we need to specify the bandwidths and and give some conditions on the joint distribution, referring to [18]. Denote the joint probability density functions of , and by , and , respectively.
- B1. , , and .
- B2. , and .
- B3. exists and is continuous on for .
- B4. is continuous on uniformly in y; exists and is continuous on uniformly in y, for .
- B5. is continuous on uniformly in .
- B6. exists and is continuous on for .
- B7. exists and is continuous on , for .
- B8. is continuous on uniformly in ; exists and is continuous on uniformly in , for .
- B9. As , both and tend to 0.
In addition, we need to make some restrictions for the computer model . In this article, we assume is known for convenience. In practice, we can obtain an estimator (surrogate model) of using Gaussian process regression based on the simulated dataset; see Remark 1 for more details.
- C1. .
- C2. For any ,
- C3. For , , where is a reproducing kernel Hilbert space and is Donsker for all .
Assumptions C1–C2 require that both the computer model and its first-order partial derivatives with respect to are bounded. These are standard regularity conditions that are typically easy to satisfy in practice and have been widely adopted in the literature; see, for example, [7,13]. Furthermore, we assume that the second-order partial derivatives of with respect to are continuous. This is a relatively mild assumption because it does not require the continuity of higher-order derivatives. Finally, Assumption C3 states that the computer model for any fixed satisfies the Donsker property, which is crucial for establishing asymptotic normality via empirical process theory; see [7] for further discussion.
3.2. Asymptotic Normality
Theorem 1.
Under Conditions A1, A2, A6, and B1–B8, we have
where .
Theorem 1 shows that the rate of convergence of is , which is slower than . In this article, we select a -consistent estimator of ; thus, the above rate becomes .
Theorem 2.
Under Conditions A1–A8, B9, and C1–C3, we denote and ; then, we have
where is defined in Condition A7 and
where represents taking the expectation over random and .
Theorem 2 establishes the asymptotic normality of the raw estimator for the time-varying parameter at a fixed , which facilitates the asymptotic properties of the proposed estimator. Based on the above results, we can derive the asymptotic distribution of as follows:
Theorem 3.
Under Conditions A1–A8, B1–B9, and C1–C3, we denote ; then, we have
where has also been defined in Condition A7 and
Theorem 3 establishes the asymptotic normality of the proposed estimator for the time-varying parameter at a fixed point , which is also called pointwise asymptotic normality.
4. Numerical Studies
4.1. Numerical Simulations
Suppose the output from physical experiments is generated by the following model:
where is the optimal time-varying parameter. According to Section 2, we know that the computer model and the true discrepancy function is . We assume , follows the uniform distribution and select and , respectively.
First, we fix and to compare three estimators of the calibration function (the proposed estimator, the raw estimator, and the constant calibration estimator) with the true calibration function, and we let where . The results are represented in Figure 1.
Figure 1.
Comparison of the estimated calibration functions obtained from the proposed local linear estimator and two existing estimators (the raw local linear estimator without estimating the discrepancy function and the least squares estimator for a constant parameter) with the true calibration function in a toy model over the time period with and .
From Figure 1, we find that the proposed local linear estimator most closely follows the true parameter curve. In contrast, the raw estimator deviates more noticeably, and the constant calibration estimator shows a severe mismatch with the true parameter curve. These results indicate that the proposed time-varying calibration method achieves the best overall performance, the raw time-varying approach performs worse but still better than the constant calibration, and the constant calibration scheme is clearly infeasible in this scenario. To measure the performance of the proposed estimator, we utilize two criteria: the mean squared error (MSE) and the mean absolute prediction error (MAPE), where
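Under their standard definitions, the two criteria can be computed as in the following sketch (here `theta_hat`/`theta_true` are the estimated and true calibration functions evaluated on the time grid, and `y_pred`/`y_obs` are predicted and held-out responses; the names are illustrative):

```python
import numpy as np

def mse(theta_hat, theta_true):
    # mean squared error of the estimated calibration function:
    # MSE = (1/T) * sum_j (theta_hat(t_j) - theta*(t_j))^2
    return np.mean((np.asarray(theta_hat) - np.asarray(theta_true)) ** 2)

def mape(y_pred, y_obs):
    # mean absolute prediction error (note: in this paper MAPE is the
    # mean absolute error of prediction, not a percentage error):
    # MAPE = (1/n) * sum_i |y_pred_i - y_obs_i|
    return np.mean(np.abs(np.asarray(y_pred) - np.asarray(y_obs)))
```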
All results are shown in Table 1.
Table 1.
MSE and MAPE of the estimators of the calibration function with .
Table 1 reveals several noteworthy observations. First, the real-time calibration estimators, the ordinary least squares (OLS) and the quasi-profile least squares (PLS), perform significantly better than the constant calibration estimator (COLS). Moreover, the proposed PLS estimator marginally outperforms OLS, as is evident from the MSE and MAPE. Specifically, irrespective of sample size and error-term variance, the MSE of the constant calibration COLS is more than twice as large as that of OLS and PLS. In terms of MAPE, although the difference between COLS and the proposed estimators OLS and PLS is minor, COLS consistently exhibits a higher MAPE than the proposed estimators. Despite the similar performance of PLS and OLS, owing to the relatively small values of the discrepancy function , PLS consistently performs better than OLS.
4.2. Calibrating an Environmental Model for the Concentrations of Pollutants
Air pollution is a global problem that has long been of public concern. Pollutant concentration is one of the important indicators of air pollution, and it is an effective tool for predicting air pollution. The models for pollutant concentration have been investigated by many studies, such as [20,21]. However, existing calibration procedures for models of pollutant concentration always regard the calibration parameters as constants, which would not be accurate. Bliznyuk et al. used the following model about concentration representation [20]:
where is the calibration parameter. Moreover, and represent the mass, location, diffusion rate, and time of the second spill, not all of which are constant with respect to time, such as . is the diffusion rate of the pollutant, which varies with time. Since the other three parameters were calibrated in [20], we only consider as a function of time. We replace and with their true values, and with ; then, model (13) becomes
Referring to [20], we set and with . In addition, we assume that there exists a discrepancy between the above model and the true process with and that the random error follows a normal distribution . As in Section 4.1, we also utilize the MSE and MAPE to measure the performance of the estimators of the calibration function, which are shown in Table 2.
Table 2.
MSE and MAPE of the estimators of the calibration function with .
As we expected, the proposed estimator PLS has the most favorable performance in terms of MSE and MAPE, while the constant calibration COLS performs the worst by a significant margin. Specifically, the MSE of the proposed estimator PLS consistently remains on the order of , while the MSE of the constant estimator COLS consistently remains on the order of . From the perspective of MAPE, the proposed estimators OLS and PLS are more accurate than COLS, especially when the sample size is relatively small. With increasing sample size, the discrepancy between the MAPE of OLS and that of PLS gradually diminishes, while the MAPE of COLS consistently exceeds that of the proposed estimators. To further discern the performance of the proposed estimators, we visualize the estimated values of the calibration function under and , compared with the true values, in Figure 2.
Figure 2.
Comparison of the estimated calibration functions obtained from the proposed local linear estimator and two existing estimators (the raw local linear estimator without estimating the discrepancy function and the least squares estimator for a constant parameter) with the true calibration function in a pollutant concentration model over the time period with and .
From Figure 2, we find that the values estimated by the constant calibration are distant from the true calibration function, while the values estimated by the proposed method are close to the true function. Comparing OLS with PLS, we find that the values estimated by PLS are always close to the true values of the calibration function, while the values estimated by OLS have some significant fluctuations and sometimes deviate from the true values of the calibration function. In summary, the proposed estimator PLS consistently proves to be valid and efficient for the environmental model (14).
5. An Application to Calibrate the Forward Model in NASA’s OCO-2 Mission
In the NASA Orbiting Carbon Observatory-2 (OCO-2) mission, Observing System Uncertainty Experiments (OSUEs) play a crucial role in performing probabilistic assessments on retrieval algorithms. The forward model is an essential component of the OSUEs, and its prediction accuracy for real-world scenarios directly impacts the evaluation of retrieval algorithms, as detailed in [16].
The forward model describes the complex physical relationship between the atmospheric variable and radiances . This model typically involves four geometric parameters, Instrument Azimuth Angle (Inst-AziA), Instrument Zenith Angle (Inst-ZenA), Solar Azimuth Angle (Sol-AziA), and Solar Zenith Angle (Sol-ZenA), denoted by , which are time-dependent. These angles define the relative positions of the instrument’s line of sight and the incoming solar radiation. Specifically, azimuth angles describe the horizontal orientation of either the instrument or the sun with respect to a reference direction (typically north), while zenith angles measure the deviation from the vertical. Together, they determine the optical path of sunlight through the atmosphere and thus strongly influence the measured radiances.
Given the high computational cost of this forward model, we constructed a surrogate model based on experimental data. We utilized the simulated dataset from [16] and first employed Principal Component Analysis (PCA) to reduce the dimensionality of from 66 to 4. We considered the spectrometer’s measurement of the radiation intensity in the strong CO2 wavelength band and computed its functional PCA weight as a new output, because the output is functional data with respect to wavelength w. Finally, we used Gaussian process regression to construct a surrogate model of the forward model based on the normalized experimental parameters.
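The dimension-reduction step can be sketched with a generic SVD-based PCA (an illustrative sketch, not the mission processing code; the numbers 66 and 4 come from the text):

```python
import numpy as np

def pca_reduce(X, n_components=4):
    """Project rows of X onto the leading principal components.

    X: (N, p) matrix of simulated state vectors (p = 66 in the OCO-2
    application). Returns the scores (N, n_components), the component
    loadings, and the column means needed to embed new points.
    """
    mu = X.mean(axis=0)
    # SVD of the centered data: rows of Vt are the principal directions
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    scores = (X - mu) @ Vt[:n_components].T
    return scores, Vt[:n_components], mu
```

A new observation x is embedded as `(x - mu) @ loadings.T`, and the surrogate model is then fit on the low-dimensional scores instead of the raw 66-dimensional state.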
To reduce the uncertainty of the forward model, we need to identify the time-varying parameter based on true observations. We downloaded data comprising 20 days from NASA’s official website and used MAPE and mean square prediction error (MSPE) to measure the performance of the involved calibration procedures. The calibrated results, including MAPE, MSPE, and the and quantiles of the pointwise estimated values of the calibration parameter, are presented in Table 3 and Figure 3.
Table 3.
MAPE and MSPE of the estimators of the calibration function in forward model based on NASA’s observations.
Figure 3.
Comparison of the predicted results for strong () obtained by calibrating the forward model with the proposed local linear estimator and two existing estimators (the raw local linear estimator without estimating discrepancy function, the least squares estimator for constant parameter) with the true observations (NASA’s satellite remote sensing data).
From Table 3, the proposed calibration procedure PLS achieves the minimal prediction error, while OLS is slightly worse than PLS. As expected, the constant calibration procedure COLS performs poorly in terms of MAPE and MSPE, validating our assumption of a time-varying parameter. Figure 3 compares the predicted values obtained by calibrating the forward model with the different methods. The predicted values from the proposed PLS method are closest to the true observations, while those from the constant calibration procedure deviate significantly from them. This application to the forward model further verifies the efficiency of the proposed calibration procedure.
6. Conclusions and Discussion
In this article, we proposed a real-time calibration procedure for computer models with a time-varying parameter. To construct a pointwise estimator of the time-varying parameter at a fixed point, we employed a quasi-profile least squares estimation approach: we first derived a local linear estimator of the discrepancy function given the calibration parameter, and then computed a quasi-profile least squares estimator of the calibration function at a specified time point. Additionally, we established the convergence rate of the estimator of the discrepancy function and explored the asymptotic properties of the proposed estimator of the time-varying parameter. Furthermore, we conducted numerical simulations and considered a real-world example, demonstrating the favorable performance of the proposed method.
Although our proposed method exhibits superior performance in both asymptotic theory and computational efficiency, there are some drawbacks in our calibration procedure. First, we assume that the computer model is fully known, which may not hold in practical applications. To address this, future research could focus on constructing surrogate models to approximate unknown computer models with time-varying parameters. Second, we assume that random errors are independent and identically distributed with finite variance. In situations where errors are correlated or exhibit heteroscedasticity, the efficiency of the proposed estimator may be reduced. Extending the method to account for correlated or non-standard error structures could significantly broaden its applicability. Finally, enhancing estimation efficiency through weighted quasi-profile least squares or other advanced techniques presents a promising avenue for further investigation. Overall, these potential extensions suggest that the proposed framework could be adapted to a wider range of complex and realistic modeling scenarios.
Author Contributions
Conceptualization, Y.S. and X.F.; methodology, Y.S.; software, Y.S.; formal analysis, Y.S. and X.F.; investigation, Y.S.; resources, X.F. and Y.S.; writing—original draft preparation, Y.S.; writing—review and editing, Y.S.; visualization, Y.S. and X.F.; supervision, X.F.; project administration, X.F. and Y.S.; funding acquisition, Y.S. and X.F. All authors have read and agreed to the published version of the manuscript.
Funding
This research was supported by the China Postdoctoral Science Foundation under Grant Numbers 2025M773095 and GZC20252106.
Data Availability Statement
The original data presented in the study are openly available from the NASA Goddard Earth Science Data and Information Services Center at https://disc.gsfc.nasa.gov/datasets?keywords=OCO2_l1&page=1&processingLevel=1,1B,1A (accessed on 10 September 2025).
Conflicts of Interest
The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.
Appendix A. Proofs of Theorems
Proof of Theorem 1.
Let
where , when while , and . Let
Then, we have
that is,
which are equivalent to
By Taylor’s expansion and Condition A6, we know
which yields that
By Cramer’s rule, we can obtain that
where is the j-th component of ,
are matrices, where is defined by replacing the j-th column of with a vector R. Thus, we can obtain
By the proof of Theorem 3.2 of [18], we know that
which implies that
□
Proof of Theorem 2.
Let
By the definitions of and , we have . Let ; then,
which is equivalent to
First, we need to prove the consistency of . By [15], we only need to prove in probability. Denote and . By the definition of , we know that
which facilitates that
By Conditions A6, A8, and C2, we know that
which implies that
which combined with the weak law of large numbers implies that
Thus, we obtain that
where the last equality holds by Equation (A10) and Condition A3. Combining (A11) and the proof of Lemma 1 of the Supplementary Material of [15], we can obtain the consistency of .
Next, we derive the asymptotic results of . Recall the definition of ; we have
Define the empirical process
where and . Denote by a neighbourhood of . By conditions A3, A8 and C2, we know the set is Donsker. Condition C3 implies that
is also Donsker. From conditions A3, A8, and C1–C2, we know that both and are uniformly bounded, which implies that the product class is Donsker. Referring to the rigorous derivations in the proof of Theorem 1 of [7] and the consistency of , we can obtain that in probability, that is,
which is equivalent to
Under Conditions A3 and A6, we have
which in combination with (A13) implies that
which together with (A12) yields that
Define the empirical process
Similarly to the derivation of , under Conditions A3, A6, A8, and C1–C3, we can obtain that in probability, that is,
Let
Then, we have
where lies between and . Combining the above results and the fact as , we have
By the consistency of ,
Thus, we obtain the asymptotic result of as follows:
Next, we consider . Define the empirical process
Under conditions A3–A4, A8, and C2, let ; we know the function class
is a Donsker class, which implies that weakly converges in to a tight Gaussian process by Chapter 2 of [22]. Since for all n, we have . According to the consistency of [15] and the continuous mapping theorem in [23], we obtain , which yields that
Similarly, we can obtain that
□
Proof of Theorem 3.
Let
By the definitions of , we have , that is,
where , . According to the derivation of in the proof of Theorem 2, we have
Next, we consider . Similarly to the derivation of of the proof of Theorem 2, it is not hard to obtain the asymptotic result of as follows:
where
At last, we derive the asymptotic result of . By Theorem 1, we have
where the second equality holds by Theorem 1 and the third equality holds by conditions A1 and C2. Combining (A20)–(A23), we have
which together with condition A7 facilitates that
where
□
References
- Kennedy, M.C.; O’Hagan, A. Bayesian calibration of computer models. J. R. Stat. Soc. Ser. B 2001, 63, 425–464. [Google Scholar] [CrossRef]
- Drignei, D.; Morris, M.D. Empirical Bayesian analysis for computer experiments involving finite difference codes. J. Am. Stat. Assoc. 2006, 101, 1527–1536. [Google Scholar] [CrossRef]
- Higdon, D.; Gattiker, J.; Williams, B.; Rightley, M. Computer model calibration using high-dimensional output. J. Am. Stat. Assoc. 2008, 103, 570–583. [Google Scholar] [CrossRef]
- Reich, B.J.; Storlie, C.B.; Bondell, H.D. Variable selection in Bayesian smoothing spline ANOVA models: Application to deterministic computer codes. Technometrics 2009, 51, 110–120. [Google Scholar] [CrossRef]
- Xie, F.Z.; Xu, Y.X. Bayesian projected calibration of computer models. J. Am. Stat. Assoc. 2021, 116, 1965–1982. [Google Scholar] [CrossRef]
- Tuo, R.; Wu, C.F.J. A theoretical framework for calibration in computer models: Parametrization, estimation and convergence properties. SIAM/ASA J. Uncertain. Quantif. 2016, 4, 767–795. [Google Scholar] [CrossRef]
- Tuo, R.; Wu, C.F.J. Efficient calibration for imperfect computer models. Ann. Stat. 2015, 43, 2331–2352. [Google Scholar] [CrossRef]
- Wong, K.W.; Storlie, C.B.; Lee, C.M. A frequentist approach to computer model calibration. J. R. Stat. Soc. Ser. B 2017, 79, 635–648. [Google Scholar] [CrossRef]
- Tuo, R. Adjustments to Computer Models via Projected Kernel Calibration. SIAM/ASA J. Uncertain. Quantif. 2019, 7, 553–578. [Google Scholar] [CrossRef]
- Wang, Y. Penalized Projected Kernel Calibration for Computer Models. SIAM/ASA J. Uncertain. Quantif. 2022, 10, 1652–1683. [Google Scholar] [CrossRef]
- Sung, C.L.; Berber, B.D.; Walker, B.J. Calibration of Inexact Computer Models with Heteroscedastic Errors. SIAM/ASA J. Uncertain. Quantif. 2022, 10, 1733–1752. [Google Scholar] [CrossRef]
- Sun, Y.; Fang, X. A model calibration procedure for count response. Commun. Stat. Theory Methods 2024, 53, 4272–4289. [Google Scholar] [CrossRef]
- Tuo, R.; He, S.Y.; Pourhabib, A.; Ding, Y.; Huang, J.Z. A Reproducing Kernel Hilbert Space Approach to Functional Calibration of Computer Models. J. Am. Stat. Assoc. 2023, 118, 883–897. [Google Scholar] [CrossRef]
- Tian, Y.; Chao, M.A.; Kulkarni, C.; Geobel, K.; Fink, O. Real-time model calibration with deep reinforcement learning. Mech. Syst. Signal Process. 2022, 165, 108284. [Google Scholar] [CrossRef]
- Kürüm, E.; Li, R.Z.; Wang, Y.; Sentürk, D. Nonlinear Varying-Coefficient Models with Applications to a Photosynthesis Study. J. Agric. Biol. Environ. Stat. 2013, 19, 57–81. [Google Scholar] [CrossRef]
- Ma, P.L.; Mondal, A.; Konomi, B.A.; Hobbs, J.; Song, J.J.; Kang, E.L. Computer Model Emulation with High-Dimensional Functional Output in Large-Scale Observing System Uncertainty Experiments. Technometrics 2022, 64, 65–79. [Google Scholar] [CrossRef]
- Chen, J.; Li, D.G.; Liang, H.; Wang, S.L. Semiparametric GEE Analysis in Partially Linear Single-Index Models for Longitudinal Data. Ann. Stat. 2015, 43, 1682–1715. [Google Scholar] [CrossRef]
- Jiang, C.R.; Wang, J.L. Covariate adjusted functional principal components analysis for longitudinal data. Ann. Stat. 2010, 38, 1194–1226. [Google Scholar] [CrossRef]
- Yao, F.; Müller, H.G.; Wang, J.L. Functional Data Analysis for Sparse Longitudinal Data. J. Am. Stat. Assoc. 2005, 100, 577–590. [Google Scholar] [CrossRef]
- Bliznyuk, N.; Ruppert, D.; Shoemaker, C.; Regis, R.; Wild, S.; Mugunthan, P. Bayesian Calibration and Uncertainty Analysis for Computationally Expensive Models Using Optimization and Radial Basis Function Approximation. J. Comput. Graph. Stat. 2008, 17, 270–294. [Google Scholar] [CrossRef]
- Mugunthan, P.; Shoemaker, C.A. Assessing the Impacts of Parameter Uncertainty for Computationally Expensive Groundwater Models. Water Resour. Res. 2006, 42, W10428. [Google Scholar] [CrossRef]
- van der Vaart, A.W.; Wellner, J.A. Weak Convergence and Empirical Processes: With Applications to Statistics; Springer: New York, NY, USA, 1996. [Google Scholar] [CrossRef]
- van der Vaart, A.W. Asymptotic Statistics; Cambridge Series in Statistical and Probabilistic Mathematics 3; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar] [CrossRef]