Article

Calibration for Computer Models with Time-Varying Parameter

School of Mathematical Sciences, Peking University, Beijing 100871, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(18), 2969; https://doi.org/10.3390/math13182969
Submission received: 18 July 2025 / Revised: 10 September 2025 / Accepted: 11 September 2025 / Published: 13 September 2025

Abstract

Traditional calibration methods often assume constant parameters that remain unchanged across input conditions, which can limit predictive accuracy when parameters actually vary. To address this issue, we propose a novel calibration framework with time-varying parameters. Building on the idea of profile least squares, we first apply local linear smoothing to estimate the discrepancy function between the computer model and the true process, and then use local linear smoothing again to obtain pointwise estimates of the functional calibration parameter. Through rigorous theoretical analysis, we establish the consistency and asymptotic normality of the proposed estimator. Simulation studies and an application to NASA’s OCO-2 mission demonstrate that the proposed method effectively captures parameter variation and improves predictive performance.

1. Introduction

Computer models are employed to simulate or emulate complex physical systems that are costly to study directly. With the rapid advancement of computer technology, computer models have become increasingly appealing and are applied in various domains due to their relatively low cost and time efficiency. Despite these advantages, computer models possess certain drawbacks, one being the uncertainty stemming from unknown parameters, referred to as "calibration parameters", which are intricately linked to the inherent properties of physical systems. Calibration aims to identify these parameters so as to achieve precise predictions of the physical system.
The concept of calibration was first proposed in [1], where the authors developed a Bayesian calibration framework. From then on, Bayesian calibration methods began to emerge, such as in [2,3,4,5]. These Bayesian calibration procedures have been applied successfully in real applications, but they lack rigorous theoretical guarantees. The emergence of frequentist calibration filled this theoretical gap.
Recently, Tuo and Wu proved the inconsistency of K-O (Kennedy and O'Hagan) calibration based on rigorous derivations [6], and they developed an $L_2$ calibration framework by minimizing the $L_2$-norm of the discrepancy function between the computer model and the true process (physical system) [7]. Moreover, Tuo and Wu also derived the consistency and asymptotic normality of the proposed estimator and proved its semiparametric efficiency. Subsequently, Wong et al. investigated the theoretical properties of the ordinary least squares estimator of the calibration parameter and estimated the discrepancy function between the computer model and the true process via smoothing spline ANOVA [8]. In recent years, there have been some new calibration methods. Tuo proposed a projected kernel calibration, which has a natural Bayesian version and can construct credible intervals for the proposed estimator without large sample approximation [9]. To address the issue of local convergence, Wang proposed a penalized projected kernel calibration that achieves both semiparametric efficiency and global convergence of the proposed estimator [10].
While existing frequentist calibration methods assume continuous output for convenience, practical scenarios often involve discrete or other types of output, such as binary output in biology, count output in medicine, etc. Sung employed a kernel logistic regression-based calibration procedure for binary output and applied it to cell adhesion studies [11]. For the count output, Sun and Fang adopted a penalized maximum likelihood method and constructed a calibration procedure, which also enjoys asymptotic normality and semiparametric efficiency [12].
So far, we have discussed constant calibration, which assumes that the calibration parameters are constant and independent of the input variables. In reality, the calibration parameter often varies with the input variables, meaning it is a function of some input variables. Tuo et al. addressed this issue and developed a functional calibration framework based on the reproducing kernel Hilbert space, where the calibration parameter is a function of the input variables $X$ [13]. They derived the theoretical properties from two perspectives: the consistency of estimation and the consistency of prediction. Sometimes, the calibration parameter is not related to all input variables and only varies with one specified variable, such as time. Tian et al. proposed a novel framework for the inference of real-time parameters based on reinforcement learning and applied their method to physics-based models of turbofan engines [14]. The calibration procedure proposed in [14] is computationally sound but lacks theoretical justification. To the best of our knowledge, there is no real-time calibration procedure with rigorous theoretical guarantees.
Real-time calibration parameters resemble varying coefficients in statistics. Since computer models are generally non-linear, real-time calibration can be regarded as the problem of estimating a varying-coefficient non-linear model. Few articles address varying-coefficient non-linear models, a notable exception being [15], whose authors constructed a pointwise estimator of the functional coefficient based on a local linear smoother and successfully applied the procedure to a photosynthesis study.
The motivation for this article arises primarily from both theoretical and applied perspectives. From the theoretical perspective, most existing calibration approaches assume that model parameters are constant over time. However, in many scientific applications, the underlying parameters evolve dynamically with time. Ignoring this time variation may cause systematic bias, as the calibrated computer model cannot adequately capture the changing physical process. Our work addresses this limitation by proposing a calibration framework that explicitly accounts for time-varying parameters, thereby improving both theoretical understanding and estimation accuracy. From the application perspective, the motivation is exemplified by the NASA Orbiting Carbon Observatory-2 (OCO-2) mission [16]. In this mission, the forward model plays a central role in the Observing System Uncertainty Experiments (OSUE), where its accuracy directly affects the evaluation of retrieval algorithms. Importantly, the forward model involves several geometric parameters—such as instrument and solar azimuth/zenith angles—that are inherently time-dependent. Accurately calibrating these time-varying parameters is crucial for the reliable prediction of radiances and, ultimately, for improving the quality of atmospheric carbon retrievals. This concrete application highlights the practical relevance and necessity of the proposed framework.
Building on this motivation, we adopt the idea of [15], obtain a local linear estimator of the discrepancy function between the computer model and the true process, and construct a pointwise estimator of the functional parameter via quasi-profile least squares in this paper. Furthermore, we establish the rate of convergence for the estimator of the discrepancy function, as well as the asymptotic normality of the pointwise estimator for the real-time parameter.
This paper is organized as follows. In Section 2, we develop the proposed calibration procedure based on the local linear approximation and the quasi-profile least squares. In Section 3, we investigate the asymptotic properties of the proposed estimators. In Section 4 and Section 5, two examples including simulated and practical models and an application in NASA’s OCO-2 mission are employed to illustrate the superior accuracy of the proposed method. Finally, we draw some conclusions and discuss future extensions in Section 6. The proofs of all the theorems in this paper are provided in Appendix A.

2. Main Framework

2.1. Notations

Throughout this paper, we denote by $y_i^p(t)$ the output of the $i$-th physical experiment at time $t$, with corresponding input variables $x_i \in \Omega \subset \mathbb{R}^p$, and $e_i(t)$ stands for the random noise at time $t$. The true physical response function is written as $\zeta(x,t)$, while $y^s(x,\theta(t))$ denotes the computer model output with functional calibration parameter $\theta(t) \in \Theta \subset \mathbb{R}^d$, and $\theta^*(t)$ indicates the optimal calibration parameter. The model discrepancy between the true process and the computer model is represented by $\delta(x,\theta(t))$, and its true counterpart by $\delta^*(x,t)$. For discrete observations, $y_{ij}^p$ denotes the measurement at time $t_{ij}$. The local linear approximation coefficients of $\theta^*(t)$ are $a^*$ and $b^*$, with $\hat a$ and $\hat b$ as their estimators. Kernel smoothing functions are written as $K_1$ and $K_2$, with corresponding bandwidths $h_{t1}$, $h_{t2}$, $h_{t3}$, and $h_x$. Other notations include $\gamma = (a^T, b^T)^T$ and $\gamma^* = (a^{*T}, b^{*T})^T$, where $\tilde T(t) = (1, t-t_0)^T$. Let $\|f\|_{L_2(\Omega)}^2 = \int_\Omega f(x)^2\,dx$ denote the squared $L_2$ norm and $\|\cdot\|$ the Euclidean norm of a matrix or vector. We use "$\asymp$" to indicate asymptotic equivalence, i.e., $a_n \asymp b_n$ means $a_n/b_n \to c$ for some positive constant $c$ as $n \to \infty$, and "$\Rightarrow$" indicates convergence in distribution. Finally, $n$ and $T$ denote the number of experiments and the number of discretization time points, respectively.

2.2. Methodology

We suppose the output $y_i^p(t)$, $i = 1, \ldots, n$, of the physical experiments is generated by the following model:
$$y_i^p(t) = \zeta(x_i, t) + e_i(t), \quad (1)$$
where $x_i \in \Omega \subset \mathbb{R}^p$, $t \in \mathcal{T}$, $e_i(t)$ is a random process, and $\zeta(\cdot,\cdot)$ is an unknown deterministic function. Accordingly, we denote the computer model with the functional parameter $\theta(t) \in \Theta \subset \mathbb{R}^d$ by $y^s(x, \theta(t))$ for $x \in \Omega$ and $t \in \mathcal{T}$. Similarly to [7,13], we define the optimal functional parameter $\theta^*(\cdot)$ as follows:
$$\theta^*(\cdot) = \operatorname*{arg\,min}_{\theta(t)\in\Theta,\, t\in\mathcal{T}} \big\|\zeta(\cdot, t) - y^s(\cdot, \theta(t))\big\|_{L_2(\Omega\times\mathcal{T})}^2. \quad (2)$$
Let $\delta(x, \theta(t)) = \zeta(x, t) - y^s(x, \theta(t))$ be the discrepancy between $\zeta(x, t)$ and $y^s(x, \theta(t))$. We regard $\delta^*(x, t) = \zeta(x, t) - y^s(x, \theta^*(t))$ as the true discrepancy function. Thus, we can rewrite model (1) as follows:
$$y_i^p(t) = y^s(x_i, \theta^*(t)) + \delta^*(x_i, t) + e_i(t). \quad (3)$$
We discretize model (3) by taking values $t = t_{i1}, \ldots, t_{iT}$ on $\mathcal{T}$ for every $i \in \{1, \ldots, n\}$. Then, we have
$$y_{ij}^p = y^s(x_i, \theta^*(t_{ij})) + \delta^*(x_i, t_{ij}) + e_{ij}, \quad i = 1, \ldots, n,\ j = 1, \ldots, T, \quad (4)$$
where $e_{ij} = e_i(t_{ij})$. By Taylor's expansion, for $t \in \mathcal{T}$ close to $t_{ij}$, we have
$$\theta^*(t_{ij}) \approx \theta^*(t) + \dot\theta^*(t)(t_{ij} - t) =: a + b(t_{ij} - t), \quad (5)$$
where $\dot\theta^*(t) = d\theta^*(t)/dt$. If $(a, b) = (a^*, b^*)$ are given, then we can obtain an initial estimator $\hat d_0^{a,b}$ of $\delta^*(\cdot,\cdot)$ by solving the following optimization problem:
$$(\hat d_0^{a,b}, \hat d_1^{a,b}, \hat d_2^{a,b}) = \operatorname*{arg\,min}_{d_0, d_1 \in \mathbb{R},\, d_2 \in \mathbb{R}^p} \sum_{i=1}^n\sum_{j=1}^T \Big\{y_{ij}^p - y^s(x_i, a + b(t_{ij} - t)) - \big(d_0 + d_1(t_{ij} - t) + d_2^T(x_i - x)\big)\Big\}^2 K_1\!\Big(\frac{t_{ij} - t}{h_{t1}}, \frac{x_i - x}{h_x}\Big), \quad (6)$$
where $K_1(\cdot,\cdot)$ is a two-dimensional kernel function. By [17], we can obtain the explicit expression of $\hat\delta^{a,b}(x, t)$ as follows:
$$\hat\delta^{a,b}(x, t) = \hat d_0^{a,b} = e_{p+2}^1\Big(\sum_{i=1}^n \phi_i^T K_i \phi_i\Big)^{-1}\sum_{i=1}^n \phi_i^T K_i\big(y_i^p - y_i^s(a, b)\big),$$
where
$$\phi_i = \begin{pmatrix} 1 & t_{i1} - t & (x_i - x)^T \\ \vdots & \vdots & \vdots \\ 1 & t_{iT} - t & (x_i - x)^T \end{pmatrix},$$
$K_i = \mathrm{diag}\big\{K_1\big(\frac{t_{i1}-t}{h_{t1}}, \frac{x_i-x}{h_x}\big), \ldots, K_1\big(\frac{t_{iT}-t}{h_{t1}}, \frac{x_i-x}{h_x}\big)\big\}$, $y_i^s(a, b) = \big(y^s(x_i, a + b(t_{i1} - t)), \ldots, y^s(x_i, a + b(t_{iT} - t))\big)^T$, and $e_{p+2}^1 = (1, 0, \ldots, 0)_{1\times(p+2)}$. Furthermore, we can obtain quasi-profile least squares estimators $\hat a$ and $\hat b$ of $a$ and $b$ by replacing $\delta^*$ with $\hat d_0^{a,b}$:
$$(\hat a, \hat b) = \operatorname*{arg\,min}_{a,b}\sum_{i=1}^n\sum_{j=1}^T\Big\{y_{ij}^p - y^s(x_i, a + b(t_{ij} - t_0)) - \hat\delta^{\tilde a,\tilde b}(x_i, t_{ij})\Big\}^2 K_2\!\Big(\frac{t_{ij} - t_0}{h_{t2}}\Big), \quad (7)$$
where $K_2$ is a univariate kernel function, and $\tilde a$ and $\tilde b$ are obtained by applying the iterative linear regression algorithm to the following optimization problem:
$$\min_{a,b}\sum_{i=1}^n\sum_{j=1}^T\Big\{y_{ij}^p - y^s(x_i, a + b(t_{ij} - t_0))\Big\}^2 K_2\!\Big(\frac{t_{ij} - t_0}{h_{t3}}\Big). \quad (8)$$
Since $y^s$ is generally non-linear, optimization problem (7) does not have a closed-form solution, which poses a challenge for proving the asymptotic properties of $\hat a$. Inspired by [15], we replace $y^s(\cdot,\cdot)$ in (7) with its first-order Taylor expansion, that is,
$$y^s(x_i, a + b(t_{ij} - t_0)) \approx y^s(x_i, a_0 + b_0(t_{ij} - t_0)) + \big[(a - a_0) + (b - b_0)(t_{ij} - t_0)\big]^T\dot y^s(x_i, a_0 + b_0(t_{ij} - t_0)), \quad (9)$$
where
$$\dot y^s(x_i, a_0 + b_0(t_{ij} - t_0)) = \left.\frac{\partial y^s(x_i, \beta)}{\partial\beta}\right|_{\beta = a_0 + b_0(t_{ij} - t_0)}.$$
We adopt the iterative linear regression algorithm to obtain the solutions of (7). Denote the $k$-th iterate of $a$ and $b$ by $a^{(k)}, b^{(k)}$. Then, we update the values of $a$ and $b$ as follows:
$$\Big((a^{(k+1)})^T, (b^{(k+1)})^T\Big)^T = \Big(\sum_{i=1}^n Z_i^{(k)T}\tilde K_i Z_i^{(k)}\Big)^{-1}\sum_{i=1}^n Z_i^{(k)T}\tilde K_i\tilde Y_i^{(k)}, \quad (10)$$
where $\tilde Y_i^{(k)} = (\tilde y_{i1}^p, \ldots, \tilde y_{iT}^p)^T$ with
$$\tilde y_{ij}^p = y_{ij}^p - \hat\delta^{\tilde a,\tilde b}(x_i, t_{ij}) - y^s\big(x_i, a^{(k)} + b^{(k)}(t_{ij} - t_0)\big) + \big(a^{(k)} + b^{(k)}(t_{ij} - t_0)\big)^T\dot y^s\big(x_i, a^{(k)} + b^{(k)}(t_{ij} - t_0)\big),$$
$$Z_i^{(k)} = \begin{pmatrix}\dot y^s\big(x_i, a^{(k)} + b^{(k)}(t_{i1} - t_0)\big)^T & \dot y^s\big(x_i, a^{(k)} + b^{(k)}(t_{i1} - t_0)\big)^T(t_{i1} - t_0)\\ \vdots & \vdots \\ \dot y^s\big(x_i, a^{(k)} + b^{(k)}(t_{iT} - t_0)\big)^T & \dot y^s\big(x_i, a^{(k)} + b^{(k)}(t_{iT} - t_0)\big)^T(t_{iT} - t_0)\end{pmatrix},$$
and $\tilde K_i = \mathrm{diag}\big\{K_2\big(\frac{t_{i1}-t_0}{h_{t2}}\big), \ldots, K_2\big(\frac{t_{iT}-t_0}{h_{t2}}\big)\big\}$.
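To make the two-step procedure concrete, the following minimal Python sketch implements the local linear discrepancy smoother (6) and the iterative update (10) for a scalar input ($p = 1$) and a scalar calibration parameter ($d = 1$). The Gaussian kernels, array shapes, and function names (ys, dys) are our illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def local_linear_discrepancy(x, t, X, Tij, Yp, ys, a, b, h_t1, h_x):
    """Local linear estimate of delta(x, t), cf. (6): weighted least squares of the
    residuals y_ij - y^s(x_i, a + b (t_ij - t)) on (1, t_ij - t, x_i - x);
    the fitted intercept d0 is the discrepancy estimate."""
    R = Yp - ys(X[:, None], a + b * (Tij - t))                  # (n, T) residuals
    U, V = (Tij - t) / h_t1, (X[:, None] - x) / h_x             # scaled time/space lags
    W = np.exp(-0.5 * (U ** 2 + V ** 2))                        # Gaussian product kernel K_1
    D = np.stack([np.ones_like(U), Tij - t,
                  np.broadcast_to(X[:, None] - x, Tij.shape)], axis=-1)
    A = np.einsum('ntk,nt,ntl->kl', D, W, D)                    # weighted normal equations
    c = np.einsum('ntk,nt,nt->k', D, W, R)
    return np.linalg.solve(A, c)[0]                             # d0 = delta_hat(x, t)

def profile_ls_at(t0, X, Tij, Yp, ys, dys, delta_hat, a, b, h_t2, n_iter=25):
    """Iterative linear regression (10) for (a_hat, b_hat) at a fixed t0, after the
    fitted discrepancy delta_hat (an (n, T) array) has been subtracted."""
    K = np.exp(-0.5 * ((Tij - t0) / h_t2) ** 2)                 # kernel weights K_2
    for _ in range(n_iter):
        th = a + b * (Tij - t0)                                 # current local parameter path
        g = dys(X[:, None], th)                                 # dy^s/dtheta, shape (n, T)
        Z = np.stack([g, g * (Tij - t0)], axis=-1)              # design Z_i^{(k)}
        Ytil = Yp - delta_hat - ys(X[:, None], th) + g * th     # linearized pseudo-response
        A = np.einsum('ntk,nt,ntl->kl', Z, K, Z)
        c = np.einsum('ntk,nt,nt->k', Z, K, Ytil)
        a, b = np.linalg.solve(A, c)
    return a, b                                                 # theta_hat(t0) = a
```

In this sketch, one would first run profile_ls_at with delta_hat set to zero (and bandwidth $h_{t3}$) to obtain the pilot $(\tilde a, \tilde b)$ of problem (8), evaluate $\hat\delta^{\tilde a,\tilde b}$ on the observation grid, and then repeat the call with the fitted discrepancy subtracted; the returned a is $\hat\theta(t_0)$.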
Remark 1.
In practice, the computer model is generally unknown, and thus a surrogate model is often employed in the calibration procedure. In this paper, we focus on the estimation of the functional parameter in the computer model $y^s(\cdot,\cdot)$, and for convenience, we assume that the computer model is known. Even if $y^s(\cdot,\cdot)$ is unknown, one can construct a surrogate model $\hat y^s(\cdot,\cdot)$ via Gaussian process regression based on the dataset $\{(x_j, \theta_j, y_j): j = 1, \ldots, N\}$ obtained from $N$ computer experiments. By replacing $y^s(\cdot,\cdot)$ in Equation (7) with $\hat y^s(\cdot,\cdot)$, the proposed estimator $\hat\theta(\cdot)$ can still be obtained. Provided that $\hat y^s(\cdot,\cdot)$ is consistent, the resulting estimator $\hat\theta(\cdot)$ also maintains consistency; see [7,12] for more details. Hence, the proposed calibration method based on the surrogate model performs as well as when the true computer model is available.
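As a hedged illustration of the surrogate route described in this remark, the snippet below fits a Gaussian process to a design $\{(x_j, \theta_j, y_j)\}$ with scikit-learn. The design on $[0,1]^2$, the kernel choice, and the toy stand-in for the expensive simulator are all assumptions made for the sketch.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def toy_simulator(x, th):                        # stand-in for the expensive computer model
    return (th - 0.5) * np.exp(-0.1 * x)

rng = np.random.default_rng(0)
design = rng.uniform(size=(200, 2))              # hypothetical design points (x_j, theta_j)
y_sim = np.array([toy_simulator(x, th) for x, th in design])

gp = GaussianProcessRegressor(kernel=ConstantKernel() * RBF(length_scale=[0.2, 0.2]),
                              normalize_y=True).fit(design, y_sim)

def ys_hat(x, theta):
    """Surrogate y_hat^s(x, theta): drop-in replacement for y^s in problem (7)."""
    x, theta = np.broadcast_arrays(x, theta)
    pts = np.column_stack([x.ravel(), theta.ravel()])
    return gp.predict(pts).reshape(x.shape)
```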

3. Theoretical Properties

3.1. Assumptions

Before we give the asymptotic properties of the proposed estimators, we must list some necessary conditions. First, we make some general assumptions for the structure of the dataset and the kernel functions as follows:
  • A1. The kernel function $K_1(\cdot,\cdot)$ is symmetric and satisfies
$$\int u^2 K_1(u, v)\,du\,dv = \int v^2 K_1(u, v)\,du\,dv = 1.$$
  • A2. The bivariate kernel function $K_1(\cdot,\cdot)$ is of order $(\nu, \kappa)$, that is,
$$\int u^{k_1}v^{k_2}K_1(u, v)\,du\,dv = \begin{cases}0, & 0 \le k_1 + k_2 < \kappa,\ (k_1, k_2) \ne (\nu_1, \nu_2),\\ (-1)^{|\nu|}|\nu|!, & (k_1, k_2) = (\nu_1, \nu_2),\\ \ne 0, & k_1 + k_2 = \kappa,\end{cases}$$
where $\nu = (\nu_1, \nu_2)$ is a multi-index and $|\nu| = \nu_1 + \nu_2 > 2$.
  • A3. The kernel function $K_2(\cdot)$ is continuously differentiable on its support $(0, 1)$, with $K_2(s) \ge 0$ for all $s \in (0, 1)$ and $0 < K_2(1) \le K_2(0) < \infty$.
  • A4. The input variables $x_i$, $1 \le i \le n$, are independent and identically distributed (i.i.d.) random vectors; $e_{ij}$, $i = 1, \ldots, n$, $j = 1, \ldots, T$, are i.i.d. with mean zero, variance $\sigma^2 < \infty$, and $E e_{ij}^4 < \infty$. For every $i = 1, \ldots, n$, $x_i$ and $e_i = (e_{i1}, \ldots, e_{iT})^T$ are independent.
  • A5. $\theta^*(\cdot)$ is the unique solution of Equation (2), and $\Theta$ is a compact subset of $\mathbb{R}^d$.
  • A6. Let $\gamma^* = (a^{*T}, b^{*T})^T$ and $\tilde T(t) = (1, t - t_0)^T$; then
$$E_xE_t\big\{y^s(x, \theta^*(t)) - y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\big\} = 0.$$
  • A7. Both
$$\tilde V_0 = \int_\Omega\int_\mathcal{T}\Big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\Big)\Big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\Big)^T K_2\!\Big(\frac{t - t_0}{h_{t3}}\Big)dx\,dt$$
and
$$V_0 = \int_\Omega\int_\mathcal{T}\Big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\Big)\Big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\Big)^T K_2\!\Big(\frac{t - t_0}{h_{t2}}\Big)dx\,dt$$
are non-singular.
  • A8. $\sup_{t_1, t_2 \in \mathcal{T}}|t_1 - t_2| < \infty$.
Conditions A1–A3 impose restrictions on the kernel functions $K_1(\cdot,\cdot)$ and $K_2(\cdot)$ that were adopted in [17,18,19]. Furthermore, we need to specify the bandwidths $h_{t1}$, $h_{t2}$, $h_{t3}$, and $h_x$ and impose some conditions on the joint distributions, referring to [18]. Denote the joint probability density functions of $(t, x)$, $(t, x, y)$, $(t, s, x)$, $(t_1, t_2, x, y_1, y_2)$, and $(t_1, t_2, t_1', t_2', x, y_1, y_1', y_2, y_2')$ by $f_2(t, x)$, $f_3(t, x, y)$, $g_3(t, s, x)$, $f_5(t_1, t_2, x, y_1, y_2)$, and $f_9(t_1, t_2, t_1', t_2', x, y_1, y_1', y_2, y_2')$, respectively.
  • B1. $h_{t1} \asymp h_x \asymp h$, $h \to 0$, $nh^{|\nu|+2} \to \infty$, and $nh^{2\kappa+2} < \infty$.
  • B2. $h_{t1} \asymp h_x \asymp h$, $h \to 0$, $nTh^{|\nu|+2} \to \infty$, $Th \to 0$, and $nTh^{2\kappa+2} < \infty$.
  • B3. $\frac{\partial^\kappa}{\partial t^{\kappa_1}\partial x^{\kappa_2}}f_2(t, x)$ exists and is continuous on $\{(t, x)\}$ for $\kappa_1 + \kappa_2 = \kappa$, $0 \le \kappa_1, \kappa_2 \le \kappa$.
  • B4. $f_3(t, x, y)$ is continuous on $\{(t, x)\}$ uniformly in $y$; $\frac{\partial^\kappa}{\partial t^{\kappa_1}\partial x^{\kappa_2}}f_3(t, x, y)$ exists and is continuous on $\{(t, x)\}$ uniformly in $y$, for $\kappa_1 + \kappa_2 = \kappa$, $0 \le \kappa_1, \kappa_2 \le \kappa$.
  • B5. $f_5(t_1, t_2, x, y_1, y_2)$ is continuous on $\{(t_1, t_2, x)\}$ uniformly in $(y_1, y_2)$.
  • B6. $\frac{\partial^\kappa}{\partial t^{\kappa_1}\partial s^{\kappa_2}\partial x^{\kappa_3}}g_3(t, s, x)$ exists and is continuous on $\{(t, s, x)\}$ for $\kappa_1 + \kappa_2 + \kappa_3 = \kappa$, $0 \le \kappa_1, \kappa_2, \kappa_3 \le \kappa$.
  • B7. $\frac{\partial^\kappa}{\partial t^{\kappa_1}\partial x^{\kappa_2}}\delta^*(x, t)$ exists and is continuous on $\{(x, t)\}$ for $\kappa_1 + \kappa_2 = \kappa$, $0 \le \kappa_1, \kappa_2 \le \kappa$.
  • B8. $f_9(t_1, t_2, t_1', t_2', x, y_1, y_2, y_1', y_2')$ is continuous on $\{(t_1, t_2, t_1', t_2', x)\}$ uniformly in $(y_1, y_2)$; $\frac{\partial^\kappa}{\partial t^{\kappa_1}\partial s^{\kappa_2}\partial x^{\kappa_3}}f_5(t, s, x, y_1, y_2)$ exists and is continuous on $\{(t, s, x)\}$ uniformly in $(y_1, y_2)$, for $\kappa_1 + \kappa_2 + \kappa_3 = \kappa$, $0 \le \kappa_1, \kappa_2, \kappa_3 \le \kappa$.
  • B9. As $nT \to \infty$, both $h_{t2}$ and $h_{t3}$ tend to 0.
In addition, we need to impose some restrictions on the computer model $y^s(\cdot,\cdot)$. In this article, we assume $y^s(\cdot,\cdot)$ is known for convenience. In practice, we can obtain an estimator $\hat y^s(\cdot,\cdot)$ (surrogate model) of $y^s$ using Gaussian process regression based on the simulated dataset; see Remark 1 for more details.
  • C1. $\sup_{\theta(t)\in\Theta,\, t\in\mathcal{T}}\|y^s(\cdot, \theta(t))\|_{L_\infty(\Omega)} < +\infty$.
  • C2. For any $\beta \in \Theta$,
$$\left\|\frac{\partial y^s(\cdot, \beta)}{\partial\beta}\right\|_{L_2(\Omega)} < \infty, \qquad \frac{\partial^2 y^s(\cdot, \beta)}{\partial\beta^2} \in C(\Omega\times\Theta).$$
  • C3. For $t \in \mathcal{T}$ and $\theta(t) \in \Theta$, $y^s(\cdot, \theta(t)) \in \mathcal{N}_\Phi(\Omega)$, where $\mathcal{N}_\Phi(\Omega)$ is a reproducing kernel Hilbert space and $\mathcal{N}_\Phi(\Omega, \rho) = \{f: \|f\|_{\mathcal{N}_\Phi(\Omega)} \le \rho\}$ is Donsker for all $\rho > 0$.
Assumptions C1–C2 require that both the computer model $y^s(\cdot, \beta)$ and its first-order partial derivatives with respect to $\beta$ are bounded. These are standard regularity conditions that are typically easy to satisfy in practice and have been widely adopted in the literature; see, for example, [7,13]. Furthermore, we assume that the second-order partial derivatives of $y^s(\cdot, \beta)$ with respect to $\beta$ are continuous. This is a relatively mild assumption because it does not require the continuity of higher-order derivatives. Finally, Assumption C3 states that the computer model $y^s(\cdot, \theta)$ for any fixed $\theta$ satisfies the Donsker property, which is crucial for establishing asymptotic normality via empirical process theory; see [7] for further discussion.

3.2. Asymptotic Normality

Theorem 1.
Under Conditions A1, A2, A6, and B1–B8, we have
$$\hat\delta^{a,b}(x, t) - \delta^*(x, t) = O_p\big(\{nTh_{t1}h_x\}^{-1/2}\big) + O_p\big(h_{t1}^{-1}h_x^{-1}\|\gamma - \gamma^*\|\big) + o_p\big(\{nTh_{t1}^2h_x^2\}^{-1/2}\big),$$
where $\gamma = (a^T, b^T)^T$ and $\gamma^* = (a^{*T}, b^{*T})^T$.
Theorem 1 shows that the rate of convergence of $\hat\delta^{a,b}$ is slower than $O_p(\{nT\}^{-1/2})$. In this article, we select a $\sqrt{nT}$-consistent estimator $(\tilde a, \tilde b)$ of $(a^*, b^*)$, so that $O_p(h_{t1}^{-1}h_x^{-1}\|\tilde\gamma - \gamma^*\|) = O_p(\{nTh_{t1}^2h_x^2\}^{-1/2})$ and the above rate becomes
$$\max\Big\{O_p\big(\{nTh_{t1}h_x\}^{-1/2}\big),\ O_p\big(\{nTh_{t1}^2h_x^2\}^{-1/2}\big),\ o_p\big(\{nTh_{t1}^2h_x^2\}^{-1/2}\big)\Big\} = O_p\big(\{nTh_{t1}^2h_x^2\}^{-1/2}\big).$$
Theorem 2.
Under Conditions A1–A8, B9, and C1–C3, denote $\gamma^* = (a^{*T}, b^{*T})^T$ and $\tilde\gamma = (\tilde a^T, \tilde b^T)^T$; then, we have
$$\sqrt{nT}\,(\tilde\gamma - \gamma^*) \Rightarrow N\big(0, \tilde V_0^{-1}\tilde V_1\tilde V_0^{-1}\big),$$
where $\tilde V_0$ is defined in Condition A7 and
$$\begin{aligned}\tilde V_1 &= \sigma^2 E_xE_t\Big[\Big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\Big)\Big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\Big)^T K_2^2\big((t - t_0)/h_{t3}\big)\Big]\\
&\quad + E_xE_t\Big[\Big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\Big)\Big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\Big)^T K_2^2\big((t - t_0)/h_{t3}\big)\big(\delta^*(x, t)\big)^2\Big],\end{aligned}$$
where $E_xE_t\{\cdot\}$ denotes taking the expectation over random $t \in \mathcal{T}$ and $x \in \Omega$.
Theorem 2 establishes the asymptotic normality of the raw estimator of the time-varying parameter $\theta(\cdot)$ at a fixed $t_0 \in \mathcal{T}$, which facilitates the derivation of the asymptotic properties of the proposed estimator. Based on the above results, we can derive the asymptotic distribution of $\hat\theta(t_0)$ as follows:
Theorem 3.
Under Conditions A1–A8, B1–B9, and C1–C3, denote $\hat\gamma = (\hat a^T, \hat b^T)^T$; then, we have
$$\sqrt{nTh_{t1}^2h_x^2}\,(\hat\gamma - \gamma^*) \Rightarrow N\big(0, V_0^{-1}V_1V_0^{-1}\big),$$
where $V_0$ has been defined in Condition A7 and
$$V_1 = \sigma^2 E_xE_t\Big[\Big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\Big)\Big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\Big)^T K_2^2\big((t - t_0)/h_{t2}\big)\Big].$$
Theorem 3 establishes the asymptotic normality of the proposed estimator for the time-varying parameter $\theta(\cdot)$ at a fixed point $t_0 \in \mathcal{T}$, which is also called pointwise asymptotic normality.
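In particular, Theorem 3 supports approximate pointwise confidence intervals for the components of $\theta^*(t_0)$ (the first $d$ entries of $\gamma^*$). A sketch, in which the plug-in sandwich covariance estimate $\hat\Sigma = \hat V_0^{-1}\hat V_1\hat V_0^{-1}$ is our assumption rather than a construction spelled out in the paper:
$$\hat\theta_k(t_0)\ \pm\ z_{1-\alpha/2}\sqrt{\frac{\hat\Sigma_{kk}}{nTh_{t1}^2h_x^2}}, \qquad k = 1, \ldots, d,$$
where $z_{1-\alpha/2}$ is the standard normal quantile.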

4. Numerical Studies

4.1. Numerical Simulations

Suppose the output $y_{ij}^p$, $i = 1, \ldots, n$, $j = 1, \ldots, T$, from the physical experiments is generated by the following model:
$$y_{ij}^p = 0.5\sin(2x_i)e^{-t_{ij}/2} + \big(\theta^*(t_{ij}) - 0.5\big)e^{-0.1x_i} + e_{ij},$$
where $\theta^*(t_{ij}) = \sin\big(\frac{\pi}{2}t_{ij}\big)$ is the optimal time-varying parameter. According to Section 2, the computer model is $y^s(x, \theta(t)) = (\theta(t) - 0.5)e^{-0.1x}$ and the true discrepancy function is $\delta^*(x, t) = 0.5\sin(2x)e^{-t/2}$. We assume $e_{ij} \sim N(0, \sigma^2)$ and that $x_i$ follows the uniform distribution $U(0, 1)$, and we select $n = 20$, $T = 10, 20, 50$, and $\sigma = 0.1, 0.2, 0.5$, respectively.
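For concreteness, a minimal sketch of this data-generating process follows; the equally spaced time grid on $[0, 1]$ is an assumption on our part.

```python
import numpy as np

rng = np.random.default_rng(1)
n, T, sigma = 20, 50, 0.1

theta_star = lambda t: np.sin(0.5 * np.pi * t)             # optimal time-varying parameter
ys = lambda x, th: (th - 0.5) * np.exp(-0.1 * x)           # computer model y^s(x, theta)
delta = lambda x, t: 0.5 * np.sin(2 * x) * np.exp(-t / 2)  # true discrepancy delta*(x, t)

x = rng.uniform(size=n)                                    # inputs x_i ~ U(0, 1)
Tij = np.tile(np.linspace(0.0, 1.0, T), (n, 1))            # assumed equally spaced t_ij
e = rng.normal(scale=sigma, size=(n, T))                   # noise e_ij ~ N(0, sigma^2)
Yp = ys(x[:, None], theta_star(Tij)) + delta(x[:, None], Tij) + e
```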
First, we fix $\sigma = 0.1$ and $T = 50$ to compare three estimators of the calibration function (the proposed estimator, the raw estimator, and the constant calibration estimator) with the true calibration function, evaluated at $t = k/50$, $k = 0, 1, \ldots, 50$. The results are presented in Figure 1.
From Figure 1, we find that the proposed local linear estimator most closely follows the true parameter curve. In contrast, the raw estimator deviates more noticeably, and the constant calibration estimator shows a severe mismatch with the true parameter curve. These results indicate that the proposed time-varying calibration method achieves the best overall performance, the raw time-varying approach performs worse but still better than the constant calibration, and the constant calibration scheme is clearly infeasible in this scenario. To measure the performance of the proposed estimator, we utilize two criteria: the mean squared error (MSE) and the mean absolute prediction error (MAPE), where
$$\mathrm{MSE}\big(\hat\theta(\cdot)\big) = \frac{1}{\#\{t_0 \in \mathcal{T}_0\}}\sum_{t_0 \in \mathcal{T}_0}\big(\hat\theta(t_0) - \theta^*(t_0)\big)^2, \qquad \mathrm{MAPE}\big(\hat\theta(\cdot)\big) = \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big|y_{ij}^p - y^s(x_i, \hat\theta(t_{ij}))\big|.$$
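These two criteria translate directly into code; a minimal sketch, continuing the NumPy setup above, with theta_hat standing for any fitted calibration function and t0_grid for the evaluation points $\mathcal{T}_0$:

```python
import numpy as np

def mse_theta(theta_hat, theta_star, t0_grid):
    """MSE of the estimated calibration function over the evaluation points t0."""
    return np.mean((theta_hat(t0_grid) - theta_star(t0_grid)) ** 2)

def mape(Yp, x, Tij, ys, theta_hat):
    """Mean absolute prediction error of the calibrated computer model."""
    return np.mean(np.abs(Yp - ys(x[:, None], theta_hat(Tij))))
```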
All results are shown in Table 1.
Table 1 reveals several noteworthy observations. First, the real-time calibration estimators, the ordinary least squares (OLS) and the quasi-profile least squares (PLS), exhibit significantly superior performance compared with the constant calibration estimator (COLS). Notably, the proposed estimator PLS outperforms OLS marginally, as is evident from the MSE and MAPE. Specifically, irrespective of sample size and error variance, the MSE of the constant calibration COLS is more than twice as large as that of OLS and PLS. Considering MAPE, although the difference between the constant calibration COLS and the proposed estimators OLS and PLS is minor, COLS consistently exhibits a higher MAPE than the proposed estimators. Despite the similar performance of PLS and OLS, owing to the relatively small values of the discrepancy function $\delta^*(x, t) = 0.5\sin(2x)e^{-t/2}$, PLS consistently demonstrates better performance than OLS.

4.2. Calibrating an Environmental Model for the Concentrations of Pollutants

Air pollution is a global problem that has long been of public concern. Pollutant concentration is one of the most important indicators of air pollution and an effective tool for predicting it. Models for pollutant concentration have been investigated in many studies, such as [20,21]. However, existing calibration procedures for models of pollutant concentration always treat the calibration parameters as constants, which may not be accurate. Bliznyuk et al. [20] used the following concentration model:
$$f(s, t; \beta) = \frac{\beta_1}{\sqrt{4\pi\beta_2 t}}\exp\Big(-\frac{s^2}{4\beta_1 t}\Big) + \frac{\beta_1}{\sqrt{4\pi\beta_2(t - \beta_4)}}\exp\Big(-\frac{(s - \beta_3)^2}{4\beta_1(t - \beta_3)}\Big)I(t > \beta_3), \quad (13)$$
where $\beta = (\beta_1, \beta_2, \beta_3, \beta_4)^T$ is the calibration parameter. Here, $\beta_1$, $\beta_2$, $\beta_3$, and $\beta_4$ represent the mass, the location, the diffusion rate, and the time of the second spill, respectively, and not all of them are constant with respect to time; in particular, $\beta_3$, the diffusion rate of the pollutant, varies with time. Since the other three parameters were calibrated by [20], we only consider $\beta_3$ as a function of time. We replace $\beta_1$, $\beta_2$, and $\beta_4$ with their true values and $\beta_3$ with $\beta_3(t)$; then, model (13) becomes
$$f(s, t; \beta_3(t)) = \frac{\beta_1}{\sqrt{4\pi\beta_2 t}}\exp\Big(-\frac{s^2}{4\beta_1 t}\Big) + \frac{\beta_1}{\sqrt{4\pi\beta_2(t - \beta_4)}}\exp\Big(-\frac{(s - \beta_3(t))^2}{4\beta_1(t - \beta_3(t))}\Big)I(t > \beta_3(t)). \quad (14)$$
Referring to [20], we set $s = 0, 0.5, 1, 1.5, 2.5$ and $t = 0.3 + (k - 1)(60 - 0.3)/(T - 1)$ with $k = 1, \ldots, T$. In addition, we assume that there exists a discrepancy between the above model and the true process, $\delta^*(s, t) = 0.5\exp(-0.1s^2)\frac{t}{1 + t}$, and that the random error follows a normal distribution $N(0, \sigma^2)$. Similarly to Section 4.1, we utilize the MSE and MAPE to measure the performance of the estimators of the calibration function; the results are shown in Table 2.
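A sketch of the time-varying model (14) in code follows. The placement of the $\beta$'s mirrors our reconstruction of (13), and the guards against non-positive denominators are our additions, so both should be treated as assumptions.

```python
import numpy as np

def f_conc(s, t, b1, b2, b3_t, b4):
    """Pollutant concentration model (14) with time-varying beta_3(t) = b3_t(t)."""
    b3 = b3_t(t)
    first = b1 / np.sqrt(4 * np.pi * b2 * t) * np.exp(-s ** 2 / (4 * b1 * t))
    dt3 = np.maximum(t - b3, 1e-12)                        # guard denominators where t <= b3
    dt4 = np.maximum(t - b4, 1e-12)
    second = (b1 / np.sqrt(4 * np.pi * b2 * dt4)
              * np.exp(-(s - b3) ** 2 / (4 * b1 * dt3)))
    return first + np.where(t > b3, second, 0.0)           # indicator I(t > beta_3(t))
```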
As expected, the proposed estimator PLS has the most favorable performance in terms of MSE and MAPE, while the constant calibration COLS performs worst by a significant margin. Specifically, the MSE of the proposed estimator PLS consistently remains on the order of $10^{-5}$, while the MSE of the constant estimator COLS consistently remains on the order of $10^{-3}$. From the perspective of MAPE, the proposed estimators OLS and PLS are more accurate than COLS, especially when the sample size is relatively small. With increasing sample size, the gap between OLS's MAPE and PLS's MAPE gradually diminishes, while COLS's MAPE consistently exceeds that of the proposed estimators. To further discern the performance of the proposed estimators, we visualize the estimated values of the calibration function under $\sigma = 0.5$ and $T = 200$ against the true values in Figure 2.
From Figure 2, we find that the values estimated by the constant calibration are distant from the true calibration function, while the values estimated by the proposed method are close to the true function. Comparing OLS with PLS, we find that the values estimated by PLS are always close to the true values of the calibration function, while the values estimated by OLS have some significant fluctuations and sometimes deviate from the true values of the calibration function. In summary, the proposed estimator PLS consistently proves to be valid and efficient for the environmental model (14).

5. An Application to Calibrate the Forward Model in NASA’s OCO-2 Mission

In the NASA Orbiting Carbon Observatory-2 (OCO-2) mission, Observing System Uncertainty Experiments (OSUEs) play a crucial role in performing probabilistic assessments on retrieval algorithms. The forward model is an essential component of the OSUEs, and its prediction accuracy for real-world scenarios directly impacts the evaluation of retrieval algorithms, as detailed in [16].
The forward model describes the complex physical relationship between the atmospheric state $X_t$ and the radiances $y_t$. This model involves four geometric parameters, the Instrument Azimuth Angle (Inst-AziA), Instrument Zenith Angle (Inst-ZenA), Solar Azimuth Angle (Sol-AziA), and Solar Zenith Angle (Sol-ZenA), denoted by $\theta_1(t), \ldots, \theta_4(t)$, which are time-dependent. These angles define the relative positions of the instrument's line of sight and the incoming solar radiation. Specifically, azimuth angles describe the horizontal orientation of either the instrument or the sun with respect to a reference direction (typically north), while zenith angles measure the deviation from the vertical. Together, they determine the optical path of sunlight through the atmosphere and thus strongly influence the measured radiances.
Given the high computational cost of this forward model, we constructed a surrogate model based on experimental data. We utilized the simulated dataset from [16] and first employed Principal Component Analysis (PCA) to reduce the dimensionality of $X_t$ from 66 to 4. Because the output $y_t$ is functional data over the wavelength $w$, we considered the spectrometer's measure of the radiation intensity in the strong CO$_2$ wavelength band and computed its functional PCA weight as a new scalar output. Finally, we used Gaussian process regression to construct a surrogate model $\hat y^s(\cdot,\cdot)$ of the forward model based on normalized experimental parameters.
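The surrogate pipeline just described can be sketched as follows. The arrays Xt_sim, angles_sim, and rad_sim are hypothetical stand-ins for the simulated OSUE dataset of [16] (here filled with random placeholders so the sketch runs), and the kernel choice and sizes beyond those stated in the text are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(2)
N, W = 500, 128                                            # hypothetical dataset sizes
Xt_sim = rng.normal(size=(N, 66))                          # stand-in atmospheric states X_t
angles_sim = rng.uniform(size=(N, 4))                      # stand-in geometric parameters
rad_sim = rng.normal(size=(N, W))                          # stand-in radiance curves over w

state_pcs = PCA(n_components=4).fit_transform(Xt_sim)      # reduce X_t from 66 to 4
fpca_w = PCA(n_components=1).fit(rad_sim).components_[0]   # leading functional weight
y_scalar = rad_sim @ fpca_w                                # scalar output: FPCA projection

design = np.column_stack([state_pcs, angles_sim])          # normalized inputs assumed
gp = GaussianProcessRegressor(kernel=RBF(length_scale=np.ones(8)),
                              normalize_y=True).fit(design, y_scalar)
```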
To reduce the uncertainty of the forward model, we need to identify the time-varying parameter $\theta(t) = (\theta_1(t), \ldots, \theta_4(t))^T$ based on true observations. We downloaded 20 days of data from NASA's official website and used the MAPE and the mean squared prediction error (MSPE) to measure the performance of the calibration procedures involved. The calibrated results, including the MAPE, the MSPE, and the 25% and 75% quantiles $Q_{0.25}$ and $Q_{0.75}$ of the pointwise estimated values of the calibration parameter, are presented in Table 3 and Figure 3.
From Table 3, the proposed calibration procedure PLS achieves the minimal prediction error, while OLS is slightly worse than PLS. As expected, the constant calibration procedure COLS performs poorly in terms of MAPE and MSPE, validating our assumption of a time-varying parameter. Figure 3 compares the predicted values obtained by calibrating the forward model with the different methods. The predicted values from the proposed PLS method are closest to the true observations, while those from the constant calibration procedure deviate significantly from the true observations. This application to the forward model further verifies the efficiency of the proposed calibration procedure.

6. Conclusions and Discussion

In this article, we propose a real-time calibration procedure for computer models with a time-varying parameter. To construct a pointwise estimator for the time-varying parameter at a fixed point, we employed a quasi-profile least squares estimation approach. This involved deriving a local linear estimator for the discrepancy function given the calibration parameter, followed by computing a quasi-profile least squares estimator for the calibration function at a specified time point. Additionally, we provided insights on the convergence rate of the estimator for the discrepancy function and explored the asymptotic properties of our proposed estimator of the time-varying parameter. Furthermore, we conducted numerical simulations and considered a real-world example, demonstrating the favorable performance of our proposed method.
Although our proposed method exhibits superior performance in both asymptotic theory and computational efficiency, there are some drawbacks in our calibration procedure. First, we assume that the computer model is fully known, which may not hold in practical applications. To address this, future research could focus on constructing surrogate models to approximate unknown computer models with time-varying parameters. Second, we assume that random errors are independent and identically distributed with finite variance. In situations where errors are correlated or exhibit heteroscedasticity, the efficiency of the proposed estimator may be reduced. Extending the method to account for correlated or non-standard error structures could significantly broaden its applicability. Finally, enhancing estimation efficiency through weighted quasi-profile least squares or other advanced techniques presents a promising avenue for further investigation. Overall, these potential extensions suggest that the proposed framework could be adapted to a wider range of complex and realistic modeling scenarios.

Author Contributions

Conceptualization, Y.S. and X.F.; methodology, Y.S.; software, Y.S.; formal analysis, Y.S. and X.F.; investigation, Y.S.; resources, X.F. and Y.S.; writing—original draft preparation, Y.S.; writing—review and editing, Y.S.; visualization, Y.S. and X.F.; supervision, X.F.; project administration, X.F. and Y.S.; funding acquisition, Y.S. and X.F. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the China Postdoctoral Science Foundation under Grant Numbers 2025M773095 and GZC20252106.

Data Availability Statement

The original data presented in the study are openly available from the NASA Goddard Earth Science Data and Information Services Center at https://disc.gsfc.nasa.gov/datasets?keywords=OCO2_l1&page=1&processingLevel=1,1B,1A (accessed on 10 September 2025).

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. Proofs of Theorems

Proof of Theorem 1.
Let
$$S_{pq} = \sum_{i=1}^n\sum_{j=1}^T w_{ij}(t_{ij} - t)^p(x_i - x)^q, \qquad R_{pq} = \sum_{i=1}^n\sum_{j=1}^T w_{ij}(t_{ij} - t)^p(x_i - x)^q\tilde Y_{ij},$$
where $\tilde Y_{ij} = y_{ij}^p - y^s(x_i, \theta^*(t_{ij}))$, $(x_i - x)^q = x_i - x$ when $q = 1$ while $(x_i - x)^2 = (x_i - x)(x_i - x)^T$, and $w_{ij} = \frac{1}{nTh_{t1}h_x}K_1\big(\frac{t_{ij}-t}{h_{t1}}, \frac{x_i-x}{h_x}\big)$. Let
$$L(d_0, d_1, d_2; a, b) = \sum_{i=1}^n\sum_{j=1}^T\Big\{y_{ij}^p - y^s(x_i, a + b(t_{ij} - t)) - \big(d_0 + d_1(t_{ij} - t) + d_2^T(x_i - x)\big)\Big\}^2 K_1\!\Big(\frac{t_{ij} - t}{h_{t1}}, \frac{x_i - x}{h_x}\Big).$$
Then, we have
$$\nabla L(d_0, d_1, d_2; a, b)\big|_{(d_0, d_1, d_2) = (\hat d_0^{a,b}, \hat d_1^{a,b}, \hat d_2^{a,b})} = 0,$$
that is,
$$\begin{aligned}&\sum_{i=1}^n\sum_{j=1}^T\Big\{y_{ij}^p - y^s(x_i, a + b(t_{ij} - t)) - \big(\hat d_0^{a,b} + \hat d_1^{a,b}(t_{ij} - t) + (x_i - x)^T\hat d_2^{a,b}\big)\Big\}K_1\!\Big(\frac{t_{ij} - t}{h_{t1}}, \frac{x_i - x}{h_x}\Big) = 0,\\
&\sum_{i=1}^n\sum_{j=1}^T(t_{ij} - t)\Big\{y_{ij}^p - y^s(x_i, a + b(t_{ij} - t)) - \big(\hat d_0^{a,b} + \hat d_1^{a,b}(t_{ij} - t) + (x_i - x)^T\hat d_2^{a,b}\big)\Big\}K_1\!\Big(\frac{t_{ij} - t}{h_{t1}}, \frac{x_i - x}{h_x}\Big) = 0,\\
&\sum_{i=1}^n\sum_{j=1}^T(x_i - x)\Big\{y_{ij}^p - y^s(x_i, a + b(t_{ij} - t)) - \big(\hat d_0^{a,b} + \hat d_1^{a,b}(t_{ij} - t) + (x_i - x)^T\hat d_2^{a,b}\big)\Big\}K_1\!\Big(\frac{t_{ij} - t}{h_{t1}}, \frac{x_i - x}{h_x}\Big) = 0,\end{aligned}$$
which are equivalent to
$$\begin{aligned}S_{00}\hat d_0^{a,b} + S_{10}\hat d_1^{a,b} + S_{01}^T\hat d_2^{a,b} &= \frac{1}{nTh_{t1}h_x}\sum_{i=1}^n\sum_{j=1}^T\big\{y_{ij}^p - y^s(x_i, a + b(t_{ij} - t))\big\}K_1\!\Big(\frac{t_{ij} - t}{h_{t1}}, \frac{x_i - x}{h_x}\Big),\\
S_{10}\hat d_0^{a,b} + S_{20}\hat d_1^{a,b} + S_{11}^T\hat d_2^{a,b} &= \frac{1}{nTh_{t1}h_x}\sum_{i=1}^n\sum_{j=1}^T(t_{ij} - t)\big\{y_{ij}^p - y^s(x_i, a + b(t_{ij} - t))\big\}K_1\!\Big(\frac{t_{ij} - t}{h_{t1}}, \frac{x_i - x}{h_x}\Big),\\
S_{01}\hat d_0^{a,b} + S_{11}\hat d_1^{a,b} + S_{02}\hat d_2^{a,b} &= \frac{1}{nTh_{t1}h_x}\sum_{i=1}^n\sum_{j=1}^T(x_i - x)\big\{y_{ij}^p - y^s(x_i, a + b(t_{ij} - t))\big\}K_1\!\Big(\frac{t_{ij} - t}{h_{t1}}, \frac{x_i - x}{h_x}\Big).\end{aligned}$$
By Taylor's expansion and Condition A6, we know
$$\begin{aligned}y^s(x_i, a + b(t_{ij} - t)) - y^s(x_i, \theta^*(t_{ij})) &= \big[y^s\big(x_i, (\tilde T(t_{ij})^T\otimes I_d)\gamma\big) - y^s\big(x_i, (\tilde T(t_{ij})^T\otimes I_d)\gamma^*\big)\big]\\
&\quad - \big[y^s(x_i, \theta^*(t_{ij})) - y^s\big(x_i, (\tilde T(t_{ij})^T\otimes I_d)\gamma^*\big)\big]\\
&= \Big[\tilde T(t_{ij})\otimes\frac{\partial y^s(x_i, \theta)}{\partial\theta}\Big|_{\theta = (\tilde T(t_{ij})^T\otimes I_d)\gamma^*}\Big]^T(\gamma - \gamma^*) + o_p(\|\gamma - \gamma^*\|^2) + o_p\big(\{nT\}^{-1/2}\big)\\
&= O_p(\|\gamma - \gamma^*\|) + o_p\big(\{nT\}^{-1/2}\big),\end{aligned}$$
which yields that
$$\begin{aligned}S_{00}\hat d_0^{a,b} + S_{10}\hat d_1^{a,b} + S_{01}^T\hat d_2^{a,b} &= R_{00} + O_p\big(h_{t1}^{-1}h_x^{-1}\|\gamma - \gamma^*\|\big) + o_p\big(\{nTh_{t1}^2h_x^2\}^{-1/2}\big),\\
S_{10}\hat d_0^{a,b} + S_{20}\hat d_1^{a,b} + S_{11}^T\hat d_2^{a,b} &= R_{10} + O_p\big(h_{t1}^{-1}h_x^{-1}\|\gamma - \gamma^*\|\big) + o_p\big(\{nTh_{t1}^2h_x^2\}^{-1/2}\big),\\
S_{01}\hat d_0^{a,b} + S_{11}\hat d_1^{a,b} + S_{02}\hat d_2^{a,b} &= R_{01} + O_p\big(h_{t1}^{-1}h_x^{-1}\|\gamma - \gamma^*\|\big) + o_p\big(\{nTh_{t1}^2h_x^2\}^{-1/2}\big).\end{aligned}$$
By Cramer's rule, we can obtain that
$$\begin{aligned}\hat d_1^{a,b} &= \det(A)^{-1}\det(A_1) + O_p\big(h_{t1}^{-1}h_x^{-1}\|\gamma - \gamma^*\|\big) + o_p\big(\{nTh_{t1}^2h_x^2\}^{-1/2}\big),\\
\hat d_{2,j}^{a,b} &= \det(A)^{-1}\det(A_{2,j}) + O_p\big(h_{t1}^{-1}h_x^{-1}\|\gamma - \gamma^*\|\big) + o_p\big(\{nTh_{t1}^2h_x^2\}^{-1/2}\big), \quad j = 1, \ldots, p,\end{aligned}$$
where $\hat d_{2,j}^{a,b}$ is the $j$-th component of $\hat d_2^{a,b}$,
$$A = \begin{pmatrix}S_{00} & S_{10} & S_{01}^T\\ S_{10} & S_{20} & S_{11}^T\\ S_{01} & S_{11} & S_{02}\end{pmatrix}, \quad A_1 = \begin{pmatrix}S_{00} & R_{00} & S_{01}^T\\ S_{10} & R_{10} & S_{11}^T\\ S_{01} & R_{01} & S_{02}\end{pmatrix}, \quad A_{2,j} = \begin{pmatrix}S_{00} & S_{10} & S_{01}^{T,(j)}|R_{00}\\ S_{10} & S_{20} & S_{11}^{T,(j)}|R_{10}\\ S_{01} & S_{11} & S_{02}^{T,(j)}|R_{01}\end{pmatrix}$$
are $(p+2)\times(p+2)$ matrices, and $S^{T,(j)}|R$ is defined by replacing the $j$-th column of $S^T$ with the vector $R$. Thus, we can get
$$\hat d_0^{a,b} = \frac{R_{00} - \hat d_1^{a,b}S_{10} - S_{01}^T\hat d_2^{a,b}}{S_{00}} + O_p\big(h_{t1}^{-1}h_x^{-1}\|\gamma - \gamma^*\|\big) + o_p\big(\{nTh_{t1}^2h_x^2\}^{-1/2}\big).$$
By the proof of Theorem 3.2 of [18], we know that
$$\left\|\frac{R_{00} - \hat d_1^{a,b}S_{10} - S_{01}^T\hat d_2^{a,b}}{S_{00}} - \delta^*\right\|_{L_2(\Omega\times\mathcal{T})} = O_p\big(\{nTh_{t1}h_x\}^{-1/2}\big),$$
which implies that
$$\hat d_0^{a,b} - \delta^*(x, t) = O_p\big(h_{t1}^{-1}h_x^{-1}\|\gamma - \gamma^*\|\big) + O_p\big(\{nTh_{t1}h_x\}^{-1/2}\big) + o_p\big(\{nTh_{t1}^2h_x^2\}^{-1/2}\big). \qquad\square$$
Proof of Theorem 2.
Let
$$\tilde\ell(a, b) = \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big\{y_{ij}^p - y^s(x_i, a + b(t_{ij} - t_0))\big\}^2 K_2\!\Big(\frac{t_{ij} - t_0}{h_{t3}}\Big).$$
By the definitions of $\tilde a$ and $\tilde b$, we have $\nabla\tilde\ell(\tilde a, \tilde b) = 0$. Let $\tilde T_{ij} = (1, t_{ij} - t_0)^T$; then,
$$0 = \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s(x_i, \tilde a + \tilde b(t_{ij} - t_0))\big)\big[y_{ij}^p - y^s(x_i, \tilde a + \tilde b(t_{ij} - t_0))\big]K_2\!\Big(\frac{t_{ij} - t_0}{h_{t3}}\Big),$$
which is equivalent to
$$\begin{aligned}0 &= \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s(x_i, \tilde a + \tilde b(t_{ij} - t_0))\big)\big[e_{ij} + \delta^*(x_i, t_{ij}) + y^s(x_i, \theta^*(t_{ij})) - y^s(x_i, \tilde a + \tilde b(t_{ij} - t_0))\big]K_2\!\Big(\frac{t_{ij} - t_0}{h_{t3}}\Big)\\
&= \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s(x_i, \tilde a + \tilde b(t_{ij} - t_0))\big)e_{ij}K_2\!\Big(\frac{t_{ij} - t_0}{h_{t3}}\Big)\\
&\quad + \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s(x_i, \tilde a + \tilde b(t_{ij} - t_0))\big)\delta^*(x_i, t_{ij})K_2\!\Big(\frac{t_{ij} - t_0}{h_{t3}}\Big)\\
&\quad - \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s(x_i, \tilde a + \tilde b(t_{ij} - t_0))\big)\big\{y^s(x_i, \tilde a + \tilde b(t_{ij} - t_0)) - y^s(x_i, \theta^*(t_{ij}))\big\}K_2\!\Big(\frac{t_{ij} - t_0}{h_{t3}}\Big)\\
&=: R_{n1} + R_{n2} - R_{n3}.\end{aligned}$$
First, we need to prove the consistency of $\tilde\gamma$. By [15], we only need to prove that $R_{n2} \to 0$ in probability. Denote $\gamma = (a^T, b^T)^T$ and $\tilde\gamma = (\tilde a^T, \tilde b^T)^T$. By the definition of $\gamma^*$, we know that
$$\gamma^* = \operatorname*{arg\,min}_{\gamma}\big\|\zeta(x, t) - y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma\big)\big\|_{L_2(\Omega\times\mathcal{T})}^2 = \operatorname*{arg\,min}_{\gamma}E_xE_t\big\{\zeta(x, t) - y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma\big)\big\}^2,$$
which yields that
$$E_xE_t\Big[\big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\big)\Big\{\delta^*(x, t) + \big(y^s(x, \theta^*(t)) - y^s(x, (\tilde T(t)^T\otimes I_d)\gamma^*)\big)\Big\}\Big] = 0.$$
By Conditions A6, A8, and C2, we know that
$$E_xE_t\Big[\big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\big)\big\{y^s(x, \theta^*(t)) - y^s(x, (\tilde T(t)^T\otimes I_d)\gamma^*)\big\}\Big] = 0,$$
which implies that
$$E_xE_t\Big[\big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\big)\delta^*(x, t)\Big] = 0,$$
which, combined with the weak law of large numbers, implies that
$$\frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*\big)\big)\delta^*(x_i, t_{ij}) \to 0. \quad (A10)$$
Thus, we obtain that
$$\|R_{n2}\| = \left\|\frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*\big)\big)\delta^*(x_i, t_{ij})K_2\big((t_{ij} - t_0)/h_{t3}\big)\right\| \le \sup_{s\in(0,1)}K_2(s)\left\|\frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*\big)\big)\delta^*(x_i, t_{ij})\right\| = o_p(1), \quad (A11)$$
where the last equality holds by Equation (A10) and Condition A3. Combining (A11) and the proof of Lemma 1 of the Supplementary Material of [15], we can obtain the consistency of $\tilde\gamma$.
Next, we derive the asymptotic results for $R_{n1}$–$R_{n3}$. Recalling the definition of $R_{n3}$, we have
$$\begin{aligned}R_{n3} &= \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\tilde\gamma\big)\big)\big\{y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\tilde\gamma\big) - y^s(x_i, \theta^*(t_{ij}))\big\}K_2\!\Big(\frac{t_{ij} - t_0}{h_{t3}}\Big)\\
&= \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\tilde\gamma\big)\big)\big\{y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\tilde\gamma\big) - y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*\big)\big\}K_2\!\Big(\frac{t_{ij} - t_0}{h_{t3}}\Big)\\
&\quad - \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\tilde\gamma\big)\big)\big\{y^s(x_i, \theta^*(t_{ij})) - y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*\big)\big\}K_2\!\Big(\frac{t_{ij} - t_0}{h_{t3}}\Big)\\
&=: R_{n3,1} - R_{n3,2}. \quad (A12)\end{aligned}$$
Define the empirical process
$$\mathbb{E}_{1n,1}(\gamma) = \frac{1}{\sqrt{nT}}\sum_{i=1}^n\sum_{j=1}^T\ell_{ij}(\gamma)\big(y^s(x_i, \theta^*(t_{ij})) - y^s(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*)\big)K_2\!\Big(\frac{t_{ij} - t_0}{h_{t3}}\Big) - \sqrt{nT}\,E_xE_t\Big[\ell(\gamma)\big(y^s(x, \theta^*(t)) - y^s(x, (\tilde T(t)^T\otimes I_d)\gamma^*)\big)K_2\!\Big(\frac{t - t_0}{h_{t3}}\Big)\Big],$$
where $\ell_{ij}(\gamma) = \tilde T_{ij}\otimes\dot y^s(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma)$ and $\ell(\gamma) = \tilde T(t)\otimes\dot y^s(x, (\tilde T(t)^T\otimes I_d)\gamma)$. Denote by $U_{\gamma^*}$ a neighbourhood of $\gamma^* = (a^{*T}, b^{*T})^T$. By Conditions A3, A8, and C2, we know the set $\mathcal{F}_1 = \big\{[\tilde T(t)\otimes\dot y^s(x, (\tilde T(t)^T\otimes I_d)\gamma)]K_2\big(\frac{t-t_0}{h_{t3}}\big): \gamma \in U_{\gamma^*}, t \in \mathcal{T}\big\}$ is Donsker. Condition C3 implies that
$$\mathcal{F}_2 = \big\{y^s(x, \theta^*(t)) - y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big): (x, t) \in \Omega\times\mathcal{T}\big\}$$
is also Donsker. From Conditions A3, A8, and C1–C2, we know that both $\mathcal{F}_1$ and $\mathcal{F}_2$ are uniformly bounded, which implies that the product class $\mathcal{F}_1\times\mathcal{F}_2$ is Donsker. Referring to the rigorous derivations in the proof of Theorem 1 of [7] and the consistency of $\tilde\gamma$, we can obtain that $\mathbb{E}_{1n,1}(\tilde\gamma) \to 0$ in probability, that is,
$$o_p(1) = \frac{1}{\sqrt{nT}}\sum_{i=1}^n\sum_{j=1}^T\ell_{ij}(\tilde\gamma)\big(y^s(x_i, \theta^*(t_{ij})) - y^s(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*)\big)K_2\!\Big(\frac{t_{ij} - t_0}{h_{t3}}\Big) - \sqrt{nT}\,E_xE_t\Big[\ell(\gamma^*)\big(y^s(x, \theta^*(t)) - y^s(x, (\tilde T(t)^T\otimes I_d)\gamma^*)\big)K_2\!\Big(\frac{t - t_0}{h_{t3}}\Big)\Big],$$
which is equivalent to
$$o_p(1) = \sqrt{nT}\,R_{n3,2} - \sqrt{nT}\,E_xE_t\Big[\ell(\gamma^*)\big(y^s(x, \theta^*(t)) - y^s(x, (\tilde T(t)^T\otimes I_d)\gamma^*)\big)K_2\!\Big(\frac{t - t_0}{h_{t3}}\Big)\Big]. \quad (A13)$$
Under Conditions A3 and A6, we have
$$\left\|E_xE_t\Big[\ell(\gamma^*)\big(y^s(x, \theta^*(t)) - y^s(x, (\tilde T(t)^T\otimes I_d)\gamma^*)\big)K_2\!\Big(\frac{t - t_0}{h_{t3}}\Big)\Big]\right\| \le K_2(0)\left\|E_xE_t\Big[\ell(\gamma^*)\big(y^s(x, \theta^*(t)) - y^s(x, (\tilde T(t)^T\otimes I_d)\gamma^*)\big)\Big]\right\| = 0,$$
which in combination with (A13) implies that
$$R_{n3,2} = o_p\big(\{nT\}^{-1/2}\big) + E_xE_t\Big[\ell(\gamma^*)\big(y^s(x, \theta^*(t)) - y^s(x, (\tilde T(t)^T\otimes I_d)\gamma^*)\big)K_2\!\Big(\frac{t - t_0}{h_{t3}}\Big)\Big] = o_p\big(\{nT\}^{-1/2}\big),$$
which together with (A12) yields that
$$R_{n3} = R_{n3,1} + o_p\big(\{nT\}^{-1/2}\big) = \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\ell_{ij}(\tilde\gamma)\big\{y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\tilde\gamma\big) - y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*\big)\big\}K_2\!\Big(\frac{t_{ij} - t_0}{h_{t3}}\Big) + o_p\big(\{nT\}^{-1/2}\big).$$
Define the empirical process
$$\begin{aligned}\mathbb{E}_{1n,2}(\gamma) &= \frac{1}{\sqrt{nT}}\sum_{i=1}^n\sum_{j=1}^T\ell_{ij}(\gamma)\big(y^s(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma) - y^s(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*)\big)K_2\!\Big(\frac{t_{ij} - t_0}{h_{t3}}\Big)\\
&\quad - \sqrt{nT}\,E_xE_t\Big[\ell(\gamma)\big(y^s(x, (\tilde T(t)^T\otimes I_d)\gamma) - y^s(x, (\tilde T(t)^T\otimes I_d)\gamma^*)\big)K_2\!\Big(\frac{t - t_0}{h_{t3}}\Big)\Big].\end{aligned}$$
Similarly to the derivation for $\mathbb{E}_{1n,1}$, under Conditions A3, A6, A8, and C1–C3, we can obtain that $\mathbb{E}_{1n,2}(\tilde\gamma) \to 0$ in probability, that is,
$$\begin{aligned}o_p(1) = \mathbb{E}_{1n,2}(\tilde\gamma) &= \frac{1}{\sqrt{nT}}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\tilde\gamma\big)\big)\big\{y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\tilde\gamma\big) - y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*\big)\big\}K_2\!\Big(\frac{t_{ij} - t_0}{h_{t3}}\Big)\\
&\quad - \sqrt{nT}\int_\Omega\int_\mathcal{T}\big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\tilde\gamma\big)\big)\big\{y^s\big(x, (\tilde T(t)^T\otimes I_d)\tilde\gamma\big) - y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\big\}K_2\!\Big(\frac{t - t_0}{h_{t3}}\Big)dx\,dt.\end{aligned}$$
Let
$$m(x, \tilde\gamma) = \big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\tilde\gamma\big)\big)\big\{y^s\big(x, (\tilde T(t)^T\otimes I_d)\tilde\gamma\big) - y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\big\}.$$
Then, by the mean value theorem, we have
$$m(x, \tilde\gamma) = m(x, \gamma^*) + \frac{\partial m(x, \bar\gamma)}{\partial\gamma^T}(\tilde\gamma - \gamma^*) = \big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\bar\gamma\big)\big)\big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\bar\gamma\big)\big)^T(\tilde\gamma - \gamma^*),$$
since $m(x, \gamma^*) = 0$ and the derivative terms carrying the factor $y^s(x, (\tilde T(t)^T\otimes I_d)\bar\gamma) - y^s(x, (\tilde T(t)^T\otimes I_d)\gamma^*)$ cancel, where $\bar\gamma$ lies between $\tilde\gamma$ and $\gamma^*$. Combining the above results and the fact that $h_{t3} \to 0$ as $nT \to \infty$, we have
$$\begin{aligned}R_{n3} &= \int_\Omega\int_\mathcal{T}\big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\tilde\gamma\big)\big)\big\{y^s\big(x, (\tilde T(t)^T\otimes I_d)\tilde\gamma\big) - y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\big\}K_2\!\Big(\frac{t - t_0}{h_{t3}}\Big)dx\,dt + o_p\big((nT)^{-1/2}\big)\\
&= \int_\Omega\int_\mathcal{T}\big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\bar\gamma\big)\big)\big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\bar\gamma\big)\big)^T K_2\!\Big(\frac{t - t_0}{h_{t3}}\Big)dx\,dt\,(\tilde\gamma - \gamma^*) + o_p\big((nT)^{-1/2}\big).\end{aligned}$$
By the consistency of $\tilde\gamma$,
$$\int_\Omega\int_\mathcal{T}\big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\bar\gamma\big)\big)\big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\bar\gamma\big)\big)^T K_2\!\Big(\frac{t - t_0}{h_{t3}}\Big)dx\,dt \to \int_\Omega\int_\mathcal{T}\big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\big)\big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\big)^T K_2\!\Big(\frac{t - t_0}{h_{t3}}\Big)dx\,dt =: \tilde V_0.$$
Thus, we obtain the asymptotic result for $R_{n3}$ as follows:
$$R_{n3} = \tilde V_0(\tilde\gamma - \gamma^*) + o_p\big((nT)^{-1/2}\big). \quad (A15)$$
Next, we consider $R_{n1}$. Define the empirical process
$$\begin{aligned}\mathbb{E}_{2n}(\gamma) &= \frac{1}{\sqrt{nT}}\sum_{i=1}^n\sum_{j=1}^T\Big(\tilde T_{ij}\otimes\big[\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma\big) - \dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*\big)\big]\Big)e_{ij}K_2\!\Big(\frac{t_{ij} - t_0}{h_{t3}}\Big)\\
&\quad - \sqrt{nT}\,E_xE_t\Big[\Big(\tilde T(t)\otimes\big[\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma\big) - \dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\big]\Big)e(t)K_2\!\Big(\frac{t - t_0}{h_{t3}}\Big)\Big]\\
&= \frac{1}{\sqrt{nT}}\sum_{i=1}^n\sum_{j=1}^T\Big(\tilde T_{ij}\otimes\big[\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma\big) - \dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*\big)\big]\Big)e_{ij}K_2\!\Big(\frac{t_{ij} - t_0}{h_{t3}}\Big),\end{aligned}$$
where the second equality holds because the errors have mean zero and are independent of the inputs. Under Conditions A3–A4, A8, and C2, let $\Gamma_0 = \{\gamma: \|\gamma - \gamma^*\| = o_p(1)\}$; we know that the function class
$$\mathcal{F}_3 = \left\{\Big[\frac{\partial y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma\big)}{\partial\gamma} - \frac{\partial y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)}{\partial\gamma}\Big]K_2\big((t - t_0)/h_{t3}\big)e: (x, t, \gamma) \in \Omega\times\mathcal{T}\times\Gamma_0\right\}$$
is a Donsker class, which implies that $\mathbb{E}_{2n}$ converges weakly in $L_\infty(U_{\gamma^*})$ to a tight Gaussian process $\mathbb{GP}(\cdot)$ by Chapter 2 of [22]. Since $\mathbb{E}_{2n}(\gamma^*) = 0$ for all $n$, we have $\mathbb{GP}(\gamma^*) = 0$. According to the consistency of $\tilde\gamma$ [15] and the continuous mapping theorem [23], we obtain $\mathbb{E}_{2n}(\tilde\gamma) \Rightarrow \mathbb{GP}(\gamma^*) = 0$, which yields that
$$R_{n1} = \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*\big)\big)e_{ij}K_2\big((t_{ij} - t_0)/h_{t3}\big) + o_p\big((nT)^{-1/2}\big). \quad (A16)$$
Similarly, we can obtain that
$$R_{n2} = \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*\big)\big)\delta^*(x_i, t_{ij})K_2\big((t_{ij} - t_0)/h_{t3}\big) + o_p\big((nT)^{-1/2}\big). \quad (A17)$$
Combining (A15)–(A17), we have
$$\begin{aligned}\tilde V_0(\tilde\gamma - \gamma^*) &= \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*\big)\big)e_{ij}K_2\big((t_{ij} - t_0)/h_{t3}\big)\\
&\quad + \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*\big)\big)\delta^*(x_i, t_{ij})K_2\big((t_{ij} - t_0)/h_{t3}\big) + o_p\big((nT)^{-1/2}\big)\\
&= \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*\big)\big)\big(e_{ij} + \delta^*(x_i, t_{ij})\big)K_2\big((t_{ij} - t_0)/h_{t3}\big) + o_p\big((nT)^{-1/2}\big),\end{aligned}$$
which together with Condition A7 implies the desired result. □
Proof of Theorem 3.
Let
$$\ell(a, b) = \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big\{y_{ij}^p - y^s(x_i, a + b(t_{ij} - t_0)) - \hat\delta^{\tilde a,\tilde b}(x_i, t_{ij})\big\}^2 K_2\!\Big(\frac{t_{ij} - t_0}{h_{t2}}\Big).$$
By the definitions of $\hat a$ and $\hat b$, we have $\nabla\ell(\hat a, \hat b) = 0$, that is,
$$\begin{aligned}0 &= \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\hat\gamma\big)\big)\big[y_{ij}^p - y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\hat\gamma\big) - \hat\delta^{\tilde a,\tilde b}(x_i, t_{ij})\big]K_2\!\Big(\frac{t_{ij} - t_0}{h_{t2}}\Big)\\
&= \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\hat\gamma\big)\big)\big[e_{ij} + y^s(x_i, \theta^*(t_{ij})) + \delta^*(x_i, t_{ij}) - y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\hat\gamma\big) - \hat\delta^{\tilde a,\tilde b}(x_i, t_{ij})\big]K_2\!\Big(\frac{t_{ij} - t_0}{h_{t2}}\Big)\\
&= \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\hat\gamma\big)\big)e_{ij}K_2\!\Big(\frac{t_{ij} - t_0}{h_{t2}}\Big)\\
&\quad - \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\hat\gamma\big)\big)\big\{y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\hat\gamma\big) - y^s(x_i, \theta^*(t_{ij}))\big\}K_2\!\Big(\frac{t_{ij} - t_0}{h_{t2}}\Big)\\
&\quad - \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\hat\gamma\big)\big)\big(\hat\delta^{\tilde a,\tilde b}(x_i, t_{ij}) - \delta^*(x_i, t_{ij})\big)K_2\!\Big(\frac{t_{ij} - t_0}{h_{t2}}\Big)\\
&=: S_{n1} - S_{n2} - S_{n3},\end{aligned}$$
where $\hat\gamma = (\hat a^T, \hat b^T)^T$ and $\tilde T_{ij} = (1, t_{ij} - t_0)^T$. According to the derivation of $R_{n1}$ in the proof of Theorem 2, we have
$$S_{n1} = \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*\big)\big)e_{ij}K_2\big((t_{ij} - t_0)/h_{t2}\big) + o_p\big((nT)^{-1/2}\big). \quad (A20)$$
Next, we consider $S_{n2}$. Similarly to the derivation of $R_{n3}$ in the proof of Theorem 2, it is not hard to obtain the asymptotic result for $S_{n2}$ as follows:
$$S_{n2} = V_0(\hat\gamma - \gamma^*) + o_p\big(\{nT\}^{-1/2}\big), \quad (A21)$$
where
$$V_0 = \int_\Omega\int_\mathcal{T}\big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\big)\big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\big)^T K_2\!\Big(\frac{t - t_0}{h_{t2}}\Big)dx\,dt. \quad (A22)$$
At last, we derive the asymptotic result for $S_{n3}$. By Theorem 1, we have
$$\begin{aligned}S_{n3} &= \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\hat\gamma\big)\big)\big(\hat\delta^{\tilde a,\tilde b}(x_i, t_{ij}) - \delta^*(x_i, t_{ij})\big)K_2\big((t_{ij} - t_0)/h_{t2}\big)\\
&= \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\hat\gamma\big)\big)\Big(O_p\big(\{nTh_{t1}h_x\}^{-1/2}\big) + o_p\big(\|\hat\gamma - \gamma^*\|\big) + o_p\big(\{nTh_{t1}^2h_x^2\}^{-1/2}\big)\Big)K_2\big((t_{ij} - t_0)/h_{t2}\big)\\
&= o_p\big(\{nTh_{t1}^2h_x^2\}^{-1/2}\big), \quad (A23)\end{aligned}$$
where the second equality holds by Theorem 1 and the third equality holds by Conditions A1 and C2. Combining (A20)–(A23), we have
$$\begin{aligned}V_0(\hat\gamma - \gamma^*) &= S_{n1} - S_{n3} + o_p\big(\{nT\}^{-1/2}\big)\\
&= \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*\big)\big)e_{ij}K_2\big((t_{ij} - t_0)/h_{t2}\big) + o_p\big((nT)^{-1/2}\big) + o_p\big(\{nTh_{t1}^2h_x^2\}^{-1/2}\big)\\
&= \frac{1}{nT}\sum_{i=1}^n\sum_{j=1}^T\big(\tilde T_{ij}\otimes\dot y^s\big(x_i, (\tilde T_{ij}^T\otimes I_d)\gamma^*\big)\big)e_{ij}K_2\big((t_{ij} - t_0)/h_{t2}\big) + o_p\big(\{nTh_{t1}^2h_x^2\}^{-1/2}\big),\end{aligned}$$
which together with Condition A7 yields that
$$\sqrt{nTh_{t1}^2h_x^2}\,(\hat\gamma - \gamma^*) \Rightarrow N\big(0, V_0^{-1}V_1V_0^{-1}\big),$$
where
$$V_1 = \sigma^2 E_xE_t\Big[\big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\big)\big(\tilde T(t)\otimes\dot y^s\big(x, (\tilde T(t)^T\otimes I_d)\gamma^*\big)\big)^T K_2^2\big((t - t_0)/h_{t2}\big)\Big]. \qquad\square$$

References

  1. Kennedy, M.C.; O’Hagan, A. Bayesian calibration of computer models. J. R. Stat. Soc. Ser. B 2001, 63, 425–464. [Google Scholar] [CrossRef]
  2. Drignei, D.; Morris, M.D. Empirical Bayesian analysis for computer experiments involving finite difference codes. J. Am. Stat. Assoc. 2006, 101, 1527–1536. [Google Scholar] [CrossRef]
  3. Higdon, D.; Gattiker, J.; Williams, B.; Rightley, M. Computer model calibration using high-dimensional output. J. Am. Stat. Assoc. 2008, 103, 570–583. [Google Scholar] [CrossRef]
  4. Reich, B.J.; Storlie, C.B.; Bondell, H.D. Variable selection in Bayesian smoothing spline ANOVA models: Application to deterministic computer codes. Technometrics 2009, 51, 110–120. [Google Scholar] [CrossRef]
  5. Xie, F.Z.; Xu, Y.X. Bayesian projected calibration of computer models. J. Am. Stat. Assoc. 2021, 116, 1965–1982. [Google Scholar] [CrossRef]
  6. Tuo, R.; Wu, C.F.J. A theoretical framework for calibration in computer models: Parametrization, estimation and convergence properties. SIAM/ASA J. Uncertain. Quantif. 2016, 4, 767–795. [Google Scholar] [CrossRef]
  7. Tuo, R.; Wu, C.F.J. Efficient calibration for imperfect computer models. Ann. Stat. 2015, 43, 2331–2352. [Google Scholar] [CrossRef]
  8. Wong, K.W.; Storlie, C.B.; Lee, C.M. A frequentist approach to computer model calibration. J. R. Stat. Soc. Ser. B 2017, 79, 635–648. [Google Scholar] [CrossRef]
  9. Tuo, R. Adjustments to Computer Models via Projected Kernel Calibration. SIAM/ASA J. Uncertain. Quantif. 2019, 7, 553–578. [Google Scholar] [CrossRef]
  10. Wang, Y. Penalized Projected Kernel Calibration for Computer Models. SIAM/ASA J. Uncertain. Quantif. 2022, 10, 1652–1683. [Google Scholar] [CrossRef]
  11. Sung, C.L.; Berber, B.D.; Walker, B.J. Calibration of Inexact Computer Models with Heteroscedastic Errors. SIAM/ASA J. Uncertain. Quantif. 2022, 10, 1733–1752. [Google Scholar] [CrossRef]
  12. Sun, Y.; Fang, X. A model calibration procedure for count response. Commun. Stat. Theory Methods 2024, 53, 4272–4289. [Google Scholar] [CrossRef]
  13. Tuo, R.; He, S.Y.; Pourhabib, A.; Ding, Y.; Huang, J.Z. A Reproducing Kernel Hilbert Space Approach to Functional Calibration of Computer Models. J. Am. Stat. Assoc. 2023, 118, 883–897. [Google Scholar] [CrossRef]
  14. Tian, Y.; Chao, M.A.; Kulkarni, C.; Goebel, K.; Fink, O. Real-time model calibration with deep reinforcement learning. Mech. Syst. Signal Process. 2022, 165, 108284. [Google Scholar] [CrossRef]
  15. Kürüm, E.; Li, R.Z.; Wang, Y.; Sentürk, D. Nonlinear Varying-Coefficient Models with Applications to a Photosynthesis Study. J. Agric. Biol. Environ. Stat. 2013, 19, 57–81. [Google Scholar] [CrossRef]
  16. Ma, P.L.; Mondal, A.; Konomi, B.A.; Hobbs, J.; Song, J.J.; Kang, E.L. Computer Model Emulation with High-Dimensional Functional Output in Large-Scale Observing System Uncertainty Experiments. Technometrics 2022, 64, 65–79. [Google Scholar] [CrossRef]
  17. Chen, J.; Li, D.G.; Liang, H.; Wang, S.L. Semiparametric GEE Analysis in Partially Linear Single-Index Models for Longitudinal Data. Ann. Stat. 2015, 43, 1682–1715. [Google Scholar] [CrossRef]
  18. Jiang, C.R.; Wang, J.L. Covariate adjusted functional principal components analysis for longitudinal data. Ann. Stat. 2010, 38, 1194–1226. [Google Scholar] [CrossRef]
  19. Yao, F.; Müller, H.G.; Wang, J.L. Functional Data Analysis for Sparse Longitudinal Data. J. Am. Stat. Assoc. 2005, 100, 577–590. [Google Scholar] [CrossRef]
  20. Bliznyuk, N.; Ruppert, D.; Shoemaker, C.; Regis, R.; Wild, S.; Mugunthan, P. Bayesian Calibration and Uncertainty Analysis for Computationally Expensive Models Using Optimization and Radial Basis Function Approximation. J. Comput. Graph. Stat. 2008, 17, 270–294. [Google Scholar] [CrossRef]
  21. Mugunthan, P.; Shoemaker, C.A. Assessing the Impacts of Parameter Uncertainty for Computationally Expensive Groundwater Models. Water Resour. Res. 2006, 42, W10428. [Google Scholar] [CrossRef]
  22. van der Vaart, A.W.; Wellner, J.A. Weak Convergence and Empirical Processes: With Applications to Statistics; Springer: New York, NY, USA, 1996. [Google Scholar] [CrossRef]
  23. van der Vaart, A.W. Asymptotic Statistics; Cambridge Series in Statistical and Probabilistic Mathematics 3; Cambridge University Press: Cambridge, UK, 1998. [Google Scholar] [CrossRef]
Figure 1. Comparison of the estimated calibration functions obtained from the proposed local linear estimator and two existing estimators (the raw local linear estimator without estimating the discrepancy function, and the least squares estimator for a constant parameter) with the true calibration function $\theta^*(t)$ in a toy model over the time period, with $\sigma = 0.1$ and $T = 50$.
Figure 2. Comparison of the estimated calibration functions obtained from the proposed local linear estimator and two existing estimators (the raw local linear estimator without estimating the discrepancy function, and the least squares estimator for a constant parameter) with the true calibration function $\theta^*(t)$ in a pollutant concentration model over the time period, with $\sigma = 0.5$ and $T = 200$.
Figure 3. Comparison of the predicted results for strong CO$_2$ (SCO$_2$) obtained by calibrating the forward model with the proposed local linear estimator and two existing estimators (the raw local linear estimator without estimating the discrepancy function, and the least squares estimator for a constant parameter) with the true observations (NASA's satellite remote sensing data).
Table 1. MSE and MAPE of the estimators of the calibration function with n = 20.

                          COLS                 OLS                  PLS
                      MSE     MAPE         MSE     MAPE         MSE     MAPE
σ = 0.1, T = 10     0.1820   0.3463      0.0815   0.2814      0.0751   0.2910
         T = 20     0.2078   0.3869      0.0803   0.3296      0.0816   0.3186
         T = 50     0.1729   0.3509      0.0767   0.2725      0.0721   0.2606
σ = 0.2, T = 10     0.2366   0.4055      0.0856   0.3584      0.0695   0.3686
         T = 20     0.1904   0.3655      0.0816   0.3397      0.0786   0.3200
         T = 50     0.1904   0.3655      0.0816   0.3397      0.0786   0.3200
σ = 0.5, T = 10     0.2149   0.5204      0.0800   0.5126      0.0787   0.5019
         T = 20     0.2761   0.5400      0.0774   0.5194      0.0729   0.5167
         T = 50     0.2761   0.5400      0.0774   0.5194      0.0729   0.5167
Table 2. MSE and MAPE of the estimators of the calibration function with n = 5.

                           COLS                       PLS                        OLS
                       MSE       MAPE           MSE       MAPE           MSE       MAPE
σ = 0.5, T = 50    7.5177×10⁻³  0.3300      7.5002×10⁻⁴  0.2033      3.1889×10⁻⁵  0.1709
         T = 100   7.6018×10⁻³  0.3501      9.1010×10⁻⁴  0.2592      4.3141×10⁻⁵  0.2429
         T = 200   7.3197×10⁻³  0.4025      9.8375×10⁻⁴  0.3519      1.0364×10⁻⁵  0.3422
σ = 1,   T = 50    6.4438×10⁻³  0.2555      1.0042×10⁻³  0.1710      4.1504×10⁻⁵  0.1453
         T = 100   6.4552×10⁻³  0.2686      9.4805×10⁻⁴  0.2115      4.0734×10⁻⁵  0.2026
         T = 200   6.4644×10⁻³  0.3228      8.8762×10⁻⁴  0.2910      5.3540×10⁻⁵  0.2850
σ = 2,   T = 50    5.8375×10⁻³  0.2028      1.0006×10⁻³  0.1440      3.4010×10⁻⁵  0.1257
         T = 100   5.9031×10⁻³  0.2216      9.7789×10⁻⁴  0.1820      3.5958×10⁻⁵  0.1716
         T = 200   5.9485×10⁻³  0.2668      8.9767×10⁻⁴  0.2451      5.3599×10⁻⁵  0.2417
Table 3. MAPE and MSPE of the estimators of the calibration function in the forward model based on NASA's observations.

           Inst-AziA           Inst-ZenA           Sol-AziA            Sol-ZenA         Prediction Accuracy
         Q0.25    Q0.75      Q0.25    Q0.75      Q0.25    Q0.75      Q0.25    Q0.75       MAPE      MSPE
OLS     0.5514   0.6645     0.6187   0.5312     0.5931   0.5910     0.6134   0.7012      0.0743    0.0089
PLS     0.4735   2.3575     0.6283   0.6283     1.0044   2.2346     0.5521   0.5521      0.0442    0.0039
COLS    2.5599   2.5599     0.6283   0.6283     0.5638   0.5638     0.5521   0.5521      0.1134    0.0205