Estimation of Multiresponse Multipredictor Nonparametric Regression Model Using Mixed Estimator

Abstract: In data analysis using a nonparametric regression approach, we are often faced with the problem of analyzing a set of data that has mixed patterns, namely, some of the data have a certain pattern and the rest of the data have a different pattern. To handle this kind of data, we propose the use of a mixed estimator. In this study, we theoretically discuss a developed estimation method for a nonparametric regression model with two or more response variables and predictor variables, where there is a correlation between the response variables, using a mixed estimator. The model is called the multiresponse multipredictor nonparametric regression (MMNR) model. The mixed estimator used for estimating the MMNR model is a mixed estimator of smoothing spline and Fourier series that is suitable for analyzing data with patterns that partly change at certain subintervals, and others that follow a recurring pattern in a certain trend. Since in the MMNR model there is a correlation between responses, a symmetric weight matrix is involved in the estimation process of the MMNR model. To estimate the MMNR model, we apply the reproducing kernel Hilbert space (RKHS) method to penalized weighted least square (PWLS) optimization for estimating the regression function of the MMNR model, which consists of a smoothing spline component and a Fourier series component. A simulation study to show the performance of the proposed method is also given. The obtained results are estimations of the smoothing spline component, the Fourier series component, the MMNR model, and the weight matrix, as well as the consistency of the estimated regression function. In conclusion, the estimation of the MMNR model using the mixed estimator is a combination of smoothing spline component and Fourier series component estimators. It depends on smoothing and oscillation parameters, is linear in the observations, and is consistent.


Introduction
Often, we have to carry out statistical analyses that involve relationships between several variables, in the sense of functional relationships between response variables and predictor variables. The appropriate statistical analysis to use in cases like this is regression analysis. In regression analysis, we basically build a mathematical model that is usually called a regression model, where the functional relationship between variables in the model is represented by a regression function, and for analysis purposes, we must estimate this regression function. Earlier studies considered parametric regression models and discussed the linear spline in the modeling of blood sugar; Kirkby et al. [33] estimated nonparametric density using B-splines. Additionally, Osmani et al. [34] estimated the coefficient of a rates model using kernel and spline estimators.
In statistical modeling such as regression modeling, we often have to analyze the functional relationship between response variables and predictor variables where there are two or more response variables and the responses are correlated with each other. The regression models that draw a functional relationship between response variables and predictor variables with correlated responses are the multiresponse nonparametric regression (MNR) model and the multiresponse semiparametric regression (MSR) model. Because there is correlation between responses, in the estimation process of a regression function, we need to include a matrix called a weight matrix. The inclusion of a weight matrix in the estimation process is what differentiates the MNR model from a uniresponse nonparametric regression (UNR) model, where there is no correlation between responses. So, the process of estimating the regression functions of the MNR and MSR models requires a symmetric weight matrix, namely, a diagonal matrix. We can also use several estimators to estimate the regression functions of these MNR and MSR models. One of these estimators is the smoothing spline estimator. Currently, many researchers are paying attention to developing and applying the smoothing spline estimator in many areas of research. For example, Chamidah et al. [17] and Lestari et al. [18] discussed smoothing splines in MSR models; Lestari et al. [35] and Wang et al. [28] discussed RKHS in an MNR model and applied an MNR model to determine associations between hormones, respectively. Because of its powerful and flexible properties, the smoothing spline estimator is one of the most popular estimators used for estimating the regression functions of UNR and MNR models.
The previous description shows that the smoothing spline estimator has been used in statistical analysis using regression model approaches such as UNR and MNR models. The smoothing spline estimator provides good fitting results because it can handle data whose curve has sharp decrease and increase patterns, resulting in a relatively smooth fitted curve. Also, using the smoothing spline provides several advantages in that it has unique statistical properties, enables visual interpretation, can handle smooth data and functions, and can readily handle data that change at certain subintervals [1,12,13]. Meanwhile, the Fourier series estimator is a popular smoothing technique in nonparametric regression modeling. The Fourier series also provides good statistical and visual interpretation among nonparametric regression models. Using the Fourier series estimator provides advantages such as being able to handle data that have a repeating pattern at a certain trend interval and providing good statistical interpretation [36]. Suparti et al. [37], Mardianto et al. [38], and Amato et al. [39] proposed Fourier series methods for modeling inflation in Indonesia, for modeling longitudinal data, and for approximating separable models, respectively.
The research discussed in the previous description relating to the estimation of nonparametric regression models is still limited to the use of only one type of estimator, whereas in data analysis using a nonparametric regression approach, we are often faced with the problem of analyzing a set of data that has mixed patterns, namely, some of the data have a certain pattern and the rest of the data have a different pattern. To handle these kinds of data, we should use a mixed estimator, namely, a combination of more than one estimator. Mariati et al. [40] discussed a uniresponse multivariable nonparametric regression model using a mixed estimator comprising a smoothing spline and Fourier series that was applied to data of poor households in Bali Province. However, Mariati et al. [40] only discussed estimating a UNR model, in the sense that there is no correlation between the response variables. In this study, therefore, we theoretically discuss a new estimation method using a mixed estimator for a nonparametric regression model with two or more response variables and predictor variables, where there is a correlation between the response variables. The model is called the multiresponse multipredictor nonparametric regression (MMNR) model. The mixed estimator used for estimating the MMNR model is a combination of smoothing spline and Fourier series suitable for analyzing data wherein some patterns partly change at certain subintervals, and others follow a recurring pattern in a certain trend. Since there is a correlation between responses in the MMNR model, a weight matrix is involved in its estimation process. Also, we apply the reproducing kernel Hilbert space (RKHS) method to penalized weighted least square (PWLS) optimization for estimating the regression function of our MMNR model. In this study, therefore, we theoretically discuss determining the smoothing spline component and Fourier series component of the MMNR model, determining the goodness of fit and penalty components of PWLS optimization, determining the MMNR model estimate, determining the weight matrix, selecting optimal smoothing and oscillation parameters in the MMNR model, and investigating the consistency of the regression function estimator of the MMNR model. Additionally, the results of this study contribute to the development of the field of theoretical statistics, namely, statistical inference theory, regarding theories of estimation and hypothesis testing in multiresponse multipredictor nonparametric regression based on a mixed smoothing spline and Fourier series estimator.

Materials and Methods
To achieve the objectives of this research, in this section, we briefly present materials and methods such as the MMNR model, mixed smoothing spline and Fourier series estimator, reproducing kernel Hilbert space (RKHS), and penalized weighted least square (PWLS) optimization.

The MMNR Model
We first consider a paired dataset (x_r1i, x_r2i, ..., x_rpi, t_r1i, t_r2i, ..., t_rqi, y_ri) for r = 1, 2, ..., R and i = 1, 2, ..., n. We say that the paired dataset follows an MMNR model if the relationship between y_ri and (x_r1i, x_r2i, ..., x_rpi, t_r1i, t_r2i, ..., t_rqi) satisfies the following functional equation:

y_ri = m_r(x_r1i, x_r2i, ..., x_rpi, t_r1i, t_r2i, ..., t_rqi) + ε_ri, (1)

where r = 1, 2, ..., R represents responses; i = 1, 2, ..., n represents observations; y_ri is the value of the response variable at the ith observation and rth response; m_r is the unknown regression function of the rth response; x_r1i, x_r2i, ..., x_rpi are values of p smoothing spline predictor variables at the ith observation and rth response; t_r1i, t_r2i, ..., t_rqi are values of q Fourier series predictor variables at the ith observation and rth response; and ε_ri is the random error at the ith observation and rth response, which has a mean of zero and a variance of σ²_ri. Here, we assume that the correlation between responses is ρ_ls = ρ_l for l = s and ρ_ls = 0 for l ≠ s.
Since the shape of the regression function m_r in the MMNR model is assumed to be unknown and additive in nature, the regression function m_r in Equation (1) can be presented as follows:

m_r(x_r1i, ..., x_rpi, t_r1i, ..., t_rqi) = Σ_{j=1}^{p} f_rj(x_rji) + Σ_{k=1}^{q} g_rk(t_rki), (2)

where r = 1, 2, ..., R; i = 1, 2, ..., n; Σ_{j=1}^{p} f_rj(x_rji) is the smoothing spline component of the regression function m_r, in which f_rj(x_rji), j = 1, 2, ..., p, are contained in a Sobolev space W_2^m[a_j, b_j], namely, f_rj ∈ W_2^m[a_j, b_j]; and Σ_{k=1}^{q} g_rk(t_rki) is the Fourier series component of the regression function m_r, which is approximated by a Fourier series function. Hereinafter, based on Equations (1) and (2), we have the following MMNR model:

y_ri = Σ_{j=1}^{p} f_rj(x_rji) + Σ_{k=1}^{q} g_rk(t_rki) + ε_ri. (3)

Furthermore, the regression function of the MMNR model presented in (3) is estimated using a mixed smoothing spline and Fourier series estimator by applying the reproducing kernel Hilbert space (RKHS) method to the developed penalized weighted least square optimization.
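As a concrete illustration of the data-generating process in Equation (3), the following sketch simulates a two-response MMNR dataset. The component functions, sample size, and error correlation here are hypothetical choices made for illustration only, not the paper's simulation settings.

```python
import numpy as np

# Simulate a two-response MMNR dataset following Equation (3):
#   y_ri = sum_j f_rj(x_rji) + sum_k g_rk(t_rki) + eps_ri,
# with p = q = 1 predictor of each type.  All settings are illustrative.
rng = np.random.default_rng(0)
R, n = 2, 50

x = rng.uniform(0, 1, size=(R, n))        # smoothing spline predictors
t = rng.uniform(0, np.pi, size=(R, n))    # Fourier series predictors

def f(x):
    # smooth component whose pattern changes on the subinterval x >= 0.5
    return np.where(x < 0.5, 4 * x ** 2, 2 - 2 * x)

def g(t):
    # recurring (trend-plus-cosine) component
    return 0.5 * t + np.cos(2 * t)

# errors correlated across the two responses (rho = 0.95), variance 0.25
cov = 0.25 * np.array([[1.0, 0.95], [0.95, 1.0]])
eps = rng.multivariate_normal(np.zeros(R), cov, size=n).T

y = f(x) + g(t) + eps                     # shape (R, n): responses by observations
print(y.shape)
```

Each row of `y` is one response; the cross-response error correlation is what later motivates the weight matrix W in the PWLS criterion.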

Mixed Smoothing Spline and Fourier Series Estimator
To estimate the regression function of the MMNR model presented in (3), we use a mixed smoothing spline and Fourier series estimator, which is suitable for analyzing data with some patterns that partly change at certain subintervals, and others that follow a recurring pattern in a certain trend. The regression function m_r in the MMNR model in (3) consists of the component Σ_{j=1}^{p} f_rj(x_rji), which is approached by a smoothing spline, and the component Σ_{k=1}^{q} g_rk(t_rki), which is approximated by the following Fourier series function:

g_rk(t_rki) = b_rk t_rki + (1/2) a_0rk + Σ_{v=1}^{K} a_vrk cos(v t_rki), (4)

where r = 1, 2, ..., R; k = 1, 2, ..., q; i = 1, 2, ..., n; and K is the oscillation parameter.
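For a fixed oscillation parameter K, the cosine-plus-trend form in (4) is linear in its coefficients, so for one response and one predictor they can be estimated by ordinary least squares. The sketch below does this; the true coefficient values (b = 0.3, a0 = 2, a1 = 2) are invented for the demonstration.

```python
import numpy as np

# Least squares fit of the Fourier series component in (4), one response,
# one predictor:  g(t) = b*t + a0/2 + sum_{v=1}^K a_v cos(v*t).
def fourier_design(t, K):
    # columns: linear trend t, constant 1/2, and cos(v*t) for v = 1..K
    cols = [t, np.full_like(t, 0.5)] + [np.cos(v * t) for v in range(1, K + 1)]
    return np.column_stack(cols)

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 2 * np.pi, 80))
y = 0.3 * t + 1.0 + 2.0 * np.cos(t) + rng.normal(0, 0.1, t.size)

T = fourier_design(t, K=3)
coef, *_ = np.linalg.lstsq(T, y, rcond=None)   # [b, a0, a1, a2, a3]
print(np.round(coef, 2))                       # approximately [0.3, 2, 2, 0, 0]
```

The matrix `T` plays the role of the Fourier design later written as g = Tα in the matrix form of the model.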
Since we involve a smoothing spline estimator in the process of estimating the regression function, according to Wahba [12] and Wang [13], we should use the reproducing kernel Hilbert space (RKHS) method approach, which is applied to the developed penalized weighted least square (PWLS) optimization.

Reproducing Kernel Hilbert Space (RKHS)
The reproducing kernel Hilbert space (RKHS) method was first introduced by Aronszajn [41]. The RKHS method was developed for estimating nonparametric and semiparametric regression models by several researchers, such as Wang [13], Chamidah et al. [17], Lestari et al. [18], and Kimeldorf and Wahba [42]. We call a Hilbert space H an RKHS on a set X over a field F if it meets the following conditions, as shown by Aronszajn [41], Berlinet and Thomas-Agnan [43], Paulsen [44], and Yuan and Cai [45]: (i) H is a subspace of F(X, F), the vector space of functions from X to F; (ii) H is equipped with an inner product ⟨·,·⟩ under which it is a Hilbert space; (iii) for every y ∈ X, the linear evaluation functional E_y : H → F defined by E_y(f) = f(y) is bounded. Furthermore, if H is an RKHS on X, then there exists a unique vector k_y ∈ H for every y ∈ X such that f(y) = ⟨f, k_y⟩ for every f ∈ H. Note that every bounded linear functional is given by the inner product with a unique vector in H, and under this condition, we call the function k_y the reproducing kernel (RK) for the point y. This means that the reproducing kernel for H is a two-variable function defined by K(x, y) = k_y(x). It implies that K(x, y) = k_y(x) = ⟨k_y, k_x⟩ and ‖E_y‖² = ‖k_y‖² = ⟨k_y, k_y⟩ = K(y, y) [41-45].
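The reproducing property f(y) = ⟨f, k_y⟩ can be checked numerically in a standard textbook RKHS (not one specific to this paper): absolutely continuous functions on [0, 1] with f(0) = 0 and inner product ⟨f, g⟩ = ∫ f′(s)g′(s) ds, whose reproducing kernel is K(x, y) = min(x, y).

```python
import numpy as np

# Numerical check of f(y) = <f, k_y> with kernel K(x, y) = min(x, y),
# kernel section k_y(s) = min(s, y), and <f, g> = int_0^1 f'(s) g'(s) ds.
s = np.linspace(0.0, 1.0, 200_001)        # fine grid for numerical integration
h = s[1] - s[0]

def k(y):
    # kernel section k_y evaluated on the grid
    return np.minimum(s, y)

def inner(u, v):
    # <u, v> = int u'(s) v'(s) ds, via finite differences
    return np.sum(np.diff(u) * np.diff(v)) / h

# f = 2*k_{0.3} - k_{0.8} lies in the span of kernel sections
f_vals = 2 * k(0.3) - k(0.8)
f_exact = 2 * min(0.3, 0.5) - min(0.8, 0.5)      # f(0.5) computed directly
print(round(inner(f_vals, k(0.5)), 4), round(f_exact, 4))  # both print as 0.1
```

The same machinery also verifies ⟨k_x, k_y⟩ = K(x, y) = min(x, y), which is exactly the reproducing identity used in the derivations below.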
In this study, the RKHS method is applied to the developed penalized weighted least square (PWLS) optimization to obtain the estimation of the regression function of the MMNR model presented in (3). In the following section, we present the PWLS optimization that we developed for the case of interest.

Penalized Weighted Least Square (PWLS) Optimization
Since the regression function of the MMNR model in (3) consists of two components, namely, the smoothing spline component Σ_{j=1}^{p} f_rj(x_rji) and the Fourier series component Σ_{k=1}^{q} g_rk(t_rki), for estimating the regression function, we extended the PWLS optimization for the single-estimator case, namely, the smoothing spline estimator only, as proposed by Wang et al. [28], Lestari et al. [18], and Islamiyati et al. [31], to the case of two mixed estimators, namely, the smoothing spline and Fourier series estimators. In the process of estimating the regression function of the MMNR model by using the developed PWLS optimization, we assume that g_rk(t_rki) is a fixed function. Hence, we can estimate the smoothing spline component in the MMNR model presented in (3) for every response r = 1, 2, ..., R by using the following developed PWLS optimization:

Min_{f_rj ∈ W_2^m[a_j, b_j]} { Σ_{i=1}^{n} w_ri (y_ri − Σ_{j=1}^{p} f_rj(x_rji) − Σ_{k=1}^{q} g_rk(t_rki))² + Σ_{j=1}^{p} λ_rj ∫_{a_j}^{b_j} (f_rj^{(m)}(x_rj))² dx_rj }, (5)

where 0 < λ_rj < ∞; r = 1, 2, ..., R; j = 1, 2, ..., p; and w_ri represents a weight that is an inverse of the variances. Note that we include the weight w_ri in the developed PWLS presented in (5) because in this case there is a correlation between responses.
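A discretized sketch of the optimization in (5) for a single component and response: replacing the integral roughness penalty by squared second differences turns the problem into a ridge-type linear system. The penalty matrix and the smoothing level below are illustrative stand-ins, not the paper's RKHS solution.

```python
import numpy as np

# Minimal discretized PWLS sketch: minimize
#   sum_i w_i (z_i - f_i)^2 + n * lam * f^T Omega f,
# where Omega = D2^T D2 penalizes squared second differences of f
# (a discrete stand-in for the integral of (f''(x))^2).
def pwls_fit(z, w, lam):
    n = z.size
    W = np.diag(w)
    D2 = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference matrix
    Omega = D2.T @ D2
    # normal equations of the quadratic criterion: (W + n*lam*Omega) f = W z
    return np.linalg.solve(W + n * lam * Omega, W @ z)

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 60)
m_true = np.sin(2 * np.pi * x)
z = m_true + rng.normal(0, 0.3, x.size)
f_hat = pwls_fit(z, np.ones_like(z), lam=0.05)

mse_raw = np.mean((z - m_true) ** 2)       # error of the noisy data
mse_fit = np.mean((f_hat - m_true) ** 2)   # error of the penalized fit
print(mse_fit < mse_raw)
```

Equal weights are used here (the uncorrelated case); in the MMNR setting the weights come from the inverse error covariance across responses.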

Results and Discussions
In this section, we theoretically discuss the estimation process of the multiresponse multipredictor nonparametric regression (MMNR) model. Firstly, suppose that we have a paired dataset (x_r1i, x_r2i, ..., x_rpi, t_r1i, t_r2i, ..., t_rqi, y_ri) for r = 1, 2, ..., R and i = 1, 2, ..., n, where the functional relationship between the predictor variables and the response variables meets the MMNR model presented in Equation (3); the information and assumptions about the MMNR model are presented in Section 2.1. Since this MMNR model has more than one response variable and predictor variable, and has a regression function that is composed of two components, namely, a smoothing spline component and a Fourier series component, to simplify the process of estimating the MMNR model by using the reproducing kernel Hilbert space (RKHS) method applied to the developed penalized weighted least square (PWLS) optimization, we express the MMNR model presented in Equation (3) in matrix notation for each response as follows:

y_r = f_r + g_r + ε_r, r = 1, 2, ..., R, (6)

where y_r = (y_r1, y_r2, ..., y_rn)^T, f_r is the vector with elements Σ_{j=1}^{p} f_rj(x_rji), g_r is the vector with elements Σ_{k=1}^{q} g_rk(t_rki), and ε_r = (ε_r1, ε_r2, ..., ε_rn)^T. Furthermore, by stacking the R responses, we may write the MMNR model presented in (6) as follows:

y = (y_1^T, y_2^T, ..., y_R^T)^T, f = (f_1^T, f_2^T, ..., f_R^T)^T, g = (g_1^T, g_2^T, ..., g_R^T)^T, ε = (ε_1^T, ε_2^T, ..., ε_R^T)^T. (7)

Hence, the MMNR model given in (7) can be written as follows:

y = f + g + ε. (8)

Next, by letting z = y − g, we may write the MMNR model given in (8) as follows:

z = f + ε. (9)

In the following sections, we discuss determining the smoothing spline component and Fourier series component of the MMNR model, determining the goodness of fit and penalty components of PWLS optimization, estimating the MMNR model, estimating the weight matrix, selecting optimal smoothing and oscillation parameters in the MMNR model, and investigating the consistency of the regression function estimator of the MMNR model.

Determining Smoothing Spline Component of MMNR Model
To determine the smoothing spline component of the regression function of the MMNR model presented in Equation (1), we use the RKHS method, which is applied to PWLS optimization. Since Σ_{j=1}^{p} f_rj(x_rji) is the smoothing spline component in the MMNR model presented in (3), we use the RKHS method to estimate it with the following steps. Firstly, we decompose a Hilbert space H into a direct sum of Hilbert subspaces H_0, with basis {γ_rj1, γ_rj2, ..., γ_rjm}, where m represents the order of the polynomial spline, and H_1, with basis {β_rj1, β_rj2, ..., β_rjn}, where n represents the number of observations, with H_0 ⊥ H_1 (i.e., H_0 is perpendicular to H_1):

H = H_0 ⊕ H_1.

Hence, we can express every function f_rj ∈ H as follows:

f_rj = u_rj + v_rj = Σ_{l=1}^{m} c_rjl γ_rjl + Σ_{i=1}^{n} d_rji β_rji, (10)

where u_rj ∈ H_0; v_rj ∈ H_1; c_rjl and d_rji are constants; γ_rj = (γ_rj1, γ_rj2, ..., γ_rjm)^T; and β_rj = (β_rj1, β_rj2, ..., β_rjn)^T. Secondly, suppose that L_x ∈ H is a bounded linear evaluation functional and f_rj ∈ H. Since L_x is bounded, then according to Yuan and Cai [45] and Lestari et al. [35], there exists a representer ξ_i ∈ H that satisfies L_x f_rj = f_rj(x_rji) = ⟨ξ_i, f_rj⟩ (Equations (11) and (12)). Based on Equations (10) and (11), we may write Equation (12) as Equation (13). Hence, for j = 1, Equation (13) yields Equation (14); evaluating Equation (14) at i = 1, 2, ..., n gives f_r1(x_r1), f_r1(x_r2), ..., f_r1(x_rn) in Equations (15), (16), and so on. Furthermore, based on Equation (13) for j = 2, we have f_r2(x_ri) as in Equation (17); evaluating Equation (17) at i = 1, 2, ..., n gives f_r2(x_r1), f_r2(x_r2), ..., f_r2(x_rn) in Equations (18)-(21). Based on Equations (15)-(17), we obtain the smoothing spline component of the regression function m_r of the MMNR model for j = 1 and i = 1, 2, ..., n as given in Equation (22).
Next, in the same way as in the process of obtaining Equation (22), we obtain f_rj for j = 2, 3, ..., p as given in Equation (23). Since ⟨ξ_rji, β_rji⟩ = ⟨β_rji, β_rji⟩, we have the matrix V_rj as given in Equation (24). Based on Equations (22) and (23), we obtain the smoothing spline component for the rth response, namely, f_r, as given in Equation (25).
Finally, based on Equation (25), we obtain the smoothing spline component for all responses r = 1, 2, ..., R, namely, f. Thus, the smoothing spline component of the MMNR model can be presented in matrix notation as given in Equation (26), that is,

f = Uc + Vd. (26)

Determining Goodness of Fit and Penalty Components of PWLS Optimization
To estimate the regression function of the MMNR model, we use the PWLS as presented in Equation (5), namely, by taking the solution to the PWLS optimization in (5), where 0 < λ_rj < ∞; r = 1, 2, ..., R; j = 1, 2, ..., p; and w_ri represents a weight that is an inverse of the random error variances.
Based on Equations (8) and (9), the MMNR model can be expressed as y = f + g + ε or, equivalently, z = f + ε, where z = y − g. Therefore, by considering Equation (26), the PWLS optimization can be written in matrix notation as in Equation (31), where W is a symmetric weight matrix that is the inverse of the covariance matrix of the random errors. Details of the weight matrix can be found in Lestari et al. [18,35]. The PWLS optimization function presented in Equation (31) shows that the goodness of fit component of the PWLS optimization is the weighted residual sum of squares. Next, we determine the penalty component of the PWLS optimization from the roughness term in Equation (31).
Hence, based on Equation (32), we obtain the penalty component of the PWLS optimization, which can be expressed in matrix notation for the rth response as in Equation (33). Furthermore, based on Equation (33), and in the same way as in the process of obtaining Equation (26), the penalty component for all responses r = 1, 2, ..., R can be obtained as in Equation (34), in which P_r denotes the penalty matrix of the rth response. Thus, we have obtained both the goodness of fit component and the penalty component of the PWLS optimization in matrix notation.

Estimating the MMNR Model
The PWLS optimization presented in Equation (31) is used to obtain the estimation of the MMNR model, together with Equations (26), (30), and (34). In this section, we provide a complete explanation of the estimation process of the MMNR model, starting from the PWLS given in Equation (31).

In this step, we first determine the estimator for d, namely, d̂, as in Equation (35). Letting H = (WV + nL), Equation (35) can be written as Equation (36). Next, we determine the estimator for c, namely, ĉ, as in Equation (37). By substituting Equation (36) into Equation (37), we obtain Equation (38). Since H = (WV + nL), we have VH⁻¹ = I − nLH⁻¹, and we may therefore write Equation (38) as Equation (39), which returns the result for ĉ in Equation (40). By substituting Equation (40) into Equation (36), we obtain d̂ in Equation (41). Hence, based on Equations (26), (40), and (41), we obtain the estimated smoothing spline component. Thus, the estimation of the smoothing spline component of the MMNR model, denoted by f̂_(λ,K)(x, t), is given in Equation (42), where W is a special symmetric weight matrix, namely, a diagonal matrix, which is the inverse of the covariance matrix of the random errors; based on the assumption on the random errors of the MMNR model, the weight matrix W is given as in [18,35].

In the previous explanation, we obtained f̂_(λ,K)(x, t), which is presented in (42). The next step is determining the Fourier series component of the regression function of the MMNR model. Let us look again at the MMNR model presented in Equation (8). We substitute Equation (42) into Equation (8) to obtain Equation (43). Next, we substitute Equation (30), namely, g = Tα, into Equation (43), which gives the equation for the random error in Equation (44). Hence, we have

A(λ, K) T − I (I − A(λ, K)).
Hereinafter, by substituting Equation (46) into Equation (42), we obtain the estimator for the smoothing spline component of the regression function of the MMNR model as follows: where

A(λ, K) T − I (I − A(λ, K)) .
Finally, based on Equations (2), (8), (46), and (47), we obtain the estimation of the regression function of the MMNR model, which consists of the estimations of the smoothing spline component and the Fourier series component. Thus, the estimation result of the MMNR model by using the mixed smoothing spline and Fourier series estimator is as follows:

ŷ = m̂_(λ,K)(x, t) = f̂_(λ,K)(x, t) + ĝ_(λ,K)(x, t), (48)

where λ represents the smoothing parameter of the smoothing spline estimator and K represents the oscillation parameter of the Fourier series estimator. Next, in the following section, we discuss the estimation of the weight matrix W.
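The two-component structure of the estimate, a smoothing spline part plus a Fourier series part fitted to what the other leaves over, can be mimicked numerically for a single response. The sketch below alternates a discretized penalized smoother (standing in for the spline step) with least squares on a cosine design (standing in for g = Tα); it illustrates the structure of the estimate, not the paper's closed-form expressions (35)-(47). Only cosine columns are used in the Fourier design so the two components stay identifiable in this toy setup.

```python
import numpy as np

# One-response sketch: y = f + g + eps, with f_hat from a discretized
# roughness-penalized smoother and g_hat = T @ alpha_hat from least squares
# on a cosine design; the two steps are alternated until stable.
rng = np.random.default_rng(3)
n = 100
x = np.linspace(0, 1, n)
t = np.linspace(0, 2 * np.pi, n)
y = np.where(x < 0.5, 2 * x, 2 - 2 * x) + np.cos(2 * t) + rng.normal(0, 0.1, n)

D2 = np.diff(np.eye(n), n=2, axis=0)
S = np.linalg.inv(np.eye(n) + n * 1e-2 * (D2.T @ D2))    # linear smoother for f
T = np.column_stack([np.cos(v * t) for v in (1, 2, 3)])  # cosine design for g

f_hat = np.zeros(n)
for _ in range(20):                     # alternate the two component fits
    alpha_hat = np.linalg.lstsq(T, y - f_hat, rcond=None)[0]
    g_hat = T @ alpha_hat
    f_hat = S @ (y - g_hat)

resid = y - f_hat - g_hat
print(np.mean(resid ** 2))              # small: both components are captured
```

The small residual mean square indicates that the spline-type part absorbs the piecewise pattern while the cosine part absorbs the recurring pattern, which is exactly the division of labor the mixed estimator is designed for.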

Estimating Weight Matrix W
Suppose that we have a paired dataset (x_r1i, ..., x_rpi, t_r1i, ..., t_rqi, y_ri) for r = 1, 2, ..., R and i = 1, 2, ..., n, which follows the MMNR model, and we assume that y = (y_1, y_2, ..., y_R)^T is a normally distributed random sample with a mean of m and a covariance of W⁻¹, so that we have a likelihood function of the weight matrix. Hence, the estimated weight matrix can be obtained by determining the solution to the corresponding likelihood optimization. Hereinafter, according to Johnson and Wichern [46], we can determine the maximum value of each component of the likelihood function. Furthermore, based on Equation (48), we have ŷ = M(λ, K)y, where M(λ, K) = (F(λ, K) + G(λ, K)). Hence, we obtain the maximum likelihood estimator Ŵ for the weight matrix W. This shows that the estimated weight matrix obtained is a symmetric matrix, specifically a diagonal matrix whose main diagonal components are the estimated weight matrices of the first response, second response, and so on, up to the R-th response.
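For intuition, the maximum likelihood step can be sketched at the level of the R × R error covariance between responses: with residuals in hand, the MLE of the covariance is the average residual cross-product, and the weight matrix is its inverse. The residuals below are simulated under hypothetical settings; in the paper, W is the larger block-structured matrix built from these per-response pieces.

```python
import numpy as np

# MLE sketch for the between-response error covariance: with residual
# vectors e_i in R^R, Sigma_hat = (1/n) sum_i e_i e_i^T, and the weight
# matrix is W_hat = Sigma_hat^{-1}.  Settings (rho = 0.95, variance 0.25)
# are illustrative.
rng = np.random.default_rng(4)
R, n = 3, 200
true_cov = 0.2375 * np.ones((R, R)) + 0.0125 * np.eye(R)    # var 0.25, corr 0.95
E = rng.multivariate_normal(np.zeros(R), true_cov, size=n)  # n x R residuals

Sigma_hat = E.T @ E / n              # MLE of the error covariance
W_hat = np.linalg.inv(Sigma_hat)     # estimated weight matrix (symmetric)
print(np.round(np.diag(Sigma_hat), 3))
```

Because `Sigma_hat` is symmetric positive definite, its inverse `W_hat` is symmetric as well, matching the symmetry property claimed for the estimated weight matrix.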
Next, in the following section, we provide discussion on selecting optimal smoothing and oscillation parameters in the MMNR model.

Selecting Optimal Smoothing and Oscillation Parameters in the MMNR Model
In statistical analysis using an MMNR model approach based on a mixed smoothing spline and Fourier series estimator, selecting the optimal smoothing parameter and oscillation parameter, namely, (λ_Opt, K_opt), cannot be neglected, and it is crucial to good regression function fitting of the MMNR model based on the mixed smoothing spline and Fourier series estimator. In other words, the mixed smoothing spline and Fourier series estimator highly depends on both the smoothing parameter and the oscillation parameter. Therefore, in this study, we also need to discuss selecting the optimal smoothing and oscillation parameters. There are some criteria used for selecting the smoothing parameter, including minimizing CV (cross-validation), GCV (generalized cross-validation), Mallows' C_p, and AIC (Akaike's information criterion) [1,12]. However, according to Ruppert and Carroll [47], for good fitting of a regression function based on a spline estimator, Mallows' C_p and GCV are very satisfactory.
In this study, we use the GCV criterion, which has been developed for multiresponse cases, to determine the optimal smoothing parameter and oscillation parameter values for good regression function fitting of the MMNR model. Firstly, we determine the mean squared error (MSE) of the regression function presented in (48) as follows:

MSE(λ, K) = n⁻¹ (y − ŷ)^T W (y − ŷ) = n⁻¹ ‖W^(1/2)(I − M(λ, K))y‖², (49)

where λ represents the smoothing parameter of the smoothing spline estimator, K represents the oscillation parameter of the Fourier series estimator,

A(λ, K) T − I (I − A(λ, K)), and
Secondly, based on the MSE(λ, K) given in Equation (49), we have the GCV (generalized cross-validation) for the MMNR model as follows:

GCV(λ, K) = MSE(λ, K) / (n⁻¹ tr[I − M(λ, K)])². (50)

Hence, based on the GCV function presented in (50), we can obtain the optimal smoothing parameter (λ) and optimal oscillation parameter (K) values, namely, (λ_Opt, K_opt), by taking the solution to the following optimization:

(λ_Opt, K_opt) = arg min_{λ ∈ R₊, K} GCV(λ, K), (51)

where R₊ represents the positive real number set. Thus, the optimal values of both the smoothing parameter and the oscillation parameter, namely, (λ_Opt, K_opt), can be obtained by taking the solution to the optimization function given in Equation (51). Furthermore, in the following section, we determine one of the statistically sound estimator properties, namely, the consistency property.
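A numerical sketch of the selection rule in (50) and (51): build a combined linear smoother for each candidate pair (λ, K) and pick the pair minimizing GCV over a grid. The smoother used here, a projection onto a K-term Fourier design plus a discretized penalized smoother on the residual space, is an illustrative stand-in for the paper's M(λ, K) = F(λ, K) + G(λ, K), and all data settings are hypothetical.

```python
import numpy as np

# Grid search for (lam, K) by GCV:
#   GCV(lam, K) = n^-1 ||(I - M) y||^2 / (n^-1 tr(I - M))^2,
# where M is the combined linear smoother for the candidate pair.
rng = np.random.default_rng(5)
n = 120
t = np.linspace(0, 2 * np.pi, n)
y = 0.5 * t + np.cos(3 * t) + rng.normal(0, 0.3, n)

D2 = np.diff(np.eye(n), n=2, axis=0)
I = np.eye(n)

def smoother(lam, K):
    S = np.linalg.inv(I + n * lam * (D2.T @ D2))
    T = np.column_stack([t, np.ones(n)] + [np.cos(v * t) for v in range(1, K + 1)])
    P = T @ np.linalg.pinv(T)             # projection onto the Fourier design
    return P + S @ (I - P)                # Fourier fit, then smooth the rest

def gcv(lam, K):
    M = smoother(lam, K)
    r = (I - M) @ y
    return np.mean(r ** 2) / (np.trace(I - M) / n) ** 2

grid = [(lam, K) for lam in (1e-4, 1e-2, 1.0) for K in (1, 2, 3, 4)]
best = min(grid, key=lambda p: gcv(*p))
print(best)
```

In practice the λ grid would be finer and per-response, as in the paper's simulation, but the mechanics of minimizing GCV over (λ, K) are the same.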

Consistency of Regression Function Estimator of MMNR Model
To investigate the consistency of the regression function estimator of the MMNR model, m̂_(λ,K)(x, t), we should first investigate the asymptotic properties of the mixed smoothing spline and Fourier series estimator based on an integrated mean squared error (IMSE) criterion. In this study, we develop the IMSE from the uniresponse-with-one-predictor case proposed by Wand and Jones [48] and the multiresponse-with-one-predictor case proposed by Lestari et al. [35] into a multiresponse-with-multipredictor case. Next, by considering Equation (48), the IMSE can be decomposed into two components, namely, the Bias²(δ) component and the Var(δ) component, where W is a weight matrix and m̂_(λ,K)(x, t), as given in Equation (48), is the regression function estimator of the MMNR model.
Next, by using Theorem 2 in Lestari et al. [35], we obtain Bias²(δ) ≤ O(δ) as n → ∞, where O(·) represents the "big oh". We can find the details of the "big oh" in Wand and Jones [48] and Sen and Singer [49]. Also, by using Theorem 3 in Lestari et al. [35], we obtain Var(δ) ≤ O(δ) as n → ∞ for every response r = 1, 2, ..., R. Hence, we obtain the asymptotic property of the mixed regression function estimator of the MMNR model based on the IMSE criterion as in Equation (53), where δ = (δ_1, δ_2, ..., δ_R)^T, δ_r = (λ_r, K_r), and r = 1, 2, ..., R. Furthermore, based on Equation (53), we obtain an inequality as in Equation (54). Hence, according to Sen and Singer [49] and Serfling [50], for any small positive number ϑ > 0, by considering Equation (54) and applying the properties of probability, we obtain Equation (55). Equation (55) shows that the regression function estimator, m̂_(λ,K)(x, t), of the MMNR model obtained by using the mixed smoothing spline and Fourier series estimator is a consistent estimator based on the integrated mean squared error (IMSE) criterion.
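The consistency statement can be illustrated empirically: with a roughness-penalized fit whose smoothing parameter shrinks as n grows, the Monte Carlo mean squared error decreases with the sample size. The fitting rule, penalty discretization, and rate λ ∝ n^(-0.8) below are illustrative assumptions, not the paper's exact estimator or rates.

```python
import numpy as np

# Monte Carlo illustration of consistency: the MSE of a roughness-penalized
# fit decreases as n grows when the smoothing parameter lam shrinks with n.
def fit_mse(n, seed):
    rng = np.random.default_rng(seed)
    x = np.linspace(0.0, 1.0, n)
    m_true = np.sin(2 * np.pi * x)
    y = m_true + rng.normal(0, 0.3, n)
    h = x[1] - x[0]
    D2 = np.diff(np.eye(n), n=2, axis=0)
    Omega = (D2.T @ D2) / h**3            # discrete analog of int (m''(x))^2 dx
    lam = 1e-3 * n ** (-0.8)              # lam -> 0 as n -> infinity
    m_hat = np.linalg.solve(np.eye(n) + n * lam * Omega, y)
    return np.mean((m_hat - m_true) ** 2)

# average over 20 replications at a small and a large sample size
mse_small = np.mean([fit_mse(30, s) for s in range(20)])
mse_large = np.mean([fit_mse(300, s) for s in range(20)])
print(mse_small, mse_large)
```

The drop in the averaged MSE from n = 30 to n = 300 is the finite-sample counterpart of the IMSE bound tending to zero.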
Next, in the following section, we provide a simulation study.

Simulation Study
In this section, we provide a simulation study to show the performance of the mixed smoothing spline and Fourier series estimator in estimating the MMNR model. In other words, the simulation study is intended to show the sensitivity of parameter selection and its impact on model performance. In this simulation study, we generated data of size n = 50 with a correlation of 0.95 and a variance of 0.25. Here, we use a three-response MMNR model with i = 1, 2, ..., 50.
Next, to show the influence of smoothing parameters (λ) and oscillation parameters (K), we use different smoothing parameters by generating lambda (λ) randomly, and different oscillation parameters by moving K from 1 to 4. We carry out this process 15 times until the optimal values of smoothing parameters (λ) and oscillation parameters (K) are obtained.
Simulation results, including MSE (mean squared error) values, minimum GCV (generalized cross-validation) values, coefficient of determination (R²) values, and lambda (λ) values, are given in Table 1 for every oscillation parameter value (K = 1, 2, 3, 4). From Table 1, we can observe that different values of the smoothing parameter and the oscillation parameter give different estimation performance. This is indicated by the different coefficient of determination (R²) and MSE values as the oscillation parameters (K) and smoothing parameters (λ) change.
Next, based on all the minimum GCV values presented in Table 1, we chose the minimum GCV value from the four oscillation parameter (K) values. In Table 1, the minimum GCV value is produced by K = 1, namely, 0.5786323, with an MSE value of 1.02363794, an R² value of 0.95311712, and λ₁ = 0.84886039, λ₂ = 0.0021173, λ₃ = 0.001. In Table 1, these values are shown in bold. These are the optimal values, and the best MMNR estimate is obtained based on them. Therefore, these values are then used to determine the estimated values of the MMNR model and to produce the plots of the estimated values of the MMNR model presented in Equation (56). From Figures 1-3, we can observe that the values of the estimated MMNR models using the mixed smoothing spline and Fourier series estimator are close to the actual values. This is shown by the performance curves of the estimated MMNR models, which tend to be similar to the actual value curves. Hence, the results of the simulation study presented in Table 1 and Figures 1-3 show that the estimation value of the MMNR model in (56) depends on both the smoothing spline parameter and the oscillation parameter. In other words, the selection of the smoothing spline parameter and the oscillation parameter is sensitive and impacts the MMNR model's performance.
Hereinafter, based on the results of the simulation study, we obtained the surface plots of the estimation results of the MMNR model presented in Equation (56). These surface plots of the estimated MMNR model are presented as three-dimensional plots (see Figures 4-6).


Conclusions
The estimation result of the MMNR model using the mixed smoothing spline and Fourier series estimator, presented by ŷ = m̂(λ,K)(x, t), is a combination of the smoothing spline component and Fourier series component estimators, presented by f̂(λ,K)(x, t) and ĝ(λ,K)(x, t), respectively. The estimator of the MMNR model is highly dependent on the smoothing parameter (λ) and the oscillation parameter (K). The optimal parameter values, namely (λ_Opt, K_Opt), are determined by using the GCV criterion, i.e., by finding the values that minimize the function GCV(λ, K). Also, the estimator of the MMNR model obtained using the mixed smoothing spline and Fourier series estimator is linear with respect to the observations, as given in Equation (48). Additionally, the regression function estimator, m̂(λ,K)(x, t), of the MMNR model is a consistent estimator based on the IMSE criterion. This means that the mixed smoothing spline and Fourier series estimator is statistically a good estimator because it meets one of the criteria for the goodness of an estimator, namely consistency. It is therefore suitable for analyzing data with patterns that partly change at certain subintervals, and others that follow a recurring pattern in a certain trend. In addition, the estimated weight matrix that we obtained is a diagonal matrix whose main diagonal elements are the estimated weight matrix of the first response, the estimated weight matrix of the second response, and so on, up to the estimated weight matrix of the R-th response. The results of this study contribute to the development of statistical inference theories, such as estimation and hypothesis testing, especially the statistical inference theory of nonparametric regression.
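The linearity-in-observations property stated above means that, once (λ, K) are fixed, the fitted values take the form ŷ = H(λ, K)y for a hat matrix H that does not depend on y. The following minimal numerical sketch illustrates this with an assumed ridge-penalized basis (a stand-in, not the paper's exact smoother): applying H to a linear combination of response vectors equals the same linear combination of the individual fits.

```python
import numpy as np

def hat_matrix(B, lam):
    # Linear smoother: y_hat = H y, with H fixed once (lambda, basis) are fixed.
    return B @ np.linalg.solve(B.T @ B + lam * np.eye(B.shape[1]), B.T)

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 50)
# Illustrative design combining a polynomial trend and one Fourier pair.
B = np.column_stack([np.ones_like(t), t,
                     np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
H = hat_matrix(B, lam=0.05)

y1, y2 = rng.standard_normal(50), rng.standard_normal(50)
a, b = 2.0, -3.0
# Linearity: H(a*y1 + b*y2) == a*H*y1 + b*H*y2 (up to floating point).
lhs = H @ (a * y1 + b * y2)
rhs = a * (H @ y1) + b * (H @ y2)
print(np.allclose(lhs, rhs))  # True
```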
Let $\lambda_{rj}\mathbf{d}_{rj}^{T}\mathbf{V}_{rj}\mathbf{d}_{rj}$ be the $r$-th response penalty for the $j$-th predictor; the penalty component for all responses $r = 1, 2, \ldots, R$ is presented by $P$ as follows:
$$
P = \mathbf{d}^{T}\mathbf{L}\,\mathbf{V}\,\mathbf{d},
$$
where
$$
\mathbf{d} = \left(\mathbf{d}_{11}^{T}, \mathbf{d}_{21}^{T}, \ldots, \mathbf{d}_{R1}^{T}, \mathbf{d}_{12}^{T}, \mathbf{d}_{22}^{T}, \ldots, \mathbf{d}_{R2}^{T}, \ldots, \mathbf{d}_{1p}^{T}, \mathbf{d}_{2p}^{T}, \ldots, \mathbf{d}_{Rp}^{T}\right)^{T},
$$
$$
\mathbf{L} = \operatorname{diag}\left(\lambda_{11}\mathbf{I}, \lambda_{21}\mathbf{I}, \ldots, \lambda_{R1}\mathbf{I}, \lambda_{12}\mathbf{I}, \lambda_{22}\mathbf{I}, \ldots, \lambda_{R2}\mathbf{I}, \ldots, \lambda_{1p}\mathbf{I}, \lambda_{2p}\mathbf{I}, \ldots, \lambda_{Rp}\mathbf{I}\right),
$$
$$
\mathbf{V} = \operatorname{diag}\left(\mathbf{V}_{11}, \mathbf{V}_{21}, \ldots, \mathbf{V}_{R1}, \mathbf{V}_{12}, \mathbf{V}_{22}, \ldots, \mathbf{V}_{R2}, \ldots, \mathbf{V}_{1p}, \mathbf{V}_{2p}, \ldots, \mathbf{V}_{Rp}\right).
$$
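To make the block structure of the penalty concrete, the Python sketch below assembles diag(λ_rj·I) and diag(V_rj) and checks that dᵀLVd equals the sum of the per-block penalties λ_rj·d_rjᵀV_rj·d_rj. The block sizes and the random symmetric V_rj blocks are placeholders for illustration, not the paper's actual matrices.

```python
import numpy as np

def block_diag(blocks):
    # Assemble diag(V_11, ..., V_Rp) from a list of square blocks.
    sizes = [b.shape[0] for b in blocks]
    out = np.zeros((sum(sizes), sum(sizes)))
    i = 0
    for b, s in zip(blocks, sizes):
        out[i:i + s, i:i + s] = b
        i += s
    return out

rng = np.random.default_rng(2)
R, p, m = 3, 2, 4               # responses, predictors, coefficients per block
lams = rng.uniform(0.01, 1.0, size=R * p)          # smoothing parameters lambda_rj
Vs = []
for _ in range(R * p):
    A = rng.standard_normal((m, m))
    Vs.append(A @ A.T)          # symmetric PSD placeholder for V_rj
d = rng.standard_normal(R * p * m)                 # stacked coefficient vector

L = block_diag([lam * np.eye(m) for lam in lams])  # diag(lambda_11 I, ..., lambda_Rp I)
V = block_diag(Vs)                                 # diag(V_11, ..., V_Rp)
P = d @ (L @ V) @ d             # total penalty d^T L V d

# Equals the sum of per-block penalties lambda_rj * d_rj^T V_rj d_rj.
per_block = sum(lam * d[i * m:(i + 1) * m] @ Vj @ d[i * m:(i + 1) * m]
                for i, (lam, Vj) in enumerate(zip(lams, Vs)))
print(np.isclose(P, per_block))  # True
```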
Hereinafter, we obtained the plots of estimated model values versus predictor (observation) values for every response, together with the plots of their actual values. The plots for the first response, together with the actual values of the first response, are presented in Figure 1. The plots for the second response, together with the actual values of the second response, are presented in Figure 2. The plots for the third response, together with the actual values of the third response, are presented in Figure 3.

Figure 1. Plots of estimated model values versus predictor (observation) values for the first response (y₁) together with the plots of actual values of the first response.

Figure 2. Plots of estimated model values versus predictor (observation) values for the second response (y₂) together with the plots of actual values of the second response.

Figure 3. Plots of estimated model values versus predictor (observation) values for the third response (y₃) together with the plots of actual values of the third response.

Symmetry 2024, 16, x FOR PEER REVIEW


Figure 4. Surface plot of the estimated model versus predictors for the first response (y₁).



Figure 5. Surface plot of the estimated model versus predictors for the second response (y₂).

Figure 6. Surface plot of the estimated model versus predictors for the third response (y₃).

Table 1 presents the optimal lambda values, based on the minimum GCV values from 15 repetitions, of the MMNR model presented in Equation (56).