Article

Consistency and Asymptotic Normality of Estimator for Parameters in Multiresponse Multipredictor Semiparametric Regression Model

Nur Chamidah, Budi Lestari, I. Nyoman Budiantara, Toha Saifudin, Riries Rulaningtyas, Aryati Aryati, Puspa Wardani and Dursun Aydin
1 Department of Mathematics, Faculty of Science and Technology, Airlangga University, Surabaya 60115, Indonesia
2 Research Group of Statistical Modeling in Life Science, Faculty of Science and Technology, Airlangga University, Surabaya 60115, Indonesia
3 Department of Mathematics, Faculty of Mathematics and Natural Sciences, The University of Jember, Jember 68121, Indonesia
4 Department of Statistics, Faculty of Sciences and Data Analytics, Sepuluh Nopember Institute of Technology, Surabaya 60111, Indonesia
5 Department of Physics, Faculty of Science and Technology, Airlangga University, Surabaya 60115, Indonesia
6 Department of Clinical Pathology, Faculty of Medicine, Airlangga University, Surabaya 60131, Indonesia
7 Department of Statistics, Faculty of Science, Muğla Sıtkı Koçman University, Muğla 48000, Turkey
* Author to whom correspondence should be addressed.
Symmetry 2022, 14(2), 336; https://doi.org/10.3390/sym14020336
Submission received: 5 December 2021 / Revised: 20 January 2022 / Accepted: 1 February 2022 / Published: 6 February 2022
(This article belongs to the Section Mathematics)

Abstract

A multiresponse multipredictor semiparametric regression (MMSR) model is a combination of parametric and nonparametric regression models with more than one predictor and more than one response variable, where there is correlation between responses. Due to this correlation, we need to construct a symmetric weight matrix. This is one of the things that distinguishes the approach from the classical method, which uses a parametric regression approach. In this study, we theoretically developed a method for determining a confidence interval for the parameters in a MMSR model based on a truncated spline, and investigated the asymptotic properties of the estimator for the parameters in a MMSR model, especially consistency and asymptotic normality. The weighted least squares method was used to estimate the MMSR model. Next, we applied the pivotal quantity method, the Cramer–Wold theorem, and the Slutsky theorem to determine the confidence interval and to investigate the consistency and asymptotic normality properties of the estimator for the parameters in a MMSR model. We found that the estimated regression function is linear in the observations. We also obtained the $100(1-\alpha)\%$ confidence interval for the parameters in the MMSR model, and the estimator for the parameters in the MMSR model was consistent and asymptotically normally distributed. In the future, these results can be used as a theoretical basis for designing a standard toddler growth chart to assess nutritional status.

1. Introduction

Regression models, which are used to analyze the functional relationship between response variables and predictor variables in various fields, are widely applied for both prediction and interpretation purposes. This functional relationship is represented by a regression function. Considering the form of the regression function, we recognize two types of regression models, namely the parametric regression (PR) model and the nonparametric regression (NR) model. The combination of the two types produces a semiparametric regression (SR) model. Furthermore, if this model has more than one predictor and more than one response variable, where there is correlation between responses, the model is called a multiresponse multipredictor semiparametric regression (MMSR) model.
In general, the main problem in regression modeling is estimating the regression function. Several estimators are frequently used in NR modeling, for example local linear, local polynomial, kernel, and spline estimators. For prediction purposes, the use of local linear, local polynomial, and kernel estimators is highly recommended, as discussed by [1,2,3,4,5,6,7,8,9,10,11]. Meanwhile, for both prediction and interpretation purposes, spline estimators are better and more flexible, as discussed by [12,13,14]. Because of this flexibility, many researchers have been interested in using and developing spline estimators in several settings. For example, an M-type spline estimator was discussed by [15]; truncated spline and B-spline estimators were discussed by [16,17], respectively; and a penalized spline was proposed by [18] to analyze current status data. Additionally, spline smoothing was proposed by [19] to estimate the regression function describing the association between cortisol and ACTH hormones, and spline regression was proposed by [20] to estimate a regression function applied to censored data. Next, [21,22] used both kernel smoothing and spline estimators to estimate the regression function and select the optimal smoothing parameter of uni-response nonparametric regression (UNR) models and multiresponse nonparametric regression (MNR) models, respectively. A kernel estimator was proposed by [23] to estimate a UNR model through a simulation study, and [24] discussed kernel and spline smoothing techniques to estimate the coefficients in a rates model. However, local linear, local polynomial, and kernel estimators depend strongly on the neighborhood of the target point, called the bandwidth, so that estimating a model for fluctuating data requires a small bandwidth and results in an estimated curve that is too rough. These estimators therefore consider only goodness of fit and not smoothness, and they are not well suited to estimating models for data that fluctuate within sub-intervals, because the estimates will give a large mean square error (MSE). This is different from the spline estimator, which considers both goodness of fit and smoothness, as discussed by several researchers. Furthermore, [25] compared smoothing and truncated splines for estimating blood pressure models; the results showed that the smoothing spline estimates the model better than the truncated spline, as indicated by the MSE of the smoothing spline estimator being smaller than that of the truncated spline estimator. This means that, for prediction purposes, the smoothing spline is better than the truncated spline. Additionally, [26] discussed estimating the regression function of a MNR model using a smoothing spline and investigated the asymptotic properties of the estimated regression function. The application of mixed smoothing spline and Fourier series models was discussed by [27]. Although the researchers mentioned above have discussed several estimators for estimating regression functions, they considered UNR and MNR models only; that is, they discussed estimators in NR models only.
Next, several researchers have discussed estimators in SR models. For example, [28,29] discussed smoothing techniques for estimating SR models; Ref. [30] used a spline estimator for determining the number of knots and their locations based on statistical criteria; Ref. [31] used an iterative weighted partial spline least squares estimator to estimate a longitudinal SR model; Ref. [32] used a local linear estimator for designing a standard growth chart of children; Ref. [33] predicted GDP in Turkey using a SR model approach; Ref. [34] discussed a SR model applied to censored data; Refs. [35,36,37] discussed smoothing splines in SR models; and Ref. [38] used a bias-correction technique to construct empirical likelihood ratios for estimating a semiparametric model. These studies, however, considered uni-response semiparametric regression (USR) models only. Several researchers have also discussed estimators in multiresponse semiparametric regression (MSR) models. For example, [39] studied estimating a MSR model using a smoothing spline estimator; Ref. [40] discussed determining a confidence interval for the parameters of the parametric component of a binary-response SR model using a truncated spline estimator; Ref. [41] discussed estimating the regression function and the confidence interval of the parameters of a MSR model using a smoothing spline estimator; and Ref. [42] discussed estimating the confidence interval for the parameters of a MMSR model using a truncated spline estimator. However, none of the previous researchers mentioned above has discussed the consistency and asymptotic normality properties of the estimator for the parameters in a MMSR model based on a truncated spline estimator.
In practice, we are often faced with the problem of analyzing the functional relationship between more than one response variable and more than one predictor variable, where some of the predictor variables have a linear functional relationship with the response variables, some of the other predictor variables do not form a functional relationship with any particular pattern, and there is correlation between responses. To deal with this problem, the MMSR model approach is used. Basically, the main goal of MMSR modeling is to obtain a better model than USR modeling, considering that this model accounts not only for the effect of the predictors on the responses but also for the relationship between the responses. The relationship between responses is usually represented by a covariance matrix, which is used as a weight in estimating the parameters of the model. Hence, the problem of estimating the regression function is more complicated for a MMSR model, because the regression function of this model consists of a parametric component and a nonparametric component. Additionally, in this model there is correlation between responses, so that the estimation of the regression function requires a weight matrix in the form of a symmetric matrix, in particular a diagonal matrix.
Therefore, in this article we discuss a new method for determining the confidence interval of the parameters in the MMSR model and investigate the asymptotic normality and consistency properties of the estimator for the parameters in the MMSR model based on a truncated spline estimator, which has a very good ability to handle data whose behavior changes (fluctuates) in certain sub-intervals.

2. Materials and Methods

In this section, we describe the materials and methods used to determine the asymptotic normality and consistency of the estimator for the parameters in the MMSR model based on a truncated spline.

2.1. Multiresponse Multipredictor Semiparametric Regression (MMSR) Model

A paired set of observations $(y_{ki}, x_{k1i}, x_{k2i}, \ldots, x_{kqi}, t_{k1i}, t_{k2i}, \ldots, t_{kri})$, where $k = 1, 2, \ldots, p$; $i = 1, 2, \ldots, n$; $q + r = n$; $y$ represents the response variable; and $x$ and $t$ represent predictor variables, follows a MMSR model if the relationship between the observations of the response variable, namely $y_{ki}$, and the observations of the predictor variables, namely $x_{k1i}, x_{k2i}, \ldots, x_{kqi}, t_{k1i}, t_{k2i}, \ldots, t_{kri}$, satisfies the following model:
$$ y_{ki} = f_k(x_{k1i}, x_{k2i}, \ldots, x_{kqi}) + g_k(t_{k1i}, t_{k2i}, \ldots, t_{kri}) + \varepsilon_{ki} $$
where $y_{ki}$ is the $i$th observation of the $k$th response, $f_k(x_{k1i}, x_{k2i}, \ldots, x_{kqi})$ is the parametric component of the $k$th response, $g_k(t_{k1i}, t_{k2i}, \ldots, t_{kri})$ is the nonparametric component of the $k$th response, in which $g_k$ is assumed to be smooth in the sense that it belongs to the Sobolev space $W_2^m[a_k, b_k]$, and $\varepsilon_{ki}$ is a zero-mean random error with variance $\sigma_{ki}^2$.
The multiresponse multipredictor semiparametric regression (MMSR) model presented by Equation (1) can be written as follows:
$$ y_{ki} = h_k(x_{k1i}, x_{k2i}, \ldots, x_{kqi}, t_{k1i}, t_{k2i}, \ldots, t_{kri}) + \varepsilon_{ki} $$
where $h_k(x_{k1i}, \ldots, x_{kqi}, t_{k1i}, \ldots, t_{kri}) = f_k(x_{k1i}, \ldots, x_{kqi}) + g_k(t_{k1i}, \ldots, t_{kri})$ is the unknown regression function of the MMSR model presented by Equation (1), consisting of a parametric component and a nonparametric component. Next, to estimate the regression function of the MMSR model presented by Equations (1) and (2) based on a truncated spline estimator, we need to develop the truncated spline proposed by [12].

2.2. Truncated Spline

According to [12], the truncated portion of a polynomial spline, called a piecewise polynomial, is a continuous segmented polynomial. A truncated spline regression model can adapt to the characteristics of the data and is able to handle data patterns that rise or fall sharply, with the locations and the number of the knot points chosen so that the resulting curve is relatively smooth. Next, suppose we have a multiresponse multipredictor nonparametric regression (MMNR) model, which can be expressed as follows:
$$ y_{ki} = \sum_{l=1}^{r} g_{kl}(t_{kli}) + \varepsilon_{ki}; \quad i = 1, 2, \ldots, n; \; k = 1, 2, \ldots, p $$
Then, the truncated power function with $B$ knots (i.e., knot points $b_1, b_2, \ldots, b_B$) and degree $d$ is defined as follows [12]:
$$ (t_{kli} - b_{klj})_+^d = \begin{cases} (t_{kli} - b_{klj})^d, & t_{kli} \ge b_{klj} \\ 0, & t_{kli} < b_{klj} \end{cases} $$
where $d$ denotes the degree of the polynomial; in general, $d = 1, 2$, and $3$ in Equation (4) give linear, quadratic, and cubic polynomial functions, respectively. Hence, based on Equations (3) and (4), the general form of the truncated spline with polynomial degree $d$ and $B$ knot points for the regression function of the MMNR model presented by Equation (3) can be expressed as follows:
$$ g_{kl}(t_{kli}) = \alpha_{k0} + \sum_{s=1}^{d} \alpha_{kls}\, t_{kli}^{s} + \sum_{j=1}^{B} \beta_{klj}\, (t_{kli} - b_{klj})_+^{d} $$
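For illustration, the truncated power function in Equation (4) and the design columns implied by Equation (5) can be computed as in the following minimal Python sketch; the knot locations, the degree, and the sample of points are hypothetical choices made for this example and are not taken from the article.

```python
import numpy as np

def truncated_power(t, knot, d):
    """Truncated power function (t - knot)_+^d of Equation (4):
    equals (t - knot)^d when t >= knot and 0 otherwise."""
    return np.where(t >= knot, (t - knot) ** d, 0.0)

def truncated_spline_design(t, knots, d):
    """Design matrix implied by Equation (5): columns are
    1, t, ..., t^d followed by (t - b_1)_+^d, ..., (t - b_B)_+^d."""
    cols = [t ** s for s in range(d + 1)]              # polynomial part
    cols += [truncated_power(t, b, d) for b in knots]  # truncated part
    return np.column_stack(cols)

# Illustrative use with hypothetical knots and a linear (d = 1) spline.
t = np.linspace(0.0, 1.0, 50)
D = truncated_spline_design(t, knots=[0.3, 0.7], d=1)
print(D.shape)  # (50, 4): intercept, t, (t - 0.3)_+, (t - 0.7)_+
```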
Furthermore, we can extend this technique to the MMSR model in order to estimate the regression function of the MMSR model presented by Equation (1) or Equation (2) based on the truncated spline. Next, to determine the confidence interval for the parameters in the MMSR model, we need the pivotal quantity method proposed by [43].

2.3. Pivotal Quantity

Suppose we have a random sample of size $n$, $X_1, X_2, \ldots, X_n$, from a population $X$ with probability density function $f(x; \delta)$, where $\delta$ is an unknown parameter. If $T$ is a function of $X_1, X_2, \ldots, X_n$ and $\delta$ whose probability distribution does not depend on the parameter $\delta$, then $T$ is called a pivotal quantity [43].
Finally, based on the truncated spline estimator proposed by [12] and by applying the weighted least squares (WLS) method, we can estimate the MMSR model presented by Equation (1). Next, we apply a development of the pivotal quantity method proposed by [43] to determine the confidence interval for the parameters in the MMSR model. Additionally, we investigate the consistency property of the estimator for the parameters in the MMSR model, and then we apply the Cramer–Wold theorem [44] and the Slutsky theorem [45] to determine the asymptotic normality of the estimator for the parameters in the MMSR model presented by Equation (1).

2.4. Simulation

In this simulation we generated three samples of sizes $n = 30, 50$, and $100$. The design vector $\mathbf{X} = (\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3)$ was produced from a uniform distribution, and the response vector $\mathbf{y} = (\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3)^T$ consists of three response variables. In addition, the generated MMSR model includes two different smooth functions, $g_1(t_1)$ and $g_2(t_2)$, with nonparametric covariates $t_1$ and $t_2$, respectively. The number of replications for each sample size was 1000. Finally, the random error terms $\boldsymbol{\varepsilon}_i$ were independent and identically distributed from the multivariate normal distribution $\boldsymbol{\varepsilon}_i \sim \mathrm{MN}(\mathbf{0}, \Sigma_{\varepsilon}^2)$ for the three models.

3. Results and Discussion

The results and discussions presented in this section include estimating a MMSR model, estimating confidence interval of the parameters in a MMSR model, investigating the consistency of estimator for parameters in a MMSR model, determining asymptotic normality of estimator for parameters in a MMSR model, and simulation study.

3.1. Estimating MMSR Model

By considering Equations (4) and (5), the MMSR model presented by Equation (1) or Equation (2) is estimated based on the truncated spline estimator by approximating its regression function with a truncated spline of polynomial degree $d = 1$ (i.e., a linear polynomial), knot points $b$, and number of knots $B$, so that the MMSR model presented by Equation (1) can be expressed as follows:
$$ y_{ki} = \alpha_{k0} + \sum_{l=1}^{q} \alpha_{kl}\, x_{kli} + \sum_{j=1}^{r} \beta_{kj}\, t_{kji} + \sum_{m=1}^{B} \beta_{k(j+m)}\, (t_{kji} - b_{kjm})_+^{1} + \varepsilon_{ki} $$
where $k = 1, 2, \ldots, p$ and $i = 1, 2, \ldots, n$.
Therefore, the model presented by Equation (6) can be written in matrix notation as follows:
$$ \mathbf{y} = \begin{pmatrix} \mathbf{X} & \mathbf{t} \end{pmatrix} \begin{pmatrix} \boldsymbol{\alpha} \\ \boldsymbol{\beta} \end{pmatrix} + \boldsymbol{\varepsilon} $$
Next, let $\mathbf{H} = \begin{pmatrix} \mathbf{X} & \mathbf{t} \end{pmatrix}$ and $\boldsymbol{\delta} = \begin{pmatrix} \boldsymbol{\alpha} \\ \boldsymbol{\beta} \end{pmatrix}$; then the MMSR model presented by Equation (7) can be written as follows:
$$ \mathbf{y} = \mathbf{H}\boldsymbol{\delta} + \boldsymbol{\varepsilon} $$
where $\mathbf{y} = \begin{pmatrix} \mathbf{y}_1 \\ \mathbf{y}_2 \\ \vdots \\ \mathbf{y}_p \end{pmatrix}$; $\mathbf{H} = \begin{pmatrix} \mathbf{H}_1 & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{0} & \mathbf{H}_2 & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{H}_p \end{pmatrix}$; $\boldsymbol{\delta} = \begin{pmatrix} \alpha_{k0} & \alpha_{k1} & \cdots & \alpha_{kq} & \beta_{k1} & \beta_{k2} & \cdots & \beta_{kr} \end{pmatrix}^T$; and $\boldsymbol{\varepsilon} = \begin{pmatrix} \boldsymbol{\varepsilon}_1 \\ \boldsymbol{\varepsilon}_2 \\ \vdots \\ \boldsymbol{\varepsilon}_p \end{pmatrix}$.
Further, suppose $\boldsymbol{\varepsilon}$ is independently and identically distributed with zero mean and covariance matrix $\mathbf{W}$. Note that there is correlation between responses, which implies that there is also correlation between the random errors of the response variables. Therefore, for estimating the parameters of the MMSR model we use the weighted least squares (WLS) method, which requires a symmetric weight matrix $\mathbf{W}^{-1}$, the inverse of the covariance matrix $\mathbf{W}$. The covariance matrix $\mathbf{W}$ is constructed as follows:
$$ \mathrm{Cov}(\boldsymbol{\varepsilon}) = E(\boldsymbol{\varepsilon}\boldsymbol{\varepsilon}^T) = \begin{pmatrix} E(\boldsymbol{\varepsilon}_1\boldsymbol{\varepsilon}_1^T) & E(\boldsymbol{\varepsilon}_1\boldsymbol{\varepsilon}_2^T) & \cdots & E(\boldsymbol{\varepsilon}_1\boldsymbol{\varepsilon}_p^T) \\ E(\boldsymbol{\varepsilon}_2\boldsymbol{\varepsilon}_1^T) & E(\boldsymbol{\varepsilon}_2\boldsymbol{\varepsilon}_2^T) & \cdots & E(\boldsymbol{\varepsilon}_2\boldsymbol{\varepsilon}_p^T) \\ \vdots & \vdots & \ddots & \vdots \\ E(\boldsymbol{\varepsilon}_p\boldsymbol{\varepsilon}_1^T) & E(\boldsymbol{\varepsilon}_p\boldsymbol{\varepsilon}_2^T) & \cdots & E(\boldsymbol{\varepsilon}_p\boldsymbol{\varepsilon}_p^T) \end{pmatrix} = \begin{pmatrix} \mathbf{W}_{11} & \mathbf{W}_{12} & \cdots & \mathbf{W}_{1p} \\ \mathbf{W}_{21} & \mathbf{W}_{22} & \cdots & \mathbf{W}_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{W}_{p1} & \mathbf{W}_{p2} & \cdots & \mathbf{W}_{pp} \end{pmatrix} = \mathbf{W} $$
where $\mathbf{W}_{kl} = \mathrm{diag}(\sigma_{k1}^2, \sigma_{k2}^2, \ldots, \sigma_{kn}^2)$ for $k = l$ and $\mathbf{W}_{kl} = \mathrm{diag}(\sigma_{kl1}, \sigma_{kl2}, \ldots, \sigma_{kln})$ for $k \ne l$; $k, l = 1, 2, \ldots, p$.
Note that the matrix $\mathbf{W}$ given in (9) is a symmetric matrix. Next, based on the assumption of the MMSR model given in (1) that $\varepsilon_{ki}$ is a zero-mean random error with variance $\sigma_{ki}^2$, it follows that $\sigma_{kl} = \rho_{kl}\,\sigma_k \sigma_l$, where $\rho_{kl} = \rho_k$ for $k = l$ and $\rho_{kl} = 0$ for $k \ne l$; $k, l = 1, 2, \ldots, p$.
Hence, for $k, l = 1, 2, \ldots, p$ we have:
$$ \mathbf{W}_{kl} = \begin{cases} \mathrm{diag}(\sigma_{k1}^2, \sigma_{k2}^2, \ldots, \sigma_{kn}^2), & k = l \\ \mathbf{0}, & k \ne l \end{cases} $$
Therefore, the matrix given in (9) can be written as follows:
$$ \mathbf{W}(\boldsymbol{\sigma}^2) = \begin{pmatrix} \mathbf{W}_1(\sigma_1^2) & \mathbf{0} & \cdots & \mathbf{0} \\ \mathbf{0} & \mathbf{W}_2(\sigma_2^2) & \cdots & \mathbf{0} \\ \vdots & \vdots & \ddots & \vdots \\ \mathbf{0} & \mathbf{0} & \cdots & \mathbf{W}_p(\sigma_p^2) \end{pmatrix} = \mathrm{diag}\!\left(\mathbf{W}_1(\sigma_1^2), \mathbf{W}_2(\sigma_2^2), \ldots, \mathbf{W}_p(\sigma_p^2)\right) $$
where $\mathbf{0}$ is the null matrix, that is, a matrix in which all elements are zero, and the matrix $\mathbf{W}_k(\sigma_k^2)$ is given as follows:
$$ \mathbf{W}_k(\sigma_k^2) = \begin{pmatrix} \sigma_{k1}^2 & \sigma_{k(1,2)} & \cdots & \sigma_{k(1,n)} \\ \sigma_{k(2,1)} & \sigma_{k2}^2 & \cdots & \sigma_{k(2,n)} \\ \vdots & \vdots & \ddots & \vdots \\ \sigma_{k(n,1)} & \sigma_{k(n,2)} & \cdots & \sigma_{kn}^2 \end{pmatrix}. $$
The estimates of the parameters in the MMSR model presented by Equation (1) can be obtained by solving the following WLS optimization problem:
$$ \min_{\boldsymbol{\delta}} Q(\boldsymbol{\delta}) = \min_{\boldsymbol{\delta}} \left(\mathbf{y} - \mathbf{H}\boldsymbol{\delta}\right)^T \mathbf{W}^{-1} \left(\mathbf{y} - \mathbf{H}\boldsymbol{\delta}\right) $$
Hence, by setting the partial derivative of $Q(\boldsymbol{\delta})$ with respect to $\boldsymbol{\delta}$ equal to zero,
$$ \frac{\partial Q(\boldsymbol{\delta})}{\partial \boldsymbol{\delta}} = \frac{\partial}{\partial \boldsymbol{\delta}} \left(\mathbf{y} - \mathbf{H}\boldsymbol{\delta}\right)^T \mathbf{W}^{-1} \left(\mathbf{y} - \mathbf{H}\boldsymbol{\delta}\right) = \mathbf{0}, $$
we get:
$$ \hat{\boldsymbol{\delta}} = \left(\mathbf{H}^T \mathbf{W}^{-1} \mathbf{H}\right)^{-1} \mathbf{H}^T \mathbf{W}^{-1} \mathbf{y} $$
Next, based on the MMSR model presented in Equation (2) and the estimated parameters given by Equation (11), the estimated regression function of the MMSR model, namely $\hat{\mathbf{h}}$, is:
$$ \hat{\mathbf{h}} = \mathbf{H}\hat{\boldsymbol{\delta}} $$
Thus, by considering Equations (8), (11) and (12), we obtain the estimate of the MMSR model given by Equation (1) based on the truncated spline estimator as follows:
$$ \hat{\mathbf{y}} = \mathbf{H}\hat{\boldsymbol{\delta}} = \mathbf{H}\left(\mathbf{H}^T \mathbf{W}^{-1} \mathbf{H}\right)^{-1} \mathbf{H}^T \mathbf{W}^{-1} \mathbf{y} = \mathbf{M}\mathbf{y} $$
where
$$ \mathbf{M} = \mathbf{H}\left(\mathbf{H}^T \mathbf{W}^{-1} \mathbf{H}\right)^{-1} \mathbf{H}^T \mathbf{W}^{-1}. $$
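As a concrete illustration of Equations (11), (13) and (14), the following minimal Python sketch computes the WLS estimate and the fitted values; the design matrix H, the weight matrix W, and the data below are small hypothetical placeholders, not quantities from the article.

```python
import numpy as np

def wls_estimate(H, W, y):
    """WLS estimator of Equation (11), delta_hat = (H^T W^-1 H)^-1 H^T W^-1 y,
    computed with a linear solve instead of an explicit inverse of H^T W^-1 H."""
    W_inv = np.linalg.inv(W)       # inverse of the symmetric weight matrix
    A = H.T @ W_inv @ H
    b = H.T @ W_inv @ y
    delta_hat = np.linalg.solve(A, b)
    y_hat = H @ delta_hat          # fitted values y_hat = M y, Equation (13)
    return delta_hat, y_hat

# Hypothetical example: 10 stacked observations, 3 parameters, diagonal W.
rng = np.random.default_rng(0)
H = rng.uniform(size=(10, 3))
W = np.diag(rng.uniform(0.5, 2.0, size=10))
y = H @ np.array([1.0, 2.0, 4.0]) + rng.normal(scale=0.1, size=10)
delta_hat, y_hat = wls_estimate(H, W, y)
print(delta_hat)
```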

3.2. Estimating Confidence Interval of Parameters in MMSR Model

We assume that $\varepsilon_{ki}$ in the MMSR model presented by Equation (1) is independently and identically normally distributed with zero mean and variance $\sigma_{ki}^2$, commonly written as $\varepsilon_{ki} \overset{\mathrm{i.i.d.}}{\sim} N(0, \sigma_{ki}^2)$, where $\sigma_{ki}^2$ is unknown. Next, suppose we wish to construct the $100(1-\alpha)\%$ confidence interval for $\delta_v$, where $v = 1, 2, \ldots, p^*$; $p^* = p + pq + rp + pB$; and $B$ is the number of knots. We then have the following pivotal quantity:
$$ T_v(y_k, x_{k1}, \ldots, x_{kq}, t_{k1}, \ldots, t_{kr}) = \frac{\hat{\delta}_v - \delta_v}{\sqrt{\mathrm{MSE}\,\left[\left(\mathbf{H}^T \mathbf{W}^{-1} \mathbf{H}\right)^{-1}\right]_{vv}}} $$
where $\mathrm{MSE} = \dfrac{\mathbf{y}^T\left(\mathbf{I} - \mathbf{H}\left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{W}^{-1}\right)\mathbf{y}}{np - p^*} = \dfrac{\mathbf{y}^T\left(\mathbf{I} - \mathbf{M}\right)\mathbf{y}}{np - p^*} = \dfrac{\mathbf{y}^T\boldsymbol{\Omega}\,\mathbf{y}}{np - p^*}$; $\boldsymbol{\Omega} = \mathbf{I} - \mathbf{M}$; and $\mathbf{M} = \mathbf{H}\left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{W}^{-1}$. Equation (15) is the pivotal quantity for the parameter $\delta_v$, where $\delta_v$ is the $v$th element of the parameter vector $\boldsymbol{\delta}$ and $\left[\left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)^{-1}\right]_{vv}$ is the $v$th diagonal element of $\left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)^{-1}$. The pivotal quantity given by Equation (15) follows a Student's $t$ distribution with $np - p^*$ degrees of freedom.
Hereinafter, to determine the $100(1-\alpha)\%$ confidence interval for $\delta_v$, we must solve the following probability equation:
$$ P\!\left(L_v \le T_v(y_k, x_{k1}, \ldots, x_{kq}, t_{k1}, \ldots, t_{kr}) \le U_v\right) = 1 - \alpha $$
where $L_v$ is the lower limit of the confidence interval, $U_v$ is the upper limit of the confidence interval, and $1 - \alpha$ is the confidence level.
Next, by substituting Equation (15) into Equation (16), we get:
$$ P\!\left(L_v \le \frac{\hat{\delta}_v - \delta_v}{\sqrt{\mathrm{MSE}\,\left[\left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)^{-1}\right]_{vv}}} \le U_v\right) = 1 - \alpha $$
Equation (17) can then be written as follows:
$$ P\!\left(\hat{\delta}_v - U \le \delta_v \le \hat{\delta}_v - L\right) = 1 - \alpha $$
where $L = L_v\sqrt{\dfrac{\mathbf{y}^T\boldsymbol{\Omega}\,\mathbf{y}}{np - p^*}\, D}$; $U = U_v\sqrt{\dfrac{\mathbf{y}^T\boldsymbol{\Omega}\,\mathbf{y}}{np - p^*}\, D}$; $\boldsymbol{\Omega} = \mathbf{I} - \mathbf{M}$; $\mathbf{M} = \mathbf{H}\left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{W}^{-1}$; $D = \left[\left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)^{-1}\right]_{vv}$; and the subscript $vv$ in Equation (17) denotes the $v$th diagonal element of $\left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)^{-1}$.
A confidence interval is called good if it has the shortest interval length. Because of this, we should determine the values of $L_v$ and $U_v$ such that the length of the confidence interval in Equation (18) is the shortest.
Therefore, if $\mathrm{length}(L_v, U_v)$ denotes the length of the confidence interval in Equation (18), then we have:
$$ \mathrm{length}(L_v, U_v) = \left(\hat{\delta}_v - L_v\sqrt{\tfrac{\mathbf{y}^T\boldsymbol{\Omega}\,\mathbf{y}}{np - p^*}\, D}\right) - \left(\hat{\delta}_v - U_v\sqrt{\tfrac{\mathbf{y}^T\boldsymbol{\Omega}\,\mathbf{y}}{np - p^*}\, D}\right) = (U_v - L_v)\sqrt{\tfrac{\mathbf{y}^T\boldsymbol{\Omega}\,\mathbf{y}}{np - p^*}\, D}. $$
Hence, to obtain the shortest confidence interval for $\delta_v$, we must solve the following optimization problem:
$$ \min_{L_v, U_v} \mathrm{length}(L_v, U_v) = \min_{L_v, U_v} (U_v - L_v)\sqrt{\frac{\mathbf{y}^T\boldsymbol{\Omega}\,\mathbf{y}}{np - p^*}\, D} $$
subject to the following condition on Equation (19):
$$ \int_{L_v}^{U_v} \psi(\omega)\, d\omega = 1 - \alpha \quad \text{or} \quad \varphi(U_v) - \varphi(L_v) - (1 - \alpha) = 0 $$
where $\psi(\cdot)$ is the probability density function of the $t_{np - p^*}$ distribution and $\varphi(\cdot)$ is its cumulative distribution function.
Next, by using a Lagrange multiplier method we have:
$$ R(L_v, U_v, \gamma) = (U_v - L_v)\sqrt{\frac{\mathbf{y}^T\boldsymbol{\Omega}\,\mathbf{y}}{np - p^*}\, D} + \gamma\left[\varphi(U_v) - \varphi(L_v) - (1 - \alpha)\right] $$
where $\gamma$ is a Lagrange multiplier.
Hence, we get:
$$ \frac{\partial R(L_v, U_v, \gamma)}{\partial L_v} = -\sqrt{\frac{\mathbf{y}^T\boldsymbol{\Omega}\,\mathbf{y}}{np - p^*}\, D} - \gamma\,\psi(L_v) = 0 $$
$$ \frac{\partial R(L_v, U_v, \gamma)}{\partial U_v} = \sqrt{\frac{\mathbf{y}^T\boldsymbol{\Omega}\,\mathbf{y}}{np - p^*}\, D} + \gamma\,\psi(U_v) = 0 $$
$$ \frac{\partial R(L_v, U_v, \gamma)}{\partial \gamma} = \varphi(U_v) - \varphi(L_v) - (1 - \alpha) = 0 $$
Based on Equations (22) and (23), we obtain:
$$ \psi(L_v) = \psi(U_v) $$
Equation (25) implies $L_v = U_v$ or $L_v = -U_v$. Since $L_v = U_v$ cannot hold in this case, the shortest confidence interval for the parameter $\delta_v$ must be taken from the values of $L_v$ and $U_v$ that satisfy the following equation:
$$ \int_{-\infty}^{L_v} \psi(\omega)\, d\omega = \int_{U_v}^{\infty} \psi(\omega)\, d\omega = \frac{\alpha}{2} $$
For a given confidence level $1 - \alpha$, the values of $L_v$ and $U_v$ that satisfy Equation (26) can be found in a table of the $t_{np - p^*}$ distribution.
Hence, the shortest confidence interval for parameters of a MMSR model based on the truncated spline estimator satisfies the following probability:
$$ P\!\left(\hat{\delta}_v - U_v\sqrt{\tfrac{\mathbf{y}^T\boldsymbol{\Omega}\,\mathbf{y}}{np - p^*}\, D} \le \delta_v \le \hat{\delta}_v + U_v\sqrt{\tfrac{\mathbf{y}^T\boldsymbol{\Omega}\,\mathbf{y}}{np - p^*}\, D}\right) = 1 - \alpha $$
where the value of $U_v$ can be obtained from Equation (26), that is, $\int_{U_v}^{\infty} \psi(\omega)\, d\omega = \frac{\alpha}{2}$.
Therefore, we have:
$$ P\!\left(\hat{\delta}_v - t_{\alpha/2;\, np - p^*}\sqrt{\tfrac{\mathbf{y}^T\boldsymbol{\Omega}\,\mathbf{y}}{np - p^*}\, D} \le \delta_v \le \hat{\delta}_v + t_{\alpha/2;\, np - p^*}\sqrt{\tfrac{\mathbf{y}^T\boldsymbol{\Omega}\,\mathbf{y}}{np - p^*}\, D}\right) = 1 - \alpha $$
Thus, by using the Student's $t$ distribution, the $100(1-\alpha)\%$ confidence interval for the parameter $\delta_v$ of the MMSR model presented by Equation (1) based on the truncated spline estimator is:
$$ \hat{\delta}_v - t_{\alpha/2;\, np - p^*}\sqrt{\tfrac{\mathbf{y}^T\boldsymbol{\Omega}\,\mathbf{y}}{np - p^*}\, D} \;\le\; \delta_v \;\le\; \hat{\delta}_v + t_{\alpha/2;\, np - p^*}\sqrt{\tfrac{\mathbf{y}^T\boldsymbol{\Omega}\,\mathbf{y}}{np - p^*}\, D} $$
where $v = 1, 2, \ldots, p^*$; $p^* = p + pq + rp + pB$; $B$ is the number of knots; $\boldsymbol{\Omega} = \mathbf{I} - \mathbf{M}$; $\mathbf{M} = \mathbf{H}\left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{W}^{-1}$; $D = \left[\left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)^{-1}\right]_{vv}$, the $v$th diagonal element of $\left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)^{-1}$; $\mathbf{H}$ is the predictor matrix given in Equation (8); and $\mathbf{W}$ is the symmetric covariance matrix given in Equation (9).
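The interval in Equation (29) can be computed numerically as in the sketch below; it assumes SciPy is available for the t quantile, uses p_star for the total number of parameters $p^* = p + pq + rp + pB$, and the inputs in the usage lines are hypothetical.

```python
import numpy as np
from scipy import stats

def wls_confidence_intervals(H, W, y, alpha=0.05):
    """100(1 - alpha)% confidence intervals of Equation (29) for each element
    of delta in the stacked model y = H delta + eps with covariance W."""
    N, p_star = H.shape                     # N = total stacked observations (np)
    W_inv = np.linalg.inv(W)
    A_inv = np.linalg.inv(H.T @ W_inv @ H)  # (H^T W^-1 H)^-1
    delta_hat = A_inv @ H.T @ W_inv @ y     # Equation (11)
    M = H @ A_inv @ H.T @ W_inv             # Equation (14)
    mse = y @ (np.eye(N) - M) @ y / (N - p_star)  # MSE = y^T (I - M) y / (np - p*)
    t_crit = stats.t.ppf(1 - alpha / 2, df=N - p_star)
    half_width = t_crit * np.sqrt(mse * np.diag(A_inv))  # t * sqrt(MSE * D)
    return delta_hat - half_width, delta_hat + half_width

# Hypothetical usage with identity weights.
rng = np.random.default_rng(0)
H = rng.uniform(size=(40, 4))
y = H @ np.array([1.0, 2.0, 4.0, 0.5]) + rng.normal(scale=0.2, size=40)
low, high = wls_confidence_intervals(H, np.eye(40), y)
print(np.column_stack([low, high]))
```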

3.3. Investigating Consistency of Estimator for Parameters in MMSR Model

Before investigating the consistency of the estimator for the parameters in the MMSR model, namely $\hat{\boldsymbol{\delta}}$, we need the following assumptions.
Assumption 1.
$x_{ki} = \dfrac{2i - 1}{2n}$ and $t_{ki} = \dfrac{2i - 1}{2n}$, $i = 1, 2, \ldots, n$; $k = 1, 2, \ldots, p$.
Assumption 2.
$\boldsymbol{\theta}_k = (x_{k1}, x_{k2}, \ldots, x_{kq}, t_{k1}, t_{k2}, \ldots, t_{kr})$, $k = 1, 2, \ldots, p$, are independently and identically distributed with zero mean, covariance matrix $\mathbf{Z}$, and finite third absolute moment.
Assumption 3.
$\displaystyle \lim_{n\to\infty} \frac{1}{n}\sum_{i=1}^{n} w_{ki} = \vartheta < \infty$, $k = 1, 2, \ldots, p$.
To establish the consistency of the estimator for the parameters of the MMSR model, namely $\hat{\boldsymbol{\delta}}$, we need the following Lemma 1 and Theorem 1.
Lemma 1.
If $\hat{\boldsymbol{\delta}}$, as presented in Equation (11), is the truncated spline estimator of the parameters of the multiresponse multipredictor semiparametric regression (MMSR) model, then
$$ \hat{\boldsymbol{\delta}} - \boldsymbol{\delta} = \left(\frac{\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}}{n}\right)^{-1}\frac{\mathbf{H}^T\mathbf{W}^{-1}\boldsymbol{\varepsilon}}{n}. $$
Proof of Lemma 1.
Based on Equations (8) and (11)–(14), we have:
$$ \begin{aligned} \hat{\boldsymbol{\delta}} - \boldsymbol{\delta} &= \left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{W}^{-1}\mathbf{y} - \boldsymbol{\delta} = \left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{W}^{-1}\left(\mathbf{H}\boldsymbol{\delta} + \boldsymbol{\varepsilon}\right) - \boldsymbol{\delta} \\ &= \left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\boldsymbol{\delta} + \left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{W}^{-1}\boldsymbol{\varepsilon} - \left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\boldsymbol{\delta} \\ &= \left(\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}\right)^{-1}\mathbf{H}^T\mathbf{W}^{-1}\boldsymbol{\varepsilon} = \left(\frac{\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}}{n}\right)^{-1}\frac{\mathbf{H}^T\mathbf{W}^{-1}\boldsymbol{\varepsilon}}{n}. \end{aligned} $$
Thus, we obtain:
$$ \hat{\boldsymbol{\delta}} - \boldsymbol{\delta} = \left(\frac{\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}}{n}\right)^{-1}\frac{\mathbf{H}^T\mathbf{W}^{-1}\boldsymbol{\varepsilon}}{n}. \quad \square $$
Theorem 1.
If Assumptions 1–3 hold, then
(a) $\dfrac{\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}}{n} \;\xrightarrow{P}\; \mathbf{Z}\vartheta$ as $n \to \infty$;
(b) $\dfrac{\mathbf{H}^T\mathbf{W}^{-1}\boldsymbol{\varepsilon}}{n} \;\xrightarrow{P}\; \mathbf{0}$ as $n \to \infty$.
Proof of Theorem 1.
Let $\mathbf{M} = (m_{kij})$, $\mathbf{M}^T\mathbf{W}\mathbf{M}\mathbf{W}^{-1} = (c_{kij})$, and $\mathbf{W} = \mathrm{diag}(w_{k1}, w_{k2}, \ldots, w_{kn})$; $k = 1, 2, \ldots, p$; $i, j = 1, 2, \ldots, n$. Because Assumptions 1–3 hold, we have the following.
(a)
Based on the Strong Law of Large Numbers [45], we have:
$$ \frac{\mathbf{H}^T\mathbf{H}}{n} \;\xrightarrow{P}\; \mathbf{Z}, $$
which implies $\dfrac{\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}}{n} \;\xrightarrow{P}\; \mathbf{Z}\vartheta$ as $n \to \infty$.
(b)
Note that $\dfrac{\mathbf{H}^T\mathbf{W}^{-1}\boldsymbol{\varepsilon}}{n} \;\xrightarrow{P}\; \mathbf{0}$ as $n \to \infty$ holds if $\mathrm{Var}\!\left[\left(\dfrac{\mathbf{H}^T\mathbf{W}^{-1}\boldsymbol{\varepsilon}}{n}\right)_i\right] \to 0$ as $n \to \infty$.
On the other hand, we have:
$$ \left[E\!\left(\mathbf{H}^T\mathbf{W}^{-1}\boldsymbol{\varepsilon}\boldsymbol{\varepsilon}^T\mathbf{W}^{-1}\mathbf{H}\right)\right]_{ii} = \sigma_{ki}^2\,\mathrm{tr}\!\left(\mathbf{z}_{ii}\mathbf{W}^{-1}\right) = o(n) \quad \text{as } n \to \infty. $$
This means that:
$$ \mathrm{Var}\!\left[\left(\frac{\mathbf{H}^T\mathbf{W}^{-1}\boldsymbol{\varepsilon}}{n}\right)_i\right] \to 0 \quad \text{as } n \to \infty. $$
Therefore, we obtain:
$$ \frac{\mathbf{H}^T\mathbf{W}^{-1}\boldsymbol{\varepsilon}}{n} \;\xrightarrow{P}\; \mathbf{0} \quad \text{as } n \to \infty. \quad \square $$
Furthermore, the following theorem establishes the consistency of $\hat{\boldsymbol{\delta}}$, the truncated spline estimator of the parameters of the MMSR model presented in Equation (1).
Theorem 2.
If $\hat{\boldsymbol{\delta}}$ is the truncated spline estimator of the parameters of the MMSR model presented in Equation (1), then
$$ \lim_{n\to\infty} P\!\left(\left\|\hat{\boldsymbol{\delta}} - \boldsymbol{\delta}\right\| < \xi\right) = 1, \quad \forall \xi > 0. $$
In other words, an estimator $\hat{\boldsymbol{\delta}}$ that satisfies the condition given in Equation (31) is said to be a consistent estimator.
Proof of Theorem 2.
Suppose $\hat{\boldsymbol{\delta}}$ is the estimator given in Equation (11). Based on Lemma 1 and Theorem 1, we get:
$$ \hat{\boldsymbol{\delta}} - \boldsymbol{\delta} = \left(\frac{\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}}{n}\right)^{-1}\frac{\mathbf{H}^T\mathbf{W}^{-1}\boldsymbol{\varepsilon}}{n} \;\xrightarrow{P}\; \mathbf{0} \quad \text{as } n \to \infty. $$
According to [45], for every $\xi > 0$ we can express Equation (32) as follows:
$$ \lim_{n\to\infty} P\!\left(\left\|\hat{\boldsymbol{\delta}} - \boldsymbol{\delta}\right\| \ge \xi\right) = 0. $$
Next, by using a basic probability property, we get:
$$ \lim_{n\to\infty} P\!\left(\left\|\hat{\boldsymbol{\delta}} - \boldsymbol{\delta}\right\| < \xi\right) = \lim_{n\to\infty}\left[1 - P\!\left(\left\|\hat{\boldsymbol{\delta}} - \boldsymbol{\delta}\right\| \ge \xi\right)\right] = 1. $$
Thus, $\lim_{n\to\infty} P\!\left(\left\|\hat{\boldsymbol{\delta}} - \boldsymbol{\delta}\right\| < \xi\right) = 1$ for all $\xi > 0$, which means that $\hat{\boldsymbol{\delta}}$ is a consistent estimator. $\square$
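Theorem 2 can be illustrated numerically: as the sample size grows, the WLS estimate should concentrate around the true parameter vector. The following hedged Python sketch checks this in a deliberately simplified setting (a single-response linear model with identity weights, so the WLS estimator reduces to ordinary least squares); it is not the article's design, only a consistency check of the same estimator form.

```python
import numpy as np

rng = np.random.default_rng(1)
delta_true = np.array([1.0, 2.0, 4.0])

def wls_fit(H, y):
    # WLS with W = I reduces to (H^T H)^-1 H^T y.
    return np.linalg.solve(H.T @ H, H.T @ y)

# Average estimation error ||delta_hat - delta|| should shrink as n grows.
for n in (30, 100, 300, 1000):
    errors = []
    for _ in range(100):
        H = rng.uniform(size=(n, 3))
        y = H @ delta_true + rng.normal(scale=0.5, size=n)
        errors.append(np.linalg.norm(wls_fit(H, y) - delta_true))
    print(n, np.mean(errors))
```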

3.4. Determining Asymptotic Normality of Estimator for Parameters in MMSR Model

In this section, we determine the asymptotic normality of $\hat{\boldsymbol{\delta}}$, the truncated spline estimator of the parameters of the MMSR model presented in Equation (1). Before investigating the asymptotic normality of $\hat{\boldsymbol{\delta}}$, we first need the following Lemma 2 and Theorem 3.
Lemma 2.
If $\mathbf{M}$ is the matrix given in Equation (14) and $\mathbf{h}$ is the vector of the regression function of the MMSR model, then
$$ \lim_{n\to\infty} n^{-3/2} \sum_{i} \left|\left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h}\right]_i\right|^3 = 0. $$
Proof of Lemma 2.
We have:
$$ n^{-3/2} \sum_{i} \left|\left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h}\right]_i\right|^3 = n^{-3/2} \sum_{i} \left|\left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h}\right]_i\right| \left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h}\right]_i^2 \le n^{-3/2} \max_{j} \left|\left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h}\right]_j\right| \sum_{i} \left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h}\right]_i^2. $$
On the other hand, we have:
$$ n^{-3/2} \sum_{i} \left|\left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h}\right]_i\right|^3 \le n^{-3/2} \sqrt{\sum_{i} \left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h}\right]_i^2}\; \sum_{i} \left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h}\right]_i^2 = n^{-3/2} \left[\mathbf{h}^T\mathbf{W}^{-1}\left(\mathbf{I} - \mathbf{M}\right)\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h}\right]^{3/2}. $$
Hence, we obtain:
$$ n^{-3/2} \sum_{i} \left|\left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h}\right]_i\right|^3 = o(1), \quad \text{or} \quad \lim_{n\to\infty} n^{-3/2} \sum_{i} \left|\left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h}\right]_i\right|^3 = 0. \quad \square $$
Theorem 3.
If $\mathbf{M}$ is the matrix given in Equation (14) and $\mathbf{h}$ is the vector of the regression function of the MMSR model, then, for $n \to \infty$,
$$ \frac{\mathbf{H}^T\left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h} + \mathbf{W}^{-1}\boldsymbol{\varepsilon}\right]}{\sqrt{n}} \;\xrightarrow{d}\; \mathbf{D} \sim N\!\left(\mathbf{0},\, \sigma_{ki}^2 \mathbf{Z}\vartheta\right). $$
Proof of Theorem 3.
To prove this theorem, we apply the Cramer–Wold theorem [44,45]. Given a vector $\mathbf{a}$, we write:
$$ \mathbf{a}^T \frac{\mathbf{H}^T\left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h} + \mathbf{W}^{-1}\boldsymbol{\varepsilon}\right]}{\sqrt{n}} = \sum_{i} K_i $$
where $K_i = n^{-1/2}\left[\mathbf{H}\mathbf{a}\right]_i \left(\left[\mathbf{W}^{-1}\boldsymbol{\varepsilon}\right]_i + \left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h}\right]_i\right)$ are zero-mean independent random variables. Therefore, $K_i$ has mean $0$ and variance such that:
$$ \sum_{i} \mathrm{Var}(K_i) = \mathbf{a}^T \sigma_{ki}^2 \mathbf{Z}\left(\frac{1}{n}\sum_{i=1}^{n} w_{ki}\right)\mathbf{a} + \mathbf{a}^T \mathbf{Z}\mathbf{a}\, \frac{1}{n}\sum_{i} \left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h}\right]_i^2. $$
Next, by taking Assumptions 1–3 into account, $\sum_{i} \mathrm{Var}(K_i)$ converges to $\mathbf{a}^T \sigma_{ki}^2 \mathbf{Z}\vartheta\, \mathbf{a}$. Next, we have:
$$ \sum_{i} E\left|K_i\right|^3 = n^{-3/2} \sum_{i} E\!\left(\left|\left[\mathbf{H}\mathbf{a}\right]_i\right|^3 \left|\left[\mathbf{W}^{-1}\boldsymbol{\varepsilon}\right]_i + \left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h}\right]_i\right|^3\right) = n^{-3/2}\, E\left|\left[\mathbf{H}\mathbf{a}\right]_1\right|^3 \sum_{i} E\left|\left[\mathbf{W}^{-1}\boldsymbol{\varepsilon}\right]_i + \left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h}\right]_i\right|^3. $$
On the other hand, we obtain:
$$ \sum_{i} E\left|K_i\right|^3 \le E\left|\left[\mathbf{H}\mathbf{a}\right]_1\right|^3 \left(n^{-1/2}\max_i E\left|\left[\mathbf{W}^{-1}\boldsymbol{\varepsilon}\right]_i\right|^3 + n^{-3/2}\sum_{i} \left|\left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h}\right]_i\right|^3\right). $$
Due to Lemma 2 and the finite third absolute moment of $\left[\mathbf{W}^{-1}\boldsymbol{\varepsilon}\right]_i$, $\sum_{i} E\left|K_i\right|^3$ converges to zero.
Thus, $\sum_{i} K_i$ converges to a normal distribution, that is, $N\!\left(0,\, \mathbf{a}^T \sigma_{ki}^2 \mathbf{Z}\vartheta\, \mathbf{a}\right)$. $\square$
Furthermore, the following theorem provides the asymptotic normality of $\hat{\boldsymbol{\delta}}$, the truncated spline estimator of the parameters of the MMSR model presented in Equation (1).
Theorem 4.
If $\hat{\boldsymbol{\delta}}$ is the truncated spline estimator of the parameters in the MMSR model presented in Equation (1), then, for $n \to \infty$,
$$ \sqrt{n}\left(\hat{\boldsymbol{\delta}} - \boldsymbol{\delta}\right) \;\xrightarrow{d}\; \mathbf{D} \sim N\!\left(\mathbf{0},\, \sigma_{ki}^2 \mathbf{Z}^{-1}\vartheta^{-1}\right). $$
Proof of Theorem 4.
By considering Equation (28), we can express $\sqrt{n}\left(\hat{\boldsymbol{\delta}} - \boldsymbol{\delta}\right)$ as follows:
$$ \sqrt{n}\left(\hat{\boldsymbol{\delta}} - \boldsymbol{\delta}\right) = \left(\frac{\mathbf{H}^T\mathbf{W}^{-1}\mathbf{H}}{n}\right)^{-1}\frac{\mathbf{H}^T\mathbf{W}^{-1}\boldsymbol{\varepsilon}}{\sqrt{n}} = \left(\frac{\mathbf{H}^T\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{H}}{n}\right)^{-1}\left\{\frac{\mathbf{H}^T\left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h} + \mathbf{W}^{-1}\boldsymbol{\varepsilon}\right]}{\sqrt{n}} - \frac{\mathbf{H}^T\mathbf{M}^T\mathbf{W}^{-1}\boldsymbol{\varepsilon}}{\sqrt{n}}\right\}. $$
Hence, based on this decomposition we obtain:
$$ \left(\frac{\mathbf{H}^T\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{H}}{n}\right)^{-1} \;\xrightarrow{P}\; \mathbf{Z}^{-1}\vartheta^{-1} $$
as $n \to \infty$, and $\dfrac{\mathbf{H}^T\mathbf{M}^T\mathbf{W}^{-1}\boldsymbol{\varepsilon}}{\sqrt{n}} \;\xrightarrow{P}\; \mathbf{0}$ as $n \to \infty$.
Furthermore, based on Theorem 3, we have:
$$ \frac{\mathbf{H}^T\left[\left(\mathbf{I} - \mathbf{M}\right)^T\mathbf{W}^{-1}\mathbf{h} + \mathbf{W}^{-1}\boldsymbol{\varepsilon}\right]}{\sqrt{n}} \;\xrightarrow{d}\; \mathbf{D} \sim N\!\left(\mathbf{0},\, \sigma_{ki}^2 \mathbf{Z}\vartheta\right) \quad \text{as } n \to \infty. $$
Next, by applying the Slutsky theorem [45], we obtain:
$$ \sqrt{n}\left(\hat{\boldsymbol{\delta}} - \boldsymbol{\delta}\right) \;\xrightarrow{d}\; \mathbf{D} \sim N\!\left(\mathbf{0},\, \sigma_{ki}^2 \mathbf{Z}^{-1}\vartheta^{-1}\right) \quad \text{as } n \to \infty. \quad \square $$
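Theorem 4 can likewise be examined empirically: the standardized quantities $\sqrt{n}(\hat{\delta}_v - \delta_v)$ should be approximately normal for large $n$. The sketch below repeats the simplified single-response, identity-weight setting used above (an assumption for illustration, not the article's simulation) and applies an omnibus normality test from SciPy to one standardized coordinate.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
delta_true = np.array([1.0, 2.0, 4.0])
n, reps = 500, 1000

z1 = []
for _ in range(reps):
    H = rng.uniform(size=(n, 3))
    y = H @ delta_true + rng.normal(scale=0.5, size=n)
    delta_hat = np.linalg.solve(H.T @ H, H.T @ y)       # WLS with W = I
    z1.append(np.sqrt(n) * (delta_hat[0] - delta_true[0]))
z1 = np.array(z1)

# Under asymptotic normality, z1 should be close to a centered Gaussian:
# the omnibus test should not reject, and the mean should be near zero.
print(stats.normaltest(z1))
print(z1.mean(), z1.std())
```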

3.5. Simulation Study

In this section we give a simulation study for a MMSR model with three response variables and three predictor variables.
(i)
Simulation Design: The simulation study scenarios are set as follows:
  • We generate three samples of sizes $n = 30, 50$, and $100$.
  • The response vector $\mathbf{y} = (\mathbf{y}_1, \mathbf{y}_2, \mathbf{y}_3)^T$, consisting of three response variables, is created from the MMSR model given in (ii).
  • The design vector $\mathbf{X} = (\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3)$ is produced from a uniform distribution.
  • For each model, a total of nine regression coefficients, specified as $\boldsymbol{\alpha}_1 = (1, 2, 4)^T$, $\boldsymbol{\alpha}_2 = (2, 3, 5)^T$, and $\boldsymbol{\alpha}_3 = (0.5, 1, 3)^T$, are considered.
  • In addition, the generated MMSR model includes two different smooth functions, $g_1(t_1)$ and $g_2(t_2)$, with nonparametric covariates $t_1$ and $t_2$, respectively.
  • The number of replications for each sample size is 1000.
(ii)
Data Generation: According to the information given in the simulation design, the MMSR model can be written as follows:
$$ \begin{pmatrix} \mathbf{y}_1 \\ \mathbf{y}_2 \\ \mathbf{y}_3 \end{pmatrix} = \begin{pmatrix} \mathbf{x}_1 & \mathbf{x}_2 & \mathbf{x}_3 \end{pmatrix} \begin{pmatrix} \boldsymbol{\alpha}_1 \\ \boldsymbol{\alpha}_2 \\ \boldsymbol{\alpha}_3 \end{pmatrix} + \sum_{j=1}^{q=2} g_j(t_{ji}) + \boldsymbol{\varepsilon}_i, \qquad 1 \le i \le n $$
where $\mathbf{X} = (\mathbf{x}_1, \mathbf{x}_2, \mathbf{x}_3)$ is an $n \times 9$-dimensional design matrix and each $\mathbf{x}_i$ is generated from a uniform distribution, that is, $\mathbf{x}_i \sim U(\mathbf{0}_{n\times 3}, \mathbf{1}_{n\times 3})$. The vector of regression coefficients $\boldsymbol{\alpha} = (\boldsymbol{\alpha}_1, \boldsymbol{\alpha}_2, \boldsymbol{\alpha}_3)$ is defined in (i) above.
$g_1(t_1)$ and $g_2(t_2)$ are computed by using $t_1 = \{(i - 0.5)/n\}_{i=1}^{n}$ and $t_2 \sim U(-2, 2)$ as follows:
$$ g_1(t_1) = 1 - 48\,t_1 + 218\,t_1^2 - 315\,t_1^3 + 145\,t_1^4, \qquad g_2(t_2) = \sin(2 t_2) + 2 e^{-16 t_2^2} $$
Finally, the random error terms $\boldsymbol{\varepsilon}_i$ are independent and identically distributed from the multivariate normal distribution $\boldsymbol{\varepsilon}_i \sim \mathrm{MN}(\mathbf{0}, \Sigma_{\varepsilon}^2)$ for the three models.
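A minimal Python sketch of this data-generating step is given below. The exact error covariance Σ_ε, the use of a single n × 3 uniform design block shared by the three responses, and the way g_1(t_1) + g_2(t_2) enters each response are simplifying assumptions made for illustration, since the article does not spell these details out.

```python
import numpy as np

def g1(t1):
    # g1(t1) = 1 - 48 t1 + 218 t1^2 - 315 t1^3 + 145 t1^4
    return 1 - 48 * t1 + 218 * t1**2 - 315 * t1**3 + 145 * t1**4

def g2(t2):
    # g2(t2) = sin(2 t2) + 2 exp(-16 t2^2)
    return np.sin(2 * t2) + 2 * np.exp(-16 * t2**2)

def generate_mmsr_sample(n, rng):
    alpha = np.array([[1.0, 2.0, 4.0],       # alpha_1
                      [2.0, 3.0, 5.0],       # alpha_2
                      [0.5, 1.0, 3.0]])      # alpha_3
    X = rng.uniform(0.0, 1.0, size=(n, 3))   # parametric covariates (simplified)
    t1 = (np.arange(1, n + 1) - 0.5) / n     # t1 = (i - 0.5)/n
    t2 = rng.uniform(-2.0, 2.0, size=n)      # t2 ~ U(-2, 2)
    g = g1(t1) + g2(t2)                      # nonparametric part (assumed shared)
    Sigma = 0.5 * np.eye(3) + 0.2            # hypothetical error covariance
    eps = rng.multivariate_normal(np.zeros(3), Sigma, size=n)
    Y = X @ alpha.T + g[:, None] + eps       # three correlated responses
    return X, t1, t2, Y

X, t1, t2, Y = generate_mmsr_sample(100, np.random.default_rng(3))
print(Y.shape)  # (100, 3)
```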
The results and comments regarding the simulation study are given in the tables and figures below.
Table 1 contains the estimated regression coefficients for the parametric component of the MMSR model and their 95% confidence intervals. From this, it can be said that as the sample size increases, the confidence intervals narrow. Note that this inference is supported by the boxplots in Figure 1. In addition, it can be said that the proposed model has succeeded in obtaining satisfactory estimates for the parametric component.
Figure 1 is drawn to show the convergence of the estimated regression coefficients to the true coefficients. It can be seen from the boxplots that the range of each boxplot becomes smaller as the sample size increases, which is the expected behavior. Regarding the nonparametric component of the MMSR model, Table 2 and Figure 2 are obtained. Additionally, a Mann–Whitney U test is performed to determine the statistical significance of the difference between the fitted curves and the real functions for both $g_1(t_1)$ and $g_2(t_2)$, and the results are given in Table 2 below.
When the results given in Table 2 are examined carefully, it is seen that most of the p-Values are greater than 0.05 (these are shown in bold). As can be seen from this, it is clear that in most cases, the null hypothesis claiming that there is no difference between the medians of the fitted curves and the real functions cannot be rejected. In addition, these results are confirmed by the graphs given in Figure 2. Moreover, the influence of sample size is clearly visible from the panels in Figure 2.
After the parametric and nonparametric components, Figure 3 is obtained to examine the overall residuals from the total MMSR model. It should also be noted that Figure 3 includes the scatter plots for residuals versus fitted values of the MMSR model for all sample sizes. Note that although there is some improvement in the performance of the model when n = 100, it is clear that the residuals are not randomly distributed around zero. The existence of a heteroscedasticity problem is debatable. However, it has also been observed that when the sample size is large, the residuals do not exceed the standard deviation limits.

4. Conclusions

The estimated MMSR model we obtained is a combination of estimates of the parametric and nonparametric components, and it is linear in the observations. Additionally, we found that the $100(1-\alpha)\%$ confidence interval for the parameters in a MMSR model depends on the Student's $t$ distribution, namely $t_{\alpha/2;\, np - p^*}$, and that the estimator of the parameters in a MMSR model is consistent and asymptotically normally distributed. Based on the simulation results, the influence of sample size is clearly visible, and it was also observed that when the sample size is large, the residuals do not exceed the standard deviation limits.

Author Contributions

All authors have contributed to this research article, namely conceptualization, N.C., B.L.; methodology, B.L., N.C., I.N.B. and D.A.; software, D.A., N.C., T.S.; validation, B.L., N.C., I.N.B., T.S. and D.A.; formal analysis, B.L., N.C., I.N.B. and D.A.; investigation, resource and data curation, D.A., R.R., P.W. and A.A.; writing—original draft preparation, B.L. and N.C.; writing—review and editing, B.L., N.C. and D.A.; visualization, B.L., N.C. and D.A.; supervision, I.N.B., D.A., R.R., P.W. and A.A.; project administration, N.C. and B.L.; funding acquisition, N.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Airlangga University, the Republic of Indonesia through Mandated Research (Riset Mandat) Grant with grant number: 373/UN3.14/PT/2020.

Institutional Review Board Statement

Since this study does not involve humans or animals, an Institutional Review Board Statement is not applicable.

Informed Consent Statement

Since this study does not involve humans, an Informed Consent Statement is not applicable.

Data Availability Statement

The authors confirm that there are no data available.

Acknowledgments

The authors thank Airlangga University for technical and financial support. The authors are also grateful to the editors and peer reviewers who provided comments, corrections, criticisms, and suggestions that were useful for improving the quality of this article.

Conflicts of Interest

The authors declare no conflict of interest. In addition, the funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the article, or in the decision to publish the results.

References

  1. Ana, E.; Chamidah, N.; Andriani, P.; Lestari, B. Modeling of hypertension risk factors using local linear of additive nonparametric logistic regression. J. Phys. Conf. Ser. 2019, 1397, 012067. [Google Scholar] [CrossRef]
  2. Cheruiyot, L.R. Local linear regression estimator on the boundary correction in nonparametric regression estimation. J. Statist. Theory Appl. 2020, 19, 460–471. [Google Scholar] [CrossRef]
  3. Chamidah, N.; Yonani, Y.S.; Ana, E.; Lestari, B. Identification the number of mycobacterium tuberculosis based on sputum image using local linear estimator. Bullet. Elect. Eng. Inform. (BEEI) 2020, 9, 2109–2116. [Google Scholar] [CrossRef]
  4. Cheng, M.-Y.; Huang, T.; Liu, P.; Peng, H. Bias reduction for nonparametric and semiparametric regression models. Statistica Sinica 2018, 28, 2749–2770. [Google Scholar] [CrossRef] [Green Version]
  5. Delaigle, A.; Fan, J.; Carroll, R.J. A design-adaptive local polynomial estimator for the errors-in-variables problem. J. Amer. Stat. Assoc. 2009, 104, 348–359. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  6. Francisco-Fernandez, M.; Vilar-Fernandez, J.M. Local polynomial regression estimation with correlated errors. Comm. Statist. Theory Methods 2001, 30, 1271–1293. [Google Scholar] [CrossRef] [Green Version]
  7. Benhenni, K.; Degras, D. Local polynomial estimation of the mean function and its derivatives based on functional data and regular designs. ESAIM Probab. Stat. 2014, 18, 881–899. [Google Scholar] [CrossRef]
  8. Kikechi, C.B. On local polynomial regression estimators in finite populations. Int. J. Stats. Appl. Math. 2020, 5, 58–63. [Google Scholar]
  9. Wand, M.P.; Jones, M.C. Kernel Smoothing, 1st ed.; Chapman and Hall/CRC: New York, NY, USA, 1995. [Google Scholar]
  10. Cui, W.; Wei, M. Strong consistency of kernel regression estimate. Open J. Stats. 2013, 3, 179–182. [Google Scholar] [CrossRef] [Green Version]
  11. De Brabanter, K.; de Brabanter, J.; Suykens, J.A.K.; de Moor, B. Kernel regression in the presence of correlated errors. J. Mach. Learn. Res. 2011, 12, 1955–1976. [Google Scholar]
  12. Wahba, G. Spline Models for Observational Data; SIAM: Philadelphia, PA, USA, 1990. [Google Scholar]
  13. Eubank, R.L. Nonparametric Regression and Spline Smoothing, 2nd ed.; Marcel Dekker: New York, NY, USA, 1999. [Google Scholar]
  14. Wang, Y. Smoothing Splines: Methods and Applications; Taylor & Francis Group: Boca Raton, FL, USA, 2011. [Google Scholar]
  15. Liu, A.; Qin, L.; Staudenmayer, J. M-type smoothing spline ANOVA for correlated data. J. Multivar. Anal. 2010, 101, 2282–2296. [Google Scholar] [CrossRef] [Green Version]
  16. Chamidah, N.; Lestari, B.; Massaid, A.; Saifudin, T. Estimating mean arterial pressure affected by stress scores using spline nonparametric regression model approach. Commun. Math. Biol. Neurosci. 2020, 2020, 1–12. [Google Scholar]
  17. Eilers, P.H.C.; Marx, B.D. Flexible smoothing with B-splines and penalties. Statist. Sci. 1996, 11, 86–121. [Google Scholar] [CrossRef]
  18. Lu, M.; Liu, Y.; Li, C.-S. Efficient estimation of a linear transformation model for current status data via penalized splines. Stat. Meth. Medic. Res. 2020, 29, 3–14. [Google Scholar] [CrossRef]
  19. Wang, Y.; Guo, W.; Brown, M.B. Spline smoothing for bivariate data with applications to association between hormones. Stat. Sinica 2000, 10, 377–397. [Google Scholar]
  20. Yilmaz, E.; Ahmed, S.E.; Aydin, D. A-Spline regression for fitting a nonparametric regression function with censored data. Stats 2020, 3, 11. [Google Scholar] [CrossRef]
  21. Aydin, D. A comparison of the nonparametric regression models using smoothing spline and kernel regression. World Acad. Sci. Eng. Tech. 2007, 36, 253–257. [Google Scholar]
  22. Lestari, B.; Fatmawati; Budiantara, I.N.; Chamidah, N. Smoothing parameter selection method for multiresponse nonparametric regression model using spline and kernel estimators approaches. J. Phy. Conf. Ser. 2019, 1397, 012064. [Google Scholar] [CrossRef]
  23. Aydin, D.; Güneri, Ö.I.; Fit, A. Choice of bandwidth for nonparametric regression models using kernel smoothing: A simulation study. Int. J. Sci. Basic Appl. Research (IJSBAR) 2016, 26, 47–61. [Google Scholar]
  24. Osmani, F.; Hajizadeh, E.; Mansouri, P. Kernel and regression spline smoothing techniques to estimate coefficient in rates model and its application in psoriasis. Medic. J. Islamic Repub. Iran (MJIRI) 2019, 33, 90. [Google Scholar] [CrossRef]
  25. Fatmawati; Budiantara, I.N.; Lestari, B. Comparison of smoothing and truncated spline estimators in estimating blood pressures models. Int. J. Innov. Creat. Change (IJICC) 2019, 5, 1177–1199. [Google Scholar]
  26. Lestari, B.; Fatmawati; Budiantara, I.N. Spline estimator and its asymptotic properties in multiresponse nonparametric regression model. Songklanakarin J. Sci. Tech. (SJST) 2020, 42, 533–548. [Google Scholar]
  27. Mariati, M.P.A.M.; Budiantara, I.N.; Ratnasari, V. The application of mixed smoothing spline and Fourier series model in nonparametric regression. Symmetry 2021, 13, 2094. [Google Scholar] [CrossRef]
  28. Ruppert, D.; Wand, M.P.; Carroll, R.J. Semiparametric Regression; Cambridge University Press: New York, NY, USA, 2003. [Google Scholar]
  29. Heckman, N.E. Spline smoothing in a partly linear model. J. R. Stats. Soc. Ser. B. 1986, 48, 244–248. [Google Scholar] [CrossRef]
  30. Mohaisen, A.J.; Abdulhussein, A.M. Spline semiparametric regression models. J. Kufa Math. Comp. 2015, 2, 1–10. [Google Scholar]
  31. Sun, X.; You, J. Iterative weighted partial spline least squares estimation in semiparametric modeling of longitudinal data. Science in China Series A (Mathematics) 2003, 46, 724–735. [Google Scholar] [CrossRef]
  32. Chamidah, N.; Zaman, B.; Muniroh, L.; Lestari, B. Designing local standard growth charts of children in East Java province using a local linear estimator. Int. J. Innov. Creat. Change (IJICC) 2020, 13, 45–67. [Google Scholar]
  33. Aydin, D. Comparison of regression models based on nonparametric estimation techniques: Prediction of GDP in Turkey. Int. J. Math. Models Methods Appl. Sci. 2007, 1, 70–75. [Google Scholar]
  34. Aydın, D.; Ahmed, S.E.; Yılmaz, E. Estimation of semiparametric regression model with right-censored high-dimensional data. J. Stat. Comp. Simul. 2019, 89, 985–1004. [Google Scholar] [CrossRef]
  35. Gao, J.; Shi, P. M-Type smoothing splines in nonparametric and semiparametric regression models. Stat. Sinica 1997, 7, 1155–1169. [Google Scholar]
  36. Wang, Y.; Ke, C. Smoothing spline semiparametric nonlinear regression models. J. Comp. Graph. Stats. 2009, 18, 165–183. [Google Scholar] [CrossRef]
  37. Diana, R.; Budiantara, I.N.; Purhadi; Darmesto, S. Smoothing spline in semiparametric additive regression model with Bayesian approach. J. Math. Stats. 2013, 9, 161–168. [Google Scholar] [CrossRef] [Green Version]
  38. Xue, L.; Xue, D. Empirical likelihood for semiparametric regression model with missing response data. J. Multivar. Anal. 2011, 102, 723–740. [Google Scholar] [CrossRef] [Green Version]
  39. Wibowo, W.; Haryatmi, S.; Budiantara, I.N. On multiresponse semiparametric regression model. J. Math. Stats. 2012, 8, 489–499. [Google Scholar] [CrossRef] [Green Version]
  40. Li, J.; Zhang, C.; Doksum, K.A.; Nordheim, E.V. Simultaneous confidence intervals for semiparametric logistics regression and confidence regions for the multi-dimensional effective dose. Stat. Sinica 2010, 20, 637–659. [Google Scholar]
  41. Lestari, B.; Chamidah, N. Estimating regression function of multiresponse semiparametric regression model using smoothing spline. J. Southwest Jiaotong Univ. 2020, 55, 1–9. [Google Scholar]
  42. Hidayati, L.; Chamidah, N.; Budiantara, I.N. Confidence interval of multiresponse semiparametric regression model parameters using truncated spline. Int. J. Acad. Appl. Res. (IJAAR) 2020, 4, 14–18. [Google Scholar]
  43. Sahoo, P. Probability and Mathematical Statistics; University of Louisville: Lousville, KY, USA, 2013. [Google Scholar]
  44. Cramér, H.; Wold, H. Some theorems on distribution functions. J. London Math. Soc. 1936, 11, 290–295. [Google Scholar] [CrossRef]
  45. Sen, P.K.; Singer, J.M. Large Sample in Statistics: An Introduction with Applications; Chapman & Hall.: London, UK, 1993. [Google Scholar]
Figure 1. Boxplots of estimated regression coefficients for all sample sizes. Note that a1.1, a1.2, and a1.3 on the x-axis of the boxplot show the regression coefficients obtained from the regression of $y_1$. Similarly, a2.1, a2.2, and a2.3 show the coefficients from the $y_2$ regression, while a3.1, a3.2, and a3.3 show the coefficients from the $y_3$ regression. (a) Boxplot of estimated regression coefficients for n = 30; (b) boxplot for n = 50; (c) boxplot for n = 100.
Figure 2. Real functions and their fitted curves from the three MMSR models. (a,b) n = 30; (c,d) n = 50; (e,f) n = 100.
Figure 3. Residuals versus the vector of fitted response values (i.e., $\hat{\mathbf{y}} = (\hat{\mathbf{y}}_1, \hat{\mathbf{y}}_2, \hat{\mathbf{y}}_3)$). (a) n = 30; (b) n = 50; (c) n = 100.
Table 1. Estimated regression coefficients from the parametric component of the MMSR model, with 95% confidence intervals.

| Coefficients Vector | Lower (n = 30) | $\hat{\alpha}_j$ (n = 30) | Upper (n = 30) | Lower (n = 50) | $\hat{\alpha}_j$ (n = 50) | Upper (n = 50) | Lower (n = 100) | $\hat{\alpha}_j$ (n = 100) | Upper (n = 100) |
|---|---|---|---|---|---|---|---|---|---|
| $\boldsymbol{\alpha}_1 = (1, 2, 4)^T$ | −1.308 | −0.917 | −0.527 | −1.164 | −0.929 | −0.693 | −1.033 | −0.923 | −0.813 |
| | 1.714 | 2.114 | 2.513 | 1.895 | 2.120 | 2.346 | 1.977 | 2.089 | 2.20 |
| | 3.644 | 4.044 | 4.444 | 3.811 | 4.041 | 4.271 | 3.979 | 4.087 | 4.196 |
| $\boldsymbol{\alpha}_2 = (2, 3, 5)^T$ | −2.308 | −1.917 | −1.527 | −2.164 | −1.929 | −1.693 | −2.033 | −1.923 | −1.813 |
| | 2.714 | 3.114 | 3.513 | 2.895 | 3.120 | 3.346 | 2.977 | 3.089 | 3.200 |
| | 4.644 | 5.044 | 5.444 | 4.811 | 5.041 | 5.271 | 4.979 | 5.087 | 5.196 |
| $\boldsymbol{\alpha}_3 = (0.5, 1, 3)^T$ | −0.808 | −0.417 | −0.027 | −0.664 | −0.429 | −0.193 | −0.533 | −0.423 | −0.313 |
| | 0.714 | 1.114 | 1.513 | 0.895 | 1.120 | 1.346 | 0.977 | 1.089 | 1.200 |
| | 2.644 | 3.044 | 3.444 | 2.811 | 3.041 | 3.271 | 2.979 | 3.087 | 3.196 |
Table 2. Results of the Mann–Whitney U test for the nonparametric components.

| | | Med diff (n = 30) | p-Value (n = 30) | Med diff (n = 50) | p-Value (n = 50) | Med diff (n = 100) | p-Value (n = 100) |
|---|---|---|---|---|---|---|---|
| $\hat{y}_1$ | $g_1 - \hat{g}_1$ | 0.567 | **0.911** * | 0.566 | **0.806** * | 0.603 | **0.690** * |
| | $g_2 - \hat{g}_2$ | 0.842 | **0.830** * | 0.675 | **0.817** * | 0.675 | **0.789** * |
| $\hat{y}_2$ | $g_1 - \hat{g}_1$ | 1.005 | 0.030 | 0.644 | **0.086** * | 0.694 | 0.033 |
| | $g_2 - \hat{g}_2$ | 0.804 | 0.030 | 0.701 | 0.046 | 0.671 | 0.020 |
| $\hat{y}_3$ | $g_1 - \hat{g}_1$ | 0.568 | **0.912** * | 0.566 | **0.807** * | 0.603 | **0.689** * |
| | $g_2 - \hat{g}_2$ | 0.638 | **0.830** * | 0.675 | **0.817** * | 0.675 | **0.790** * |

*: $H_0: g_1 - \hat{g}_1 = 0$, $H_1: g_1 - \hat{g}_1 \ne 0$; $H_0: g_2 - \hat{g}_2 = 0$, $H_1: g_2 - \hat{g}_2 \ne 0$.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
