Article

Variable Selection for Additive Quantile Regression with Nonlinear Interaction Structures

1 School of Science, Beijing Information Science and Technology University, Beijing 100872, China
2 Department of Mathematics and Statistics & School of Data Science, University of North Carolina at Charlotte, Charlotte, NC 28223, USA
3 Center for Applied Statistics, School of Statistics, Renmin University of China, Beijing 100192, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(9), 1522; https://doi.org/10.3390/math13091522
Submission received: 28 March 2025 / Revised: 30 April 2025 / Accepted: 3 May 2025 / Published: 5 May 2025
(This article belongs to the Section D1: Probability and Statistics)

Abstract

In high-dimensional data analysis, main effects and interaction effects often coexist, especially when complex nonlinear relationships are present. Effective variable selection is crucial for avoiding the curse of dimensionality and enhancing the predictive performance of a model. In this paper, we introduce a nonlinear interaction structure into the additive quantile regression model and propose an innovative penalization method. This method accounts for the complexity and smoothness of the additive model and incorporates heredity constraints on main effects and interaction effects through an improved regularization algorithm under the marginality principle. We also establish the asymptotic properties of the penalized estimator and provide the corresponding excess risk. Our Monte Carlo simulations illustrate the proposed model and method, which are then applied to the analysis of Parkinson's disease rating scores and further verify the effectiveness of a novel Parkinson's disease (PD) treatment.

1. Introduction

Quantile regression [1] has become a widely used tool in both empirical studies and the theoretical analysis of conditional quantile functions. Additive models offer greater flexibility by expressing linear predictors as the sum of nonparametric functions of each covariate, which generally results in lower variance compared to fully nonparametric models. However, many practical problems require the consideration of interactions between covariates, an aspect that has been extensively explored in linear and generalized linear models [2,3], where incorporating interactions has been shown to improve prediction accuracy. Despite this, there is limited research on additive quantile regression models that explicitly account for interaction structures, particularly in the context of variable selection.
Nonparametric additive quantile regression models have been extensively studied in the literature [4,5,6,7,8], with recent advancements in the analysis of longitudinal data [9] and dynamic component modeling [10]. Specifically, Ref. [9] explored a partially linear additive model for longitudinal data within the quantile regression framework, utilizing quadratic inference functions to account for within-subject correlations. In parallel, Ref. [10] proposed a quantile additive model with dynamic component functions, introducing a penalization-based approach to identify non-dynamic components. Despite these advances, the integration of interaction effects remains relatively underexplored in the context of nonparametric additive quantile regression models. Building upon the existing literature, this paper considers an additive quantile regression model with a nonlinear interaction structure.
When $p$, the number of covariates contributing main effect and interaction effect terms, is large, many additive methods are not feasible, since their implementation requires storing and manipulating the entire $O(p^2) \times n$ design matrix; thus, variable selection becomes necessary. Refs. [11,12] proposed the component selection and smoothing operator (COSSO) and an adaptive version of the COSSO algorithm (ACOSSO), respectively, to fit the additive model with interaction structures. However, these methods violate the heredity condition, whereas [13] took the heredity constraint into account and proposed the penalty $\lambda\big(\sum_{j=1}^{p}\|f_j\|_2 + \sum_{j=1}^{p}\sum_{k=j+1}^{p}\|f_{jk}\|_2\big)$ on the empirical $L_2$ norms of the main effects and interaction effects. This method shrinks the interactions depending on the main effects that are already present in the model [9,10].
Several regularization penalties, especially the Lasso, have been widely used to shrink the coefficients of main effects and interaction effects to achieve a sparse model. However, existing methods treated interaction terms and main terms similarly and may select an interaction term without the corresponding main terms, making the resulting model difficult to interpret in practice; there is a natural hierarchy among the variables in a model with interaction structures. Refs. [14,15] proposed a two-stage Lasso method to select important main and interaction terms. However, this approach is inefficient, as the solution path in the second stage heavily depends on the selection results from the first stage. To address this issue, several regularization methods [16,17,18] have been proposed that employ special reparametrizations of regression coefficients and penalty functions under the hierarchical principle. Ref. [19] introduced strong and weak heredity constraints into models with interactions. They proposed a Lasso penalty with convex constraints that produces sparse coefficients while ensuring that strong or weak heredity is satisfied. However, existing algorithms often suffer from slow convergence, even with a moderate number of predictors. Ref. [20] proposed a group-regularized estimation method under both strong and weak heredity constraints and developed a computational algorithm that guarantees the convergence of the iterates. Ref. [21] extended the linear interaction model to a quadratic model, accounting for all two-term interactions, and proposed the Regularization Path Algorithm under the Marginality Principle (RAMP), which efficiently computes the regularization solution path while maintaining strong or weak heredity constraints.
Recent advances in Bayesian hierarchical modeling for interaction selection have been driven by innovative prior constructions. Ref. [22] developed mixture priors that link interaction inclusion probabilities to main effect strengths, automatically enforcing heredity constraints. Ref. [23] introduced structured shrinkage priors using hierarchical Laplace distributions to facilitate group-level interaction selection. Ref. [24] incorporated heredity principles into SSVS frameworks through carefully designed spike-and-slab priors. These methodological developments have enhanced the ability to capture complex dependency structures in interaction selection while preserving Bayesian advantages, with demonstrated applications across various domains, including spatiotemporal analysis.
However, all these works assumed that the effects of all covariates can be captured in a simple linear form, which does not always hold in practice. To handle this issue, several authors have proposed nonparametric and semiparametric interaction models. For example, Refs. [25,26] considered variable selection and estimation procedures for single-index models with interactions for functional data and longitudinal data, respectively. Some recent work on semiparametric single-index models with interactions includes [27,28], among others.
Driven by both practical needs and theoretical considerations, this paper makes the following contributions: (1) We propose a method for variable selection in quantile regression models that incorporates nonlinear interaction structures; (2) We modify the RAMP algorithm to extend its applicability to additive models with nonlinear interactions, which enhances its capability to adeptly handle complex nonlinear interactions while also preserving the hierarchical relationship between main effects and interaction effects throughout the variable selection process. This work addresses the challenges posed by complex nonlinear interactions while ensuring that the inherent hierarchical structure is maintained, thereby providing a robust framework for more accurate and interpretable models.
The rest of this paper is organized as follows. In Section 2, we present the additive quantile regression model with nonlinear interaction structures and discuss the properties of the oracle estimator. In Section 3, we present a sparsity-smooth penalized method fitted by a regularization algorithm under the marginality principle and derive its oracle property. In Section 4, simulation studies are provided to demonstrate the superior performance of the proposed method. We illustrate an application of the proposed method on a real dataset in Section 5, and conclusions are presented in Section 6.

2. Additive Quantile Regression with Nonlinear Interaction Structures

Suppose that $\{(Y_i, \mathbf{x}_i): i = 1, \ldots, n\}$ is an independent and identically distributed (iid) sample, where $\mathbf{x}_i = (x_{i1}, \ldots, x_{ip})^\top$ is a $p$-dimensional vector of covariates. The $\tau$th ($0 < \tau < 1$) conditional quantile of $Y_i$ given $\mathbf{x}_i$ is defined as $Q_{Y_i|\mathbf{x}_i}(\tau) = \inf\{t: F(t|\mathbf{x}_i) \ge \tau\}$, where $F(\cdot|\mathbf{x}_i)$ is the conditional distribution function of $Y_i$ given $\mathbf{x}_i$. We consider the following additive nonlinear interaction model for the conditional quantile function:
$$Q_{Y_i|\mathbf{x}_i}(\tau) = \sum_{j=1}^{p} f_j(x_{ij}) + \sum_{1 \le j < k \le p} f_{jk}(x_{ij}, x_{ik}), \qquad (1)$$
where the unknown real-valued bivariate functions $f_{jk}(x_{ij}, x_{ik})$ represent the pairwise interaction terms between $x_{ij}$ and $x_{ik}$. We assume that $f_{jk}(a, b) = f_{kj}(b, a)$ for all $a$, $b$ and all $j \neq k$, and that $x_{ij} \in [0, 1]$ for all $i$ and $j$. Let $\epsilon_i = Y_i - Q_{Y_i|\mathbf{x}_i}(\tau)$; then, $\epsilon_i$ satisfies $P(\epsilon_i \le 0 \mid \mathbf{x}_i) = \tau$, and we may also write $Y_i = \sum_{j=1}^{p} f_j(x_{ij}) + \sum_{1 \le j < k \le p} f_{jk}(x_{ij}, x_{ik}) + \epsilon_i$, where $\epsilon_i$ is the random error. For identification, we assume the $\tau$ quantile of $f_j(x_{ij})$ and of $f_{jk}(x_{ij}, x_{ik})$ for $1 \le j < k \le p$ to be zero (see [29]).
Let $\{\varphi_1, \varphi_2, \ldots\}$ denote a preselected orthonormal basis with respect to the Lebesgue measure on the unit interval. We center all the candidate functions, so the basis functions are centered as well. We approximate each function $f_j(x_{ij})$ by a linear combination of the basis functions, i.e.,
$$f_j(x_{ij}) \approx \sum_{l=1}^{L_n} \varphi_{jl}(x_{ij})\,\beta_{jl},$$
where $\{\varphi_{jl}, l = 1, \ldots, L_n\}$ are centered orthonormal bases. Let $\Psi_{i,j} = (\varphi_{j1}(x_{ij}), \ldots, \varphi_{jL_n}(x_{ij}))^\top$; the main terms can then be expressed as $f_j(x_{ij}) = \Psi_{i,j}^\top \beta_j$, where $\beta_j = (\beta_{j1}, \ldots, \beta_{jL_n})^\top$ denotes the $L_n$-dimensional vector of basis coefficients for the $j$th main term. We treat each interaction term $f_{jk}$ as a surface that can be approximated by the tensor product
$$f_{jk}(x_{ij}, x_{ik}) \approx \sum_{s=1}^{L_n} \sum_{d=1}^{L_n} \varphi_{js}(x_{ij})\,\varphi_{kd}(x_{ik})\,\beta_{jk,sd}.$$
To simplify the expression, it is computationally efficient to re-express the surface in vector notation as $f_{jk}(x_{ij}, x_{ik}) = \Phi_{i,jk}^\top \beta_{jk}$, where $\Phi_{i,jk} = \Psi_{i,j} \otimes \Psi_{i,k}$ and $\beta_{jk} = \mathrm{vec}(\Gamma_{jk})$, with $\Gamma_{jk} = [\beta_{jk,sd}]$, $s = 1, \ldots, L_n$, $d = 1, \ldots, L_n$, the matrix with elements $\beta_{jk,sd}$. Here, $\otimes$ denotes the Kronecker product. We can now express (1) as
$$Q_{Y_i|\mathbf{x}_i}(\tau) = \sum_{j=1}^{p} \Psi_{i,j}^\top \beta_j + \sum_{1 \le j < k \le p} \Phi_{i,jk}^\top \beta_{jk}. \qquad (2)$$
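The basis and tensor-product construction can be illustrated with a short sketch (ours, not the authors' code); the centered cosine basis below is an illustrative choice of orthonormal basis on $[0,1]$, not one prescribed by the paper:

```python
import numpy as np

def cosine_basis(x, L):
    """Centered orthonormal cosine basis on [0, 1]:
    phi_l(x) = sqrt(2) * cos(l * pi * x), l = 1, ..., L.
    Each phi_l integrates to zero, so the basis is centered."""
    l = np.arange(1, L + 1)
    return np.sqrt(2.0) * np.cos(np.pi * np.outer(x, l))   # shape (n, L)

def build_design(X, L):
    """Stack main-effect blocks Psi_{i,j} and pairwise interaction
    blocks Phi_{i,jk} = Psi_{i,j} (Kronecker) Psi_{i,k}, row by row."""
    n, p = X.shape
    Psi = [cosine_basis(X[:, j], L) for j in range(p)]     # p blocks of (n, L)
    main = np.hstack(Psi)                                  # (n, p*L)
    inter = []
    for j in range(p):
        for k in range(j + 1, p):
            # row-wise Kronecker product -> (n, L^2) block for f_{jk}
            inter.append(np.einsum("is,id->isd", Psi[j], Psi[k]).reshape(n, L * L))
    return np.hstack([main] + inter)                       # (n, p*L + C(p,2)*L^2)

X = np.random.default_rng(0).uniform(size=(200, 4))
Pi = build_design(X, L=5)
print(Pi.shape)  # (200, 4*5 + 6*25) = (200, 170)
```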

2.1. Oracle Estimator

For high-dimensional inference, it is often assumed that $p$ is large but that only a small fraction of the main effects and interaction terms are present in the true model. Let $\mathcal{M} = \{j: f_j \neq 0 \text{ for } 1 \le j \le p\}$ and $\mathcal{I} = \{(j,k): 1 \le j < k \le p \text{ and } f_{jk} \neq 0\}$ be the index sets of nonzero main effects and interaction effects, and let $q = |\mathcal{M}|$ and $s = |\mathcal{I}|$ be the cardinalities of $\mathcal{M}$ and $\mathcal{I}$, respectively. That is, we assume that the first $q$ of the $\beta_j$, $j = 1, \ldots, p$, are nonzero and that the remaining components are zero. Hence, we can write $\beta_0 = (\beta_{01}^\top, \mathbf{0}_{(p-q)L_n}^\top)^\top$, where $\beta_{01} = (\beta_1^\top, \ldots, \beta_q^\top)^\top$. Further, we assume that the first $s$ pairwise interaction vectors $\beta_{jk}$ corresponding to $\beta_{01}$ are nonzero and that the remaining components are zero. Thus, we write $\beta_{00} = (\beta_{001}^\top, \mathbf{0}_{\frac{p(p-1)-s(s-1)}{2}L_n^2}^\top)^\top$, where $\beta_{001} = (\beta_{(12)}^\top, \ldots, \beta_{(1s)}^\top, \ldots, \beta_{((s-1)1)}^\top, \ldots, \beta_{((s-1)s)}^\top)^\top$. Here, $\beta_{(jk)}$ denotes the component of $\beta_{jk}$ with $\{(j,k): (j,k) \in \mathcal{I}, j \in \mathcal{M}, k \in \mathcal{M}\}$ or $\{(j,k): (j,k) \in \mathcal{I};\ j \in \mathcal{M} \text{ or } k \in \mathcal{M}\}$. Let $\Pi = (\Pi_1, \ldots, \Pi_n)^\top$ be an $n \times \{pL_n + (p(p-1)/2)L_n^2\}$ design matrix with $\Pi_i = (\Psi_i^\top, \Phi_i^\top)^\top$, where $\Psi_i = (\Psi_{i,1}^\top, \ldots, \Psi_{i,p}^\top)^\top \in \mathbb{R}^{pL_n}$ and $\Phi_i = (\Phi_{i,12}^\top, \ldots, \Phi_{i,1p}^\top, \ldots, \Phi_{i,(p-1)p}^\top)^\top \in \mathbb{R}^{(p(p-1)/2)L_n^2}$. Likewise, we write $\Pi_{Ai} = (\Psi_{\mathcal{M}i}^\top, \Phi_{\mathcal{I}i}^\top)^\top \in \mathbb{R}^{qL_n + sL_n^2}$, where $\Psi_{\mathcal{M}i}$ is the subvector consisting of the first $qL_n$ elements of $\Psi_i$ corresponding to the active covariates and $\Phi_{\mathcal{I}i}$ is the subvector consisting of the first $sL_n^2$ elements of $\Phi_i$ corresponding to the active interaction covariates. We first investigate the estimator obtained when the index sets $\mathcal{M}$ and $\mathcal{I}$ are known in advance, which we refer to as the oracle estimator. Now, we consider the quantile regression with the oracle information. Let
$$(\hat\beta_{01}, \hat\beta_{001}) = \arg\min_{\beta_{01},\,\beta_{001}} \frac{1}{n} \sum_{i=1}^{n} \rho_\tau\big(Y_i - \Psi_{\mathcal{M}i}^\top \beta_{01} - \Phi_{\mathcal{I}i}^\top \beta_{001}\big). \qquad (3)$$
The oracle estimators for $\beta_0$ and $\beta_{00}$ are $(\hat\beta_{01}^\top, \mathbf{0}_{(p-q)L_n}^\top)^\top$ and $(\hat\beta_{001}^\top, \mathbf{0}_{\frac{p(p-1)-s(s-1)}{2}L_n^2}^\top)^\top$, respectively. Accordingly, the oracle estimators for the nonparametric functions $f_j(x_{ij})$ and $f_{jk}(x_{ij}, x_{ik})$ are $\hat f_j(x_{ij}) = \Psi_{\mathcal{M}i}^\top \hat\beta_{01}$ and $\hat f_{jk}(x_{ij}, x_{ik}) = \Phi_{\mathcal{I}i}^\top \hat\beta_{001}$, respectively.
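As an illustration, the oracle fit in (3) can be computed with any quantile regression solver once the active columns $\Pi_{Ai}$ are assembled; the sketch below uses the `QuantReg` routine from `statsmodels` purely as an example, not as the authors' implementation:

```python
import numpy as np
import statsmodels.api as sm

def rho_tau(u, tau):
    """Check loss: rho_tau(u) = u * (tau - I(u < 0))."""
    return u * (tau - (u < 0))

def oracle_fit(y, Pi_A, tau):
    """Quantile regression on the oracle design Pi_A, i.e., only the
    q*L_n + s*L_n^2 basis columns of the truly active effects."""
    beta = np.asarray(sm.QuantReg(y, Pi_A).fit(q=tau).params)
    loss = rho_tau(y - Pi_A @ beta, tau).mean()   # in-sample check loss
    return beta, loss
```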

2.2. Asymptotic Properties

We next present the asymptotic properties of the oracle estimators. Similar to [13], we assume that all true main effects belong to a Sobolev space of order two, $\sum_{l=1}^{\infty} \beta_{jl}^2\, l^4 < C^2$, and impose the same requirement on each true interaction function: $\sum_{s=1}^{\infty} \beta_{jk,sd}^2\, s^4 < C^2$ and $\sum_{d=1}^{\infty} \beta_{jk,sd}^2\, d^4 < C^2$ for each $j, k$, $1 \le j < k \le p$, with $C$ some constant.
To establish the asymptotic properties of the estimators, the following regularity conditions are needed:
(A1)
The distribution of $\mathbf{x}$ is absolutely continuous with density $f(\mathbf{x})$, which is bounded away from 0;
(A2)
The conditional distribution $F_{\varepsilon|\mathbf{x}}(\cdot|\mathbf{x})$ of the error $\varepsilon$, given $\mathbf{x}$, has a density $f_{\varepsilon|\mathbf{x}}(\cdot|\mathbf{x})$, which satisfies the following two conditions:
(1)
$\sup_{\varepsilon,\mathbf{x}} f_{\varepsilon|\mathbf{x}}(\varepsilon|\mathbf{x}) < \infty$;
(2)
There exist positive constants $b_1$ and $b_2$ such that $\inf_{\mathbf{x}} \inf_{|\varepsilon| < b_1} f_{\varepsilon|\mathbf{x}}(\varepsilon|\mathbf{x}) \ge b_2$;
(A3)
The basis functions $\varphi_1, \ldots, \varphi_{L_n}$ satisfy the following:
(1)
$L_n \asymp n^{1/(2r+1)}$, $r > 1/2$;
(2)
$\sup_{\mathbf{x}\in[0,1]^p} \|\Pi\|_2 = O(\sqrt{L_n})$, and the eigenvalues of $H_n = \frac{1}{n}\sum_{i=1}^{n} \Pi_i \Pi_i^\top$ are uniformly bounded away from 0 and $\infty$;
(3)
There is a vector $\gamma = (\beta_0^\top, \beta_{00}^\top)^\top$ such that $\sup_{\mathbf{x}\in[0,1]^p} |m(\mathbf{x}) - \Pi^\top\gamma| = O(L_n^{-r})$, where $m(\mathbf{x}) = \sum_{j=1}^{p} f_j(x_j) + \sum_{1\le j<k\le p} f_{jk}(x_j, x_k)$ and $\mathbf{x}_j = (x_{1j}, \ldots, x_{nj})^\top$;
(A4)
$q = O(n^{C_1})$ for some $C_1 < \frac{1}{3}$.
Note that (A1) and (A2) are typical assumptions in nonparametric quantile regression (see [30]). Condition (3) in (A3) requires that the true component functions $f_j(x_j)$ and $f_{jk}(x_j, x_k)$ can be uniformly well approximated by the bases $\varphi_{jl}(x)$, $l = 1, \ldots, L_n$, and $\varphi_{js}(x)\varphi_{kd}(x)$, $s, d = 1, \ldots, L_n$. Finally, Condition (A4) controls the model size.
Theorem 1. 
Let $\gamma_{01} = (\beta_{01}^\top, \beta_{001}^\top)^\top$. Assume that Conditions (A1)–(A4) hold. Then, the following are true:
(a) 
$\|\hat\gamma_{01} - \gamma_{01}\|_2 = o_p(n^{-1/2} L_n)$;
(b) 
$\|\hat m(\mathbf{x}) - m(\mathbf{x})\|_{L_2} = O\big(n^{-(2r-1)/(2(2r+1))}\big)$, where $m(\mathbf{x}_i) = \sum_{j=1}^{p} f_j(x_{ij}) + \sum_{1\le j<k\le p} f_{jk}(x_{ij}, x_{ik})$.
Theorem 1 summarizes the rate of convergence of the oracle estimator, which matches the optimal convergence rate in the additive model [30], owing to the additive structure. The proof of Theorem 1 can be found in Appendix A.1.

3. Penalized Estimation for Additive Quantile Regression with Nonlinear Interaction Structures

3.1. Penalized Estimator

Model (2) contains a large number of main effects $f_j(x_{ij})$ and interaction effects $f_{jk}(x_{ij}, x_{ik})$, which significantly increases the computational burden, especially when $p$ (the number of features) is large. Penalty methods are therefore necessary to ensure computational efficiency and prevent overfitting. Applying sparsity penalties (such as the Lasso or group Lasso) to each $f_j$ can help to automatically select important variables and reduce the complexity of the model.
However, when a large number of basis functions is used to fit complex relationships, a simple sparsity penalty may oversimplify the model, ignore potential patterns in the data, and introduce unstable fluctuations. Therefore, in addition to sparsity penalties, it is also necessary to introduce smoothness penalties (such as second-derivative penalties) to avoid drastic fluctuations in the fitted functions and ensure smooth and stable fits. This helps to control model complexity, reduce overfitting, and improve the model's generalization ability.
Therefore, we minimize the following penalized objective function for $(\beta_0, \beta_{00})$:
$$\frac{1}{n}\sum_{i=1}^{n} \rho_\tau\Big(Y_i - \sum_{j=1}^{p} \Psi_{i,j}^\top \beta_j - \sum_{1\le j<k\le p} \Phi_{i,jk}^\top \beta_{jk}\Big) + P(f), \qquad (4)$$
where
$$P(f) = \lambda_1 \Bigg[\sum_{j=1}^{p} \Big(\|f_j(x_j)\|_n^2 + \lambda_2 J_2^1\big(f_j(x_j)\big)\Big)^{1/2} + \sum_{1\le j<k\le p} \Big(\|f_{jk}(x_j, x_k)\|_n^2 + \lambda_2 J_2^1\big(f_{jk}(x_j, x_k)\big)\Big)^{1/2}\Bigg]$$
is a sparsity-smooth penalty function whose first and second terms penalize the main and interaction effects, respectively. We employ $\|f\|_n^2 = \frac{1}{n}\sum_{i=1}^{n} f_i^2$ as a sparsity penalty, which encourages sparsity at the function level, and $J_2^1(f) = \int f''(x)^2\, dx$ as a roughness penalty, which controls the model's complexity by ensuring that the estimated function remains smooth while fitting the data, thus preventing the high-frequency fluctuations caused by an excessive number of basis functions. The two tuning parameters $\lambda_1, \lambda_2 \ge 0$ control the amount of penalization.
It can be shown that $J_2^1(f_j(x_j)) = \beta_j^\top \Omega_j \beta_j$, where $\Omega_j$ is a band-diagonal matrix of known coefficients whose $(l_1, l_2)$th element is $\Omega_{j,l_1,l_2} = \int \varphi''_{jl_1}(x_j)\, \varphi''_{jl_2}(x_j)\, dx$, $l_1, l_2 \in \{1, \ldots, L_n\}$; see [31] for details. According to [32], the penalty $J_2^1(f_{jk}(x_j, x_k))$ can be represented as
$$\beta_{jk}^\top \big(\Lambda_{jj} \otimes I_{L_n} + I_{L_n} \otimes \Lambda_{kk}\big) \beta_{jk} \equiv \beta_{jk}^\top \Lambda_{jk} \beta_{jk},$$
where $\Lambda_{jj} = \int f''_{x_j}{}^2\, dx_j$ and $\Lambda_{kk} = \int f''_{x_k}{}^2\, dx_k$, with $f''_{x_j}$ and $f''_{x_k}$ the second derivatives of $f(x_j, x_k)$ with respect to $x_j$ and $x_k$, respectively. Hence, the penalty function can be rewritten as
$$P(f) = \lambda_1 \Big(\sum_{j=1}^{p} \sqrt{\beta_j^\top M_j \beta_j} + \sum_{1\le j<k\le p} \sqrt{\beta_{jk}^\top K_{jk} \beta_{jk}}\Big), \qquad (5)$$
where $M_j = \frac{1}{n}\psi_j^\top \psi_j + \lambda_2 \Omega_j$ and $K_{jk} = \frac{1}{n}\phi_{jk}^\top \phi_{jk} + \lambda_2 \Lambda_{jk}$, with $\psi_j$ denoting the $n \times L_n$ matrix with $(i, l)$th entry $\varphi_{jl}(x_{ij})$, and $\phi_{jk}$ the $n \times L_n^2$ matrix whose $i$th row is $\Phi_{i,jk}^\top$, i.e., with entries $\varphi_{js}(x_{ij})\varphi_{kd}(x_{ik})$, $s, d = 1, \ldots, L_n$. Similar to [33], we decompose $M_j = R_j^\top R_j$ and $K_{jk} = Q_{jk}^\top Q_{jk}$ for some square $L_n \times L_n$ matrix $R_j$ and $L_n^2 \times L_n^2$ matrix $Q_{jk}$. Then, Model (4) can be represented as
$$\frac{1}{n}\sum_{i=1}^{n} \rho_\tau\Big(Y_i - \sum_{j=1}^{p} \Psi_{i,j}^\top \beta_j - \sum_{1\le j<k\le p} \Phi_{i,jk}^\top \beta_{jk}\Big) + \lambda_1\Big(\sum_{j=1}^{p} \|R_j \beta_j\|_2 + \sum_{1\le j<k\le p} \|Q_{jk} \beta_{jk}\|_2\Big). \qquad (6)$$
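For illustration, the matrices $M_j$ and their square-root factors $R_j$ in (6) can be computed as in the following sketch; the diagonal form of $\Omega_j$ is specific to the cosine basis of the earlier sketch and is our assumption, not part of the paper:

```python
import numpy as np

def M_matrix(psi_j, lam2):
    """M_j = psi_j' psi_j / n + lam2 * Omega_j.  For the cosine basis of the
    earlier sketch, Omega_j is diagonal with entries (l*pi)^4, since
    int (phi_l'')^2 = (l*pi)^4 for phi_l(x) = sqrt(2) * cos(l*pi*x)."""
    n, L = psi_j.shape
    Omega = np.diag((np.arange(1, L + 1) * np.pi) ** 4)
    return psi_j.T @ psi_j / n + lam2 * Omega

def sqrt_factor(M):
    """R with M = R' R, obtained from the Cholesky factor of the SPD M."""
    return np.linalg.cholesky(M).T

def group_penalty(R_list, betas, lam1):
    """lam1 * sum_j ||R_j beta_j||_2, the group term of objective (6)."""
    return lam1 * sum(np.linalg.norm(R @ b) for R, b in zip(R_list, betas))
```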
However, the above penalty treats the main and interaction effects similarly. The entry of an interaction into the model generally adds more predictors than the entry of a main effect, which demands a high computational cost and makes models with many interactions hard to interpret. To deal with this difficulty, we propose to add a set of heredity restrictions that produce sparse interaction models in which an interaction is included only if one or both of its parent variables are marginally important.
Naturally, $\mathcal{T} \subseteq \mathcal{S}^2$, where $\mathcal{S}$ and $\mathcal{T}$ denote the true active sets of main effects and interaction effects. Our target is to estimate $\mathcal{S}$ and $\mathcal{T}$ from the data consistently or sign-consistently, as in Theorem 1 of [21].
There are two types of heredity restrictions, which are called the strong and weak hierarchy:
$$\text{Strong hierarchy:}\quad \hat\beta_{jk} \neq 0 \;\Rightarrow\; \hat\beta_j \neq 0 \text{ and } \hat\beta_k \neq 0;$$
$$\text{Weak hierarchy:}\quad \hat\beta_{jk} \neq 0 \;\Rightarrow\; \max\{\|\hat\beta_j\|, \|\hat\beta_k\|\} \neq 0.$$
To ensure the strong and weak hierarchy conditions for $\hat\beta_{jk}$ and $\hat\beta_j, \hat\beta_k$, we construct the following penalized objective to enforce the strong hierarchy:
$$Q_P(\beta_j, \beta_{jk}) = \frac{1}{n}\sum_{i=1}^{n} \rho_\tau\Big(Y_i - \sum_{j=1}^{p} \Psi_{i,j}^\top \beta_j - \sum_{1\le j<k\le p} \Phi_{i,jk}^\top \beta_{jk}\Big) + \lambda_1\Bigg[\sum_{j=1}^{p} \|R_j\beta_j\|_2 + \sum_{1\le k<j\le p} I_{\{\beta_{jk}\neq 0\}}\big(\|R_j\beta_j\|_2^2 + \|R_k\beta_k\|_2^2\big)^{1/2} + \sum_{1\le k<j\le p} \|Q_{jk}\beta_{jk}\|_2\Bigg],$$
and the following to enforce the weak hierarchy:
$$Q_P(\beta_j, \beta_{jk}) = \frac{1}{n}\sum_{i=1}^{n} \rho_\tau\Big(Y_i - \sum_{j=1}^{p} \Psi_{i,j}^\top \beta_j - \sum_{1\le j<k\le p} \Phi_{i,jk}^\top \beta_{jk}\Big) + \lambda_1\Bigg[\sum_{j=1}^{p} \|R_j\beta_j\|_2 + \sum_{1\le k<j\le p} I_{\{\beta_{jk}\neq 0\}}\min\big(\|R_j\beta_j\|_2^2, \|R_k\beta_k\|_2^2\big)^{1/2} + \sum_{1\le k<j\le p} \|Q_{jk}\beta_{jk}\|_2\Bigg].$$
These structures ensure hierarchy through an indicator-based penalty. Specifically, the indicator function $I_{\{\beta_{jk}\neq 0\}}$ activates the penalty only when $\beta_{jk} \neq 0$. In the strong form, the extra term couples the interaction to both parent norms, so both corresponding main effects must be present if the interaction term is nonzero. In the weak form, the term $\min\big(\|R_j\beta_j\|_2^2, \|R_k\beta_k\|_2^2\big)^{1/2}$ applies the penalty to the smaller of the two norms, so at least one of the corresponding main effects must be present for an interaction term to be included.
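A small sketch showing how the strong- and weak-heredity penalty terms above can be evaluated for given coefficient blocks (illustrative only; the optimization itself is handled by the algorithm of Section 3.2):

```python
import numpy as np

def heredity_penalty(beta_main, beta_int, R, Q, lam1, mode="strong"):
    """Heredity-inducing penalty term of Q_P.  beta_main: {j: beta_j},
    beta_int: {(j, k): beta_jk}, R: {j: R_j}, Q: {(j, k): Q_jk}."""
    pen = sum(np.linalg.norm(R[j] @ beta_main[j]) for j in beta_main)
    for (j, k), bjk in beta_int.items():
        pen += np.linalg.norm(Q[(j, k)] @ bjk)
        if np.any(bjk != 0):                      # indicator I(beta_jk != 0)
            nj = np.linalg.norm(R[j] @ beta_main[j]) ** 2
            nk = np.linalg.norm(R[k] @ beta_main[k]) ** 2
            pen += np.sqrt(nj + nk) if mode == "strong" else np.sqrt(min(nj, nk))
    return lam1 * pen
```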

3.2. Algorithm

The regularization path algorithm under the marginality principle (RAMP, [21]) is a method for variable selection in high-dimensional quadratic regression models, designed to maintain the hierarchical structure between main effects and interaction effects during the selection process. However, the RAMP algorithm focuses on linear interaction terms (including interactions of variables with themselves), making it unsuitable for directly handling nonlinear interaction terms.
To address this limitation, we modify the RAMP algorithm, extending its application to additive models that include nonlinear interaction terms. With this enhancement, we can not only handle complex nonlinear interactions but also preserve the hierarchical structure between main effects and interaction effects during variable selection. The detailed steps of the modified algorithm are as follows.
Let $\mathcal{P} = \{1, 2, \ldots, p\}$ and $\mathcal{Q} = \{(j,k): 1 \le j \le k \le p\}$ be the index sets for main effects and interaction effects, respectively. For an index set $\mathcal{A} \subseteq \mathcal{P}$, define $\mathcal{A}^2 = \mathcal{A} \otimes \mathcal{A} = \{(j,k): j \le k;\ j, k \in \mathcal{A}\} \subseteq \mathcal{Q}$ and $\mathcal{A} \otimes \mathcal{P} = \{(j,k): j \le k;\ j \in \mathcal{A} \text{ or } k \in \mathcal{A}\} \subseteq \mathcal{Q}$.
We fix a sequence of values $\{\lambda_{1,s}\}_{s=1}^{S}$ between $\lambda_{1,\max}$ and $\lambda_{1,\min}$. Following [21], we set $\lambda_{1,\max} = n^{-1}\max|X^\top y|$ and $\lambda_{1,\min} = \zeta\lambda_{1,\max}$ for some small $\zeta > 0$. At step $s-1$, we denote the current active main effect set by $\mathcal{P}_{s-1}$ and the active interaction effect set by $\mathcal{Q}_{s-1}$. Define $\mathcal{H}_{s-1}$ as the parent set of $\mathcal{Q}_{s-1}$, which contains the main effects that have at least one interaction effect in $\mathcal{Q}_{s-1}$, and set $\mathcal{H}_{s-1}^c = \mathcal{P} \setminus \mathcal{H}_{s-1}$. Then, the algorithm proceeds as detailed below:
  • Step 1. Generate a decreasing sequence $\lambda_{1,\max} = \lambda_{1,1} > \lambda_{1,2} > \cdots > \lambda_{1,S} = \lambda_{1,\min}$, and set $\lambda_{2,s} = \lambda_{1,s}^2$ for $s = 1, \ldots, S$.
    We use the warm-start strategy, in which the solution for step $s-1$ is used as the starting value for step $s$;
  • Step 2. Given $\mathcal{P}_{s-1}$, $\mathcal{Q}_{s-1}$, and $\mathcal{H}_{s-1}$, add the possible interactions among the main effects in $\mathcal{P}_{s-1}$ to the current model. Then, minimize the penalized loss function
    $$\frac{1}{n}\sum_{i=1}^{n} \rho_\tau\Big(Y_i - \sum_{j\in\mathcal{P}} \Psi_{i,j}^\top \beta_j - \sum_{(j,k)\in\mathcal{P}_{s-1}^2} \Phi_{i,jk}^\top \beta_{jk}\Big) + \lambda_{1,s}\Big(\sum_{j} I_{\{j\in\mathcal{H}_{s-1}^c\}} \|R_j\beta_j\|_2 + \sum_{(j,k)} I_{\{(j,k)\in\mathcal{P}_{s-1}^2\}} \|Q_{jk}\beta_{jk}\|_2\Big),$$
    where the penalty is imposed on the candidate interaction effects and on $\mathcal{H}_{s-1}^c$, which contains the main effects not enforced by the strong heredity constraint. The penalty encourages smoothness and sparsity in the estimated functional components;
  • Step 3. Record $\mathcal{P}_s$, $\mathcal{Q}_s$, and $\mathcal{H}_s$ according to the above solution. Add the parent main effects of $\mathcal{Q}_s$ to $\mathcal{P}_s$;
  • Step 4. Calculate the quantile estimate based on the current model:
    $$(\hat\beta_j^{(s)}, \hat\beta_{jk}^{(s)}) = \arg\min \frac{1}{n}\sum_{i=1}^{n} \rho_\tau\Big(Y_i - \sum_{j\in\mathcal{P}_s} \Psi_{i,j}^\top \beta_j - \sum_{(j,k)\in\mathcal{Q}_s} \Phi_{i,jk}^\top \beta_{jk}\Big);$$
  • Step 5. Repeat Steps 2–4 for $s = 1, \ldots, S$ and determine the active sets $\mathcal{M}_s$ and $\mathcal{I}_s$ according to the $\lambda_s$ that minimizes the GIC.
Ref. [34] proposed the following GIC criterion:
$$GIC(\lambda_{1,s}) = \log\Big(\sum_{i=1}^{n} \rho_\tau\big(Y_i - \sum_{j\in\mathcal{P}_s} \Psi_{i,j}^\top \hat\beta_j^{(s)} - \sum_{(j,k)\in\mathcal{Q}_s} \Phi_{i,jk}^\top \hat\beta_{jk}^{(s)}\big)\Big) + C_n\, \kappa_s\, df_\lambda,$$
where $\hat\beta_j^{(s)}$ and $\hat\beta_{jk}^{(s)}$ are the penalized estimators obtained in Step 4, $\kappa_s = |\mathcal{P}_s| + |\mathcal{Q}_s|$ is the cardinality of the index set of nonzero coefficients among the main and interaction effects, $df_\lambda$ represents the degrees of freedom of the model, and $C_n$ is a sequence of positive constants diverging to infinity as $n$ increases. We take $C_n = \log(\log(n))$ in our simulation studies and real data analysis.
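For concreteness, a minimal sketch of the GIC computation (with $C_n = \log(\log(n))$, as used here):

```python
import numpy as np

def gic(y, design, beta_hat, tau, kappa, df):
    """GIC(lambda_{1,s}) = log( sum_i rho_tau(residual_i) ) + C_n * kappa * df,
    with C_n = log(log(n)); `design` holds the basis columns of the
    currently active main and interaction effects."""
    n = len(y)
    resid = y - design @ beta_hat
    loss = np.sum(resid * (tau - (resid < 0)))   # summed check loss
    return np.log(loss) + np.log(np.log(n)) * kappa * df
```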
Following the same strategy, under the weak heredity condition we use the set $\mathcal{P}_{s-1} \otimes \mathcal{P}$ instead of $\mathcal{P}_{s-1}^2$ and solve the following optimization problem with respect to $\beta_j$ and $\beta_{jk}$:
$$\frac{1}{n}\sum_{i=1}^{n} \rho_\tau\Big(Y_i - \sum_{j\in\mathcal{P}} \Psi_{i,j}^\top \beta_j - \sum_{(j,k)\in\mathcal{P}_{s-1}\otimes\mathcal{P}} \Phi_{i,jk}^\top \beta_{jk}\Big) + \lambda_{1,s}\Big(\sum_{j} I_{\{j\in\mathcal{H}_{s-1}^c\}} \|R_j\beta_j\|_2 + \sum_{(j,k)} I_{\{(j,k)\in\mathcal{P}_{s-1}\otimes\mathcal{P}\}} \|Q_{jk}\beta_{jk}\|_2\Big).$$
That is, an interaction effect can enter the model for selection if at least one of its parents was selected in a previous step.
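The overall procedure can be summarized in the following skeleton; `penalized_fit`, `refit`, and `gic_value` are hypothetical placeholder names for the solvers sketched above, not functions from any library:

```python
import numpy as np

def modified_ramp(y, X, tau, penalized_fit, refit, gic_value,
                  S=50, zeta=1e-3, heredity="strong"):
    """Path skeleton of the modified RAMP under strong or weak heredity.
    The three callables are placeholders (hypothetical):
      penalized_fit(y, X, cand, H, tau, lam1, lam2) -> (P_act, Q_act),
      refit(y, X, P_act, Q_act, tau)                -> Step 4 fit,
      gic_value(fit, P_act, Q_act)                  -> GIC of Step 5."""
    n, p = X.shape
    lam_max = np.max(np.abs(X.T @ y)) / n                  # lambda_{1,max}
    lambdas = np.geomspace(lam_max, zeta * lam_max, S)     # Step 1
    P_act, H = set(), set()
    path = []
    for lam1 in lambdas:                                   # warm starts in practice
        if heredity == "strong":                           # candidates: P_{s-1}^2
            cand = {(j, k) for j in P_act for k in P_act if j < k}
        else:                                              # weak: P_{s-1} (x) P
            cand = {(j, k) for j in range(p) for k in range(j + 1, p)
                    if j in P_act or k in P_act}
        P_act, Q_act = penalized_fit(y, X, cand, H, tau, lam1, lam1 ** 2)  # Step 2
        H = {j for jk in Q_act for j in jk}                # parent set H_s
        P_act = set(P_act) | H                             # Step 3: parents enter
        fit = refit(y, X, P_act, Q_act, tau)               # Step 4
        path.append((gic_value(fit, P_act, Q_act), P_act, Q_act, fit))
    return min(path, key=lambda t: t[0])                   # Step 5: minimize GIC
```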

3.3. Asymptotic Theory

Let $\Xi_n(\lambda)$ be the set of local minima of $Q_P(\beta_j, \beta_{jk})$. The following result shows that, with probability approaching one, the oracle estimator belongs to the set $\Xi_n(\lambda)$.
Theorem 2. 
Assume that Conditions (A1)–(A4) are satisfied and consider the penalized objective under strong heredity. Let $\hat\gamma = (\hat\beta_0^\top, \hat\beta_{00}^\top)^\top$ be the oracle estimator. If $\lambda_1 = o(1)$ and $n^{-1}L_n^3\lambda_1^{-1} \to 0$ as $n \to \infty$, then
$$P\big(\hat\gamma \in \Xi_n(\lambda)\big) \to 1, \quad \text{as } n \to \infty.$$
Remark 1. 
The penalty level $\lambda_1$ decays to zero as $n \to \infty$, allowing the oracle estimator to dominate. The rate condition $n^{-1}L_n^3\lambda_1^{-1} \to 0$ guarantees that the approximation error from the basis expansion with $L_n$ terms is controlled by the penalty strength.
The proof of Theorem 2 can be found in Appendix A.2.
Define $R_n(m) = \frac{1}{n}\sum_{i=1}^{n} \rho_\tau\big(Y_i - m(\mathbf{x}_i)\big)$ and $R(m) = E\big(\rho_\tau(Y - m(\mathbf{x}))\big)$ as the empirical risk and predictive risk under quantile loss, respectively, and let
$$\mathcal{M}_n = \Big\{m: m(\mathbf{x}) = \sum_{j=1}^{p} \psi_j(x_j)^\top \beta_j + \sum_{1\le j<k\le p} \phi_{jk}(x_j, x_k)^\top \beta_{jk}:\ E(\psi_j) = 0,\ E(\phi_{jk}) = 0,\ E(\psi_j^2) = 1,\ E(\phi_{jk}^2) = 1\Big\}$$
be a functional class. Let $\tilde m = \arg\min_{m\in\mathcal{M}_n} R(m)$ denote the predictive oracle, i.e., the minimizer of the predictive risk over $\mathcal{M}_n$, and let $\hat m$ denote the minimizer of Equation (6) over $\mathcal{M}_n$. Following [35], we say that an estimator $\hat m$ is persistent (risk consistent) relative to a class of functions $\mathcal{M}_n$ if $R(\hat m) - R(\tilde m) \xrightarrow{P} 0$. Then, we have the following result.
Theorem 3. 
Under Conditions (A1)–(A4), if $p = o(n)$, then, for some constant $t > 0$,
$$P\Big(R(\hat m) - R(\tilde m) \le \big(2\sqrt{2}\,t^2 + 10t^2/3\big)\, n^{-(2r-1)/(2(2r+1))}\Big) \ge 1 - \exp(-nt^2).$$
The above theorem establishes the convergence rate in terms of the excess risk of the estimator, which shows the prediction accuracy and consistency of the proposed estimator. The proof of Theorem 3 can be found in Appendix A.3.

4. Simulation Studies

We conduct extensive Monte Carlo simulations to assess the performance of our proposed method in comparison with two established approaches, hirNet [19] and RAMP [21], across five quantile levels: $\tau \in \{0.1, 0.25, 0.5, 0.75, 0.90\}$. In each simulation scenario, we generate 500 independent training datasets with two sample sizes, $n = 100$ and $n = 300$, and $p = 15$ main effects, resulting in a total of $15 + 15 \times 14/2 = 120$ potential terms (15 main effects and 105 pairwise interactions). All methods are implemented using pseudo-spline basis functions to ensure a consistent comparison. For the regularization parameters, we adopt the relationship $\lambda_2 = \lambda_1^2$ based on [13], which provides superior empirical performance.
The covariates $x_{ij}$ are generated independently from a uniform distribution $U(0, 1)$. We specify five nonlinear main effect functions that are standardized before model fitting:
$$f_1(x) = 10x, \qquad f_2(x) = \frac{1}{1+x} + 20x,$$
$$f_3(x) = 20\sin(x), \qquad f_4(x) = 20\exp(x), \qquad f_5(x) = 10x^2.$$
Each $f_j$ is standardized, and the interaction functions are generated by multiplying the standardized main effects together:
$$f_{jk}(x_j, x_k) = f_j(x_j) \times f_k(x_k), \qquad 1 \le j < k \le p.$$
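For reference, a short sketch of these standardized main-effect functions (the functional forms follow the display above, as reconstructed from the design description):

```python
import numpy as np

def standardize(v):
    """Center and scale a vector of main-effect evaluations."""
    return (v - v.mean()) / v.std()

# the five nonlinear main-effect functions of the simulation design
mains = [lambda x: 10 * x,
         lambda x: 1 / (1 + x) + 20 * x,
         lambda x: 20 * np.sin(x),
         lambda x: 20 * np.exp(x),
         lambda x: 10 * x ** 2]
```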
The response variables are generated through quantile-specific models that incorporate either strong or weak heredity constraints. Under strong heredity, interactions are only included when all parent main effects are active, while weak heredity requires just one parent main effect.
Example 1. 
Strong heredity. Building on the quantile-specific sparsity framework established by [36], we incorporate strong heredity constraints into our modeling approach. This integration ensures two key aspects: (1) the set of active predictors varies depending on the quantile, and (2) all selected variables preserve strict hierarchical relationships. The resulting nonlinear data-generating process is as follows:
  • $\tau = 0.1$ (Sparse):
    $Y_i = f_1(x_{i1}) + f_2(x_{i2}) + f_3(x_{i3}) + \varepsilon_i$.
  • $\tau = 0.25, 0.50, 0.75$ (Dense):
    $Y_i = f_1(x_{i1}) + f_2(x_{i2}) + f_3(x_{i3}) + f_4(x_{i4}) + f_5(x_{i5}) + f_{12}(x_{i1}, x_{i2}) + f_{13}(x_{i1}, x_{i3}) + \varepsilon_i$.
  • $\tau = 0.9$ (Sparse):
    $Y_i = f_1(x_{i1}) + f_3(x_{i3}) + f_5(x_{i5}) + \varepsilon_i$.
Example 2. 
Weak heredity. In contrast to strong heredity, we also incorporate weak heredity constraints into our modeling approach by extending the quantile-specific sparsity framework. The resulting nonlinear data-generating process is as follows:
  • $\tau \in \{0.1, 0.9\}$ (Sparse):
    $Y_i = f_1(x_{i1}) + f_2(x_{i2}) + f_3(x_{i3}) + \varepsilon_i$.
  • $\tau = 0.25, 0.50, 0.75$ (Dense):
    $Y_i = f_1(x_{i1}) + f_2(x_{i2}) + f_3(x_{i3}) + f_{14}(x_{i1}, x_{i4}) + f_{15}(x_{i1}, x_{i5}) + \varepsilon_i$.
To assess robustness across different error conditions, we consider four error distributions (a data-generating sketch follows this list):
1.
Gaussian: the error terms $\varepsilon_i$ follow a standard normal distribution, i.e., $\varepsilon_i \sim N(0, 1)$;
2.
Heavy-tailed: the error terms $\varepsilon_i$ follow a Student's $t$-distribution with 3 degrees of freedom, i.e., $\varepsilon_i \sim t(3)$;
3.
Skewed: the error terms $\varepsilon_i$ follow a chi-squared distribution with 2 degrees of freedom, i.e., $\varepsilon_i \sim \chi^2(2)$;
4.
Heteroscedastic [37]: the error terms $\varepsilon_i$ are heteroscedastic and modeled as follows:
  • $\tau = 0.1$: $\varepsilon_i = (x_{i1} + x_{i2})\, e_i$;
  • $\tau \in \{0.25, 0.50, 0.75\}$: $\varepsilon_i = (x_{i1} + x_{i5})\, e_i$;
  • $\tau = 0.9$ (strong heredity): $\varepsilon_i = (x_{i1} + x_{i3})\, e_i$;
  • $\tau = 0.9$ (weak heredity): $\varepsilon_i = (x_{i1} + x_{i4})\, e_i$.
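A compact data-generating sketch for one strong-heredity scenario, reusing `standardize` and `mains` from the previous sketch; the innovation $e_i$ in the heteroscedastic case is taken here to be standard normal, which is our assumption since the text leaves $e_i$ unspecified:

```python
import numpy as np

def simulate(n=300, p=15, scenario="dense", error="gaussian", seed=0):
    """One strong-heredity training set; the dense case is the DGP used
    at tau = 0.25, 0.50, 0.75."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(size=(n, p))
    F = np.column_stack([standardize(f(X[:, j])) for j, f in enumerate(mains)])
    if scenario == "dense":
        signal = F.sum(axis=1) + F[:, 0] * F[:, 1] + F[:, 0] * F[:, 2]  # f12, f13
    else:                                 # sparse case, tau = 0.1
        signal = F[:, 0] + F[:, 1] + F[:, 2]
    if error == "gaussian":
        eps = rng.standard_normal(n)
    elif error == "t3":
        eps = rng.standard_t(3, size=n)
    elif error == "chi2":
        eps = rng.chisquare(2, size=n)
    else:                                 # heteroscedastic, dense quantiles
        eps = (X[:, 0] + X[:, 4]) * rng.standard_normal(n)  # e_i ~ N(0,1) assumed
    return X, signal + eps
```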
Recall that $\mathcal{M} = \{j: f_j \neq 0\}$ and $\mathcal{I} = \{(j,k): f_{jk} \neq 0\}$ are the active sets of main and interaction effects. For each example, we run $R = 500$ Monte Carlo simulations and denote the estimated subsets by $\hat{\mathcal{M}}^{(r)}$ and $\hat{\mathcal{I}}^{(r)}$, $r = 1, \ldots, R$. We evaluate the variable selection performance using the following criteria (a sketch of their computation follows this list):
(1)
True positive rate and false discovery rate of the main effects (mTPR, mFDR);
(2)
True positive rate and false discovery rate of the interaction effects (iTPR, iFDR);
(3)
Main effect coverage percentage ($P_m$): $R^{-1}\sum_{r=1}^{R} I\big(\mathcal{M} \subseteq \hat{\mathcal{M}}^{(r)}\big)$;
(4)
Interaction effect coverage percentage ($P_i$): $R^{-1}\sum_{r=1}^{R} I\big(\mathcal{I} \subseteq \hat{\mathcal{I}}^{(r)}\big)$;
(5)
Coverage rate of each single variable among the main effects and interaction effects;
(6)
Model size (size): $R^{-1}\sum_{r=1}^{R} \big(|\hat{\mathcal{M}}^{(r)}| + |\hat{\mathcal{I}}^{(r)}|\big)$;
(7)
Root mean squared error (RMSE): $R^{-1}\sum_{r=1}^{R} \sqrt{\frac{1}{n}\sum_{i=1}^{n} \rho_\tau\big(Y_i - \hat Q_\tau^{(r)}(Y_i|\mathbf{x}_i)\big)^2}$, where $\hat Q_\tau^{(r)}(Y_i|\mathbf{x}_i)$ is obtained from the estimated coefficients $\hat\beta_j^{(r)}$, $j \in \hat{\mathcal{M}}^{(r)}$, and $\hat\beta_{jk}^{(r)}$, $(j,k) \in \hat{\mathcal{I}}^{(r)}$, and $R$ is the number of replications.
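The selection criteria above can be computed as in the following sketch (illustrative helper functions, not from the paper):

```python
import numpy as np

def selection_metrics(true_set, est_sets):
    """TPR, FDR and coverage percentage over R replications; each element
    of est_sets is the estimated active set of one replication."""
    tpr = np.mean([len(true_set & s) / len(true_set) for s in est_sets])
    fdr = np.mean([len(s - true_set) / max(len(s), 1) for s in est_sets])
    cover = np.mean([true_set <= s for s in est_sets])  # I(true subset of estimate)
    return tpr, fdr, cover

def avg_rmse(y, q_hats, tau):
    """Criterion (7): average root mean squared check loss over replications."""
    rho = lambda u: u * (tau - (u < 0))
    return np.mean([np.sqrt(np.mean(rho(y - q) ** 2)) for q in q_hats])
```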
The results under strong heredity for $n = 100$ and $n = 300$ are shown in Table 1 and Table 2, respectively. For small samples, the performance of the proposed method is slightly worse than that of the other two methods in terms of the true positive rates and the coverage percentages of single variables in the main and interaction effects. This is because the high complexity and large degrees of freedom of nonparametric additive interaction models make them more prone to overfitting, leading to an inaccurate reflection of the underlying data structure and, consequently, a lower true positive rate (TPR) in variable selection. In contrast, methods like hirNet and RAMP, by adopting regularization and relatively simple linear structures, can more effectively prevent overfitting, although this may come at the cost of some loss in model interpretability.
In large sample scenarios, our proposed method not only demonstrates higher true positive rates and broader coverage of main effects and interaction effects but also significantly reduces false positive rates and generates a more parsimonious model. These results indicate that the method can effectively handle nonlinear additive interaction models, accurately capturing complex relationships in the data while maintaining a low false discovery rate and higher model simplicity.
The results from weak heredity are shown in Table 3 and Table 4. In small sample cases, the hirNet method is effective at capturing both main and interaction effects but suffers from a high rate of false selections, leading to increased model complexity. On the other hand, the RAMP method selects fewer variables, reducing false selections but also under-selecting important variables, which negatively affects predictive accuracy. In contrast, the proposed method in this paper offers a better balance, with higher precision and stability in selecting both main and interaction effects, while significantly reducing false selections. This method results in a moderate model size and the lowest prediction error, demonstrating an optimal trade-off between model complexity and predictive performance.
In large sample cases, the proposed method excels in identifying both main and interaction effects, with the main effect true positive rate (mTPR) and interaction effect true positive rate (iTPR) approaching 1.000, indicating a nearly perfect identification of true positives. This method also maintains low mFDR and iFDR, minimizing false positives. In contrast, although the hirNet methods perform better in terms of mTPR and iTPR, they tend to include more noisy variables in the model, resulting in higher RMSE values and lower predictive accuracy.
From the perspective of quantile-specific sparsity, the proposed method demonstrates varying levels of sparsity across the quantiles $\tau \in \{0.1, 0.25, 0.5, 0.75, 0.90\}$. At the $\tau = 0.1$ and $\tau = 0.9$ quantiles, fewer variables are selected, indicating higher sparsity, while, at $\tau = 0.25, 0.50, 0.75$, more variables and interaction effects are included, showing that the model adapts its selection to the distribution of the response variable. The simulation results further show that, when $\tau = 0.9$ and the DGP consists only of main effects, our method achieves near-perfect variable selection: the TPR for main effects approaches 1 (all true effects retained) and the FDR is close to 0 (almost no false positives), leading to model sizes nearly identical to the true DGP and minimal prediction errors, as reflected in the low RMSE. These results are consistent with the work of [38], which provides theoretical support for this outcome. Moreover, the RMSE patterns suggest that the method maintains strong predictive performance even when more variables are selected, thereby confirming the effectiveness of the quantile-specific sparsity approach.

5. Applications

Parkinson's disease (PD) is a common degenerative disease of the nervous system that leads to shaking, stiffness, and difficulty with walking, balance, and coordination. PD symptom monitoring with the Unified Parkinson's Disease Rating Scale (UPDRS) is very important but costly and logistically inconvenient, as it requires patients' presence in clinics and time-consuming physical examinations by trained medical staff. A new treatment technology based on the rapid and remote replication of UPDRS assessments from more than a dozen biomedical voice measurement indicators has recently been proposed. The dataset consists of 5875 UPDRS assessments from patients with early-stage PD and 16 corresponding biomedical voice measurement indicators. Reference [39] analyzed a PD dataset and mapped the clinically relevant properties of the speech signals of PD patients to the UPDRS using linear and nonlinear regression models, with the aim of verifying the feasibility of frequent, remote, and accurate UPDRS tracking and its effectiveness in telemonitoring frameworks that enable large-scale clinical trials of novel PD treatments. However, existing analyses that relate the UPDRS scores to the 16 voice measures through a model including only main effects may fail to capture the relationship between the UPDRS and the 16 voice measures. For example, fundamental frequency and amplitude can act jointly on the voice and thus may have an interactive effect on the UPDRS. In this situation, it is necessary to consider a model with pairwise interactions. Furthermore, the effects of these indicators on the UPDRS may be complicated and need further investigation, including their nonlinear, even nonparametric, dependency, so that both main effects and interactive effects may exist. Noting the typical skewness of the distribution of the UPDRS and the asymmetry of its $\tau$-quantile-level and $(1-\tau)$-quantile-level scores ($0 < \tau < 1$) in Figure 1 (left), the distribution of the UPDRS tends to be asymmetric and heavy-tailed, so one should examine how any quantile, including the median (rather than the mean), of the conditional distribution is affected by changes in the 16 voice measures and then make a reliable decision about the new treatment technology. Therefore, the proposed additive quantile regression with nonlinear interaction structures is applied to the analysis of these data.
The 16 biomedical voice measures in the PD data include several measurements of variation in fundamental frequency (Jitter, Jitter (Abs), Jitter:RAP, Jitter:PPQ5, and Jitter:DDP), several measurements of variation in amplitude (Shimmer, Shimmer (dB), Shimmer:APQ3, Shimmer:APQ5, Shimmer:APQ11, and Shimmer:DDA), two measurements of the ratio of noise to tonal components in the voice (NHR and HNR), a nonlinear dynamical complexity measurement (RPDE), a signal fractal scaling exponent (DFA), and a nonlinear measure of fundamental frequency variation (PPE). Our interest is in identifying the biomedical voice measurements and their interactions that may effectively affect the UPDRS in Parkinson's disease patients. Figure 1 (right) shows the correlations among the 16 biomedical voice measurements. As can be seen from Figure 1 (right), except for the variable DFA, there are strong correlations among the covariates, which makes variable selection more challenging. Therefore, the proposed additive quantile regression model with nonlinear interaction structures may provide a complete picture of how those factors and their interactions affect the distribution of the UPDRS.
We first randomly generate 100 partitions of the data into training and test sets: for each partition, 5475 observations form the training set and the remaining 400 observations form the test set. The dataset is normalized, and quantile regression is used to fit the model selected by the proposed method. For comparison, we also use least squares to fit the models selected by hirNet and RAMP on the training set. Finally, we evaluate the performance of these models, using the fixed active sets, on the corresponding test sets.
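A sketch of this evaluation protocol (with `fit_fn` a placeholder for any of the compared methods; the RMSE below uses squared error for brevity, whereas the quantile methods are scored with the check loss of Section 4):

```python
import numpy as np

def evaluate(X, y, fit_fn, n_splits=100, n_train=5475, seed=0):
    """100 random train/test partitions: normalize on the training set,
    fit, and record the test error.  fit_fn(X_tr, y_tr) -> predict."""
    rng = np.random.default_rng(seed)
    errs = []
    for _ in range(n_splits):
        idx = rng.permutation(len(y))
        tr, te = idx[:n_train], idx[n_train:]
        mu, sd = X[tr].mean(axis=0), X[tr].std(axis=0)   # train-set scaling
        predict = fit_fn((X[tr] - mu) / sd, y[tr])
        errs.append(np.sqrt(np.mean((y[te] - predict((X[te] - mu) / sd)) ** 2)))
    return float(np.mean(errs))
```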
Table 5 summarizes the covariates selected by the different methods. The proposed quantile-specific method reveals distinct covariate patterns across the UPDRS severity levels. At lower quantiles ($\tau = 0.1$ and $\tau = 0.25$), the model highlights DFA and its interactions (e.g., HNR×DFA, DFA×PPE), suggesting that these acoustic features may be particularly relevant in the early stages of the disease. The middle quantile ($\tau = 0.5$) incorporates additional speech markers (APQ11, RPDE), which aligns with conventional methods, while higher quantiles ($\tau = 0.75$ and $\tau = 0.9$) progressively simplify to core features (HNR, dB), indicating reduced covariate dependence in advanced stages. This dynamic selection demonstrates how quantile-specific modeling can capture stage-dependent biological mechanisms while maintaining model parsimony, selecting just 3–5 main effects and 1–2 interactions per quantile, compared to hirNet's fixed 12-variable approach. The shifting importance of the HNR and DFA interactions across quantiles particularly underscores their complex, nonlinear relationships with symptom progression.
Figure 2 and Figure 3 show the estimated main effects and interaction effects at $\tau = 0.50$. The solid lines represent the estimated effects, while the dashed lines indicate the 95% confidence intervals obtained through a simulation-based approach. Specifically, for the chosen model, we conducted 100 repeated simulations on the dataset to derive point-wise confidence intervals for each covariate's effect.
From Figure 2, it can be observed that, as APQ11 increases, the UPDRS score initially rises and then declines, peaking at a specific value. This inverted U-shaped relationship suggests that moderate amplitude variations are linked to more severe symptoms. For HNR, the UPDRS score decreases as the noise ratio increases, indicating that higher noise levels correlate with disease worsening. The confidence intervals around these trends confirm their statistical significance across the simulated datasets. RPDE shows a U-shaped curve, meaning that intermediate complexity levels are associated with more severe symptoms, while extreme values (high or low) indicate milder symptoms. The confidence intervals further support this nonlinear relationship, demonstrating its robustness. DFA has a negative correlation with the UPDRS score: as fundamental frequency variability increases, the UPDRS score decreases, suggesting that greater variability is linked to symptom relief. The narrow confidence intervals around this trend highlight its consistency across the simulations.
Figure 3 illustrates the interaction effect between HNR and RPDE on the UPDRS score. This shows that, as HNR increases, the impact on the UPDRS score becomes more negative, especially when RPDE is at lower values, indicating a worsening in symptoms with higher noise to harmonic ratio. Conversely, for higher values of RPDE, the interaction effect stabilizes, suggesting less influence on the UPDRS score. The plot reveals a valley-like structure around the center where both HNR and RPDE are near zero, signifying minimal interaction effect. Notably, the most significant interactions occur at extreme values of either HNR or RPDE, highlighting the complex, nonlinear relationship between these variables and Parkinson’s disease symptom severity. The estimation results of the main effects and interaction effects at τ = 0.1 , 0.25 , 0.75 , 0.9 are presented in Appendix B.
From the results in Appendix B, it can be seen that, although the main effects selected at τ = 0.25 and τ = 0.75 are the same, their impacts on UPDRS scores differ due to nonlinear relationships or complex interactions between variables. This highlights the advantage of quantile regression, which not only captures the average trend of the dependent variable but also reveals how these effects vary across different quantiles. This method provides a more comprehensive understanding of the complex relationships between variables and helps to analyze the specific effects of variables on UPDRS under different conditions.
From the perspective of quantile-specific sparsity, the proposed method demonstrates varying levels of covariate selection across different quantiles. At lower quantiles ( τ = 0.1 and τ = 0.25 ), the model tends to select fewer covariates, focusing primarily on HNR, DFA, and PPE, which suggests a higher degree of sparsity. As the quantile increases to τ = 0.50 , the model selects more covariates, including APQ11 and RPDE, indicating a reduction in sparsity. At higher quantiles ( τ = 0.75 and τ = 0.9 ), the model again shows increased sparsity by selecting only a few key covariates such as HNR, DFA, and PPE. This pattern reflects the adaptability of the proposed method in capturing the specific characteristics of the data distribution at different quantiles, thereby achieving optimal sparsity tailored to each quantile level.
Table 6 compares the average RMSE and model sizes across 100 datasets for three methods: the proposed quantile-based approach (evaluated at τ = 0.1 , 0.25 , 0.5 , 0.75 , 0.9 ), RAMP, and hirNet. The results show that our method outperforms the others in predictive accuracy, particularly at τ = 0.75 , where it achieves the lowest RMSE of 0.858, outperforming RAMP (0.878) and hirNet (0.887). Additionally, the proposed method uses much smaller models, with only 2.3–5.0 variables, compared to hirNet’s larger model with 18.39 variables. This reduction in model size highlights the method’s efficiency without sacrificing predictive power. The method also performs consistently well across different quantiles, with a particularly strong result at τ = 0.25 (RMSE = 0.873), though the RMSE at τ = 0.5 is slightly higher (1.00), showing some variation in performance across quantiles. Furthermore, models that include interaction terms perform better than those with only main effects, emphasizing the importance of interaction effects in the model. In conclusion, the proposed method strikes an optimal balance between prediction accuracy, model simplicity, and computational efficiency, making it ideal for applications that require both interpretability and strong predictive performance.

6. Conclusions

We have explored variable selection for additive quantile regression with interactions. We fit the model using the sparsity-smooth penalty function and apply the regularization algorithm under the marginality principle to the additive quantile regression with interactions, which can select main and interaction effects simultaneously while maintaining either the strong or weak heredity constraint. We establish the theoretical properties of the proposed method, and simulation studies demonstrate the good performance of the proposed model and method.
Also, we applied the proposed method to the Parkinson’s disease (PD) data; our method successfully validates the important finding in the literature that frequent, remote, and accurate UPDRS tracking as a novel PD treatment could be effective in telemonitoring frameworks for PD symptom monitoring and large-scale clinical trials.

Author Contributions

Y.B.: Conceptualization, Formal analysis, Methodology, Software, Writing—original draft, Data curation; J.J.: Formal analysis, Methodology, Writing—review and editing; M.T.: Supervision, Funding acquisition. All authors have read and agreed to the published version of the manuscript.

Funding

The work was partially supported by the Beijing Natural Science Foundation (No.1242005).

Data Availability Statement

The dataset used in this study is publicly available at https://www.worldbank.org/en/home (accessed on 1 June 2023).

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Proof of the Theorem

Appendix A.1. Proof of Theorem 1

Throughout the proof, if $v = (v_1, \ldots, v_k)^\top$ is a vector, we use the norms $\|v\|_2 = \big(\sum_{j=1}^{k} v_j^2\big)^{1/2}$ and $\|v\|_\infty = \max_j |v_j|$. For a function $f$ on $[0, 1]$, we denote its $L_2(P)$ norm by $\|f\|_{L_2} = \big(\int_0^1 f^2(x)\, dP(x)\big)^{1/2} = \big(E(f^2)\big)^{1/2}$. We write $\gamma_{01} = (\beta_{01}^\top, \beta_{001}^\top)^\top$ and $\tilde\gamma_{01} = (\tilde\beta_{01}^\top, \tilde\beta_{001}^\top)^\top$ in the same fashion. Let $\theta = \sqrt{n}(\tilde\gamma_{01} - \gamma_{01})$. Note that we can express $\hat Q_{Y_i|\mathbf{x}_i}(\tau) = \Psi_{\mathcal{M}i}^\top \hat\beta_{01} + \Phi_{\mathcal{I}i}^\top \hat\beta_{001}$; alternatively, we can express it as $\hat Q_{Y_i|\mathbf{x}_i}(\tau) = \Psi_{\mathcal{M}i}^\top \tilde\beta_{01} + \Phi_{\mathcal{I}i}^\top \tilde\beta_{001}$. By the identifiability of the model, we must have $\hat\gamma_{01} = \tilde\gamma_{01}$.
Notice that
$$\frac{1}{n}\sum_{i=1}^{n} \rho_\tau\big(Y_i - \Psi_{\mathcal{M}i}^\top \tilde\beta_{01} - \Phi_{\mathcal{I}i}^\top \tilde\beta_{001}\big) = \frac{1}{n}\sum_{i=1}^{n} \rho_\tau\big(\epsilon_i - \tilde\Pi_{Ai}^\top \theta - U_{ni}\big),$$
where $\tilde\Pi_{Ai} = n^{-1/2}\Pi_{Ai}$ and $U_{ni} = \Pi_{Ai}^\top \gamma_{01} - m(\mathbf{x}_i)$. Define the minimizer under the transformation as
$$\hat\theta = \arg\min_\theta \frac{1}{n}\sum_{i=1}^{n} \rho_\tau\big(\epsilon_i - \tilde\Pi_{Ai}^\top \theta - U_{ni}\big).$$
Let $a_n$ be a sequence of positive numbers and define
$$Q_i(a_n) = \rho_\tau\big(\epsilon_i - a_n \tilde\Pi_{Ai}^\top \theta - U_{ni}\big).$$
Define
$$D_i(\theta, a_n) = Q_i(a_n) - Q_i(0) - E\big[Q_i(a_n) - Q_i(0) \mid \mathbf{x}_i\big] + a_n \tilde\Pi_{Ai}^\top \theta\, \varphi_\tau(\epsilon_i)$$
and
$$\tilde Q_i(\theta, a_n) = Q_i(a_n) - Q_i(0) + a_n \tilde\Pi_{Ai}^\top \theta\, \varphi_\tau(\epsilon_i),$$
where $\varphi_\tau(\epsilon_i) = \tau - I(\epsilon_i < 0)$.
Lemma A1. 
Let $q_n = qL_n + sL_n^2$. If Conditions (A1)–(A4) are satisfied, then, for any positive constant $L$,
$$q_n^{-1} \sup_{\|\theta\| \le L} \Big|\sum_{i=1}^{n} D_i(\theta, \sqrt{q_n})\Big| = o_p(1).$$
Proof. 
Note that
$$D_i(\theta, \sqrt{q_n}) = \tilde Q_i(\theta, \sqrt{q_n}) - E\big[\tilde Q_i(\theta, \sqrt{q_n})\big].$$
Using Knight's identity,
$$\rho_\tau(r - s) - \rho_\tau(r) = -s\{\tau - I(r \le 0)\} + \int_0^s \{I(r \le t) - I(r \le 0)\}\, dt,$$
we have
$$\tilde Q_i(\theta, \sqrt{q_n}) = \rho_\tau\big(\epsilon_i - \sqrt{q_n}\tilde\Pi_{Ai}^\top\theta - U_{ni}\big) - \rho_\tau\big(\epsilon_i - U_{ni}\big) + \sqrt{q_n}\tilde\Pi_{Ai}^\top\theta\, \varphi_\tau(\epsilon_i) = \int_0^{\sqrt{q_n}\tilde\Pi_{Ai}^\top\theta} \big[I(\epsilon_i - U_{ni} < t) - I(\epsilon_i - U_{ni} < 0)\big]\, dt.$$
Therefore, $\mathrm{Var}\big[D_i(\theta, \sqrt{q_n})\big] = \mathrm{Var}\big[\tilde Q_i(\theta, \sqrt{q_n})\big] \le E\big[\tilde Q_i^2(\theta, \sqrt{q_n})\big]$. We have
$$\begin{aligned}
\sum_{i=1}^{n} E\big[\tilde Q_i^2(\theta, \sqrt{q_n}) \mid \mathbf{x}_i\big]
&\le C\sqrt{q_n}\, n^{-1/2} \sum_{i=1}^{n} \int_0^{\sqrt{q_n}\tilde\Pi_{Ai}^\top\theta} \big[F_i(t + U_{ni}) - F_i(U_{ni})\big]\, dt \\
&\le C\sqrt{q_n}\, n^{-1/2} \sum_{i=1}^{n} \int_0^{\sqrt{q_n}\tilde\Pi_{Ai}^\top\theta} \big(f_i(0) + o(1)\big)\big(t + o(t^2)\big)\, dt \\
&\le C q_n^2\, n^{-1/2}\, \theta^\top \Big(\sum_{i=1}^{n} f_i(0)\, \tilde\Pi_{Ai}\tilde\Pi_{Ai}^\top\Big)\theta\, (1 + o(1)) \\
&\le C q_n^2\, n^{-1/2}\, \|\theta\|_2^2\, \lambda_{\max}\big(n^{-1}\Pi_A^\top B_n \Pi_A\big) \le C q_n^2\, n^{-1/2}\, (1 + o(1))
\end{aligned}$$
for some positive constant $C$, where $B_n = \mathrm{diag}\big(f_1(0), \ldots, f_n(0)\big)$ is an $n \times n$ diagonal matrix with $f_i(0)$ denoting the conditional density function of $\epsilon_i$ given $\mathbf{x}_i$ evaluated at zero. Therefore, $\sum_{i=1}^{n} \mathrm{Var}\{D_i(\theta, \sqrt{q_n})\} \le C q_n^2 n^{-1/2}$ for some positive constant $C$ and all $n$ sufficiently large. By Bernstein's inequality, for all $n$ sufficiently large,
$$P\Big(q_n^{-1}\Big|\sum_{i=1}^{n} D_i(\theta, \sqrt{q_n})\Big| > \nu \,\Big|\, \mathbf{x}_i\Big) \le \exp\Big(-\frac{\nu^2}{C q_n^2 n^{-1/2} + C\nu n^{-1/2} q_n}\Big) \le \exp\big(-C n^{1/2} q_n^{-2}\big),$$
which converges to 0 as $n \to \infty$ by Conditions (A3) and (A4). Note that the upper bound does not depend on $\mathbf{x}_i$, so the bound also holds unconditionally. □
Lemma A2. 
Suppose that Conditions (A1)–(A4) hold. Then, for any sequence $\{b_n\}$ with $1 \le b_n \le L_n^{\xi/10}$, $0 < \xi < (r - 1/2)/(2r + 1)$, we have
$$\sup_{\theta:\, \|\tilde\Pi_A^\top\tilde\Pi_A\theta\| \le b_n^2 L_n} \Big|\sum_{i=1}^{n} \Big[\rho_\tau\big(\epsilon_i - \tilde\Pi_{Ai}^\top\theta - U_{ni}\big) - \rho_\tau\big(\epsilon_i - U_{ni}\big) + \tilde\Pi_{Ai}^\top\theta\big(\tau - I(\epsilon_i < 0)\big) - E_{\epsilon_i|\mathbf{x}_i}\Big(\rho_\tau\big(\epsilon_i - \tilde\Pi_{Ai}^\top\theta - U_{ni}\big) - \rho_\tau\big(\epsilon_i - U_{ni}\big)\Big)\Big]\Big| = o_p(L_n).$$
Lemma A2 can be proven using arguments similar to those used to prove Lemma 3.2 in [40].
Proof. 
For Theorem 1(a), we first prove that, for any $\eta > 0$, there exists an $L > 0$ such that
$$P\Big(\inf_{\|\theta\|_2 \ge L}\, q_n^{-1} \sum_{i=1}^{n} \big(Q_i(\sqrt{q_n}) - Q_i(0)\big) > 0\Big) \ge 1 - \eta.$$
Note that
$$q_n^{-1}\sum_{i=1}^{n}\big(Q_i(\sqrt{q_n}) - Q_i(0)\big) = q_n^{-1}\sum_{i=1}^{n} D_i(\theta, \sqrt{q_n}) + q_n^{-1}\sum_{i=1}^{n} E\big[Q_i(\sqrt{q_n}) - Q_i(0)\big] - q_n^{-1/2}\sum_{i=1}^{n} \tilde\Pi_{Ai}^\top\theta\, \varphi_\tau(\epsilon_i) \equiv G_{n1} + G_{n2} + G_{n3},$$
where the definitions of $G_{ni}$, $i = 1, 2, 3$, are clear from the context. First, $\sup_{\|\theta\|_2 \le L} |G_{n1}| = o_p(1)$ by Lemma A1.
For $G_{n2}$, we have
$$\begin{aligned}
q_n^{-1}\sum_{i=1}^{n} E\{Q_i(\sqrt{q_n}) - Q_i(0)\}
&= q_n^{-1}\sum_{i=1}^{n} E\big[\rho_\tau\big(\epsilon_i - \sqrt{q_n}\tilde\Pi_{Ai}^\top\theta - U_{ni}\big) - \rho_\tau\big(\epsilon_i - U_{ni}\big)\big] \\
&= q_n^{-1}\sum_{i=1}^{n} E\Big(-\sqrt{q_n}\tilde\Pi_{Ai}^\top\theta\big(\tau - I(\epsilon_i - U_{ni} \le 0)\big) + \int_0^{\sqrt{q_n}\tilde\Pi_{Ai}^\top\theta}\big[I(\epsilon_i - U_{ni} < s) - I(\epsilon_i - U_{ni} < 0)\big]\, ds \,\Big|\, \mathbf{x}_i\Big) \\
&= -q_n^{-1/2}\sum_{i=1}^{n} E\big[\tilde\Pi_{Ai}^\top\theta\big(\tau - I(\epsilon_i - U_{ni} \le 0)\big)\big] + q_n^{-1}\sum_{i=1}^{n} E\Big(\int_{U_{ni}}^{\sqrt{q_n}\tilde\Pi_{Ai}^\top\theta + U_{ni}}\big[I(\epsilon_i < s) - I(\epsilon_i < 0)\big]\, ds \,\Big|\, \mathbf{x}_i\Big) \\
&\equiv W_{n1}(\theta) + W_{n2}(\theta),
\end{aligned}$$
where the definitions of $W_{ni}(\theta)$, $i = 1, 2$, are clear from the context. Note that $|F_{\epsilon|\mathbf{x}}(0|\mathbf{x}) - F_{\epsilon|\mathbf{x}}(U_{ni}|\mathbf{x})| \le B|U_{ni}|$ for all $\mathbf{x}$, where $B$ is the constant in Assumption (A2). Let $U_n = (u_{n1}, \ldots, u_{nn})^\top$. By Condition (A3), we have $\|U_n\|_2 = O(L_n^{-r})$. Consequently, we can take a constant $M > 0$ such that $\sup_{\|\theta\|_2 \le L} |W_{n1}(\theta)| \le M q_n^{-1/2} n^{-1/2} \|\theta^\top\Pi_A^\top\|_2 \|U_n\|_2 = O_p\big(q_n^{-1/2} n^{-1/2} L_n^{-r}\big)\|\theta\|_2 = O_p(\|\theta\|_2)$ by Condition (A3) and Lemma A2. For $W_{n2}(\theta)$, we have
$$\begin{aligned}
W_{n2}(\theta) &= q_n^{-1}\sum_{i=1}^{n} \int_{U_{ni}}^{\sqrt{q_n}\tilde\Pi_{Ai}^\top\theta + U_{ni}} f_i(0)\, s\, ds\, (1 + o(1)) \\
&= q_n^{-1}\sum_{i=1}^{n} f_i(0)\Big[\tfrac{1}{2} q_n\big(\tilde\Pi_{Ai}^\top\theta\big)^2 + U_{ni}\sqrt{q_n}\,\tilde\Pi_{Ai}^\top\theta\Big](1 + o(1)) \\
&= C\theta^\top\Big(n^{-1}\sum_{i=1}^{n} f_i(0)\, \Pi_{Ai}\Pi_{Ai}^\top\Big)\theta\, (1 + o(1)) + q_n^{-1/2}\sum_{i=1}^{n} f_i(0)\, U_{ni}\, \tilde\Pi_{Ai}^\top\theta \\
&= C\theta^\top K_n \theta\, (1 + o(1)) + q_n^{-1/2}\sum_{i=1}^{n} f_i(0)\, U_{ni}\, \tilde\Pi_{Ai}^\top\theta,
\end{aligned}$$
where $K_n = \frac{1}{n}\Pi_A^\top B_n \Pi_A$. By Condition (A3), there exists a finite constant $c > 0$ such that $C\theta^\top K_n\theta\, (1 + o(1)) \ge c\|\theta\|_2^2$ with probability approaching one. Combining Condition (A2) and the Cauchy–Schwarz inequality, we obtain
$$q_n^{-1/2}\sum_{i=1}^{n} f_i(0)\, U_{ni}\, \tilde\Pi_{Ai}^\top\theta = q_n^{-1/2} n^{-1/2}\, \theta^\top \Pi_A^\top B_n U_n \le q_n^{-1/2} n^{-1/2}\, \|\theta^\top\Pi_A^\top\|_2 \cdot \|B_n U_n\|_2 = O_p\big(q_n^{-1/2} n^{-1/2} L_n^{-r}\big)\|\theta\|_2 = O_p(\|\theta\|_2).$$
We next evaluate $G_{n3}$, noting that $E(G_{n3}) = 0$ and
$$E\big(G_{n3}^2\big) \le C q_n^{-1}\, E\Big[\theta^\top\Big(n^{-1}\sum_{i=1}^{n} \Pi_{Ai}\Pi_{Ai}^\top\Big)\theta\Big] = O\big(q_n^{-1}\|\theta\|_2^2\big).$$
Therefore, $G_{n3} = O_p\big(q_n^{-1/2}\|\theta\|_2\big)$. Hence, for $L$ sufficiently large, the quadratic term dominates and $q_n^{-1}\sum_{i=1}^{n}\big(Q_i(\sqrt{q_n}) - Q_i(0)\big)$ asymptotically has the lower bound $cL^2$. By convexity, this implies $\|\hat\theta\|_2 = O_p(\sqrt{q_n})$. From the definition of $\hat\theta$, it follows that $\|\hat\gamma_{01} - \gamma_{01}\|_2 = O_p\big(\sqrt{n^{-1} q_n}\big)$. This completes the proof of Theorem 1(a).
The proof of Theorem 1(b) is immediate, since $\|\hat m(\mathbf{x}) - m(\mathbf{x})\|_{L_2} \lesssim \|\hat\gamma_{01} - \gamma_{01}\|_2$ and $\sup_{\mathbf{x}\in[0,1]^p}\|\Pi\|_2 = O(\sqrt{L_n})$. □

Appendix A.2. Proof of Theorem 2

Proof. 
Note that $\hat\gamma = (\hat\beta_0^\top, \hat\beta_{00}^\top)^\top$ is the oracle estimator. Our goal is to prove that $\hat\gamma$ is a local minimizer of $Q_P(\beta_j, \beta_{jk})$. The gradient of $\sum_{i=1}^{n}\rho_\tau\big(Y_i - \sum_{j=1}^{p}\Psi_{i,j}^\top\beta_j - \sum_{1\le j<k\le p}\Phi_{i,jk}^\top\beta_{jk}\big)$ is not applicable in the proof because the check loss $\rho_\tau$ is not differentiable at zero; we instead argue directly from a lower bound on the difference of two check-loss values.
Suppose that there exist indices $j_0 \in \mathcal{M}^c$, $k_0 \in \mathcal{M}^c$, and $(j_0, k_0) \in \mathcal{I}^c$ such that $\hat f_{j_0} \neq 0$ and $\hat f_{j_0 k_0} \neq 0$; that is, $\hat\beta_{j_0} \neq 0$ and $\hat\beta_{j_0 k_0} \neq 0$. Let $\hat\gamma^* = (\hat\beta_0^{*\top}, \hat\beta_{00}^{*\top})^\top$ be the vector obtained by replacing $\hat\beta_{j_0}$ and $\hat\beta_{j_0 k_0}$ with 0. Since $\rho_\tau(u) - \rho_\tau(v) \ge \big(\tau - I(v \le 0)\big)(u - v)$ for any $u, v \in \mathbb{R}$, we have
$$\begin{aligned}
Q_P(\hat\beta_j, \hat\beta_{jk}) - Q_P(\hat\beta_j^*, \hat\beta_{jk}^*)
&\ge \frac{1}{n}\sum_{i=1}^{n}\big(\tau - I(Y_i \le \Pi_i^\top\hat\gamma^*)\big)\Pi_i^\top(\hat\gamma - \hat\gamma^*) \\
&\quad + \lambda_1\Big(\|R_{j_0}\hat\beta_{j_0}\|_2 + I_{\{\hat\beta_{j_0 k_0}\neq 0\}}\big(\|R_{j_0}\hat\beta_{j_0}\|_2^2 + \|R_{k_0}\hat\beta_{k_0}\|_2^2\big)^{1/2} + \|Q_{j_0 k_0}\hat\beta_{j_0 k_0}\|_2\Big) \\
&= \frac{1}{n}\sum_{i=1}^{n}\big(\tau - I(\epsilon_i \le 0)\big)\Pi_i^\top(\hat\gamma - \hat\gamma^*) - \frac{1}{n}\sum_{i=1}^{n}\big(I(\epsilon_i \le 0) - I(\epsilon_i \le r_{ni})\big)\Pi_i^\top(\hat\gamma - \hat\gamma^*) \\
&\quad + \lambda_1\Big(\|R_{j_0}\hat\beta_{j_0}\|_2 + I_{\{\hat\beta_{j_0 k_0}\neq 0\}}\big(\|R_{j_0}\hat\beta_{j_0}\|_2^2 + \|R_{k_0}\hat\beta_{k_0}\|_2^2\big)^{1/2} + \|Q_{j_0 k_0}\hat\beta_{j_0 k_0}\|_2\Big) \\
&\ge -\frac{1}{n}\Big\|\sum_{i=1}^{n}\big(\tau - I(\epsilon_i \le 0)\big)\Pi_i\Big\|_2\, \|\hat\gamma - \hat\gamma^*\|_2 - \frac{1}{n}\Big\|\sum_{i=1}^{n}\big(I(\epsilon_i \le 0) - I(\epsilon_i \le r_{ni})\big)\Pi_i\Big\|_2\, \|\hat\gamma - \hat\gamma^*\|_2 \\
&\quad + \lambda_1\Big(\|R_{j_0}\hat\beta_{j_0}\|_2 + I_{\{\hat\beta_{j_0 k_0}\neq 0\}}\big(\|R_{j_0}\hat\beta_{j_0}\|_2^2 + \|R_{k_0}\hat\beta_{k_0}\|_2^2\big)^{1/2} + \|Q_{j_0 k_0}\hat\beta_{j_0 k_0}\|_2\Big) \\
&\equiv -T_{n1} - T_{n2} + T_{n3},
\end{aligned}$$
where $r_{ni} = U_{ni} - \Pi_i^\top(\hat\gamma - \hat\gamma^*)$. A simple calculation gives $\|\sum_{i=1}^{n}(\tau - I(\epsilon_i \le 0))\Pi_i\|_2 = O_p(n^{1/2}L_n)$; therefore, $T_{n1} = O_p(n^{-1}L_n^2)$. For $T_{n2}$, from Conditions (A2) and (A3), we have
$$E\Big\|\sum_{i=1}^{n}\big(I(\epsilon_i \le 0) - I(\epsilon_i \le r_{ni})\big)\Pi_i\Big\|_2^2 \le n\sum_{i=1}^{n} E\big\|\big(I(\epsilon_i \le 0) - I(\epsilon_i \le r_{ni})\big)\Pi_i\big\|_2^2 \le n\sum_{i=1}^{n} E\big[\Pi_i^\top\Pi_i\, \big|I(\epsilon_i - r_{ni} \le 0) - I(\epsilon_i \le 0)\big|\big] \le n\sum_{i=1}^{n} E\big[s(n)^2\, I(0 \le |\epsilon_i| \le |r_{ni}|)\big] \lesssim L_n \sum_{i=1}^{n} \int_{-|r_{ni}|}^{|r_{ni}|} f_i(s)\, ds = O_p\big(n^{1/2}L_n^3\big),$$
where $s(n) = \max_i \|\varphi(\mathbf{x}_i)\|_2 \le c_1 n^{-1/2} L_n$ for some positive constant $c_1$. The last equality follows by observing that $\max_{1\le i\le n} |r_{ni}| \le O(L_n^{-r}) + L_n\|\hat\gamma - \hat\gamma^*\|_2 = O_p(n^{-1/2}L_n^2)$. This implies that $\|\sum_{i=1}^{n}(I(\epsilon_i \le 0) - I(\epsilon_i \le r_{ni}))\Pi_i\|_2 = O_p(n^{1/4}L_n^{3/2})$ and, therefore, $T_{n2} = o_p(n^{-1}L_n^3)$. By the bounds on $T_{n1}$ and $T_{n2}$ and the condition $n^{-1}L_n^3\lambda_1^{-1} \to 0$, the term $T_{n3}$ dominates. Therefore, $Q_P(\hat\beta_j, \hat\beta_{jk}) - Q_P(\hat\beta_j^*, \hat\beta_{jk}^*) > 0$ with probability tending to one, which contradicts the fact that $\hat\gamma$ is the minimizer of $Q_P(\beta_j, \beta_{jk})$. The same result is proven similarly under weak heredity. This completes the proof of Theorem 2. □

Appendix A.3. Proof of Theorem 3

Lemma A3. 
Let $z_1, \ldots, z_n$ be independent random variables with values in some space $\mathcal{Z}$, and let $\Gamma$ be a class of real-valued functions on $\mathcal{Z}$ satisfying, for some positive constants $\eta_n$ and $\tau_n$,
$$\|\gamma(z)\|_\infty \le 2\eta_n \quad \text{and} \quad \frac{1}{n}\sum_{i=1}^{n} \mathrm{var}\big(\gamma(z_i)\big) \le \tau_n^2, \qquad \forall\, \gamma \in \Gamma.$$
Define $Z := \sup_{\gamma\in\Gamma}\big|\frac{1}{n}\sum_{i=1}^{n}\big(\gamma(z_i) - E\gamma(z_i)\big)\big|$. Then, for $t > 0$,
$$P\Big(Z \ge E(Z) + \sqrt{2t^2\big(\tau_n^2 + 2\eta_n E(Z)\big)} + \frac{2\eta_n t^2}{3}\Big) \le \exp(-nt^2).$$
For the details of the proof, see [41].
Proof. 
Define $\Gamma=\big\{\gamma:\ \gamma(z)=\rho_\tau\big(Y-\hat m(x)\big)-\rho_\tau\big(Y-\tilde m(x)\big)\big\}$. We can write $[R(\hat m)-R(\tilde m)]-[R_n(\hat m)-R_n(\tilde m)]=E\gamma(z)-\frac1n\sum_{i=1}^{n}\gamma(z_i)$, $\gamma\in\Gamma$. By Lemma A3, we have
$$Z\le E(Z)+\sqrt{2t^2\big(\tau_n^2+2\eta_n E(Z)\big)}+\frac{2\eta_n t^2}{3}$$
with probability at least $1-\exp(-nt^2)$ for $t>0$. Based on the subadditivity of the square root and the inequality $\sqrt{xy}\le(x+y)/2$ for $x,y\ge 0$, we have
$$\sqrt{2t^2\big(\tau_n^2+2\eta_n E(Z)\big)}\le\sqrt{2t^2\tau_n^2}+2\sqrt{t^2\eta_n E(Z)}\le\sqrt{2t^2\tau_n^2}+2E(Z)+t^2\eta_n.$$
Then, with probability at least $1-\exp(-nt^2)$, we have
$$Z\le 3E(Z)+\sqrt{2t^2}\,\tau_n+\frac{5t^2\eta_n}{3}.$$
Recall that $\Gamma$ collects all differences $\rho_\tau(Y-\hat m(x))-\rho_\tau(Y-\tilde m(x))$. The bracketing number $N_{[\,]}(\epsilon,\mathcal M_n)$ is the minimal number of $\epsilon$-brackets $[l_j,u_j]$ needed to cover $\mathcal M_n$, where $\|u_j-l_j\|\le\epsilon$, $1\le j\le k$. We can construct $2\epsilon$-brackets over $\Gamma$ by taking the differences $[l_i-u_j,\,u_i-l_j]$ for the upper and lower bounds. Therefore, the bracketing number $N_{[\,]}(\epsilon,\Gamma)$ is bounded by the square of the bracketing number $N_{[\,]}(\epsilon/2,\mathcal M_n)$. For $\delta>0$, by Theorem 19.5 in [42], there exists a finite number $a(\delta)$ such that
$$E\sup_{\gamma\in\Gamma}\Big|\frac1n\sum_{i=1}^{n}\gamma(z_i)-E\gamma(z)\Big|\lesssim\frac{J_{[\,]}(\delta,\mathcal M_n)}{\sqrt n}+E\big[M\,1\{M>a(\delta)\sqrt n\}\big],$$
where $J_{[\,]}(\delta,\mathcal M_n)=\int_0^{\delta}\sqrt{\log N_{[\,]}(\epsilon,\mathcal M_n)}\,d\epsilon$ is the bracketing integral. The envelope function $M$ can be taken equal to the supremum of the absolute values of the upper and lower bounds of the finitely many brackets over $\mathcal M_n$. Based on Theorem 19.5 in [42], the second term on the right is bounded by $a(\delta)^{-1}E\big[M^2\,1\{M>a(\delta)\sqrt n\}\big]$ and hence converges to zero as $n\to\infty$. Moreover, there exists a constant $K>0$ such that, for the Sobolev space $\mathcal S_{C_2}([0,1])$,
$$\log N_{[\,]}\big(\delta,\mathcal S_{C_2}([0,1])\big)\le K\Big(\frac{1}{\delta}\Big)^{1/2}.$$
Note that $\sum_{j\in\mathcal M}\varphi_j^{\top}\beta_j\in\mathcal M_n(L_n)$ and $\sum_{(j,k)\in\mathcal I}\phi_{jk}^{\top}\beta_{jk}\in\mathcal M_n(L_n)$, where $\varphi_j\in\mathcal S_{C_2}([0,1])$ and $\phi_{jk}\in\mathcal S_{C_2}([0,1]^2)$. Reference [43] implies that the bracketing integrals of the Sobolev spaces $\mathcal S_{C_2}([0,1])$ and $\mathcal S_{C_2}([0,1]^2)$ are bounded. Then,
$$J_{[\,]}(\delta,\mathcal M_n)=O\big((\log p)^{1/2}+(2\log p)^{1/4}\big)=O\big(\sqrt{\log p}\big).$$
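To see that these bracketing integrals are indeed finite, one can integrate the entropy bounds directly. The following short computation assumes the Birman–Solomjak entropy rate $\log N_{[\,]}(\epsilon,\mathcal S_{C_2}([0,1]^d))\le K\epsilon^{-d/2}$ for smoothness two in dimension $d$, consistent with the $d=1$ bound displayed above:
$$\int_0^{\delta}\sqrt{K\,\epsilon^{-1/2}}\,d\epsilon=\tfrac{4}{3}\sqrt K\,\delta^{3/4}<\infty\quad(d=1),\qquad
\int_0^{\delta}\sqrt{K\,\epsilon^{-1}}\,d\epsilon=2\sqrt K\,\delta^{1/2}<\infty\quad(d=2),$$
so the bracketing integrals of $\mathcal S_{C_2}([0,1])$ and $\mathcal S_{C_2}([0,1]^2)$ converge near zero, which is exactly what the bound on $J_{[\,]}(\delta,\mathcal M_n)$ requires.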
By the convexity of $\rho_\tau$, there exists $C>0$ such that $E(\gamma(z))^2\le 2\|\hat m(x)-\tilde m(x)\|_{L_2}^2$ and $\|\gamma\|_\infty\le C\|\hat m(x)-\tilde m(x)\|_{L_2}$. Let $\tau_n^2=\|\hat m(x)-\tilde m(x)\|_{L_2}^2\le 2\big(\|\hat m(x)-m(x)\|_{L_2}^2+\|\tilde m(x)-m(x)\|_{L_2}^2\big)=O\big(n^{-(2r-1)/(2r+1)}\big)$ and $\eta_n=\|\hat m(x)-\tilde m(x)\|_{L_2}\le\|\hat m(x)-m(x)\|_{L_2}+\|\tilde m(x)-m(x)\|_{L_2}=O\big(n^{-(2r-1)/(2(2r+1))}\big)$. Then, Equation (A1) implies that
$$Z\lesssim n^{-1/2}\sqrt{\log p}+\sqrt{2t^2}\,\|\hat m(x)-\tilde m(x)\|_{L_2}+\frac{5t^2}{3}\,\|\hat m(x)-\tilde m(x)\|_{L_2}
\le\sqrt{\log p}\,n^{-1/2}+\big(\sqrt{2t^2}+5t^2/3\big)\,n^{-(2r-1)/(2(2r+1))}
\le\big(2\sqrt{2t^2}+10t^2/3\big)\,n^{-(2r-1)/(2(2r+1))}$$
with probability at least $1-\exp(-nt^2)$ as $n\to\infty$. From the above, we have $R(\hat m)-R(\tilde m)\le R_n(\hat m)-R_n(\tilde m)+\big(2\sqrt{2t^2}+10t^2/3\big)n^{-(2r-1)/(2(2r+1))}$, so there exists $D_n$ such that $P\big(R(\hat m)-R(\tilde m)\le D_n\big)\ge P\big(R_n(\hat m)-R_n(\tilde m)+\big(2\sqrt{2t^2}+10t^2/3\big)n^{-(2r-1)/(2(2r+1))}\le D_n\big)$. Under the assumption that the regression function is bounded, it follows, for $\varphi_j\in\mathcal S_{C_2}([0,1])$ and $\phi_{jk}\in\mathcal S_{C_2}([0,1]^2)$, that
$$\sum_{j=1}^{p}\|\beta_j\|_2+\sum_{1\le k<j\le p}I(\beta_{jk}\neq 0)\big(\|\beta_j\|_2^2+\|\beta_k\|_2^2\big)^{1/2}+\sum_{1\le k<j\le p}\|\beta_{jk}\|_2\le L_n,$$
where $L_n=o\big([n/\log(n)]^{1/4}\big)$. Reference [13] also shows that the Lasso is persistent when $p$ grows polynomially in $n$. Furthermore, by the definition of $\hat m(x)$, we have $R_n(\hat m)-R_n(\tilde m)\le\lambda_1 L_n$, and hence $R(\hat m)-R(\tilde m)\le\lambda_1 L_n+\big(2\sqrt{2t^2}+10t^2/3\big)n^{-(2r-1)/(2(2r+1))}$. Let $D_n=\lambda_1 L_n+\big(2\sqrt{2t^2}+10t^2/3\big)n^{-(2r-1)/(2(2r+1))}$. Since $R_n(\hat m)-R_n(\tilde m)\le\lambda_1 L_n$ always holds, the event $R_n(\hat m)-R_n(\tilde m)+\big(2\sqrt{2t^2}+10t^2/3\big)n^{-(2r-1)/(2(2r+1))}\le D_n$ has probability one. Noting that $\lambda_1 L_n+\big(2\sqrt{2t^2}+10t^2/3\big)n^{-(2r-1)/(2(2r+1))}=O\big(\big(2\sqrt{2t^2}+10t^2/3\big)n^{-(2r-1)/(2(2r+1))}\big)$, we conclude that $R(\hat m)-R(\tilde m)\lesssim\big(2\sqrt{2t^2}+10t^2/3\big)n^{-(2r-1)/(2(2r+1))}$ with probability at least $1-\exp(-nt^2)$. The same result is proven analogously under weak heredity. □
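A short verification of the bounds behind the choices of $\tau_n$ and $\eta_n$ above; this uses only the elementary Lipschitz property of the check loss (the constants are generic, up to which the statements in the proof hold):
$$|\rho_\tau(u)-\rho_\tau(v)|\le\max(\tau,1-\tau)\,|u-v|\le|u-v|,$$
and hence, for $\gamma(z)=\rho_\tau(Y-\hat m(x))-\rho_\tau(Y-\tilde m(x))$,
$$|\gamma(z)|\le|\hat m(x)-\tilde m(x)|,\qquad E\,\gamma(z)^2\le E\big(\hat m(X)-\tilde m(X)\big)^2=\|\hat m-\tilde m\|_{L_2}^2.$$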

Appendix B. Estimated Results at τ = 0.1, 0.25, 0.75, 0.9 in Applications

Figure A1 illustrates the effects of HNR, RPDE, and DFA on the UPDRS score at $\tau=0.1$. As HNR increases, the UPDRS score decreases markedly and nonlinearly, so higher harmonics-to-noise ratios are associated with milder symptoms. RPDE exhibits a clear U-shaped curve: extreme values correspond to more severe symptoms, while intermediate values are associated with milder symptoms, again a nonlinear effect pattern. DFA is negatively associated with the UPDRS score, with scores decreasing as DFA increases along a nonlinear trend, suggesting that increased fundamental frequency variability may help to alleviate symptoms.
Figure A2 shows the interaction effect between HNR and DFA at $\tau=0.1$, which is clearly nonlinear. When HNR is low and DFA is high, the UPDRS score is higher; as HNR increases, the score decreases. However, when HNR is high and DFA is low, the score increases again. The effects of HNR and DFA on the UPDRS score are therefore complex and interdependent, underscoring the importance of modeling the interaction between these variables.
Figure A1. Estimated main effects for the HNR, RPDE, and DFA variables at τ = 0.10 .
Figure A2. Estimated interaction between HNR and DFA at τ = 0.1 .
Figure A3 illustrates the estimated main effects of three acoustic features (HNR, DFA, and PPE) at $\tau=0.25$. As HNR increases, its effect first rises slightly and then drops rapidly, indicating that the purity of the speech signal is informative but with diminishing returns beyond a certain point. DFA reveals an optimal fluctuation range, beyond which its ability to reflect health status declines. PPE likewise shows that the randomness and complexity of the speech signal have an optimal range, outside of which the effect weakens. Overall, these patterns highlight the potential of such acoustic characteristics to assist in medical diagnostics and improve accuracy.
Figure A4 presents the estimated interaction between DFA and PPE at $\tau=0.25$ as a 3D surface plot. The interaction varies substantially across combinations of the two features. When DFA is at a low level, the overall effect first increases slowly and then drops rapidly as PPE increases; when DFA is high, this trend becomes more gradual, indicating a more complex interaction pattern. In certain combinations, the interaction more accurately reflects changes in health status within speech signals, which is important for disease diagnosis and for improving speech processing technologies. Overall, the figure highlights the nonlinear interaction between DFA and PPE and its potential applications.
Figure A3. Estimated main effects for the HNR, DFA, and PPE variables at τ = 0.25 .
Figure A4. Estimated interaction between DFA and PPE at τ = 0.25 .
Figure A5 illustrates the estimated main effects of the HNR, DFA, and PPE variables at $\tau=0.75$. As the HNR value increases, its effect first rises and then decreases rapidly. DFA shows a similar trend, but with a more gradual decline, while PPE exhibits a distinct peak before gradually decreasing. These patterns underline the relevance of the features in practical applications, particularly in disease diagnosis and speech processing.
Figure A6 illustrates the estimated interaction effects between HNR and DFA, as well as between HNR and PPE at τ = 0.75 . From the figure, it can be observed that the interaction between HNR and DFA exhibits a complex nonlinear relationship, with the overall effect first increasing and then decreasing as the HNR value rises, showing a distinct fluctuation pattern. Similarly, the interaction between HNR and PPE also displays a complex pattern, but with more pronounced changes, particularly when the PPE value is high, leading to more significant variations in the overall effect.
Figure A5. Estimated main effects for the HNR and DFA variables at τ = 0.75 .
Figure A6. Estimated interaction effects between HNR and DFA, as well as the interaction effect between HNR and PPE variables at τ = 0.75 .
Figure A7 shows the effects of dB and HNR on the UPDRS score at $\tau=0.9$. The main effect of dB presents a U-shaped curve: both extremely high and extremely low dB values lead to higher UPDRS scores, with lower scores at intermediate values, emphasizing the nonlinear impact of moderate changes in sound level. HNR shows an inverted U-shaped curve, indicating that moderate HNR values are associated with the most severe symptoms, a nonlinear relationship that is particularly prominent at higher quantiles. These findings highlight the complex nonlinear effects of the covariates on the UPDRS score and provide insight into Parkinson's disease symptoms.
Figure A7. Estimated main effects for the dB and HNR variables at τ = 0.90 .

References

1. Koenker, R.; Bassett, G. Regression quantiles. Econometrica 1978, 46, 33–50.
2. Nelder, J.A. A reformulation of linear models. J. R. Stat. Soc. Ser. A (Gen.) 1977, 140, 48–63.
3. McCullagh, P.; Nelder, J.A. Generalized Linear Models, 2nd ed.; Monographs on Statistics and Applied Probability; Chapman & Hall: London, UK, 1989.
4. De Gooijer, J.G.; Zerom, D. On additive conditional quantiles with high-dimensional covariates. J. Am. Stat. Assoc. 2003, 98, 135–146.
5. Cheng, Y.; De Gooijer, J.G.; Zerom, D. Efficient estimation of an additive quantile regression model. Scand. J. Stat. 2011, 38, 46–62.
6. Lee, Y.K.; Mammen, E.; Park, B.U. Backfitting and smooth backfitting for additive quantile models. Ann. Stat. 2010, 38, 2857–2883.
7. Horowitz, J.L.; Lee, S. Nonparametric estimation of an additive quantile regression model. J. Am. Stat. Assoc. 2005, 100, 1238–1249.
8. Sherwood, B.; Maidman, A. Additive nonlinear quantile regression in ultra-high dimension. J. Mach. Learn. Res. 2022, 23, 1–47.
9. Zhao, W.; Li, R.; Lian, H. Estimation and variable selection of quantile partially linear additive models for correlated data. J. Stat. Comput. Simul. 2024, 94, 315–345.
10. Cui, X.; Zhao, W. Pursuit of dynamic structure in quantile additive models with longitudinal data. Comput. Stat. Data Anal. 2019, 130, 42–60.
11. Lin, Y.; Zhang, H.H. Component selection and smoothing in multivariate nonparametric regression. Ann. Stat. 2006, 34, 2272–2297.
12. Storlie, C.B.; Bondell, H.D.; Reich, B.J.; Zhang, H.H. Surface estimation, variable selection, and the nonparametric oracle property. Stat. Sin. 2011, 21, 679–705.
13. Radchenko, P.; James, G.M. Variable selection using adaptive nonlinear interaction structures in high dimensions. J. Am. Stat. Assoc. 2010, 105, 1541–1553.
14. Efron, B.; Hastie, T.; Johnstone, I.; Tibshirani, R. Least angle regression. Ann. Stat. 2004, 32, 407–499.
15. Wu, T.T.; Chen, Y.F.; Hastie, T.; Sobel, E.; Lange, K. Genome-wide association analysis by lasso penalized logistic regression. Bioinformatics 2009, 25, 714–721.
16. Zhao, P.; Rocha, G.; Yu, B. The composite absolute penalties family for grouped and hierarchical variable selection. Ann. Stat. 2009, 37, 3468–3497.
17. Yuan, M.; Joseph, V.R.; Zou, H. Structured variable selection and estimation. Ann. Appl. Stat. 2009, 3, 1738–1757.
18. Choi, N.H.; Li, W.; Zhu, J. Variable selection with the strong heredity constraint and its oracle property. J. Am. Stat. Assoc. 2010, 105, 354–364.
19. Bien, J.; Taylor, J.; Tibshirani, R. A lasso for hierarchical interactions. Ann. Stat. 2013, 41, 1111–1141.
20. She, Y.; Wang, Z.F.; Jiang, H. Group regularized estimation under structural hierarchy. J. Am. Stat. Assoc. 2018, 113, 445–454.
21. Hao, N.; Feng, Y.; Zhang, H.H. Model selection for high dimensional quadratic regression via regularization. J. Am. Stat. Assoc. 2018, 113, 615–625.
22. Liu, C.; Ma, J.; Amos, C.I. Bayesian variable selection for hierarchical gene–environment and gene–gene interactions. Hum. Genet. 2015, 134, 23–36.
23. Yang, Y.; Basu, S.; Zhang, L. A Bayesian hierarchical variable selection prior for pathway-based GWAS using summary statistics. Stat. Med. 2020, 39, 724–739.
24. Kim, J.E.A. Bayesian variable selection with strong heredity constraints. J. Korean Stat. Soc. 2018, 47, 314–329.
25. Li, Y.; Wang, N.; Carroll, R.J. Generalized functional linear models with semiparametric single-index interactions. J. Am. Stat. Assoc. 2010, 105, 621–633.
26. Li, Y.; Liu, J.S. Robust variable and interaction selection for logistic regression and multiple index models. J. Am. Stat. Assoc. 2019, 114, 271–286.
27. Liu, Y.; Li, Y.; Carroll, R.J. Predictive functional linear models with diverging number of semiparametric single-index interactions. J. Econom. 2021, 230, 221–239.
28. Liu, H.; You, J.; Cao, J. A dynamic interaction semiparametric function-on-scalar model. J. Am. Stat. Assoc. 2021, 118, 360–373.
29. Yu, K.; Lu, Z. Local linear additive quantile regression. Scand. J. Stat. 2004, 31, 333–346.
30. Noh, H.; Lee, E.R. Component selection in additive quantile regression models. J. Korean Stat. Soc. 2014, 43, 439–452.
31. Wood, S.N. P-splines with derivative based penalties and tensor product smoothing of unevenly distributed data. Stat. Comput. 2017, 27, 985–989.
32. Eilers, P.H.C.; Marx, B.D. Flexible smoothing with B-splines and penalties. Stat. Sci. 1996, 11, 89–102.
33. Meier, L.; van de Geer, S.; Bühlmann, P. High-dimensional additive modeling. Ann. Stat. 2009, 37, 3779–3821.
34. Fan, Y.; Tang, C.Y. Tuning parameter selection in high dimensional penalized likelihood. J. R. Stat. Soc. Ser. B 2013, 75, 531–552.
35. Greenshtein, E.; Ritov, Y. Persistence in high-dimensional linear predictor selection and the virtue of overparametrization. Bernoulli 2004, 10, 971–988.
36. Jiang, L.; Bondell, H.; Wang, H. Interquantile shrinkage and variable selection in quantile regression. Comput. Stat. Data Anal. 2014, 69, 208–219.
37. Zou, H.; Yuan, M. Composite quantile regression and the oracle model selection theory. Ann. Stat. 2008, 36, 1108–1126.
38. Kohns, D.; Szendrei, T. Horseshoe prior Bayesian quantile regression. J. R. Stat. Soc. Ser. C Appl. Stat. 2024, 73, 193–220.
39. Tsanas, A.; Little, M.A.; McSharry, P.E.; Ramig, L.O. Accurate telemonitoring of Parkinson's disease progression by noninvasive speech tests. IEEE Trans. Biomed. Eng. 2010, 57, 884–893.
40. He, X.M.; Shi, P. Convergence rate of B-spline estimators of nonparametric conditional quantile functions. J. Nonparametr. Stat. 1994, 3, 299–308.
41. Bousquet, O. A Bennett concentration inequality and its application to suprema of empirical processes. Comptes Rendus Math. 2002, 334, 495–500.
42. van der Vaart, A.W. Asymptotic Statistics; Cambridge University Press: Cambridge, UK, 1998.
43. Birman, M.Š.; Solomjak, M.Z. Piecewise-polynomial approximations of functions of the classes W_p^α. Math. USSR-Sb. 1967, 2, 295–317.
Figure 1. (Left) Histogram and density curve of UPDRS; (right) Correlation between voice measurements.
Figure 2. Estimated main effects for the APQ11, HNR, RPDE, and DFA variables at τ = 0.5 .
Figure 3. Estimated interaction term for the HNR and RPDE variables at τ = 0.5 .
Table 1. Selection and estimation results for strong heredity with n = 100.

Method | P_m | x1 | x2 | x3 | x4 | x5 | mTPR | mFDR | P_i | x1*x2 | x1*x3 | iTPR | iFDR | Size | RMSE
N(0,1)
hirNet | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.045 | 1.000 | 1.000 | 1.000 | 1.000 | 0.710 | 12.140 | 6.056
RAMP | 0.920 | 1.000 | 1.000 | 1.000 | 1.000 | 0.920 | 0.984 | 0.046 | 1.000 | 1.000 | 1.000 | 1.000 | 0.382 | 8.400 | 7.200
proposed (τ = 0.1) | 1.000 | 1.000 | 1.000 | 1.000 | - | - | 1.000 | 0.038 | - | - | - | - | - | 3.420 | 1.030
proposed (τ = 0.25) | 0.400 | 1.000 | 1.000 | 1.000 | 0.920 | 0.400 | 0.864 | 0.068 | 0.340 | 0.920 | 0.920 | 0.350 | 0.426 | 7.580 | 4.251
proposed (τ = 0.50) | 0.700 | 1.000 | 1.000 | 1.000 | 0.980 | 0.700 | 0.936 | 0.052 | 0.940 | 0.980 | 0.940 | 0.960 | 0.200 | 7.360 | 2.919
proposed (τ = 0.75) | 0.640 | 1.000 | 1.000 | 1.000 | 1.000 | 0.640 | 0.928 | 0.021 | 0.980 | 0.980 | 1.000 | 0.990 | 0.232 | 7.320 | 3.216
proposed (τ = 0.9) | 1.000 | 1.000 | - | 1.000 | - | 1.000 | 1.000 | 0.044 | - | - | - | - | - | 3.920 | 1.042
t(3)
hirNet | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.110 | 1.000 | 1.000 | 1.000 | 1.000 | 0.727 | 12.960 | 7.392
RAMP | 0.900 | 1.000 | 1.000 | 1.000 | 1.000 | 0.900 | 0.980 | 0.050 | 1.000 | 1.000 | 1.000 | 1.000 | 0.363 | 8.300 | 8.978
proposed (τ = 0.1) | 1.000 | 1.000 | 1.000 | 1.000 | - | - | 1.000 | 0.056 | - | - | - | - | - | 3.480 | 2.994
proposed (τ = 0.25) | 0.380 | 1.000 | 1.000 | 1.000 | 0.940 | 0.380 | 0.864 | 0.056 | 0.520 | 0.880 | 0.880 | 0.550 | 0.388 | 7.440 | 7.718
proposed (τ = 0.50) | 0.620 | 1.000 | 1.000 | 1.000 | 1.000 | 0.620 | 0.924 | 0.057 | 0.940 | 0.960 | 0.960 | 0.960 | 0.213 | 7.360 | 6.033
proposed (τ = 0.75) | 0.660 | 1.000 | 1.000 | 1.000 | 1.000 | 0.660 | 0.932 | 0.025 | 1.000 | 1.000 | 1.000 | 1.000 | 0.253 | 7.460 | 6.146
proposed (τ = 0.9) | 0.960 | 0.960 | - | 1.000 | - | 1.000 | 0.986 | 0.019 | - | - | - | - | - | 3.480 | 2.827
χ²(2)
hirNet | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.128 | 1.000 | 1.000 | 1.000 | 1.000 | 0.735 | 13.300 | 8.663
RAMP | 0.860 | 1.000 | 1.000 | 1.000 | 1.000 | 0.860 | 0.972 | 0.061 | 1.000 | 1.000 | 1.000 | 1.000 | 0.425 | 8.680 | 10.217
proposed (τ = 0.1) | 0.980 | 0.980 | 1.000 | 1.000 | - | - | 0.993 | 0.032 | - | - | - | - | - | 3.160 | 3.931
proposed (τ = 0.25) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 0.100 | 0.820 | 0.820 | 0.100 | 0.411 | 7.180 | 7.638
proposed (τ = 0.50) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 0.940 | 1.000 | 0.940 | 0.000 | 0.210 | 7.260 | 8.583
proposed (τ = 0.75) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 0.180 | 0.307 | 7.640 | 8.942
proposed (τ = 0.9) | 0.860 | 0.860 | - | 1.000 | - | 1.000 | 0.953 | 0.046 | - | - | - | - | - | 3.900 | 6.049
σ(x_i)e_i
hirNet | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.031 | 1.000 | 1.000 | 1.000 | 1.000 | 0.717 | 12.240 | 6.199
RAMP | 0.900 | 1.000 | 1.000 | 1.000 | 1.000 | 0.900 | 0.980 | 0.057 | 1.000 | 1.000 | 1.000 | 1.000 | 0.411 | 8.600 | 7.441
proposed (τ = 0.1) | 1.000 | 1.000 | 1.000 | 1.000 | - | - | 1.000 | 0.026 | - | - | - | - | - | 3.380 | 1.306
proposed (τ = 0.25) | 0.404 | 1.000 | 1.000 | 1.000 | 0.957 | 0.404 | 0.872 | 0.072 | 0.914 | 0.936 | 0.978 | 0.957 | 0.403 | 7.914 | 2.893
proposed (τ = 0.50) | 0.640 | 1.000 | 1.000 | 1.000 | 0.980 | 0.640 | 0.924 | 0.057 | 0.940 | 0.980 | 0.940 | 0.960 | 0.219 | 7.380 | 2.355
proposed (τ = 0.75) | 0.660 | 1.000 | 1.000 | 1.000 | 1.000 | 0.660 | 0.932 | 0.029 | 0.980 | 0.980 | 1.000 | 0.990 | 0.232 | 7.380 | 2.706
proposed (τ = 0.9) | 1.000 | 1.000 | - | 1.000 | - | 1.000 | 1.000 | 0.063 | - | - | - | - | - | 4.080 | 1.344
Table 2. Selection and estimation results for strong heredity with n = 300.

Method | P_m | x1 | x2 | x3 | x4 | x5 | mTPR | mFDR | P_i | x1*x2 | x1*x3 | iTPR | iFDR | Size | RMSE
N(0,1)
hirNet | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.695 | 11.560 | 6.158
RAMP | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.629 | 10.840 | 2.058
proposed (τ = 0.1) | 1.000 | 1.000 | 1.000 | 1.000 | - | - | 1.000 | 0.000 | - | - | - | - | - | 3.000 | 0.978
proposed (τ = 0.25) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.091 | 7.200 | 2.520
proposed (τ = 0.50) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 7.000 | 2.472
proposed (τ = 0.75) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.038 | 7.080 | 1.120
proposed (τ = 0.9) | 1.000 | 1.000 | - | 1.000 | - | 1.000 | 1.000 | 0.000 | - | - | - | - | - | 3.040 | 0.958
t(3)
hirNet | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.696 | 11.580 | 8.741
RAMP | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.635 | 11.320 | 4.641
proposed (τ = 0.1) | 1.000 | 1.000 | 1.000 | 1.000 | - | - | 1.000 | 0.000 | - | - | - | - | - | 3.020 | 3.765
proposed (τ = 0.25) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.074 | 7.160 | 4.684
proposed (τ = 0.50) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 7.000 | 4.664
proposed (τ = 0.75) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.029 | 7.060 | 4.786
proposed (τ = 0.9) | 1.000 | 1.000 | - | 1.000 | - | 1.000 | 1.000 | 0.000 | - | - | - | - | - | 3.060 | 3.658
χ²(2)
hirNet | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.698 | 11.660 | 9.979
RAMP | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.375 | 1.000 | 1.000 | 1.000 | 1.000 | 0.646 | 11.660 | 5.843
proposed (τ = 0.1) | 1.000 | 1.000 | 1.000 | 1.000 | - | - | 1.000 | 0.000 | - | - | - | - | - | 3.000 | 4.913
proposed (τ = 0.25) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.056 | 7.120 | 5.702
proposed (τ = 0.50) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.029 | 7.060 | 5.777
proposed (τ = 0.75) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.056 | 7.120 | 5.787
proposed (τ = 0.9) | 0.940 | 0.940 | - | 1.000 | - | 1.000 | 0.980 | 0.000 | - | - | - | - | - | 3.100 | 6.053
σ(x_i)e_i
hirNet | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.696 | 11.660 | 6.282
RAMP | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.100 | 1.000 | 1.000 | 1.000 | 1.000 | 0.625 | 10.900 | 2.181
proposed (τ = 0.1) | 1.000 | 1.000 | 1.000 | 1.000 | - | - | 1.000 | 0.000 | - | - | - | - | - | 3.040 | 1.346
proposed (τ = 0.25) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.090 | 7.200 | 2.777
proposed (τ = 0.50) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.010 | 7.020 | 2.693
proposed (τ = 0.75) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.010 | 7.020 | 2.782
proposed (τ = 0.9) | 1.000 | 1.000 | - | 1.000 | - | 1.000 | 1.000 | 0.000 | - | - | - | - | - | 3.080 | 1.357
Table 3. Selection and estimation results for weak heredity with n = 100.

Method | P_m | x1 | x2 | x3 | mTPR | mFDR | P_i | x1*x4 | x1*x5 | iTPR | iFDR | Size | RMSE
N(0,1)
hirNet | 0.980 | 1.000 | 1.000 | 1.000 | 1.000 | 0.446 | 1.000 | 1.000 | 1.000 | 1.000 | 0.781 | 14.583 | 1.954
RAMP | 0.060 | 1.000 | 0.260 | 0.240 | 0.500 | 0.050 | 1.000 | 1.000 | 1.000 | 1.000 | 0.619 | 7.140 | 67.840
proposed (τ = 0.1) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.047 | - | - | - | - | - | 3.500 | 1.067
proposed (τ = 0.25) | 0.340 | 1.000 | 0.580 | 0.480 | 0.686 | 0.507 | 0.720 | 1.000 | 0.720 | 0.860 | 0.505 | 7.860 | 48.135
proposed (τ = 0.50) | 0.320 | 1.000 | 0.520 | 0.500 | 0.673 | 0.504 | 0.740 | 1.000 | 0.740 | 0.870 | 0.462 | 7.480 | 42.688
proposed (τ = 0.75) | 0.340 | 1.000 | 0.520 | 0.520 | 0.680 | 0.507 | 0.780 | 1.000 | 0.780 | 0.890 | 0.476 | 7.780 | 41.641
proposed (τ = 0.9) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.062 | - | - | - | - | - | 4.000 | 1.058
t(3)
hirNet | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.450 | 1.000 | 1.000 | 1.000 | 1.000 | 0.788 | 14.920 | 32.742
RAMP | 0.060 | 1.000 | 0.280 | 0.240 | 0.506 | 0.037 | 1.000 | 1.000 | 1.000 | 1.000 | 0.619 | 7.140 | 69.608
proposed (τ = 0.1) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.006 | - | - | - | - | - | 7.060 | 1.710
proposed (τ = 0.25) | 0.340 | 1.000 | 0.500 | 0.520 | 0.673 | 0.502 | 0.760 | 1.000 | 0.760 | 0.880 | 0.443 | 7.380 | 46.631
proposed (τ = 0.50) | 0.360 | 1.000 | 0.540 | 0.540 | 0.693 | 0.497 | 0.720 | 1.000 | 0.720 | 0.860 | 0.459 | 7.440 | 32.514
proposed (τ = 0.75) | 0.360 | 1.000 | 0.560 | 0.540 | 0.700 | 0.492 | 0.720 | 1.000 | 0.720 | 0.860 | 0.679 | 7.680 | 52.425
proposed (τ = 0.9) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.010 | - | - | - | - | - | 7.000 | 1.765
χ²(2)
hirNet | 0.980 | 1.000 | 1.000 | 1.000 | 1.000 | 0.468 | 1.000 | 1.000 | 1.000 | 1.000 | 0.794 | 15.360 | 33.515
RAMP | 0.040 | 1.000 | 0.240 | 0.220 | 0.486 | 0.087 | 1.000 | 1.000 | 1.000 | 1.000 | 0.628 | 7.300 | 69.560
proposed (τ = 0.1) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.051 | - | - | - | - | - | 6.940 | 2.821
proposed (τ = 0.25) | 0.280 | 1.000 | 0.500 | 0.400 | 0.633 | 0.515 | 0.620 | 0.980 | 0.620 | 0.810 | 0.683 | 7.280 | 41.928
proposed (τ = 0.50) | 0.340 | 1.000 | 0.540 | 0.460 | 0.666 | 0.504 | 0.640 | 0.980 | 0.640 | 0.810 | 0.480 | 7.380 | 42.535
proposed (τ = 0.75) | 0.360 | 1.000 | 0.580 | 0.500 | 0.693 | 0.500 | 0.680 | 0.980 | 0.680 | 0.830 | 0.522 | 7.920 | 58.314
proposed (τ = 0.9) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.011 | - | - | - | - | - | 6.166 | 2.929
σ(x_i)e_i
hirNet | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.444 | 1.000 | 1.000 | 1.000 | 1.000 | 0.784 | 14.700 | 32.115
RAMP | 0.000 | 0.300 | 0.380 | 0.120 | 0.266 | 0.259 | 1.000 | 1.000 | 0.300 | 0.650 | 0.821 | 8.360 | 13.755
proposed (τ = 0.1) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.124 | - | - | - | - | - | 7.775 | 0.681
proposed (τ = 0.25) | 0.333 | 1.000 | 0.562 | 0.500 | 0.687 | 0.505 | 0.750 | 1.000 | 0.750 | 0.875 | 0.478 | 7.687 | 24.665
proposed (τ = 0.50) | 0.354 | 1.000 | 0.604 | 0.479 | 0.694 | 0.502 | 0.750 | 1.000 | 0.750 | 0.875 | 0.481 | 7.750 | 26.570
proposed (τ = 0.75) | 0.354 | 1.000 | 0.625 | 0.520 | 0.715 | 0.500 | 0.791 | 1.000 | 0.791 | 0.895 | 0.500 | 8.083 | 29.956
proposed (τ = 0.9) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.013 | - | - | - | - | - | 6.560 | 0.760
Table 4. Selection and estimation results for weak heredity with n = 300.

Method | P_m | x1 | x2 | x3 | mTPR | mFDR | P_i | x1*x4 | x1*x5 | iTPR | iFDR | Size | RMSE
N(0,1)
hirNet | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.401 | 0.160 | 0.900 | 0.160 | 0.530 | 0.047 | 19.600 | 32.574
RAMP | 1.000 | 1.000 | 0.700 | 0.720 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.695 | 9.360 | 68.139
proposed (τ = 0.1) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | - | - | - | - | - | 3.620 | 0.908
proposed (τ = 0.25) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.400 | 1.000 | 1.000 | 1.000 | 1.000 | 0.107 | 7.280 | 3.944
proposed (τ = 0.50) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.402 | 1.000 | 1.000 | 1.000 | 1.000 | 0.065 | 7.160 | 3.288
proposed (τ = 0.75) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.400 | 1.000 | 1.000 | 1.000 | 1.000 | 0.137 | 7.360 | 3.912
proposed (τ = 0.9) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | - | - | - | - | - | 3.033 | 0.921
t(3)
hirNet | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.400 | 1.000 | 1.000 | 1.000 | 1.000 | 0.676 | 11.200 | 32.903
RAMP | 1.000 | 1.000 | 0.700 | 0.680 | 1.000 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.696 | 9.360 | 70.609
proposed (τ = 0.1) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | - | - | - | - | - | 3.320 | 3.572
proposed (τ = 0.25) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.400 | 1.000 | 1.000 | 1.000 | 1.000 | 0.074 | 7.180 | 5.371
proposed (τ = 0.50) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.400 | 1.000 | 1.000 | 1.000 | 1.000 | 0.056 | 7.160 | 5.568
proposed (τ = 0.75) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.400 | 0.980 | 1.000 | 0.980 | 0.990 | 0.091 | 7.200 | 5.145
proposed (τ = 0.9) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | - | - | - | - | - | 3.200 | 3.560
χ²(2)
hirNet | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.400 | 1.000 | 1.000 | 1.000 | 1.000 | 0.679 | 11.260 | 34.494
RAMP | 0.000 | 1.000 | 0.760 | 0.600 | 0.666 | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.682 | 9.060 | 72.785
proposed (τ = 0.1) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.011 | - | - | - | - | - | 4.766 | 4.723
proposed (τ = 0.25) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.400 | 0.900 | 1.000 | 0.900 | 0.950 | 0.251 | 7.620 | 5.462
proposed (τ = 0.50) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.402 | 0.940 | 1.000 | 0.940 | 0.970 | 0.163 | 7.400 | 5.814
proposed (τ = 0.75) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.400 | 0.920 | 1.000 | 0.920 | 0.960 | 0.150 | 7.340 | 6.005
proposed (τ = 0.9) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | - | - | - | - | - | 3.566 | 4.820
σ(x_i)e_i
hirNet | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.402 | 1.000 | 1.000 | 1.000 | 1.000 | 0.673 | 11.140 | 30.643
RAMP | 0.840 | 1.000 | 0.920 | 0.900 | 0.940 | 0.241 | 1.000 | 1.000 | 0.860 | 0.930 | 0.845 | 15.860 | 7.298
proposed (τ = 0.1) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | - | - | - | - | - | 3.440 | 1.055
proposed (τ = 0.25) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.400 | 1.000 | 1.000 | 1.000 | 1.000 | 0.010 | 7.020 | 3.950
proposed (τ = 0.50) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.400 | 1.000 | 1.000 | 1.000 | 1.000 | 0.029 | 7.060 | 3.948
proposed (τ = 0.75) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.400 | 1.000 | 1.000 | 1.000 | 1.000 | 0.029 | 7.060 | 3.948
proposed (τ = 0.9) | 1.000 | 1.000 | 1.000 | 1.000 | 1.000 | 0.000 | - | - | - | - | - | 3.166 | 1.052
Table 5. The covariates selected by the proposed methods.

Method | Covariates
RAMP | APQ11, HNR, RPDE, DFA, PPE, HNR*RPDE, HNR*DFA, DFA*PPE
hirNet | PPQ5, APQ11, NHR, HNR, RPDE, DFA, PPE, APQ11*PPE, APQ11*DFA, HNR*RPDE, RPDE*PPE, DFA*PPE
proposed (τ = 0.1) | HNR, RPDE, DFA, HNR*DFA
proposed (τ = 0.25) | HNR, DFA, PPE, DFA*PPE
proposed (τ = 0.5) | APQ11, HNR, RPDE, DFA, HNR*RPDE
proposed (τ = 0.75) | HNR, DFA, PPE, HNR*DFA, HNR*PPE
proposed (τ = 0.9) | dB, HNR
The symbol * represents the interaction between the two main effects.
Table 6. The average RMSE over the 100 data sets.

Method | Size | RMSE
RAMP | 9.5 | 0.878
hirNet | 18.39 | 0.887
proposed method (τ = 0.1) | 3.0 | 0.891
proposed method (τ = 0.25) | 4.7 | 0.873
proposed method (τ = 0.5) | 4.9 | 1.00
proposed method (τ = 0.75) | 5.0 | 0.858
proposed method (τ = 0.9) | 2.3 | 0.971