Depth Induced Regression Medians and Uniqueness

Abstract: The notion of the median in one dimension is a foundational element of nonparametric statistics. It has been extended to multi-dimensional cases, both in location and in regression, via notions of data depth. Regression depth (RD) and projection regression depth (PRD) represent the two most promising notions in regression. Carrizosa depth D_C is another depth notion in regression. Depth-induced regression medians (maximum depth estimators) serve as robust alternatives to the classical least squares estimator. The uniqueness of regression medians is indispensable in the discussion of their properties and of the asymptotics (consistency and limiting distribution) of sample regression medians. Are the regression medians induced from RD, PRD, and D_C unique? Answering this question is the main goal of this article. It is found that only the regression median induced from PRD possesses the desired uniqueness property. The conventional remedy measure for non-uniqueness, taking the average of all medians, might yield an estimator that no longer possesses the maximum depth in both the RD and D_C cases. These and other findings indicate that PRD and its induced median are highly favorable among their leading competitors.


Introduction
The regular univariate sample median, defined as the innermost (deepest) point of a data set, is unique. (If the sample median is defined to be the point θ that minimizes the sum of its distances to sample points (i.e., θ = arg min_{θ∈R^1} ∑_{i=1}^n |θ − x_i|, where x_i, i = 1, · · · , n, are the given n sample points in R^1), then it is not unique. However, to overcome this drawback, conventionally it is defined as

Med{x_i} = (x_(⌊(n+1)/2⌋) + x_(⌊(n+2)/2⌋))/2,

where x_(1) ≤ x_(2) ≤ · · · ≤ x_(n) are the ordered values of the x_i's and ⌊·⌋ is the floor function. Namely, it is the innermost point (from both the left and the right direction) or the average of the two deepest sample points. Hence, it is unique.) The population median, defined as the 1/2-th quantile of the underlying distribution (recall that, for any univariate distribution function F and for 0 < p < 1, the quantity F^{−1}(p) := inf{x : F(x) ≥ p} is called the pth quantile or fractile of F; see page 3 of Serfling (1980) [1]; there are other versions of the definition), is also unique. The most outstanding feature of the univariate median is its robustness. In fact, among all translation equivariant location estimators, it has the best possible breakdown point (Donoho (1982) [2]) (and the minimum maximum bias if the underlying distribution has a unimodal symmetric density (Huber (1964) [3])). Besides serving as a promising robust location estimator, the univariate median also provides a base for a center-outward ordering (in terms of deviations from the median), an alternative to the traditional left-to-right ordering.
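The contrast between the two definitions can be checked numerically. The minimal sketch below (illustrative names, even sample size n = 4) shows that the arg-min definition admits a whole interval of minimizers, while the order-statistic definition picks a single value:

```python
# Even-sized sample: the arg-min median is an interval, the conventional one a point.
xs = [1.0, 2.0, 4.0, 7.0]

def sum_abs_dev(theta, data):
    """Objective of the arg-min definition: sum_i |theta - x_i|."""
    return sum(abs(theta - x) for x in data)

# Every theta in [x_(2), x_(3)] = [2, 4] minimizes the objective (all give 8.0),
# so the arg-min "median" is not unique.
print(sum_abs_dev(2.0, xs), sum_abs_dev(3.0, xs), sum_abs_dev(4.0, xs))

# Conventional definition: average of the two middle order statistics -- unique.
srt = sorted(xs)
n = len(srt)
med = (srt[(n + 1) // 2 - 1] + srt[(n + 2) // 2 - 1]) / 2  # 1-based floor indices -> 0-based
print(med)  # 3.0
```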
Depth notions in location have also been extended to regression. Regression depth (RD) of Rousseeuw and Hubert (1999) (RH99) [10], the most famous, exemplifies a direct extension of Tukey location depth to regression. Projection regression depth (PRD) of Zuo (2018a) (Z18a) [11] is another example, extending the prominent projection depth (PD) in location to regression. RD and PRD represent the two leading notions of depth in regression ([11]) which satisfy desirable axiomatic properties. Carrizosa depth D_C (Carrizosa (1996) (C96)) [12] (defined in Section 2.2) is one of the other notions of depth in regression ([11]). One of the outstanding advantages of depth notions is that they can be directly employed to introduce median-type deepest estimating functionals (or estimators in the empirical case) for the location or regression parameters in a multi-dimensional setting, based on a general min-max stratagem. The maximum (deepest) regression depth estimator (also called the regression median) serves as a robust alternative to the classical least squares or least absolute deviations estimator for the unknown parameters in a general linear regression model:

y_i = x_i′β + e_i, i = 1, · · · , n, (1)

where ′ denotes the transpose of a vector, x_i = (1, x_i1, · · · , x_i(p−1))′ is the carrier vector, β = (β_1, · · · , β_p)′ ∈ R^p is the parameter vector, and e_i is a random variable in R. One can regard the observations (y_i, x_i′)′ as a sample from the random vector (y, x′)′ ∈ R^{p+1}. Robustness of the medians induced from RD and PRD has been investigated in Van Aelst and Rousseeuw (2000) (VAR00) [13] and Zuo (2018b) [14], respectively. These medians, just like their location or univariate counterparts, indeed possess high breakdown point robustness.
The regression median, as the deepest regression hyperplane, just like its location or univariate counterpart, is expected to be unique, because non-uniqueness would result in vagueness in the inference (prediction and estimation) via the regression median. Uniqueness is an indispensable feature and axiomatic property when one (i) investigates the population median, or (ii) deals with the convergence in probability or in distribution of the sample regression median to its necessarily unique population version; (iii) it is also an essential property in the computation of sample regression medians, for the convergence of approximate algorithms. The uniqueness issue of multidimensional location medians has been addressed in Zuo (2013) [15].
Are the medians induced from regression depth notions via the min-max scheme generally unique? Answering this question is the goal of this article. It turns out that the regression depth-induced medians are not necessarily unique. The conventional remedy measure for this issue is to take the average of all medians. It, however, might not work (in the sense that the resulting estimator might no longer possess the maximum depth) for both RD of RH99 and D_C of C96. On the other hand, PRD-induced regression medians are unique.
The rest of the article is organized as follows. Section 2 introduces the leading regression depth notions and their induced medians and shows that these medians indeed recover the regular univariate sample median in the special univariate case. Empirical examples of regression depths and medians and their behavior are illustrated in Section 3. Section 4 establishes general results on the uniqueness of regression medians. Brief concluding remarks in Section 5 end the article.

Maximum Depth Functionals (Regression Medians)
Let D(β; P) be a generic non-negative functional on R^p × P, where β ∈ R^p and P is a collection of distributions F_Z of Z = (y, x′)′ ∈ R^{p+1} (F_Z and P are used interchangeably).
If D(β; P) satisfies four axiomatic properties: (P1) (regression, scale and affine) invariance; (P2) maximality at center; (P3) monotonicity relative to any deepest point; and (P4) vanishing at infinity, then it is called a regression depth functional (see [11] for details). The maximum regression depth functional, or the regression median, can be defined as

β*(F_Z) = arg max_{β∈R^p} D(β; F_Z).

Note that β* might not be unique, and a conventional remedy measure is to take the average of all maximum depth points. Unfortunately, this could lead to a scenario where the resulting functional (or estimator) no longer has the maximum depth. For detailed discussions of D(β; F_Z) and β*(F_Z), see [11]. In the following, we elaborate on three examples.
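In the empirical case, the min-max scheme amounts to a plain arg-max over a candidate set. The toy sketch below (the depth values and candidates are made up for illustration; this is not any of the depth notions discussed here) shows how ties produce non-uniqueness and how the conventional averaging remedy can leave the set of maximizers entirely:

```python
def deepest(candidates, depth):
    """All candidates attaining the maximum depth; more than one = non-unique median."""
    best = max(depth(b) for b in candidates)
    return [b for b in candidates if depth(b) == best]

# Toy depth function with two maximizers.
toy_depth = {0.0: 0.2, 1.0: 0.5, 2.0: 0.5, 3.0: 0.1}.get
meds = deepest([0.0, 1.0, 2.0, 3.0], toy_depth)
avg = sum(meds) / len(meds)   # conventional remedy: average all medians
print(meds, avg)              # [1.0, 2.0] 1.5
```

Whether such an averaged point retains the maximum depth is exactly what Section 4 examines for RD_RH, D_C, and PRD.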

Median Induced from Regression Depth of RH99
Definition 1. For any β ∈ R^p and the joint distribution P of (y, x′)′ in (1), [10] defined the regression depth of β, denoted henceforth by RD_RH(β; P), to be the minimum probability mass that needs to be passed when tilting (the hyperplane induced from) β in any way until it is vertical. The maximum regression depth functional β*_RD_RH (the regression median) is defined as

β*_RD_RH(P) = arg max_{β∈R^p} RD_RH(β; P).

The definition of RD_RH(β; P) above is rather abstract and not easy to comprehend; many characterizations of it, or equivalent definitions, have been given in the literature, though; see, e.g., [11] and the references cited therein.

Median Induced from Carrizosa Depth of C96
Among the regression depth notions investigated in [11], Carrizosa depth D_C(β; P), for any β ∈ R^p and underlying probability measure P associated with (y, x′)′, was a pioneering regression depth notion, introduced in [12] and thoroughly investigated in [11]:

D_C(β; P) = inf_{γ∈R^p} P(|r(β)| ≤ |r(γ)|), (4)

where r(γ) = y − x′γ. As characterized in [11] (see Proposition 2.2 there), it turns out that (p ≥ 2)

D_C(β; P) = P(y = x′β) = P(H_β), (5)

where H_β is the hyperplane determined by y = x′β. The maximum regression depth functional (or regression median) was then defined as

β*_D_C(P) = arg max_{β∈R^p} D_C(β; P).

As shown in [11], β*_D_C always exists if the assumption (A): P(H_v) = 0 for any vertical hyperplane H_v, holds. Unfortunately, since D_C generally violates (P3) (see [11]), we will not focus on it in the sequel. On the other hand, under (A), RD_RH above satisfies (P1)–(P4).

Median Induced from Projection Regression Depth of Z18a
Hereafter, assume that R is a univariate regression estimating functional which satisfies

(A1) regression, scale and affine equivariance; that is, respectively,

R(F_(y+bx, x)) = R(F_(y, x)) + b, ∀ b ∈ R^1;
R(F_(sy, x)) = s R(F_(y, x)), ∀ s ∈ R^1;
R(F_(y, ax)) = R(F_(y, x))/a, ∀ a ∈ R^1, a ≠ 0,

where x, y ∈ R^1 are random variables. Throughout, the lower case x stands for a variable in R^1, while the bold x stands for a vector in R^p (p > 1).
(A2) T(F_{(y−x′β)/(x′v)}) (with T given below) is continuous in β and v, and quasi-convex in β, for β ∈ R^p, v ∈ S^{p−1}.
Let S be a positive scale estimating functional that is scale equivariant and location invariant. R will be restricted to the form R(F_{(y−x′β, x′v)}) = T(F_{(y−x′β)/(x′v)}), where T is a univariate location functional that is location, scale and affine equivariant (see pages 158–159 of Rousseeuw and Leroy (1987) (RL87) [16] for definitions). Hereafter we assume that (A0): P(x′v = 0) = 0 for any v ∈ S^{p−1} (see (I) of Remarks 4.1 for explanations).
Examples of T include, among others, the mean, weighted mean, and quantile functionals. Examples of S include the standard deviation functional ([17]) and the median of absolute deviations (MAD) functional.
Equipped with a pair of T and S, we can introduce a corresponding projection-based regression estimating functional. By modifying a functional in Maronna and Yohai (1993) [18] to achieve scale equivariance, [11] defined

UF_v(β; F_(y, x); T) = |T(F_{(y−x′β)/(x′v)})| / S(F_y),

which represents the unfitness of β at F_(y, x) w.r.t. T along the direction v ∈ S^{p−1}. If R is a Fisher consistent regression estimating functional, then T(F_{(y−x′β_0)/(x′v)}) = 0 for some β_0 (the true parameter of the model) and ∀ v ∈ S^{p−1}. Thus, overall, one expects |T| to be small and close to zero for a candidate β, independent of the choice of v and x′v. The magnitude of |T| measures the unfitness of β along v. Taking the supremum over all v ∈ S^{p−1} yields the overall unfitness of β at F_(y, x) w.r.t. T:

UF(β; F_(y, x); T) = sup_{v∈S^{p−1}} UF_v(β; F_(y, x); T).

Now, applying the min-max scheme, [11] obtained the projection regression estimating functional (also denoted by β*_PRD) w.r.t. the pair (T, S),

β*_PRD(F_(y, x)) = arg min_{β∈R^p} UF(β; F_(y, x); T) = arg max_{β∈R^p} PRD(β; F_(y, x)),

where the projection regression depth (PRD) functional was defined in [19] as

PRD(β; F_(y, x)) = (1 + UF(β; F_(y, x); T))^{−1}.

Just like S (which serves to achieve scale invariance and is nominal), T is sometimes also suppressed in the above functionals for simplicity. The authors of [11] showed that PRD satisfies (P1)–(P4).
For robustness considerations, in the sequel (T, S) is the fixed pair (Med, MAD), unless otherwise stated. Hereafter, we write Med(Z) rather than Med(F_Z). For this special choice of T and S, we have that

PRD(β; F_(y, x)) = (1 + sup_{v∈S^{p−1}} |Med((y − x′β)/(x′v))| / MAD(F_y))^{−1}.

To end this section, we show that the three maximum depth estimators above indeed deserve to be called regression medians, since they recover the regular univariate sample median in the special univariate case. (The result below also holds true for the population case.)

Proposition 1.
For univariate data, β*_RD_RH, β*_D_C and β*_PRD all recover the univariate sample median.
Proof. (i) For β*_RD_RH, this has already been discussed and claimed in [10] (page 390). So we only need to focus on the other two.
(ii) For β*_D_C, we can no longer use (5) and have to invoke (4). Note that r(β) = y − β in this case (there is no slope term any more). For γ > β, the event {|r(β)| ≤ |r(γ)|} is {y ≤ (β + γ)/2}, while for γ < β it is {y ≥ (β + γ)/2}; hence, by (4), it is readily seen that D_C(β; P) = min{P(y ≤ β), P(y ≥ β)}, which is maximized at the (sample) median of y.
(iii) For β*_PRD, first we note that (without loss of generality, assume that S(F_y) = 1)

PRD(β; F_(y, x)) = (1 + sup_{v∈S^{p−1}} |Med((y − x′β)/(x′v))|)^{−1}.

When p = 1, x = 1 and v ∈ {−1, 1}, so it reduces to the following:

PRD(β; F_y) = (1 + |Med(y − β)|)^{−1}. (12)

It is readily seen that

arg max_{β∈R^1} PRD(β; F_y) = arg min_{β∈R^1} |Med(y − β)| = Med(y),

where the first equality follows from (12) and the oddness of the median operator, and the second one follows from the translation equivariance (see page 249 of [16] for the definition) of the median as a location estimator. The last display means that β*_PRD recovers the sample median. □
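The univariate reduction used in part (iii) can be checked numerically. The sketch below assumes the empirical form PRD(β) = (1 + |Med(y − β)|/MAD(y))^{−1} for p = 1 with (T, S) = (Med, MAD) (i.e., the display above without the S(F_y) = 1 normalization); names are illustrative:

```python
import statistics as st

ys = [1.0, 2.0, 3.0, 4.0, 5.0]

def mad(data):
    """Median of absolute deviations from the median."""
    m = st.median(data)
    return st.median([abs(y - m) for y in data])

def prd_univ(beta, data):
    """Empirical PRD for p = 1 with (T, S) = (Med, MAD)."""
    uf = abs(st.median([y - beta for y in data])) / mad(data)
    return 1.0 / (1.0 + uf)

print(prd_univ(3.0, ys))  # 1.0 -- the depth is maximized at the sample median
print(prd_univ(4.0, ys))  # 0.5
```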

Examples of Regression Depths and Regression Medians
For a better comprehension of the depth notions and depth-induced medians in the last section, we present empirical examples below. We confine attention to RD and PRD only, since D_C(β; P) is just the probability mass carried by the hyperplane determined by y = x′β.

Example 1 (Empirical RD_RH and PRD). What do empirical RD_RH and PRD look like?
To answer the question, 30 random bivariate standard normal points are generated (plotted in Figure 1), and RD_RH and PRD are computed w.r.t. these points.
We select 961 equally spaced grid points from the square [−3, 3] × [−3, 3] (i.e., |x| ≤ 3 and |y| ≤ 3), then treat each point (x, y) as a β = (β_1, β_2) and compute its regression depth (RD_RH and PRD) w.r.t. the 30 bivariate normal points. The depths of these 961 points are plotted in Figure 2.
Inspecting the figure reveals that (i) the sample RD_RH function is a step-wise increasing function (each step in this case is 1/30); for this roughly symmetric data case, it can attain its maximum depth around the center of symmetry (the origin); while (ii) PRD is a strictly monotonically increasing function and attains its maximum value at the center of symmetry, sharply contrasting with the behavior of RD_RH around the center (one has a unique maximum depth point while the other has multiple maximum depth points).

Example 2 (Uniqueness of medians induced from empirical RD_RH and PRD). This example illustrates the uniqueness behavior of the regression depth (RD_RH and PRD)-induced medians in the empirical distribution case via a concrete example on real data from the Hertzsprung-Russell diagram of the star cluster CYG OB1 (see Table 3 in Chapter 2 of [16]), which contains 47 stars in the direction of Cygnus. Here, x is the logarithm of the effective temperature at the surface of the star (T_e), and y is the logarithm of its light intensity (L/L_0); see Figure 3 for a plot of the data set.
Five regression lines are plotted in Figure 3. Among them, three (dashed red, dotted blue, and dotdash green) are regression medians from RD_RH, one (solid black) is from PRD, and the other (longdash purple) is the least squares (LS) line. Note that the classical least squares regression estimator (as well as many traditional regression estimators) can be regarded as a depth-induced median under the general "objective depth" D_Obj framework (see [11]). Thus, for benchmark purposes, the least squares line is also plotted in Figure 3 alongside the four other median lines.
The LS line also justifies the legitimacy of the existence of the RD_RH- and PRD-induced medians (as robust alternatives), since the LS line fails to capture the main sequence/pattern of the data cloud (stars) and is heavily affected by four giant stars, whereas the four depth medians resist the four leverage points (outliers) and catch the main trend/cluster. It turns out that there exist three maximum depth lines (medians) induced from RD_RH. Each of the three lines goes through exactly two data points. In (intercept, slope) form, they are (−6.065000, 2.500000), (−8.586500, 3.075000), and (−7.903043, 2.913043). These lines are plotted as dashed red, dotted blue, and dotdash green in Figure 3. The computation issues of RD_RH have been discussed in RH99, Rousseeuw and Struyf (1998) [20], and Liu and Zuo (2014) [21]. For a discussion of the computation of PRD and its induced regression medians, see Zuo (2019b) (Z19b) [22].
After obtaining (β̂_1, β̂_2), one can immediately get the fitted line ŷ = β̂_1 + β̂_2 x (which has actually already been plotted in Figure 3), the predicted values ŷ_i = β̂_1 + β̂_2 x_i, and hence the residuals r_i := y_i − ŷ_i. Since all of these involve the uniqueness issue, we first need a unique fitted line for each method. Here, due to the non-uniqueness of the deepest RD lines, we select the first deepest line (−6.065000, 2.500000) among the three as a representative. We then construct a table with nine columns: the 1st holds the ids of the observations, the 2nd the explanatory variable x_i values, the 3rd the dependent variable y_i values, the 4th-6th the predicted ŷ_i values for the LS, RD, and PRD methods, respectively, and the 7th-9th the residuals r_i for the LS, RD, and PRD methods, respectively (Table 1).
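The bookkeeping described here is elementary; a minimal sketch with the representative RD line and toy observations (not the CYG OB1 values) is:

```python
# Representative deepest RD line in (intercept, slope) form; toy data for illustration.
b1, b2 = -6.065, 2.5
x_obs = [4.0, 4.5, 5.0]
y_obs = [4.0, 5.5, 6.0]

y_hat = [b1 + b2 * xi for xi in x_obs]               # predicted values y_hat_i
resid = [yi - yh for yi, yh in zip(y_obs, y_hat)]    # residuals r_i = y_i - y_hat_i
for i, (xi, yi, yh, ri) in enumerate(zip(x_obs, y_obs, y_hat, resid), 1):
    print(f"{i}  x={xi:.2f}  y={yi:.2f}  y_hat={yh:.3f}  r={ri:+.3f}")
```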
Next, the residuals of the three methods are plotted in Figure 4. Inspecting the residual plot immediately reveals that, in this case, the residuals of the LS method are rather deceptively homogeneous: its plot fails to identify any outliers, whereas the robust regression median lines can all easily spot the four obvious outliers and the two groups of stars. Based on the residual plot, one can draw some conclusions. For example, the four outliers are not necessarily errors but might be exceptional observations (they come from a different group of stars), and the LS line does not provide a good fit (it explains only 4.4% of the total variation in the observations of y).
In the empirical distribution case, one can always take the average over all regression medians to deal with the non-uniqueness issue. Nevertheless, computational challenges arise if there exist infinitely many medians in higher dimensions. Furthermore, the average sometimes will no longer be a deepest line/hyperplane (as seen in this example and further in Section 4).
The non-uniqueness issue is more vital in the population case, since without uniqueness there is no uniquely defined median, and it is impossible to discuss the convergence (or consistency) and the limiting distribution of the empirical regression median to a unique population version.

Uniqueness of Regression Medians
From the empirical examples in the previous section, we see that there can exist multiple empirical regression medians induced from RD_RH, while in the case of PRD there exists a unique one. These results are just special empirical examples, not general cases. In the following, we address the general case and draw general conclusions.

RD_RH and D_C
Under certain symmetry assumptions (e.g., the regression symmetry of Rousseeuw and Struyf (2004) (RS04) [19]) and other conditions, the regression median induced from RD_RH can be unique (see Theorem 3 and Corollary 3 of [19]). Generally speaking, however, we have:

Proposition 2. β*_RD_RH(F_(y, x)) is not unique in general. The average of all β*_RD_RH(F_(y, x)) might no longer possess the maximum depth.

Proof. A counterexample suffices.
In fact, the real data Example 2 could serve as one counterexample, where one has three maximum depth lines and the average line no longer possesses the maximum RD_RH value.
An even simpler counterexample can be constructed. Assume that there are three sample points, A = (−1, 0), B = (0, 1), and C = (1, 0). Then it is readily seen that the three lines, each formed by two of the sample points, are (1, −1), (1, 1), and (0, 0) in (intercept, slope) form, and each line has the maximum RD_RH of 2/3, whereas the average of all maximum depth lines is (2/3, 0), which has an RD_RH of only 1/3. □

For special distributions, the median induced from Carrizosa depth can also be unique. Generally speaking, however, it is not.

Proposition 3. β*_D_C(F_(y, x)) is not unique in general. The average of all β*_D_C(F_(y, x)) might no longer possess the maximum depth value.
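Before proving Proposition 3, note that the three-point counterexample above is easy to verify numerically. The sketch below computes RD_RH for simple regression (p = 2) by brute force, using the vertical-line characterization of RD (the depth of a fit is the smallest fraction of points that must be passed when tilting it to vertical about some pivot x = u; points with zero residual count on both sides). This is an illustrative implementation, not an optimized algorithm:

```python
def rd_simple(beta, pts):
    """Regression depth (as a fraction) of the line y = beta[0] + beta[1]*x
    w.r.t. pts = [(x_i, y_i), ...], via the vertical-line characterization
    for simple regression (brute force over candidate pivots)."""
    b1, b2 = beta
    res = [(x, y - (b1 + b2 * x)) for x, y in pts]      # (x_i, residual r_i)
    xs = sorted({x for x, _ in res})
    # candidate pivots: outside the data range and between consecutive x's
    cands = [xs[0] - 1.0] + [(a + c) / 2 for a, c in zip(xs, xs[1:])] + [xs[-1] + 1.0]
    best = len(pts)
    for u in cands:
        lpos = sum(1 for x, r in res if x <= u and r >= 0)
        lneg = sum(1 for x, r in res if x <= u and r <= 0)
        rpos = sum(1 for x, r in res if x > u and r >= 0)
        rneg = sum(1 for x, r in res if x > u and r <= 0)
        best = min(best, lpos + rneg, lneg + rpos)
    return best / len(pts)

pts = [(-1.0, 0.0), (0.0, 1.0), (1.0, 0.0)]    # A, B, C
for beta in [(1.0, -1.0), (1.0, 1.0), (0.0, 0.0)]:
    print(beta, rd_simple(beta, pts))           # each deepest line: 2/3
print((2/3, 0.0), rd_simple((2/3, 0.0), pts))   # the average line: only 1/3
```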

Proof. A counterexample suffices.
Denote by H_β the hyperplane determined by y = x′β for any β ∈ R^p, and by θ_β the acute angle formed between the hyperplane H_β and the horizontal hyperplane H_h (y = 0).
Assume that β_i ∈ R^p (i = 1, 2), β_1 ≠ β_2, and that each H_{β_i} contains 1/2 probability mass; that any hyperline in H_{β_i} contains no probability mass (i = 1, 2); that θ_{β_1} = θ_{β_2}; and that H_{β_1} intersects H_{β_2} at a hyperline in the horizontal hyperplane H_h. Now, in light of characterization (5) of D_C, it is readily seen that at each β_i, D_C(β_i; P) attains the maximum depth value 1/2.
Let γ = (β_1 + β_2)/2; then it is readily seen that D_C(γ; P) = 0, and H_γ is no longer a hyperplane with the maximum depth value. □
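An empirical analogue of this construction is easy to check with characterization (5) (the empirical D_C of a fit is just the fraction of data points lying exactly on it). Put half of the points on y = x and half on y = −x: the two lines form equal angles with the horizontal and intersect in y = 0, and their average carries no mass at all. A sketch (illustrative, for p = 2):

```python
def dc_empirical(beta, pts, tol=1e-9):
    """Empirical Carrizosa depth via characterization (5): the fraction of
    points lying on the line y = beta[0] + beta[1]*x."""
    b1, b2 = beta
    return sum(1 for x, y in pts if abs(y - (b1 + b2 * x)) < tol) / len(pts)

# Half the mass on y = x, half on y = -x; no point lies on y = 0 itself.
pts = [(1.0, 1.0), (2.0, 2.0), (1.0, -1.0), (2.0, -2.0)]
beta1, beta2 = (0.0, 1.0), (0.0, -1.0)                        # the two deepest fits
avg = ((beta1[0] + beta2[0]) / 2, (beta1[1] + beta2[1]) / 2)  # = (0.0, 0.0), i.e. y = 0
print(dc_empirical(beta1, pts), dc_empirical(beta2, pts))     # 0.5 0.5
print(dc_empirical(avg, pts))                                 # 0.0
```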
PRD

In contrast to the two notions above, for PRD we have:

Proposition 4. β*_PRD(F_(y, x)) is unique, under (A0)-(A2) and the strict monotonicity of T (see Remarks 4.1).

Proof. To prove the proposition, we first invoke the following result.

Lemma 4.1 ([11]).
(i) β*_PRD(F_(y, x)) is regression, scale and affine equivariant, in the sense that, respectively,

β*_PRD(F_(y+x′b, x)) = β*_PRD(F_(y, x)) + b, ∀ b ∈ R^p;
β*_PRD(F_(sy, x)) = s β*_PRD(F_(y, x)), ∀ s ∈ R^1;
β*_PRD(F_(y, A′x)) = A^{−1} β*_PRD(F_(y, x)), for any nonsingular p × p matrix A.

(ii) The maximum of PRD(β; F_(y, x)) exists and is attained at a β_0 ∈ R^p with ∥β_0∥ < ∞.
(iii) PRD(β; F_(y, x)) monotonically decreases along any ray stemming from a deepest point, in the sense that for any β ∈ R^p and λ ∈ [0, 1],

PRD(β; F_(y, x)) ≤ PRD(β* + λ(β − β*); F_(y, x)),

where β* is a maximum depth point of PRD(·; F_(y, x)).

Now we are in a position to prove the proposition. Assume, w.l.o.g., that S(F_y) = 1 (since it involves neither v nor β, it has nothing to do with the maximum depth point β*_PRD). The existence of the maximum depth point (the regression median) is guaranteed in light of Lemma 4.1(ii) above. We thus focus on uniqueness. Assume that there are two maximum depth points β*_1 ≠ β*_2. We seek a contradiction. Let β*_0 = (β*_1 + β*_2)/2. By virtue of Lemma 4.1(iii) above, β*_0 is also a maximum depth point. By the invariance of the projection regression depth functional (see [11]) and Lemma 4.1(i) above, assume (w.l.o.g.) that β*_0 = 0. For a given β ∈ R^p, write g(β, v) := T(F_{(y−x′β)/(x′v)}). In light of the continuity of T in v, the generalized extreme value theorem on a compact set, and (A1), there exists a v_β ∈ S^{p−1} such that

|g(β, v_β)| = sup_{v∈S^{p−1}} |g(β, v)|. (14)

For simplicity, write v_0 for v_{β*_0}. Since β*_0 is a maximum depth point (equivalently, it minimizes the unfitness), we then have

|g(β*_0, v_0)| = sup_{v∈S^{p−1}} |g(β*_0, v)| ≤ sup_{v∈S^{p−1}} |g(β, v)|, ∀ β ∈ R^p. (15)

Denote by l(β*_1, β*_2) the hyperline that connects β*_1 and β*_2 in the parameter space of β ∈ R^p. Consider two cases.
Case I: x does not concentrate on any single hyperplane. In light of this assumption, there exists at least one γ ∈ R^p on l(β*_1, β*_2) in the parameter space R^p such that −x′γ ≠ 0. Assume (w.l.o.g.) that −x′γ < 0 = −x′β*_0. By (15) and the strict monotonicity of T, one has, for the v_γ defined in (14),

sup_{v∈S^{p−1}} |g(γ, v)| = |g(γ, v_γ)| > |g(β*_0, v_0)|,

which contradicts the fact that γ, lying on l(β*_1, β*_2), is itself a maximum depth point by Lemma 4.1(iii). This completes the proof of Case I.
Case II: x concentrates on a single hyperplane. This implies that there is a v ∈ S^{p−1} such that x′(ω)v = 0 for any ω ∈ Ω. This, however, contradicts (A0). This completes the proof of Case II, and thus of the proposition. □

Remarks 4.1. (II) If T is the mean functional, i.e., T(F_Z) = E(Z) with Z := (y − x′β)/(x′v), then T is strictly monotonic at any β as long as the related expectations exist. (III) If T is the qth quantile functional Q_q associated with the random variable Z (i.e., Q_q(Z) = inf{z : P(Z ≤ z) ≥ q}), then T is strictly monotonic at any β as long as the CDF of Z(β; v, y, x) := (y − x′β)/(x′v) is not flat at β for a given v ∈ S^{p−1}.
(IV) The proposition covers the sample case. That is, when F_(y, x) is replaced by its sample version in the proposition, we have the uniqueness of the sample regression median induced from PRD, which is very helpful in the practical computation of the median and is consistent with the finding in Figure 2.

Why Do We Care About the Non-Uniqueness of Regression Medians?
Uniqueness is actually implicitly assumed when we discuss the properties (such as Fisher consistency; regression, scale and affine equivariance; or the asymptotic breakdown point) of regression medians. Without uniqueness, (i) the sample regression median can never converge in probability or in distribution to its population version; (ii) the deepest regression will yield more than one response and residual for a given x; and (iii) algorithms for the approximate computation of sample medians can never converge.
Uniqueness is so essential in our discussion of medians that there is a conventional remedy measure for non-uniqueness: to take the average of all medians. This works in many scenarios, but not for β*_RD_RH and β*_D_C. This phenomenon for β*_RD_RH was first noticed by Mizera and Volauf (2002) [24] and Van Aelst et al. (2002) [25]. Concrete examples, such as the real data Example 2 and the artificially constructed ones in the proofs of Propositions 2 and 3, are nevertheless presented here.

Why Do We Treat Just Three Regression Medians?

D_C ([12]) and RD_RH ([10]) are two pioneering notions of regression depth. PRD was recently introduced in [11]. The latter work systematically studied the three regression depth notions w.r.t. the four axiomatic properties (P1)-(P4) (see Section 2). It was found that both the regression depth RD_RH and the projection regression depth PRD are genuine depth notions in regression, since both satisfy (P1)-(P4); while the former needs an extra assumption (A) (see Section 2.2), the latter does not need any extra assumptions. On the other hand, Carrizosa depth D_C violates (P3) in general and hence is not a genuine regression depth notion w.r.t. the definition in [23]. That motivates us to focus on just RD_RH and PRD throughout.

Summary and Conclusions
In terms of robustness, both depth-induced medians are indeed robust. In fact, the median β*_RD_RH can asymptotically resist up to 33% contamination [13], whereas β*_PRD can resist up to 50% contamination [14] without breakdown, in sharp contrast to the 0% of the classical LS estimator.
In terms of efficiency, the sample β*_PRD can possess a higher relative efficiency than the sample β*_RD_RH (see [22]). In terms of uniqueness, β*_PRD again distinguishes itself from the leading depth median β*_RD_RH by generally possessing the desirable uniqueness property. From the computational point of view, RD (and β*_RD_RH) has an edge over PRD (and β*_PRD): the former is relatively easier to compute than the latter (see [22]).
By virtue of the performance criteria above, we conclude that PRD and β * PRD are promising options among the leading regression depths and their induced medians.
Funding: This research received no external funding.