Entropy

24 December 2025

Concomitants of Order Statistics from a Bivariate Generalized Linear Exponential Distribution: Theory and Practice

Department of Mathematics and Statistics, Faculty of Science, Taif University, P.O. Box 11099, Taif 21944, Saudi Arabia
This article belongs to the Section Information Theory, Probability and Statistics

Abstract

This paper investigates the concomitants of order statistics from the bivariate generalized linear exponential (BGLE) distribution. We obtain the probability density function of a single concomitant and the joint probability density function of two concomitants of order statistics from the BGLE distribution. In addition, expressions for the single and product moments of concomitants of order statistics are derived. Furthermore, we find the best linear unbiased estimator of a scale parameter related to a study variable using various ranked set sampling techniques. Finally, we apply the findings to a real-life dataset.

1. Introduction

The linear exponential (LE) distribution, also known as the linear failure rate (LFR) distribution, has applications in various fields, including applied statistics and reliability analysis. It is particularly useful for modeling lifetime data with a linearly increasing failure rate function, and it includes the exponential and Rayleigh distributions as submodels. In the literature, numerous generalizations of the LE distribution have been proposed and investigated. Ref. [1] proposed the generalized linear failure rate (GLFR) distribution; its hazard rate function (HRF) can take various shapes, making it adaptable and suitable for a wide range of survival data sets. The GLFR distribution generalizes numerous well-known distributions, including the LFR, generalized exponential, and generalized Rayleigh distributions. Another generalization of the LE distribution, known as the generalized linear exponential (GLE) distribution, was suggested by [2]. Recently, Ref. [3] presented the exponentiated generalized linear exponential (EGLE) distribution, which generalizes both the GLFR and GLE distributions.
Bivariate lifetime data are frequently encountered in many practical scientific scenarios. Therefore, it is important to consider bivariate models that can be used to describe such data. These models are of interest in a variety of applications, including computer systems, reliability engineering, and Olympic Games data. The literature contains many proposals and investigations of bivariate exponential distributions and their extensions; see, for instance, [4,5,6,7,8,9,10,11,12]. In 2022, Pathak and Vellaisamy [41] introduced a novel family of bivariate generalized linear exponential (BGLE) distributions whose univariate marginals are members of the GLE family, and investigated its various statistical properties. Owing to the presence of five parameters, the joint probability density function (PDF) of the BGLE distribution is very flexible and can take on various shapes depending on the parameter values. The BGLE distribution has the bivariate generalized exponential (BGE) distribution and the bivariate generalized Rayleigh (BGR) distribution as sub-models for particular parameter values. The joint cumulative distribution function (CDF), joint PDF, and conditional PDF of the BGLE distribution are all available in closed form, which makes the distribution convenient in practice and suitable for modeling bivariate lifetime data in a variety of situations.
Concomitant or induced order statistics (OSs) were first introduced in the early 1970s by [13,14]. In brief, if a sample from a bivariate distribution is arranged by the first variable, the second variable associated with the $i$th value of the first variable is called the concomitant of the $i$th OS. For a review of the fundamental results on concomitants of order statistics (COSs), refer to [15]. COSs have found numerous applications in the areas of selection procedures, engineering, inference and prediction issues, and double sampling plans. For a brief overview of COS applications, refer to [16] and the references therein. Several authors have investigated COSs, including [17,18,19,20,21,22,23,24,25].
Ranked set sampling (RSS) is one of the most common and effective sampling designs, first proposed by [26]. Most statisticians favor using this sampling design since it provides more efficient estimates when compared to simple random sampling. McIntyre’s notion of ranking is feasible whenever it can be done easily by a judgment technique. For a detailed overview of the theory and applications of the RSS, see [27]. In some practical instances, the variable of main interest, say Y, is more difficult to measure than an auxiliary variable X related to Y, which is easily quantifiable and can be precisely arranged. In this instance, ref. [28] proposed an alternative RSS scheme, which is as follows:
  • Choose $m$ independent bivariate samples, each of size $m$, at random.
  • Rank the units in each sample according to the auxiliary variable $(X)$, keeping each unit's associated variable $(Y)$ with it.
  • Measure the $i$th observation of the $i$th set, $(X_{(i)i}, Y_{[i]i})$, $i = 1, \ldots, m$, where $X_{(i)i}$ denotes the $i$th OS of the $i$th sample and $Y_{[i]i}$ denotes the corresponding measurement made on the study variable $Y$ of the same unit.
  • If a larger sample size is required, repeat Steps 1 through 3 for $d$ cycles until a sample of size $n = md$ is obtained, where $m$ is the set size and $d$ is the number of cycles. Therefore, $(Y_{[1]1}, Y_{[2]2}, \ldots, Y_{[n]n})$ constitutes a ranked set sample (a short computational sketch of this scheme follows the list). Here, it is evident that $Y_{[i]i}$ is the concomitant of the $i$th OS arising from the $i$th sample, as coined by [29].
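To make the scheme concrete, here is a minimal computational sketch of Steps 1 through 4, assuming a generic bivariate sampler. The `draw_pairs` function and its parameters are illustrative placeholders and do not represent the BGLE model itself.

```python
# Minimal sketch of the Stokes (1977) RSS scheme with a concomitant variable.
import numpy as np

rng = np.random.default_rng(0)

def draw_pairs(size):
    # Placeholder bivariate population; replace with draws from the BGLE
    # distribution in an actual application.
    x = rng.exponential(1.0, size)
    y = 0.7 * x + rng.exponential(1.0, size)
    return np.column_stack([x, y])

def stokes_rss(m, d):
    """Return the ranked set sample (Y_[1]1, ..., Y_[m]m), repeated for d cycles."""
    sample = []
    for _ in range(d):                          # Step 4: repeat the cycle d times
        for i in range(m):                      # one set of size m for each rank i
            pairs = draw_pairs(m)               # Step 1: m independent bivariate units
            order = np.argsort(pairs[:, 0])     # Step 2: rank the set by the auxiliary X
            sample.append(pairs[order[i], 1])   # Step 3: measure Y of the i-th ranked X
    return np.array(sample)                     # total sample size n = m * d

print(stokes_rss(m=5, d=2))
```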
Ref. [30] provided a modified RSS method whereby only the largest or smallest judgment-ranked unit is chosen for quantification. Suppose $n$ random samples of size $n$ are chosen from a bivariate distribution. From each of the $n$ samples, select the unit with the largest (smallest) measurement on the auxiliary variable $X$ and measure the $Y$ variable related to it. The set of observations $Y_{[n]1}, Y_{[n]2}, \ldots, Y_{[n]n}$ ($Y_{[1]1}, Y_{[1]2}, \ldots, Y_{[1]n}$) is called the upper RSS (URSS) (lower RSS (LRSS)). Multiple authors have investigated the estimation of parameters of various bivariate distributions utilizing RSS and its modifications; in this field, some works were published by [23,24,31,32,33,34,35,36,37,38,39,40]. To the best of our knowledge, COSs from the BGLE distribution have not yet been studied. The objective of this study is therefore to develop the distribution theory for COSs arising from the BGLE distribution and to apply it to related inference problems.
The organization of the present paper is as follows: In Section 2, we present a general overview of the BGLE distribution and its characteristics, followed by a brief description of COSs. Section 3 provides the marginal PDF as well as the explicit formulas for the single moments of COSs from the BGLE distribution. Section 3 additionally presents the joint PDF of COSs from the BGLE distribution. Moreover, the explicit expressions for the product moments of COSs are derived. The best linear unbiased (BLU) estimator of a scale parameter related to a study variable, based on different RSS techniques, is obtained in Section 4. Then, in Section 5, we apply the paper’s results to a real dataset. Finally, Section 6 contains a conclusion.

2. Preliminaries

2.1. BGLE Distribution

The PDF of the BGLE distribution for a bivariate random variable $(X, Y)$ with parameters $(\beta_1, \beta_2, \theta_1, \theta_2, \lambda)$, as given by [41], is
$$f(x, y) = \lambda(\beta_1 + \theta_1 x)(\beta_2 + \theta_2 y)\, e^{-\phi(x,y;\omega)}\big(1 - e^{-\phi(x,y;\omega)}\big)^{\lambda-2}\big(1 - \lambda e^{-\phi(x,y;\omega)}\big),$$
where $x, y \ge 0$, $\beta_i, \theta_i \ge 0$ such that $\beta_i + \theta_i > 0$ for $i = 1, 2$, $0 < \lambda \le 1$, $\phi(x,y;\omega) = \beta_1 x + \frac{\theta_1}{2}x^2 + \beta_2 y + \frac{\theta_2}{2}y^2$, and $\omega = (\beta_1, \theta_1, \beta_2, \theta_2)$. Some specific distributions can be obtained from the BGLE distribution, as follows:
  • If $\theta_1 = \theta_2 = 0$, then the BGLE distribution becomes the BGE distribution. For details, see [10].
  • If $\beta_1 = \beta_2 = 0$, then the BGR distribution is obtained.
A series expansion of the PDF of the BGLE distribution (Pathak and Vellaisamy [41]) is
$$f(x, y) = (\beta_1 + \theta_1 x)(\beta_2 + \theta_2 y)\sum_{j=1}^{\infty}\binom{\lambda}{j}(-1)^{j+1} j^{2}\, e^{-j\phi(x,y;\omega)}.$$
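As a quick numerical sanity check (not part of the original derivation), the closed-form density and its truncated series expansion can be compared at a point; the parameter values below are assumptions chosen only for illustration.

```python
import numpy as np
from scipy.special import binom   # generalized binomial coefficient binom(lam, j)

def phi(x, y, b1, t1, b2, t2):
    return b1 * x + 0.5 * t1 * x**2 + b2 * y + 0.5 * t2 * y**2

def bgle_pdf(x, y, b1, t1, b2, t2, lam):
    """Closed-form BGLE joint density."""
    p = phi(x, y, b1, t1, b2, t2)
    return (lam * (b1 + t1 * x) * (b2 + t2 * y) * np.exp(-p)
            * (1 - np.exp(-p))**(lam - 2) * (1 - lam * np.exp(-p)))

def bgle_pdf_series(x, y, b1, t1, b2, t2, lam, terms=200):
    """Truncated series expansion of the same density."""
    p = phi(x, y, b1, t1, b2, t2)
    j = np.arange(1, terms + 1)
    s = np.sum(binom(lam, j) * (-1)**(j + 1) * j**2 * np.exp(-j * p))
    return (b1 + t1 * x) * (b2 + t2 * y) * s

args = dict(b1=0.5, t1=1.0, b2=0.3, t2=0.8, lam=0.7)   # assumed parameter values
print(bgle_pdf(0.9, 1.2, **args), bgle_pdf_series(0.9, 1.2, **args))  # should agree closely
```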
The marginal PDFs of $X$ and $Y$ are given by (Pathak and Vellaisamy [41])
$$f_X(x) = \lambda(\beta_1 + \theta_1 x)\, e^{-(\beta_1 x + \frac{\theta_1}{2}x^2)}\big(1 - e^{-(\beta_1 x + \frac{\theta_1}{2}x^2)}\big)^{\lambda-1},\quad x \ge 0,$$
and
$$f_Y(y) = \lambda(\beta_2 + \theta_2 y)\, e^{-(\beta_2 y + \frac{\theta_2}{2}y^2)}\big(1 - e^{-(\beta_2 y + \frac{\theta_2}{2}y^2)}\big)^{\lambda-1},\quad y \ge 0,$$
respectively. The marginal CDFs of $X$ and $Y$ are
$$F_X(x) = \big(1 - e^{-(\beta_1 x + \frac{\theta_1}{2}x^2)}\big)^{\lambda},\quad x \ge 0,$$
and
$$F_Y(y) = \big(1 - e^{-(\beta_2 y + \frac{\theta_2}{2}y^2)}\big)^{\lambda},\quad y \ge 0,$$
respectively. The conditional PDF of $Y$ given $X = x$ is given by (Pathak and Vellaisamy [41])
$$f(y \,|\, x) = \frac{(\beta_2 + \theta_2 y)\, e^{-(\beta_2 y + \frac{\theta_2}{2}y^2)}\big(1 - e^{-\phi(x,y;\omega)}\big)^{\lambda-2}\big(1 - \lambda e^{-\phi(x,y;\omega)}\big)}{\big(1 - e^{-(\beta_1 x + \frac{\theta_1}{2}x^2)}\big)^{\lambda-1}}.$$
Further, the bivariate $(k_1, k_2)$th product moments of the BGLE distribution are given by (Pathak and Vellaisamy [41]):
(i) for $\beta_1 = \beta_2 = 0$ and $\theta_1, \theta_2 > 0$, $k_1, k_2 = 1, 2, \ldots$,
$$E\big(X^{k_1} Y^{k_2}\big) = \frac{2^{(k_1+k_2)/2}\,\Gamma(k_1/2+1)\,\Gamma(k_2/2+1)}{\theta_1^{k_1/2}\,\theta_2^{k_2/2}}\sum_{j=1}^{\infty}\binom{\lambda}{j}(-1)^{j+1}\frac{1}{j^{(k_1+k_2)/2}},$$
(ii) for $\beta_1, \beta_2 > 0$ and $\theta_1, \theta_2 \ge 0$, $k_1, k_2 = 1, 2, \ldots$,
$$E\big(X^{k_1} Y^{k_2}\big) = \sum_{j=1}^{\infty}\sum_{t=0}^{\infty}\sum_{s=0}^{\infty}\binom{\lambda}{j}\frac{(-1)^{j+t+s+1}\,\theta_1^{t}\,\theta_2^{s}\, j^{t+s+2}}{2^{t+s}\, t!\, s!\,(\beta_1 j)^{k_1+2t+1}(\beta_2 j)^{k_2+2s+1}}\,\Gamma(k_1+2t+1)\,\Gamma(k_2+2s+1)\left(\beta_1 + \frac{\theta_1(k_1+2t+1)}{\beta_1 j}\right)\left(\beta_2 + \frac{\theta_2(k_2+2s+1)}{\beta_2 j}\right).$$
See [41] for a thorough proof of formulas (1) and (2), as well as more discussion.
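The case (i) series is straightforward to evaluate numerically. The following sketch, with assumed parameter values, compares it against direct numerical integration of $x^{k_1} y^{k_2} f(x, y)$; the two results should agree closely.

```python
import numpy as np
from scipy.special import binom, gamma
from scipy.integrate import dblquad

t1, t2, lam, k1, k2 = 1.0, 2.0, 0.6, 1, 2   # assumed values with beta_1 = beta_2 = 0

def moment_series(terms=200):
    """Product moment E[X^k1 Y^k2] from the case (i) series."""
    j = np.arange(1, terms + 1)
    s = np.sum(binom(lam, j) * (-1)**(j + 1) / j**((k1 + k2) / 2))
    return (2**((k1 + k2) / 2) * gamma(k1 / 2 + 1) * gamma(k2 / 2 + 1)
            / (t1**(k1 / 2) * t2**(k2 / 2)) * s)

def pdf(x, y):
    """BGLE density with beta_1 = beta_2 = 0."""
    p = 0.5 * t1 * x**2 + 0.5 * t2 * y**2
    return (lam * t1 * x * t2 * y * np.exp(-p)
            * (1 - np.exp(-p))**(lam - 2) * (1 - lam * np.exp(-p)))

num, _ = dblquad(lambda y, x: x**k1 * y**k2 * pdf(x, y), 0, np.inf, 0, np.inf)
print(moment_series(), num)
```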
Using the transformation $Z = \sqrt{\theta_1}\, X$ and $W = \sqrt{\theta_2}\, Y$, and writing, with a slight abuse of notation, $\theta_i$ for $\theta_i^{-1/2}$ in what follows ($i = 1, 2$), the standard BGLE distribution has the joint PDF
$$f(z, w) = (\beta_1\theta_1 + z)(\beta_2\theta_2 + w)\sum_{j=1}^{\infty}\binom{\lambda}{j}(-1)^{j+1} j^{2}\, e^{-j\phi(z,w;\omega)},$$
where $\phi(z,w;\omega) = \big(\beta_1\theta_1 z + \frac{z^2}{2}\big) + \big(\beta_2\theta_2 w + \frac{w^2}{2}\big)$ and $\omega = (\beta_1, \theta_1, \beta_2, \theta_2)$. Obviously, the variables $Z$ and $W$ follow the standard GLE distribution, and their corresponding PDFs are
$$f_Z(z) = \lambda(\beta_1\theta_1 + z)\, e^{-(\beta_1\theta_1 z + \frac{z^2}{2})}\big(1 - e^{-(\beta_1\theta_1 z + \frac{z^2}{2})}\big)^{\lambda-1},\quad z \ge 0,$$
$$f_W(w) = \lambda(\beta_2\theta_2 + w)\, e^{-(\beta_2\theta_2 w + \frac{w^2}{2})}\big(1 - e^{-(\beta_2\theta_2 w + \frac{w^2}{2})}\big)^{\lambda-1},\quad w \ge 0.$$

2.2. Concomitants of Order Statistics

Suppose $(X_i, Y_i)$, $i = 1, 2, \ldots, n$, is a random sample from a bivariate distribution with PDF $f(x, y)$. If $X_{i:n}$ denotes the $i$th OS of the $X$ sample values, then the $Y$ value associated with $X_{i:n}$ is called the concomitant of the $i$th OS and is denoted by $Y_{[i:n]}$, $i = 1, 2, \ldots, n$. For a detailed overview of concomitants of order statistics (COSs), we refer to [15,29].
The PDF of the concomitant of the $i$th OS is given by
$$h_{[i:n]}(y) = \int_{-\infty}^{\infty} f(y \,|\, x)\, f_{i:n}(x)\, dx,$$
where $f(y|x)$ is the conditional PDF of $Y$ given $X = x$, and $f_{i:n}(x)$ is the PDF of the $i$th OS, which is given by
$$f_{i:n}(x) = C_{i:n}\, f(x)\,[F(x)]^{i-1}\,[1 - F(x)]^{n-i},$$
where $C_{i:n} = \frac{n!}{(i-1)!\,(n-i)!}$. The relationship between the PDF of the concomitant of the first OS and that of the $i$th OS is given by (Balasubramanian and Beg [42])
$$h_{[i:n]}(y) = \sum_{s=n-i+1}^{n}(-1)^{s-n+i-1}\binom{s-1}{n-i}\binom{n}{s}\, h_{[1:s]}(y).$$
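The relation above rests on a combinatorial identity among order-statistic densities, which does not depend on the parent distribution. The following spot-check uses a standard exponential parent as a hypothetical stand-in.

```python
import numpy as np
from math import comb, factorial

f = lambda x: np.exp(-x)          # parent PDF (standard exponential stand-in)
F = lambda x: 1.0 - np.exp(-x)    # parent CDF

def f_os(i, n, x):
    """PDF of the i-th order statistic from a sample of size n."""
    c = factorial(n) // (factorial(i - 1) * factorial(n - i))
    return c * f(x) * F(x) ** (i - 1) * (1 - F(x)) ** (n - i)

def f_os_via_minima(i, n, x):
    """Same PDF reassembled from densities of sample minima f_{1:s}."""
    return sum((-1) ** (s - n + i - 1) * comb(s - 1, n - i) * comb(n, s) * f_os(1, s, x)
               for s in range(n - i + 1, n + 1))

x = 1.3
print(f_os(3, 5, x), f_os_via_minima(3, 5, x))   # both values should agree
```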
Further, for $1 \le i < j \le n$, the joint PDF of the concomitants of the $i$th and $j$th OSs is given by
$$h_{[i,j:n]}(y_1, y_2) = \int_{-\infty}^{\infty}\int_{-\infty}^{x_2} f(y_1 \,|\, x_1)\, f(y_2 \,|\, x_2)\, f_{i,j:n}(x_1, x_2)\, dx_1\, dx_2,$$
where $f_{i,j:n}(x_1, x_2)$ is the joint PDF of the $i$th and $j$th OSs, given by
$$f_{i,j:n}(x_1, x_2) = C_{i,j:n}\, f(x_1)\, f(x_2)\,[F(x_1)]^{i-1}\,[F(x_2) - F(x_1)]^{j-i-1}\,[1 - F(x_2)]^{n-j},\quad x_1 < x_2,$$
where $C_{i,j:n} = \frac{n!}{(i-1)!\,(j-i-1)!\,(n-j)!}$ (see [43,44]).

3. Distribution Theory of COSs from the BGLE Distribution

This section presents the distributions and moments of COSs arising from the BGLE distribution. Assume $(X_i, Y_i)$ and $(Z_i, W_i)$ are random samples of size $n$ drawn from the BGLE distribution and the standard BGLE distribution, with PDFs given by (1) and (3), respectively. Let $W_{[r:n]}$ represent the concomitant of the $r$th OS $Z_{r:n}$. The following theorem provides the PDF of $W_{[r:n]}$, where $r = 1, \ldots, n$.
Theorem 1. 
If $W_{[r:n]}$ is the concomitant of the $r$th OS from the standard BGLE distribution, then the PDF $h_{[r:n]}(w)$ of $W_{[r:n]}$, for $1 \le r \le n$, is given by
$$h_{[r:n]}(w) = \sum_{s=n-r+1}^{n}(-1)^{s-n+r-1}\binom{s-1}{n-r}\binom{n}{s}\, h_{[1:s]}(w),\quad w \ge 0,$$
where
$$h_{[1:s]}(w) = \frac{s!}{(s-1)!}\sum_{i=0}^{s-1}\sum_{j=1}^{\infty}(-1)^{i+j+1}\binom{s-1}{i}\binom{\lambda}{j}\, j^{2}\,(\beta_2\theta_2 + w)\, e^{-j\big(\beta_2\theta_2 w + \frac{w^2}{2}\big)}\, B(j, i\lambda+1),$$
and $B(\cdot\, , \cdot)$ is the complete beta function.
Proof. 
The PDF of the concomitant of the first OS, $W_{[1:n]}$, is given by
$$h_{[1:n]}(w) = \int_0^{\infty} f(w \,|\, z)\, f_{1:n}(z)\, dz.$$
In view of (3), (4), and (7), we get
$$h_{[1:n]}(w) = \frac{n!}{(n-1)!}\sum_{i=0}^{n-1}\sum_{j=1}^{\infty}(-1)^{i+j+1}\binom{n-1}{i}\binom{\lambda}{j}\, j^{2}\,(\beta_2\theta_2 + w)\, e^{-j\big(\beta_2\theta_2 w + \frac{w^2}{2}\big)}\, I,$$
where
$$I = \int_0^{\infty}(\beta_1\theta_1 + z)\, e^{-j\big(\beta_1\theta_1 z + \frac{z^2}{2}\big)}\big(1 - e^{-(\beta_1\theta_1 z + \frac{z^2}{2})}\big)^{i\lambda}\, dz.$$
Setting $t = e^{-(\beta_1\theta_1 z + \frac{z^2}{2})}$, so that $dt = -(\beta_1\theta_1 + z)\, e^{-(\beta_1\theta_1 z + \frac{z^2}{2})}\, dz$, in (14), we obtain
$$I = \int_0^{1} t^{j-1}(1 - t)^{i\lambda}\, dt = B(j, i\lambda+1).$$
Now, using (15) in (13), we get
$$h_{[1:n]}(w) = \frac{n!}{(n-1)!}\sum_{i=0}^{n-1}\sum_{j=1}^{\infty}(-1)^{i+j+1}\binom{n-1}{i}\binom{\lambda}{j}\, j^{2}\,(\beta_2\theta_2 + w)\, e^{-j\big(\beta_2\theta_2 w + \frac{w^2}{2}\big)}\, B(j, i\lambda+1),\quad w \ge 0.$$
Finally, by using the relation (8), we obtain the result given in (11). □
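Under assumed parameter values, Theorem 1 can be checked numerically: truncate the series over $j$, build $h_{[1:s]}$, combine it through relation (8), and verify that the resulting concomitant density integrates to one. The truncation level, $\lambda$, and $\beta_2\theta_2$ below are illustrative choices.

```python
import numpy as np
from scipy.special import binom, beta as beta_fn
from scipy.integrate import quad

lam, b2t2 = 0.9, 0.5     # assumed lambda and beta_2 * theta_2 (standardized scale)
J = 400                  # truncation level for the series over j

def h_1s(w, s):
    """Truncated series for h_[1:s](w) from Theorem 1."""
    j = np.arange(1, J + 1)
    total = 0.0
    for i in range(s):
        total += np.sum((-1) ** (i + j + 1) * binom(s - 1, i) * binom(lam, j) * j**2
                        * beta_fn(j, i * lam + 1) * np.exp(-j * (b2t2 * w + w**2 / 2)))
    return s * (b2t2 + w) * total          # s!/(s-1)! = s

def h_rn(w, r, n):
    """Concomitant PDF h_[r:n](w) assembled through relation (8)."""
    return sum((-1) ** (s - n + r - 1) * binom(s - 1, n - r) * binom(n, s) * h_1s(w, s)
               for s in range(n - r + 1, n + 1))

total_mass, _ = quad(lambda w: h_rn(w, r=2, n=4), 0, np.inf, limit=200)
print(total_mass)   # should be close to 1 (up to series truncation error)
```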
The $k$th moment of $W_{[r:n]}$, $1 \le r \le n$, is given by the following theorem:
Theorem 2. 
The $k$th moment of the concomitant of the OS $W_{[r:n]}$, for $k = 1, 2, \ldots$, is given by
$$\mu_{[r:n]}^{(k)} = \sum_{s=n-r+1}^{n}(-1)^{s-n+r-1}\binom{s-1}{n-r}\binom{n}{s}\,\mu_{[1:s]}^{(k)},$$
where $\mu_{[1:s]}^{(k)}$ is given as follows. For $\beta_2 = 0$, $\theta_2 > 0$,
$$\mu_{[1:s]}^{(k)} = \frac{s!}{(s-1)!}\sum_{i=0}^{s-1}\sum_{j=1}^{\infty}(-1)^{i+j+1}\binom{s-1}{i}\binom{\lambda}{j}\, B(j, i\lambda+1)\,\frac{2^{k/2}\,\Gamma(k/2+1)}{j^{k/2-1}},$$
and for $\beta_2 > 0$, $\theta_2 \ge 0$,
$$\mu_{[1:s]}^{(k)} = \frac{s!}{(s-1)!}\sum_{i=0}^{s-1}\sum_{j=1}^{\infty}\sum_{c=0}^{\infty}(-1)^{i+j+c+1}\binom{s-1}{i}\binom{\lambda}{j}\,\frac{B(j, i\lambda+1)}{c!\, 2^{c}}\,\frac{\Gamma(k+2c+1)}{j^{k+c-1}\,(\beta_2\theta_2)^{k+2c+1}}\left(\beta_2\theta_2 + \frac{k+2c+1}{j\,\beta_2\theta_2}\right).$$
Proof. 
The $k$th moment of $W_{[r:n]}$ is given by
$$\mu_{[r:n]}^{(k)} = E\big[W_{[r:n]}^{k}\big] = \int_0^{\infty} w^{k}\, h_{[r:n]}(w)\, dw.$$
Now, using (11), we get
$$\mu_{[r:n]}^{(k)} = \sum_{s=n-r+1}^{n}(-1)^{s-n+r-1}\binom{s-1}{n-r}\binom{n}{s}\,\mu_{[1:s]}^{(k)},$$
where $\mu_{[1:s]}^{(k)}$ is the $k$th moment of $W_{[1:s]}$, which is given by
$$\mu_{[1:s]}^{(k)} = E\big[W_{[1:s]}^{k}\big] = \int_0^{\infty} w^{k}\, h_{[1:s]}(w)\, dw = \frac{s!}{(s-1)!}\sum_{i=0}^{s-1}\sum_{j=1}^{\infty}(-1)^{i+j+1}\binom{s-1}{i}\binom{\lambda}{j}\, j^{2}\, B(j, i\lambda+1)\, I,$$
where
$$I = \int_0^{\infty} w^{k}\,(\beta_2\theta_2 + w)\, e^{-j\beta_2\theta_2 w}\, e^{-j\frac{w^2}{2}}\, dw.$$
(1) When $\beta_2 = 0$, $\theta_2 > 0$, we obtain
$$I = \int_0^{\infty} w^{k+1}\, e^{-j\frac{w^2}{2}}\, dw = \frac{2^{k/2}\,\Gamma(k/2+1)}{j^{k/2+1}}.$$
Using (24) in (22), we get (18).
(2) When $\beta_2 > 0$, $\theta_2 \ge 0$, using the expansion
$$e^{-j\frac{w^2}{2}} = \sum_{c=0}^{\infty}\frac{(-1)^{c}\, j^{c}\, w^{2c}}{c!\, 2^{c}},$$
from (23) we have
$$I = \sum_{c=0}^{\infty}\frac{(-1)^{c}\, j^{c}}{c!\, 2^{c}}\int_0^{\infty}\big(\beta_2\theta_2\, w^{k+2c} + w^{k+2c+1}\big)\, e^{-j\beta_2\theta_2 w}\, dw = \sum_{c=0}^{\infty}\frac{(-1)^{c}\, j^{c}}{c!\, 2^{c}}\,\frac{\Gamma(k+2c+1)}{(j\beta_2\theta_2)^{k+2c+1}}\left(\beta_2\theta_2 + \frac{k+2c+1}{j\beta_2\theta_2}\right).$$
Using (25) in (22), we get (19). □
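For the case $\beta_2 = 0$, the moment series of Theorem 2 is easy to evaluate. The sketch below computes $\mu_{[r:n]}$ for all ranks and also checks the identity $\sum_{r=1}^{n}\mu_{[r:n]} = n\,\mu_{[1:1]}$ used later in the paper; the value of $\lambda$ and the truncation level are assumed for illustration.

```python
import numpy as np
from scipy.special import binom, beta as beta_fn, gamma

lam, J = 0.9, 400    # assumed lambda; J truncates the series over j

def mu_1s(k, s):
    """k-th moment mu_[1:s]^(k) for beta_2 = 0 from Theorem 2."""
    j = np.arange(1, J + 1)
    total = 0.0
    for i in range(s):
        total += np.sum((-1) ** (i + j + 1) * binom(s - 1, i) * binom(lam, j)
                        * beta_fn(j, i * lam + 1)
                        * 2 ** (k / 2) * gamma(k / 2 + 1) / j ** (k / 2 - 1))
    return s * total

def mu_rn(k, r, n):
    """k-th moment mu_[r:n]^(k) assembled through the relation of Theorem 2."""
    return sum((-1) ** (s - n + r - 1) * binom(s - 1, n - r) * binom(n, s) * mu_1s(k, s)
               for s in range(n - r + 1, n + 1))

n = 4
means = [mu_rn(1, r, n) for r in range(1, n + 1)]
# Consistency check: the concomitant means must sum to n times the marginal mean.
print(means, sum(means), n * mu_1s(1, 1))
```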
Theorem 3. 
The joint PDF of the concomitants $W_{[r:n]}$ and $W_{[s:n]}$, $1 \le r < s \le n$, is given by
$$h_{[r,s:n]}(w_1, w_2) = C_{r,s:n}\sum_{\ell_1=0}^{s-r-1}\sum_{\ell_2=0}^{n-s}\sum_{i_1=1}^{\infty}\sum_{i_2=1}^{\infty}\delta_{\lambda}(\ell_1, \ell_2, i_1, i_2)\,(\beta_2\theta_2 + w_1)(\beta_2\theta_2 + w_2)\, e^{-i_2\big(\beta_2\theta_2 w_1 + \frac{w_1^2}{2}\big)}\, e^{-i_1\big(\beta_2\theta_2 w_2 + \frac{w_2^2}{2}\big)},\quad w_1, w_2 > 0,$$
where
$$\delta_{\lambda}(\ell_1, \ell_2, i_1, i_2) = \phi_{\lambda}(\ell_1, \ell_2, i_1, i_2)\, B\big(i_1+i_2,\, \lambda(r+\ell_1-1)+1\big)\,{}_3F_2\big(i_1,\, -\lambda(s-r-\ell_1+\ell_2-1),\, i_1+i_2;\; i_1+1,\, i_1+i_2+\lambda(r+\ell_1-1)+1;\; 1\big),$$
$$\phi_{\lambda}(\ell_1, \ell_2, i_1, i_2) = (-1)^{i_1+i_2+\ell_1+\ell_2+2}\, i_1\, i_2^{2}\binom{s-r-1}{\ell_1}\binom{n-s}{\ell_2}\binom{\lambda}{i_1}\binom{\lambda}{i_2},$$
and ${}_3F_2(a, b, c; d, e; x)$ denotes the generalized hypergeometric function defined by
$${}_3F_2(a, b, c; d, e; x) = \sum_{j=0}^{\infty}\frac{(a)_j\,(b)_j\,(c)_j}{(d)_j\,(e)_j}\,\frac{x^{j}}{j!},$$
where $(k)_j = k(k+1)\cdots(k+j-1)$ is the ascending factorial.
Proof. 
Using (9), the joint PDF of the concomitants of the $r$th and $s$th OSs, $W_{[r:n]}$ and $W_{[s:n]}$, is given by
$$h_{[r,s:n]}(w_1, w_2) = \int_0^{\infty}\int_{z_1}^{\infty} f(w_1 \,|\, z_1)\, f(w_2 \,|\, z_2)\, f_{r,s:n}(z_1, z_2)\, dz_2\, dz_1.$$
In view of (10), we get
$$h_{[r,s:n]}(w_1, w_2) = C_{r,s:n}\sum_{\ell_1=0}^{s-r-1}\sum_{\ell_2=0}^{n-s}(-1)^{\ell_1+\ell_2}\binom{s-r-1}{\ell_1}\binom{n-s}{\ell_2}\int_0^{\infty} f(z_1, w_1)\,[F(z_1)]^{r+\ell_1-1}\, I(z_1)\, dz_1,$$
where
$$I(z_1) = \int_{z_1}^{\infty} f(z_2, w_2)\,[F(z_2)]^{s-r-\ell_1+\ell_2-1}\, dz_2 = \sum_{i_1=1}^{\infty}\binom{\lambda}{i_1}(-1)^{i_1+1}\, i_1^{2}\,(\beta_2\theta_2 + w_2)\, e^{-i_1\big(\beta_2\theta_2 w_2 + \frac{w_2^2}{2}\big)}\int_{z_1}^{\infty}(\beta_1\theta_1 + z_2)\, e^{-i_1\big(\beta_1\theta_1 z_2 + \frac{z_2^2}{2}\big)}\big(1 - e^{-(\beta_1\theta_1 z_2 + \frac{z_2^2}{2})}\big)^{\lambda(s-r-\ell_1+\ell_2-1)}\, dz_2.$$
Setting $t_1 = e^{-(\beta_1\theta_1 z_2 + \frac{z_2^2}{2})}$, we get
$$I(z_1) = \sum_{i_1=1}^{\infty}\binom{\lambda}{i_1}(-1)^{i_1+1}\, i_1^{2}\,(\beta_2\theta_2 + w_2)\, e^{-i_1\big(\beta_2\theta_2 w_2 + \frac{w_2^2}{2}\big)}\int_0^{\omega} t_1^{i_1-1}(1 - t_1)^{\lambda(s-r-\ell_1+\ell_2-1)}\, dt_1 = \sum_{i_1=1}^{\infty}\binom{\lambda}{i_1}(-1)^{i_1+1}\, i_1^{2}\,(\beta_2\theta_2 + w_2)\, e^{-i_1\big(\beta_2\theta_2 w_2 + \frac{w_2^2}{2}\big)}\, B_{\omega}\big(i_1,\, \lambda(s-r-\ell_1+\ell_2-1)+1\big),$$
where $\omega = e^{-(\beta_1\theta_1 z_1 + \frac{z_1^2}{2})}$, and $B_{\omega}(a, b)$ is the incomplete beta function defined by $B_{\omega}(a, b) = \int_0^{\omega} x^{a-1}(1 - x)^{b-1}\, dx$.
Substituting the value of $I(z_1)$ into (28), we obtain
$$h_{[r,s:n]}(w_1, w_2) = C_{r,s:n}\sum_{\ell_1=0}^{s-r-1}\sum_{\ell_2=0}^{n-s}\sum_{i_1=1}^{\infty}\sum_{i_2=1}^{\infty}(-1)^{\ell_1+\ell_2+i_1+i_2+2}\binom{s-r-1}{\ell_1}\binom{n-s}{\ell_2}\binom{\lambda}{i_1}\binom{\lambda}{i_2}\, i_1^{2}\, i_2^{2}\,(\beta_2\theta_2 + w_1)(\beta_2\theta_2 + w_2)\, e^{-i_2\big(\beta_2\theta_2 w_1 + \frac{w_1^2}{2}\big)}\, e^{-i_1\big(\beta_2\theta_2 w_2 + \frac{w_2^2}{2}\big)}\int_0^{\infty}(\beta_1\theta_1 + z_1)\, e^{-i_2\big(\beta_1\theta_1 z_1 + \frac{z_1^2}{2}\big)}\big(1 - e^{-(\beta_1\theta_1 z_1 + \frac{z_1^2}{2})}\big)^{\lambda(r+\ell_1-1)}\, B_{\omega}\big(i_1,\, \lambda(s-r-\ell_1+\ell_2-1)+1\big)\, dz_1.$$
Letting $t_2 = e^{-(\beta_1\theta_1 z_1 + \frac{z_1^2}{2})}$ in (30), we get
$$h_{[r,s:n]}(w_1, w_2) = C_{r,s:n}\sum_{\ell_1=0}^{s-r-1}\sum_{\ell_2=0}^{n-s}\sum_{i_1=1}^{\infty}\sum_{i_2=1}^{\infty}(-1)^{\ell_1+\ell_2+i_1+i_2+2}\binom{s-r-1}{\ell_1}\binom{n-s}{\ell_2}\binom{\lambda}{i_1}\binom{\lambda}{i_2}\, i_1^{2}\, i_2^{2}\,(\beta_2\theta_2 + w_1)(\beta_2\theta_2 + w_2)\, e^{-i_2\big(\beta_2\theta_2 w_1 + \frac{w_1^2}{2}\big)}\, e^{-i_1\big(\beta_2\theta_2 w_2 + \frac{w_2^2}{2}\big)}\int_0^{1} t_2^{i_2-1}(1 - t_2)^{\lambda(r+\ell_1-1)}\, B_{t_2}\big(i_1,\, \lambda(s-r-\ell_1+\ell_2-1)+1\big)\, dt_2.$$
We know that $B_t(a, b) = \frac{t^{a}}{a}\,{}_2F_1(a, 1-b; a+1; t)$, and
$$\int_0^{1} t^{a-1}(1 - t)^{b-1}\,{}_2F_1(c, d; e; t)\, dt = B(a, b)\,{}_3F_2(c, d, a; e, a+b; 1)$$
(see [45]). Now, by using (32) in (31), we obtain the result of (26). □
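The coefficient $\delta_{\lambda}$ involves a $_3F_2$ evaluated at unity, which standard libraries can compute. The following sketch, with assumed index values, ranks, and $\lambda$, uses mpmath's generalized hypergeometric function.

```python
from mpmath import mp, hyper, beta, binomial

mp.dps = 30
lam, r, s, n = 0.75, 2, 4, 5          # assumed lambda and ranks r < s <= n

def delta_lam(l1, l2, i1, i2):
    """Coefficient delta_lambda(l1, l2, i1, i2) of Theorem 3."""
    phi = ((-1) ** (i1 + i2 + l1 + l2 + 2) * i1 * i2**2
           * binomial(s - r - 1, l1) * binomial(n - s, l2)
           * binomial(lam, i1) * binomial(lam, i2))
    b = beta(i1 + i2, lam * (r + l1 - 1) + 1)
    f32 = hyper([i1, -lam * (s - r - l1 + l2 - 1), i1 + i2],
                [i1 + 1, i1 + i2 + lam * (r + l1 - 1) + 1], 1)
    return phi * b * f32

print(delta_lam(0, 0, 1, 1))
```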
Theorem 4. 
The $(k, l)$th product moment of $W_{[r:n]}$ and $W_{[s:n]}$ is given as follows. For $\beta_2 = 0$, $\theta_2 > 0$,
$$\mu_{[r,s:n]}^{(k,l)} = C_{r,s:n}\sum_{\ell_1=0}^{s-r-1}\sum_{\ell_2=0}^{n-s}\sum_{i_1=1}^{\infty}\sum_{i_2=1}^{\infty}\delta_{\lambda}(\ell_1, \ell_2, i_1, i_2)\,\frac{2^{\frac{k+l}{2}}}{i_1^{\frac{l}{2}+1}\, i_2^{\frac{k}{2}+1}}\,\Gamma\Big(\frac{k}{2}+1\Big)\,\Gamma\Big(\frac{l}{2}+1\Big),$$
and for $\beta_2 > 0$, $\theta_2 \ge 0$,
$$\mu_{[r,s:n]}^{(k,l)} = C_{r,s:n}\sum_{\ell_1=0}^{s-r-1}\sum_{\ell_2=0}^{n-s}\sum_{i_1=1}^{\infty}\sum_{i_2=1}^{\infty}\sum_{c_1=0}^{\infty}\sum_{c_2=0}^{\infty}\frac{(-1)^{c_1+c_2}\, i_1^{c_2}\, i_2^{c_1}}{c_1!\, c_2!\, 2^{c_1+c_2}}\,\delta_{\lambda}(\ell_1, \ell_2, i_1, i_2)\,\frac{\Gamma(k+2c_1+1)\,\Gamma(l+2c_2+1)}{i_2^{k+2c_1+1}\, i_1^{l+2c_2+1}\,\beta_2^{k+l+2c_1+2c_2+2}\,\theta_2^{k+l+2c_1+2c_2}}\left(\beta_2 + \frac{k+2c_1+1}{i_2\,\beta_2\,\theta_2^{2}}\right)\left(\beta_2 + \frac{l+2c_2+1}{i_1\,\beta_2\,\theta_2^{2}}\right).$$
Here $\Gamma(\cdot)$ is the complete gamma function.
Proof. 
The $(k, l)$th product moment of $W_{[r:n]}$ and $W_{[s:n]}$ is given by
$$\mu_{[r,s:n]}^{(k,l)} = E\big[W_{[r:n]}^{k} W_{[s:n]}^{l}\big] = \int_0^{\infty}\int_0^{\infty} w_1^{k}\, w_2^{l}\, h_{[r,s:n]}(w_1, w_2)\, dw_1\, dw_2 = C_{r,s:n}\sum_{\ell_1=0}^{s-r-1}\sum_{\ell_2=0}^{n-s}\sum_{i_1=1}^{\infty}\sum_{i_2=1}^{\infty}\delta_{\lambda}(\ell_1, \ell_2, i_1, i_2)\, I,$$
where $I = I_1 I_2$, with
$$I_1 = \int_0^{\infty} w_1^{k}\,(\beta_2\theta_2 + w_1)\, e^{-i_2\big(\beta_2\theta_2 w_1 + \frac{w_1^2}{2}\big)}\, dw_1,$$
and
$$I_2 = \int_0^{\infty} w_2^{l}\,(\beta_2\theta_2 + w_2)\, e^{-i_1\big(\beta_2\theta_2 w_2 + \frac{w_2^2}{2}\big)}\, dw_2.$$
(1) When $\beta_2 = 0$, $\theta_2 > 0$, we have
$$I = \int_0^{\infty} w_1^{k+1}\, e^{-i_2\frac{w_1^2}{2}}\, dw_1\int_0^{\infty} w_2^{l+1}\, e^{-i_1\frac{w_2^2}{2}}\, dw_2 = \frac{2^{\frac{k+l}{2}}}{i_1^{\frac{l}{2}+1}\, i_2^{\frac{k}{2}+1}}\,\Gamma\Big(\frac{k}{2}+1\Big)\,\Gamma\Big(\frac{l}{2}+1\Big).$$
Using (37) in (35), we get (33).
(2) When $\beta_2 > 0$, $\theta_2 \ge 0$, using the expansion
$$e^{-i_2\frac{w_1^2}{2}} = \sum_{c_1=0}^{\infty}\frac{(-1)^{c_1}\, i_2^{c_1}\, w_1^{2c_1}}{c_1!\, 2^{c_1}},$$
from (36) we have
$$I_1 = \sum_{c_1=0}^{\infty}\frac{(-1)^{c_1}\, i_2^{c_1}}{c_1!\, 2^{c_1}}\int_0^{\infty} w_1^{k+2c_1}\,(\beta_2\theta_2 + w_1)\, e^{-i_2\beta_2\theta_2 w_1}\, dw_1 = \sum_{c_1=0}^{\infty}\frac{(-1)^{c_1}\, i_2^{c_1}}{c_1!\, 2^{c_1}}\left[\beta_2\theta_2\int_0^{\infty} w_1^{k+2c_1}\, e^{-i_2\beta_2\theta_2 w_1}\, dw_1 + \int_0^{\infty} w_1^{k+2c_1+1}\, e^{-i_2\beta_2\theta_2 w_1}\, dw_1\right] = \sum_{c_1=0}^{\infty}\frac{(-1)^{c_1}\, i_2^{c_1}}{c_1!\, 2^{c_1}}\,\frac{\Gamma(k+2c_1+1)}{(i_2\beta_2\theta_2)^{k+2c_1+1}}\left(\beta_2\theta_2 + \frac{k+2c_1+1}{i_2\beta_2\theta_2}\right).$$
Similarly, using the expansion
$$e^{-i_1\frac{w_2^2}{2}} = \sum_{c_2=0}^{\infty}\frac{(-1)^{c_2}\, i_1^{c_2}\, w_2^{2c_2}}{c_2!\, 2^{c_2}},$$
we have
$$I_2 = \sum_{c_2=0}^{\infty}\frac{(-1)^{c_2}\, i_1^{c_2}}{c_2!\, 2^{c_2}}\left[\beta_2\theta_2\int_0^{\infty} w_2^{l+2c_2}\, e^{-i_1\beta_2\theta_2 w_2}\, dw_2 + \int_0^{\infty} w_2^{l+2c_2+1}\, e^{-i_1\beta_2\theta_2 w_2}\, dw_2\right] = \sum_{c_2=0}^{\infty}\frac{(-1)^{c_2}\, i_1^{c_2}}{c_2!\, 2^{c_2}}\,\frac{\Gamma(l+2c_2+1)}{(i_1\beta_2\theta_2)^{l+2c_2+1}}\left(\beta_2\theta_2 + \frac{l+2c_2+1}{i_1\beta_2\theta_2}\right).$$
Using (39) and (40) in (35), we get (34). □
Table 1 shows the means and variances of the COSs of the standard BGLE distribution for different choices of $n$, $\lambda$, and $\beta_2$. It is worth noting that the condition $\sum_{r=1}^{n}\mu_{[r:n]}^{(j)} = n\,\mu_{[1:1]}^{(j)}$, $j = 1, 2$, is fulfilled (see [29]). Table 1 displays that the variances are decreasing with respect to $\beta_2$, while the means and variances are increasing with respect to $\lambda$.
Table 1. Mean and variance of the COS for the standard BGLE distribution for different choices of $n$, $\lambda$, and $\beta_2$.
From Theorem 2, the means and variances of the COSs $Y_{[r:n]}$ arising from the BGLE distribution are expressed as follows:
$$E[Y_{[r:n]}] = \theta_2\, E[W_{[r:n]}] = \theta_2\,\mu_{[r:n]},$$
$$Var[Y_{[r:n]}] = \theta_2^{2}\, Var[W_{[r:n]}] = \theta_2^{2}\,\delta_{r,r:n},$$
where $Var[W_{[r:n]}] = \mu_{[r:n]}^{(2)} - \big(\mu_{[r:n]}\big)^{2}$ and $r = 1, \ldots, n$. The covariances between $Y_{[r:n]}$ and $Y_{[s:n]}$ are expressed, using Theorems 2 and 4, as
$$Cov[Y_{[r:n]}, Y_{[s:n]}] = \theta_2^{2}\, Cov[W_{[r:n]}, W_{[s:n]}] = \theta_2^{2}\,\delta_{r,s:n},$$
where $Cov(W_{[r:n]}, W_{[s:n]}) = \mu_{[r,s:n]} - \mu_{[r:n]}\,\mu_{[s:n]}$ and $1 \le r < s \le n$.

4. BLU Estimator of the Parameter $\theta_2$ Based on Different RSS Schemes

In this part, we obtain the best linear unbiased estimator of the parameter $\theta_2$ using the RSS, LRSS, and URSS schemes, assuming the parameters $\beta_2$ and $\lambda$ are known.
Suppose that the bivariate random vector $(X, Y)$ follows the BGLE distribution with the PDF provided in (1). Choose a ranked set sample according to the Stokes RSS procedure. Let $X_{(i:n)i}$ denote the observation obtained on the auxiliary variable $X$ in the $i$th unit of the RSS, and let $Y_{[i:n]i}$ denote the measurement made on the variable associated with $X_{(i:n)i}$, $i = 1, 2, \ldots, n$. Clearly, $Y_{[i:n]i}$ is the concomitant of the $i$th OS of a random sample of size $n$ arising from the BGLE distribution (refer to [29], p. 145). Let $\mathbf{Y}_{[n]} = (Y_{[1:n]1}, Y_{[2:n]2}, \ldots, Y_{[n:n]n})'$ denote the column vector of the ranked set sample. According to (41) and (42), the mean and variance of $Y_{[i:n]i}$, $i = 1, 2, \ldots, n$, are given below:
$$E[Y_{[i:n]i}] = \theta_2\,\mu_{[i:n]},$$
and
$$Var[Y_{[i:n]i}] = \theta_2^{2}\,\delta_{i,i:n}.$$
Because $Y_{[i:n]i}$ and $Y_{[j:n]j}$, for $i \ne j$, represent measurements on $Y$ made from units in two independent samples, the covariance between $Y_{[i:n]i}$ and $Y_{[j:n]j}$ is zero.
Then, from (44) and (45), we can write
$$E[\mathbf{Y}_{[n]}] = \theta_2\,\boldsymbol{\mu},$$
and the dispersion matrix of $\mathbf{Y}_{[n]}$,
$$D[\mathbf{Y}_{[n]}] = \theta_2^{2}\,\boldsymbol{\Delta},$$
where $\boldsymbol{\mu} = (\mu_{[1:n]}, \ldots, \mu_{[n:n]})'$ and $\boldsymbol{\Delta} = \mathrm{diag}(\delta_{1,1:n}, \delta_{2,2:n}, \ldots, \delta_{n,n:n})$.
If the parameters $\beta_2$ and $\lambda$ involved in $\boldsymbol{\mu}$ and $\boldsymbol{\Delta}$ are known, then, proceeding as in [29] (p. 185), the BLU estimator $\hat{\theta}_2$ of $\theta_2$ is obtained as
$$\hat{\theta}_2 = (\boldsymbol{\mu}'\boldsymbol{\Delta}^{-1}\boldsymbol{\mu})^{-1}\,\boldsymbol{\mu}'\boldsymbol{\Delta}^{-1}\mathbf{Y}_{[n]} = \sum_{i=1}^{n} a_i\, Y_{[i:n]i},$$
where $a_i = \dfrac{\mu_{[i:n]}/\delta_{i,i:n}}{\sum_{i=1}^{n}\mu_{[i:n]}^{2}/\delta_{i,i:n}}$, and the variance of $\hat{\theta}_2$ is given by
$$Var[\hat{\theta}_2] = (\boldsymbol{\mu}'\boldsymbol{\Delta}^{-1}\boldsymbol{\mu})^{-1}\theta_2^{2} = \Big(\sum_{i=1}^{n}\mu_{[i:n]}^{2}/\delta_{i,i:n}\Big)^{-1}\theta_2^{2}.$$
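For concreteness, here is a minimal sketch of how the BLU estimate and its scaled variance are computed from $\boldsymbol{\mu}$ and $\boldsymbol{\Delta}$; all numerical values below are placeholders, not the entries of Tables 1 through 3.

```python
import numpy as np

mu = np.array([0.55, 0.80, 1.05, 1.30, 1.60])      # assumed mu_[i:n], i = 1..5
delta = np.array([0.30, 0.35, 0.40, 0.45, 0.55])   # assumed delta_{i,i:n}
y_rss = np.array([0.9, 1.4, 1.8, 2.3, 3.1])        # assumed ranked set sample Y_[i:n]i

a = (mu / delta) / np.sum(mu**2 / delta)           # BLU coefficients a_i
theta2_hat = np.sum(a * y_rss)                     # BLU estimate of theta_2
rel_var = 1.0 / np.sum(mu**2 / delta)              # Var[theta2_hat] / theta_2^2

print(a, theta2_hat, rel_var)
```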
Table 2 and Table 3 display the coefficients for the BLU estimator $\hat{\theta}_2$ of $\theta_2$ and $Var[\hat{\theta}_2]/\theta_2^{2}$ for various values of $n$, $\lambda$, and $\beta_2$.
Table 2. The coefficients for the BLU estimator $\hat{\theta}_2$ of $\theta_2$ and $Var[\hat{\theta}_2]/\theta_2^{2}$ for $\lambda = 0.50$.
Table 3. The coefficients for the BLU estimator $\hat{\theta}_2$ of $\theta_2$ and $Var[\hat{\theta}_2]/\theta_2^{2}$ for $\lambda = 0.90$.
We now present the BLU estimators of $\theta_2$ based on the URSS and LRSS. Let $n$ random samples of size $n$ be taken from the BGLE distribution. Choose the unit with the smallest (largest) measurement on the auxiliary variable $X$ from each of the $n$ samples, and then measure the $Y$ variable related to it. The set of observations $Y_{[1:n]1}, Y_{[1:n]2}, \ldots, Y_{[1:n]n}$ ($Y_{[n:n]1}, Y_{[n:n]2}, \ldots, Y_{[n:n]n}$) is referred to as the lower RSS (LRSS) (upper RSS (URSS)) sample.
The BLU estimators $\tilde{\theta}_{2,LRSS}$ and $\tilde{\theta}_{2,URSS}$ of $\theta_2$ based on the LRSS and URSS are
$$\tilde{\theta}_{2,LRSS} = \frac{1}{n\,\mu_{[1:n]}}\sum_{i=1}^{n} Y_{[1:n]i},$$
$$\tilde{\theta}_{2,URSS} = \frac{1}{n\,\mu_{[n:n]}}\sum_{i=1}^{n} Y_{[n:n]i},$$
and their variances are
$$Var[\tilde{\theta}_{2,LRSS}] = \Big(n\,\mu_{[1:n]}^{2}/\delta_{1,1:n}\Big)^{-1}\theta_2^{2},$$
$$Var[\tilde{\theta}_{2,URSS}] = \Big(n\,\mu_{[n:n]}^{2}/\delta_{n,n:n}\Big)^{-1}\theta_2^{2}.$$
The efficiencies $e_1$ of $\tilde{\theta}_{2,LRSS}$ and $e_2$ of $\tilde{\theta}_{2,URSS}$ relative to $\hat{\theta}_2$ are given by
$$e_1 = \frac{Var[\hat{\theta}_2]}{Var[\tilde{\theta}_{2,LRSS}]},\qquad e_2 = \frac{Var[\hat{\theta}_2]}{Var[\tilde{\theta}_{2,URSS}]};$$
see, for instance, Refs. [24,46]. Table 4 displays the efficiencies $e_1$ and $e_2$ for $n = 2, \ldots, 5$, $\beta_2 = 0, 0.5$, and $\lambda = 0.50, 0.90$. Based on Table 4, it is evident that:
Table 4. Efficiencies of the estimators $\tilde{\theta}_{2,LRSS}$ and $\tilde{\theta}_{2,URSS}$ relative to $\hat{\theta}_2$.
  • The efficiency $e_1$ is less than one for all chosen values of $\beta_2$, $\lambda$, and $n$. Therefore, $\hat{\theta}_2$ is relatively more efficient than $\tilde{\theta}_{2,LRSS}$. For a fixed pair $(n, \beta_2)$, the efficiency $e_1$ increases as $\lambda$ increases.
  • For all selected values of $\beta_2$, $\lambda$, and $n$, the efficiency $e_2$ is greater than one. Thus, $\tilde{\theta}_{2,URSS}$ is relatively more efficient than $\hat{\theta}_2$. For a fixed pair $(n, \beta_2)$, the efficiency $e_2$ decreases with increasing $\lambda$.
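The computations behind these comparisons can be sketched as follows; all numbers are illustrative placeholders rather than the tabulated $\mu_{[i:n]}$ and $\delta_{i,i:n}$ values.

```python
import numpy as np

mu = np.array([0.55, 0.80, 1.05, 1.30, 1.60])      # assumed mu_[i:n], i = 1..5
delta = np.array([0.30, 0.35, 0.40, 0.45, 0.55])   # assumed delta_{i,i:n}
n = len(mu)

y_lrss = np.array([0.4, 0.7, 0.5, 0.6, 0.8])       # assumed Y_[1:n]i observations
y_urss = np.array([2.9, 3.4, 2.6, 3.8, 3.1])       # assumed Y_[n:n]i observations

theta2_lrss = y_lrss.sum() / (n * mu[0])            # (1 / (n mu_[1:n])) sum_i Y_[1:n]i
theta2_urss = y_urss.sum() / (n * mu[-1])           # (1 / (n mu_[n:n])) sum_i Y_[n:n]i

rel_var_blu  = 1.0 / np.sum(mu**2 / delta)          # Var[theta2_hat] / theta_2^2 (RSS BLUE)
rel_var_lrss = delta[0] / (n * mu[0]**2)            # Var[theta2_LRSS] / theta_2^2
rel_var_urss = delta[-1] / (n * mu[-1]**2)          # Var[theta2_URSS] / theta_2^2

e1 = rel_var_blu / rel_var_lrss                     # efficiency of LRSS relative to the BLUE
e2 = rel_var_blu / rel_var_urss                     # efficiency of URSS relative to the BLUE
print(theta2_lrss, theta2_urss, e1, e2)
```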

5. Real Data Application

In this part, we present a real data analysis to illustrate the utility of our procedure. We consider the real data set used by [41], originally taken from [47]. The data set in Table 5 represents the time (in minutes) of the first kick goal scored by any team $(X)$ and the time of the first goal of any type scored by the home team $(Y)$. According to [41], the BGLE model is an appropriate fit for this data set. The estimators of $\beta_1$, $\beta_2$, $\theta_1$, $\theta_2$, and $\lambda$ are, respectively, 0.00001, 0.00311, 0.00079, 0.00092, and 0.75905 (for additional details, see [41]). Using the data from Table 5, we generate random samples of size five. For RSS, we choose $m^2$ bivariate $(X, Y)$ pairs, divide them into $m$ sets of size $m$, and rank each set according to $X$. The $Y$ value associated with the $i$th order statistic of $X$ is chosen from the $i$th set ($i = 1, \ldots, m$), resulting in an RSS sample of size $m$ per cycle. For URSS (LRSS), we select $m^2$ bivariate $(X, Y)$ pairs, divide them into $m$ sets of size $m$, and rank each set based on $X$. We select the unit with the largest (smallest) measurement on the variable $X$ and measure the $Y$ variable associated with it, resulting in a URSS (LRSS) sample of size $m$ per cycle. This technique is repeated $d$ times to obtain $n = m \times d$. Here, we take $m = 5$ and $d = 1$. Table 6 displays the samples under the RSS schemes.
Table 5. Data for the UEFA Champions League from 2004 to 2006.
Table 6. Samples of size n = 5 under different RSS techniques.
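The partitioning of the data into the RSS, LRSS, and URSS samples can be sketched as follows; `pairs` is a placeholder array standing in for the $(X, Y)$ pairs of Table 5.

```python
import numpy as np

rng = np.random.default_rng(1)
pairs = rng.exponential(20.0, size=(40, 2))   # placeholder (X, Y) data, not the UEFA table
m = 5                                          # set size; one cycle (d = 1)

idx = rng.choice(len(pairs), size=m * m, replace=False)   # draw m^2 pairs at random
sets = pairs[idx].reshape(m, m, 2)                        # split into m sets of size m

rss, lrss, urss = [], [], []
for i in range(m):
    s = sets[i][np.argsort(sets[i][:, 0])]   # rank the i-th set by the auxiliary X
    rss.append(s[i, 1])                      # Y paired with the i-th ranked X (RSS)
    lrss.append(s[0, 1])                     # Y paired with the smallest X (LRSS)
    urss.append(s[-1, 1])                    # Y paired with the largest X (URSS)

print(rss, lrss, urss)
```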
The proposed estimator of $\theta_2$ under the various RSS schemes depends on the parameters $\beta_2$ and $\lambda$, which are unknown in this case. Thus, the estimators of these parameters can be obtained using the moment estimation approach (see [38], for example). Here, assuming that $\beta_2 = 0$, we use a moment equation based on the correlation between $(X_{(i:n)i}, Y_{[i:n]i})$, $i = 1, 2, \ldots, n$, to get the moment estimator of $\lambda$. This yields $\hat{\lambda} = 0.40844$. Table 7 displays the estimates of $\theta_2$ using the RSS, LRSS, and URSS techniques. The findings indicate that $\tilde{\theta}_{2,URSS}$ has the smallest variance. This is in line with the results of the efficiency study presented in Section 4.
Table 7. The estimates of $\theta_2$ under different RSS techniques.

6. Conclusions

The BGLE distribution was presented by [41] as a new flexible bivariate distribution. This study examined the COSs from this distribution and derived explicit expressions for the single and product moments of COSs. The BLU estimator of the scale parameter associated with the study variable was obtained under different RSS techniques, and a real data example was provided. The numerical findings emphasize that the BLU estimate under the URSS scheme is more efficient than the BLU estimates under the RSS and LRSS schemes.

Funding

The author would like to acknowledge the Deanship of Graduate Studies and Scientific Research, Taif University, for funding this work.

Data Availability Statement

Data are contained within the article.

Conflicts of Interest

The author declares no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GLE: Generalized linear exponential
BGLE: Bivariate generalized linear exponential
BGE: Bivariate generalized exponential
BGR: Bivariate generalized Rayleigh
PDF: Probability density function
CDF: Cumulative distribution function
COSs: Concomitants of order statistics
RSS: Ranked set sampling
URSS: Upper ranked set sampling
LRSS: Lower ranked set sampling
BLU: Best linear unbiased

References

  1. Sarhan, A.M.; Kundu, D. Generalized linear failure rate distribution. Commun. Stat.-Theory Methods 2009, 38, 642–660. [Google Scholar] [CrossRef]
  2. Mahmoud, M.A.; Alam, F.M.A. The generalized linear exponential distribution. Stat. Probab. Lett. 2010, 80, 1005–1014. [Google Scholar] [CrossRef]
  3. Sarhan, A.M.; Abd EL-Baset, A.A.; Alasbahi, I.A. Exponentiated generalized linear exponential distribution. Appl. Math. Model. 2013, 37, 2838–2849. [Google Scholar] [CrossRef]
  4. Block, H.W.; Basu, A.P. A continuous, bivariate exponential extension. J. Am. Stat. Assoc. 1974, 69, 1031–1037. [Google Scholar]
  5. Dolati, A.; Amini, M.; Mirhosseini, S.M. Dependence properties of bivariate distributions with proportional (reversed) hazards marginals. Metrika 2014, 77, 333–347. [Google Scholar] [CrossRef]
  6. Gumbel, E.J. Bivariate exponential distributions. J. Amer. Statist. Assoc. 1960, 55, 698–707. [Google Scholar] [CrossRef]
  7. Kundu, D.; Gupta, R.D. Bivariate generalized exponential distribution. J. Multivar. Anal. 2009, 100, 581–593. [Google Scholar] [CrossRef]
  8. Kundu, D.; Gupta, R.D. Absolute continuous bivariate generalized exponential distribution. AStA Adv. Stat. Anal. 2011, 95, 169–185. [Google Scholar] [CrossRef]
  9. Marshall, A.W.; Olkin, I. A generalized bivariate exponential distribution. J. Appl. Probab. 1967, 4, 291–302. [Google Scholar] [CrossRef]
  10. Mirhosseini, S.M.; Amini, M.; Kundu, D.; Dolati, A. On a new absolutely continuous bivariate generalized exponential distribution. Stat. Methods Appl. 2015, 24, 61–83. [Google Scholar] [CrossRef]
  11. Mohsin, M.; Kazianka, H.; Pilz, J.; Gebhardt, A. A new bivariate exponential distribution for modeling moderately negative dependence. Stat. Methods Appl. 2014, 23, 123–148. [Google Scholar] [CrossRef]
  12. Sarhan, A.M.; Hamilton, D.C.; Smith, B.; Kundu, D. The bivariate generalized linear failure rate distribution and its multivariate extension. Comput. Stat. Data Anal. 2011, 55, 644–654. [Google Scholar] [CrossRef]
  13. Bhattacharya, P.K. Convergence of sample paths of normalized sums of induced order statistics. Ann. Stat. 1974, 2, 1034–1039. [Google Scholar] [CrossRef]
  14. David, H.A. Concomitants of order statistics. Bull. Int. Statist. Inst. 1973, 45, 295–300. [Google Scholar]
  15. David, H.A.; Nagaraja, H.N. 18 Concomitants of order statistics. Handb. Statist. 1998, 16, 487–513. [Google Scholar]
  16. Veena, T.G.; Thomas, P.Y. Role of concomitants of order statistics in determining parent bivariate distributions. Commun. Stat.-Theory Methods 2017, 46, 7976–7997. [Google Scholar] [CrossRef]
  17. Abd Elgawad, M.A.; Barakat, H.M.; Xiong, S.; Alyami, S.A. Information measures for generalized order statistics and their concomitants under general framework from Huang-Kotz FGM bivariate distribution. Entropy 2021, 23, 335. [Google Scholar] [CrossRef] [PubMed]
  18. Barakat, H.M.; Nigm, E.M.; Alawady, M.A.; Husseiny, I.A. Concomitants of order statistics and record values from iterated FGM type bivariate-generalized exponential distribution. REVSTAT-Statist. J. 2021, 19, 291–307. [Google Scholar]
  19. Barakat, H.M.; Nigm, E.M.; Syam, A.H. Concomitants of order statistics and record values from Bairamov-Kotz-Becki-FGM bivariate-generalized exponential distribution. Filomat 2018, 32, 3313–3324. [Google Scholar] [CrossRef]
  20. Deka, U.; Das, B.; Deka, D. Concomitants of order statistics for bivariate exponentiated inverted Weibull distribution. J. Math. Comput. Sci. 2021, 11, 6444–6467. [Google Scholar] [CrossRef]
  21. Koshti, R.D.; Kamalja, K.K. A review on concomitants of order statistics and its application in parameter estimation under ranked set sampling. J. Korean Stat. Soc. 2024, 53, 65–99. [Google Scholar] [CrossRef]
  22. Kumar, S.; Khan, M.J.S.; Kumar, S. Concomitant of order statistics from new bivariate gompertz distribution. J. Mod. Appl. Stat. Meth. 2019, 18. [Google Scholar] [CrossRef]
  23. Philip, A.; Thomas, P.Y. On concomitants of order statistics arising from the extended Farlie—Gumbel—Morgenstern bivariate logistic distribution and its application in estimation. Stat. Methodol. 2015, 25, 59–73. [Google Scholar] [CrossRef]
  24. Philip, A.; Thomas, P.Y. On concomitants of order statistics and its application in defining ranked set sampling from Farlie-Gumbel-Morgenstern bivariate Lomax distribution. J. Iran. Stat. Soc. 2017, 16, 67–95. [Google Scholar]
  25. Philip, A.; Thomas, P.Y. On concomitants of order statistics from farlie-gumbel-morgenstern bivariate lomax distribution and its application in estimation. J. Iran. Stat. Soc. 2022, 16, 67–95. [Google Scholar]
  26. McIntyre, G.A. A method for unbiased selective sampling, using ranked sets. Aust. J. Agric. Res. 1952, 3, 385–390. [Google Scholar] [CrossRef]
  27. Chen, Z.; Bai, Z.; Sinha, B.K. Ranked Set Sampling: Theory and Applications; Springer: New York, NY, USA, 2004; Volume 176. [Google Scholar]
  28. Stokes, S.L. Ranked set sampling with concomitant variables. Commun. Stat.-Theory Methods 1977, 6, 1207–1211. [Google Scholar] [CrossRef]
  29. David, H.A.; Nagaraja, H.N. Order Statistics; John Wiley & Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  30. Stokes, S.L. Inferences on the correlation coefficient in bivariate normal populations from ranked set samples. J. Am. Stat. Assoc. 1980, 75, 989–995. [Google Scholar] [CrossRef]
  31. Basikhasteh, M.; Lak, F.; Tahmasebi, S. Bayesian estimation of morgenstern type bivariate rayleigh distribution using some types of ranked set sampling. Rev. Colombiana Estadíst. 2021, 44, 279–296. [Google Scholar] [CrossRef]
  32. Bohn, L.L. A review of nonparametric ranked-set sampling methodology. Commun. Stat.-Theory Methods 1996, 25, 2675–2685. [Google Scholar] [CrossRef]
  33. Barakat, H.M.; Newer, H.A. Exact prediction intervals for future exponential and Pareto lifetimes based on ordered ranked set sampling of non-random and random size. Stat. Pap. 2022, 63, 1801–1827. [Google Scholar] [CrossRef]
  34. Chacko, M.; Thomas, P.Y. Estimation of a parameter of Morgenstern type bivariate exponential distribution by ranked set sampling. Ann. Inst. Stat. Math. 2008, 60, 301–318. [Google Scholar] [CrossRef]
  35. Dong, Y.F.; Chen, W.X.; Xie, M.Y. Best linear unbiased estimators of location and scale ranked set parameters under moving extremes sampling design. Acta Math. Appl. Sin. Engl. Ser. 2023, 39, 222–231. [Google Scholar] [CrossRef]
  36. Irshad, M.R.; Maya, R.; Al-Omari, A.I.; Arun, S.P.; Alomani, G. The extended Farlie-Gumbel-Morgenstern bivariate Lindley distribution: Concomitants of order statistics and estimation. Electron. J. Appl. Stat. Anal. 2021, 14, 373–388. [Google Scholar]
  37. Irshad, M.R.; Maya, R.; Al-Omari, A.I.; Hanandeh, A.A.; Arun, S.P. Estimation of a Parameter of Farlie-Gumbel-Morgenstern Bivariate Bilal Distribution by Ranked Set Sampling. Reliabil. Theory Appl. 2023, 18, 129–140. [Google Scholar]
  38. Kamalja, K.K.; Koshti, R.D. Application of ranked set sampling in parameter estimation of cambanis-type bivariate exponential distribution. Statistica 2022, 82, 145–175. [Google Scholar]
  39. Koshti, R.D.; Kamalja, K.K. Parameter estimation of Cambanis-type bivariate uniform distribution with ranked set sampling. J. Appl. Stat. 2021a, 48, 61–83. [Google Scholar] [CrossRef]
  40. Kotb, M.S. Bayesian prediction bounds for the exponential-type distribution based on ordered ranked set sampling. Stoch. Qual. Control 2016, 31, 45–54. [Google Scholar] [CrossRef]
  41. Pathak, A.K.; Vellaisamy, P. A bivariate generalized linear exponential distribution: Properties and estimation. Commun. Stat.-Simul. Comput. 2022, 51, 5426–5446. [Google Scholar] [CrossRef]
  42. Balasubramanian, K.; Beg, M.I. Concomitant of order statistics in Gumbel’s bivariate exponential distribution. Sankhyā Ser. B 1998, 60, 399–406. [Google Scholar]
  43. Arnold, B.C.; Balakrishnan, N.; Nagaraja, H.N. A First Course in Order Statistics; Society for Industrial and Applied Mathematics: Delhi, India, 2008. [Google Scholar]
  44. David, H.A. Order Statistics; John Wiley and Sons: New York, NY, USA, 1981. [Google Scholar]
  45. Mathai, A.M.; Saxena, R.K. Generalized Hypergeometric Functions with Applications in Statistics and Physical Sciences; Springer: Berlin/Heidelberg, Germany, 2006; Volume 348. [Google Scholar]
  46. Koshti, R.D.; Kamalja, K.K. Efficient estimation of a scale parameter of bivariate Lomax distribution by ranked set sampling. Calcutta Statist. Assoc. Bull. 2021, 73, 24–44. [Google Scholar] [CrossRef]
  47. Meintanis, S.G. Test of fit for Marshall—Olkin distributions with applications. J Stat. Plan. Inference 2007, 137, 3954–3963. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
