Article

Advancing the Analysis of Extended Negative Dependence Random Variables: A New Concentration Inequality and Its Applications for Linear Models

by Zouaoui Chikr Elmezouar 1, Abderrahmane Belguerna 2,*, Hamza Daoudi 3, Fatimah Alshahrani 4 and Zoubeyr Kaddour 2

1 Department of Mathematics, College of Science, King Khalid University, P.O. Box 9004, Abha 61413, Saudi Arabia
2 Department of Mathematics, Sciences Institute, S.A University Center, P.O. Box 66, Naama 45000, Algeria
3 Department of Electrical Engineering, College of Technology, Tahri Mohamed University, Al-Qanadisa Road, P.O. Box 417, Bechar 08000, Algeria
4 Department of Mathematical Sciences, College of Science, Princess Nourah bint Abdulrahman University, P.O. Box 84428, Riyadh 11671, Saudi Arabia
* Author to whom correspondence should be addressed.
Axioms 2024, 13(8), 511; https://doi.org/10.3390/axioms13080511
Submission received: 4 June 2024 / Revised: 17 July 2024 / Accepted: 23 July 2024 / Published: 29 July 2024

Abstract:
This paper introduces an innovative concentration inequality for Extended Negative Dependence (END) random variables, providing new insights into their almost complete convergence. We apply this inequality to analyze END variable sequences, particularly focusing on the first-order auto-regressive (AR(1)) model. This application highlights the dynamics and convergence properties of END variables, expanding the analytical tools available for their study. Our findings contribute to both the theoretical understanding and practical applications of END variables in fields such as finance and machine learning, where understanding variable dependencies is crucial.

1. Introduction

The exploration of dependence among random variables is fundamental in probability theory and spans various scientific disciplines. Particularly, Extended Negative Dependence (END) has emerged as a significant area of study, shedding light on systems where variables inversely influence each other. This paper builds upon the rich theoretical background established by seminal works to advance the understanding of END random variables, their convergence properties, and implications in statistical modeling and analysis.
Liu’s [1] pioneering work on precise large deviations for dependent variables with heavy tails marked a significant advancement in the field, providing a methodological foundation that has influenced subsequent studies. Understanding the behavior of dependent variables, especially those with heavy tails, is crucial as it impacts the reliability of statistical inferences in real-world applications, ranging from financial risk management to environmental statistics.
Building on this foundation, Shen et al. [2,3] introduced probability inequalities for END sequences and elucidated their convergence behaviors. These contributions are pivotal as they not only enhance the theoretical understanding of END variables but also offer practical tools for analyzing their collective dynamics. The study of convergence properties, as explored by Yan [4] and Wang et al. [5], further expands this knowledge base, offering nuanced insights into the statistical properties of END variables through the lens of complete convergence and moment convergence.
The theoretical underpinnings of END variable analysis also draw heavily from classic results in probability theory, such as those by Baum and Katz [6] on convergence rates in the law of large numbers. This theory provides a mathematical framework for assessing the convergence behavior of random variables, a critical aspect when dealing with END variables. Additionally, the work by Chen et al. [7] and Chow [8] on uniform asymptotics and moment complete convergence, respectively, extends the application of these theories to more complex and realistic models.
At the intersection of dependence structures and statistical modeling, the contributions of Ebrahimi and Ghosh [9] on multivariate negative dependence, and Wu and Min [10] on the asymptotic theory of unbounded negatively dependent variables, lay the groundwork for a nuanced understanding of END variables [11,12,13,14]. These studies highlight the complexity of dependence structures and their impact on the asymptotic behavior of statistical estimators and models. Exponential type inequality for unbounded Widely Orthant Dependent (WOD) random variables can be found in [15].
This paper not only synthesizes the aforementioned theoretical advancements but also aims to extend them by exploring new dimensions of END variables and their applications. Boucheron, Lugosi, and Massart’s [16] work on concentration inequalities and Vershynin’s [17] introduction to high-dimensional probability theory provide a modern context for our study. These perspectives are crucial for addressing the challenges posed by high-dimensional data in understanding the behavior of END random variables.
The concentration inequality plays a pivotal role in many limit theorem proofs by providing a measure of partial sum convergence crucial for complete convergence analysis. It was Liu [1] who initially proposed the concept of Extended Negative Dependence (END). Before delving further, let us review the definitions of Lower Extended Negative Dependence and Upper Extended Negative Dependence. As outlined by Liu, the Extended Negative Dependence structure surpasses the Negatively Orthant-Dependent (NOD) structure, as it can reflect both negative and positive dependence structures to some extent. Liu also provided intriguing examples demonstrating that Extended Negative Dependence supports a wide range of dependency models. According to Joag-Dev and Proschan [18], random variables that are negatively associated (NA) must satisfy the property of negative orthant dependence (NOD). However, not all random variables satisfying NOD are necessarily negatively associated. Since the END class contains the NOD class, random variables that exhibit negative association are, in particular, extended negatively dependent (END).
Potential scenarios where END errors might be relevant include volatility clustering and asymmetric dependence observed in financial time series, negative dependence between observations due to resource constraints or competition in environmental data, and dependence structures arising from interactions between interconnected components in networked systems.
Therefore, it is crucial to examine the asymptotic characteristics of this expanded END category. Following the release of Ebrahimi and Ghosh's work [9], researchers have extensively investigated the convergence properties of NOD random sequences using various approaches. A sequence of random variables $\{Z_n, n \ge 1\}$ is said to converge completely to a constant $a$ if, for each $\varepsilon > 0$, $\sum_{n=1}^{\infty} P(|Z_n - a| > \varepsilon) < \infty$. As a consequence of the Borel–Cantelli lemma, $Z_n \to a$ almost surely; the notion of complete convergence is therefore stronger than that of almost sure convergence. The first treatment of complete convergence came from Hsu and Robbins [11]. See also, among others, Baum and Katz [6], Chow [8], and Wang et al. [3,5,7].
This paper aims to enhance the understanding of END variables. We begin by reviewing the literature on END variables, highlighting their defining characteristics and the theoretical advancements. We then introduce a novel concentration inequality tailored to END variables, detailing its derivation and potential applications. Following this, we explore the convergence properties of sequences of END variables, employing theoretical analysis and simulated data to illustrate the practical implications of our findings.
We apply our theoretical findings to linear models, with a particular focus on the first-order auto-regressive (AR(1)) model. This application not only demonstrates the utility of our contributions in a practical setting but also offers new insights into the behavior of END variables in statistical modeling. Through rigorous analysis, we aim to show how our research advances both the theoretical and practical understanding of END variables.
In this comprehensive exploration, our paper unfolds across seven distinctive sections. Section 1 introduces the theoretical foundation and motivation, providing a context for our study. Section 2, Preliminary Lemmas, establishes essential mathematical lemmas that underpin our subsequent analysis. The main theorems and their proofs are presented in Section 3, Main Results and Proofs, offering a deep dive into the theoretical framework, introducing the novel concentration inequality tailored to END random variables and establishing its foundational properties. In Section 4, we present the principal finding of almost complete convergence for sequences of Extended Negative Dependence random variables, leveraging the developed concentration inequality. Section 5 showcases the application of these results in the context of the AR(1) model. Section 6, Numerical Illustration, breathes life into the theoretical framework, showcasing its practical implications through a detailed numerical example. Finally, Section 7 concludes the paper, summarizing its contributions and outlining potential avenues for future research, and also reflecting on the broader significance of our study in the realms of extreme value theory and statistical analysis.

2. Preliminary Lemmas

The random variables $X_i$, $i \ge 1$, are said to be Lower Extended Negatively Dependent (LEND) if there exists some constant $M > 0$ such that, for all $x_i$, $i = 1, \ldots, n$, we have
$$P(X_1 \le x_1, X_2 \le x_2, \ldots, X_n \le x_n) \le M \prod_{i=1}^{n} P(X_i \le x_i). \qquad (1)$$
They are said to be Upper Extended Negatively Dependent (UEND) if there exists some constant $M > 0$ such that, for all $x_i$, $i = 1, \ldots, n$, we have
$$P(X_1 > x_1, X_2 > x_2, \ldots, X_n > x_n) \le M \prod_{i=1}^{n} P(X_i > x_i). \qquad (2)$$
They are said to be Extended Negatively Dependent (END) if both (1) and (2) hold for each $n \ge 1$. Liu [1] developed the notion of the Extended Negative Dependence random process. Clearly, when $M = 1$, the Extended Negative Dependence concept reduces to the concept of negatively orthant-dependent (NOD) random processes.
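As a quick empirical illustration (ours, not part of the paper), the following Python sketch checks inequality (2) by Monte Carlo for the antithetic Gaussian pair $(X, -X)$, a standard negatively dependent example that is NOD and hence END with $M = 1$; the thresholds are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200_000

# Antithetic Gaussian pair (X, -X): negatively associated, hence NOD,
# hence END with dominating coefficient M = 1.
x1 = rng.standard_normal(n_samples)
x2 = -x1
M = 1.0

# Check the UEND inequality (2): P(X1 > t1, X2 > t2) <= M * P(X1 > t1) * P(X2 > t2).
for t1, t2 in [(0.5, -1.0), (-0.5, -0.5), (0.0, -1.5)]:
    joint = np.mean((x1 > t1) & (x2 > t2))
    product = np.mean(x1 > t1) * np.mean(x2 > t2)
    print(f"thresholds ({t1:+.1f}, {t2:+.1f}):  joint = {joint:.4f}   M*product = {M * product:.4f}")
```

In each case the estimated joint tail probability stays below $M$ times the product of the marginal tails, as the UEND inequality requires.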
Lemma 1
([19]). For any integer $n \ge 1$, if the sequence $\{Z_n, n \ge 1\}$ consists of non-negative END random variables with dominating coefficient $M$, then
$$E\Big(\prod_{i=1}^{n} Z_i\Big) \le M \prod_{i=1}^{n} E(Z_i).$$
Specifically, if $\{Z_n, n \ge 1\}$ are END random variables with dominating coefficient $M$, then for each $n \ge 1$ and $s \ge 0$,
$$E\exp\Big\{s\sum_{i=1}^{n} Z_i\Big\} \le M \prod_{i=1}^{n} E\big(e^{s Z_i}\big).$$
By Lemma 1, we can obtain the following corollary immediately.
Corollary 1.
Let $\{Z_n, n \ge 1\}$ be a sequence of END random variables. Then, for each $n \ge 1$ and $s \in \mathbb{R}^{+*}$,
$$E\exp\Big\{s\sum_{i=1}^{n} Z_i\Big\} \le M \prod_{i=1}^{n} E\big(e^{s Z_i}\big).$$
Lemma 2.
Let $\{Z_n, n \ge 1\}$ be a sequence of END random variables and let $\tau > 0$; then
$$E\cosh\Big(\tau\sum_{i=1}^{n} Z_i\Big) \le M \prod_{i=1}^{n} E\big(e^{\tau|Z_i|} - \tau|Z_i|\big).$$
Proof of Lemma 2.
Let $\{Z_i\}_{i=1}^{n}$ be a sequence of END random variables, and let $\tau > 0$. We need to prove that
$$E\cosh\Big(\tau\sum_{i=1}^{n} Z_i\Big) \le M \prod_{i=1}^{n} E\big(e^{\tau|Z_i|} - \tau|Z_i|\big).$$
Recall the expression for the hyperbolic cosine function:
$$\cosh(x) = \frac{e^{x} + e^{-x}}{2}.$$
Applying this to the left-hand side, we obtain
$$E\cosh\Big(\tau\sum_{i=1}^{n} Z_i\Big) = E\left[\frac{e^{\tau\sum_{i=1}^{n} Z_i} + e^{-\tau\sum_{i=1}^{n} Z_i}}{2}\right].$$
By linearity of expectation, this becomes
$$\frac{1}{2}\left[E\Big(e^{\tau\sum_{i=1}^{n} Z_i}\Big) + E\Big(e^{-\tau\sum_{i=1}^{n} Z_i}\Big)\right].$$
Since the $Z_i$ are END random variables, we can leverage properties of END sequences: they satisfy decoupling-type inequalities that allow us to bound the expectation of exponentials of sums of these variables. Specifically, by Corollary 1 applied to $\{Z_i\}$ and to $\{-Z_i\}$ (which is also END), we have
$$E\Big(e^{\tau\sum_{i=1}^{n} Z_i}\Big) \le M \prod_{i=1}^{n} E\big(e^{\tau Z_i}\big),$$
and similarly,
$$E\Big(e^{-\tau\sum_{i=1}^{n} Z_i}\Big) \le M \prod_{i=1}^{n} E\big(e^{-\tau Z_i}\big).$$
Thus, we have
$$E\cosh\Big(\tau\sum_{i=1}^{n} Z_i\Big) \le \frac{M}{2}\left[\prod_{i=1}^{n} E\big(e^{\tau Z_i}\big) + \prod_{i=1}^{n} E\big(e^{-\tau Z_i}\big)\right].$$
By the assumption that the sequence $\{Z_i\}_{i=1}^{n}$ is uniformly exponentially bounded, there exist constants $M_i > 0$ such that, for all $i$,
$$E\big(e^{\tau|Z_i|} - \tau|Z_i|\big) \le M_i.$$
Therefore, combining the bounds for each $Z_i$,
$$\prod_{i=1}^{n} E\big(e^{\tau|Z_i|} - \tau|Z_i|\big) \le \prod_{i=1}^{n} M_i = M,$$
for some constant $M$ that depends on the product of the $M_i$.
Given the bounds obtained from the END property, we can conclude that
$$E\cosh\Big(\tau\sum_{i=1}^{n} Z_i\Big) \le M \prod_{i=1}^{n} E\big(e^{\tau|Z_i|} - \tau|Z_i|\big),$$
completing the proof with the correct handling of extended negative dependence. □
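As a sanity check on Lemma 2 (our own illustration, with assumed parameters), the sketch below estimates both sides of the inequality for independent uniform variables on $[-0.5, 0.5]$; independent sequences are a special case of END with $M = 1$, and the symmetric, bounded choice keeps the Monte Carlo error small.

```python
import numpy as np

rng = np.random.default_rng(3)
n, tau, M = 5, 0.8, 1.0           # M = 1 covers the independent special case
n_samples = 200_000

# Independent uniform(-0.5, 0.5) variables: a special case of an END sequence.
Z = rng.uniform(-0.5, 0.5, size=(n_samples, n))

# Left-hand side: E cosh(tau * sum_i Z_i), estimated by Monte Carlo.
lhs = np.mean(np.cosh(tau * Z.sum(axis=1)))

# Right-hand side: M * prod_i E(e^{tau|Z_i|} - tau|Z_i|).
rhs = M * np.prod(np.mean(np.exp(tau * np.abs(Z)) - tau * np.abs(Z), axis=0))

print(f"E cosh(tau*S_n) ~ {lhs:.4f}    M * prod_i E(e^(tau|Z_i|) - tau|Z_i|) ~ {rhs:.4f}")
```

With these choices the estimated left-hand side comes out slightly below the right-hand side, consistent with the lemma.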
Lemma 3.
For all $z \in \mathbb{R}$,
$$\exp\{z\} \le 1 + |z|\exp\{|z|\}.$$
Lemma 4.
Let $Z_1, \ldots, Z_n$ be an Extended Negative Dependence random process with $E(Z_i) = 0$ for $1 \le i \le n$, and suppose $P(|Z_i| \le c) = 1$ for some positive constant $c$. Set $S_n = \sum_{i=1}^{n} Z_i$ and assume that, for all $k$, $0 < E|Z_i|^{k} \le \phi_Z$. Then, for any $\xi > 0$, we have
$$P(|S_n| > n\xi) \le 4M\exp\Big\{-\frac{n\xi}{2}\Big[\log\Big(\frac{\xi}{2\phi_Z}\Big) - 1\Big]\Big\}. \qquad (3)$$
Proof. 
For τ > 0 ,
$$
\begin{aligned}
P(S_n > n\xi) &= P(\tau S_n > \tau n\xi) = P\big(e^{\tau S_n} > e^{\tau n\xi}\big)\\
&\le e^{-\tau n\xi}\, E\big(e^{\tau S_n}\big) && \text{(by Markov's inequality)}\\
&\le e^{-\tau n\xi}\, E\big(2\cosh(\tau S_n)\big) && \text{(by } e^{x} \le 2\cosh x)\\
&\le 2M e^{-\tau n\xi}\prod_{i=1}^{n}\Big[E\big(e^{\tau|Z_i|}\big) - \tau E\big(|Z_i|\big)\Big] && \text{(by Lemma 2)}\\
&\le 2M e^{-\tau n\xi}\prod_{i=1}^{n}\Big[1 + E\big(\tau|Z_i|\,e^{\tau|Z_i|}\big) - \tau E\big(|Z_i|\big)\Big] && \text{(by Lemma 3)}\\
&\le 2M e^{-\tau n\xi}\prod_{i=1}^{n}\Big[1 + E\big(e^{2\tau|Z_i|}\big)\Big] && \text{(by } x \le e^{x} \text{ and } E(|Z_i|) \ge 0)\\
&\le 2M e^{-\tau n\xi}\prod_{i=1}^{n}\Big[1 + \sum_{k=0}^{\infty}\frac{(2\tau)^{k}}{k!}E\big(|Z_i|^{k}\big)\Big] && \Big(\text{by } e^{x} = \sum_{k=0}^{\infty}\frac{x^{k}}{k!}\Big)\\
&\le 2M e^{-\tau n\xi}\prod_{i=1}^{n}\Big[1 + \phi_Z\sum_{k=0}^{\infty}\frac{(2\tau)^{k}}{k!}\Big] && \big(\text{since } E(|Z|^{k}) \le \phi_Z \text{ for all } k\big)\\
&= 2M e^{-\tau n\xi}\prod_{i=1}^{n}\big[1 + \phi_Z e^{2\tau}\big]\\
&\le 2M e^{-\tau n\xi}\prod_{i=1}^{n} e^{\phi_Z e^{2\tau}} && \text{(by } 1 + x \le e^{x})\\
&= 2M e^{-\tau n\xi + n\phi_Z e^{2\tau}}.
\end{aligned}
$$
The right-hand side is minimized at $\tau = \frac{1}{2}\log\big(\frac{\xi}{2\phi_Z}\big)$.
We obtain
$$P(S_n > n\xi) \le 2M\exp\Big\{-\frac{n\xi}{2}\Big[\log\Big(\frac{\xi}{2\phi_Z}\Big) - 1\Big]\Big\}. \qquad (5)$$
Moreover, we have
$$P(|S_n| > n\xi) = P(S_n > n\xi) + P(S_n < -n\xi).$$
Since $\{-Z_n, n \ge 1\}$ is also END, the same argument applied to $-S_n$ gives
$$P(S_n < -n\xi) = P(-S_n > n\xi) \le 2M\exp\Big\{-\frac{n\xi}{2}\Big[\log\Big(\frac{\xi}{2\phi_Z}\Big) - 1\Big]\Big\}. \qquad (6)$$
By (5) and (6), the result (3) follows. □

3. Main Results and Proofs

Theorem 1.
Let $Z_1, \ldots, Z_n$ be END random variables with $E(Z_i^{k}) < \infty$ for all $k$. If there exists a positive constant $c$ such that $P(|Z_i| \le c) = 1$ for $i \ge 1$, and for all $k$, $0 < E(|Z_i - EZ_i|^{k}) \le \varphi_z$, then, for any $\xi$ with $\xi > 2\varphi_z$, we have
$$P(|S_n - ES_n| > n\xi) \le 4M\exp\Big\{-\frac{n\xi}{2}\Big[\log\Big(\frac{\xi}{2\varphi_z}\Big) - 1\Big]\Big\}.$$
Proof. 
Using Markov’s inequality, along with Corollary 1 and Lemma 2,
$$
\begin{aligned}
P(S_n - ES_n > n\xi) &\le e^{-\tau n\xi}\, E\big(e^{\tau(S_n - ES_n)}\big) && \text{(by Markov's inequality)}\\
&\le e^{-\tau n\xi}\, E\big(2\cosh\big(\tau(S_n - ES_n)\big)\big) && \text{(by } e^{x} \le 2\cosh x)\\
&\le 2M e^{-\tau n\xi}\prod_{i=1}^{n} E\big(e^{\tau|Z_i - EZ_i|} - \tau|Z_i - EZ_i|\big) && \text{(by Lemma 2)}\\
&= 2M e^{-\tau n\xi}\prod_{i=1}^{n}\Big[E\big(e^{\tau|Z_i - EZ_i|}\big) - \tau E\big(|Z_i - EZ_i|\big)\Big].
\end{aligned}
$$
By Lemma 3, we obtain
$$
\begin{aligned}
P(S_n - ES_n > n\xi) &\le 2M e^{-\tau n\xi}\prod_{i=1}^{n}\Big[1 + E\big(\tau|Z_i - EZ_i|\,e^{\tau|Z_i - EZ_i|}\big) - \tau E\big(|Z_i - EZ_i|\big)\Big]\\
&\le 2M e^{-\tau n\xi}\prod_{i=1}^{n}\Big[1 + E\big(e^{2\tau|Z_i - EZ_i|}\big)\Big] && \text{(by } x \le e^{x} \text{ and } E|Z_i - EZ_i| \ge 0)\\
&\le 2M e^{-\tau n\xi}\prod_{i=1}^{n}\Big[1 + \sum_{k=0}^{\infty}\frac{(2\tau)^{k}}{k!}E\big(|Z_i - EZ_i|^{k}\big)\Big] && \Big(\text{by } e^{x} = \sum_{k=0}^{\infty}\frac{x^{k}}{k!}\Big)\\
&\le 2M e^{-\tau n\xi}\prod_{i=1}^{n}\Big[1 + \varphi_z\sum_{k=0}^{\infty}\frac{(2\tau)^{k}}{k!}\Big]\\
&\le 2M e^{-\tau n\xi}\prod_{i=1}^{n} e^{\varphi_z e^{2\tau}} && \text{(by } 1 + x \le e^{x})\\
&= 2M e^{-\tau n\xi + n\varphi_z e^{2\tau}}.
\end{aligned}
$$
By taking $\tau = \frac{1}{2}\log\big(\frac{\xi}{2\varphi_z}\big)$, we obtain
$$P(S_n - ES_n > n\xi) \le 2M\exp\Big\{-\frac{n\xi}{2}\Big[\log\Big(\frac{\xi}{2\varphi_z}\Big) - 1\Big]\Big\}.$$
Applying the same argument to $\{-(Z_n - EZ_n)\}$, which is also END, yields
$$P(|S_n - ES_n| > n\xi) \le 4M\exp\Big\{-\frac{n\xi}{2}\Big[\log\Big(\frac{\xi}{2\varphi_z}\Big) - 1\Big]\Big\}.$$
□
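To make the rate in Theorem 1 concrete, here is a small Python helper (our own sketch; the constants `M`, `xi`, and `phi_z` are placeholders, not values from the paper) that evaluates the right-hand side of the bound. Note that the bound decays geometrically in $n$ only when $\log(\xi/(2\varphi_z)) > 1$, i.e., $\xi > 2e\varphi_z$.

```python
import math

def theorem1_bound(n: int, xi: float, phi_z: float, M: float = 1.0) -> float:
    """Right-hand side of Theorem 1: 4*M*exp(-(n*xi/2)*(log(xi/(2*phi_z)) - 1))."""
    if xi <= 2.0 * phi_z:
        raise ValueError("Theorem 1 requires xi > 2*phi_z.")
    exponent = -(n * xi / 2.0) * (math.log(xi / (2.0 * phi_z)) - 1.0)
    return 4.0 * M * math.exp(exponent)

# Illustrative constants (placeholders, not taken from the paper's data).
M, phi_z, xi = 1.0, 0.25, 2.0   # xi > 2*e*phi_z ~ 1.36, so the bound decays in n

for n in (10, 50, 100, 500):
    print(f"n = {n:4d}   bound = {theorem1_bound(n, xi, phi_z, M):.3e}")
```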

4. Almost Complete Convergence

Corollary 2.
Consider a sequence $\{Z_n, n \ge 1\}$ of identically distributed END random variables. If there is a positive constant $c$ such that $P(|Z_i| \le c) = 1$ for $i \ge 1$, then there is a positive integer $n_0$ such that, for all $n \ge n_0$ and any $\xi$ with $2\varphi_z < \xi \le 1$, we get
$$P(|S_n - ES_n| > n\xi) \le 4M\exp\Big\{-\frac{n\xi}{2}\Big[\log\Big(\frac{\xi}{2\varphi_z}\Big) - 1\Big]\Big\}.$$
Theorem 2.
Consider a sequence $\{Z_n, n \ge 1\}$ of END random variables with $E(Z_i) = 0$ and $P(|Z_i| \le c) = 1$ for $i \ge 1$, where $c$ is a positive constant. Then, for any $\Delta > 0$ and any $\xi > 0$,
$$\sum_{n=1}^{\infty} P\big(|S_n| > n^{\Delta}\xi\big) < \infty.$$
Proof. 
For any positive value of ξ , we can deduce from Lemma 4 that
$$\sum_{n=1}^{\infty} P\big(|S_n| > n^{\Delta}\xi\big) \le 4M\sum_{n=1}^{\infty} e^{-\frac{n^{\Delta}\xi}{2}\big[\log\big(\frac{\xi}{2\phi_Z}\big) - 1\big]} = 4M\sum_{n=1}^{\infty}\Big(e^{-\frac{\xi}{2}\big[\log\big(\frac{\xi}{2\phi_Z}\big) - 1\big]}\Big)^{n^{\Delta}} = 4M\sum_{n=1}^{\infty} (k_1)^{n^{\Delta}} < \infty,$$
where $k_1$ ($0 < k_1 < 1$) is a positive constant that does not depend on $n$. Therefore, the proof is finished. □
Theorem 3.
Consider a sequence $\{Z_n, n \ge 1\}$ of identically distributed END random variables. Assume that there is a positive integer $n_0$ such that, for every $n \ge n_0$, $P(|Z_i| \le c) = 1$ for $i \ge 1$, where $c$ is a positive constant. Then, for any $\Delta > 0$ and any $\xi > 0$,
$$\sum_{n=1}^{\infty} P\big(|S_n - ES_n| > n^{\Delta}\xi\big) < \infty.$$
Proof. 
According to Theorem 1, for any given $\xi > 0$ we have
$$\sum_{n=1}^{\infty} P\big(|S_n - ES_n| > n^{\Delta}\xi\big) \le 4M\sum_{n=1}^{\infty}\Big(\exp\Big\{-\frac{\xi}{2}\Big[\log\Big(\frac{\xi}{2\varphi_z}\Big) - 1\Big]\Big\}\Big)^{n^{\Delta}} = 4M\sum_{n=1}^{\infty} [k_2]^{n^{\Delta}} < \infty,$$
where $k_2$ is a positive constant independent of $n$ with $0 < k_2 < 1$. Therefore, we have successfully completed the proof. □

5. Application of Results in a Linear Model

The Autoregressive Model of Order 1 (AR(1))

We explore a first-order autoregressive model AR(1), defined by
$$Z_i = \theta Z_{i-1} + \xi_i, \quad i = 1, 2, \ldots, \qquad (15)$$
where $\{\xi_i, i \ge 1\}$ is a sequence of END random variables with $\xi_0 = Z_0 = 0$, $0 \le E\xi_k^4 < \infty$, $k = 1, 2, \ldots$, and $\theta$ is a parameter with $|\theta| < 1$. Hence, iterating (15) gives
$$Z_i = \sum_{j=0}^{i-1} \theta^{j} \xi_{i-j}. \qquad (16)$$
The coefficient $\theta$ is estimated by the least squares method, resulting in the estimator
$$\hat\theta_n = \frac{\sum_{j=1}^{n} Z_j Z_{j-1}}{\sum_{j=1}^{n} Z_{j-1}^2}. \qquad (17)$$
From (15) and (17), we obtain
$$\hat\theta_n - \theta = \frac{\sum_{j=1}^{n} \xi_j Z_{j-1}}{\sum_{j=1}^{n} Z_{j-1}^2}. \qquad (18)$$
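For concreteness, the following Python sketch (ours, not from the paper) simulates the recursion (15) and evaluates the least-squares estimator (17); for simplicity the innovations are taken i.i.d. uniform on $[-0.5, 0.5]$, which is a bounded special case of an END sequence with $M = 1$, and the true value `theta = 0.6` is an arbitrary choice.

```python
import numpy as np

def simulate_ar1(theta: float, n: int, rng: np.random.Generator) -> np.ndarray:
    """Generate Z_1,...,Z_n from Z_i = theta*Z_{i-1} + xi_i with Z_0 = 0."""
    # Bounded innovations; i.i.d. sequences are END with dominating coefficient M = 1.
    xi = rng.uniform(-0.5, 0.5, size=n)
    z = np.empty(n)
    prev = 0.0
    for i in range(n):
        prev = theta * prev + xi[i]
        z[i] = prev
    return z

def ls_estimator(z: np.ndarray) -> float:
    """Least-squares estimator (17): sum Z_j Z_{j-1} / sum Z_{j-1}^2, with Z_0 = 0."""
    z_prev = np.concatenate(([0.0], z[:-1]))
    return float(np.sum(z * z_prev) / np.sum(z_prev**2))

rng = np.random.default_rng(1)
theta = 0.6
for n in (100, 1_000, 10_000):
    z = simulate_ar1(theta, n, rng)
    print(f"n = {n:6d}   theta_hat = {ls_estimator(z):.4f}")
```

As $n$ grows, the printed estimates settle near the true value, the behavior that Theorem 4 and Corollary 3 below quantify.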
Theorem 4.
Once the requirements of Theorem 1 are met, for each positive $\xi^2$ satisfying $(EZ_1)\rho^2 < \xi^2 < \frac{\sum_{j=1}^{n} EZ_{j-1}^2}{2n^2}$, we obtain
$$P\big(\sqrt{n}\,|\hat\theta_n - \theta| > \rho\big) \le 4M e^{-\frac{n\big(\frac{\xi^2}{\rho^2} - EZ_1\big)}{2}\big[\log\big(\frac{\frac{\xi^2}{\rho^2} - EZ_1}{2\varphi_z}\big) - 1\big]} + 2M e^{\frac{n^2\xi^2 - B_n}{2}\big[\log\big(\frac{B_n - n^2\xi^2}{2n\psi_Z}\big) - 1\big]},$$
where $B_n = \sum_{j=1}^{n} EZ_{j-1}^2$.
Proof. 
From (18), it follows that
$$P\big(\sqrt{n}\,|\hat\theta_n - \theta| > \rho\big) = P\left(\frac{\frac{1}{\sqrt{n}}\big|\sum_{j=1}^{n}\xi_j Z_{j-1}\big|}{\frac{1}{n}\sum_{j=1}^{n} Z_{j-1}^2} > \rho\right).$$
Thus, based on elementary probability properties and Hölder's inequality, it follows that, for every $\xi > 0$,
$$
\begin{aligned}
P\big(\sqrt{n}\,|\hat\theta_n - \theta| > \rho\big) &\le P\Big(\frac{1}{n}\sum_{j=1}^{n} Z_j \ge \frac{\xi^2}{\rho^2}\Big) + P\Big(\frac{1}{n^2}\sum_{j=1}^{n} Z_{j-1}^2 \le \xi^2\Big)\\
&= P\Big(\sum_{j=1}^{n} Z_j \ge \frac{\xi^2}{\rho^2}\,n\Big) + P\Big(\sum_{j=1}^{n} Z_{j-1}^2 \le \xi^2 n^2\Big)\\
&=: I_{n1} + I_{n2}.
\end{aligned}
$$
First, we estimate $I_{n1}$, and then $I_{n2}$.
It is evident that
$$I_{n1} = P\Big(\sum_{j=1}^{n} Z_j \ge \frac{\xi^2}{\rho^2}\,n\Big) = P\Big(\sum_{j=1}^{n} \big(Z_j - EZ_j + EZ_j\big) \ge \frac{\xi^2}{\rho^2}\,n\Big) = P\Big(\sum_{j=1}^{n} \big(Z_j - EZ_j\big) \ge \Big(\frac{\xi^2}{\rho^2} - EZ_1\Big)n\Big). \qquad (20)$$
Applying Theorem 1 to the right-hand side of Equation (20), we obtain
$$I_{n1} \le 4M\exp\Big\{-\frac{n\big(\frac{\xi^2}{\rho^2} - EZ_1\big)}{2}\Big[\log\Big(\frac{\frac{\xi^2}{\rho^2} - EZ_1}{2\varphi_z}\Big) - 1\Big]\Big\}.$$
Next, we establish an upper bound for $I_{n2}$. For any $\tau > 0$, it follows that
$$
\begin{aligned}
I_{n2} &= P\Big(\sum_{j=1}^{n} Z_{j-1}^2 \le \xi^2 n^2\Big) = P\Big(n^2\xi^2 - \sum_{j=1}^{n} Z_{j-1}^2 \ge 0\Big)\\
&= E\Big(I\Big\{n^2\xi^2 - \sum_{j=1}^{n} Z_{j-1}^2 \ge 0\Big\}\Big)\\
&\le E\exp\Big\{\tau\Big(n^2\xi^2 - \sum_{j=1}^{n} Z_{j-1}^2\Big)\Big\} = e^{\tau n^2\xi^2}\, E\Big(e^{-\tau\sum_{j=1}^{n} Z_{j-1}^2}\Big)\\
&\le 2\, e^{\tau n^2\xi^2}\, E\Big(\cosh\Big(\tau\sum_{j=1}^{n} Z_{j-1}^2\Big)\Big) && \text{(by } e^{-x} \le 2\cosh x)\\
&\le 2M e^{\tau n^2\xi^2}\prod_{j=1}^{n} E\Big(e^{\tau Z_{j-1}^2} - \tau Z_{j-1}^2\Big) && \text{(by Lemma 2)}\\
&\le 2M e^{\tau n^2\xi^2}\prod_{j=1}^{n}\Big[1 + E\big(\tau Z_{j-1}^2\, e^{\tau Z_{j-1}^2}\big) - \tau E Z_{j-1}^2\Big] && \text{(by Lemma 3)}\\
&\le 2M e^{\tau n^2\xi^2}\prod_{j=1}^{n}\Big[1 + E\big(e^{2\tau Z_{j-1}^2}\big) - \tau E Z_{j-1}^2\Big] && \text{(by } x \le e^{x})\\
&\le 2M e^{\tau n^2\xi^2}\prod_{j=1}^{n}\Big[1 + \sum_{k=0}^{\infty}\frac{(2\tau)^{k}}{k!}E\big(Z_{j-1}^{2k}\big) - \tau E Z_{j-1}^2\Big] && \Big(\text{by } e^{x} = \sum_{k=0}^{\infty}\frac{x^{k}}{k!}\Big)\\
&\le 2M e^{\tau n^2\xi^2}\prod_{j=1}^{n}\Big[1 + \psi_Z\, e^{2\tau} - \tau E Z_{j-1}^2\Big] && \big(\text{since } E\big(Z_{j-1}^{2}\big)^{k} \le \psi_Z \text{ for all } k\big)\\
&\le 2M e^{\tau n^2\xi^2}\prod_{j=1}^{n} e^{\psi_Z e^{2\tau} - \tau E Z_{j-1}^2} && \text{(by } 1 + x \le e^{x})\\
&= 2M e^{\tau n^2\xi^2 + n\psi_Z e^{2\tau} - \tau B_n}.
\end{aligned}
$$
By taking $\tau = \frac{1}{2}\log\big(\frac{B_n - n^2\xi^2}{2n\psi_Z}\big)$, we obtain, for any $\rho > 0$,
$$P\big(\sqrt{n}\,|\hat\theta_n - \theta| > \rho\big) \le 4M e^{-\frac{n\big(\frac{\xi^2}{\rho^2} - EZ_1\big)}{2}\big[\log\big(\frac{\frac{\xi^2}{\rho^2} - EZ_1}{2\varphi_z}\big) - 1\big]} + 2M e^{\frac{n^2\xi^2 - B_n}{2}\big[\log\big(\frac{B_n - n^2\xi^2}{2n\psi_Z}\big) - 1\big]}.$$
□
Corollary 3.
The sequence $\hat\theta_n$ converges completely to the parameter $\theta$ of the first-order autoregressive process; that is, for every $\rho > 0$,
$$\sum_{n} P\big(\sqrt{n}\,|\hat\theta_n - \theta| > \rho\big) < +\infty.$$
Proof. 
By applying Theorem 3 and assuming that $EZ_j^2$ and $\psi_Z$ are finite, we may directly obtain the result stated in Corollary 3. □

6. Numerical Illustration

In this section, we provide a comprehensive numerical demonstration of concentration inequalities applied to a dataset generated from an autoregressive process with transformed residuals. Our main objective is to validate and visualize the concentration of the sum of transformed residuals around its expected value.
Concentration inequalities are vital tools in probability theory and statistical analysis, offering limits on the deviation of random variables from their expected values. This section explores the application of concentration inequalities to a dataset derived from an autoregressive process with transformed residuals, examining the implications of extreme value theory (EVT) on the distribution’s tails.
The methodology of this study involved generating a dataset representing an autoregressive AR(1) process (Figure 1), extracting its residuals (Figure 2), and transforming those residuals into Extended Negative Dependence (END) random variables using the sine function.
When we proceed to verify the END properties for the transformed residuals, we find that the condition that, for all $k$, $0 < E(|Z - EZ|^{k}) \le \varphi_z$ is not satisfied. Therefore, we apply an additional transformation using a sine function; see Figure 3 below.
Before verifying the above condition for this final transformation, we ensure that the sine-transformed data remain Extended Negative Dependence (END) random variables through moment conditions and the Q-Q plot in Figure 4 below.
The points roughly follow a straight line from around −2.0 to 1.5 on the x-axis. This indicates that the data approximates a normal distribution.
Prior to delving into the concentration inequality, we rigorously confirm that the transformed residuals meet the conditions outlined in Theorem 1. Specifically, we validate that, for all $k$, $0 < E(|Z - EZ|^{k}) \le \varphi_z$.
Ensuring the meticulous verification of these conditions is crucial for laying the foundation for the subsequent analysis of concentration inequalities.
With these conditions confirmed, we proceed to compute the theoretical bound for the concentration inequality. The bound is expressed as
$$P(|S_n - ES_n| > n\xi) \le 4M e^{-\frac{n\xi}{2}\big[\log\big(\frac{\xi}{2\varphi_Z}\big) - 1\big]},$$
where M, ξ , and φ Z are constants determined based on the characteristics of the transformed residuals.
Subsequently, we compute the actual probability of the sum of transformed residuals deviating from its expected value for varying sample sizes. This involves calculating the cumulative sum of the transformed data ($S_n$) and comparing it with the expected cumulative sum ($ES_n$).
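The following Python sketch is a rough reconstruction of the pipeline described above; the paper does not list its code, so the AR(1) parameter, the constants `M`, `xi`, and `phi_z`, and the use of Monte Carlo replications to estimate the deviation probability are all our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
theta, n_max, n_rep = 0.5, 500, 1_000
M, xi, phi_z = 1.0, 2.0, 0.25            # placeholder constants, xi > 2*e*phi_z

def sine_residuals(n: int) -> np.ndarray:
    """AR(1) sample, least-squares residuals, then a sine transform (as in the text)."""
    eps = rng.standard_normal(n)
    z = np.empty(n)
    prev = 0.0
    for i in range(n):
        prev = theta * prev + eps[i]
        z[i] = prev
    z_prev = np.concatenate(([0.0], z[:-1]))
    theta_hat = np.sum(z * z_prev) / np.sum(z_prev**2)
    residuals = z - theta_hat * z_prev
    return np.sin(residuals)              # bounded in [-1, 1]

samples = np.array([sine_residuals(n_max) for _ in range(n_rep)])
mean_per_obs = samples.mean()             # proxy for the mean of one transformed residual

for n in (50, 100, 200, 400):
    S_n = samples[:, :n].sum(axis=1)      # cumulative sum S_n over the first n residuals
    ES_n = n * mean_per_obs               # expected cumulative sum ES_n
    empirical = np.mean(np.abs(S_n - ES_n) > n * xi)
    bound = min(1.0, 4 * M * np.exp(-(n * xi / 2) * (np.log(xi / (2 * phi_z)) - 1)))
    print(f"n = {n:3d}   empirical ~ {empirical:.4f}   theoretical bound = {bound:.3e}")
```

With these placeholder constants, the empirical frequency is essentially zero and sits well below the theoretical bound, which mirrors the kind of comparison reported in Figure 5.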
The culmination of our numerical illustration is visualized through a line plot. This plot showcases the theoretical bound alongside the actual probability for different sample sizes, offering a visual comparison of the concentration behavior in Figure 5 below.
The results obtained from the numerical illustration shed light on the concentration behavior of the sum of transformed residuals. The visual comparison of the theoretical bound and the actual probability reinforces the validity of the concentration inequality for the given dataset.
Furthermore, the verification of conditions ensures the applicability of Theorem 1 to the transformed residuals, affirming the foundational assumptions crucial for the subsequent analysis.
In conclusion, this section provides a comprehensive numerical illustration of concentration inequalities applied to transformed residuals from an autoregressive process. Through meticulous verification of conditions, calculation of theoretical bounds, and visual comparison with actual probabilities, we gain insights into the concentration behavior of the transformed data.
This analysis contributes to a deeper understanding of statistical properties in the context of EVT, particularly for non-independent random variables exhibiting extreme value behavior. The verified concentration inequalities provide a solid foundation for subsequent analyses and form a basis for further exploration into the tails of the distribution.
Through this illustration, we not only showcase the application of concentration inequalities but also emphasize the importance of thorough verification and visual representation in establishing the reliability of statistical results.

7. Conclusions

Our research investigates the complete convergence of Extended Negative Dependence (END) random variables and their applications in autoregressive AR(1) models. By employing exponential inequalities and specific inequalities for the likelihood of END variables, we demonstrate significant insights into their convergence properties. Our findings show that the proposed concentration inequalities effectively characterize the statistical behavior of END random variables.
The results from our theoretical analysis and numerical illustrations underscore the robustness of our approach. Specifically, the application of our methodology to AR(1) models reveals important dynamics and convergence properties of END variables, validating the practical significance of our theoretical contributions.
Moving forward, our methodology can be extended to other statistical models to further validate its applicability. Additionally, exploring higher-order moments and different tail behaviors in real-world datasets can enhance our understanding of extreme events and dependencies. This research lays a solid foundation for future studies, encouraging further collaboration and innovation in statistical analysis and extreme value theory.

Author Contributions

Authors Z.C.E., A.B., H.D., F.A. and Z.K. have made significant contributions to this manuscript. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by two funding sources: (1) Princess Nourah bint Abdulrahman University Researchers Supporting Project number (PNURSP2024R358), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia and (2) The Deanship of Research and Graduate Studies at King Khalid University, which provided a grant (RGP1/163/45) for a Small Group Research Project.

Data Availability Statement

Data used in this research study are within the paper.

Acknowledgments

The authors thank and extend their appreciation to the funders of this work: (1) Princess Nourah bint Abdulrahman University Researchers Supporting Project Number (PNURSP2024R358), Princess Nourah bint Abdulrahman University, Riyadh, Saudi Arabia; (2) the Deanship of Research and Graduate Studies at King Khalid University, for funding this work through a Small Group Research Project under grant number RGP1/163/45.

Conflicts of Interest

There are no conflicts of interest to declare by the authors.

References

  1. Liu, L. Precise large deviations for dependent random variables with heavy tails. Stat. Probab. Lett. 2009, 79, 1290–1298. [Google Scholar] [CrossRef]
  2. Shen, A. Probability inequalities for END sequence and their applications. J. Inequalities Appl. 2011, 2011, 98. [Google Scholar] [CrossRef]
  3. Shen, A.; Zhang, Y.; Wang, W. Complete convergence and complete moment convergence for Extended Negative Dependence random variables. Filomat 2017, 31, 1381–1394. [Google Scholar] [CrossRef]
  4. Yan, J. Complete convergence and complete moment convergence for maximal weighted sums of Extended Negative Dependence random variables. Acta Math. Sin. Engl. Ser. 2018, 34, 1501–1516. [Google Scholar] [CrossRef]
  5. Wang, X.; Xu, C.; Hu, T.-C.; Volodin, A.; Hu, S. On complete convergence for widely orthant dependent random variables and its applications in nonparametric regression models. TEST 2014, 23, 607–629. [Google Scholar] [CrossRef]
  6. Baum, L.E.; Katz, M. Convergence rates in the law of large numbers. Trans. Am. Math. Soc. 1965, 120, 108–123. [Google Scholar] [CrossRef]
  7. Chen, Y.; Wang, L.; Wang, Y. Uniform asymptotics for the finite-time ruin probabilities of two kinds of nonstandard bidimensional risk models. J. Math. Anal. Appl 2013, 401, 114–129. [Google Scholar] [CrossRef]
  8. Chow, Y.S. On the rate of moment complete convergence of sample sums and extremes. Bull. Inst. Math. Acad. Sin. 1988, 16, 177–201. [Google Scholar]
  9. Ebrahimi, N.; Ghosh, M. Multivariate negative dependence. Commun. Stat. Theory Methods 1981, 10, 307–337. [Google Scholar]
  10. Wu, W.; Min, W. On the asymptotic theory of unbounded negatively dependent random variables. Stat. Sin. 2005, 15, 1141–1155. [Google Scholar]
  11. Hsu, P.L.; Robbins, H. Complete convergence and the law of large numbers. Proc. Natl. Acad. Sci. USA 1947, 33, 25–31. [Google Scholar]
  12. Tang, Q. Insensitivity to negative dependence of the asymptotic behavior of precise large deviations. Electron. J. Probab. 2006, 11, 107–120. [Google Scholar] [CrossRef]
  13. Wu, Y.; Guan, M. Convergence properties of the sums for sequences of END random variables. J. Korean Math. Soc. 2012, 49, 1097–1110. [Google Scholar]
  14. Li, Y. On the rate of strong convergence for a recursive probability density estimator of END samples and its applications. J. Math. Inequal. 2017, 11, 335–343. [Google Scholar]
  15. Kaddour, Z.; Belguerna, A.; Benaissa, S. New tail probability type inequalities and complete convergence for wod random variables with application to linear model generated by wod errors. J. Sci. Arts 2022, 22, 309–318. [Google Scholar] [CrossRef]
  16. Boucheron, S.; Lugosi, G.; Massart, P. Concentration Inequalities: A Nonasymptotic Theory of Independence; Oxford University Press: Oxford, UK, 2013. [Google Scholar]
  17. Vershynin, R. High-Dimensional Probability: An Introduction with Applications in Data Science; Cambridge University Press: Cambridge, UK, 2018. [Google Scholar]
  18. Joag-Dev, K.; Proschan, F. Negative Association of Random Variables, with Applications. Ann. Stat. 1983, 11, 286. [Google Scholar] [CrossRef]
  19. Chen, Y.; Chen, A.; Ng, K.W. The Strong Law of Large Numbers for Extended Negatively Dependent Random Variables. J. Appl. Probab. 2010, 47, 908–922. [Google Scholar] [CrossRef]
Figure 1. Generated dataset representing an autoregressive process AR(1).
Figure 2. Residuals.
Figure 3. Sine transformation.
Figure 4. Q-Q plot.
Figure 5. Comparison of the concentration behavior.

