Optimized Tail Bounds for Random Matrix Series

Random matrix series are a significant component of random matrix theory, offering rich theoretical content and broad application prospects. In this paper, we propose modified versions of tail bounds for random matrix series, including matrix Gaussian (or Rademacher), sub-Gaussian, and infinitely divisible (i.d.) series. Unlike existing studies, our results depend on the intrinsic dimension instead of the ambient dimension. In some cases, the intrinsic dimension is much smaller than the ambient dimension, which makes the modified versions suitable for high-dimensional or infinite-dimensional settings. In addition, we obtain expectation bounds for random matrix series based on the intrinsic dimension.


Introduction
Random matrix theory is a significant branch of mathematics, which delves into the properties and behavior of random matrices. Its applications span various fields, including wireless communications [1], combinatorial optimization [2], matrix low-rank approximation [3], neural networks [4,5], and deep learning [6]. Random matrices have a wide range of applications in physics, entropy, and information science. They can provide comprehensive descriptions and analyses when dealing with multiple interacting elements, high-dimensional systems, and complex statistical relationships. Random matrices can better capture the complex interactions between multiple particles or multiple physical processes. When dealing with high-dimensional physical systems with a large number of degrees of freedom, random matrices provide a more natural and effective representation [7,8]. Random matrices can be used to calculate the entropy of complex systems to measure the degree of chaos and uncertainty of the system [9,10]. In the field of information science, random matrices can be used for performance optimization and signal processing in communication systems [11]. Random matrix theory provides a powerful theoretical basis for dealing with problems in these fields. Among these topics, the random matrix series is an important subject in random matrix theory, with wide application and research value.
The study of random matrix theory comprises two branches: asymptotic theory and non-asymptotic theory. There have been several notable asymptotic results in random matrix theory, including Wigner's semicircle law [12], the Marchenko-Pastur law [13], and the Bai-Yin law [14]. While these asymptotic statements offer precise limiting results as the matrix dimension approaches infinity, they do not specify the rate at which the probability terms converge to their limits. In response to this challenge, non-asymptotic approaches to analyzing these probability terms have emerged.
Ahlswede and Winter [15] illustrated the application of the Golden-Thompson inequality [16,17] in extending the Laplace transform method to the matrix scenario to derive tail bounds for sums of random matrices. Tropp [18] utilized a corollary of Lieb's theorem [19] to achieve a significant improvement over the Ahlswede-Winter outcome. To address the notable limitation that these results depend on the ambient dimension of the matrix, so that the bounds become excessively loose for high-dimensional matrices, Hsu et al. [20] presented a tighter analogue of matrix Bernstein's inequality. Minsker [21] extended Bernstein's concentration inequalities for random matrices by enhancing the results in [20] through the introduction of the concept of effective rank. Zhang et al. [22] introduced dimension-free tail bounds for the largest singular value of sums of random matrices.
The matrix series of the form ∑_k x_k A_k has played a crucial role in recent studies [23-25], where x_k represents a random variable and A_k is a fixed matrix. The variable x_k can encompass various types of random variables, including Gaussian, Bernoulli, infinitely divisible random variables, and more. Tropp [18] utilized Gaussian series to study the key characteristics of matrix tail bounds. Zhang et al. [26] studied tail inequalities for the largest eigenvalue of a matrix infinitely divisible (i.d.) series and applied them to optimization problems and compressed sensing.
Let {A_k}_{k=1}^n be a finite sequence of fixed Hermitian matrices with dimension d, and let {α_k} be a finite sequence of independent standard Gaussian variables. Tropp [18] gave the following result for any t ≥ 0:

P{ λ_max(∑_k α_k A_k) ≥ √(2ρt) } ≤ d · e^{−t},   (2)

where ρ := λ_max(∑_k A_k²). A significant distinction between the scalar bound (1) and (2) is the presence of the matrix dimension factor d in the latter. Hsu et al. [20] obtained the following tail bound:

P{ λ_max(∑_k α_k A_k) ≥ √(2ρt) } ≤ (tr(Ξ)/λ_max(Ξ)) · t(e^t − t − 1)^{−1},   (3)

where Ξ := ∑_k A_k². We observe that the right side of each inequality is the product of two terms; when both terms are smaller, the result is tighter. Comparing (2) with (3), we know that tr(Ξ)/λ_max(Ξ) ≤ d but t(e^t − t − 1)^{−1} > e^{−t} for t > 0. That is, one term of (3) becomes smaller than its counterpart in (2), while the other term becomes larger. In other words, both bounds have their respective limitations.
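The two comparison facts used above can be checked numerically. The following sketch is illustrative only: Xi is an arbitrary randomly generated positive-semidefinite matrix, not one arising from the paper's setting; it verifies that tr(Ξ)/λ_max(Ξ) ≤ d for PSD matrices and that t(e^t − t − 1)^{−1} > e^{−t} on a grid of t > 0.

```python
import numpy as np

# (a) tr(Xi)/lambda_max(Xi) <= d for any d-dimensional PSD matrix Xi.
rng = np.random.default_rng(0)
d = 8
G = rng.standard_normal((d, d))
Xi = G @ G.T                      # random positive-semidefinite matrix
ratio = np.trace(Xi) / np.linalg.eigvalsh(Xi).max()
assert ratio <= d

# (b) t*(e^t - t - 1)^(-1) > e^(-t) for t > 0: the factor in bound (3)
# is pointwise larger than the exponential factor in bound (2).
ts = np.linspace(1e-3, 20, 2000)
lhs = ts / (np.exp(ts) - ts - 1)
rhs = np.exp(-ts)
assert np.all(lhs > rhs)
```

This makes the trade-off concrete: the dimensional factor shrinks in (3), but the t-dependent factor grows, so neither bound dominates the other.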
Let {β_k}_{k=1}^n be a finite sequence of independent sub-Gaussian random variables; a corresponding tail bound can be obtained, in which c is an absolute constant. Let {γ_k}_{k=1}^n be a finite sequence of independent infinitely divisible random variables.
For any 0 < t < ρh(M^−), Zhang et al. [26] deduced tail bounds in which h(M^−) := lim_{s↑M} h(s) and h^{−1}(s) is the inverse of h(s), together with corresponding bounds for any t ≥ ρh(M^−), where ρ := λ_max(∑_{k=1}^K B_k²). In addition, Tropp [27] gave an expectation bound for the matrix Gaussian series, and Zhang et al. [26] also proposed an expectation bound for infinitely divisible matrix series under some given conditions. However, the significant drawback of the above results lies in their reliance on the ambient dimension of the matrix: the bounds tend to be very loose when the matrices have a high dimension. To solve this problem, we optimize the existing theory. Tighter tail bounds for random matrices mean more precise and reliable probability estimates, which enables a more accurate grasp of the behavior of random matrices and helps to improve the accuracy, efficiency, and reliability of both theory and applications.

Overview of Main Results
With the aim of overcoming the limitations of the existing theory, and to complement and refine existing random matrix theory, we put forward optimized tail and expectation bounds for random matrix series in this paper, including matrix Gaussian (or Rademacher), sub-Gaussian, and infinitely divisible (i.d.) series. This makes the modified versions potentially adaptable to high-dimensional or infinite-dimensional matrix settings. Taking the matrix Gaussian series as an example, we obtain tighter tail and expectation bounds; the quantities d and ω² appearing in them will be introduced in detail later in the paper. The rest of this paper is organized as follows. Section 2 introduces some preliminary knowledge on the intrinsic dimension and on the Gaussian (or Rademacher), sub-Gaussian, and infinitely divisible (i.d.) distributions. Section 3 gives tail and expectation bounds based on the intrinsic dimension for Gaussian (or Rademacher), sub-Gaussian, and infinitely divisible (i.d.) matrix series. The last section concludes the paper.

Notations and Preliminaries
In this section, some preliminary knowledge is provided about the intrinsic dimension of a matrix, as well as about the Gaussian (or Rademacher), sub-Gaussian, and infinitely divisible distributions and about matrix series.

The Intrinsic Dimension
Existing tail bounds on random matrix series depend on the ambient dimension of the matrix. We introduce the concept of the intrinsic dimension, which is much smaller than the ambient dimension in some cases (see also [27]).

Definition 1. For a positive-semidefinite matrix S, the intrinsic dimension is defined as intdim(S) := tr(S)/∥S∥.

It can be seen from the definition that the intrinsic dimension is not significantly affected by changes in the size of the matrix. In fact, when the eigenvalues of S decay rapidly, the intrinsic dimension is much smaller than the ambient dimension.
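The behavior described above is easy to see numerically. The following sketch (illustrative only; the geometric spectrum is a made-up example) computes intdim(S) = tr(S)/∥S∥ for a matrix with rapidly decaying eigenvalues and confirms that it is far below the ambient dimension.

```python
import numpy as np

def intdim(S):
    """Intrinsic dimension tr(S)/||S|| of a positive-semidefinite matrix."""
    return np.trace(S) / np.linalg.norm(S, 2)

d = 1000
eigs = 2.0 ** -np.arange(d)      # geometrically decaying spectrum
S = np.diag(eigs)

# intdim always lies between 1 and the ambient dimension d, and for
# this decaying spectrum it is close to 2 even though d = 1000.
assert 1.0 <= intdim(S) <= d
assert intdim(S) < 3
```

Padding S with further near-zero eigenvalues changes d but barely moves intdim(S), which is exactly the insensitivity to matrix size mentioned in the definition.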

Several Distributions
In this section, we briefly introduce three random distributions and their moment generating functions, including Gaussian (or Rademacher), sub-Gaussian, and infinitely divisible (i.d.) distributions.
The Gaussian distribution is a very important continuous distribution in probability theory and statistics, and is often used to represent real-valued random variables with unknown distribution. Given a standard Gaussian variable α, the moment generating function (mgf) is given by E e^{θα} = e^{θ²/2}. The Rademacher distribution is a discrete probability distribution in which the random variable takes the value 1 or −1, each with probability 1/2. Given a Rademacher variable ξ, the moment generating function is given by E e^{θξ} = cosh(θ) ≤ e^{θ²/2}. The sub-Gaussian distributions have strong tail decay and include many distributions, such as the uniform distribution and all bounded random variables. Given a centered sub-Gaussian random variable β, its mgf satisfies E e^{θβ} ≤ e^{c²θ²}, where c is an absolute constant.
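These mgf facts can be checked numerically. The sketch below (illustrative only) approximates the standard Gaussian mgf E e^{θα} by numerical integration against the normal density and verifies the Rademacher domination cosh(θ) ≤ e^{θ²/2} on a grid.

```python
import numpy as np

# Standard Gaussian mgf: E e^{theta*alpha} = e^{theta^2/2}, checked by
# integrating e^{theta*x} against the N(0,1) density on a fine grid.
x = np.linspace(-40.0, 40.0, 400001)
dx = x[1] - x[0]
pdf = np.exp(-x**2 / 2) / np.sqrt(2 * np.pi)
for theta in (-2.0, 0.5, 3.0):
    mgf = np.sum(np.exp(theta * x) * pdf) * dx
    exact = np.exp(theta**2 / 2)
    assert abs(mgf - exact) < 1e-4 * exact

# Rademacher mgf: cosh(theta) is dominated by e^{theta^2/2} everywhere.
thetas = np.linspace(-5.0, 5.0, 101)
assert np.all(np.cosh(thetas) <= np.exp(thetas**2 / 2))
```

The cosh inequality is the reason Rademacher series satisfy the same tail bounds as Gaussian series in the results below.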
Infinitely divisible (i.d.) distributions refer to a large class of probability distributions that play an important role in probability theory and its limit theorems. A random variable γ has an i.d. distribution if, for any n ∈ N⁺, there exist independent and identically distributed (i.i.d.) random variables γ_1, ⋯, γ_n such that γ has the same distribution as γ_1 + γ_2 + ⋯ + γ_n. Among discrete distributions, infinitely divisible distributions include the Poisson, negative binomial, and geometric distributions. Among continuous distributions, the Cauchy, Lévy, stable, and Gamma distributions are examples of infinitely divisible distributions.
A real-valued random variable γ is i.d. if and only if there exists a triplet (b, σ², ν) such that the characteristic function of γ is given by

E e^{iθγ} = exp{ iθb − σ²θ²/2 + ∫_R (e^{iθu} − 1 − iθu·1_{|u|≤1}) ν(du) },

where b ∈ R, σ ≥ 0, and ν is a Lévy measure. This necessary and sufficient condition is the Lévy-Khintchine theorem.

Random Matrix Series
Given n fixed matrices A_1, A_2, ⋯, A_n, a random matrix series is represented as ∑_{k=1}^n x_k A_k, where x_1, x_2, ⋯, x_n are independent random variables. The tail and expectation bounds for random matrix series concern the quantities P{λ_max(∑_k x_k A_k) ≥ t} and E λ_max(∑_k x_k A_k).
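A small Monte Carlo sketch can make these quantities concrete. The setup below is a hypothetical toy example: with the diagonal choice A_k = e_k e_k^T the series is diag(x_1, ..., x_d), so λ_max equals max_k x_k and ρ = λ_max(∑_k A_k²) = 1; the empirical tail frequency is then compared against the classical ambient-dimension Gaussian bound d·e^{−t²/(2ρ)} from [18].

```python
import numpy as np

rng = np.random.default_rng(1)
d, n_trials, t = 10, 20000, 3.0

# For A_k = e_k e_k^T, lambda_max(sum x_k A_k) = max_k x_k,
# so one row of Gaussian samples per trial suffices.
samples = rng.standard_normal((n_trials, d))
lam_max = samples.max(axis=1)

empirical = np.mean(lam_max >= t)        # empirical tail frequency
bound = d * np.exp(-t**2 / 2)            # ambient-dimension bound, rho = 1
assert empirical <= bound                # empirical tail sits below bound
```

In this toy case the empirical frequency is well below the bound, which is consistent with the observation in the Introduction that the ambient-dimension factor d can be quite loose.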

Intrinsic Dimension Bounds for Matrix Series
In this section, we present tail bounds for random matrix series based on intrinsic dimension bounds, and also obtain the expectation bounds.

Matrix Gaussian (or Rademacher) Series with Intrinsic Dimension
This section presents the tail and expectation bounds for matrix Gaussian (or Rademacher) series with an intrinsic dimension.
Compared with the previous results in (2) and (3), our result in (15) improves upon their respective shortcomings and is tighter. Therefore, our bound is more applicable to the case of high-dimensional matrices.
Theorem 2. Given a matrix Gaussian (or Rademacher) series ∑_k α_k A_k, the bound in (16) holds. Compared with the previous result in (7), our result in (16) depends on the intrinsic dimension of the matrix and is more applicable to the case of high-dimensional matrices.
The proofs of Theorems 1 and 2 are similar to those for the matrix sub-Gaussian series, so we omit them here.

Matrix Sub-Gaussian Series with Intrinsic Dimension
This section presents the tail and expectation bounds for matrix sub-Gaussian series with an intrinsic dimension.
where c is an absolute constant.
Before proving this theorem, we first introduce a proposition from [27] that will be used in the proof; this proposition is a key step in our argument.

Proposition 1.
Let Y be a random Hermitian matrix, and let ψ : R → R₊ be a nonnegative function that is nondecreasing on [0, ∞). For each t ≥ 0,

P{ λ_max(Y) ≥ t } ≤ E tr ψ(Y) / ψ(t).

Proof. Let the sum Y = ∑_k β_k A_k. Fix a number θ > 0, and define the function g(θ) := c²θ². Introduce the matrix M ⪰ ∑_k A_k². According to the mgf of a sub-Gaussian random variable in (12) and the transfer rule (for a real-valued function f, if f(a) ≤ g(a) for a ∈ I, then f(A) ⪯ g(A) when the eigenvalues of A lie in I), it can be shown that E tr e^{θY} ≤ tr e^{g(θ)M}. Introduce the function φ(a) = e^a − 1, and observe that E tr(e^{θY} − I) ≤ intdim(M) · φ(g(θ)∥M∥).
Define the parameters d := intdim(M) and ω² := ∥M∥. Next, combine the bound (21) and the probability bound in Proposition 1 to obtain the tail estimate, and use a standard formula to control the resulting fraction. We select θ = t/(2c²ω²) to obtain the stated bound; invoking the assumption that t > √2 cω then yields the conclusion.
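The choice θ = t/(2c²ω²) is the minimizer of the exponent arising from the Laplace transform argument. The sketch below is illustrative and assumes the sub-Gaussian mgf bound E e^{θβ} ≤ e^{c²θ²} (consistent with the θ chosen in the proof); it checks numerically that c²ω²θ² − θt is minimized at θ = t/(2c²ω²) with minimum value −t²/(4c²ω²). The values of c, ω², and t are arbitrary.

```python
import numpy as np

# Hypothetical parameter values, for illustration only.
c, w2, t = 1.3, 2.0, 5.0

# Exponent from the Laplace transform step, assuming
# E e^{theta*beta} <= e^{c^2 theta^2}:  c^2 * w2 * theta^2 - theta * t.
thetas = np.linspace(1e-4, 5.0, 200001)
exponent = c**2 * w2 * thetas**2 - thetas * t

theta_star = t / (2 * c**2 * w2)         # claimed minimizer
assert abs(thetas[np.argmin(exponent)] - theta_star) < 1e-3
assert np.isclose(exponent.min(), -t**2 / (4 * c**2 * w2), atol=1e-6)
```

This is the familiar pattern of Chernoff-type arguments: a quadratic exponent in θ, minimized at a θ proportional to t, producing a Gaussian-type tail e^{−t²/(4c²ω²)}.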
Since the large-deviation inequality concerns the case where t is large, the restriction t > √2 cω is reasonable.
Theorem 4. Given a matrix sub-Gaussian series ∑_k β_k A_k, the following expectation bound holds. Proof. Fix a number t > µ > √2 cω, and select

Matrix Infinite Divisible Series with Intrinsic Dimension
This section presents the tail and expectation bounds for matrix i.d. series with an intrinsic dimension.
For any t ≥ h(M^−)ω², we have the stated bound. Compared with the previous result in [26], our results depend on the intrinsic dimension of the matrix and are more applicable to the case of high-dimensional matrices.

Proof. Let the sum Y = ∑_k γ_k B_k. Similar to the proof above, according to the mgf of an i.d. random variable in (14) and the transfer rule, we can obtain the bound (29). Next, we minimize the right-hand side of (29) with respect to θ. Since E e^{θγ} < +∞ for all 0 < θ < M, ϕ(θ) is infinitely differentiable on (0, M), with ϕ′(θ) = h(θ). Since ϕ(0) = h(0) = h^{−1}(0) = 0, substituting θ = h^{−1}(t/ω²) yields the claimed bound for 0 < t < h(M^−)ω². Actually, when t ≥ h(M^−)ω², according to the convexity of ω²ϕ(θ) − θt with respect to θ > 0 and the monotonicity of h^{−1}(s) (s > 0), the solution to the optimization problem is θ = M. Thus, for any t ≥ h(M^−)ω², we obtain the second bound. Given some specific settings of the measure ν, we can obtain the following corollary.
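The minimization step can be illustrated on a concrete i.d. distribution. For a centered Poisson(1) variable, the cumulant function is ϕ(θ) = e^θ − 1 − θ, so h(θ) = ϕ′(θ) = e^θ − 1 and h^{−1}(s) = log(1 + s); the minimizer of ω²ϕ(θ) − θt should therefore be θ* = h^{−1}(t/ω²). The parameter values below are arbitrary and the check is purely illustrative.

```python
import numpy as np

# Centred Poisson(1): phi(theta) = e^theta - 1 - theta,
# h(theta) = phi'(theta) = e^theta - 1, h^{-1}(s) = log(1 + s).
def phi(theta):
    return np.exp(theta) - 1 - theta

def h_inv(s):
    return np.log1p(s)

w2, t = 2.0, 3.0                          # hypothetical omega^2 and t
thetas = np.linspace(1e-4, 5.0, 200001)
objective = w2 * phi(thetas) - thetas * t # exponent to minimize over theta

theta_star = h_inv(t / w2)                # predicted minimizer log(1 + t/w2)
assert abs(thetas[np.argmin(objective)] - theta_star) < 1e-3
```

Here M = +∞ for the Poisson case, so the substitution θ = h^{−1}(t/ω²) is always admissible; the boundary case θ = M only arises for distributions whose mgf exists on a finite interval.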
Given a matrix i.d. series ∑_k γ_k B_k, the bound in (37) holds. In other words, in the case where the tail bound is integrable, we can use Formula (37) to obtain the expectation bound based on the intrinsic dimension for matrix i.d. series.
Compared with existing studies, our results are based on the intrinsic dimension of the matrix, and the tail and expectation bounds are tighter than the previous results. Therefore, our bounds are more applicable to the case of high-dimensional matrices.
In addition, by using the Hermitian dilation, our results can also be extended to the scenario of non-Hermitian random matrix series. Consider a general random matrix series ∑_k x_k A_k with rectangular matrices A_k, and apply the results to the dilated series ∑_k x_k H(A_k), where the Hermitian dilation H(A) := [0, A; A*, 0] satisfies λ_max(H(A)) = ∥A∥. Thus, we may invoke each theorem to obtain tail and expectation bounds for the norm of the random matrix series.
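The two key properties of the Hermitian dilation — that H(A) is Hermitian and that λ_max(H(A)) equals the spectral norm ∥A∥ — can be verified directly. The sketch below uses an arbitrary randomly generated rectangular matrix for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 5))           # arbitrary rectangular matrix

# Hermitian dilation H(A) = [[0, A], [A^*, 0]].
H = np.block([[np.zeros((3, 3)), A],
              [A.T, np.zeros((5, 5))]])

assert np.allclose(H, H.T)                # H(A) is Hermitian

# The eigenvalues of H(A) are +/- the singular values of A (plus zeros),
# so lambda_max(H(A)) equals the spectral norm ||A||.
lam_max = np.linalg.eigvalsh(H).max()
assert np.isclose(lam_max, np.linalg.norm(A, 2))
```

This is precisely the mechanism that transfers the Hermitian tail and expectation bounds to the norm of a general rectangular series.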

Conclusions
In this paper, we propose optimized tail and expectation bounds for random matrix series, including matrix Gaussian (or Rademacher), sub-Gaussian, and infinitely divisible (i.d.) series. Different from existing studies, our results depend on the intrinsic dimension rather than the ambient dimension, and are more suitable for the case of high-dimensional matrices.
In future work, we will use the obtained results to study tail bounds and expectation bounds for other eigenvalues of random matrix series.

Theorem 1.
Consider a finite sequence {A_k : k = 1, ..., n} of fixed Hermitian matrices with the same dimension d, with {α_k} being a finite sequence of independent Gaussian (or Rademacher) variables. Introduce the matrix M ⪰ ∑_k A_k². Define the parameters d := intdim(M) and ω² := ∥M∥. Then, the tail bound (15) holds.

Theorem 3.
Consider a finite sequence {A_k : k = 1, ..., n} of fixed Hermitian matrices with the same dimension d, with {β_k} being a finite sequence of independent centered sub-Gaussian variables. Introduce the matrix M ⪰ ∑_k A_k². Define the parameters d := intdim(M) and ω² := ∥M∥. Then, the corresponding tail bound holds. For a scalar Gaussian series ∑_k a_k α_k, where a_1, a_2, ⋯, a_n are real numbers and α_1, α_2, ⋯, α_n are independent standard Gaussian variables, there is the probability inequality P{ ∑_k a_k α_k ≥ t } ≤ e^{−t²/(2 ∑_k a_k²)}.