Article

On Structured Random Matrices Defined by Matrix Substitutions

by Manuel L. Esquível 1,* and Nadezhda P. Krasii 2
1 Department of Mathematics, FCT NOVA, and CMA New University of Lisbon, 2829-516 Caparica, Portugal
2 Department of Higher Mathematics, Faculty of Informatics and Computer Engineering, Don State Technical University, 344003 Rostov-on-Don, Russia
* Author to whom correspondence should be addressed.
Mathematics 2023, 11(11), 2505; https://doi.org/10.3390/math11112505
Submission received: 30 March 2023 / Revised: 16 May 2023 / Accepted: 16 May 2023 / Published: 29 May 2023
(This article belongs to the Special Issue Limit Theorems of Probability Theory)

Abstract: The structure of the random matrices introduced in this work is given by deterministic matrices—the skeletons of the random matrices—built with an algorithm of matrix substitutions with entries in a finite field of integers modulo some prime number, akin to the algorithm of one-dimensional automatic sequences. A random matrix has the structure of a given skeleton if to each element of the finite field appearing as an entry of the skeleton there corresponds a random variable having, at least, that element as its expected value. Affine matrix substitutions are introduced, and fixed point theorems are proven that allow the consideration of steady states of the structure, which are essential for an efficient observation. For some more restricted classes of structured random matrices, the parameter estimation of the entries is addressed, as well as the convergence in law and some aspects of the spectral analysis of the random operators associated with the random matrices. Finally, aiming at possible applications, it is shown that there is a procedure to associate a canonical random surface to every random structured matrix of a certain class.

1. Introduction

Let us start with some motivations. A generic problem in Big Data analysis may have as a starting point a large matrix whose columns represent the questions and whose rows represent the subjects' answers (see [1], p. 28). The typical observed matrix may appear to be random. The questions can admit answers that are either categorical—and so can be modelled by random variables taking values in a finite set—or quantitative, modelled by random variables taking values in some set of numbers; in the latter case, we can again obtain random variables taking values in a finite set by considering a partition into intervals of the range of the real-valued random variables. A natural generic question about these matrices is to determine the existence of a possible structure in the matrix. One initial idea, to better understand this line of problems, is to build matrices with random entries but with a prescribed structure and try to recover this structure by means of some statistical tests or by the spectral analysis of the matrix. These ideas give a practical motivation for this study.
Let us situate our work in the context of the subject of substitutions. The analysis of scalar, or string, substitutions is a widely studied subject for which [2,3] are comprehensive references. Important results on substitutions are also to be found under the denomination of automatic sequences, for instance in [4,5]. To the best of our present knowledge, the study of matrix-valued substitutions has received no special attention in the literature. In this work, we propose a first approach to this topic. There has been work on multidimensional substitutions, from a different perspective than the one adopted here, that can be studied in [6,7,8] and in the chapter by J. Peyrière in [9] and other references therein.
An important starting point for the study of spectral statistics of random matrices is the work [10]. In it, the author focuses on three ensembles of asymmetric Gaussian random matrices derived from the Gaussian Orthogonal, Gaussian Unitary and Gaussian Symplectic random matrix ensembles by relaxing the Hermitian character. The three sets of matrices have a common Gaussian probability measure but exhibit profound differences in their spectral patterns, differences that are qualitatively described in that work, although the quantitative description was further improved by other authors. The difficult study of generic properties of random matrices related to spectral analysis has received much attention in recent years, as demonstrated in the following works: [11,12,13,14,15,16,17,18]. Readable introductions to the subject are presented in [19,20,21,22,23,24,25].
For a remarkable general formulation of the circular law that is most useful for our purposes, we refer to the following result, which conveys the flavour of a universality result and may be a relevant guide for the statistical analysis of possible particular types of structure in large observed matrices.
Theorem 1
(Circular law, Tao and Vu [22]). Let $M_n$ be an $n \times n$ matrix whose entries are independent, identically distributed copies of a complex, centred and standardised random variable. Then, given
$$\mu_{M_n}(x,y) := \frac{1}{n}\,\#\left\{1 \le i \le n : \operatorname{Re}\lambda_i \le x,\ \operatorname{Im}\lambda_i \le y\right\},$$
the empirical spectral distribution of the eigenvalues $\lambda_i$ of $(1/\sqrt{n})\,M_n$, we have that the sequence $(\mu_{M_n}(x,y))_{n\ge1}$ converges to the uniform measure on the unit disc given by:
$$d\mu_{circular}(x,y) = \frac{1}{\pi}\,\mathbb{1}_{\{x^2 + y^2 \le 1\}}(x,y)\,dx\,dy.$$
We stress that until this optimal formulation was reached, several other technically involved formulations were obtained, attesting to the intrinsic difficulty of the subject, displayed in the works first referred to above. Let us quote Terence Tao for a synthesis of the recent short history of the subject: “A rigorous proof of the circular law was then established by Bai, assuming additional moment and boundedness conditions on the individual entries. These additional conditions were then slowly removed in a sequence of papers by Götze–Tikhomirov, Girko, Pan–Zhou, and Tao–Vu.”
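As a quick numerical illustration of Theorem 1 (ours, not part of the original argument), one can sample a large i.i.d. matrix and check how the eigenvalues of the normalised matrix fill the unit disc; the following minimal sketch assumes only NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
# i.i.d. centred, standardised (real Gaussian) entries
M = rng.standard_normal((n, n))
# eigenvalues of (1/sqrt(n)) M_n
eigs = np.linalg.eigvals(M / np.sqrt(n))

# under the circular law the ESD is close to uniform on the unit disc:
# a disc of radius r should contain about a fraction r^2 of the eigenvalues
for r in (0.5, 0.9, 1.0):
    print(r, np.mean(np.abs(eigs) <= r), "expected ~", r**2)
```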
We now refer to recent developments in the study of random matrices having some structure—the main topic dealt with in the present work—in particular, results on the spacing distribution, on invertibility, on the appearance of large structures, and on the spectral analysis of these random matrices. These works may give an idea of the amount of exploratory work needed in the subject of random matrices with structure.
In [26], the authors consider four specific sparse patterned random matrices, namely the Symmetric Circulant, Reverse Circulant, Toeplitz, and the Hankel matrices. The entries are assumed to be Bernoulli with success probability linearly decreasing to zero. The moment approach is used to show that the expected empirical spectral distribution converges weakly for all these sparse matrices. The work in [27] is a complementary reference where the author investigates the existence and properties of the limiting spectral distribution of different patterned random matrices as the dimension grows. The method of moments and normal approximation with some combinatorics is used to deal with the Wigner matrix, the sample covariance matrix, the Toeplitz matrix, the Hankel matrix, the sample auto-covariance matrix, and the k-Circulant matrices.
In [28], a bound on the growth of the smallest singular value is found for random matrices with independent, uniformly anti-concentrated entries, with no assumptions of zero mean or identical distribution of the entries. The result obtained covers inhomogeneous matrices with different variances of the entries, as long as the sum of the second moments has sub-quadratic growth in the order of the matrix. Following this work, the reference [29] extends the results of Tao and Vu and of Krishnapur on the universality of empirical spectral distributions to a class of inhomogeneous complex random matrices where the entries are linear images of standardised independent random variables satisfying a lower bound and Pastur's condition. The proof uses an anti-concentration result for sums of non-identically distributed independent complex random variables.
In [30], the semicircle law is established for a sequence of random symmetric matrices that may be considered as adjacency matrices of random graphs; the random matrices have independent entries given by the product of independent standardised random variables, the weight of the edges, with Bernoulli random variables that gives the probability of the edge. The empirical distribution of the eigenvalues of the normalised random matrix converges in the Kolmogorov distance to the distribution function of the semicircle law under boundedness and average conditions.
The work [31] deals with random ray pattern matrices, that is, matrices for which each nonzero entry has modulus one. A ray pattern matrix corresponds to a weighted digraph. A random model of ray pattern matrices of order n is introduced, where a uniformly random ray pattern matrix is defined to be the adjacency matrix of a simple random digraph whose arcs are weighted with i.i.d. random variables uniformly distributed over the unit circle in the complex plane. In that paper, it is shown that the threshold function for a random ray pattern matrix to be ray nonsingular is $1/n$. This function is also a threshold function for the property that giant strong components appear in the simple random digraph.
The work [32] deals with patterned random matrices which are real symmetric, with substantially fewer independent entries than general real symmetric matrices. The main results are the calculation of the spacing distributions for order-three matrices, derived analytically. As expected, the spacing distribution displays a range of behaviours depending on the structural constraints imposed on the matrices.
In this work, we propose and study an algorithm to build sequences of random matrices, with independent entries, that have a built-in structure. Furthermore, we explore some aspects of this kind of random matrix related to identification, spectral analysis, and an idea for applications. An overview of the content of this work is now detailed.
  • In Section 2, we present a first example of the algorithm, used to build structured matrices, given by the iterative application of matrix valued substitutions; the second example uses powers of the Kronecker product of a given matrix and is a particular case of the generic algorithm of matrix substitutions. A general procedure of construction of the sequence of structured matrices by substitutions is detailed in Section 3.1.
  • In Section 3, we present the results on fixed points of matrix substitutions.
  • The randomisation of structured matrices defined by matrix substitutions is studied in Section 4. Preliminary results on the spectral analysis of these random matrices are presented in Section 4.3. An application to modelling is detailed in Section 4.4 with an algorithm to associate a random field to an infinite random matrix of the kind studied in this work.

2. Structured Matrices Built by Substitutions

We start by presenting two examples of an algorithm to build sequences of arbitrary large matrices with entries in a finite set. For technical reasons we suppose that the entries of the structured matrices take values in some finite field, for instance:
$$\mathbb{Z}_p = \mathbb{Z}/p\mathbb{Z} = \left\{0, 1, 2, \dots, p-1\right\}, \tag{1}$$
with p a prime number. The identification of the entries of the matrix as elements of $\mathbb{Z}_p$ matters essentially for the matrix substitution procedure used to build these structured matrices. Further ahead, we will also consider the entries of the matrix as integers, viewed as real numbers.
We will proceed to show, in Section 3, that every map in a certain class of matrix substitution maps that we define—namely, the affine matrix substitution maps—admits either a fixed point or a periodic point.

2.1. A Matrix Sequence Built by Iterated Application of a Matrix Substitution

In the following examples, we suppose that the matrix entries take values in the field $\mathbb{Z}_3 = \{0,1,2\}$. We now consider an example of a sequence of matrices with a structure defined by substitutions. The main idea of the construction of this sequence of matrices is the following. We start with some initial matrix $M_0$. The second matrix in the sequence, the matrix $M_1$, is obtained by replacing each entry of $M_0$ by the matrix $\sigma_0$, $\sigma_1$ or $\sigma_2$, according to whether the entry of $M_0$ being replaced is, respectively, $0$, $1$ or $2$.
$$M_0 = \begin{pmatrix} 2 & 0 & 1 \\ 1 & 2 & 1 \\ 1 & 0 & 2 \end{pmatrix} \qquad \sigma_0 = \begin{pmatrix} 0 & 1 & 2 \\ 1 & 1 & 2 \\ 2 & 0 & 1 \end{pmatrix} \qquad \sigma_1 = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 1 & 0 & 1 \end{pmatrix} \qquad \sigma_2 = \begin{pmatrix} 1 & 2 & 2 \\ 0 & 1 & 2 \\ 0 & 0 & 1 \end{pmatrix}.$$
In Section 3 we present a formal description of this procedure in a more general case. With this algorithm we have that,
$$M_1 = \begin{pmatrix}
1 & 2 & 2 & 0 & 1 & 2 & 1 & 0 & 0 \\
0 & 1 & 2 & 1 & 1 & 2 & 0 & 2 & 0 \\
0 & 0 & 1 & 2 & 0 & 1 & 1 & 0 & 1 \\
1 & 0 & 0 & 1 & 2 & 2 & 1 & 0 & 0 \\
0 & 2 & 0 & 0 & 1 & 2 & 0 & 2 & 0 \\
1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 \\
1 & 0 & 0 & 0 & 1 & 2 & 1 & 2 & 2 \\
0 & 2 & 0 & 1 & 1 & 2 & 0 & 1 & 2 \\
1 & 0 & 1 & 2 & 0 & 1 & 0 & 0 & 1
\end{pmatrix} \tag{2}$$
and also,
$$M_2 = \left(\begin{smallmatrix}
1&0&0&1&2&2&1&2&2&0&1&2&1&0&0&1&2&2&1&0&0&0&1&2&0&1&2\\
0&2&0&0&1&2&0&1&2&1&1&2&0&2&0&0&1&2&0&2&0&1&1&2&1&1&2\\
1&0&1&0&0&1&0&0&1&2&0&1&1&0&1&0&0&1&1&0&1&2&0&1&2&0&1\\
0&1&2&1&0&0&1&2&2&1&0&0&1&0&0&1&2&2&0&1&2&1&2&2&0&1&2\\
1&1&2&0&2&0&0&1&2&0&2&0&0&2&0&0&1&2&1&1&2&0&1&2&1&1&2\\
2&0&1&1&0&1&0&0&1&1&0&1&1&0&1&0&0&1&2&0&1&0&0&1&2&0&1\\
0&1&2&0&1&2&1&0&0&1&2&2&0&1&2&1&0&0&1&0&0&0&1&2&1&0&0\\
1&1&2&1&1&2&0&2&0&0&1&2&1&1&2&0&2&0&0&2&0&1&1&2&0&2&0\\
2&0&1&2&0&1&1&0&1&0&0&1&2&0&1&1&0&1&1&0&1&2&0&1&1&0&1\\
1&0&0&0&1&2&0&1&2&1&0&0&1&2&2&1&2&2&1&0&0&0&1&2&0&1&2\\
0&2&0&1&1&2&1&1&2&0&2&0&0&1&2&0&1&2&0&2&0&1&1&2&1&1&2\\
1&0&1&2&0&1&2&0&1&1&0&1&0&0&1&0&0&1&1&0&1&2&0&1&2&0&1\\
0&1&2&1&2&2&0&1&2&0&1&2&1&0&0&1&2&2&0&1&2&1&2&2&0&1&2\\
1&1&2&0&1&2&1&1&2&1&1&2&0&2&0&0&1&2&1&1&2&0&1&2&1&1&2\\
2&0&1&0&0&1&2&0&1&2&0&1&1&0&1&0&0&1&2&0&1&0&0&1&2&0&1\\
1&0&0&0&1&2&1&0&0&0&1&2&0&1&2&1&0&0&1&0&0&0&1&2&1&0&0\\
0&2&0&1&1&2&0&2&0&1&1&2&1&1&2&0&2&0&0&2&0&1&1&2&0&2&0\\
1&0&1&2&0&1&1&0&1&2&0&1&2&0&1&1&0&1&1&0&1&2&0&1&1&0&1\\
1&0&0&0&1&2&0&1&2&0&1&2&1&0&0&1&2&2&1&0&0&1&2&2&1&2&2\\
0&2&0&1&1&2&1&1&2&1&1&2&0&2&0&0&1&2&0&2&0&0&1&2&0&1&2\\
1&0&1&2&0&1&2&0&1&2&0&1&1&0&1&0&0&1&1&0&1&0&0&1&0&0&1\\
0&1&2&1&2&2&0&1&2&1&0&0&1&0&0&1&2&2&0&1&2&1&0&0&1&2&2\\
1&1&2&0&1&2&1&1&2&0&2&0&0&2&0&0&1&2&1&1&2&0&2&0&0&1&2\\
2&0&1&0&0&1&2&0&1&1&0&1&1&0&1&0&0&1&2&0&1&1&0&1&0&0&1\\
1&0&0&0&1&2&1&0&0&1&2&2&0&1&2&1&0&0&0&1&2&0&1&2&1&0&0\\
0&2&0&1&1&2&0&2&0&0&1&2&1&1&2&0&2&0&1&1&2&1&1&2&0&2&0\\
1&0&1&2&0&1&1&0&1&0&0&1&2&0&1&1&0&1&2&0&1&2&0&1&1&0&1
\end{smallmatrix}\right).$$
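For concreteness, the substitution step just described can be carried out mechanically; the following is a minimal sketch (function names are ours) that rebuilds $M_1$ and $M_2$ from $M_0$ and the rules $\sigma_0$, $\sigma_1$, $\sigma_2$.

```python
import numpy as np

# substitution rules of this Section: the entry k is replaced by sigma[k]
sigma = {
    0: np.array([[0, 1, 2], [1, 1, 2], [2, 0, 1]]),
    1: np.array([[1, 0, 0], [0, 2, 0], [1, 0, 1]]),
    2: np.array([[1, 2, 2], [0, 1, 2], [0, 0, 1]]),
}
M0 = np.array([[2, 0, 1], [1, 2, 1], [1, 0, 2]])

def substitute(M, sigma):
    """One matrix substitution step: replace each entry of M by its d x d block."""
    d = sigma[0].shape[0]
    n = M.shape[0]
    out = np.empty((n * d, n * d), dtype=int)
    for i in range(n):
        for j in range(n):
            out[i*d:(i+1)*d, j*d:(j+1)*d] = sigma[int(M[i, j])]
    return out

M1 = substitute(M0, sigma)   # the 9 x 9 matrix of Formula (2)
M2 = substitute(M1, sigma)   # the 27 x 27 matrix above
```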

2.2. A Matrix Sequence Built by Kronecker Power Iterations

An apparently different way of building substitution structured matrices is by means of Kronecker powers of an initially given matrix, which we now illustrate. The initial matrix is given by:
$$R_0 = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 2 \end{pmatrix}.$$
The sequence of matrices taking values in $\mathbb{Z}_3 = \{0,1,2\}$ is defined by induction, the matrix of index $n+1$ being the Kronecker product of the matrix of index n with $R_0$, taken modulo 3 to keep the entries of the matrix in $\mathbb{Z}_3$; that is,
$$R_{n+1} := R_n \otimes R_0 \pmod{3}.$$
So, the second matrix of the sequence is,
$$R_1 = \begin{pmatrix}
1 & 2 & 0 & 2 & 1 & 0 & 0 & 0 & 0 \\
0 & 2 & 2 & 0 & 1 & 1 & 0 & 0 & 0 \\
2 & 0 & 1 & 1 & 0 & 2 & 0 & 0 & 0 \\
0 & 0 & 0 & 2 & 1 & 0 & 2 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 1 & 0 & 1 & 1 \\
0 & 0 & 0 & 1 & 0 & 2 & 1 & 0 & 2 \\
2 & 1 & 0 & 0 & 0 & 0 & 1 & 2 & 0 \\
0 & 1 & 1 & 0 & 0 & 0 & 0 & 2 & 2 \\
1 & 0 & 2 & 0 & 0 & 0 & 2 & 0 & 1
\end{pmatrix},$$
and the third matrix of the sequence is:
$$R_2 = \left(\begin{smallmatrix}
2&1&0&1&2&0&0&0&0&1&2&0&2&1&0&0&0&0&0&0&0&0&0&0&0&0&0\\
0&1&1&0&2&2&0&0&0&0&2&2&0&1&1&0&0&0&0&0&0&0&0&0&0&0&0\\
1&0&2&2&0&1&0&0&0&2&0&1&1&0&2&0&0&0&0&0&0&0&0&0&0&0&0\\
0&0&0&1&2&0&1&2&0&0&0&0&2&1&0&2&1&0&0&0&0&0&0&0&0&0&0\\
0&0&0&0&2&2&0&2&2&0&0&0&0&1&1&0&1&1&0&0&0&0&0&0&0&0&0\\
0&0&0&2&0&1&2&0&1&0&0&0&1&0&2&1&0&2&0&0&0&0&0&0&0&0&0\\
1&2&0&0&0&0&2&1&0&2&1&0&0&0&0&1&2&0&0&0&0&0&0&0&0&0&0\\
0&2&2&0&0&0&0&1&1&0&1&1&0&0&0&0&2&2&0&0&0&0&0&0&0&0&0\\
2&0&1&0&0&0&1&0&2&1&0&2&0&0&0&2&0&1&0&0&0&0&0&0&0&0&0\\
0&0&0&0&0&0&0&0&0&1&2&0&2&1&0&0&0&0&1&2&0&2&1&0&0&0&0\\
0&0&0&0&0&0&0&0&0&0&2&2&0&1&1&0&0&0&0&2&2&0&1&1&0&0&0\\
0&0&0&0&0&0&0&0&0&2&0&1&1&0&2&0&0&0&2&0&1&1&0&2&0&0&0\\
0&0&0&0&0&0&0&0&0&0&0&0&2&1&0&2&1&0&0&0&0&2&1&0&2&1&0\\
0&0&0&0&0&0&0&0&0&0&0&0&0&1&1&0&1&1&0&0&0&0&1&1&0&1&1\\
0&0&0&0&0&0&0&0&0&0&0&0&1&0&2&1&0&2&0&0&0&1&0&2&1&0&2\\
0&0&0&0&0&0&0&0&0&2&1&0&0&0&0&1&2&0&2&1&0&0&0&0&1&2&0\\
0&0&0&0&0&0&0&0&0&0&1&1&0&0&0&0&2&2&0&1&1&0&0&0&0&2&2\\
0&0&0&0&0&0&0&0&0&1&0&2&0&0&0&2&0&1&1&0&2&0&0&0&2&0&1\\
1&2&0&2&1&0&0&0&0&0&0&0&0&0&0&0&0&0&2&1&0&1&2&0&0&0&0\\
0&2&2&0&1&1&0&0&0&0&0&0&0&0&0&0&0&0&0&1&1&0&2&2&0&0&0\\
2&0&1&1&0&2&0&0&0&0&0&0&0&0&0&0&0&0&1&0&2&2&0&1&0&0&0\\
0&0&0&2&1&0&2&1&0&0&0&0&0&0&0&0&0&0&0&0&0&1&2&0&1&2&0\\
0&0&0&0&1&1&0&1&1&0&0&0&0&0&0&0&0&0&0&0&0&0&2&2&0&2&2\\
0&0&0&1&0&2&1&0&2&0&0&0&0&0&0&0&0&0&0&0&0&2&0&1&2&0&1\\
2&1&0&0&0&0&1&2&0&0&0&0&0&0&0&0&0&0&1&2&0&0&0&0&2&1&0\\
0&1&1&0&0&0&0&2&2&0&0&0&0&0&0&0&0&0&0&2&2&0&0&0&0&1&1\\
1&0&2&0&0&0&2&0&1&0&0&0&0&0&0&0&0&0&2&0&1&0&0&0&1&0&2
\end{smallmatrix}\right).$$
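The Kronecker power iteration is equally mechanical; a minimal sketch (ours), assuming NumPy, where np.kron computes the Kronecker product:

```python
import numpy as np

R0 = np.array([[2, 1, 0], [0, 1, 1], [1, 0, 2]])

def kron_power_step(R, R0, p=3):
    """R_{n+1} := (R_n (x) R_0) mod p."""
    return np.kron(R, R0) % p

R1 = kron_power_step(R0, R0)   # the 9 x 9 matrix above
R2 = kron_power_step(R1, R0)   # the 27 x 27 matrix above
```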
Remark 1
(Kronecker power matrices are matrix substitutions). We observe that the above example of a Kronecker power matrix sequence corresponds to a special kind of substitution, the linear matrix substitution (see Definition 3 ahead). In fact, the algorithm for building a Kronecker power sequence of matrices is given by the substitutions, in the sense of Section 2.1, with the matrices $\sigma_0$, $\sigma_1$ and $\sigma_2$ defined by:
$$\sigma_0 = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{pmatrix} \qquad \sigma_1 = R_0 = \begin{pmatrix} 2 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 2 \end{pmatrix} \qquad \sigma_2 = \begin{pmatrix} 1 & 2 & 0 \\ 0 & 2 & 2 \\ 2 & 0 & 1 \end{pmatrix}.$$
This is a consequence of the fact that computing a Kronecker power sequence starting with the matrix $R_0$ is equivalent to computing a matrix substitution given by:
$$\sigma_0 = (0 \cdot R_0 \bmod 3) = 0_{3\times3}, \qquad \sigma_1 = (1 \cdot R_0 \bmod 3) = R_0, \qquad \sigma_2 = (2 \cdot R_0 \bmod 3).$$
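This equivalence can be checked directly, reusing `substitute` and `kron_power_step` from the sketches above (a check of ours, under the same assumptions):

```python
# Kronecker step == linear substitution with blocks k -> (k * R0) mod 3
sigma_lin = {k: (k * R0) % 3 for k in range(3)}
assert (kron_power_step(R0, R0) == substitute(R0, sigma_lin)).all()
assert (kron_power_step(R1, R0) == substitute(R1, sigma_lin)).all()
```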
We observe that the two kinds of substitutions give rise to different structured matrices. For instance, the distributions of the absolute values of the eigenvalues—in $\mathbb{C}$, that is, supposing that the entries are complex—of the seventh iteration of substitutions for these two types of matrix substitutions are different, as shown in the histograms of Figure 1.
Another significant difference between the two constructions is noticeable in the form of the dispersion, in the plane, of the eigenvalues that can be seen in Figure 2.
Remark 2.
The dispersion of eigenvalues observed in Figure 2 is to be compared to the dispersion for samples of randomised matrices of both kinds, Kronecker and simple, presented in Figure 3 ahead. It is as if the general structure of this dispersion persists despite the randomisation, at least whenever the variance of the random variables is small. This leads to the conjecture that it may be important to determine the spectral distribution of the substitution matrices in order to make inferences about the spectral distribution of the randomised matrices.

3. On the Fixed Points of Affine Matrix Substitutions

In this Section we present fixed point theorems for affine matrix substitutions. The work presented here rests upon a procedure to build sequences of structured matrices by means of matrix substitutions. In order for such matrices to be a usable model, subject to observation, some stable structure should result from the procedure. Our view is that this stable structure should be either a fixed point or, at least, a periodic point of a map on some space of matrices. We opt to consider spaces of infinite matrices. A general and historical approach to the subject of infinite matrices is given in [33]. A more recent account of important results on this subject is given in [34]. Furthermore, a flavour of a specific kind of problems can be found in [35]. The perspective of considering an infinite matrix as a linear operator on some Banach space of power summable sequences is explored in the reference book [36], in which the concept of band-dominated operators—operators that are limits of operators defined by infinite matrices with a finite number of non-null rows and columns—plays an important role. A particular case of this concept is of crucial importance in our work to prove the existence of a particular kind of observable fixed point.
To begin with, we define some spaces of finite and infinite matrices with entries in $\mathbb{Z}_p$.

3.1. Some Spaces of Matrices

Let us briefly describe the setting. For simplicity, let p be a prime number and let $\mathbb{Z}_p = \{0, 1, \dots, p-1\}$ be the finite field with $\#\mathbb{Z}_p = p$. The set $\mathbb{Z}_p$ may be thought of as the alphabet when the perspective of finite automata is adopted or, in the context of Big Data, as the set that codifies the possible answers. We next define the space of infinite matrices with entries in the field $\mathbb{Z}_p$:
$$\mathcal{M}_+ := \left\{ M = [a_{ij}]_{i,j\ge1} : a_{ij} \in \mathbb{Z}_p \right\} = \mathbb{Z}_p^{(\mathbb{N}\setminus\{0\}) \times (\mathbb{N}\setminus\{0\})}. \tag{3}$$
We have that $\mathcal{M}_+$ is a vector space over the field $\mathbb{Z}_p$. Let $\mathcal{M}_0$ be the particular subspace of $\mathcal{M}_+$ which may be identified with a set of finite square matrices once all infinite parts of rows and infinite parts of columns having as entries only $0 \in \mathbb{Z}_p$ are discarded, that is:
$$\mathcal{M}_0 := \left\{ M = [a_{ij}]_{i,j\ge1} \in \mathcal{M}_+ : \exists\, n \ge 1 \ \ \forall\, i, j > n, \ a_{ij} = 0 \right\}.$$
We have that $\mathcal{M}_0$ is a vector subspace of $\mathcal{M}_+$ and we observe that $M \in \mathcal{M}_0$ can have null rows and columns. We now decompose $\mathcal{M}_0$ by observing that for each $M \in \mathcal{M}_0$ there always exists $n_M$, the first integer $n \ge 1$ such that for all $i, j > n_M$ we have that $a_{ij} = 0$. Using this property, let us define $\mathcal{M}^{\#}_{n\times n} = \mathcal{M}^{\#}_{n\times n}(\mathbb{Z}_p) \subset \mathcal{M}_0$ as:
$$\mathcal{M}^{\#}_{n\times n} := \left\{ M = [a_{ij}]_{i,j\ge1} : \left(\exists\, i,\ a_{in} \ne 0 \ \text{ or } \ \exists\, j,\ a_{nj} \ne 0\right) \ \text{ and } \ \forall\, i, j > n, \ a_{ij} = 0 \right\},$$
that is, $\mathcal{M}^{\#}_{n\times n}$ is the subset of $\mathcal{M}_0$ of infinite square matrices having a leading principal matrix of exact order n, in the sense that neither the column nor the row of order n has all its entries equal to zero, and all columns and rows of order greater than or equal to $n+1$ have only zero entries. $\mathcal{M}^{\#}_{n\times n}$ is not a subspace, as the sum of two matrices in $\mathcal{M}^{\#}_{n\times n}$ may be an element of $\mathcal{M}^{\#}_{n_1\times n_1}$ with $n_1 < n$, by the fact that the entries belong to $\mathbb{Z}_p$ and the sum is computed modulo p. We then may define:
$$\mathcal{M}_{n\times n}(\mathbb{Z}_p) = \mathcal{M}_{n\times n} := \bigcup_{1 \le k \le n} \mathcal{M}^{\#}_{k\times k}, \tag{4}$$
which is a vector space of infinite matrices over $\mathbb{Z}_p$, a subset of $\mathcal{M}_0$, defined in such a way that the decomposition is of partition type, and we have,
$$\mathcal{M}_0 = \bigcup_{n\ge1} \mathcal{M}_{n\times n}(\mathbb{Z}_p). \tag{5}$$
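In computational terms (an illustration of ours, with hypothetical names), the class $\mathcal{M}^{\#}_{n\times n}$ to which a finitely supported matrix belongs is determined by its exact order $n_M$:

```python
import numpy as np

def exact_order(A):
    """Exact order n_M of a finitely supported matrix given as a padded array:
    the largest index n such that row n or column n has a nonzero entry."""
    nz = np.argwhere(A != 0)
    if nz.size == 0:
        return 0                  # the null matrix: no exact order
    return int(nz.max()) + 1      # 1-based order

A = np.array([[1, 0, 0], [0, 2, 0], [0, 0, 0]])
print(exact_order(A))  # 2: A has exact order 2 after discarding null borders
```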
We now introduce a sequence of infinite matrices associated with a given matrix substitution map. This sequence will be obtained by operating substitutions either on the finite matrix corresponding to the leading principal matrix of the infinite matrix or, directly, on the infinite matrix.
Definition 1
(Matrix substitution map). The matrix substitution map associated with matrix substitution rules is defined in the following sequence of steps.
1. Let us consider the initial state as $M_0 \in \mathcal{M}_{n\times n}(\mathbb{Z}_p)$ for some $n \ge 1$.
2. We associate to $M_0$ its leading principal matrix of order n, denoted by $M_0^{<}$, which, we stress, is a finite matrix of order n. Let $\mathcal{M}^{<}_{n\times n}(\mathbb{Z}_p)$ denote the set of the leading principal matrices of order n associated with the elements of $\mathcal{M}_{n\times n}(\mathbb{Z}_p)$, $\mathcal{M}_0$ or $\mathcal{M}_+$.
3. For technical reasons we will restrict our study by considering that we choose $d \ge 1$ such that for all $k \in \mathbb{Z}_p$ the matrix $\sigma_k$ is a finite matrix of order d, that is, such that $\sigma_k \in \mathcal{M}^{<}_{d\times d}(\mathbb{Z}_p)$. In the applications we may have $d = n$. Let us define the global substitution rule $\sigma : \mathbb{Z}_p \longrightarrow \mathcal{M}^{<}_{d\times d}(\mathbb{Z}_p)$, associated with $\{\sigma_0, \sigma_1, \dots, \sigma_{p-1}\}$, by:
$$\forall j \in \mathbb{Z}_p \quad \sigma(j) = \sum_{k=0}^{p-1} \sigma_k\, \mathbb{1}_{\{k=j\}}(j). \tag{6}$$
We now have an associated finite matrix substitution map, denoted by $\Phi^{<}_{\sigma}$, defined by:
$$A = [a_{i,j}]_{1\le i,j\le n} \in \mathcal{M}^{<}_{n\times n} \longmapsto \Phi^{<}_{\sigma}(A) = [\sigma(a_{i,j})]_{1\le i,j\le n} \in \mathcal{M}^{<}_{d\cdot n\times d\cdot n}. \tag{7}$$
4. We define the matrix substitution map, denoted by $\Phi_{\sigma}$, by adding to the finite matrix $\Phi^{<}_{\sigma}(A) \in \mathcal{M}^{<}_{d\cdot n\times d\cdot n}$ infinite rows and columns of entries $0 \in \mathbb{Z}_p$, in such a way that $\Phi_{\sigma}(A)$ is an infinite matrix with $\Phi_{\sigma}(A) \in \mathcal{M}_{d\cdot n\times d\cdot n}(\mathbb{Z}_p)$ and such that the leading principal matrix of order $d\cdot n$ of $\Phi_{\sigma}(A)$ is precisely $\Phi^{<}_{\sigma}(A)$.
5. We now define the extension of the notion of a matrix substitution map for matrices in $\mathcal{M}_{n\times n}$ to the space of infinite matrices $\mathcal{M}_+$. Given that we have supposed that the global substitution $\sigma : \mathbb{Z}_p \longrightarrow \mathcal{M}^{<}_{d\times d}(\mathbb{Z}_p)$ takes values in a space of finite matrices of order d, we may define $\Phi_{\sigma}(M)$ for $M \in \mathcal{M}_+$, with $M = [a_{ij}]_{i,j\ge1}$, as the matrix $[\sigma(a_{ij})]_{i,j\ge1}$, that is, the infinite matrix obtained by replacing each entry $a_{ij} = k$, with $k \in \mathbb{Z}_p$, by the block matrix $\sigma_k$.
6. The matrix substitution sequence, denoted by $\mathbb{M}_{\sigma} \equiv (M_m)_{m\ge0}$, is defined by induction, for $M_0 = M$ with $M \in \mathcal{M}^{<}_{n\times n}$ or $M \in \mathcal{M}_+$, by:
$$\forall m \ge 0 \quad M_{m+1} = \Phi^{<}_{\sigma}(M_m), \ M \in \mathcal{M}^{<}_{n\times n}; \qquad M_{m+1} = \Phi_{\sigma}(M_m), \ M \in \mathcal{M}_+. \tag{8}$$
Remark 3
(A substantiation for operating on finite order matrices). The procedure of applying matrix substitutions to the leading principal matrix of an infinite matrix is designed to overcome the restriction of having $\sigma_0$ always equal to the null matrix, with only $0 \in \mathbb{Z}_p$ entries.
Remark 4
(Generalisations and open problems). It is possible to generalise this procedure in several ways. For instance, we could have two different matrix substitution maps applied successively. There are several interesting problems from the perspective of this setting.
(I) Given a sequence of matrices $(A_n)_{n\ge0}$ satisfying some compatibility conditions, is it possible to determine conditions under which there exists an initial state $M_0$ and a matrix substitution map $\Phi_{\sigma}$ such that $(A_n)_{n\ge0} = \mathbb{M}_{\sigma}$?
(II) A related and very important problem is to determine the properties of the eigenvalues of the matrices of the sequence $\mathbb{M}_{\sigma}$ that may be derived from the properties of $\Phi_{\sigma}$.

3.2. On the Existence of Fixed Points for Matrix Substitution Maps

In this Section we consider the existence of fixed points of matrix substitution maps, both for matrices in $\mathcal{M}_+$ and in $\mathcal{M}_0$.

3.2.1. Fixed Points for Matrix Substitution Maps over Infinite Matrices

Let us first deal with fixed points in $\mathcal{M}_+$ (see the definition in Formula (3)) of a linear matrix substitution map $\Phi_{\sigma}$. We consider the definition of a matrix substitution map given in Definition 1 for matrices in the space of infinite matrices $\mathcal{M}_+$. For infinite matrices, we will show that a matrix substitution map defined on $\mathcal{M}_+$ may be seen as a usual substitution of constant length on a finite set, in the sense of ([3], p. 87).
Theorem 2
(On the existence of fixed points for infinite matrices). Let $\sigma : \mathbb{Z}_p \longrightarrow \mathcal{M}^{<}_{d\times d}(\mathbb{Z}_p)$ be a global substitution rule taking values in a space of finite matrices, of order d, with entries in $\mathbb{Z}_p$, and let $\Phi_{\sigma}$ be the associated matrix substitution map defined on $\mathcal{M}_+$. Then, there exist an integer ρ and $M \in \mathcal{M}_+$ such that,
$$M = \Phi^{\rho}_{\sigma}(M) := \underbrace{\Phi_{\sigma} \circ \Phi_{\sigma} \circ \cdots \circ \Phi_{\sigma}}_{\rho \ \text{times}}(M),$$
that is, M is a fixed point of the iterated matrix substitution map $\Phi^{\rho}_{\sigma}$ defined for $M \in \mathcal{M}_+$.
Proof. 
We will show that to each matrix substitution map there corresponds a unique substitution map in the usual sense, and then we will apply a well-known result that guarantees the existence of fixed points for usual substitution maps (see [3], pp. 87–88). We first observe that, given $s = [s_{ij}]_{1\le i,j\le d}$ a $d\times d$ matrix with entries in $\mathbb{Z}_p$, we have an enumeration of these entries given by $(\tilde{s}_k)_{k=1,\dots,d^2}$ with:
$$s_{ij} = \tilde{s}_{(i-1)d+j} = \tilde{s}_k.$$
This type of enumeration of a finite matrix will be applied to the matrices of the substitutions $\sigma_k$, in order to convert each matrix $\sigma_k$ into a word of length $d^2$ constituted by letters taken from $\mathbb{Z}_p$. The reversion of this enumeration works as follows. Given a finite word having $d^2$ letters, we associate to it a $d\times d$ square matrix having as its first row the first d letters of the word, as its second row the letters of order $d+1$ to $2d$, and so on and so forth. It is clear that applying the enumeration and then the reversion gives the initial matrix.
Next, we have that, given an infinite matrix $M = [m_{ij}]_{i,j\ge1}$ with entries in $\mathbb{Z}_p$, we have an enumeration of these entries given by $(\tilde{m}_l)_{l\ge1}$ with:
$$m_{ij} = \tilde{m}_{\frac{(i+j-1)(i+j-2)}{2}+i} = \tilde{m}_l.$$
This second type of enumeration will be applied to convert an infinite matrix with entries in $\mathbb{Z}_p$ into an infinite word. Again, let us detail how the reversion of this enumeration process works. Take an infinite word and consider the associated infinite matrix as follows: the first letter of the word is the first entry of the matrix; the second and third letters of the word give the first diagonal, just below the first entry, in the direction up–down; the fourth, fifth and sixth letters of the word give the second diagonal, just below the first diagonal, in the direction up–down, and so on and so forth. It is also clear that applying the second enumeration and then this reversion process gives the initial matrix. Now, take the global matrix substitution rule σ that replaces each $k \in \mathbb{Z}_p$ by the $d\times d$ matrix $\sigma_k$. Consider the associated words $\tilde{\sigma}_k$ with letters in $\mathbb{Z}_p$ obtained by applying the first enumeration to the matrices $\sigma_k$. Take an infinite matrix M with entries in $\mathbb{Z}_p$ and apply the second enumeration rule to M to obtain an infinite word $\tilde{M} = (\tilde{m}_l)_{l\ge1}$; we may define first a usual substitution rule $\tilde{\sigma}$ on $\mathbb{Z}_p$ by $\tilde{\sigma}(k) = \tilde{\sigma}_k$, and also a usual word substitution map $\tilde{\Phi}_{\sigma}$ on the set of infinite words built with letters in $\mathbb{Z}_p$ by:
$$\tilde{\Phi}_{\sigma}(\tilde{M}) = \left(\tilde{\sigma}(\tilde{m}_l)\right)_{l\ge1},$$
which is the infinite word obtained from the infinite word $\tilde{M}$ by replacing each one of its letters $k \in \mathbb{Z}_p$ by the corresponding word $\tilde{\sigma}_k$. Recall Proposition V.1 in ([3], p. 88), which guarantees the existence of some infinite word $\tilde{M}$ and some integer ρ such that:
$$\tilde{\Phi}^{\rho}_{\sigma}(\tilde{M}) = \underbrace{\tilde{\Phi}_{\sigma} \circ \tilde{\Phi}_{\sigma} \circ \cdots \circ \tilde{\Phi}_{\sigma}}_{\rho \ \text{times}}(\tilde{M}) = \tilde{M},$$
and consider the infinite matrix M such that the second type of enumeration applied to it returns $\tilde{M}$. It is clear that if we apply the second enumeration process to $\Phi^{\rho}_{\sigma}(M)$ we obtain $\tilde{\Phi}^{\rho}_{\sigma}(\tilde{M})$, which is equal to $\tilde{M}$; by reverting the enumeration process on $\tilde{M}$ we finally obtain M, that is:
$$\Phi^{\rho}_{\sigma}(M) = M,$$
as stated above. □
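The two enumerations used in the proof are easy to make explicit; the following sketch (ours) implements the row-major index of a $d\times d$ matrix, the diagonal index of an infinite matrix, and the inverse of the latter, checking that enumeration followed by reversion is the identity.

```python
def rowmajor_index(i, j, d):
    """First enumeration: s_ij = s~_{(i-1)d + j} (1-based indices)."""
    return (i - 1) * d + j

def diagonal_index(i, j):
    """Second enumeration: m_ij = m~_{(i+j-1)(i+j-2)/2 + i} (1-based)."""
    return (i + j - 1) * (i + j - 2) // 2 + i

def diagonal_inverse(l):
    """Reversion of the second enumeration: recover (i, j) from l."""
    t = 1
    while t * (t + 1) // 2 < l:   # find the diagonal containing index l
        t += 1
    i = l - (t - 1) * t // 2
    j = t - i + 1
    return i, j

assert all(diagonal_inverse(diagonal_index(i, j)) == (i, j)
           for i in range(1, 20) for j in range(1, 20))
```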

3.2.2. Fixed Points for Affine Matrix Substitution Maps Defined over Finite Matrices

We can obtain finite dimensional fixed points of matrix substitution maps by applying Theorem 2.
Definition 2
(Generalised fixed points for a finite matrix substitution map). Let us consider a given integer $n \ge 1$. The matrix $M \in \mathcal{M}^{<}_{n\times n}(\mathbb{Z}_p)$ (see Definition 1) is a finite matrix fixed point of the matrix substitution map $\Phi^{<}_{\sigma}$ if and only if there exists an integer $\rho \ge 1$ such that the leading principal part of order n of $(\Phi^{<}_{\sigma})^{\rho}(M)$ is equal to M.
Proposition 1.
For any integer $n \ge 2$ and a given matrix substitution map $\Phi^{<}_{\sigma}$, there exist fixed points in the sense of Definition 2.
Proof. 
We only have to apply Theorem 2 in order to obtain a fixed point $M \in \mathcal{M}_+$ of order ρ for the matrix substitution map $\Phi_{\sigma}$, and then to consider the leading principal matrix of order n of M. We obtain that $(\Phi^{<}_{\sigma})^{\rho}(M) \in \mathcal{M}^{<}_{n d^{\rho}\times n d^{\rho}}(\mathbb{Z}_p)$ and, since the leading principal part of $\Phi^{\rho}_{\sigma}(M)$ of order $n d^{\rho}$ is equal to the finite matrix $(\Phi^{<}_{\sigma})^{\rho}(M)$, we will have that the leading principal part of order n of $(\Phi^{<}_{\sigma})^{\rho}(M)$ is equal to M. □
We will next pursue the goal of obtaining fixed points of matrix substitution maps in an algorithmic way, that is, by dealing with finite matrices. Let us now introduce topological structures over the spaces of matrices defined in Section 3. In order to define semi-norms over $\mathcal{M}_{n\times n}$, a space we may identify with the space of finite matrices of order n over the field $\mathbb{Z}_p = \mathbb{Z}/p\mathbb{Z}$, we will consider the trivial absolute value $|\cdot|_p$ (see [37], pp. 197–198), given by:
$$\forall k \in \mathbb{Z}_p \quad |k|_p = \begin{cases} 0 & \text{if } k = 0 \\ 1 & \text{if } k \ne 0 \end{cases}.$$
If $\mathbb{Z}_p$ is considered as a vector space over itself then, due to the properties of an absolute value over a field, we have that $|\cdot|_p$ may be considered as a norm over the vector space $\mathbb{Z}_p$. For $M \in \mathcal{M}_{n\times n}(\mathbb{Z}_p)$, let the modified sum semi-norm be given, for $m > 1$, by:
$$\|M\|_m := \frac{1}{m^2} \sum_{1\le i,j\le m} |a_{ij}|_p \le 1. \tag{9}$$
Essentially, $\|M\|_m$ counts the proportion of nonzero elements in the leading principal matrix of order m of M. We observe that—with m the order of the semi-norm and n the order of the matrix—as $m > n$ grows, $\|M\|_m$ tends to zero. $\|\cdot\|_m$ is a semi-norm, as the proportion of nonzero entries of the sum of two matrices—with entries in the field $\mathbb{Z}_p$—can only decrease with respect to the sum of the proportions of each matrix. As a consequence of the decomposition of $\mathcal{M}_{n\times n}(\mathbb{Z}_p)$ in Formula (4), we have that:
$$\|M\|_{[n]} = \|M\|_{\mathcal{M}_{n\times n}(\mathbb{Z}_p)} := \frac{1}{n^2} \sum_{1\le i,j\le n} |a_{ij}|_p \le 1, \tag{10}$$
is a norm over $\mathcal{M}_{n\times n}(\mathbb{Z}_p)$ and, with the norm $\|\cdot\|_{\mathcal{M}_{n\times n}(\mathbb{Z}_p)}$, the space of matrices $\mathcal{M}_{n\times n}(\mathbb{Z}_p)$ is, obviously, a Fréchet space. Now, let $j : \mathcal{M}_{n\times n} \longrightarrow \mathcal{M}_{(n+1)\times(n+1)}$ be the natural injection, which is well defined taking into account Formula (4). Since, for $M \in \mathcal{M}_{n\times n} \subset \mathcal{M}_{(n+1)\times(n+1)}$, we have that $|a_{ij}|_p = 0$ whenever $i = n+1$ or $j = n+1$, we then have,
$$\|j(M)\|_{[n+1]} = \frac{1}{(n+1)^2} \sum_{1\le i,j\le n+1} |a_{ij}|_p = \frac{1}{(n+1)^2} \sum_{1\le i,j\le n} |a_{ij}|_p + \frac{1}{(n+1)^2} \sum_{i=n+1 \text{ or } j=n+1} |a_{ij}|_p \le \frac{1}{n^2} \sum_{1\le i,j\le n} |a_{ij}|_p = \|M\|_{[n]}. \tag{11}$$
As a consequence, j maps $(\mathcal{M}_{n\times n}, \|\cdot\|_{[n]})$ continuously into $(\mathcal{M}_{(n+1)\times(n+1)}, \|\cdot\|_{[n+1]})$. Furthermore, as a consequence, we may consider over $\mathcal{M}_0$ the inductive topology generated by the family of Fréchet spaces $(\mathcal{M}_{n\times n}, \|\cdot\|_{[n]})_{n\ge1}$ (see ([38], pp. 53–65), ([39], pp. 57–60), or ([40], pp. 222–225)).
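Computationally, the semi-norm is just the proportion of nonzero entries in a leading principal block; a minimal sketch (ours):

```python
import numpy as np

def seminorm(A, m):
    """||A||_m: proportion of nonzero entries, with respect to the trivial
    absolute value, in the leading principal m x m block of the array A."""
    return np.count_nonzero(A[:m, :m]) / m**2

# for M in M_{n x n}, the norm ||M||_[n] of Formula (10) is seminorm(M, n)
```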
Remark 5
(On the topology of the space $\mathcal{M}_0$). Let τ be this topology over $\mathcal{M}_0$. As a consequence of well-known results in the theory of LF spaces, we have that:
1. The restriction of τ to $\mathcal{M}_{n\times n}$ coincides with the norm topology of $\|\cdot\|_{[n]}$.
2. $(\mathcal{M}_0, \tau)$ is a Hausdorff space.
3. We have the Dieudonné–Schwartz lemma, that is, if a set B is bounded in $(\mathcal{M}_0, \tau)$ then there exists some $n_b \ge 1$ such that $B \subseteq \mathcal{M}_{n_b\times n_b}$.
4. A sequence $(M_n)_{n\ge1}$ converges in $(\mathcal{M}_0, \tau)$ if and only if there exists some $n_c \ge 1$ such that $\{M_n : n \ge 1\} \subseteq \mathcal{M}_{n_c\times n_c}$ and the sequence $(M_n)_{n\ge1}$ converges in $(\mathcal{M}_{n_c\times n_c}, \|\cdot\|_{[n_c]})$.
5. We have Köthe's theorem, that is, $(\mathcal{M}_0, \tau)$ is a complete space.
Remark 6
(A comparable topology). If we consider over $\mathcal{M}_+$ the family of semi-norms $(s_m)_{m\ge1}$, given by:
$$s_m\left([a_{ij}]_{i,j\ge1}\right) := \sup_{n\le m} \frac{1}{n^2} \sum_{1\le i,j\le n} |a_{ij}|_p, \tag{12}$$
we have (see [38], p. 64 for a proof of this result) that $\mathcal{M}_+$ is a Fréchet space, that $(\mathcal{M}_0, \tau)$ embeds continuously in $\mathcal{M}_+$, and that the closure of $(\mathcal{M}_0, \tau)$ in $\mathcal{M}_+$ is $\mathcal{M}_+$.
Now, let us consider $\mathbb{M}_{\sigma} \equiv (M_n)_{n\ge0}$ with $M_{n+1} = \Phi_{\sigma}(M_n)$. Our first goal is to study the contraction properties of $\Phi_{\sigma}$ over $\mathcal{M}_0$. The second goal is to extend $\Phi_{\sigma}$ to $\mathcal{M}_+$, also as a contraction. This allows us to identify an invariant set. For that purpose we have to identify conditions under which $\Phi_{\sigma}$ is linear, or affine, over $\mathcal{M}_0$.
Definition 3
(Linear matrix substitutions). The matrix substitution map $\Phi_{\sigma}$ (see Formulas (6)–(8)) is defined to be a linear matrix substitution map over $\mathcal{M}_0$ iff for all $k, k' \in \mathbb{Z}_p$ we have that:
$$\sigma_k + \sigma_{k'} = \sigma_{(k+k' \,\mathrm{mod}\, p)} \quad \text{and} \quad k' \cdot \sigma_k = \sigma_{(k\cdot k' \,\mathrm{mod}\, p)}. \tag{13}$$
Remark 7
(A substantiation of Definition 3). With $k + k' \in \mathbb{Z}_p$ and $k \cdot k' \in \mathbb{Z}_p$ we will obviously have that,
$$\Phi_{\sigma}(M+N) = \left[\sigma(a_{ij} + b_{ij})\right]_{1\le i,j\le n} = \left[\sigma(a_{ij})\right]_{1\le i,j\le n} + \left[\sigma(b_{ij})\right]_{1\le i,j\le n} = \Phi_{\sigma}(M) + \Phi_{\sigma}(N).$$
In fact, for the sum property—as for the product property the justification is similar—we have by definition,
$$\sigma(a_{ij}) = \sigma_k \ \text{ iff } \ a_{ij} = k \quad\text{and}\quad \sigma(b_{ij}) = \sigma_{k'} \ \text{ iff } \ b_{ij} = k',$$
and so,
$$\sigma(a_{ij}) + \sigma(b_{ij}) = \sigma_k + \sigma_{k'} = \sigma_{(k+k' \,\mathrm{mod}\, p)} = \sigma(a_{ij} + b_{ij}) \ \text{ iff } \ a_{ij} + b_{ij} = (k + k' \,\mathrm{mod}\, p).$$
Remark 8
(A consequence of Definition 3). Condition (13) for a matrix substitution to be linear implies that $\sigma_0 = 0$, the matrix with all entries equal to $0 \in \mathbb{Z}_p$, because we should have, for all $k \in \{0, 1, 2, \dots, p-1\}$, that $\sigma_0 + \sigma_k = \sigma_k$.
Remark 9
(Examples of linear matrix global substitution rules). A first example of a linear matrix substitution in $\mathbb{Z}_3$ is given by:
$$\sigma_0 = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix} \qquad \sigma_1 = \begin{pmatrix} 0 & 2 \\ 1 & 1 \end{pmatrix} \qquad \sigma_2 = \begin{pmatrix} 0 & 1 \\ 2 & 2 \end{pmatrix}.$$
Let us return to the example of Section 2.2. We observe that:
$$\sigma_2 + \sigma_1 \ (\mathrm{mod}\ 3) = \sigma_{(2+1 \,\mathrm{mod}\, 3)} = \sigma_0 = 0_{3\times3},$$
thus showing that the substitution is a linear matrix substitution. A linear matrix substitution is essentially defined by its $\sigma_1$ substitution and so, every linear matrix substitution is derived from a Kronecker power matrix equal to $\sigma_1$, as defined in Section 2.2. We stress that not all matrix substitutions are linear, as the first example in Section 2.1 shows. In fact, with the notations and definitions of this first example, we have that:
$$(\sigma_1 + \sigma_1)\ (\mathrm{mod}\ 3) = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 0 \\ 2 & 0 & 2 \end{pmatrix} \qquad (\sigma_2 - \sigma_0)\ (\mathrm{mod}\ 3) = \begin{pmatrix} 1 & 1 & 0 \\ 2 & 0 & 0 \\ 1 & 0 & 0 \end{pmatrix},$$
and $(\sigma_2 - \sigma_0 \ \mathrm{mod}\ 3) \ne (\sigma_1 + \sigma_1 \ \mathrm{mod}\ 3)$, thus showing that the substitution is not linear.
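Linearity in the sense of Definition 3 is a finite check over $\mathbb{Z}_p \times \mathbb{Z}_p$; a sketch (ours) confirming the two examples above:

```python
import numpy as np

def is_linear(sigma, p):
    """Check Condition (13): sigma_k + sigma_k' = sigma_{(k+k') mod p}
    and k' * sigma_k = sigma_{(k*k') mod p}, for all k, k' in Z_p."""
    for k in range(p):
        for kp in range(p):
            if not ((sigma[k] + sigma[kp]) % p == sigma[(k + kp) % p]).all():
                return False
            if not ((kp * sigma[k]) % p == sigma[(k * kp) % p]).all():
                return False
    return True

R0 = np.array([[2, 1, 0], [0, 1, 1], [1, 0, 2]])
sigma_kron = {k: (k * R0) % 3 for k in range(3)}       # Section 2.2 rules
sigma_first = {0: np.array([[0, 1, 2], [1, 1, 2], [2, 0, 1]]),
               1: np.array([[1, 0, 0], [0, 2, 0], [1, 0, 1]]),
               2: np.array([[1, 2, 2], [0, 1, 2], [0, 0, 1]])}  # Section 2.1
print(is_linear(sigma_kron, 3), is_linear(sigma_first, 3))  # True False
```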
Remark 10
(On the contraction character of a matrix substitution map). Let us suppose that we have some matrix with constant entries, for instance:
$$M = [a_{ij}]_{i,j\ge1} \in \mathcal{M}_{n\times n} \quad \text{with} \quad a_{ij} \equiv p-1.$$
Then, with the trivial absolute value over $\mathbb{Z}_p$,
$$\|M\|_{[n]} = \frac{1}{n^2} \sum_{1\le i,j\le n} |a_{ij}|_p = \frac{1}{n^2} \sum_{1\le i,j\le n} 1 = 1.$$
Now suppose, in the worst case scenario, that $\sigma_{p-1} \in \mathcal{M}^{<}_{d\times d}$ is a matrix with all its entries equal to $p-1$ except one, which is 0. We then have, as a consequence, that all the entries of the leading principal matrix of order $d\cdot n$ of $M_{n+1} = \Phi_{\sigma}(M_n)$ will be equal to $p-1$, except $n^2$ entries that will be equal to 0. It then follows that,
$$\|M_{n+1}\|_{[d\cdot n]} = \|\Phi_{\sigma}(M_n)\|_{[d\cdot n]} = \frac{1}{(d n)^2} \sum_{1\le i,j\le d n} |a_{ij}|_p = \frac{(d n)^2 - n^2}{(d n)^2} = 1 - \frac{1}{d^2} = \left(1 - \frac{1}{d^2}\right)\|M_n\|_{[n]},$$
since $\|M_n\|_{[n]} = 1$. This example shows that the contraction properties of $\Phi_{\sigma}$ depend on the proportion of zeros vis-à-vis the nonzero entries of the substitutions.
Proposition 2
(Linear matrix substitutions that are contractions). Let Φ σ be a linear matrix substitution map associated with a global substitution rule σ such that the maximum number of zeros in each σ k , for k { 1 , , k 1 } , is r with 1 r < d 2 . We recall that σ 0 is the square matrix with d 2 entries all equal to 0 Z p . Then, the map Φ σ is a contraction from M n × n into M n · d × n · d for every n 1 .
Proof. 
Take a matrix $A \in \mathcal{M}_{n\times n}$ such that the number of zero entries in the leading principal matrix of order n of A is s, with $0 \le s < n^2$. The case where A is a null matrix is irrelevant because, in this case, $\Phi_{\sigma}(A)$ is the null matrix. Then, in the leading principal matrix of order $n d$ of $\Phi_{\sigma}(A)$ there will be at least $s d^2$ zero entries, due to the substitution of each zero in A by the $d^2$ zeros of the matrix $\sigma_0$, which is a matrix of order d. Now, there are $n^2 - s$ entries of A which are different from zero and to each of these non-null entries there corresponds a maximum of r zero entries in $\Phi_{\sigma}(A)$. As a consequence, the total number of zero entries in $\Phi_{\sigma}(A)$ is bounded by $s d^2 + (n^2 - s) r$. As such, we have that the proportion of nonzero elements in $\Phi_{\sigma}(A)$ has the following upper bound:
$$\|\Phi_{\sigma}(A)\|_{[d\cdot n]} \le 1 - \frac{s d^2 + (n^2 - s) r}{n^2 d^2} = \left(1 - \frac{s}{n^2}\right)\left(1 - \frac{r}{d^2}\right) = \left(1 - \frac{r}{d^2}\right)\|A\|_{[n]}, \tag{14}$$
and so, $\Phi_{\sigma}$ is a contraction with constant $1 - r/d^2 < 1$. □
Remark 11
(On the fixed points of linear matrix substitution maps). We have first to observe that if $\Phi_{\sigma}$ is a linear matrix substitution map associated with any global substitution rule σ, then the null matrix $M = [a_{ij}]_{i,j\ge1} \in \mathcal{M}_{n\times n}$, that is, such that $a_{ij} \equiv 0 \in \mathbb{Z}_p$, is a fixed point of $\Phi_{\sigma}$. In fact, since $a_{ij} \equiv 0 \in \mathbb{Z}_p$ and $\sigma_0 = 0$, we have,
$$\Phi_{\sigma}(M) = M = 0.$$
Let us now describe the other, non-null, fixed points of $\Phi_{\sigma}$, a linear matrix substitution map, belonging to $\mathcal{M}_0$ (see Formula (5)). Consider a non-null matrix $M = [a_{ij}]_{i,j\ge1} \in \mathcal{M}_{n\times n}$ such that $\Phi_{\sigma}(M) = M$. By recalling that $\Phi_{\sigma}(M) \in \mathcal{M}_{n d\times n d}$ and reverting to the leading principal matrices of both M—a finite matrix of order n—and $\Phi_{\sigma}(M)$—which in turn is a finite matrix of order $n d$—we may conclude that, with $a_{11} \ne 0$, if $a_{11} = k$ for $k \in \{1, 2, \dots, p-1\} \subset \mathbb{Z}_p$, then $\sigma(a_{11}) = \sigma_k \ne 0$. Moreover, we should also have, due to $\Phi_{\sigma}(M) = M$, that:
$$\forall (i,j) \ne (1,1), \ a_{ij} \ne a_{11} \quad \text{and} \quad \forall l \in \{1, 2, \dots, p-1\}, \ l \ne k \implies \sigma_l = 0.$$
We may conclude that if we are given a linear matrix substitution map, then either the corresponding global substitution rule has the particular structure described above or there exist no fixed points in $\mathcal{M}_0$ other than the null matrix.
In order to overcome the limitation of the fixed points for linear matrix substitution maps, we may consider other matrix substitution maps, such as the ones defined next.
Definition 4
(Affine matrix substitutions). A matrix substitution map Φ is an affine matrix substitution map if there exist a linear global substitution rule σ and a constant global substitution rule $\nu_c$ such that,
$$\Phi = \Phi_{\sigma} +_{(\mathrm{mod}\ p)} \Phi_{\nu_c} = \Phi_{\sigma} +_{(\mathrm{mod}\ p)} \nu_c, \tag{15}$$
with $\Phi_{\sigma}$ the linear matrix substitution map associated with σ and $\Phi_{\nu_c}$ the constant matrix substitution map associated with $\nu_c$.
Remark 12.
The important equality on the right-hand side of Formula (15) can be verified by resorting to the definition of a matrix substitution map associated with a global substitution rule.
We will now consider Definition 2 of the generalised fixed points for finite matrix substitution maps. Recall that, according to the definition in Formula (7), we have that $\Phi^{<}_{\sigma}(M) \in \mathcal{M}^{<}_{d\cdot n\times d\cdot n}$, and introduce the following notation,
$$\left[\Phi^{<}_{\sigma+\nu_c}(M)\right]_n, \tag{16}$$
to denote the leading principal part of order n of $\Phi^{<}_{\sigma+\nu_c}(M)$ for $M \in \mathcal{M}_{n\times n}$.
Theorem 3
(Fixed points of affine matrix substitutions). Consider an affine matrix substitution $\Phi_{\sigma+\nu_c} = \Phi_{\sigma} + \Phi_{\nu_c}$ such that, for the global substitution rule σ of the linear part, the maximum number of zeros in each $\sigma_k$, for $k \in \{1, \dots, p-1\}$, is r, with $1 \le r < d^2$. Then we have that:
1. $\Phi_{\sigma+\nu_c}$ is a contraction from $\mathcal{M}_{n\times n}$ into $\mathcal{M}_{n\cdot d\times n\cdot d}$ for every $n \ge 1$.
2. $\Phi_{\sigma+\nu_c}$ is a contraction from $\mathcal{M}_0$ into $\mathcal{M}_0$.
3. There exist $s \ge 1$ and $L = [a_{ij}]_{i,j\ge1} \in \mathcal{M}_{s\times s}$, a fixed point of $\Phi_{\sigma+\nu_c}$, that is, such that $\left[\Phi^{<}_{\sigma+\nu_c}(L)\right]_s = L$.
Proof. 
The first statement follows from Formula (14) of Proposition 2. Recall that, by Formula (10), we have that $\|M\|_{[n]} = \|M\|_n$ for $M \in \mathcal{M}_{n\times n}$, where, for m an integer, $\|M\|_m$ is the semi-norm defined in Formula (9). For $M, N \in \mathcal{M}_{n\times n}$ we have that:
$$\left\|\Phi_{\sigma+\nu_c}(M) - \Phi_{\sigma+\nu_c}(N)\right\|_{[d\cdot n]} = \left\|\Phi_{\sigma}(M-N)\right\|_{[d\cdot n]} \le \left(1 - \frac{r}{d^2}\right)\|M-N\|_{[n]}, \tag{17}$$
thus showing that the second statement is a consequence of the definition of the inductive topology of $\mathcal{M}_0$ and of a natural definition of a contraction in an LF topological vector space. The last statement follows from a usual Banach fixed point theorem type argument, suitably modified. We first show the Cauchy sequence contraction inequality. Let $M \in \mathcal{M}_{n\times n}$ be given and consider the matrix substitution sequence $\mathbb{M}_{\sigma+\nu_c} \equiv (M_m)_{m\ge0}$ defined, by induction, by:
$$\forall m \ge 0 \quad M_{m+1} = \Phi_{\sigma+\nu_c}(M_m) = \Phi^{(m+1)}_{\sigma+\nu_c}(M_0),$$
with $M_0 = M$ and the iterated application map given, for instance for the second order iteration, by $\Phi^{(2)}_{\sigma+\nu_c} = \Phi_{\sigma+\nu_c} \circ \Phi_{\sigma+\nu_c}$. We now show that $\mathbb{M}_{\sigma+\nu_c}$ is a Cauchy sequence in $\mathcal{M}_0$. For that (see [38], p. 30), we have to show that for every U, a neighbourhood of zero in $\mathcal{M}_0$, there exists some integer $m_0 \ge 1$ such that for all $p \ge 1$ and $m \ge m_0$ we have $M_{m+p} - M_m \in U$. We start by using Formula (17) to establish a contraction Cauchy sequence type inequality:
$$\|M_{m+p} - M_m\|_{[d^{m+p}\cdot n]} \le \sum_{k=1}^{p} \|M_{m+k} - M_{m+k-1}\|_{[d^{m+k}\cdot n]} \le \sum_{k=1}^{p} \left(1 - \frac{r}{d^2}\right)^{m+k-1} \left\|\Phi_{\sigma+\nu_c}(M_0) - M_0\right\|_{[d\cdot n]} \le \frac{d^2}{r}\left(1 - \frac{r}{d^2}\right)^{m} \left\|\Phi_{\sigma+\nu_c}(M_0) - M_0\right\|_{[d\cdot n]}. \tag{18}$$
Since, by Köthe's Theorem, $\mathcal{M}_0$ is a complete space, the conclusion now follows by the following argument. Let us rewrite the inequality (18) in the form:
$$M_{m+p} - M_m \in B_{[d^{m+p}\cdot n]}\left(0, c\lambda^{m}\right),$$
with $B_{[d^{m+p}\cdot n]}(0, c\lambda^{m})$ the ball centred at zero with radius $c\lambda^{m}$ in $\mathcal{M}_{d^{m+p}\cdot n\times d^{m+p}\cdot n}$, with,
$$c := \frac{d^2}{r}\left\|\Phi_{\sigma+\nu_c}(M_0) - M_0\right\|_{[d\cdot n]} \quad \text{and} \quad \lambda := 1 - \frac{r}{d^2}.$$
Now, let U be a convex neighbourhood of zero in $\mathcal{M}_0$. Then (see [38], p. 57), for all $n \ge 1$ we have that $U \cap \mathcal{M}_{n\times n}$ is a neighbourhood of zero in $\mathcal{M}_{n\times n}$ and so,
$$\exists\, \epsilon > 0 \quad B_{\mathcal{M}_{n\times n}}(0, \epsilon) \subseteq U \cap \mathcal{M}_{n\times n} \subseteq U.$$
Let $m_0$ be an integer such that for all $m \ge m_0$ we have that $c\lambda^{m} < \epsilon$, which is possible as $\lambda < 1$. Now, due to the decreasing properties of the norms of the spaces $\mathcal{M}_{n\times n}$, we have that
$$\forall p \ge 1, \ \forall m \ge m_0 \quad M_{m+p} - M_m \in B_{[d^{m+p}\cdot n]}\left(0, c\lambda^{m}\right) \subseteq B_{\mathcal{M}_{n\times n}}(0, \epsilon) \subseteq U,$$
thus showing that $\mathbb{M}_{\sigma+\nu_c}$ is a Cauchy sequence in $\mathcal{M}_0$. Finally, as a consequence of the properties of the topology of the space $\mathcal{M}_0$, we have that the sequence $\mathbb{M}_{\sigma+\nu_c}$ converges in $\mathcal{M}_0$ and so, for some $s \ge 1$, the sequence $\mathbb{M}_{\sigma+\nu_c}$ converges in $\mathcal{M}_{s\times s}$. As a consequence, there exists $L \in \mathcal{M}_{s\times s}$ such that:
$$\lim_{n\to+\infty} \left\|L - \Phi^{(n)}_{\sigma+\nu_c}(M)\right\|_{[s]} = 0. \tag{20}$$
We now observe that:
$$\left\|L - \Phi_{\sigma+\nu_c}(L)\right\|_{[s+1]} \le \left\|\Phi_{\sigma+\nu_c}(L) - \Phi^{(n+1)}_{\sigma+\nu_c}(M)\right\|_{[s+1]} + \left\|\Phi^{(n)}_{\sigma+\nu_c}(M) - \Phi^{(n+1)}_{\sigma+\nu_c}(M)\right\|_{[s+1]} + \left\|L - \Phi^{(n)}_{\sigma+\nu_c}(M)\right\|_{[s+1]}.$$
Now, by the contraction property of $\Phi_{\sigma+\nu_c}$ shown in Formula (17) and by the canonical injection of $\mathcal{M}_{s\times s}$ into $\mathcal{M}_{(s+1)\times(s+1)}$ of Formula (11), we have that:
$$\left\|\Phi_{\sigma+\nu_c}(L) - \Phi^{(n+1)}_{\sigma+\nu_c}(M)\right\|_{[s+1]} \le \left\|L - \Phi^{(n)}_{\sigma+\nu_c}(M)\right\|_{[s+1]} \le \left\|L - \Phi^{(n)}_{\sigma+\nu_c}(M)\right\|_{[s]},$$
and so, by Formulas (18) and (20), we have that $\left\|L - \Phi_{\sigma+\nu_c}(L)\right\|_{[s+1]} = 0$, and this implies that $\left[\Phi^{<}_{\sigma+\nu_c}(L)\right]_s = L$, that is, L is a generalised fixed point for the finite matrix substitution map $\Phi_{\sigma+\nu_c}$. □
Remark 13
(Comparing Theorem 3 and Proposition 1). Theorem 3 is an improvement of Proposition 1 in two directions. It is a constructive result, since it gives an algorithm to obtain a fixed point; moreover, while in Proposition 1 the fixed point was a fixed point of some number of iterations of the matrix substitution map, in Theorem 3 the fixed point obtained is a fixed point of a single iteration of the matrix substitution map.
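The constructive content of Theorem 3 suggests a naive algorithm: iterate the substitution map on leading principal parts until the order-s leading block stabilises. A sketch under these assumptions, reusing `substitute` from the Section 2.1 sketch (the stopping rule is ours; an affine rule can be encoded directly in the dictionary of blocks):

```python
def leading_fixed_point(sigma, M0, s, max_iter=50):
    """Iterate the substitution map and return the order-s leading principal
    block once it stabilises: a fixed point candidate in the sense of
    Definition 2 (with a single iteration, as in Theorem 3)."""
    M = M0
    prev = None
    for _ in range(max_iter):
        # truncation is harmless: the order-s leading block of the next
        # iterate depends only on a leading block of the current one
        M = substitute(M, sigma)[:s, :s]
        if prev is not None and M.shape == prev.shape and (M == prev).all():
            return M
        prev = M
    return None   # no stabilisation observed within max_iter steps
```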

4. Random Matrices Associated to Structured Matrices

In this Section we consider structured random matrices derived from the structured matrices considered in Section 2. Our approach to the spectral analysis of random matrices derived from matrices built with a matrix substitution procedure relies on the general theory of random linear operators as presented in [41]. Other, more recent, approaches to this subject are given in [42,43,44]. Take a structured matrix built by substitutions—which we will denominate the skeleton of the random matrix—and consider the associated random matrix having as entries random variables such that to the occurrence of each field element $i \in \mathbb{Z}_p$ in the skeleton there corresponds a random variable with, at least, the same expected value as that of a given random variable $X^i$, the same for each given $i \in \mathbb{Z}_p$. We will also consider the more stringent assumption that the entries in the random matrix corresponding to the same field element $i \in \mathbb{Z}_p$ are equi-distributed with a given random variable $X^i$. The random matrix can have independent entries or not. As usual, the study of the independent case is easier and we will assume independence. For instance, take the matrix $M_1$ in Formula (2), that is:
$$M_1 = \left[m^{1}_{i,j}\right]_{i,j} = \begin{pmatrix}
1 & 2 & 2 & 0 & 1 & 2 & 1 & 0 & 0 \\
0 & 1 & 2 & 1 & 1 & 2 & 0 & 2 & 0 \\
0 & 0 & 1 & 2 & 0 & 1 & 1 & 0 & 1 \\
1 & 0 & 0 & 1 & 2 & 2 & 1 & 0 & 0 \\
0 & 2 & 0 & 0 & 1 & 2 & 0 & 2 & 0 \\
1 & 0 & 1 & 0 & 0 & 1 & 1 & 0 & 1 \\
1 & 0 & 0 & 0 & 1 & 2 & 1 & 2 & 2 \\
0 & 2 & 0 & 1 & 1 & 2 & 0 & 1 & 2 \\
1 & 0 & 1 & 2 & 0 & 1 & 0 & 0 & 1
\end{pmatrix}.$$
This matrix is the skeleton of the following random matrix:
$$M_1(X_{\#}) = \begin{pmatrix}
X^{1}_{\#} & X^{2}_{\#} & X^{2}_{\#} & X^{0}_{\#} & X^{1}_{\#} & X^{2}_{\#} & X^{1}_{\#} & X^{0}_{\#} & X^{0}_{\#} \\
X^{0}_{\#} & X^{1}_{\#} & X^{2}_{\#} & X^{1}_{\#} & X^{1}_{\#} & X^{2}_{\#} & X^{0}_{\#} & X^{2}_{\#} & X^{0}_{\#} \\
X^{0}_{\#} & X^{0}_{\#} & X^{1}_{\#} & X^{2}_{\#} & X^{0}_{\#} & X^{1}_{\#} & X^{1}_{\#} & X^{0}_{\#} & X^{1}_{\#} \\
X^{1}_{\#} & X^{0}_{\#} & X^{0}_{\#} & X^{1}_{\#} & X^{2}_{\#} & X^{2}_{\#} & X^{1}_{\#} & X^{0}_{\#} & X^{0}_{\#} \\
X^{0}_{\#} & X^{2}_{\#} & X^{0}_{\#} & X^{0}_{\#} & X^{1}_{\#} & X^{2}_{\#} & X^{0}_{\#} & X^{2}_{\#} & X^{0}_{\#} \\
X^{1}_{\#} & X^{0}_{\#} & X^{1}_{\#} & X^{0}_{\#} & X^{0}_{\#} & X^{1}_{\#} & X^{1}_{\#} & X^{0}_{\#} & X^{1}_{\#} \\
X^{1}_{\#} & X^{0}_{\#} & X^{0}_{\#} & X^{0}_{\#} & X^{1}_{\#} & X^{2}_{\#} & X^{1}_{\#} & X^{2}_{\#} & X^{2}_{\#} \\
X^{0}_{\#} & X^{2}_{\#} & X^{0}_{\#} & X^{1}_{\#} & X^{1}_{\#} & X^{2}_{\#} & X^{0}_{\#} & X^{1}_{\#} & X^{2}_{\#} \\
X^{1}_{\#} & X^{0}_{\#} & X^{1}_{\#} & X^{2}_{\#} & X^{0}_{\#} & X^{1}_{\#} & X^{0}_{\#} & X^{0}_{\#} & X^{1}_{\#}
\end{pmatrix},$$
built with the rules detailed above; it is thus a structured random matrix $M_1(X_{\#}) = \left[X_{\#}(m^{1}_{i,j})\right]_{i,j}$ with skeleton $M_1 = \left[m^{1}_{i,j}\right]_{i,j}$, such that the entries are independent and verify, at least, $\mathbb{E}\left[X_{\#}(m^{1}_{i,j})\right] = m^{1}_{i,j}$.
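A minimal sketch of this randomisation (ours; the Gaussian perturbation is an illustrative choice satisfying $\mathbb{E}\left[X_{\#}(m^{1}_{i,j})\right] = m^{1}_{i,j}$), reusing the skeleton M1 built in the Section 2.1 sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def randomise(skeleton, tau=0.1):
    """Structured random matrix with independent entries whose expected
    values are the skeleton entries: X_ij = m_ij + centred Gaussian noise."""
    return skeleton + tau * rng.standard_normal(skeleton.shape)

M1_random = randomise(M1)   # M1: the skeleton of Formula (2)
```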
We will address, in Section 4.1, Section 4.2, Section 4.3 and Section 4.4, several questions regarding these structured random matrices, to wit:
  • Identification of a random matrix model (Section 4.1);
  • Convergence in law of random matrices built on skeleton matrices derived from substitution maps having a fixed point (Section 4.2);
  • Spectral analysis of some random structured matrices (Section 4.3);
  • Random surfaces associated with random matrices built on skeleton matrices derived from substitution maps having a fixed point (Section 4.4).

4.1. Testing for a Given Matrix Structure in a Realisation of a Stochastic Matrix

In this Section we will address the problem of testing whether a given observed matrix can be considered a realisation of a random matrix associated with a structured matrix built by a substitution map; this will be performed in a simple case. Let us suppose that we are given a realisation $M = [x_{ij}]_{1\le i,j\le N}$ of a random matrix $\mathbf{M} = [X_{ij}]_{1\le i,j\le N}$ having a structure derived from a matrix substitution map. We will admit the following assumptions.
(A) The random matrix $\mathbf{M}$ has a skeleton—that is, a matrix $[m_{i,j}]_{i,j}$ with entries in $\mathbb{Z}_p$—which is a fixed point of the matrix substitution map. This assumption is justified on the grounds of the process that originated the skeleton being past its transient phase.
(B) The random variables which are entries of the random matrix $\mathbf{M}$ form a set of independent random variables.
Consider now, for each $i \in \mathbb{Z}_p$, the sequence $X^{i}_{N_i} = (X^{i}_n)_{1\le n\le N_i}$ formed by the random variables of the random matrix $\mathbf{M}$ that correspond to the entries in the skeleton with value i; we observe that $\sum_{i\in\mathbb{Z}_p} N_i = N^2$. We assume furthermore that:
(C) For each $i \in \mathbb{Z}_p$ we have that $X^i \sim G_i(\theta)$, that is, the corresponding random variable $X^i$ has a probability law $G_i(\theta)$, with $\theta \in \Theta_i \subseteq \mathbb{R}^q$ a parameter.
Due to hypotheses (B) and (C), the sequence $X^{i}_{N_i}$ is a sample of the given random variable $X^i$, and so a test procedure, such as a likelihood ratio test, can be applied to determine whether the matrix realisation M comes from a prescribed model of a random matrix with entry distributions verifying assumption (C) and with the skeleton given by a fixed point of the substitution map, according to assumption (A).
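In code, the grouping by skeleton value is immediate; the sketch below (ours) uses a one-sample t-test on each group as a simple stand-in for the likelihood ratio test mentioned above, assuming SciPy is available and reusing M1 and M1_random from the previous sketches.

```python
import numpy as np
from scipy import stats

def group_by_skeleton(X_obs, skeleton):
    """Split the observed entries into the samples X^i, i in Z_p,
    according to the value of the corresponding skeleton entry."""
    return {int(i): X_obs[skeleton == i] for i in np.unique(skeleton)}

# H0 for each group: the entries over skeleton value i have mean i
samples = group_by_skeleton(M1_random, M1)
for i, sample in samples.items():
    res = stats.ttest_1samp(sample, popmean=float(i))
    print(f"value {i}: N_i = {sample.size}, p-value = {res.pvalue:.3f}")
```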
Remark 14
(On the detection of a structured random matrix). Let us suppose that we have an observed large matrix which we suppose to be a realisation of a random matrix with independent centred entries. If the random variables are identically distributed then, by force of the circular law quoted in Theorem 1, the spectral distribution of the normalised random matrix should be approximately the uniform distribution on the unit disc; a rejection of such a null hypothesis can be taken as a strong indication of the existence of some particular structure in the matrix, namely that the entries are not identically distributed. For a formulation of such a statistical test, see [45] and also [46,47,48] and other references therein. Let us observe that it may be impossible to discern whether a structure is present or not; in fact, we have examples showing that if the coefficient of variation is large, the distribution of the eigenvalues of a structured matrix may have a pattern similar to that of an unstructured matrix.

4.2. Convergence in Law of Random Structured Matrices Built by Arbitrary Substitutions

In this section, we show that if we consider a matrix fixed point of a matrix substitution map then the sequence of random matrices having as skeletons the sequence of iterates, by the matrix substitution map, of a given matrix converges in law to the random matrix that has as skeleton the fixed point of the matrix substitution map. We suppose that we are in the following context and notations.
  • A global substitution rule $\sigma : \mathbb{Z}_p \longrightarrow \mathcal{M}^{<}_{d\times d}(\mathbb{Z}_p)$;
  • The associated matrix substitution map $\Phi_{\sigma}$ defined on $\mathcal{M}_+$;
  • A fixed point $M^{\infty}$ of the substitution map $\Phi_{\sigma}$;
  • The entries in the random matrix corresponding to the same field element $i \in \mathbb{Z}_p$ are equi-distributed with a given random variable $X^i$.
We recall that if $M_0 \in \mathcal{M}_+$ and $M_n = \Phi_{\sigma}(M_{n-1})$ for $n \ge 1$, then $M^{\infty} = \lim_{n\to+\infty} M_n$, the convergence taking place in the topology of $\mathcal{M}_+$ defined by the increasing sequence of semi-norms given in Formula (12) (see Remark 6).
Theorem 4
(Convergence in law of random structured matrices). Suppose that for each $i \in \mathbb{Z}_p$ the characteristic function of the random variable $X^i$ is continuous at zero. If, for $n \ge 1$, $M_n(X_{\#})$ and $M^{\infty}(X_{\#})$ are the random structured matrices with skeletons $M_n$ and $M^{\infty}$, respectively, defined as above, then:
$$\mathrm{Law}\left(M_n(X_{\#})\right) \underset{n\to+\infty}{\longrightarrow} \mathrm{Law}\left(M^{\infty}(X_{\#})\right). \tag{21}$$
Proof. 
Before applying Lévy's continuity theorem, we clarify the convergence in $\mathcal{M}_+$. The increasing family of semi-norms $(s_m)_{m\ge1}$ defined by:
$$s_m(M) = s_m\left([a_{ij}]_{i,j\ge1}\right) := \sup_{n\le m} \frac{1}{n^2} \sum_{1\le i,j\le n} |a_{ij}|_p,$$
gives the maximum proportion of non-null terms in the leading principal parts of dimension less than or equal to m of the matrix $M = [a_{ij}]_{i,j\ge1}$. Taking $M_0 \in \mathcal{M}_+$ and $M_n = \Phi_{\sigma}(M_{n-1})$ for $n \ge 1$, we have that $M^{\infty} = \lim_{n\to+\infty} M_n$ if and only if:
$$\forall m \ge 1, \quad \lim_{n\to+\infty} s_m\left(M_n - M^{\infty}\right) = 0.$$
If this is the case, taking now $\epsilon < 1/m^2$, for a given $m \ge 1$, if $s_m(M_n - M^{\infty}) \le \epsilon$ then necessarily the leading principal parts of order m of $M_n$ and $M^{\infty}$ are equal. This implies that all the entries of the leading principal parts of order m of $M_n(X_{\#})$ and $M^{\infty}(X_{\#})$ have the same laws. Now, given an infinite random matrix $M(X_{\#}) = \left[a_{ij}(X_{\#})\right]_{i,j\ge1}$ with skeleton $M = [a_{ij}]_{i,j\ge1}$, we may consider its characteristic function $\varphi_{M(X_{\#})}$, for each $t \in \mathbb{R}$, by:
$$\forall t \in \mathbb{R}, \quad \varphi_{M(X_{\#})}(t) = \left[\varphi_{a_{ij}(X_{\#})}(t)\right]_{i,j\ge1} = \left[\mathbb{E}\left(e^{\mathrm{i}\,t\,a_{ij}(X_{\#})}\right)\right]_{i,j\ge1}.$$
For each $t \in \mathbb{R}$, we have that $\varphi_{M_n(X_{\#})}(t)$ and $\varphi_{M^{\infty}(X_{\#})}(t)$ are infinite matrices with coefficients in $\mathbb{C}$. We consider on the space $\mathcal{M}(\mathbb{C})$ of infinite matrices $[z_{ij}]_{i,j\ge1}$, with coefficients $z_{ij} \in \mathbb{C}$, the topology defined by the increasing family of semi-norms:
$$\rho_m\left([z_{ij}]_{i,j\ge1}\right) = \sup_{n\le m} \sum_{1\le i,j\le n} |z_{ij}|,$$
and we now show that:
$$\lim_{n\to+\infty} \varphi_{M_n(X_{\#})}(t) \overset{\mathcal{M}(\mathbb{C})}{=} \varphi_{M^{\infty}(X_{\#})}(t) \iff \forall m \ge 1 \ \ \lim_{n\to+\infty} \rho_m\left(\varphi_{M_n(X_{\#})}(t) - \varphi_{M^{\infty}(X_{\#})}(t)\right) = 0,$$
for every fixed $t \in \mathbb{R}$. It is enough to consider $\epsilon < 1/m^2$ for any fixed $m \ge 1$. As seen above, if $n \ge 1$ is such that $s_m(M_n - M^{\infty}) \le \epsilon$, then necessarily the leading principal parts of order m of $M_n(X_{\#})$ and $M^{\infty}(X_{\#})$ have the same laws, so the characteristic functions of the entries of the respective leading principal parts of order m also coincide, and therefore $\rho_m\left(\varphi_{M_n(X_{\#})}(t) - \varphi_{M^{\infty}(X_{\#})}(t)\right) = 0$. As a consequence of Lévy's continuity theorem (see ([49], p. 389) or ([50], p. 144)), we have the thesis of the theorem in Formula (21). □

4.3. Spectral Analysis of Some Structured Random Matrices

In this Section we will provide results shedding light on the spectral analysis of some random structured matrices. The first result shows that, under some mild assumptions, a random structured matrix defines, almost surely for each one of its realisations, a Hilbert–Schmidt operator on $l^2(\mathbb{N})$, the Hilbert space of square summable sequences. The two main references needed in this Section are [51,52], for the results on Hilbert–Schmidt operators, and [41], for random linear operators.
Theorem 5
(Random structured matrices with vanishing second moments). Consider a random structured matrix $M(X^{(\#)}) = \left[X^{m_{ij}}_{ij}\right]_{i,j}$ with skeleton $M = [m_{ij}]_{i,j}$, only verifying $\mathbb{E}\left[X^{m_{ij}}_{ij}\right] = m_{ij}$ besides the independence of the entries. Let $(e_i)_{i\ge1}$ be the canonical orthonormal basis of $l^2(\mathbb{N})$, that is, $e_i = (e^1_i, e^2_i, \dots, e^n_i, \dots)$ with $e^n_i = \delta^n_i$, the Kronecker delta. We assume that the second moments $\mathbb{E}\left[\left(X^{m_{ij}}_{ij}\right)^2\right]$ of the random matrix entries go to zero sufficiently fast as i, j grow indefinitely; more precisely:
$$\sum_{i,j} \mathbb{E}\left[\left(X^{m_{ij}}_{ij}\right)^2\right] = C < +\infty. \tag{22}$$
Then we have that:
$$\mathbb{P}\left[\sum_{i,j} \left\langle M(X^{(\#)})\,e_i, e_j\right\rangle^2 < +\infty\right] = 1. \tag{23}$$
Moreover, for almost every $\omega \in \Omega$, $M(X^{(\#)})(\omega)$ defines a bounded operator on $l^2(\mathbb{N})$ which is also a Hilbert–Schmidt operator on $l^2(\mathbb{N})$.
Proof. 
The proof essentially relies on Skorohod's sufficient condition for random linear operators in Hilbert space. We observe that:
$$\sum_{i,j} \left\langle M(X^{(\#)})\,e_i, e_j\right\rangle^2 = \sum_{i,j} \left(X^{m_{ij}}_{ij}\right)^2.$$
The condition in Formula (22) implies, by Lebesgue's monotone convergence theorem, that:
$$\mathbb{E}\left[\sum_{i,j} \left(X^{m_{ij}}_{ij}\right)^2\right] = \sum_{i,j} \mathbb{E}\left[\left(X^{m_{ij}}_{ij}\right)^2\right] = C < +\infty.$$
And so, by a standard argument, we have the conclusion announced in Formula (23):
$$\mathbb{P}\left[\sum_{i,j} \left\langle M(X^{(\#)})\,e_i, e_j\right\rangle^2 < +\infty\right] = \mathbb{P}\left[\sum_{i,j} \left(X^{m_{ij}}_{ij}\right)^2 < +\infty\right] = 1.$$
We first have for ω Ω almost surely, that the operator M ( ω ) : = M ( X ( # ) ( ω ) ) is bounded, since, for all s l 2 ( N ) , that is such that s = ( s i ) i 1 with i 1 | s i | 2 < + , we have, by Parseval’s equality and by Cauchy–Schwartz’s inequality:
$$\big\|M(\omega)(s)\big\|^2 = \sum_{j\ge1} \big|\langle M(\omega)(s), e_j\rangle\big|^2 = \sum_{j\ge1} \Big|\sum_{i\ge1} \langle M(\omega)(e_i), e_j\rangle\, \langle s, e_i\rangle\Big|^2 \le \sum_{j\ge1} \Big(\sum_{i\ge1} \big|\langle M(\omega)(e_i), e_j\rangle\big|^2\Big)\Big(\sum_{i\ge1} \big|\langle s, e_i\rangle\big|^2\Big) = \Big(\sum_{i,j\ge1} \big|\langle M(\omega)(e_i), e_j\rangle\big|^2\Big)\, \|s\|^2,$$
and thus, by Formula (23), the operator $M(\omega)$ is bounded. The final conclusion results from Remark 2 in Skorohod's treatise ([41], p. 8), stating that the condition expressed in Formula (23) suffices for the matrix operator defined by the random matrix $M(X^{(\#)})$ to be a Hilbert–Schmidt operator, almost surely. In fact, by Theorem 2 in ([51], p. 34), a sufficient condition for the operator $M(\omega)$ to be a Hilbert–Schmidt operator is that:
$$\sum_{i\ge1} \big\|M(\omega)(e_i)\big\|^2 = \sum_{i\ge1}\sum_{j\ge1} \big|\langle M(\omega)(e_i), e_j\rangle\big|^2 < +\infty,$$
and so the last result announced follows. □
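The summability condition of Theorem 5 can be probed numerically on finite truncations. The sketch below uses stand-in choices (a skeleton that vanishes off a leading block, and $\mathcal{N}(m_{ij}, 1/(ij)^2)$ entry laws); note that Formula (22) indeed forces the skeleton entries to vanish at infinity, since $\mathbb{E}[(X_{ij}^{m_{ij}})^2] \ge m_{ij}^2$.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 300
i, j = np.indices((n, n)) + 1
m = np.zeros((n, n))
m[:5, :5] = rng.integers(0, 3, size=(5, 5))   # skeleton zero off a block
sigma = 1.0 / (i * j)                         # Var X_ij = 1/(ij)^2, summable
X = m + sigma * rng.normal(size=(n, n))       # E[X_ij] = m_ij

# sum_{i,j} <M(X) e_i, e_j>^2: the squared Hilbert-Schmidt norm of the
# truncation; it stays bounded as n grows, in line with Formula (23).
print(np.sum(X ** 2))
```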
As a consequence of Theorem 5 and of the spectral theorem, we obtain the spectral representation of the kind of structured random matrices studied in this Section.
Remark 15
(On the definition of eigenvalues of random structured matrices). Since every Hilbert–Schmidt operator is compact and the random matrix entries are real, the spectral theorem for compact self-adjoint operators (see [52], p. 113) shows that, for almost every $\omega \in \Omega$, there is an orthonormal system $(\phi_i(\omega))_{i\ge1}$ of eigenvectors of $M(\omega)$, with corresponding eigenvalues $(\lambda_i(\omega))_{i\ge1}$, such that for all $s \in l^2(\mathbb{N})$ we have that:
$$M(\omega)(s) = \sum_{i\ge1} \lambda_i(\omega)\, \big\langle s, \phi_i(\omega)\big\rangle\, \phi_i(\omega),$$
and since the operator M ( ω ) is Hilbert–Schmidt we have that:
$$\sum_{j\ge1} \big\|M(\omega)(\phi_j(\omega))\big\|^2 = \sum_{j\ge1} \Big\|\sum_{i\ge1} \lambda_i(\omega)\, \big\langle \phi_j(\omega), \phi_i(\omega)\big\rangle\, \phi_i(\omega)\Big\|^2 = \sum_{j\ge1} \big\|\lambda_j(\omega)\, \phi_j(\omega)\big\|^2 = \sum_{j\ge1} \big|\lambda_j(\omega)\big|^2 < +\infty.$$
So, the random structured matrices studied in this Section have, almost surely, square summable eigenvalue sequences.
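On finite truncations, the identity between the sum of the squared eigenvalues and the Hilbert–Schmidt norm invoked above can be checked directly; the sketch below symmetrises a realisation so that the spectral theorem for compact self-adjoint operators applies, with an illustrative decay profile for the second moments.

```python
import numpy as np

rng = np.random.default_rng(4)

n = 300
i, j = np.indices((n, n)) + 1
A = rng.normal(0.0, 1.0 / (i * j))   # entries with summable second moments
A = (A + A.T) / 2                    # self-adjoint realisation M(omega)

lam = np.linalg.eigvalsh(A)          # real eigenvalues lambda_i(omega)
# sum_j |lambda_j|^2 equals the squared Hilbert-Schmidt norm sum_{i,j} A_ij^2.
print(np.sum(lam ** 2), np.sum(A ** 2))
```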
The next result shows that the image of a nonrandom vector by some of the structured random matrices in this Section is, asymptotically, a Gaussian vector.
Theorem 6
(Gaussian character of images of nonrandom vectors by some structured random matrices). Consider a random structured matrix $M(X^{(\#)}) = \big[X_{ij}^{m_{ij}}\big]_{i,j}$ with skeleton $M = [m_{ij}]_{i,j}$, only verifying $\mathbb{E}\big[X_{ij}^{m_{ij}}\big] = m_{ij}$ and that $\mathbb{V}\big[X_{ij}^{m_{ij}}\big]$ is bounded, besides the independence of the entries. Suppose that $x \in l^2(\mathbb{N}) \cap l^1(\mathbb{N})$. Suppose additionally that:
$$\delta_L := \max_{j \le L} \frac{\mathbb{E}\Big[\big|\langle x, e_j\rangle\, X_{ij}^{m_{ij}}\big|^3\Big]}{\mathbb{E}\Big[\big|\langle x, e_j\rangle\, X_{ij}^{m_{ij}}\big|^2\Big]} \xrightarrow[L \to +\infty]{} 0.$$
Then $M(X^{(\#)})(x)$ is a vector whose components are asymptotically Gaussian, a property that we summarise in the form:
$$\sum_{j\ge1} \langle x, e_j\rangle\, X_{ij}^{m_{ij}} \overset{\mathrm{as}}{\sim} \mathcal{N}(D, C^2) = \mathcal{N}\Big(\sum_{j\ge1} \langle x, e_j\rangle\, m_{ij},\ \sum_{j\ge1} \langle x, e_j\rangle^2\, \mathbb{V}\big[X_{ij}^{m_{ij}}\big]\Big),$$
for each component of $M(X^{(\#)})(x)$.
Proof. 
The proof is an application of Lyapunov's central limit theorem for independent but not identically distributed random variables (see [53], p. 362). We consider the operator $M(X^{(\#)}): l^2(\mathbb{N}) \to l^2(\mathbb{N})$ and, for notational purposes, $(e_i)_{i\ge1}$ the canonical orthonormal basis of $l^2(\mathbb{N})$ and $(e_i^{*})_{i\ge1}$ its dual basis. With the notation $M(\omega) := M(X^{(\#)}(\omega))$ we have that $M(\omega)(x) = \sum_{i\ge1} \langle M(\omega)(x), e_i\rangle\, e_i$, and if we take a nonrandom vector $x = \sum_{i\ge1} \langle x, e_i\rangle\, e_i$ we have that $M(\omega)(x) = \sum_{i\ge1} \langle x, e_i\rangle\, M(\omega)(e_i)$, an expression that may be developed into:
$$M(\omega)(x) = \sum_{i\ge1} \Big(\sum_{j\ge1} \langle x, e_j\rangle\, \big\langle M(\omega)(e_j), e_i\big\rangle\Big)\, e_i = \sum_{i\ge1} \Big(\sum_{j\ge1} \langle x, e_j\rangle\, X_{ij}^{m_{ij}}\Big)\, e_i,$$
using the fact that $M(\omega) = \big[X_{ij}^{m_{ij}}\big]_{i,j}$. We observe that, using previous notations, we have that:
$$\mathbb{E}\big[\langle x, e_j\rangle\, X_{ij}^{m_{ij}}\big] = \langle x, e_j\rangle\, m_{ij} \quad\text{and}\quad \mathbb{V}\big[\langle x, e_j\rangle\, X_{ij}^{m_{ij}}\big] = \langle x, e_j\rangle^2\, \mathbb{V}\big[X_{ij}^{m_{ij}}\big].$$
Now, due to Lyapunov's central limit theorem, the assumption made in Formula (25), and the Berry–Esseen estimate for the rate of convergence, we may write, for a variable $A = A(L) = O(L)$:
$$\mathbb{P}\Bigg[\frac{\sum_{j\le L}\Big(\langle x, e_j\rangle\, X_{ij}^{m_{ij}} - \mathbb{E}\big[\langle x, e_j\rangle\, X_{ij}^{m_{ij}}\big]\Big)}{\sqrt{\sum_{j\le L} \langle x, e_j\rangle^2\, \mathbb{V}\big[X_{ij}^{m_{ij}}\big]}} \le x\Bigg] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-\frac{t^2}{2}}\, dt + A\,\delta_L.$$
The above expression may be written as:
$$\mathbb{P}\Bigg[\sum_{j\le L} \langle x, e_j\rangle\, X_{ij}^{m_{ij}} \le x\sqrt{\sum_{j\le L} \langle x, e_j\rangle^2\, \mathbb{V}\big[X_{ij}^{m_{ij}}\big]} + \sum_{j\le L} \langle x, e_j\rangle\, m_{ij}\Bigg] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-\frac{t^2}{2}}\, dt + A\,\delta_L.$$
Since $x \in l^2(\mathbb{N})$ and the variances $\mathbb{V}\big[X_{ij}^{m_{ij}}\big]$ of the entries of the matrix $M(\omega)$ are bounded, we have that:
$$\sum_{j\ge1} \langle x, e_j\rangle^2\, \mathbb{V}\big[X_{ij}^{m_{ij}}\big] = C^2 < +\infty.$$
Since $x \in l^1(\mathbb{N}) \cap l^2(\mathbb{N})$ and $m_{ij} \in \mathbb{Z}_p$, the series $\sum_{j\ge1} \langle x, e_j\rangle\, m_{ij}$ converges absolutely. As a consequence, let:
$$\sum_{j\ge1} \langle x, e_j\rangle\, m_{ij} = D \in \mathbb{R}.$$
Consider the partial sums $D_L := \sum_{j\le L} \langle x, e_j\rangle\, m_{ij}$ and $C_L := \big(\sum_{j\le L} \langle x, e_j\rangle^2\, \mathbb{V}[X_{ij}^{m_{ij}}]\big)^{1/2}$. We may write Formula (26) in the form:
$$\mathbb{P}\Big[\sum_{j\le L} \langle x, e_j\rangle\, X_{ij}^{m_{ij}} \le x\, C_L + D_L\Big] = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{x} e^{-\frac{t^2}{2}}\, dt + A\,\delta_L,$$
which, by a change of variable, amounts to:
$$\mathbb{P}\Big[\sum_{j\le L} \langle x, e_j\rangle\, X_{ij}^{m_{ij}} \le y\Big] = \frac{1}{\sqrt{2\pi C_L^2}}\int_{-\infty}^{y} e^{-\frac{(u-D_L)^2}{2 C_L^2}}\, du + A\,\delta_L.$$
Since we have that:
$$\frac{1}{\sqrt{2\pi C^2}}\int_{-\infty}^{y} e^{-\frac{(u-D)^2}{2C^2}}\, du = \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\frac{y-D}{C}} e^{-\frac{t^2}{2}}\, dt = \lim_{L\to+\infty} \frac{1}{\sqrt{2\pi}}\int_{-\infty}^{\frac{y-D_L}{C_L}} e^{-\frac{t^2}{2}}\, dt = \lim_{L\to+\infty} \frac{1}{\sqrt{2\pi C_L^2}}\int_{-\infty}^{y} e^{-\frac{(u-D_L)^2}{2 C_L^2}}\, du,$$
and from Formula (27), we have immediately:
$$\lim_{L\to+\infty} \mathbb{P}\Big[\sum_{j\le L} \langle x, e_j\rangle\, X_{ij}^{m_{ij}} \le y\Big] = \frac{1}{\sqrt{2\pi C^2}}\int_{-\infty}^{y} e^{-\frac{(u-D)^2}{2C^2}}\, du.$$
We may conclude that, on account of the independence of the entries of the random matrix, $M(X^{(\#)})(x)$, for all nonrandom $x$, is a random vector whose components $\sum_{j\ge1} \langle x, e_j\rangle\, X_{ij}^{m_{ij}}$ are asymptotically Gaussian. □
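The limit can be explored by simulation. The following sketch assumes illustrative $\mathcal{N}(m_{ij}, 1)$ entry laws and a random stand-in skeleton row (not one produced by the substitution procedure), and compares the empirical mean and variance of one component of $M(X^{(\#)})(x)$ with the limits $D$ and $C^2$.

```python
import numpy as np

rng = np.random.default_rng(3)

L = 2000
m_row = rng.integers(0, 3, size=L)      # stand-in skeleton row, entries in Z_3
x = 1.0 / np.arange(1, L + 1) ** 2      # a vector in l^1(N) and l^2(N)

D = np.sum(x * m_row)                   # limit mean of the component
C2 = np.sum(x ** 2)                     # limit variance, here with V[X_ij] = 1

samples = np.array(
    [np.sum(x * rng.normal(m_row, 1.0)) for _ in range(5000)])
print(samples.mean() - D, samples.var() - C2)   # both close to 0
```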
Remark 16.
The spectral analysis discussed in Remark 15 ensures that a spectral decomposition of the random structured matrix operator $M(X^{(\#)}(\omega))$ exists for almost every $\omega$, and so not only the eigenvalues but also the eigenvectors are random variables. Theorem 6 shows that if there exists an almost surely constant eigenvector of the operator $M(X^{(\#)}(\omega))$, then the corresponding eigenvalue is Gaussian.
Whenever the distributions for the three symbols are identical, the effect of having a structured matrix naturally disappears, as a consequence of Theorem 1. With different distributions, the effects of having structured matrices appear.
For an illustrative example, in Figure 3 we have chosen:
$$X^0 \sim \mathcal{N}(0, \sigma^2), \quad X^1 \sim \mathcal{N}(1, \sigma^2), \quad X^2 \sim \mathcal{N}(2, \sigma^2),$$
and we took successively larger values for the variance.
Remark 17
(Identifying a random structured model by spectral analysis). There are two conclusions that we may draw from a first analysis of Figure 3. The first is that, as expected, for smaller variances there is a similarity between the distribution of the eigenvalues in the plane of the structure matrix (the skeleton of the random matrix, with entries considered in the complex field) and that of the associated random matrix. A second observation, stressing well known facts, is that for sufficiently large variance the distribution of the eigenvalues of the random matrix is similar to the distribution of eigenvalues of a random matrix with independent and identically distributed entries, as in Theorem 1.
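A sketch of the experiment behind Figure 3, under stated assumptions: the skeleton below is a random stand-in in $\mathbb{Z}_3$ rather than a substitution fixed point such as $M_7$, and each symbol $k$ is replaced by an independent $\mathcal{N}(k, \sigma^2)$ draw before the spectrum is computed for increasing $\sigma$.

```python
import numpy as np

rng = np.random.default_rng(2)

n = 200
skeleton = rng.integers(0, 3, size=(n, n))   # stand-in for a structured skeleton

def randomize(m, sigma):
    """Replace each symbol k of the skeleton by an N(k, sigma^2) draw."""
    return rng.normal(m, sigma)

print(np.abs(np.linalg.eigvals(skeleton.astype(float))).max())
for sigma in (0.1, 1.0, 10.0):
    ev = np.linalg.eigvals(randomize(skeleton, sigma))
    # For small sigma the eigenvalue cloud stays close to the skeleton's;
    # for large sigma it approaches the i.i.d. picture of Theorem 1.
    print(sigma, np.abs(ev).max())
```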

4.4. Modelling: Random Surfaces Associated to Random Matrices

In this Section we show that to each structured infinite matrix, under some hypotheses, we can associate in a canonical way a random field, for instance, defining a random surface over the unit square in the plane. The procedure is akin to the ones used to define the multiplicative chaos of Mandelbrot, Kahane, and Peyrière (see [54]), with the difference that we use products of real valued random variables instead of non-negative ones.
Prior to that, we first make a technical observation. The general theory of infinite products of random variables of arbitrary sign is quite elaborate when compared with the theory of infinite sums of random variables (see, for instance, [55,56,57]). Nevertheless, in the case where the sequence of products is a (sub- or super-) martingale, convergence results are immediately available. Consider an infinite matrix $M$ which is a fixed point of some matrix substitution map. This assumption is motivated by the idea that an observed matrix structure must have some permanence in time in order to be observed. We will define an infinite random structured matrix with given skeleton $M$ as a matrix $[X_{i,j}]_{i,j\ge1}$ having as entries independent random variables such that $\mathbb{E}\big[X_{i,j}^{m_{i,j}}\big] = m_{i,j}$.
We now associate to the columns of the random matrix $[X_{i,j}]_{i,j\ge1}$ the following sequence of random variables $(L_j)_{j\ge1}$:
$$L_j = L_j(\alpha, \gamma) := \gamma\, \frac{1}{x_j^{\alpha}} \sum_{i=1}^{+\infty} \frac{X_{i,j}^{m_{i,j}}}{p^i}, \quad\text{with}\quad x_j := \sum_{i=1}^{+\infty} \frac{m_{i,j}}{p^i},$$
with $\alpha \ge 1$ and $0 < \gamma \le 1$. We will also suppose that there are no columns with only zeros in any of the substitution matrices, which implies that there exists $\epsilon > 0$ such that $x_j > \epsilon$. The parameters $\alpha$ and $\gamma$ will be chosen ahead to satisfy certain conditions. In order to define the random surface, we take a partition of $]0,1[^2$ by a sequence of dyadic cells. A representation of a decreasing sequence of dyadic cells in $]0,1[^2$ is given in Figure 4.
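A sketch of the computation of $x_j$ and of a truncated realisation of $L_j(\alpha, \gamma)$; the column and the $\mathcal{N}(m_{i,j}, \sigma^2)$ entry law below are illustrative assumptions (in particular, the variance profile required by Theorem 7 below is not enforced).

```python
import numpy as np

rng = np.random.default_rng(5)

p, n_rows = 3, 40
col = rng.integers(0, p, size=n_rows)        # stand-in column (m_{i,j})_i in Z_p
w = (1.0 / p) ** np.arange(1, n_rows + 1)    # weights 1/p^i

x_j = col @ w                                # x_j = sum_i m_{i,j} / p^i

def L_j(alpha, gamma, sigma=0.1):
    """A truncated realisation of L_j(alpha, gamma), with independent
    N(m_{i,j}, sigma^2) entries so that E[X_{i,j}] = m_{i,j}."""
    X = rng.normal(col, sigma)
    return gamma * x_j ** -alpha * (X @ w)

# E[L_j] = gamma * x_j^(1 - alpha); a realisation fluctuates around it.
print(x_j, L_j(alpha=1.0, gamma=1.0))
```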
In order to link the column random variables of the sequence $(L_j)_{j\ge1}$ to the dyadic cells, we consider each decreasing sequence of dyadic cells such as:
$$C = \big(c(i_1,\, i_1 i_2,\, i_1 i_2 i_3,\, \ldots,\, i_1 i_2 i_3 \cdots i_N)\big)_{N\ge1},$$
which is uniquely identified by the indexes $i_1, i_1 i_2, i_1 i_2 i_3, \ldots, i_1 i_2 i_3 \cdots i_N, \ldots$, with $i_1, i_2, i_3, \ldots, i_N, \ldots \in \{1, 2, 3, 4\}$. We have the following algorithm to rename the column random variables of the sequence $(L_j)_{j\ge1} = (L_j(\alpha, \gamma))_{j\ge1}$:
$$\begin{array}{cccc}
W_1 = L_1 & W_2 = L_2 & W_3 = L_3 & W_4 = L_4\\
W_{1,1} = L_5 & W_{1,2} = L_6 & W_{1,3} = L_7 & W_{1,4} = L_8\\
W_{2,1} = L_9 & W_{2,2} = L_{10} & W_{2,3} = L_{11} & W_{2,4} = L_{12}\\
W_{3,1} = L_{13} & W_{3,2} = L_{14} & W_{3,3} = L_{15} & W_{3,4} = L_{16}\\
W_{4,1} = L_{17} & W_{4,2} = L_{18} & W_{4,3} = L_{19} & W_{4,4} = L_{20}
\end{array}$$
The linking algorithm of the column random variables to the dyadic cells of $[0,1]^2$, in its first and second steps, is as indicated in Figure 5.
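Under the enumeration used in the table above (the cells of each level listed in the lexicographic order of their addresses, which is consistent with the first twenty assignments), the renaming admits a closed form: level $k$ starts right after the $(4^k - 4)/3$ columns consumed by the coarser levels. A sketch:

```python
def column_index(cell):
    """Map a dyadic-cell address (i1, ..., ik), each index in {1, 2, 3, 4},
    to the index j of the column random variable L_j, so that
    W_{i1,...,ik} = L_{column_index((i1,...,ik))}."""
    k = len(cell)
    start = (4 ** k - 4) // 3      # columns used by the k-1 coarser levels
    offset = 0
    for i in cell:                 # base-4 expansion of the cell address
        offset = 4 * offset + (i - 1)
    return start + offset + 1

assert [column_index((i,)) for i in range(1, 5)] == [1, 2, 3, 4]
assert column_index((1, 1)) == 5 and column_index((4, 4)) == 20
```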
We now detail the sequence of random variables that give the height of the random surface. For that purpose, we define a sequence of random variables $(M_N)_{N\ge1}$, uniquely associated with a decreasing sequence of dyadic cells, in the following way:
$$M_N = M_N\big(c(i_1, i_1 i_2, \ldots, i_1 i_2 \cdots i_N)\big) := W_{i_1}\cdot W_{i_1 i_2}\cdot W_{i_1 i_2 i_3}\cdots W_{i_1 i_2 i_3 \cdots i_N} = \prod_{k=1}^{N} W_{i_1 i_2 i_3 \cdots i_k},$$
observing that $M_N = M_N(c(i_1, i_1 i_2, \ldots, i_1 i_2 \cdots i_N))$, with $c(i_1, i_1 i_2, \ldots, i_1 i_2 \cdots i_N)$ the finite sequence of dyadic cells that goes until the $N$th step. We further observe that for every $(s,t) \in\, ]0,1[\,\times\,]0,1[$ there exists a unique sequence $C(s,t) = \big(c(i_1, i_1 i_2, \ldots, i_1 i_2 \cdots i_N)\big)_{N\ge1}$ of decreasing dyadic cells such that:
$$\{(s,t)\} = \bigcap_{N\ge1} c(i_1,\, i_1 i_2,\, i_1 i_2 i_3,\, \ldots,\, i_1 i_2 i_3 \cdots i_N).$$
This decreasing sequence of dyadic cells of a given point allows, with an additional hypothesis, the definition of the random surface via the sequence $(M_N)_{N\ge1} = \big(M_N(C(s,t))\big)_{N\ge1}$.
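A sketch of the map from a point of $]0,1[^2$ to the address of its decreasing sequence of dyadic cells; the numbering of the four quadrants inside a cell is an assumption here (the paper fixes it through Figure 5).

```python
def cell_address(s, t, depth):
    """Address (i1, ..., i_depth) of the dyadic cells containing (s, t),
    assuming quadrants are numbered 1 = lower-left, 2 = lower-right,
    3 = upper-left, 4 = upper-right inside each cell."""
    address = []
    for _ in range(depth):
        s, t = 2 * s, 2 * t
        qs, qt = int(s >= 1), int(t >= 1)   # halves in each coordinate
        address.append(1 + qs + 2 * qt)
        s, t = s - qs, t - qt               # rescale to the chosen quadrant
    return tuple(address)

print(cell_address(0.26, 0.26, 4))   # (1, 4, 1, 1) for this point
```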
Consider the left (negative) tail average of the distribution of $X_{i,j}^{m_{i,j}}$, given by:
$$a_{i,j} := -2\int_{-\infty}^{0} x\, dF_{X_{i,j}^{m_{i,j}}}(x) = \mathbb{E}\big|X_{i,j}^{m_{i,j}}\big| - m_{i,j}.$$
We have the following result.
Theorem 7
(Existence of a nontrivial random field associated with a structured random matrix). Suppose that the following assumptions are verified:
(a) 
The left tail averages verify:
$$\sum_{i=1}^{+\infty} \frac{a_{i,j}}{p^i} \le m < +\infty,$$
for some constant m.
(b) 
The variances of the random variables $X_{i,j}^{m_{i,j}}$ verify $\mathbb{V}\big[X_{i,j}^{m_{i,j}}\big] = x_j^{2\alpha_0}\cdot v_i$, for a certain $\alpha_0 = \alpha_0(m)$ to be determined later, and with $v_i$ such that:
$$1 < V := \sum_{i=1}^{+\infty} \frac{v_i}{p^{2i}} < +\infty.$$
Then there is a combination of the parameters $\alpha, \gamma$ such that, for each $(s,t) \in\, ]0,1[\,\times\,]0,1[$, the sequence $(M_N)_{N\ge1} = \big(M_N(C(s,t))\big)_{N\ge1}$ is a supermartingale that converges almost surely to a random variable $X(s,t)$, defining the random field $(X(s,t))_{(s,t)\in]0,1[^2}$; that is:
$$X(s,t) := \lim_{N\to+\infty} M_N\big(c(i_1, i_1 i_2, \ldots, i_1 i_2 \cdots i_N)\big) \quad a.s.,$$
and $\mathbb{E}\big|X(s,t)\big| < +\infty$. Moreover, $\mathbb{V}\big[X(s,t)\big] \ge 1$; that is, the random variable $X(s,t)$ is not constant.
Proof. 
We first observe that, since $x_j \ge \epsilon$ and since $x_j = \sum_{i=1}^{+\infty} m_{i,j}/p^i \le 1$ (because $m_{i,j} \le p-1$), we have:
$$\mathbb{E}\big|L_j(\alpha,\gamma)\big| \le \frac{\gamma}{x_j^{\alpha}} \sum_{i=1}^{+\infty} \frac{\mathbb{E}\big|X_{i,j}^{m_{i,j}}\big|}{p^i} = \frac{\gamma}{x_j^{\alpha}} \sum_{i=1}^{+\infty} \frac{m_{i,j} + a_{i,j}}{p^i} \le \gamma\, \frac{1+m}{\epsilon^{\alpha}}.$$
We now choose $\alpha = \alpha_0$ such that $(1+m)/\epsilon^{\alpha_0} \le 1$. Due to the independence of the random variables $X_{i,j}^{m_{i,j}}$, we have that:
$$\mathbb{V}[L_j] = \frac{\gamma^2}{x_j^{2\alpha_0}} \sum_{i=1}^{+\infty} \frac{\mathbb{V}\big[X_{i,j}^{m_{i,j}}\big]}{p^{2i}} = \frac{\gamma^2}{x_j^{2\alpha_0}} \sum_{i=1}^{+\infty} \frac{x_j^{2\alpha_0}\, v_i}{p^{2i}} = \gamma^2\, V.$$
We now choose $\gamma = \gamma_0 \le 1$ such that $\gamma_0^2\, V = 1$. The random variables of the sequence $(W_{i_1}, W_{i_1 i_2}, W_{i_1 i_2 i_3}, \ldots, W_{i_1 i_2 i_3 \cdots i_N})_{N\ge1}$ are, in fact, distinct random variables of the sequence $(L_j(\alpha_0, \gamma_0))_{j\ge1}$, and so are independent. It is well known that, since
$$0 \le \mathbb{E}\big[L_j(\alpha_0,\gamma_0)\big] = \big|\mathbb{E}\big[L_j(\alpha_0,\gamma_0)\big]\big| \le \mathbb{E}\big|L_j(\alpha_0,\gamma_0)\big| \le 1,$$
a sequence such as the one defined by Formula (28) is a supermartingale with respect to its natural filtration (see, for instance, ([58], p. 475)). Due to the independence we have that:
$$\mathbb{E}\big|M_N\big| = \mathbb{E}\big|W_{i_1}\big|\cdot \mathbb{E}\big|W_{i_1 i_2}\big|\cdot \mathbb{E}\big|W_{i_1 i_2 i_3}\big|\cdots \mathbb{E}\big|W_{i_1 i_2 i_3 \cdots i_N}\big| = \prod_{k=1}^{N} \mathbb{E}\big|W_{i_1 i_2 i_3 \cdots i_k}\big| \le 1,$$
that is, $\sup_{N\ge1} \mathbb{E}|M_N| \le 1$, and so, due to a well known theorem of Doob (see, for instance, ([58], p. 508)), the first conclusion follows. Using the facts that $\mathbb{V}\big[L_j(\alpha_0,\gamma_0)\big] = 1$ and that the random variables $W_{i_1 i_2 i_3 \cdots i_k}$ are distinct elements of the sequence $(L_j(\alpha_0,\gamma_0))_{j\ge1}$, and observing that for $k \ne l$ we have that
$$\mathbb{V}\big[L_k \cdot L_l\big] = \mathbb{V}[L_k]\cdot\mathbb{V}[L_l] + \mathbb{V}[L_k]\cdot\mathbb{E}[L_l]^2 + \mathbb{V}[L_l]\cdot\mathbb{E}[L_k]^2 = 1 + \mathbb{E}[L_l]^2 + \mathbb{E}[L_k]^2 \ge 1,$$
by induction, we can now state that:
$$\mathbb{V}[M_N] = \mathbb{V}\Big[\prod_{k=1}^{N} W_{i_1 i_2 i_3 \cdots i_k}\Big] \ge 1,$$
and so the second conclusion also follows. □
Let us give an idea of a random field built under the hypotheses of Theorem 7. In Figure 6, we present a low order approximation of the random surface associated with the example introduced by Formula (1) in Section 2.1. The skeleton for this approximation is the matrix $M_7$, a square matrix having around 43 million entries.
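The construction of such a low order approximation can be sketched as follows, with each $W$ drawn directly from an illustrative law with $\mathbb{E}[W] = 1$ and $\mathbb{E}[W^2] = 2$ (a stand-in for the column variables $L_j(\alpha_0, \gamma_0)$; this is not the $M_7$-based computation of Figure 6).

```python
import numpy as np

rng = np.random.default_rng(6)

def surface(depth, draw_w):
    """Heights M_depth over the 2^depth x 2^depth dyadic grid: each cell of
    each level receives its own independent W, and the height over a cell of
    the finest level is the product of the W's along its ancestry."""
    grid = np.ones((1, 1))
    for _ in range(depth):
        n = grid.shape[0]
        fine = np.repeat(np.repeat(grid, 2, axis=0), 2, axis=1)
        fine *= draw_w(size=(2 * n, 2 * n))   # one fresh W per new cell
        grid = fine
    return grid

# Illustrative W's: N(1, 1), so E[W] = 1 and E[W^2] = 2, as in Remark 18.
heights = surface(6, lambda size: rng.normal(1.0, 1.0, size=size))
# Per cell, V[M_6] = 2^6 - 1; the pooled figure below differs, since
# neighbouring heights share ancestors and are therefore correlated.
print(heights.shape, heights.var())
```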
Remark 18
(On the covariance of the random field $(X(s,t))_{(s,t)\in]0,1[^2}$). Due to the general procedure considered in the construction of the random field, it is possible to determine some interesting results on the covariance. In fact, for two distinct points $(s,t), (s',t') \in\, ]0,1[^2$, consider the corresponding martingale sequences with elements $M_N(C(s,t))$ and $M_{N+P}(C(s',t'))$, with $N, P \ge 1$. Let us suppose that the integer $0 \le N_0 < N$ is the largest integer such that the points $(s,t)$ and $(s',t')$ both belong to the same dyadic cell. It is then clear that:
$$\mathrm{Cov}\big[M_N(C(s,t)),\, M_{N+P}(C(s',t'))\big] = \mathbb{E}\big[M_{N_0}(C(s,t))^2\big]\ \mathbb{E}\Big[\prod_{k=N_0+1}^{N} W_{i_1 i_2 i_3 \cdots i_k}(C(s,t))\Big]\ \mathbb{E}\Big[\prod_{k=N_0+1}^{N+P} W_{i_1 i_2 i_3 \cdots i_k}(C(s',t'))\Big] - \mathbb{E}\big[M_N(C(s,t))\big]\, \mathbb{E}\big[M_{N+P}(C(s',t'))\big].$$
If all the random variables of the sequence $(L_j)_{j\ge1}$ have mean equal to 1, and so, forcefully, an absolute moment of second order strictly larger than 1 (for instance, equal to 2), then, again by Lebesgue's convergence theorem, we have that:
$$\mathrm{Cov}\big[X(s,t),\, X(s',t')\big] = 2^{N_0} - 1,$$
where, as already said, $N_0 \ge 0$ is the largest integer such that the points $(s,t)$ and $(s',t')$ both belong to the same dyadic cell. If the points do not belong to any common dyadic cell (see Figure 5), that is, if $N_0 = 0$, the covariance is null. The closer the points are, the larger the integer $N_0$ is, and so the larger the covariance.
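A sketch of the computation of $N_0$ for two given points, and of the covariance $2^{N_0} - 1$ in the illustrative case of this remark:

```python
import math

def common_depth(p1, p2, max_depth=52):
    """N_0: the largest N such that p1 and p2 lie in the same dyadic cell
    of the Nth subdivision (0 if they are separated at the first one)."""
    (s, t), (s2, t2) = p1, p2
    for n in range(1, max_depth + 1):
        scale = 2 ** n
        if (math.floor(s * scale) != math.floor(s2 * scale)
                or math.floor(t * scale) != math.floor(t2 * scale)):
            return n - 1
    return max_depth

n0 = common_depth((0.26, 0.26), (0.27, 0.30))
print(n0, 2 ** n0 - 1)   # here N_0 = 4, so the covariance is 15
```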

5. Conclusions and Future Work

In this work, we introduced structured random matrices having a skeleton built from a matrix substitution process with entries in a finite field. We showed that the iterated application of a particular kind of matrix substitution generates a sequence of matrices that admit a periodic point (that may be a fixed point) or a fixed point for the sequence of matrix principal parts of a given order. The random matrices, with independent entries, having as skeletons matrices derived from this matrix substitution process have remarkable properties whenever the random variables satisfy some uniform properties. It is shown, under adequate hypotheses, that:
  • The existence of a particular type of structure of matrix substitution type is identifiable by simple statistical procedures;
  • A sequence of random matrices, having as skeletons a sequence of matrices of matrix substitution type with entries in a finite field that converges to a fixed point, itself converges in law;
  • There is a generic result on the spectral analysis for the random matrices derived from a matrix substitution procedure;
  • There is a canonical manner to associate a nontrivial random field with interesting properties to a random matrix having as a skeleton a matrix with entries in a finite field of matrix substitution type.
A more detailed analysis of the spectral properties of the random matrices here introduced is, for us, open to future work. Furthermore, matrices with a high percentage of zeros can be generated by considering special global matrix substitution maps; the detailed properties of these matrices will be the object of future work. Finally, a reciprocal problem to the one considered in this work is to determine whether a large matrix is a fixed point of some global matrix substitution map. A reasonable conjecture is that for every large matrix there exists a global matrix substitution map admitting a fixed point that is close, in some sense, to the given matrix.

Author Contributions

Conceptualization, M.L.E.; methodology M.L.E.; software M.L.E.; validation M.L.E. and N.P.K.; formal analysis, M.L.E. and N.P.K.; investigation M.L.E. and N.P.K.; resources M.L.E. and N.P.K.; writing—original draft preparation M.L.E.; writing—review and editing, M.L.E. and N.P.K.; visualization, M.L.E. and N.P.K.; supervision M.L.E.; project administration M.L.E.; funding acquisition M.L.E. All authors have read and agreed to the published version of the manuscript.

Funding

For the first author, this work was partially supported through the project of the Centro de Matemática e Aplicações, UID/MAT/00297/2020, financed by the Fundação para a Ciência e a Tecnologia (Portuguese Foundation for Science and Technology). The APC was supported by Fidelidade-Companhia de Seguros, S.A., to which the authors express their warmest acknowledgment.

Data Availability Statement

Not applicable.

Acknowledgments

This work was published with financial support from the New University of Lisbon. The authors express gratitude for the comments, corrections, and questions of the referees, which led to a revised and better version of this work.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Liu, S.; McGree, J.; Ge, Z.; Xie, Y. Computational and Statistical Methods for Analysing Big Data with Applications; Elsevier: Amsterdam, The Netherlands; Academic Press: Cambridge, MA, USA, 2016; pp. 11 + 194.
  2. Pytheas Fogg, N.; Berthé, V.; Ferenczi, S.; Mauduit, C.; Siegel, A. (Eds.) Substitutions in Dynamics, Arithmetics and Combinatorics; Springer: Berlin/Heidelberg, Germany, 2002; Volume 1794, pp. 15 + 402.
  3. Queffélec, M. Substitution Dynamical Systems. Spectral Analysis, 2nd ed.; Springer: Dordrecht, The Netherlands, 2010; Volume 1294, pp. 15 + 351.
  4. von Haeseler, F. Automatic Sequences; Walter de Gruyter: Berlin/Heidelberg, Germany, 2003; pp. 6 + 191.
  5. Allouche, J.P.; Shallit, J. Automatic Sequences; Theory, Applications, Generalizations; Cambridge University Press: Cambridge, UK, 2003; pp. 16 + 571.
  6. Frank, N.P. Multidimensional constant-length substitution sequences. Topol. Its Appl. 2005, 152, 44–69.
  7. Bartlett, A. Spectral theory of Zd substitutions. Ergod. Theory Dyn. Syst. 2018, 38, 1289–1341.
  8. Jolivet, T.; Kari, J. Consistency of multidimensional combinatorial substitutions. Theor. Comput. Sci. 2012, 454, 178–188.
  9. Fogg, N.P.; Berthé, V.; Ferenczi, S.; Mauduit, C.; Siegel, A. (Eds.) Polynomial dynamical systems associated with substitutions. In Substitutions in Dynamics, Arithmetics and Combinatorics; Springer: Berlin/Heidelberg, Germany, 2002; pp. 321–342.
  10. Ginibre, J. Statistical ensembles of complex, quaternion, and real matrices. J. Math. Phys. 1965, 6, 440–449.
  11. Girko, V. Theory of Random Determinants; Translated from the Russian; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1988; pp. 25 + 677.
  12. Girko, V. Statistical Analysis of Observations of Increasing Dimension; Translated from the Russian; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1995; pp. 21 + 286.
  13. Bai, Z.D. Methodologies in Spectral Analysis of Large Dimensional Random Matrices, a review. Stat. Sin. 1999, 9, 611–662.
  14. Götze, F.; Tikhomirov, A. The circular law for random matrices. Ann. Probab. 2010, 38, 1444–1491.
  15. Alexeev, N.; Götze, F.; Tikhomirov, A. Asymptotic distribution of singular values of powers of random matrices. Lith. Math. J. 2010, 50, 121–132.
  16. Götze, F.; Naumov, A.; Tikhomirov, A. Distribution of linear statistics of singular values of the product of random matrices. Bernoulli 2017, 23, 3067–3113.
  17. Götze, F.; Naumov, A.; Tikhomirov, A.; Timushev, D. On the local semicircular law for Wigner ensembles. Bernoulli 2018, 24, 2358–2400.
  18. Götze, F.; Tikhomirov, A. Rate of convergence in probability to the Marchenko-Pastur law. Bernoulli 2004, 10, 503–548.
  19. Mehta, M.L. Random Matrices, 3rd ed.; Pure and Applied Mathematics (Amsterdam); Elsevier: Amsterdam, The Netherlands; Academic Press: Cambridge, MA, USA, 2004; Volume 142, pp. 18 + 688.
  20. Anderson, G.W.; Guionnet, A.; Zeitouni, O. An Introduction to Random Matrices; Cambridge Studies in Advanced Mathematics; Cambridge University Press: Cambridge, UK, 2010; Volume 118, pp. 14 + 492.
  21. Guionnet, A. Grandes matrices aléatoires et théorèmes d'universalité (d'après Erdos, Schlein, Tao, Vu et Yau). Astérisque 2011, 1, 203–237.
  22. Tao, T. Topics in Random Matrix Theory; Graduate Studies in Mathematics; American Mathematical Society: Providence, RI, USA, 2012; Volume 132, pp. 10 + 282.
  23. Vu, V.H. (Ed.) Modern aspects of random matrix theory. In Proceedings of the Symposia in Applied Mathematics, San Diego, CA, USA, 6–7 January 2013; Papers from the AMS Short Course on Random Matrices; American Mathematical Society: Providence, RI, USA, 2014; Volume 72, pp. 8 + 174.
  24. Akemann, G.; Baik, J.; Di Francesco, P. (Eds.) The Oxford Handbook of Random Matrix Theory; Paperback edition of the 2011 original; Oxford University Press: Oxford, UK, 2015; pp. 31 + 919.
  25. Erdos, L.; Yau, H.T. A Dynamical Approach to Random Matrix Theory; Courant Lecture Notes in Mathematics; Courant Institute of Mathematical Sciences: New York, NY, USA; American Mathematical Society: Providence, RI, USA, 2017; Volume 28, pp. 9 + 226.
  26. Banerjee, D.; Bose, A. Patterned sparse random matrices: A moment approach. Random Matrices Theory Appl. 2017, 6, 1750011.
  27. Bose, A. Patterned Random Matrices; CRC Press: Boca Raton, FL, USA, 2018; pp. 21 + 267.
  28. Livshyts, G.V.; Tikhomirov, K.; Vershynin, R. The smallest singular value of inhomogeneous square random matrices. Ann. Probab. 2021, 49, 1286–1309.
  29. Jain, V.; Silwal, S. A note on the universality of ESDs of inhomogeneous random matrices. ALEA Lat. Am. J. Probab. Math. Stat. 2021, 18, 1047–1059.
  30. Tikhomirov, A.N. On the Wigner law for generalized random graphs. Sib. Adv. Math. 2021, 31, 301–308.
  31. Liu, Y.; Chen, A.; Lin, F. Threshold function of ray nonsingularity for uniformly random ray pattern matrices. Linear Multilinear Algebra 2022, 70, 5708–5715.
  32. Ali, M.S.; Srivastava, S.C.L. Patterned random matrices: Deviations from universality. J. Phys. A 2022, 55, 495201.
  33. Bernkopf, M. A history of infinite matrices. A study of denumerably infinite linear systems as the first step in the history of operators defined on function spaces. Arch. History Exact Sci. 1968, 4, 308–358.
  34. Shivakumar, P.N.; Sivakumar, K.C. A review of infinite matrices and their applications. Linear Algebra Appl. 2009, 430, 976–998.
  35. Williams, J.J.; Ye, Q. Infinite matrices bounded on weighted ℓ1 spaces. Linear Algebra Appl. 2013, 438, 4689–4700.
  36. Lindner, M. Infinite Matrices and Their Finite Sections; Frontiers in Mathematics; An Introduction to the Limit Operator Method; Birkhäuser Verlag: Basel, Switzerland, 2006; pp. 15 + 191.
  37. Warusfel, A. Structures Algébriques Finies. Groupes, Anneaux, Corps; Collection Hachette Université; Librairie Hachette: Paris, France, 1971; p. 271.
  38. Koan, V.K. Distributions, Analyse de Fourier, Opérateurs aux Dérivées Partielles; Number tome 1 in Cours et exercices résolus, maîtrise de mathématiques: Certificat C2; Vuibert: Paris, France, 1972.
  39. Schaefer, H.H. Topological Vector Spaces; Springer: New York, NY, USA, 1971; Volume 3.
  40. Köthe, G. Topological Vector Spaces. I; Die Grundlehren der mathematischen Wissenschaften in Einzeldarstellungen 159; Garling, D.J.H., Translator; Springer: Berlin/Heidelberg, Germany; New York, NY, USA, 1969; pp. 15, 456.
  41. Skorohod, A.V. Random Linear Operators; Mathematics and its Applications (Soviet Series); Translated from the Russian; D. Reidel Publishing Co.: Dordrecht, The Netherlands, 1984; pp. 16 + 199.
  42. Guo, T.X. Extension theorems of continuous random linear operators on random domains. J. Math. Anal. Appl. 1995, 193, 15–27.
  43. Thang, D.H.; Thinh, N. Generalized random linear operators on a Hilbert space. Stochastics 2013, 85, 1040–1059.
  44. Quy, T.X.; Thang, D.H.; Thinh, N. Abstract random linear operators on probabilistic unitary spaces. J. Korean Math. Soc. 2016, 53, 347–362.
  45. Chiu, S.N.; Liu, K.I. Generalized Cramér-von Mises goodness-of-fit tests for multivariate distributions. Comput. Stat. Data Anal. 2009, 53, 3817–3834.
  46. Thas, O. Comparing Distributions; Springer: New York, NY, USA, 2010; pp. 18 + 353.
  47. McAssey, M.P. An empirical goodness-of-fit test for multivariate distributions. J. Appl. Stat. 2013, 40, 1120–1131.
  48. Fan, Y. Goodness-of-Fit Tests for a Multivariate Distribution by the Empirical Characteristic Function. J. Multivar. Anal. 1997, 62, 36–63.
  49. Shiryaev, A.N. Probability. 1, 3rd ed.; Boas, R.P., Chibisov, D.M., Eds.; Graduate Texts in Mathematics; Translated from the fourth (2007) Russian edition; Springer: New York, NY, USA, 2016; Volume 95, pp. 17 + 486.
  50. Kallenberg, O. Foundations of Modern Probability, 3rd ed.; Probability Theory and Stochastic Modelling; Springer: Cham, Switzerland, 2021; Volume 99, pp. 12 + 946.
  51. Gel'fand, I.M.; Vilenkin, N.Y. Generalized Functions. Vol. 4: Applications of Harmonic Analysis; Translated by Amiel Feinstein; Academic Press: New York, NY, USA; London, UK, 1964; pp. 14 + 384.
  52. Gohberg, I.; Goldberg, S. Basic Operator Theory; Birkhäuser: Boston, MA, USA, 1980; pp. 13 + 285.
  53. Billingsley, P. Probability and Measure, 3rd ed.; Wiley Series in Probability and Mathematical Statistics; A Wiley-Interscience Publication; John Wiley & Sons, Inc.: New York, NY, USA, 1995; pp. 14 + 593.
  54. Kahane, J.P.; Peyrière, J. Sur certaines martingales de Benoit Mandelbrot. Adv. Math. 1976, 22, 131–145.
  55. Lévy, P. Esquisse d'une théorie de la multiplication des variables aléatoires. Ann. Sci. École Norm. Sup. 1959, 76, 59–82.
  56. Zolotarev, V.M. General theory of the multiplication of random variables. Dokl. Akad. Nauk SSSR 1962, 142, 788–791.
  57. Simonelli, I. Convergence and symmetry of infinite products of independent random variables. Statist. Probab. Lett. 2001, 55, 45–52; Erratum in Statist. Probab. Lett. 2003, 62, 323.
  58. Shiryaev, A.N. Probability, 2nd ed.; Graduate Texts in Mathematics; Translated from the first (1980) Russian edition by R. P. Boas; Springer: New York, NY, USA, 1996; Volume 95, pp. 16 + 623.
Figure 1. Histogram of absolute values of the eigenvalues of the structured matrices $R_7$ and $M_7$.
Figure 2. Dispersion of real and imaginary parts of eigenvalues of $R_7$ and $M_7$.
Figure 3. Eigenvalues distribution in $\mathbb{C}$ of a sample of 40 matrices with affine substitution induced structure and increasing variance.
Figure 4. A decreasing sequence of dyadic cells.
Figure 5. The placement of the first four random variables: first step (left); the placement of the next 16 random variables: second step (right).
Figure 6. An approximation of low order of the random surface, built upon the skeleton $M_7$: surface plot (left); contour plot (right).