Article

Interval-Valued Random Matrices

by Abdolnasser Sadeghkhani 1,* and Ali Sadeghkhani 2
1 Department of Mathematics and Statistics, North Carolina Agricultural and Technical State University, Greensboro, NC 27411, USA
2 Department of Mathematics and Statistics, University of Windsor, Windsor, ON N9B 3P4, Canada
* Author to whom correspondence should be addressed.
Entropy 2024, 26(11), 899; https://doi.org/10.3390/e26110899
Submission received: 26 September 2024 / Revised: 17 October 2024 / Accepted: 17 October 2024 / Published: 23 October 2024
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract:
This paper introduces a novel approach that combines symbolic data analysis with matrix theory through the concept of interval-valued random matrices. This framework is designed to address the complexities of real-world data, offering enhanced statistical modeling techniques particularly suited for large and complex datasets where traditional methods may be inadequate. We develop both frequentist and Bayesian methods for the statistical inference of interval-valued random matrices, providing a comprehensive analytical framework. We conduct extensive simulations to compare the performance of these methods, demonstrating that Bayesian estimators outperform maximum likelihood estimators under the Frobenius norm loss function. The practical utility of our approach is further illustrated through an application to climatology and temperature data, highlighting the advantages of interval-valued random matrices in real-world scenarios.
MSC:
AMS (2000) subject classification: Primary: 62C10; Secondary: 62F15, 62E15

1. Introduction

Symbolic Data Analysis (SDA), first introduced by [1], focuses on analyzing complex data types that can encompass multiple values, distributions, or more generalized forms rather than traditional single-valued types. Unlike classical random variables, which take on a single, specific value, symbolic random variables can represent a range of values, leading to an internal distribution that captures uncertainty or variability within the data. The foundation of SDA relies on the concept that exploratory analyses and statistical inferences are often more meaningful at the group level rather than examining individual values in isolation [2].
SDA emphasizes the importance of group-level summaries—known as symbols—as the primary units of statistical analysis. These symbols, often represented as intervals, histograms, or other distributional forms, occupy a hypercube in $\mathbb{R}^p$ (in contrast to single-valued data, which are points in $\mathbb{R}^p$) and serve as the foundation for statistical symbolic data [3,4].
Interval-valued data, a specific type of symbolic data, provide a structured way to represent information that naturally falls within ranges rather than precise point values. This approach has gained significant attention recently due to its ability to manage uncertainty and variability more effectively than traditional point-based methods. As discussed by [2,3,5], interval-valued data offer a versatile framework accommodating both univariate and multivariate forms, making them highly suitable for a wide range of real-world applications in various applied sciences.
The attractiveness of interval-valued data lies in their simplicity, flexibility, and broad applicability, driving extensive research on their statistical analysis. Recent studies have investigated both frequentist and Bayesian methods for constructing estimators tailored to interval-valued data (e.g., [5,6]). Moreover, innovative statistical techniques have been developed to explore relationships within interval-valued datasets. For example, ref. [7] introduced a Principal Component Analysis (PCA) method designed to represent intervals, while other methods have refined regression techniques to directly handle intervals. Some researchers have focused on extracting numerical characteristics from intervals, such as the midpoint [8], the midpoint and range [9], or the lower and upper limits [10], then applying these attributes to standard regression models. Alternatively, other researchers have chosen to work with intervals directly, treating them as the core unit of analysis without converting them into numerical summaries (e.g., [11,12]).
Complex data can also be studied in the context of random matrices, which extend the principles of univariate and multivariate analysis to matrix-based data structures. Unlike conventional multivariate methods that treat data as vectors, matrix-variate approaches are particularly well-suited for the analysis of data that naturally take the form of matrices, which are common in multidimensional settings. These methods are effective for the modeling of complex dependencies where multiple variables are inter-related and organized into rows and columns, capturing two-dimensional structures. Applications of matrix-variate random variables are found in various fields, such as econometrics, multivariate analysis, and machine learning, where they offer sophisticated approaches for the handling of relational data (see, e.g., [13], and references therein).
Matrix-variate methodologies offer significant advantages for the analysis of complex data, as they effectively capture correlations both within rows and across columns of a matrix. This ability allows for the simultaneous analysis of multiple variables while accounting for their interdependencies. Methods such as the use of matrix-variate normal distributions and matrix regression models provide robust frameworks for investigating relationships at both the row and column levels. This dual-level analysis is especially valuable in practical applications involving relational data, such as the joint analysis of multiple financial indicators in econometrics or understanding covariance structures among feature matrices in machine learning. For more information, see [14].
With the increasing complexity and dimensionality of modern datasets, both matrix-variate random variables and interval-valued data provide flexible frameworks to tackle these challenges. Interval-valued data handle uncertainty and variability by representing values as intervals rather than fixed points, effectively accommodating the imprecision common in real-world data. In contrast, matrix-variate methods are adept at modeling complex dependencies within matrix-structured data. Combining these two approaches—matrix-variate methods and interval-valued data—offers a powerful strategy for the analysis of large, complex datasets, allowing for a more comprehensive modeling of uncertainties and dependencies. This integration enhances the flexibility and robustness of statistical models, expanding their applicability to more intricate, high-dimensional scenarios.
This paper aims to explore the integration of matrix-variate and interval-valued data approaches through both frequentist and Bayesian methods. By examining how these methodologies can be combined, it seeks to develop a comprehensive strategy that enhances the analysis of complex data structures, accommodating both uncertainty and multidimensional dependencies. The focus is on leveraging the strengths of both methods—frequentist methods for their precision and objectivity and Bayesian methods for their flexibility and incorporation of prior knowledge—to create robust inferential frameworks. This dual approach is designed to be highly applicable across a range of scientific and practical fields, such as econometrics, machine learning, and bioinformatics, where complex and high-dimensional data are commonly encountered. Ultimately, this study aims to provide new insights and tools for researchers and practitioners to better understand, model, and interpret intricate datasets.
The rest of this paper is organized as follows. We begin with some preliminaries and definitions regarding matrix-variate and tensor distributions in Section 2. We continue by defining likelihood functions and Maximum Likelihood (ML) estimators, along with their asymptotic distributions. Section 3 is dedicated to defining priors, posteriors, and Bayes estimators under the Frobenius norm loss function, as well as the corresponding dominance results. In Section 4, we present simulations to demonstrate the superiority of the proposed estimators and to validate the theoretical results. Section 5 involves an analysis of temperature variations across different seasons using a dataset comprising interval-valued temperature matrices. Finally, concluding remarks are provided in Section 6.

2. Likelihood Function of Interval-Valued Matrices and ML Estimators

Before defining the likelihood function for interval-valued random variables, we introduce the tensor Wishart and inverse tensor Wishart distributions, which generalize the classical Wishart distribution. For a more detailed discussion, see [13,15].
Definition 1
(Tensor Wishart Distribution). Let $A$ be a $p \times p \times q$ positive definite tensor, where for each fixed $i \in \{1, \ldots, q\}$, the matrix slice $A^{(i)}$ is positive definite. We say that $A$ follows a tensor Wishart distribution, denoted by $\mathrm{TW}_{p,q}(m, V)$, where $m$ represents the degrees of freedom and $V$ is the scale tensor. The probability density function (PDF) is given by
$$\mathrm{TW}_{p,q}(A \mid m, V) = \frac{\prod_{i=1}^{q} |A^{(i)}|^{\frac{m-p-1}{2}}}{2^{\frac{mpq}{2}} \prod_{i=1}^{q} |V^{(i)}|^{\frac{m}{2}}\, \Gamma_{p,q}\!\left(\frac{m}{2}\right)} \exp\!\left\{-\frac{1}{2} \sum_{i=1}^{q} \operatorname{tr}\!\left[(V^{(i)})^{-1} A^{(i)}\right]\right\}, \qquad (1)$$
where $|A^{(i)}|$ and $\operatorname{tr}(\cdot)$ refer to the determinant and trace of matrix $A^{(i)}$, respectively. The generalized multivariate gamma function for tensors is denoted by $\Gamma_{p,q}(\cdot)$ and is expressed as
$$\Gamma_{p,q}\!\left(\frac{m}{2}\right) = \pi^{\frac{pq(p-1)}{4}} \prod_{i=1}^{p} \prod_{j=1}^{q} \Gamma\!\left(\frac{m+1-i-j}{2}\right).$$
The degrees of freedom must satisfy $m > p-1$ to ensure that each matrix slice $A^{(i)}$ is invertible. The expectation of $A$ is $\mathbb{E}[A] = mV$.
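Since the density in (1) factorizes over the $q$ slices, one convenient way to simulate a tensor Wishart variate is to draw each $p \times p$ slice independently from a classical Wishart distribution with the corresponding scale slice. The following sketch is our own illustration (not code from the paper); it assumes SciPy is available and stores the tensor as a NumPy array of shape (p, p, q):

```python
import numpy as np
from scipy.stats import wishart

def draw_tensor_wishart(m, V, rng=None):
    """Draw one sample from TW_{p,q}(m, V), with V stored as a (p, p, q) array.

    Each slice A^{(k)} is drawn as an independent Wishart(m, V^{(k)}) matrix,
    reflecting the slice-wise factorization of the density in (1).
    """
    p, _, q = V.shape
    A = np.empty_like(V)
    for k in range(q):
        A[:, :, k] = wishart(df=m, scale=V[:, :, k]).rvs(random_state=rng)
    return A

# Example with p = q = 3 and slices 1.5*I, 2*I, 2.5*I (the scale tensor used in Section 4):
V = np.stack([1.5 * np.eye(3), 2.0 * np.eye(3), 2.5 * np.eye(3)], axis=2)
A = draw_tensor_wishart(m=10, V=V)  # E[A] = m * V, slice by slice
```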
Definition 2
(Inverse Tensor Wishart Distribution). Let $B = A^{-1}$, where the inverse is defined slice-wise, i.e., for each $i$, $B^{(i)} = (A^{(i)})^{-1}$. Then, $B$ follows an inverse tensor Wishart distribution, denoted $\mathrm{ITW}_{p,q}(m, U)$, where $U = V^{-1}$ is the scale tensor. Its PDF is given by
$$\mathrm{ITW}_{p,q}(B \mid m, U) = \frac{\prod_{i=1}^{q} |U^{(i)}|^{\frac{m}{2}}}{\prod_{i=1}^{q} |B^{(i)}|^{\frac{m+p+1}{2}}\, 2^{\frac{mpq}{2}}\, \Gamma_{p,q}\!\left(\frac{m}{2}\right)} \exp\!\left\{-\frac{1}{2} \sum_{i=1}^{q} \operatorname{tr}\!\left[U^{(i)} (B^{(i)})^{-1}\right]\right\},$$
where $|B^{(i)}|$ and $\operatorname{tr}(\cdot)$ refer to the determinant and trace of matrix $B^{(i)}$, respectively. The expectation of $B$ is $\mathbb{E}[B] = \frac{U}{m-p-1}$, which holds for $m > p+1$.
In Definition 2, if $U$ is a single matrix (i.e., $q = 1$), then it corresponds to $\mathrm{IW}_p(m, U)$, and its PDF is given as
$$\mathrm{IW}_{p}(B \mid m, U) = \frac{|U|^{\frac{m}{2}}\, |B|^{-\frac{m+p+1}{2}}}{2^{\frac{mp}{2}}\, \Gamma_{p}\!\left(\frac{m}{2}\right)} \exp\!\left\{-\frac{1}{2}\operatorname{tr}\!\left(U B^{-1}\right)\right\}, \qquad (2)$$
with $\mathbb{E}[B] = U/(m-p-1)$ for $m > p+1$.
To establish the structure of our model, we now consider the following interval-valued random matrix:
$$X = \begin{pmatrix} (a_{11}, b_{11}) & (a_{12}, b_{12}) & \cdots & (a_{1q}, b_{1q}) \\ (a_{21}, b_{21}) & (a_{22}, b_{22}) & \cdots & (a_{2q}, b_{2q}) \\ \vdots & \vdots & \ddots & \vdots \\ (a_{p1}, b_{p1}) & (a_{p2}, b_{p2}) & \cdots & (a_{pq}, b_{pq}) \end{pmatrix}, \qquad (3)$$
where each element forms an interval $(a_{jk}, b_{jk})$ with $a_{jk} < b_{jk}$ for $j = 1, \ldots, p$ and $k = 1, \ldots, q$. This setup generalizes the concept of an interval-valued random vector of size $p \times 1$, which represents a hyper-rectangle (or box) in $\mathbb{R}^p$. When $X$ is a matrix of size $p \times q$, where each element is an interval, the geometric interpretation becomes more complex. In this case, each row of the matrix can be viewed as a $1 \times q$ vector whose elements are intervals. Consequently, each row represents a hyper-rectangle in $\mathbb{R}^q$, and the entire matrix $X$ can be interpreted as a collection or product of $p$ such hyper-rectangles.
Overall, matrix $X$ represents a generalized hyper-rectangle (or hyper-parallelepiped) in $\mathbb{R}^{pq}$, where each interval contributes to the overall dimensionality of the object. This matrix structure introduces additional complexity by capturing variability across both rows and columns, resulting in a richer and more intricate geometric representation compared to the vector case.
Given that matrix $X$ in (3) contains aggregated observed values over uniformly distributed intervals $(a_{jk}, b_{jk})$, with $a_{jk} < b_{jk}$ for $j = 1, \ldots, p$ and $k = 1, \ldots, q$, we extend the results reported by [5,16,17] regarding the existence of a one-to-one correspondence between $X$ and $\Theta = (\Theta_1, \Theta_2)$, where $\Theta_1$ and $\Theta_2$ represent the mean matrix and the variance–covariance tensor of the internal distribution, respectively, and are expressed as
$$\Theta_1 = \begin{pmatrix} \frac{a_{11}+b_{11}}{2} & \frac{a_{12}+b_{12}}{2} & \cdots & \frac{a_{1q}+b_{1q}}{2} \\ \frac{a_{21}+b_{21}}{2} & \frac{a_{22}+b_{22}}{2} & \cdots & \frac{a_{2q}+b_{2q}}{2} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{a_{p1}+b_{p1}}{2} & \frac{a_{p2}+b_{p2}}{2} & \cdots & \frac{a_{pq}+b_{pq}}{2} \end{pmatrix}, \qquad (4)$$
$$\Theta_2 = \left( T_{ijkl} \right), \qquad T_{ijkl} = \begin{cases} \frac{1}{12}\,(b_{ij} - a_{ij})^2, & \text{if } i = k \text{ and } j = l, \\[2pt] \frac{1}{12}\,(b_{ij} - a_{ij})(b_{kl} - a_{kl}), & \text{if } i \neq k \text{ or } j \neq l, \end{cases} \qquad (5)$$
where $a_{ij} < b_{ij}$ and $a_{kl} < b_{kl}$ for all $i, k = 1, \ldots, p$ and $j, l = 1, \ldots, q$.
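To make the one-to-one correspondence concrete, the short sketch below (our own illustration; the function name and array layout are hypothetical) maps the lower- and upper-bound arrays of an interval-valued matrix to the midpoint matrix $\Theta_1$ of (4) and the array of entries $T_{ijkl}$ of (5):

```python
import numpy as np

def interval_to_theta(a, b):
    """Map a p x q interval-valued matrix X = ((a_jk, b_jk)) to (Theta_1, Theta_2).

    Theta_1 is the p x q matrix of interval midpoints (Equation (4)); Theta_2
    collects the entries T_{ijkl} of Equation (5) in a p x q x p x q array.
    """
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    theta1 = (a + b) / 2.0
    r = (b - a) / np.sqrt(12.0)                 # per-cell factor (b_ij - a_ij) / sqrt(12)
    theta2 = np.einsum("ij,kl->ijkl", r, r)     # (1/12)(b_ij - a_ij)(b_kl - a_kl)
    return theta1, theta2

# Toy 2 x 2 example: X = ((0,1), (1,2); (2,4), (3,5))
theta1, theta2 = interval_to_theta([[0.0, 1.0], [2.0, 3.0]],
                                   [[1.0, 2.0], [4.0, 5.0]])
```

Note that the diagonal case $i = k$, $j = l$ of (5) is recovered automatically, since the outer product then reduces to $(b_{ij} - a_{ij})^2 / 12$.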
Let $X_i$ for $i = 1, \ldots, n$ be a random matrix with the PDF denoted by $f_{X_i}(x_i; \Theta)$. The associated parameters, $\Theta_{i1}$ and $\Theta_{i2}$, as defined in Equations (4) and (5), vary and take different values across observations. The PDFs of these parameters are given by
$$\Theta_{i1} \sim \mathrm{MN}_{p \times q}(\mu, \Sigma_R, \Sigma_C), \qquad (6)$$
$$\Theta_{i2} \sim \mathrm{TW}_{p,q}(m, V), \qquad (7)$$
where $\mathrm{MN}_{p \times q}$ represents the matrix normal distribution with a mean matrix $\mu$ of size $p \times q$, a row covariance matrix $\Sigma_R$ of size $p \times p$, and a column covariance matrix $\Sigma_C$ of size $q \times q$, with the following PDF:
$$\mathrm{MN}_{p \times q}(\Theta_1 \mid \mu, \Sigma_R, \Sigma_C) = \frac{1}{(2\pi)^{\frac{pq}{2}} \det(\Sigma_R)^{\frac{q}{2}} \det(\Sigma_C)^{\frac{p}{2}}} \operatorname{etr}\!\left\{ -\frac{1}{2}\, \Sigma_C^{-1} (\Theta_1 - \mu)^{\top} \Sigma_R^{-1} (\Theta_1 - \mu) \right\}, \qquad (8)$$
and $\mathrm{TW}_{p,q}$ represents the tensor Wishart distribution with degrees of freedom $m$ and scale tensor $V$ as in (1).

2.1. ML Estimators and Related Properties

For a given tensor $V$, which can be decomposed into a product of matrices $\{V^{(k)}\}_{k=1}^{q}$, we introduce the determinantal product, which is defined as the product of the determinants of these matrices as follows:
$$\mathcal{D}(V) = \prod_{k=1}^{q} |V^{(k)}|, \qquad (9)$$
where $|V^{(k)}|$ denotes the determinant of matrix $V^{(k)}$. Assuming that $\{\Theta_{i1}\}_{i=1}^{n}$ and $\{\Theta_{i2}\}_{i=1}^{n}$ are independent, the likelihood is given by $L = L_1 \times L_2$, where
$$L_1 = L_1(\mu, \Sigma_R, \Sigma_C \mid \Theta_{11}, \ldots, \Theta_{n1}) = |\Sigma_R|^{-\frac{nq}{2}} |\Sigma_C|^{-\frac{np}{2}} \exp\!\left\{ -\frac{1}{2} \sum_{i=1}^{n} \operatorname{tr}\!\left[ \Sigma_R^{-1} (\Theta_{i1} - \mu)\, \Sigma_C^{-1} (\Theta_{i1} - \mu)^{\top} \right] \right\} = |\Sigma_R|^{-\frac{nq}{2}} |\Sigma_C|^{-\frac{np}{2}} \exp\!\left\{ -\frac{1}{2} \operatorname{tr}\!\left( S_R \Sigma_R^{-1} \right) - \frac{1}{2} \operatorname{tr}\!\left( S_C \Sigma_C^{-1} \right) \right\}, \qquad (10)$$
$$L_2 = L_2(V \mid \Theta_{12}, \ldots, \Theta_{n2}) = \mathcal{D}(V)^{-\frac{nm}{2}} \exp\!\left\{ -\frac{1}{2} \sum_{i=1}^{n} \sum_{k=1}^{q} \operatorname{tr}\!\left[ \Theta_{i2}^{(k)} (V^{(k)})^{-1} \right] \right\}, \qquad (11)$$
where
$$S_R = \sum_{i=1}^{n} (\Theta_{i1} - \bar{\Theta}_1)(\Theta_{i1} - \bar{\Theta}_1)^{\top}, \qquad S_C = \sum_{i=1}^{n} (\Theta_{i1} - \bar{\Theta}_1)^{\top}(\Theta_{i1} - \bar{\Theta}_1), \qquad \bar{\Theta}_1 = \frac{1}{n} \sum_{i=1}^{n} \Theta_{i1}.$$
The ML estimators of the unknown mean matrix ( μ ), the covariance matrices ( Σ R and Σ C ), and the tensor ( V ) are presented in the following theorem.
Theorem 1.
The maximum likelihood (ML) estimators for the matrix parameters $\mu$, $\Sigma_R$, and $\Sigma_C$ and the tensor parameter $V$ are given by
$$\hat{\mu}_{ML} = \bar{\Theta}_1, \qquad (12)$$
$$\hat{\Sigma}_{R}^{ML} = \frac{S_R}{nq}, \qquad (13)$$
$$\hat{\Sigma}_{C}^{ML} = \frac{S_C}{np}, \qquad (14)$$
$$\hat{V}_{ML} = \left( \frac{\sum_{i=1}^{n} \Theta_{i2}^{(1)}}{nm}, \frac{\sum_{i=1}^{n} \Theta_{i2}^{(2)}}{nm}, \ldots, \frac{\sum_{i=1}^{n} \Theta_{i2}^{(q)}}{nm} \right). \qquad (15)$$
Proof. 
To find the ML estimator for $\mu$, we differentiate the log-likelihood function $\log L_1$ with respect to $\mu$ and set it to zero as follows:
$$\frac{\partial \log L_1}{\partial \mu} = \sum_{i=1}^{n} \Sigma_R^{-1} (\Theta_{i1} - \mu)\, \Sigma_C^{-1} = 0.$$
Solving for $\mu$, we obtain (12). For $\Sigma_R$ and $\Sigma_C$, we differentiate $\log L_1$ with respect to $\Sigma_R$ and $\Sigma_C$ and solve the resulting equations, yielding (13) and (14), respectively.
For the tensor parameter $V$, we differentiate the log-likelihood function $\log L_2$ with respect to each slice $V^{(k)}$ of $V$, set it to zero, and solve
$$\frac{\partial \log L_2}{\partial V^{(k)}} = -\frac{nm}{2} (V^{(k)})^{-1} + \frac{1}{2} (V^{(k)})^{-1} \left( \sum_{i=1}^{n} \Theta_{i2}^{(k)} \right) (V^{(k)})^{-1} = 0, \qquad \text{for } k = 1, \ldots, q.$$
Multiplying by $V^{(k)}$, rearranging terms, and solving for $V^{(k)}$, we obtain
$$\hat{V}^{(k),ML} = \frac{\sum_{i=1}^{n} \Theta_{i2}^{(k)}}{nm}, \qquad \text{for } k = 1, \ldots, q.$$
Thus, the MLE for the tensor parameter $V$ is given by
$$\hat{V}_{ML} = \left( \frac{\sum_{i=1}^{n} \Theta_{i2}^{(1)}}{nm}, \frac{\sum_{i=1}^{n} \Theta_{i2}^{(2)}}{nm}, \ldots, \frac{\sum_{i=1}^{n} \Theta_{i2}^{(q)}}{nm} \right). \; \square$$
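For readers who wish to reproduce these estimators numerically, a direct NumPy translation of (12)–(15) might look as follows. This is our own sketch under assumed array layouts: Theta1 stacks the $n$ mean matrices as an $(n, p, q)$ array and Theta2 stacks the $n$ covariance tensors as an $(n, p, p, q)$ array whose last axis indexes the slices $\Theta_{i2}^{(k)}$.

```python
import numpy as np

def ml_estimators(Theta1, Theta2, m):
    """ML estimators of (mu, Sigma_R, Sigma_C, V) from Theorem 1 (sketch).

    Theta1 : array (n, p, q)    -- mean matrices Theta_{i1}
    Theta2 : array (n, p, p, q) -- tensors Theta_{i2} with slices Theta_{i2}^{(k)}
    """
    n, p, q = Theta1.shape
    mu_hat = Theta1.mean(axis=0)                          # (12): sample mean matrix
    D = Theta1 - mu_hat                                   # deviations from the mean
    S_R = np.einsum("nij,nkj->ik", D, D)                  # sum_i D_i D_i'   (p x p)
    S_C = np.einsum("nji,njk->ik", D, D)                  # sum_i D_i' D_i   (q x q)
    Sigma_R_hat = S_R / (n * q)                           # (13)
    Sigma_C_hat = S_C / (n * p)                           # (14)
    V_hat = Theta2.sum(axis=0) / (n * m)                  # (15), slice by slice
    return mu_hat, Sigma_R_hat, Sigma_C_hat, V_hat
```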

2.2. Asymptotic Properties of ML Estimators

Theorem 2.
Consider $\mathrm{Sym}(p, q)$, the set of $p \times p$ and $q \times q$ real symmetric matrices, and let $\mathcal{P}(p, q) \subset \mathrm{Sym}(p, q)$ represent the subset consisting of symmetric positive definite matrices, forming convex regular cones. Setting $\omega = (\mu, \Sigma_R, \Sigma_C) \in \Omega = \mathbb{R}^{p \times q} \times \mathcal{P}(p) \times \mathcal{P}(q)$, we have
$$\sqrt{n}\,(\hat{\omega} - \omega) \xrightarrow{\;d\;} \mathcal{N}_m\!\left(0_m,\, I^{-1}(\omega)\right),$$
where $I(\omega)$ is the Fisher information matrix with elements given by
$$I_{ij}(\omega) = \left(\frac{\partial \mu}{\partial \omega_i}\right)^{\!\top} \Sigma_R^{-1}\, \frac{\partial \mu}{\partial \omega_j} + \frac{1}{2}\operatorname{tr}\!\left( \Sigma_R^{-1} \frac{\partial \Sigma_R}{\partial \omega_i} \Sigma_R^{-1} \frac{\partial \Sigma_R}{\partial \omega_j} \right) + \frac{1}{2}\operatorname{tr}\!\left( \Sigma_C^{-1} \frac{\partial \Sigma_C}{\partial \omega_i} \Sigma_C^{-1} \frac{\partial \Sigma_C}{\partial \omega_j} \right),$$
and $m = \dim(\Omega) = pq + \frac{p(p+1)}{2} + \frac{q(q+1)}{2}$.
Proof. 
The Fisher information matrix for matrix-variate distributions (see, e.g., [13]) is more precisely characterized by the covariance tensor of the score (gradient of the log-likelihood function). Specifically, if we let $\mathrm{MN}_{p,q}(\Theta_1 \mid \mu, \Sigma_R, \Sigma_C)$ denote the matrix normal distribution as in (8), then the Fisher information matrix is given by
$$I(\omega) = \operatorname{Cov}\!\left[\nabla_{\omega} \log \mathrm{MN}_{p,q}(\Theta_1 \mid \mu, \Sigma_R, \Sigma_C)\right] = \mathbb{E}\!\left[\nabla_{\omega} \log \mathrm{MN}_{p,q}(\Theta_1 \mid \mu, \Sigma_R, \Sigma_C)\, \nabla_{\omega} \log \mathrm{MN}_{p,q}(\Theta_1 \mid \mu, \Sigma_R, \Sigma_C)^{\top}\right] = -\mathbb{E}\!\left[\nabla_{\omega}^2 \log \mathrm{MN}_{p,q}(\Theta_1 \mid \mu, \Sigma_R, \Sigma_C)\right].$$
Here, the expectation is taken with respect to the distribution parameterized by $\omega = (\mu, \Sigma_R, \Sigma_C)$. Note that the covariance is represented in terms of a tensor product, emphasizing that the covariance structure involves interactions between the matrices.
Following the theory presented by [18,19], the Fisher information matrix for the matrix normal distribution can be broken down using the Kronecker product properties of the row and column covariances. For matrix-variate distributions parameterized by an $m$-dimensional vector $\psi = (\psi_1, \ldots, \psi_m)^{\top} \in \mathbb{R}^m$, where $\mu = (\psi_1, \ldots, \psi_{pq})$ and $\Sigma_R(\psi)$ and $\Sigma_C(\psi)$ are the row and column covariance matrices corresponding to the remaining parameters, the Fisher information matrix components are given by
$$I_{ij}(\omega) = \left(\frac{\partial \mu}{\partial \omega_i}\right)^{\!\top} \Sigma_R^{-1}\, \frac{\partial \mu}{\partial \omega_j} + \frac{1}{2}\operatorname{tr}\!\left( \Sigma_R^{-1} \frac{\partial \Sigma_R}{\partial \omega_i} \Sigma_R^{-1} \frac{\partial \Sigma_R}{\partial \omega_j} \right) + \frac{1}{2}\operatorname{tr}\!\left( \Sigma_C^{-1} \frac{\partial \Sigma_C}{\partial \omega_i} \Sigma_C^{-1} \frac{\partial \Sigma_C}{\partial \omega_j} \right).$$
These components follow from the matrix differentiation results that account for the second-order derivatives and trace operations needed to properly capture the covariance interactions.
To verify that the stationary point corresponds to a maximum of the log-likelihood function, we conduct a second derivative test by examining the Hessian matrix of the log-likelihood function at the stationary point. If this Hessian is negative definite, the point is a maximum. Given that the Fisher information matrix $I(\omega)$ is minus the expected Hessian of the log-likelihood and reflects its curvature, its positive definiteness guarantees that our stationary point is, indeed, a local maximum.
Therefore, the Fisher information matrix for the matrix normal distribution respects the Kronecker product structure and tensor properties of the distribution, as detailed by [13,19]. This completes the proof. □
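For instance, with $p = q = 3$, as in the simulations of Section 4, the dimension of the parameter space in Theorem 2 is $m = \dim(\Omega) = 9 + 6 + 6 = 21$.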

3. Bayesian Setup

In a Bayesian setup, we begin by defining independent prior distributions for the unknown matrix and tensor parameters in models (6) and (7). Using the determinantal product defined in (9), the prior distribution is given by
$$\pi(\mu, \Sigma_R, \Sigma_C, V) \propto |\Sigma_R|^{-\frac{p+2}{2}}\, |\Sigma_C|^{-\frac{q+2}{2}}\, \mathcal{D}(V)^{-\frac{pq+1}{2}}, \qquad (16)$$
where μ is the mean matrix; Σ R and Σ C are the row and column covariance matrices, respectively; and V represents the tensor of interest.
Given the independent likelihood functions in (10) and (11), the joint posterior distribution of the matrices ( μ , Σ R , Σ C ) and the tensor ( V ) can be written as
$$\begin{aligned} \pi(\mu, \Sigma_R, \Sigma_C, V \mid \Theta) &\propto L_1(\mu, \Sigma_R, \Sigma_C \mid \Theta_{11}, \ldots, \Theta_{n1})\, L_2(V \mid \Theta_{12}, \ldots, \Theta_{n2})\, \pi(\mu, \Sigma_R, \Sigma_C, V) \\ &\propto |\Sigma_R|^{-\frac{nq+p+2}{2}}\, |\Sigma_C|^{-\frac{np+q+2}{2}} \times \exp\!\left\{ -\frac{1}{2}\operatorname{tr}\!\left( S_R \Sigma_R^{-1} \right) - \frac{1}{2}\operatorname{tr}\!\left( S_C \Sigma_C^{-1} \right) \right\} \\ &\quad \times \exp\!\left\{ -\frac{n}{2} \sum_{i=1}^{n} \operatorname{tr}\!\left[ (\Theta_{i1} - \mu)^{\top} \Sigma_R^{-1} (\Theta_{i1} - \mu)\, \Sigma_C^{-1} \right] \right\} \\ &\quad \times \mathcal{D}(V)^{-\frac{pq+1+nm}{2}} \exp\!\left\{ -\frac{1}{2} \sum_{i=1}^{n} \sum_{k=1}^{q} \operatorname{tr}\!\left[ \Theta_{i2}^{(k)} (V^{(k)})^{-1} \right] \right\}, \end{aligned} \qquad (17)$$
where S R and S C are the scatter matrices for row and column covariances, respectively.
The following lemma provides the full conditional posterior distributions associated with the posterior distribution in (17).
Lemma 1.
The full conditional distributions associated with the posterior distribution in (17) are given by
$$\mu \mid \Sigma_R, \Sigma_C, \Theta_1 \sim \mathrm{MN}_{p,q}\!\left( \bar{\Theta}_1,\, \frac{\Sigma_R}{n},\, \frac{\Sigma_C}{n} \right), \qquad (18)$$
$$\Sigma_R \mid \mu, \Sigma_C, \Theta_1 \sim \mathrm{IW}_p\!\left( nq+1,\; S_R + \sum_{i=1}^{n} (\Theta_{i1} - \mu)\, \Sigma_C^{-1} (\Theta_{i1} - \mu)^{\top} \right), \qquad (19)$$
$$\Sigma_C \mid \mu, \Sigma_R, \Theta_1 \sim \mathrm{IW}_q\!\left( np+1,\; S_C + \sum_{i=1}^{n} (\Theta_{i1} - \mu)^{\top} \Sigma_R^{-1} (\Theta_{i1} - \mu) \right), \qquad (20)$$
$$V \mid \Theta_2 \sim \mathrm{ITW}_{p,q}\!\left( nm,\; \sum_{i=1}^{n} \Theta_{i2} \right), \qquad (21)$$
where $\mathrm{MN}_{p,q}$ is the matrix normal distribution, $\mathrm{IW}_p$ and $\mathrm{IW}_q$ are inverse Wishart distributions, and $\mathrm{ITW}_{p,q}$ is the inverse tensor Wishart distribution.
Proof. 
From the joint posterior distribution in (17), it can easily be seen that
$$L(\mu \mid \Sigma_R, \Sigma_C, \Theta_1) \propto \exp\!\left\{ -\frac{n}{2} \sum_{i=1}^{n} \operatorname{tr}\!\left[ (\Theta_{i1} - \mu)^{\top} \Sigma_R^{-1} (\Theta_{i1} - \mu)\, \Sigma_C^{-1} \right] \right\};$$
thus, the full conditional distribution for $\mu$ is
$$\mu \mid \Sigma_R, \Sigma_C, \Theta_1 \sim \mathrm{MN}_{p,q}\!\left( \bar{\Theta}_1,\, \Sigma_R/n,\, \Sigma_C/n \right),$$
where $\bar{\Theta}_1 = \frac{1}{n}\sum_{i=1}^{n} \Theta_{i1}$ is the sample mean.
Next, the full conditional for $\Sigma_R$ comes from the part of the posterior involving $\Sigma_R$:
$$L(\Sigma_R \mid \mu, \Sigma_C, \Theta_1) \propto |\Sigma_R|^{-\frac{nq+p+2}{2}} \exp\!\left\{ -\frac{1}{2}\operatorname{tr}\!\left( S_R \Sigma_R^{-1} \right) \right\},$$
where $S_R = \sum_{i=1}^{n} (\Theta_{i1} - \mu)\, \Sigma_C^{-1} (\Theta_{i1} - \mu)^{\top}$. This corresponds to an inverse Wishart distribution in (2), so we have
$$\Sigma_R \mid \mu, \Sigma_C, \Theta_1 \sim \mathrm{IW}_p\!\left( nq+1,\, S_R \right).$$
Similarly, the full conditional for $\Sigma_C$ is derived as
$$L(\Sigma_C \mid \mu, \Sigma_R, \Theta_1) \propto |\Sigma_C|^{-\frac{np+q+2}{2}} \exp\!\left\{ -\frac{1}{2}\operatorname{tr}\!\left( S_C \Sigma_C^{-1} \right) \right\},$$
where $S_C = \sum_{i=1}^{n} (\Theta_{i1} - \mu)^{\top} \Sigma_R^{-1} (\Theta_{i1} - \mu)$. Hence, the full conditional for $\Sigma_C$ is
$$\Sigma_C \mid \mu, \Sigma_R, \Theta_1 \sim \mathrm{IW}_q\!\left( np+1,\, S_C \right).$$
Finally, the full conditional for $V$ is derived from the likelihood involving $V$ in model (7). This likelihood is proportional to an inverse tensor Wishart distribution (Definition 2). Specifically,
$$L(V \mid \Theta_2) \propto \mathcal{D}(V)^{-\frac{nm+pq+1}{2}} \exp\!\left\{ -\frac{1}{2} \sum_{k=1}^{q} \operatorname{tr}\!\left[ \left( \sum_{i=1}^{n} \Theta_{i2}^{(k)} \right) (V^{(k)})^{-1} \right] \right\},$$
which leads to the full conditional for V as in (21). □
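Lemma 1 translates directly into a Gibbs sampler that cycles through the four full conditionals. The sketch below is our own illustration (not code from the paper); it assumes SciPy's invwishart, draws the matrix normal via the Cholesky construction $\mu = \bar{\Theta}_1 + L_R Z L_C^{\top}$, and uses the conditional scale matrices as written in the proof above (the centered cross-product sums):

```python
import numpy as np
from scipy.stats import invwishart

def gibbs_sampler(Theta1, Theta2, m, n_iter=2000, seed=None):
    """Gibbs sampler for (mu, Sigma_R, Sigma_C, V) based on Lemma 1 (sketch).

    Theta1: (n, p, q) mean matrices; Theta2: (n, p, p, q) covariance tensors.
    """
    rng = np.random.default_rng(seed)
    n, p, q = Theta1.shape
    Theta1_bar = Theta1.mean(axis=0)
    Sigma_R, Sigma_C = np.eye(p), np.eye(q)          # arbitrary starting values
    draws = []
    for _ in range(n_iter):
        # mu | Sigma_R, Sigma_C, Theta1 ~ MN(Theta1_bar, Sigma_R/n, Sigma_C/n), Eq. (18)
        L_R = np.linalg.cholesky(Sigma_R / n)
        L_C = np.linalg.cholesky(Sigma_C / n)
        mu = Theta1_bar + L_R @ rng.standard_normal((p, q)) @ L_C.T
        D = Theta1 - mu
        # Sigma_R | ... ~ IW_p(nq + 1, sum_i D_i Sigma_C^{-1} D_i'), proof of Lemma 1
        Sc_inv = np.linalg.inv(Sigma_C)
        S_R = np.einsum("nij,jk,nlk->il", D, Sc_inv, D)
        Sigma_R = invwishart(df=n * q + 1, scale=S_R).rvs()
        # Sigma_C | ... ~ IW_q(np + 1, sum_i D_i' Sigma_R^{-1} D_i), proof of Lemma 1
        Sr_inv = np.linalg.inv(Sigma_R)
        S_C = np.einsum("nji,jk,nkl->il", D, Sr_inv, D)
        Sigma_C = invwishart(df=n * p + 1, scale=S_C).rvs()
        # V | Theta2 ~ ITW_{p,q}(nm, sum_i Theta_{i2}), drawn slice by slice, Eq. (21)
        U = Theta2.sum(axis=0)
        V = np.stack([invwishart(df=n * m, scale=U[:, :, k]).rvs() for k in range(q)], axis=2)
        draws.append((mu, Sigma_R, Sigma_C, V))
    return draws
```

Averaging the stored draws approximates the posterior means, which coincide with the closed-form Bayes estimators derived in Section 3.2.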

3.1. Loss Functions

One of the most common loss functions for estimating a matrix $\mu$ using $\hat{\mu}$ is the Frobenius norm, which is given by
$$L_2 = \| \mu - \hat{\mu} \|_F^2, \qquad (22)$$
where, for a given matrix $A \in \mathbb{R}^{p \times q}$, the Frobenius norm is defined as
$$\| A \|_F = \sqrt{ \sum_{i=1}^{p} \sum_{j=1}^{q} |a_{ij}|^2 },$$
with $a_{ij}$ representing the elements of matrix $A$. The Frobenius norm can be interpreted as the square root of the sum of the squares of all the entries in the matrix. It corresponds to the Euclidean norm when the matrix is flattened into a vector. For more information, see [20,21].
For the tensor-valued case, the tensor Frobenius norm generalizes the matrix Frobenius norm. If $M$ and $\hat{M}$ are tensors of the same dimensions, the loss function is given by
$$L_2(M, \hat{M}) = \| M - \hat{M} \|_F^2, \qquad (23)$$
where the tensor Frobenius norm for a tensor $M \in \mathbb{R}^{p \times q \times r}$ is defined as
$$\| M \|_F = \sqrt{ \sum_{i=1}^{p} \sum_{j=1}^{q} \sum_{k=1}^{r} |m_{ijk}|^2 }.$$
The interpretation is similar to that of the matrix case; it is the square root of the sum of the squares of all the tensor elements. This generalization is crucial for the extension of matrix-based methods to higher-order data structures such as tensors.
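As a quick illustration (our own snippet, not part of the paper), both the matrix and the tensor versions of this loss reduce to the sum of squared differences of the flattened entries:

```python
import numpy as np

def frobenius_loss(est, true):
    """Squared Frobenius-norm loss ||est - true||_F^2 for matrices or tensors."""
    return float(np.sum((np.asarray(est) - np.asarray(true)) ** 2))

A = np.eye(3)
A_hat = A + 0.1          # perturb every entry by 0.1
print(frobenius_loss(A_hat, A))  # 9 * 0.1^2 = 0.09
```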

3.2. Bayes Estimators

Theorem 3.
Consider models (6) and (7) and prior (16). The Bayes estimators of the matrix parameters $\mu$, $\Sigma_R$, and $\Sigma_C$ (with respect to the Frobenius norm loss in (22)) and the tensor $V$ (with respect to the tensor Frobenius norm loss in (23)) are given by
$$\hat{\mu}_{B} = \bar{\Theta}_1, \qquad (24)$$
$$\hat{\Sigma}_{R}^{B} = \frac{S_R}{nq + p}, \qquad (25)$$
$$\hat{\Sigma}_{C}^{B} = \frac{S_C}{np + q}, \qquad (26)$$
$$\hat{V}_{B} = \left( \frac{\sum_{i=1}^{n} \Theta_{i2}^{(1)}}{nm - p - 1}, \frac{\sum_{i=1}^{n} \Theta_{i2}^{(2)}}{nm - p - 1}, \ldots, \frac{\sum_{i=1}^{n} \Theta_{i2}^{(q)}}{nm - p - 1} \right). \qquad (27)$$
Proof. 
The Bayes estimators are derived from the expectations of the posterior marginal distributions of the parameters. Based on the posterior distribution (17), one can easily show that the posterior marginal distribution for $\mu$ is
$$\mu \mid \Theta \sim \mathrm{MT}_{p,q}\!\left( \bar{\Theta}_1,\, \frac{S_R}{n+q},\, \frac{S_C}{n+p},\, n+q-p \right), \qquad (28)$$
where $\mathrm{MT}_{p,q}(M, \Sigma_R, \Sigma_C, \nu)$ represents a matrix $T$ distribution with a mean matrix $M$, row covariance matrix $\Sigma_R$, column covariance matrix $\Sigma_C$, and $\nu$ degrees of freedom, with the PDF given by
$$\mathrm{MT}_{p,q}(\mu \mid M, \Sigma_R, \Sigma_C, \nu) = \frac{\Gamma_p\!\left( \frac{\nu+q}{2} \right)}{\pi^{\frac{pq}{2}}\, \Gamma_p\!\left( \frac{\nu}{2} \right) \det(\Sigma_R)^{\frac{q}{2}} \det(\Sigma_C)^{\frac{p}{2}}} \left| I_p + (\mu - M)\, \Sigma_C^{-1} (\mu - M)^{\top} \Sigma_R^{-1} \right|^{-\frac{\nu+q}{2}}. \qquad (29)$$
The expectation of the matrix $T$ distribution in (29) is $\mathbb{E}[\mu] = M = \bar{\Theta}_1$, yielding (24).
Next, the posterior marginal distribution for $\Sigma_R$ is
$$\Sigma_R \mid \Theta \sim \mathrm{IW}_p\!\left( nq + p,\, S_R \right),$$
where $\mathrm{IW}_p$ is the inverse Wishart distribution. The expectation of the inverse Wishart distribution is $\frac{S_R}{nq + p}$, which proves (25).
Similarly, it can be shown that
$$\Sigma_C \mid \Theta \sim \mathrm{IW}_q\!\left( np + q,\, S_C \right).$$
Hence, the proof of (26) is complete.
To derive the Bayes estimator for the tensor parameter $V$, note that the posterior distribution of $V$ given $\Theta_{i2}$ follows an inverse tensor Wishart distribution as follows:
$$V \mid \Theta_{i2} \sim \mathrm{ITW}_{p,q}\!\left( nm,\, \sum_{i=1}^{n} \Theta_{i2} \right),$$
and the expectation of the inverse tensor Wishart distribution is
$$\mathbb{E}\!\left[ V \mid \Theta_{i2} \right] = \frac{\sum_{i=1}^{n} \Theta_{i2}}{nm - p - 1},$$
which completes the proof for (27). □
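The Bayes estimators (24)–(27) differ from their ML counterparts only in the denominators, so they can be computed with essentially the same code as the ML sketch in Section 2.1 (again our own illustration, under the same hypothetical array layout):

```python
import numpy as np

def bayes_estimators(Theta1, Theta2, m):
    """Bayes estimators of (mu, Sigma_R, Sigma_C, V) from Theorem 3 (sketch).

    Theta1: (n, p, q); Theta2: (n, p, p, q), as in the ML sketch above.
    """
    n, p, q = Theta1.shape
    mu_hat = Theta1.mean(axis=0)                   # (24): same as the ML estimator
    D = Theta1 - mu_hat
    S_R = np.einsum("nij,nkj->ik", D, D)           # sum_i D_i D_i'
    S_C = np.einsum("nji,njk->ik", D, D)           # sum_i D_i' D_i
    Sigma_R_hat = S_R / (n * q + p)                # (25)
    Sigma_C_hat = S_C / (n * p + q)                # (26)
    V_hat = Theta2.sum(axis=0) / (n * m - p - 1)   # (27), slice by slice
    return mu_hat, Sigma_R_hat, Sigma_C_hat, V_hat
```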
In statistical estimation, evaluating the efficacy of Bayes estimators compared to ML estimators is crucial, especially under non-informative priors. The next theorem investigates the performance of Bayes estimators under a proposed prior distribution for matrix and tensor parameters compared to ML estimators. Our goal is to demonstrate that, under the Frobenius norm loss function, Bayes estimators exhibit dominance over their ML counterparts.
Theorem 4.
Under the assumptions of Theorem 3, the Bayes estimators of parameters  Σ R ,  Σ C , and  V given in Equations (25)–(27) dominate the ML estimators obtained by Theorem 1.
Proof. 
To prove that the Bayes estimators dominate the ML estimators under the Frobenius norm loss, we need to compute the differences in risk between the Bayes and ML estimators for the matrix parameters ( Σ R and Σ C ) and the tensor parameter ( V ). These differences in risk are given by the following expected losses:
$$\Delta_{\Sigma_R} = \mathbb{E}\!\left[ L_2(\hat{\Sigma}_R^{ML}, \Sigma_R) - L_2(\hat{\Sigma}_R^{B}, \Sigma_R) \right],$$
$$\Delta_{\Sigma_C} = \mathbb{E}\!\left[ L_2(\hat{\Sigma}_C^{ML}, \Sigma_C) - L_2(\hat{\Sigma}_C^{B}, \Sigma_C) \right],$$
$$\Delta_{V} = \mathbb{E}\!\left[ L_2(\hat{V}_{ML}, V) - L_2(\hat{V}_{B}, V) \right].$$
For the matrix parameter $\Sigma_R$, the ML and Bayes estimators are given by (13) and (25), respectively. The expected loss (under the squared Frobenius norm) for the ML estimator is
$$\mathbb{E}\!\left[ L_2(\hat{\Sigma}_R^{ML}, \Sigma_R) \right] = \frac{\operatorname{tr}(\Sigma_R^2)}{nq},$$
and for the Bayes estimator, we have
$$\mathbb{E}\!\left[ L_2(\hat{\Sigma}_R^{B}, \Sigma_R) \right] = \frac{\operatorname{tr}(\Sigma_R^2)}{nq + p}.$$
Therefore,
$$\Delta_{\Sigma_R} = \frac{\operatorname{tr}(\Sigma_R^2)}{nq} - \frac{\operatorname{tr}(\Sigma_R^2)}{nq + p} = \operatorname{tr}(\Sigma_R^2)\left( \frac{1}{nq} - \frac{1}{nq + p} \right).$$
Since $\frac{1}{nq} > \frac{1}{nq + p}$, it follows that $\Delta_{\Sigma_R} > 0$.
Similarly, using the ML and Bayes estimators given by (14) and (26), respectively, we have
$$\mathbb{E}\!\left[ L_2(\hat{\Sigma}_C^{ML}, \Sigma_C) \right] = \frac{\operatorname{tr}(\Sigma_C^2)}{np},$$
and
$$\mathbb{E}\!\left[ L_2(\hat{\Sigma}_C^{B}, \Sigma_C) \right] = \frac{\operatorname{tr}(\Sigma_C^2)}{np + q}.$$
Thus, the difference in risk is
$$\Delta_{\Sigma_C} = \frac{\operatorname{tr}(\Sigma_C^2)}{np} - \frac{\operatorname{tr}(\Sigma_C^2)}{np + q} = \operatorname{tr}(\Sigma_C^2)\left( \frac{1}{np} - \frac{1}{np + q} \right).$$
Since $\frac{1}{np} > \frac{1}{np + q}$, it follows that $\Delta_{\Sigma_C} > 0$.
For the tensor parameter $V$, the ML and Bayes estimators are given by (15) and (27), respectively. The expected loss for the ML estimator is
$$\mathbb{E}\!\left[ L_2(\hat{V}_{ML}, V) \right] = \frac{\mathcal{D}(V)}{nm},$$
where $\mathcal{D}(V) = \operatorname{tr}(V^2) = \sum_{i,j,k} V_{ijk}^2$. For the Bayes estimator, it is
$$\mathbb{E}\!\left[ L_2(\hat{V}_{B}, V) \right] = \frac{\mathcal{D}(V)}{nm - pq - 1}.$$
Thus, the difference in risk is
$$\Delta_{V} = \frac{\mathcal{D}(V)}{nm} - \frac{\mathcal{D}(V)}{nm - pq - 1} = \mathcal{D}(V)\left( \frac{1}{nm} - \frac{1}{nm - pq - 1} \right).$$
Since $\frac{1}{nm} > \frac{1}{nm - pq - 1}$, it follows that $\Delta_V > 0$.
Therefore, the Bayes estimators for Σ R , Σ C , and V dominate the ML estimators under the Frobenius norm loss. □
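As a numerical illustration of these risk gaps (our own arithmetic, using the simulation configuration of Section 4 with $p = q = 3$, $n = 25$, and $\Sigma_R = 1.5\, I_3$, so that $\operatorname{tr}(\Sigma_R^2) = 6.75$), we obtain $\Delta_{\Sigma_R} = 6.75\left(\frac{1}{75} - \frac{1}{78}\right) \approx 0.0035 > 0$; the gap shrinks toward zero as $n$ grows, consistent with the convergence of the two estimators observed in Table 4 below.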

4. Simulation Results

In this section, we present simulation results to evaluate the performance of the ML estimators (Equations (12)–(15)) and Bayesian estimators (Equations (24)–(27)) for interval-valued matrix data under models (6) and (7). The parameters of interest include the row covariance matrix ( Σ R ), the column covariance matrix ( Σ C ), and the variance–covariance tensor ( V ). To assess the performance of these estimators, we compute the expected loss (also known as risk) under the Frobenius norm loss functions given by (22) and (23) for matrix and tensor estimators, respectively.
The simulation setup involves generating N = 1000 iterations with sample sizes of n = 25 , 100 , and 250 of interval-valued matrices with known matrix and tensor parameters, as shown below.
$$\mu = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}, \qquad \Sigma_R = \begin{pmatrix} 1.5 & 0 & 0 \\ 0 & 1.5 & 0 \\ 0 & 0 & 1.5 \end{pmatrix}, \qquad \Sigma_C = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix},$$
$$V(:,:,1) = \begin{pmatrix} 1.5 & 0 & 0 \\ 0 & 1.5 & 0 \\ 0 & 0 & 1.5 \end{pmatrix}, \qquad V(:,:,2) = \begin{pmatrix} 2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 2 \end{pmatrix}, \qquad V(:,:,3) = \begin{pmatrix} 2.5 & 0 & 0 \\ 0 & 2.5 & 0 \\ 0 & 0 & 2.5 \end{pmatrix}.$$
It can be observed that $p = q = 3$. The ML and Bayesian estimations of the parameters, with $m$ selected as $npq - 1$, for each sample size are provided in Table 1, Table 2 and Table 3.
Table 4 illustrates the risk functions of the ML and Bayes estimators under the Frobenius norm loss for different sample sizes ( n = 25 , 100 , 250 ). In Table 4, we observe that for small sample sizes ( n = 25 ), the Bayesian estimator generally outperforms the maximum likelihood (ML) estimator, particularly for the row and column covariance matrices ( Σ R and Σ C ). As the sample size increases to n = 100 and beyond, the risk values for both ML and Bayes estimators decrease, indicating improved performance. Additionally, the difference between the risks of the two estimators narrows, showing that their performances become more similar. This convergence is due to the consistency property of ML estimators, which ensures asymptotically good performance. As more data become available, the influence of prior information in the Bayesian approach diminishes, leading both estimators to yield similar results.
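A compact script reproducing the spirit of this experiment can be assembled from the sketches given earlier; the following is our own code (it assumes the ml_estimators, bayes_estimators, and frobenius_loss helpers from the earlier sketches are in scope, generates $\Theta_{i1}$ from the matrix normal model (6) via the Cholesky construction, and generates $\Theta_{i2}$ slice-wise from the Wishart model (7)):

```python
import numpy as np
from scipy.stats import wishart

def simulate_risks(n, N=1000, seed=0):
    """Monte Carlo risks of ML vs. Bayes estimators of Sigma_R under models (6)-(7) (sketch)."""
    rng = np.random.default_rng(seed)
    p = q = 3
    mu = np.eye(3)
    Sigma_R, Sigma_C = 1.5 * np.eye(3), 2.0 * np.eye(3)
    V = np.stack([1.5 * np.eye(3), 2.0 * np.eye(3), 2.5 * np.eye(3)], axis=2)
    m = n * p * q - 1                                  # our reading of the paper's choice of m
    L_R, L_C = np.linalg.cholesky(Sigma_R), np.linalg.cholesky(Sigma_C)
    risk_ml = risk_bayes = 0.0
    for _ in range(N):
        # Theta_{i1} ~ MN(mu, Sigma_R, Sigma_C): mu + L_R Z_i L_C'
        Z = rng.standard_normal((n, p, q))
        Theta1 = mu + np.einsum("ij,njk,lk->nil", L_R, Z, L_C)
        # Theta_{i2}^{(k)} ~ Wishart(m, V^{(k)}), so E[Theta_{i2}^{(k)}] = m * V^{(k)}
        Theta2 = np.stack(
            [np.stack([wishart(df=m, scale=V[:, :, k]).rvs() for k in range(q)], axis=2)
             for _ in range(n)], axis=0)
        _, SR_ml, _, _ = ml_estimators(Theta1, Theta2, m)
        _, SR_b, _, _ = bayes_estimators(Theta1, Theta2, m)
        risk_ml += frobenius_loss(SR_ml, Sigma_R) / N
        risk_bayes += frobenius_loss(SR_b, Sigma_R) / N
    return risk_ml, risk_bayes
```

The same loop, applied to the remaining parameters, yields Monte Carlo approximations of the risks reported in Table 4.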

5. Real Application Example

The dataset consists of interval-valued (minimum and maximum) temperature matrices representing temperature variations across the four seasons—winter, spring, summer, and fall—i.e., $p = 4$, over four time periods of roughly 20 years each, namely 1950–1969, 1970–1989, 1990–2009, and 2010–2023 ($q = 4$), in $n = 39$ Asian countries.
In other words, for each country, the data are arranged in a matrix format where the rows correspond to seasons and the columns correspond to different time periods spanning multiple years. Each matrix element is an interval representing the minimum and maximum temperatures recorded during the corresponding period. The data are sourced from the Berkeley Earth Surface Temperature Study and can be accessed at Kaggle. Table 5 illustrates the dataset’s structure.
Table 6 presents the ML and Bayesian estimations for the matrix parameters $\mu$, $\Sigma_R$, and $\Sigma_C$. Table 7, Table 8, Table 9 and Table 10 provide estimations for the tensor parameter $V$, with each table showing one season's component of the tensor. Note that $m$ is set to $npq - 1$.
While we observed the superiority of Bayes estimators over ML estimators for the matrix parameters ($\mu$ and $\Sigma_R$) and the tensor parameter ($V$), we further assessed the difference in the average length of the corresponding 95% intervals for each element of the parameters. Specifically, we calculated the average length of the 95% confidence intervals obtained from the ML estimation as $l_{ML} = \frac{1}{16}\sum_{i=1}^{4}\sum_{j=1}^{4}\left[ U_{ML}(C_{ij}) - L_{ML}(C_{ij}) \right]$ and, similarly, the average length of the 95% credible intervals from the Bayesian estimation as $l_{B} = \frac{1}{16}\sum_{i=1}^{4}\sum_{j=1}^{4}\left[ U_{B}(C_{ij}) - L_{B}(C_{ij}) \right]$. Here, $U_{ML}$ and $U_{B}$ represent the upper limits of the corresponding intervals for element $C_{ij}$, while $L_{ML}$ and $L_{B}$ represent the lower limits. Each parameter matrix can be denoted generically as $C = (C_{ij})$, for $i, j = 1, \ldots, 4$. The results indicate that the average length of the credible intervals from the Bayesian estimation is consistently shorter than that of the ML confidence intervals, confirming that Bayesian estimators provide narrower intervals. This finding is consistent with our previous simulation results, which demonstrated the superior performance of Bayesian estimation.
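Concretely, the two averages reduce to mean element-wise interval widths; a minimal sketch (our own, with hypothetical arrays L_ML, U_ML, L_B, and U_B holding the 4 x 4 lower and upper limits):

```python
import numpy as np

def average_interval_length(lower, upper):
    """Average width of the 16 element-wise 95% intervals of a 4 x 4 parameter matrix."""
    return float(np.mean(np.asarray(upper) - np.asarray(lower)))

# l_ML = average_interval_length(L_ML, U_ML)
# l_B  = average_interval_length(L_B,  U_B)   # expected to be the shorter of the two
```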

6. Concluding Remarks

In this study, we introduced the concept of interval-valued random matrices, representing a significant advancement at the intersection of symbolic data analysis and matrix theory. This novel approach offers a robust framework for statistical modeling, particularly in contexts characterized by complex and large datasets where traditional methods may fall short.
One of the strengths of our model is its ability to leverage both frequentist and Bayesian approaches for statistical inference. We have demonstrated that, especially when the sample size is small, Bayesian estimators dominate maximum likelihood estimators under the Frobenius norm loss function. Moreover, the asymptotic distribution of the ML estimators was established. This work paves the way for further research and application across various fields, encouraging the exploration of similar frameworks to handle symbolic data. The use of non-informative priors enhances decision making in complex analytical scenarios, ultimately improving the quality of statistical modeling in diverse disciplines.
A critical application explored in this paper involves analyzing temperature variations across different seasons using a dataset of interval-valued temperature matrices. Specifically, we examined seasonal temperature ranges (winter, spring, summer, and fall) across four distinct 20-year periods from 1950 to 2023. This practical example underscores the versatility of interval-valued random matrices, demonstrating their capacity to capture and analyze variability in temperature data.

Author Contributions

Writing—review & editing, A.S. (Abdolnasser Sadeghkhani) and A.S. (Ali Sadeghkhani). Both co-authors contributed equally. All authors have read and agreed to the published version of the manuscript.

Funding

This work is funded by North Carolina A&T State University, fund code 152099.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

The data presented in this study are openly available at https://www.kaggle.com/berkeleyearth/climate-change-earth-surface-temperature-data (accessed on 25 September 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Diday, E. Introduction à l’approche symbolique en analyse des données. RAIRO-Oper. Res. 1989, 23, 193–236. [Google Scholar] [CrossRef]
  2. Billard, L.; Diday, E. From the statistics of data to the statistics of knowledge: Symbolic data analysis. J. Am. Stat. Assoc. 2003, 98, 470–487. [Google Scholar] [CrossRef]
  3. Billard, L.; Diday, E. Symbolic Data Analysis: Conceptual Statistics and Data Mining; John Wiley & Sons: Hoboken, NJ, USA, 2012. [Google Scholar]
  4. Beranger, B.; Lin, H.; Sisson, S. New models for symbolic data analysis. Adv. Data Anal. Classif. 2023, 17, 659–699. [Google Scholar] [CrossRef]
  5. Sadeghkhani, A.; Sadeghkhani, A. Multivariate Interval-Valued Models in Frequentist and Bayesian Schemes. arXiv 2024, arXiv:2405.06635. [Google Scholar]
  6. Samadi, S.Y.; Billard, L.; Guo, J.H.; Xu, W. MLE for the parameters of bivariate interval-valued model. Adv. Data Anal. Classif. 2023, 1–24. [Google Scholar] [CrossRef]
  7. Lauro, C.N.; Palumbo, F. Principal component analysis of interval data: A symbolic data analysis approach. Comput. Stat. 2000, 15, 73–87. [Google Scholar] [CrossRef]
  8. Billard, L.; Diday, E. Regression analysis for interval-valued data. In Data Analysis, Classification, and Related Methods; Springer: Berlin/Heidelberg, Germany, 2000; pp. 369–374. [Google Scholar]
  9. Neto, E.D.A.L.; de Carvalho, F.D.A. Centre and range method for fitting a linear regression model to symbolic interval data. Comput. Stat. Data Anal. 2008, 52, 1500–1515. [Google Scholar] [CrossRef]
  10. Billard, L.; Diday, E. Symbolic regression analysis. In Classification, Clustering, and Data Analysis: Recent Advances and Applications; Springer: Berlin/Heidelberg, Germany, 2002; pp. 281–288. [Google Scholar]
  11. Blanco-Fernández, A.; Colubi, A.; García-Bárzana, M. A set arithmetic-based linear regression model for modelling interval-valued responses through real-valued variables. Inf. Sci. 2013, 247, 109–122. [Google Scholar] [CrossRef]
  12. García-Bárzana, M.; Ramos-Guajardo, A.B.; Colubi, A.; Kontoghiorghes, E.J. Multiple linear regression models for random intervals: A set arithmetic approach. Comput. Stat. 2020, 35, 755–773. [Google Scholar] [CrossRef]
  13. Gupta, A.K.; Nagar, D.K. Matrix Variate Distributions; Chapman and Hall/CRC: Boca Raton, FL, USA, 2018. [Google Scholar]
  14. Zhang, X.D. Matrix Analysis and Applications; Cambridge University Press: Cambridge, UK, 2017. [Google Scholar]
  15. Itskov, M. Tensor Algebra and Tensor Analysis for Engineers; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  16. Le-Rademacher, J.; Billard, L. Likelihood functions and some maximum likelihood estimators for symbolic data. J. Stat. Plan. Inference 2011, 141, 1593–1602. [Google Scholar] [CrossRef]
  17. Billard, L. Brief overview of symbolic data and analytic issues. Stat. Anal. Data Min. ASA Data Sci. J. 2011, 4, 149–156. [Google Scholar] [CrossRef]
  18. Muirhead, R.J. Aspects of Multivariate Statistical Theory; John Wiley & Sons: Hoboken, NJ, USA, 2009. [Google Scholar]
  19. Magnus, J.R.; Neudecker, H. Matrix Differential Calculus with Applications in Statistics and Econometrics; John Wiley & Sons: Hoboken, NJ, USA, 2019. [Google Scholar]
  20. Huber, P.J. Robust estimation of a location parameter. In Breakthroughs in Statistics: Methodology and Distribution; Springer: New York, NY, USA, 1992; pp. 492–518. [Google Scholar]
  21. Candès, E.J.; Li, X.; Ma, Y.; Wright, J. Robust principal component analysis? J. ACM 2011, 58, 1–37. [Google Scholar] [CrossRef]
Table 1. ML and Bayes estimations of the mean matrix ( μ ), row covariance matrix ( Σ R ), column covariance matrix ( Σ C ), and tensor ( V ) for n = 25 .
Parameter | ML Estimator | Bayes Estimator
μ 0.9933 0.0105 0.0122 0.0067 0.9869 0.0091 0.0024 0.0034 0.9886 0.9933 0.0105 0.0122 0.0067 0.9869 0.0091 0.0024 0.0034 0.9886
Σ R 2.9068 0.0003 0.0140 0.0003 2.8717 0.0097 0.0140 0.0097 2.9047 2.7950 0.0003 0.0135 0.0003 2.76133 0.0093 0.0135 0.0093 2.7930
Σ C 2.8833 0.0065 0.0118 0.0065 2.8976 0.0136 0.0118 0.0136 2.902 2.7724 0.0063 0.0113 0.0063 2.7861 0.0131 0.0113 0.0131 2.790
V (ML Estimator) 1.4954 0.0020 0.0018 0.0020 1.4996 0.0057 0.0018 0.0057 1.5041 1.57413 0.0021 0.0019 0.0021 1.5786 0.0060 0.0019 0.0060 1.58336
2.00304 0.0049 0.00096 0.00490 1.9991 0.0076 0.00096 0.0076 1.99592 2.1084 0.0051 0.0010 0.0051 2.1043 0.0080 0.0010 0.0080 2.1009
2.49098 0.0092 0.0063 0.0092 2.50823 0.0065 0.0063 0.0065 2.4846 2.6220 0.0097 0.0067 0.0097 2.6402 0.0068 0.0067 0.0068 2.61544
Table 2. Estimated mean, row covariance, and column covariance matrices and tensor ( V ) for ML and Bayes estimators with n = 100 .
Parameter | ML Estimator | Bayes Estimator
μ 0.9938 0.0016 0.0034 0.0055 1.0046 0.0010 0.0086 0.0103 0.9996 0.9938 0.0016 0.0034 0.0055 1.0046 0.0010 0.0086 0.0103 0.9996
Σ R 2.9673 0.0022 0.0046 0.0022 2.9692 0.0014 0.0046 0.0014 2.9567 2.9379 0.0022 0.0046 0.0022 2.9398 0.0014 0.0046 0.0014 2.9275
Σ C 2.9806 0.0031 0.0107 0.0031 2.9527 0.0051 0.0107 0.0051 2.9600 2.9511 0.0030 0.0106 0.0030 2.9234 0.0051 0.0106 0.0051 2.9307
V (ML Estimator) 1.5019 0.0022 0.0033 0.0022 1.4982 0.0000 0.0033 0.0000 1.4985 1.5209 0.0022 0.0033 0.0022 1.5172 0.0000 0.0033 0.0000 1.5175
2.0025 0.0038 0.0006 0.0038 2.0016 0.0044 0.0006 0.0044 1.9977 2.0278 0.0038 0.0007 0.0038 2.0270 0.0045 0.0007 0.0045 2.0230
2.5010 0.0020 0.0034 0.0020 2.4972 0.0028 0.0034 0.0028 2.4986 2.5327 0.0020 0.0034 0.0020 2.5288 0.0028 0.0034 0.0028 2.5303
Table 3. Estimated mean, row covariance, and column covariance matrices and tensor ( V ) for ML and Bayes estimators with n = 250 .
Parameter | ML Estimator | Bayes Estimator
μ 1.0016 0.0013 0.0031 0.0020 0.9988 0.0031 0.0017 0.0003 1.0044 1.0016 0.0013 0.0031 0.0020 0.9988 0.0031 0.0017 0.0003 1.0044
Σ R 2.9975 0.0046 0.0008 0.0046 2.9846 0.0019 0.0008 0.0019 2.9909 2.9856 0.0046 0.0008 0.0046 2.9727 0.0019 0.0008 0.0019 2.9790
Σ C 2.9942 0.0006 0.0025 0.0006 2.9927 0.0064 0.0025 0.0064 2.9862 2.9823 0.0006 0.0025 0.0006 2.9808 0.0064 0.0025 0.0064 2.9743
V (ML Estimator) 1.4999 0.0011 0.0013 0.0011 1.4994 0.0008 0.0013 0.0008 1.4998 1.5075 0.0011 0.0013 0.0011 1.5069 0.0008 0.0013 0.0008 1.5073
2.0010 0.0019 0.0004 0.0019 2.0005 0.0022 0.0004 0.0022 1.9979 2.0110 0.0019 0.0004 0.0019 2.0106 0.0022 0.0004 0.0022 2.0079
2.4966 0.0002 0.0004 0.0002 2.4988 0.0007 0.0004 0.0007 2.5007 2.5092 0.0002 0.0004 0.0002 2.5113 0.0007 0.0004 0.0007 2.5133
Table 4. Comparison of risks for ML and Bayes estimators under Frobenius norm loss for different sample sizes ( n = 25 , 100 , 250 ).
Sample Size (n) | Parameter | ML Risk | Bayes Risk
25 | Mean Matrix, μ | 0.1180 | 0.1180
 | Row Covariance, Σ R | 0.8050 | 0.6938
 | Column Covariance, Σ C | 0.4211 | 0.3472
 | Tensor, V | 0.0379 | 0.0346
100 | Mean Matrix, μ | 0.0301 | 0.0301
 | Row Covariance, Σ R | 0.7544 | 0.7253
 | Column Covariance, Σ C | 0.3489 | 0.3295
 | Tensor, V | 0.0072 | 0.0069
250 | Mean Matrix, μ | 0.0122 | 0.0122
 | Row Covariance, Σ R | 0.7569 | 0.7450
 | Column Covariance, Σ C | 0.3437 | 0.3358
 | Tensor, V | 0.0028 | 0.0028
Table 5. Interval-valued matrices for selected Asian countries.
Country | Interval-Valued Matrix
United Arab Emirates ( 15.75 , 21.5 ) ( 16.08 , 21.97 ) ( 17.1 , 23.56 ) ( 19.24 , 22.1 ) ( 21.43 , 30.25 ) ( 19.79 , 31.43 ) ( 19.8 , 32.34 ) ( 22.49 , 32.73 ) ( 31.06 , 34.42 ) ( 30.53 , 34.92 ) ( 31.36 , 36.1 ) ( 33.34 , 36.37 ) ( 23.09 , 32.53 ) ( 22.7 , 32.44 ) ( 23.7 , 33.34 ) ( 24.95 , 34.47 )
Iraq ( 4.72 , 14.98 ) ( 6.42 , 15.09 ) ( 6.64 , 15.55 ) ( 9.66 , 14.89 ) ( 12.98 , 30.34 ) ( 14.14 , 30.29 ) ( 13.06 , 31.3 ) ( 15.38 , 30.88 ) ( 30.89 , 36.43 ) ( 31.46 , 36.58 ) ( 31.91 , 38.28 ) ( 33.8 , 37.9 ) ( 15.36 , 32.06 ) ( 13.36 , 33.37 ) ( 15.46 , 32.97 ) ( 14.24 , 33.64 )
Azerbaijan ( 5.15 , 5.53 ) ( 5.91 , 5.94 ) ( 4.11 , 5.74 ) ( 2.42 , 7.26 ) ( 0.72 , 18.24 ) ( 0.59 , 17.58 ) ( 2.38 , 17.55 ) ( 3.08 , 19.09 ) ( 18.58 , 25.34 ) ( 18.01 , 25.89 ) ( 18.91 , 26.51 ) ( 22.01 , 27 ) ( 2.44 , 21.27 ) ( 4.93 , 20.78 ) ( 1.7 , 21.25 ) ( 2.83 , 20.68 )
Turkey ( 6.2 , 5.74 ) ( 5.99 , 5.57 ) ( 5.37 , 4.98 ) ( 3.27 , 5.25 ) ( 0.44 , 16.88 ) ( 0.39 , 16.45 ) ( 2.09 , 18.3 ) ( 2.54 , 17.46 ) ( 16.86 , 23.66 ) ( 17.29 , 24.22 ) ( 18.02 , 25.75 ) ( 18.3 , 26.04 ) ( 2.76 , 20.23 ) ( 2.64 , 19.35 ) ( 2.79 , 21.36 ) ( 1.54 , 20.92 )
Table 6. ML and Bayes estimators of parameters.
Estimator | Value
μ ^ M L 13.52397 13.88910 14.34654 14.70949 21.54192 21.82295 22.27026 22.78782 27.19256 27.46731 27.90397 28.41500 22.57231 22.44821 22.52013 22.62744
Σ ^ R M L 0.109389249 0.072021087 0.075759802 0.007602738 0.072021087 0.103560670 0.081688315 0.007713221 0.075759802 0.081688315 0.087399746 0.004339049 0.007602738 0.007713221 0.004339049 0.064772075
Σ ^ C M L 11.69090 11.52679 11.48719 11.59250 11.52679 11.40047 11.37008 11.47697 11.48719 11.37008 11.38523 11.48793 11.59250 11.47697 11.48793 11.69355
μ ^ B 13.52397 13.88910 14.34654 14.70949 21.54192 21.82295 22.27026 22.78782 27.19256 27.46731 27.90397 28.41500 22.57231 22.44821 22.52013 22.6274
Σ ^ R B 0.106654518 0.070220560 0.073865807 0.007412669 0.070220560 0.100971654 0.079646107 0.007520391 0.073865807 0.079646107 0.085214753 0.004230573 0.007412669 0.007520391 0.004230573 0.06315277
Σ ^ C B 11.39863 11.23862 11.20001 11.30269 11.23862 11.11546 11.08583 11.19004 11.20001 11.08583 11.10060 11.20073 11.30269 11.19004 11.20073 11.40121
Table 7. Winter season components of tensor V ^ —ML and Bayes.
Component | ML | Bayes
1 5.8586 3.5697 1.4356 3.4064 5.7615 3.5114 1.4250 3.3843 5.7587 3.4333 1.4111 3.4909 5.8290 3.3441 1.3603 3.4502 5.8586 3.5697 1.4357 3.4064 5.7616 3.5115 1.4250 3.3843 5.7588 3.4334 1.4112 3.4910 5.8291 3.3441 1.3604 3.4502
2 5.7615 3.5134 1.4234 3.3467 5.6749 3.4612 1.4126 3.3270 5.6719 3.3797 1.3979 3.4283 5.7308 3.2955 1.3468 3.3975 5.7616 3.5134 1.4234 3.3467 5.6750 3.4612 1.4127 3.3271 5.6720 3.3797 1.3980 3.4283 5.7309 3.2955 1.3469 3.3975
3 5.6447 3.4498 1.3978 3.2844 5.5596 3.3979 1.3865 3.2654 5.5601 3.3169 1.3723 3.3646 5.6169 3.2380 1.3233 3.3359 5.7588 3.5196 1.4261 3.3508 5.6720 3.4666 1.4146 3.3314 5.6725 3.3840 1.4000 3.4326 5.7305 3.3034 1.3500 3.4034
4 5.7136 3.4873 1.4195 3.3417 5.6173 3.4270 1.4092 3.3178 5.6169 3.3542 1.3983 3.4252 5.6984 3.2683 1.3522 3.3810 5.8291 3.5578 1.4481 3.4093 5.7309 3.4962 1.4377 3.3848 5.7305 3.4220 1.4266 3.4944 5.8136 3.3343 1.3795 3.4494
Table 8. Spring season components of tensor V ^ —ML and Bayes.
Component | ML | Bayes
1 3.4990 2.3897 1.0933 2.2433 3.4438 2.3550 1.0885 2.2276 3.4498 2.2949 1.0815 2.2852 3.4873 2.2522 1.0387 2.2500 3.5697 2.4380 1.1154 2.2887 3.5134 2.4026 1.1105 2.2726 3.5196 2.3413 1.1033 2.3314 3.5578 2.2977 1.0597 2.2955
2 3.4419 2.3550 1.0709 2.1986 3.3926 2.3274 1.0677 2.1840 3.3979 2.2642 1.0583 2.2380 3.4270 2.2224 1.0143 2.2075 3.5115 2.4026 1.0926 2.2430 3.4612 2.3744 1.0892 2.2282 3.4666 2.3099 1.0797 2.2833 3.4962 2.2674 1.0348 2.2521
3 3.3653 2.2949 1.0405 2.1529 3.3128 2.2642 1.0380 2.1360 3.3169 2.2090 1.0306 2.1924 3.3542 2.1651 0.9880 2.1552 3.4334 2.3413 1.0616 2.1964 3.3797 2.3099 1.0590 2.1792 3.3840 2.2537 1.0515 2.2368 3.4220 2.2089 1.0080 2.1987
4 3.2779 2.2522 1.0390 2.1153 3.2303 2.2224 1.0354 2.1011 3.2380 2.1651 1.0292 2.1531 3.2683 2.1311 0.9891 2.1221 3.3441 2.2977 1.0600 2.1580 3.2955 2.2674 1.0564 2.1436 3.3034 2.2089 1.0500 2.1966 3.3343 2.1742 1.0091 2.1650
Table 9. Summer season components of tensor V ^ —ML and Bayes.
Component | ML | Bayes
1 1.4072 1.0933 0.7229 1.0710 1.3952 1.0709 0.7211 1.0684 1.3978 1.0405 0.7313 1.0896 1.4195 1.0390 0.7112 1.0777 1.4357 1.1154 0.7375 1.0927 1.4234 1.0926 0.7357 1.0900 1.4261 1.0616 0.7461 1.1116 1.4481 1.0600 0.7256 1.0995
2 1.3968 1.0885 0.7211 1.0657 1.3847 1.0677 0.7212 1.0624 1.3865 1.0380 0.7313 1.0835 1.4092 1.0354 0.7106 1.0695 1.4250 1.1105 0.7357 1.0872 1.4127 1.0892 0.7358 1.0839 1.4146 1.0590 0.7461 1.1054 1.4377 1.0564 0.7249 1.0911
3 1.3832 1.0815 0.7313 1.0669 1.3703 1.0583 0.7313 1.0631 1.3723 1.0306 0.7442 1.0856 1.3983 1.0292 0.7248 1.0696 1.4112 1.1033 0.7461 1.0884 1.3980 1.0797 0.7461 1.0846 1.4000 1.0515 0.7592 1.1076 1.4266 1.0500 0.7394 1.0912
4 1.3334 1.0387 0.7112 1.0322 1.3202 1.0143 0.7106 1.0296 1.3233 0.9880 0.7248 1.0508 1.3522 0.9891 0.7102 1.0359 1.3604 1.0597 0.7256 1.0531 1.3469 1.0348 0.7249 1.0504 1.3500 1.0080 0.7394 1.0720 1.3795 1.0091 0.7246 1.0568
Table 10. Fall season components of tensor V ^ —ML and Bayes.
Component | ML | Bayes
1 3.3389 2.2433 1.0710 2.2138 3.2804 2.1986 1.0657 2.1976 3.2844 2.1529 1.0669 2.2691 3.3417 2.1153 1.0322 2.2290 3.4064 2.2887 1.0927 2.2586 3.3467 2.2430 1.0872 2.2420 3.3508 2.1964 1.0884 2.3149 3.4093 2.1580 1.0531 2.2741
2 3.3173 2.2276 1.0684 2.1976 3.2611 2.1840 1.0624 2.1877 3.2654 2.1360 1.0631 2.2556 3.3178 2.1011 1.0296 2.2232 3.3843 2.2726 1.0900 2.2420 3.3271 2.2282 1.0839 2.2319 3.3314 2.1792 1.0846 2.3012 3.3848 2.1436 1.0504 2.2681
3 3.4218 2.2852 1.0896 2.2691 3.3604 2.2380 1.0835 2.2556 3.3646 2.1924 1.0856 2.3390 3.4252 2.1531 1.0508 2.3041 3.4910 2.3314 1.1116 2.3149 3.4283 2.2833 1.1054 2.3012 3.4326 2.2368 1.1076 2.3863 3.4944 2.1966 1.0720 2.3507
4 3.3818 2.2500 1.0777 2.2290 3.3302 2.2075 1.0695 2.2232 3.3359 2.1552 1.0696 2.3041 3.3810 2.1221 1.0359 2.2912 3.4502 2.2955 1.0995 2.2741 3.3975 2.2521 1.0911 2.2681 3.4034 2.1987 1.0912 2.3507 3.4494 2.1650 1.0568 2.3375
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
