Article

EA/AE-Eigenvectors of Interval Max-Min Matrices

Martin Gavalec 1, Ján Plavka 2 and Daniela Ponce 1
1 Faculty of Informatics and Management, University of Hradec Králové, 50003 Hradec Králové, Czech Republic
2 Faculty of Electrical Engineering and Informatics, Technical University of Košice, 04200 Košice, Slovakia
* Author to whom correspondence should be addressed.
Mathematics 2020, 8(6), 882; https://doi.org/10.3390/math8060882
Submission received: 13 April 2020 / Revised: 24 May 2020 / Accepted: 25 May 2020 / Published: 1 June 2020
(This article belongs to the Special Issue Applications of Fuzzy Optimization and Fuzzy Decision Making)

Abstract: Systems working in discrete time (discrete event systems, in short: DES), based on the binary operations of maximum and minimum, are studied in so-called max–min (fuzzy) algebra. The steady states of a DES correspond to eigenvectors of its transition matrix. In reality, the matrix (vector) entries are usually not exact numbers and can instead be considered as values in some intervals. The aim of this paper is to investigate the eigenvectors for max–min matrices (vectors) with interval coefficients. This topic is closely related to the research of fuzzy DES, in which the entries of state vectors and transition matrices are kept between 0 and 1 in order to describe uncertain and vague values. Such an approach has various applications, especially for decision-making support in biomedical research. On the other hand, interval data obtained as a result of imprecision or data errors play an important role in practice and allow similar concepts to be modeled. The interval approach in this paper is applied in combination with forall–exists quantification of the values. It is assumed that the set of indices is divided into two disjoint subsets: the E-indices correspond to those components of a DES in which only the existence of one entry in the assigned interval is required, while the A-indices correspond to the universal quantifier, where all entries in the corresponding interval must be considered. In this paper, the properties of EA/AE-interval eigenvectors have been studied and characterized by equivalent conditions. Furthermore, numerical recognition algorithms working in polynomial time have been described. Finally, the results are illustrated by numerical examples.
MSC:
Primary: 08A72; 90B35; Secondary: 90C47

1. Introduction

Matrices in max–min algebra (fuzzy matrices), in which the binary operations of addition and multiplication are replaced by the binary operations of maximum and minimum, are useful for modeling fuzzy discrete dynamic systems. They are also useful in graph theory, scheduling, knowledge engineering, cluster analysis and fuzzy systems, as well as in describing the diagnosis of technical devices [1,2] or medical diagnosis [3]. The problem studied in [3] leads to finding the greatest invariant of a fuzzy system.
Fuzzy DES combine fuzzy set theory with discrete event systems and are represented by vectors and matrices whose entries lie between 0 and 1 and describe uncertain and vague values. The papers [4,5] are devoted to a generalization of DES into fuzzy DES and to extending optimal control of discrete event systems to fuzzy discrete event systems. The authors of [6] deal with predictability in fuzzy DES. In particular, these papers are motivated by the ambition to cope with vagueness and subjectivity in real medical applications. Another possibility for treating the possible inaccuracy of DES entries is to use interval data in combination with forall–exists quantification of the values. Namely, some entries of the interval vector and the interval matrix are taken into account for every value of their interval, while others are only considered for at least one value. This approach allows alternative solutions to be obtained.
The research of fuzzy algebra is also motivated by max-plus discrete event systems (DES); applications to a system of processors and to a multi-machine interactive production process were presented in [7,8], respectively. In these systems, we have n entities (e.g., processors, servers, machines) which work in stages. In the algebraic model of their interactive work, the entry x_i(k) of the state vector x(k) represents the start time of the kth stage on entity i, i = 1, …, n, and the entry a_ij of the transition matrix encodes the influence of the work of entity j in the previous stage on the work of entity i in the current stage. The system is assumed to be homogeneous, in the sense that A does not change from stage to stage.
Summing up all the influence effects multiplied by the results of the previous stage, we have x_i(k+1) = ⊕_{j} a_ij ⊗ x_j(k), where ⊕ = max and ⊗ = + in max-plus algebra. The maximum is often interpreted as waiting until all works of the system are finished and all of the necessary influence constraints are satisfied. The problem of finding the vectors for which the DES reaches a steady state leads to the eigenproblem A ⊗ x = λ ⊗ x, and is one of the most intensively studied questions (see the max–min case study in Section 3.2).
Analogously, ⊗ = min in max–min algebra. The summation ⊕ is then interpreted as computing the maximal capacity of the path leading to the next state of the system. Because the operations max and min do not create new values, a DES in max–min algebra necessarily comes to a periodic repetition of the state vector; it reaches a steady state if the period is 1. The eigenproblem then has the form A ⊗ x = x. In comparison with max–plus algebra, the eigenvalue λ is omitted (in other words, we assume that λ is equal to the greatest value I).
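The following Python sketch (not part of the original paper; the small matrix and vector are made-up illustrative values) shows the max–min product and the eigenvector test A ⊗ x = x used throughout the paper.

```python
# Max-min (fuzzy) algebra: (+) = max, (x) = min.
# Illustrative sketch of the product A (x) x and of the steady-state test A (x) x = x.

def maxmin_product(A, x):
    """(A (x) x)_i = max_j min(a_ij, x_j)."""
    return [max(min(a_ij, x_j) for a_ij, x_j in zip(row, x)) for row in A]

def is_eigenvector(A, x):
    """True if x is a steady state of the DES with transition matrix A."""
    return maxmin_product(A, x) == x

# Made-up data over B = [0, 10]
A = [[3, 1, 2],
     [1, 4, 1],
     [2, 1, 5]]
x = [3, 4, 5]

print(maxmin_product(A, x))   # [3, 4, 5]
print(is_eigenvector(A, x))   # True
```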
In practice, the values of the matrix entries obtained as a result of roundoff, truncation, or data errors are not exact numbers and are usually contained in some intervals. Interval arithmetic is an efficient way to represent matrices in a guaranteed way on a computer. Meanwhile, fuzzy algebra is a convenient algebraic setting for some types of optimization problems; see [9]. Matrices and vectors with interval entries play an important role in practice. They can be applied in several branches of applied mathematics, for instance, the solvability of systems of linear interval equations in classical linear algebra [10] and in max-plus algebra [11], or the stability of the matrix orbit in max–min algebra [12,13].
The motivation for the basic questions studied in this paper comes from an investigation of the steady states of max–min systems with interval coefficients. Suppose that X is an interval vector and A is an interval matrix; then X is called a strong eigenvector of A if A ⊗ x = x holds for every x ∈ X and for every A ∈ A. The eigenvectors correspond to steady states, and it may happen in reality that this interpretation (with the universal quantifier for every index i ∈ N and for every pair (i, j) ∈ N × N) is too strong for all of the entries.
In other words, in some model situations only the existence of some x_i (of some a_ij) is required for i ∈ N^∃ (for (i, j) ∈ Ñ^∃), while all possible values of x_i (of a_ij) must be considered for i ∈ N^∀ = N \ N^∃ (for (i, j) ∈ Ñ^∀ = N × N \ Ñ^∃).
Hence, we assume that X and A can be split into two subsets according to the exists/forall quantification of their interval entries; that is, X = X^∀ ⊕ X^∃ or A = A^∀ ⊕ A^∃ (or both splittings simultaneously) take place.
According to the first two cases, the properties of various types of the strong EA/AE-eigenvectors, or the EA/AE-strong eigenvectors, are studied in this paper. In addition, their characterizations by equivalent conditions are given. Moreover, polynomial recognition algorithms for the described conditions are presented. The mixed case (the EA/AE-strong EA/AE-eigenvectors) is briefly considered without recognition algorithms.
Related concepts of robustness (when an eigenvector of A is reached from any starting vector) and strong robustness (when the greatest eigenvector of A is reached from any starting vector) in fuzzy algebra were introduced and studied in [14,15]. Equivalent conditions for the robustness of an interval matrix were presented in [11], and efficient algorithms for checking strong robustness were described in [16]. The papers [12,13] deal with the AE/EA robustness of interval circulant matrices and the XAE/XEA robustness of max–min matrices. Polynomial procedures for the recognition of weak robustness were described in [15].
The rest of this paper is organized as follows. The next section contains the basic definitions and notation. Section 3 and Section 4 deal with the definitions and equivalent conditions for the EA/AE-eigenvectors. In particular, Section 3 is divided into two subsections: Section 3.1 contains the methodology and Section 3.2 presents a case study application based on a numerical example. Section 5 describes the strong EA/AE-eigenvectors, while Section 6 is devoted to the characterization of the necessary and sufficient conditions for the EA/AE-strong eigenvectors. Finally, the generalization to the mixed case of EA/AE-strong EA/AE-eigenvectors is briefly sketched in Appendix A.

2. Preliminaries and Basic Definitions

Let (B, ≤) be a bounded linearly ordered set with the least element denoted by O and the greatest element denoted by I. For given natural numbers m, n, we use the notation M = {1, 2, …, m} and N = {1, 2, …, n}, respectively. The set of m × n matrices over B is denoted by B(m, n), the set of n × 1 vectors over B is denoted by B(n) and, for α ∈ B, the constant vector (α, …, α)^T is denoted by α.
The max–min algebra is defined as a triple (B, ⊕, ⊗), where a ⊕ b = max(a, b) and a ⊗ b = min(a, b). The operations ⊕, ⊗ are extended to the matrix–vector algebra over B by direct analogy to the conventional linear algebra. If each entry of a matrix A ∈ B(m, n) (of a vector x ∈ B(n)) is equal to O, then we write A = O (x = O).
The ordering ≤ on B is naturally extended to vectors and matrices. For example, for x = (x_1, …, x_n)^T ∈ B(n) and y = (y_1, …, y_n)^T ∈ B(n) we write x ≤ y if x_i ≤ y_i holds for each i ∈ N.
For A̲, Ā ∈ B(n, n) with A̲ ≤ Ā and x̲, x̄ ∈ B(n) with x̲ ≤ x̄, the interval matrix A with bounds A̲, Ā and the interval vector X with bounds x̲, x̄ are defined as follows:
A = [A̲, Ā] = { A ∈ B(n, n); A̲ ≤ A ≤ Ā },
X = [x̲, x̄] = { x ∈ B(n); x̲ ≤ x ≤ x̄ }.
In the rest of this paper we assume that subsets N^∃, N^∀ ⊆ N are given with N^∃ ∪ N^∀ = N and N^∃ ∩ N^∀ = ∅. In other words, we consider a partition N = {N^∃, N^∀}. If i ∈ N^∃ (i ∈ N^∀), then we say that the index i is associated with the existential (universal) quantifier.
Using the given partition N = {N^∃, N^∀}, we can split the interval vector X as X^∃ ⊕ X^∀, where X^∀ = [x̲^∀, x̄^∀] is the interval vector comprising the universally quantified entries and X^∃ = [x̲^∃, x̄^∃] concerns the existentially quantified entries. In other words, every vector x ∈ X can be written in the form x = x^∃ ⊕ x^∀, with x^∃ ∈ X^∃, x^∀ ∈ X^∀.
More precisely, x_i^∃ = x_i for i ∈ N^∃ and x_i^∃ = O for i ∈ N^∀; similarly, x_i^∀ = x_i for i ∈ N^∀ and x_i^∀ = O for i ∈ N^∃.
Definition 1.
Let an interval vector X ⊆ B(n) and a partition N = {N^∃, N^∀} be given. The interval vector X^∃ = [x̲^∃, x̄^∃] is called
  • the E-subvector of X, if x̲_i^∃ = x̄_i^∃ = O for each i ∈ N^∀ and [x̲_i^∃, x̄_i^∃] = [x̲_i, x̄_i] for each i ∈ N^∃,
and the interval vector X^∀ = [x̲^∀, x̄^∀] is called
  • the A-subvector of X, if x̲_i^∀ = x̄_i^∀ = O for each i ∈ N^∃ and [x̲_i^∀, x̄_i^∀] = [x̲_i, x̄_i] for each i ∈ N^∀.
Example 1.
Suppose that B = [0, 10] and N = {N^∃, N^∀}. Consider the interval vector X of the form
X = ([1, 2], [1, 3], [3, 4], [1, 2], [0, 1])^T with N^∃ = {1, 2, 3} and N^∀ = {4, 5}.
Then the subvectors X^∃ and X^∀ have the form
X^∃ = ([1, 2], [1, 3], [3, 4], [0, 0], [0, 0])^T and X^∀ = ([0, 0], [0, 0], [0, 0], [1, 2], [0, 1])^T.
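As a small programming illustration (not part of the original paper), the splitting of Example 1 can be written down directly; the representation of an interval vector as a list of (lower, upper) pairs is an assumption of this sketch.

```python
# Sketch of the E/A splitting of an interval vector (Example 1).
# Indices are 1-based in the paper and 0-based here; O = 0 is the least element of B = [0, 10].

O = 0

def split_interval_vector(X, N_exists):
    """Return (E-subvector, A-subvector) of the interval vector X."""
    X_e = [iv if i in N_exists else (O, O) for i, iv in enumerate(X)]
    X_a = [(O, O) if i in N_exists else iv for i, iv in enumerate(X)]
    return X_e, X_a

X = [(1, 2), (1, 3), (3, 4), (1, 2), (0, 1)]
N_exists = {0, 1, 2}          # the paper's E-indices {1, 2, 3}

X_e, X_a = split_interval_vector(X, N_exists)
print(X_e)  # [(1, 2), (1, 3), (3, 4), (0, 0), (0, 0)]
print(X_a)  # [(0, 0), (0, 0), (0, 0), (1, 2), (0, 1)]
```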
For given A ∈ B(n, n) and x ∈ B(n), we say that x is an eigenvector of A if
A ⊗ x = x.
In the rest of this paper, we assume that a partition N = {N^∃, N^∀} is given. The corresponding subvectors X^∃, X^∀ and their entries will always be related to this fixed partition, without explicit formulation. The same is true for the EA/AE-eigenvectors, which are defined as follows.
Definition 2.
Let a matrix A ∈ B(n, n) and an interval vector X = [x̲, x̄] ⊆ B(n) be given. We say that X is
  • an EA-eigenvector of A if
    (∃ x^∃ ∈ X^∃)(∀ x^∀ ∈ X^∀)  A ⊗ (x^∃ ⊕ x^∀) = x^∃ ⊕ x^∀,
  • an AE-eigenvector of A if
    (∀ x^∀ ∈ X^∀)(∃ x^∃ ∈ X^∃)  A ⊗ (x^∃ ⊕ x^∀) = x^∃ ⊕ x^∀.
All matrices belonging to A and all vectors belonging to X can be represented as max–min linear combinations of so-called generators, which are defined as follows. For every i, j ∈ N, the matrix A^(ij) ∈ B(n, n) and the vectors x^(i), x^[i] ∈ B(n) are defined by putting, for every k, l ∈ N,
a_kl^(ij) = ā_ij for k = i, l = j, and a_kl^(ij) = a̲_kl otherwise,
x_k^(i) = x̄_i for k = i, and x_k^(i) = x̲_k otherwise;   x_k^[i] = x̲_i for k = i, and x_k^[i] = x̄_k otherwise.
Furthermore, we denote x^(n+1) := x̲, x^[n+1] := x̄ and X_G = { x^(i), x^[i]; i ∈ N ∪ {n+1} }. Notice that |X_G| = 2n + 2.
Lemma 1.
Let x ∈ B(n) and A ∈ B(n, n). Then,
(i)
x ∈ X if and only if x = ⊕_{i∈N} β_i ⊗ x^(i) for some β_i ∈ B with x̲_i ≤ β_i ≤ x̄_i,
(ii)
A ∈ A if and only if A = ⊕_{i,j∈N} α_ij ⊗ A^(ij) for some α_ij ∈ B with a̲_ij ≤ α_ij ≤ ā_ij.
Proof. 
For the proof of statement (i), let us suppose that x ∈ X; that is, the inequalities x̲_i ≤ x_i ≤ x̄_i hold for every i ∈ N. Denoting β_i = x_i, we get β_i ⊗ x̄_i = x_i ⊗ x̄_i = x_i and β_i ⊗ x̲_i = x_i ⊗ x̲_i = x̲_i ≤ x_i for every i ∈ N. It can be easily verified that ⊕_{i∈N} β_i ⊗ x^(i) = x. The proof of statement (ii) is analogous. □
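To make the generator construction and Lemma 1 concrete, here is a small Python sketch (not from the original paper); the bounds are taken from the numerical example in Section 3.2, and the helper names are of course not the paper's notation.

```python
# Generators x^(i), x^[i] of an interval vector X = [x_lo, x_hi] and the
# max-min decomposition of Lemma 1(i): x = (+)_i beta_i (x) x^(i) with beta_i = x_i.

def gen_round(x_lo, x_hi, i):
    """x^(i): upper bound at position i, lower bound elsewhere."""
    return [x_hi[k] if k == i else x_lo[k] for k in range(len(x_lo))]

def gen_square(x_lo, x_hi, i):
    """x^[i]: lower bound at position i, upper bound elsewhere."""
    return [x_lo[k] if k == i else x_hi[k] for k in range(len(x_lo))]

def maxmin_combination(vectors, betas):
    """(+)_i beta_i (x) v^(i), computed componentwise."""
    n = len(vectors[0])
    return [max(min(b, v[k]) for b, v in zip(betas, vectors)) for k in range(n)]

x_lo, x_hi = [2, 2, 3, 2], [4, 5, 4, 3]          # bounds used later in Section 3.2
gens = [gen_round(x_lo, x_hi, i) for i in range(4)]

x = [3, 4, 3, 2]                                  # some vector inside [x_lo, x_hi]
print(maxmin_combination(gens, x) == x)           # True, as Lemma 1(i) states
```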

3. EA-Eigenvector

3.1. Description of the Methodology

The first result is the characterization of an interval EA-eigenvector of a given (non-interval) matrix A, with the help of generators.
Theorem 1.
Let A ∈ B(n, n) and X = [x̲, x̄] be given. Then, X is an EA-eigenvector of A if and only if
(∃ x^∃ ∈ X^∃)(∀ i ∈ N^∀ ∪ {n+1})  A ⊗ (x^∃ ⊕ x^(i)) = x^∃ ⊕ x^(i).
Proof. 
Suppose that there is x^∃ ∈ X^∃ such that A ⊗ (x^∃ ⊕ x^(i)) = x^∃ ⊕ x^(i) holds for all i ∈ N^∀ ∪ {n+1}. For the fixed x^∃, define the auxiliary interval vector X̂ = (x̂_1, …, x̂_n)^T as follows:
x̂_i = [x_i^∃, x_i^∃] for i ∈ N^∃, and x̂_i = [x̲_i, x̄_i] for i ∈ N^∀.
Notice that the generators x̂^(i) of X̂ have the form
x̂_k^(i) = x̄_i for k = i, i ∈ N^∀;  x̂_k^(i) = x̲_k for k ≠ i, k ∈ N^∀;  x̂_k^(i) = x_k^∃ for k ∈ N^∃,
or equivalently, x̂^(i) = x^∃ ⊕ x^(i) for each i ∈ N^∀ and x̂^(i) = x^∃ ⊕ x^(n+1) (the least vector of X̂) for each i ∈ N^∃. It is easy to see that X is an EA-eigenvector if and only if A ⊗ x̂ = x̂ holds for each x̂ ∈ X̂. By Lemma 1, an arbitrary vector x̂ ∈ X̂ can be written as the max–min linear combination x̂ = ⊕_{i∈N} β_i ⊗ x̂^(i). We will then prove that the equality A ⊗ x̂ = x̂ holds for each x̂ ∈ X̂. Thus, we get
A ⊗ x̂ = A ⊗ ⊕_{i∈N} β_i ⊗ x̂^(i) = A ⊗ ( ⊕_{i∈N^∃} β_i ⊗ x̂^(i) ⊕ ⊕_{i∈N^∀} β_i ⊗ x̂^(i) ) =
A ⊗ ( ⊕_{i∈N^∃} β_i ⊗ (x^∃ ⊕ x^(n+1)) ⊕ ⊕_{i∈N^∀} β_i ⊗ x̂^(i) ) = ⊕_{i∈N^∃} β_i ⊗ ( A ⊗ (x^∃ ⊕ x^(n+1)) ) ⊕ ⊕_{i∈N^∀} β_i ⊗ ( A ⊗ x̂^(i) ) =
⊕_{i∈N^∃} β_i ⊗ ( A ⊗ (x^∃ ⊕ x^(n+1)) ) ⊕ ⊕_{i∈N^∀} β_i ⊗ ( A ⊗ (x^∃ ⊕ x^(i)) ) =
⊕_{i∈N^∃} β_i ⊗ (x^∃ ⊕ x^(n+1)) ⊕ ⊕_{i∈N^∀} β_i ⊗ (x^∃ ⊕ x^(i)) =
⊕_{i∈N^∃} β_i ⊗ x̂^(i) ⊕ ⊕_{i∈N^∀} β_i ⊗ x̂^(i) = x̂.
The reverse implication is trivial. □
The next result shows that the conditions in Theorem 1 can be equivalently formulated as the solvability condition of a system of two-sided max–min linear equations. Hence, the interval EA-eigenvectors of a given non-interval matrix can be recognized in polynomial time.
Without loss of generality, suppose that N^∃ = {1, 2, …, k} and N^∀ = {k + 1, k + 2, …, n}; that is,
X^∃ = ([x̲_1, x̄_1], …, [x̲_k, x̄_k], [O, O], …, [O, O])^T
and
X^∀ = ([O, O], …, [O, O], [x̲_{k+1}, x̄_{k+1}], …, [x̲_n, x̄_n])^T.
Define the matrices C, D ∈ B(n(n − k + 1), n + 1) as block matrices with n − k + 1 block rows (one for each j ∈ N^∀ ∪ {n + 1}) as follows:
C =
( A⊗x^(1)  …  A⊗x^(k)  A⊗x^(k+1)  O          …  O )
( A⊗x^(1)  …  A⊗x^(k)  O          A⊗x^(k+2)  …  O )
  ⋮
( A⊗x^(1)  …  A⊗x^(k)  O          O          …  A⊗x^(n+1) )
and
D =
( x^(1)  …  x^(k)  x^(k+1)  O        …  O )
( x^(1)  …  x^(k)  O        x^(k+2)  …  O )
  ⋮
( x^(1)  …  x^(k)  O        O        …  x^(n+1) ).
Theorem 2.
Let A and X = [x̲, x̄] be given. Then, X is an EA-eigenvector of A if and only if the system C ⊗ β = D ⊗ β is solvable with x̲_i ≤ β_i ≤ x̄_i for i ∈ N^∃ and β_j = I for j ∈ N^∀ ∪ {n+1}.
Proof. 
By Lemma 1, if x̲_i ≤ β_i ≤ x̄_i for i ∈ N^∃, then x^∃ = ⊕_{i=1}^{k} β_i ⊗ x^(i) belongs to X^∃; and if x^∃ ∈ X^∃, then we can find x̲_i ≤ β_i ≤ x̄_i for i ∈ N^∃ such that x^∃ = ⊕_{i=1}^{k} β_i ⊗ x^(i).
We also have that the system C ⊗ β = D ⊗ β is solvable with x̲_i ≤ β_i ≤ x̄_i for i ∈ N^∃ and β_j = I for j ∈ N^∀ ∪ {n+1} if and only if the following equivalences hold true:
C ⊗ β = D ⊗ β
⟺ (∀ j ∈ N^∀ ∪ {n+1})  ⊕_{i∈N^∃} (A ⊗ x^(i) ⊗ β_i) ⊕ (A ⊗ x^(j) ⊗ β_j) = ⊕_{i∈N^∃} (x^(i) ⊗ β_i) ⊕ (x^(j) ⊗ β_j)
⟺ (∀ j ∈ N^∀ ∪ {n+1})  A ⊗ ( ⊕_{i∈N^∃} (x^(i) ⊗ β_i) ⊕ x^(j) ) = ⊕_{i∈N^∃} (x^(i) ⊗ β_i) ⊕ x^(j)
⟺ (∃ x^∃ ∈ X^∃)(∀ j ∈ N^∀ ∪ {n+1})  A ⊗ (x^∃ ⊕ x^(j)) = x^∃ ⊕ x^(j),
because β_j = I for j ∈ N^∀ ∪ {n+1}. Thus, by Theorem 1, the assertion follows. □
A polynomial algorithm for solving a general two-sided system C ⊗ β = D ⊗ β of max–min linear equations with C, D ∈ B(r, s) is presented in [17]. This method finds the greatest possible solution β_max of the system. If this candidate solution does not satisfy all of the conditions of the system, then the system is not solvable. In our case, insolvability means that the considered X is not an EA-eigenvector of A. The computational complexity of the proposed algorithm is O(rs · min(r, s)).
Theorem 3.
Suppose that we are given a matrix A and an interval vector X = [x̲, x̄]. The problem of recognizing whether X is an EA-eigenvector of A is then solvable in O(n^4) time.
Proof. 
According to Theorem 2, recognizing an EA-eigenvector of A is equivalent to recognizing whether the system C ⊗ β = D ⊗ β is solvable with x̲_j ≤ β_j ≤ x̄_j for j ∈ N^∃ and β_j = I for j ∈ N^∀ ∪ {n+1}. In the general case, the computation of β_max needs O(rs · min(r, s)) time (see [17]). In our case, we have r = n(n − k + 1) and s = n + 1; therefore, the computation of C ⊗ β = D ⊗ β is done in O(n^2 · n · n) = O(n^4) time. □

3.2. Case Study Application of the Methodology Based on Numerical Example

Consider a data transfer system consisting of n computers and one server S. The computed data from computer c_i, i ∈ N, are sent to S, and the corrected data have to return to c_i. We assume that the connection between c_i and S is only possible via one of n security processors p_j, that the connections between c_i and p_j are one-way connections, and that the capacity of the connection between c_i and p_j, i, j ∈ N, is equal to a_ij. Moreover, suppose that the security processors p_j, j ∈ N, are connected with S by two-way connections with capacities x_j in both directions. The data are transmitted in data packets, and every data packet is transmitted over just one connection as an inseparable unit. Therefore, the total capacity of the connection between i and S is equal to max_{j∈N} { min { a_ij, x_j } }; that is, the capacities of the different connections used are combined by taking their maximum (not their sum).
The transfer from S to i is carried out via other one-way connections between the security processors j ∈ N and i ∈ N, with the capacity between j and i equal to the constant I (the greatest element) if i = j, and equal to O (the least element) otherwise. Since the connections between S and j are two-way connections, the total capacity of the connection between S and i is equal to min { I, x_i } = x_i for every i ∈ N. The goal is to find optimal capacities x_j, j ∈ N, such that the maximal capacity of all connections between i and S via j is equal to the maximal capacity of the connections between S and i on the way back; that is, we have to choose x_j, j ∈ N, in such a way that max_{j∈N} { min { a_ij, x_j } } = x_i for all i ∈ N.
Consider a data transfer system which consists of 4 computers, 4 security processors and one server (see Figure 1). Finding the optimal capacities x_j, j ∈ M = {1, 2, 3, 4}, means solving an eigenproblem for a matrix A. For B = [0, 10] and the matrix A given below, we look for a solution of the equality A ⊗ x = x; in matrix–vector form, we have
A ⊗ x =
( 0 2 2 1 )   ( x_1 )   ( x_1 )
( 1 0 2 2 ) ⊗ ( x_2 ) = ( x_2 )
( 2 1 4 1 )   ( x_3 )   ( x_3 )
( 1 2 1 5 )   ( x_4 )   ( x_4 )
One solution of the set of all solutions describing optimal capacities x j , j N of the data transfer system is vector x = ( 2 , 2 , 4 , 5 ) T .
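A two-line check of this solution (not part of the original paper) in Python:

```python
# Verify that x = (2, 2, 4, 5)^T satisfies A (x) x = x for the data transfer example.
A = [[0, 2, 2, 1],
     [1, 0, 2, 2],
     [2, 1, 4, 1],
     [1, 2, 1, 5]]
x = [2, 2, 4, 5]

Ax = [max(min(a, xj) for a, xj in zip(row, x)) for row in A]
print(Ax, Ax == x)   # [2, 2, 4, 5] True
```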
Assume now that the capacities x_j, j ∈ N, are limited by a lower bound x̲_j and an upper bound x̄_j. Furthermore, we assume that not all of the processed data have the same importance: for some more important types of data, all values of the interval must be taken into account (all capacities of the data transfer system have to be involved in optimal solutions), while for some less important data types it is sufficient that at least one value is considered (some value of these capacities of the data transfer system is optimal). In the above-defined terminology, the optimal solution has to satisfy the definition of the EA-eigenvector of A.
For a numerical illustration of this situation, suppose that the interval vector X has the bounds
x̲ = (2, 2, 3, 2)^T,  x̄ = (4, 5, 4, 3)^T
with N^∃ = {1, 2}, N^∀ = {3, 4} and N^∀ ∪ {n+1} = {3, 4, 5}.
Then the generators of X and their matrix–vector products can be computed as follows:
x^(1) = (4, 2, 0, 0)^T,  x^(2) = (2, 5, 0, 0)^T,  x^(3) = (0, 0, 4, 2)^T,  x^(4) = (0, 0, 3, 3)^T,  x^(5) = (0, 0, 3, 2)^T
and
A ⊗ x^(1) = (2, 1, 2, 2)^T,  A ⊗ x^(2) = (2, 1, 2, 2)^T,  A ⊗ x^(3) = (2, 2, 4, 2)^T,
A ⊗ x^(4) = (2, 2, 3, 3)^T,  A ⊗ x^(5) = (2, 2, 3, 2)^T.
By Theorem 2 we will show that X is an EA-eigenvector. First we construct the matrices C and D, and then we solve the system C ⊗ β = D ⊗ β with x̲_j ≤ β_j ≤ x̄_j for j ∈ N^∃ and β_i = I for i ∈ N^∀ ∪ {n+1}, i.e., the system
C ⊗ β = D ⊗ β,  β = (β_1, β_2, β_3, β_4, β_5)^T,      (3)
where
C =
2 2 2 0 0
1 1 2 0 0
2 2 4 0 0
2 2 2 0 0
2 2 0 2 0
1 1 0 2 0
2 2 0 3 0
2 2 0 3 0
2 2 0 0 2
1 1 0 0 2
2 2 0 0 3
2 2 0 0 2
and
D =
4 2 0 0 0
2 5 0 0 0
0 0 4 0 0
0 0 2 0 0
4 2 0 0 0
2 5 0 0 0
0 0 0 3 0
0 0 0 3 0
4 2 0 0 0
2 5 0 0 0
0 0 0 0 3
0 0 0 0 2
with 2 ≤ β_1 ≤ 4; 2 ≤ β_2 ≤ 5; 10 ≤ β_3 ≤ 10; 10 ≤ β_4 ≤ 10; 10 ≤ β_5 ≤ 10.
To obtain a solution of the system (3), we use Algorithm 1, presented in [17]. For the convenience of the reader, the algorithm is described in its original notation.
Let A, B ∈ B(m, n) be given matrices. Denote
M̂ = { x ∈ B(n); A ⊗ x = B ⊗ x },  I = {1, …, m},  J = {1, …, n},
a_i(x) = max_{j∈J} (a_ij ∧ x_j),  b_i(x) = max_{j∈J} (b_ij ∧ x_j),
M(x̄) = { x; x ∈ M̂ ∧ x ≤ x̄ },
I_<(x̄) = { i ∈ I; a_i(x̄) < b_i(x̄) },  I_=(x̄) = { i ∈ I; a_i(x̄) = b_i(x̄) },
α(x̄) = min { a_i(x̄); i ∈ I_<(x̄) },
I_<(α(x̄)) = { i ∈ I_<(x̄); a_i(x̄) = α(x̄) },  I_=(α(x̄)) = { i ∈ I_=(x̄); a_i(x̄) ≤ α(x̄) },
J(α(x̄)) = { j ∈ J; (∃ i ∈ I_<(α(x̄))) [ b_ij ∧ x̄_j > α(x̄) ] }.
Algorithm 1: Solving a general two-sided system.
  • Input: m, n, x̄.
  • Output: x_max.
  • begin
  •  1  If x̄ ∈ M(x̄), then x_max := x̄, STOP;
  •  2  Change the notation so that a_i(x̄) ≤ b_i(x̄) for all i ∈ I;
  •  3  Compute α(x̄), I_<(α(x̄)), I_=(α(x̄));
  •  4  Set x̃_j := α(x̄) if j ∈ J(α(x̄)), x̃_j := x̄_j otherwise;
  •  5  If x̃ ∈ M(x̄), then x_max := x̃, STOP;
  •  6  Put x̄ := x̃; go to 2;
  • end
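For readers who want to experiment, the following Python sketch (not part of the original paper) transcribes the steps of Algorithm 1 almost literally, with ⊕ = max and ⊗ = ∧ = min; the defensive exit at the end is an addition of this sketch, not of the original algorithm, and correctness in general is guaranteed by [17], not by the sketch.

```python
# Transcription sketch of Algorithm 1 from [17]: find the greatest x <= x_bar
# solving the two-sided (max, min) system A (x) x = B (x) x.

def row_values(M, x):
    """m_i(x) = max_j min(m_ij, x_j) for every row i."""
    return [max(min(m, xj) for m, xj in zip(row, x)) for row in M]

def solve_two_sided(A, B, x_bar):
    A = [row[:] for row in A]          # working copies (step 2 may swap rows)
    B = [row[:] for row in B]
    x = x_bar[:]
    while True:
        a, b = row_values(A, x), row_values(B, x)
        if a == b:                     # steps 1 / 5: x solves the system
            return x
        for i in range(len(A)):        # step 2: rename so that a_i(x) <= b_i(x)
            if a[i] > b[i]:
                A[i], B[i] = B[i], A[i]
                a[i], b[i] = b[i], a[i]
        I_lt = [i for i in range(len(A)) if a[i] < b[i]]
        alpha = min(a[i] for i in I_lt)                          # step 3
        I_lt_alpha = [i for i in I_lt if a[i] == alpha]
        J_alpha = {j for i in I_lt_alpha for j in range(len(x))
                   if min(B[i][j], x[j]) > alpha}
        x_new = [alpha if j in J_alpha else x[j] for j in range(len(x))]   # step 4
        if x_new == x:                 # defensive exit, not in the original listing
            return None
        x = x_new                      # step 6
```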
We now apply the steps of Algorithm 1 to obtain the greatest solution of the system (3), whereby x̄, a_i(x̄), b_i(x̄) are substituted by β̄, c_i(β̄), d_i(β̄), respectively; the computation is summarized in Algorithm 2:
Algorithm 2: Solving a general two-sided system - example.
  • Input: m = 12, n = 5, I = {1, …, 12}, J = {1, …, 5}, β̄ = (4, 5, 10, 10, 10)^T.
  • Output: β_max.
  •  1  β̄ = (4, 5, 10, 10, 10) ∉ M(β̄);
  •  2  (c_1(β̄), …, c_12(β̄)) = (2, 2, 4, 2, 2, 2, 3, 3, 2, 2, 3, 2),
  •     (d_1(β̄), …, d_12(β̄)) = (4, 5, 4, 2, 4, 5, 3, 3, 4, 5, 3, 2);
  •  3  α(β̄) = 2, I_<(α(β̄)) = I_<(2) = {1, 2, 5, 6, 9, 10}, I_=(α(β̄)) = I_=(2) = {4, 12},
  •     J(α(β̄)) = J(2) = {1, 2};
  •  4  β̃ = (2, 2, 10, 10, 10);
  •  5  β̃ = (2, 2, 10, 10, 10) ∈ M(β̄), β_max := β̃, STOP.
The output of Algorithm 2 is the vector β_max = (2, 2, 10, 10, 10) ∈ M(β̄), which is the greatest possible solution of the system (3). It is easy to verify that β_max satisfies all conditions of (3). In other words, β_max is a solution, and we can conclude that X is an EA-eigenvector of A.
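The same computation can be reproduced with the solve_two_sided sketch given after Algorithm 1 (again, this is an illustration added here, not material from the original paper); the matrices below are the C and D of system (3).

```python
# Reproduce beta_max = (2, 2, 10, 10, 10) for system (3); assumes the
# solve_two_sided function from the sketch after Algorithm 1.
C = [[2,2,2,0,0],[1,1,2,0,0],[2,2,4,0,0],[2,2,2,0,0],
     [2,2,0,2,0],[1,1,0,2,0],[2,2,0,3,0],[2,2,0,3,0],
     [2,2,0,0,2],[1,1,0,0,2],[2,2,0,0,3],[2,2,0,0,2]]
D = [[4,2,0,0,0],[2,5,0,0,0],[0,0,4,0,0],[0,0,2,0,0],
     [4,2,0,0,0],[2,5,0,0,0],[0,0,0,3,0],[0,0,0,3,0],
     [4,2,0,0,0],[2,5,0,0,0],[0,0,0,0,3],[0,0,0,0,2]]

beta_max = solve_two_sided(C, D, [4, 5, 10, 10, 10])
print(beta_max)   # [2, 2, 10, 10, 10]
# The constraints 2 <= beta_1 <= 4, 2 <= beta_2 <= 5 and beta_3 = beta_4 = beta_5 = 10
# are satisfied, so X is an EA-eigenvector of A.
```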

4. AE-Eigenvector

As in the previous section, we characterize an interval AE-eigenvector of a given non-interval matrix A with the help of generators. We recall that X_G = { x^(i), x^[i]; i ∈ N^∀ ∪ {n+1} } ⊆ X^∀, where
x_k^(i) = x̄_k^∀ for k = i and x_k^(i) = x̲_k^∀ otherwise,   x_k^[i] = x̲_k^∀ for k = i and x_k^[i] = x̄_k^∀ otherwise,
for i ∈ N^∀, and x^(n+1) := x̲^∀, x^[n+1] := x̄^∀.
Theorem 4.
Let A ∈ B(n, n) and X = [x̲, x̄] be given. Then, X is an AE-eigenvector of A if and only if
(∀ x^∀ ∈ X_G)(∃ x^∃ ∈ X^∃)  A ⊗ (x^∃ ⊕ x^∀) = x^∃ ⊕ x^∀.
Proof. 
Suppose that X is not an AE-eigenvector of A; that is,
(∃ x^∀ ∈ X^∀)(∀ x^∃ ∈ X^∃)  A ⊗ (x^∃ ⊕ x^∀) ≠ x^∃ ⊕ x^∀,
or equivalently,
(∃ x^∀ ∈ X^∀)(∀ x^∃ ∈ X^∃)(∃ i ∈ N)  ( A ⊗ (x^∃ ⊕ x^∀) )_i ≠ ( x^∃ ⊕ x^∀ )_i.
We shall prove that either there is k ∈ N^∀ ∪ {n+1} such that for each x^∃ ∈ X^∃ the inequality A ⊗ (x^∃ ⊕ x^(k)) ≠ x^∃ ⊕ x^(k) holds true, or there is k ∈ N^∀ ∪ {n+1} such that for each x^∃ ∈ X^∃ the inequality A ⊗ (x^∃ ⊕ x^[k]) ≠ x^∃ ⊕ x^[k] is fulfilled.
We will next analyze four cases:
Case (i).
Suppose that i ∈ N^∃ and ( A ⊗ (x^∃ ⊕ x^∀) )_i < ( x^∃ ⊕ x^∀ )_i. We then have
⊕_{j∈N} a_ij ⊗ (x^∃ ⊕ x^∀)_j < (x^∃ ⊕ x^∀)_i
and for x̲^∀ = x^(n+1) we obtain
⊕_{j∈N} a_ij ⊗ (x^∃ ⊕ x̲^∀)_j ≤ ⊕_{j∈N} a_ij ⊗ (x^∃ ⊕ x^∀)_j < (x^∃ ⊕ x^∀)_i = x_i^∃ = (x^∃ ⊕ x̲^∀)_i.
Case (ii).
Suppose that i ∈ N^∃ and ( A ⊗ (x^∃ ⊕ x^∀) )_i > ( x^∃ ⊕ x^∀ )_i. We then have
⊕_{j∈N} a_ij ⊗ (x^∃ ⊕ x^∀)_j = a_ii ⊗ (x^∃ ⊕ x^∀)_i ⊕ ⊕_{j≠i} a_ij ⊗ (x^∃ ⊕ x^∀)_j > (x^∃ ⊕ x^∀)_i
and hence
⊕_{j≠i} a_ij ⊗ (x^∃ ⊕ x^∀)_j > (x^∃ ⊕ x^∀)_i,
because a_ii ⊗ (x^∃ ⊕ x^∀)_i ≤ (x^∃ ⊕ x^∀)_i. Moreover, there exists k ∈ N, k ≠ i, such that
⊕_{j≠i} a_ij ⊗ (x^∃ ⊕ x^∀)_j = a_ik ⊗ (x^∃ ⊕ x^∀)_k > (x^∃ ⊕ x^∀)_i.
We will consider two subcases:
Subcase 1: k ∈ N^∀. Then, for x^(k), we obtain
⊕_{j∈N} a_ij ⊗ (x^∃ ⊕ x^(k))_j ≥ a_ik ⊗ (x^∃ ⊕ x^(k))_k = a_ik ⊗ (x^∃ ⊕ x̄^∀)_k
≥ a_ik ⊗ (x^∃ ⊕ x^∀)_k > (x^∃ ⊕ x^∀)_i = x_i^∃ = (x^∃ ⊕ x^(k))_i.
Subcase 2: k ∈ N^∃. Then, for x^[n+1] = x̄^∀, we obtain
⊕_{j∈N} a_ij ⊗ (x^∃ ⊕ x̄^∀)_j ≥ a_ik ⊗ (x^∃ ⊕ x̄^∀)_k
≥ a_ik ⊗ (x^∃ ⊕ x^∀)_k > (x^∃ ⊕ x^∀)_i = x_i^∃ = (x^∃ ⊕ x̄^∀)_i.
Case (iii).
Suppose that i ∈ N^∀ and ( A ⊗ (x^∃ ⊕ x^∀) )_i < ( x^∃ ⊕ x^∀ )_i. We then have
⊕_{j∈N} a_ij ⊗ (x^∃ ⊕ x^∀)_j < (x^∃ ⊕ x^∀)_i,
a_ii ⊗ (x^∃ ⊕ x^∀)_i ≤ ⊕_{j∈N} a_ij ⊗ (x^∃ ⊕ x^∀)_j < (x^∃ ⊕ x^∀)_i,
and hence it follows that
a_ii ⊗ (x^∃ ⊕ x^∀)_i < (x^∃ ⊕ x^∀)_i  ⟹  a_ii < (x^∃ ⊕ x^∀)_i.
Thus, for x^(i) we obtain
⊕_{j∈N} a_ij ⊗ (x^∃ ⊕ x^(i))_j = a_ii ⊗ (x^∃ ⊕ x̄^∀)_i ⊕ ⊕_{j≠i} a_ij ⊗ (x^∃ ⊕ x̲^∀)_j
≤ a_ii ⊗ (x^∃ ⊕ x̄^∀)_i ⊕ ⊕_{j≠i} a_ij ⊗ (x^∃ ⊕ x^∀)_j < (x^∃ ⊕ x^∀)_i = x_i^∀ ≤ x̄_i = x_i^(i) = (x^∃ ⊕ x^(i))_i.
Case (iv).
Suppose that i ∈ N^∀ and ( A ⊗ (x^∃ ⊕ x^∀) )_i > ( x^∃ ⊕ x^∀ )_i. We then have
⊕_{j∈N} a_ij ⊗ (x^∃ ⊕ x^∀)_j > (x^∃ ⊕ x^∀)_i,
hence there is k ∈ N, k ≠ i, such that a_ik ⊗ (x^∃ ⊕ x^∀)_k > (x^∃ ⊕ x^∀)_i, because a_ii ⊗ (x^∃ ⊕ x^∀)_i ≤ (x^∃ ⊕ x^∀)_i.
We will consider two subcases:
Subcase 1: k ∈ N^∀. Then, for x^(k), we obtain
⊕_{j∈N} a_ij ⊗ (x^∃ ⊕ x^(k))_j ≥ a_ik ⊗ (x^∃ ⊕ x^(k))_k = a_ik ⊗ (x^∃ ⊕ x̄^∀)_k
≥ a_ik ⊗ (x^∃ ⊕ x^∀)_k > (x^∃ ⊕ x^∀)_i = x_i^∀ ≥ x̲_i = x_i^(k) = (x^∃ ⊕ x^(k))_i.
Subcase 2: k ∈ N^∃. Then, for x^[i], we obtain
⊕_{j∈N} a_ij ⊗ (x^∃ ⊕ x^[i])_j ≥ a_ik ⊗ (x^∃ ⊕ x^[i])_k = a_ik ⊗ (x^∃ ⊕ x̄^∀)_k
≥ a_ik ⊗ (x^∃ ⊕ x^∀)_k > (x^∃ ⊕ x^∀)_i = x_i^∀ ≥ x̲_i = x_i^[i] = (x^∃ ⊕ x^[i])_i.
The reverse implication is trivial. □
The last theorem can be rewritten in the following form:
Corollary 1.
Let A ∈ B(n, n) and X = [x̲, x̄] be given. Then, X is an AE-eigenvector of A if and only if
(∀ i ∈ N^∀ ∪ {n+1})(∃ x^∃ ∈ X^∃)  A ⊗ (x^∃ ⊕ x^(i)) = x^∃ ⊕ x^(i),
(∀ i ∈ N^∀ ∪ {n+1})(∃ x^∃ ∈ X^∃)  A ⊗ (x^∃ ⊕ x^[i]) = x^∃ ⊕ x^[i].
The next two theorems show that the conditions in Corollary 1 can be equivalently formulated as the solvability conditions of a finite set of two-sided max–min linear systems. Consequently, the interval AE-eigenvectors of a given non-interval matrix can be recognized in polynomial time.
Without loss of generality, suppose that N = N^∃ ∪ N^∀, where N^∃ = {1, 2, …, k} and N^∀ = {k + 1, k + 2, …, n}; that is,
X^∃ = ([x̲_1, x̄_1], …, [x̲_k, x̄_k], [O, O], …, [O, O])^T
and
X^∀ = ([O, O], …, [O, O], [x̲_{k+1}, x̄_{k+1}], …, [x̲_n, x̄_n])^T.
Define the matrices C^(i), D^(i), E^[i], F^[i] ∈ B(n, k + 1), for i ∈ N^∀ ∪ {n+1}, as follows:
C^(i) = ( A⊗x^(1)  …  A⊗x^(k)  A⊗x^(i) ),   D^(i) = ( x^(1)  …  x^(k)  x^(i) ),
E^[i] = ( A⊗x^(1)  …  A⊗x^(k)  A⊗x^[i] ),   F^[i] = ( x^(1)  …  x^(k)  x^[i] ).
Also, denote β = (β_1, …, β_k, β_{k+1})^T, γ = (γ_1, …, γ_k, γ_{k+1})^T ∈ B(k + 1).
Theorem 5.
Let A ∈ B(n, n), X and i ∈ N^∀ ∪ {n+1} be given. Then
  • (∃ x^∃ ∈ X^∃) A ⊗ (x^∃ ⊕ x^(i)) = x^∃ ⊕ x^(i) if and only if the system C^(i) ⊗ β = D^(i) ⊗ β with x̲_j ≤ β_j ≤ x̄_j for j ∈ N^∃ and β_{k+1} = I is solvable,
  • (∃ x^∃ ∈ X^∃) A ⊗ (x^∃ ⊕ x^[i]) = x^∃ ⊕ x^[i] if and only if the system E^[i] ⊗ γ = F^[i] ⊗ γ with x̲_j ≤ γ_j ≤ x̄_j for j ∈ N^∃ and γ_{k+1} = I is solvable.
Proof. 
By Lemma 1, if x̲_j ≤ β_j ≤ x̄_j for j ∈ N^∃, then x^∃ = ⊕_{j=1}^{k} β_j ⊗ x^(j) belongs to X^∃, and if x^∃ ∈ X^∃, then we can find x̲_j ≤ β_j ≤ x̄_j for j ∈ N^∃ such that x^∃ = ⊕_{j=1}^{k} β_j ⊗ x^(j).
We also have the following equivalences for an arbitrary i ∈ N^∀ ∪ {n+1}:
C^(i) ⊗ β = D^(i) ⊗ β
⟺ ⊕_{j∈N^∃} A ⊗ x^(j) ⊗ β_j ⊕ A ⊗ x^(i) ⊗ β_{k+1} = ⊕_{j∈N^∃} x^(j) ⊗ β_j ⊕ x^(i) ⊗ β_{k+1}
⟺ A ⊗ ( ⊕_{j∈N^∃} x^(j) ⊗ β_j ⊕ x^(i) ) = ⊕_{j∈N^∃} x^(j) ⊗ β_j ⊕ x^(i)
⟺ (∃ x^∃ ∈ X^∃)  A ⊗ (x^∃ ⊕ x^(i)) = x^∃ ⊕ x^(i),
because β_{k+1} = I.
Similarly, we can prove the second part of the theorem. □
Theorem 6.
Suppose that we are given a matrix A ∈ B(n, n) and an interval vector X = [x̲, x̄]. Then, X is an AE-eigenvector of A if and only if, for each i ∈ N^∀ ∪ {n+1}, the systems C^(i) ⊗ β = D^(i) ⊗ β and E^[i] ⊗ γ = F^[i] ⊗ γ are solvable with x̲_j ≤ β_j ≤ x̄_j for j ∈ N^∃, β_{k+1} = I and x̲_j ≤ γ_j ≤ x̄_j for j ∈ N^∃, γ_{k+1} = I, respectively.
Proof. 
The assertion follows from Theorems 4 and 5. □
Theorem 7.
Suppose that we are given a matrix A B ( n , n ) and an interval vector X = [ x ̲ , x ¯ ] . The recognition problem of whether a given interval vector X is an AE-eigenvector of A is then solvable in O ( n 4 ) time.
Proof. 
According to Theorem 6, the recognition problem of whether a given interval vector X is an AE-eigenvector of A is equivalent to recognizing whether, for each i ∈ N^∀ ∪ {n+1}, the systems C^(i) ⊗ β = D^(i) ⊗ β and E^[i] ⊗ γ = F^[i] ⊗ γ are solvable with x̲_j ≤ β_j ≤ x̄_j for j ∈ N^∃ and β_{k+1} = I (γ_{k+1} = I, respectively). The computation for one system A ⊗ y = B ⊗ y needs O(rs · min(r, s)) time (see [17]), where A, B ∈ B(r, s). Therefore, the computation of O(n) such systems is done in O(n) · O(n^3) = O(n^4) time. □

5. Strong Eigenvectors

In this section, we study various eigenvector types for the interval matrix A = [A̲, Ā] and the interval vector X = [x̲, x̄]. The basic type, which is called a strong eigenvector, is related to all matrices in A and all vectors in X. Further types, which are called strong EA-eigenvectors (strong AE-eigenvectors), are related to all matrices A ∈ A and to EA-eigenvectors (AE-eigenvectors) derived from X.
Definition 3.
Let A, X be given. The interval vector X is called a strong eigenvector of A if (∀ A ∈ A)(∀ x ∈ X) A ⊗ x = x.
Theorem 8.
Let A, X be given. Then, X is a strong eigenvector of A if and only if A̲ ⊗ x^(k) = x^(k) and Ā ⊗ x^(k) = x^(k) for all k ∈ N.
Proof. 
Let us assume that A̲ ⊗ x^(k) = x^(k) and Ā ⊗ x^(k) = x^(k) for all k ∈ N. Then, for an arbitrary x ∈ X we get
A̲ ⊗ x = A̲ ⊗ ⊕_{i=1}^{n} β_i ⊗ x^(i) = ⊕_{i=1}^{n} β_i ⊗ (A̲ ⊗ x^(i)) = ⊕_{i=1}^{n} β_i ⊗ x^(i) = x
and
Ā ⊗ x = Ā ⊗ ⊕_{i=1}^{n} β_i ⊗ x^(i) = ⊕_{i=1}^{n} β_i ⊗ (Ā ⊗ x^(i)) = ⊕_{i=1}^{n} β_i ⊗ x^(i) = x.
The assertion follows from the monotonicity of the operations; that is, x = A̲ ⊗ x ≤ A ⊗ x ≤ Ā ⊗ x = x for each A ∈ A. The converse implication is trivial. □
Remark 1.
It is easy to see that the conditions in Theorem 8 can be verified in O ( n 3 ) time.
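A direct Python sketch of this check (added here for illustration, not taken from the paper) runs over the generators x^(k):

```python
# Theorem 8 test: X = [x_lo, x_hi] is a strong eigenvector of A = [A_lo, A_hi]
# iff A_lo (x) x^(k) = x^(k) and A_hi (x) x^(k) = x^(k) for every generator x^(k).

def maxmin_product(A, x):
    return [max(min(a, xj) for a, xj in zip(row, x)) for row in A]

def generators(x_lo, x_hi):
    n = len(x_lo)
    return [[x_hi[k] if k == i else x_lo[k] for k in range(n)] for i in range(n)]

def is_strong_eigenvector(A_lo, A_hi, x_lo, x_hi):
    return all(maxmin_product(A_lo, g) == g and maxmin_product(A_hi, g) == g
               for g in generators(x_lo, x_hi))

# Degenerate illustration: a point interval around the eigenvector of Section 3.2.
A = [[0, 2, 2, 1], [1, 0, 2, 2], [2, 1, 4, 1], [1, 2, 1, 5]]
print(is_strong_eigenvector(A, A, [2, 2, 4, 5], [2, 2, 4, 5]))   # True
```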
Definition 4.
Let A, X be given. Then the interval vector X is called
  • a strong EA-eigenvector of A if
    (∀ A ∈ A)(∃ x^∃ ∈ X^∃)(∀ x^∀ ∈ X^∀)  A ⊗ (x^∃ ⊕ x^∀) = x^∃ ⊕ x^∀,
  • and a strong AE-eigenvector of A if
    (∀ A ∈ A)(∀ x^∀ ∈ X^∀)(∃ x^∃ ∈ X^∃)  A ⊗ (x^∃ ⊕ x^∀) = x^∃ ⊕ x^∀.
Theorem 9.
Let A, X be given. The following conditions are equivalent:
(i)
X is a strong EA-eigenvector of A,
(ii)
(∃ x^∃ ∈ X^∃)(∀ x^∀ ∈ X^∀)  A̲ ⊗ (x^∃ ⊕ x^∀) = x^∃ ⊕ x^∀  ∧  Ā ⊗ (x^∃ ⊕ x^∀) = x^∃ ⊕ x^∀,
(iii)
(∃ x^∃ ∈ X^∃)(∀ x^(k) ∈ X^∀)  A̲ ⊗ (x^∃ ⊕ x^(k)) = x^∃ ⊕ x^(k)  ∧  Ā ⊗ (x^∃ ⊕ x^(k)) = x^∃ ⊕ x^(k).
Proof. 
These assertions follow from Theorems 1 and 8. □
Theorem 10.
Let A, X be given. The following conditions are equivalent:
(i)
X is a strong AE-eigenvector of A,
(ii)
(∀ x^∀ ∈ X^∀)(∃ x^∃ ∈ X^∃)  A̲ ⊗ (x^∃ ⊕ x^∀) = x^∃ ⊕ x^∀  ∧  Ā ⊗ (x^∃ ⊕ x^∀) = x^∃ ⊕ x^∀,
(iii)
(∀ x^∀ ∈ X_G)(∃ x^∃ ∈ X^∃)  A̲ ⊗ (x^∃ ⊕ x^∀) = x^∃ ⊕ x^∀  ∧  Ā ⊗ (x^∃ ⊕ x^∀) = x^∃ ⊕ x^∀.
Proof. 
These assertions follow from Theorems 4 and 8. □
Remark 2.
By Theorems 9 and 10, the verification of whether
(i) 
X is a strong EA-eigenvector,
(ii) 
X is a strong AE-eigenvector,
reduces to finding a vector x^∃ ∈ X^∃ satisfying some linear max–min equations, similarly to Theorems 2 and 6. Hence, the recognition problem for these types of strong eigenvectors is polynomially solvable.

6. EA/AE-Strong Eigenvectors

In the previous sections, we worked with a fixed partition N = {N^∃, N^∀}, with N^∃ ∪ N^∀ = N and N^∃ ∩ N^∀ = ∅. In other words, every index i ∈ N is associated either with the existential or with the universal quantifier. According to the partition N, the interval vector X is presented as a sum of subintervals X^∃ ⊕ X^∀. The interpretation of this partition is that, for technical reasons, the vector entries in the subinterval X^∃ only require the existence of one possible value x_i ∈ [x̲_i, x̄_i], while the entries in the subinterval X^∀ require all possible values x_i ∈ [x̲_i, x̄_i] to be considered.
A similar interpretation can be applied to the matrix entries. Suppose that each interval entry of A is associated either with the universal or with the existential quantifier. We can then split the interval matrix as A = A^∀ ⊕ A^∃, where A^∀ is the interval matrix comprising the universally quantified coefficients and A^∃ concerns the existentially quantified coefficients.
Hence, we work with a partition Ñ = {Ñ^∃, Ñ^∀}, where Ñ^∃ ∪ Ñ^∀ = N × N and Ñ^∃ ∩ Ñ^∀ = ∅. In other words, a̲^∃_ij = ā^∃_ij = O for each pair (i, j) ∈ Ñ^∀ and a̲^∀_ij = ā^∀_ij = O for each (i, j) ∈ Ñ^∃.
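In code, this splitting mirrors the E/A splitting of an interval vector shown earlier; the following sketch (not from the paper, with an illustrative data representation) splits an interval matrix by a set of existentially quantified index pairs.

```python
# Sketch of the A-forall / A-exists splitting of an interval matrix.
# Each entry is a (lower, upper) pair; O = 0 is the least element.

O = 0

def split_interval_matrix(A, pairs_exists):
    """Return (A_exists, A_forall) for the set of E-quantified pairs (i, j)."""
    n = len(A)
    A_e = [[A[i][j] if (i, j) in pairs_exists else (O, O) for j in range(n)]
           for i in range(n)]
    A_a = [[(O, O) if (i, j) in pairs_exists else A[i][j] for j in range(n)]
           for i in range(n)]
    return A_e, A_a

A = [[(1, 2), (0, 3)],
     [(2, 2), (1, 4)]]
A_e, A_a = split_interval_matrix(A, {(0, 1)})
print(A_e)  # [[(0, 0), (0, 3)], [(0, 0), (0, 0)]]
print(A_a)  # [[(1, 2), (0, 0)], [(2, 2), (1, 4)]]
```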
Definition 5.
Let A, X be given. The interval vector X is called
  • an EA-strong eigenvector of A if there is A^∃ ∈ A^∃ such that for any A^∀ ∈ A^∀ the vector X is a strong eigenvector of A^∃ ⊕ A^∀,
  • an AE-strong eigenvector of A if for any A^∀ ∈ A^∀ there is A^∃ ∈ A^∃ such that X is a strong eigenvector of A^∃ ⊕ A^∀.

6.1. EA-Strong Eigenvector

Theorem 11.
Let A, X be given. Then, X is an EA-strong eigenvector of A if and only if
(∃ A^∃ ∈ A^∃)(∀ x ∈ X)  (A^∃ ⊕ A̲^∀) ⊗ x = x  ∧  (A^∃ ⊕ Ā^∀) ⊗ x = x.
Proof. 
Suppose that there is A^∃ ∈ A^∃ such that (A^∃ ⊕ A̲^∀) ⊗ x = x and (A^∃ ⊕ Ā^∀) ⊗ x = x hold for each x ∈ X. By the monotonicity of the operations ⊕ and ⊗, for an arbitrary matrix A^∀ ∈ A^∀ we get
x = (A^∃ ⊕ A̲^∀) ⊗ x ≤ (A^∃ ⊕ A^∀) ⊗ x ≤ (A^∃ ⊕ Ā^∀) ⊗ x = x.
The reverse implication trivially holds. □
Theorem 12.
Let A, X be given. Then, X is an EA-strong eigenvector of A if and only if
(∃ A^∃ ∈ A^∃)(∀ i ∈ N)  [ (A^∃ ⊕ A̲^∀) ⊗ x^(i) = x^(i)  ∧  (A^∃ ⊕ Ā^∀) ⊗ x^(i) = x^(i) ].
Proof. 
By Lemma 1, if x̲_j ≤ β_j ≤ x̄_j for j ∈ N, then x = ⊕_{j=1}^{n} β_j ⊗ x^(j) belongs to X; and if x ∈ X, then we can find x̲_j ≤ β_j ≤ x̄_j for j ∈ N such that x = ⊕_{j=1}^{n} β_j ⊗ x^(j). Then, we have
(A^∃ ⊕ A̲^∀) ⊗ x = (A^∃ ⊕ A̲^∀) ⊗ ⊕_{i∈N} β_i ⊗ x^(i) =
⊕_{i∈N} (A^∃ ⊕ A̲^∀) ⊗ x^(i) ⊗ β_i = ⊕_{i∈N} β_i ⊗ x^(i) = x.
Similarly, we can prove the second equality, and by Theorem 11 the assertion follows. The reverse implication trivially holds. □
The last theorem enables us to check the equivalent conditions of Theorem 12 in practice, whereby (A^∃ ⊕ A̲^∀) ⊗ x^(i) = x^(i) and (A^∃ ⊕ Ā^∀) ⊗ x^(i) = x^(i) are joined into one system of equalities.
Let A and X be given and Ñ^∃ = {(i_1, j_1), …, (i_k, j_k)}. We denote the block matrix Ã ∈ B(2n^2, k + 1) and the vectors x̃ ∈ B(2n^2), α ∈ B(k + 1) as follows:
Ã =
( A^(i_1 j_1)⊗x^(1)  …  A^(i_k j_k)⊗x^(1)  A̲⊗x^(1) )
( A^(i_1 j_1)⊗x^(2)  …  A^(i_k j_k)⊗x^(2)  A̲⊗x^(2) )
  ⋮
( A^(i_1 j_1)⊗x^(n)  …  A^(i_k j_k)⊗x^(n)  A̲⊗x^(n) )
( A^(i_1 j_1)⊗x^(1)  …  A^(i_k j_k)⊗x^(1)  Ā⊗x^(1) )
( A^(i_1 j_1)⊗x^(2)  …  A^(i_k j_k)⊗x^(2)  Ā⊗x^(2) )
  ⋮
( A^(i_1 j_1)⊗x^(n)  …  A^(i_k j_k)⊗x^(n)  Ā⊗x^(n) ),
x̃ = ( x_1^(1), …, x_n^(1), …, x_1^(n), …, x_n^(n), x_1^(1), …, x_n^(1), …, x_1^(n), …, x_n^(n) )^T,
and
α = ( α_{i_1 j_1}, …, α_{i_k j_k}, α_{k+1} )^T,
where α_{k+1} is a variable corresponding to the last column of Ã.
Theorem 13.
Let A, X be given. Then X is an EA-strong eigenvector of A if and only if the system Ã ⊗ α = x̃ has a solution α such that a̲_ij ≤ α_ij ≤ ā_ij for (i, j) ∈ Ñ^∃ and α_{k+1} = I.
Proof. 
The system Ã ⊗ α = x̃ is solvable if and only if there is a vector α such that
⊕_{(i,j)∈Ñ^∃} α_ij ⊗ A^(ij) ⊗ x^(v) ⊕ A̲ ⊗ x^(v) = x^(v),
⊕_{(i,j)∈Ñ^∃} α_ij ⊗ A^(ij) ⊗ x^(v) ⊕ Ā ⊗ x^(v) = x^(v)
for all v ∈ N, with a̲_ij ≤ α_ij ≤ ā_ij for (i, j) ∈ Ñ^∃ and α_{k+1} = I. Put A^∃ = ⊕_{(i,j)∈Ñ^∃} α_ij ⊗ A^(ij), and by Theorem 12 the assertion holds true. □
Theorem 14.
Suppose that we are given an interval matrix A and an interval vector X = [x̲, x̄]. The recognition problem of whether a given interval vector X is an EA-strong eigenvector of A is solvable in O(n^5) time.
Proof. 
According to Theorem 13, the recognition problem of whether a given interval vector X is an EA-strong eigenvector of A is equivalent to recognizing whether the system Ã ⊗ α = x̃ has a solution α with a̲_ij ≤ α_ij ≤ ā_ij and α_{k+1} = I. The computation for a system A ⊗ y = b needs O(rs · min(r, s)) time (see [18]), where A ∈ B(r, s), b ∈ B(r). Therefore, the computation of such a system is done in O(n^2 · n^2 · n) = O(n^5) time. □
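The one-sided systems used here can be decided by the classical principal-solution argument going back to [18]. The sketch below (not from the paper) decides solvability of a plain one-sided (max, min) system A ⊗ y = b; the additional bound constraints on α that appear in Theorem 13 would need separate handling on top of it.

```python
# Solvability of a one-sided (max, min) system A (x) y = b:
# the greatest y with A (x) y <= b has y*_j = min{ b_i : a_ij > b_i } (I if no such i),
# and the system is solvable iff A (x) y* = b.

def principal_solution(A, b, I):
    n = len(A[0])
    return [min([b[i] for i in range(len(A)) if A[i][j] > b[i]], default=I)
            for j in range(n)]

def solve_one_sided(A, b, I):
    y = principal_solution(A, b, I)
    Ay = [max(min(a, yj) for a, yj in zip(row, y)) for row in A]
    return y if Ay == b else None

# Tiny illustration over B = [0, 10]:
A = [[3, 1], [2, 4]]
b = [3, 2]
print(solve_one_sided(A, b, 10))   # [10, 2]
```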

6.2. AE-Strong Eigenvector

Denote A_G^∀ = { A̲^∀, Ā^∀ }.
Theorem 15.
Let A, X be given. Then, X is an AE-strong eigenvector of A if and only if
(∀ A^∀ ∈ A_G^∀)(∃ A^∃ ∈ A^∃)(∀ x ∈ X)  (A^∃ ⊕ A^∀) ⊗ x = x.
Proof. 
Suppose that for A̲^∀ there is B^∃ ∈ A^∃ such that for all x ∈ X the equality (B^∃ ⊕ A̲^∀) ⊗ x = x holds true, and that for Ā^∀ there is C^∃ ∈ A^∃ such that for all x ∈ X the equality (C^∃ ⊕ Ā^∀) ⊗ x = x is fulfilled. Moreover, assume that x ∈ X is arbitrary but fixed. Then, for any i ∈ N there are k, l ∈ N such that
( (A̲^∀ ⊕ B^∃) ⊗ x )_i = ⊕_{j∈N} (a̲^∀_ij ⊕ b^∃_ij) ⊗ x_j = (a̲^∀_ik ⊕ b^∃_ik) ⊗ x_k ≥ (ā^∀_iv ⊕ c^∃_iv) ⊗ x_v,
(a̲^∀_iv ⊕ b^∃_iv) ⊗ x_v ≤ (ā^∀_il ⊕ c^∃_il) ⊗ x_l = ⊕_{j∈N} (ā^∀_ij ⊕ c^∃_ij) ⊗ x_j = ( (Ā^∀ ⊕ C^∃) ⊗ x )_i
holds for any v ∈ N.
We will prove that for an arbitrary but fixed matrix A^∀ ∈ A^∀, there is A^∃ ∈ A^∃ such that (A^∃ ⊕ A^∀) ⊗ x = x.
Put A^∃ := B^∃. Then, there is r ∈ N such that
( (B^∃ ⊕ A^∀) ⊗ x )_i = (a^∀_ir ⊕ b^∃_ir) ⊗ x_r.
Consider two cases.
Case 1. For (i, r) ∈ Ñ^∀, we get
(a^∀_ir ⊕ b^∃_ir) ⊗ x_r = a^∀_ir ⊗ x_r ≤ ā^∀_ir ⊗ x_r = (ā^∀_ir ⊕ c^∃_ir) ⊗ x_r ≤ (a̲^∀_ik ⊕ b^∃_ik) ⊗ x_k = x_i.
Thus, we have
( (A^∀ ⊕ B^∃) ⊗ x )_i = (a^∀_ir ⊕ b^∃_ir) ⊗ x_r ≤ x_i.
The reverse inequality follows from the monotonicity of the operations:
(A^∀ ⊕ B^∃) ⊗ x ≥ (A̲^∀ ⊕ B^∃) ⊗ x = x.      (4)
Case 2. For (i, r) ∈ Ñ^∃, we get
(a^∀_ir ⊕ b^∃_ir) ⊗ x_r = b^∃_ir ⊗ x_r = (a̲^∀_ir ⊕ b^∃_ir) ⊗ x_r ≤ (ā^∀_il ⊕ c^∃_il) ⊗ x_l = x_i.
Because the reverse inequality trivially follows from (4), the equality ( (B^∃ ⊕ A^∀) ⊗ x )_i = x_i is proven.
The reverse implication is trivial. □
Theorem 16.
Let A, X be given. Then, X is an AE-strong eigenvector of A if and only if
(∀ A^∀ ∈ A_G^∀)(∃ A^∃ ∈ A^∃)(∀ k ∈ N)  (A^∃ ⊕ A^∀) ⊗ x^(k) = x^(k).
Proof. 
By Lemma 1, if x̲_k ≤ β_k ≤ x̄_k for k ∈ N, then x = ⊕_{k=1}^{n} β_k ⊗ x^(k) belongs to X; and if x ∈ X, then we can find x̲_k ≤ β_k ≤ x̄_k for k ∈ N such that x = ⊕_{k=1}^{n} β_k ⊗ x^(k). Then, for any A^∀ ∈ A_G^∀ there is A^∃ ∈ A^∃ such that for a fixed x ∈ X we have
(A^∃ ⊕ A^∀) ⊗ x = (A^∃ ⊕ A^∀) ⊗ ⊕_{k∈N} β_k ⊗ x^(k) =
⊕_{k∈N} (A^∃ ⊕ A^∀) ⊗ x^(k) ⊗ β_k = ⊕_{k∈N} x^(k) ⊗ β_k = ⊕_{k∈N} β_k ⊗ x^(k) = x,
and by Theorem 15 the implication follows. The reverse implication trivially holds true. □
Let A and X be given and Ñ^∃ = {(i_1, j_1), …, (i_k, j_k)}. For each A^∀ ∈ A_G^∀ and v ∈ N, we denote the block matrix C(A^∀, v) ∈ B(n, k + 1) and the vector α ∈ B(k + 1) as follows:
C(A^∀, v) = ( A^(i_1 j_1)⊗x^(v)  …  A^(i_k j_k)⊗x^(v)  A^∀⊗x^(v) ),
and
α = ( α_{i_1 j_1}, …, α_{i_k j_k}, α_{k+1} )^T,
where α_{k+1} is a variable corresponding to the last column of C(A^∀, v).
Theorem 17.
Let A, X be given. Then, X is an AE-strong eigenvector of A if and only if, for each A^∀ ∈ A_G^∀ and for each v ∈ N, the system C(A^∀, v) ⊗ α = x^(v) has a solution α such that a̲_ij ≤ α_ij ≤ ā_ij and α_{k+1} = I.
Proof. 
Suppose that v ∈ N and A^∀ ∈ A_G^∀ are fixed. The system C(A^∀, v) ⊗ α = x^(v) is solvable if and only if there is a vector α such that
⊕_{(i,j)∈Ñ^∃} α_ij ⊗ A^(ij) ⊗ x^(v) ⊕ A^∀ ⊗ x^(v) = x^(v)
with a̲_ij ≤ α_ij ≤ ā_ij and α_{k+1} = I. Put A^∃ = ⊕_{(i,j)∈Ñ^∃} α_ij ⊗ A^(ij), and by Theorem 16 the assertion holds true. □
Theorem 18.
Let A, X be given. The recognition problem of whether a given interval vector X is an AE-strong eigenvector of A is solvable in O ( n 5 ) time.
Proof. 
According to Theorem 17, the recognition problem of whether a given interval vector X is an AE-strong eigenvector of A is equivalent to recognizing whether each system C(A^∀, v) ⊗ α = x^(v) has a solution α with a̲_ij ≤ α_ij ≤ ā_ij and α_{k+1} = I. The computation for a system A ⊗ y = b needs O(rs · min(r, s)) time (see [18]), where A ∈ B(r, s), b ∈ B(r). Therefore, the computation of all such systems is done in n · O(n · n^2 · n) = O(n^5) time. □

7. Conclusions

In this paper, we have presented the properties of steady states in max–min discrete event systems. This concept, in connection with inexact entries and their exists/forall quantification, represents an alternative to fuzzy discrete event systems, which use vectors and matrices with entries between 0 and 1 to describe uncertain and vague values. The practical significance of this approach is that some entries of the vector X and the matrix A are taken into account for all values of the interval (corresponding to an A-index), while others are only considered for at least one value (corresponding to an E-index).
The concepts of various types of strong EA/AE-eigenvectors and EA/AE-strong eigenvectors have been studied, and their characterizations by equivalent conditions have been given. All findings have been formally analyzed with the aim of estimating the computational complexity of checking the obtained equivalent conditions. The results have been illustrated by applying the obtained methodology to a numerical example.
The investigation of the AE/EA concepts for steady states of discrete event systems with interval data has brought new efficient equivalent conditions. There is good reason to continue the study of the exists/forall quantification method for tolerable, universal and weak eigenvectors, which are still unexplored and remain open for future research.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Czech Science Foundation (GAČR), grant number 18-01246S.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study, or in the decision to publish the results.

Appendix A

The idea of EA/AE-splitting the interval vector X (the interval matrix A) in the form X^∃ ⊕ X^∀ (A^∃ ⊕ A^∀) can be considered simultaneously, using the partitions N = {N^∃, N^∀} and Ñ = {Ñ^∃, Ñ^∀}. By combining both approaches, the following four notions can be defined.
Definition A1.
Let A, X be given. Then X is called
  • an EA-strong EA-eigenvector of A if there is A^∃ ∈ A^∃ such that for each A^∀ ∈ A^∀ the interval vector X is an EA-eigenvector of A^∃ ⊕ A^∀,
  • an EA-strong AE-eigenvector of A if there is A^∃ ∈ A^∃ such that for each A^∀ ∈ A^∀ the interval vector X is an AE-eigenvector of A^∃ ⊕ A^∀,
  • an AE-strong EA-eigenvector of A if for each A^∀ ∈ A^∀ there is A^∃ ∈ A^∃ such that the interval vector X is an EA-eigenvector of A^∃ ⊕ A^∀,
  • an AE-strong AE-eigenvector of A if for each A^∀ ∈ A^∀ there is A^∃ ∈ A^∃ such that the interval vector X is an AE-eigenvector of A^∃ ⊕ A^∀.
Each of these notions can be characterized in a similar way as in the previous two sections; see Theorems 11 and 12, or Theorems 15 and 16.
For the sake of brevity, only the first notion will be discussed here. The remaining cases are analogous.
Theorem A1.
Let A, X be given. Then, the interval vector X is an EA-strong EA-eigenvector of A if and only if
(∃ A^∃ ∈ A^∃)(∃ x^∃ ∈ X^∃)(∀ x^∀ ∈ X^∀)  [ (A^∃ ⊕ A̲^∀) ⊗ (x^∃ ⊕ x^∀) = x^∃ ⊕ x^∀
∧ (A^∃ ⊕ Ā^∀) ⊗ (x^∃ ⊕ x^∀) = x^∃ ⊕ x^∀ ].
Proof. 
(⇐) The assertion follows from the monotonicity of the operations; that is,
x^∃ ⊕ x^∀ = (A^∃ ⊕ A̲^∀) ⊗ (x^∃ ⊕ x^∀) ≤ (A^∃ ⊕ A^∀) ⊗ (x^∃ ⊕ x^∀)
≤ (A^∃ ⊕ Ā^∀) ⊗ (x^∃ ⊕ x^∀) = x^∃ ⊕ x^∀.
The converse implication is trivial. □
Theorem A2.
Let A, X be given. Then, the interval vector X is an EA-strong EA-eigenvector of A if and only if
(∃ A^∃ ∈ A^∃)(∃ x^∃ ∈ X^∃)(∀ x^(k) ∈ X^∀)  [ (A^∃ ⊕ A̲^∀) ⊗ (x^∃ ⊕ x^(k)) =
x^∃ ⊕ x^(k)  ∧  (A^∃ ⊕ Ā^∀) ⊗ (x^∃ ⊕ x^(k)) = x^∃ ⊕ x^(k) ].
Proof. 
This assertion follows from Theorem 1 and Theorem A1. □
Remark A1.
In view of Theorem A2, the verification of whether or not X is an EA-strong EA-eigenvector of A requires finding a vector x^∃ ∈ X^∃ and a matrix A^∃ ∈ A^∃ satisfying some two-sided max–min quadratic systems. This recognition problem, and the analogous problems for the remaining cases in Definition A1, have not been studied in this paper.

References

  1. Terano, T.; Tsukamoto, Y. Failure diagnosis by using fuzzy logic. In Proceedings of the IEEE Conference on Decision and Control, New Orleans, LA, USA, 7–9 December 1977; pp. 1390–1395.
  2. Zadeh, L.A. Toward a theory of fuzzy systems. In Aspects of Network and System Theory; Kalman, R.E., DeClaris, N., Eds.; Holt, Rinehart and Winston: New York, NY, USA, 1971; pp. 209–245.
  3. Sanchez, E. Resolution of eigen fuzzy sets equations. Fuzzy Sets Syst. 1978, 1, 69–74.
  4. Lin, F.; Ying, H. Fuzzy discrete event systems and their observability. In Proceedings of the Joint 9th IFSA World Congress and 20th NAFIPS International Conference, Vancouver, BC, Canada, 25–28 July 2001; pp. 1271–1277.
  5. Lin, F.; Ying, H. Modeling and control of fuzzy discrete event systems. IEEE Trans. Syst. Man Cybern. Part B Cybern. 2002, 32, 408–415.
  6. Benmessahel, B.; Touahria, M.; Nouioua, F. Predictability of fuzzy discrete event systems. Discrete Event Dyn. Syst. 2017, 27, 641–673.
  7. Butkovič, P.; Schneider, H.; Sergeev, S. Recognizing weakly stable matrices. SIAM J. Control Optim. 2012, 50, 3029–3051.
  8. Butkovič, P. Max-linear Systems: Theory and Algorithms; Springer Monographs in Mathematics; Springer: Berlin, Germany, 2010.
  9. Fiedler, M.; Nedoma, J.; Ramík, J.; Rohn, J.; Zimmermann, K. Linear Optimization Problems with Inexact Data; Springer: Berlin, Germany, 2006.
  10. Rohn, J. Systems of linear interval equations. Linear Algebra Appl. 1989, 126, 39–78.
  11. Molnárová, M.; Myšková, H.; Plavka, J. The robustness of interval fuzzy matrices. Linear Algebra Appl. 2013, 438, 3350–3364.
  12. Myšková, H.; Plavka, J. XAE and XEA robustness of max–min matrices. Discrete Appl. Math. 2019, 267, 142–150.
  13. Myšková, H.; Plavka, J. AE and EA robustness of interval circulant matrices in max–min algebra. Fuzzy Sets Syst. 2020, 384, 91–104.
  14. Plavka, J.; Szabó, P. On the λ-robustness of matrices over fuzzy algebra. Discrete Appl. Math. 2011, 159, 381–388.
  15. Plavka, J. On the weak robustness of fuzzy matrices. Kybernetika 2013, 49, 128–140.
  16. Plavka, J. On the O(n³) algorithm for checking the strong robustness of interval fuzzy matrices. Discrete Appl. Math. 2012, 160, 640–647.
  17. Gavalec, M.; Zimmermann, K. Solving systems of two-sided (max, min)-linear equations. Kybernetika 2010, 46, 405–414.
  18. Zimmermann, K. Extremální Algebra; Ekonomicko-matematická laboratoř Ekonomického ústavu ČSAV: Praha, Czech Republic, 1976.
Figure 1. Data transfer system (application).
