Article

Solvability of a Bounded Parametric System in Max-Łukasiewicz Algebra

Faculty of Informatics and Management, University of Hradec Králové, 50003 Hradec Králové, Czech Republic
*
Author to whom correspondence should be addressed.
Mathematics 2020, 8(6), 1026; https://doi.org/10.3390/math8061026
Submission received: 26 April 2020 / Revised: 14 June 2020 / Accepted: 19 June 2020 / Published: 23 June 2020
(This article belongs to the Special Issue Fuzzy Sets, Fuzzy Logic and Their Applications 2020)

Abstract:
The max-Łukasiewicz algebra describes fuzzy systems working in discrete time which are based on two binary operations: the maximum and the Łukasiewicz triangular norm. The behavior of such a system in time depends on the solvability of the corresponding bounded parametric max-linear system. The aim of this study is to describe an algorithm recognizing for which values of the parameter the given bounded parametric max-linear system has a solution—represented by an appropriate state of the fuzzy system under consideration. Necessary and sufficient conditions of solvability have been found and a polynomial recognition algorithm has been described. The correctness of the algorithm has been verified. The presented polynomial algorithm consists of three parts depending on the entries of the transition matrix and the required state vector. The results are illustrated by numerical examples. The presented results can also be applied in the study of max-Łukasiewicz systems with interval coefficients. Furthermore, the Łukasiewicz arithmetical conjunction can be used in various types of models, for example, in a cash-flow system.

1. Introduction

The max-Łukasiewicz algebra (max-Łuk algebra, for short) is one of the so-called max-T fuzzy algebras, which are defined for various triangular norms T.
A max-T fuzzy algebra works with variables in the unit interval $I = [0, 1]$ and uses the binary operations of maximum and a t-norm T, instead of the conventional operations of addition and multiplication. Formally, a max-T fuzzy algebra is a triple $(I, \oplus, \otimes_T)$, where $I = [0, 1]$, and $\oplus = \max$ and $\otimes_T = T$ are binary operations on I. By $I(m, n)$ and $I(n)$ we denote the sets of all matrices and vectors of the given dimensions over I. The operations $\oplus$ and $\otimes_T$ are extended to matrices and vectors in the standard manner. Similarly, partial orderings on $I(m, n)$ and $I(n)$ are induced by the linear ordering on I.
The triangular norms (t-norms, for short) were introduced in [1], in connection with probabilistic metric spaces. The t-norms interpretations are mainly the conjunction in fuzzy logics and intersection of fuzzy sets. Therefore, they find applications in many domains, for example in decision making processes, game theory and statistics, information and data processing or risk management. The t-norms and t-conorms belong to basic notions in the theory of fuzzy sets. The following four main t-norms: Łukasiewicz, Gödel, product and drastic (and many others) can be found in [2].
The Łukasiewicz norm is often characterized as a logic of absolute or metric comparison.
The Łukasiewicz conjunction is defined by the formula
$x \otimes_L y = \max\{x + y - 1,\, 0\}.$
The Gödel norm is defined as the minimum of the entries (the truth degrees of the constituents). It is the simplest t-norm, and Gödel logic is often characterized as a logic of relative comparison:
$x \otimes_G y = \min(x, y).$
The product norm is defined by the formula
$x \otimes_P y = x \cdot y.$
The drastic triangular norm (the "weakest" norm) is a basic example of a non-divisible t-norm on any partially ordered set. This t-norm is defined by the formula
$x \otimes_D y = \begin{cases} \min(x, y) & \text{if } \max(x, y) = 1, \\ 0 & \text{if } \max(x, y) < 1. \end{cases}$
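For illustration, the four t-norms above can be written as small Python functions (a minimal sketch; the function names are ours):

```python
# Sketch of the four basic t-norms on the unit interval I = [0, 1].
# Each maps a pair (x, y) from I x I back into I.

def t_lukasiewicz(x, y):
    """Łukasiewicz conjunction: max{x + y - 1, 0}."""
    return max(x + y - 1, 0.0)

def t_godel(x, y):
    """Gödel (minimum) t-norm."""
    return min(x, y)

def t_product(x, y):
    """Product t-norm."""
    return x * y

def t_drastic(x, y):
    """Drastic t-norm: min(x, y) if max(x, y) = 1, otherwise 0."""
    return min(x, y) if max(x, y) == 1.0 else 0.0
```

For instance, t_lukasiewicz(0.8, 0.5) yields 0.3 (up to floating-point rounding), while t_godel(0.8, 0.5) yields 0.5; the Łukasiewicz norm returns 0 whenever the arguments sum to at most 1.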
The max-T algebras with the above mentioned t-norms have various applications and their steady states and optimization methods have been intensively studied, see, for example, [3,4,5,6,7]. The algebras with interval entries have been studied in [8,9,10].
In the particular case when T is the Gödel t-norm, we get an important max-min algebra which is useful in solving various problems in fuzzy scheduling and optimization. Max-min algebra belongs to the so-called tropical mathematics, which has many applications and brings a great number of contributions to mathematical theory. Interesting monographs [11,12,13,14] and collections of papers [15,16,17,18,19] come from tropical mathematics and its applications.
Tropical algebras are often used for describing and studying systems working in discrete time stages. The state of the system in stage k is described by the state vector $x(k)$. The transition matrix A then determines the transition of the system to the next stage. In more detail, the next state of the system, $x(k+1)$, is obtained by the multiplication $A \otimes x(k) = x(k+1)$. During the work of the system, it can happen that, after some time, the system reaches a steady state. In algebraic notation, the state vectors of steady states are eigenvectors of the transition matrix with some eigenvalue $\lambda \in I$: $A \otimes x = \lambda \otimes x$.
The eigenproblem in max-min algebra has been frequently investigated, and many interesting results have been found. The structure of the eigenspace has been described and algorithms for computing the largest eigenvector have been suggested, see for example [20,21]. The eigenvectors in a max-T algebra, for various triangular norms T, have applications in fuzzy set theory. Such eigenvectors have been studied in [5,7,22]. The eigenvalues and eigenvectors are important characteristics of the system described by the fuzzy algebra. For the case of the drastic and product t-norms, the structure of the eigenspace has been studied in [5,7]. Finally, [22] describes the case of a Łukasiewicz fuzzy algebra.
The Łukasiewicz arithmetical conjunction has applications in many model situations. The operation subtracts 1 from the sum of its arguments and takes the maximum with zero. This leads to the idea that the result of the operation is the remainder that is over the unit. Thus, the Łukasiewicz conjunction can be used, for example, in describing the backup of data on a computer, the maximal capacity of an oil tank, or a lump payment in finance.
Such applications often lead to systems of max-Łuk linear equations. There is no inverse operation to ⊕ in max-Łuk algebra, therefore the transfer of variables from one side of equation to the other side is not possible. As a consequence, solving the one-sided linear systems (with variables, say, on the left-hand side of the equations) requires an approach different from solving the two-sided systems (with variables on both sides).
The aim of this paper is to present an algorithm for recognizing the solvability of a given one-sided max-Łuk linear system with bounded variables, in dependence on a linear parameter λ on the right-hand side; see (9) and (10) for an exact formulation.
This problem has not yet been studied in the parametrized version. The main contribution of this paper is the description of the recognition algorithm, which plays a crucial role in the investigation of interval eigenvectors. The algorithm for recognizing the solvability of a given one-sided max-Łuk linear system can be briefly summarized in the following steps:
  • permute the equations in the system so that the right-hand side becomes decreasing, that is,
    $0 \le 1-b_1 \le 1-b_2 \le \dots \le 1-b_m \le 1$,
  • recognize the solvability for some λ with $1-b_m < \lambda \le 1$, according to Theorem 3 (case A), by verifying $C \otimes_L y(\lambda_{\max}^m) = \lambda_{\max}^m \otimes_L b$,
  • recognize the solvability for some λ with $0 \le 1-b_h < \lambda \le 1-b_{h+1} \le 1$, according to Theorem 4 (case B), by verifying $C \otimes_L y(\lambda_{\max}^h) = \lambda_{\max}^h \otimes_L b$. This step may be repeated, if necessary, with different indices $h < m$,
  • recognize the solvability for some λ with $0 \le \lambda \le 1-b_1$, according to Theorem 5 (case C), by verifying $\underline{y}_j \le \bigwedge_{i \in M}(1 - c_{ij})$ for every $j \in N$,
  • the system is solvable if the answer is positive at least once in steps 2, 3 or 4; otherwise, the system is unsolvable for any value of λ.
The structure of this paper is as follows. Section 2 contains a case study based on an interactive cash-flow system, which shows the motivation for solving linear systems in max-Łuk algebra. The problem is formulated in Section 3, where preparatory results are also presented. The main results are described in Section 4. Illustrative numerical examples related to the case study from Section 2 are shown in detail in Section 5. A discussion, a comparison of the results with other papers, as well as future developments, are given in the Conclusions.

2. Case Study: Interactive Cash-Flow System

Consider an interactive cash-flow system created by a network of n cooperating banks, B 1 , B 2 , , B n . Assume that the cooperation is performed in stages. During the run of the system, variable interest rates of the banks mutually influence each other. In each stage, every bank B i chooses a cash-flow cooperation with some other bank B j (choice i = j is also possible) in order to achieve the optimal profit, expressed by the value of the interest rate achieved for the next stage.
The system can be modeled as a discrete events system (DES). For any bank $B_i$, the variable $x_i(k)$ shows the interest rate value in the stages $k = 1, 2, \dots$; the vector $x(k) = (x_1(k), x_2(k), \dots, x_n(k))^T$ is called the state vector of the DES in stage k. The change of the next state-vector values during the transition of the DES to the state vector $x(k+1)$ depends on the entries $a_{ij}$ of the so-called transition matrix A.
The possible increase of the profit coming from the cooperation of $B_i$ with bank $B_j$ is equal to $a_{ij}$ (including the lump payment). Thus, the efficient sum of $a_{ij}$ and $x_j(k)$ is only the part exceeding 1 (that is, exceeding 100%), in the case when $B_i$ chooses $B_j$ for cooperation in stage k.
Optimization of the variable interest rate in stage $k+1$ leads every $B_i$ to such a choice of $B_j$ where the efficient increase of the profit is maximal. That is, $x_i(k+1) = \max_{j \in N} \max\{a_{ij} + x_j(k) - 1,\, 0\}$. In max-Łuk notation the optimal choice can be written as
$x_i(k+1) = \bigoplus_{j \in N} a_{ij} \otimes_L x_j(k)$, or
$x(k+1) = A \otimes_L x(k)$. (7)
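A single transition $x(k+1) = A \otimes_L x(k)$ of the DES can be sketched in Python as follows (a minimal illustration with a toy 2×2 matrix; all names are ours):

```python
def luk(x, y):
    """Łukasiewicz conjunction x ⊗_L y = max{x + y - 1, 0}."""
    return max(x + y - 1, 0.0)

def luk_matvec(A, x):
    """Max-Łuk product A ⊗_L x: the next state vector of the DES."""
    return [max(luk(a_ij, x_j) for a_ij, x_j in zip(row, x)) for row in A]

# A toy 2x2 transition matrix and a starting state (hypothetical data).
A = [[0.9, 0.6],
     [0.7, 0.8]]
x0 = [0.5, 0.4]
x1 = luk_matvec(A, x0)   # x(k+1) = A ⊗_L x(k), here ≈ [0.4, 0.2]
```

Each bank's next interest rate is the best profit increase over all partners, clipped at zero when the sum of rate and coefficient does not exceed 1.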
For simplicity we assume that the system is homogeneous (that is, A does not change from stage to stage).
In real life, the matrix and vector entries are not always exact values. For example, if (7) is applied for prediction, then the transition matrix is not exactly known; it is only an estimate, belonging to some interval $\mathbf{A} = [\underline{A}, \overline{A}]$, i.e., $A \in \mathbf{A}$. Analogously, the state vector belongs to some interval $\mathbf{X} = [\underline{x}, \overline{x}]$, i.e., $x \in \mathbf{X}$. We then say that the DES is considered with interval coefficients.
For formulas with interval coefficients, it must be decided which values from the corresponding interval will be taken. One possibility is to take all values (using the universal quantifier). The other possibility is to use the existential quantifier and only require that there is some value from the interval, such that the formula is satisfied.
If there are more interval variables in the formula in consideration, then the quantifiers can be combined. For example, various types of quantified notions in max-min algebra are described in [23,24].
By recurrent application of (7), the sequence of state vectors $x, A \otimes x, \dots, A^k \otimes x, \dots$, where $A^k = A \otimes \dots \otimes A$ (k times), can be created; this sequence is also called the orbit of the DES. The orbit represents a predicted evolution of the interest rates. Two natural questions arise:
  • Q1. Can the orbit reach a fixed given state vector value?
  • Q2. Can the orbit reach a steady state (such a state which does not change from stage to stage)?
Q1 requires us to recognize whether, in some stage k, there is a value $y = x(k)$ such that $b = x(k+1)$ for a given vector $b \in I(n)$. If we consider the problem in interval arithmetic, then we get the state vector variable $y \in [\underline{y}, \overline{y}]$. Moreover, we can generalize the problem by adding a parameter $\lambda \in I$ to the given value b. The original question is then solved as the special subcase with $\lambda = 1$.
Therefore, question Q1 can be solved as the one-sided bounded parametric problem studied in Section 3 and Section 4. The main result is Theorem 6, which describes a necessary and sufficient condition for the solvability of the system (9) and (10).
The computations answering Q1 are illustrated by Example 1 (positive answer) and Example 2 (negative answer) in Section 5, with a detailed interpretation.
Q2 is connected with the eigenproblem of the transition matrix. A steady state is characterized by the equation $x(k+1) = x(k)$ or, equivalently, by $A \otimes_L x = x$. That is, steady states correspond to max-Łuk eigenvectors of the transition matrix A. Usually the eigenvectors are considered in a more general form, with an added so-called eigenvalue $\lambda \in I$. That is, $x \in I(n)$ is an eigenvector of a matrix $A \in I(n, n)$ with eigenvalue $\lambda \in I$ if $A \otimes_L x = \lambda \otimes_L x$. The eigenvectors in max-Łuk algebra have been studied in [3,6] and, recently, in a more general context, in [25].
If we wish to answer Q2 in interval arithmetic, then we have to consider $A \in \mathbf{A} = [\underline{A}, \overline{A}]$ and $x \in \mathbf{X} = [\underline{x}, \overline{x}]$. According to the choice of the universal/existential quantifier for $A \in \mathbf{A}$, and for $x \in \mathbf{X}$, various types of the interval eigenproblem have been studied by various authors over max-plus and max-min algebras.
For example, X is called a strongly tolerable eigenvector of A if
$(\exists \lambda)(\exists A \in \mathbf{A})(\forall x \in \mathbf{X})\ [A \otimes x = \lambda \otimes x].$ (8)
In words, we ask for the existence of λ and $A \in \mathbf{A}$ such that every $x \in \mathbf{X}$ is an eigenvector of A with eigenvalue λ (we briefly say that every $x \in \mathbf{X}$ is tolerated by A).
An analogous problem has been solved in max-min algebra in [23], where it has been shown that the problem can be reduced to the solvability of the system $\tilde{C} \otimes y = \lambda \otimes \tilde{b}$ using generators of the interval matrix $\mathbf{A}$. The main idea of the algorithm is to find a certificate matrix of the given instance, of dimension $n \times n$, as a max-min linear combination of the generators. The necessary coefficients of this linear combination can be computed by solving an auxiliary one-sided max-min linear system of dimension $n^2 \times n^2$.
By analogy, this approach can easily be transferred from max-min to max-Łuk algebra, with the single exception of recognizing the solvability of the auxiliary one-sided linear system of dimension $n^2 \times n^2$. Namely, recognizing the parametric solvability of a one-sided linear system is a substantially more complicated problem in max-Łuk algebra than it is in max-min algebra. In fact, it is in this manuscript that an efficient algorithm for the necessary solvability problem is formulated.
Until now, the specific methods of max-Łuk algebra have only been presented at the EURO 2019 conference in Dublin. The extended version of this presentation is in preparation and will be submitted soon. The recognition method described in this manuscript plays an important role in the proofs of the following two theorems.
Theorem 1
([23]). Let an interval matrix $\mathbf{A} = [\underline{A}, \overline{A}]$ and an interval vector $\mathbf{X} = [\underline{x}, \overline{x}]$ be given. Then $\mathbf{X}$ is a strongly tolerable eigenvector of $\mathbf{A}$ if and only if $\tilde{C} \otimes_L y = \lambda \otimes_L \tilde{b}$ is solvable for some $\lambda \in I$.
Theorem 2
([23]). The recognition problem of whether a given interval vector $\mathbf{X}$ is a strongly tolerable eigenvector of a given interval matrix $\mathbf{A}$ in max-min algebra is solvable in $O(n^5)$ time.

3. Bounded Parametric Systems of max-Łuk Linear Equations

In view of the motivation inspired by the case study in Section 2, the solvability problem for a bounded parametric linear system in max-Łuk algebra is studied in this paper.
We consider the system
$C \otimes_L y = \lambda \otimes_L b$, (9)
$\underline{y} \le y \le \overline{y}$, (10)
with a fixed matrix $C \in I(m, n)$ and the right-hand side vector $b \in I(m)$. The basic question is whether the system is solvable for some value $0 < \lambda \in I$ of the parameter. In other words, we are looking for a necessary and sufficient condition allowing us to recognize whether there is a $\lambda \in I \setminus \{0\}$ such that (9) and (10) is solvable (the case $\lambda = 0$ is trivial).
In the sequel, we use the notation $M = \{1, 2, \dots, m\}$ and $N = \{1, 2, \dots, n\}$. The set of all solutions to (9) without any constraint is denoted by $S(C, \lambda \otimes_L b)$; the solution set with the upper bound is $S(C, \lambda \otimes_L b, \overline{y})$, and the solution set with both upper and lower bounds is denoted by $S(C, \lambda \otimes_L b, \overline{y}, \underline{y})$. That is, we have to recognize whether $S(C, \lambda \otimes_L b, \overline{y}, \underline{y}) \ne \emptyset$ for some $\lambda \in I$ or not.
Without any loss of generality, we assume until the end of the paper that the right-hand side vector $b \in I(m)$ satisfies the monotonicity condition
$1 \ge b_1 \ge b_2 \ge \dots \ge b_m \ge 0$. (11)
Then
$0 \le 1-b_1 \le 1-b_2 \le \dots \le 1-b_m \le 1$. (12)
System (9) is equivalent to
$(\forall i \in M)\quad \bigoplus_{j \in N} c_{ij} \otimes_L y_j = \lambda \otimes_L b_i$, (13)
which is further equivalent to
$(\forall i \in M)(\forall j \in N)\quad [c_{ij} \otimes_L y_j \le \lambda \otimes_L b_i]$, (14)
$(\forall k \in M)(\exists j \in N)\quad [c_{kj} \otimes_L y_j = \lambda \otimes_L b_k]$. (15)
In view of the definition of $\otimes_L$, the inequality in (14) takes one of the following forms:
$0 < c_{ij} + y_j - 1 \le \lambda + b_i - 1$, (16)
$0 \ge c_{ij} + y_j - 1, \quad 0 < \lambda + b_i - 1$, (17)
$0 \ge c_{ij} + y_j - 1, \quad 0 \ge \lambda + b_i - 1$. (18)
We shall use the notation $H(\lambda) = \{\, i \in M;\ 0 < \lambda + b_i - 1 \,\}$ (for short: H, if λ is clear from the context). For $i \in M \setminus H$ we have $0 \ge \lambda + b_i - 1$. Therefore,
$\lambda \otimes_L b_i = \lambda + b_i - 1 \quad \text{for } i \in H$, (19)
$\lambda \otimes_L b_i = 0 \quad \text{for } i \in M \setminus H$. (20)
For brevity, we write $d_{ij} = b_i - c_{ij}$ for every $i \in M$, $j \in N$.
Lemma 1.
If $y \in S(C, \lambda \otimes_L b, \overline{y}, \underline{y})$, then
1. $(\forall j \in N)\ y_j \le \lambda + \bigwedge_{i \in H} d_{ij}$,
2. $(\forall j \in N)\ y_j \le \bigwedge_{i \in M \setminus H}(1 - c_{ij})$,
3. $(\forall j \in N)\ \underline{y}_j \le y_j \le \overline{y}_j$.
Proof. 
Let $j \in N$ be fixed.
(i) For every $i \in H$ we have $0 < \lambda + b_i - 1$, which implies $c_{ij} + y_j - 1 \le \lambda + b_i - 1$, in view of (16) and (17). That is, $y_j \le \lambda + b_i - c_{ij} = \lambda + d_{ij}$. As a consequence, $y_j \le \bigwedge_{i \in H}(\lambda + d_{ij}) = \lambda + \bigwedge_{i \in H} d_{ij}$.
(ii) For $i \in M \setminus H$ we have $0 \ge \lambda + b_i - 1$, which implies $0 \ge c_{ij} + y_j - 1$, by (18). Then $y_j \le 1 - c_{ij}$, that is, $y_j \le \bigwedge_{i \in M \setminus H}(1 - c_{ij})$.
(iii) The assertion follows directly from the definition.  □
If the equality $c_{kj} \otimes_L y_j = \lambda \otimes_L b_k$ in (15) holds, then we say that $y_j$ is active in row k. If so, we write $k \in A_j(\lambda)$, and $A_j = A_j(\lambda)$ is then called the activity set of the variable $y_j$.
There are two possible activity subcases:
$y_j = \lambda + d_{kj} \quad \text{for } k \in H$, (21)
$0 \le y_j \le 1 - c_{kj} \quad \text{for } k \in M \setminus H$. (22)
Namely, if $k \in H$, then $0 < \lambda + b_k - 1 = \lambda \otimes_L b_k$. Then also $c_{kj} \otimes_L y_j > 0$, which gives $c_{kj} + y_j - 1 = \lambda + b_k - 1$. As a consequence, $y_j = \lambda + b_k - c_{kj} = \lambda + d_{kj}$. On the other hand, if $k \in M \setminus H$, then $0 \ge \lambda + b_k - 1$. That is, $\lambda \otimes_L b_k = 0$. Then also $c_{kj} \otimes_L y_j = 0$, which implies $c_{kj} + y_j - 1 \le 0$, and $y_j \le 1 - c_{kj}$.
In subcase (21) with $k \in H$, we have $y_j = \lambda + d_{kj} \le \lambda + \bigwedge_{i \in H} d_{ij}$, by Lemma 1(i). As a consequence,
$d_{kj} = \bigwedge_{i \in H} d_{ij}$. (23)
In subcase (22) with $k \in M \setminus H$, we get, using Lemma 1(ii),
$0 \le y_j \le \bigwedge_{i \in M \setminus H}(1 - c_{ij}) \le 1 - c_{kj}$. (24)
Lemma 2.
Assume $C \in I(m, n)$ and $b \in I(m)$ with the monotonicity condition (11), $y, \overline{y}, \underline{y} \in I(n)$, $\lambda \in I$ and $h \in M$ with $1-b_h < \lambda \le 1-b_{h+1}$. Then $H = \{1, 2, \dots, h\}$ and the following statements are equivalent:
1. $y \in S(C, \lambda \otimes_L b, \overline{y}, \underline{y})$,
2. $y \in S(C_H, \lambda \otimes_L b_H, \overline{y} \wedge \overline{y}^h, \underline{y})$,
where the submatrix $C_H$ (the subvector $b_H$) consists of the rows $C_i$ (the entries $b_i$) with $i \in H$. Analogously, the vector $\overline{y}^h \in I(n)$ with $\overline{y}^h_j = \bigwedge_{i \in M \setminus H}(1 - c_{ij})$ for every $j \in N$ is constructed from the rows $C_i$ of C with $i \in M \setminus H$.
Proof. 
(i) ⇒ (ii). Assume (9)–(11). Then (14) and (15) are satisfied. In particular, considering only the rows $i, k \in H \subseteq M$, we get
$C_H \otimes_L y = \lambda \otimes_L b_H$. (25)
The inequalities $\underline{y} \le y \le \overline{y}$ follow by assumption (i). Moreover, (24) implies $y \le \overline{y}^h$. Summarizing, we have
$y \in S(C_H, \lambda \otimes_L b_H, \overline{y} \wedge \overline{y}^h, \underline{y})$. (26)
(ii) ⇒ (i). Conversely, assume (26). Then $\bigoplus_{j \in N}(c_{ij} \otimes_L y_j) = \lambda \otimes_L b_i$ for every $i \in H$.
Moreover, for $i \in M \setminus H$ and $j \in N$, we have $y_j \le 1 - c_{ij}$, by the assumption $y \le \overline{y}^h$. Therefore, $c_{ij} + y_j - 1 \le 0$, which implies $c_{ij} \otimes_L y_j = 0 = \lambda \otimes_L b_i$, since $i \in M \setminus H$ gives $\lambda + b_i - 1 \le 0$. The inequalities $\underline{y} \le y \le \overline{y}$ follow immediately.  □
For $\lambda \in I$ and for every $i \in M$, $j \in N$, we define
$y_{ij}(\lambda) = \begin{cases} \lambda + d_{ij} & \text{if } i \in H, \\ 1 - c_{ij} & \text{if } i \in M \setminus H. \end{cases}$ (27)
Furthermore, we define $y(\lambda) \in I(n)$ by putting, for $j \in N$,
$y_j(\lambda) = \overline{y}_j \wedge \bigwedge_{i \in H}(\lambda + d_{ij}) \wedge \bigwedge_{i \in M \setminus H}(1 - c_{ij})$. (28)
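For a concrete value of λ, the maximal candidate y(λ) defined above can be computed directly. The following Python sketch (names ours) does so and, using the data of Example 1 from Section 5 with λ = 1, reproduces the candidate computed there:

```python
def y_lambda(C, b, y_hi, lam):
    """Maximal candidate y(lam):
    y_j = y_hi_j ∧ min over rows in H of (lam + d_ij)
               ∧ min over rows outside H of (1 - c_ij),
    where H = {i : lam + b_i - 1 > 0} and d_ij = b_i - c_ij."""
    m, n = len(C), len(C[0])
    y = []
    for j in range(n):
        val = y_hi[j]
        for i in range(m):
            if lam + b[i] - 1 > 0:            # row i belongs to H
                val = min(val, lam + b[i] - C[i][j])
            else:                             # row i belongs to the complement of H
                val = min(val, 1 - C[i][j])
        y.append(val)
    return y

# Data of Example 1 (Section 5), lam = 1:
C = [[0.6, 0.5, 0.5, 0.8, 0.8], [0.5, 0.7, 0.6, 0.5, 0.9],
     [0.3, 0.9, 0.3, 0.3, 0.0], [0.1, 0.7, 0.9, 0.2, 0.9],
     [0.9, 0.2, 0.2, 0.6, 0.8]]
b = [0.7, 0.6, 0.3, 0.1, 0.1]
y_hi = [0.8, 0.6, 0.6, 0.9, 0.5]
y1 = y_lambda(C, b, y_hi, 1.0)    # ≈ (0.2, 0.4, 0.2, 0.5, 0.2)
```

For λ = 1 every row belongs to H, so only the terms λ + d_ij and the upper bound ȳ take effect.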
Lemma 3.
Let $C \in I(m, n)$, $b \in I(m)$, $\lambda \in I$ and $\overline{y} \in I(n)$. Then
1. $(\forall i \in M)(\forall j \in N)\ c_{ij} \otimes_L y_j(\lambda) \le \lambda \otimes_L b_i$,
2. $y(\lambda) \le \overline{y}$,
3. $y(\lambda)$ is the maximal vector in $I(n)$ fulfilling conditions (i) and (ii).
Proof. 
It is easy to verify, using the definition of $\otimes_L$, that $c_{ij} \otimes_L y_{ij}(\lambda) = \lambda \otimes_L b_i$ for every $i \in M$, $j \in N$. Then assertions (i) and (ii) follow from the definition of $y(\lambda)$.
(iii) Assume that $y \in I(n)$ satisfies conditions (i) and (ii) with $y(\lambda)$ replaced by y. Let $i \in M$, $j \in N$. By (i) we have $c_{ij} \otimes_L y_j \le \lambda \otimes_L b_i$. Suppose, by contradiction, that there is a $j \in N$ such that $y_j > y_j(\lambda)$. Then, in view of (28), there is an $i \in M$ such that $y_j > y_{ij}(\lambda)$. We consider two cases.
Case (a): $i \in H$. Then $y_j > \lambda + b_i - c_{ij}$, which implies $c_{ij} + y_j - 1 > \lambda + b_i - 1$. Thus, $c_{ij} \otimes_L y_j > \lambda \otimes_L b_i$, a contradiction.
Case (b): $i \in M \setminus H$. Then $y_j > 1 - c_{ij}$ implies $c_{ij} + y_j - 1 > 0$. Thus, $c_{ij} \otimes_L y_j > 0 = \lambda \otimes_L b_i$, a contradiction.
Since the choice of $j \in N$ was arbitrary, $y \le y(\lambda)$ follows.  □
Lemma 4.
Let $C \in I(m, n)$, $b \in I(m)$, $\lambda \in I$ and $\overline{y} \in I(n)$. Then the following statements are equivalent:
1. $S(C, \lambda \otimes_L b, \overline{y}) \ne \emptyset$,
2. $y(\lambda) \in S(C, \lambda \otimes_L b, \overline{y})$.
If $y \in S(C, \lambda \otimes_L b, \overline{y})$, then $y \le y(\lambda)$.
Proof. 
The assertion of the lemma follows directly from Lemma 3.  □
Lemma 5.
Let $C \in I(m, n)$, $b \in I(m)$, $\lambda \in I$, and let $\underline{y}, \overline{y} \in I(n)$ with $\underline{y} \le \overline{y}$. Then the following statements are equivalent:
1. $S(C, \lambda \otimes_L b, \overline{y}, \underline{y}) \ne \emptyset$,
2. $y(\lambda) \in S(C, \lambda \otimes_L b, \overline{y}, \underline{y})$.
If $y \in S(C, \lambda \otimes_L b, \overline{y}, \underline{y})$, then $y \le y(\lambda)$.
Proof. 
Assume that $y \in S(C, \lambda \otimes_L b, \overline{y}, \underline{y})$. Then, in particular, $y \in S(C, \lambda \otimes_L b, \overline{y})$, which implies $y(\lambda) \in S(C, \lambda \otimes_L b, \overline{y})$ and $y \le y(\lambda)$, in view of Lemma 3. Furthermore, $\underline{y} \le y \le y(\lambda)$ implies $y(\lambda) \in S(C, \lambda \otimes_L b, \overline{y}, \underline{y})$. The converse implication is trivial.  □
Remark 1.
The assertions of Lemma 5 are often expressed by saying that for fixed λ I , y ( λ ) is the maximal possible candidate for a solution of the system (9) and (10).
Remark 2.
By a standard definition, the minimum of the empty subset of I is the maximal element of I. Hence, if there is no $i \in M$ with $\lambda \le 1-b_i$, then $\bigwedge_{i \in M \setminus H}(1 - c_{ij}) = \bigwedge \emptyset = 1$, and
$y_j(\lambda) = \overline{y}_j \wedge \bigwedge_{i \in H}(\lambda + d_{ij})$. (29)
Similarly, if there is no $i \in M$ with $\lambda > 1-b_i$, then $\bigwedge_{i \in H}(\lambda + d_{ij}) = \bigwedge \emptyset = 1$, and
$y_j(\lambda) = \overline{y}_j \wedge \bigwedge_{i \in M \setminus H}(1 - c_{ij})$. (30)

4. Parametric Solvability Problem of max-Łuk Linear Equations

The main result of this paper is the description of a recognition algorithm for the parametric solvability problem. Problems (9) and (10) will be discussed according to the value of
$h(\lambda) = \max\{\, i \in M;\ \lambda > 1-b_i \,\}$. (31)
Similarly to Remark 2, we put $h(\lambda) = \max \emptyset = 0$ if $\lambda \le 1-b_i$ for all $i \in M$. For $j \in N$ we also use the notation
$y_j(\lambda) = \bigwedge_{i \in H}(\lambda + d_{ij}) \wedge \bigwedge_{i \in M \setminus H}(1 - c_{ij})$. (32)
We consider three cases: (A) $h(\lambda) = m$, (B) $0 < h(\lambda) < m$ and (C) $h(\lambda) = 0$. The solvability in case A is described by the following theorem, with the notation
$\lambda_{\max}^m = 1 \wedge \bigwedge_{j \in N}\Bigl(\overline{y}_j - \bigwedge_{i \in M} d_{ij}\Bigr)$. (33)
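The threshold λmax^m can be computed directly from C, b and ȳ. A Python sketch (names ours), checked against Example 1 of Section 5, where λmax^5 = 1:

```python
def lambda_max_m(C, b, y_hi):
    """lambda_max^m = 1 ∧ min over j of ( y_hi_j - min over i of d_ij ),
    with d_ij = b_i - c_ij. C is a list of m rows of length n."""
    m, n = len(C), len(C[0])
    result = 1.0
    for j in range(n):
        d_min = min(b[i] - C[i][j] for i in range(m))
        result = min(result, y_hi[j] - d_min)
    return result

# Example 1 data (Section 5); the unconstrained minimum is 1.2, so the cap at 1 applies.
C = [[0.6, 0.5, 0.5, 0.8, 0.8], [0.5, 0.7, 0.6, 0.5, 0.9],
     [0.3, 0.9, 0.3, 0.3, 0.0], [0.1, 0.7, 0.9, 0.2, 0.9],
     [0.9, 0.2, 0.2, 0.6, 0.8]]
b = [0.7, 0.6, 0.3, 0.1, 0.1]
y_hi = [0.8, 0.6, 0.6, 0.9, 0.5]
lam5 = lambda_max_m(C, b, y_hi)   # = 1.0, matching lambda_max^5 in Example 1
```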
Theorem 3.
Case (A). Assume $C \in I(m, n)$ and $b \in I(m)$ with the monotonicity condition (11). Then the following statements are equivalent:
1. $S(C, \lambda \otimes_L b, \overline{y}, \underline{y}) \ne \emptyset$ for some λ with $1 \ge \lambda > 1-b_m$,
2. $S(C, \lambda_{\max}^m \otimes_L b, \overline{y}, \underline{y}) \ne \emptyset$.
Proof. 
(i) ⇒ (ii) Assume that $\lambda \in I$ is given with $1 \ge \lambda > 1-b_m$ and $S(C, \lambda \otimes_L b, \overline{y}, \underline{y}) \ne \emptyset$.
We have $1 \ge \lambda > 1-b_m \ge 1-b_i$ for every $i \in M$. That is, $H = M$ and $M \setminus H = \emptyset$. In view of Remark 2, for every $j \in N$,
$y_j(\lambda) = \overline{y}_j \wedge \bigwedge_{i \in M}(\lambda + d_{ij})$.
The assumption $S(C, \lambda \otimes_L b, \overline{y}, \underline{y}) \ne \emptyset$ implies $C \otimes_L y(\lambda) = \lambda \otimes_L b$, by Lemma 5. That is, (10), (14) and (15) are satisfied with $y_j = y_j(\lambda)$.
Now we consider $\kappa \in I$ with $\lambda \le \kappa \le 1$. We look for a necessary and sufficient condition such that (10), (14) and (15) hold for $y_j = y_j(\kappa)$.
Since $\lambda \le \kappa$, we have $y_j(\lambda) \le y_j(\kappa)$ for every $j \in N$. Therefore, $\underline{y}_j \le y_j(\lambda)$ implies $\underline{y}_j \le y_j(\kappa)$. That is, $\underline{y} \le y(\kappa)$ for every $\lambda \le \kappa \le 1$. The conditions for the upper bound inequality $y(\kappa) \le \overline{y}$ will be discussed later.
First, we verify conditions (14) and (15). In view of the assumption, we have $1 \ge \kappa \ge \lambda > 1-b_m \ge 1-b_i$ for every $i \in M$. That is, $H = M$ and $M \setminus H = \emptyset$. Then, for every $j \in N$,
$y_j(\lambda) = \bigwedge_{i \in M}(\lambda + d_{ij}) = \lambda + \bigwedge_{i \in M} d_{ij}$.
Similarly, for every $j \in N$,
$y_j(\kappa) = \kappa + \bigwedge_{i \in M} d_{ij}$.
It follows that the equalities
$c_{ij} + y_j(\lambda) - 1 = \lambda + b_i - 1$,
$c_{ij} + y_j(\kappa) - 1 = \kappa + b_i - 1$
are equivalent. Similarly, the inequalities
$c_{ij} + y_j(\lambda) - 1 \le \lambda + b_i - 1$,
$c_{ij} + y_j(\kappa) - 1 \le \kappa + b_i - 1$
are equivalent.
Therefore, the assumption $y(\lambda) \in S(C, \lambda \otimes_L b)$ is equivalent to $y(\kappa) \in S(C, \kappa \otimes_L b)$, for every $\kappa \in I$ with $\lambda \le \kappa \le 1$.
To achieve also the upper bound inequality $y(\kappa) \le \overline{y}$, further conditions must be imposed on κ. Namely, for every $j \in N$ the condition
$y_j(\kappa) = \kappa + \bigwedge_{i \in M} d_{ij} \le \overline{y}_j$
must be added. As a consequence, we get
$\kappa \le \bigwedge_{j \in N}\Bigl(\overline{y}_j - \bigwedge_{i \in M} d_{ij}\Bigr)$.
Now, with the notation
$\lambda_{\max}^m = 1 \wedge \bigwedge_{j \in N}\Bigl(\overline{y}_j - \bigwedge_{i \in M} d_{ij}\Bigr)$,
we have
$y(\kappa) \le \overline{y} \iff \kappa \le \lambda_{\max}^m$.
Therefore, $y(\kappa) \in S(C, \kappa \otimes_L b, \overline{y}, \underline{y})$ for every κ with $\lambda \le \kappa \le \lambda_{\max}^m$; in particular, $y(\lambda_{\max}^m) \in S(C, \lambda_{\max}^m \otimes_L b, \overline{y}, \underline{y})$. The converse implication (ii) ⇒ (i) is trivial.  □
In case B we have $0 < h(\lambda) < m$. That is,
$0 \le 1-b_h < \lambda \le 1-b_{h+1} \le 1$.
Write $\Lambda(h) = (1-b_h,\, 1-b_{h+1}]$, for brevity. In view of (19) and (20), we have $H = \{1, 2, \dots, h\}$ (with $0 < \lambda + b_i - 1$ for $i \in H$) and $M \setminus H = \{h+1, h+2, \dots, m\}$ (with $0 \ge \lambda + b_i - 1$ for $i \in M \setminus H$).
Moreover, we denote
$N(h) = \{\, j \in N;\ (\overline{y}_j \wedge \overline{y}^h_j) - \bigwedge_{i \in H} d_{ij} > 1-b_h \,\}$, (44)
$\lambda_{\max}^h = (1-b_{h+1}) \wedge \bigwedge_{j \in N(h)} \Bigl((\overline{y}_j \wedge \overline{y}^h_j) - \bigwedge_{i \in H} d_{ij}\Bigr)$. (45)
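The case B quantities μ_j^h, N(h) and λmax^h can be computed directly for a fixed h. A Python sketch (names ours), checked against the subcase h = 3 of Example 1 in Section 5:

```python
def case_b_quantities(C, b, y_hi, h):
    """Compute the break-points mu_j^h, the set N(h) and lambda_max^h.
    h is 1-based as in the text; rows 0..h-1 form H, the remaining rows
    form the complement of H. All names are ours."""
    m, n = len(C), len(C[0])
    mu, N_h = [], []
    for j in range(n):
        y_hi_h = min(1 - C[i][j] for i in range(h, m))   # bound from rows outside H
        d_min = min(b[i] - C[i][j] for i in range(h))    # min of d_ij over H
        mu_j = min(y_hi[j], y_hi_h) - d_min              # break-point mu_j^h
        mu.append(mu_j)
        if mu_j > 1 - b[h - 1]:                          # condition mu_j^h > 1 - b_h
            N_h.append(j)
    lam = 1 - b[h]                                       # start from 1 - b_{h+1}
    for j in N_h:
        lam = min(lam, mu[j])
    return mu, N_h, lam

# Example 1 data (Section 5), h = 3:
C = [[0.6, 0.5, 0.5, 0.8, 0.8], [0.5, 0.7, 0.6, 0.5, 0.9],
     [0.3, 0.9, 0.3, 0.3, 0.0], [0.1, 0.7, 0.9, 0.2, 0.9],
     [0.9, 0.2, 0.2, 0.6, 0.8]]
b = [0.7, 0.6, 0.3, 0.1, 0.1]
y_hi = [0.8, 0.6, 0.6, 0.9, 0.5]
mu3, N3, lam3 = case_b_quantities(C, b, y_hi, 3)
# mu3 ≈ (0.1, 0.9, 0.1, 0.5, 0.4), N3 = [1] (i.e., j = 2 in 1-based indexing),
# lam3 ≈ 0.9, as computed in Example 1
```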
Theorem 4.
Case (B). Assume $C \in I(m, n)$, $b \in I(m)$ and $h \in M$. Further assume that the monotonicity condition (11) holds. Then the following statements are equivalent:
1. $S(C, \lambda \otimes_L b, \overline{y}, \underline{y}) \ne \emptyset$ for some $\lambda \in \Lambda(h)$,
2. $S(C_H, \lambda_{\max}^h \otimes_L b_H, \overline{y} \wedge \overline{y}^h, \underline{y}) \ne \emptyset$.
Proof. 
For a fixed $\lambda \in \Lambda(h)$, (i) is equivalent (in view of Lemma 5) to the statement
$y(\lambda) \in S(C, \lambda \otimes_L b, \overline{y}, \underline{y})$, (46)
which is further equivalent (in view of Lemma 2) to
$y(\lambda) \in S(C_H, \lambda \otimes_L b_H, \overline{y} \wedge \overline{y}^h, \underline{y})$. (47)
The proof will be completed by demonstrating that (47) is equivalent to (ii). We assume (47) for a fixed $\lambda \in \Lambda(h)$, and we describe conditions under which (47) also holds for an arbitrary $\kappa \in \Lambda(h)$ with $\lambda \le \kappa$.
We shall verify the restrictions $\underline{y} \le y(\kappa) \le \overline{y} \wedge \overline{y}^h$ and the activity of the variables $y_j(\kappa)$ in every row $k \in H$ of the matrix $C_H$ with the vector $b_H$.
The lower restriction follows by the monotonicity $\underline{y}_j \le y_j(\lambda) \le y_j(\kappa)$, for every $j \in N$. The upper restriction
$y_j(\kappa) = \overline{y}_j \wedge \Bigl(\kappa + \bigwedge_{i \in H} d_{ij}\Bigr) \wedge \bigwedge_{i \in M \setminus H}(1 - c_{ij}) \le \overline{y}_j \wedge \overline{y}^h_j$
follows by Lemma 2 directly from
$y_j(\kappa) = (\overline{y}_j \wedge \overline{y}^h_j) \wedge \Bigl(\kappa + \bigwedge_{i \in H} d_{ij}\Bigr) \le \overline{y}_j \wedge \overline{y}^h_j$.
As a consequence, the activity condition (22) is fulfilled in all rows $k \in M \setminus H$ for every variable $y_j(\kappa)$ with $j \in N$.
To verify also the second activity condition (21), for at least one variable $j \in N$ in every row $k \in H$, we denote by $\mu_j^h$ the break-point at which the (min/plus)-linear function
$y_j(\kappa) = (\overline{y}_j \wedge \overline{y}^h_j) \wedge \Bigl(\kappa + \bigwedge_{i \in H} d_{ij}\Bigr)$
of the variable κ changes its direction. In other words,
$y_j(\kappa) = \begin{cases} \kappa + \bigwedge_{i \in H} d_{ij} & \text{if } \kappa \le \mu_j^h, \\ \overline{y}_j \wedge \overline{y}^h_j & \text{if } \kappa \ge \mu_j^h. \end{cases}$ (50)
By the condition $1-b_i < \kappa$ for $i \in H$, we get $0 < \kappa + b_i - 1 \le \kappa + b_i - c_{ij} = \kappa + d_{ij}$.
At the break-point, both parts of the function (50) have the same value. That is,
$\mu_j^h + \bigwedge_{i \in H} d_{ij} = \overline{y}_j \wedge \overline{y}^h_j$,
or, equivalently,
$\mu_j^h = (\overline{y}_j \wedge \overline{y}^h_j) - \bigwedge_{i \in H} d_{ij}$. (53)
 □
Claim 1.
Assume $j \in N$ and $\lambda \in \Lambda(h)$. If $y_j(\lambda)$ is active in a row $k \in H$, then $1-b_h < \lambda \le \mu_j^h$. Moreover, for every $\kappa \in \Lambda(h)$ with $\kappa \le \mu_j^h$, $y_j(\kappa)$ is active in k.
Proof of Claim 1.
By the assumption, $y_j(\lambda) = \lambda + d_{kj} = \lambda + \bigwedge_{i \in H} d_{ij}$, in view of (21) and (23). Then $\lambda \le \mu_j^h$, and the activity of $y_j(\lambda)$ in row k is described by the formula
$c_{kj} + \lambda + \bigwedge_{i \in H} d_{ij} - 1 = \lambda + b_k - 1$, (54)
while the activity of $y_j(\kappa)$ under the assumption $\kappa \le \mu_j^h$ is described by
$c_{kj} + \kappa + \bigwedge_{i \in H} d_{ij} - 1 = \kappa + b_k - 1$. (55)
As (54) and (55) are equivalent, the assertion of Claim 1 follows.
In view of (44), (45) and (53), we have
$N(h) = \{\, j \in N;\ \mu_j^h > 1-b_h \,\}$,
$\lambda_{\max}^h = (1-b_{h+1}) \wedge \bigwedge_{j \in N(h)} \mu_j^h$.
 □
Claim 2.
Assume $y(\lambda) \in S(C_H, \lambda \otimes_L b_H, \overline{y} \wedge \overline{y}^h, \underline{y})$. If $\lambda \le \kappa \in \Lambda(h)$, with $\kappa \le \mu_j^h$ for every $j \in N(h)$, then $y(\kappa) \in S(C_H, \kappa \otimes_L b_H, \overline{y} \wedge \overline{y}^h, \underline{y})$; in particular, this holds for $\kappa = \lambda_{\max}^h$.
Proof of Claim 2.
By the assumption, for every $k \in H$ there is a $j \in N$ such that $y_j(\lambda)$ is active in k. Then, by Claim 1, under the assumption $\lambda \le \kappa \le \bigwedge_{j \in N(h)} \mu_j^h$, for every $k \in H$ there is a $j \in N(h)$ such that $y_j(\kappa)$ is active in k. That is, $y(\kappa) \in S(C_H, \kappa \otimes_L b_H, \overline{y} \wedge \overline{y}^h, \underline{y})$.  □
In case C we have $H = \emptyset$ and $M \setminus H = M$. That is, $\lambda \le 1-b_i$ for all $i \in M$. The solvability in case C is described by the following theorem.
Theorem 5.
Case (C). Assume $C \in I(m, n)$, $b \in I(m)$ and $\underline{y} \le \overline{y} \in I(n)$, with the monotonicity condition (11). Then the following statements are equivalent:
1. $S(C, \lambda \otimes_L b, \overline{y}, \underline{y}) \ne \emptyset$ for some λ with $0 \le \lambda \le 1-b_1$,
2. $\underline{y}_j \le \bigwedge_{i \in M}(1 - c_{ij})$, for every $j \in N$.
Proof. 
Assume $0 \le \lambda \le 1-b_1$ and $y \in S(C, \lambda \otimes_L b, \overline{y}, \underline{y})$. By Lemma 5(ii), this is equivalent to $y(\lambda) \in S(C, \lambda \otimes_L b, \overline{y}, \underline{y})$. For every $j \in N$ we have, in view of Remark 2,
$\underline{y}_j \le y_j(\lambda) = \overline{y}_j \wedge \bigwedge_{i \in M}(1 - c_{ij})$.
The equivalence (i) ⇔ (ii) follows immediately.  □
Theorem 6.
Assume $C \in I(m, n)$, $b \in I(m)$ and $\underline{y} \le \overline{y} \in I(n)$, with the monotonicity condition (11). The bounded parametric system (9) and (10) is solvable for some $\lambda \in I$ if and only if at least one of the following statements is fulfilled:
1. $h(\lambda) = m$ and $S(C, \lambda_{\max}^m \otimes_L b, \overline{y}, \underline{y}) \ne \emptyset$,
2. $0 < h(\lambda) < m$, $\lambda \in \Lambda(h)$ and $S(C_H, \lambda_{\max}^h \otimes_L b_H, \overline{y} \wedge \overline{y}^h, \underline{y}) \ne \emptyset$,
3. $h(\lambda) = 0$ and $\underline{y}_j \le \bigwedge_{i \in M}(1 - c_{ij})$ for every $j \in N$.
Proof. 
For the convenience of the reader, we recall the previous definitions:
$h(\lambda) = \max\{\, i \in M;\ \lambda > 1-b_i \,\}$, (59)
$\Lambda(h) = (1-b_h,\, 1-b_{h+1}]$, (60)
$N(h) = \{\, j \in N;\ (\overline{y}_j \wedge \overline{y}^h_j) - \bigwedge_{i \in H} d_{ij} > 1-b_h \,\}$, (61)
$\lambda_{\max}^m = 1 \wedge \bigwedge_{j \in N}\Bigl(\overline{y}_j - \bigwedge_{i \in M} d_{ij}\Bigr)$, (62)
$\lambda_{\max}^h = (1-b_{h+1}) \wedge \bigwedge_{j \in N(h)}\Bigl((\overline{y}_j \wedge \overline{y}^h_j) - \bigwedge_{i \in H} d_{ij}\Bigr)$. (63)
Assume that the system (9) and (10) is solvable for some $\lambda \in I$. Clearly, one of the following possibilities takes place: (a) $h(\lambda) = m$, (b) $0 < h(\lambda) < m$, $\lambda \in \Lambda(h)$, (c) $h(\lambda) = 0$. The assertion of the theorem then follows from Theorems 3–5.  □
Theorem 7.
Suppose that $C \in I(m, n)$, $b \in I(m)$ and $\underline{y}, \overline{y} \in I(n)$. The problem of recognizing the solvability of the bounded parametric max-Łuk linear system
$C \otimes_L y = \lambda \otimes_L b$ (64)
with bounds $\underline{y} \le y \le \overline{y}$ for some value of the parameter $\lambda \in I$ can be solved in $O(mn^2)$ time.
Proof. 
In view of Theorem 6, the solvability of the bounded max-Łuk linear system (64) for some $\lambda \in I$ can be verified by checking the solvability of (64) for the values $\lambda_{\max}^1, \lambda_{\max}^2, \dots, \lambda_{\max}^m$ and by verifying the condition $\underline{y}_j \le \bigwedge_{i \in M}(1 - c_{ij})$ for every $j \in N$.
For every $h = 1, 2, \dots, m$, $\lambda_{\max}^h$ can be computed in $O(n^2)$ time and the computation of $y(\lambda_{\max}^h)$ requires $O(n)$ time. The verification of $y(\lambda_{\max}^h) \in S(C_H, \lambda_{\max}^h \otimes_L b_H, \overline{y}, \underline{y})$ needs $O(n)$ time, while condition (ii) in Theorem 5 can be verified in $O(mn)$ time. Thus, the total computational complexity is $O(mn^2)$.  □
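The three cases of Theorem 6 can be combined into one recognition routine. The following Python sketch is our illustrative transcription (not the authors' implementation); it uses a small tolerance EPS for floating-point comparisons and returns a parameter value λ for which the bounded system is solvable, or None:

```python
# Our sketch of the recognition procedure of Theorem 6. All names are ours.
EPS = 1e-9

def luk(x, y):
    """Łukasiewicz conjunction max{x + y - 1, 0}."""
    return max(x + y - 1, 0.0)

def luk_matvec(C, y):
    """Max-Łuk matrix-vector product C ⊗_L y."""
    return [max(luk(c, yj) for c, yj in zip(row, y)) for row in C]

def candidate(C, b, y_hi, lam):
    """Maximal candidate y(lam), cf. the definition of y(λ) in Section 3."""
    m, n = len(C), len(C[0])
    y = []
    for j in range(n):
        val = y_hi[j]
        for i in range(m):
            if lam + b[i] - 1 > EPS:          # row i belongs to H
                val = min(val, lam + b[i] - C[i][j])
            else:                             # row i belongs to M \ H
                val = min(val, 1 - C[i][j])
        y.append(val)
    return y

def solves(C, b, y_lo, y_hi, lam):
    """Check whether y(lam) solves C ⊗_L y = lam ⊗_L b within the bounds."""
    y = candidate(C, b, y_hi, lam)
    if any(yj < lo - EPS for yj, lo in zip(y, y_lo)):
        return False
    lhs, rhs = luk_matvec(C, y), [luk(lam, bi) for bi in b]
    return all(abs(l - r) <= EPS for l, r in zip(lhs, rhs))

def recognize(C, b, y_lo, y_hi):
    """Try cases A, B, C of Theorem 6; return a suitable lam, or None."""
    m, n = len(C), len(C[0])
    # Case A: lam_max^m = 1 ∧ min_j (y_hi_j - min_i d_ij), d_ij = b_i - c_ij
    lam_a = min([1.0] + [y_hi[j] - min(b[i] - C[i][j] for i in range(m))
                         for j in range(n)])
    if lam_a > 1 - b[m - 1] + EPS and solves(C, b, y_lo, y_hi, lam_a):
        return lam_a
    # Case B: for each h with 0 < h < m, try lam_max^h (rows 0..h-1 form H)
    for h in range(1, m):
        if 1 - b[h - 1] >= 1 - b[h] - EPS:    # interval Lambda(h) is empty
            continue
        lam_b = 1 - b[h]
        for j in range(n):
            cap = min([y_hi[j]] + [1 - C[i][j] for i in range(h, m)])
            mu_j = cap - min(b[i] - C[i][j] for i in range(h))  # break-point
            if mu_j > 1 - b[h - 1] + EPS:     # j belongs to N(h)
                lam_b = min(lam_b, mu_j)
        if lam_b > 1 - b[h - 1] + EPS and solves(C, b, y_lo, y_hi, lam_b):
            return lam_b
    # Case C: some lam with 0 <= lam <= 1 - b_1 works iff y_lo_j <= min_i(1 - c_ij)
    if all(lo <= min(1 - C[i][j] for i in range(m)) + EPS
           for j, lo in enumerate(y_lo)):
        return 1 - b[0]
    return None

# Data of Example 1 (Section 5):
C = [[0.6, 0.5, 0.5, 0.8, 0.8], [0.5, 0.7, 0.6, 0.5, 0.9],
     [0.3, 0.9, 0.3, 0.3, 0.0], [0.1, 0.7, 0.9, 0.2, 0.9],
     [0.9, 0.2, 0.2, 0.6, 0.8]]
b = [0.7, 0.6, 0.3, 0.1, 0.1]
y_lo, y_hi = [0.1, 0.0, 0.1, 0.3, 0.1], [0.8, 0.6, 0.6, 0.9, 0.5]
lam = recognize(C, b, y_lo, y_hi)   # ≈ 0.4, as found in Example 1
```

The routine tries only the finitely many candidate values λmax^m, λmax^h and the case C condition, in line with the proof of Theorem 7.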

5. Numerical Examples

Example 1
(A numerical illustration to Q1—solvable case). Assume that the transition matrix C and the required state vector b are given. Our goal is to recognize whether there are $0 < \lambda \in I$ and $y \in I(n)$ with $\underline{y} \le y \le \overline{y}$ such that $C \otimes_L y = \lambda \otimes_L b$. In other words, we ask whether the system (9) and (10) with the entries (65) is solvable for some $0 < \lambda \in I$.
$C = \begin{pmatrix} 0.6 & 0.5 & 0.5 & 0.8 & 0.8 \\ 0.5 & 0.7 & 0.6 & 0.5 & 0.9 \\ 0.3 & 0.9 & 0.3 & 0.3 & 0.0 \\ 0.1 & 0.7 & 0.9 & 0.2 & 0.9 \\ 0.9 & 0.2 & 0.2 & 0.6 & 0.8 \end{pmatrix}, \quad b = \begin{pmatrix} 0.7 \\ 0.6 \\ 0.3 \\ 0.1 \\ 0.1 \end{pmatrix}, \quad \underline{y} = \begin{pmatrix} 0.1 \\ 0.0 \\ 0.1 \\ 0.3 \\ 0.1 \end{pmatrix}, \quad \overline{y} = \begin{pmatrix} 0.8 \\ 0.6 \\ 0.6 \\ 0.9 \\ 0.5 \end{pmatrix}. \quad (65)$
Applying Theorem 6, we get a positive answer. Namely, the system (9) and (10) is solvable for $\lambda \in (0.3, 0.4]$ with the solution $y = (0.1, 0.1, 0.1, 0.3, 0.1)^T$, and it has no solution for $\lambda \in (0, 0.3] \cup (0.4, 1]$. Therefore, the orbit of the DES considered in the case study in Section 2 can reach the state $\lambda \otimes_L b$ for every $0.3 < \lambda \le 0.4$, if the starting state is y. On the other hand, the DES cannot reach the state $b = 1 \otimes_L b$, nor can it reach any state $\lambda \otimes_L b$ with $\lambda \le 0.3$ or $\lambda > 0.4$.
Details of the computation are shown below. We use the method described in Theorem 6. By Definition (59), we get five different values of $h(\lambda)$ and distinguish the following cases: (a) $h(\lambda) = 5$, (b) $h(\lambda) = 1, 2, 3$ and (c) $h(\lambda) = 0$.
Case (a). We have $H = \{1, 2, 3, 4, 5\}$, $M \setminus H = \emptyset$ and $\lambda \in (0.9, 1]$. By (62), we get $\lambda_{\max}^{5} = 1$. Using (32), we compute the maximal possible candidate for a solution: $y(\lambda_{\max}^{5}) = y(1) = (0.2, 0.4, 0.2, 0.5, 0.2)^T$. Clearly, $\underline{y} \le y(1)$. It remains to see whether $y(1)$ is a solution to (9).
$$C \otimes_L y(1) = \begin{pmatrix} 0.6 & 0.5 & 0.5 & 0.8 & 0.8 \\ 0.5 & 0.7 & 0.6 & 0.5 & 0.9 \\ 0.3 & 0.9 & 0.3 & 0.3 & 0.0 \\ 0.1 & 0.7 & 0.9 & 0.2 & 0.9 \\ 0.9 & 0.2 & 0.2 & 0.6 & 0.8 \end{pmatrix} \otimes_L \begin{pmatrix} 0.2 \\ 0.4 \\ 0.2 \\ 0.5 \\ 0.2 \end{pmatrix} = \begin{pmatrix} 0.3 \\ 0.1 \\ 0.3 \\ 0.1 \\ 0.1 \end{pmatrix} \ne \begin{pmatrix} 0.7 \\ 0.6 \\ 0.3 \\ 0.1 \\ 0.1 \end{pmatrix} = 1 \otimes_L \begin{pmatrix} 0.7 \\ 0.6 \\ 0.3 \\ 0.1 \\ 0.1 \end{pmatrix} = \lambda_{\max}^{5} \otimes_L b$$
In view of Lemma 5, $C \otimes_L y(1) \ne \lambda_{\max}^{5} \otimes_L b$ implies that the system has no solution when $0.9 < \lambda \le 1$.
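The computation in Case (a) can be reproduced numerically. The sketch below assumes the Łukasiewicz t-norm $x \otimes_L y = \max(0, x + y - 1)$ and rounds to one decimal place to absorb floating-point noise:

```python
def luk(x, y):
    """Lukasiewicz t-norm: max(0, x + y - 1)."""
    return max(0.0, x + y - 1.0)

C = [[0.6, 0.5, 0.5, 0.8, 0.8],
     [0.5, 0.7, 0.6, 0.5, 0.9],
     [0.3, 0.9, 0.3, 0.3, 0.0],
     [0.1, 0.7, 0.9, 0.2, 0.9],
     [0.9, 0.2, 0.2, 0.6, 0.8]]
b = [0.7, 0.6, 0.3, 0.1, 0.1]
y1 = [0.2, 0.4, 0.2, 0.5, 0.2]   # candidate y(1)

# (C (x)_L y)_i = max_j luk(c_ij, y_j)
lhs = [round(max(luk(c, yj) for c, yj in zip(row, y1)), 1) for row in C]
rhs = [round(luk(1.0, bi), 1) for bi in b]   # 1 (x)_L b = b
print(lhs)          # [0.3, 0.1, 0.3, 0.1, 0.1]
print(lhs == rhs)   # False: the equality fails for lambda = 1
```

The first three components of the left-hand side fall short of $b$, which is exactly the failure used in the text.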
Case (b). Three subcases are considered.
For $h(\lambda) = 3$, we have $H = \{1, 2, 3\}$, $M \setminus H = \{4, 5\}$ and $\Lambda(3) = (0.7, 0.9]$. Using (53) and (61), we compute $\mu^3 = (0.1, 0.9, 0.1, 0.5, 0.4)^T$ and $N(3) = \{2\}$. Then $\lambda_{\max}^{3} = 0.9 \wedge 0.9 = 0.9 \in \Lambda(3)$, in view of (63). Applying this value we get $y(0.9) = (0.1, 0.3, 0.1, 0.4, 0.1)^T$, in view of (28). The candidate $y(0.9)$ fulfills $\underline{y} \le y(0.9)$, but it is not a solution to the system, because $C \otimes_L y(0.9) \ne \lambda_{\max}^{3} \otimes_L b$. Therefore, the system has no solution when $0.7 < \lambda \le 0.9$.
For $h(\lambda) = 2$, we have $H = \{1, 2\}$, $M \setminus H = \{3, 4, 5\}$ and $\Lambda(2) = (0.4, 0.7]$. Similarly to the previous subcase, we compute $\mu^2 = (0, 0.2, 0.1, 0.5, 0.4)^T$ and $N(2) = \{4\}$. Then $\lambda_{\max}^{2} = 0.7 \wedge 0.5 = 0.5 \in \Lambda(2)$. The obtained candidate $y(0.5) = (0.1, 0.1, 0.1, 0.4, 0.1)^T$ fulfills $\underline{y} \le y(0.5)$, but again, $y(0.5)$ is not a solution to the system, which means that there are no solutions when $0.4 < \lambda \le 0.7$.
For $h(\lambda) = 1$, we have $H = \{1\}$, $M \setminus H = \{2, 3, 4, 5\}$ and $\Lambda(1) = (0.3, 0.4]$. Similarly to the previous subcases, $\mu^1 = (0, 0.1, 0.1, 0.5, 0.2)^T$ and $N(1) = \{4\}$, and so $\lambda_{\max}^{1} = 0.4 \wedge 0.5 = 0.4 \in \Lambda(1)$. For $\lambda_{\max}^{1} = 0.4$ we compute $y(0.4) = (0.1, 0.1, 0.1, 0.3, 0.1)^T$. This candidate fulfills $\underline{y} \le y(0.4)$ and is also a solution to the system, because of the equality $C \otimes_L y(0.4) = 0.4 \otimes_L b$. It follows that the system (9) and (10) considered in this example is solvable when $\lambda = 0.4$.
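The successful subcase can be checked the same way. The sketch below (again assuming the Łukasiewicz t-norm $x \otimes_L y = \max(0, x + y - 1)$ and rounding to one decimal place) confirms the equality $C \otimes_L y(0.4) = 0.4 \otimes_L b$:

```python
def luk(x, y):
    """Lukasiewicz t-norm: max(0, x + y - 1)."""
    return max(0.0, x + y - 1.0)

C = [[0.6, 0.5, 0.5, 0.8, 0.8],
     [0.5, 0.7, 0.6, 0.5, 0.9],
     [0.3, 0.9, 0.3, 0.3, 0.0],
     [0.1, 0.7, 0.9, 0.2, 0.9],
     [0.9, 0.2, 0.2, 0.6, 0.8]]
b = [0.7, 0.6, 0.3, 0.1, 0.1]
lam = 0.4
y = [0.1, 0.1, 0.1, 0.3, 0.1]    # candidate y(0.4)

lhs = [round(max(luk(c, yj) for c, yj in zip(row, y)), 1) for row in C]
rhs = [round(luk(lam, bi), 1) for bi in b]   # 0.4 (x)_L b
print(lhs)          # [0.1, 0.0, 0.0, 0.0, 0.0]
print(lhs == rhs)   # True: y(0.4) solves the system
```

Note how strongly the Łukasiewicz t-norm truncates small values: all components of $0.4 \otimes_L b$ except the first vanish, and the candidate matches them exactly.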
Case (c). In this case we have $h(\lambda) = 0$, i.e., $H = \emptyset$, $M \setminus H = M$ and $\lambda \in (0, 0.3]$. The maximal candidate $y(\lambda) = (0.1, 0.1, 0.1, 0.2, 0.1)^T$ satisfies $C \otimes_L y(\lambda) = \lambda \otimes_L b$, but the requirement $\underline{y} \le y(\lambda)$ is not fulfilled. As a consequence, the considered system is not solvable when $0 < \lambda \le 0.3$.
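In Case (c) the right-hand side $\lambda \otimes_L b$ is the zero vector, so the greatest $y$ with $C \otimes_L y = 0$ has components $y_j = \bigwedge_i (1 - c_{ij})$ under the assumed t-norm $x \otimes_L y = \max(0, x + y - 1)$. A short sketch (rounded to one decimal place) reproduces the maximal candidate and the violated lower bound:

```python
C = [[0.6, 0.5, 0.5, 0.8, 0.8],
     [0.5, 0.7, 0.6, 0.5, 0.9],
     [0.3, 0.9, 0.3, 0.3, 0.0],
     [0.1, 0.7, 0.9, 0.2, 0.9],
     [0.9, 0.2, 0.2, 0.6, 0.8]]
y_lo = [0.1, 0.0, 0.1, 0.3, 0.1]

# greatest y with C (x)_L y = 0: y_j = min_i (1 - c_ij)
y_max = [round(min(1.0 - row[j] for row in C), 1) for j in range(5)]
print(y_max)                                         # [0.1, 0.1, 0.1, 0.2, 0.1]
print(all(lo <= yj for lo, yj in zip(y_lo, y_max)))  # False: y_lo[3] = 0.3 > 0.2
```

The fourth component of the maximal candidate stays below the prescribed lower bound, which is exactly why the system is unsolvable on $(0, 0.3]$.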
Example 2
(A numerical illustration to Q1—unsolvable case). Similarly to Example 1, the transition matrix $C$ and the required state vector $b$ are given. Again, we wish to recognize whether there exist $\lambda \in I$, $\lambda > 0$, and $y \in I(n)$ with $\underline{y} \le y \le \overline{y}$ such that $C \otimes_L y = \lambda \otimes_L b$. In this example, the matrix $C$ is the same; only the vectors $b$, $\underline{y}$ and $\overline{y}$ have different entries.
$$C = \begin{pmatrix} 0.6 & 0.5 & 0.5 & 0.8 & 0.8 \\ 0.5 & 0.7 & 0.6 & 0.5 & 0.9 \\ 0.3 & 0.9 & 0.3 & 0.3 & 0.0 \\ 0.1 & 0.7 & 0.9 & 0.2 & 0.9 \\ 0.9 & 0.2 & 0.2 & 0.6 & 0.8 \end{pmatrix}, \quad b = \begin{pmatrix} 0.8 \\ 0.8 \\ 0.8 \\ 0.5 \\ 0.5 \end{pmatrix}, \quad \underline{y} = \begin{pmatrix} 0.1 \\ 0.2 \\ 0.3 \\ 0.8 \\ 0.5 \end{pmatrix}, \quad \overline{y} = \begin{pmatrix} 0.7 \\ 0.4 \\ 0.7 \\ 0.9 \\ 0.8 \end{pmatrix}. \qquad (66)$$
Applying the method described in Theorem 6, we get a negative result: the system has no solution for any $\lambda \in I$. As a consequence, neither $b$ nor any of its multiples $\lambda \otimes_L b$ can be reached by the orbit of the DES.
The details of the computation are shown below. By Definition (59), we get three different values of $h(\lambda)$ and distinguish the following cases: (a) $h(\lambda) = 5$, (b) $h(\lambda) = 3$ and (c) $h(\lambda) = 0$.
Case (a). We have $H = \{1, 2, 3, 4, 5\}$, $M \setminus H = \emptyset$ and $\lambda \in (0.5, 1]$. By (62), we get $\lambda_{\max}^{5} = 0.6$. From the entries (66) we compute the maximal possible candidate for a solution, $y(\lambda_{\max}^{5}) = y(0.6) = (0.2, 0.4, 0.2, 0.5, 0.2)^T$. Note that $\underline{y} \not\le y(0.6)$, since $\underline{y}_3 = 0.3 > 0.2 = y_3(0.6)$. Moreover, $y(0.6)$ is not even a solution to (9).
In view of Lemma 5, $C \otimes_L y(0.6) \ne \lambda_{\max}^{5} \otimes_L b$ implies that the system has no solution when $0.5 < \lambda \le 1$.
Case (b). In this case, only one subcase has to be considered.
For $h(\lambda) = 3$, we have $H = \{1, 2, 3\}$, $M \setminus H = \{4, 5\}$ and $\Lambda(3) = (0.2, 0.5]$. Using (53) and (61), we compute $\mu^3 = (0.1, 0.4, 0.1, 0.4, 0.2)^T$ and $N(3) = \{2, 4\}$. Then $\lambda_{\max}^{3} = 0.5 \wedge 0.4 = 0.4 \in \Lambda(3)$, in view of (63). Applying this value we get $y(0.4) = (0.1, 0.3, 0.1, 0.4, 0.1)^T$, in view of (28). The candidate $y(0.4)$ does not fulfill $\underline{y} \le y(0.4)$ and, at the same time, is not a solution to the system, because $C \otimes_L y(0.4) \ne \lambda_{\max}^{3} \otimes_L b$. Therefore, the system has no solution when $0.2 < \lambda \le 0.5$.
Case (c). In this case we have $h(\lambda) = 0$, i.e., $H = \emptyset$, $M \setminus H = M$ and $\lambda \in (0, 0.2]$. The maximal candidate $y(\lambda) = (0.1, 0.1, 0.1, 0.2, 0.1)^T$ satisfies $C \otimes_L y(\lambda) = \lambda \otimes_L b$, but the requirement $\underline{y} \le y(\lambda)$ is not fulfilled. As a consequence, the considered system is not solvable when $0 < \lambda \le 0.2$.
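The computations of Example 2 can be replayed numerically as well. The sketch below assumes the Łukasiewicz t-norm $x \otimes_L y = \max(0, x + y - 1)$ and its residuum $\min(1, 1 - x + y)$ (values rounded to one decimal place); it derives the candidate $y(0.6)$ of Case (a) by residuation and exhibits both failures, the violated lower bound and the broken equality:

```python
def luk(x, y):
    """Lukasiewicz t-norm: max(0, x + y - 1)."""
    return max(0.0, x + y - 1.0)

def res(c, d):
    """Lukasiewicz residuum: greatest y with luk(c, y) <= d."""
    return min(1.0, 1.0 - c + d)

C = [[0.6, 0.5, 0.5, 0.8, 0.8],
     [0.5, 0.7, 0.6, 0.5, 0.9],
     [0.3, 0.9, 0.3, 0.3, 0.0],
     [0.1, 0.7, 0.9, 0.2, 0.9],
     [0.9, 0.2, 0.2, 0.6, 0.8]]
b = [0.8, 0.8, 0.8, 0.5, 0.5]
y_lo = [0.1, 0.2, 0.3, 0.8, 0.5]
lam = 0.6

d = [luk(lam, bi) for bi in b]                        # lam (x)_L b
# greatest candidate: y_j = min_i (c_ij ->_L d_i)
y_cand = [round(min(res(C[i][j], d[i]) for i in range(5)), 1) for j in range(5)]
print(y_cand)                                         # [0.2, 0.4, 0.2, 0.5, 0.2]
print(all(lo <= yj for lo, yj in zip(y_lo, y_cand)))  # False: lower bound violated
lhs = [round(max(luk(C[i][j], y_cand[j]) for j in range(5)), 1) for i in range(5)]
print(lhs == [round(v, 1) for v in d])                # False: equality fails as well
```

Either failure alone already rules out solvability on the interval $(0.5, 1]$.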

6. Conclusions

In this study, the existence of a bounded solution to a one-sided linear system in the max-Łuk algebra has been studied in dependence on a linear parameter multiplying the fixed side of the system. Equivalent solvability conditions have been found, and a polynomial-time recognition algorithm has been suggested. The correctness of the algorithm has been formally proved, and its work has been illustrated by numerical examples.
The results are new: although the solvability of a one-sided linear system in the max-Łuk algebra can easily be verified in the non-parametric case, no method of recognizing the solvability of the parameterized system has been known until now.
The presented results can be applied in the study of the max-Łukasiewicz systems with interval coefficients. Łukasiewicz arithmetical conjunction can also be used in various types of optimization problems, for example, in the study of interactive cash-flows. Furthermore, the suggested recognition algorithm plays an important role in the investigation of interval eigenvectors.
An advantage of the presented algorithm is that it not only recognizes the existence or non-existence of a solution but, in the solvable case, also computes the solution values. A possible generalization of the results to other t-norms, different from the Łukasiewicz t-norm and the minimum (the Gödel t-norm), remains open for future research.

Author Contributions

All authors contributed equally to this work. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Czech Science Foundation (GAČR), grant No. 18-01246S.

Acknowledgments

The authors appreciate the valuable ideas and suggestions of J. Plavka (Technical University of Košice) expressed in personal discussions about this manuscript.

Conflicts of Interest

The authors declare no conflict of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript, or in the decision to publish the results.

