Axioms
  • Article
  • Open Access

27 November 2025

Fuzzified Matrix Space and Solvability of Matrix Equations

1 Faculty of Agriculture, University of Belgrade, Nemanjina 6, Zemun, 11080 Belgrade, Serbia
2 Mathematical Institute SANU, Kneza Mihaila 36, 11000 Belgrade, Serbia
3 Department of Mathematics and Informatics, Faculty of Sciences, University of Novi Sad, Trg Dositeja Obradovića 4, 21000 Novi Sad, Serbia
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Advances in Fuzzy Mathematics

Abstract

A fuzzified matrix space consists of a collection of matrices with a fuzzy structure, modeling the cases of uncertainty on the part of values of different matrices, including the uncertainty of the very existence of matrices with the given values. The fuzzified matrix space also serves as a test for the admissibility of certain approximate solutions to matrix equations, as well as a test for the approximate validity of certain laws. We introduce quotient structures derived from the original fuzzified matrix space and demonstrate the transferability of certain fuzzy properties from the fuzzified matrix space to its associated quotient structures. These properties encompass various aspects, including the solvability and unique solvability of equations of a specific type, the (unique) solvability of individual equations, as well as the validity of identities such as associativity. While the solvability and unique solvability of a single equation in a matrix space are equivalent to the solvability and unique solvability in a certain quotient structure, we proved that the (unique) solvability of a whole type of equations, as well as the validity of a certain algebraic law, are equivalent to the (unique) solvability and validity in all the quotient structures. Consequently, these quotient structures serve as an effective tool for evaluating whether specific properties hold within a given fuzzified matrix space.

1. Introduction

Starting from Zadeh’s definition of the fuzzy set as a generalized subset of a given set [1], Rosenfeld applied this concept to sets endowed with algebraic structures, namely to groupoids and groups [2]. He introduced the fuzzy subgroupoid and the fuzzy subgroup as natural generalizations of the concepts of subgroupoid and subgroup. This was generalized in [3] to the concept of a fuzzy subalgebra, introduced by Filep and Maurer in 1989, who also introduced the concept of compatibility of a fuzzy relation with the algebra operations. Another direction for generalizing the notion of compatibility can be found in [4].
In the course of time, the [0, 1] interval in the definition of the fuzzy set was replaced with more general codomain structures, mainly with lattices with or without additional operations. Actually, it was first done already in 1967, when Goguen introduced the notion of so-called L-fuzzy sets. They were defined as L-valued sets, i.e., as mappings from a given set to a given lattice L [5]. Thus, a membership degree was no longer measured in percentages, i.e., by an element of the [0, 1] interval, but by a lattice element. Already in 1975, Negoita and Ralescu published a book dealing with L-valued fuzzy sets [6], and in 1976, Sanchez published a paper providing a methodology for the solution of certain basic fuzzy relational equations, taking a complete Brouwerian lattice as a codomain lattice for L-valued fuzzy sets [7]. After the pioneering work of Pavelka, who introduced a fuzzy logic with truth values in a residuated lattice [8], many researchers investigated fuzzy sets with residuated lattices as suitable codomain lattices (see, e.g., [9]). Different cases of bounded codomain lattices with or without some additional properties or additional structures were also studied, and are currently being studied (see [10,11]). In line with such generalizations, the definition of fuzzy subalgebras and the definition of compatibility of a fuzzy relation with an algebraic structure were generalized as well.
Another direction in the generalization of the concept of a fuzzy subalgebra of a crisp algebra—with lattice-valued equality replacing the classical one—was introduced, e.g., in [12] for groups, [13] for quasigroups, [14] for rings, and, in a special form, in [15] for groupoids.
In our approach, a complete lattice was used as the codomain lattice. Instead of a fuzzy subset that is closed (in the fuzzy sense) under the groupoid operation and a fuzzy relation compatible with the groupoid operation, we consider a more general situation: we use a mapping ∘ : A × B → C, where each of the three sets A, B, C is equipped with a weak fuzzy equivalence relation (i.e., a symmetric and transitive fuzzy relation), and these relations are compatible with the given mapping. In the case A = B = C, we obtain a fuzzy subgroupoid of A. This structure gives rise to the fuzzy subsets μ_A, μ_B, and μ_C and, for every p in the codomain lattice, to a mapping induced by ∘, which maps the direct product of certain quotient sets of the p-cuts of μ_A and μ_B to a quotient set of the p-cut of μ_C.
Such an approach is used to define the so-called matrix space (in this article called fuzzified matrix space) [15]. It consists of a set of ordered pairs, the first component of which is the set of all matrices of a given order, and the second component is a weak fuzzy equivalence relation defined on it. It is characterized by the compatibility of the weak fuzzy equivalence relations with a partial binary operation defined on matrices, which is a generalized matrix product. This operation is defined when a usual matrix product would be defined, i.e., when the number of columns of the first matrix equals the number of rows of the second one.
We introduce cut and quotient structures naturally arising from a fuzzified matrix space. We study the so-called cutworthiness of some properties in a fuzzified matrix space, i.e., we study the connection between some fuzzy properties in a fuzzified matrix space and the related properties in the defined quotient structures.
We treat the problem of the existence of approximate solutions to matrix equations of an arbitrary equation type in a fuzzified matrix space. In order to do this, we give a definition of the type. A type of matrix equation is characterized by a unique arrangement of constants and variables. The definition of approximate solutions (so-called weak solutions) given in [15] is generalized to fit any matrix equation with the defined binary operation, and so is the definition of unique solvability. Here, we have an arbitrary number of variables in an equation, so the weak solutions are ordered sets of elements from the domain.
As for the unique solvability of a matrix equation in a fuzzified matrix space, it is, for a given equation, equivalent to the unique solvability of the related equation in a quotient structure. By proving this, we generalize a result from [15].
The uncertainty that is present in fuzzy sets gives us different possibilities for approximate problem solving, e.g., finding an approximate solution to an equation or a system of equations. There are different approaches, depending on where uncertainty exists. For example, in the case of our study, initial data (or known objects) are known and without any uncertainty; thus, they are represented by crisp objects. Even the solutions we are looking for are precise, crisp objects, but we allow them not to be completely correct in order to find at least approximate solutions to the given crisp problems. The accuracy depends on the introduced fuzzy structure. A similar approach is contained in [16], where two known objects in a fuzzy relation equation are given precisely, whereas solutions must lie in an interval depending on them. Another approach must be applied if there is uncertainty in some of the initial input data, e.g., some known objects in an equation or a system of equations. In [17], a system of fuzzy equations is presented, where uncertainty exists in the known objects on the right side of the equations, but not in those on the left side, where there are exact, crisp coefficients. In [18], fuzzy equations are presented in which there is only one known object lying on the left side of the equations, and it is a fuzzy object, i.e., an object containing approximate information, so the uncertainty exists on the left side of the equations. Finally, uncertainty may exist in the known objects on both sides of a fuzzy equation, as in the systems of fuzzy equations considered in [19], or in the matrix equations considered in [20]. In all these cases we mentioned, where uncertainty exists in the input objects, the task of solving is defined in a way that no new uncertainties arise in the solving process, other than those following from the uncertainty of the input data. To achieve this, moreover, in [17,20], the problem of solving equations with uncertainties in the known objects is actually reduced to crisp problems. In many other cases, it is allowed for some new uncertainties to arise when solving equations that have already been stated as fuzzy equations. When dealing with systems of fuzzy equations, two main approaches to such approximate reasoning were developed. The first one, starting with [21,22,23], is based on allowing some input data (i.e., equations) to be omitted and defining an approximate solution as a solution to the rest, i.e., to most of the equations. Another approach, presented, e.g., in [24,25], was to introduce a certain degree to which a fuzzy set or fuzzy relation solves (or does not solve) a fuzzy equation or a system of fuzzy equations. These approaches, as well as the one presented in this paper, enable us to look for an approximate solution when there is no exact solution.
Since matrix products are often associative, we define an approximate law in the fuzzified matrix space corresponding to associativity. Moreover, for any law in a set of matrices, we define, using this approach, a corresponding approximate law, which we prove to be cutworthy.
In 1964, Give’on defined the algebra of lattice-valued matrices [26], though not in a fuzzy setting. But since a fuzzy relation between two finite sets can be described by a matrix, a sort of L-valued fuzzy matrices appeared simultaneously with the newly introduced L-valued fuzzy sets, i.e., in 1967, under the name of L-relations [5]. Although most recent investigations on fuzzy matrices predominantly use matrices with values in the [0, 1] interval, there are still some more recent works dealing with more general lattice-valued matrices [27,28,29,30]. The investigation of different classes of fuzzy matrix equations (fuzzy relational equations) continues to be of significant relevance in recent years, due to their wide range of applications [31,32,33].
The composition of two L-valued fuzzy relations can be seen as a sort of matrix product. To solve a fuzzy relational equation, we need to solve a matrix equation. To model the cases when there also exist uncertainties regarding the values of fuzzy relations, different cases of interval-valued fuzzy sets and relations were introduced for all kinds of fuzzy sets [34,35,36,37]. A notion of fuzzified matrix space presents another approach, which, applied to the case when matrices represent fuzzy relations, brings the possibility not only to model the uncertainty on the part of values of fuzzy relations, but also to model the uncertainty of their existence.
In Section 2, we recall some basic notions such as L-valued sets, relations and their cuts, as well as the notion of weak equivalence; we recall the notion of compatibility of a fuzzy relation with an algebraic structure and its recent generalization used to introduce the notion of a matrix space. We also use this generalization in Section 3 (Subsection 3.1) to introduce quotient structures of a fuzzified matrix space. In Subsection 3.2, we introduce a set of terms and identities over a partial matrix groupoid and over the quotient structures of a matrix space defined over the partial matrix groupoid. We divide those terms and identities into types based on the arrangement of the variables and constants used, with different dimensions. In Subsection 3.3, we introduce a generalized matrix equation and the concept of weak solvability of such an equation. We relate the (unique) weak solvability of a single equation, and of a whole type of equations, to the (unique) solvability of the corresponding equation(s) in the introduced quotient structures. In Subsection 3.4, we define so-called weak laws, which are fulfilled in a matrix space when the corresponding exact laws are approximately satisfied in the partial matrix groupoid. We prove that a law in a matrix space is weakly satisfied if and only if it is (exactly) satisfied in all its quotient structures. In Section 4, we explain some future perspectives of this investigation.

2. Preliminaries and Notations

Throughout this paper, L is a complete lattice. By ∧ we denote the infimum and by ∨ the supremum in L. By ⩽ we denote the corresponding order relation, and by 0 and 1 the least and the greatest element of L, respectively.
If A is a set, an L-valued set on A (or an L-valued subset of A) is a mapping μ : A → L.
Given p ∈ L, a cut set at level p, or a p-cut, of an L-valued fuzzy set μ : A → L is a subset μ_p of A defined by
$$\mu_p = \{\, x \in A \mid p \leqslant \mu(x) \,\}.$$
An L-valued fuzzy set R on A², i.e., a mapping R : A² → L, is called an L-valued relation on A.
Given p ∈ L and an L-valued relation R on A, by R_p we denote the cut set of R at level p, and we also call it the cut relation at level p.
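For illustration (a minimal sketch, not from the paper), a p-cut of a finite L-valued set can be computed directly from this definition; here L is assumed to be a chain encoded by numbers, so that the lattice order coincides with the usual numeric order, and the names `mu` and `p_cut` are ours.

```python
# Minimal illustration of a p-cut for a finite L-valued set.
# L is assumed to be a chain encoded by real numbers.

def p_cut(mu, p):
    """Return the p-cut {x : p <= mu(x)} of the L-valued set mu (a dict)."""
    return {x for x, value in mu.items() if p <= value}

# Example: an L-valued subset of A = {a, b, c} with L = [0, 1].
mu = {"a": 0.9, "b": 0.4, "c": 0.7}
print(p_cut(mu, 0.5))   # {'a', 'c'}
print(p_cut(mu, 0.95))  # set()
```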
We say that a property of L-valued sets (L-valued relations) is cutworthy if and only if the analog crisp property is satisfied on all the cut sets (relations).
Well-known properties of L-valued relations that we use here are as follows:
  • R is symmetric if R(x, y) = R(y, x) for all x, y ∈ A;
  • R is transitive if R(x, z) ∧ R(z, y) ⩽ R(x, y) for all x, y, z ∈ A.
The mentioned properties are cutworthy, in the sense that all the p-cuts of an L-valued relation are symmetric (or transitive) in the usual (crisp) sense, if and only if the L-valued relation is symmetric (or transitive), respectively.
If an L-valued relation R : A × A → L is symmetric and transitive, it is called a weak L-valued equivalence relation.
Now, let A be equipped with some operations, i.e., let A = (A, F) be an algebra, with the family of operations F. An L-valued subalgebra of A is an L-valued set μ : A → L satisfying, for every operation f ∈ F with arity n > 0 (n ∈ N), and for all a_1, …, a_n ∈ A,
$$\bigwedge_{i=1}^{n} \mu(a_i) \leqslant \mu(f(a_1, \dots, a_n)),$$
and, for every nullary operation c ∈ F, μ(c) = 1.
We say that an L-valued relation R on A is compatible with the operations of A if it is an L-valued subalgebra of A², i.e., if the following two conditions hold: for every n-ary operation f ∈ F, for all a_1, …, a_n, b_1, …, b_n ∈ A, and for every nullary operation c ∈ F,
$$\bigwedge_{i=1}^{n} R(a_i, b_i) \leqslant R(f(a_1, \dots, a_n), f(b_1, \dots, b_n));$$
$$R(c, c) = 1.$$
The notion of compatibility was first introduced by Rosenfeld, for the group operations [2], including the binary operation. For a binary operation, the definition of compatibility was generalized to include the compatibility of a fuzzy binary operation with a fuzzy relation in [4]. Another possible generalization of the notion of compatibility of an L-valued relation with a binary operation of an algebra was introduced in [15], by the following definition.
Let A_1, A_2, and A_3 be sets, and ∘ : A_1 × A_2 → A_3 a function (i.e., an operation from A_1 × A_2 to A_3). A ∘-compatible weak lattice-valued equivalence triple (E_1, E_2, E_3) is an ordered triple of weak lattice-valued equivalences E_i : A_i² → L (i ∈ {1, 2, 3}), such that the following condition is satisfied: for all x, y ∈ A_1 and z, t ∈ A_2,
$$E_1(x, y) \wedge E_2(z, t) \leqslant E_3(x \circ z, y \circ t).$$
If A_1 = A_2 = A_3 and E_1 = E_2 = E_3, then ∘ is just a binary operation on the set A_1, and E_1 is a compatible weak lattice-valued equivalence.
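For intuition, the compatibility condition can be checked by brute force on small finite sets; the following sketch assumes L is a chain encoded by numbers (so the meet is min), and the operation and relation used in the demo are our own illustrative choices, not taken from [15].

```python
from itertools import product

def is_compatible_triple(A1, A2, op, E1, E2, E3):
    """Check E1(x,y) ^ E2(z,t) <= E3(x o z, y o t) for all x, y in A1 and z, t in A2."""
    return all(min(E1(x, y), E2(z, t)) <= E3(op(x, z), op(y, t))
               for x, y, z, t in product(A1, A1, A2, A2))

# Toy example: A1 = A2 = A3 = Z_4 with multiplication mod 4, and a weak
# equivalence that fully identifies equal numbers and half-identifies numbers
# of the same parity.
Z4 = range(4)
E_parity = lambda x, y: 1.0 if x == y else (0.5 if x % 2 == y % 2 else 0.0)
print(is_compatible_triple(Z4, Z4, lambda x, z: (x * z) % 4,
                           E_parity, E_parity, E_parity))   # True
```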
This generalized compatibility is also cutworthy, as is proven in [15].
Lemma 1
([15]). Let A_1, A_2 and A_3 be sets, ∘ : A_1 × A_2 → A_3 a function, and let (E_1, E_2, E_3) be an ordered triple such that each E_i is a weak lattice-valued equivalence on the set A_i for i ∈ {1, 2, 3}.
Then, (E_1, E_2, E_3) is a ∘-compatible weak lattice-valued equivalence triple if and only if for every p ∈ L and the cuts E_1^p, E_2^p and E_3^p:
If (x, y) ∈ E_1^p and (z, t) ∈ E_2^p, then (x ∘ z, y ∘ t) ∈ E_3^p.
Let (E_1, E_2, E_3) be a ∘-compatible weak lattice-valued equivalence triple. Then, L-valued sets μ_i : A_i → L for i ∈ {1, 2, 3} are defined by μ_i(x) := E_i(x, x).
For every p ∈ L, because of the cutworthiness of symmetry and transitivity, E_1^p, E_2^p and E_3^p are symmetric and transitive relations on A_1, A_2 and A_3, respectively. They are equal to their restrictions to μ_1^p, μ_2^p, μ_3^p, which are also reflexive on μ_1^p, μ_2^p, μ_3^p; thus, E_1^p, E_2^p and E_3^p are equivalence relations on these subsets of A_1, A_2 and A_3. Therefore, for every p ∈ L, we have the factor sets μ_1^p / E_1^p, μ_2^p / E_2^p, and μ_3^p / E_3^p, i.e., the sets of equivalence classes of these equivalence relations.
Let [x]_{E_1^p}, [y]_{E_2^p} and [z]_{E_3^p} be the classes of x, y, z in E_1^p, E_2^p, E_3^p, respectively. We define an operation ∘̄ : μ_1^p / E_1^p × μ_2^p / E_2^p → μ_3^p / E_3^p with
$$[x]_{E_1^p} \,\bar{\circ}\, [y]_{E_2^p} := [x \circ y]_{E_3^p}.$$
In [15], the following proposition is proved, establishing that ∘̄ is a well-defined operation.
Proposition 1.
If x ∈ μ_1^p and y ∈ μ_2^p, then x ∘ y ∈ μ_3^p. Moreover, if [x_1]_{E_1^p} = [x]_{E_1^p} and [y_1]_{E_2^p} = [y]_{E_2^p}, then we have [x_1 ∘ y_1]_{E_3^p} = [x ∘ y]_{E_3^p}.
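To make the construction of the factor sets and of ∘̄ concrete, the following sketch (helper names are ours) groups elements into the classes of a cut relation and applies the induced operation via arbitrary representatives, relying on Proposition 1 for well-definedness.

```python
# Sketch: factor sets and the induced operation on classes.
# The cut relation is given as a set of ordered pairs and is assumed to be an
# equivalence on the corresponding cut set; 'op' is assumed to satisfy the
# compatibility condition of Lemma 1.

def classes(cut_relation):
    """Group elements into equivalence classes (as frozensets)."""
    elements = {x for pair in cut_relation for x in pair}
    return {x: frozenset(y for y in elements if (x, y) in cut_relation)
            for x in elements}

def induced_op(op, cls1, cls2, class_of_3):
    """[x] o-bar [y] := [x o y]; well defined by Proposition 1."""
    x = next(iter(cls1))           # any representative of the first class
    y = next(iter(cls2))           # any representative of the second class
    return class_of_3[op(x, y)]

# Toy example: A1 = A2 = A3 = Z_6, op = multiplication mod 6, and the cut
# relation identifies numbers that are congruent mod 3 (compatible with op).
E = {(a, b) for a in range(6) for b in range(6) if a % 3 == b % 3}
cls = classes(E)
print(induced_op(lambda a, b: (a * b) % 6, cls[2], cls[5], cls))
# -> the class of 2*5 mod 6 = 4, i.e. frozenset({1, 4})
```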
Let I = {1, …, n} and J = {1, …, m} be finite index sets and S be a set. Then a matrix over S is a mapping I × J → S. We call such a matrix a matrix over S of order n × m. By M_{n×m} we denote the set of all matrices over S of order n × m.
By definition, a matrix over S of order n × m is an ordered n-tuple of some ordered m-tuples of elements of S.
The set of all matrices over S is denoted by M :
$$M = \bigcup_{(n,m) \in \mathbb{N} \times \mathbb{N}} M_{n \times m}.$$
Let ∘ be a partial binary operation on M, such that M_1 ∘ M_2 is defined for M_1 ∈ M_{n×m} and M_2 ∈ M_{t×s} if and only if m = t, and then M_1 ∘ M_2 ∈ M_{n×s}. The pair (M, ∘) is called a partial matrix groupoid.
We say that the operation ∘ is associative if (M_1 ∘ M_2) ∘ M_3 = M_1 ∘ (M_2 ∘ M_3) whenever all operations on both sides of the equation are defined.
It is easy to check that if both operations on one side of the equality are defined, then the operations on the other side are also defined.
The following notion was introduced in [15]. Let (M, ∘) be a partial matrix groupoid. An L-valued matrix space (fuzzified matrix space) over (M, ∘) is a set of ordered couples
{(M_{n×m}, E_{n×m}) ∣ n, m ∈ N}, where each E_{n×m} is a weak L-valued equivalence on M_{n×m}, and such that for each n, m, s ∈ N, the triple (E_{n×m}, E_{m×s}, E_{n×s}) is a ∘-compatible weak lattice-valued equivalence triple.

3. Results

3.1. Quotient Structures of a Fuzzified Matrix Space

Let {(M_{n×m}, E_{n×m}) ∣ n, m ∈ N} be an L-valued matrix space over a partial matrix groupoid (M, ∘). Let μ_{n×m}(X) = E_{n×m}(X, X). For such an L-valued matrix space and p ∈ L, we define an algebraic structure M^p consisting of the set ⋃_{(n,m) ∈ N²} μ_{n×m}^p / E_{n×m}^p and the partial binary operation ∘̄ derived from ∘ as in Equation (6). Such a structure is called a quotient structure of the partial matrix groupoid (M, ∘).
We also say that for every n, m ∈ N the elements of μ_{n×m}^p / E_{n×m}^p are elements in M^p of the order n × m. Using Proposition 1, we get that for all n, m, k ∈ N, and for any p ∈ L, the multiplication of matrices of dimensions n × m and m × k induces a multiplication in the corresponding quotient sets (see Figure 1).
Figure 1. A quotient structure.
Example 1.
Let P be the set of prime numbers, let L = (N_0 ∪ {∞})^P, and let the ordering on L be defined componentwise (taking the usual order in N_0 and taking ∞ to be the greatest element of N_0 ∪ {∞}). That is, for ϕ, ψ ∈ (N_0 ∪ {∞})^P, we define
$$\phi \leqslant \psi \iff (\forall p \in P)\ \phi(p) \leqslant \psi(p).$$
Let M be the set of matrices over Z, and (M, ∘) the matrix groupoid with the usual matrix multiplication. For A, A′ ∈ M_{n×m}, A = (a_{ij}), A′ = (a′_{ij}), and s ∈ N, we write A ≡_s A′ iff a_{ij} ≡ a′_{ij} (mod s) for all (i, j) ∈ {1, 2, …, n} × {1, 2, …, m}.
We define E_{n×m} : M_{n×m} × M_{n×m} → L by:
$$E_{n\times m}(A, A')(p) = \begin{cases} r, & \text{where } r \text{ is the greatest natural number such that } A \equiv_{p^r} A', \text{ if } A \neq A';\\ \infty, & \text{if } A = A'. \end{cases}$$
Since congruence is compatible with multiplication and addition, it is also compatible with matrix multiplication, i.e., if A, A′ ∈ M_{n×m} and B, B′ ∈ M_{m×k}, we have
$$A \equiv_{s} A' \ \text{and}\ B \equiv_{s} B' \ \text{implies}\ A \circ B \equiv_{s} A' \circ B'.$$
If E_{n×m}(A, A′)(p) = s and E_{m×k}(B, B′)(p) = t, then A ≡_{p^l} A′ and B ≡_{p^l} B′, where l = min{s, t}; thus, A ∘ B ≡_{p^l} A′ ∘ B′ and E_{n×k}(A ∘ B, A′ ∘ B′)(p) ⩾ s ∧ t.
Thus, {(M_{n×m}, E_{n×m}) ∣ n, m ∈ N} is a matrix space.
For all p ∈ P and A ∈ M_{n×m} we have E_{n×m}(A, A)(p) = ∞, which implies that μ_{n×m}(A) equals the greatest element in L (we shall denote it by 1). We interpret μ_{n×m}(A) as the level of existence of the fuzzy matrix A; thus, all the matrices exist with certainty, and μ_{n×m}^l = M_{n×m} for all l ∈ L.
For any matrix A, we denote by max(|A|) the absolute value of the element in A having the greatest absolute value. For A, A′ ∈ M_{n×m} such that A ≠ A′, we have that for all p ∈ P greater than max(|A|) + max(|A′|), E(A, A′)(p) = 0; thus, for ϕ ∈ L, if ϕ(p) ≠ 0 for infinitely many p, then (A, A′) ∉ E_{n×m}^ϕ, and for any such ϕ, we have that E_{n×m}^ϕ is the diagonal (equality) relation in M_{n×m}.
If ϕ(p) ≠ 0 only for finitely many p ∈ P, i.e., for p ∈ {p_1, p_2, …, p_r}, we have that (A, A′) ∈ E_{n×m}^ϕ if and only if the elements of A − A′ are divisible by p_1^{ϕ(p_1)}, p_2^{ϕ(p_2)}, …, p_r^{ϕ(p_r)}, which holds if and only if the elements of A − A′ are divisible by p_1^{ϕ(p_1)} · p_2^{ϕ(p_2)} ⋯ p_r^{ϕ(p_r)}.
We define a mapping Ψ, mapping M_{n×m} to the set of matrices of dimension n × m over Z_{p_1^{ϕ(p_1)}} × ⋯ × Z_{p_r^{ϕ(p_r)}}, which maps A to the ordered set of matrices (A_1, A_2, …, A_r), such that the matrix A_k contains, in place of any element a_{ij} in the matrix A, its remainder when divided by p_k^{ϕ(p_k)}. It is obvious that Ψ is an epimorphism.
Note that Ψ(A) = Ψ(A′) iff (A, A′) ∈ E_{n×m}^ϕ, i.e., kernel(Ψ) = E_{n×m}^ϕ, and thus μ_{n×m}^ϕ / E_{n×m}^ϕ = M_{n×m} / E_{n×m}^ϕ is isomorphic to the set of matrices of dimension n × m over Z_{p_1^{ϕ(p_1)}} × ⋯ × Z_{p_r^{ϕ(p_r)}}.
Since this holds for any pair (n, m) ∈ N × N, we have that M^ϕ = ⋃_{(n,m) ∈ N×N} μ_{n×m}^ϕ / E_{n×m}^ϕ is isomorphic to the set of matrices over Z_{p_1^{ϕ(p_1)}} × ⋯ × Z_{p_r^{ϕ(p_r)}}, or to the set of matrices over Z_{p_1^{ϕ(p_1)} ⋯ p_r^{ϕ(p_r)}}.
Figure 2 illustrates how a fuzzy matrix is projected in different quotient structures of this fuzzified matrix space. Here, ϕ(2) = 2; ϕ(3) = 1; ϕ(7) = 1; ϕ(p) = 0 for all other p ∈ P.
Figure 2. An illustration of how a fuzzy matrix is projected in different quotient structures.
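As a computational illustration of Example 1 (a minimal sketch; the function name is ours), the value E_{n×m}(A, A′)(p) can be computed as the largest r such that every entry of A − A′ is divisible by p^r, with ∞ when A = A′.

```python
# Sketch: computing E_{nxm}(A, A')(p) from Example 1 for integer matrices
# given as lists of rows.

def E_value(A, A2, p):
    """Greatest r with every entry of A - A' divisible by p**r; inf if A = A'."""
    diffs = [a - b for row_a, row_b in zip(A, A2) for a, b in zip(row_a, row_b)]
    if all(d == 0 for d in diffs):
        return float("inf")
    r = 0
    while all(d % p ** (r + 1) == 0 for d in diffs):
        r += 1
    return r

A  = [[8, 1], [4, 5]]
A2 = [[0, 1], [12, 13]]
print(E_value(A, A2, 2))  # 3  (all differences divisible by 2**3 but not 2**4)
print(E_value(A, A2, 3))  # 0  (the difference 8 is not divisible by 3)
print(E_value(A, A,  2))  # inf
```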
Example 2.
Let S = R⁺, and let M be the set of matrices over S. Let L = [0, 1], with the usual order “less than or equal to”.
We define a partial operation in the set of matrices:
Let A be a matrix of order m × n , and C a matrix of order n × k . We define A C as the matrix whose element in the i-th row and j-th column is the arithmetic mean of all the elements in the i-th row of A and j-th column of C.
To define fuzzy relations in the sets of matrices of the same order, we introduce some notation. If A is a matrix, min(A) is the smallest element of A, S(A) is the sum of all the elements of A, and mean(A) is the arithmetic mean of all the elements of A. In every set of matrices of the same order m × n we introduce the following fuzzy relation:
$$E_{m\times n}(A, B) = \min\left\{ \frac{\min(A)}{\mathrm{mean}(A)},\ \frac{\min(B)}{\mathrm{mean}(B)} \right\}.$$
The fuzzy relation is obviously symmetric and transitive. We prove that it is also compatible with ∘.
Let A, B be matrices of the order m × n and C, D matrices of the order n × k, and let E_{m×n}(A, B) = α and E_{n×k}(C, D) = β. Then
$$E_{m\times k}(A \circ C, B \circ D) = \min\left\{ \frac{\min(A \circ C)}{\mathrm{mean}(A \circ C)},\ \frac{\min(B \circ D)}{\mathrm{mean}(B \circ D)} \right\}.$$
We should prove that min{α, β} ⩽ E_{m×k}(A ∘ C, B ∘ D).
Suppose that
$$\frac{\min(A \circ C)}{\mathrm{mean}(A \circ C)} < \alpha \quad\text{and}\quad \frac{\min(A \circ C)}{\mathrm{mean}(A \circ C)} < \beta.$$
From (7) we have:
$$\frac{\min(A)}{S(A)} \geqslant \frac{\alpha}{mn}; \qquad \frac{\min(C)}{S(C)} \geqslant \frac{\beta}{nk},$$
and thus
$$S(A) \leqslant \frac{mn \cdot \min(A)}{\alpha}; \qquad S(C) \leqslant \frac{nk \cdot \min(C)}{\beta}.$$
From the definition of ∘ and the above inequations we have:
$$S(A \circ C) = \frac{k\,S(A) + m\,S(C)}{2n} \leqslant \frac{\frac{mnk \cdot \min(A)}{\alpha} + \frac{mnk \cdot \min(C)}{\beta}}{2n},$$
and, since mean(A ∘ C) = S(A ∘ C)/(mk),
$$\mathrm{mean}(A \circ C) \leqslant \frac{1}{2} \cdot \left( \frac{\min(A)}{\alpha} + \frac{\min(C)}{\beta} \right).$$
Obviously, min(A ∘ C) ⩾ (min(A) + min(C))/2. Thus:
$$\frac{\min(A \circ C)}{\mathrm{mean}(A \circ C)} \geqslant \frac{\min(A) + \min(C)}{\frac{\min(A)}{\alpha} + \frac{\min(C)}{\beta}}.$$
From (8) we obtain
$$\frac{\min(A) + \min(C)}{\frac{\min(A)}{\alpha} + \frac{\min(C)}{\beta}} < \alpha \quad\text{and}\quad \frac{\min(A) + \min(C)}{\frac{\min(A)}{\alpha} + \frac{\min(C)}{\beta}} < \beta,$$
which is equivalent to
$$\min(C) < \frac{\alpha}{\beta}\min(C) \quad\text{and}\quad \min(A) < \frac{\beta}{\alpha}\min(A),$$
i.e., equivalent to β < α and α < β.
This contradiction proves that (8) cannot hold; thus
$$\frac{\min(A \circ C)}{\mathrm{mean}(A \circ C)} \geqslant \min\{\alpha, \beta\}.$$
Analogously, min(B ∘ D)/mean(B ∘ D) ⩾ min{α, β}.
Thus, E_{m×n}(A, B) ∧ E_{n×k}(C, D) = min{α, β} ⩽ min{min(A ∘ C)/mean(A ∘ C), min(B ∘ D)/mean(B ∘ D)} = E_{m×k}(A ∘ C, B ∘ D).
Thus, {(M_{n×m}, E_{n×m}) ∣ n, m ∈ N} is a matrix space.
Here, μ_{n×m}^p = { A ∈ M_{n×m} ∣ min(A)/mean(A) ⩾ p }.
Also, for A, B ∈ M_{n×m} we have:
$$(A, B) \in E_{n\times m}^{p} \iff \min\left\{ \frac{\min(A)}{\mathrm{mean}(A)},\ \frac{\min(B)}{\mathrm{mean}(B)} \right\} \geqslant p \iff A, B \in \mu_{n\times m}^{p}.$$
Thus, μ_{n×m}^p / E_{n×m}^p is a one-element structure for each pair (n, m) ∈ N × N, and M^p is isomorphic to (N × N, ∗), where ∗ is a partial operation on N × N, defined for ((n, m), (n′, m′)) iff m = n′, in which case (n, m) ∗ (n′, m′) = (n, m′).
The level of existence of any fuzzified matrix A of the dimension n × m equals min(A)/mean(A), while A is similar, at least to the level p, to all the matrices in μ_{n×m}^p whenever A itself belongs to μ_{n×m}^p; thus, it collapses in M^p into the only element of the order n × m.
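To make Example 2 concrete, the following sketch (our own helper names, not from the paper) computes the mean-product operation ∘ and the relation E for small matrices of positive reals, and checks the compatibility inequality on one pair.

```python
# Sketch of Example 2: the "mean product" operation and the relation E.
# Matrices are lists of rows of positive reals.

def mean_product(A, C):
    """(A o C)_ij = arithmetic mean of row i of A together with column j of C."""
    n = len(A[0])                      # A is m x n, C is n x k
    return [[(sum(A[i]) + sum(row[j] for row in C)) / (2 * n)
             for j in range(len(C[0]))]
            for i in range(len(A))]

def ratio(A):
    """min(A) / mean(A) for a matrix A of positive reals."""
    flat = [x for row in A for x in row]
    return min(flat) / (sum(flat) / len(flat))

def E(A, B):
    """E(A, B) = min{ min(A)/mean(A), min(B)/mean(B) }."""
    return min(ratio(A), ratio(B))

A = [[1.0, 2.0], [3.0, 4.0]]
C = [[2.0, 2.0], [1.0, 5.0]]
AC = mean_product(A, C)
print(AC)                 # [[1.5, 2.5], [2.5, 3.5]]
print(E(A, C), E(AC, AC)) # 0.4 and 0.6: the compatibility inequality holds here
```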

3.2. Terms and Identities in the Language with One Binary Operation

Let ( M , ) be a partial matrix groupoid and p L . We define two sets of terms in a language with one binary operation symbol ∘, which we call the set of terms over M and the set of terms over M p . We start from M —for the first set of terms—and from the universe of M p , for the second set of terms. Their elements are called constants. Let ( n , m ) be any pair of natural numbers; every element of M of the order n × m is called a constant in M of the order n × m , and every element in M p of the order n × m is called a constant in M p of the order n × m . We add to both sets of terms, for every pair of natural numbers ( n , m ) , the same countable set of variables V n × m = { X n × m 1 , X n × m 2 , } (when convenient, we shall denote them differently).
We define the set of terms over M (or the set of terms over M p ), in an inductive way:
  • For every pair ( n , m ) of natural numbers, constants in M (or in M p , for the second set of terms) of the order n × m , as well as variables from V n × m (for both sets of terms), are terms over M (over M p ) of the order n × m .
  • If for n , m , t N , T 1 is a term over M (or over M p ) of the order n × m and T 2 is a term over M (over M p ) of the order m × t , then ( T 1 T 2 ) is a term over M (over M p ) of the order n × t .
  • Terms are exactly those expressions obtained with finitely many applications of the previous two steps.
Equivalently, we may define these two sets of terms as intersections of all the sets containing all the constants (in M and M p , respectively) and variables, and fulfilling the above condition 2. Usually, we delete outer brackets, i.e., those obtained by the last application of step 2 in the process of forming a term.
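The inductive definition above can be mirrored by a small data structure that tracks the order of each term and only allows the product of an n × m term with an m × t term; this is an illustrative sketch (class and method names are ours), not part of the formal development.

```python
# Sketch of terms over M: constants and variables carry an order (n, m), and
# composing an (n x m) term with an (m x t) term yields an (n x t) term.

class Term:
    def __init__(self, order, kind, children=(), name=None):
        self.order, self.kind, self.children, self.name = order, kind, children, name

    @staticmethod
    def constant(name, order):
        return Term(order, "const", name=name)

    @staticmethod
    def variable(name, order):
        return Term(order, "var", name=name)

    def __mul__(self, other):        # T1 * T2 stands for the term (T1 o T2)
        n, m = self.order
        m2, t = other.order
        if m != m2:
            raise ValueError("orders do not match; the product term is undefined")
        return Term((n, t), "op", (self, other))

A = Term.constant("A", (2, 3))
X = Term.variable("X", (3, 2))
print((A * X).order)     # (2, 2): the term A o X has order 2 x 2
```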
If T 1 and T 2 are terms over M (or over M p ) of the same order n × m , then T 1 = T 2 is an identity over M (over M p ) of the order  n × m .
Let T 1 = T 2 be an identity over M or an identity over M p and let A 1 , , A s be all constant terms and let X 1 , , X t be all variables appearing in T 1 and/or T 2 .
By T 1 ( B 1 , , B t ) and T 2 ( B 1 , , B t ) we denote the terms obtained from T 1 and T 2 by replacing X j with a constant B j of the same order for j { 1 , 2 , , t } , as well as the values of these terms when ∘ is interpreted either as the partial operation of ( M , ) , for the terms over M , or by interpreting ∘ as the partial operation ¯ of M p , for terms over M p .
We say that an identity T 1 = T 2 over M is true in a fuzzified matrix space { ( M n × m , E n × m ) n , m N } in valuation ( B 1 , , B t ) if the following is true:
$$\bigwedge_{i=1}^{s} \mu(A_i) \leqslant \bigwedge_{j=1}^{t} \mu(B_j) \wedge E(T_1(B_1, \dots, B_t), T_2(B_1, \dots, B_t)).$$
This means that the level of existence of the solution ( B 1 , , B t ) and the level of similarity of the left and the right sides of the evaluated expressions have to exceed the level of existence of the equation itself, measured by the level of existence of the constants appearing in the equation. For simplicity, here the indices in E and μ are not written; they are equal to the order of T 1 = T 2 and to the corresponding orders of A i and B j .
In the sequel, we consider identities that are similar to one another, having an analogous arrangement of variables, constants, and operations. Therefore, we need to define what it means for two terms T 1 and T 2 , each of which is a term over M or over M p for some p L , to be of the same type. We do this in an inductive manner.
The relation that is to be of the same type in the family of all terms over M and M p is defined as the intersection of all the equivalence relations on the union of the sets of terms over M and over M p for all p L , such that:
(i)
If A and B are constants of the same order, they are of the same type (here also, a constant from M is of the same type as a constant over M p of the same order; hence, some terms from M are of the same type as terms over M p ).
(ii)
If T 1 and T 1 , as well as T 2 and T 2 , are of the same type, and the terms T 1 T 2 and T 1 T 2 are defined, then they are of the same type.
The set of all equivalence relations fulfilling (i) and (ii) is nonempty—we can take the full relation, in which any two terms are related.
Thus, the intersection of all the equivalence relations fulfilling (i) and (ii) exists, and since the intersection of any set of equivalence relations is also an equivalence relation, the relation “to be of the same type” is an equivalence, and thus a reflexive and nonempty relation on the union of the sets of terms over M and M p for all p L . Let us call its classes types of terms. Let us denote the type containing a given term T by type ( T ) .
We say that identities T 1 = T 2 and T 1 = T 2 are of the same type if { t y p e ( T 1 ) , t y p e ( T 2 ) } = { t y p e ( T 1 ) , t y p e ( T 2 ) } . It is straightforward that the relation "to be of the same type" is an equivalence relation in the set of all identities. Classes of that relation are called types of identities.
As an equivalence class, a type of identity is determined by any identity T 1 = T 2 belonging to it and may be written as [ T 1 = T 2 ] .
Also, we say that an identity is of the type T 1 = T 2 if it belongs to the class [ T 1 = T 2 ] .
Some simple types of identities are described in the following example.
Example 3.
Let m , n , s be fixed natural numbers.
(1) 
Identities of the form A X = C , where A is a constant of the order n × m , and C is a constant of the order n × s and X is a variable from V m × s , form a type of identity; i.e., the set of all such identities over M and M p is a type of identity.
(2) 
Identities of the form Y B = C , where B is a constant of the order m × s , and C is a constant of the order n × s , and Y is a variable from V n × m , form a type of identity; i.e., the set of all such identities over M and M p is a type of identity.

3.3. Generalized Matrix Equation

Let ( M , ) be a partial matrix groupoid. An identity T 1 = T 2 over M containing constants A 1 , A 2 , , A s M and also some variables X 1 , , X t , is called an equation over ( M , ) . Note that ∘ in T 1 and T 2 is interpreted as the partial operation ∘ in M . We also write T 1 ( X 1 , , X t ) = T 2 ( X 1 , , X t ) . By the type of an equation over ( M , ) , we simply mean its type as an identity.
Let { ( M n × m , E n × m ) n , m N } be an L-valued matrix space over ( M , ) , p L and M p the corresponding algebraic structure we have defined in Section 3.1.
We denote by [ A ] E p the class of A μ n × m p in E n × m p , where n × m is the order of A.
An identity T 1 = T 2 over M p containing constants A 1 , A 2 , , A s from the universe of M p and also some variables X 1 , , X t , is called an equation over M p . Here, ∘ in T 1 and T 2 is interpreted as the partial operation ¯ in M p . We also write T 1 ( X 1 , , X t ) = T 2 ( X 1 , , X t ) . By the type of an equation over M p , we simply mean its type as an identity.
In the following definitions and theorems, we write μ ( A ) and E ( A , B ) instead of μ n × m ( A ) and E m × n ( A , B ) , and also μ p and E p instead of μ n × m p and E m × n p , whenever m and n are arbitrary or not known.
We say that an equation T_1(X_1, …, X_t) = T_2(X_1, …, X_t) over (M, ∘) containing constants A_1, A_2, …, A_s ∈ M is weakly solvable in {(M_{n×m}, E_{n×m}) ∣ n, m ∈ N} if there are B_1, …, B_t ∈ M such that the identity T_1 = T_2 is true in the valuation (B_1, …, B_t); i.e., if
$$\bigwedge_{i=1}^{s} \mu(A_i) \leqslant \bigwedge_{j=1}^{t} \mu(B_j) \wedge E(T_1(B_1, \dots, B_t), T_2(B_1, \dots, B_t)),$$
for some B_1, …, B_t ∈ M.
We say that ( B 1 , , B t ) is a weak solution of the equation T 1 = T 2 in the fuzzified matrix space { ( M n × m , E n × m ) n , m N } .
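As an illustration of this definition (a sketch under the assumption that L is a chain encoded by numbers, so that the meet is min), the following function checks whether a single matrix B is a weak solution of an equation of the simple type A ∘ X = C; the names mu, E and matmul are placeholders for the structure maps of a concrete fuzzified matrix space.

```python
# Sketch: testing whether B is a weak solution of  A o X = C.
# L is assumed to be a chain encoded by numbers, so the meet is min.

def is_weak_solution(A, C, B, mu, E, matmul):
    """Check  mu(A) ^ mu(C) <= mu(B) ^ E(A o B, C)."""
    left = min(mu(A), mu(C))                 # level of existence of the equation
    right = min(mu(B), E(matmul(A, B), C))   # existence of B and similarity of the sides
    return left <= right

# Trivial demo with a two-point chain L = {0, 1} and dummy structure maps:
mu_all_one = lambda M: 1
E_equal    = lambda M, N: 1 if M == N else 0
mult       = lambda A, B: [[sum(a*b for a, b in zip(row, col)) for col in zip(*B)]
                           for row in A]
print(is_weak_solution([[1, 0], [0, 1]], [[2, 3], [4, 5]], [[2, 3], [4, 5]],
                       mu_all_one, E_equal, mult))   # True: I o B = C exactly
```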
We relate the weak solvability of equations over the partial matrix groupoid ( M , ) in a fuzzified matrix space to the solvability of the corresponding equations over the related quotient structure M p .
As for the solvability in M p , we define it in a usual way. Namely, we say that an equation T 1 ( X 1 , , X t ) = T 2 ( X 1 , , X t ) over M p is solvable in M p , if there is an ordered t-tuple ( [ B 1 ] E p , , [ B t ] E p ) of constants in M p , such that the equality
T 1 ( [ B 1 ] E p , , [ B t ] E p ) = T 2 ( [ B 1 ] E p , , [ B t ] E p )
is true, when ∘ in T 1 and T 2 is interpreted as ¯ . We also say that ( [ B 1 ] E p , , [ B t ] E p ) is a solution to the equation T 1 = T 2 in  M p .
In order to investigate relationships between weak solvability of equations over ( M , ) in an L-valued matrix space { ( M n × m , E n × m ) n , m N } and the solvability of some related equations over M p , we define a sort of projection of a term T over M in an L-valued matrix space { ( M n × m , E n × m ) n , m N } .
If p ⩽ ⋀_{i=1}^{s} μ(A_i), where {A_1, …, A_s} is the set of all constants occurring in a term T over M, the p-projection of T in an L-valued matrix space {(M_{n×m}, E_{n×m}) ∣ n, m ∈ N} is an expression derived from T by replacing every A_i with [A_i]_{E^p} ∈ M^p and ∘ with ∘̄, as defined above. We denote the projection by T^p.
Using the definition of the projection and the definition of ¯ , we obtain the following lemma.
Lemma 2.
Let {(M_{n×m}, E_{n×m}) ∣ n, m ∈ N} be an L-valued matrix space over a partial matrix groupoid (M, ∘), p ∈ L and T(X_1, …, X_t) a term over M containing t variables X_1, …, X_t. If every constant in T belongs to μ_{n×m}^p for some n, m ∈ N and if (B_1, …, B_t) is an ordered set of matrices, each one belonging to μ_{n×m}^p for some n, m ∈ N, such that X_i is of the same order as B_i for i ∈ {1, …, t}, then T(B_1, …, B_t) ∈ μ_{n×m}^p for some n, m ∈ N and [T(B_1, …, B_t)]_{E^p} = T^p([B_1]_{E^p}, …, [B_t]_{E^p}).
Proof. 
We prove this by induction on the complexity of the term.
For a constant or a variable, the assertion of the lemma holds by the definition of the projection.
If the assertion of the lemma holds for T 1 ( X 1 , , X t ) and T 2 ( Y 1 , , Y s ) , we prove that it holds also for ( T 1 T 2 ) ( X 1 , , X t , Y 1 , , Y s ) .
Let ( B 1 , , B t , C 1 , , C s ) be an ordered set of matrices, each B i being a matrix of the same order as X i and C j a matrix of the same order as Y j —for i { 1 , 2 , t } and j { 1 , 2 , s } —then
$$\begin{aligned}
[(T_1 \circ T_2)(B_1, \dots, B_t, C_1, \dots, C_s)]_{E^p} &= [T_1(B_1, \dots, B_t) \circ T_2(C_1, \dots, C_s)]_{E^p} \\
&= [T_1(B_1, \dots, B_t)]_{E^p} \,\bar{\circ}\, [T_2(C_1, \dots, C_s)]_{E^p} \\
&= T_1^p([B_1]_{E^p}, \dots, [B_t]_{E^p}) \,\bar{\circ}\, T_2^p([C_1]_{E^p}, \dots, [C_s]_{E^p}) \\
&= (T_1 \circ T_2)^p([B_1]_{E^p}, \dots, [B_t]_{E^p}, [C_1]_{E^p}, \dots, [C_s]_{E^p}).
\end{aligned}$$
Thus, we have proved the induction step. □
Theorem 1.
Let { ( M n × m , E n × m ) n , m N } be an L-valued matrix space and T 1 = T 2 an equation over ( M , ) . All the equations over ( M , ) of the type T 1 = T 2 are weakly solvable in the L-valued matrix space if and only if for every p L , all the equations over M p of that type are solvable in M p .
Proof. 
Suppose that all the equations over (M, ∘) of the type T_1 = T_2 are weakly solvable in the L-valued matrix space. Let t be the number of variables occurring in T_1 = T_2. Let p ∈ L and let T_1′ = T_2′ be an equation over M^p of the type T_1 = T_2. It also has t variables.
Every constant in T_1′ and T_2′ is of the form [A]_{E^p}, where A ∈ μ^p is a matrix. Replacing every constant in T_1′ and T_2′ with some matrix belonging to the equivalence class denoted by that constant, we obtain an equation over (M, ∘)—let us say T_3 = T_4—of the same type as T_1 = T_2, having t variables. This equation is—by assumption—weakly solvable in the L-valued matrix space. Let {A_1, …, A_s} be the set of constants in that matrix equation and (B_1, …, B_t) be a weak solution to it.
Now,
$$\bigwedge_{i=1}^{s} \mu(A_i) \leqslant \bigwedge_{i=1}^{t} \mu(B_i) \wedge E(T_3(B_1, \dots, B_t), T_4(B_1, \dots, B_t)).$$
Since every A_i is in μ^p, we have that p ⩽ μ(A_i) for every i ∈ {1, …, s}; thus p ⩽ ⋀_{i=1}^{s} μ(A_i) ⩽ ⋀_{i=1}^{t} μ(B_i) ∧ E(T_3(B_1, …, B_t), T_4(B_1, …, B_t)). Herefrom, p ⩽ ⋀_{i=1}^{t} μ(B_i) ⩽ μ(B_i); thus B_i ∈ μ^p for i ∈ {1, …, t}; also (T_3(B_1, …, B_t), T_4(B_1, …, B_t)) ∈ E^p and [T_3(B_1, …, B_t)]_{E^p} = [T_4(B_1, …, B_t)]_{E^p}.
Now, using Lemma 2, we have
$$[T_3(B_1, \dots, B_t)]_{E^p} = T_3^p([B_1]_{E^p}, \dots, [B_t]_{E^p}) \quad\text{and}\quad [T_4(B_1, \dots, B_t)]_{E^p} = T_4^p([B_1]_{E^p}, \dots, [B_t]_{E^p}).$$
Here, T_3^p and T_4^p contain a constant [A]_{E^p} for every matrix A contained in T_3 and T_4; such a matrix A was previously chosen as a representative of its E^p-class [A]_{E^p} occurring in T_1′ or T_2′; therefore, T_1′ and T_2′ become T_3^p and T_4^p when ∘ in them is replaced with ∘̄. Since T_3^p([B_1]_{E^p}, …, [B_t]_{E^p}) = [T_3(B_1, …, B_t)]_{E^p} = [T_4(B_1, …, B_t)]_{E^p} = T_4^p([B_1]_{E^p}, …, [B_t]_{E^p}), we have proved that the initial equation T_1′ = T_2′ is solvable in M^p.
To prove the converse, suppose that for every p ∈ L, all the equations over M^p of the type T_1 = T_2 have at least one solution. Let T_3(X_1, …, X_t) = T_4(X_1, …, X_t) be an equation over (M, ∘) of that type. Let {A_1, …, A_s} be the set of all constants contained in T_3 and T_4, and let p = ⋀_{i=1}^{s} μ(A_i). Now, replacing every A_i by [A_i]_{E^p}, we obtain the equation T_3′ = T_4′ over M^p of the same type T_1 = T_2, which by assumption has a solution, let us say ([B_1]_{E^p}, …, [B_t]_{E^p}); here, B_i ∈ μ^p for every i ∈ {1, 2, …, t}. Since T_3′ and T_4′ become T_3^p and T_4^p when ∘ in them is replaced by ∘̄, we have that T_3^p([B_1]_{E^p}, …, [B_t]_{E^p}) = T_4^p([B_1]_{E^p}, …, [B_t]_{E^p}), and thus by Lemma 2 we have [T_3(B_1, …, B_t)]_{E^p} = T_3^p([B_1]_{E^p}, …, [B_t]_{E^p}) = T_4^p([B_1]_{E^p}, …, [B_t]_{E^p}) =
[T_4(B_1, …, B_t)]_{E^p}, so p ⩽ E(T_3(B_1, …, B_t), T_4(B_1, …, B_t)). Thus:
$$\bigwedge_{i=1}^{s} \mu(A_i) = p \leqslant \bigwedge_{i=1}^{t} \mu(B_i) \wedge E(T_3(B_1, \dots, B_t), T_4(B_1, \dots, B_t)),$$
and T_3 = T_4 has a weak solution (B_1, …, B_t) and is weakly solvable. □
Taking the types described in Example 3, we obtain, as special cases and as corollaries of Theorem 1, Theorems 1 and 2 from [15].
Corollary 1.
Let {(M_{n×m}, E_{n×m}) ∣ n, m ∈ N} be an L-valued matrix space and n, m, s ∈ N.
(1) 
All of the equations of the form A ∘ X = C—where A ∈ M_{n×m} and C ∈ M_{n×s}—are weakly solvable over the L-valued matrix space if and only if for every p ∈ L all of the equations of the form Ā ∘̄ X̄ = C̄—where Ā ∈ μ_{n×m}^p / E_{n×m}^p and C̄ ∈ μ_{n×s}^p / E_{n×s}^p—are solvable in μ_{m×s}^p / E_{m×s}^p.
(2) 
All of the equations of the form Y ∘ B = C, where B ∈ M_{m×s} and C ∈ M_{n×s}, are weakly solvable over the L-valued matrix space if and only if for every p ∈ L all of the equations of the form Ȳ ∘̄ B̄ = C̄, where B̄ ∈ μ_{m×s}^p / E_{m×s}^p and C̄ ∈ μ_{n×s}^p / E_{n×s}^p, are solvable in μ_{n×m}^p / E_{n×m}^p.
We give some more applications of Theorem 1 to some other special types of equations.
Example 4.
For a fuzzified matrix space {(M_{n×m}, E_{n×m}) ∣ n, m ∈ N} and n, m, s ∈ N we have the following.
(1) 
All the equations of the form A ∘ X = A′ ∘ X, where A, A′ ∈ M_{n×m} and X is a variable of the order m × s, are weakly solvable in the fuzzified matrix space if and only if, for every p ∈ L, all the equations of the form A ∘ X = A′ ∘ X, where A, A′ ∈ μ_{n×m}^p, are solvable in M^p.
(2) 
All the equations of the form Y ∘ B = Y ∘ B′, where B, B′ ∈ M_{m×s} and Y is a variable of the order n × m, are weakly solvable in the fuzzified matrix space if and only if, for every p ∈ L, all the equations of the form Y ∘ B = Y ∘ B′, where B, B′ ∈ μ_{m×s}^p, are solvable in M^p.
We could also generalize another result from [15], allowing us to test weak solvability of a single equation over ( M , ) in a fuzzified matrix space by the solvability of the corresponding equation over M p , for a suitably chosen p L . Namely, the following theorem generalizes Theorem 3 from [15].
Theorem 2.
Let {(M_{n×m}, E_{n×m}) ∣ n, m ∈ N} be an L-valued matrix space over a partial matrix groupoid (M, ∘) and T_1 = T_2 an equation over (M, ∘). Let {A_1, …, A_s} be the set of constants contained in T_1 and/or T_2, and let p = ⋀_{i=1}^{s} μ(A_i). The equation T_1 = T_2 is weakly solvable in the L-valued matrix space if and only if the equation over M^p that we obtain from it by replacing every constant A_i with its class in E^p is solvable in M^p.
Proof. 
Let T_1 = T_2 be an equation over (M, ∘) in the L-valued matrix space {(M_{n×m}, E_{n×m}) ∣ n, m ∈ N}, and let T_1′ = T_2′ be the equation over M^p derived from T_1 = T_2 by replacing every constant A_i by [A_i]_{E^p} ∈ M^p, where p = ⋀_{i=1}^{s} μ(A_i).
First, suppose that T_1 = T_2 is weakly solvable. Let (B_1, B_2, …, B_t) be a weak solution to T_1 = T_2, which means that
$$p = \bigwedge_{i=1}^{s} \mu(A_i) \leqslant \bigwedge_{i=1}^{t} \mu(B_i) \wedge E(T_1(B_1, \dots, B_t), T_2(B_1, \dots, B_t)).$$
Since p ⩽ μ(B_i) for all i ∈ {1, …, t}, we have B_i ∈ μ^p; thus, [B_i]_{E^p} ∈ M^p; by Lemma 2, T_1(B_1, …, B_t) ∈ μ^p and T_2(B_1, …, B_t) ∈ μ^p. By the above inequation, p ⩽ E(T_1(B_1, …, B_t), T_2(B_1, …, B_t)) and [T_1(B_1, …, B_t)]_{E^p} = [T_2(B_1, …, B_t)]_{E^p}. By Lemma 2:
$$T_1^p([B_1]_{E^p}, \dots, [B_t]_{E^p}) = [T_1(B_1, \dots, B_t)]_{E^p} = [T_2(B_1, \dots, B_t)]_{E^p} = T_2^p([B_1]_{E^p}, \dots, [B_t]_{E^p}).$$
T_1′([B_1]_{E^p}, …, [B_t]_{E^p}) and T_2′([B_1]_{E^p}, …, [B_t]_{E^p}) become T_1^p([B_1]_{E^p}, …, [B_t]_{E^p}) and T_2^p([B_1]_{E^p}, …, [B_t]_{E^p}), respectively, when ∘ in them is replaced with ∘̄; thus, we have proven that ([B_1]_{E^p}, …, [B_t]_{E^p}) is a solution to the equation T_1′ = T_2′ and, consequently, T_1′ = T_2′ is solvable in M^p.
To prove the other implication, suppose that T_1′ = T_2′ is solvable in M^p, and let ([B_1]_{E^p}, …, [B_t]_{E^p}) be a solution to T_1′ = T_2′. T_1′([B_1]_{E^p}, …, [B_t]_{E^p}) and T_2′([B_1]_{E^p}, …, [B_t]_{E^p}) become T_1^p([B_1]_{E^p}, …, [B_t]_{E^p}) and T_2^p([B_1]_{E^p}, …, [B_t]_{E^p}) when ∘ in them is replaced with ∘̄; since ([B_1]_{E^p}, …, [B_t]_{E^p}) is a solution to T_1′ = T_2′, we have T_1^p([B_1]_{E^p}, …, [B_t]_{E^p}) = T_2^p([B_1]_{E^p}, …, [B_t]_{E^p}). By Lemma 2:
[T_1(B_1, …, B_t)]_{E^p} = T_1^p([B_1]_{E^p}, …, [B_t]_{E^p}) = T_2^p([B_1]_{E^p}, …, [B_t]_{E^p}) = [T_2(B_1, …, B_t)]_{E^p}, so p ⩽ E(T_1(B_1, …, B_t), T_2(B_1, …, B_t)).
Since [B_i]_{E^p} ∈ M^p for i ∈ {1, …, t}, we have B_i ∈ μ^p and p ⩽ μ(B_i). Finally,
$$\bigwedge_{i=1}^{s} \mu(A_i) = p \leqslant \bigwedge_{i=1}^{t} \mu(B_i) \wedge E(T_1(B_1, \dots, B_t), T_2(B_1, \dots, B_t)),$$
and T_1 = T_2 has a weak solution (B_1, …, B_t) and is weakly solvable. □
Applying this theorem to the linear equations A X = C and Y B = C , we obtain the following corollary, equivalent to Theorems 3 and 4 from [15].
Corollary 2.
Let {(M_{n×m}, E_{n×m}) ∣ n, m ∈ N} be an L-valued matrix space, and A ∈ M_{n×m}, B ∈ M_{m×s}, and C ∈ M_{n×s}.
(1) 
The equation A ∘ X = C—where X is a variable of the order m × s—is weakly solvable in the L-valued matrix space if and only if the equation [A]_{E_{n×m}^p} ∘̄ X = [C]_{E_{n×s}^p}, where p = μ_{n×m}(A) ∧ μ_{n×s}(C), is solvable in M^p.
(2)
The equation Y ∘ B = C—where Y is a variable of the order n × m—is weakly solvable in the L-valued matrix space if and only if the equation Y ∘̄ [B]_{E_{m×s}^p} = [C]_{E_{n×s}^p}, where p = μ_{m×s}(B) ∧ μ_{n×s}(C), is solvable in M^p.
Example 5.
Let (M, ∘) be the space of matrices over the ring (Z, +, ·), and let ∘ be the usual matrix product: if A ∈ M_{n×m} and B ∈ M_{m×s}, then (A ∘ B)_{ij} = Σ_{l=1}^{m} a_{il} · b_{lj}.
We say that, for n, m, k ∈ N and k > 1, two matrices A, A′ of the order n × m are congruent modulo k if for all (i, j) ∈ {1, …, n} × {1, …, m} we have a_{ij} ≡ a′_{ij} (mod k). We write A ≡ A′ (mod k).
Let L = {0, t, u, 1} be the four-element lattice shown in Figure 3, let k > 1, and for all (n, m) ∈ N²:
Figure 3. A lattice L.
$$E_{n\times m}(A, A') = \begin{cases} 1 & \text{if all the elements of } A \text{ and } A' \text{ are divisible by } k;\\ t & \text{in all other cases when } A \equiv A' \pmod{k};\\ 0 & \text{otherwise.} \end{cases}$$
For any k N , E n × m is a symmetric and transitive fuzzy relation, i.e., it is a weak L-valued equivalence relation. Moreover, it is compatible with ∘.
Thus, { ( M n × m , E n × m ) n , m N } is a matrix space. Let us describe the quotient structure M p for all p L :
p = 0: μ_{n×m}^0 = M_{n×m} and E_{n×m}^0 = M_{n×m}², so μ_{n×m}^0 / E_{n×m}^0 is a one-element set for all (n, m) ∈ N × N. Thus, M^0 is isomorphic to (N × N, ∗), where ∗ is a partial operation on N × N, defined for ((n, m), (n′, m′)) iff m = n′, in which case (n, m) ∗ (n′, m′) = (n, m′).
p = t: μ_{n×m}^t = M_{n×m} and E_{n×m}^t = {(A, B) ∣ A ≡ B (mod k)}, for all (n, m) ∈ N × N, so M^t is isomorphic to the set of matrices over the ring (Z_k, +_k, ·_k), where Z_k = {0, 1, …, k − 1}, and +_k and ·_k are addition and multiplication modulo k, respectively (multiplication of matrices is as usual).
p = 1 or p = u: μ_{n×m}^1 is the set of matrices whose elements are divisible by k, while E_{n×m}^1 is the full relation on that set; thus M^1 (as well as M^u) is isomorphic to (N × N, ∗), where ∗ is a partial operation on N × N, defined for ((n, m), (n′, m′)) iff m = n′, in which case (n, m) ∗ (n′, m′) = (n, m′).
Thus, any fuzzified matrix A of the dimension n × m exists in {(M_{n×m}, E_{n×m}) ∣ n, m ∈ N} at least to the level t (and to the level 1 when all its elements are divisible by k, since then E_{n×m}(A, A) = 1), while it is similar, to the level t, to all the matrices whose elements are congruent modulo k to its elements at the same positions. It collapses in M^1, M^0, M^u into the only element of M_{n×m}^1, M_{n×m}^0, M_{n×m}^u, respectively, while in M^t it is reduced to the matrix over Z_k, containing—instead of an element m_{ij}—its remainder when divided by k.
Consider the equation A ∘ X = C, where $A = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \end{pmatrix}$ and $C = \begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}$.
We can easily see that this matrix equation is not solvable over the domain (Z, +, ·) in the usual way.
Now, we want to determine whether the equation A ∘ X = C is weakly solvable.
We need to check the solvability of the equation [A]_{E_{2×3}^p} ∘̄ X = [C]_{E_{2×2}^p}, where p = μ_{2×3}(A) ∧ μ_{2×2}(C).
For k > 1, we have μ_{2×3}(A) ∧ μ_{2×2}(C) = t ∧ t = t.
For k = 2, we have [A]_{E_{2×3}^t} = [A′]_{E_{2×3}^t}, where $A' = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \end{pmatrix}$.
Also, [C]_{E_{2×2}^t} = [C′]_{E_{2×2}^t}, where $C' = \begin{pmatrix} 1 & 0 \\ 1 & 0 \end{pmatrix}$.
The equation [A]_{E_{2×3}^t} ∘̄ X = [C]_{E_{2×2}^t} is solvable in M^t; namely, for $B = \begin{pmatrix} 1 & 1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}$, we have $A' \circ B = \begin{pmatrix} 1 & 2 \\ 1 & 0 \end{pmatrix}$, which is in the same equivalence class as C under the equivalence E_{2×2}^t; therefore,
$$[A]_{E_{2\times 3}^{t}} \,\bar{\circ}\, [B]_{E_{3\times 2}^{t}} = [A']_{E_{2\times 3}^{t}} \,\bar{\circ}\, [B]_{E_{3\times 2}^{t}} = [A' \circ B]_{E_{2\times 2}^{t}} = [C]_{E_{2\times 2}^{t}}.$$
Thus, A ∘ X = C is weakly solvable in (M, ∘), if k = 2.
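A brute-force verification of this instance of Example 5 (illustrative code; the names are ours): searching over all 3 × 2 matrices with entries in {0, 1} for a B with A ∘ B ≡ C (mod 2) recovers, among others, the weak solution exhibited above, in line with Corollary 2.

```python
from itertools import product

A = [[1, 2, 3], [4, 5, 6]]
C = [[1, 2], [3, 4]]

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)] for row in A]

def congruent_mod(M, N, k):
    return all(x % k == y % k for rm, rn in zip(M, N) for x, y in zip(rm, rn))

solutions = []
for entries in product(range(2), repeat=6):
    B = [list(entries[0:2]), list(entries[2:4]), list(entries[4:6])]
    if congruent_mod(matmul(A, B), C, 2):
        solutions.append(B)

print(len(solutions))   # 4 candidate matrices over {0, 1}
print(solutions)        # includes [[1, 1], [1, 0], [0, 1]] from the example
```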
The matrix space we introduced fuzzifies a set of matrices with a partial groupoid structure (i.e., with the multiplication of matrices defined in a case when the standard multiplication is defined), in order to apply it to various cases of matrix multiplication, which can model, e.g., the composition of fuzzy relations and various other processes and transformations for which addition may not be defined or relevant. Matrix space preserves multiplication in the initial set of matrices by its very definition. But a set of matrices that we want to fuzzify does not, generally, include addition, and if it does, a necessary and sufficient condition for a matrix space to preserve addition would be that the existing addition is compatible—under the usually defined compatibility—with weak equivalences within all subsets of matrices of a given dimension. That is the case in our Examples 1 and 5.
In the sequel, we define the unique weak solvability of the equations over ( M , ) in an L-valued matrix space.
Let T_1 and T_2 be terms over M. Let {A_1, …, A_s} be the set of constants contained in T_1 and/or T_2. We say that the equation T_1 = T_2 is uniquely weakly solvable in an L-valued matrix space {(M_{n×m}, E_{n×m}) ∣ n, m ∈ N} if it is weakly solvable and, for any two of its weak solutions (B_1, …, B_t) and (B_1′, …, B_t′), we have
$$\bigwedge_{j=1}^{s} \mu(A_j) \leqslant \bigwedge_{i=1}^{t} E(B_i, B_i'),$$
which is equivalent to
$$\bigwedge_{j=1}^{s} \mu(A_j) \leqslant E(B_i, B_i'), \quad \text{for } i \in \{1, \dots, t\}.$$
This means that a uniquely weakly solvable equation T_1 = T_2 over (M, ∘) may have several weak solutions, but they should be equal on the “level” ⋀_{j=1}^{s} μ(A_j). Therefore, we are able to test the unique weak solvability of an equation over (M, ∘) in the quotient structure M^p for p = ⋀_{j=1}^{s} μ(A_j), as we prove in the following theorem.
Theorem 3.
Let {(M_{n×m}, E_{n×m}) ∣ n, m ∈ N} be an L-valued matrix space over a partial matrix groupoid (M, ∘) and let T_1 = T_2 be an equation over (M, ∘). Let {A_1, …, A_s} be the set of constants contained in T_1 and/or T_2, and let p = ⋀_{j=1}^{s} μ(A_j). The equation T_1 = T_2 is uniquely weakly solvable in the fuzzified matrix space if and only if the equation we obtain from it by replacing every constant A_j with its class in E^p is uniquely solvable in M^p.
Proof. 
Let T_1′ = T_2′ be the equation over M^p we obtain from T_1 = T_2 by replacing every constant A_j with its class in E^p, where p = ⋀_{j=1}^{s} μ(A_j).
First, suppose that T_1 = T_2 is uniquely weakly solvable. By the proof of Theorem 2, if (B_1, …, B_t) is a weak solution to T_1 = T_2, we have that B_i ∈ μ^p for all i ∈ {1, …, t} and ([B_1]_{E^p}, …, [B_t]_{E^p}) is a solution to T_1′ = T_2′. If ([B_1′]_{E^p}, …, [B_t′]_{E^p}) is any solution to T_1′ = T_2′, by the proof of Theorem 2, we have that (B_1′, …, B_t′) is a weak solution to T_1 = T_2; thus, by the weak uniqueness we have p ⩽ E(B_i, B_i′), and ([B_1]_{E^p}, …, [B_t]_{E^p}) = ([B_1′]_{E^p}, …, [B_t′]_{E^p}).
Now, suppose that T_1′ = T_2′ is uniquely solvable, and that ([B_1]_{E^p}, …, [B_t]_{E^p}) is its unique solution. By the proof of Theorem 2, we have that (B_1, …, B_t) is a weak solution to T_1 = T_2. Let (B_1′, …, B_t′) be another weak solution to T_1 = T_2; by the proof of Theorem 2 we have p ⩽ μ(B_i′) for i ∈ {1, …, t} and ([B_1′]_{E^p}, …, [B_t′]_{E^p}) is a solution to T_1′ = T_2′. By uniqueness, we have [B_i]_{E^p} = [B_i′]_{E^p} for i ∈ {1, …, t}, and consequently p ⩽ E(B_i, B_i′). □
Example 6.
In Example 5, for k = 2, the equation A ∘ X = C is not uniquely weakly solvable, since for p = μ_{2×3}(A) ∧ μ_{2×2}(C) = t, both [B]_{E_{3×2}^t} and [B′]_{E_{3×2}^t}, where $B = \begin{pmatrix} 1 & 1 \\ 1 & 0 \\ 0 & 1 \end{pmatrix}$ and $B' = \begin{pmatrix} 1 & 0 \\ 1 & 0 \\ 0 & 0 \end{pmatrix}$, are solutions to [A]_{E_{2×3}^t} ∘̄ X = [C]_{E_{2×2}^t}. By the definition of E_{3×2} in Example 5, we can note that E_{3×2}(B, B′) = 0 < t, and hence the equation is not uniquely solvable.
But the equation A_1 ∘ X = C, where $A_1 = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$, is uniquely weakly solvable. Namely, it is weakly solvable, since [A_1]_{E_{2×2}^t} ∘̄ X = [C]_{E_{2×2}^t} is solvable, because A_1 ∘ C = C and thus [A_1]_{E_{2×2}^t} ∘̄ [C]_{E_{2×2}^t} = [A_1 ∘ C]_{E_{2×2}^t} = [C]_{E_{2×2}^t}. Let [Y]_{E_{2×2}^t} be a solution to [A_1]_{E_{2×2}^t} ∘̄ X = [C]_{E_{2×2}^t}; since A_1 ∘ Y = Y for all Y for which A_1 ∘ Y is defined, we have [Y]_{E_{2×2}^t} = [A_1 ∘ Y]_{E_{2×2}^t} = [A_1]_{E_{2×2}^t} ∘̄ [Y]_{E_{2×2}^t} = [C]_{E_{2×2}^t}; thus, any solution [Y]_{E_{2×2}^t} to the equation [A_1]_{E_{2×2}^t} ∘̄ X = [C]_{E_{2×2}^t} equals [C]_{E_{2×2}^t}.
We obtain, as corollaries to Theorem 3, assertions that are slightly reformulated Theorems 5 and 6 from [15].
Corollary 3.
Let (M, ∘) be a partial matrix groupoid, A ∈ M_{n×m}, B ∈ M_{m×s}, and C ∈ M_{n×s} for some n, m, s ∈ N.
(1) 
The equation A ∘ X = C over (M, ∘)—where X is a variable of the order m × s—is uniquely weakly solvable in a given fuzzified matrix space {(M_{n×m}, E_{n×m}) ∣ n, m ∈ N} if and only if the equation [A]_{E_{n×m}^p} ∘̄ X = [C]_{E_{n×s}^p}—where p = μ_{n×m}(A) ∧ μ_{n×s}(C)—is uniquely solvable in M^p.
(2) 
The equation Y ∘ B = C over (M, ∘)—where Y is a variable of the order n × m—is uniquely weakly solvable in a given fuzzified matrix space {(M_{n×m}, E_{n×m}) ∣ n, m ∈ N} if and only if the equation Y ∘̄ [B]_{E_{m×s}^p} = [C]_{E_{n×s}^p}—where p = μ_{m×s}(B) ∧ μ_{n×s}(C)—is uniquely solvable in M^p.
Another corollary to Theorem 3 generalizes Corollary 1 from [15].
Corollary 4.
Let { ( M n × m , E n × m ) n , m N } be an L-valued matrix space over a partial matrix groupoid ( M , ) . Let T 1 = T 2 be an equation over ( M , ) . If for every p L all the equations over M p of the type T 1 = T 2 are uniquely solvable in M p , then all the equations over ( M , ) of the same type are uniquely weakly solvable in { ( M n × m , E n × m ) n , m N } .

3.4. Approximate Laws in a Fuzzified Matrix Space

Since a matrix product is usually associative, we define an analog of associativity in a fuzzified matrix space {(M_{n×m}, E_{n×m}) ∣ n, m ∈ N}. The definition is similar to the definition of fuzzified associativity in E-fuzzy groups given in [12].
We say that the fuzzified matrix space is weakly associative if for all m, n, s, t ∈ N and for all A ∈ M_{m×n}, B ∈ M_{n×s}, C ∈ M_{s×t}, the following holds:
$$\mu_{m\times n}(A) \wedge \mu_{n\times s}(B) \wedge \mu_{s\times t}(C) \leqslant E_{m\times t}((A \circ B) \circ C, A \circ (B \circ C)).$$
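A minimal sketch of checking this inequality for one triple of matrices, assuming L is a chain encoded by numbers (so the meet is min); mu, E and matmul stand for the structure maps of a concrete fuzzified matrix space and are assumptions of this illustration.

```python
def weakly_associative_on(A, B, C, mu, E, matmul):
    """Check  mu(A) ^ mu(B) ^ mu(C) <= E((A o B) o C, A o (B o C))  for one triple."""
    left = min(mu(A), mu(B), mu(C))
    right = E(matmul(matmul(A, B), C), matmul(A, matmul(B, C)))
    return left <= right

# Trivial demo: exact integer matrix multiplication (associative), a two-point
# chain L = {0, 1}, mu = 1 everywhere and E = equality.
matmul = lambda X, Y: [[sum(a*b for a, b in zip(row, col)) for col in zip(*Y)]
                       for row in X]
print(weakly_associative_on([[1, 2]], [[3], [4]], [[5, 6]],
                            lambda M: 1, lambda M, N: 1 if M == N else 0,
                            matmul))   # True
```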
Analogously, for any other law given by T 1 ( X 1 , , X t ) = T 2 ( X 1 , , X t ) , where T 1 ( X 1 , , X t ) and T 2 ( X 1 , , X t ) are terms consisting of t variables X 1 , X 2 , , X t , we may define its weak analog in the fuzzified matrix space { ( M n × m , E n × m ) n , m N } . For this purpose, we build the set of terms in the same way as in Section 3.2, except that we take a countable set of variables without a defined order. They represent just any element from a fuzzified matrix space or any of its quotient structures M p .
We say that the law T_1(X_1, …, X_t) = T_2(X_1, …, X_t) is weakly satisfied in a fuzzified matrix space if, for all A_1, …, A_t from the fuzzified matrix space for which T_1 and T_2 are defined, we have
$$\mu(A_1) \wedge \dots \wedge \mu(A_t) \leqslant E(T_1(A_1, \dots, A_t), T_2(A_1, \dots, A_t)).$$
Here, we omitted indices as they may vary and take any values such that T 1 ( A 1 , , A t ) and T 2 ( A 1 , , A t ) are defined.
Associativity, as well as any other law (e.g., commutativity, idempotency…), is cutworthy in a fuzzified matrix space. We prove it in the following theorem.
Theorem 4.
If $\{(M_{n\times m}, E_{n\times m})\}_{n,m\in\mathbb{N}}$ is a fuzzified matrix space and $T_1(X_1, \ldots, X_t)$, $T_2(X_1, \ldots, X_t)$ are terms in the variables $X_1, \ldots, X_t$, then the following holds: $T_1(X_1, \ldots, X_t) = T_2(X_1, \ldots, X_t)$ is weakly satisfied in the fuzzified matrix space if and only if, for all $p \in L$, the equality $T_1^p(X_1, \ldots, X_t) = T_2^p(X_1, \ldots, X_t)$ is satisfied for all $([A_1]_{E^p}, \ldots, [A_t]_{E^p}) \in (\mu^p / E^p)^t$.
Proof. 
Let $T_1(X_1, \ldots, X_t) = T_2(X_1, \ldots, X_t)$ be weakly satisfied in the fuzzified matrix space. Consider the following equality:
$$T_1^p([A_1]_{E^p}, \ldots, [A_t]_{E^p}) = T_2^p([A_1]_{E^p}, \ldots, [A_t]_{E^p}).$$
By Lemma 2, it is equivalent to $[T_1(A_1, \ldots, A_t)]_{E^p} = [T_2(A_1, \ldots, A_t)]_{E^p}$, and thus also equivalent to $E(T_1(A_1, \ldots, A_t), T_2(A_1, \ldots, A_t)) \geq p$, which is true since $A_1 \in \mu^p, \ldots, A_t \in \mu^p$, and thus $\mu(A_i) \geq p$ for $i \in \{1, 2, \ldots, t\}$ and $E(T_1(A_1, \ldots, A_t), T_2(A_1, \ldots, A_t)) \geq \mu(A_1) \wedge \cdots \wedge \mu(A_t) \geq p$.
To prove the converse, let $T_1^p([A_1]_{E^p}, \ldots, [A_t]_{E^p}) = T_2^p([A_1]_{E^p}, \ldots, [A_t]_{E^p})$ be always satisfied, and let $A_1, \ldots, A_t$ be matrices for which $T_1$ and $T_2$ are defined. Consider the inequality
$$E\big(T_1(A_1, \ldots, A_t),\; T_2(A_1, \ldots, A_t)\big) \geq \mu(A_1) \wedge \cdots \wedge \mu(A_t).$$
Taking $p = \mu(A_1) \wedge \cdots \wedge \mu(A_t)$, we have that $T_1^p([A_1]_{E^p}, \ldots, [A_t]_{E^p}) = T_2^p([A_1]_{E^p}, \ldots, [A_t]_{E^p})$; by Lemma 2, we have that $[T_1(A_1, \ldots, A_t)]_{E^p} = [T_2(A_1, \ldots, A_t)]_{E^p}$, i.e.,
$$E\big(T_1(A_1, \ldots, A_t),\; T_2(A_1, \ldots, A_t)\big) \geq p = \mu(A_1) \wedge \cdots \wedge \mu(A_t).$$
 □
Thus, we have proved that any law in a fuzzified matrix space is weakly satisfied if and only if the corresponding law is satisfied in all the cut structures $M^p$.
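This cut-level test can be mechanized on a finite example. The sketch below, continuing the illustrative setting used for weak associativity above (entries $0, 1, 2$, product $a * b = |a - b|$, chain $0 < 0.5 < 1$, hand-picked $\mu$ and $E$), checks associativity in every quotient $M^p$ on class representatives; compatibility of $E$ with the operation, so that the induced operation is well defined, is assumed.

```python
# Sketch of the cut-level test in Theorem 4: a law holds weakly iff it holds
# in every quotient M^p. All data below are illustrative assumptions.

from itertools import product

M = [0, 1, 2]
mu = {0: 1.0, 1: 1.0, 2: 0.5}
L = [0.0, 0.5, 1.0]                     # the codomain chain

def op(a, b):
    return abs(a - b)

def E(a, b):
    if a == b:
        return 1.0
    return 0.5 if {a, b} == {0, 2} else 0.0

def cut_classes(p):
    """Classes of the p-cut of E on the p-cut of mu."""
    support = [a for a in M if mu[a] >= p]
    classes = []
    for a in support:
        for cls in classes:
            if E(a, cls[0]) >= p:
                cls.append(a)
                break
        else:
            classes.append([a])
    return classes

def cls_of(a, classes):
    return next(c for c in classes if a in c)

def associative_in_cut(p):
    # Representatives suffice because the induced operation is assumed to be
    # well defined on the cut quotient.
    classes = cut_classes(p)
    reps = [c[0] for c in classes]
    return all(
        cls_of(op(op(a, b), c), classes) is cls_of(op(a, op(b, c)), classes)
        for a, b, c in product(reps, repeat=3)
    )

print({p: associative_in_cut(p) for p in L})   # all True, so by Theorem 4 the
                                               # toy space is weakly associative
```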
Example 7.
In the matrix space of Example 5, we may conclude that associativity is weakly satisfied, because it is satisfied in all its quotient structures: in $M^1$, $M^0$, and $M^u$, associativity is trivially satisfied, and in $M^t$ it is satisfied because $M^t$ is isomorphic to the set of matrices over the ring $(\mathbb{Z}_k, +_k, \cdot_k)$.
In the matrix space of Example 2, we conclude that all laws are weakly satisfied, since all the cut structures are trivial, i.e., all the matrices of a given dimension collapse into a single element.

4. Conclusions

We have proved that the weak solvability of the equations of any given type, defined in a natural way, is a cutworthy property; i.e., it is satisfied if and only if the equations of the same type are solvable in all the quotient structures. As for the unique weak solvability of the equations of any type, also defined in a natural way, it is satisfied if the equations of the same type are uniquely solvable in all the quotient structures $M^p$.
As for the (unique) weak solvability of a single equation in a fuzzified matrix space, it is equivalent to the (unique) solvability of the corresponding equation over $M^p$ for a suitably chosen $p \in L$, which depends on the equation itself.
The theorems proved here concerning the (unique) weak solvability of equations and of their types generalize all the theorems from [15], which we obtain as corollaries.
Since associativity is often satisfied in a set of matrices in which a multiplication is defined, we defined a fuzzy analog of associativity in a fuzzified matrix space. A similar approach led us to the definition of an L-valued analog of any law formulated using matrix multiplication. We have proved that any such law is cutworthy; thus, it can be tested in the quotient structures we defined.
It is reasonable to expect that a matrix space would exhibit more regularity and additional properties if we moved from this more general framework of lattice-valued matrix spaces to [0, 1]-valued matrix spaces. However, investigating those regularities and properties would require a separate study.
In conclusion, our approach opens up possibilities for approximately solving equations or systems of equations of various types. Our method enables us to seek approximate solutions in instances where exact solutions do not exist; we can identify solutions that make the equation true up to some equivalence. As we extend our research, we intend to investigate the practical applications of our approach, exploring its potential in real-world contexts. A limitation of this theoretical study is that no sensitivity analysis has been performed. Sensitivity analysis is essential when dealing with fuzzy logic, because small variations in the data should not unduly influence the final solution. However, to our knowledge, there are no studies in which sensitivity analysis has been performed for lattice-valued fuzzy logic. Since our manuscript is theoretical, we plan to perform the sensitivity analysis in a further study involving real applications.
Moreover, an important direction for future work is to investigate whether an efficient algorithm is available for solving the matrix equations associated with our approach, using the comprehensive overview provided in [38], or whether it will be necessary to design a new one.

Author Contributions

Conceptualization, V.S. and A.T.; methodology, V.S. and A.T.; investigation, V.S. and A.T.; writing—original draft preparation, V.S. and A.T.; writing—review and editing, V.S. and A.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Science Fund of the Republic of Serbia, Grant No. 6565, Advanced Techniques of Mathematical Aggregation and Approximative Equations Solving in Digital Operational Research (AT-MATADOR). The authors also gratefully acknowledge the financial support of the Ministry of Science, Technological Development and Innovation of the Republic of Serbia (Grants No. 451-03-137/2025-03/200116, 451-03-137/2025-03/200125, 451-03-136/2025-03/200125, and 451-03-136/2025-03/200029).

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Zadeh, L.A. Similarity relations and fuzzy orderings. Inf. Sci. 1971, 3, 177–200.
2. Rosenfeld, A. Fuzzy groups. J. Math. Anal. Appl. 1971, 35, 512–517.
3. Filep, L.; Maurer, G.I. Fuzzy congruences and compatible fuzzy partitions. Fuzzy Sets Syst. 1989, 29, 357–361.
4. Demirci, M.; Recasens, J. Fuzzy groups, fuzzy functions and fuzzy equivalence relations. Fuzzy Sets Syst. 2004, 144, 441–458.
5. Goguen, J.A. L-fuzzy sets. J. Math. Anal. Appl. 1967, 18, 145–174.
6. Negoita, C.V.; Ralescu, D.A. Applications of Fuzzy Sets to Systems Analysis; Interdisciplinary Systems Research Series; Birkhaeuser, Basel, Stuttgart and Halsted Press: New York, NY, USA, 1975; Volume 11.
7. Sanchez, E. Resolution of composite fuzzy relation equations. Inf. Control 1976, 30, 38–48.
8. Pavelka, J. On fuzzy logic II. Enriched residuated lattices and semantics of propositional calculi. Math. Log. Q. 1979, 25, 119–134.
9. Rachunek, J.; Slezak, V. Bounded dually residuated lattice ordered monoids as a generalization of fuzzy structures. Math. Slovaca 2006, 56, 223–233. Available online: http://dml.cz/dmlcz/133054 (accessed on 24 November 2025).
10. Díaz-Moreno, J.C.; Medina, J.; Turunen, E. Minimal solutions of general fuzzy relation equations on linear carriers. An algebraic characterization. Fuzzy Sets Syst. 2017, 311, 112–123.
11. Järvinen, J.; Kondo, M. Relational correspondences for L-fuzzy rough approximations defined on De Morgan Heyting algebras. Soft Comput. 2024, 28, 903–916.
12. Budimirović, B.; Budimirović, V.; Šešelja, B.; Tepavčević, A. E-fuzzy groups. Fuzzy Sets Syst. 2016, 289, 94–112.
13. Krapež, A.; Šešelja, B.; Tepavčević, A. Solving linear equations by fuzzy quasigroups techniques. Inf. Sci. 2019, 491, 179–189.
14. Jimenez, J.; Serrano, M.L.; Šešelja, B.; Tepavčević, A. Omega-rings. Fuzzy Sets Syst. 2023, 455, 183–197.
15. Medina, J.; Stepanovic, V.; Tepavcevic, A. Solutions of matrix equations with weak fuzzy equivalence relations. Inf. Sci. 2023, 629, 634–645.
16. Medina, J. Minimal solutions of generalized fuzzy relational equations: Clarifications and corrections towards a more flexible setting. Int. J. Approx. Reason. 2017, 84, 33–38.
17. Behera, D.; Chakraverty, S. Solving the nondeterministic static governing equations of structures subjected to various forces under fuzzy and interval uncertainty. Int. J. Approx. Reason. 2020, 116, 43–61.
18. Stepanović, V.; Tepavčević, A. Fuzzy sets (in)equations with a complete codomain lattice. Kybernetika 2022, 58, 145–162.
19. di Nola, A.; Sessa, S.; Pedrycz, W. A Study on Approximate Reasoning Mechanisms via Fuzzy Relation Equations. Int. J. Approx. Reason. 1992, 6, 33–44.
20. Guo, H.; Shang, D. Fuzzy Approximate Solution of Positive Fully Fuzzy Matrix Equations. J. Appl. Math. 2013, 2013, 178209.
21. Gottwald, S.; Pedrycz, W. Analysis and synthesis of fuzzy controller. Probl. Control Inf. Theory 1985, 13, 33–45.
22. Gottwald, S.; Pedrycz, W. Solvability of fuzzy relational equations and manipulation of fuzzy data. Fuzzy Sets Syst. 1986, 18, 45–65.
23. Pedrycz, W. Approximate solutions of fuzzy relational equations. Fuzzy Sets Syst. 1988, 28, 183–202.
24. Pedrycz, W. Numerical and applicational aspects of fuzzy relational equations. Fuzzy Sets Syst. 1983, 11, 1–18.
25. Stanimirović, S.; Micić, I. On the solvability of weakly linear systems of fuzzy relation equations. Inf. Sci. 2022, 607, 670–687.
26. Give’on, Y. Lattice matrices. Inf. Control 1964, 7, 477–484.
27. Sun, F.; Qu, X.; Zhu, L. On pre-solution matrices of fuzzy relation equations over complete Brouwerian lattices. Fuzzy Sets Syst. 2020, 384, 34–53.
28. Ćirić, M.; Ignjatović, J. The Existence of Generalized Inverses of Fuzzy Matrices. In Interactions Between Computational Intelligence and Mathematics Part 2; Studies in Computational Intelligence; Kóczy, L., Medina-Moreno, J., Ramírez-Poussa, E., Eds.; Springer: Cham, Switzerland, 2019; Volume 794.
29. Stamenković, A.; Ćirić, M.; Bašić, M. Ranks of fuzzy matrices. Applications in state reduction of fuzzy automata. Fuzzy Sets Syst. 2018, 333, 124–139.
30. Merino, L.; Navarro, G.; Santos, E. Induced operators on bounded lattices. Inf. Sci. 2022, 608, 114–136.
31. Dragić, D.; Mihailović, B.; Nedović, L. The general algebraic solution of dual fuzzy linear systems and fuzzy Stein matrix equations. Fuzzy Sets Syst. 2024, 487, 108997.
32. Guo, F.; Fu, R.; Shen, J. Inverses of fuzzy relation matrices with addition-min composition. Fuzzy Sets Syst. 2024, 490, 109037.
33. He, M.; Jiang, H.; Liu, X. General strong fuzzy solutions of fuzzy Sylvester matrix equations involving the BT inverse. Fuzzy Sets Syst. 2024, 480, 108862.
34. Atanassov, K.; Gargov, G. Interval valued intuitionistic fuzzy sets. Fuzzy Sets Syst. 1989, 31, 343–349.
35. Jeevaraj, S. Ordering of interval-valued Fermatean fuzzy sets and its applications. Expert Syst. Appl. 2021, 185, 115613.
36. Kumar, V.; Gupta, A.; Taneja, H.C. Interval valued picture fuzzy matrix: Basic properties and application. Soft Comput. 2023, 27, 14929–14950.
37. Padder, R.A.; Alqurashi, T.; Rather, Y.A.; Malge, S. Interval Valued Spherical Fuzzy Matrix in Decision Making. Eur. J. Pure Appl. Math. 2025, 18, 6095.
38. Respondek, J.S. Fast Matrix Multiplication with Applications, 1st ed.; Studies in Big Data; Springer Nature: Cham, Switzerland, 2025; Volume 166.
