
# An Efficient Algorithm for Eigenvalue Problem of Latin Squares in a Bipartite Min-Max-Plus System

1. Department of Mathematics, Quaid-i-Azam University, Islamabad 45320, Pakistan
2. Department of Mathematics and Computer Sciences, Stetson University, DeLand, FL 32723, USA
3. School of Mathematical Sciences, Rochester Institute of Technology, Rochester, NY 14623, USA
4. Department of Mathematics, Wells College, Aurora, NY 13026, USA
\* Author to whom correspondence should be addressed.
Symmetry 2020, 12(2), 311; https://doi.org/10.3390/sym12020311
Received: 8 January 2020 / Revised: 11 February 2020 / Accepted: 12 February 2020 / Published: 21 February 2020
(This article belongs to the Special Issue Symmetry in Numerical Linear and Multilinear Algebra)

## Abstract

In this paper, we consider eigenproblems for Latin squares in a bipartite min-max-plus system. The focus is on developing a new algorithm to compute the eigenvalue and the eigenvectors (trivial and non-trivial) for Latin squares in a bipartite min-max-plus system, and we illustrate the algorithm with examples. The proposed algorithm is implemented in MATLAB, using the max-plus algebra toolbox. Computationally, our algorithm has a clear advantage over the power algorithm presented by Subiono and van der Woude: it takes 0.088783 s to solve the eigenvalue problem for the Latin square of Example 2, whereas the power algorithm takes 1.718662 s for the same problem. Furthermore, a time complexity comparison is presented, which reveals that the proposed algorithm is less time consuming than some of the existing algorithms.

## 1. Introduction

The time evolution of discrete event dynamic systems can be described through equations composed using three operations: the maximum, the minimum, and the addition. Such systems are called min-max-plus systems. Bipartite min-max-plus systems are determined by the union of two sets of equations: one set containing maximization and addition, and another set containing minimization and addition.
The idea of max-plus algebra first appeared in the 1950s, or even earlier, and it is nowadays studied intensively. In [1], the authors investigated the set of invertible linear operators over a subalgebra of max-plus algebra. For a matrix in max-plus algebra, necessary and sufficient conditions have been established for it to possess various types of g-inverses, including the Moore–Penrose inverse [2]. In [3], the authors characterized the linear operators that preserve the maximal column rank of matrices over max-plus algebra.
Max-plus algebra has many applications in mathematics as well as in areas such as mathematical physics, optimization, combinatorics, and algebraic geometry [4]. Max-plus algebra is used in machine scheduling, telecommunication networks, control theory, manufacturing systems, parallel processing systems, and traffic control [5,6,7]; it is also used in image steganography [8]. Max-plus algebra can be used to model discrete events related to synchronization and time delays, and it has been shown how max-plus algebra can be helpful for the dynamic programming of algorithms. The whole Dutch railway system has also been described as a max-plus system.
The eigenproblem for a matrix A of order $n × n$ is to find an eigenvalue $λ$ and an eigenvector v such that $A v = λ v$. In this article, we consider eigenproblems for bipartite min-max-plus systems. Mostly, the power algorithm [10] is used to compute the eigenvalue and eigenvectors in an iterative way. Umer et al. [13] developed an efficient algorithm to solve the eigenvalue problem in max-plus systems, which computes the eigenvalue as a maximal cycle mean and uses an iterative approach to determine the eigenvectors. In [14], the authors demonstrated that the existence of non-trivial eigenvectors depends on the positions of the greatest and the least elements of the Latin squares in a bipartite min-max-plus system. For further study of eigenproblems for discrete event systems, see [10,14,15,16].
To the best of our knowledge, one of the open problems in max-plus algebra is to find an efficient and fast algorithm for solving the eigenproblems of these systems. In this paper, a robust algorithm is developed that calculates the eigenvectors corresponding to an eigenvalue $λ$ in an iterative way. In particular, we apply this algorithm to find the trivial and non-trivial eigenvectors for Latin squares in bipartite min-max-plus systems. A computational comparison of the proposed algorithm with the power algorithm presented by Subiono and van der Woude [12] is given, which shows the efficiency of the proposed algorithm.
The structure of the paper is as follows. Section 2 presents some preliminary notions and bipartite min-max-plus systems, and then develops a robust algorithm for determining the eigenvalue and eigenvectors of such systems. In Section 3, we present these systems for Latin squares and calculate trivial and non-trivial eigenvectors for such models. The proposed algorithm and the power algorithm presented by Subiono and van der Woude [12] are implemented in MATLAB (using the max-plus algebra toolbox for MATLAB [17]). A time complexity comparison of these algorithms is given at the end of Section 3. Finally, conclusions are drawn in Section 4.

## 2. Bipartite Min-Max-Plus Systems

First of all, we denote the set of real numbers and the set of whole numbers by $R$ and $W$, respectively. We define $R_ϵ = R ∪ \{ϵ\}$ and $R_τ = R ∪ \{τ\}$, where $ϵ = −∞$ and $τ = +∞$. A matrix or a vector whose components all equal $−∞$ is denoted by $ϵ$, whereas a matrix or a vector whose components all equal $+∞$ is denoted by $τ$. $\underline{n}$ denotes the set of the first n positive integers, i.e., $\underline{n} = \{1, …, n\}$. The following scalar operations are introduced:
$$r ⊕ s = \max\{r, s\}, \quad \text{for each } r, s ∈ R_ϵ,$$
$$r ⊕' s = \min\{r, s\}, \quad \text{for each } r, s ∈ R_τ,$$
$$r ⊗ s = r + s, \quad \text{for each } r, s ∈ R.$$
The algebraic structure $R_{min} = (R_τ, ⊕', ⊗)$ represents min-plus algebra and $R_{max} = (R_ϵ, ⊕, ⊗)$ represents max-plus algebra. Since in both $R_{min}$ and $R_{max}$ the multiplication operator is defined by addition, the notation for the multiplication operator is the same in both systems.
The collection of all matrices of order $m × n$ in min-plus and max-plus algebra is denoted by $R_{min}^{m×n}$ and $R_{max}^{m×n}$, respectively, while $R_{min}^m$ and $R_{max}^m$ denote the sets of all vectors in min-plus and max-plus algebra, respectively. The scalar operations are extended to matrices as follows.
Suppose that $A = [ a i j ]$, $B = [ b i j ]$, $U = [ u i j ]$, $V = [ v i j ]$ such that $A , B ∈ R max m × n$; $U , V ∈ R min m × n$ and $α ∈ R$ then
$$A ⊕ B = [c_{ij}], \quad \text{where } c_{ij} = \max\{a_{ij}, b_{ij}\},$$
$$U ⊕' V = [w_{ij}], \quad \text{where } w_{ij} = \min\{u_{ij}, v_{ij}\},$$
$$α ⊗ A = α ⊗ [a_{ij}] = α + [a_{ij}].$$
If $A ∈ R max m × r$, $B ∈ R max r × n$, $U ∈ R min m × r$ and $V ∈ R min r × n$, then
$$A ⊗ B = [c_{ij}], \quad \text{where } c_{ij} = \bigoplus_{k=1}^{r} (a_{ik} ⊗ b_{kj}) = \max_{k}\{a_{ik} + b_{kj}\},$$
$$U ⊗' V = [w_{ij}], \quad \text{where } w_{ij} = {\bigoplus_{k=1}^{r}}{}' (u_{ik} ⊗ v_{kj}) = \min_{k}\{u_{ik} + v_{kj}\},$$
for $k ∈ \underline{r}$, $i ∈ \underline{m}$, $j ∈ \underline{n}$.
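These matrix operations are straightforward to implement numerically. The following is a minimal sketch (assuming NumPy, with $ϵ = −∞$ and $τ = +∞$ represented by `-np.inf` and `np.inf`; the function names are our own illustration, not from the MATLAB toolbox used in the paper):

```python
import numpy as np

EPS = -np.inf  # epsilon, the neutral element for max-plus addition
TAU = np.inf   # tau, the neutral element for min-plus addition

def oplus(A, B):
    """Max-plus matrix addition: entrywise maximum."""
    return np.maximum(A, B)

def oplus_prime(U, V):
    """Min-plus matrix addition: entrywise minimum."""
    return np.minimum(U, V)

def otimes(A, B):
    """Max-plus matrix product: c_ij = max_k (a_ik + b_kj)."""
    m, n = A.shape[0], B.shape[1]
    C = np.full((m, n), EPS)
    for i in range(m):
        for j in range(n):
            C[i, j] = np.max(A[i, :] + B[:, j])
    return C

def otimes_prime(U, V):
    """Min-plus matrix product: w_ij = min_k (u_ik + v_kj)."""
    m, n = U.shape[0], V.shape[1]
    W = np.full((m, n), TAU)
    for i in range(m):
        for j in range(n):
            W[i, j] = np.min(U[i, :] + V[:, j])
    return W
```

In this representation the scalar product $α ⊗ A$ is simply `alpha + A`.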
Note that, unlike the scalar case, the notation for the matrix product differs between the two systems. The addition is defined as the maximum in $R_{max}$ and as the minimum in $R_{min}$. A bipartite min-max-plus system can be represented as:
$$u_i(l+1) = \max\{a_{i1} + w_1(l), \ldots, a_{in} + w_n(l)\},$$
$$w_j(l+1) = \min\{b_{j1} + u_1(l), \ldots, b_{jm} + u_m(l)\},$$
where $u_i(l), a_{ij} ∈ R_ϵ$; $w_j(l), b_{ji} ∈ R_τ$; and $l ∈ W$, for all $i ∈ \underline{m}$, $j ∈ \underline{n}$. These equations can be written as:
$u i ( l + 1 ) = ⨁ j = 1 n ( a i j ⊗ w j ( l ) ) , w j ( l + 1 ) = ⨁ i = 1 m ′ ( b j i ⊗ u i ( l ) ) .$
The above equations can be denoted as
$$u(l+1) = A ⊗ w(l), \qquad w(l+1) = B ⊗' u(l), \qquad \text{for } l ∈ W, \tag{1}$$
where $W$ is the set of whole numbers and
$$u(l) = (u_1(l), u_2(l), \ldots, u_m(l))^T ∈ R_ϵ^m, \qquad w(l) = (w_1(l), w_2(l), \ldots, w_n(l))^T ∈ R_τ^n,$$
$$A = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} ∈ R_ϵ^{m×n}, \qquad B = \begin{pmatrix} b_{11} & b_{12} & \cdots & b_{1m} \\ b_{21} & b_{22} & \cdots & b_{2m} \\ \vdots & \vdots & \ddots & \vdots \\ b_{n1} & b_{n2} & \cdots & b_{nm} \end{pmatrix} ∈ R_τ^{n×m}.$$
Compactly, system (1) can be written by a mapping $M(·)$, such that
$$x(l+1) = M(x(l)), \tag{2}$$
where $x(l) = \begin{pmatrix} u(l) \\ w(l) \end{pmatrix} ∈ R^{m+n}$.
For a system of type (2), the notions of eigenvalue and eigenvector are defined as follows. A real number $λ ∈ R$ is said to be an eigenvalue, with corresponding eigenvector $v ∈ R^{m+n}$, if
$$M(v) = λ ⊗ v. \tag{3}$$
The cycle time vector in a system of type (2) is defined as
$$η(A, B) = \lim_{l→∞} \frac{x(l)}{l}.$$
The cycle time vector $η(A, B) ∈ R^{m+n}$ is a vector in which each component is equal to the eigenvalue of the given system of type (2); it is independent of the initial vector $x(0)$. In this section, we determine the eigenvalue $λ$ by computing the cycle time vector. To compute the eigenvectors, we propose an iterative algorithm. For this purpose, define $A_λ = −λ ⊗ A$ and $B_λ = −λ ⊗ B$, corresponding to the eigenvalue $λ$ of a system of type (2). Also define
$$u^*(l+1) = A_λ ⊗ w^*(l), \qquad w^*(l+1) = B_λ ⊗' u^*(l), \tag{4}$$
for $l ∈ W$, where $u * ( l ) ∈ R ϵ m$ and $w * ( l ) ∈ R τ n$.
System (4) can be written as
$$x^*(l+1) = N(x^*(l)) \tag{5}$$
for $l ∈ W$. For a vector v, $N^l(v)$ denotes the vector obtained by applying $N$ to v $l$ times. The relation between (2) and (5) is shown in the following theorem.
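As a numerical sketch of how $M$ and $N$ act (assuming NumPy, with the state stacked as $x = (u; w)$; this is our own illustrative code, not the paper's MATLAB implementation):

```python
import numpy as np

def M(A, B, x):
    """One step of system (2): x = (u; w) -> (A (x) w ; B (x)' u)."""
    m = A.shape[0]
    u, w = x[:m], x[m:]
    u_next = np.array([np.max(A[i, :] + w) for i in range(m)])           # max-plus rows
    w_next = np.array([np.min(B[j, :] + u) for j in range(B.shape[0])])  # min-plus rows
    return np.concatenate([u_next, w_next])

def N(A, B, lam, x):
    """Normalized map of system (5): same as M, but with A - lam and B - lam."""
    return M(A - lam, B - lam, x)
```

By Theorem 1 below, `M(A, B, x)` and `lam + N(A, B, lam, x)` coincide for any state x when `lam` is an eigenvalue.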
Theorem 1.
Let λ be an eigenvalue for a system of type (2) and let $v = \begin{pmatrix} u(l) \\ w(l) \end{pmatrix}$ be a vector; then
$M ( v ) = λ ⊗ N ( v ) .$
Proof.
Since
$$M(v) = \begin{pmatrix} A ⊗ w(l) \\ B ⊗' u(l) \end{pmatrix} = λ ⊗ (−λ) ⊗ \begin{pmatrix} A ⊗ w(l) \\ B ⊗' u(l) \end{pmatrix} = λ ⊗ \begin{pmatrix} (−λ) ⊗ A ⊗ w(l) \\ (−λ) ⊗ B ⊗' u(l) \end{pmatrix} = λ ⊗ \begin{pmatrix} A_λ ⊗ w(l) \\ B_λ ⊗' u(l) \end{pmatrix} = λ ⊗ N(v).$$
□
Corollary 1.
Let v be an eigenvector corresponding to the eigenvalue λ of a system of type $( 2 )$. Then
$N ( v ) = v .$
Proof.
By definition, $M ( v ) = λ ⊗ v$. Using Theorem 1, we get, $λ ⊗ N ( v ) = λ ⊗ v$. Hence $N ( v ) = v$.  □
Theorem 2.
Let λ be an eigenvalue of a system of type (2) and let
$x * ( l + 1 ) = N ( x * ( l ) )$
for $l ∈ W$, where $x * ( 0 )$ is an initial state vector. If $x * ( r ) = x * ( s )$ for some integers $r > s ≥ 0$, then
$M ( v ) = λ ⊗ v ,$
where
$v = x * ( s ) ⊕ … ⊕ x * ( r − 1 ) .$
Proof.
From Theorem 1,
$$M(v) = λ ⊗ N(v) = λ ⊗ \left\{ \begin{pmatrix} A_λ ⊗ w^*(s) \\ B_λ ⊗' u^*(s) \end{pmatrix} ⊕ \ldots ⊕ \begin{pmatrix} A_λ ⊗ w^*(r−1) \\ B_λ ⊗' u^*(r−1) \end{pmatrix} \right\} = λ ⊗ \{x^*(s+1) ⊕ \ldots ⊕ x^*(r)\} = λ ⊗ v.$$
□
We now present Algorithm 1, which computes the eigenvectors corresponding to the eigenvalue $λ$. First, we state three basic assumptions that are necessary for the convergence of the algorithm.
• For any arbitrary initial vector $x(0)$, the system of type (2) ends up in periodic behavior after a finite number of iterations.
• An eigenvalue of the system exists.
• Every periodic behavior has the same average weight, equal to the eigenvalue.
Algorithm 1 Eigenvectors for systems of type (2)
1. Compute the eigenvalue $λ$ by calculating the cycle time vector $η(A, B)$.
2. Define $A_λ = −λ ⊗ A$ and $B_λ = −λ ⊗ B$.
3. Take an initial state vector $x^*(0)$.
4. Iterate $x^*(l+1) = N(x^*(l)) = \begin{pmatrix} A_λ ⊗ w^*(l) \\ B_λ ⊗' u^*(l) \end{pmatrix}$ for $l ∈ W$, until there are integers $r > s ≥ 0$ such that $x^*(r) = x^*(s)$.
5. Compute the candidate eigenvector $v = x^*(s) ⊕ \ldots ⊕ x^*(r−1)$. If $N(v) = v$, then v is a correct eigenvector and the algorithm stops. Else if $N(v) ≠ v$, go to the following step.
6. Take v as the new starting vector and iterate $x^*(l+1) = N(x^*(l))$ until $x^*(t+1) = x^*(t)$ for some $t ≥ 0$. Then $x^*(t)$ is an eigenvector.
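A compact sketch of Algorithm 1 (assuming NumPy, and that the eigenvalue λ is supplied, e.g., from the cycle time vector; ε and τ are `-np.inf` and `np.inf`; the function and variable names are ours, not the paper's MATLAB code):

```python
import numpy as np

def step_N(A_lam, B_lam, x):
    """One iteration x*(l+1) = N(x*(l)) of the normalized system (4)."""
    m = A_lam.shape[0]
    u, w = x[:m], x[m:]
    u_next = np.array([np.max(A_lam[i, :] + w) for i in range(m)])
    w_next = np.array([np.min(B_lam[j, :] + u) for j in range(B_lam.shape[0])])
    return np.concatenate([u_next, w_next])

def algorithm1(A, B, lam, x0, max_iter=1000):
    """Iterate N until x*(r) = x*(s); form v = x*(s) (+) ... (+) x*(r-1);
    if N(v) != v, restart from v and iterate until a fixed point is reached."""
    A_lam, B_lam = A - lam, B - lam
    history = [np.asarray(x0, dtype=float)]
    for _ in range(max_iter):
        x = step_N(A_lam, B_lam, history[-1])
        for s, past in enumerate(history):
            if np.array_equal(x, past):                      # x*(r) = x*(s) detected
                v = np.max(np.vstack(history[s:]), axis=0)   # (+) over x*(s), ..., x*(r-1)
                while not np.array_equal(step_N(A_lam, B_lam, v), v):
                    v = step_N(A_lam, B_lam, v)              # restart step of the algorithm
                return v
        history.append(x)
    raise RuntimeError("no periodic behavior detected")
```

For the system of Example 1 below, with $λ = 2$, this sketch returns $v = (2, 5, 3, 2, 2, 1)^T$, matching the worked computation.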
Theorem 3.
Let λ be an eigenvalue and v be a vector computed as in Algorithm 1 for a system of type (2), then
$N ( v ) ≥ v .$
Proof.
From the given algorithm
$v = x * ( s ) ⊕ … ⊕ x * ( r − 1 ) ≥ x * ( l ) for all l ∈ { s , … , r − 1 } .$
Hence
$$N(v) ≥ N(x^*(l)) = x^*(l+1) \quad \text{for all } l ∈ \{s, \ldots, r−1\}.$$
These inequalities imply that
$N ( v ) ≥ x * ( s + 1 ) ⊕ … ⊕ x * ( r ) = v .$
□
Lemma 1.
Let v be the vector computed by Algorithm 1 for a bipartite min-max-plus system, and let (5) be restarted with $x^*(0) = v$; then
$x * ( l + 1 ) ≥ x * ( l ) for all l = 0 , 1 , 2 , …$
Proof.
Since $x^*(l) = N^l(v)$, it follows that
$$x^*(l+1) = N^{l+1}(v) = N^l(N(v)) ≥ N^l(v) = x^*(l),$$
where the inequality uses $N(v) ≥ v$ (Theorem 3) together with the monotonicity of $N$.
□
Proof of Algorithm 1.
Since the system ends up in periodic behavior after a finite number of iterations, the iteration in the algorithm terminates with $x^*(r) = x^*(s)$ for some integers $r > s ≥ 0$. If $N(v) = v$, then v is a correct eigenvector and the algorithm can clearly stop. If $N(v) ≠ v$, then restart (5) with $x^*(0) = v$ as the new initial vector.
By the first assumption, there exist integers $r > s ≥ 0$ with $x^*(r) = x^*(s)$. By Lemma 1, $x^*(l+1) ≥ x^*(l)$ for all $l ∈ \{s, \ldots, r−1\}$. Our goal is to show that $x^*(l+1) = x^*(l)$ for all $l ∈ \{s, \ldots, r−1\}$. Suppose, on the contrary, that $x^*_{j'}(l'+1) > x^*_{j'}(l')$ for some integers $l', j'$ with $s ≤ l' ≤ r−1$ and $1 ≤ j' ≤ m+n$. Since the iterates are non-decreasing, this strict increase persists, so that $x^*_{j'}(r) > x^*_{j'}(s)$, which contradicts $x^*(r) = x^*(s)$. Hence $x^*(l+1) = x^*(l)$ for all $l ∈ \{s, \ldots, r−1\}$, which completes the proof.  □
Example 1.
Consider a system of type (2), given as follows:
$$A = \begin{pmatrix} 2 & 0 & 3 \\ 1 & 5 & ϵ \\ ϵ & 2 & 4 \end{pmatrix}, \qquad B = \begin{pmatrix} τ & 3 & 1 \\ 2 & 4 & 4 \\ 5 & τ & 0 \end{pmatrix}.$$
The cycle time vector is $η(A, B) = (2, 2, 2, 2, 2, 2)^T$; therefore the eigenvalue is $λ = 2$.
We get,
$$A_λ = \begin{pmatrix} 0 & −2 & 1 \\ −1 & 3 & ϵ \\ ϵ & 0 & 2 \end{pmatrix}, \qquad B_λ = \begin{pmatrix} τ & 1 & −1 \\ 0 & 2 & 2 \\ 3 & τ & −2 \end{pmatrix}.$$
Now, take the initial state vector
$$x^*(0) = \begin{pmatrix} u^*(0) \\ w^*(0) \end{pmatrix} \quad \text{with} \quad u^*(0) = (0, 1, 0)^T, \quad w^*(0) = (1, 0, 1)^T.$$
The following sequence is obtained after iterating (5):
$$x^*(0) = (0,1,0,1,0,1)^T → x^*(1) = (2,3,3,−1,0,−2)^T → x^*(2) = (−1,3,0,2,2,1)^T → x^*(3) = (2,5,3,−1,−1,−2)^T → x^*(4) = (−1,2,0,2,2,1)^T → x^*(5) = (2,5,3,−1,−1,−2)^T.$$
Since $x^*(5) = x^*(3)$, we have $s = 3$ and $r = 5$. To compute a corresponding eigenvector, we compute the vector v:
$$v = x^*(s) ⊕ \ldots ⊕ x^*(r−1) = x^*(3) ⊕ x^*(4) = (2,5,3,−1,−1,−2)^T ⊕ (−1,2,0,2,2,1)^T = (2,5,3,2,2,1)^T.$$
Now we verify whether v is a correct eigenvector:
$$M(v) = (4,7,5,4,4,3)^T = λ ⊗ v,$$
which shows that v is a correct eigenvector, obtained here by Algorithm 1.

## 3. Latin Squares in Bipartite (Min, Max, Plus)-Systems

A Latin square is a square matrix of order n whose entries are n independent variables over $R^+$, arranged so that each row and each column is a different permutation of the n variables [18]. The following is an example of a Latin square of order 4:
$$L = \begin{pmatrix} 3 & 2 & 4 & 1 \\ 4 & 1 & 3 & 2 \\ 2 & 4 & 1 & 3 \\ 1 & 3 & 2 & 4 \end{pmatrix}.$$
We consider Latin squares of size n in a system of type (2). There are four possibilities for the Latin squares in a system of type (2): (1) the entries of both matrices A and B are in $\underline{n}$; (2) the entries of A are in $\underline{n}_ϵ$ and the entries of B are in $\underline{n}$; (3) the entries of A are in $\underline{n}$ and the entries of B are in $\underline{n}_τ$; (4) the entries of A are in $\underline{n}_ϵ$ and the entries of B are in $\underline{n}_τ$, where $\underline{n}_ϵ = \{1, …, n−1, ϵ\}$ and $\underline{n}_τ = \{1, …, n−1, τ\}$.
In this section, we consider Latin squares for systems of type (2), and Algorithm 1 is extended to calculate the eigenvalue and eigenvectors for such systems. In [14], the authors show that for Latin squares in a system of type (2), the eigenvalue $λ$ is determined as
$λ = max ( A ) + min ( B ) 2 .$
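This formula makes the eigenvalue of a Latin-square system immediate to compute. A short sketch (assuming NumPy; since $ϵ = −∞$ never attains a maximum and $τ = +∞$ never attains a minimum, plain `np.max` and `np.min` already ignore them):

```python
import numpy as np

def latin_eigenvalue(A, B):
    """lambda = (max(A) + min(B)) / 2 for Latin squares A and B,
    with eps = -inf in A and tau = +inf in B handled automatically."""
    return (np.max(A) + np.min(B)) / 2
```

For the Latin squares of Example 2 below, this gives $λ = (3 + 1)/2 = 2$.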
Now, we propose Algorithm 2 to solve the eigenvalue problem for Latin squares in a system of type (2) as follows:
Algorithm 2 Eigenvalue and Eigenvectors for Latin squares in a system of type (2)
1. Compute the eigenvalue as $λ = \frac{\max(A) + \min(B)}{2}$.
2. Define $A_λ = −λ ⊗ A$ and $B_λ = −λ ⊗ B$.
3. Take a starting vector $x^*(0)$.
4. Iterate (5) until $x^*(r) = x^*(s)$ for some integers $r > s ≥ 0$.
5. Determine the candidate eigenvector $v = x^*(s) ⊕ \ldots ⊕ x^*(r−1)$. If $N(v) = v$, then v is a correct eigenvector corresponding to the eigenvalue $λ$ and the algorithm stops. Else if $N(v) ≠ v$, go to the following step.
6. Take v as the initial state vector and iterate $x^*(l+1) = N(x^*(l))$ until $x^*(t+1) = x^*(t)$ for some $t ≥ 0$. Then $x^*(t)$ is an eigenvector.
Here we find the eigenvalue and eigenvectors for Latin squares in systems of type (2) by using Algorithm 2. First, we recall the power algorithm (Algorithm 3) for systems of type (2), proposed by Subiono and van der Woude [12], and then compare it with Algorithm 2.
Algorithm 3 Eigenproblems for bipartite (min, max, plus)-systems
1. Define a starting vector $x(0)$.
2. Iterate (2) until there exist a real number c and integers $r > s ≥ 0$ such that $x(r) = c ⊗ x(s)$.
3. The eigenvalue is obtained as $λ = \frac{c}{r − s}$.
4. Determine the candidate eigenvector $v = ⊕_{j=1}^{r−s} (λ^{⊗(r−s−j)} ⊗ x(s+j−1))$.
5. If $M(v) = λ ⊗ v$, then v is the required eigenvector corresponding to the eigenvalue $λ$ and the algorithm can stop. If $M(v) ≠ λ ⊗ v$, the algorithm continues as follows.
6. Define $x(0) = v$ as the new starting vector and restart (2) until $x(r+1) = λ ⊗ x(r)$ for some r. Then $x(r)$ is a correct eigenvector of system (2).
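For comparison, the power algorithm can be sketched as follows (assuming NumPy; the condition $x(r) = c ⊗ x(s)$ is detected by checking that $x(r) − x(s)$ is a constant vector; this is our illustrative port, not the authors' MATLAB code):

```python
import numpy as np

def step_M(A, B, x):
    """One iteration x(l+1) = M(x(l)) of system (2)."""
    m = A.shape[0]
    u, w = x[:m], x[m:]
    u_next = np.array([np.max(A[i, :] + w) for i in range(m)])
    w_next = np.array([np.min(B[j, :] + u) for j in range(B.shape[0])])
    return np.concatenate([u_next, w_next])

def power_algorithm(A, B, x0, max_iter=1000):
    """Iterate M until x(r) = c (x) x(s); then lambda = c / (r - s) and
    v = (+)_{j=1..r-s} lambda^{(x)(r-s-j)} (x) x(s+j-1)."""
    xs = [np.asarray(x0, dtype=float)]
    for r in range(1, max_iter + 1):
        xs.append(step_M(A, B, xs[-1]))
        for s in range(r):
            d = xs[r] - xs[s]
            if np.all(d == d[0]):                     # x(r) = c (x) x(s) detected
                c = d[0]
                lam = c / (r - s)
                terms = [lam * (r - s - j) + xs[s + j - 1] for j in range(1, r - s + 1)]
                v = np.max(np.vstack(terms), axis=0)  # the (+) of the scaled states
                while not np.array_equal(step_M(A, B, v), lam + v):
                    v = step_M(A, B, v)               # restart step of the algorithm
                return lam, v
    raise RuntimeError("no periodic behavior detected")
```

On Example 2 below this yields $λ = 2$ and $v = (12, 12, 11, 12, 11, 10, 11, 11)^T$, matching the worked computation.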
To illustrate Algorithms 2 and 3, consider the following example, in which A and B are Latin squares with entries in $\underline{n}_ϵ$ and $\underline{n}_τ$, respectively.
Example 2.
Let A and B be Latin squares in a system of type (2), given as follows:
$$A = \begin{pmatrix} 3 & 2 & ϵ & 1 \\ ϵ & 1 & 3 & 2 \\ 2 & 3 & 1 & ϵ \\ 1 & ϵ & 2 & 3 \end{pmatrix}, \qquad B = \begin{pmatrix} 2 & 3 & τ & 1 \\ 3 & τ & 1 & 2 \\ 1 & 2 & 3 & τ \\ τ & 1 & 2 & 3 \end{pmatrix}.$$
1. The eigenvalue λ is given as
$$λ = \frac{\max(A) + \min(B)}{2} = 2.$$
By Algorithm 2,
$$A_λ = \begin{pmatrix} 1 & 0 & ϵ & −1 \\ ϵ & −1 & 1 & 0 \\ 0 & 1 & −1 & ϵ \\ −1 & ϵ & 0 & 1 \end{pmatrix}, \qquad B_λ = \begin{pmatrix} 0 & 1 & τ & −1 \\ 1 & τ & −1 & 0 \\ −1 & 0 & 1 & τ \\ τ & −1 & 0 & 1 \end{pmatrix}.$$
Now, take the initial state vector
$$x^*(0) = \begin{pmatrix} u^*(0) \\ w^*(0) \end{pmatrix} \quad \text{with} \quad u^*(0) = (0, 1, 0, 1)^T, \quad w^*(0) = (1, 0, 1, 0)^T.$$
The following sequence is obtained after iterating (5):
$$x^*(0) = (0,1,0,1,1,0,1,0)^T → x^*(1) = (2,2,1,1,0,−1,−1,0)^T → x^*(2) = (1,0,0,1,0,0,1,1)^T → x^*(3) = (1,2,1,2,0,−1,0,−1)^T → x^*(4) = (1,1,0,0,1,0,0,1)^T → x^*(5) = (2,1,1,2,−1,−1,0,0)^T → x^*(6) = (0,1,0,1,1,0,1,0)^T.$$
Since $x^*(6) = x^*(0)$, we have $s = 0$ and $r = 6$. The required eigenvector v is computed as
$$v = x^*(s) ⊕ \ldots ⊕ x^*(r−1) = x^*(0) ⊕ \ldots ⊕ x^*(5) = (0,1,0,1,1,0,1,0)^T ⊕ (2,2,1,1,0,−1,−1,0)^T ⊕ (1,0,0,1,0,0,1,1)^T ⊕ (1,2,1,2,0,−1,0,−1)^T ⊕ (1,1,0,0,1,0,0,1)^T ⊕ (2,1,1,2,−1,−1,0,0)^T = (2,2,1,2,1,0,1,1)^T.$$
Now we verify whether v is a correct eigenvector:
$$M(v) = (4,4,3,4,3,2,3,3)^T = λ ⊗ v,$$
which shows that v is a correct eigenvector, obtained by Algorithm 2.
2. For Algorithm 3, take the initial vector
$$x(0) = \begin{pmatrix} u(0) \\ w(0) \end{pmatrix} \quad \text{with} \quad u(0) = (0, 1, 0, 1)^T, \quad w(0) = (1, 0, 1, 0)^T.$$
The following sequence is obtained after iterating (2):
$$x(0) = (0,1,0,1,1,0,1,0)^T → x(1) = (4,4,3,3,2,1,1,2)^T → x(2) = (5,4,4,5,4,4,5,5)^T → x(3) = (7,8,7,8,6,5,6,5)^T → x(4) = (9,9,8,8,9,8,8,9)^T → x(5) = (12,11,11,12,9,9,10,10)^T → x(6) = (12,13,12,13,13,12,13,12)^T.$$
Since $x(6) = 12 ⊗ x(0)$, it follows that $s = 0$, $r = 6$, and $c = 12$, so the eigenvalue is $λ = c/(r−s) = 12/6 = 2$. The corresponding eigenvector v is obtained as
$$v = ⊕_{j=1}^{r−s} (λ^{⊗(r−s−j)} ⊗ x(s+j−1)) = ⊕_{j=1}^{6} (λ^{⊗(6−j)} ⊗ x(j−1)) = λ^{⊗5} ⊗ x(0) ⊕ λ^{⊗4} ⊗ x(1) ⊕ λ^{⊗3} ⊗ x(2) ⊕ λ^{⊗2} ⊗ x(3) ⊕ λ ⊗ x(4) ⊕ x(5)$$
$$= (10,11,10,11,11,10,11,10)^T ⊕ (12,12,11,11,10,9,9,10)^T ⊕ (11,10,10,11,10,10,11,11)^T ⊕ (11,12,11,12,10,9,10,9)^T ⊕ (11,11,10,10,11,10,10,11)^T ⊕ (12,11,11,12,9,9,10,10)^T.$$
Finally, we obtain
$$v = (12, 12, 11, 12, 11, 10, 11, 11)^T,$$
which is a correct eigenvector.
Using Algorithm 3 in Example 2, the system ends up in periodic behavior after the 6th iteration, and we get $s = 0$, $r = 6$, and $c = 12$. With a different initial state vector in Example 2, one can instead get $s = 0$, $r = 2$, and $c = 4$. So, in the case of Algorithm 3, one can obtain different values of c for different initial state vectors, whereas Algorithm 2 needs no such scalar c at all.
Remark 1.
The main focus of this paper is to develop an efficient algorithm for finding the eigenvectors of systems of type (2). We have also carried out a computational experiment comparing Algorithms 2 and 3 for Latin squares. We implemented both algorithms in MATLAB for $r ∈ A_1$, $s ∈ A_2$, and $c ∈ A_3$ such that $|A_i| = n_i$, $i = 1, 2, 3$, where $|A_i|$ denotes the cardinality of the set $A_i$. Algorithm 3 stops when there exist integers $r > s ≥ 0$ and a real number c with $x(r) = x(s) ⊗ c$, while Algorithm 2 stops when $x^*(r) = x^*(s)$ for some integers $r > s ≥ 0$. Therefore, the time complexity of Algorithm 3 is $O(n_1 n_2 n_3)$, while that of Algorithm 2 is $O(n_1 n_2)$; thus the proposed algorithm takes less time than the compared one.
The computational efficiency of the proposed method can be seen by comparing the runtimes of Algorithms 2 and 3. Both algorithms were executed in the same laptop environment: a 1.70 GHz CPU, 4 GB of RAM, and MATLAB (R2015a). We measured the time needed to compute the eigenvalue and an eigenvector for Example 2 with both algorithms. The time consumption of Algorithms 2 and 3 was 0.088783 s and 1.718662 s, respectively, which clearly shows that the proposed algorithm is faster than the other one. Moreover, with Algorithm 2 the eigenvector v is obtained by the simple formula $v = x^*(s) ⊕ \ldots ⊕ x^*(r−1)$, whereas with Algorithm 3 an eigenvector is obtained as
$v = ⊕ j = 1 r − s ( λ ⊗ ( r − s − j ) ⊗ x ( s + j − 1 ) ) .$
which is more cumbersome than the corresponding formula in Algorithm 2.

## 4. Conclusions

The eigenproblem of bipartite min-max-plus systems for Latin squares has been discussed in this work. The iterative Algorithm 1 was developed for computing the trivial and non-trivial eigenvectors of systems of type (2). In particular, this algorithm was extended to compute the eigenvalue and eigenvectors for Latin squares in a system of type (2). Finally, a computational comparison was given. Experimental results show that the proposed Algorithm 2 is much more computationally efficient than the compared one. Of course, Algorithm 2 applies only to Latin squares in a system of type (2), while Algorithm 3 may be used for all bipartite min-max-plus systems. Similarly, one can derive an algorithm to calculate the eigenvalue and eigenvectors for separated min-max-plus systems.

## Author Contributions

The individual contributions and responsibilities of the authors were as follows: U.H., M.U., and F.A. designed and proposed the research to A.A. and P.K. During the implementation of the algorithms, F.A., A.A., and P.K. provided useful advice to M.U. and U.H. All authors participated in writing the manuscript. All authors have read and agreed to the published version of the manuscript.

## Funding

This research received no external funding.

## Acknowledgments

We thank the referees for their valuable suggestions and helpful remarks.

## Conflicts of Interest

The authors declare no conflict of interest.

## References

1. Song, S.Z.; Kang, K.T.; Jun, Y.B. Invertible commutativity preservers of matrices over max algebra. Czechoslovak Math. J. 2006, 56, 1185–1192. [Google Scholar] [CrossRef]
2. Kang, K.T.; Song, S.Z. Regular matrices and their generalized inverses over the max algebra. Linear Multilinear Alg. 2015, 63, 1649–1663. [Google Scholar] [CrossRef]
3. Song, S.Z.; Kang, K.T. Column ranks and their preservers of matrices over max algebra. Linear Multilinear Alg. 2003, 51, 311–318. [Google Scholar] [CrossRef]
4. Halburd, R.G.; Southall, N.J. Tropical nevanlinna theory and ultra-discrete equations. Int. Math. Res. Not. 2009, 5, 887–911. [Google Scholar]
5. Cuninghame-Green, R.A. Lecture notes in economics and mathematical systems. In Minimax Algebra; Springer: New York, NY, USA, 1979. [Google Scholar]
6. De Schutter, B. On the ultimate behavior of the sequence of consecutive powers of a matrix in the max-plus algebra. Linear Alg. Appl. 2000, 307, 103–117. [Google Scholar] [CrossRef]
7. Gaubert, S. Methods and applications of (max,+) linear algebra. In Annual Symposium on Theoretical Aspects of Computer Science; Springer: Berlin/Heidelberg, Germany, 1997; pp. 261–282. [Google Scholar]
8. Santoso, K.A.; Suprajitno, H. On max-plus algebra and its application on image steganography. Sci. World J. 2018, 6718653. [Google Scholar] [CrossRef] [PubMed]
9. Kubo, S.; Nishinari, K. Applications of max-plus algebra to flow shop scheduling problems. Discret. Appl. Math. 2018, 247, 278–293. [Google Scholar] [CrossRef]
10. Braker, J.G.; Olsder, G.J. The power algorithm in max algebra. Linear Alg. Appl. 1993, 182, 67–89. [Google Scholar] [CrossRef]
11. Subiono. On Classes of Min-Max-Plus Systems and Their Application. Ph.D. Thesis, Delft University of Technology, Delft, The Netherlands, 2000. [Google Scholar]
12. Subiono; van der Woude, J. Power algorithms for (max,+)- and bipartite (min,max,+)-systems. Discret. Event Dyn. Syst. 2000, 10, 369–389. [Google Scholar] [CrossRef]
13. Umer, M.; Hayat, U.; Abbas, F. An efficient algorithm for nontrivial eigenvectors in max-plus algebra. Symmetry 2019, 11, 738. [Google Scholar] [CrossRef]
14. Subiono; Mufid, M.S.; Adzkiya, D. Eigenproblems of latin squares in bipartite (min, max, plus)-systems. Discret. Event Dyn. Syst. 2016, 26, 657–668. [Google Scholar] [CrossRef]
15. Akian, M.; Gaubert, S.; Nitica, V.; Singer, I. Best approximation in maxplus semimodules. Linear Alg. Appl. 2011, 435, 3261–3296. [Google Scholar] [CrossRef]
16. Garca-Planas, M.I.; Magret, M.D. Eigenvectors of permutation matrices. Adv. Pure Math. 2015, 5, 390–394. [Google Scholar] [CrossRef]
17. Stańczyk, J. Max-Plus Algebra Toolbox for Matlab. 2016. Available online: http://gen.up.wroc.pl/stanczyk/mpa/ (accessed on 16 February 2020).
18. McKay, B.D.; Wanless, I.M. On the number of Latin squares. Ann. Comb. 2005, 9, 334–344. [Google Scholar] [CrossRef]