1. Introduction
One of the oldest topics related to difference equations and systems of difference equations is their solvability. Many classical methods for solving some classes of equations and systems, including linear ones, can be found in the well-known books [1,2,3,4,5,6,7].
There has been some renewed recent interest in the topic, especially in the solvability of various classes of nonlinear difference equations and systems. One of the reasons for this is that our method from 2004 for solving the following second-order nonlinear difference equation
has attracted some attention. Namely, for the case when
for every
, the equation can be transformed into a linear one by using a suitable change of variables. Some generalizations of the equation, studied by developing the method, can be found in [8,9,10] (see also [11], where a slight extension of the equation was studied in another way). A related solvable system of difference equations was treated in [12]. Since that time, various modifications of the method have often been used (see [13,14] and the references therein for some related difference equations, as well as [15,16,17] and the references therein for some related systems of difference equations). It should be pointed out that the systems are usually symmetric or close-to-symmetric; their study was popularized by Papaschinopoulos and Schinas ([18,19,20,21,22,23,24]). In some of their papers, such as [19,20,21,23], they study the solvability and the long-term behaviour of solutions to the equations and systems by finding their invariants. For some applications of solvability and related matters, see [6,25,26,27,28,29]. Some product-type difference equations and systems have been essentially solved by using the linear ones, but in a more complex way ([30,31,32]). Several methods, including the method of transformation and methods connected to product-type equations and systems, can be found in the representative paper [33].
Recall that a sequence , , is a solution to a k-dimensional system of difference equations if it satisfies the system for every . If every solution to a system can be obtained from a finite family of closed-form formulas, then we say that the system is solvable in closed form.
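To make the notion concrete, here is a minimal sketch with an illustrative two-dimensional linear system of our own choosing (not one of the systems studied in this paper): for x_{n+1} = a y_n, y_{n+1} = b x_n, a closed-form formula is easily guessed and can be checked against direct iteration.

```python
# Illustrative sketch: the linear system x_{n+1} = a*y_n, y_{n+1} = b*x_n
# is solvable in closed form, since two steps multiply both components by a*b.
def iterate(a, b, x0, y0, n):
    x, y = x0, y0
    for _ in range(n):
        x, y = a * y, b * x
    return x, y

def closed_form(a, b, x0, y0, n):
    # n = 2*m + r: apply (a*b)**m, then one extra step if r == 1
    m, r = divmod(n, 2)
    if r == 0:
        return (a * b) ** m * x0, (a * b) ** m * y0
    return (a * b) ** m * a * y0, (a * b) ** m * b * x0

a, b, x0, y0 = 2, 3, 1, 5
for n in range(8):
    assert iterate(a, b, x0, y0, n) == closed_form(a, b, x0, y0, n)
```

Every solution is thus given by one finite family of formulas, which is exactly the sense of "solvable in closed form" used above.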
It is not difficult to see that the following nonlinear system of difference equations
where the parameters
and the initial values
are complex numbers, is solvable in closed form (see, for example, [34]).
It is easily seen that system (1) is related to the equation
(the bilinear one) with the initial condition
(note that for every well-defined solution to system (1),
is defined, since in this case it must be
). It is well known that there are several methods for solving Equation (2). For example, the equation can be solved by transforming it into a two-dimensional linear system (see [4]), which can in turn be solved by several methods (for the solution of more general linear systems see, for example, [2,5]). The original Russian version of the book [4] from 1937, which at the moment can be freely found on the internet, gives the solution. In [6,7,35], a solution to the equation is presented by transforming it, using a suitable change of variables, into a homogeneous linear second-order difference equation with constant coefficients. The idea was later used in our paper [36] where, among other things, a representation of the general solution to the equation was given in terms of the so-called generalized Fibonacci sequence. Equation (2) is also connected with finding the nth power of the following matrix
associated to the equation, which can also be used to obtain the general solution to Equation (2).
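The exact displays are elided above, but for the standard bilinear (linear-fractional) equation z_{n+1} = (a z_n + b)/(c z_n + d), the connection with matrix powers can be sketched as follows: the n-th iterate is the Möbius map induced by the n-th power of the associated matrix [[a, b], [c, d]] (the coefficient values below are illustrative, not taken from the paper).

```python
# Sketch: iterating the bilinear equation z_{n+1} = (a*z_n + b)/(c*z_n + d)
# agrees with applying the Moebius map of M**n, where M = [[a, b], [c, d]].
from fractions import Fraction

def matpow2(M, n):
    # naive 2x2 integer matrix power
    R = [[1, 0], [0, 1]]
    for _ in range(n):
        R = [[R[0][0]*M[0][0] + R[0][1]*M[1][0], R[0][0]*M[0][1] + R[0][1]*M[1][1]],
             [R[1][0]*M[0][0] + R[1][1]*M[1][0], R[1][0]*M[0][1] + R[1][1]*M[1][1]]]
    return R

def bilinear_iterate(a, b, c, d, z0, n):
    z = Fraction(z0)
    for _ in range(n):
        z = (a * z + b) / (c * z + d)
    return z

def via_matrix_power(a, b, c, d, z0, n):
    (p, q), (r, s) = matpow2([[a, b], [c, d]], n)
    return Fraction(p * z0 + q, r * z0 + s)

a, b, c, d, z0 = 2, 1, 1, 1, 3   # illustrative positive coefficients
for n in range(1, 7):
    assert bilinear_iterate(a, b, c, d, z0, n) == via_matrix_power(a, b, c, d, z0, n)
```

This is why a closed-form expression for the matrix power immediately yields the general solution of the bilinear equation.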
All these facts show the importance of the system of difference equations (1) and suggest that there are many interesting things behind the system which could be studied in detail. Having noticed these facts, a natural question arises of finding some related three-dimensional systems of difference equations which are solvable in closed form.
Motivated by the problem, we have investigated some natural extensions of system (1) and, recently in [37], have shown the solvability of the following three-dimensional system
where the parameters
, and initial values
are complex numbers.
We want to point out that in [37] we showed that system (4) is
practically solvable, in the sense that the set of closed-form formulas for its solutions can be explicitly given for all possible values of the parameters
,
, and initial values
. To clarify the notion, note that, for example, a homogeneous linear difference equation with constant coefficients of order greater than or equal to five is an example of a
theoretically solvable difference equation which need not be practically solvable, because the characteristic polynomial associated to the equation need not be solvable by radicals in this case ([38]).
Pushing this line of investigation further, quite recently, in [39], we have shown that the following four-dimensional system
where the parameters
,
, and initial values
, are complex numbers, is also practically solvable, by giving a detailed description of how the closed-form formulas can be found in all possible cases.
The line of investigation in [34,37,39] has motivated us to try to find out what decides the solvability of the systems studied therein, that is, of the systems of difference equations (1), (4) and (5). The fact that matrices formed from the parameters of the systems appeared in their study in [37,39], together with the fact that, as we have already mentioned, the general solution to system (1) can be obtained by using the matrix (3), strongly suggested that matrices play an important role in the solvability of these systems. This also suggested to us that systems (4) and (5) should be written in the following, somewhat nicer, forms
respectively.
Our aim is to unify and extend the results in [34,37,39] by explaining what is behind the solvability of systems (1), (4) and (5). To do this, motivated by the forms of systems (4) and (5) given in (6) and (7), here we consider the following general system of difference equations
for
, where
f is a complex-valued function on
, such that
and
when
,
By
,
, we denote the binomial coefficients. They can be defined algebraically as the coefficients of the polynomial
,
, that is, we have
By comparing the coefficients in the following identity
, the following recurrence relation
is obtained for
such that
For more information regarding the coefficients, consult the following classics: [4,7,40,41,42,43]. It is interesting that the recurrence relation (12) is itself a solvable difference equation, but one with two independent variables; such equations are usually called partial difference equations (for some results on their solvability see [3,5,44]). Moreover, there is a closed-form formula for the general solution to the equation on its natural domain, the so-called combinatorial domain (see [45]). Namely, in [45] a method was devised, called the method of half-lines, which, as it turned out, can be used for solving several other important difference equations with two independent variables (see [46] and the related references therein).
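Assuming the recurrence relation (12) is the classical Pascal rule C(n, k) = C(n-1, k-1) + C(n-1, k) (its display is elided above), a short sketch builds the coefficients from the recurrence and checks them against the closed form:

```python
# Sketch assuming (12) is Pascal's rule
#   C(n, k) = C(n-1, k-1) + C(n-1, k),  C(n, 0) = C(n, n) = 1,
# i.e. the coefficients of (1 + x)**n.
from math import comb

def binomial_table(nmax):
    # build the coefficients row by row directly from the recurrence
    C = [[1]]
    for n in range(1, nmax + 1):
        row = [1]
        for k in range(1, n):
            row.append(C[n - 1][k - 1] + C[n - 1][k])
        row.append(1)
        C.append(row)
    return C

C = binomial_table(10)
assert all(C[n][k] == comb(n, k) for n in range(11) for k in range(n + 1))
```

The boundary values C(n, 0) = C(n, n) = 1 here play the role of the data prescribed on the boundary of the combinatorial domain.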
Throughout the paper we will use the standard convention , when , and .
2. Analysis of Solvability of System (8) and the Main Result
For every complex square matrix A of order k there is a nonsingular matrix
a transition matrix, such that
where
,
, are matrices of the following form
the so-called Jordan blocks. If the submatrices
,
, are of orders
,
, respectively, then, of course, it must be
. The matrix J in (13) is called the Jordan normal form of the matrix A.
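As a hedged numerical illustration of the decomposition (13) (the matrix below is an illustrative choice of ours, not one from the paper), one can verify A = T J T^{-1} for a 2x2 matrix with a single eigenvalue and a nontrivial Jordan block:

```python
# Illustrative check of A = T*J*T**-1 for a non-diagonalizable 2x2 matrix.
from fractions import Fraction

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def inv2(M):
    # inverse of a 2x2 matrix via the adjugate formula
    det = Fraction(M[0][0] * M[1][1] - M[0][1] * M[1][0])
    return [[M[1][1] / det, -M[0][1] / det],
            [-M[1][0] / det, M[0][0] / det]]

A = [[5, 4], [-1, 1]]   # single eigenvalue 3, not diagonalizable
T = [[2, 1], [-1, 0]]   # columns: eigenvector, generalized eigenvector
J = [[3, 1], [0, 3]]    # the single 2x2 Jordan block for eigenvalue 3
assert matmul(matmul(T, J), inv2(T)) == A
```

Here l = 1 and the block order equals k = 2; for larger matrices J is block diagonal with one such block per chain of generalized eigenvectors.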
Remark 1. Recall that for a given matrix A its normal form is not unique. Namely, the Jordan blocks of a Jordan matrix corresponding to the matrix A can be permuted, and the block diagonal matrix obtained in this way is also a Jordan matrix of A (see, for example, [47,48]).
Using the change of variables
in (8), and employing equality (13) in the obtained system, it follows that
for
The system (16) can be written as a set of l systems, each of which corresponds to a Jordan block of the matrix J; that is, for the Jordan block
, where
is fixed, we have
for every
.
First, we assume that
that is, that none of the characteristic values of the matrix A is equal to zero.
Then, for every solution to system (16) which has no zero component, we have
for
Let
for
Since
, we have
for
.
From (22), it follows that
that is,
for
.
Further, we have
for
.
Employing (23) in (24), we obtain
, that is,
.
From (25), we obtain
for
.
Motivated by (23) and (26), we assume that
for every
and each
.
By using (27) in the following equality
, we get
and consequently
for
.
Now, by using (12), we have
Employing (30) in (29), we obtain
From (31) and by induction, it follows that (27) holds for every
and
, which can also be written as follows
where
for
and
.
On the other hand, from (20) we also have
for
from which it follows that
for
From (32) and (35), it follows that
for
and
.
From the above analysis we have the following result.
Lemma 1. Consider system (16), where , , are Jordan blocks of orders , , whose diagonal elements are nonzero numbers , . Let
where , are defined in (17), , , are defined in (33), whereas , are defined in (21). Then, for any solution to the system such that , for every , , the following equalities hold
for
Employing (38) in the following consequence of (20)
is obtained
which, due to the fact that
, is nothing but
Remark 2. Bearing in mind Remark 1, we see that the transition matrix could be chosen such that any of the Jordan blocks could go to the last position (the lth one in our notation) in the corresponding Jordan matrix J, from which it follows that in formula (39), instead of
, any of the other characteristic values of the matrix A can appear. The form of formula (39) will be the same, but the values of the sequences , can be different.
The transformation in (15), as well as its inverse, maps sets of k-dimensional Lebesgue measure zero to sets of measure zero, since T is a nonsingular matrix. Note also that the set
has k-dimensional Lebesgue measure zero.
Using these two facts, it follows that the sets
and
have measure zero for every
, and consequently so do the unions
From this and the condition in (10), it follows that all the solutions to systems (8) and (16) are well-defined outside a set of k-dimensional Lebesgue measure zero.
Now we formulate and prove our main result.
Theorem 1. Consider system (8), where f is a complex-valued function on satisfying conditions (9) and (10), is a complex square matrix of order k, is a nonsingular transition matrix which transforms the matrix A to its Jordan normal form J, whose blocks , correspond to the characteristic values , , , , are defined by (33), and , , are defined by (37), where , are some arbitrary nonzero constants. If the difference equation
is solvable in closed form for some , then system (8) is also solvable in closed form.
Proof. First assume that (19) holds, that is,
for
By using the change of variables (15), system (8) is transformed into system (16). From the analysis preceding the formulation of the theorem, we see that for every solution to system (16) which has no zero component (so, for almost all initial values) the sequence
is a solution to Equation (42). Since the equation is solvable, a closed-form formula for the sequence
can be found, from which, along with Lemma 1, it follows that closed-form formulas for the sequences
,
can be found. By using the formulas in (15), we get closed-form formulas for the solutions to (8), from which the theorem follows in this case.
Now assume that zero is a characteristic value of the matrix A of order s. Then, due to the comment in Remark 1, we may assume that
which implies that
In this case, one or several Jordan blocks correspond to the zero characteristic value. Then, from (18) with
, we obtain
for every
, from which, for every (well-defined) solution to (16), it follows that
for
.
From (44), it is easily obtained that
Since (45) holds for any Jordan block corresponding to the characteristic value
, we have that
for large enough n.
Hence, if all the characteristic values of the matrix A are equal to zero, then all the solutions are eventually equal to zero.
If there is a nonzero characteristic value, then (20) holds when l is replaced by
, and condition (42) is assumed to hold when
is replaced by
, from which, as in the first case, it follows that closed-form formulas can be found for
,
from which, along with (46) and (15), closed-form formulas for the solutions to system (8) are found. ☐
Example 1. The corresponding k-dimensional extension of systems (1), (4) and (5) is the following
for
.
Note that system (47) can be written in the form
for
.
Now note that, from (48), it follows that
for
and
.
Let
where
are the coefficients of system (48).
By using the change of variables
where
is a transition matrix for the matrix
and employing the procedure presented above, we get that, for the solvability of system (47), it is enough to prove the solvability of the system
where
is a nonzero characteristic value of the matrix A, while
are the sequences defined in (37), for the case when all the characteristic values of A are different from zero. When one of the characteristic values is equal to zero but is not of order k, the situation is similar:
is again a nonzero characteristic value of the matrix A, but another integer appears instead of k. When all the characteristic values of the matrix A are equal to zero, then from the proof of Theorem 1 we see that all the solutions are eventually equal to zero.
Equation (49) is a special case of the following one
Various special cases of Equation (50) have recently appeared during the investigation of the solvability of some product-type systems (see, for example, Theorem 2.1 in [31], and Theorems 1 and 2 in [32]).
Now we generalize these results from [31,32] by showing the solvability of Equation (50) under some more general conditions, which include Equation (49).
Lemma 2. Assume that , and . Then, for every solution to Equation (50), the following equality holds
for every such that .
Proof. If
, then
and (51) reduces to (50) for
. Assume that (51) holds for some
and
. Then, from (50) with
and the hypothesis, we have
for
. If
, then (51) obviously holds. The inductive argument proves the lemma. ☐
If in (51) we choose
, we have the following corollary.
Corollary 1. Assume that , and . Then the general solution to Equation (50) is
for every .
From Corollary 1 and since
it follows that Equation (49) is solvable in closed form when
, for every
, from which the solvability of system (47) follows.
Remark 3. Since for
the characteristic polynomial
associated to the matrix appearing in system (47) can be solved by well-known formulas ([49]), in this case the system is also practically solvable, as we proved in [37,39]. The same fact was one of the main reasons for the solvability of the product-type difference equations and systems recently investigated in our papers [30,31,32,33]. If
, then the system cannot be solved by using this method, since the polynomial need not be solvable by radicals. Nevertheless, the proof in Example 1 shows that the system is theoretically solvable.
Remark 4. Note also that if the sequence
in (50)
is not a sequence of integers, then the equation need not have a unique solution. For example, the sequence defined by the following recurrence relation
where
is not uniquely defined. Namely, if
then
which implies that
if k is even, and
if k is odd, which are two different points due to the condition
Repeating the procedure, we get a binary tree whose branching points are different values of
,
This shows that difference Equation (54) can have a continuum of solutions.
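A hedged numerical illustration of the branching (the recurrence (54) itself is elided above): if a step requires extracting a square root in the complex plane, e.g. a relation of the form z_{n+1}^2 = z_n, then each step admits two different continuations, which is the source of the binary tree of solutions mentioned above.

```python
# Illustrative branching step (not the exact recurrence (54)): a relation
# z_{n+1}**2 = z_n has two complex solutions z_{n+1} at every step.
import cmath

z0 = complex(-4, 0)
r = cmath.sqrt(z0)                     # principal square root
assert abs(r * r - z0) < 1e-12         # one continuation
assert abs((-r) * (-r) - z0) < 1e-12   # the other continuation also works
assert r != -r                         # two genuinely different next points
```

Choosing one of the two branches at every step yields a distinct solution, so the set of solutions forms a binary tree of choices.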