1. Introduction
In a recent (2006, [1]) survey on the Maximum Entropy Production principle, reference is made to an early attempt (1967, [2,3]) to extend the celebrated Jaynes' Maximum Entropy principle to thermodynamic systems in a stationary non-equilibrium state (a pioneering paper dealing with this extension problem was written by E.T. Jaynes himself, see [4]). Informally stated, assuming that the dynamics is described by a Markov chain, the authors look for a stochastic transition matrix such that: (i) it has a prescribed probability distribution as a stationary distribution, (ii) the associated chain evolution satisfies given constraints on the macroscopic scale (admissible microscopic evolution) and (iii) the selected transition matrix generates the maximum number of equally probable microscopic evolution paths. Motivated by their derivation, we proceed here to a self-contained, independent approach to the same problem with, we think, more far-reaching consequences.
To state the main ideas, let us begin by recalling a restricted version of the ergodic theorem for Markov chains ([5]). Let us suppose that the system has a finite state space $\chi$ and that its statistical description is given by a stationary, time-homogeneous Markov chain.
Theorem 1 Let $P$ be a stochastic matrix with positive entries and denote with $P^N$ the $N$-th power of $P$. Then there exists a unique distribution $\pi$ which is stationary for $P$, i.e., $P^T\pi = \pi$, where $P^T$ denotes the transposed matrix of $P$. Moreover, $\pi$ has positive entries and, for every probability distribution $\nu$, setting $\nu_N = (P^T)^N\nu$, we have
$$\lim_{N\to\infty}\nu_N = \pi.$$
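As a quick numerical illustration of Theorem 1 (our own sketch, not part of the paper), the following Python fragment builds a random stochastic matrix with positive entries, extracts its stationary distribution as the Perron eigenvector of $P^T$, and checks that the iterates $\nu_N = (P^T)^N\nu$ converge to $\pi$:

```python
import numpy as np

# Sketch for Theorem 1: a positive stochastic matrix P has a unique
# stationary distribution pi, and nu_N = (P^T)^N nu -> pi for every nu.
rng = np.random.default_rng(0)
n = 4
P = rng.random((n, n)) + 0.1           # strictly positive entries
P /= P.sum(axis=1, keepdims=True)      # normalize rows: P is stochastic

w, V = np.linalg.eig(P.T)              # pi is the Perron eigenvector of P^T
pi = np.real(V[:, np.argmax(np.real(w))])
pi /= pi.sum()                         # normalize (fixes the sign too)

nu = np.zeros(n); nu[0] = 1.0          # an arbitrary initial distribution
for _ in range(50):
    nu = P.T @ nu                      # nu_N = (P^T)^N nu
print(np.allclose(nu, pi))             # True: nu_N -> pi
```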
It is immediate to realize from the above theorem that the stochastic matrix $P$ determines its stationary distribution, while the converse is false. In this paper we put forth a selection criterion to choose among all stochastic matrices admitting a fixed distribution $\pi$ as stationary. In a sense, we want to select a preferred dynamics for the approach to equilibrium $\pi$. To this end, recall that if we take the stationary distribution $\pi$ as the initial distribution, then the Markov chain $(\pi, P)$ defines a discrete-time, finite-state stationary stochastic process. Let $X_i$ be the $\chi$-valued random variable which describes the state of the system at time $i$.
For a stationary process, we can define its entropy rate (see [6]) as
$$H(X) = \lim_{N\to\infty}\frac{1}{N}H(X_1,\dots,X_N),$$
where
$$H(X_1,\dots,X_N) = -\sum_{x_1,\dots,x_N} p(x_1,\dots,x_N)\log p(x_1,\dots,x_N)$$
and $p(x_1,\dots,x_N)$ is the joint probability of the sample path $(x_1,\dots,x_N)$. In a sense, the entropy rate $H(X)$ is the thermodynamic $N$-limit of the entropy of the system formed by all realizations of length $N$ of the process. Therefore, we will call it the system entropy rate in the sequel. Moreover, for the stationary process associated to $(\pi, P)$ it holds that (see [5],[6])
$$H(X) = -\sum_{i,j}\pi_i P_{ij}\log P_{ij}. \qquad (1.1)$$
Furthermore, in agreement with (ii) above, we assume that the knowledge of the macroscopic fluxes acting on the system in a stationary non-equilibrium state forces constraints on the two-dimensional probability distribution $p(x_i, x_{i+1}) = \pi_{x_i} P_{x_i x_{i+1}}$, in the form of constraints on the associated stochastic matrix (see Section 2 below). We can now state our selection criterion, which we call the Maximum Entropy Rate principle and which amounts to the following constrained extremum problem:

M.E.R.P. Given a positive probability distribution $\pi$, find the stochastic matrix $P$ which admits $\pi$ as a stationary distribution and which has maximum entropy rate $H(X)$.
This principle can be seen as an instance of a Maximum Entropy principle for the case of constraints on two-dimensional distributions. Its justification is provided in [7] (see also [8],[9]) using a large deviation theory type estimate on the empirical second order distributions, in the same spirit as Sanov's theorem for first order empirical distributions.
Remark. In the case that the only constraints on $P$ are the stationarity and normalization ones, the answer to this problem can be easily found by using the elementary inequality (see [6])
$$H(X) = -\sum_{i,j}\pi_i P_{ij}\log P_{ij} \le H(\pi) = -\sum_i \pi_i\log\pi_i,$$
with equality if and only if $P_{ij} = \pi_j$ (all rows are equal to $\pi$). This result is a simple consequence of the fact that the entropy $H(\pi)$ can be seen as the entropy rate of a stationary process with i.i.d. random variables distributed according to $\pi$, and that conditioning reduces the entropy rate. It is immediate to verify that $P_{ij} = \pi_j$ admits $\pi$ as a stationary distribution. By recalling the ergodic theorem, we see also that in this case $\nu_N = \pi$ for every $N \ge 1$.
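To illustrate the Remark numerically (our sketch), we can compare the rows-equal matrix $P^*_{ij} = \pi_j$ with another chain admitting the same stationary distribution. A Metropolis chain with uniform proposals (our choice here, not discussed in the paper) has $\pi$ as stationary distribution by detailed balance, and its entropy rate is indeed strictly below $H(\pi)$:

```python
import numpy as np

# Sketch of the Remark: P*_ij = pi_j attains the maximum entropy rate H(pi);
# a Metropolis chain with the same stationary pi stays strictly below it.
def entropy_rate(P, mu):
    L = np.where(P > 0, np.log(np.where(P > 0, P, 1.0)), 0.0)
    return -np.sum(mu[:, None] * P * L)   # -sum_ij mu_i P_ij log P_ij

pi = np.array([0.5, 0.3, 0.2])
n = len(pi)
P_star = np.tile(pi, (n, 1))           # all rows equal to pi

P_met = np.zeros((n, n))               # Metropolis chain, uniform proposals
for i in range(n):
    for j in range(n):
        if i != j:
            P_met[i, j] = min(1.0, pi[j] / pi[i]) / (n - 1)
    P_met[i, i] = 1.0 - P_met[i].sum()
assert np.allclose(P_met.T @ pi, pi)   # pi stationary (detailed balance)

H_pi = -np.sum(pi * np.log(pi))
print(entropy_rate(P_star, pi), H_pi)  # equal: the maximum H(pi)
print(entropy_rate(P_met, pi))         # strictly smaller
```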
The starting point of the above M.E.R. principle is the probability distribution $\pi$, which is, apart from being positive, completely arbitrary. It affords the statistical description of our system at equilibrium. In case we have some macroscopic information on the system, for example its average energy, which we represent as
$$\sum_i \pi_i E_i = e,$$
where $E_i$ is the energy of the system in state $i$, then this information can be used to select a probability distribution among all those satisfying the above constraint. By applying the Maximum Entropy principle one gets
$$\pi_i = \frac{e^{-\hat\beta E_i}}{Z(\hat\beta)}, \qquad Z(\hat\beta) = \sum_i e^{-\hat\beta E_i},$$
the Gibbs probability distribution. Here the inverse temperature $\hat\beta$ is uniquely determined by the value $e$.
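Numerically, the inverse temperature is the root of a monotone scalar equation, so it can be found by bracketing. A minimal sketch (the energy levels and the target value $e$ are made-up illustrative numbers):

```python
import numpy as np
from scipy.optimize import brentq

# Sketch: solve sum_i pi_i(beta) E_i = e for beta, with pi_i the Gibbs
# weights exp(-beta E_i)/Z(beta). Energies and target e are illustrative.
E = np.array([0.0, 1.0, 2.0, 3.0])

def mean_energy(beta):
    w = np.exp(-beta * E)
    return (w @ E) / w.sum()

e_target = 1.2                         # must lie between min(E) and max(E)
beta = brentq(lambda b: mean_energy(b) - e_target, -50.0, 50.0)
pi = np.exp(-beta * E)
pi /= pi.sum()
print(beta, pi @ E)                    # unique beta; mean energy = 1.2
```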
2. Model of the System in a Stationary Non-Equilibrium State

We begin with the model of the energy exchange between our system $\chi$ and another system $\mathcal{E}$ (the environment). Let us suppose that the system is described by our stationary Markov chain $(\pi, P)$. We suppose also that the whole system $\chi\cup\mathcal{E}$ is energetically insulated, that is, if $X_i = x$ and $X_{i+1} = y$, then the microscopic energy conservation law holds: the energy $E_y - E_x$ gained by $\chi$ in the transition is released by $\mathcal{E}$. Let us denote with $\Phi_{ij} = E_j - E_i$ the skew-symmetric matrix of the differences of energy between states and with $p_{ij} = \pi_i P_{ij}$ the joint probability at one time step. The average energy transfer between $\mathcal{E}$ and $\chi$ is
$$\langle\Delta E\rangle = \sum_{i,j}\pi_i P_{ij}\Phi_{ij},$$
and it follows easily that if $\pi$ is stationary for $P$, and if our microscopic energy conservation law holds, then $\langle\Delta E\rangle = 0$; hence the stationary distribution $\pi$ describes the system $\chi$ at macroscopic equilibrium with $\mathcal{E}$.
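A one-line numerical check of this balance (our sketch; any $P$ with $P^T\pi = \pi$ would do, here we take the rows-equal matrix from the Remark):

```python
import numpy as np

# Sketch: if P^T pi = pi, the average energy transfer
# sum_ij pi_i P_ij (E_j - E_i) telescopes to zero.
pi = np.array([0.5, 0.3, 0.2])
P = np.tile(pi, (3, 1))                # any P with stationary pi would do
E = np.array([0.0, 1.0, 2.5])
Phi = E[None, :] - E[:, None]          # Phi_ij = E_j - E_i, skew-symmetric
print(np.sum(pi[:, None] * P * Phi))   # 0 (up to round-off)
```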
2.1. Coupling of the system with two environments

We now want to model the coupling of the system with two environments $\mathcal{E}_1$ and $\mathcal{E}_2$. As before, the statistical description of the system is given by a Markov chain. As a simplifying assumption, we suppose that at every time step along the evolution of the chain, the system $\chi$ is in contact (i.e., the microscopic energy conservation holds) with only one of the environments, alternately $\mathcal{E}_1$ and $\mathcal{E}_2$. Therefore, the system is now described by the non-homogeneous Markov chain
$$X_0 \xrightarrow{\;A\;} X_1 \xrightarrow{\;B\;} X_2 \xrightarrow{\;A\;} X_3 \xrightarrow{\;B\;} \cdots \qquad (2.1)$$
where transitions out of even times are governed by the stochastic matrix $A$ (contact with $\mathcal{E}_1$) and transitions out of odd times by $B$ (contact with $\mathcal{E}_2$). We will see that it is not restrictive to suppose that the stochastic matrices $A$ and $B$ have positive entries. Also, notice that the chain "observed" at even times $0, 2, 4, \dots$ is a time-homogeneous chain described by the positive stochastic matrix $AB$; hence the ergodic theorem applies. Let us denote with $\pi$ the unique stationary distribution for $AB$. If $\pi$ is the distribution describing the system $\chi$ at time $0$, then we have
$$\tilde\pi = A^T\pi, \qquad B^T\tilde\pi = (AB)^T\pi = \pi.$$
Therefore, the probability distribution describing the chain switches between $\pi$ (at even times) and $\tilde\pi$ (at odd times). Let us compute as before the joint probabilities at one and two time steps starting from an even time:
$$p^{(1)}_{ij} = \pi_i A_{ij}, \qquad p^{(2)}_{ijk} = \pi_i A_{ij} B_{jk}.$$
Reasoning as before, we can compute the averaged energy differences
$$\langle\Delta E\rangle_A = \sum_{i,j}\pi_i A_{ij}\Phi_{ij}, \qquad \langle\Delta E\rangle_B = \sum_{i,j}\tilde\pi_i B_{ij}\Phi_{ij}. \qquad (2.2)$$
Modifications of the above formulae for the case where we start the observation of the chain at an odd time are straightforward. By simple inspection of the above equalities we can draw the following conclusions:
Proposition 1 If $\pi$ is a stationary distribution for $AB$, then
$$\langle\Delta E\rangle_A + \langle\Delta E\rangle_B = 0,$$
and also $\langle\Delta E\rangle_A = \tilde e - e$, where $e = \sum_i \pi_i E_i$ and $\tilde e = \sum_i \tilde\pi_i E_i$. Moreover, if $\pi$ is a stationary distribution for $AB$ and $A$, then it is a stationary distribution for $B$ also, and
$$\langle\Delta E\rangle_A = \langle\Delta E\rangle_B = 0.$$
Therefore, with a distribution $\pi$ stationary for $AB$ and $A$ we model a system at equilibrium with the two environments, while with a probability distribution $\pi$ stationary for $AB$ but not for $A$ we model a system in a stationary non-equilibrium state. Moreover, the condition of stationarity of $\pi$ with respect to $AB$, which is sufficient (but not necessary in the general case) for the balance of energy to hold, is the one that will allow us to compute the entropy rate. The value of the macroscopic energy flux can be specified by the macroscopic constraint introduced by the first equality in (2.2),
$$\langle\Delta E\rangle_A = \sum_{i,j}\pi_i A_{ij}\Phi_{ij} = q. \qquad (2.3)$$
Note that, as before, we make no assumption on the distribution $\pi$. We have to choose matrices $A$ and $B$ satisfying the following constraints (here $\tilde\pi = A^T\pi$):
$$\sum_j A_{ij} = 1, \qquad \sum_j B_{ij} = 1, \qquad (AB)^T\pi = \pi, \qquad \sum_{i,j}\pi_i A_{ij}\Phi_{ij} = q. \qquad (2.4)$$
The above constraints do not specify, in the general case, the matrices $A$ and $B$. Therefore, we need to invoke a selection criterion, which will be our M.E.R. principle. We are led to investigate the existence of an entropy rate for the stochastic process described by the non-homogeneous Markov chain (2.1) with initial distribution $\pi$ stationary for $AB$. Before turning to this, we look at the solution of our problem in the case of zero flux, $q = 0$. It is immediate to see that if we limit ourselves to the case $A = B$, then the chain is ergodic and $A_{ij} = B_{ij} = \pi_j$ is the maximum entropy rate solution.
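Before moving on, here is a small numerical sketch of the alternating dynamics (ours, not from the paper). For concreteness we take $A$ and $B$ with constant rows, which anticipates the form of the solution found in Section 2.3; this gives a pair for which $\pi$ is stationary for $AB$ but not for $A$ alone, with a non-zero flux $q$:

```python
import numpy as np

# Sketch of the alternating dynamics (2.1) with constant-row matrices:
# pi is stationary for AB but not for A, and the flux q is non-zero.
pi = np.array([0.5, 0.3, 0.2])         # distribution at even times
pit = np.array([0.2, 0.3, 0.5])        # distribution at odd times (our choice)
A = np.tile(pit, (3, 1))               # A^T pi = pit
B = np.tile(pi, (3, 1))                # B^T pit = pi
AB = A @ B                             # chain observed at even times
print(np.allclose(AB.T @ pi, pi))      # True: pi stationary for AB
print(np.allclose(A.T @ pi, pi))       # False: pi not stationary for A alone

E = np.array([0.0, 1.0, 2.0])
Phi = E[None, :] - E[:, None]
q = np.sum(pi[:, None] * A * Phi)      # <Delta E>_A, energy drawn from E_1
print(q, pit @ E - pi @ E)             # 0.6 0.6 : q = e~ - e
```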
2.2. Computation of the entropy rate

It is easy to see that the chain described by (2.1) with initial distribution $\pi$ stationary for $AB$ but not for $A$ describes a non-stationary stochastic process. Moreover, the chain is not strongly ergodic but, under the non-restrictive assumption that $A$ and $B$ have positive entries, it is weakly ergodic (see e.g. [10] for the notions of strong and weak ergodicity and [11],[12] and the bibliography therein for the study of convergence of Markov processes using information theoretic tools). We now show that the entropy rate is well defined and finite for the process at hand. This is not surprising, since the inhomogeneity of the chain is very mild and the chain would become homogeneous under a suitable time reparameterization. However, we will proceed with the chain as it is for simplicity's sake. Recall that the probability of a typical sequence $(x_0, x_1, \dots, x_N)$ of length $N$ of the chain with initial distribution $\pi$ is
$$p(x_0, x_1, \dots, x_N) = \pi_{x_0} A_{x_0 x_1} B_{x_1 x_2} A_{x_2 x_3} \cdots$$
To compute $H(X_0, \dots, X_N)$ we use the well known chain rule (see [6]):
$$H(X_0, \dots, X_N) = H(X_0) + \sum_{i=1}^{N} H(X_i \mid X_{i-1}, \dots, X_0) = H(X_0) + \sum_{i=1}^{N} H(X_i \mid X_{i-1}),$$
where the second equality follows from the Markov property. Hence, if the condition of stationarity $(AB)^T\pi = \pi$ holds, the terms following $H(X_0)$ alternate between the forms
$$H_A = -\sum_{i,j}\pi_i A_{ij}\log A_{ij} \quad\text{and}\quad H_B = -\sum_{i,j}\tilde\pi_i B_{ij}\log B_{ij}.$$
Hence,
$$H(X) = \lim_{N\to\infty}\frac{1}{N}H(X_0,\dots,X_N) = \frac{1}{2}(H_A + H_B). \qquad (2.5)$$
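The formula (2.5) is straightforward to evaluate numerically. A sketch reusing the alternating pair from the previous snippet (our illustrative choice):

```python
import numpy as np

# Sketch: evaluate (2.5), H = (H_A + H_B)/2, for the alternating pair above.
def cond_entropy(P, mu):
    return -np.sum(mu[:, None] * P * np.log(P))

pi = np.array([0.5, 0.3, 0.2])
pit = np.array([0.2, 0.3, 0.5])
A = np.tile(pit, (3, 1))
B = np.tile(pi, (3, 1))
H = 0.5 * (cond_entropy(A, pi) + cond_entropy(B, pit))
print(H)                               # here = (H(pit) + H(pi))/2
```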
2.3. Application of the M.E.R. principle and solution

In this section we solve the constrained extremum problem for the objective function $H = \frac{1}{2}(H_A + H_B)$ of (2.5), subject to the constraints (2.4), using the Lagrange multipliers method. The Lagrange function $L$ is obtained by adjoining to $H$ the normalization constraints on the rows of $A$ and $B$, the stationarity constraint $(AB)^T\pi = \pi$ and the flux constraint, with multiplier $\beta$ for the latter, and the necessary conditions for the extremum are obtained by setting to zero the partial derivatives of $L$ with respect to the entries of $A$ and $B$. By simple computations we get the expressions for the solutions $A$ and $B$ in terms of the unknown multipliers: the rows of $A$ and $B$ are exponential families in the constraint functions. In the following we use the row normalization constants as unknowns in place of the corresponding Lagrange multipliers; by using the normality constraint on $A$ and $B$ respectively, they are determined as the partition sums of the rows. By using the stationarity constraint of $\pi$ with respect to $AB$, we get a fixed-point equation for the multipliers associated with stationarity. Since the matrix appearing in this equation is a stochastic one with positive entries, it is easy to see that its solution is constant, so these multipliers drop out of the solution; it follows that the rows of $B$ are all equal, $B_{ij} = \pi_j$.
Before turning to the inflow constraint to determine the multiplier $\beta$, we note that the matrix $A$, of the form $A_{ij} \propto \pi_j e^{-\beta\Phi_{ij}}$, admits a simpler expression using the definition
$$\tilde\pi_j = \frac{\pi_j e^{-\beta E_j}}{\sum_k \pi_k e^{-\beta E_k}}. \qquad (2.8)$$
Therefore $A_{ij} = \tilde\pi_j$, since the factor $e^{\beta E_i}$ depending on the initial state cancels between numerator and normalization.
Now the inflow constraint can be rewritten as
$$\sum_j \tilde\pi_j E_j - \sum_j \pi_j E_j = q.$$
Hence, for any given $\pi$ and $q$, setting $\tilde e = e + q$, the Lagrange multiplier $\beta$ is uniquely determined by the equation
$$\frac{\sum_j \pi_j e^{-\beta E_j} E_j}{\sum_k \pi_k e^{-\beta E_k}} = \tilde e.$$
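Assuming the tilted form (2.8) reconstructed above, determining $\beta$ is again a one-dimensional root-finding problem, since the tilted mean energy is strictly decreasing in $\beta$. A minimal sketch with illustrative numbers:

```python
import numpy as np
from scipy.optimize import brentq

# Sketch: determine beta from the inflow constraint, assuming the tilted
# form (2.8). pi is taken Gibbs at beta_hat = 0.5; E and q are illustrative.
E = np.array([0.0, 1.0, 2.0, 3.0])
pi = np.exp(-0.5 * E)
pi /= pi.sum()
e = pi @ E

def tilted_mean(beta):                 # mean energy under pit(beta), Eq. (2.8)
    w = pi * np.exp(-beta * E)
    return (w @ E) / w.sum()

q = 0.3                                # prescribed flux per two time steps
beta = brentq(lambda b: tilted_mean(b) - (e + q), -50.0, 50.0)
pit = pi * np.exp(-beta * E)
pit /= pit.sum()
print(beta, pit @ E - e)               # unique beta; the constraint gives q
```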
We conclude by noting that, if the relation $\sum_i \pi_i E_i = e$ is seen as a constraint for the unknown probability distribution $\pi$, then the maximum entropy assignation for $\pi$ is the Gibbs distribution
$$\pi_i = \frac{e^{-\hat\beta E_i}}{Z(\hat\beta)}. \qquad (2.9)$$
We have found that the maximum entropy rate assignation for the non-stationary stochastic process described by the non-homogeneous Markov chain (2.1), defined by the stochastic matrices $A$ and $B$ and by the probability distribution $\pi$ which is stationary for $AB$, is
$$A_{ij} = \tilde\pi_j, \qquad B_{ij} = \pi_j, \qquad (2.10)$$
where $\tilde\pi$ is the Gibbs distribution for the mean energy $\tilde e = e + q$ and $\pi$ is the Gibbs distribution for $e$. As expected, the solution depends only on the macroscopic information supplied: the equilibrium energy $e$ and the flow $q$.
2.4. Entropy rate and entropy production

By a direct computation from (1.1), (2.8), (2.10), the entropy rate of the process is the sum of two terms of the type
$$H(\pi) = -\sum_j \pi_j\log\pi_j.$$
Hence, from (2.5),
$$H = \frac{1}{2}\big(H(\pi) + H(\tilde\pi)\big).$$
Since the chain spends "half of its time" in a state with average energy $e$ and the remaining half in a state with average energy $\tilde e = e + q$, the above formula exhibits the entropy rate as the time average of the "instantaneous" entropies.
What is the relation between the entropy rate of the stochastic process and the thermodynamic entropy of the system in a stationary non-equilibrium state? If $q$ is small, we can consider the Taylor expansion of $H(\tilde\pi)$ with respect to $q$ and get
$$H(\tilde\pi) = H(\pi) + \frac{\partial H}{\partial e}\, q + O(q^2).$$
Hence, by the well known identification $\partial S/\partial e = \hat\beta$,
$$H = H(\pi) + \frac{\hat\beta}{2}\, q + O(q^2).$$
In the above formula, the entropy rate is the sum of two terms, one of which, $H(\pi)$, is non negative while the other has the sign of $q$. It is appealing to interpret the first as the source term and the other as the flux term.
Moreover, from the relation between the average energy $e$ and the related multiplier $\hat\beta$, we have that, if $q$ is small, the inverse temperature $\tilde\beta$ of $\tilde\pi$ satisfies $\tilde\beta = \hat\beta + O(q)$. Hence, up to $O(q^2)$ order terms, the flux term $\hat\beta q/2$ above can equivalently be written as $\tilde\beta q/2$.
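The first-order statement can be checked numerically. In the sketch below (ours; energies and $\hat\beta$ are illustrative), $\tilde\beta$ is obtained by matching the mean energy $e + q$, and the defect of the linearization is of order $q^2$:

```python
import numpy as np
from scipy.optimize import brentq

# Sketch: H = (H(pi) + H(pit))/2 agrees with H(pi) + beta_hat*q/2 up to O(q^2).
def gibbs(beta, E):
    w = np.exp(-beta * E)
    return w / w.sum()

def H(p):
    return -np.sum(p * np.log(p))

E = np.array([0.0, 1.0, 2.0, 3.0])
beta_hat = 0.5
pi = gibbs(beta_hat, E)
e = pi @ E
q = 1e-3                               # small flux
beta_t = brentq(lambda b: gibbs(b, E) @ E - (e + q), -50.0, 50.0)
lhs = 0.5 * (H(pi) + H(gibbs(beta_t, E)))
print(lhs - (H(pi) + beta_hat * q / 2))  # O(q^2): tiny residual
```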
2.5. Entropy production of the overall system

Let us consider the insulated system $\chi\cup\mathcal{E}_1\cup\mathcal{E}_2$ and let us suppose that the two environments $\mathcal{E}_1$ and $\mathcal{E}_2$ are two thermostats, respectively at temperatures $T_1 = 1/\tilde\beta$ and $T_2 = 1/\hat\beta$. The net effect of putting the system $\chi$ alternately in contact with $\mathcal{E}_1$ and with $\mathcal{E}_2$ is the flow, in two time steps of the chain, of an average energy amount $q$ from the reservoir at higher temperature $T_1$ to the reservoir at lower temperature $T_2$, leaving the system $\chi$ unchanged, since the equilibrium distribution $\pi$ is stationary for $AB$. By a standard non-equilibrium thermodynamics formula (see e.g. [13]), the entropy production in the two-step cycle is
$$\Delta S = q\Big(\frac{1}{T_2} - \frac{1}{T_1}\Big) = q\,(\hat\beta - \tilde\beta).$$
If we now compute the information divergence (also called relative entropy, see [6]) of the Gibbs distribution $\tilde\pi$ with respect to the Gibbs distribution $\pi$, we find
$$D(\tilde\pi\,\|\,\pi) = \sum_j \tilde\pi_j\log\frac{\tilde\pi_j}{\pi_j} = (\hat\beta - \tilde\beta)\,\tilde e + \log\frac{Z(\hat\beta)}{Z(\tilde\beta)}.$$
The information divergence is not a symmetric function of the two probability distributions, while the symmetrized information divergence (see [14]) of $p$ and $q$,
$$J(p, q) = D(p\,\|\,q) + D(q\,\|\,p),$$
is a symmetric and non negative one. The standard interpretation (see again [14]) of the symmetrized information divergence is as a measure of the difficulty of assessing which is the correct statistical description ($p$ or $q$) of the system on the basis of observations of the system state. Since
$$J(\pi, \tilde\pi) = (\hat\beta - \tilde\beta)(\tilde e - e) = q\,(\hat\beta - \tilde\beta),$$
we have the following
Proposition 2 The entropy production of the closed system $\chi\cup\mathcal{E}_1\cup\mathcal{E}_2$ is equal to the symmetrized information divergence between the probability distributions $\pi$ and $\tilde\pi$:
$$\Delta S = J(\pi, \tilde\pi).$$
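The identity behind Proposition 2 is a direct computation for Gibbs distributions: the log-partition terms cancel in the symmetrized divergence, leaving $(\hat\beta - \tilde\beta)(\tilde e - e)$. A numerical sketch (illustrative numbers):

```python
import numpy as np

# Sketch: for two Gibbs distributions, the symmetrized divergence reduces to
# (beta_hat - beta_tilde)(e~ - e); the log-partition terms cancel in the sum.
def gibbs(beta, E):
    w = np.exp(-beta * E)
    return w / w.sum()

def D(p, r):                           # information divergence D(p||r)
    return np.sum(p * np.log(p / r))

E = np.array([0.0, 1.0, 2.0, 3.0])
beta_hat, beta_t = 0.5, 0.3
pi, pit = gibbs(beta_hat, E), gibbs(beta_t, E)
q = pit @ E - pi @ E                   # e~ - e
print(D(pit, pi) + D(pi, pit), (beta_hat - beta_t) * q)   # equal
```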
Remark. In the literature (see e.g. the books [15],[16] or the papers [17],[18]) there is a well established notion of entropy production rate for a stationary Markov chain with countable state space. Let $P$ and $\pi$ be the transition matrix and its unique stationary distribution respectively. Then the entropy production rate, also called the information gain of the stationary chain with respect to its time reversal, is defined as
$$e_p = \frac{1}{2}\sum_{i,j}\big(\pi_i P_{ij} - \pi_j P_{ji}\big)\log\frac{\pi_i P_{ij}}{\pi_j P_{ji}}.$$
We immediately see that if the chain satisfies the detailed balance condition $\pi_i P_{ij} = \pi_j P_{ji}$, then the entropy production rate is zero. Let us show that this is the case for our $\chi$ system described by the maximum entropy rate matrices $A$ and $B$ defined in (2.10). Indeed, the chain observed at even times is described by the transition matrix $AB$ with entries $(AB)_{ik} = \sum_j \tilde\pi_j \pi_k = \pi_k$, whose stationary distribution is the same $\pi$ of (2.9), and this stationary Markov chain satisfies the detailed balance condition trivially, since $\pi_i (AB)_{ik} = \pi_i\pi_k$ is symmetric. Therefore, the entropy production of the overall system $\chi\cup\mathcal{E}_1\cup\mathcal{E}_2$ is due exclusively to the energy exchange between the two reservoirs $\mathcal{E}_1$ and $\mathcal{E}_2$, as computed above.
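As a closing numerical check (our sketch, using the rows-equal solution (2.10)), the even-time matrix $AB$ makes $\pi_i(AB)_{ik}$ symmetric, so the entropy production rate $e_p$ defined in the Remark vanishes:

```python
import numpy as np

# Sketch: with the solution (2.10), pi_i (AB)_ik = pi_i pi_k is symmetric,
# so detailed balance holds and the entropy production rate e_p is zero.
pi = np.array([0.5, 0.3, 0.2])
pit = np.array([0.2, 0.3, 0.5])
A = np.tile(pit, (3, 1))
B = np.tile(pi, (3, 1))
AB = A @ B                             # all rows equal to pi
F = pi[:, None] * AB                   # F_ik = pi_i (AB)_ik
ep = 0.5 * np.sum((F - F.T) * np.log(F / F.T))
print(np.allclose(F, F.T), ep)         # True 0.0
```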