A Discrete Meta-Control Procedure for Approximating Solutions to Binary Programs

Pengbo Zhang, Wolf Kohn \* and Zelda B. Zabinsky

Department of Industrial and Systems Engineering, University of Washington, Seattle, WA 98195, USA; E-Mails: pbzhang@u.washington.edu (P.Z.); zelda@u.washington.edu (Z.B.Z.)

\* Author to whom correspondence should be addressed; E-Mail: wolfk@u.washington.edu; Tel.: +1-206-790-0624; Fax: +1-206-685-3702.

*Received: 15 June 2013; in revised form: 30 August 2013 / Accepted: 30 August 2013 / Published: 4 September 2013*

Abstract: Large-scale binary integer programs occur frequently in many real-world applications. For some binary integer problems, finding an optimal solution or even a feasible solution is computationally expensive. In this paper, we develop a discrete meta-control procedure to approximately solve large-scale binary integer programs efficiently. The key idea is to map the vector of n binary decision variables into a scalar function defined over a time interval [0, n] and construct a linear quadratic tracking (LQT) problem that can be solved efficiently. We prove that an LQT formulation has an optimal binary solution, analogous to a classical bang-bang control in continuous time. Our LQT approach can provide advantages in reducing computation while generating a good approximate solution. Numerical examples are presented to demonstrate the usefulness of the proposed method.

Keywords: large-scale binary integer programs; linear quadratic tracking; optimal control

#### 1. Introduction

Many decision problems in economics and engineering can be formulated as binary integer programming (BIP) problems. These BIP problems are often easy to state but difficult to solve due to the fact that many of them are NP-hard [1], and even finding a feasible solution is considered NP-complete [2,3]. Because of their importance in formulating many practical problems, BIP algorithms have been widely studied. These algorithms can be classified into exact and approximate algorithms as follows [4]:

(1) Exact algorithms: The exact algorithms are guaranteed either to find an optimal solution or prove that the problem is infeasible, but they are usually computationally expensive. Major methods for BIP problems include branch and bound [5], branch-and-cut [6], branch-and-price [7], dynamic programming methods [8], and semidefinite relaxations [9].

(2) Approximate algorithms: The approximate algorithms are used to achieve efficient running time with a sacrifice in the quality of the solution found. Examples of well-known metaheuristics, as an approximate approach, are simulated annealing [10], annealing adaptive search [11], cross entropy [12], genetic algorithms [13] and nested partitions [14]. Moreover, many hybrid approaches that combine both the exact and approximate algorithms have been studied to exploit the benefits of each [15]. For additional references regarding large-scale BIP algorithms, see [1,16–18].

Another effective heuristic approach, studied recently, transforms discrete optimization problems into problems in the domains of control theory, information theory, and signal processing. In [19,20], circuit-related techniques are used to transform unconstrained discrete quadratic programming problems and provide high-quality suboptimal solutions. Our focus is on problems with linear, rather than quadratic, objective functions and with linear equality constraints rather than no constraints.

In our previous work [21], we introduced an approach to approximating a BIP solution using continuous optimal control theory, which showed promise for large-scale problems. The key innovation of our optimal control approach is to map the vector of n binary decision variables into a scalar function defined over a time interval [0, n] and to define a linear quadratic tracking (LQT) problem that can be solved efficiently. In this paper, we use the same mapping, but instead of solving the LQT problem in continuous time, we solve it in discrete time, because the time index in our reformulation of the BIP represents the dimension of the problem, {0, 1, ..., n}, and a discrete time approach represents the partial summing reformulation more accurately than the continuous approach. In addition, in our previous work the transformation into a continuous LQT problem was based on a reduced set of constraints, and a least squares approach was used to estimate the error due to the constraint reduction; the algorithm iteratively solved the LQT problem and the least squares problem until convergence conditions were satisfied. In this paper, instead of iteratively solving the LQT problem on a reduced set of constraints, we solve the LQT problem only once over the full state space. This approach improves the flow of information for convergence.

We have chosen a quadratic criterion for our approach because its formalism includes a measure of the residual entropy of the dynamics of the algorithm as it computes successive approximations to a solution. Because of the mapping used in our algorithm, the information measure is given by the inverse of the solution of the Riccati equation that we solve. This inverse is a Fisher information matrix of the algorithm viewed as a dynamical system [22,23]. The information from the algorithm in the criterion determines the quality of the solution.

The computational complexity for solving the LQT problem is polynomial in the time horizon, the dimension of the state space and the number of control variables. In our LQT problem, the time horizon is n, the dimension of the state space is the number of constraints m, and the number of control variables is 1. Our meta-control approach solves the LQT problem to obtain an efficient approximate solution to the original BIP problem.

In Section 2, our approach is presented in detail, and numerical results are given in Section 3. In Section 4, we state the conclusions of this work.

#### 2. Development of the Meta-Control Algorithm for BIP Problems

The original BIP problem is:

#### Problem 1.

$$\min\_{\substack{u\_j\\j=0,\ldots,n-1}} \quad \sum\_{j=0}^{n-1} \tilde{c}\_j u\_j \tag{1}$$

$$\text{s.t.} \qquad \sum\_{j=0}^{n-1} \tilde{a}\_{ij} u\_j = \tilde{b}\_i \quad i = 1, \dots, m \tag{2}$$

$$u\_j \in \{0, 1\} \qquad j = 0, \ldots, n - 1 \tag{3}$$

where $u\_j$ for $j = 0, \ldots, n-1$ are binary decision variables. We assume $\tilde{c}\_j$, $\tilde{a}\_{ij}$, and $\tilde{b}\_i$ are known real values for $i = 1, \ldots, m$ and $j = 0, \ldots, n-1$, and that there exists at least one feasible point.

#### *2.1. Partial Summing Formulation*

We start by defining partial summing variables as in [21] from the original BIP problem as

$$f\_{0,j+1} = \ \ f\_{0,j} + \tilde{c}\_j u\_j \tag{4}$$

$$f\_{i,j+1} = \ \ f\_{i,j} + \tilde{a}\_{ij}u\_j \tag{5}$$

for $i = 1, \ldots, m$ and $j = 0, \ldots, n-1$, with initial conditions $f\_{0,0} = f\_{i,0} = 0$.
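As an illustration, the recursions (4) and (5) can be sketched in a few lines of numpy; the function name `partial_sums` and the sample data below are our own, not from the paper:

```python
import numpy as np

def partial_sums(c, A, u):
    """Propagate the partial-sum recursions (4)-(5):
    f_{0,j+1} = f_{0,j} + c_j u_j,  f_{i,j+1} = f_{i,j} + A[i-1, j] u_j,
    starting from f_{0,0} = f_{i,0} = 0."""
    m, n = A.shape
    f = np.zeros((m + 1, n + 1))          # row 0 tracks the objective
    for j in range(n):
        f[0, j + 1] = f[0, j] + c[j] * u[j]
        f[1:, j + 1] = f[1:, j] + A[:, j] * u[j]
    return f

# At j = n, row 0 holds the objective value and rows 1..m the
# left-hand sides of the equality constraints (2).
c = np.array([3.0, 1.0, 2.0])
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
u = np.array([1, 0, 1])
f = partial_sums(c, A, u)
```

The final column of `f` is exactly the vector $x\_n$ defined next.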

For ease of notation, we create a new $(m+1) \times 1$ vector $x\_j = [f\_{0,j}, f\_{1,j}, \ldots, f\_{m,j}]^T$, whose $i$th element is denoted $x\_j(i)$ for $i = 1, \ldots, m+1$ and $j = 0, \ldots, n$. We also define the $(m+1) \times 1$ vector $a\_j = [\tilde{c}\_j, \tilde{a}\_{1j}, \ldots, \tilde{a}\_{mj}]^T$ for $j = 0, \ldots, n-1$, and the $(m+1) \times 1$ vector $b = [0, \tilde{b}\_1, \ldots, \tilde{b}\_m]^T$, whose $i$th element is denoted $b(i)$ for $i = 1, \ldots, m+1$. We define Problem 2 as follows, with initial condition $x\_0$ a vector of zeros:

#### Problem 2.

$$\begin{array}{ll}\min & x\_n(1)\\\text{s.t.} & x\_{j+1} = x\_j + a\_j u\_j \quad j = 0, \dots, n-1\end{array} \tag{6}$$

$$x\_0 = 0\tag{7}$$

$$x\_n(i) = b(i) \qquad \qquad \qquad i = 2, \ldots, m+1 \tag{8}$$

$$u\_j(u\_j - 1) = 0 \qquad \qquad j = 0, \ldots, n - 1 \tag{9}$$

#### Proposition 1. *Problem 2 exactly represents Problem 1.*

The proof is straightforward: the constraints of Problem 2 ensure feasibility in Problem 1, and the objective functions are equivalent.

#### *2.2. Construct the LQT Problem*

We construct an LQT problem, Problem 3, by first defining an error term, as a measure of unsatisfied constraints, an $(m+1) \times 1$ vector $e\_j$ for $j = 0, \ldots, n$, as

$$e\_j = x\_j - b \tag{10}$$

We develop the dynamics in terms of the measure $e\_j$ by combining Equation (10) with Equation (6), yielding

$$e\_{j+1} = e\_j + a\_j u\_j \tag{11}$$

and note that $e\_0 = -b$, given initial condition $x\_0 = 0$. The criterion minimizes the measure of unsatisfied constraints, with a terminal penalty on infeasibility and on the objective function value:

$$J(u) = \frac{1}{2} \sum\_{j=0}^{n-1} e\_j^T Q\_j e\_j + \frac{1}{2} e\_n^T F e\_n \tag{12}$$

We also relax constraint (9) to $0 \le u\_j \le 1$.

The parameters $Q\_j$ and $F$ are positive semi-definite and user-specified. The $(m+1) \times (m+1)$ matrix $Q\_j$ penalizes the unsatisfied constraints along the trajectory, and the $(m+1) \times (m+1)$ matrix $F$ penalizes the terminal conditions and aids in minimizing the original objective function.

We now summarize our discrete LQT problem with the criterion in Equation (12) as follows:

#### Problem 3.

$$\min\_{\substack{u\_j\\j=0,\ldots,n-1}} \qquad J(u) = \frac{1}{2} \sum\_{j=0}^{n-1} e\_j^T Q\_j e\_j + \frac{1}{2} e\_n^T F e\_n \tag{13}$$

$$\text{s.t.} \qquad e\_{j+1} = e\_j + a\_j u\_j \quad j = 0, \ldots, n-1 \tag{14}$$

$$0 \le u\_j \le 1 \qquad \qquad j = 0, \ldots, n-1 \tag{15}$$

$$e\_0 = -b \tag{16}$$
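For concreteness, the criterion (13) along the dynamics (14)–(16) can be evaluated for any candidate control sequence. A minimal numpy sketch follows; the function name `lqt_cost` is our own illustration:

```python
import numpy as np

def lqt_cost(c, A, b_tilde, u, Q, F):
    """Evaluate the LQT criterion (13) along the error dynamics
    (14)-(16): e_0 = -b, e_{j+1} = e_j + a_j u_j."""
    m, n = A.shape
    a = np.vstack([c, A])                  # a_j is column j, shape (m+1,)
    b = np.concatenate(([0.0], b_tilde))   # b = [0, b_1, ..., b_m]^T
    e = -b
    J = 0.0
    for j in range(n):
        J += 0.5 * e @ Q @ e               # running penalty (1/2) e_j^T Q_j e_j
        e = e + a[:, j] * u[j]
    return J + 0.5 * e @ F @ e             # terminal penalty (1/2) e_n^T F e_n
```

With $Q\_j = 0$ and $F = I$, a feasible binary $u$ leaves only the objective component in $e\_n$, so the cost reduces to half the squared objective value.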

It is known that solving Problem 3 directly is numerically unstable [24]. However, Theorem 1 suggests an algorithmic approach to solving Problem 3, by making a discrete analog to a bang-bang control with a switching function.

Theorem 1. *Analogous to a bang-bang control in continuous time, Problem 3 has an optimal binary solution with $u\_j \in \{0, 1\}$ for discrete times $j = 0, 1, \ldots, n-1$ on non-singular arcs.*

Proof. We first construct the Hamiltonian function [24] as follows

$$H(e\_j, \lambda\_{j+1}, u\_j) = \frac{1}{2} e\_j^T Q\_j e\_j + \lambda\_{j+1}^T \left( e\_j + a\_j u\_j \right) \tag{17}$$

where $\lambda\_j$ is the $(m+1) \times 1$ costate vector for $j = 0, \ldots, n-1$, satisfying

$$\lambda\_j = \lambda\_{j+1} + Q\_j e\_j \quad \text{and} \quad \lambda\_n = F e\_n \tag{18}$$

Let $e^\*$, $\lambda^\*$ and $u^\*$ be the optimal solution. By the necessary conditions for optimality [24], we have $H(e\_j^\*, \lambda\_{j+1}^\*, u\_j^\*) \leq H(e\_j^\*, \lambda\_{j+1}^\*, u\_j)$

$$\Rightarrow \frac{1}{2} e\_j^{\*T} Q\_j e\_j^\* + \lambda\_{j+1}^{\*T} \left( e\_j^\* + a\_j u\_j^\* \right) \leq \frac{1}{2} e\_j^{\*T} Q\_j e\_j^\* + \lambda\_{j+1}^{\*T} \left( e\_j^\* + a\_j u\_j \right)$$

$$\Rightarrow \lambda\_{j+1}^{\*T} a\_j u\_j^\* \leq \lambda\_{j+1}^{\*T} a\_j u\_j, \quad \forall u\_j \in [0, 1] \tag{19}$$

Thus, we have

$$u\_j^\* = \begin{cases} 1 & \text{if } \lambda\_{j+1}^{\*T} a\_j < 0 \\ \in [0, 1] & \text{if } \lambda\_{j+1}^{\*T} a\_j = 0 \\ 0 & \text{if } \lambda\_{j+1}^{\*T} a\_j > 0 \end{cases} \tag{20}$$

If $\lambda\_{j+1}^{\*T} a\_j \neq 0$, binary values for $u\_j^\*$ are determined by Equation (20). When $\lambda\_{j+1}^{\*T} a\_j = 0$, the arc is singular, and we may reintroduce constraint (9), $u\_j(1 - u\_j) = 0$, to force a binary solution.

To get an intuitive understanding of the singularity issue, suppose all $Q\_j = 0$ and the element at row 1, column 1 of the matrix $F$ equals zero. Then Problem 3 reduces to minimizing the infeasibility penalty term $\frac{1}{2}\sum\_{i=1}^{m} F\_i \left( \sum\_{k=0}^{n-1} \tilde{a}\_{ik} u\_k - \tilde{b}\_i \right)^2$. If this term equals zero, then $e\_n = 0$, satisfying all of the original constraints (2), and $\lambda\_n = 0$ from Equation (18); because $Q\_j = 0$, all $\lambda\_j = 0$. Then $\lambda\_{j+1}^{\*T} a\_j = 0$ for all $j$. However, if $Q\_j$ and the first element of $F$ have positive values, then $\lambda\_{j+1}^{\*T} a\_j$ may be positive or negative and Equation (20) is useful. An auxiliary problem to determine values for $Q\_j$ and $F$ that resolve the singularity will be explored in future research.
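The switching rule (20) is simple to apply once the costates are known. A small illustrative sketch follows; the function name and the tolerance used to flag a singular arc are our own choices:

```python
import numpy as np

def bang_bang_controls(lam, a, eps=1e-12):
    """Apply the switching rule (20): u_j = 1 when lambda_{j+1}^T a_j < 0,
    u_j = 0 when it is > 0; a near-zero switching function marks a
    singular arc, flagged here with None.

    lam has shape (m+1, n+1) with costate lambda_j in column j;
    a has shape (m+1, n) with a_j in column j."""
    n = a.shape[1]
    u = []
    for j in range(n):
        s = lam[:, j + 1] @ a[:, j]       # switching function lambda_{j+1}^T a_j
        u.append(None if abs(s) <= eps else (1 if s < 0 else 0))
    return u
```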

To create an LQT problem that is practical to solve, we introduce a penalty term $u\_j(u\_j - 1) R\_j$ in the criterion, where $R\_j$ is a Lagrange multiplier associated with constraint (9):

#### Problem 4.

$$\min\_{\substack{u\_j\\j=0,\ldots,n-1}} \quad \frac{1}{2} \sum\_{j=0}^{n-1} \left( e\_j^T Q\_j e\_j + u\_j(u\_j - 1) R\_j \right) + \frac{1}{2} e\_n^T F e\_n \tag{21}$$

$$\text{s.t.} \qquad e\_{j+1} = e\_j + a\_j u\_j \qquad j = 0, \ldots, n-1 \tag{22}$$

$$e\_0 = -b \tag{23}$$

The optimal control $\hat{u}\_j$ for Problem 4 can be obtained by the standard dynamic programming method [25] (see the Appendix for details). The computation associated with solving Problem 4 is $O(nm^3)$. We then obtain an approximate binary solution to the original BIP problem as follows:

$$u\_j^\* = \begin{cases} \ 0 \text{ for } \hat{u}\_j < 0.5\\ \ 1 \text{ for } \hat{u}\_j \ge 0.5 \end{cases} \tag{24}$$

for j = 0, 1,...,n − 1.

Motivated by the successive overrelaxation method [26], we introduce a weighting factor $\omega$ to improve the stability of our proposed method. Rather than applying quantization only at the final step as in Equation (24), we quantize at each step and propagate the binary value $\bar{u}\_j$ during the dynamic programming procedure (see the Appendix for details). At the final step, we then replace $\hat{u}\_j$ in Equation (24) with $\omega \hat{u}\_j + (1 - \omega)\bar{u}\_j$ to obtain the approximate binary solution.
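The final blended quantization step can be sketched as follows; the function name and the default $\omega$ are our own (the numerical section reports $\omega$ between 0.5 and 0.6 working best):

```python
import numpy as np

def blended_quantization(u_hat, u_bar, omega=0.55):
    """Blend the continuous DP control u_hat with the stepwise-quantized
    control u_bar using weighting factor omega, then threshold at 0.5
    as in Equation (24)."""
    blend = omega * np.asarray(u_hat, dtype=float) \
        + (1 - omega) * np.asarray(u_bar, dtype=float)
    return (blend >= 0.5).astype(int)
```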

#### 3. Numerical Results

We explore the limits of the algorithm with test problems obtained from MIPLIB [27]. MIPLIB is a standard and widely used benchmark for comparing the performance of mixed integer programming algorithms, and most of its problems arise from real-world applications. We present six tests in our numerical results section, where air01, air03, air04, air05 and nw04 are airline crew scheduling problems. The dimensions and optimal solutions of the test problems, together with the numerical results, are shown in Table 1. The CPU time is given for a single run of branch-and-cut with CPLEX, branch-and-bound in MATLAB, and our method in MATLAB. In Table 1, the feasibility measure is the sum of the absolute constraint violations over all constraints, and the optimality measure is defined as $(\hat{f} - f^\*)/(f\_W - f^\*)$ [28], where $f^\*$ denotes the true optimal objective function value, $\hat{f}$ denotes the function value found by our proposed method, and $f\_W$ denotes the worst (largest) function value. All tests were run on an Intel(R) Core(TM) i3 CPU @ 2.4 GHz machine under 64-bit Windows 7 with 4 GB RAM.


Table 1. Test Problems from MIPLIB.

In the numerical tests, we experimented with different values of the parameters $Q\_j$, $R\_j$ and $F$ on the small problems enigma and air01. The diagonal elements of $Q\_j$ were set to 0, 1 and 10, and we found that smaller values were better, so we report results with $Q\_j = 0$ in Table 1. We also tested values of $R\_j$ set to 1, 10, 100 and 1000; since there was little difference in performance, we set $R\_j = 10$. For the parameter $F$, we found that larger values were better, so we set the diagonal elements of $F$ to 100,000. The parameters $Q\_j$ penalize the intermediate error values, whereas $F$ penalizes the terminal error at $n$. Since the terminal error better reflects the original BIP optimality and infeasibility measures, it makes intuitive sense to set $Q\_j = 0$ and $F$ large.

Values for the weighting factor $\omega$ ranged from 0.5 to 0.9 in our exploratory tests, and the best results were typically obtained for $\omega$ between 0.5 and 0.6.

CPLEX ran very quickly and always found an optimal solution; branch-and-bound in MATLAB was slower and found a feasible solution only for enigma, air01 and air03; our method in MATLAB ran slower than CPLEX but generally faster than branch-and-bound in MATLAB. Even though our numerical results are worse than those of CPLEX, our methodology has potential for extension with polynomial computational complexity.

#### 4. Summary and Conclusion

The meta-control algorithm for approximately solving large-scale BIPs shows much promise because the computational complexity is linear in $n$ (the number of variables) and polynomial in $m$ (the number of constraints), specifically on the order of $O(nm^3)$. An LQT approach is suggested by the result in Theorem 1, which proves the existence of an optimal binary solution to the LQT problem. We provide numerical results with experimentally chosen parameter values that demonstrate the effectiveness of our approach.

In our future research, we will develop an auxiliary iterative method that can provide an explicit algorithm for detecting valid parameter values automatically and investigate other ways to integrate the quantization into the meta-control algorithm to improve the performance of this algorithm. We will also develop a stochastic decomposition method to reduce the computation time.

#### Acknowledgments

This research is sponsored, in part, by the National Science Foundation through Grant CMMI-0908317.

#### Conflicts of Interest

The authors declare no conflict of interest.

#### Appendix

We solve for $\hat{u}\_j$ in Problem 4 using a dynamic programming approach. We write the cost-to-go equation as:

$$V\left(e\_j, j\right) = \min\_{u\_j} \left\{ \frac{1}{2} e\_j^T Q\_j e\_j + \frac{1}{2} u\_j (u\_j - 1) R\_j + V(e\_{j+1}, j+1) \right\} \tag{25}$$

with $V(e\_n, n) = \frac{1}{2} e\_n^T F e\_n$, and equate it to the Riccati form

$$V\left(e\_j, j\right) = \frac{1}{2} e\_j^T \Sigma\_j e\_j + e\_j^T \Psi\_j + \Omega\_j \tag{26}$$

where $\Sigma\_j$ is a symmetric positive-definite $(m+1) \times (m+1)$ matrix, $\Psi\_j$ is an $(m+1) \times 1$ vector, and $\Omega\_j$ is a scalar.

Combining the Equations (25), (26) and the dynamics in Equation (22), we have

$$V(e\_j, j) = \min\_{u\_j} \left\{ \frac{1}{2} e\_j^T Q\_j e\_j + \frac{1}{2} u\_j (u\_j - 1) R\_j + \frac{1}{2} \left( e\_j + a\_j u\_j \right)^T \Sigma\_{j+1} \left( e\_j + a\_j u\_j \right) + \left( e\_j + a\_j u\_j \right)^T \Psi\_{j+1} + \Omega\_{j+1} \right\} \tag{27}$$

In order to minimize this expression, we isolate the terms involving $u\_j$:

$$\frac{1}{2}u\_j(u\_j-1)R\_j + \frac{1}{2}u\_j^2a\_j^T\Sigma\_{j+1}a\_j + u\_ja\_j^T\Sigma\_{j+1}e\_j + u\_ja\_j^T\Psi\_{j+1}$$

and take the derivative with respect to $u\_j$ and set it to zero:

$$(u\_j - \frac{1}{2})R\_j + a\_j^T \Sigma\_{j+1} a\_j u\_j + a\_j^T \Sigma\_{j+1} e\_j + a\_j^T \Psi\_{j+1} = 0$$

This yields the optimal control $\hat{u}\_j$:

$$\hat{u}\_j = \frac{\frac{1}{2}R\_j - a\_j^T \Sigma\_{j+1} e\_j - a\_j^T \Psi\_{j+1}}{R\_j + a\_j^T \Sigma\_{j+1} a\_j} \tag{28}$$

In order to simplify notation, we let

$$S\_j = \frac{-a\_j^T \Sigma\_{j+1}}{R\_j + a\_j^T \Sigma\_{j+1} a\_j} \tag{29}$$

$$\delta\_j = \frac{\frac{1}{2}R\_j - a\_j^T \Psi\_{j+1}}{R\_j + a\_j^T \Sigma\_{j+1} a\_j} \tag{30}$$

and we can now write

$$\hat{u}\_j = S\_j e\_j + \delta\_j \tag{31}$$

We equate the Riccati form in Equation (26) with the value function in Equation (27) evaluated at $\hat{u}\_j$ from Equation (31), yielding

$$\begin{aligned} \frac{1}{2}e\_j^T \Sigma\_j e\_j + e\_j^T \Psi\_j + \Omega\_j &= \frac{1}{2} e\_j^T Q\_j e\_j + \frac{1}{2} (S\_j e\_j + \delta\_j)(S\_j e\_j + \delta\_j - 1) R\_j \\ &\quad + \frac{1}{2} \big( e\_j + a\_j (S\_j e\_j + \delta\_j) \big)^T \Sigma\_{j+1} \big( e\_j + a\_j (S\_j e\_j + \delta\_j) \big) \\ &\quad + \big( e\_j + a\_j (S\_j e\_j + \delta\_j) \big)^T \Psi\_{j+1} + \Omega\_{j+1} \end{aligned}$$

We now solve for $\Sigma\_j$ and $\Psi\_j$ by separating the quadratic terms from the linear terms in $e\_j$. Isolating the quadratic terms in $e\_j$, we have

$$\frac{1}{2}e\_j^T \Sigma\_j e\_j = \frac{1}{2} e\_j^T Q\_j e\_j + \frac{1}{2} e\_j^T S\_j^T R\_j S\_j e\_j + \frac{1}{2} e\_j^T \left(I + a\_j S\_j\right)^T \Sigma\_{j+1} \left(I + a\_j S\_j\right) e\_j$$

which yields the Riccati equation for $\Sigma\_j$:

$$\Sigma\_j = Q\_j + S\_j^T R\_j S\_j + \left(I + a\_j S\_j\right)^T \Sigma\_{j+1} \left(I + a\_j S\_j\right) \tag{32}$$

Isolating the linear terms in $e\_j$, we have

$$e\_j^T \Psi\_j = e\_j^T S\_j^T \left(\delta\_j - \frac{1}{2}\right) R\_j + e\_j^T \left(I + a\_j S\_j\right)^T \Sigma\_{j+1} a\_j \delta\_j + e\_j^T \left(I + a\_j S\_j\right)^T \Psi\_{j+1}$$

and factoring out $e\_j^T$, the tracking equation for $\Psi\_j$ is

$$\Psi\_j = S\_j^T(\delta\_j - \frac{1}{2})R\_j + \left(I + a\_j S\_j\right)^T \Sigma\_{j+1} a\_j \delta\_j + \left(I + a\_j S\_j\right)^T \Psi\_{j+1} \tag{33}$$

Therefore, $\Sigma\_j$ and $\Psi\_j$ can be computed backwards in time from Equations (32) and (33) with terminal conditions $\Sigma\_n = F$, $\Psi\_n = 0$.

Given $\Sigma\_j$ and $\Psi\_j$, we can calculate $\hat{u}\_j$ from Equations (28), (22) and (23). To calculate $\bar{u}\_j$ for our implementation with quantization, we use the same $\Sigma\_j$ and $\Psi\_j$, but introduce rounding to the nearest integer in Equations (28), (22) and (23) to obtain:

$$\bar{u}\_j = \text{int}\left[\frac{\frac{1}{2}R\_j - a\_j^T \Sigma\_{j+1} \bar{e}\_j - a\_j^T \Psi\_{j+1}}{R\_j + a\_j^T \Sigma\_{j+1} a\_j}\right] \tag{34}$$

and

$$\bar{e}\_{j+1} = \text{int}[\bar{e}\_j + a\_j \bar{u}\_j] \tag{35}$$

with $\bar{e}\_0 = -\text{int}[b]$.
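Putting the appendix together, a minimal numpy sketch of one pass of the procedure is given below: the backward sweep (29)–(33) followed by a forward sweep computing $\hat{u}\_j$ from (28) and the rounded $\bar{u}\_j$ from (34)–(35), blended and thresholded as in Section 2. The function name and the scalar parameter defaults ($Q\_j = 0$, $R\_j = 10$, diagonal $F$ = 100,000, $\omega$ = 0.55, taken from the numerical section) are our own choices; this is an illustrative sketch, not the authors' reference implementation:

```python
import numpy as np

def meta_control_bip(c, A, b_tilde, Q_diag=0.0, R=10.0, F_diag=1e5, omega=0.55):
    """One pass of the meta-control procedure for Problem 4: a backward
    Riccati/tracking sweep, Equations (32)-(33), then a forward sweep
    computing u_hat via (28) and the stepwise-rounded u_bar via (34)-(35),
    finally blended and thresholded to a binary vector."""
    m, n = A.shape
    d = m + 1
    a = np.vstack([c, A])                        # a_j = column j of a
    b = np.concatenate(([0.0], b_tilde))
    Q = Q_diag * np.eye(d)
    F = F_diag * np.eye(d)
    I = np.eye(d)

    # Backward sweep with terminal conditions Sigma_n = F, Psi_n = 0.
    Sigma = [None] * (n + 1)
    Psi = [None] * (n + 1)
    Sigma[n], Psi[n] = F, np.zeros(d)
    for j in range(n - 1, -1, -1):
        aj = a[:, j]
        denom = R + aj @ Sigma[j + 1] @ aj
        S = -(aj @ Sigma[j + 1]) / denom                              # (29)
        delta = (0.5 * R - aj @ Psi[j + 1]) / denom                   # (30)
        M = I + np.outer(aj, S)
        Sigma[j] = Q + R * np.outer(S, S) + M.T @ Sigma[j + 1] @ M    # (32)
        Psi[j] = (delta - 0.5) * R * S \
            + M.T @ (Sigma[j + 1] @ aj) * delta + M.T @ Psi[j + 1]    # (33)

    # Forward sweeps: continuous u_hat along e, rounded u_bar along e_bar.
    e = -b
    e_bar = -np.rint(b)
    u_hat = np.zeros(n)
    u_bar = np.zeros(n)
    for j in range(n):
        aj = a[:, j]
        denom = R + aj @ Sigma[j + 1] @ aj
        u_hat[j] = (0.5 * R - aj @ Sigma[j + 1] @ e
                    - aj @ Psi[j + 1]) / denom                        # (28)
        u_bar[j] = np.rint((0.5 * R - aj @ Sigma[j + 1] @ e_bar
                            - aj @ Psi[j + 1]) / denom)               # (34)
        e = e + aj * u_hat[j]
        e_bar = np.rint(e_bar + aj * u_bar[j])                        # (35)

    blend = omega * u_hat + (1 - omega) * u_bar
    return (blend >= 0.5).astype(int)
```

The backward sweep costs $O(nm^3)$ from the dense matrix products, matching the complexity stated in Section 2; the forward sweeps are $O(nm^2)$.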

#### References

