
*Entropy*
**2013**,
*15*(9),
3592-3601;
doi:10.3390/e15093592

Article

A Discrete Meta-Control Procedure for Approximating Solutions to Binary Programs

Department of Industrial and Systems Engineering, University of Washington, Seattle, WA 98195, USA

* Author to whom correspondence should be addressed.

Received: 15 June 2013; in revised form: 30 August 2013 / Accepted: 30 August 2013 / Published: 4 September 2013

## Abstract

Large-scale binary integer programs occur frequently in many real-world applications. For some binary integer problems, finding an optimal solution or even a feasible solution is computationally expensive. In this paper, we develop a discrete meta-control procedure to approximately solve large-scale binary integer programs efficiently. The key idea is to map the vector of n binary decision variables into a scalar function defined over a time interval $[0,n]$ and construct a linear quadratic tracking (LQT) problem that can be solved efficiently. We prove that an LQT formulation has an optimal binary solution, analogous to a classical bang-bang control in continuous time. Our LQT approach can provide advantages in reducing computation while generating a good approximate solution. Numerical examples are presented to demonstrate the usefulness of the proposed method.

Keywords: large-scale binary integer programs; linear quadratic tracking; optimal control

## 1. Introduction

Many decision problems in economics and engineering can be formulated as binary integer programming (BIP) problems. These BIP problems are often easy to state but difficult to solve, since many of them are NP-hard [1], and even finding a feasible solution is NP-complete [2,3]. Because of their importance in formulating many practical problems, BIP algorithms have been widely studied. These algorithms can be classified into exact and approximate algorithms as follows [4]:

(1) Exact algorithms: The exact algorithms are guaranteed either to find an optimal solution or prove that the problem is infeasible, but they are usually computationally expensive. Major methods for BIP problems include branch and bound [5], branch-and-cut [6], branch-and-price [7], dynamic programming methods [8], and semidefinite relaxations [9].

(2) Approximate algorithms: The approximate algorithms are used to achieve efficient running time with a sacrifice in the quality of the solution found. Examples of well-known metaheuristics, as an approximate approach, are simulated annealing [10], annealing adaptive search [11], cross entropy [12], genetic algorithms [13] and nested partitions [14]. Moreover, many hybrid approaches that combine both the exact and approximate algorithms have been studied to exploit the benefits of each [15]. For additional references regarding large-scale BIP algorithms, see [1,16,17,18].

Another effective heuristic technique, studied recently, transforms discrete optimization problems into problems in the control theory, information theory, and signal processing domains. In [19,20], circuit-related techniques are used to transform unconstrained discrete quadratic programming problems and provide high-quality suboptimal solutions. Our focus, in contrast, is on problems with linear (rather than quadratic) objective functions and linear equality constraints (rather than unconstrained problems).

In our previous work [21], we introduced an approach to approximating a BIP solution using continuous optimal control theory, which showed promise for large-scale problems. The key innovation of our optimal control approach is to map the vector of n binary decision variables into a scalar function defined over a time interval $[0,n]$ and define a linear quadratic tracking (LQT) problem that can be solved efficiently. In this paper, we use the same mapping, but instead of solving the LQT problem in continuous time, we solve it in discrete time: the time index in our reformulation of the BIP represents the dimension of the problem, $\{0,1,\dots ,n\}$, so a discrete-time approach represents the partial summing reformulation more accurately than the continuous one. In addition, in our previous work, the transformation into a continuous LQT problem was based on a reduced set of constraints, and a least squares approach was used to estimate the error due to the constraint reduction; the algorithm iteratively solved the LQT problem and the least squares problem until convergence conditions were satisfied. In this paper, instead of iteratively solving the LQT problem based on a reduced set of constraints, we solve the LQT problem only once over the full state space, which improves the flow of information for convergence.

We have chosen a quadratic criterion for our approach because its formalism includes a measure of the residual entropy of the dynamics of the algorithm as it computes successive approximations to a solution. Because of the mapping used in our algorithm, the information measure is given by the inverse of the solution of the Riccati equation that we solve: that inverse is a Fisher information matrix of the algorithm viewed as a dynamical system [22,23]. The information from the algorithm in the criterion determines the quality of the solution.

The computational complexity for solving the LQT problem is polynomial in the time horizon, the dimension of the state space and the number of control variables. In our LQT problem, the time horizon is n, the dimension of the state space is the number of constraints m, and the number of control variables is 1. Our meta-control approach solves the LQT problem to obtain an efficient approximate solution to the original BIP problem.

## 2. Development of the Meta-Control Algorithm for BIP Problems

The original BIP problem is:

**Problem 1.**

$$\min_{u_j,\; j=0,\dots,n-1} \quad \sum_{j=0}^{n-1} \tilde{c}_j u_j$$

$$\text{s.t.} \quad \sum_{j=0}^{n-1} \tilde{a}_{ij} u_j = \tilde{b}_i \qquad i=1,\dots,m$$

$$u_j \in \{0,1\} \qquad j=0,\dots,n-1$$

#### 2.1. Partial Summing Formulation

We start by defining partial summing variables as in [21] from the original BIP problem, for $i=1,\dots,m$ and $j=0,\dots,n-1$, with initial conditions $f_{0,0}=f_{i,0}=0$:

$$f_{0,j+1} = f_{0,j} + \tilde{c}_j u_j$$

$$f_{i,j+1} = f_{i,j} + \tilde{a}_{ij} u_j$$

For ease of notation, we create a new $\left(m+1\right)\times 1$ vector ${x}_{j}={\left[{f}_{0,j},{f}_{1,j},\dots ,{f}_{m,j}\right]}^{T}$ and the ${i}^{th}$ element of ${x}_{j}$ is denoted ${x}_{j(i)}$ for $i=1,\dots ,m+1$ and for $j=0,\dots ,n.$ We also define the $\left(m+1\right)\times 1$ vector ${a}_{j}={\left[{\tilde{c}}_{j},{\tilde{a}}_{1j},\dots ,{\tilde{a}}_{mj}\right]}^{T}$ for $j=0,\dots ,n-1$, and the $\left(m+1\right)\times 1$ vector $b={\left[0,{\tilde{b}}_{1},\dots ,{\tilde{b}}_{m}\right]}^{T}$, where the ${i}^{th}$ element of b is denoted ${b}_{(i)}$ for $i=1,\dots ,m+1$. We define Problem 2 as follows, with initial conditions ${x}_{0}$ as a vector of zeros:

**Problem 2.**

$$\min_{u_j,\; j=0,\dots,n-1} \quad x_{n(1)}$$

$$\text{s.t.} \quad x_{j+1} = x_j + a_j u_j \qquad j=0,\dots,n-1$$

$$x_0 = 0$$

$$x_{n(i)} = b_{(i)} \qquad i=2,\dots,m+1$$

$$u_j(u_j-1) = 0 \qquad j=0,\dots,n-1$$
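To make the partial summing reformulation concrete, the state dynamics of Problem 2 can be sketched in Python. This is a minimal illustration, not the authors' MATLAB implementation; the array names are chosen for readability.

```python
import numpy as np

def partial_sum_state(c, A, u):
    """Propagate the partial-summing state x_j of Problem 2 for a given
    binary vector u.  c: (n,) objective coefficients, A: (m, n) constraint
    matrix, u: (n,) binary decisions.  Returns the trajectory x_0, ..., x_n,
    each row the (m+1,)-vector [f_{0,j}, f_{1,j}, ..., f_{m,j}]."""
    n = len(u)
    a = np.vstack([c, A])              # column j is a_j = [c_j, a_1j, ..., a_mj]^T
    x = np.zeros((n + 1, a.shape[0]))  # x_0 = 0
    for j in range(n):
        x[j + 1] = x[j] + a[:, j] * u[j]   # x_{j+1} = x_j + a_j u_j
    return x
```

After the forward pass, `x[n][0]` equals the objective value $\sum_j \tilde{c}_j u_j$ and `x[n][1:]` holds the constraint partial sums, so feasibility amounts to checking `x[n][1:] == b`.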

**Proposition 1.**

Problem 2 exactly represents Problem 1.

The proof is straightforward: the constraints ensure feasibility, and the objective function is equivalent to that of Problem 1.

#### 2.2. Construct the LQT Problem

We construct an LQT problem, Problem 3, by first defining an error term, a measure of unsatisfied constraints: the $(m+1)\times 1$ vector $e_j$, for $j=0,\dots,n$, given by

$$e_j = x_j - b$$

We develop the dynamics in terms of the measure $e_j$ by combining Equation (10) with Equation (6), yielding

$$e_{j+1} = e_j + a_j u_j$$

and note that $e_0 = -b$, given the initial conditions $x_0 = 0$. The criterion is to minimize the measure of unsatisfied constraints, using a terminal penalty for infeasibility and objective function value:

$$J(u) = \frac{1}{2}\sum_{j=0}^{n-1} e_j^T Q_j e_j + \frac{1}{2} e_n^T F e_n$$

We also relax constraint (9) to $0 \le u_j \le 1$.

The parameters ${Q}_{j}$ and F are positive semi-definite and user-specified. The $(m+1)\times (m+1)$ matrix ${Q}_{j}$ is used to penalize the unsatisfied constraints. The $(m+1)\times (m+1)$ matrix F is used to penalize the terminating conditions and aid in minimizing the original objective function.

We now summarize our discrete LQT problem with the criterion in Equation (12) as follows:

**Problem 3.**

$$\min_{u_j,\; j=0,\dots,n-1} \quad J(u) = \frac{1}{2}\sum_{j=0}^{n-1} e_j^T Q_j e_j + \frac{1}{2} e_n^T F e_n$$

$$\text{s.t.} \quad e_{j+1} = e_j + a_j u_j \qquad j=0,\dots,n-1$$

$$0 \le u_j \le 1 \qquad j=0,\dots,n-1$$

$$e_0 = -b$$
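The LQT criterion above is easy to evaluate directly for any relaxed control sequence, which is useful for checking candidate solutions. The following Python sketch (an illustration under the stated dynamics, with a constant stage weight $Q$ for brevity) computes $J(u)$:

```python
import numpy as np

def lqt_cost(u, a, b, Q, F):
    """Evaluate the Problem 3 criterion J(u) for controls u in [0,1].
    a: (m+1, n) matrix whose columns are a_j; b: (m+1,) target vector;
    Q: (m+1, m+1) stage weight (held constant over j here); F: (m+1, m+1)
    terminal weight."""
    e = -b.astype(float)               # e_0 = -b
    J = 0.0
    for j in range(a.shape[1]):
        J += 0.5 * e @ Q @ e           # running penalty (1/2) e_j^T Q e_j
        e = e + a[:, j] * u[j]         # dynamics e_{j+1} = e_j + a_j u_j
    return J + 0.5 * e @ F @ e         # terminal penalty (1/2) e_n^T F e_n
```

With $Q = 0$ the cost reduces to the terminal infeasibility-and-objective penalty alone, which is the parameter regime the numerical section later reports.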

It is known that solving Problem 3 directly is numerically unstable [24]. However, Theorem 1 suggests an algorithmic approach to solving Problem 3, by making a discrete analog to a bang-bang control with a switching function.

**Theorem 1.**

Analogous to a bang-bang control in continuous time, Problem 3 has an optimal binary solution with $u_j \in \{0,1\}$ for discrete times $j=0,1,\dots,n-1$, provided the arcs are non-singular.

Proof.

We first construct the Hamiltonian function [24] as follows:

$$H(e_j, \lambda_{j+1}, u_j) = \frac{1}{2} e_j^T Q_j e_j + \lambda_{j+1}^T \left(e_j + a_j u_j\right)$$

where $\lambda_j$ is the $(m+1)\times 1$ costate vector, for $j=0,\dots,n-1$, satisfying

$$\lambda_j = \lambda_{j+1} + Q_j e_j \quad \text{and} \quad \lambda_n = F e_n$$

Let $e^*$, $\lambda^*$ and $u^*$ be the optimal solution. By the necessary conditions for optimality [24], we have $H(e_j^*, \lambda_{j+1}^*, u_j^*) \le H(e_j^*, \lambda_{j+1}^*, u_j)$, that is,

$$\frac{1}{2} e_j^{*T} Q_j e_j^* + \lambda_{j+1}^{*T}\left(e_j^* + a_j u_j^*\right) \le \frac{1}{2} e_j^{*T} Q_j e_j^* + \lambda_{j+1}^{*T}\left(e_j^* + a_j u_j\right)$$

$$\Rightarrow \lambda_{j+1}^{*T} a_j u_j^* \le \lambda_{j+1}^{*T} a_j u_j, \qquad \forall u_j \in [0,1]$$

Thus, we have

$$u_j^* = \begin{cases} 1 & \text{if } \lambda_{j+1}^{*T} a_j < 0 \\ \in [0,1] & \text{if } \lambda_{j+1}^{*T} a_j = 0 \\ 0 & \text{if } \lambda_{j+1}^{*T} a_j > 0 \end{cases}$$

☐

If $\lambda_{j+1}^{*T} a_j \ne 0$, binary values for $u_j^*$ are determined by Equation (20). When $\lambda_{j+1}^{*T} a_j = 0$, the arc is singular, and we may reintroduce constraint (9), $u_j(u_j-1)=0$, to force a binary solution.
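The switching rule of Equation (20) can be expressed in a few lines of Python. This is an illustrative sketch: the singular case is signalled with `None` so a caller can fall back to the reintroduced binary constraint.

```python
import numpy as np

def switching_control(lmbda_next, a_j):
    """Bang-bang rule of Equation (20): u_j* = 1 when the switching
    function lambda_{j+1}^T a_j is negative, 0 when positive.  A singular
    arc (switching function exactly zero) leaves u_j undetermined; we
    return None so the caller can force a binary value separately."""
    s = float(lmbda_next @ a_j)   # switching function lambda_{j+1}^T a_j
    if s < 0:
        return 1
    if s > 0:
        return 0
    return None                   # singular arc
```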

To get an intuitive understanding of the singularity issue, suppose all $Q_j = 0$ and the element in row 1, column 1 of the matrix F equals zero. Then Problem 3 reduces to minimizing the infeasibility penalty term $\frac{1}{2}\sum_{i=1}^{m}\left[\left(\sum_{k=0}^{n-1}\tilde{a}_{ik}u_k - \tilde{b}_i\right)^2 F_i\right]$. If this term equals zero, then $e_n = 0$, satisfying all of the original constraints (2), and $\lambda_n = 0$ from Equation (18); because $Q_j = 0$, all $\lambda_j = 0$. Then $\lambda_{j+1}^{*T} a_j = 0$ for all j. However, if $Q_j$ and the first element of F have positive values, then $\lambda_{j+1}^{*T} a_j$ may be positive or negative and Equation (20) is useful. An auxiliary problem to determine values for $Q_j$ and F that resolve the singularity will be explored in future research.

To create an LQT problem that is practical to solve, we introduce a penalty term $u_j(u_j-1)R_j$ in the criterion, where $R_j$ is a Lagrange multiplier associated with constraint (9):

**Problem 4.**

$$\min_{u_j,\; j=0,\dots,n-1} \quad \frac{1}{2}\sum_{j=0}^{n-1}\left(e_j^T Q_j e_j + u_j(u_j-1)R_j\right) + \frac{1}{2} e_n^T F e_n$$

$$\text{s.t.} \quad e_{j+1} = e_j + a_j u_j \qquad j=0,\dots,n-1$$

$$e_0 = -b$$

The optimal control $\widehat{u}_j$ for Problem 4 can be computed by the standard dynamic programming method [25] (see the Appendix for details). The computation associated with solving Problem 4 is $O(nm^3)$. We then obtain an approximate binary solution to the original BIP problem, for $j=0,1,\dots,n-1$, as follows:

$$u_j^* = \begin{cases} 0 & \text{for } \widehat{u}_j < 0.5 \\ 1 & \text{for } \widehat{u}_j \ge 0.5 \end{cases}$$

Motivated by the successive overrelaxation method [26], we introduce a weighting factor ω to improve the stability of our proposed method. Rather than applying quantization only at the final step, as in Equation (24), we perform quantization at each step and propagate the binary value $\overline{u}_j$ during the dynamic programming procedure (see the Appendix for details). At the final step, we then replace $\widehat{u}_j$ in Equation (24) with $\omega \widehat{u}_j + (1-\omega)\overline{u}_j$ to obtain the approximate binary solution.
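The final blended quantization step can be sketched as follows. This is an illustration of the rule just described, with the default ω chosen from the range the numerical section reports as working well; it is not the authors' MATLAB code.

```python
def quantize(u_hat, u_bar, omega=0.55):
    """Blend the relaxed control u_hat with the propagated binary control
    u_bar using the weighting factor omega, then threshold at 0.5 as in
    Equation (24) to obtain the approximate binary decision."""
    blended = omega * u_hat + (1.0 - omega) * u_bar
    return 1 if blended >= 0.5 else 0
```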

## 3. Numerical Results

We explore the limits of the algorithm with test problems obtained from MIPLIB [27]. MIPLIB is a standard and widely used benchmark for comparing the performance of mixed integer programming algorithms, and most of its problems arise from real-world applications. We present six tests in our numerical results section, where *air01*, *air03*, *air04*, *air05* and *nw04* are airline crew scheduling problems. The dimensions, the optimal solutions for the test problems, and the numerical results are shown in Table 1. The CPU time is given for a single run with branch-and-cut in CPLEX, branch-and-bound in MATLAB, and our method in MATLAB. In Table 1, the feasibility measure is the sum of the absolute constraint violations over all constraints, and the optimality measure is defined as $\frac{\widehat{f}-{f}^{*}}{{f}_{W}-{f}^{*}}$ [28], where ${f}^{*}$ denotes the true objective function value, $\widehat{f}$ denotes the function value found by our proposed method, and ${f}_{W}$ denotes the worst (largest) function value. All tests were run on a 2.4 GHz Intel Core i3 machine with 4 GB RAM under 64-bit Windows 7.

**Table 1.** Numerical results for the MIPLIB test problems.

Problem | n | m | Time (sec), branch-and-cut in CPLEX | Time (sec), branch-and-bound in MATLAB | Time (sec), our method in MATLAB | Feasibility measure | Optimality measure (%)
---|---|---|---|---|---|---|---
enigma | 21 | 100 | 0.23 | 4.02 | 0.03 | 18 | 0
air01 | 771 | 23 | 0.28 | 2.86 | 0.22 | 13 | 2.55%
air03 | 124 | 10,757 | 1.05 | 17.64 | 34.00 | 138 | -11.68%
air04 | 8,904 | 823 | 34.35 | too large to run | 3231.5 | 811 | 1.43%
air05 | 426 | 7,195 | 26.66 | too large to run | 698.6 | 322 | -0.55%
nw04 | 87,482 | 36 | 9.83 | too large to run | 37.9 | 19 | 1.36%
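The optimality measure reported in Table 1 can be computed as follows. This helper is an illustrative sketch of the definition $\frac{\widehat{f}-f^*}{f_W-f^*}$ from [28]; the function name is ours.

```python
def optimality_measure(f_hat, f_star, f_worst):
    """Optimality measure (f_hat - f*)/(f_W - f*) as a percentage:
    0% means the found value f_hat equals the optimum f_star, and
    100% means it equals the worst value f_worst."""
    return 100.0 * (f_hat - f_star) / (f_worst - f_star)
```

Negative values, as for *air03* and *air05* in Table 1, indicate that the reported value lies below the optimum of the original problem, which can occur when the returned solution is infeasible.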

In the numerical tests, we experimented with different values of the parameters ${Q}_{j}$, ${R}_{j}$ and F on the small problems *enigma* and *air01*. The diagonal elements of ${Q}_{j}$ were set to 0, 1 and 10, and we found that smaller values were better, so we report results with ${Q}_{j}=0$ in Table 1. We also tested values of ${R}_{j}$ set to 1, 10, 100 and 1000, and since there was little difference in performance, we set ${R}_{j}=10$. As for the parameter F, we found that larger values were better, so we set the diagonal elements of F to $100,000$. The parameters ${Q}_{j}$ penalize the intermediate error values, whereas F penalizes the terminal error at n. Since the terminal error better reflects the original BIP optimality and infeasibility measures, it makes intuitive sense to set ${Q}_{j}=0$ and F large.

Values for the weighting factor ω ranged from $0.5$ to $0.9$ in our exploratory tests, and the best results were typically obtained for ω between $0.5$ and $0.6$.

CPLEX ran very quickly and always found an optimal solution; branch-and-bound in MATLAB was slower and found a feasible solution only for *enigma*, *air01* and *air03*; our method in MATLAB ran slower than CPLEX but generally faster than branch-and-bound in MATLAB. Even though our numerical results are "worse" than those of CPLEX, our methodology has potential for extension with polynomial computational complexity.

## 4. Summary and Conclusions

The meta-control algorithm for approximately solving large-scale BIPs shows much promise because the computational complexity is linear in n (the number of variables) and polynomial in m (the number of constraints), specifically on the order of $O(n{m}^{3})$. An LQT approach is suggested by the result in Theorem 1, which proves the existence of an optimal binary solution to the LQT problem. We provide numerical results with experimentally chosen parameter values that demonstrate the effectiveness of our approach.

In our future research, we will develop an auxiliary iterative method that can provide an explicit algorithm for detecting valid parameter values automatically and investigate other ways to integrate the quantization into the meta-control algorithm to improve the performance of this algorithm. We will also develop a stochastic decomposition method to reduce the computation time.

## Acknowledgements

This research is sponsored, in part, by the National Science Foundation through Grant CMMI-0908317.

## Conflicts of Interest

The authors declare no conflict of interest.

## Appendix

We solve for $\widehat{u}_j$ in Problem 4 using a dynamic programming approach. We write the cost-to-go equation as

$$V(e_j, j) = \min_{u_j}\left\{ \frac{1}{2} e_j^T Q_j e_j + \frac{1}{2} u_j(u_j-1)R_j + V(e_{j+1}, j+1) \right\}$$

with $V(e_n, n) = \frac{1}{2} e_n^T F e_n$, and equate it to the Riccati form

$$V(e_j, j) = \frac{1}{2} e_j^T \Sigma_j e_j + e_j^T \Psi_j + \Omega_j$$

where $\Sigma_j$ is a symmetric positive-definite $(m+1)\times(m+1)$ matrix, $\Psi_j$ is a positive $(m+1)\times 1$ vector, and $\Omega_j$ is a positive scalar.

Combining Equations (25), (26) and the dynamics in Equation (22), we have

$$\begin{aligned} V(e_j, j) = \min_{u_j}\Big\{ & \frac{1}{2} e_j^T Q_j e_j + \frac{1}{2} u_j(u_j-1)R_j + \frac{1}{2}\left(e_j + a_j u_j\right)^T \Sigma_{j+1}\left(e_j + a_j u_j\right) \\ & + \left(e_j + a_j u_j\right)^T \Psi_{j+1} + \Omega_{j+1} \Big\} \end{aligned}$$

To minimize this expression, we isolate the terms involving $u_j$,

$$\frac{1}{2} u_j(u_j-1)R_j + \frac{1}{2} u_j^2 a_j^T \Sigma_{j+1} a_j + u_j a_j^T \Sigma_{j+1} e_j + u_j a_j^T \Psi_{j+1}$$

take the derivative with respect to $u_j$, and set it to zero:

$$\left(u_j - \frac{1}{2}\right)R_j + a_j^T \Sigma_{j+1} a_j u_j + a_j^T \Sigma_{j+1} e_j + a_j^T \Psi_{j+1} = 0$$

This yields the solution $u_j$ for the optimal control

$$\widehat{u}_j = \frac{\frac{1}{2} R_j - a_j^T \Sigma_{j+1} e_j - a_j^T \Psi_{j+1}}{R_j + a_j^T \Sigma_{j+1} a_j}$$

To simplify notation, we let

$$S_j = \frac{-a_j^T \Sigma_{j+1}}{R_j + a_j^T \Sigma_{j+1} a_j}$$

$$\delta_j = \frac{\frac{1}{2} R_j - a_j^T \Psi_{j+1}}{R_j + a_j^T \Sigma_{j+1} a_j}$$

so that we can now write

$$\widehat{u}_j = S_j e_j + \delta_j$$

We equate the Riccati form Equation (26) with the value function in Equation (27) evaluated at ${\widehat{u}}_{j}$ from Equation (31), yielding

$$\begin{array}{cc}\hfill \frac{1}{2}{e}_{j}^{T}{\Sigma}_{j}{e}_{j}+{e}_{j}^{T}{\Psi}_{j}+{\Omega}_{j}=& \phantom{\rule{5.0pt}{0ex}}\frac{1}{2}{e}_{j}^{T}{Q}_{j}{e}_{j}+\frac{1}{2}\left({S}_{j}{e}_{j}+{\delta}_{j}\right)\left({S}_{j}{e}_{j}+{\delta}_{j}-1\right){R}_{j}\hfill \\ & +\frac{1}{2}{\left({e}_{j}+{a}_{j}({S}_{j}{e}_{j}+{\delta}_{j})\right)}^{T}{\Sigma}_{j+1}\left({e}_{j}+{a}_{j}({S}_{j}{e}_{j}+{\delta}_{j})\right)\hfill \\ & +{\left({e}_{j}+{a}_{j}({S}_{j}{e}_{j}+{\delta}_{j})\right)}^{T}{\Psi}_{j+1}+{\Omega}_{j+1}\hfill \end{array}$$

We now solve for $\Sigma_j$ and $\Psi_j$ by separating the quadratic terms from the linear terms in $e_j$. Isolating the quadratic terms in $e_j$, we have

$$\frac{1}{2} e_j^T \Sigma_j e_j = \frac{1}{2} e_j^T Q_j e_j + \frac{1}{2} e_j^T S_j^T R_j S_j e_j + \frac{1}{2} e_j^T \left(I + a_j S_j\right)^T \Sigma_{j+1} \left(I + a_j S_j\right) e_j$$

which yields the Riccati equation corresponding to $\Sigma_j$:

$$\Sigma_j = Q_j + S_j^T R_j S_j + \left(I + a_j S_j\right)^T \Sigma_{j+1} \left(I + a_j S_j\right)$$

Isolating the linear terms in $e_j$, we have

$$e_j^T \Psi_j = e_j^T S_j^T \left(\delta_j - \frac{1}{2}\right) R_j + e_j^T \left(I + a_j S_j\right)^T \Sigma_{j+1} a_j \delta_j + e_j^T \left(I + a_j S_j\right)^T \Psi_{j+1}$$

and factoring out $e_j^T$, the tracking equation for $\Psi_j$ is

$$\Psi_j = S_j^T \left(\delta_j - \frac{1}{2}\right) R_j + \left(I + a_j S_j\right)^T \Sigma_{j+1} a_j \delta_j + \left(I + a_j S_j\right)^T \Psi_{j+1}$$

Therefore, $\Sigma_j$ and $\Psi_j$ can be computed backwards in time using Equations (32) and (33) from the terminal conditions $\Sigma_n = F$, $\Psi_n = 0$.
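The backward sweep can be sketched in Python as follows. This is a minimal illustration of the recursions for $S_j$, $\delta_j$, $\Sigma_j$ and $\Psi_j$, assuming a stage weight $Q$ held constant over $j$ and a scalar $R$; it is not the authors' MATLAB implementation.

```python
import numpy as np

def backward_riccati(a, Q, R, F):
    """Backward sweep: compute the feedback gains S_j and offsets delta_j
    from the terminal conditions Sigma_n = F, Psi_n = 0, using the Riccati
    recursion (Equation (32)) and tracking recursion (Equation (33)).
    a: (m+1, n) matrix of columns a_j; Q: (m+1, m+1); R: scalar;
    F: (m+1, m+1)."""
    m1, n = a.shape
    Sigma = F.copy()
    Psi = np.zeros(m1)
    S = np.zeros((n, m1))
    delta = np.zeros(n)
    I = np.eye(m1)
    for j in range(n - 1, -1, -1):
        aj = a[:, j]
        denom = R + aj @ Sigma @ aj
        S[j] = -(aj @ Sigma) / denom              # gain S_j
        delta[j] = (0.5 * R - aj @ Psi) / denom   # offset delta_j
        M = I + np.outer(aj, S[j])
        Sigma_new = Q + R * np.outer(S[j], S[j]) + M.T @ Sigma @ M  # Eq. (32)
        Psi = (S[j] * (delta[j] - 0.5) * R
               + M.T @ Sigma @ aj * delta[j]
               + M.T @ Psi)                       # Eq. (33), uses Sigma_{j+1}
        Sigma = Sigma_new
    return S, delta
```

Note that `Psi` is updated before `Sigma` is overwritten, so the tracking recursion correctly uses $\Sigma_{j+1}$.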

Given $\Sigma_j$ and $\Psi_j$, we can calculate $\widehat{u}_j$ from Equations (28), (22) and (23). To calculate $\overline{u}_j$ for our implementation with quantization, we use the same $\Sigma_j$ and $\Psi_j$, but introduce rounding to the nearest integer in Equations (28), (22) and (23) to obtain

$$\overline{u}_j = \text{int}\left[\frac{\frac{1}{2} R_j - a_j^T \Sigma_{j+1} \overline{e}_j - a_j^T \Psi_{j+1}}{R_j + a_j^T \Sigma_{j+1} a_j}\right]$$

and

$$\overline{e}_{j+1} = \text{int}\left[\overline{e}_j + a_j \overline{u}_j\right]$$

with $\overline{e}_0 = -\text{int}\left[b\right]$.
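The corresponding forward sweep, with or without step-by-step quantization, can be sketched as follows. This is an illustrative companion to the backward recursion, expressed in terms of the precomputed gains $S_j$ and offsets $\delta_j$; note that `np.rint` rounds halves to even, a minor implementation detail of the "nearest integer" operator.

```python
import numpy as np

def forward_pass(a, b, S, delta, quantized=False):
    """Forward sweep: propagate e_{j+1} = e_j + a_j u_j from e_0 = -b
    using the feedback law u_j = S_j e_j + delta_j.  With quantized=True,
    both the control and the state are rounded to the nearest integer at
    every step, giving the propagated binary control u_bar of the appendix.
    Returns the control sequence and the terminal error e_n."""
    e = -b.astype(float)
    if quantized:
        e = np.rint(e)                 # e_bar_0 = -int[b]
    n = a.shape[1]
    u = np.zeros(n)
    for j in range(n):
        u[j] = S[j] @ e + delta[j]     # u_hat_j = S_j e_j + delta_j
        if quantized:
            u[j] = np.rint(u[j])       # u_bar_j: round the control
            e = np.rint(e + a[:, j] * u[j])
        else:
            e = e + a[:, j] * u[j]
    return u, e
```

Running the pass twice, once relaxed and once quantized, supplies the pair $(\widehat{u}_j, \overline{u}_j)$ that the ω-weighted rule of Section 2 blends into the final binary solution.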

## References

1. Wolsey, L.A. Integer Programming; Wiley: New York, NY, USA, 1998.
2. Danna, E.; Fenelon, M.; Gu, Z.; Wunderling, R. Generating Multiple Solutions for Mixed Integer Programming Problems. In Integer Programming and Combinatorial Optimization; Fischetti, M., Williamson, D.P., Eds.; Springer: Berlin, Germany, 2007; pp. 280–294.
3. Jarre, F. Relating Max-Cut Problems and Binary Linear Feasibility Problems. Available online: http://www.optimization-online.org (accessed on 15 June 2013).
4. Bertsimas, D.; Tsitsiklis, J.N. Introduction to Linear Optimization; Athena Scientific: Nashua, NH, USA, 1997.
5. Mitten, L.G. Branch-and-bound methods: General formulation and properties. Oper. Res. 1970, 18, 24–34.
6. Caprara, A.; Fischetti, M. Branch-and-Cut Algorithms. In Annotated Bibliographies in Combinatorial Optimization; Wiley: Chichester, UK, 1997; pp. 45–64.
7. Barnhart, C.; Johnson, E.L.; Nemhauser, G.L.; Savelsbergh, M.W.P.; Vance, P.H. Branch-and-price: Column generation for solving huge integer programs. Oper. Res. 1998, 46, 316–329.
8. Lew, A.; Holger, M. Dynamic Programming: A Computational Tool; Springer: New York, NY, USA, 2007; Volume 38.
9. Jünger, M.; Liebling, T.; Naddef, D.; Nemhauser, G.; Pulleyblank, W.; Reinelt, G.; Rinaldi, G.; Wolsey, L. 50 Years of Integer Programming 1958–2008: From the Early Years to the State-of-the-Art; Springer: Berlin, Germany, 2009.
10. Kirkpatrick, S.; Gelatt, C.D., Jr.; Vecchi, M.P. Optimization by simulated annealing. Science 1983, 220, 671–680.
11. Zabinsky, Z.B. Stochastic Adaptive Search for Global Optimization; Kluwer Academic Publishers: Boston, MA, USA, 2003.
12. Rubinstein, R.Y.; Kroese, D.P. The Cross Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation and Machine Learning; Springer: Berlin, Germany, 2004.
13. Haupt, R.L.; Haupt, S.E. Practical Genetic Algorithms; Wiley: New York, NY, USA, 2004.
14. Shi, L.; Ólafsson, S. Nested partitions method for global optimization. Oper. Res. 2000, 48, 390–407.
15. Hoffman, K.L. Combinatorial optimization: Current successes and directions for the future. J. Comput. Appl. Math. 2000, 124, 341–360.
16. Grötschel, M.; Krumke, S.O.; Rambau, J. Online Optimization of Large Scale Systems: State of the Art; Springer: Berlin, Germany, 2001.
17. Martin, R.K. Large Scale Linear and Integer Optimization; Kluwer: Hingham, MA, USA, 1998.
18. Schrijver, A. Combinatorial Optimization: Polyhedra and Efficiency; Springer: Berlin, Germany, 2003.
19. Callegari, S.; Bizzarri, F.; Rovatti, R.; Setti, G. On the approximate solution of a class of large discrete quadratic programming problems by ΔΣ modulation: The case of circulant quadratic forms. IEEE Trans. Signal Process. 2010, 58, 6126–6139.
20. Callegari, S.; Bizzarri, F. A Heuristic Solution to the Optimisation of Flutter Control in Compression Systems (and to Some More Binary Quadratic Programming Problems) via ΔΣ Modulation Circuits. In Proceedings of the 2010 IEEE International Symposium on Circuits and Systems (ISCAS), Paris, France, 30 May–2 June 2010; pp. 1815–1818.
21. Von Haartman, K.; Kohn, W.; Zabinsky, Z.B. A meta-control algorithm for generating approximate solutions to binary programming problems. Nonlinear Anal. Hybrid Syst. 2008, 2, 1232–1244.
22. Frieden, B.R. Science from Fisher Information: A Unification; Cambridge University Press: Cambridge, UK, 2004.
23. Zhen, S.; Chen, Y.; Sastry, C.; Tas, N.C. Optimal Observation for Cyber-Physical Systems: A Fisher-Information-Matrix-Based Approach; Springer: Berlin, Germany, 2009.
24. Lewis, F.L.; Syrmos, V.L. Optimal Control; Wiley: New York, NY, USA, 1995.
25. Bertsekas, D.P. Dynamic Programming and Optimal Control, 3rd ed.; Athena Scientific: Nashua, NH, USA, 2005; Volume I.
26. Varga, R.S. Matrix Iterative Analysis; Springer: Berlin, Germany, 2000.
27. MIPLIB—Mixed Integer Problem Library. Available online: http://miplib.zib.de/ (accessed on 15 June 2013).
28. Ali, M.M.; Khompatraporn, C.; Zabinsky, Z.B. A numerical evaluation of several stochastic algorithms on selected continuous global optimization test problems. J. Glob. Optim. 2005, 31, 635–672.

© 2013 by the authors; licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution license (http://creativecommons.org/licenses/by/3.0/).