Decompositions for MPC of Linear Dynamic Systems with Activation Constraints

The interconnection of dynamic subsystems that share limited resources is found in many applications, and the control of such systems of subsystems has attracted significant attention from scientists and engineers. For the operation of such systems, model predictive control (MPC) has become a popular technique, arguably for its ability to handle complex dynamics and system constraints. The MPC algorithms found in the literature are mostly centralized, with a single controller receiving the measurement signals and computing the control outputs. However, standard MPC does not necessarily exploit the distributed structure of such interconnected subsystems. To this end, this work proposes hierarchical decompositions that split the computations between a master problem (centralized component) and a set of decoupled subproblems (distributed components) with activation constraints, which brings about organizational flexibility and distributed computation. Two general methods are considered for hierarchical control and optimization, namely Benders decomposition and outer approximation. Results are reported from a numerical analysis of the decompositions and a simulated application to energy management, in which a limited source of energy is distributed among the batteries of electric vehicles.


Introduction
Several systems found in industry and society emerge from the interconnection of dynamic subsystems that share limited resources [1,2]. Representative examples include stations for recharging electric vehicles, building energy management, and cooling-fluid distribution in buildings, among others.
For instance, consider a building with a Heating, Ventilation, and Air Conditioning (HVAC) system as an example of a resource-constrained dynamic system. In this building, the cooling fluid is a limited resource shared among the rooms (subsystems), each with its own energy demand, which depends on the number of users and the wall materials, among other factors. As an alternative, some approaches combine characteristics of decentralized and centralized control, with emphasis on decomposition strategies that enable hierarchical control, as seen in Figure 1c: a master problem coordinates the other controllers, which can therefore be treated as decoupled, each with independent dynamics. Bilevel decomposition [8], Lagrangean decomposition [9], Benders decomposition [10], and Outer Approximation [11] are examples of such approaches.
In [12], the authors presented a hierarchical decomposition approach for MPC of dynamic systems using Bilevel, Lagrangean, and Benders decompositions. The authors reported results from

Model Predictive Control
Model Predictive Control (MPC) has developed considerably over the last two decades; it is one of the few advanced control techniques that have significantly impacted industrial process control. This impact can be attributed to the fact that MPC is, perhaps, the most general way of posing the process control problem in the time domain, and to its applicability to both SISO (Single-Input Single-Output) and MIMO (Multiple-Input Multiple-Output) systems. Soft and hard constraints can be included in the formulation of the control law through the solution of optimization problems, whereby an objective function is minimized over a prediction horizon [15].
In the literature, two of the most cited MPC strategies are the Dynamic Matrix Control (DMC) [16] and Generalized Predictive Control (GPC) [17,18]. Linear models are often used; in DMC, a step response model is used and, in GPC, a transfer function model. DMC is widely used in the industry due to the ease of obtaining this type of model, but it cannot be used with unstable systems [6]. State-space models have been increasingly used in the development of predictive controllers because they allow for systematically describing multivariable and complex behaviors, and analyzing characteristics such as controllability and observability of the system. In this sense, state-space models will be used in this paper.
The flexibility of MPC allows its use to solve numerous dynamic problems found in the industry and of academic interest. Such issues have been modeled and adapted to work with the MPC methodology and its variations, such as by [19,20], who integrated the holding and priority control strategies for bus rapid transit systems in a network approach, using deterministic models, by means of MPC strategies.
The authors in [21,22] presented approaches to the economic energy management of microgrids that, at one level, aim to deliver energy optimally and, at another level, take into account the financial cost of providing the energy, or even draw power from consumers who also generate it.
In [23], a distributed MPC formulation is introduced to coordinate the computations for the control of the distributed systems. Several applications can benefit from a distributed MPC approach, particularly large-scale applications such as power systems, water distribution systems, traffic networks, manufacturing plants, and economic systems. In such applications, distributed or decentralized control schemes are desired, where local control inputs are computed using local measurements and reduced-order models of the local dynamics.
Hybrid techniques are also appearing in the literature, as shown in [24], where the authors employed a nonlinear model predictive control of an oil well with Echo State Networks (ESNs).

General Formulation
Regarding the strategy chosen for modeling the problem, the objective (or cost) function can be written in several forms to obtain the control law when using MPC. The most common objective is to minimize the error between the future output y and the desired reference w, while penalizing the control variation ∆u in the objective function. Usually, the objective function is described as follows:

J = ∑_{j=N_1}^{N_2} ‖ŷ(k+j|k) − w(k+j)‖²_Q + ∑_{j=1}^{N_u} ‖∆u(k+j−1)‖²_R, (1)

where N_1 and N_2 are the minimum and maximum prediction horizons, respectively, and N_u is the control horizon. These indexes define the limits within which it is desirable for the output to follow the reference, and when it is important to limit the control action. Certain modifications on the horizon values may be used to compensate for transmission delays and non-minimum phase [6].
By changing N_u, it is possible to adjust the time instants at which the control action is penalized. Q and R are positive definite matrices that penalize trajectory-tracking errors and control variations, respectively. All real processes are subject to constraints, which stem from economic restrictions, operational restrictions, and minimum and maximum ranges for actuators. Reference [25] states that constructive reasons, such as safety or environmental ones, can impose limits on process variables such as states and outputs. The operating conditions are generally defined by the intersection of certain constraints, so the control system will operate close to the boundaries.
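To make the cost structure concrete, the sketch below evaluates the quadratic MPC objective described above for given scalar prediction, reference, and control-increment sequences; the function name, the scalar weights q and r (the scalar analogues of the matrices Q and R), and all numeric values are illustrative assumptions, not part of the formulation itself.

```python
def mpc_cost(y_pred, w_ref, du, N1, N2, Nu, q=1.0, r=0.1):
    """Quadratic MPC cost: tracking error over [N1, N2] plus control effort over Nu.

    y_pred[j] is the predicted output at k+j, w_ref[j] the reference at k+j,
    and du[j] the control increment at k+j; q and r are illustrative scalar
    weights standing in for the matrices Q and R.
    """
    tracking = sum(q * (y_pred[j] - w_ref[j]) ** 2 for j in range(N1, N2 + 1))
    effort = sum(r * du[j] ** 2 for j in range(Nu))
    return tracking + effort
```

For instance, with q = 1 and r = 0.1, a trajectory that overshoots the reference contributes quadratically through the first sum, while aggressive control increments are discouraged by the second sum.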
The controller should anticipate these operational points and correct the input values so that the process remains stable and does not violate the operational limits. The MPC strategy is useful in predicting such violations because it is possible, within the MPC-related optimization problem, to introduce boundaries by means of constraints over the prediction horizon.
The constrained MPC solution is obtained by minimizing the objective function with convex optimization algorithms, often expressed as the minimization of a quadratic convex function (1) subject to linear constraints, which renders a quadratic programming (QP) problem. The algorithms solve problems such as the one that follows:

min J(x, u, ∆u) (2a)

subject to:

x(k+j+1|k) = A x(k+j|k) + B u(k+j|k), (2b)
y(k+j|k) = C x(k+j|k), (2c)
x(k|k) = x(k), (2d)
u_min ≤ u(k+j|k) ≤ u_max, (2e)
∆u_min ≤ ∆u(k+j|k) ≤ ∆u_max, (2f)
y_min ≤ y(k+j|k) ≤ y_max, (2g)

in which constraints (2b) and (2c) represent the system dynamics, (2e) through (2g) define lower and upper bounds on the variables, and (2d) refers to the initial conditions.

Optimization Models
A wide diversity of real-world and industrial problems is described with nonlinear models to be integrated in MPC strategies. Consequently, they become nonlinear optimization problems, and this class commonly also involves integer or discrete variables, as in integer programming. When discrete and continuous variables are mixed in a problem whose constraints and objective are linear, the problem becomes a mixed-integer linear program (MILP).
If the problem consists of a set of nonlinear functions with discrete and continuous variables, the problem is said to be mixed-integer nonlinear programming or MINLP. The coupling of the integer with the continuous domain and their associated nonlinearities renders the class of MINLP problems, which are challenging from the theoretical, algorithmic, and computational points of view [26].
MINLP fits many applications, such as process synthesis in chemical engineering and the design, scheduling, and planning of batch processes, as well as other areas such as facility location in a multi-attribute space, optimal unit allocation in an electric power system, and the planning of electric power generated by a facility. A general form of MINLP is stated as

min f(x, y) (3a)
s.t. h(x, y) = 0, (3b)
g(x, y) ≤ 0, (3c)
x ∈ X, y ∈ Y, (3d)

where X represents the set of continuous variables, and Y is the integer variable set. In some works, the set Y is named the complicating variable set since, with a fixed y, the problem becomes an easier optimization problem to be solved in x.
The set X is assumed to be a convex compact set, X = {x | x ∈ R^n, Dx ≤ d, x_L ≤ x ≤ x_U}, and the complicating variables correspond to a polyhedral set of integer points, Y = {y | y ∈ Z^m, Ay ≤ a}, which in most applications is restricted to 0-1 values, y ∈ {0, 1}^m.
Regarding MINLP problems, [10] states some particularities that may arise under certain assumptions: (a) for fixed y, problem (3) separates into a number of independent optimization problems, each involving a different subvector of x; (b) for fixed y, problem (3) assumes a well-known special structure, such as the classical transportation form, for which efficient solution procedures are available; and (c) problem (3) is not a convex program in x and y jointly, but fixing y renders it so in x.
Such situations occur in many practical applications of mathematical programming and in the literature on large-scale optimization, where the central objective is to exploit particular structures in the design of effective solution procedures. This field has been studied for more than fifty years, and methods proposed over the years for solving these problems include: (i) Branch and Bound; (ii) decompositions, such as Outer Approximation [11,27], Extended Cutting Planes [28], and Benders Decomposition (introduced by [29] and generalized by [10]); and (iii) combinations of Branch and Bound with these decompositions.
In this context, the techniques Outer Approximation and Benders decomposition are used in this work.

Outer Approximation
The main idea of the Outer Approximation (OA) method is to exploit complicating variables which, when temporarily fixed, yield a problem that is significantly easier to handle.
For instance, the authors in [30] created an MINLP problem to solve an optimal lot-sizing policy in the supply chain (SC) that has an essential role in companies applying SC management to their system, using OA. In [31], a new algorithm based on the OA algorithm was designed for the stochastic shortest-path problem, where path costs are a weighted sum of the expected cost and standard deviation cost. The authors in [32] used outer approximation with an equality relaxation and augmented penalty algorithm for optimal batch-sizing in an integrated multiproduct, multi-buyer supply chain under penalty, green and quality control policies, and a vendor-managed inventory with consignment stock agreement.
Some assumptions and modifications to the MINLP (3) are needed to simplify the application of decomposition strategies, allowing a better view of the design of the master problem and the subproblem associated with Outer Approximation and Benders decomposition.
In most applications of interest, [14] considers that the objective and constraint functions are linear in y, and that there are no nonlinear equations h(x) = 0. Under these assumptions, problem (3) becomes:

min c^T y + f(x) (4)
s.t. g(x) + By ≤ 0,
y ∈ Y, x ∈ X.
To enable the application of the method, some assumptions are made, such as the functions f(·) and g(·) being convex and differentiable. Considering a fixed y^k, a subproblem S(y^k) is associated with problem (4), which is an easier NLP to be solved:

S(y^k): min f(x) (5)
s.t. g(x) + By^k ≤ 0,
x ∈ X.
In addition, if S(y^k) is infeasible, a relaxed version F(y^k) is solved instead:

F(y^k): min γ (6)
s.t. g(x) + By^k ≤ γe,
x ∈ X, γ ≥ 0,

where e is a vector of ones of suitable size. Minimizing γ means that the relaxation of the feasible region of S is minimized. If γ^k > 0, the NLP subproblem S(y^k) associated with the MINLP (4) is infeasible for y = y^k.

The OA method proposed by [11] arises when the NLP subproblems S and F, and the MILP master problem, are solved successively in a cycle of iterations to generate the points (x^k, y^k). Even with the relaxation of the objective through the α variable, to obtain an equivalent MILP form of the MINLP (4), a first-order Taylor-series approximation of f(·) and g(·) at x^k ∈ X is performed at each iteration k:

α ≥ f(x^k) + ∇f(x^k)^T (x − x^k), (7)
0 ≥ g(x^k) + ∇g(x^k)^T (x − x^k) + By. (8)

Given the optimal solutions of K convex NLP subproblems S(y^k), k = 1, . . . , K, with points x^k, a relaxed MILP master problem of Outer Approximation is obtained:

M_OA: min c^T y + α (9a)
s.t. α ≥ f(x^k) + ∇f(x^k)^T (x − x^k), k ∈ K_feas, (9b)
g(x^k) + ∇g(x^k)^T (x − x^k) + By ≤ 0, k ∈ K, (9c)
x ∈ X, y ∈ Y, α ∈ R,

where K = K_feas ∪ K_infeas is the set of iterations, K_feas containing the iterations with feasible subproblems S(y^k) and K_infeas those requiring the feasibility subproblem F(y^k). The solution of problem (9) yields a valid lower bound to problem (4), and this bound is nondecreasing with the number of linearization points K. For these linearizations to hold while solving the problem, some conditions must be imposed when adding a cut to the feasible region:
• if x^k is strictly inside the feasible region, then the cuts should not be introduced;
• if x^k is outside the feasible region, then the feasibility subproblem F(y^k) produces a cut;
• only binding constraints should be considered in the cut. In other words, if there are infeasible subproblems, the optimal value γ*, given by problem F(y^k), should be met at equality on the left-hand side of the constraints.
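As an illustration of this cycle of subproblems and master problems, the sketch below runs Outer Approximation on a hypothetical toy MINLP, min x² + 3y subject to x ≥ 2 − 2y, 0 ≤ x ≤ 3, y ∈ {0, 1}, chosen so that the NLP subproblem has a closed-form solution; enumerating y and the breakpoints of the piecewise-linear underestimator stands in for a real MILP solver, and no infeasible subproblem arises in this instance, so only optimality linearizations are generated. All names and values are illustrative assumptions.

```python
def outer_approximation(tol=1e-6, max_iter=20):
    """OA on a toy MINLP: min x^2 + 3y  s.t.  x >= 2 - 2y, 0 <= x <= 3, y in {0,1}."""
    f, df = (lambda x: x * x), (lambda x: 2.0 * x)
    cuts = []                          # Taylor cuts: alpha >= c0 + c1*x
    ub, best, y = float("inf"), None, 0
    for _ in range(max_iter):
        # NLP subproblem S(y): fix y; the minimizer of x^2 on the interval is
        # the projection of 0 onto [max(0, 2 - 2y), 3] (closed form).
        xk = min(max(0.0, 2.0 - 2.0 * y), 3.0)
        if f(xk) + 3 * y < ub:
            ub, best = f(xk) + 3 * y, (xk, y)
        cuts.append((f(xk) - df(xk) * xk, df(xk)))   # linearize f at xk
        # MILP master: enumerate y; minimize the piecewise-linear
        # underestimator max of cuts over the feasible x interval.
        lb, y_next = float("inf"), y
        for yc in (0, 1):
            lo = max(0.0, 2.0 - 2.0 * yc)
            xs = {lo, 3.0} | {(b0 - a0) / (a1 - b1)
                              for a0, a1 in cuts for b0, b1 in cuts if a1 != b1}
            val = min(max(c0 + c1 * x for c0, c1 in cuts) + 3 * yc
                      for x in xs if lo <= x <= 3.0)
            if val < lb:
                lb, y_next = val, yc
        y = y_next
        if ub - lb <= tol:             # master lower bound meets incumbent
            break
    return best, ub
```

On this instance, the method converges in two iterations to y = 1, x = 0 with optimal value 3, the master lower bound rising monotonically to meet the incumbent upper bound.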

Benders Decomposition
Benders Decomposition (BD) is a method that decomposes a problem into several simpler subproblems and then solves a master problem, adding cuts to the feasible region. Given that Benders decomposition is similar to the Outer Approximation method, we show how the Benders formulation can be derived from the OA formulation. As stated in [33], the Benders decomposition method is based on a sequence of projections, outer linearizations, and relaxation operations. The model is first projected onto the subspace defined by the set of complicating variables. The resulting formulation is then dualized, and the associated extreme rays and points respectively define the feasibility requirements (feasibility cuts) and the projected costs (optimality cuts) of the complicating variables [34,35].
Hence, the Benders decomposition method solves the equivalent model by applying feasibility and optimality cuts to a relaxation, yielding a master problem and a subproblem, which are iteratively solved to guide the search process and generate the violated cuts, respectively.
For instance, the authors in [36] used Benders decomposition in a production planning problem in which several individual factories collaborate despite having different objectives; Reference [37] presents a production routing problem (PRP) that deals with the distribution of a single product from a production plant to several customers, using limited-capacity vehicles, in a discrete and finite time horizon; and, in [38], an algorithm based on Benders decomposition is designed to solve an optimal energy flow problem with security restrictions.
To obtain the Benders decomposition formulation, some steps have to be performed to develop a dual representation in y from the Outer Approximation at y^k given in Equation (9). Once the convex NLP (5) is solved, let μ^k ≥ 0 be the optimal Lagrange multiplier of g(x) + By^k ≤ 0. Thus, if constraint (9c) is premultiplied by μ^k and the term with ∇g(x^k) is isolated, we obtain

(μ^k)^T ∇g(x^k)^T (x − x^k) ≤ −(μ^k)^T (g(x^k) + By). (10)

From the subproblem S(y^k) (5), the KKT (Karush-Kuhn-Tucker) stationarity condition [12] leads to

∇f(x^k) + ∇g(x^k) μ^k = 0. (11)

If Equation (11) is postmultiplied by (x − x^k),

∇f(x^k)^T (x − x^k) = −(μ^k)^T ∇g(x^k)^T (x − x^k). (12)

From Equations (10) and (12),

∇f(x^k)^T (x − x^k) ≥ (μ^k)^T (g(x^k) + By), (13)

which, substituted into constraint (9b), results in

α ≥ f(x^k) + (μ^k)^T (g(x^k) + By), (14)

which is called the Benders cut, produced when the subproblem S(y^k) (5) is feasible. This inequality is known as the Benders optimality cut. If problem S(y^k) is infeasible for y^k, a feasibility cut is produced by

(μ^k)^T (g(x^k) + By) ≤ 0, (15)

where μ^k and x^k are obtained by solving F(y^k). Therefore, after these transformations, the reduced MILP master problem for Benders decomposition is stated as:

M_GB: min c^T y + α (16)
s.t. α ≥ f(x^k) + (μ^k)^T (g(x^k) + By), k ∈ K_feas,
(μ^k)^T (g(x^k) + By) ≤ 0, k ∈ K_infeas,
y ∈ Y, α ∈ R,

where K_feas is the set of iteration indices at which the subproblem S(·) is feasible, whereas K_infeas is the set of indices for which the infeasibility subproblem F(·) had to be solved. Reference [14] offers some remarks on the similarities between Outer Approximation and Benders decomposition:
• M_OA and M_GB are MILPs that accumulate linearizations as iterations proceed;
• Outer Approximation predicts stronger lower bounds than Benders decomposition;
• Outer Approximation requires fewer iterations;
• the master problem in Benders decomposition is much smaller.
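For comparison, the sketch below applies Benders decomposition to a hypothetical toy MINLP of this shape, min x² + 3y subject to g(x) + By ≤ 0 with g(x) = 2 − x and B = −2, 0 ≤ x ≤ 3, y ∈ {0, 1}; the closed-form subproblem solution and its Lagrange multiplier stand in for an NLP solver, and all names and values are illustrative assumptions. Note that, unlike in Outer Approximation, the accumulated cuts involve only the pair (y, α), which is the sense in which the Benders master problem is much smaller.

```python
def benders(tol=1e-6, max_iter=20):
    """Benders on a toy MINLP: min x^2 + 3y  s.t.  (2 - x) - 2y <= 0,
    0 <= x <= 3, y in {0, 1}  (i.e., g(x) = 2 - x, B = -2)."""
    cuts = []                          # optimality cuts: alpha >= a0 + a1*y
    ub, best, y = float("inf"), None, 0
    for _ in range(max_iter):
        # subproblem S(y), closed form: project 0 onto [max(0, 2 - 2y), 3]
        xk = min(max(0.0, 2.0 - 2.0 * y), 3.0)
        mu = 2.0 * xk                  # multiplier: KKT stationarity 2x - mu = 0
        if xk * xk + 3 * y < ub:
            ub, best = xk * xk + 3 * y, (xk, y)
        # Benders cut alpha >= f(xk) + mu*B*(y' - y), i.e. a0 + a1*y' with B = -2
        cuts.append((xk * xk + 2.0 * mu * y, -2.0 * mu))
        # master involves only (y, alpha): enumerate y in {0, 1}
        lb, y = min((max(a0 + a1 * yc for a0, a1 in cuts) + 3 * yc, yc)
                    for yc in (0, 1))
        if ub - lb <= tol:
            break
    return best, ub
```

On this instance, the cuts α ≥ 4 − 8y and α ≥ 0 close the gap in two iterations, reaching y = 1, x = 0 with optimal value 3, the same solution found by the OA cycle but with cuts that depend on y alone.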

Decompositions for MPC of Resource-Constrained Dynamic Systems with Activation Constraints
In this section, the formulation for the MPC of a class of resource-constrained dynamic systems is introduced and, later, hierarchical decomposition strategies are formulated, namely Benders Decomposition and Outer Approximation. The hierarchical formulations presented here advance the ones previously presented in [12] by considering activation/deactivation constraints. Building on classic decomposition techniques from the literature, the two hierarchical formulations proposed here are especially useful for distributed and smart systems, in which the central control (master problem) does not need all the information from the subproblems to coordinate the energy distribution (i.e., a charging station for electric vehicles does not need to know specific information about a car; it only needs to manage the distribution of resources).
MPC refers to a broad range of control methods that make explicit use of a process model to obtain the control signals by optimizing an objective function over a prediction horizon [6]. At the current instant k, the optimization produces an optimal control sequence, but only the first control signal is applied to the process. At instant k + 1, with the measurements updated, the process is repeated over the prediction horizon.
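The receding-horizon principle above can be sketched for a hypothetical scalar plant x(k+1) = a·x(k) + b·u(k) with a one-step horizon, so that the optimal control has a closed form; the function name, the plant parameters a and b, and the weight r are illustrative assumptions, not part of the formulation in this paper.

```python
def receding_horizon(x0, w, a=0.9, b=0.5, steps=5, r=0.01):
    """Receding-horizon loop for a scalar system x(k+1) = a*x(k) + b*u(k).

    At each instant, the one-step-ahead cost (x(k+1) - w)^2 + r*u^2 is
    minimized in closed form, and only the first (here, the only) control of
    the optimal sequence is applied before re-measuring the state.
    """
    x, traj = x0, []
    for _ in range(steps):
        # d/du [(a*x + b*u - w)^2 + r*u^2] = 0  =>  u = b*(w - a*x)/(b^2 + r)
        u = b * (w - a * x) / (b * b + r)
        x = a * x + b * u              # apply the first control, then repeat
        traj.append(x)
    return traj
```

Starting from x(0) = 0 with reference w = 1, the closed loop drives the state toward the reference within a few samples, illustrating how re-optimizing at each instant yields feedback.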
Let M = {1, . . . , M} be the set of subsystems, R = {1, . . . , R} be the set of resources, N_u = {0, . . . , N_u − 1} define the control horizon, and N_1 and N_2 establish the minimum and maximum prediction horizons for the outputs. The MPC problem of interest, hereafter problem P (17), minimizes the sum of the subsystems' tracking and control costs while, for all m ∈ M, being subject to the local dynamics, activation, and shared-resource constraints detailed below.

Optimality and Feasibility Subproblems
Here, we begin by introducing the subproblem structure, which is the same for both Benders Decomposition and Outer Approximation; the main distinction between BD and OA regards the master problem and cutting planes. Let ss_m = (ss_{r,m}(k+j) : j = 0, . . . , N_2 − 1, r ∈ R) be a vector with the amount of each resource r ∈ R allocated at each time k + j for a subsystem m, and let ss = (ss_m : m ∈ M) be the vector with all resource allocations. Likewise, let β_m = (δ_m(k+j) : j ∈ N_u) be the vector with activation/deactivation variables for subsystem m over the control horizon, and β = (β_m : m ∈ M).
For a feasible resource allocation ss (i.e., ∑_{m∈M} ss_{r,m}(k+j) ≤ s_max_r(k+j) for all r ∈ R and j = 0, . . . , N_2 − 1) and a binary vector β with activation decisions, the optimality subproblem O(ss, β) minimizes the subsystems' objectives subject to their local constraints, where: (ii) P_m and Q_m are suitable matrices that define constraint (17i), and R_m and S_m represent constraint (17k); (iii) in particular, for the problem of concern, S_m is the identity matrix and R_m z_m is effectively R_m u_m, since only the terms s_{r,m}^T u_m(k+j) are needed in the constraint; and (iv) the vector functions h_m and g_m represent the equalities (17b)-(17f) and the inequalities (17g) and (17h), respectively.
Notice that o(ss, β) induces an upper bound for a feasible (ss, β), that is, a vector ss satisfying constraints (17k) and a vector β satisfying constraints (17i), which render problem O(ss, β) feasible. Clearly, o(ss, β) can be computed in parallel as follows:

o(ss, β) = ∑_{m∈M} o_m(ss_m, β_m),

for which o_m(ss_m, β_m) is the optimal value of the subproblem O_m(ss_m, β_m) of subsystem m. For an infeasible resource allocation and combination of binary variables (ss, β), a feasibility subproblem F(ss, β) is to be solved instead, with e_m = (1, 1, . . . , 1) being a vector of suitable dimension and γ a nonnegative scalar. The optimal γ can be obtained by solving an auxiliary subproblem only for the infeasible O_m. Let M_infeas = {m ∈ M : O_m(ss_m, β_m) is infeasible}. Then, F(ss, β) is solved by taking

f(ss, β) = max_{m∈M_infeas} f_m(ss_m, β_m),

and, for all m ∈ M_infeas, solving the problem F_m(ss_m, β_m) that minimizes γ_m subject to the relaxed local constraints. In fact, if (ss_m, β_m) is feasible for O_m, then the corresponding F_m will have an optimal value f_m(ss_m, β_m) = 0, and the feasibility subproblem does not have to be solved.
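The separable structure of the optimality subproblem can be illustrated with a hypothetical toy in which each subsystem solves a box-constrained scalar quadratic program with a closed-form (projection) solution, so that each term of the sum can be evaluated independently, e.g., in parallel; the function and parameter names below are assumptions for illustration, not the paper's subproblem.

```python
def evaluate_allocation(ss, demand):
    """Evaluate o(ss) = sum over m of o_m(ss_m) for a toy separable problem.

    Each subsystem m wants a control u close to demand[m] but is capped by
    its allocation ss[m]:  o_m(ss_m) = min_u (u - demand[m])^2
    subject to 0 <= u <= ss[m].  The solution is a projection, so each term
    is computed independently of the others (hence parallelizable).
    """
    total = 0.0
    for s_m, d_m in zip(ss, demand):
        u = min(max(d_m, 0.0), s_m)    # project the ideal control onto [0, ss_m]
        total += (u - d_m) ** 2
    return total
```

Here, a subsystem whose allocation covers its demand contributes zero cost, while an under-allocated subsystem contributes the squared shortfall; the master can therefore probe allocations by summing independently computed subsystem values.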

Master Problem of the Benders Decomposition
At iteration p of the Benders algorithm, let ss^(p) be the resource allocation vector, β^(p) the vector of binary variables, and assume that O(ss^(p), β^(p)) is feasible. Solving O(ss^(p), β^(p)) yields a Benders optimality cut on α_B, with α_B being the lower bound on the overall objective of problem P given by Equation (17). Due to the complementarity conditions, the local constraints (20b) and (20c) do not take part in the cutting plane. The decision space of the Benders master problem consists of the vector (ss, β), with resource allocation and binary decisions, and the lower bound α_B. However, if (ss^(p), β^(p)) is an infeasible vector at iteration p, then let M^(p) ⊂ M be the subset of subproblems m such that γ_m = f(ss, β); solving F(ss, β) renders a Benders feasibility cut, in which the Lagrange multipliers μ^(p) are obtained from the feasibility subproblems. At iteration p, an optimality cut is thus obtained by successfully solving O(ss^(p), β^(p)), or else a feasibility cut by solving F(ss^(p), β^(p)). Let O^(p) and F^(p) be the index sets of the iterations at which an optimality and a feasibility cut was produced, respectively. The Benders master problem at iteration p then minimizes α_B over (ss, β, α_B), subject to the accumulated optimality and feasibility cuts and the resource constraints. Algorithm 1 formalizes the Benders decomposition; it does not require a feasible starting point, since it can produce a feasible solution, if one exists, with the aid of the feasibility cuts.

Outer Approximation Master Problem
The problem of concern (17) is an MIQP in which all constraints are affine and only the objective is nonlinear (convex). By applying the OA algorithm, the MPC problem can be solved with an MILP algorithm (for the master) and a set of QPs (for the subproblems). Put another way, the constraints appearing in the master are the same constraints of the baseline problem (17), whereas the objective is iteratively approximated with linear underestimators. This means that the OA algorithm will not produce infeasible iterates for the subproblems, and only optimality cuts will be generated.
At iteration p, let ss^(p) be the resource allocation vector, β^(p) the vector of binary variables, and assume that O(ss^(p), β^(p)) is feasible. Then, using the optimal solution z^(p) = (z_m^(p) : m ∈ M) of the optimality subproblem (18), it is possible to create a linear approximation of the convex objective f, in which ∇f_m is the gradient of the objective function of subsystem m. Since the equality h(·) and inequality g(·) constraints are all affine, there is no need to linearize the corresponding vector functions in the master problem. The OA master problem at iteration p then minimizes the sum of the linear underestimators of the subsystem objectives, subject to the constraints of the baseline problem for m = 1, . . . , M. Algorithm 2 formalizes the Outer Approximation.

Computational Analysis
In this section, the resource-constrained MPC formulation with the addition of activation/deactivation constraints on the control variables is validated. In this sense, the problem in its centralized form is solved for benchmark purposes; then, Outer Approximation and Benders Decomposition are applied to the baseline problem. In other words, the problem is reformulated into a hierarchical structure according to both decomposition strategies. This section is divided into two parts: (i) numerical experiments using synthetic instances, and (ii) an example problem of charging batteries to illustrate the use of the presented decompositions in a practical application.

Numerical Experiments
First, numerical analyses were performed on synthetic instances to illustrate and give insight into solving these problems in a distributed fashion. The data and procedures followed to generate these problems are detailed in Table 1, and Table 2 reports the size of the proposed problems. For a given number of subsystems and a stipulated horizon, the increase in the number of variables in the OA master problem and the subproblems is significant compared to Benders decomposition. The number of variables is summed in each column, and the column "Binary vars" contains the number of {0, 1} variables in each problem. Notice that, given the generality of the formulation presented here, the numerical results for varying numbers of subsystems and prediction horizons are analogous to, for example, increasing the number of electric vehicles in a charging station or the number of HVACs in a building. In addition, the difficulty of the optimization problem increases with the number of binary variables. The algorithms were implemented in the Julia programming language [39], an open-source, general-purpose language developed at the Massachusetts Institute of Technology (MIT) in 2012. Many of Julia's features are well suited to numerical analysis and computational applications: it is dynamically typed, feels like a scripting language, and has good support for interactive use, with a community that has registered over 3000 packages, including various mathematical libraries and data-manipulation tools.
The algorithm was implemented using the modeling language for mathematical programming, JuMP [40], which offers a high-level interface with solvers like IPOPT [41], Gurobi, Artelys KNITRO, IBM CPLEX, and many others, for solving optimization problems.
The numerical experiments were performed on a computer with an Intel® Xeon™ CPU E5-2665 (2.40 GHz and 20 MB of cache), with 8 cores and 16 threads and 40 GB of RAM, in a Linux environment.
The Gurobi solver, version 9.0.1, was used with a tolerance of 10^−6 for the master and subproblems of both decomposition strategies, Benders and Outer Approximation. The relative gap between LB and UB was set to 10^−2 %, as defined below:

gap = (UB − LB)/UB · 100 (%). (29)

According to [14], the Outer Approximation method yields a tighter lower bound than the one produced by the master problem of Benders Decomposition. In our experiments, the tighter bounding procedure did not slow down the OA algorithm, which converged faster than the Benders decomposition.
For the problems in which the algorithms did not reach the tolerance defined by the relative gap, the Outer Approximation method was closer to the tolerance than the Benders decomposition when the CPU limit of 3600 s was reached; this behavior confirms that the OA algorithm produces tighter bounds than the Benders decomposition.
Comparing the problem sizes and the number of binary variables in Table 2 with the solution times in Table 3, it can be noticed that the number of binary variables makes the solution of the problems more challenging. Table 2 reports the number of variables of the synthetic problems with binary variables for the analysis of the decomposition strategies; column "MP" stands for the number of variables in the master problem, "Binary vars" corresponds to the binary variables of the master problem, and "SP" is the number of variables in the subproblem.

Figures 2 and 3 show the trajectories of the upper bound UB and lower bound LB generated by the decomposition strategies when applied to the MPC problem with resource and activation constraints, for the case with M = 8 subsystems and prediction and control horizons of length T = 6. Again, it is possible to see that the Outer Approximation bounds are much closer and converge significantly faster than those of the Benders decomposition.

Another interesting aspect can be observed by analyzing the convergence of both methods for varying lengths T of the prediction horizon. Figure 4 illustrates the slow convergence of the Benders Decomposition, especially regarding the lower bound. Figure 5 illustrates the convergence of the Outer Approximation, where slower convergence can also be noticed for the lower bound. Overall, on the set of test problems, both decompositions were able to obtain nearly optimal and globally optimal solutions, with Outer Approximation performing better and reaching the global optimum faster than Benders Decomposition on the synthetic problems proposed; Benders decomposition performed worse on problems with more subsystems or longer horizons. In all cases, the results show that the decomposition algorithms produce iterates converging to the same global optimum obtained by their centralized counterpart.

Finally, the results reported in the experiments show that both decomposition strategies can be effective. They allowed the baseline MPC problem to be decomposed into a set of subproblems, whereby the master handles the binary variables (for the activation of control signals) and resource allocations, whereas the subproblems consist of small quadratic programs. The decompositions enabled a distributed solution of the MPC problem to achieve nearly optimal solutions for a given small tolerance. In other words, both hierarchical decomposition formulations can be directly applied to energy management problems under limited resources. The next subsection illustrates the application of the framework to a battery charging problem in electric vehicles.

Batteries Charging with Activation Constraints
This subsection presents a battery charging problem that illustrates the use of the presented decompositions with activation constraints. Consider an electric car charging station in which each car is represented by an independent subsystem (subproblem) that reports to the central station (master) for resource allocation. Each vehicle has a state of charge (SoC) associated with its battery, which can be modeled as in [42] by

SoC(k + 1) = SoC(k) + (100 η T_s / Θ) i(k),    (30)

in which i(k) is the current [A] applied at instant k, whose sign determines whether the battery is charging or discharging and also influences the value of the charging efficiency η; T_s is the sampling period [h]; Θ is the battery charge capacity [Ah]; and SoC varies within the range [0, 100] (%), from depleted to fully charged. The structure of the battery charging station and the system behavior are illustrated in Figure 6. The MPC problem (17) with limited resource constraints can thus be employed in the management of battery charging, using the model given by Equation (30). To demonstrate a practical application, problem P can be recast as follows, for r = 1, . . . , R and j = 0, . . . , N_2 − 1:

where:
• SoC_m(k + j|k) is the state-of-charge prediction for time k + j, computed with the information available up to time k;
• w_m(k + j) is the desired SoC trajectory;
• u_m(k + j|k) is the predicted charging current [A] to be applied to the battery, and ∆u_m(k + j|k) is the predicted current variation;
• Θ_m is the battery charge capacity of vehicle m, given in [Ah];
• δ_m(k + j|k) ∈ {0, 1} is the activation variable used for turning the subsystems on and off;
• Q_m = Q_m^T and W_m = W_m^T are positive definite matrices that penalize the trajectory-tracking errors and the control variation, respectively;
• SoC_m(k) and u_m(k − 1) are known values giving the initial conditions.

In this work, only battery charging is considered, so the current signal is non-negative and, therefore, the charging efficiency η is taken as constant for all vehicles. As shown in Equation (31b), the system output y_m is the value of the state SoC_m; we thus assume the state is directly measured, since otherwise a state observer or Kalman filter would be required [43].
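As a minimal numerical sketch of the SoC model, assuming the standard coulomb-counting recursion SoC(k + 1) = SoC(k) + 100 η T_s i(k)/Θ with T_s in hours (the function name and default values are illustrative, not from the paper):

```python
def soc_step(soc, i, eta=0.98, theta=100.0, ts=5 / 60):
    """One step of the coulomb-counting SoC model: soc in [0, 100] (%),
    charging current i in [A], efficiency eta, capacity theta in [Ah],
    sampling period ts in hours; SoC is clamped at 100%."""
    return min(soc + 100.0 * eta * ts * i / theta, 100.0)
```

For example, one 5-minute step at 50 A raises a 100 Ah battery's SoC by 100 · 0.98 · (1/12) · 50/100 ≈ 4.08 percentage points.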
The objective of the problem defined in (31) is to charge the batteries: by setting a reference SoC of 100% for all batteries, the controller minimizes the reference-tracking error and the control effort applied to the subsystems.
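A scalar sketch of the per-vehicle objective in (31) is given below; the scalar weights q and r stand in for the matrices Q_m and W_m, and all values are illustrative assumptions:

```python
def vehicle_cost(soc_pred, soc_ref, du, q=1.0, r=0.1):
    """Tracking-error plus control-variation cost for one vehicle over
    the prediction horizon: q penalizes deviations of the predicted SoC
    from the reference, r penalizes current variations (control moves)."""
    tracking = sum(q * (soc_ref - s) ** 2 for s in soc_pred)
    effort = sum(r * d ** 2 for d in du)
    return tracking + effort
```

For a vehicle whose predicted SoC trajectory is [90, 95, 100] against a 100% reference, with current moves [10, 5, 0], the cost is 125 + 12.5 = 137.5.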

Application to a Sample Instance
Consider a battery charging problem with M = 4 vehicles, each with an identical battery of charge capacity Θ_m = 100 [Ah] and charging efficiency η = 0.98. The sampling time was T_s = 5 [min], the prediction and control horizons were T = 5 samples, the maximum resource was s_max = 100 [A] at all instants, and the current-injection limits and their variations were u_max = ∆u_max = 50 [A], u_min = 20 [A], and ∆u_min = −50 [A].
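The instance above can be simulated end to end with a simple lowest-SoC-first allocation heuristic, shown below only to illustrate the resource and current limits; it is a stand-in for the decomposition-based controller, and the initial SoCs are assumed:

```python
def allocate(socs, s_max=100.0, u_max=50.0, u_min=20.0):
    """Heuristic: serve vehicles from lowest SoC upward; each served
    vehicle receives at most u_max amps, no vehicle is served with less
    than u_min, and the total current never exceeds s_max."""
    u = [0.0] * len(socs)
    budget = s_max
    for m in sorted(range(len(socs)), key=lambda m: socs[m]):
        if socs[m] >= 100.0 or budget < u_min:
            continue
        u[m] = min(u_max, budget)
        budget -= u[m]
    return u

# Simulate M = 4 vehicles (Theta = 100 Ah, eta = 0.98, Ts = 5 min)
# from assumed initial SoCs until all batteries are full.
socs, ts, eta, theta = [20.0, 40.0, 60.0, 80.0], 5 / 60, 0.98, 100.0
history = []
for _ in range(60):  # 60 steps of 5 min = 5 h, ample charging time
    u = allocate(socs)
    history.append(sum(u))
    socs = [min(s + 100.0 * eta * ts * ui / theta, 100.0)
            for s, ui in zip(socs, u)]
```

As in the reported results, this allocation favors the vehicles with the lowest SoC first while never exceeding the shared resource s_max.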
The results of applying Benders Decomposition and Outer Approximation to the battery charging instance are shown in Figure 7 for the control signals and in Figure 8 for the state of charge of the batteries. All subsystems (vehicle batteries) were charged independently using the distributed formulation, while respecting the maximum available resource and the limits imposed on the decision variables. The control actions favored the systems with lower SoC, causing them to recharge first; only after the lagging batteries reached a comparable SoC did the control actions direct the available current to the vehicles that started charging with a higher SoC. The behavior induced by the controller balances the available resource against the SoC of all vehicles, thereby achieving the primary goal of fully charging all cars. The analysis showed that the electric vehicles can be recharged in a distributed fashion, approaching the optimal behavior achieved by the centralized counterpart.
The similar behavior of the control actions corroborates the numerical results presented in Section 4.1, where both formulations converge to the global minimum within a predefined tolerance. Although Benders Decomposition and Outer Approximation behave slightly differently along the charging trajectory, they are expected to reach nearby points, since both search for the global optimum. This similarity can also be noticed in Figure 9, which shows that the objective values of the two approaches are equivalent along the iterations of the MPC.

Conclusions
In this paper, we presented hierarchical decompositions for MPC of resource-constrained dynamic systems with activation constraints on the manipulated variables. First, the MPC problem was cast in a centralized form. Then, Outer Approximation and Benders Decomposition methods were developed for the baseline problem, which was reformulated into a hierarchical structure according to each decomposition strategy. Numerical experiments were performed on synthetic instances to analyze and give insight into the efficiency of these decompositions in solving the MPC problem in a distributed fashion.
On the set of test problems, both decompositions obtained nearly optimal and globally optimal solutions, with Outer Approximation performing better and reaching the global optimum faster than Benders Decomposition, which performed worse on problems with more subsystems or longer horizons. Nevertheless, the results showed that both decomposition strategies produce iterates that converge to the same global optimum achieved by their centralized counterpart.
An application of these decomposition methods to battery charging of electric vehicles with activation constraints was reported. The results show that the recharging of electric vehicles can be coordinated in a distributed fashion, approaching the optimal behavior achieved by a centralized controller. The hierarchical structure of the problem makes Benders Decomposition and Outer Approximation well suited to the intelligent and distributed control typical of smart systems.
As future work, regularization strategies can be considered for Benders decomposition to increase convergence speed. Multi-cuts can be designed for both decomposition strategies to improve the lower bounds produced by the master problem. In addition, on the application side, the proposed hierarchical frameworks can be applied to other energy systems, possibly considering real data.