Article

Quadrotor Formation Strategies Based on Distributed Consensus and Model Predictive Controls

Department of Aerospace Engineering, Tamkang University, Tamsui, New Taipei City 25137, Taiwan
* Author to whom correspondence should be addressed.
Appl. Sci. 2018, 8(11), 2246; https://doi.org/10.3390/app8112246
Submission received: 18 October 2018 / Revised: 9 November 2018 / Accepted: 12 November 2018 / Published: 14 November 2018
(This article belongs to the Section Mechanical Engineering)

Abstract

In this study, the distributed consensus control and model predictive control (MPC)-based formation strategies for quadrotors are proposed. First, the formation-control problem is decoupled into horizontal and vertical motions. The distributed consensus control and MPC-based formation strategy are implemented in the follower’s horizontal formation control. In the horizontal motion, the leader tracks the given waypoints by simply using the MPC, and generates the desired formation trajectory for each follower based on its flight information, predicted trajectory, and the given formation pattern. On the other hand, the followers carry out the formation flight based on the proposed horizontal formation strategy and the desired formation trajectories generated by the leader. In the vertical motion, formation control is carried out using only the MPC for both the leader and the follower. Likewise, the leader tracks the desired altitude/climb rate and generates the desired formation trajectories for the followers, and the followers track the desired formation trajectories generated by the leader using the MPC. The optimization problem considered in the MPC differs for the horizontal and vertical motions. The problem is formulated as a quadratic programming (QP) problem for the horizontal motion, and as a linear quadratic tracker (LQT) for the vertical motion. Simulation of a comprehensive maneuver was carried out under a Matlab/Simulink environment to examine the performance of the proposed formation strategies.

1. Introduction

As the quadrotor becomes increasingly popular and easy to acquire, it has gained much attention from researchers and manufacturers. Along with the various well-developed control algorithms for a single quadrotor, researchers have shifted their attention toward formation-control algorithms in recent years. A survey [1] classifies the formation flying control algorithms of spacecraft into five architectures: multiple-input and multiple-output, leader–follower, virtual structure, cyclic, and behavioral. These classifications can be extended to the formation-control algorithms of the quadrotor. Among these formation-control architectures, the leader–follower architecture [2,3,4,5,6] has received the most attention from researchers.
In this study, distributed consensus control and model predictive control (MPC)-based formation strategies considering the leader–follower architecture are proposed. First, the formation-control problem is decoupled into horizontal and vertical motions. In the horizontal motion, the leader tracks the given waypoints by using the MPC, and generates the desired formation trajectory for each follower based on its flight information, predicted trajectory, and the given formation pattern. The followers carry out the formation flight based on the proposed horizontal formation strategy and the desired formation trajectories generated by the leader. The follower’s horizontal formation strategy is built on the distributed consensus control and the MPC. Moreover, the optimization problem considered in the MPC adopted by the leader’s tracking control and the follower’s formation strategy in the horizontal motion is formulated as a quadratic programming (QP) problem. In the vertical motion, the leader tracks the desired altitude and climb rate and generates the followers’ desired formation trajectories, while the followers track these trajectories using the MPC. The optimization problem considered in the MPC adopted by the leader and the follower in the vertical motion is formulated as a linear quadratic tracker (LQT).
Section 2 provides the background of this study. Both the linear and nonlinear equations of motion (EOMs) of the quadrotor are derived in Section 3. Section 4 briefly introduces the control laws that are adopted in the formation strategies. The complete formation strategies for the leader and the followers are presented in Section 5. In Section 6, the preparation for the simulation, including the gains for the control laws as well as the Matlab/Simulink environment setup (Matlab R2016b, The MathWorks, Inc., Natick, MA 01760, USA), is provided. A simulation of the comprehensive maneuver was carried out to examine the performance of the formation strategies, and the simulation results are illustrated in Section 7. Section 8 discusses the simulation results and the performance of the proposed formation strategies. The conclusions of the study are summarized in Section 9.

2. Background

Various methodologies of formation control have been proposed in recent years, for example, the backstepping method considering the nonlinear quadrotor model [7,8], linear quadratic control considering the linear quadrotor model [2], consensus control, and MPC [9]. In the field of consensus formation control, a comparative review of the latest trends is given in Reference [10]. The authors of Reference [11] provide a theoretical framework for analyzing consensus algorithms for multiagent networked systems. Furthermore, a consensus algorithm for a second-order system is proposed in Reference [12].
This study is motivated by the results in Reference [13], in which the authors propose a formation-control method using decentralized MPC and consensus-based control. First, the leader computes all of the followers’ reference trajectories using a consensus-based algorithm, and transmits the trajectories to the followers according to the proposed network structure (Figure 1a). Then, each follower tracks its reference trajectory using the MPC.
In this study, the consensus-based algorithm is moved into the follower’s formation strategy in order to minimize the data that are transmitted around the network. The distributed network structure in Figure 1b is considered. In the horizontal motion, the leader tracks the given waypoints by using the MPC, and generates the desired formation trajectory of each follower based on its flight information, predicted trajectory, and the given formation pattern. The leader transmits the desired formation trajectories to the followers but does not receive any data from the followers. Moreover, the desired formation trajectory of each follower contains only the information of where the follower should be, that is, the desired position over the predicted time horizon. In the vertical motion, the leader tracks the desired altitude and climb rate by using the MPC, and generates the desired formation trajectory for each follower. Likewise, the desired formation trajectory for each follower in the vertical motion only contains the information of the desired altitude.
The follower’s formation control in the horizontal motion is carried out based on the distributed consensus control and the MPC. Consensus control requires the information of the desired velocity, attitude, and angular velocity. However, the desired formation trajectory of each follower generated by the leader does not contain this information. Therefore, an algorithm using a finite-difference method is added to the formation strategy in the horizontal motion to generate, from the desired formation trajectory, the desired information required by the consensus control. Once the complete desired formation trajectory is generated, the formation control engages. The consensus control generates the flight paths of the follower under consideration and of the other followers that are connected to it in the proposed network structure. However, the flight paths of the other followers are not used by the proposed formation strategy of the follower under consideration. Although these paths might be valuable in cases such as communication interruption or collision avoidance, they are beyond the scope of this study. The flight path of the follower under consideration is passed to the MPC. The MPC tracks the flight path and computes control inputs based on the unconstrained QP problem derived from the LQT. In the vertical motion, the LQT requires the information of the desired vertical velocity, which is not included in the desired formation trajectory generated by the leader. Likewise, an algorithm using a finite-difference method is added to the formation strategy in the vertical motion to generate, from the desired formation trajectory, the desired information required by the LQT. Then, the LQT can compute the formation control based on the complete desired formation trajectory.

3. Dynamic Model of a Quadrotor

3.1. EOM of the Quadrotor

A quadrotor’s dynamics considering a flat Earth with an atmosphere at rest can be described by Newton–Euler formalism. The reference frames of the quadrotor are defined before EOMs are derived. The configuration, forces, and reference frames of the quadrotor that is considered in this study are illustrated in Figure 2.
In Figure 2, $I = \{\hat{i}, \hat{j}, \hat{k}\}$ denotes the inertial frame and $B = \{\hat{b}_1, \hat{b}_2, \hat{b}_3\}$ denotes the body-fixed frame of the quadrotor. Under the flat-Earth assumption, the axes $\hat{i}$, $\hat{j}$, $\hat{k}$ of frame $I$ point north, east, and downward, respectively. The origin of the body-fixed frame $B$ coincides with the center of gravity (CG) of the quadrotor at $O$. In addition, $L$ represents the arm length of the rotor measured from $O$ to the rotation axis. The weight of the quadrotor is not shown in Figure 2, but it must be considered while deriving the EOMs. $\vec{T}_i$ and $\vec{\tau}_i$ are the thrust and torque of the i-th rotor ($i = 1, 2, 3, 4$) with respect to frame $B$, respectively, and can be written as follows:
$$\vec{T}_i = -T_i\,\hat{b}_3,$$
$$\vec{\tau}_i = (-1)^i\,\tau_i\,\hat{b}_3.$$
The Newton–Euler formalism with respect to frame B is shown as follows:
$$\begin{bmatrix} m I_3 & 0_{3\times 3} \\ 0_{3\times 3} & I_J \end{bmatrix}\begin{bmatrix} \dot{V}^B \\ \dot{\omega}^B \end{bmatrix} + \begin{bmatrix} \omega^B \times (m V^B) \\ \omega^B \times (I_J\,\omega^B) \end{bmatrix} = \begin{bmatrix} F^B \\ \tau^B \end{bmatrix},$$
$$I_J = \begin{bmatrix} I_{xx} & I_{xy} & I_{xz} \\ I_{yx} & I_{yy} & I_{yz} \\ I_{zx} & I_{zy} & I_{zz} \end{bmatrix}.$$
In Equation (3), $m$ is the mass of the quadrotor, and $I_3, 0_{3\times 3} \in \mathbb{R}^{3\times 3}$ denote the identity matrix and the square zero matrix, respectively. $I_J$ is the inertia tensor of the quadrotor. The quadrotor is symmetric with respect to the xz and yz planes; therefore, $I_{xy} = I_{yx} = I_{yz} = I_{zy} = 0$. Furthermore, $I_{zx}$ and $I_{xz}$ are relatively small compared to $I_{xx}$, $I_{yy}$, and $I_{zz}$. Hence, the inertia tensor of the quadrotor becomes $I_J = \mathrm{diag}(I_{xx}, I_{yy}, I_{zz})$. $V^B = [u, v, w]^T$ and $\omega^B = [p, q, r]^T$ are the velocity and angular velocity of the quadrotor with respect to frame $B$. $F^B$ is the force acting on the quadrotor with respect to frame $B$, including the thrust of each rotor and the weight of the quadrotor. $\tau^B$ includes the torque induced by each rotor and the moment induced by the thrust of each rotor.
To obtain the overall EOMs, the transformation between the velocity in frame $I$ and frame $B$, and the transformation between the angular velocity and the rate of change of the Euler angles, must be included. The above transformations are commonly seen in the derivation of the EOMs of unmanned aerial vehicles (UAVs) and drones. More details can be found in [14,15]. Here, rotation matrix $R_{I2B}$, which transforms the coordinates of a vector from frame $I$ to frame $B$, is introduced. Rotation matrix $R_{I2B}$ is defined as follows:
$$R_{I2B} = \begin{bmatrix} C_\theta C_\psi & C_\theta S_\psi & -S_\theta \\ S_\phi S_\theta C_\psi - C_\phi S_\psi & S_\phi S_\theta S_\psi + C_\phi C_\psi & S_\phi C_\theta \\ C_\phi S_\theta C_\psi + S_\phi S_\psi & C_\phi S_\theta S_\psi - S_\phi C_\psi & C_\phi C_\theta \end{bmatrix},$$
where $S_{(\cdot)} = \sin(\cdot)$ and $C_{(\cdot)} = \cos(\cdot)$. In addition, $R_{I2B}$ is an orthogonal matrix. Therefore, the following property holds:
$$R_{I2B}^{-1} = R_{I2B}^{T} = R_{B2I}.$$
The transformation between the velocity with respect to frame $I$, $V^I = [\dot{x}, \dot{y}, \dot{z}]^T$, and the velocity with respect to frame $B$, $V^B = [u, v, w]^T$, can be written as follows:
$$V^I = R_{B2I} V^B,$$
$$\begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{z} \end{bmatrix} = \begin{bmatrix} C_\theta C_\psi & S_\phi S_\theta C_\psi - C_\phi S_\psi & C_\phi S_\theta C_\psi + S_\phi S_\psi \\ C_\theta S_\psi & S_\phi S_\theta S_\psi + C_\phi C_\psi & C_\phi S_\theta S_\psi - S_\phi C_\psi \\ -S_\theta & S_\phi C_\theta & C_\phi C_\theta \end{bmatrix}\begin{bmatrix} u \\ v \\ w \end{bmatrix},$$
where $(x, y, z)$ is the position of the quadrotor with respect to frame $I$. Additionally, the transformation between the angular velocity and the rate of change of the Euler angles is as follows:
$$\begin{bmatrix} \dot{\phi} \\ \dot{\theta} \\ \dot{\psi} \end{bmatrix} = \begin{bmatrix} 1 & S_\phi\tan\theta & C_\phi\tan\theta \\ 0 & C_\phi & -S_\phi \\ 0 & S_\phi\sec\theta & C_\phi\sec\theta \end{bmatrix}\begin{bmatrix} p \\ q \\ r \end{bmatrix}.$$
Finally, the EOMs of the quadrotor can be obtained with the above definitions and transformations. In this study, the formation control is decoupled into horizontal and vertical motions. Therefore, the EOMs of the quadrotor are arranged in a similar form.
In the horizontal motion, the states $(x, y, u, v, \theta, \phi, q, p)$ are considered, and the EOMs are arranged as follows:
$$\mathrm{Pitch\ Axis:}\quad \begin{bmatrix} \dot{x} \\ \dot{u} \\ \dot{\theta} \\ \dot{q} \end{bmatrix} = \begin{bmatrix} (C_\theta C_\psi)u + (S_\phi S_\theta C_\psi - C_\phi S_\psi)v + (C_\phi S_\theta C_\psi + S_\phi S_\psi)w \\ rv - qw - g\sin\theta \\ qC_\phi - rS_\phi \\ \dfrac{rp(I_{zz} - I_{xx})}{I_{yy}} + \dfrac{1}{I_{yy}}M_\theta \end{bmatrix},$$
$$\mathrm{Roll\ Axis:}\quad \begin{bmatrix} \dot{y} \\ \dot{v} \\ \dot{\phi} \\ \dot{p} \end{bmatrix} = \begin{bmatrix} (C_\theta S_\psi)u + (S_\phi S_\theta S_\psi + C_\phi C_\psi)v + (C_\phi S_\theta S_\psi - S_\phi C_\psi)w \\ pw - ru + g\sin\phi\cos\theta \\ p + qS_\phi\tan\theta + rC_\phi\tan\theta \\ \dfrac{qr(I_{yy} - I_{zz})}{I_{xx}} + \dfrac{1}{I_{xx}}M_\phi \end{bmatrix}.$$
On the other hand, the EOMs considering the states $(z, w, \psi, r)$ in the vertical motion are arranged as follows:
$$\begin{bmatrix} \dot{z} \\ \dot{w} \\ \dot{\psi} \\ \dot{r} \end{bmatrix} = \begin{bmatrix} (-S_\theta)u + (S_\phi C_\theta)v + (C_\phi C_\theta)w \\ qu - pv + g\cos\phi\cos\theta - \dfrac{1}{m}T_{total} \\ qS_\phi\sec\theta + rC_\phi\sec\theta \\ \dfrac{pq(I_{xx} - I_{yy})}{I_{zz}} + \dfrac{1}{I_{zz}}M_\psi \end{bmatrix}.$$
In Equations (10) and (11), $M_\theta$, $M_\phi$, $M_\psi$, and $T_{total}$ are the control inputs, defined as follows:
$$M_\phi = \frac{\sqrt{2}}{2}L(T_1 + T_2 - T_3 - T_4), \quad M_\theta = \frac{\sqrt{2}}{2}L(T_1 - T_2 - T_3 + T_4), \quad M_\psi = -\tau_1 + \tau_2 - \tau_3 + \tau_4, \quad T_{total} = T_1 + T_2 + T_3 + T_4.$$
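As an illustration of Equation (12), the mapping from the four rotor thrusts to the control inputs can be inverted to recover individual rotor commands. The following is a minimal MATLAB sketch and not part of the paper: the arm length value and the linear torque–thrust relation $\tau_i = k_M T_i$ with constant $k_M$ are assumptions made for illustration, and the yaw-moment signs follow the reconstruction of Equation (12) above.

    % Minimal sketch: invert the control mixing of Equation (12).
    % Assumed (not from the paper): arm length L and tau_i = k_M * T_i.
    L   = 0.25;                 % [m] assumed arm length
    k_M = 0.016;                % [m] assumed torque/thrust ratio
    a   = sqrt(2)/2 * L;
    % Rows map [T1 T2 T3 T4]' to [T_total; M_phi; M_theta; M_psi]
    Mix = [  1    1    1    1;
             a    a   -a   -a;
             a   -a   -a    a;
           -k_M  k_M -k_M  k_M];
    cmd = [14.0; 0.05; -0.02; 0.001];   % example [T_total; M_phi; M_theta; M_psi]
    T_rotor = Mix \ cmd;                % individual rotor thrusts [N]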

3.2. Linearization of the EOMs

The quadrotor model that is used in the consensus control and MPC is obtained by linearizing Equations (10) and (11). The small-disturbance method is used to linearize the EOMs, and the hovering condition of the quadrotor is chosen as the equilibrium reference. The equilibrium points of the states in the horizontal and vertical motions are listed in Table 1 and Table 2, respectively.
Replacing each state in Equations (10) and (11) by the sum of its equilibrium value and a small disturbance, and ignoring products of the disturbances, the linearized EOMs of the quadrotor in the horizontal and vertical motions are obtained as follows:
$$\mathrm{Pitch\ Axis:}\quad \begin{bmatrix} \Delta\dot{x} \\ \Delta\dot{u} \\ \Delta\dot{\theta} \\ \Delta\dot{q} \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & -g & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} \Delta x \\ \Delta u \\ \Delta\theta \\ \Delta q \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \\ \frac{1}{I_{yy}} \end{bmatrix}\Delta M_\theta, \qquad \mathrm{Roll\ Axis:}\quad \begin{bmatrix} \Delta\dot{y} \\ \Delta\dot{v} \\ \Delta\dot{\phi} \\ \Delta\dot{p} \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & g & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} \Delta y \\ \Delta v \\ \Delta\phi \\ \Delta p \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \\ 0 \\ \frac{1}{I_{xx}} \end{bmatrix}\Delta M_\phi,$$
$$\begin{bmatrix} \Delta\dot{z} \\ \Delta\dot{w} \\ \Delta\dot{\psi} \\ \Delta\dot{r} \end{bmatrix} = \begin{bmatrix} 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 \end{bmatrix}\begin{bmatrix} \Delta z \\ \Delta w \\ \Delta\psi \\ \Delta r \end{bmatrix} + \begin{bmatrix} 0 & 0 \\ -\frac{1}{m} & 0 \\ 0 & 0 \\ 0 & \frac{1}{I_{zz}} \end{bmatrix}\begin{bmatrix} \Delta T_{total} \\ \Delta M_\psi \end{bmatrix}.$$

4. Control Law Preliminaries

4.1. Consensus Control

The main objective of the consensus control in this study is to generate the flight path for the MPC in the follower’s horizontal formation strategy. The consensus control of an agent is generated by taking information such as the flight states of other connected agents in the network structure into consideration, and eventually all the agents reach a consensus state through the consensus control.
Consider the leader–follower architecture with one leader and $N_F$ followers; the consensus control input $u_i(k)$ of the i-th follower at time step $k$ is given as follows:
$$u_i(k) = -\sum_{j=1}^{N_F+1} a_{ij}\sum_{k_s=1}^{N_s}\beta_{k_s}\left[\hat{s}_i^{(k_s)}(k) - \hat{s}_j^{(k_s)}(k)\right], \qquad \hat{s}_i^{(k_s)}(k) = s_i^{(k_s)}(k) - d_i^{(k_s)}(k), \qquad \hat{s}_j^{(k_s)}(k) = s_j^{(k_s)}(k) - d_j^{(k_s)}(k),$$
where subscript $j$ denotes the j-th follower in the network structure other than the i-th follower; the case $j = N_F + 1$ refers to the leader. $a_{ij}$ represents the communication status between the i-th and j-th followers: if the i-th follower receives data from the j-th follower or the leader ($j = N_F + 1$), then $a_{ij} = 1$; otherwise, $a_{ij} = 0$. $\beta_{k_s}$ ($k_s = 1, 2, \ldots, N_s$) are the control gains, where $N_s$ is the total number of states. $\hat{s}_{(\cdot)}^{(k_s)}(k)$ denotes the error between the state $s_{(\cdot)}^{(k_s)}(k)$ and the reference state $d_{(\cdot)}^{(k_s)}(k)$ at time step $k$.
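A minimal MATLAB sketch of the consensus input in Equation (15) is given below. The data layout is an assumption made for illustration: s{j}{ks} and d{j}{ks} hold the ks-th state group and its reference for agent j, the leader is agent $N_F + 1$, and A is the adjacency matrix with entries $a_{ij}$.

    % Minimal sketch of Equation (15) for follower i at time step k.
    % Assumed layout: s{j}{ks}, d{j}{ks} are column vectors; A(i,j) = a_ij.
    function u = consensus_input(i, s, d, A, beta)
        Nagents = size(A, 2);            % followers plus the leader
        Ns = numel(beta);                % number of state groups
        u = zeros(size(s{i}{1}));
        for j = 1:Nagents
            if j == i || A(i, j) == 0
                continue                 % no data received from agent j
            end
            for ks = 1:Ns
                si_hat = s{i}{ks} - d{i}{ks};   % state error of follower i
                sj_hat = s{j}{ks} - d{j}{ks};   % state error of neighbour j
                u = u - beta(ks) * (si_hat - sj_hat);
            end
        end
    end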

4.2. Model Predictive Control

Model predictive control is a control algorithm based on optimal control theory and a model of the plant. MPC first obtains the optimal trajectory and control sequence by solving an optimization problem defined by the given cost function. The optimal trajectory can be seen as a prediction of the plant behavior over the predicted time horizon under the optimal control sequence. However, the plant might not behave identically to the prediction. Thus, MPC applies only the first control of the optimal control sequence to the plant. When the next measurements of the states are acquired, MPC repeats the above sequence. By doing so, the MPC algorithm works like an optimal feedback control.
The following sections show the problem statement, the solution to the LQT problem, and how to rewrite an LQT problem into a QP problem.

4.2.1. Linear Quadratic Tracker

This section shows the problem statement and the solution to the LQT. The derivation of the solution to the LQT is not covered in this section. If the readers are interested in the derivation, it can be found in Reference [16].
Consider the following discrete state-space model with n states, m controls, and q outputs:
$$x_{k+1} = A x_k + B u_k, \qquad y_k = C x_k,$$
where $A \in \mathbb{R}^{n\times n}$ is the system matrix, $B \in \mathbb{R}^{n\times m}$ is the control matrix, and $C \in \mathbb{R}^{q\times n}$ is the output matrix. $x_k \in \mathbb{R}^{n}$, $u_k \in \mathbb{R}^{m}$, and $y_k \in \mathbb{R}^{q}$ are column vectors denoting the state, the control, and the output at time step $k$ of the considered time horizon, respectively. Next, define the linear quadratic cost function based on the control and output of the discrete state-space model and the reference trajectory for the output state as follows:
$$J = \frac{1}{2}(y_{N_p} - r_{N_p})^T P\,(y_{N_p} - r_{N_p}) + \frac{1}{2}\sum_{k=0}^{N_p-1}\left[(y_k - r_k)^T Q\,(y_k - r_k) + u_k^T R\,u_k\right],$$
where $k$ ($k = 0, 1, \ldots, N_p$) denotes the time step over the time horizon, and $r_k$ ($k = 0, 1, \ldots, N_p$) is a column vector denoting the reference trajectory for the output state at time step $k$. $P$, $Q$, and $R$ are the weighting matrices. All the weighting matrices are symmetric; $P, Q \in \mathbb{R}^{q\times q}$ are positive semidefinite and $R \in \mathbb{R}^{m\times m}$ is positive definite:
$$P \succeq 0, \qquad Q \succeq 0, \qquad R \succ 0.$$
The optimal control sequence $u_k$ ($k = 0, 1, \ldots, N_p - 1$) that minimizes the cost function $J$ in Equation (17) can be obtained by solving the following equations:
$$\begin{aligned}
K_k &= (B^T S_{k+1} B + R)^{-1} B^T S_{k+1} A, \qquad S_{N_p} = C^T P C, \qquad S_k = A^T S_{k+1}(A - B K_k) + C^T Q C, \\
v_k &= (A - B K_k)^T v_{k+1} + C^T Q\, r_k, \qquad v_{N_p} = C^T P\, r_{N_p}, \qquad K_k^v = (B^T S_{k+1} B + R)^{-1} B^T, \\
u_k &= -K_k x_k + K_k^v\, v_{k+1}.
\end{aligned}$$
In Equation (19), $K_k$, $S_k$, and $K_k^v$ can be solved offline since they only involve the matrices of the discrete state-space model (16) and the weighting matrices $P$, $Q$, and $R$ given by the user. Furthermore, by solving $v_k$ backward in time, the optimal tracking control sequence can be obtained. Substituting the optimal control sequence into the discrete state-space model (16), the optimal tracking trajectory over the time horizon can be obtained.
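A minimal MATLAB sketch of the recursions in Equation (19) is shown below; it returns only the first control of the sequence, which is the input actually applied by the MPC. The function name and interface are illustrative, and r is assumed to store the output reference $r_k$ in column $k + 1$.

    % Minimal sketch of the LQT recursions in Equation (19).
    % A, B, C: discrete model; P, Q, R: weights; x0: initial state;
    % r: q x (Np+1) reference, r(:,k+1) is r_k. Returns the first control u_0.
    function u0 = lqt_first_input(A, B, C, P, Q, R, r, x0, Np)
        S = C' * P * C;                 % S_{Np}
        v = C' * P * r(:, Np + 1);      % v_{Np}
        for k = Np-1:-1:0               % backward sweep
            K  = (B' * S * B + R) \ (B' * S * A);   % K_k
            Kv = (B' * S * B + R) \ B';             % K_k^v
            if k == 0
                v1 = v;                 % v_1, needed for u_0
            end
            S = A' * S * (A - B * K) + C' * Q * C;          % S_k
            v = (A - B * K)' * v + C' * Q * r(:, k + 1);    % v_k
        end
        u0 = -K * x0 + Kv * v1;         % u_0 = -K_0 x_0 + K_0^v v_1
    end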

4.2.2. Unconstrained Quadratic Programming Problem

A quadratic programming problem minimizes a quadratic cost function subject to linear equality and/or inequality constraints on the variables in the cost function. It can be stated as follows:
$$\begin{aligned}
\mathrm{minimize}\quad & J_{QP} = \frac{1}{2}\xi^T Q_c\,\xi + c^T\xi, \\
\mathrm{subject\ to}\quad & A_c\,\xi \le b, \\
& E\,\xi = d,
\end{aligned}$$
where $J_{QP}$ denotes the cost function of the quadratic programming problem, $\xi \in \mathbb{R}^{n}$ is a column vector that contains the $n$ variables to be optimized, $Q_c \in \mathbb{R}^{n\times n}$ is a real symmetric matrix, and $c \in \mathbb{R}^{n}$ is a real-valued column vector. $A_c \in \mathbb{R}^{m\times n}$ and $b \in \mathbb{R}^{m}$ describe the inequality constraints on the variables. Similarly, $E \in \mathbb{R}^{m\times n}$ and $d \in \mathbb{R}^{m}$ describe the equality constraints on the variables.
Finding an analytical solution to a constrained quadratic programming problem is difficult, if not impossible; it is typically solved numerically by methods such as interior-point, active-set, or gradient-projection. In contrast, the solution to an unconstrained quadratic programming problem can be found simply by setting the partial derivative of the cost function with respect to each variable to zero and solving the resulting linear equations.
In formation control, our objective is to find the control input that tracks the reference trajectory. By rewriting the LQT into a quadratic programming problem, the variables to be optimized become the control input sequence. That is, we can obtain the optimal control sequence by solving linear equations directly.
To rewrite the cost function, we start by rearranging the discrete state-space model (16). Expanding the discrete state-space model with respect to time step $k$ ($k = 0, 1, \ldots, N_p - 1$):
$$\begin{aligned}
x_1 &= A x_0 + B u_0, \\
x_2 &= A x_1 + B u_1 = A(A x_0 + B u_0) + B u_1 = A^2 x_0 + A B u_0 + B u_1, \\
x_3 &= A x_2 + B u_2 = A(A^2 x_0 + A B u_0 + B u_1) + B u_2 = A^3 x_0 + A^2 B u_0 + A B u_1 + B u_2, \\
&\;\;\vdots \\
x_{N_p} &= A x_{N_p-1} + B u_{N_p-1} = A^{N_p} x_0 + A^{N_p-1} B u_0 + \cdots + B u_{N_p-1},
\end{aligned}$$
and defining the following vectors:
$$\tilde{x} = \begin{bmatrix} x_1^T & x_2^T & \cdots & x_{N_p}^T \end{bmatrix}^T, \qquad \tilde{u} = \begin{bmatrix} u_0^T & u_1^T & \cdots & u_{N_p-1}^T \end{bmatrix}^T,$$
where $\tilde{x} \in \mathbb{R}^{nN_p\times 1}$ and $\tilde{u} \in \mathbb{R}^{mN_p\times 1}$. Equation (21) can be rewritten into the matrix form expressed by the initial condition and the vectors in Equation (22) as follows:
$$\tilde{x} = \begin{bmatrix} A \\ A^2 \\ A^3 \\ \vdots \\ A^{N_p} \end{bmatrix} x_0 + \begin{bmatrix} B & 0_{n\times m} & \cdots & 0_{n\times m} \\ AB & B & \cdots & 0_{n\times m} \\ \vdots & \vdots & \ddots & \vdots \\ A^{N_p-1}B & A^{N_p-2}B & \cdots & B \end{bmatrix}\tilde{u} = \tilde{A} x_0 + \tilde{B}\tilde{u},$$
where $\tilde{A} \in \mathbb{R}^{nN_p\times n}$ and $\tilde{B} \in \mathbb{R}^{nN_p\times mN_p}$. Before substituting Equation (23) into the cost function (17), substitute the output equation in Equation (16) into cost function (17) and expand the cost function:
$$\begin{aligned}
J &= \frac{1}{2}(y_{N_p} - r_{N_p})^T P\,(y_{N_p} - r_{N_p}) + \frac{1}{2}\sum_{k=0}^{N_p-1}\left[(y_k - r_k)^T Q\,(y_k - r_k) + u_k^T R\,u_k\right] \\
&= \frac{1}{2}(C x_{N_p} - r_{N_p})^T P\,(C x_{N_p} - r_{N_p}) + \frac{1}{2}\sum_{k=0}^{N_p-1}\left[(C x_k - r_k)^T Q\,(C x_k - r_k) + u_k^T R\,u_k\right] \\
&= \frac{1}{2}\left[x_{N_p}^T C^T P C\,x_{N_p} - 2 x_{N_p}^T C^T P\,r_{N_p} + r_{N_p}^T P\,r_{N_p}\right] + \frac{1}{2}\sum_{k=0}^{N_p-1}\left[x_k^T C^T Q C\,x_k - 2 x_k^T C^T Q\,r_k + r_k^T Q\,r_k + u_k^T R\,u_k\right] \\
&= \frac{1}{2}\tilde{x}^T\!\begin{bmatrix} C^TQC & & \\ & \ddots & \\ & & C^TPC \end{bmatrix}\!\tilde{x} - \tilde{x}^T\!\begin{bmatrix} C^TQ & & \\ & \ddots & \\ & & C^TP \end{bmatrix}\!\tilde{r} + \frac{1}{2}\tilde{u}^T\!\begin{bmatrix} R & & \\ & \ddots & \\ & & R \end{bmatrix}\!\tilde{u} + \frac{1}{2}\tilde{r}^T\!\begin{bmatrix} Q & & \\ & \ddots & \\ & & P \end{bmatrix}\!\tilde{r} + \frac{1}{2}\left[x_0^T C^TQC\,x_0 - 2 x_0^T C^TQ\,r_0 + r_0^T Q\,r_0\right] \\
&= \frac{1}{2}\tilde{x}^T\tilde{Q}_1\tilde{x} - \tilde{x}^T\tilde{Q}_2\tilde{r} + \frac{1}{2}\tilde{u}^T\tilde{R}\,\tilde{u} + \underbrace{\frac{1}{2}\tilde{r}^T\tilde{Q}_3\tilde{r} + \frac{1}{2}\left[x_0^T C^TQC\,x_0 - 2 x_0^T C^TQ\,r_0 + r_0^T Q\,r_0\right]}_{\mathrm{constant\ contribution\ to\ the\ cost\ function}},
\end{aligned}$$
where $\tilde{Q}_1 \in \mathbb{R}^{nN_p\times nN_p}$, $\tilde{Q}_2 \in \mathbb{R}^{nN_p\times qN_p}$, $\tilde{Q}_3 \in \mathbb{R}^{qN_p\times qN_p}$, $\tilde{r} \in \mathbb{R}^{qN_p\times 1}$, and $\tilde{R} \in \mathbb{R}^{mN_p\times mN_p}$. In Equation (24), $x_0$ is the given initial condition and $r_0$ is the reference of the output state at $k = 0$. $\tilde{r}$ is a column vector consisting of the given reference trajectory of the output state at each time step. Thus, the terms that depend only on $x_0$, $r_0$, or $\tilde{r}$ are constant contributions to cost function $J$ and can be ignored in the subsequent derivation. The modified cost function now becomes:
$$\tilde{J} = \frac{1}{2}\tilde{x}^T\tilde{Q}_1\tilde{x} - \tilde{x}^T\tilde{Q}_2\tilde{r} + \frac{1}{2}\tilde{u}^T\tilde{R}\,\tilde{u}.$$
Replacing $\tilde{x}$ in Equation (25) by Equation (23),
$$\begin{aligned}
\tilde{J} &= \frac{1}{2}\left(\tilde{A}x_0 + \tilde{B}\tilde{u}\right)^T\tilde{Q}_1\left(\tilde{A}x_0 + \tilde{B}\tilde{u}\right) - \left(\tilde{A}x_0 + \tilde{B}\tilde{u}\right)^T\tilde{Q}_2\tilde{r} + \frac{1}{2}\tilde{u}^T\tilde{R}\tilde{u} \\
&= \frac{1}{2}\tilde{u}^T\left(\tilde{B}^T\tilde{Q}_1\tilde{B} + \tilde{R}\right)\tilde{u} + \begin{bmatrix} x_0^T & \tilde{r}^T \end{bmatrix}\begin{bmatrix} \tilde{A}^T\tilde{Q}_1\tilde{B} \\ -\tilde{Q}_2^T\tilde{B} \end{bmatrix}\tilde{u} + \underbrace{\frac{1}{2}x_0^T\tilde{A}^T\tilde{Q}_1\tilde{A}\,x_0 - x_0^T\tilde{A}^T\tilde{Q}_2\,\tilde{r}}_{\mathrm{constant\ contribution\ to\ the\ cost\ function}},
\end{aligned}$$
and ignoring the constant contribution in cost function $\tilde{J}$, the cost function becomes
$$\bar{J} = \frac{1}{2}\tilde{u}^T\left(\tilde{B}^T\tilde{Q}_1\tilde{B} + \tilde{R}\right)\tilde{u} + \begin{bmatrix} x_0^T & \tilde{r}^T \end{bmatrix}\begin{bmatrix} \tilde{A}^T\tilde{Q}_1\tilde{B} \\ -\tilde{Q}_2^T\tilde{B} \end{bmatrix}\tilde{u} = \frac{1}{2}\tilde{u}^T H\tilde{u} + \bar{x}F\tilde{u},$$
where $H \in \mathbb{R}^{mN_p\times mN_p}$, $\bar{x} \in \mathbb{R}^{1\times(n + qN_p)}$ is a row vector consisting of the initial condition $x_0$ and the reference trajectory at each time step, and $F \in \mathbb{R}^{(n + qN_p)\times mN_p}$. The optimal control sequence can be obtained by taking the partial derivative of $\bar{J}$ with respect to $\tilde{u}$ and setting it to zero:
$$\frac{\partial\bar{J}}{\partial\tilde{u}} = \frac{1}{2}\left(H\tilde{u} + H^T\tilde{u}\right) + (\bar{x}F)^T = H\tilde{u} + F^T\bar{x}^T = 0.$$
The optimal control sequence is then given by
$$\tilde{u} = -H^{-1}F^T\bar{x}^T.$$
Equation (29) will be used by the MPC in the leader’s and followers’ horizontal formation strategies to compute the optimal tracking control sequence.
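The construction of the stacked matrices in Equations (23)–(27) and the solution (29) can be sketched in MATLAB as follows. This is a minimal illustration rather than the authors' implementation; the stacked reference rtil is assumed to contain $r_1$ through $r_{N_p}$.

    % Minimal sketch of Equations (23)-(29): build the prediction matrices
    % and solve the unconstrained QP for the optimal control sequence.
    function [u_seq, u0] = mpc_qp_sequence(A, B, C, P, Q, R, x0, rtil, Np)
        [n, m] = size(B);
        q = size(C, 1);
        Atil = zeros(n*Np, n);
        Btil = zeros(n*Np, m*Np);
        Ak = eye(n);
        for i = 1:Np
            Ak = A * Ak;                                   % A^i
            Atil((i-1)*n+1:i*n, :) = Ak;
            for j = 1:i
                Btil((i-1)*n+1:i*n, (j-1)*m+1:j*m) = A^(i-j) * B;
            end
        end
        Q1 = kron(eye(Np), C'*Q*C);  Q1(end-n+1:end, end-n+1:end) = C'*P*C;
        Q2 = kron(eye(Np), C'*Q);    Q2(end-n+1:end, end-q+1:end) = C'*P;
        Rt = kron(eye(Np), R);
        H = Btil'*Q1*Btil + Rt;                            % Hessian in (27)
        F = [Atil'*Q1*Btil; -Q2'*Btil];                    % linear term in (27)
        u_seq = -H \ (F' * [x0; rtil]);                    % Equation (29)
        u0 = u_seq(1:m);                                   % control applied by MPC
    end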

5. Formation Strategies

The formation control strategies of the leader and the followers are discussed in detail in the following sections. How the leader generates the followers’ desired formation trajectories in both the horizontal and vertical motions is described in Section 5.1. Section 5.2 shows how a follower computes the formation control input using consensus control and MPC after receiving the desired formation trajectory information.

5.1. Formation Strategy: Leader

Figure 3 shows the block diagram of a leader’s formation strategy. First, the leader tracks the predefined waypoints and desired altitude/climb rate using the MPC. The MPC is formulated as a QP problem for the horizontal motion and LQT for the vertical motion. After the predicted trajectory is generated along with the tracking control input, a built-in algorithm generates followers’ desired formation trajectories and the trajectories are transmitted to each follower.

5.1.1. Generation of the Desired Formation Trajectory in the Horizontal Motion

In the horizontal motion, the leader generates the followers’ desired formation trajectories based on its position, direction of the horizontal velocity in frame I , predicted trajectory, and the given formation pattern. The transformation from the leader’s predicted trajectory to the desired formation trajectories of the followers can be derived according to Figure 4.
First of all, from Figure 4a, the follower’s desired formation position in the horizontal plane in frame $I$ can be described as follows:
$$r_{d,F} = r_L + r_{L2F,I},$$
where $r_{d,F}$ denotes the position vector of the follower’s desired formation position and $r_L$ denotes the leader’s position vector. $r_{L2F,I}$ is a vector pointing from the leader’s position to the follower’s desired formation position with respect to frame $I$.
Considering Figure 4b, the desired formation position with respect to the leader’s velocity tangential coordinate can be written as follows:
$$r_{L2F,V} = \rho_{\parallel}\,\hat{V}_{L\parallel} + \rho_{\perp}\,\hat{V}_{L\perp},$$
where $\hat{V}_{L\parallel}$ and $\hat{V}_{L\perp}$ are the unit vectors of the leader’s velocity tangential coordinate, in which $\hat{V}_{L\parallel}$ is parallel to $V_L$ and $\hat{V}_{L\perp}$ is perpendicular to $V_L$. $(\rho_{\parallel}, \rho_{\perp})$ is the coordinate of the desired formation position with respect to the leader’s tangential coordinate.
The rotation matrix, denoted as $R_{V2I}$, that transforms the position vector from $r_{L2F,V}$ to $r_{L2F,I}$ is defined as follows:
$$R_{V2I} = \begin{bmatrix} \cos\theta_{V_L} & -\sin\theta_{V_L} \\ \sin\theta_{V_L} & \cos\theta_{V_L} \end{bmatrix},$$
where $\theta_{V_L}$ is the angle between $V_L$ and $\hat{i}$, which can be obtained by
$$\theta_{V_L} = \cos^{-1}\left(\frac{V_L\cdot\hat{i}}{\|V_L\|}\right).$$
$V_L$ and its magnitude can be computed as follows:
$$V_L = \dot{x}\,\hat{i} + \dot{y}\,\hat{j}, \qquad \|V_L\| = \sqrt{\dot{x}^2 + \dot{y}^2},$$
where $\dot{x}$ and $\dot{y}$ can be obtained from Equation (8). Thus, position vector $r_{L2F,I}$ can be obtained as follows:
$$r_{L2F,I} = \begin{bmatrix} \rho_{\parallel} & \rho_{\perp} \end{bmatrix}\begin{bmatrix} \cos\theta_{V_L} & \sin\theta_{V_L} \\ -\sin\theta_{V_L} & \cos\theta_{V_L} \end{bmatrix}\begin{bmatrix} \hat{i} \\ \hat{j} \end{bmatrix}.$$
Finally, the coordinate of the follower’s desired formation position $r_{d,F}$ can be obtained by
$$r_{d,F} = r_L + r_{L2F,I} \;\;\Rightarrow\;\; r_{d,F}^{\,I} = r_L^{\,I} + \begin{bmatrix} \rho_{\parallel} & \rho_{\perp} \end{bmatrix}\begin{bmatrix} \cos\theta_{V_L} & \sin\theta_{V_L} \\ -\sin\theta_{V_L} & \cos\theta_{V_L} \end{bmatrix}.$$
Applying Equation (36) to every point of the leader’s predicted trajectory, the desired formation trajectory of each follower in the horizontal motion over the predicted time horizon can be obtained.
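A minimal MATLAB sketch of Equations (32)–(36) for a single point of the leader's predicted trajectory is given below. The function and its interface are illustrative only; note that the signed heading via atan2 is used here in place of the arccosine of Equation (33), and (rho_par, rho_perp) are the formation-pattern offsets along and perpendicular to the leader's horizontal velocity.

    % Minimal sketch of Equations (32)-(36): follower's desired formation
    % position from the leader's position and horizontal velocity.
    function r_dF = follower_formation_point(r_L, xdot, ydot, rho_par, rho_perp)
        theta_VL = atan2(ydot, xdot);          % heading of V_L w.r.t. the i-axis
        R_V2I = [cos(theta_VL) -sin(theta_VL);
                 sin(theta_VL)  cos(theta_VL)];      % Equation (32)
        r_L2F_I = R_V2I * [rho_par; rho_perp];       % offset expressed in frame I
        r_dF = r_L(:) + r_L2F_I;                     % Equation (36)
    end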

5.1.2. Generation of the Desired Formation Trajectory in the Vertical Motion

In this study, followers are designated to maintain the same altitude as the leader throughout the flight. Thus, the leader’s predicted trajectory of the altitude computed by the MPC algorithm directly becomes the followers’ desired formation trajectories. The leader’s vertical motion is determined by the desired altitude and/or climb rate, which can be classified into four cases as shown in Table 3.
In the vertical motion, the first two states, $\Delta z$ and $\Delta w$, in the state-space model (14) are used to construct a new second-order state-space model that describes the leader’s vertical motion:
$$\begin{bmatrix} \Delta\dot{z} \\ \Delta\dot{w} \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} \Delta z \\ \Delta w \end{bmatrix} + \begin{bmatrix} 0 \\ -\frac{1}{m} \end{bmatrix}\Delta T_{total}.$$
$\Delta z$ is positive when the quadrotor moves downward. However, it is more intuitive to consider upward motion as positive. Hence, variable $\Delta h$ is defined to replace $\Delta z$ in the state-space model:
$$\Delta h = -\Delta z, \qquad \Delta\dot{h} = -\Delta w,$$
and therefore
$$\begin{bmatrix} \Delta\dot{h} \\ \Delta\dot{w} \end{bmatrix} = \begin{bmatrix} 0 & -1 \\ 0 & 0 \end{bmatrix}\begin{bmatrix} \Delta h \\ \Delta w \end{bmatrix} + \begin{bmatrix} 0 \\ -\frac{1}{m} \end{bmatrix}\Delta T_{total}.$$
With the given desired altitude and/or climb rate, the vertical reference trajectory of the leader for the MPC can be obtained. The MPC computes the control input and the predicted trajectory according to the reference trajectory and the state-space model (38). The following illustrates how the reference trajectory for the MPC is generated in each case of Table 3 (a code sketch of the reference generator is given after the list):
  • Case I: maintain the current altitude.
    In this case, the leader is designated to maintain the altitude. Thus, the reference trajectories of $\Delta h$ and $\Delta w$ over the predicted time horizon are equal to zero.
  • Case II: climb to the desired altitude.
    If the desired altitude is given but not the climb rate, the leader is designated to climb to the desired altitude with the default climb rate. The reference for $\Delta w$ is the negative default climb rate, and the reference trajectory of $\Delta h$ is computed by
    $$d_{Alt}(k) = \bar{L}_p + RC_{default}\times k\times \Delta h_s,$$
    where $d_{Alt}(k)$ denotes the reference altitude at time step $k$, which is equal to the leader’s current altitude $\bar{L}_p$ plus the default climb rate $RC_{default}$ times the time step $k$ and the time-step interval $\Delta h_s$. Time step $k$ is defined over the predicted time horizon considered by the MPC with the interval $\Delta h_s$.
    A deceleration action is taken as the leader approaches the desired altitude. If the distance between the leader’s current position and the desired altitude is smaller than the altitude change achievable within one second at the given climb rate, then the altitude reference is set to the desired altitude and the climb-rate reference is set to zero.
  • Case III: continuous climbing with the given climb rate.
    In this case, the reference trajectory of the altitude is generated by Equation (39) as well, but $RC_{default}$ is replaced by the given climb rate. No deceleration action is taken.
  • Case IV: climb to desired altitude with the given rate.
    In this case, the reference trajectory of the altitude is generated by Equation (39) as well, but $RC_{default}$ is replaced by the given climb rate. The deceleration action described in Case II is also applied as the leader approaches the desired altitude.
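The four cases above can be condensed into a single reference generator. The following MATLAB sketch is illustrative only (the interface, the use of NaN to mark a missing desired altitude, and the variable names are assumptions); it returns altitude and climb-rate references for the states $\Delta h$ and $\Delta w$ of Equation (38), with $\Delta w$ taken positive downward so that a climb corresponds to a negative $\Delta w$ reference.

    % Minimal sketch of the leader's vertical reference generation (Table 3).
    % Lp: current altitude; alt_des: desired altitude (NaN if not given);
    % rc: climb rate (the default rate if none is given); dt: step interval;
    % Np: prediction horizon length.
    function [h_ref, w_ref] = vertical_reference(Lp, alt_des, rc, dt, Np)
        h_ref = zeros(1, Np);
        w_ref = zeros(1, Np);
        for k = 1:Np
            if isnan(alt_des) && rc == 0            % Case I: hold altitude
                h_ref(k) = Lp;
            elseif isnan(alt_des)                   % Case III: climb at given rate
                h_ref(k) = Lp + rc * k * dt;        % Equation (39)
                w_ref(k) = -rc;
            else                                    % Cases II and IV
                if abs(alt_des - Lp) < abs(rc)      % decelerate near the target
                    h_ref(k) = alt_des;             % hold the desired altitude
                else
                    h_ref(k) = Lp + rc * k * dt;    % Equation (39)
                    w_ref(k) = -rc;
                end
            end
        end
    end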

5.2. Formation Strategy: Follower

The follower’s formation strategies in the horizontal and vertical motions are illustrated in Figure 5. In the follower’s formation strategy, a follower is expected to receive two sets of data. The first data set includes the desired formation trajectories of itself and of the followers that have a direct connection to it in the network structure. The second data set includes the states of the connected followers. How the blocks manipulate the received data sets and generate the control input is discussed in the following sections.

5.2.1. Formation Strategy in the Horizontal Motion

In the horizontal motion, the formation strategy is carried out by the consensus control and MPC. As mentioned in Section 2, the desired formation trajectories possess only the information of the desired position for the followers over the time horizon. Therefore, an algorithm of the finite difference method, the block FDM Algorithm in Figure 5, is added to the formation strategy in the horizontal motion to generate the required information for the consensus control.
Consider a point $\zeta_i$ on a given function $f(\zeta)$; the first to third derivatives of $f$ at $\zeta_i$ given by the finite-difference method are as follows:
$$\begin{aligned}
f'(\zeta_i) &= \frac{-f(\zeta_{i+2}) + 8f(\zeta_{i+1}) - 8f(\zeta_{i-1}) + f(\zeta_{i-2})}{12h}, \\
f''(\zeta_i) &= \frac{-f(\zeta_{i+2}) + 16f(\zeta_{i+1}) - 30f(\zeta_i) + 16f(\zeta_{i-1}) - f(\zeta_{i-2})}{12h^2}, \\
f'''(\zeta_i) &= \frac{-f(\zeta_{i+3}) + 8f(\zeta_{i+2}) - 13f(\zeta_{i+1}) + 13f(\zeta_{i-1}) - 8f(\zeta_{i-2}) + f(\zeta_{i-3})}{8h^3},
\end{aligned}$$
where $h$ is the time interval between each point. As shown in Equation (40), computing the third derivative of $f(\zeta)$ at $\zeta_i$ requires the points three steps prior to, and three steps after, $\zeta_i$. This would be a problem when dealing with the first three points and the last three points of the desired formation trajectory, since we do not have knowledge of the points prior to the first point and after the last point of the desired formation trajectory. However, these points can be acquired by applying linear extrapolation backward in time from the first point and forward in time from the last point of the desired formation trajectory. Once the complete desired trajectory is generated, consensus control engages. The consensus control algorithm computes the flight paths of the followers that track the corresponding desired formation trajectories using the control input given in Equation (15). Here, we rearrange the states at time step $k$ of the i-th follower as follows:
$$s_i^{(1)}(k) = \begin{bmatrix} \Delta x(k) \\ \Delta y(k) \end{bmatrix}, \quad s_i^{(2)}(k) = \begin{bmatrix} \Delta u(k) \\ \Delta v(k) \end{bmatrix}, \quad s_i^{(3)}(k) = \begin{bmatrix} \Delta\theta(k) \\ \Delta\phi(k) \end{bmatrix}, \quad s_i^{(4)}(k) = \begin{bmatrix} \Delta q(k) \\ \Delta p(k) \end{bmatrix}.$$
The corresponding desired formation trajectory of each modified state $s_i^{(k_s)}(k)$ ($k_s = 1, 2, 3, 4$) at time step $k$ is defined as $d_i^{(1)}(k)$, $d_i^{(2)}(k)$, $d_i^{(3)}(k)$, and $d_i^{(4)}(k)$. The error between the modified state and the corresponding desired formation trajectory at time step $k$ can be written as:
$$\hat{s}_i^{(k_s)}(k) = s_i^{(k_s)}(k) - d_i^{(k_s)}(k), \qquad k_s = 1, 2, 3, 4.$$
The consensus control input at time step $k$ can be computed by using the modified states and the control gains $\beta_{k_s}$ ($k_s = 1, 2, 3, 4$). The control input is a two-dimensional column vector whose first and second elements are the pitch and roll control commands, respectively. The next step is to apply the consensus control input to the discrete state-space model of (13). These steps are repeated until the flight path over the predicted time horizon is obtained.
After the flight path of the follower under consideration is generated, it then serves as the reference trajectory for the MPC. The MPC obtains the optimal control sequence using Equation (29). First, rearrange the initial condition and the reference trajectory in the form of $\bar{x}$ described in Equation (27). With the discrete state-space model and the selected weighting matrices, the predicted trajectory and the control input sequence can be obtained by Equations (23) and (29). Then, the first control input of the optimal control sequence becomes the formation control input for the follower.
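Returning to the FDM Algorithm block described at the beginning of this section, its endpoint handling can be sketched as follows. This is a minimal MATLAB illustration (the function name and interface are assumptions): the desired position trajectory is padded by linear extrapolation with three points on each side, and the central differences of Equation (40) are then evaluated at every original point.

    % Minimal sketch of the FDM Algorithm block for one coordinate.
    % p: 1 x N row vector of desired positions over the horizon; h: step size.
    function [d1, d2, d3] = fdm_derivatives(p, h)
        head = p(1)   - (3:-1:1) * (p(2)   - p(1));      % extrapolate backward
        tail = p(end) + (1:3)    * (p(end) - p(end-1));  % extrapolate forward
        f = [head, p, tail];
        N  = numel(p);
        d1 = zeros(1, N);  d2 = zeros(1, N);  d3 = zeros(1, N);
        for i = 1:N
            j = i + 3;                                   % index into padded array
            d1(i) = (-f(j+2) + 8*f(j+1) - 8*f(j-1) + f(j-2)) / (12*h);
            d2(i) = (-f(j+2) + 16*f(j+1) - 30*f(j) + 16*f(j-1) - f(j-2)) / (12*h^2);
            d3(i) = (-f(j+3) + 8*f(j+2) - 13*f(j+1) + 13*f(j-1) - 8*f(j-2) + f(j-3)) / (8*h^3);
        end
    end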

5.2.2. Formation Strategy in the Vertical Motion

The formation strategy in the vertical motion is simpler than the one in the horizontal motion. The state-space model that is adopted in the vertical motion is described by Equation (38). As mentioned in Section 2, the desired formation trajectory generated by the leader possesses only the information of the altitude, namely $\Delta h$. Therefore, an algorithm of the finite-difference method, the block FDM Algorithm in Figure 5, is added to the follower’s vertical formation strategy to generate the required information for the LQT.
With the complete desired formation trajectory and the selected weighting matrices, the LQT computes the optimal control sequence using Equation (19). The formation control input in the vertical motion for the follower is the first input of the optimal control sequence.

6. Simulation Preparation

This section introduces the configuration of the systems that are used in the simulation under the Matlab/Simulink environment, as well as the necessary setup for the simulation. A transformation of variables is carried out in Section 6.1 for coding purposes. In Section 6.2, the configuration of the simulation systems under the Matlab/Simulink environment is introduced block by block for the leader and the followers. Note that one leader, three followers, and the network structure in Figure 1b are considered in the simulation.

6.1. Transformation of the Variable of the Discrete State-Space Model

The state-space model coded in the formation strategies is the discretized state-space model with a sampling time of 0.1 s. That is, the controls for the leader and the followers are expected to be updated every 0.1 s. For the convenience of coding the control algorithm in the horizontal motion, a transformation of variables is applied to the horizontal state-space model (13). The states and the controls in the horizontal state-space model (13) are arranged as $x_h$ and $U_h$ in Equation (43), respectively. The transformation of states $T_s$ and the transformation of controls $T_u$ are considered and shown as follows:
$$x_h = \begin{bmatrix} \Delta x \\ \Delta y \\ \Delta u \\ \Delta v \\ \Delta\theta \\ \Delta\phi \\ \Delta q \\ \Delta p \end{bmatrix} = \begin{bmatrix} I_4 & 0_{4\times 4} \\ 0_{4\times 4} & \mathrm{diag}\!\left(-\tfrac{1}{g},\ \tfrac{1}{g},\ -\tfrac{1}{g},\ \tfrac{1}{g}\right) \end{bmatrix}\begin{bmatrix} \Delta x \\ \Delta y \\ \Delta u \\ \Delta v \\ \Delta\tilde{\theta} \\ \Delta\tilde{\phi} \\ \Delta\tilde{q} \\ \Delta\tilde{p} \end{bmatrix} = T_s\,\tilde{x}_h, \qquad U_h = \begin{bmatrix} \Delta M_\theta \\ \Delta M_\phi \end{bmatrix} = \begin{bmatrix} -\tfrac{I_{yy}}{g} & 0 \\ 0 & \tfrac{I_{xx}}{g} \end{bmatrix}\begin{bmatrix} \Delta\tilde{M}_\theta \\ \Delta\tilde{M}_\phi \end{bmatrix} = T_u\,\tilde{U}_h.$$
Applying the above transformation to the states and the controls in Equation (13), the normalized horizontal state-space model considering the modified state $\tilde{x}_h$ and the modified control $\tilde{U}_h$ can be obtained. The normalized system matrix $\tilde{A}_h$ and control matrix $\tilde{B}_h$ are as follows:
$$\tilde{A}_h = \begin{bmatrix} 0_{6\times 2} & I_6 \\ 0_{2\times 2} & 0_{2\times 6} \end{bmatrix}, \qquad \tilde{B}_h = \begin{bmatrix} 0_{6\times 2} \\ I_2 \end{bmatrix}.$$
The discrete state-space model that is adopted by the formation strategies in the horizontal motion is obtained by applying the Matlab function c2d to the normalized continuous state-space model (44).
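A minimal MATLAB sketch of this step is given below; the inertia values are placeholders, the sign conventions follow the reconstruction of Equations (43) and (44) above, and ss/c2d require the Control System Toolbox.

    % Minimal sketch of Section 6.1: normalized horizontal model (44) and its
    % zero-order-hold discretization with a 0.1 s sampling time.
    g = 9.81;  Ixx = 0.02;  Iyy = 0.02;           % placeholder parameter values
    Ts_sample = 0.1;                              % [s] controller update rate
    A_h = [zeros(6,2), eye(6); zeros(2,8)];       % normalized A in Equation (44)
    B_h = [zeros(6,2); eye(2)];                   % normalized B in Equation (44)
    sys_d = c2d(ss(A_h, B_h, eye(8), 0), Ts_sample);   % discrete model for the MPC
    Ad = sys_d.A;   Bd = sys_d.B;
    % Transformations of Equation (43): recover physical states and controls
    T_s = blkdiag(eye(4), diag([-1/g, 1/g, -1/g, 1/g]));
    T_u = diag([-Iyy/g, Ixx/g]);
    % x_h = T_s * x_tilde_h;   U_h = T_u * U_tilde_h;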

6.2. Simulink Environment Setup for the Formation Simulation

The basic configuration of the simulated quadrotor system under the Matlab/Simulink environment is shown in Figure 6. The system includes a Control System module, a Gravity Influence module, and a six degrees of freedom (6DOF) model module (Quad 6DOF Model). Both leader and followers adopt the same configuration with the difference in the Control System module and the input/output ports.
The Gravity Influence module computes the distribution of the quadrotor’s weight on the axes of frame B . The 6DOF rigid body EOMs solver inside the Quad 6DOF Model module is shown in Figure 7. Block 6DOF (Euler Angles) solves the nonlinear 6DOF EOMs of the rigid body with the given inputs with respect to frame B . The block is provided by the Simulink/Aerospace Blockset toolbox [17].
The Control System module includes a yaw control submodule, a pitch/roll control submodule, and an altitude control submodule. Figure 8 shows the configuration of the leader’s Control System module. The follower’s Control System module differs from the leader’s in the pitch/roll control submodule; the leader’s and follower’s pitch/roll control submodules are introduced later in this section. The yaw control submodule controls the yaw motion of the quadrotor based on a dual-loop proportional plus integral (PI) control. The pitch/roll and altitude controls carry out the formation control strategies depending on whether the quadrotor is a leader or a follower. Likewise, the input/output ports differ depending on the role of the quadrotor.
Figure 9 shows the configuration of the Altitude Control submodule of the leader and the follower. The Altitude Control submodule carries out the vertical formation strategies described in Section 5.1.2 for the leader and in Section 5.2.2 for the follower. In the leader’s Altitude Control submodule, the function Track Generator generates the reference trajectory Lv_ref for the LQT using the method provided in Section 5.1.2, depending on the vertical state of the leader, the desired altitude, and the climb rate. The vertical state of the leader, labeled as vN, is defined by Equation (38). The desired altitude and the climb rate, labeled as required_alt and required_RC, are given by the user. Function LQT tracks reference trajectory Lv_ref by solving Equation (19) with the given weighting matrices, vertical state vN, and vertical state-space model (38). Function LQT outputs the thrust command dT, the follower’s reference trajectory F_vertical_ref, and the leader’s velocity L_velocity_w along $\hat{b}_3$ of frame $B$. In the follower’s Altitude Control submodule, function FDM Algorithm computes the complete reference trajectory Fv_ref, consisting of the references for states $\Delta h$ and $\Delta w$ in Equation (38), using the finite-difference method (40). Then, function LQT solves the equations in (19) depending on the given weighting matrices, the vertical state of the follower vN, and the vertical state-space model (38). After the computation, function LQT outputs the thrust command dT. It is worth mentioning that the thrust command dT in both the leader’s and follower’s control submodules is the thrust deviation from the thrust at the equilibrium point $T_{total,0}$. Therefore, the actual thrust command T_t in Figure 6 that is input into the 6DOF model equals the thrust deviation dT plus the equilibrium thrust $T_{total,0}$. The leader’s velocity L_velocity_w is used to compute the leader’s velocity in frame $I$ in the control strategy in the horizontal motion.
Figure 10 and Figure 11 show the pitch/roll control submodule for the leader and the follower, respectively. The leader’s pitch/roll control submodule, MPC Control(Pitch, Roll) in Figure 8, includes the waypoint datasets (orange dashed square), the waypoint update rules (orange square), the block Preset Constant, which includes the formation pattern and the parameters for the waypoint update rules, the MPC algorithm Leader MPC Algorithm, and the function Follower ref. Trajectory Generator that generates the followers’ reference trajectories, as shown in Figure 10. The waypoint datasets in the orange dashed square include the coordinates of the waypoints in frame $I$, named L_wp, and the number of waypoints, named wp_length. The waypoint update rules switch to the next waypoint if the distance between the leader and the current waypoint, denoted by R2wp, is smaller than $d_2$, and hold at the last waypoint if R2wp is smaller than $d_1$. Parameter $d_1$ is used in the case that the user designates the followers to hold a specific position once the last waypoint is reached. Function Leader MPC Algorithm solves the optimal tracking control input M by using Equation (29) with the given weighting matrices, the horizontal state of the leader, and the horizontal state-space model. The predicted trajectory for the leader is computed by Equation (23). Note that the horizontal state and the state-space model that are adopted in the Leader MPC Algorithm are the modified state $\tilde{x}_h$ and the normalized state-space model in Equation (44). In addition, the control output by the Leader MPC Algorithm is the modified control in Equation (43). Function Follower ref. Trajectory Generator computes the reference trajectories for the followers based on the method in Section 5.1.1 using the leader’s predicted trajectory and the formation pattern. The leader’s vertical velocity L_velocity_w is input into the function to compute the leader’s velocity in frame $I$ using Equation (8).
The control submodule shown in Figure 11 is the control submodule of Follower #1. The follower’s control submodule first computes the complete desired formation trajectories, including the information of the velocity, attitude, and angular velocity, using the finite difference method coded in function FDM Algorithm. Then, the function Consensus Algorithm computes all the followers’ flight paths over the predicted time horizon based on the consensus control (15), the modified state $\tilde{x}_h$ (43), and the normalized state-space model (44). The signal labeled rN is the modified state of the follower under consideration, and the signal labeled rNF_all contains the modified states of the other followers connected to it in the network structure. In the case shown in Figure 11, rN is the modified state of Follower #1 and rNF_all contains the modified states of Follower #2 and Follower #3. After the flight paths are computed, the function MPC Algorithm engages. The MPC Algorithm computes the optimal tracking control by using Equation (29) based on the modified state (43), the normalized state-space model (44) of the quadrotor, and the corresponding flight path.

7. Simulation Results

The simulation of a comprehensive maneuver is carried out to examine the performance of the proposed formation strategies. The parameters of the quadrotor, the gains of the consensus control, and the weighting matrices that are used in the simulation are given in Section 7.1. The simulation results of the comprehensive maneuver are shown in Section 7.2.

7.1. Quadrotor Parameter, Control Gain, and Weighting Matrices

This section summarizes the parameters of the quadrotor (Table 4), weighting matrices, and consensus control gains that are used in the simulation.
In the simulation, we assume that all the states in both the horizontal and vertical motions are obtainable. That is, the output matrix C in the state-space model is an identity matrix for both motions and for both the leader and the followers. The weighting matrices for the LQT and QP problem for the leader are given as follows:
  • Weighting matrices P, Q, and R of the unconstrained QP problem coded in the Simulink function Leader MPC Algorithm in Figure 10 for the leader’s horizontal motion control are given as follows:
    $$Q = I_8, \qquad R = 0.5\,I_2, \qquad P = \begin{bmatrix} 29.8176 & 0 & 37.9636 & 0 & 25.3123 & 0 & 7.1204 & 0 \\ 0 & 29.8176 & 0 & 37.9636 & 0 & 25.3123 & 0 & 7.1204 \\ 37.9636 & 0 & 86.4879 & 0 & 67.0493 & 0 & 20.8541 & 0 \\ 0 & 37.9636 & 0 & 86.4879 & 0 & 67.0493 & 0 & 20.8541 \\ 25.3123 & 0 & 67.0493 & 0 & 75.6689 & 0 & 26.9672 & 0 \\ 0 & 25.3123 & 0 & 67.0493 & 0 & 75.6689 & 0 & 26.9672 \\ 7.1204 & 0 & 20.8541 & 0 & 26.9672 & 0 & 18.4432 & 0 \\ 0 & 7.1204 & 0 & 20.8541 & 0 & 26.9672 & 0 & 18.4432 \end{bmatrix}.$$
  • Weighting matrices P, Q, and R for the LQT coded in the Simulink function LQT in Figure 9 for the leader’s altitude control are given as follows:
    $$P = Q = \begin{bmatrix} 3 & 0 \\ 0 & 10 \end{bmatrix}, \qquad R = 0.5.$$
As for the followers, the consensus control gains in the horizontal motion and the weighting matrices for the unconstrained QP problem in the horizontal motion and the LQT in the vertical motion are given as follows:
  • Consensus control gains $\beta_{k_s}$ ($k_s = 1, 2, 3, 4$) in the horizontal motion, considering the states in (41) and the consensus control (15) coded in the Consensus Algorithm in Figure 11, are given as follows:
    $$(\beta_1, \beta_2, \beta_3, \beta_4) = (13, 30, 60, 5).$$
  • Weighting matrices P, Q, and R of the unconstrained QP problem coded in the Simulink function MPC Algorithm in Figure 11 for the follower’s horizontal motion control are given as follows:
    $$Q = I_8, \qquad R = 0.3\,I_2, \qquad P = \begin{bmatrix} 28.8987 & 0 & 35.3119 & 0 & 22.1493 & 0 & 5.5294 & 0 \\ 0 & 28.8987 & 0 & 35.3119 & 0 & 22.1493 & 0 & 5.5294 \\ 35.3119 & 0 & 78.6321 & 0 & 57.3341 & 0 & 15.6844 & 0 \\ 0 & 35.3119 & 0 & 78.6321 & 0 & 57.3341 & 0 & 15.6844 \\ 22.1493 & 0 & 57.3114 & 0 & 62.9640 & 0 & 19.4694 & 0 \\ 0 & 22.1493 & 0 & 57.3114 & 0 & 62.9640 & 0 & 19.4694 \\ 5.5294 & 0 & 15.6844 & 0 & 19.4694 & 0 & 12.6822 & 0 \\ 0 & 5.5294 & 0 & 15.6844 & 0 & 19.4694 & 0 & 12.6822 \end{bmatrix}.$$
  • Weighting matrices P, Q, and R for the LQT coded in the Simulink function LQT in Figure 9 for the follower’s altitude control are given as follows:
    $$Q = P = \begin{bmatrix} 3 & 0 \\ 0 & 10 \end{bmatrix}, \qquad R = 0.5.$$
The selection of weighting matrix P for the unconstrained QP problem utilizes the methodology provided by Reference [18]. The method ensures that the MPC objective function has the same quadratic cost as the infinite-horizon quadratic cost used by the LQR.
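Following the approach of Reference [18], the terminal weight P can be taken as the solution of the discrete algebraic Riccati equation for the discretized model and the chosen Q and R, so that the finite-horizon MPC cost matches the infinite-horizon LQR cost. The sketch below is illustrative only (idare requires the Control System Toolbox; dare can be used in older releases), with Ad and Bd taken from the discretization sketch in Section 6.1.

    % Minimal sketch of the terminal-weight selection in Section 7.1.
    Q = eye(8);
    R = 0.5 * eye(2);
    P_terminal = idare(Ad, Bd, Q, R);   % Riccati solution used as terminal weight P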

7.2. Simulation of the Comprehensive Maneuver

The comprehensive maneuver simulation and its results are outlined in this section. The objective of the simulation was to examine whether the followers can carry out the formation flight, including maintaining the given formation pattern and tracking the desired altitude, with the proposed formation strategies while the leader performs a series of maneuvers.
Assume that two tasks, mapping the landscape and taking photos of a target building’s exterior, are assigned to the leader. The predefined waypoints are illustrated in Figure 12. The first task, mapping the landscape, is carried out through Waypoint #1 to Waypoint #9. An imaginary terrain is placed at Waypoint #6 so that the leader has to change its altitude in order to cross it. The leader is designated to climb to 20 meters with respect to the zero reference altitude at a climb rate of 4 m/s at Waypoint #5 and to descend back to the zero reference altitude at a climb rate of –4 m/s at Waypoint #7. The second task, taking photos of a target building’s exterior, is carried out through Waypoint #15 to Waypoint #32. The leader follows a circular trajectory in the x–y plane and climbs at a rate of 0.45 m/s at the same time. The climbing continues until the leader reaches 60 meters with respect to the zero reference altitude. After all the tasks are completed, the leader heads to its holding point at Waypoint #47 by following Waypoints #33 to #47. The leader starts to descend to the zero reference altitude at Waypoint #38 at a climb rate of –2 m/s.
The triangle formation pattern is chosen, and the simulation results are shown in the following figures. The trajectories of the quadrotors throughout the maneuver are shown in Figure 13. Figure 14 shows the top view of the quadrotors with the formation pattern throughout the flight. In Figure 13 and Figure 14, the x-axis points to the north, the y-axis points to the east, and the z-axis points upward, opposite to the $\hat{k}$ axis of frame $I$. Figure 15 shows the altitude and climb rate of the quadrotors. The attitudes of the leader and the followers are shown in Figure 16, Figure 17 and Figure 18.

8. Discussion

In the simulation of the comprehensive maneuver, the leader performs a series of maneuvers while carrying out the assigned quests. The quadrotor trajectories in Figure 13 show that the followers carry out the formation flight well based on the proposed formation strategies. Furthermore, from the top view of the quadrotor trajectories in Figure 14, the designated formation pattern is maintained throughout the flight. Although the formation pattern slightly deforms during the turn, it recovers soon after the turn.
In the vertical motion, the leader climbs to the desired altitude at the designated climb rate in three stages, R1, R2, and R3, as shown in Figure 15. Moreover, the followers can keep up with the leader’s vertical motion with the formation strategy proposed in Section 5.2.2. In Figure 15, oscillation in the altitude and climb rate can be observed throughout the flight. The oscillation can be seen more clearly in Figure 19, which is a zoomed-in view of the vertical motion responses in Figure 15 during the time interval of 190 to 210 seconds. The oscillation is caused by the coupling effects between the horizontal and vertical motions in the 6DOF nonlinear EOMs. The quadrotor tilts while tracking the waypoints, as shown in Figure 16 and Figure 17, which disturbs the equilibrium condition and causes the altitude to change. Since the equilibrium condition is disturbed, the altitude control algorithm engages to regain the equilibrium condition in the vertical motion, resulting in the oscillating behavior.
In the horizontal motion, the responses of the pitch and roll angles in Figure 16 and Figure 17 show that the angles remain in an acceptable range (±25°) throughout the flight. Furthermore, the yaw angle is well controlled, close to zero, by the dual-loop PI control, as shown in Figure 18. Additionally, the responses of the pitch and roll angles before and after the upward circling motion are shown in Figure 20 and Figure 21, respectively. Before the quadrotors initiate and after they finish the upward circling motion, they are simply performing a straight flight with a 90° turn. According to the results illustrated in Figure 20 and Figure 21, the formation strategies in the horizontal motion are fully adequate for performing the formation in straight flight. Even during the turn, the pitch and roll angles stay in an acceptable range (±25°).
A zoomed-in view of the attitude responses during the upward circling motion is shown in Figure 22. The figure zooms in on the responses in the time interval of 80 to 100 seconds during the upward circling motion. In this 20-second interval, the quadrotors travel about 1/4 of a circle in the x–y plane. During the upward circling motion, the leader has to change its direction frequently to stay on the circular track, and the followers have to do so as well to stay in formation. The results in Figure 22 show that the pitch and roll angles are still within an acceptable range for the leader and the followers. The followers can keep up with the leader’s horizontal motion, or, in other words, stay in formation during the upward circling motion with the formation strategy proposed in Section 5.2.1. The yaw angles of the leader and the followers are controlled close to zero by the dual-loop PI control in the upward circling motion, as shown in Figure 18 and Figure 22.

9. Conclusions

In this study, formation control strategies for quadrotors considering the leader–follower architecture are proposed in Section 5. The leader is designated to track the predefined waypoints in the horizontal motion and the desired altitude/climb rate in the vertical motion. Furthermore, the leader generates the followers’ desired formation trajectories using the methods in Section 5.1.1 and Section 5.1.2 as guidance for the followers to perform the formation flight. The follower’s formation strategy is proposed in Section 5.2: the consensus control and the MPC with the QP problem are adopted in the horizontal formation control, and the MPC with the LQT is adopted in the vertical formation control. The formation control strategies are examined with the 6DOF nonlinear simulation under the Matlab/Simulink environment, and the simulation results are illustrated in Section 7. A discussion of the simulation results as well as the performance of the formation strategies is given in Section 8. The results of the simulation of the comprehensive maneuver in Section 7.2 show that the proposed formation strategies are feasible in straight flight, turning, and upward circling motion.

Author Contributions

J.-K.S. organized and supervised the main research of the study. C.-W.C. completed the design and simulation of the study under Shiau’s supervision.

Acknowledgments

This research was supported by the Ministry of Science and Technology (MOST), Taiwan, under Grant MOST 107-2221-E-032-028.

Conflicts of Interest

The authors declare no conflict of interest. The funding sponsors had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

References

  1. Scharf, D.P.; Hadaegh, F.Y.; Ploen, S.R. A survey of spacecraft formation flying guidance and control (Part II): Control. In Proceedings of the 2004 American Control Conference, Boston, MA, USA, 30 June–2 July 2004.
  2. Rinaldi, F.; Chiesa, S.; Quagliotti, F. Linear quadratic control for quadrotors UAVs dynamics and formation flight. J. Intell. Robot. Syst. 2013, 70, 203–220.
  3. Mercado, D.; Castro, R.; Lozano, R. Quadrotors flight formation control using a leader-follower approach. In Proceedings of the 2013 European Control Conference (ECC), Zurich, Switzerland, 17–19 July 2013; pp. 3858–3863.
  4. Wu, F.; Chen, J.; Liang, Y. Leader-follower formation control for quadrotors. In IOP Conference Series: Materials Science and Engineering; IOP Publishing: Bristol, UK, 2017; Volume 187, p. 012016.
  5. Roldão, V.; Cunha, R.; Cabecinhas, D.; Silvestre, C.; Oliveira, P. A leader-following trajectory generator with application to quadrotor formation flight. Robot. Auton. Syst. 2014, 62, 1597–1609.
  6. Hou, Z.; Fantoni, I. Distributed leader-follower formation control for multiple quadrotors with weighted topology. In Proceedings of the 2015 10th IEEE System of Systems Engineering Conference (SoSE), San Antonio, TX, USA, 17–20 May 2015; pp. 256–261.
  7. Min, Y.X.; Cao, K.C.; Sheng, H.H.; Tao, Z. Formation tracking control of multiple quadrotors based on backstepping. In Proceedings of the 2015 34th Chinese Control Conference (CCC), Hangzhou, China, 28–30 July 2015; pp. 4424–4430.
  8. Lee, K.U.; Choi, Y.H.; Park, J.B. Backstepping based formation control of quadrotors with the state transformation technique. Appl. Sci. 2017, 7, 1170.
  9. Zhao, W.; Go, T.H. Quadcopter formation flight control combining MPC and robust feedback linearization. J. Frankl. Inst. 2014, 351, 1335–1355.
  10. Gulzar, M.M.; Rizvi, S.T.H.; Javed, M.Y.; Munir, U.; Asif, H. Multi-agent cooperative control consensus: A comparative review. Electronics 2018, 7, 22.
  11. Olfati-Saber, R.; Fax, J.A.; Murray, R.M. Consensus and cooperation in networked multi-agent systems. Proc. IEEE 2007, 95, 215–233.
  12. Ren, W. Consensus based formation control strategies for multi-vehicle systems. In Proceedings of the American Control Conference, Minneapolis, MN, USA, 14–16 June 2006.
  13. Kuriki, Y.; Namerikawa, T. Formation control with collision avoidance for a multi-UAV system using decentralized MPC and consensus-based control. SICE J. Control Meas. Syst. Integr. 2015, 8, 285–294.
  14. Nelson, R.C. Flight Stability and Automatic Control; WCB/McGraw Hill: New York, NY, USA, 1998; Volume 2.
  15. Roskam, J. Airplane Flight Dynamics and Automatic Flight Controls; DARcorporation: Lawrence, KS, USA, 1998.
  16. Lewis, F.L.; Vrabie, D.; Syrmos, V.L. Optimal Control; John Wiley & Sons: New York, NY, USA, 2012.
  17. MathWorks. Aerospace Toolbox Release Notes. Available online: https://www.mathworks.com/help/aerotbx/release-notes.html (accessed on 16 October 2018).
  18. MathWorks. Solve Custom MPC Quadratic Programming Problem and Generate Code. Available online: https://www.mathworks.com/help/mpc/examples/solve-custom-mpc-quadratic-programming-problem-and-generate-code.html (accessed on 25 September 2018).
Figure 1. Network structure comparison: (a) network structure adopted in Reference [13]; (b) network structure adopted in this study.
Figure 2. Thrusts, torques, quadrotor's parameters, and reference frames.
Figure 3. Formation control strategy of the leader.
Figure 4. Relative position of the leader and the follower (left); tangential coordinate and leader's velocity V_L in a horizontal plane (right).
Figure 5. Follower's control strategy.
Figure 6. Simulated quadrotor system: leader.
Figure 7. Simulink/Aerospace Blockset 6DOF block.
Figure 8. Configuration of the control system.
Figure 9. Configuration of altitude control of the leader (top) and the follower (bottom).
Figure 10. Configuration of the leader's horizontal motion control.
Figure 11. Configuration of the follower's horizontal motion control.
Figure 12. Comprehensive maneuver mission trajectory.
Figure 13. Quadrotor trajectories of the comprehensive maneuver.
Figure 14. Top view of quadrotor trajectories throughout the comprehensive maneuver.
Figure 15. Quadrotor altitude and climb rate throughout the comprehensive maneuver.
Figure 16. Quadrotor roll angle throughout the comprehensive maneuver.
Figure 17. Quadrotor pitch angle throughout the comprehensive maneuver.
Figure 18. Quadrotor yaw angle throughout the comprehensive maneuver.
Figure 19. Zoom-in of the responses of altitude and climb rate during 190 to 210 seconds.
Figure 20. Responses of pitch and roll angles from start to 55 seconds.
Figure 21. Responses of pitch and roll angles from 255 to 280 seconds.
Figure 22. Zoom-in of altitude vs. time during the upward circling motion.
Table 1. Equilibrium point and small-disturbance expression of each state and control in the horizontal motion.

| State | Equilibrium Point | Small-Disturbance Expression | Unit  |
|-------|-------------------|------------------------------|-------|
| x     | x_0 = x_Init      | Δx                           | m     |
| y     | y_0 = y_Init      | Δy                           | m     |
| u     | u_0 = 0           | Δu                           | m/s   |
| v     | v_0 = 0           | Δv                           | m/s   |
| θ     | θ_0 = 0           | Δθ                           | rad   |
| ϕ     | ϕ_0 = 0           | Δϕ                           | rad   |
| q     | q_0 = 0           | Δq                           | rad/s |
| p     | p_0 = 0           | Δp                           | rad/s |
| M_θ   | M_θ,0 = 0         | ΔM_θ                         | N·m   |
| M_ϕ   | M_ϕ,0 = 0         | ΔM_ϕ                         | N·m   |
Table 2. Equilibrium point and small-disturbance expression of each state and control in the vertical motion.

| State   | Equilibrium Point | Small-Disturbance Expression | Unit  |
|---------|-------------------|------------------------------|-------|
| z       | z_0 = z_Init      | Δz                           | m     |
| w       | w_0 = 0           | Δw                           | m/s   |
| ψ       | ψ_0 = 0           | Δψ                           | rad   |
| r       | r_0 = 0           | Δr                           | rad/s |
| M_ψ     | M_ψ,0 = 0         | ΔM_ψ                         | N·m   |
| T_total | T_total,0 = mg    | ΔT_total                     | N     |

where x_Init, y_Init, and z_Init are the initial position of the quadrotor, subscript 0 denotes the equilibrium point of the state, and Δ(·) denotes the small disturbance of the state.
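For readability, the entries of Tables 1 and 2 can be collected into small-disturbance state and control vectors about the hover equilibrium. The grouping below simply restates the table rows; the particular ordering of the components is an assumption made here for illustration.

```latex
\mathbf{x}_{h} = \begin{bmatrix} \Delta x & \Delta y & \Delta u & \Delta v &
\Delta\theta & \Delta\phi & \Delta q & \Delta p \end{bmatrix}^{\mathsf{T}}, \qquad
\mathbf{u}_{h} = \begin{bmatrix} \Delta M_{\theta} & \Delta M_{\phi} \end{bmatrix}^{\mathsf{T}},
\\[4pt]
\mathbf{x}_{v} = \begin{bmatrix} \Delta z & \Delta w & \Delta\psi & \Delta r \end{bmatrix}^{\mathsf{T}}, \qquad
\mathbf{u}_{v} = \begin{bmatrix} \Delta M_{\psi} & \Delta T_{\mathrm{total}} \end{bmatrix}^{\mathsf{T}},
\qquad T_{\mathrm{total},0} = m g .
```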
Table 3. Cases of the leader's vertical motion with desired altitude and climb rate.

| Case | Desired Altitude | Climb Rate | Vertical Motion                                          |
|------|------------------|------------|----------------------------------------------------------|
| I    | Not given        | Not given  | Maintain current altitude.                               |
| II   | Given            | Not given  | Climb to the desired altitude.                           |
| III  | Not given        | Given      | Continuous climbing with the given climb rate.           |
| IV   | Given            | Given      | Climb to the desired altitude with the given climb rate. |
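The case logic of Table 3 reduces to a simple selection on which references are supplied. The sketch below only restates the table; the function name and the boolean-flag interface are hypothetical.

```matlab
% Minimal sketch of the case selection in Table 3 for the leader's
% vertical motion. hasAlt/hasRate indicate whether a desired altitude or
% climb rate is provided; the returned label matches the table rows.
function verticalCase = selectVerticalCase(hasAlt, hasRate)
    if ~hasAlt && ~hasRate
        verticalCase = 'I:   maintain current altitude';
    elseif hasAlt && ~hasRate
        verticalCase = 'II:  climb to the desired altitude';
    elseif ~hasAlt && hasRate
        verticalCase = 'III: continuous climb at the given climb rate';
    else
        verticalCase = 'IV:  climb to the desired altitude at the given climb rate';
    end
end
```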
Table 4. Parameters of the quadrotor.

| Parameter               | Symbol | Value  | Unit   |
|-------------------------|--------|--------|--------|
| Mass of the quadrotor   | m      | 1.5    | kg     |
| Acceleration of gravity | g      | 9.807  | m/s²   |
| Quadrotor arm length    | L      | 0.225  | m      |
| Moment of inertia       | I_xx   | 0.022  | kg·m²  |
| Moment of inertia       | I_yy   | 0.022  | kg·m²  |
| Moment of inertia       | I_zz   | 0.0018 | kg·m²  |

Note: the above parameters were acquired from the quadrotor in the lab.
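For convenience, the parameters of Table 4 can be collected in a short script, together with the hover-thrust equilibrium T_total,0 = mg from Table 2. The values below only restate the tables; the per-rotor split assumes four identical rotors sharing the hover thrust equally.

```matlab
% Quadrotor parameters from Table 4 and the hover-thrust equilibrium
% from Table 2 (T_total,0 = m*g).
m   = 1.5;        % mass, kg
g   = 9.807;      % gravitational acceleration, m/s^2
L   = 0.225;      % arm length, m
Ixx = 0.022;      % moment of inertia about x, kg*m^2
Iyy = 0.022;      % moment of inertia about y, kg*m^2
Izz = 0.0018;     % moment of inertia about z, kg*m^2

T_hover     = m*g;        % total thrust at hover equilibrium, ~14.71 N
T_per_rotor = T_hover/4;  % thrust per rotor at hover (equal split), ~3.68 N
```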
