1. Introduction
The development of new high-aspect-ratio lightweight wings is now one of the main targets in aviation, with the objective of reducing fuel consumption and diminishing greenhouse gas emissions. However, these advanced wings are more susceptible to aeroelastic instabilities, such as flutter or divergence. A solution to overcome such instabilities is the implementation of active control on the wing structure, thus extending its flight envelope [
1]. Active flutter suppression (AFS) has been known for some time; see, for example, [
2,
3]. Despite its potential, the practical implementation of AFS faces several challenges. These include the need for a robust method to compute unsteady aerodynamics valid in the Laplace domain, for a control law that remains effective across the full compressible regime, and for an optimal control strategy capable of minimizing energy consumption.
As mentioned by Livne [
4], for any AFS solution to be accepted, it must be fully understood and supported by reliable tools. Consequently, there exists a pressing need for a practical and robust AFS system suitable for integration into aircraft systems. For multi-input–output systems, AFS often requires many trial designs before achieving satisfactory results. Modern state-space control design methods seem more compatible with general multi-input–output systems, and many techniques like linear quadratic Gaussian (LQG) [
5],
H∞ control [
6,
7,
8] or pole and partial or complete eigenstructures assignment [
9,
10] have proven their validity for applications with multiple-input multiple-output systems. However, when applied to active flutter suppression, these methods suffer from the need to greatly augment the states of the model in order to express the generalized aerodynamic force matrix in state-space form. These additional states are fictitious, unmeasurable and very sensitive to the modeling approximations inherent in the aerodynamic formulation used.
Linear Matrix Inequalities (LMIs) have proven useful for deriving suboptimal designs for linear control problems [
11]. Chilali and Gahinet [
12] extended the LMI approach to multiobjective output feedback synthesis. Gahinet and Apkarian [
13] used LMIs and extended the concept of H∞ controllers, replacing the Riccati equations with Riccati inequalities to parameterize the suboptimal H∞ controllers, including reduced-order variants.
For high-aspect-ratio wings, such as those targeted in this work, strip theory is a very valuable procedure for determining their stability and control. To apply this theory, a well-established and validated airfoil section method is required. Among others, some recent publications like Darabseh et al. [
14] and Micheli [
15] have considered, for an airfoil section in incompressible flow, the active flutter suppression by a linear quadratic Gaussian optimal control [
14] and the influence of the actuator saturation in [
15]. In this study, the compressibility effects on the unsteady aerodynamic forces will be taken into account for a wing section with three degrees of freedom. The aerodynamics fully valid in the Laplace domain presented in [
16] will be used, and different procedures to determine the optimal control law will be investigated. It will be shown that the
H∞ controllers provide the best results in terms of stability for all the cases presented.
The paper’s organization is as follows:
Section 2 presents the equations of motion for the state-space model; in
Section 3, a brief summary of the development of the standard compensator for system stabilization is given;
Section 4 introduces the
H∞ controller and discusses its internal stability;
Section 5 presents the results obtained in the full Mach number regime; and
Section 6 offers the concluding remarks for this work.
2. Equations of Motion and State-Space Modelization
The active control of a three-degree-of-freedom (dof) wing section (
Figure 1) is considered. The control of the system is performed through the control surface (variable
). The dynamic equation of the system in the Laplace domain, as derived in Equations 2.1 and 2.21 of Edwards’ work [
17], is
where
,
is the nondimensional mass ratio and
are the structural degrees of freedom; the structural inertia, damping and rigidity matrices are, after nondimensionalization, given as
where
and
G is the gain matrix given by
accounts for the unsteady aerodynamic loads matrix obtained in the Laplace plane, and
u is the external moment applied to the control surface. In this work, the aerodynamic forces are valid in the complete Laplace plane since they are not determined by an approximation or curve fitting of the unsteady aerodynamic forces in the frequency domain. This is a major advantage of the proposed method since, as pointed out by several investigators [18], curve-fit approximations may have questionable applicability away from the imaginary axis of the Laplace plane. In Ref. [
16], a detailed description of how these aerodynamic forces have been calculated in the subsonic, transonic and supersonic linear flows can be found. For the subsonic flow, a collocation method is developed; for the flow at the sonic Mach number, the solution of Rott [
19] has been generalized to the Laplace domain; and for supersonic flows, the solution of Garrick applied to the Laplace plane by [
17] has been used. The generalized force matrix can then be expressed in a uniform manner as a function of the nondimensional Laplace variable
and of the Mach number as
, where
is the state for
,
and
.
Then, Equation (
1) in the nondimensional Laplace variable
s can be expressed as
In order to manipulate the system and to obtain an adequate control law, the system’s equations of motion must be transformed to the state-space. For the control law evaluation, a Rational Function Approximation (RFA) methodology will be employed, following Roger’s model [
20]. It must be noted that the use of RFAs introduces inherent inaccuracies in the model since the data fitting may not be exact. Although RFA has been widely used for control law evaluations, recently, a new method based on tangential interpolation [
21] has also proved to be a valid approach instead of the RFA. In this work, however, the RFA will only be used to compute the control law, and the exact dynamic equation will be applied when presenting the resultant behavior in a closed-loop configuration, which is one of the main advantages of the proposed method.
Roger’s approximation contains a number of common lags, representing the aerodynamic lag of the system. Its usual expression is presented by Karpel [
3] as
As the model’s order grows linearly with the number of lags, it is not advisable to select a high number of lag states. Typically,
is enough for most applications, as suggested by Karpel. When represented in matrix form, this RFA can be conveniently expressed as
where
R contains the lags of the system (common for all the degrees of freedom) and
D acts as a summation matrix.
To determine the matrices
,
,
and
E, an adequate number of generalized aerodynamic force matrices to fit the model must be provided. A conventional approach involves computing these loads under the assumption of simple harmonic oscillations, i.e., pure imaginary reduced frequencies. However, by incorporating data from across the Laplace plane, the model is expected to increase its accuracy far from the imaginary axis. These aerodynamic loads serve as the foundation for fitting the model through the use of a least squares method [
22]. Additionally, the incorporation of Lagrange multipliers enables the imposition of the steady load values, with the flexibility to introduce additional constraints if necessary, such as the frequency derivative of the loads in the proximity of zero frequency. This condition can only be applied in the subsonic regime since, at the sonic Mach number, there exists a singularity in the aerodynamic loads as the frequency tends to zero; see also [19] for the sonic case under harmonic oscillations.
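As an illustration of this fitting step, the following Python sketch performs a plain linear least-squares fit of Roger's coefficient matrices for a set of prescribed lag values; the Lagrange-multiplier constraints on the steady loads described above are omitted, and all function and variable names are illustrative rather than taken from the actual implementation.

```python
import numpy as np

def fit_roger_rfa(s_samples, Q_samples, lags):
    """Unconstrained least-squares fit of Roger's RFA with fixed lag roots.

    Q(s) ~ A0 + A1*s + A2*s**2 + sum_i A_{2+i} * s/(s + lags[i])

    s_samples : (m,) complex Laplace points (may lie off the imaginary axis)
    Q_samples : (m, n, n) tabulated generalized aerodynamic force matrices
    lags      : (nl,) prescribed positive lag values
    Returns the list of real coefficient matrices [A0, A1, A2, A3, ...].
    """
    s = np.asarray(s_samples)
    m, n = s.size, Q_samples.shape[1]
    # Basis functions of Roger's approximation evaluated at every sample point
    basis = np.column_stack([np.ones(m), s, s**2] + [s / (s + g) for g in lags])
    # Stack real and imaginary parts so that the fitted coefficients stay real
    B = np.vstack([basis.real, basis.imag])
    A = np.zeros((basis.shape[1], n, n))
    for j in range(n):
        for k in range(n):
            rhs = np.concatenate([Q_samples[:, j, k].real, Q_samples[:, j, k].imag])
            A[:, j, k], *_ = np.linalg.lstsq(B, rhs, rcond=None)
    return [A[i] for i in range(A.shape[0])]
```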
In order to validate the RFA, the approximate loads are compared with the exact ones in
Figure 2. The loads are computed along three branches in the form
. For the fitting of the RFA,
is used, and values of the complex Laplace variable
s are computed within the region of interest. It is noteworthy that the fitting process is performed based on the loads obtained in the full Laplace plane and not only along the imaginary axis. The results reveal a close match between the loads from the RFA and the exact results. However, the accuracy of the approximation significantly depends on the selected values of
s used when moving away from the imaginary axis.
Once the coefficients of the RFA are obtained, the equations of the augmented aerodynamic states
, which are based on the lags within the RFA, can be expressed as a function of the system states
through a linear relation with the
s variable. The aerodynamic states can then be defined as follows:
where
R and
D were obtained directly from Equation (
6), with
being the derivatives of the system states.
Subsequently, the aerodynamic loads can be expressed as
By incorporating these aerodynamic loads into the dynamic system and reorganizing the matrices, the resulting state equation takes the form
where
Here, the system matrices are defined in terms of the airspeed and the Mach number, considering that the RFA has also been computed for the same Mach number. These matrices collectively represent the system dynamics for a given operating condition.
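To make this assembly concrete, the sketch below builds the open-loop state matrix from the structural matrices and Roger's coefficients under one common sign and nondimensionalization convention; the actual nondimensional form used in the paper may differ, and all names (including the lag-state convention) are assumptions of this sketch.

```python
import numpy as np

def assemble_aeroelastic_ss(Ms, Cs, Ks, A_rfa, lags, V, b, rho):
    """Open-loop aeroelastic state matrix from Roger's RFA (one common convention).

    State vector: x = [xi, xi_dot, xa_1, ..., xa_nl], with lag states
    xa_i' = xi_dot - (V/b)*g_i*xa_i and aerodynamic load
    Q = q*(A0 xi + (b/V) A1 xi_dot + (b/V)^2 A2 xi_ddot + sum_i A_{2+i} xa_i).
    """
    n = Ms.shape[0]
    q = 0.5 * rho * V**2                          # dynamic pressure
    Mae = Ms - q * (b / V) ** 2 * A_rfa[2]        # aerodynamic "apparent mass"
    Cae = Cs - q * (b / V) * A_rfa[1]
    Kae = Ks - q * A_rfa[0]
    Minv = np.linalg.inv(Mae)

    nl = len(lags)
    A = np.zeros((n * (2 + nl), n * (2 + nl)))
    A[:n, n:2 * n] = np.eye(n)                    # xi' = xi_dot
    A[n:2 * n, :n] = -Minv @ Kae
    A[n:2 * n, n:2 * n] = -Minv @ Cae
    for i, g in enumerate(lags):
        blk = slice(n * (2 + i), n * (3 + i))
        A[n:2 * n, blk] = Minv @ (q * A_rfa[3 + i])   # lag-state forcing on xi_ddot
        A[blk, n:2 * n] = np.eye(n)                   # xa_i driven by xi_dot
        A[blk, blk] = -(V / b) * g * np.eye(n)        # exponential lag decay
    return A
```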
Gathering these equations in matrix form, the resulting state-space model possesses a total of
states (with
typically being three), and it is expressed as
The expanded states, together with the initial vector
and its nondimensional derivatives, form a new state vector,
x. The equation determining the internal dynamics of the state-space system is then expressed as
This time-invariant system is validated by comparing the root locus of a typical airfoil, obtained by using the p-method solution of Equation (4), with the poles of the state-space model obtained from the RFA-based Equation (10), as illustrated in
Figure 3. The specific airfoil parameters are detailed in
Table 1.
In this context, represents the nondimensional distance from the center of mass to the elastic axis, represents the nondimensional radius of gyration of the airfoil, represents the nondimensional distance from the center of mass of the control surface to the hinge axis and stands for the nondimensional radius of gyration of the control surface. Additionally, corresponds to the nondimensional location of the elastic axis, to the nondimensional separation between the elastic axis and the hinge axis of the control surface and denotes the nondimensional damping coefficient of the control surface mode.
In
Figure 3, it can be observed that for the subsonic regime at
, our state-space analysis closely matches the results derived from the exact solution, expressed in the Laplace plane (Equation (4)), obtained previously in [16]. However, for higher Mach numbers, we notice a mismatch, mainly in the aileron mode. Although the RFA used here considers values off the imaginary axis, some differences with respect to the exact p-method are still observed. If the fitting were made by taking only points along the imaginary axis, an even larger difference would occur with respect to the exact solution given by the p-method. Therefore, despite these small discrepancies, we proceed with our calculations, as the plunging and torsion modes align well with our model.
3. Standard Compensator
In order to stabilize the system, the implementation of a control law based on the state-space system is required. Various control strategies have been studied theoretically or implemented for active flutter suppression, including proportional–integral–derivative (PID) control, pole placement, the Linear Quadratic Regulator (LQR) method and Kalman filters [
23], among other options. In this work, the first approach will be to study the stabilization of the system by the use of pole placement and the LQR method.
The control loop can be split into two essential parts, each serving a specific role in the overall control strategy. First, we have the feedback gain matrix, known as K, which is responsible for generating the control input u. This input relies on the estimated full state vector, which represents the internal states. These states are estimated by using a linear observer, which uses the system’s output vector y to track the system evolution over time. The separation principle states that the regulator and the estimator can be designed separately, simplifying the control system design. The implementation of an observer together with a regulator is often referred to as a compensator.
In this section, a control law will be derived based on the previously presented state-space system. First, the state-space system will be completed. Subsequently, an examination of the system’s controllability and observability will be undertaken. Then, an observer and a regulator will be separately obtained. Finally, the effectiveness and efficiency of the resulting control law will be assessed.
3.1. Completion of the State-Space Model
The state-space system presented in Equation (
10) must be completed since it only has information on the states of the system and its inputs. To accomplish this, the output matrix
C is defined here. In our particular system, the moment applied to the control surface cannot be measured directly, so the direct transmission matrix of the system,
D, is set to zero. Hence, the output equation is
As established earlier, the system’s states include both physical and aerodynamic states. Only the physical states (
h,
α, β and their derivatives) can be measured directly; therefore, the output matrix is defined as
In this case, it is assumed that we are measuring all the physical states of the system. In practice, one should include here only the states that can be measured by using the available sensor configuration.
Gathering the state-space matrices, a linear time-invariant system is obtained, represented through the standard formulation
3.2. Controllability Matrix
Given the high number of states (
) compared to the number of control inputs (1), it is possible that the system is not fully controllable. The controllability matrix [
24] is defined as
with
n being the order (i.e., the number of states) of the matrix A.
By definition, a system is considered controllable if the rank of its controllability matrix equals the order (number of states) of the system. As expected, the investigated airfoils are uncontrollable. Nevertheless, an uncontrollable system may still be stabilizable. This can be achieved by breaking down the system into controllable and uncontrollable subsystems; provided that the uncontrollable one is stable, a control law can be derived that stabilizes the full system. The decomposition is carried out hereafter.
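A direct numerical check of this controllability test (a sketch with illustrative names) is:

```python
import numpy as np

def controllability_rank(A, B):
    """Build P = [B, AB, ..., A^(n-1)B] and compare its rank with the system order n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    P = np.hstack(blocks)
    rank = np.linalg.matrix_rank(P)
    return P, rank, rank == n        # controllable iff rank == n
```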
3.3. Decomposition into Controllable and Uncontrollable Subsystems
Let
P be the controllability matrix of the system and assume
. An invertible matrix
can be found that transforms the system into its echelon form:
Let
be the new states, with
and
being the controllable and uncontrollable states. The state-space system then becomes
The resulting matrices
,
have the form
The system
can be controlled independently from
since
Now, a control law can be developed that feeds the controllable states back to the input, following the expression
. The control matrix
can be obtained for the subsystem (
,
) via pole placement or any other method. To apply the resulting matrix to the original system, the similarity transformation is reversed:
3.4. Regulator Implementation
In order to achieve the desired behavior of the system, the pole-placement method is introduced. This technique is based on the canonical transformation of the system, with its algorithm for MIMO systems presented by Nguyen [
25,
26].
The application of this algorithm yields a feedback matrix tailored to the controllable subsystem. Following Equation (
20), the resulting control law takes the full state as an input:
However, conventional pole placement results in numerical instabilities during the algorithm implementation. While such instabilities may not significantly impact systems with a limited number of states, the large number of aerodynamic states created in this model prevents an exact placement of the poles. Valášek and Olgac [
27] presented a modified approach that minimizes these errors.
The primary challenge in this method lies in the arbitrary selection of the target poles, which often overlooks the significance of the controller’s own poles. The consequences of this choice will be shown later.
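For reference, a sketch of this step using the robust pole-placement routine available in SciPy, applied to the controllable subsystem returned by the decomposition sketch above and mapped back with the orthogonal transform, could read as follows; the target pole list is an input of the user, exactly as discussed in the text.

```python
import numpy as np
from scipy.signal import place_poles

def regulator_gain(Abar, Bbar, U, r, target_poles):
    """Pole placement on the controllable block, then mapping u = -K x back to the
    original coordinates (target_poles holds r desired closed-loop eigenvalues)."""
    res = place_poles(Abar[:r, :r], Bbar[:r], target_poles)
    Kc = res.gain_matrix                                   # u = -Kc z_c
    K = np.hstack([Kc, np.zeros((Kc.shape[0], Abar.shape[0] - r))]) @ U.T
    return K                                               # u = -K x
```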
3.5. Linear Observer Implementation
A linear observer, often referred to as a state observer or estimator, mimics the behavior of the system by using available sensor measurements and knowledge of the system’s dynamics. Since the model is a representation of reality, it will not be exact due to nonlinearities in the original model or to modeling inaccuracies. To amend these differences, the observer is corrected with the available system output. The state equation governing the observer’s behavior is as follows:
with
L being the observer gain matrix.
Since the estimated output is available from the second equation, the system can be expressed as
The observer’s dynamics are given by the eigenvalues of the matrix A - LC. The observer gain matrix can then be obtained by pole placement or other methods in a similar manner to the regulator gain, replacing A and B with Aᵀ and Cᵀ (the dual problem). It is essential that the observer’s poles are positioned significantly farther from the imaginary axis than those of the regulator so that the dynamics of the estimated states are faster than those of the regulator. A common practice involves placing the observer poles 10 times farther than those of the regulator.
In this particular case, the matrix L will be computed by the use of a Linear Quadratic Regulator (LQR). The primary objective of the LQR is to determine the adequate control inputs that minimize a quadratic cost function. This cost function typically includes terms related to the state of the system and the control input. Consequently, the primary challenge is to find an equilibrium between control effort and performance trade-offs.
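A compact sketch of this observer design via the dual (filter-type) Riccati equation, with identity weighting matrices as in the text, is given below; the weight names are illustrative.

```python
import numpy as np
from scipy.linalg import solve_continuous_are

def observer_gain(A, C, Q=None, R=None):
    """Observer gain L from the dual LQR problem (identity weights by default).

    Solves A P + P A' - P C' R^-1 C P + Q = 0 and sets L = P C' R^-1, so that the
    estimation-error dynamics are governed by A - L C.
    """
    n, p = A.shape[0], C.shape[0]
    Q = np.eye(n) if Q is None else Q
    R = np.eye(p) if R is None else R
    P = solve_continuous_are(A.T, C.T, Q, R)     # dual Riccati equation
    return P @ C.T @ np.linalg.inv(R)
```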
3.6. Closed-Loop Dynamics
Once the observer gain matrix L and the regulator matrix K have been determined, the dynamics of the state-space model are completed, and a control law that ensures system stability can be derived.
Mapping the regulator inputs to the estimated state and incorporating the feedback gain into Equation (
23) leads to the following equation:
where
represents information obtained from the original plant. Combining the observer dynamics with the primary state-space model, an extended representation of the state-space model governing the closed-loop system is obtained and can be expressed as
The controller can now be derived from this extended system. Based on Equation (
24), the control law is given as the transfer function:
This is the control law that stabilizes the state-space model. It relies only on measurable outputs as well as the system’s reduced Laplace variable. Its analysis in the Laplace domain and its practical implementation for an airfoil system can be carried out following the schematic block diagram depicted in
Figure 4.
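For completeness, the transfer function of such an observer-based compensator can be evaluated at any Laplace point as in the sketch below (standard textbook form; sign conventions may differ from the paper's block diagram).

```python
import numpy as np

def compensator_tf(A, B, C, K, L, s):
    """Evaluate u(s)/y(s) = -K (sI - A + BK + LC)^{-1} L for the observer-based
    compensator u = -K xhat, xhat' = A xhat + B u + L (y - C xhat)."""
    n = A.shape[0]
    Acl = A - B @ K - L @ C
    return -K @ np.linalg.solve(s * np.eye(n) - Acl, L)
```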
3.7. Specific Control Law and Results
To proceed with the actual implementation of the control law, the parameters for the pole placement and the LQR observer must be configured. To ensure adequate system stability, a stability margin is defined to prevent the poles from approaching the imaginary axis too closely. The stable poles in the closed loop are positioned at the same locations as the open-loop branches, while those on the unstable branches are adjusted to maintain the specified stability margin. For the weight and cost matrices of the LQR controller, identity matrices are selected. For improved performance, the results can be tuned with these matrices.
To circumvent the potential inaccuracies arising from the aerodynamic model approximation, the p-method is employed to compute the pole locations. The control law has been computed based on the extended system, as illustrated in Equation (
25). However, to make it applicable to the original model, it must be incorporated into Equation (
4). Considering that the control law relies on the original states
and their derivatives, the following matrix is included in the feedback gain:
Consequently, the dynamic equation for the system including the control law takes the form
with the feedback matrix defined as
where the
matrix is the exact form of the aerodynamic solution without further approximation.
Figure 5 shows the analysis for the root locus of the closed-loop system for the airfoil parameters given in
Table 1. The control law is activated when an open-loop pole approaches the stability margin, effectively maintaining system stability at every airspeed. The originally unstable branch remains in the left half-plane, maintaining a stability margin from the imaginary axis. Furthermore, the system’s branches remain continuous, ensuring the smooth operation of the control law across all velocities.
Figure 6 shows the same results, focusing on the rest of the poles from the state-space system. As anticipated, new branches emerge in the left half-plane, relating to the compensator poles, whose characteristics are directly influenced by the chosen control law parameters. This inherent dependence on parameter selection highlights a key limitation of this approach, as the resulting system dynamics are arbitrarily set by these choices.
4. H-Infinity Controller
In the preceding section, the controller derived exhibited a degree of arbitrariness due to the influence of the chosen weighting matrices within the Linear Quadratic Regulator (LQR) framework, as well as the arbitrary location of the poles. This section aims to procure an optimal controller to overcome these issues.
For this purpose, it is imperative to establish a clear definition of optimality. The primary objective herein is to formulate a controller that accomplishes system stabilization while minimizing energy consumption. Additionally, the optimal controller should demonstrate efficiency across a broad spectrum of frequencies, obviating the need for frequency-specific tuning. With these objectives in mind, the H∞ norm of a system is introduced here.
The H∞ (H-infinity) norm is a fundamental concept in control theory and signal processing. It serves as a critical measure of the frequency-domain performance of a system, offering insights into its robustness and stability characteristics. The H∞ norm quantifies the maximum gain from a given input to the system’s output across all frequencies, making it a powerful tool for assessing a system’s susceptibility to external disturbances and uncertainties. In essence, it provides a means to evaluate the worst-case behavior of a system, which is essential for designing robust control systems and ensuring that they perform reliably under a variety of real-world conditions. Moreover, H∞ control does not require a precise model of the system. It can accommodate modeling errors and uncertainties, making it suitable for real-world systems where accurate modeling can be challenging.
In the upcoming section, we commence by exploring the computation of the
H∞ norm. We then shift our focus to the aggregate plant, where we consider a multidimensional system encompassing various inputs and outputs, emphasizing the minimization of the H∞ norm for regulated outputs. Next, we investigate suboptimal H∞ controller design, highlighting an iterative approach and the advantages of closed-form solutions. Finally, we address the issue of strong stabilization, ensuring controller stability in the presence of various factors, and present a methodology for achieving internal stability. Together, these sections provide a comprehensive overview of the application of H∞ control to flutter alleviation systems, emphasizing the pursuit of optimal system performance while maintaining stability and reliability. Applications of this method for AFS can also be found in the work of Waitman and Marcos [
28].
4.1. Computation of the H∞ Norm
As has been stated, the
H∞ norm measures the worst-case behavior of a system. This assessment is achieved through the computation of the maximum singular value, across the entire frequency domain, of a given transfer function. Consequently, the H∞ norm is formally expressed as
Traditionally, this computation would include a Bode plot, in which the maximum response of the system would be obtained. However, Boyd, Balakrishnan and Kabamba [
11] introduced an algorithmic approach for the expedient numerical estimation of the
H∞ norm. The fundamental steps of this algorithm are as follows:
Select an initial guess γ for the H∞ norm of the system.
Define the Hamiltonian matrix H(γ), built from γ and the state-space plant matrices A, B, C and D.
Check whether H(γ) has eigenvalues on the imaginary axis. If it does not, γ is an upper bound of the norm and is subsequently decreased. Otherwise, γ is increased.
Repeat. The calculation finishes when the upper and lower bounds converge to the H∞ norm.
This method enables a more systematic and mathematically rigorous approach, reducing the reliance on graphical techniques and facilitating the optimization of systems that demand stringent performance criteria.
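A minimal implementation of this bisection, written for a stable plant with D = 0 (so that the Hamiltonian takes its simplest form), is sketched below; the general case with a nonzero D term follows the same pattern.

```python
import numpy as np

def hinf_norm_bisection(A, B, C, gmin=1e-6, gmax=1e6, tol=1e-4):
    """Estimate the H-infinity norm of G(s) = C (sI - A)^{-1} B (A stable, D = 0):
    ||G||_inf < gamma iff H(gamma) below has no purely imaginary eigenvalues."""
    def imaginary_axis_eig(gamma):
        H = np.block([[A, (B @ B.T) / gamma],
                      [-(C.T @ C) / gamma, -A.T]])
        eig = np.linalg.eigvals(H)
        return np.any(np.abs(eig.real) < 1e-8 * np.maximum(1.0, np.abs(eig)))

    while gmax - gmin > tol * max(gmin, 1e-12):
        gamma = 0.5 * (gmin + gmax)
        if imaginary_axis_eig(gamma):
            gmin = gamma         # gamma is below the norm: raise the lower bound
        else:
            gmax = gamma         # gamma is an upper bound: lower the upper bound
    return 0.5 * (gmin + gmax)
```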
4.2. Aggregate Plant
The H∞ control method considers not only the closed-loop system’s stability but also aims to minimize the control effort and the estimation errors. Hence, the system inputs and outputs must be expanded to account for these variables.
For this purpose, the system is represented as a nine-matrix system, resulting in the creation of an aggregate plant with an augmented set of inputs and outputs. The expanded plant includes the following categorizations for inputs and outputs:
Exogenous inputs w: unmodelled dynamics, sensor noise and tracking signal.
Actuator inputs u: effect of actuators.
Regulated outputs z: states to be minimized.
Sensed outputs y: states that can be measured.
The dynamic equations governing the relationships among these variables are expressed as
Since the objective is to minimize the effect of the exogenous inputs on the regulated outputs, the exogenous inputs must include not only potential noise and inaccuracies but also account for the actuator inputs. In the context of our airfoil system, the aggregate plant, when realized in its nine-matrix representation, takes the form
The stabilization of the aggregate plant is achieved through the incorporation of a feedback loop from the sensed outputs to the actuator inputs. In the closed-loop configuration, the primary objective of this section is to minimize the H∞ norm of the transfer function from the exogenous inputs to the regulated outputs.
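One possible assembly of such a nine-matrix aggregate plant for the airfoil model, with scalar weights standing in for the actual weighting choices (all names and weight structures here are illustrative assumptions, not the paper's plant), is:

```python
import numpy as np

def aggregate_plant(A, B, C, w_state=1.0, w_input=1.0, w_noise=1.0):
    """Nine-matrix aggregate plant: w = [input disturbance; sensor noise],
    z = [weighted states; weighted control input], y = measured outputs."""
    n, m = B.shape
    p = C.shape[0]
    B1 = np.hstack([B, np.zeros((n, p))])                 # disturbance enters like u
    B2 = B
    C1 = np.vstack([w_state * np.eye(n), np.zeros((m, n))])
    D11 = np.zeros((n + m, m + p))
    D12 = np.vstack([np.zeros((n, m)), w_input * np.eye(m)])
    C2 = C
    D21 = np.hstack([np.zeros((p, m)), w_noise * np.eye(p)])
    D22 = np.zeros((p, m))
    return A, B1, B2, C1, C2, D11, D12, D21, D22
```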
4.3. Suboptimal Controller
Various methodologies for computing an
H∞ controller are available within the control theory domain. Notably, Gahinet and Apkarian [
13] presented a method for obtaining an optimal
H∞ controller with the introduction of Linear Matrix Inequalities (LMIs). This optimization framework relies on Semidefinite Programming (SDP) and is solved through the utilization of convex optimization and Interior-Point optimization algorithms [
29,
30]. Nevertheless, the original method lacks the flexibility to incorporate specific constraints or limitations. Addressing this limitation, Chilali and Gahinet [
12] included the use of pole placement constraints, which can be employed, for instance, to restrict the velocity of the resultant controller. Furthermore, Scherer, Gahinet and Chilali [
31] expanded the method by enabling the incorporation of multiobjective output-feedback optimization.
However, the direct optimization approach using LMIs is considered less suitable for use in this work. Firstly, the resulting controller is not guaranteed to possess internal stability, which is a crucial attribute for controller performance. Secondly, the computation of suboptimal controllers admits a closed-form solution, which renders it a more time-efficient strategy. Consequently, in this case, it is more advantageous to first derive suboptimal controllers and subsequently search for the optimal one.
The objective of suboptimal
H∞ controllers will be to find all admissible controllers K such that the closed-loop H∞ norm remains below a given arbitrary scalar γ. Doyle et al. [
32] present a method to compute this controller based on the solution to two Algebraic Riccati Equations (AREs). The algorithm is presented hereafter.
For a given
, define the following Hamiltonian matrices:
An admissible controller such that the closed-loop H∞ norm remains below γ exists if and only if the following conditions hold:
- i
H∞ ∈ dom(Ric) and X∞ = Ric(H∞) ≥ 0.
- ii
J∞ ∈ dom(Ric) and Y∞ = Ric(J∞) ≥ 0.
- iii
ρ(X∞Y∞) < γ².
with Ric(·) denoting the Riccati operator, ρ(·) the spectral radius, ≥ 0 positive semidefiniteness and dom(Ric) requiring that the corresponding Hamiltonian have no eigenvalues on the imaginary axis.
If the three conditions above are fulfilled, then the suboptimal controller is defined as
where
In order to find an optimal controller, an iterative procedure in γ is implemented, searching for its minimum value. The implementation of this control strategy is computationally efficient since the solution to the Riccati equations is constructed directly from the eigenvectors of the two Hamiltonian matrices. However, it must be noted that the proof of this algorithm does not ensure the internal stability of the controller. Remarkably, in the cases studied in this paper, the controller obtained is internally unstable, i.e., some poles of the controller lie in the right half-plane. The stabilization of the controller is considered next.
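The sketch below reproduces this construction for a single value of γ under the standard regularity and normalization assumptions of [32] (D11 = 0 and normalized D12, D21); an outer loop decreasing γ until one of the three conditions fails yields the near-optimal value. The aggregate plant of the previous section would generally need to be scaled to satisfy these assumptions.

```python
import numpy as np
from scipy.linalg import schur

def ric(H):
    """Riccati operator: X = Ric(H) from the stable invariant subspace of the
    Hamiltonian H (ordered real Schur form)."""
    _, Z, _ = schur(H, output='real', sort='lhp')
    n = H.shape[0] // 2
    X1, X2 = Z[:n, :n], Z[n:, :n]
    return np.linalg.solve(X1.T, X2.T).T          # X = X2 X1^{-1}

def hinf_suboptimal(A, B1, B2, C1, C2, gamma):
    """Central suboptimal controller for one gamma; returns (Ak, Bk, Ck) with
    xk' = Ak xk + Bk y, u = Ck xk, or None if the three conditions fail."""
    n = A.shape[0]
    Hinf = np.block([[A, gamma**-2 * B1 @ B1.T - B2 @ B2.T],
                     [-C1.T @ C1, -A.T]])
    Jinf = np.block([[A.T, gamma**-2 * C1.T @ C1 - C2.T @ C2],
                     [-B1 @ B1.T, -A]])
    X, Y = ric(Hinf), ric(Jinf)
    if np.any(np.linalg.eigvalsh((X + X.T) / 2) < -1e-9):      # X >= 0 ?
        return None
    if np.any(np.linalg.eigvalsh((Y + Y.T) / 2) < -1e-9):      # Y >= 0 ?
        return None
    if np.max(np.abs(np.linalg.eigvals(X @ Y))) >= gamma**2:   # rho(XY) < gamma^2 ?
        return None
    F = -B2.T @ X
    L = -Y @ C2.T
    Z = np.linalg.inv(np.eye(n) - gamma**-2 * Y @ X)
    Ak = A + gamma**-2 * B1 @ B1.T @ X + B2 @ F + Z @ L @ C2
    return Ak, -Z @ L, F
```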
4.4. Strong Stabilization
As has been stated, one notable concern with the
H∞ approach is the fact that internal stability is not guaranteed [33]. The concept of internal stability remains crucial when implementing such controllers, as any deviation could lead to numerical errors and erratic controller behavior. Consequently, assuring the stability of an H∞ design is imperative and demands independent verification.
The foundation of this control strategy rests on the distinct and unique stabilizing solution to the Riccati equations, resulting in what is commonly referred to as the “central controller”. Nevertheless, a family of admissible controllers that fulfill the condition
exists. Doyle [
32] obtains them through the application of a linear fractional transformation (LFT [
34]) between the central controller and an arbitrary matrix
. The central controller has the following form:
Based on this family of controllers, several authors have dealt with this problem. Campos-Delgado and Zhou [
35] adopted an iterative approach to stabilize the original controller, creating a series of nested feedback loops. In this case, the main problem lies in the order of the resulting controller, since with each control loop, the order increases linearly. Most authors propose a solution based on coprime factorization [
36,
37].
Here, the approach proposed by Zeren and Özbay [
38], which is built around the solution of two additional Algebraic Riccati Equations (AREs), will be applied.
For the existence of a stabilizing controller, the following conditions are necessary:
- i
is stabilizable and is detectable.
- ii
has no eigenvalues on the imaginary axis.
Provided these conditions are fulfilled, the stabilizing solution
X to the following ARE is computed:
Then, a stabilizing controller exists if a stabilizing solution exists for the ARE:
with
being an arbitrary scalar and
Under this sufficient condition,
is a strongly stabilizing controller with
, where
Since Equation (
36) states that the subcontroller must have an H∞ norm below
, then the calculation of the subcontroller
is an iterative procedure, starting with
, with
being subsequently decreased so as to find an optimal subcontroller. In the case that for
a solution to the aforementioned AREs cannot be reached, it can be concluded that a stabilizing controller does not exist.
With this additional feedback loop, the final state-space system of the controller corresponds to the linear fractional transformation
. Its state-space form is depicted as follows:
Following the same procedure as in the case of the compensator, the controller system is expressed in its transfer function form. This is required in order to be able to apply the p-method in the Laplace plane. The control law is then
Equations (
27) and (
28) are then used to obtain the dynamic equation of the system in closed loop, replacing the compensator control law with the one presented here.
4.5. Specific Control Law and Results
Once again for the parameters given in
Table 1, the control procedure is tested on the airfoil in a compressible regime.
Figure 7 shows the results. It can be seen that the branch corresponding to the plunging mode is stabilized before approaching the imaginary axis, averting any potential flutter issues. It is interesting to mention that after mitigating flutter, the system not only remains stable but also moves away from the imaginary axis.
In
Figure 8, we present the system’s pole distribution across the complete Laplace plane. A set of new branches emerges, positioned significantly distant from the imaginary axis, as in the previous case. Unlike the compensator scenario, the closed-loop poles in this case prove to be optimal and do not depend on arbitrary parameters.
5. Results for a Three Dof Airfoil
Following the results presented above for the closed-loop dynamics of the airfoil given in
Table 1, a comparison of the two methods employed to obtain the control law is now presented. The effectiveness and efficiency of each controller are assessed through the H∞ norm of the closed-loop transfer function from the exogenous inputs to the regulated outputs. It is essential to underscore that these regulated outputs encompass not only the system’s states but also its inputs. Hence, a higher norm implies a higher control effort from the actuator, which, in turn, may cause mechanical wear on the system’s actuators.
Figure 9 shows this comparison. The
H∞ norm of the system in the case of the compensator is nearly two orders of magnitude higher than that of the H∞ controller, thereby showing the efficiency of the H∞ approach.
With respect to the dynamics of the control law,
Figure 10 shows the root locus of the two controllers. In both cases, the resulting controllers are stable, a prerequisite for the practicable deployment of the control loop, as previously highlighted. However, it is pertinent to acknowledge that the natural frequencies of the
H∞ controller’s poles are lower than those of the compensator. This implies that the H∞ controller does not require internal dynamics as fast as those of the compensator, which is a positive point for its implementation (faster controllers are usually more expensive due to the need for high-speed processors and precision components to achieve rapid control responses).
Once the viability of this implementation has been shown, it will be applied to a system in a compressible regime. It is important to highlight the fact that even though the control law was obtained by using a state-space modelization of the system, it is applied to the exact dynamic equations and computed via the p-method. This ensures that the results will be as close to reality as possible.
Figure 11 shows the root locus for the aforementioned airfoil, operating within subsonic, transonic and sonic regimes. Similar to the previous results, the controller is able to stabilize the system without any problem. In this case, we observe the influence of the controller on the torsion branch, even though it has little effect on its stability.
In the examination of the
H∞ norm for this closed-loop system, a noticeable dispersion in the results becomes apparent (
Figure 12). This dispersion stems from the process of computing the state-space model for the airfoil, which involves determining the lags in Roger’s RFA through an optimization procedure. These lags are inherently dependent on the Mach number. While the approximate model closely mimics the exact one, it is essential to acknowledge that numerical optimization is executed for each specific Mach number. Consequently, the obtained lags exhibit nonsmooth variations. One potential avenue to address this issue would involve the development of a regression model for the lags to ensure a smoother transition, though this solution requires further investigation to comprehensively consider all the contributing factors.
Next, to assess the method’s suitability in a supersonic regime, we introduce a different airfoil configuration, characterized by the parameters delineated in
Table 2. A sweep in the Mach number up to
is performed, with
m/s. The behavior of both the plunging and torsion modes is modified, as shown in
Figure 13. Initially, the lower branch approaches the imaginary axis, subsequently reversing its direction. As the Mach number increases, all the modes seem to increase their stability. Regarding its performance,
Figure 14 shows that the
H∞ controller’s performance also exceeds that of the compensator in this regime.
In order to exemplify the efficacy of the
H∞ method, we examine the behavior of the airfoil detailed in
Table 1 in the time domain, specifically in the sonic regime with
m/s. A numerical simulation is initiated with a minor perturbation in the torsional degree of freedom, and the equations are integrated by using the Crank–Nicolson method with a time step of
ms.
Figure 15 illustrates the evolution of the physical states of the airfoil. The open-loop system exhibits clear instability at this speed, surpassing the flutter velocity. As anticipated, upon closing the feedback loop, the system stabilizes and the oscillations of the states converge. Notably, the
H∞ controller demonstrates faster damping of the oscillations compared to the compensator. To measure its performance, we introduce the settling time, defined as the time after which the oscillation amplitudes remain confined within
of the initial perturbation. The compensator requires
s to settle, while the
H∞ controller achieves stabilization in just
s.
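A sketch of the time-marching scheme used for these simulations, applied to a generic closed-loop state matrix (the matrix itself and the time step are inputs; names are illustrative), is:

```python
import numpy as np

def crank_nicolson_lti(A, x0, dt, n_steps):
    """Crank-Nicolson integration of x' = A x from the initial perturbation x0."""
    n = A.shape[0]
    I = np.eye(n)
    # (I - dt/2 A) x_{k+1} = (I + dt/2 A) x_k  -> precompute the transition matrix
    step = np.linalg.solve(I - 0.5 * dt * A, I + 0.5 * dt * A)
    X = np.empty((n_steps + 1, n))
    X[0] = x0
    for k in range(n_steps):
        X[k + 1] = step @ X[k]
    return X
```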
Regarding the control effort of the system,
Figure 16 shows the input of the system resulting from the compensator and
H∞ controller, along with its integral over time, revealing the superior effectiveness of the H∞ controller. Furthermore, the integral of the control input over time is lower in the case of the H∞ controller, suggesting a reduced total energy consumption for stabilization. However, it is important to note that the maximum response required for control is higher with the H∞ controller, a fact that needs to be considered when implementing the control law in a real system.
6. Discussion
This report aimed to introduce a method for stabilizing aeroelastic systems that are inherently prone to flutter, a crucial issue in aeroelasticity. The control law is obtained based on unsteady aerodynamic force matrices that are valid in the full Laplace plane for subsonic, sonic and supersonic flows. To assess the effectiveness of the proposed controller, we used the H∞ norm, a standard measure of system robustness. To provide a performance baseline, we applied traditional control techniques, like pole placement and the Linear Quadratic Regulator (LQR), to a three-degree-of-freedom airfoil with a trailing-edge control surface.
Our chosen approach involved using an H∞ controller, initially derived in a suboptimal closed form, which was later fine-tuned through an optimization process. We began by expressing the aeroelastic system through a linear relation with the s variable, transforming the system into a state-space representation, following a nine-matrix formulation. It is worth noting that the conventional method does not guarantee internal controller stability. In response, a family of strongly stabilizing controllers, obtained by applying a linear fractional transformation (LFT) to the central controller, is introduced.
The distinctive contribution of this work lies in the practical application of the control law, seamlessly integrated into the exact dynamic equations (unsteady aerodynamics fully valid in the Laplace plane) of the aeroelastic system. Although the controller originated from a state-space model, it was later transformed into a transfer function in the Laplace domain and directly incorporated into the system’s equations of motion. The p-method, a reliable analytical technique, was used to compute the system’s poles across the three branches corresponding to the physical modes. This extensive analysis covered both subsonic and supersonic compressible flow regimes, resulting in the successful stabilization of the system. In all cases, the H∞ controller outperformed the basic compensator in terms of the closed-loop H∞ norm.
Our research continues to progress, with ongoing investigations focused on extending this control methodology to aeroelastic wing systems. This represents a significant step in advancing the understanding and practical implementation of robust control strategies in the field of aeroelasticity. Currently, we are studying the application of this control system to a complete wing. Aerodynamic strip theory will be used, maintaining the theoretical basis presented here.