Article

Neural Network and Hybrid Methods in Aircraft Modeling, Identification, and Control Problems

by Gaurav Dhiman, Andrew Yu. Tiumentsev and Yury V. Tiumentsev *
Department of Flight Dynamics and Control, Moscow Aviation Institute (National Research University), Volokolamskoe Highway 4, 125080 Moscow, Russia
* Author to whom correspondence should be addressed.
Aerospace 2025, 12(1), 30; https://doi.org/10.3390/aerospace12010030
Submission received: 24 November 2024 / Revised: 28 December 2024 / Accepted: 31 December 2024 / Published: 3 January 2025
(This article belongs to the Section Aeronautics)

Abstract:
Motion control of modern and advanced aircraft has to be provided under conditions of incomplete and inaccurate knowledge of their parameters and characteristics, possible flight modes, and environmental influences. In addition, various abnormal situations may occur during flight, in particular equipment failures and structural damage. These circumstances give rise to the problem of rapidly adjusting the control laws in use so that the control system can adapt to such changes. However, most adaptive control schemes rely on a model of the control object, which plays a crucial role in adjusting the control law. That is, the identification problem for dynamical systems must be solved as well. We propose an approach to these problems based on artificial neural networks (ANNs) and hybrid technologies. Within the class of traditional neural network technologies, we use recurrent neural networks of the NARX type, which yield black-box models of controlled dynamical systems. It is shown that in a number of cases, in particular for control objects with complicated dynamic properties, this approach turns out to be inefficient. One possible alternative, investigated in this paper, is the transition to hybrid neural network models of the gray-box type. These are semi-empirical models that combine in the resulting network structure both empirical data on the behavior of an object and theoretical knowledge about its nature. They allow solving, with high accuracy, problems whose complexity puts them beyond the reach of black-box ANN models. However, forming such models requires very large computational resources. For this reason, the paper considers another variant of the hybrid ANN model.
In this variant, the hybrid model combines not empirical and theoretical elements, yielding a recurrent network of a special kind, but elements of feedforward and recurrent networks. Such a variant opens up the possibility of applying deep learning technology to the construction of motion models for controlled systems. As a result of this study, data were obtained that allow us to evaluate the effectiveness of two variants of hybrid neural networks applicable to problems of aircraft modeling, identification, and control. The capabilities and limitations of these variants are demonstrated on several examples. First, using the problem of aircraft longitudinal angular motion, we consider modeling the motion of a supersonic transport aircraft (SST) with a NARX network. It is shown that under complicated operating conditions this network does not always provide acceptable modeling accuracy. The same problem, applied to a maneuverable aircraft as a more complex object of modeling and identification, is then solved using both a NARX network (black box) and a semi-empirical model (gray box); the significant advantage of the gray-box model over the black-box one is shown. The capabilities of the hybrid model realizing deep learning technologies are demonstrated by forming a model of the control object (SST) and a neurocontroller within the MRAC adaptive control scheme. The efficiency of the obtained solution is illustrated by comparing the response of the control object to a failure situation (a 50% decrease in the effectiveness of longitudinal control) with and without adaptation.

1. Introduction

Machine learning (ML) is currently one of the most actively developing research areas. Significant advances have been made in techniques based on artificial neural networks (ANN) and reinforcement learning (RL). Among the applied problems in which ML methods are increasingly being applied, problems related to dynamical systems have received much attention. In particular, in the last two decades, efforts have been made to use artificial neural network technology both for modeling and identification of dynamical systems [1,2,3,4] and for their control [5,6,7,8]. The methods of reinforcement learning [9,10,11], especially such a variant of this approach as Approximate Dynamic Programming (ADP) [12,13,14,15], are also actively investigated for the same purposes. Various variants of the ADP approach represent a very useful tool for synthesizing feedback control laws [16], including those satisfying the requirements of optimality and adaptivity [17,18].
One of the areas in which the application of machine learning technologies is being actively explored is aircraft of various kinds and purposes. This is driven by the complexity and variety of the tasks assigned to aircraft. A complicating factor is incomplete and inaccurate knowledge of the properties of the object under study and of the conditions in which it operates. This situation is typical for objects of this kind and prompts an expansion of the tools used to solve the required tasks. In addition, objects of this class are characterized by multi-mode operation, as well as by the multi-criteria character of evaluating the tasks they solve.
For aircraft, as one of the most important kinds of dynamical systems, their behavior control has to be ensured under conditions of significant and diverse uncertainties in the values of their parameters and characteristics, flight modes, and environmental influences [19,20,21,22]. In addition, various abnormal situations may occur during flight, in particular equipment failures and structural damage, which must be counteracted by reconfiguring the aircraft control system [23,24,25,26]. Thus, it is necessary to operate under conditions of incomplete and inaccurate knowledge of the properties of both the aircraft and the environment in which it operates. In addition, the situation in which the aircraft finds itself at any given time may change in a significant and unpredictable way due to the occurrence of abnormal situations. The aircraft control system must be able to operate efficiently under these conditions by promptly changing the parameters and/or structure of the control laws used. Adaptive control methods allow satisfying this requirement [22,27,28,29,30].
However, the online adjustment of the control law represents only one of the functions of an adaptive control system, although it is the most important one. The reason is that most adaptive control schemes include a model of the control object, which plays a crucial role in adjusting the control law. Obtaining such a model from data on the behavior of the control object constitutes the content of the classical dynamical system identification problem [31,32]. The well-known manual on system identification [31] says, “Inferring models from observations and studying their properties is really what science is about”. To a large extent this is true, but we should not forget that models are created for certain purposes, namely, to enable the analysis of the behavior of the modeled systems and, in the case of the controlled systems we are interested in, also the synthesis of control laws for them. It is needs of this kind that largely drive progress in the field of system modeling.
As noted above, the properties of a control object may be inaccurately known when forming its model, or they may change for some reason already in the process of its functioning. In such a case, in adaptive systems using the object model, it is necessary to be able to promptly restore the adequacy of the model to the control object. In other words, in such systems, not only the control law but also the object model should have the property of adaptability.
Obtaining a model of an object for its use as part of an adaptive control system is closely related to three classes of problems arising in the development of aircraft of various kinds. These classes include analysis of aircraft behavior as dynamical systems [33,34,35], synthesis of control algorithms for them [36,37,38], and identification of their unknown or imprecisely known characteristics [39,40,41]. Mathematical and computer models of controlled dynamical systems play a critical role in solving problems in all three classes. In most cases, when solving these problems, the aircraft is considered a solid body with six degrees of freedom. The traditional class of mathematical models in this situation are systems of ordinary differential equations. These models, with appropriate numerical methods [42,43,44,45], are widely used in solving problems of synthesis and analysis of controlled motion of aircraft of various classes.
The methods of formation and use of the above-mentioned models of traditional type have been developed in detail and successfully used for solving a wide range of problems. However, for modern and advanced complex technical systems, including aircraft of different classes, some problems arise, the solution of which cannot be provided by traditional techniques. These problems are caused, among other things, by the needs of adaptive control systems, i.e., the need to promptly adjust the control law and the model of the control object to maintain their adequacy to the changing situation.
Additionally, the problem of creating an adaptive control system for aircraft behavior is complicated by the fact that such a system becomes truly effective only when working with a nonlinear model of the dynamical system under consideration. As experience shows [2,32], one of the most efficient approaches to modeling nonlinear systems is the use of methods and tools of artificial neural networks (ANN). Neural network modeling allows the building of sufficiently accurate and computationally efficient ANN models. This approach can be considered an alternative to modeling dynamical systems with differential equations and provides, among other things, the possibility of obtaining adaptive models. The most commonly used traditional neural network models of controlled dynamical systems are those of the NARX and NARMAX classes [2,32,46], which are recurrent neural networks. These models are purely empirical, that is, black-box models based solely on experimental data about the behavior of the object. However, in problems at the complexity level typical for aerospace engineering, it is often impossible to achieve the accuracy required of such empirical models for solving aircraft motion control problems. In addition, due to their structural organization, such models do not allow solving the problem of identifying the characteristics of dynamical systems (e.g., the aerodynamic characteristics of aircraft), which is a serious disadvantage of this class of models.
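To make the black-box idea concrete, the following sketch shows the structure of a NARX-type model in a deliberately simplified form: a one-hidden-layer feedforward network fed with lagged outputs and inputs, closed into a recurrent loop when run in simulation (parallel) mode. All names, dimensions, and the random initialization are illustrative assumptions, not the networks actually used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

class NARX:
    """Minimal NARX-style model: a one-hidden-layer network mapping
    lagged outputs y(t-1..t-ny) and inputs u(t..t-nu+1) to y(t).
    Illustrative only; the weights here are random, not trained."""

    def __init__(self, ny, nu, hidden):
        self.ny, self.nu = ny, nu
        self.W1 = rng.normal(0, 0.1, (hidden, ny + nu))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.1, hidden)
        self.b2 = 0.0

    def step(self, y_lags, u_lags):
        z = np.tanh(self.W1 @ np.concatenate([y_lags, u_lags]) + self.b1)
        return self.W2 @ z + self.b2

    def simulate(self, u, y0):
        """Closed-loop (parallel) mode: the model's own predictions
        are fed back as the lagged outputs."""
        y = list(y0)
        for t in range(len(y0), len(u)):
            y_lags = np.array(y[t - self.ny:t])[::-1]
            u_lags = u[t - self.nu + 1:t + 1][::-1]
            y.append(self.step(y_lags, u_lags))
        return np.array(y)

model = NARX(ny=2, nu=2, hidden=8)
u = np.sin(0.1 * np.arange(50))      # hypothetical control input history
y = model.simulate(u, y0=[0.0, 0.0])
```

In series-parallel (open-loop) training mode, the measured outputs would be supplied as lags instead of the model's own predictions; it is the closed-loop mode shown here that makes the trained network a recurrent dynamical model.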
One of the most important reasons for the low efficiency of ANN models of the traditional type in tasks related to complicated technical systems is that a black-box model must cover all the details of the behavior of the system under consideration. For this purpose, it is necessary to build an ANN model of high enough dimensionality, i.e., with numerous adjustable parameters. At the same time, it is known from the experience of ANN modeling that the higher the dimensionality of the ANN model, the larger the amount of training data required for its adjustment [46]. As a result, with the amount of experimental data that can actually be obtained for complex technical systems, it is not possible to train such models to a given level of accuracy. This, in turn, does not allow us to obtain efficient adaptive control systems.
Obviously, as the complexity of the modeling problem to be solved grows, an ANN model of greater dimensionality will be required. For this reason, it is necessary to increase the number of adjustable parameters in it to ensure the complexity of the model is adequate for this problem. As an index of the complexity of an ANN model, we will take the number of connections in it, i.e., the number of adjustable parameters of the model. This index will be further referred to as the model dimensionality. Similarly, the number of examples in the training set will be called the dimensionality of the training set. The complexity of the model depends on the following factors: (1) the number of state and control variables in the model, i.e., the sum of the dimensionality of the state space and control space of the dynamical system under consideration; (2) the ranges of change for the state and control variables; and (3) the number of value samples for each of the state and control variables. But, as noted above, the increase in the dimensionality of the ANN model causes an increase in the required amount of training data. Because of this, there is a complexity threshold for the recurrent ANN model of empirical type above which it is not possible to train the network to a level acceptable in terms of modeling accuracy.
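As a back-of-the-envelope illustration of this growth (our own rough estimate, not a figure from the paper), covering each of n state and control variables with k sample values by a full factorial grid requires on the order of k^n training examples:

```python
# Rough illustration of the "curse of dimensionality" for empirical models:
# sampling each of n_vars state/control variables at samples_per_var points
# on a full factorial grid needs samples_per_var ** n_vars examples.
def grid_size(n_vars, samples_per_var):
    return samples_per_var ** n_vars

# e.g. a longitudinal-only model (4 variables, hypothetical count) vs.
# a fuller 6-DoF model (12 variables), 10 samples per variable:
small = grid_size(4, 10)     # 10**4  examples
large = grid_size(12, 10)    # 10**12 examples, far beyond obtainable flight data
```

Real training sets are of course not factorial grids, but the exponential trend is the same: each added variable multiplies the data requirement.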
It is well known that Richard Bellman introduced the notion of the curse of dimensionality [47,48], which characterizes the catastrophic complexity growth of solving dynamic programming problems with increasing dimensionality of the problem to be solved. A very similar phenomenon, also a kind of curse of dimensionality, takes place in modeling controlled dynamical systems using recurrent neural networks of the empirical type.
To overcome the above-mentioned difficulties, which are typical for traditional models, both in terms of differential equations and ANN models, it is suggested to use a combined approach. Its basis is ANN modeling because only in this variant is it possible to obtain adaptive models. Theoretical knowledge about the modeling object, existing in the form of ordinary differential equations (for example, traditional models of aircraft motion), is introduced in a special way into the ANN model of hybrid type (semi-empirical ANN model, gray box model). In this case, a part of the ANN model is formed on the basis of the available theoretical knowledge and does not require further adjustment (training). Only those elements that contain uncertainties, for example, the aerodynamic characteristics of the aircraft, are subject to adjustment and/or structural correction in the process of training the ANN model.
This approach results in semi-empirical ANN models that allow us to solve problems that are inaccessible to traditional ANN methods [49,50,51,52,53,54,55]. The semi-empirical approach allows us to radically reduce the dimensionality of the ANN model, which allows us to achieve the required accuracy from it using training sets that are insufficient in size for traditional ANN models. In addition, it makes it possible to identify the characteristics of the dynamical system described by nonlinear functions of many variables, e.g., coefficients of aerodynamic forces and moments.
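The gray-box idea can be sketched as follows: the equations of motion are kept in their theoretical form, and only the unknown characteristic (here a hypothetical pitching-moment coefficient Cm) is represented by a small trainable network. The simplified short-period dynamics, the function names, and the coefficient value are our own illustrative assumptions, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny feedforward block standing in for the unknown aerodynamic
# characteristic, e.g. Cm(alpha, delta_e); its weights are the ONLY
# trainable part of the gray-box model.
W1 = rng.normal(0, 0.1, (6, 2)); b1 = np.zeros(6)
W2 = rng.normal(0, 0.1, 6);      b2 = 0.0

def Cm_net(alpha, delta_e):
    z = np.tanh(W1 @ np.array([alpha, delta_e]) + b1)
    return W2 @ z + b2

def f_semi_empirical(x, delta_e, moment_coeff=1.0):
    """Known (theoretical) short-period skeleton with the unknown
    pitching-moment coefficient supplied by the embedded network:
    alpha' = q,  q' = moment_coeff * Cm(alpha, delta_e).
    A hypothetical simplified form for illustration."""
    alpha, q = x
    return np.array([q, moment_coeff * Cm_net(alpha, delta_e)])

# One explicit-Euler step of the hybrid model
x = np.array([0.05, 0.0])                  # alpha [rad], pitch rate [rad/s]
x_next = x + 0.01 * f_semi_empirical(x, delta_e=-0.02)
```

Training such a model adjusts only W1, b1, W2, b2 against recorded trajectories, which is what makes both the parameter count and, as a by-product, the identified Cm surface itself so much smaller and more interpretable than a black-box network.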
The semi-empirical approach to ANN modeling of aircraft allows us to effectively overcome the above-mentioned model complexity threshold. As our experience shows [56,57,58,59,60,61], it can be used to successfully solve problems many times more complex than what is possible with empirical ANN models. The natural area of application for such models is the identification of aircraft characteristics from available experimental data describing the aircraft's behavior. However, these results come at a very high cost: the computational time required by semi-empirical models completely excludes the possibility of their online adjustment, i.e., adaptation. Without such a possibility, these models cannot be used as part of adaptive control systems. Instead, separate algorithms for estimating adjustments must be created and used together with a model that is itself not adaptive. This is a severe limitation of semi-empirical ANN models.
Another point to be noted is that recurrent neural networks (RNNs) exist in both the empirical and semi-empirical variants. Due to its dynamic nature, RNN is a very challenging object to train [46,62,63,64,65,66]. Therefore, we should look for variants of neural network-based adaptive control schemes in which we can use only feedforward networks, which are much easier to train than recurrent networks.
So, the hybrid (semi-empirical) model solves the problem of model accuracy with the available training data. However, this model is too complex and time-consuming to train due to the dynamic (recurrent) nature of the network used. One option for reducing the impact of these limitations is again a hybrid ANN model, but with hybridization of a quite different nature than in semi-empirical models. Whereas semi-empirical models combine elements based on theoretical and on empirical data about the behavior of the object under consideration, the alternative variant of the hybrid ANN model discussed below is based on combining feedforward and recurrent layers as modules. This variant makes it possible to bring the currently well-developed methods and tools of deep neural networks and deep learning to bear on the problem [67,68,69,70]. It also opens the possibility of implementing algorithms for the online adjustment of models, i.e., such models are potentially suitable for use as part of adaptive control systems.
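A minimal structural sketch of this second kind of hybridization might look as follows: feedforward encoder and decoder modules wrapped around a simple recurrent core, so that only the core carries dynamic state. The dimensions and random weights below are arbitrary placeholders, not an architecture from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hybrid stack: feedforward encoder -> simple recurrent core -> feedforward
# decoder. Only the core is recurrent; the encoder and decoder are static
# maps and can be trained with ordinary feedforward techniques.
n_in, n_enc, n_h, n_out = 3, 8, 8, 2
We = rng.normal(0, 0.3, (n_enc, n_in))        # encoder (feedforward)
Wx = rng.normal(0, 0.3, (n_h, n_enc))         # core: input weights
Wh = rng.normal(0, 0.3, (n_h, n_h)) * 0.5     # core: recurrent state weights
Wd = rng.normal(0, 0.3, (n_out, n_h))         # decoder (feedforward)

def hybrid_forward(u_seq):
    h = np.zeros(n_h)
    outputs = []
    for u in u_seq:                  # only this loop carries dynamic state
        e = np.tanh(We @ u)          # feedforward encoding of the input
        h = np.tanh(Wx @ e + Wh @ h) # recurrent core update
        outputs.append(Wd @ h)       # feedforward read-out
    return np.array(outputs)

y = hybrid_forward(rng.normal(size=(20, n_in)))
```

Confining the recurrence to one small module is what makes standard deep learning toolchains (and their online fine-tuning machinery) applicable to the rest of the stack.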
The above-mentioned approaches to adaptive control are based on working directly with a nonlinear model of the control object. This causes considerable difficulties, which, as noted above, are due to the dynamic nature of recurrent networks. In the case of hybrid models of the second kind, the problem is somewhat mitigated, since only part of the ANN model is recurrent; the difficulties remain, but they are less severe.
In our opinion, there is currently no reasonable alternative to machine learning methods and tools for creating adaptive systems for complex systems, in particular aircraft. However, as just mentioned, it is very difficult to realize object models (as well as control laws) as dynamic networks, and we would like to have more efficient methods at our disposal. We therefore need to search for ways of working with a nonlinear control object that allow us to limit ourselves to feedforward networks, whose training is many times easier than that of recurrent networks.
This requirement is satisfied by two approaches that are potentially suitable for developing adaptive systems.
The first of them is based on the linearization of the control object. This approach is traditionally used to analyze the perturbed motion of an aircraft and synthesize control laws for it. In this case, linearized models obtained by Taylor series expansion of the right-hand sides of the original nonlinear differential equations are used [33,34,35,36]. This approach severely reduces the capabilities of the resulting adaptive systems.
An alternative variant is based on online linearization of the control object. In this option, the most promising is the use of nonlinear transformation in feedback (feedback linearization), which is selected in such a way that the dynamical system closed by such feedback becomes linear [71,72,73]. At the same time, the traditional linearization used in flight dynamics problems [33,34,35] yields a model that has the required accuracy only in the single operational mode of the nonlinear system, as well as in a small neighborhood of this mode. In contrast, the feedback linearization approach allows obtaining an accurate model in the whole region of the operating modes of the object under consideration. For aircraft control, this approach is often referred to as nonlinear dynamic inversion (NDI) [74,75,76,77,78,79,80]. The adaptability of the system based on the NDI approach is provided by adjusting the nonlinear transformation in the negative feedback enclosing the nonlinear control object. A valuable property of the NDI approach is that this transformation is a nonlinear function of several variables. This means that it is not necessary to use a recurrent neural network to represent a nonlinear dynamic model [81,82,83,84,85,86]. A feedforward network that efficiently handles the representation of functional transformation in feedback is sufficient.
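The essence of feedback linearization can be shown on a toy second-order system (our own example, not one from the paper): for dynamics x2' = f(x) + g(x)·u with g(x) ≠ 0, the choice u = (v − f(x))/g(x) turns the closed loop into an exact double integrator driven by the virtual control v, valid over the whole operating region rather than near a single trim point.

```python
import numpy as np

# Feedback linearization (the core of NDI) on a toy nonlinear system:
#   x1' = x2,   x2' = f(x) + g(x) * u
# Choosing u = (v - f(x)) / g(x) makes the closed loop exactly the
# double integrator x2' = v wherever g(x) != 0.
def f(x):  return -np.sin(x[0]) - 0.1 * x[1]   # hypothetical nonlinear dynamics
def g(x):  return 2.0 + np.cos(x[0])           # control effectiveness, >= 1 here

def ndi_control(x, v):
    return (v - f(x)) / g(x)

def closed_loop_step(x, v, dt=0.01):
    u = ndi_control(x, v)
    dx = np.array([x[1], f(x) + g(x) * u])     # analytically equals [x2, v]
    return x + dt * dx

x = np.array([0.3, -0.1])
v = -2.0 * x[0] - 1.5 * x[1]   # any linear outer-loop law now works globally
x1 = closed_loop_step(x, v)
```

In an adaptive NDI scheme, f and g would be replaced by (feedforward) neural approximations adjusted online; the inversion formula itself stays the same.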
Another approach that avoids the use of recurrent networks is based on reinforcement learning [87]. The research area called adaptive/approximate dynamic programming (ADP) is the most actively developed in reinforcement learning for control problems in dynamical systems [12,14,16,88,89]. The ADP approach involves primarily a class of methods based on the concept of adaptive critic design (ACD) [9,90,91,92]. Reinforcement learning methods, especially as applied to adaptive control problems for complex engineering systems, are very effective. This effectiveness is due to the joint use of reinforcement learning principles and feedforward neural networks. In addition, the ACD system’s learning algorithms for adaptive control problems are based on the techniques of optimal control, in particular, dynamic programming. In other words, this approach is also a kind of hybrid approach that combines the ideas and techniques of reinforcement learning, artificial neural networks, and optimal control.
The joint use of NDI and ADP-based approaches also opens up a wide range of possibilities. The attractive thing in such a case is that we can stay within the framework of well-developed linear control theory methods. Combining these methods with approximate dynamic programming techniques ensures the adaptability of the resulting systems. However, the traditional approach to obtaining a linearized representation of the system, based on the use of Taylor series expansion, significantly limits the applicability domain of the obtained control laws. NDI methods allow for the elimination of this limitation as they provide accurate linearization of the model for the whole range of operating modes of the system under consideration. The combined use of the model of a nonlinear control object linearized by NDI with ADP methods allows us to obtain an adaptive control system, which is especially valuable when the dynamics of the control object during its operation change unpredictably due to failures and/or damage. Promising options for ADP + NDI combinations include systems based on such variants of the ACD scheme as SNAC (Single Network Adaptive Critic) [93,94,95,96,97,98,99,100] and LQR Agent [101,102,103,104,105,106]. These variants, although they can be applied directly to nonlinear systems, are more efficient for a linear control object. The required linearization for both of these variants is conducted on an NDI basis. The first of these variants, i.e., SNAC, is a modification of the ACD scheme in which the critic is implemented as a feedforward network and the actor as an optimization algorithm. This distinguishes SNAC from the standard variant of ACD, in which the actor is also implemented as a feedforward network. The second variant, i.e., the LQR Agent, is a generalization of the conventional Linear Quadratic Regulator (LQR) problem of synthesizing an optimal regulator for a linear system under a quadratic criterion. 
While in the standard LQR method the desired control law gain coefficients are derived from the solution of the Riccati equation, in the case of the LQR Agent they are found using machine learning algorithms. We investigated the capabilities of both SNAC [100] and LQR Agent [106], including their combinations with NDI. The results demonstrate the potential of both of these approaches, but more research is needed to obtain results suitable for real-world applications.
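For reference, the classical LQR baseline that the LQR Agent generalizes can be sketched via the discrete-time Riccati recursion (a standard textbook computation; the system matrices below are an illustrative double integrator, not a model from the paper):

```python
import numpy as np

# Discrete-time LQR for x+ = A x + B u with cost sum(x'Qx + u'Ru):
# iterate the Riccati recursion to a fixed point, then
#   K = (R + B'PB)^-1 B'PA,   u = -K x.
# In the LQR-Agent idea mentioned in the text, K would instead be found
# by a learning algorithm from data; here we show the classical route.
def dlqr(A, B, Q, R, iters=500):
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K, P

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])            # double integrator, dt = 0.1
B = np.array([[0.0],
              [0.1]])
Q = np.eye(2); R = np.array([[1.0]])
K, P = dlqr(A, B, Q, R)

# The closed loop A - B K should be stable (spectral radius < 1).
rho = max(abs(np.linalg.eigvals(A - B @ K)))
```

The appeal of combining this with NDI is visible in the sketch: once NDI supplies an exactly linear (A, B) pair valid across the flight envelope, a gain computed (or learned) this way applies globally, not just near one trim condition.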
In the following sections, the issues mentioned in the introduction are discussed in more detail.
In Section 2, three classes of problems are formulated for an aircraft interpreted as a controllable dynamical system: analysis of system behavior, synthesis of control laws for them, and system identification. In all these classes of problems, the model of the object under consideration plays a key role. A general formulation of the modeling problem for controlled dynamical systems is given, and its specific features are analyzed. Further, in this section, the problem of training set generation for ANN modeling of dynamical systems is discussed. The results of solving this problem are critical for all three classes of problems mentioned above.
Section 3 presents a variant of ANN modeling of controlled dynamical systems based on the use of ANN models of the empirical type. These are models of the black-box type. All traditional neural networks belong to this kind, as they are built exclusively on the basis of experimental (empirical) data. This approach can be quite useful and, in some cases, the only possible one when nothing but empirical data about the object is at our disposal. But this circumstance is also a source of limitation for the capabilities of this class of models. These issues are discussed using the example of the modeling problem for the longitudinal angular motion of an airplane. It is shown that for ANN models of the empirical type, this problem is in most cases close to the limit of attainable complexity. At the same time, real aircraft modeling and control problems are much more complicated, which limits the applicability of empirical models.
Section 4 discusses an approach to solving the problem of overcoming the complexity threshold identified in the previous section. This approach is based on the transition from empirical ANN models to semi-empirical ones, i.e., from black-box to gray-box models. Semi-empirical ANN models are hybrid, as they use both empirical data on the behavior of the object and theoretical knowledge about its nature. After considering the procedure for the formation of such models, the analysis of their capabilities, including in comparison with empirical ANN models, is carried out using two examples. One of them deals with a simple dynamical system, which allows us to demonstrate the formation of a semi-empirical ANN model as well as give a comparison between the capabilities of this model and the empirical ANN model. In the second example, we again use the longitudinal angular motion problem for the same purposes. This example allows us to demonstrate another important property of the semi-empirical approach. Namely, this approach allows us to solve not only the problem of system identification (i.e., obtaining a model of the object under consideration) but also the problem of identifying the characteristics of this system. In other words, if some nonlinear functions are unknown to us in the problem to be solved, the semi-empirical approach makes it possible to reconstruct them. For an aircraft, such characteristics are usually related to the aerodynamic forces and moments acting on it. It is shown that the semi-empirical approach solves this problem with high accuracy, despite a number of complicating circumstances.
In Section 5, we return again, although at a different level, to ANN models of the empirical type. This return is because, despite the obvious advantages of semi-empirical models, they also have a severe disadvantage. Namely, the process of forming such models requires a massive consumption of computational resources. For this reason, online adjustment of such models, which is necessary for their use in adaptive systems, is practically impossible. An alternative option to both empirical ANN models of the traditional kind and semi-empirical ANN models is the construction of another variant of the hybrid ANN model. In this variant, the hybrid nature of the model consists not in the combination of empirical and theoretical elements resulting in a recurrent network of a special kind but in the combination of elements of feedforward networks and recurrent networks. This variant opens up the possibility of using deep learning technology to build models of aircraft motion, as well as models for other types of controlled dynamic systems. This approach is illustrated again by the example of longitudinal angular motion. First, we form an ANN model for this example and then a neurocontroller based on the same approach. The possibility of fault-tolerant control using the proposed tools is also demonstrated.
Among the approaches that can successfully cope with adaptive control and modeling problems for nonlinear dynamical systems, including aircraft, two more options based on nonlinear dynamic inversion and reinforcement learning were mentioned above. These options are not considered in this paper. The issues related to the NDI approach and the ADP approach for the considered problem domain require a separate presentation due to the vastness of both of these topics. The authors intend to prepare relevant publications on these issues, which are clearly important for the control of aircraft behavior as nonlinear dynamical systems under uncertainty conditions.
In addition to the two critical elements of an adaptive system discussed in this paper, i.e., the control law and the model of the control object, there is also a third element that is critical to the performance of such systems. This element should provide the first two with information about the current situation, i.e., about the current state of the control object and the environment in which it operates. At the same time, the concept of environment should be interpreted broadly: it includes more than the state of the atmosphere and the gravity field that immediately affect the aircraft.
In such tasks as formation flying, air traffic control, etc., it is important to have information about the state and behavior of the objects around our aircraft. Information about the current situation in which our aircraft finds itself is commonly referred to as situational awareness. Because of the importance of this topic, we have made a special study of situational awareness and the ways of forming it. This challenge is relevant for both piloted and unmanned vehicles, with the unmanned case being much more difficult. This is because, for example, in the case of a piloted aircraft, part of the formation of situational awareness can be delegated to crew members. In the case of UAVs, as with any other unmanned vehicle, everything must be accomplished by the on-board tools of the control object under consideration. This is why our study was focused on the case of an unmanned aerial vehicle. The results of this study were published in our article [107].

2. Controlled Dynamical System as an Object of Neural Network Modeling and Control

2.1. Main Problems Related to Controlled Dynamical Systems

A controllable dynamical system S in the most general form can be represented as an ordered triple of the following kind:
$$S = \langle U, P, Y \rangle, \qquad (1)$$
where U are input influences on the system S (initial motion conditions, control actions, and uncontrolled external influences), P is an aircraft as a modeling/control object, and Y are observable reactions of the object P to the input influences U . In the special case of an uncontrolled dynamical system, U includes only the initial conditions of motion and uncontrolled external disturbances.
Keeping in mind the definition (1), we can distinguish three classes of problems related to controlled dynamical systems. In each of them, two of the three elements $U$, $P$, $Y$ are fixed, and one is free, i.e., it must be found from the other two. These classes of problems look as follows (the free element in each is highlighted in bold):
1. $\langle U, P, \boldsymbol{Y} \rangle$ — analysis of the behavior of the system $S$ (given $U$ and $P$, find $Y$);
2. $\langle \boldsymbol{U}, P, Y \rangle$ — control synthesis for the system $S$ (given $P$ and $Y$, find $U$);
3. $\langle U, \boldsymbol{P}, Y \rangle$ — system identification for $S$ (given $U$ and $Y$, find $P$).
It is no exaggeration to say that the most important of these three problems is system identification, i.e., the problem of obtaining a model of the system $S$. Indeed, without such a model at our disposal, it is extremely difficult to solve the problems of analyzing the behavior of the system $S$ and of control synthesis for it.
We will consider the model of the S system in the traditional state-space form:
$$\dot{x}(t) = F(x(t), u(t), \xi(t)), \qquad y(t) = G(x(t), \zeta(t)). \qquad (2)$$
Here, x is the state variable vector, y is the observed output vector, u is the vector of control variables, ξ is the vector of uncontrolled external disturbances, ζ is the vector of measurement errors, and F and G are known vector functions.
The system $S$ realizes the mapping $\Phi(\cdot)$, which is a composition of the mappings $F(\cdot)$ and $G(\cdot)$ (see Figure 1). The mapping $F(\cdot)$ is realized by the aircraft as a control object $P$; its inputs are the controls $u(t)$ and external disturbances $\xi(t)$, and its outputs are the states $x(t)$. The mapping $G(\cdot)$ defines the algorithm of the observer forming the output of the system $y(t)$ as an estimate of the values of the states $x(t)$, considering the presence of measurement errors $\zeta(t)$. Thus, the mapping $\Phi(\cdot)$ describes the relationship between the controlled input $u \in U$ of the system $S$ and its output $y \in Y$, taking into account the influence of the uncontrolled actions $\xi$ and $\zeta$ on the system under consideration.
A similar approach and notation are adopted in the models used later in the paper. In some cases (see, for example, relations (8), (34)–(36)), it is assumed that the state vector fully characterizes the behavior of the object under consideration, i.e., only the first equation in relation (2) is used. Such an approach is acceptable in these cases, since the training examples there are formed by numerical integration of differential equations that constitute the reference model of motion. Of course, when data from flight experiments are used, the presence of the second relation in model (2) is mandatory.
Taking into account the introduced notations, the above-mentioned identification problem, which is critical for solving problems related to dynamical systems, is formulated as follows: it is necessary, using data on the behavior of the system under consideration, to obtain some mapping Φ ^ ( · ) that reproduces the mapping Φ ( · ) with the required accuracy.
Let n observations be made over the system S :
$$y_i = \Phi(u_i, \xi, \zeta), \quad i = 1, \ldots, n,$$
in each of which the current values of the controlled input u i and output y i were recorded.
The observation results $y_i = y(t_i)$, $t_i \in [t_0, t_n]$, form a set of $n$ ordered pairs
$$\{ \langle u_i, y_i \rangle \}, \quad u_i \in \bar{U} \subset R, \quad y_i \in \bar{Y} \subset R, \quad i = 1, \ldots, n.$$
Using these data, we need to find such an approximation Φ ^ ( · ) for the mapping Φ ( · ) realized by the system S that the following condition holds
$$\| \hat{\Phi}(u(t), \xi(t), \zeta(t)) - \Phi(u(t), \xi(t), \zeta(t)) \| \leqslant \varepsilon, \quad \forall u \in U,\ \xi \in \Xi,\ \zeta \in Z;\ t \in [t_0, t_n]. \qquad (3)$$
Here, the mapping $\hat{\Phi}(\cdot)$ is the desired model of the dynamical system $S$. The main requirement for the model $\hat{\Phi}(\cdot)$ is as follows. According to (3), the desired approximate mapping $\hat{\Phi}(\cdot)$ must have the required accuracy not only when reproducing the observations
$$\{ \langle u_i, y_i \rangle \}, \quad u_i \in \bar{U} \subset R, \quad y_i \in \bar{Y} \subset R, \quad i = 1, \ldots, n,$$
but also for all admissible values of $u \in U$, as well as for all $t \in [t_0, t_n]$. The required accuracy is ensured if the uncontrolled influences on the system $S$ satisfy the condition $\xi \in \Xi$ and the measurement noise satisfies the condition $\zeta \in Z$. A model $\hat{\Phi}(\cdot)$ satisfying this basic requirement is commonly said to have the generalization property.
We need to clarify how to understand the norm $\| \cdot \|$ in expression (3), i.e., how to interpret the magnitude of the difference on $[t_0, t_n]$ between the results given by the mappings $\hat{\Phi}(\cdot)$ and $\Phi(\cdot)$. Typical interpretations of this kind are the maximum deviation of $\hat{\Phi}(\cdot)$ from $\Phi(\cdot)$:
$$\| \hat{\Phi}(u, \xi, \zeta) - \Phi(u, \xi, \zeta) \| = \max_{t_0 \leqslant t \leqslant t_n} | \hat{\Phi}(u(t), \xi, \zeta) - \Phi(u(t), \xi, \zeta) | \qquad (4)$$
and the integral deviation of $\hat{\Phi}(\cdot)$ from $\Phi(\cdot)$:
$$\| \hat{\Phi}(u, \xi, \zeta) - \Phi(u, \xi, \zeta) \| = \int_{t_0}^{t_n} [ \hat{\Phi}(u(t), \xi, \zeta) - \Phi(u(t), \xi, \zeta) ]^2 \, dt. \qquad (5)$$
However, the available number of experiments generating the set of pairs
$$\{ \langle u_i, y_i \rangle \}, \quad u_i \in \bar{U} \subset R, \quad y_i \in \bar{Y} \subset R, \quad i = 1, \ldots, n$$
is finite; accordingly, finite-dimensional versions of the definitions (4) and (5) are required. Since the finite-dimensional variant of (5) is the one most commonly used in training neural network models, we give two of its frequently used forms. They are the mean squared deviation of $\hat{\Phi}(\cdot)$ from $\Phi(\cdot)$ (Mean Squared Error, MSE)
$$\| \hat{\Phi}(u, \xi, \zeta) - \Phi(u, \xi, \zeta) \| = \frac{1}{n} \sum_{i=1}^{n} [ \hat{\Phi}(u_i, \xi_i, \zeta_i) - \Phi(u_i, \xi_i, \zeta_i) ]^2 \qquad (6)$$
and its square root (Root Mean Squared Error, RMSE):
$$\| \hat{\Phi}(u, \xi, \zeta) - \Phi(u, \xi, \zeta) \| = \sqrt{ \frac{1}{n} \sum_{i=1}^{n} [ \hat{\Phi}(u_i, \xi_i, \zeta_i) - \Phi(u_i, \xi_i, \zeta_i) ]^2 }. \qquad (7)$$
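In code, these two finite-sample error measures are computed directly from paired sequences of model outputs and reference outputs. A minimal sketch (the function names are illustrative):

```python
import math

def mse(phi_hat, phi):
    """Mean squared deviation between model outputs and reference outputs."""
    assert len(phi_hat) == len(phi), "sequences must have equal length"
    n = len(phi)
    return sum((a - b) ** 2 for a, b in zip(phi_hat, phi)) / n

def rmse(phi_hat, phi):
    """Root mean squared error: the square root of the MSE."""
    return math.sqrt(mse(phi_hat, phi))
```

RMSE is often preferred for reporting because it has the same units as the modeled output variables.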

2.2. Generation of Training Sets for ANN Modeling of Dynamical Systems

Obtaining a training set possessing the required level of informativeness is a critical and very nontrivial stage of solving the problem of ANN model development. If some features of the dynamical system’s behavior are not represented in the training set, they will not be reproduced by the model. In one of the fundamental manuals on system identification, this principle is formulated as the basic rule of system identification: “If it is not in the data, it cannot be identified” (see [39], p. 85).

2.2.1. Informativeness of the Training Set

For now, we will assume that a training set is informative if the data it contains is sufficient to obtain an ANN model that reproduces the behavior of the dynamical system with the required level of accuracy over the entire domain of possible values for the variables and their derivatives describing this behavior.
We interpret the informativeness of the training set as follows. The examples included in the training set, formed from the results of experiments, should describe as fully as possible all typical situations (i.e., “state–control” pairs $\langle x(t_k), u(t_k) \rangle$) in which the control object may find itself. The source data from which the training set is formed can be obtained in the course of computer and/or flight (aircraft, simulator) experiments by implementing the principle of persistent excitation [72,73].
Let us clarify the notion of the informativeness of the training set. Consider a controlled dynamical system of the following kind:
$$\dot{x} = F(x, u, t), \qquad (8)$$
where $x = (x_1, x_2, \ldots, x_n)$ are state variables, $u = (u_1, u_2, \ldots, u_m)$ are control variables, and $t \in T = [t_0, t_f]$ is time.
The variables x 1 , x 2 , , x n and u 1 , u 2 , , u m taken at a particular time instant t k T characterize, respectively, the state of the dynamical system and the control actions on it at a given time instant. Each of these variables takes values from the appropriate domain:
$$x_1(t_k) \in X_1 \subset R, \ \ldots, \ x_n(t_k) \in X_n \subset R; \quad u_1(t_k) \in U_1 \subset R, \ \ldots, \ u_m(t_k) \in U_m \subset R. \qquad (9)$$
In addition, there are generally limitations on the values of combinations of these variables:
$$x = (x_1, \ldots, x_n) \in R_X \subseteq X_1 \times \cdots \times X_n, \quad u = (u_1, \ldots, u_m) \in R_U \subseteq U_1 \times \cdots \times U_m, \qquad (10)$$
as well as on mixtures of these combinations:
$$\langle x, u \rangle \in R_{XU} \subseteq R_X \times R_U. \qquad (11)$$
Each example included in the training set should demonstrate the response of the dynamical system to some combination $\langle x, u \rangle$. By such a response we mean the state $x(t_{k+1})$ to which the system (8) will move from the state $x(t_k)$ under the control action $u(t_k)$ (Figure 2):
$$\langle x(t_k), u(t_k) \rangle \xrightarrow{\ F(x, u, t)\ } x(t_{k+1}). \qquad (12)$$
Accordingly, an example $p$ from the training set $P$ includes two parts: an input part (the pair $\langle x(t_k), u(t_k) \rangle$) and an output part (the system response $x(t_{k+1})$).
Ideally, the training set should reveal the responses of the dynamical system to any combination $\langle x, u \rangle$ satisfying condition (11). Then, according to the above-mentioned basic rule of identification, the training set will be informative, i.e., it will allow the model to reproduce all the specifics of the modeled system's behavior. Let us clarify this position by introducing the following notation:
$$p_i = \{ x^{(i)}(t_k), u^{(i)}(t_k), x^{(i)}(t_{k+1}) \},$$
where $p_i \in P$ is the $i$-th example from the training set $P$. In this example,
x ( i ) ( t k ) = ( x 1 ( i ) ( t k ) , , x n ( i ) ( t k ) ) , u ( i ) ( t k ) = ( u 1 ( i ) ( t k ) , , u m ( i ) ( t k ) ) .
The response $x^{(i)}(t_{k+1})$ of the considered system to the example $p_i$ is as follows:
$$x^{(i)}(t_{k+1}) = ( x_1^{(i)}(t_{k+1}), \ldots, x_n^{(i)}(t_{k+1}) ).$$
Similarly, let us introduce another example $p_j \in P$:
$$p_j = \{ x^{(j)}(t_k), u^{(j)}(t_k), x^{(j)}(t_{k+1}) \}.$$
The source data of the examples $p_i$ and $p_j$ are assumed to be mismatched:
$$x^{(i)}(t_k) \neq x^{(j)}(t_k), \quad u^{(i)}(t_k) \neq u^{(j)}(t_k).$$
The responses of the dynamical system to examples satisfying this condition will also, in general, not coincide:
$$x^{(i)}(t_{k+1}) \neq x^{(j)}(t_{k+1}).$$
Let us introduce the notion of $\varepsilon$-proximity for a pair of examples $p_i$ and $p_j$. Namely, we consider the examples $p_i$ and $p_j$ to be $\varepsilon$-close if the following condition is satisfied:
$$\| x^{(i)}(t_{k+1}) - x^{(j)}(t_{k+1}) \| \leqslant \varepsilon,$$
where $\varepsilon > 0$ is a predetermined real number.
From the set of examples $P = \{ p_i \}_{i=1}^{N_p}$, let us extract a subset consisting of those examples $p_s$ for which the condition of $\varepsilon$-proximity to the example $p_i$ is satisfied, i.e.,
$$\| x^{(i)}(t_{k+1}) - x^{(s)}(t_{k+1}) \| \leqslant \varepsilon, \quad \forall s \in I_s \subseteq I.$$
Here, $I_s$ is the set of indices of those examples for which the $\varepsilon$-proximity condition with respect to the example $p_i$ is satisfied, with $I_s \subseteq I = \{ 1, \ldots, N_p \}$.
We call the example $p_i$ the $\varepsilon$-representative of the entire set of examples $p_s$, $s \in I_s$, meaning that for any example $p_s$, $s \in I_s$, the $\varepsilon$-proximity condition is satisfied. Accordingly, we can now replace the set of examples $\{ p_s \}$, $s \in I_s$, by the single $\varepsilon$-representative $p_i$, and the error introduced by such a replacement will not exceed $\varepsilon$. The input parts of the set of examples $\{ p_s \}$, $s \in I_s$, select the subregion $R_{XU}^{(s)}$ in the region $R_{XU}$ defined by relation (11), and the following relation must be satisfied:
$$\bigcup_{s=1}^{N_p} R_{XU}^{(s)} = R_{XU}. \qquad (19)$$
Now we can state the problem of training set formation as the construction of a set of $\varepsilon$-representatives that covers the region $R_{XU}$ (11) of all possible values of $\langle x, u \rangle$ pairs.
Relation (19) is the condition of $\varepsilon$-covering of the region $R_{XU}$ by the training set $P$. A set $P$ realizing an $\varepsilon$-covering of the region $R_{XU}$ will be called $\varepsilon$-informative or, for short, simply informative.
If the training set $P$ is $\varepsilon$-informative, then for any pair $\langle x, u \rangle \in R_{XU}$ there is at least one example $p_i \in P$ that is an $\varepsilon$-representative for this pair.
With respect to the $\varepsilon$-covering (19) of the region $R_{XU}$, the following two problems can be formulated:
1. The number of examples $N_p$ in the training set is given; find their distribution over the region $R_{XU}$ that minimizes the error $\varepsilon$.
2. An acceptable value of the error $\varepsilon$ is given; form the training set that achieves this value of $\varepsilon$ with the minimal number of examples $N_p$.
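The second of these problems admits a simple greedy approximation: scan the examples and keep one as a new $\varepsilon$-representative only if no already-kept representative is $\varepsilon$-close to it. A sketch under the simplifying assumption that proximity is measured by the Euclidean distance between the output parts of the examples:

```python
import math

def select_representatives(responses, eps):
    """Greedy selection of eps-representatives from a list of response vectors.

    An example is kept only if it is farther than eps from every
    representative kept so far; hence every discarded example has an
    eps-close representative in the returned subset.
    """
    reps = []
    for x in responses:
        if all(math.dist(x, r) > eps for r in reps):
            reps.append(x)
    return reps
```

This does not give the minimal $N_p$ in general (that is a covering problem), but it guarantees the $\varepsilon$-covering property for the examples it has seen.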

2.2.2. Indirect Approach to Generation of Training Sets for ANN Modeling of Dynamical Systems

A straightforward discretization of the region $R_{XU}$ containing admissible values for combinations of state and control variables consists of constructing a grid of values for all admissible combinations of $\langle x, u \rangle$ pairs. The nodes of this grid are then used as a basis for obtaining training examples. This approach works satisfactorily only in problems of low complexity (see above regarding the factors determining model complexity), i.e., elementary problems. Aircraft are characterized by a significantly higher problem complexity than that for which direct formation of training sets works. An alternative is an indirect approach to forming the required training set, based on data obtained by constructing and applying a set of specially organized test control actions on the dynamical system.
In this approach, the real motion of the dynamical system $(x(t), u(t))$ is composed of the program motion (test maneuver) $(x^*(t), u^*(t))$, caused by the control signal $u^*(t)$, and the motion $(\tilde{x}(t), \tilde{u}(t))$ produced by the additive excitation $\tilde{u}(t)$:
$$x(t) = x^*(t) + \tilde{x}(t), \quad u(t) = u^*(t) + \tilde{u}(t). \qquad (20)$$
Test maneuvers should be selected taking into account the specifics of the dynamic system under consideration. For the case of an aircraft, the following test maneuvers can be provided as examples:
  • Straight-line horizontal flight at a constant speed;
  • Flight with monotonically increasing angle of attack;
  • Horizontal turn;
  • Ascending/descending spiral.
Possible variants of test excitations u ˜ ( t ) are discussed below.
The kind of test maneuver ( x * ( t ) , u * ( t ) ) in (20) determines the resulting ranges of changes in the values of the state and control variables; the kind of excitation u ˜ ( t ) specifies the variety of examples within these ranges.

2.2.3. Generating a Set of Test Maneuvers

It was noted above that the selected program motion (reference trajectory) as a component of the test maneuver determines the range of values of the state variables in which the training data will be obtained. It is required to select a set of reference trajectories that covers the entire range of changes in the values of the state variables of the dynamical system. The required number of trajectories in such a set is determined from the ε -proximity condition of the phase trajectories for the dynamical system:
$$\| x_i(t) - x_j(t) \| \leqslant \varepsilon, \quad x_i(t), x_j(t) \in X, \quad t \in T. \qquad (21)$$
Let us define a family of reference trajectories of the dynamical system:
$$\{ x_i^*(t) \}_{i=1}^{N_R}, \quad x_i^*(t) \in X, \quad t \in T. \qquad (22)$$
We say that the reference trajectory $x_i^*(t)$, $i = 1, \ldots, N_R$, is the $\varepsilon$-representative of the family $X_i \subseteq X$ of phase trajectories of the dynamical system if for each phase trajectory $x(t) \in X_i$ the following condition is satisfied:
$$\| x_i^*(t) - x(t) \| \leqslant \varepsilon, \quad x(t) \in X_i, \quad t \in T. \qquad (23)$$
The family of reference trajectories $\{ x_i^*(t) \}_{i=1}^{N_R}$ of the dynamical system must be such that
$$\bigcup_{i=1}^{N_R} X_i = X_1 \cup X_2 \cup \cdots \cup X_{N_R} = X, \qquad (24)$$
where X is the family of all phase trajectories (trajectories in the state space) potentially realizable by the dynamical system under consideration. This condition means that the family of reference trajectories { x i * ( t ) } i = 1 N R must represent in the aggregate all potentially possible behaviors of the dynamical system under consideration. This condition can be interpreted as a condition of completeness of ε -covering by reference trajectories of the area for possible variants of the dynamical system behavior.
The problem of the optimal $\varepsilon$-covering of the region $X$ of possible behaviors of a dynamical system can then be stated as finding the set of reference trajectories with the minimal number of trajectories $N_R^*$:
$$\{ x_i^*(t) \}_{i=1}^{N_R^*} = \arg\min_{N_R} \{ x_i^*(t) \}_{i=1}^{N_R}, \qquad (25)$$
which allows minimizing the size of the training set while preserving its informativity.
Desirable, but difficult to ensure, is also the condition that the regions covered by different reference trajectories do not overlap:
$$\bigcap_{i=1}^{N_R} X_i = X_1 \cap X_2 \cap \cdots \cap X_{N_R} = \varnothing. \qquad (26)$$

2.2.4. Generating a Test Excitation Signal

As already noted, the kind of test maneuver in (20) determines the resulting ranges of changes in the values of the state and control variables, while the kind of excitation action provides a variety of examples within these ranges.
In solving identification problems for controlled dynamical systems, a number of typical test excitations are used [31,39,40,41]. Typical examples of such influences include the step action, rectangular pulse, and doublet (Figure 3), as well as more complex and informative variants such as the random signal (Figure 4a) and the polyharmonic signal (Figure 4b).
The set of examples constituting the training set should adequately represent the dynamics of the modeled object, i.e., describe the transitions given by relation (12) as completely as possible. Since the modeled system is dynamical, the training set must capture not only the combinations of possible values of the state and control variables but also their rates of change. Varying the rate of change of the state variables during the formation of the training set is achieved by varying the excitation command signal $\phi_{act}$. The more often the value of this signal changes, and the greater the variety of its values between adjacent time instants, the more informative the training set becomes. From this point of view, the traditional test excitation signals shown in Figure 3 are poorly informative. An alternative is the signals shown in Figure 4, which provide an informative set of examples. The random and polyharmonic signals are roughly equivalent in informativeness. However, the random signal can be used only in the case of single-channel control, i.e., for a single control variable. The polyharmonic signal, with informativeness comparable to the random one, is more difficult to implement, but in the case of multichannel control it allows one to ensure the orthogonality of the control signals (see Equation (27)). The polyharmonic signal plays an important role in system identification problems, including those related to aircraft [108,109,110]. Both variants (random and polyharmonic) allow one to obtain training data for any systems and operating modes. Depending on the specifics of the problem to be solved, particular values of the parameters of these signals must be selected to obtain a training set with the required informativeness.
The test excitation signals shown in Figure 4 were obtained for experiments with a maneuverable aircraft, the data for which are given in [111]. It has to be noted that the influence of a particular control object on the shape of random and polyharmonic signals will be expressed, first of all, in the range of acceptable values of the command signal. In addition, it will be necessary to set the values of the minimum time interval between switching of the command signal (for a random signal) or the maximum frequency value for harmonics (for a polyharmonic signal), adequate to the dynamics of the control object under consideration.
Determining the composition of experiments for modeling a dynamical system in the frequency domain is an essential part of solving the identification problem. The experiments should be carried out with excitation signals applied to the input of the dynamical system that cover a given frequency range.
A modern aircraft, as one of the most important kinds of modeled dynamical systems, has a significant number of controls (elevators, rudders, ailerons, etc.). When obtaining the data necessary for dynamical system identification, it is highly desirable to be able to affect all these controls simultaneously with a test excitation signal to reduce the total time required for data acquisition. In Schröder’s work [112], it was shown that it is promising to use for this purpose a polyharmonic excitation signal (multisine excitation). Such a signal is a set of sinusoids shifted in phase relative to each other. In the case of several simultaneously acting controls, the mathematical model of the input excitation signal u j acting on the j-th control is a harmonic polynomial
$$u_j = \sum_{k \in I_k} A_k \sin \left( \frac{2 \pi k t}{T} + \varphi_k \right), \quad I_k \subseteq K, \quad K = \{ 1, 2, \ldots, M \}, \qquad (27)$$
which is a finite linear combination of the fundamental harmonic $A_1 \sin(\omega t + \varphi_1)$ and the higher-order harmonics $A_2 \sin(2 \omega t + \varphi_2)$, $A_3 \sin(3 \omega t + \varphi_3)$, etc.
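A minimal sketch of generating such a multisine signal per Equation (27); the amplitudes, phases, and fundamental period below are illustrative choices, not the values used in the cited experiments:

```python
import math

def multisine(t, T, harmonics):
    """Polyharmonic (multisine) excitation: a sum of phase-shifted sinusoids.

    harmonics: list of (k, A_k, phi_k) tuples, where k is the harmonic index,
    A_k its amplitude, phi_k its phase; T is the fundamental period.
    """
    return sum(A * math.sin(2 * math.pi * k * t / T + phi)
               for k, A, phi in harmonics)

# Illustrative signal: three harmonics of a 10 s fundamental period,
# sampled every 0.02 s over 40 s (2000 samples).
T = 10.0
harmonics = [(1, 1.0, 0.0), (3, 0.5, math.pi / 2), (5, 0.25, math.pi)]
samples = [multisine(i * 0.02, T, harmonics) for i in range(2000)]
```

In practice the phases $\varphi_k$ are often chosen to minimize the crest factor of the resulting signal, so that the excitation stays within actuator limits while keeping its spectral content.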

3. Empirical Neural Network Models of Controllable Dynamic Systems

As it is known, the result of the solution of the system identification problem in its traditional formulation is a black-box model of the object under consideration. Methods for solving problems of this kind are described, for example, in the well-known manual [31], as well as in two detailed reviews [113,114]. Since the second half of the 1980s, neural network technologies have been actively used for system identification (see, e.g., [115,116,117]). The state of the art of the field is covered in the comprehensive monograph [32], where new approaches based on neural networks, fuzzy, and neuro-fuzzy technologies are also considered, along with methods that are quite traditional for system identification.
This problem is relatively easy to solve for uncontrolled dynamical systems. Namely, for systems of the kind $\dot{x} = f(x, t)$, where $x = (x_1, \ldots, x_n)$, with an unchanged vector function $f(\cdot)$, the state $x_{k+1} = x(t_{k+1})$ at the time instant $t = t_{k+1}$ is completely determined by the state $x_k$ at the previous time instant $t = t_k$. For a controlled system $\dot{x} = f(x, u, t)$, the transition of the system from one state to another takes the form
$$\langle x(t_k), u(t_k) \rangle \xrightarrow{\ F(x, u, t)\ } x(t_{k+1}), \qquad (28)$$
and the $k$-th training example takes the form $\langle x_k, u_k, x_{k+1} \rangle$. Now the value of $x_{k+1}$ to which the system will move is determined not only by the value of $x_k$ but also by the values taken by all $m$ components of the vector of control variables $u_k$. It should be kept in mind that, first, each of these components can vary over a rather wide range. Second, to ensure the informativeness of the training set, it is necessary for each $x_k$ not only to vary each of the components of the vector $u$ but also to run through all possible combinations of the values of these components. This circumstance dramatically increases the required training set size.
For example, in an airplane motion control problem using three channels (pitch, yaw, roll), the ranges of deflection angles of the controls (elevator, rudder, and ailerons) can be taken as $\pm 25$ deg. Let the sample of values for each of these controls include 10 elements. Then, in this case of three-channel control, we have to generate $10^3 = 1000$ examples for each time instant $t_k$ and its corresponding current state $x_k$, instead of the single example required for an uncontrolled system. Even in relatively short-duration problems involving the short-period angular motion of an airplane, the typical time interval is 20 s. For a maneuverable aircraft, in order to capture the peculiarities of its motion dynamics, the usual value of the time step is $\Delta t = t_{k+1} - t_k = 0.02$ s, i.e., a set of examples showing the reaction of the system under study to various control actions has to be obtained for $10^3$ time instants. In real problems, especially for maneuverable aircraft, the above sampling will be too coarse; we need to increase it at least twofold, which leads to the need to generate $20^3 = 8000$ examples for each $t_k$.
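The arithmetic behind this combinatorial growth is easy to check directly:

```python
controls = 3                 # elevator, rudder, ailerons
samples_per_control = 10     # grid of deflection values per control

# Control combinations (and hence examples) per time instant.
combos = samples_per_control ** controls
# Doubling the sampling density per control.
combos_dense = (2 * samples_per_control) ** controls

interval, dt = 20.0, 0.02    # seconds
time_instants = int(interval / dt)

# Direct-grid examples over the whole interval at the coarse sampling.
total = combos * time_instants
```

Even at the coarse sampling this direct grid yields a million examples for a single 20 s segment, which is what motivates the indirect approach of Section 2.2.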
It should be noted that we have so far proceeded from a simplified formulation of the problem when the uncontrolled system x ˙ = f ( x , t ) realizes a single trajectory determined only by the initial conditions x ( t 0 ) = ( x 1 ( t 0 ) , , x n ( t 0 ) ) . As just shown, for the system x ˙ = f ( x , u , t ) under the same conditions, we must additionally take into account the control variability, which leads to an increase in the required number of examples for the controlled system by three to four orders of magnitude compared with the uncontrolled system. If we also take into account that the state variables x = ( x 1 , , x n ) also vary over a wide range, this gap increases even more. It is for this reason that we have to use the indirect approach to training set generation for controlled dynamical systems discussed above in Section 2.2.
For the reasons noted above, for the implementation of adaptive systems dealing with nonlinear control objects, it is advisable to use neural network models of such objects. In this section, we consider the use of this approach for the case when the control object is an aircraft and analyze the possibilities and limitations of this approach.

3.1. General Structure of ANN Model for Aircraft Motion Based on Multilayer Neural Network

Aircraft flight dynamics is described in general by a nonlinear model. In neural networks, the NARX (Nonlinear AutoRegression with eXogenous inputs) network is often used as such a model. An extension of the NARX model is NARMAX (Nonlinear AutoRegression with Moving Average and eXogenous inputs), which allows explicitly taking into account random external influences on the system when training the network. Both of these models [2,32,46,63] are recurrent neural networks in the considered case. Next, as an example of neural network-based nonlinear system identification, we will consider the NARX model, whose structural scheme is shown in Figure 5.
The general scheme of solving the identification problem is shown in Figure 6: the same control signal $u$ is fed to the input of both the control object (plant) and the ANN model of the object being formed (NARX in our case). The object responds to this control signal with an output value $y_p$, which is compared with the ANN model output $y_m$. In addition, the object output $y_p$ is also used as additional information in the ANN model generation process. The result of comparing $y_p$ and $y_m$ is the error $\varepsilon$ with which the ANN model reproduces the behavior of the object at a given time instant and at given values of the adjustable parameters, i.e., the synaptic weights of the neural network. The learning algorithm takes the error $\varepsilon$ as input and generates corrective actions $\xi$ that change the weights so as to reduce $\varepsilon$.
The NARX model, which we use to solve the identification problem, operating in discrete time, realizes a dynamic mapping described by a difference equation of the following kind:
$$\hat{y}(k) = f( \hat{y}(k-1), \hat{y}(k-2), \ldots, \hat{y}(k-N_y), u(k-1), u(k-2), \ldots, u(k-N_u); w ), \qquad (29)$$
where the value of the output signal $\hat{y}(k)$ at a given time instant $k$ is computed from the values $\hat{y}(k-1), \hat{y}(k-2), \ldots, \hat{y}(k-N_y)$ of this signal at a sequence of previous time instants, as well as from the values of the input (control) signal $u(k-1), u(k-2), \ldots, u(k-N_u)$ external to the NARX model. In the general case, the prehistory lengths for outputs and controls may differ, i.e., $N_y \neq N_u$. In (29), $w$ denotes the set of adjustable parameters of the NARX model (synaptic weights). A convenient way to implement the NARX model is to use a multilayer feedforward network of the multilayer perceptron type to approximate the mapping $f(\cdot)$ in relation (29), and tapped delay lines (TDL) to obtain the values $\hat{y}(k-1), \hat{y}(k-2), \ldots, \hat{y}(k-N_y)$ and $u(k-1), u(k-2), \ldots, u(k-N_u)$. In this case, the NARX network contains one hidden layer, as well as TDL elements for the input and output signals. The hidden layer neurons are sigmoidal (a linear input part and the hyperbolic tangent as the activation function); the output layer neurons are linear, i.e., in addition to the linear input part, they also have a linear activation function. The variable parameters in training the network are the weights of its connections; the RTRL (Real-Time Recurrent Learning) algorithm was used for training. The hyperparameters in the NARX network training task are the number of neurons in the hidden layer, the number of cells in the TDL (delays), and the maximum allowable number of iterations of the training process. The number of neurons in the hidden layer and the number of delays are adjusted experimentally. In particular, the results shown in Figure 7 were obtained for 15 neurons in the hidden layer (a range of 10 to 20 was investigated) and for two delays in the TDL element (a range of 1 to 10 was investigated).
Each iteration of the training process, called a training epoch, consists of running through the network all examples from the training set, which contains 2000 examples (time interval of 40 s, sampling step 0.02 s).
The training of the considered ANN model is performed in the standard way [62,63] as a process of minimizing the sum of squared errors $e(w) = y(w) - \hat{y}(w)$ over the whole training sequence of length $N$:
$$E(w) = \frac{1}{2} e^T(w) e(w), \quad e = [ e_1, e_2, \ldots, e_N ]^T. \qquad (30)$$
Here, y ( w ) represents the output values of the modeled object (they are taken from the training set), and y ^ ( w ) is the output estimate obtained by the ANN model for the current value of the set of its adjustable parameters w.
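The closed-loop operation of a NARX-type model, in which the delay line is fed the model's own past outputs, can be sketched as follows. The function `f` here stands in for the trained network; the linear stand-in `f_lin` and all numeric values are illustrative assumptions, not the network described above:

```python
def narx_rollout(f, y_init, u_seq, n_y, n_u, w):
    """Closed-loop NARX simulation: the next output estimate is a function f
    of the last n_y outputs and the last n_u controls, and the model's own
    past outputs are fed back through the output delay line."""
    y = list(y_init)                   # seed with n_y known initial outputs
    for k in range(len(y), len(u_seq)):
        y_hist = y[k - n_y:k]          # tapped delay line on the output
        u_hist = u_seq[k - n_u:k]      # tapped delay line on the control
        y.append(f(y_hist, u_hist, w))
    return y

# Toy stand-in for the trained mapping f(.): a stable linear recursion
# driven by a unit step in the control channel.
f_lin = lambda yh, uh, w: w[0] * yh[-1] + w[1] * uh[-1]
traj = narx_rollout(f_lin, [0.0, 0.0], [1.0] * 50, n_y=2, n_u=2, w=(0.9, 0.1))
```

With these weights the recursion has the fixed point $y = 0.9y + 0.1$, so the rollout converges toward 1.0; the same feedback structure is what makes closed-loop NARX prediction errors accumulate over time.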

3.2. Example of an Empirical Neural Network Model of Aircraft Motion

As an example, let us consider one of the problems of modeling the motion of controlled dynamical systems that we have solved, namely, the problem of controlling the longitudinal angular motion of a supersonic transport plane (SST) [118,119]. In flight dynamics, such motion is described by a mathematical model of the following form [33,36]:
$$\dot{\alpha} = q - \frac{\bar{q} S}{m V} C_L(\alpha, q, \delta_e) + \frac{g}{V}, \quad \dot{q} = \frac{\bar{q} S c}{J_y} C_m(\alpha, q, \delta_e), \quad T^2 \ddot{\delta}_e = -2 T \zeta \dot{\delta}_e - \delta_e + \delta_e^{act}, \qquad (31)$$
where $\alpha$ is the angle of attack, deg; $q$ is the pitch angular velocity, deg/s; $\delta_e$ is the deflection angle of the elevator, deg; $C_L$ is the lift coefficient; $C_m$ is the pitching moment coefficient; $m$ is the mass of the aircraft, kg; $V$ is the airspeed, m/s; $\bar{q} = \rho V^2 / 2$ is the dynamic pressure; $\rho$ is the air density, kg/m³; $g$ is the acceleration of gravity, m/s²; $S$ is the wing area, m²; $c$ is the mean aerodynamic chord, m; and $J_y$ is the pitching moment of inertia, kg·m². The dimensionless coefficients $C_L$ and $C_m$ are nonlinear functions of the angle of attack; $T$ and $\zeta$ are the time constant and relative damping factor of the elevator actuator; and $\delta_e^{act}$ is the command signal for the elevator actuator, limited to $\pm 25$ deg.
In the model (31), the variables $\alpha$, $q$, $\delta_e$, and $\dot{\delta}_e$ are the aircraft states; the variable $\delta_e^{act}$ is the aircraft control.
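Training data for a model of this kind are produced by numerical integration. A sketch with a fixed-step fourth-order Runge–Kutta integrator is given below; all numeric parameters and the linear placeholders for $C_L$ and $C_m$ are purely illustrative assumptions (in the model itself these coefficients are nonlinear functions of the angle of attack, and no actual SST characteristics are reproduced here):

```python
# Illustrative parameters (NOT actual SST data).
m, V, S, c, Jy, g = 150000.0, 250.0, 360.0, 15.0, 2.0e7, 9.81
rho = 0.4
qbar = rho * V ** 2 / 2          # dynamic pressure
T_act, zeta = 0.05, 0.7          # actuator time constant and damping

def C_L(alpha, q, de):           # placeholder: linear in its arguments
    return 0.1 * alpha + 0.02 * de

def C_m(alpha, q, de):           # placeholder: static stability + damping
    return -0.02 * alpha - 0.3 * q - 0.05 * de

def rhs(x, de_act):
    """Right-hand side of the model; state x = (alpha, q, de, de_dot)."""
    alpha, q, de, de_dot = x
    alpha_dot = q - qbar * S / (m * V) * C_L(alpha, q, de) + g / V
    q_dot = qbar * S * c / Jy * C_m(alpha, q, de)
    de_ddot = (-2 * T_act * zeta * de_dot - de + de_act) / T_act ** 2
    return (alpha_dot, q_dot, de_dot, de_ddot)

def rk4_step(x, de_act, dt):
    """One fixed-step RK4 integration step."""
    k1 = rhs(x, de_act)
    k2 = rhs(tuple(a + dt / 2 * b for a, b in zip(x, k1)), de_act)
    k3 = rhs(tuple(a + dt / 2 * b for a, b in zip(x, k2)), de_act)
    k4 = rhs(tuple(a + dt * b for a, b in zip(x, k3)), de_act)
    return tuple(a + dt / 6 * (b + 2 * c2 + 2 * c3 + c4)
                 for a, b, c2, c3, c4 in zip(x, k1, k2, k3, k4))
```

Each integration step from $x(t_k)$ under a control value $\delta_e^{act}(t_k)$ yields one training example $\langle x(t_k), u(t_k), x(t_{k+1}) \rangle$ in the sense of Figure 2.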
The source data for training the considered ANN model were generated using a random step signal $u(t) = \delta_{in}(t)$ fed to the input of the dynamical system. This signal is a sequence of steps with the following parameter values: elevator deflection angle in the range from $-10$ to $10$ degrees, with each step held for 0.25 to 0.5 s. The time step was chosen to be 0.01 s. The total length of the sequence was 40 s. An example of such a signal is presented in the upper part of Figure 7.
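Such a random step excitation can be generated as follows (a sketch; the seed and the uniform sampling of step amplitude and hold duration are illustrative assumptions):

```python
import random

def random_step_signal(total_time, dt, amp_range, hold_range, seed=0):
    """Piecewise-constant excitation: a new amplitude is drawn uniformly from
    amp_range after each hold interval drawn uniformly from hold_range."""
    rng = random.Random(seed)
    n = int(round(total_time / dt))
    signal, t_next, level = [], 0.0, 0.0
    for i in range(n):
        t = i * dt
        if t >= t_next:                       # start a new step
            level = rng.uniform(*amp_range)
            t_next = t + rng.uniform(*hold_range)
        signal.append(level)
    return signal

# 40 s signal sampled at 0.01 s: steps of -10..10 deg held for 0.25..0.5 s.
u = random_step_signal(40.0, 0.01, (-10.0, 10.0), (0.25, 0.5))
```

Short hold intervals relative to the system's time constants are what make this signal informative: the system is repeatedly excited before its previous transient has settled.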
The system under study responds to the input signal u ( t ) by changing its state x ( t ) . The pair { t , u ( t ) } in this case is the source information for generating a set of training examples. Each of the examples p included in the training set P should show the reaction of the system to some combination x , u . By such a response we mean the state x ( t k + 1 ) to which the system (31) will move from the state x ( t k ) at the value u ( t k ) of the control action (see also Figure 2):
$$
\{\, x(t_k), u(t_k) \,\} \;\xrightarrow{\;F(x,\,u,\,t)\;}\; x(t_{k+1}).
$$
Thus, the example p from the training set P will include two parts, namely, the input (this is the pair x ( t k ) , u ( t k ) ) and the output (this is the response of the system x ( t k + 1 ) ); see also Figure 2.
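The excitation signal and training-set construction described above can be sketched as follows; the one-step integrator `one_step` is a placeholder for any discretization of the system dynamics, and the parameter values follow the text.

```python
import random

def random_step_signal(total=40.0, dt=0.01, lo=-10.0, hi=10.0,
                       hold_min=0.25, hold_max=0.5, seed=0):
    """Random step excitation: amplitudes in [lo, hi] deg,
    each step held 0.25-0.5 s, sampled every dt seconds."""
    rng = random.Random(seed)
    n_total = round(total / dt)
    u = []
    while len(u) < n_total:
        amp = rng.uniform(lo, hi)
        n_hold = max(1, round(rng.uniform(hold_min, hold_max) / dt))
        u.extend([amp] * n_hold)
    return u[:n_total]

def make_training_set(x0, u, one_step):
    """Examples ((x(t_k), u(t_k)) -> x(t_k+1)), produced by simulating
    the system one step at a time with the supplied integrator."""
    examples, x = [], x0
    for u_k in u:
        x_next = one_step(x, u_k)
        examples.append(((x, u_k), x_next))
        x = x_next
    return examples
```

Each element of `examples` is one training pair p: the input part (x(t_k), u(t_k)) and the output part x(t_{k+1}).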
To implement the scheme described above, a number of variants with different numbers of neurons in the hidden layer, delays, and training epochs were tried with respect to the NARX network. The results for one of these variants are shown in Figure 7.
As can be seen from Figure 7, the modeling error reaches unacceptable values at some time instants. Different combinations of values for the network and training process parameters did not improve the results. In particular, increasing the delay parameter (i.e., the depth of the prehistory taken into account in (29)) required a sharp increase in the training time. Within the given constraints on computational resources, this requirement could not be met.
It should be noted here that such intensive elevator operation, i.e., sharp and frequent changes in the command signal δ_e^act of the elevator actuator, is very unlikely in real practice. In other words, we obtain training data “with a large margin” for aircraft operation modes, i.e., under conditions much more severe than those that will occur in real flight. Nevertheless, these data allow us to probe the limits of the capabilities of black-box ANN models for the considered class of modeling tasks.
Within the considered approach, we formed and used ANN models of the black-box type for aircraft of different classes [120,121,122,123,124]. Our experience shows that the threshold complexity of the flight dynamics problem for such models in most cases is the longitudinal angular motion problem (31) just considered. In more complicated problems, as a rule, it is not possible to achieve acceptable generalization properties from these models, i.e., their reasonable accuracy on test data. The same will be true for dynamical systems of other classes similar in complexity level, which is determined by the total number of state and control variables, the ranges of variation in the values of these variables, and the number of samples in each of these ranges.
As noted in the introduction, one possible way to overcome this limitation is to move from purely neural network recurrent models of the black-box type to hybrid ANN models of the gray-box type. These issues are discussed in the next section, which compares empirical and semi-empirical ANN models of controllable dynamical systems. In the same section, two more examples of ANN models of the black-box type are discussed. The results obtained for them are compared with the results for semi-empirical ANN models for the same controlled objects.

4. Semi-Empirical Models of Controlled Dynamical Systems

In this section, as in the previous one, we consider the problem of mathematical and computer modeling of nonlinear controllable dynamical systems operating under numerous and diverse uncertainties, including uncertainties associated with insufficient and inaccurate knowledge about the modeling object and its operating conditions.
As it was shown earlier, empirical ANN models have serious limitations in the complexity level of solved problems. This is due to the high dimensionality of such models, i.e., the significant number of connections in them, which causes the need for numerous training examples. In this regard, there arises the problem of reducing the dimensionality of the created model in such a way that its flexibility does not degrade. One of the possible ways to solve this problem is through the transition to semi-empirical networks that combine the capabilities of theoretical and neural network modeling.
This area already has a long history of development [49,50,51,52,53,54,55]. Its premises are as follows. There are situations when, using only the available data on the behavior of the object under study, we cannot obtain a model of the required accuracy. At the same time, quite often we also have at our disposal theoretical knowledge about this object, which in the case of a dynamical system takes the form of a system of differential equations. When forming an empirical model, we do not use this knowledge at all. When the empirical approach fails to meet the accuracy requirements, a natural question arises: can the available theoretical knowledge about the object be brought in to meet them? The answer to this question is hybrid models of the gray-box type. Such models can be obtained for both uncontrolled and controlled dynamical systems.
In particular, a field called physics-informed neural networks (PINN) is being actively developed [125,126,127,128,129,130,131,132,133]. This field is mainly focused on modeling uncontrolled dynamical systems for which the available theoretical knowledge is expressed in the form of ordinary differential equations or partial differential equations. Even if the system under consideration contains control variables, it is generally assumed that their form is specified. This approach, despite all its advantages, is of limited use for obtaining models of controllable dynamic systems oriented to the needs of adaptive control systems.
We are developing an alternative variant of the gray box models [56,57,58,59,60,61,123,134,135,136], which we call semi-empirical because of their hybrid nature: They are based on an empirical black-box model (recurrent neural network), as it ensures the model’s adaptability. At the same time, knowledge about the modeled object, which takes the form of a system of ordinary differential equations, is embedded in such a network. This approach, as will be demonstrated below, allows us to drastically reduce the size of the resulting model and substantially raise the complexity threshold of the modeling and identification problems to be solved.

4.1. Semi-Empirical Model Development

The formation of semi-empirical ANN models consists of the following steps [56,57,58]:
1. Obtaining a theoretical continuous-time model for the dynamical system under study and collecting available experimental data on the behavior of this system;
2. Accuracy assessment for the theoretical model of the dynamical system on the available data; if its accuracy is insufficient, hypothesizing about the reasons for it and possible ways to eliminate them;
3. Transformation of the source continuous-time model into a discrete-time model;
4. Formation of a neural network representation for the obtained discrete-time model;
5. Training of the neural network model;
6. Evaluation of the accuracy of the trained neural network model;
7. Correction, in case of insufficient accuracy, of the neural network model by making structural changes to it.
Let us assume that the source theoretical model of the object under consideration has already been obtained in one way or another. A typical situation is when such a model is a system of differential equations. In particular, this is the case in the flight dynamics of aircraft. Since this is a continuous-time model and its implementation is carried out in a digital environment, the first step is to discretize the source model. This operation is necessary to obtain a dynamical system with discrete time, for which the corresponding recurrent neural network is built. The choice of the discretization method [42,43,44,45] plays an important role since the consequences of this choice affect the stability of the resulting discrete-time model. The algorithmic basis for discretization of continuous-time models is numerical methods for solving ordinary differential equations (ODE) combined with experience in solving various kinds of such problems. The transition to discrete time in the problem (2) can be based on almost any of the known methods of numerical integration for ODE systems [42,43,44,45]. As an example, to compare semi-empirical models with empirical ones, we will use two explicit difference schemes, namely the 1st order Euler scheme and the 4th order Adams scheme:
  • Euler difference scheme:
    $$x(k+1) = x(k) + \Delta t \cdot f(k); \tag{32}$$
  • Adams difference scheme:
    $$x(k+1) = x(k) + \frac{\Delta t}{24} \left[\, 55\, f(k) - 59\, f(k-1) + 37\, f(k-2) - 9\, f(k-3) \,\right]. \tag{33}$$
In (32) and (33), the following notations are used:
$$x(k) = x(t_k), \quad f(k) = f(t_k, x(k)), \quad t = t_0, t_1, \ldots, t_k, \ldots, t_N, \quad \Delta t = t_k - t_{k-1}.$$
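The two schemes can be written out as one-step update functions; in this minimal sketch, `f_k1`, `f_k2`, `f_k3` denote the stored right-hand-side values from the three preceding steps.

```python
def euler_step(x, f_k, dt):
    """Explicit Euler scheme (32): x(k+1) = x(k) + dt * f(k)."""
    return [xi + dt * fi for xi, fi in zip(x, f_k)]

def adams4_step(x, f_k, f_k1, f_k2, f_k3, dt):
    """Explicit 4th-order Adams-Bashforth scheme (33). The first three
    steps of a simulation must be supplied by a starter method
    (e.g., Euler or Runge-Kutta) before this scheme can be applied."""
    return [xi + dt / 24.0 * (55.0 * a - 59.0 * b + 37.0 * c - 9.0 * d)
            for xi, a, b, c, d in zip(x, f_k, f_k1, f_k2, f_k3)]
```

Note that for a constant right-hand side the Adams coefficients (55 − 59 + 37 − 9 = 24) reduce the step to the Euler update, a quick sanity check on the scheme.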
It is well known that any mathematical model or algorithm can be represented as a network structure. In the case under consideration, related to ordinary differential equations and algorithms for their numerical integration, this structure has a recurrent form similar to the structure of recurrent neural networks. Obviously, different source continuous-time models will lead to a variety of recurrent networks. The problem of unification of such networks arises, whose solution avoids the need to build for each of the networks its own training algorithm. Such a solution was proposed in [51,53]. It is based on reducing the original model to some “canonical” form. This problem can be solved for any recurrent neural network, and its final canonical form will be the minimum complexity model in the state space. The canonical form of the recurrent neural network is given in Figure 8, which shows that the core of this representation is a feedforward neural network, and all feedbacks available in the model are external to the core of this model and contain only unit delays.

4.2. Semi-Empirical Model of a Simple Dynamical System

In order to compare the capabilities of the empirical and semi-empirical models, we consider two examples. The first one involves a dynamical system described by the following equations:
$$\dot{x}_1(t) = \left( x_1(t) + 2 x_2(t) \right)^2 + u(t), \tag{34}$$
$$\dot{x}_2(t) = 8.322109 \sin(x_1(t)) + 1.135\, x_2(t). \tag{35}$$
The dynamical system defined by these relations uses part of the example given in [52] as a prototype. This example is not a model of any particular system; it is purely formal (in [52] it is referred to as “a didactic example”). The purpose of the system (34), (35) is to illustrate the procedure of semi-empirical model formation, as well as its superiority over a purely empirical neural network model in the form of a NARX network.
In problems (34) and (35), the adopted discretization schemes allow us to obtain the canonical representation of the network either immediately (for the explicit Euler scheme (32), Figure 9) or after minor adjustments to the initial non-canonical version (for the explicit Adams scheme (33), Figure 10). Some graphical simplification is introduced in Figure 10 in order not to clutter this figure with redundant elements. Namely, the simplified form shows the formation of the Adams scheme from the values of f ( k ) , f ( k 1 ) , f ( k 2 ) , and f ( k 3 ) , which are vector functions of the right-hand sides of the equations of motion. Detailed information about the order of formation of this scheme can be obtained from the relation (33) describing it.
Equations (34) and (35) will be used to obtain training and test data in the manner described above. Once these data are obtained, they are used to form a model of the system under consideration, which will be either empirical (black box) or semi-empirical (gray box). In the case of the black-box type model (this is the NARX network), it is assumed that the kinds of both relations (34) and (35) are unknown to us (see Figure 11). For the gray-box model, we will assume that only the kind of relation (35) is unknown, while Equation (34) represents the available theoretical knowledge about the dynamical system under consideration. In such a situation, we deal with a source model of this kind:
$$\dot{x}_1(t) = \left( x_1(t) + 2 x_2(t) \right)^2 + u(t), \qquad \dot{x}_2(t) = \varphi(x_1(t), x_2(t)), \tag{36}$$
where φ(x_1(t), x_2(t)) is the unknown relationship that needs to be reconstructed during the training of the gray-box model using the available experimental data on the behavior of the dynamical system under consideration. As mentioned above, the results of the integration of Equations (34) and (35) with given initial conditions, integration step, and control function u(t) are used as these data.
To obtain a gray-box neural network model, including Equation (34) and the relation corresponding to the unknown second equation in the model under consideration, two steps are required. The first step consists of going from the original continuous-time model, i.e., the differential equations, to the discrete-time model, i.e., the difference equations. For this purpose, we will use two explicit difference schemes, namely the 1st order Euler scheme (32) and the 4th order Adams scheme (33).
We consider the system (36) on the time interval t ∈ [0, 100] s with a sampling step Δt = 0.025 s and initial conditions x_1(0) = x_2(0) = 0. The state vector is partially observable: y(t) = x_2(t), with additive white noise with root-mean-square (RMS) deviation σ = 0.01 affecting the system output y(t).
Ideally, when the neural network model reproduces the original system of differential equations (34) and (35), the modeling error will be completely determined by the noise affecting the system output. It follows that the comparison of the modeling error with the noise RMS allows us to estimate the degree of success in solving the modeling problem of the system under consideration, and the given value of the noise RMS can be taken as the target value of the modeling error.
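The gray-box construction just described can be sketched as a simulation routine. In this sketch, the simple basis {sin x₁, x₂} with two weights w stands in for the feedforward subnet that the paper trains, the step Δt = 0.025 and zero initial conditions follow the text, and the input sequence, horizon, and omission of observation noise are illustrative simplifications.

```python
import math

def rollout(w, u_seq, dt=0.025):
    """Hybrid (gray-box) simulation of system (36) via the Euler scheme (32):
    the first state equation is taken from theory (Eq. 34), while the second
    uses the trainable surrogate phi(x1, x2) = w[0]*sin(x1) + w[1]*x2,
    a stand-in for the feedforward subnet used in the paper."""
    x1 = x2 = 0.0
    y = []
    for u in u_seq:
        f1 = (x1 + 2.0 * x2) ** 2 + u            # known equation (34)
        f2 = w[0] * math.sin(x1) + w[1] * x2     # surrogate for unknown (35)
        x1, x2 = x1 + dt * f1, x2 + dt * f2      # Euler step (32)
        y.append(x2)                             # only x2 is observed
    return y

def loss(w, u_seq, y_obs):
    """Mean squared simulation error: the criterion an optimizer
    (Levenberg-Marquardt in the paper) would minimize over w."""
    y = rollout(w, u_seq)
    return sum((a - b) ** 2 for a, b in zip(y, y_obs)) / len(y)
```

Fitting w by any gradient-based optimizer then recovers the unknown relation from input/output data alone, which is the essence of the gray-box approach.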
Training was carried out in Matlab for LDDN (Layered Digital Dynamic Networks) networks using the algorithm common to this class of networks [63]. The Levenberg–Marquardt algorithm is used to optimize the network parameters, and the Jacobian matrix is calculated using the RTRL (Real-Time Recurrent Learning) algorithm [46]. The RMS prediction error of the model on the training sample is used as the optimization criterion:
$$RMSE = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \left( d_i - y_i \right)^2 }, \tag{37}$$
where N is the number of elements in the training set, d = (d_1, …, d_N) is the vector of target values of the observed variable (according to the model (34), (35)), and y = (y_1, …, y_N) is the vector of outputs of the neural network model.
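Equation (37) translates directly into code; this one-liner is the criterion used throughout the comparisons below.

```python
import math

def rmse(d, y):
    """RMS prediction error of Eq. (37) over the N-element training set."""
    return math.sqrt(sum((di - yi) ** 2 for di, yi in zip(d, y)) / len(d))
```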
In the case under consideration, the kind of relation (35) shows that the recoverable right-hand side of the second equation of this model must be nonlinear and have both state variables, x 1 ( t ) and x 2 ( t ) , as arguments. In the general case, however, these facts are not known to us. Therefore, we need to make a sequence of hypotheses concerning the kind of right-hand side of the second equation in (36). These hypotheses should be ordered by the complexity of the resulting relation: on which variables it depends and whether this relation will be linear or nonlinear on the selected variables. The reasonableness of this approach is determined by the fact that as the complexity of this relationship increases, the number of connections in the neural network that will reproduce it also increases. According to the well-known result [46,63], the dimensionality of the network (i.e., the number of connections in it) and the dimensionality of the training set (the number of examples in it) are strictly related. Consequently, using more complicated relations for the right-hand side of the second equation in (36) will require a larger number of training examples and, consequently, an increase in the time required to train the model. It is for this reason that one should introduce a sequence of hypotheses regarding the kind of relationship to be recovered and then experimentally determine the lowest complexity relationship that allows for the required modeling accuracy.
In the considered example, we omit all intermediate steps and define the reconstructed relation as a nonlinear function of two variables, φ ( x 1 ( t ) , x 2 ( t ) ) , which depends on both state variables x 1 ( t ) and x 2 ( t ) but does not depend on the control variable u ( t ) . The structural diagram of the network corresponding to this variant of the ANN model for the Euler discretization scheme is shown in Figure 9; similarly, it can be obtained in the case of the Adams scheme (Figure 10). The accuracy of the obtained model (denoted as DS-1 in Figure 11), estimated by the relation (37), is 0.01394 for the Euler method and 0.01219 for the Adams method, with the target value of this index equal to 0.01.
To compare with the traditional approach, let us demonstrate the results of training the NARX model. The best accuracy of 0.02821 in this example was achieved by a network with three neurons in the hidden layer and five delays in the feedback. From this comparison, we can see the undeniable advantage of the semi-empirical model over the empirical model in terms of achievable solution accuracy: even for Euler’s method, the accuracy is 0.01394 versus 0.02821 for NARX. In the case of Adams’ method, the accuracy is even higher. It is equal to 0.01219, and it is possible to improve the accuracy even further by using more efficient difference schemes.
An analysis similar to the one described above was also performed for two more variants of the considered system (34) and (35), denoted as DS-2 and DS-3 in Figure 11. First, instead of Equation (35), this system uses its complicated version with a mixture of harmonics on the right-hand side:
$$\dot{x}_2(t) = 8.322109 \sin(x_1(t)) + 0.7 \cos(1.33 \pi x_2(t)). \tag{38}$$
In the second variant, Equation (35) is replaced by a relation that contains a more complex mixture of harmonics than in (38):
$$\dot{x}_2(t) = 8.322109 \sin(x_1(t)) + 0.7 \cos^2(1.33 \pi x_2(t)). \tag{39}$$
The results of calculations for these variants, including for the NARX model, are presented in Figure 11.
A comparison of the experimental results for all three examples allows us to draw the following conclusions.
The kind of difference scheme used influences the accuracy of modeling. In this regard, the problem of choosing an adequate (the best in the ideal case) discretization scheme for a particular applied problem to be solved arises.
In order to eliminate this influence of the difference scheme on the obtained result, we proposed and implemented an alternative variant of the formation and training of the semi-empirical model [60]. Its essence is as follows. Section 4.1 presents the sequence of steps performed during semi-empirical modeling: the transformation of the original continuous-time model into a discrete-time model precedes the formation of a neural network representation for the resulting discrete-time model, as well as its training. This means that if the difference scheme used for discretization proves unsuccessful, it must be replaced by another one, and this operation may have to be repeated many times to obtain the desired result. Each such replacement must be accompanied by retraining of the resulting semi-empirical model, which consumes significant computational resources. In the alternative variant, training is performed for the source model in continuous time, and only afterwards is the discretization, necessary for operation in a digital environment, performed. In this case, the algorithm for training the model in continuous time is based on the method of sensitivity functions, long known in automatic control theory [137]. Of course, training a continuous-time model is more time-consuming than training a discrete-time one, but it has to be conducted only once rather than repeatedly. The final decision as to which training option to use, discrete or continuous, depends on how many times a suitable difference scheme would have to be re-selected in the first of these two options.
The second important conclusion is related to the comparison of results for empirical and semi-empirical models. It can be seen that semi-empirical models are many times more accurate than purely empirical ones. At the same time, the gap in modeling accuracy increases with the complexity of the behavior of the system under consideration. In this regard, in those cases where the accuracy obtained from the empirical model is insufficient and there is theoretical knowledge about the modeling object, the transition to a model of a semi-empirical type has a significant effect.

4.3. Semi-Empirical Model of Longitudinal Angular Motion of an Aircraft

The basic idea binding the examples discussed below is as follows. We want to solve not only the problem of system identification, in which the available experimental data show the inputs applied to the system and its reactions to them, while the model converting inputs to outputs is a black box. We are even more interested in the very important task of identifying the system characteristics, first of all, the aerodynamic characteristics of the aircraft. If we are able to solve this problem, then, substituting the obtained relations into the equations of motion, we obtain a model of the aircraft as a control object as a whole. To solve this problem within the framework of the semi-empirical approach, we form a hybrid motion model whose basis is one of the traditional models of flight dynamics in the form of a system of ordinary differential equations. We transform it into a network structure that includes, as adjustable elements, feedforward subnets implementing the nonlinear functions describing the corresponding dimensionless coefficients of aerodynamic forces and moments. All other elements of this network structure, which do not belong to the subnets for the aerodynamic coefficients, obtain the corresponding numerical values of the connection weights from the source motion model in the form of ODEs, after which they are frozen. Thus, we obtain a gray-box model, which, unlike black-box models, has significantly fewer adjustable parameters, which greatly simplifies the training process.
The second of two examples that demonstrate the potential of a semi-empirical approach to modeling and identifying controllable dynamical systems involves the longitudinal angular motion of an airplane. We have obtained results that allow us to assess how effective this approach would be in comparison with empirical ANN models (NARX-type recurrent networks).
The theoretical model of aircraft motion is usually a system of first-order ordinary differential equations in normal form [33,34,35]. For example, to describe the full angular motion of an aircraft with pitch, yaw, and roll control, a system of 14 such equations is used, which in general form looks as follows:
$$\dot{x} = f(x, u, t), \tag{40}$$
where x = (φ, ψ, θ, p, r, q, α, β, δ_r, δ_e, δ_a, δ̇_r, δ̇_e, δ̇_a) are the state variables and u = (δ_e^act, δ_a^act, δ_r^act) are the controls. Here, φ, ψ, and θ are the roll, yaw, and pitch angles; p, r, and q are the roll, yaw, and pitch angular rates; α and β are the angle of attack and the sideslip angle; δ_r, δ_e, δ_a and δ̇_r, δ̇_e, δ̇_a are the deflection angles and angular rates of the rudder, elevator, and ailerons; and δ_e^act, δ_a^act, and δ_r^act are the command signals of the elevator, aileron, and rudder actuators.
The right-hand sides of the equations in the model (40) contain the following dimensionless coefficients of aerodynamic forces and moments:
$$C_D(\alpha, \beta, \delta_e, q), \quad C_L(\alpha, \beta, \delta_e, q), \quad C_y(\alpha, \beta, \delta_r, \delta_a, p, r), \quad C_l(\alpha, \beta, \delta_e, \delta_r, \delta_a, p, r), \quad C_n(\alpha, \beta, \delta_e, \delta_r, \delta_a, p, r), \quad C_m(\alpha, \beta, \delta_e, q). \tag{41}$$
It is the incomplete and inaccurate knowledge of the nonlinear functions (41) that is the main source of uncertainties that make it very difficult to obtain motion models of the kind (40).
There are two approaches to the formation of dependencies (41): direct and indirect. Let us illustrate the relationship between these two approaches with the example of the modeling problem for longitudinal angular motion of an airplane and identification of the lift force and pitching moment coefficients (see Figure 12).
In the previous example, the model (34), (35) served as the source of training and test data for obtaining neural network models. Similarly, we will use as such a source the model (31), traditional for aircraft flight dynamics [33,34,35], which we introduced earlier in the section on empirical ANN models of motion. As an example of a specific modeling object, we considered a maneuverable F-16 aircraft, the source data for which were taken from [111]. A computational experiment with the model (31) was conducted for the time interval t ∈ [0, 20] s with a sampling step Δt = 0.02 s for the partially observed state vector y(t) = [α(t); q(t)]ᵀ, with additive white noise with standard deviation σ = 0.01 affecting the system output y(t).
This motion model can be used in its original form, i.e., as a system of ordinary differential equations, or as a semi-empirical neural network model built based on these equations. In the first case, it is required to somehow specify the functions C L ( α , β , δ e , q ) and C m ( α , β , δ e , q ) , which are included in the right-hand sides of the equations (31). In the second case, these functions are included as separate modules in a gray-box type neural network. These functions can be represented in various ways, including kind of feedforward neural networks. The direct and indirect approaches mentioned above differ in the kind of available source data and the way of obtaining the required functions.
In the direct approach, we have data obtained from wind tunnel tests or computational fluid dynamics methods. The results obtained by these techniques have the kind of tabulated functions (41) that serve as a source of both training and test data. In this case, a separate feedforward neural network is formed for each of the aerodynamic coefficients. These networks, as separate functions, can then be inserted into the appropriate places in a model (40) or similar, after which this model can be used in the traditional way as a system of differential equations. An example of such a substitution can be seen in Figure 12.
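The direct approach amounts to a standard regression: fit a feedforward network to a table of coefficient values. The sketch below is a minimal stand-in for the per-coefficient networks described in the text: a one-input, one-hidden-layer tanh network trained by plain stochastic gradient descent (the paper uses Matlab's Levenberg–Marquardt training instead), with an illustrative table and hyperparameters.

```python
import math, random

def fit_coeff_net(table, n_hidden=6, lr=0.05, epochs=4000, seed=1):
    """Fit a one-hidden-layer tanh network to a tabulated aerodynamic
    coefficient, e.g., pairs (alpha, C_L(alpha)) from wind-tunnel data.
    Returns the trained network as an ordinary callable."""
    rng = random.Random(seed)
    w1 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]   # input weights
    b1 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]   # hidden biases
    w2 = [rng.uniform(-0.5, 0.5) for _ in range(n_hidden)]   # output weights
    b2 = 0.0

    def forward(a):
        h = [math.tanh(w1[j] * a + b1[j]) for j in range(n_hidden)]
        return sum(w2[j] * h[j] for j in range(n_hidden)) + b2, h

    for _ in range(epochs):
        for a, target in table:
            y, h = forward(a)
            e = y - target                           # output error
            for j in range(n_hidden):
                dh = e * w2[j] * (1.0 - h[j] ** 2)   # backprop through tanh
                w2[j] -= lr * e * h[j]
                w1[j] -= lr * dh * a
                b1[j] -= lr * dh
            b2 -= lr * e
    return lambda a: forward(a)[0]
```

The resulting callable can then be substituted into the equations of motion (40) in place of the tabulated coefficient, exactly as described for the direct approach.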
In the indirect approach, as seen in Figure 12, the input data are state and control variables obtained as functions of time during a flight or computer experiment (as in the previous example). Thus, unlike the direct approach, there are no tabulated functions (41) at our disposal. We have only data characterizing the behavior of our object under given conditions, which is influenced by the values of aerodynamic forces and moments. If all other factors affecting the behavior of the aircraft are fixed, then in this case it is possible to set and solve the problem of identification for aerodynamic coefficients from the above-listed experimental data. It has already been mentioned above that neural network models of the semi-empirical type can solve this problem. Figure 12 shows how such a model looks as applied to the longitudinal angular motion of an aircraft.
It should be emphasized that purely empirical models, in particular the recurrent network of the traditional kind, allow us to solve only the problem of system identification, i.e., they allow us to obtain a model of the control object as a whole. However, no less important is also the problem of identifying the characteristics of the object under consideration, for example, the coefficients of aerodynamic forces and moments in the case of an aircraft. Models of the black-box type do not provide an opportunity to solve problems of this kind. At the same time, models of the gray box type, i.e., semi-empirical models, have such a possibility. They allow for solving both the problem of system identification and the problem of characteristic identification.
This fact is of great importance, since the task of identifying the aerodynamic characteristics of an aircraft from experimental data is one of the most important for flight dynamics and control. The approach to solving this problem based on semi-empirical neural network modeling differs significantly from the traditional one.
Namely, the traditional approach is based on the linearization of the functions (41) using their Taylor series expansion with terms not higher than first order. In this case, the objects to be restored in the identification problem are the expansion coefficients, which include the derivatives C_L^α = ∂C_L/∂α, C_m^α = ∂C_m/∂α, etc. Moreover, this operation is performed “at a point”, i.e., for a particular combination of the arguments of these functions and not for the whole domain of their definition.
At the same time, the semi-empirical approach recovers the nonlinear functions of several arguments, C_D, C_L, C_y, C_l, C_n, and C_m, as holistic objects over the whole range of variation of their arguments. As can be seen from Figure 12, these functions are localized inside the semi-empirical model (they are highlighted in Figure 12). Each of them has its own feedforward network. The training algorithm of this model selects the values of the connection weights of these networks, while the weights of the other connections, obtained from its theoretical part, are frozen. After the training process is completed, the ANN modules realizing the functions (41) can be extracted from the semi-empirical model and, if necessary, placed as ordinary functions in the model represented by the differential equations of motion. In particular, as shown in Figure 12, the ANN modules for the functions C_L(α, q, δ_e) and C_m(α, q, δ_e) can be used in the right-hand sides of the equations describing the longitudinal angular motion of the airplane.
In some cases, for the restored functions C_L(α, β, δ_e, q), C_m(α, β, δ_e, q), etc., we need to find the values of their partial derivatives, for example, C_L^α = ∂C_L/∂α, C_l^β = ∂C_l/∂β, C_m^α = ∂C_m/∂α, etc. For example, the variation of the partial derivative of the pitching moment coefficient with respect to the angle of attack provides useful information for flight dynamics and control specialists.
If necessary, the partial derivatives of the functions C D ( · ) , C L ( · ) , …, C m ( · ) , given in the form of ANN modules, can be computed using an algorithm similar to the error backpropagation algorithm used to train the ANN model as a whole.
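Assuming the extracted ANN module is available as an ordinary callable, its partial derivatives can also be estimated numerically by central differences, a simple alternative to the backpropagation-style differentiation mentioned above. The coefficient function below is a hypothetical linear stand-in for a trained module.

```python
def partial(f, args, i, h=1e-5):
    """Central-difference estimate of the partial derivative of f
    with respect to its i-th argument."""
    up = list(args); up[i] += h
    dn = list(args); dn[i] -= h
    return (f(*up) - f(*dn)) / (2.0 * h)

# Hypothetical stand-in for an extracted ANN module C_L(alpha, q, delta_e):
C_L_module = lambda alpha, q, de: 0.08 * alpha + 0.01 * q + 0.02 * de

# Estimate of dC_L/dalpha at a chosen flight condition:
C_L_alpha = partial(C_L_module, (4.0, 0.5, -1.0), 0)
```

Unlike the traditional linearization “at a point”, such estimates can be evaluated anywhere in the domain of the recovered function.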
Applying to (31) the above-mentioned procedure of forming a semi-empirical ANN model leads to the structure shown in Figure 13 (it is based on the Euler discretization scheme). A similar structure for the case of a fully empirical model (NARX) corresponding to the same problem (31) is shown in Figure 14. For the case of the Adams scheme, a semi-empirical model can be constructed in a manner similar to that shown in Figure 11 for the previous example.
As it was mentioned earlier, one of the critical issues arising in the formation of empirical and semi-empirical ANN models is obtaining a training set that provides an adequate representation of the peculiarities of the behavior of the modeled system. This formation is carried out by developing an appropriate test control action on the modeled object and evaluating the object’s response to this action.
As in the previous example, we will use the RMS of additive noise acting on the system output as the target value of the modeling error.
The model under consideration is trained on the sample {α_i, q_i, φ_i, φ̇_i, φ_i^act}, i = 1, …, N, obtained using the source model (31), with steady-state straight horizontal flight as the reference mode and a polyharmonic excitation signal. Training was performed in Matlab for networks in the form of LDDNs (Layered Digital Dynamic Networks) using the Levenberg–Marquardt algorithm and the MSE criterion. The Jacobian matrix is calculated using the RTRL (Real-Time Recurrent Learning) algorithm [46].
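A polyharmonic (multisine) excitation signal of this kind can be generated, for example, as a sum of equal-amplitude harmonics with phases chosen to limit the crest factor (Schroeder phases). The amplitude, frequency band, and duration below are illustrative assumptions, not the values used in the experiments.

```python
import numpy as np

def multisine(duration, dt, f_min, f_max, amplitude):
    """Polyharmonic test signal: sum of equal-amplitude harmonics on the
    frequency grid [f_min, f_max] with Schroeder phases, which keep the
    crest factor low. Returns the time grid and the signal scaled to the
    requested amplitude."""
    t = np.arange(0.0, duration, dt)
    f0 = 1.0 / duration                          # fundamental frequency
    k = np.arange(int(round(f_min / f0)), int(round(f_max / f0)) + 1)
    K = len(k)
    phases = -np.pi * k * (k + 1) / K            # Schroeder phase schedule
    u = np.sum(np.sin(2 * np.pi * np.outer(k * f0, t) + phases[:, None]),
               axis=0)
    return t, amplitude * u / np.max(np.abs(u))

# e.g., 20 s of excitation in the 0.1-2 Hz band, sampled at 100 Hz
t, u = multisine(duration=20.0, dt=0.01, f_min=0.1, f_max=2.0, amplitude=1.0)
```

Such a signal excites the system over the whole chosen band at once, which is what makes it attractive for building identification training sets.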
As follows from the results shown in Figure 15, the generated semi-empirical model has very high accuracy. The pitch rate error is within ±0.3%, and the angle of attack error is within ±2%. Since the only uncertainty factors in the source model are the relationships for the aerodynamic force and moment coefficients, these results indicate a successful solution not only of the system identification problem but also of the characteristic identification problem, i.e., recovery of the lift force and pitching moment coefficients in the problem under consideration.
Let us compare the results obtained for the semi-empirical and empirical motion models corresponding to the system (31). For the empirical ANN model of NARX type, the results are presented in Figure 16. A comparison of the results for these two variants of ANN models of the longitudinal angular motion of the airplane leads to the following conclusions. The current modeling error in angle of attack lies within ±0.5 deg for the semi-empirical model and within ±2.5 deg for the NARX model. Typical RMS error values are RMS_α = 0.05 deg and RMS_q = 0.1 deg/s for the semi-empirical model, versus RMS_α = 1.3 deg and RMS_q = 2.7 deg/s for the empirical model. Thus, as in the previous example, the semi-empirical model has an undeniable advantage in accuracy over the empirical model.
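The RMS figures quoted above are per-channel errors over a test trajectory; the comparison amounts to the following computation (variable names and the toy data are illustrative):

```python
import numpy as np

def rms_errors(y_true, y_pred):
    """Per-channel RMS modeling error between a reference trajectory
    y_true and a model prediction y_pred, both of shape (N, n_channels)."""
    e = np.asarray(y_true) - np.asarray(y_pred)
    return np.sqrt(np.mean(e ** 2, axis=0))

# Toy example: two channels (alpha in deg, q in deg/s) with constant offsets
y_true = np.zeros((100, 2))
y_pred = np.full((100, 2), [0.05, 0.1])
errors = rms_errors(y_true, y_pred)   # -> approximately [0.05, 0.1]
```

Applied to the test trajectories of the two models, this yields the RMS_α and RMS_q values compared above.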
As noted above, the source of uncertainty in the motion model (31) is the functions C_L(α, δ_e) and C_m(α, δ_e), which describe the dependence of the lift and pitching moment coefficients on the angle of attack α and the elevator deflection angle δ_e. These dependencies have a complicated character, especially for a maneuverable aircraft. As an example, the dependence C_m(α, δ_e) for the F-16 airplane, plotted using data from [111], is shown in Figure 17a. The value of the coefficient C_m undergoes frequent and significant changes as the parameters α and δ_e vary over wide ranges. Even more significantly, the derivative ∂C_m/∂α repeatedly changes sign; at the angles of attack where this derivative is positive, the control object is unstable. It follows that the nonlinear function of several arguments C_m(α, δ_e) must be reconstructed with very high accuracy so as to avoid, as far as possible, incorrect values of its derivative. The semi-empirical approach copes with this problem, as can be seen from Figure 17b for the considered example.
The above results of the problem (31) clearly demonstrate the capabilities of the semi-empirical approach. It provides a model of aircraft motion that is considerably more accurate than can be achieved by the empirical approach. In addition, unlike the empirical approach, it is possible to identify the characteristics of the object under consideration. In the case considered, these are the aerodynamic coefficients C L and C m as nonlinear functions of several variables.
As already noted above, the problem of modeling the controlled longitudinal angular motion of an airplane is at the complexity threshold for the empirical approach. The semi-empirical approach can form a model of this kind without significant difficulties while ensuring a much higher level of modeling accuracy.
Our experience shows [134,135,136] that this approach can successfully solve identification problems for objects much more complicated than (31), in particular, the problem (40), (41), which has fourteen state variables instead of the four in (31) and three control variables instead of one. In problems of this kind, it was possible to obtain very high accuracy both for the model of the control object as a whole and for the functions describing all six coefficients of aerodynamic forces and moments. However, to ensure such a result, the number of examples in the training set had to be increased significantly: while a typical training set for problems like (31) contains about 10^3 examples, problems like (40), (41) require about 5·10^5.
The accuracy of the model to be formed is determined by how precisely the nonlinear functions describing the aerodynamic characteristics of the aircraft are reconstructed, i.e., how effectively the problem of identifying these characteristics is solved. To answer this question, we can extract from the semi-empirical model the ANN modules corresponding to the restored functions for C_L, C_Y, C_l, C_n, and C_m and compare the values they produce with the experimentally available data from [111]. In doing so, we obtain the mean square error of the reproduction of each of the functions C_L, C_Y, C_l, C_n, and C_m by the corresponding ANN module. In the experiments performed, these values are as follows: MSE_CL = 9.2759·10^−4, MSE_CY = 5.4257·10^−4, MSE_Cl = 2.1496·10^−5, MSE_Cn = 1.3873·10^−5, MSE_Cm = 1.4952·10^−4. This is an integral estimate of the accuracy of the mentioned dependencies. In addition, the dynamics of the current reproduction error for C_Y, C_l, C_n, and C_m during model testing are also of interest. These data, shown in Figure 18, indicate that the error level changes insignificantly over time; no changes that could adversely affect the adequacy of the semi-empirical ANN model are detected.
It has to be noted that the semi-empirical approach to modeling and identification of controlled dynamical systems requires rather large computational resources. Traditional ways of solving such problems are much more economical. However, in our opinion, the higher computational cost of our approach is offset by the advantages it provides. As noted above, the main factors of uncertainty in models of aircraft motion are its aerodynamic characteristics, described by the dimensionless coefficients of aerodynamic forces and moments, each of which is a nonlinear function of several variables. The traditional approach to the identification of aerodynamic characteristics (see [39,40,41]) is based on linearization of these functions by means of their Taylor series expansion. This operation is performed “at a point”, i.e., for one particular combination of values of the arguments of the recovered function, rather than over the entire domain of their definition. The main advantage of the semi-empirical approach is that it reconstructs these nonlinear functions of many variables from experimental data as holistic objects, with high accuracy, over the whole range of their arguments. In other words, first, there is no need to linearize these functions, and second, the obtained solution is no longer tied to a single specific combination of argument values.
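The contrast can be stated explicitly. The traditional approach replaces, say, the pitching moment coefficient by its first-order Taylor expansion about a reference point (α₀, δ_e0):

```latex
C_m(\alpha,\delta_e) \approx C_m(\alpha_0,\delta_{e0})
  + \left.\frac{\partial C_m}{\partial \alpha}\right|_{(\alpha_0,\,\delta_{e0})}
    (\alpha-\alpha_0)
  + \left.\frac{\partial C_m}{\partial \delta_e}\right|_{(\alpha_0,\,\delta_{e0})}
    (\delta_e-\delta_{e0}),
```

which is valid only in a neighborhood of (α₀, δ_e0), whereas the semi-empirical model approximates the function C_m(α, δ_e) itself over the whole domain of its arguments.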

5. Deep Learning Techniques for Modeling and Controlling Aircraft Motion

So, the use of semi-empirical ANN models of aircraft motion allows us to overcome the complexity threshold that exists for empirical ANN models. However, the price paid for this achievement is unacceptably high in some cases, namely, in situations where the ANN model must be promptly corrected in order to restore its adequacy to an object with changed properties. For such applications, semi-empirical ANN models are unsuitable. They can be successfully applied, as shown in the previous section, to tasks where online adjustment of the model is not required. In particular, one of the most important problems of this kind is the identification of the aerodynamic characteristics of an aircraft, represented as nonlinear functions of many variables, with high accuracy over the entire domain of definition of these functions.
However, as noted in the introduction, in addition to semi-empirical ANN models, there is another approach to overcoming the complexity threshold inherent in recurrent ANN models of the black-box type. This approach produces models of the same category, i.e., they too are of the black-box type. Such models are nevertheless hybrid, combining both feedforward and recurrent layers in the same network. This makes it possible to build motion models using deep neural network technology and deep learning methods. In this variant, it becomes possible to implement algorithms for online adjustment of the models, i.e., they become potentially suitable for use as part of adaptive control systems.
A considerable number of publications have by now been devoted to forming models of controlled dynamical systems based on deep learning methods, e.g., [138,139,140,141,142,143,144,145,146].
Let us demonstrate the capabilities of the combined approach based on deep learning methods by using the example of forming a model of a control object and using this model to realize adaptive fault-tolerant control for the problem of longitudinal angular motion (31). As a control object, we will consider, as in the example with the NARX model, a supersonic transport airplane (SST) [118,119].

5.1. Deep Neural Network as a Model of Aircraft Motion

The ANN model of SST motion being formed is a combination of convolutional layers and long short-term memory (LSTM) layers. A number of different connection structures were compared for such a network. The sequential configuration shown in Figure 19 was found to be the most rational.
The network used for modeling SST motion consists of an input layer and four sets of layers that perform specialized functions.
The input layer in the structure shown in Figure 19 is, as usual in neural networks, a set of elements (memory cells) that contain the current values of the input vector components. For the structure in Figure 19 (the SST motion model), these are the five components from the motion model (31): four states (angle of attack, pitch angular velocity, elevator deflection angle, and elevator deflection rate) and one control component (the command signal for the elevator actuator).
The first set starts with a one-dimensional convolution layer, which performs a convolution operation on input elements of size 5 × 1. The layer slides a set of filters (convolution kernels) over fragments of the input data, searching for patterns in them. There are 16 filters in this layer. The data processed by this layer go to a layer that performs the ReLU operation. The ReLU layer is followed by a one-dimensional subsampling (pooling) layer of the average pooling type, which averages the one-dimensional data with a step of four units.
The second set of layers resembles the first, but its convolution layer has 32 filters. The other two layers in the set perform the same operations as in the first set.
The third set of layers involves neural network memory elements. For this purpose, we use an LSTM layer applied in sequence-to-sequence mode, i.e., transforming one sequence into another. In our case, the input sequence is the control action on the dynamical system, and the output sequence is the system’s response to this action as a function of time. This layer consists of 100 LSTM neurons.
Following the memory layer is the fully connected layer. Its purpose is to approximate the data stored in the LSTM memory layer. The fully connected layer contains 100 neurons.
In artificial intelligence systems using LSTM memory, there is usually a dropout layer. Such a layer removes some neurons by selecting them randomly. This technique allows for so-called regularization, which improves the convergence of the learning process. In this case, the dropout coefficient is chosen to be 0.1. This means that 10% of neurons are eliminated.
Finally, the output of this set is the sigmoid activation function, which introduces nonlinearity into the approximated surface and cuts off results below a certain threshold.
The fourth set of layers is the output for the whole neural network. A fully connected layer of a single neuron compresses the data, which has already passed all layers of processing, into a one-dimensional array, which is then fed to the layer that performs nonlinear regression of the input data by the RMS error criterion.
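The layer stack described above can be sketched in PyTorch as follows. The paper specifies only the filter counts (16 and 32), the pooling step of four, the LSTM and fully connected widths (100), the dropout rate (0.1), and the single-neuron output; the kernel sizes, the placement of the convolution along the time axis, and the sequence length are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class SSTModelSketch(nn.Module):
    """Illustrative Conv1d + LSTM stack in the spirit of Figure 19.
    Kernel sizes and sequence length are assumptions, not from the paper."""
    def __init__(self, n_inputs=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_inputs, 16, kernel_size=3, padding=1),  # 16 filters
            nn.ReLU(),
            nn.AvgPool1d(4),                   # average pooling, step 4
            nn.Conv1d(16, 32, kernel_size=3, padding=1),        # 32 filters
            nn.ReLU(),
            nn.AvgPool1d(4),
        )
        self.lstm = nn.LSTM(32, 100, batch_first=True)  # seq-to-seq memory
        self.head = nn.Sequential(
            nn.Linear(100, 100),               # fully connected approximation
            nn.Dropout(0.1),                   # 10% dropout
            nn.Sigmoid(),                      # output nonlinearity
            nn.Linear(100, 1),                 # single-neuron regression output
        )

    def forward(self, x):                      # x: (batch, time, n_inputs)
        z = self.features(x.transpose(1, 2))   # conv expects (batch, chan, time)
        z, _ = self.lstm(z.transpose(1, 2))    # back to (batch, time', feat)
        return self.head(z)                    # (batch, time', 1)

model = SSTModelSketch()
model.eval()                                   # deterministic forward pass
y = model(torch.randn(2, 64, 5))               # two batches of 64 time steps
```

With a 64-step input, the two pooling layers reduce the time axis by a factor of sixteen, so the output has shape (2, 4, 1).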
The network of this architecture was successfully trained, requiring several thousand epochs. The training results are shown in Figure 20. The trained SST model predicts the angle of attack by responding to the generated random control. The current modeling error does not exceed 0.3 deg, which is quite a satisfactory result for the problem under consideration.
We have demonstrated in this example that a hybrid model obtained by combining feedforward and recurrent neural networks can simulate the behavior of even such complex dynamic objects as the SST with acceptable accuracy. In this model, the feedforward network (a convolutional network based on one-dimensional convolution, Conv1d) preprocesses the signals received at the input of the LSTM layer, which reproduces the dynamics of the modeled object. These are followed again by feedforward layers, which postprocess the outputs of the LSTM network and form the values of the state variables of the control object for the next time instant. The purpose of this example is to show that complex dynamics can be successfully modeled by combining elements of feedforward and recurrent networks. Of course, this model is tied to a specific object (the SST in our case); when working with other objects, it will be necessary to form other combinations of the mentioned elements adequate to those objects.

5.2. Neural Network Adaptive Control of Aircraft Motion

There are a number of adaptive control schemes that use neural networks [116,147,148,149,150]. One of the most popular among them is the MRAC (Model Reference Adaptive Control) scheme [151]. A possible variant of its neural network realization is shown in Figure 21. This scheme is specialized to solve the tracking problem for a certain reference signal. The requirements for the desired properties of the system under the influence of the neurocontroller are expressed using a reference model.
As we can see from Figure 21, the system based on the MRAC scheme includes two neural networks as elements: one for the object model (Figure 19) and one for the controller (Figure 22). To adjust the controller using the error backpropagation method, we need to know the mismatch ũ − u* at the output of the neural network implementing it. This mismatch leads to inaccurate control of the object with error y_rm − y_p. We cannot, however, compute the value of ũ − u*, since we do not know the “correct” value u*; we can only measure the error y_rm − y_p. Therefore, an ANN model of the control object is included in the MRAC scheme, which allows us to use y_rm − ŷ instead of y_rm − y_p. That is, we compare the output of the reference model y_rm not with the output of the object y_p but with the output of the ANN model ŷ. This allows us to convert the error y_rm − ŷ into the mismatch ũ − u* using the standard error backpropagation method. However, with this approach, we train the neurocontroller to control not the object of interest but its ANN model. The inevitable inaccuracy of this model must be compensated for in some way. For this purpose, we treat the error of the ANN model as just another disturbance that must be taken into account when controlling the object. To compensate for this disturbance, a compensating PD feedback on the ANN modeling error is introduced into the MRAC structure, as shown in Figure 21.
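The compensating PD feedback on the modeling error can be sketched as follows: the plant input is the neurocontroller output augmented by a PD term driven by e = y_p − ŷ. The gains, step size, and stub values below are illustrative assumptions, not those of the paper.

```python
import numpy as np

class PDCompensator:
    """PD feedback on the ANN modeling error e = y_p - y_hat, added to the
    neurocontroller output to compensate for model inaccuracy (treated as
    an extra disturbance). Gains kp, kd are illustrative."""
    def __init__(self, kp, kd, dt):
        self.kp, self.kd, self.dt = kp, kd, dt
        self.e_prev = 0.0

    def __call__(self, y_plant, y_model):
        e = y_plant - y_model
        de = (e - self.e_prev) / self.dt   # finite-difference derivative
        self.e_prev = e
        return self.kp * e + self.kd * de

pd = PDCompensator(kp=2.0, kd=0.1, dt=0.01)
u_nn = 0.5                                 # neurocontroller output (stub value)
u = u_nn + pd(y_plant=1.0, y_model=0.9)    # corrected control signal
```

At each step, the correction grows with both the modeling error and its rate of change, so a persistent model mismatch is continuously counteracted.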
The ANN model of the control object was created for use in the MRAC scheme, shown in Figure 21. As mentioned above, such a system includes a neurocontroller, the structure of which is shown in Figure 22. We have synthesized this neurocontroller using deep neural network technology, which is currently finding increasing application as a tool for solving control problems for various dynamical systems [152,153,154].
The neurocontroller under consideration consists of several layers. First comes the input layer, which contains two components: the reference signal r (for the angle of attack in the case considered) and the current value of the control object output (the currently realized value of the angle of attack). The next layer, containing 200 LSTM neurons, performs the function of memory. The following fully connected layer approximates the data transferred from the LSTM layer. The dropout layer removes 10% of randomly selected neurons, thus improving the convergence of the learning process. The sigmoid activation function adds nonlinearity to the approximation process; data below a certain threshold are cut off. Finally, the last fully connected layer of one neuron produces the result with the dimensionality expected by the pre-trained ANN model of the SST.
Obtaining the source data for training the combined network, including the ANN controller and the ANN model, is based on a random sequence generation procedure similar to the one described above. In this case, however, a random reference signal for the angle of attack was generated in the range from 2 to 11 deg, with each value held for 4 to 8 s, after which the response of the reference model to this signal was computed. The time step was 0.01 s, and the total length of the sequence was 100 s. Both the reference signal and the response to it were then used in training the neurocontroller. Moreover, as in the training of the ANN model, several dozen sequences were generated for the neurocontroller, some used only for training and some only for testing. The source data for training the combined network and the training results are shown in Figure 23, from which we can see that the trained neurocontroller produces control signals that allow the SST to track the target angle-of-attack signal with an acceptable error.
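A reference signal of the kind described above can be reproduced with the following sketch: piecewise-constant angle-of-attack commands drawn from [2, 11] deg, each held for a random 4 to 8 s interval, sampled at 0.01 s over 100 s. The uniform distributions are an assumption; the paper does not state them.

```python
import numpy as np

def random_reference(total_time=100.0, dt=0.01, lo=2.0, hi=11.0,
                     hold_min=4.0, hold_max=8.0, seed=0):
    """Piecewise-constant reference signal for the angle of attack:
    random levels in [lo, hi] deg, each held for a random 4-8 s interval."""
    rng = np.random.default_rng(seed)
    n = int(round(total_time / dt))
    r = np.empty(n)
    i = 0
    while i < n:
        level = rng.uniform(lo, hi)                       # new command level
        hold = int(round(rng.uniform(hold_min, hold_max) / dt))
        r[i:i + hold] = level                             # hold the level
        i += hold
    return r

r = random_reference()    # 10,000 samples covering 100 s
```

Feeding such sequences through the reference model yields the input-output pairs used to train the neurocontroller.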
For the generated ANN controller, an experiment was conducted to test the fault tolerance of the created adaptive system. A failure leading to a 50% decrease in elevator effectiveness was simulated. The behavior of the system after the failure, which occurred at 40 s, is shown in Figure 24. We can see that the tracking error for the reference signal increased significantly, causing the control system to malfunction. Next, the coupled system (ANN model + ANN controller) was retrained for 1600 epochs. The obtained results are shown in Figure 25. The operability of the control system under consideration was restored.

6. Conclusions

There is an important applied problem, which consists in controlling the behavior of a dynamic system under conditions of incomplete and inaccurate knowledge of the properties of the control object and the environment in which it operates. One of the most important classes of such systems is aircraft of various types and purposes. For them, the control task is further complicated by the fact that during flight various abnormal situations may occur, in particular, equipment failures and structural damage, which change the dynamic properties of the control object.
At the same time, for both control laws and control object models, success cannot always be achieved with recurrent neural networks of the traditional kind (black-box ANN models), which are quite often used for modeling and controlling dynamical systems. In cases where the system has complicated dynamics, this approach is inefficient. As alternatives, we have considered two variants of hybrid ANN models that overcome the limitations inherent in traditional ANN models of controlled dynamical systems. The first is based on the joint use of empirical data and theoretical knowledge about the control object and leads to gray-box models (semi-empirical ANN models). The second is based on combining elements of feedforward and recurrent networks and makes it possible to use deep learning methods to develop the required ANN models.
The analysis of the capabilities and limitations of the considered approaches to solving problems of modeling and control for dynamical systems is carried out on examples related to supersonic passenger aircraft. The results of this analysis allow us to reveal the areas of preferable use for each of these approaches.
In addition to the mentioned variants, there are two more that are of considerable interest from the point of view of the control problem for a complex nonlinear dynamic object under uncertainty. These are the nonlinear dynamic inversion approach (NDI approach) and the approach based on reinforcement learning in a version of approximate (adaptive) dynamic programming (ADP approach). The joint use of these two approaches also seems promising. However, a review of the NDI approach and the ADP approach is beyond the scope of this paper. The authors intend to carry out such a review in the following publications.

Author Contributions

Conceptualization, Y.V.T.; methodology, Y.V.T.; software, A.Y.T. and G.D.; validation, G.D. and A.Y.T.; formal analysis, Y.V.T. and A.Y.T.; investigation, Y.V.T. and A.Y.T.; resources, G.D.; data curation, A.Y.T.; writing—original draft preparation, A.Y.T.; writing—review and editing, Y.V.T.; visualization, A.Y.T.; supervision, Y.V.T.; project administration, Y.V.T.; funding acquisition, Y.V.T. All authors have read and agreed to the published version of the manuscript.

Funding

The paper was prepared under the Program for the Development of the World-Class Research Center “Supersonic” in 2020–2025, funded by the Russian Ministry of Science and Higher Education (Agreement dated 20 April 2022, No. 075-15-2022-309).

Data Availability Statement

Part of the source data, related to the maneuverable aircraft, is taken from a publicly available resource [111]; the other part, concerning the supersonic transport aircraft, belongs to an unfinished research project and is currently not openly available.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
ACD      Adaptive Critic Design
ADP      Approximate Dynamic Programming
ANN      Artificial Neural Network
LDDN     Layered Digital Dynamic Network
LSTM     Long Short-Term Memory
LQR      Linear Quadratic Regulator
ML       Machine Learning
MRAC     Model Reference Adaptive Control
MSE      Mean Squared Error
NARX     Nonlinear AutoRegressive network with eXogenous inputs
NARMAX   Nonlinear AutoRegressive model with Moving Average and eXogenous inputs
NC       NeuroController
NDI      Nonlinear Dynamic Inversion
ReLU     Rectified Linear Unit
RMSE     Root Mean Squared Error
RNN      Recurrent Neural Network
RTRL     Real-Time Recurrent Learning
SNAC     Single Network Adaptive Critic
SST      SuperSonic Transport aircraft
TDL      Time Delay Line
UAV      Unmanned Aerial Vehicle
Notations
x = (x_1, x_2, …, x_n)    state variables
u = (u_1, u_2, …, u_m)    control variables
x*(t), u*(t)    program motion and the control for its realization (see Equation (20))
x̃(t), ũ(t)    motion produced by the additive excitatory action
Φ(·)    composition of the mappings F(·) and G(·) realized by the controlled dynamical system S (see Figure 1 and Equation (2))
Φ̂(·)    mapping realized by a desired model of the system S
α, β    angles of attack and sideslip
φ, ψ, θ    roll, yaw, and pitch angles
p, r, q    roll, yaw, and pitch angular rates
δ_r, δ_e, δ_a    deflection angles of the rudder, elevator, and ailerons
δ̇_r, δ̇_e, δ̇_a    angular rates for the rudder, elevator, and ailerons
δ_e^act, δ_a^act, δ_r^act    actuator command signals for the elevator, ailerons, and rudder
C_D, C_L, C_y    drag, lift, and side force coefficients
C_l, C_n, C_m    rolling, yawing, and pitching moment coefficients
m    mass of aircraft
S    wing area of aircraft
c    mean aerodynamic chord
q̄ = ρV^2/2    airplane dynamic pressure
ρ    air mass density
V    airspeed

References

  1. Korjani, M.M.; Bazzaz, O.; Menhaj, M.B. Real time identification and control of dynamic systems using recurrent neural networks. Artif. Intell. Rev. 2008, 30, 1–17. [Google Scholar] [CrossRef]
  2. Billings, S.A. Nonlinear System Identification: NARMAX Methods in the Time, Frequency and Spatio-Temporal Domains; John Wiley & Sons: Hoboken, NJ, USA, 2013. [Google Scholar]
  3. Pan, S.; Duraisamy, K. Long-time predictive modeling of nonlinear dynamical systems using neural networks. Hindawi Complex. 2018, 4801012. [Google Scholar] [CrossRef]
  4. Kuptsov, P.V.; Kuptsova, A.V.; Stankevich, N.V. Artificial neural network as a universal model of nonlinear dynamical systems. Russ. J. Nonlinear Dyn. 2021, 17, 5–21. [Google Scholar]
  5. Prokhorov, D. (Ed.) Computational Intelligence in Automotive Applications; Springer: Berlin/Heidelberg, Germany, 2008. [Google Scholar]
  6. Li, S.; Zhang, Y. Neural Networks for Cooperative Control of Multiple Robot Arms; Springer Nature: Singapore, 2018. [Google Scholar]
  7. Lawryńczuk, M. Computationally Efficient Model Predictive Control Algorithms: A Neural Network Approach; Springer: Cham, Switzerland, 2014. [Google Scholar]
  8. Yu, Q.; Lei, T.; Tian, F.; Hou, Z.; Bu, X. Predictive Learning Control for Unknown Nonaffine Nonlinear Systems: Theory and Applications; Springer Nature: Singapore, 2023. [Google Scholar]
  9. Kamalapurkar, R.; Walters, P.; Rosenfeld, J.; Dixon, W. Reinforcement Learning for Optimal Feedback Control: A Lyapunov-Based Approach; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  10. Vamvoudakis, K.G.; Wan, Y.; Lewis, F.L.; Cansever, D. (Eds.) Handbook of Reinforcement Learning and Control; Springer Nature: Cham, Switzerland, 2021. [Google Scholar]
  11. Buşoniu, L.; de Bruin, T.; Tolić, D.; Kober, J.; Palunko, I. Reinforcement learning for control: Performance, stability, and deep approximators. Annu. Rev. Control. 2018, 46, 8–28. [Google Scholar] [CrossRef]
  12. Liu, D.; Wei, Q.; Wang, D.; Yang, X.; Li, H. Adaptive Dynamic Programming with Applications in Optimal Control; Springer: Berlin/Heidelberg, Germany, 2017. [Google Scholar]
  13. Szuster, M.; Hendzel, Z. Intelligent Optimal Adaptive Control for Mechatronic Systems; Springer International Publishing: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  14. Song, R.; Wei, Q.; Li, Q. Adaptive Dynamic Programming: Single and Multiple Controllers; Science Press: Beijing, China; Springer Nature: Singapore, 2019. [Google Scholar]
  15. Liu, D.; Xue, S.; Zhao, B.; Luo, B.; Wei, Q. Adaptive dynamic programming for control: A survey and recent advances. IEEE Trans. Syst. Man, Cybern. Part B 2023, 1, 142–160. [Google Scholar] [CrossRef]
  16. Lewis, F.L.; Vrabie, D. Reinforcement learning and adaptive dynamic programming for feedback control. IEEE Circuits Syst. Mag. 2009, 9, 32–50. [Google Scholar] [CrossRef]
  17. Buchli, J.; Farshidian, F.; Winkler, A.; Sandy, T.; Giftthaler, M. Optimal and Learning Control for Autonomous Robots; Swiss Federal Institute of Technology: Zürich, Switzerland, 2017; (arXiv:1708.09342v1). [Google Scholar]
  18. Kiumarsi, B.; Vamvoudakis, K.G.; Modares, H.; Lewis, F.L. Optimal and autonomous control using reinforcement learning: A survey. IEEE Trans. Neural Networks Learn. Syst. 2018, 29, 2042–2062. [Google Scholar] [CrossRef]
  19. Martynyuk, A.A.; Martynyuk-Chernenko, Y.A. Uncertain Dynamic Systems: Stability and Motion Control; CRC Press: London, UK, 2012. [Google Scholar]
  20. Matasov, A.I. Estimators for Uncertain Dynamic Systems; Springer: Berlin/Heidelberg, Germany, 1998. [Google Scholar]
  21. Piegat, A. Fuzzy Modeling and Control; Springer: Berlin, Germany, 2001. [Google Scholar]
  22. Zhou, J.; Xing, L.; Wen, C. Adaptive Control of Dynamic Systems with Uncertainty and Quantization; CRC Press: London, UK, 2021. [Google Scholar]
  23. Ducard, G.J.J. Fault-Tolerant Flight Control and Guidance Systems: Practical Methods for Small Unmanned Aerial Vehicles; Springer: Berlin, Germany, 2009. [Google Scholar]
  24. Hajlyev, C.; Caliskan, F. Fault Diagnosis and Reconfiguration in Flight Control Systems; Springer: Berlin, Germany, 2003. [Google Scholar]
  25. Blanke, M.; Kinnaert, M.; Lunze, J.; Staroswiecki, M. Diagnosis and Fault-Tolerant Control, 2nd ed.; Springer: Berlin, Germany, 2006. [Google Scholar]
  26. Noura, H.; Theilliol, D.; Ponsart, J.-C.; Chamseddine, A. Fault-Tolerant Control Systems: Design and Practical Applications; Springer: Berlin, Germany, 2009. [Google Scholar]
  27. Astolfi, A.; Karagiannis, D.; Ortega, R. Nonlinear and Adaptive Control with Applications; Springer: Berlin, Germany, 2008. [Google Scholar]
  28. Ioannou, P.A.; Sun, J. Robust Adaptive Control; Prentice Hall: Upper Saddle River, NJ, USA, 1995. [Google Scholar]
  29. Mosca, E. Optimal, Predictive, and Adaptive Control; Prentice Hall: Upper Saddle River, NJ, USA, 1994. [Google Scholar]
  30. Tao, G. Adaptive Control Design and Analysis; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2003. [Google Scholar]
  31. Ljung, L. System Identification: Theory for the User, 2nd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 1999. [Google Scholar]
  32. Nelles, O. Nonlinear System Identification: From Classical Approaches to Neural Networks, Fuzzy Models, and Gaussian Processes, 2nd ed.; Springer: Berlin, Germany, 2020. [Google Scholar]
  33. Cook, M.V. Flight Dynamics Principles, 2nd ed.; Elsevier: Amsterdam, The Netherlands, 2007. [Google Scholar]
  34. Hull, D.G. Fundamentals of Airplane Flight Mechanics; Springer: Berlin/Heidelberg, Germany, 2007. [Google Scholar]
  35. Zipfel, P.H. Modeling and Simulation of Aerospace Vehicle Dynamics, 2nd ed.; AIAA: Reston, VA, USA, 2007. [Google Scholar]
  36. Stevens, B.L.; Lewis, F.L.; Johnson, E.N. Aircraft Control and Simulation: Dynamics, Controls Design and Autonomous Systems, 3rd ed.; Wiley: Hoboken, NJ, USA, 2016. [Google Scholar]
  37. Vepa, R. Flight Dynamics, Simulation, and Control: For Rigid and Flexible Aircraft; CRC Press: Boca Raton, FL, USA, 2015. [Google Scholar]
  38. Tewari, A. Automatic Control of Atmospheric and Space Flight Vehicles; Springer: Berlin, Germany, 2011. [Google Scholar]
  39. Jategaonkar, R.V. Flight Vehicle System Identification: A Time Domain Methodology; AIAA, Inc.: Reston, VA, USA, 2006. [Google Scholar]
  40. Klein, V.; Morelli, E.A. Aircraft System Identification: Theory and Practice; AIAA, Inc.: Reston, VA, USA, 2006. [Google Scholar]
  41. Tischler, M.B.; Remple, R.K. Aircraft and Rotorcraft System Identification: Engineering Methods with Flight-Test Examples; AIAA, Inc.: Reston, VA, USA, 2006. [Google Scholar]
  42. Hairer, E.; Norsett, S.P.; Wanner, G. Solving Ordinary Differential Equations I: Nonstiff Problems, 2nd ed.; Springer: Berlin, Germany, 2010. [Google Scholar]
  43. Hairer, E.; Wanner, G. Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, 2nd ed.; Springer: Berlin/Heidelberg, Germany, 2010. [Google Scholar]
  44. Butcher, J.C. Numerical Methods for Ordinary Differential Equations; John Wiley & Sons, Inc.: Hoboken, NJ, USA, 2003. [Google Scholar]
  45. Scott, L.R. Numerical Analysis; Princeton University Press: Princeton, NJ, USA, 2011. [Google Scholar]
  46. Haykin, S. Neural Networks and Learning Machines, 3rd ed.; Pearson: London, UK, 2009. [Google Scholar]
  47. Bellman, R. Dynamic Programming; Princeton University Press: Princeton, NJ, USA, 1957. [Google Scholar]
  48. Bellman, R. Adaptive Control Processes: A Guided Tour; Princeton University Press: Princeton, NJ, USA, 1961. [Google Scholar]
  49. Bohlin, T. A case study of grey box identification. Automatica 1994, 30, 307–318. [Google Scholar] [CrossRef]
  50. Jorgensen, S.B.; Hangos, K.M. Grey box modelling for control: Qualitative models as a unifying framework. Intern. J. Adapt. Control. Signal Process. 1995, 9, 547–562. [Google Scholar] [CrossRef]
  51. Dreyfus, G.; Idan, Y. The canonical form of nonlinear discrete-time models. Neural Comput. 1998, 10, 133–164. [Google Scholar] [CrossRef]
52. Oussar, Y.; Dreyfus, G. How to be a gray box: Dynamic semi-physical modeling. Neural Netw. 2001, 14, 1161–1172. [Google Scholar] [CrossRef] [PubMed]
  53. Dreyfus, G. Neural Networks: Methodology and Applications; Springer: Berlin/Heidelberg, Germany, 2005. [Google Scholar]
  54. Bohlin, T. Practical Grey-Box Identification: Theory and Applications; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  55. Cen, Z.; Wei, J.; Jiang, R. A gray-box neural network based model identification and fault estimation scheme for nonlinear dynamic systems. Intern. J. Neural Syst. 2013, 23, 1–15. [Google Scholar] [CrossRef] [PubMed]
  56. Egorchev, M.V.; Tiumentsev, Y.V. Semi-empirical neural network based approach to modelling and simulation of controlled dynamical systems. Procedia Comput. Sci. 2018, 123, 134–139. [Google Scholar] [CrossRef]
  57. Egorchev, M.V.; Tiumentsev, Y.V. Neural network semi-empirical modeling of the longitudinal motion for maneuverable aircraft and identification of its aerodynamic characteristics. Stud. Comput. Intell. 2018, 736, 65–71. [Google Scholar]
  58. Egorchev, M.V.; Tiumentsev, Y.V. Neural network identification of aircraft nonlinear aerodynamic characteristics. IOP Conf. Ser. Mater. Sci. Eng. (MSE) 2018, 312, 1–6. [Google Scholar] [CrossRef]
  59. Kozlov, D.S.; Tiumentsev, Y.V. Neural network based semi-empirical models of 3d-motion of hypersonic vehicle. Stud. Comput. Intell. 2019, 799, 196–201. [Google Scholar]
60. Egorchev, M.V.; Tiumentsev, Y.V. Semi-empirical continuous-time neural network based models for controllable dynamical systems. Opt. Mem. Neural Netw. 2019, 28, 192–203. [Google Scholar] [CrossRef]
  61. Kozlov, D.S.; Tiumentsev, Y.V. Semi-empirical neural network models of hypersonic vehicle 3D-motion represented by index 2 DAE. Stud. Comput. Intell. 2020, 856, 335–341. [Google Scholar]
62. Haykin, S. (Ed.) Kalman Filtering and Neural Networks; John Wiley & Sons: Hoboken, NJ, USA, 2001. [Google Scholar]
  63. Hagan, M.T.; Demuth, H.B.; Beale, M.H.; De Jesús, O. Neural Network Design, 2nd ed.; PSW Publishing Co.: Redditch, UK, 2014. [Google Scholar]
  64. Mandic, D.P.; Chambers, J.A. Recurrent Neural Networks for Prediction: Learning Algorithms, Architectures and Stability; John Wiley & Sons: Hoboken, NJ, USA, 2001. [Google Scholar]
65. Medsker, L.R.; Jain, L.C. (Eds.) Recurrent Neural Networks: Design and Applications; CRC Press: Boca Raton, FL, USA, 2001. [Google Scholar]
  66. Leondes, C.T. (Ed.) Neural Network Systems Techniques and Applications. Volume 7: Control and Dynamic Systems; Academic Press: New York, NY, USA, 1999. [Google Scholar]
  67. Goodfellow, I.; Bengio, Y.; Courville, A. Deep Learning; The MIT Press: Cambridge, MA, USA, 2016. [Google Scholar]
  68. Skansi, S. Introduction to Deep Learning: From Logical Calculus to Artificial Intelligence; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  69. Calin, O. Deep Learning Architectures: A Mathematical Approach; Springer: Berlin/Heidelberg, Germany, 2020. [Google Scholar]
  70. Moons, B.; Bankman, D.; Verhelst, M. Embedded Deep Learning: Algorithms, Architectures and Circuits for Always-On Neural Network Processing; Springer Nature Switzerland: Cham, Switzerland, 2019. [Google Scholar]
  71. Isidori, A. Nonlinear Control Systems, 3rd ed.; Springer: Berlin/Heidelberg, Germany, 1995. [Google Scholar]
  72. Slotine, J.-J.E.; Li, W. Applied Nonlinear Control; Prentice Hall: Hoboken, NJ, USA, 1991. [Google Scholar]
  73. Khalil, H.K. Nonlinear Systems, 3rd ed.; Prentice Hall: Hoboken, NJ, USA, 2002. [Google Scholar]
  74. Enns, D.; Bugajski, D.; Hendrick, R.; Stein, G. Dynamic inversion: An evolving methodology for flight control design. Intern. J. Control 1994, 59, 71–91. [Google Scholar] [CrossRef]
  75. Horn, J.F. Non-linear dynamic inversion control design for rotorcraft. Aerospace 2019, 6, 38. [Google Scholar] [CrossRef]
  76. da Costa, R.R.; Chu, Q.P.; Mulder, J.A. Reentry flight controller design using nonlinear dynamic inversion. J. Spacecr. Rocket. 2003, 40, 64–71. [Google Scholar] [CrossRef]
  77. Smeur, E.J.J.; Chu, Q.P.; de Croon, G.C.H.E. Adaptive incremental nonlinear dynamic inversion for attitude control of micro air vehicles. J. Guid. Control. Dyn. 2016, 39, 450–461. [Google Scholar] [CrossRef]
  78. Sieberling, S.; Chu, Q.P.; Mulder, J.A. Robust flight control using incremental nonlinear dynamic inversion and angular acceleration prediction. J. Guid. Control. Dyn. 2010, 33, 1732–1742. [Google Scholar] [CrossRef]
  79. Ludeña, T.J.; Choi, S.H.; Kim, B.S. Flight control design using incremental nonlinear dynamic inversion with fixed-lag smoothing estimation. Intern. J. Aeronaut. Space Sci. 2020, 21, 1047–1058. [Google Scholar] [CrossRef]
  80. Wang, X.; van Kampen, E.-J.; Chu, Q.P.; Lu, P. Stability Analysis for Incremental Nonlinear Dynamic Inversion Control. J. Guid. Control. Dyn. 2019, 42, 1116–1129. [Google Scholar] [CrossRef]
  81. Yeşildirek, A.; Lewis, F.L. Feedback linearization using neural networks. Automatica 1995, 31, 1659–1664. [Google Scholar] [CrossRef]
  82. Ge, S.S. Robust adaptive NN feedback linearization control of nonlinear systems. Intern. J. Syst. Sci. 1996, 27, 1327–1338. [Google Scholar] [CrossRef]
  83. He, S.; Reif, K.; Unbehauen, R. A neural approach for control of nonlinear systems with feedback linearization. IEEE Trans. Neural Netw. 1998, 9, 1409–1421. [Google Scholar] [PubMed]
  84. Van Den Boom, T.; Botto, M.A.; Da Costa, J.S. Robust control of dynamical systems using neural networks with input-output feedback linearization. Intern. J. Control 2003, 76, 1783–1789. [Google Scholar] [CrossRef]
  85. Necsulescu, D.; Jiang, Y.-W.; Kim, B. Neural network based feedback linearization control of an unmanned aerial vehicle. Intern. J. Autom. Comput. 2007, 4, 71–79. [Google Scholar] [CrossRef]
  86. Şahin, S. Learning feedback linearization using artificial neural networks. Neural Process. Lett. 2016, 44, 625–637. [Google Scholar] [CrossRef]
  87. Sutton, R.S.; Barto, A.G. Reinforcement Learning: An Introduction, 2nd ed.; The MIT Press: Cambridge, MA, USA, 2018. [Google Scholar]
  88. Powell, W.B. Approximate Dynamic Programming: Solving the Curse of Dimensionality, 2nd ed.; Wiley: Hoboken, NJ, USA, 2011. [Google Scholar]
  89. Wang, F.-Y.; Zhang, H.; Liu, D. Adaptive dynamic programming: An introduction. IEEE Comput. Intell. Mag. 2009, 4, 39–47. [Google Scholar] [CrossRef]
  90. Wang, D.; Ha, M.; Zhao, M. Advanced Optimal Control and Applications Involving Critic Intelligence; Springer Nature: Singapore, 2023. [Google Scholar]
  91. Wang, D.; He, H.; Liu, D. Adaptive critic nonlinear robust control: A survey. IEEE Trans. Cybern. 2017, 47, 1–22. [Google Scholar] [CrossRef]
  92. Wang, D.; Mu, C. Adaptive Critic Control with Robust Stabilization for Uncertain Nonlinear Systems; Springer Nature: Singapore, 2019. [Google Scholar]
93. Lakshmikanth, G.S.; Padhi, R.; Watkins, J.M.; Steck, J.E. Single network adaptive critic aided dynamic inversion for optimal regulation and command tracking with online adaptation for enhanced robustness. Optim. Control Appl. Methods 2014, 35, 479–500. [Google Scholar] [CrossRef]
  94. Lakshmikanth, G.S.; Padhi, R.; Watkins, J.M.; Steck, J.E. Adaptive flight-control design using neural-network-aided optimal nonlinear dynamic inversion. J. Aerosp. Inform. Syst. 2014, 11, 785–806. [Google Scholar] [CrossRef]
  95. Tiwari, S.N.; Padhi, R. Optimal and robust control of a class of nonlinear systems using dynamically re-optimised single network adaptive critic design. Intern. J. Syst. Sci. 2018, 49, 246–263. [Google Scholar] [CrossRef]
  96. Ashraf, I.K.; van Kampen, E. Adaptive critic control for aircraft lateral-directional dynamics. In Proceedings of the AIAA Scitech 2020 Forum, Orlando, FL, USA, 6–10 January 2020; AIAA-2020-1945. pp. 1–24. [Google Scholar]
  97. Bu, X. An improvement of single-network adaptive critic design for nonlinear systems with asymmetry constraints. J. Frankl. Inst. 2019, 356, 9646–9664. [Google Scholar] [CrossRef]
98. Fu, Z.; Yuan, P.; Zhou, F.; Guo, Y.; Guo, P. Self-learning control of model uncertain active suspension systems with observer-critic structure. Meas. Control 2022, 55, 411–420. [Google Scholar] [CrossRef]
99. Long, X.; He, Z.; Wang, Z. Online optimal control of robotic systems with single critic NN-based reinforcement learning. Complexity 2021, 2021, 8839391. [Google Scholar] [CrossRef]
  100. Tiumentsev, Y.V.; Tshay, R.A. SNAC approach to aircraft motion control. Stud. Comput. Intell. 2023, 1120, 420–434. [Google Scholar]
  101. Fernandez, G.I.; Togashi, C.; Hong, D.W.; Yang, L.F. Deep reinforcement learning with linear quadratic regulator regions. arXiv 2020, arXiv:2002.09820v2. [Google Scholar]
  102. Perrusquia, A. Solution of the linear quadratic regulator problem of black box linear systems using reinforcement learning. Inf. Sci. 2022, 595, 364–377. [Google Scholar] [CrossRef]
  103. Yaghmaie, A.; Gustafsson, F.; Ljung, L. Linear quadratic control using model-free reinforcement learning. IEEE Trans. Autom. Control 2023, 68, 737–752. [Google Scholar] [CrossRef]
  104. Rizvi, S.A.A.; Lin, Z. Reinforcement learning-based linear quadratic regulation of continuous-time systems using dynamic output feedback. IEEE Trans. Cybern. 2020, 50, 4670–4679. [Google Scholar] [CrossRef]
  105. Park, Y.; Rossi, R.A.; Wen, Z.; Wu, G.; Zhao, H. Structured policy iteration for linear quadratic regulator. arXiv 2020, arXiv:2007.06202v1. [Google Scholar]
  106. Chulin, M.I.; Tiumentsev, Y.V.; Zarubin, R.A. LQR approach to aircraft control based on the adaptive critic design. Stud. Comput. Intell. 2023, 1120, 406–419. [Google Scholar]
  107. Igonin, D.M.; Kolganov, P.A.; Tiumentsev, Y.V. Situational awareness and problems of its formation in the tasks of UAV behavior control. Appl. Sci. 2021, 11, 11611. [Google Scholar] [CrossRef]
  108. Morelli, E.A. Multiple input design for real-time parameter estimation in the frequency domain. In Proceedings of the 13th IFAC Conference on System Identification, Rotterdam, The Netherlands, 27–29 August 2003; Paper REG-360. pp. 1–7. [Google Scholar]
  109. Morelli, E.A.; Klein, V. Real-time parameter estimation in the frequency domain. J. Guid. Control. Dyn. 2000, 23, 812–818. [Google Scholar] [CrossRef]
  110. Smith, M.S.; Moes, T.R.; Morelli, E.A. Flight investigation of prescribed simultaneous independent surface excitations for real-time parameter identification. In Proceedings of the AIAA Atmospheric Flight Mechanics Conference and Exhibit, Austin, TX, USA, 11–14 August 2003; No. 2003-5702. pp. 1–23. [Google Scholar]
111. Nguyen, L.T.; Ogburn, M.E.; Gilbert, W.P.; Kibler, K.S.; Brown, P.W.; Deal, P.L. Simulator Study of Stall/Post-Stall Characteristics of a Fighter Airplane with Relaxed Longitudinal Static Stability; NASA TP-1538; National Aeronautics and Space Administration: Washington, DC, USA, 1979; 223p. [Google Scholar]
  112. Schroeder, M.R. Synthesis of low-peak-factor signals and binary sequences with low autocorrelation. IEEE Trans. Inform. Theory 1970, 16, 85–89. [Google Scholar] [CrossRef]
  113. Sjöberg, J.; Zhang, Q.; Ljung, L.; Benveniste, A.; Deylon, B.; Glorennec, P.-Y.; Hjalmarsson, H.; Juditsky, A. Nonlinear black-box modeling in system identification: A unified overview. Automatica 1995, 31, 1691–1724. [Google Scholar] [CrossRef]
  114. Juditsky, A.; Hjalmarsson, H.; Benveniste, A.; Deylon, B.; Ljung, L.; Sjöberg, J.; Zhang, Q. Nonlinear black-box modeling in system identification: Mathematical foundations. Automatica 1995, 31, 1725–1750. [Google Scholar] [CrossRef]
  115. Chen, S.; Billings, S.A. Neural networks for nonlinear dynamic systems modelling and identification. Int. J. Control 1992, 56, 319–346. [Google Scholar] [CrossRef]
  116. Narendra, K.S.; Parthasarathy, K. Identification and control of dynamic systems using neural networks. IEEE Trans. Neural Netw. 1990, 1, 4–27. [Google Scholar] [CrossRef]
  117. Rivals, I.; Personnaz, L. Black-box modeling with state-space neural networks. In Neural Adaptive Control Technology; Zbikowski, R., Hint, K.J., Eds.; World Scientific: Singapore, 1996; pp. 237–264. [Google Scholar]
  118. Grishina, A.Y.; Efremov, A.V. Development of a controller law for a supersonic transport using alternative means of automation in the landing phase. In Recent Developments in High-Speed Transport; Strelets, D.Y., Korsun, O.N., Eds.; Springer: Berlin/Heidelberg, Germany, 2023; pp. 41–49. [Google Scholar]
  119. Prodanik, V.A.; Efremov, A.V. Synthesis of a controller based on the principle of inverse dynamics and the online identification of a lateral motion model in a next-generation supersonic transport. In Recent Developments in High-Speed Transport; Strelets, D.Y., Korsun, O.N., Eds.; Springer: Berlin/Heidelberg, Germany, 2023; pp. 41–49. [Google Scholar]
  120. Kondratiev, A.I.; Tiumentsev, Y.V. Neural network modeling of controlled aircraft motion. Aerosp. MAI J. 2010, 17, 5–11. (In Russian) [Google Scholar]
  121. Kondratiev, A.I.; Tiumentsev, Y.V. Application of neural networks for synthesizing flight control algorithms. I. Neural network inverse dynamics method for aircraft flight control. Russ. Aeronaut. (IzVUZ) 2013, 56, 23–30. [Google Scholar] [CrossRef]
  122. Kondratiev, A.I.; Tiumentsev, Y.V. Application of neural networks for synthesizing flight control algorithms. II. Adaptive tuning of neural network control law. Russ. Aeronaut. (IzVUZ) 2013, 56, 34–39. [Google Scholar]
  123. Kolganov, P.A.; Kondratiev, A.I.; Tiumentsev, A.Y.; Tiumentsev, Y.V. Neural network nonlinear adaptive fault tolerant motion control for unmanned aerial vehicles. Opt. Mem. Neural Netw. (Inf. Opt.) 2022, 31, 1–15. [Google Scholar] [CrossRef]
124. Efremov, A.V.; Tjaglik, M.S.; Tiumentsev, Y.V.; Tan, W. Pilot behavior modeling and its application to manual control tasks. IFAC-PapersOnLine 2016, 49, 159–164. [Google Scholar] [CrossRef]
  125. Nghiem, T.X.; Drgoňa, J.; Jones, C.; Nagy, Z.; Schwan, R.; Dey, B.; Chakrabarty, A.; Di Cairano, S.; Paulson, J.A.; Carron, A.; et al. Physics-informed machine learning for modeling and control of dynamical systems. arXiv 2023, arXiv:2306.13867v1; Proc. 2023 Am. Control Conf. [Google Scholar]
  126. Karniadakis, G.E.; Kevrekidis, I.G.; Lu, L.; Perdikaris, P.; Wang, S.; Yang, L. Physics-informed machine learning. Nat. Rev. Phys. 2021, 3, 422–440. [Google Scholar] [CrossRef]
  127. Zhou, T.; Droguett, E.L.; Mosleh, A. Physics-informed deep learning: A promising technique for system reliability assessment. Appl. Soft Comput. 2022, 126, 109217. [Google Scholar] [CrossRef]
  128. Wang, R.; Yu, R. Physics-guided deep learning for dynamical systems: A survey. arXiv 2023, arXiv:2107.01272v6. [Google Scholar]
  129. Lettermann, L.; Jurado, A.; Betz, T.; Worgotter, F.; Herzog, S. AdoptODE: Fusion of data and expert knowledge for modeling dynamical systems. arXiv 2023, arXiv:2305.09325v2. [Google Scholar]
  130. Ji, W.; Qiu, W.; Shi, Z.; Pan, S.; Deng, S. Stiff-PINN: Physics-informed neural network for stiff chemical kinetics. J. Phys. Chem. A 2021, 125, 8090–8106. [Google Scholar] [CrossRef] [PubMed]
  131. Daryakenaria, N.A.; De Florio, M.; Shukla, K.; Karniadakis, G. AI-Aristotle: A physics-informed framework for systems biology gray-box identification. arXiv 2023, arXiv:2310.01433v1. [Google Scholar]
  132. Giacomuzzo, G.; Libera, A.D.; Romeres, D.; Carli, R. A black-box physics-informed estimator based on Gaussian process regression for robot inverse dynamics identification. arXiv 2023, arXiv:2310.06585v1. [Google Scholar] [CrossRef]
  133. Lai, Z.; Mylonas, C.; Nagarajaiah, S.; Chatzi, E. Structural identification with physics-informed neural ordinary differential equations. J. Sound Vib. 2021, 508, 1–17. [Google Scholar] [CrossRef]
  134. Egorchev, M.V.; Kozlov, D.S.; Tiumentsev, Y.V. Identification of aircraft aerodynamic characteristics: A neural network based semi-empirical approach. Aerosp. MAI J. 2014, 21, 13–24. (In Russian) [Google Scholar]
  135. Egorchev, M.V.; Tiumentsev, Y.V. Learning of semi-empirical neural network model of aircraft three-axis rotational motion. Opt. Mem. Neural Netw. (Inf. Opt.) 2015, 24, 210–217. [Google Scholar] [CrossRef]
  136. Egorchev, M.V.; Tiumentsev, Y.V. Neural network based semi-empirical approach to longitudinal motion modeling and identification of aerodynamic characteristics for maneuvering aircraft. Tr. MAI 2017, 1–24. (In Russian) [Google Scholar]
  137. Rozenwasser, E.N.; Yusupov, R.M. Sensitivity of Control Systems; Science: Moscow, Russia, 1981. (In Russian) [Google Scholar]
  138. Ogunmolu, O.P.; Gu, X.; Jiang, S.B.; Gans, N.R. Nonlinear systems identification using deep dynamic neural networks. arXiv 2016, arXiv:1610.01439v1. [Google Scholar]
  139. Schlaginhaufen, A.; Wenk, P.; Krause, A.; Dörfler, F. Learning stable deep dynamics models for partially observed or delayed dynamical systems. arXiv 2021, arXiv:2110.14296v2. [Google Scholar]
  140. Manek, G.; Kolter, J.Z. Learning stable deep dynamics models. arXiv 2020, arXiv:2001.06116v1. [Google Scholar]
  141. Zarzycki, K.; Lawryńczuk, M. LSTM and GRU neural networks as models of dynamical processes used in predictive control: A comparison of models developed for two chemical reactors. Sensors 2021, 21, 5625. [Google Scholar] [CrossRef]
  142. Liu, B.; Luo, W.; Li, G.; Huang, J.; Yang, B. Do we need an encoder-decoder to model dynamical systems on networks? arXiv 2023, arXiv:2305.12185v1. [Google Scholar]
  143. Dong, Y. An application of deep neural networks to the in-flight parameter identification for detection and characterization of aircraft icing. Aerosp. Sci. Technol. 2018, 77, 34–49. [Google Scholar] [CrossRef]
  144. Cheng, X.; Zhang, S.; Nguyen, P.C.H.; Azarfar, S.; Chern, G.-W.; Baek, S.S. Convolutional neural networks for large-scale dynamical modeling of itinerant magnets. arXiv 2023, arXiv:2306.11833v1. [Google Scholar] [CrossRef]
  145. Sormoli, M.A.; Samadi, A.; Mozaffari, S.; Koufos, K.; Dianati, M.; Woodman, R. A novel deep neural network for trajectory prediction in automated vehicles using velocity vector field. arXiv 2023, arXiv:2309.10948v1. [Google Scholar]
  146. Punjani, A.; Abbeel, P. Deep learning helicopter dynamics models. In Proceedings of the 2015 IEEE International Conference on Robotics and Automation (ICRA), Seattle, WA, USA, 26–30 May 2015; pp. 3223–3230. [Google Scholar]
  147. El Hamidi, K.; Mjahed, M.; El Kari, A.; Ayad, H. Adaptive control using neural networks and approximate models for nonlinear dynamic systems. Model. Simul. Eng. 2020, 2020, 8642915. [Google Scholar] [CrossRef]
  148. Narendra, K.S.; Mukhopadhyay, S. Adaptive control using neural networks and approximate models. IEEE Trans. Neural Netw. 1997, 8, 475–485. [Google Scholar] [CrossRef] [PubMed]
  149. Cheng, L.; Wang, Z.; Jiang, F.; Li, J. Adaptive neural network control of nonlinear systems with unknown dynamics. Adv. Space Res. 2021, 67, 114–1123. [Google Scholar] [CrossRef]
  150. Ge, S.S.; Hang, C.C.; Zhang, T. Adaptive neural network control of nonlinear systems by state and output feedback. IEEE Trans. Syst. Man, Cybern. Part B 1999, 29, 818–828. [Google Scholar] [CrossRef] [PubMed]
  151. Nguyen, N.T. Model-Reference Adaptive Control—A Primer; Springer: Berlin/Heidelberg, Germany, 2018. [Google Scholar]
  152. Padhy, R.P.; Verma, S.; Ahmad, S.; Choudhury, S.K.; Sa, P.K. Deep neural network for autonomous UAV navigation in indoor corridor environments. Procedia Comput. Sci. 2018, 133, 643–650. [Google Scholar] [CrossRef]
  153. Makke, O.; Lin, F. Learning in dynamic systems and its application to adaptive PID Control. arXiv 2023, arXiv:2308.10851v1. [Google Scholar]
  154. Goyal, P.; Benner, P. LQResNet: A deep neural network architecture for learning dynamic processes. arXiv 2023, arXiv:2103.02249v2. [Google Scholar]
Figure 1. General structure of a controlled dynamical system.
Figure 2. Scheme of generating a training example based on the results of the experiment with the control object.
Figure 3. Typical test excitations used in the study of the controlled systems dynamics: (a)—step; (b)—rectangular pulse; (c)—doublet.
Figure 4. Test excitations as functions of time (in seconds) used in the study of the dynamics of controlled systems: (a)—random signal; (b)—polyharmonic signal. Here, ϕ a c t is the command signal of the elevator actuator; the red dashed–dotted line shows the trimming value of the elevator deflection angle, providing horizontal flight of the airplane for the given values of altitude and airspeed.
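The polyharmonic excitation in Figure 4b is commonly constructed as a multisine with Schroeder phasing [112], which spreads the harmonics' peaks in time and keeps the peak (crest) factor low. A minimal sketch of this construction; the harmonic count, amplitudes, and frequency grid below are illustrative assumptions, not the values used in the study:

```python
import numpy as np

def multisine(n_harm, f0, fs, duration, schroeder=True):
    """Equal-amplitude multisine. Schroeder phases phi_k = -pi*k*(k-1)/M
    de-synchronize the harmonics' peaks, lowering the crest factor."""
    t = np.arange(0.0, duration, 1.0 / fs)
    u = np.zeros_like(t)
    for k in range(1, n_harm + 1):
        phi = -np.pi * k * (k - 1) / n_harm if schroeder else 0.0
        u += np.cos(2.0 * np.pi * k * f0 * t + phi)
    return t, u

def crest_factor(u):
    # Peak amplitude relative to RMS; lower is better for excitation signals
    return np.max(np.abs(u)) / np.sqrt(np.mean(u ** 2))

# One full period of the fundamental: f0 = 0.1 Hz -> 10 s, sampled at 100 Hz
t, u_schroeder = multisine(10, 0.1, 100.0, 10.0, schroeder=True)
_, u_zero_phase = multisine(10, 0.1, 100.0, 10.0, schroeder=False)
```

With zero phases all harmonics peak together at t = 0, giving a crest factor of about sqrt(2M); Schroeder phasing reduces it substantially for the same power spectrum.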
Figure 5. Structure of the NARX model; see Equation (29) and its explanations for variable designations.
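The NARX structure in Figure 5 feeds delayed outputs and delayed controls back into a static feedforward network. A minimal closed-loop (parallel) simulation sketch with an untrained single-hidden-layer network; the layer sizes, delay depths, and control sequence are illustrative assumptions, and Equation (29) defines the actual model structure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Untrained one-hidden-layer network: y[k] = W2 @ tanh(W1 @ z + b1) + b2,
# where z stacks delayed outputs y[k-1], y[k-2] and controls u[k-1], u[k-2].
n_in, n_hidden = 4, 15
W1 = 0.1 * rng.standard_normal((n_hidden, n_in))
b1 = np.zeros(n_hidden)
W2 = 0.1 * rng.standard_normal(n_hidden)
b2 = 0.0

def narx_step(y_hist, u_hist):
    z = np.array([y_hist[-1], y_hist[-2], u_hist[-1], u_hist[-2]])
    return W2 @ np.tanh(W1 @ z + b1) + b2

u = np.sin(0.1 * np.arange(200))   # illustrative control sequence
y = [0.0, 0.0]                     # initial output history
for k in range(2, 200):
    # Parallel mode: the model's own past outputs are fed back
    y.append(narx_step(y, u[:k]))
y = np.array(y)
```

During training, NARX models are often run in series-parallel mode instead, feeding back measured outputs; the parallel mode above is what is used for multi-step prediction.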
Figure 6. Scheme of neural network identification of the control object. Here, u is control, y p is output of the control object, y m is output of the ANN model of the control object, e is the difference between the outputs of the control object and the ANN model, and ξ is a correcting action.
Figure 7. Training results for the ANN model of the SST: 15 neurons in the hidden layer, 2 delays, 30,000 training epochs.
Figure 8. Canonical form of the recurrent neural network.
Figure 9. Canonical semi-empirical form of source model (36) discretized using explicit Euler method; only the subnetwork whose elements are marked in color is being trained.
Figure 10. Canonical semi-empirical form of source model (36) discretized using explicit Adams method; only the subnetwork whose elements are marked in color is being trained.
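Figures 9 and 10 differ only in the integration scheme embedded in the network structure: a one-step explicit Euler update versus a two-step explicit Adams (Adams–Bashforth) update. The two updates can be sketched on a generic scalar ODE y' = f(t, y); this is a numerical illustration of the schemes themselves, not of the discretized model (36):

```python
import math

def f(t, y):
    return -y  # test problem y' = -y, exact solution y(t) = exp(-t)

dt, n = 0.001, 1000  # integrate over [0, 1]

# Explicit Euler: y[k+1] = y[k] + dt * f(t_k, y_k)
y_e = 1.0
for k in range(n):
    y_e += dt * f(k * dt, y_e)

# Two-step Adams-Bashforth: y[k+1] = y[k] + dt * (3/2 f_k - 1/2 f_{k-1}),
# bootstrapped with a single Euler step.
y_prev, y_ab = 1.0, 1.0 + dt * f(0.0, 1.0)
for k in range(1, n):
    f_k = f(k * dt, y_ab)
    f_km1 = f((k - 1) * dt, y_prev)
    y_prev, y_ab = y_ab, y_ab + dt * (1.5 * f_k - 0.5 * f_km1)

exact = math.exp(-1.0)
```

The Euler scheme is first-order accurate, the two-step Adams scheme second-order, which is why the latter yields a noticeably smaller global error at the same step size but requires storing one extra past derivative inside the network.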
Figure 11. Example comparing the capabilities of the black-box and gray-box approaches to modeling dynamical systems with neural networks: source ODE system (1) in this figure corresponds to dynamical system (34), while (2) and (3) are its variants with complicated dynamics; trained elements of the semi-empirical ANN model and the NARX model are marked in color.
Figure 12. An example of using the direct and indirect approaches to find relationships for the coefficients of aerodynamic forces and moments affecting an aircraft. Trained elements of the semi-empirical ANN model and the representation of functions are marked in color. In the semi-empirical ANN model, the left subnetwork marked in color represents the C L ( α , q , φ ) function and the right subnetwork represents the C m ( α , q , φ ) function.
Figure 13. Semi-empirical ANN model for longitudinal angular aircraft motion (based on the Euler scheme). In this model, the left subnetwork marked in color represents the C L ( α , q , φ ) function and the right subnetwork represents the C m ( α , q , φ ) function. Only these marked elements of the model are trained.
Figure 14. Empirical ANN model for longitudinal angular aircraft motion (NARX). In this model, unlike the model in Figure 13, all elements are to be trained and are marked in color.
Figure 15. Accuracy evaluation of the semi-empirical ANN model shown in Figure 13. The red dashed–dotted line shows the trimming value of the elevator deflection angle, providing horizontal flight of the airplane for the given values of altitude and airspeed.
Figure 16. Accuracy assessment of the empirical ANN model shown in Figure 14.
Figure 17. Example of pitching moment coefficient reconstitution for a maneuverable aircraft with a wide range of flight modes: (a)—coefficient C m ( α , δ e ) for various elevator deflection angle δ e values according to [111]; (b)—approximation error E C m for fixed values of pitch rate q = 0 deg/s and airspeed V T = 150 m/s.
Figure 18. Error values for C L , C Y , C l , C m , C n in the dependencies reconstructed for them during testing of the hybrid ANN model (relative to the ranges of variation of these variables observed during testing).
Figure 19. The architecture of the SST motion model in the form of a deep neural network.
Figure 20. Training results of the SST motion model in the form of a deep neural network.
Figure 21. Adaptive control of a dynamical system using the MRAC scheme.
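In the MRAC scheme of Figure 21, the controller parameters are adjusted online so that the closed loop tracks a reference model. A minimal sketch of the classical MIT-rule version for a first-order plant; the plant and model coefficients, the square-wave reference, and the adaptation gain are illustrative assumptions, and the paper's scheme uses an ANN model and neurocontroller rather than this textbook gradient law:

```python
import numpy as np

# Plant: y' = -a*y + b*u; reference model: ym' = -am*ym + bm*r
a, b, am, bm = 1.0, 1.0, 2.0, 2.0
gamma, dt, n = 1.0, 0.01, 20000   # adaptation gain, step size, 200 s horizon

y = ym = xr = xy = 0.0            # xr, xy: sensitivity filter states
th1 = th2 = 0.0                   # adjustable controller gains
err = np.empty(n)
for k in range(n):
    r = 1.0 if (k * dt) % 20 < 10 else -1.0   # square-wave reference
    u = th1 * r - th2 * y                     # control law
    e = y - ym                                # tracking error
    # Sensitivity signals: am/(s + am) applied to r and y
    xr += dt * (-am * xr + am * r)
    xy += dt * (-am * xy + am * y)
    # MIT rule: gradient descent on e^2 / 2
    th1 += dt * (-gamma * e * xr)
    th2 += dt * (gamma * e * xy)
    y += dt * (-a * y + b * u)
    ym += dt * (-am * ym + bm * r)
    err[k] = e
```

For these coefficients the ideal gains are th1 = bm/b = 2 and th2 = (am - a)/b = 1; as the gains approach them, the tracking error shrinks, which is the mechanism the ANN-based scheme generalizes to nonlinear plants.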
Figure 22. Neurocontroller architecture and its interaction with the ANN model of the control object.
Figure 23. Source data for training the combined network including the neurocontroller (NC) and the ANN model and its results (angle of attack tracking error e α is measured in degrees).
Figure 24. SST response to a failure situation without adaptation (failure at t = 40 s; angle of attack tracking error e α is measured in degrees).
Figure 25. SST response to a failure situation with adaptation (failure at t = 40 s; angle of attack tracking error e α is measured in degrees).
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
