Aerospace
  • Article
  • Open Access

12 August 2022

System Identification of an Aerial Delivery System with a Ram-Air Parachute Using a NARX Network

Mechanical Engineering Department, Başkent University, Ankara 06500, Turkey
* Author to whom correspondence should be addressed.

Abstract

Neural networks are one of the methods used in system identification problems. In this study, a NARX network with a serial-parallel structure was used to identify an unknown aerial delivery system with a ram-air parachute. The dataset was created using the software-in-the-loop method, with Gazebo as the simulator and PX4 as the autopilot software. The performance of the NARX network differed according to the parameters used, such as the selected training algorithm, the input and output delays, the number of hidden layers, and the number of neurons. Within the scope of this study, each parameter was examined independently. Models were trained using MATLAB 2020a. The results demonstrated that the model with one hidden layer and five neurons, trained using the Bayesian regularization algorithm, was sufficient for this problem.

1. Introduction

In aircraft, system identification can be thought of as estimating aerodynamic parameters or defining a mathematical model of the system. Three methods have been proposed in the literature for the estimation of the aerodynamic parameters of parachute landing systems [1]. The first covers analytical methods based on computational fluid dynamics; the others are wind tunnel tests and flight tests. In this study, we focused on the methods used in flight tests.
The purpose of system identification is to obtain a mathematical model according to the inputs and outputs obtained from the flight tests. Hamel and Jategaonkar proposed the 4M (maneuver, measurement, method, model; see Figure 1) requirements for successful system identification [2], arguing that:
Figure 1. 4M-based system identification process.
  • Control inputs should be created to cover extreme points;
  • High-resolution measurements should be used;
  • The possible mathematical model of the vehicle should be defined; and
  • The most suitable method for the data should be chosen.
Jann and Strickert suggested separating the symmetric and asymmetric maneuvers that need to be carried out in the formation of data to be used in the definition process [3] (Figure 2).
Figure 2. Recommended control input.
The methods used in parameter estimation can be listed as the equation-error, output-error, and filter-error methods. The choice of method can be made according to the noise present in the measurements and in the process [2] (Figure 3). If disturbances can be neglected in both, the fastest method, the equation-error method, is preferred. If disturbances are assumed only in the measurements, the output-error method is recommended; if both are present, the filter-error method is recommended.
Figure 3. Output-error method.
The output-error method is the most widely preferred method for parameter estimation in the literature. In his study, Grauer calculated a dynamic model of an aircraft during flight by adapting the output-error method, which is usually carried out using post-flight data, to real-time flight data [4]. In another study using the output-error method, Jann estimated the state variables of a parachute landing system called ALEX via sensor inputs (GPS, Magnetometers, Gyros, Accelerometers) [5]. On the other hand, Jaiswal, Prakash, and Chaturvedi estimated the aerodynamic coefficients of a parachute landing system using the maximum likelihood method and the output-error method [6].
In addition to statistical methods, machine learning techniques, which are increasing in popularity day by day, have also been successfully used in solving system identification problems. In the literature, artificial neural networks have been used in modeling aircraft dynamics [7,8,9,10,11], estimating aerodynamic forces and moments [12,13,14,15], and in controller designs [16,17]. Both feed-forward neural networks [14,18] and recurrent neural networks have been widely used in these studies [19]. Roudbari and Saghafi proposed a new method for describing the dynamics of highly maneuverable aircraft. In the model they developed, they modeled the flight dynamics with artificial neural networks. The difference between their approach and those of traditional methods is that they did not use aerodynamic information during the training process [20]. Bagherzadeh supported a model with flight dynamics in order to increase the performance of the artificial neural network model [21].
The development of deep learning methods has enabled these methods to be used frequently in system identification problems. The residual neural network approach, which is one type of deep neural network, is one of the methods used to solve these problems. Goyal and Benner developed a special architecture for dynamic systems called LQResNET [22]. The method they proposed allowed for the use of observations in the modeling of dynamical systems. Their model was based on the principle that the rate of a variable depends on the linear and quadratic forms of the variable. Chen and Xiu suggested the framework called gResNet. They defined the residual as the estimation error of the prior model. They also used a DNN to model the residual [23].
In this study, a NARX network with a serial-parallel structure was used to identify an unknown aerial delivery system with a ram-air parachute. The dataset was created using the software-in-the-loop method, with Gazebo as the simulator and PX4 as the autopilot software. The performance of the NARX network differed according to the parameters used, such as the selected training algorithm, the input and output delays, the number of hidden layers, and the number of neurons. Within the scope of this study, each parameter was examined independently. Models were trained using MATLAB 2020a.

2. Mathematical Model

In this study, a 6-degree-of-freedom model developed for a parachute landing system was used [24]. The equations of motion of the vehicle can be written as:

$$ m I_{3\times3} \begin{bmatrix} \dot{u} \\ \dot{v} \\ \dot{w} \end{bmatrix} = F - m S(\omega) \begin{bmatrix} u \\ v \\ w \end{bmatrix}, $$

$$ I \begin{bmatrix} \dot{p} \\ \dot{q} \\ \dot{r} \end{bmatrix} = M - S(\omega) I \begin{bmatrix} p \\ q \\ r \end{bmatrix}, $$

where $m$ is the mass, $I$ is the inertia matrix, $[u, v, w]$ are the linear velocities and $[p, q, r]$ are the angular velocities in the body frame, $S(\omega)$ is the skew-symmetric matrix of the angular velocity vector, $F$ is the force, and $M$ is the moment.
Due to the xz-symmetry plane of the parachute landing system, the inertial matrices consist of 4 unique components.
$$ S(\omega) = \begin{bmatrix} 0 & -r & q \\ r & 0 & -p \\ -q & p & 0 \end{bmatrix} $$

$$ I = \begin{bmatrix} I_{xx} & 0 & -I_{xz} \\ 0 & I_{yy} & 0 \\ -I_{xz} & 0 & I_{zz} \end{bmatrix} $$
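As an illustration, the skew-symmetric operator and the two equations of motion above can be sketched in Python with NumPy (an illustrative sketch, not the authors' implementation; the function names are ours):

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix S(w) of the angular velocity w = [p, q, r]."""
    p, q, r = w
    return np.array([[0.0, -r,   q],
                     [r,    0.0, -p],
                     [-q,   p,   0.0]])

def rigid_body_rates(v, w, F, M, m, I):
    """Body-frame linear and angular accelerations from the 6-DOF equations."""
    v_dot = F / m - skew(w) @ v                       # m I v_dot = F - m S(w) v
    w_dot = np.linalg.solve(I, M - skew(w) @ (I @ w)) # I w_dot = M - S(w) I w
    return v_dot, w_dot
```

With zero angular velocity the linear acceleration reduces to `F / m`, which gives a quick sanity check on the implementation.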
The forces and moments affecting the parachute are caused by gravity and aerodynamic forces. The gravitational force can be written according to the body (b) axis.
$$ F_g = m g \begin{bmatrix} -\sin\theta \\ \cos\theta \sin\phi \\ \cos\theta \cos\phi \end{bmatrix} $$
The aerodynamic forces acting on the system are written using the relevant aerodynamic coefficients ($C_{D_0}$, $C_{D_{\alpha^2}}$, $C_{D_{\delta_s}}$, $C_{Y_\beta}$, $C_{L_0}$, $C_{L_\alpha}$, $C_{L_{\delta_s}}$), according to the body axis.
$$ F_a = Q S R_w^b \begin{bmatrix} -\left(C_{D_0} + C_{D_{\alpha^2}} \alpha^2 + C_{D_{\delta_s}} \bar{\delta}_s\right) \\ C_{Y_\beta} \beta \\ -\left(C_{L_0} + C_{L_\alpha} \alpha + C_{L_{\delta_s}} \bar{\delta}_s\right) \end{bmatrix} $$
In this equation, $Q$ is the dynamic pressure, $S$ represents the parachute surface area, $\bar{\delta}_s$ represents the symmetric trailing-edge deflection, and $R_w^b$ is the rotation matrix from the aerodynamic (wind) coordinate system to the body axis.
$$ R_w^b = R_\alpha R_\beta = \begin{bmatrix} \cos\alpha & 0 & -\sin\alpha \\ 0 & 1 & 0 \\ \sin\alpha & 0 & \cos\alpha \end{bmatrix} \begin{bmatrix} \cos\beta & -\sin\beta & 0 \\ \sin\beta & \cos\beta & 0 \\ 0 & 0 & 1 \end{bmatrix} $$

$$ R_w^b = \begin{bmatrix} \cos\alpha\cos\beta & -\cos\alpha\sin\beta & -\sin\alpha \\ \sin\beta & \cos\beta & 0 \\ \sin\alpha\cos\beta & -\sin\alpha\sin\beta & \cos\alpha \end{bmatrix} $$
The angle of attack and sideslip angle are obtained from the velocity vector in the body axis.
$$ \alpha = \tan^{-1}\frac{v_z}{v_x}, \qquad \beta = \tan^{-1}\frac{v_y}{\sqrt{v_x^2 + v_z^2}} $$
The velocity vector in the body axis consists of the global velocity and the wind effect.
$$ V_a = \begin{bmatrix} v_x \\ v_y \\ v_z \end{bmatrix} = \begin{bmatrix} u \\ v \\ w \end{bmatrix} - R_n^b \begin{bmatrix} w_x \\ w_y \\ w_z \end{bmatrix} $$
$R_n^b$ is the rotation matrix from the North-East-Down coordinate system, which has its origin at the center of mass of the parachute, to the body axis. Euler angles (roll, pitch, yaw) are used in this notation.
$$ R_\phi = \begin{bmatrix} 1 & 0 & 0 \\ 0 & \cos\phi & \sin\phi \\ 0 & -\sin\phi & \cos\phi \end{bmatrix} $$

$$ R_\theta = \begin{bmatrix} \cos\theta & 0 & -\sin\theta \\ 0 & 1 & 0 \\ \sin\theta & 0 & \cos\theta \end{bmatrix} $$

$$ R_\psi = \begin{bmatrix} \cos\psi & \sin\psi & 0 \\ -\sin\psi & \cos\psi & 0 \\ 0 & 0 & 1 \end{bmatrix} $$

$$ R_n^b = R_\phi R_\theta R_\psi $$
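The air-data chain above (Euler rotation, airspeed vector in the body axis, angle of attack, and sideslip) can be sketched as follows (illustrative only; the function names and the zero-wind usage below are our assumptions):

```python
import numpy as np

def R_nb(phi, theta, psi):
    """Rotation from the NED frame to the body frame, R_n^b = R_phi R_theta R_psi."""
    Rphi = np.array([[1, 0, 0],
                     [0,  np.cos(phi), np.sin(phi)],
                     [0, -np.sin(phi), np.cos(phi)]])
    Rtheta = np.array([[np.cos(theta), 0, -np.sin(theta)],
                       [0, 1, 0],
                       [np.sin(theta), 0,  np.cos(theta)]])
    Rpsi = np.array([[ np.cos(psi), np.sin(psi), 0],
                     [-np.sin(psi), np.cos(psi), 0],
                     [0, 0, 1]])
    return Rphi @ Rtheta @ Rpsi

def air_data(v_body, wind_ned, phi, theta, psi):
    """Airspeed vector in the body axis, then angle of attack and sideslip angle."""
    vx, vy, vz = v_body - R_nb(phi, theta, psi) @ wind_ned
    alpha = np.arctan2(vz, vx)
    beta = np.arctan2(vy, np.hypot(vx, vz))
    return alpha, beta
```

For zero Euler angles the rotation is the identity, so in still air `air_data` reduces to the two arctangent formulas above.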
Aerodynamic moments affecting the parachute can also be written using the relevant coefficients ($C_{l_\beta}$, $C_{l_p}$, $C_{l_r}$, $C_{l_{\delta_a}}$, $C_{m_0}$, $C_{m_\alpha}$, $C_{m_q}$, $C_{n_\beta}$, $C_{n_p}$, $C_{n_r}$, $C_{n_{\delta_a}}$). These are the roll, pitch, and yaw moments, respectively [2].
$$ M_a = \frac{\rho V_a^2 S}{2} \begin{bmatrix} b\left(C_{l_\beta}\beta + \frac{b}{2V_a} C_{l_p} p + \frac{b}{2V_a} C_{l_r} r + C_{l_{\delta_a}} \bar{\delta}_a\right) \\ \bar{c}\left(C_{m_0} + C_{m_\alpha}\alpha + \frac{\bar{c}}{2V_a} C_{m_q} q\right) \\ b\left(C_{n_\beta}\beta + \frac{b}{2V_a} C_{n_p} p + \frac{b}{2V_a} C_{n_r} r + C_{n_{\delta_a}} \bar{\delta}_a\right) \end{bmatrix} $$
Here, $\rho$ is the air density, $\bar{c}$ represents the mean aerodynamic chord, $b$ is the canopy span, $\bar{\delta}_a = \delta_a / \delta_{a,max}$ is the normalized asymmetric trailing-edge deflection, and $S$ is the canopy reference area.

3. Materials and Methods

The dataset was created using the software-in-the-loop method. Gazebo was used as the simulator and PX4 as the autopilot software. A virtual flight was performed in the Gazebo environment (Figure 4).
Figure 4. Gazebo simulation of the system.
The parameters required for the simulation were used considering the autonomous landing system with a parachute model named Snowflake (Table 1) [3].
Table 1. Parameters of the Snowflake parachute model [3].
Gazebo-compatible sensor models were used to obtain the flight data for the vehicle in the simulation environment. These consisted of a gyroscope, magnetometer, accelerometer, barometer, and GPS. The estimation of the state variables of the vehicle was carried out with the PX4 software, using the extended Kalman filter.
PX4 has a state estimation module called EKF2, which uses the EKF algorithm. It uses IMU data in the state prediction phase; GPS and barometer measurements are used in the state correction phase [24].
Simplified models of the sensors used can be written as follows [25]:

$$ x_m = x + b + n, $$

$$ \dot{b} = n_b, $$

where $x_m$ is the measured value, $x$ is the real value, $b$ is the sensor bias, $n$ is Gaussian measurement noise, and $n_b$ is the Gaussian noise driving the bias. The sensor parameters used in the simulation are given in Table 2.
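A minimal simulation of this bias-plus-noise sensor model might look as follows (a sketch under the stated assumptions; the Euler integration step and all parameter names are ours, not from the paper or PX4):

```python
import numpy as np

def simulate_sensor(x_true, bias0, sigma_n, sigma_b, dt, rng):
    """Simulate x_m = x + b + n, with the bias b driven by Gaussian noise n_b."""
    b = bias0
    out = np.empty(len(x_true))
    for k, x in enumerate(x_true):
        out[k] = x + b + rng.normal(0.0, sigma_n)  # x_m = x + b + n
        b += rng.normal(0.0, sigma_b) * dt         # b_dot = n_b (Euler step)
    return out
```

Setting both noise standard deviations to zero recovers the true signal shifted by the constant initial bias, which is a useful check of the model.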
Table 2. Parameters used in the simulation.
The simulation was carried out in a windless environment with an air density of 1.225 kg/m³. The system was released from a height of 500 m; the release occurred at the 30th second. The control inputs $\bar{\delta}_a$ and $\bar{\delta}_s$ were given as full right and full left deflections (Figure 5).
Figure 5. Control inputs.
The flight data received from the system were arranged into the input vector $x$ and the output vector $y$:

$$ x = \begin{bmatrix} \bar{\delta}_a & \bar{\delta}_s \end{bmatrix} $$

$$ y = \begin{bmatrix} u & v & w & p & q & r \end{bmatrix} $$
A total of 270 s of data were reduced to 225 s to cover the flight section, and 2250 samples were produced at the 10 Hz measurement rate. The position, Euler angles, and velocity of the vehicle in the flight data used are shown in Figure 6, Figure 7 and Figure 8.
Figure 6. Position of the system.
Figure 7. Euler angles.
Figure 8. Velocity of the system.
In order to evaluate the performance of the model, 70% of the flight was used for training and the remaining 30% was used in the testing process. Since the landing phase is the most important part of the flight, the first portion of the flight was selected as the training data.

3.1. NARX Network

A nonlinear autoregressive exogenous (NARX) network is a nonlinear model representation used in time series models. In this notation, the model’s outputs depend on the past output values, the inputs, and the past values of the inputs. Its mathematical expression is given as follows:
$$ y(t) = f\big(y(t-1), y(t-2), \ldots, y(t-n_y);\; u(t), u(t-1), \ldots, u(t-n_u)\big), $$
where y denotes outputs, u denotes inputs, and f represents a nonlinear function. The structure in which f is modeled as a neural network is named the NARX neural network (NARX network) [26]. This model has been used for modeling conventional fixed-wing [27,28] and rotary-wing [29,30] aircraft. A NARX neural network can be modeled using two types of models: parallel and serial-parallel (Figure 9). In the parallel model, the estimated output values are fed back into the system.
$$ \hat{y}(t) = f\big(\hat{y}(t-1), \hat{y}(t-2), \ldots, \hat{y}(t-n_y);\; u(t), u(t-1), \ldots, u(t-n_u)\big) $$
Figure 9. Parallel (Left) and serial-parallel (Right) NARX networks.
In the serial-parallel model, only real system outputs are used:
$$ \hat{y}(t) = f\big(y(t-1), y(t-2), \ldots, y(t-n_y);\; u(t), u(t-1), \ldots, u(t-n_u)\big), $$
where $\hat{y}(t)$ represents the estimated output value at time $t$.
Since the dataset used in this study included real system outputs, the serial-parallel structure was preferred. The feed-forward network block shown in Figure 9 consists of a multilayer feedforward neural network with at least one hidden layer of neurons. Each neuron calculates its output by applying an activation function to the weighted sum of its inputs, as shown in Figure 10, where $x_n$, $w_n$, $b$, and $f$ represent the inputs, weights, bias, and activation function, respectively. The architecture of the serial-parallel NARX network is shown in Figure 11.
Figure 10. Structure of a neuron.
Figure 11. Serial-parallel NARX network architecture.
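A serial-parallel NARX one-step predictor with one tanh hidden layer and a linear output, as described above, can be sketched in NumPy (illustrative only: the weights are random, and the lag counts and layer size are arbitrary choices, not the paper's trained model):

```python
import numpy as np

rng = np.random.default_rng(0)

def narx_features(y, u, t, n_y, n_u):
    """Regressor for the serial-parallel NARX model: past true outputs
    y(t-1)..y(t-n_y) and inputs u(t)..u(t-n_u)."""
    return np.concatenate([y[t - n_y:t][::-1], u[t - n_u:t + 1][::-1]])

# A minimal one-hidden-layer network (tanh hidden layer, linear output).
n_y, n_u, hidden = 2, 2, 5
W1 = rng.normal(size=(hidden, n_y + n_u + 1)); b1 = np.zeros(hidden)
W2 = rng.normal(size=(1, hidden)); b2 = np.zeros(1)

def predict(y, u, t):
    """One-step-ahead estimate y_hat(t) from measured outputs and inputs."""
    x = narx_features(y, u, t, n_y, n_u)
    h = np.tanh(W1 @ x + b1)
    return (W2 @ h + b2)[0]
```

In the serial-parallel form the regressor is always built from measured outputs, so the predictor never feeds its own estimates back, matching the structure preferred in this study.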
The selection of activation functions plays an important role in model design. The functions used in the hidden layers and those used in the output layer differ. Differentiable nonlinear functions are preferred in hidden layers; compared with linear functions, they enable models to handle more complex problems. Functions frequently used in hidden layers in the literature are ReLU (rectified linear unit), sigmoid (logistic), and tanh (hyperbolic tangent). The function used in the output layer depends on the type of problem: linear functions are used in regression problems, whereas softmax or sigmoid functions are used in classification problems. These functions are listed in Table 3.
Table 3. Activation functions.
The process of calculating and updating the weights is called training. The aim here is to minimize the targeted error function for model performance. In the neural network model, this function can be written as the sum of the squares of the errors:
$$ E = \sum_{i=1}^{n} e_i^2, $$
where $e$ is the error and $n$ is the number of data points.
The training algorithm used in feed-forward neural network methods is known as the back-propagation algorithm [31]. Since the convergence rate of the steepest descent method, which is used as a standard in the back-propagation algorithm, is slow, many learning algorithms have been developed for neural network training. The main ones are the Levenberg–Marquardt algorithm [32], the Bayesian regularization algorithm [33], and the scaled conjugate gradient algorithm [34].

3.2. Levenberg–Marquardt

The Levenberg–Marquardt algorithm is a second-order training algorithm used in solving nonlinear optimization problems. According to the weight values that need to be updated, the Jacobian of the error function shown in Equation (23) can be calculated as follows:
$$ J = \begin{bmatrix} \dfrac{\partial e_1}{\partial w_1} & \cdots & \dfrac{\partial e_1}{\partial w_m} \\ \vdots & \ddots & \vdots \\ \dfrac{\partial e_n}{\partial w_1} & \cdots & \dfrac{\partial e_n}{\partial w_m} \end{bmatrix}, $$
where m is the number of weights in the network. After finding the Jacobian matrix, the gradient vector (g) and the Hessian matrix (H) can also be calculated.
g = J T e
H = J T J
The weights are updated based on the Jacobian matrix.
$$ w_{i+1} = w_i - \left(J_i^T J_i + \alpha_i I\right)^{-1} J_i^T e_i, $$
where α i is the learning coefficient and I is the unit matrix. A theoretical analysis can be found in [35].
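A single Levenberg–Marquardt update of this form can be sketched as follows (a generic least-squares sketch, not MATLAB's trainlm; the function and argument names are ours):

```python
import numpy as np

def lm_step(w, jac, resid, alpha):
    """One LM update: w <- w - (J^T J + alpha*I)^(-1) J^T e."""
    J, e = jac(w), resid(w)
    A = J.T @ J + alpha * np.eye(len(w))
    return w - np.linalg.solve(A, J.T @ e)
```

For alpha = 0 the step reduces to a Gauss-Newton step, so fitting the slope of a noiseless line converges in one iteration; a larger alpha shortens the step toward gradient descent.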

3.3. Bayesian Regularization

The error function is rearranged using the regularization method to generalize the neural network [36]:
$$ F = \mu E_w + \nu E, $$
where $\mu$ and $\nu$ are the regularization parameters and $E_w$ is the sum of the squared weights. The Bayesian regularization method is used to optimize these parameters. Treating the weight values as random variables, it aims to find the weights that maximize the posterior probability distribution of the weights given the dataset. The posterior distribution can be expressed according to Bayes' rule:
$$ P(w \mid D, \mu, \nu, N) = \frac{P(D \mid w, \nu, N)\, P(w \mid \mu, N)}{P(D \mid \mu, \nu, N)}, $$
where $D$ represents the dataset and $N$ the neural network model. $P(D \mid w, \nu, N)$ is the likelihood function, $P(w \mid \mu, N)$ is the prior density, and $P(D \mid \mu, \nu, N)$ is the normalization factor. Assuming that the noise in the dataset and in the weights is Gaussian, the likelihood function and prior density can be calculated.
$$ P(D \mid w, \nu, N) = \frac{e^{-\nu E}}{Z(\nu)} $$

$$ P(w \mid \mu, N) = \frac{e^{-\mu E_w}}{Z_w(\mu)} $$

Here, $Z(\nu) = (\pi/\nu)^{n/2}$ and $Z_w(\mu) = (\pi/\mu)^{m/2}$. By combining these equations, the posterior distribution of the weights can be rewritten:

$$ P(w \mid D, \mu, \nu, N) = \frac{e^{-(\mu E_w + \nu E)}}{Z_w(\mu)\, Z(\nu)} $$
The regularization parameters depend on the model $N$, and Bayes' rule can also be applied for their optimization:

$$ P(\mu, \nu \mid D, N) = \frac{P(D \mid \mu, \nu, N)\, P(\mu, \nu \mid N)}{P(D \mid N)} $$
As can be seen in Equation (34), $P(\mu, \nu \mid D, N)$ is directly proportional to $P(D \mid \mu, \nu, N)$. Therefore, the maximum value of $P(D \mid \mu, \nu, N)$ must be calculated. The regularization parameters can then be calculated using the Taylor expansion of Equation (29). A theoretical analysis can be found in [36].
$$ \mu = \frac{\gamma}{2 E_w} $$

$$ \nu = \frac{n - \gamma}{2 E} $$

$$ \gamma = m - 2\mu\, \mathrm{tr}\!\left(H^{-1}\right) $$
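The re-estimation of the regularization parameters can be sketched numerically (a sketch of the MacKay/Foresee-Hagan style update, with our own function and argument names; $\gamma$ is the effective number of parameters):

```python
import numpy as np

def br_hyperparams(H, E_w, E, m, n, mu):
    """Re-estimate the Bayesian regularization parameters.

    H: Hessian of the regularized objective; E_w: sum of squared weights;
    E: sum of squared errors; m: number of weights; n: number of data
    points; mu: current weight-decay parameter."""
    gamma = m - 2.0 * mu * np.trace(np.linalg.inv(H))  # effective no. of parameters
    mu_new = gamma / (2.0 * E_w)
    nu_new = (n - gamma) / (2.0 * E)
    return gamma, mu_new, nu_new
```

In practice these updates are interleaved with the Levenberg-Marquardt weight updates, which is how MATLAB's Bayesian regularization training proceeds.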

3.4. Scaled Conjugate Gradient

In the steepest descent algorithm implemented in the standard back-propagation algorithm, the weights are updated by searching in the direction opposite to the gradient vector. Although the error function decreases most rapidly in this direction, the same cannot be said for the convergence rate. Conjugate gradient algorithms instead search along directions with faster convergence, called conjugate directions. In this method, the search first starts in the direction opposite to the gradient, as in the steepest descent algorithm, and differs from the second iteration onward as follows.
$$ p_0 = -g_0 $$

$$ x_{k+1} = x_k + \alpha_k p_k $$

$$ p_k = -g_k + \beta_k p_{k-1} $$
Different algorithms have been developed according to the way in which the $\beta_k$ coefficient is calculated. Møller combined the LM algorithm and the conjugate gradient algorithm for the calculation of the step size in the algorithm he developed, called the scaled conjugate gradient algorithm [37]. In this algorithm, which is based on calculating an approximation of the Hessian matrix, the design parameters change in each iteration and are independent of the user. This is the most important factor affecting the success of the algorithm.
$$ H_k p_k = \frac{E'(w_k + \sigma_k p_k) - E'(w_k)}{\sigma_k} + \lambda_k p_k $$

$$ \beta_k = \frac{\left\| g_{k+1} \right\|^2 - g_{k+1}^T g_k}{g_k^T g_k} $$

$$ p_{k+1} = -g_{k+1} + \beta_k p_k $$
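The direction update with Møller's $\beta_k$ can be sketched as follows (illustrative; the full SCG step-size and $\lambda_k$ adaptation logic is omitted, and the function name is ours):

```python
import numpy as np

def next_direction(g_new, g_old, p_old):
    """New conjugate direction p_{k+1} = -g_{k+1} + beta_k p_k,
    with beta_k = (||g_{k+1}||^2 - g_{k+1}^T g_k) / (g_k^T g_k)."""
    beta = (g_new @ g_new - g_new @ g_old) / (g_old @ g_old)
    return -g_new + beta * p_old
```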

4. Results and Discussion

The performance of the NARX network differs according to the parameters used, such as the selected training algorithm, the input and output delays, the number of hidden layers, and the number of neurons. Within the scope of this study, each parameter was examined independently. Models were trained using MATLAB 2020a. The root-mean-square error (RMSE) and mean absolute error (MAE) values were used to evaluate model performance. The metrics used are presented in Table 4.
Table 4. Metrics used in the evaluation of models.
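The two metrics can be computed directly (a straightforward sketch; the paper's exact definitions are those in Table 4):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root-mean-square error."""
    return np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred)))
```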
First, the performances of the training algorithms (Bayesian regularization, Levenberg–Marquardt, and scaled conjugate gradient) were compared in a model consisting of a single hidden layer and 15 neurons. The input and output delay vectors were determined as in [12]. The hyperbolic tangent was used as the activation function in the hidden layer and a linear function was used in the output layer. The errors according to the training algorithms are shown in Table 5.
Table 5. Performance based on the training algorithms.
Despite its fast training time, SCG performed worse than LM and BR. Next, the number of hidden layers and the number of neurons within them were varied and the results examined (Table 6). BR was used as the training algorithm.
Table 6. Performance based on the number of hidden layers and neurons.
According to the angle of attack and the sideslip angle, it can be seen that model 4, which consisted of a single hidden layer and five neurons, showed the best performance. A comparison of the model results with the real system is shown in Figure 12.
Figure 12. Estimation errors.
In order to observe the performance of the models with the same aerodynamic characteristics and different weights, the system weight was increased to 10 kg and a flight was carried out from an altitude of 1000 m. Control inputs produced during the flight are shown in Figure 13.
Figure 13. Control inputs for the 10 kg system.
The model performances for a 120 s flight were compared using error metrics and computational cost. As can be seen in Table 7, increasing the number of hidden layers and neurons increased the computation time. Considering the model performances, model 4 exhibited the best performance.
Table 7. Model performances for increased weight.
The estimation errors for increased weight are shown in Figure 14. The increase in the number of hidden layers increased the overshoot values, although it did not result in any significant changes in model performance. Finally, the performance of the developed models was examined in a system with different aerodynamic properties. An aerial delivery system called ALEX was used to determine the necessary parameters (Table 8) [3].
Figure 14. Estimation errors for increased weight.
Table 8. Parameters of ALEX [3].
The control inputs used in this flight, starting from a 200 m altitude, are shown in Figure 15.
Figure 15. Control inputs for ALEX.
The performances of the models are shown in Table 9 and Figure 16. Model 1, which was found to have the best performance, consisted of a single hidden layer and 10 neurons.
Table 9. Model performances for ALEX.
Figure 16. Estimation errors for ALEX.
In order to determine the limits of the developed models, the effects of weight and aerodynamic coefficients on the models were observed. The aerodynamic coefficients that determined the effects of control inputs on force and moment were chosen. Error rates were observed by changing the selected parameters by ±10%. RMSE was used as the error metric. As can be seen in Table 10, model 4, which consisted of a single hidden layer and five neurons, demonstrated the best performance.
Table 10. Model performances according to changes of 10%.
Considering the maximum error of five degrees, the limits of the models could be determined approximately, according to the parameters, via interpolation. The results are shown in Table 11.
Table 11. Limits of models.

5. Conclusions

In this study, a simulation environment was designed for a parachute landing system in the Gazebo/ROS environment. By implementing an aerial delivery system in the PX4 autopilot software, the necessary infrastructure for a software-in-the-loop system was created. Flights were performed in the simulation environment and flight data were collected. Using these data to identify the system, a NARX network model was trained to estimate the system dynamics. During the training process, different training algorithms (LM, BR, and SCG) were used and the effects of the numbers of hidden layers and neurons were observed. The effects of weight and aerodynamic coefficients on the models were also examined and the model limits were determined. The model consisting of a single hidden layer and five neurons outperformed the other models evaluated. As the changes in the model parameters grow, the best-performing model may change; therefore, model errors can be reduced by means of online training methods.
In future studies, pre-trained models will be updated using online training methods. Furthermore, the trained model will be tested using real flight data. After the model is verified, controller studies will be carried out and autonomous landing of the landing system will be carried out at the desired target location.

Author Contributions

Conceptualization, K.G. and A.T.Ş.; methodology, K.G.; software, K.G.; validation, K.G. and A.T.Ş.; investigation, K.G.; writing—original draft preparation, K.G.; writing—review and editing, A.T.Ş. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Yakimenko, O.A. Precision Aerial Delivery Systems: Modeling, Dynamics, and Control; American Institute of Aeronautics and Astronautics: Monterey, CA, USA, 2015. [Google Scholar]
  2. Hamel, P.G.; Jategaonkar, R.V. Evolution of flight vehicle system identification. J. Aircr. 1996, 33, 9–28. [Google Scholar] [CrossRef]
  3. Jann, T.; Strickert, G. System Identification of a Parafoil-Load Vehicle-Lessons Learned. In Proceedings of the 18th AIAA Aerodynamic Decelerator Systems Technology Conference and Seminar, Munich, Germany, 23–26 May 2005. [Google Scholar]
  4. Grauer, J.A. Real-Time Parameter Estimation using Output Error. In Proceedings of the AIAA Atmospheric Flight Mechanics Conference, National Harbor, MD, USA, 13–17 January 2014. [Google Scholar]
  5. Jann, T. Aerodynamic model identification and GNC design for the parafoil-load system ALEX. In Proceedings of the 16th AIAA Aerodynamic Decelerator Systems Technology Conference and Seminar, Boston, MA, USA, 21–24 May 2001. [Google Scholar]
  6. Jaiswal, R.; Prakash, O.; Chaturvedi, S.K. A Preliminary Study of Parameter Estimation for Fixed Wing Aircraft and High Endurability Parafoil Aerial Vehicle. INCAS Bull. 2020, 12, 95–109. [Google Scholar] [CrossRef]
  7. Heimes, F.; Zalesski, G.; Land, W.; Oshima, M. Traditional and evolved dynamic neural networks for aircraft simulation. In Proceedings of the 1997 IEEE International Conference on Systems, Man, and Cybernetics. Computational Cybernetics and Simulation, Orlando, FL, USA, 12–15 October 1997. [Google Scholar]
  8. Saghafi, F.; Heravi, B.M. Identification of Aircraft Dynamics Using Neural Network Simultaneous Optimization Algorithm. In Proceedings of the 2005 European Modeling and Simulation Conference (ESM), Porto, Portugal, 24–26 October 2005. [Google Scholar]
  9. Harris, J.; Arthurs, F.; Henrickson, J.V.; Valasek, J. Aircraft system identification using artificial neural networks with flight test data. In Proceedings of the 2016 International Conference on Unmanned Aircraft Systems (ICUAS), Arlington, VA, USA, 7–10 June 2016; pp. 679–688. [Google Scholar]
  10. Narendra, K.; Parthasarathy, K. Identification and control of dynamical systems using neural networks. IEEE Trans. Neural Netw. 1990, 1, 4–27. [Google Scholar] [CrossRef] [PubMed]
  11. Phan, M.Q.; Juang, J.N.; Hyland, D.C. On Neural Networks in Identification and Control of Dynamic Systems. In Wave Motion, Intelligent Structures and Nonlinear Mechanics; National Aeronautics and Space Administration: Washington, DC, USA, 1995; pp. 194–225. [Google Scholar] [CrossRef]
  12. Valmorbida, G.; Wen-Chi, L.; Mora-Camino, F. A neural approach for fast simulation of flight mechanics. In Proceedings of the 38th Annual Simulation Symposium (ANSS’05), San Diego, CA, USA, 4–6 April 2005. [Google Scholar]
  13. Hu, Z.; Balakrishnan, S.N. Parameter Estimation in Nonlinear Systems Using Hopfield Neural Networks. J. Aircr. 2005, 42, 41–53. [Google Scholar] [CrossRef]
  14. Linse, D.J.; Stengel, R.F. Identification of aerodynamic coefficients using computational neural networks. J. Guid. Control. Dyn. 1993, 16, 1018–1025. [Google Scholar] [CrossRef]
  15. Puttige, V.R.; Anavatti, S.G. Real-Time Neural Network Based Online Identification Technique for a UAV Platform. In Proceedings of the 2006 International Conference on Computation Intelligence for Modelling Control and Automation and International Conference on Intelligent Agents Web Technologies and International Commerce (CIMCA’06), Washington, DC, USA, 28 November–1 December 2006; p. 92. [Google Scholar] [CrossRef]
  16. Kamasaldan, S.; Ghandakly, A. A neural network parallel adaptive controller for fighter aircraft pitch-rate tracking. IEEE Trans. Instrum. Meas. 2011, 60, 258–267. [Google Scholar] [CrossRef]
  17. Savran, A.; Tasaltin, R.; Becerikli, Y. Intelligent adaptive nonlinear flight control for a high performance aircraft with neural networks. ISA Trans. 2006, 45, 225–247. [Google Scholar] [CrossRef]
  18. Hess, R. On the use of back propagation with feed-forward neural networks for the aerodynamic estimation problem. In Proceedings of the Flight Simulation and Technologies, Guidance, Navigation, and Control and Co-Located Conferences, Monterey, CA, USA, 9–11 August 1993. [Google Scholar]
  19. Raol, J.; Jategaonkar, R. Aircraft parameter estimation using recurrent neural networks: A critical appraisal. In Proceedings of the 20th Atmospheric Flight Mechanics Conference, Guidance, Navigation, and Control and Co-located Conferences, Baltimore, MD, USA, 7–10 August 1995. [Google Scholar]
  20. Roudbari, A.; Saghafi, F. Intelligent modeling and identification of aircraft nonlinear flight dynamics. Chin. J. Aeronaut. 2014, 27, 759–771. [Google Scholar] [CrossRef]
  21. Bagherzadeh, S.A. Nonlinear aircraft system identification using artificial neural networks enhanced by empirical mode decomposition. Aerosp. Sci. Technol. 2018, 75, 155–171. [Google Scholar] [CrossRef]
  22. Goyal, P.; Benner, P. LQResNet: A Deep Neural Network Architecture for Learning Dynamic Processes. arXiv 2021, arXiv:2103.02249. [Google Scholar]
  23. Chen, Z.; Xiu, D. On Generalized Residual Network for Deep Learning of Unknown Dynamical Systems. J. Comput. Phys. 2021, 438, 110362. [Google Scholar] [CrossRef]
  24. PX4 Development. Available online: https://docs.px4.io/master/en/development/development.html (accessed on 1 April 2022).
  25. Xiao, K.; Tan, S.; Wang, G.; An, X.; Wang, X.; Wang, X. XTDrone: A Custom-izable Multi-Rotor UAVs Simulation Platform. In Proceedings of the 4th International Conference on Robotics and Automation Sciences (ICRAS), Chengdu, China, 12–14 June 2020. [Google Scholar]
  26. Menezes, J.M.P., Jr.; Barreto, G.A. Long-term time series prediction with the NARX network: An empirical evaluation. Neurocomputing 2008, 71, 3335–3343. [Google Scholar] [CrossRef]
  27. Sezginer, K.; Kasnakoğlu, C. Autonomous navigation of an aircraft using a narx recurrent neural network. In Proceedings of the 2019 11th International Conference on Electrical and Electronics Engineering (ELECO), Bursa, Turkey, 28–30 November 2019. [Google Scholar]
  28. Puttige, V.R. Neural Network Based Adaptive Control for Autonomous Flight of Fixed Wing Unmanned Aerial Vehicles. Ph.D. Thesis, University of New South Wales, Sydney, NSW, Australia, 2008. [Google Scholar]
  29. Tooba, H.; Kadri, M.B. Comparison of different techniques for experimental modeling of a Quadcopter. In Proceedings of the 2019 Second International Conference on Latest Trends in Electrical Engineering and Computing Technologies (INTELLECT), Karachi, Pakistan, 13–14 November 2019. [Google Scholar]
  30. Avdeev, A.; Assaleh, K.; Jaradat, M.A. Quadrotor Attitude Dynamics Identification Based on Nonlinear Autoregressive Neural Network with Exogenous Inputs. Appl. Artif. Intell. 2021, 35, 265–289. [Google Scholar] [CrossRef]
  31. Werbos, P.J. Beyond Regression: New Tools for Prediction and Analysis in The Behavioral Sciences. Ph.D. Thesis, Harvard University, Boston, MA, USA, 1974. [Google Scholar]
  32. Lera, G.; Pinzolas, M. Neighborhood based Levenberg-Marquardt algorithm for neural network training. IEEE Trans. Neural Netw. 2002, 13, 1200–1203. [Google Scholar] [CrossRef] [PubMed]
  33. Foresee, F.D.; Hagan, M.T. Gauss-Newton approximation to Bayesian learning. In Proceedings of the International Joint Conference on Neural Networks, Lausanne, Switzerland, 8–10 June 1997. [Google Scholar]
  34. Saipraneeth, G.; Jyotindra, N.; Roger, S.; Sachin, G. A Bayesian regularization-backpropagation neural network model for peeling computations. J. Adhes. 2020. [Google Scholar] [CrossRef]
  35. Moré, J.J. The Levenberg-Marquardt algorithm: Implementation and theory. Numer. Anal. Lect. Notes Math. 1978, 630, 105–116. [Google Scholar]
  36. MacKay, D.J.C. A practical Bayesian framework for backpropagation networks. Neural Comput. 1992, 4, 448–472. [Google Scholar] [CrossRef]
  37. Møller, M.F. A scaled conjugate gradient algorithm for fast supervised learning. Neural Netw. 1993, 6, 525–533. [Google Scholar] [CrossRef]
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
