Experiments with Neural Networks in the Identification and Control of a Magnetic Levitation System Using a Low-Cost Platform

In this article, we designed and implemented neural controllers for a nonlinear and unstable magnetic levitation system composed of an electromagnet and a magnetic disk. The objective was to evaluate the implementation and performance of neural control algorithms on low-cost hardware. In a first phase, we designed two classical controllers with the objective of providing the training data for the neural controllers. Afterwards, we identified several neural models of the levitation system using Nonlinear AutoRegressive eXogenous (NARX)-type neural networks, which were used to emulate the forward dynamics of the system. Finally, we designed and implemented three neural control structures for the levitation system: the inverse controller, the internal model controller, and the model reference controller. The neural controllers were tested on a low-cost Arduino control platform through MATLAB/Simulink. The experimental results demonstrated the good performance of the neural controllers.


Introduction
Control has long played a fundamental role in society and is currently present in most engineered systems through several control techniques. Among the most used algorithms, the Proportional-Integral-Derivative (PID) controller stands out as part of the classical control methods [1,2]. However, with scientific and technological developments in the field of Artificial Intelligence (AI), there are now techniques known as intelligent control, which include methods such as fuzzy control, evolutionary control, and neural control [3][4][5][6]. Neural networks are now widely studied and used in areas such as image processing [7,8], complex optimization problems [9], and systems control [10], among others.
By applying AI techniques such as neural networks to a control system, the resulting algorithms are capable of solving problems that classical control cannot, owing to the universal approximation capabilities of neural networks. In control systems, the objective is to find an appropriate feedback function that maps the measured outputs to the control inputs. Several architectures employing neural networks can be used for this purpose [11,12]. This type of control can be applied to many kinds of systems; however, its performance stands out compared with other techniques in the control of nonlinear and unstable systems [13,14].
Magnetic levitation consists of the suspension of an object in the air using magnetic forces of attraction or repulsion, thus counteracting the gravitational force applied to that object [15,16]. Nowadays, it is increasingly used in passenger and cargo transportation because it drastically reduces the mechanical losses that conventional systems entail [17]. However, systems that employ magnetic levitation often require a robust control algorithm, as they are normally unstable and nonlinear [18,19]. Magnetic suspension using a controlled DC electromagnet is the most widely used method, from advanced land transport to contactless bearings for high and low speeds. This method has been studied for over a century, but it is constantly improving due to technological advances in areas such as power electronics, control theory, and mechanics [15,17].
A similar magnetic levitation system is studied in [11], where the authors use a Nonlinear AutoRegressive Moving Average (NARMA) neural network to perform the system identification, with a neural network that learns the system dynamics and thus emulates its response to the desired input. The authors then use a neural network predictive controller to control the levitation system. Studies on the same system are conducted in [20], where a model reference controller is trained to control the system. In Reference [21], the authors propose an RBF-ARX model-based MPC strategy for the ball position control of a fast-response magnetic levitation system. The applied strategy worked like a time-varying, locally linear ARX model-based linear MPC process. It was illustrated that the RBF-ARX-MPC is effective and feasible in modeling and controlling the fast-responding, strongly nonlinear, and open-loop unstable system. Work [22] presents an application of magnetic levitation to the stabilization and tracking control of a single degree-of-freedom mass-spring-damper vibrating mechanical system. An output feedback control scheme based on differential flatness is proposed for global stabilization and asymptotic tracking of a desired reference position trajectory. Experimental and simulation results showed the efficient and satisfactory performance of the tracking control scheme and acceptable signal estimation. Other similar mechanisms, such as air levitation systems, are also considered in various works. In Reference [23], the control of an air levitation system with limited sensing is studied. It uses a sliding mode controller, modified to rely only on a measurement of the position, which is the only one available. The experimental results show the feasibility of the controller and assess its performance on a real system. In Reference [24], a low-cost air levitation system to be used in both a virtual and a remote version is proposed. It provides a complete open-source and open-hardware remote lab solution that is low-cost and easy to replicate.
This work applies several neural control schemes to a challenging system, the nonlinear and unstable magnetic levitation system, to test the performance of neural controllers on low-cost hardware. Nowadays, this type of hardware is extensively used both in academia and in scientific projects, and its validation for incorporating more recent technologies such as neural networks is needed. The main differentiation of this contribution with respect to former control methods is that, in many of those research works, the system used repulsion to oppose the gravitational acceleration and the suspended magnet was constrained so that it could only move in the vertical direction. In this article, the system uses attraction to counteract gravity and the suspended magnet is not constrained in any way, so it is more susceptible to perturbations. Moreover, in this work, we apply several neural control structures and compare the results among them to verify the feasibility of each control structure for the control of the levitation system. In addition, some linear classical controllers are developed to provide the training data for the neural controllers, as well as to highlight the better performance of the neural controllers. In fact, the neural controllers can control the levitation system over a wider range of operation, in contrast with the linear controllers, which only operate on small variations around the linearization point. Furthermore, the proposed neural controllers are implemented on low-cost hardware, which can be easily built and used for on-site laboratory experiments or take-home laboratories.
Here, we describe the whole process of implementing a neural controller applied to a magnetic levitation system, as well as some classical controllers for comparison of the results. Section 2 shows the details of the hardware setup, whereas Section 3 presents the modeling and identification of the levitation system. Section 4 describes the design of the two classical controllers, with the objective of providing the training data for the neural controllers. Section 5 describes the implementation and testing of three neural controllers on a real levitation system. Finally, Section 6 draws the main conclusions and addresses perspectives for future developments of the presented work.

System Architecture
In this system, a microcontroller sends a Pulse Width Modulation (PWM) signal to a driver circuit that converts it into a suitable signal to drive the electromagnetic circuit. The position of the magnet and the current flowing in the electromagnet are obtained by using the 10-bit Analog-to-Digital Converter (ADC) of the ATmega2560 microcontroller to convert the signals sent by the sensors. The position and current sensors are linear Hall effect sensors, the A1324 and the ACS711 from Allegro Microsystems, respectively [25,26]. The system is monitored via a USB serial connection to the microcontroller using Simulink together with the Simulink Support Package for Arduino Hardware Add-on [27]. Figure 1 illustrates the block diagram of the general system architecture. The driver circuit is represented in Figure 2; it contains a diode (D) for the currents generated by the inverse polarities due to the inductive nature of the system, a pull-down resistor (R_pd = 4.7 kΩ) to prevent the gate of the Metal-Oxide-Semiconductor Field-Effect Transistor (MOSFET) (Q) from floating, and a capacitor (C = 1.0 mF) to ensure that the 5 V supplied by the voltage regulator is stable. It is a simple driver that effectively supplies the electromagnetic system with the current required to levitate the magnetic disc. In Figure 2, the orange rectangle represents the electromagnet, where R_e and L_e are the electromagnet resistance and inductance, respectively. As shown in Figure 3, the magnetic disc stays below the electromagnet, and the position sensor sits directly under the electromagnet; as such, it is influenced by the magnetic fields of both the magnetic disc and the electromagnet. The parameter values of the system are listed in Table 1, where the weight, diameter, and thickness correspond to the disc. A photo of the experimental levitation system is presented in Figure 4. A passive heat-sink was used so that the voltage regulator was able to supply the current needed by the electromagnet.
This system used a Funduino Mega 2560 board, which is a copy of the Arduino Mega 2560. The board has an 8-bit ATmega2560 microcontroller from Microchip. This microcontroller was chosen due to the available Flash memory and Static Random-Access Memory (SRAM), as shown in the specifications of Table 2. This is one of the most expensive parts of the control system, costing about 10 euros. With the Simulink Add-on, the code is generated from the Simulink model and deployed on the microcontroller. The sampling period for the controllers was set to T = 0.003 s, which was the maximum value found for obtaining steady, stable levitation using the root-locus method described in Section 4 for the design of the classical linear controllers. The same sampling interval was adopted in the design of the neural controllers of Section 5.

System Analysis
The magnetic field is generated by the electromagnet in order to levitate the neodymium disc, as shown in Figure 5. The system is controlled through the voltage supplied to the terminals of the electrical system, which in turn generates the current that produces the magnetic field. As shown in the figure, v(t) is the applied voltage, i(t) the generated current, e_z(t) and e_i(t) the voltages at the terminals of the position and current sensors, respectively, z(t) the distance from the magnetic disk to the sensor, F_m the magnetic force generated by the magnetic field of the electromagnet, and F_g the force caused by gravity.

Electromagnetic System Modeling
The equation that describes the resultant force on the neodymium disc is given by [28,30]: where m is the mass of the levitating object, g is the acceleration due to gravity, and k is a magnetic constant related to the number of turns and the cross-sectional area of the electromagnet.
As shown, the resultant force is the sum of the gravitational pull on the disc and the magnetic force. The force applied on the disc by the magnetic field generated by the electromagnet depends on the current that generates the magnetic field and on the distance of the disc to the electromagnet [28,30].
The equation of the electromagnet circuit is: where R_e and L_e are the resistance and inductance of the electromagnet, respectively. In order to obtain stable levitation of the neodymium disc, the magnetic force of attraction needs to equal the gravitational pull so that the resultant force on the disc is null. Thus, at the equilibrium point, the system obeys the following conditions: Therefore, at the equilibrium point, the magnetic constant is given by: As can be seen, the above equations represent a nonlinear system, and to design a linear controller the system must be linearized around an equilibrium point [31], (i_eq, z_eq, v_eq), in which ∆i(t), ∆z(t), and ∆v(t) denote small variations around it, such that: Applying the Taylor series expansion to (1) and (2) around (i_eq, z_eq, v_eq), we obtain the linearized equations: Replacing (6) into (10), we get: With Equations (11) and (12) linearized, the Laplace transform is used to obtain the transfer function between the supplied voltage and the distance of the disc from the electromagnet, as: Note that (13) is an unstable system, since one of its poles lies in the right half of the s-plane.
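For reference, the standard textbook form of this electromechanical model, written with the symbols defined above (a sketch of the usual equations, which may differ in sign convention from the paper's numbered ones), is:

```latex
% Motion of the disc: gravity versus magnetic attraction,
% with z the gap between the disc and the electromagnet
m \, \ddot{z}(t) = m g - k \, \frac{i^{2}(t)}{z^{2}(t)}

% Electromagnet circuit equation
v(t) = R_e \, i(t) + L_e \, \frac{di(t)}{dt}

% At equilibrium (\ddot{z} = 0, i = i_{eq}, z = z_{eq}),
% the magnetic constant follows from m g = k \, i_{eq}^{2} / z_{eq}^{2}:
k = \frac{m g \, z_{eq}^{2}}{i_{eq}^{2}}
```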

Sensors Parameterization
The Hall effect sensors produce a voltage related to the magnetic field in which they are immersed. The voltages generated by both sensors have a portion related to the quiescent voltage they produce [25,26] and a portion related to the influence of the current; in the case of the position sensor, there is also a portion related to the distance of the magnetic disc.
The equations of the output voltages for the current and position Hall-effect sensors are, respectively, of the form: where m, b, α, β, and γ are the constants to be determined.
To obtain the sensor constants, first, the quiescent voltages of the current and position sensors were measured as b = 2.5220 V and α = 2.4829 V, respectively. Afterwards, the current and the distance were increased in small steps in order to observe the variation of the respective output voltages, as shown in Figures 6 and 7. In both figures, the quiescent voltages were subtracted, so as to facilitate the determination of the constants through regression. With the voltage data points of Figures 6 and 7, we used the Trendline option of Microsoft Excel to obtain the linear Equations (16) and (17), which express the influence of the current on the current and position sensors, respectively: where m = 0.1447 V/A and γ = 0.3339 V/A. The expression for the influence of the distance on the position sensor was obtained as: where β = 4.4941 × 10⁻⁶ V·m³. The current sensor is not influenced by the movement of the disc. All regressions obtained a good R², namely 0.9988 for e_ii, 0.9995 for e_zi, and 0.9961 for e_zz.
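The regression step can be reproduced outside Excel; a minimal sketch using numpy's least-squares polynomial fit, with synthetic data standing in for the measured points of Figures 6 and 7 (the constants below are the values quoted in the text):

```python
import numpy as np

# Synthetic stand-ins for the measured data points of Figures 6 and 7
# (quiescent voltages already subtracted, as in the paper).
current = np.linspace(0.0, 0.5, 20)       # A
e_ii = 0.1447 * current                   # current-sensor voltage vs. current
e_zi = 0.3339 * current                   # position-sensor voltage vs. current

# Linear fits (the equivalent of Excel's Trendline): slopes m and gamma
m_fit = np.polyfit(current, e_ii, 1)[0]
gamma_fit = np.polyfit(current, e_zi, 1)[0]

# The distance influence on the position sensor follows a beta / z^3 law
# (beta in V*m^3), so beta is the slope of e_zz regressed against 1/z^3.
z = np.linspace(0.01, 0.03, 20)           # m
beta_true = 4.4941e-6                     # V*m^3, value from the paper
e_zz = beta_true / z**3
beta_fit = np.polyfit(1.0 / z**3, e_zz, 1)[0]
```

With real, noisy measurements the fitted slopes would only approximate these values, which is why the paper also reports the R² of each regression.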
Replacing the determined parameter values in (14) and (15), we arrive at the following equations for the output voltage of each sensor: Because this system uses the 10-bit ADC of the ATmega2560, Equations (19) and (20) are converted to Analog-to-Digital Units (ADU), where 5 V corresponds to 1023 ADU, yielding: where, in order to facilitate the calculation, the distance was also converted from m to cm, and as such β is now 1090 ADU·cm³. Table 3 lists the converted constants corresponding to both sensors. With the sensor constants determined, we used the Solver tool of Microsoft Excel to optimize the constants. To obtain Table 4, we placed the magnetic disc at different distances and applied enough voltage to the electromagnet to pull the disc up. The values of ADC_i and ADC_z were measured at the moment the magnetic disc started to rise. Then, we calculated the current (i_calc) corresponding to ADC_i using Equation (21), and with those values, we calculated the distance (z_calc) from Equation (22). We then calculated the sum of the absolute differences between the real distance of the disc (z) and z_calc. The Solver tool was used to adjust the sensor constants so as to minimize that sum. Table 4 shows the distances calculated after optimizing the values of the constants (z_optim).
Table 5 shows the optimized parameter values corresponding to both sensors. Rearranging (21) and (22) and substituting the values of Table 5, we obtain the expressions for the computation of the current i(t) and distance z(t): Lastly, some tests were performed to determine the equilibrium point, defined as the current at which the magnetic disc almost lifts off at 2 cm from the electromagnet. The measured value was 0.27 A, and thus (i_eq, z_eq) = (0.27 A, 2 cm). Substituting the system parameters in (13), the resulting transfer function is:
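The structure of this conversion follows directly from the sensor models: the current is recovered by inverting the linear current-sensor law, and the distance by inverting the cubic-law position reading after removing the current's influence. A sketch using the unoptimized constants quoted in the text, converted to ADU (the paper's final expressions use the Solver-optimized values instead):

```python
# Sensor constants from the text, converted from volts to ADU (5 V = 1023 ADU)
V2ADU = 1023 / 5
b     = 2.5220 * V2ADU   # current-sensor quiescent level  [ADU]
m     = 0.1447 * V2ADU   # current-sensor sensitivity      [ADU/A]
alpha = 2.4829 * V2ADU   # position-sensor quiescent level [ADU]
gamma = 0.3339 * V2ADU   # current influence on position   [ADU/A]
beta  = 1090.0           # distance influence              [ADU*cm^3]

def current_from_adc(adc_i):
    """Invert the linear current-sensor model: ADC_i = m*i + b."""
    return (adc_i - b) / m

def distance_from_adc(adc_z, i):
    """Invert the position-sensor model: ADC_z = alpha + gamma*i + beta/z^3."""
    return (beta / (adc_z - alpha - gamma * i)) ** (1.0 / 3.0)
```

The current must be computed first, since its influence on the position reading has to be subtracted before the distance can be extracted.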

Classical Controllers
Due to the inherent instability of the system, it is not possible to apply a random signal directly to the system in order to obtain training data. Therefore, linear controllers were designed and tested in order to obtain the training data for the neural networks.
In order to simplify the design of the controllers, we used the dominant-pole approximation [32,33], which states that the poles closest to the imaginary axis are considered the dominant poles because they characterize the transient response of the system. In this particular system, because the ratio between the pole furthest from the imaginary axis (s = −200) and the dominant pole (s = −44.29) is greater than four, the transfer function (25) can be rewritten, while maintaining the static gain of the system, as: Using a zero-order hold (ZOH) with a sampling period of T = 0.003 s, the discrete system has the following transfer function: The first controller was a lead compensator, designed to improve the stability of the closed-loop system [32,33]. The root-locus method [32] was applied for a time response with less than 5% overshoot and a rise time of ten times the sampling period (t_r = 0.03 s), yielding: From the lead compensator we designed a lead-lag compensator to reduce the steady-state error [32,33]. The root-locus method was used again and, in order to maintain the transient response provided by the lead compensator, we designed the lag portion of the controller with a zero and a pole extremely near z = 1 [34]. The zero obtained for the lag portion was z = 0.9979 and the pole was z = 1, thus providing an integrative action to the lead-lag controller: The simulated system response is depicted in Figure 8. We observe that both controllers have the same transient response, but the integrative action of the lead-lag compensator eliminates the steady-state error. In Figure 10, we present the experimental responses of the system to a step input for the lead and lead-lag compensators. As shown, both controllers can maintain a stable levitation; however, the lead-lag compensator generates an output response around the desired step reference, thus achieving an improved steady-state error.
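The dominant-pole reduction and its ZOH discretization can be checked numerically: under a zero-order hold, each continuous pole s maps to the discrete pole z = e^{sT}. A small sketch using the pole locations quoted above (the mirrored location of the unstable pole is our assumption, following the symmetric mechanical poles of the linearized maglev model):

```python
import math

T = 0.003          # sampling period [s]

# Continuous-time poles quoted in the text
p_fast = -200.0    # furthest pole, dropped by the dominant-pole approximation
p_dom  = -44.29    # dominant stable pole

# Ratio that justifies the approximation (the text requires at least four)
ratio = abs(p_fast) / abs(p_dom)

# ZOH pole mapping: z = exp(s*T)
z_dom      = math.exp(p_dom * T)    # stable discrete pole, inside unit circle
z_unstable = math.exp(-p_dom * T)   # assumed mirrored unstable pole, outside
```

The unstable continuous pole maps outside the unit circle, which is why the discrete transfer function (27) remains unstable and a stabilizing compensator is needed.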

Neural Control Strategies
In order to obtain good performance from neural networks, a good training data set is required. With the designed classical controllers, an appropriate reference signal can be developed and applied to the system. The lead-lag compensator was the chosen controller, as it provides the best response, as demonstrated in the previous section.
The reference signal is a random signal with varying amplitudes at different frequencies, in order to obtain the richest possible data about the system within the limits of the classical lead-lag controller. Figure 11 shows the reference signal and the system response, whereas Figure 12 presents the corresponding control signal.

System Identification
The system identification was done using Nonlinear AutoRegressive eXogenous (NARX) neural networks of the type [12]: where u(n) is the input control signal and y(n) the output system response. Several two-layer feed-forward networks with hyperbolic tangent hidden neurons and one linear output neuron were trained. Figure 13 shows one example of a neural network using 10 past values of the input and output signals and 15 neurons in the first layer, denoted 10d15n. We used the Levenberg-Marquardt backpropagation algorithm to minimize the Mean Squared Error (MSE) between the output of the network and the system response [20]. Table 6 shows the MSE obtained for each network after training. After some trials, the neural network that provided the best MSE was composed of 10 neurons in the first layer, using 10 past values of the input and output signals (10d10n). Table 7 lists the MSE of the best performing networks for the step, sinusoidal, and sawtooth reference signals. The Total column represents the sum of the other three columns, where we confirm that the 10d10n network has the best overall performance. In Figure 14, we illustrate the responses of the system with the 10d10n identification network when controlled with the lead-lag compensator, for the step, sinusoidal, and sawtooth reference signals. A time slot of only 5 s is used for the step response to provide a clear view of the transient behavior of the system; the other input references required more time to show multiple periods of the signals. We verify that the network follows the system response very well without having learned the noise and perturbations of the system. Note that this identification network will be used as a neural model of the levitation system in the simulation of the neural control systems reported in the next subsections.
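The NARX identification scheme can be sketched in a few lines: build a regressor of past inputs and outputs and train a two-layer tanh network to predict the next output. A minimal numpy sketch, using plain gradient descent instead of Levenberg-Marquardt and a toy plant standing in for the levitation system:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy plant standing in for the levitation system:
# y(n) = 0.6*y(n-1) - 0.1*y(n-2) + 0.5*tanh(u(n-1))
N = 2000
u = rng.uniform(-1, 1, N)
y = np.zeros(N)
for n in range(2, N):
    y[n] = 0.6 * y[n-1] - 0.1 * y[n-2] + 0.5 * np.tanh(u[n-1])

# NARX regressor: d past values of u and y predict y(n)
d = 2
X = np.array([np.concatenate([u[n-d:n], y[n-d:n]]) for n in range(d, N)])
t = y[d:]

# Two-layer network: tanh hidden layer, one linear output (as in the paper)
H = 10
W1 = rng.normal(0, 0.5, (X.shape[1], H)); b1 = np.zeros(H)
W2 = rng.normal(0, 0.5, H);               b2 = 0.0

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

mse0 = np.mean((forward(X)[1] - t) ** 2)   # error before training

lr = 0.05
for epoch in range(500):
    h, yhat = forward(X)
    err = yhat - t
    gW2 = h.T @ err / len(t); gb2 = err.mean()
    dh = np.outer(err, W2) * (1 - h**2)    # backprop through tanh
    gW1 = X.T @ dh / len(t);  gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

mse = np.mean((forward(X)[1] - t) ** 2)    # error after training
```

The paper's networks differ in scale (10 delays, Levenberg-Marquardt training), but the regressor construction and the tanh-plus-linear architecture are the same.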

Inverse Neural Controller
An inverse model controller learns the inverse dynamics of the system, receiving as inputs the reference signal r(n), past samples of the control signal u(n), and the system output y(n), of the form: The block diagram of Figure 15 shows the configuration of an inverse model controller, where the controller receives as inputs the reference r(n), the system output y(n), and the past value of the control signal u(n − 1). In order to obtain a more robust controller with better performance, more past values of the output and control signals can be used. In a first attempt, two-layer networks were created and trained, but their performance was poor. Thus, we trained networks with two hidden layers, the first with one hyperbolic tangent neuron and the second with sigmoid neurons, followed by one Rectified Linear Unit (ReLU) output neuron. The ReLU function in the last layer is used because the controller cannot provide a negative value to the system. Levenberg-Marquardt backpropagation was again used to train the networks, and the training data were normalized to the interval [0, 1].
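Assembling training pairs for an inverse model is mostly indexing: the network's target is the control u(n) that was actually applied, and its inputs are the output one step ahead (playing the role of the reference) together with past outputs. A sketch with a linear least-squares fit standing in for the neural network, on a toy linear plant where the exact inverse is known:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy plant: y(n+1) = a*y(n) + b*u(n)
# Exact inverse: u(n) = (1/b)*y(n+1) - (a/b)*y(n)
a, b = 0.8, 0.5
N = 500
u = rng.uniform(0, 1, N)
y = np.zeros(N + 1)
for n in range(N):
    y[n + 1] = a * y[n] + b * u[n]

# Inverse-model training pairs: inputs [y(n+1), y(n)], target u(n).
# At control time, y(n+1) is replaced by the reference r(n).
X = np.column_stack([y[1:N + 1], y[:N]])
w, *_ = np.linalg.lstsq(X, u, rcond=None)
# Expected coefficients: [1/b, -a/b] = [2.0, -1.6]
```

For the nonlinear levitation system the linear fit is replaced by the two-hidden-layer network described above, but the input/target bookkeeping is identical.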
After training and testing different networks, we list in Table 8 the MSE of the best performing networks for the step, sinusoidal, and sawtooth reference signals. The chosen network was the one with the best performance, the 1th5s5d network, which has one hyperbolic tangent neuron in the first layer (1th), five sigmoid neurons in the second layer (5s), and five past values of the control and system output signals (5d). Figure 16 illustrates this network, where the input consists of 11 sample values, corresponding to 5 past values of the control and system output signals, plus the reference value. Bigger networks with more neurons and more past values did not achieve the same performance because they started to overfit and lose their generalization capabilities. We verify that the inverse model network can stabilize the system; however, it presents a steady-state error for all reference signals.
In order to further improve the system response, we can insert an adaptive block whose output is added to the output of the inverse model network, defined by the expression: where β is an adjustable gain between 0 and 1 [35]. Note that, because the system has a negative gain, the tuning parameter β is also negative. It is expected that the addition of this scheme will eliminate the steady-state error.
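One common realization of such a block, given here as an assumption rather than the paper's exact expression, accumulates the tracking error scaled by β, which provides the integral-like action that removes the steady-state error:

```python
class AdaptiveBlock:
    """Integral-type correction added to the inverse-controller output.

    This is an assumed realization of the adaptive block: the error between
    reference and output is accumulated with gain beta, driving the
    steady-state error to zero. beta is negative here because the plant
    gain is negative.
    """
    def __init__(self, beta):
        self.beta = beta
        self.acc = 0.0

    def step(self, r, y):
        self.acc += self.beta * (r - y)
        return self.acc

# Demo: static plant with negative gain plus an offset, driven by a fixed
# (imperfect) inverse-model control value, so a steady-state error exists.
adapt = AdaptiveBlock(beta=-0.2)
r, u_inv = 1.0, 0.1
y = 0.0
for _ in range(200):
    u = u_inv + adapt.step(r, y)
    y = -2.0 * u + 0.3       # plant: negative gain, constant disturbance
```

After the loop the output has converged to the reference despite the wrong fixed inverse-model value, which is the role the adaptive block plays in Figure 18.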
When implementing the adaptive block and testing the system, we obtained the responses in Figure 18. It is clear that the steady-state error introduced by the inverse model network was eliminated with the addition of the adaptive block. For comparison of the controllers, we compute the performance indexes Integral Square Error (ISE), Integral Absolute Error (IAE), and Integral Time Absolute Error (ITAE), as: where e(t) is the error between the reference signal and the actual position response of the levitation system. We use T_sim = 40 s, which proved to be enough to compare the responses of the different controllers. Table 9 shows the values of ISE, IAE, and ITAE for the inverse model network with and without the adaptive block. As expected, due to the steady-state error, the inverse model without the adaptive block has the worst performance indexes. Figure 19 illustrates the responses of the system when controlled with the lead-lag compensator and with the inverse model network with the adaptive block. We clearly see the better response of the inverse controller in tracking the reference signals, also shown by the much lower values of the ISE, IAE, and ITAE indexes presented in Table 10.
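These indexes have standard definitions (ISE = ∫e² dt, IAE = ∫|e| dt, ITAE = ∫t|e| dt over [0, T_sim]); on sampled data they reduce to weighted sums of the error sequence:

```python
import numpy as np

def performance_indexes(e, T):
    """ISE, IAE and ITAE of a sampled error signal e with sampling period T,
    approximating the integrals by rectangular sums."""
    t = np.arange(len(e)) * T
    ise  = np.sum(e**2) * T
    iae  = np.sum(np.abs(e)) * T
    itae = np.sum(t * np.abs(e)) * T
    return ise, iae, itae
```

The time weighting in ITAE is what makes it especially sensitive to errors that persist late in the run, such as a residual steady-state error.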

Internal Model Controller
With both the identification network and the inverse controller network trained and tested, we proceed to the development of an internal model controller. This controller reduces the existing noise and perturbations in the system's response by subtracting the error between the system and the identification network from the input reference of the inverse controller [11,35]. As shown in Figure 20, a first-order filter is used to provide robustness, with a time constant at least five times larger than the sampling period to ensure closed-loop stability. Using τ_F = 0.02 s, the transfer function of the filter is given by F(s) = 1/(τ_F s + 1) (36). The Simulink diagram of Figure 21 is used to simulate the response of the system. The 10d10n identification network is used as the system model and white noise is added to the output to better approximate the response of the real system. The noise has a normal distribution with zero mean (µ = 0) and variance σ² = 0.005. Figure 22 shows the simulated responses of the system controlled with the inverse and internal model controllers, when subjected to step, sinusoidal, and sawtooth inputs. Both controllers use an adaptive block to reduce the steady-state error. As can be seen, the noise in the responses with the internal model controller is much smaller when compared with the responses of the inverse model controller. Some oscillations still exist, but most of the high-frequency noise is eliminated. In Table 11, the performance indexes ISE, IAE, and ITAE are compared, showing the slightly better performance of the internal model controller. However, as represented in Figure 23, the control signal is very similar for both controllers. So, in order to better understand the difference between the control signals, we computed the control effort (CE) of each controller.
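At T = 0.003 s, the first-order robustness filter with τ_F = 0.02 s becomes a one-pole recursion; a small sketch, assuming the usual ZOH discretization of a unity-gain first-order lag:

```python
import math

T, tau = 0.003, 0.02
a = math.exp(-T / tau)        # discrete pole of F(s) = 1/(tau*s + 1) under ZOH

def filter_step(y_prev, u):
    """One step of the discretized first-order filter (unity static gain)."""
    return a * y_prev + (1 - a) * u
```

With τ_F more than five times the sampling period, the pole a stays close to 1, giving the low-pass smoothing that suppresses the model-mismatch noise fed back to the inverse controller.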

Model Reference Control
The last neural controller analyzed in this work is the model reference controller, whose diagram is represented in Figure 24. This controller uses an identification network during the training of the neural controller because, as the name suggests, the controller learns to control the system so that it follows a reference behavior, such as that of a first-order or second-order system [11,12]. In this scheme, we chose to use the response of the system to the random signal generated previously, but this time with the system controlled by the inverse model controller with the adaptive block. The reference and the corresponding system response are presented in Figure 25, while the control signal is shown in Figure 26. Before training, a five-layer neural network was created, where the first three layers correspond to the controller and the last two to the system, as illustrated in Figure 27. The 10d10n identification network already trained in Section 5.1 is used here as the system model. For the controller layers, we used a structure similar to the 1th5s5d network in terms of the number of neurons per layer and activation functions, but in this case, the network receives five delayed samples of the control signal, the output signal, and the reference signal.
For the training of the controller, we turned off the learning in the last two layers so that the system model remains the same during training. In this case, the reference signal is the input of the network and the position signal of the system is the output of the network. The control signal is the output of the third layer into the fourth layer. The controller thus learns to apply a control signal that makes the system follow the reference. The Simulink diagram of Figure 28 was used to simulate the system with added noise, and it clearly shows the five layers of the network. The noise level is the same as that used with the internal model controller, so as to compare both responses. With this diagram, we simulated the responses to the three references (step, sawtooth, and sinusoidal signals) for the model reference and internal model controllers, as shown in Figure 29. We observe that both controllers have a similar response to the different references, except in the case of the step, where the model reference controller has less oscillation around the reference value. Analyzing the performance indexes ISE, IAE, and ITAE of Table 13, we clearly see the better performance of the model reference controller. Figure 30 shows the control signal applied to the system by the model reference controller for the sinusoidal reference. We verify that the control signal has little noise, except when the disc is further away from the reference and requires more current. An important point to notice is that this controller has almost no steady-state error without using an adaptive block, unlike the internal model controller. This is due to the fact that the network was trained to make the system follow the reference, and not to produce a control signal for a given reference.
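The key trick, freezing the plant layers while gradients still flow through them into the controller layers, can be shown with a one-parameter controller and a frozen linear "plant": the controller weight is updated so that the cascade matches an identity reference model, while the plant weight never changes. A minimal sketch under those simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

w_plant = 0.4     # frozen "identification network" (a single linear weight)
w_ctrl  = 0.0     # trainable controller weight
r = rng.uniform(-1, 1, 200)   # reference samples
target = r                    # reference model: identity (follow r exactly)

lr = 0.5
for _ in range(500):
    y = w_plant * (w_ctrl * r)        # cascade: controller, then frozen plant
    err = y - target
    # Gradient w.r.t. the controller weight only; w_plant receives no update,
    # but the gradient still flows through it via the chain rule.
    g = np.mean(err * w_plant * r)
    w_ctrl -= lr * g
# w_ctrl converges toward the ideal 1 / w_plant = 2.5, while w_plant is unchanged
```

In the paper, the same mechanism operates on the five-layer network of Figure 27: learning is disabled in the last two (plant) layers, so only the three controller layers adapt.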
Due to the high computing power required by both the internal model and model reference controllers, it was not possible to implement them in real time on the Arduino with the Simulink Support Package for Arduino Hardware Add-on. We verified that the clock frequency of the microcontroller was not high enough to compute the control cycles at the required T = 0.003 s, even though the Simulink Add-on deployed the code on the microcontroller, meaning the ATmega2560 had enough memory. However, given the good performance of the identification network demonstrated in the experiments, it is expected that the response of the real system would be similar to the simulated one.

Conclusions
In this work, we developed and applied several neural controller structures to a nonlinear and unstable real magnetic levitation system.
First, the system was linearized and its transfer function and parameters were obtained around an equilibrium point. Next, we designed classical lead and lead-lag compensators that provided the training data for the neural networks used in the control. These training data were generated by applying a random reference signal in order to stimulate the system and obtain a response with rich information. The training data consist of samples of the reference signal, the control signal, and the output of the system. These input and output samples were used to train an identification network that learned to emulate the dynamics of the system. This network was then used to simulate the system response for different input references.
The first neural controller designed was the inverse neural network, which learns to model the inverse dynamics of the system. In this case, a three-layer network gave the best performance for the different reference signals. This network was tested on the experimental levitation system with and without the adaptive block. The addition of the adaptive block proved to eliminate the steady-state error.
An internal model controller was also analyzed, using the inverse model controller network and the identification network designed in the previous steps. We verified that this structure successfully reduces the noise and perturbations in the system.
The last neural controller trained was the model reference controller, using as training data the response of the system controlled by the inverse model controller with the adaptive block. When comparing the model reference and internal model controllers, we observed that the responses were very similar, with slight differences. In fact, the model reference controller gives a response with zero steady-state error without using an adaptive block, unlike all the other tested neural controllers.
In this work, we proved that neural networks can be used as robust controllers for nonlinear and unstable systems, with performance levels better than classical controllers.
As a future investigation, we could implement and test the internal model controller and the model reference controller on the experimental levitation system, and compare the responses and robustness of these control structures against the results presented in this study. For that, a higher clock frequency, a faster ADC, and more memory would allow us to experiment with deeper neural networks and smaller sampling periods.

Figure 6. Influence of the current on both sensors.

Figure 7. Influence of the distance on the position sensor.

Figure 8. Response to a step input of the linearized system with both classic controllers.

Figure 9 shows the real-time Simulink diagram used to test the classical controllers. A similar control diagram will be used for the neural controllers.

Figure 9. Simulink diagram used to test the classical controllers.

Figure 10. Real system responses with both classical controllers.

Figure 11. Random reference and system response with the lead-lag controller.

Figure 12. Control signal for the random reference with the lead-lag controller.

Figure 14. Comparison between the system and the network responses.

Figure 17 depicts the experimental responses of the system controlled by the 1th5s5d inverse neural network, where the reference signal is represented in blue and the system response in red.

Figure 17. Responses of the real system when controlled by the 1th5s5d network.

Figure 18. System response with the inverse model controllers.

Figure 19. Comparison of system responses with the lead-lag compensator and the inverse model with adaptive block.

Figure 20. Diagram of the internal model controller.

Figure 21. Simulink diagram of the internal model controller.

Figure 22. Comparison of system responses with the inverse model with adaptive block and the internal model.
We computed the control effort (CE, the time integral of the control signal) for each controller, obtaining CE = 56.0820 for the inverse model and CE = 55.9194 for the internal model, which confirms the similarity of both control signals.
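These control-effort figures can be reproduced from logged data by numerically integrating the control signal. The exact integrand used in the paper is not reproduced here; the sketch below assumes the common definition CE = ∫ u²(t) dt, approximated by the trapezoidal rule.

```python
def control_effort(u, dt):
    # Trapezoidal approximation of CE = ∫ u^2(t) dt over the log
    # (the squared integrand is an assumption, not the paper's
    # stated definition).
    sq = [v * v for v in u]
    return dt * (sum(sq) - 0.5 * (sq[0] + sq[-1]))
```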

Figure 23. Control signal applied by the inverse model with adaptive block and the internal model for the sinusoidal reference.

Figure 24. Diagram of the model reference controller.

Figure 25. Random reference and system response with the inverse model controller with adaptive block.

Figure 26. Control signal for the random reference applied by the inverse model controller with adaptive block.

Figure 27. Reference model controller network. The Levenberg-Marquardt back-propagation algorithm was used for the training. After 50 iterations, the algorithm converged to the value of MSE = 0.25. To prove that the training was in fact successful, we collected the MSE for the step, sawtooth, and sinusoidal reference signals. The results are given in Table 12.

Figure 28. Simulink diagram of the model reference controller.

Figure 29. Comparison of the system responses with the internal model and reference model controllers.

Figure 30. Control signal applied by the reference model controller for the sinusoidal reference.

Table 4. Optimization of the sensor constants.

Table 7. MSE of the best identification networks for different references.

Table 8. MSE obtained by the inverse model control networks for the different reference signals.

Table 9. Integral Square Error (ISE), Integral Absolute Error (IAE), and Integral Time Absolute Error (ITAE) values for the inverse model controllers.
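The error criteria named in Table 9 have standard discrete approximations. The sketch below is illustrative (it assumes a constant sampling period `dt` and rectangular sums) and shows how such values are computed from the reference and response logs.

```python
def error_metrics(r, y, dt):
    # Discrete approximations of the standard error integrals:
    #   ISE  = ∫ e^2 dt,  IAE = ∫ |e| dt,  ITAE = ∫ t*|e| dt
    # computed as rectangular sums over the sampled error e = r - y.
    ise = iae = itae = 0.0
    for k, (rk, yk) in enumerate(zip(r, y)):
        e = rk - yk
        ise += e * e * dt
        iae += abs(e) * dt
        itae += (k * dt) * abs(e) * dt
    return ise, iae, itae
```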

Table 10. ISE, IAE, and ITAE values for the lead-lag compensator and the inverse model with adaptive block.

Table 11. ISE, IAE, and ITAE values for the inverse model and the internal model controllers.

Table 12. MSE obtained by the reference model controller for the different signals.

Table 13. ISE, IAE, and ITAE values for the internal model and the model reference controllers.