Impacts of Discretization and Numerical Propagation on the Ability to Follow Challenging Square Wave Commands

Abstract: This study determines the threshold computational rate of actuator motor controllers for unmanned underwater vehicles necessary to accurately follow discontinuous square wave commands. Motors must track challenging square-wave inputs, and identification of key computational rates permits application of deterministic artificial intelligence (D.A.I.) to achieve tracking to a machine-precision degree of accuracy in direct comparison to other state-of-the-art approaches. All modeling approaches are validated in MATLAB simulations where the motor process is discretized at varying step-sizes (inversely proportional to computational rate). At a large step-size (slow computational rate), discrete D.A.I. shows a mean error more than three times larger than that of a ubiquitous model-following approach. Yet, at a smaller step-size (faster computational rate), the mean error decreases by a factor of 10, becoming only three percent larger than that of continuous D.A.I. Hence, the performance of discrete D.A.I. is critically affected by the sampling period used to discretize the system equations and by the computational rate. Discrete D.A.I. should be avoided when small step-size discretization is unavailable. In fact, continuous D.A.I. surpassed all other modeling approaches, which makes it the safest and most viable solution for future commercial applications in unmanned underwater vehicles.


Introduction
The United States Navy has recognized that unmanned vehicles are a key part of future naval capabilities [1], as depicted in Figure 1. The development of adaptive and learning systems has greatly expanded the possibilities of unmanned vehicles, allowing human control in distant operations that are otherwise impossible. The automation of DC motor control has thus earned its latest highlight as a resurgent, promising field of research. Deterministic artificial intelligence (D.A.I.) utilizes self-awareness assertion in the feedforward process dynamics, where the feedback signal is formulated by 2-norm optimal least squares (learning) or by proportional-derivative feedback (adaption). This manuscript serves as a sequel to the analysis of discrete D.A.I. (described in the following literature review), and the publication is written advocating for commercial application of D.A.I. to unmanned vehicles as depicted in Figure 2. The main text includes an in-depth comparison to a chosen state-of-the-art benchmark approach, focusing mainly on their disparate trajectory tracking abilities.

Figure 1.
Office of Naval Research swarm demonstration in the James River in Virginia using NASA's Jet Propulsion Laboratory's control architecture [2] for robotic agent command and sensing, serving as the core autonomy technology for the demonstration. Image used is consistent with NOAA policy, "NOAA still images, audio files and video generally are not copyrighted. You may use this material for educational or informational purposes, including photo collections, textbooks, public exhibits, computer graphical simulations and webpages." [3].


Figure 2.
Remus 600 unmanned underwater vehicle used by the National Oceanic and Atmospheric Administration (NOAA) [4]. Image used is consistent with NOAA policy, "NOAA still images, audio files and video generally are not copyrighted. You may use this material for educational or informational purposes, including photo collections, textbooks, public exhibits, computer graphical simulations and webpages." [3].
Reference [2] describes a tightly integrated instantiation of an autonomous agent called CARACaS (Control Architecture for Robotic Agent Command and Sensing) developed at JPL (Jet Propulsion Laboratory, Pasadena, USA) that was designed to address many of the issues for survivable ASV/AUV control and to provide adaptive mission capabilities (see Figure 1). Missions naturally suited for utilization include traversing, mapping, and potentially neutralizing mine fields [5,6], as displayed in Figure 3 from the study in reference [7] for the Phoenix vehicle in Figure 3b. The development of adaptive and learning systems has a long, distinguished lineage in the literature with many optional techniques available to choose from. The trendsetting work of Isidori and Byrnes [36] on the control of exogenous signals revealed the close tie between the nonlinear regulator equations and the output regulation of a nonlinear system. The momentum continued, and nonlinear output regulation has been further explored by numerous authors including Cheng, Tarn, and Spurgeon [37], Khalil [38], and Wang and Huang [39] across autonomous and nonautonomous systems. The lineage emphasized in this manuscript stems from a heritage in vehicle guidance and control techniques [8-15] extended to apply to motor controllers [17-35] that generate vehicle motion. Vehicle maneuvering is controlled by the actuator fins displayed in Figure 3b, generating navigation as displayed in Figure 3a. Actuation is accomplished by sending control signals to motors (Figure 3c) that rotate the fins.
This manuscript proposes a preferred instantiation of adaptive and learning systems [26,27] by evaluating the efficacy of motor control techniques based on iterated computational rates and system discretization. The materials and methods in Section 2 first describe model discretization and then introduce the two compared methods: one adaptive and one learning, each with an interconnected lineage of research in the literature.

Learning Techniques
The learning techniques examined in this manuscript stem from heritage in Slotine and Li's nonlinear adaptive methods developed originally for robotics [8] and spacecraft [9-11], while the method has been similarly applied to ocean vehicles [14,15]. The method was initially expressed in the non-rotating inertial reference frame [8,9] and resulted in cumbersome numerical burdens; therefore, Fossen re-parameterized the method into the coordinates of the body reference frame [10], while [11] illustrated separate tunability of feedforward and feedback elements. The feedforward elements substantiated what eventually became known as self-awareness statements [12] of deterministic artificial intelligence [13].
Fossen also prolifically published applications to ocean vehicles [14], including the most recent text [15], which contains trajectory tracking control via pole-placement PID, LQR, feedback linearization, nonlinear backstepping, and sliding mode control, which might now be deemed commonly accepted approaches. Reference [7] illustrates the efficacy of such approaches to guide autonomous underwater vehicles through simulated minefields illustrated in Figure 3a,b. The feedforward elements were used to develop deterministic artificial intelligence through maturation as applied in so-called physics-based methods championed by Lorenz [16] and his students [11,17-24] for many years, which also extended the method from vehicles to actuator control circuits, where representative results following challenging discontinuous commands are depicted in Figure 4. Zhang et al. [17] illustrated fault-tolerance, while Apoorva et al. [18] revealed loss reduction and Flieh et al. demonstrated loss minimization [19] and dead-beat control [20] in addition to self-sensing [21], the precursor to using the physics-based dynamics for virtual sensing [22], following the illustration of optimality in [23] and self-sensing [24] specifically applied to DC motors. Despite stochastic learning methods still holding some interest [25] applied to motor control, this manuscript continues the investigation of deterministic learning approaches [26] following Shah's recommendations [27].
Specifically, [26] illustrated a marked improvement in tracking performance, while Shah's attempt in [27] to duplicate the results revealed a strong correlation between performance improvement and both system discretization and speed of computation. One novelty presented here is analysis of Shah's identified correlated factors.

Adaptive Techniques as Benchmarks for Comparison
Many alternative approaches are available as benchmarks for comparison. A short survey of alternative methods is presented in [30], covering multiple model adaptive control (MMAC) techniques available for the control of a DC motor under load changes. Direct torque control [31] is an option based on discontinuities in rapidly modulating commands. Speed control using model-reference adaptive control is presented in [32], offering the possibility to compensate torque ripples and load torque. Akin to the optimization approach applied to vehicles (second-order systems) [22], extremum-seeking adaptive control of first-order systems was proposed in [33,34].
Alternative approaches are generally tested with step and/or square wave inputs. The ability to track step functions or square wave sequences of step functions is a challenging requirement for DC motor control. The square wave command is chosen because the tracking ability of a nonlinear adaptive method can easily be discerned from the magnitude of overshoot and undershoot at the discontinuities in the square wave. Figure 4 validates the challenge by illustrating just-published novel sensorless methods struggling to follow step and square wave commands, respectively. Figure 5 displays the results of model reference adaptive control and robust adaptive control in Figure 5a and self-tuning regulators in Figure 5b. These methods display disparate natures illustrating the difficulties.
Figure 5.
(a) Comparison of model reference adaptive control and robust adaptive control tracking square waves from [29]. Notice the square waves are rounded to reduce the deleterious challenge of discontinuity; (b) self-tuning regulators tracking square wave commands from [35]. Notice the square waves are not rounded, implying a relatively more challenging demand.
The chosen comparative benchmark adaptive technique is the model-following self-tuning regulator [28], in keeping with the prequel research by Shah [27], who sought to duplicate the results in [26], which seemingly exactly followed a challenging square wave (with non-rounded discontinuous points) after an initial startup transient.
Following the publication of [26], Shah et al. revealed performance limitations in [27], indicating computational rate is the driving influence when the system is discretized. This manuscript presents that recommended sequel to Shah: evaluation of computational rate and recommendations for application in adaptive and learning methods. Section 3 displays the results of comparative analysis of computational rate (via step size) and makes recommendations based on multi-variate figures of merit: target tracking error mean and standard deviation.

Proposed Novelties
Several innovations are proposed, foremost by analysis in Section 2, followed by validating simulation experiments in Section 3, culminating in direct comparison to modern benchmarks in Section 4.

1. Validation of the original prequel [26], seemingly illustrating perfect tracking of challenging square waves compared to a state-of-the-art benchmark, as depicted in the figures in Section 1.

2. Validation of the first sequel's [27] identification of the paramountcy of discretization and computational speed.

3. Recommendation of a key threshold discretization and computational speed to duplicate the results of the original prequel [26].

Materials and Methods
This section offers sufficient details to allow others to replicate and build on the published results. Modeling is described in Section 2.1, followed by the adaptive and learning methods, respectively. The newest method is the learning one, deterministic artificial intelligence, while the parallel comparison to a well-known state-of-the-art nonlinear adaptive technique offers contextualization for the novel recommendations. The complete code of the program is appended at the end of the manuscript to aid the readers' repeatability of the results presented in Section 3.

Discretized Process Truth Model for DC Motor
Consider a continuous-time process, precisely a normalized model for a DC motor. The process is described by the transfer function in Equation (1). The continuous-time process is initially discretized at a time step of 0.50 s using an internal MATLAB function, as provided in Appendix A. Equation (2) shows the discretized process truth model expressed in the frequency domain. Alternatively, the final system response can be written as Equation (3).
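Since Equations (1)-(3) are not reproduced here, the discretization step can be sketched as follows. The transfer function G(s) = 1/(s(s + 1)) is an assumption (the normalized DC motor model common in the self-tuning-regulator literature), and scipy.signal.cont2discrete stands in for the MATLAB function used in the paper; the closed-form zero-order-hold coefficients provide a cross-check.

```python
import numpy as np
from scipy.signal import cont2discrete

# Assumed normalized DC-motor transfer function G(s) = 1 / (s(s + 1));
# the paper's Equation (1) is not reproduced, so this model is an assumption.
num = [1.0]
den = [1.0, 1.0, 0.0]  # s^2 + s

h = 0.50  # sampling period (step-size) in seconds
num_d, den_d, dt = cont2discrete((num, den), h, method='zoh')

# Closed-form ZOH discretization of 1/(s(s+1)) as a cross-check:
# B(q) = (h - 1 + e^-h) q + (1 - e^-h - h e^-h),  A(q) = (q - 1)(q - e^-h)
e = np.exp(-h)
b0, b1 = h - 1 + e, 1 - e - h * e
a1, a2 = -(1 + e), e

print(np.squeeze(num_d))  # ~ [0, 0.1065, 0.0902]
print(den_d)              # ~ [1, -1.6065, 0.6065]
```

Reducing `h` toward 0.27 s (the smaller step-size studied in Section 3) only changes these coefficients, not the procedure.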

Model-Following Self Tuner
The pulse transfer operator of the process is given by Equation (4) where A and B are polynomials in the forward shift operator q, and the polynomials are assumed to be relatively prime. The process model, which is linear in the parameters, may be expressed in the form of a differential equation whose parameters are estimated by the recursive least-squares (RLS) method.
The process is of second order; the coefficients of the controller polynomials (R, S, and T) are of first order and the closed-loop system is of third order. The compatibility condition, as described by Equation (5), requires the model to have the same zero as the process. The desired transfer system thus can be found via cancellation of polynomial factors B+ and B− that represent canceled zeros and uncanceled zeros, respectively.
The coefficients of the controller polynomials are computed by the Diophantine equation, described by AR + BS = Ac. The Diophantine equation without process zero-cancellation is given by Equation (6). The coefficients of the controller polynomials may be expressed in terms of the estimated process parameters, as shown in Equations (7)-(9). The polynomial T requires an additional model-following condition described by Equation (10).

(q^2 + a_1 q + a_2)(q + r_1) + (b_0 q + b_1)(s_0 q + s_1) = (q^2 + a_m1 q + a_m2)(q + a_0)    (6)
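Matching coefficients of like powers of q in Equation (6) reduces the pole placement to a 3-by-3 linear system in the controller coefficients r_1, s_0, and s_1. The sketch below solves it with illustrative (assumed) process and reference-model parameters; the paper's actual values are estimated online by RLS and are not reproduced here.

```python
import numpy as np

# Illustrative (assumed) process and reference-model parameters.
a1, a2 = -1.6065, 0.6065      # process denominator A(q) = q^2 + a1 q + a2
b0, b1 = 0.1065, 0.0902       # process numerator  B(q) = b0 q + b1
am1, am2 = -1.3205, 0.4966    # desired model Am(q) = q^2 + am1 q + am2
a0 = 0.0                      # observer polynomial Ao(q) = q + a0

# Match coefficients of q^2, q^1, q^0 in Equation (6):
# (q^2 + a1 q + a2)(q + r1) + (b0 q + b1)(s0 q + s1)
#   = (q^2 + am1 q + am2)(q + a0)
M = np.array([[1.0, b0,  0.0],
              [a1,  b1,  b0],
              [a2,  0.0, b1]])
rhs = np.array([am1 + a0 - a1,
                am2 + a0 * am1 - a2,
                a0 * am2])
r1, s0, s1 = np.linalg.solve(M, rhs)

# Verify the polynomial identity (closed-loop pole placement).
lhs = np.polyadd(np.polymul([1.0, a1, a2], [1.0, r1]),
                 np.polymul([b0, b1], [s0, s1]))
target = np.polymul([1.0, am1, am2], [1.0, a0])
print(np.allclose(lhs, target))  # True
```

In the self-tuning regulator, this solve is repeated at every sample with the latest RLS estimates in place of the fixed values above.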

Deterministic Artificial Intelligence
Deterministic artificial intelligence requires self-awareness assertion, which can be established by isolating u(t) on the left-hand side of Equation (3). The mathematical manipulation, as shown by Equation (11), allows u(t) to be expressed as the product of a matrix of knowns and a vector of unknowns. The matrix of knowns, [φ_d], represents the desired trajectory; the vector of unknowns, {θ}, represents the parameters learned from proportional-derivative (PD) feedback to generate the process input. The regression form of the process input u(t) is thus written as u*(t), as described by Equations (12) and (13).
The desired trajectory is computed by propagating states to y(t + 1) and by applying the feedforward control to Equation (3). The rough initial estimates of the feedback parameters, along with the values of output y and regression u*(t), are used in recursive least squares (RLS) to learn the updated feedback parameters {θ}.
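A minimal sketch of the RLS update used to learn the feedback parameters {θ} follows; the second-order difference-equation model and its parameter values are illustrative assumptions, not the paper's exact regression of Equations (12) and (13).

```python
import numpy as np

def rls_update(theta, P, phi, y, lam=1.0):
    """One recursive-least-squares step: update the parameter estimate
    theta and covariance P from regressor phi and measurement y."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)        # gain
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam                # covariance update
    return theta, P

# Demonstration on an assumed second-order difference equation
# y[k] = -a1 y[k-1] - a2 y[k-2] + b0 u[k-1] + b1 u[k-2].
theta_true = np.array([1.6065, -0.6065, 0.1065, 0.0902])  # [-a1, -a2, b0, b1]
rng = np.random.default_rng(0)
u = rng.standard_normal(300)
y = np.zeros(300)
for k in range(2, 300):
    y[k] = theta_true @ np.array([y[k-1], y[k-2], u[k-1], u[k-2]])

theta = np.zeros(4)
P = 1e3 * np.eye(4)
for k in range(2, 300):
    phi = np.array([y[k-1], y[k-2], u[k-1], u[k-2]])
    theta, P = rls_update(theta, P, phi, y[k])

print(theta)  # approaches theta_true on this noiseless data
```

The same update serves both the self-tuning regulator (estimating process parameters) and D.A.I. (learning feedback parameters); only the regressor and measurement differ.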
To evaluate a continuous system using D.A.I., the transfer function in Equation (1) should be converted back into an ordinary differential equation (ODE), which is reparametrized as in Equation (13). Alternatively, the feedback parameters can be learned in a discrete environment via the optimal feedback adjustment introduced by Smeresky [12], as described by Equation (14).
The updated and optimal feedback parameters are fed back into Equation (12) to calculate the control u(t) and output a sinusoidal trajectory given by Equation (15), where A_0 and A represent the original state and the target state, respectively.

Results
This section first compares discrete deterministic artificial intelligence and the modern benchmark, model-following control. Revelations include a higher susceptibility of deterministic artificial intelligence to larger step sizes, but increased efficacy relative to model following when using smaller step sizes. Next is a presentation of results comparing continuous versus discrete deterministic artificial intelligence.

Comparison of Discrete Deterministic Artificial Intelligence and Model-Following Approach
The deterministic artificial intelligence modeling approach shows a significantly larger tracking error than the model-following approach when the process is discretized with a large sampling period. Specifically, as seen in Table 1, the mean tracking error is 3.08 times larger, and the error standard deviation is approximately two times larger at a step-size of 0.50 s. The large discrepancy in the tracking performance is well illustrated in Figure 6. The output via the model-following approach almost immediately follows the input signal with measurable accuracy. Contrarily, deterministic artificial intelligence shows significant oscillations at discontinuities where the sign of the input signal changes.

Table 1. Error distribution of D.A.I. and model-following method (M.F.) at varying step-sizes.

The performance of deterministic artificial intelligence, however, is elevated considerably when the step-size is reduced. As shown in Table 1, the mean tracking error of deterministic artificial intelligence is reduced to approximately 20% of its initial value when the step-size is lowered to 0.27 s. The error standard deviation is also reduced by a factor of 3. The improvement in deterministic artificial intelligence performance is highlighted in Figure 7. The output via deterministic artificial intelligence shows marginal overshoots at discontinuities and follows the input signal with minor tracking error. In contrast, the model-following approach shows degradation of performance; at a step-size of 0.27 s, the output shows significant oscillations in the initial transient which are not observed at a step-size of 0.50 s.
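The figures of merit tabulated here, tracking-error mean and standard deviation, can be computed as sketched below. The square wave command and the crude first-order-lag "output" are stand-ins for illustration only, not the paper's controllers.

```python
import numpy as np

h = 0.27                                       # sampling period, seconds (one of the studied step-sizes)
t = np.arange(0.0, 20.0, h)
ref = np.sign(np.sin(2 * np.pi * t / 10.0))    # square wave command (assumed period)

# Crude stand-in output: a first-order lag chasing the command.
y = np.zeros_like(ref)
for k in range(1, len(t)):
    y[k] = y[k-1] + 0.5 * (ref[k-1] - y[k-1])

err = y - ref
mean_err, std_err = np.mean(np.abs(err)), np.std(err)
print(f"mean |error| = {mean_err:.4f}, std = {std_err:.4f}")
```

Replacing the stand-in output with each controller's simulated output reproduces the comparison reported in Tables 1 and 2.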

Comparison of Discrete D.A.I. and Continuous D.A.I.
Continuous D.A.I. has high tracking capability. It follows the input signal without any visible tracking error after the initial transient. From the previous comparison in Section 3.1, it is apparent that D.A.I. is less favorable for a discretized process with a large step-size. It is also revealed in Table 2 that the performance of deterministic artificial intelligence increases significantly when the step-size is reduced and tuned to precision. In fact, discrete D.A.I. shows tracking performance that is comparable to that of continuous D.A.I. when the step-size is reduced. The mean error of discrete deterministic artificial intelligence is nearly equal to that of continuous D.A.I., with a 3% difference.

Table 2. Error distribution of discrete D.A.I. and continuous D.A.I. at varying step-sizes.

In fact, the error standard deviation of discrete D.A.I. is half that of continuous deterministic artificial intelligence. However, it is important to note that the smaller standard deviation of discrete D.A.I. does not suggest superior performance over its continuous twin. The relatively large standard deviation of continuous deterministic artificial intelligence is due to the oscillations in the initial transient. When the time window is pushed past the initial transient, it is expected that continuous D.A.I. will outperform discrete deterministic artificial intelligence due to marginal or no tracking error. The results in Section 3 are formulated inside MATLAB. The complete code is attached in Appendix A to help replication of the results.

Discussion
The results in Tables 3-5 validate the ability of deterministic artificial intelligence to track challenging, discontinuous square wave commands in a manner that compares favorably to modern techniques. Foundational research seemed to indicate the efficacy of continuous deterministic artificial intelligence, but subsequent prequel research discerned a failure under certain conditions of discretization, and this manuscript validates the exemplary performance of continuous control and furthermore establishes a threshold for discretization to maintain good performance.

Table 3. Comparison of different discretization methods in discrete D.A.I.

Table 4. Percent performance improvement for D.A.I. and model-following adaptive control.

Table 5. Percent performance improvement for continuous and discrete D.A.I.

Future Research Recommendations
Following successful duplication of these results to establish the benchmark for the sequel study, random parameter variation should be explored to ascertain the ability of deterministic artificial intelligence to learn the time-varying parameters and maintain high performance.

Conclusions
In essence, the manuscript reveals not only that different control algorithms yield disparate control effects (as seen in Figures 5 and 6), but also that the degree of discretization in a control algorithm dictates the tracking quality of the algorithm, as presented in Figure 7. Integration solver step-size was also iterated for both continuous and discrete system equations. Choosing among different discretization methods, such as zero-order hold (ZOH), bilinear approximation (Tustin), and linear interpolation (FOH), visibly reduced the tracking error at a large step-size. The discrepancy in the results decreased with step-size and eventually became negligible and, thus, was omitted. Surprisingly, the best performance was achieved with discrete deterministic artificial intelligence using a small step-size, with continuous deterministic artificial intelligence performance next best.
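The discretization-method comparison named above can be reproduced in outline as follows. The model G(s) = 1/(s(s + 1)) and the square wave period are assumptions, and scipy stands in for the MATLAB code in Appendix A.

```python
import numpy as np
from scipy.signal import cont2discrete, dlsim

# Assumed normalized DC-motor model; Equation (1) is not reproduced in
# the text, so this is an illustrative stand-in.
num, den = [1.0], [1.0, 1.0, 0.0]

for h in (0.50, 0.27):                          # the two studied step-sizes
    for method in ('zoh', 'bilinear', 'foh'):   # ZOH, Tustin, FOH
        num_d, den_d, _ = cont2discrete((num, den), h, method=method)
        # Open-loop response to a square wave, as in the tracking study.
        t = np.arange(0.0, 20.0, h)
        u = np.sign(np.sin(2 * np.pi * t / 10.0))
        _, y = dlsim((np.squeeze(num_d), den_d, h), u)
        y = np.squeeze(y)
        print(f"h={h:.2f} {method:8s} final y={float(y[-1]):+.3f}")
```

Feeding each discretized model through the controllers of Section 2, rather than open loop, yields the method-by-method tracking errors summarized in Table 3.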