# Sparse in the Time Stabilization of a Bicycle Robot Model: Strategies for Event- and Self-Triggered Control Approaches

Faculty of Electrical Engineering, Institute of Control, Robotics and Computer Science, Poznan University of Technology, 60-965 Poznan, Poland

Reacto, os. Jagiellonskie 21, 61-229 Poznan, Poland

Author to whom correspondence should be addressed.

Received: 13 October 2018 / Revised: 14 November 2018 / Accepted: 23 November 2018 / Published: 28 November 2018

In this paper, the problems of event- and self-triggered control are studied for a nonlinear bicycle robot model. It is shown that, by applying control techniques based on triggering conditions, it is possible to reduce both the state-based performance index and the number of triggers in comparison to standard linear-quadratic control, which lowers the energy consumption of the control system and decreases the potential mechanical wear of the robot parts. The results presented in this paper open a new research field for further studies, as discussed in the Summary section, and form the basis for further research in energy-efficient control techniques for stabilizing a bicycle robot.

The bicycle invented by Karl von Drais [1,2,3] operates in an upright unstable equilibrium point, just as many animals do, leading to its inverted pendulum-like structure. Such structures are usually underactuated, which in the case of vehicles makes riding them fun. However, when handicapped or elderly people use them, additional stabilization units are welcome.

Current applications of such units enable a bicycle to maintain its balance at a standstill without any external forces, for instance, by using reaction wheels (see, e.g., a satellite application [4]). The reaction momentum phenomenon can be found in multiple situations (see [5,6,7]). Numerous control laws have been used to control the reaction wheel pendulum, such as sliding mode control [8], fuzzy logic control [9,10], linear control laws with the use of feedback linearization (FBL) [11], predictive control [12], and neural network control [13]. In this paper, we aim to extend this list with event- and self-triggered control approaches.

Dealing with the control task of bicycles using a traditional approach, in which a moment of force is exerted on the handlebar, is troublesome, especially at low linear speeds, e.g., $1\frac{\mathrm{m}}{\mathrm{s}}$. Stabilization by a reaction wheel is, however, independent of the speed of the bicycle (robot). There are widely applied solutions in single-track vehicle stabilization, such as that of Lit Motors Inc. [14], where stabilization is based on moments of forces resulting from a gyroscopic effect exerted by two rotating masses with variable rotation axes, and Honda Riding Assist [15], which stabilizes a motorcycle by controlling its handlebar. The problem, still, is the energy efficiency of such control laws. In the current paper, self-triggered and event-triggered control approaches, active only at certain time instants, are considered and used to obtain energy-efficient control laws for such a structure.

The considered bicycle robot model has the same equilibrium point as a usual bicycle and has been built and used for experimental tests in our previous work. In this paper, the authors present the next step in their research on the control of this mechanical structure. In previous papers, the authors performed 4DoF simulations [16] by introducing robustness into the linear-quadratic regulator (LQR) control scheme and compared different control strategies, e.g., LQR, LQI, and LMI-based LQR control in [17]. The results helped the authors focus on problems with fewer DoF, e.g., by introducing robustness into LQR/LQI control laws with results based on experiments [18], then by introducing feedback linearization to a simulation model [19], with an in-depth analysis of the impact of the initial conditions and the robustness parameter [20], and with different linearization schemes [21]. In [22], the authors extended the results, referring to the best combination of control approaches from the research stage of [18] with feedback linearization, for a possible actuator failure, in order to represent the uncertainty introduced by the linearization of the robot model or by possible constraints imposed on the control signal.

The current challenge in control systems, especially those where sensor readings or actuator actions should occur only when necessary, is to preserve the control performance of the closed-loop system. Since control loops no longer have enough communication resources at their disposal, these implementation aspects have become important. Two of the most important techniques that reduce the occupation of the bandwidth to the minimum necessary level and, at the same time, allow for a reduction of the control energy by exerting control signals only when needed, are event- and self-triggered control algorithms. In the former, the state of the plant is measured continuously, and a triggering condition monitored alongside indicates when to act. In a self-triggered system, a prediction mechanism defines when updates in control should be triggered.

The above is the reason why the authors have decided to take the next step in their studies and find an efficient way of modifying the formerly presented control laws for the bicycle robot. These control approaches seem most appealing from the viewpoint of the energy of the control signal, which, whenever changed, implies a change in the rotation velocity of the reaction wheel and, hence, higher energy consumption.

The novelty of this paper is the application of the proposed control strategies to a single-track vehicle with a stabilization task, taking the energy efficiency of the control law into account. Similar up-to-date approaches involve the stabilization of a motorcycle by a gyroscopic effect by means of rotating flywheels [14], the stabilization of motorcycles driving at a low speed by an automatic change in the rotation angle of the handlebar and the angle between the fork of the handlebar and the ground [15], or, in a less common approach, the stabilization of a cube on one of its corners [23]. To the best of the authors' knowledge, no triggering approaches have been applied to bicycle robot control tasks.

The paper presents simulation studies related to a nonlinear model developed, tested, and verified at multiple stages of previous experiments (see the above references). The paper is structured as follows: Section 2 introduces the mathematical model of the robot. Section 3 presents the considered control strategies that extend those considered hitherto. In Section 4, the experimental model is explained in brief, with a comparison of performances of the considered control strategies. Section 5 provides a summary.

A two-degrees-of-freedom dynamic model [24] of the bicycle, taking the deflection angle from the vertical pose and the angle of rotation of the reaction wheel into account, is used in the form of an inverted pendulum. In addition, the authors have assumed that the handlebar is still, and centrifugal forces do not affect the whole structure. In this way, the actuator impact on the bicycle robot is studied for the considered control regime only.

The mathematical description of a 2DoF robot model is given by the following set of non-linear differential equations [19,20]:

$$\begin{array}{ccc}\hfill {\dot{x}}_{1}(t)& =& {x}_{2}(t)\hfill \end{array}$$

$$\begin{array}{ccc}\hfill {\dot{x}}_{2}\left(t\right)& =& \frac{g\phantom{\rule{0.166667em}{0ex}}{h}_{r}\phantom{\rule{0.166667em}{0ex}}{m}_{r}\phantom{\rule{0.166667em}{0ex}}\mathrm{sin}\left({x}_{1}\left(t\right)\right)}{{I}_{rg}}-\frac{{b}_{r}\phantom{\rule{0.166667em}{0ex}}{x}_{2}\left(t\right)}{{I}_{rg}}-\frac{{b}_{I}\phantom{\rule{0.166667em}{0ex}}{x}_{4}\left(t\right)}{{I}_{rg}}+\frac{{k}_{m}\phantom{\rule{0.166667em}{0ex}}u\left(t\right)}{{I}_{rg}}\hfill \end{array}$$

$$\begin{array}{ccc}\hfill {\dot{x}}_{3}\left(t\right)& =& {x}_{4}\left(t\right)\hfill \end{array}$$

$$\begin{array}{ccc}\hfill {\dot{x}}_{4}\left(t\right)& =& \frac{{k}_{m}\phantom{\rule{0.166667em}{0ex}}u\left(t\right)}{{I}_{I}+{I}_{mr}}-\frac{{b}_{I}\phantom{\rule{0.166667em}{0ex}}{x}_{4}\left(t\right)}{{I}_{I}+{I}_{mr}}\hfill \end{array}$$

See Figure 1 for the kinematic scheme of the robot. The notation adopted in this paper is summarized in Table 1, and such forces as centrifugal, gravitation, and reaction momentum from the reaction wheel are taken into account with full reference to [25].

A basic friction model has been used, which assumes that the moment of the friction force is proportional to the angular velocity of the robot in a specific joint. At this point of the research, the main focus is on control techniques that use the actuators effectively, not on verifying the impact of the friction model.

In order to apply self- and event-triggered control strategies, the model has been linearized using the Jacobian matrix, as in [16,18,21], with the linearization point (units omitted) taken as ${\underline{x}}_{l}=\underline{0}$, to obtain $\dot{\underline{x}}\left(t\right)={\mathit{A}}_{J}\underline{x}\left(t\right)+{\underline{b}}_{\phantom{\rule{0.166667em}{0ex}}J}u\left(t\right)$, where

$$\begin{array}{ccc}\hfill {\mathit{A}}_{J}& =& \left[\begin{array}{cccc}0& 1& 0& 0\\ \frac{g\phantom{\rule{0.55542pt}{0ex}}{h}_{r}\phantom{\rule{0.55542pt}{0ex}}{m}_{r}}{{I}_{rg}}& -\frac{{b}_{r}}{{I}_{rg}}& 0& -\frac{{b}_{I}}{{I}_{rg}}\\ 0& 0& 0& 1\\ 0& 0& 0& -\frac{{b}_{I}}{{I}_{I}+{I}_{mr}}\end{array}\right]\hfill \end{array}$$

$$\begin{array}{ccc}\hfill {\underline{b}}_{J}& =& \left[\begin{array}{c}0\\ \frac{{k}_{m}}{{I}_{rg}}\\ 0\\ \frac{{k}_{m}}{{I}_{I}+{I}_{mr}}\end{array}\right]\hfill \end{array}$$

Secondly, the linearized model has been discretized by means of step-invariant discretization with the sampling period ${T}_{S}=0.01\phantom{\rule{0.166667em}{0ex}}\mathrm{s}$ into the following form:
The output of the discrete-time model is $y\in \mathcal{R}$, the constrained control signal is $u\in \mathcal{R}$, and the state vector is $\underline{x}\in {\mathcal{R}}^{n}$; all are given in discrete time (denoted by the subscript t, where t is a sample number referring to the time instant $t{T}_{S}$), and $\underline{c}={[1,0,0,0]}^{T}$.

$$\begin{array}{ccc}\hfill {\underline{x}}_{t+1}& =& \mathit{A}{\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}+\underline{b}{u}_{t}\hfill \end{array}$$

$$\begin{array}{ccc}\hfill {y}_{t}& =& {\underline{c}}^{T}{\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}\hfill \end{array}$$
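As a sketch of this step, the Jacobian pair $\mathit{A}_J$, $\underline{b}_J$ above and its step-invariant discretization can be reproduced numerically; the parameter values below are hypothetical stand-ins for the measured values of Table 2, chosen only to make the example self-contained:

```python
import numpy as np

# Hypothetical physical parameters (NOT the paper's Table 2 values).
g, h_r, m_r = 9.81, 0.3, 10.0
I_rg, I_I, I_mr = 1.0, 0.02, 0.01
b_r, b_I, k_m = 0.1, 0.01, 0.5

# Jacobian linearization at x_l = 0.
A_J = np.array([
    [0.0, 1.0, 0.0, 0.0],
    [g * h_r * m_r / I_rg, -b_r / I_rg, 0.0, -b_I / I_rg],
    [0.0, 0.0, 0.0, 1.0],
    [0.0, 0.0, 0.0, -b_I / (I_I + I_mr)],
])
b_J = np.array([0.0, k_m / I_rg, 0.0, k_m / (I_I + I_mr)])

def zoh_discretize(A, b, Ts, terms=20):
    """Step-invariant (zero-order-hold) discretization via the
    augmented-matrix exponential expm([[A, b], [0, 0]] * Ts)."""
    n = A.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = A * Ts
    M[:n, n] = b * Ts
    # Truncated Taylor series of expm(M); adequate for small Ts.
    E = np.eye(n + 1)
    term = np.eye(n + 1)
    for k in range(1, terms):
        term = term @ M / k
        E += term
    return E[:n, :n], E[:n, n]

Ad, bd = zoh_discretize(A_J, b_J, Ts=0.01)
```

For the small $T_S$ used here, a truncated Taylor series of the matrix exponential is sufficient; a production implementation would rather rely on a library routine such as `scipy.signal.cont2discrete`.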

The real robot, whose model has been considered in this paper, is presented in Figure 2 for illustrative purposes only. The current paper presents the early stage of research in the field, and, as in the previous work of the authors, simulation results are considered first to identify the most attractive control approach.

Parameters of the model with their descriptions are given in Table 2, with the current denoted as $u\left(t\right)$ (i.e., the control input).

The implemented model has the same parameters as the real robot, which has been confirmed by the authors' numerous comparisons of simulation and experimental results. At first, it is to be stressed that all parts of the robot have been initially weighed using a professional scale with a precision of $\pm 1\phantom{\rule{0.166667em}{0ex}}\mathrm{g}$. The moment of inertia (MOI) of the reaction wheel has been evaluated from its inner radius ${r}_{1}$, outer radius ${r}_{2}$, height l, and mass density $\rho $ on the basis of
and afterwards verified with Autodesk Inventor software, which has a tool to estimate moments of inertia of rotating elements. The MOI of the motor has been obtained by modeling the rotor in Autodesk Inventor, again made of the same material as the real one, and has additionally been verified by calculating the moment of inertia of a rotating cylinder with the diameter of the rotor, measured with a precision caliper and weighed beforehand. The MOI of the robot relative to the ground has been obtained from Autodesk Inventor and verified by the Steiner (parallel-axis) formula for rotating masses with the rotation axis shifted by a distance d. The distance from the ground to the centre of mass has been obtained on the basis of
where ${\underline{r}}_{\phantom{\rule{0.166667em}{0ex}}k}$ represents the known locations of the centres of masses of the components of the robot (from Autodesk), and ${m}_{k}$ represents their weighed masses. The gravity constant has been taken from physical tables, and the motor constant has been obtained from the specification of the motor (the moment of force equals the product of the motor constant and the motor current, and the nominal current and moment are given in the specification of the motor). The friction coefficient in the robot rotation has been estimated by a trial-and-error approach during real-world experiments. By improving the construction of the real robot, the friction effects have been minimized, and the coefficient has subsequently been selected in simulation to mimic the experimental results related to angle delays in the rotating joints of the robot. The same holds for the friction coefficient of the reaction wheel.

$${I}_{\mathrm{rw}}=\rho \pi l\frac{({r}_{2}^{2}-{r}_{1}^{2})({r}_{2}^{2}+{r}_{1}^{2})}{2}\phantom{\rule{0.166667em}{0ex}},$$

$${\underline{r}}_{\phantom{\rule{0.166667em}{0ex}}0}=\frac{{\sum}_{k}{m}_{k}{\underline{r}}_{\phantom{\rule{0.166667em}{0ex}}k}}{{\sum}_{k}{m}_{k}}\phantom{\rule{0.166667em}{0ex}},$$
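For illustration, the two formulas above can be checked numerically; the geometry, density, and component data below are hypothetical, not the measured values of Table 2:

```python
import numpy as np

# Hypothetical reaction-wheel geometry (NOT the robot's measured data).
rho, height = 7850.0, 0.01        # steel density [kg/m^3], wheel height [m]
r1, r2 = 0.05, 0.10               # inner and outer radii [m]

# Hollow-cylinder MOI, as in the formula for I_rw above.
I_rw = rho * np.pi * height * (r2**2 - r1**2) * (r2**2 + r1**2) / 2

# Cross-check via the equivalent form I = m (r1^2 + r2^2) / 2,
# where m is the wheel mass.
m = rho * np.pi * height * (r2**2 - r1**2)
assert abs(I_rw - m * (r1**2 + r2**2) / 2) < 1e-12

# Centre-of-mass height as the mass-weighted mean of component CoMs.
m_k = np.array([10.0, 1.5, 0.8])    # hypothetical component masses [kg]
z_k = np.array([0.30, 0.45, 0.25])  # hypothetical CoM heights [m]
z0 = float(m_k @ z_k / m_k.sum())
```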

In addition, it has been assumed that the measurement of the output of the closed-loop system $y\left(t\right)={x}_{1}\left(t\right)$ (see (8)) is subject to white noise with normal distribution, to reflect the real-world conditions.

Up to this point, as presented in the first section of this paper, the authors have focused on different linearization schemes applied to both discrete-time and continuous-time control techniques. This has led to identifying the most attractive, and at the same time energy-effective, control scheme [22]. Starting again from the LQR approach, the authors now turn their attention to triggered control techniques, which should decrease energy consumption (related to changes in the rotation of the reaction wheel) and, at the same time, reduce the mechanical wear of the robot. In order to satisfy this condition, three control techniques are considered and finally compared, as presented in the following sections.

In this paper, three approaches are compared. As the reference point for all other considerations, the standard LQR control law of the form
is taken into account, where ${\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}$ is the complete sampled state vector of the robot, originating from the nonlinear model (1)–(4), which can be obtained from a full-state observer. This approach is recovered whenever the triggering parameter is taken as $\gamma =0$. The remaining control approaches are presented in the two following subsections.

$$\begin{array}{c}\hfill {u}_{t}={\underline{k}}^{T}{\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}\end{array}$$

The optimal vector $\underline{k}$ from (9) is, in the traditional approach, given by
where $\mathit{P}>0$ is the solution to the Riccati equation, the appropriate vectors and matrices result from (7) and (8), and the performance index is
with $\mathit{Q}\ge 0$ and $R>0$ as design criteria of the standard LQR approach.

$$\begin{array}{ccc}\hfill {\underline{k}}^{T}& =& -{\left({\underline{b}}^{T}\mathit{P}\underline{b}+R\right)}^{-1}{\underline{b}}^{T}\mathit{P}\mathit{A}\hfill \end{array}$$

$$\begin{array}{ccc}\hfill \mathit{P}& =& \mathit{Q}+{\mathit{A}}^{T}\mathit{P}\mathit{A}-{\mathit{A}}^{T}\mathit{P}\underline{b}{\left({\underline{b}}^{T}\mathit{P}\underline{b}+R\right)}^{-1}{\underline{b}}^{T}\mathit{P}\mathit{A}\hfill \end{array}$$

$$\begin{array}{c}\hfill J=\sum _{t=0}^{\infty}\left({\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}^{T}\mathit{Q}{\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}+R{u}_{t}^{2}\right)\phantom{\rule{0.166667em}{0ex}},\end{array}$$
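A minimal sketch of computing the gain via (10) and (11), using a fixed-point iteration of the Riccati equation on a hypothetical two-state, single-input pair standing in for (7):

```python
import numpy as np

def dlqr(A, b, Q, R, iters=500):
    """Fixed-point iteration of the discrete Riccati equation (11),
    followed by the gain vector k^T from (10), single-input case."""
    P = Q.copy()
    for _ in range(iters):
        s = b @ P @ b + R                     # scalar b^T P b + R
        P = Q + A.T @ P @ A - np.outer(A.T @ P @ b, (b @ P @ A) / s)
    kT = -(b @ P @ A) / (b @ P @ b + R)
    return kT, P

# Hypothetical discretized pair standing in for (7): an unstable,
# pendulum-like two-state system with a weakly coupled input.
A = np.array([[1.0, 0.01], [0.3, 1.0]])
b = np.array([0.0, 0.01])
kT, P = dlqr(A, b, Q=np.eye(2), R=1.0)

# With u_t = k^T x_t as in (9), A + b k^T should be Schur stable.
Acl = A + np.outer(b, kT)
rho = np.max(np.abs(np.linalg.eigvals(Acl)))
```

In practice, `scipy.linalg.solve_discrete_are` would replace the hand-rolled iteration; the loop is shown only to mirror Equation (11).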

Let us assume that ${\underline{x}}_{t}^{\ast}$ is the most recently transmitted measurement of the full state of the nonlinear plant to the controller at sample t (at the same time, it can be obtained from a state observer on the basis of the separation principle [26]). Transmitting states to the controller, and the control update action, is based solely on triggering conditions. If a triggering condition is met, the state is transmitted to the controller, and the control sample ${u}_{t}$ is updated according to the law of (9) with ${\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}:={\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}^{\ast}$. Otherwise, no state updates are transmitted, and the control action of (9) is not updated and is kept at the previously generated value. In this case, no calculations are needed, which reduces the computational burden, reduces the use of bandwidth, saves energy, etc.

According to [27,28], for the event-triggered system, the following triggering condition is used:
where $\gamma >0$ can be related to a threshold value, $\underline{k}$ is calculated on the basis of (10) and (11), and $l=1,2,\dots $ (i.e., l grows until the condition of (13) is satisfied). This condition is related to a Lyapunov function.

$$\begin{array}{c}\hfill \left|{\underline{k}}^{T}{\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}^{\ast}-{\underline{k}}^{T}{\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t+l}\right|>\gamma \left|{\underline{k}}^{T}{\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t+l}\right|\end{array}$$

If the condition (13) is met, the control signal is generated according to (9), where ${\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}:={\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}^{\ast}$, updating the state ${\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}$, which corresponds now to the most recent measurement, accepted by the condition. In the other case, we simply move to the next sampling period, verifying (13) at the next sample hit. The value $\gamma >0$ allows one to define the frequency of triggers and, at the same time, updates in the control signal, whereas for $\gamma =0$, and by supplying the state on the basis of the separation principle, a conventional LQR-type control is obtained.

The authors of [27] relate (13) to the quadratic event-triggering condition
where

$$\begin{array}{c}\hfill {\underline{z}}_{t+l}^{T}\mathbf{\Gamma}{\underline{z}}_{t+l}>0\end{array}$$

$$\begin{array}{ccc}\hfill {\underline{z}}_{t+l}& =& {[{\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t+l}^{T},\phantom{\rule{0.166667em}{0ex}}{\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}^{\ast T}]}^{T}\hfill \end{array}$$

$$\begin{array}{ccc}\hfill \mathbf{\Gamma}& =& \left[\begin{array}{cc}(1-{\gamma}^{2}){\underline{k}}^{T}\underline{k}& -{\underline{k}}^{T}\underline{k}\\ -{\underline{k}}^{T}\underline{k}& {\underline{k}}^{T}\underline{k}\end{array}\right]\phantom{\rule{0.166667em}{0ex}}.\hfill \end{array}$$
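The equivalence of the scalar condition (13) and the quadratic form above can be verified numerically; the gain and $\gamma $ below are arbitrary illustrative values, not the robot's:

```python
import numpy as np

# Assumed illustrative gain and threshold (not the paper's values).
kT = np.array([-2530.0, -100.0])
gamma = 0.2
kk = np.outer(kT, kT)
Gamma = np.block([[(1 - gamma**2) * kk, -kk],
                  [-kk, kk]])

rng = np.random.default_rng(0)
for _ in range(1000):
    x_l = rng.normal(size=2)                  # plays x_{t+l}
    x_s = rng.normal(size=2)                  # plays x_t^*
    # Scalar condition (13) ...
    cond_13 = abs(kT @ x_s - kT @ x_l) > gamma * abs(kT @ x_l)
    # ... versus the quadratic form z^T Gamma z > 0.
    z = np.concatenate([x_l, x_s])
    cond_14 = z @ Gamma @ z > 0
    assert cond_13 == cond_14
```

Expanding $z^T\mathbf{\Gamma}z$ gives $(\underline{k}^T\underline{x}_t^{\ast}-\underline{k}^T\underline{x}_{t+l})^2-\gamma^2(\underline{k}^T\underline{x}_{t+l})^2$, which is positive exactly when (13) holds.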

In addition to (13), one can also assume the maximum number N of permissible sampling periods, after which a control update must take place. In this paper, it has been assumed that $N=10$.
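A minimal sketch of the resulting ETC loop, combining condition (13) with the forced update after at most $N=10$ sampling periods; the plant and gain are hypothetical two-state stand-ins, not the bicycle robot model:

```python
import numpy as np

# Hypothetical discrete-time pair standing in for (7), with an assumed
# stabilizing LQR-like gain k^T (illustrative values only).
A = np.array([[1.0, 0.01], [0.3, 1.0]])
b = np.array([0.0, 0.01])
kT = np.array([-2530.0, -100.0])
gamma, N = 0.2, 10

x = np.array([0.1, 0.0])       # plant state
x_star = x.copy()              # most recently transmitted state
u = kT @ x_star                # held control sample, Equation (9)
triggers, l = 1, 0
for t in range(500):
    x = A @ x + b * u          # plant step under the held input
    l += 1
    # Trigger on condition (13), or after N sampling periods at most.
    if abs(kT @ x_star - kT @ x) > gamma * abs(kT @ x) or l >= N:
        x_star = x.copy()
        u = kT @ x_star
        triggers, l = triggers + 1, 0
```

With the cap $N$, at least one update occurs every 10 samples, so over 500 samples the number of triggers lies between 50 and 500.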

In the case of event-triggered systems, the triggering condition has to be verified at every sampling instant, which requires additional computational power. In the self-triggered control approach, the control signal is updated at an update instant calculated in advance on the basis of, e.g., an internal prediction and a triggering condition function, which requires the availability of the model of the plant.

In this approach, an altered method is inspired by [29,30,31], where the condition (13) is verified with respect to the prediction of ${\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t+l}$ made at sample t. This prediction, accompanied by the information about the next sample hit, is obtained from (7) and (8). To perform it, the control signal is assumed to be constant, and the first predicted state that satisfies the triggering condition yields the control update time instant.

Therefore, the triggering condition (13) for self-triggered control (STC) has the following form:
where ${\underline{\hat{x}}}_{\phantom{\rule{0.166667em}{0ex}}t+l}$ are predicted states ${\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t+l}$ with the use of the discrete model (7), i.e., according to
where ${\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}={\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}^{\ast}$ and ${u}_{t}={\underline{k}}^{T}{\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}^{\ast}$.

$$\begin{array}{c}\hfill \left|{\underline{k}}^{T}{\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}^{\ast}-{\underline{k}}^{T}{\underline{\hat{x}}}_{\phantom{\rule{0.166667em}{0ex}}t+l}\right|>\gamma \left|{\underline{k}}^{T}{\underline{\hat{x}}}_{\phantom{\rule{0.166667em}{0ex}}t+l}\right|\end{array}$$

$$\begin{array}{c}\hfill {\underline{\hat{x}}}_{\phantom{\rule{0.166667em}{0ex}}t+l}={\mathit{A}}^{l}{\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}+\left[\begin{array}{cccc}{\mathit{A}}^{l-1}\underline{b}& \dots & \mathit{A}\underline{b}& \underline{b}\end{array}\right]\left[\begin{array}{c}\hfill {u}_{t}\\ \hfill \vdots \\ \hfill {u}_{t}\\ \hfill {u}_{t}\end{array}\right]\phantom{\rule{0.166667em}{0ex}},\end{array}$$
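A sketch of the self-triggered scheduling step, rolling the prediction (20) forward under a held input until condition (19) fires, capped at N; the system and gain are illustrative stand-ins, not the robot model:

```python
import numpy as np

# Hypothetical discrete pair standing in for (7) and an assumed
# stabilizing gain k^T (illustrative values only).
A = np.array([[1.0, 0.01], [0.3, 1.0]])
b = np.array([0.0, 0.01])
kT = np.array([-2530.0, -100.0])
gamma, N = 0.2, 10

def next_trigger(x_star):
    """Roll prediction (20) forward under the held input and return
    the first l at which the STC condition (19) fires, capped at N."""
    u = kT @ x_star                # input held between triggers
    x_hat = x_star.copy()
    for l in range(1, N + 1):
        x_hat = A @ x_hat + b * u  # one ZOH prediction step
        if abs(kT @ x_star - kT @ x_hat) > gamma * abs(kT @ x_hat):
            return l
    return N

l_next = next_trigger(np.array([0.1, 0.0]))
```

The controller then sleeps for `l_next` sampling periods, measures the state, and repeats, so no per-sample condition checking is needed.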

The advantage of this approach is that there is no need to perform continuous measurements, and the actual state is measured only whenever the control signal is updated, which improves energy-saving properties and reduces the occupation of the communication channel. A serious drawback, however, is that the prediction in this approach is based on a linearized model.

It opens, though, a new field of research, taking different linearization schemes into account.

In this approach, it has been assumed that a prediction of the state ${\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t+l}$ is available at time instant t and is used to compute samples of the control signal until the next measurement ${\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}^{\ast}$ appears. In this way, the control signal ${u}_{t}$ is no longer constant between triggers but simply satisfies (9).

The triggering condition for ISTC is the same as for STC, (19), but now it is assumed that the input signal can change between triggers (it is no longer held constant). Therefore, the predictions of ${\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t+l}$ are calculated using the following model:

$$\begin{array}{c}\hfill {\underline{\hat{x}}}_{\phantom{\rule{0.166667em}{0ex}}t+l}={\mathit{A}}^{l}{\underline{x}}_{\phantom{\rule{0.166667em}{0ex}}t}+\left[\begin{array}{cccc}{\mathit{A}}^{l-1}\underline{b}& \dots & \mathit{A}\underline{b}& \underline{b}\end{array}\right]\left[\begin{array}{c}{u}_{t}\\ {u}_{t+1}\\ \vdots \\ {u}_{t+l-2}\\ {u}_{t+l-1}\end{array}\right]\phantom{\rule{0.166667em}{0ex}}.\end{array}$$
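A sketch of the ISTC prediction, recomputing the planned input from the predicted state via (9) at every step and applying a cut-off constraint on the input; the system, gain, and saturation level are illustrative stand-ins:

```python
import numpy as np

# Hypothetical system and gain, as in the STC sketch (illustrative only).
A = np.array([[1.0, 0.01], [0.3, 1.0]])
b = np.array([0.0, 0.01])
kT = np.array([-2530.0, -100.0])
U_MAX = 2.0                    # cut-off constraint on u_t, +/- 2 A
gamma, N = 0.2, 10

def next_trigger_istc(x_star):
    """Roll prediction (21) forward, recomputing u_{t+l} from the
    predicted state via saturated (9) at every step (ISTC); return
    the first l at which (19) fires, plus the planned input sequence."""
    x_hat = x_star.copy()
    u_plan = []
    for l in range(1, N + 1):
        u = float(np.clip(kT @ x_hat, -U_MAX, U_MAX))  # saturated (9)
        u_plan.append(u)
        x_hat = A @ x_hat + b * u
        if abs(kT @ x_star - kT @ x_hat) > gamma * abs(kT @ x_hat):
            return l, u_plan
    return N, u_plan

l_next, u_plan = next_trigger_istc(np.array([0.1, 0.0]))
```

Between triggers, the actuator plays back `u_plan` sample by sample instead of holding one value, which is what smooths the velocity states.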

In addition, a cut-off constraint imposed on ${u}_{t}$ at the level of $\pm 2\phantom{\rule{0.166667em}{0ex}}\mathrm{A}$ is taken into account, to mimic the real behavior of the robot controlled by the proposed scheme. Next, a complete state measurement is transmitted over the control channel, allowing one to calculate its prediction and improving the control quality, in comparison to standard STC. A similar approach is adopted in [32], where it is assumed that control signal samples are known between consecutive update instants.

As in our previous paper [21], extensive simulation studies have been performed, to analyze and compare the three presented approaches, namely,

- standard LQR control, where the following weighting matrices have been taken: $\mathit{Q}={\mathit{I}}^{4\times 4}$, $R=1$, and the control signal is updated at every step with sampling period ${T}_{S}=0.01\phantom{\rule{0.166667em}{0ex}}\mathrm{s}$,
- event-triggered control, where it is assumed that the control update should be made no less than every 10 sampling periods, which forms an additional triggering condition, ${T}_{S}=0.01\phantom{\rule{0.166667em}{0ex}}\mathrm{s}$, and
- self-triggered control/improved self-triggered control based on prediction from the linearized model, where it is also assumed that a control update should be made no less than every 10 sampling periods, ${T}_{S}=0.01\phantom{\rule{0.166667em}{0ex}}\mathrm{s}$.

All the simulations have been performed for the following initial condition $\underline{x}\left(0\right)={[0,\phantom{\rule{0.166667em}{0ex}}0,\phantom{\rule{0.166667em}{0ex}}0,\phantom{\rule{0.166667em}{0ex}}0]}^{T}$ (units omitted), and a noise sequence acting on ${x}_{1}$.

A set of flowcharts, depicting the algorithms of event-triggered control (ETC), STC, and improved STC (ISTC) methods, respectively, has been included in Figure 3, Figure 4 and Figure 5, where red denotes LQR control (saturated) with a linear, continuous-time model, green denotes a nonlinear continuous-time model, and yellow denotes a linear discrete-time model for prediction purposes.

In Figure 6, a set of responses of the closed-loop systems is presented, where for ${T}_{S}=0.01\phantom{\rule{0.166667em}{0ex}}\mathrm{s}$ the maximum number of triggers in the case of LQR control equals 500 (i.e., for $\gamma =0$). As can be seen, all the considered control approaches yield comparable control performance, resulting in proper stabilization of the bicycle robot model in the upright position. The difference lies in the behavior of the control signals: in the case of LQR control, a continuous update is visible, increasing the control signal variance and leading to both potential mechanical wear of the robot parts and severe occupation of the communication channels. The remaining control approaches allow one to update the control signal less frequently while preserving the control performance at the same level.

In the case of the considered TC approaches, much less frequent update occurrences are visible, at the cost of more abrupt changes in control signals, which are smoothed in the case of the ISTC approach, in which velocity signals (i.e., states ${x}_{2}$ and ${x}_{4}$) are smoothed as a result of model-based updates in the control signal, spanned over several samples.

It is to be stressed that the considered triggered methods need considerably fewer updates than the standard LQR approach to obtain similar responses, at the cost of more computations per update.

In Figure 7, the numbers of triggering events are presented for the considered control approaches, depicting an almost linear reduction trend for increasing $\gamma $ in the case of ETC control, and a similar trend in the case of STC and ISTC control. STC/ISTC require slightly more triggers than ETC, but with the benefit of a less severe occupation of the control channels, as the triggering condition is based on the model and not solely on the measurements. Both are a good alternative to the LQR approach, which requires not only continuous measurement transmission but also 500 control updates in the considered control horizon, as shown in the simulations presented in this paper.

Finally, Figure 8 presents performance indices for the considered control strategies, defined as
where ${t}_{\mathrm{max}}$ is the simulation time. The presented performance index plot demonstrates the trade-off that can potentially be made between the closed-loop performance and the number of triggers in the corresponding control strategies.

$$\begin{array}{c}\hfill I={\int}_{0}^{{t}_{\mathrm{max}}}{\left\Vert \underline{x}\left(t\right)\right\Vert}^{2}\phantom{\rule{0.166667em}{0ex}}dt\end{array}$$
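A discrete (trapezoidal) approximation of this index for a simulated closed-loop trajectory can be sketched as follows; the system and gain are illustrative stand-ins for the robot model:

```python
import numpy as np

# Hypothetical closed-loop system (illustrative values only).
A = np.array([[1.0, 0.01], [0.3, 1.0]])
b = np.array([0.0, 0.01])
kT = np.array([-2530.0, -100.0])
Ts, t_max = 0.01, 5.0

Acl = A + np.outer(b, kT)          # closed loop under the law (9)
x = np.array([0.1, 0.0])
norms_sq = [float(x @ x)]
for _ in range(int(t_max / Ts)):
    x = Acl @ x
    norms_sq.append(float(x @ x))

# Trapezoidal rule for I = int_0^{t_max} ||x(t)||^2 dt.
I = Ts * (sum(norms_sq) - 0.5 * (norms_sq[0] + norms_sq[-1]))
```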

Based on the results presented, it can be stated that the main point of application of the presented approach can be any control problem, where control updates are connected with additional energy consumption, unwanted transient behavior of the system, increased mechanical wear of the parts, or abrupt changes in signals, which should be avoided.

In this paper, we aimed at providing an overview of possible control strategies that can potentially be used in the stabilization of a bicycle robot, taking the occurrences of an actuator use into account. The aim was not to cover all of the most up-to-date results, as this field in the last couple of years has expanded extensively, but to identify potentially interesting methods to perform a set of experiments on the bicycle robot. It has been shown that, despite the reduction in the frequency of control updates, it is still possible to stabilize the structure in the upright, unstable equilibrium point, increasing the energy efficiency of the control system.

The main point of this paper is to identify the control approach that could be used most effectively on the real robot, as the authors did in their previous papers. According to the initial tests, the control algorithms proved to be effective, and their implementation on the real robot will be the subject of subsequent publications.

Up to this point, the STC or ETC mechanisms have been used for underactuated robots in the case of, e.g., an underwater robot [33], with no ISTC approach compared or verified. Tests of the application to bicycle robots have not yet been reported, to the best of the authors’ knowledge, though some papers are related to simple inverted pendulums in the form of event-triggered control [34] or to a linearized model of the pendulum and self-triggered control [35].

In all of these approaches, a Lyapunov function, widely used in ETC and STC, is applied to build the triggering condition. The ETC approach in this paper is based on [27], and the STC approach is inspired by [29,30,31], though the current paper uses a different prediction mechanism, originating from the use of a different model: here, a discrete-time model obtained from the linearization of the continuous-time nonlinear model at a point. In [32], it is remarked that there is no need to send control commands between consecutive triggers; as in the presented approach, the full control vector is sent. In their approach, however, the control is calculated for a continuous-time model, with subsequent control sampling.

In future work, we would also like to identify the impact of constraints imposed on the control signal, based on obtaining optimal control subject to control limits, to avoid implementing the control using cut-off constraints only. One of the approaches to be considered might be the so-called tube MPC and the STC control based on it. This could extend the result presented in [36] for linear systems to nonlinear plants, such as the bicycle robot itself, using, e.g., feedback linearization techniques, on the basis of our previous experience with the bicycle robot, especially [18,22]. In these papers, the feedback linearization approach allowed us to identify the conditions under which control energy consumption is reduced by a visible amount while high control performance is maintained, despite the constraints imposed on the current in the real robot.

Conceptualization: J.Z., D.H. and A.O.; Methodology: J.Z. and D.H.; Software: J.Z.; Validation: J.Z., D.H. and A.O.; Formal analysis: J.Z. and D.H.; Investigation: D.H.; Resources: J.Z. and A.O.; Data curation: J.Z. and A.O.; Writing—original draft preparation: J.Z. and D.H.; Writing—review and editing: J.Z., D.H. and A.O.; Visualisation: J.Z. and D.H.; Supervision: J.Z. and D.H.; Project administration: D.H.

This research was financially supported as a statutory work of Poznan University of Technology (DSPB 04/45/DSPB/0184).

The authors declare no conflict of interest.

- Clayton, N. A Short History of the Bicycle; Amberley: Gloucestershire, UK, 2016.
- Hadland, T.; Lessing, H.E.; Clayton, N.; Sanderson, G.W. Bicycle Design: An Illustrated History; The MIT Press: Cambridge, MA, USA, 2014.
- Herlihy, D.V. Bicycle: The History; Yale University Press: New Haven, CT, USA, 2004.
- Tehrani, E.S.; Khorasani, K.; Tafazoli, S. Dynamic neural network-based estimator for fault diagnosis in reaction wheel actuator of satellite attitude control system. In Proceedings of the International Joint Conference on Neural Networks, Montreal, QC, Canada, 31 July–4 August 2005; pp. 2347–2352.
- Halliday, D.; Resnick, R.; Walker, J. Fundamentals of Physics Extended, 10th ed.; Wiley: Hoboken, NJ, USA, 2013.
- Acosta, V.; Cowan, C.L.; Graham, B.J. Essentials of Modern Physics; Harper & Row: New York, NY, USA, 1973.
- Norwood, J. Twentieth Century Physics; Prentice-Hall: Upper Saddle River, NJ, USA, 1976.
- Khalil, H.K. Nonlinear Systems, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2002.
- Bapiraju, B.; Srinivas, K.N.; Prem, P.; Behera, L. On balancing control strategies for a reaction wheel pendulum. In Proceedings of the IEEE INDICON, Kharagpur, India, 20–24 December 2004; pp. 199–204.
- Gao, Q. Universal Fuzzy Controllers for Non-Affine Nonlinear Systems; Springer: Singapore, 2017.
- Fahimi, F. Autonomous Robots: Modeling, Path Planning, and Control; Springer: Berlin, Germany, 2008.
- Maciejowski, J.M. Predictive Control with Constraints; Prentice Hall: Upper Saddle River, NJ, USA, 2000.
- Miller, W.T., III; Sutton, R.S.; Werbos, P.J. Neural Networks for Control; MIT Press: Cambridge, MA, USA, 1995.
- Carpenter, S. Lit Motors Hopes Its C-1 Becomes New Personal Transportation Option. Available online: http://www.latimes.com/business/autos/la-fi-motorcycle-car-20120526,0,4147294.story (accessed on 16 June 2012).
- Cameron, K. Honda Shows Self-Balancing but Non-Gyro Bike. Available online: https://www.cycleworld.com/honda-self-balancing-motorcycle-rider-assist-technology (accessed on 6 January 2017).
- Horla, D.; Owczarkowski, A. Robust LQR with actuator failure control strategies for 4DOF model of unmanned bicycle robot stabilised by inertial wheel. In Proceedings of the 15th International Conference IESM, Seville, Spain, 21–23 October 2015; pp. 998–1003.
- Owczarkowski, A.; Horla, D. A Comparison of Control Strategies for 4DoF Model of Unmanned Bicycle Robot Stabilised by Inertial Wheel. In Progress in Automation, Robotics and Measuring Techniques, Advances in Intelligent Systems and Computing; Roman, S., Ed.; Springer: Basel, Switzerland, 2015; pp. 211–221.
- Owczarkowski, A.; Horla, D. Robust LQR and LQI control with actuator failure of 2DoF unmanned bicycle robot stabilized by inertial wheel. Int. J. Appl. Math. Comput. Sci. 2016, 26, 325–334.
- Zietkiewicz, J.; Horla, D.; Owczarkowski, A. Robust Actuator Fault-Tolerant LQR Control of Unmanned Bicycle Robot. A Feedback Linearization Approach. In Challenges in Automation, Robotics and Measurement Techniques; Szewczyk, R., Zieliński, C., Kaliczynska, M., Eds.; Springer: Berlin, Germany, 2016; pp. 411–421.
- Zietkiewicz, J.; Owczarkowski, A.; Horla, D. Performance of Feedback Linearization Based Control of Bicycle Robot in Consideration of Model Inaccuracy. In Challenges in Automation, Robotics and Measurement Techniques; Szewczyk, R., Zieliński, C., Kaliczynska, M., Eds.; Springer: Berlin, Germany, 2016; pp. 399–410.
- Horla, D.; Zietkiewicz, J.; Owczarkowski, A. Analysis of Feedback vs. Jacobian Linearization Approaches Applied to Robust LQR Control of 2DoF Bicycle Robot Model. In Proceedings of the 17th International Conference on Mechatronics, Prague, Czech Republic, 7–9 December 2016; pp. 285–290.
- Owczarkowski, A.; Horla, D.; Zietkiewicz, J. Introduction of feedback linearization to robust LQR and LQI control—Analysis of results from an unmanned bicycle robot with reaction wheel. Asian J. Control 2019, 21, 1–13.
- Muehlebach, M.; D’Andrea, R. Nonlinear analysis and control of a reaction-wheel-based 3-D inverted pendulum. IEEE Trans. Control Syst. Technol. 2016, 25, 235–246.
- Åström, K.J.; Klein, R.E.; Lennartsson, A. Bicycle dynamics and control: Adapted bicycles for education and research. IEEE Control Syst. 2005, 25, 26–47.
- Åström, K.J.; Block, D.J.; Spong, M.W. The Reaction Wheel Pendulum; Morgan & Claypool: San Rafael, CA, USA, 2007.
- Kwakernaak, H.; Sivan, R. Linear Optimal Control Systems; Wiley-Interscience: Hoboken, NJ, USA, 1972.
- Heemels, W.P.M.H.; Donkers, M.C.F.; Teel, A.R. Periodic Event-Triggered Control for Linear Systems. IEEE Trans. Autom. Control 2013, 58, 847–861.
- Liu, T.; Jiang, Z.-P. Event-based control of nonlinear systems with partial state and output feedback. Automatica 2015, 53, 10–22.
- Heemels, W.P.M.H.; Johansson, K.H.; Tabuada, P. An Introduction to Event-triggered and Self-triggered Control. In Proceedings of the 51st IEEE Conference on Decision and Control, Maui, HI, USA, 10–13 December 2012.
- Barradas Berglind, J.D.J.; Gommans, T.M.P.; Heemels, W.P.M.H. Self-triggered MPC for constrained linear systems and quadratic costs. In Proceedings of the 4th IFAC Nonlinear Model Predictive Control Conference, Noordwijkerhout, The Netherlands, 23–27 August 2012.
- Gommans, T.; Antunes, D.; Donkers, T.; Tabuada, P.; Heemels, M. Self-triggered linear quadratic control. Automatica 2014, 50, 1279–1287.
- Wang, W.; Li, H.; Yan, W.; Shi, Y. Self-Triggered Distributed Model Predictive Control of Nonholonomic Systems. In Proceedings of the 11th Asian Control Conference, Gold Coast, Australia, 17–20 December 2017.
- Heshmati-Alamdari, S.; Eqtami, A.; Karras, G.C.; Dimarogonas, D.V.; Kyriakopoulos, K.J. A self-triggered visual servoing model predictive control scheme for under-actuated underwater robotic vehicles. In Proceedings of the 2014 IEEE International Conference on Robotics and Automation ICRA, Hong Kong, China, 31 May–5 June 2014; pp. 3826–3831.
- Durand, D.; Castellanos, F.G.; Marchand, N.; Sánchez, W.F.G. Event-Based Control of the Inverted Pendulum: Swing up and Stabilization. J. Control Eng. Appl. Inform. 2013, 15, 96–104.
- Hashimoto, K.; Adachi, A.; Dimarogonas, D.V. Self-Triggered Model Predictive Control for Nonlinear Input-Affine Dynamical Systems via Adaptive Control Samples Selection. IEEE Trans. Autom. Control 2017, 62, 177–189.
- Brunner, F.D.; Heemels, W.P.M.H. Robust Self-Triggered MPC for Constrained Linear Systems. In Proceedings of the European Control Conference, Strasbourg, France, 24–26 June 2014.

Symbol | Meaning
---|---
$\underline{x}\in {\mathcal{R}}^{n}$ | state vector
${x}_{1}\,\left[\mathrm{rad}\right]$ | vertical deflection angle of the robot
${x}_{2}\,\left[\frac{\mathrm{rad}}{\mathrm{s}}\right]$ | angular velocity of the robot
${x}_{3}\,\left[\mathrm{rad}\right]$ | angle of rotation of the reaction wheel
${x}_{4}\,\left[\frac{\mathrm{rad}}{\mathrm{s}}\right]$ | angular velocity of the reaction wheel
$u\,\left[\mathrm{A}\right]\in \mathcal{R}$ | control signal (motor current)
${m}_{r}\,\left[\mathrm{kg}\right]$ | mass of the robot
${I}_{I}\,\left[\mathrm{kg}\,{\mathrm{m}}^{2}\right]$ | moment of inertia of the reaction wheel
${I}_{mr}\,\left[\mathrm{kg}\,{\mathrm{m}}^{2}\right]$ | moment of inertia of the rotor of the motor
${I}_{rg}\,\left[\mathrm{kg}\,{\mathrm{m}}^{2}\right]$ | moment of inertia of the robot (rel. to the ground)
${h}_{r}\,\left[\mathrm{m}\right]$ | distance between the ground and the center of mass of the robot
$g\,\left[\frac{\mathrm{m}}{{\mathrm{s}}^{2}}\right]$ | gravitational acceleration
${k}_{m}\,[-]$ | motor constant
${b}_{r}\,[-]$ | friction coefficient in the rotational movement of the robot
${b}_{I}\,[-]$ | friction coefficient in the rotation of the reaction wheel
${P}_{1}$, ${P}_{2}$ | contact points of the wheels with the ground
${C}_{1}$ | center of the rear wheel
${C}_{2}$ | center of the front wheel
LQR | linear-quadratic regulator
ETC | event-triggered control
STC | self-triggered control
ISTC | improved self-triggered control

Parameter | Value | Description
---|---|---
${m}_{r}$ | $3.962\,\mathrm{kg}$ | mass of the robot
${I}_{I}$ | $0.0094\,\mathrm{kg}\,{\mathrm{m}}^{2}$ | moment of inertia (MOI) of the reaction wheel
${I}_{mr}$ | $0.001\,\mathrm{kg}\,{\mathrm{m}}^{2}$ | MOI of the rotor
${I}_{rg}$ | $0.0931\,\mathrm{kg}\,{\mathrm{m}}^{2}$ | MOI of the robot related to the ground
${h}_{r}$ | $0.13\,\mathrm{m}$ | distance from the ground to the center of mass
$g$ | $9.8066\,\frac{\mathrm{m}}{{\mathrm{s}}^{2}}$ | gravitational acceleration
${k}_{m}$ | $0.421$ | motor constant
${b}_{r}$ | $0.0001$ | friction coefficient in the robot rotation
${b}_{I}$ | $0.0001$ | friction coefficient of the reaction wheel
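For readers reproducing the simulations, the numerical values from the table above can be collected in a single structure; the variable name `robot_params` and the dict layout are our own convenience choices, while the values and units are those reported in the table.

```python
# Physical parameters of the bicycle robot model, as listed in the table
# above (masses in kg, inertias in kg*m^2, lengths in m, g in m/s^2).
robot_params = {
    "m_r": 3.962,     # mass of the robot [kg]
    "I_I": 0.0094,    # MOI of the reaction wheel [kg*m^2]
    "I_mr": 0.001,    # MOI of the rotor [kg*m^2]
    "I_rg": 0.0931,   # MOI of the robot related to the ground [kg*m^2]
    "h_r": 0.13,      # distance from the ground to the center of mass [m]
    "g": 9.8066,      # gravitational acceleration [m/s^2]
    "k_m": 0.421,     # motor constant [-]
    "b_r": 0.0001,    # friction coefficient in the robot rotation [-]
    "b_I": 0.0001,    # friction coefficient of the reaction wheel [-]
}
```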

© 2018 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).