Article

Self-Learning Variable Structure Control for a Class of Sensor-Actuator Systems

1 Key Lab of Visual Media Processing and Transmission, Shenzhen Institute of Information Technology, Shenzhen 518029, Guangdong, China
2 Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ 07030, USA
3 Department of Computer Science, University of Massachusetts, Amherst, MA 01003, USA
4 School of Mechatronics and Information, Yiwu Industrial and Commercial College, Yiwu 322000, Zhejiang, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Sensors 2012, 12(5), 6117-6128; https://doi.org/10.3390/s120506117
Submission received: 4 April 2012 / Revised: 16 April 2012 / Accepted: 29 April 2012 / Published: 10 May 2012
(This article belongs to the Section Physical Sensors)

Abstract

: Variable structure strategies are widely used for the control of sensor-actuator systems modeled by Euler-Lagrange equations. However, accurate knowledge of the model structure and model parameters is often required for the control design. In this paper, we consider model-free variable structure control of a class of sensor-actuator systems, where only the online input and output of the system are available while the mathematical model of the system is unknown. The problem is formulated from an optimal control perspective, and the implicit form of the control law is obtained analytically by using the principle of optimality. The control law and the optimal cost function are then solved explicitly by iteration. Simulations demonstrate the effectiveness and efficiency of the proposed method.

1. Introduction

With the development of mechatronics, automatic systems consisting of sensors for perception and actuators for action are increasingly used in applications [1–4]. Besides the proper choice of sensors and actuators and the elaborate fabrication of mechanical structures, the control law design also plays a crucial role in the implementation of automatic systems, especially those with complicated dynamics. Most mechanical sensor-actuator systems can be modeled by Euler-Lagrange equations [4,5]. In this paper, we are concerned with sensor-actuator systems modeled by Euler-Lagrange equations.

Due to the importance of Euler-Lagrange equations in modeling many real sensor-actuator systems, much attention has been paid to the control of such systems. According to the type of constraints, Euler-Lagrange systems can be categorized into those without nonholonomic constraints (e.g., the fully-actuated manipulator [6,7], the omni-directional mobile robot [8]) and those subject to nonholonomic constraints [9] (e.g., the cart-pole system [10], the under-actuated multiple body system [11]). For Euler-Lagrange systems without nonholonomic constraints, the dimension of the input is often equal to that of the output, and the system can often be transformed into a double integrator by feedback linearization [12]. Other methods, such as the control Lyapunov function method [13], the passivity based method [14], and the optimal control method [15], have also been successfully applied to the control of Euler-Lagrange systems without nonholonomic constraints. In contrast, as the dimension of the input is lower than that of the output, it is often impossible to directly transform an Euler-Lagrange system subject to nonholonomic constraints into a linear system, and thus feedback linearization fails to stabilize the system. To tackle this difficulty, variable structure control [16], backstepping based control [17], optimal control [18], and discontinuous control [19], among others, have been widely investigated, and some useful design procedures have been proposed. However, due to the inherent nonlinearity and the nonholonomic constraints, most existing methods [16–19] are strongly model dependent and their performance is very sensitive to model errors. Inspired by the success of human operators in controlling Euler-Lagrange systems, various intelligent control strategies, such as fuzzy logic [20], neural networks [21], and evolutionary algorithms [22], to name a few, have been proposed to solve the control problem of Euler-Lagrange systems subject to nonholonomic constraints. As demonstrated by extensive simulations, these strategies are indeed effective for the control of such systems. However, rigorous stability proofs are difficult for this type of method, and there may exist initializations of the state from which the system cannot be stabilized.

In this paper, we propose a self-learning control method applicable to Euler-Lagrange systems. In contrast to existing work on intelligent control of Euler-Lagrange systems, the stability of the closed-loop system under the proposed method is proven in theory. On the other hand, different from model based design strategies, such as backstepping based design [17] and variable structure based design [16], the proposed method does not require information on the model parameters and is therefore model independent. We formulate the problem from an optimal control perspective. In this framework, the goal is to find the input sequence that minimizes a cost function defined on the infinite horizon under the constraint of the system dynamics. The solution can be found by solving a Bellman equation according to the principle of optimality [23]. An adaptive dynamic programming strategy [24–26] is then utilized to numerically solve for the input sequence in real time.

The remainder of this paper is organized as follows: in Section 2, preliminaries on Euler-Lagrange systems and variable structure control are given briefly. In Section 3, the problem is formulated as a constrained optimization problem, and the critic model and the action model are employed to approximate the optimal mappings. The control law is then derived in Section 4. In Section 5, simulations are presented to show the effectiveness of the proposed method. The paper is concluded in Section 6.

2. Preliminaries on Variable Structure Control of the Sensor-Actuator System

In this paper, we are concerned with the following sensor-actuator system in the Euler-Lagrange form,

$$D(q)\ddot{q} + C(q,\dot{q})\dot{q} + \phi(q) = u \tag{1}$$
where q ∈ ℝn is the generalized coordinate, D(q) ∈ ℝn×n is the inertia matrix, C(q,q̇) ∈ ℝn×n, ϕ(q) ∈ ℝn and u ∈ ℝn. Note that the inertia matrix D(q) is symmetric and positive definite. There are three terms on the left side of the above equation. The first term involves the inertial force in the generalized coordinates; the second models the Coriolis force and friction, whose values depend on both q and q̇; and the third is the conservative force, which corresponds to the potential energy. The control force u applied to the system drives the variation of the coordinate q. It is also noteworthy that we assume the dimension of u is equal to that of q here. This formulation also admits the case of u with lower dimension than q by imposing constraints on u; e.g., the constraint u = [u1, u2, …, un]T with u1 = 0 restricts u to an (n − 1)-dimensional subspace. Defining state variables x1 = q and x2 = q̇, the Euler-Lagrange Equation (1) can be put into the following state-space form:
$$\dot{x}_1 = x_2, \qquad \dot{x}_2 = D^{-1}(x_1)\left(u - C(x_1, x_2)x_2 - \phi(x_1)\right) \tag{2}$$
Note that the matrix D(x1) is invertible as it is positive definite. The control objective is to asymptotically stabilize the Euler-Lagrange system (2), i.e., to design a mapping (x1, x2) → u such that x1 → 0 and x2 → 0 as time elapses.
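For concreteness, the state-space form (2) translates directly into a numerical simulator. The following is a minimal sketch, assuming user-supplied callables D, C and phi that stand in for the unknown physical model generating the measured data (the proposed controller itself never evaluates them), with a simple explicit Euler integrator:

```python
import numpy as np

def el_dynamics(x1, x2, u, D, C, phi):
    """Right-hand side of the state-space form (2) of system (1)."""
    x1_dot = x2
    x2_dot = np.linalg.solve(D(x1), u - C(x1, x2) @ x2 - phi(x1))
    return x1_dot, x2_dot

def euler_step(x1, x2, u, D, C, phi, dt=0.02):
    """Advance the state by one sampling period with explicit Euler."""
    x1_dot, x2_dot = el_dynamics(x1, x2, u, D, C, phi)
    return x1 + dt * x1_dot, x2 + dt * x2_dot
```

In the model-free setting considered below, only the sampled trajectories produced by such a simulator (or by the physical plant) are available to the controller.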

As an effective design strategy, variable structure control finds applications in many different types of control systems, including the Euler-Lagrange system. The method stabilizes the dynamics of a nonlinear system by steering the state onto an elaborately designed sliding surface, on which the state inherently evolves towards the zero state. Particularly for the system (2), we define s = s(x1, x2) as follows:

$$s = c_0 x_1 + x_2 \tag{3}$$
where c0 > 0 is a constant. Note that s = c0x1 + x2 = 0, together with the dynamics of x1 in Equation (2), gives the dynamics of x1 as ẋ1 = −c0x1 for c0 > 0. Clearly, x1 asymptotically converges to zero. Also, x2 = 0 when x1 = 0 according to s = c0x1 + x2 = 0. Therefore, we conclude that the states x1, x2 on the sliding surface s = 0, for s defined in Equation (3), converge to zero with time. With this property of the sliding surface, a control law driving the states to s = 0 guarantees ultimate convergence to the zero state. Accordingly, the stabilization of the system can be realized by controlling s to zero. To reach this goal, a positive definite control Lyapunov function V(s), e.g., V(s) = s², is often used to design the control law. For stability, the time derivative of V(s) is required to be negative definite. To guarantee this negative definiteness, exact information about the system dynamics (2) is usually necessary, which results in model based design strategies.
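The convergence argument above is easy to check numerically. The short sketch below (illustrative only; c0 = 10 matches the value used later in Section 5.2) defines the sliding variable of Equation (3) and integrates the on-surface dynamics ẋ1 = −c0x1 for a scalar coordinate:

```python
import numpy as np

def sliding_variable(x1, x2, c0=10.0):
    """Sliding variable s = c0*x1 + x2 of Equation (3); s = 0 forces
    x2 = -c0*x1, so x1 obeys x1_dot = -c0*x1 and decays exponentially."""
    return c0 * np.asarray(x1) + np.asarray(x2)

# Confine a scalar state to s = 0 and integrate x1_dot = x2 = -c0*x1.
x1, dt = 1.0, 0.001
for _ in range(5000):
    x1 += dt * (-10.0 * x1)
print(x1)  # about 1.5e-22 after 5 s: x1 (and hence x2 = -c0*x1) has vanished
```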

About the Euler-Lagrange Equation (1) for modeling sensor-actuator systems, we have the following remark:

Remark 1. In this paper, we are concerned with the class of sensor-actuator systems modeled by the Euler-Lagrange Equation (1). Actually, the dynamics of mechanical systems can be described by the Euler-Lagrange equation according to rigid body mechanics [4,5], which is essentially equivalent to Newton's laws of motion. Therefore, mechanical sensor-actuator systems can be modeled by Equation (1). In this regard, the Euler-Lagrange equation employed in this paper models a general class of sensor-actuator systems.

3. Problem Formulation

Without loss of generality, we stabilize the system (1) by steering it to the sliding surface s = 0 with s defined in Equation (3). Different from existing model based design procedures, we design a self-learning controller, which does not require accurate knowledge of D(q), C(q,q̇) and ϕ(q) in Equation (1). In this section, we formulate this control problem from the optimal control perspective.

In this paper, we set the origin as the desired operating point, i.e., we consider the problem of controlling the state of the system (1) to the origin. For other desired operating points, the problem can be equivalently transformed into one with the origin as the operating point by shifting the coordinates. At each sampling period, the norm of s = c0x1 + x2, which measures the distance from the desired sliding surface s = 0, can be used to evaluate the one-step performance. Therefore, we define the following utility function associated with the one-step cost at the ith sampling period,

$$U_i = U(s) \tag{4}$$
with
$$U(s) = \begin{cases} 0, & |s_1| < \delta_1,\ |s_2| < \delta_2,\ \ldots,\ |s_n| < \delta_n \\ 1, & \text{otherwise} \end{cases} \tag{5}$$
where s is defined in Equation (3) with s = [s1, s2, …, sn]T, |si| denotes the absolute value of the ith component of the vector s, and the parameters satisfy δi > 0 for i = 1, 2, …, n. At each step there is a value Ui, and the total cost starting from the kth step along the infinite time horizon can be expressed as follows,
$$J_k = J(x(k), \bar{u}(k)) = \sum_{i=k}^{\infty} \gamma^{i-k} U_i \tag{6}$$
where x(k) = [x1T(k), x2T(k)]T is the state vector of system (1) sampled at the kth step, γ is the discount factor with 0 < γ < 1, and ū(k) = (uk, uk+1, …, u∞) is the control sequence starting from the kth step. Note that for the deterministic system (1), the states after the kth step are determined by x(k) and the control sequence ū(k). Accordingly, Jk is a function of x(k) and ū(k) with Jk = J(x(k), ū(k)). Also note that both the cost function Jk and the utility function Uk are defined based on discrete samplings of the continuous system (1). Now, we can define the problem of controlling the sensor-actuator system (1) in this framework as follows,
$$\min_{u(0), u(1), \ldots, u(\infty) \in \Omega} J_0 = \sum_{i=0}^{\infty} \gamma^i U_i \tag{7a}$$

subject to:

$$\dot{x}_1(t) = x_2(t), \qquad \dot{x}_2(t) = D^{-1}(x_1(t))\left(u(t) - C(x_1(t), x_2(t))x_2(t) - \phi(x_1(t))\right) \tag{7b}$$

$$u(t) = u(i) \quad \text{for } i\tau \le t < (i+1)\tau,\ i = 0, 1, 2, \ldots \tag{7c}$$
where Ui is defined by Equations (4) and (5), τ > 0 is the sampling period, the set Ω defines the feasible control actions, and J0 is the cost function for k = 0 in Equation (6). It is worth noting that J0 is a function of ū(0) = (u0, u1, …, u∞) and x(0) according to Equation (6). The optimization in Equation (7) is over ū(0) for a given initial state x(0). Also note that in the optimization problem in Equation (7), the decision variables u(0), u(1), …, u(∞) are defined at each sampling instant, and the control action is held constant between two consecutive sampling instants. This formulation is consistent with real implementations of digital controllers.
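To make the cost structure concrete, the sketch below (an illustration, not the authors' code; the names utility and discounted_cost and the truncation horizon are ours) implements the one-step cost of Equation (5) and a truncated evaluation of the discounted sum in Equations (6) and (7):

```python
import numpy as np

def utility(s, delta):
    """One-step cost U(s) of Equation (5): 0 inside the band
    |s_i| < delta_i around the sliding surface, 1 outside."""
    return 0.0 if np.all(np.abs(s) < delta) else 1.0

def discounted_cost(s_traj, delta, gamma=0.95, horizon=500):
    """Truncation of the infinite-horizon cost (6) for k = 0; the tail
    is negligible since gamma**horizon -> 0 for 0 < gamma < 1."""
    return sum(gamma**i * utility(s, delta)
               for i, s in enumerate(s_traj[:horizon]))
```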

Remark 2. There are infinitely many decision variables, u(0), u(1), …, u(∞), in the optimization problem in Equation (7). Therefore, this is an infinite dimensional problem, and it cannot be solved directly using numerical methods. Conventionally, such problems are often solved by using a finite dimensional approximation [27]. In addition, note that the dynamic model of the system appears in the optimization problem in Equation (7), and it will also show up in any finite dimensional relaxation of the problem, which means the resulting solution requires model information and is thus model dependent. In contrast, in this paper we investigate model-independent variable structure control of sensor-actuator systems on the infinite time horizon.

4. Model-Free Control of the Euler-Lagrange System

In this section, we present the strategy to solve the constrained optimization problem efficiently without knowing the model information of the sensor-actuator system. We first investigate the optimality condition of Equation (7) and present an iterative procedure to approach the analytical solution. Then, we analyze the convergence of the iterative procedure and the stability under the derived control strategy.

4.1. Optimality Condition

Denote by J* the optimal value of the optimization problem in Equation (7), i.e.,

$$J^* = \min_{u(0), u(1), \ldots, u(\infty) \in \Omega} J_0 \tag{8}$$

subject to: (7b); (7c)

According to the principle of optimality [23], the solution of Equation (7) satisfies the following Bellman equation:

$$J^*(y) = \min_{u_k \in \Omega}\left(U_k + \gamma J^*(z)\right), \quad k = 0, 1, 2, \ldots \tag{9}$$
where z is the solution of Equation (7b) at t = (k + 1)τ with x(k) = y and the control action u(t) = uk for kτ ≤ t < (k + 1)τ. Without introducing confusion, we simply write Equation (9) as follows,
$$J^* = \min\left(U_k + \gamma J^*\right) \tag{10}$$

Define the Bellman operator ℬ acting on a function h(z) as follows,

$$\mathcal{B}h(z) = \min\left(U_k + \gamma h(z)\right) \tag{11}$$

Then, the optimality condition in Equation (10) can be simplified into the following with the Bellman operator,

$$J^* = \mathcal{B}J^* \tag{12}$$

Note that the function Uk is implicitly included in the Bellman operator. Equation (12) constitutes the optimality condition for the problem in Equation (7). It is difficult to solve for the explicit form of J* analytically from Equation (9). However, it is possible to obtain the solution by iteration. We use the following iteration to solve for J*,

$$\hat{J}^{(n+1)} = \mathcal{B}\hat{J}^{(n)} \tag{13}$$
subject to: (7b); (7c)

The control action is kept constant between the kth and the (k + 1)th steps, i.e., u*(t) = u*k for kτ ≤ t < (k + 1)τ, where u*k can be obtained from Equation (9) based on Equation (13),

$$u_k^* = \operatorname*{argmin}_{u_k \in \Omega}\left(U_k + \gamma J^*\right) \tag{14}$$
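For a finite action set Ω, the minimization in Equations (13) and (14) reduces to a sweep over the candidate actions. The sketch below is ours: step denotes an assumed one-period integrator of the sampled dynamics (7b)-(7c), J_hat is the current cost approximation, and sliding_variable and utility come from the earlier sketches:

```python
import numpy as np

def bellman_backup(J_hat, step, x, actions, delta, gamma=0.95):
    """One application of the Bellman operator of Equation (11): sweep
    the finite action set and keep the smallest U_k + gamma*J_hat(x').
    The greedy action realizes the argmin of Equation (14)."""
    best_u, best_q = None, np.inf
    for u in actions:                 # e.g. [-10.0, +10.0] for bang-bang
        x_next = step(x, u)           # sampled dynamics (7b)-(7c)
        s = sliding_variable(x_next[0], x_next[1])
        q = utility(s, delta) + gamma * J_hat(x_next)
        if q < best_q:
            best_u, best_q = u, q
    return best_u, best_q
```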

4.2. Approximating the Action Mapping and the Critic Mapping

In the previous sections, the iteration (13) was derived to calculate J* and the optimization (14) was obtained to calculate the control law. The iteration to approach J* and the optimization to derive u* have to be run at every time step in order to obtain the most up-to-date values. Inspired by the learning strategies widely studied in artificial intelligence [26,28], a learning based strategy is used in this section to facilitate the processing. After a sufficiently long time, the system is able to memorize the mapping of J* and the mapping of u*. After this learning period, there is no need to repeat any iteration or optimal search, which makes the strategy more practical.

Note that the optimal cost J* is a function of the initial state. Counting the cost from the current time step, J* can also be regarded as a function of both the current state and the optimal action at the current time step according to Equation (10). Therefore Ĵ(n), the approximation of J*, can also be regarded as a function of the current state and the current optimal input. As to the optimal control action u*, it is a function of the current state. Our goal in this section is to obtain the mapping from the current state and the current input to Ĵ(n), and the mapping from the current state to the optimal control action u*, using parameterized models, denoted as the critic model and the action model, respectively. Therefore, we can write the critic model and the action model as Ĵ(n)(xn, un, Wc) and u(n)(xn, Wa), respectively, where Wc is the parameter vector of the critic model and Wa is the parameter vector of the action model.

In order to train the critic model with the desired input-output correspondence, we define the following error at time step n + 1 to evaluate the learning performance,

$$e_c(n+1) = \mathcal{B}\hat{J}(n) - \hat{J}(n+1), \qquad E_c(n+1) = \frac{1}{2}e_c^2(n+1) \tag{15}$$

Note that ℬĴ(n) is the desired value of Ĵ(n + 1) according to Equation (13). Using the back-propagation rule, we get the following rule for updating the weight Wc of the critic model,

$$W_c(n+1) = W_c(n) + \delta W_c(n) = W_c(n) - l_c(n)\frac{\partial E_c(n)}{\partial W_c(n)} = W_c(n) - l_c(n)\frac{\partial E_c(n)}{\partial \hat{J}(n)}\frac{\partial \hat{J}(n)}{\partial W_c(n)} \tag{16}$$
where lc(n) is the step size for the critic model at the time step n.

As to the action model, the optimal control u* in Equation (14) is the one that minimizes the cost function. Note that the minimum possible cost is zero, which corresponds to the scenario where the state stays inside the desired bounded area. In this regard, we define the action error as follows,

$$e_a(n) = \hat{J}(n), \qquad E_a(n) = \frac{1}{2}e_a^2(n) \tag{17}$$

Then, similar to the update rule of Wc for the critic model, we get the following update rule of Wa for the action model,

$$W_a(n+1) = W_a(n) - l_a(n)\frac{\partial E_a(n)}{\partial \hat{J}(n)}\frac{\partial \hat{J}(n)}{\partial u(n)}\frac{\partial u(n)}{\partial W_a(n)} \tag{18}$$
where la(n) is the step size for the action model at the time step n.

Equations (16) and (18) update the critic model and the action model progressively. After Wc and Wa have absorbed the model information through a sufficiently long learning period, their values can be fixed at those obtained at the final step and no further learning is required, in contrast to Equation (14), which requires solving an optimization problem at every step even after a long time.
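For the linearly parameterized models used in Section 5, the gradient updates (16) and (18) take a particularly simple form. The sketch below is our reading of those update rules under that linear assumption; the feature vectors and the way the Bellman target and the sensitivity ∂Ĵ/∂u are supplied are simplifications, not the authors' implementation:

```python
import numpy as np

def critic_update(Wc, feat, target, lc=0.03):
    """Gradient step of Equation (16) for a linear critic J_hat = Wc @ feat.
    `target` plays the role of B J_hat(n), the desired value of J_hat(n+1)
    from Equation (13), so e_c = target - J_hat as in Equation (15)."""
    e_c = target - Wc @ feat
    # dEc/dWc = -e_c * feat for Ec = 0.5 * e_c**2, hence the + sign below
    return Wc + lc * e_c * feat

def action_update(Wa, feat, J_hat, dJ_du, la=0.03):
    """Gradient step of Equation (18) for a linear action model u = Wa @ feat.
    Per Equation (17), e_a = J_hat: the weights are pushed so the predicted
    cost approaches zero, its smallest attainable value; dJ_du is the
    critic's sensitivity to the control input."""
    e_a = J_hat
    # chain rule of Equation (18): dEa/dWa = e_a * (dJ/du) * (du/dWa)
    return Wa - la * e_a * dJ_du * feat
```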

5. Simulation Experiment

In this section, we consider the simulation implementation of the proposed control strategy. The dynamics given in Equation (1) model a wide class of sensor-actuator systems. To demonstrate the effectiveness of the proposed self-learning variable structure method, we apply it to the stabilization of a typical benchmark system: the cart-pole system.

The cart-pole system, as sketched in Figure 1, is a widely used testbed for control strategies. The system is composed of a pendulum and a cart. The pendulum has its mass above its pivot point, which is mounted on a cart moving horizontally. In the following, we apply the proposed control method to the cart-pole system to test its effectiveness.

5.1. The Model

The cart-pole model used in this work is the same as that in [29], which can be described as follows.

$$\ddot{\theta} = \frac{g\sin\theta + \cos\theta\left[\dfrac{-F - ml\dot{\theta}^2\sin\theta + \mu_c\,\mathrm{sgn}(\dot{y})}{m_c + m}\right] - \dfrac{\mu_p\dot{\theta}}{ml}}{l\left(\dfrac{4}{3} - \dfrac{m\cos^2\theta}{m_c + m}\right)} \tag{19a}$$

$$\ddot{y} = \frac{F + ml\left[\dot{\theta}^2\sin\theta - \ddot{\theta}\cos\theta\right] - \mu_c\,\mathrm{sgn}(\dot{y})}{m_c + m} \tag{19b}$$
where

$$\mathrm{sgn}(x) = \begin{cases} 1, & \text{if } x > 0 \\ 0, & \text{if } x = 0 \\ -1, & \text{if } x < 0 \end{cases}$$
with the following values of the parameters:
  • g: 9.8 m/s2, acceleration due to gravity;

  • mc: 1.0 kg, mass of cart;

  • m: 0.1 kg, mass of pole;

  • l: 0.5 meter, half-pole length;

  • μc: 0.0005, coefficient of friction of cart on track;

  • μp: 0.000002, coefficient of friction of pole on cart;

  • F: ±10 Newtons, force applied to cart center of mass.

This system has four state variables: y is the position of the cart on the track, θ is the angle of the pole with respect to the vertical position, and ẏ and θ̇ are the cart velocity and the pole angular velocity, respectively.
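For reference, the cart-pole accelerations of Equation (19) translate into the following Python sketch (parameter values as listed above; this is our transcription, not the authors' code):

```python
import numpy as np

# Parameters from Section 5.1
G, M_C, M, L = 9.8, 1.0, 0.1, 0.5
MU_C, MU_P = 0.0005, 0.000002

def cartpole_derivs(theta, theta_dot, y_dot, F):
    """Accelerations of the cart-pole model, Equation (19)."""
    sin_t, cos_t = np.sin(theta), np.cos(theta)
    total = M_C + M
    tmp = (-F - M * L * theta_dot**2 * sin_t + MU_C * np.sign(y_dot)) / total
    theta_dd = (G * sin_t + cos_t * tmp - MU_P * theta_dot / (M * L)) / (
        L * (4.0 / 3.0 - M * cos_t**2 / total))
    y_dd = (F + M * L * (theta_dot**2 * sin_t - theta_dd * cos_t)
            - MU_C * np.sign(y_dot)) / total
    return theta_dd, y_dd
```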

Define

$$A_1(\theta) = \frac{l}{\cos\theta}\left(\frac{4}{3} - \frac{m\cos^2\theta}{m_c + m}\right), \quad A_2(\theta) = \frac{g\sin\theta}{\cos\theta}, \quad A_3(\theta,\dot{\theta}) = ml\dot{\theta}\sin\theta + \frac{\mu_p}{ml\cos\theta},$$

$$A_4(\dot{y}) = \frac{\mu_c\,\mathrm{sgn}(\dot{y})}{\dot{y}}, \quad A_5 = m_c + m, \quad A_6(\theta,\dot{\theta}) = ml\dot{\theta}\sin\theta, \quad A_7(\theta) = ml\cos\theta.$$

With these notations, Equation (19) can be rewritten as:

$$A_1\ddot{\theta} = F + A_2 + A_3\dot{\theta} + A_4\dot{y},$$

$$\frac{A_1 A_5}{A_1 + A_7}\,\ddot{y} = F + \frac{A_2 A_7}{A_1 + A_7} + \frac{A_1 A_6 + A_3 A_7}{A_1 + A_7}\,\dot{\theta} + \frac{A_1 A_4 + A_4 A_7}{A_1 + A_7}\,\dot{y} \tag{20}$$

By choosing

$$D = \begin{bmatrix} A_1 & 0 \\ 0 & \dfrac{A_1 A_5}{A_1 + A_7} \end{bmatrix}, \quad C = -\begin{bmatrix} A_3 & A_4 \\ \dfrac{A_1 A_6 + A_3 A_7}{A_1 + A_7} & \dfrac{A_1 A_4 + A_4 A_7}{A_1 + A_7} \end{bmatrix},$$

$$\phi = -\begin{bmatrix} A_2 \\ \dfrac{A_2 A_7}{A_1 + A_7} \end{bmatrix}, \quad q = \begin{bmatrix} \theta \\ y \end{bmatrix}, \quad u = \begin{bmatrix} F \\ F \end{bmatrix} \tag{21}$$
the system of Equation (19) coincides with the model of Equation (1). Note that the input u in this situation is constrained to the set Ω = {u = [u1, u2]T : u1 = u2 ∈ ℝ}.

5.2. Experiment Setup and Results

In the simulation experiment, we set the discount factor γ = 0.95, the sliding surface parameter c0 = 10, and the utility thresholds δ1 = 2, δ2 = 24. The feasible control action set Ω in Equation (7) is defined as Ω = {u = [u1, u2]T : u1 = u2 = ±10 Newtons}, which corresponds to the bang-bang control widely used in industry. To keep the output of the action model within the feasible set, the output of the action network is clamped to 10 if it is greater than or equal to zero and clamped to −10 if it is less than zero. The sampling period τ is set to 0.02 seconds. Both the critic model and the action model are linearly parameterized. The step sizes of the critic model and the action model, lc(n) and la(n), are both set to 0.03. The updates of the critic model weight Wc in Equation (16) and of the action model weight Wa in Equation (18) last for 30 seconds. For the uncontrolled cart-pole system with F = 0 in Equation (19), the pendulum falls down. The control objective is to stabilize the pendulum in the inverted direction (θ = 0). The time history of the state variables under the proposed self-learning variable structure control strategy is plotted in Figure 2. From this figure, it can be observed that θ is stabilized in a small vicinity of zero (with a small error of ±0.1 rad), which corresponds to the inverted direction.
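The following sketch assembles the pieces from the earlier code fragments (cartpole_derivs, sliding_variable, utility, critic_update, action_update) into a training loop under the setup above. It is a schematic reconstruction under our simplifying assumptions, notably linear features equal to the raw state, a critic that takes the applied force as an extra feature so that ∂Ĵ/∂u is just the corresponding weight, and explicit Euler integration:

```python
import numpy as np

gamma, tau = 0.95, 0.02
steps = int(30.0 / tau)                    # 30 s learning phase
Wc = np.zeros(5)                           # critic features: (state, force)
Wa = np.zeros(4)                           # action features: state
state = np.array([0.1, 0.0, 0.0, 0.0])     # theta, theta_dot, y, y_dot
delta = np.array([2.0, 24.0])              # delta_1, delta_2

for n in range(steps):
    F = 10.0 if Wa @ state >= 0 else -10.0           # bang-bang clamping
    feat_c = np.append(state, F)
    J_hat = Wc @ feat_c
    th_dd, y_dd = cartpole_derivs(state[0], state[1], state[3], F)
    new_state = state + tau * np.array([state[1], th_dd, state[3], y_dd])
    s = sliding_variable(new_state[[0, 2]], new_state[[1, 3]], c0=10.0)
    F_next = 10.0 if Wa @ new_state >= 0 else -10.0
    target = utility(s, delta) + gamma * (Wc @ np.append(new_state, F_next))
    Wc = critic_update(Wc, feat_c, target)           # Equation (16)
    Wa = action_update(Wa, state, J_hat, Wc[-1])     # Equation (18)
    state = new_state
```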

6. Conclusions and Future Work

In this paper, self-learning variable structure control is proposed for a class of sensor-actuator systems. The control problem is formulated from the optimal control perspective and solved via iterative methods. In contrast to existing model based methods, this method does not need prior knowledge of an accurate mathematical model. The critic model and the action model are introduced to make the method more practical. Simulations show that the control law obtained by the proposed method indeed achieves the control objective. Future work on this topic includes the theoretical proof of convergence and the exploration of the performance limits of the proposed strategy. The control of other mechanical systems modeled by Euler-Lagrange equations, such as manipulators, will also be explored in our future work.

Acknowledgments

Shuai Li would like to share with the readers the poem by Rabindranath Tagore: "The traveler has to knock at every alien door to come to his own, and one has to wander through all the outer worlds to reach the innermost shrine at the end." The authors would like to acknowledge the support of the National Natural Science Foundation of China under Grant No. 61172165 and the Guangdong Science Foundation of China under Grants No. S2011010006116 and No. 10151802904000013.

References and Notes

  1. Isermann, R. Modeling and design methodology for mechatronic systems. IEEE/ASME Trans. Mechatr. 1996, 1, 16–28. [Google Scholar]
  2. van de Panne, M.; Fiume, E. Sensor-actuator networks. Proceedings of the 20th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '93), Anaheim, CA, USA, 1–6 August 1993; pp. 335–342.
  3. Liu, B.; Chen, S.; Li, S.; Liang, Y. Intelligent control of a sensor-actuator system via kernelized least-squares policy iteration. Sensors 2012, 12, 2632–2653. [Google Scholar]
  4. de Silva, C. Sensors and Actuators: Control System Instrumentation; Taylor & Francis, CRC Press: Boca Raton, FL, USA, 2007. [Google Scholar]
  5. Beer, F.P. Vector Mechanics for Engineers: Statics and Dynamics; McGraw-Hill: New York, NY, USA, 2003. [Google Scholar]
  6. Lewis, F.L.; Dawson, D.M.; Abdallah, C.T. Manipulator Control Theory and Practice; Marcel Dekker: New York, NY, USA, 2004; Volume 15. [Google Scholar]
  7. Li, S.; Chen, S.; Liu, B.; Li, Y.; Liang, Y. Decentralized kinematic control of a class of collaborative redundant manipulators via recurrent neural networks. Neurocomputing 2012, 8, 108–121. [Google Scholar]
  8. Li, S.; Meng, M.Q.H.; Chen, W. SP-NN: A novel neural network approach for path planning. Proceedings of IEEE International Conference on Robotics and Biomimetics, Sanya, Hainan, China, 15–18 December 2007; pp. 1355–1360.
  9. Bloch, A.M. Nonholonomic Mechanics and Control; Springer-Verlag: New York, NY, USA, 2003. [Google Scholar]
  10. Yu, H.; Liu, Y.; Yang, T. Tracking control of a pendulum-driven cart-pole underactuated system. Proceedings of IEEE International Conference on Systems, Man and Cybernetics, Montreal, QC, Canada, 7–10 October 2007; pp. 2425–2430.
  11. Seifried, R. Two approaches for feedforward control and optimal design of underactuated multibody systems. Multibody Syst. Dynam. 2012, 27, 75–93. [Google Scholar]
  12. Isidori, A. Nonlinear Control Systems II; Springer-Verlag: New York, NY, USA, 1999. [Google Scholar]
  13. Primbs, J.A.; Nevistic, V.; Doyle, J.C. Nonlinear optimal control: A control lyapunov function and receding horizon perspective. Asian J. Control 2009, 1, 14–24. [Google Scholar]
  14. Ortega, R.; Loria, A.; Nicklasson, P.J.; Sira-Ramirez, H. Passivity-Based Control of Euler-Lagrange Systems; Springer-Verlag: New York, NY, USA, 1998. [Google Scholar]
  15. Azhmyakov, V. Optimal control of mechanical systems. Diff. Equat. Nonlin. Mech. 2007, 12, 3–16. [Google Scholar]
  16. Huo, W. Predictive variable structure control of nonholonomic chained systems. Int. J. Comput. Math. 2008, 85, 949–960. [Google Scholar]
  17. Dumitrascu, B.; Filipescu, A.; Minzu, V.; Filipescu, A. Backstepping control of wheeled mobile robots. Proceedings of 15th International Conference on System Theory, Control, and Computing (ICSTCC 2011), Sinaia, Romania, 14–16 October 2011; pp. 1–6.
  18. Hussein, I.I.; Bloch, A.M. Optimal control of underactuated nonholonomic mechanical systems. IEEE Trans. Autom. Control. 2005, 53. [Google Scholar] [CrossRef]
  19. Pazderski, D.; Kozłowski, K.; Krysiak, B. Nonsmooth stabilizer for three link nonholonomic manipulator using polar-like coordinate representation. In Robot Motion and Control; Kozłowski, K., Ed.; Springer: Berlin/Heidelberg, Germany, 2009. [Google Scholar]
  20. Cuesta, F.; Ollero, A.; Arrue, B.C.; Braunstingl, R. Intelligent control of nonholonomic mobile robots with fuzzy perception. Fuzzy Sets Syst. 2003, 134, 47–64. [Google Scholar]
  21. Wai, R.J.; Liu, C.M. Design of dynamic petri recurrent fuzzy neural network and its application to path-tracking control of nonholonomic mobile robot. IEEE Trans. Indust. Electr. 2009, 56, 2667–2683. [Google Scholar]
  22. Kinjo, H.; Uezato, E.; Duong, S.C.; Yamamoto, T. Neurocontroller with a genetic algorithm for nonholonomic systems: Flying robot and four-wheel vehicle examples. Artif. Life Robot. 2009, 13, 464–469. [Google Scholar]
  23. Bertsekas, D.P. Dynamic Programming and Optimal Control, 3rd ed.; Athena Scientific: Nashua, NH, USA, 2005. [Google Scholar]
  24. Murray, J.J.; Cox, C.J.; Lendaris, G.G.; Saeks, R. Adaptive dynamic programming. IEEE Trans. Syst. Man Cyber. 2002, 32, 140–153. [Google Scholar]
  25. Lewis, F.L.; Vrabie, D. Reinforcement learning and adaptive dynamic programming for feedback control. IEEE Circuits Syst. Mag. 2009, 9, 32–50. [Google Scholar]
  26. Si, J.; Barto, A.; Powell, W.; Wunsch, D. Handbook of Learning and Approximate Dynamic Programming; John Wiley and Sons: Hoboken, NJ, USA, 2004. [Google Scholar]
  27. Mayne, D.Q.; Michalska, H. Receding horizon control of nonlinear systems. IEEE Trans. Autom. Control 1990, 35, 814–824. [Google Scholar]
  28. Bishop, C.M. Pattern Recognition and Machine Learning; Springer: New York, NY, USA, 2006. [Google Scholar]
  29. Si, J.; Wang, Y.T. Online learning control by association and reinforcement. IEEE Trans. Neural Netw. 2001, 12, 264–276. [Google Scholar]
Figure 1. The cart-pole system.
Figure 2. State profiles of the cart-pole system with the proposed control strategy.
