Article

The Time-Optimal Control Problem of a Kind of Petrowsky System

Dongsheng Luo, Wei Wei, Hongyong Deng and Yumei Liao
1 School of Mathematics and Statistics, Guizhou University, Guiyang 550025, China
2 School of Mathematics Science, Zunyi Normal University, Zunyi 563006, China
3 School of Mathematics, Guizhou Minzu University, Guiyang 550025, China
* Author to whom correspondence should be addressed.
Mathematics 2019, 7(4), 311; https://doi.org/10.3390/math7040311
Submission received: 27 February 2019 / Revised: 18 March 2019 / Accepted: 21 March 2019 / Published: 28 March 2019

Abstract

In this paper, we consider the time-optimal control problem for a kind of Petrowsky system and its bang-bang property. To solve this problem, we first construct an auxiliary control problem whose null controllability is equivalent to the controllability of the time-optimal control problem of the Petrowsky system, and we give a necessary condition for this null controllability. We then show the existence of a time-optimal control of the Petrowsky system through minimizing sequences, since the null controllability of the constructed control problem is equivalent to the controllability of the time-optimal control problem of the Petrowsky system. Finally, using the null controllability, we obtain the bang-bang property of the time-optimal control of the Petrowsky system by contradiction; moreover, we show that the time-optimal control acts on a subset of the boundary of the vibration system.

1. Introduction

In physics, Petrowsky systems are usually used to describe phenomena related to the vibration of elastic beams or perches. Vibration phenomena are very common in a large number of engineering problems, so they are widely studied in practice [1,2,3,4,5,6,7,8,9,10]. Naturally, the optimal control problems of Petrowsky systems have also been studied, and several approaches have been proposed for the optimal control of vibration systems [11,12,13,14,15,16,17,18,19,20,21,22,23]. In [11,12,13], the methods of multipliers and operator semigroups are applied to solve the optimal control problems of some general Petrowsky systems. In [14,15,16], a functional analysis approach is used to obtain necessary conditions for the controllability and approximate controllability of vibration systems. Sometimes the approximate controllability of a vibration system can also be established through the convergence of a sequence of standard control problems, as in [17]. In [18], the optimal control of discretely connected parallel vibrating beams is discussed by means of a Galerkin method coupled with parameterization. Moreover, wavelet approaches can be used to solve the optimal control problems of some special kinds of vibration systems, as in [19,20,21,22,23]. However, the time-optimal control problems of vibration equations have not been addressed in the literature we have reviewed. In practical engineering problems, we often need to control a certain vibration system with an external force to keep it in a given equilibrium state, which raises a new question: how can the oscillating system be driven to the equilibrium state as quickly as possible? This is a time-optimal control problem for vibration systems. In practice, the time-optimal control problem is an important control problem: it studies how to reach a given target in the shortest time, that is, it provides the strategy that solves the problem as quickly as possible.
In this paper, we investigate the time-optimal control problem of a kind of Petrowsky system describing the vibration of an elastic beam that joins two parts of a building, where the elastic beam is used to reduce the damage caused by vibration to the building. Based on the actual situation, the elastic beam is allowed to oscillate to a certain extent, since the beam material has a certain seismic resistance. The joints of the elastic beam must stay fixed, otherwise the beam will break and can no longer protect the building. With this in mind, we construct the model for the time-optimal control problem of this Petrowsky system. First, according to the vibration of the elastic beam, we model the controlled system as
$$
\begin{cases}
\dfrac{d^2}{dt^2}y(x,t)+\Delta^2 y(x,t)=\chi_\omega u(x,t), & (x,t)\in\Omega\times\mathbb{R}^+,\\
y(x,t)=\Delta y(x,t)=0, & (x,t)\in\partial\Omega\times\mathbb{R}^+,\\
y(\cdot,0)=y^0, & x\in\Omega,\\
y_t(x,0)=y^1, & x\in\Omega,
\end{cases}
\tag{1}
$$
where $\Omega\subset\mathbb{R}^N$ is a bounded, simply connected open set with smooth boundary $\partial\Omega$, and $(y^0,y^1)\in H_0^2(\Omega)\times L^2(\Omega)$. Here $y\in C(0,+\infty;H_0^2(\Omega))$ is the state variable, $u\in L^2(0,+\infty;\omega)$ is the control variable with $\operatorname{supp}u\subset\omega$, and $\omega\subset\Omega$ is a nonempty open subset on which the control acts. Second, according to the properties of the material of the elastic beam, the target state $y_T\in C(0,+\infty;H_0^2(\Omega))$ satisfies
$$
\begin{cases}
\dfrac{d^2}{dt^2}y_T(x,t)+\Delta^2 y_T(x,t)=0, & (x,t)\in\Omega\times\mathbb{R}^+,\\
y_T(x,t)=0, & (x,t)\in\partial\Omega\times\mathbb{R}^+,\\
y_T(\cdot,0)=0, & x\in\Omega,\\
(y_T)_t(x,0)=y_T^1, & x\in\Omega,
\end{cases}
\tag{2}
$$
where $y_T(x,t)=0$ for $(x,t)\in\partial\Omega\times\mathbb{R}^+$ means that the boundary of the joint stays fixed throughout the vibration process, $y_T(\cdot,0)=0$ for $x\in\Omega$ describes the elastic beam at rest before it is acted on by an outside force, and $y_T^1$ is the permissible degree of vibration. Thus $(y_T)_t(x,0)=y_T^1$, $x\in\Omega$, indicates that the elastic beam should be controlled so that it does not vibrate beyond the permissible degree given by $y_T^1$. In practice, the outside control is limited, so we denote the admissible control set by $U=\{u\in L^2(0,+\infty;\omega):\ \|u\|_{L^2(0,+\infty;\omega)}\le M\}$, where $M>0$ is a fixed constant, the maximum external force that can be provided by the equipment. The control $u$ is the outside force acting on the elastic beam, used to keep the vibration from damaging the building. In construction, engineers usually control the damage caused by vibration by installing elastic beams whose vibration can be kept below a target state through a bounded outside force. The time-optimal control problem of this kind of Petrowsky system is therefore: find $u\in U$ such that $y(\cdot;y^0,y^1,u)=y_T$ in the shortest time, namely,
$$
T^*=\inf\{\,T \mid y(T;y^0,y^1,u)=y_T(T),\ u\in U\,\},
$$
where $y$ is the solution of (1) corresponding to $u$. Here $T^*$ is called the optimal time, and the corresponding $u^*$ is called the time-optimal control of system (1).
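As a concrete illustration of the controlled system (1), the following sketch simulates a one-dimensional toy analogue (a hinged beam on the interval $(0,1)$) by expanding the deflection in the sine modal basis, so that each modal coefficient satisfies a forced oscillator equation. The truncation level, the control region $\omega=(0.6,0.9)$, the control profile $g$, and the initial data are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch (not from the paper): a 1D analogue of system (1),
#   y_tt + y_xxxx = chi_omega * u   on (0,1),  y = y_xx = 0 at x = 0, 1,
# expanded in the hinged-beam modes e_k(x) = sqrt(2) sin(k*pi*x).
# Each modal coefficient a_k satisfies a_k'' + (k*pi)^4 a_k = u_k(t).
import numpy as np
from scipy.integrate import solve_ivp

K = 4                                   # number of retained modes (assumption)
a_w, b_w = 0.6, 0.9                     # control region omega = (a_w, b_w) (assumption)
mu = np.array([(k * np.pi) ** 4 for k in range(1, K + 1)])   # Delta^2 eigenvalues

def e(k, x):                            # hinged-beam eigenfunctions
    return np.sqrt(2.0) * np.sin(k * np.pi * x)

# Modal weights c_k = int_omega e_k(x) dx for a control of the form u(x,t) = g(t) on omega
c = np.array([np.sqrt(2.0) * (np.cos(k * np.pi * a_w) - np.cos(k * np.pi * b_w)) / (k * np.pi)
              for k in range(1, K + 1)])

def g(t):                               # illustrative time profile of the control
    return np.sin(5.0 * t)

def rhs(t, s):                          # s = (a_1..a_K, a_1'..a_K')
    a, v = s[:K], s[K:]
    return np.concatenate([v, -mu * a + c * g(t)])

a0 = np.array([1.0, 0.5, 0.0, 0.0])     # modal coefficients of y^0 (assumption)
v0 = np.zeros(K)                        # modal coefficients of y^1
sol = solve_ivp(rhs, (0.0, 1.0), np.concatenate([a0, v0]),
                t_eval=np.linspace(0.0, 1.0, 201), rtol=1e-9, atol=1e-12)

# Deflection at the midpoint x = 0.5 over time
y_mid = sum(sol.y[k] * e(k + 1, 0.5) for k in range(K))
print("max |y(0.5, t)| on [0, 1]:", float(np.abs(y_mid).max()))
```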
From the model of the time-optimal control of the Petrowsky system, we see that the target state and the controlled system have a similar structure. By the linearity of the systems, we can therefore construct an appropriate control problem such that the controllability of the time-optimal control problem of the Petrowsky system is equivalent to the null controllability of the constructed control problem. The null controllability of an optimal control system can mainly be obtained through the extreme value of a functional, as in [13,14], or through a Carleman inequality for the control system, as in [23,24,25,26,27,28,29].
In this paper, because of the operator $\Delta^2$, a Carleman inequality for the constructed control system cannot be obtained as easily as for control systems involving the operator $\Delta$, but its null controllability can be characterized as the extreme value of a functional defined by an integral of the solution of the constructed control system. Therefore, in Section 2 below, we do not solve the control problem of system (1) directly; instead, we solve it through a constructed control problem whose null controllability is equivalent to the controllability of the time-optimal control problem of system (1). We first construct this control problem and obtain a necessary and sufficient condition for its null controllability, and then use the null controllability to establish the existence of the time-optimal control of system (1) and its bang-bang property.
This paper is organized as follows. In Section 2, we first construct the control problem that is used to solve the time-optimal control problem of the Petrowsky system via its null controllability, and then state the necessary condition for this null controllability. In Section 3, we show the existence of the time-optimal control through the null controllability of the constructed control problem. In the last section, using the null controllability, we prove the bang-bang property of the time-optimal control of the Petrowsky system by contradiction, and finally show that the time-optimal control acts only on the boundary of the elastic beam.

2. The Conditions for the Null Controllability of the Constructed Control Problem

In this section, in order to obtain the existence of the time-optimal control of the Petrowsky system, we first transform it into another control problem whose null controllability is equivalent to the controllability of the time-optimal control problem of system (1). The existence of the time-optimal control of the Petrowsky system (1) can then be obtained from a minimizing sequence for the null controllability of the constructed control problem. Next, we give a necessary and sufficient condition for the null controllability of the constructed control problem. Finally, since the null controllability can be regarded as the extremum of a functional F defined by an integral associated with the constructed control system, we obtain a formula for the control from the null controllability, which will later be used to express the time-optimal control $u^*$ of the Petrowsky system.
Assume that y is the solution of system (1) and let $z=y-y_T$. From systems (1) and (2) above, we know that z satisfies
$$
\begin{cases}
\dfrac{d^2}{dt^2}z(x,t)+\Delta^2 z(x,t)=\chi_\omega u(x,t), & (x,t)\in\Omega\times\mathbb{R}^+,\\
z(x,t)=\Delta z(x,t)=0, & (x,t)\in\partial\Omega\times\mathbb{R}^+,\\
z(\cdot,0)=z^0=y^0, & x\in\Omega,\\
z_t(x,0)=z^1=y^1-y_T^1, & x\in\Omega.
\end{cases}
\tag{3}
$$
According to the systems above, the controllability of the time-optimal control problem of system (1) is equivalent to the null controllability of the control system (3). So the time-optimal control of system (1) is obtained through minimizing sequences for the control problem of system (3) via its null controllability. Namely, we seek a control $u\in U$ that drives the state z to zero at a time T that is as small as possible. The optimal time $T^*$ is defined by
$$
T^*=\inf\{\,T \mid z(T)=0,\ u\in U\,\},
$$
where z is the solution of the control system (3). Here $T^*$ is called the optimal time, and the corresponding control is called the time-optimal control $u^*$ of the control system (1).
From the above, we know that the null controllability of the control system (3) is the key to the time-optimal control of the Petrowsky system. We therefore first give a necessary and sufficient condition for the null controllability of the control system (3) through a functional. For $(\psi^0,\psi^1)\in L^2(\Omega)\times H^{-2}(\Omega)$ and $(z^0,z^1)\in H_0^2(\Omega)\times L^2(\Omega)$, denoting by $\langle\cdot,\cdot\rangle_{-2,2}$ the duality pairing between $H^{-2}(\Omega)$ and $H_0^2(\Omega)$, we define
$$
\langle(\psi^0,\psi^1),(z^0,z^1)\rangle=\langle\psi^1,z^0\rangle_{-2,2}-\int_\Omega\psi^0 z^1\,dx.
\tag{4}
$$
Consider the system
$$
\begin{cases}
\dfrac{d^2}{dt^2}\psi(x,t)+\Delta^2\psi(x,t)=0, & (x,t)\in\Omega\times\mathbb{R}^+,\\
\psi(x,t)=\Delta\psi(x,t)=0, & (x,t)\in\partial\Omega\times\mathbb{R}^+,\\
\psi(\cdot,T)=\psi_T^0, & x\in\Omega,\\
\psi_t(x,T)=\psi_T^1, & x\in\Omega.
\end{cases}
\tag{5}
$$
If $(\psi(t),\psi_t(t))\in L^2(\Omega)\times H^{-2}(\Omega)$ is the unique weak solution of system (5), we then have the following result.
Theorem 1.
For system (3), there exist some $u\in C_0(0,T;\omega)$ and some time T such that $z(T)=0$ if and only if the solution ψ of (5) satisfies
$$
\int_0^T\!\!\int_\omega\psi u\,dx\,dt=\langle\psi_t^0,\,y^0-y_T^0\rangle_{-2,2}-\int_\Omega\psi^0\,(y^1-y_T^1)\,dx.
\tag{6}
$$
Proof. 
Multiplying the first equation of system (3) by ψ and integrating by parts, we obtain
$$
\begin{aligned}
\int_0^T\!\!\int_\omega\psi u\,dx\,dt
&=\int_0^T\!\!\int_\Omega\psi\,(z_{tt}+\Delta^2 z)\,dx\,dt\\
&=\int_\Omega[\psi z_t-\psi_t z]\big|_0^T\,dx+\int_0^T\!\!\int_\Omega z\,(\psi_{tt}+\Delta^2\psi)\,dx\,dt\\
&=\int_\Omega[\psi_T^0 z_t(T)-\psi_T^1 z(T)]\,dx-\int_\Omega[\psi^0 z^1-\psi_t^0 z^0]\,dx.
\end{aligned}
\tag{7}
$$
From the above and the definition of $\langle\cdot,\cdot\rangle_{-2,2}$, we obtain
$$
\int_0^T\!\!\int_\omega\psi u\,dx\,dt=-\langle\psi_T^1,z(T)\rangle_{-2,2}+\int_\Omega\psi_T^0 z_t(T)\,dx+\langle\psi_t(0),z^0\rangle_{-2,2}-\int_\Omega\psi(0)z^1\,dx.
$$
Therefore, $z(T)=0$ if and only if
$$
\int_0^T\!\!\int_\omega\psi u\,dx\,dt=\langle\psi_t(0),z^0\rangle_{-2,2}-\int_\Omega\psi(0)\,z^1\,dx.
$$
Replacing $(z^0,z^1)$ with $(y^0-y_T^0,\,y^1-y_T^1)$, we then have
$$
\int_0^T\!\!\int_\omega\psi u\,dx\,dt=\langle\psi_t^0,\,y^0-y_T^0\rangle_{-2,2}-\int_\Omega\psi^0\,(y^1-y_T^1)\,dx.
$$
This completes the proof. □
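As a sanity check on the integration-by-parts identity in (7), the following sketch verifies it numerically for a one-dimensional toy analogue (a hinged beam on $(0,1)$, sine modal basis), with generic data and without imposing $z(T)=0$. The truncation level, the region ω, the control profile, and the chosen data are illustrative assumptions; within the modal truncation the two sides should agree up to integration error.

```python
# Minimal sketch (not from the paper): numerical check of the duality identity (7)
# for a 1D toy model, z_tt + z_xxxx = chi_omega*u and psi_tt + psi_xxxx = 0 on (0,1)
# with hinged ends.  The modes e_k(x) = sqrt(2) sin(k*pi*x) are orthonormal, so the
# spatial integrals over Omega reduce to sums of modal coefficients.
import numpy as np
from scipy.integrate import solve_ivp, simpson

K, T = 3, 0.5                                  # truncation level and horizon (assumptions)
a_w, b_w = 0.3, 0.8                            # control region omega (assumption)
mu = np.array([(k * np.pi) ** 4 for k in range(1, K + 1)])
c = np.array([np.sqrt(2.0) * (np.cos(k * np.pi * a_w) - np.cos(k * np.pi * b_w)) / (k * np.pi)
              for k in range(1, K + 1)])       # c_k = int_omega e_k dx
g = lambda t: np.cos(3.0 * t)                  # control u(x, t) = g(t) on omega (assumption)

def z_rhs(t, s):                               # modal z-system, s = (a, a')
    a, v = s[:K], s[K:]
    return np.concatenate([v, -mu * a + c * g(t)])

def psi_rhs(t, s):                             # homogeneous modal psi-system
    p, q = s[:K], s[K:]
    return np.concatenate([q, -mu * p])

z0, z1 = np.array([1.0, 0.2, -0.3]), np.array([0.0, 0.5, 0.1])   # (z^0, z^1) coefficients
p0, p1 = np.array([0.4, -1.0, 0.2]), np.array([0.3, 0.0, -0.7])  # (psi(0), psi_t(0)) coefficients

t = np.linspace(0.0, T, 4001)
opts = dict(t_eval=t, rtol=1e-10, atol=1e-12)
Z = solve_ivp(z_rhs, (0.0, T), np.concatenate([z0, z1]), **opts).y
P = solve_ivp(psi_rhs, (0.0, T), np.concatenate([p0, p1]), **opts).y

# Left-hand side: int_0^T int_omega psi*u dx dt = int_0^T g(t) * sum_k c_k p_k(t) dt
lhs = simpson(g(t) * (c @ P[:K]), x=t)

# Right-hand side: [ int_Omega (psi z_t - psi_t z) dx ]_0^T, from modal coefficients
bracket = lambda i: P[:K, i] @ Z[K:, i] - P[K:, i] @ Z[:K, i]
rhs = bracket(-1) - bracket(0)
print("LHS =", lhs, " RHS =", rhs, " |LHS - RHS| =", abs(lhs - rhs))
```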
Besides the necessary and sufficient condition for the null controllability of system (3), we can also obtain a formula for the controls of the control system (3) that drive the state z to zero at some time T. It is well known that, from another perspective, the null controllability of a control system can be regarded as an extreme value of a functional defined in terms of the weak solution of the control system. So we can obtain the formula for the control from the extremum condition of the functional, where the functional is determined by the control system. Before defining the functional F for the control system (3), we consider the system
$$
\begin{cases}
\dfrac{d^2}{dt^2}\psi(x,t)+\Delta^2\psi(x,t)=0, & (x,t)\in\Omega\times\mathbb{R}^+,\\
\psi(x,t)=\Delta\psi(x,t)=0, & (x,t)\in\partial\Omega\times\mathbb{R}^+,\\
\psi(\cdot,0)=\psi^0, & x\in\Omega,\\
\psi_t(x,0)=\psi^1, & x\in\Omega,
\end{cases}
\tag{10}
$$
where $(\psi(t),\psi_t(t))\in L^2(\Omega)\times H^{-2}(\Omega)$ denotes the unique solution of (10). From [9], we have
$$
\|\psi\|_{L^\infty(0,T;L^2(\Omega))}^2+\|\psi_t\|_{L^\infty(0,T;H^{-2}(\Omega))}^2\le\|(\psi^0,\psi^1)\|_{L^2(\Omega)\times H^{-2}(\Omega)}^2.
\tag{11}
$$
Since the operator $\Delta^2$ generates an isometric group on the space $H_0^2(\Omega)\times L^2(\Omega)$, by the regularity of the equation, condition (6) can be expressed as follows.
Corollary 1.
For any initial state $(z^0,z^1)\in H_0^2(\Omega)\times L^2(\Omega)$, under a control $u\in U$, the state z of system (3) can reach zero at some time T if and only if
$$
\int_0^T\!\!\int_\omega\psi u\,dx\,dt=\langle(\psi^0,\psi^1),(z^0,z^1)\rangle
\tag{12}
$$
holds, where ψ is the solution of system (10).
In fact, condition (12) is a necessary condition for the extremum of the functional F defined by
$$
F:L^2(\Omega)\times H^{-2}(\Omega)\to\mathbb{R},\qquad
F(\psi^0,\psi^1)=\frac12\int_0^T\!\!\int_\omega|\psi|^2\,dx\,dt-\langle(\psi^0,\psi^1),(z^0,z^1)\rangle,
$$
where ψ is the solution of (10) with the initial data $(\psi^0,\psi^1)$. To show that condition (12) is a necessary condition for the extremum of the functional F, we first present a lemma which guarantees the existence of an extremum for a functional.
Lemma 1.
Let H be a reflexive Banach space and K a closed convex subset of H. If the mapping $h:K\to\mathbb{R}$ has the following properties:
(i) h is strictly convex;
(ii) h is lower semi-continuous;
(iii) $\lim_{\|x\|\to\infty}h(x)=+\infty$;
then there exists a unique $x_0\in K$ such that
$$
h(x_0)=\min_{x\in K}h(x).
$$
From the lemma above and the observability inequality for the dual system (10), as in [10], we know that there exists a constant $C>0$ such that, for all $(\psi^0,\psi^1)\in L^2(\Omega)\times H^{-2}(\Omega)$, the inequality
$$
\int_0^T\!\!\int_\omega|\psi|^2\,dx\,dt\ge C\,\|(\psi^0,\psi^1)\|_{L^2(\Omega)\times H^{-2}(\Omega)}^2
\tag{14}
$$
holds. We then have the following result about the functional F.
Theorem 2.
For any initial state $(z^0,z^1)$, if the observability inequality (14) holds, then the functional F has a unique minimizer $(\hat\psi^0,\hat\psi^1)\in L^2(\Omega)\times H^{-2}(\Omega)$.
Proof. 
(i) For any $(\phi^0,\phi^1)\in L^2(\Omega)\times H^{-2}(\Omega)$ and $\lambda\in(0,1)$, we have
$$
F(\lambda(\psi^0,\psi^1)+(1-\lambda)(\phi^0,\phi^1))=\lambda F(\psi^0,\psi^1)+(1-\lambda)F(\phi^0,\phi^1)-\frac{\lambda(1-\lambda)}{2}\int_0^T\!\!\int_\omega|\psi-\phi|^2\,dx\,dt.
$$
By the observability inequality, we have
$$
\int_0^T\!\!\int_\omega|\psi-\phi|^2\,dx\,dt\ge C\,\|(\psi^0,\psi^1)-(\phi^0,\phi^1)\|_{L^2(\Omega)\times H^{-2}(\Omega)}^2,
$$
therefore, for any $(\psi^0,\psi^1)\neq(\phi^0,\phi^1)$,
$$
F(\lambda(\psi^0,\psi^1)+(1-\lambda)(\phi^0,\phi^1))<\lambda F(\psi^0,\psi^1)+(1-\lambda)F(\phi^0,\phi^1),
$$
so F is strictly convex.
(ii) It is obvious that F is continuous, so it is lower semi-continuous.
(iii) In fact,
$$
\begin{aligned}
F(\psi^0,\psi^1)&\ge\frac12\int_0^T\!\!\int_\omega|\psi|^2\,dx\,dt-\|(\psi^0,\psi^1)\|_{L^2(\Omega)\times H^{-2}(\Omega)}\,\|(z^0,z^1)\|_{H_0^2(\Omega)\times L^2(\Omega)}\\
&\ge\frac{C}{2}\|(\psi^0,\psi^1)\|_{L^2(\Omega)\times H^{-2}(\Omega)}^2-\|(\psi^0,\psi^1)\|_{L^2(\Omega)\times H^{-2}(\Omega)}\,\|(z^0,z^1)\|_{H_0^2(\Omega)\times L^2(\Omega)},
\end{aligned}
$$
therefore,
$$
\lim_{\|(\psi^0,\psi^1)\|_{L^2(\Omega)\times H^{-2}(\Omega)}\to\infty}F(\psi^0,\psi^1)=+\infty.
$$
From (i)-(iii) and Lemma 1, F has a unique minimizer $(\hat\psi^0,\hat\psi^1)\in L^2(\Omega)\times H^{-2}(\Omega)$. □
According to the results above, we can obtain a formula for the control u of the constructed control problem through the unique minimizer of the functional F.
Theorem 3.
For any $(z^0,z^1)\in H_0^2(\Omega)\times L^2(\Omega)$, if $(\hat\psi^0,\hat\psi^1)\in L^2(\Omega)\times H^{-2}(\Omega)$ is the unique minimizer of the functional F and $\hat\psi$ is the solution of (10) with the initial data $(\hat\psi^0,\hat\psi^1)$, then the control $u=\chi_\omega\hat\psi$ is one of the controls that drive $z^0$ to zero at some time $T>0$, i.e., $z(T)=0$.
Proof. 
Since the functional F attains its unique minimum at $(\hat\psi^0,\hat\psi^1)$, for any $(\psi^0,\psi^1)\in L^2(\Omega)\times H^{-2}(\Omega)$ we have
$$
\lim_{\theta\to0}\frac1\theta\big[F((\hat\psi^0,\hat\psi^1)+\theta(\psi^0,\psi^1))-F(\hat\psi^0,\hat\psi^1)\big]
=\int_0^T\!\!\int_\omega\hat\psi\,\psi\,dx\,dt-\langle\psi^1,z^0\rangle_{-2,2}+\int_\Omega\psi^0 z^1\,dx=0.
$$
Then we obtain
$$
\int_0^T\!\!\int_\omega\hat\psi\,\psi\,dx\,dt=\langle\psi^1,z^0\rangle_{-2,2}-\int_\Omega\psi^0 z^1\,dx.
$$
Combining (4) with $z^0=y^0-y_T^0$ and $z^1=y^1-y_T^1$, we have
$$
\langle(\psi^0,\psi^1),(z^0,z^1)\rangle=\langle\psi^1,z^0\rangle_{-2,2}-\int_\Omega\psi^0 z^1\,dx.
$$
According to Corollary 1 and the above, we obtain
$$
\int_0^T\!\!\int_\omega\hat\psi\,\psi\,dx\,dt=\int_0^T\!\!\int_\omega\psi\,u\,dx\,dt.
$$
Therefore, $u=\chi_\omega\hat\psi$ is a null control for the control system (3), and this completes the proof. □
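To make Theorem 3 concrete, the following sketch computes such a control for a finite modal truncation of the one-dimensional toy model used above. Rather than minimizing F directly, it uses the finite-dimensional controllability Gramian formula for the minimum-norm null control, which in finite dimensions coincides with the HUM-type control $u=\chi_\omega\hat\psi$ obtained from the minimizer of the analogue of F. The truncation level, horizon, control region, and initial data are illustrative assumptions.

```python
# Minimal sketch (not from the paper): a null control for a modal truncation of
# system (3), z_tt + z_xxxx = chi_omega * u on (0,1) with hinged ends, written as a
# finite-dimensional system X' = A X + B v.  The minimum-L2-norm control driving
# X(0) = X0 to X(T) = 0 is  v(s) = -B^T exp(A^T (T-s)) W(T)^{-1} exp(A T) X0,
# where W(T) is the controllability Gramian; in finite dimensions this coincides
# with the HUM control obtained by minimizing the analogue of the functional F.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, trapezoid

K, T = 2, 1.0                          # number of modes and horizon (assumptions)
a_w, b_w = 0.3, 0.8                    # control region omega (assumption)
D = np.diag([(k * np.pi) ** 4 for k in range(1, K + 1)])     # Delta^2 eigenvalues

# B injects controls of the form u = chi_omega * (combination of retained modes):
# its lower block has entries int_omega e_i(x) e_j(x) dx
xs = np.linspace(a_w, b_w, 2001)
E = np.array([np.sqrt(2.0) * np.sin((k + 1) * np.pi * xs) for k in range(K)])
M_om = np.array([[trapezoid(E[i] * E[j], xs) for j in range(K)] for i in range(K)])
A = np.block([[np.zeros((K, K)), np.eye(K)], [-D, np.zeros((K, K))]])
B = np.vstack([np.zeros((K, K)), M_om])

def gramian(T, n=200):                 # W(T) via Gauss-Legendre quadrature
    xg, wg = np.polynomial.legendre.leggauss(n)
    s, w = 0.5 * T * (xg + 1.0), 0.5 * T * wg
    W = np.zeros((2 * K, 2 * K))
    for si, wi in zip(s, w):
        Phi = expm(A * (T - si)) @ B
        W += wi * (Phi @ Phi.T)
    return W

X0 = np.array([1.0, -0.4, 0.0, 0.3])   # modal coefficients of (z^0, z^1) (assumption)
q = np.linalg.solve(gramian(T), expm(A * T) @ X0)
v = lambda s: -B.T @ expm(A.T * (T - s)) @ q    # modal coefficients of the control

# Independent check: integrate the controlled system and look at the final state
sol = solve_ivp(lambda s, X: A @ X + B @ v(s), (0.0, T), X0, rtol=1e-10, atol=1e-12)
print("|X(T)| =", float(np.linalg.norm(sol.y[:, -1])), "(should be close to zero)")
```

The design choice here is pragmatic: on the truncated system the minimizer of the quadratic functional can be written in closed form through the Gramian, which avoids an iterative optimization loop.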
In Section 3, we will see that if there is a control u which drives the state of the control system (3) to zero, then it also drives the state of the control system (1) to the target. Then, by the boundedness of ψ from (11) and the observability inequality, we obtain through minimizing sequences that the time-optimal control $u^*$ of system (3) is also the time-optimal control of the Petrowsky system (1). In this way, we also obtain a formula for the optimal control $u^*$ of the time-optimal control problem of this kind of Petrowsky system.

3. Existence of the Time-Optimal Control

In this section, we show the existence of the time-optimal control of the Petrowsky system (1) through the null controllability of the control system (3), in a similar way to [9,10,11]. First, under the necessary and sufficient condition, using the null controllability of the control system (3), we show that the admissible control set U of the control system (1) is nonempty, i.e., $U\neq\emptyset$, where $\emptyset$ is the empty set. Then we obtain the existence of a time-optimal control for the control system (3) by minimizing sequences, where its state is steered to zero. Finally, we obtain the existence of the time-optimal control of the Petrowsky system (1) from the equivalence between the null controllability of the control system (3) and the controllability of the control system (1).
Since the state y of the time-optimal control system (1) reaches the target $y_T$ exactly when the state z of the control system (3) reaches zero, we have the following result.
Theorem 4.
Under condition (6), for any $(y^0,y^1)\in H_0^2(\Omega)\times L^2(\Omega)$, there is a control $u\in U$ and some time T such that $y(T)=y_T(T)$, where y is the solution of system (1) and $y_T(T)$ is the target state at time T.
Proof. 
Under condition (6), from Section 2 we know that there is a control $u\in U$ such that $z(T)=0$, where z is the solution of (3) with the initial data $(z^0,z^1)$ corresponding to the control u. Combining systems (2) and (3), we know $z=y-y_T$. Then, for the corresponding initial data $(y^0,y^1)$, $(y_T^0,y_T^1)$ and a control u, we get $y(T)-y_T(T)=0$. This means that there is a control $u\in U$ which leads y to the target $y_T$ at some time T, namely, $y(T)=y_T(T)$. This completes the proof. □
From the result of Theorem 4, we know $U\neq\emptyset$. If there are a unique control u and time T such that $y(T)=y_T(T)$, then $T=T^*$ is the optimal time of the time-optimal control system (1). Otherwise, there are several controls $u_n$ and corresponding times $T_n$ such that $y(T_n)=y_T(T_n)$, i.e., $z(T_n)=0$. By the boundedness of u and T, there are subsequences $u_{n_k}$ and $T_{n_k}$ such that $z(T_{n_k};u_{n_k})=0$, so we can define the optimal time $T^*$ as follows:
$$
T^*=\inf\{\,T_{n_k}\mid z(T_{n_k})=0,\ u_{n_k}\in U\,\},
$$
where $z_{n_k}=z(\cdot\,;u_{n_k})$ are the solutions of system (3) corresponding to $u_{n_k}$. We therefore have the following result about the optimal time of the control system (1).
Theorem 5.
Under condition (6), for any $(y^0,y^1)\in H_0^2(\Omega)\times L^2(\Omega)$, there exist an optimal control $u^*\in U$ and an optimal time $T^*$ such that $y(T^*)=y_T(T^*)$, where y is the solution of the Petrowsky system (1).
Proof. 
To prove the result, we consider system (3) and show the existence of the time-optimal control of system (1) through the null controllability of system (3). For system (3), the weak solution $(z,z_t)\in C(0,T;H_0^2(\Omega)\times L^2(\Omega))$ can be represented by
$$
(z,z_t)=S(t)(z^0,z^1)+\int_0^t S(t-\tau)\,\chi_\omega u(\tau)\,d\tau,
$$
where $S(t)$ is the $C_0$-semigroup generated by the operator $-\Delta^2$ on the space $H_0^2(\Omega)\times L^2(\Omega)$. If $u\in W^{2,1}(0,T;L^2(\Omega))$, then there is a unique strong solution of system (3) with
$$
(z,z_t)\in\big(C^1(0,T;H_0^2(\Omega)\times L^2(\Omega)),\ C(0,T;H_0^2(\Omega)\times L^2(\Omega))\big).
$$
From the proof of Theorem 4, we have
$$
(z_{n_k},(z_{n_k})_t)=S(t)(z^0,z^1)+\int_0^t S(t-\tau)\,\chi_\omega u_{n_k}(\tau)\,d\tau.
$$
By the Lebesgue dominated convergence theorem and the convergence of the bounded sequence $\{u_{n_k}\}$, we have
$$
(z,z_t)=\lim_{k\to\infty}(z_{n_k},(z_{n_k})_t)=\lim_{k\to\infty}\Big\{S(t)(z^0,z^1)+\int_0^t S(t-\tau)\,\chi_\omega u_{n_k}(\tau)\,d\tau\Big\}=S(t)(z^0,z^1)+\int_0^t S(t-\tau)\,\chi_\omega u(\tau)\,d\tau.
$$
Setting $z_{n_k}=y_{n_k}-y_T$, $z_{n_k}^0=y^0-y_T^0$, $z_{n_k}^1=y^1-y_T^1$, we then have
$$
(y^*-y_T,\,y_t^*-(y_T)_t)=\lim_{k\to\infty}(y_{n_k}-y_T,\,(y_{n_k})_t-(y_T)_t)=\lim_{k\to\infty}\Big\{S(t)(y^0-y_T^0,\,y^1-y_T^1)+\int_0^t S(t-\tau)\,\chi_\omega u_{n_k}(\tau)\,d\tau\Big\}=S(t)(y^0-y_T^0,\,y^1-y_T^1)+\int_0^t S(t-\tau)\,\chi_\omega u^*(\tau)\,d\tau.
$$
From Theorem 1 and the boundedness of u, there are $u_{n_k}\in U$ and $T_{n_k}$ such that $z_{n_k}(T_{n_k})=0$; therefore,
$$
\lim_{k\to\infty}z_{n_k}(T_{n_k})=z(T^*)=y(T^*)-y_T(T^*)=0,\quad\text{i.e.,}\quad y(T^*)=y_T(T^*),
$$
and $\lim_{k\to\infty}T_{n_k}=T^*=\inf\{\,T\mid z(T)=0,\ u\in U\,\}$.
Furthermore, according to Theorem 3 and the boundedness of ψ from (11), we obtain the formula of the time-optimal control
$$
u^*=\lim_{k\to\infty}\chi_\omega\hat\psi_{n_k}=\chi_\omega\hat\psi,
$$
where $\hat\psi_{n_k}$ is the solution of (10) with the initial value $(\hat\psi_{n_k}^0,\hat\psi_{n_k}^1)$, and $(\hat\psi_{n_k}^0,\hat\psi_{n_k}^1)$ is the unique minimizer of the functional F associated with the initial value $(y^0,y^1)$ for system (1). This completes the proof. □
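The minimizing-sequence argument above can be illustrated numerically on the finite modal toy model from the previous sketches. Because a null control reaching zero at time T can be extended by zero, the minimal $L^2$-norm of a null control is nonincreasing in the horizon T; for the norm bound M defining the admissible set U, the optimal time is then (in this toy setting) the smallest horizon at which that minimal norm drops to M, and the sketch below locates it by bisection. The bound M, the initial data, and all parameter values are illustrative assumptions, and the Gramian-based helper mirrors the previous sketch.

```python
# Minimal sketch (not from the paper): approximating the optimal time T* for the 1D
# modal toy model.  min_norm(T) is the smallest L2(0,T)-norm of a control steering
# the truncated state X0 to zero at time T; it is nonincreasing in T, so T* for the
# constraint ||u|| <= M is found by bisection on T.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

K = 2                                       # number of modes (assumption)
a_w, b_w = 0.3, 0.8                         # control region omega (assumption)
D = np.diag([(k * np.pi) ** 4 for k in range(1, K + 1)])
xs = np.linspace(a_w, b_w, 2001)
E = np.array([np.sqrt(2.0) * np.sin((k + 1) * np.pi * xs) for k in range(K)])
M_om = np.array([[trapezoid(E[i] * E[j], xs) for j in range(K)] for i in range(K)])
A = np.block([[np.zeros((K, K)), np.eye(K)], [-D, np.zeros((K, K))]])
B = np.vstack([np.zeros((K, K)), M_om])

def gramian(T, n=400):                      # controllability Gramian W(T), Gauss quadrature
    xg, wg = np.polynomial.legendre.leggauss(n)
    s, w = 0.5 * T * (xg + 1.0), 0.5 * T * wg
    W = np.zeros((2 * K, 2 * K))
    for si, wi in zip(s, w):
        Phi = expm(A * (T - si)) @ B
        W += wi * (Phi @ Phi.T)
    return W

def min_norm(T, X0):                        # minimal L2-norm of a null control on [0, T]
    y = expm(A * T) @ X0
    return float(np.sqrt(y @ np.linalg.solve(gramian(T), y)))

X0 = np.array([0.1, 0.02, 0.0, 0.0])        # modal coefficients of (z^0, z^1) (assumption)
M = 4.0                                     # norm bound defining U (assumption)
for T in (0.1, 0.5, 1.0, 2.0, 4.0):         # the minimal norm decreases with the horizon
    print("T =", T, " minimal control norm =", round(min_norm(T, X0), 3))

T_lo, T_hi = 0.05, 6.0                      # bracket: min_norm(T_lo) > M >= min_norm(T_hi)
for _ in range(40):                         # bisection on the monotone map T -> min_norm(T)
    T_mid = 0.5 * (T_lo + T_hi)
    T_lo, T_hi = (T_mid, T_hi) if min_norm(T_mid, X0) > M else (T_lo, T_mid)
print("approximate optimal time T* =", T_hi,
      "; minimal norm there =", min_norm(T_hi, X0), "(should be close to M)")
```

At the computed horizon the minimal-norm control saturates the bound M, which is the finite-dimensional counterpart of the bang-bang property established in Section 4.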

4. Bang-Bang Property for Time-Optimal Control of Petrowsky System

From Section 3, we know that there exists a time-optimal control for the control system (1). In fact, the time-optimal control has a special property: the bang-bang property. In this section, we establish the bang-bang property of the time-optimal control of the Petrowsky system (1) by contradiction, combining the methods in [13,14,15].
In order to obtain the bang-bang property of the time-optimal control of the Petrowsky system (1), we first show the bang-bang property of the time-optimal control of the control system (3) by contradiction. The target of the control system (3) is $z(T^*)=0$.
Theorem 6.
Under condition (6), for any $(z^0,z^1)\in H_0^2(\Omega)\times L^2(\Omega)$, let $T^*$ be the shortest time for a control u to lead the state z of system (3) from the initial value $z^0$ to zero, namely, $z(T^*)=0$. Then the corresponding control is the time-optimal control $u^*\in U$ of the control system (3), and $u^*\in\partial U$.
Proof. 
We prove Theorem 6 by contradiction. Suppose that there exist a subset $E\subset[0,T^*]$ with positive measure $|E|>0$ and some $\varepsilon>0$ such that
$$
u^*(t)\in U\quad\text{and}\quad d(u^*(t),\partial U)\ge\varepsilon\quad\text{for each }t\in E,
$$
where $d(u^*(t),\partial U)$ is the distance between the point $u^*(t)$ and the boundary $\partial U$ in $L^2(\Omega)$. Therefore, we have
$$
B\Big(u^*(t),\frac{\varepsilon}{2}\Big)\subset U\quad\text{for each }t\in E.
\tag{28}
$$
To derive the contradiction, we need a positive number $\delta<T^*$ and a control $v_\delta$ in the set $U_{ad}$ such that
$$
z(T^*-\delta;v_\delta,z^0)=z(T^*;u^*,z^0)=0,
\tag{29}
$$
where $z(t;v_\delta,z^0)$ and $z(t;u^*,z^0)$ are the solutions of system (3) with the initial data $(z^0,z^1)$ and the corresponding controls $v_\delta$ and $u^*$.
According to the semigroup representation of the solution of system (3), we have
$$
z(T^*-\delta;v_\delta,z^0)=S(T^*-\delta)z^0+\int_0^{T^*-\delta}S(T^*-\delta-\tau)\,\chi_\omega v_\delta(\tau)\,d\tau
$$
and
$$
z(T^*;u^*,z^0)=S(T^*)z^0+\int_0^{T^*}S(T^*-\tau)\,\chi_\omega u^*(\tau)\,d\tau.
$$
Hence, to obtain (29), we need a positive number $\delta<T^*$ and a control $v_\delta\in U_{ad}$ such that
$$
\int_0^{T^*-\delta}S(T^*-\delta-\tau)\,\chi_\omega v_\delta(\tau)\,d\tau=[S(T^*)-S(T^*-\delta)]z^0+\int_0^{T^*}S(T^*-\tau)\,\chi_\omega u^*(\tau)\,d\tau.
\tag{32}
$$
Therefore, for any positive number $\delta<T^*$, we have
$$
\begin{aligned}
\int_0^{T^*}S(T^*-\tau)\,\chi_\omega u^*(\tau)\,d\tau
&=\int_0^{\delta}S(T^*-\tau)\,\chi_\omega u^*(\tau)\,d\tau+\int_{\delta}^{T^*}S(T^*-\tau)\,\chi_\omega u^*(\tau)\,d\tau\\
&=S(T^*-\delta)\int_0^{\delta}S(\delta-\tau)\,\chi_\omega u^*(\tau)\,d\tau+\int_0^{T^*-\delta}S(T^*-\delta-\tau)\,\chi_\omega u^*(\delta+\tau)\,d\tau,
\end{aligned}
$$
and
$$
[S(T^*)-S(T^*-\delta)]z^0=S(T^*-\delta)\big[(S(\delta)-I)z^0\big].
$$
Equation (32) means that there exist a positive number $\delta<T^*$ and a control $v_\delta\in U_{ad}$ such that
$$
\begin{aligned}
\int_0^{T^*-\delta}S(T^*-\delta-\tau)\,\chi_\omega v_\delta(\tau)\,d\tau
&=S(T^*-\delta)\Big[\int_0^{\delta}S(\delta-\tau)\,\chi_\omega u^*(\tau)\,d\tau+(S(\delta)-I)z^0\Big]+\int_0^{T^*-\delta}S(T^*-\delta-\tau)\,\chi_\omega u^*(\delta+\tau)\,d\tau\\
&=S(T^*-\delta)z_\delta+\int_0^{T^*-\delta}S(T^*-\delta-\tau)\,\chi_\omega u^*(\delta+\tau)\,d\tau,
\end{aligned}
$$
where $z_\delta=\int_0^{\delta}S(\delta-\tau)\,\chi_\omega u^*(\tau)\,d\tau+(S(\delta)-I)z^0$.
For each $0<\delta<T^*$, we denote by $E_\delta$ the set $\{t:t+\delta\in E\}$ and by $\chi_{E_\delta}$ the characteristic function of the set $E_\delta$. Then, for each sufficiently small δ, there exists a control $u_\delta\in L^\infty(0,+\infty;L^2(\Omega))$ with $\|u_\delta(t)\|_{L^2(\Omega)}\le\frac{\varepsilon}{2}$ for almost every $t\ge0$. Therefore, we can construct the control $v_\delta$ by
$$
v_\delta(t)=
\begin{cases}
u^*(t+\delta)+\chi_{E_\delta}(t)\,u_\delta(t), & \text{if }t\in[0,T^*-\delta],\\
u_0, & \text{if }t>T^*-\delta,
\end{cases}
$$
where $u_0$ is any control in $U_{ad}$. It is clear that $v_\delta:[0,+\infty)\to L^2(\Omega)$ is measurable. On the one hand, we have $t+\delta\in E$ for $t\in[0,T^*-\delta]\cap E_\delta$, and from (28), we have $B(u^*(t+\delta),\frac{\varepsilon}{2})\subset U$. So if $\|u_\delta(t)\|_{L^2(\Omega)}\le\frac{\varepsilon}{2}$ for almost every $t>0$, then we have
$$
\|v_\delta(t)-u^*(t+\delta)\|_{L^2(\Omega)}=\|u_\delta(t)\|_{L^2(\Omega)}\le\frac{\varepsilon}{2}
$$
for almost every $t\in[0,T^*-\delta]\cap E_\delta$, namely, $v_\delta(t)\in B(u^*(t+\delta),\frac{\varepsilon}{2})$ for almost every $t\in[0,T^*-\delta]\cap E_\delta$. On the other hand, for almost every $t\in[0,T^*-\delta]\cap E_\delta^C$ we have $v_\delta(t)=u^*(t+\delta)\in U$. Therefore, $v_\delta\in U_{ad}$. From Section 2, we know that, for $(z^0,z^1)\in H_0^2(\Omega)\times L^2(\Omega)$, if $(\hat\psi^0,\hat\psi^1)\in L^2(\Omega)\times H^{-2}(\Omega)$ is the unique minimizer of the functional F and $\hat\psi$ is the solution of (10) with the initial value $(\hat\psi^0,\hat\psi^1)$, then $u=\chi_\omega\hat\psi$ is one of the controls in $U_{ad}$ such that $z(T^*-\delta;v_\delta,(z^0,z^1))=0$. This contradicts the fact that $T^*$ is the optimal time; therefore, $u^*\in\partial U$.
Let $z=y-y_T$; from the proof above and the property of $y_T$ given by (2), the following result holds. □
Corollary 2.
For the time-optimal control problem of the Petrowsky system (1), with $(y^0,y^1)\in H_0^2(\Omega)\times L^2(\Omega)$, if $T^*$ is the optimal time of the Petrowsky system and $u^*$ is the optimal control corresponding to $T^*$, then $u^*\in\partial U$, i.e., $\|u^*\|_{L^2(\Omega)}=M$.
According to Theorem 3, we know that $\chi_\omega\hat\psi=u^*$ and $u^*\in\partial U$; furthermore, we have the following result about the subset ω.
Corollary 3.
If there exists a time-optimal control $u^*$ for the control system (1), then the time-optimal control $u^*$ must act on the boundary of the system's domain Ω, i.e., $\omega\subset\partial\Omega$.
This interesting result suggests a useful anti-vibration strategy for practical engineering projects. If we want to keep the elastic beams or perches of a building vibrating within a prescribed safe range by means of an external force control, the fastest action is to apply the largest admissible external force on the boundary of some part of the elastic beams or perches.

5. Conclusions

From the results in the sections above, we know that if the controlled Petrowsky system is linear, its time-optimal control can be obtained through the null controllability of a constructed control problem. The null controllability of the constructed control system is equivalent to the controllability of the Petrowsky system, and this equivalence is the key to obtaining both the existence and the bang-bang property of the time-optimal control. However, the null controllability cannot always be obtained from a Carleman inequality, as it can for the control problem of the heat equation; here it is obtained from the extreme value of a functional defined in terms of the weak solution of the control system. The bang-bang property of the time-optimal control of the Petrowsky system suggests a fast strategy for practical engineering projects: if we want to guarantee that the elastic beams or perches of a building are safe by means of some device, the fast and efficient approach is to apply the largest force available to the device and let it act on the boundary. These results are not only interesting in theory but also useful in practice.

Author Contributions

Conceptualization, D.L. and W.W.; methodology, D.L.; validation, W.W., H.D. and D.L.; formal analysis, D.L.; investigation, Y.L.; data curation, H.D.; writing–original draft preparation, D.L.; writing–review and editing, D.L.; supervision, W.W.

Funding

This work is supported by the NSF of China (Nos. 11261011 and 71471158), the Guizhou Province Science and Technology Cooperation Plan (LH[2015]7004; [2016]7028), and the Science and Technology Program of Guizhou Province under grant no. [2016]1074.

Acknowledgments

We acknowledge Guangjun Xu for his help in English writing.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Curtain, R.F.; Zwart, H. An Introduction to Infinite-Dimensional Linear Systems Theory; Springer: New York, NY, USA, 1995. [Google Scholar]
  2. Sadek, I.; Abukhaled, M.; Abualrub, T. Coupled Galerkin and parameterization methods for optimal control of discretely connected parallel beams. Appl. Math. Model. 2010, 34, 3949–3957. [Google Scholar] [CrossRef]
  3. Guo, B.Z.; Zhang, Q. On Harmonic Disturbance Rejection of an Undamped Euler-Bernoulli Beam with Rigid Tip Body. ESAIM Control Optim. Calc. Var. 2011, 10, 615–623. [Google Scholar] [CrossRef]
  4. Chan, W.L.; Zhu, G.B. Pointwise Stabilization for a Chain of Coupled Vibrating Strings. IMA J. Math. Control Inf. 1990, 7, 307–315. [Google Scholar] [CrossRef]
  5. Chen, G. Control and stabilization for the wave equation in a bounded domain. SIAM J. Control Optim. 1979, 17, 114–122. [Google Scholar] [CrossRef]
  6. Han, H.; Cao, D.; Liu, L. Green’s functions for forced vibration analysis of bending-torsion coupled Timoshenko beam. Appl. Math. Model. 2017, 45, 621–635. [Google Scholar] [CrossRef]
  7. Lin, C.Y.; Huang, Y.H.; Chen, W.T. Multimodal suppression of vibration in smart flexible beam using piezoelectric electrode-based switching control. Mechatronics 2018, 53, 152–167. [Google Scholar] [CrossRef]
  8. Sayed, T.A.; Mongy, H.H. Application of variational iteration method to free vibration analysis of a tapered beam mounted on two-degree of freedom subsystems. Appl. Math. Model. 2018, 58. [Google Scholar] [CrossRef]
  9. Ding, H.; Zhu, M.H.; Chen, L.Q. Nonlinear vibration isolation of a viscoelastic beam. Nonlinear Dyn. 2018, 92, 325–349. [Google Scholar] [CrossRef]
  10. Ma, J.; Liu, F.; Nie, M.; Wang, J. Nonlinear free vibration of a beam on Winkler foundation with consideration of soil mass motion of finite depth. Nonlinear Dyn. 2018, 92, 429–441. [Google Scholar] [CrossRef]
  11. Zhang, C.G.; Zhao, H.L.; Liu, K.S. Exponential stabilization of a nonhomogeneous Timoshenko beam with the coupled control of locally distributed feedback and boundary feedback. Chin. Ann. Math. Ser. A 2003, 6, 757–764. [Google Scholar]
  12. Liu, K.; Liu, Z. Analyticity and Differentiability of Semigroups Associated with Elastic Systems with Damping and Gyroscopic Forces. J. Differ. Equ. 1997, 141, 340–355. [Google Scholar] [CrossRef] [Green Version]
  13. Liu, K. Equivalentness Between Controllability and Stabilizability for Conservative Systems and Applications. Annu. Math. 1994, 39, 1424–1429. (In Chinese) [Google Scholar]
  14. Zong, X.; Zhao, Y. Controllability of Petrowsky equation in bounded domain: A functional approach. Control Theory Appl. 2007, 24, 380–384. [Google Scholar]
  15. Nakoulima, O.; Omrane, A.; Velin, J. On the Pareto Control and No-Regret Control for Distributed Systems with Incomplete Data. SIAM J. Control Optim. 2012, 42, 1167–1184. [Google Scholar] [CrossRef]
  16. Abukhaled, M. Optimal control of thermoelastic beam vibrations by piezoelectric actuation. J. Control Theory Appl. 2013, 11, 463–467. [Google Scholar]
  17. Khorshidi, K.; Rezaei, E.; Ghadimi, A.; Pagoli, M. Active vibration control of circular plates coupled with piezoelectric layers excited by plane sound wave. Appl. Math. Model. 2015, 39, 1217–1228. [Google Scholar] [CrossRef]
  18. Ismail, K.; Kenan, Y.; Sarp, A. Optimal piezoelectric control of a plate subject to time-dependent boundary moments and forcing function for vibration damping. Comput. Math. Appl. 2015, 69, 291–303. [Google Scholar]
  19. Hu, H.; Tang, B.; Zhao, Y. Active control of structures and sound radiation modes and its application in vehicles. J. Low Freq. Noise Vib. Act. Control 2016, 35, C291–C302. [Google Scholar] [CrossRef]
  20. Abualrub, T.; Abukhaled, M.; Jamal, B. Wavelets approach for the optimal control of vibrating plates by piezoelectric patches. J. Vib. Control 2016, 1–8. [Google Scholar] [CrossRef]
  21. Kattimani, S.C.; Ray, M.C. Vibration control of multiferroic fibrous composite plates using active constrained layer damping. Mech. Syst. Signal Process. 2018, 106, 334–354. [Google Scholar] [CrossRef]
  22. Li, L.; Liao, W.H.; Zhang, D.; Zhang, Y. Vibration control and analysis of a rotating flexible FGM beam with a lumped mass in temperature field. Compos. Struct. 2019, 208, 244–260. [Google Scholar] [CrossRef]
  23. Tröltzsch, F. Optimal Control of Partial Differential Equations: Theory, Methods and Applications; Graduate Studies in Mathematics; AMS: Providence, RI, USA, 2010; Volume 112. [Google Scholar]
  24. Phung, K.D.; Wang, G. Quantitative Unique Continuation for the Semilinear Heat Equation in a Convex Domain. J. Funct. Anal. 2010, 259, 1230–1247. [Google Scholar] [CrossRef]
  25. Kunisch, K.; Wang, L.J. Time Optimal Control of the Heat Equation with Pointwise Control Constraints. ESAIM Control Optim. Calc. Var. 2013, 19, 460–485. [Google Scholar] [CrossRef]
  26. Pazy, A. Semigroups of Linear Operators and Applications to Partial Differential Equations; Springer: Berlin, Germany, 1983. [Google Scholar]
  27. Kunisch, K.; Wang, L.J. Bang-bang Property of Time Optimal Controls of Burgers equation. Discret. Contin. Dyn. Syst. 2014, 34, 3611–3637. [Google Scholar] [CrossRef]
  28. Phung, K.D.; Wang, L.J.; Zhang, C. Bang-bang Property for Time Optimal Control of Semilinear Heat Equation. Ann. Inst. Henri Poincaré Nonlinear Anal. 2014, 31, 477–499. [Google Scholar] [CrossRef]
  29. Kunisch, K.; Wang, L.J. Bang-bang Property Of Time Optimal Control of Semilinear Parabolic Equation. Discret. Contin. Dyn. Syst. 2016, 36, 279–302. [Google Scholar] [CrossRef]
