Article

Recurrent Neural Network-Based Nonlinear Optimization for Braking Control of Electric Vehicles

Jiapeng Yan, Huifang Kong and Zhihong Man
1. School of Electrical Engineering and Automation, Hefei University of Technology, Hefei 230009, China
2. School of Software and Electrical Engineering, Faculty of Science, Engineering and Technology, Swinburne University of Technology, Melbourne, VIC 3122, Australia
* Author to whom correspondence should be addressed.
Energies 2022, 15(24), 9486; https://doi.org/10.3390/en15249486
Submission received: 6 November 2022 / Revised: 27 November 2022 / Accepted: 6 December 2022 / Published: 14 December 2022
(This article belongs to the Topic Advanced Electric Vehicle Technology)

Abstract

In this paper, electro-hydraulic braking (EHB) force allocation for electric vehicles (EVs) is modeled as a constrained nonlinear optimization problem (NOP). Recurrent neural networks (RNNs) offer several advantages for solving NOPs, yet the convergence of existing RNNs usually requires convexity together with the calculation of second-order partial derivatives. In this paper, a recurrent neural network-based NOP solver (RNN-NOPS) is developed. The RNN-NOPS is designed to drive all state variables to asymptotically converge to the feasible region, with only a loose requirement on the NOP's first-order partial derivatives. In addition, the RNN-NOPS's equilibria are proved to meet the Karush–Kuhn–Tucker (KKT) conditions, and the RNN-NOPS exhibits strong robustness against the violation of constraints. Comparative studies are conducted to show the RNN-NOPS's advantages for solving the EHB force allocation problem; it is reported that the overall regenerative energy of the RNN-NOPS is 15.39% more than that of the method used for comparison under the SC03 cycle.

1. Introduction

The momentum of development of pure electric vehicles (EVs) has been increasing due to the foreseeable shortage of fossil fuels and the air pollution caused by fossil fuel combustion. Limited battery capacity, however, has become one of the main shortcomings hindering EVs' further commercialization. Recently, many efficient energy management strategies have been developed, offering a significant improvement of EV performance. It has been shown that, in urban driving, the energy consumed in braking is almost half of the total traction energy [1]. Therefore, an effective electro-hydraulic braking (EHB) force allocation strategy is required for recovering the maximum regenerative energy while guaranteeing braking safety.
Various optimization and control algorithms have been developed for regenerative braking control. In [2], given the desired yaw moment and road friction coefficient, the regenerative and EHB torques are determined by using a genetic algorithm (GA). It is noted that the quality of a GA's solution could be improved by using a larger initial population, which, however, would increase the GA's computational cost; GAs also suffer from premature convergence [3]. A regenerative braking control method based on model predictive control (MPC) was proposed in [4], where maximization of energy recuperation and wheel slip regulation are achieved. However, it has been noted that high-quality system models built with accurate measurement data are required by MPC, which may increase model complexity and computational cost [5].
To attain optimal energy recovery with braking safety guaranteed, the EHB force allocation problem of EVs has recently been treated as a nonlinear optimization problem (NOP). NOPs are widely used in portfolio optimization [6,7,8], system control [9,10,11,12], and machine learning [13]. For solving NOPs, many numerical algorithms have been proposed over the decades [14,15]. It has been noted that the computing cost of NOP algorithms is highly affected by the dimension of the state variable vector and by the complexity of the solvers' structures; thus, most of these algorithms are less effective in real-time applications, as pointed out in [16,17].
Since Tank and Hopfield's work [18], many recurrent neural networks (RNNs) have become available for solving NOPs. The benefits of RNN-based methods are threefold: (i) the availability of electronic implementation using very large-scale integration (VLSI) chips, (ii) the capability of solving NOPs with time-varying parameters, and (iii) the efficiency of applying numerical ordinary differential equation (ODE) techniques to solving NOPs [17]. Variants of RNNs such as long short-term memory (LSTM) have been employed for battery state-of-health and power consumption forecasting [19]. Since the proposal of neural ordinary differential equations [20], in which the residual network [21] is interpreted as a discretized ODE, it has become a trend to bridge neural networks and ODEs and to exploit off-the-shelf research on ODEs for further enhancing deep learning strategies. Other neural network-based algorithms, such as the SGTM neural-like structure, can also be used to process large amounts of data from a variety of industries [22].
In [16], an RNN was proposed for solving NOPs and it was proved that all the state variables converge to an exact Karush–Kuhn–Tucker (KKT) point under the assumption that both the objective function and the constraints are convex. Moreover, a single-layered RNN was developed for solving NOPs in [23], where it is required that the gradients of the active inequality constraints (those equal to 0) be linearly independent, and other conditions are also necessary for an exact penalty. In [24], a neurodynamic model based on an augmented Lagrangian function was proposed; the states of the model are asymptotically stable at a strict local minimum provided that the second-order sufficiency conditions (SOSC) [25] hold. In addition to the above typical RNNs, two projection neural networks with reduced model complexity (RDPNNs) for solving NOPs were proposed in [26], where the RDPNNs were proved to be globally convergent to the points satisfying the reduced optimality condition, under the condition that the Hessian matrix of the associated Lagrangian function is positive definite at each KKT point.
In this paper, a novel RNN-based NOP solver (RNN-NOPS) is proposed for solving the EHB force allocation problem of EVs. It will be shown that the RNN-NOPS is not only able to solve NOPs with the constraints fully met and optimality guaranteed, but is also suitable for a wide category of time-critical industrial applications. The distinctive contributions of this paper are summarized as follows:
  • It is proved that, when a constraint is violated, the only equilibrium at the origin of the constraint mapping space $\mathbb{R}^m$ is globally asymptotically stable in the Lyapunov sense, ensuring that all the RNN-NOPS's state variables are able to reach the feasible region from outside it. This property only requires the first-order partial derivatives of the NOP, whose verification costs less computation compared with existing methods.
  • The RNN-NOPS's equilibria are designed to satisfy the KKT conditions and, therefore, valid local minima of the NOP can be obtained.
  • The comparative studies in the simulation section show the advantages of the RNN-NOPS for solving the problem under different braking processes with guaranteed constraints and optimality, compared with the existing optimization-solving approach discussed in [16].
  • The RNN-NOPS is based on a neural network model with a parallel structure, which is competent for industrial applications where real-time solutions are required.
The remainder of this paper is organized as follows. In Section 2, the background of RNN-based optimization approaches and the EHB force allocation problem is briefly introduced. In Section 3, the mechanism and theoretical properties of the RNN-NOPS are discussed in detail. The results of the algorithmic comparative studies on the EHB force allocation problem are presented in Section 4, followed by the conclusion in Section 5.

2. Background and Problem Formulation

2.1. RNN-Based Optimization

Consider the following nonlinear optimization problem (NOP) [17]:
$$\min\; f(x) \quad \text{s.t.}\quad c_j(x) \le 0,\; j = 1, \ldots, m; \qquad x_i \ge 0,\; i = 1, \ldots, n \tag{1}$$
where $x = [x_1, \ldots, x_n]^T$ is the decision variable vector, $f: \mathbb{R}^n \to \mathbb{R}$, $c_j: \mathbb{R}^n \to \mathbb{R}$, and $f$ and all $c_j$ are assumed to be twice differentiable.
Given the Lagrangian function of the NOP of the form:
$$L(x, y) = f(x) + \sum_{j=1}^{m} y_j\, c_j(x) \tag{2}$$
where $y = [y_1, \ldots, y_j, \ldots, y_m]^T$ is the Lagrange multiplier vector. If $x^*$ is a local optimal point, then there exists $y^* \in \mathbb{R}^m$ such that $(x^*, y^*)$ is a Karush–Kuhn–Tucker (KKT) point satisfying [22,27]:
$$\nabla f(x^*) + \sum_{j=1}^{m} \nabla c_j(x^*)\, y_j^* = 0, \qquad c_j(x^*)\, y_j^* = 0, \qquad c_j(x^*) \le 0, \qquad y_j^* \ge 0,\; x_i^* \ge 0 \tag{3}$$
where $\nabla c_j(x^*)$ and $\nabla f(x^*)$ are the gradients of the functions $c_j(x)$ and $f(x)$ evaluated at $x = x^*$, $y = y^*$.
Definition 1.
Let $x^*$ satisfy all the constraints in Equation (1) and let $J(x^*)$ be the set of indices $j$ with $c_j(x^*) = 0$. If the gradients $\nabla c_j(x^*)$ with $j \in J(x^*)$ are linearly independent, then $x^*$ is called a regular point [28].
Lemma 1.
Consider the NOP in Equation (1) with the Lagrangian function in Equation (2), and suppose that $x^*$ is a regular point of the NOP in Equation (1). Then $x^*$ is a strict local minimum of the NOP if (i) there exists $y^*$ such that $(x^*, y^*)$ satisfies the KKT conditions in Equation (3); (ii) for any $d \ne 0$, $d \in \mathbb{R}^n$ such that $\nabla c_j(x^*)^T d = 0$ for every $j \in J(x^*)$, it follows that [28]:
$$d^T \Big[\nabla^2 f(x^*) + \sum_{j} y_j^* \nabla^2 c_j(x^*)\Big] d > 0 \tag{4}$$
and (iii) $y^*$ satisfies the strict complementarity assumption given by:
$$y_j^* > 0, \quad j \in J(x^*) \tag{5}$$
The conditions (i), (ii) and (iii) in Lemma 1 are called second-order sufficient conditions (SOSC).
In [28], the following augmented Lagrangian function is defined:
$$L_c(x, y) = f(x) + \sum_{j=1}^{m} y_j^2\, c_j(x) + \frac{c}{2} \sum_{j=1}^{m} \big(y_j\, c_j(x)\big)^2 \tag{6}$$
where c is a positive penalty parameter. A Lagrange-type neural network is then constructed as:
$$\frac{dx_i}{dt} = -\frac{\partial L_c(x, y)}{\partial x_i}, \quad i = 1, \ldots, n \tag{7a}$$
$$\frac{dy_j}{dt} = 2\, y_j\, c_j(x), \quad j = 1, \ldots, m \tag{7b}$$
It is easy to show that, under the SOSC, $(x^*, y^*)$ is a strict local minimum of the augmented Lagrangian function. Furthermore, the neural network modeled in Equation (7a,b) is locally asymptotically stable at this local minimum.
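To make the Lagrange-type dynamics above concrete, the following sketch integrates Equation (7a,b) with a simple forward-Euler scheme on a small convex toy problem; the toy problem, penalty parameter and integration step are illustrative assumptions and are not taken from [28].

```python
import numpy as np

# Toy convex NOP: min f(x) = (x1 - 2)^2 + (x2 - 1)^2  s.t.  c1(x) = x1 + x2 - 2 <= 0
def grad_f(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])

def c(x):
    return np.array([x[0] + x[1] - 2.0])

def grad_c(x):
    return np.array([[1.0, 1.0]])               # Jacobian, one row per constraint

pen = 1.0                                        # penalty parameter c in Equation (6)
x = np.array([2.0, 1.0])                         # start at the unconstrained minimizer (infeasible)
y = np.array([1.0])                              # multiplier state
dt = 1e-3                                        # Euler integration step

for _ in range(20_000):
    # grad_x L_c = grad f + sum_j (y_j^2 + pen * y_j^2 * c_j) * grad c_j
    gx = grad_f(x) + (y**2 + pen * y**2 * c(x)) @ grad_c(x)
    dx = -gx                                     # Equation (7a): descent in x
    dy = 2.0 * y * c(x)                          # Equation (7b): multiplier dynamics
    x = x + dt * dx
    y = y + dt * dy

print(np.round(x, 3), np.round(c(x), 4))         # x should approach the constrained minimizer near (1.5, 0.5)
```

In this toy case the trajectory is pushed back to the constraint boundary and settles at the KKT point, which is the locally asymptotically stable behavior described above.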
Remark 1.
The discussions in Equations (6) and (7a,b) describe a generalized framework for designing RNNs. It should be noted that the augmented Lagrangian function determines both the dynamics of the KKT pairs and the stability of the RNN in Equation (7a,b) in the Lyapunov sense. By reasonably constructing the penalty function or the regulating rule, the KKT points can be made to correspond to the largest invariant set in LaSalle's invariance principle [28,29], and the dynamics of the designed RNN are then able to globally converge to the KKT pairs.
Further, in [17], an RNN for solving NOPs was proposed with the following state equation:
$$\frac{d}{dt}\begin{bmatrix} x \\ y \end{bmatrix} = \lambda \begin{bmatrix} -x + (x)^+ - \nabla f(x^+) - \nabla c(x^+)\, y^+ \\ -y + \big(y + c(x^+)\big)^+ \end{bmatrix} \tag{8}$$
with the output equation:
$$v = x^+ \tag{9}$$
Given that (i) $\nabla_x^2 L(x, y^*)$ is positive semidefinite on $\mathbb{R}_+^n$, (ii) $\nabla_x^2 L(x^*, y^*)$ is positive definite, and (iii) for the initial point $z(t_0) = (x(t_0), y(t_0))$, $\nabla_x^2 L(x, y)$ is positive definite on $\mathbb{R}_+^n \times S_0$, where $L(x, y) = f(x) + y^T c(x)$ is the Lagrangian function, $S_0 = \{ y \in \mathbb{R}^m \mid y = (y_0(t) + e^{t_0 - t} y(t_0))^+,\; t \in [t_0, \infty) \}$, and $y_0(t)$ is the second state trajectory of Equation (8) with zero initial point, it was proved that the output trajectory of the network converges to the optimal solution of the NOP.
In [16], an RNN was proposed for nonlinear convex programming and regarded as an extended projection neural network (EPNN) based on the model proposed in [30]. EPNN’s state space equations are given as:
$$\frac{d}{dt}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} -x + \big(x - \alpha(\nabla f(x) + \nabla c(x)\, y)\big)^+ \\ -y + \big(y + \alpha\, c(x)\big)^+ \end{bmatrix} \tag{10}$$
It was proved that the solution of Equation (10) converges to a KKT point under the conditions that the objective function is convex and all constraint functions are strictly convex, or that the objective function is strictly convex and the constraint functions are convex. In fact, [16] provides a generalized projection network framework, and it is possible to develop EPNN-like NOP solvers with mild requirements on the NOP's convexity.
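For comparison, a similarly minimal forward-Euler sketch of the EPNN dynamics in Equation (10) on an illustrative strictly convex problem is given below; the toy problem, $\alpha$ and the integration step are assumptions for illustration and are not the settings used later in Section 4.

```python
import numpy as np

# Toy convex NOP: min f(x) = (x1 - 2)^2 + (x2 - 1)^2  s.t.  c1(x) = x1^2 + x2^2 - 1 <= 0
def grad_f(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])

def c(x):
    return np.array([x[0]**2 + x[1]**2 - 1.0])

def grad_c(x):
    return np.array([[2.0 * x[0], 2.0 * x[1]]])       # Jacobian, one row per constraint

pos = lambda v: np.maximum(v, 0.0)                    # (.)^+ projection
alpha, dt = 0.5, 1e-2
x, y = np.zeros(2), np.zeros(1)

for _ in range(20_000):
    dx = -x + pos(x - alpha * (grad_f(x) + grad_c(x).T @ y))   # first row of Equation (10)
    dy = -y + pos(y + alpha * c(x))                            # second row of Equation (10)
    x = x + dt * dx
    y = y + dt * dy

print(np.round(x, 3), np.round(y, 3))   # x should settle near (0.894, 0.447), the closest feasible point to (2, 1)
```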
In practice, the above RNN-based optimization-solving methods are hardware-implementable using very large-scale integrated circuit chips, which are characterized by powerful parallel processing capabilities. More importantly, these optimization solvers can be widely used in networked autonomous vehicles, power grids, communication systems, and Internet of Things (IoT) infrastructures [31] for solving optimal engineering design problems.

2.2. EHB Force Allocation Problem Formulation

Consider a front-wheel-drive (FWD) pure electric vehicle on level ground, whose acting forces are shown in Figure 1, where $F_{resis}$ is the resistance, including aerodynamic drag and rolling resistance, $T_{reg}$ is the regenerative braking torque on the front axle, while $T_{ff}$ and $T_{rf}$ denote the frictional braking torques on the front and rear axles, respectively. $F_{bf}$ and $F_{br}$ are the tire–ground braking forces acting on the front and rear axles, respectively, and for the corresponding tire–ground braking torques $T_{bf}$ and $T_{br}$, we have:
$$T_{bf} = T_{reg} + T_{ff}; \qquad T_{br} = T_{rf}; \qquad F_{bf} = \frac{T_{bf}}{r}; \qquad F_{br} = \frac{T_{br}}{r} \tag{11}$$
Meanwhile, for $T_{reg}$, we have:
$$T_{reg} = F_{reg}\, r \tag{12}$$
where $r$ is the wheel radius and $F_{reg}$ is the regenerative braking force. The maximum braking forces on the two axles can then be written as [32]:
$$F_{bf\,\max} = \frac{\phi\, G\, (L_b + z\, h_g)}{L} \tag{13}$$
$$F_{br\,\max} = \frac{\phi\, G\, (L_a - z\, h_g)}{L} \tag{14}$$
where $F_{bf\,\max}$ and $F_{br\,\max}$ are the maximum braking forces acting on the front and rear axles, respectively, $\phi$ is the tire–ground adhesion coefficient, and $z$ is the braking rate given by:
$$z = \frac{a}{g} \tag{15}$$
where $a$ is the deceleration of the vehicle and $g$ denotes the gravitational acceleration (9.8 N/kg). The deceleration $a$ can then be expressed as:
$$a = \frac{g\, (F_{bf} + F_{br} + F_{resis})}{G} \tag{16}$$
with $G$ the weight of the vehicle and $F_{resis}$ the resistance, including air drag and rolling friction.
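For a rough sense of the magnitudes involved, Equations (13) and (14) can be evaluated with the vehicle parameters of Table 1; the adhesion coefficient $\phi$ and braking rate $z$ below are illustrative assumptions, not values taken from the paper.

```python
# Illustrative evaluation of Equations (13) and (14); phi and z are assumed values.
g = 9.8                                    # gravitational acceleration, N/kg
G = (1144 + 136) * g                       # vehicle weight including cargo (Table 1), N
L, L_a, L_b, h_g = 2.6, 1.04, 1.56, 0.5    # geometry from Table 1, m
phi, z = 0.8, 0.3                          # assumed adhesion coefficient and braking rate

F_bf_max = phi * G * (L_b + z * h_g) / L   # Equation (13), front axle
F_br_max = phi * G * (L_a - z * h_g) / L   # Equation (14), rear axle
print(round(F_bf_max), round(F_br_max))    # roughly 6600 N and 3435 N
```

The asymmetry between the two values reflects the load transfer to the front axle during braking, which is why the front axle can carry the larger share of the braking force.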
In order to ensure the stability and braking safety of the vehicle, the regulation ECE-R13 established by the United Nations Economic Commission for Europe [33,34] requires that the adhesion coefficients satisfy the following relationships [32]:
$$\phi_f \ge \phi_r \tag{17}$$
for $z \in [0.15,\, 0.8]$, and
$$\phi_f,\; \phi_r \le \frac{z + 0.07}{0.85} \tag{18}$$
for $z \in [0.1,\, 0.61]$, where $\phi_f$ and $\phi_r$ are the adhesion coefficients utilized by the front and rear wheels, respectively.
Since electric motors feature a wider operating range with higher efficiency than internal combustion engines, conventional variable transmissions are not necessarily required. In this paper, considering the EV's physical properties and braking safety, the EHB force allocation is formulated as the following NOP:
$$\min\; f(x) \quad \text{s.t.}\quad c_j(x) \le 0, \quad j = 1, \ldots, 11 \tag{19}$$
where $x = [F_{reg},\, F_{ff}]^T$ and $f(x) = \dfrac{1}{1 + F_{reg}^2}$, with the constraints:
$$\begin{aligned}
&c_1(x) = F_{bf} - F_{bf\,\max}; \qquad && c_2(x) = -F_{bf};\\
&c_3(x) = F_{br} - F_{br\,\max}; \qquad && c_4(x) = -F_{br};\\
&c_5(x) = \phi_r - \phi_f; && (z \in [0.15,\, 0.8])\\
&c_6(x) = \phi_f - \tfrac{z + 0.07}{0.85}; && (z \in [0.1,\, 0.61])\\
&c_7(x) = \phi_r - \tfrac{z + 0.07}{0.85}; && (z \in [0.1,\, 0.61])\\
&c_8(x) = T_{reg} - T_{reg\,\max}; \qquad && c_9(x) = \omega_m - \omega_{m\,\max};\\
&c_{10}(x) = -F_{reg}; \qquad && c_{11}(x) = -F_{ff}
\end{aligned} \tag{20}$$
where $\omega_m$ and $P_m$ are the motor rotational speed and the motor power, bounded by $\omega_{m\,\max}$ and $P_{m\,\max}$, respectively, and $T_{reg\,\max}$ is the maximum regenerative braking torque that the motor can provide.
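To make the structure of Equations (19) and (20) concrete, the following sketch evaluates the objective and a representative subset of the constraints for a candidate allocation; the total braking demand, resistance, adhesion coefficient and torque limit below are placeholder assumptions, and the motor-speed constraint $c_9$ is omitted for brevity.

```python
import numpy as np

# Vehicle parameters (Table 1) and assumed operating conditions.
g = 9.8
G = (1144 + 136) * g
L, L_a, L_b, h_g, r = 2.6, 1.04, 1.56, 0.5, 0.282
phi, T_reg_max = 0.8, 600.0            # assumed adhesion coefficient and motor torque limit
F_total, F_resis = 3000.0, 200.0       # assumed demanded braking force and resistance, N

def objective(x):
    F_reg, F_ff = x
    return 1.0 / (1.0 + F_reg**2)      # f(x) in Equation (19): small when |F_reg| is large

def constraints(x):
    """Return [c1, c2, c3, c4, c5, c6, c7, c8, c10, c11]; feasible iff all entries <= 0."""
    F_reg, F_ff = x
    F_bf = F_reg + F_ff                # front tire-ground braking force, Equation (11)
    F_br = F_total - F_bf              # rear axle supplies the remaining demand (assumption)
    z = (F_bf + F_br + F_resis) / G    # braking rate, Equations (15) and (16)
    phi_f = F_bf * L / (G * (L_b + z * h_g))   # front adhesion utilization
    phi_r = F_br * L / (G * (L_a - z * h_g))   # rear adhesion utilization
    F_bf_max = phi * G * (L_b + z * h_g) / L   # Equation (13)
    F_br_max = phi * G * (L_a - z * h_g) / L   # Equation (14)
    return np.array([
        F_bf - F_bf_max, -F_bf,                # c1, c2
        F_br - F_br_max, -F_br,                # c3, c4
        phi_r - phi_f,                         # c5
        phi_f - (z + 0.07) / 0.85,             # c6
        phi_r - (z + 0.07) / 0.85,             # c7
        F_reg * r - T_reg_max,                 # c8: T_reg = F_reg * r, Equation (12)
        -F_reg, -F_ff,                         # c10, c11
    ])

x = np.array([1000.0, 500.0])                  # candidate allocation [F_reg, F_ff]
print(objective(x), np.round(constraints(x), 3))
```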
The vehicle model is taken from the pure EV specified in ADVISOR (Advanced Vehicle Simulator), shown in Figure 2, where the RNN-NOPS is implemented as the braking strategy embedded in the vehicle control module. Important parameters of the considered vehicle, as given in ADVISOR, are listed in Table 1; other parameters can also be found in the software.

3. RNN-NOPS Design

In this section, the design of the RNN-NOPS is presented in detail. It will be shown that the RNN-NOPS is capable of driving a state variable that violates the constraints back into the feasible region within a sufficient number of iterations, and that the KKT conditions hold at the RNN-NOPS's equilibria. The EHB force allocation problem is then verified to be solvable by the RNN-NOPS.

3.1. Formulation of RNN-NOPS

The generalized framework of the RNN-NOPS is described as follows. Considering the NOP in Equation (1) with the Lagrangian function in Equation (2), the RNN-NOPS is modelled by the following state equations:
$$\dot{x}_i = -\lambda_1 \left[ \frac{\partial f(x)}{\partial x_i} + \sum_{k=1}^{m} \frac{\partial c_k(x)}{\partial x_i} \Big( y_k + \big(c_k(x)\big)^+ \Big) \right] \tag{21a}$$
$$\dot{y}_k = -\lambda_2 \left[ y_k - \Big( y_k + \big(2\,\mathrm{sign}(c_k(x)) - 1\big)\,\mathrm{sign}(y_k) + \big(c_k(x)\big)^+ \Big)^+ \right] \tag{21b}$$
with:
$$x_i(0) \in \mathbb{R}, \quad i = 1, \ldots, n; \qquad y_k(0) \in \mathbb{R}, \quad k = 1, \ldots, m \tag{21c}$$
where $x_i(0)$ and $y_k(0)$ are the initial values of $x_i$ and $y_k$, respectively, with the state vector $x$ defined as:
$$x = (x_1, \ldots, x_n)^T \tag{22}$$
$y_k$ is the Lagrange multiplier corresponding to the constraint $c_k(x)$, $(x_i)^+ = \max\{0, x_i\}$, $(c_k(x))^+ = \max\{0, c_k(x)\}$, $\mathrm{sign}(\cdot)$ is the sign function, and $\lambda_1$ and $\lambda_2$ are positive learning rates.
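As a minimal illustration of how Equation (21a–c) can be iterated in discrete time, the sketch below applies a forward-Euler version of the update equations as written above to a small toy NOP; the toy problem, learning rates and iteration count are illustrative assumptions and are not the EHB problem or the settings of Section 4.

```python
import numpy as np

# Toy NOP: min f(x) = (x1 - 2)^2 + (x2 - 1)^2  s.t.  c1(x) = x1 + x2 - 2 <= 0
def grad_f(x):
    return np.array([2.0 * (x[0] - 2.0), 2.0 * (x[1] - 1.0)])

def c(x):
    return np.array([x[0] + x[1] - 2.0])

def grad_c(x):
    return np.array([[1.0, 1.0]])               # Jacobian, one row per constraint

pos = lambda v: np.maximum(v, 0.0)               # (.)^+
lam1, lam2 = 0.01, 0.01                          # learning rates (illustrative)
x = np.array([3.0, 3.0])                         # initial point outside the feasible region
y = np.zeros(1)

for _ in range(100_000):
    ck = c(x)
    # Equation (21a): update of the decision variables
    dx = -lam1 * (grad_f(x) + grad_c(x).T @ (y + pos(ck)))
    # Equation (21b): update of the Lagrange multipliers
    dy = -lam2 * (y - pos(y + (2.0 * np.sign(ck) - 1.0) * np.sign(y) + pos(ck)))
    x, y = x + dx, y + dy

print(np.round(x, 3), np.round(c(x), 3))         # x should hover near the boundary x1 + x2 = 2, close to (1.5, 0.5)
```

The run first exhibits the constraint recovering behavior of Proposition 1 below (the violated constraint is driven back toward the feasible region) and then the objective optimizing behavior discussed in Remark 3.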
Remark 2.
Because of the nonlinearity in Equation (21b), the stability of $y_k$ can be analyzed piecewise as follows:
For $c_k(x) > 0$:
Equation (21b) can be expressed as:
$$\dot{y}_k = -\lambda_2 \left[ y_k - \big( y_k + \mathrm{sign}(y_k) + c_k(x) \big)^+ \right] \tag{23}$$
If $y_k > 0$, Equation (23) becomes:
$$\dot{y}_k = \lambda_2 \big( 1 + c_k(x) \big) > 0 \tag{24a}$$
Equation (24a) indicates that $y_k \to \infty$ as $t \to \infty$.
If $y_k = 0$, Equation (23) can be written as:
$$\dot{y}_k = \lambda_2\, c_k(x) > 0 \tag{24b}$$
$y_k$ will then move away from the origin $y_k = 0$ and go to infinity as $t \to \infty$.
If $y_k < 0$, Equation (23) is of the form:
$$\dot{y}_k = -\lambda_2 \left[ y_k - \big( y_k - 1 + c_k(x) \big)^+ \right] = -\lambda_2\, y_k + \lambda_2 \big( (y_k - 1) + c_k(x) \big)^+ \tag{24c}$$
In this case, $y_k$ will move toward $y_k = 0$.
The above analysis shows that, when the constraint $c_k(x)$ is violated with $c_k(x) > 0$, the corresponding $y_k$ either goes to infinity (for $y_k \ge 0$) or moves toward 0 (for $y_k < 0$).
For $c_k(x) = 0$:
Equation (21b) becomes:
$$\dot{y}_k = -\lambda_2 \left[ y_k - \big( y_k - \mathrm{sign}(y_k) \big)^+ \right] \tag{25}$$
and then Equation (25) can be written as:
$$\dot{y}_k = \begin{cases} -\lambda_2 < 0, & y_k > 1 \\ -\lambda_2\, y_k < 0, & 0 < y_k \le 1 \\ 0, & y_k = 0 \\ \lambda_2 > 0, & -1 \le y_k < 0 \\ -\lambda_2\, y_k > 0, & y_k < -1 \end{cases} \tag{26a}$$
According to Lyapunov theory [35], Equation (26a) means that, with the selected Lyapunov function $V_{y_k} = 0.5\, y_k^2$, we have:
$$\dot{V}_{y_k} = y_k\, \dot{y}_k < 0 \tag{26b}$$
for $y_k \ne 0$, and $\dot{V}_{y_k} = 0$ if and only if $y_k = 0$; therefore, $y_k$ asymptotically converges to zero.
Similarly, for $c_k(x) < 0$, Equation (21b) can be written as:
$$\dot{y}_k = -\lambda_2 \left[ y_k - \big( y_k - 3\,\mathrm{sign}(y_k) \big)^+ \right] = \begin{cases} -3\lambda_2 < 0, & y_k > 3 \\ -\lambda_2\, y_k < 0, & 0 < y_k \le 3 \\ 0, & y_k = 0 \\ 3\lambda_2 > 0, & -3 \le y_k < 0 \\ -\lambda_2\, y_k > 0, & y_k < -3 \end{cases} \tag{27}$$
Using Lyapunov stability theory, it can similarly be shown that, when the constraint $c_k(x) < 0$, $y_k$ asymptotically converges to zero. □
Assumptions 1.
For the NOP given by Equation (1), the following assumptions are made: (i) the partial derivative of the objective $f(x)$ with respect to $x_i$, $\partial f(x)/\partial x_i$, is bounded for all $i$; (ii) when $x$ is outside the feasible region with $c_a(x) > 0$, there always exists some $x_i$, $i \in \{1, \ldots, n\}$, such that $\partial c_a(x)/\partial x_i \ne 0$.
Based on Assumptions 1, we have Proposition 1 as follows:
Proposition 1.
Consider the RNN-NOPS in Equation (21a–c) for solving the constrained NOP in Equations (1) and (2). If a system state is outside the feasible region with the $a$-th constraint violated, that is, $c_a(x) > 0$, then after a sufficient number of iterations the RNN-NOPS drives $x$ toward the feasible region with $dc_a(x)/dt < 0$, ensuring that all the constraints are eventually satisfied within the feasible region.
Proof. 
Firstly, assume that, in some sub-region of the state space, the $a$-th constraint is violated, that is, $c_a(x) > 0$. According to Remark 2, for the corresponding multiplier $y_a$ we have $\dot{y}_a > 0$; for the other constraints with $c_k(x) \le 0$, after a sufficient number of iterations, $y_k = 0$. Then, according to Equation (21a), any constraint $c_k(x)$ satisfying $c_k(x) \le 0$ has no effect on the update of $x_i$. Therefore, after a sufficient number of iterations, Equation (21a) becomes:
$$\dot{x}_i = -\lambda_1 \left[ \frac{\partial f(x)}{\partial x_i} + \sum_{k=1}^{m} \frac{\partial c_k(x)}{\partial x_i} \Big( y_k + \big(c_k(x)\big)^+ \Big) \right] = -\lambda_1 \left[ \frac{\partial f(x)}{\partial x_i} + \frac{\partial c_a(x)}{\partial x_i} \Big( y_a + \big(c_a(x)\big)^+ \Big) \right] \tag{28}$$
Since $\partial f(x)/\partial x_i$ is assumed to be bounded, when the violated constraint $c_a(x) > 0$ makes $\dot{y}_a > 0$, $y_a$ will continuously increase such that, after a large number of iterations, the following inequality holds:
$$\left| \frac{\partial c_a(x)}{\partial x_i} \Big( y_a + \big(c_a(x)\big)^+ \Big) \right| \gg \left| \frac{\partial f(x)}{\partial x_i} \right| \tag{29}$$
and then we have:
$$\dot{x}_i \approx -\lambda_1\, \frac{\partial c_a(x)}{\partial x_i} \Big( y_a + \big(c_a(x)\big)^+ \Big) \tag{30}$$
Equation (30) means that all $x_i$ move approximately along the negative gradient direction of $c_a(x)$, similarly to a gradient descent approach, and the positive factor $\lambda_1 \big( y_a + (c_a(x))^+ \big)$ in Equation (28) can be regarded as the updating step size. From Assumptions 1, there exists $x_i$ such that $\partial c_a(x)/\partial x_i \ne 0$, and then:
$$\dot{x}_i\, \frac{\partial c_a(x)}{\partial x_i} \approx -\lambda_1 \left( \frac{\partial c_a(x)}{\partial x_i} \right)^2 \Big( y_a + \big(c_a(x)\big)^+ \Big) \le 0, \quad i = 1, \ldots, n \tag{31}$$
Summing the $n$ relations of Equation (31) over $i = 1, \ldots, n$, and noting that at least one $\partial c_a(x)/\partial x_i \ne 0$, we obtain:
$$\frac{d c_a(x)}{dt} = \sum_{i=1}^{n} \dot{x}_i\, \frac{\partial c_a(x)}{\partial x_i} \approx -\lambda_1 \sum_{i=1}^{n} \left( \frac{\partial c_a(x)}{\partial x_i} \right)^2 \Big( y_a + \big(c_a(x)\big)^+ \Big) < 0 \tag{32}$$
Equation (32) means that there exists a time $t_0$ such that, for $t > t_0$, we have $c_k(x) \le 0$. □
Remark 3.
It is seen from Proposition 1 that the RNN-NOPS proposed in this paper has the following remarkable robustness and convergence properties:
(i) If the state variable vector is outside the feasible region with some constraint violated ($c_a(x) > 0$), the RNN-NOPS continuously increases the value of the corresponding Lagrange multiplier (state variable) such that, after a number of iterations with large variable step sizes, the changing rate of the violated constraint becomes negative ($dc_a(x)/dt < 0$). This process is named the "constraint recovering process";
(ii) After all the constraints are satisfied within the feasible region, the RNN-NOPS drives all the Lagrange multipliers to converge to zero ($y_k = 0$) in the Lyapunov sense. The original constrained optimization, as specified in Equation (1), then behaves as an unconstrained optimization. This process can be considered the "objective optimizing process".

3.2. KKT Condition and Convergence Analysis

In this section, two theorems regarding the RNN-NOPS's convergence and optimality are presented.
Theorem 1.
Consider the RNN-NOPS in Equation (21a–c) for solving the constrained NOP in Equations (1) and (2). If $(x, y)$ is an equilibrium of the RNN-NOPS, then $(x, y)$ is a KKT point of the optimization problem.
Proof. 
Let $(x^*, y^*)$ be a KKT point; from Equation (3) we have:
$$\nabla f(x^*) + \sum_{k=1}^{m} \nabla c_k(x^*)\, y_k^* = 0, \qquad c_k(x^*)\, y_k^* = 0, \qquad y_k^* \ge 0, \qquad c_k(x^*) \le 0 \tag{33}$$
where $x_i^* \ge 0$ is included in $c_k(x^*) \le 0$.
Given that $(x, y)$ is an equilibrium with $\dot{x}_i = \dot{y}_k = 0$: according to Remark 2, $\dot{y}_k = 0$ is met only if $c_k(x) \le 0$; hence, given $\dot{y}_k = 0$, we have $c_k(x) \le 0$, that is, the 4th KKT condition in Equation (33) is met.
According to Remark 2, when $c_k(x) \le 0$, $y_k$ is only stable at $y_k = 0$, so given $\dot{y}_k = 0$, $y_k = 0$ is obtained, that is, the 3rd KKT condition in Equation (33) is satisfied.
Given that $(x, y)$ is an equilibrium, $\dot{x}_i = 0$ for all $i$; then Equation (21a) yields:
$$-\lambda_1 \left[ \frac{\partial f(x)}{\partial x_i} + \sum_{k=1}^{m} \frac{\partial c_k(x)}{\partial x_i} \Big( y_k + \big(c_k(x)\big)^+ \Big) \right] = -\lambda_1 \left[ \frac{\partial f(x)}{\partial x_i} + \sum_{k=1}^{m} \frac{\partial c_k(x)}{\partial x_i}\, y_k \right] = 0 \tag{34}$$
that is, the 1st KKT condition in Equation (33) is satisfied.
Finally, from $c_j(x) \le 0$ along with Remark 2, $y_j$ is stable only if $y_j = 0$; then $y_j = 0$ and $c_j(x)\, y_j = 0$, that is, the 2nd KKT condition in Equation (33) holds.
Thereby, the proof is finished. □
Remark 4.
Based on Lyapunov theory along with Assumptions 1, the convergence of the RNN-NOPS can now be investigated. Firstly, a constraint mapping variable $w = [w_1, \ldots, w_k, \ldots, w_m]^T$ is defined with $w_k = (c_k(x))^+$; then $w_k = 0$ given $c_k(x) \le 0$, and according to Remark 2, $y_k$ is stable only if $y_k = 0$ and $c_k(x) \le 0$.
For a constraint recovering process, there is a single equilibrium $\mathbf{0}$ in the space $\mathbb{R}^m$ where $w$ lies; $\mathbf{0}$ can be seen as the mapping of the whole feasible region, and therefore, for the subsequent objective optimizing process, $w$ remains stable at $w = \mathbf{0}$. The theorem on the RNN-NOPS's convergence is then given as follows:
Theorem 2.
For a whole solving process of the RNN-NOPS, consisting of one constraint recovering process and one objective optimizing process, under Assumptions 1, when one constraint is violated with $c_a(x) > 0$, the equilibrium $\mathbf{0}$ of the space $\mathbb{R}^m$ where $w$ lies is asymptotically stable.
Proof. 
Consider the Lyapunov function $V$:
$$V = \frac{1}{2}\, w^T w \tag{35}$$
where $w = [w_1, \ldots, w_k, \ldots, w_m]^T$ and $w_k = (c_k(x))^+$. We have $V \ge 0$, and $V$ is radially unbounded. When the state vector $x$ is within the feasible region $\Omega = \{ x \mid c_k(x) \le 0,\; k = 1, \ldots, m \}$, $w$ stays at the equilibrium $\mathbf{0}$ of the $\mathbb{R}^m$ space and $V = 0$, i.e., $V$ reaches its minimum value. Then we have:
$$V = \frac{1}{2} \sum_{k=1}^{m} w_k^2 \tag{36}$$
and
$$\dot{V} = \sum_{k=1}^{m} w_k\, \frac{d w_k}{dt} \tag{37}$$
where $\dot{V}$ is the sum of $m$ terms, and $w_k = 0$ when $c_k(x) \le 0$. Only if there is some $c_a(x) > 0$ can $w_a \dot{w}_a \ne 0$, and in this case we also have $w_a = c_a(x)$ and $\dot{w}_a = dc_a(x)/dt$. That is, the value of $\dot{V}$ is determined by $c_a(x)$ and its corresponding multiplier $y_a$:
$$\dot{V} = w_a\, \dot{w}_a = c_a(x)\, \frac{d c_a(x)}{dt} \tag{38}$$
According to Proposition 1, $c_a(x) > 0$ makes the corresponding $y_a$ keep increasing until it is sufficiently large for Equation (29) to hold, so that $dc_a(x)/dt < 0$ by Equation (32); then we have:
$$\dot{V} = c_a(x)\, \frac{d c_a(x)}{dt} < 0 \tag{39}$$
Therefore, the equilibrium $\mathbf{0}$ of the $\mathbb{R}^m$ space where $w$ lies is globally asymptotically stable.
Thereby, the proof is finished. □
Remark 5.
The convergence properties discussed above are based on Assumptions 1, where the first condition can easily be met by properly constructing the objective function. For example, if $f(x) = \frac{1}{F_{reg}^2}$ were used instead of $f(x) = \frac{1}{1 + F_{reg}^2}$, then $\partial f(x)/\partial x_i$ would be unbounded as $F_{reg} \to 0$. The second condition in Assumptions 1 is also mild and is valid for the optimization problem formulated in Equations (19) and (20).

3.3. Analysis of the NOP to Be Solved

In this section, the EHB force allocation problem in NOP form is verified to satisfy Assumptions 1.
Remark 6.
The partial derivatives of $f(x)$ and $c_j(x)$, $j = 1, \ldots, m$, with respect to $F_{reg}$ and $F_{ff}$ are given in Table 2. Substituting the parameter values, it can be seen that the denominator of $\partial f(x)/\partial F_{reg}$ is always positive and the numerator is bounded, and that $\partial f(x)/\partial F_{ff} = 0$. Meanwhile, for any $c_k(x)$, $\partial c_k(x)/\partial F_{reg}$ and $\partial c_k(x)/\partial F_{ff}$ are not equal to 0 at the same time. For example, $\partial c_5(x)/\partial F_{reg} = \partial c_5(x)/\partial F_{ff} = 0$ would require $L_a - h_g z = -(L_b + h_g z)$, which has no solution for $z$. Therefore, the conditions in Assumptions 1 are all satisfied. It is also worth noting that the only exception, $c_9(x)$, is always satisfied when the driving cycle does not exceed the operating limit of the vehicle; therefore, $c_9(x)$ has no effect on the update of the state variables.
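As a quick sanity check of the first entry of Table 2, the analytic derivative of $f(x)$ with respect to $F_{reg}$ can be compared against a central finite difference; the test point below is arbitrary.

```python
# Check d f / d F_reg for f = 1 / (1 + F_reg^2) against a central finite difference.
def f(F_reg):
    return 1.0 / (1.0 + F_reg**2)

def df_analytic(F_reg):                      # corresponding entry of Table 2
    return -2.0 * F_reg / (F_reg**2 + 1.0)**2

F_reg, h = 0.7, 1e-6
df_numeric = (f(F_reg + h) - f(F_reg - h)) / (2.0 * h)
print(df_analytic(F_reg), df_numeric)        # the two values should agree to about 1e-9
```

The magnitude of this derivative is bounded for all $F_{reg}$ (it peaks at $F_{reg} = 1/\sqrt{3}$), which is exactly what condition (i) of Assumptions 1 requires of the objective.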

4. Simulations and Discussion

In this section, a comparative study of the EHB force allocation problem is carried out under both a predefined braking process and a standard driving cycle. The aforementioned extended projection neural network (EPNN) [16] is implemented for comparison.

4.1. Learning Performance Evaluation

In this section, the EHB force allocation strategies are evaluated on the EV under a designed braking process, with the initial speed set to 18 m/s and the acceleration set to −1 m/s² during 1–4 s, −2 m/s² during 4–7 s and −3 m/s² during 7–10 s, so that the EV is stationary at the 10th second. In this way, the strategies can be validated under various velocities and braking rates.
The simulation step is set to 0.1 s. At every simulation step, the RNN-NOPS and the EPNN are each allocated 5 × 10⁴ iterations to make sure the training is sufficient. The learning rates of the RNN-NOPS are set to $\lambda_1 = \lambda_2 = 0.04$; for the EPNN, $\alpha = 0.42$. All $y_k$ are initialized to 0. $F_{reg}$ and $F_{ff}$ are both initialized to −100, which makes $c_5(x) > 0$ at some steps with $z \in [0.1,\, 0.61]$, so that the training process of both networks must take the constraints into consideration.
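The designed braking profile and the per-step solve structure described above can be summarized by the following sketch; the inner RNN-NOPS call is indicated only as a comment, since the full update of Equation (21a–c) over the constraints of Equation (20) is omitted here for brevity.

```python
DT = 0.1                 # simulation step, s
g = 9.8

def decel(t):
    """Designed braking process: -1, -2 and -3 m/s^2 during 1-4 s, 4-7 s and 7-10 s."""
    if t < 1.0:
        return 0.0
    if t < 4.0:
        return 1.0
    if t < 7.0:
        return 2.0
    return 3.0

v = 18.0                 # initial speed, m/s
for k in range(int(10.0 / DT)):
    t = k * DT
    a = decel(t)
    z = a / g            # braking rate, Equation (15)
    # Here the RNN-NOPS of Equation (21a-c) would be run for 5e4 iterations with
    # lambda1 = lambda2 = 0.04 to allocate [F_reg, F_ff] for this step, using the
    # initialization described in the text.
    v = max(v - a * DT, 0.0)

print(round(v, 2))       # the vehicle is stationary at the 10th second
```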
The speed of the designed braking process and the EV's actual speed under RNN-NOPS control are shown in Figure 3, where it can be seen that the vehicle speed under the control of the RNN-NOPS follows the target speed with a small delay and the overall trends are consistent. The speed-following results with the EPNN are similar.
Taking two representative constraints, $c_5(x)$ and $c_6(x)$, as examples, their 59 training trajectories (representing the 59 sample times at which $c_5(x)$ and $c_6(x)$ are involved) during the first 5 × 10⁴ iterations of the RNN-NOPS and the EPNN are shown in Figure 4. It can be seen that $c_5(x)$ is often positive at the initial iterations, meaning that $\phi_r$ exceeds $\phi_f$. Since the braking rate is fixed at every step, according to Equation (20), increasing $\phi_f$ is the feasible way to recover the violated $c_5(x)$, which means $c_6(x)$ must be increased. From Figure 4b,d, it can be seen that the negative $c_6(x)$ is increased but remains within its feasible range, indicating that both algorithms effectively find a way to recover the violated constraints.
The overall results during the designed braking process are shown in Table 3, where it can be seen that all the constraints are fully obeyed by both algorithms. With regard to the overall regenerative energy, the RNN-NOPS recovers 6.77% more energy than the EPNN.

4.2. Braking Performance Evaluation

It is found that, for both strategies, the braking time spans from 1.3 s to 10.9 s. The braking results are shown in Figure 5, from which the following observations can be made:
(i) The adhesion utilization results of the RNN-NOPS and the EPNN are shown in Figure 5a,b, respectively. When $z = 0.1$, $(z + 0.07)/0.85 = 0.2$; when $z = 0.15$, $(z + 0.07)/0.85 \approx 0.2588$. Then, from Equation (20), given $(z + 0.07)/0.85 < 0.2588$, $c_5(x)$ is not involved in the NOP and $\phi_r > \phi_f$ is allowed; and when $(z + 0.07)/0.85 < 0.2$, $c_6(x)$ and $c_7(x)$ are not considered and $\phi_f,\, \phi_r > (z + 0.07)/0.85$ is allowed.
The constraint $c_5(x)$ is not involved during 1.3–5.0 s, so it can be seen from Figure 5a,b that $\phi_r > \phi_f$ during this period. The difference is that, in this case, $\phi_r$ of the RNN-NOPS is much lower than that of the EPNN, indicating that the RNN-NOPS decreases the torque dissipated by friction on the rear axle and brings more of the braking effort to the front axle for possible regeneration.
The braking rate $z$ is higher during 5.1–10.9 s, when $c_5(x)$, $c_6(x)$ and $c_7(x)$ are all involved. The EPNN concentrates more torque on the front axle during this period.
(ii) The torque allocation results of the RNN-NOPS and the EPNN are shown in Figure 5c,d, respectively. It can be seen that $T_{reg} \le T_{reg\,\max}$ holds for both algorithms, and $T_{ff}$ and $T_{reg}$ are close in both figures. However, during 1–4 s, $T_{reg}$ of the EPNN is almost 0, while that of the RNN-NOPS is much higher. This is because the EPNN tends to allocate more braking force/torque to the rear axle, where all the energy is dissipated by the friction of hydraulic braking. This result explains the different regenerative energies of the two algorithms.
(iii) The power allocation results of the RNN-NOPS and the EPNN are shown in Figure 5e,f, respectively. It can be seen that, as the braking process approaches completion, the rotational speed of the motor decreases, which reduces the energy available for regeneration. During this period, the EPNN concentrates more torque on the front axle but does not regenerate much more energy than the RNN-NOPS. This is verified by the results in Table 3.

4.3. Performance under Standard Driving Cycle

In this section, the comparative study for solving the NOP is conducted under a standard driving cycle, which reflects a practical environment and driver behavior.
The SC03 Supplemental Federal Test Procedure (SFTP) is a testing cycle proposed by the Environmental Protection Agency (EPA), and it is chosen as the testing cycle in this section. The simulation step size is set to 1 s, and 1 × 10⁴ iterations are allocated to both algorithms at every step. The learning rates of the RNN-NOPS are selected as $\lambda_1 = \lambda_2 = 0.32$, and $\alpha = 2$ for the EPNN. Other settings are the same as in Section 4.1.
The speed-following results are shown in Figure 6, where the speed of SC03 and the speed under the control of the RNN-NOPS are shown in Figure 6a, and the speed-following error is shown in Figure 6b. It can be seen that SC03 is well followed.
The trajectories of $c_5(x)$ and $c_6(x)$ are shown in Figure 7, where the results of the RNN-NOPS are illustrated in Figure 7a,b, and those of the EPNN are shown in Figure 7c,d. The first 600 iterations of the RNN-NOPS and the first 8000 iterations of the EPNN are given, respectively. In Figure 7a,c, 13 training trajectories, representing the 13 sample times with $z \in [0.15,\, 0.8]$ at which $c_5(x)$ is involved, are given, respectively. In Figure 7b,d, 34 training trajectories, representing the 34 sample times with $z \in [0.1,\, 0.61]$ at which $c_6(x)$ is involved, are given, respectively. The initial $c_5(x)$ is sometimes violated, and finally the trajectories all settle at valid negative values for both algorithms, indicating reliable learning performance.
Finally, the overall results in terms of regenerative energy and constraint violation are shown in Table 4, where it can be seen that all the constraints are obeyed by both algorithms, yet the overall regenerative energy of the RNN-NOPS is 15.39% more than that of the EPNN.

5. Conclusions

In this paper, an RNN-NOPS has been proposed and applied to the EHB force allocation of EVs. The RNN-NOPS is designed to ensure that the state variables converge to the feasible region and that the equilibria meet the KKT conditions of the NOP. The network's update consists of two processes dealing with the constraints and the objective function, respectively. It is reported that the overall regenerative energy of the RNN-NOPS is 15.39% more than that of the method used for comparison under the SC03 cycle. The main benefits of the proposed RNN-NOPS are threefold: (1) the RNN-based method with parallel computation is suitable for time-critical industrial applications; (2) guaranteed optimality can be obtained by the RNN-NOPS with the constraints met, under a requirement on only the first-order partial derivatives of the NOP; (3) the simulation results of the EHB force allocation problem under different braking processes have demonstrated the excellent performance of the RNN-NOPS regarding convergence, braking safety, regenerative energy, etc. A limitation of the RNN-NOPS is that the solving result is sensitive to the parameter settings; with an inappropriate learning rate, not all constraints are satisfied before convergence. Further work on RNN-based NOP-solving models aims at broadening the application scope with reduced model complexity and less sensitivity to parameters.

Author Contributions

Conceptualization, J.Y.; methodology, J.Y.; software, J.Y.; validation, J.Y.; formal analysis, J.Y., H.K. and Z.M.; investigation, J.Y.; resources, H.K. and Z.M.; data curation, J.Y. and Z.M.; writing—original draft preparation, J.Y.; writing—review and editing, H.K. and Z.M.; visualization, J.Y.; supervision, H.K. and Z.M.; project administration, H.K.; funding acquisition, H.K. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by Anhui Provincial Key Research and Development Plan, grant number JZ2021AKKG0310, National Science and Technology Support Program, grant number 2014BAG06B02 and Fundamental Research Funds for the Central Universities, grant number 2014HGCH0003.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Gao, Y.; Chen, L.; Ehsani, M. Investigation of the Effectiveness of Regenerative Braking for EV and HEV; SAE Transactions: Warrendale, PA, USA, 1999; pp. 3184–3190.
2. Kim, D.H.; Kim, J.M.; Hwang, S.H.; Kim, H.S. Optimal brake torque distribution for a four-wheel drive hybrid electric vehicle stability enhancement. Proc. Inst. Mech. Eng. Part D J. Automob. Eng. 2007, 221, 1357–1366.
3. Katoch, S.; Chauhan, S.S.; Kumar, V. A review on genetic algorithm: Past, present, and future. Multimed. Tools Appl. 2021, 80, 8091–8126.
4. Satzger, C.; de Castro, R. Predictive brake control for electric vehicles. IEEE Trans. Veh. Technol. 2017, 67, 977–990.
5. Behrooz, F.; Mariun, N.; Marhaban, M.H.; Radzi, M.A.M.; Ramli, A.R. Review of control techniques for HVAC systems—nonlinearity approaches based on fuzzy cognitive maps. Energies 2018, 11, 495.
6. Liu, Q.; Guo, Z.; Wang, J. A one-layer recurrent neural network for constrained pseudoconvex optimization and its application for dynamic portfolio optimization. Neural Netw. 2012, 26, 99–109.
7. Cui, X.; Li, X.; Li, D. Unified framework of mean-field formulations for optimal multi-period mean-variance portfolio selection. IEEE Trans. Autom. Control 2014, 59, 1833–1844.
8. Leung, M.-F.; Wang, J. Minimax and biobjective portfolio selection based on collaborative neurodynamic optimization. IEEE Trans. Neural Netw. Learn. Syst. 2020, 32, 2825–2836.
9. Pan, Y.; Wang, J. Model predictive control of unknown nonlinear dynamical systems based on recurrent neural networks. IEEE Trans. Ind. Electron. 2011, 59, 3089–3101.
10. Yan, Z.; Wang, J. Robust model predictive control of nonlinear systems with unmodeled dynamics and bounded uncertainties based on neural networks. IEEE Trans. Neural Netw. Learn. Syst. 2013, 25, 457–469.
11. Yan, Z.; Wang, J. Nonlinear model predictive control based on collective neurodynamic optimization. IEEE Trans. Neural Netw. Learn. Syst. 2015, 26, 840–850.
12. Peng, Z.; Wang, J.; Wang, D. Distributed maneuvering of autonomous surface vehicles based on neurodynamic optimization and fuzzy approximation. IEEE Trans. Control Syst. Technol. 2017, 26, 1083–1090.
13. Xia, Y.; Wang, J. A one-layer recurrent neural network for support vector machine learning. IEEE Trans. Syst. Man Cybern. Part B (Cybern.) 2004, 34, 1261–1269.
14. Solodov, M.V.; Tseng, P. Modified projection-type methods for monotone variational inequalities. SIAM J. Control Optim. 1996, 34, 1814–1830.
15. Konnov, I.V. A class of combined iterative methods for solving variational inequalities. J. Optim. Theory Appl. 1997, 94, 677–693.
16. Xia, Y.; Wang, J. A recurrent neural network for nonlinear convex optimization subject to nonlinear inequality constraints. IEEE Trans. Circuits Syst. I Regul. Pap. 2004, 51, 1385–1394.
17. Xia, Y.; Feng, G.; Wang, J. A novel recurrent neural network for solving nonlinear optimization problems with inequality constraints. IEEE Trans. Neural Netw. 2008, 19, 1340–1353.
18. Tank, D.; Hopfield, J. Simple ‘neural’ optimization networks: An A/D converter, signal decision circuit, and a linear programming circuit. IEEE Trans. Circuits Syst. 1986, 33, 533–541.
19. Khan, N.; Haq, I.U.; Ullah, F.U.M.; Khan, S.U.; Lee, M.Y. CL-Net: ConvLSTM-based hybrid architecture for batteries’ state of health and power consumption forecasting. Mathematics 2021, 9, 3326.
20. Chen, R.T.; Rubanova, Y.; Bettencourt, J.; Duvenaud, D.K. Neural ordinary differential equations. Adv. Neural Inf. Process. Syst. 2018, 31, 6572–6583.
21. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778.
22. Tkachenko, R.; Izonin, I.; Vitynskyi, P.; Lotoshynska, N.; Pavlyuk, O. Development of the non-iterative supervised learning predictor based on the Ito decomposition and SGTM neural-like structure for managing medical insurance costs. Data 2018, 3, 46.
23. Li, G.; Yan, Z.; Wang, J. A one-layer recurrent neural network for constrained nonconvex optimization. Neural Netw. 2015, 61, 10–21.
24. Che, H.; Wang, J. A collaborative neurodynamic approach to global and combinatorial optimization. Neural Netw. 2019, 114, 15–27.
25. Bazaraa, M.S.; Sherali, H.D.; Shetty, C.M. Nonlinear Programming: Theory and Algorithms; John Wiley & Sons: Hoboken, NJ, USA, 2013.
26. Xia, Y.; Wang, J.; Guo, W. Two projection neural networks with reduced model complexity for nonlinear programming. IEEE Trans. Neural Netw. Learn. Syst. 2019, 31, 2020–2029.
27. Granville, S.; Rodrigo de Miranda Alves, F. Active-reactive coupling in optimal reactive dispatch: A solution via Karush–Kuhn–Tucker optimality conditions. IEEE Trans. Power Syst. 1994, 9, 1774–1779.
28. Huang, Y. Lagrange-type neural networks for nonlinear programming problems with inequality constraints. In Proceedings of the 44th IEEE Conference on Decision and Control, Seville, Spain, 15 December 2005; pp. 4129–4133.
29. La Salle, J.P. The Stability of Dynamical Systems; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1976.
30. Xia, Y.; Leung, H.; Wang, J. A projection neural network and its application to constrained optimization problems. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 2002, 49, 447–458.
31. Dall'Anese, E.; Simonetto, A.; Becker, S.; Madden, L. Optimization and learning with information streams: Time-varying algorithms and applications. IEEE Signal Process. Mag. 2020, 37, 71–83.
32. Guo, J.; Wang, J.; Cao, B. Study on braking force distribution of electric vehicles. In Proceedings of the 2009 Asia-Pacific Power and Energy Engineering Conference, Wuhan, China, 27–31 March 2009; pp. 1–4.
33. Ma, K.; Chu, L.; Yao, L.; Wang, Y. Study on control strategy for regenerative braking in a pure electric vehicle. In Proceedings of the 2nd International Conference on Electronic & Mechanical Engineering and Information Technology, Shenyang, China, 7 September 2012; pp. 1875–1878.
34. ECE/324/Rev.1/Add.12/Rev.8, Addendum 12: Regulation No. 13, Agreement Concerning the Adoption of Uniform Technical Prescriptions for Wheeled Vehicles, Equipment and Parts which can be Fitted and/or be Used on Wheeled Vehicles and the Conditions for Reciprocal Recognition of Approvals Granted on the Basis of these Prescriptions. United Nations, 3 March 2014. Available online: https://www.unece.org/fileadmin/DAM/trans/main/wp29/wp29regs/updates/R013r8e.pdf (accessed on 10 February 2021).
35. Khalil, H. Nonlinear Systems; Prentice-Hall: Hoboken, NJ, USA, 2002.
Figure 1. Forces acting on a vehicle while braking on level ground [32].
Figure 2. Pure EV model in ADVISOR.
Figure 3. Speed of the EV during the braking process under control of the RNN-NOPS.
Figure 4. Trajectories of $c_5(x)$ and $c_6(x)$ for (a,b) the RNN-NOPS and (c,d) the EPNN, respectively.
Figure 5. Braking performance of the RNN-NOPS and the EPNN in terms of adhesion utilization in (a,b), torque allocation in (c,d), and power allocation in (e,f), respectively.
Figure 6. SC03 speed and real speed under control of the RNN-NOPS in (a), and speed-following error in (b).
Figure 7. Trajectories during the training process of the RNN-NOPS and the EPNN under the SC03 cycle in terms of constraint 5 in (a) and (c), respectively, and constraint 6 in (b) and (d), respectively.
Table 1. Parameters of the considered vehicle.
Symbol | Meaning | Quantity
$m_v$ | Vehicle mass | 1144 kg
$m_c$ | Vehicle cargo mass | 136 kg
$h_g$ | Height of the vehicle center of mass | 0.5 m
$L$ | Wheelbase | 2.6 m
$L_a$ | Longitudinal distance from the mass center to the front axle | 1.04 m
$L_b$ | Longitudinal distance from the mass center to the rear axle | 1.56 m
$r$ | Wheel radius | 0.282 m
$F_A$ | Frontal area | 2 m²
$C_d$ | Coefficient of aerodynamic drag | 0.335
$f$ | Coefficient of rolling resistance | 0.009
$i_{gb}$ | Ratio of the single-speed transmission | 2.9362
Table 2. First-order partial derivatives of the objective function and the constraints with respect to $x_i$.
$\partial f(x)/\partial F_{reg} = -\dfrac{2 F_{reg}}{(F_{reg}^2 + 1)^2}$; $\partial c_1/\partial F_{reg} = 1$; $\partial c_2/\partial F_{reg} = -1$; $\partial c_3/\partial F_{reg} = -1$; $\partial c_4/\partial F_{reg} = 1$; $\partial c_5/\partial F_{reg} = -\dfrac{L}{G(L_a - h_g z)} - \dfrac{L}{G(L_b + h_g z)}$
$\partial c_6/\partial F_{reg} = \dfrac{L}{G(L_b + h_g z)}$; $\partial c_7/\partial F_{reg} = -\dfrac{L}{G(L_a - h_g z)}$; $\partial c_8/\partial F_{reg} = 1$; $\partial c_9/\partial F_{reg} = 0$; $\partial c_{10}/\partial F_{reg} = -1$; $\partial c_{11}/\partial F_{reg} = 0$
$\partial f(x)/\partial F_{ff} = 0$; $\partial c_1/\partial F_{ff} = 1$; $\partial c_2/\partial F_{ff} = -1$; $\partial c_3/\partial F_{ff} = -1$; $\partial c_4/\partial F_{ff} = 1$; $\partial c_5/\partial F_{ff} = -\dfrac{L}{G(L_a - h_g z)} - \dfrac{L}{G(L_b + h_g z)}$
$\partial c_6/\partial F_{ff} = \dfrac{L}{G(L_b + h_g z)}$; $\partial c_7/\partial F_{ff} = -\dfrac{L}{G(L_a - h_g z)}$; $\partial c_8/\partial F_{ff} = 0$; $\partial c_9/\partial F_{ff} = 0$; $\partial c_{10}/\partial F_{ff} = 0$; $\partial c_{11}/\partial F_{ff} = -1$
Table 3. Overall results under the designed braking process in terms of regenerative energy and constraint violation.
 | RNN-NOPS | EPNN
Overall regenerative energy (J) | 5.3954 × 10⁴ | 5.0532 × 10⁴
Overall constraint violation | 0 | 0
Table 4. Results of the algorithms under SC03 in terms of regenerative energy and constraint violation.
 | RNN-NOPS | EPNN
Overall regenerative energy (J) | 1.3822 × 10⁵ | 1.1978 × 10⁵
Overall constraint violation | 0 | 0
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
