# Multi-Level Deceleration Planning Based on Reinforcement Learning Algorithm for Autonomous Regenerative Braking of EV


## Abstract


## 1. Introduction

## 2. Algorithm Overview

## 3. Simulation Environments

#### 3.1. Vehicle Model Description

#### 3.2. Parameter Identification

#### 3.3. Regenerative Torque Control

## 4. Planning Algorithms

#### 4.1. Intelligent Driver Model-Based Planning

#### 4.1.1. Description of the Prediction Process

#### 4.1.2. Parameter Learning Algorithm

#### 4.1.3. Planning Results According to Each Driver

#### 4.2. Optimization-Method-Based Planning

#### 4.2.1. Model Predictive Control Scheme

… m/s^{2} to −5 m/s^{2}.

#### 4.2.2. Planning Results Using the MPC Algorithm

The weight value ${q}_{df}$ is 4, and the weight value ${q}_{vf}$ is 0.1.
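Read together with the matrices defined in Section 4.2.1, these weights fit a standard quadratic tracking cost of the form below; this is a reconstruction of the usual MPC stage cost, not the paper's verbatim expression.

```latex
J \;=\; \sum_{k=0}^{N-1} \left( X_k - X_r \right)^{\top} Q \left( X_k - X_r \right),
\qquad
Q \;=\; \begin{bmatrix} q_{df} & 0 \\ 0 & q_{vf} \end{bmatrix}
  \;=\; \begin{bmatrix} 4 & 0 \\ 0 & 0.1 \end{bmatrix}.
```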

## 5. Planning Management

#### 5.1. Overview of the Reinforcement Learning Algorithm

#### 5.2. Reinforcement Learning Algorithm for Planning Management

#### 5.3. Q Network Design and Learning Algorithm

## 6. Algorithm Validation Results

#### 6.1. Learning Results for a Deceleration Case

#### 6.2. Validation Results for Various Driving Conditions

## 7. Conclusions

## Author Contributions

## Funding

## Acknowledgments

## Conflicts of Interest

## References


Symbol | Description | Value [unit] | Symbol | Description | Value [unit]
---|---|---|---|---|---
${a}_{v}$ | Vehicle acceleration | [m/s^{2}] | ${I}_{w}$ | Inertia of wheel | 0.14 [kgm^{2}]
${v}_{v}$ | Vehicle velocity | [m/s] | ${I}_{m}$ | Inertia of motor | 0.028 [kgm^{2}]
${T}_{m}$ | Motor torque | [Nm] | ${I}_{s}$ | Inertia of shaft | 0.75 [kgm^{2}]
${F}_{d}$ | Drag force | [N] | ${c}_{d}$ | Air drag coefficient | 0.171 [Ns^{2}/m^{2}]
${r}_{w}$ | Wheel radius | 0.318 [m] | ${c}_{a}$ | Rolling coefficient a | 143 [N]
${\theta}_{s}$ | Gear ratio of shaft | 7.98 [−] | ${c}_{b}$ | Rolling coefficient b | 0.389 [Ns^{2}/m^{2}]
${\eta}_{s}$ | Efficiency of shaft | 0.99 [−] | ${m}_{a}$ | Additional mass | 100 [kg]
${m}_{e}$ | Empty vehicle mass | 1685 [kg] | | |
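As a rough illustration of how the parameters above combine in a longitudinal model, the sketch below computes the motor braking torque needed to realize a commanded deceleration. The road-load form $F_d = c_a + c_b v^2$ and the equivalent-mass treatment of the listed inertias are assumptions for illustration, not necessarily the paper's exact resistance model.

```python
# Hedged sketch: longitudinal force balance for regenerative braking, using
# the parameter values from the table above. The road-load form
# F_d = c_a + c_b * v**2 and the equivalent-mass treatment are assumptions.

R_W, THETA_S, ETA_S = 0.318, 7.98, 0.99   # wheel radius [m], gear ratio, shaft efficiency
M_E, M_A = 1685.0, 100.0                  # empty and additional mass [kg]
I_W, I_M, I_S = 0.14, 0.028, 0.75         # wheel, motor, shaft inertias [kg m^2]
C_A, C_B = 143.0, 0.389                   # rolling coefficients [N], [Ns^2/m^2]

def motor_brake_torque(v, a_cmd):
    """Motor torque [Nm] that realizes deceleration a_cmd [m/s^2] at speed v [m/s]."""
    # Equivalent mass: reflect rotating inertias onto the vehicle body
    # (motor inertia is scaled by the squared gear ratio).
    m_eq = M_E + M_A + (I_W + I_S + I_M * THETA_S**2) / R_W**2
    f_road = C_A + C_B * v**2              # resistive force already decelerates the car
    f_brake = m_eq * (-a_cmd) - f_road     # extra force the motor must supply
    t_wheel = max(f_brake, 0.0) * R_W      # braking torque at the wheel
    return t_wheel / (THETA_S * ETA_S)     # reflected to the motor shaft

print(motor_brake_torque(20.0, -2.0))      # torque for -2 m/s^2 at 72 km/h
```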

Symbol | Description | Value [unit] | Symbol | Description | Value [unit]
---|---|---|---|---|---
${R}_{0}$ | Series resistance | 0.0016 [Ohm] | ${C}_{1a}$ | Short-term capacitance param a | −649
${R}_{1a}$ | Short-term resistance param a | 76.52 | ${C}_{1b}$ | Short-term capacitance param b | −64.3
${R}_{1b}$ | Short-term resistance param b | −7.95 | ${C}_{1c}$ | Short-term capacitance param c | 12,692
${R}_{1c}$ | Short-term resistance param c | 23.83 | ${C}_{2a}$ | Long-term capacitance param a | −78,409
${R}_{2a}$ | Long-term resistance param a | 5.21 | ${C}_{2b}$ | Long-term capacitance param b | −0.013
${R}_{2b}$ | Long-term resistance param b | −35.23 | ${C}_{2c}$ | Long-term capacitance param c | 30,802
${R}_{2c}$ | Long-term resistance param c | 124.9 | ${V}_{oc}$ | Open circuit voltage | 356 [V]
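The a/b/c triples above parameterize SOC-dependent fits for the two RC branches of the equivalent-circuit battery model. Since this excerpt does not state the fitted functional form, the sketch below takes $R_1$, $C_1$, $R_2$, $C_2$ as given inputs and shows only the discrete-time two-RC terminal-voltage update, using the series resistance and open-circuit voltage from the table; it is a minimal illustration, not the paper's implementation.

```python
# Hedged sketch: one discrete-time step of a two-RC equivalent-circuit
# battery model. R0 and V_oc come from the table above; the four RC values
# are inputs because their SOC-dependent fit is not stated in this excerpt.

R0, V_OC = 0.0016, 356.0  # series resistance [Ohm], open-circuit voltage [V]

def step(v_c1, v_c2, i_batt, r1, c1, r2, c2, dt=0.01):
    """Advance both RC branch voltages one step (i_batt > 0 = discharge).

    Returns the updated branch voltages and the terminal voltage [V].
    """
    # Forward-Euler update of each RC branch: dv/dt = i/C - v/(R*C)
    v_c1 += dt * (i_batt / c1 - v_c1 / (r1 * c1))
    v_c2 += dt * (i_batt / c2 - v_c2 / (r2 * c2))
    # Terminal voltage: OCV minus ohmic and branch voltage drops
    v_term = V_OC - i_batt * R0 - v_c1 - v_c2
    return v_c1, v_c2, v_term
```

During regenerative braking, `i_batt` would be negative (charging), which raises the terminal voltage above `V_OC` in this sign convention.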

$\mathit{X}$ | ${\mathit{X}}_{\mathit{r}}$ | $\mathit{A}$ | $\mathit{B}$ | $\mathit{Q}$
---|---|---|---|---
$\left[\begin{array}{c}\Delta s\\ \Delta v\end{array}\right]$ | $\left[\begin{array}{c}{d}_{r}\\ 0\end{array}\right]$ | $\left[\begin{array}{cc}1& \Delta t\\ 0& 1\end{array}\right]$ | $\left[\begin{array}{c}-0.5\Delta {t}^{2}\\ -\Delta t\end{array}\right]$ | $\left[\begin{array}{cc}{q}_{df}& 0\\ 0& {q}_{vf}\end{array}\right]$
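The matrices above define a linear tracking problem in the gap error $\Delta s$ and relative speed $\Delta v$. As a minimal sketch (not the paper's solver), the snippet below rolls the model forward under a constant deceleration command $u$ and scores it with $Q = \mathrm{diag}(q_{df}, q_{vf}) = \mathrm{diag}(4, 0.1)$, the weights reported in Section 4.2.2; the step size $\Delta t = 0.1$ s, the 50-step horizon, and the 0 to 5 m/s² command grid are assumptions, and the grid search stands in for a proper MPC optimizer.

```python
import numpy as np

# Hedged sketch: finite-horizon evaluation of the tracking problem defined
# by the table above. Grid search over a constant deceleration command is an
# illustrative simplification of the MPC scheme, not the paper's solver.

DT = 0.1                                   # assumed step size [s]
A = np.array([[1.0, DT], [0.0, 1.0]])      # state transition for [ds, dv]
B = np.array([[-0.5 * DT**2], [-DT]])      # effect of deceleration command u
Q = np.diag([4.0, 0.1])                    # weights q_df = 4, q_vf = 0.1

def horizon_cost(x0, d_r, u, n=50):
    """Quadratic tracking cost of holding deceleration u for n steps."""
    x, x_r = np.array(x0, float), np.array([d_r, 0.0])
    cost = 0.0
    for _ in range(n):
        x = A @ x + (B * u).ravel()        # propagate [gap, relative speed]
        e = x - x_r                        # tracking error vs. [d_r, 0]
        cost += float(e @ Q @ e)
    return cost

def plan(x0, d_r, n=50):
    """Pick the constant deceleration (0..5 m/s^2 magnitude) with least cost."""
    grid = np.linspace(0.0, 5.0, 51)
    return min(grid, key=lambda u: horizon_cost(x0, d_r, u, n))
```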

© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

Min, K.; Sim, G.; Ahn, S.; Park, I.; Yoo, S.; Youn, J. Multi-Level Deceleration Planning Based on Reinforcement Learning Algorithm for Autonomous Regenerative Braking of EV. *World Electr. Veh. J.* **2019**, *10*, 57. https://doi.org/10.3390/wevj10030057