Article

An Improved Multimodal Trajectory Prediction Method Based on Deep Inverse Reinforcement Learning

1
School of Information Engineering, Chang’an University, Xi’an 710064, China
2
Key Laboratory of Road and Traffic Engineering of the Ministry of Education, College of Transportation Engineering, Tongji University, Shanghai 201804, China
3
RISE Research Institutes of Sweden AB, 41756 Gothenburg, Sweden
*
Author to whom correspondence should be addressed.
Electronics 2022, 11(24), 4097; https://doi.org/10.3390/electronics11244097
Submission received: 28 October 2022 / Revised: 1 December 2022 / Accepted: 6 December 2022 / Published: 8 December 2022

Abstract

With the rapid development of artificial intelligence technology, deep learning methods have been introduced for vehicle trajectory prediction in the internet of vehicles, since they provide relatively accurate prediction results, which is one of the critical links for guaranteeing safety in distributed mixed-driving scenarios. To further enhance prediction accuracy by making full use of complex traffic scenes, an improved multimodal trajectory prediction method based on deep inverse reinforcement learning is proposed. First, a fused dilated convolution module is introduced into the backbone of the existing multimodal trajectory prediction network to better extract raster features. Then, the reward update policy with inferred goals is improved by learning the state rewards of goals and paths separately instead of the original complex rewards, which reduces the requirement for predefined goal states. Furthermore, a correction factor is introduced into the existing trajectory generator module, which generates more diverse trajectories by penalizing trajectories with little difference. Extensive experiments on a popular public dataset indicate that the prediction results of the proposed method fit the basic structure of the given traffic scenario better over a long-term prediction range, which verifies its effectiveness.
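The fused dilated convolution module is only named here, not specified. As a rough illustration of the underlying operation, the sketch below implements a plain 2D dilated convolution in NumPy; the function name, valid padding, and single-channel setting are assumptions for illustration, not the paper's design. Inserting dilation − 1 gaps between kernel taps enlarges the receptive field without adding parameters, which is what makes dilation attractive for extracting features from large rasterized scene maps.

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation=1):
    """Valid-mode 2D cross-correlation with a dilated kernel (single channel)."""
    kh, kw = kernel.shape
    # Effective footprint after inserting (dilation - 1) gaps between taps.
    eh = (kh - 1) * dilation + 1
    ew = (kw - 1) * dilation + 1
    H, W = x.shape
    out = np.zeros((H - eh + 1, W - ew + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            # Strided slicing picks exactly the dilated tap positions.
            patch = x[i:i + eh:dilation, j:j + ew:dilation]
            out[i, j] = np.sum(patch * kernel)
    return out
```

With a 3×3 kernel and dilation 2, each output element sees a 5×5 window of the input while still using only nine weights.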
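The reward update policy is described only at a high level. A core ingredient of maximum entropy IRL methods of this kind is the soft (log-sum-exp) Bellman backup that turns learned state rewards into a stochastic policy. The toy sketch below shows that backup for a tabular MDP; the function name, the tabular setting, and the state-only reward are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def soft_value_iteration(reward, transitions, gamma=0.95, iters=50):
    """MaxEnt-style soft backup on a tabular MDP.

    reward:      (S,)   state rewards
    transitions: (S, A, S) transition probabilities P(s' | s, a)
    Returns the soft state values V(s) = log sum_a exp(Q(s, a)).
    """
    S, A, _ = transitions.shape
    V = np.zeros(S)
    for _ in range(iters):
        # Q(s, a) = r(s) + gamma * E_{s'}[V(s')]
        Q = reward[:, None] + gamma * transitions @ V
        # Soft maximum over actions instead of a hard max.
        V = np.log(np.exp(Q).sum(axis=1))
    return V
```

Replacing the hard max with a log-sum-exp is what lets gradients of the demonstration likelihood flow back into the reward parameters during IRL training.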
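The correction factor in the trajectory generator is said to penalize trajectories with little difference between them. One common way to realize such a penalty is greedy diverse selection: repeatedly pick the best-scoring candidate, then down-weight the remaining candidates that lie close to it. The sketch below is a hypothetical realization; the weight `alpha` and the inverse-distance penalty are assumed, not taken from the paper.

```python
import numpy as np

def select_diverse(trajs, scores, k=3, alpha=0.5):
    """Greedily pick k trajectory indices, penalizing near-duplicates.

    trajs:  (N, T, 2) candidate trajectories (T waypoints in 2D)
    scores: (N,)      base scores for each candidate
    alpha:  hypothetical correction weight trading score against diversity
    """
    chosen = []
    adj = scores.astype(float).copy()
    for _ in range(k):
        idx = int(np.argmax(adj))
        chosen.append(idx)
        adj[idx] = -np.inf  # never pick the same candidate twice
        # Penalize remaining candidates in proportion to their similarity
        # (inverse Frobenius distance) to the newly chosen trajectory.
        d = np.linalg.norm(trajs - trajs[idx], axis=(1, 2))
        adj = adj - alpha / (d + 1e-6)
    return chosen
```

Given two nearly identical high-scoring candidates and one distinct lower-scoring one, this picks one of the duplicates first and then jumps to the distinct trajectory, which is the multimodal behavior the correction factor aims for.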
Keywords: multimodal trajectory prediction; rasterization; dilated convolution; maximum entropy inverse reinforcement learning (MaxEnt IRL)

Share and Cite

MDPI and ACS Style

Chen, T.; Guo, C.; Li, H.; Gao, T.; Chen, L.; Tu, H.; Yang, J. An Improved Multimodal Trajectory Prediction Method Based on Deep Inverse Reinforcement Learning. Electronics 2022, 11, 4097. https://doi.org/10.3390/electronics11244097


Chicago/Turabian Style

Chen, Ting, Changxin Guo, Hao Li, Tao Gao, Lei Chen, Huizhao Tu, and Jiangtian Yang. 2022. "An Improved Multimodal Trajectory Prediction Method Based on Deep Inverse Reinforcement Learning" Electronics 11, no. 24: 4097. https://doi.org/10.3390/electronics11244097


