A Survey on Vehicle Trajectory Prediction Procedures for Intelligent Driving
Abstract
1. Introduction
- (1) Perception layer: Modern intelligent vehicles are equipped with a variety of perception devices, such as cameras, millimeter-wave radar, LiDAR, and ultrasonic sensors, whose outputs are commonly fused into a bird's-eye-view (BEV) representation, to collect data on the surrounding area and support multimodal perception and modeling. However, the in-vehicle sensors in the perception module can only directly obtain numerical data such as vehicle speed, direction, position, and distance, which may contain inherent measurement errors. Furthermore, abstract information, such as the vehicle's own state, the driver's intent, or the behavior of surrounding targets (e.g., pedestrians, other vehicles), cannot be measured directly; it must instead be inferred through in-depth interpretation of the collected data, which can amplify uncertainties and errors in the system.
- (2) Core technology of trajectory prediction: Most intelligent vehicles follow a modeling–prediction–evaluation/output procedure for trajectory prediction, although different vehicle models from different companies exhibit different features. What they share is that the surrounding environmental information used in vehicle trajectory prediction is dynamic and heterogeneous, so prediction is divided here into a short-term domain and a long-term domain. A vehicle's driving intent is influenced not only by its own state but also depends critically on interactions with nearby vehicles, other moving objects, and static obstacles. In addition, vehicle-to-vehicle (V2V) and vehicle-to-infrastructure (V2I) communication can extend a vehicle's perception range far beyond the limits of onboard sensors. However, effectively modeling and analyzing these complex interactions remains highly challenging; in real-world scenarios, the intricacy of such conditions exacerbates the uncertainty in trajectory prediction and thereby reduces the accuracy of the predicted trajectories (a minimal physics-baseline sketch of short-term prediction follows this list).
- (3) Decision-making and planning of cooperation and optimization: Once the trajectory prediction algorithm outputs its results, real-time decision-making and takeover planning between driver and vehicle must follow immediately, and the decisions must be optimized when they are not good enough, supported by supercomputing and closed-loop testing. Many issues still need to be considered, e.g., takeover time, takeover process, takeover performance, takeover evaluation and optimization, and the driver's status and intent. Cooperation between drivers and vehicles strongly influences decision-making and planning based on the predicted trajectories, and poor cooperation can increase driving risk.
- (4) Scenario application: Vehicle trajectory prediction results are inherently multimodal. Some basic scenarios, such as freeways or expressways, are broadly similar, whereas specific scenarios differ: at an intersection, for instance, vehicles on the same road with identical historical trajectories at a given moment may carry out divergent real-world maneuvers. This necessitates modeling and predicting trajectories based on distinct driving intents to account for potential behavioral variations. Therefore, more specific and complicated scenarios should be constructed to reflect real-world traffic conditions.
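To make the short-term part of this procedure concrete, the following minimal sketch rolls a constant-velocity physics baseline forward and lets the positional uncertainty grow with the prediction horizon, mirroring how measurement noise from the perception layer propagates into a predicted trajectory. The function name `predict_constant_velocity` and the noise magnitudes are illustrative assumptions, not values taken from any of the surveyed systems.

```python
import numpy as np

def predict_constant_velocity(x, y, vx, vy, horizon_s=3.0, dt=0.1,
                              pos_noise_std=0.3, vel_noise_std=0.2):
    """Roll a constant-velocity model forward and attach a growing
    1-sigma position uncertainty, mimicking how measurement noise from
    the perception layer propagates into the predicted trajectory."""
    steps = int(round(horizon_s / dt))
    trajectory = []
    for k in range(1, steps + 1):
        t = k * dt
        px, py = x + vx * t, y + vy * t          # deterministic extrapolation
        # Uncertainty grows with the horizon: initial position noise plus
        # velocity noise integrated over time.
        sigma = np.sqrt(pos_noise_std**2 + (vel_noise_std * t)**2)
        trajectory.append((px, py, sigma))
    return trajectory

# Example: a vehicle at the origin driving at 15 m/s along the x-axis.
for px, py, sigma in predict_constant_velocity(0.0, 0.0, 15.0, 0.0)[9::10]:
    print(f"pos=({px:5.1f}, {py:4.1f}) m, 1-sigma radius = {sigma:.2f} m")
```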
2. Vehicle Trajectory Prediction Procedures
2.1. Perception Layer
2.2. Core Technology of Trajectory Prediction
- A. Vehicle trajectory prediction of the short-term domain
  - a. Physics-based models
  - b. Machine learning models
    - (1) Kalman filter prediction (see the sketch after this outline)
    - (2) Bayesian network prediction
    - (3) Markov prediction
- B. Vehicle trajectory prediction of the long-term domain
  - (1) Deep-learning method
  - (2) Driving intention
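As a concrete illustration of the Kalman-filter prediction named in the outline, the sketch below tracks a target with a standard constant-velocity Kalman filter and then rolls the filter forward open-loop to produce a short-term forecast with growing covariance. The state-space matrices and noise levels are illustrative assumptions, not values from the surveyed literature.

```python
import numpy as np

dt = 0.1
# Constant-velocity state [x, y, vx, vy]; F propagates it one step.
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1,  0],
              [0, 0, 0,  1]], dtype=float)
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)   # only position is measured
Q = np.eye(4) * 0.05                         # process noise (assumed)
R = np.eye(2) * 0.25                         # measurement noise (assumed)

def kf_update(x, P, z):
    """Standard Kalman predict + correct with a position measurement z."""
    x, P = F @ x, F @ P @ F.T + Q
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

def kf_predict_horizon(x, P, steps=30):
    """Open-loop roll-out used as the short-term trajectory forecast."""
    preds = []
    for _ in range(steps):
        x, P = F @ x, F @ P @ F.T + Q
        preds.append((x[0], x[1], np.sqrt(P[0, 0]), np.sqrt(P[1, 1])))
    return preds

# Track a vehicle from a few noisy position fixes, then forecast 3 s ahead.
x, P = np.zeros(4), np.eye(4)
for z in [np.array([1.5, 0.1]), np.array([3.1, 0.2]), np.array([4.4, 0.3])]:
    x, P = kf_update(x, P, z)
print(kf_predict_horizon(x, P)[-1])  # predicted position and std-devs at t+3 s
```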
2.3. Decision-Making Layer
Brand | Physics Model | Deep Learning | Deep Learning | Deep Learning | Specific Feature |
---|---|---|---|---|---|
BYD | Kinematic models: Generate baseline trajectory prediction for target vehicles/pedestrians; road topology constraints | Temporal Modeling (LSTM/GRU): Analyze historical trajectories (e.g., positional sequences over the past 2 s) to capture behavioral patterns | Interaction Awareness (GNN/Attention Mechanisms): Model dynamic interaction graphs between targets to predict group behaviors | Multimodal Probabilistic Output: Generate multiple plausible trajectories for the target over the next 3–5 s, annotated with confidence scores | Scenario-Specific Optimization: Urban intersections; highway scenarios; parking lots |
Tesla | Kinematic-constrained prediction: Estimates real-time acceleration, steering angles, and the curvature of dynamic targets to generate baseline motion trajectories | Transformer attention mechanism: Captures inter-target interactions (e.g., adjacent vehicles, lane-change intent, pedestrian crossing paths); Game-theoretic frameworks: Simulate human drivers’ “compete–cooperate” behaviors [53] | Spatiotemporal joint modeling: Integrates perception, prediction, and planning into a single neural network to directly output ego vehicle trajectories; Imitation learning optimization: Trains on millions of real-world driving samples to learn human driver preferences | Dynamic risk heatmap: Maps predicted trajectories into probabilistic risk fields, prioritizing avoidance of high-risk zones | Multimodal trajectory generation: Outputs six plausible trajectories for the target over the next 5 s, with calculated collision probabilities |
Li Auto | Physics-based baseline prediction: Generates baseline motion trajectories based on the target’s current physical state | Temporal Modeling (LSTM/Transformer) Analyzes the target’s historical trajectory to capture behavioral patterns | Interaction Awareness (Social-LSTM/GNN): Models interactions between the target and surrounding vehicles/pedestrians via Graph Neural Networks (GNNs) to predict intent | Scene Semantic Understanding: Infers legally permissible paths by integrating high-definition map data (lane markings, traffic signs, traffic light states) | Multimodal Probabilistic Output: Predicts multiple possible trajectories for the target over the next 3–5 s, assigning a probability distribution to each for downstream planning module decision-making |
AITO | Physics-based short-term prediction: Real-time modeling of current motion states for dynamic agents to generate baseline trajectories; ideal for simple scenarios with ultra-low latency (<50 ms) | Temporal Modeling (LSTM/Transformer): Analyze historical trajectories from the past 3–5 s to identify behavioral patterns | Social-GAN: Predict multimodal trajectories for targets in complex scenarios; Interactive game theoretic modeling: Simulate human driver strategies in conflicting scenarios [53] | Intersection intent recognition: Predict target vehicle intention using turn signals, lane deviation trends, and intersection topology | Pedestrian behavior prediction: Infer crossing intent by analyzing posture and environmental cues Non-motorized vehicle trajectory modeling: Tailored to China-specific mixed traffic scenarios, and to address irregular movement patterns of e-bikes and bicycles |
Xpeng | Hierarchical long-short term modeling: short-term (0–2 s) to generate high-confidence trajectories using kinematic models; long-term (2–5 s) to infer agent intentions by integrating HD map lane topology and traffic rules | Social-LSTM/GNN: Construct an interaction graph between dynamic agents to predict group behaviors; Attention mechanism: Identify key influencing factors to reduce computation load from irrelevant targets. | Game-theory-reinforcement learning fusion: Simulate human drivers’ compete–cooperate strategies [53]; | Spatiotemporal joint modeling: Unify perception, prediction and planning modules into a single network to output ego trajectories. Imitation + reinforcement learning hybrid: Train human-like decision-making capabilities using massive real-world driving data, while optimizing safety via reward functions in simulation | Multimodal probability output: Produce distribution maps of future trajectories, annotated with confidence scores (e.g., 80% straight, 15% left lane change, 5% emergency braking for main-lane vehicles) |
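Several rows of the table above list a multimodal probabilistic output, i.e., a small set of candidate futures each tagged with a confidence score. The sketch below shows one common way such a head can be structured, using a GRU history encoder with K trajectory and confidence heads in PyTorch; the architecture, dimensions, and the class name `MultiModalPredictor` are illustrative assumptions, not any manufacturer's actual network.

```python
import torch
import torch.nn as nn

class MultiModalPredictor(nn.Module):
    """Encode 2 s of history (20 steps at 10 Hz) with a GRU and emit K
    candidate futures plus a confidence score per mode, echoing the
    multimodal probabilistic outputs described in the table above."""
    def __init__(self, hist_len=20, fut_len=30, num_modes=6, hidden=64):
        super().__init__()
        self.encoder = nn.GRU(input_size=2, hidden_size=hidden, batch_first=True)
        self.traj_head = nn.Linear(hidden, num_modes * fut_len * 2)
        self.conf_head = nn.Linear(hidden, num_modes)
        self.num_modes, self.fut_len = num_modes, fut_len

    def forward(self, history):              # history: (B, hist_len, 2)
        _, h = self.encoder(history)         # h: (1, B, hidden)
        h = h.squeeze(0)
        trajs = self.traj_head(h).view(-1, self.num_modes, self.fut_len, 2)
        confs = torch.softmax(self.conf_head(h), dim=-1)  # sums to 1 per sample
        return trajs, confs

model = MultiModalPredictor()
hist = torch.randn(4, 20, 2)                 # a batch of 4 toy histories
trajs, confs = model(hist)
print(trajs.shape, confs.shape)              # (4, 6, 30, 2) and (4, 6)
```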
2.4. Scenario Application
- (1) Algorithmic approaches: Comparing the algorithmic foundations of manufacturer implementations (often focused on real-world performance and scalability) with those of academic methods (which may emphasize theoretical rigor and innovation) could reveal complementary strengths and weaknesses. For example, manufacturer implementations might excel in robustness and real-time performance, while academic methods might offer more sophisticated modeling of complex interactions or driving behaviors.
- (2) Data utilization: Manufacturers often have access to vast amounts of real-world driving data, which can inform and refine their trajectory prediction models. Academic approaches, on the other hand, might rely more heavily on simulated data or publicly available datasets. A comparison of how these different data sources are utilized could shed light on their respective advantages and limitations.
- (3) Performance metrics: Evaluating the performance of manufacturer implementations and academic methods using a common set of metrics (e.g., prediction accuracy, computational efficiency, robustness to noise) would enable a more direct comparison. This could help to identify which approaches excel in particular aspects of trajectory prediction and where there is room for improvement (common displacement-error metrics are sketched after this list).
- (4) Generalizability and transferability: Manufacturer implementations are often tailored to specific vehicle models and driving scenarios, whereas academic methods might aim for broader applicability. A comparative analysis could assess the generalizability and transferability of both types of approaches, highlighting the trade-offs between specialization and versatility.
- (5) Ethical and legal considerations: Manufacturer implementations and academic methods might also differ in how they address ethical and legal issues related to trajectory prediction, such as privacy concerns, liability in the event of accidents, and regulatory compliance. A comparison of these aspects could provide valuable insights for researchers and practitioners alike.
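For item (3), the displacement-error metrics most commonly used to compare trajectory predictors are the average and final displacement errors (ADE/FDE), often reported as the best of K modes for multimodal outputs. A minimal sketch, with toy data standing in for real predictions:

```python
import numpy as np

def ade_fde(pred, gt):
    """Average and final displacement error for one predicted trajectory.
    pred, gt: arrays of shape (T, 2) in metres."""
    d = np.linalg.norm(pred - gt, axis=1)
    return d.mean(), d[-1]

def min_ade(multi_pred, gt):
    """Best-of-K ADE, the usual way multimodal outputs are scored."""
    return min(ade_fde(p, gt)[0] for p in multi_pred)

gt = np.stack([np.linspace(0, 30, 30), np.zeros(30)], axis=1)    # straight path
pred = gt + np.random.normal(scale=0.5, size=gt.shape)           # noisy guess
print("ADE/FDE:", ade_fde(pred, gt))
print("minADE over 3 modes:", min_ade([pred, pred + 1.0, pred - 0.5], gt))
```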
3. Research Outlook
- (1) Multi-platform and multi-source data fusion prediction
- (2) Lightweight framework for vehicle trajectory prediction models
- (3) Generalization and transferability of trajectory prediction methods
- (4) Real-world application of trajectory prediction
4. Conclusions
Funding
Acknowledgments
Conflicts of Interest
References
- Choi, D.; Yim, J.; Baek, M.; Lee, S. Machine Learning-Based Vehicle Trajectory Prediction Using V2V Communications and On-Board Sensors. Electronics 2021, 10, 420. [Google Scholar] [CrossRef]
- Zou, B.; Li, W.; Hou, X.; Tang, L.; Yuan, Q. A Framework for Trajectory Prediction of Preceding Target Vehicles in Urban Scenario Using Multi-Sensor Fusion. Sensors 2022, 22, 4808. [Google Scholar] [CrossRef]
- Lang, B.; Li, X.; Chuah, M.C. BEV-TP: End-to-End Visual Perception and Trajectory Prediction for Autonomous Driving. IEEE Trans. Intell. Transp. Syst. 2024, 25, 18537–18546. [Google Scholar] [CrossRef]
- Wang, Z.; Jin, Y.; Deligiannis, A.; Fuentes-Michel, J.-C.; Vossiek, M. Cross-Modal Supervision Based Road Segmentation and Trajectory Prediction with Automotive Radar. IEEE Robot. Autom. Lett. 2024, 9, 8083–8089. [Google Scholar] [CrossRef]
- Zhou, S.; Lashkov, I.; Zhang, G.; Yang, Y. Optimized long short-term memory network for LiDAR-based vehicle trajectory prediction through Bayesian optimization. IEEE Trans. Intell. Transp. Syst. 2025, 26, 2988–3003. [Google Scholar] [CrossRef]
- Mozaffari, S.; Al-Jarrah, O.Y.; Dianati, M.; Jennings, P.; Mouzakitis, A. Deep Learning-Based Vehicle Behavior Prediction for Autonomous Driving Applications: A Review. IEEE Trans. Intell. Transp. Syst. 2020, 23, 33–47. [Google Scholar] [CrossRef]
- Leon, F.; Gavrilescu, M. A Review of Tracking and Trajectory Prediction Methods for Autonomous Driving. Mathematics 2021, 9, 660. [Google Scholar] [CrossRef]
- Huang, Y.; Du, J.; Yang, Z.; Zhou, Z.; Zhang, L.; Chen, H. A Survey on Trajectory-Prediction Methods for Autonomous Driving. IEEE Trans. Intell. Veh. 2022, 7, 652–674. [Google Scholar] [CrossRef]
- Bharilya, V.; Kumar, N. Machine learning for autonomous vehicle’s trajectory prediction: A comprehensive survey, challenges, and future research directions. Veh. Commun. 2024, 46, 100733. [Google Scholar] [CrossRef]
- Yin, C.; Cecotti, M.; Auger, D.J.; Fotouhi, A.; Jiang, H. Deep-learning-based vehicle trajectory prediction: A review. IET Intell. Transp. Syst. 2025, 19, e70001. [Google Scholar] [CrossRef]
- Zhang, R.; Cao, L.; Bao, S.; Tan, J. A method for connected vehicle trajectory prediction and collision warning algorithm based on V2V communication. Int. J. Crashworthiness 2017, 22, 15–25. [Google Scholar] [CrossRef]
- Xie, G.; Gao, H.; Qian, L.; Huang, B.; Li, K.; Wang, J. Vehicle Trajectory Prediction by Integrating Physics- and Maneuver-Based Approaches Using Interactive Multiple Models. IEEE Trans. Ind. Electron. 2018, 65, 5999–6008. [Google Scholar] [CrossRef]
- Xiao, H.; Wang, C.; Li, Z.; Wang, R.; Bo, C.; Sotelo, M.A.; Xu, Y. UB-LSTM: A Trajectory Prediction Method Combined with Vehicle Behavior Recognition. J. Adv. Transp. 2020, 2020, 8859689. [Google Scholar] [CrossRef]
- Geng, M.; Li, J.; Xia, Y.; Chen, X. A physics-informed Transformer model for vehicle trajectory prediction on highways. Transp. Res. Part C Emerg. Technol. 2023, 154, 104272. [Google Scholar] [CrossRef]
- Yao, H.; Li, X.; Yang, X. Physics-Aware Learning-Based Vehicle Trajectory Prediction of Congested Traffic in a Connected Vehicle Environment. IEEE Trans. Veh. Technol. 2023, 72, 102–112. [Google Scholar] [CrossRef]
- Schulz, J.; Hubmann, C.; Lochner, J.; Burschka, D. Multiple Model Unscented Kalman Filtering in Dynamic Bayesian Networks for Intention Estimation and Trajectory Prediction. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 1467–1474. [Google Scholar]
- Abbas, M.T.; Jibran, M.A.; Afaq, M.; Song, W. An adaptive approach to vehicle trajectory prediction using multimodel Kalman filter. Trans. Emerg. Telecommun. Technol. 2020, 31, e3734. [Google Scholar] [CrossRef]
- Lefkopoulos, V.; Menner, M.; Domahidi, A.; Zeilinger, M.N. Interaction-Aware Motion Prediction for Autonomous Driving: A Multiple Model Kalman Filtering Scheme. IEEE Robot. Autom. Lett. 2020, 6, 80–87. [Google Scholar] [CrossRef]
- Deng, M.; Li, S.; Jiang, X.; Li, X. Vehicle Trajectory Prediction Method Based on “Current” Statistical Model and Cubature Kalman Filter. Electronics 2023, 12, 2464. [Google Scholar] [CrossRef]
- Schreier, M.; Willert, V.; Adamy, J. An Integrated Approach to Maneuver-Based Trajectory Prediction and Criticality Assessment in Arbitrary Road Environments. IEEE Trans. Intell. Transp. Syst. 2016, 17, 2751–2766. [Google Scholar] [CrossRef]
- Jiang, T.; Liu, Y.; Dong, Q.; Xu, T. Intention-Aware Interactive Transformer for Real-Time Vehicle Trajectory Prediction in Dense Traffic. Transp. Res. Rec. J. Transp. Res. Board 2023, 2677, 946–960. [Google Scholar] [CrossRef]
- Ye, N.; Zhang, Y.; Wang, R.; Malekian, R. Vehicle trajectory prediction based on Hidden Markov Model. KSII Trans. Internet Inf. Syst. 2016, 10, 3150–3170. [Google Scholar] [CrossRef]
- Wang, X.; Jiang, X.; Chen, L.; Wu, Y. KVLMM: A Trajectory Prediction Method Based on a Variable-Order Markov Model with Kernel Smoothing. IEEE Access 2018, 6, 25200–25208. [Google Scholar] [CrossRef]
- Xiao, W.; Zhang, L.; Meng, D. Vehicle trajectory prediction based on motion model and maneuver recognition model and maneuver model fusion with interactive multiple models. SAE Int. J. Adv. Curr. Pract. Mobil. 2020, 2, 3060–3071. [Google Scholar] [CrossRef]
- Wang, S.; Zhao, P.; Yu, B.; Huang, W.; Liang, H.; Wang, K. Vehicle Trajectory Prediction by Knowledge-Driven LSTM Network in Urban Environments. J. Adv. Transp. 2020, 2020, 8894060. [Google Scholar] [CrossRef]
- Rossi, L.; Ajmar, A.; Paolanti, M.; Pierdicca, R.; Gadekallu, T.R. Vehicle trajectory prediction and generation using LSTM models and GANs. PLoS ONE 2021, 16, e0253868. [Google Scholar] [CrossRef]
- Li, R.; Zhong, Z.; Chai, J.; Wang, J. Autonomous Vehicle Trajectory Combined Prediction Model Based on CC-LSTM. Int. J. Fuzzy Syst. 2022, 24, 3798–3811. [Google Scholar] [CrossRef]
- Chen, J.; Fan, D.; Qian, X.; Mei, L. KGCN-LSTM: A graph convolutional network considering knowledge fusion of point of interest for vehicle trajectory prediction. IET Intell. Transp. Syst. 2023, 17, 1087–1103. [Google Scholar] [CrossRef]
- Dai, S.; Li, L.; Li, Z. Modeling Vehicle Interactions via Modified LSTM Models for Trajectory Prediction. IEEE Access 2019, 7, 38287–38296. [Google Scholar] [CrossRef]
- Messaoud, K.; Yahiaoui, I.; Verroust-Blondet, A.; Nashashibi, F. Attention Based Vehicle Trajectory Prediction. IEEE Trans. Intell. Veh. 2020, 6, 175–185. [Google Scholar] [CrossRef]
- Yu, D.; Lee, H.; Kim, T.; Hwang, S.-H. Vehicle Trajectory Prediction with Lane Stream Attention-Based LSTMs and Road Geometry Linearization. Sensors 2021, 21, 8152. [Google Scholar] [CrossRef] [PubMed]
- Sheng, Z.; Xu, Y.; Xue, S.; Li, D. Graph-Based Spatial-Temporal Convolutional Network for Vehicle Trajectory Prediction in Autonomous Driving. IEEE Trans. Intell. Transp. Syst. 2022, 23, 17654–17665. [Google Scholar] [CrossRef]
- Lin, L.; Li, W.; Bi, H.; Qin, L. Vehicle Trajectory Prediction Using LSTMs with Spatial–Temporal Attention Mechanisms. IEEE Intell. Transp. Syst. Mag. 2022, 14, 197–208. [Google Scholar] [CrossRef]
- Jiang, R.; Xu, H.; Gong, G.; Kuang, Y.; Liu, Z. Spatial-Temporal Attentive LSTM for Vehicle-Trajectory Prediction. ISPRS Int. J. Geo-Inf. 2022, 11, 354. [Google Scholar] [CrossRef]
- Zheng, X.; Chen, X.; Jia, Y. Vehicle Trajectory Prediction Based on GAT and LSTM Networks in Urban Environments. Promet-Traffic Transp. 2024, 36, 867–884. [Google Scholar] [CrossRef]
- Min, H.; Xiong, X.; Wang, P.; Zhang, Z. A Hierarchical LSTM-Based Vehicle Trajectory Prediction Method Considering Interaction Information. Automot. Innov. 2024, 7, 71–81. [Google Scholar] [CrossRef]
- Qiao, S.; Gao, F.; Wu, J.; Zhao, R. An Enhanced Vehicle Trajectory Prediction Model Leveraging LSTM and Social-Attention Mechanisms. IEEE Access 2024, 12, 1718–1726. [Google Scholar] [CrossRef]
- Chandra, R.; Guan, T.; Panuganti, S.; Mittal, T.; Bhattacharya, U.; Bera, A.; Manocha, D. Forecasting Trajectory and Behavior of Road-Agents Using Spectral Clustering in Graph-LSTMs. IEEE Robot. Autom. Lett. 2020, 5, 4882–4890. [Google Scholar] [CrossRef]
- Liu, J.; Xiong, H.; Wang, T.; Huang, H.; Zhong, Z.; Luo, Y. Probabilistic vehicle trajectory prediction via driver characteristic and intention estimation model under uncertainty. Ind. Robot 2021, 48, 778–791. [Google Scholar] [CrossRef]
- Chen, X.; Zhang, H.; Zhao, F.; Hu, Y.; Tan, C.; Yang, J. Intention-Aware Vehicle Trajectory Prediction Based on Spatial-Temporal Dynamic Attention Network for Internet of Vehicles. IEEE Trans. Intell. Transp. Syst. 2022, 23, 19471–19483. [Google Scholar] [CrossRef]
- Li, C.; Liu, Z.; Lin, S.; Wang, Y.; Zhao, X. Intention-convolution and hybrid-attention network for vehicle trajectory prediction. Expert Syst. Appl. 2023, 236, 121412. [Google Scholar] [CrossRef]
- Yuan, R.; Abdel-Aty, M.; Xiang, Q.; Wang, Z.; Gu, X. A Temporal Multi-Gate Mixture-of-Experts Approach for Vehicle Trajectory and Driving Intention Prediction. IEEE Trans. Intell. Veh. 2024, 9, 1204–1216. [Google Scholar] [CrossRef]
- Zhang, X.; Ge, H.; Cheng, R. MmSTCT: Spatial–temporal convolution transformer network considering driving intention for multimodal vehicle trajectory prediction of highway. Transp. A Transp. Sci. 2024, 2407076. [Google Scholar] [CrossRef]
- Zhang, Z.; Wang, C.; Zhao, W.; Cao, M.; Liu, J. Ego Vehicle Trajectory Prediction Based on Time-Feature Encoding and Physics-Intention Decoding. IEEE Trans. Intell. Transp. Syst. 2024, 25, 6527–6542. [Google Scholar] [CrossRef]
- Sun, N.; Xu, N.; Guo, K.; Han, Y.; Wang, L. Research on vehicle trajectory fusion prediction based on physical model and driving intention recognition. J. Automob. Eng. 2025, 239, 239–254. [Google Scholar] [CrossRef]
- Chen, Y.; Zou, Y.; Xie, Y.; Zhang, Y.; Tang, J. Multimodal vehicle trajectory prediction based on intention inference with lane graph representation. Expert Syst. Appl. 2025, 262, 125708. [Google Scholar] [CrossRef]
- Gao, K.; Li, X.; Hu, L.; Liu, X.; Zhang, J.; Du, R.; Li, Y. STMF-IE: A Spatial-Temporal Multi-Feature Fusion and Intention-Enlightened Decoding Model for Vehicle Trajectory Prediction. IEEE Trans. Veh. Technol. 2025, 74, 4004–4018. [Google Scholar] [CrossRef]
- Wang, Y.; Wang, J.; Jiang, J.; Xu, S.; Wang, J. SA-LSTM: A Trajectory Prediction Model for Complex off-road Multi-Agent Systems Considering Situation Awareness Based on Risk Field. IEEE Trans. Veh. Technol. 2023, 72, 14016–14027. [Google Scholar] [CrossRef]
- Liu, J.; Sheng, X.; Tan, L.; Zhang, W.; Zhang, P.; Liu, K. DPDRF: Dynamic Predictive Driving Risk Field Based on Multi-Agent Trajectory Prediction and Digital Twins System. IEEE Trans. Veh. Technol. 2025, 74, 3651–3665. [Google Scholar] [CrossRef]
- Wang, X.; Hu, J.; Wei, C.; Li, L.; Li, Y.; Du, M. A Novel Lane-Change Decision-Making with Long-Time Trajectory Prediction for Autonomous Vehicle. IEEE Access 2023, 11, 137437–137449. [Google Scholar] [CrossRef]
- Hu, H.; Wang, Q.; Zhang, Z.; Li, Z.; Gao, Z. Holistic transformer: A joint neural network for trajectory prediction and decision-making of autonomous vehicles. Pattern Recognit. 2023, 141, 109592. [Google Scholar] [CrossRef]
- Qie, T.; Wang, W.; Yang, C.; Li, Y. A Self-Trajectory Prediction Approach for Autonomous Vehicles Using Distributed Decouple LSTM. IEEE Trans. Ind. Inform. 2024, 20, 6708–6717. [Google Scholar] [CrossRef]
- Li, D.; Zhang, Q.; Xia, Z.; Zheng, Y.; Zhang, K.; Yi, M.; Jin, W.; Zhao, D. Planning-Inspired Hierarchical Trajectory Prediction via Lateral-Longitudinal Decomposition for Autonomous Driving. IEEE Trans. Intell. Veh. 2024, 9, 692–703. [Google Scholar] [CrossRef]
- Hussien, M.M.; Melo, A.N.; Ballardini, A.L.; Maldonado, C.S.; Izquierdo, R.; Sotelo, M.Á. RAG-based explainable prediction of road users behaviors for automated driving using knowledge graphs and large language models. Expert Syst. Appl. 2025, 265, 125914. [Google Scholar] [CrossRef]
- Xing, Y.; Hu, Z.; Hang, P.; Lv, C. Learning from the Dark Side: A Parallel Time Series Modelling Framework for Forecasting and Fault Detection on Intelligent Vehicles. IEEE Trans. Intell. Veh. 2023, 9, 3205–3219. [Google Scholar] [CrossRef]
- Gao, H.; Qin, Y.; Hu, C.; Liu, Y.; Li, K. An Interacting Multiple Model for Trajectory Prediction of Intelligent Vehicles in Typical Road Traffic Scenario. IEEE Trans. Neural Netw. Learn. Syst. 2023, 34, 6468–6479. [Google Scholar] [CrossRef]
- Hou, L.; Li, S.E.; Yang, B.; Wang, Z.; Nakano, K. Integrated Graphical Representation of Highway Scenarios to Improve Trajectory Prediction of Surrounding Vehicles. IEEE Trans. Intell. Veh. 2023, 8, 1638–1651. [Google Scholar] [CrossRef]
- Cong, P.; Deng, M.; Xiao, Y.; Zhu, Y.; Zhang, X. Trajectory prediction based on the dynamic characteristics and coupling relationships among vehicles in highway scenarios. Eng. Appl. Artif. Intell. 2025, 140, 109718. [Google Scholar] [CrossRef]
- Zhang, S.; Bai, R.; He, R.; Meng, Z.; Chang, Y.; Zhi, Y.; Sun, N. Research on Vehicle Trajectory Prediction Methods in Urban Main Road Scenarios. IEEE Trans. Intell. Transp. Syst. 2024, 25, 16392–16408. [Google Scholar] [CrossRef]
Brand | Camera | Millimeter Wave Radar | Ultrasonic Radar | LiDAR | BEV | Specific Feature |
---|---|---|---|---|---|---|
BYD | High-resolution front-facing binocular/trinocular cameras and surround-view cameras for mid-to-close-range object detection | Bosch 5th generation for tracking the speed of vehicles and pedestrians | Close-range parking scenarios | Perception redundancy in complex scenarios | Model the motion intent of dynamic objects | Tailored for China-specific scenarios, such as non-motorized vehicles (e.g., electric bicycles, tricycles), illegally parked vehicles, and construction barriers, long-tail edge cases |
Tesla | Eight cameras (360° surround view; 1.2-megapixel) + front-facing triple-camera system (detection range of 250 m) + side cameras for cross-traffic capture | No | No | No | Optional | 4D vector space modeling, pixel-level semantic segmentation, HydraNets’ multi-task network |
Li Auto | RGB image data for object classification and semantic segmentation | Speed and distance information to enhance moving target tracking capabilities | Optional | High-precision point cloud data for 3D object detection and distance measurement | Covered, including the front and rear wheel areas | Timestamp synchronization and coordinate transformation align multi-sensor data into a unified coordinate system, constructing a dynamic 4D spatiotemporal model (3D space + time series). |
AITO | 360° surround-view cameras + front-facing 8-megapixel high-definition camera, supporting long-range object detection (over 200 m) | 4D imaging radar, capable of outputting target height information | Close-range obstacle detection for parking scenarios | Optional | Transformer neural network, 3D motion field model of dynamic objects | GOD network for unknown road obstacles, predicting movement trajectories |
Xpeng | 8-megapixel cameras, a 360° surround-view system with front-facing binocular cameras enabling 250-m ultra-long-range detection | 5 radars fuse dynamic target speed and height information | | Dual LiDARs with 120° horizontal field of view | Transformer architecture, 3D motion field for dynamic objects | Occupancy networks for unstructured obstacles; LiDAR-vision temporal alignment through cross-modal attention mechanism |
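The Li Auto row above mentions timestamp synchronization and coordinate transformation that align multi-sensor data into a unified coordinate system. The sketch below shows the basic mechanics, mapping sensor-frame points into an ego frame with homogeneous transforms and interpolating a per-sensor track to a common timestamp; the extrinsic values and helper names (`se3`, `to_ego`, `sync_track`) are illustrative assumptions, not any vehicle's real calibration.

```python
import numpy as np

def se3(yaw_rad, tx, ty, tz):
    """Homogeneous transform for a planar-yaw sensor mounting (assumed)."""
    c, s = np.cos(yaw_rad), np.sin(yaw_rad)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0], [s, c, 0], [0, 0, 1]]
    T[:3, 3] = [tx, ty, tz]
    return T

# Placeholder extrinsics: a roof LiDAR and a front radar, both expressed
# in the ego-vehicle frame (values are illustrative, not any real vehicle).
T_ego_lidar = se3(0.0, 1.2, 0.0, 1.8)
T_ego_radar = se3(0.0, 3.6, 0.0, 0.5)

def to_ego(points_xyz, T_ego_sensor):
    """Map (N, 3) sensor-frame points into the unified ego frame."""
    homo = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    return (T_ego_sensor @ homo.T).T[:, :3]

def sync_track(timestamps, positions, t_query):
    """Linear interpolation of a per-sensor track to a common timestamp."""
    return np.array([np.interp(t_query, timestamps, positions[:, i])
                     for i in range(positions.shape[1])])

lidar_pts = np.array([[10.0, 0.5, -1.5], [12.0, -0.3, -1.6]])
print(to_ego(lidar_pts, T_ego_lidar))
print(sync_track(np.array([0.0, 0.1]),
                 np.array([[10.0, 0.5], [10.8, 0.5]]), 0.05))
```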
Brand | Decision-Making and Planning | Supercomputing and Closed Loop | Real-Time and Optimization |
---|---|---|---|
BYD | Risk field model: Converts predicted trajectories into dynamic risk heatmaps, guiding the planning module to prioritize paths with minimal risk | Comfort constraints: Predicted trajectories must adhere to comfort thresholds for jerk and lateral deviation, preventing abrupt braking or sharp steering maneuvers. | Real-time performance: Leverages self-developed computing platforms to ensure end-to-end prediction-planning latency remains <300 ms. |
Tesla | Shadow mode: Continuously compares trajectory predictions between human drivers and full self-driving, automatically triggering corner case data uploads. | Dojo supercomputing platform: Utilizes self-developed D1 chips and ExaPOD architecture, achieving 1.3 times faster training efficiency compared to GPU clusters, and supports real-time iteration of 100-billion parameter models. | Simulation scenario library: Generates tens of millions of extreme scenarios from real-world driving data (e.g., pedestrians crossing in heavy rain, multi-vehicle negotiations at intersections) to enhance long-tail scenario prediction capabilities. |
Li Auto | Uncertainty quantification: Incorporates Bayesian deep learning or Monte Carlo Dropout to evaluate prediction confidence. In low-confidence scenarios (e.g., sudden target braking), the system automatically activates conservative strategies (e.g., deceleration or lateral avoidance) [54]. | Real-time computing optimization: The algorithm is deployed on in-vehicle computing platforms (e.g., NVIDIA Orin or Li Auto’s self-developed computing chips), ensuring prediction latency below 100 ms through model light-weighting (e.g., TensorRT acceleration) and asynchronous pipeline processing. | |
AITO | Probabilistic risk field: Converts predicted trajectories into dynamic risk heatmaps, enabling the planning module to select optimal paths based on risk distribution (avoiding high-risk zones). | MDC computing platform: Huawei’s self-developed in-vehicle computing platform (e.g., MDC 810) delivers 400+ TOPS computing power, supporting multi-model parallel inference [55]. | Dynamic safety boundary adjustment: Automatically expands safety distances and reduces confidence thresholds for lane-changing decisions in low-visibility conditions (e.g., rain/fog) or occluded object scenarios (e.g., sudden pedestrian appearances). |
Xpeng | Shadow Mode: Real-time collection of trajectory discrepancy data between human driving and system predictions, automatically triggering data uploads for corner cases (e.g., aggressive cut-ins, unusual obstacles). | Simulation testing: Constructs a high-fidelity virtual scenario library using Unreal Engine 5 (e.g., intersections in heavy rain at night with no streetlights), covering tens of thousands of long-tail scenarios to accelerate prediction model iteration. | Miles per intervention (MPI) optimization: Leverages cloud-based training (Alibaba Cloud + XPeng’s in-house computing infrastructure) to continuously reduce intervention frequency for urban NGP. In 2023, MPI for urban scenarios improved to 200+ km per intervention. |
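The BYD and AITO rows above describe converting predicted trajectories into dynamic risk heatmaps (risk fields) that the planner queries when selecting a path. A minimal sketch of that idea accumulates confidence-weighted Gaussian risk around each predicted point on a grid; the grid resolution, sigma, and the function name `risk_heatmap` are illustrative assumptions rather than any production implementation.

```python
import numpy as np

def risk_heatmap(pred_modes, confidences, xs, ys, sigma=1.5):
    """Accumulate a Gaussian risk contribution around every predicted point
    of every trajectory mode, weighted by that mode's confidence. Returns a
    normalised grid the planner could query to avoid high-risk cells."""
    X, Y = np.meshgrid(xs, ys)
    risk = np.zeros_like(X)
    for traj, w in zip(pred_modes, confidences):
        for (px, py) in traj:
            risk += w * np.exp(-((X - px)**2 + (Y - py)**2) / (2 * sigma**2))
    return risk / risk.max()

xs = np.linspace(0, 50, 101)
ys = np.linspace(-10, 10, 41)
mode_a = [(5 * t, 0.0) for t in np.linspace(0.5, 5, 10)]        # keeps lane
mode_b = [(5 * t, 0.7 * t) for t in np.linspace(0.5, 5, 10)]    # drifts left
heat = risk_heatmap([mode_a, mode_b], [0.8, 0.2], xs, ys)
print(heat.shape, heat.max())   # (41, 101) grid, normalised to 1.0
```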
Brand | Basic Scenario | Specific Scenario | Unique Feature |
---|---|---|---|
BYD | In scenarios like highway cruising and traffic jam following, trajectory prediction demonstrates high accuracy with human-like decision-making (e.g., moderately tolerating lane-cutting by adjacent vehicles) | Prediction capabilities remain weak for extreme cases (e.g., temporary detours in construction zones or sudden animal intrusions), necessitating driver intervention | ADAS features in mid-to-low-end models prioritize meeting basic L2-level requirements, while advanced urban NOA (Navigate on Autopilot) is reserved for premium models like the Denza N7 and Yangwang U8 |
Tesla | Unprotected left turn: Predicts the time window for oncoming straight traffic to pass, dynamically adjusts decisions based on the host vehicle’s acceleration capability, achieving a success rate exceeding 90%. Highway ramp merging: Analyzes adjacent vehicles’ steering angles and acceleration changes to assess yielding intent, enabling adaptive selection of aggressive or conservative merging strategies. | Sudden pedestrian crossing: Detects abrupt directional shifts or speed changes in pedestrian gait (e.g., sudden backtracking) to trigger lateral evasion maneuvers or emergency braking. | Construction zone navigation: Identifies temporary obstacles via occupancy grid networks, predicts construction vehicle movement paths, and plans safe detour trajectories. |
Li Auto | Highway scenarios: Detects lane-cutting intent from adjacent vehicles (e.g., delayed lane changes after turn signals), adjusting the host vehicle’s following distance to ensure safety. | Urban roads: Predicts pedestrian crossings and sudden blind-spot intrusions by e-bikes, proactively planning evasion trajectories. | Parking lots: Anticipates movement paths of pedestrians near parking spaces, mitigating collision risks caused by blind zones. |
AITO | Unprotected left turn: Predicts the time window for oncoming straight vehicles to pass, dynamically decides to proceed assertively or yield based on the host vehicle’s acceleration capability. | Sudden pedestrian intrusion: Analyzes shadows from roadside obstructions (e.g., buses, greenery) to preemptively detect pedestrians crossing unexpectedly, triggering automatic emergency braking (AEB). | Forced lane-cutting in traffic jams: Detects adjacent vehicles’ steering angles and speed variation trends, proactively fine-tunes the host vehicle’s speed to prevent forced cuts or side-swipe incidents. |
Xpeng | Urban intersection left turn: Predicts the time window for oncoming straight vehicles to pass and dynamically decides to proceed assertively or yield based on the host vehicle’s acceleration capability, achieving a success rate exceeding 95%. | Forced lane-cutting in congested traffic: Analyzes adjacent vehicles’ steering angles and acceleration changes to predict lane-cutting intent 0.5 s in advance, proactively fine-tuning speed to prevent being cut off. Mixed Pedestrian and Non-Motorized Traffic: Detects e-bike wrong-way riding and pedestrian “hesitant gait”, predicts sudden trajectory changes (e.g., abrupt backtracking), and triggers lateral evasion maneuvers. | Highway construction zones: Integrates temporary updates from high-definition maps to predict irregular trajectories of construction vehicles, enabling early lane changes to a safe path. |
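Several rows above describe unprotected left turns as predicting the time window for oncoming traffic and deciding to proceed or yield based on the host vehicle's acceleration capability. The sketch below reduces that decision to simple kinematics; the path length, acceleration limits, and safety margin are illustrative assumptions, not calibrated values from any production system.

```python
def time_to_conflict(distance_m, speed_mps):
    """Seconds until an oncoming vehicle reaches the conflict point."""
    return float('inf') if speed_mps <= 0 else distance_m / speed_mps

def crossing_time(path_len_m, v0_mps, a_max_mps2, v_max_mps):
    """Time for the host to clear the conflict zone, accelerating from v0
    up to v_max (simple kinematics, no jerk limits)."""
    t_acc = (v_max_mps - v0_mps) / a_max_mps2
    d_acc = v0_mps * t_acc + 0.5 * a_max_mps2 * t_acc**2
    if d_acc >= path_len_m:
        # Solve path_len = v0*t + 0.5*a*t^2 for t.
        disc = v0_mps**2 + 2 * a_max_mps2 * path_len_m
        return (-v0_mps + disc**0.5) / a_max_mps2
    return t_acc + (path_len_m - d_acc) / v_max_mps

def decide_left_turn(oncoming, path_len_m=18.0, v0=1.0, a_max=2.5,
                     v_max=8.0, margin_s=2.0):
    """Proceed only if every oncoming vehicle arrives later than the time
    the host needs to clear the intersection, plus a safety margin."""
    t_clear = crossing_time(path_len_m, v0, a_max, v_max)
    gaps = [time_to_conflict(d, v) for d, v in oncoming]
    return "proceed" if min(gaps) > t_clear + margin_s else "yield"

# Oncoming vehicles given as (distance to conflict point [m], speed [m/s]).
print(decide_left_turn([(90.0, 14.0), (140.0, 16.0)]))   # expected: proceed
print(decide_left_turn([(35.0, 15.0)]))                   # expected: yield
```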