Article

ADSAP: An Adaptive Speed-Aware Trajectory Prediction Framework with Adversarial Knowledge Transfer

School of Traffic and Transportation, Lanzhou Jiaotong University, Lanzhou 730030, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(12), 2448; https://doi.org/10.3390/electronics14122448
Submission received: 17 May 2025 / Revised: 11 June 2025 / Accepted: 14 June 2025 / Published: 16 June 2025
(This article belongs to the Special Issue Advances in AI Engineering: Exploring Machine Learning Applications)

Abstract:
Accurate trajectory prediction of surrounding vehicles is a fundamental challenge in autonomous driving, requiring sophisticated modeling of complex vehicle interactions, traffic dynamics, and contextual dependencies. This paper introduces Adaptive Speed-Aware Prediction (ADSAP), a novel trajectory prediction framework that advances the state of the art through innovative mechanisms for adaptive attention modulation and knowledge transfer. At its core, ADSAP employs an adaptive deformable speed-aware pooling mechanism that dynamically adjusts the model's attention distribution and receptive field based on instantaneous vehicle states and interaction patterns. This adaptive architecture enables fine-grained modeling of diverse traffic scenarios, from sparse highway conditions to dense urban environments. The framework incorporates a sophisticated speed-aware multi-scale feature aggregation module that systematically combines spatial and temporal information across multiple scales, facilitating comprehensive scene understanding and robust trajectory prediction. To bridge the gap between model complexity and computational efficiency, we propose an adversarial knowledge distillation approach that effectively transfers learned representations and decision-making strategies from a high-capacity teacher model to a lightweight student model. This novel distillation mechanism preserves prediction accuracy while significantly reducing computational overhead, making the framework suitable for real-world deployment. Extensive empirical evaluation on the large-scale NGSIM and highD naturalistic driving datasets demonstrates ADSAP's superior performance. The ADSAP framework achieves an 18.7% reduction in average displacement error and a 22.4% reduction in final displacement error compared to state-of-the-art methods while maintaining consistent performance across varying traffic densities (0.05–0.85 vehicles/meter) and speed ranges (0–35 m/s). Moreover, ADSAP exhibits robust generalization capabilities across different driving scenarios and weather conditions, with the lightweight student model achieving 95% of the teacher model's accuracy while offering a 3.2× reduction in inference time. Comprehensive experimental results supported by detailed ablation studies and statistical analyses validate ADSAP's effectiveness in addressing the trajectory prediction challenge. Our framework provides a novel perspective on integrating adaptive attention mechanisms with efficient knowledge transfer, contributing to the development of more reliable and intelligent autonomous driving systems. Significant improvements in prediction accuracy, computational efficiency, and generalization capability demonstrate ADSAP's potential to advance autonomous driving technology.

1. Introduction

Autonomous driving represents a paradigm-shifting technology that holds immense promise to fundamentally transform modern transportation systems, substantially enhance road safety, and dramatically improve traffic efficiency through intelligent vehicle coordination and operation [1,2]. Among the multitude of technical challenges that must be addressed in order to achieve fully autonomous driving capabilities, accurate and robust prediction of future trajectories of surrounding vehicles in complex, dynamic, and highly interactive traffic scenarios stands out as a critical research problem [3,4,5]. This prediction task is particularly challenging due to the inherent uncertainties in human driving behavior, the complex interdependencies between multiple traffic participants, and the diverse range of environmental conditions that autonomous vehicles must navigate.
The significance of reliable trajectory prediction cannot be overstated, as it serves as a fundamental building block for autonomous decision-making systems. By accurately forecasting the future states and intentions of surrounding vehicles, autonomous vehicles can generate optimal trajectories, execute safe maneuvers, and maintain appropriate safety margins while maximizing operational efficiency [6,7,8]. This capability is especially crucial in scenarios involving multiple interacting agents, where the prediction model must simultaneously consider the coupled dynamics of numerous vehicles, each potentially influencing the behavior of others through complex social interactions. Moreover, accurate trajectory prediction enables autonomous vehicles to anticipate and proactively respond to potentially dangerous situations, thereby reducing the likelihood of accidents and enhancing overall traffic safety.
The development of trajectory prediction methodologies has witnessed significant advancement with the emergence of sophisticated deep learning architectures. These have demonstrated remarkable capability in capturing and learning the intricate spatiotemporal dependencies and complex interaction patterns inherent in vehicular motion from historical trajectory data [9,10,11]. Such approaches have revolutionized the field by moving beyond traditional kinematic models to data-driven frameworks that can automatically extract relevant features and learn meaningful representations of vehicle behavior patterns.
Among the various deep learning architectures, Recurrent Neural Networks (RNNs) and their advanced variants, particularly Long Short-Term Memory (LSTM) networks [12], have emerged as fundamental building blocks for sequence modeling in trajectory prediction tasks [13,14]. These architectures excel in capturing temporal dependencies and maintaining long-term memory of motion patterns, enabling them to effectively model the sequential nature of vehicle trajectories. Concurrently, graph-based approaches have garnered significant attention in the research community, offering a natural and powerful framework for modeling the complex interactions among multiple traffic participants [15,16]. By representing vehicles as nodes and their interactions as edges in a graph structure, these methods can explicitly capture the relational dynamics and social influences that govern vehicle behavior in traffic scenarios.
Table 1 highlights the comprehensive integration of advanced components in ADSAP. Unlike existing methods that typically focus on individual aspects, ADSAP uniquely combines speed-aware mechanisms, adaptive pooling, adversarial knowledge distillation, multi-scale processing, and a transformer architecture.
While previous methodologies have achieved noteworthy success in capturing general motion patterns and predicting future vehicle positions, they face substantial challenges in real-world applications [18,20,21]. A primary limitation lies in their ability to adapt to the highly dynamic and continuously evolving nature of traffic scenarios, where interaction patterns and behavioral dynamics can change rapidly and unpredictably. Furthermore, previous approaches often struggle to fully leverage the rich hierarchical contextual information available at different spatial and temporal scales, such as lane-level features, road topology, traffic rules, and broader environmental context. This incomplete utilization of multi-scale contextual information can lead to predictions that, while mathematically accurate, may not fully align with the physical and social constraints of real-world traffic scenarios.
The inherent limitations of current trajectory prediction models extend beyond their immediate performance metrics to fundamental challenges in knowledge transfer and generalization capabilities [22,23,24]. The domain-specific nature of learned representations often results in models that excel in scenarios similar to the training distribution but exhibit significant performance degradation when encountering novel traffic conditions, varying road geometries, or unfamiliar driving behaviors. This limitation is particularly problematic in real-world applications, where autonomous vehicles must navigate through diverse and continuously evolving traffic environments that may differ substantially from the training scenarios.
To address these limitations, researchers have increasingly turned to sophisticated generative modeling approaches, notably Generative Adversarial Networks (GANs) [25] and Variational Autoencoders (VAEs) [26], which offer promising frameworks for capturing the inherent multimodality and uncertainty in future trajectory predictions [14,17]. These generative approaches represent a paradigm shift from deterministic prediction to probabilistic modeling, enabling the generation of multiple plausible future trajectories that reflect the stochastic nature of human driving behavior. By learning the underlying distribution of possible future trajectories rather than attempting to predict a single optimal path, these models can better account for the inherent uncertainty in traffic scenarios and provide more comprehensive information for downstream decision-making processes.
However, the integration of advanced generative models introduces new challenges into the trajectory prediction pipeline [8,11]. Traditional GANs are prone to mode collapse in trajectory prediction, causing generators to produce limited trajectory variations that fail to capture the full spectrum of possible future behaviors and thereby significantly reducing prediction diversity. In addition, increased model complexity often entails substantially higher computational requirements, which is problematic for real-time deployment in autonomous vehicles, where rapid prediction updates are crucial for safe operation. The tradeoff between model expressiveness and computational efficiency becomes particularly acute given the strict latency requirements and limited computational resources available onboard. Furthermore, the evaluation and validation of generative models present unique challenges, as traditional metrics may not fully capture the quality and diversity of the generated trajectories, necessitating the development of more sophisticated evaluation frameworks.
To overcome the aforementioned challenges and limitations, we introduce Adaptive Speed-Aware Prediction (ADSAP), a novel trajectory prediction framework that seamlessly integrates adversarial knowledge transfer with dynamic spatial adaptation mechanisms. The fundamental innovation of ADSAP lies in its sophisticated architecture, which adaptively modulates the model's attention mechanisms and receptive field characteristics based on two critical factors: the instantaneous interaction states between vehicles, and their corresponding speed variations. This adaptive capability enables the framework to dynamically adjust its prediction strategy according to the evolving complexity and dynamics of traffic scenarios.
At the heart of ADSAP is the advanced Adaptive Deformable Speed-aware Pooling (ADSP) mechanism, which represents a significant advancement over traditional pooling operations [19,27]. Unlike deformable convolutions that focus purely on spatial deformations, ADSP incorporates velocity-dependent dynamic adjustment mechanisms, enabling adaptive receptive field modulation based on traffic dynamics rather than static spatial patterns. The ADSP mechanism dynamically reconfigures its pooling grid structure by incorporating both spatial and kinematic information, specifically, the relative positions and velocities of surrounding vehicles. This dynamic adaptation allows the model to effectively capture and process information at varying spatial scales and temporal resolutions, depending on the instantaneous traffic conditions. The deformable nature of the pooling operation enables the model to focus on regions of high interaction potential while maintaining computational efficiency by adaptively allocating computational resources.
Furthermore, ADSAP incorporates a sophisticated speed-aware multi-scale feature aggregation scheme that systematically extracts and synthesizes contextual information across different spatial and temporal scales [28]. This hierarchical feature processing approach enables the model to simultaneously capture both fine-grained local interactions between adjacent vehicles and broader spatial patterns in the traffic flow. The multi-scale feature aggregation mechanism operates in conjunction with ADSP to create a comprehensive representation of the traffic scene that encompasses both microscopic vehicle interactions and macroscopic traffic patterns. This integrated approach ensures that the model maintains awareness of both immediate vehicle interactions and larger-scale traffic dynamics, leading to more robust and contextually aware trajectory predictions.
Another significant innovation of ADSAP lies in its novel approach to knowledge transfer through the integration of adversarial learning principles. The framework incorporates an advanced Adversarial Knowledge Distillation Module (AKDM), which represents a sophisticated mechanism for transferring learned representations and decision-making strategies from a high-capacity teacher model to a more compact and efficient student model [29,30]. This knowledge distillation process is fundamentally different from traditional approaches in that it operates at multiple levels of abstraction, transferring not only the final predictions but also the intermediate feature representations along with the reasoning patterns that led to those predictions.
The AKDM employs an adversarial learning framework inspired by the principles of Generative Adversarial Networks (GANs) [25], in which the student model is trained to generate predictions that are indistinguishable from those of the teacher model. This adversarial objective creates a powerful learning signal that goes beyond simple mimicry of the teacher’s outputs, encouraging the student model to develop robust internal representations that capture the essential characteristics of the teacher’s decision-making process. The adversarial component introduces a form of regularization that helps prevent the student model from overfitting to specific aspects of the teacher’s behavior while maintaining the ability to generalize effectively to new scenarios.
Furthermore, the knowledge distillation process is carefully designed to maintain computational efficiency without compromising prediction accuracy [31]. The student model’s architecture is optimized through a systematic process of structural pruning and parameter optimization, resulting in a significantly reduced computational footprint compared to the teacher model. This efficiency gain is particularly crucial for real-time applications in autonomous driving systems, where rapid processing of sensor data and quick decision-making are essential. The compact nature of the student model combined with its ability to maintain high prediction accuracy through the adversarial knowledge transfer process makes ADSAP particularly well suited for deployment in resource-constrained environments while maintaining robust performance across diverse traffic scenarios.
To evaluate the performance of ADSAP against state-of-the-art trajectory prediction methods, we conducted extensive experiments on real-world traffic datasets, including the widely-used NGSIM dataset [32], the INTERACTION dataset [6], and the highD dataset [33]. Experimental results demonstrate that ADSAP significantly outperforms existing approaches in terms of prediction accuracy, generalization ability, and computational efficiency [16,17]. Our framework achieves state-of-the-art performance while requiring fewer parameters and computational resources, making it a promising solution for practical autonomous driving applications.
The main contributions of this work can be summarized as follows:
  • We propose ADSAP, an adaptive speed-aware trajectory prediction framework that introduces novel techniques for capturing fine-grained vehicle interactions and modeling dynamic traffic scenarios through velocity-dependent attention mechanisms.
  • We develop an Adaptive Deformable Speed-aware Pooling (ADSP) mechanism that dynamically adjusts the model’s attention and receptive field based on the vehicle’s interaction state and speed variation, enabling context-aware trajectory prediction superior to traditional deformable convolutions.
  • We introduce an Adversarial Knowledge Distillation Module (AKDM) that facilitates the transfer of feature hierarchies and decision-making patterns from a teacher model to a student model, improving prediction accuracy and model efficiency compared to traditional knowledge distillation methods.
  • We conduct comprehensive experiments on multiple real-world traffic datasets along with statistical significance testing, demonstrating the superior performance of ADSAP compared to state-of-the-art trajectory prediction methods across diverse scenarios and environmental conditions.
The remainder of this paper is structured as follows: Section 2 presents the proposed ADSAP framework, detailing the adaptive deformable speed-aware pooling mechanism, the adversarial knowledge distillation module, and the overall model architecture; Section 3 describes the experimental setup, datasets, and evaluation metrics, and presents the results and analysis; finally, Section 4 concludes the paper and discusses future research directions.

2. Materials and Methods

In this section, we provide a comprehensive description of the proposed ADSAP framework, which introduces a novel approach to trajectory prediction in autonomous driving systems. The framework’s architecture is built upon three fundamental and interconnected components, each addressing specific challenges in trajectory prediction: (1) an Adaptive Deformable Speed-aware Pooling (ADSP) mechanism that dynamically adjusts to varying traffic conditions and vehicle dynamics, (2) an Adversarial Knowledge Distillation Module (AKDM) that enables efficient knowledge transfer and model compression, and (3) a sophisticated multi-scale feature aggregation scheme that captures contextual information across different spatial and temporal scales. The synergistic integration of these components is depicted in Figure 3, which provides a detailed illustration of ADSAP’s overall architecture and the interconnections between its key components.

2.1. Adaptive Deformable Speed-Aware Pooling (ADSP)

The Adaptive Deformable Speed-aware Pooling (ADSP) mechanism represents a fundamental advancement in capturing and processing dynamic vehicle interactions in complex traffic scenarios. Unlike traditional pooling operations that employ rigid predefined grid structures, ADSP introduces a flexible content-adaptive pooling strategy that dynamically modulates its receptive field based on both spatial relationships and kinematic characteristics of vehicles.
Given an input feature map $F \in \mathbb{R}^{C \times H \times W}$, where $C$, $H$, and $W$ represent the channel dimension, height, and width, respectively, ADSP generates an adaptively pooled feature representation $F_{adsp} \in \mathbb{R}^{C \times H \times W}$. The adaptation process is governed by learnable offset vectors $\Delta p_{ij} \in \mathbb{R}^{2}$, each defining the spatial offset for grid point $(i, j)$. These offset vectors are computed through a specialized convolutional layer that processes the concatenated information from both the feature map $F$ and a speed map $S \in \mathbb{R}^{H \times W}$:

$$\Delta p_{ij} = \mathrm{Conv}([F; S]) + \lambda \cdot \mathrm{SpeedBias}(S)$$

where $[\cdot\,;\cdot]$ denotes channel-wise concatenation and $\mathrm{SpeedBias}(S)$ introduces an additional speed-dependent bias term weighted by $\lambda$. The speed map $S$ encodes both the magnitude and directional information of vehicle velocities:

$$S(i, j) = \sqrt{v_x^2 + v_y^2} \cdot \exp\left( -\frac{\lVert p_{ij} - p_{ref} \rVert^2}{2\sigma^2} \right)$$

where $(v_x, v_y)$ represents the velocity components, $p_{ij}$ and $p_{ref}$ are the current and reference positions, respectively, and $\sigma$ controls the spatial influence range. The Gaussian term provides a spatial locality bias, ensuring that nearby vehicles have a stronger influence while maintaining smooth attention transitions. This design is theoretically motivated by human attention mechanisms in driving, where spatial proximity strongly influences attention allocation. Ablation experiments demonstrate that removing this term degrades ADE by 4.2%, confirming its necessity for realistic interaction modeling.
The deformation process transforms the regular grid $G = \{(i, j)\}$ into an adapted sampling grid $G_{adsp} = \{(i + \Delta p_{ij}^{y},\, j + \Delta p_{ij}^{x})\}$. The adaptive pooling operation is then formulated as

$$F_{adsp}(c, i, j) = \sum_{(i', j') \in R(i, j)} F(c, i', j') \cdot K(i', j', i, j) \cdot W(v_{i'j'}),$$

where $R(i, j)$ defines the sampling region around the deformed grid point, $K$ is a bilinear interpolation kernel, and $W(v_{i'j'})$ is a velocity-dependent weighting function:

$$W(v_{ij}) = \frac{\exp(\alpha \cdot \lVert v_{ij} \rVert)}{\sum_{(k, l) \in R(i, j)} \exp(\alpha \cdot \lVert v_{kl} \rVert)},$$

with $\alpha$ controlling the sensitivity to speed variations. This velocity-dependent weighting enables dynamic attention allocation based on interaction urgency, with higher velocities receiving larger attention weights to reflect their greater impact on trajectory evolution.
Figure 1 provides visual evidence of ADSP’s adaptive behavior, clearly demonstrating how attention patterns shift based on velocity conditions to enable more effective capture of speed-dependent interaction dynamics.
To ensure smooth and continuous adaptation, we introduce a regularization term in the learning objective:

$$\mathcal{L}_{reg} = \beta \sum_{i,j} \lVert \Delta p_{ij} \rVert^2 + \gamma \sum_{i,j} \lVert \nabla \Delta p_{ij} \rVert^2$$

where $\beta$ and $\gamma$ are regularization coefficients and the second term enforces spatial smoothness in the deformation field.
This sophisticated pooling mechanism enables ADSP to dynamically adjust its receptive field based on both spatial and kinematic features, resulting in more effective capture of vehicle interactions across varying speeds and distances. The adaptive nature of ADSP makes it particularly effective in scenarios with heterogeneous traffic patterns and varying vehicle densities.
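To make the mechanism concrete, the following is a minimal PyTorch sketch of ADSP's core computation: offsets predicted from features concatenated with the speed map, deformable bilinear sampling, and velocity-dependent weighting. The module name, tensor shapes, single offset per location, and the spatial-softmax form of the weighting are our own simplifying assumptions for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ADSPSketch(nn.Module):
    """Minimal sketch of Adaptive Deformable Speed-aware Pooling (simplified)."""

    def __init__(self, channels: int, alpha: float = 0.5, lam: float = 0.1):
        super().__init__()
        # Offset head consumes features concatenated with the speed map (C + 1 channels)
        self.offset_conv = nn.Conv2d(channels + 1, 2, kernel_size=3, padding=1)
        # Hypothetical head implementing the lambda * SpeedBias(S) term
        self.speed_bias = nn.Conv2d(1, 2, kernel_size=3, padding=1)
        self.alpha, self.lam = alpha, lam

    def forward(self, feat: torch.Tensor, speed: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) feature map F; speed: (B, 1, H, W) speed map S
        B, C, H, W = feat.shape
        offsets = (self.offset_conv(torch.cat([feat, speed], dim=1))
                   + self.lam * self.speed_bias(speed))        # (B, 2, H, W)

        # Deformed sampling grid in normalized [-1, 1] coordinates
        ys, xs = torch.meshgrid(
            torch.linspace(-1, 1, H, device=feat.device),
            torch.linspace(-1, 1, W, device=feat.device),
            indexing="ij",
        )
        base = torch.stack([xs, ys], dim=-1).expand(B, H, W, 2)
        grid = base + offsets.permute(0, 2, 3, 1)

        # Bilinear sampling plays the role of the interpolation kernel K
        sampled = F.grid_sample(feat, grid, align_corners=True)

        # Velocity-dependent weighting W(v): spatial softmax over alpha * |v|
        w = torch.softmax((self.alpha * speed).flatten(2), dim=-1).view(B, 1, H, W)
        return sampled * w * (H * W)  # rescale so the weights average to one
```

In this simplified form, the speed map steers both where the pooling samples (through the offsets) and how much each sample contributes (through the softmax weighting), mirroring the two roles played by $\Delta p_{ij}$ and $W(v_{ij})$ in the equations above.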

2.2. Adversarial Knowledge Distillation Module (AKDM)

The Adversarial Knowledge Distillation Module (AKDM) is introduced to transfer knowledge from a complex teacher model to a simplified student model, thereby improving prediction accuracy and model efficiency. The AKDM employs adversarial learning to encourage the student model to mimic the teacher model’s behavior while achieving superior performance compared to traditional knowledge distillation methods.
Figure 2 illustrates the workflow of the AKDM. The teacher model and student model first extract feature maps $F_t$ and $F_s$, respectively. A discriminator $D$ is trained to distinguish between the features from the teacher and student models by maximizing the adversarial loss $\mathcal{L}_{adv}$, defined as

$$\mathcal{L}_{adv} = \mathbb{E}_{F_t}[\log D(F_t)] + \mathbb{E}_{F_s}[\log(1 - D(F_s))],$$

where $D(\cdot)$ represents the discriminator's output probability. The student model, in turn, tries to minimize this loss, learning to generate features that are indistinguishable from those of the teacher model.
In addition to the adversarial loss, the AKDM also introduces a distillation loss $\mathcal{L}_{dis}$ to measure the discrepancy between the output probability distributions of the teacher and student models. The distillation loss is defined as the Kullback–Leibler (KL) divergence between the output probabilities $P_t$ and $P_s$:

$$\mathcal{L}_{dis} = \mathrm{KL}(P_t \,\|\, P_s).$$
Through the joint optimization of the adversarial loss $\mathcal{L}_{adv}$ and distillation loss $\mathcal{L}_{dis}$, the AKDM facilitates comprehensive knowledge transfer across multiple levels of abstraction. The optimization process can be formulated as a min–max game:

$$\min_{\theta_s} \max_{\theta_d} \mathcal{L}_{total} = \alpha \mathcal{L}_{adv} + \beta \mathcal{L}_{dis} + \lambda \mathcal{L}_{reg}$$

where $\theta_s$ and $\theta_d$ respectively represent the parameters of the student model and discriminator. The regularization term $\mathcal{L}_{reg}$ is introduced to prevent overfitting:

$$\mathcal{L}_{reg} = \gamma_1 \lVert \theta_s \rVert_2^2 + \gamma_2 \sum_{l} \lVert F_s^{l} - F_t^{l} \rVert_F^2 + \gamma_3 \, \mathrm{TV}(F_s)$$

where $\mathrm{TV}$ denotes total variation regularization. This optimization framework enables the student model to learn both feature representations and decision boundaries from the teacher model while maintaining computational efficiency through architectural simplification.
Table 2 demonstrates the AKDM’s significant advantages over Hinton’s original knowledge distillation method, achieving 8.5% ADE improvement and 7.2% FDE enhancement while maintaining computational efficiency. The adversarial mechanism enhances robustness by learning distribution-level features rather than point estimates, leading to better generalization under domain shift conditions.
Integration of the AKDM within ADSAP yields several significant advantages. First, it enables efficient knowledge compression, reducing the model complexity from $O(N^2)$ to $O(N \log N)$ while preserving prediction accuracy. Second, the adversarial training mechanism enhances the robustness of the learned representations, as demonstrated by the improved performance under distribution shift:

$$\mathbb{E}_{x \sim P_{test}}\left[ \lVert f_s(x) - f_t(x) \rVert^2 \right] \leq \epsilon + \delta \, W_2(P_{train}, P_{test})$$

where $W_2$ denotes the Wasserstein-2 distance between the training and test distributions and $\epsilon$, $\delta$ are both small constants. The resulting student model achieves a favorable tradeoff between computational efficiency (average inference time of 15 ms on an NVIDIA RTX 3080 GPU) and prediction accuracy (within 2% of teacher model performance), making it well suited for real-time autonomous driving applications.
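The following is a minimal sketch of one AKDM training step under the min–max objective above, assuming a frozen teacher, a simple discriminator over features, and illustrative loss weights; the network definitions and weightings are placeholders rather than the paper's exact configuration.

```python
import torch
import torch.nn.functional as F


def akdm_step(teacher, student, disc, opt_s, opt_d, x, alpha=1.0, beta=0.5):
    """One adversarial knowledge-distillation step (illustrative sketch).

    teacher/student map x to (features, output probabilities); disc maps
    features to a probability in (0, 1). The teacher is kept frozen.
    """
    with torch.no_grad():
        f_t, p_t = teacher(x)

    # Discriminator update: maximize log D(F_t) + log(1 - D(F_s))
    f_s, _ = student(x)
    d_loss = -(torch.log(disc(f_t) + 1e-8).mean()
               + torch.log(1 - disc(f_s.detach()) + 1e-8).mean())
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Student update: fool the discriminator and match the teacher distribution
    f_s, p_s = student(x)
    adv = -torch.log(disc(f_s) + 1e-8).mean()                  # non-saturating loss
    dis = F.kl_div(p_s.log(), p_t, reduction="batchmean")      # KL(P_t || P_s)
    s_loss = alpha * adv + beta * dis
    opt_s.zero_grad(); s_loss.backward(); opt_s.step()
    return d_loss.item(), s_loss.item()
```

Alternating these two updates realizes the min–max game: the discriminator sharpens its ability to tell teacher features from student features, while the student is pushed toward feature distributions the discriminator cannot separate.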

2.3. Multi-Scale Feature Aggregation

To effectively capture the hierarchical nature of traffic scenes, ADSAP implements a sophisticated multi-scale feature aggregation architecture. This framework systematically extracts and combines features across multiple spatial resolutions, enabling comprehensive scene understanding from both microscopic vehicle interactions and macroscopic traffic patterns.
Given an input feature map $F \in \mathbb{R}^{C \times H \times W}$, we construct a feature pyramid $\{F^{1}, F^{2}, \ldots, F^{L}\}$ through iterative downsampling and feature extraction. Each level $l$ in the pyramid is generated through a combination of convolution and max-pooling operations:

$$F^{l} = \mathrm{BN}(\mathrm{ReLU}(\mathrm{Conv}(\mathrm{MaxPool}(F^{l-1}))))$$

where $\mathrm{BN}$ denotes batch normalization. To enhance feature discrimination at each scale, we incorporate a channel attention mechanism

$$F_{att}^{l} = F^{l} \otimes \sigma(\mathrm{MLP}(\mathrm{GlobalPool}(F^{l}))),$$

where $\otimes$ represents channel-wise multiplication and $\sigma$ is the sigmoid activation function.
The multi-scale feature aggregation process follows a bidirectional pathway combining bottom-up and top-down information flow. The top-down pathway progressively upsamples lower-resolution features and merges them with higher-resolution features through adaptive fusion:

$$F_{agg}^{l} = \alpha_l \cdot \mathrm{Conv}(F_{att}^{l}) + \beta_l \cdot \mathrm{Upsample}(F_{agg}^{l+1}) + \gamma_l \cdot \mathrm{Lateral}(F^{l})$$

where $\alpha_l$, $\beta_l$, and $\gamma_l$ are learnable scale-specific weights and the lateral connection is defined as follows:

$$\mathrm{Lateral}(F^{l}) = \mathrm{Conv}_{1 \times 1}(F^{l}) + \mathrm{SE}(F^{l})$$

with $\mathrm{SE}$ denoting a squeeze-and-excitation block that adaptively recalibrates channel-wise feature responses.
To enhance cross-scale feature interaction, we introduce a scale-aware attention mechanism

$$A^{l,k} = \mathrm{softmax}\left( \frac{(W_Q F^{l})(W_K F^{k})^{T}}{\sqrt{d_k}} \right) W_V F^{k},$$

where $W_Q$, $W_K$, and $W_V$ are learnable projection matrices.
The final aggregated representation is obtained through adaptive feature fusion:

$$F_{final} = \sum_{l=1}^{L} w_l \cdot \mathrm{Transform}(F_{agg}^{l})$$

where the fusion weights $w_l$ are dynamically computed based on the current context

$$w_l = \frac{\exp(\phi(F_{agg}^{l}))}{\sum_{k=1}^{L} \exp(\phi(F_{agg}^{k}))},$$

with $\phi(\cdot)$ being a lightweight context encoding network. We tested cross-attention mechanisms for fusion weight computation but found only a marginal improvement (1.2%) at 40% higher computational cost, making the current approach more suitable for real-time applications.
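As a rough illustration of this pathway, the sketch below builds a small feature pyramid and fuses it top-down with context-dependent softmax weights; the number of levels, layer sizes, and the global-average-pooled form of $\phi(\cdot)$ are illustrative assumptions rather than the exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiScaleAggSketch(nn.Module):
    """Feature pyramid with adaptive top-down fusion (illustrative sketch)."""

    def __init__(self, channels: int = 64, levels: int = 3):
        super().__init__()
        self.levels = levels
        self.down = nn.ModuleList(
            nn.Sequential(nn.MaxPool2d(2),
                          nn.Conv2d(channels, channels, 3, padding=1),
                          nn.ReLU(), nn.BatchNorm2d(channels))
            for _ in range(levels - 1))
        self.lateral = nn.ModuleList(nn.Conv2d(channels, channels, 1)
                                     for _ in range(levels))
        # phi(.): lightweight context encoder, one fusion logit per level
        self.phi = nn.ModuleList(nn.Linear(channels, 1) for _ in range(levels))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = [x]                          # bottom-up pyramid {F^1, ..., F^L}
        for stage in self.down:
            feats.append(stage(feats[-1]))

        agg = [None] * self.levels           # top-down pathway with laterals
        agg[-1] = self.lateral[-1](feats[-1])
        for l in range(self.levels - 2, -1, -1):
            up = F.interpolate(agg[l + 1], size=feats[l].shape[-2:],
                               mode="bilinear", align_corners=False)
            agg[l] = self.lateral[l](feats[l]) + up

        # Context-dependent fusion weights w_l = softmax_l(phi(F_agg^l))
        logits = torch.stack([self.phi[l](agg[l].mean(dim=(2, 3)))
                              for l in range(self.levels)], dim=1)   # (B, L, 1)
        w = torch.softmax(logits, dim=1)
        return sum(w[:, l].view(-1, 1, 1, 1)
                   * F.interpolate(agg[l], size=x.shape[-2:],
                                   mode="bilinear", align_corners=False)
                   for l in range(self.levels))
```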
Experimental results demonstrate that this multi-scale feature aggregation scheme significantly improves prediction accuracy across diverse traffic scenarios. The architecture achieves a 15.3% reduction in prediction error compared to single-scale baselines, with particularly notable improvements in complex urban environments where multi-scale context is crucial for accurate trajectory forecasting.

2.4. Theoretical Analysis

We provide comprehensive theoretical justification for ADSAP’s component combination and demonstrate why this specific integration is fundamentally advantageous over conventional methods. The theoretical analysis establishes error bounds and convergence guarantees that support our empirical findings.
The speed-aware pooling mechanism provides superior dynamic interaction modeling through tighter error bounds compared to traditional pooling methods:

$$\mathbb{E}\left[ \lVert \hat{Y} - Y \rVert^2 \right] \leq C_1 \cdot \sigma_v^2 + C_2 \cdot \epsilon_{spatial} + C_3 \cdot \delta_{temporal}$$

where $\sigma_v^2$ represents the velocity variance, $\epsilon_{spatial}$ denotes the spatial discretization error, and $\delta_{temporal}$ captures the temporal alignment error. This bound is tighter than those of traditional pooling methods due to velocity-dependent adaptation, which reduces both spatial and temporal prediction uncertainties.

The adversarial training process in the AKDM enhances robustness by minimizing the Wasserstein distance between the teacher and student distributions:

$$W_2(P_{teacher}, P_{student}) \leq \epsilon + \delta \cdot \mathcal{L}_{adv}.$$

This theoretical framework guarantees the quality of knowledge transfer while maintaining computational efficiency. The adversarial component provides distribution-level alignment rather than point-wise matching, leading to better generalization:

$$\sup_{x \in \mathcal{X}} \lVert f_s(x) - f_t(x) \rVert^2 \leq W_2(P_{teacher}, P_{student}).$$

The multi-scale feature aggregation provides a hierarchical error decomposition:

$$E_{total} = \sum_{l=1}^{L} w_l \cdot E_l + E_{fusion}$$

where $E_l$ represents scale-specific errors and $E_{fusion}$ captures fusion-related uncertainties. The adaptive weight mechanism minimizes the total error by optimally combining scale-specific contributions.

2.5. Preliminaries

In this section, we present a comprehensive overview of the fundamental techniques integrated into our proposed ADSAP framework. These key components include the Graph Attention Network v2 (GATv2), the Gated Recurrent Unit with Squeeze-and-Excitation (GRU-SE), the Swish activation function, and the shift-window attention mechanism, each contributing distinct advantages to our architecture.
Graph Attention Network v2 (GATv2) [34] represents a significant advancement over the original Graph Attention Network (GAT) [35], addressing several critical limitations in graph representation learning. The key innovation lies in its modified attention mechanism:

$$\alpha_{ij} = \frac{\exp\left( W_a [W h_i \,\|\, W h_j] \right)}{\sum_{k \in \mathcal{N}_i} \exp\left( W_a [W h_i \,\|\, W h_k] \right)}$$

where $h_i$ and $h_j$ represent node features, $W$ and $W_a$ are learnable parameter matrices, and $\|$ denotes concatenation. This formulation enables dynamic attention computation and mitigates the rank collapse issue observed in traditional GATs through the introduction of a linear transformation followed by a LeakyReLU nonlinearity:

$$e_{ij} = \mathrm{LeakyReLU}\left( a^{T} [W h_i \,\|\, W h_j] \right)$$

where $a$ is a learnable attention vector.
Gated Recurrent Unit with Squeeze-and-Excitation (GRU-SE) [36] enhances the standard GRU architecture [37] by incorporating channel-wise feature recalibration through the SE mechanism [38]. The SE block operates on the hidden state $h_t$ by first applying global average pooling:

$$s_c = \frac{1}{H \times W} \sum_{i=1}^{H} \sum_{j=1}^{W} h_t^{c}(i, j)$$

followed by a two-layer excitation network:

$$z = \sigma\left( W_2 \, \mathrm{ReLU}(W_1 s) \right)$$

where $W_1 \in \mathbb{R}^{r \times C}$ and $W_2 \in \mathbb{R}^{C \times r}$ are dimension reduction and expansion matrices, respectively, with reduction ratio $r$. The recalibrated features are obtained through channel-wise multiplication:

$$\tilde{h}_t = z \odot h_t.$$
This architecture has demonstrated superior performance in temporal modeling tasks, achieving a 12.5% reduction in prediction error compared to standard GRU implementations. Our choice of GRU-SE over transformer architectures is motivated by computational efficiency requirements in autonomous driving, achieving 60% parameter reduction while maintaining comparable performance for sequential trajectory modeling.
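A compact sketch of the SE recalibration applied to a GRU hidden state is shown below; treating the hidden dimension as the "channel" axis and squeezing over time is our simplifying assumption for the sequential setting.

```python
import torch
import torch.nn as nn


class GRUSESketch(nn.Module):
    """GRU followed by squeeze-and-excitation gating of the hidden state (sketch)."""

    def __init__(self, in_dim: int, hidden: int, reduction: int = 4):
        super().__init__()
        self.gru = nn.GRU(in_dim, hidden, batch_first=True)
        # Two-layer excitation network: z = sigmoid(W2 ReLU(W1 s))
        self.excite = nn.Sequential(
            nn.Linear(hidden, hidden // reduction), nn.ReLU(),
            nn.Linear(hidden // reduction, hidden), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, in_dim) trajectory features
        h, _ = self.gru(x)                    # (B, T, hidden)
        s = h.mean(dim=1)                     # "squeeze": global average over time
        z = self.excite(s).unsqueeze(1)       # (B, 1, hidden) channel gates
        return h * z                          # recalibrated hidden states
```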
Swish [39] is a smooth non-monotonic activation function, defined as follows:
$$\mathrm{Swish}(x) = x \cdot \mathrm{sigmoid}(\beta x)$$
where β is a learnable parameter. Swish has been shown to outperform other activation functions such as ReLU [40] and ELU [41] in deep neural network contexts. Its smooth and non-monotonic nature allows for better gradient flow and improved optimization.
Shift-window attention [42] is a variant of self-attention that operates on shifted windows in a feature map. It allows for efficient computation and reduces the complexity of self-attention from quadratic to linear with respect to the input size. Shift-window attention has been successfully applied in transformer-based models for various computer vision tasks, achieving state-of-the-art performance while maintaining computational efficiency.
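To illustrate the idea, the sketch below applies a simplified one-dimensional shifted-window self-attention to a trajectory feature sequence: the sequence is cyclically shifted, partitioned into fixed windows, and attention is computed within each window only. The window size, head count, and 1D formulation are our assumptions; the Swin-style attention mask for cross-boundary tokens is omitted for brevity.

```python
import torch
import torch.nn as nn


class ShiftWindowAttn1D(nn.Module):
    """Simplified 1D shifted-window self-attention (illustrative sketch)."""

    def __init__(self, dim: int, window: int = 8, shift: int = 4, heads: int = 4):
        super().__init__()
        self.window, self.shift = window, shift
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, T, dim), with T divisible by the window size for simplicity
        B, T, D = x.shape
        x = torch.roll(x, shifts=-self.shift, dims=1)          # cyclic shift
        xw = x.reshape(B * T // self.window, self.window, D)   # window partition
        out, _ = self.attn(xw, xw, xw)                         # attention per window
        return torch.roll(out.reshape(B, T, D), shifts=self.shift, dims=1)
```

Because attention is restricted to fixed-size windows, the cost grows linearly with the sequence length, and alternating shifted and unshifted layers restores cross-window information flow.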
In our ADSAP framework, we leverage these techniques to enhance the feature extraction, sequence modeling, and attention mechanisms. GATv2 is used in the context encoding module to capture the interactions between vehicles, while GRU-SE is employed in the sequence modeling module to capture temporal dependencies. Swish activation is used throughout the network to improve optimization, and shift-window attention is incorporated in the transformer-based sequence modeling module to enable efficient and effective modeling of long-range dependencies.

2.6. Model Architecture

The overall architecture of ADSAP consists of a teacher model and a student model, as shown in Figure 3. The teacher model is a complex network that extracts high-level features and makes accurate predictions, while the student model is a simplified version that learns from the teacher model through adversarial knowledge distillation.
The teacher model in ADSAP implements a sophisticated hierarchical architecture for trajectory prediction, which comprises three main components: a surroundings-aware encoder, a specialized teacher encoder, and a multimodal decoder. This design enables comprehensive scene understanding and accurate trajectory forecasting through multi-level feature extraction and fusion.
The surroundings-aware encoder processes input trajectory and scene information through a dual-stream architecture. The first stream employs cascaded causal CNN layers interleaved with Batch Normalization (BN) to extract temporal features:

$$F_{temp} = \mathrm{BN}(\mathrm{CausalCNN}_n(\cdots \mathrm{BN}(\mathrm{CausalCNN}_1(X)) \cdots))$$

where $X$ represents the input trajectory features. The second stream utilizes GATv2 layers to model vehicle interactions:

$$F_{int} = \mathrm{GATv2}(X, A; \Theta)$$

where $A$ denotes the adjacency matrix representing vehicle relationships and $\Theta$ are the learnable parameters.
The teacher encoder incorporates an Adaptive Deformable Speed-aware Pooling (ADSP) mechanism that dynamically adjusts the receptive field based on vehicle velocities:

$$F_{adsp} = \sum_{k=1}^{K} w_k(v) \cdot \phi(x + \Delta p_k(v))$$

where $v$ represents vehicle velocities, $w_k$ are the learned sampling weights, and $\Delta p_k$ are velocity-dependent offset fields. The pooled features are then processed through a transformer-based architecture with shift-window attention:

$$Q, K, V = \mathrm{SplitHead}(F_{adsp}),$$
$$\mathrm{Attention}(Q, K, V) = \mathrm{softmax}\left( \frac{QK^{T}}{\sqrt{d_k}} + M \right) V,$$

where $M$ is the shift-window attention mask that enables efficient computation of local and global dependencies.
The teacher model's multimodal decoder employs a hierarchical structure with alternating transformer layers and nonlinear activations:

$$H^{l} = \mathrm{Transformer}^{l}(\mathrm{Swish}(H^{l-1})) + H^{l-1},$$

culminating in a Multi-Layer Perceptron (MLP) with hyperbolic tangent activation for trajectory prediction:

$$Y_{tea} = \tanh(\mathrm{MLP}(H^{L}))$$

where $Y_{tea}$ represents the predicted trajectories.
The student model implements a lightweight yet effective architecture that maintains prediction accuracy while significantly reducing computational complexity. This is achieved through strategic architectural simplifications and knowledge transfer from the teacher model via adversarial distillation.
The student encoder adopts a streamlined design featuring GRU-SE units for temporal modeling:

$$r_t = \sigma(W_r [h_{t-1}, x_t] + b_r),$$
$$z_t = \sigma(W_z [h_{t-1}, x_t] + b_z),$$
$$\tilde{h}_t = \tanh(W_h [r_t \odot h_{t-1}, x_t] + b_h),$$

where the SE mechanism adaptively recalibrates channel-wise responses:

$$s = \mathrm{SE}(h_t) = \sigma(W_2 \, \mathrm{ReLU}(W_1 \, \mathrm{GAP}(h_t)))$$

with $\mathrm{GAP}$ denoting global average pooling.
The student model's Adaptive Deformable Speed-aware Pooling (ADSP) utilizes a simplified offset computation:

$$\Delta p = \mathrm{MLP}([v, F_{local}]) \cdot \alpha(\lVert v \rVert_2)$$

where $\alpha(\cdot)$ is a velocity-dependent scaling function. The pooled features are processed through shift-window attention with a reduced window size:

$$A_{local} = \mathrm{softmax}\left( \frac{QK^{T}}{\sqrt{d_k}} + M_{local} \right) V.$$
The student multimodal decoder employs alternating MLP and GRU-SE layers:

$$H^{l} = \mathrm{GRU\text{-}SE}(\mathrm{Swish}(\mathrm{MLP}(H^{l-1}))) + H^{l-1},$$

culminating in trajectory prediction:

$$Y_{stu} = \tanh(\mathrm{MLP}(H^{L})).$$

The knowledge transfer process is guided by the AKDM loss:

$$\mathcal{L}_{total} = \lambda_1 \mathcal{L}_{adv} + \lambda_2 \mathcal{L}_{dis} + \lambda_3 \lVert Y_{stu} - Y_{tea} \rVert_2^2,$$

where $\lambda_1$, $\lambda_2$, and $\lambda_3$ are balancing coefficients.
The training process of ADSAP follows a two-stage optimization strategy incorporating adversarial knowledge distillation to ensure efficient knowledge transfer from the teacher to the student model while maintaining prediction accuracy.
In the first stage, the teacher model is optimized using a comprehensive loss function:

$$\mathcal{L}_{teacher} = \lambda_{traj} \mathcal{L}_{traj} + \lambda_{reg} \mathcal{L}_{reg} + \lambda_{div} \mathcal{L}_{div}$$

where the trajectory loss $\mathcal{L}_{traj}$ combines displacement and heading errors

$$\mathcal{L}_{traj} = \sum_{t=1}^{T} \lVert Y_{tea}^{t} - Y_{gt}^{t} \rVert_2^2 + \alpha \sum_{t=1}^{T} \left( 1 - \cos(\theta_{tea}^{t} - \theta_{gt}^{t}) \right),$$

the regularization loss $\mathcal{L}_{reg}$ enforces smooth predictions

$$\mathcal{L}_{reg} = \sum_{t=2}^{T} \lVert \Delta Y_{tea}^{t} - \Delta Y_{tea}^{t-1} \rVert_2^2,$$

and the diversity loss $\mathcal{L}_{div}$ encourages multiple plausible predictions

$$\mathcal{L}_{div} = \sum_{i=1}^{K} \sum_{j=i+1}^{K} \exp\left( -\lVert Y_{tea,i} - Y_{tea,j} \rVert_2^2 / \sigma^2 \right).$$
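A minimal sketch of this composite teacher loss is given below, assuming $K$ predicted modes of shape (K, T, 2), headings in radians, and illustrative loss weights; the averaging over modes in the trajectory term is our simplification.

```python
import torch


def teacher_loss(y_pred, y_gt, theta_pred, theta_gt,
                 w_traj=1.0, w_reg=0.1, w_div=0.01, alpha=0.5, sigma=1.0):
    """Composite teacher loss: trajectory + smoothness + diversity (sketch).

    y_pred: (K, T, 2) trajectory modes; y_gt: (T, 2); theta_*: (T,) headings.
    """
    # Displacement error (averaged over modes) plus heading error
    disp = ((y_pred - y_gt) ** 2).sum(-1).sum(-1).mean()
    head = alpha * (1 - torch.cos(theta_pred - theta_gt)).sum()
    l_traj = disp + head

    # Smoothness: penalize changes in the per-step displacement
    delta = y_pred[:, 1:] - y_pred[:, :-1]
    l_reg = ((delta[:, 1:] - delta[:, :-1]) ** 2).sum(-1).sum(-1).mean()

    # Diversity: repel modes from one another via a pairwise Gaussian kernel
    K = y_pred.shape[0]
    l_div = sum(torch.exp(-((y_pred[i] - y_pred[j]) ** 2).sum() / sigma ** 2)
                for i in range(K) for j in range(i + 1, K))
    return w_traj * l_traj + w_reg * l_reg + w_div * l_div
```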
The knowledge distillation phase employs the AKDM with three components:

$$\mathcal{L}_{AKDM} = \mathcal{L}_{adv} + \beta \mathcal{L}_{dis} + \gamma \mathcal{L}_{feat}$$

where the adversarial loss $\mathcal{L}_{adv}$ is computed through a discriminator $D$

$$\mathcal{L}_{adv} = \mathbb{E}[\log D(Y_{tea})] + \mathbb{E}[\log(1 - D(Y_{stu}))],$$

the distillation loss $\mathcal{L}_{dis}$ measures prediction consistency

$$\mathcal{L}_{dis} = \mathrm{KL}\left( P_{tea}(Y \mid X) \,\|\, P_{stu}(Y \mid X) \right),$$

and the feature matching loss $\mathcal{L}_{feat}$ aligns intermediate representations

$$\mathcal{L}_{feat} = \sum_{l=1}^{L} \lVert \phi_l(F_{tea}) - \phi_l(F_{stu}) \rVert_2^2.$$
During inference, the student model operates independently with an efficient forward pass:

$$Y_{pred} = f_{stu}(X_{test}; \Theta_{stu}).$$

3. Experiments

3.1. Experimental Setup

In this study, we conducted comprehensive experiments on multiple widely-used datasets to evaluate the performance of our proposed ADSAP framework against state-of-the-art trajectory prediction methods.

3.1.1. Dataset and Preprocessing

The Next-Generation Simulation (NGSIM) dataset constitutes a comprehensive repository of naturalistic vehicle trajectories meticulously recorded through high-resolution digital video cameras (25 Hz sampling rate) mounted on adjacent buildings along the US-101 and I-80 highways. The US-101 highway segment in Los Angeles, California spans approximately 640 m and captures the movements of roughly 6000 vehicles across three 15-min periods, with traffic densities ranging from 1200 to 8000 vehicles per hour per lane. Similarly, the I-80 highway segment in Emeryville, California covers a 500-m stretch, documenting approximately 5000 vehicles under varying traffic conditions, with densities between 1000 and 6500 vehicles per hour per lane.
The INTERACTION dataset [6] provides naturalistic vehicle trajectories recorded at intersections, roundabouts, and merging scenarios across different countries including Germany, USA, and China. This dataset contains over 16,000 recorded tracks with diverse driving behaviors and complex multi-agent interactions, making it ideal for evaluating generalization across different driving cultures and traffic patterns.
The highD dataset [33] offers highway driving data recorded using drones at six different locations in Germany, capturing over 110,000 vehicles across 16.5 h of driving. This dataset provides comprehensive vehicle trajectories at 25 Hz frequency with precise positioning and velocity measurements, enabling detailed analysis of highway driving behaviors.
Our preprocessing methodology implements a systematic approach to temporal segmentation, where each trajectory sequence is partitioned into segments of 8 s each following the convention

$$X_i = \{x_t\}_{t=1}^{T_{hist}}, \quad Y_i = \{y_t\}_{t=T_{hist}+1}^{T_{hist}+T_{fut}},$$

where $T_{hist} = 30$ frames (3 s) represents the historical observation window and $T_{fut} = 50$ frames (5 s) constitutes the prediction horizon. Each frame encapsulates a comprehensive feature vector

$$x_t = [x_t, y_t, v_t, a_t, \theta_t, \dot{\theta}_t, l_t, w_t],$$

comprising spatial coordinates $(x, y)$, kinematic parameters including velocity $v$, acceleration $a$, heading angle $\theta$, and angular velocity $\dot{\theta}$, and vehicle dimensions $(l, w)$.
The data underwent rigorous quality control to ensure reliability, eliminating trajectories with missing frames, physically implausible velocities exceeding 35 m/s, and anomalous accelerations beyond ±11 m/s². Feature normalization was applied to standardize the input distribution:

$$\tilde{x}_t = \frac{x_t - \mu_x}{\sigma_x}$$

where $\mu_x$ and $\sigma_x$ respectively represent the feature-wise mean and standard deviation computed across the training dataset.
The processed datasets are partitioned into training (70%), validation (10%), and test (20%) sets, maintaining temporal consistency by ensuring trajectory segments from the same vehicle remain within the same split. The resulting datasets exhibit consistent statistical properties, with mean trajectory durations of 8.0 s (SD: 0.3 s), average vehicle speeds of 15.3 m/s (SD: 4.8 m/s), and mean inter-vehicle distances of 23.5 m (SD: 12.7 m). This standardized preprocessing pipeline facilitates fair comparison with existing methodologies while preserving the essential characteristics of naturalistic driving behavior necessary for developing robust trajectory prediction models.
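For concreteness, the following is a small NumPy sketch of the sliding-window segmentation and quality filtering described above; the thresholds and field layout follow the stated values, while the function and variable names are our own.

```python
import numpy as np

T_HIST, T_FUT = 30, 50        # 3 s history, 5 s future
V_MAX, A_MAX = 35.0, 11.0     # plausibility thresholds (m/s, m/s^2)


def segment_track(track: np.ndarray):
    """Slide a (T_HIST + T_FUT)-frame window over one vehicle track.

    track: (T, 8) array of [x, y, v, a, theta, theta_dot, l, w] per frame.
    Returns (history, future) pairs that pass the quality filters.
    """
    pairs, win = [], T_HIST + T_FUT
    for s in range(0, len(track) - win + 1):
        seg = track[s:s + win]
        if np.abs(seg[:, 2]).max() > V_MAX or np.abs(seg[:, 3]).max() > A_MAX:
            continue                                 # reject implausible segments
        pairs.append((seg[:T_HIST], seg[T_HIST:, :2]))  # future keeps (x, y)
    return pairs


def normalize(features, mu, sigma):
    """Feature-wise standardization with training-set statistics."""
    return (features - mu) / (sigma + 1e-8)
```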

3.1.2. Hardware Configuration and Evaluation Metrics

All experiments were conducted on a high-performance computing system equipped with an NVIDIA RTX 3080 GPU (10 GB VRAM) and Intel i7-10700K CPU (8 cores, 3.8 GHz base frequency). The student model achieves 15 ms average inference time (batch size = 32, FP16 precision) on GPU and 45 ms on CPU, meeting the strict 50 ms latency requirement for real-time autonomous driving applications. Training time analysis shows 24 h for the teacher model and 8 h for the student model on the specified hardware configuration.
For comprehensive performance assessment, we employ a rigorous evaluation framework centered on trajectory prediction accuracy and temporal consistency. The primary metrics are formulated as follows:
The Root Mean Square Error (RMSE) at prediction time step $t$ is defined as follows:

$$\mathrm{RMSE}_t = \sqrt{ \frac{1}{N} \sum_{i=1}^{N} \lVert \hat{Y}_i^{t} - Y_i^{t} \rVert_2^2 }$$

where $N$ denotes the number of test samples, $\hat{Y}_i^{t}$ represents the predicted position at time step $t$ for the $i$-th trajectory, and $Y_i^{t}$ is the corresponding ground truth position.

The Average Displacement Error (ADE) captures the mean prediction error across all future timesteps:

$$\mathrm{ADE} = \frac{1}{T_{fut}} \sum_{t=1}^{T_{fut}} \frac{1}{N} \sum_{i=1}^{N} \lVert \hat{Y}_i^{t} - Y_i^{t} \rVert_2$$

where $T_{fut}$ represents the prediction horizon (50 frames in our implementation).

To evaluate the model's performance at specific critical horizons, we compute the Final Displacement Error (FDE):

$$\mathrm{FDE} = \frac{1}{N} \sum_{i=1}^{N} \lVert \hat{Y}_i^{T_{fut}} - Y_i^{T_{fut}} \rVert_2.$$
For multimodal predictions, we extend these metrics to incorporate the minimum error across $K$ predicted trajectories. ADSAP generates $K = 20$ diverse trajectory hypotheses as a multimodal predictor:

$$\mathrm{minADE}_K = \frac{1}{T_{fut}} \sum_{t=1}^{T_{fut}} \frac{1}{N} \sum_{i=1}^{N} \min_{k=1,\ldots,K} \lVert \hat{Y}_{i,k}^{t} - Y_i^{t} \rVert_2,$$

$$\mathrm{minFDE}_K = \frac{1}{N} \sum_{i=1}^{N} \min_{k=1,\ldots,K} \lVert \hat{Y}_{i,k}^{T_{fut}} - Y_i^{T_{fut}} \rVert_2.$$
To assess prediction consistency, we introduce the Temporal Smoothness Error (TSE):

$$\mathrm{TSE} = \frac{1}{T_{fut} - 1} \sum_{t=2}^{T_{fut}} \frac{1}{N} \sum_{i=1}^{N} \lVert \Delta \hat{Y}_i^{t} - \Delta Y_i^{t} \rVert_2$$

where $\Delta \hat{Y}_i^{t} = \hat{Y}_i^{t} - \hat{Y}_i^{t-1}$ represents the predicted velocity.

Additionally, we incorporate the jerk metric to evaluate temporal smoothness and realistic motion patterns:

$$\mathrm{Jerk} = \frac{1}{T_{fut} - 2} \sum_{t=3}^{T_{fut}} \frac{1}{N} \sum_{i=1}^{N} \lVert \Delta^2 \hat{Y}_i^{t} - \Delta^2 Y_i^{t} \rVert_2$$

where $\Delta^2 \hat{Y}_i^{t}$ represents the predicted acceleration change.
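The sketch below computes ADE, FDE, and minADE$_K$ directly from these definitions; the array shapes are assumed as annotated.

```python
import numpy as np


def ade_fde(pred: np.ndarray, gt: np.ndarray):
    """pred, gt: (N, T, 2) predicted and ground-truth positions in meters."""
    dist = np.linalg.norm(pred - gt, axis=-1)       # (N, T) per-step errors
    return dist.mean(), dist[:, -1].mean()          # ADE, FDE


def min_ade_k(pred_k: np.ndarray, gt: np.ndarray):
    """pred_k: (N, K, T, 2) multimodal hypotheses; gt: (N, T, 2)."""
    dist = np.linalg.norm(pred_k - gt[:, None], axis=-1)   # (N, K, T)
    return dist.min(axis=1).mean()                  # min over K, mean over N, T
```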
Statistical significance is established through paired t-tests with Bonferroni correction for multiple comparisons ( α = 0.05/m, where m is the number of comparisons). Confidence intervals are computed using bootstrap resampling with 1000 iterations to ensure robust performance estimation.
The comprehensive evaluation framework enables assessment of absolute prediction accuracy through RMSE and ADE, analysis of long-term prediction capability via FDE, evaluation of multimodal prediction quality using minADE and minFDE, and measurement of temporal consistency through the TSE and jerk metrics. Collectively, these metrics provide a thorough and statistically sound basis for comparing trajectory prediction models while considering both spatial accuracy and temporal coherence of the predictions.

3.2. Experimental Results

3.2.1. Comparison with State-of-the-Art Methods

We compare the performance of ADSAP with various state-of-the-art trajectory prediction methods on the NGSIM dataset. Table 3 presents the RMSE values for different prediction horizons (1 s to 5 s) and the average RMSE across all horizons, together with statistical significance indicators.
The results demonstrate the superior performance of ADSAP compared to the state-of-the-art baselines. Our ADSAP model achieves the lowest RMSE values across all prediction horizons and exhibits an average RMSE of 1.65 ± 0.09, outperforming the best-performing baseline (iNATran) by 6.3% with statistical significance (p < 0.001). Notably, ADSAP excels in long-term predictions, attaining substantial improvements of 13.2% and 5.2% for the 4-s and 5-s horizons, respectively. These findings highlight the effectiveness of our adaptive speed-aware pooling and adversarial knowledge distillation approach in capturing intricate vehicle interactions and transferring knowledge for accurate trajectory forecasting.
Furthermore, our lightweight ADSAP student model also surpasses most of the baselines, achieving an average RMSE of 1.73 ± 0.10. This indicates that our knowledge distillation method successfully transfers the essential knowledge from the teacher model to the student model, enabling efficient prediction without significant performance degradation.

3.2.2. Multi-Dataset Cross-Validation

Table 4 demonstrates ADSAP’s consistent superiority across all three datasets, validating its robust generalization capabilities. The INTERACTION dataset results (ADE: 1.89 ± 0.09 m, FDE: 3.78 ± 0.18 m) show excellent performance on complex intersection scenarios, while the highD results (ADE: 1.71 ± 0.08 m, FDE: 3.42 ± 0.15 m) confirm effectiveness on different highway driving patterns from those in the NGSIM dataset.

3.2.3. Cross-Weather and Traffic Density Analysis

Table 5 presents a comprehensive analysis across environmental conditions and traffic densities, confirming ADSAP’s robustness. The framework maintains superior performance across all weather conditions with statistically significant improvements (p < 0.001), demonstrating effective adaptation to varying environmental factors.

3.2.4. Comprehensive Ablation Studies

To comprehensively investigate the contributions of key components and design choices in ADSAP, we conducted extensive ablation studies examining various aspects of our model, including core components, architectural choices, and hyperparameter settings.
Table 6 demonstrates the significance of each component, with the Transformer Module (TM) and ADSP showing the most substantial impacts on performance. The comparison with deformable convolutions confirms ADSP’s superiority for trajectory prediction tasks.

3.2.5. Gaussian Term Ablation Analysis

Specific ablation analysis of the Gaussian term in Equation (2) shows that removing this component results in 4.2% ADE degradation (from 1.65 m to 1.72 m), confirming its essential role in providing spatial locality bias for realistic interaction modeling.

3.2.6. Runtime Performance and Computational Analysis

Table 7 provides detailed computational specifications confirming ADSAP’s efficiency advantages. The student model’s power consumption of 35 W makes onboard deployment feasible with current automotive computing platforms while achieving significant speedup over all baseline methods.

3.2.7. Qualitative Analysis and Trajectory Visualizations

Figure 4 provides extensive qualitative analysis demonstrating ADSAP’s superior performance in various challenging scenarios. The visualizations clearly show quantified improvements across different traffic situations while also highlighting areas for future improvement, particularly in complex multi-agent intersection scenarios.

3.2.8. Multimodal Prediction Analysis

ADSAP functions as a multimodal predictor generating K = 20 diverse trajectory hypotheses. Evaluation metrics include both unimodal (ADE: 1.65 m, FDE: 3.25 m) and multimodal (minADE20: 1.32 m, minFDE20: 2.98 m) performance, demonstrating superior capability in capturing trajectory uncertainty and providing comprehensive prediction distributions essential for autonomous driving safety.

3.2.9. Temporal Consistency Evaluation

Additional temporal consistency analysis using the jerk metric shows that ADSAP achieves 2.45 m/s³ compared to 3.18 m/s³ for the best baseline (iNATran), indicating a 23% improvement in temporal smoothness and more realistic motion pattern generation.

3.2.10. Input Sequence Length Analysis

We additionally investigated the impact of varying observation sequence lengths $T_{obs}$ on ADSAP's performance. The results show optimal performance at $T_{obs} = 8$ (current setting), with shorter sequences providing insufficient context and longer sequences introducing noise. This aligns with research on human attention spans in driving scenarios.

3.2.11. Missing Data Robustness

Evaluation under missing data conditions demonstrates ADSAP’s resilience, with performance degradation proportional to the proximity of missing data to the prediction window. Recent observation frames (within 1 s of prediction) show the highest importance for maintaining prediction accuracy.

3.2.12. ADSP Hyperparameter Analysis

Systematic analysis of ADSP's stride and window size configurations identified optimal settings ($stride_x = 8$, $stride_y = 6$, window size $32 \times 24$) that balance computational complexity with information capture effectiveness, validating our architectural design choices.

4. Conclusions

This paper presents ADSAP, an innovative trajectory prediction framework that advances the state of the art in autonomous driving through synergistic integration of adaptive deformable speed-aware pooling and adversarial knowledge transfer. The proposed framework effectively addresses the fundamental challenges in trajectory prediction by incorporating both microscopic vehicle interactions and macroscopic traffic dynamics while maintaining computational efficiency through sophisticated knowledge distillation techniques.
The proposed Adaptive Deformable Speed-aware Pooling (ADSP) mechanism introduces a novel approach to modeling vehicle interactions by dynamically adjusting attention weights and receptive fields based on instantaneous velocity and interaction states. This adaptive mechanism enables the model to capture speed-dependent interaction patterns and adjust its perception scope according to traffic density, effectively modeling both short-range and long-range dependencies in varying traffic conditions. In addition, the proposed Adversarial Knowledge Distillation Module (AKDM) implements an innovative training paradigm that facilitates efficient knowledge transfer while maintaining high prediction accuracy, resulting in a lightweight model suitable for real-world deployment.
Comprehensive empirical evaluation demonstrates ADSAP’s superior performance across multiple dimensions. The framework achieves an 18.7% reduction in the average displacement error and a 22.4% reduction in the final displacement error compared to state-of-the-art baselines while delivering a 3.2× speedup in inference time and maintaining 95% of the teacher model’s accuracy. Statistical significance is confirmed through rigorous paired t-tests (p < 0.001) with comprehensive confidence intervals. Notably, ADSAP exhibits consistent performance across diverse traffic scenarios with varying densities and speeds, demonstrating robust generalization capabilities validated across the NGSIM, INTERACTION, and highD datasets. The effectiveness of individual components has been validated through detailed ablation studies, confirming substantial contributions of both the ADSP mechanism (8.5% improvement) and the AKDM module (6.7% improvement) to overall system performance.
Limitations: ADSAP’s current architecture exhibits several limitations that warrant acknowledgment. The framework’s primary focus on highway and intersection scenarios may limit performance in complex urban environments with dense pedestrian interactions, construction zones, and irregular traffic patterns. The model’s dependency on NGSIM’s specific driving patterns and geographic characteristics requires adaptation for different countries and driving cultures, as behavioral norms vary significantly across regions. Performance evaluation under extreme weather conditions (heavy snow, flooding, severe wind) remains limited, potentially affecting deployment reliability in harsh environments. The current implementation assumes standardized vehicle types, and may require adaptation for mixed traffic scenarios involving motorcycles, bicycles, and commercial vehicles with distinct movement patterns.
Future Work: Several promising research directions emerge from this work that will enhance ADSAP’s applicability and performance. We plan to integrate high-definition maps for semantic understanding, incorporating lane geometry, traffic signs, road topology, and dynamic elements such as construction zones and temporary traffic modifications. Multimodal context integration will be expanded to include weather conditions, traffic light states, pedestrian movements, and cyclist interactions, enabling more comprehensive scene understanding. Advanced multi-agent coordination mechanisms will address dense traffic scenarios with complex vehicle interactions, implementing hierarchical attention models that capture both local pairwise interactions and global traffic flow patterns.
Cross-domain adaptation techniques will enhance geographic generalization across different countries and driving cultures, utilizing domain adversarial training and cultural behavior modeling to ensure robust performance regardless of deployment location. Methods for quantifying uncertainty will provide probabilistic trajectory predictions with confidence estimates, enabling more informed decision-making in autonomous driving systems through explicit modeling of prediction reliability. Online learning and adaptation mechanisms will enable continuous model refinement based on real-time observations, which is crucial for deployment in dynamic traffic environments where traffic patterns evolve over time.
Scene context modeling extensions will incorporate environmental factors such as road surface conditions, visibility limitations, and temporary obstacles, while agent interaction modeling will develop sophisticated frameworks for understanding complex multi-agent behaviors in scenarios such as roundabouts, merging zones, and emergency vehicle interactions. Integration with Vehicle-to-Vehicle (V2V) and Vehicle-to-Infrastructure (V2I) communication systems will leverage additional data sources for enhanced prediction accuracy and situational awareness.
The demonstrated accuracy of ADSAP, together with its efficient inference and clear pathway for enhancement, positions it as a significant contribution to the field of autonomous driving. The framework’s modular architecture facilitates integration with existing perception and planning modules, enabling practical deployment in current autonomous vehicle systems. With continued development and validation across diverse operational domains, we believe this framework can help advance the safety and reliability of autonomous vehicles, contributing to the broader vision of sustainable and intelligent transportation systems and paving the way for more reliable, efficient, and adaptable trajectory prediction.

Author Contributions

Conceptualization, Y.Q. and C.D.; data curation, X.W.; formal analysis, Y.Q., C.D. and J.Z.; funding acquisition, Y.Q.; investigation, C.D.; methodology, C.D.; software, X.W.; validation, J.Z.; writing—original draft, C.D.; writing—review and editing, F.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is jointly supported by the National Natural Science Foundation of China (Grant Nos. 52362047, 72361017), the Gansu Provincial Department of Education: Excellent Graduate Student “Innovation Star” Project (Grant No. 2023CXZX-523), the Excellent Doctoral Program of Gansu Province (Grant No. 23JRRA906), the Major Research Plan of Gansu Province (Grant No. 21YF5GA052), the 2021 Gansu Higher Education Industry Support Plan (Grant No. 2021CYZC-60), the Double-First Class Major Research Programs, Educational Department of Gansu Province (Grant No. GSSYLXM-04), and the Central Leading Local Science and Technology Development Fund Project (Grant No. 22ZY1QA005).

Data Availability Statement

The data supporting this study are available from the corresponding author upon reasonable request.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

References

1. Yurtsever, E.; Lambert, J.; Carballo, A.; Takeda, K. A survey of autonomous driving: Common practices and emerging technologies. IEEE Access 2020, 8, 58443–58469.
2. Badue, C.; Guidolini, R.; Carneiro, R.V.; Azevedo, P.; Cardoso, V.B.; Forechi, A.; De Souza, A.F. Self-driving cars: A survey. Expert Syst. Appl. 2021, 165, 113816.
3. Kuutti, S.; Fallah, S.; Bowden, R.; Barber, P. Deep Learning for Autonomous Vehicle Control: Algorithms, State-of-the-Art, and Future Prospects; Morgan & Claypool Publishers: San Rafael, CA, USA, 2019; Volume 21, pp. 4241–4257.
4. Tang, C.; Chen, X.M.; Hu, J. Multiple trajectory prediction of moving actors with LSTM networks. IEEE Access 2019, 7, 3514–3519.
5. Deo, N.; Trivedi, M.M. Convolutional social pooling for vehicle trajectory prediction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1468–1476.
6. Zhan, W.; Sun, L.; Wang, D.; Shi, H.; Clausse, A.; Naumann, M.; Tomizuka, M. Interaction dataset: An international, adversarial and cooperative motion dataset in interactive driving scenarios with semantic maps. arXiv 2019, arXiv:1910.03088.
7. Chandra, R.; Bhattacharya, U.; Bera, A.; Manocha, D. TraPHic: Trajectory prediction in dense and heterogeneous traffic using weighted interactions. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8483–8492.
8. Casas, S.; Luo, W.; Urtasun, R. IntentNet: Learning to predict intention from raw sensor data. In Proceedings of the Conference on Robot Learning, Stockholm, Sweden, 13–19 July 2018; pp. 947–956.
9. Lefèvre, S.; Vasquez, D.; Laugier, C. A survey on motion prediction and risk assessment for intelligent vehicles. ROBOMECH J. 2014, 1, 1–14.
10. Wang, Y.Z.; Huang, Z.Y.; Liu, C.; Zhang, R. A stepwise probabilistic trajectory prediction method for intelligent vehicles. IEEE Trans. Intell. Veh. 2022, 7, 327–338.
11. Lee, N.; Choi, W.; Vernaza, P.; Choy, C.B.; Torr, P.H.; Chandraker, M. DESIRE: Distant future prediction in dynamic scenes with interacting agents. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 336–345.
12. Hochreiter, S.; Schmidhuber, J. Long short-term memory. Neural Comput. 1997, 9, 1735–1780.
13. Alahi, A.; Goel, K.; Ramanathan, V.; Robicquet, A.; Fei-Fei, L.; Savarese, S. Social LSTM: Human trajectory prediction in crowded spaces. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 961–971.
14. Gupta, A.; Johnson, J.; Fei-Fei, L.; Savarese, S.; Alahi, A. Social GAN: Socially acceptable trajectories with generative adversarial networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 2255–2264.
15. Kosaraju, V.; Sadeghian, A.; Martín-Martín, R.; Reid, I.; Rezatofighi, H.; Savarese, S. Social-BiGAT: Multimodal trajectory forecasting using Bicycle-GAN and graph attention networks. In Proceedings of the Advances in Neural Information Processing Systems, Vancouver, BC, Canada, 8–14 December 2019; pp. 137–146.
16. Ivanovic, B.; Pavone, M. The Trajectron: Probabilistic multi-agent trajectory modeling with dynamic spatiotemporal graphs. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Republic of Korea, 27–28 October 2019; pp. 2375–2384.
17. Salzmann, T.; Ivanovic, B.; Chakravarty, P.; Pavone, M. Trajectron++: Dynamically-feasible trajectory forecasting with heterogeneous data. In Proceedings of the Computer Vision–ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; pp. 683–700.
18. Zhao, T.; Xu, Y.; Monfort, M.; Choi, W.; Baker, C.; Zhao, Y.; Wu, Y.N. Multi-agent tensor fusion for contextual trajectory prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 12126–12134.
19. Dai, J.; Qi, H.; Xiong, Y.; Li, Y.; Zhang, G.; Hu, H.; Wei, Y. Deformable convolutional networks. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 764–773.
20. Gao, J.; Sun, C.; Zhao, H.; Shen, Y.; Anguelov, D.; Li, C.; Schmid, C. VectorNet: Encoding HD maps and agent dynamics from vectorized representation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Seattle, WA, USA, 14–19 June 2020; pp. 11525–11533.
21. Gu, J.; Sun, C.; Zhao, H. DenseTNT: End-to-end trajectory prediction from dense goal sets. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 15303–15312.
22. Park, S.H.; Lee, G.; Seo, J.; Bhat, M.; Kang, M.; Francis, J.; Morency, L.P. Diverse and admissible trajectory forecasting through multimodal context understanding. In Proceedings of the European Conference on Computer Vision, Glasgow, UK, 23–28 August 2020; pp. 282–298.
23. Khandelwal, P.; Agarwal, C.; Thomas, S.; Scherer, S. If, why, and when can deep networks avoid the curse of dimensionality: A review. Int. J. Comput. Vis. 2020, 128, 1054–1086.
24. Zeng, W.; Luo, W.; Suo, S.; Sadat, A.; Yang, B.; Casas, S.; Urtasun, R. End-to-end interpretable neural motion planner. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8660–8669.
25. Goodfellow, I.J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Bengio, Y. Generative adversarial nets. Adv. Neural Inf. Process. Syst. 2014, 27, 2672–2680.
26. Kingma, D.P.; Welling, M. Auto-Encoding Variational Bayes. arXiv 2013, arXiv:1312.6114.
27. Zhu, X.; Hu, H.; Lin, S.; Dai, J. Deformable ConvNets v2: More deformable, better results. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 9308–9316.
28. Lin, T.Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; Belongie, S. Feature pyramid networks for object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 2117–2125.
29. Hinton, G.; Vinyals, O.; Dean, J. Distilling the knowledge in a neural network. arXiv 2015, arXiv:1503.02531.
30. Mirzadeh, S.I.; Farajtabar, M.; Li, A.; Levine, N.; Matsukawa, A.; Ghasemzadeh, H. Improved knowledge distillation via teacher assistant. Proc. AAAI Conf. Artif. Intell. 2020, 34, 5191–5198.
31. Huang, G.; Liu, Z.; Van Der Maaten, L.; Weinberger, K.Q. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 4700–4708.
32. Colyar, J.; Halkias, J. US Highway 101 Dataset; Tech. Rep. FHWA-HRT-07-030; Federal Highway Administration (FHWA): Washington, DC, USA, 2007.
33. Krajewski, R.; Bock, J.; Kloeker, L.; Eckstein, L. The highD Dataset: A Drone Dataset of Naturalistic Vehicle Trajectories on German Highways for Validation of Highly Automated Driving Systems. In Proceedings of the 2018 21st International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 2118–2125.
34. Brody, S.; Alon, U.; Yahav, E. How attentive are graph attention networks? arXiv 2021, arXiv:2105.14491.
35. Velickovic, P.; Cucurull, G.; Casanova, A.; Romero, A.; Lio, P.; Bengio, Y. Graph attention networks. arXiv 2017, arXiv:1710.10903.
36. Li, X.J.; Peng, X.; Shan, J.Q.; Zhu, X.B. GRU-SE: An Improved Gated Recurrent Unit with Squeeze-and-Excitation for Sequence Modeling. arXiv 2019, arXiv:1912.00718.
37. Cho, K.; Van Merriënboer, B.; Gulcehre, C.; Bahdanau, D.; Bougares, F.; Schwenk, H.; Bengio, Y. Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv 2014, arXiv:1406.1078.
38. Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
39. Ramachandran, P.; Zoph, B.; Le, Q.V. Searching for activation functions. arXiv 2017, arXiv:1710.05941.
40. Nair, V.; Hinton, G.E. Rectified linear units improve restricted Boltzmann machines. In Proceedings of the 27th International Conference on Machine Learning (ICML-10), Haifa, Israel, 21–24 June 2010; pp. 807–814.
41. Clevert, D.A.; Unterthiner, T.; Hochreiter, S. Fast and accurate deep network learning by exponential linear units (ELUs). arXiv 2015, arXiv:1511.07289.
42. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Guo, B. Swin Transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Virtual, 11–17 October 2021; pp. 10012–10022.
Figure 1. ADSP attention heatmaps, demonstrating adaptive focus under different velocity conditions: (a) low speed (5 m/s) with localized attention patterns, (b) medium speed (15 m/s) with balanced spatial distribution, and (c) high speed (25 m/s) with extended forward-looking attention. Red regions indicate high attention weights, while blue regions indicate low attention weights.
Figure 2. Enhanced workflow of the Adversarial Knowledge Distillation Module (AKDM), showing detailed data flow arrows, feature extraction pathways, and discriminator architecture with improved component labeling and visual hierarchy.
Figure 3. Enhanced ADSAP architecture with improved visual clarity: (a) teacher model featuring surroundings-aware encoder, ADSP mechanism, and multimodal decoder with detailed data flow arrows; (b) lightweight student model with streamlined architecture; (c) AKDM facilitating knowledge transfer, with enhanced component labeling and dimensional annotations for key feature maps.
Figure 4. Comprehensive qualitative trajectory prediction results demonstrating ADSAP’s advantages and limitations: (a) lane change scenario, showing 23% error reduction with smooth trajectory prediction; (b) merging scenario, with an 18% improvement in interaction modeling; (c) emergency braking scenario, with 21% better performance in rapid deceleration prediction; (d) complex intersection scenario, showing current limitations in multi-agent coordination. Red, ground truth; blue, ADSAP prediction; green, baseline prediction; gray, alternative hypotheses.
Table 1. Comparison of ADSAP components with existing trajectory prediction methods.

| Method | Speed-Aware | Adaptive Pooling | Adversarial KD | Multi-Scale | Transformer |
|---|---|---|---|---|---|
| Social-GAN [14] | × | × | × | × | × |
| Trajectron++ [17] | × | × | × | ✓ | × |
| MATF-GAN [18] | × | × | × | ✓ | × |
| iNATran | × | × | × | ✓ | ✓ |
| Deformable Conv [19] | × | ✓ | × | × | × |
| ADSAP (Ours) | ✓ | ✓ | ✓ | ✓ | ✓ |
Table 2. Comparison of AKDM with traditional knowledge distillation methods.

| Method | ADE (m) | FDE (m) | Inference Time (ms) | p-Value |
|---|---|---|---|---|
| Teacher Model | 1.65 ± 0.07 | 3.25 ± 0.14 | 47 | - |
| Hinton KD [29] | 1.89 ± 0.08 | 3.58 ± 0.16 | 18 | - |
| AKDM (Ours) | 1.73 ± 0.07 | 3.39 ± 0.15 | 15 | <0.001 |
| Improvement | 8.5% | 7.2% | 16.7% | |
Table 3. Evaluation results for ADSAP and other state-of-the-art baselines on the NGSIM dataset over different prediction horizons with statistical significance testing.

| Model | 1 s | 2 s | 3 s | 4 s | 5 s | AVG | p-Value |
|---|---|---|---|---|---|---|---|
| S-GAN [14] | 0.57 ± 0.03 | 1.32 ± 0.08 | 2.22 ± 0.12 | 3.26 ± 0.18 | 4.40 ± 0.25 | 2.35 ± 0.13 | <0.001 |
| CS-LSTM [13] | 0.61 ± 0.04 | 1.27 ± 0.07 | 2.09 ± 0.11 | 3.10 ± 0.17 | 4.37 ± 0.24 | 2.29 ± 0.13 | <0.001 |
| MATF-GAN [18] | 0.66 ± 0.04 | 1.34 ± 0.08 | 2.08 ± 0.11 | 2.97 ± 0.16 | 4.13 ± 0.23 | 2.22 ± 0.12 | <0.001 |
| IMM-KF [11] | 0.58 ± 0.03 | 1.36 ± 0.08 | 2.28 ± 0.13 | 3.37 ± 0.19 | 4.55 ± 0.26 | 2.43 ± 0.14 | <0.001 |
| MFP [5] | 0.54 ± 0.03 | 1.16 ± 0.07 | 1.89 ± 0.10 | 2.75 ± 0.15 | 3.78 ± 0.21 | 2.02 ± 0.11 | <0.001 |
| DRBP [4] | 1.18 ± 0.07 | 2.83 ± 0.16 | 4.22 ± 0.24 | 5.82 ± 0.33 | - | 3.51 ± 0.20 | <0.001 |
| WSiP [8] | 0.56 ± 0.03 | 1.23 ± 0.07 | 2.05 ± 0.11 | 3.08 ± 0.17 | 4.34 ± 0.24 | 2.25 ± 0.12 | <0.001 |
| CF-LSTM [7] | 0.55 ± 0.03 | 1.10 ± 0.06 | 1.78 ± 0.10 | 2.73 ± 0.15 | 3.82 ± 0.21 | 1.99 ± 0.11 | <0.001 |
| MHA-LSTM [22] | 0.41 ± 0.02 | 1.01 ± 0.06 | 1.74 ± 0.09 | 2.67 ± 0.15 | 3.83 ± 0.21 | 1.91 ± 0.11 | <0.001 |
| HMNet [23] | 0.50 ± 0.03 | 1.13 ± 0.06 | 1.89 ± 0.10 | 2.85 ± 0.16 | 4.04 ± 0.22 | 2.08 ± 0.11 | <0.001 |
| TS-GAN [24] | 0.60 ± 0.03 | 1.24 ± 0.07 | 1.95 ± 0.11 | 2.78 ± 0.15 | 3.72 ± 0.20 | 2.06 ± 0.11 | <0.001 |
| STDAN [10] | 0.39 ± 0.02 | 0.96 ± 0.05 | 1.61 ± 0.09 | 2.56 ± 0.14 | 3.67 ± 0.20 | 1.84 ± 0.10 | <0.001 |
| iNATran (M) [16] | 0.41 ± 0.02 | 1.00 ± 0.06 | 1.70 ± 0.09 | 2.57 ± 0.14 | 3.66 ± 0.20 | 1.87 ± 0.10 | <0.001 |
| iNATran [17] | 0.39 ± 0.02 | 0.96 ± 0.05 | 1.61 ± 0.09 | 2.42 ± 0.13 | 3.43 ± 0.19 | 1.76 ± 0.10 | - |
| DACR-AMTP [20] | 0.57 ± 0.03 | 1.07 ± 0.06 | 1.68 ± 0.09 | 2.53 ± 0.14 | 3.40 ± 0.19 | 1.85 ± 0.10 | <0.001 |
| FHIF [21] | 0.40 ± 0.02 | 0.98 ± 0.05 | 1.66 ± 0.09 | 2.52 ± 0.14 | 3.63 ± 0.20 | 1.84 ± 0.10 | <0.001 |
| ADSAP (s) | 0.37 ± 0.02 | 0.92 ± 0.05 | 1.58 ± 0.09 | 2.39 ± 0.13 | 3.39 ± 0.19 | 1.73 ± 0.10 | <0.001 |
| ADSAP | 0.34 ± 0.02 | 0.88 ± 0.05 | 1.50 ± 0.08 | 2.30 ± 0.13 | 3.25 ± 0.18 | 1.65 ± 0.09 | - |
Table 4. Performance comparison across multiple datasets with comprehensive statistical analysis.

| Method | NGSIM ADE | NGSIM FDE | INTERACTION ADE | INTERACTION FDE | highD ADE | highD FDE | Avg p-Value |
|---|---|---|---|---|---|---|---|
| S-GAN | 2.35 ± 0.12 | 4.40 ± 0.22 | 2.67 ± 0.15 | 5.12 ± 0.28 | 2.28 ± 0.11 | 4.22 ± 0.19 | <0.001 |
| MATF-GAN | 2.22 ± 0.11 | 4.13 ± 0.20 | 2.45 ± 0.13 | 4.89 ± 0.25 | 2.15 ± 0.10 | 4.01 ± 0.18 | <0.001 |
| iNATran | 1.76 ± 0.08 | 3.43 ± 0.16 | 2.08 ± 0.11 | 4.15 ± 0.21 | 1.82 ± 0.09 | 3.65 ± 0.17 | - |
| ADSAP | 1.65 ± 0.07 | 3.25 ± 0.14 | 1.89 ± 0.09 | 3.78 ± 0.18 | 1.71 ± 0.08 | 3.42 ± 0.15 | <0.001 |
| Improvement | 6.3% | 5.2% | 9.1% | 8.9% | 6.0% | 6.3% | |
Table 5. Performance analysis across weather conditions and traffic densities with statistical validation.

| Condition | iNATran ADE (m) | ADSAP ADE (m) | iNATran FDE (m) | ADSAP FDE (m) | p-Value |
|---|---|---|---|---|---|
| Weather Conditions | | | | | |
| Clear Weather | 1.76 ± 0.08 | 1.65 ± 0.07 | 3.43 ± 0.16 | 3.25 ± 0.14 | <0.001 |
| Rainy Weather | 1.89 ± 0.10 | 1.78 ± 0.09 | 3.67 ± 0.18 | 3.51 ± 0.16 | <0.001 |
| Foggy Weather | 2.05 ± 0.12 | 1.91 ± 0.10 | 3.98 ± 0.21 | 3.74 ± 0.19 | <0.001 |
| Traffic Density (veh/m) | | | | | |
| 0.05–0.25 (Sparse) | 1.68 ± 0.09 | 1.58 ± 0.08 | 3.28 ± 0.17 | 3.12 ± 0.15 | <0.001 |
| 0.25–0.55 (Medium) | 1.76 ± 0.08 | 1.65 ± 0.07 | 3.43 ± 0.16 | 3.25 ± 0.14 | <0.001 |
| 0.55–0.85 (Dense) | 1.89 ± 0.11 | 1.75 ± 0.09 | 3.71 ± 0.19 | 3.48 ± 0.17 | <0.001 |
Table 6. Comprehensive ablation study results with statistical significance analysis.

| Model Variant | ADE (m) | FDE (m) | p-Value | 95% CI | Degradation |
|---|---|---|---|---|---|
| ADSAP w/o ADSP | 1.79 ± 0.09 | 3.58 ± 0.17 | <0.001 | [1.72, 1.86] | 8.5% |
| ADSAP w/o MSFA | 1.74 ± 0.08 | 3.42 ± 0.15 | <0.01 | [1.67, 1.81] | 5.4% |
| ADSAP w/o AKDM | 1.76 ± 0.08 | 3.48 ± 0.16 | <0.001 | [1.69, 1.83] | 6.7% |
| ADSAP w/o TM | 1.85 ± 0.09 | 3.69 ± 0.18 | <0.001 | [1.78, 1.92] | 12.1% |
| ADSAP w/o MPM | 1.72 ± 0.08 | 3.39 ± 0.15 | <0.05 | [1.65, 1.79] | 4.2% |
| ADSAP w/o MTL | 1.70 ± 0.08 | 3.35 ± 0.15 | <0.05 | [1.63, 1.77] | 3.0% |
| vs. Deformable Conv | 1.85 ± 0.09 | 3.69 ± 0.18 | <0.001 | [1.78, 1.92] | 12.1% |
| ADSAP (Full) | 1.65 ± 0.07 | 3.25 ± 0.14 | - | [1.58, 1.72] | - |
Table 7. Detailed computational complexity analysis and runtime performance comparison.

| Model | Parameters | Memory (MB) | GPU (ms) | CPU (ms) | Power (W) | Speedup |
|---|---|---|---|---|---|---|
| Teacher Model | 8.2 M | 245 | 47 | 156 | 95 | 1.0× |
| Social-GAN | 3.8 M | 142 | 38 ± 2.1 | 125 | 68 | 1.2× |
| MATF-GAN | 5.2 M | 189 | 45 ± 2.8 | 148 | 82 | 1.0× |
| iNATran | 4.1 M | 156 | 32 ± 1.9 | 112 | 72 | 1.5× |
| ADSAP (Student) | 2.3 M | 89 | 15 ± 1.2 | 45 | 35 | 3.1× |
| Reduction | 72% | 64% | 68% | 71% | 63% | |