Article

Disentangling Interaction and Intention for Long-Tail Pedestrian Trajectory Prediction

1 School of Computer Science, Wuhan University, Wuhan 430072, China
2 National Engineering Research Center for Multimedia Software, Institute of Artificial Intelligence, Hubei Key Laboratory of Multimedia and Network Communication Engineering, Wuhan University, Wuhan 430072, China
* Author to whom correspondence should be addressed.
Computers 2026, 15(3), 186; https://doi.org/10.3390/computers15030186
Submission received: 23 February 2026 / Revised: 8 March 2026 / Accepted: 10 March 2026 / Published: 12 March 2026
(This article belongs to the Section AI-Driven Innovations)

Abstract

Pedestrian trajectory prediction remains challenging, particularly in long-tail scenarios where goal distributions are sparse and inter-agent behaviors are uncertain. In this work, we propose to disentangle the trajectory prediction task into two complementary components: interaction modeling and intention modeling. For interaction modeling, we introduce an adaptive meta-strategy that proactively extracts latent, rare-yet-critical interaction patterns often overlooked by conventional trajectory-only approaches. For intention modeling, we propose Continuous Waypoint Slot-Driven Prototypical Contrastive Learning (PCL), which adapts prototype learning to the multi-modal setting in which conventional PCL fails to capture diverse, continuous goal distributions. To capitalize on the complementary strengths of both components, we design a unified frequency-based fusion module that seamlessly integrates interaction and intention modeling, yielding improved overall prediction accuracy. Notably, our method is model-agnostic and can be seamlessly incorporated into a wide range of existing prediction frameworks. Extensive experiments on several datasets demonstrate that our approach not only achieves consistent performance gains in standard settings but also significantly alleviates degradation on hard, long-tail trajectory samples.

1. Introduction

Trajectory prediction, essential for intelligent systems [1,2,3], has evolved from kinematic models [4,5,6] and traditional ML [7,8,9] to data-driven deep learning approaches [9,10,11,12,13,14] that better capture complex interactions. A recent study [15] employed contrastive learning to mitigate accuracy degradation in rare trajectories.
Although these studies [15,16] concentrate on the hard-case problem, they rarely analyze the two fundamental sources of long-tail behaviors: abnormal social interactions and rare goal intentions. Consequently, by applying a monolithic autoencoder to inherently continuous trajectories without disentangling these underlying causes, their approaches suffer from limited reconstruction quality [17] and produce ill-suited discrete representations. Furthermore, their evaluation, often restricted to a single algorithm, leaves generalization to other representative methods unclear. As a result, they struggle to capture rare events while maintaining accuracy on common cases. Such rare samples often stem from either abnormal inter-agent interactions or uncommon goals, which require distinct modeling strategies. To address this, we propose explicitly decoupling interaction and intention into modular components, allowing independent adaptation and improved generalization across both frequent and rare scenarios.
In modeling social interactions, we address two key limitations: the lack of explicit, interpretable uncertainty representation beyond Gaussian noise [18] and the failure to capture rare-but-critical abnormal behaviors (e.g., collision avoidance). To overcome these, a richer representation using distance (describing interaction intensity), speed (describing motion intent), and angle (describing social structure) is needed, providing a more physically meaningful and dynamic description than a static snapshot of relative positions. As shown in Figure 1a, pedestrians exhibiting extremely high speeds or remaining stationary can indicate abnormal interactions to which an ego agent should pay more attention. In intention modeling, although many studies generate goal candidates to assist intention modeling, they [15,16,19] do not account for the continuity and long-tail characteristics of continuous trajectory goals, such as those in Figure 1b, resulting in limited interpretability and performance. In this work, we propose an approach that models interactions and intentions jointly.
Meteorologists predict future weather patterns by analyzing satellite imagery that captures the evolving formations of anomalous cloud masses rather than examining the distribution of individual droplets [20]. Inspired by this high-level perspective, our work focuses on extracting abnormal social interactions and intentions by identifying structured patterns rather than modeling each in isolation. To address the challenges of modeling pedestrian intention and interaction in complex scenarios, we propose the following approach: (1) We first decouple social interaction from intention. (2) Social interactions are mapped to a Distance–Velocity–Angle (DVA) space modeled as multivariate normal distributions; a Gaussian Mixture Model (GMM) classifies interactions, enabling offline generation of an abnormal-interaction database for scenario-specific aggregation. (3) An enhanced PCL framework incorporates a GMM-based module that generates pseudo-labels of rare intentions to supervise motion prediction while maintaining continuity, eliminating ill-suited discrete representations and the need to train a separate autoencoder. (4) To ensure robust performance on both challenging cases and the entire dataset, we integrate (2) and (3) into a frequency-sensitive decoder that treats the rare intentions predicted by (3) as goals driving the generation of complete trajectories. Our method demonstrates consistent improvements over several recent representative trajectory prediction models on multiple benchmarks.
The contributions of our work are summarized as follows: (1) a framework of decoupled interactions and intentions for motion forecasting; (2) a DVA space that models interaction uncertainty, coupled with offline GMM-based preprocessing for efficient extraction and aggregation of abnormal interactions; (3) an improved prototypical contrastive learning method for rare intentions under continuous goal labels; (4) a frequency-sensitive decoder that combines abnormal interactions and intentions and can seamlessly connect to existing methods. To our knowledge, this is the first work to study the uncertainty and long-tail nature of these two key factors (interactions and intentions) jointly. Extensive experiments and in-depth analysis confirm that our method consistently outperforms several recent representative methods.

2. Related Work

Social Interaction Modeling. Social pooling methods [21,22,23] propagate neighboring pedestrians' temporal features to the target agents. NADP [24] is a decoupled pedestrian trajectory prediction network that uses a near-aware attention module to extract core spatiotemporal features for prediction. While early attention-based approaches [11,23,25] employ factorized structures with $O(N^2)$ complexity to model social interactions, recent methods like T-MLSTG [26] (GNN-based), PMITra [27] (GNN-based), QCNet [28] (GNN-based), IA-STDGNN [29] (DGNN-based), and FJMP [30] (directed graphs) exhibit strong structural biases. SocialCircle [31] innovatively adopts angle-space aggregation, inspired by marine echolocation, but its uniform averaging of meta-components in angle space may overlook critical abnormal interactions along other feature dimensions, which can be a key focus for motion prediction.
Intention Capturing in Trajectory Prediction. Agents have uncertain intentions (e.g., a person standing at an intersection can either go straight or turn left, as long as they behave in accordance with traffic rules). Most present methods model the future trajectory of the ego agent as an $M$-component mixture distribution, where $M$ denotes the number of modalities. To model such uncertainty, PGP [18] decomposes it into the lateral variability of anchor-based map candidates and longitudinal variability, which can be regarded as random noise from a normal distribution. MELON [32] decouples trajectory decoding into specialized modules with adaptive spatiotemporal uncertainty quantification and a streaming prediction scheme, achieving state-of-the-art performance on complex urban traffic datasets. PPT [33] uses a progressive learning architecture, modeling short-term and long-term goals in two stages. Current generative approaches [13,25,34,35,36,37,38] suffer from unreliable uncertainty modeling due to their uninterpretable noise sampling, highlighting the need for robust heuristic rules.
Long-Tail Distribution in Trajectory Prediction. Manual class balancing fails as the number of categories grows, inducing long-tail distributions. Previous studies [39,40] have addressed this imbalance for categorical targets. Prototypical contrastive learning (PCL) [41,42] addresses long-tailed data distributions by learning underlying features through instance discrimination, as demonstrated in previous work [41,43,44]. Pedestrian trajectories also exhibit long-tail distributions (e.g., turns vs. straight paths) owing to inertial motion in unchanged scenes. FEND [15] proposes a future-enhanced contrastive learning framework and a hypernetwork to recognize these long-tailed patterns. TrACT [16] incorporates richer information on training dynamics into a prototypical contrastive learning framework. Hi-SCL [19] fights long-tailed trajectory prediction with hierarchical wave-semantic contrastive learning. Unlike prior methods [15,16,19] that uniformly amplify minority patterns and distort the majority feature space, our approach selectively extracts long-tail signals as a plugin module, preserving the core representations and accuracy on normal data. Because pedestrian trajectory prediction is a continuous multi-modal regression problem, applying traditional prototype learning methods directly is infeasible. Prior methods [15,16,19] often ignore the continuity imbalance of trajectory intention distributions, while applying a direct binning method like [45] to the trajectory prediction task generates memory-intensive $O(N^2)$ 2D grids, making precise calibration of the bin size critical as well. Moreover, grid-based discretization of 2D coordinates fails to preserve the underlying data structure in clustering tasks, as rigid spatial partitioning disregards intrinsic density variations.

3. Method

3.1. Problem Formulation

Denote the past trajectory of ego pedestrian $i$ over $t_h$ timesteps as $O^i = (p_1^i, p_2^i, \ldots, p_{t_h}^i)$, where $p_t^i = (x_t^i, y_t^i)$ is a 2D position. We aim to forecast the future trajectory $F^i = (p_{t_h+1}^i, p_{t_h+2}^i, \ldots, p_{t_h+t_f}^i)$ based on $O^i$ and the past trajectories of all $N_a$ neighbors $O^{/i} = \{O^j \mid 1 \le j \le N_a\}$, where $j \in \mathrm{neighbor}(i)$; 'neighbors' refers to non-ego agents (e.g., optionally filtered by distance). The motion forecasting task is to find an optimal model $\theta^* = \arg\max_\theta P(F^i \mid O^i, O^{/i})$. In real-world scenarios, pedestrians usually decide their trajectories in a three-step manner that can be decomposed into a realistic cognitive-behavioral process: risk perception → goal formation → motion execution. Previous studies [46,47,48] disentangle the latent space and construct a Bayesian network in deep learning tasks. In our work, we disentangle the latent variations into a social interaction part $z_{social}$ ($z_{soc}$) and an individual intention part $z_{intention}$ ($z_{int}$), and factorize $P(F^i \mid O^i, O^{/i})$ via Equation (1):
$$P(F^i \mid O^i, O^{/i}) = \int_{z_{soc}} \int_{z_{int}} P(z_{soc} \mid O^i, O^{/i}) \cdot P(z_{int} \mid z_{soc}) \cdot P(F^i \mid z_{int}, z_{soc}) \, dz_{int} \, dz_{soc}. \tag{1}$$
Equation (1) illustrates the decision-making process: a pedestrian first observes the surrounding environment ($z_{soc}$), which subsequently informs and refines their intended goal ($z_{int}$). The future trajectory $F^i$ is then generated from the combination of this contextual information and the refined goal. Figure 2a–c shows our overall framework, corresponding to the three factors in Equation (1). The key symbol definitions of the Methods section are given in Table 1.

3.2. Abnormal Social Interaction Modeling

We focus on modeling the social interaction part $z_{soc}$, which is used to simulate risk perception.
DVA Multivariate Gaussian Space. Previous studies [31,49] treat the social interactions of neighbors with ego pedestrian $i$ as social meta-components $R_{meta}^i = \{r_{meta}^{ij} \mid 1 \le j \le N_a\}$. In this work, we construct the meta-components as $r_{meta}^{ij} = \{r_{dis}^{ij}, r_{vel}^{j}, r_{\theta}^{ij}\}$:
  • Relative Distance $r_{dis}^{ij}$. Neighbor agents exert different influences on the target agent depending on their current distance from the ego agent. Formally, for any neighbor $j$ of ego pedestrian $i$,
$$r_{dis}^{ij} = \| p_{t_h}^i - p_{t_h}^j \|_2. \tag{2}$$
  • Absolute Velocity $r_{vel}^{j}$. Not only do high-velocity neighbors pose a greater threat, but static agents can too, especially when close by, as they may prompt the target pedestrian to take proactive avoidance measures. Formally, for any neighbor $j$ of ego pedestrian $i$,
$$r_{vel}^{j} = \| p_{t_h}^j - p_0^j \|_2. \tag{3}$$
  • Relative Angle $r_{\theta}^{ij}$. Pedestrians often use the relative orientation angle to judge their surroundings (e.g., whether there is a crowd to the north). Formally, for any neighbor $j$ of ego pedestrian $i$,
$$r_{\theta}^{ij} = \mathrm{atan2}\big(y_{t_h}^i - y_{t_h}^j,\; x_{t_h}^i - x_{t_h}^j\big). \tag{4}$$
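As a concrete illustration, the three meta-features above can be computed directly from raw coordinates. The sketch below is a minimal numpy rendition of Equations (2)–(4); the function name and array shapes are illustrative choices, not the authors' implementation.

```python
import numpy as np

def dva_meta_component(ego_hist, nbr_hist):
    """DVA meta-component r_meta^ij for one ego/neighbor pair (Eqs. 2-4 sketch).

    ego_hist, nbr_hist: (t_h, 2) arrays of past (x, y) positions."""
    p_i, p_j = ego_hist[-1], nbr_hist[-1]               # positions at t_h
    r_dis = np.linalg.norm(p_i - p_j)                   # relative distance
    r_vel = np.linalg.norm(nbr_hist[-1] - nbr_hist[0])  # displacement-based speed
    r_theta = np.arctan2(p_i[1] - p_j[1], p_i[0] - p_j[0])  # relative angle
    return np.array([r_dis, r_vel, r_theta])

# Example: a stationary neighbor two meters east of a stationary ego agent.
ego = np.zeros((8, 2))
nbr = np.tile([2.0, 0.0], (8, 1))
r = dva_meta_component(ego, nbr)
```

Here a stationary neighbor yields zero velocity and a relative angle of π (the neighbor lies due east of the ego, so the ego-minus-neighbor vector points west).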
We project $R_{meta}$ into a multivariate Gaussian space $\{\mu_{meta}, \Sigma_{meta}\}$ to reflect its uncertainty, where
$$\mu_{meta}^{ij} = [\mu_{dis}^{ij}, \mu_{vel}^{ij}, \mu_{\theta}^{ij}]; \quad
\Sigma_{meta} = \begin{pmatrix}
\sigma_{dis}^2 & \rho_{dis\_vel}\,\sigma_{dis}\sigma_{vel} & \rho_{dis\_\theta}\,\sigma_{dis}\sigma_{\theta} \\
\rho_{dis\_vel}\,\sigma_{dis}\sigma_{vel} & \sigma_{vel}^2 & \rho_{vel\_\theta}\,\sigma_{vel}\sigma_{\theta} \\
\rho_{dis\_\theta}\,\sigma_{dis}\sigma_{\theta} & \rho_{vel\_\theta}\,\sigma_{vel}\sigma_{\theta} & \sigma_{\theta}^2
\end{pmatrix}. \tag{5}$$
Abnormal Social Meta-Components. Abnormal social behaviors share certain commonalities at the scene level (e.g., in congested environments, agents may change direction to overtake, while high-velocity agents prioritize avoiding static neighbors on their forward paths). However, these interactions may be long-tail in DVA space and influenced by the combined effects of multiple independent variables (in the previous example, 'high velocity' corresponds to the absolute velocity $r_{vel}$ and 'on the forward path' to the relative angle $r_{\theta}$; the two factors interact to create the aforementioned long-tail scenarios). To handle these long-tail social behaviors, a GMM $\Theta_{abn} = \sum_{n=1}^{N_{abn}} \lambda_n \mathcal{N}(\mu_n, \Sigma_n)$ with $N_{abn}$ components is constructed to extract abnormal interactions. Inspired by [31], our abnormal interactions represent another spatial interactive context, so we can handle the abnormal interaction sequence along with the object trajectory (setting $N_{abn} = t_h$) for easy alignment and concatenation. In detail, for all social meta-components of the $N_{tr}$ pedestrians in the training set, $R_{tr} = \{R_{meta}^1, R_{meta}^2, \ldots, R_{meta}^i, \ldots, R_{meta}^{N_{tr}}\}$, we fit $\Theta_{abn}$ a first time, with the optimization goal $L$ of maximizing the log-likelihood in Equation (6) through the EM algorithm:
$$\log L(\Theta_{abn} \mid R_{tr}) = \sum_{i=1}^{N_{tr}} \log \Big( \sum_{n=1}^{N_{abn}} \lambda_n\, \mathcal{N}(R_{meta}^i; \{\mu_n, \Sigma_n\}) \Big), \tag{6}$$
where $\sum_{n=1}^{N_{abn}} \lambda_n = 1$ and $\lambda_n \ge 0$; $\mathcal{N}(\mu_n, \Sigma_n)$ is the probability density function of a single trivariate Gaussian component over relative distance, absolute velocity, and relative angle.
We compute the log-likelihood $\log P(R_{meta}^i \mid \Theta_{abn}) = \log \sum_{n=1}^{N_{abn}} \lambda_n \cdot \mathcal{N}(R_{meta}^i; \Theta_{abn})$ for each social meta-component in $R_{tr}$, then extract the abnormal interactions $R_{abn} = \{R_{meta}^i \mid \log P(R_{meta}^i \mid \Theta_{abn}) < \epsilon_{abn}\}$ by filtering out normal ones, where $\epsilon_{abn}$ is the threshold defining abnormal social interactions. After that, $\Theta_{abn}$ fits the $R_{abn}$ set a second time, initialized with the component centers from the first fitting stage, via Equation (7):
$$\log L(\Theta_{abn} \mid R_{abn}) = \sum_{i=1}^{|R_{abn}|} \log \Big( \sum_{n=1}^{N_{abn}} \lambda_n\, \mathcal{N}(R_{abn}^i; \{\mu_n, \Sigma_n\}) \Big), \tag{7}$$
where $\{\mu_n, \Sigma_n\}$ is named the $n$-th abnormal interaction base after the second fit. The workflow diagram in Figure 3 describes this two-stage GMM fitting procedure, which can be conducted offline.
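The two-stage fitting can be sketched with scikit-learn's GaussianMixture. The synthetic DVA triples, component count, and 5% threshold below are illustrative assumptions standing in for $R_{tr}$, $N_{abn} = t_h$, and $\epsilon_{abn}$; this is a sketch, not the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Synthetic stand-in for the training meta-components R_tr (distance, velocity, angle).
normal = rng.normal([2.0, 1.2, 0.0], 0.3, size=(950, 3))
rare = rng.normal([0.3, 4.0, 2.5], 0.2, size=(50, 3))
R_tr = np.vstack([normal, rare])

# Stage 1: fit the full training set (8 components, matching N_abn = t_h = 8).
gmm1 = GaussianMixture(n_components=8, random_state=0).fit(R_tr)
scores = gmm1.score_samples(R_tr)              # per-sample log-likelihood

# Filter the low-likelihood tail as abnormal interactions R_abn (threshold eps_abn).
eps_abn = np.quantile(scores, 0.05)
R_abn = R_tr[scores < eps_abn]

# Stage 2: refit on R_abn, initialized with the stage-1 component centers.
gmm2 = GaussianMixture(n_components=8, means_init=gmm1.means_,
                       random_state=0).fit(R_abn)
```

The `means_init` argument mirrors the paper's choice of reusing first-stage centers as the initialization of the second fit.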
During online training or inference, $I_{abn}$ is the indicator function judging whether pedestrian $j$ is an abnormal neighbor to which ego pedestrian $i$ should pay attention, where $\epsilon_{abn2}$ is the filtering threshold for abnormal social interactions during training and evaluation:
$$I_{abn}^{ij} = \begin{cases} 0 & \log P(R_{meta}^{ij} \mid \Theta_{abn}) \ge \epsilon_{abn2}; \\ 1 & \text{otherwise}. \end{cases} \tag{8}$$
Denote $A^i = \{R_{meta}^{ij} \mid I_{abn}^{ij} = 1,\; j \in \mathrm{neighbor}(i)\}$ as the abnormal neighbor set of $i$. For an abnormal agent $j$, $R_{meta}^{ij}$ is decomposed and projected onto the abnormal social component bases generated during training via Equation (9):
$$P(R_{meta}^{ij}) = \sum_{n=1}^{N_{abn}} \lambda_n^{ij} \cdot \mathcal{N}(R_{meta}^{ij}; \{\mu_n, \Sigma_n\}), \tag{9}$$
where $\sum_{n=1}^{N_{abn}} \lambda_n^{ij} = 1$ and $\lambda_n^{ij} \ge 0$.
Under the assumption of mutual independence among abnormal interaction bases, we aggregate abnormal neighbors into $s^i$ according to their projection lengths $\lambda_n^{ij}$ onto these bases as in Equation (10), where $n$ indexes the $n$-th abnormal social interaction base. Note that we adopt the reparameterization trick [50] to generate the $n$-th component of $s^i$:
$$s_n^i \sim \mathcal{N}\Big( \frac{1}{|A^i|} \sum_j \lambda_n^{ij} \mu_n,\;\; \frac{1}{|A^i|^2} \Big( \sum_j \lambda_n^{ij} \Big)^2 \Sigma_n \Big). \tag{10}$$
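The reparameterized aggregation of Equation (10) might look as follows in PyTorch; tensor shapes, the Cholesky parameterization of $\Sigma_n$, and all names are illustrative assumptions rather than the authors' code.

```python
import torch

def aggregate_abnormal(lambdas, mus, chols):
    """Reparameterized aggregation over abnormal neighbors (Eq. 10 sketch).

    lambdas: (|A_i|, N_abn) projection weights per abnormal neighbor;
    mus:     (N_abn, 3) means of the abnormal interaction bases;
    chols:   (N_abn, 3, 3) Cholesky factors of the base covariances.
    Returns s_i of shape (N_abn, 3): one reparameterized sample per base."""
    w = lambdas.sum(dim=0) / lambdas.shape[0]       # (1/|A_i|) * sum_j lambda_n^ij
    eps = torch.randn(mus.shape[0], mus.shape[1])   # reparameterization noise
    noise = torch.einsum('nij,nj->ni', chols, eps)  # L_n @ eps ~ N(0, Sigma_n)
    return w.unsqueeze(-1) * (mus + noise)          # mean w*mu_n, covariance w^2*Sigma_n

torch.manual_seed(0)
lam = torch.softmax(torch.randn(4, 8), dim=1)  # 4 abnormal neighbors, N_abn = 8 bases
mu = torch.randn(8, 3)
L = torch.eye(3).expand(8, 3, 3)               # identity covariances for illustration
s_i = aggregate_abnormal(lam, mu, L)
```

Sampling noise through the Cholesky factor keeps the draw differentiable with respect to the base parameters, which is the point of the reparameterization trick.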
Our serialized abnormal social uncertainty feature $z_{abn\_soc}$ is defined in Equation (11), where $g_{embed}$ denotes an MLP layer with tanh activation. Note that if an ego agent has no abnormal interaction component, we pad its abnormal social feature with zeros:
$$z_{abn\_soc} = \begin{cases} g_{embed}(s_1, s_2, \ldots, s_n) & |A^i| > 0; \\ g_{embed}(0, 0, \ldots, 0) & \text{otherwise}. \end{cases} \tag{11}$$
The abnormal social interaction feature $z_{abn\_soc} \in \mathbb{R}^{d_s}$ is concatenated ($\|$) with the ego agent's past trajectory feature $f_{beh} \in \mathbb{R}^{d_i}$ produced by the backbone encoder. A temporal attention module $g_{fuse}$, combined with a stack of MLP modules, then forms our final social interaction feature $z_{soc}$:
$$z_{soc} = g_{fuse}(f_{beh} \,\|\, z_{abn\_soc}). \tag{12}$$

3.3. Rare Intention Modeling

In this part, we focus on modeling the intention feature $z_{int}$, which simulates goal formation, through a novel prototypical contrastive learning (PCL) method that addresses rare intentions. We initialize a learnable waypoint slot set $S_{int}$ whose length equals the number of modalities $M$. A multi-head attention layer $S_{attn}$ uses $S_{int}$ as query $Q$ and $z_{soc}$ as key $K$ and value $V$ to encode the PCL input $z_{int}$, forming $P(z_{int} \mid z_{soc})$ via Equation (13):
$$z_{int} = S_{attn}(Q = S_{int},\; K = z_{soc},\; V = z_{soc}). \tag{13}$$
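Equation (13) amounts to cross-attention with learnable slot queries. A hedged PyTorch sketch follows; the module name, feature width, slot count, and head count are our own assumptions.

```python
import torch
import torch.nn as nn

class IntentionSlots(nn.Module):
    """Learnable waypoint slots attending over z_soc (Eq. 13 sketch).

    M slots (one per modality) and feature width d are illustrative."""
    def __init__(self, M=20, d=64, heads=4):
        super().__init__()
        self.slots = nn.Parameter(torch.randn(M, d) * 0.02)       # S_int
        self.attn = nn.MultiheadAttention(d, heads, batch_first=True)

    def forward(self, z_soc):                      # z_soc: (B, t_h, d)
        q = self.slots.unsqueeze(0).expand(z_soc.shape[0], -1, -1)
        z_int, _ = self.attn(q, z_soc, z_soc)      # Q = S_int, K = V = z_soc
        return z_int                               # (B, M, d)

model = IntentionSlots()
out = model(torch.randn(2, 8, 64))
```

Each of the M output rows is one modality's intention embedding, independent of the others' queries.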
The number of intention bases is $N_{int}$. We adopt a two-stage GMM fitting paradigm in which the first stage captures the global goal distribution and the second stage focuses on long-tail intentions. In detail, we regard the last points of the future trajectories in the training set, $e = [e^1, e^2, \ldots, e^{N_{tr}}]$, as goal intentions and first fit them with a GMM $\Theta_{e1} = \{\mu^{e}, \Sigma^{e}\}$ of $N_{int}/2$ components. The goals with the lowest log-likelihood $\log P(e \mid \Theta_{e1})$, selected by the ratio $R_{int}$ and denoted $e_r$, are filtered out and defined as rare intentions. We then fit $e_r$ with another GMM $\Theta_{e2}$ of $N_{int}/2$ components. Combining $\Theta_{e1}$ and $\Theta_{e2}$ yields the larger ($N_{int}/2 + N_{int}/2 = N_{int}$)-component GMM $\Theta_{int}$ of Equation (14), with components indexed $n \in \{1, 2, \ldots, N_{int}\}$ and optimized via Equation (15) for later contrastive supervision. That is, the first $N_{int}/2$ Gaussian components of $\Theta_{int}$ (from $\Theta_{e1}$) are generated from all endpoints $e$, while the second $N_{int}/2$ components (from $\Theta_{e2}$) are generated from the 10% long-tail endpoints $e_r$:
$$\Theta_{int} = \{\Theta_{e1}, \Theta_{e2}\};\quad
\Theta_{e1} = \{(\mu_1^e, \Sigma_1^e), \ldots, (\mu_{N_{int}/2}^e, \Sigma_{N_{int}/2}^e)\};\quad
\Theta_{e2} = \{(\mu_{N_{int}/2+1}^{e_r}, \Sigma_{N_{int}/2+1}^{e_r}), \ldots, (\mu_{N_{int}}^{e_r}, \Sigma_{N_{int}}^{e_r})\}. \tag{14}$$
$$\log L(\Theta_{e1} \mid e) = \sum_{i=1}^{N_{tr}} \log \Big( \sum_{n=1}^{N_{int}/2} \lambda_n^{e}\, \mathcal{N}(e^i; (\mu_n^e, \Sigma_n^e)) \Big);\quad
\log L(\Theta_{e2} \mid e_r) = \sum_{i=1}^{N_{tr}/10} \log \Big( \sum_{n=N_{int}/2+1}^{N_{int}} \lambda_n^{e_r}\, \mathcal{N}(e_r^i; (\mu_n^{e_r}, \Sigma_n^{e_r})) \Big). \tag{15}$$
For the intention-clustered label $P_{int}$ of sample $i$, $\Theta_{int}$ is used to find the intention Gaussian component $P_0$ with maximum posterior probability via Equation (16):
$$P_{int} = \arg\max_{P_0} \log P(e^i \mid \Theta_{int}^{P_0}) = \arg\max_{P_0} \mathcal{N}(e^i; \Theta_{int}^{P_0}), \quad P_0 \in [1, N_{int}]. \tag{16}$$
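The two-stage endpoint GMM and the pseudo-labeling rule of Equation (16) can be sketched as follows; the synthetic endpoints, component counts, and the 10% quantile below are illustrative stand-ins, not the paper's data or hyperparameters.

```python
import numpy as np
from scipy.stats import multivariate_normal
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
endpoints = np.vstack([rng.normal([0.0, 5.0], 0.5, size=(900, 2)),   # common goals
                       rng.normal([6.0, 0.0], 0.5, size=(100, 2))])  # rarer goals

# Stage 1: global goal distribution (N_int/2 = 4 components here).
g1 = GaussianMixture(n_components=4, random_state=0).fit(endpoints)
ll = g1.score_samples(endpoints)
e_r = endpoints[ll <= np.quantile(ll, 0.10)]     # 10% long-tail endpoints

# Stage 2: rare-intention components (another N_int/2).
g2 = GaussianMixture(n_components=4, random_state=0).fit(e_r)

# Theta_int = {Theta_e1, Theta_e2}: N_int = 8 components in total.
means = np.vstack([g1.means_, g2.means_])
covs = np.vstack([g1.covariances_, g2.covariances_])

def pseudo_label(e):
    """Index of the component with the highest log density (Eq. 16 sketch)."""
    scores = [multivariate_normal(m, c).logpdf(e) for m, c in zip(means, covs)]
    return int(np.argmax(scores))

label = pseudo_label(np.array([6.0, 0.0]))
```

A point drawn from the rare cluster should be assigned to a component whose mean sits near that cluster, which is what the pseudo-label supervision relies on.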
Having obtained the clustered labels $P_{int}$, we leverage them in our contrastive learning framework to supervise the learning of the intention features $z_{int}$ defined in Equation (13). Notably, a final MLP $S_{proj}$ prevents conflicts between the motion forecasting loss and the contrastive learning loss: $z_{int}$ is passed through $S_{proj}$ as the last layer for prototypical contrastive learning but not for intention prediction. We denote the PCL encoder module list as $f_\theta = [S_{int}, S_{attn}, S_{proj}]$.
Traditional unsupervised contrastive learning methods like MoCo [39] take feature inputs without gradients. To align with them, we pretrain the model with only the Winner-Takes-All (WTA) ADE loss for trajectory points, $L_{pred}$, given in Equation (17), where $\hat{p}_t^{(M)}$ is the $M$-th-modality predicted trajectory point at timestep $t$ output by our frequency-based decoder $D$ (detailed in Section 3.4) and $p_t$ is the ground truth:
$$\hat{p}_t = D(z_{int}, z_{soc});\quad L_{pred}(\hat{p}_t, p_t) = \min_M \sum_{t=t_h+1}^{t_h+t_f} \big\| \hat{p}_t^{(M)} - p_t \big\|_2. \tag{17}$$
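A minimal PyTorch sketch of the WTA ADE loss in Equation (17); the shapes and function name are illustrative.

```python
import torch

def wta_pred_loss(pred, gt):
    """Winner-Takes-All ADE loss over M modes (Eq. 17 sketch).

    pred: (B, M, t_f, 2) multi-modal predictions; gt: (B, t_f, 2) ground truth.
    Returns the batch-mean best-mode error and the winning mode index M*."""
    err = torch.linalg.norm(pred - gt.unsqueeze(1), dim=-1)  # (B, M, t_f) per-step L2
    ade = err.sum(dim=-1)                                    # sum over future steps
    best, m_star = ade.min(dim=1)                            # winner per sample
    return best.mean(), m_star

pred = torch.zeros(2, 3, 12, 2)
pred[:, 1] += 1.0                    # mode 1 is offset from the ground truth
gt = torch.zeros(2, 12, 2)
loss, m_star = wta_pred_loss(pred, gt)
```

Because only the winning mode enters the loss, gradients flow through a single slot per sample, which is what preserves diversity across the remaining slots.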
The WTA strategy chooses the slot with the best ADE among the $M$ slots; gradients backpropagate exclusively through it to preserve diversity. We then freeze (∗) all parameters of the pretrained backbone prediction encoder $B_{enc}^*$ shown in Figure 2a. This eliminates the need to design a dual momentum encoder $f_{\theta'}$ duplicating the parameters of the preceding abnormal social interaction module when training the PCL, because the PCL input $z_{soc}$ (which is in fact the output of $B_{enc}^*$) receives no gradient.
Instead of applying the contrastive loss directly to the past trajectory context, we apply our prototypical contrastive learning to the multi-modal slots, using the WTA strategy in advance to maintain prediction diversity. Consequently, feature clustering and updating must be performed per iteration rather than per epoch, because the specific slot with the best ADE defined in Equation (17), which enters the loss computation, remains undetermined and differs across samples. Since the clustering operates solely on the x and y dimensions of the intention, the computation is highly efficient.
We now define our prototypical contrastive learning loss (PCL loss) $L_{ProtoNCE}$. As given in Equation (21), $L_{ProtoNCE}$ consists of an instance-wise term $L_{ins}$ and an instance-prototype term $L_{proto}$: the first pulls sample features within a class closer, while the second maximizes inter-cluster separation. Standard PCL [39,40] assigns a sample to a single discrete class label $P_{int} = [P_0]$, whereas our goal intentions follow a continuous distribution. For each sample $i$, we therefore use a finite set of discrete components $P_{int}^i = [P_0^i, P_1^i, \ldots, P_{K-1}^i]$ of length $K$ to mimic this continuity. As a result, our approach handles continuous intention distributions through $K$-nearest GMM component allocation in the PCL loss, effectively simulating distributional continuity with multiple discrete elements rather than a single component; we verify the effectiveness of this modeling in our experiments. In detail, given an endpoint intention $e^i$, we first find only the component $P_0$ with the highest posterior log-likelihood via Equation (16).
For efficient computation, we directly look up the other $K-1$ components $[P_1, \ldots, P_{K-1}]$ based on $P_0$. In detail, we calculate the pairwise Kullback–Leibler (KL) divergence $KL_{int} \in \mathbb{R}^{N_{int} \times N_{int}}$ between any two multivariate Gaussian components of $\Theta_{int}$. The indexes of the $K$-nearest Gaussian distributions $K_{id} \in \mathbb{R}^{N_{int} \times K}$ for each intention component are then computed to look up the KL divergences of the $K$-nearest neighbors (including itself) $KL_{id} \in \mathbb{R}^{N_{int} \times K}$ from $KL_{int}$, which are later used to construct continuous labels and compute our continuous contrastive loss.
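Both the pairwise table $KL_{int}$ and the $K$-nearest lookup $K_{id}$ admit a closed form for Gaussians. The sketch below uses the standard multivariate-Gaussian KL formula on toy 2D components; all values are illustrative.

```python
import numpy as np

def gauss_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL(N0 || N1) between multivariate Gaussians."""
    d = mu0.shape[0]
    inv1 = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(inv1 @ cov0) + diff @ inv1 @ diff - d
                  + np.log(np.linalg.det(cov1) / np.linalg.det(cov0)))

# Toy intention components; pairwise table KL_int, then K-nearest lookup K_id.
mus = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
covs = np.stack([np.eye(2)] * 3)
N = len(mus)
kl_int = np.array([[gauss_kl(mus[i], covs[i], mus[j], covs[j])
                    for j in range(N)] for i in range(N)])
K = 2
k_id = np.argsort(kl_int, axis=1)[:, :K]   # K nearest components (self included)
```

Each row of `k_id` starts with the component itself (KL of zero), followed by its closest neighbors in distribution space rather than raw coordinate space.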
Denoting $v_i$ as the intention embedding ($z_{int}$, defined in Equation (13)) of sample $i$, we construct positive feature pairs $(v_i, v_{i^+})$ and negative feature pairs $(v_i, v_j)$. Notably, to simulate the continuity of endpoints in the 2D $(x, y)$ space, $v_{i^+}$ adopts a hierarchical structure containing all samples belonging to the $K$-nearest components of $\Theta_{int}$; in detail, $v_{i^+}^k$ denotes an arbitrary sample belonging to the $k$-th nearest component of sample $i$. We look up $KL_{id}$ and denote the KL divergences between $P_{int}$ and $P_0$ of sample $i$ as $\{KL(P_0^i \mid P_0^i), KL(P_1^i \mid P_0^i), \ldots, KL(P_{K-1}^i \mid P_0^i)\}$. Equation (18) gives $L_{ins}$, where $v_j$ denotes an arbitrary sample in the same batch as $i$, $r$ denotes the batch size, $\sigma$ is a softmax function assigning weights based on the KL divergence between Gaussian components, and $\tau$ is the temperature coefficient:
$$L_{ins} = -\sum_{i=1}^{r} \sum_{k=1}^{K} \frac{1}{|N_i^k|} \sum_{i^+=1}^{|N_i^k|} \sigma\big(KL(P_k^i \mid P_0^i)\big) \cdot \log \frac{\exp(v_i \cdot v_{i^+}^k / \tau)}{\sum_{j=1}^{r} \exp(v_i \cdot v_j / \tau)}. \tag{18}$$
The prototypical features $C = [c_1, \ldots, c_n, \ldots, c_{N_{int}}]$ are updated per batch according to the samples' maximum-likelihood intention-clustered labels $P_0$ via Equation (19), where $\alpha$ is the momentum coefficient (applied when the batch contains any sample with that label) and $I(P_0^j = n)$ is the indicator function judging whether $P_0$ of sample $j$ belongs to the $n$-th intention Gaussian component:
$$c_n = \alpha \cdot c_n + (1 - \alpha) \cdot \frac{\sum_j I(P_0^j = n) \cdot v_j}{\sum_j I(P_0^j = n)}. \tag{19}$$
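The momentum update of Equation (19) can be sketched as follows; the batch contents and momentum value are illustrative.

```python
import torch

def update_prototypes(C, v, p0, alpha=0.9):
    """Momentum update of prototype features (Eq. 19 sketch).

    C: (N_int, d) prototypes; v: (B, d) intention embeddings of the batch;
    p0: (B,) maximum-likelihood cluster labels; alpha: momentum coefficient."""
    C = C.clone()
    for n in p0.unique():                 # only clusters present in the batch move
        mask = p0 == n
        C[n] = alpha * C[n] + (1 - alpha) * v[mask].mean(dim=0)
    return C

C = torch.zeros(4, 8)
v = torch.ones(6, 8)
p0 = torch.tensor([0, 0, 1, 1, 1, 3])
C_new = update_prototypes(C, v, p0)
```

Prototypes whose label never appears in the batch (cluster 2 here) are left untouched, matching the per-batch conditional update in the text.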
Equation (20) gives $L_{proto}$, where $c_i^k$ is the prototype of the cluster to which the $k$-th nearest GMM component of sample $i$ belongs, and $c_j$ is the prototype of an arbitrary cluster $j$. In summary, our approach refines intention modeling by specifically targeting challenging edge-case scenarios through the training procedure above. Algorithm 1 summarizes the whole rare-intention prototypical contrastive learning process.
$$L_{proto} = -\sum_{i=1}^{r} \sum_{k=1}^{K} \sigma\big(KL(P_k^i \mid P_0^i)\big) \cdot \log \frac{\exp(v_i \cdot c_i^k / \tau)}{\sum_{j=1}^{N_{int}} \exp(v_i \cdot c_j / \tau)}. \tag{20}$$
$$L_{ProtoNCE} = L_{ins} + L_{proto}. \tag{21}$$
Algorithm 1 Intention Prototypical Contrastive Learning
Input: KL divergences of the $K$-nearest intention GMM components $KL_{id}$, past trajectories $X$, predicted trajectories $\hat{p}_t$ (computed in advance), ground-truth future trajectories $p_t$, past timesteps $t_h$, future timesteps $t_f$, cluster centroid features $C$, momentum coefficient $\alpha$.
Parameters: intention GMM $\Theta_{int}$, frozen backbone encoder $B_{enc}^*$, PCL encoder $f_\theta$, momentum PCL encoder $f_{\theta'}$.
 1: Let $f_{\theta'} = f_\theta$.
 2: $C \leftarrow 0$.
 3: while not MaxEpoch do
 4:   for $x$ in DataLoader($X$) do
 5:     $e = p_{t_h + t_f}$. {Regard the last point of the future trajectory as the intention.}
 6:     Let $P = \mathrm{topK}_n \log P(e \mid \Theta_{int}^n)$. {Intention pseudo-labels $P_{int}$ as in Equation (16).}
 7:     $z_{soc} = B_{enc}^*(X)$.
 8:     $z_{int} = f_\theta(z_{soc})$, $z_{int}' = f_{\theta'}(z_{soc})$.
 9:     $M^* = \arg\min_M L_{pred}(\hat{p}_t, p_t)$. {Look up the min-ADE index $M^*$ in advance as in Equation (17).}
10:     Update prototype features $C$ according to Equation (19).
11:     Calculate $L_{ProtoNCE}$ based on $z_{int}[M^*]$, $z_{int}'[M^*]$, $C$ as in Equations (18), (20) and (21).
12:     $\theta = \mathrm{SGD}(\theta, L_{ProtoNCE})$.
13:     Momentum update $\theta'$ based on $\theta$.
14:   end for
15: end while

3.4. Frequency-Sensitive Decoder Combining Interactions and Intentions

In this part, we propose a novel frequency-based decoder $D$ that integrates the social interaction part $z_{soc}$ introduced in Section 3.2 and the intention part $z_{int}$ introduced in Section 3.3, corresponding to $P(F^i \mid z_{int}, z_{soc})$, and obtains the final motion $F^i = D(z_{int}, z_{soc})$ to execute. We have $z_{soc} \in \mathbb{R}^{t_h \times d}$ and $z_{int} \in \mathbb{R}^{M \times d}$, where $t_h$ is the number of past timesteps and $M$ is the number of modalities. As in Equation (22), we regress endpoints directly from $z_{int}$ with an FC-layer intention decoder $D_{int}$ to obtain goal endpoints $e \in \mathbb{R}^{M \times 2}$:
$$e = \sum_{t=t_h+1}^{t_h+t_f} vel_t = D_{int}(z_{int}). \tag{22}$$
The key is an endpoint-driven scheme: the remaining trajectory points are conditionally completed under the drive of the terminal location in the frequency domain through the Discrete Fourier Transform (DFT). By decomposing a signal into its constituent sinusoids, the DFT allows deep learning models to identify and leverage repetitive patterns and structures that are often obscured in the time domain [51]. To interpolate intermediate trajectory points, we use the gate weighting of Equation (23):
$$W = [W_s, W_i] = D_\sigma\Big( \underbrace{D_{sw}\big((z_{int} \,@\, z_{soc}) \,@\, z_{soc}\big)}_{\text{Social } s},\;\; \underbrace{D_{iw}(z_{int})}_{\text{Intention } i} \Big), \tag{23}$$
where $D_{sw}$ and $D_{iw}$ are MLPs for the social part $s$ and intention part $i$, respectively, and $D_\sigma$ is an MLP with a softmax activation that normalizes the weights between the two parts. The social and intention weights are gathered as $W = W_s \,\|\, W_i$ and used to generate the Alternating-Current Component (ACC) of the velocity in the frequency domain after passing through an FC layer $D_{ac}$. The ACC comprises all Fourier components except the first, the Direct-Current Component (DCC), and is generated via Equation (24):
$$ACC = D_{ac}(W_s \cdot s + W_i \cdot i). \tag{24}$$
Because the real part of the DCC of a Discrete Fourier Transform (DFT) equals the sum of all points in the temporal series [52], we concatenate the goal endpoints $e$ with a zero imaginary part to form the DCC, which can be regarded as the accumulation of the instantaneous velocities $vel_t$ over all future steps. Finally, we concatenate the DCC with the ACC and apply an inverse Discrete Fourier Transform (iDFT) layer to reconstruct future velocity profiles whose cumulative sum is exactly the DCC, i.e., the endpoint $e$, yielding the accurate predicted trajectory $F^i$ of Equation (25):
$$F^i = \mathrm{CumSum}\Big( \mathrm{iDFT}\big( \underbrace{e + 0j}_{DCC},\; ACC \big) \Big). \tag{25}$$
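The DC theorem that Equation (25) relies on is easy to verify numerically: fixing the (real) DC bin to the endpoint and inverting the half-spectrum yields velocities whose cumulative sum lands exactly on that endpoint. The AC values below are arbitrary stand-ins for the decoder's output.

```python
import numpy as np

t_f = 12
rng = np.random.default_rng(0)

# Arbitrary AC coefficients standing in for D_ac's output, one row per axis (x, y).
acc = rng.normal(size=(2, t_f // 2)) + 1j * rng.normal(size=(2, t_f // 2))
acc[:, -1] = acc[:, -1].real        # Nyquist bin must be real for an even-length irfft

endpoint = np.array([3.0, -1.5])    # goal e regressed by D_int (illustrative values)

# DCC = endpoint with zero imaginary part; half-spectrum = [DCC, ACC].
spectrum = np.concatenate([endpoint[:, None].astype(complex), acc], axis=1)
vel = np.fft.irfft(spectrum, n=t_f, axis=1)   # future velocity profile per axis
traj = np.cumsum(vel, axis=1)                 # Eq. (25): CumSum over timesteps
```

Whatever the AC content, `traj[:, -1]` equals `endpoint`, so the decoder's shape refinement can never drift away from the predicted goal.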

3.5. Loss Function

L P r o t o N C E could hardly bring more benefits to easy samples, so we adopt a gate θ to stop PCL loss on easy samples. In contrast to indicating a deterministic hardness of the samples [15], we determine hardness of the samples based on L p r e d , predicted in advance, which can be dynamically adjusted during the training process. Having denoted the WTA strategy’s advanced calculated L p r e d , we define our loss function L via Equation (26), where λ is an indicator value defined via Equation (27). θ is the threshold to filter out hard samples.
$\mathcal{L} = \mathcal{L}_{pred} + \lambda \cdot \mathcal{L}_{ProtoNCE}.$
$\lambda = \begin{cases} 1, & \mathcal{L}_{pred} > \theta \ \text{and not pretraining}; \\ 0, & \mathcal{L}_{pred} \le \theta \ \text{or pretraining}. \end{cases}$
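The gating rule in Equations (26) and (27) reduces to a few lines. The scalar losses below are placeholder values for illustration; in practice both terms are per-sample tensors:

```python
def gated_loss(l_pred, l_protonce, theta=0.6, pretraining=False):
    """Equations (26)/(27): add the PCL term only for hard samples
    (l_pred above the threshold) outside the pretraining phase."""
    lam = 1.0 if (l_pred > theta and not pretraining) else 0.0
    return l_pred + lam * l_protonce

# hard sample: PCL term active; easy sample or pretraining: l_pred only
```

Because hardness is read off $\mathcal{L}_{pred}$ at each step, a sample can move in and out of the "hard" set as training progresses, unlike a fixed offline hardness label.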

4. Experiment

4.1. Experimental Setup

Datasets. We use two pedestrian motion forecasting datasets, ETH [53]-UCY [54] and SDD [55], in this work. Recent studies on ETH-UCY primarily use cross-dataset validation, i.e., training on four scenarios and testing on the held-out one. We retain this setting in our approach.
  • ETH-UCY is a dataset of pedestrian walking scenes consisting of five sub-scenarios: eth, hotel, univ, zara1, and zara2. Trajectories are sampled at 0.4 s intervals, giving an observation length of $t_h$ = 3.2 s / 0.4 s = 8 steps and a prediction horizon of $t_f$ = 4.8 s / 0.4 s = 12 steps.
  • The Stanford Drone Dataset (SDD) is a drone dataset of human behaviors on campus. A total of 60 drone videos are used to extract 290,243 trajectories (8 observed steps and 12 future steps to predict), partitioned into 60% for training, 20% for validation, and 20% for testing.
As in previous studies [12,31,49,56], preprocessing layers transform the trajectory coordinates into scene-centric ones. 'Move' normalizes the trajectory points $p_t$ at timestep $t$ by the current absolute position of the ego agent $p^i_{t_h}$ via Equation (28), yielding moved points $p'_t = (x', y')$. 'Rotate' rotates the moved historical trajectory $p'_t$ by the current target agent's heading $\theta$ via Equation (29), yielding the final preprocessed points $p''_t = (x'', y'')$.
$p'_t = p_t - p^i_{t_h}.$
$p''_t = \begin{bmatrix} x'' \\ y'' \end{bmatrix} = \begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix} \begin{bmatrix} x' \\ y' \end{bmatrix}.$
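A dependency-free sketch of the 'Move' and 'Rotate' steps. The rotation sign convention here (rotating by $-\theta$ so the heading maps onto a canonical axis) is an assumption for illustration; individual backbones may differ:

```python
import numpy as np

def preprocess(traj, heading):
    """traj: (T, 2) absolute coordinates; heading: ego heading in radians.
    Move: translate so the last observed point becomes the origin.
    Rotate: rotate by -heading to canonicalize the facing direction."""
    moved = traj - traj[-1]                      # 'Move', Equation (28)
    c, s = np.cos(-heading), np.sin(-heading)
    R = np.array([[c, -s], [s, c]])              # 'Rotate', Equation (29)
    return moved @ R.T

traj = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
out = preprocess(traj, np.pi / 4)
# the last observed point sits at the origin, and the diagonal path
# is rotated onto the x-axis
```

After this normalization, every sample is expressed in the ego agent's local frame, which removes absolute-position and absolute-heading variance before encoding.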
Backbone prediction networks. Since our method can be regarded as a plugin, we briefly introduce the recent, well-performing backbone prediction networks that we use:
  • Multi-Style Network (MSN) [10] provides multi-style predictions with a series of style channels, each of which is bound to a unique behavior.
  • View Vertically ($V^2$-Net) [12] transforms agents' trajectories into the frequency domain to capture potential characteristics that cannot be extracted in the time domain.
  • E-$V^2$-Net [56] introduces the Haar Transform instead of the Fourier Transform and proposes a bilinear structure to model dimension interactions.
  • SocialCircle(Plus) (-SC/-SCP) [31,49] is an alternative structure that can be plugged into the existing SOTA methods mentioned above to improve their performance. Inspired by marine animals' echolocation, SocialCircle aggregates social interactions according to their relative directions. SocialCirclePlus additionally considers physical interactions alongside social interactions.
These approaches predict key points followed by linear speed interpolation [10,12,31,49,56], achieving competitive ADE/FDE on ETH-UCY and the SDD.
Evaluation Protocol and Implementation Details. To eliminate the influence of hyperparameters on the experimental results, we fix all the official optimal hyperparameters of the backbone prediction networks and reproduce their results before our approach is activated. The epoch with the best key-point ADE represents the result of a single trial. As in previous work, we adopt leave-one-out cross-dataset validation on ETH-UCY to verify whether our method generalizes well to new scenarios. Specifically, we choose four of the five ETH-UCY subsets (eth, hotel, univ, zara1, zara2) as the training set and the remaining one as the validation subset. Training and evaluation are carried out on an NVIDIA RTX 4090 GPU with 24 GB of VRAM. In our model, the 50 nearest neighbors of the ego agent are used to compute social interactions. The number of abnormal meta-components $n$ is set equal to the number of observation steps (eight in our study) for convenient concatenation. $\epsilon_{abn}$ during the offline stage is set to −2, and $\epsilon_{abn2}$ is set to 0. Note that we make no changes to the backbone structures other than the improvements described in this paper. One complete training run takes about 30 min on the univ dataset and about 3 h on the SDD.
Metrics. The pedestrian motion forecasting task measures the prediction accuracy of the M = 20 generated trajectories with the best average displacement error $minADE_{20}$ and the best final displacement error $minFDE_{20}$, computed via Equations (30) and (31), where the hat mark '^' denotes predicted trajectory points.
$minADE_{20} = \min_{M} \frac{1}{t_f} \sum_{t = t_h + 1}^{t_h + t_f} \left\| \hat{p}_t^{(M)} - p_t \right\|_2 .$
$minFDE_{20} = \min_{M} \left\| \hat{p}_{t_h + t_f}^{(M)} - p_{t_h + t_f} \right\|_2 .$
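Both metrics follow directly from Equations (30) and (31); a NumPy sketch over a batch of M candidate trajectories:

```python
import numpy as np

def min_ade_fde(preds, gt):
    """preds: (M, t_f, 2) candidate futures; gt: (t_f, 2) ground truth.
    Returns (minADE, minFDE) over the M candidates."""
    dists = np.linalg.norm(preds - gt, axis=-1)  # (M, t_f) per-step L2 errors
    min_ade = dists.mean(axis=1).min()           # best-of-M average error
    min_fde = dists[:, -1].min()                 # best-of-M final-step error
    return min_ade, min_fde

gt = np.zeros((12, 2))
preds = np.stack([np.full((12, 2), 1.0), np.zeros((12, 2))])  # M = 2
ade, fde = min_ade_fde(preds, gt)
# the exact second candidate drives both best-of-M metrics to zero
```

Note that the minimum is taken independently for ADE and FDE, so the two best-of-20 scores may come from different candidates.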
Training Details. Table 2 shows training details such as the preprocessing steps and hyperparameters for several backbone predictors and datasets. Our settings are the same as the original settings for backbone predictors without any change. Notably, the recent state-of-the-art methods shown in Table 2 adopt a strategy which only predicts key points and then utilizes linear speed interpolation to complete the whole trajectory. ‘K’ in Table 2 denotes the timesteps of key points.

4.2. Comparisons to State-of-the-Art Methods

We first design extensive experiments to verify the effectiveness of abnormal interaction modeling.
Cross-Validation Improvements with Abnormal Priors on ETH-UCY. As shown in Table 3, our learned abnormal social priors enhance performance on ETH-UCY. $V^2$-Net-SCP-abn outperforms PPT by 8% ADE and 2.5% FDE. Although MSN-SCP lags behind $V^2$-Net-SCP/E-$V^2$-Net-SCP, abnormal interaction modeling still improves it by 2.4% ADE and 4.0% FDE.
SDD. As shown in Table 4, $V^2$-Net-SCP-abn surpasses $V^2$-Net-SCP by 3.2% ADE and 3.4% FDE. Although the MSN models perform below $V^2$-Net, our abnormal interaction modeling still enhances MSN-SC by 2.3% ADE and 2.8% FDE. Notably, MSN-SC-abn matches the performance of MSN-SCP without physical input (e.g., RGB images). Even with simple backbones, our abnormal interaction plugin boosts a Transformer model by 10.7% ADE and 7.3% FDE.
Performance on Long-Tail Cases. Table 5 demonstrates the effectiveness of our rare intention modeling '-r' through cross-dataset validation on ETH-UCY. As shown in Table 6, while FEND [15] improves performance on the top 5% of cases at the cost of degraded accuracy on the 95% majority class, our E-$V^2$-Net-SCP-abn-r achieves comparable gains on challenging cases while effectively preserving majority-class accuracy, establishing a new SOTA (6.19/9.71 ADE/FDE) on the SDD benchmark as well. Despite omitting frequency-domain analysis and sophisticated architectures, our rare intention extraction approach '-r' attains performance (6.88/10.43) comparable to complex models when implemented on simple Transformer baselines. Further improvements on long-tailed samples are observed for the other backbone models.

4.3. Discussions and Ablation Studies

Discussion I: Threshold Analysis of Abnormal Interactions. Our ablation study in Table 7 reveals two key threshold-dependent patterns: (1) For offline abnormal interaction detection ($\epsilon_{abn}$), $V^2$-Net-SCP-abn achieves peak performance (1.1% ADE/1.8% FDE gain) at $\epsilon_{abn}$ = −2, with degradation beyond this threshold (0.8% ADE/0.7% FDE loss at −2.0) due to noisy abnormal interaction meta-components. (2) The optimal inference filtering threshold ($\epsilon_{abn2}$) for abnormal interaction extraction is architecture-dependent, requiring 7.1k samples for $V^2$-Net-SCP-abn versus 5.1k for E-$V^2$-Net-SCP-abn, while insufficient extraction consistently degrades model performance. The study establishes $\epsilon_{abn2}$ = −2 as the optimal choice for the SDD.
Discussion II: Settings of the Rare Intention Modeling. In Table 8, we analyze the effect of the number of intention GMM components $N_{int}$ and the hardness threshold $\theta$ for $\mathcal{L}_{ProtoNCE}$ on the univ dataset. For $V^2$-Net-SCP-abn, $N_{int}$ = 512 and $\theta$ = 0.6 is the best choice. When $\theta$ is too small, more easy samples disturb the feature generation of the long-tailed class; when $\theta$ is too large, fewer long-tailed samples are taken into account. Both factors affect the accuracy of the model. Beyond that, selecting a proper component number for the intention GMM is essential: intuitively, excessive clustering dilutes prototype representativeness, while insufficient clustering overlooks minority-class prototypes.
Discussion III: Selection of K-Nearest Intention GMM Components and the Rare Intention Ratio $R_{int}$. Next, we verify the effectiveness of the K-nearest intention GMM component selection in Table 9. On the SDD, discrete waypoint modeling (K = 1) greatly improves long-tail performance but significantly reduces overall precision, whereas our continuous approach (K = 5) generates smoother long-tail cluster centroids to address this limitation. We also examine the effect of the K value on the ADE/FDE metrics. Under cross-validation with the univ subset as the test set, K = 5 yields the highest performance for $V^2$-Net-SCP-abn. Additionally, $R_{int}$ = 0.05 gives the best results on the SDD.
Discussion IV: Inference Time. We test inference time on the SDD, which has the most complex scenarios, using an NVIDIA GeForce RTX 4090 GPU. All hyperparameters are the same as those in Table 2. From Table 9, it can be seen that neither the average nor the fastest inference time increases, because our rare intention modeling does not introduce any extra layer.

4.4. Qualitative Analysis

Visualized predictions. Figure 4a illustrates backbone predictions with our abnormal social interaction plugin in three SDD scenes: little1, hyang6, and bookstore6. Although different backbone prediction networks generate varied predictions, all predictions with the abnormal interaction module preserve the quality and diversity of the original backbone predictions. Figure 4b shows a visual comparison of the same scenes with and without our abnormal social interaction plugin. In the first row of Figure 4b, the ego biker should take full advantage of the abnormal social interaction $i_1$ on the left side of the scene because it carries information about walkable roads. The backbone MSN-SCP predictions in this scene show a large gap from the ground truth because $i_1$ is averaged with other interactions in its SocialCircle space and therefore cannot receive more attention. Agents tend to mimic the motion style of other agents in the same scene. In row 2 of Figure 4b, neighbor tracks must be imitated to walk around trees. The red-circled prediction of $V^2$-Net-SCP-abn succeeds in imitating the tendency of neighbor agents and fits the ground truth better than $V^2$-Net-SCP. In general, places with few neighbor agents can be regarded as 'uninhabited' areas, and people, including our ego agent, do not usually go to an uninhabited place alone. In row 3 of Figure 4b, all neighbor agents are gathered on the road near the roof, while the rest of the scene appears 'uninhabited'. Trans-SCP with our plugins narrows the gap to the ground truth by paying more attention to the crowded area.
Stress Testing. We verify the capability of our abnormal interaction modeling by manually adding an abnormal neighbor and observing the model's response. Figure 5 visualizes and compares several toy examples built on real-world SDD scenes. In Figure 5a, despite a side-by-side walking pedestrian added close to the ego pedestrian, the $V^2$-Net-SCP predictions show no avoidance tendency toward the added neighbor, while our $V^2$-Net-SCP-abn predictions do. In Figure 5b,c, a manually added interaction with high velocity toward the ego pedestrian guides our predictions: $V^2$-Net-SCP-abn and Trans-SCP-abn prove highly effective at preventing possible collisions, whereas the backbone predictions do not. In Figure 5d, a high-speed neighbor is added to influence the ego agent's right-turn motion. Our MSN-SCP-abn predictions keep a larger spacing $d$ from the abnormal neighbor than the backbone predictions. These results confirm that our approach extracts abnormal interactions and adapts across networks.
Interpretability of Abnormal Interactions. Our abnormal interaction representation demonstrates interpretability through the semantically distinct clusters identified via GMM’s two-stage clustering (Figure 6): (1) static agents (purple cluster, normalized velocity < 0.2), (2) agents approaching ego (green cluster, relative angle ≈ π ), and (3) high-velocity neighbors (sky-blue cluster, normalized velocity > 0.5). This method effectively isolates abnormal interactions with clear behavioral semantics.
As illustrated in Figure 7 and the pie chart below, we select three scenarios involving abnormal social interactions and analyze the contributions of different semantic components to the model’s predictions. A common trend observed across all three scenarios is that the predicted trajectories of surrounding agents consistently avoid the abnormal interactions (highlighted in blue). However, the underlying semantic reasons for this avoidance vary by scenario. In Scenario 1, the “high-velocity” component is the dominant factor, accounting for 71% of the overall contribution. This suggests that the model relies primarily on the speed of the interacting agents to predict avoidance. In Scenario 2, where the target pedestrian’s orientation is nearly perpendicular (approx. π / 2 ) to the direction of the abnormal interaction, we see a shift in the contributing factors. Compared to a baseline of normal interactions, the contribution of the ’opposite direction’ component increases by 34%. Simultaneously, the ’high-velocity’ component still maintains a high absolute contribution of 45%. This combination implies that, in such an unusual configuration, the model integrates both the high speed and the opposing direction of the agents to forecast their behavior. Scenario 3 presents a different situation: Here, the speed of the abnormal interaction is notably lower than in the first two scenarios. Consequently, the model assigns greater importance to stationary agents, with the contribution of the “static agent” component rising by 34% relative to the baseline. This indicates that, when a dynamic interaction is slow, the presence of static elements in the scene becomes a more critical cue for prediction. Finally, it is worth noting that a substantial portion of the model’s decision-making remains unaccounted for by these interpretable components. 
In Scenario 3, over 57% of the contributory factors are classified as 'other abnormal components' with no clearly visible semantic meaning. This points to a limitation of the current semantic framework and highlights an area for future investigation.
Generation of Intention GMM Components. The 10% of intentions with the worst fit in the first stage are retained as the long tail and subjected to a second GMM fit to obtain the long-tail cluster centers. These long-tail intention clusters are combined with the first-stage clusters into the final intention GMM to better reflect the long-tail intention distribution. As shown in Figure 8, this clustering method (right) assigns large weight to the sparsely distributed long-tail intention endpoints (i.e., the GMM cluster centers in the right figure are more dispersed).
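The two-stage procedure can be sketched without any GMM library by using k-means as a dependency-free stand-in for the GMM fits (an assumption for illustration; the paper fits actual GMMs, and `tail_ratio`, `k1`, and `k2` are hypothetical parameter names):

```python
import numpy as np

def two_stage_centroids(endpoints, tail_ratio=0.10, k1=4, k2=2, iters=20, seed=0):
    """Stage 1: cluster all endpoints; the tail_ratio worst-fitting endpoints
    (largest distance to their center) form the long tail, which stage 2
    re-clusters. Final centers = stage-1 + stage-2 centroids."""
    rng = np.random.default_rng(seed)

    def kmeans(x, k):  # stand-in for a GMM fit
        c = x[rng.choice(len(x), size=k, replace=False)]
        for _ in range(iters):
            lbl = np.argmin(np.linalg.norm(x[:, None] - c[None], axis=-1), axis=1)
            c = np.array([x[lbl == j].mean(axis=0) if np.any(lbl == j) else c[j]
                          for j in range(k)])
        return c, lbl

    c1, lbl = kmeans(endpoints, k1)
    err = np.linalg.norm(endpoints - c1[lbl], axis=-1)       # fit-quality proxy
    tail = endpoints[err >= np.quantile(err, 1.0 - tail_ratio)]
    c2, _ = kmeans(tail, k2)
    return np.vstack([c1, c2])

rng = np.random.default_rng(1)
endpoints = rng.normal(size=(200, 2))
centers = two_stage_centroids(endpoints)  # 4 majority + 2 long-tail centers
```

Because the tail is re-clustered separately, sparse outlying endpoints receive their own centers instead of being absorbed into the majority clusters, which is the dispersion effect visible in Figure 8 (right).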

4.5. Robustness Evaluation and Comparative Results

Figure 9 shows box plots of the ADE/FDE results of 15 parallel experiments on the univ dataset. Our method achieves better Q1, Q2, and Q3 than the baseline, corresponding to the lower edge, the center line, and the upper edge of the box. Moreover, none of our results are marked with '*', indicating that there are no outliers. Figure 10 shows the FDE metrics on a test dataset over training epochs. Our plugin not only reduces testing errors but also makes training more stable. For the hotel subset, the best FDE is reduced by 0.01, from 0.14 to 0.13. For the univ subset, the FDE of the red-curve experiments does not increase with the number of training epochs, indicating better generalization.

5. Outlook

We will explore more frequency-based methods to couple abnormal interaction and rare intention modeling in our future study. This direction holds promise for developing a more holistic understanding of outlier events in multi-agent systems.

6. Conclusions

We trace the fundamental causes of hard cases in pedestrian trajectory prediction to (1) abnormal social interactions and (2) rare intentions in challenging scenarios. We propose a method to extract abnormal social interactions, and an improved PCL algorithm facilitates the learning of rare intentions under continuous pseudo-label settings. A frequency-sensitive goal-driven decoder fuses both factors. Compared with the SOTA, our method performs better on both the full dataset and the long-tail subset, advancing trajectory prediction.

Author Contributions

Conceptualization, C.Y. and X.D.; methodology, C.Y.; software, C.Y.; validation, C.Y., J.L., and X.D.; formal analysis, C.Y.; investigation, C.Y.; resources, X.D.; data curation, J.L.; writing—original draft preparation, C.Y.; writing—review and editing, C.Y. and X.D.; visualization, C.Y.; supervision, X.D.; project administration, X.D. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

The publicly available datasets analyzed for this study can be found in the following repositories. The ETH dataset [53] and UCY dataset [54] are available at http://www.vision.ee.ethz.ch/datasets/ (accessed on 6 March 2026) and https://graphics.cs.ucy.ac.cy/research/downloads/crowd-data (accessed on 6 March 2026). The Stanford Drone Dataset (SDD) [55] is available at https://cvgl.stanford.edu/projects/uav_data/ (accessed on 6 March 2026).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
GMM  Gaussian Mixture Model
SCP  SocialCircle-Plus
ACC  Alternate-Current Component
DCC  Direct-Current Component
PCL  Prototypical Contrastive Learning

References

  1. Sreenu, G.; Durai, S. Intelligent video surveillance: A review through deep learning techniques for crowd analysis. J. Big Data 2019, 6, 1–27. [Google Scholar] [CrossRef]
  2. Pokle, A.; Martín-Martín, R.; Goebel, P.; Chow, V.; Ewald, H.M.; Yang, J.; Wang, Z.; Sadeghian, A.; Sadigh, D.; Savarese, S.; et al. Deep local trajectory replanning and control for robot navigation. In Proceedings of the 2019 International Conference on Robotics and Automation (ICRA); IEEE: New York, NY, USA, 2019; pp. 5815–5822. [Google Scholar]
  3. Samir, M.; Assi, C.; Sharafeddine, S.; Ebrahimi, D.; Ghrayeb, A. Age of information aware trajectory planning of UAVs in intelligent transportation systems: A deep learning approach. IEEE Trans. Veh. Technol. 2020, 69, 12382–12395. [Google Scholar] [CrossRef]
  4. Alhariqi, A.; Gu, Z.; Saberi, M. Calibration of the intelligent driver model (IDM) with adaptive parameters for mixed autonomy traffic using experimental trajectory data. Transp. B Transp. Dyn. 2022, 10, 421–440. [Google Scholar]
  5. Abbas, M.T.; Jibran, M.A.; Afaq, M.; Song, W.C. An adaptive approach to vehicle trajectory prediction using multimodel Kalman filter. Trans. Emerg. Telecommun. Technol. 2020, 31, e3734. [Google Scholar] [CrossRef]
  6. Herrero, D.A.; Pedroche, D.S.; Herrero, J.G.; López, J.M.M. AIS trajectory classification based on IMM data. In Proceedings of the 2019 22th International Conference on Information Fusion (FUSION); IEEE: New York, NY, USA, 2019; pp. 1–8. [Google Scholar]
  7. Tomar, R.S.; Verma, S.; Tomar, G.S. SVM based trajectory predictions of lane changing vehicles. In Proceedings of the 2011 International Conference on Computational Intelligence and Communication Networks; IEEE: New York, NY, USA, 2011; pp. 716–721. [Google Scholar]
  8. Lee, D.; Ott, C.; Nakamura, Y. Mimetic communication model with compliant physical contact in human—Humanoid interaction. Int. J. Robot. Res. 2010, 29, 1684–1704. [Google Scholar] [CrossRef]
  9. Cui, H.; Qi, H.; Zhou, J. DBN-MACTraj: Dynamic Bayesian Networks for Predicting Combinations of Long-Term Trajectories with Likelihood for Multiple Agents. Mathematics 2024, 12, 3674. [Google Scholar] [CrossRef]
  10. Wong, C.; Xia, B.; Peng, Q.; Yuan, W.; You, X. MSN: Multi-style network for trajectory prediction. IEEE Trans. Intell. Transp. Syst. 2023, 24, 9751–9766. [Google Scholar] [CrossRef]
  11. Zhou, Z.; Ye, L.; Wang, J.; Wu, K.; Lu, K. Hivt: Hierarchical vector transformer for multi-agent motion prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2022; pp. 8823–8833. [Google Scholar]
  12. Wong, C.; Xia, B.; Hong, Z.; Peng, Q.; Yuan, W.; Cao, Q.; Yang, Y.; You, X. View vertically: A hierarchical network for trajectory prediction via fourier spectrums. In Proceedings of the European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2022; pp. 682–700. [Google Scholar]
  13. Girgis, R.; Golemo, F.; Codevilla, F.; Weiss, M.; D’Souza, J.A.; Kahou, S.E.; Heide, F.; Pal, C. Latent variable sequential set transformers for joint multi-agent motion prediction. arXiv 2021, arXiv:2104.00563. [Google Scholar]
  14. Zhang, S.; Zhao, G.; Lyu, F.; Wang, S.; Zhang, Z.; Zhao, F.; Li, J.; Shan, C.; Wang, L. MambaPTP: Exploring the Potential of Mamba for Pedestrian Trajectory Prediction. IEEE Trans. Circuits Syst. Video Technol. 2025, 36, 3795–3807. [Google Scholar] [CrossRef]
  15. Wang, Y.; Zhang, P.; Bai, L.; Xue, J. Fend: A future enhanced distribution-aware contrastive learning framework for long-tail trajectory prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2023; pp. 1400–1409. [Google Scholar]
  16. Zhang, J.; Pourkeshavarz, M.; Rasouli, A. Tract: A training dynamics aware contrastive learning framework for long-tail trajectory prediction. In Proceedings of the 2024 IEEE Intelligent Vehicles Symposium (IV); IEEE: New York, NY, USA, 2024; pp. 3282–3288. [Google Scholar]
  17. Wu, W.; Feng, X.; Gao, Z.; Kan, Y. Smart: Scalable multi-agent real-time motion generation via next-token prediction. Adv. Neural Inf. Process. Syst. 2024, 37, 114048–114071. [Google Scholar]
  18. Deo, N.; Wolff, E.; Beijbom, O. Multimodal Trajectory Prediction Conditioned on Lane-Graph Traversals. In Proceedings of the 5th Annual Conference on Robot Learning; PMLR: Cambridge, MA, USA, 2021. [Google Scholar]
  19. Lan, Z.; Ren, Y.; Yu, H.; Liu, L.; Li, Z.; Wang, Y.; Cui, Z. Hi-SCL: Fighting long-tailed challenges in trajectory prediction with hierarchical wave-semantic contrastive learning. Transp. Res. Part C Emerg. Technol. 2024, 165, 104735. [Google Scholar] [CrossRef]
  20. Romano, F.; Cimini, D.; Di Paola, F.; Gallucci, D.; Larosa, S.; Nilo, S.T.; Ricciardelli, E.; Iisager, B.D.; Hutchison, K. The evolution of meteorological satellite cloud-detection methodologies for atmospheric parameter retrievals. Remote Sens. 2024, 16, 2578. [Google Scholar] [CrossRef]
  21. Zhou, Y.; Wu, H.; Cheng, H.; Qi, K.; Hu, K.; Kang, C.; Zheng, J. Social graph convolutional LSTM for pedestrian trajectory prediction. IET Intell. Transp. Syst. 2021, 15, 396–405. [Google Scholar] [CrossRef]
  22. Zhang, S.; Wu, J.; Dong, J.; Liu, L. Social-Interaction GAN: Pedestrian Trajectory Prediction. In Proceedings of the International Conference on Wireless Algorithms, Systems, and Applications; Springer: Berlin/Heidelberg, Germany, 2021; pp. 429–440. [Google Scholar]
  23. Mohamed, A.; Qian, K.; Elhoseiny, M.; Claudel, C. Social-stgcnn: A social spatio-temporal graph convolutional neural network for human trajectory prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2020; pp. 14424–14432. [Google Scholar]
  24. He, Z.; Li, W.; Gan, X.; Chen, Z.; Wu, Y.; Zhang, Y. Decoupled Pedestrian Trajectory Prediction Network with Near-Aware Attention. Knowl.-Based Syst. 2025, 333, 114913. [Google Scholar] [CrossRef]
  25. Yuan, Y.; Weng, X.; Ou, Y.; Kitani, K.M. Agentformer: Agent-aware transformers for socio-temporal multi-agent forecasting. In Proceedings of the IEEE/CVF International Conference on Computer Vision; IEEE: New York, NY, USA, 2021; pp. 9813–9823. [Google Scholar]
  26. Sun, Y.; Xiao, D.; Huang, M.; Wang, J.; Tong, C.; Luo, J.; Pu, H. Transferable Multi-Level Spatial-Temporal Graph Neural Network for Adaptive Multi-Agent Trajectory Prediction. Knowl.-Based Syst. 2026, 338, 115451. [Google Scholar] [CrossRef]
  27. Yang, H.; Chen, Y.; Cai, J.; Yang, Y.; Zhou, L.; Tian, J.; Li, Y.; Xun, Y.; Zhao, X. Cross-domain pedestrian trajectory prediction via behavioral pattern-aware multi-instance GCN. Knowl.-Based Syst. 2025, 329, 114266. [Google Scholar] [CrossRef]
  28. Zhou, Z.; Wang, J.; Li, Y.H.; Huang, Y.K. Query-centric trajectory prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2023; pp. 17863–17873. [Google Scholar]
  29. Wang, R.; Lin, W.; Ren, G.; Cao, Q.; Zhang, Z.; Deng, Y. Interaction-aware vehicle trajectory prediction using spatial-temporal dynamic graph neural network. Knowl.-Based Syst. 2025, 327, 114187. [Google Scholar] [CrossRef]
  30. Rowe, L.; Ethier, M.; Dykhne, E.H.; Czarnecki, K. Fjmp: Factorized joint multi-agent motion prediction over learned directed acyclic interaction graphs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2023; pp. 13745–13755. [Google Scholar]
  31. Wong, C.; Xia, B.; Zou, Z.; Wang, Y.; You, X. Socialcircle: Learning the angle-based social interaction representation for pedestrian trajectory prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2024; pp. 19005–19015. [Google Scholar]
  32. Cui, Y.; Guo, D.; Han, Y. MELON: Hierarchical Multi-Agent Trajectory Prediction with Spatio-Temporal Uncertainty Adaptation. Knowl.-Based Syst. 2025, 334, 115143. [Google Scholar] [CrossRef]
  33. Lin, X.; Liang, T.; Lai, J.; Hu, J.F. Progressive pretext task learning for human trajectory prediction. In Proceedings of the European Conference on Computer Vision; Springer: Berlin/Heidelberg, Germany, 2024; pp. 197–214. [Google Scholar]
  34. Wei, C.; Wu, G.; Barth, M.J.; Abdelraouf, A.; Gupta, R.; Han, K. KI-GAN: Knowledge-Informed Generative Adversarial Networks for Enhanced Multi-Vehicle Trajectory Forecasting at Signalized Intersections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2024; pp. 7115–7124. [Google Scholar]
  35. Guo, L.; Ge, P.; Shi, Z. Multi-object trajectory prediction based on lane information and generative adversarial network. Sensors 2024, 24, 1280. [Google Scholar] [CrossRef]
  36. Gu, T.; Chen, G.; Li, J.; Lin, C.; Rao, Y.; Zhou, J.; Lu, J. Stochastic trajectory prediction via motion indeterminacy diffusion. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2022; pp. 17113–17122. [Google Scholar]
  37. Liu, Y.; Dong, X.; Lin, Y.; Ye, M. Diftraj: Diffusion inspired by intrinsic intention and extrinsic interaction for multi-modal trajectory prediction. In Proceedings of the Thirty-Third International Joint Conference on Artificial Intelligence; Curran Associates, Inc.: Red Hook, NY, USA, 2024. [Google Scholar]
  38. Mao, W.; Xu, C.; Zhu, Q.; Chen, S.; Wang, Y. Leapfrog diffusion model for stochastic trajectory prediction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2023; pp. 5517–5526. [Google Scholar]
  39. He, K.; Fan, H.; Wu, Y.; Xie, S.; Girshick, R. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2020; pp. 9729–9738. [Google Scholar]
  40. Li, J.; Zhou, P.; Xiong, C.; Hoi, S.C. Prototypical contrastive learning of unsupervised representations. arXiv 2020, arXiv:2005.04966. [Google Scholar]
  41. Yang, Z.; Pan, J.; Yang, Y.; Shi, X.; Zhou, H.Y.; Zhang, Z.; Bian, C. Proco: Prototype-aware contrastive learning for long-tailed medical image classification. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Berlin/Heidelberg, Germany, 2022; pp. 173–182. [Google Scholar]
  42. Lin, S.; Liu, C.; Zhou, P.; Hu, Z.Y.; Wang, S.; Zhao, R.; Zheng, Y.; Lin, L.; Xing, E.; Liang, X. Prototypical graph contrastive learning. IEEE Trans. Neural Netw. Learn. Syst. 2022, 35, 2747–2758. [Google Scholar] [CrossRef]
  43. Wang, P.; Han, K.; Wei, X.S.; Zhang, L.; Wang, L. Contrastive learning based hybrid networks for long-tailed image classification. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2021; pp. 943–952. [Google Scholar]
  44. Du, C.; Wang, Y.; Song, S.; Huang, G. Probabilistic contrastive learning for long-tailed visual recognition. IEEE Trans. Pattern Anal. Mach. Intell. 2024, 46, 5890–5904. [Google Scholar] [CrossRef]
  45. Yang, Y.; Zha, K.; Chen, Y.; Wang, H.; Katabi, D. Delving into deep imbalanced regression. In Proceedings of the International Conference on Machine Learning; PMLR: Cambridge, MA, USA, 2021; pp. 11842–11851. [Google Scholar]
  46. Ding, Z.; Xu, Y.; Xu, W.; Parmar, G.; Yang, Y.; Welling, M.; Tu, Z. Guided variational autoencoder for disentanglement learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition; IEEE: New York, NY, USA, 2020; pp. 7920–7929. [Google Scholar]
Figure 1. (a) Our social interaction space (DVA space), designed to reflect velocity, relative distance, and relative direction, from which we extract abnormal social interactions. (b) Long-tail intention example: most pedestrians head to flats (blue) or schools (orange), which form the majority of map candidates, while the red pedestrian heads to a low-probability destination, the hospital, an outlier among the map candidates.
Figure 2. (a) Abnormal social interaction extraction module. (b) Long-tail intention contrastive learning module. (c) Details of our novel frequency goal-driven decoder that fuses the outputs of (a,b).
Figure 3. The workflow of our two-stage abnormal interaction meta-component extraction.
Figure 4. Qualitative results. (a) Predictions from our abnormal-interaction plugin under different backbone prediction networks. (b) Comparison of predictions with and without abnormal interactions.
Figure 5. Stress testing, in which we significantly alter the circular trajectory modes by introducing manually added abnormal neighbors. In scenarios (a–d), the ego agent shows an avoidance tendency toward the added abnormal neighbor.
Figure 6. Abnormal interaction semantics from cross-dataset validation on the eth subset (orange: anomalies from hotel/univ/zara1/zara2). Purple cluster: static agents. Green cluster: opposite direction. Blue cluster: high velocity.
Figure 7. Contribution weights of each abnormal interaction semantic component.
Figure 8. Comparison between the baseline intention generator and our two-stage GMM approach for rare intention modeling (all blue points represent intentions from hotel/univ/zara1/zara2 and red points represent intention GMM components).
Figure 9. Box plot of ADE/FDE results of MSN-SCP and MSN-SCP-abn on the univ subset of ETH-UCY with abnormal interaction modeling.
Figure 10. FDE on the univ and hotel test sets over the course of training. We ran N = 5 parallel experiments per group; different colors denote the individual runs. '-abn' denotes our abnormal interaction plugin.
Table 1. Key symbol definitions.

| Section | Symbol | Explanation |
|---|---|---|
| Section 3.2 | $r_{dis}^{ij}$ | Relative distance of pedestrian j to i. |
| | $r_{vel}^{j}$ | Absolute velocity of pedestrian j. |
| | $r_{\theta}^{ij}$ | Relative angle of pedestrian j to i. |
| | $\rho_{dis\_vel}$ | Correlation coefficient between relative distance and absolute velocity. |
| | $\rho_{vel\_\theta}$ | Correlation coefficient between absolute velocity and relative angle. |
| | $\rho_{dis\_\theta}$ | Correlation coefficient between relative distance and relative angle. |
| | $\Theta_{abn}$ | GMM used to extract abnormal social interactions. |
| | $R_{meta}^{ij}$ | Abnormal social interaction meta-component (relative distance, absolute velocity, and relative angle) of pedestrian j to i. |
| | $A_i$ | Abnormal neighbor set for pedestrian i. |
| | $s_n^i$ | The n-th aggregated Gaussian component of the abnormal-neighbor features for pedestrian i. |
| | $\lambda_n$ | Weight coefficient of the n-th abnormal social interaction component; the weights sum to 1. |
| Section 3.3 | $\Theta_{int}$ | GMM used to extract rare intentions. |
| | $\Theta_{e1}$ | First-stage intention GMM, capturing global goal distributions. |
| | $\Theta_{e2}$ | Second-stage intention GMM, capturing long-tail intentions. |
| | $N_{int}$ | Number of intention components in $\Theta_{int}$. |
| | $e_i$ | Intention and endpoint to predict for pedestrian i. |
| | $P_{int}^{i}$ | Intention labels clustered by $\Theta_{int}$ for pedestrian i. |
| | $v_i$ | Intention embedding for pedestrian i. |
| | $v_j$ | An arbitrary sample in the same batch as i. |
| | $|N_i^k|$ | Number of neighbors belonging to the k-th nearest intention cluster of sample i. |
| | $c_i^k$ | Prototype of the cluster to which the k-th neighbor GMM component of pedestrian i belongs. |
| | $\alpha$ | Momentum coefficient in our PCL algorithm. |
| | $\tau$ | Temperature coefficient in our PCL algorithm. |
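To make the Section 3.2 symbols concrete, the sketch below computes the three interaction meta-components $r_{dis}^{ij}$, $r_{vel}^{j}$, and $r_{\theta}^{ij}$ for one pedestrian pair from raw 2D positions. This is a minimal NumPy illustration, not the authors' implementation; the function name `dva_features`, the finite-difference velocity estimate, and the 0.4 s sampling interval are all assumptions.

```python
import numpy as np

def dva_features(traj_i, traj_j, dt=0.4):
    """Compute (relative distance, absolute velocity, relative angle)
    for a pedestrian pair (i, j) at the last observed frame.

    traj_i, traj_j: arrays of shape (T, 2), positions in metres.
    dt: sampling interval in seconds (0.4 s is typical for ETH-UCY,
        but is an assumption here).
    """
    rel = traj_j[-1] - traj_i[-1]                # vector pointing from i to j
    r_dis = float(np.linalg.norm(rel))           # relative distance r_dis^{ij}
    v_j = (traj_j[-1] - traj_j[-2]) / dt         # finite-difference velocity of j
    r_vel = float(np.linalg.norm(v_j))           # absolute speed r_vel^{j}
    r_theta = float(np.arctan2(rel[1], rel[0]))  # relative direction r_theta^{ij}
    return r_dis, r_vel, r_theta

# Toy example: pedestrian j approaches a static pedestrian i from the east.
ti = np.array([[0.0, 0.0], [0.0, 0.0]])
tj = np.array([[3.0, 0.0], [2.0, 0.0]])
r_dis, r_vel, r_theta = dva_features(ti, tj)  # -> (2.0, 2.5, 0.0)
```

Stacking such triples over all pedestrian pairs yields the DVA point cloud on which $\Theta_{abn}$ is fitted.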
Table 2. Implementation details used in this work. K is short for key points.

| Dataset | N_train | N_test | Backbone | bsz | lr | Epochs | K |
|---|---|---|---|---|---|---|---|
| eth | 36,784 | 2614 | V²-Net-SCP | 1000 | 3 × 10⁻⁴ | 200 | 4, 8, 11 |
| | | | E-V²-Net-SCP | 1000 | 3 × 10⁻⁴ | 200 | 4, 8, 11 |
| | | | MSN-SCP | 1000 | 3 × 10⁻⁴ | 200 | 11 |
| hotel | 38,323 | 1075 | V²-Net-SCP | 1000 | 4 × 10⁻⁴ | 200 | 4, 8, 11 |
| | | | E-V²-Net-SCP | 1000 | 3 × 10⁻⁴ | 200 | 4, 8, 11 |
| | | | MSN-SCP | 1000 | 4 × 10⁻⁴ | 200 | 11 |
| univ | 15,064 | 24,334 | V²-Net-SCP | 1000 | 6 × 10⁻⁴ | 300 | 4, 8, 11 |
| | | | E-V²-Net-SCP | 1000 | 1 × 10⁻³ | 200 | 4, 8, 11 |
| | | | MSN-SCP | 1000 | 3 × 10⁻⁴ | 200 | 11 |
| zara1 | 37,042 | 2356 | V²-Net-SCP | 1000 | 3 × 10⁻⁴ | 200 | 4, 8, 11 |
| | | | E-V²-Net-SCP | 1000 | 4 × 10⁻⁴ | 200 | 4, 8, 11 |
| | | | MSN-SCP | 1000 | 3 × 10⁻⁴ | 200 | 11 |
| zara2 | 33,488 | 5910 | V²-Net-SCP | 1000 | 3 × 10⁻⁴ | 200 | 4, 8, 11 |
| | | | E-V²-Net-SCP | 1000 | 2 × 10⁻⁴ | 250 | 4, 8, 11 |
| | | | MSN-SCP | 1000 | 3 × 10⁻⁴ | 200 | 11 |
| SDD | 251,617 | 38,626 | V²-Net-SC | 1000 | 3 × 10⁻⁴ | 200 | 4, 8, 11 |
| | | | V²-Net-SCP | 1000 | 3 × 10⁻⁴ | 200 | 4, 8, 11 |
| | | | E-V²-Net-SC | 1000 | 2 × 10⁻⁴ | 200 | 4, 8, 11 |
| | | | E-V²-Net-SCP | 1000 | 2 × 10⁻⁴ | 200 | 4, 8, 11 |
| | | | MSN-SC | 1000 | 2 × 10⁻⁴ | 200 | 11 |
| | | | MSN-SCP | 1000 | 2 × 10⁻⁴ | 200 | 11 |
| | | | Trans-SC | 1000 | 4 × 10⁻⁴ | 250 | 4, 8, 11 |
| | | | Trans-SCP | 1000 | 4 × 10⁻⁴ | 250 | 4, 8, 11 |
Table 3. The averaged ADE/FDE results across 5 ETH-UCY subsets in cross-dataset validation. '-abn' indicates our abnormal interaction module; bold shows our improvement.

| Model (ETH-UCY) | Year | -abn | ADE₂₀ | FDE₂₀ |
|---|---|---|---|---|
| EigenTraj | 2023 | – | 0.21 | 0.34 |
| TUTR | 2023 | – | 0.21 | 0.36 |
| LED | 2023 | – | 0.21 | 0.33 |
| MSN | 2023 | – | 0.21 | 0.34 |
| EqMotion | 2023 | – | 0.21 | 0.35 |
| PPT | 2024 | – | 0.20 | 0.31 |
| MSN-SCP | 2025 | – | 0.215 | 0.374 |
| MSN-SCP | 2025 | ✓ | 0.208 | 0.355 |
| V²-Net-SCP | 2025 | – | 0.188 | 0.312 |
| V²-Net-SCP | 2025 | ✓ | 0.185 | 0.302 |
| E-V²-Net-SCP | 2025 | – | 0.189 | 0.309 |
| E-V²-Net-SCP | 2025 | ✓ | 0.185 | 0.305 |
Table 4. ADE/FDE results on the SDD with abnormal social interaction modeling '-abn'. Bold means improvement.

| Models (SDD) | ADE/FDE | Models (SDD) | ADE/FDE |
|---|---|---|---|
| LED | 8.48/11.36 | LB-EBM | 8.87/15.61 |
| FlowChain | 9.93/17.17 | AgentFormer | 10.18/16.91 |
| UPDD | 6.59/13.50 | IMP | 8.98/15.54 |
| RAN | 10.97/19.95 | EigenTraj | 7.42/12.49 |
| LG-Traj | 7.80/12.79 | PPT | 7.03/10.65 |
| V²-Net | 7.12/11.39 | E-V²-Net | 6.57/10.49 |
| V²-Net-SC | 6.71/10.66 | E-V²-Net-SC | 6.54/10.36 |
| V²-Net-SC-abn | 6.59/10.60 | E-V²-Net-SC-abn | 6.48/10.32 |
| V²-Net-SCP | 6.59/10.39 | E-V²-Net-SCP | 6.44/10.22 |
| V²-Net-SCP-abn | 6.38/10.04 | E-V²-Net-SCP-abn | 6.38/10.08 |
| MSN | 7.69/12.16 | Transformer | 17.44/33.36 |
| MSN-SC | 7.49/12.12 | Trans-SC | 16.47/32.08 |
| MSN-SC-abn | 7.32/11.78 | Trans-SC-abn | 15.57/30.93 |
| MSN-SCP | 7.32/11.76 | Trans-SCP | 16.11/31.43 |
| MSN-SCP-abn | 7.25/11.51 | Trans-SCP-abn | 15.70/31.16 |
Table 5. Cross-dataset validation on ETH-UCY for abnormal interaction modeling '-abn' and rare intention modeling '-r'. 'Dataset': validation subset; baseline: V²-Net-SCP-abn. Bold means the best ADE/FDE metric of each dataset.

| Dataset | abn | r | Top 1% | Top 5% | Top 10% | All |
|---|---|---|---|---|---|---|
| eth | – | – | 1.186/2.379 | 0.796/1.540 | 0.650/1.240 | 0.261/0.412 |
| | ✓ | – | 1.189/2.318 | 0.774/1.496 | 0.638/1.201 | 0.255/0.400 |
| | ✓ | ✓ | 1.070/2.131 | 0.713/1.407 | 0.591/1.150 | 0.258/0.421 |
| hotel | – | – | 0.643/1.128 | 0.411/0.712 | 0.331/0.563 | 0.109/0.158 |
| | ✓ | – | 0.644/1.217 | 0.391/0.724 | 0.321/0.565 | 0.108/0.152 |
| | ✓ | ✓ | 0.592/1.085 | 0.369/0.646 | 0.305/0.519 | 0.108/0.155 |
| univ | – | – | 1.617/3.376 | 0.986/2.053 | 0.767/1.562 | 0.251/0.446 |
| | ✓ | – | 1.432/2.938 | 0.897/1.828 | 0.708/1.429 | 0.250/0.444 |
| | ✓ | ✓ | 1.419/2.874 | 0.825/1.610 | 0.657/1.258 | 0.251/0.446 |
| zara1 | – | – | 1.234/2.589 | 0.660/1.355 | 0.499/0.992 | 0.180/0.308 |
| | ✓ | – | 1.075/2.160 | 0.596/1.132 | 0.462/0.853 | 0.174/0.282 |
| | ✓ | ✓ | 1.054/2.070 | 0.584/1.106 | 0.453/0.843 | 0.174/0.289 |
| zara2 | – | – | 1.267/2.684 | 0.723/1.478 | 0.548/1.075 | 0.137/0.232 |
| | ✓ | – | 1.192/2.506 | 0.701/1.441 | 0.535/1.044 | 0.137/0.233 |
| | ✓ | ✓ | 1.170/2.387 | 0.676/1.345 | 0.515/0.972 | 0.136/0.233 |
| all | – | – | 1.189/2.431 | 0.715/1.428 | 0.559/1.086 | 0.188/0.311 |
| | ✓ | – | 1.106/2.228 | 0.672/1.324 | 0.533/1.018 | 0.185/0.302 |
| | ✓ | ✓ | 1.061/2.109 | 0.633/1.223 | 0.504/0.948 | 0.185/0.309 |
Table 6. ADE/FDE (m) for top 1–10% of long-tail samples in the SDD. '-abn': abnormal interaction; '-r': rare intention. Bold: the best test set performance for each comparison group. Underline: the best long-tail performance for each comparison group.

| Models (SDD) | abn | r | Top 1% ↓ | Top 5% ↓ | Majority (95%) ↓ | Top 10% ↓ | Majority (90%) ↓ | All ↓ |
|---|---|---|---|---|---|---|---|---|
| Y-Net | – | – | 65.82/134.01 | 34.72/67.46 | 6.54/8.96 | – | – | 7.93/11.88 |
| Y-Net + FEND | – | – | 57.58/108.61 | 31.27/57.98 | 6.64/9.24 | – | – | 7.87/11.68 |
| MSN-SCP | – | – | 80.43/124.90 | 41.19/70.43 | 5.55/8.68 | 29.95/51.75 | 4.82/7.33 | 7.33/11.77 |
| | ✓ | – | 80.80/120.16 | 40.40/67.57 | 5.52/8.56 | 29.30/49.62 | 4.81/7.28 | 7.26/11.51 |
| | ✓ | ✓ | 78.01/105.87 | 38.14/57.97 | 5.31/8.04 | 27.65/43.28 | 4.65/6.90 | 6.95/10.54 |
| V²-Net-SCP | – | – | 67.00/121.30 | 35.76/63.99 | 5.05/7.55 | 26.27/45.90 | 4.40/6.42 | 6.59/10.37 |
| | ✓ | – | 63.60/115.40 | 33.79/60.40 | 4.94/7.39 | 25.03/43.74 | 4.31/6.30 | 6.38/10.04 |
| | ✓ | ✓ | 62.69/110.50 | 32.96/58.05 | 4.84/7.16 | 24.35/42.09 | 4.24/6.10 | 6.25/9.70 |
| E-V²-Net-SCP | – | – | 69.61/127.92 | 36.10/65.62 | 4.88/7.30 | 26.25/46.50 | 4.24/6.19 | 6.44/10.22 |
| | ✓ | – | 63.34/112.78 | 33.93/60.33 | 4.93/7.43 | 25.13/43.88 | 4.30/6.32 | 6.38/10.08 |
| | ✓ | ✓ | 60.47/108.30 | 32.13/57.22 | 4.82/7.21 | 23.89/41.74 | 4.22/6.15 | 6.19/9.71 |
| Trans-SCP | – | – | 168.55/345.48 | 95.17/197.17 | 11.94/22.71 | 70.12/144.96 | 10.10/18.82 | 16.10/31.43 |
| | ✓ | – | 167.92/340.14 | 93.45/194.35 | 11.62/22.59 | 68.86/143.12 | 9.80/18.74 | 15.71/31.18 |
| | ✓ | ✓ | 69.63/115.91 | 35.83/60.32 | 5.36/7.80 | 26.33/44.01 | 4.72/6.70 | 6.88/10.43 |
Table 7. Ablation study of the abnormal interaction extraction threshold on the SDD. ‖N_abn^test‖ is the number of abnormal interactions extracted during inference. Bold means the best ADE/FDE metric.

| Model | ε_abn | ‖N_abn^train‖ | ε_abn2 | ‖N_abn^test‖ | ADE/FDE |
|---|---|---|---|---|---|
| V²-Net-SCP-abn | −4 | 16.6k | −4 | 2.7k | 6.45/10.22 |
| | −2 | 53.9k | 0 | 5.2k | 6.43/10.14 |
| | −2 | 53.9k | −2 | 7.1k | 6.38/10.04 |
| | 0 | 309.0k | 0 | 87.8k | 6.43/10.11 |
| E-V²-Net-SCP-abn | −4 | 16.6k | −4 | 2.8k | 6.49/10.26 |
| | −2 | 57.2k | 0 | 5.1k | 6.38/10.08 |
| | −2 | 57.2k | −2 | 7.8k | 6.41/10.14 |
| | 0 | 293.4k | 0 | 35.2k | 6.44/10.22 |
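The thresholds in Table 7 act on per-sample log-likelihoods under the interaction GMM Θ_abn: a (distance, velocity, angle) triple whose log-density falls below ε is kept as abnormal. The sketch below shows that filtering step with a hand-specified diagonal-covariance mixture; the component parameters and function names are illustrative assumptions, not fitted values from the paper.

```python
import numpy as np

def gmm_log_density(x, weights, means, variances):
    """Log p(x) under a diagonal-covariance Gaussian mixture.
    x: (N, D); weights: (K,); means: (K, D); variances: (K, D)."""
    diff = x[:, None, :] - means[None, :, :]            # (N, K, D)
    log_comp = (
        -0.5 * np.sum(diff ** 2 / variances, axis=-1)
        - 0.5 * np.sum(np.log(2 * np.pi * variances), axis=-1)
    )                                                   # (N, K) per-component log-density
    m = log_comp.max(axis=1)                            # log-sum-exp for stability
    return m + np.log(np.exp(log_comp - m[:, None]) @ weights)

def extract_abnormal(x, eps, **gmm):
    """Keep samples whose mixture log-likelihood is below the threshold eps."""
    ll = gmm_log_density(x, **gmm)
    return x[ll < eps], ll

# Illustrative mixture over (r_dis, r_vel, r_theta) triples.
params = dict(
    weights=np.array([0.9, 0.1]),
    means=np.array([[1.0, 1.2, 0.0], [4.0, 3.0, np.pi / 2]]),
    variances=np.array([[0.5, 0.5, 0.5], [1.0, 1.0, 1.0]]),
)
x = np.array([[1.0, 1.2, 0.0],    # typical interaction: high likelihood, discarded
              [8.0, 6.0, -2.5]])  # rare interaction: low likelihood, kept
kept, ll = extract_abnormal(x, eps=-4.0, **params)
```

Raising ε (e.g. from −4 to −2 in Table 7) admits more triples into the abnormal set, trading recall of rare interactions against noise.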
Table 8. Ablation study of rare intention modeling '-r' on univ. Our baseline model is V²-Net-SCP-abn. N_int is the number of intention GMM components and θ is the activation threshold for L_ProtoNCE. Bold means the best ADE/FDE metric.

| Plugin | N_int | θ | Top 1% | Top 5% | Top 10% | All |
|---|---|---|---|---|---|---|
| – | – | – | 1.62/3.38 | 0.99/2.05 | 0.77/1.56 | 0.25/0.44 |
| -abn | – | – | 1.43/2.94 | 0.90/1.83 | 0.71/1.43 | 0.25/0.44 |
| -abn-r | 512 | 0.4 | 1.57/3.26 | 0.92/1.87 | 0.71/1.41 | 0.25/0.43 |
| -abn-r | 512 | 0.6 | 1.42/2.87 | 0.83/1.61 | 0.66/1.26 | 0.25/0.45 |
| -abn-r | 512 | 0.8 | 1.51/3.05 | 0.88/1.75 | 0.69/1.33 | 0.25/0.45 |
| -abn-r | 256 | 0.6 | 1.52/3.14 | 0.89/1.76 | 0.69/1.34 | 0.25/0.44 |
| -abn-r | 1024 | 0.6 | 1.54/3.17 | 0.90/1.78 | 0.69/1.34 | 0.25/0.43 |
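The L_ProtoNCE term gated by θ in Table 8 follows the usual prototypical contrastive shape: each intention embedding v_i is pulled toward its cluster prototype with temperature τ, and prototypes are refreshed by a momentum update with coefficient α (Table 1). The NumPy sketch below is a generic rendering of that pattern under assumed normalization and update rules, not the paper's exact formulation.

```python
import numpy as np

def proto_nce_loss(v, prototypes, labels, tau=0.1):
    """ProtoNCE-style loss: cross-entropy of each embedding's softmax
    similarity to all prototypes, at its assigned cluster label.
    v: (N, D) L2-normalized embeddings; prototypes: (C, D) L2-normalized;
    labels: (N,) cluster index per sample; tau: temperature."""
    logits = v @ prototypes.T / tau                     # (N, C) scaled similarities
    logits -= logits.max(axis=1, keepdims=True)         # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(len(v)), labels].mean()

def momentum_update(prototype, v_batch, alpha=0.9):
    """EMA refresh of one prototype from its assigned embeddings,
    re-normalized to unit length (an assumption of this sketch)."""
    new = alpha * prototype + (1 - alpha) * v_batch.mean(axis=0)
    return new / np.linalg.norm(new)

# Toy example: two intention clusters in 2D.
protos = np.array([[1.0, 0.0], [0.0, 1.0]])
v = np.array([[0.9, 0.1], [0.1, 0.9]])
v /= np.linalg.norm(v, axis=1, keepdims=True)
loss = proto_nce_loss(v, protos, labels=np.array([0, 1]), tau=0.1)
protos0 = momentum_update(protos[0], v[:1])
```

With correct assignments the loss is near zero, while swapped labels inflate it sharply; a small τ sharpens this contrast, which is why θ controls when the term is safe to activate.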
Table 9. Ablation study of K-nearest GMM components for rare intention modeling on the SDD and univ datasets. Baseline: V²-Net-SCP-abn. AI: average inference time (ms). FI: fast inference time (ms). Bold means the best ADE/FDE metric of each dataset.

| Dataset | K | R_int | Top 1% | Top 5% | Top 10% | All | AI | FI |
|---|---|---|---|---|---|---|---|---|
| SDD | – | – | 63.60/115.40 | 33.79/60.40 | 25.03/43.74 | 6.38/10.04 | 43.7 | 40.7 |
| | 1 | 0.1 | 56.47/94.44 | 31.30/54.70 | 23.79/41.81 | 6.77/11.36 | 42.8 | 40.8 |
| | 5 | 0.05 | 61.95/111.65 | 32.93/58.85 | 24.41/42.62 | 6.29/9.88 | 42.6 | 40.9 |
| | 5 | 0.1 | 62.69/110.50 | 32.96/58.05 | 24.35/42.09 | 6.25/9.70 | 43.1 | 40.6 |
| | 5 | 0.2 | 62.73/112.54 | 33.16/59.26 | 24.54/42.91 | 6.30/9.88 | 43.2 | 40.7 |
| univ | 1 | 0.1 | 1.43/2.92 | 0.83/1.61 | 0.65/1.26 | 0.25/0.45 | – | – |
| | 2 | 0.1 | 1.46/2.93 | 0.85/1.64 | 0.67/1.27 | 0.25/0.44 | – | – |
| | 5 | 0.1 | 1.42/2.87 | 0.83/1.61 | 0.66/1.26 | 0.25/0.45 | – | – |
| | 6 | 0.1 | 1.50/3.02 | 0.87/1.69 | 0.68/1.30 | 0.25/0.44 | – | – |
| | 7 | 0.1 | 1.54/3.19 | 0.92/1.83 | 0.71/1.38 | 0.25/0.44 | – | – |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

Yang, C.; Liu, J.; Dong, X. Disentangling Interaction and Intention for Long-Tail Pedestrian Trajectory Prediction. Computers 2026, 15, 186. https://doi.org/10.3390/computers15030186